As far as open-source SDS solutions go, I would go with Ceph if you have IOPS performance needs, or Gluster for media, file sharing, SMB, etc. That said, you will need a minimum of 2x 10 Gbps network ports per server, and a switch with enough ports to match. To get performance out of these platforms you need a fast back-end network, decent/nearly-current hardware, and as many OSD (data) nodes, mons, etc. as your performance targets dictate.
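To put rough numbers on the network requirement (my own back-of-the-envelope math, with assumed figures: ~0.94 usable efficiency after TCP/IP overhead, 3x replication, ~180 MB/s sequential per 10k SAS disk), a single 1 Gbps link can't even keep up with one spinning disk once replication traffic is counted:

```python
# Back-of-the-envelope: why 10 GbE is the practical floor for Ceph/Gluster.
# Assumed numbers, not benchmarks: ~0.94 efficiency after protocol overhead,
# 3x replication, ~180 MB/s sequential throughput per 10k SAS disk.

def usable_mb_per_s(link_gbps, efficiency=0.94):
    """Approximate usable payload bandwidth of a network link in MB/s."""
    return link_gbps * 1000 / 8 * efficiency

def disks_per_link(link_gbps, disk_mb_per_s=180, replicas=3):
    """How many disks' worth of replicated write traffic one link can carry."""
    # Each client write is re-sent (replicas - 1) more times over the
    # back-end network to the replica OSDs.
    return usable_mb_per_s(link_gbps) / (disk_mb_per_s * (replicas - 1))

for gbps in (1, 10, 40):
    print(f"{gbps:>2} Gbps ~ {usable_mb_per_s(gbps):6.0f} MB/s, "
          f"carries ~{disks_per_link(gbps):.1f} disks of replica-3 writes")
```

At 1 Gbps (~118 MB/s usable) you can carry about a third of one disk's replicated write stream; even 10 GbE only carries a few disks' worth, which is why multiple 10 Gbps ports per server is the sane minimum.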
If you are planning to reuse 3-4 old servers from the Westmere or older era on a 1 Gbps network, you will be very disappointed (_very_), especially if they are already running something else (and will need to continue in that role). The storage nodes should be 100% dedicated to storage and nothing else. They should be reasonably fast with many cores (minimum 1 OSD process per disk; plan about 1 core per HDD OSD and 4 per SSD/NVMe OSD), plus one or more fast flash devices for the journal/WAL, with only as many OSD processes/disks assigned to each flash device as it can actually service.
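The rules of thumb above can be turned into a quick sizing calculator. The specific values (1 core per HDD OSD, 4 per SSD/NVMe OSD, ~5 GB RAM per OSD, a ~2000 MB/s WAL device) are my assumptions based on common community guidance, so sanity-check them against your own workload:

```python
# Rough OSD node sizing from the rules of thumb above. All constants are
# assumptions: 1 core per HDD OSD, 4 cores per SSD/NVMe OSD, ~5 GB RAM
# per OSD, and cap the HDD OSDs sharing one WAL/journal flash device by
# that device's write bandwidth so it doesn't become the bottleneck.

def size_osd_node(hdd_osds=0, nvme_osds=0,
                  wal_dev_mb_s=2000, hdd_mb_s=180):
    cores = hdd_osds * 1 + nvme_osds * 4
    ram_gb = (hdd_osds + nvme_osds) * 5
    # Max HDD OSDs one WAL device can service at full sequential write rate.
    max_osds_per_wal = wal_dev_mb_s // hdd_mb_s
    return {"cores": cores, "ram_gb": ram_gb,
            "max_hdd_osds_per_wal_device": max_osds_per_wal}

print(size_osd_node(hdd_osds=8))
```

For a node with 8 HDD OSDs this budgets 8 cores and ~40 GB RAM for the OSD daemons alone, before the OS and anything else, which is why "dedicated to the role" matters.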
If you use consumer SSDs, or old servers with too many disks for the available CPU/RAM to handle, especially on a 1 Gbps network, your cluster will most likely be slower than a single disk for however long it actually runs before crashing. The thing to remember about Ceph and Gluster is that yes, they were designed to run on commodity servers, but at a scale most people never see: multiple racks of servers per pool. To get performance at lower node counts you need very fast, expensive hardware. If you take old servers that are already running something else and try to layer Ceph on top, it will be a disaster.
I just went through this learning experience, and thanks to feedback and suggestions from experienced Ceph users, I saved a fair sum of money I was planning to spend on used Westmere-era servers with lots of disks and slower processors. From those discussions and my research, I decided to go with 1U servers (2x E5-26xx v2, 10C/20T) with a maximum of 8 storage devices each (10k SAS 1.2 TB), using smaller NVMe add-in cards for the write-ahead log etc., and a minimum of 64 GB RAM per storage node. Six OSD nodes in total, with the mons, admin node, etc. all running on their own lower-spec 1U servers of the same class. This is just a test environment to verify that a Ceph implementation will work for our intended purpose; if we go this route, the initial prod deployment will most likely be 2-3 racks of EPYC servers using a large number of NVMe U.2 and SAS SSDs in a tiered config fronting a few hundred 10k drives.
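For a sense of what that test layout actually yields, here is the capacity math. The replica count of 3 and the 0.85 "nearfull" headroom are my assumptions (3x is the usual default for replicated pools; the post doesn't state either):

```python
# Usable capacity of the test cluster described above: 6 OSD nodes,
# 8 x 1.2 TB 10k SAS each. Replica count of 3 and the 0.85 nearfull
# headroom are assumptions, not figures from the post.

nodes, disks_per_node, disk_tb = 6, 8, 1.2
replicas = 3
nearfull = 0.85  # stay under this to leave room for recovery/rebalancing

raw_tb = nodes * disks_per_node * disk_tb
usable_tb = raw_tb / replicas
practical_tb = usable_tb * nearfull

print(f"raw: {raw_tb:.1f} TB, usable at {replicas}x: {usable_tb:.1f} TB, "
      f"practical: {practical_tb:.1f} TB")
# raw: 57.6 TB, usable at 3x: 19.2 TB, practical: 16.3 TB
```

Roughly 57.6 TB of spindles ends up as ~16 TB you can comfortably fill, which is worth knowing before you price the hardware.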
My network is 40 Gbps top-of-rack, so I was already good there; using a 1 Gbps network is a non-starter for an SDS solution, and you would just be wasting your time beyond learning a few new things. Also keep in mind that getting a few servers running in a Ceph cluster is just a start; you will need to master the CRUSH map and the other platform components to really optimize it and get the performance out of it. If you do manage to get something running on servers that are already performing another task, using unallocated disk space, I doubt the cluster would be able to heal/recover, or it would take a few lifetimes of waiting.
One more thing to keep in mind when sizing: you need to target your specs at your performance minimums during a node failure/recovery, not just a clean, healthy state. With a minimum number of nodes (3-4) you will lose a large percentage of your capacity when one fails, and recovery and re-balancing are very resource intensive.
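A quick illustration of why low node counts hurt here (the 70% starting utilization is an assumed example figure): losing one node out of N removes 1/N of the raw capacity, and the survivors must also absorb all the re-replicated data.

```python
# Why sizing must target the degraded state: with N nodes, losing one
# removes 1/N of raw capacity AND the survivors must absorb the
# re-replicated data. The 70% starting utilization is illustrative.

def failure_impact(nodes, utilization=0.70):
    lost_fraction = 1 / nodes
    survivors_capacity = 1 - lost_fraction
    # Same data, spread over the remaining capacity after recovery.
    new_utilization = utilization / survivors_capacity
    return lost_fraction, new_utilization

for n in (4, 12):
    lost, util = failure_impact(n)
    print(f"{n:>2} nodes: lose {lost:.0%} of capacity; a 70% full cluster "
          f"is {util:.0%} full after recovery")
```

At 4 nodes, a single failure pushes a 70%-full cluster past 90% utilization, which is into the territory where Ceph stops accepting writes; at 12 nodes the same failure is easily absorbed. That is the math behind "target the degraded state."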
I am back in the research stage for my test environment, double-checking everything before making the spend, so take what I say with a grain of salt. Realistically, I don't think this will work for you or deliver what you want out of it.