Hello STHers,
I have a piecemeal 8 TB NAS humming away in a quiet corner of my place. Overall, I'm happy with the configuration (open to suggestions though). I'm starting to run a bit low on disk space and thinking of upgrading to some larger drives (8 TB, 10 TB, etc.), driven more by cost per GB than anything else, as my needs aren't very heavy.
I'll post the deets below, but I currently have 4 x 4 TB HDDs in software RAID 10 on my onboard SATA controller; my motherboard supports PCIe 2.0 and has a free x16 slot. Looking through the LSI-OEM mapping post (awesome btw, thank you!) I decided to look for a SAS2208-based card, and I found this one on eBay:
IBM M5110 8-Port 6Gbps PCI-e SAS/SATA JBOD IT MODE Cards 2* SFF SATA (eBay)
I'm planning to keep everything the same for the moment; I'm not sure hardware RAID 10 is worth the trouble if something goes wrong.
Anyways, just looking to see if I'm on the right path with this card or if I'm making a big ole mistake.
Thanks!
The Deets:
Motherboard: Intel DH55TC
- PCIe 2.0 16x slot
CPU: Intel Core i5 650 @ 3.2 GHz
- PCI Express Configurations: 1x16, 2x8
RAM: 16 GB DDR3-1333 RAM (PC3-10600)
Disks:
sda ST4000DM004-2CV104 4TB
sdb ST4000DM004-2CV104 4TB
sdc ST4000DM000-1F2168 4TB
sdd ST4000DM000-1F2168 4TB
Boot Disk: KINGSTON SUV400S37120G
Software RAID 10:
md0 : active raid0 md2[1] md1[0]
7813510144 blocks super 1.2 512k chunks
md2 : active raid1 sdd1[1] sdc1[0]
3906886464 blocks super 1.2 [2/2] [UU]
bitmap: 0/30 pages [0KB], 65536KB chunk
md1 : active raid1 sda1[0] sdb1[1]
3906886464 blocks super 1.2 [2/2] [UU]
bitmap: 0/30 pages [0KB], 65536KB chunk
File system: /dev/md0 on /cosmos type ext4 (rw,relatime,stripe=256)
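For anyone curious, I believe the nested layout above (RAID 0 over two RAID 1 pairs) is equivalent to what mdadm can build as a single four-disk RAID 10. A rough sketch only, with device names from my box and the chunk size assumed to match; /dev/sdX1 is a placeholder for a replacement drive's partition:

```shell
# Sketch: single-array equivalent of my nested RAID1+0 (not tested here).
mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 \
      --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Growing onto bigger disks would go one mirror member at a time:
mdadm /dev/md1 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --add /dev/sdX1   # sdX1 = partition on the new larger drive
# once both members of each mirror are replaced:
mdadm --grow /dev/md1 --size=max
```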
OS: Arch Linux
Sharing: NFSv4
Network: 1x Gigabit Copper (there are also two other gigabit NICs on an x1 card; I might bond them some day).
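If I ever do bond those spare NICs, a minimal systemd-networkd sketch might look like the below. The interface names (enp2s0/enp3s0) are guesses for a dual-port x1 card, and 802.3ad mode assumes the switch supports LACP:

```shell
# Hypothetical bond config for Arch with systemd-networkd.
cat > /etc/systemd/network/bond0.netdev <<'EOF'
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=802.3ad    # requires LACP support on the switch
EOF

# Enslave the two spare NICs (interface names are placeholders).
cat > /etc/systemd/network/bond0-slaves.network <<'EOF'
[Match]
Name=enp2s0 enp3s0

[Network]
Bond=bond0
EOF
```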