I have 3 disks. I add them to the storage pool. I go to use the wizard to create a new virtual disk. Default settings across the board, select parity array.
So I fire up PowerShell.
New-VirtualDisk -StoragePoolFriendlyName big -Size 37TB -ProvisioningType Fixed -ResiliencySettingName...
I want to create a parity array with 3 disks. I have done so using RAID5 in Disk Management, and it works well. But I know there are some things I can do to get more speed. The problem is, Google seems to be hiding them from me; Google results lately are not great. I've been reading and...
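For reference, here's a sketch of the tuned version of that command, based on the usual advice for parity spaces (the pool name, size, and vdisk name are placeholders from my setup, adjust to taste): set NumberOfColumns to the disk count and pick an Interleave so that data columns × interleave equals the NTFS allocation unit size, so writes land as full stripes instead of read-modify-write.

# Sketch, not gospel: 3-disk parity = 2 data columns + 1 parity column.
# 2 x 32KB interleave = 64KB, so format NTFS with a 64KB allocation unit.
New-VirtualDisk -StoragePoolFriendlyName big -FriendlyName parity3 `
    -ResiliencySettingName Parity -ProvisioningType Fixed -Size 37TB `
    -NumberOfColumns 3 -Interleave 32KB
Get-VirtualDisk -FriendlyName parity3 | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 64KB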
I know a lot of people shit on Storage Spaces parity arrays, but I have some experience.
I have 14x 4TB SSDs in 2x 7-disk parity arrays. One is hardware RAID5 on an H810 RAID card, the other is Windows Storage Spaces. I get ~400MB/sec read/write, which is sufficient for me. The arrays are 8 years old...
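If anyone wants to compare numbers, here's roughly how I'd measure it with Microsoft's diskspd (the test file path, size, and durations below are placeholders; grab diskspd from Microsoft's GitHub first):

# Sequential 1MB reads, then writes, 30 seconds each, caching disabled.
.\diskspd.exe -c64G -b1M -d30 -o8 -t2 -Sh -w0 D:\test.dat
.\diskspd.exe -c64G -b1M -d30 -o8 -t2 -Sh -w100 D:\test.dat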
I'm having trouble finding a clear answer on this one. I can see 2 HDD bays at the bottom, and maybe one in the 5.25" bay, but all the videos I watch have special brackets/cables that I can't find part numbers for.
I would like 3x 20TB HDDs so I can use a Windows Storage Spaces parity array. My...
From what I've been reading, bifurcation is supported. It doesn't specifically say so, but the spec sheet says you can add 4 M.2 drives with that specific Dell Ultra-Speed Drive Quad adapter, which is how I know the Dell one works. I just wasn't sure if the non-Dell ones would function the same.
I...
I'm looking to pick up 4x 2TB NVMe drives to do a Windows Storage Spaces parity array for VHD storage and a code repo.
My question is what PCIe card would be best for it. Will I see a noticeable difference between the different levels of cards?
The official Dell one is $170...
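Card aside, the Storage Spaces half of this should look something like the sketch below once Windows sees all 4 drives (the pool and vdisk names here are placeholders):

# Pool everything poolable that shows up as NVMe, then a 4-column parity space.
$disks = Get-PhysicalDisk -CanPool $true | Where-Object BusType -eq NVMe
New-StoragePool -FriendlyName nvmepool -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName nvmepool -FriendlyName vhdstore `
    -ResiliencySettingName Parity -ProvisioningType Fixed -UseMaximumSize `
    -NumberOfColumns 4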
I have IBM SFP+ modules that work in all my network adapters. I picked up some Dell X710 daughter cards, took my modules out of my old NICs, and put them in the Dells. No lights. I know the modules work.
These are the Dells: Dell Intel X710 Quad Port 10Gb DA/SFP+ Ethernet, Network Daughter...
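For anyone hitting the same thing: Intel-based cards are known to be picky about non-Intel/non-OEM optics, so before blaming the hardware I'd at least confirm Windows enumerates the ports at all. A quick sketch (adapter names will vary):

# Do the X710 ports even show up, and what's their link state?
Get-NetAdapter | Format-Table Name, InterfaceDescription, Status, LinkSpeed
Get-NetAdapterHardwareInfo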
It's going to be a VM host and take over the load of a retired 4-node Dell C6100 filled with L5520 CPUs. There's no need for any storage on the machine; everything is on the storage server. So all it needs is CPUs, NICs, and RAM.
It's not a standard-size ATX board, and its fans are temperature-sensing and proprietary. Anything is possible with money, but I would recommend against a case transplant, as just asking the question indicates that you don't have much experience with this. I would Google images of the internals of...
It seems that the R630 comes with either 450W or 750W PSUs. I'm curious if I can get away with the 450W units if I want to run a pair of E5-2680 v4s, which is a 120W CPU. There will be a pair of SSDs for the OS and a single dual-port 10Gb SFP+ card. I searched hard and couldn't find much info.
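Rough back-of-envelope with ballpark numbers (none of these are from the spec sheet, just typical figures, and the DIMM count is a guess): 2x 120W CPUs = 240W, 8 DIMMs at ~4W each = ~32W, 2 SATA SSDs = ~10W, dual-port SFP+ card = ~15W, plus maybe ~50W for fans/board/iDRAC. That lands around 350W peak, which fits under 450W but leaves limited headroom if you add drives or cards later.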
IIRC you cannot put 4 nodes in a system designed for 2 nodes; the power and backplane are all different. If I remove a node from my 4-node while the other nodes are running, it throws a fit and all fans peg at 100%, so I'm guessing it has something to do with airflow not being optimal. Remember the CPUs are...