SMA Onboard HBA or not - how to decide?


takeawaydave

Member
Aug 20, 2013
62
2
8
I am still mulling over a purchase and what I really need in terms of power and performance versus cost, including the cost of running it. I would like some kind of storage controller, either integrated or as a PCIe card. The Supermicro X9SRH-7F has therefore caught my eye, but so has the more workstation-centric Supermicro X10SAT (with its onboard audio, USB 3.0, and more PCIe slots). I don't upgrade often (I'm still running an LGA775 system), so I really want something future-proof and expandable.

It's a given that the LGA2011-based E5-1650 would perhaps be overkill for my purposes (some development, an ESXi home lab, video/photo editing, minimal gaming), but for the extra $$ and the shored-up future-proofing I am not too concerned - until it comes to actually keeping the machine on 12-15 hours a day, which for me is typical.

I am now perhaps happy to go with the X10SAT, but how much real $$ would be saved with an E3 versus an E5 when the machine is idling? Not too much, I would guess.
The other thing is that if I went with the X10SAT I would gain expandability but lose the onboard HBA. I was therefore looking at IBM ServeRAID M1015 SAS/SATA PCIe RAID controller cards, which are a great price at around 100 GBP.

What is the real loss, though, with a card like this versus the onboard SAS 2308 (such as on the X9SRH-7F)?

Mods: Not sure whether this belonged more in the HBA/RAID section or here. Sorry if posted in the wrong location :)
 

Scout255

Member
Feb 12, 2013
58
0
6
What you really lose with the X10 LGA1150 board vs. the X9 LGA2011 board is maximum memory capacity (32 GB vs. 256 GB), maximum number of cores per processor (4 vs. 8), and PCIe 3.0 lanes (16 vs. 40).

While the X10 board may *appear* to have more expansion, it only has 3 physical PCIe x16 slots that share 16 PCIe 3.0 lanes (in either x16/disabled/disabled, x8/x8/disabled, or x8/x4/x4 fashion), plus 3 physical x4 slots that share 3 PCIe 2.0 lanes (i.e. each one is effectively a PCIe 2.0 x1 slot).

As this is a multipurpose machine you're likely going to want a video card, and most gaming video cards are x16 cards. This means that if you installed one in the X10 motherboard, you would either need to disable the other PCIe 3.0 slots or run the card in x8 mode (it should work just fine, but I believe there would be some sort of performance hit).

Even if you were okay with running the video card at x8, if you wanted, say, one PCIe x8 HBA/RAID card and one video card, you would already have used all available PCIe 3.0 lanes with just those two cards. If you wanted to add another HBA, or say an InfiniBand card, you would have to reduce the bandwidth available to either your HBA/RAID card or your video card. Even adding a quad-port gigabit NIC could be an issue, as those need an x4 slot and you would severely limit their speed if you stuck them in the PCIe 2.0 x1 slots.
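
As a rough back-of-the-envelope illustration of that lane math (the card widths below are just example values, not a parts list):

```python
# Rough PCIe lane-budget sketch: CPU lanes requested by a set of cards
# versus what each platform's CPU provides. Example values only.

PLATFORM_LANES = {
    "LGA1150 E3 (e.g. X10SAT)": 16,    # 16 PCIe 3.0 lanes from the CPU
    "LGA2011 E5 (e.g. X9SRH-7F)": 40,  # 40 PCIe 3.0 lanes from the CPU
}

cards = {
    "video card (prefers x16)": 16,
    "HBA/RAID card (x8)": 8,
    "10GbE or InfiniBand card (x8)": 8,
}

requested = sum(cards.values())
for platform, available in PLATFORM_LANES.items():
    verdict = ("everything fits at full width" if requested <= available
               else "cards must drop to narrower links")
    print(f"{platform}: {requested} lanes requested, {available} available -> {verdict}")
```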

This is why the 2011 platform can be quite useful in a server that requires multiple PCIe cards. If you are concerned with future-proofing, this may make the 2011 platform more appealing, especially if you hold off for the V2 Ivy Bridge processors that are coming shortly.

With that said, the 2011 platform is more expensive than an 1150 platform, though you do get quite a bit for the extra money.
 

takeawaydave

Member
Aug 20, 2013
62
2
8
Thanks for the reply. It has certainly given me something to think about and, on consideration, a push towards the LGA2011 platform. Interestingly, given the points you have made regarding the PCIe lanes, I might even consider a dual-CPU board while starting off with just a single CPU. The X9DA7 is very well priced in comparison to the X9SRH-7F.
Thanks again Scout255!
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
Thanks for the reply. It has certainly given me something to think about and, on consideration, a push towards the LGA2011 platform. Interestingly, given the points you have made regarding the PCIe lanes, I might even consider a dual-CPU board while starting off with just a single CPU. The X9DA7 is very well priced in comparison to the X9SRH-7F.
Thanks again Scout255!
So the X9DA7 is going to be more of a workstation board. The A in the model number means audio. It also has USB 3.0, which is a big plus if you are building a workstation. On the other hand, the X9SRH-7F has IPMI, which would be better for a server.

In about two weeks, each LGA2011 socket will have the capacity to hold up to 12C/24T.
 

Scout255

Member
Feb 12, 2013
58
0
6
Thanks for the reply. It has certainly given me something to think about and, on consideration, a push towards the LGA2011 platform. Interestingly, given the points you have made regarding the PCIe lanes, I might even consider a dual-CPU board while starting off with just a single CPU. The X9DA7 is very well priced in comparison to the X9SRH-7F.
Thanks again Scout255!
If you do go with a dual-socket board and a single processor at first, just make sure you read the manual and know the limitations this could cause. Some of the PCIe expansion slots will only be usable once you install the second processor (it's 40 lanes per processor, so the lanes from the second processor, and the PCIe slots wired to them, will not be available). Also, the E5-1600 series of chips are uniprocessor only (i.e. you can't stick two in a dual-socket board and expect them to work). You need the E5-2600 or E5-4600 series of chips for that (and they are MUCH more expensive than the E5-1600s).
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Might I suggest running your server apps on a separate storage box? GPU and audio passthrough can destabilize a server.

vSGA sucks right now on ESXi and vDGA is just VT-d passthrough - but I can't imagine you would want a GPU glitch to take out the entire server.

I'd just get a cheap workstation for desktop duty and a cheap server to maintain 24x7 operations and give you the reliability of a server.

Virtualization has a huge price when it comes to unreliability. It is my opinion that a server that crashes or loses power needs to be fully assessed - or better, rebuilt - to ensure no lingering damage continues to haunt you.

This is difficult to do in real life even with a server doing only server duties, let alone with audio and a GPU brought into the mix. It's one thing to run a slower workstation Quadro with ECC, but pushing a consumer-grade card that runs much hotter and uses much more power just seems like a bad idea. It is hard enough to provide stable, cool airflow to a server at home; I can't imagine an unoptimized white-box getting optimal cooling to keep it up 24x7.

I would think hard about separating the roles into what you think will be realistic. Hyper-V/Xen/ESXi are not at all tolerant of system failure.

If you have never seen a hypervisor fail, corrupt the storage badly, and then refuse to boot your 10 VMs - well, it will happen. These failures can really shake your confidence in your server, and they plain suck.

Think about it.
 

takeawaydave

Member
Aug 20, 2013
62
2
8
So the X9DA7 is going to be more of a workstation board. The A in the model number means audio. It also has USB 3.0, which is a big plus if you are building a workstation. On the other hand, the X9SRH-7F has IPMI, which would be better for a server.

In about two weeks, each LGA2011 socket will have the capacity to hold up to 12C/24T.
Thanks Patrick, but I could only dream of going to capacity on that scale. Jumping onto the E5-2600 family certainly puts a dent in the budget. I foresee starting with probably no more than a 6-core E5-2620 - slow as it might look on paper at 2 GHz, I am sure it will fly nonetheless.

If you do go with a dual-socket board and a single processor at first, just make sure you read the manual and know the limitations this could cause. Some of the PCIe expansion slots will only be usable once you install the second processor (it's 40 lanes per processor, so the lanes from the second processor, and the PCIe slots wired to them, will not be available). Also, the E5-1600 series of chips are uniprocessor only (i.e. you can't stick two in a dual-socket board and expect them to work). You need the E5-2600 or E5-4600 series of chips for that (and they are MUCH more expensive than the E5-1600s).
Thanks Scout255. Yes - I might even have a read of the manual tonight before purchasing. Any idea whether the SAS 2308 would work with a single E5-2600 CPU?

Might I suggest running your server apps on a separate storage box? GPU and audio passthrough can destabilize a server.

vSGA sucks right now on ESXi and vDGA is just VT-d passthrough - but I can't imagine you would want a GPU glitch to take out the entire server.

I'd just get a cheap workstation for desktop duty and a cheap server to maintain 24x7 operations and give you the reliability of a server.
Thanks mrkrad for the reply and the food for thought. It is certainly worth making provision for all scenarios when it comes to data. Perhaps the old LGA775 will be heading towards a life as a file and storage server? It's not the fastest CPU, but with a decent HBA or RAID card it might still make a decent server.

Now, if I could hook the old and the new machines up with, say, a 10GBase-T connection, this segregation would be reasonably transparent...
 

Scout255

Member
Feb 12, 2013
58
0
6
Generally the integrated onboard items are linked to the first CPU socket, with the second CPU's lanes being used only for PCIe slots. However, it is better to be safe than sorry and verify via the manual (it should have a handy block diagram to look at).

Also, unless you are in a huge rush to get this machine up and running, I would wait a few weeks to see what the E5-2600 V2 chips look like. It appears they should deliver at the very least the same performance at lower power consumption. I'm not sure what the pricing picture will look like, but I believe comparable processors will come in at about the same price as the older Sandy Bridge-based parts. This means you have very little to lose by waiting for the new chips to come out.
 

Aluminum

Active Member
Sep 7, 2012
431
46
28
The OP's choice is simple, IMO.

The current bundling you get with the Socket 2011 boards is unbeatable. (FYI, prices below are from US vendors.)
You can sell/swap cards later, but the amount you will make back years down the road won't match the savings of going with onboard chips from the start.

Downgrade from the hex-core 1650; you were already considering the E3s, and the 1620 is more bang for the buck.
You could wait for the Ivy Bridge 2011 chips to save some on idle power, but they might not launch alongside the 26xx series this month and will probably carry the "new hotness" price premium for a while.
Going dual E5 is a biiiiig step up in CPU prices, and it does not sound like your home lab needs it either. Sure, there are some nice 8-core beasts for several grand each, but the cheap 26xx models are slower by 1 GHz or so compared to the 16xx parts.

E3 Build option #1:
~$290 E3-1245 V3, 3.4/3.8 GHz, 4C/8T (there is no VGA controller on most C226 boards; otherwise you could use the cheaper 1230 or 1240 on C222/C224 boards)
$257 X10SAT-O (BTW, that is the Thunderbolt version - will you actually use it? Similar C226 boards without the Thunderbolt feature: X10SAE-O at $210, Asus P9D WS at $236)
~$100 M1015 or equivalent


E5 Build option #2 = winner for your scenario:
$292 E5-1620, 3.6/3.8 GHz, 4C/8T
$486 X9SRH-7TF-O
$0: 8-port LSI 2308: this is a faster version of the LSI 2008 on the M1015. It comes in IR mode and can be flashed to IT mode just like the card.
$0: dual Intel X540 10GbE: great for future-proofing; these run $400+ as standalone cards.


Both can use unbuffered ECC RAM, but with the E5 you also get the option of registered DIMMs, which makes later upgrades easier (and is required to break 32GB as well). Also, buy 4x16GB instead of 8x8GB, or 4x8GB instead of 8x4GB, when populating slots; the cost difference is minimal (registered may even be cheaper) and it saves a ton if you upgrade.
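
As a quick, hypothetical illustration of the upgrade-headroom point (slot count and DIMM sizes here are example values, not a specific board's spec):

```python
# Same starting capacity, different upgrade headroom.
# Slot count and DIMM sizes are illustrative, not a particular board's spec.

def headroom(total_slots, dimm_gb, dimms_used, max_dimm_gb):
    now = dimm_gb * dimms_used
    free = total_slots - dimms_used
    ceiling = now + free * max_dimm_gb
    print(f"{dimms_used} x {dimm_gb}GB = {now}GB now, {free} slots free, "
          f"{ceiling}GB reachable by only adding DIMMs")

headroom(total_slots=8, dimm_gb=16, dimms_used=4, max_dimm_gb=16)  # 64GB now, 128GB later
headroom(total_slots=8, dimm_gb=8,  dimms_used=8, max_dimm_gb=16)  # 64GB now, stuck at 64GB
```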

Prices on everything else will be the same with both builds.

BTW, E3 V2/V3 chips actually have 20 PCIe lanes (x16 3.0 plus x4 2.0), and you can run 16/4 or 8/8/4 on some C2xx boards. The chipset provides another 8 lanes that usually get used for onboard devices and x1 slots.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
Also, unless you are in a huge rush to get this machine up and running, I would wait a few weeks to see what the E5-2600 V2 chips look like. It appears they should deliver at the very least the same performance at lower power consumption. I'm not sure what the pricing picture will look like, but I believe comparable processors will come in at about the same price as the older Sandy Bridge-based parts. This means you have very little to lose by waiting for the new chips to come out.
If I may offer a thought: motherboards are largely going to be what we already have, maybe with a few updates here and there. The biggest update will be when we start seeing the new LSI 3xxx series chips integrated.

On the chips, there are two impacts. First, new possibilities. Second, new opportunities on eBay :)
 

brutalizer

Member
Jun 16, 2013
54
11
8
I am basically in the same situation. However, because I am a home user, I will go for an X10SAT. I don't need more than 32GB RAM for the foreseeable future. I will not virtualize more than 2-3 VMs, I think. Those fit into 10GB RAM easily, leaving me 22GB. I could even virtualize 10 VMs at 2GB each and still have 12GB RAM left over. But why would I virtualize 10 VMs? I have no need for that.

PCIe lanes can be a problem on an X10SAT. But if I added a quad gigabit NIC, I would insert it into a PCIe v2 slot. Sure, it is only x1 speed, but that is the equivalent of 500MB/sec. So I have three x1 slots, each giving 500MB/sec. That is plenty for my small needs. I am going to add an HBA card too, an IBM M1015 (a rebranded LSI 2008), into an x1 slot - which will give me 500MB/sec instead of the full 1GB/sec speed. That is also OK for my small needs. I can wait an extra minute when copying stuff; 500MB/sec or 1GB/sec is not important to me.
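
(For reference, here is the rough arithmetic behind figures like 500MB/sec per lane; the per-lane numbers are approximate usable throughput after encoding overhead.)

```python
# Approximate usable bandwidth per PCIe lane, by generation,
# after 8b/10b (gen 1/2) or 128b/130b (gen 3) encoding overhead.
MB_PER_LANE = {1: 250, 2: 500, 3: 985}  # MB/s, rough figures

def slot_bandwidth(gen, lanes):
    """Rough usable bandwidth of a PCIe slot in MB/s."""
    return MB_PER_LANE[gen] * lanes

print(slot_bandwidth(2, 1))   # ~500 MB/s   - a PCIe 2.0 x1 slot
print(slot_bandwidth(2, 4))   # ~2000 MB/s  - a PCIe 2.0 x4 slot
print(slot_bandwidth(3, 16))  # ~15760 MB/s - a PCIe 3.0 x16 slot
```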

Then I can still use an x16 PCIe v3 slot, or two x8 PCIe v3 slots. But even the fastest graphics card out there, moving tons of pixels at insane speeds, does not saturate an x8 PCIe v3 slot today. It makes no difference whether you insert the fastest graphics card into x8 or x16 - just look at some graphics benchmarks comparing x8 to x16: no difference. Well, there might be a 2-3 fps difference in some cases, but that is practically "no difference". So I can use an x8 slot for my graphics card. Then I have another x8 slot for something else. Like... a Xeon Phi compute card? But first of all, Phi cards are very expensive, so I cannot afford one; second, if an ultra-fast graphics card is happy in an x8 slot, then the Phi will also do fine in an x8 slot. And if the Phi does not do fine, it might slow down a bit in an x8 slot, in the worst case. Maybe it slows down... 10%? So what is 10%? Do I really care? Can I wait another minute or half an hour? Yes we can.

Sure, it would be nice to have 256GB RAM and 40 PCIe v3 lanes, but that is overkill for me as a small home user. Most probably I will not use half the resources. And if I really need the resources, I will buy a dedicated PC for the stuff I cannot wait another half an hour for. Most probably I will be fine waiting another half an hour. And the new Haswell Xeon E3 v3 CPUs are actually very fast. The individual threads are very strong, on par with those of a hex-core CPU or so; the only difference is that with a hex-core you get more of them. Can I wait another minute? Yes we can.

I would like to build a silent audio workstation for creating music, and a powerful gaming PC. So should I build both? No. Instead I will build one silent gaming PC using the X10SAT and simply buy a 3.5" hard disk caddy so I can easily swap hard disks. I will have one hard disk configured as an audio workstation - ASIO drivers, music software installed, etc. - and another hard disk configured for gaming, with all the games installed. When I want to create music, I power down the PC and swap disks. Sure, it is a nuisance to power down the PC, but can I wait another 5 minutes before creating music? Yes we can. I do not create music while playing games, so my PC only does one task at a time and I can swap disks. Sure, if I needed to game whilst creating music, I would need two PCs, but I don't, and I can wait another 5 minutes to switch tasks. That is OK for my small needs; I am not a company where a 5-minute wait would be disastrous. People building music workstations build a separate PC with only the OS and nothing else, not even connected to the internet. It is used only for music, with no extra software installed, because latency is a real problem - you don't want antivirus software running while you are recording to disk with tons of software instruments, as that would ruin the recording. Instead of buying another music PC, I can do this for the cost of another disk, which I swap in.

Basically, I am not in a hurry; I can wait another minute. So I save a ton of money, and save space in my room. I will buy a DAS (16 disks in an external chassis) which I connect to my PC via the IBM M1015 card, so I can turn my PC into a file server whenever I need to instead of buying a separate file server. That requires me to power down the PC and connect the DAS, but can I wait another minute before copying files? Yes we can. My DAS will normally not be connected; it will only be used as a backup. On my PC I will have a 4TB disk which I use for work, and when I am done with the work I back up to the DAS. Because the DAS is disconnected, it is safe from a lightning strike. When lightning strikes, all your connected electronic hardware can be toast. I can always buy a new PC, but I cannot recreate my data easily.




What you really lose with the X10 LGA1150 board vs. the X9 LGA2011 board is maximum memory capacity (32 GB vs. 256 GB), maximum number of cores per processor (4 vs. 8), and PCIe 3.0 lanes (16 vs. 40).

While the X10 board may *appear* to have more expansion, it only has 3 physical PCIe x16 slots that share 16 PCIe 3.0 lanes (in either x16/disabled/disabled, x8/x8/disabled, or x8/x4/x4 fashion), plus 3 physical x4 slots that share 3 PCIe 2.0 lanes (i.e. each one is effectively a PCIe 2.0 x1 slot).

As this is a multipurpose machine you're likely going to want a video card, and most gaming video cards are x16 cards. This means that if you installed one in the X10 motherboard, you would either need to disable the other PCIe 3.0 slots or run the card in x8 mode (it should work just fine, but I believe there would be some sort of performance hit).

Even if you were okay with running the video card at x8, if you wanted, say, one PCIe x8 HBA/RAID card and one video card, you would already have used all available PCIe 3.0 lanes with just those two cards. If you wanted to add another HBA, or say an InfiniBand card, you would have to reduce the bandwidth available to either your HBA/RAID card or your video card. Even adding a quad-port gigabit NIC could be an issue, as those need an x4 slot and you would severely limit their speed if you stuck them in the PCIe 2.0 x1 slots.

This is why the 2011 platform can be quite useful in a server that requires multiple PCIe cards. If you are concerned with future-proofing, this may make the 2011 platform more appealing, especially if you hold off for the V2 Ivy Bridge processors that are coming shortly.

With that said, the 2011 platform is more expensive than an 1150 platform, though you do get quite a bit for the extra money.
 

Scout255

Member
Feb 12, 2013
58
0
6
I am basically in the same situation. However, because I am a home user, I will go for an X10SAT. I don't need more than 32GB RAM for the foreseeable future. I will not virtualize more than 2-3 VMs, I think. Those fit into 10GB RAM easily, leaving me 22GB. I could even virtualize 10 VMs at 2GB each and still have 12GB RAM left over. But why would I virtualize 10 VMs? I have no need for that.

PCIe lanes can be a problem on an X10SAT. But if I added a quad gigabit NIC, I would insert it into a PCIe v2 slot. Sure, it is only x1 speed, but that is the equivalent of 500MB/sec. So I have three x1 slots, each giving 500MB/sec. That is plenty for my small needs. I am going to add an HBA card too, an IBM M1015 (a rebranded LSI 2008), into an x1 slot - which will give me 500MB/sec instead of the full 1GB/sec speed. That is also OK for my small needs. I can wait an extra minute when copying stuff; 500MB/sec or 1GB/sec is not important to me.

Then I can still use an x16 PCIe v3 slot, or two x8 PCIe v3 slots. But even the fastest graphics card out there, moving tons of pixels at insane speeds, does not saturate an x8 PCIe v3 slot today. It makes no difference whether you insert the fastest graphics card into x8 or x16 - just look at some graphics benchmarks comparing x8 to x16: no difference. Well, there might be a 2-3 fps difference in some cases, but that is practically "no difference". So I can use an x8 slot for my graphics card. Then I have another x8 slot for something else. Like... a Xeon Phi compute card? But first of all, Phi cards are very expensive, so I cannot afford one; second, if an ultra-fast graphics card is happy in an x8 slot, then the Phi will also do fine in an x8 slot. And if the Phi does not do fine, it might slow down a bit in an x8 slot, in the worst case. Maybe it slows down... 10%? So what is 10%? Do I really care? Can I wait another minute or half an hour? Yes we can.

Sure, it would be nice to have 256GB RAM and 40 PCIe v3 lanes, but that is overkill for me as a small home user. Most probably I will not use half the resources. And if I really need the resources, I will buy a dedicated PC for the stuff I cannot wait another half an hour for. Most probably I will be fine waiting another half an hour. And the new Haswell Xeon E3 v3 CPUs are actually very fast. The individual threads are very strong, on par with those of a hex-core CPU or so; the only difference is that with a hex-core you get more of them. Can I wait another minute? Yes we can.

I would like to build a silent audio workstation for creating music, and a powerful gaming PC. So should I build both? No. Instead I will build one silent gaming PC using the X10SAT and simply buy a 3.5" hard disk caddy so I can easily swap hard disks. I will have one hard disk configured as an audio workstation - ASIO drivers, music software installed, etc. - and another hard disk configured for gaming, with all the games installed. When I want to create music, I power down the PC and swap disks. Sure, it is a nuisance to power down the PC, but can I wait another 5 minutes before creating music? Yes we can. I do not create music while playing games, so my PC only does one task at a time and I can swap disks. Sure, if I needed to game whilst creating music, I would need two PCs, but I don't, and I can wait another 5 minutes to switch tasks. That is OK for my small needs; I am not a company where a 5-minute wait would be disastrous. People building music workstations build a separate PC with only the OS and nothing else, not even connected to the internet. It is used only for music, with no extra software installed, because latency is a real problem - you don't want antivirus software running while you are recording to disk with tons of software instruments, as that would ruin the recording. Instead of buying another music PC, I can do this for the cost of another disk, which I swap in.

Basically, I am not in a hurry; I can wait another minute. So I save a ton of money, and save space in my room. I will buy a DAS (16 disks in an external chassis) which I connect to my PC via the IBM M1015 card, so I can turn my PC into a file server whenever I need to instead of buying a separate file server. That requires me to power down the PC and connect the DAS, but can I wait another minute before copying files? Yes we can. My DAS will normally not be connected; it will only be used as a backup. On my PC I will have a 4TB disk which I use for work, and when I am done with the work I back up to the DAS. Because the DAS is disconnected, it is safe from a lightning strike. When lightning strikes, all your connected electronic hardware can be toast. I can always buy a new PC, but I cannot recreate my data easily.
My response was tailored to the OP's specific use case; yours is significantly different. It's really case-specific: in certain instances LGA 1150 is great, in others it is not. If you do not need or care about RAM capacity or PCIe lanes, then 1150 is definitely the way to go.

I would highly recommend, though, that you do not place an M1015 in a PCIe Gen 2 x1 slot. The M1015 has 8 x 6Gb/s SAS lanes (a total of 48 Gb/s, roughly 6144 MB/s) of maximum bandwidth. Putting that into a slot limited to 500 MB/s would severely handicap it, even for use with spinning disks. You would likely want to put it into an x8 slot, leaving you one free x8 slot. That, plus a graphics card running at x8 instead of x16, will use all of your available PCIe v3 lanes. You would then have 3 x1 v2 slots free. If you wanted InfiniBand, 10GbE, another HBA, or some other high-bandwidth card, you would need to cut your graphics card or HBA down to x4 speed, as I said above. Then again, if you do not care at all about transfer speed and are only using it for manual backups, similar to a USB external enclosure, then have at it - most people just do not use SAS cards this way (they are usually used for live, always-connected storage).
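
Roughly, the arithmetic looks like this (figures are approximate, and real-world throughput is lower after protocol overhead):

```python
# Back-of-envelope: aggregate SAS-side bandwidth of an 8-port 6Gb/s HBA
# (like the M1015) versus what different PCIe slots can feed it.
# Approximate figures; real throughput is lower after protocol overhead.

SAS_PORTS = 8
SAS_GBPS_PER_PORT = 6
sas_total_mb = SAS_PORTS * SAS_GBPS_PER_PORT * 1000 // 8   # ~6000 MB/s aggregate

slots_mb = {
    "PCIe 2.0 x1": 500,    # the slot discussed above
    "PCIe 2.0 x8": 4000,   # what the M1015 is designed for
}

for name, mb in slots_mb.items():
    share = mb / sas_total_mb
    print(f"{name}: ~{mb} MB/s -> about {share:.0%} of the HBA's SAS-side bandwidth")
```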

If you do not need a gaming/workstation graphics card, you would be perfectly fine for most uses with just an HBA/RAID card and one other high-bandwidth card (10GbE, InfiniBand, etc.) on an LGA 1150 motherboard. Otherwise it may be best to just go Socket 2011. If you price a 4-core 2011 uniprocessor system against an 1150 uniprocessor system, there isn't all that considerable a difference price-wise (a $100-ish maximum difference on a comparable motherboard, and the processors are quite similar in price), and that can be made up somewhat with cheaper and easier-to-find registered ECC RAM.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Lightning strike? Wow, I think my servers would melt and catch fire if lightning struck them. If not the drives, then perhaps the house on top of them?

Be careful with statements like that. We don't want to bring out the superstitions.
 

Aluminum

Active Member
Sep 7, 2012
431
46
28
Lightning strike? Wow, I think my servers would melt and catch fire if lightning struck them. If not the drives, then perhaps the house on top of them?

Be careful with statements like that. We don't want to bring out the superstitions.
Yeah, I would call that FUD.

Over the years my computers and electronics have survived about as direct a lightning strike as possible a couple of times (barring a full house hit, which almost always starts a fire - just ask your local fire department).

Some of the UPSes did not, but they were replaced at no cost. I wish I could say the same about everything else in the various places I've lived; one of them needed an entire new breaker box and a new pole on the street... that week SUCKED, and it was hot as hell too. Any wall warts not on at least a surge strip seem to die as well, so I put any modem/DVR/etc. boxes on them for that reason. When I used to have coax internet I used fiber converters to keep it from toasting my LAN. My FiOS ONT is indoors now with its own UPS (their battery sucks and drops everything but phone service, which I don't even have, after a 5-minute outage), so other than AC mains there is nothing coming in :)
 

brutalizer

Member
Jun 16, 2013
54
11
8
I would highly recommend, though, that you do not place an M1015 in a PCIe Gen 2 x1 slot. The M1015 has 8 x 6Gb/s SAS lanes (a total of 48 Gb/s, roughly 6144 MB/s) of maximum bandwidth. Putting that into a slot limited to 500 MB/s would severely handicap it, even for use with spinning disks. You would likely want to put it into an x8 slot, leaving you one free x8 slot. That, plus a graphics card running at x8 instead of x16, will use all of your available PCIe v3 lanes. You would then have 3 x1 v2 slots free. If you wanted InfiniBand, 10GbE, another HBA, or some other high-bandwidth card, you would need to cut your graphics card or HBA down to x4 speed, as I said above. Then again, if you do not care at all about transfer speed and are only using it for manual backups, similar to a USB external enclosure, then have at it - most people just do not use SAS cards this way (they are usually used for live, always-connected storage).
Hmm... I need to rethink this again. Thanks for the input.

I mean, a modern graphics card is not penalized at x8, so what difference does it make whether I run it at x16 or x8? None. But then, what difference does it make if the M1015 runs at a penalized 500MB/sec rather than at full speed? Not much, as I am going to use my DAS only as an offline backup. In case lightning strikes. :)
 

brutalizer

Member
Jun 16, 2013
54
11
8
Yeah, I would call that FUD.

Over the years my computers and electronics have survived about as direct a lightning strike as possible a couple of times (barring a full house hit, which almost always starts a fire - just ask your local fire department).
I would call your reply ignorant. If you check some threads or Google lightning and hardware, you will see that it is a very real problem - and a serious one, too.

If you live in a big building with condo apartments, you have good protection against lightning. If you live in a small house, you are toast. There are surge protectors that claim to protect against lightning, but check the specs: they protect against 3,000-4,000 joules at most, while a lightning strike carries millions of joules. There is no protection against a direct hit - nothing protects against that. If you disconnect your PC and hard disks, you are protected. That is what I am going to do: disconnect my DAS/JBOD storage chassis.

A colleague of mine in a big city had a lightning strike hit a power station, and hardware across the entire area got toasted; she had to get a new TV. Just Google this a bit and you will see that lightning strikes are a really big problem. One support guy at an ISP said that after a major storm with a lot of lightning, he had to send out 40-50 new modems because they got toasted. I suggest you educate yourself instead of displaying ignorance; it makes you look like a newbie.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
If you disconnect your storage, it is not grounded!

j/k

But dude, lighten up - you sound like a snob.

Most decent UPSes include a warranty for those times when you get a direct hit.

I'd be more worried about bit rot from drives idling, temperature cycling, or someone bumping, moving, or spilling something on a JBOD.

I get what you are saying. Some folks put their money under the mattress to keep it "secure". But honestly, keeping backups only on-site is probably the worst idea possible, since a "DR" scenario is usually one of the dumb things below.

Folks breaking in and stealing. Flood. Tornado. Fire. Lightning (yes! it is very serious). Overheating. Cat peeing on it. VIRUS. Cola dropped on it. A hammer dropped on the table while the JBOD is spinning. Loud bass speakers or other vibrations. Bit rot from idling. Uncontrolled humidity. Thermal cycling (hot to cold to hot). Bad memory corruption. Power brick failure corruption.

I like to keep things online and replicate as much as possible. There are things that are beyond our control (acts of God, etc.), and you just do your best to be double-triple-dog safe with multiple copies on-site and off-site.

I had a telco guy come in and drop his belt on the desk where a 2TB drive was doing a D2D backup. It killed the drive instantly. The guy should have known not to cause a sudden thump next to a consumer drive.
 

Aluminum

Active Member
Sep 7, 2012
431
46
28
I would call your reply ignorant. If you check some threads or Google lightning and hardware, you will see that it is a very real problem - and a serious one, too.

If you live in a big building with condo apartments, you have good protection against lightning. If you live in a small house, you are toast. There are surge protectors that claim to protect against lightning, but check the specs: they protect against 3,000-4,000 joules at most, while a lightning strike carries millions of joules. There is no protection against a direct hit - nothing protects against that. If you disconnect your PC and hard disks, you are protected. That is what I am going to do: disconnect my DAS/JBOD storage chassis.

A colleague of mine in a big city had a lightning strike hit a power station, and hardware across the entire area got toasted; she had to get a new TV. Just Google this a bit and you will see that lightning strikes are a really big problem. One support guy at an ISP said that after a major storm with a lot of lightning, he had to send out 40-50 new modems because they got toasted. I suggest you educate yourself instead of displaying ignorance; it makes you look like a newbie.
You know what they say about assumptions...

I run some computers in an old farmhouse on top of a hill; once the barns and silos were torn down about 10 years ago, it became the only lightning magnet within a quarter-mile circle. The only reason it hasn't caught fire is that the wood is so damn old. You can ground a building well enough that a decent UPS or surge strip eats whatever is left and saves anything behind it; the size of the building doesn't mean anything, but the grounding design sure does.

The modems were sent out because a lot of places have shit for ground; they will light up the two-dollar "ground tester" light, but that's about it. There are WISPs that run lots of gear on several-hundred-foot-plus antenna towers, and the ones that know what they are doing don't have to constantly replace everything.
 

brutalizer

Member
Jun 16, 2013
54
11
8
The modems were sent out because a lot of places have shit for ground; they will light up the two-dollar "ground tester" light, but that's about it. There are WISPs that run lots of gear on several-hundred-foot-plus antenna towers, and the ones that know what they are doing don't have to constantly replace everything.
I agree that some places have shitty grounding, but still, if you Google a bit you will see a lot of stories where all the electronics got toasted. They probably lived in a small house, not a modern big building with good grounding. But I would like to play it quite safe; that is why I am using ZFS and raidz3. I should back up everything to another site, but I am not going to do that. My solution will have to do.