Best storage option for ARM SBC server


blood

Member
Apr 20, 2017
I've been slowly coming around to the benefits of using ARM SBCs as home servers, including recently migrating a handful of LXC containers from an Intel Supermicro system to a Libre Computer Renegade. With 4GB of RAM and four aarch64 cores, it's doing surprisingly well, especially when you consider its power usage. I'm looking to move more services to devices like it so I can shut down, and maybe even sell, some of my Intel gear - but one thing I'm struggling with is what to do about local storage.
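(Side note for anyone attempting the same migration: containers share the host kernel, so an amd64 rootfs won't run on aarch64 - each container has to be recreated from an arm64 template with the data moved over. A quick sanity check that one really came up as arm64, using a made-up container name:)

```
# "web" is a hypothetical container name - substitute your own
lxc-attach -n web -- uname -m    # should print aarch64 on the Renegade
```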

To put it bluntly, I don't trust SD cards. I'm not sure whether I should trust eMMC modules either, and having USB drives dangling off my infrastructure doesn't make me feel warm and fuzzy. I'm nervous that I'm just asking for hardware failures if I hit the storage too hard, and some of the containers I run touch the disk constantly for things like monitoring and logging. Does anyone else have smallish ARM systems doing real work with better storage?
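(One thing that at least softens the logging wear, assuming systemd-journald: keep the journal in RAM so the constant small writes never hit the flash. The tradeoff is that logs don't survive a reboot, so anything important needs to go to a remote collector. A minimal sketch:)

```
# Keep journald's logs in RAM instead of on flash (lost on reboot)
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nStorage=volatile\nRuntimeMaxUse=64M\n' | \
    sudo tee /etc/systemd/journald.conf.d/volatile.conf
sudo systemctl restart systemd-journald
```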

I have a filer based on an E3v2 Xeon. Should I keep that up and running and use network storage for these needs? I'm not fond of that idea, because what happens when it goes down - all of my services tank? I'd also like to be able to turn it off to save power when it's not in use.
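(If I did go that route, I suppose a systemd automount would at least let the SBCs boot and run cleanly while the filer is powered down - something like this in /etc/fstab, with a made-up hostname and export path:)

```
# Lazy NFS mount: mounts on first access, unmounts when idle, and
# "soft" keeps I/O from hanging forever while the filer is off
filer:/export/containers  /mnt/filer  nfs  noauto,x-systemd.automount,x-systemd.idle-timeout=300,soft,timeo=150  0  0
```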

I recognize that I'm complaining about how "cheap" the obvious options are when I'm intentionally using "cheap" solutions - but from a compute standpoint these boards are honestly quite satisfactory.

I'm interested in suggestions for other boards with better storage options. It seems nobody makes a non-x86/x64 SOHO server that will run vanilla Linux.
 

blood

Member
Apr 20, 2017
Thanks - I have looked at the Helios4 before and it seems pretty cool, and I'm a big fan of the Armbian project that supports it. I'm not trying to build an ARM-based NAS - I'm just looking for a way to run containers on an ARM system that doesn't use super-unreliable storage like SD cards. Maybe I should look at the Helios4 not so much as a NAS, but as an ARM server with some decent disks hanging off it. That's not a bad idea. Thanks!

There are some purposes I don't think it's reasonable to move away from x64 for today, and a NAS (at least one that's expected to perform well) is one of them. Another is whatever serves as my Plex server, since my clients require transcoding.
 

fossxplorer

Active Member
Mar 17, 2016
Oslo, Norway
Take a look at RK3399-based boards from FriendlyELEC. They have PCIe, which gives you some options, e.g. Marvell-based 2- or 4-port SATA cards. The forum over at armbian.com has lots of useful info around this.
 

blood

Member
Apr 20, 2017
Take a look at RK3399-based boards from FriendlyELEC. They have PCIe, which gives you some options, e.g. Marvell-based 2- or 4-port SATA cards. The forum over at armbian.com has lots of useful info around this.
That's interesting. I'll take a look at which PCIe cards fit those boards and what their Linux support is like. I'd also like to find a case that can house it all nicely. Thanks!

Well, you could always consider a Marvell MacchiatoBin Doubleshot as well. VMware used it to showcase the ARM build of ESXi 6.7 at a recent trade show...
Yow, that looks burly. 2x 10GbE? Four lanes of PCIe 3.0? Yes, please! I also dig that it's mini-ITX with an ATX connector, so it should mount into a case nicely. I wonder what its power footprint is... no sign of heatsinks or fans. It's more expensive than what I had been using, but the features are certainly there to justify considering it. This might be what I was looking for.

On another note, almost every ARM SBC these days ships with hardware accelerators for video. It's too bad that Plex doesn't support any of them.
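(You can at least poke at those accelerators outside of Plex: on kernels that ship a V4L2 M2M driver for the SoC's codec block, ffmpeg can use it directly. Driver support varies wildly by board, so this is just the general shape:)

```
# Stateful codec blocks show up as extra /dev/video* nodes
v4l2-ctl --list-devices
# If a driver is present, ffmpeg's V4L2 M2M encoder can use it
ffmpeg -i input.mkv -c:v h264_v4l2m2m -b:v 4M output.mp4
```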
 

Joel

Active Member
Jan 30, 2015
Another option (not really a GREAT one, but an option) is the ODROID HC2. ~$60, but it only handles a single drive.

I'm thinking of downsizing my server since I wasn't happy with my virtualized desktop on top of Proxmox, and if I'm removing that function, an 8-core E5 with 128GB of RAM that idles at 100W is WAYY overkill.
 

blood

Member
Apr 20, 2017
Another option (not really a GREAT one, but an option) is the ODROID HC2. ~$60, but it only handles a single drive.

I'm thinking of downsizing my server since I wasn't happy with my virtualized desktop on top of Proxmox, and if I'm removing that function, an 8-core E5 with 128GB of RAM that idles at 100W is WAYY overkill.
I looked at that, and though I haven't used it, from what I saw in the docs the SATA interface is a USB3 bridge internally. It does look cleaner than just dangling USB hard drives off the board, but I've never had good luck with storage over USB - IOPS tend to suck because every operation takes the added latency of going through both SATA and USB, and I've had devices hang in ways that required a reboot to get them healthy again. Maybe I should try again if things have gotten better.
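(For anyone else fighting USB storage hangs: a lot of that flakiness gets blamed on UAS, and you can force a bridge back to the plain usb-storage driver with a quirk. The VID:PID below is only an example - check lsusb for your own bridge.)

```
# See whether each bridge bound to the uas or usb-storage driver
lsusb -t
# Force a specific bridge (example ID) to skip UAS
echo 'options usb-storage quirks=152d:0578:u' | \
    sudo tee /etc/modprobe.d/disable-uas.conf
sudo update-initramfs -u   # Debian/Ubuntu; then reboot
```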

I used a Proxmox box as a desktop for a while, but rather than virtualizing it, I just ran it straight on the host so that I could easily make use of an Nvidia card with decent performance. It worked - but I ultimately stopped because my desktop habits destabilize systems too much... plugging in USB devices, using tainted kernel modules, etc. - and it was too painful having services go down with every reboot. I'm much happier having a dedicated piece of gear for my desktop that I turn on whenever I want to use it, and other systems as servers that I don't constantly destabilize. But yeah, I'd turn that off too if I were you to save power.
 

Joel

Active Member
Jan 30, 2015
I had similar issues centering on USB with my virtualized Windows desktop under Proxmox. I passed a GTX 1060 through with PCI passthrough; graphics worked fine, and I had no issues with the few games I tried. Then I tried to pass through a PCIe USB3 controller, and every 10 seconds or so the machine stuttered, even with 64GB of memory, 8 'host' vCPUs, and nothing else of note running on the host. So I went back to passing through individual devices, and USB hotplug no longer worked at all. I unplugged a USB audio card that had been passed through to Windows and the entire host crashed.
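(For reference, the GPU side was just the stock Proxmox hostpci setup - roughly this, with a made-up VM ID and PCI address:)

```
# Attach the whole GPU at PCI address 01:00 to VM 100 (q35 machine type)
qm set 100 -hostpci0 01:00,pcie=1,x-vga=1
```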

When it worked it was great, but it's clearly not ready for "production" use.

Main thing I'm interested in is having ZFS storage and a rocking desktop, and trying to make a single PC do both has been frustrating...
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
FWIW, I run containers on a Rock64 using a 128GB mSATA SSD in a USB adapter dongle. The little guy has been dutifully serving up services on my LAN for the last couple of years with no complaints, and the SSD is in fine shape as well. I run Btrfs with compression as the root fs on eMMC, with ZFS on the SSD as a backup target for all of my persistent container data and snapshots from the main filesystem, plus a simple NFS share for small stuff I want to sling around my LAN quickly: copies of scripts, config files, dotfiles, etc. If you ever see "Apricorn mSATAwire" mSATA-to-USB adapters up for sale, either buy them for yourself or send me a PM - they're absolutely great for this.
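(Roughly, the layout looks like this - device, pool, and dataset names changed for the example:)

```
# eMMC root, transparently compressed (one line in /etc/fstab)
/dev/mmcblk0p2  /  btrfs  defaults,noatime,compress=zstd  0  0

# The USB SSD carries a small ZFS pool for container data,
# backups, and a quick NFS share
sudo zpool create -o ashift=12 usbtank /dev/sda
sudo zfs create -o compression=lz4 usbtank/containers
sudo zfs create -o sharenfs=on usbtank/scratch
```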

I also ZFS-send the USB filesystem over to my bigboy-sized NAS for backups.
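(That part is just the usual snapshot-and-send loop - something like this, with invented dataset and host names, where @previous stands in for whatever snapshot the NAS already has:)

```
# Take a dated snapshot and send it incrementally to the NAS
SNAP="usbtank/containers@$(date +%Y%m%d)"
sudo zfs snapshot "$SNAP"
sudo zfs send -i usbtank/containers@previous "$SNAP" | \
    ssh nas sudo zfs receive bigpool/backups/rock64
```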