The STORNADO


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Ha! I sent him a note today already on that one.

The plan is to do an all-flash FreeNAS appliance that has the ability to directly connect clients.

We walked around Taipei for a bit last week. I think I am flying up to Vancouver to do the setup and shoot after HPE Discover.

As an aside, after 75 minutes of walking with Linus, and just prior to that having lunch and going on an MRT adventure with Wendell, I ended up buying four video camera setups. Talk about an expensive day.
 

zxv

The more I C, the less I see.
Sep 10, 2017
156
57
28
@Patrick, might you consider getting @gea involved in this?

That would be a dream team!
He's such a wealth of ZFS tuning experience.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
I am looking forward to this too, especially since this is a scenario catered to small groups (4 editors here) and not your typical enterprise benchmark at T16/QD32 (16 threads, queue depth 32).
So ideally we have 4 threads (or maybe 8 or even 16 if they each run a couple of background jobs), but certainly nothing massively parallelized.
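For reference, a small fio job approximating that kind of workload might look like the sketch below. The device path, block size, and job counts are illustrative assumptions on my part, not anything LTT actually ran:

```ini
; editors.fio -- sketch of a 4-editor read workload,
; as opposed to an enterprise-style T16/QD32 stress test
[global]
ioengine=libaio
direct=1
time_based=1
runtime=60
filename=/dev/nvme0n1   ; illustrative target device

[editor-reads]
rw=randread
bs=128k        ; video editing leans toward large reads
iodepth=4      ; shallow queue, unlike QD32
numjobs=4      ; one job per editor
```

The point is just that `numjobs=4` / `iodepth=4` exercises a very different part of the drive's performance envelope than the 16-thread, QD32 numbers on a spec sheet.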
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
On this particular one, the hardware was set and I was told we are doing FreeNAS. Linus and I discussed doing more of these in the future.

I told him I actually think Ceph is more interesting.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Ceph is more interesting, but you need more nodes.
After your power consumption review of the A2SDi-8C board (15 W idle) I was very tempted to go buy a few for a physical Ceph playpen.
I don't really need to, since I have a virtual playground and use different storage for OpenStack where it's implemented, but it's still fun to play with.

At least SSDs are now at a price where you can have fun with these things.
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,728
3,078
113
33
fohdeesha.com
TWENTY-SIX drives in a single raidz2? Holy crap, I hope you have a priest present when you lose a drive and it needs to resilver a vdev that large (and with that many failure points) :p I would assume hundreds and hundreds of hours of resilvering time if the drives are even slightly full. The last benchmarks I remember seeing that compared vdev width to resilver time showed massive time increases past 10 disks or so; I can't imagine 26.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
TWENTY-SIX drives in a single raidz2? Holy crap, I hope you have a priest present when you lose a drive and it needs to resilver a vdev that large (and with that many failure points) :p I would assume hundreds and hundreds of hours of resilvering time if the drives are even slightly full. The last benchmarks I remember seeing that compared vdev width to resilver time showed massive time increases past 10 disks or so; I can't imagine 26.
LTT has time on their hands for hundreds of hours, lol jk.

But based on the video and comments from Patrick, I believe there was a very slim chance of drive failure, and since the drives are SSDs they would resilver faster than HDDs would.
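Since the thread keeps circling back to resilver time, here's a rough back-of-envelope sketch of why the SSDs help. The capacity and throughput numbers are illustrative assumptions, and this is a best-case sequential estimate; real raidz resilvers, dominated by random I/O across the whole vdev, can be far slower:

```python
# Back-of-envelope resilver time estimate -- a rough sketch, not real ZFS math.
# Assumes the rebuild is bottlenecked by one drive's effective throughput.
# All numbers below are illustrative assumptions, not measurements.

def resilver_hours(used_tb_per_drive: float, drive_mbps: float) -> float:
    """Hours to rewrite one drive's worth of used data at a given rate."""
    used_mb = used_tb_per_drive * 1_000_000  # TB -> MB (decimal units)
    return used_mb / drive_mbps / 3600

# Hypothetical 8 TB drive that is ~75% full (6 TB used):
hdd = resilver_hours(6.0, 150)   # ~150 MB/s assumed effective HDD rebuild rate
ssd = resilver_hours(6.0, 450)   # ~450 MB/s assumed effective SATA SSD rate

print(f"HDD: ~{hdd:.0f} h, SSD: ~{ssd:.0f} h")  # e.g. prints: HDD: ~11 h, SSD: ~4 h
```

And that's before the random-I/O penalty of walking a 26-wide vdev's metadata, which is where the "hundreds of hours" horror stories for full, wide HDD vdevs come from; SSDs mostly shrug that part off.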