What are alternatives to ZFS for deduplication?

NeverDie

Active Member
Jan 28, 2015
I'm reading in the FreeNAS manual that I should allow 5 GB of RAM for every 1 TB of deduplicated data. Is there a way to get the same deduplication result, maybe without ZFS, that doesn't involve large amounts of additional RAM?

I'd like to consolidate some old (full) backups, so a lot of the files will definitely be duplicated from one backup to the next.
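
For what it's worth, I've read you can estimate whether dedup would even pay off before turning it on: zdb's -S flag simulates the dedup table on an existing pool and prints the expected ratio (the pool name below is hypothetical). Going by the manual's rule of thumb, 10 TB of deduplicated data would want roughly 50 GB of RAM for the dedup table alone.

    zdb -S tank    # simulate dedup on pool "tank": prints a DDT histogram and estimated dedup ratio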
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
If it's file-level dedupe on a bunch of old static files, then you might like to see if fdupes can work for you; I likewise run it over my backup dirs once a week and have it hardlink identical files.
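
Something like the following (paths are made up; note that stock fdupes mainly reports or deletes duplicates, while the jdupes fork has a proper hardlink mode):

    fdupes -r -S /backups    # recurse through the backup dirs, report duplicate sets with sizes
    jdupes -r -L /backups    # jdupes fork: replace each duplicate with a hardlink to a single copy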
 

PigLover

Moderator
Jan 26, 2011
If you just want the old stuff de-duped, you could turn on dedup for the initial load of your old backups and then disable it once they are loaded. Your initial load would be painfully slow as the signature table gets thrashed between RAM and disk, but after you turn it back off you'd have good performance even with limited RAM.
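
In ZFS terms that's just a property flip, along these lines (the dataset name is hypothetical; blocks written while dedup was on stay deduplicated after you turn it off):

    zfs set dedup=on tank/backups     # dedupe everything written during the initial load
    # ... copy the old backups in ...
    zfs set dedup=off tank/backups    # stop deduping new writes; existing blocks stay deduped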
 

brutalizer

Member
Jun 16, 2013
Oracle bought GreenBytes this summer. GreenBytes has rewritten the ZFS dedupe engine, and it is best in class: it dedupes 5,000 fat VMs from 210 TB down to 4 TB, dedupes with essentially zero latency, and boots 6,000 VMs in 5 minutes. Google it. Oracle Solaris will use this dedupe engine in the next upgrade, coming this summer.

Also, ZFS in Solaris 11.2 has been rewritten so that it always resilvers at full platter speed. Resilvering used to be slow if the pool was fragmented, etc., but now Solaris always resilvers at 100-150 MB/sec per disk.
 

NeverDie

Active Member
Jan 28, 2015
brutalizer said:
Oracle bought GreenBytes this summer. GreenBytes has rewritten the ZFS dedupe engine, and it is best in class. ...
Until brutalizer's post I hadn't realized Solaris was still being developed; I was under the mistaken impression it had been abandoned. Thanks to brutalizer I looked into it, and now I see that since version 11.2 it has even had an integrated hypervisor. Who knew? An integrated hypervisor plus ZFS might be very useful, and yet until now I've never read anything about it. Why does it get so little buzz? Is it no good or something?

That said, I've yet to see a decent high-level overview of the different distros and where they're going. If you know of one, please post a link.
 

sasha

New Member
Dec 9, 2013
Yes, the Solaris buzz is certainly not what it used to be. A lot of companies out there took ZFS and are using it for compression and dedupe. Unfortunately, they are all closed source.
 

NeverDie

Active Member
Jan 28, 2015
Serious? I've done a quick search (a 30-second Google "man look") and didn't see anything. Could you point me in the right direction, please?
Here it says: "Oracle Solaris 11.2 offers an integrated hypervisor on both SPARC and x86 for zero-overhead-virtualization, in addition to its current Solaris Zones capabilities."

Here it says:
"Oracle Solaris 11 enables no compromise virtualization, allowing enterprise workloads to be run within a virtual environment at no performance cost, as if they were run in a bare-metal environment. Oracle Solaris Zones has been used in production for over a decade providing a highly integrated and capable virtualization offering. In stark contrast, the leading virtualization technology vendor imparts a 25% virtualization tax, meaning a greater number of systems to manage, higher latencies and ultimately, higher cost to businesses.
Kernel Zones, a new feature of Oracle Solaris Zones added with Oracle Solaris 11.2, combines this zero overhead virtualization capability, enabling independent kernel versions and independent patch for greater flexibility with application workloads."
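
For the curious, spinning up a kernel zone on 11.2 reportedly takes just a few commands (the zone name below is made up):

    zonecfg -z kz1 create -t SYSsolaris-kz    # define a zone from the kernel-zone template
    zoneadm -z kz1 install                    # install its own kernel and root filesystem
    zoneadm -z kz1 boot                       # boot it as an independent kernel instance
    zlogin kz1                                # log in to the running zone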

Here it says: "Solaris is expensive, but can be downloaded for free if you intend to use it for personal testing and development. No business use is allowed."
 

capn_pineapple

Active Member
Aug 28, 2013
Told you, man look.

Thanks for the pointers mate, very interesting stuff... It basically means that I can run my NAS on bare metal and then my virtualised stuff on top of that, instead of running, say, ESXi and virtualising everything including the NAS.
 

brutalizer

Member
Jun 16, 2013
Oracle Solaris is under heavy development, and always has been. Since Oracle bought Sun, Solaris is mostly touted for large servers, the largest servers on the planet, with up to 32 or even 64 sockets and 32 TB of RAM. That is the reason we desktop users don't hear so much about Solaris. Unix on x86 is a diminishing market, but large black-box database servers (powered by Unix, i.e. Solaris) are growing tremendously fast at Oracle, and they command a very hefty price at very high margins. So Oracle is betting heavily on large black-box database servers, where Oracle controls everything in the stack: hardware (including the SPARC CPU), OS (Solaris), middleware (Java) and the database. This allows Oracle to fine-tune every piece and make them all play well together, achieving brutal performance.

Also, this year Oracle will release the SPARC M7 server: 32 sockets, 1,024 cores, 8,192 threads, 64 TB RAM. Each SPARC M7 CPU will do SQL queries at 120 GB/sec; how fast is an x86 CPU, maybe 5 GB/sec? The SPARC M7 CPU is also invulnerable to the Heartbleed bug and similar bugs: every byte in RAM is tagged with a number describing which software owns it, and other software in RAM cannot access that byte. This can be done in software today, but it slows everything down 100x or so; SPARC M7 does it in hardware at native speed. It will also be several times faster than the fastest 18-core x86 Xeon and IBM POWER8. It can run huge databases from RAM; add in 10:1 compression and you can run 640 TB databases blindingly fast from RAM. Here is more info on the SPARC M7, which only runs Solaris (Linux is for the smaller Oracle servers, because Linux does not really scale well above 8 sockets):
Oracle Cranks Up The Cores To 32 With Sparc M7 Chip

GreenBytes brags about their ZFS dedupe tech (which is now included in Oracle Solaris):
Ex-Sun Micro CTO reveals Greenbytes 'world-beating' dedupe • The Register
GreenBytes brandishes full-fat clone VDI pumper • The Register

The open-sourced Solaris kernel is called illumos, and there are several OpenSolaris-derived distros out there. I think the most common are SmartOS and OmniOS. SmartOS is suited as a cloud OS: it uses KVM and runs every VM in a container for increased security, and it also has Docker support. OmniOS is more of a normal server version, I believe (not really sure on this paragraph). Also, Nexenta is a commercial OpenSolaris derivative for storage, competing with NetApp. Here is some information on how to set up ESXi and OmniOS:
OpenSolaris derived ZFS NAS/ SAN (Nexenta*, OpenIndiana, Solaris Express) - [H]ard|Forum

The open-sourced ZFS version is called OpenZFS. It has some features that Oracle Solaris does not have; for instance, you can remove a vdev from a zpool. Say you accidentally added a single disk to a pool with a raidz2 vdev: in Oracle Solaris you need to destroy and recreate the zpool to get rid of the disk, while in OpenZFS you can just remove the vdev consisting of that single disk.
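
That scenario looks roughly like this (pool and disk names are hypothetical, and whether the final remove works depends on the implementation; some builds refuse top-level vdev removal entirely, or refuse it on pools that contain raidz vdevs):

    zpool create tank raidz2 c1d0 c1d1 c1d2 c1d3    # four-disk raidz2 pool
    zpool add -f tank c1d4                          # oops: c1d4 becomes a single-disk top-level vdev
    zpool remove tank c1d4                          # back the stray vdev out, where supported
    zpool status tank                               # verify the resulting layout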

But Solaris is very much alive and kicking. The largest servers on earth are SPARC servers running Solaris exclusively. Fujitsu has the M10-4S with 64 sockets of SPARC64 CPUs, a derivative of their Venus CPU used in the K supercomputer. Oracle has the 32-socket SPARC M6 and SPARC M7 servers; both run only Solaris and have 32 or 64 TB RAM. IBM's largest server is the 16-socket POWER8 with only 16 TB RAM. Intel's largest x86 servers are 8-socket machines with E7 Xeon CPUs and 12 TB RAM.

BTW, I myself run Solaris 11.2 on my home PC, with Windows in VirtualBox. I use Solaris as the backend with ZFS.
 

Marsh

Moderator
May 12, 2013
@brutalizer
Thanks for the info. Do you work for Oracle?
After reading your post, I now want to download Solaris and give it a spin.
 

NeverDie

Active Member
Jan 28, 2015
brutalizer said:
Oracle Solaris is under heavy development, and always has been. ...
It's great to have an actual Solaris 11.2 user on this thread. On the x86 platform, how does the integrated hypervisor compare with, say, ESXi, or with Microsoft's free Hyper-V Server 2012 R2?

Aside from general interest, here is why I ask. The issue that inevitably arises in discussions of an all-in-one with ZFS under either ESXi or Hyper-V is how to pass through either the HBA or the drives to a VM running ZFS. By linking to that VM, the other VMs get the benefit of ZFS storage at high speed, without the bottleneck of, say, a gigabit ethernet connection to a dedicated external ZFS NAS (which would also cost more in hardware than managing everything in a single all-in-one). Unfortunately, that discussion is almost inevitably accompanied by bold warnings that such a configuration is for testing only and not for use in production. However, I'm guessing/hoping that with Solaris 11.2 there would be no need to pass through anything, and the VMs could simply leverage the ZFS storage managed by the host OS with its integrated hypervisor; i.e., there would be no worries about using it in production. Is that indeed the case, or am I oversimplifying?
 

brutalizer

Member
Jun 16, 2013
I don't work at Oracle, and never have. I am an algorithmic-trading researcher: statistical arbitrage, high-frequency trading, etc. I am just a nerd, a fan of the best tech, and right now that is Solaris, with ZFS, DTrace, SMF, containers, Crossbow, etc. For instance, take DTrace. Everybody wants it:
-Mac OS X has ported it.
-FreeBSD has ported it.
-Linux has copied it under the name SystemTap.
-IBM's AIX copy is called ProbeVue.
-QNX has ported it.
-NetApp has ported it.
-VMware's copy is called vProbes.
All the big players have DTrace now, as a port or a clone. It is a game changer for developers and a must-have.
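
As a taste of why, the canonical DTrace one-liner counts system calls per process on a live box (run as root on a DTrace-capable OS):

    dtrace -n 'syscall:::entry { @[execname] = count(); }'    # Ctrl-C prints the per-process counts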

ZFS is a nice piece of Solaris tech, but not a must-have. Only Mac OS X, FreeBSD and Linux have it.

Btw, Linux has also copied Solaris containers (which evolved into Docker), and Linux copied Solaris SMF as systemd. Systemd is not a good copy of SMF, because SMF is mainly for huge servers, not desktops; Linux is mainly a desktop OS, or runs on small servers, so there is no need for systemd. Linux copied ZFS as Btrfs, and copied Crossbow as Open vSwitch. Heck, the whole of Linux is a copy of Unix (i.e. Solaris etc.).

The largest Linux servers, such as the SGI Altix or UV 2000, have tens of thousands of cores and 100 TB of RAM, but they are clusters. They are exclusively used for number-crunching HPC cluster workloads, which SGI confirms. They are similar to a small supercomputer cluster: many cheap nodes on a fast switch. They serve one scientist at a time, who chooses which number-crunching workload will run for the next 24 hours. Clusters are very cheap, just a bunch of PCs.

In contrast, there are SMP servers, i.e. one huge fat server. The largest have 32 or even 64 sockets. IBM mainframes are SMP servers too, but their CPUs are slow. They typically run business ERP systems, big databases, etc., serving thousands of users at once. Latency between far-away nodes in a cluster is very bad, so clusters only run embarrassingly parallel code that runs for-loops in each node, with little communication going on; typically pure computation. SMP servers, on the other hand, run business systems that branch all over the code, so you need tightly coupled CPUs, and you cannot use too many of them or latency gets bad. The maximum is 32 or 64 sockets.

A cluster can never replace an SMP server. For instance, you will never see an SGI server benchmarking SAP, because they cannot run SMP workloads. The largest Linux SMP server is an ordinary 8-socket x86 server; in fact, there have never existed Linux servers larger than 8 sockets. I invite anyone to post links to a larger Linux server, such as 16 sockets. Because large Linux servers do not exist, Linux scales badly on 8 sockets, and extremely badly on 16. HP experimented with Linux on their huge 64-socket Unix Integrity servers with bad results (about 40% CPU utilization under full load); google "HP Big Tux". IBM experimented with Linux on their 32-socket AIX Unix P795 server, with equally bad results.

SMP servers are very difficult to build and they cost a great deal. For instance, the 32-socket IBM p595 Unix server behind the old TPC record cost $35 million. Yes, that's right: one single server. You could buy a very large cluster for that.

Solaris has scaled to 144-socket servers for decades, and it scales extremely well on huge SMP servers. So, I am a Solaris fanboy. Btw, FreeBSD and OpenBSD are also very good OSes; if Solaris were closed, I would use BSD. Linux hacker Con Kolivas compared the Linux code to OpenSolaris, and he said the OpenSolaris code was far superior. Google "Con Kolivas blog OpenSolaris scheduler" to read his impressions.

Regarding ESXi or whatnot, I think for desktop usage any of them should do, but I don't know; I exclusively use Solaris on bare metal, and it is rock stable. But gea_ is the man to ask on this; many use ESXi with great success. I also use VirtualBox for virtualization on top of Solaris, but VirtualBox is not as stable as ESXi and should be avoided in production. I am a desktop user, so it is fine for me.
 

BradJensen3

New Member
Mar 3, 2015
Is there any promise that the GreenBytes improvements are going to end up in an open version of Solaris?

I tested ZFS dedupe using Nexenta and then FreeNAS. All the gurus on FreeNAS shouted not to use dedupe.

And the 5 GB of RAM per terabyte requirement made it pretty useless anyway.

I switched to Windows Deduplication, which is free in Windows Server 2012 and 2012 R2. There is also a hack to install it on Windows 8.1, but I haven't tried it.

I've written about my tests and lessons learned on windowsdeuplication.com

I understand and applaud that there are reasons to use ZFS besides deduplication, but you can run Windows Deduplication on a Western Digital Duo 8 or 12 TB drive (half of that in RAID 1).
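
For anyone wanting to try it, the basics on Server 2012 R2 look like this (the volume letter is hypothetical; run from an elevated PowerShell):

    Install-WindowsFeature -Name FS-Data-Deduplication    # add the dedup role service
    Enable-DedupVolume -Volume "D:"                       # turn dedup on for the data volume
    Start-DedupJob -Volume "D:" -Type Optimization        # kick off an optimization pass now
    Get-DedupStatus -Volume "D:"                          # check space savings once it finishes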

We have written a replication system for Windows Deduplication called Replacador, but we haven't quite released it yet, and I'm not sure how we could price it for the home or SMB user.
 

NeverDie

Active Member
Jan 28, 2015
BradJensen3 said:
... I've written about my tests and lessons learned on windowsdeuplication.com ...
Maybe you should post a full URL. I tried going just now to both www.windowsdeduplication.com and windowsdeduplication.com (and also the apparent misspelling you gave, windowsdeuplication.com) to see what you had written, but nothing came up.
 

c02pon

New Member
Feb 1, 2015
A cluster can never replace an SMP server. For instance, you will never see an SGI server benchmarking SAP, because they cannot run SMP workloads. The largest Linux SMP server is an ordinary 8-socket x86 server; in fact, there have never existed Linux servers larger than 8 sockets. I invite anyone to post links to a larger Linux server, such as 16 sockets.
I then give you the HP Superdome X, which was released during 2014: 16 sockets of Xeon E7.
Thanks to HP's own interconnect it can use E7-28xx v2 CPUs in all 16 sockets, which keeps the cost down to only $2M and up...
It can also be split into two 8-socket servers. RHEL and SUSE are the supported OSes, and from the documentation it seems to be mostly used for SAP HANA.

And I only know of it because my IT department wanted to reduce license costs by consolidating our CI onto one server if possible. We got more licenses in the end...

HP Integrity Superdome X (QuickSpecs/c04383189.pdf)
Running Linux on HP Integrity Superdome X: Delivering HP Project Odyssey with a scalable x86 server (Technical white paper/4AA5-4775ENW.pdf)
 

chinesestunna

Active Member
Jan 23, 2015
c02pon said:
I then give you the HP Superdome X, which was released during 2014: 16 sockets of Xeon E7. ...
Bringing back the days of the 6x6 Pentium Pros :) x86 pushing the boundaries
 

brutalizer

Member
Jun 16, 2013
c02pon said:
I then give you the HP Superdome X, which was released during 2014: 16 sockets of Xeon E7. ...
The HP Superdome was originally a Unix server with up to 64 sockets. HP switched from Itanium CPUs to x86 CPUs, slapped Linux on top, and apparently reduced it to 16 sockets. Why did HP not keep 64 sockets? Is it because Linux scales badly, or "to keep the cost down"?

It does not make sense to "keep the cost down": big SMP servers are the highest-premium segment, costing many millions. For instance, the old IBM P595 cost $35 million at only 32 sockets. Money is not an issue in this segment; it commands the highest prices. It is like selling the most expensive sports car with a plastic interior "to keep the cost down", or with only a 200-horsepower engine "to keep the cost down". These things will never happen: if you can afford a million-dollar sports car, you can afford a decent interior and a better engine. My take is that, because there had never been a large 16-socket Linux server before, Linux must necessarily scale badly, so there would have been no point in selling Linux on their 64-socket Integrity servers. HP tried that before; google "Big Tux Linux", where Linux showed utterly bad scaling on a 64-socket server, with CPU utilization of around 40% under full load. HP learned its lesson, and now repackages its Unix servers with Linux at a reduced socket count.

BTW, IBM also sells their 32-socket P795 Unix server today with Linux on top. I doubt any customer will pay huge amounts for a badly scaling Linux server.

Contrast this 16-socket Linux server with this claim by a Linux kernel developer from 2007:
http://vger.kernel.org/~davem/cgi-bi...cgi/2007/04/10
"I'm still fuming over Jeff Bonwick's entry into the anti-Linux FUD campaign....And here's the punch line, Solaris has never even run on a 1024 cpu system let alone one as big this new SGI system, and Linux has handled it just fine for years. Yet Mr. Bonwick feels compelled to imply that Linux doesn't scale and Solaris does. To claim that Solaris is more ready to scale on large multi-core systems is pure FUD, and I'm saddened to see someone as technically gifted as Jeff stoop to this level. "

This Linux kernel developer did not understand that Bonwick (the father of ZFS) was talking about SMP servers, not clusters. There are no 1,024-CPU SMP servers out there, and never have been. The largest SMP servers are the old Unix servers, Solaris, AIX, HP-UX, with 64 sockets (Solaris had a 144-socket server years ago). Linux's largest server was 8 sockets until last year, when HP repackaged its Unix servers for Linux. There had never been a 16-socket Linux server before, let alone a 1,024-CPU SMP server. A lot of ignorance (FUD?) from the Linux camp.
 

ATS

Member
Mar 9, 2015
A cluster can never replace an SMP server. For instance, you will never see an SGI server benchmarking SAP, because they cannot run SMP workloads. The largest Linux SMP server is an ordinary 8-socket x86 server; in fact, there have never existed Linux servers larger than 8 sockets. I invite anyone to post links to a larger Linux server, such as 16 sockets. Because large Linux servers do not exist, Linux scales badly on 8 sockets, and extremely badly on 16. HP experimented with Linux on their huge 64-socket Unix Integrity servers with bad results (about 40% CPU utilization under full load); google "HP Big Tux". IBM experimented with Linux on their 32-socket AIX Unix P795 server, with equally bad results.
SGI's UV 2000 and UV 300 scale to 256 and 32 sockets respectively as fully cache-coherent SMP systems. HP Superdome X is at 16 sockets. And Linux scales pretty decently to at least 8 sockets. It's fine to be a bit of a fanboy, but try not to go too overboard.

SMP servers are very difficult to build and they cost a great deal. For instance, the 32-socket IBM p595 Unix server behind the old TPC record cost $35 million. Yes, that's right: one single server. You could buy a very large cluster for that.
If you are referring to the 2008 TPC-C result for the 595, you might want to actually understand the pricing first. Total system cost was $17 million, not $35 million, and of that the actual server was $12 million, with $9 million of that in memory (4 TB of DRAM was expensive in 2008!).

Solaris has scaled to 144-socket servers for decades, and it scales extremely well on huge SMP servers. So, I am a Solaris fanboy. Btw, FreeBSD and OpenBSD are also very good OSes; if Solaris were closed, I would use BSD. Linux hacker Con Kolivas compared the Linux code to OpenSolaris, and he said the OpenSolaris code was far superior. Google "Con Kolivas blog OpenSolaris scheduler" to read his impressions.
It should probably also be noted that most of the high-socket-count SPARC machines were pretty much dogs. Sun had one top-tier large SMP machine, the UE10k, that they built their reputation on, but that machine wasn't even designed by Sun; it was actually designed by Cray.
 