I have an 1883ix-24-2GB and I've been disappointed with it given how expensive it was. I ended up adding a BBU and 8GB of RAM, which, along with the SAS cabling, brought the total to about $1500. Both the BBU and RAM upgrades died within a year: the RAM started frequently logging ECC errors in the web console, and the BBU suffered intermittent errors in the console as well — when I removed it, the battery was swollen.
3 of the channels coming from the SAS expander have started to suffer faults where disks on those channels will time out for >30 seconds once every couple of days, causing the RAID 0 they were part of to go offline. This started with a single channel and eventually progressed to 3. At first I thought my SSDs were dying, even though they had consumed only 1% of their rated write endurance, but moving them off ports 1-4 fixed the issue. When disks were connected to those failing ports, very frequently all 8 disk failure/identification lights on my SuperMicro M28SACB 2.5" disk backplane would come on. The SSDs were in a RAID 0 used as a VMFS5 datastore for VMware ESXi, and when this RAID went offline the Areca driver caused the ESXi host to purple screen. Reviewing the logs indicated the driver was broken:
WARNING: LinScsi: SCSILinuxProcessCompletions:920: Command 0x8a (0x439dd51705c0) to "vmhba1:C0:T0:L0" failed (looks like driver "arcmsr 1.30.00.02" is broken) with DID_BAD_TARGET - converting to DID_NO_CONNE
Their support was pretty useless. Areca's only response was "have you tried the next version?", because change logs or bug tracking databases apparently aren't a thing with them. I didn't really want to force my host to crash any more than it already had, so I didn't do any further testing to see if the 1.30.00.03 driver fixed it. Both SuperMicro and Areca blamed each other's products for all the failure lights turning on. No one knows what is causing it, although I'd be inclined to believe SuperMicro and say the RAID card is doing it — Areca claimed their card will only flash the failure lights when you use the identify disk/enclosure command, yet ever since I've had it, the failure light for a given SSD will occasionally flash ever so briefly, for a few milliseconds, when writes occur on it. Who knows why. It also doesn't help that no one at the company speaks English natively.
It is probably related to the SAS channel failures, but the expander chip is constantly overheating (>90°C). Due to the PCIe slot arrangement on my SuperMicro X9DAE motherboard and the fact that I'm running dual GPUs in SLI, I have to put the card in the slot closest to the bottom of the chassis (SuperMicro SC743), and the little 40mm fan on the card is inadequate and constantly getting clogged with dust. I eventually had to buy a fan controller so I could force the bottom 80mm chassis fan to run much faster than it normally would, to keep the expander around 85°C. If you ran this in a data center, where you could have the chassis fans on full blast and didn't have to deal with dust, this wouldn't be a problem, but 80mm 6K RPM fans are loud and not something I want to deal with at home.
The software itself is lacking. One of the major reasons I bought the card is that it did full volume encryption, which ESXi didn't, but although you can create an encrypted volume through the web console, you cannot download the encryption key through it — you have to use a Windows or Linux command line program, which is a hassle in a virtualized environment. And although there is an option to use a password as the encryption key, this functionality has never worked according to Areca, so you have to upload a key file via the web console every time the host reboots to unlock the volumes. I also have 8 HDDs in a RAID 6 attached to the card and configured scheduled volume scrubbing, but for whatever reason it never runs automatically, and once a month I have to log in and start it myself. The web console has no ability to authenticate against Active Directory, LDAP, or RADIUS, and is so poorly written that it won't allow symbol characters in passwords. Its SMART reporting is limited to a handful of parameters, when I can see much more if the HDDs or SSDs are plugged into a USB SATA dock. And the firmware for the expander can't be upgraded through the web console (although for the LSI 3108 RAID chip it can) — it requires a proprietary serial cable with an RJ-11 plug on one end.
From what I've read online, RMAing these cards can be a nightmare, as they need to go back to Taiwan. Since I'm only using 16 of the 28 channels on the card right now, I haven't bothered to RMA it, because I don't want my system to be down for ~1 month.