Adaptec 71605 RAID card - $70 (OBO)


james23

Active Member
Nov 18, 2014
This is a great RAID card (SAS2, but it has 4x ports, and they're mini-SAS HD ports, FYI).

I have the 71685 (among many other Adaptecs), which is about the same card, and am seeing excellent benchmarks with HGST SSDs (I'll be posting those in a different thread in a day or 2 when I'm done).

It should be fine for ESXi 6.5U2 (probably 6.7 too) CIM guest monitoring (it definitely works with the normal ESXi volume driver). I have an older 6805 on 6.5U2, and getting CIM / monitoring in a guest was a pain, but I got it working. According to Adaptec support, their Series 7 and 8 cards are much better/easier with CIM support in ESXi.

I have a $45 offer in on this listing and will post the accepted price ($45 would be an amazing price).

I love Adaptec cards specifically, as they allow you to create tiered RAIDs (or stacked RAIDs), meaning you can take a set of 4x disks and carve out a RAID6 volume AND a 2nd RAID0 volume from them, as an example. (You can't do this with LSI.)
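To make the tiered-RAID idea concrete, here is a back-of-the-envelope capacity sketch in Python; the disk count and the 2TB/2TB split are made-up numbers, purely for illustration:

```python
# Hypothetical example: 4x 4TB disks, each split into a 2TB slice for a
# RAID6 volume and a 2TB slice for a RAID0 volume on the same spindles.
def raid6_usable(n_disks: int, slice_tb: float) -> float:
    return (n_disks - 2) * slice_tb   # RAID6 spends 2 disks' worth on parity

def raid0_usable(n_disks: int, slice_tb: float) -> float:
    return n_disks * slice_tb         # RAID0 is pure striping, no redundancy

disks = 4
print(f"RAID6 tier: {raid6_usable(disks, 2.0):.0f} TB usable")  # -> 4 TB
print(f"RAID0 tier: {raid0_usable(disks, 2.0):.0f} TB usable")  # -> 8 TB
```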

Also, that is a capacitor on the wire in the pic (not a PITA battery). (Series 7 cap kits sell for $70-120 ALONE on eBay.)

ASR-71605 Adaptec 16 Port 2274400-R SAS SATA 6Gbps 1GB PCIe RAID Controller Card | eBay

EDIT: another seller with the CARD only (no cap backup unit; I would go for the cap-backup one above, though). Wow, some big host/company must have moved to SDS recently! $60 OBO:

ASR-71605 Adaptec 16 Port SAS/SATA 6Gbps PCIe x8 3.0 Raid Controller 1GB HP | eBay

FWIW, I really like Adaptec's naming scheme ... (YOU HEAR THAT, LSI!)
(series), (internal ports), (external ports), (trailing 5 for something?)
i.e. 7 16 8 5
or 6 8 0 5
or 5 16 4 5
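As a toy illustration of that scheme, a few lines of Python can decode the model numbers above. The digit-grouping rule is just my reading of the pattern (nothing official from Adaptec):

```python
# Decode an Adaptec model number into (series, internal ports, external ports).
# Assumes the trailing "5" is a constant suffix, per the pattern above.
def decode_adaptec(model: str) -> dict:
    assert model.endswith("5"), "these models all end in 5"
    series, body = model[0], model[1:-1]   # "71685" -> "7", "168"
    # internal port counts seen are 4/8/16/24; treat 16/24 as two digits
    if body.startswith(("16", "24")):
        internal, external = body[:2], body[2:]
    else:
        internal, external = body[:1], body[1:]
    return {"series": int(series), "internal": int(internal), "external": int(external)}

for m in ("71685", "6805", "51645", "71605"):
    print(m, decode_adaptec(m))   # 71685 -> series 7, 16 int, 8 ext, etc.
```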
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
Maybe for HW RAID it works just fine. I had a 71605H (the HBA version) and recall having some problems with it when more than 12 drives were connected; when I was searching for a fix, it seemed like many others had the same issue. I later found I had a flaky backplane, so it might have been the backplane, but I lost confidence in the 71605H and moved to an LSI-based HBA.
 

am45931472

Member
Feb 26, 2019
What is the feasibility of using these cards in a FreeNAS ZFS setup? I am really only knowledgeable about the LSI IT-flashed cards for FreeNAS. Are these cards usable as non-RAID, and what is their FreeBSD compatibility like for ZFS? Any help would be appreciated. The SAS 9300 cards are just so expensive.
 

james23

Active Member
Nov 18, 2014
What is the feasibility of using these cards in a FreeNAS ZFS setup? I am really only knowledgeable about the LSI IT-flashed cards for FreeNAS. Are these cards usable as non-RAID, and what is their FreeBSD compatibility like for ZFS? Any help would be appreciated. The SAS 9300 cards are just so expensive.
They can be set to RAID or RAW (or hybrid mode), and I can tell you that in either mode the card passes the disks straight through to the Windows OS (and you can get SMART data in any mode, including with a RAID volume active).

However, the FN forums (and FN experts) are super clear that you should never use anything but an HBA when passing drives to FN/ZFS, so this would be the same as using an LSI card with IR FW (and just not running any RAID on the card) - it's strongly not suggested. (I know you're trying to get 4x or more ports on a single card for 16 or more drives, and it may work fine, but they are pretty emphatic on the FN forums to use an HBA; I use a true LSI HBA on FN, FWIW.) This card is better for direct-attached RAID on ESXi, or on Windows (or other OSes) as a HW RAID card, IMO.

On FN it's probably better to use an LSI HBA + an expander, or an expander backplane. (If you need the speed, use a SAS3 HBA and a SAS3 expander / expander BP - rough math below.)
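Quick sketch of the uplink math behind that SAS3 suggestion. The lane rates and 8b/10b encoding are spec values; the ~250 MB/s per-HDD figure is an assumption for illustration:

```python
# SAS2 (6Gb/s) and SAS3 (12Gb/s) both use 8b/10b encoding, so usable
# throughput per lane is line_rate * 0.8; a mini-SAS port is 4 lanes wide.
def port_gb_per_s(lane_gbps: float, lanes: int = 4) -> float:
    return lane_gbps * 0.8 * lanes / 8    # -> GB/s usable per x4 port

sas2, sas3 = port_gb_per_s(6.0), port_gb_per_s(12.0)   # 2.4 / 4.8 GB/s
demand = 16 * 0.25   # 16 HDDs at ~250 MB/s sequential each = 4.0 GB/s
print(f"SAS2 x4 uplink: {sas2:.1f} GB/s, SAS3 x4 uplink: {sas3:.1f} GB/s")
print(f"16-drive demand ~{demand:.1f} GB/s: saturates SAS2, fits under SAS3")
```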
 

nthu9280

Well-Known Member
Feb 3, 2016
San Antonio, TX
@am45931472
Series 7 & up can be set as HBA in the BIOS / CLI. There are a couple of threads here on the 78165. IIRC, someone reported in one of those threads that drive changes were not detected and required a full reboot. Maybe @james23 or @i386 can validate? Could be a combination of OS, FW, driver, etc. at that time.

Speaking of FreeNAS, one of the 216 chassis I bought a couple of years ago had an iXsystems logo, and the system came with an ASR-72405 with cache & supercap. It's possible they may have used that box for something else, but I thought it was interesting. I sold the card on eBay to recoup some of the cost.
 
  • Like
Reactions: james23

Craash

Active Member
Apr 7, 2017
I have two of these in different workstations, and I also think they are great. I just submitted an offer of $50 for another; heck, I might even go for two more.
 

kapone

Well-Known Member
May 23, 2015
IIRC, someone reported in one of those threads that drive changes were not detected and required a full reboot
Drive changes are detected just fine. Not sure if the person who mentioned this was running an old/custom (i.e. some vendor's) firmware. With the latest stock firmware, they work just fine in HBA or RAID mode.
 

jpk

Member
Nov 6, 2015
I have a 72405 that I have used in HBA mode w/ ZFS; it recognized drives being hot-swapped out and in without a problem, and it seemed to pass the bare drives on to ZFS. I did have to update to the latest firmware to get the HBA mode, but that was pretty easy w/ the arcconf util.
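For anyone poking at the same thing, here is a minimal sketch of checking the controller mode from Python. `arcconf getconfig <ctrl#> ad` is a standard arcconf call, but the exact "Controller Mode" label in the output is an assumption on my part and can vary across arcconf/firmware versions:

```python
# Minimal sketch: read an Adaptec controller's mode via the arcconf CLI.
# Assumes arcconf is on PATH and the controller is #1.
import subprocess

def controller_mode(ctrl: int = 1) -> str:
    out = subprocess.run(
        ["arcconf", "getconfig", str(ctrl), "ad"],   # "ad" = adapter info only
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Controller Mode" in line:   # e.g. "Controller Mode : HBA" (assumed label)
            return line.split(":", 1)[1].strip()
    return "unknown"

print(controller_mode())
```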
 

Craash

Active Member
Apr 7, 2017
I have two coming at $55 each. The last one I purchased several months ago at ~$80, and my first one (which is a Q) I bought at retail when they first came out.
 

Craash

Active Member
Apr 7, 2017
How about the ASR-71605Q in comparison to the ASR-71605? Is MaxCache a useful feature, given that similar functionality is available in ZFS/Storage Spaces?

I also wonder if the following one is the ASR-71605Q, since it has the caps.
ASR-71605 Adaptec 16 Port 2274400-R SAS SATA 6Gbps 1GB PCIe RAID Controller Card | eBay
@Bert, I think I only really tried to mess with MaxCache once and was disappointed to see it took a minimum of 2 SSDs in a mirror. I understand the caution, but for my setup, where I care more about performance than data security, I would have preferred at least the option. As I've moved to NVMe I have a decent collection of SSDs sitting around; I might throw a few in and see what happens.

Right now my 71605 has five 10TB white-label drives on it.
 
  • Like
Reactions: Bert

i386

Well-Known Member
Mar 18, 2016
Germany
Is MaxCache a useful feature, given that similar functionality is available in ZFS/Storage Spaces?
It's useful if you have enough IO (queue depth >8) or multiple parallel sequential workloads.
The limiting factor is the throughput of the SSDs used (even when your HDD array has higher throughput).
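A toy model of that ceiling (all numbers invented for illustration):

```python
# The cache tier caps effective throughput at the SSDs' speed, no matter
# how fast the HDD array behind it is. Figures below are made up.
hdd_array_mbps = 8 * 180    # e.g. 8 striped HDDs ~ 1440 MB/s
ssd_cache_mbps = 500        # 2 SATA SSDs mirrored: writes land on both, ~1 SSD's speed
print(f"cached-path throughput ~ {min(hdd_array_mbps, ssd_cache_mbps)} MB/s")
```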
 
  • Like
Reactions: nikalai and Bert