Intel S3700 400gb $50 OBO


Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
NVMe ports will have a separate connector, either SFF-8611/OCuLink or SFF-8643, which will need to be wired to an NVMe-capable HBA (e.g. LSI 9400), to an NVMe-to-PCIe adapter card (e.g. AOC-SLG3-4E4T), or to onboard OCuLink ports.

You will not need a SAS3-216A, as those ports are wired straight through from the drive to the outward-facing connector and will carry SAS3 signals. The only difference is the connector at the HBA end. This also applies to the TQ variant, but not to the expander backplanes (just for completeness' sake).
 

zack$

Well-Known Member
Aug 16, 2018
701
315
63
Just got done running a test on 8 x S3700 (400GB) in 4 x 2 (mirrored vdevs). The system is an Inventec B420 board with 32GB ECC UDIMM and an E3-1265L v3.

Code:
[root@freenas ~]# dd if=/dev/zero of=/mnt/S3700/TEST/test.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 15.250906 secs (1375099957 bytes/sec)
[root@freenas ~]# dd if=/dev/null if=/mnt/S3700/TEST/test.dat bs=2048k count=10000
dd: if: illegal argument combination or already set
[root@freenas ~]# dd of=/dev/null if=/mnt/S3700/TEST/test.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 5.400691 secs (3883118018 bytes/sec)
I am running the S3700s off 2 x SATA3 and 6 x SAS2 (LSI 2008, I believe P16 IT firmware); one of the SAS cables is kinked, so I have to replace it. The drives are in Icy Dock bays (MB326SP-B and MB994SP-4SB-1).
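As a quick sanity check, dd's reported rates can be reproduced from the byte count and elapsed times it prints (numbers taken from the output above):

```shell
# Recompute dd's bytes/sec figures from its own output:
# 20971520000 bytes in 15.250906 s (write) and 5.400691 s (read).
awk 'BEGIN {
  bytes = 20971520000
  printf "write: %.0f bytes/sec (~%.2f GB/s)\n", bytes / 15.250906, bytes / 15.250906 / 1e9
  printf "read:  %.0f bytes/sec (~%.2f GB/s)\n", bytes / 5.400691,  bytes / 5.400691 / 1e9
}'
```

This lines up with the ~1.38 GB/s write and ~3.88 GB/s read figures in the transcript.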

Expecting 2 more S3700s and will run a test in 5 x 2 (mirrored vdevs).
 

zxv

The more I C, the less I see.
Sep 10, 2017
156
57
28
Jumbo frames will help.

I'm not sure what the status of NFS 4.1 is in FreeNAS, and whether it supports multiple connections.
Between ESXi and Linux, using multiple connections can improve IOPS and bandwidth, even over a single physical link.
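On the Linux client side, one way to get multiple TCP connections over a single link is the `nconnect` mount option (available since kernel 5.3). A sketch, where the server name and export path are placeholders, not from this thread:

```shell
# Mount an NFS 4.1 export with 4 TCP connections to the server.
# "freenas.local" and the paths are hypothetical examples.
mount -t nfs4 -o vers=4.1,nconnect=4 freenas.local:/mnt/tank/share /mnt/nfs
```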
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
Jumbo frames will help.

I'm not sure what the status of nfs 4.1 is in freenas, and whether it supports multiple connections.
Between ESXI and linux using multiple connections can enhance iops, bandwidth, even over a single physical link.
NFS 4.1 works with FreeNAS now (the read-only issue is solved, though I forget with which update)... I'm on FreeNAS-11.2-U3, and yes on multiple connections.

@svtkobra7 wishes he had gotten in on this too; it's tough to optimize 120 TB raw for speed (and he likes meaningless synthetic benchmarks / graphs) without an unlimited budget ...
  • NFS41 on zpool = RaidZ 3x4x10TB w/ 20GB Optane vDisk as SLOG (dataset = 64K recordsize)
  • Tested with a Win10 VM 8 vCPU / 8 GB RAM / Secondary Disk


  • IOmeter tests = 3 run avg
  • 4K Random Performance (presented by QD 1-1024)[1]
    • 8 Workers | 1 test per QD, 3 min each test
    • Reads @ 100% Reads / Writes @ 100% Writes / Both @ 100% Random
  • Random Performance (presented by test size 512 B - 64 M)
    • 8 Workers | 1 test per Size, 3 min each test
    • Reads @ 100% Reads / Writes @ 100% Writes / Both @ 100% Random
[Chart: 4K Random Performance]

[Chart: Random Performance]

[1] QD > 32 is meaningless for spinning disks; the same IOmeter config was used for testing NVMe as well.
 

zack$

Well-Known Member
Aug 16, 2018
701
315
63
The FreeNAS box is on a 4 x 10GbE LAGG (LACP, Chelsio T440LP) to a CRS317 and out to the ESXi box's Solarflare 7122F; all on jumbo frames.

FWIW, iperf between the VM and FreeNAS is 9.2 Gbps, which means the reads are currently maxed. However, the drives locally are reading at 3x that, while reads over the wire are 11 Gbps.
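To put the iperf and dd numbers on the same scale (a rough conversion, dividing bits/sec by 8):

```shell
# Compare the single-stream iperf rate against the local dd read rate.
awk 'BEGIN {
  iperf_gbps = 9.2
  local_read = 3883118018   # bytes/sec from the dd read test above
  printf "iperf:      %.2f GB/s\n", iperf_gbps * 1e9 / 8 / 1e9
  printf "local read: %.2f GB/s (%.1f Gbps)\n", local_read / 1e9, local_read * 8 / 1e9
}'
```

The local read works out to roughly 31 Gbps, i.e. about 3x the 9.2 Gbps single-stream line rate, consistent with the claim above.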

Overhead for sync writes is taken into account, with no SLOG(s). And this is purely 4 x 2 x 400GB S3700s.

Will try NFS 4.1 to improve IOPS, though.
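On the ESXi side, an NFS 4.1 datastore can be added with multiple server addresses for session trunking. A sketch only; the IPs, share path, and datastore name below are placeholders, not from this setup:

```shell
# Add an NFS 4.1 datastore over two server IPs (hypothetical addresses/paths).
esxcli storage nfs41 add -H 10.0.0.10,10.0.0.11 -s /mnt/tank/share -v s3700-nfs41
```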
 

zack$

Well-Known Member
Aug 16, 2018
701
315
63
Same here.
Likely the STH effect in play here... the seller sees a bunch sell at the STH-posted OBO, then realises he can raise his price :eek:.

Saw it happen with these recently: HGST SSD1600MM HUSMM1640ASS201 - SSD - Solid State Drive - 400 GB - SAS 12Gb | eBay

FWIW, I got 2 x $45 on the counter-offer; the seller came back at $49.95 on an OBO of $40. Sorry if I didn't state this earlier.

After posting to STH I got one at $45 on an OBO, no counter-offer. After that, they were gone. I think the seller has since posted 15 more, but those are all gone too.
 
  • Like
Reactions: Samir

frogtech

Well-Known Member
Jan 4, 2016
1,480
270
83
35
Would $60 ea work? ;)
I currently have a pending offer with a seller for $55/ea and am anticipating an even better deal via direct invoice.

Send me a pm if you'd like to work out a deal.
 
Last edited:
  • Like
Reactions: Samir

frogtech

Well-Known Member
Jan 4, 2016
1,480
270
83
35
  • Like
Reactions: Samir and zack$

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,624
2,043
113
They also don't perform as well and seem to heat up more. I got some years ago, plus adapters; way, way, way not worth the money, let alone the dip in performance vs. the 2.5" drives.
 
  • Like
Reactions: Samir and zack$