p3700 2tb for $599 @ ebay Refurb


gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
These tanked in price a few months ago.

NewEgg has NIB 800GB for $399 even.


The performance-per-dollar on storage is rather amazing now.
So with two of these I could do raid 0 for e-peen purposes? :) And for less than 800 bucks.

That’s great that they’re starting to get more reasonable. They’d make great database drives. I’m doing a lot more DB work these days.
 

dbTH

Member
Apr 9, 2017
149
59
28
With a rated lifetime of 35 PB of writes, having 95% of life remaining works out to something like 1.75 PB written. Totally fine for me: my use case would be heavily read-biased, I'd be unlikely to hit the lifetime write threshold anytime soon, and I'd rather have the space than anything else.
If your workload is not write-intensive, it's more cost-effective to get an Intel DC P3600 (3 DWPD) or Samsung PM1725(a) (5 DWPD). You could get two 1.6TB P3600s, for a total of 3.2TB, for between $600 and $800. That gives you more capacity at almost the same cost.
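The endurance and cost figures in this thread can be sanity-checked with a little arithmetic. A minimal sketch, assuming the commonly cited 10 DWPD rating for the P3700 and a 5-year warranty term; prices are the thread's examples, not current quotes:

```python
# Rated endurance in PB: drive-writes-per-day * capacity_TB * days * years / 1000.
def endurance_pb(dwpd, capacity_tb, years=5):
    return dwpd * capacity_tb * 365 * years / 1000

# P3700 2TB at 10 DWPD over a 5-year warranty:
p3700 = endurance_pb(10, 2.0)          # 36.5 PB, close to the 35 PB cited above

# Two 1.6TB P3600s at 3 DWPD:
p3600_pair = 2 * endurance_pb(3, 1.6)  # ~17.5 PB combined

# Using only 5% of a 35 PB lifetime:
writes_used = 0.05 * 35                # 1.75 PB, matching the post

# Cost per TB (thread prices; $700 is the midpoint of the $600-800 range):
print(599 / 2.0, "$/TB for the P3700")     # 299.5
print(700 / 3.2, "$/TB for 2x P3600")      # 218.75
```

So the P3600 pair wins on capacity per dollar, while the P3700 keeps roughly twice the total write endurance.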
 
  • Like
Reactions: gigatexal

nk215

Active Member
Oct 6, 2015
412
143
43
50
In a multi-user environment, the P3700 runs circles around the 970 EVO. Performance consistency under heavy load is where Intel's enterprise drives earn their cost.
 

Craash

Active Member
Apr 7, 2017
160
27
28
I'm putting together a new ESXi build, and my plan was to go with four 1TB Samsung 970s in a Highpoint SSD7101A in a RAID 0 config. Drive performance is my limiting factor, so I'm more concerned about performance than anything else. I have adequate backups of all the VMs, so I'm aware of and accept the possibility of data loss. For my use case, it's about performance. I currently run a mix of 'Nix and Windows VMs; the 'Nix machines are all servers providing services such as WordPress, email, Plex, etc. Two of the Windows machines support users.

After reading this thread, it seems like I might come out ahead (performance-wise) by switching my plans to two of these drives instead of the Highpoint setup. That it would be ~$600 cheaper is a nice plus, but from a pure performance standpoint in this use case (ESXi VMs), what say you?
 

Rand__

Well-Known Member
Mar 6, 2014
6,633
1,767
113
From a pure performance point of view, it will depend on how much I/O you have.
(Relatively) short bursts will likely be faster on your 970 RAID. If you keep hammering the drives with writes larger than the cache, or for extended periods, then the P3700s will be able to sustain performance much longer.
 
  • Like
Reactions: Craash

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,639
2,056
113
Tom's Hardware has a 970 EVO article; look at the steady-state results, then look at the S3710, P3700, etc.
 
  • Like
Reactions: Craash

Craash

Active Member
Apr 7, 2017
160
27
28
Tom's Hardware has a 970 EVO article; look at the steady-state results, then look at the S3710, P3700, etc.
So, by this, multiple VMs located on the same drive would be better off on the P3700, or am I missing something?
 

Rand__

Well-Known Member
Mar 6, 2014
6,633
1,767
113
Most likely (again, depending on I/O behaviour).

A problem might be that there is currently no way to RAID two PCIe/AIC P3700s (that I know of), but somebody else can enlighten me?

You could do it with the Highpoint controller if you got U.2 ones and then used M.2 -> U.2 adapter cables.
 
  • Like
Reactions: Craash

Craash

Active Member
Apr 7, 2017
160
27
28
Most likely (again, depending on I/O behaviour).

A problem might be that there is currently no way to RAID two PCIe/AIC P3700s (that I know of), but somebody else can enlighten me?

You could do it with the Highpoint controller if you got U.2 ones and then used M.2 -> U.2 adapter cables.
I had considered this and decided I'd just do two 2TB datastores instead of one 4TB. I'm OK with that. The Highpoint is $400, which makes it a tad expensive just to gain a single datastore. A more valid consideration, at least for me, is that if for whatever reason I decided to discontinue my home lab, I'd have more use for the 970s than for the P3700s. But that's unlikely, and really offset by the savings.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
Debating a handful of RAIDed consumer-grade drives vs. this older enterprise-grade PCIe drive is a no-brainer to me: I'd get this drive or one like it in a heartbeat. But everyone has an opinion, and that's fine. The point of the thread is: if you're in the market for a drive like this, this is a steal. If you want to go with the consumer-grade RAIDed setup, which I think is needlessly complex, so be it.

That being said, did anyone buy one of these P3700s, or go with something newer, say a 3605?
 

Rand__

Well-Known Member
Mar 6, 2014
6,633
1,767
113
Wasn't the P3605 two P3600s in a single package? Newer drives are the P4500 or P4800X, IIRC.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,639
2,056
113
Debating a handful of RAIDed consumer-grade drives vs. this older enterprise-grade PCIe drive is a no-brainer to me: I'd get this drive or one like it in a heartbeat. But everyone has an opinion, and that's fine. The point of the thread is: if you're in the market for a drive like this, this is a steal. If you want to go with the consumer-grade RAIDed setup, which I think is needlessly complex, so be it.

That being said, did anyone buy one of these P3700s, or go with something newer, say a 3605?
Neither, mostly optane lately.

Was the p3605 not two p3600s in a single package? Newer drives are p4500 or p4800x iirc
I think that was the P3608? Too many random numbers, not all on one sheet, to keep track of...
 
  • Like
Reactions: gigatexal

Craash

Active Member
Apr 7, 2017
160
27
28
Debating a handful of RAIDed consumer-grade drives vs. this older enterprise-grade PCIe drive is a no-brainer to me: I'd get this drive or one like it in a heartbeat. But everyone has an opinion, and that's fine. The point of the thread is: if you're in the market for a drive like this, this is a steal. If you want to go with the consumer-grade RAIDed setup, which I think is needlessly complex, so be it.

That being said, did anyone buy one of these P3700s, or go with something newer, say a 3605?
@gigatexal Needlessly complex how?
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
@gigatexal Needlessly complex how?
Take some random card that has multiple 2280 ports on it, with a PCB that does PCIe 3.0, and that can, without any weirdness, present 2 to 4 independent drives to the host so that you can do as you please. Or it bonds them all in some RAID 10 or RAID 0 scenario, which I just can't stomach. I'd rather keep it very simple and let my storage subsystem (say, ZFS) handle the rest, and it doesn't get simpler than a drive like the P3700, where the whole unit has been QA'd by Intel and presented as one device. Never mind the fact that it has power-loss protection and much stronger NAND.
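Letting ZFS do the striping instead of a bonding card is a couple of commands. A sketch only; the device names and pool name are hypothetical and depend on your system:

```shell
# Hypothetical device names; a striped pool over two NVMe drives,
# letting ZFS handle the "RAID" instead of a HW bonding card.
zpool create -o ashift=12 tank /dev/nvme0n1 /dev/nvme1n1

# Or mirrored, trading half the capacity for redundancy:
# zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1

zpool status tank
```

Either way each drive stays individually visible to the OS, so a failed drive or a pool rebuild is handled by ZFS rather than opaque controller firmware.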
 

Craash

Active Member
Apr 7, 2017
160
27
28
Take some random card that has multiple 2280 ports on it, with a PCB that does PCIe 3.0, and that can, without any weirdness, present 2 to 4 independent drives to the host so that you can do as you please. Or it bonds them all in some RAID 10 or RAID 0 scenario, which I just can't stomach. I'd rather keep it very simple and let my storage subsystem (say, ZFS) handle the rest, and it doesn't get simpler than a drive like the P3700, where the whole unit has been QA'd by Intel and presented as one device. Never mind the fact that it has power-loss protection and much stronger NAND.
I'm not sure that clears up the "unnecessary complexity" for me, at least for the use case I outlined, but this isn't the place for that conversation - as you alluded to earlier. If it helps your stomach, I am leaning towards the P3700 and I appreciate your input.
 
  • Like
Reactions: gigatexal

e97

Active Member
Jun 3, 2015
323
193
43
To pile onto the 970 EVO vs. P3700 debate: the P3700's average latency is 20µs; the 970's is 120µs. So while the 970 can move more data in bulk, it is technically 6x slower to respond.

This matters when queue depth starts to climb. What we found was that with 970s we still had queue depths of 5-8 under SQL Server workloads. We moved to an Optane and a P3560 (a comparatively slow drive, but with 30µs latency) and the queue depths vanished. Same workloads, much better response times; the latency drop was enough to clear the queues.

Large sequential reads/writes really only matter if this is for digital editing or bulk storage. Otherwise you are probably doing a lot of random reads and writes.
+1

Home/small-business use cases tend to be sequential reads/writes: media, gaming, photo/videography, etc.
Machine learning/AI tends to be sequential reads and sequential writes.

Enterprise workloads tend to be databases: random 4K.
Patching/updates can be either (depends on the algorithm and content), but are generally more random 4K than sequential.

Append-only database technologies have recently become popular again, and they tend to work better with sequential access: you want BBU + sequential read and write for the fastest performance.

Once this hits $400-500 it's a no-brainer. Until then, continue the dogpile :p
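The latency/queue-depth relationship described above follows Little's law: average queue depth = arrival rate × average latency. A minimal sketch; the 120µs/20µs latencies are from the posts above, while the 50k IOPS workload is an assumed number for illustration:

```python
# Little's law: mean outstanding I/Os = IOPS * mean service latency (seconds).
def avg_queue_depth(iops, latency_us):
    return iops * latency_us / 1_000_000

# Same assumed 50k IOPS workload at consumer vs. enterprise latency:
qd_970   = avg_queue_depth(50_000, 120)  # 6.0 -- in the 5-8 range seen on the 970s
qd_p3700 = avg_queue_depth(50_000, 20)   # 1.0 -- queues effectively vanish
```

The takeaway matches the anecdote: at the same throughput, a 6x latency drop shrinks the standing queue by the same factor.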
 
  • Like
Reactions: gigatexal