Proxmox learning build out of mostly used stuff.


EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
I assume those ATA numbers correspond to your WD drives...?

If so, I often get the same flavour of error messages from a system running WD Greens in RAID, despite SMART and badblocks not reporting any errors. From reading up on it, this seems to be a quirk of some firmwares on WD drives (probably the more desktop-oriented ones; IIRC I've not seen it from any of my WD Reds). I've never seen the error on any of my Hitachi or Toshiba drives (or any SSDs, for that matter), and the only times I've had it in conjunction with an array failure were a) a dodgy cable connection and b) a dodgy JMicron SATA controller.

Personally I wouldn't worry too much about it; I've assigned it to the "almost certainly a red herring" bucket since I've never been able to attribute it to a failure. But then I assume you have at least one backup that you've actually tested, and that you know not to trust advice from randoms on the internet...

Edited to add: I assume you've already checked the SMART attributes and you're not seeing any corresponding rise in drive error counts? Here's what one of my WD Greens looks like [and before anyone asks, the temp is high because a) it's in the middle of a RAID resync and b) it's 32°C here currently]:
Code:
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   201   192   021    Pre-fail  Always       -       8916
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       16
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   062   062   000    Old_age   Always       -       27790
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       16
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       13
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       111
194 Temperature_Celsius     0x0022   107   105   000    Old_age   Always       -       45
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0
All of those values at 0 are, TTBOMK, the ones you always want to stay at 0. Your drives should have similar attributes available for you to check.
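For anyone wanting to pull the same table themselves, output like the above is what smartctl from smartmontools prints — a quick sketch (the device path is just an example):
Code:
# Dump the vendor SMART attribute table (the same table as above):
smartctl -A /dev/sda

# Optionally kick off a long self-test and read the results back later:
smartctl -t long /dev/sda
smartctl -l selftest /dev/sda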
 
Last edited:
  • Like
Reactions: Klee

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
OK, I sold all of my E5-2660s, which forces me to take out the two 2660s and replace them with something else.

I had a choice of dual E5-2603s, dual E5-2620s, or a single E5-2643.

I went with the E5-2643 v1 because even though it has only 4 cores plus hyperthreading, its base speed is 3.3 GHz and it boosts up to 3.5 GHz.

So as long as I run just a few VMs, I should see better per-VM performance.

Plus I have not tested that CPU at all yet; if it works well I'll get another one and run two.

Also, I had to go back to 64 GB of RAM with the single socket.
 

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
First test is running my Xubuntu 16.04.4 VM with the same settings: 4 cores and 4 GB of RAM.

Seems a little snappier.
 

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Changed the Xubuntu 16.04.4 VM to 2 cores and 8 GB of RAM.

Syncing 4 blockchains, it stays below 50% on both cores all the time, and under 20% most of the time.

I think I'll go back to 4 GB of RAM and configure all my Ubuntu-ish VMs to use two cores and 4 GB of RAM or less.
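For reference, the same resizing can be done from the Proxmox shell with qm instead of the web UI — a minimal sketch, VM ID 100 assumed:
Code:
# Give VM 100 two cores and 4 GB of RAM (applies on the next VM restart):
qm set 100 --cores 2 --memory 4096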
 

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
A little update:

I'm running one Xubuntu VM with two cores and 8 GB of RAM, a Windows 10 VM with 4 cores and 8 GB of RAM, and a 32-bit Windows XP VM with one core and 1 GB of RAM.

All run just fine.
 

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
I just realized the E5-2643s have a TDP of 130 watts each. LOL
 
  • Like
Reactions: Tha_14

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Well, it died again...

So I will take out the old 250 GB WD hard drives and replace them with three 2 TB HGST drives either tonight or this weekend.

The good: twice the total capacity, newer enterprise drives.

The bad: I'll probably lose some performance.

EDIT: Also good, it's much easier on my back...
Also, it should use a bit less electricity.
 
Last edited:

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Swapping out the hard drives for the three 2 TB drives... I have a fourth somewhere but I can't seem to find it.

One of the old PWM Arctic Cooling fans was starting to make some noise, so I took all three out and replaced them with the three unused non-PWM OEM fans.


 
Last edited:

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Getting read errors on my install disk, so I'm re-downloading the Proxmox ISO.

I might just call it a night.
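If the next burn fails too, it's worth checksumming the download against the SHA256 value published on the Proxmox download page before blaming the media — the filename here is just an example:
Code:
# Compare this against the checksum listed on the download page:
sha256sum proxmox-ve_5.2-1.iso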
 

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Installed; now I'm copying my ISOs over.

The first install did not create a zpool for some reason, so I reinstalled and now it works.

Also, I did not have to add a delay to GRUB, since I did not get the "Message: cannot import 'rpool' : no such pool available Error: 1 Failed to import pool 'rpool'. Manually import the pool and exit." error.
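For anyone who does hit that error, the commonly suggested workaround is a boot delay so slow controllers can settle before ZFS tries to import rpool — a sketch of the usual GRUB tweak (10 s is a typical value, not gospel):
Code:
# In /etc/default/grub, give the disks time to appear before the rpool import:
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# Then regenerate the GRUB config and reboot:
update-grub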
 

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
I was wrong in thinking I would lose performance.

Code:
root@pve:~# dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB, 384 MiB) copied, 0.155791 s, 2.6 GB/s

Old hard drives x12: 1.4 GB/s
New hard drives x3: 2.6 GB/s

:D
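One caveat on those numbers: /dev/zero compresses to almost nothing if lz4 compression is enabled on the pool (Proxmox typically enables it on rpool by default), so writing pre-generated random data gives a more honest figure — a sketch, paths are just examples:
Code:
# Zeros compress away under lz4, so benchmark with incompressible data:
dd if=/dev/urandom of=/tmp/random.bin bs=384k count=1k
dd if=/tmp/random.bin of=/tmp/output conv=fdatasync bs=384k count=1k
rm -f /tmp/random.bin /tmp/output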
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
The newer high-density drives will give you much better sequential performance than less dense drives, but you're likely to see a drop in random IO (likely much more relevant to your use case) due to having fewer total spindles and thus lower overall available IOPS.
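If you want to put a number on the random-IO side, a quick 4k random-read run with fio (assuming it's installed; the parameters are just an example) shows the spindle-count effect far better than dd:
Code:
# 4k random reads for 30 s against a test file; make --size larger than
# RAM/ARC if you want caching taken out of the picture:
fio --name=randread --filename=/tmp/fiotest --size=4G \
    --rw=randread --bs=4k --ioengine=libaio \
    --runtime=30 --time_based --group_reporting
rm -f /tmp/fiotest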
 
  • Like
Reactions: T_Minus

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
EffrafaxOfWug said:
The newer high-density drives will give you much better sequential performance than less dense drives, but you're likely to see a drop in random IO (likely much more relevant to your use case) due to having fewer total spindles and thus lower overall available IOPS.

I really was expecting it to be slower everywhere, but if it's slower for some reads and writes and faster for others, it's probably OK for my use.
 

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Small update: I just ordered a second used E5-2643 off eBay for it.

So I'll soon have 8 cores and 16 threads with a max boost of 3.5 GHz. I paid, I think, $11.00 for the first one and $49.00 for the second.

Not bad for ~$60.00. :)

But a TDP of 130 watts each. :eek: LOL

Next upgrade is another 36 GB of RAM to put me at 128 GB total.
 
  • Like
Reactions: Tha_14

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Klee said:
Small update: I just ordered a second used E5-2643 off eBay for it.

So I'll soon have 8 cores and 16 threads with a max boost of 3.5 GHz. I paid, I think, $11.00 for the first one and $49.00 for the second.

Not bad for ~$60.00. :)

But a TDP of 130 watts each. :eek: LOL

Next upgrade is another 36 GB of RAM to put me at 128 GB total.

Did the upgrade to the second 2643 Xeon the last weekend of September; so far it has handled everything I have thrown at it.
 

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Small update:
This thing has been solid with the HGST hard drives; the only thing I have done to it since upgrading to the second CPU was install the latest version of Proxmox.
 
  • Like
Reactions: Patrick

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Well, after working flawlessly for about 11 months with the three HGST 2 TB drives, I was walking past it a few minutes ago and heard a hard drive thrashing sound. :(

Logged in and it was up; no VMs were running and the box was just idling. I started a scrub, and now it's unresponsive and spitting out timeout errors by the bucketful:

"zvolume blocked for more than 120 seconds"

I have had a monitor and keyboard hooked up since I did the upgrade to version 6.0 a week ago, and it's totally unresponsive to any command, including stopping the scrub, and the web login is dead.

Hit reset and it rebooted just fine; now I'm rerunning the scrub.
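For reference, the scrub wrangling here is all stock zpool commands — pool name assumed to be the Proxmox default rpool:
Code:
zpool scrub rpool        # start a scrub
zpool status rpool       # check progress and any errors found
zpool scrub -s rpool     # stop an in-progress scrub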
 
Last edited:

Klee

Well-Known Member
Jun 2, 2016
1,289
396
83
Web page is back up, and the scrub is at 7.14%... fingers crossed.

Just a reminder: there is zero important data that would be lost if it does fail.

Just a testing/fun VM server.
 
  • Like
Reactions: Tha_14