LB1606R woes!


rthorntn

Member
Apr 28, 2011
Update: I think I have this figured out, leaving it up in case it's useful for someone else.

Hi,

I started building up an Ubuntu 20.04 server last night with eight used LB1606R drives. The drives are interesting, to say the least; "challenging" would be another word for them.

Five drives were OK out of the box.

The other three drives were not happy, reporting 0 capacity. I don't have any experience with SAS SSDs, so it was straight on to Google and STH...

Last night I ran:

Code:
sudo sg_format --format --size=512 /dev/sdx
on one of the drives, and it fixed it.
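For anyone following along: a way to confirm the format actually restored the capacity is `sg_readcap`, from the same sg3_utils package as `sg_format` (`/dev/sdx` is a placeholder device name). Since most readers won't have the drive on hand, this sketch parses a captured sample of the output instead of talking to real hardware; the exact wording may vary between sg3_utils versions.

```shell
# Sketch: verify reported capacity after sg_format (sg3_utils assumed installed).
# On real hardware you would run:  sudo sg_readcap /dev/sdx
# Here we parse a sample capture so the snippet runs anywhere.
SAMPLE='Read Capacity results:
   Last LBA=3125627567 (0xba4d4baf), Number of logical blocks=3125627568
   Logical block length=512 bytes'
BLOCKS=$(printf '%s\n' "$SAMPLE" | sed -n 's/.*Number of logical blocks=\([0-9]*\).*/\1/p')
echo $((BLOCKS * 512))   # usable capacity in bytes; a broken drive reports 0 here
```

A drive stuck at zero capacity will show `Last LBA=0` instead, which is the symptom the `sg_format` run cleared.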

Today I ran the same command on the other two: it fixed one, but it wouldn't even run on the second.

For the final stubborn drive, I had to run:

Code:
sudo sg_sanitize -B /dev/sdy
And it "fixed" it.

Now the only "issue" is that when I use fio to test the last two drives (I've not checked the other six), dmesg shows:

Code:
sdb: AHDI sdb1 sdb2 sdb4
sdb: p1 size 4044283343 extends beyond EOD, truncated
sdb: p2 size 2856871169 extends beyond EOD, truncated
sdb: p4 size 3357588611 extends beyond EOD, truncated
Code:
sdc: AHDI sdc1 sdc2 sdc4
sdc: p1 size 4044283343 extends beyond EOD, truncated
sdc: p2 size 2856871169 extends beyond EOD, truncated
sdc: p4 size 3357588611 extends beyond EOD, truncated
Huh? AHDI, p1, p2, p4... what partitions? I used fdisk to add a GPT with a single partition and wrote it, but the errors still pop up.

Is this anything to worry about? It's as if there are permanent partitions that can't be modified.
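For what it's worth, my understanding (hedged, not verified against the kernel source) is that the AHDI (Atari) partition scanner keys off byte patterns near the start of the disk, so raw writes there can look like a bogus partition table. One way to clear it is to zero the first few MiB and ask the kernel to re-read the partition table. The demo below writes to a scratch file so it's safe to run as-is; the commented lines show the real-disk equivalent (`/dev/sdb` is a placeholder; triple-check the device name, it's destructive).

```shell
# Demo on a scratch file; for a real disk, substitute the device and use the
# commented commands below (destructive - triple-check the device name!).
IMG=/tmp/ahdi-wipe-demo.img
dd if=/dev/zero of="$IMG" bs=1M count=8 2>/dev/null   # zero the first 8 MiB
wc -c < "$IMG"                                        # 8388608 bytes written

# Real-disk version:
#   sudo dd if=/dev/zero of=/dev/sdb bs=1M count=8 conv=fsync
#   sudo blockdev --rereadpt /dev/sdb   # ask the kernel to rescan partitions
```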

Cheers
Richard
 

rthorntn

Member
Apr 28, 2011
Actually, I've just discovered that
Code:
lsblk -a
shows the partitions:
Code:
sdb                         8:16   0   1.5T  0 disk
├─sdb1                      8:17   0   1.3T  0 part
├─sdb2                      8:18   0 354.2G  0 part
└─sdb4                      8:20   0 346.1G  0 part
Code:
sdc                         8:32   0   1.5T  0 disk
├─sdc1                      8:33   0   1.3T  0 part
├─sdc2                      8:34   0 354.2G  0 part
└─sdc4                      8:36   0 346.1G  0 part
 

rthorntn

Member
Apr 28, 2011
This worked for getting rid of the partitions:
Code:
sudo wipefs -a /dev/sd
lol, this next command seems to recreate the three partitions, probably because it's writing directly to the disk:
Code:
sudo fio --filename=/dev/sd --rw=write --bs=128k --ioengine=libaio --iodepth=32 --runtime=30 --numjobs=1 --time_based --group_reporting --name=throughput-test-job --direct=1
I do get:
Code:
sd 0:0:2:0: [sdb] tag#2160 Sense Key : Recovered Error [current]
sd 0:0:2:0: [sdb] tag#2160 Add. Sense: Defect list not found
every time I open smartctl on them. Hopefully it's nothing bad; this suggests it may be a bug in smartmontools with SAS disks.
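For context (my reading of the sense data, not a verified diagnosis): smartctl queries the defect list on SAS drives, and after a sanitize the list can come back empty, which the drive reports as "Defect list not found" with a Recovered Error sense key, i.e. a harmless condition. On a live system `sudo smartctl -a /dev/sdb` prints the grown-defect count (`/dev/sdb` is a placeholder); the snippet parses a captured sample line so it runs without the drive.

```shell
# On real hardware:  sudo smartctl -a /dev/sdb | grep -i defect
# Sample capture parsed here so the snippet runs anywhere:
SAMPLE='Elements in grown defect list: 0'
GROWN=$(printf '%s\n' "$SAMPLE" | sed -n 's/.*defect list: \([0-9]*\).*/\1/p')
echo "$GROWN"   # 0 grown defects suggests the media itself is fine
```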
 

mattster98

New Member
Dec 7, 2022
Code:
sdc: AHDI sdc1 sdc2 sdc4
Hey, sorry to resurrect this old thread, but I've got some brand-new drives exhibiting the same behaviour. I can write to them fine, but they constantly log lines just like this in dmesg/syslog, and those partitions are NOT on the drive. I have tried multiple drives in multiple slots of multiple machines, and they all behave the same way.

Did you ever resolve that particular part of the issue? I'm seeing slow write performance during Ceph backfill on this drive, even though testing it with dd before adding it to Ceph showed the drive was as speedy as you'd expect. That's really the only symptom I have: nothing bad in the logs other than the repeating reports of non-existent partitions. The partitions started being reported after the drive was added to Ceph, so it must be some sort of misinterpreted header on the drive? I'm assuming it's related to the slow write speed, but I'm really not sure.
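One thing worth checking (a hedged guess, not a confirmed diagnosis): Ceph issues lots of small synchronous writes, while a plain dd test is buffered and sequential, so a drive can look fast under dd and still crawl under Ceph if its sync-write latency is poor. dd's `oflag=dsync` gives a rough approximation of that workload. The demo below runs against a scratch file so it's safe anywhere; pointing it at the raw device instead is destructive, so only do that on an empty disk.

```shell
# Rough sync-write probe: 1000 x 4 KiB writes, each flushed before the next.
# Demo target is a scratch file; on a real (empty!) disk you would use
# of=/dev/sdX instead, which destroys data.
TARGET=/tmp/syncwrite-demo.bin
dd if=/dev/zero of="$TARGET" bs=4k count=1000 oflag=dsync 2>&1 | tail -n1
wc -c < "$TARGET"   # 4096000 bytes written
```

Compare the reported throughput against the same command without `oflag=dsync`; a dramatic gap on the raw device would point at sync-write latency rather than the phantom partitions.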
 

rthorntn

Member
Apr 28, 2011
I wasn't able to fix this. I came to the conclusion that a few of my drives had borked controllers, and I gave up on them.