Solaris ZFS: Attached new disk for mirror, but now see messages about logical block alignment


sundaydiver

New Member
Aug 24, 2015
I have an AIO (all-in-one) server running Solaris 11.4 for ZFS, which has worked normally for years with the pool on a single Intel S3700 SSD.
I noticed that S3710 SSDs are now very cheap, so I bought two to add redundancy and grow the existing pool (add 2 x 800GB SSDs and remove the existing 400GB SSD).

I mounted the new SSDs in a Windows box to check and update the firmware, then attached them to the Solaris host.
I successfully attached one of the new S3710 SSDs to the existing pool to create a mirror, and it resilvered without issue.

However, I have since noticed the following messages in the system log relating to the new S3710 SSD, so I am now reluctant to attach the second one in case there is a problem.

How do I fix this? I assumed ZFS would automatically partition everything properly during the attach process, since it does recognise the drive as 4K. But because I created a volume while it was attached to Windows, has that messed it up for ZFS?

Any hints would be appreciated.

Code:
Jun 20 00:36:19 sammy-solaris genunix: [ID 936769 kern.notice] sd5 is /scsi_vhci/disk@g55cd2e404c0c4e45
Jun 20 00:36:19 sammy-solaris cmlb: [ID 541439 kern.notice] NOTICE: /scsi_vhci/disk@g55cd2e404c0c4e45 (sd5):
Jun 20 00:36:19 sammy-solaris Partition 0 starts at logical block address of 34 which is not aligned to physical block size of 4096 bytes on this disk. This can cause I/O performance degradation on this disk.



Partition details seem to match between both disks in the pool:
[screenshot: partition tables of both pool disks]
 


gea

Well-Known Member
Dec 31, 2010
I suppose the problem is that the old SSD has a physical blocksize of 512B and the new one 4K. That combination does not work in a replace/add situation; only the other way round is possible (adding a 512B disk to a 4K one, with a slight loss of capacity).

Workaround:
Create a new pool from the new SSDs with ashift=12 and replicate the data.
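A minimal sketch of that workaround, assuming placeholder pool and device names (note: the -o ashift=12 syntax is OpenZFS; Solaris 11.4 has no ashift property and derives it from the physical block size the disk reports):

Code:
# Create the new mirror pool on the two new SSDs
zpool create -o ashift=12 newpool mirror c1t1d0 c1t2d0   # OpenZFS syntax

# Replicate all datasets, snapshots and properties from the old pool
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool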
 

sundaydiver

New Member
Aug 24, 2015
I suppose the problem is that the old SSD has a physical blocksize of 512B and the new one 4K. That combination does not work in a replace/add situation; only the other way round is possible (adding a 512B disk to a 4K one, with a slight loss of capacity).

Workaround:
Create a new pool from the new SSDs with ashift=12 and replicate the data.
Thanks Gea for the response. I am not sure I understand, though: both the old and new disks report 512 bytes logical / 4096 bytes physical, and the existing pool is already set to ashift=12.

In any case, I will try creating a fresh new pool and see if it changes the outcome.
What is the most complete way to synchronise all data and settings from the old pool to the new one?


[screenshots: both disks report 512B logical / 4096B physical sectors; pool property shows ashift=12]
 

ericloewe

Active Member
Apr 24, 2017
I assumed ZFS would automatically partition everything properly during the attach process,
No, you told ZFS to use a device, and it tried to do so. It could use a bit more intelligence, but whatever. It's a pretty easy fix: just create a partition that starts at 1M.
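If it helps, here is a quick way to inspect what the label actually contains (device path is a placeholder); prtvtoc reports first sectors in 512B units, so a 4K-aligned partition starts at a multiple of 8, and a 1MiB start would be sector 2048:

Code:
# Print the disk label; "First Sector" values are 512B units
prtvtoc /dev/rdsk/c1t1d0
# start % 8 == 0  -> 4K-aligned (8 x 512B = 4096B)
# 1MiB boundary   -> sector 2048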
 

sundaydiver

New Member
Aug 24, 2015
No, you told ZFS to use a device, and it tried to do so. It could use a bit more intelligence, but whatever. It's a pretty easy fix: just create a partition that starts at 1M.
Hmmm, but isn't that the case already? Both disks report Partition 0 starting at sector 256, and from my reading that is correct for 4K alignment (256 x 512B = 128KiB).

I don't know why it reports
Partition 0 starts at logical block address of 34
when the partition table below shows that is the beginning of the unallocated space Solaris adds to align to 4K.
Are sectors and LBA not equivalent? (See the quick check after the partition tables below.)

Code:
Volume name = <        >
ascii name  = <ATA-INTEL SSDSC2BA80-0160-745.21GB>
bytes/sector    =  512
sectors = 1562824367
accessible sectors = 1562824334
Part      Tag    Flag     First Sector          Size          Last Sector
  0        usr    wm               256       745.20GB           1562807950
  1 unassigned    wm                 0            0                0
  2 unassigned    wm                 0            0                0
  3 unassigned    wm                 0            0                0
  4 unassigned    wm                 0            0                0
  5 unassigned    wm                 0            0                0
  6 unassigned    wm                 0            0                0
  8   reserved    wm        1562807951         8.00MB           1562824334


Volume name = <        >
ascii name  = <ATA-INTEL SSDSC2BA40-0270-372.61GB>
bytes/sector    =  512
sectors = 781422767
accessible sectors = 781422734
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256      372.60GB          781406350
  1 unassigned    wm                 0           0               0
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  8   reserved    wm         781406351        8.00MB          781422734
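For what it's worth, sectors and LBAs here are the same 512B units, so the alignment check is just the first sector modulo 8 (4096B / 512B); a quick sanity check against the tables above:

Code:
echo $((256 % 8))   # 0 -> sector 256 (Partition 0) is 4K-aligned
echo $((34 % 8))    # 2 -> LBA 34, the first usable sector of a GPT label, is not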
 

sundaydiver

New Member
Aug 24, 2015
Thank you for the assistance. This is now resolved.
I don't know whether it was a real issue or not, but for each of the new SSDs I had to use it in a new temporary pool first, then destroy that pool and attach the disk to the existing pool.
After that I no longer get messages about block alignment.
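For anyone landing here later, the sequence was roughly the following sketch (pool and device names are placeholders); creating the throwaway pool rewrites the disk with a fresh ZFS EFI label, discarding whatever the Windows volume left behind:

Code:
zpool create -f tmppool c1t2d0        # relabel the new SSD
zpool destroy tmppool
zpool attach mypool c1t0d0 c1t2d0     # mirror it onto the existing disk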
 

gea

Well-Known Member
Dec 31, 2010
I have only seen such problems when adding or replacing 4K physical-sector disks in a pool built on 512B physical-sector disks.
That does not work. To avoid problems, force 512B disks to ashift=12 when creating or adding vdevs.
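A sketch of how that override is typically done on illumos-based systems via the sd driver (the vendor string is padded to 8 characters and the product string must match the disk's inquiry data; whether this applies unchanged to Solaris 11.4 is an assumption):

Code:
# /kernel/drv/sd.conf -- report a 4K physical block size for a 512e disk
sd-config-list = "ATA     INTEL SSDSC2BA40", "physical-block-size:4096";
# Reload the sd driver configuration (or reboot), then create/attach the vdev
update_drv -vf sd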