I have an AIO server running Solaris 11.4 for ZFS which has been working normally for years, with the ZFS pool running on a single Intel S3700 SSD.
I noticed prices for S3710 SSDs are now very cheap, so I thought I would buy two to add redundancy and increase the size of the existing pool (add 2 x 800GB SSDs and remove the existing 400GB SSD).
I mounted the new SSDs in a Windows box to check and update the firmware, and then attached them to the Solaris host.
I successfully attached one of the new S3710 SSDs to the existing pool to create a mirror, and it resilvered without issue.
However, I have noticed the following messages in the system log, which relate to the new S3710 SSD, so now I am reluctant to attach the second new S3710 in case of a problem.
How do I fix this? I assumed ZFS would automatically partition everything properly during the attach process, as it does recognise this drive as 4K, but because I created a volume while it was attached to Windows, has that messed things up for ZFS?
Any hints would be appreciated.
Jun 20 00:36:19 sammy-solaris genunix: [ID 936769 kern.notice] sd5 is /scsi_vhci/disk@g55cd2e404c0c4e45
Jun 20 00:36:19 sammy-solaris cmlb: [ID 541439 kern.notice] NOTICE: /scsi_vhci/disk@g55cd2e404c0c4e45 (sd5):
Jun 20 00:36:19 sammy-solaris Partition 0 starts at logical block address of 34 which is not aligned to physical block size of 4096 bytes on this disk. This can cause I/O performance degradation on this disk.
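If I understand the cmlb message correctly, the arithmetic behind it would be something like this (my own sketch, assuming 512-byte logical sectors, which is what these drives report, and the 4096-byte physical block size from the message):

```shell
# Alignment check sketch - assumes 512-byte logical sectors and the
# 4096-byte physical block size reported in the cmlb message.
start_lba=34
logical_bytes=512
physical_bytes=4096

# Byte offset where partition 0 starts on disk.
offset=$(( start_lba * logical_bytes ))
echo "start offset: $offset bytes"

# Remainder when divided by the physical block size;
# non-zero means the partition start is misaligned.
echo "misalignment: $(( offset % physical_bytes )) bytes"
```

So LBA 34 lands 1024 bytes into a physical block; a 4K-aligned start would be any LBA divisible by 8 (e.g. 40 or 256), since 8 x 512 = 4096.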


Partition details seem to match between both disks in the pool (see attached screenshot).

