The 22110 is a larger enterprise card; I'm not sure how the extension board would help him.
Do you know the reference of the connector? I'm looking for a compatible fan.
Hi, it's 5V and fan speed control is supported in the BIOS.
FWIW I'm using a U.2 PM983a and have no problems.
I have no answer from support on this point.
Would a U.2 be a good solution?
I read a few pages earlier about the Samsung PM9A3; I wonder if it has better heat dissipation. I've never used a U.2 SSD.
He won't be able to use his 3 2210 because of the NVMe fan; with the ext board it won't block the fan. Sorry, I meant to say replace 1/2 of his 22110 drives with 2230, 2242, etc.
I also don't have problems with this disk, but once I wanted to copy something from a Samsung 980 to it (a 400 GB VM) and during the transfer the 980 died. I touched the 980 and the chip was too hot to even touch; the NVMe had simply shut down from temperature. After a reboot the NVMe was fine, but the bottom line is that three disks in that thing are not OK if you push them.
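If anyone wants to keep an eye on drive temperature during a big transfer, a quick check with nvme-cli or smartmontools should look roughly like this (assuming the drive shows up as /dev/nvme0; adjust the device node for your setup):

    sudo nvme smart-log /dev/nvme0 | grep -i temp    # composite temperature plus warning/critical temperature time counters
    sudo smartctl -a /dev/nvme0 | grep -i temp       # same idea via smartmontools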
I meant Gen4 x4. In your pic you can see that you can remove the spacer on the 2280 slot to fit a 22110 in the Gen3 ports.
Completely agree with you. The specs are good but the thermal design is awful. I would have made the case 3 or 4 cm thicker to get much better airflow.
At this moment I'm thinking about completely disassembling the machine and putting everything in an SFF box... after all, I think we all bought it for its specs and not for its size, which has its caveats as far as heat is concerned.
Also, I'm not sure the existing fan design is optimal IMO... instead of bringing air onto the disks directly from outside, it pulls the heat from above the NVMe drives and exhausts it out of the front of the machine.
Yeah, I asked via the website and they said shipping at the end of April, direct from Hong Kong, which may or may not be an issue for import taxes depending on how they do it. I'll wait for now for stock to improve; it would be better if they stocked Amazon somewhere in Europe so you can be certain about taxes.
I think everything is backordered now except for AliExpress and a $1000 price tag. There is no inventory on Amazon, which there would be if they had stock.
I think I found a solution... we can raise the case a bit with spacers and, with the inside metal bracket and the fan removed, we can put a fan underneath the box to blow air inside...
Eventually these high-performance small boxes will all come with fully perforated cases and one or two standard case fans mounted outside the case. Anything else is non-viable. It's pure stupidity to cram all these hot components into a small closed space and then start crying about thermal issues.
I will buy a 120x15 mm 5V fan and get power from a USB port...
Good idea.
After almost 48 hours I don't have a single problem with the SFP+ ports and ESXi.
Also, I feel a little neglected regarding the SFP+ issues reported earlier. I posted a link to a Reddit post on how to individually disable ASPM for the NICs only. It might be a mandatory thing to do if the aim is to run 2x 10Gb links reliably while keeping total wattage low.
Also, to the user having issues with NIC bonding on the SFP+ ports: my experience with multipath iSCSI says flow control is essential for reliable bonds. I hope someone will test my theory: disable ASPM, and enable flow control on the switch ports.
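To be concrete about the host side, a rough sketch of what I mean is below (the interface names are placeholders; check yours with ip link, and not every NIC/driver allows changing pause settings):

    sudo ethtool -a enp2s0f0                 # show current pause (flow control) settings
    sudo ethtool -A enp2s0f0 rx on tx on     # enable RX/TX flow control on the first SFP+ port
    sudo ethtool -A enp2s0f1 rx on tx on     # and on the second one

Flow control has to be enabled on the matching switch ports as well, otherwise the pause frames have no effect.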
I tried this: ASPM was disabled in the BIOS and in the kernel. I also tried removing the bond and running just a single NIC, with the same issues. I didn't try the flow control, however; I'll try it when I get the new machine. For now I'm going to stop pulling my hair out and stick with Debian 6.1, because I have fewer packet drops, no bonding issues, and it's been working flawlessly for 3 days (without disabling anything).
This seems to be a solution:
Disabling ASPM on a per-device basis
Maybe @garbinc could try to disable it for the 10G NICs this way.
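For anyone who wants to try it without a reboot, my understanding of the per-device method is that it just clears the two ASPM control bits in the device's Link Control register. A rough sketch, not a verified recipe (the PCI address is a placeholder; find your 10G NICs with lspci first, and note the change does not survive a reboot):

    DEV=0000:02:00.0                                   # placeholder; use the address of your NIC from lspci
    sudo lspci -vvv -s "$DEV" | grep -i aspm           # check what the device reports before the change
    CUR=$(sudo setpci -s "$DEV" CAP_EXP+10.w)          # Link Control register (offset 0x10 in the PCIe capability)
    sudo setpci -s "$DEV" CAP_EXP+10.w=$(printf "%04x" $(( 0x$CUR & ~0x3 )))   # clear bits 1:0 = ASPM control
    sudo lspci -vvv -s "$DEV" | grep -i aspm           # should now show ASPM Disabled for this device only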