Recent content by MikeWebb

  1. MikeWebb

    Brocade ICX Series (cheap & powerful 10gbE/40gbE switching)

    Just went to see if the devplops link you sent forever ago still worked... loving the new page. Sorry that it all had to come to that. Yay 100 and yay my ICX6610. Just about to load R code, move S to secondary, and have a look-see at that.
  2. MikeWebb

    Planing 40Gbps Ceph Cluster Lab Build - Gotcha / Pitfalls?

    Just starting out with Ceph. Did a POC with VMs to get an insight into the technology... very nice, and so far a lot to still get my head around. No hard provisioning yet, as I've learnt my nodes are too fat and too (2) few. So I speak with no authority, just Google knowledge. 5 OSDs over 5 nodes...
  3. MikeWebb

    SX6012 port 1 & 2 LR4 transceiver issues. help..maybe

    Yep, ordered the SR4 optics. Might try again with a D'Arenberg "The Twenty-Eight Road" (Mourvèdre), which I think will pair nicely with a Ceph deployment I'm working on as well. The 777 sounds nice with a bit of spice.
  4. MikeWebb

    SX6012 port 1 & 2 LR4 transceiver issues. help..maybe

    Hey there, thanks for dropping by. Rather than fill this post with log and config dumps, I'll just start off with what I have done and what is happening in broad strokes... because I may just be SOL. Here is what I'm currently working with: EMC Mellanox SX6012 with PSID MT_1270111029...
  5. MikeWebb

    Installing Mellanox CX-3 Pro on Proxmox 5.4

    Proxmox installer does not see the Mellanox ConnectX-3 card at all? Set the Mellanox CX-3 VPI ports type
  6. MikeWebb

    Ceph Benchmark request /NVME

    OK, thanks. Yeah, it's a juggling act to balance block, record, and (GlusterFS) shard sizes against the requirements of applications (databases or VM filesystems) etc., while keeping uniformity all through the stack. This all leads to rabbit-hole syndrome. What doesn't help is when looking...
  7. MikeWebb

    Bare-metal kubernetes build - thoughts?

    WOW man, that's one hell of a lab. My K8s lab is 4 RPi 3Bs with NFS from a little QNAP NAS (I also send ZFS snaps to that via iSCSI).
  8. MikeWebb

    Ceph Benchmark request /NVME

    Can I ask why this as a performance test criterion? I know this is synthetic testing, but we are talking storage node clusters here, not a single spinner (r/w head) in a single computer. A single person accessing the cluster still means a lot of data moving around with reads and writes. One of the...
  9. MikeWebb

    Ceph Benchmark request /NVME

    Yep, too true. But a saturated network link with high IOPS is a saturated network link with high IOPS. It's very easy to saturate a 10Gb/s link with COTS hardware and still have responsiveness under load. The same goes for those massive SPOF boxes the FreeNAS guys build that can saturate 40Gb/s. The...
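    For anyone wanting to reproduce this kind of saturation test, one common approach (not necessarily what was used in this thread; the pool name below is a placeholder) is Ceph's built-in rados bench:

    ```shell
    # Write test: 60 seconds, 4 MB objects, 16 concurrent ops.
    # --no-cleanup keeps the objects so the read tests below have data.
    rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup

    # Sequential and random read tests against the objects written above.
    rados bench -p testpool 60 seq -t 16
    rados bench -p testpool 60 rand -t 16

    # Remove the benchmark objects when done.
    rados -p testpool cleanup
    ```

    Watching network interface counters while this runs is an easy way to confirm whether the link or the OSDs are the bottleneck.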
  10. MikeWebb

    Installing Mellanox CX-3 Pro on Proxmox 5.4

    Yep, CX-3 (69:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]) on proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve). I didn't bother with OFED; I just installed MFT, put my ports into ETH mode, rebooted, and I had 2 new interfaces in my network page. iperf nets me...
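    A rough sketch of the MFT steps described above; the `/dev/mst/...` device path is an assumption (it varies per system, so check `mst status` first):

    ```shell
    # Start the Mellanox Software Tools service and list detected devices.
    mst start
    mst status

    # Set both VPI ports to Ethernet (LINK_TYPE 2 = ETH, 1 = IB, 3 = VPI/auto).
    # The device path below is illustrative; use the one mst status reports.
    mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

    # Reboot for the new port types to take effect.
    reboot
    ```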
  11. MikeWebb

    Ceph Benchmark request /NVME

    Hope this thread continues. Watching.
  12. MikeWebb

    32GB SATA DOMs - ~$8 each (lot of 8)

    Super easy to make. I soldered some breadboard jumpers and 3-pin JST connectors together from my bits'n'bobs bucket; probably less than $2 each in parts from Banggood. Shame I only just read this, as I cut them up last month for another project. I now have powered DOMs in my X11 boards.
  13. MikeWebb

    Different disk sets for Proxmox Ceph pools?

    I see that Ceph now has NVMe, SSD and HDD device classes. A quick Google turned up a few hits showing how to create CRUSH maps and rules for device-class pools. Sorry I can't help more; I'm trying not to go too far down the Ceph rabbit hole. I'm focusing on OSD nodes with mixed SSD and HDD but...
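    As a hedged sketch of what those hits describe (rule and pool names here are made up, and PG counts are purely illustrative), a device-class-specific pool looks roughly like this:

    ```shell
    # Show how Ceph has auto-classified the OSDs (hdd / ssd / nvme).
    ceph osd crush tree --show-shadow

    # Create a replicated CRUSH rule restricted to SSD-class OSDs.
    # Args: <rule-name> <root> <failure-domain> <device-class>
    ceph osd crush rule create-replicated ssd-only default host ssd

    # Create a pool bound to that rule (128 PGs is only an example value).
    ceph osd pool create fast-pool 128 128 replicated ssd-only
    ```

    The same pattern with `hdd` or `nvme` as the device class gives separate pools per disk type on mixed nodes.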
  14. MikeWebb

    Supermicro accessories

    Yeah... but so very USofA-based. I got in touch with SM regarding an RMA (sent from Australia to Taiwan) and getting some items shipped back along with the RMA motherboard. Seemed simple, but they referred me to their store, which would only ship to a USofA address. I got back in touch with...
  15. MikeWebb

    FS: Intel Xeon Server Cluster - 37 Servers Total - $7000 for all

    Had to come to look at the pictures. Thank you