Hopefully I can get some guidance on this issue.
I have some devices (Apple TVs) that I want to stop using my HE tunnelbroker (via Juniper SRX). I'm trying to achieve this at the interface level on the switch. My logic could be flawed here, but it seemed like a reasonable solution and a chance...
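One way this could be done at the access port, assuming the aim is to drop the 6in4 tunnel traffic itself (IP protocol 41) rather than all IPv6, is an extended ACL bound inbound on the device's port. This is only a sketch in FastIron-style syntax; the ACL name and interface number are placeholders, not from my actual config:

```
! Hypothetical ACL: drop 6in4 (protocol 41) from this port, allow everything else
ip access-list extended no-he-tunnel
 deny 41 any any
 permit ip any any
!
! Bind it inbound on the port the Apple TV hangs off (example port)
interface ethernet 1/1/5
 ip access-group no-he-tunnel in
```

If the tunnel terminates on the SRX rather than on the end devices, the traffic on the access port would be native IPv6 instead, and you'd need to filter that (or the relevant VLAN) instead of protocol 41.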
Just went to see if the devplops link you sent forever ago still worked... loving the new page. Sorry that it all had to come to that. Yay 100 and yay my ICX6610. Just about to load the R code, move the S code to secondary, and have a look-see at that.
Just starting out with Ceph. Did a POC with VMs to get an insight into the technology... very nice, and so far a lot to still get my head around. No hard provisioning yet, as I've learnt my nodes are too fat and too (2) few. So I speak with no authority, just Google knowledge.
5 OSDs over 5 nodes...
Yep, ordered the SR4 optics. Might try again with a D'Arenberg "The Twenty-Eight Road" (Mourvèdre), which I think will pair nicely with a Ceph deployment I'm working on as well.
The 777 sounds nice with a bit of spice.
Hey there, thanks for dropping by.
Rather than fill this post with log and config dumps, I'll just start off with what I have done and what is happening in broad strokes... because I may just be SOL.
Here is what I'm currently working with:
EMC Mellanox SX6012 that is PSID MT_1270111029...
OK thanks.
Yeah, it's a juggling act to balance block, record, and (GlusterFS) shard sizes against the requirements of applications (databases or VM filesystems) etc., while keeping uniformity all through the stack. This all leads to rabbit-hole syndrome. What doesn't help is when looking...
Can I ask why this is a performance test criterion? I know this is synthetic testing, but we are talking storage node clusters here, not a single spinner (r/w head) in a single computer. A single person accessing the cluster is still a lot of data moving around with reads and writes. One of the...
Yep, too true. But a saturated network link with high IOPS is a saturated network link with high IOPS.
Very easy to saturate a 10Gb/s link with COTS hardware and still have responsiveness under load. The same goes for those massive SPOF boxes the FreeNAS guys build that can saturate 40Gb/s. The...
Yep, CX-3 (69:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]) on proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve).
I didn't bother with OFED; I just installed MFT, put my ports into ETH mode, rebooted, and had two new interfaces in my network page.
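For anyone following along, the port-mode switch with MFT's mlxconfig looks roughly like this on a ConnectX-3 (the MST device path below is an example; yours will differ, and LINK_TYPE 2 means Ethernet):

```
# Start the MST driver so the device node appears
mst start
mst status

# Set both ports to Ethernet mode (2 = ETH, 1 = IB) on the example device
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

# Reboot (or reload the driver) for the change to take effect
```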
iperf nets me...
Super easy to make. I soldered some breadboard jumpers and 3-pin JST connectors together from my bits'n'bobs bucket. Probably less than $2 each in parts from Banggood.
Shame I just read this, as I cut them up last month for another project. I now have powered DOMs in my X11 boards.
I see that Ceph now has nvme, ssd and hdd device classes. A quick Google turned up a few hits showing how to create CRUSH maps and rules for device-class pools. Sorry I can't help more; I'm trying not to go too far down the Ceph rabbit hole. I'm focusing on OSD nodes with mixed SSD and HDD but...
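The device-class plumbing those hits describe boils down to a couple of commands (rule and pool names below are examples, not anything from my cluster):

```
# See which classes Ceph has auto-detected on your OSDs
ceph osd crush class ls

# Create a replicated CRUSH rule restricted to ssd-class OSDs,
# rooted at "default" with host as the failure domain
ceph osd crush rule create-replicated fast-ssd default host ssd

# Point an existing pool at the new rule (pool name is an example)
ceph osd pool set mypool crush_rule fast-ssd
```

Ceph then rebalances that pool's PGs onto ssd-class OSDs only, so be prepared for data movement when you flip the rule.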
Yeah... but so very USofA-based. I got in touch with SM regarding an RMA (sent from Australia to Taiwan) and getting some items shipped back along with the RMA motherboard.
Seemed simple, but they referred me to their store, which would only ship to a USofA address. I got back in touch with...