bayleyw's latest activity

  • under $500: 2080 Ti 22G ($430); best deal: used 3090 (about $750). you want a relatively recent nvidia card with tensor cores, ideally...
  • bayleyw replied to the thread SXM2 over PCIe.
    do you get any MCEs in dmesg before the devices drop off the bus? if so, that's definitely telltale of bad PCIe signaling.
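The dmesg check above can be sketched as a one-liner; the exact message strings vary by kernel and platform, so the grep pattern here is an assumption covering the common MCE and PCIe/AER wordings:

```shell
# Scan the kernel log for Machine Check Exceptions and PCIe/AER errors;
# these typically appear shortly before a marginally-signaled GPU drops
# off the bus. (sudo is usually needed, since many kernels restrict
# dmesg to root.)
sudo dmesg | grep -iE 'mce|machine check|aer|pcie bus error' \
  || echo "no MCE/AER messages found"
```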
  • bayleyw replied to the thread SXM2 over PCIe.
    ok, thanks. so about $1200 all in for the GPUs and trimmings except the case, or $1900 if you add the waterblocks (are there any cheaper...
  • bayleyw replied to the thread SXM2 over PCIe.
    if you go by 'median trustworthy looking' eBay/Taobao pricing it is $750 for the GPUs, $300 for the board not counting shipping, and...
  • bayleyw replied to the thread SXM2 over PCIe.
    $800 for the GPUs, plus AOM-SXMV, risers, heatsinks, fans, and whatever contraption you build to mechanically hold it all together gets...
  • bayleyw replied to the thread SXM2 over PCIe.
    somewhat off topic but speaking of mixed precision on a budget, check out these beauties (eBay link): for $165 you get 2x16+1x8 out...
  • bayleyw replied to the thread SXM2 over PCIe.
    the user I was replying to is clearly interested in mixed precision AI work (transformers and stable diffusion)...I did have a lively...
  • bayleyw replied to the thread SXM2 over PCIe.
    probably better off with 2x 3090 at the same price. quoting myself from earlier: and also, training resnet50 at high batch size is...
  • bayleyw replied to the thread L40S vs RTX 6000 ADA - for LLMs.
    sure, if you have a revenue model which supports finetuning (some do) but many (most?) genAI apps are inference only
  • bayleyw replied to the thread L40S vs RTX 6000 ADA - for LLMs.
    you don't host your own inference servers not because it's not scalable, but because it's not elastic. you have to buy sufficient hardware...
  • bayleyw replied to the thread SXM2 over PCIe.
    does Volta actually show significant efficiency gains over Pascal? TSMC 12nm was an optimized version of TSMC 16nm, not a shrink, so I...
  • bayleyw replied to the thread SXM2 over PCIe.
    you were saying that you wanted Volta because it had tensor cores that you might use in the future; I'm arguing that by the time you get...
  • I've definitely seen them run on PCIe adapters. What server did you test on?
  • bayleyw replied to the thread SXM2 over PCIe.
    The problem with Volta is it is on the verge of being dropped from mainstream frameworks and libraries. I would expect the sunsetting to...
  • bayleyw replied to the thread SXM2 over PCIe.
    V100 is only 7 TFLOPS per card, nowhere close to 2x faster. But, if your code is locked to Volta or newer CUDA features then there isn't...