Search results

  1. SXM5 (H100) over PCIE Nightmare

    Are you also stuck with using the older Ubuntu 20.04/kernel?
  2. H100 with SXM5 PCIE adapter issues

I can understand needing to use the older driver, but I'm surprised you had to use the older OS as well. Glad you got it working, though.
  3. H100 with SXM5 PCIE adapter issues

    So what was your solution?
  4. H100 with SXM5 PCIE adapter issues

Were you able to update the BIOS? If so, how? Or did you move to an older OS? Or give up?
  5. SXM2 over PCIe

To repurpose the board you just need the riser adapters to use the proprietary PCIe slots rather than the OCuLink ports. There are both custom risers (one end male proprietary, other end male PCIe) and PCIe adapters (to use standard risers) pretty freely available on the Chinese marketplaces.
  6. WANTED: Titan V waterblock

Where are you located? How many do you want?
  7. Tesla V100 16GB GPU SXM2 PCIe 3.0x16 ($267/$287)

Probably. But other than giving you the BIOS file and version number, it won't really tell you anything. The user is not a power user, and it would be difficult for them to pull the BIOS themselves. The BIOS version is 88.00.41.00.01.
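For what it's worth, the VBIOS version can be read without having the user pull the ROM at all — a minimal sketch, assuming the NVIDIA driver and `nvidia-smi` are installed on the machine:

```shell
# Print each GPU's name and VBIOS version as CSV.
# On the card discussed above this would report something
# like "88.00.41.00.01" in the vbios_version column.
nvidia-smi --query-gpu=name,vbios_version --format=csv
```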
  8. Tesla V100 16GB GPU SXM2 PCIe 3.0x16 ($267/$287)

Not FHHL. As I said, it's an SXM2 device, and that's reflected in the name. Clock speed is more or less normal V100 clocks (accounting for whatever power limit is set), up to 1455 MHz. Memory speed of 810 MHz is slightly different from 808 MHz. 160W vs. 150W, again slightly different. It seems to be a...
  9. SXM2 over PCIe

Does anyone know about the -N variants of the V100? Where they come from? Their history? Any other information about their specs? I have several normal SXM2 V100s that I’ve been running for a while. All have a default power limit of 300W and are configurable from 150-300W. Memory clock of 877MHz. But...
  10. Tesla V100 16GB GPU SXM2 PCIe 3.0x16 ($267/$287)

Does anyone know about the -N variants of the V100? Where they come from? Their history? Any other information about their specs? I have several normal SXM2 V100s that I’ve been running for a while. All have a default power limit of 300W and are configurable from 150-300W. Memory clock of 877MHz. But...
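One way to compare a -N card against a normal SXM2 V100 is to dump the power-limit range and maximum memory clock directly — a sketch assuming `nvidia-smi` is available; the field names below are standard `--query-gpu` properties:

```shell
# Print default/min/max power limits and the max memory clock, the
# specs being compared above (e.g. 300W default, 150-300W range, 877MHz).
nvidia-smi \
  --query-gpu=name,power.default_limit,power.min_limit,power.max_limit,clocks.max.memory \
  --format=csv
```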
  11. SXM2 over PCIe

Not all tasks or models can easily scale across multiple GPUs, and GPU-to-GPU bandwidth (with or without NVLink) can be limiting. There are arguments for and against it; it just depends on what your actual use case is.
  12. SXM2 over PCIe

Yes. But some people want more than 16GB to run larger models or other things needing more VRAM.
  13. SXM2 over PCIe

    I don’t know the name or if it has a specific model name. If you search Taobao and/or XianYu for AOM-SXMV you should find it or something very similar
  14. Tesla V100 16GB GPU SXM2 PCIe 3.0x16 ($267/$287)

    Yeah that’s true if you want to use NVLink.
  15. Tesla V100 16GB GPU SXM2 PCIe 3.0x16 ($267/$287)

Depends on your limitations. Keep in mind that with this you still need to buy/acquire the extension cables and the PCIe adapter boards. And it's probably considerably more difficult to adapt to anything other than a mining-rack type setup. A lot of folks will greatly prefer the kind of board that can plug...
  16. Tesla V100 16GB GPU SXM2 PCIe 3.0x16 ($267/$287)

A lot of people don't want a 1U server (insanely loud), and also don't want to be stuck on the same platform forever. And that particular server uses a more custom PSU setup; you need the infrastructure to run it. It ends up being a lot more trouble than it's worth. Being able to put the GPUs on...
  17. Tesla V100 16GB GPU SXM2 PCIe 3.0x16 ($267/$287)

Not anymore. The AOM-SXMV is almost impossible to find now. The other models of SXM2 boards (like the Dell version) are hardware-locked to their platforms, require special drivers to work, and won't work with standard systems. You can't take the listings on Taobao or XianYu at face value...
  18. GPU Memory Bandwidth Benchmark

I wonder what the cause of this is, since it seems to happen at the very end on several different GPUs. Is this an unspoken decision by Nvidia to cheap out on the last memory module(s), like the whole GTX 970 “3.5GB” RAM issue? (Which resulted in a class-action lawsuit against Nvidia)...
  19. [FS/FT][US-WA] (TRADE) Ryzen 3900X for APU + (SELL) EPYC 7443P + ROMED6U-2L2T + RAM

FYI, the RAM you have is not compatible with your motherboard or CPU. If you had ever powered it on, you would have realized this. EPYC Milan only supports RDIMMs (registered/buffered); you have UDIMMs (unregistered/unbuffered). Don't bother selling them as a combo, since they are not compatible.
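The RDIMM/UDIMM distinction above can be checked from a running system without reading module labels — a sketch assuming a Linux box with `dmidecode` installed:

```shell
# Inspect installed DIMMs via SMBIOS/DMI tables (needs root).
# "Type Detail" reports e.g. "Synchronous Registered (Buffered)" for
# RDIMMs vs "Synchronous Unbuffered (Unregistered)" for UDIMMs.
sudo dmidecode -t memory | grep -E "Type Detail|Part Number"
```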