bayleyw's latest activity
bayleyw replied to the thread "GPU recommendation for mass text processing, summarizing and data analysis, serving API requests etc":
"under $500: 2080 Ti 22G ($430); best deal: used 3090 (about $750). you want a relatively recent nvidia card with tensor cores, ideally..."
Mar 26, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"do you get any MCEs in dmesg before the devices drop off the bus? if so, that's a definite telltale of bad PCIe signaling."
Mar 26, 2024
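The dmesg check suggested in the reply above can be sketched as a quick one-liner. The grep pattern and the fallback message are illustrative assumptions, not from the thread:

```shell
# Scan the kernel log for Machine Check Exceptions and PCIe AER errors;
# repeated hits shortly before a GPU drops off the bus suggest marginal
# PCIe signaling (risers, cabling, or adapter issues).
dmesg | grep -i -E 'machine check|mce:|AER:' || echo "no MCE/AER entries found"
```

On systems with persistent logging, `journalctl -k` can be substituted for `dmesg` to search across previous boots as well.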
bayleyw replied to the thread "SXM2 over PCIe":
"ok, thanks. so about 1200 all in for the GPUs and trimmings except the case, or 1900 if you add the waterblocks (are there any cheaper..."
Mar 26, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"if you go by 'median trustworthy looking' ebay/taobao pricing it is 750 for the GPUs, 300 for the board not counting shipping, and..."
Mar 25, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"800 for the GPUs, plus AOM-SXMV, risers, heatsinks, fans, and whatever contraption you build to mechanically hold it all together gets..."
Mar 25, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"somewhat off topic, but speaking of mixed precision on a budget, check out these beauties (eBay link): for $165 you get 2x16+1x8 out..."
Mar 25, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"the user I was replying to is clearly interested in mixed-precision AI work (transformers and stable diffusion)... I did have a lively..."
Mar 25, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"probably better off with 2x 3090 at the same price. quoting myself from earlier: and also, training resnet50 at high batch size is..."
Mar 25, 2024
bayleyw replied to the thread "L40S vs RTX 6000 ADA - for LLMs":
"sure, if you have a revenue model that supports finetuning (some do), but many (most?) genAI apps are inference-only"
Mar 20, 2024
bayleyw replied to the thread "L40S vs RTX 6000 ADA - for LLMs":
"you don't host your own inference servers not because it's not scalable, but because it's not elastic. you have to buy sufficient hardware..."
Mar 18, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"does Volta actually show significant efficiency gains over Pascal? TSMC 12nm was an optimized version of TSMC 16nm, not a shrink, so I..."
Mar 17, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"you were saying that you wanted Volta because it had tensor cores that you might use in the future; I'm arguing that by the time you get..."
Mar 16, 2024
bayleyw replied to the thread "Automotive A100 SXM2 for FSD? (NVIDIA DRIVE A100)":
"I've definitely seen them run on PCIe adapters. What server did you test on?"
Mar 16, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"The problem with Volta is that it is on the verge of being dropped from mainstream frameworks and libraries. I would expect the sunsetting to..."
Mar 9, 2024
bayleyw replied to the thread "SXM2 over PCIe":
"V100 is only 7 TFLOPS per card, nowhere close to 2x faster. But if your code is locked to Volta-or-newer CUDA features, then there isn't..."
Mar 9, 2024