[EU][FR] WTB GPUs with 24GB+ VRAM


redblood

New Member
Oct 2, 2025
27
4
3
I need some GPUs for my home server.

I do a lot of inference; most use cases aren't LLMs but things like face and plate recognition, NLP, OCR, etc.

And AV1 transcoding.

I'm looking for an Intel Arc Pro B60, but I can't seem to find one at MSRP; I found two or three listings at ~200€ above it. I'd like one below MSRP, depending on condition.

I'm also after pro Nvidia GPUs with 24GB or more at ~1000€ apiece, but nothing older than Ada.
If someone has a good price for a modded GPU with much more VRAM, I can go above 1000€.

I live in France and am not in a hurry.
 

iraqigeek

Active Member
Sep 17, 2018
122
79
28
B50/B60 aren't slated for retail release until Q1. Intel currently sells only to integrators, so the ones you see are gray market.

If you're running heavy or long workloads, I wouldn't advise modded GPUs. They run pretty hot, and if you have any issues you're on your own; even GPU repair shops won't be able to help much because they don't have schematics.

Look into AMD. MI50s are now practically gone. The W6800 32GB can sometimes be found for 700-ish, but that won't work for you if you need multiple cards.
 

redblood

Thanks for the heads up. I have an MI50 that I bought for around 150€ two months ago, but it's not enough.

The MI50 practically only works well for inference; I can't use it for AV1 encoding or for things like Frigate in a container.
 

iraqigeek

Arc A-series cards are quite good for encoding and they're quite cheap. They do require ReBAR to work properly, though (so do the B-series). You can mix the MI50 for inference with a few A-series Arcs like the A380 or A40 for AV1, again assuming you have ReBAR or can mod your BIOS to support it. Otherwise your only option for AV1 is Nvidia, again mixed with your MI50.
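The ffmpeg side of AV1 on an Arc card is simple enough. A minimal sketch, assuming ffmpeg is built with VA-API and the Arc card sits at /dev/dri/renderD128 (check with `ls /dev/dri` — yours may be a different render node):

```shell
# Hardware-decode the input and re-encode to AV1 on the Arc card via VA-API.
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi \
       -vaapi_device /dev/dri/renderD128 \
       -i input.mkv \
       -c:v av1_vaapi -b:v 4M \
       -c:a copy output.mkv
```

Same idea works from Frigate or Jellyfin once the container can see the render node.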
 

redblood

I have an ASRock ROMED8-2T; it should have ReBAR support with the latest BIOS update. My idea was Intel for light miscellaneous inference and AV1,
and the MI50 for LLMs.
 

iraqigeek

I have an ASRock ROMED8-2T; it should have ReBAR support with the latest BIOS update. My idea was Intel for light miscellaneous inference and AV1,
and the MI50 for LLMs.
I had three A770s to try inference on Intel, and my experience made me sell them shortly after. It's not that performance is bad, but the software setup was quite messy. Intel's documentation gives different instructions depending on which page you read, and I couldn't find any page where the instructions were correct from start to end. I wasn't able to build llama.cpp for IPEX successfully despite my efforts. So if you want inference too, Nvidia is your only realistic option, something like the Ampere A2000.
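For reference, the SYCL build I kept fighting with looks roughly like this per llama.cpp's docs (a sketch, assuming the oneAPI Base Toolkit is installed under the default /opt/intel/oneapi prefix):

```shell
# Load the oneAPI environment (icx/icpx compilers, SYCL runtime).
source /opt/intel/oneapi/setvars.sh

# Configure llama.cpp with the SYCL backend and Intel's compilers.
cmake -B build -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

# Build.
cmake --build build --config Release -j
```

On paper it's three commands; in practice the environment and driver versions are where it fell apart for me.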
 

redblood

I might have misspoken: when I say inference I don't mean LLMs, so no llama.cpp is involved. Things like facial recognition, plate recognition, object detection, NLP, etc.
 

iraqigeek

I might have misspoken: when I say inference I don't mean LLMs, so no llama.cpp is involved. Things like facial recognition, plate recognition, object detection, NLP, etc.
My comment still stands. Setting up the software stack was problematic on Intel in my experience. If you're running PyTorch or any ML framework, software setup is not as trivial as setting up CUDA or (nowadays) ROCm.
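If you do go the PyTorch route, here's a hedged sketch of how device selection looks across vendors. `pick_device` is my own helper name, not an API; native `torch.xpu` support for Intel GPUs only landed in recent PyTorch releases, hence the `hasattr` guard:

```python
# Sketch: pick the best available inference device across GPU vendors.
import importlib.util

def pick_device() -> str:
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # no framework installed at all
    import torch
    if torch.cuda.is_available():  # Nvidia, or ROCm builds of torch
        return "cuda"
    # Intel GPUs: torch.xpu exists only on recent PyTorch / IPEX setups.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"

print(pick_device())
```

The point being: on CUDA the happy path just works, while on "xpu" whether that last branch ever returns true is exactly the setup battle I described.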
 

redblood

I see, thanks for your insights. Good thing I can containerize the setup for my workloads.
 

iraqigeek

I see, thanks for your insights. Good thing I can containerize the setup for my workloads.
Did you even test with containers on an Intel card? I'd get the cheapest A310 or A380 and test with it before committing to any solution or spending more money on it.
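Passing the card into a container is just a device mapping; a sketch (the image name is only a placeholder — substitute whichever Intel image your workload uses):

```shell
# Expose the Arc card's render node to the container so VA-API / oneAPI
# workloads inside can see it. "video" (sometimes "render") group grants
# access to /dev/dri on most distros.
docker run --rm -it \
  --device /dev/dri:/dev/dri \
  --group-add video \
  intel/intel-extension-for-pytorch:latest bash
```

That gets the device visible; whether the framework inside actually uses it is the part worth testing on a cheap card first.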
 

redblood

Nope, I didn't, but it looks doable; I have experience with containers and Linux. ROCm 7.0.2 wasn't hard to set up for the MI50. Since you've tried both, do you think Intel is even harder?

How much would an A310/A380 cost? I saw them a while ago at around 150€ and that seemed too expensive.
 

iraqigeek

Nope, I didn't, but it looks doable; I have experience with containers and Linux. ROCm 7.0.2 wasn't hard to set up for the MI50. Since you've tried both, do you think Intel is even harder?

How much would an A310/A380 cost? I saw them a while ago at around 150€ and that seemed too expensive.
ROCm is practically as easy to set up as CUDA nowadays: 15 minutes, and most of that is downloading packages and compiling kernels. I just did it last night with ROCm 7.1.0 on MI50s (you need to copy the Tensile files from rocBLAS).
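A couple of sanity checks after the install — standard ROCm tools, nothing exotic:

```shell
# The MI50 should enumerate as a gfx906 agent if the runtime sees it.
rocminfo | grep -i gfx906

# Temperatures, VRAM usage, and clocks for each card.
rocm-smi
```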

Intel, OTOH, I couldn't get anything working on after 10+ hours of trying. I didn't try with a container, but only because I didn't want to run things in one.

I have two decades of software engineering experience and just as much running and managing Windows and Linux servers, both personally and professionally.

An A310 or A380 will cost about 100€ on eBay, less locally (depending on which country you live in). Buy a used card to test; worst case you can resell it for the same price or 10-15€ less. Better than spending a few grand only to be frustrated with setup, stability, reliability, or whatever.

I'm really rooting for Intel, but their software stack is still not there in my experience. YMMV.
 

redblood

That sounds reasonable. I'm in France; I'll see if I can find one. Prices here for the A750 and the A380/A310 are all virtually the same, ~150€.
 

iraqigeek

That sounds reasonable. I'm in France; I'll see if I can find one. Prices here for the A750 and the A380/A310 are all virtually the same, ~150€.
Yep, the A750 is the ugly duckling of the family because it has only 8GB of VRAM. For you, though, it might be the best option for encoding.