Nvidia’s best graphics card might reach new heights in the next generation of GPUs — or at least that’s what a reputable leaker implies. The RTX 4090 already delivered a massive generational uplift, and we’re looking at something similar, or even better, with the future RTX 5090. It all comes down to a ridiculously large memory bus and a huge increase in CUDA cores. But does Nvidia really need all that juice to compete with AMD?
Today’s round of tantalizing leaks comes from a well-known source in the GPU space, kopite7kimi, who released some speculation about the possible architecture of the GB202 chip, which would be the Blackwell counterpart to the AD102. There’s a bit of math involved.
Kopite predicts that the GB202 will have 12 graphics processing clusters (GPCs) with eight texture processing clusters (TPCs) each. With two streaming multiprocessors (SMs) per TPC, as in current Nvidia designs, that works out to 192 SMs. That brings us to the most familiar measure for Nvidia cards: CUDA cores. Assuming Blackwell keeps Ada's 128 CUDA cores per SM, the RTX 5090 would come with a total of 24,576 CUDA cores, about 33% more than the full AD102 die. The real-world gap is even wider, because the RTX 4090 doesn't even utilize the full die: the GPU serves up 16,384 of the chip's 18,432 CUDA cores, which would put the rumored GB202 closer to 50% ahead of the card you can actually buy. In any case, those gains are significant.
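For the curious, here's the arithmetic behind those numbers as a quick Python sketch. Note that the SM-per-TPC and core-per-SM ratios are assumptions carried over from Ada, not part of the leak.

```python
# Back-of-the-envelope math for the leaked GB202 layout.
# Assumptions (not confirmed by the leak): 2 SMs per TPC and
# 128 CUDA cores per SM, the same ratios Nvidia used for Ada's AD102.

GPCS = 12            # graphics processing clusters (leaked)
TPCS_PER_GPC = 8     # texture processing clusters per GPC (leaked)
SMS_PER_TPC = 2      # assumed, matches Ada
CORES_PER_SM = 128   # assumed, matches Ada

sms = GPCS * TPCS_PER_GPC * SMS_PER_TPC   # 192 SMs
cuda_cores = sms * CORES_PER_SM           # 24,576 CUDA cores

full_ad102 = 18_432   # CUDA cores on the full AD102 die
rtx_4090 = 16_384     # CUDA cores enabled on the RTX 4090

print(f"GB202 (rumored): {cuda_cores} CUDA cores")
print(f"vs. full AD102:  +{cuda_cores / full_ad102 - 1:.0%}")  # ~33%
print(f"vs. RTX 4090:    +{cuda_cores / rtx_4090 - 1:.0%}")    # 50%
```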
It’s possible that the RTX 5090, much like its predecessor, won’t utilize the full die. We still don’t know if Nvidia is planning an RTX Titan Ada card in the current generation, although it’s suspected that an RTX 4090 Ti is not happening (and that’s a good thing). The AD102 still has CUDA cores waiting to be enabled, so if the story repeats itself with next-gen Blackwell, the RTX 5090 might ship with fewer cores and leave some headroom for a potential RTX 5090 Ti. It’d be a monstrosity either way.
Kopite7kimi has also revealed that the GB202 may come with a 512-bit memory bus, which is something that the likes of the memory-starved RTX 4060 Ti can only dream of. There are also whispers of Nvidia switching to GDDR7 memory for the flagship. However ridiculous it sounds, 48GB of VRAM is plausible on a bus that wide, and a pool that size could speed up AI training.
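To put that bus width in perspective, here's a hedged back-of-the-envelope bandwidth estimate in the same Python style. The GDDR7 per-pin data rate is a placeholder assumption, since no speed has actually leaked.

```python
# Rough theoretical bandwidth for the rumored 512-bit GDDR7 setup.
# The per-pin data rate is purely illustrative; early GDDR7 talk has
# centered around roughly 32 Gbps, but nothing is confirmed for GB202.

BUS_WIDTH_BITS = 512   # rumored GB202 memory bus
DATA_RATE_GBPS = 32    # assumed Gbps per pin, placeholder only

bandwidth_gbs = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8   # bits -> bytes
print(f"Rumored GB202:  {bandwidth_gbs:.0f} GB/s")    # 2048 GB/s

# For comparison, today's cards:
print(f"RTX 4090:       {384 * 21 / 8:.0f} GB/s")     # 384-bit, 21 Gbps GDDR6X
print(f"RTX 4060 Ti:    {128 * 18 / 8:.0f} GB/s")     # 128-bit, 18 Gbps GDDR6

# 48GB across a 512-bit bus (16 32-bit channels) works out to 3GB per channel.
```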
Even trying to imagine the price tag of such a GPU is stressing me out. One thing is for sure: Nvidia appears to be sticking to a strategy in which the flagship delivers a huge performance uplift at a high cost, while those gains shrink further down the lineup until they’re much smaller, or nearly non-existent.
Nvidia can afford to do that because competition from AMD is scarce at the high end, and it seems that AMD might be sitting out the high end in the next generation anyway. Nvidia honestly doesn’t even need gains of this magnitude: even if AMD were to compete against the RTX 5090, it seems unlikely that it’d be able to catch up so soon. After all, its current flagship only matches the RTX 4080. It’s too early to say anything with certainty, though, as Blackwell GPUs aren’t expected to launch before 2025.