- cross-posted to:
- hardware@lemmy.world
Life advice: If something sounds too good to be true, it most likely is…
10x Faster!*
*10x faster when thrown off the side of a building compared to stationary Nvidia 5090, actual clock speeds comparable.
Watch their announcement video on YouTube, it screams VC funding scam. The whole announcement is basically just the CEO sitting on a couch and making outrageous claims, while all the supporting data and graphs carry the disclaimer “pre-silicon simulated benchmarks”. They say they’re going to have a live demo this month and more in the following months, but they don’t even have real silicon, while also claiming they’re going to be shipping cards by Q4 2026.
Want to bet the “demo” is running on cloud services?
Rasterization could be simulated in software with some driver trickery, but apparently it has less FP32 performance than the 5090, so it would be significantly slower.
Still, a RISC-V based GPU is very weird; normally I hear about RISC-V being slower and less power efficient than even a CPU.
I expect it to be bottlenecked by complex BRDFs and shaders in actual path tracing workloads, but I guess we’ll see what happens.
Isn’t the whole point of RISC that it’s more power efficient?
In theory it should be able to be more power efficient. In practice, less development has been put into RISC-V CPU designs, so they are still less power efficient than Arm (and maybe even x86).
Arm chips are RISC processors, and I’m led to believe x86-64 might even be something similar under the microcode.
Yea I edited to say RISC-V specifically, thx
There is one major catch: Zeus can only beat the RTX 5090 GPU in path tracing and FP64 compute workloads because it does not support traditional rendering techniques. This means it has little to no chance to become one of the best graphics cards.
Still nice to see some competition.
Would be kinda neat to have it as a dedicated ray tracing card, like the PhysX cards of yore.
Well, until GPUs can be used as CPUs, games will not benefit from cards that are that much more powerful. I mean, whether my games look fine, good or great, when there’s a drop in performance that impacts my enjoyment, it’s always CPU load (or memory leaks); never the GPU.
You generally want to balance towards a GPU bottleneck, because, like you said, CPU bottlenecks cause real jitter and affect enjoyability more than GPU ones do.
Can’t recall the last game where the GPU was the bottleneck tbh lol. Maybe back on console…
Still, a fully path traced game without the loss in detail that comes from heavy spatial and temporal resampling would be great.
And with enough performance, we could have that in VR too. According to my calculations in another comment a while ago that I can’t be bothered to find, if this company’s claims are to be believed (unlikely), this card should be fast enough for nearly flawless VR path tracing.
It’s less exciting for gamers than it is for graphics devs, because no existing games are designed to take advantage of RT performance this high.
Press E to doubt