I hope this can help dispel the misconception that GPUs are only good at linear algebra and FP arithmetic, which I've been hearing a whole lot!
Edit: learned a bunch, but the "uniform" registers and 64-bit (memory) performance are some easy standouts.
remcob 4 hours ago [-]
It’s well known GPUs are good at cryptography. Starting with hash functions (e.g. crypto mining) but also zero knowledge proofs and multi party computation.
YetAnotherNick 46 minutes ago [-]
In a sense, GPUs are only great at matrix-matrix multiplication. For anything else you'd get only about 7% of the FLOP/s (67 vs. 989 TFLOP/s on an H100) [1].
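For scale, here's the arithmetic behind that ~7% figure, a quick sketch using only the TFLOP/s numbers quoted above (the underlying specs are from the linked H100 page):

```python
# Sanity check of the ~7% claim using the comment's own numbers:
# 989 TFLOP/s for dense tensor-core matmul vs 67 TFLOP/s general FP.
matmul_tflops = 989
general_tflops = 67
ratio = general_tflops / matmul_tflops
print(f"{ratio:.1%}")  # prints "6.8%"
```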
lol, I hadn't thought about it like that, but true. Though of course, I mean compared to CPUs :P
I try and use tensor cores for non-obvious things every now and then. The most promising so far seems to be for linear arithmetic in Datalog, but that's just matrix-vector/gemv
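To make that concrete, here's a hedged sketch (not the commenter's actual code, and on CPU via NumPy rather than tensor cores) of how a Datalog rule can be evaluated as the matrix-vector product mentioned: the rule `path(x) :- edge(x, y), path(y)` iterated to a fixpoint, where each iteration is one gemv over a boolean semiring:

```python
import numpy as np

# Toy graph: edge(0,1), edge(1,2), edge(2,3), encoded as an adjacency matrix.
n = 4
edge = np.zeros((n, n), dtype=np.int64)
edge[0, 1] = edge[1, 2] = edge[2, 3] = 1

# Base fact: path(3).
path = np.zeros(n, dtype=np.int64)
path[3] = 1

# Iterate path(x) :- edge(x, y), path(y) to a fixpoint. Each step is one
# matrix-vector product; clamping to {0, 1} makes + act as logical OR,
# so this is gemv over the boolean semiring -- the shape of workload the
# comment suggests offloading to tensor cores.
while True:
    new = np.minimum(path + edge @ path, 1)
    if np.array_equal(new, path):
        break
    path = new

# path is now [1, 1, 1, 1]: every node can reach node 3.
```

The same iteration with `edge @ edge`-style products (gemm instead of gemv) computes full transitive closure, which is where tensor-core throughput would actually pay off.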
qwertox 4 hours ago [-]
Wasn't it well known that CUDA cores are programmable cores?
winwang 3 hours ago [-]
Haha, if you're the type to toss out the phrase "well known", then yes!
kookamamie 2 hours ago [-]
> NVIDIA RTX A6000
Unfortunately that's already two generations behind the latest GPUs. After the A6000 came the RTX 6000 Ada, then the RTX Pro 6000.
pjmlp 2 hours ago [-]
Still better than most folks have access to.
I bet I can do more CUDA with my lame GeForce MX 150 from 2017 than most people can do with whatever hardware they can reach for to run ROCm, and that is how Nvidia keeps being ahead.
Nvidia's Quadro naming scheme really is bad these days, isn't it?
I bet there are plenty of papers out there claiming to have used a RTX 6000 instead of a RTX 6000 Ada gen.
kookamamie 11 minutes ago [-]
The naming scheme is horrible, to be quite frank.
To understand this, consider these names in the order of release time: Quadro RTX 6000, RTX A6000, RTX 6000 Ada, RTX Pro 6000, RTX Pro 6000 Max-Q.
gitroom 2 hours ago [-]
Haha honestly I always thought GPUs were mostly number crunchers, but there's way more under the hood than I realized. Wondering now if anyone really gets the full potential of these cores, or if we're all just scratching the surface most days?
gmays 5 hours ago [-]
The special sauce:
> "GPUs leverage hardware-compiler techniques where the compiler guides hardware during execution."
[1]: https://www.nvidia.com/en-in/data-center/h100/
A6000 was released in 2020: https://www.techpowerup.com/gpu-specs/rtx-a6000.c3686