It does! I think it has *several*; they started with OpenCL, and IIRC they came out with something new more recently. OpenCL was just more of a pain in the ass to use compared to CUDA (and it had less-than-stellar support on Nvidia cards despite being an open standard, I wonder why). The result is that the vendor-locked CUDA became much more popular, and now there is a lot of inertia behind it.
It doesn't help that Nvidia actively blocks efforts to create compatibility layers that translate CUDA calls into frameworks AMD and Intel cards can use, and in general defends its monopoly tooth and nail.
It does with ROCm, but Keras/TensorFlow (Google), which most deep learning/AI applications ran on before PyTorch 2.0 (Facebook/Meta), only worked with CUDA. With PyTorch 2.0 it doesn't matter whether you use CUDA or ROCm, so AMD GPUs can be used for deep learning, though Nvidia's GPU architecture still has a slight edge.
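To illustrate the "it doesn't matter whether you use CUDA or ROCm" point: ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` API, so one code path covers both vendors. A minimal sketch (the `pick_device` helper is my own illustration, not a PyTorch API, and it falls back to CPU if PyTorch isn't installed):

```python
def pick_device() -> str:
    """Return 'cuda' when a CUDA or ROCm GPU is usable, else 'cpu'.

    On ROCm builds of PyTorch, torch.cuda.is_available() also
    reports AMD GPUs, so this one check covers both vendors.
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed in this environment

    return "cuda" if torch.cuda.is_available() else "cpu"


device = pick_device()
# A tensor created this way lands on whichever backend is present,
# e.g.: torch.ones(3, device=device)
print(device)
```

The same script runs unchanged on an Nvidia card, an AMD card with a ROCm build of PyTorch, or a machine with no GPU at all.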
They do. It is called ROCm, which is short for Radeon Open Compute. Unlike CUDA, it is an open source framework, and this has been part of AMD's strategy for compute acceleration. Nvidia keeps its framework closed source to lock software into CUDA, making it harder to use alternatives like OpenCL and therefore harder to switch to AMD or Intel hardware later on. AMD embraces open source, which makes it easier to extend the framework with missing functionality and lets users read the source code to understand how to interact with it. It also means that OpenCL developers have been able to observe ROCm's behavior and replicate it.
There is also a huge difference in marketing for these frameworks. Nvidia spends a lot of money promoting CUDA, whereas most people who have heard of OpenCL heard about it organically. AMD spends few resources marketing ROCm and tends to mention it only when promoting its hardware, not on its own.