ZLUDA Ports CUDA Applications To AMD GPUs
Written by Nikos Vaggalis   
Thursday, 18 April 2024

ZLUDA is a translation layer that lets you run unmodified CUDA applications with near-native performance on AMD GPUs. But it is walking a fine line with regard to legality.

NVIDIA's CUDA toolkit provides a development environment for speeding up computing applications by harnessing the power of GPUs. However, the GPUs it targets must be CUDA-enabled, which excludes AMD's.

To run CUDA-based applications on non-NVIDIA GPUs you must either recompile the code or use a translation layer. Either way, you can then run NVIDIA-only applications on AMD hardware.
Such applications include, for instance, Geekbench, 3DF Zephyr, Blender, Reality Capture, LAMMPS, NAMD, waifu2x and OpenFOAM. ZLUDA is focused on the end users of those applications. An end user means someone who uses a CUDA program, e.g. a 3D artist using Blender. A developer creating a CUDA application is not catered for, for the time being, due to time constraints.

The first approach, recompiling the code, is the tougher of the two and requires a lot of know-how. The ZLUDA approach means you can run those binaries as-is, since at the other end they are translated into the GPU instructions of the target hardware.
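The translation-layer principle can be pictured with a toy sketch: expose the same entry points the unmodified application already calls, and forward each one to an equivalent call in the native API underneath. This is not ZLUDA's actual code, and both function names below are hypothetical stand-ins, but it shows the shape of the idea in Rust, the language ZLUDA is written in.

```rust
// Toy sketch of the translation-layer idea (not ZLUDA's real code).
// The application calls what it believes is the CUDA driver API;
// the layer forwards each call to an equivalent native AMD call.

// Stand-in for the native AMD-side allocator (hypothetical name).
fn hip_malloc(bytes: usize) -> Result<u64, String> {
    if bytes == 0 {
        return Err("zero-sized allocation".to_string());
    }
    Ok(0x1000) // pretend device pointer
}

// The CUDA-shaped entry point the unmodified application links against.
// A real layer would match the CUDA driver's C ABI exactly;
// this sketch keeps everything in safe Rust.
fn cu_mem_alloc(bytes: usize) -> Result<u64, String> {
    // Translate the call: same semantics, different backend.
    hip_malloc(bytes)
}

fn main() {
    let ptr = cu_mem_alloc(1024).expect("allocation failed");
    println!("device pointer: {:#x}", ptr);
}
```

The application never knows the difference: it asked for a CUDA allocation and got back a valid device pointer, just one that lives on AMD hardware.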

A CUDA application ships with GPU code which can be compiled either into PTX or into SASS. The difference is that PTX is a textual assembly not specific to a particular NVIDIA GPU architecture (though it is still specific to NVIDIA GPU hardware capabilities), while SASS is a binary assembly specific to a particular NVIDIA GPU architecture.

The majority of applications ship their GPU code as PTX, which is forward compatible with future GPU architectures. For all those reasons, ZLUDA's compiler only supports PTX. The compiler accepts PTX and produces AMD GPU binary code through a sequence of passes, where the output of each pass is the input to the next. To cut a long story short, the principle ZLUDA abides by is similar to that of Wine or WSL.
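That pass sequence can be sketched as a chain of functions where each one's output feeds the next. The pass names below are made up for illustration and are not ZLUDA's actual passes; the point is only the chaining structure.

```rust
// Toy sketch of a compiler built as a sequence of passes, where the
// output of each pass is the input to the next. Pass names are
// hypothetical, not ZLUDA's actual passes.

type Pass = fn(String) -> String;

// Pretend front end: parse the PTX text into an AST.
fn parse_ptx(src: String) -> String {
    format!("ast({})", src.trim())
}

// Pretend middle end: lower the AST to an intermediate representation.
fn lower_to_ir(ast: String) -> String {
    format!("ir({})", ast)
}

// Pretend back end: emit AMD GPU binary code.
fn emit_amdgpu(ir: String) -> String {
    format!("amdgcn({})", ir)
}

fn compile(src: &str) -> String {
    let passes: [Pass; 3] = [parse_ptx, lower_to_ir, emit_amdgpu];
    // Thread the code through every pass in order.
    passes.into_iter().fold(src.to_string(), |code, pass| pass(code))
}

fn main() {
    // prints: amdgcn(ir(ast(.visible .entry add())))
    println!("{}", compile(".visible .entry add()"));
}
```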

ZLUDA itself is written in Rust. As such, building the project from source requires running cargo build, with the following installed on your machine:

  • Git
  • CMake
  • Python 3
  • Rust (1.66.1 or newer)
  • C++ compiler
  • (Windows only) Recent AMD Radeon Software Adrenalin

Alternatively, if you are building for Linux, there are various developer Dockerfiles with all the required dependencies.

The easiest way, of course, is to download the pre-built binaries from the GitHub repo and then follow the instructions for your platform. For instance, on Windows from the command line:

<ZLUDA_DIRECTORY>\zluda.exe -- <APPLICATION> <APPLICATION_ARGUMENTS>

with <ZLUDA_DIRECTORY> being the ZLUDA directory you have just unpacked.
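On Linux there is no launcher executable; instead, the repo's instructions (which may change between releases, so check the README for your version) have you point the dynamic linker at the unpacked directory, so that ZLUDA's CUDA-compatible libraries are loaded in place of NVIDIA's:

LD_LIBRARY_PATH="<ZLUDA_DIRECTORY>:$LD_LIBRARY_PATH" <APPLICATION> <APPLICATION_ARGUMENTS>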

You can now run Blender on AMD GPUs!

That might all be fine, but the toolkit faces potential legal issues. NVIDIA has banned running CUDA-based software on other hardware platforms using translation layers, but that restriction was previously only stated online and until recently wasn't included in the downloaded software. However, that fact has not stopped the project from accumulating more than 7.5K stars on GitHub.

Regardless, if the adventure succeeds and ZLUDA manages to cater for developers as well, then a new way opens up for writing CUDA-compatible code for AMD GPUs, and not just in C++. This means Python too, as we examined in "Program Deep Learning on the GPU with Triton".

As C++ is not user-friendly and is difficult to master, these properties subsequently rub off on the toolkit itself. Wouldn't it be easier to write your GPU-accelerated deep learning applications in a more user-friendly language? This wish has been granted by OpenAI, which announced:

We’re releasing Triton 1.0, an open-source Python-like programming language which enables researchers with no CUDA experience to write highly efficient GPU code—most of the time on par with what an expert would be able to produce.

But to further clarify the "alternative to the CUDA toolkit" claim: while Triton enables researchers with no CUDA experience to write GPU computing and deep learning applications without needing the CUDA toolkit, the GPUs they target must still be CUDA-enabled. AMD support is not included in the project's short-term plans.

But this could now change thanks to ZLUDA.

 

More Information

ZLUDA on GitHub

Related Articles

Program Deep Learning on the GPU with Triton


