AMD’s Radeon RX 7900 XTX has impressed the gaming community with its inference speeds on the DeepSeek R1 AI model, outperforming NVIDIA’s GeForce RTX 4090 in key tests.
### AMD Steps Up with Advanced Support for DeepSeek’s R1 LLM Models, Delivering Top-Notch Performance
DeepSeek’s latest AI model has created quite a buzz in the tech world. While the scale of computing required to train this model is significant, AMD’s “RDNA 3” Radeon RX 7900 XTX GPU makes running it accessible to everyday users without compromising on performance. AMD recently revealed benchmarks that pitted its top RX 7000 series GPU against NVIDIA’s offering, with the former showing better results across various models.
To illustrate this, a tweet from David McAfee highlights the impressive performance of DeepSeek on the Radeon RX 7900 XTX, along with guidance on optimizing Radeon GPUs and Ryzen AI APUs for this model.
Many individuals have found success using consumer GPUs for AI workloads, largely due to their good price-to-performance ratio compared to more traditional AI accelerators. Running models locally also enhances privacy, addressing concerns often associated with AI models like DeepSeek. Luckily, AMD has released a comprehensive guide to help users run DeepSeek R1 distillations on their GPUs. Here’s a quick breakdown:
1. Update to the 25.1.1 Optional Adrenalin driver or newer.
2. Get LM Studio 0.3.8 or later from lmstudio.ai/ryzenai.
3. Set up LM Studio and bypass the onboarding screen.
4. Navigate to the discover tab.
5. Pick your DeepSeek R1 distill. Start with smaller options like Qwen 1.5B for speed; larger ones offer stronger reasoning capabilities.
6. Ensure “Q4_K_M” quantization is active and download.
7. Return to the chat tab, select the distill from the menu, and check “manually select parameters.”
8. Maximize the GPU offload layers using the slider.
9. Load the model.
10. Engage with a model powered entirely by your AMD hardware!
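Once the model is loaded, you aren’t limited to LM Studio’s chat tab: LM Studio can also expose the loaded model through an OpenAI-compatible local server (off by default; enable it from the Developer/Server tab, which listens on `http://localhost:1234` unless you change the port). As a minimal sketch, assuming the server is enabled and the model identifier below matches what LM Studio reports for your downloaded distill, you could query it from Python like this:

```python
# Sketch: query a locally loaded DeepSeek R1 distill through LM Studio's
# OpenAI-compatible local server. The model name and port are assumptions --
# check LM Studio's server tab for the actual values on your machine.
import json
import urllib.request


def build_chat_request(prompt, model="deepseek-r1-distill-qwen-1.5b"):
    """Build the JSON body for a /v1/chat/completions call."""
    return {
        "model": model,  # hypothetical identifier; use LM Studio's listed name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask(prompt, base_url="http://localhost:1234"):
    """Send one chat request to the local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# Usage (requires the LM Studio server to be running):
#   reply = ask("Summarize why local inference helps privacy.")
#   print(reply)
```

Everything stays on your machine: the prompt and the response never leave localhost, which is the privacy benefit of running the distill locally in the first place.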
If you hit any snags, don’t worry. AMD has a detailed YouTube tutorial outlining each step in depth. Following this guide will let your machine run DeepSeek’s LLMs locally and efficiently, keeping your data on your own hardware. With new GPUs from NVIDIA and AMD on the horizon, we anticipate even greater gains in inference performance, thanks to dedicated AI features tailored for these workloads.