NVIDIA – Hackaday
https://hackaday.com – Fresh hacks every day

Import GPU: Python Programming with CUDA
https://hackaday.com/2025/02/25/import-gpu-python-programming-with-cuda/
Wed, 26 Feb 2025 03:00:30 +0000

Every few years or so, a development in computing results in a sea change and a need for specialized workers to take advantage of the new technology. Whether that was COBOL in the 60s and 70s, HTML in the 90s, or SQL in the past decade or so, there’s always something new to learn in the computing world. The introduction of graphics processing units (GPUs) for general-purpose computing is perhaps the most important recent development of this kind, and if you want to pick up some new Python skills to take advantage of it, take a look at this introduction to CUDA, the platform that lets developers use Nvidia GPUs for general-purpose computing.

Of course, CUDA is a proprietary platform and requires one of Nvidia’s supported graphics cards to run, but assuming that barrier to entry is met, it’s not too much more effort to use it for non-graphics tasks. The guide takes a closer look at the open-source library PyTorch, which allows a Python developer to quickly get up to speed with the features of CUDA that make it so appealing to researchers and developers in artificial intelligence, machine learning, big data, and other frontiers of computer science. The guide describes how threads are created, how they move through the GPU and work together with other threads, how memory can be managed on both the CPU and GPU, how to create CUDA kernels, and how to manage everything else involved, largely through the lens of Python.
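To give a flavor of what that looks like in practice, here’s a minimal PyTorch sketch (our own, not from the guide) touching the main moving parts: picking a device, allocating memory on the GPU, launching a kernel-backed operation, and copying the result back to the host. The sizes are arbitrary.

```python
# Minimal PyTorch-on-CUDA sketch; sizes and operations are illustrative only.
import torch

# Pick the GPU if CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Allocate data directly in device memory
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# The matrix multiply launches CUDA kernels under the hood, with PyTorch
# handling thread/block scheduling and memory management for you
c = a @ b

# Copy the result back to host (CPU) memory when you actually need it
print(c.to("cpu").mean())
```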

Getting started with something like this is almost a requirement to stay relevant in the fast-paced realm of computer science, as machine learning has taken center stage in almost everything related to computers these days. It’s worth noting that, strictly speaking, an Nvidia GPU is not required for GPU programming like this; AMD has a GPU computing platform called ROCm which, despite being open source, still trails Nvidia in adoption and arguably in performance as well. Some other learning tools for GPU programming we’ve seen in the past include this puzzle-based tool which illustrates some of the specific problems GPUs excel at.

Laptop GPU Upgrade With Just A Little Reballing
https://hackaday.com/2024/10/02/laptop-gpu-upgrade-with-just-a-little-reballing/
Thu, 03 Oct 2024 05:00:57 +0000

Modern gaming laptops are in an uncomfortable spot – often too underpowered for the newest titles, but too bulky to be genuinely portable. It doesn’t help that they’re rarely upgradeable, so you’re stuck with what you’ve bought – unless, say, you’re a hacker equipped with some tools for PCB reflow? If that’s the case, welcome to [TechModLab]’s video showing you the process of upgrading a laptop’s soldered-on NVIDIA GPU, replacing the 3070 chip with a 3080.

You don’t need much – the most exotic tool is a BGA rework station, which holds the mainboard steady and flat while heating a specific large chip on the board with an infrared lamp from above. This one is definitely a specialty tool, but we’ve seen hackers build their own. From there, some general soldering tools like flux and solder wick, a stencil for your chip, BGA balls, and a $20 USB-C hotplate are instrumental for reballing chips – tools you ought to have anyway.

Reballing was perhaps the hardest step of the journey – instrumental for preparing the GPU before the transplant. Afterwards, only a few steps were needed – poking a BGA ball that didn’t connect, changing board straps to account for the new VRAM our enterprising hacker added alongside the upgrade, and playing with the driver install process a little. Use this method to upgrade from a lower-end binned GPU you’re stuck with, or perhaps to repair your laptop if artifacts start appearing – it’s a worthwhile reminder of the methods that laptop repair shops use on the daily.

Itching to learn more about BGAs? You absolutely should read this article series by our own [Robin Kearey]. We’ve mostly seen reballing used for upgrading RAM on laptop and Raspberry Pi boards, but seeing it used for an entire laptop GPU swap is nice – it’s the same technique, just scaled up, and you can always start by practicing at a smaller scale. It might feel like we’ve left the era of upgradable laptop GPUs behind, and today’s project might not do much to ease that worry – but the Framework 16 definitely bucks the trend.

Hacking an NVIDIA CMP 170HX Crypto GPU for EM Sim Work
https://hackaday.com/2024/09/11/hacking-an-nvidia-cmp-170hx-crypto-gpu-for-em-sim-work/
Wed, 11 Sep 2024 23:00:00 +0000

A few years back NVIDIA created a dedicated cryptocurrency mining GPU, the CMP 170HX. This was a heavily restricted version of its flagship A100 datacenter accelerator, using the same GA100 chip. It was intended to accelerate Ethash, the Ethereum proof-of-work algorithm, and nothing else. [niconiconi] bought one to use for accelerating PCB electromagnetic simulations and put a lot of effort into repairing the card, converting it to water-cooling, and figuring out how best to use this nobbled GPU.

Typically, the GA100 silicon sits at the heart of the mighty A100 GPU card and would be found in a server rack, cooled by forced air. This was not an option at home, so an off-the-shelf water-cooling block was wedged in. During this process, [niconiconi] found that the board wouldn’t power on, so they went on a deep dive into the power supply tree with the help of a leaked A100 schematic. The repair and modification details can be found in the appendix at the very end of the article – it’s a long read to get there.

This Nvidia GA100 GPU is severely crippled on this board

NVIDIA has a history of deliberately restricting silicon in consumers’ hands to justify the hefty price tags of its offerings to big businesses, and this board is no different. The plan was to restrict the board’s peak performance to applications with the same compute profile as Ethash, i.e. memory-intensive algorithms. FP64 performance was severely limited, but the instructions were not removed – such code still runs, just very slowly considering what the silicon is actually capable of.

The memory was limited to 8 GB, despite some A100 cards hosting a whopping 80 GB. The strategy was to use fuses to throttle key instructions, particularly the FP32 FMA and MAD multiply-add instructions that are crucial for general-purpose computing. Finally, the PCIe bus was nobbled to run only as a Gen 1 interface with a single lane. The lane count was reduced by removing coupling capacitors from the PCB, which means they could simply be added back later, but it’s still only a slow interface.

[niconiconi] went into great detail benchmarking the instruction types, keeping their EM simulation application in mind. After a few tweaks to make it work, they determined it was a good purchase. This article is well worth reading for all the hardcore GPU nerds out there!
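We don’t have [niconiconi]’s actual benchmark code, but as a rough illustration of the kind of measurement involved, here’s a hedged PyTorch sketch that times large matrix multiplies at two precisions on whatever CUDA device is present – on a card with fused-off math units, the crippled precision falls far short of its theoretical numbers.

```python
# Rough sketch (not [niconiconi]'s code): compare matmul throughput at two
# precisions on a CUDA device. Assumes a CUDA-capable GPU is installed.
import time
import torch

def bench(dtype, n=4096, iters=10):
    a = torch.randn(n, n, dtype=dtype, device="cuda")
    b = torch.randn(n, n, dtype=dtype, device="cuda")
    torch.cuda.synchronize()           # make sure setup work is finished
    start = time.time()
    for _ in range(iters):
        a @ b                          # kernel launches are asynchronous...
    torch.cuda.synchronize()           # ...so wait for them before stopping the clock
    elapsed = time.time() - start
    flops = 2 * n**3 * iters           # ~2*n^3 floating-point ops per matmul
    return flops / elapsed / 1e12      # TFLOP/s

print(f"FP32: {bench(torch.float32):.2f} TFLOP/s")
print(f"FP64: {bench(torch.float64):.2f} TFLOP/s")
```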

If you need a primer on GPU mining, we’ve got you covered. Once you’ve understood proof-of-work crypto, perhaps take a look at Chia?

Thanks to [gnif] for the tip!

MXM: Powerful, Misused, Hackable
https://hackaday.com/2024/04/18/mxm-powerful-misused-hackable/
Thu, 18 Apr 2024 14:00:02 +0000

A standard-compliant MXM card installed into a laptop, without heatsink

Today, we’ll look into yet another standard in the embedded space: MXM. It stands for “Mobile PCI Express Module”, and is basically intended as a GPU interface for laptops with PCIe, but there’s way more to it – it can work for any high-power high-throughput PCIe device, with a fair few DisplayPort links if you need them!

You will see MXM sockets in older generations of laptops, barebones desktop PCs, servers, and even automotive computers – certain generations of Tesla cars used to ship with MXM-socketed Nvidia GPUs! Given that GPUs are in vogue today, it pays to know how you can get one in a low-profile form factor and avoid putting a giant desktop GPU inside your device.

I only had a passing knowledge of the MXM standard until a bit ago, but my friend, [WifiCable], has been playing with it for a fair bit now. On a long Discord call, she guided me through all the cool things we should know about the MXM standard, its history, compatibility woes, and hackability potential. I’ve summed all of it up into this article – let’s take a look!

This article has been written based on info that [WifiCable] has given me, and it’s certainly not the last one where I interview a hacker and condense their knowledge into a writeup. If you are interested, let’s chat!

Simple Wireup, Generous Payoff

Yes, an Intel A380m card in MXM format

An MXM card has a whole side dedicated to its gold finger PCB edge connector. With 285 pins, there are a whole lot of interfaces you can get out of these, and all of them are within hobbyist reach! To make an MXM card work, you don’t need much, either.

For an MXM card to work, first, you need to be able to provide between 60 W and 100 W of power, with the ability to impose a power consumption limit on the card. The standard says that the voltage can be anywhere from 7 V to 20 V. This is obviously intended for laptop use, where the main power rail can either be at charger voltage or battery voltage, and it results in high efficiency – you don’t need a separate buck-boost regulator for, say, 12 V.

Then, you need a PCIe link of up to x16, but because PCIe is cool like that, even an x1 link will work as long as you won’t be sad if the GPU is bottlenecked by it. You also might need to set up a few control GPIOs, like the card enable pin, and the power limit pin that tells the card whether it should run in lower-power mode or not. Plus, for some cards, you might need to give the card 5 V at an amp or two – the standard requires that, but it’s not clear why. Technically, you can even connect an MXM card to a Raspberry Pi 5 or CM4, as long as you can procure enough power from some external source – if you want a low-footprint GPU paired with a Pi, MXM puts that firmly within your reach.

In return, you get a wide array of interfaces. The coolest part is, undoubtedly, DisplayPort. You can get up to six 4-lane DP links out of an MXM card, as long as the GPU chip is okay with it. You might also be able to get VGA, LVDS, and even HDMI/DVI. MXM GPUs do support DP++, a DisplayPort mode that outputs HDMI-compatible signals, and you only need a few external components.

You also get a good few low-level interfaces, both for practical and debug purposes. Need to control a small fan? There’s a PWM output you might be able to use for fan control, and a tach signal input! Backlight control for an LCD panel you’ve wired up? There’s PWM for that too. Want to poke at the GPU’s JTAG? The MXM socket has pins defined for that. It’s up to individual cards whether they support much of what the MXM standard defines, so you might still benefit from a small MCU, but having these things seriously helps in embedded applications.

Speaking of JTAG and vendor freedom, of course, there are OEM pins – since anyone can produce MXM GPUs and systems, and the MXM standard has lasted for decades now, manufacturers like to put their own spin on it. You can often figure things out from MXM-equipped laptop schematics, and sometimes it’s necessary to check a few. See, giving freedom to individual implementers is a double-edged sword, and MXM is an outstanding illustration of how modular standards can go wrong for regular users.

Compatible, Mostly

Looking at MXM, you might rejoice – thinking about upgrading and repairing your laptop well beyond the few years that the warranty period covers. However, manufacturers are not exactly interested in that. For them, the incentive structure for using MXM is usually completely different.

For a start, producing a board with five BGAs can in certain cases be easier than producing a board with fifteen, which is what you often end up with if you have to put a GPU and its RAM on the mainboard instead of on an MXM module. And, for offering multiple GPU configurations of the same model in a way that lets the manufacturer cover multiple points on the supply-demand chart, it might just be easier to produce an array of MXM cards and then pair them with an array of GPU-less mainboards in their own configurations. Not always, though – which is part of why you don’t see it as much lately.

This is not a standard-defined shape for an MXM card.

So, while you might like upgradability and repairability, you might find that MXM GPUs are not often offered as replacement parts for sale. And, what’s worse, if you’ve found an MXM card available for a different laptop, there’s no guarantee it will fit.

For instance, some cards follow the MXM 3.0 standard, while others are MXM 3.1, with slight but important differences like support for two DP ports on the LVDS pins. However, most of the real-world differences come either from a lack of standardization or from manufacturers straight up ignoring the standard.

The first hurdle is the most obvious, and that is the mechanical footprint. The MXM standard defines two possible card shapes, the A variant and the B variant, including things like heatsink and retention screw hole layout, and even component height for heatsink compatibility purposes. Many laptop manufacturers ignore these rules, producing cards of wacky shapes, or worse, shapes that almost match but are incompatible in subtle yet severe ways.

Then, there’s the VBIOS and driver problems. Many MXM cards have an onboard BIOS chip, whereas other cards rely on the laptop to feed them their BIOS during boot. If your card is of the latter type, you might need to add a UEFI module or hack the code. Alternatively, some cards ship with unpopulated flash chip footprints or unflashed chips on them, so you can give a BIOS to your card with a bit of soldering and flashing, as long as you can find an image that works.

As for drivers, Nvidia stands out there. Many Windows Nvidia drivers for MXM cards run hardware checks that tie the cards to the hardware IDs of specific laptops, and refuse to install if the card sits in a laptop it wasn’t expected to appear in. You used to be able to work around this, but nowadays the driver signing mechanism severely limits what you can do – a mechanism that, in Windows, has no sane leeway for user-tweaked drivers and, as such, acts as an effective form of proprietary vendor lock-in. So, if you want to upgrade your Nvidia MXM card and you run Windows, you might run into a bit of a brick wall.

Some Outright Hostile

Continuing this line of reasoning, there are slots that look like MXM but aren’t MXM, and I’m not talking about SMARC, which is a fun SoM standard reusing MXM slots, just like Pi Compute Modules reuse DDR sockets. No, I’m talking about manufacturers like Lenovo, who have added MXM-socketed GPUs to some of their more recent laptops, but with completely different pinouts. At least they don’t advertise their slots as MXM, which is a bonus.

Where are the power pins? Who knows!

Still, these cards are easy to confuse for actual MXM, and they fit into the slot all the same. The most fiery factor is the power pin layout – a mind-boggling change made on some laptop models that can destroy your card and laptop even if the card fits mechanically. On one side of the MXM card, there’s an array of power pins – a matching number of VIN and GND pins, often visible as a single large gold finger. For some unimaginable reason, a few manufacturers have made cards that remap the entire pinout and specifically put those power pins on the opposite side.

The pinout swapping is bad enough, but it’s the power pin swapping that really gets us – and gets every piece of tech involved to release the magic smoke, too. And then, there are the few outright criminal cases where manufacturers have put power pins on both sides of the pinout. You can easily notice this when you look at your card, but you have to know to look out for it.

The MXM standard can’t prevent most of these problems, and whatever it tries to limit, laptop manufacturers can freely bypass. There’s no certification or compliance checks; fundamentally, in laptops, MXM isn’t used for your convenience – it’s used for the convenience of the manufacturer. If you look at your old MXM-equipped laptop and think that you might be able to upgrade its GPU, remember that there’s more than meets the eye.

All of these things, of course, don’t mean that you can’t hack on MXM otherwise. Just remember that whatever you build might be more specific to a certain breed of MXM slots in certain laptop lineups than to MXM as a standard.

Still Hackable Anyway

How about a few good MXM hacks to show you what you can do? Remember, fundamentally, MXM is a high-power connection with a high-bandwidth PCIe link on it, which lets you pull some wonderful tricks!

For instance, here’s an MXM adapter for certain kinds of iMacs that lets you install an NVMe SSD into the MXM slot of your trusty iMac while preserving the MXM GPU connections! It involves changing a chipset strap to enable bifurcation, so there’s no power-hungry PCIe switch involved, and going from x16 to x8 on your MXM GPU won’t involve any notable bandwidth loss either. So, you can replace your SATA HDD or SSD with a speedy modern NVMe drive, which is probably way cheaper too!

It wouldn’t be hard to make a generic MXM to NVMe adapter, either – and [WifiCable] has a template KiCad project for you. Just like mPCIe and M.2 cards, an MXM card is, after all, just a 1.2 mm thick PCB. You might be worried about leaving your laptop GPU-less, but many laptops with MXM cards still have an iGPU that is enabled whenever the MXM card is removed – though that’s not a guarantee. We might see an MXM to OCuLink adapter too, at some point!

There are also a few adapters for reusing MXM cards on the market, cheap and expensive alike. That kind of adapter is good for checking any MXM cards you have lying around, and on the cheap ones, you might even be able to solder the extra HDMI port on, as long as you get 5 V from somewhere. Sadly, none of them are open-source – yet.

This is an MXM tinkering adapter board from [WifiCable], exposing as much of MXM as humanly possible, with a wide range of power input options. Every single option lives on either pin headers or SMD resistors, able to satisfy whichever obscure feature an MXM card might need and tap into interfaces that manufacturers don’t expect you to tap. It’s a decently complex design, still yet to be polished, and it’s a 6-layer board big enough to go over a good few price breaks at any PCB fab – we’ve both learned a ton about high-speed design as [WifiCable] went about it. However, when it comes to playing with different MXM cards, exploring manufacturer differences, and tinkering with card compatibility, this is as good a testbench board as anyone can build!

Want to build your own MXM stuff, whether cards or card-carrying PCBs? Here’s a socket on LCSC, and with easyeda2kicad, you can easily get a footprint and 3D model for it. As for designing your own card or getting the [generic] pinout, you can find the MXM standard by looking up MXM_Specification_v31_r10.pdf.

Gone But Not Forgotten

DGFF card

Sadly, with the trend of making laptops thinner, we’ve been losing MXM, and the companies involved in defining the standard have not been all that interested in updating it, or even adhering to it for that matter. Nevertheless, due to industrial use of MXM, you can still find many modern cards in MXM format!

Furthermore, the spirit of MXM lives on. The proprietary DGFF standard is superseding MXM in Dell laptops – it’s thinner, but it provides fundamentally the same functionality that MXM does. The same goes for the Framework 16 expansion bay modules – you could easily make an MXM to expansion bay card, and [WifiCable] has made a KiCad sketch of one too!

For now, we still have laptops with MXM and almost-MXM cards around, and if you ever look into tinkering with those, you now have a better roadmap towards that. Despite the prevalence of soldered-on GPUs in laptops, the concept of GPU modules isn’t about to die out, and companies still put “GPU module” on the whiteboards every now and then during their product design processes.

NVIDIA Trains Custom AI to Assist Chip Designers
https://hackaday.com/2023/11/11/nvidia-trains-custom-ai-to-assist-chip-designers/
Sat, 11 Nov 2023 12:00:00 +0000

AI is big news lately, but as with all new technology moves, it’s important to pierce through the hype. Recent news about NVIDIA creating a custom large language model (LLM) called ChipNeMo to assist in chip design is tailor-made for breathless hyperbole, so it’s refreshing to read exactly how such a thing is genuinely useful.

ChipNeMo is trained on the highly specific domain of semiconductor design via internal code repositories, documentation, and more. The result is a vast 43-billion parameter LLM running on a single A100 GPU that actually plays no direct role in designing chips, but focuses instead on making designers’ jobs easier.

For example, it turns out that senior designers spend a lot of time answering questions from junior designers. If a junior designer can ask ChipNeMo a question like “what does signal x from memory unit y do?” and that saves a senior designer’s time, then NVIDIA says the tool is already worth it. In addition, it turns out another big time sink for designers is dealing with bugs. Bugs are extensively documented in a variety of ways, and designers spend a lot of time reading documentation just to grasp the basics of a particular bug. Acting as a smart interface to such narrowly-focused repositories is something a tool like ChipNeMo excels at, because it can provide not just summaries but also concrete references and sources. Saving developer time in this way is a clear and easy win.
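NVIDIA hasn’t published ChipNeMo’s internals here, but the “smart interface to a pile of documents” behavior described above is retrieval-augmented generation in miniature: find the most relevant internal documents, hand them to the model, and return an answer along with its sources. Here’s a deliberately tiny, hypothetical Python sketch of that pattern – the toy bug “database”, the keyword scoring, and the placeholder where the LLM call would go are all ours, not anything NVIDIA uses.

```python
# Toy sketch of the retrieval-augmented pattern described above.
# The bug reports and scoring are placeholders for illustration only.
bug_reports = {
    "BUG-1234": "Signal mem_rd_valid from memory unit deasserts one cycle early under back-pressure.",
    "BUG-5678": "Arbiter starves low-priority requestors during back-to-back write bursts.",
}

def retrieve(question, docs, top_k=1):
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer(question):
    sources = retrieve(question, bug_reports)
    context = "\n".join(f"[{bug_id}] {text}" for bug_id, text in sources)
    # A real system would now send `context` + `question` to the LLM and return
    # its summary along with the [BUG-xxxx] references it drew from.
    return f"Context handed to the model:\n{context}"

print(answer("why does mem_rd_valid deassert early?"))
```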

It’s an internal tool and partly a research project, but it’s easy to see the benefits ChipNeMo can bring. Using LLMs trained on internal information for internal use is something organizations have experimented with before (Mozilla did so, for example, while explaining how to do it yourself), but it’s interesting to see a clear roadmap to assisting developers in such concrete ways.

Here’s Why GPUs Are Deep Learning’s Best Friend
https://hackaday.com/2023/09/03/heres-why-gpus-are-deep-learnings-best-friend/
Sun, 03 Sep 2023 08:00:56 +0000

If you’re curious about how fancy graphics cards actually work, and why they’re so well-suited to AI-type applications, take a few minutes to read [Tim Dettmers]’ explanation of why this is so. It’s not a terribly long read, and while it does get technical there are also car analogies, so there’s something for everyone!

He starts off by saying that most people know that GPUs are scarily efficient at matrix multiplication and convolution, but what really makes them most useful is their ability to work with large amounts of memory very efficiently.

Essentially, a CPU is a latency-optimized device while GPUs are bandwidth-optimized devices. If a CPU is a race car, a GPU is a cargo truck. The main job in deep learning is to fetch and move cargo (memory, actually) around. Both devices can do this job, but in different ways. A race car moves quickly, but can’t carry much. A truck is slower, but far better at moving a lot at once.

To extend the analogy, a GPU isn’t actually just a truck; it is more like a fleet of trucks working in parallel. When applied correctly, this can effectively hide latency in much the same way as an assembly line. It takes a while for the first truck to arrive, but once it does, there’s an unbroken line of loaded trucks waiting to be unloaded. No matter how quickly and efficiently one unloads each truck, the next one is right there, waiting. Of course, GPUs don’t just shuttle memory around, they can do work on it as well.
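To put a rough number on the truck-fleet analogy, here’s a hedged PyTorch sketch (ours, not [Tim Dettmers]’) that times the same large matrix multiply on the CPU and then on a CUDA GPU, if one is present. The matrix size is arbitrary and the exact ratio will vary wildly between machines, but the gap is usually dramatic.

```python
# Rough illustration of the parallelism/bandwidth gap: time one large
# matrix multiply on the CPU, then on the GPU (if available).
import time
import torch

n = 4096
a_cpu = torch.randn(n, n)
b_cpu = torch.randn(n, n)

start = time.time()
a_cpu @ b_cpu
print(f"CPU: {time.time() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a_cpu.cuda(), b_cpu.cuda()
    torch.cuda.synchronize()
    start = time.time()
    a_gpu @ b_gpu
    torch.cuda.synchronize()   # GPU work is asynchronous, so wait before timing
    print(f"GPU: {time.time() - start:.3f} s")
```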

The usual configuration for deep learning applications is a desktop computer with one or more high-end graphics cards wedged into it, but there are other (and smaller) ways to enjoy some of the same computational advantages without eating a ton of power and gaining a bunch of unused extra HDMI and DisplayPort jacks as a side effect. NVIDIA’s line of Jetson development boards incorporates the right technology in an integrated way. While they might lack the raw horsepower (and power bill) of a desktop machine laden with GPUs, they’re no slouch for their size.

A Dedicated GPU For Your Favorite SBC
https://hackaday.com/2023/05/05/a-dedicated-gpu-for-your-favorite-sbc/
Sat, 06 May 2023 05:00:09 +0000

The Raspberry Pi is famous for its low cost, versatile and open Linux environment, and plentiful I/O, making it a perfect device not only for its originally-intended educational purposes but for basically every hobbyist from gardeners to roboticists to amateur radio operators. Most builds tend to make use of the GPIO pins, which allow easy connections to various peripherals and sensors, but the Pi also supports PCIe devices, which means that, in theory, it could use a GPU in much the same way that a modern computer would. After plenty of testing and development, [Jeff Geerling] brings us this custom graphics card interface for the Raspberry Pi.

The testing for all of these graphics cards has been done with a Pi Compute Module 4, and the end result is an interface device that looks much like a graphics card itself. It breaks the PCIe bus out onto a more familiar x16 slot connector and adds physical connections for power, USB, and Ethernet. When plugged into the carrier board, the Compute Module can be attached to any of a number of graphics cards, including the latest and highest-end Nvidia and AMD offerings.

Perhaps unsurprisingly, though, the 4090 and 7900 cards don’t work with the Raspberry Pi. This is partially due to the 32-bit limitations of the Pi and other memory mapping issues, but even after attempting some workarounds, Nvidia’s cards aren’t open-source enough to test properly (although the card is recognized by the Pi), and AMD’s drivers crash the system even after compiling a custom kernel. [Jeff] did find an Nvidia card that worked, although it requires using the USB interface, and second-hand cards are selling for around $3000 USD. For a more economical choice, there are some other graphics cards he was eventually able to get working, albeit not with perfect performance, including some of the ones we’ve seen him test already.
