AMD Radeon RX 6700 XT Graphics Card Running on RISC-V System for the First Time

Peter_Brosdahl

[Angled render of the AMD Radeon RX 6700 XT. Image: AMD]



Computer scientist René Rebe has managed to pair a RISC-V processor with an off-the-shelf gaming GPU for the first time. Using the latest desktop-level development board from SiFive, the HiFive Unmatched, Rebe patched the Linux kernel to support an AMD Radeon RX 6700 XT. Although functionality is limited, the achievement is another milestone for the RISC-V architecture. It was recently rumored that Intel may have made an offer to purchase SiFive. This goes to show that the 64-bit platform has potential beyond microcontrollers and low-power devices.









It took him ten hours to add support for the AMD Radeon RX 6700 XT graphics card and the Mesa Gallium 21.1.5 driver to the Linux system. Not only does it allow the Radeon RX 6700 XT to display...

Continue reading...


 
I'm going to assume that the AMD GPU was used because the open-source driver is fairly mature, meaning it was the easiest to compile for RISC-V and get running.

But to me, the interesting part is having a relatively complicated software stack running on RISC-V in the first place, given the nature of the architecture. Bravo!
 
Why is it that it now seems much easier to program and make software for other architectures? I remember back in the late '90s and early 2000s, any discussion of another architecture was like, oh my goooooood, this is sooo hard, you know, making this software, my god!!!... Even with the unquestionably backed Itanium, it was like, oh my! Maybe in a billion years, we don't know. ... Now everyone is like, meh, it can be done, it's a matter of... basically what it always was: deciding to do it. But I remember thinking initially back then, what's the big deal? Go in and create the software, pay a team and whatnot. But as I read over and over about how difficult it was, after some time I became convinced, yeah, new architecture, must be some kind of next-level, near-impossible task. Then Google came, and Windows RT came out, and HP with webOS, and suddenly ARM was like, meh, whaddya know! If you do it, it happens, kind of like magic of sorts. RISC-V has been out, and so many are like, meh, we will do it, we might do it if needed... Totally different from oooohhhh myyyy goooooood, everything is soooooo hard, I mean a whoooole new, I mean imagine, a new immature architecture, I don't think it's the right moment, blah blah blah... Am I wrong and revising history?
 
Am I wrong and revising history?
Nah, on self-reflection, I think you're right - but perhaps there's a little more to it?

A big part, in my opinion, has been drivers. Most of the code in the Linux kernel is driver code. That's also what's made Linux so very portable: with the drivers being complete (enough), new architectures are a kernel compile away.

That's still simplifying it a lot, of course. The big thing to keep in mind is that ARM (the most natural comparison) and RISC-V are both built for low-power-draw applications but are simultaneously built to be fully featured CPUs.

And with RISC-V, the Linux kernel was the target OS, at least inasmuch as the Linux kernel is the most accessible target in terms of having the code available and having drivers already in the tree.

Last, the Mesa drivers actually being useful is a more recent development. That's what really made the effort of running an AMD GPU on RISC-V possible, as AMD's and Nvidia's own drivers for Linux are closed-source. Intel would have been another option if their hardware were available, but it's currently only widely available as integrated graphics, which would defeat the purpose of this test :).
 
Well, when moving to a new architecture, there are three main impediments:

First is a compiler. ~Most~ code is written at a high level. Even in Linux, pretty much everything but the first stage of the bootstrap loader is written in some version of C and compiled to machine code. So if you have a compiler, most of that high-level code will translate right over with only minor changes and optimization needed.
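As a rough sketch of that point (the file and function names here are made up for illustration, and I'm assuming the usual GNU cross-toolchain naming, e.g. riscv64-linux-gnu-gcc for RISC-V), the exact same portable C source builds for x86-64 or RISC-V; only the compiler you invoke changes:

    /* checksum.c - a made-up, architecture-neutral helper. Nothing here
     * depends on the CPU it runs on: build it natively with gcc for
     * x86-64, or cross-build it with riscv64-linux-gnu-gcc, and the
     * same source produces the same behavior. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Fold a byte buffer into a 32-bit value with shifts and XORs -
     * fixed-width types keep the result identical on every target. */
    static uint32_t checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = (sum << 1) ^ data[i];
        return sum;
    }

    int main(void)
    {
        const uint8_t msg[] = "hello, risc-v";
        printf("checksum: 0x%08x\n", checksum(msg, sizeof(msg) - 1));
        return 0;
    }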

Second would be the endianness of the architectures. To start with, I'm assuming binary - if you had something else, ternary, quantum, etc., then that also falls into this category for similar reasons. Endianness is the order in which the bytes of a multi-byte value get laid out in memory and moved over a bus: least-significant byte first, or most-significant byte first. There is no standard, per se, apart from how you design the architecture to work. The 32-bit value 0x0A0B0C0D sits in memory as 0D 0C 0B 0A on a little-endian machine and as 0A 0B 0C 0D on a big-endian one. This can not only affect software, but hardware that plugs in (like via a PCI bus). If two different architectures are the same endianness, then largely it's just a matter of drivers to get hardware support for everyday things. If not, then... it gets complicated. PPC was different from x86, and that's why back in the PPC days you needed special versions of video cards to work on Macs - any old off-the-shelf card wouldn't work with a PPC Mac.

x86 is "little endian". ARM is actually biendian, meaning you can program it to work either way, but it largely is operated in little endian mode. In little endian, 011 would be 3, not 6. Given that ARM and x86 can share the same endianness, getting most hardware intended for use on x86 to work on ARM should be fairly straight forward.

Lastly would be the "chicken and the egg" syndrome. You can't really use a new architecture unless there are things to do with it. That means it needs drivers, it needs software, and it needs an operating system of some sort. You can't get any of that until you've solved problems #1 and #2; then you build up some critical mass of developers making things for your architecture and of users adopting it, which in turn spurs the developers to keep pumping products into it.

If you take ARM - it took off because the world needed a low-power processor that was "fast enough". It needed to be able to run on a battery for days and/or be stuffed into a tiny plastic enclosure with no active cooling (routers, TVs, etc.), and it needed to be fast enough to run a GUI and do some light processing tasks. There wasn't any real competition for this at the time - some of the older TI and Motorola processors might fit the bill, but they didn't have the performance, and Intel didn't have anything as they were busy chasing high-performance applications. It didn't ~have~ to be, but it certainly helped that it was an open platform, and you could tweak the design as needed for specific applications. ARM was very much a "right place at the right time" architecture. Now that it's past that chicken-and-the-egg stage, it'll be around for a while, but anything competing with it has that uphill fight to get through to establish any market share.
 
but anything competing with it has that uphill fight to get through to establish any market share.
ARM may be 'openish', in that the instruction set itself may be licensed, but it's not open open like RISC-V, which is the draw. It's also why we're seeing examples produced so early: it can be used without paying ARM - or, if the acquisition goes through, Nvidia.
 
ARM may be 'openish', in that the instruction set itself may be licensed, but it's not open open like RISC-V, which is the draw. It's also why we're seeing examples produced so early: it can be used without paying ARM - or, if the acquisition goes through, Nvidia.
It also helps that RISC-V is little endian like x86, as opposed to, say, PowerPC, which was big endian - at least with respect to articles like this.

I don't know that an open instruction set means a whole lot, though - it's not like the program developers are fabbing their own chips; everyone at that stage is buying something available on the market. You have companies like AMD and Qualcomm and SiFive who are actually making the processors for others to use. It helps that they don't have to license an instruction set - it keeps costs down and removes an avenue of risk - but AMD/Qualcomm/SiFive/etc. aren't the ones putting these things in end-user devices or writing the applications for them; they're just one step in the logistics and production chain of making consumer electronics devices.

So there's an entire logistics chain here that goes back to the chicken-and-the-egg situation. You may see these hardware companies help that process along by providing compilers, but it's not common (Apple is the only one I can think of) for a company to be making its own silicon and writing a bunch of the software for it as well.
 