First Benchmark for Samsung Exynos SoC with AMD RDNA 2 GPU Suggests Higher Performance than Even Apple’s A14 Bionic

Tsing

The FPS Review
Staff member
Joined: May 6, 2019 · Messages: 11,278 · Points: 83
Image: AMD



Mobile leaker Ice universe has shared the first benchmark for Samsung’s upcoming Exynos SoC with AMD RDNA 2 GPU. The SoC managed to attain a stunning score of 8,134 on 3DMark’s Wild Life test, which blows away the fastest Exynos 2100 and Snapdragon 888 devices on UL’s current benchmark rankings (Samsung Galaxy S21+ 5G: 5,538; ZTE Nubia Red Magic 6R: 5,823). More impressively, the new SoC’s score exceeds even that of the current highest-ranked phone, Apple’s iPhone 12 Mini with A14 Bionic (7,671). Ice universe previously shared that Samsung would unveil its Exynos SoC with AMD RDNA 2-based graphics in July.
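For rough context (not from the article itself, just arithmetic on the scores it quotes): the leaked score works out to about a 6% lead over the A14 Bionic and roughly 40–47% over the ranked Snapdragon 888 and Exynos 2100 phones. A quick sketch of that math:

```python
# Back-of-the-envelope comparison of the 3DMark Wild Life scores quoted above.
scores = {
    "Exynos + RDNA 2 (leaked)": 8134,
    "iPhone 12 Mini (A14 Bionic)": 7671,
    "ZTE Nubia Red Magic 6R (Snapdragon 888)": 5823,
    "Samsung Galaxy S21+ 5G (Exynos 2100)": 5538,
}

leaked = scores["Exynos + RDNA 2 (leaked)"]
for device, score in scores.items():
    if device == "Exynos + RDNA 2 (leaked)":
        continue
    # Relative lead of the leaked part over each currently ranked device.
    print(f"{device}: {score} -> leaked score is {(leaked / score - 1) * 100:.1f}% higher")
```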



Exclusive: Samsung is testing...

Continue reading...


 
It's great to see Apple get a run for their money, but in all honesty, for what I use a mobile phone for (email, web, text messages) it wouldn't need a GPU at all.

Why would I ever want to run anything requiring a GPU on my phone?
 
It's great to see Apple get a run for their money, but in all honesty, for what I use a mobile phone for (email, web, text messages) it wouldn't need a GPU at all.

Why would I ever want to run anything requiring a GPU on my phone?

Well, for starters, to display anything on the screen...

All mobile SoCs have a GPU... whether it's Mali in the basic, stock Arm SoCs or Adreno in the Snapdragon chips.
 
Why would I ever want to run anything requiring a GPU on my phone?

It's also used in tablets and laptops - mostly just Chromebooks for Samsung at the moment, but with Windows ARM out there and Apple showing it's possible, I wouldn't be surprised to see it get to a full fledged laptop.
 
Well, for starters, to display anything on the screen...

All mobile SoCs have a GPU... whether it's Mali in the basic, stock Arm SoCs or Adreno in the Snapdragon chips.

True, but what I am suggesting is that for most typical phone uses, an old school non-GPU SVGA-type implementation would probably be sufficient. :p

Maybe some basic 2D acceleration wouldn't hurt, if you want the scrolling to be smooth, but beyond that? I fail to see the point.

A phone is not a gaming PC.
 
It's also used in tablets and laptops - mostly just Chromebooks for Samsung at the moment, but with Windows ARM out there and Apple showing it's possible, I wouldn't be surprised to see it get to a full fledged laptop.

I'm not convinced it is ready for that yet.

This report speaks to the GPU performance. We knew RDNA2 performed pretty well.

Samsung's CPUs, however, don't hold a candle to anything in the laptop or desktop space. They are not like Apple's M1.
 
True, but what I am suggesting is that for most typical phone uses, an old school non-GPU SVGA-type implementation would probably be sufficient. :p

Maybe some basic 2D acceleration wouldn't hurt, if you want the scrolling to be smooth, but beyond that? I fail to see the point.

A phone is not a gaming PC.
I’ve been fairly surprised by the number of people I know whose phone is their only computing device. Like, these are guys I played Counter-Strike, Diablo II, and other games with in college who now only have a phone and say it’s good enough for games and anything else they used to need a computer for.

In these cases, you want that GPU on the phone to do more than just basic 2D.
 
I'm not convinced it is ready for that yet.

This report speaks to the GPU performance. We knew RDNA2 performed pretty well.

Samsung's CPUs, however, don't hold a candle to anything in the laptop or desktop space. They are not like Apple's M1.

That might matter if other tasks could not be offloaded to a GPU, which is no longer the case. The idea that a CPU does one thing and a GPU does another has largely broken down over the last several years, as GPU capabilities and performance have spread into tons of other areas through new instructions and software designed to leverage the inherent compute power of GPUs. You are still living in 2002, which is pretty accurate based on a lot of your posts.
 
That might matter if other tasks could not be offloaded to a GPU, which is no longer the case. The idea that a CPU does one thing and a GPU does another has largely broken down over the last several years, as GPU capabilities and performance have spread into tons of other areas through new instructions and software designed to leverage the inherent compute power of GPUs. You are still living in 2002, which is pretty accurate based on a lot of your posts.

There are certainly some tasks that can be offloaded to a GPU, but they are generally the exception rather than the rule. Things that can benefit from a high degree of parallelization do this a lot, but it's by no means everything. It's not even half of software.

I do kind of agree that everything started going to **** in about 2005-2007 though. Up until that point in time technology generally improved. Since then it has just gotten stupid, with mobile, cloud, IoT, massive data collection, and everything else.

I mean, I like my chips getting faster and faster, but other than that, I can't think of a single hardware or software innovation in tech since ~2005-2007 that has actually been an improvement.

It has all been garbage. Removal of choice, dilution of complexity, and dumbing down of everything, while seeking endless opportunities for stealing data.


**** has just been sliding downhill to the point where I have considered just giving up on my tech hobby, because everything tech in 2021 is just plain retarded.
 
Last edited:
There are certainly some tasks that can be offloaded to a GPU, but they are generally the exception rather than the rule. Things that can benefit from a high degree of parallelization do this a lot, but it's by no means everything. It's not even half of software.

I do kind of agree that everything started going to **** in about 2005-2007 though. Up until that point in time technology generally improved. Since then it has just gotten stupid, with mobile, cloud, IoT, massive data collection, and everything else.

I mean, I like my chips getting faster and faster, but other than that, I can't think of a single hardware or software innovation in tech since ~2005-2007 that has actually been an improvement.

It has all been garbage. Removal of choice, dilution of complexity, and dumbing down of everything, while seeking endless opportunities for stealing data.


**** has just been sliding downhill to the point where I have considered just giving up on my tech hobby, because everything tech in 2021 is just plain retarded.

No, the exception rather than the rule is an old cliche. Day to day more and more things are being offloaded to the GPU.
 
No, the exception rather than the rule is an old cliche. Day to day more and more things are being offloaded to the GPU.

Not really. It's the same as the good old "multithreading" argument.

Most code simply cannot be multithreaded under any circumstance no matter the breakthrough. It is a fundamental violation of logic. If code depends on state at all, and most code does, you wind up with a mess of thread locks and other problems if you try.

It's sort of like a violation of the fundamental laws of physics. It simply cannot be done.

It's more and more common to take the little bits and pieces of code that lend themselves to parallelization and offload them to the GPU, but, some corner cases excluded (notably rendering, encoding, and some scientific workloads), this usually represents a small percentage of the code in most software.

The state issue prevents it from being more. Unless you invent time travelling code, it cannot happen.

GPUs are fantastic at tackling highly parallelized tasks, but absolutely awful at tackling tasks that are not, and most tasks are not highly parallelizable; nothing can change that. It's not a matter of time or innovation or something like that. It is an unchangeable constant.

That's not to say that ASICs or FPGAs can't be used to offload the CPU and do a better job for many tasks, but they tend to be highly task-specific, and it is generally infeasible to have an onboard ASIC for every little thing. FPGAs are more flexible, but you'd have to constantly reprogram them for the task at hand, and they would be slower than ASICs.

The general purpose CPU is far from dead. You can get by with a weak one, if you want a limited scope device that only excels at a limited subset of things for which it has GPU and/or ASIC acceleration, but this is really more of a "consumer device" approach and can never replace the flexibility of a strong general purpose CPU core.
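Not part of the post above, just a minimal sketch (in Python, with made-up function names) of the distinction being argued here: a computation with a loop-carried dependency has to run in order, while independent per-element work has no shared state and maps cleanly onto many parallel workers, which is the kind of workload a GPU is built for.

```python
# Minimal illustration: work with a sequential data dependency cannot be
# split up, while independent per-element work can be farmed out to many
# workers (threads, processes, or a GPU). Function names are hypothetical.
from concurrent.futures import ProcessPoolExecutor

def running_balance(transactions):
    """Inherently sequential: each step depends on the previous state."""
    balance = 0
    history = []
    for t in transactions:
        balance += t          # result of step N is the input to step N+1
        history.append(balance)
    return history

def brighten(pixel):
    """Independent per-element work: no shared state between calls."""
    return min(pixel + 40, 255)

if __name__ == "__main__":
    print(running_balance([100, -30, 20, -50]))   # must run in order

    pixels = list(range(0, 256, 16))
    with ProcessPoolExecutor() as pool:           # trivially parallel map
        print(list(pool.map(brighten, pixels)))
```

The first loop can't be split because step N needs the result of step N-1; the second can go as wide as the hardware allows.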
 
Name a computer today that runs one process at a time. Yes, single programs that can spread their work across multiple simultaneous threads are not extremely common... but it is getting more and more common every day.

So yes, the powerful general-purpose CPU is still the core of a computer's compute engine. But that doesn't mean other functions are not useful or desired, even if it's not for you.

Heck, I never thought people would do photo or even video editing and compiling on a phone. But that happens. 3D gaming on phones happens. Do I want the world of Ready Player One where the primary universal compute device is a handheld? Not really... we all know the real workhorses are going to be around. But that client interface is becoming a display, compute, and rendering device in our hands.

You're a bit stuck in what was... hell, so am I in many ways. I get it. I rail against it too. But even I realize it's fruitless, and I'll just be happy being an old codger with a "real" computer as everyone moves to phones and tablets.
 
There are certainly some tasks that can be offloaded to a GPU, but they are generally the exception rather than the rule. Things that can benefit from a high degree of parallelization do this a lot, but it's by no means everything. It's not even half of software.

I do kind of agree that everything started going to **** in about 2005-2007 though. Up until that point in time technology generally improved. Since then it has just gotten stupid, with mobile, cloud, IoT, massive data collection, and everything else.

I mean, I like my chips getting faster and faster, but other than that, I can't think of a single hardware or software innovation in tech since ~2005-2007 that has actually been an improvement.

It has all been garbage. Removal of choice, dilution of complexity, and dumbing down of everything, while seeking endless opportunities for stealing data.


**** has just been sliding downhill to the point where I have considered just giving up on my tech hobby, because everything tech in 2021 is just plain retarded.
Immediately, off the top of my head, I’ll call out containerization and microservice-based design. Both have been fantastic improvements to software design and management/support.

Let’s not forget log analytics and open tracing as well. Both are fantastic when dealing with complex performance issues across a software stack, and neither existed 15 years ago.
 
True, but what I am suggesting is that for most typical phone uses, an old school non-GPU SVGA-type implementation would probably be sufficient. :p

Maybe some basic 2D acceleration wouldn't hurt, if you want the scrolling to be smooth, but beyond that? I fail to see the point.

A phone is not a gaming PC.




Everything on a smartphone made in the last 5+ years is GPU accelerated.

Offloading helps battery life. Just the act of refreshing the display is helped by a dedicated GPU block.

A device doesn't need to be 'gaming' to benefit from a component of the SOC designed specifically to handle display processing.

Unless you think it's ideal to drive a 1080- or 1440-line panel with software rendering... something that was already long in the tooth back in the 800x600 days.
 