MSI Launches MEG Z690 GODLIKE Motherboard ($2,099) for 12th Gen Intel Core Processors

I remember the display machines we had at CompUSA. I steered well clear of the Rambus-equipped Willamette P4s ;)
 
I rocked an Efficeon TM8600-based laptop for like six years (2005-2011). That thing rocked. 12-hour battery life in a ~2 lb laptop back then... mmmmm... butter
Those didn't sell all that well and weren't stellar performers as I recall.
 
> Those didn't sell all that well and weren't stellar performers as I recall.
I believe Van's, back in the day, said that while it was a massive improvement over the Crusoe, it was still a colossal commercial failure. The thing was, the VLIW design as a workaround for the x86 license was genius, and the one thing no one could ever touch was its efficiency. That meant that at a time when laptops were 2-3 hour machines, they could produce a machine that was a legitimate all-day office/web machine. It spurred Intel into the whole Atom processor thing.
 
Itanium was supposed to be ultra efficient once programmers figured out how to code for VLIW...


....right, guys?
 
> Itanium was supposed to be ultra efficient once programmers figured out how to code for VLIW...
>
> ....right, guys?
VLIW instruction sets were 30 years old or so at that point. The problem was not VLIW; it was HP's implementation that Intel adopted. The real kicker was that Intel had done VLIW previously with the i860 (though they also sort of pooched that, just not as badly overall, and eventually subbed XScale ARM processors into that space). So it was not even their first go-around. VLIW was also well implemented in GPU designs; ATI used it in theirs.
 
Honestly I think that VLIW will make a comeback when we start running into real performance improvement limits with smaller nodes.

Which may happen in five years, or may take fifty - we keep thinking it's around the corner, and we keep proving ourselves wrong...
 
> Itanium was supposed to be ultra efficient once programmers figured out how to code for VLIW...
>
> ....right, guys?

Nope. Itanium was a big-iron move that failed, not an ultra-efficient processor move that also failed.

I never had any personal experience with Itanium. From what I recall reading at the time it was a solid design, it was just that getting organizations to switch instruction sets for their software base was like pulling teeth...

...yet now we are all talking about moving to ARM, so who knows.

The market is different today than it was in 2001 though. Most software is programmed using very high level languages, so there is less of a penalty when it comes to porting from platform to platform than there used to be.
 
> I never had any personal experience with Itanium. From what I recall reading at the time it was a solid design, it was just that getting organizations to switch instruction sets for their software base was like pulling teeth...
At the time it seemed to involve significantly more than just swapping compilers - for IA64's VLIW, there was a strong expectation of hand-tuning code for the architecture.

That seemed to be both the selling point and the failing point of IA64 - it was stupidly fast for its time on hand-optimized workloads.

I'll say that it also didn't help that their IA32 (that is, x86-32) translation layer was potato-grade. If they'd had the technology Apple put into its M1 SoCs for x86 emulation, we might all be running Itanium-based CPUs today.

> ...yet now we are all talking about moving to ARM, so who knows.
>
> The market is different today than it was in 2001 though. Most software is programmed using very high level languages, so there is less of a penalty when it comes to porting from platform to platform than there used to be.
Penalty's still there, it just doesn't sting as much since computing paradigms have mostly stabilized. Servers don't need GUIs, user-facing applications mostly run in web pages, back-end and front-end languages don't need to be at all the same (but can be!), and so on.

What I'm more getting at is that once we run out of hardware-level performance advancement capacity, we're going to need to go back and look at the overhead involved in every layer of computing.

And with machine learning becoming accessible at scale, the idea behind VLIW starts making more sense. You optimize prior to compiling code as thoroughly as possible, essentially as you'd do with embedded systems, but for everything.
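
To put a toy example on it (purely illustrative C, not from any real IA-64 toolchain): a VLIW machine won't reorder anything at runtime, so all the parallelism has to be exposed before the code ships, the way you might hand-unroll a hot loop on an embedded part.

```c
#include <stddef.h>

/* Toy sketch of compile-time scheduling for a VLIW-style target.
 * The hardware issues whatever the compiler packed into each wide
 * instruction word and does no runtime reordering, so independent
 * operations have to be made visible up front. Unrolling by four
 * exposes four multiply-adds per iteration with no data dependence
 * on each other, which a static scheduler could pack into parallel
 * slots of one (or a few) instruction bundles. */
void saxpy(float *restrict y, const float *restrict x,
           float a, size_t n)
{
    size_t i = 0;
    /* Main unrolled loop: the four statements below are mutually
     * independent, so they are candidates for one wide issue. */
    for (; i + 4 <= n; i += 4) {
        y[i + 0] += a * x[i + 0];
        y[i + 1] += a * x[i + 1];
        y[i + 2] += a * x[i + 2];
        y[i + 3] += a * x[i + 3];
    }
    /* Remainder loop for n not divisible by 4. */
    for (; i < n; i++)
        y[i] += a * x[i];
}
```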
 
Itanium's biggest problem was always software. Without software that explicitly supported it, Itanium required emulation, and the performance hit from doing that was extreme. Over time, Microsoft and Red Hat dropped support for it. Hardware vendors basically abandoned it fairly early on, with HPE being the exception. Effectively, most Itanium servers were HP-UX based, and Itanium ended up competing with Intel's own x86-64 based Xeon CPUs, which could be had from Dell, HPE, and everyone else.

It was very good at what it did, but without software support and hardware support from more vendors it was doomed to fail.
 
The real death knell was when Intel refused to extend x86 to 64 bits... and then AMD did it. And then Microsoft supported AMD's extension.

That was game over for Itanium.
 