tinycc-devel

Re: [Tinycc-devel] (no subject)


From: Rob Landley
Subject: Re: [Tinycc-devel] (no subject)
Date: Wed, 13 Sep 2006 12:34:31 -0400
User-agent: KMail/1.9.1

On Wednesday 13 September 2006 6:05 am, Dave Dodge wrote:
> On Tue, Sep 12, 2006 at 11:46:45AM -0400, Rob Landley wrote:
> > Translation: Itanic sucks so badly that it takes a near-miraculous
> > compiler to get even reasonable performance out of it, although we
> > try to phrase it to seem like it's the compiler's fault.
> 
> Oh I'm not making any excuses for IA-64.  It comes to mind only
> because I have to deal with it on a regular basis and most people have
> no idea just how bizarre the IA-64 world is at the assembly level.
> I'd love to have things like tcc working there but I'm not masochistic
> enough to try porting it myself.

Fabrice seems to start qemu stuff based on the gdb processor description file 
thingy for the platform in question.  (Personally, I'm still thinking about 
compiling qemu with tcc.  I expect pieces would land in the Atlantic...)

That said, you'll notice QEMU doesn't emulate Itanic either.  Support for sh 
was more important.  There's a reason.

> > There's plenty of hardware out there that can get better performance
> > out of fewer transistors, fewer watts, and without requiring an
> > NP-complete (or AI-complete) optimizer to hide its most obvious
> > shortcomings.
> 
> The problem is that for the workloads where IA-64 is king, there's
> things like huge core counts and RAM sizes that the other hardware
> can't easily reach yet.

Actually, IBM is king, and it's using PowerPC to do it:
http://www.hoise.com/primeur/06/articles/live/LV-PL-06-06-18.html

IBM's been doing large memory sizes with PPC64 for over a decade, and Fujitsu 
was doing it with SPARC64 (for no readily apparent reason).

And I wouldn't use either SGI or HP's technical decisions as a good indicator 
here.  SGI never really recovered from Richard Belluzzo (SGI's ex-CEO who 
forced an all-Windows-NT strategy down their throats before returning home to 
Microsoft), and Carly Fiorina drove HP's best and brightest into early 
retirement.  Those two are the companies bolting rocket engines on this 
turtle:

http://www.supercomputingonline.com/article.php?sid=1318
http://www.hpcwire.com/hpc/620507.html

Both companies are struggling with bad legacy decisions by some of the 
pointiest-haired management in recent history.  And the pointy-haired eras 
are where each company bought a ticket on the Itanic.  (Remember, Windows was 
going to support Itanic as the next desktop platform.  The supercomputing 
stuff was making lemonade from lemons, that's the niche it _retreated_ to 
when it couldn't cut it on the desktop.)

> If AMD can get Opteron scaled up to those 
> levels, though, it'll probably be the final nail in Itanium's coffin.
> Much of my IA-64 stuff is also designed to be buildable on x86_64, in
> anticipation of that day.

There's no "if" about it.  Keep in mind x86-64 has only been shipping in volume 
for about 18 months, and the limiting factor on large memory sizes is 
motherboards and backplanes.  The first x86-64 silicon had 40 bits of 
physical addressing per chip (for an even 1 terabyte per NUMA node), and 
x86-64 was put into the Linux kernel as an always-NUMA architecture.  They've 
already scaled the design to 52 bits; there's just no point in offering chips 
that can take 4 petabytes of RAM yet, while motherboards are still catching up 
with 1 terabyte.

Does the lack of motherboards sound like a problem likely to persist long 
enough to bother writing new software around it?

In any case, Itanic is not a problem for tcc to worry about, but x86-64 very 
much is.

Rob
-- 
Never bet against the cheap plastic solution.



