Are Special Purpose Chips an Answer to the Multicore Crisis?

by Sadek Drobi on Apr 02, 2008. Estimated reading time: 1 minute
According to Moore's Law, processors double in speed every 18 months. However, the shift to multicore has opened a gap between that potential increase in speed and the ability of software to adapt to the multicore world and exploit opportunities for parallelism. In the wake of Intel's announcement of its forthcoming six-core chip, Larry Dignan wonders "what exactly are we supposed to do with six cores when we have barely figured out what to do with four".

With this in mind, Bob Warfield suggests in a recent blog post that rather than building ever faster general purpose CPUs, one could focus on designing new types of chips with a narrow specialization. This could have an important impact on the structure of the market, since the competitive advantage would no longer lie with giants like Intel that are able to create ever faster chips. He mentions "special purpose chips built […] for maximum performance in various areas", e.g. graphics coprocessors or various network chips, and asks whether it would be worthwhile to create special purpose chips optimized for running specific virtual machines.

He emphasizes that today "interpreted and scripting languages that are descendants of languages like Lisp and Smalltalk", e.g. Ruby on Rails, Python, and PHP, are quite mainstream. He also recalls Alan Kay's observation that, compared to old machines like the Dorado, "modern machines don't run dynamic languages like Lisp and Smalltalk as much faster […] as their newfound clockspeeds would imply they should." Hence, Bob Warfield wonders whether special chips could allow these now-mainstream languages, as well as Java, to get faster:

Would a chip optimized to run the virtual machines of one of these languages without regard to compatibility with the old x86 world be able to run them a lot faster?  Would a chip that runs Java 10x faster than the fastest available cores from Intel be valuable at a time when Java has stopped getting faster via Moore's Law?

Multicore Crisis? by Michael Jessopp

What multi-core crisis? I've been working on multi-CPU mid-range systems (OpenVMS) for the past 13 years and don't understand why multi-core is such a big issue. We simply designed our software to be modular, running as many discrete processes (just like the Unix folks) to take advantage of the available CPU(s). Is it that current developers are constrained by the existing VMs? Or is it an education problem?

Re: Multicore Crisis? by Peter Veentjer

Hi Michael,

whether multicore becomes an issue depends on the type of application. Since most enterprise applications are 'simple' request/response systems, requests can be run in parallel, so concurrency is 'easy' in these systems. For high volume data processing, batch processing, and the like, concurrency can be more difficult: can we chop a task up into chunks, are there dependencies, what do we do about the failure of chunks, etc.? Just running multiple processes in parallel doesn't necessarily deliver the required performance (a long task won't go any faster unless it is chopped up, and this is where concurrency rears its head).
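
As a rough illustration of what "chopping a task up into chunks" can look like on the JVM, here is a minimal Java sketch that splits a long summation into fixed-size chunks and runs them on a thread pool. The summation task, chunk size, and class name are made up for illustration only.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Hypothetical batch job: summing a large array, chopped into fixed-size
// chunks so the pieces can run on separate cores instead of as one long
// serial task.
public class ChunkedSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int chunkSize = 1_000_000;
        ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        List<Future<Long>> partials = new ArrayList<>();
        for (int start = 0; start < data.length; start += chunkSize) {
            final int from = start;
            final int to = Math.min(start + chunkSize, data.length);
            // Each chunk is an independent unit of work; dependencies between
            // chunks and handling chunk failures are the hard parts mentioned above.
            partials.add(pool.submit(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> f : partials) total += f.get();
        pool.shutdown();
        System.out.println("total = " + total);
    }
}

The sketch only works because the chunks are independent; as soon as chunks depend on each other's results, the decomposition stops being this simple.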

Re: Multicore Crisis? by Jim Leonardo

I have to agree with Michael; I'm not at all sure I'd say this is a crisis. For "high volume data processing/batch processing", we've had multi-CPU machines to do this for 10+ years, so it's hardly a crisis all of a sudden. If the mere fact that you've got a multicore CPU is a problem, most BIOSes I've seen will let you force a multicore CPU to use only one core.

Is supporting parallelism harder? Sure. But just because you have a multicore processor doesn't mean all of a sudden you've got a problem. You can opt to leverage it or not. So calling it a "crisis" is a bit extreme.

BTW, if someone's going to quote Moore's Law, they can at least quote it correctly... it's about transistor count, not speed.

My bet is on Programmable Chips and Machine Virtuality by Jose Luis Rodriguez

I think programmable chips like FPGAs have a clearer future. Imagine a machine running video editing software that, while loading, configures a region of the processor specifically for that purpose, while at the same time running a JVM application that reconfigures the chip just-in-time to run a server application.

Machine virtuality, and in my view Java, will play an important role for configurable chips and for the software that runs on these chip architectures.

Intel has shown interest in this area.

Re: Multicore Crisis? by Peter Veentjer

@ Jim Leonardo,

programming for 2/4 real parallel threads is different from programming for 64/128/256 real parallel threads. If the number of real parallel threads is low, you can chop up the work at the highest level; if the number increases and you don't chop the work into smaller tasks, a lot of CPU power stays unused.
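
To illustrate the difference, here is a sketch of finer-grained chopping in Java: instead of a handful of top-level chunks, the work splits recursively until pieces fall below a threshold, so a machine with many cores still finds enough tasks to keep busy. It uses the fork/join framework that later shipped in java.util.concurrent; the threshold value and the summation task are illustrative assumptions, not anything from the discussion above.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Recursive divide-and-conquer sum: the task keeps splitting itself until a
// piece is small enough to compute directly, producing many small tasks that
// a pool with dozens or hundreds of cores can steal and execute.
public class RecursiveSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 100_000;
    private final long[] data;
    private final int from, to;

    RecursiveSum(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        RecursiveSum left = new RecursiveSum(data, from, mid);
        RecursiveSum right = new RecursiveSum(data, mid, to);
        left.fork();                        // run the left half asynchronously
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long total = new ForkJoinPool().invoke(new RecursiveSum(data, 0, data.length));
        System.out.println("total = " + total);
    }
}

With a coarse split into, say, four chunks, a 64-core machine would leave most cores idle; the recursive version produces enough small tasks that adding cores keeps adding throughput until the threshold dominates.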

Multicore Crisis == Sorry, we won't make faster processors by Daniel Sobral

People have been reading the "multicore crisis" wrong. For many, many years we software developers have been adding complexity to our systems to make them more reliable, easier to develop for, and so on. What people once did in just a few KB is now measured in MB. The sheer amount of work done between hardware power-on and being able to type in a word processor would have tied up whole data processing centers for years a few decades back.


Our development methodologies lead to huge call stacks. We encode data into XML and decode it back just to transfer information from one module to another within the same system. We are increasingly replacing edit-in-place with copy-and-change. We use garbage collection. We use libraries which use frameworks which use libraries.


In other words, software today is dense and massive. You need a lot of power to move it. And we are not going to stop making it bigger and denser.


And the crisis is that the constant increase in CPU speed which enabled us to do that has come to an end. So unless we manage to transfer this linear complexity we have added to "Hello World" into a parallel world, we won't be able to keep doing what we have been doing for the last 50 or 60 years.

Azul Systems Takes this approach by Matt Passell

The hardware that Azul Systems sells takes this exact approach. It's specifically targeted at running VMs. Mmm, hundreds of cores... :)
