Microsoft, Intel to invest $20M in parallel computing
Microsoft and Intel have announced a joint $20 million investment in parallel computing over the next five years. The money will fund two Universal Parallel Computing Research Centers, one at the University of California, Berkeley and one at the University of Illinois. UC Berkeley will add $7 million of its own funding to its center, while the University of Illinois will contribute $8 million to the research at its center.
The purpose of the research is to develop software that runs more efficiently on systems with multiple processors. The move was prompted by the industry's current shift toward processors with multiple cores, the idea being to distribute work across cores or processors so that several tasks execute simultaneously. A white paper written by UC Berkeley researchers states:
Conventional wisdom is now to double the number of cores on a chip with each silicon generation. ... Our view is that this evolutionary approach to parallel hardware and software may work from 2 or 8 processor systems, but is likely to face diminishing returns as 16 and 32 processor systems are realized, just as returns fell with greater instruction-level parallelism.
The target [of the research] should be 1000s of cores per chip.
Both Microsoft and Intel already run their own parallel computing research programs, and this additional joint effort is meant to ensure they are prepared for the extra computing power that future multi-core technology will bring. Intel's Tera-scale Computing Research Program aims to create processors with hundreds of cores, but the computing power of a 100-core processor won't be of much help if the software running on it does not scale well. Microsoft, for its part, has launched the Parallel Computing Initiative, which "encompasses the vision, strategy, and innovative developments for delivering natural and immersive personal computing experiences, harnessing the compute power of many-core architecture", according to Corporate Vice President S. Somasegar.
One of the first visible results of Microsoft's Parallel Computing Developer Center is the Parallel Extensions to the .NET Framework 3.5 CTP, a library that introduces support for concurrency in applications written in any .NET language. According to the site:
Parallel Extensions to the .NET Framework is a managed programming model for data parallelism, task parallelism, and coordination on parallel hardware unified by a common work scheduler. Parallel Extensions makes it easier for developers to write programs that scale to take advantage of parallel hardware by providing improved performance as the numbers of cores and processors increase without having to deal with many of the complexities of today’s concurrent programming models.
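To illustrate the data-parallel and task-parallel styles the quote describes, here is a minimal C# sketch. It uses the `Parallel.For` and `Task` API shapes that later shipped as the Task Parallel Library in .NET 4 rather than the exact CTP surface, and the array size and workload are arbitrary examples:

```csharp
using System;
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        var data = new double[1_000_000];

        // Data parallelism: the loop body runs concurrently over partitions
        // of the index range; the common work scheduler decides how the
        // iterations are divided among cores.
        Parallel.For(0, data.Length, i => data[i] = Math.Sqrt(i));

        // Task parallelism: independent units of work handed to the same
        // underlying scheduler.
        Task<double> first = Task.Run(() => data[0]);
        Task<double> last  = Task.Run(() => data[data.Length - 1]);
        Task.WaitAll(first, last);

        Console.WriteLine($"{first.Result}, {last.Result:F2}");
    }
}
```

The point of the model is that the same code runs unchanged on 2 or 32 cores; the scheduler, not the application, decides how iterations and tasks map onto hardware threads.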
Another result of Microsoft's Parallel Computing Initiative is PLINQ, or Parallel LINQ, which enables LINQ queries over in-memory data to execute in parallel across multiple cores; the topic was previously discussed by InfoQ here.
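A minimal PLINQ sketch, assuming the `AsParallel()` entry point the extensions expose over standard LINQ operators; the numeric workload here is an arbitrary example, not taken from the announcement:

```csharp
using System;
using System.Linq;

class PlinqSample
{
    static void Main()
    {
        // AsParallel() opts the query into PLINQ, which partitions the
        // source sequence across the available cores. The operators that
        // follow are the ordinary LINQ ones.
        long total = Enumerable.Range(1, 1000)
            .AsParallel()
            .Where(n => n % 2 == 0)
            .Select(n => (long)n * n)
            .Sum();

        Console.WriteLine(total); // prints 167167000
    }
}
```

The sequential version of the query differs only by the absence of `AsParallel()`, which is the main selling point: existing queries can be parallelized with a one-call change.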