Intel’s Teraflop chip and The Hypertext Computer

A chip with 80 processing cores, capable of more than a trillion floating-point calculations per second (one teraflop), has been unveiled by Intel.

See the BBC report.

This new chip presents a great challenge to the programming community. The proposed Hypertext Computer (HTC) may be part of solving that challenge.

The BBC report continues:

The challenge

“It’s not too difficult to find two or four independent things you can do concurrently, finding 80 or more things is more difficult, especially for desktop applications.

“It is going to require quite a revolution in software programming.

“Massive parallelism has been the preserve of the minority – a few people doing high-performance scientific computing.

“But that sort of thing is going to have to find its way into the mainstream.”

What is one of the causes of this problem?

Current programming models are built on strong assumptions about continuity of the location of processing. This is true of common programming tools and languages (e.g. Java, C, C++, PHP, Visual Basic, Perl, Delphi, Pascal, Kylix, Python, SQL, JavaScript, SAS, COBOL, IDL, Lisp, Fortran, Ada, MATLAB, RPG), but it is also true of explicitly distributed projects like SETI@home and the Windows Communication Foundation.

One of the challenges in “finding 80 or more things” to do at once is overcoming this assumption of continuity of the locus of programming. Doing parallel programming with current programming models is tough: the programmer is constantly fighting the assumptions that underpin the language she is programming in.

Contribution of the HTC

The HTC is, in part, an attempt to eliminate the choices about where processing will be done that programmers implicitly make through their choice of technology. Core concepts of the HTC are that

  1. all computing resources are presented as the ability to complete HTTP requests,
  2. HTC programs reference all input information as URLs, and
  3. the HTC depends on an extended HTTP in which the request for the information at a URL is accompanied by an offer of assistance. The HTTP request becomes “please give me the information located in information space at this URL, and, by the way, I have processing and storage available in my HTC and am happy to help with the processing involved.” The HTC serving the request may return either
    • the HTML of a page, or
    • code that calculates it. The returned code would, of course, reference its input data in the same way: as further URLs.
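The request/response cycle above can be sketched in a few lines. This is only an illustration of the idea, not a specification: the header name `X-HTC-Offer`, the content type `application/htc-code`, and the function names are all assumptions invented for the sketch.

```python
# Hypothetical sketch of the extended HTC request/response cycle.
# Header and content-type names are assumptions, not part of any spec.

from dataclasses import dataclass


@dataclass
class HTCResponse:
    content_type: str  # e.g. "text/html", or the assumed "application/htc-code"
    body: str


def build_request(url: str) -> dict:
    """Build an extended HTTP request: ask for the resource at `url` and,
    via a hypothetical header, offer local processing and storage."""
    return {
        "method": "GET",
        "url": url,
        "headers": {"X-HTC-Offer": "compute,storage"},  # assumed header name
    }


def handle_response(resp: HTCResponse, run_code) -> str:
    """The requesting HTC either uses the returned page directly, or
    executes the returned code, which in turn references its own inputs
    as further URLs."""
    if resp.content_type == "text/html":
        return resp.body                 # the server returned the page itself
    if resp.content_type == "application/htc-code":
        return run_code(resp.body)       # the server returned code to run locally
    raise ValueError("unexpected content type: " + resp.content_type)
```

Either branch is transparent to the requester: it asked for the information at a URL and got it, whether the serving HTC did the work or delegated it back.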

The HTC brings the network right into the core of programming and completely removes any assumptions about the location of processing. If the 80-core chip were programmed as an HTC, any request for a result could be performed on the same processor, on another of the 80 cores on the chip, or, for that matter, on a computer with spare capacity half a world away.

Extending the typical RPC model with an offer to help compute the results in one stroke enables:

  • code mobility,
  • removal of all assumptions of continuity of locus of programming, and
  • a supply of “80 or more things” to do.
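A minimal sketch of what location-transparent evaluation under this extended RPC model might look like, using a thread pool as a stand-in for the 80-core scheduler. Everything here is an illustrative assumption: the `htc://` scheme, the `PROGRAMS` registry (standing in for code that would really arrive over the network), and the `evaluate` function are invented for the sketch.

```python
# Sketch: the caller names a result only by URL; a scheduler decides
# *where* each piece of work runs. All names here are illustrative.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical registry mapping URLs to the code that computes them.
# In a real HTC this code would be mobile, delivered in the response.
PROGRAMS = {
    "htc://example/a": lambda fetch: 6,
    "htc://example/b": lambda fetch: 7,
    # A result whose inputs are themselves named only as further URLs:
    "htc://example/product": lambda fetch: (
        fetch("htc://example/a") * fetch("htc://example/b")
    ),
}


def evaluate(url: str, workers: int = 4):
    """Resolve `url` without the caller ever choosing a processor.
    The executor (standing in for an on-chip scheduler, or a remote HTC
    with spare capacity) picks the worker for each request."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        def fetch(u: str):
            # Every reference to input data goes through the same
            # URL-shaped request; placement is the scheduler's business.
            return pool.submit(PROGRAMS[u], fetch).result()
        return fetch(url)
```

The point of the sketch is that `evaluate("htc://example/product")` never says which core computes `a` or `b`; swapping the thread pool for a cross-machine dispatcher would not change the calling code at all.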

Comments

  1. Technology Review also has a great article about this: http://www.technologyreview.com/read_article.aspx?id=18219&ch=infotech
