The Hypertext Computer

The HyperText Computer (HTC) is proposed as a model computer. Built on the HyperText Transfer Protocol (HTTP), the HTC is a general-purpose computer: in its basic instruction set, every operator is implemented by an HTTP request and every operand is a URL referring to a document. The HTC is a foundational model for distributed computing.
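
As a concrete reading of that claim, here is a minimal sketch of a single HTC instruction, assuming a hypothetical “add” operator hosted at an illustrative URL: the operator is invoked by one HTTP request, and its operands are passed as URLs naming the documents to be added. The operator URL, operand URLs, and query encoding are assumptions made for illustration, not part of the proposal.

    # One HTC instruction: invoke a hypothetical "add" operator over HTTP.
    # Every operand is a URL referring to a document; nothing here says
    # where the operator or the operand documents actually live.
    import urllib.parse
    import urllib.request

    OPERATOR_ADD = "http://example.org/htc/op/add"   # hypothetical operator resource

    def execute(operator_url, *operand_urls):
        # The operator is implemented by a single HTTP request whose
        # parameters are the operand URLs.
        query = urllib.parse.urlencode([("operand", u) for u in operand_urls])
        with urllib.request.urlopen(operator_url + "?" + query) as resp:
            return resp.read().decode()

    result = execute(OPERATOR_ADD,
                     "http://example.org/htc/data/a",
                     "http://example.org/htc/data/b")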

Motivation for the HTC comes from the observation that the browser may be considered a special-purpose (as opposed to general-purpose) model computer. Technologies like AJAX at the presentation level and iSCSI at the transport level are so undermining the Fallacies of Distributed Computing that inter- and intra-computer communications not carried over IP are starting to look like special-case optimisations. As noted by Cisco’s Giancarlo, IP networking is rivalling computer backplane speeds, leading him to observe that “It’s time to move the backplane on to the network and redesign the computer”.

The HTC is a redesign of the computer. The transition from computers connected by networks to the network as a computer has been anticipated for some time. The HTC is a model of a computer built from the ground up, containing no implicit assumptions about locality or technology.

Of course, implementation of this model will require optimisation of its interconnections. A naive implementation of an HTC would spend 99.999% of its time in network routines fetching every operand and operator. Without affecting the programmer’s view of the HTC, one may be built in silicon by hiding the optimised, messy implementation behind an on-chip NAT. Programs written for the HTC will also run, without change, with its components distributed across five continents.
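
One way to picture that hiding, as a rough software analogue of the on-chip NAT, is a runtime-maintained table mapping some URLs to locally resident copies. The program still asks for a URL; the runtime decides, per request, whether to satisfy it from local memory or from the network. The table contents and URLs below are hypothetical.

    import urllib.request

    # Hypothetical mapping maintained by the runtime, invisible to the program.
    LOCAL_RESOURCES = {
        "http://example.org/htc/data/a": b"42",
    }

    def fetch(url):
        local = LOCAL_RESOURCES.get(url)
        if local is not None:
            return local                          # "compiled out" to a memory reference
        with urllib.request.urlopen(url) as resp:
            return resp.read()                    # genuinely remote: fetched over HTTP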

Computers of the future will contain just enough processing power to run a user agent. Any additional processing power or storage available locally will be available as an HTC with low latency. However, unplugging the local computing resources would not change the user’s or the programmer’s view in any way. In that case, other issues such as intellectual property will dominate decisions about where and how processing is done.

Comments

  1. Hi David,

    I’m not sure if I understand correctly. When you say every operator is implemented as an HTTP request, how granular is it? Say I have a UI. Would I make requests to get all the listboxes and their regions? Or would I make requests to fill each listbox with its data? Or would I make requests to perform the operations of drawing a listbox? Or are you saying every operator in any declarative programming language, like HTML, is subject to an HTTP request? Or are you talking about operators in an imperative language?

    Sorry for all the Ors. Any chance you could provide a real world example?

    Regards,

    jm

  2. Thanks for the questions, John,

    Firstly, there are no real examples of a “hypertext computer”; it is a proposal.

    The key idea is to actually go ahead and build a general-purpose computer on top of the network. That is achieved by implementing it in such a way that the computer (you could substitute VM) has access ONLY to resources that are retrieved via HTTP. The proposed computer’s programming model is true to this principle down to the lowest opcode level (a rough sketch of such a fetch-execute loop appears after these comments).

    High overheads – yes! However, within the boundaries of a single ownership of a computing system, much of this overhead could be compiled out to more efficient referencing mechanisms (e.g. in main memory or on local disk). The key here is that the optimisations, along with the choice of where the computing is performed, would be made at run time, NOT made in advance through a choice of technology (PHP+Javascript versus plain PHP).

    The granularity of this mechanism would be at the level of a browser request. This is discussed somewhat in this post.

    It seems to me that, just as HTTP and HTML have revolutionised content creation and consumption, they may be extended to provide a unified programming model for a true network computer.
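
To make the “lowest opcode level” reply above a little more concrete, below is a rough sketch of the fetch-execute loop it implies, under an assumed (entirely hypothetical) instruction format: each instruction is a small JSON document naming an operator URL, its operand URLs, and the URL of the next instruction. The program counter is itself a URL, and nothing is retrieved except over HTTP.

    import json
    import urllib.parse
    import urllib.request

    def get(url):
        # The only way this machine can touch anything: an HTTP request.
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()

    def run(program_counter_url):
        pc = program_counter_url
        while pc:
            instr = json.loads(get(pc))                   # fetch the opcode document
            query = urllib.parse.urlencode(
                [("operand", u) for u in instr["operands"]])
            get(instr["operator"] + "?" + query)          # execute: one HTTP request
            pc = instr.get("next")                        # the program counter is a URL

    # run("http://example.org/htc/program/start")         # hypothetical entry point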