The HyperText Computer and the web information space

After reading the following quote from Tim Berners-Lee, I have been reflecting on the appropriateness of my proposal for a HyperText Computer — a model computer built on top of HTTP:

From the fact that the thing referenced by the URI is in some sense repeatably the same suggests that in a large number of cases the result of de-referencing the URI will be exactly the same, especially during a short period of time. This leads to the possibility of caching information. It leads to the whole concept of the Web as an information space rather than a computing program. It is a very fundamental concept.

Universal Resource Identifiers — Axioms of Web Architecture
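HTTP makes this axiom concrete: because dereferencing the same URI is expected to yield the same representation over a short period, a client can keep what it received and merely revalidate it on the next dereference. Here is a minimal sketch in Go of that idea; the resource at example.org and its ETag support are hypothetical stand-ins for illustration, not part of any HTC design:

```go
// Minimal sketch: a client-side cache keyed by URI, revalidated
// with conditional GETs using ETags.
package main

import (
	"io"
	"net/http"
)

type entry struct {
	etag string
	body []byte
}

var cache = map[string]entry{}

// deref fetches a URI, reusing the cached representation when the
// server confirms (via ETag) that it has not changed.
func deref(uri string) ([]byte, error) {
	req, err := http.NewRequest("GET", uri, nil)
	if err != nil {
		return nil, err
	}
	if e, ok := cache[uri]; ok && e.etag != "" {
		req.Header.Set("If-None-Match", e.etag)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusNotModified {
		return cache[uri].body, nil // same URI, same information
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	cache[uri] = entry{etag: resp.Header.Get("ETag"), body: body}
	return body, nil
}

func main() {
	// Dereferencing twice: the second call can be answered from the cache
	// after a cheap conditional request.
	deref("https://example.org/resource")
	deref("https://example.org/resource")
}
```

The second dereference costs only a revalidation round trip; the information itself comes from the cache, which is exactly the property Berners-Lee is pointing at.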

Does the HTC undermine the idea of the web being an information space rather than a computer program?

The web’s information space is underpinned by an enormous amount of computing. Many, perhaps even most, URLs are dereferenced not to static documents but to responses generated on the fly by the execution of computer programs. Further, the conceptual simplicity of the web browser has been compromised by web documents that carry JavaScript with them, to be executed within the browser.
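To see how little static “document” there may be behind a URL, consider a minimal sketch in Go of a resource that is computed afresh on every dereference; the /now path is my own invented example:

```go
// Minimal sketch: a URL whose representation is computed at request
// time, not read from a static file.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/now", func(w http.ResponseWriter, r *http.Request) {
		// The "document" exists only because a program runs to produce it.
		fmt.Fprintf(w, "<html><body>Generated at %s</body></html>",
			time.Now().Format(time.RFC3339))
	})
	http.ListenAndServe(":8080", nil)
}
```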

A foundational principle of the proposed HTC is that all computing resources are presented as the capability to fulfill HTTP requests. So long as the distinction between HTTP GET and POST is respected, with GET remaining a safe, repeatable dereference and POST carrying the state-changing computation, Berners-Lee’s observations about the information space will still apply. The proposed HTC also provides an alternative to browser-hosted execution of code; this has the potential to reduce the amount of code embedded in web pages, which would strengthen the role of HTML rather than diminish it.
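As a hedged sketch of how a single HTC-style resource might respect that distinction, consider the following: GET answers as a pure, cacheable function of the URI, while POST performs the computation that changes server state. The /square path and the jobs store are illustrative inventions of mine, not part of any HTC specification:

```go
// Sketch of the GET/POST distinction on one computing resource:
// GET stays safe and cacheable; POST may change state.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strconv"
)

var jobs []string // server-side state; only POST may change it

func main() {
	http.HandleFunc("/square", func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet:
			// GET is a pure function of the URI, so intermediaries may
			// cache it and the information-space view holds.
			n, err := strconv.Atoi(r.URL.Query().Get("n"))
			if err != nil {
				http.Error(w, "bad n", http.StatusBadRequest)
				return
			}
			w.Header().Set("Cache-Control", "max-age=3600")
			fmt.Fprintf(w, "%d\n", n*n)
		case http.MethodPost:
			// POST may change state, so it is never cached or replayed.
			body, _ := io.ReadAll(r.Body)
			jobs = append(jobs, string(body))
			w.WriteHeader(http.StatusAccepted)
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		}
	})
	http.ListenAndServe(":8080", nil)
}
```

Everything cacheable about the resource is reachable through GET, so the web’s caches and its information-space character are untouched by the computation behind the scenes.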

My conclusion is that the HTC strengthens the ideal of the web “as an information space rather than a computing program”.