

CHAPTER 4

Designing the Web Architecture: Problems and Insights

This chapter presents the requirements of the World Wide Web architecture and the problems faced in designing and evaluating proposed improvements to its key communication protocols. I use the insights garnered from the survey and classification of architectural styles for network-based hypermedia systems to hypothesize methods for developing an architectural style that would be used to guide the design of improvements for the modern Web architecture.

4.1 WWW Application Domain Requirements

Berners-Lee [20] writes that the "Web's major goal was to be a shared information space through which people and machines could communicate." What was needed was a way for people to store and structure their own information, whether permanent or ephemeral in nature, such that it could be usable by themselves and others, and to be able to reference and structure the information stored by others so that it would not be necessary for everyone to keep and maintain local copies.

The intended end-users of this system were located around the world, at various university and government high-energy physics research labs connected via the Internet. Their machines were a heterogeneous collection of terminals, workstations, servers and supercomputers, requiring a hodge podge of operating system software and file formats. The information ranged from personal research notes to organizational phone listings. The challenge was to build a system that would provide a universally consistent interface to this structured information, available on as many platforms as possible, and incrementally deployable as new people and organizations joined the project.

4.1.1 Low Entry-barrier

Since participation in the creation and structuring of information was voluntary, a low entry-barrier was necessary to enable sufficient adoption. This applied to all users of the Web architecture: readers, authors, and application developers.

Hypermedia was chosen as the user interface because of its simplicity and generality: the same interface can be used regardless of the information source, the flexibility of hypermedia relationships (links) allows for unlimited structuring, and the direct manipulation of links allows the complex relationships within the information to guide the reader through an application. Since information within large databases is often much easier to access via a search interface rather than browsing, the Web also incorporated the ability to perform simple queries by providing user-entered data to a service and rendering the result as hypermedia.
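
As an illustration of the query capability described above (a minimal sketch, not drawn from the original design documents), user-entered data can be encoded into the query component of a URI and sent to a service, whose hypermedia result is then rendered through the same interface as any other page. The host and parameter names below are hypothetical.

    # Python sketch: a simple query expressed as a URI carrying user-entered data.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def search(base_uri, terms):
        # Encode the user's input as the query component of the request URI.
        uri = base_uri + "?" + urlencode({"q": terms})
        with urlopen(uri) as response:
            # The service answers with hypermedia (e.g., HTML), which the
            # browser renders exactly as it would render a hand-authored page.
            return response.read().decode("utf-8", errors="replace")

    # Example against a hypothetical service:
    # html = search("http://phonebook.example.org/search", "Fielding")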

For authors, the primary requirement was that partial availability of the overall system must not prevent the authoring of content. The hypertext authoring language needed to be simple and capable of being created using existing editing tools. Authors were expected to keep such things as personal research notes in this format, whether directly connected to the Internet or not, so the fact that some referenced information was unavailable, either temporarily or permanently, could not be allowed to prevent the reading and authoring of information that was available. For similar reasons, it was necessary to be able to create references to information before the target of that reference was available. Since authors were encouraged to collaborate in the development of information sources, references needed to be easy to communicate, whether in the form of e-mail directions or written on the back of a napkin at a conference.

Simplicity was also a goal for the sake of application developers. Since all of the protocols were defined as text, communication could be viewed and interactively tested using existing network tools. This enabled early adoption of the protocols to take place in spite of the lack of standards.
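
For example (a minimal sketch, not taken from the original tools), the same text-based exchange that a browser performs can be reproduced over a raw TCP connection and read by eye, which is what made interactive testing with tools such as telnet possible. The host name below is a placeholder.

    # Python sketch: issuing a plain-text HTTP request over a raw socket.
    import socket

    def raw_http_get(host, path="/", port=80):
        with socket.create_connection((host, port)) as sock:
            request = "GET " + path + " HTTP/1.0\r\nHost: " + host + "\r\n\r\n"
            sock.sendall(request.encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("iso-8859-1")

    # print(raw_http_get("www.example.org"))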

4.1.2 Extensibility

While simplicity makes it possible to deploy an initial implementation of a distributed system, extensibility allows us to avoid getting stuck forever with the limitations of what was deployed. Even if it were possible to build a software system that perfectly matches the requirements of its users, those requirements will change over time just as society changes over time. A system intending to be as long-lived as the Web must be prepared for change.

4.1.3 Distributed Hypermedia

Hypermedia is defined by the presence of application control information embedded within, or as a layer above, the presentation of information. Distributed hypermedia allows the presentation and control information to be stored at remote locations. By its nature, user actions within a distributed hypermedia system require the transfer of large amounts of data from where the data is stored to where it is used. Thus, the Web architecture must be designed for large-grain data transfer.

The usability of hypermedia interaction is highly sensitive to user-perceived latency: the time between selecting a link and the rendering of a usable result. Since the Web's information sources are distributed across the global Internet, the architecture needs to minimize network interactions (round-trips within the data transfer protocols).
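
A back-of-the-envelope sketch (the numbers are illustrative assumptions, not measurements) shows why round-trips dominate user-perceived latency: if each component of a page requires its own connection set-up in addition to its request-response exchange, the cost of retrieving a page grows with the number of network interactions rather than with the amount of data transferred.

    # Python sketch: a simplified latency model that counts round trips only.
    def page_latency_ms(components, rtt_ms, new_connection_per_request=True):
        setup_rtts = components if new_connection_per_request else 1
        exchange_rtts = components      # one request-response exchange each
        return (setup_rtts + exchange_rtts) * rtt_ms

    # A page with ten in-line images (eleven components) at 100 ms per round trip:
    # page_latency_ms(11, 100)         -> 2200 ms with a new connection per request
    # page_latency_ms(11, 100, False)  -> 1200 ms when the connection is reused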

4.1.4 Internet-scale

The Web is intended to be an Internet-scale distributed hypermedia system, which means considerably more than just geographical dispersion. The Internet is about interconnecting information networks across multiple organizational boundaries. Suppliers of information services must be able to cope with the demands of anarchic scalability and the independent deployment of software components.

4.1.4.1 Anarchic Scalability

Most software systems are created with the implicit assumption that the entire system is under the control of one entity, or at least that all entities participating within a system are acting towards a common goal and not at cross-purposes. Such an assumption cannot be safely made when the system runs openly on the Internet. Anarchic scalability refers to the need for architectural elements to continue operating when they are subjected to an unanticipated load, or when given malformed or maliciously constructed data, since they may be communicating with elements outside their organizational control. The architecture must be amenable to mechanisms that enhance visibility and scalability.

The anarchic scalability requirement applies to all architectural elements. Clients cannot be expected to maintain knowledge of all servers. Servers cannot be expected to retain knowledge of state across requests. Hypermedia data elements cannot retain "back-pointers," an identifier for each data element that references them, since the number of references to a resource is proportional to the number of people interested in that information. Particularly newsworthy information can also lead to "flash crowds": sudden spikes in access attempts as news of its availability spreads across the world.
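
The following sketch illustrates the stateless, no-back-pointer style of interaction implied by these constraints; the resource names and header fields are merely illustrative, and the handler is not taken from any particular server. Because each request carries everything needed to understand it, any replica or shared cache can answer it, which is what allows flash crowds to be absorbed by adding components rather than by coordinating state.

    # Python sketch: a request handler that keeps no per-client state.
    import hashlib

    RESOURCES = {"/notes/1": b"<p>research note one</p>"}   # stand-in data store

    def etag(body):
        return hashlib.md5(body).hexdigest()

    def handle_request(method, uri, headers):
        # No session table is consulted: cache validation relies only on
        # what arrived with this request.
        body = RESOURCES.get(uri)
        if body is None:
            return 404, {}, b""
        if headers.get("If-None-Match") == etag(body):
            return 304, {"ETag": etag(body)}, b""            # nothing to resend
        return 200, {"ETag": etag(body)}, body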

Security of the architectural elements, and the platforms on which they operate, also becomes a significant concern. Multiple organizational boundaries imply that multiple trust boundaries could be present in any communication. Intermediary applications, such as firewalls, should be able to inspect the application interactions and prevent those outside the security policy of the organization from being acted upon. The participants in an application interaction should either assume that any information received is untrusted, or require some additional authentication before trust can be given. This requires that the architecture be capable of communicating authentication data and authorization controls. However, since authentication degrades scalability, the architecture's default operation should be limited to actions that do not need trusted data: a safe set of operations with well-defined semantics.
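
One way an intermediary might apply such a policy is sketched below; the method classification and policy are illustrative assumptions rather than a description of any particular firewall. Operations in the safe set can be permitted without trusted data, while anything that incurs an obligation must carry credentials the policy can evaluate.

    # Python sketch: default-allow for safe operations, credentials otherwise.
    SAFE_METHODS = {"GET", "HEAD"}      # read-only requests, no obligation incurred

    def allow(method, headers, require_auth_for_unsafe=True):
        if method.upper() in SAFE_METHODS:
            return True                  # default operation: no trust required
        if require_auth_for_unsafe:
            return "Authorization" in headers
        return False

    # allow("GET", {})                           -> True
    # allow("POST", {})                          -> False
    # allow("POST", {"Authorization": "..."})    -> True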

4.1.4.2 Independent Deployment

Multiple organizational boundaries also means that the system must be prepared for gradual and fragmented change, where old and new implementations co-exist without preventing the new implementations from making use of their extended capabilities. Existing architectural elements need to be designed with the expectation that later architectural features will be added. Likewise, older implementations need to be easily identified so that legacy behavior can be encapsulated without adversely impacting newer architectural elements. The architecture as a whole must be designed to ease the deployment of architectural elements in a partial, iterative fashion, since it is not possible to force deployment in an orderly manner.
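
A common mechanism for this, sketched here in simplified form, is to label every message with the protocol version its sender implements, so that newer components can use extended capabilities with capable peers and fall back to legacy behavior otherwise. The specific feature chosen (chunked transfer coding, an HTTP/1.1 capability) is only an example.

    # Python sketch: selecting behavior based on the peer's protocol version.
    def parse_version(token):
        # e.g. "HTTP/1.0" -> (1, 0); major and minor are separate integers.
        major, minor = token.split("/")[1].split(".")
        return int(major), int(minor)

    def choose_transfer(peer_version_token):
        if parse_version(peer_version_token) >= (1, 1):
            return "chunked"             # extended capability
        return "content-length"          # legacy behavior, still interoperable

    # choose_transfer("HTTP/1.1") -> "chunked"
    # choose_transfer("HTTP/1.0") -> "content-length"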

4.2 Problem

In late 1993, it became clear that more than just researchers would be interested in the Web. Adoption had occurred first in small research groups, spread to on-campus dorms, clubs, and personal home pages, and later to the institutional departments for campus information. When individuals began publishing their personal collections of information, on whatever topics they might feel fanatical about, the social network-effect launched an exponential growth of websites that continues today. Commercial interest in the Web was just beginning, but it was clear by then that the ability to publish on an international scale would be irresistible to businesses.

Although elated by its success, the Internet developer community became concerned that the rapid growth in the Web's usage, along with some poor network characteristics of early HTTP, would quickly outpace the capacity of the Internet infrastructure and lead to a general collapse. This was worsened by the changing nature of application interactions on the Web. Whereas the initial protocols were designed for single request-response pairs, new sites used an increasing number of in-line images as part of the content of Web pages, resulting in a different interaction profile for browsing. The deployed architecture had significant limitations in its support for extensibility, shared caching, and intermediaries, which made it difficult to develop ad-hoc solutions to the growing problems. At the same time, commercial competition within the software market led to an influx of new and occasionally contradictory feature proposals for the Web's protocols.

Working groups within the Internet Engineering Task Force were formed to work on the Web's three primary standards: URI, HTTP, and HTML. The charter of these groups was to define the subset of existing architectural communication that was commonly and consistently implemented in the early Web architecture, identify problems within that architecture, and then specify a set of standards to solve those problems. This presented us with a challenge: how do we introduce a new set of functionality to an architecture that is already widely deployed, and how do we ensure that its introduction does not adversely impact, or even destroy, the architectural properties that have enabled the Web to succeed?

4.3 Approach

The early Web architecture was based on solid principles--separation of concerns, simplicity, and generality--but lacked an architectural description and rationale. The design was based on a set of informal hypertext notes [14], two early papers oriented towards the user community [12, 13], and archived discussions on the Web developer community mailing list (www-talk@info.cern.ch). In reality, however, the only true description of the early Web architecture was found within the implementations of libwww (the CERN protocol library for clients and servers), Mosaic (the NCSA browser client), and an assortment of other implementations that interoperated with them.

An architectural style can be used to define the principles behind the Web architecture such that they are visible to future architects. As discussed in Chapter 1, a style is a named set of constraints on architectural elements that induces the set of properties desired of the architecture. The first step in my approach, therefore, is to identify the constraints placed within the early Web architecture that are responsible for its desirable properties.

Hypothesis I: The design rationale behind the WWW architecture can be described by an architectural style consisting of the set of constraints applied to the elements within the Web architecture.

Additional constraints can be applied to an architectural style in order to extend the set of properties induced on instantiated architectures. The next step in my approach is to identify the properties desirable in an Internet-scale distributed hypermedia system, select additional architectural styles that induce those properties, and combine them with the early Web constraints to form a new, hybrid architectural style for the modern Web architecture.

Hypothesis II: Constraints can be added to the WWW architectural style to derive a new hybrid style that better reflects the desired properties of a modern Web architecture.

Using the new architectural style as a guide, we can compare proposed extensions and modifications to the Web architecture against the constraints within the style. Conflicts indicate that the proposal would violate one or more of the design principles behind the Web. In some cases, the conflict could be removed by requiring the use of a specific indicator whenever the new feature is used, as is often done for HTTP extensions that impact the default cacheability of a response. For severe conflicts, such as a change in the interaction style, the same functionality would either be replaced with a design more conducive to the Web's style, or the proposer would be told to implement the functionality as a separate architecture running in parallel to the Web.
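
The cacheability case can be sketched as follows; the header values follow HTTP/1.1 conventions, while the application details are hypothetical. When a new feature makes a response depend on something a shared cache cannot see by default, the response carries an explicit indicator of that dependency rather than silently changing the meaning of caching.

    # Python sketch: marking responses whose default cacheability is affected.
    def negotiated_response(body, varies_on=("Accept-Language",), private=False):
        headers = {
            "Vary": ", ".join(varies_on),    # request fields the result depended on
            "Cache-Control": "private" if private else "max-age=3600",
        }
        return 200, headers, body

    # A personalized response is marked "private" so shared caches do not reuse
    # it; a negotiated but public response remains cacheable, keyed on the listed
    # request header fields.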

Hypothesis III: Proposals to modify the Web architecture can be compared to the updated WWW architectural style and analyzed for conflicts prior to deployment.

Finally, the updated Web architecture, as defined by the revised protocol standards that have been written according to the guidelines of the new architectural style, is deployed through participation in the development of the infrastructure and middleware software that make up the majority of Web applications. This included my direct participation in software development for the Apache HTTP server project and the libwww-perl client library, as well as indirect participation in other projects by advising the developers of the W3C libwww and jigsaw projects, the Netscape Navigator, Lynx, and MSIE browsers, and dozens of other implementations, as part of the IETF discourse.

Although I have described this approach as a single sequence, it is actually applied in a non-sequential, iterative fashion. That is, over the past six years I have been constructing models, adding constraints to the architectural style, and testing their effect on the Web's protocol standards via experimental extensions to client and server software. Likewise, others have suggested the addition of features to the architecture that were outside the scope of my then-current model, but not in conflict with it, which resulted in going back and revising the architectural constraints to better reflect the improved architecture. The goal has always been to maintain a consistent and correct model of how I intend the Web architecture to behave, so that it could be used to guide the protocol standards that define appropriate behavior, rather than to create an artificial model that would be limited to the constraints originally imagined when the work began.

4.4 Summary

This chapter presented the requirements of the World Wide Web architecture and the problems faced in designing and evaluating proposed improvements to its key communication protocols. The challenge is to develop a method for designing improvements to an architecture such that the improvements can be evaluated prior to their deployment. My approach is to use an architectural style to define and improve the design rationale behind the Web's architecture, to use that style as the acid test for proving proposed extensions prior to their deployment, and to deploy the revised architecture via direct involvement in the software development projects that have created the Web's infrastructure.

The next chapter introduces and elaborates the Representational State Transfer (REST) architectural style for distributed hypermedia systems, as it has been developed to represent the model for how the modern Web should work. REST provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems.


© Roy Thomas Fielding, 2000. All rights reserved.