UNIX Unleashed, Internet Edition
- 21 -
Introducing HyperText Transfer Protocol (HTTP)

by James Edwards

Without a doubt, the explosive growth of the Internet has been driven by the enormous popularity of the World Wide Web (the Web). The Web's continuing success is attributable to the simplicity with which it allows users to locate and retrieve dispersed information from within the Internet. A major part of this success is due to the effectiveness of the Hypertext Transfer Protocol (HTTP). HTTP is an application protocol that provides a mechanism for moving information between Web servers and clients (known as browsers). To be clear, HTTP is not a communication protocol; it is an application. HTTP functions much like the other standard UNIX-based applications, such as Telnet, FTP, and SMTP. Like those applications, HTTP makes use of its own well-known port address and the services of underlying reliable communication protocols, such as TCP/IP.

This chapter details how the HTTP application operates, both through an examination of the protocol's message formats and return codes and through examples that offer a step-by-step look at how the protocol functions. The chapter also highlights a number of performance problems attributable to the operation of HTTP. I will attempt to provide insight into the how and why of these problems, as well as to indicate how reconfiguration of certain server parameters may help to alleviate some situations.

What HTTP Does

The Internet is a huge mass of information. The development of the Web was driven by a desire for a simple and effective method of searching through this wealth of information for particular ideas or interests. Within the Web, information is stored in such a way that it can be referenced through the use of a standard address format called a Uniform Resource Locator (URL). Each URL points to a data object, giving it a uniquely identifiable location within the Internet. In addition, through the use of a standard data representation format known as Hypertext Markup Language (HTML), it becomes possible to include URLs alongside actual data. These URLs can then provide references to other related information located either on the same or a remote Web server. Users are then able to create their own discrete and distinct paths through the Web, using URLs to help them maneuver.

Users access the Web through a client application called a browser. This is a special program that can interpret both the formatting information, such as font size and colors, and the URLs that are embedded within HTML documents. The HTTP application provides the final piece in the puzzle: a simple and effective method of transferring identified data objects between the Web server and the client.
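To make the URL idea concrete, the following minimal sketch uses Python's standard urllib.parse module to show the pieces a browser pulls out of a URL before HTTP ever becomes involved; the www.dttus.com host name is simply the example host used later in this chapter.

from urllib.parse import urlsplit

url = "http://www.dttus.com/home.html"
parts = urlsplit(url)
print(parts.scheme)    # "http"           - the application protocol to use
print(parts.netloc)    # "www.dttus.com"  - the Web server that holds the object
print(parts.path)      # "/home.html"     - the object's location on that server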
Later in this chapter, we outline in some detail the defined HTTP message types and protocol header formats. By way of introduction to that section, a useful first step is to examine the logical operation of HTTP and its interaction with the other component parts found within the Web. Figure 21.1 outlines how the HTTP application protocol relates to both the Web server and client programs. As indicated, the browser has been designed to interpret the formatting information contained within HTML pages. An HTML page may also contain URLs as references to other pages located elsewhere on the Internet. These links are referred to as hyperlinks and are often colored or underlined within the HTML pages.

Figure 21.1. The main function of the HTTP application is to request the pages referenced by these hyperlinks. This is accomplished through the following steps:
One noted strength of HTML is that additional references to content can be encoded within each document. As indicated earlier, these can be pointers to content located either on the same Web server or on a different Web server. In addition to hyperlinks, HTML pages often contain other information objects--particularly graphics. As the browser encounters references to embedded information, it asks HTTP to request the download of these files. HTTP initiates a separate TCP connection for each file that needs to be downloaded (a simple sketch of this follows). Some HTML pages may contain a number of data objects, and to help speed up the overall download process, some browsers allow multiple TCP connections to be initiated simultaneously.
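As a rough sketch of what the browser does on the user's behalf, the following Python fragment (an illustration only, not taken from any particular browser) retrieves one object per TCP connection in the HTTP/1.0 style described above; the host and paths reuse the examples that appear later in this chapter.

import socket

def fetch(host, path):
    # One TCP connection per object, as HTTP/1.0 requires.
    s = socket.create_connection((host, 80))
    s.sendall(("GET " + path + " HTTP/1.0\r\n\r\n").encode("ascii"))
    response = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:          # the server closes the connection when the transfer is done
            break
        response += chunk
    s.close()
    return response

# A browser that finds an embedded image while parsing the page simply repeats
# the whole exercise over a fresh connection:
page  = fetch("www.dttus.com", "/home.html")
image = fetch("www.dttus.com", "/pub/images/dttus/mapimage.gif")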
Protocol Definition

The HTTP application uses a version numbering scheme consisting of a major and a minor number. These numbers are arranged using the following format:

<Major Version> . <Minor Version>
Current HTTP implementations are based upon the design specifications outlined within Request for Comments (RFC) 1945. These implementations are classified as HTTP version 1.0 (represented as HTTP/1.0). RFC 1945 is known as an informational RFC; as such, it does not represent a validated application standard. This has resulted in a number of HTTP implementations from different vendors exhibiting some degree of variation in available functionality. A first draft of an updated version of HTTP, termed version 1.1 (or HTTP/1.1), has been completed. The details of this upgraded specification have been published within RFC 2068. The changes contained within RFC 2068 classify as a minor upgrade of the HTTP application. Of greater significance, however, is that HTTP/1.1 is being developed for ratification as an Internet Engineering Task Force (IETF) standard. The objective is to set clear guidelines for the development of common HTTP implementations, in addition to providing some areas of enhanced functionality.

So far we have only provided an overview of the general operation of the HTTP application by describing, at a somewhat high level, how HTML pages are transferred within the Web. The following section extends this discussion by investigating the syntax of the HTTP application through the use of a real-world example. In addition to the example, an examination of available protocol messages, header fields, and return codes is undertaken, with summary tables provided for additional reference.

HTTP Example Operation

A more detailed study of the operation of HTTP can be made through an examination of the output from a standard packet analyzer. Table 21.1 provides an example of this operation through the use of trace data taken from a Linux server running the tcpdump application. The tcpdump application operates by placing the Ethernet card in promiscuous mode, so that it can see and record every packet on the network. HTTP session information can then be collected by pointing a Web browser at an HTML page maintained on a local Web server. In order to simplify examination of the recorded session, I have made some minor adjustments to the recorded tcpdump output. These modifications consist of rearranging the presented order of some recorded packets as well as removing some superfluous detail.

Table 21.1. Packet trace data illustrating HTTP operation.
This table provides an example of a Web browser requesting an information page from a Web server. Remember that HTTP functions as an application that operates above the reliable communication service provided by TCP/IP. To this end, the initial connectivity between client and server involves the setup of a TCP connection between the hosts. The establishment of a TCP connection is accomplished through the completion of something known as the three-way handshake. This process is illustrated within the example. The client sends a TCP packet to the server, requesting a new TCP connection by setting the SYN option flag and supplying an initial sequence number (ISN). The server responds by sending an acknowledgment (ACK) back to the client along with its own ISN for this connection, which it highlights by also setting the SYN flag. The client responds with the final part of the three-way exchange by acknowledging the server's response and ISN.

Following the three-way handshake, the TCP connection is open and ready for data transfer. This begins with the client sending an HTTP request message to retrieve an indicated HTML page (packets four through seven in the example). The server responds with an HTTP response message that contains the requested HTML page as the message body. This data is transferred over the TCP connection, with the client sending acknowledgment packets back to the server as required. While this HTML page is being transferred, a reference to an embedded object is encountered. The browser retrieves this object from the server over a separately established TCP connection. In the table, packets 15, 16, and 17 illustrate the establishment of another TCP connection, which again involves the outlined three-way handshake. The client now has two active TCP connections with the Web server.
After the server has completed the transfer of each data item, it automatically closes the corresponding TCP connection. This process involves the transfer of four additional packets. First, the server sends a TCP packet with the FIN option flag set, indicating that it wishes to close its end of the active connection. This packet is acknowledged by the client, which in turn closes its end of the connection by sending a similar packet to the server. The server sends an acknowledgment, and the connection is closed. The table illustrates how both TCP connections are independently terminated following the completion of data transfer. It should be noted that the establishment and termination of each TCP connection involves the exchange of a minimum of seven packets. This can represent a significant amount of protocol overhead.

Messages, Headers, and Return Codes

HTTP messaging is formatted using standard ASCII requests that are terminated by a carriage return and a line feed. The HTTP application makes use of two defined message types: message requests and message responses. Requests are made from Web clients to Web servers and are used either to request the retrieval of data objects (such as HTML pages) or to return information to the server (such as a completed electronic form). The Web server uses response messages to deliver requested data to the client. Each response contains a status line that indicates some detail about the client request. This might be an indication that an error occurred or simply that the request was successful. Both request and response messages can be accompanied by one or more message headers. These headers allow the client or server to pass additional information along with its message. Before investigating the use of the available header fields, we consider the operation of the standard message formats.

HTTP Request Messages

Listing 21.1 provides an outline of the general format for making HTTP data requests.

Listing 21.1. HTTP data request syntax.

Request method
headers
<blank line> (carriage return/line feed)
message body
Request Methods

The general syntax for a request method is as follows:

<request method> <requested-URL> <HTTP-Version>

HTTP version 1.0 defines three request methods: GET, POST, and HEAD. Table 21.2 summarizes the function of each supported method and outlines a specific example.

Table 21.2. Request method syntax and examples.
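For illustration, request lines for the three methods all follow the same pattern; the paths shown here are hypothetical and serve only to show the syntax:

GET /home.html HTTP/1.0          - retrieve the object itself
HEAD /home.html HTTP/1.0         - retrieve only the headers that describe the object
POST /cgi-bin/register HTTP/1.0  - return data, such as a completed form, to the server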
Defined Header Values

Header values are used to relay additional information about an HTTP message. A single HTTP message may have multiple headers associated with it. Generally, it is possible to separate headers into the following four distinct groups: general headers, request headers, response headers, and message body headers.
The operation and use of message headers can best be seen through the following simple example:

GET http://www.dttus.com/home.html HTTP/1.0          <- GET request
If-Modified-Since: Sun, 16 Mar 1997 01:43:31 GMT     <- conditional header
                                                     <- request terminated by CR/line feed
In this example, a Web client has forwarded an HTTP request to a Web server asking to retrieve a specified HTML page. This is accomplished using the GET request method. The request has been supplemented with a single header field. This header asks that a conditional test be performed: the indicated HTML page should be returned only if it has been modified since the indicated date.

The following tables provide a summary of the defined header values for each of the four groups. Note that within each grouping, a large number of the headers are only available within the pending HTTP version 1.1 specification (they have been included for completeness). General header values are applicable to both request and response messages but are independent of the message body. The following table summarizes the available values, providing a short description of the related function.

Table 21.3. Defined general header values.
Some available header values relate specifically to client browser request messages--either GET, POST, or HEAD methods (they are also applicable to the new request methods introduced within HTTP/1.1). Table 21.4 provides a summary of the headers applicable to request messages.

Table 21.4. Defined request header values.
Web server generated responses to client requests may be supplemented with a number of optional header values. Table 21.5 provides a summary of those header values specifically relating to response messages.

Table 21.5. Defined response header values.
Message body headers define optional meta-information about the data object or, if a data object is not present, about the resource identified within the request. Table 21.6 outlines the available header values.

Table 21.6. Defined message body header values.
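For example, the message body headers returned by the Web server in Listing 21.2 later in this chapter describe the type, size, and age of the requested image:

Content-type: image/gif
Content-length: 5568
Last-Modified: Thursday, 13-Mar-97 17:17:22 EST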
Response Messages

The general syntax for a Web server's response message is as follows:

Status Line
headers
<blank line> (CR/LF)
message body

The first line of the Web server response consists of something known as the status line. The general syntax for this information is:

<HTTP-Version> <Status-Code> <Status Code Description>

Table 21.7 provides a complete listing of the defined status codes and their corresponding descriptions. As with HTTP requests, it is possible to include one or more headers within Web server responses. Listing 21.2 outlines an example of how this might occur.

Table 21.7. HTTP response message status line descriptions.
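A few representative status lines, using codes defined within HTTP/1.0, illustrate the format:

HTTP/1.0 200 OK              - the request succeeded
HTTP/1.0 304 Not Modified    - the object has not changed since the date supplied by the client
HTTP/1.0 404 Not Found       - the requested object does not exist on the server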
Listing 21.2. Using Web server response headers.

workstation> telnet www.dttus.com 80
trying 207.134.34.23
Connected to 207.134.34.23
Escape character is '^]'.
GET /pub/images/dttus/mapimage.gif                  <- request line entered, terminated by CR/LF

HTTP/1.0 200 OK                                     <- response starts with the status line
Date: Friday, 14-Feb-97 22:23:11 EST                <- header details are here
Content-type: image/gif
Last-Modified: Thursday, 13-Mar-97 17:17:22 EST
Content-length: 5568
                                                    <- headers terminated by CR/LF; content is
                                                       transferred here
Connection closed by foreign host                   <- TCP connection is terminated after the
                                                       transfer is complete
workstation>
In the preceding example, the Telnet program is used to create a TCP connection to the remote Web server's HTTP application port--port 80. After a connection has been established, an HTTP GET request is made. Listing 21.2 outlines how the requested image is returned to the Web client, along with a number of header values directly after the status line. Note that the established connection is automatically terminated by the Web server after the requested image has been transferred. The previous table lists a summary of the possible status line return codes and their description values. The table includes return codes for both HTTP/1.0 and the pending version upgrade--HTTP/1.1--and indicates where a code is supported only within the later version, which provides full backward compatibility.
Identifying and Overcoming Some Performance Problems

This section looks at some real-world implementations and uses of the HTTP application. In particular, we focus on some recognized limitations of the application and provide, where possible, some alternative configurations to help ease recognized problems. There are three main areas within the operation of HTTP that have the potential to cause performance problems--connection establishment, connection termination, and communication protocol operation--and we discuss the main issues of each.

Connection Establishment--The Backlog Queue

There is an upper limit to the number of outstanding TCP connection requests a given application server process can handle. Outstanding TCP connection requests are placed upon a queue known as the backlog. This queue has a predefined limit to the number of requests it can hold at any one time. When a server's backlog queue reaches its defined limit, any new connection requests will be ignored until space on the queue becomes available. If you have ever seen the message "Web Site Found. Waiting for Reply...", chances are that you have fallen victim to the problem of a full backlog queue. During this time your Web browser will be resending the unanswered connection request in the hope that space on the backlog queue becomes available after a few seconds' wait.

To understand why HTTP is so susceptible to this problem, it is necessary to understand why a server would have any outstanding TCP connection requests. A server process will queue a connection request for two reasons: either it is waiting for the completion of the connection request handshake, or it is waiting for the server process to execute the accept() system call. In order to establish a TCP connection, the client and the server must complete the three-way handshake. This process is initiated by the client, which sends a TCP connection request packet with the SYN flag set. The server responds by sending a TCP packet with both the SYN and ACK flags set. The server then waits for the client to acknowledge receipt of this packet by sending a final TCP packet acknowledging the server's response. While the server is waiting for this final response from the client, the outstanding connection request is placed in the backlog queue. Once a TCP connection has been established, the server process executes the accept() system call in order to remove the connection from the backlog queue. It is possible on a busy server that the execution of this system call may be delayed and the connection request may remain on the queue for an extended period of time. Generally, though, the server process makes the accept() call promptly, and this does not cause the backlog queue to fill up. However, the required completion of the three-way handshake can and does cause the queue to fill and outstanding requests to be dropped.

Why is this a particular problem for HTTP within the Internet? It is possible for a Web server to fill its backlog queue if it receives a large volume of connection requests from clients facing a particularly large round-trip time. Figure 21.2 illustrates the Web server receiving a number of connection requests in a very short period of time. The server responds back to each client--placing each outstanding request upon the backlog queue.

Figure 21.2. As the figure illustrates, the backlog queue fills up, and there is a period of time during which the Web server will ignore any new connection request.
During this time the Web client will display the "Web Site Found. Waiting for Reply..." message as it attempts to successfully complete a connection request. Default sizes of the backlog queue differ between UNIX flavors and implementations. As an example, BSD flavors of UNIX define the size of the backlog queue through the SOMAXCONN constant. The size of the queue is then derived using the following formula:

backlog queue size = (SOMAXCONN * 3) / 2 + 1

By default, the SOMAXCONN constant is set to a value of five--providing for a maximum of eight outstanding TCP connections to be held on the backlog queue. Other UNIX flavors or TCP/IP implementations use other parameters and parameter values to limit the size of the backlog queue. For example, Solaris 2.4 uses a default value of five and allows this to be incremented to 32; Solaris 2.5 uses a default of 32 and allows a maximum of 1024; SGI's IRIX implementation provides a default queue size of eight connections; Linux release 1.2 defaults to 10 connections; and Microsoft Windows NT 4.0 provides for a backlog maximum size of six. For busy Web servers, system administrators should look to increasing the backlog size from the default values. Failure to do so can effectively limit the overall availability of their Web servers within the Internet.
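The backlog figure itself is requested by the server application when it opens its listening socket. The following Python sketch (a simplified illustration, not the code of any particular Web server) shows where the value comes into play; whatever number is requested, the kernel may still cap it at its own configured maximum:

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 80))   # the well-known HTTP port; binding it requires root privileges
server.listen(32)       # ask the kernel to queue up to 32 not-yet-accepted connections
                        # (the kernel silently limits this to its own maximum)

while True:
    conn, addr = server.accept()   # removes one completed connection from the backlog queue
    # ... read the HTTP request from conn and send the response ...
    conn.close()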
Connection Termination

The operation of HTTP on a busy server can become a major drain on available system resources. The previous section outlined how a server could potentially fill its backlog queue and prevent the creation of new TCP connections until system resources can be reassigned. In this section, we examine how a Web server can run out of available system resources as a result of how it handles the termination of established TCP connections. The operation of the HTTP protocol causes each established TCP connection to be terminated immediately following the completed data transfer. The termination of the TCP connection is initiated by the Web server, which completes what the TCP protocol specification outlines as an active close. This is done by sending the client a TCP packet with the FIN flag set. Upon receipt, the client returns an acknowledgment packet and then completes what the specification outlines as a passive close--sending the Web server a TCP packet with the FIN flag set and awaiting the server's acknowledgment. The TCP protocol specifications allow either the client or the server to perform the active close on any connection. However, the end that performs this operation must place the connection in a state known as TIME-WAIT. This is done in order to ensure that any straggler packets can be properly handled. For the duration of TIME-WAIT, the connection information--stored within a structure known as a Transmission Control Block (TCB)--must be retained.
UNIX servers allocate a fixed number of TCBs--with values typically configurable up to a maximum of 1024. It is possible on a busy Web server for all available TCBs to become temporarily used up, resulting in the "Web Site Found. Waiting for Reply..." message being displayed to clients. The netstat program can be used to determine the number of outstanding TCBs currently in the TIME-WAIT state.

Communication Protocol Operation--TCP and Congestion Management

Some of the major criticisms leveled at HTTP relate to its inefficient use of the underlying communication protocols. The TCP protocol was designed as a windowing communication protocol. Such protocols allow the receiving station to enforce a degree of flow control by providing an indication of how much data it is currently able to accept from the sender. The objective is that the receiving station can indicate to the sender how quickly it is able to process the sent data. Importantly, windowing alone fails to consider the capability of the connecting network to transfer the amounts of data within the advertised window sizes.

Consider a data exchange between two network hosts. The receiving station advertises to the sender the amount of data it is able to receive, up to a maximum value. The sending station transmits the requested amount of data, which the receiver places within its buffers. The receiving station reads data from these buffers, freeing up space for it to accept more of the sender's data, and requests that the sender transmit a window of data sufficient to fill the available space within its receive buffers. Such an operation works fine--assuming that the network between the two hosts is capable of transmitting the requested volumes of data. Consider, however, the case in which the two hosts are connected over a routed wide-area link. If the router or the link becomes congested, it may not be able to process all of the data that the receiving station is advertising it can accept. The receiving station might be advertising that it is able to receive a given amount of data based on the fact that its buffers are empty, but the network in between cannot carry it. The network needs some way to indicate to the sending station that it should slow down its transmission.

The slow start algorithm provides this functionality. It does this through the operation of something called the congestion window. This value is incremented based upon the number of acknowledgment packets returned from the receiving station. The sending station transmits an amount of data that is the minimum of either the congestion window or the window size advertised by the receiver. In such a way, the sending station can limit its transmissions based upon both the receiving station's and the network's capability to handle different amounts of data. This relates to HTTP because slow start adds additional round-trip delays to the transfer of data over newly established TCP connections. This is due to the fact that the congestion window under the slow start algorithm is initialized to one packet--meaning that a sender will initially transmit a single packet and then await an acknowledgment from the receiving station. The congestion window grows with each acknowledgment received, effectively doubling every round trip, until it matches the receiver's advertised window size. Figure 21.3 shows how this additional round-trip delay affects transmission under HTTP. The figure contrasts the effects of the slow start algorithm against those of a long-lived connection.
As the diagram indicates, the long-lived connection transfers the receiver's maximum window size of data, enabling a fast transmission time. In contrast, under the slow start algorithm, additional delays are introduced while the sender waits for the receiver's acknowledgment packets.

Figure 21.3. Under the TCP protocol, efficient transfers require long-lived connections, which reduce the overall impact of the slow start algorithm.

This is contrary to the operation of the HTTP application, which uses a number of ephemeral TCP connections, each of which is dedicated to transferring only a small amount of data. Each of these connections invokes the slow start algorithm and, as such, reduces the overall efficiency of data transfers under HTTP. Increasingly, there is a growing desire to make greater use of persistent, or long-lived, TCP connections within the operation of HTTP. These ideas are further examined in the following sections.
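The cost of slow start on a short-lived connection can be seen with a small back-of-the-envelope model. The Python sketch below assumes a simplified version of the algorithm (the congestion window starts at one segment and doubles each round trip until it reaches the receiver's advertised window) and simply counts round trips; the segment counts are invented purely for illustration.

def round_trips(total_segments, advertised_window):
    # Simplified slow start: cwnd begins at one segment and doubles every round trip,
    # never exceeding the window advertised by the receiver.
    cwnd, sent, trips = 1, 0, 0
    while sent < total_segments:
        sent += min(cwnd, advertised_window)
        cwnd = min(cwnd * 2, advertised_window)
        trips += 1
    return trips

# A 16-segment object with an 8-segment advertised window:
print(round_trips(16, 8))   # 5 round trips on a brand-new connection
# On a long-lived connection whose congestion window is already fully open,
# the same transfer needs only 16 / 8 = 2 round trips.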
Providing Multiple Links within an HTML Page

The previous section examined the operation of the HTTP application protocol. That demonstration outlined how a Web client makes use of a separately established TCP connection to download each data object specified within a single HTML page. As previously mentioned, the use of multiple TCP connections between the client and server will reduce the total elapsed time required to transport all the data objects specified within the page. However, there is a price to be paid. Each TCP connection requires the additional use of Web server resources, and it is very possible for servers to have difficulty keeping pace with these requirements. This section outlines some simple HTML page design principles that will help to combat these effects and allow for more efficient use of resources.

Figures 21.4 and 21.5 provide a comparison of two different Web pages. Figure 21.4 makes use of a number of individual icons to outline its contents. Each icon provides a reference to a URL guiding the user through the site. This approach is in contrast to that shown in Figure 21.5, which uses a single graphic with URLs embedded behind different parts of the graphic to provide a map of the site.

Figure 21.4.

Figure 21.5.

From the Web client's perspective, it is possible not to notice any difference in download time between the two pages. This is especially true if the browser has been configured to operate multiple simultaneous TCP connections: even though the page in Figure 21.4 contains a number of individual graphic images, each of those images could be downloaded simultaneously. The main issue relates to the effective use of resources on the Web server. With the map approach indicated in Figure 21.5, the map is a single graphic with hyperlinks, defined as URLs, embedded behind different regions of the image. It can therefore be downloaded over a single TCP connection, as opposed to the four separate connections required with the icon approach. This allows the Web server to potentially support a larger number of users for the same amount of available resources.

Using a Cache to Reduce Downloads

The outline specification of HTTP within the informational RFC 1945 introduces the idea of a request/response chain providing a link between Web server and client. The RFC indicates that there may be a number of intermediate devices upon this chain which act as forwarders of any messages. It is possible for any of these intermediary devices to act as a cache--the benefits being both a greatly reduced response time and the preservation of available bandwidth within the Internet. Figure 21.6 illustrates how this might occur. It outlines how the communication path between the Web server and client is effectively shortened with the use of a cache on an intermediate server. The intermediate server is able to return requested Web pages to the client if those pages exist within its cache. In order to ensure that up-to-date information is returned, the client sends a conditional GET request.

Figure 21.6.

The conditional GET request is made through the use of message request headers available within version 1.0 of HTTP. Listing 21.3 provides an example of how such requests would be structured.

Listing 21.3. Making a conditional GET request.

workstation> telnet www.dttus.com 80
trying 207.134.34.23
Connected to 207.134.34.23
Escape character is '^]'.
GET /pub/images/dttus/mapimage.gif                   <- request line entered
If-Modified-Since: Thursday, 13-Feb-97 22:23:01 EST  <- conditional GET header
                                                     <- request terminated by CR/LF
HTTP/1.0 304 Object Not Modified                     <- response starts with the status line,
                                                        which indicates that the object was
                                                        not modified (a single blank line ends
                                                        the server response headers)
Connection closed by foreign host                    <- TCP connection is terminated
workstation>
Listing 21.3 indicates that the client formats a conditional GET request using the If-Modified-Since request header value. This request asks the Web server to send the image only if it has changed since the indicated date. The Web server returns a response status line indicating that the image has not changed. The server closes the TCP connection, and the Web client displays the image from within its cache. The use of caches and intermediary servers enables the HTTP application protocol to operate more efficiently. It should be noted that even with the use of a cache, a TCP connection between the server and the client still needs to be established; however, the successful use of a cache greatly reduces the total required data transfer.

Looking to the Future

HTTP version 1.1 is classed as a minor upgrade of the existing application protocol implementation. RFC 1945 tells us that minor upgrades provide enhancements in key areas of functionality--without major changes to the underlying operation of the application. Two of the key areas of improvement HTTP/1.1 focuses on are support for persistent TCP connections and more effective use of caching techniques. More importantly, HTTP version 1.1 is being developed as a standard to be ratified by the Internet Engineering Task Force (IETF). Previous implementations of HTTP never underwent standards acceptance. This resulted in a proliferation of applications that called themselves HTTP/1.0 without implementing all the recommendations outlined within the published RFC. Given the diversity within existing implementations, a protocol version change was determined to be the most effective way to provide a basis for a standard solution. In this section, we focus on some of the main recommended changes to the existing protocol implementations. Refer to the previous reference tables for the message formats and return codes included in HTTP/1.1.

Supporting Persistent TCP Connections

The standard operation within HTTP version 1.1 is for the client to request a single TCP connection with the remote server and use it for all the required transfers within a session. This directly contrasts with the existing requirement to set up a separate TCP connection for the transfer of each object within a given HTTP session. This single change has the potential to save both Web server resources and available network bandwidth. In addition, the use of a persistent connection allows HTTP to benefit from the operational efficiencies found within existing implementations of TCP. A short sketch of this idea appears below.

New Request Methods Supported

In addition to the request methods supported within HTTP/1.0, version 1.1 has added the new methods outlined in Table 21.8.

Table 21.8. HTTP version 1.1 additionally supported request methods.
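The sketch referred to above is given here in Python. It assumes, purely for illustration, a server that honors HTTP/1.1 persistent connections, and it reuses the example host and paths from the earlier listings; a real client would parse Content-Length (or chunked encoding) to find the end of each response before reusing the connection.

import socket

host = "www.dttus.com"                     # example host from the earlier listings
paths = ("/home.html", "/pub/images/dttus/mapimage.gif")

s = socket.create_connection((host, 80))   # one TCP connection for the whole session
for i, path in enumerate(paths):
    last = (i == len(paths) - 1)
    request = ("GET " + path + " HTTP/1.1\r\n"
               "Host: " + host + "\r\n"
               "Connection: " + ("close" if last else "keep-alive") + "\r\n\r\n")
    s.sendall(request.encode("ascii"))
    # Peek at the status line of whatever has arrived so far; a complete client
    # would read the full response body here before sending the next request.
    print(s.recv(4096).split(b"\r\n")[0])
s.close()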
HTTP version 1.1 provides support for a number of additional header values. These newly supported headers and their main functionality additions are summarized in Table 21.9.

Table 21.9. HTTP version 1.1 additionally supported header values.
In addition, a number of new headers have been added that allow the server to relay more information about the actual content, such as content encoding type, language, and message size.

Summary

There is little doubt as to the importance of HTTP within the Internet. The application provides a simple solution for dynamically moving files between Web clients and servers--without the overhead or user intervention associated with the more standard file transfer programs, such as NFS and FTP. HTTP has some major strengths, but also some significant weaknesses. A number of serious performance problems have been associated with HTTP's operation. At the head of this list would probably be the requirement of having to fire up a separate connection to transfer each page or data object. The use of server- and client-based file caches gets us part of the way to streamlining communications. However, we still require the setup and teardown of TCP connections, which uses limited resources and slows down overall performance. The answer is to make use of persistent connections. This would involve establishing a single TCP connection and maintaining it until all data transfer between the client and Web server has been completed. In such a way, it would be possible to take advantage of useful performance features found within TCP that would aid the download of multiple Web pages. HTTP version 1.1 attempts to provide this functionality through the use of additional protocol header values. Finally, existing vendor interoperability issues should effectively be over with the expected standardization of HTTP/1.1 in the latter part of 1997. It is hoped that with HTTP/1.1 ratified as an Internet Standard, it will be possible to improve the integration between different vendor solutions.