The performance of conventional and bare PC web servers under congestion
Type of Work: application/pdf
Extent: xii, 65 pages
Department: Towson University, Department of Computer and Information Sciences
Rights: Copyright protected, all rights reserved.
There are no restrictions on access to this document. An internet release form signed by the author to display this document online is on file with Towson University Special Collections and Archives.
The Transmission Control Protocol (TCP) is by far the most widely used transport protocol in network communication systems. TCP relies on several algorithms to govern its operation, such as the retransmission timer, which determines how long to wait before retransmitting lost or delayed packets, and congestion control mechanisms, which prevent traffic volume from exceeding network resource limits. We first evaluate the performance under congestion of the popular Windows-based IIS and Linux-based Apache Web servers when serving a single request from a browser. Specifically, we conduct experiments in a real test network with several routers to compare delay, throughput, and shared bandwidth percentage when the Web server is subjected to various workloads under different levels of background network traffic. We find that IIS with Compound TCP has performance advantages over Apache with Cubic TCP when the two servers compete for bandwidth, but that Apache has smaller delays than IIS for large and medium-sized files.

We next study the performance impact of recently recommended TCP retransmission timer settings using a bare PC Web server with no operating system or kernel running on the machine. We first evaluate server performance in a test LAN with various settings of the alpha and beta constants used for computing SRTT and RTTVAR, in the presence of varying levels of background traffic generated by conventional systems. We then study performance with different minimum RTO settings, and compare the performance of the bare PC Web server using the recommended timer settings with that of the Apache and IIS Web servers running on Linux and Windows respectively.
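The SRTT and RTTVAR estimators that the alpha and beta constants parameterize are those of the standard retransmission timer computation in RFC 6298. A minimal sketch of that computation is below; the class name and structure are illustrative, not part of the servers studied here, and the granularity value G is an assumed implementation detail.

```python
ALPHA = 1 / 8   # smoothing gain for SRTT (RFC 6298 default)
BETA = 1 / 4    # smoothing gain for RTTVAR (RFC 6298 default)
K = 4           # variance multiplier (RFC 6298)
MIN_RTO = 1.0   # recommended 1-second minimum RTO, in seconds
G = 0.1         # assumed clock granularity, in seconds

class RtoEstimator:
    """Illustrative RFC 6298 retransmission-timeout estimator."""

    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # round-trip time variation
        self.rto = 1.0      # initial RTO before any sample

    def update(self, r: float) -> float:
        """Feed one RTT sample r (seconds) and return the new RTO."""
        if self.srtt is None:
            # First measurement: SRTT <- R, RTTVAR <- R/2
            self.srtt = r
            self.rttvar = r / 2
        else:
            # Subsequent samples: update RTTVAR before SRTT,
            # per the ordering in RFC 6298
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - r)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * r
        # Clamp to the recommended 1-second minimum studied in this work
        self.rto = max(MIN_RTO, self.srtt + max(G, K * self.rttvar))
        return self.rto
```

With a first sample of 200 ms, SRTT becomes 0.2 s and RTTVAR 0.1 s, so the computed timeout 0.6 s is clamped up to the 1-second minimum; the experiments described above vary ALPHA, BETA, and MIN_RTO to measure their effect under congestion.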
We find that 1) no combination of alpha and beta, or sampling strategy, performs consistently better than the others under the different levels of background traffic; 2) minimum RTO settings lower than the recommended 1-second minimum work when there is moderate background traffic, but the 1-second minimum is best at higher levels of congestion; and 3) using the standard timer settings while omitting the TCP SACK option and congestion control mechanisms degrades bare server performance for some levels of background traffic.

Finally, we conduct experiments using an IPv6 Web server in a test LAN environment with several routers to determine performance under congestion due to IPv6 and IPv4 traffic. The experiments use an Apache Web server and a bare PC Web server with no operating system. Requests to the servers are made using an ordinary Web browser, and different levels of congestion are created using MGEN traffic generators. We find that IPv4 throughput is slightly greater than (or approximately equal to) IPv6 throughput under the same level of congestion; when IPv4 throughput is larger, the difference is between 4% and 23%. However, Apache server delays for HTTP requests over IPv6 are 6 to 32 ms higher than for IPv4, depending on the level of congestion. For all congestion levels, the bare PC Web server has significantly lower throughput and larger delays than Apache, regardless of whether IPv6 or IPv4 is used, since it does not implement any TCP optimizations. The results show that Web server throughput and delay for browser requests depend on both the congestion traffic rate and the percentage of like traffic in the congestion mix. This research shows that, with respect to TCP congestion control mechanisms on the popular Web servers running on the Windows and Linux operating systems, neither always performs better than the other under various congestion scenarios.
Thus, the use of different algorithms on servers and clients leads to some unfairness in sharing network resources. Furthermore, the initial value for TCP's retransmission timer and optimal values for the alpha and beta coefficients are difficult to determine, as no single pair of values consistently gives better performance. Also, Web server performance for requests over IPv6 and IPv4 depends on both the congestion traffic rate and the percentage of like traffic in the congestion mix; and a conventional Web server performs better under congestion than a bare PC Web server because it implements TCP optimizations.