Wednesday, November 26, 2008

Performance in Routers

Performance has become one of the hot topics in the multiprotocol router market today. Therefore, performance results must provide sufficient information so that valid comparisons can be made for the network that is to be built. The definition of performance in a multiprotocol router network has required revision as the router network has evolved. When the device was a single-protocol, single-media machine, performance could easily be reported in frames per second. This speed specification was a sufficient performance rating, given that the environment was simple and well understood. As multiple media interfaces and multiple connectionless protocols were added, performance was presented either as the maximum speed for a specific protocol or as a generic maximum throughput for the box. This led to confusion, since the type of frames and the environmental modifiers were not specified.
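
To make the frames-per-second figure concrete, here is a minimal sketch (not from the original text) of how the theoretical line-rate frame count for Ethernet can be computed, assuming only the standard preamble and inter-frame gap overheads:

```python
# Sketch: theoretical maximum Ethernet frame rate (illustrative only).
PREAMBLE_BYTES = 8   # preamble + start-of-frame delimiter
IFG_BYTES = 12       # minimum inter-frame gap

def max_frames_per_second(link_bps: int, frame_bytes: int) -> float:
    """Line-rate frames per second for a given frame size (incl. FCS)."""
    bits_on_wire = (PREAMBLE_BYTES + frame_bytes + IFG_BYTES) * 8
    return link_bps / bits_on_wire

# 64-byte frames on 10 Mb/s Ethernet: roughly 14,880 frames per second
print(int(max_frames_per_second(10_000_000, 64)))
```

A figure like this only describes the wire; whether a given router can forward at that rate, and under what protocol mix, is exactly what a standardized test must establish.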

In today's network, where the router is used for LAN interconnection and WAN transport of both connectionless and connection-oriented protocols, performance reporting must be standardized to allow comparison in both the bridged and routed environments. This standardization must yield information valid for a production environment in which multiple protocols are enabled in the router, routing information exchange frames are flowing, and filters are activated.

Many vendors still quote the total theoretical throughput of their boxes as sufficient input for the router decision. These figures often come without any specification of the environment in which the performance test was made. The test bed may bear no likeness to a production environment, since the test may be chosen to optimize for maximum speed. Without a controlled environment in which to record the speed, such measurements cannot easily be compared with other vendors' performance information. To address this, the Internet Engineering Task Force (IETF) formed the Benchmark Methodology Working Group (BMWG) to establish guidelines for performance testing. Terminology, the test environment (frame size, router information exchange, single and multiple protocol mixtures, etc.) and, in the future, packet content are specified to give the marketplace data that can be compared and contrasted when making purchase decisions.
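
The benchmarking methodology defines throughput as the fastest rate at which the device under test forwards every offered frame without loss. As a rough illustration of how such a test is typically driven, here is a hedged sketch of a zero-loss rate search; the run_trial() helper is hypothetical and stands in for whatever traffic generator and frame counters a real test bed would use:

```python
# Sketch of a throughput search in the spirit of the BMWG methodology:
# find the highest offered rate at which no frames are lost.

def run_trial(rate_fps: float, frame_bytes: int, duration_s: int) -> int:
    """Offer traffic at rate_fps for duration_s and return frames lost.

    Hypothetical placeholder: a real harness would drive a traffic
    generator and compare frames sent with frames forwarded.
    """
    raise NotImplementedError("drive the traffic generator here")

def find_throughput(max_rate_fps: float, frame_bytes: int,
                    duration_s: int = 60,
                    tolerance_fps: float = 100.0) -> float:
    """Binary search for the highest zero-loss rate at one frame size."""
    low, high = 0.0, max_rate_fps
    while high - low > tolerance_fps:
        mid = (low + high) / 2
        if run_trial(mid, frame_bytes, duration_s) == 0:
            low = mid    # no loss: try a higher rate
        else:
            high = mid   # loss observed: back off
    return low
```

The search is repeated for each frame size and protocol mixture in the test plan, which is why the methodology's environmental specifications matter as much as the raw numbers.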

The Benchmark Methodology Working Group (BMWG) of the Internet Engineering Task Force (IETF) has defined the following documents:
· Benchmarking Terminology (RFC 1242) - Available
· Benchmarking Methodology - Draft
· Benchmarking Methodology: Test Frame Formats - Draft
In spite of the attention performance receives, it is only one of several important selection criteria and must always be weighed against the other critical determining factors in the network, such as:
· Required functions and protocols supported
· Reliability and availability
· Network management capability
· Quality and level of service provided by vendor
· Total cost of ownership (hardware, software, service and support)
