
Gold class Internet

Poor web site performance can kill a company's online strategy. What technologies and techniques can organisations use to create lightning-fast e-commerce?


While it might take only a few seconds to download the code of a web page, it can take a minute and a half for an object-heavy page to resolve on the screen. Technical consultant Richard Donkin, of UK-based software supplier Orchestream, says there are a few tricks web designers can use to improve that. “The biggest thing is optimising your graphics and not using too many different buttons. If you cut down the colour depth of the graphics and make them simpler, that makes a huge difference. In fact, it’s better to have a button bar rather than individual buttons, because that’s only one transaction under HTTP, so you don’t keep having to make requests back to the server for additional objects.”
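To make that advice concrete, here is a minimal sketch in Python of cutting a graphic’s colour depth, using the Pillow imaging library; the filenames, and the choice of tool, are illustrative rather than anything Donkin prescribes.

    # Quantise a full-colour button graphic down to a 64-colour
    # adaptive palette, which shrinks the file considerably.
    from PIL import Image

    img = Image.open("button_bar.png").convert("RGB")
    small = img.convert("P", palette=Image.ADAPTIVE, colors=64)
    small.save("button_bar_small.png", optimize=True)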

Under HTTP 1.0, the original protocol used by web servers for transactions, it was impossible to keep a connection open between desktop client and web server. Every request for a new page object had to be a new transaction, with all the performance penalties that brought. HTTP 1.1, the latest version, allows a persistent connection between client and server and is understood by all version 3.0 and later browsers.
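The difference is easy to demonstrate. Below is a rough sketch using Python’s standard http.client module, which speaks HTTP 1.1 and so can reuse a single connection for several objects; the host and paths are invented.

    import http.client

    # One TCP connection carries every request; under HTTP 1.0
    # each of these objects would have needed its own connection.
    conn = http.client.HTTPConnection("www.example.com")
    for path in ("/", "/style.css", "/logo.gif"):
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()  # drain the body before issuing the next request
    conn.close()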

It’s this advance that’s behind start-up company Redline Networks’ TX web accelerators. The systems intercept HTTP transactions directed at the server and consolidate them into a few dozen TCP/IP (the protocols used to transmit data across networks and the Internet) transactions. They then filter out unnecessary data, such as unrendered scripts and comments, before sending the data back to the client in compressed form, which the browser decompresses natively. A rival accelerator from Packeteer (acquired from British firm Workfire Technologies in September) stores tables on how web objects interact with browsers and how to optimise those transactions.
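In outline, the two steps look something like the sketch below; it illustrates the principle rather than Redline’s actual code.

    import gzip
    import re

    def condense(html: str) -> bytes:
        # Filter out data the browser never renders: comments and
        # runs of insignificant whitespace. (A real product would be
        # more careful, e.g. around <pre> blocks and inline scripts.)
        html = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)
        html = re.sub(r"\s+", " ", html)
        # Compress the result; the browser decompresses gzip natively,
        # as advertised in its Accept-Encoding header.
        return gzip.compress(html.encode("utf-8"))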

Even so, Redline recommends using the TX accelerators with a standard caching system. Caching systems are software- or hardware-based systems, usually independent of the web server, that intercept calls for often-requested objects and static content (content that is the same for all transactions) and serve them from their own stores of fast electronic memory.

This frees up the web server for processing dynamic content. Inktomi, CacheFlow and Cisco all have caching systems; CacheFlow’s is based on its own OS, CacheOS. The company reckons that its server accelerators can process up to 95% of inbound page requests, giving response times up to 80% faster for web users.
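A toy sketch of the principle at work, serving repeat requests for static objects from memory so the origin server is never touched; real products add expiry rules, memory limits and HTTP header handling, and the names here are invented.

    # In-memory store of often-requested static objects, keyed by URL.
    cache = {}

    def handle(url, fetch_from_origin):
        if url in cache:
            return cache[url]          # hit: served from fast memory
        body = fetch_from_origin(url)  # miss: fall through to the web server
        cache[url] = body
        return body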

One particular problem with e-commerce web pages, Donkin points out, is that they include server-side includes (points where the server substitutes information from a database). Because that content is dynamic, standard caching technology can’t help, except where static elements are referenced. One approach to this problem comes from Persistence Software’s new offering, Dynamai, which the company claims has improved performance 35-fold on eBay and Discovery.com. Dynamai remembers the answers to common e-commerce questions about price and availability and responds with pre-computed answers. The software listens for events such as price changes and invalidates the associated cache contents to avoid serving old data.
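In outline, the approach looks something like this sketch; the class and method names are invented for illustration and this is not Persistence’s code.

    class DynamicContentCache:
        def __init__(self, compute):
            self.compute = compute  # e.g. a database query for price or stock
            self.answers = {}

        def lookup(self, product_id):
            # Serve a pre-computed answer where one exists.
            if product_id not in self.answers:
                self.answers[product_id] = self.compute(product_id)
            return self.answers[product_id]

        def on_price_change(self, product_id):
            # An event invalidates the stale entry, so old data
            # is never served after a price changes.
            self.answers.pop(product_id, None)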

The next problem down the line is how quickly the back end can respond to requests for data in real time; Donkin estimates this accounts for 50% of performance problems. One rule, he says, is that companies shouldn’t use the same database system for corporate data and e-commerce. “At the American Express site, you have the ability to look at a statement. The corporate system is probably running on a mainframe. But the web server will be running on a Unix box, probably linked to a separate database.”

This means American Express can scale up easily if demand outstrips the capabilities of the mainframe, and if one system goes down, not all the services go down with it. Having a separate database also means it can be optimised for web serving, with different tables, heavy indexing of particular areas and less indexing of transactions, reducing the amount of data the server has to get through before it locates what it wants.
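As a rough illustration of what that optimisation means in practice (using Python’s built-in sqlite3 for brevity, with invented table and column names), the web-facing copy gets indexes on the columns customers search that a transaction-oriented corporate database might well do without.

    import sqlite3

    web_db = sqlite3.connect("web_copy.db")
    web_db.execute("CREATE TABLE IF NOT EXISTS products "
                   "(id INTEGER PRIMARY KEY, name TEXT, category TEXT, price REAL)")
    # Heavy indexing of the areas the web site queries most often.
    web_db.execute("CREATE INDEX IF NOT EXISTS idx_name ON products(name)")
    web_db.execute("CREATE INDEX IF NOT EXISTS idx_category ON products(category)")
    web_db.commit()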

However, with two separate databases, synchronisation becomes an issue: both have to hold the same information. Copying the data between the two takes time and requires a lull in database activity; otherwise performance will take a hit.
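Continuing the illustrative sqlite3 sketch above, one common approach is to copy across only the rows that have changed since the last run, scheduled for a quiet period; the column names are again invented.

    def sync(corporate_db, web_db, last_sync):
        # Pull only rows changed since the previous synchronisation.
        rows = corporate_db.execute(
            "SELECT id, name, category, price FROM products "
            "WHERE updated_at > ?", (last_sync,)).fetchall()
        # Upsert them into the web-facing copy.
        web_db.executemany(
            "INSERT OR REPLACE INTO products (id, name, category, price) "
            "VALUES (?, ?, ?, ?)", rows)
        web_db.commit()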

What some companies have found is that picking a time to do this isn’t necessarily as obvious as choosing the middle of the night. David Caddis has another of his cautionary tales of e-commerce. “A bank in the Mid-West was using 4-6 in the morning to back up the database. But they discovered they were getting an unusual amount of activity at that time, and the users were getting poor performance as a result of the backup. It turned out that farmers were going on-line to check their accounts every morning before going out into the fields. The result was the bank genuinely had to become a 24/7 online bank.”

