A problem of scale
- Information Age, August 2001
Conventional wisdom says that Microsoft software does not scale well enough to be truly considered “enterprise class”. But as Windows 2000 gains popularity, is it time to re-assess Microsoft's high-end claims?
In the technology business, extravagant claims for new products are as commonplace as they are misleading. And the claims made in 1993 by Microsoft marketing staff about the scalability and reliability of the company's Windows NT Server operating system are a classic case in point. Unfortunately for Microsoft, it soon became clear that NT could neither compete with Unix and other high-end operating systems nor live up to myriad other marketing promises.
The company's latest enterprise operating system, Windows 2000 (launched in early 2000), now occupies the high end of the Microsoft product line, but many IT directors still regard the company's promises of enterprise-class performance as untrustworthy. Are they right to be sceptical, or is it time to erase the bad memories associated with early NT deployments and consider Windows 2000 as a serious high-end alternative to Unix?
Analysts at IT market research company Forrester Research seem to think so. The company recently completed a report into the adoption of Windows 2000 as a serious server platform, which concluded that the operating system is more than capable of the necessary “heavy lifting”. Moreover, Forrester cites some real-life success stories: the USA Today web site, a six-server system capable of delivering up to 100 million pages a day; and retail chain JC Penney's site, which last year served on average three million pages and performed $1 million of ecommerce transactions each day. Similarly, market surveillance applications at the Nasdaq stock exchange can now monitor up to 800 transactions per second, following a migration to Windows 2000 Advanced Server and Datacenter Server.
These two server systems are Microsoft's true high-end offerings. Advanced Server supports computers with up to eight processors and 8GB of memory, has two-node failover clustering (where one computer takes over if another fails), and 32-node load-balancing clustering (distributing tasks between servers so that none are over-worked). It also includes a series of packages for systems management and improved performance and availability.
Datacenter Server, meanwhile, is Microsoft's ultra-high-end package, available only through specific partners. It supports 64GB of memory, 32 processors, enterprise storage area networks (SANs), server-partitioning (so one server can look like several) and four-node failover clusters — all on top of Advanced Server's capabilities.
Microsoft is trying to ensure stability by making Datacenter Server only available on certain hardware configurations that have been pre-tested and certified by its partners. According to Paul Sinton-Hewitt, a manager at Unisys, which is one of Microsoft's Datacenter Server partners, “nearly 40% of all errors in Windows NT were from badly-produced drivers from third-party device makers”. Now, before a Datacenter Server is installed, every element of the system that could bring down the servers is stress-tested for 14 days. “It's a process that's equivalent to the kind of testing that IBM does on its mainframes,” says Sinton-Hewitt.
Mark Smith, Dell's product manager for enterprise servers in the EMEA region, says that customers, after an initial delay while they evaluated the product, are now adopting Windows 2000 even faster than they took up Windows NT. “It's a function of cost of deployment and a more open architecture. Those coming from the mainframe world are astounded at the cost savings.

“But customers really have to decide the level of reliability they need. Do they really need 'five nines' availability (99.999% up-time), or is four nines or three nines good enough?” Choice, via the three server levels of Windows 2000, has increased sales, says Smith: the option of moving from Server to Advanced Server to Datacenter Server means companies know they can scale up as their needs grow, without the loss of investment in applications that a move to a “closed or proprietary architecture” (Unix or a mainframe) would entail.
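The practical difference between those availability levels is easy to underestimate. As a rough illustration (not from the article — a sketch assuming a 365-day year and treating availability purely as a downtime budget), each extra "nine" cuts the permitted annual downtime tenfold:

```python
# Annual downtime budget implied by an availability level ("the nines").
# Illustrative arithmetic only; assumes a 365-day year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Minutes of permitted downtime per year at the given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

for label, a in [("three nines", 0.999),
                 ("four nines", 0.9999),
                 ("five nines", 0.99999)]:
    print(f"{label} ({a:.3%}): {downtime_minutes(a):.1f} minutes/year")
```

On these assumptions, three nines allows roughly 8.8 hours of downtime a year, four nines under an hour, and five nines barely five minutes — which is why the jump to "five nines" hardware and clustering carries such a price premium.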
Indeed, a report conducted by DH Brown Associates – and commissioned, ironically, by Sun Microsystems, creator of the Solaris version of Unix – backs up Smith's views. The report compared the relative merits of Windows 2000 and Solaris; while Solaris won on scalability, reliability and ease of use, Windows 2000 came out on top in terms of the choice of hardware on which it will run and the number of applications that will run on it.
But in terms of overall performance, can a Microsoft product really compete with high-end Unix systems running on top-of-the-range hardware? (Sun's Solaris, for instance, has supported 64 processors, fault-tolerance and load-balancing for a number of years.) Al Gillen, an analyst at IDC, points out that Unix vendors have a head start on Microsoft. “That experience will translate into some sort of a core competitive advantage,” he says. And the price of a Datacenter Server implementation – approximately £180,000 – means that Microsoft's traditional price advantage is not so obvious at the high end.
