Cutter Consortium
Web Services Strategies
Volume XII, No. 9; September 2002

Executive Summary

This is the first issue of Web Services Strategies, which replaces Component Development Strategies with a more topical focus on Web services. As Web services are arguably just a new kind of component -- and many Web services encapsulate conventional components -- this is not as big a shift as it might seem. At any rate, I aim to continue CDS's tradition of analyzing the business impact of emerging technology and exploring the murky but critical interface zone between business and IT.

Before plunging into specific topics like standards, security, products, and case studies, I am devoting this issue to a skeptical review of Web services aimed at challenging some of the prevalent hype. In the interests of continuity, there is also a section devoted to comparing Web services and components.

It is not going to be easy living up to the high standards set by Paul Harmon and Paul Allen as editors of CDS, and I will need all the help I can get. Please send in your feedback, criticism, corrections, additional material, examples, counterexamples, and suggested topics for coverage -- especially if you are using Web services!

-- Tom Welsh, Editor
twelsh@cutter.com

Contents

· Web Services: Food for Thought
· Summary
· Web Services and Components
· How Will Web Services Be Better?
· Using Web Services and Components Together

WEB SERVICES: FOOD FOR THOUGHT

In this inaugural issue of Web Services Strategies (WSS), I aim to start with a wide-angle discussion of Web services -- what they are for, their claimed benefits, and the extent to which these can be delivered today. Since many subscribers will have read my Cutter Consortium Enterprise Architecture Executive Report "Web Services in Context" (Vol. 5, No. 3) by now, I will try to avoid going over the same ground again. (If you do not have the report, it is available from Cutter Consortium; please visit http://www.cutter.com/ for more information.)

During this whistle-stop tour, a fair number of important concerns will be introduced and briefly discussed. Some of these -- such as Web service standards, products, security and integrity, and the strategies of the leading players -- will be the topics of forthcoming issues of WSS. As concrete user experiences are the ultimate litmus test of any technical breakthrough, I am particularly keen to feature them as they become available.

There is so much hype and misinformation circulating about Web services that it is an analyst's duty to ask a lot of pointed questions. In the words of Will Rogers, "It ain't what you don't know that hurts you, it's what you think you know that ain't so!" This is especially true in the borderland between IT and business, where a little timely skepticism can sometimes save millions of dollars -- or, more important, a company's reputation.

THE WEB SERVICES PROPOSITION

The definition that I gave in "Web Services in Context" remains valid and (with a couple of minor changes) seems as good a starting point as any:

Essentially, Web services comprise a set of standards that let programs invoke software services across Internet protocol (IP) networks. There is a presumption that the underlying protocol is HTTP. It does not have to be, but so far, nobody has seriously tried anything else.
Web services are always invoked by means of XML-encoded messages. XML over HTTP can go anywhere there is an IP network -- which nowadays means everywhere -- and can be received by any HTTP server. Web services always invoke some sort of executable program or programs. But just what that program is need not be decided until the message arrives. In other words, Web services are characterized by late binding -- "just-in-time applications" in the words of Rod Smith, IBM's vice president of emergent technologies.
Perhaps the essential distinguishing feature of Web services is that they allow programs (rather than human beings) to exchange information and commands in the form of XML messages.
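
To make the definition concrete before dissecting it: here is a minimal Java sketch of what "programs invoking software services by exchanging XML messages over HTTP" actually amounts to. The service URL, the getPrice operation, and its namespace are invented for illustration; the envelope follows SOAP 1.1.

    import java.io.*;
    import java.net.*;

    public class SoapCall {
        public static void main(String[] args) throws IOException {
            // A SOAP 1.1 request is just an XML document -- nothing more exotic.
            String envelope =
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body>" +
                "<getPrice xmlns=\"urn:example:quotes\"><symbol>IBM</symbol></getPrice>" +
                "</soap:Body></soap:Envelope>";

            // An ordinary HTTP POST, indistinguishable at the transport level
            // from any other Web traffic.
            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/quote").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "\"\"");
            conn.getOutputStream().write(envelope.getBytes("UTF-8"));

            // The reply is another XML document; what produced it -- a CICS
            // transaction, a Java class, a Perl script -- is invisible to the caller.
            BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }

Note that nothing in this sketch depends on what implements the service at the far end -- which is the late binding that the definition describes.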

This sober definition is in line with our policy of carefully examining the logic behind vendors' enthusiastic claims to test for concealed flaws. By way of contrast, a typical vendor definition is the following, which was quoted by Dr. Luminita Vasiu in Cutter Consortium's Enterprise Architecture Executive Update "Web Services Paradigm" (Vol. 5, No. 6):

A Web service is a programmable entity that provides a particular element of functionality, such as application logic, and is accessible to any number of potentially disparate systems through the use of Internet standards, such as XML and HTTP.
As the next revolutionary advancement of the Internet, Web services will become the fundamental structure that links together all computing devices.
-- MSDN Library, October 2001

This definition from the MSDN Library is admittedly much more concise than mine. However, it is also much more vague. Note the words "potentially disparate systems" and, in particular, "Internet standards, such as XML and HTTP." Surely systems either are or are not disparate? And the second phrase skates neatly over the really interesting -- and vexed -- questions, such as "Do Web services necessarily run over HTTP?"

The second paragraph is a good example of the rhetorical excesses that have been perpetrated in the name of Web services. At best, it can be regarded as a pious expression of intent. At worst, it is unfounded speculation and -- in one plausible interpretation -- downright foolish.

At this early stage, I would like to introduce an unfashionable, even heretical, proposition: namely, that nobody can successfully predict the future except by chance. This should be obvious, but we have become so accustomed to forecasts and projections that we forget to add under our breath, "unless, of course, things turn out differently." Who predicted the invention of the Web and its explosive growth? Or Java? Or even something as prosaic as the application server? People hardly ever see a revolutionary innovation coming, precisely because it is revolutionary. Often, its very inventors have no idea how popular it will become.

Conversely, how many experts have projected smooth exponential growth of markets that stumbled or crashed spectacularly for unforeseen reasons? Artificial intelligence, CASE, 4GLs -- and even 5GLs, whatever they may be -- object-oriented database management systems (ODBMSs), enterprise application integration (EAI) -- the list goes on and on. Every one of these was, at one time, the darling of analysts who predicted hockey-stick growth. And every one -- through no fault of its own -- failed to meet the grossly unrealistic expectations that were loaded onto it.

A classic of this genre is The Long Boom by Peter Schwartz and Peter Leyden, published first as a Wired magazine cover story in July 1997, then as a book with Joel Hyatt in 1999 (Perseus Publishing). Under the headline "We're facing 25 years of prosperity, freedom, and a better environment for the whole world. You got a problem with that?" the authors predicted that everything was going to get better for at least two decades. In fact, the year 2000 saw the roof cave in on the dot-com boom, with disastrous knock-on effects on the telecommunications industry and global stock markets. Like boats caught under full sail by a sudden squall, many basically sound software companies were capsized by the ensuing market downturn.

What does this have to do with Web services? Returning to the MSDN quote, consider its second paragraph: "As the next revolutionary advancement of the Internet, Web services will become the fundamental structure that links together all computing devices." Tacked on to a harmless, if vague, definition of Web services, this sentence contains two separate predictions:

  1. Web services are "the next revolutionary advancement of the Internet." Apart from being possibly self-serving, this assertion is quite dubious. Microsoft's relationship with the Internet to date has been checkered to say the least, and sweeping pronouncements about the Internet's future from this source are apt to evoke a chorus of "embrace, extend, and extinguish." Changes to the Internet are the province of the Internet Engineering Task Force (IETF) and the other vendor-neutral bodies that have made it what it is today. Besides, Web services are not revolutionary at all. On the contrary, they represent a fairly obvious attempt to improvise a solution from existing technology.
  2. Web services "will become the fundamental structure that links together all computing devices." Taken at face value, this is ridiculous. Computing devices in different organizations, physically separated by considerable distances? Certainly. Computing devices in a single organization, separated by a LAN? Perhaps in some cases. But to assert that Web services will replace COM+, CORBA, Remote Method Invocation (RMI), Java Message Service (JMS), remote procedure calls (RPCs), and even plain sockets for all purposes -- that is way over the top. Taken as a marketing cry, however, it makes more sense. Microsoft has been stopped in its tracks by enterprise Java and the Internet. If Web services were to become the standard technique for distributed computing, it would make a lot of the enterprise Java infrastructure look redundant and go a long way toward hiding the Internet itself from most users.

All this may look like an anti-Microsoft rant, but that is not my intention. IBM, Sun Microsystems, Oracle, and all the other leading software vendors have their own varieties of Web services spin, and we will look closely at each of them in upcoming issues. Microsoft is simply the biggest, most visible, and most egregious purveyor of spin, and its statements are not questioned as often or as searchingly as they deserve.

For the time being, let's file the exact definition of Web services under "pending" and have a look at the value proposition. Just what is it that Web services can do for us that older middleware could not?

The technical advantages most often attributed to Web services include the following:

  • Loose coupling -- if asked to cite their single most important technical strength, most advocates of Web services would mention loose coupling. This is often explained as meaning that Web services and consumers (clients) can be changed independently of one another, because all that is transferred is an XML message, which (in theory at least) carries no hint of the platform that sent it. In the words of Chandra Venkatapathy, market manager for Web services marketing at IBM, "Web services enable systems to collaborate with each other regardless of the underlying infrastructure... A CICS application can collaborate with a Microsoft application using the SOAP [Simple Object Access Protocol] message." Suffice it to say that the picture is not quite as clear as it is often made out to be.
  • Self-defining -- as with loose coupling, this seems to be a rich source of confusion and woolly thinking. Two kinds of self-definition are adduced: that inherent in XML and that provided by Web Services Description Language (WSDL). XML data is supposed to be self-defining because each data field is enclosed by a tag that names it. It is quite true that XML tags are of some use to human readers, because they can apply common sense and experience to interpret them. A program that is looking for a "customer" field, however, will ignore a "<client>" tag -- anything less than a perfect match is a miss (see the sketch following this list). But this is exactly the way conventional middleware works! As for WSDL, the idea is that if a Web service is modified, the changes are reflected in the corresponding WSDL file, and prospective consumers can see them. But code cannot change itself without the help of a programmer. Therefore, seeing that the WSDL file has changed does not mean that the consumer application can use it in its new form. There is nothing new or unique about this feature of WSDL: CORBA Interface Definition Language (IDL) works exactly the same way.
  • Dynamic discovery -- instead of having to be programmed to invoke a specific Web service at a specific URL, a consumer can look up suitable Web services at runtime in a Universal Description, Discovery, and Integration (UDDI) directory (either public or private). A consumer could, for instance, find the geographically closest service, the cheapest, or any other combination of attributes. It will come as no surprise by now to learn that this capability, too, has been part of CORBA for many years -- it is known as the Trader service. UDDI enthusiasts would probably do well to investigate the ways in which the CORBA Trader service has been used -- and, more to the point, why it has not been used more extensively. (This might have something to do with the recent realization that corporations prefer to negotiate supplier relationships in person, rather than leaving this business-critical activity to a piece of untested software.)
  • Vendor neutrality -- here we come closer to the secret of Web services' success. The point is that absolutely nobody has yet announced that they think Web services is a terrible idea and that they will not be supporting it. On the contrary, most of the world's leading software vendors and quite a number of large business and government organizations have given it their approval. This has not been true of many standards in the past -- network protocols like TCP/IP and languages like COBOL, Fortran, and C have been among the exceptions. The fact that everybody has agreed on the overall Web services architecture, while industry leaders like IBM and Microsoft are energetically working it out, is the main reason why we can be sure that Web services will ultimately succeed.
  • Lightweight -- it is often argued that Web services will drastically reduce the time and effort it takes to get applications interoperating. There is some truth in this belief, but, once again, things are not quite so simple. The original precursors of Web services, such as XML-RPC and the early versions of SOAP, were lightweight all right -- but that was because they could not do very much. Some of the developers who worked out the concepts behind XML-RPC and SOAP still believe they should be kept as minimal as possible and used like a kind of middleware scripting language. (It was in the same spirit that Perl was dubbed "Internet duct tape.") Some comparisons that have been made in this area are decidedly misleading: for instance, comparing SOAP with CORBA, whereas it should really be compared with Internet Inter-ORB Protocol (IIOP), which plays much the same role in CORBA as SOAP does in Web services. At any rate, there can be no doubt that, with the steady accretion of more and more Web services specifications, top-end implementations will soon be far from lightweight.
  • Easy learning curve -- software that is lightweight is usually easy to understand and use. Some maintain that Web services are far easier to learn than conventional middleware like COM+, CORBA, or even RMI and RPC. This is true so far, but mainly because Web services do much less. All the same, if millions of developers get started using Web services in the near future, they will probably find it much easier to keep up with the inevitable enhancements as they appear. Someone who has gradually acquired expertise in a particular technology tends to think of it as straightforward, whereas a different technology of similar complexity might appear desperately forbidding. (Come to think of it, this is the same situation that we all face when we set out to learn a foreign language. Because we already speak our own language effortlessly, we do not appreciate how hard it is to learn -- which tends to make us feel superior to foreigners who struggle with it.)
  • "Firewall-friendly" -- another frequent favorite, especially when the audience is full of developers, is that SOAP (and hence Web services) is "firewall-friendly." In a moderately successful bid to prevent attacks by network vandals, spies, and "script kiddies," organizations have installed firewalls between their internal networks and the Internet. A typical firewall is programmed to accept traffic addressed to only a few ports, one of which is port 80 -- allocated to HTTP. Firewall administrators assume that port 80 traffic consists of Web browsing and generally leave it alone, if only because restricting it would get senior management on their backs in no time flat. (Nowadays, everyone expects to have uninterrupted Web access.) DCOM and CORBA have normally been blocked by firewalls, although CORBA has been assigned two ports of its own by Internet Assigned Numbers Authority (IANA) -- most administrators are as reluctant to open up fresh ports as they are to tamper with port 80. Thus, Web services can pass unobstructed through port 80, without security staff even realizing what is going on. This is good for the developer, who has one less thing to worry about, but it also creates a security hole. And just as there is no such thing as a minor accident on a submarine, there is no such thing as a minor security hole.

So it looks as if most of the technical advantages routinely ascribed to Web services have their accompanying drawbacks. Again, this is not to say that there is not a net gain -- we just don't have all the evidence yet.

THE STORY SO FAR

Without reiterating the historical background given in "Web Services in Context," it may be useful to check off some of the milestones in the Web services story. The first versions of SOAP and XML-RPC followed closely on the World Wide Web Consortium's (W3C) recommendation of the new XML standard in 1998. Dave Winer of UserLand and Don Box of DevelopMentor (who has since joined Microsoft) worked with Microsoft engineers to produce Microsoft's version of SOAP. As much as a year before .NET was launched in July 2000, Microsoft executives were talking about the new "Web services" paradigm that was to replace distributed objects.

In November 1999, Microsoft, DevelopMentor, and UserLand submitted SOAP 1.0 to IETF as a working draft, but this document never stirred up much interest. Perhaps the most critical event in the whole Web services story came in May 2000, when IBM and several other companies joined the original trio (Microsoft, DevelopMentor, and UserLand) to submit SOAP 1.1 to W3C. That signaled the legitimacy of SOAP as a vendor-neutral standard and eliminated the possibility that it might take root as a proprietary Microsoft technology. Immediately, Sun dropped its opposition to SOAP and announced its support -- a 180-degree turnabout that was justified by the surprising change in SOAP's status.

With the announcement of WSDL and UDDI, the core Web services specifications were in place. Many developers were still waiting for W3C to publish its official XML Schema recommendation, which it did in May 2001. With programming tools appearing on all sides, there seemed little to prevent the Web services revolution from getting underway. The launch of Microsoft's Visual Studio .NET in February 2002, after a lengthy beta, should have been the cue for mass implementation. But, as of today, success stories are still the exception rather than the rule.

THE CULTURE GULF AND ITS CONSEQUENCES

Most of the really difficult problems in software are human rather than technical, and this is especially the case when rapid changes in technology are taking place. Because IT is a relatively young discipline, far fewer people understand it than, say, accounting or law. Although the average corporation employs only a handful of formally qualified accountants and lawyers, nobody could become a senior manager without a good practical grasp of those subjects. Yet it is actually quite fashionable to disclaim all knowledge of computing -- an attitude that inevitably leaves far too many decisions to the IT department.

Imagine the fate of a company where none of the employees could manage to do even the most elementary arithmetic, apart from a select group who work in isolation in a "Mathematics Department." Such a scenario is inconceivable, because arithmetic is a skill that most employees need to do their jobs properly. Yet where computing is concerned, this is precisely how many organizations operate.

In times when IT does not change much, the cultural gulf between computer-literate employees and their less knowledgeable colleagues does not matter too much. Business processes hum along in much the same way from day to day and year to year, and computer systems do not need to change very often. There is plenty of time for requirements studies, pilot projects, and user satisfaction surveys. If a system does not work right, it can be rewritten. Above all, business executives are not faced with the necessity of making critical decisions about IT strategy in very short timescales.

However, when new ways of applying computer technology are introduced, the situation becomes very different. Executives have the opportunity to make the best choices, exploit the possibilities of the new technology to the maximum, and steal a march on the competition. They are also in a position to squander massive amounts of time and money on disastrous projects that deliver no business benefit, commit their organizations to dead-end strategies, or even fail by doing nothing. How can they decide what to do? If they themselves lack even the most basic understanding of how computer technology works, all they can do is take advice from others. Whether these others are vendors, analysts, consultants, or even their own IT staff, the results can hardly be optimal.

Paul Strassmann, author of The Squandered Computer (Information Economics Press, 1997) and other important books, has repeatedly stressed that there is no measurable relationship between corporate spending on IT and investor returns -- or any other reliable indicator of business success. Having been responsible for controlling the IT spending of the US Department of Defense, Xerox, and other major organizations, Strassmann knows whereof he speaks -- not many people have managed a $10-billion IT budget.

One thing that Strassmann is obviously not saying is that IT adds no value. Otherwise, he would have chosen not to spend that $10-billion budget. Besides, many types of business today cannot be run at all without computers. So what is he saying? Essentially, that you do not achieve success by throwing money at IT. Instead, you actually have to think of smarter ways of doing business, some of which may require the use of computers (while others will not).

What bearing does all this have on Web services? Simply this: most managers do not have an adequate basis for making decisions about how Web services will affect their businesses. To some extent, this is because the vendors don't have their act together yet -- while they all agree that Web services are wonderful and their customers ought to invest in them, there is no consensus as to why and how.

But even if the IT industry were delivering a single, well-reasoned message, most business managers would not be well equipped to judge its merits. It is relatively easy to put together a persuasive case based on return on investment (ROI) -- add up the cost of licenses, training, and support, plus a notional figure to allow for the learning curve; then estimate a cost saving per transaction, multiply by the number of transactions, do the math and presto! A 14,000% (or 600%) ROI!
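
To see how easily such numbers are conjured up, consider a purely illustrative calculation (every figure below is invented):

    Costs:   licenses, training, and support          $70,000
             allowance for the learning curve         $30,000
             total                                   $100,000

    Savings: $0.50 per transaction x 28.2 million
             transactions per year              = $14,100,000

    ROI:     ($14,100,000 - $100,000) / $100,000 = 14,000%

Halve the per-transaction saving, or double the learning-curve allowance, and the headline figure swings wildly -- which is exactly why such projections deserve suspicion.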

On that basis, it is quite surprising that so many of us are not rich yet. But there are a lot of other factors that get swept under the carpet when the ROI case is manufactured. What happens if, perhaps, the new distributed system is less reliable than the old one? How many disgruntled customers will there be, and what proportion of them will cancel their purchases or go over to a competitor? Might there be an impact on security, and, if so, what is the expectation of losses from that source? Then there is maintenance to consider and the effects on future enhancements and extensions to the system. Even success could be disastrous in its own way, if it leads to massive peaks in demand that saturate your corporate servers and bring everything grinding to a halt.

Conventional wisdom dictates that responsibility for bridging the cultural gulf between business and IT lies almost wholly with the latter. How many times have we been told that technical skills are not enough, that IT staff -- and managers in particular -- must be well-rounded individuals with excellent "people skills" who are willing and able to make a business case for technical changes?

Unfortunately, that is putting the cart before the horse. Business, to put it crudely, is the pursuit of money by legal means, and those who excel at it do so because that is what interests them most. Spotting business opportunities requires skill, experience, and a special kind of creativity. There are no infallible formulas for success in business -- if there were, we could all follow them and become millionaires.

The typical IT specialist is a different kind of person -- with different skills, motivation, and goals. Most of the time, he or she will be working to solve very specific problems: design a database, optimize an algorithm, eliminate a bug, set up a network. There is plenty of room for creativity, but it is almost always exercised within the state of the art. It is reasonable to ask an IT expert the best way to implement a given business function -- but not which business functions should be provided. Apart from everything else, most IT staff are so busy that they have trouble keeping up to date with their own fields of expertise; it is too much to expect that they should be able to second-guess management as to how to run the business.

The way it should work -- and the way it inevitably will work in the long run -- is that business managers need to become IT-literate. This does not mean they should have degrees in programming or hardware design: just that they should understand the aspects of IT that impinge directly on the business. For example, how many CEOs or CFOs know Brooks' Law ("adding manpower to a late software project makes it later")? For that matter, how many have read (or even heard of) Frederick P. Brooks' classic The Mythical Man-Month (Addison-Wesley, 1995) or Cutter Business Technology Council Fellows Tom DeMarco and Tim Lister's superb Peopleware (Dorset House, 1999)? Most are not even aware of the residual value of software -- as great as that may be in some cases.

To recapitulate: the widespread adoption of Web services in the marketplace will be significantly delayed by uncertainty. This uncertainty arises from two main sources:

  1. The inability of software vendors (as yet) to articulate a single, consistent, detailed explanation of exactly how Web services will bring business benefits.
  2. The ongoing cultural gulf between IT and business, as a result of which technical experts know "how" but not "what," while business managers know "what" but not "how." New technology can best be exploited by individuals who combine business acumen and technical understanding, but these people are few and far between.

A NEW PARADIGM OR JUST ONE MORE LAYER?

Hard-won experience tells us that new software technology rarely replaces old technology to the extent that it disappears altogether. Instead, successive layers build up like a coral reef. Ten years ago, it was widely stated that PCs and Unix servers would bring about the disappearance of mainframe computers. Of course, no such thing happened: today's mainframes continue to drive much of the world's business and governmental infrastructure, surrounded though they are by hundreds or thousands of times their number of smaller machines.

Old software technology is particularly persistent -- consider, for example, the estimated 200 billion lines of COBOL code currently in active use, according to IDC. There is a very good reason for this: software is quite difficult, expensive, and time-consuming to write and even more so to maintain and extend. A plausible valuation of the COBOL code base might be between $5 trillion and $20 trillion, which helps to explain why there has been no stampede to rewrite it all in C or Java.

Legacy software has been defined as "software that works" -- a remark that started life as a joke but has now attained the status of a proverb. It was only funny because it undermined the pretensions of vendors who tried to set up their own products as the only worthy objects of future expenditure. The persistence of legacy software is most marked in the world of middleware; it is bad enough rewriting a single application, but very few organizations can afford even to think of reimplementing an entire distributed system.

This, of course, is where Web services have most to offer. Vendors like IBM and Microsoft -- not exactly fly-by-nights -- tell us that a new era is coming, in which we can link together, more or less, any systems we fancy. And we can do it quickly, easily, and inexpensively. In one sense, this means consigning a lot more working applications to the legacy category: instead of reimplementing them with the technology du jour, we will be able to reuse their interfaces. Just as the technically proficient pooch in the cartoon exulted that "on the Internet, no one knows you're a dog," in the world of Web services, nobody need know if you are a crufty mainframe COBOL application.

If that was the sum total of the Web services proposition, there would be little to argue about (and a lot less revenue for the vendors to divvy up). What could be wrong about giving people the ability to hook up systems that previously could not interoperate with one another without prohibitive expense? The Venn diagram in Figure 1 illustrates the inclusion relationships and overlaps between Web services, .NET, CORBA, Java 2 Enterprise Edition (J2EE), and the open source LAMP (Linux, Apache, mySQL, Perl/PHP/Python) platform.

Unfortunately, the "lightweight" and "easy learning curve" aspects of XML-RPC and SOAP have been carried over to Web services, leading to the frequently heard suggestion that, at last, less skilled and experienced developers will be able to create working distributed systems. If, as mentioned earlier, Web services are portrayed as a form of "Internet duct tape," that would make them a useful part of the expert programmer's toolkit. However, it would not be a good idea for .non-experts to start thinking that they do not need experience or a toolkit any more -- just duct tape!

It fits in with the strategic directions of some vendors to argue that technical advances will "democratize" the craft of software development. Visual Basic and similar graphical tools actually did make it far quicker and easier to create database applications of certain types, which gave rise to the notion of "programming for the rest of us."

So far, so good. But this is where the cultural gulf comes into the picture again. It is a very attractive pitch for a vendor to say, "Our product is so simple that you won't need all those highly paid, slow-working middleware programmers. Anyone can enter a few fields, drag and drop, and it's done!" How is a business manager to judge the validity of such claims? As it happens, all but the simplest distributed systems still require a good deal of specialist knowledge to design correctly. Failure to cover all the bases could result in anything from dreadful performance or occasional data corruption to total system deadlock.

It is a pity that the Web services community seems determined to reinvent many things that have already been worked out and successfully deployed. Although borrowings from earlier architectures like COM+ and CORBA are sometimes acknowledged, ignorance of prior art often seems to be positively flaunted.

This "not invented here" attitude may be an aspect of another troubling tendency -- the growing assimilation of the software industry to the entertainment business, which is driving a highly inappropriate emphasis on fashion. How often nowadays do we see software referred to approvingly as "cool," "hot," or "neat"? In the media, too, there is no more scathing putdown than to suggest that a given technology is "dated," "venerable," "showing its age" -- or just plain "old."

This is crazy, because all software takes some time to become mature and stable. Remember the saying, "Never buy a product until version 3.0"? Well, that probably corresponds to between two and five years from the first release of the product. But before the product could even be designed -- another year or two there -- the standards that it implements had to be agreed and refined (another two to five years). So it probably makes sense to concentrate on products that implement standards that have been around for five to 12 years -- precisely those that the media like to depict as obsolescent and that vendors of this year's technology du jour term "legacies."

DOUBLETHINK

There is a lot of what might politely be called doublethink going on. To put it more bluntly, this Web services business has more loose ends than the average sheep. In the coming months, I will look more closely at some of the ambiguities and areas of uncertainty that lurk within the superficially tidy picture presented by the vendors. In a sense, ambiguity is inevitable at this stage. It is part of what has been called the slideware syndrome -- meaning software that exists only in the form of presentations. Slideware, notoriously, has no bugs and no deficiencies. That is because, as soon as a problem is pointed out, it is easy enough to change the slide in such a way that it no longer occurs.

I have already pointed out how loose coupling is often adduced as a way in which Web services surpass conventional middleware. Unfortunately, it turns out that Web services are not as loosely coupled as they are made out to be, and conventional middleware is not as rigid either. Perhaps the distinction is exaggerated by enthusiasts who do not know very much about COM+, CORBA, RMI, and RPC; whatever the reason, every factual error undermines the case they are trying to make.

Another supposed advantage of Web services, often linked with loose coupling, is asynchronous operation. In an RPC, the caller (client) remains "blocked" until the server's reply is received. This is known as synchronous operation, and it can cause problems if network delays are long or if the server is unreliable. The alternative is asynchronous operation, in which a message is sent and the sending process then goes on processing. Both synchronous and asynchronous modes have their strengths and weaknesses, and an experienced programmer understands the tradeoffs involved.

The claim is often advanced that Web services are better than COM+, CORBA, and RMI because they support asynchronous operation. Once again, this is not the whole truth by any means: indeed, it is so misleading that it could be called a distortion of the truth. For a start, CORBA at least has optional asynchronous modes of operation. Second, the limitations of synchronous middleware can always be worked around using multi-threading. Third, HTTP -- the application protocol used by the overwhelming majority of Web services -- is itself intrinsically synchronous!
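
The distinction is easy to see in code. In this minimal Java sketch (the QuoteService interface is invented), the synchronous caller blocks until the reply arrives, while the asynchronous caller wraps the very same blocking call in a thread and carries on -- which is the multi-threading workaround just mentioned:

    public class CallStyles {
        // A hypothetical remote stub -- it could front COM+, CORBA, RMI, or SOAP.
        interface QuoteService {
            double getPrice(String symbol);  // may take seconds over a slow network
        }

        static void synchronousCall(QuoteService svc) {
            double p = svc.getPrice("IBM");  // caller is blocked until the reply arrives
            System.out.println("price: " + p);
        }

        static void asynchronousCall(final QuoteService svc) {
            // Wrap the same blocking call in a thread: the caller carries on
            // at once, and the reply is handled whenever it turns up.
            new Thread(new Runnable() {
                public void run() {
                    System.out.println("price: " + svc.getPrice("IBM"));
                }
            }).start();
            System.out.println("request sent; doing other work...");
        }
    }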

Another area of uncertainty is whether Web services should be used in RPC style or document style. Both are supported by SOAP, but their architectural implications are very different. As usual, Web service advocates tend to claim all the benefits of both styles, without acknowledging that they may be mutually incompatible.
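
For readers who have not met the distinction, here is roughly how the two styles look on the wire (a sketch only; the element names are invented). In RPC style, the SOAP Body imitates a procedure call with named parameters; in document style, it carries a complete business document that the receiver interprets as a whole:

    <!-- RPC style: the Body encodes a call, in effect getPrice("IBM") -->
    <soap:Body>
      <getPrice xmlns="urn:example:quotes">
        <symbol>IBM</symbol>
      </getPrice>
    </soap:Body>

    <!-- Document style: the Body carries an agreed business document -->
    <soap:Body>
      <purchaseOrder xmlns="urn:example:orders">
        <item sku="X42" quantity="10"/>
      </purchaseOrder>
    </soap:Body>

The first style pulls the architecture toward fine-grained, synchronous interaction; the second toward coarse-grained message exchange. They are different designs, and their benefits cannot simply be added together.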

Last but not least, even the most energetic supporters of Web services cannot agree on whether they are best used inside the corporate firewall or outside it. Cape Clear Software, one of the world's leading Web services specialists, recommends its customers start by using Web services exclusively on their own intranets (within the firewall). Emphasizing the cost/benefit advantages of Web services compared to conventional EAI products like those of webMethods, SeeBeyond, Tibco, and Vitria, Cape Clear's CEO Annrai O'Toole deprecates the visions of dynamic e-commerce conjured up by others. He is not alone: plenty of early adopters have taken a look at UDDI, for instance, and decided not to bother with it.

As one of the leading sponsors of UDDI, IBM takes a radically different view. To quote Chandra Venkatapathy again, here is what he had to say on the subject in a recent interview with the Business Integrator Journal:

The next step in Web services development is to find out what building blocks are available, and that is where UDDI comes into the process. If corporate partners are offering to expose their business processes as Web services, they list their processes in a registry known as UDDI. UDDI is the way to organize all the WSDL documents in a common format.
Then developers can search for organizations that offer a specific service in UDDI. For example, building a local restaurant application, the developer might do a search that would "provide all the specialty pizza shops in a 10-mile radius."

The example given -- specialty pizza shops -- looks weak. But it is not easy to find applications for which the UDDI model looks attractive to all parties concerned. Throughout history, trading has been done face to face wherever possible, because businesspeople like to know their trading partners personally. Automating this core business process does not appear to be something for which there is very much demand -- indeed, it looks more like a classic case of technology push in the absence of market pull.

THE ROLE OF COMPETITIVE COMMERCIAL INTERESTS

The key fact to bear in mind when considering Web services is that even vendor-neutral standards are inextricably bound up with vendors' competitive interests. Though users may derive reassurance from the publication of a standard by IETF, the Organization for the Advancement of Structured Information Standards (OASIS), the Object Management Group (OMG), or W3C, the fact remains that certain vendors will have fought long and hard to make sure that standard is (in some way) advantageous to them.

Some of the questions that need to be asked as part of a thorough analysis include:

  1. What are the immediate technical merits of this proposal?
  2. Is it architecturally sound? In other words, will it scale, and can it be modified or extended to keep pace with evolving requirements over a period of years or decades?
  3. Is the proposal likely to succeed in the market even if adopted as a standard and widely implemented by vendors? For instance, would it be too expensive to adopt, represent too sudden a step change, require too steep a learning curve, or be considered incompatible with existing investments?
  4. On the contrary, is the proposal likely to entail relatively gradual changes to existing technology and relatively small incremental investments from vendors and users?
  5. Which vendors stand to gain sufficiently from this proposal that, if it became a standard, they would be sure to deliver industrial-strength products that comply closely with its specifications, within a reasonable time?
  6. Which vendors stand to lose sufficiently from this proposal that they can be counted on to oppose it tooth and nail; water it down if they cannot altogether suppress it; absolutely refuse to implement it if adopted as a standard; and talk it down should it show any signs of becoming successful in the market?
  7. What tactical alignments may emerge, as vendors improvise strategies based on "minimax" and other game theory principles (such as "my enemy's enemy is my friend")?

ODBMSs, for example, looked quite good on points 1 and 2, failed catastrophically on point 3 -- because of the irreconcilable clash with the dominant relational model -- and also scored poorly on points 5 and 6.

COM, on the other hand, looked good on every single count except 2. Apparently, it would not scale adequately to the Internet, making it inappropriate as a standard remote interface between Windows and other platforms.

One way of explaining Web services' sudden rise to universal popularity is to suggest that this new technology represents a pragmatic compromise between powerful supplier interests. When the differences between Web services and CORBA are debated, for example, it often happens that a CORBA expert refutes the supposed technical advantages of Web services one after another. Finally, the Web services advocate is reduced to insisting that CORBA "never gained industry acceptance," or some such formula.

Another way of putting this is to say, "Microsoft used its veto." As the world's biggest and most powerful software vendor, Microsoft exerts tremendous influence on buying decisions -- and as any commercial organization would, it uses this influence to further its own interests. CORBA and Java both came to be seen as seriously threatening to Microsoft because they would have denied it control of the interfaces between Windows and other computing platforms.

SUMMARY

Although much of this first issue has been devoted to probing the mystique of Web services, this does not mean that WSS is reactionary or biased against innovation. Rather, it seemed desirable to air some points of view that have not been getting enough exposure.

As I have said before -- and will certainly say again -- Web services can hardly fail with most of the software industry committed to making it work, especially since not much about it is radically new. If it were a stock, my recommendation would have to be a "buy and hold" because the long-term prospects are excellent. But investors would have to be prepared for a roller-coaster experience in the short term. What's more, the long term has a nasty habit of turning out to be rather longer than we think at the outset.

For the next year or two, caution should be the watchword. As always, remember the slogan TANSTAAFL -- "there ain't no such thing as a free lunch!" If something seems too good to be true, it probably is. Besides, when we talk Web services, we are talking about software -- that fragile, brittle stuff that takes such a long time to get right.

It would be ridiculous to suggest that there will never be a time when computers communicate routinely and automatically with other computers all over the world. Indeed, they already do -- for instance, to maintain the infrastructure of the Internet. But the implications of a network of millions of powerful computers -- running many thousands of different programs, interoperating dynamically, and, to some extent, setting their own agenda -- are very complex. In spite of the exponentially accelerating rate of progress, it will take a few years to get there.

WEB SERVICES AND COMPONENTS

It is now more than 15 years since pioneers like Brad Cox started pointing out the potential benefits of working with "software integrated circuits." The introduction of standard machine parts such as screws had enabled the Industrial Revolution, they argued, and it was time for a similar revolution in software. The rise of Smalltalk in the 1970s and 1980s, followed by C++ and other more-or-less OO languages, led to a widespread belief that one day developers would assemble applications from prebuilt objects instead of writing them from scratch.

We can trace the concept of reusable software parts back to the NATO Software Engineering Conference of 1968, where M. Douglas McIlroy gave his historic presentation entitled "Mass-Produced Software Components." This talk, which is well worth reading even today, raises issues such as the need for parameterized families of components, the economics of a component industry, and the "ticklish problem" of distribution -- of which McIlroy insightfully said, "One instantly thinks of distribution by a communication link."

Like most predictions of imminent sweeping change, this one was premature. It is well known that technical innovation is usually delayed by a kind of social inertia -- most people do not like to adopt new ways until they have had ample time to get used to them. There were also commercial and organizational factors acting to damp the rate of change. Software reuse requires a different business model if it is to succeed: it is essential to invest up front in creating assets before any benefit can be gained from reusing them. And intellectual property laws, in the form of licensing restrictions, patents, and copyright, also got in the way.

The bottom line is that software components are not screws, and although similar broad economic reasoning can be applied to both kinds of artifacts, the devil is in the details. There were technical difficulties with systematic reuse of conventional objects, in the form of class libraries. In general, they could only be used for programming with the particular language in which they were written -- and often only on a specific platform. For this reason, they could not be profitably traded on an open market as envisioned by McIlroy.

Perhaps the most insidious shortcoming of objects was that, although they behaved like black boxes at runtime, programmers needed to see their source code in order to make use of their services. This meant that class libraries were useful only within organizations or, in some cases, individual projects.

For this and other reasons, the notion started to take root that OO could not deliver the anticipated benefits of reuse. Attention began to swing toward a subtly different software unit -- the component. On the cover of its May 1994 issue, the influential Byte magazine claimed that "object-oriented computing has failed. But component software, such as Visual Basic's custom controls, is succeeding."

What is the difference between a component and an object? It is hard to say for sure, as there is still no generally accepted definition of either term. In his widely quoted book Component Software: Beyond Object-Oriented Programming (Addison-Wesley, 1997), Clemens Szyperski, now a software architect at Microsoft Research, says that "software components are binary units of independent production, acquisition, and deployment that interact to form a functioning system." It is noteworthy that Szyperski insists on the binary nature of components -- meaning that they are runtime entities rather than source code.

Szyperski's thinking about components is lucid and consistent. In fact, about its only drawback is that -- for these very reasons -- it does not always coincide with the popular terminology of the marketplace. The following statement is profoundly true and well worth bearing in mind: "While distribution, objects, and components really are three orthogonal concepts, all combinations of these terms can be found in a confusing variety of usages. For example, distributed objects can be, but do not have to be, based on components -- and components can, but do not have to, support objects or distribution." According to Szyperski, even applications executing in the context of an operating system deserve to be classed as components because they are encapsulated and reusable. However, class libraries do not qualify because they are source code and, hence, not "ready to run."

Other authorities place more emphasis on the facts that components are fully encapsulated and that they can be deployed and composed without modification according to a standard. Components do not have to be visual in the sense that they maintain a user interface at runtime; but it is expected that they can be managed, edited, and assembled using visual development tools like Microsoft Visual Studio or BEA WebLogic Workshop.

Perhaps we are still at a stage where the best definition of components is extensional. A Visual Basic custom control is certainly a component -- we have Byte's authority for that -- and so is a Delphi object. A JavaBean is a component, and most developers would agree that an Enterprise JavaBean (EJB) is a server-side component -- even though it lacks a user interface. A CORBA object is, strictly speaking, not a component, but a CORBA Component (a language-neutral analog of an EJB) definitely is.

Components are no longer unthinkingly identified with objects, but most of us would still expect them to be written in OO languages. Szyperski's point about the orthogonality of objects and components is powerfully supported by Paul Bassett's work, which shows that software reuse can be achieved with any 3GL through "frame technology." Netron, the company Bassett cofounded 21 years ago, has demonstrated impressive levels of reuse with COBOL, for instance.

Now it is being suggested that Web services will succeed where component models failed -- an amusing echo of Byte's provocative headline eight years ago. Phrased this way, the assertion also confirms Szyperski's comment about the endemic confusion between objects, components, and distribution.

Nevertheless, the core question remains valid and important enough to deserve careful consideration. Are Web services going to initiate a new era in which applications are more functional, less expensive, and, above all, quicker and easier to build?

COMPONENTS AND THEIR LIMITATIONS

In 1997, Forrester Research declared that CORBA was being "overthrown by 'populist' components," noting, "Frustrated by the long wait for CORBA, programmers are prepared to adopt more basic architectures centered on ActiveX controls and Java Applets for their component needs." Forrester also predicted that the incremental cost of providing middleware services for one more user would fall to zero by 2000, for both Microsoft's COM and "the JavaBeans/CORBA federation."

As we have already seen, the notion of a component is vaguely defined in the marketplace, although various experts have offered their own rigorous (but not always mutually consistent) definitions. It is generally conceived, however, as an object of some kind with additional features that make it easier to look up, obtain, assemble into applications, and deploy.

The systematic creation and reuse of objects or components has a surprisingly long history. To take just one well-known case, 15 years ago, Magnavox used Ada to deliver the Advanced Field Artillery Tactical Data System (AFATDS) project for the US Army, and obtained a documented 30% reuse -- an excellent result for a first project. Soon after, users of the NeXT operating system were said to be getting up to 80% reuse in some applications, thanks to the extensive class libraries shipped with each workstation.

These successes, though, were sharply limited. The Ada software packages written for AFATDS could hardly be used outside the highly specialized domain of artillery fire control -- even if military security had allowed them to see the light of day. And the NeXT class libraries were available only within the NeXT operating system, which was notorious for its tiny market share.

Today there are two dominant component models -- Microsoft's COM+ and Sun's Java. To these may be added CORBA, on the grounds that it is widely deployed, even though CORBA Components as such have only just arrived on the scene.

COM+ (which, for simplicity's sake, we will take to include COM and ActiveX) is undoubtedly the most widely used component model in the world today, even though the 200 million users claimed by Microsoft are entirely theoretical. This figure is presumably based on the number of copies of 32-bit Windows installed -- all of which admittedly incorporate COM or COM+ and, indeed, depend on it for many of their functions -- rather than the far smaller number of developers actively working with it.

There is a lot to be said for COM+ and not very much against it. Unfortunately, its two main limitations tend to be showstoppers. In spite of a brief flurry of activity five years ago, COM is essentially limited to Windows, and it has turned out to be less suitable than hoped for large-scale distribution. With the advent of .NET, COM+ has been more or less relegated to the engine room of Windows, where it continues to do an excellent job, while distribution is mainly seen as the province of Web services.

The CORBA picture is more or less the inverse of the COM+ picture. Deployed on a wide variety of platforms and commonly used with at least a dozen languages, CORBA has remained very much a minority interest because, along with power, come complexity and cost. Free open source ORBs like TAO, MICO, JacORB, and omniORB are seeing ever-increasing use, but mostly in the context of quite sophisticated projects.

Although at least as complex as CORBA, Java presents a shallower learning curve. Anyone with a programming background can learn how to write simple classes and applets within a day or two and should progress rapidly in view of the many development tools available (some of them free). Once having learned the language and the basics of applets and JavaBeans, a developer can gradually acquire further skills until he or she has mastered the whole gamut of the J2EE specifications.

Although software reuse and components are often talked about in the same breath, many experts feel that reuse is not the most important motive for adopting a component approach. Furthermore, reusing source code or binary components is not necessarily the best or even the most efficient form of reuse. For years, software gurus have been pointing out the advantages of reusing design, initially in the form of pseudocode or a design notation like the Unified Modeling Language (UML). More recently, patterns have become very popular as a simple, easily understood, low-ceremony way of distilling and reusing know-how.

Although not meeting all the definitions of components, open source software and freely published specifications like those of the IETF, OMG, and W3C are other important forms of reusable software technology.

HOW WILL WEB SERVICES BE BETTER?

Over and over, we have been told that Web services are more flexible than conventional middleware because they are loosely coupled. Supposedly, if any changes are made to clients or servers, RPC-like architectures like COM, CORBA, and RMI require everything to be recompiled and redeployed -- a process that is time-consuming, labor-intensive, and error-prone. Web services, on the other hand, somehow use the presence of XML tags to work out what the associated fields mean, even if their format has changed.

It cannot be overemphasized that this is not how things work at all. There are two main scenarios:

  1. A server object has been changed, but none of its interfaces have changed. In this case, the server application must obviously be recompiled and redeployed (in Java's case, recompiled to bytecode, which the virtual machine then interprets or compiles on the fly). However, no changes will be required to the clients, as their "contract" with the server object is still the same as it was before.
  2. A server object has been changed in a way that requires its interface to change as well. (For instance, an extra parameter has been added.) Because the interface has changed, all clients will also need to change -- whether they are implemented using COM, CORBA, RMI, Web services, or anything else. Otherwise, how will they "know" about the extra parameter? To put it another way, what was the point of adding the extra parameter unless the client is going to use it for something? COM is a little different here because the rules forbid changing a COM interface. Instead, the developer must add a new one, leaving the old one unchanged for any original clients that are still out there. The sketch following this list shows the second scenario in miniature.
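
Here is an invented stock-quote interface before and after such a change; the notation is Java, but the logic is identical whatever the middleware:

    // Version 1 (before): clients call getPrice with a single argument.
    interface QuoteService {
        double getPrice(String symbol);
    }

    // Version 2 (after): a currency parameter has been added. Any client that
    // is to benefit from it must itself be changed to supply a currency --
    // whether the request travels via COM, CORBA, RMI, or an XML message.
    // A client that still sends the old one-argument request no longer
    // matches the server's contract, XML tags or no XML tags.
    interface QuoteService {
        double getPrice(String symbol, String currency);
    }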

In some ways, Web services standards are closely equivalent to those that already exist in the world of components. The best example of this is WSDL, generally accepted as the one indispensable Web services specification. It is rarely mentioned, though, that WSDL is almost identical in its way of working to the IDL used by RPC, COM, and CORBA. The main practical difference is one of workflow: CORBA developers almost always write their IDL first and use special IDL compilers to generate source code skeletons and stubs from it, whereas WSDL is more often generated after the fact from existing code.
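
To make the parallel concrete, here is the same invented one-operation interface in both notations. The WSDL is heavily abbreviated -- a real file would also contain the message definitions, a SOAP binding, and the service address:

    // CORBA IDL
    interface Quote {
        double getPrice(in string symbol);
    };

    <!-- WSDL 1.1 (portType only) -->
    <portType name="Quote">
      <operation name="getPrice">
        <input message="tns:getPriceRequest"/>
        <output message="tns:getPriceResponse"/>
      </operation>
    </portType>

Either way, the interface is defined once, in a neutral notation, and tools generate the language-specific plumbing from it.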

Viewed at a high enough level of abstraction, Web services can no doubt be considered components. After all, they are encapsulated blocks of software, whose functionality can be invoked -- by outsiders, anyway -- only through the interfaces published in their WSDL files. They lack inheritance and polymorphism, but these are essential attributes of objects -- not necessarily of all components. And what if Web services are requested by sending XML messages rather than by RMI? The idea of "message sending" is fundamental to all OO models -- how the messages are sent is an implementation detail.

There are at least two serious objections to this way of thinking. The first is inherent in the vague way in which Web services are defined today. Compared with component models such as COM, CORBA, and Java, they are quite loosely defined -- perhaps too loosely to permit reliable interoperation between different implementations. The Web Services Interoperability Organization (WS-I) is working to overcome this state of affairs, but there will inevitably be a delay before its efforts come to fruition.

The other difficulty is a practical one. True, an application could be built using Web services as components (instead of COM, CORBA, or Java objects), but it would not work very well. It would be a good deal slower at runtime, for a start. The developers would be extremely frustrated by the lack of standard services for such things as lifecycle management, security, transactions, and even notification. What's more, they would soon notice the downside of working with components that are not objects.

Furthermore, according to Szyperski's definition, a component is a binary unit. A Web service is binary at runtime, but there is no obvious reason why its source code should not be available during development. Strictly speaking, then, a Web service is a component only when it is invoked at runtime, which means that an application assembled from Web services must use them in situ -- wherever their respective owners have installed them. If, on the other hand, the application assembler has access to the Web service's source code, the Web service would not qualify as a component.

USING WEB SERVICES AND COMPONENTS TOGETHER

Scores of vendors have already proclaimed their success in converting their existing products to support Web services, but this is not quite as exciting a feat as might be imagined. It is actually quite easy to "WSify" a random Java class, and the same could be said of a COM+ object, except that Microsoft has already made it trivial by adding that functionality to Visual Studio .NET and the SOAP Toolkit. As .NET is essentially a single-source environment, and Microsoft has gone to some pains to provide the best general-purpose integrated development environment in the world, there is no obvious reason for .NET developers to look beyond Microsoft's own tools.

In the world of Java, things are, as usual, much less tidy. Small specialist companies like Cape Clear and The Mind Electric were among the first to market, followed by the leading J2EE application server vendors such as BEA, Borland, IBM, Iona, and iPlanet (now Sun ONE). In principle, this new class of tools (which we might dub Web Service to Java Interface Generators or WSJIGs) has two main functions:

  1. Given some WSDL, generate corresponding Java skeleton code to fulfill the WSDL contracts.
  2. Given some Java code (classes, JavaBeans, or EJBs), generate WSDL that describes its interfaces. (A sketch of this direction follows the list.)
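
As a hedged illustration of function 2 -- all names here are hypothetical -- consider a trivial Java class and the kind of WSDL contract a generator might derive from it. Function 1 is simply the same mapping run in reverse.

    // A plain Java class handed to a WSJIG.
    public class PriceCheck {
        public double getPrice(String productId) {
            return 0.0; // placeholder for real business logic
        }
    }

    // Pointed at the class above, the tool would emit WSDL along these
    // lines (heavily abbreviated):
    //
    //   <portType name="PriceCheck">
    //     <operation name="getPrice">
    //       <input message="tns:getPriceRequest"/>
    //       <output message="tns:getPriceResponse"/>
    //     </operation>
    //   </portType>
    //
    // Running function 1 on that portType would regenerate a matching
    // Java interface plus a skeleton class for the developer to fill in.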

One of the many arguments advanced in favor of Web services is that here, at last, is a fast, easy, and inexpensive way of interoperating between .NET and Java. Previously -- so the story goes -- this could be done only with complicated, expensive, proprietary gateway products. Well, everything is relative. Actional (originally known as Visual Edge) has offered dynamic runtime translation between COM and CORBA since 1994, and there are at least half a dozen good options for Java, such as Intrinsyc's J-Integra Plug-In.

It is unclear how the "complexity" of a single COM/Java bridge can be improved upon by replacing it with two other bridges: one from COM to SOAP, and the other from SOAP to Java. Similarly, the job of translating between Chinese and Afrikaans would not be materially simplified by first translating from Chinese into English and then from English into Afrikaans. (Even if the translator is a native English speaker -- although, in that case, he or she might suggest that everyone agree to speak English all the time.)

It is fortunate that interfacing COM+, CORBA, and Java to Web services turns out to be so easy; every precedent in the book tells us that all four architectures are going to be coexisting for years (or even decades) to come. But who will be in control of a world where distributed systems, made up of a patchwork of Web services and components, exist in a state of continual flux?

This may well be where the OMG's new Model Driven Architecture (MDA) fits in. Based largely on UML, MDA aims to let organizations create and maintain pure business logic and data definitions in the shape of platform-independent models. As shown in Figure 2, these can be semi-automatically translated into platform-specific models, suitable for deployment on CORBA, EJB, .NET, or other middleware standards.

This would sharply reduce the need to decide on deployment platforms before entering the design stage, setting users free to deploy to whatever platform is most convenient for a particular purpose. Moreover, if the platform initially chosen does not work out, moving to a different one would be far easier than it has been up until now.

Frankly, the most promising role for Web services is as an Internet lingua franca -- a sort of middleware equivalent of scripting languages like Perl and JavaScript. This may seem something of a comedown, but it is actually a high accolade. True network developers understand the importance of flexible tools like these and value them highly.

CONCLUSIONS

Web services have much in common with components, and we can easily qualify or disqualify them as components, depending on which definition of components we choose. For instance, Szyperski's definition, which insists on the binary nature of a component, would admit Web services in theory but exclude them in practice. There are enough difficulties in building an application from binary components that are all gathered together on one computer. Doing so with binary components scattered all around the Internet would be unlikely to yield satisfactory results in the foreseeable future.

Nevertheless, Web services stand out from conventional components, if only because they do not address the same problems. Components, as understood up until now, are used to build single applications or sets of tightly coupled applications distributed across a high-speed network. Moreover, applications created with components are written by coherent teams of developers working as part of single projects -- if not for single organizations. This is because components are gathered before an application is written and used to replace code that would otherwise have had to be written from scratch.

If Web services just did the same thing in a new way, they would be no more than a new species of component. Instead, they mostly set out to tackle the problems with which components have struggled -- problems that, almost by definition, have not yet been solved.

This results in an uncomfortable situation that a cynic could describe as "jam tomorrow." In other words, the roles for which Web services are proposed fall into two categories: things that can already be done by other means and things that currently cannot be done at all. The interesting question is whether there is an intermediate zone of things that can be done only, or best, by Web services -- and, if so, how extensive it is.
