Emerging Technologies and Trends

How Should GTE Respond?

April 1, 1999

Prepared by

James T. Smith



Preface

Emerging Technologies and Trends—How Should GTE Respond?

Introduction

Scope

Objective

Setting the stage of this discussion

Gutenberg's invention—mass repeatability

Nationalism and the Industrial Revolution

The radical realignment of relationships

Megatrends

Appliancization

Why ration something that’s virtually free?

When new is cheaper than used

When prices hit the floor

Repackaging what’s already available

Consumerization—the disposable appliance

Server-side appliancization—a reality

Appliancization of the network

Mass Customization

What are the key enablers of mass customization?

More than an efficient production and delivery system

How many choices are enough?

Mass production versus mass customization

Turning marginal opportunities into major revenue streams

The middleman’s new role—customer agent

Customer-management—keep them coming back

The business-to-business version of mass customization

My personal experience with telecomm-based mass customization

Convergence

The PC revolution—convergence typified

The Internet—the epitome of convergence

The Internet’s convergence is customer-focused

Convergence and supply-chains meet the Internet

Convergence yields virtual corporations

Whence the virtualization of a company?

The new business imperative—embrace and extend

Technology’s new role—key enabler of the digital economy

The synergistic value of technology convergence—killer apps

The convergence of all networks—all networks lead to one

Whence network convergence—how will it happen?

The new converged PSTN

The new converged home network

How Does GTE Respond?

Why must GTE monitor technology?

Preparing organizations for innovation!

Managing technological innovation

Organic corporate cultures critical to innovation

alphaWorks and IBM’s successful turn-around

Emerging Technologies

Materials Science

Nanotechnology—the coming revolution in molecular manufacturing

Indium phosphide (InP)–the next high-performance elixir for electronic circuits

Light-Emitting Silicon Chips

Polymer electronics—creating complete electronic systems in plastic

Molecular Photonics

SOI – Silicon on Insulator

Silicon Germanium

MAGRAM—Magnetic RAM

Chiral Plastics

Fiber-Optic Amplifiers

Photonic crystal confines optical light

The perfect mirror

Optical CDMA and all-optical networks

Systems Science

Spherical IC’s

MEMS – Micro-Electromechanical Systems

SOC – System-on-a-Chip

Switching to Switching

Photonic Optical Switching Systems

Configurable Computing

IP–Intellectual Property

Chaos-based systems that evolve—an alternative to current computing

Veri—Instantaneous pattern recognition

Computational sensing and neuromorphic engineering

Killer Applications

Set-top box on a chip

3G—third-generation—cellular devices are coming

Sony’s next-generation playstation

The time has come to move beyond the PC

Endnotes


Preface

A word is in order explaining the origin and development of this document.  Its first version consisted of a significant portion of the materials now found in the Emerging Technologies section of this document.  Those materials originally were gathered together as part of an annually updated GTE five-year technology planning document, “Technology Strategies & Guidelines for 1999-2003,” released in the early summer of 1998.

That version focused on providing a short list of technologies that were in various stages of development—from basic R&D status to near-term commercial rollout—the fruits of which would most certainly impact the technology planning of GTE at some point in the next few years.

That version also served as the basis for several invited-speaker presentations at off-site venues outside GTE.

In the late summer of 1998, I decided to include a section to introduce and discuss the major megatrends that these technologies were enabling—indeed were driving.  The Megatrends section of this document is the result of that effort.

In the fall of 1998, the decision was made to follow up with a discussion of what the direct impact of these megatrends and technologies on GTE would be, and how GTE consequently should respond to them.  The result of that analysis is contained in the section How Does GTE Respond?

In the spring of 1999, the need for the section Killer Applications was identified as a way to reinforce the message of opportunity and urgency that these megatrends and emerging technologies represent.

As far as content is concerned, the many references used in this document are readily available as published books or as online documents on the Web.  In particular, some subset of these online articles—with appropriate commentary at the time—has already been emailed to the many people on my distribution lists.  One might note that many of the referenced materials are newer than this document.  Once the framework and vision of this document were set, one only had to collect the pieces of the jigsaw puzzle and drop them into place.  This only reconfirms that the vision holds true.

I wish to thank each of you for being such an attentive audience—often responding with point-counterpoint comments and pointers to other related sources of information.  To you this work is dedicated!  It represents an attempt to coalesce that vast set of emails previously sent to you into a complete, compelling story of how technology does matter, can be very exciting, and can be very profitable—if recognized and appropriated wisely.

I particularly wish to thank Don Jacobs and Russ Sivey for their managerial support and encouragement, and the many staff of Technology Business Planning who have read the draft versions, and have been a sounding board for the personal analysis and vision I have incorporated into this document.



Emerging Technologies and Trends

  How Should GTE Respond?

Introduction

The general technology areas of materials science and systems science have recently witnessed a number of technology breakthroughs which promise to impact a broad spectrum of industries—from computing and communications, to manufacturing, to medicine, etc.  All of society will be indelibly changed.

In particular, many of these breakthroughs will have a profound impact on GTE’s core businesses—everything from the infrastructure that network operations deploys and supports, to the depth and variety of services that the various GTE SBUs will be able to offer our customers profitably.

Transcending any specific examples of these technological advances, general megatrend changes driven by their consequences already are happening—independent of which particular breakthrough occurs, and of when.  These megatrends will profoundly affect the way GTE does business.

Some lines-of-business will significantly shrink in profitability—unless they are re-engineered from both a technology and a business perspective.  New line-of-business opportunities will be created—if foresight is exercised now to capitalize on them.  Nearly all lines-of-business will be affected in some significant way—for better or worse, depending on their flexibility to adapt to these megatrend paradigm shifts.

Scope

This study identifies and considers the following megatrends: 1) appliancization, 2) mass customization, and 3) convergence.  The fundamental force enabling each of these megatrends is technology.

Donald Tapscott in his book, The Digital Economy,[1] discussed the forces behind the new digital economy.  In particular, he emphasized the impact of the new medium of communications that is emerging:

Today we are witnessing the early, turbulent days of a revolution as significant as any other in human history.  A new medium of communication is emerging, one which may prove to surpass all previous revolutions—the printing press, telephone, television, computer—in its impact on our economic and social life.  Interactive multimedia and the so-called information highway, and its exemplar, the Internet, are enabling a new economy based on the networking of human intelligence.

Of the twelve major themes that Mr. Tapscott discussed that would characterize the new digital economy, one theme stands out in particular—innovation.  How important is the role innovation will play in the new digital economy?  Consider the focus placed on this characteristic by Microsoft—whose employees are told, “Obsolete your own products.”  This mindset is constantly reinforced in all aspects of their work.  Nathan Myhrvold and Bill Gates have expressed this mindset in their book The Road Ahead:[2] “No matter how good your product, you are only 18 months away from failure.”

Objective

Extrapolating beyond, and yet still in harmony with Mr. Tapscott’s initial reasoning and analysis, this document recommends four policies that will characterize—define the climate of—those organizations that would be successful in the new digital economy:

1.       You must innovate beyond what your markets can imagine.

2.       You must understand the needs of your customer’s customer.

3.       Your organization needs a deep-seated and pervasive comprehension of emerging technologies.

4.       You need a climate in which risk taking is not punished, creativity can flourish, and human imagination can soar.

The theme of emerging technologies is reflected in the title of this paper, and in the analysis that is presented.  The analysis in this paper seeks to illuminate those megatrends and technologies that will strategically affect the way GTE conducts its business.  In particular, this paper attempts—as Mr. Tapscott recommends—to lay a foundation for a “deep-seated and pervasive comprehension of emerging technologies” that will be the enablers of the megatrend changes GTE will face.

This paper seeks to convince the reader of the correctness and gravity of the above stated policies.  Strategic steps for GTE to take are also presented.  Many related materials from a number of sources—most of which have versions that are readily available online from Internet websites—have been brought together to lend their credibility to the thesis of this argument.  Care has been given to presenting a well-integrated coherent view—not just a collection of related articles.

Setting the stage of this discussion

Even before commencing the examination of any megatrend changes, this paper wishes to set a general atmosphere of expectancy that we—as we enter the twenty-first century—indeed are entering a new era in the history of man’s civilization.  This new era will be one in which our current perceptions of how business is organized and operated, and of how people relate to one another, will undergo—in fact, already are undergoing—radical transformations.

The megatrends discussed in later sections are but some of the more visible manifestations of this new atmosphere.  The emerging technologies that are presented here are but point examples from the broad spectrum of new ideas—many of which until recently were the ‘impossible dream’—that are now finding concrete realization.

John Gehl and Suzanne Douglas recently have written an article, “From Movable Type to Data Deluge,”[3] that re-examines the theories and observations of Marshall McLuhan regarding the role, function, and influence of the media—with consideration given to recent emerging technologies.

The thesis, or theme, of their article is:

Instant, global news and the hypertext Web are carrying us into realms of information access that alter knowledge foundations laid by Gutenberg's printing technology.

John Gehl and Suzanne Douglas are editors and publishers of the magazines Exec and Educom Review, and the Internet newsletters Innovation, Edupage, and Energy News Digest, which are found at http://www.newsscan.com.

The essence of Marshall McLuhan’s thought is collected in his classic work Understanding Media: The Extensions of Man,[4] which recently was reprinted in response to the renewed awareness among so many people of the prophetic, enlightened content of his writings.

Marshall McLuhan is perhaps best remembered as the pundit who coined the slogan “the medium is the message.”  He went on to describe the information age as an age of all-at-onceness—an era in which space and time finally are overcome.  In McLuhan’s time, this transcendence was achieved by television, jets, and computers.  Now, as the age of a new digital economy approaches—today typified by the Internet—this transcendence is achieved to an even greater extent.

When McLuhan first began to express his ideas, the television medium was dominated—controlled—by the three TV networks that operated in the United States.  Shows such as Ed Sullivan and I Love Lucy were mass-produced for mass consumption.  This was a world dominated by one-to-many broadcast.  Now, due to advances in cable, satellite, and computer networks, we are offered all news all the time, all comedy all the time, all MTV all the time, all shopping all the time, all anything you want all the time.  Notice how the theme of ALL-ness occurs over and over—all the time?  Could anything be more “all-at-once” than this?

The correct answer is—perhaps surprisingly for some—a resounding yes!  The description above is focused on the transcendence of time.  The other component of transcendence that McLuhan described—one that is just now beginning to occur—is the transcendence of space.  The emergence of the Internet and the imminent commoditization of long distance telephone service are but two manifestations of this transcendence of space.

In an all-at-once world where limitations due to space and time are overcome—if not entirely eliminated—the linear “cause-and-effect” thinking processes that have characterized the industrialized, mass-production-focused world are giving way to a new “discontinuous integral consciousness.”

This all-at-onceness is more than simply an abstraction of societal inclinations.  The effects of this all-at-onceness are now being manifested in all areas of technology.  The discussion of system-on-a-chip (SOC) and reconfigurable computing technologies that are presented in the Emerging Technologies section of this paper provides concrete—physically realizable—examples of the impact and effects of this all-at-onceness phenomenon at the microchip and systems levels of computer and systems technology.  System and applications functions that once out of necessity would have been physically separated on discrete chips or components are now being integrated in ways only imagined a few years ago.

The section on Killer Applications, which examines some of today’s new applications of emerging technologies, gives the examples of new cell phone and set-top box implementations as single integrated chips that leverage FPGA—field programmable gate array—technology.  The elimination of the physical requirement to partition functionality onto discrete devices facilitates much more than simply the potential to deliver today’s currently defined features more cost effectively.  More importantly, this new level of integration will facilitate the delivery of what previously would have been unheard-of, even undreamed-of features—technological breakthroughs.

The organizational theories that underpin the emerging digital economy are indelibly affected by this all-at-onceness.  The Megatrends section of this document considers the explanation by Larry Downes and Chunka Mui regarding the rise and impact of what they termed killer apps.  The three primary principles or forces they identified as driving the changes behind the digital economy—Moore’s Law, Metcalfe’s Law, and the Law of Diminishing Firms—are intimately affected by this all-at-onceness.

Just as the systems engineer’s constraints on the physical placement of components have radically changed—as indicated above—so also have the constraints of time and space on organizational theories been forever transformed.

George Gilder, author of the privately circulated Gilder Technology Report, foresees a fundamental technological shift with “catastrophic consequences for some and incredible profits for others.”  He says the future will bring us universal technology “as mobile as your watch, as personal as your wallet,” a technology that will be able to recognize speech, navigate streets, collect your mail, and even cash your paycheck.

Gutenberg's invention—mass repeatability

This kind of quickening has occurred once before—though at a slower pace and on a smaller scale than what is happening now.  The time and circumstance was Gutenberg’s invention of the movable-type printing process embodied in the printing press.  The process he invented accomplished much more than simply opening the way for texts that were smaller, more portable, and more easily and economically produced.

At the more abstract societal levels, it encouraged a great surge in literacy, individualism, ... and rebellion.  Among the results of this strategic redirection were a new Europe and an entirely new world.  James Burke and Robert Ornstein explained this strategic redirection in their book, The Axemaker's Gift.[5]

The effect of Gutenberg's letters would be to change the map of Europe, considerably reduce the power of the Catholic Church, and alter the very nature of the knowledge on which political and religious control was based.  The printing press would also help to stimulate nascent forms of capitalism and provide the economic underpinning for a new kind of community.

As with all interesting technologies—which is what the printing press was—the publisher's intentions were ultimately far less important than the consequences of publishing.  At the political and societal levels, using the vernacular languages legitimized those languages, detracted from Rome's authority, and made it easier for monarchs to enforce their laws and extend bureaucratic control far beyond what had been previously possible.

At the knowledge level, printing led to the prominence of specialists and experts, who wrote books on every subject and fed Europe's growing demand for information of all kinds.  These sources included the old, reliable kind, as well as the novel, heretical kind that fomented dissent and upset all the traditional relationships that had sustained medieval Europe.  Because of portable, printed books (and later newspapers), people could study the Bible without relying on a priest, learn a subject without going to a master, and think thoughts without asking for permission.  The consequence, as expressed by Mr. Burke, was that:

Things would never be the same again, because ideas were now as free as air.  The genie was out of the bottle.

Nationalism and the Industrial Revolution

In a 1964 interview, McLuhan explained that nationalism did not exist in Europe

"... until typography enabled every literate man to see his mother tongue analytically as a uniform entity.  The printing press, by spreading mass-produced books and printed matter across Europe, turned the vernacular regional languages of the day into uniform closed systems of national languages—just another variant of what we call mass media—and gave birth to the entire concept of nationalism.''

The social transformation that was wrought by printing resulted in much more than the free flow of ideas empowered by the printed word.  The printing process was just that—a process!  It offered a fundamentally new approach to the use of machines.  It was the prototype for the science of mass production—one of the cornerstones of an industrialized society.

McLuhan explained these extended consequences thus:

Printing, remember, was the first mechanization of a complex handicraft; by creating an analytic sequence of step-by-step processes, it became the blueprint of all mechanization to follow.  The most important quality of print is its repeatability; it is a visual statement that can be reproduced indefinitely, and repeatability is the root of the mechanical principle that has transformed the world since Gutenberg.

Typography, by producing the first uniformly repeatable commodity, also created Henry Ford, the first assembly line and the first mass production.  Movable type was archetype and prototype for all subsequent industrial development.  Without phonetic literacy and the printing press, modern industrialism would be impossible.  It is necessary to recognize literacy as typographic technology, shaping not only production and marketing procedures but all other areas of life, from education to city planning.

The radical realignment of relationships

This repeatable, sequential characterization of information processing which printing enabled is also the hallmark of modern programming!  We are able to mass-produce and distribute floppy disks and CD-ROMs with the latest programs and data to everyone everywhere.  Now, with the total connectedness that the Internet portends, this distribution can occur at any time, in real time!

Now, the Internet and the World Wide Web are once again changing the very nature of communication by radically realigning the relationships between the people involved in a communications process.  Books, newspapers, radio, and television are all essentially one-to-many—that is, broadcast—media, with one source transmitting to many readers, listeners, or viewers.

In contrast, the Internet allows a surfer—one who uses the Internet—to exercise complete control over what now has become an interaction with, rather than a reception of, news or information.  Hypertext links—URLs—and search engines permit the Internet user to become entirely and quickly free of the confines of the information sender’s intended message and purpose.  John Gehl and Suzanne Douglas in their article, “From Movable Type to Data Deluge,” have expressed this radical realignment thus:

Whereas the communication process has in the past typically implied an assumption that the message sender had more information than the message receiver, now the relationship is effectively reversed.  The one with control is not the one with the message but the one with the mouse.

The immense consequences of this realignment—indeed, reversal—of relationships will be borne out in the discussion presented in the sections that follow.  We consider the rapid appliancization of technology, and the move from a mass-production-dominated industrial world to one where mass customization is the norm.  We consider the move from a world where one anxiously awaits the reporting of what has happened to one where real-time interaction with what currently is happening—not just the news on the TV, but the information required to operate the corporation effectively, competitively—is the norm.

With the full transcendence of space and time that all-at-onceness achieves, all work becomes virtualized—it could be done with equal ease just about anywhere, at anytime, by anyone—a defining characteristic of twenty-first century corporations.


Megatrends

Transcending any specific examples of emerging technological advances are general megatrend changes driven by their consequences.  They already are happening—independent of which particular technological breakthrough has occurred or will occur, and of when.  These megatrends will profoundly affect the way GTE does business.

Some lines-of-business will significantly shrink in profitability—unless they are re-engineered from both a technology and a business perspective.  New line-of-business opportunities will be created—if foresight is taken to capitalize on them.  Nearly all lines-of-business will be affected in some significant way—for better or worse, depending on their flexibility to adapt to these megatrend paradigm shifts.

What are these megatrends?  The full force and consequent changes of the digital economy have begun to manifest themselves.  A number of strategists have developed theories and written books predicting and explaining the megatrend phenomena that everyone now is beginning to observe and to experience—not only at work, but also in their personal lives.

In his book, The Digital Economy: Promise and Peril in the Age of Networked Intelligence, Donald Tapscott discussed the forces behind the new digital economy.  He enumerated twelve themes: 1) knowledge, 2) digitization, 3) virtualization, 4) molecularization, 5) integration/internetworking, 6) disintermediation, 7) convergence, 8) innovation, 9) prosumption, 10) immediacy, 11) globalization, and 12) discordance.

Larry Downes and Chunka Mui explained the rise and impact of what they termed killer apps in their book,[6] Unleashing the Killer App.  They identified three primary principles or forces that are driving the changes behind the digital economy: Moore’s Law, Metcalfe’s Law, and the Law of Diminishing Firms.

1.       Moore's Law explains how computers, telecommunication services, and data storage systems are becoming faster, cheaper, and smaller, all at increasing velocity.

2.       Metcalfe's Law demonstrates why the impact of these technologies spreads quickly and pervasively through the economy—from early adoption to widespread acceptance.

3.       The Law of Diminishing Firms states that as the market becomes more efficient, the size and organizational complexity of the modern industrial firm becomes uneconomic, since firms exist only to the extent that they reduce transaction costs more effectively.
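
The following minimal Python sketch puts rough numbers behind the first two laws above.  It is an illustration added to this discussion, not part of Downes and Mui's analysis; the 18-month doubling period and the n*(n-1)/2 "possible connections" form of Metcalfe's Law are the usual textbook simplifications.

# Illustrative sketch (added for this discussion): rough arithmetic behind the
# first two laws.  The 18-month doubling period and the n*(n-1)/2 connection
# count are common textbook simplifications, not figures from the book.

def moores_law_gain(years, doubling_months=18):
    """Relative price/performance gain after `years` of steady doubling."""
    return 2 ** (years * 12 / doubling_months)

def metcalfes_law_value(n_users):
    """Relative network value, proportional to the number of possible connections."""
    return n_users * (n_users - 1) / 2

print(round(moores_law_gain(5), 1))       # ~10.1x gain in five years
print(metcalfes_law_value(1000))          # 499500.0
print(metcalfes_law_value(2000))          # 1999000.0 -- roughly 4x for 2x the users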

The virtual corporation—explained and examined in this study—becomes the new corporate model of organization and operation.

In his book, Donald Tapscott explained a new medium of communication that he saw emerging—one enabling a new economy based on the networking of human intelligence:

Today we are witnessing the early, turbulent days of a revolution as significant as any other in human history.  A new medium of communication is emerging, one which may prove to surpass all previous revolutions—the printing press, telephone, television, computer—in its impact on our economic and social life.  Interactive multimedia and the so-called information highway, and its exemplar, the Internet, are enabling a new economy based on the networking of human intelligence.

Of the twelve major themes that Mr. Tapscott discussed, one theme in particular stands out: innovation.  How important is innovation in the new digital economy?  Consider the focus placed on this human characteristic or quality by Microsoft—whose employees are instructed, “Obsolete your own products.”  This innovative mindset is constantly reinforced in all aspects of their work.  Nathan Myhrvold and Bill Gates have expressed this mindset in their book The Road Ahead: “No matter how good your product, you are only 18 months away from failure.”

In particular, Mr. Tapscott recommends to those who would be successful in the new digital economy:

You must innovate beyond what your markets can imagine.  You must understand the needs of your customer’s customer.  Your organization needs a deep-seated and pervasive comprehension of emerging technologies.  And you need a climate in which risk taking is not punished, creativity can flourish, and human imagination can soar.

This innovation theme is reflected in the title of this document, which seeks to provide an analysis of those megatrends that will affect the way GTE conducts its business.  In support of that goal, this paper also seeks to lay a foundation for a “deep-seated and pervasive comprehension of emerging technologies” that will be the enablers of the megatrend changes GTE will face.

With due consideration given to the analyses of the writers noted above, among others, and to the body of daily announcements of technological breakthroughs, this paper identifies and discusses the following three megatrends: 1) appliancization, 2) mass customization, and 3) convergence.

As computing and communications costs continue to shrink, the devices and applications that leverage them will become less general-purpose and more task-specific.  The ready replacement of one device or application by another that is better—cheaper to buy, to operate, and to maintain, and better suited to the task for which it is designed—becomes the norm.  This trend is a characteristic of appliancization.

The forces of the digital economy also facilitate the transition from a focus on mass production to a focus on mass customization—a world where mass-market goods and services are uniquely tailored—customized—to the needs of the individuals who buy them.  The move from an industrial-focused society to one that is digital-focused means the demotion of products and the promotion of customers—which is what services are all about—as the focus of all commerce.

Convergence—of processes, of operations, of communications, of content, of supply-chain management, of process management, of marketing, etc.—will be what enables such dynamic partnering to occur—transparently to the customer, and seamlessly among the partners.

The fundamental force enabling each of these megatrends is technology—emerging technologies.  The profound opportunities that they herald are demonstrated by the example killer applications included here.

Appliancization

The first of these megatrends might be called the appliancization[7] of computing and of communications.  When a given resource becomes readily available at economical prices, the justification for rationing its use and for encouraging its reuse diminishes—if not completely disappears.  To the extent that computing and communications resources previously have been relatively expensive to acquire and operate, schemes were developed by which these resources could be shared, reused, repaired, etc.

 

With respect to computing and communications, significant progress has already been made—we’ve come a long way, baby!  After all, the timeshare mainframe, the multi-party line, and the neighborhood pay phone were once the only games in town!  Now we have multi-line households (one line for the Internet, one for the fax, one for the teenagers, etc.) in place of party lines.  Similarly, PCs are everywhere—the office, the home, the hotel, etc.

As computing and communications costs continue to shrink, the associated devices, applications, and services become less general-purpose and more task-specific.  This trend is a characteristic of appliancization.

When a product is considered expensive to acquire, to maintain, etc., the motivation exists to seek as much value as possible from that product—to find as many uses for it as possible.  For example, few people would purchase an automobile if it could only be driven between one particular pair of locations, for only one particular trip, on only one particular day, of one particular year, of—you get the idea.

A typical new family automobile easily can cost over $20,000—several months’ wages for most families.  For most people, the purchase of an automobile can be justified only by its general—if not universal—utility for a multitude of tasks.  The automobile industry must design it for general reusability, durability, etc.  The support sector of this industry for maintaining the family’s automobile—including parts suppliers, distributors, mechanics, body shops, etc.—is as large as the manufacturing sector.

The more expensive an item—product, service, whatever—is to acquire and to maintain, the more value it must be able to provide the user to justify its expense.  This additional value can be derived in the form of broader functionality, general utility, reusability, extended durability, etc.

On the one extreme, there is the item that lasts forever, can do everything, and so costs more than all but the most affluent can afford.  In the realm of computing, this would be the description of the traditional mainframe.  At the other extreme, this would be the item that does a very specific task—possibly only one time—but costs practically nothing.  What it enables more than justifies its insignificant cost.  In the realm of computing, this could be a smartcard—a computer chip embedded into a piece of plastic—used as a calling card, or credit card, a one-day pass, etc.

To reinforce this principle, consider an example from another technology—the electric motor and electrical appliances.  Each electrical appliance—the hair dryer, dish washer, garbage disposal, etc.—has its own dedicated motor—for blowing in the dryer, for spinning and pumping in the washer, for grinding in the disposal.  This situation not only is more cost effective but also results in appliances that are more convenient to use.

Alternatively, all these appliances would need to be designed—for example—to use a common universal electric motor.  Furthermore, that common motor would need to be readily installed, in its turn, into the garbage disposal, the washing machine, etc.—as each appliance was used.

This example—the universal home electric motor, with a multitude of attachments—is not nearly as farfetched as one might think.  In his book The Invisible Computer,[8] Donald Norman provides a photocopy of an advertisement taken from a 1918 Sears and Roebuck catalog for just such a common universal electric motor, along with an assortment of attachments for churn and mixer, fan, buffer and grinder, and sewing machine, among others.

This scenario was economically plausible when the per-unit cost to manufacture an electric motor was significantly greater than the additional per-unit cost added to each appliance to make it motor-swappable.  Since electric motors today are fairly inexpensive—relative to total appliance cost—the economic incentive is negligible in comparison to the customer’s perceived value in not having to hassle with such motor sharing across all their electric appliances.
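
A small illustrative calculation makes the break-even point concrete.  It is a sketch added here, not taken from Norman's book, and every dollar figure in it is hypothetical.

# Illustrative sketch: hypothetical break-even arithmetic for the universal-motor
# example.  All prices are invented for illustration.

def shared_motor_total(n_appliances, motor_cost, adapter_cost_each):
    """One expensive motor shared across appliances, each fitted with an adapter."""
    return motor_cost + n_appliances * adapter_cost_each

def dedicated_motors_total(n_appliances, motor_cost):
    """Every appliance ships with its own motor."""
    return n_appliances * motor_cost

n = 5  # churn/mixer, fan, buffer/grinder, sewing machine, ...

# 1918-style economics: motors are dear, adapters are cheap -- sharing saves $150.
print(dedicated_motors_total(n, 40.0) - shared_motor_total(n, 40.0, 2.0))  # 150.0

# Today's economics: motors are cheap -- sharing saves only $10, negligible next to
# the inconvenience of swapping one motor among all the appliances.
print(dedicated_motors_total(n, 5.0) - shared_motor_total(n, 5.0, 2.0))    # 10.0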

Furthermore, this example discussion of electric appliances has not considered many other issues that this scenario raises in the areas of maintenance and convenience.  For example, what happens if the motor fails—to which appliance vendor should it be returned?

The bottom line—guiding design, development, marketing, and support considerations—is that appliances should share no more in common than is necessary.  The appliance manufacturer is motivated to make the device as applicable to the specific customer task as possible.  The transformation is from such product-focused issues as durability, supportability, and reuse to customer-focused issues such as specific tailored functionality, convenience, and ease of use.

The sections that follow examine characteristics of the appliancization phenomenon—in particular, how this phenomenon is affecting the information and communications industries.  The impact of appliancization covers all aspects of the business model—from the design and planning processes, to the products and services developed, to how they are delivered and supported.

Why ration something that’s virtually free? examines the justification for the product-focused compromises in functionality—such as rationing and reuse—that in the past have been associated with computing and communications.  The cost of processing and communications at all levels—from the complex systems level down to the subsystems and the components that form them—continues to drop precipitously.

The consequence is that the value proposition for justifying the preferential importance of economical reuse of computational resources over the importance of customer-valued characteristics—convenience, flexibility, etc.—quickly dissipates.

When new is cheaper than used considers the transformations that occur in the producer’s business model and in the customer’s perception of a product’s value when product replacement becomes preferred over product retrofit/upgrade.

When prices hit the floor examines the transformation in how a company must approach product evolution and continuity when its products have been appliancized.  The incentive to lower a product’s cost is replaced with the incentive to add new just-gotta-have features while preserving the current (replacement) floor cost.

Repackaging what’s already available considers how delivery of the next just-gotta-have feature can be achieved through the repackaging and delivery of an existing feature in new scenarios or applications via new methods or devices.  The services presented later in the section “The synergistic value of technology convergence—killer apps,” under the major section Convergence, are examples of this approach to creating just-gotta-have features.

Consumerization—the disposable appliance addresses the transformation of the business model that occurs when a company’s product transitions—from the perspective of the customer—from a durable good to a consumable.  The consumerization of an appliance occurs when that appliance is not only cheaper to replace than to repair, but has become so inexpensive for the customer to acquire—in comparison to the value it returns: great value for negligible cost—that it becomes consumable, or disposable.

Server-side appliancization—a reality considers that the designation of a product as an appliance should not necessarily be restricted in functionality to the scope of—say—a small kitchen appliance.  Intel recently proposed a Server Appliance Design Guide to ensure product reliability and broad application support for a class of products termed server appliances.  Other industry leaders have teamed with Intel to develop a set of platform specifications.

"Appliancization of the Network" examines the profound influence that the appliancization megatrend will have on how the Internet and the PSTN evolve.  Its influence will manifest itself in a number of ways.  Emergence of the I3A’s (the three I’s are for Internet, Information, and Intelligent) appliances will place entirely new sets of requirements upon the Network.

Two obvious areas of application-enabled innovation are the next generation of cellular handsets that are now on the way and the home network, discussed later in detail in the section “The new converged home network.”  Today's corporate networks can be characterized as fairly homogeneous.  In contrast, environments such as the appliance-rich home network—and in time, the twenty-first century office network as well—will be network environments characterized by their diversity.

Why ration something that’s virtually free?

The entry-level price of a computing device—a la the PC—until recently was above $2000.  In fact, the author can remember when an 8086-based PC-XT cost over $5000!  At those prices, the cost to perform any given computing task was so expensive that one naturally sought to perform as many tasks as possible with that one relatively expensive resource—just as the farmer often has depended upon one general-purpose tractor.

(See the open letter—titled “The time has come to move beyond the PC,” written by George F. Colony, President, Forrester Research—attached to the end of this document.)

The cost of processing at all levels—from the complex systems level down to the subsystems and the components that form them—continues to drop.  The consequence is that the value proposition for justifying the importance of economical reuse of computational resources over other customer-valued characteristics—convenience, flexibility, etc.—quickly dissipates.

The prospect for a new information age centered around the information appliance must now be taken seriously.  Today’s VCRs, microwaves, etc. already have more processing power than the PCs of a decade ago.  Several articles[9] address the emergence of the information appliance.

The following quote taken from the last of the articles above provides insight into the appliancization megatrend that is now developing:

"Over the next five years, we expect to see companies forsake the 'PC for everything' approach and start moving toward specialized devices with limited but well-understood functions.”

Howe predicts that the market for this new breed of devices will reach $16 billion before 2002.

... In addition, career-oriented consumers will be the prime catalysts of the consumer revolution, purchasing Internet-enabled phones and other communications devices in order to increase productivity out of the office.

The last statement indicates that the office professional will be part of the vanguard in this adoption of information appliances.  In a later analysis report, “Forrester Says PC Market Will Stall After 2000,”[10] Forrester reiterates this perspective, as reported in the article:

… PC vendors will drop prices to try to spur demand, but corporations will increasingly go the route of Internet appliances.

… Instead of investing in PC’s, corporations increasingly will expend application development money and energy to support Internet browsers and appliances.

The Acer definition of an information appliance—which they have termed the XC—describes the situation where each consumer application is handled by an appliance dedicated to that particular functionality:  The XC concept includes devices such as handheld computers and set-top boxes, as well as terminals designed for viewing digital content, Internet-only viewing, or home banking.

According to Stan Shih, Acer CEO,

"We are repackaging PC technology in a form for single applications.  The PC is a good computer, but it is complicated to use.  There are more consumers and end-users right now looking to enjoy IT technology, but not by the PC approach."

What will happen to the PC?  The functionality of the PC will soon be available at commodity prices, as indicated in the article “Cheap System-On-A-Chip Challenges National”[11] by Richard Richtmyer.  In the discussion of system-on-a-chip technology from STMicroelectronics, executives indicate their expectation of being able to deliver a commodity-priced PC:

Perhaps even more important than the design and processing power of ST's new chip is its low cost, with the potential to enable a complete PC system for less than $100!

Further indication that the era of the information appliance indeed has arrived is provided when discussion of the trend becomes the cover story of an issue of Business Week: “Beyond the PC—Who wants to crunch numbers? What we need are appliances to do the job--and go online.”[12]

Donald A. Norman, co-founder of the consultancy Nielsen Norman Group and author of The Invisible Computer, is a leading apostle of so-called information appliances—simple devices that do one or two jobs cheaply and well:

“We're entering the consumer era of computing.  The products of the future will be for everyone.”

Winning in the digital-appliance business will depend not so much on the latest geek-like specifications—such as megahertz and gigabytes—as on identifying consumer needs, and satisfying them with products that hide their complexity.  As explained by Hewlett-Packard Chief Executive Lewis E. Platt,

“The PC is so general-purpose that very few of us use more than 5% of its capability.”

Market researcher International Data Corp. says that Internet access is now 94% via the PC, but estimates that number will fall to 64% in 2002—thanks to set-top boxes, Web phones, and palm-size computers.  IDC estimates that by 2002, more information appliances will be sold to consumers than PCs.  While 48% of U.S. homes now have a PC, analysts do not expect that figure to rise above 60%.  Information appliances will handle many of the jobs now performed by the PC.

One might ask: what will be the killer applications for these appliances?  In today's PC-centric world, modern cybernauts now spend upwards of 40 hours per month online, according to Sky Dayton, chairman of Internet service provider EarthLink Network Inc.  If information appliances let consumers log on to the Web more often and more conveniently—say, to check the local movie schedule or even buy a car—that usage easily could rise to 200 hours per month.

The business models that guide how such appliances are developed and marketed are still evolving.  Instead of designing cool boxes and hoping they find uses, companies first dream up killer services—and then build devices that can deliver these services effectively.  Furthermore, these devices will let companies lock customers into their services—and harvest rich new revenues from advertisers and E-merchants.

An example of one such appliance is the BlackBerry mobile device recently announced[13] by Research In Motion (RIM).  The BlackBerry is a wearable wireless handheld device with integrated e-mail/organizer tools such as a calendar and address book along with an alarm, a PC docking cradle, and mailbox integration with Microsoft Exchange.

When new is cheaper than used

Another consequence of the appliancization phenomenon is that replacement becomes cheaper than retrofit/upgrade.

This phenomenon occurs whenever labor becomes a significant component of manufacturing and assembly costs (compared with the cost of the materials used to produce the product).  It has already happened in any number of industries—hair dryers, can openers, weed whackers, etc.  Note that these example industries are all appliance-focused.

When this condition occurs—when the cost of labor has become the greater part of maintenance and service cost—an industry can drive the labor cost of manufacturing a product far below the labor cost required to service that product in the field, one-on-one.  This reduction in manufacturing labor cost is achievable by a number of means—mass production via automation, offshore assembly, etc.

Allow me to share a recent personal experience to demonstrate the point with a concrete example.  Recently, one of my garage door openers quit functioning.  One repair company spent over a week—consuming my afternoons and weekends—trying to repair it.  During this time, the estimate for replacement parts escalated from $80 to $160—besides the labor costs (which I would owe if the person succeeded in repairing my appliance)!

Furthermore, to add another feature I had always wanted (but which was not available when I bought the original opener in 1991) would cost another $150—and the integration of the after-market product with the old system would still fall short of the new system’s level of integration.

I then checked over the phone with Sears only to learn that a new garage door opener with these new features (and then some)—with 50% more horsepower, the desired enhanced programmability, etc.—would cost only $159!  Care to guess what I did?
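
For concreteness, the arithmetic behind that decision can be laid out as a short sketch.  The parts, upgrade, and replacement prices are the figures quoted above; the labor charge is a hypothetical placeholder.

# Illustrative repair-versus-replace arithmetic for the garage-door-opener anecdote.
# The $60 labor charge is a hypothetical placeholder; the other figures are quoted above.

parts_estimate = 160.0   # escalated from the original $80 estimate
labor_estimate = 60.0    # hypothetical -- owed only if the repair succeeded
feature_addon  = 150.0   # after-market add-on for the long-wanted feature

repair_and_upgrade = parts_estimate + labor_estimate + feature_addon
replace_with_new = 159.0  # new opener: more horsepower, the feature built in

print(repair_and_upgrade)  # 370.0
print(replace_with_new)    # 159.0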

Earlier in the year, I had this same type of eye-opening experience with a weed whacker and with a leaf blower—I already knew not to waste my time trying to repair wrist watches or hair dryers.

Such experiences are not unique to me.  The Dallas Morning News recently ran a story[14] that conveys the same notion regarding the plunge in prices of electronic goods.

Today, low-end VCRs, 19-inch to 32-inch TVs, calculators, handheld stereos and other ubiquitous consumer electronics have become so inexpensive that many consumers consider them disposable.

Even professional fix-its say that buying a new machine often makes more sense after consumers consider the cost of repair, the equipment's average life span, the inconvenience of doing without while it's in the shop, the extra bells and whistles on new machines and declining prices.

With the price of a general-purpose PC dropping below $500—on its way to the predicted XC price of about $100—the choice to replace rather than upgrade becomes a no-brainer.  So, what’s an appliance manufacturer to do?

When prices hit the floor

Another characteristic of the appliance business is the existence of a floor price for a given feature or capability set that is low enough to entice a significant market segment to purchase it.  Further depression (or subsidy, as in today’s cellular phone business) of the price to gain a sale no longer is required to reach that market segment.

The cordless phone has reached this point with the general consumer market (the largest market of all)—last year U.S. consumers preferred to purchase cordless phones over traditional tethered phones, although the tethered phone was cheaper!  The basic cell phone (together with basic service packages) also soon will reach a floor price for a significant market segment.

Consider the recent articles “Cutting the Phone Cord to Stick with Cellular” by Roy Furchgott,[15] and “Price Is King For Wireless Users” by Bill Menezes.[16]

"By the year 2002 we might be seeing more replacement phones sold than original phones.  When you've got a base of 100 million users buying phones every two to three years, those can start outselling the new user phones.  Brand will be a very important factor."

The phenomenon that then occurs is feature-creep, as the basic configuration of the appliance is delivered with more and more just-gotta-have features.  These are features that can be included in the next generation of the product—without increasing the total cost to the manufacturer—and that justify the customer’s perceived need to upgrade to the latest version of the product.  In many instances, the cost of the new version—because of technological advances, etc.—is actually less than the cost of the current model.

The incentive to lower a product’s cost is replaced with the incentive to add new just-gotta-have features while preserving the current (replacement) floor cost.

Jim Barry, the spokesman for the Consumer Electronics Manufacturers Association, or CEMA, interviewed in the previously referenced Dallas Morning News article, explains:

"What other industry are you going to pay less for an item and get more technology as time progresses?  The reason we bought 17 million VCR’s last year is because the prices are lower and you get more."

An example of feature-creep in the cellular handset arena is the Nokia 650 cell phone, announced[17] on November 2, 1998.  The Nokia 650 integrates a built-in FM radio, as well as caller groups and profile functions, a calendar, a calculator, and four games!

Speaking from personal experience, I recently refreshed two of my older digital PCS phones with new ones for the cost of otherwise replacing batteries in the older phones—and of course, I also picked up those new just-gotta-have features!

In the future, people gladly will pay the price of a new phone—or some other appliance with phone-like functionality—to gain those latest new features or enhancements.  One question the phone manufacturers and GTE should be asking is exactly what those next just-gotta-have features will be.  Furthermore, what enhancements—if any—will the network need to support them effectively?

Repackaging what’s already available

Sometimes delivery of the next just-gotta-have feature can be achieved through the repackaging and delivery of an existing feature in new scenarios or applications via new methods or devices.  In another scenario, existing technologies and previously fielded systems that originally were developed and implemented for other—oftentimes quite different—purposes can be reborn to perform new tasks, develop new applications, and deliver new services.

One example of how technological advances are enabling new just-gotta-have features—which previously were trialed but not at that time well accepted—is described in the article “CenturyTel Unveils Multi Mode Cellular Phone.”[18]

The multimode service is something of a breakthrough, Newsbytes notes.  As well as supporting multiple cellular systems, calls can be seamlessly handed off between networks, rather than requiring calls to redial when the call "moves" between networks.

In addition to seamless coverage, CenturyTel says that its customers also have transparent use of call features, including calling line identification (caller ID), message waiting indication and voice-mail.

The new business model is clear: make it easy for the customer to access his legacy wireline service with the same device (and its associated new just-gotta-have features—see the next example below) that is used in the wireless network.  Then, for example, one need not be concerned with synchronization of the same set of speed dial numbers into multiple handsets (for each cordless wireline phone, and for each cellular phone).  [As an aside:  Today I personally maintain a set of speed-dialing codes (1 = home, 2 = me@work, etc.) across four cell phones, two cordless phones, a key system, etc.!]
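
As a rough sketch of what relief from that synchronization chore could look like, the following illustration keeps one shared speed-dial directory and pushes it to every handset.  The device names, codes, and numbers are invented for illustration; this is not a description of any actual GTE or vendor design.

# Hypothetical sketch: one shared speed-dial directory pushed to every handset,
# instead of re-keying the same codes into each cell phone, cordless phone, and
# key system.  Device names and numbers are invented for illustration.

SHARED_SPEED_DIAL = {
    1: ("home", "555-0100"),
    2: ("work", "555-0142"),
}

class Handset:
    def __init__(self, name):
        self.name = name
        self.codes = {}

    def sync(self, directory):
        """Overwrite the local speed-dial codes with the shared directory."""
        self.codes = dict(directory)

handsets = [Handset(n) for n in ("cell-1", "cell-2", "cordless-kitchen", "key-system")]
for h in handsets:
    h.sync(SHARED_SPEED_DIAL)

# One edit to the shared directory now updates every device on the next sync.
assert all(h.codes[1] == ("home", "555-0100") for h in handsets)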

As another example of just-gotta-have features, Samsung recently announced—and Sprint PCS is offering—a cell phone that supports VAD—voice-activated dialing—of a frequently dialed list of numbers—e.g., “Mom-n-Dad,” “Home,” “Work,” etc.  Both the wireline and the wireless carriers have trialed various forms of VAD.

These have been network-side implementations—that is, the VAD processing requires that the caller’s voice command be transmitted over the network to a central server.  Thus, the network must be actively engaged in understanding my voice commands before performing what it is really there to do—completing a phone call, etc.  In the first case (wireline) the handset is never conveniently at hand.  In the latter case (wireless), noise interference usually has been a problem.

Now, with VAD processing implemented client-side in the handset, which could be either a cordless or a wireless handset (or both—see the CenturyTel example above), the network is not engaged in decoding commands—only in making connections.  Additionally, the implementation cost of client-side VAD can be distributed over—shared by—a broader set of applications (VAC—voice-activated commands).  VAC need not be limited to controlling a phone call.

Technological advancements—such as the ones identified in this paper—are why VAC now can be implemented client-side—for instance, in a handheld device—instead of necessarily implemented network-side on some super server architecture.

How will this feature—new in implementation, but recycled in concept—be further leveraged in the appliance arena?  Next generation handsets will communicate with other appliances in their vicinity (e.g., the VCR, TV, garage door controller) via Bluetooth, HomeRF, HAVi, HomePNA, and other initiatives—not to mention globally via WAP, MIP, etc. to bring VAC to all manner of remote applications.

Voice control need not be embedded in all appliances—rather, it can be localized (or personalized) in each individual’s personal handset.  Then one will be able to say “ABC,” “FOX,” etc. and the VCR, TV, set-top box, etc. will tune to the proper channel—and one will not know nor care what the particular channel number or radio frequency is!  This concept has been presented in various papers.[19]
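
A minimal sketch of the idea follows.  Speech recognition is stubbed out and the channel line-up is invented for illustration; the point is simply that the handset resolves the spoken name locally, so the network (and the user) never needs to see a channel number.

# Hypothetical sketch of client-side voice-activated commands (VAC).  The handset
# resolves the spoken name locally and emits only a simple tuning command; speech
# recognition is stubbed out and the channel map is invented for illustration.

CHANNEL_MAP = {"ABC": 8, "FOX": 4, "MTV": 35}

def recognize_speech(audio_clip):
    """Stand-in for an embedded speech recognizer running on the handset itself."""
    return audio_clip.strip().upper()

def tune_command(spoken_name, channel_map=CHANNEL_MAP):
    """Translate a spoken channel name into the command sent to the set-top box."""
    name = recognize_speech(spoken_name)
    if name not in channel_map:
        return None
    # In a real handset this command would travel over Bluetooth, HomeRF, etc.
    return {"device": "set-top-box", "action": "tune", "channel": channel_map[name]}

print(tune_command("fox"))  # {'device': 'set-top-box', 'action': 'tune', 'channel': 4}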

Examples of the second scenario described above include the overlay of the current voice-focused PSTN with data-centric services—via analog modems today, and via DSL technology tomorrow.  In the information processing arena of Internet Web servers, the mainframe is making a major comeback.  A computing platform that was developed for a mass-production-focused world finds new life as the dispenser of personalized webpages.  This second example is discussed in detail later in the section “Server-side appliancization—a reality.”

Consumerization—the disposable appliance

The phenomenon—when replacement becomes cheaper than retrofit/upgrade—has now hit the computer industry full force.  An analysis of this situation appeared recently in the article “Analysis: Consumerization stands IC business model on its head.”[20]

The changes now occurring in the computer industry—the semiconductor industry, in particular—will overturn more than just the industry’s common business models—with their return-on-investment and return-on-assets assumptions—that date to the early days of the business.  These megatrend changes will impact the design process itself.

At the core of this megatrend shift is the consumerization of the appliance.  This occurs when the functionality it provides has become so inexpensive for the customer to acquire—in comparison to the value it returns: great value for negligible cost—that it essentially becomes consumable, or disposable.

Such an appliance is not only cheaper to replace than to repair; the residual value it can offer becomes minuscule as its functionality becomes obsolete.  Finding a new secondary use for such an appliance can be more expensive in continued costs and expended effort than its outright replacement.

Consider a personal real-life example of this phenomenon.  I recently retired my old HP 500C InkJet printer.  I originally paid over $500 for it in 1992.  Today, the newer replacement machines—with far greater functionality—cost little more than $100.  On the other hand, cartridges for the HP 500C are about $30—and are becoming more difficult to procure.  The cost of keeping it in use is greater than the effort is worth.  No one would buy the HP 500C if I tried to sell it—even at a yard sale.  So, I finally gave it away—to charity.

This consumerization phenomenon—the extreme of product appliancization—is now happening to the PC, in particular—and to computing and communications bandwidth, in general.

The extreme of product appliancization—its consumerization—is reached when the cheaper-to-replace-than-repair scenario takes hold and the appliance essentially becomes disposable: great value for little cost.

The razor blade industry has long been used as an example of this phenomenon.  Razors have been given away as part of selling blades.  Now, with the advent of inexpensive plastics, the razor itself is wrapped around the individual blades—resulting in even greater ease of use.

In the photo industry, the use-n-toss camera has matured in capability and quality.  Such cameras now come with previously advanced features such as built-in flashes, zoom lenses, panoramic formats, etc.  Few people care to purchase a nice camera costing several hundred dollars, with technology that is obsolete in less than four months, when an up-to-date throw-away version can be purchased for about the cost of the film.

Admittedly, this example about cameras sidesteps the question of whether the customer should buy any film-based camera at all, now that the digital models have arrived.

When an appliance becomes disposable, the business-model focus for providing that appliance necessarily moves from one of products to one of services.

This consumerization is affecting everything about the business—from design and development to marketing and support considerations—from the integrated system level down to individual components.  The same process is playing itself out in the cellular-telephone market, whose turmoil is now being visited upon companies playing in the PC game.

While many won't utter it publicly, an argument is emerging that hardware—everything from semiconductors to systems—is becoming the razors, while the Internet and the vast services resident there are the blades.

Mario Morales, program director of semiconductor research at IDC (Mountain View, Calif.), predicts that within two years the PC industry could conceivably have a robust services-based business akin to the cell-phone industry, which gives away boxes for free or at nominal prices in exchange for a raft of services.

Martin Goslar, a PC-sector analyst for In-Stat (Scottsdale, Ariz.), has described the megatrend transformation this industry segment faces:

"The PC is the third-largest purchase [in a household] but it's just becoming something you pay for like water or gas and it's an information service."

James Bartlett, vice president of consumer solutions for IBM, understands the transformation that is occurring:

"This is new space.  You have to stop thinking this is a hardware-only business.  The cell-phone industry knows this.  They have the infrastructure already in place and every person extra they sign up goes to the bottom line."

This same phenomenon is now occurring in the PC and Internet access arenas, as typified by the discussion in an article[21] by Craig Bicknell.

Free-PC.com is a firm that gives away computers to anyone willing to wade through an assortment of targeted advertising.  Similarly, the Antrim, New Hampshire-based PC Free is preparing to woo the vast masses of Americans not yet online and shake up the whole computer industry in the process.  David Booth, the CEO of PC Free, says:

"What we are doing is challenging the existing paradigm in our industry.  No, not challenging—breaking. … We anticipate a tremendous response.  There are 38 million homes in the US without Net access.  We aim to get half."

His ground-breaking service is Internet access, complete with a fully configured PC, color printer, and software, for $40 a month.  Sick of the service after two months?  Cancel at any time.  This is not lease to own.

Booth's basic business theory is simple.  Computers and Net access are becoming a ubiquitous utility, like cable television or cell phones.  One does not pay separately for a cable set-top box, so why pay separately for a computer?  All the widgets should come with a simple service fee.

The company currently is working in tandem with Metro 2000, a New Hampshire-based ISP.  As the subscription base grows, Booth plans to supplement income by charging for ad space or selling goods on the computers' locked-in default homepage.

Booth’s deal-making does not stop here, either.  He also has cut a strategic deal with Compaq Computer[22] to supply him with a million PC’s, with which he plans to roll out service in 19 states.  For US$40 a month, PC Free customers receive unlimited Net access, a Compaq computer, monitor, keyboard, mouse, and color printer.  If someone is dissatisfied, the person can cancel anytime.

So what is the catch?  Every computer's homepage will be permanently locked on Compaq's AltaVista portal site.  Furthermore, a permanent desktop icon will link directly to Compaq's Shopping.com online mall.  Fifteen other permanent icons on the desktop will link to the sites of vendors who pay Compaq and PC Free a bounty.  Need software? Just click the icon and up comes a software selection complete with order form.  There will be icons for nearly everything—insurance, cars, you name it.

Booth’s PC Free is not the only one pursuing this business model.  Another New Hampshire firm, an ISP called Empire.Net, recently introduced its own combination PC-Net-access offering.

The successful application of a consumable, disposable appliance business model need not be limited to the consumer market.  People, such as Sandy Reed, editor-in-chief of InfoWorld magazine, are now proposing[23] business-focused models—targeted at the business enterprise, as opposed to the consumer market—that leverage this trend.

What's most intriguing about free PCs is how the idea could be modified for the business marketplace. … Given volume pricing from manufacturers, software vendors easily could include free hardware with their solutions.

One particularly interesting possibility suggested by Sandy Reed would benefit Linux and other open-source software vendors.  Today, Microsoft Windows is one of the most costly components of current business systems, and yet one of the most difficult to avoid buying.  One likely outcome of the current trial over Microsoft's business practices is that vendors will be more likely to offer non-Microsoft operating systems.  That would make it cost-effective for Linux vendors to move to a free PC model.

The concepts of consumability and commoditization are not limited to tangible appliances.  Chuck Martin recently discussed what he calls “The Great Value Shift.”[24]

In the Net Future, it will be possible for core assets to become peripheral in what I call "the great value shift," leading to the commoditization of various products.

Commoditization can affect just about any product, from hard to soft goods.

Mr. Martin discusses stock quotes and real-estate MLS listings as examples of this commoditization.  Realtime stock quotes formerly were available only to brokers who had paid significant fees to the New York Stock Exchange.  Not long ago the standard fee for realtime quotes was $29.95 a month.  Today, realtime or slightly-delayed quotes are distributed free.  MLS real-estate listings traditionally have been a valued asset for participating brokers.  Today, the MLS listings are available in abridged form for various cities at such Web sites as Cyberhomes.com, while owners.com allows those who want to sell their homes themselves to post pictures, property data, and prices.

The commoditization of the PC and of Internet access—two sectors that greatly impact the information services and telecommunications industries—was discussed above.  As significant as the examples of PC’s and Internet access may seem, the full impact of this trend in technology-enabled commoditization has yet to be felt in these sectors.

Mr. Martin explained the danger and the opportunity of the situation thus:

When products are commoditized, companies must find other ways to provide new value—something that people will be willing to pay for.  In many cases, what used to be a company's core asset can become a loss leader, with peripheral products and services driving new revenue streams.

Part of the opportunity therefore, is to see where the core product or service can be leveraged for an unrelated—and often unexpected—product or service.  But the main challenge, and opportunity, during the great value shift is for companies to co-opt the value of someone else's product or service, while increasing the value of their own.

AT&T and Sprint PCS both have demonstrated their willingness to cannibalize what currently is their bread-n-butter long distance services as a means to enhance their nationwide wireless service offerings.  From current indications, their efforts appear likely to succeed.

Today, long-distance is to be given away.  Next to come will be the cannibalization of local service, as both AT&T and Sprint move toward all-one-can-eat local wireless offerings—as reported in “AT&T Plan Is a Search for Loyalty.”[25]  Note that in this latter situation, neither company is giving away traditional fixed-wireline POTS service.  Rather, they are providing the functional—and even enhanced—equivalent of such service.

According to the article, the issue is not just about wireless or local service access.  Rather, it’s about customer loyalty, a principle to be examined more fully in the discussion of the Mass Customization megatrend.

The announcement was about much more than savings for consumers.

It was a sign that the long-simmering battle among the nation's giant phone companies to provide an integrated yet easy-to-understand basket of communications services had moved to the front burner.

The important thing about the new AT&T plan is not simply the variety of services that the company says it will provide on a single bill—wireless, long-distance, calling card services and a personal toll-free number.  Far more important is the pricing.  By charging the same rate for both wireless and traditional calls, AT&T may be set to make a psychological breakthrough in the marketplace.

Another, smaller PCS company, Leap Wireless, in fact has announced[26] commercial availability—not just a test-marketing trial such as AT&T held in Plano, TX—of an all-one-can-eat local wireless offering in its midsize markets—places like Knoxville and Chattanooga, TN.  Leap seems to recognize the potential, both in opportunity and responsibility, that such a psychological breakthrough represents.  As explained by Harvey White, Leap's chief executive,

"We want people to think of their wireless phones as a basic telephony service."

Matthew Hoffman, an analyst at Dataquest Inc., agrees that the pricing may be enough to persuade consumers to cut the cord to their landline telephones.  Dataquest research shows that 30% of consumers are willing to switch to wireless from their landline phones if the costs are about $30 a month.

Such an offering can have unintended, or at least unanticipated, consequences.  During a prior trial of the all-one-can-eat service, Leap learned that customers talked more often and longer on their cell phones than they did on landline phones.  After all, the phone is now even more convenient to use!

Tomorrow—this author’s personal prediction—expect to see such services as business centrex, etc. bundled—given away as a loss-leader—as part of total e-commerce business solutions.  Further consideration of how current telephony service offerings can be leveraged as components of new killer apps is discussed in the section “The synergistic value of technology convergence—killer apps.”

Server-side appliancization—a reality

This phenomenon is not hitting only the PC industry.  The same drivers—technological and manufacturing advancements—will likewise affect other equipment—both client and server side—that goes into GTE networks.  Business models that presume a long life expectancy over which to amortize deployed equipment will be greatly affected by the appliancization phenomenon.

One example of such server-side appliancization is the new networked drive paradigm—Network Attached Storage Devices (NASD)—as typified in the work of a group of researchers at Carnegie Mellon University, led by Garth Gibson, and described in a recent article[27].

Their radically simple idea is to put more intelligence into hard drives so that the devices can communicate directly over the network to clients.  Instead of using a workstation or superserver to interface the hard drives to the rest of the network, the drives themselves can be distributed and connected directly to the local-area network via native high-speed network connectivity—including network intelligence—built into the drive.

For example, one can replace the current EIDE, S-Bus, or SCSI connection with a Fast-Ethernet or ATM connection.  These network-intelligent drives become autonomous units that are intelligent peers to the PC’s and other clients that access them, rather than functioning as dumb slave peripherals.
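
As a minimal sketch of the concept, assuming a hypothetical request format and port number (actual NASD interfaces are research prototypes and vendor specific), a client might read data directly from a network-attached drive over TCP, with no file server in the middle:

import socket

def read_from_network_drive(drive_host, object_id, offset, length, port=9000):
    """Hypothetical direct read from a network-intelligent drive.
    The drive itself parses the request and returns the bytes; no
    intervening workstation or superserver handles the payload."""
    request = ("READ %s %d %d\n" % (object_id, offset, length)).encode()
    with socket.create_connection((drive_host, port)) as conn:
        conn.sendall(request)
        chunks, remaining = [], length
        while remaining > 0:
            chunk = conn.recv(min(65536, remaining))
            if not chunk:
                break
            chunks.append(chunk)
            remaining -= len(chunk)
    return b"".join(chunks)

# Example (illustrative host name): fetch the first 4 KB of an object.
# data = read_from_network_drive("drive-07.example.net", "web-page-42", 0, 4096)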

Making the hard drive less of a dumb commodity could lead to sustainable profits for manufacturers.  It is no wonder that companies such as Seagate and Quantum have readily provided funding for Gibson's work.  More recently, such traditionally network-focused players as 3Com now have announced[28] entry into the storage area networks (SAN) arena:

The strategic prediction of analysts following the SAN market is that the convergence between storage and networks will really take off.  The networking companies—such as 3Com, Cisco, Lucent, Nortel, etc.—will build every part of the SAN except for the drives themselves.  The role of the traditional OS—Unix, NT, Novell—as the intermediary between the data (on the drive) and the application (on some server or client) is assumed by the network itself.

Besides economic self-interest, there is another factor that has driven the NASD research.  Workstation servers have become a bottleneck in terms of I/O.  For example, data must be fetched by the intervening workstation, then retransmitted to the ultimately requesting device—say, a Web browser on someone’s PC.  Moreover, corporate IT organizations do not want to manage a multiplicity of Novell servers, Windows NT servers, and Unix servers—each with its own separate disk subsystem.

The reason corporations now are embracing network-centric storage—a la SAN’s—goes back to the mainframe days: RAS—that is, [1] Reliability, [2] Availability, and [3] Scalability.  It is inefficient to manage discrete sets of individual servers and clusters—each with its respective disk subsystem.

The future looks pretty clear.  Instead of the slow, isolated file systems that now are so prevalent, the trend is towards networks of shared disk subsystems.  Instead of direct cables connected to individual subsystems, SAN hubs and switches—shall we call them SANswitches?—will make the drives directly available to the network, while the glue holding them together will be Fibre Channel.

Furthermore, these are not your typical hard-drive systems.  Additional network-centric intelligence will be built into all of these devices, from the disk drives to the switches, for the most efficient management and operation possible.

For example, a directory-driven implementation—as described in the referenced paper—makes for much faster, more reliable data delivery, at a potentially lower cost.  One of the developers described the situation thus:

"Think about the benefits--common access to all corporate data on an IP network via a Web browser, with limitations only being a matter of password and ID restrictions, rather than the incompatibilities of the hardware storage devices."

Such directory-enabled, network-connected storage devices are much more easily distributed around a network—without losing the control that the directory-enabled functionality provides.

Another more sophisticated example of the new generation of networked server-side information appliance is the recently announced[29] Oracle lightweight database.

The goal of the initiative is to lower the cost of ownership for Oracle's databases.  Dell Computer Corp., Compaq Computer Corp., Hewlett-Packard Co., and Sun Microsystems Inc. are expected to begin selling the appliance servers by the end of the first calendar quarter in 1999.

The appliance server runs a lightweight, easy-to-manage operating system developed in part by Oracle.  The design strategy is for the new integrated appliance operating system and database to provide, as efficiently as possible, the database-support features needed to run and maintain an Oracle database—realtime backup and recovery, etc.

Mr. Ellison, Oracle’s CEO, said the system will be easier to use, and less costly for corporations to operate and maintain, than the more general-purpose Windows NT or Unix, which are extremely complex—being designed to provide general-purpose, multi-application computing platforms.

"We're not saying that we're better than NT, we're saying that in some cases you don't need an operating system."

"If the only thing you're running on the server is [Oracle8I], you don't need much of an operating system; you certainly don't need an operating system like Windows NT or any other full-blown complex OS.”

So small is the new operating system that this server appliance database initiative has been dubbed "Raw Iron" inside of Oracle, because the databases virtually run directly on Intel Corp.'s microprocessor hardware, Ellison said.

The use of databases is not restricted to server class platforms.  In addition to the Oracle’s and the IBM DB2’s of the world, PC-based databases such as Microsoft’s Access have been around a long time.  Now, database functionality is needed by and is moving to the information appliance arena.  What better place to house data and information—even temporarily within an appliance—than in the structure of a database?  A recent example of this trend is provided in the article “Sun, Sybase look to bring databases to small-footprint devices.”[30]

Sybase is in the process of adding Personal Java and Embedded Java support to its SQL Server Anywhere database.  This will provide developers a platform-independent environment for creating applications that let small-footprint devices interact with enterprise mainframes and servers.  Sybase's SQL Anywhere database includes replication technology and an Ultra Lite deployment option that enables developers to create databases less than 50 Kbytes in size.
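
As a rough illustration of the embedded-database idea (not the Sybase UltraLite or Personal Java API, but a generic sketch that uses Python's built-in SQLite module as a stand-in), a small-footprint device might keep a tiny local store and replicate it to an enterprise server when a connection is next available:

import sqlite3

# The whole database lives in one small local file on the device.
conn = sqlite3.connect("appliance.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, channel TEXT, value REAL)")

def record_reading(ts, channel, value):
    """Capture data locally, even while disconnected from the network."""
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)", (ts, channel, value))
    conn.commit()

def rows_to_replicate():
    """Rows to push to the enterprise server on the next connection."""
    return conn.execute("SELECT ts, channel, value FROM readings").fetchall()

record_reading("1999-04-01T08:00", "thermostat", 68.5)
print(rows_to_replicate())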

Regarding this trend, Mark Tiller, president of Sun Microsystems' Consumer and Embedded Division, expects to see set-top boxes, screen phones, cellular phones, and in-car infotainment/telematic systems using the technology beginning next year.

The alliance to create Java and database solutions for consumer and embedded devices stemmed from Sun and Sybase's participation in the Open Service Gateway Initiative, which is aimed at creating an open interface for connecting consumer and small business appliances with Internet services.  This and other such home network-related initiatives are discussed later in the section “The new converged home network.”

The above examples of network-based storage and databases are specific niche application areas of server appliancization.  However, a much broader movement to embrace this paradigm is now well underway, such as an effort initiated by Intel[31].

Intel has proposed a Server Appliance Design Guide to ensure product reliability and broad application support for server appliances by teaming up with other industry leaders to develop a set of platform specifications.  Companies committed to supporting the development of this guide include Cobalt Networks, Dell, Dialogic, Digex, Hewlett-Packard, Lucent, Novell, Oracle, PSINet, and SCO.

The goal is to produce a design guide that defines common hardware platform basics for the emerging network-based server-appliance market, said executives at the Santa Clara, Calif.-based chipmaker.  The first specification development tools and test suites are scheduled for release in the second quarter of 1999.

As explained by Lauri Minas, general manager of Intel's server-industry marketing operation:

Server appliances are custom-built to perform a single function or a limited set of functions. ... They don't require any configuration by the end user.  Businesses now use them mainly as Web servers, caching servers, e-mail servers, or for a firewall.

Minas said server appliances are a growing market, especially with the growth of the Internet.  However, the larger ISPs, telecommunications vendors, and larger companies with IT departments are holding back on buying the devices until they see some industry standards established, she added.

James Staten, industry analyst at Dataquest, explains the problem that this effort—and this guide, in particular—seeks to address:

Right now, most of the players in this space are hand-building or not yet at the process where they can outsource the construction of their appliances. …

Part of the problem is you can't take a traditional server board and build an appliance with it because it's got a bunch of extraneous components you don't need. …

And no one is really building motherboards without those components you can buy in volume, and that's what's really necessary and that's what Intel's trying to push.

Just how large can one of these new server appliances be—not in physical size, but in magnitude of processing that is performed, number of customers handled, bandwidth consumed, etc.?  One needs to look no further than the Internet to begin to answer this question.

The principles of server appliancization have moved even further up the functionality ladder—to the application level of dedicated website appliances.  The real technical discontinuity—or megatrend change—created by the Internet may be just emerging.  Rather than happening on the browser and consumer end of the spectrum, it is happening at the server end on big Websites.

A discussion[32] of this change was featured recently in the Fortune magazine article, “The Rise of the Big Fat Website.”

At first, these phase shifts are subtle.  However, little by little, financial backing and human resources are committed to technologies that are different from the status quo, and these new architectures gain so much momentum that they become the new reality.

Welcome to the world of Big Fat Web Servers (BFW’s).

These mega-websites are not operated as a single Web server—as a single physical platform, such as a SUN cluster, for example.  Rather, these mega-websites—BFW’s, as they are called—are implemented as Web farms: hundreds of servers networked together in a highly complex environment.  Mr. Gurley, the article’s author, describes them thus:

These BFW’s represent something genuinely new in the history of computing.  It [a BFW] might best be described as a high-end complex machine fine-tuned specifically for the task at hand.

Does this not sound like the Intel description of the Server Appliance?
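
As a minimal sketch of how such a Web farm presents many machines as one logical site, here is a simple round-robin front end (one of many possible load-distribution schemes, with invented host names):

import itertools

# Illustrative pool of back-end machines in the Web farm.
backends = ["web-001.example.net", "web-002.example.net", "web-003.example.net"]
next_backend = itertools.cycle(backends)

def dispatch(request_path):
    """Pick a back-end for this request; to the visitor, the farm is one site."""
    host = next(next_backend)
    return "http://%s%s" % (host, request_path)

# Three successive requests land on three different machines.
for path in ["/", "/search", "/cart"]:
    print(dispatch(path))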

The ability to equip a new infrastructure for less cost than that of upgrading an existing one becomes a real possibility—the appliancization of the network itself is on the horizon.

This appliancization of the network is borne out by the recent announcement[33] of the switch vendor Castle Networks Inc. of Westford, MA.

The C2100 circuit switch from Castle costs one-tenth that of competing products from such vendors as Lucent and Nortel.  At the same time, it requires only a fraction of the real estate in the central office that normally is required by switches of like capability.

Furthermore, the Castle switch is prepared for the future.  This switch can be upgraded to support new technologies like voice over IP.  It features two switching backplanes: a traditional TDM (time-division multiplexing) bus for shunting phone calls between ports, and a cell-switching fabric for routing voice calls carried over IP or ATM.

Taqua Systems is an example of yet another start-up company that has introduced a programmable carrier switch for the edge of the service-provider network, as reported in a recent news brief[34].

Their Open Compact Exchange (OCX) switch is designed to make it easier for competitive service providers or large enterprises to set up services such as unified messaging or virtual private networks on a per-user or per-call basis. Taqua will sell the switch beginning in June, priced at approximately $50,000.

According to Taqua, their OCX combines the best of Class 5 access and programmable switching.  The OCX offers the flexibility to support any interface or protocol; the scalability to meet the rigorous demands of the carrier central office and the access edge of the network; and the extensibility and programmability to meet the emerging voice/data integration needs.

Appliancization of the network

The appliancization megatrend will have a profound influence on how the Internet and the PSTN evolve.  Its influence will manifest itself in a number of ways.  The emergence of the I3A’s (Internet, Information, and Intelligent Appliances) will drive entirely new sets of requirements for the network.

One obvious area of innovation is the next generation of cellular handsets that are now on the way.  Short-messaging already is readily available with handsets now shipping.  Several Palm Pilot and Windows CE-enabled handsets also are now shipping.

Symbian was recently formed by a group of major cellular phone handset manufacturers to develop an OS and tools appropriate for the creation of this next generation of I3A-enabled handsets.  Similarly, Qualcomm and Microsoft have formed the partnership, Wireless Knowledge, to develop not only new handsets, but also the server-side infrastructure, to support new I3A-enabled applications.

At the CeBIT tradeshow in Hannover, Germany, Ericsson launched its R380 dual-band smart phone.[35]  Based in part on the Psion Series 5 handheld computer and featuring the EPOC operating system of Symbian, the R380 is Ericsson's attempt to catch up with the handheld offerings of other vendors such as Nokia.  Two generations of the pioneering Nokia 9000 communicator already combine the functions of a GSM mobile phone and a palm-top computer.

Taking a different approach, Qualcomm is working with the help of Microsoft on a new semiconductor that would allow cellular phones and handheld computers to be controlled by a single microprocessor.[36]  The proposed new chip would permit Qualcomm to develop cellular devices that incorporated both a mobile phone and a handheld computer—supposedly at a lower price than competitors' devices that need two chips.

Of course, other groups are pursuing even more grandiose strategies.  The 3G—that’s third-generation digital broadband—cellular devices are coming.  One particularly impressive effort is the “universal communicator” now being developed by QuickSilver Technology of Campbell, CA.  It is discussed in detail in the section “3G—third-generation—cellular devices are coming.”

Such handsets can be expected to communicate with one another, as mobile users share information and otherwise collaborate.  They also will demand convenient, efficient access—transparently, effortlessly through the network—to other non-wireless devices and to servers.

After the mobility market, another major market segment ripe for development and innovation is the home network, discussed later in detail in the section “The new converged home network.”  As reported in the article “Use of Internet, home PC’s surging,”[37] the growth of networking in the home quickly is becoming a new status quo—the networked home.  Half of all U.S. homes now have a PC, and over one-third are now online.

The arrival of the networked home represents more than an incremental change.   The impact of its arrival is easily on the scale of when the first electric wires were run through the house.

Think of all the electrical home appliances that have been invented—but only after there was a reliable (except during thunder storms!) convenient source of electricity to power them.  One can easily imagine the new features and capabilities that the next-generation of existing appliances will have—not to mention what has not even yet been invented!

For several generations, the television and the radio have been the information appliances of choice.  That is now changing as more people turn to their networked PC’s as their primary source of information—and even for entertainment, now that multimedia streaming, online gaming (no, I do not mean gambling), chatrooms, etc. are readily available over high-speed always-on connections such as a cable modem or DSL modem can provide.

Not only the PC, but also the TV and the VCR—through HAVi enablement—as well as the microwave, the refrigerator, etc. are destined to be networked.  The vendors of these products, and indeed of all electronic appliances, now are racing to develop new ones that are network aware,[38] as well as to find ways to re-invigorate old ones—through bridging technologies such as X-10 and Lonworks.

Today, consumer products with Internet connections still exist mostly on the drawing board.  In contrast, the use of network devices is already common in industry.  Light and temperature sensors in office buildings monitor usage, turning themselves on and off to accommodate human traffic.  Home appliances talking to each other in similar fashion is the blueprint for the consumer market.

According to Jack Powers, director of the International Informatics Institute and chairman of the Internet World conference, high-speed Internet connections like cable modems and digital subscriber lines now rolling out across the country are the key to building such a world.

That trend toward home networking is known as "ubiquitous IP," the IP standing for Internet protocol.  It's akin to an old political promise—but instead of a chicken in every pot, there'll be an Internet connection in every device.

OK, so everyone now is convinced the home of the twenty-first century will be an Internetworked one.  The question then is what the character of this network will be.  Will it, should it, model the PSTN with five 9’s of reliability?  How will it be managed?  Who will manage it?

As one moves from the days of the mainframe with its multitude of dumb thin clients—your 3270 terminals—to the client-server model of many of today’s business applications, to the new Web-enabled Intranet model, to the world of semi-autonomous collaborating I3A’s, the type of network required to support such a model changes.  As previously mentioned, the details of various efforts now underway to network the home are discussed later in the section “The new converged home network.”

This section considers what qualities the home network will require to support an appliance-centric world.  In the introduction to the discussion of “Appliancization,” the characterization of an appliance was given:

The bottom line—guiding design, development, marketing, and support considerations—is that appliances should share no more in common than is necessary.  The appliance manufacturer is motivated to make the device as applicable to the specific customer task as possible.  The transformation is from such product-focused issues as durability, supportability, and reuse to customer-focused issues such as specific tailored functionality, convenience, and ease of use.

In particular, appliances tend to emphasize specific tailored functionality, convenience, and ease of use.  The network must provide plug-n-play user friendliness in setup, support, and use.  Network enablement should make an appliance more useful, or utilitarian, as well as easier to use.

Consider the business arena—major corporations typically spend a significant budget to provide internal organizations of highly trained staff who are dedicated to the planning, management, and maintenance of their computing and communications infrastructures.  In contrast, the home network environment must be much easier to setup, provision, and use.

The IT organization might disagree, but the corporate network generally can be characterized as a fairly homogeneous network.

It typically consists of hundreds, even thousands, of PC’s—all from a select few models from an even more select set of vendors, all running the same operating system and software suites, all networked over one—possibly subnetted—network architecture to a relatively small set of look-alike servers, etc.

From the viewpoint of network support and complexity, the delivery of pretty much the same level and quality of network resources, bandwidth, etc. is the norm.  The exceptions are handled as special cases—for example, localized to their own dedicated subnetworks—so as not to disrupt the orderly management of everyone else.

In contrast, the home network environment will be a network characterized by its diversity.

There typically will be only one networked dishwasher, one networked refrigerator, one security system, etc., together with a multitude of diverse multimedia devices—TV’s, VCR’s, DVD’s, radios, stereos, etc.  The network transport will be just as diverse—a plethora of cable, twisted-pair, electrical wiring, and of course wireless—probably of several types (TDMA, CDMA, DECT, IR, etc.).

The qualities of service these networked appliances require will be just as diverse.  The monitoring of security devices, for example, may be low bandwidth, but require extreme robustness, encryption, etc.  On the other hand, video streaming may be more tolerant of a few packets lost here and there, but require considerably more realtime bandwidth—to reduce or eliminate jitter, etc.
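
As a rough sketch of this diversity (the device classes and figures below are illustrative assumptions, not measured requirements), a home gateway might associate each appliance class with its own quality-of-service profile:

# Illustrative QoS profiles for a home network gateway.
qos_profiles = {
    "security_sensor": {"bandwidth_kbps": 16,   "loss_tolerant": False, "encrypted": True},
    "video_stream":    {"bandwidth_kbps": 4000, "loss_tolerant": True,  "encrypted": False},
    "voice_handset":   {"bandwidth_kbps": 64,   "loss_tolerant": False, "encrypted": True},
    "white_goods":     {"bandwidth_kbps": 8,    "loss_tolerant": True,  "encrypted": False},
}

def admit(device_class, available_kbps):
    """Admit a device only if its profile's bandwidth still fits on the link."""
    return qos_profiles[device_class]["bandwidth_kbps"] <= available_kbps

print(admit("video_stream", available_kbps=10000))   # True
print(admit("video_stream", available_kbps=1000))    # False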

Consider for comparison the home electrical network, which is literally plug-n-play.  One simply plugs the power cord into an electrical socket—which is always on.  In the home electrical network there are different types of breakers—GFI’s for the bathroom, kitchen, and outside; a standard type for normal outlets; and high-impulse breakers for the heating and cooling systems, which can momentarily generate surges when the units start up.  Many appliances also may have their own built-in fuses and fusible links, or surge suppression, as well as some form of battery backup to handle keep-alive functionality.

Similarly, requirements may arise for some rather unusual qualities of service that are appliance specific.  For example, if the high-compression video technology of such companies as Tranz-send—which is claiming 400-to-1 today and 1500-to-1 next year[39]—comes to fruition, then the requirement for highly stable, low-bandwidth QoS could become a given.

The home communications network needs to be just as easy to setup and to use, and yet just as flexible.  It will undoubtedly combine cable, telephony, and electrical access with wireless as valid means of connecting appliances to the network.

The home network of the future decidedly will be more than a skinnied-down version of the office LAN.

Mass Customization

The classic reference on the subject of mass customization is Joseph Pine’s Mass Customization.[40]  Recently, an article[41] providing an updated summary—along with several examples of this principle’s realization in today’s economy—appeared in the September 28, 1998 issue of Fortune magazine:

A silent revolution is stirring in the way things are made and services are delivered.  Companies with millions of customers are starting to build products designed just for you.  You can, of course, buy a Dell computer assembled to your exact specifications.

Welcome to the world of mass customization, where mass-market goods and services are uniquely tailored to the needs of the individuals who buy them.

Mass customization is more than just a manufacturing process, logistics system, or marketing strategy.  It could well be the organizing principle of business in the next century, just as mass production was the organizing principle in this one.

The two philosophies couldn't clash more.  Mass producers dictate a one-to-many relationship, while mass customizers require continual dialogue with customers.  Mass production is cost-efficient.  But mass customization is a flexible manufacturing technique that can slash inventory.

In the past, mass production has offered a cost-efficiency that could not be matched by mass customization efforts.  For example, the shop floor machinist’s tools performed most efficiently when one could configure the machine once—then with that same setting mass produce many copies of the same component.  The cost of introducing more flexibility into the system was too expensive—driving up the cost of parts produced beyond what the market would bear.

Recent technological advances in communications, computing, etc., have completely changed the customization landscape.  Custom manufacturing automation is quickly becoming as economical as mass manufacturing automation—everything from custom design, to custom production, to custom assembly, etc.  To continue with the above example, today’s computerized, networked, supply-chained shop floor machinist tools can be reconfigured and provisioned just as easily on a per-part basis as on a once-a-shift basis.

Mass customization not only can be as cost effective—it in fact often can be more cost effective than its mass production counterpart.  The truth of this statement is independent of additional consideration of other more strategic advantages—such as being better able to meet a customer’s needs and desires.

Mass customization is an attractive business proposition for manufacturing companies because it does away with the problem of inventory, according to Don Peppers and Martha Rogers, authors of The One to One Future: Building Relationships One Customer at a Time[42] and Enterprise One to One: Tools for Competing in the Interactive Age.[43]  Peppers and Rogers are said to have coined the term mass customization in the early 1990’s.

The build-to-order business model facilitated by mass customization is inherently more efficient than build-to-forecast because one is not taking ownership of parts, materials, etc. any earlier than they are needed in products already sold.  Thus, from a purely cost-to-build perspective, mass customization can be an attractive proposition.  However, as explained by Peppers and Rogers:

"The real power in mass customization is in building strategic relationships.  The companies that are taking full value are the ones keeping track of customers. They're decommoditizing their products."

“The litmus test of mass customization comes down to one question.  Is it easier for me to buy the next one?"

Today’s classic example of mass customization is Dell Computer.  Dell Computer has perfected this model with its Web-based ordering system, where no computer is made until the order for it is in house.  The Fortune article describes Dell’s success thus:

The best—and most famous—example of mass customization is Dell Computer, which has a direct relationship with customers and builds only PC’s that have actually been ordered.  Dell's triumph is not so much technological as it is organizational. …

Dell keeps margins up by keeping inventory down. …

… Dell's future doesn't depend on faster chips or modems—it depends on greater mastery of mass customization, of streamlining the flow of quality information.

Technology has a way of turning what otherwise would be a handicap into an opportunity.  Dell leverages technology to ensure that the right parts and products are delivered to the right place at the right time.  JIT (Just-in-Time) manufacturing—the Holy Grail of the 1980’s—has become standard practice at Dell!

Mass customization offers two huge advantages over mass production.  First, its implementation makes full use of cutting-edge technology.  Second, its purpose is focused on serving the customer—rather than on the product.

The section “What are the key enablers of mass customization?” enumerates some of those enablers.  In particular, mass customization is a technology-driven advancement—it is able to happen only because of technological advances.  Joseph Pine, author of Mass Customization, has quite succinctly characterized what mass customized products and services have in common: "Anything you can digitize, you can customize."

The section “More than an efficient production and delivery system” explains how the combination of mass customization with the Internet results in much more than simply a more efficient production and delivery system.  The primary consequence of mass customization is that the nature of a company's relationship with its customers is forever changed.  Much of the leverage that once belonged to companies is shifted to customers.  The customer enjoys a world of new, highly customized possibilities.  Choice has become a higher value than brand in America.

The section “How many choices are enough?” asks and answers this rhetorical question.  Not to be misunderstood, mass customization isn't about infinite choices—it's about offering a healthy number of standard—in other words, mass produced—components, options, etc. which readily can be mixed and matched in literally thousands—even millions—of ways.  Such controlled, managed limitlessness gives customers the perception of boundless choice while keeping the complexity of the manufacturing and service delivery processes manageable.

The section “Mass production versus mass customization” compares and contrasts what is happening in the ways products are manufactured and services are delivered.  Mass production is product focused—at the efficient manufacturing of products.  Mass customization is customer focused—at the effective servicing of customer relationships.  Mass production and mass customization are not enemies!  Quite the contrary, the two in fact can be complementary—each making the other more effective, efficient, etc.

The section “Turning marginal opportunities into major revenue streams” explains a secondary benefit of mass customization.  Existing investments—in equipment, infrastructure, etc.—needed to service a company’s major business segments often can be adapted (customized!) and thus leveraged on an individual per-customer—or, at least, per-customer-segment—basis.  Thus, a company may be able to enter additional markets and market segments.  Opportunities that once would have been written off as marginal, at best, become major revenue streams—because of technology!

The section “The middleman’s new role—customer agent” examines the new role of the middleman that the mass customization-centric economic model supports.  In this model, the middleman is focused on the customer being served—he becomes the owner of the relationship!  In the mass production model, the middleman is focused on bringing the manufacturer’s products to various customers.  In the mass customization model, the middleman is focused on assisting the customer in finding, analyzing, and purchasing various products and services.  The middleman now functions in the new role of a "trusted adviser" who assists his clients in sifting through the thousands of directly accessible choices available to them.

The section “Customer-management—keep them coming back” looks at how mass customization is now impacting customer loyalty.  One important consequence of mass customization is that the customer almost effortlessly can turn to a competitor’s products and services.  The effectiveness of a customer relationship is measured by how often the customer returns—versus seeking some other source to meet their needs.  This increased ease with which a customer can leave must be offset by a corresponding increased ease for the customer to stay.  Key to this is the ease with which a company can interact with its customers—on their terms, their hours, their issues, etc.  As GTE phrased this concept, “being easy to do business with.”

The section “The business-to-business version of mass customization” examines the new business dynamics due to the effects of mass customization.  Businesses must now simultaneously compete and cooperate to meet the customer’s highly personalized demands.  The key differentiator of services and products in a mass customized world may well be how quickly a company can serve a customer—whatever it takes.  As a consequence of this customer urgency, a company often will not be able to satisfy the customer’s demand solely from resources within the company’s immediate control.  Dynamic partnering—often with one’s competitors—on a case-by-case basis to meet the customer’s needs will become the norm.

The section “My personal experience with telecomm-based mass customization” provides a brief anecdotal experience of the author that ties together the concepts and observations presented here regarding mass customization.  Today, the telephone industry has switches with literally hundreds of features that are ineffectively used by the customer, or not used at all.  There seems to be a disconnect—no pun intended—between what functionality the customer wants and what functionality the switch vendors, telco product managers, etc. have chosen to mass-produce in their pre-packaged solutions.  We have to do at least as well as the automobile industry has done in reducing our actual feature inventory while delivering more customization in the solutions offered to our customers.

What are the key enablers of mass customization?

Mass Customization is a technology-driven advancement—it is able to happen only because of technological advances.  A list of some technological advances that have made mass customization possible includes:

1.       Computer-controlled factory equipment and industrial robots make it easier to quickly readjust assembly lines.

2.       The proliferation of bar-code scanners makes it possible to track virtually every part and product.

3.       Databases now store trillions of bytes of information, including individual customers' predilections for everything from cottage cheese to suede boots.

4.       Digital printers make it a cinch to change product packaging on the fly.

5.       Logistics and supply-chain management software tightly coordinates manufacturing and distribution.

6.       Then there is the Internet, which ties these disparate pieces together.

One might ask what services, products, etc. constitute good candidates for mass customization?  Joseph Pine, author of Mass Customization, has quite succinctly characterized what mass customized products and services have in common:

 "Anything you can digitize, you can customize.”

In particular, one may ask: where does the Internet fit in the story of mass customization?

1.       The Net makes it easy for companies to move data from an online order form to the factory floor.

2.       The Net makes it easy for manufacturing types to communicate with marketers.

3.       Most of all, the Net makes it easy for a company to conduct an ongoing, one-to-one dialogue with each of its customers, to learn about and respond to their exact preferences.

4.       Conversely, the Net is also often the best way for a customer to learn which company has the most to offer him—if he's not happy with one company's wares, nearly perfect information about a competitor's is just a mouse click away.

More than an efficient production and delivery system

The combination of mass customization with the Internet results in much more than simply a more efficient production and delivery system:

… the nature of a company's relationship with its customers is forever changed.  Much of the leverage that once belonged to companies now belongs to customers.

Other more subtle—but just as significant—changes are occurring in the area of customer relationships.  If a company cannot customize, it has a serious problem.  The Industrial Age model of making things cheaper by making them the same no longer holds.  From a pragmatic perspective, competitors can copy product innovations faster than ever.

However, an even more fundamental change is occurring—consumers now demand more choices.  Having experienced such freedom and flexibility in product selection and fulfillment, the customer is less and less willing to accept de facto one-size-fits-all standards of products and services that sort-of satisfy some perceived need.

This phenomenon has been analyzed extensively in the classic work of Gary Heil, Tom Parker, and Rick Tate, Leadership and The Customer Revolution.[44]  They later summarized their thesis, arguments, and conclusions in the Information Week article “One Size No Longer Fits All.”[45]

In the old days—about three or four years ago—we consumers asked the companies we did business with for higher quality and greater responsiveness.  They did not let us down.

Everyone knows the success stories, for example, in the automotive industry—American automobile manufacturers have scaled J.D. Power's indexes, moving from the middle of the pack, or lower, into the top 10.  Whereas bankers' hours once were a joke, now automated teller machines (ATM’s), computerized phone systems (IVR’s), and Internet web access make sophisticated banking services available 24 hours a day, 7 days a week.

Today, however, responsiveness and quality no longer guarantee that consumers will be loyal to those we do business with.  Aware that companies will give us what we want, we're asking for more, and that “more” is flexibility.  At one time we were satisfied with a one-size-fits-all product or service; now we want businesses to bend to our will.

 We want them to give us what we want, not what they want to give us.

As quoted in the Fortune article, marketing guru Regis McKenna has explained this trend thus:

 "Choice has become a higher value than brand in America."

The largest market shares for soda, beer, and software do not belong to Coca-Cola, Anheuser-Busch, or Microsoft.  They belong to a category called Other.  Now companies are trying to produce a unique Other for each of us.  It is the logical culmination of markets' being chopped into finer and finer segments.  After all, the ultimate niche is a market of one.

In addition to companies such as Dell, many other less likely companies in other less likely industries also have embraced the principles of mass customization.  Mattel now is operating the website barbie.com, where girls are able to log on and design their own friend of Barbie's.  They are able to choose the doll's skin tone, eye color, hairdo, hair color, clothes, accessories, and name (6,000 permutations will be available initially).  Up to this point, this is mass customization—the first time Mattel has produced Barbie dolls in lots of one.

Offering such a product without the Internet would be next to impossible.  Like Dell, Mattel must use high-end manufacturing and logistics software to ensure that the order data on its Website are distributed to the parts of the company that need them.

However, Mattel does not stop there.  Each girl also is encouraged to complete a questionnaire that asks about her doll's likes and dislikes.  When her Barbie pal arrives in the mail, the girl finds her doll's name on the package, along with a computer-generated paragraph about her doll's personality.

The result of taking the thought, “After all, the ultimate niche is a market of one,” to its logical conclusion could be termed mass personalization.

We already are accustomed to PIM’s (personal information managers—such as Microsoft’s Outlook) and personal profiles, so this personalization concept is not totally new.  Now personalization is manifesting itself in other, much less obvious ways.

Today’s high-end automobile—Cadillac, Lincoln, Mercedes—supports multiple profiles for the family members that drive the car.  Upon being entered by a given driver, the automobile—having identified the driver, say, by the driver’s unique entry code—proceeds to configure the mirrors, seat, climate control, musical system, etc. to the preferences of that driver.
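
A minimal sketch of that mechanism (the entry codes and preference fields are invented for illustration) would keep a per-driver preference record keyed by the driver's entry code:

# Hypothetical per-driver profiles keyed by entry code; all values are illustrative.
driver_profiles = {
    "1234": {"seat_position": 4, "mirror_angle": 12, "climate_f": 70, "station": "101.5 FM"},
    "5678": {"seat_position": 7, "mirror_angle": 9,  "climate_f": 74, "station": "88.9 FM"},
}

def configure_car(entry_code):
    """Apply the stored preferences for whoever just unlocked the car."""
    profile = driver_profiles.get(entry_code)
    if profile is None:
        return "unknown driver: leave current settings"
    return "seat=%d  mirrors=%d  climate=%dF  radio=%s" % (
        profile["seat_position"], profile["mirror_angle"],
        profile["climate_f"], profile["station"])

print(configure_car("1234"))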

As stated by Parducci, the product manager of Mattel’s barbie.com effort,

Personalization is a dream we have had for several years. … We are going to build a database of children's names, to develop a one-to-one relationship with these girls.

By allowing each girl to define beauty in her own terms, Mattel is in theory helping each young lady to feel good about herself—even as Mattel collects personal data.  This is quite a step for a company that previously has operated in mass production mode—for decades stamping out annually its own stereotypes of beauty.

Will such a degree of personalization be a success?  Parducci's market testing indicates that a girl’s enthusiasm for being a fashion designer or for creating a personality is going to be a "through the roof" success.

The clothing industry also has learned the value of mass customization.  Levi-Strauss, the blue jeans company, now is giving its customers the chance to play fashion designer.  For the past four years, Levi’s has made measure-to-fit women's jeans under the Personal Pair banner.  Now, Levi's is launching an expanded version called Original Spin, which will offer more options and will feature men's jeans as well.  Yes, you can go there now!

With the help of a sales associate, customers create the jeans they want by picking from six colors, three basic models, five different leg openings, and two types of fly.  Then their waist, butt, and inseam are measured.  They try on a plain pair of test-drive jeans to make sure they like the fit.  Finally, the order is punched into a Web-based terminal that is linked to the stitching machines in the factory.  How about that?  Your pants then are manufactured without further intermediation.
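
A quick count of the listed options shows how a handful of standard choices multiplies out, even before the custom measurements make each pair effectively unique:

# Discrete style choices listed above for Levi's Original Spin.
colors, models, leg_openings, fly_types = 6, 3, 5, 2
print(colors * models * leg_openings * fly_types)   # 180 base style combinations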

Furthermore, customers even can give their jeans a name—say, Rebel, for a pair of black ones.  Two to three weeks later the jeans arrive in the mail; a bar-code tag sealed to the pocket lining stores the measurements for simple reordering. This is Levi’s approach to mass personalization.

Mass personalization has moved well beyond the embryonic stage of being the newest marketing fad offered by a few businesses seeking a competitive advantage.  Perhaps the biggest manifestation that mass personalization has definitely arrived, especially among the younger generation, is the recently witnessed phenomenon associated with the digital format for the encoding of music known as MP3.  This ground swell by the younger generation is described in the Washington Post article “Music Fans Assert Free-dom.”[46]

A generation has declared its wish, “I want my, I want my, I want my MP3.”  This refrain reverberates across the Internet, from the crowded chat rooms of America Online to the noisy channels of Internet Relay Chat.  The young people are confronting a music industry that has controlled popular music since the inception of recording technology.

What do they want?  They want their music delivered their way—on the Net so they can play it though their personal computers.  They want greater control and determination over song sets than what today’s CD players offer.  Good-bye, album format.  Hello, personal playlist.  They are tired of suffering the music industry to carefully manipulate and to cultivate their purchases—with packaging as the music industry so chooses.

A recent issue of Business Week devoted its cover story to analysis of this mass personalization generation—called the Y Generation.[47]

The article recognized the transformation from a mass production-focused customer base, managed via television’s homogeneity, to a mass customization-focused customer base.  Several causes contribute to this transformation from what has characterized earlier generations.

Most important, though, is the rise of the Internet, which has sped up the fashion life cycle by letting kids everywhere find out about even the most obscure trends as they emerge.  It is the Gen Y medium of choice, just as network TV was for boomers.  ''Television drives homogeneity,'' says Mary Slayton, global director for consumer insights for Nike.  ''The Internet drives diversity.''

Nike, the athletic-shoe manufacturer, has learned the hard way that Gen Y is different.  Although still hugely popular among teens, the brand—Nike's in particular, but brands in general—has lost its grip on the market in recent years, according to Teenage Research Unlimited, a Northbrook (Ill.) market researcher.

The Internet's power to reach young consumers has not been lost on marketers.  These days, a well-designed Web site is crucial for any company hoping to reach under-18 consumers.  Other companies are keeping in touch with this generation by E-mail.  For example, American Airlines Inc. recently launched a college version of its popular NetSaver program, which offers discounted fares to subscribers by E-mail.  As John R. Samuel, director of interactive marketing for American, observes:

They all have E-mail addresses.  If a company can't communicate via E-mail, the attitude is “What's wrong with you?”

This torrent of high-speed information has made Gen Y fashions more varied and faster-changing.  How do they keep up with what is the latest thing?  Via the Internet, of course!

How many choices are enough?

Not to be misunderstood, mass customization isn't about infinite choices—it's about offering a healthy number of standard—in other words, mass-produced—components, options, etc. that readily can be mixed and matched in literally thousands—even millions—of ways.  Such controlled, managed limitlessness gives customers the perception of boundless choice while keeping the complexity of the manufacturing and service delivery processes manageable.

According to Sanjay Choudhuri, Levi's director of mass customization (Yes, such an organizational position exists within Levi!), "It is critical to carefully pick the choices that you offer.  An unlimited amount will create inefficiencies at the plant."  Dell Computer's Rollins not only agrees, he in fact suggests, "We want to offer fewer components all the time."

Surprisingly, thirty years ago automobile manufacturers—Ford, GM, Chrysler, etc.—were, effectively, mass customizers.  Customers would spend hours in the office of a car dealer, picking through pages of options.  That ended when automobile manufacturers attempted to improve their manufacturing efficiency by offering little more than a few standard options packages—significantly reducing the choices available to the customer.

The problem with their solution was that they had focused on efficiency rather than effectiveness.  The number of distinct specifications of components being designed, engineered, manufactured, and assembled had not been significantly reduced—only the number of allowed combinations from which a customer could choose.  The efficiency gain was limited to the final assembly stage.  Now their customer was even more inclined to find that other dealer with the automobile that had the right packaging of features, accessories, etc.

In the last few years, the automobile manufacturers have begun to re-engineer their manufacturing processes along the same principles as those espoused by Levi’s Choudhuri and Dell’s Rollins.  This time through the re-engineering cycle, the manufacturers have reduced significantly—by an order of magnitude—the number of distinct component specifications of what must be manufactured.  At the same time, they have facilitated a greater variety of possible configurations of the final product—your automobile of choice.

Furthermore, a customer today not only is able to pre-configure their next automobile of choice.  They also can arrange its financing, insurance, etc. via the automobile manufacturer's website!  The manufacturers finally have learned how to offer a healthy number of choices—more than one ever wanted—while using a constrained number of standard parts that readily—via JIT (that's just-in-time manufacturing), etc.—can be mixed and matched in literally thousands—even millions—of ways.

The choices that the mass customizer, or customer agent, offers—see the section on middlemen below—need not be restricted to those that directly impact the actual product.  While working with the customer, the customer agent seeks to eliminate any hurdles to the successful completion of the deal.

For example, the choices that the automobile manufacturer offers—say, on its website—can include not only the automobile's make, model, exterior color, etc., but also auxiliary considerations that matter to the customer.  These include such items as financing, leasing, insurance, third-party accessories, service warranties, etc.  The days of waiting for financing approval, and of after-the-fact notification of insurance premiums, are over.

According to Reinhard Fischer, head of logistics for BMW of North America, as quoted in Fortune magazine:

"The big battle is to take cost out of the distribution chain.  The best way to do that is to build in just the things a consumer wants."

Mass customization is about creating products—be they PCs, jeans, cars, eyeglasses, loans, or even industrial soap—that match your needs better than anything a traditional middleman can possibly order for you.

Mass production versus mass customization

To grasp the significance of the transformation happening in the way products are manufactured and services are delivered, one need only look at the words used to express the two concepts.

Mass production is product focused—at the efficient manufacturing of products.  Mass customization is customer focused—at the effective servicing of customer relationships.

The transformation of a company from being product-focused to customer-focused has significant consequences.  Levi's charges a slight premium for custom jeans, but what Levi's Choudhuri really likes about the process is that Levi's can become your "jeans adviser."  Selling off-the-shelf jeans ends the relationship—the customer walks out of the store as anonymous as anyone else on the street.  In contrast, customizing jeans starts the relationship—the customer likes the fit, is ready for reorders, and gladly provides her name and address so that Levi's may send her promotional offers.

Mass production and mass customization are not enemies!  Quite the contrary, the two in fact can be complementary—each making the other more effective, efficient, etc.

We tend to think of automation as a process that eliminates the need for human interaction—take the individual out of the loop.  Mass customization makes the relationship with customers more important than ever—keep the individual in the loop, at least their inputs and feed-back.  Customers who design their own jeans—or anything else, such as a phone service feature set—make the perfect focus group!  The manufacturer, service provider, etc. can apply what is learned from this perfect focus group to continually improve the products and services it mass-produces for the rest of us.

Similarly, in terms of the management of total production costs at a company, the purchasing advantages of mass production quantities immediately benefit—can be leveraged by—the mass customization arm of the company.  Since more products are being shipped—those customized as well as the standard ones—the unit cost of components that are common to both should be more economical than if only the mass production product count were being manufactured!

Hotels that want you to keep coming back are using software to personalize your experience.  All Ritz-Carlton hotels, for instance, are linked to a database filled with the quirks and preferences—shall we say, the personalizations—of half-a-million guests.  Any bellhop or desk clerk can find out whether you are allergic to feathers, what your favorite newspaper is, or how many extra towels you like.
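
As a minimal sketch of the kind of lookup such a preference database supports (the guest ID, field names, and values here are hypothetical, not the actual Ritz-Carlton schema), consider:

    # Hypothetical guest-preference records; field names are illustrative only.
    guest_profiles = {
        "G-104522": {
            "allergies": ["feathers"],
            "newspaper": "The Wall Street Journal",
            "extra_towels": 2,
        },
    }

    def preferences_for(guest_id):
        """Return whatever quirks and preferences are on file for a guest."""
        return guest_profiles.get(guest_id, {})

    # A bellhop's or desk clerk's terminal might ask:
    print(preferences_for("G-104522").get("newspaper", "no preference on file"))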

Interestingly enough, the effort described below was in fact proposed as a concept during a directories ideation session held at GTE Place in June 1997.  The idea at that time was deemed to be long term.  In this instance, long-term turns out to be about fifteen months!

In the not-so-distant future, people may simply walk into body-scanning booths where they will be bathed with patterns of white light—no harmful x-rays, here—that will determine their exact three-dimensional structure.  A not-for-profit company called [TC]2—funded by a consortium of companies, including Levi's—is developing just such a technology.  Last year some MIT business students proposed a similar idea for a custom-made bra company dubbed Perfect Underwear.

Taking this level of personalization even further, Morpheus Technologies [see Mainebiz] in Portland, Maine, plans to set up studios equipped with body scanners to "digitize people and connect their measurement data to their credit cards."  Someone with the foresight to be scanned by Morpheus could then call Dillard's or J.C. Penney, provide his credit card number, and order a robe, suit, or whatever, that matches his dimensions—or those of his wife, etc.  One's digital self also could be sent to the department store—or pulled up on the manufacturer's website—where one could see exactly how one would look wearing the custom clothing one has designed.

The traditional product-focused mode of selecting clothing also will be transformed.  The time-consuming process of someone trying on ten garments—which I know from my family's shopping habits to be the norm—before finally picking one to purchase is greatly reduced.  Instead of the store selling one garment for every ten that the customer takes to the fitting room, the ratio could well be reversed—to, say, nine out of ten!

Not only are the department store's resources better utilized; more importantly, the customer's shopping experience is greatly enriched.  The assurance that one will look acceptable when stepping from the fitting room into public view does wonders for one's self-image.  Today, members of my family dread shopping for clothes because of how they fear they will look while trying on new clothes.

Turning marginal opportunities into major revenue streams

Another area in which mass customization complements mass production is in enabling the company to serve peripheral market segments that otherwise do not fit the standard one-size-fits-all model of that product.  Existing investments in equipment, infrastructure, etc. made to serve the major segments often can be adapted (customized!) and thus leveraged on an individual per-customer—or, at least, per-customer-segment—basis, thereby enabling a company to enter these peripheral markets and market segments.

Opportunities that once would have been written off as marginal, at best, suddenly become major revenue streams—because of technology!

Wells Fargo, the largest provider of Internet banking, already allows customers to apply for a home-equity loan over the Internet.  Within three seconds of the customer’s application submission, Wells Fargo returns with the decision on a loan structured specifically for that customer.  A wide range of behind-the-scenes technology contributes to making this possible—including real-time links to credit bureaus, databases with checking-account histories and property values, and software that can do cash-flow analysis.  With a few pieces of customized information from the loan seeker, the software makes a quick decision.

Wells Fargo now has implemented similar procedures and software in its small-business lending unit.  Previously, according to vice chairman Terri Dial, Wells Fargo turned away many qualified small businesses—Wells Fargo could not justify the time and resources spent on the credit analysis for these loans versus the revenue they would generate.

Now the company can collect key details from applicants, customize a loan, and approve or deny credit in four hours—down from the four days the process once required.  In some categories that Wells Fargo once categorically ignored, loan approvals now are up as much as 50%.  Dial has reached the conclusion:  "You either invest in the technology or get out of that line of business."
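
A minimal sketch of this style of automated decisioning follows; the thresholds, parameters, and outcomes are invented for illustration and are not Wells Fargo's actual criteria.

    # Hypothetical, simplified loan-decisioning rules; not Wells Fargo's actual criteria.
    def decide_home_equity_loan(credit_score, property_value, mortgage_balance,
                                monthly_cash_flow, requested_amount):
        equity = property_value - mortgage_balance
        if credit_score < 620:
            return "declined"
        if requested_amount > 0.8 * equity:
            return "declined"                      # keep loan-to-value within a limit
        if monthly_cash_flow < 0.02 * requested_amount:
            return "referred to a loan officer"    # marginal cases still get human review
        return "approved"

    # In practice, realtime links to credit bureaus, checking-account histories,
    # and property-value databases would supply these inputs.
    print(decide_home_equity_loan(710, 180_000.0, 90_000.0, 2_500.0, 40_000.0))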

In the future, those once marginal business opportunities will become strategic to every business's operation—as the previously explained "Other" segment becomes an increasingly larger portion of our markets.

The middleman’s new role—customer agent

Several writers—such as Donald Tapscott in his book, The Digital Economy—have predicted that the Internet-based economy would disintermediate many middlemen from the economic value chain.  Their logic followed from the observation that the Internet is able to bring the customer into direct contact with the sources of that product.

The problem with their conclusion is that it is based on a mass production-centric economic model.  As noted previously, this model is focused on products.  So, their model of the middleman—be it an automobile dealership, a department store, etc.—is focused on products.  Any business operating in this mode, with this focus, certainly is at risk.

Fortunately, the new mass customization-centric economic model also supports a middleman function.  In this model, the middleman is focused on the customer being served—he becomes the owner of the relationship!

In the former model, the middleman is focused on bringing the manufacturer’s products to various customers.  In the latter model, the middleman is focused on assisting the customer in finding, analyzing, and purchasing various products and services.  Frank Shlier, at the Gartner Group, envisions this disintermediary in the new role of a “trusted adviser" who assists his clients in sifting through the thousands of directly accessible choices available to them.

The Internet is in a unique position to facilitate the livelihood of such disintermediaries.  Interestingly enough, however, most websites have not yet made this discovery.  In the Fortune magazine article, Tapan Bhat, the exec who oversees quicken.com, makes the observation:

"The Web is probably the medium most attuned to customization, yet so many sites are centered on the company instead of on the individual."

Many companies have invested heavily to develop websites that put the best spin possible on their products.  The advertising business has quickly moved to embrace the web's ability to bring a company's logo, etc. before the viewer's eyes.

The Internet's ability to personalize products and services with pinpoint precision is shaking the very foundations of modern-day commerce.  It heralds wrenching change for how manufacturers, distributors, and retailers will be organized and function. 

The situation for one-to-one marketing was recently stated in the Business Week article,[48] “NOW IT'S YOUR WEB.”

Today, most companies organize themselves by products: Product managers are the basic drivers for marketing.  In the future, companies instead will have customer managers, predicts Martha Rogers, co-author of The One to One Future: Building Relationships One Customer at a Time and a professor at Duke University.  Their job: Make each customer as profitable as possible by crafting products and services to individual needs.

Companies such as AOL, InfoSeek, Yahoo, etc. have been busy building customer relationships.  The e-commerce community has discovered the value of portals, hubs, and home bases.  Jesse Berst, Editorial Director at ZDNet, described these in an article, “What's Next After Portals?”[49]

A portal is a gateway that passes you through to other destinations.  It's an on-ramp, if you will. ... Portals will be enormously important for a long time. … Portals are general interest—they help you find anything, anywhere.  They don't have a focus.

A hub is a central position from which everything radiates.  It's more like a railway station than an on-ramp. … a hub becomes the focus of your activities, not just a pass-through.

Hubs are more narrowly organized.  To succeed as a hub, a site must surround itself with content, commerce and community appropriate to one particular audience. … Hubs are arising around all sorts of other interests too—hobbies, professions, issues, health problems, life situations.  Indeed, the best portals are halfway to the hub idea already, via their special-interest channels.

A home base is where you hang out between forays.  It's your headquarters, the place to which you return.  The first early experiments with the home base idea are the personalized start pages now being pioneered by the portals.  You use them to gather everything you need onto one page.

… Web users will gravitate to home bases with lots of "comforts"—lots of services, in other words.  Email, shopping bots, customized news, calendars and virtual offices are a few of the early experiments.  AOL is evolving into a collection of home bases for consumer users.  CompuServe and Netcenter want to be a home base for business users.  Geocities and Tripod could go this way if they get it figured out in time.

In the Fortune article, Pehong Chen, CEO of the Internet software outfit BroadVision, describes the new Internet intermediary's role:

"The Nirvana is that you are so close to your customers, you can satisfy all their needs.  Even if you don't make the item yourself, you own the relationship."

Amazon.com, the well-known Internet bookstore, currently services over three million such relationships.  It has been selling books online and now is moving into music, with videos probably next.  Every time someone buys a book on its Website, Amazon.com learns more about the customer’s tastes.  This knowledge then is used to suggest other titles that customer might enjoy.

The more Amazon.com learns, the better it serves its customers; the better it serves its customers, the more loyal they become.  About 60% are repeat buyers.
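
A toy sketch of the "customers who bought this also bought" style of suggestion described above appears below; the purchase data and scoring are invented for illustration and do not represent Amazon.com's actual engine.

    from collections import Counter

    # Hypothetical purchase histories; each set lists titles bought by one customer.
    purchase_histories = [
        {"Mass Customization", "The Digital Economy", "Being Digital"},
        {"Mass Customization", "The One to One Future"},
        {"The Digital Economy", "The One to One Future", "Being Digital"},
    ]

    def suggest_titles(customer_titles, top_n=2):
        """Suggest titles most often bought alongside what this customer already owns."""
        tally = Counter()
        for history in purchase_histories:
            if history & customer_titles:          # shares at least one title
                tally.update(history - customer_titles)
        return [title for title, _ in tally.most_common(top_n)]

    print(suggest_titles({"Mass Customization"}))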

Wal-Mart recently has filed a lawsuit against Amazon.com, Kleiner Perkins Caufield & Byers, Drugstore.com, and others to protect its expertise in customer management, as reported in the news.[50]

Wal-Mart is world renowned for its expertise in supply-chain management, and in the collection and analysis of customer data regarding who buys what, where, when, how, and why.  The ability to characterize the customer's needs in realtime will become increasingly critical in the age of mass customization.

Customer-management—keep them coming back

This section looks at how mass customization now impacts customer loyalty.  One important—some would say, unintended—consequence of the wide availability of mass customization is the ease with which the customer can turn, almost effortlessly, to a competitor's products and services.  The effectiveness of a customer relationship then is measured by how often the customer returns, versus seeking some other source to meet their needs.

This increased ease with which a customer can leave must be offset by a correspondingly increased ease—and justification—for the customer to stay.  Key to this is the ease with which a company can interact with its customers—on their terms, during their hours, in direct response to their issues, etc.  As GTE has phrased this concept, the goal is “being easy to do business with.”

One measure of how effectively a customer relationship is being cultivated is how often the customer returns—versus seeking some other source to meet the need.[51]  Martha Rogers and Don Peppers described the issue:

“The litmus test of mass customization comes down to one question.  Is it easier for me to buy the next one?"

Improved customer service is often related to changes in a company's internal processes, which can lead to greater efficiency and cost savings for the service provider.  Interestingly, paybacks in greater efficiency and cost savings are rated as secondary.  With respect to ROI—return on investment—in customer-management technology, only 41% of the respondents to the Information Week Research survey consider ROI to be "very significant."

In contrast, IT managers identified:  [1] improved customer satisfaction and [2] quicker response to customer inquiries as being the top two benefits of investing in customer-management tools.  Companies are finding that technology enables a number of unique ways to keep customers coming back.

Mobil is installing a new wireless application, called Speedpass, in gas stations across the country to reduce the time customers spend at the pump.  The application automatically reads information from a customer ID tag located on a windshield, or from the customer’s key chain—for those instances when he is not in his usual vehicle.  According to Joe Giordano, manager of marketing technology for Mobil:

"There is no thinking required to use this. … We want to bring customers back to the super-simple good old days, where you can wave and say, 'Hey, Eddy, just put it on my account.’”

The Web also is emerging as a customer-service platform.  Many companies—such as Virtual Vineyards, which made its name selling wine over the Internet—are trying to distinguish themselves with improved follow-up and support services.  Virtual Vineyards ships packages via Federal Express or United Parcel Service then uses an automated system to check the shipping status of orders once an hour.

If a customer calls to check an order's status, the customer-service representative can instantly provide that information.  Once the shipper notifies the company that the package has been delivered, a follow-up E-mail message conveying that information is automatically generated to the customer.  This author personally has received this type of follow-up service when ordering a book from Amazon.com, CD’s from CD Connection, and stereo equipment from Crutchfield via the Internet.
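
A minimal sketch of such an automated follow-up loop is shown below; the carrier query and mail gateway are stand-in functions, not the actual Virtual Vineyards, FedEx, or UPS interfaces.

    import time

    def check_shipping_status(tracking_number):
        """Stand-in for a query to the carrier's tracking system (FedEx, UPS, etc.)."""
        return "delivered"                         # canned value so the sketch runs end-to-end

    def send_email(address, subject, body):
        """Stand-in for the company's outbound e-mail gateway."""
        print(f"To {address}: {subject} -- {body}")

    def follow_up_on_order(order):
        """Poll the carrier once an hour; notify the customer when delivery is confirmed."""
        while True:
            order["status"] = check_shipping_status(order["tracking_number"])
            if order["status"] == "delivered":     # service reps see order["status"] instantly, too
                send_email(order["email"], "Your order has arrived",
                           "Our carrier reports that your package was delivered today.")
                break
            time.sleep(3600)

    follow_up_on_order({"tracking_number": "1Z999", "email": "customer@example.com"})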

These companies are continuing to explore other ways that would improve service further.  One possibility is the deployment of artificial intelligence-based search engines to answer E-mail queries in the off hours.  According to Virtual Vineyards’ chief operating officer Robert Olson, "People are coming up with questions at 10 at night, and they're ready to buy at that point.”

Companies such as Novus Services allow Discover Card holders to check their records online.  Recognized by J.D. Power for its customer satisfaction, Novus lets customers access via Web browsers the same account information that service representatives are able to see.  The next logical step is to support online statements—customized to the individual user.  Increasingly, says Novus CIO Floyd,

“Customers will want to manage their accounts without Novus' help.  Novus expects the self-service trend to take some of the pressure off its extensive call-center operations, which handle 40 million calls a year.”

BellSouth permits its customers to sign up for new services on the company's Web site.  According to BellSouth’s CIO Yingling, the Web-based self-service can be more accurate than the customer going through a call center, because the data is entered directly into the supporting back-end applications.  Improved accuracy of input data is not the only benefit.  CIO Yingling identifies additional benefits derived via the Internet approach:

"The Internet, with its ability to mix text, graphics, video, and voice, has potential that phones [and IVR systems] can't match.”

While assisting the customer with after-the-sale service support as just described above is important, other ways to keep-em-coming-back are now beginning to emerge in Internet-enabled solutions.  A later section of this document, The Internet's convergence is customer-focused, in the discussion of “Convergence,” presents the mass customization methodologies of customer relationship management (CRM) and enterprise relationship management (ERM), along with the use of recommendation engines.

Another approach that is easily implemented in the digitally-enabled economy is the capture of a customer’s favorites, or preferences, regardless of whether a sale or contract is executed at that time.  Again, the focus is on keeping-em-coming-back.  By storing knowledge about the customer’s interest in a given music artist, book, CD, etc., a company is enabling that customer to return at a later time to make that purchase, as well as contributing to the company’s knowledge-base of what is of interest to its customers—this one in particular.

Every encounter with the customer is an opportunity to conduct an implicit survey with that customer—the chance to better know them.  By all means, learn something about that customer, and about how the company might better serve them.  Furthermore, allowing the customer to explicitly designate an item as being of interest for possible future purchase provides one more way to reinforce that customer relationship.

The business-to-business version of mass customization

An important consequence of mass customization is the new business dynamic in which businesses must simultaneously compete and cooperate to meet the customer's highly personalized demands.

The Web is literally a supermall of mass customizers with products and services intended to address what Joseph Pine, the author of Mass Customization, has called customer sacrifice—the compromise we all make when we can't get exactly the product we want.  How many times have you heard a person say, "If only someone would create a [You fill in the blank] for me, I would buy it!"

The Internet intermediary is focused upon responding to this type of customer need—by identifying and filling it, and thereby eliminating one more customer’s sacrifice—their settling for less than what really was wanted.  The Web will make this kind of response the norm.

With the rapid increase in the number of new middlemen who customize orders for the masses, the ability of one company to differentiate itself from its competitors will become tougher than ever.  Responding to price cuts or quality improvements will continue to be important, but the key differentiator may be in how quickly a company can serve a customer—whatever it takes!  Artuframe.com’s CEO Bill Lederer says,

 "Mass customization is novel today.  It will be common tomorrow."

The consequence of this customer expectation—that a company will go to any lengths to satisfy it—is that the company often will not be able to satisfy the customer's demand, relative to what else is readily available, solely from those resources found within the company's immediate control.  Dynamic partnering—often with one's competitors—on a case-by-case basis to meet the customer's needs also will become the norm.

The Web creates a new competitive landscape, where several companies will temporarily connect to satisfy one customer's desires, then disband, then reconnect with other enterprises to satisfy a different order from a different customer.

A detailed discussion of the Internet-enabled technologies that make this possible, along with examples of how such are being successfully leveraged, is presented later in the section Technology’s new role—key enabler of the digital economy, as part of the discussion of “Convergence.”

According to Matthew Sigman, an executive at R.R. Donnelley & Sons, whose digital publishing business prints textbooks customized by individual college professors:

"The challenge is that if you are making units of one, your margin for error is zero."

For example, a returned custom-fit suit has little rework value.  The cost to rework it—whether for that customer or for resale to someone else—could be greater than starting from scratch with new raw materials that can be processed in a computerized factory.  More importantly, that customer's future business may be lost forever!

My personal experience with telecomm-based mass customization

I now will share a brief anecdotal experience that ties together the above thoughts on mass customization.  Many people know me as a technology geek—I have installed my own LAN, key system, and video distribution system in my home; I use four PCS CDMA-based cell phones; and I have had a cable modem since January 1999.

When I moved to the Dallas/Ft Worth area in 1991, I contacted Southwestern Bell (Yes, the realtor deceived me—I thought I would be in GTE territory!) about network-based voicemail.  For two years, all they could provide was separate—not integrated—boxes on each of my lines.  This was unacceptable—who wants to check for three stutter dial tones, etc.?

In 1993 (I regularly would call back to see if I might inadvertently find someone more knowledgeable), I was informed that a new switch/voicemail upgrade was scheduled for deployment.  I would still be given one mailbox per line, plus a collector box to which all the others would dump.  Again, I was not interested in purchasing N+1 mailboxes when I effectively would have only one.

Fortunately, I had finally made contact with SBC’s voicemail engineer located in St. Louis.  We determined that I did not really need a hunt-group after all—based on consideration of what features and service behavior I actually wanted, as opposed to the mix of services SBC offered (saving me about $3 per line).  Instead of using a hunt-group, we configured my forward-on-busy and my forward-on-no-answer to produce the desired effect—by using only one mailbox, and NO switch/voicemail upgrade for the switch!  My cost (for all other features) actually went down, when I had my feature-set reconfigured to support voicemail.

That engineer probably received the SBC equivalent of GTE's Warner award for that effort.  There is no telling how many switches no longer needed the proposed upgrade.

Today, the telephone industry has switches with literally hundreds of features that are used ineffectively by the customer, or not used at all.  There seems to be a disconnect (no pun intended) between what functionality the customer wants and what functionality the switch vendors, telco product managers, etc. have chosen to mass-produce in their pre-packaged solutions.  We have to do at least as well as the automobile industry has done in reducing our actual feature inventory while delivering more customization in the solutions offered to our customers.

Convergence

Donald Tapscott in his book, The Digital Economy, described twelve major trends that would characterize the new digital economy.  The analysis in the previous major section explained how mass customization has transformed Mr. Tapscott’s disintermediation—the sixth trend on his list—into one of re-intermediation—where the intermediary now is customer focused, rather than product focused.

The seventh trend on Don Tapscott’s trend list is a concept that could be termed appropriately industrial convergence:

In the new economy, the dominant economic sector is being created by three converging industries (communications, computing and content-producing industries) that, in turn, provide the infrastructure for wealth creation by all sectors.

Don Tapscott further predicts that, of these three converging industries, content will be king:

Computer hardware and communications bandwidth are both becoming commodities.  The profit in the new sector is moving to content because that’s where value is created for customers, not in boxes or transmission.

Again, Mr. Tapscott has overlooked the importance of the customer in his analysis of the new digital economy.  He recognized that the boxes and transmission components of convergence are subject to commoditization—via the forces of mass production.  He failed to recognize that content also is subject to commoditization—via the forces of mass customization.

The ultimate focal point of convergence is the customer.  The customer figures strategically into the core of all the sub-themes that define the convergence now in progress—as the following discussion will clarify.

How the customer figures strategically into the convergence equation is typified by the example of Microsoft’s recently announced purchase of LinkExchange, one of the new customer-focused intermediary companies on the Internet.  LinkExchange assists its membership of primarily small and medium-sized Web-based businesses in the placement of targeted ads and in selling products.  Member Web sites trade free ad banners among themselves—under LinkExchange’s coordination.  They also permit the LinkExchange to sell additional ad space on their pages to bigger advertisers.

An analysis of the deeper significance of this Microsoft purchase is provided by the article, “Why Microsoft bought LinkExchange.”[52]

General-content plays on the Web have been generally unsuccessful, a lesson that most online publishers have learned by now.

The software giant has learned that establishing long-term customer relationships on the Web is far more lucrative than just creating content.

The previous section on mass customization also holds the secret to convergence.  Recall that Joseph Pine, author of Mass Customization, has quite succinctly characterized what mass customized products and services have in common:  "Anything you can digitize, you can customize.”

A similar statement can be made regarding convergence!

 Anything that can be digitized, can and will be converged.

The phenomenon of convergence is now occurring in our economy at all levels and in all dimensions—technology, industry, services, markets, etc.—from components and devices, to systems, to infrastructure, to services, to support functions, to business models, etc.  Interestingly enough, the forces that have enabled and have driven mass customization are the same forces now enabling and driving convergence.

Digital technology is the enabler of convergence, and the customer’s demand that he have his way is the driver.

As previously noted, with each increase in the number of new middlemen who customize orders for the masses, differentiating one's company from its competitors will become increasingly difficult.  Responding to price cuts or quality improvements will continue to be important, but the key differentiator may be how quickly a company is able to serve a customer—whatever it takes!  According to Artuframe.com's CEO Bill Lederer, "Mass customization is novel today.  It will be common tomorrow."

The consequence of the new customer expectation—that a company will go to any means to satisfy the customer—is the driving force behind convergence.  Increasingly, a given company will not be able to satisfy a customer's demand solely from those resources found within the company's immediate control.

The need for dynamic partnering—often with one’s competitors—on a case-by-case basis to meet each customer’s needs will become the norm.  A new competitive landscape is now beginning to emerge—one in which several companies will temporarily connect to satisfy one customer's desires, then disband, then reconnect with other enterprises to satisfy a different order from a different customer.

Convergence—of processes, of operations, of communications, of content, of supply-chain management, of process management, of marketing, etc.—will be what enables such dynamic partnering to occur—transparently to the customer, and seamlessly among the partners.

The sub-sections that follow examine in greater detail how convergence is being achieved in the digital economy, and how some old metaphors and paradigms necessarily must be adapted as a consequence.

The transition to a mass customization economy—an economy of customer-focused intermediaries—demands that partnering relationships be much more flexible and dynamic, and that support be provided for much shorter product and service life-cycles.  In the extreme—this means providing support for one-of-a-kind, on-demand products and services.  The achievement of this strategic goal is the impetus behind all the efforts towards convergence.

In the past, partnering relationships have been product-focused and fairly static—changing little over time.  The designation of a few strategic partnerships was considered the best one could do.  The life-cycle for a given product or service—from its conceptualization to its development, and through its procurement, support, etc.—has traditionally been measured in years.  These phenomena are all consequences of the mass production economy that has dominated the twentieth century—an economy of product and service-focused intermediaries.

Now, those relationships that have persistence—and so are strategic—are those that exist with the customer—not those with the product or service offered to the customer—possibly on a one-time-only basis.

The customer’s needs—or at least his demands—are constantly changing.  The new customer-focused intermediary must constantly adapt and enhance existing products and services, as well as introduce entirely new ones to continue to please the customer.

This is another consequence of the feature-creep paradigm previously described in one of the sections on appliancization.  Understanding the customer is now much more than simply taking orders for current product or service offerings.

Examples of this paradigm shift to the creation of the customer-focused intermediary are readily found in today’s digital economy.  As previously explained in the section on appliancization, the two forces of mass production and mass customization are actually quite complementary.

The megatrend of convergence is accomplished by the delicate balancing of the two forces of mass production and mass customization—together with balancing the pull of customer demand and the push of technology enablement.

The PC revolution—convergence typified presents the PC revolution as a microcosm of the digital economy—a microcosm where the megatrend of convergence has been in progress for some time now.  The delicate balance of mass production and mass customization is typified in the PC revolution.  The PC environment—a digital one—provides a perfect example of the observation that “anything that can be digitized, can and will be converged.”

The Internet—the epitome of convergence explains how the Internet is perhaps the best known epitome of how to create and to leverage an interoperability that accomplishes the delicate balancing of the forces of mass production and mass customization—together with balancing the pull of customer demand and the push of technology enablement.

The killer application of the Internet is end-users working with live data—only one webpage away—making (realtime) operational decisions.  Now, for the first time in the history of computing, business end-users are able to work with live data to make operational decisions.

The Internet’s convergence is customer-focused presents the enablement and fulfillment of the customer relationship as the driving purpose behind the convergence, which the Internet has enabled.  The strategic goal of this convergence is to make the right information available to the right person(s) at the right time at the right price, and in the right condition.

Moreover, the enablement of customer-focused decision making does not stop at a company’s information processing boundary.  The Internet is ideal for enabling the delivery—as well as the sharing and collaboration—of complicated information and analysis both with collaborators and directly to customers in a timely manner.

Convergence and supply-chains meet the Internet examines how the Internet, in particular, is transforming supply-chain management—one significant area in which the convergence of knowledge sharing and collaboration of decision-making is now demonstrating its strategic value.  CPFR (Collaborative Planning Forecasting and Replenishment) is turning traditional inventory management on its head.  The prospect of giving every partner a total, unified, realtime view of the big picture means that retailers can share—even delegate direct—responsibility of inventory management with their suppliers.

Unlocking the vast wealth of knowledge stored in enterprise resource planning (ERP) systems, Web sites, data warehouses, legacy mainframes, and client/server systems is at the top of the list for every major corporation today.  Furthermore, many companies are now moving to bring their disparate tools together using Web technologies to essentially create Enterprise Information Portals (EIP’s) that allow internal users—as well as their suppliers and customers—to access data stored in any one of their applications.

Convergence yields virtual corporations explains how the business principles upon which a company builds and conducts its business are being changed forever by convergence.  Corporations will not disappear, but they will become smaller, comprised of complicated webs of well-managed relationships with business partners that include not just suppliers, but also include customers, regulators, and even shareholders, employees, and competitors.

Convergence is now the preferred approach to achieving competitive efficiencies, while enhancing the ability to adapt—in realtime—to the customer’s ever-changing demands.  In particular, strategic dependence on technology—the critical enabler of customer personalization—is one of the distinguishing characteristics of the virtual corporation.

Whence the virtualization of a company? examines the process of corporate virtualization now transforming our economy.  It is not restricted only to the information-focused industry segments, such as the media industry.  Every segment of the economy—regardless of the nature of its end products and services—is increasingly impacted by information.

The flow of information associated with any given industry provides a natural basis, or starting-point, for the virtualization of that industry.  Everyone and everything in the virtual corporation must be connected—so that it may contribute to its fullest capability in realtime to both the long-term health as well as the immediate bottom-line of the company.

The new business imperative—embrace and extend is typified by the Internet, which is evolving so quickly and so extensively, in so many different directions.  The Internet—through its multitude of participants—is constantly introducing and embracing new and enhanced protocols, environments, communications metaphors, partnering arrangements, business models, etc.  These serve not only to extend and to enhance current capabilities—sometimes with entirely new metaphors—but more importantly, to meet the ever changing and expanding needs of this customer, the digital economy!

The virtual corporation in particular, and the digital economy in general must have global ubiquitous connectivity to survive and to thrive.  The technology-enabled opportunities derived as the result of embrace-and-extend convergence offer far greater value than the proprietary solutions they displace.

Technology’s new role—key enabler of the digital economy explains why the adoption of new technology increasingly now is seen as the preferred way to quickly, and cost effectively bring new innovation and increased performance and competitiveness to an enterprise’s operations—as well as to reduce its TCO’s and to improve its ROI’s.

Technology no longer functions as a stabilizing force that can be selectively applied by the incumbent company to preserve its installed base, or its status quo.  Rather, technology now provides the means for conducting aggressive economic warfare.  More importantly than ever before, each company must understand those technologies that could be used either by it or against it.

The synergistic value of technology convergence—killer apps have the potential to create far greater value than the currently closed, proprietary, go-it-alone solutions that are being displaced.  The resulting opportunities are not only in terms of what is enabled now, but even more so in terms of what becomes achievable through the synergy such interoperability fosters—the juxtaposition of opportunities that total connectivity enables.

While one may incrementally improve the efficiency with which one conducts an activity, real progress comes from making more effective use of one's resources—by enabling more meaningful things to be done—and thereby adding increased value and satisfaction for the customer.  The convergence of services that otherwise exist today in their own vertical silos—a convergence enabled as the direct consequence of technological convergence—is almost limitless.

The convergence of all networks—all networks lead to one is a theme central to the total connectivity that the convergence megatrend enables.  This convergence encompasses all networks—be they in the home, in the office, in the automobile, over the neighborhood, across the country, or around the world—be they copper-based, optics-based, wireless-based, or some combination.

Metcalfe's Law—first proposed by Robert Metcalfe of Ethernet fame, and the founder of 3Com Corporation—values the utility of a network as the square of the number of its users.  The meteoric rise in the number of Internet participants—from individual users to mega-corporations—attests to the validity of this law.  Everyone—the operators, their suppliers, and most of all, the customer—has a vested interest in the outcome.
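
Expressed as a simple formula (this is one common reading of the law, with k an arbitrary proportionality constant and n the number of users):

    V(n) = k\,n^{2}, \qquad \frac{V(10n)}{V(n)} = \frac{k\,(10n)^{2}}{k\,n^{2}} = 100

On this reading, a ten-fold growth in users yields a hundred-fold growth in the network's utility.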

Whence network convergence—how will it happen? examines the breadth of initiatives now in progress, or proposed, for achieving this convergence of all networks—one universal converged network where every device, application, etc. has realtime access to whatever resources it needs.  Efforts on all fronts—in the home, at work, over town, across the country, around the world—are now well underway to satisfy the demand for converged solutions.

Companies from all sectors of the digital economy—the telecommunications, multimedia, and information technology sectors—are acting upon the opportunities enabled by technological advances.  The impetus is not only to enhance their traditional services, but as importantly to branch out into new activities.  In particular, they are pursuing cross-product and cross-platform development as well as cross-sector share-holding.  Every week, some company, consortium, standards body, or other birds-of-a-feather group introduces another convergence initiative.

The new converged PSTN looks in particular at the efforts involving the legacy PSTN.  At the lowest levels are arguments over the appropriate combination of a connectionless IP packet infrastructure versus the small-cell, circuit-oriented ATM infrastructure.  Should one topology be overlaid on the other?  Can both coexist at Layer 2?  What about SONET?  Various proposals have been offered, and various approaches now are being trialed.

The long-term prospect—especially with the third-world’s adoption of wireless technology—is that most of the world’s voice and data traffic can be expected to originate and terminate on wireless devices.  The backhaul network should be prepared to match the performance and efficiency requirements that the wireless environment places on carrying voice and data.

The new converged home network is perhaps the last frontier for the convergence of voice and data connectivity—wireline and wireless, coax and twisted-pair.  A number of organizations have formed to work at defining the infrastructure of the networked home.  Each group approaches the networking of the home from its own particular perspective—the wireless industry, the appliance industry, the multimedia industry, etc.

Network connectivity has become a mantra for consumer companies.  Their objective is to build a home network infrastructure so that a newly purchased digital consumer appliance is no longer just another standalone box, irrelevant to the rest of the systems.  Connectivity—or distributed computing power on the home network—should breathe new life, new value and new capabilities into home digital consumer electronics.

The PC revolution—convergence typified

The PC revolution may be viewed as a microcosm of the digital economy and convergence.  The delicate balance of mass production and mass customization is typified in the PC revolution.  The PC environment—a digital one—provides a perfect example of the observation that “anything that can be digitized, can and will be converged.”

On the one hand, no two PC’s necessarily are alike—in terms of installed hardware, software, or how they are configured to use that combination of hardware and software.  Recall that the “P” in PC stands for personal!  On the other hand, the components—hardware and software—from which the total PC environment is defined have been designed and implemented both for mass production and distribution, as well as to interoperate in a plug-n-play fashion—transparently to the customer, and seamlessly with each other.

Microsoft with Windows has become the de facto intermediary—manager of the window (pardon the pun) on the information world which the PC provides.  Although the functioning of the PC is much more powerful and complex than the ASCII text-based terminals of a few years ago, much of this complexity is hidden (mediated) by the Windows environment.

At the same time, many forms of customer-focused assistance have been provided—from the drag-n-drop metaphor of the graphical user interface, to the plug-n-play metaphor of hardware and software components, to the Wizards metaphor for providing on-line help for hardware and application installation, setup, and execution, etc.

Microsoft—regardless of what one may otherwise think of their business practices (i.e., as witnessed by the current Microsoft-DOJ lawsuit)—exemplifies the spirit of the customer-focused intermediary!  Microsoft is continuously introducing new features to existing applications and capabilities—as well as new API’s (application programming interface) for new capabilities.

Microsoft is practicing the principles of the appliancization megatrend to their fullest—provide that must-have feature that justifies the customer’s decision to upgrade.  Even further reaching, Microsoft’s improvements are targeted not only for currently pervasive PC platforms (the desktop and laptop), but also for new potential platforms—such as the PDA, the cell-phone, the automobile PC, the embedded appliance, etc.

Today, many people think of Microsoft as a software company—a developer of operating systems, middleware, developer kits, and applications.  Technically, this is a true statement—but Microsoft is much more.  One needs simply to consider Microsoft’s motto:  “Where Do You Want To Go Today?”  This motto suggests nothing of software, hardware, or PC’s.  Rather, it is focused on the customer—what is the customer’s desire, his wish list—NOW! 

Microsoft certainly is actively diversifying into new technology areas that leverage its current technology strengths in the PC arena—witness Microsoft's recent joint announcement[53] with the cellular phone vendor Qualcomm.

However, Microsoft also has been actively leveraging its way into many new customer-focused intermediary roles.  These include its general MSN Internet site, its travel agency site, Expedia, and its automobile site, Car Point, as well as its smartcard and banking-related efforts—such as bill payment.

The introduction to this section on convergence began with a quote regarding one such recent Microsoft endeavor—its purchase of LinkExchange:

The software giant has learned that establishing long-term customer relationships on the Web is far more lucrative than just creating content.

Technology does not sit still.  The day surely will come when much of what now is done by software-based applications executed in general purpose PC’s under the control of a software-based operating system is done elsewhere by other means.  When such functionality has migrated to dedicated silicon (hardware computer chips), been integrated into IA’s (information appliances), and otherwise is made transparently available by the Network, Microsoft intends to be the customer’s intermediary—his gateway to the world!  “Where Do You Want To Go Today?”

The Internet—the epitome of convergence

Until recently, Microsoft has been able to leverage its strategic position as owner of the PC environment to unilaterally dictate the rules of interoperability and convergence within that environment—by its management of API specifications and its incorporation—or appropriation—of new features into OS releases.  In the general global digital economy—which is expected to encompass much more convergence than has been achieved in the current PC environment—such unilateral domination by a single company—or group of companies—is not to be the case.

The best known example of how to create and to leverage an interoperability that accomplishes the delicate balancing of the forces of mass production and mass customization—together with balancing the pull of customer demand and the push of technology enablement—is the Internet!

The Internet’s customer is the global digital economy—with needs and demands that are in a constant state of flux.  George Lawton recently presented an analysis[54] of how Intranet systems—Web-based, in particular—are transforming client-server designs for mission-critical applications.

For decades, companies have sought a universal application interface that would facilitate cross-platform deployment throughout every business enterprise application.  It was like searching for a Rosetta stone to give data new purpose and unify islands of information in sales, finance, marketing, production, customer service, accounts receivable—all interdependent functions of the modern enterprise.

In the past, centralized data processing support of the corporate user has been characterized by providing business users with access to corporate data—only through limited dumb terminal connections to mainframe applications.  In spite of the democratization of corporate computing achieved by the PC in the 1980’s, the centralized data planners continued to maintain a tight control—both in policy and implementation—on operational systems which prevented departmental PC’s and networks from gaining access to live data.

During the 1990’s, client-server systems emerged as the model for distributed computing.  This architecture did pave the way for enterprise resource planning (ERP) systems to become a dominant force for data management and analysis.  However, the complexity and expense of the client software often limited deployment of these applications to specialized analysts.  Furthermore, such solutions still were not in realtime, and still were not available on a general company-wide basis.

Now, with the rapid development of Internet technologies, profound implications for the architecture of business applications have begun to manifest themselves.  According to Perry Mizota, vice president of marketing at Sagent Technology, a provider of data mart systems:

"The biggest contrast between client-server and the Web is that you can deploy access to an extremely broad set of users in a very cost-effective manner from a hardware and administrative perspective."

The killer application is end-users working with live data, making (realtime) operational decisions.  Now, for the first time in the history of computing, business end-users are able to work with live data to make operational decisions.

Using Web-based business intelligence tools, companies now are harvesting the benefits of information they already have in-house to direct strategies as diverse as product branding, supply chain logistics, and customer service.  The focus is not so much on producing new information, as on providing timely access to what information already is available.

Continuing to quote Mr. Mizota:

"In the past if you wanted to provide access to data, you would have to make sure the clients had a particular piece of software on the desktop.  It is not necessarily new information.  It could be the same information you have already been giving away.  It is the timeliness that is important."

For example, ERP transaction processing programs that run across an enterprise now are being migrated to the Web because of their ease of access and use, reconciling real-time e-commerce with manufacturing and inventory control on an event-driven basis.

The huge advantage of Web systems over traditional client-server applications is their availability.  Because of the ubiquity of the Web browser, all levels of the organization are simultaneously affected by this resonant wave of knowledge.

Furthermore, this trend towards webified application delivery has tremendous ramifications for forward compatibility.  An application that can be accessed with a browser will continue to work in the future—regardless of upgrades and changes to the server.

The ease of use represented by the Internet in general and by the Web in particular is not restricted to human interface considerations.  This ease of use also applies to the much broader consideration of systems integration and interoperability.

Once an application has been made universally available for human consumption—via the Web-based user interface—it also can be easily accessed and used by other servers and applications.  This unintended consequence of the webification of knowledge and information management is of immense impact.

The same information and knowledge available to an end-user via a webified interface also is readily available to any mechanized intelligent process, agent, etc.—which enables further value-adding use of it.

For example, the customer interface for services such as Federal Express and United Parcel Service package tracking is typically accessed via a Web browser.  In particular, companies such as webMethods have developed tools for encapsulating these interfaces for use within other ERP applications.  This reduces the need to re-implement the backend functionality of these web-enabled applications.

With this kind of integration, Internet technologies thus provide a natural way to extend a company’s existing hardware and software infrastructures.  Rather than replace its established applications, a company can use Intranets to tie them together via middleware and multi-tier applications—say, using the Common Object Request Broker Architecture (CORBA).
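
To make this kind of encapsulation concrete, the following Python fragment is a minimal sketch that wraps a hypothetical web-based package-tracking page behind a simple function that any other internal application could call.  The URL, the page format, and the function name are assumptions invented for this illustration; the sketch is not drawn from webMethods or from any particular vendor's toolkit.

import re
import urllib.request

TRACKING_URL = "http://tracking.example.com/status?id="   # hypothetical carrier URL

def get_package_status(tracking_id):
    # Fetch the carrier's human-oriented tracking page and return its status text,
    # so that an ERP system or order-status screen can reuse the web interface
    # without re-implementing the carrier's back-end functionality.
    with urllib.request.urlopen(TRACKING_URL + tracking_id) as response:
        page = response.read().decode("utf-8", errors="replace")
    match = re.search(r"Status:\s*([^<\r\n]+)", page)   # assumes the page shows "Status: In transit"
    return match.group(1).strip() if match else "unknown"

if __name__ == "__main__":
    print(get_package_status("1Z999EXAMPLE"))

Once such a wrapper exists, the same status information can flow into middleware and multi-tier applications just as readily as it flows to a browser.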

The Internet’s convergence is customer-focused

The strategic goal or objective for the Webification of a business’s information processing must be more than simply that of making otherwise disparate applications function together.  The Internet provides far more functionality than just another way to access information from disparate sources in realtime.

Sukan Makmuri, vice president of interactive banking at BankAmerica (formerly Bank of America in San Francisco) was quoted in the Knowledge Management article:

“We need to develop a middle tier and make the customer the center of the universe so that we can serve the customer better.  The customer should be served by this middle tier that knows this relationship.”

Better serving the customer relationship is the driving purpose behind the convergence that the Internet has enabled.  The strategic goal of this convergence is to make the right information available to the right person(s) at the right time, at the right price, and in the right condition.

One of the great benefits of Web applications is the ability to push relevant information to people and processes at the time it is needed, cutting through the glut of data that could otherwise overwhelm them.  The Internet provides the standards-based infrastructure that enables—for example—networks of intelligent agents to actively or passively collect data, and to prepare it for JIT (just-in-time) delivery to make the right information available to the right person(s) at the right time.

Push technology provider MicroStrategy has termed this approach active data warehousing because the warehouse actively sends data where needed, as opposed to the traditional passive data warehousing model in which the user had to make an appropriate query to retrieve the relevant data.  Michael Saylor, CEO of MicroStrategy, has said:

“Just as a fire chief would not have time to scan the temperatures of every house before identifying a dangerous blaze, decision-makers need to be notified of corporate 'fires' while continuing their day-to-day activities.”

“Similarly the corporate end-user should not have to continually monitor the warehouse.  Active data warehousing automates the flow of necessary information as soon as it is available, providing decision-makers with the essential means to conduct their business.”
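
A minimal sketch of this active, push-oriented style follows.  A monitoring process scans a warehouse metric and notifies a decision-maker only when an exception condition occurs; the threshold, data source, and notification channel are invented for illustration and are not taken from MicroStrategy's product.

import random
import time

INVENTORY_FLOOR = 100  # hypothetical reorder threshold

def read_inventory_level(warehouse):
    # Stand-in for a query against the data warehouse; returns units on hand.
    return random.randint(0, 500)

def notify(recipient, message):
    # Stand-in for a push channel (e-mail, pager, portal alert).
    print(f"ALERT to {recipient}: {message}")

def watch(warehouses, recipient, poll_seconds=5, cycles=3):
    # Actively scan the warehouse and push alerts, instead of waiting for ad hoc queries.
    for _ in range(cycles):
        for warehouse in warehouses:
            level = read_inventory_level(warehouse)
            if level < INVENTORY_FLOOR:
                notify(recipient, f"{warehouse} inventory down to {level} units")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch(["Dallas", "Tampa"], "supply-chain-manager@example.com", poll_seconds=1, cycles=2)

The point of the sketch is the inversion of control: the warehouse-side process decides when a piece of information is worth the decision-maker's attention.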

The ability of Intranets to move information where it is required—on a demand basis—facilitates faster business cycles.  A company’s ability to respond to the customer’s ever changing needs and demands in a timely manner—often in realtime—is greatly enhanced.

The Gartner Group has introduced the notion of the Zero Latency Enterprise (ZLE), a set of technologies that allows an enterprise to reduce the time between capturing information and making it actionable throughout the organization.  Quoted in the Knowledge Management article, Maureen Flemming, an analyst with the Gartner Group, has stated:

"It should not matter where information is stored, only how we get access to it. … The emerging trend is not only to get reports, but to get them in real-time or on an event-driven basis.  The challenge is how to structure content with enough relevance for people to get what they need."

This relevance—the presenting of information with sufficient, proper context to enable decision-making—is what transforms raw information into actionable knowledge.

Corporate Web-based applications are able to extend an organization’s relationships with its customers in new relevant ways.  Vernon Keenan, principal analyst with Keenan Vision, has said:

"The successful companies in knowledge management think like the customers and give information to customer-facing employees.  It helps them solve the needs of the customer better.  ...  The trick is to give the customer-facing worker access to the tools that let them take action."

The strategic benefit of this approach is the shifting of focus and purpose back to a decision-making process versus a data-gathering process.  In the long-term, it will provide more opportunities because our customers will know we are more actively listening to their needs.

Because of the convergence of interoperability that the Internet represents, this enablement of customer-focused decision making does not stop at a company’s information processing boundary.  The Internet is ideal for enabling the delivery—as well as the sharing and collaboration—of complicated information and analysis both with collaborators and directly to customers in a timely manner.

Not only can the customer access information made available statically on a company’s Website; more importantly, the customer can be given the ability to serve himself.  This includes far more than simply such obvious e-commerce activities as checking an account balance or the status of an order, or retrieving how-to instructions, etc.

As an example, Peace Computers has developed for utility providers a customer information system that allows corporate utility customers to log onto the power provider’s Web server and survey their power usage in realtime.  Each corporate customer is able to interact with its utility provider in realtime to optimize its usage of electricity.  As explained by Brian Peace, president of Peace Computers:

"The really great thing about the Internet is that it enables customers to access the information themselves.  They can see demand profiles and do trend analysis with our [the utility company’s] tools.  This is very important because one of the things you want to do as an industrial company is smooth out power usage so you don't have any peaks."

The natural generalization of this interaction model is for the corporate customer now to enable the utility company to interact with its customers’ own analyses of power demands.  The utility provider then would be better able to manage its generation of power—based on more complete knowledge of its customers’ dynamically changing power requirements.  Ultimately, the utility company and its customers have the ability to collaborate in realtime to their mutual benefit on the generation and usage of electricity.

The generalization of this principle—providing the customer the ability to serve himself—has ramifications well beyond even what the above example suggests.  The open source movement discussed later in the section Managing technological innovation is the adaptation of this principle to software, systems, and service design, development, and deployment.

The customer-focused nature of the Internet is perhaps best demonstrated by the mass personalization possible via the Web.  Joseph Pine, author of Mass Customization: The New Frontier in Business Competition,[55] captures the essence of this megatrend with the following statement:

"Electronic commerce is really customized commerce because you can deliver a different page to each person.  Anything you can digitize, you can customize, because once you've embedded it in a computer system you can customize it."

Increasingly, in the new digital economy, the Web well may be a company’s first contact with a prospective customer.  In fact, the Web could be a company’s only interface to the customer.

The technological backbone infrastructure supporting the management of customer contact is known as enterprise relationship management (ERM).[56]  This field includes products from Open Market, Epiphany Software, BroadVision, and others.

ERM products—which include a range of technologies from data collection to transaction systems—enhance the one-to-one relationship between customers and companies.  This represents the realization of the mass personalization previously described in the section More than an efficient production and delivery system.

Every day, more companies are able to mass-produce products customized for specific individuals, millions of them.  As explained by Steve Blank, vice president of marketing for Epiphany:

"The idea is to identify your most profitable customers, and then interact with them to establish a learning relationship.  The emergence of the Web makes this possible."

At the front end of mass customization are customer relationship management (CRM) products such as those from Calico Technology, which offers a "configurator" called Calico eSales.  This technology gives companies and customers alike easy access to data embedded in widespread enterprise applications.  Both Dell Computer and Gateway, competitors in the made-to-order PC market, use Calico's technologies to enable their Web site visitors to tap into their enterprise applications and configure a custom-built system.

The Internet takes things a step further by also allowing companies to strategically anticipate customer demands.  For the first time, companies have access to information about customers' buying patterns and preferences that they can analyze and use in real time.  Recommendation engines, for example, offer suggestions to customers based on what they've purchased in the past.
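
The logic behind such a recommendation engine can be sketched very simply.  The following fragment ranks items by how often they have been bought together with something the customer already owns; it is an illustrative co-purchase count with invented order data, not the algorithm used by Amazon or by any particular vendor.

from collections import Counter
from itertools import combinations

# Hypothetical order histories: each inner list is one customer's purchases.
ORDERS = [
    ["router", "cable", "modem"],
    ["router", "cable"],
    ["modem", "cable", "phone"],
    ["router", "phone"],
]

def co_purchase_counts(orders):
    # Count how often each pair of items appears in the same order.
    counts = Counter()
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            counts[(a, b)] += 1
    return counts

def recommend(purchased, orders, top_n=3):
    # Recommend the items most frequently co-purchased with what the customer already owns.
    counts = co_purchase_counts(orders)
    scores = Counter()
    for (a, b), n in counts.items():
        if a in purchased and b not in purchased:
            scores[b] += n
        elif b in purchased and a not in purchased:
            scores[a] += n
    return [item for item, _ in scores.most_common(top_n)]

if __name__ == "__main__":
    print(recommend({"router"}, ORDERS))   # 'cable' ranks first, since it is most often bought with a router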

Convergence and supply-chains meet the Internet

One significant area in which such convergence of knowledge sharing and collaboration of decision-making is now demonstrating its strategic value is in supply-chain management.

John Evan Frook has reported[57] on such convergence efforts in the area of supply-chain management.  Retailers—looking to slash $150 billion from the industry's supply chain through a Web standard for sharing inventory data—have released a newly published specification, dubbed Collaborative Planning Forecasting and Replenishment (CPFR) (http://www.cpfr.org/intro.html), which promises to free retailers and their suppliers from the guesswork in supply-chain management.

Today, suppliers replenish inventory based on forecasts and historical data that are collected in the course of doing business.  CPFR proposes to standardize the organization of that data and make it available to trading partners who could collaborate over the Internet.  Each partner—having access to live sales data in realtime—would be able to optimize its production and inventory in real time.  According to Robert Bruce, vice president of supply-chain management at Wal-Mart Stores:

"CPFR is a dynamic enabler that ties our customers, retailers, suppliers and their suppliers together. … It has a real value to the total value chain, supporting full supply-chain integration."

Pilot tests of CPFR currently are in progress with brand-name suppliers—including Nabisco Food, Kimberly-Clark Corp., Hewlett-Packard, Lucent Technologies Inc., Procter & Gamble Co. and Warner-Lambert Co.—as well as such retail giants as Kmart Corp., Circuit City Stores Inc. and Wal-Mart.

According to Carl Salnoske, IBM's general manager for electronic commerce:

"When you combine traditional forecasting tools with real-time collaboration capabilities, you get a lot more fine-tuning of the supply chain. … A retailer can now say, gee, because of El Nino, I need more umbrellas on the West Coast and fewer snow shovels in the East."

CPFR could turn traditional inventory management on its head.  The prospect of giving every partner a total, unified, realtime view of the big picture means that retailers can share responsibility of inventory management with their suppliers, who can then manage price and promotional changes more rapidly.

Contrast this opportunity for realtime collaboration with today’s status quo—non-realtime, catch-as-you-can collaboration—where each trading partner is able to see and to analyze only its own internal data assets, supplemented by non-realtime reports and analyses from others in the supply chain as they become available.  According to Ted Rybeck, chairman of the research firm Benchmarking Partners Inc., such secondhand data is prone to error, and necessarily results in costly inventory stuffing tactics to ensure adequate product availability.

A grass-roots effort at supply-chain optimization using the Internet is becoming commonplace, with supply-chain hubs forming around individual suppliers and buyers that are large enough to influence their partners to standardize on a specific set of technologies and business practices.  CPFR is an attempt to expand these grass-root efforts to involve entire industry segments, which is how many IT authorities expect such standards to emerge.  Already, industry-specific supply-chain initiatives are under way in the IT and automotive industries, among others.

George Lawton’s previously mentioned comparison of IT’s search for an interoperability solution to one of searching for the Rosetta stone has been taken quite literally by some important participants in e-commerce.  A recently formed global organization, RosettaNet, has dedicated itself to adopting and deploying open, common business interfaces for the IT industry and to advancing supply chain efficiency.

The RosettaNet managing board of executives includes representatives from American Express, IBM, Intel and Microsoft.  Execution partners include KPMG Consulting, which recently helped deploy RosettaNet's implementation methodology.  As explained by RosettaNet's CEO, Fadi Chehadé:

"In spite of the public network—the Internet—everyone is doing business with their trading partners on a proprietary basis. … We want to get the IT supply chain to agree on common business interfaces."

"RosettaNet builds consensus so that in the future we can get past the myth of e-commerce and create a supply chain in which everyone can do business more efficiently.  … The ultimate result will be a more satisfied customer."

Another augmentation of the supply-chain model that leverages the ubiquity of the Internet is the Enterprise Information Portal (EIP), as explained by the article, “Web opens enterprise portals.”[58]

Unlocking the vast wealth of knowledge stored in enterprise resource planning (ERP) systems, Web sites, data warehouses, legacy mainframes, and client/server systems is at the top of the list for every major corporation today. Many companies are using Web technologies to bring together their disparate tools and resources.  Essentially, they are creating EIP’s that allow not only their internal users, but also their suppliers and customers, to access data stored in any one of their applications.

As a concept borrowed from online applications, EIP is a data management strategy that provides a window into enterprise knowledge by bringing life to previously dormant data so it can be compared, analyzed, and shared by any user in the organization.  New uses and added value are being created from previously existing data that was compartmentalized—hidden from view and from reach.
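
In implementation terms, an EIP is essentially an aggregation layer.  The following sketch, with invented source names and data (no particular portal product is implied), gathers results from several back-end sources and assembles them into one personalized page for a user.

# Hypothetical back-end sources an enterprise portal might draw from.
def erp_orders(user):
    return [f"Open order #1042 owned by {user}"]

def warehouse_report(user):
    return ["Finished-goods inventory at 92% of plan"]

def web_site_stats(user):
    return ["Yesterday's site visits: 18,400"]

SOURCES = {"ERP": erp_orders, "Data warehouse": warehouse_report, "Web site": web_site_stats}

def build_portal_page(user, subscriptions):
    # Assemble one page from whichever sources the user has subscribed to.
    page = [f"Portal view for {user}"]
    for name in subscriptions:
        for line in SOURCES[name](user):
            page.append(f"  [{name}] {line}")
    return "\n".join(page)

if __name__ == "__main__":
    print(build_portal_page("jsmith", ["ERP", "Data warehouse"]))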

According to experts, those companies that are early implementers of EIP’s can earn a sizable competitive advantage, bolstered by lowered costs, increased sales, better deployment of resources, and internal productivity enhancements such as sharper performance analysis, market targeting, and forecasting.

EIP's are not restricted to use only within the company's intranet.  The EIP also can be a strategic benefit in the corporate extranet of suppliers and customers.  By creation of EIP’s, companies can extend the benefits gleaned inside the company to the outside.  They can cement customer and supplier relations, and coordinate workflow, collaboration, and transactions with other progressive companies.

Linking such companies through their EIP’s, according to the experts, puts into place a linchpin of broadly based automated, transactional Internet commerce—the big enchilada of the connected world.  According to Mansoor Zakaria, founder and CEO of 2Bridge, a maker of Web-based content aggregation servers and applications,

"To gain the benefits hoped for from the Internet, computing platform companies need a new class of software that serves internal and external customers seamlessly through a common gateway that provides personalization, publishing, and analysis—not just browsing and searching."

How great is this trend to embrace, to leverage, and even to extend the Internet’s power for convergence?  IDC (International Data Corporation) is in the process of releasing a series of knowledge management reports.[59]  Their research predicts that the corporate portal market will rapidly evolve beyond the enterprise information portals (EIP's) prominent in today's market.  Specifically, the development of collaborative portals in the corporate market will lead to enterprise knowledge portals (EKP's) that connect people, information, and processing capabilities in the same environment.  Gerry Murray, IDC's Director of Knowledge Management (KM) Technologies, has coined a new corollary to describe this development.

"The development of EKPs provides an important corollary to the idea that the network is the computer, and that corollary is the portal is the desktop.”

This will have a fundamental impact on how IT systems are implemented, the way customers spend their money on hardware, software, and services, and the very structure of the IT industry itself.

IDC has announced its release of the report, Sourcebook for Knowledge Superconductivity.  Portals are identified as the ideal medium for knowledge management.  However, EIPs are not currently seen as sufficient for the corporate world.  Rather, IDC identifies four points of evolution for corporate portals:

1.       Enterprise information portals (EIP) — provide personalized information to users on a subscription and query basis

2.       Enterprise collaborative portals (ECP) — provide virtual places for people to work together

3.       Enterprise expertise portals (EEP) — provide connections between people based on their abilities

4.       Enterprise knowledge portals (EKP) — provide all of the above and proactively deliver links to content and people that are relevant to what users are working on in real time

The first three of these are being developed today.  Currently, EIP’s are the most active segment.  Collaboration portals are developing rapidly, and the expertise portals are just now beginning to get the attention they deserve from developers.  Finally, while not available today, IDC expects EKPs to be a market reality later in 1999.

Convergence yields virtual corporations

In his book, The Digital Economy, Donald Tapscott described the new business entity, which he called the virtual corporation.  On first thought, a virtual company might refer to a company that exists only on paper—as a legal front, or facade, for something or someone else—this is the political interpretation.

A better interpretation of its meaning might be derived from that of virtual memory in the computer arena.  The term virtual memory refers to the abstraction of separating logical memory—memory as seen by the process—from physical memory—literal memory as addressed and accessed by the processor.

Similarly, a virtual corporation exhibits all the functionality and processes associated with a company—manufacturing, marketing, distribution—but its implementation or realization of these characteristics is by means outside the company proper.

The supply-chains described in the previous section are one aspect of the low-level realization of virtual corporations.  Within the third theme of his book—virtualization—Mr. Tapscott described a digital world where physical and tangible things become virtual, represented by scenarios created by the high-speed manipulation of digital information.  Mr. Tapscott provided the following formal definition of a virtual corporation:

“The conjunctional grouping, based on the Net [the Internet], of companies, individuals, and organizations to create a business.”

The Internet has already spawned many such companies—the simplest type being constituted by someone who has set up a webserver on the Internet.  As a personal example, in 1997, I purchased a computer monitor from such a virtual Internet-based company—located in New Jersey.  After making my selection via the vendor’s website, I called their 800-number to determine when the monitor would be delivered—as I planned to be out of town a few days from then.

I was pleasantly surprised to learn that I could expect next-day delivery—but was concerned that I would be paying for typically higher-priced next-day delivery service.  This was not the case: the monitor would be delivered the next day because it was coming directly to me from the manufacturer’s warehouse near DFW airport.

How could this be?  This Internet-based company had no warehouses from which to ship products—though it sold a multitude of computer-related items.  Rather, the company had access into their manufacturers’ and distributors’ warehousing and delivery systems via the Internet.  The monitor thus was transported once—directly to me—rather than twice—first to the intermediating dealer, and eventually to me.

Better known examples of the new Internet-enabled virtual corporation include the virtual bookstore, Amazon.com, recently featured in the Business Week article, “AMAZON.COM: the Wild World of E-Commerce.”[60]

Amazon has become cyberspace's biggest consumer merchant, with 4.5 million customers and an expected $540 million in sales this year—up from $148 million last year.  The market believes not only in the viability of Amazon, but also in its success—having recently awarded it with a market capitalization greater than that of Sears—one of the world’s largest traditional retailers!  Similarly, America Online (AOL) now has a larger capitalization than that of Disney!

In juxtaposition to the business models that characterize traditional retailers, Amazon has had relatively high initial costs for infrastructure such as computer systems and editorial staff—which partly explains its red ink today.  Unlike facilities-based traditional retailers, who must continually invest in new stores to hike revenues, Amazon can boost sales simply by enticing more people to visit its single online store.  Says Amazon.com’s Chief Financial Officer Joy Covey:  ''I don't think we could have grown a physical store base four times in one year.''

According to the Business Week article:

Amazon offers an easily searchable trove of 3.1 million titles—15 times more than any [bricks-and-mortar] bookstore on the planet and without the costly overhead of multimillion-dollar buildings and scads of store clerks.  That paves the way for each of its 1,600 employees to generate, on average, $375,000 in annual revenues—more than triple that of No. 1 bricks-and-mortar bookseller Barnes & Noble Inc.'s 27,000 employees.

The lack of bricks-and-mortar certainly is one distinguishing characteristic of the new Internet-based virtual corporation.  Other less obvious characteristics of virtual corporations, however, are more informative about and strategic to their evolution and functioning.  The virtual corporation is the epitome of Ronald Coase’s Law of Diminishing Firms that Nicholas Negroponte described in his Introduction to the book, Unleashing the Killer App—digital strategies for market dominance.[61]

As the market becomes more efficient, the size and organizational complexity of the modern industrial firm becomes uneconomic, since firms exist only to the extent that they reduce transaction costs more effectively.  Trends toward downsizing, outsourcing, and otherwise distributing activities away from centralized to decentralized management support this view.  These trends will only accelerate in the coming years.

A major consequence of these trends is a total rethinking—such as process re-engineering—of what constitutes an effective corporate infrastructure.  The mega-mergers that now are being formed are simply a consolidation of existing corporate entities—they are not a true reflection of what is to be expected long-term.  This artifact is similar to the occasional consolidations that occur during a long Bull market.  Rather, the Law of Diminishing Firms predicts:

Firms will not disappear, but they will become smaller, comprised of complicated webs of well-managed relationships with business partners that include customers, suppliers, regulators, and even shareholders, employees, and competitors.

Why is such a transformation unavoidable?  The mega-corporate infrastructure was perhaps optimal for a mass production era.  However, the rise in importance and inevitable preeminence of mass customization—together with the appliancization of technology—has stood the underpinning assumptions of the mega-corporate era on their head.

Convergence is now the preferred approach to achieving competitive efficiencies, while enhancing the ability to adapt—in realtime—to the customer’s ever-changing demands.  Furthermore, a strategic dependence on technology is one of the distinguishing characteristics of the virtual corporation.  In particular, technology is the critical enabler of its personalization of each customer.

Amazon's cutting-edge technology gives it several advantages.  By automatically analyzing past purchases of a customer, Amazon is able to make additional purchase recommendations customized to each buyer—one of many mass customization techniques that confound twentieth century mass marketing approaches.  Their strategy is to leverage technology to make shopping a friendly, frictionless, even fun experience that can take less time than finding a parking space at the mall.

Besides spurring more purchases, there's another huge bonus for Amazon:  It can gather instant feedback on customer preferences to divine what else [besides books] they might want to buy.  Such valuable information has proven formidably effective in capturing new markets online.

Furthermore, Amazon is extending its warm and fuzzy formula far beyond the bibliophile set with the debut of a video store, as well as an expanded gift shop—the clearest sign yet that Bezos aims to make Amazon the Net's premier shopping destination.  While it may appear as though the company is careening willy-nilly into new terrain, Amazon in fact is targeting areas its customers have already requested.  According to Amazon CEO Bezos:

''We want Amazon.com to be the right store for you as an individual. … If we have 4.5 million customers, we should have 4.5 million stores.''

This mass personalization—the personalization of each customer’s services and products—is accomplished in a number of ways.  What Mr. Bezos is leveraging is the ability of the Web to connect almost anyone with almost any product.  One meaning of connect certainly is to locate or find—which Amazon facilitates quite effectively.  The result is the ability to do things that could not be done in the physical world—such as sell three million books in a single store.

More importantly, Amazon is able to connect with the customer—that all-important customer relationship.  Amazon has created a sense of online community.  For example, customers are invited to post their own reviews of books; some 800,000 are now up on the web.  Amazon recently introduced the new service GiftClick, which allows customers to choose a gift and simply type in the recipient's E-mail address—Amazon then takes care of the rest.  What a way to realize word-of-mouth (or is it word-of-mouse?) advertising!

The virtualization of Amazon involves far more than simply the elimination of bricks-and-mortar.  Early on, Mr. Bezos offered other Web sites the opportunity to sell books related to their own visitors' interests—through a link to Amazon.  Their inducement: a cut of up to 15% of sales.  Now, Amazon has over 140,000 sites in its so-called Associates Program.

Examples such as Amazon are but the tip of the iceberg in the now rapid corporate virtualization of our increasingly digital economy.  The media-related industries—entertainment, newsmedia, bookstores, etc.—are fundamentally information-focused enterprises; hence, the adoption of the Internet as a means to their improvement is natural.

Consumer-oriented retail is certainly an area that is ripe for the Internet virtual corporate model of e-commerce—as witnessed by the entry of online versions of Barnes & Noble and Borders bookstores to this domain.

Whence the virtualization of a company?

The corporate virtualization now transforming our economy is not restricted to the information-focused industry segments, such as the media industry.  Every segment of the economy—regardless of the nature of its end products and services—is increasingly impacted by information.  The flow of information associated with any given industry provides a natural basis, or starting-point, for the virtualization of that industry.

Increasingly, companies from every industry segment are turning to the Internet to provide the information-related infrastructure for new approaches to solving their problems.  This breadth of industry segments includes everything from consumer merchandizing and information delivery—every newspaper, television network, etc. now has its own website—to hardcore manufacturing—such as the automotive and aerospace industries.

The transformation occurring at Boeing—as presented in a recent article, “Boeing's Big Intranet Bet”[62]—is an excellent example of the corporate virtualization process.

Ironically, a drastic move by Boeing to mimic the manufacturing style of mass producers resulted in Boeing’s costs spiraling out of control and its aircraft deliveries consistently running behind schedule.  In an almost desperate effort to keep production humming and customers satisfied, Boeing has initiated some rather drastic measures—in contrast to the currently typical manufacturing enterprise.

Topping this aerospace giant's list of remedies is a series of intranet and extranet applications designed to drive production and logistical information—not only to all corners of the company but also out to suppliers and partners—and even to its customers on an on-going support basis.

Its recently implemented web-enabled applications give Boeing and its suppliers full visibility into the specifications of fuselages, engines, wings, pilot controls, custom interiors and other parts, as well as the production schedules and maintenance histories for every airplane.  One result is better parts compatibility.  Participating suppliers are now responsible—as opposed to being on-call by a Boeing employee—for assuring that components match up.

In particular, Boeing has gone on the record as declaring itself in the process of strategically re-engineering itself into a virtual company.  According to William Barker, manager of the project, called Boeing Partners Network:

"If you look at the suite of applications coming up on our extranet, what you're looking at is the creation of a virtual company, … It's not just Boeing entities that now make up the company.  Suppliers, customers and partners extend the span of Boeing.  They have the same data we have.  They see metrics from the same source."

How large is this effort?  Some 320 major partners currently access the Boeing Partners Network.  This number is scheduled to expand to 5,000 suppliers within 18 months.  Ultimately, all of Boeing's 40,000 trading partners are expected to be partners of the Boeing Partners Network.

Internally, more than 192,000 Boeing workers, out of about 236,000 total employees, use the Web applications, and 91 percent of all managers use them regularly.  Nearly 31,000 users are hourly employees—web browsers are appearing virtually everywhere, from crane cockpits to the workstations of workers building nose cones.  When parts are needed from inventory, a worker can page internal couriers using the Web.

How has Boeing’s collaboration with its customers been enhanced?  Roughly 40 percent of Boeing’s extranet users are government entities, an indication of how confident Boeing is in its security infrastructure.  While initially motivated by the needs of Boeing’s airplane production operation, the extent of scope and the accrued results of this effort have not been restricted to airplane production.

The Boeing Network also is aiding Boeing's space projects—by providing the power to share information globally.  Forty-five separate regulatory agencies in the United States, Canada, Japan, Russia and several European countries use the Boeing intranet to collaborate on the international space station now under construction.

Boeing’s success in this endeavor has been featured recently in the article, “Killer Supply Chains.”[63]  The benefits of Boeing’s new infrastructure are enormous.  For starters, Boeing will be able to rapidly increase the number of planes it produces.  The company expects to build 620 planes in 1999—nearly triple the 228 produced in 1992.  Customers no longer will wait 36 months from the time a plane is ordered until it is finally delivered.  Boeing Commercial Airplanes now commits to deliver them in eight to 12 months.

Achieving such a performance gain is easier said than done, especially since the customer has literally thousands of configuration options from which to choose.  The trick is to defer all such customization to the latter stages of production, rather than to make those decisions at the beginning.  This approach is similar to the approach the automotive industry now is pursuing—as explained previously in this paper in the section on mass customization.

More than just people are being connected to the Boeing Partners Network.  Equipment in all areas is being re-engineered to function as IA’s—Internet or Information Appliances.  Sensors on heavy equipment provide instant reports—via the Intranet—on whether they are churning away, require attention, or are down altogether.

The benefits of bringing all of its manufacturing equipment online reach well beyond the automation of the day-to-day operation of that equipment.  For example, managers can access relevant information in realtime, as well as aggregate performance data over time for use—say—in negotiations for capital equipment.
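
The mechanics of such machine-to-intranet reporting can be sketched in a few lines.  The device names, status codes, and endpoint below are assumptions made for illustration; they are not a description of Boeing's actual systems.

import json
import time

def post_to_intranet(endpoint, report):
    # Stand-in for an HTTP POST to an intranet status service.
    print(f"POST {endpoint}  {json.dumps(report)}")

def report_status(device_id, status, endpoint="/factory/status"):
    # Package a machine's self-reported condition for the shop-floor intranet.
    report = {
        "device": device_id,
        "status": status,            # e.g. "running", "needs-attention", "down"
        "timestamp": time.time(),
    }
    post_to_intranet(endpoint, report)

if __name__ == "__main__":
    report_status("crane-07", "running")
    report_status("autoclave-02", "needs-attention")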

David Dobrin, chief business architect at the consultancy Benchmarking Partners Inc., has described the significance of Boeing’s efforts to fully integrate the shop floor into its overall knowledge management strategy:

"There's no other manufacturing company that I know of that has this type of commitment to Internet connectivity on the factory floor. … They are trying to give individual workers more information and consistency."

According to Chuck Kahler, vice president for wing operations in Boeing's commercial airplane group:

"In the past, the person who had the data had the power. … In the future, with the help of the Internet and Intranet, everyone will have the data, and the power will be held by the group.  Our vision has everything that has been typed on a keyboard available for others to use.”

"By having this data readily available, all the people involved can look at it. … It's not going to be me looking at data and expecting people to do better.  The idea is to let everybody see how they're doing individually.”

Furthermore, the Boeing web development strategy is not restricted to the use of its Intranet as the means to provide everyone access to relevant information and knowledge.  Boeing has gone much further in its efforts to fully integrate its employees—with their application-specific knowledge of how the company should function.  The decision was made to distribute Intranet development among the business units.  Now, the applications are tailored by application-focused employees to address their unique problems.

This represents a new frontier in IT organization, especially for a company known for its centralization and bureaucracy.  Boeing’s CIO Milholland explained,

"We believe the advantage to us is people who know: [1] how to build applications, [2] how to access data, [3] how to talk to customers, and [4] how to talk to suppliers through these kinds of applications."

Boeing’s embrace of the Internet as providing the best paradigm for re-inventing itself has not been restricted to its operations and production facilities.  Boeing’s approach to working with new technologies and to its systems development also had to be improved, if Boeing were to remain competitive.[64]  Boeing’s competitors were introducing superior products and components at better price points than what Boeing could offer.  Immediate action on this front was demanded for survival.

Again, Boeing turned to use of collaborative extranet applications—this time to revamp how their rocket-engine designers build rocket engines.  The result was a cut by a factor of 100 in their development costs.

To achieve this level of performance gain and to be more productive in the design phase, Boeing placed a heavy emphasis on concurrent engineering with suppliers.  Prior to the deployment of its extranet, Boeing—in its traditional top-down mass production focused approach to design and development—would contract with a supplier to build a particular part to a predetermined spec, requiring extensive up-front work.

This time Boeing approached the problem as an extranet-enabled pilot project.  Boeing drafted engineers from its supplier firms without specific roles in mind for them.  The extranet yielded a more free-flowing, creative—read innovative—process that nonetheless could be tightly monitored through revisions, because the extranet served as a repository for project data.

The results of this new approach to systems design and development were nothing short of dramatic.  Robert Carman, program manager for advanced propulsion at Rocketdyne, identified significant breakthroughs for this new approach to design and development:

1.       The product, normally made up of 140 different parts, was redesigned with only five parts.

2.       First unit cost—or the cost to develop an initial version—dropped from $1.4 million to $50,000.

3.       Design time was compressed from seven years to less than one year.

4.       The engine is being tested now and will soon go into production.

Mr. Carman goes even further in describing the strategic significance of the new methodology now common at Boeing:

"Virtual team collaboration is not a fantasy; it's a requirement in our world."

Not only at Boeing, but across the whole aerospace industry, everyone is going through a massive shift to collaborative outsourcing—not just of production, but up front with design and development.  Companies are focusing on their core competencies while integrating with their partners and suppliers using extranets, said Ram Sriram, president of Nexprise, the company that developed ipTeam.

At the heart of Boeing’s extranet-enabled collaborative design and development application is ipTeam Suite 2.0 from Nexprise–a company run by former Lockheed Martin engineers who designed their packaged application for the Defense Advanced Research Projects Agency.

The ipTeam Suite facilitates collaborative engineering and design activity, in addition to helping manage supply chain knowledge.  At the center of the product is their Internet Notebook, an electronic-engineering workspace where engineers are able to collaborate on virtually every aspect of a project.

"This is the equivalent of ERP but in the product-development area," said Alex Cooper, president of Management Roundtable, which tracks product development-process technologies.

Such far-reaching effort to enable and empower a company’s employees is not limited to Boeing.  Home Depot's store-distributor setup is part of a broad effort to put decisions into local hands, a critical factor in the company's successful supply chain.  According to Home Depot's CIO Ron Griffin:

"We're empowering people on the floor. … They feel as if they have ownership, and having that ownership is what makes it work."

When an associate enters orders directly into mobile computing devices—called the Mobile Ordering Platform—the request is transmitted almost instantly to more than 80% of Home Depot's manufacturers, which then respond immediately.  Home Depot offers its partners recognition incentives to get them on board.

Many other examples of the new virtual corporation could be presented—such as the automotive industry’s initiative to build what could be the world’s largest extranet—ANX.  All these examples would reinforce the same conclusion:

Everyone and everything in the virtual corporation must be connected—so that it may contribute to its fullest capability in realtime to both the long-term health as well as the immediate bottom-line of the company.

The new business imperative—embrace and extend

The Internet—through its multitude of participants—is constantly introducing and embracing new and enhanced protocols, environments, communications metaphors, partnering arrangements, business models, etc.  These serve to extend and to enhance current capabilities—sometimes with entirely new metaphors—and more importantly, to meet the ever changing and expanding needs of this customer, the digital economy!

Many industry pundits were amazed when Microsoft did an apparent “about face” in 1995 with its profound and unequivocal embrace of the Internet—including its emphasis on open, readily available standards as an engine of convergence.  Until its awakening, Microsoft was very committed to domination in all of its markets—current and future—with Microsoft-exclusive solutions.  At the time, Microsoft was in the process of releasing its MSN—functionally equivalent to the then current pre-internet versions of the Prodigy, AOL, and CompuServe networks—with its own proprietary MSN client software, business model, etc.

The emergence of the Internet has forced this transformation—not only on Microsoft, but on everyone.  This influence of the Internet was examined recently by a feature article[65] in Network World.

In 1994, Network World published a series of articles that suggested Microsoft was gaining control of the Internet—just as it had the desktop—through its ability to set de facto standards.  Those articles made the case that Microsoft had "sprinted ahead of plodding standards groups" and "used its market share to push its specifications so far into the market that even its biggest rivals have to play along."

Today, Microsoft has far less ability to dominate Internet standards.  After much analysis, the article argues that while Microsoft is intimately involved in the standards process, the company has not been able—nor will it be able—to manipulate the standards process to give itself a competitive advantage.  Two important points are made:

1.       The evidence shows that the onset of Internet technologies has put a premium on standards and interoperability.

2.       The software giant still has standards clout, but in a world gone Internet-mad it can no longer dictate the agenda.

The Internet is evolving so quickly and so extensively, in so many different directions—everything from IP extensions, to Web and XML extensions, to many other areas of information and knowledge processing and e-commerce—that now not even companies with the clout of Microsoft will be able to hijack it.

One obvious example of this general movement away from Microsoft-proprietary solutions in favor of Internet-based solutions is typified by the MAPI versus SMTP debate.  The impact of MAPI—which remains a proprietary Microsoft specification—has been significantly muted by the advent of the Internet.  "A lot of vendors are just using the Internet standard SMTP as opposed to MAPI," explains Tom Austin, vice president and research fellow at Gartner Group.

Furthermore, the other functionality of such products as Microsoft’s Outlook and Exchange Server, which Microsoft would have leveraged in pre-Internet times as its next lock-in, is instead now covered by Internet standards[66]—which are being supported by everything from Internet appliances to legacy mainframes.

The Internet Engineering Task Force (IETF) is in the process of completing its portfolio of enterprise calendaring standards with the release of five new protocols.  The protocols will ensure compatibility between different collaboration and scheduling applications.

The new Internet protocols include:

1.       Internet Calendaring (iCal) — proposes a standard format for representing calendaring information,

2.       Internet Transport-Independent Interoperability Protocol (iTip) — standardizes the mode of transporting iCal information,

3.       Internet Mail Interoperability Protocol (iMip) — lets end-users query scheduling systems from heterogeneous vendors using e-mail as the transport,

4.       Internet Real-Time Interoperability Protocol (iRip) — makes scheduling over e-mail much more effective, and

5.       Client Access Protocol (CAP) — makes the calendaring client and server independent of each other.

In particular, CAP would allow users to choose what kind of calendaring client or user interface they wish to use—regardless of which particular vendor's system is running in the back end.  CAP has already garnered support from CS&T, Lotus, Microsoft, Sun, and Netscape.  According to Andre Courtemanche, president and CEO of CS&T:

"With iRip, my system queries your system to tell me when you're available instead of me needing to sit and wait for an e-mail response from you before I can schedule a meeting."

"CAP changes all the rules and will make a very healthy marketplace by allowing for a lot more players.”

The IETF LDAP (Lightweight Directory Access Protocol) standard now is being integrated into all network-related application areas—from device management, to calendars, to email, etc.—as all these become directory-enabled.  Even Microsoft’s own ActiveDirectory is LDAP-enabled!

Microsoft certainly still strives to be the dominant player in all of its markets, and aggressively defends its current dominant position on the PC platform.  However, the approach now being taken to reach that goal has been modified significantly.  First, Microsoft has embraced existing Internet standards—from the IP network protocol, to the Web-HTML browser paradigm.

Microsoft now proactively contributes extensively to the standardization process in several Internet-related areas of technology.  For example, many Microsoft originated or co-sponsored initiatives currently are in process within the W3C (World-Wide-Web Consortium), and the IETF (Internet Engineering Task Force).

Rather than introduce new features and capabilities as Microsoft-exclusive enhancements, the approach Microsoft now takes is to introduce them via an appropriate Internet-related standardization body as Internet enhancements to be adopted by the entire Internet community.  Examples of standards to which Microsoft has contributed include XML (Extensible Markup Language), XSL (eXtensible Stylesheet Language), TIP (Transaction Internet Protocol), and P3P (Platform for Privacy Preferences).

Microsoft has committed to native support of XML and the multitude of XML-related tools and capabilities within its own product suites—not only by its Internet Explorer that was born out of the Internet phenomenon, but also by its more traditionally non-Internet offerings, such as Microsoft Office.  Thus in the future, the user of Microsoft products can expect Word, PowerPoint, Excel, etc. documents all to be XML-derived files—rather than proprietary *.DOC, *.PPT, *.XLS, etc.
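
The practical significance is that any XML-aware tool can read such a document without any knowledge of the application that produced it.  The following sketch parses a small, entirely hypothetical XML fragment; the element names are invented and do not represent Microsoft's actual file formats.

import xml.etree.ElementTree as ET

# A hypothetical XML-derived document fragment (invented element names).
DOCUMENT = """<document title="Planning Notes">
  <section heading="Emerging Technologies">
    <paragraph>Replacement is becoming preferable to upgrade.</paragraph>
  </section>
</document>"""

root = ET.fromstring(DOCUMENT)
print(root.get("title"))                 # Planning Notes
for paragraph in root.iter("paragraph"):
    print(paragraph.text)                # Replacement is becoming preferable to upgrade.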

The Microsoft business model and the Internet’s implied business model are one and the same—embrace and extend!

The strategy of embrace and extend is not limited to Microsoft.  IBM recently[67] introduced a comprehensive set of developer tools that support XML.

As part of its XML initiative, IBM has released—for free download and use—nine XML-related development tools and applications.  While the initiative is aimed largely at developers, IBM's push should help bring XML more quickly to the forefront by driving the production of XML-compatible applications.

Furthermore, this adoption of an embrace and extend strategy is not restricted to individual companies—such as Microsoft and IBM.  Nearly every day, another group of otherwise would-be (and continue-to-be!) competitors in some supporting technology area of the digital economy announces a collaboration.  They come together to create a new level of interoperability in their area—a la some new Internet proposal—from which each can build new, enhanced solutions for their customers.

The convergence typified by this embrace and extend behavior is occurring in all areas of the digital economy—in particular, in regard to Internet technologies and applications.

A good example is the recent announcement of convergence (consensus) for MGCP (Media Gateway Control Protocol) by two competing groups, each with its own proposal—the SGCP (Simple Gateway Control Protocol) and the IPDC (Internet Protocol Device Control).  Each group had developed its own approach for the integration of Internet-based telephony with that of the circuit switched PSTN telephony.

The spirit of embrace and extend again is observed in their action.  The two groups decided that a common, unified effort was in the better interest of both groups and that of their customers.  Their new proposal not only embraces the functionality of the two competing proposals—it also extends the scope of the new protocol to include additional multimedia-related features and capabilities that were not part of either of the original ones.

The technology-enabled opportunities derived as the result of this convergence of efforts are perceived as offering far greater value than what either original proposal would have supported standalone!

An indication of just how profound and global this all-encompassing drive towards convergence has become is typified by the recent formation of the Interoperability Clearinghouse (IC), which includes in its membership major vendors, standards organizations, and government and private user groups.  The IC has launched an ambitious quest for the Holy Grail of true interoperability and open systems, as reported.[68]

The IC is forming a consortium of standards organizations that will promote standards adherence and better understanding of the interrelationships of industry.  Additionally, the IC is forming a joint venture with Lockheed Martin, Ernst & Young, SAIC/Bellcore, Boeing, IBM, and the standards body Objective Technology Group to develop and administer a knowledge base to assist users seeking to answer standards conformance and interoperability questions.  The inference engine for this knowledge base currently is under development by the IC, with funding from the Defense Advanced Research Projects Agency (DARPA) and private sources.

Technology’s new role—key enabler of the digital economy

In the past, each vendor (or vendor group, or alliance) would develop and roll out its own proprietary solution.  Over an often-protracted period of time, one of the competing solutions might displace the others.  During this time, a customer either must choose among the several solutions, or else choose to support more than one of them—e.g., multiple browsers, multiple object models, multiple network protocols.

During the time of industry shakeout that then would ensue, the strong, dominant players in particular areas (e.g., IBM in mainframes, Microsoft in PC software, Cisco in routers) would leverage the FUD (fear, uncertainty, and doubt) factor to their advantage.  This provided a way for them to hold their customer base captive, or locked-in.  Such an approach to product and service evolution clearly was neither desirable nor satisfactory for the customer.

Technology evolved relatively slowly—in terms of performance improvements, new feature-set rollout, etc.  The business world typically adjusted to such long technology cycles with long, protracted depreciation schedules, metered-out, phased-in improvements, etc.  The would-be new vendor could not develop a new product or service offering with enough improvement or differentiation—in technology, cost, feature set, etc.—to justify the displacement of the incumbent company and its technology.

The operative word in the above word-picture of how technology would be introduced is time—time to plan, time to act, time to react and adapt.  The fairly slow incremental evolution of technology contributed to the economic stability.

A company—knowing that stable technology practically assured a done deal—could develop a multi-year plan, then successfully execute it without significant modification.  Within the constraints of this scenario, utility companies such as the typical telephone company generally have performed quite well—promise planned growth, then deliver as scheduled!

Times have now changed.  Due to the rapid acceleration in the rate of significant technology evolution—enhanced performance, reduced cost, new highly desired capabilities, etc.—such a status-quo approach to technology management now is proving ineffective as a means of achieving customer lock-in, service and product stability, and ultimately, a company’s regularly planned profitability.

The equivalent of a Moore’s Law—which states that computing performance will double every eighteen months—now applies not only to computing-related technology, but to all aspects of the technology-empowered economy.  The second half of this document—presenting such “Emerging Technologies”—focuses on what some of these breakthroughs might be and how their impact will be felt.
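
The arithmetic behind that statement is worth spelling out: a doubling every eighteen months compounds into roughly a hundredfold improvement over a decade, as the short calculation below shows (the planning horizons chosen are illustrative).

# Compound growth implied by a doubling every eighteen months.
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years):
    # How many times better the technology becomes over the given horizon.
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

if __name__ == "__main__":
    for horizon in (3, 5, 10):
        print(f"{horizon:2d} years -> about {growth_factor(horizon):,.0f}x")
    # Prints roughly 4x, 10x, and 100x respectively.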

The effects and the consequences of the previously explained megatrends—appliancization and mass customization—now are dominant factors contributing to the adoption of convergence as the preferred way of managing a business.

The rapid appliancization enabled by new technologies—where replacement becomes preferable to upgrade—stands in stark contrast to the generally accepted lock-in value that the installed base of technology once represented.  Rather than provide the means by which the incumbent can lock in the customer, technology now provides the would-be challenger with the means by which to liberate the customer—with better service and product offerings.

An excellent example of the new breed of company that will characterize the digital economy is the previously discussed Internet company Amazon.  As an aside, the capitalization of Amazon—over $17 billion versus less than $15 billion—has now surpassed that of Sears, one of the world’s largest traditional—bricks and mortar—retailers.

Were one to ask, “Is Amazon a retailer or a technology company?” CEO Jeff Bezos' answer would be, ''Yes!''  The truth is he is right—it indeed is both a retail business and a technology company.  Technology has provided the rocket that Amazon has ridden to the top of the online-retailing world.  In Bezos’ own words, quoted from the Business Week feature article:[69]

''In physical retail, the three most important things are location, location, location.  At Amazon.com [and in the digital economy], the three most important things are technology, technology, technology.''

Bezos aims to keep it that way.  Surprisingly, some 75% of today’s retailers still are not on the Web, according to Kip Wolin, business development director at retail Web consultant NetTech Group Inc.  Any model of the typical electronic appliance—TV, stereo, CD player, cordless phone—today has an expected proverbial shelf-life of about three months before the given model is made obsolete by a new generation of that appliance—one that is cheaper to purchase with new just-gotta-have features.

This three-month—or has it now become two-month?—shelf-life for the typical electronic appliance is quickly overshadowed by the even faster-changing nature of Internet technology’s evolution—or is it revolution?  The almost realtime reaction time of Internet technology—the time for a competitor to functionally duplicate, and even surpass, a current offering—means that one can never expect to find a place to rest, and never expect to achieve a status quo.

This is why Amazon cannot permit itself to rest on its current achievements.  This is why the three most important things at Amazon.com are “technology, technology, technology.”

Amazon’s previously discussed Associate Program—under which URL links to the Amazon website are placed on the pages of thousands of other retailers—already has been duplicated by the online bookstores of both Barnes & Noble and Borders.  For example, the technical publisher CMP—which publishes Information Week, Internet Week, EE Times, etc.—not only continues to provide a URL link to the AltaVista Web search engine, but now also offers a link to the Barnes & Noble online bookstore when one performs a search of the CMP online archive.  Similarly, another technical publisher, C|Net—which operates The Computer Network—now offers a link to Borders’ online book service when a query fails to deliver results from one of C|Net’s websites.

Furthermore, Barnes & Noble duplicated the functionality of Amazon’s one-click purchase and alternative book suggestion features shortly after they were introduced.  In particular, Amazon’s GiftClick feature—developed for the Christmas season—remained unique to the Amazon website for less than one week before Amazon’s competitors had duplicated its functionality on their respective websites!

Further exacerbating technology’s impact on the digital enterprise is the customer’s changing attitude towards technology as the preferred way to improve a business position.  The customer’s recent experience with mass customization could well be summed up in the words of Martin Luther King Jr.: “Free at last!  Free at last!  Thank God, I’m free at last!”

Now that the appliancization of technology is well established, the appliancization of the business process—the next major development in the evolution of the digital economy—has already begun.  The adoption of new technology increasingly is seen as the preferred way to quickly and cost-effectively bring new innovation, increased performance, and competitiveness to an enterprise’s operations—as well as to reduce its TCO’s and improve its ROI’s.

The customer—whether a consumer seeking to improve his quality of life, or a business enterprise seeking to reduce its TCO’s and to improve its ROI’s—not only is willing to entertain new possibilities—enabled by new technologies—that are offered, but now actively seeks out such better solutions.

The following major section in this paper presenting Emerging Technologies describes a number of technologies—fundamental and applied—which have the potential to significantly alter the incumbent’s position of technological stability.  No one can say with certainty just what the next breakthrough technology may be, or what breakthrough killer applications it will enable.

The particular examples of emerging technologies presented in this document have the potential, on the one hand, to open whole new areas of opportunity.  On the other hand, the full realization of their consequences and impact may not only significantly diminish the profitability of existing business endeavors, but may even preclude entry into currently contemplated ones.  The Microsoft attribute of “turning the company on a dime” has become imperative for a company’s continued success.

Such technologically perilous times require some type of technology insurance—insurance that the opportunities such technology represents are not missed, while the risk of misdirected efforts is minimized.  Such insurance is achieved through embracing convergence.

The tactical immediate approach is to practice dynamic partnering—often with one’s competitors, on a case-by-case basis—that is focused on meeting the customer’s needs.

As was explained previously in the discussion of mass customization, the Internet has created a new competitive landscape, where several companies will temporarily connect to satisfy one customer's desires, then disband, then reconnect with other enterprises to satisfy a different order from a different customer.  Today, most retail outlets accept the credit cards of several competing organizations—say, American Express, Discover, Master Card, and Visa.  Similarly, a given website might provide its customers with links to multiple competing partners—say, to both Amazon and Borders bookstores, or to both Yahoo and Excite.

The strategic long-term approach to technology insurance is to practice convergence.

Companies and organizations that previously pursued a go-it-alone strategy now readily are adopting a spirit of coopetition—the balanced combination of cooperation and competition.[70]  Management guru Peter Drucker and other experts now express their belief that the collaborative dynamic of networks, partnerships, and joint ventures is a main organizing principle of the New Economy.

This approach assures that each participating coopetitive company has ready—timely and cost-effective—access to those technology-driven solutions that will be needed—demanded—by the customer as part of tomorrow’s superior products and services.  A company can then safely promise the customer—as part of building and preserving that valued customer relationship—that it will be able to meet the customer’s expectations of better products, services, etc.

One concrete manifestation of this phenomenon of increased cooperation and competition has been described in the article, “Changing The Rules.”[71]  Online business opportunities are breaking long-held rules and conventions concerning interactions with customers, suppliers, and partners.  Among the traditional precepts now under siege are:

1.       Companies don't share information with competitors;

2.       Suppliers don't share information with buyers, especially information that determines pricing;

3.       Corporate procurement of commodities isn't a strategic activity, and should be determined solely on price; and

4.       No financial transaction occurs without some involvement by a bank.

According to Jim Shepherd, a VP at AMR Research:

"Companies are going to have to collaborate, and the Internet is the most effective vehicle to make that happen because of its ubiquity and relatively low cost.  Unlike earlier forms of electronic commerce like EDI, the Net permits informal dialog among partners, not just purely structured transactions."

One Internet-enabled infrastructure now becoming increasingly common is the online marketplace, where several competitors—or, should we be saying, coopetitors?—in a given market can provide a critical mass of service and functionality to their customers.  The argument for forming an online marketplace is the same reasoning that explains why, in the real world, one finds multiple gas stations, multiple restaurants, etc. at a given physical location.  The online marketplace increases the likelihood of a customer choosing to stop at that location.

 "A marketplace site can aggregate numbers of buyers that a single-seller site can't," says Chris Peters, executive VP of MetalExchange and a consultant who helped develop Weirton's site. "Sooner or later, someone was going to create an online marketplace, so we thought it might as well be us."

Another example of how the rules have changed is given by AMR's Shepherd,

"The rule used to be that you never, ever shared demand or production information with suppliers because that gave them an unfair advantage in negotiations.  You'd have been fired for even suggesting it.  Now it's becoming routine."

In business-to-consumer commerce, one of the most radical rules-changers is Priceline.com, which has sold 50,000 airline tickets over the Internet since April.  Priceline.com allows customers to post the prices they wish to pay for trips.  The first airline to accept such an offer gets the business.  As a further diversification of its new business model—which has received a patent—Priceline is expanding into packaged vacations that include rental cars, plane tickets, and hotels.  Travelers will state their price for a specific trip, challenging the businesses involved to cooperate to make the package work for all parties.  According to Priceline president and CEO Jay Walker,

"It introduces cooperation because they all know if a rental car company is too piggish [on price], nobody is going to play.  Consumers will say, 'You sellers work it all out. I'm going to bed.'"

This concept of online marketplaces is rapidly maturing, especially when open-ended marketplace principles are converged with the more traditional closed supply-chain and EDI technologies, as well as with the newer Internet-based enterprise portal technologies such as EIP and ERP previously discussed in the section “Convergence and supply-chains meet the Internet.”  Major efforts already are underway to standardize the software and infrastructure needed to implement these online marketplaces.

According to the article, “Procurement Shifts To Portals,”[72] such companies as Commerce One are moving beyond electronic procurement—a la supply-chains and EDI—with broad plans to enable the building of open trading marketplaces worldwide.  MarketSite 3.0, the latest version of its software platform, can be used to build and to link online trading communities via a flexible XML-based architecture.  Plans for use of the Commerce One infrastructure include recent deals with international telecom carriers BT and Nippon Telegraph and Telephone, which will join MCI WorldCom in the United States in hosting large MarketSite trading communities.

Rivals are also building trading portals.  For example, Ariba Technologies and Intelisys Electronic Commerce have detailed procurement portal plans based upon the OBI (Open Buying on the Internet) standard.

Ariba has gone even further: its submission of cXML (Commerce XML) is an embrace-and-extend proposal for integrating the OBI-based and XML-based approaches.  According to the Ariba announcement,[73] several leading E-commerce companies have agreed to collaborate on this lightweight standard for business-to-business E-commerce transactions.

The key aspect of online marketplaces for buyers, according to Forrester Research analyst Stan Dolberg, is the adoption of an open approach:

"You don't want to get locked into enterprise software that is completely hard-wired into one portal or aggregation hub."

Technology no longer functions as a stabilizing force that can be selectively applied by the incumbent company to preserve its installed base, or its status quo.  Rather, technology now provides the means for conducting aggressive economic warfare.  More importantly than ever before, each company must understand those technologies that could be used either by it or against it.

Microsoft’s embrace of browser technology and the Internet’s Web model of client-server computing is but one concrete example of how a company must adapt itself to new technology-enabled products, services, business paradigms, etc.  Many pundits have described Microsoft’s prowess at “turning the company on a dime.”  Microsoft simply has practiced a smart technology policy.

The synergistic value of technology convergence—killer apps

The technology-enabled opportunities that result from convergence have the potential to create far greater value than the currently closed, proprietary, go-it-alone solutions that are being displaced.  The resulting opportunities are not only in terms of what is enabled now, but even more so in terms of what becomes achievable through the synergy such interoperability fosters.

Technological advances represent more than simply the new enabler of the digital economy.  New technology often can be quite disruptive.  In war, there is the concern for collateral damage.  More generally, people speak of the unintended consequences of an action or product being more significant than what originally was intended.  The printing press, presented in the introductory section “Gutenberg’s invention—mass repeatability,” is one such example.

Nicholas Negroponte explained this in terms of the Law of Disruption, which he used to explain killer apps in his Introduction to the book, Unleashing the Killer App.

Killer apps are examples of the Law of Disruption in action, a use of technology whose novelty turns the tables on some previously stable understanding of how things work or work best.  In business, killer apps undermine customer relationships, distribution networks, competitor behavior, and economies of size and scale.  Killer apps create global competitors where only local players previously mattered.  They give customers, suppliers, and new entrants power, upsetting the careful cultivation of competitive advantages that were themselves based on technology, technology that is now suddenly obsolete.

A current example of this phenomenon can be seen in what now is happening in the EDI (Electronic Data Interchange) industry.  The traditional EDI industry has been evolving its standards and product offerings for some time now.  GTE participates in this arena—both in current planning and in the significant expenditures already made to support this type of service.  These efforts are focused both internally, as a critical component of GTE’s operations, and externally, at commercial service offerings to the public.

Along comes I-EDI (Internet-based Electronic Data Interchange), which has radically changed the traditional EDI business case, as indicated in a recent InfoWorld article.[74]  Consider the immediate order-of-magnitude cost savings over the traditional approach to EDI that can result from the adoption of an I-EDI approach:

"It used to cost small suppliers roughly $10,000 per year to do EDI," says Geri Spieler, an analyst at Gartner Group, in Stamford, Conn.  "Today, that figure ranges between $650 to $1,000 for Internet-capable EDI services."  Chris Liccari, general manager at Lancaster Nameplate, in Palmdale, Calif., had that experience when a big customer changed the rules of the partnership.

The direct cost savings and improvements in efficiency are significant in themselves, and provide an immediate tactical benefit.  However, the strategic killer-app significance is the prospect of extending EDI functionality into markets and applications in which no one previously would have thought EDI technology economically feasible or pragmatically implementable.

The I-EDI infrastructure is capable of supporting not only realtime end-to-end business-to-business supply-chain management, but also the much more open-ended domain of consumer-focused e-commerce transactions.

I-EDI's cost is so low that Forrester Research's Bell predicts EDI will soon embrace routine consumer transactions such as online auction bids and structured catalog sales.  How low?  A survey commissioned by Premenos Technology, a division of Harbinger in Atlanta, found that processing one paper-based purchase order can cost between $50 and $70.  Processing the same order with traditional EDI costs about $2.50, and the cost drops to less than $1 for companies that are using I-EDI.

This approach makes much more sense—and cents—than supporting one EDI-like infrastructure for telco-to-telco activity and payment reconciliation, another infrastructure for GTE’s management of its own supply-chain, and another for our customer-focused interactions—such as bill presentment and payment.

While one may incrementally improve the efficiency with which one conducts an activity, real progress comes from making more effective use of one’s resources—by enabling more meaningful things to be done—and thereby adding increased value and satisfaction for the customer.

Another example of such a convergence of underlying technologies to improve the customer experience was reported in the previously referenced article, “The Service Imperative.”[75]

BellSouth Corp, a winner of the J.D. Power & Associates' customer-satisfaction survey three years in a row, has developed a customer-care application that integrates sales, service, and bill collections.  This application provides BellSouth’s service representatives with a single view of customer data to better handle incoming calls.  According to Bob Yingling, CIO of consumer services with BellSouth, as reported in the article:

"[The application] allows us to bring the whole power of the corporation to that call."

The convergence of services that today exist in their own vertical silos, enabled as a direct consequence of technological convergence, is almost limitless.

As a currently evolving GTE-specific example of service-convergence, consider the Internet Fax business that GTE and other Telco’s and ISP’s recently have entered.  In the simplest scenario, an Internet phone call is substituted for the traditional circuit-switched phone call, while pretty much all other aspects of the fax operational and business models are preserved.  Such an effort—as currently conceived by most Fax service providers—does not even begin to realize the full potential of the opportunity that the Internet offers to GTE to improve its (store-n-forward) Fax service.

As background, a Fax is fundamentally a point-to-point operation between two phone lines with attached fax-enabled devices.  The current concept of this fax service is to cache/store a fax from point A to point B when a circuit between the two fax machines cannot be established—e.g., the terminal fax machine is turned off, its line is busy, etc.  The Fax service then attempts delivery of the cached Fax at later times, until the fax is finally delivered.

An obvious short-term solution—which leverages the Internet as a transport medium—is to deliver faxes between points that normally would require a long-distance toll charge by instead sending them via the Internet connectivity between fax servers in the two local areas.  This solution is strictly an efficiency enhancement—cheaper perhaps, but with no fundamental change to the quality or the capability of the service.

One way to offer a more effective service would be to also support delivery of faxes between fax machines and fax-enabled PC’s on the Internet.  One obvious mechanism to facilitate this type of enhancement is to integrate fax processing with email processing—something that could be done by the unified messaging services now under development by GTE.
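As a concrete sketch of what such an integration layer might look like, consider the following fragment.  It assumes a hypothetical store-n-forward fax server that hands the unified-messaging layer each received fax as a TIFF file, along with the subscriber addresses it has resolved from the calling and called numbers; the fragment then forwards the fax as an e-mail attachment using the standard JavaMail API.  Everything other than the javax.mail and javax.activation classes is invented for illustration, and no specific GTE system is implied.

import java.io.File;
import java.util.Properties;
import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

// Hypothetical bridge between a store-n-forward fax server and a
// unified-messaging mailbox: a fax that cannot be delivered as a fax,
// or that is addressed to a fax-enabled PC, is forwarded as e-mail.
public class FaxToEmailBridge {

    private final Session mailSession;

    public FaxToEmailBridge(String smtpHost) {
        Properties props = new Properties();
        props.put("mail.smtp.host", smtpHost);
        this.mailSession = Session.getInstance(props, null);
    }

    // tiffFile: the received fax image; from/to: subscriber addresses
    // looked up (by the hypothetical fax server) from the calling and
    // called numbers.
    public void forwardAsEmail(File tiffFile, String from, String to)
            throws Exception {
        MimeMessage msg = new MimeMessage(mailSession);
        msg.setFrom(new InternetAddress(from));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
        msg.setSubject("Fax received from " + from);

        MimeBodyPart text = new MimeBodyPart();
        text.setText("A fax addressed to you has been delivered to your "
                + "unified mailbox.  The image is attached.");

        MimeBodyPart attachment = new MimeBodyPart();
        attachment.setDataHandler(new DataHandler(new FileDataSource(tiffFile)));
        attachment.setFileName(tiffFile.getName());

        MimeMultipart body = new MimeMultipart();
        body.addBodyPart(text);
        body.addBodyPart(attachment);
        msg.setContent(body);

        Transport.send(msg);   // hand the message to the SMTP relay
    }
}

The same bridge, pointed in the other direction, could render an inbound e-mail attachment for delivery to a conventional fax machine.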

Such enhanced services add value to the customer—making the service more effective for the customer.  However, such enhancements in fact barely scratch the surface of what could be enabled by the integration of traditional fax store-n-forward service with other Internet-enabled capabilities.

Some specific examples should make this point clear.  Companies now are able to interact with their customers in a number of ways—such as, person-to-person, IVR, fax-on-demand, Internet website, EDI, etc.  Today, such media are pretty much standalone from each other.  The integration of such otherwise independent services not only is possible, but some companies already have begun to explore and to exploit these possibilities.

Consider the example of how Alloy Online—a New York-based teen clothing retailer—has embraced a converged approach to e-commerce, as reported recently in the article, “Portal site for teens sheds some light onto possible future of Internet commerce.”[76]

Alloy Online foreshadows the future of e-commerce in another way: The site is almost completely outsourced.  Alloy used the services of Virginia-based OneSoft to host, design, and launch the new site last January.

This arrangement allows Alloy to concentrate on the marketing and merchandising of its goods, on producing content for its new portal, and on high-level management of the site.

OneSoft's commerce system automatically pipes Web product orders into Alloy's proprietary order-processing system—the same one used for telephone orders from the print catalog.  Phone and Web orders alike are then routed to the company's fulfillment center in Tennessee.

GTE is in the enviable position of being able to offer its customers the integration of ALL these communication channels!  GTE could offer a customer the unified management of all their communication channels—after all, GTE is a communications company—from a common, consistent perspective.

Then, a change in the pricing of an item by a business customer, for example, could be maintained consistently across all these media—IVR, fax-on-demand, internet website, EDI, etc.

Consider the following example scenario.  As a (consumer) customer of the GTE business customer typified above, I could use fax-on-demand to receive an order form—requested via the IVR service—whose check-boxes correspond to the check-boxes on the web page containing the same information.

I then could check off my selections and fax the sheet back to the GTE-supported fax service, which OCRs the scanned fax and delivers the extracted information—say, over the GTE.INS internet, via EDI, etc.—to the GTE business customer’s own e-commerce-enabled order-processing department.  Note that in this scenario, the GTE business customer does not care whether I requested, received, or completed my order via web, fax, or whatever.
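A rough sketch of the server-side handling of such a returned fax appears below.  The OcrEngine, OrderForm, and EdiGateway types are purely hypothetical stand-ins for whatever recognition and EDI facilities the fax service would actually employ; the point is only that the channel by which the order arrived disappears before the order reaches the business customer's back end.

// Illustrative only: OcrEngine, OrderForm, and EdiGateway are invented
// interfaces standing in for whatever OCR and EDI facilities the fax
// service would actually use; no specific GTE system is implied.
import java.io.File;

interface OcrEngine {
    // Reads the check-box fields from a scanned order form.
    OrderForm recognize(File scannedFax) throws Exception;
}

interface OrderForm {
    String customerAccount();
    String[] checkedItems();
}

interface EdiGateway {
    // Delivers the order to the business customer's order-processing
    // system, e.g. as a purchase-order transaction over the Internet.
    void submitOrder(String account, String[] items) throws Exception;
}

// The channel-neutral step: whether the selections arrived by web form
// or by returned fax, the same order reaches the same back end.
public class FaxOrderHandler {
    private final OcrEngine ocr;
    private final EdiGateway edi;

    public FaxOrderHandler(OcrEngine ocr, EdiGateway edi) {
        this.ocr = ocr;
        this.edi = edi;
    }

    public void handleReturnedForm(File scannedFax) throws Exception {
        OrderForm form = ocr.recognize(scannedFax);
        edi.submitOrder(form.customerAccount(), form.checkedItems());
    }
}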

Does this hypothetical example seem far-fetched?  Consider the converged service now being offered by ImproveNet—a web-enabled home improvement facilitator.  ImproveNet charges tradesmen for sales leads, plus finder's fees for contracts they arrange with homeowners via ImproveNet, according to reporting in Internet World.[77]

While the homeowners on ImproveNet are familiar with the Internet, the majority of the contractors are not technology savvy, and often do not even own a computer.  So what was ImproveNet to do? ImproveNet began to contact building professionals via a technology with which they were familiar—the fax!  The consumer provides ImproveNet with details of the job—via the ImproveNet website.  The relevant information then is faxed to appropriate contractors, who can reply if they are interested.  According to Robert Stevens, the founder of ImproveNet:

"That allows the homeowner to be in a position, not just to get a list of good people, but get a list of good people interested and available in your particular job and your particular area.  And that makes the market."

“What we can do is match that consumer need with a supplier who wants to convey information.”

Using its custom-built fax distribution server, ImproveNet currently sends out 10,000 faxes a day over the Internet to contractors.  ImproveNet's involvement goes beyond making a match.  It also acts as project advisor and chaperone.

Mr. Stevens goes on to explain his approach to growing his business:

"Early on, we were finding many of the projects that came in were not ready to hire a contractor.  They needed to have a design done, so we added a network of architects and designers.  Your customers design your business for you by telling you what they need."

So what can GTE do?  Most importantly, GTE could commit to the customer that new media—as they become available—would be seamlessly integrated into this unified converged service!

Contrast this just-described converged service scenario with today’s typical service offerings—the customer is responsible for independently notifying the Web service provider, the Fax service provider, the EDI department, etc. of a product’s price change, and for coordinating any derived level of integration among them.  If the customer had one-stop management of all his e-commerce, how much more effective could that customer be!

The convergence of all networks—all networks lead to one

One fundamental component of the general convergence megatrend is the convergence of all networks—be they in the home, in the office, in the automobile, over the neighborhood, across the country, or around the world—be they copper-based, optics-based, wireless-based, or some combination.

Nicholas Negroponte described a principle called Metcalfe’s Law—first proposed by Robert Metcalfe of Ethernet fame, and a founder of 3Com Corporation—in his Introduction to the book, Unleashing the Killer App—digital strategies for market dominance.  As Negroponte has explained Metcalfe’s Law:

Networks (whether of telephones, computers, or people) dramatically increase in value with each additional node or user.  Metcalfe's Law values the utility of a network as the square of the number of its users, and can be easily appreciated by considering the impact of standard railroad gauges, Morse code, and standardized electrical outlets in the last century and telephones, fax machines, and the Ethernet and Internet protocols today.  Once a standard has achieved critical mass, its value to everyone multiplies exponentially.

In a nutshell,

Metcalfe's Law values the utility of a network as the square of the number of its users.
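A trivial worked example, with purely illustrative user counts and the constant of proportionality omitted, shows how quickly the claimed value grows: each doubling of the user population quadruples the network's relative utility.

public class MetcalfeExample {
    // Relative utility of a network of n users, up to a constant factor.
    static long utility(long users) {
        return users * users;
    }

    public static void main(String[] args) {
        long[] sizes = { 1000, 2000, 4000, 8000, 16000 };
        for (long n : sizes) {
            // Each doubling of the user count quadruples the utility.
            System.out.println(n + " users -> relative utility " + utility(n));
        }
    }
}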

Why does there exist such increasing pressure to embrace one universal network?  One simple explanation is pure economics.  Until now, each network infrastructure has been focused at some specific subset of one’s customer base, at some specific subset of one’s current suppliers, at some specific subset of one’s enterprise applications.  The key unifying terms in this proposition are the words focused and specific.

By contrast, the adoption of—the investment in—one communication and application infrastructure now offers a party the prospect of universal interoperability with the whole world—all customers, all suppliers, everyone, both actual and potential, both now and into the future.  The economic pressure to participate fully in such a universal network has become too overwhelming for anyone to ignore.  The key unifying terms in this proposition—in contrast to the prior one—are the words universal and interoperability.

In a recent article in Red Herring magazine,[78] the editors enumerated ten major trends that they foresee in the coming years of ubiquitous computing.

The importance and pervasiveness of one ubiquitous network was identified—for the second year in a row—as being the number one trend!

... as communications hardware and software vendors keep introducing technologies that unify voice, video, and data networks, we will begin to enjoy lower usage costs for all forms of communications, along with greater access to a wider array of services.

… new devices designed to take advantage of the Universal Network will dramatically change the computing-platform landscape.

Consumers do not want more computers in their lives; they want devices … that perform discrete functions.  Moreover, increases in bandwidth … will reduce the need for local storage on a PC.

Several years ago, SUN Microsystems coined the phrase:

 “The Network Is the Computer”

This phrase has become the mantra of the Internet model of computing, communications, and information processing.  The concepts of ubiquitous computing and ubiquitous networking are synonymous—one will not occur without the other.

So, what is the current state of the quest for this mantra?

Today, our daily lives are touched by a number of distinct networks—they operate as ships passing in the night.  These include not only the PSTN’s—which differ for the United States, Europe, etc.—but also various wireless networks—AMPS, TDMA, CDMA, GSM—as well as a multitude of other explicit and implicit networks, often proprietary and non-interoperable.  The wide availability of cable-based and global satellite-based data and telephony networks is imminent.

Today, a person typically may have to operate a combination of wireline POTS phones, ISDN phones, any of several different types of cell-phones, cordless phones, fax machines, and pagers—along with various radio and infrared-enabled devices (garage door openers) and appliances (TV’s and VCR’s).

Today, these multitudes of often proprietary, explicit and implicit networks are pretty much non-interoperable—in terms of either the underlying communications protocols, or the information that would use those protocols.  Information from one source—say, a speed calling list stored in my cell-phone handset—cannot readily be transferred between or synchronized with other information sources—such as with a PIM (personal information manager) on a PC or handheld PDA, or with the telephone company’s directory service.

Today, across the end of my own coffee table in the family room lie five different remote controls for various multimedia appliances—my TV, VCR, CD changer, amplifier-tuner, and a cable set-top box.  The remote controls for my garage doors and those for my automobiles—power door locks and trunk release—are interoperable neither with each other nor with my home’s security system.

Today, I keep within reach my cell-phone that I carry with me—even when at home—and a cordless phone to access my wireline POTS service—behind my key-system!  I have six distinct voicemail boxes—one on each of my family’s four cell-phones, one on my work phone, and one for my multiple home lines.  [How this latter accomplishment was achieved—multiple home lines serviced by one voicemail service—is a story in itself, described in the previous section titled My personal experience with telecomm-based mass customization.”]

Today, I have to maintain two different remote access configurations for each of my home PC’s—one for access to the GTE’s internal RNA network, and one for access to the general Internet via a commercial ISP (gte.net).  When I get a cable modem in January 1999, I will have yet another configuration to update—the one by which all my home PC’s already are networked via an Ethernet hub.

What about tomorrow?

Fortunately, the many participants of the digital economy—from the mega-corporations and organizations, to individual corporate entities and to individual consumers—are highly motivated to resolve this non-interoperability, as is typified by the previously referenced article “Group forms to end software chaos.”

According to Negroponte,

The market today is improving its efficiency at the speed of Moore's Law and with the effectiveness of Metcalfe's Law, moving it ahead of Industrial Age firms whose long histories of anti-competitive regulation and whose aging and expensive technology infrastructure keep them from adopting new hardware, software, and standards at anywhere near the pace of the market itself. … The market can achieve critical mass in a matter of months or even weeks.

All areas of communication and service—in the home, in the office, in the automobile, over the neighborhood, across the country, or around the world—are affected.  One critical component of this general convergence megatrend is the convergence of all networks—be they copper-based, optics-based, wireless-based, or some combination.

Whence network convergence—how will it happen?

There is widespread agreement that convergence is occurring at the technological level.  Available digital technology now allows both traditional and new communication services—whether voice, data, sound, or pictures—to be provided over many different networks.

With such an overwhelming impetus toward the convergence of all networks, one can rest assured of the final outcome—one universal converged network where every device, application, etc. has realtime access to whatever resources it needs.  The question that remains is how will this convergence be achieved?

Current activity in the digital economy suggests that operators from the sectors affected by convergence are acting upon the opportunities provided by technological advances—both to enhance their traditional services, as well as to branch out into new activities.  The telecommunications, multimedia, and information technology sectors are pursuing cross-product and cross-platform development as well as cross-sector share-holding.

Everyone—the operators, their suppliers, and most of all, the customer—has a vested interest in the outcome.  Efforts on all fronts—all services: telecomm, multimedia, and information technology—all environments: in the home, at work, over town, across the country, around the world—already are well underway to satisfy the demand for converged solutions.  Every week, some company, consortium, standards body, or other birds-of-a-feather group introduces another convergence initiative.

Several of these efforts are led by the current titans of computing and networking—such as Microsoft, Sun, Lucent, and Cisco, as well as the major carriers—AT&T, MCI Worldcom, Sprint, the RBOC’s, etc.  However, in the spirit of the Internet, a number of previously unnoticed companies—Tut Systems, Diamond Multimedia, emWare, etc.—also are offering their solutions.

At the operating system and middleware levels of integration, several significant efforts are underway to provide a converged network-smart infrastructure.  Of particular note are those efforts that have been initiated by Microsoft and SUN.  While Microsoft has focused on the extension of Windows and its diversity of API’s into the telephony arena, SUN has chosen to feature JAVA—in all its forms and derivatives.

Microsoft’s efforts include:

1.       DNA—Distributed interNet Applications,

2.       DNS—Digital Nervous System,

3.       Millennium—a next-generation self-tuning and self-configuring distributed network, and

4.       Parlay—a set of API’s for dynamic telecommunications applications created and maintained by the customers themselves.

SUN’s efforts include:

1.       JAVA—in its three dimensions: the language, the API’s, the virtual machine,

2.       JINI—a JAVA-based next-generation self-tuning and self-configuring distributed network,

3.       JAIN—the Java Advanced Intelligent Network, and

4.       JAMBALA—a JVM containing the information to run a telecomm network.

On February 7, 1998, Reuters reported a speech given in Helsinki, Finland, by Microsoft Chairman Bill Gates in which he introduced a new term—the Digital Nervous System (DNS)—for networks of personal computers.  Mr. Gates provided the definition:

"The DNS means using PC’s together with Internet standards to create an environment of easy information access to replace current information tools."

According to his vision, the DNS networking solution could replace telephone calls, paper and databases on large computers where information is hard to browse.  It could offer significant business value by enhancing the way a company shares information.

"Its most important benefit is the ability to navigate the information and see patterns, and be able to send mail messages to colleagues to share the information and get comments, all this contributing to a more efficient mode of making decisions."

DNS is similar in principle to—in fact, it builds upon—an earlier announcement by Microsoft of its Windows Distributed interNet Applications (DNA).  DNA is the name of the Windows-centric framework of services, interfaces, and gateways that Microsoft introduced at its Professional Developers Conference in September.  At the core of DNA is Microsoft's COM (Component Object Model).  In particular, DNA is the underpinning for DNS, the Digital Nervous System—Microsoft's metaphor for Internet/intranet/extranet-enabled computing.

Microsoft’s plans for DNA have been summarized in the article,[79] “Microsoft's goal: DNA everywhere.”

In short: Microsoft wants DNA to be all things to all people.  Expect to see versions of DNA tailored for nearly all the vertical market segments that Microsoft is targeting, such as health care, retail and insurance.  At the same time, look for Microsoft to claim that DNA is the heart and soul of all products and technologies going forward.

In addition to these Windows-centric efforts, Microsoft also is looking further into the future—where the network will be of even greater strategic value.  This forward-looking work includes Microsoft’s Millennium project.  An overview of Millennium is found on the Microsoft website.[80]

The Millennium project at Microsoft Research is investigating new ways to build distributed systems.  The resulting systems are expected to manage machines and network connections for the programmer—in much the same way that operating systems today manage pages of memory and disk sectors.

The envisioned distributed system will be self-tuning and self-configuring—it will automatically adapt to changes in hardware resources and application workload.  Under the umbrella of the Millennium project, Microsoft has been building a number of prototype systems—in particular: Borg, Coign, and Continuum.

A goal of these Millennium prototypes is to make application distribution over the network completely invisible to the application developer.  Millennium significantly raises the programmer's level of abstraction.  The prototypes are focused on such concepts as the use of aggressive optimization techniques throughout the lifetime of the application—even modifying the application while it is running.

The Advanced Intelligent Network (AIN)—a North American standard for intelligent telephone networks—offers a standard method for interfacing with telephone company equipment and doing elaborate processing on calls, including features like automatic callback, automatic recall, selective call acceptance, fax-on-demand, fax broadcasting, and a host of other bells and whistles.

In addition to these Microsoft-exclusive efforts, Microsoft has also partnered with British Telecomm, Siemens, and DGM&S to develop a new computer-telephony integration API—called Parlay—to support the convergence of PSTN services—such as those now enabled via AIN—with the private applications of network providers’ customers.  This group has established a formal organization with its own website.[81]

The Parlay API specification is intended to be open, technology- and network-independent, and extensible.  The purpose of the API is to provide secure and open access to the capabilities of a wide range of today’s communication networks, while being sufficiently adaptable to address similar capabilities in future networks.

This API presents a single standardized, abstracted and in many cases simplified way to control the communications networks of today, and through extensions to the API, to evolve and address the networks of tomorrow.  In particular, this API is targeted for use by the end user’s application developers, by third-party software development companies, and by enterprises of all sizes—as well as by the network operators.

The currently proposed specification provides the initial functionality needed to develop a number of powerful network and CTI applications.  This release provides, for example, access to call control and messaging functionality, plus the essential supporting functions, such as authentication.
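The actual Parlay interface definitions are not reproduced here.  The sketch below, built entirely from invented types (ParlayGateway, NetworkSession, CallControlService, Call), merely illustrates the style of application the specification is meant to enable: an enterprise click-to-dial feature that authenticates itself to the network operator and then uses the exposed call control to connect an agent with a customer.

// Illustrative only: these are invented types suggesting the style of
// a Parlay-like API; they are not the actual Parlay interfaces.
interface ParlayGateway {
    // The network operator authenticates the enterprise application
    // before exposing any call-control capability.
    NetworkSession authenticate(String applicationId, String credential)
            throws Exception;
}

interface NetworkSession {
    CallControlService callControl();
}

interface CallControlService {
    Call createCall(String fromAddress, String toAddress) throws Exception;
}

interface Call {
    void route() throws Exception;     // ask the network to set up the call
    void release() throws Exception;
}

public class ClickToDial {
    public static void placeCall(ParlayGateway gateway,
                                 String agent, String customer) throws Exception {
        NetworkSession session = gateway.authenticate("crm-app", "secret");
        Call call = session.callControl().createCall(agent, customer);
        call.route();
    }
}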

Sun Microsystems’ Java technology offers three distinct types of portability: 1) source code portability, 2) CPU architecture portability, and 3) OS/GUI portability—a critical component of interoperability.  Each type of portability is independent of the others, but the combination of the three provides Java with much of its power and promise.  An examination of these three types of Java portability is presented in an article[82] in JavaWorld.

As a programming language Java provides the simplest and most familiar form of portability—source code portability.  A given Java program should produce identical results regardless of the underlying CPU, operating system, or Java compiler.  The issue is one of syntax versus semantics.

Although the syntax of computer languages such as C and C++ is well defined, their semantics are not.  This semantic looseness allows a single block of C or C++ source code to compile into programs that yield different results when run on different CPU’s, operating systems, and compilers—and even with a single compiler/CPU/OS combination.

Java is different—Java provides much more rigorous semantics and leaves less up to the implementers.  Even without the JVM, programs written in the Java language can be expected to port (after recompiling) to different CPUs and operating systems much better than equivalent C or C++ programs.
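A small example makes the point.  Because the Java language specification fixes the size of int at 32 bits and defines overflow, integer division, and shift behavior exactly, the following program prints the same three values on every conforming implementation; the equivalent C program's output can vary with the platform's integer width and the compiler's choices.

public class DefinedSemantics {
    public static void main(String[] args) {
        int i = 2147483647;            // int is always exactly 32 bits in Java
        System.out.println(i + 1);     // always -2147483648: overflow wraps,
                                       // by definition, on every platform
        System.out.println(7 / 2);     // always 3: integer division truncates
        System.out.println(-7 >> 1);   // always -4: >> is an arithmetic shift
    }
}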

Most compilers produce object code that runs on one family of CPU—such as Intel’s x86 family.  Even compilers that are capable of producing object code for several different CPU families produce object code for only one CPU type at a time.  If one needs object code for three different families of CPU, the source code must be compiled three times.

Current Java compilers are different.  Instead of producing output for each different CPU family on which the Java program is intended to run, the current Java compilers produce object code (called J-code) for a CPU that does not yet exist.  A Java interpreter, or virtual machine—called a JVM—is implemented for each real CPU on which Java programs are intended to run.  This non-existent CPU allows the same object code to run on any CPU for which a Java interpreter exists.

Producing output for an imaginary CPU is not new with Java.  Other notable examples include: 1) the UCSD Pascal P-code, 2) Lucent’s Limbo programming language, and 3) Smalltalk.  The Internet-savvy JVM distinguishes itself from these other virtual CPU implementations because it is designed to allow the generation of provably safe, virus-free code.

This safety feature, combined with a much better understanding of how to quickly execute programs for imaginary CPUs, has led to rapid, widespread acceptance of the JVM. Today, most major operating systems, including OS/2, MacOS, Windows 95/NT, and Novell Netware, either have, or are expected to have, built-in support for J-code programs.

The benefit of compiling programs (in any language) to J-code is that the same code runs on different families of CPU’s.  The downside is that J-code does not run as fast as native code.  For most applications, this won't matter, but for the highest of high-end programs—those needing every last percent of the CPU—the performance cost of J-code will not be acceptable.

The elimination of the semantic problems and the CPU porting problems still leaves programmers with different operating system calls and different GUI API calls.  For example, Windows programs make very different calls to the operating system than do Macintosh and Unix programs.  Such calls are critical to the writing of non-trivial programs.  Until this type of portability problem is addressed, porting still remains difficult.

Java solves this problem by providing a set of library functions that interface to an imaginary OS and imaginary GUI.  Just as the JVM presents a virtual CPU, the Java libraries present a virtual OS/GUI.  Every Java implementation provides libraries implementing this virtual OS/GUI.  In addition to the basic OS functions—file access, etc.—Java API’s are also being developed for various application domains.  These include, for example, an API for access to LDAP-enabled directories.
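For example, the JNDI (Java Naming and Directory Interface) API gives a Java program one portable way to query any LDAP-enabled directory.  The fragment below is a minimal sketch; the directory host and entry names are placeholders, not references to any actual GTE directory.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class DirectoryLookup {
    public static void main(String[] args) throws Exception {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder host

        DirContext ctx = new InitialDirContext(env);
        // Read the attributes of one directory entry (placeholder DN).
        Attributes attrs = ctx.getAttributes("cn=Jane Doe, ou=People, o=example.com");
        System.out.println("mail = " + attrs.get("mail"));
        ctx.close();
    }
}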

Sun is working on a more general strategy for achieving the long-stated goal: “The Network Is the Computer.”  Jini is a Sun R&D project inspired by Bill Joy that would dramatically expand the power of Java technology as a network enabler.  The goal of Jini technology is to enable the spontaneous networking of a wide variety of hardware and software—anything that can be connected.

An introduction to SUN’s Jini is found[83] on the SUN website.

Jini allows people to use networked devices and services as simply as using a phone—plug-and-participate via a network dialtone.  The goal of Jini is to dramatically simplify interaction with networks.  With Jini, for example, a disk no longer need be a peripheral to a computer, but functions as a type of storage service to the network.

Jini takes advantage of Java technology.  Jini consists of a small amount of Java code in class library form and some conventions to create a "federation" of Java virtual machines on the network, similar to the creation of a community today.  Network citizens such as people, devices, data, and applications within this federation are dynamically connected to share information and perform tasks.

An overview of Jini’s position in Sun’s Java strategy has been presented in an article[84] that appeared in InfoWorld.  Jini is a Java-based network infrastructure that allows devices and applications to automatically join a network and offer their services across that network.  Jini does not resolve all of the details of how a particular application will function across the network, but rather it provides the crucial capability for those services to be aware of each other and make a connection.
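A minimal client-side sketch, assuming the discovery and lookup classes from Sun's Jini release, conveys the flavor.  The PrintService interface is invented for illustration; the client configures no printer address and installs no driver, it simply asks any discovered lookup service for something that implements the desired Java type.

import java.rmi.RMISecurityManager;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;
import net.jini.discovery.DiscoveryEvent;
import net.jini.discovery.DiscoveryListener;
import net.jini.discovery.LookupDiscovery;

// Sketch of a Jini client: discover a lookup service on the local
// network, then ask it for any service implementing the (hypothetical)
// PrintService interface.
public class JiniClientSketch {
    public interface PrintService { void print(String document) throws Exception; }

    public static void main(String[] args) throws Exception {
        System.setSecurityManager(new RMISecurityManager());

        LookupDiscovery discovery =
                new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
        discovery.addDiscoveryListener(new DiscoveryListener() {
            public void discovered(DiscoveryEvent event) {
                try {
                    for (ServiceRegistrar registrar : event.getRegistrars()) {
                        // Match on the service's Java type; no name, address,
                        // or driver needs to be configured in advance.
                        ServiceTemplate wanted = new ServiceTemplate(
                                null, new Class[] { PrintService.class }, null);
                        PrintService printer =
                                (PrintService) registrar.lookup(wanted);
                        if (printer != null) printer.print("hello, network");
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            public void discarded(DiscoveryEvent event) { }
        });
    }
}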

What are these two Java technologies and what do they do?

1.       Jini provides the distributed system services for look-up, registration, and leasing

2.       JavaSpaces manages features such as object processing, sharing, and migration

Together, Jini and JavaSpaces represent a shift away from current approaches to system services, which work on a centralized model—where system services are administered from a single point, usually the operating system.  An operating system, though, is really a collection of smaller subfunctions that perform multiple duties, such as cleaning up garbage, directing traffic, assigning tasks, and establishing who gets priority over others.

Jini, combined with JavaSpaces, breaks away from this monolithic model and distributes many services across various parts of the network, essentially breaking the OS into separate subsystems and then scattering them across the network, clients, and servers.

Jini is the crucial first step for the Java infrastructure that achieves this distributed cooperation.  Due to Java's object-oriented nature and capability of executing portable code, Jini distributes a variety of software objects across the network.  These discrete applications, or objects, can be moved across the network to interact with other objects, based on the needs of users.
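A JavaSpaces sketch, with the details of obtaining the space reference omitted (it would be found through Jini lookup, as above), shows the style of this distributed coordination: one node writes a task entry into the space, and whichever worker node takes it first performs the work.

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// Sketch of JavaSpaces-style coordination: a task entry is written into
// the space by one node and taken (removed) by whichever worker node
// gets to it first.
public class TaskBag {

    // Entries are plain objects with public fields; null fields in a
    // template act as wildcards when matching.
    public static class Task implements Entry {
        public String jobName;
        public String payload;
        public Task() { }
        public Task(String jobName, String payload) {
            this.jobName = jobName;
            this.payload = payload;
        }
    }

    public static void post(JavaSpace space, String name, String data)
            throws Exception {
        space.write(new Task(name, data), null, Lease.FOREVER);
    }

    public static Task next(JavaSpace space) throws Exception {
        Task template = new Task();          // all-wildcard template
        return (Task) space.take(template, null, Long.MAX_VALUE);
    }
}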

In addition to Sun’s introduction of its Java-based Jini for general network enablement, several telecomm suppliers are working on approaches to leveraging Java which are focused directly at the telecomm industry and the PSTN.  One such effort is JAMBALA, recently announced by Ericsson on September 23, 1998 in Orlando, Florida, as reported on Sun’s web page[85].

In simple terms, JAMBALA is a machine that contains the information to run a telecomm network. Among other things, it tracks subscriber information, locates subscribers, handles special services like call forwarding and voice mail, and manages basics, such as subscriber registration and record-keeping.

The release of JAMBALA constitutes a pioneering step towards open systems in telecommunications.  JAMBALA contains "middleware"—the operating system and surrounding environment—that makes the system function and ties the hardware and applications together.  This middleware fully supports the capabilities of the Java platform, allowing for free and open options to customers in hardware, applications, and services through the JavaBeans API.

The entries of Sun and Microsoft into the AIN market—both announced in June of 1998—with new programs and a raft of new partners backing their respective software initiatives signal convergence on a grand scale.  With either proposed solution, thanks in part to AIN, the lines between the LAN, WAN, ISP, telco, and applications software become increasingly blurred.

At stake for both companies is not only the prospect of becoming firmly embedded in the current telephone network—the PSTN—but also the possibility of and foundation for expanding into even bigger markets which will be developing in the coming decade.  These include home networking, intelligent houses, and virtual corporations, as well as many other control and automation applications.

These offerings from Sun and Microsoft were examined closely in a recent article in SunWorld.  According to that article,[86] Sun is building its efforts around the Java Advanced Intelligent Network (JAIN), which defines both services and network elements as JavaBeans.  The Jini, JavaSpaces, and JavaBeans efforts thus are expanded to embrace and extend the PSTN.  According to Chris Hurst, vice president, worldwide telecommunications industry, Sun Microsystems, Inc.:

"The basic idea of Java Advanced Intelligent Network technology is simple: it creates a level playing field and a set of standards that will enable IN services to run anywhere, anytime, on any network. … What’s really important here is the support for JAIN solutions by key SS7 stack providers who recognize the need for common standards.  Sun's Java software is the ideal choice to serve as the foundation for this effort, because of its platform-neutrality, its rapid application development and its built-in networking capabilities."

At its core, the JAIN architecture defines a software component library, development tools and a service creation environment to build IN services for wireline and wireless operators.  Companies will be able to create SS7 middleware component libraries incorporating Java technology.  Components for specific capability sets can then be built on top of these library components.

"The real strength of Sun's JAIN technology lies in these specific capability sets.  They provide interfaces that will allow a carrier or network equipment provider to write a service independent of protocol, standard or transport mechanism. Imagine wireless services that can run on top of the European GSM protocol and the North American IS41 protocol.  Or consider a telephony application that runs on top of standards such as AIN and INAP, where the transport is either SS7, ATM or the Internet, and the application can migrate from a service control point to an IP (Internet Protocol), a backoffice system or a handset.  This kind of service portability will drastically reduce time to market and the cost for the carrier--and ultimately the consumer."

More than coincidentally, considerable overlap exists between the supporters of JAIN and the supporters of Parlay.  JAIN technology already is supported by several SS7 protocol stack vendors—including ADC NewNet, DGM&S Telecom, Ericsson, and Apion Ltd.  MCI WorldCom Inc. is using Java in its network as a means of giving business customers more control over their network services.  Engineers from both companies have worked closely to build Java into MCI WorldCom applications.

Microsoft announced its Active OSS (Operational Support Services) framework, which is aimed at the same market.  Microsoft is using its COM and Distributed COM object models running under Windows NT Server as the basis for its work.  Active OSS Framework also includes parts of the Windows Distributed Internet Applications (DNA) Architecture.

Both approaches rely on AIN technology, which provides a standard way of breaking the call connection into a number of steps and of checking at each step to determine what type of advanced intelligent processing—hence the term AIN—is indicated.  Based on the AIN processing, a given connection can be handled in different ways.
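The following sketch, built from invented types (DetectionPoint, ServiceLogic), is only meant to illustrate that step-by-step call model: call setup pauses at each defined detection point, any subscribed service logic is consulted, and the first instruction returned determines how the connection proceeds.

// Illustrative only: the enum values and ServiceLogic type are invented;
// they sketch the AIN idea of pausing call setup at defined detection
// points and asking service logic how to proceed.
import java.util.List;

enum DetectionPoint { OFF_HOOK, INFO_COLLECTED, INFO_ANALYZED, BUSY, NO_ANSWER }

interface ServiceLogic {
    // Returns a routing instruction ("continue", "forward:...",
    // "play-announcement:...", etc.) or null if it has nothing to say.
    String process(DetectionPoint point, String callingParty, String calledParty);
}

public class CallModelSketch {
    // At each step of call setup the switch checks whether any
    // subscribed service logic wants to influence the call.
    public static String routeCall(List services,
                                   String callingParty, String calledParty) {
        for (DetectionPoint point : DetectionPoint.values()) {
            for (Object s : services) {
                String instruction = ((ServiceLogic) s)
                        .process(point, callingParty, calledParty);
                if (instruction != null) {
                    return instruction;   // e.g. selective call acceptance
                }
            }
        }
        return "continue";                // default: plain POTS routing
    }
}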

AIN alone has its limitations.  It defines interfaces, but it does not specify APIs or languages.  To generate AIN services, telephone companies and independent software vendors need tools that create the API’s for the interfaces and support common languages.

According to Paul Tempest-Mitchell, systems engineering manager for Sun in telecommunications:

"Using Java was just a natural step in putting together this API. … Java is a great language and building environment for AIN."

The JAIN-enhanced PSTN is seen as taking over many of the functions now performed by private networks, such as e-mail and scheduling, in much the same way that the telephone companies have grabbed a significant share of the voice-mail business by offering voice messaging.

In addition to the JAIN initiative, Sun also has been collaborating with DSC—recently acquired by Alcatel—and with STR—a Chicago-based consulting company—on Project Clover.[87]

The project's goal was to Java-enable intelligent network switches so that they could be accessed by a browser or other client via the Internet.  Charles Lee, an Alcatel USA engineer who worked on the project, elaborated:

"Usually, the client is a telephony switch. We took the switch interface and expanded it so the client could be a browser on the Internet.  We used an interface that allowed a Java server to talk to a telephony server.  Now standard Java applets that talk to the Java server can talk to the telephony server."

Being Java-based, the client does not have to be a PC.  These services will extend to cable set-top boxes, cellular telephones, personal digital assistants and any other access device that runs Java.  Reza Nabavi, Sun's market development manager, predicts that the next step for Alcatel USA will be to look at the service creation environment and rewrite the service-independent building blocks as JavaBeans.

The field of possible solutions is much larger than those proposed by Microsoft and Sun.  Other, equally compelling possibilities are being proposed.[88]

Next year, even more products will emerge that will let users seamlessly share virtually any type of resource—code or device—across a network.  The evolutionary path starts out humbly with simple mechanisms like those adopted by the Salutation Consortium to discover network peripherals.  It then moves into directory-centric specifications like the IETF's SLP (Service Location Protocol).

Then comes the larger vision of instant access to any network program or service inherent in Sun Microsystems’ Java-based Jini.  Products are also apt to flow one day from Microsoft's Millennium, a next-generation distributed operating system now in Microsoft's research labs.  In fact, there's no shortage of next-generation architectures, including AT&T's [now Lucent’s] Inferno and Caltech's Infospheres.

According to its organizational statement, the purpose of the Salutation Consortium is to define an open architecture interface specification that will enable conforming products to identify device capabilities across a network.  The Salutation Specification[89] describes a capability exchange protocol and an application program interface (API) independent of hardware platforms and operating system software.

The Salutation Consortium has a broad conceptual design that bridges between more narrowly focused efforts such as the Infrared Data Association (IrDA), the Multi-Function Peripheral Association (MFPA), and Desktop Management Task Force (DMTF).  Implementations based on Salutation's architecture would also bridge between Microsoft's Windows environment and a broader, heterogeneous environment.

The Salutation Consortium is a non-profit corporation with member organizations in the United States, Europe, and Japan.  Member companies include Adobe Systems, APTi, Axis Communications, Brother, Canon, Cisco, Eastman Kodak, Fuji Xerox, Fujitsu, Hewlett Packard, Hitachi, Integrated Systems, IBM, Kobe Steel, Komatsu, Konica, Matsushita, Mita, Mitsubishi, Murata (Muratec), Okamura, Oki Data, Ricoh, Rios Systems, Sanyo, Seiko Epson, Sharp, Sun Microsystems, Toshiba, and Xerox.

As recently as Sept. 21, 1998—months after the Microsoft and Sun announcements of Millennium and Jini—Xerox Corporation and IBM announced[90] plans to add support for the Salutation Architecture in upcoming products, according to Robert F. Pecora, managing director of the Salutation Consortium:

"Salutation technology will enable IBM and Xerox to provide a new generation of products that simplify network collaboration."

Previously, at AIIM'98 in May of 1998, the Salutation Consortium demonstrated several such collaborative products developed by its member companies.  These included a scan-directly-to-Notes application using products from Axis Communications and Salutation-enabled NuOffice software from IBM.  NuOffice is marketed in Japan.  Market momentum around the NuOffice effort has resulted in Fuji Xerox adopting the Salutation Architecture as a company standard for networking office automation equipment.

Mita's Salutation-enabled Network Connection Kit for Notes was named "Best of Comdex" in the category of Enterprise System Software at Comdex Japan in April.  NuOffice provides a complete office system for large customer sites with many mobile or telecommuting users.  It includes Salutation extensions to Lotus Notes that enable users to print, scan, fax, and email without concern for device drivers or directories.  Additionally, a NuOffice user can access and distribute information right from a peripheral device, without opening a laptop, logging in to a workstation, or dialing a phone number.

The Service Location Protocol (SLP) is a product of the SVRLOC Working Group of the IETF.  It is a protocol for automatic resource discovery on IP networks.  SLP is designed to simplify the discovery and use of network resources such as printers, Web servers, fax machines, video cameras, file systems, backup devices (tape drives), databases, directories, mail servers, calendars, and the unimaginable future variety of services coming our way.  In the networked world of the future, interchangeable services will appear and disappear, and providing for the dynamic nature of their availability is an important accomplishment for SLP.

The Service Location Protocol website contains resources and information for those interested in SLP.  In particular, there is an introduction to the protocol, a white paper and references to important documents.  To quote from the Introduction:

Through the use of tools that have been enabled with Service Location Protocol (SLP), a clearer picture of the network attached resources and services are available to all users.  Users can browse resources and select the most appropriate service to meet the task at hand based on any attribute.  For example, finding the HR corporate web server, the nearest color printer, alternative /usr dist servers or routing a print job to a printer in a remote sales office is easy and automatic using Service Location Protocol.
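
To make the discovery model concrete, the following Java sketch shows the general shape of an SLP-style lookup for the "nearest color printer" example quoted above.  The SlpLocator interface and its findServices method are purely hypothetical illustrations and are not part of the SLP specification or of any vendor toolkit; only the "service:" URL convention and the attribute-based filtering reflect the protocol itself.

// Hypothetical sketch of SLP-style service discovery; illustrative only.
import java.util.List;

interface SlpLocator {
    // Return service URLs of the given type whose attributes match the filter,
    // e.g. type "service:printer" with filter "(color=true)".
    List findServices(String serviceType, String attributeFilter);
}

class FindColorPrinter {
    static String locate(SlpLocator locator) {
        // SLP answers with service URLs such as "service:printer:lpr://hq-floor3/color1".
        List urls = locator.findServices("service:printer", "(color=true)");
        return urls.isEmpty() ? null : (String) urls.get(0);
    }
}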

The above referenced whitepaper on SLP provides several diagrams to clarify the discussion of such issues as:

1.       Introduction

2.       Protocol Overview

3.       Service Naming and Handles

4.       Keyword and Attribute Grammar

5.       Extensibility and ease of Administration

6.       Other approaches to locating network services

7.       SLP vs. DNSSRV

8.       SLP vs. LDAP

The charter of the IETF's directory-centric specification SLP (Service Location Protocol) is found at:  http://www.ietf.cnri.reston.va.us/proceedings/96dec/charters/svrloc-charter.html

The SLP whitepaper was written by Sun staff and is hosted on a Sun website.  Consequently, one would expect the IETF's SLP and Sun's Jini to have much in common.  Similarly, SLP is closely related to the Salutation Consortium's efforts, as witnessed in the recent "Tech Talk" memo[91] on the Salutation website.

The Technical Committee of the Salutation Consortium is working to enhance the Salutation Architecture to support a directory-based service discovery mechanism that uses the IETF’s Service Location Protocol (SLP).  The intent of the effort is to achieve better scalability of the architecture in large workgroup or enterprise environments.

The current proposal has the Salutation Manager (SLM) searching for an SLP directory agent through multicast, broadcast, or manual configuration.  If one is found, the SLM will defer to the SLP protocol—instead of the Salutation protocol—to register and un-register supported Functional Units with the SLP directory.  Furthermore, the SLM will use the SLP protocol to search for services requested by Salutation client applications.

The Salutation API is designed to make Salutation applications unaware of the underlying transport and discovery protocols.  Since the SLP directory agent can be a gateway to an LDAP-based directory, the Salutation API and SLM provide a single application interface to all three of these protocols.  Salutation, SLP, and LDAP are thus complementary, with Salutation providing a single API into each.
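
The single-API idea can be illustrated with a short, hypothetical Java sketch.  None of the class names below come from the Salutation Specification; the sketch merely mirrors the behavior described above, in which the Salutation Manager defers to an SLP directory agent when one is found and otherwise falls back to the native Salutation protocol.

// Hypothetical illustration of one API fronting multiple discovery protocols.
interface DiscoveryBackend {
    void register(String functionalUnit, String attributes);
}

class SalutationBackend implements DiscoveryBackend {
    public void register(String fu, String attrs) { /* native Salutation capability exchange */ }
}

class SlpBackend implements DiscoveryBackend {
    public void register(String fu, String attrs) { /* SLP registration with a directory agent */ }
}

class SalutationManagerSketch {
    private DiscoveryBackend backend;

    SalutationManagerSketch(boolean slpDirectoryAgentFound) {
        if (slpDirectoryAgentFound) {
            backend = new SlpBackend();        // defer to the SLP protocol
        } else {
            backend = new SalutationBackend(); // fall back to native Salutation
        }
    }

    void registerFunctionalUnit(String name, String attributes) {
        backend.register(name, attributes);    // callers never see which protocol ran
    }
}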

Inferno—being developed by the Inferno Network Software Division of Lucent Technologies—is a revolutionary software platform for network-aware devices and applications, whether consumer devices such as web phones, network elements, or innovative new network-based services.  Inferno's mission is to equip its customers with the necessary software elements to build a successful networked society.  More details are found on the Inferno website.[92]

The Inferno venture was established to rapidly introduce an innovative software platform for information appliances and network elements.  According to the Inferno website, the Inferno platform is targeted for:

1.       Consumer Electronics Manufacturers: Inferno addresses the unique challenge associated with resource-constrained environments—how to provide powerful computing with limited physical resources.

2.       Network Element Manufacturers: The Inferno platform was constructed to simplify networking communications, and to perform across multiple processor and operating system environments. Simple networking and interoperability—all in one.

3.       Network Service Providers (NSP): Inferno introduces service providers to a wide array of new devices and new customer-focused service offerings.  Implementing Inferno-based services can augment an NSP's customer base, increase customer satisfaction, and strengthen customer loyalty.

The Caltech Infospheres Project[93] researches compositional systems—which are systems built from interacting components.  The group is primarily concerned with developing reliable distributed applications by composing existing and newly created software components in structured ways.

The focus of the Infospheres research is to study the theory and implementation of compositional systems that support peer-to-peer communication among persistent multithreaded distributed objects.  Their current example systems and services are implemented in Java and Web technologies; however, the theories, models, and ideas are directly applicable to any distributed component-based system.

The new converged PSTN

In addition to approaches such as those being championed by Microsoft and Sun, several consortia of vendors and service providers have come forward with efforts to define the framework and infrastructure of the new converged PSTN.

At the lowest levels are arguments over the appropriate combination of a connectionless IP packet infrastructure versus a circuit-oriented ATM infrastructure built on small, fixed-size cells.  Should one topology be overlaid on the other?  Can both coexist at Layer 2?  And what about Sonet?  Various proposals have been offered, and various approaches are now being trialed.

Lucent Microelectronics took two key steps toward uniting the burgeoning worlds of Internet Protocol and optical networking, as recently reported[94].

Lucent formally proposed to the Internet Engineering Task Force a standard, dubbed Simple Data Link (SDL), to put IP packets directly on an optical layer.  SDL puts IP packets on an optical layer without intervening Sonet frames or High-level Data Link Control (HDLC) encapsulation.

Lucent also announced that it is sampling its Detroit (Data encapsulation and transport overhead device for point-to-point interface termination) chip set, the first silicon to implement SDL.  Detroit is also the first chip set to offer packet-over-wavelength-division multiplexing without an underlying Sonet frame, and it can likewise be used for ATM-over-WDM, IP-over-Sonet, or IP-over-ATM-over-Sonet-over-WDM—in fact, the CMOS chips can support multiple protocol-stack options in one system.

Diffserv and MPLS (Multiprotocol Label Switching) are two pending IETF standards for providing quality of service on IP networks.  A detailed technical description of each and a comparison of the two approaches are provided in a Data Communications article.[95]

Diffserv uses the IP TOS field to carry information about packet service requirements, operating strictly at Layer 3.  On the other hand, MPLS specifies how to map Layer 3 traffic to Layer 2 transports and adds labels with specific routing information to packets.  MPLS offers extra capabilities such as traffic engineering but requires more investment in routers to implement and is likely to be used mostly at the carrier-network core.  The basic differences between Diffserv and MPLS could affect everything from costs to compatibility.

Diffserv relies on traffic conditioners sitting at the edge of the network to indicate each packet’s requirements, and capability can be added via incremental firmware or software upgrades; MPLS requires investment in a network of sophisticated label-switching routers capable of reading header information and assigning packets to specific paths like virtual circuits on a switched network.
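
By way of illustration, the short Java fragment below marks a connection with a Diffserv code point by setting the IP TOS byte on a socket; java.net.Socket exposes a setTrafficClass method for this purpose, although whether the operating system actually honors the value is platform-dependent.  The host name, port, and choice of the Expedited Forwarding code point (46) are illustrative assumptions only.

import java.net.InetSocketAddress;
import java.net.Socket;

public class DiffservMarkingSketch {
    // Expedited Forwarding DSCP (46), shifted into the upper six bits of the
    // IP TOS byte that Diffserv redefines.
    private static final int EF_TOS = 46 << 2;

    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        socket.setTrafficClass(EF_TOS);  // request EF treatment for this connection
        socket.connect(new InetSocketAddress("voice-gateway.example.net", 5060)); // hypothetical endpoint
        // Traffic sent on this socket now carries the Diffserv marking,
        // to be acted upon by conditioners at the network edge.
        socket.close();
    }
}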

Another important issue to be resolved is how the SS7-based control protocols of the current PSTN and the IP-based protocols of the Internet world could, and should, be merged and converged.  Several major efforts have been underway and are now beginning to converge and consolidate.  From a business perspective, various groups, consortia, etc. are organizing themselves.  From a technology perspective, various protocols, API's, etc. are being proposed and developed by these efforts.

One such example in this area is IPS7, a proposal recently put forward[96] by Nortel.  According to Oscar Rodriguez, general manager of Nortel's signaling solutions group:

"Voice-over-IP networks can now have carrier-class reliability. … IPS7 is the next generation of signaling. It brings intelligent network services into the IP world."

Rodriguez also indicated the support of Cisco Systems, Lucent Technologies, and Ascend Communications, unconventional allies that would help bring the proposed standard to resolution quickly.

In the same issue of Internet Telephony, Bellcore and Level 3 Communications announced[97] the convergence of their respective companies’ efforts to develop specifications for integrating the Internet and the PSTN:

The new convergence specification—called the media gateway control protocol (MGCP)—is to be submitted for discussion at the Internet Engineering Task Force's December meeting.  MGCP combines Bellcore's simple gateway control protocol (SGCP) with Level 3's Internet protocol device control (IPDC).  MGCP will operate mostly at the interfaces between IP and circuit-switched networks.

The greatest significance of MGCP is the removal of call processing intelligence from media gateways, allowing them to scale almost infinitely into "gateway farms" without the need to insert service control logic and its accompanying data into each gateway.  MGCP will centralize these processing functions externally, allowing the gateways to grow.
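
To give a feel for this division of labor, the sketch below has an external call agent send a CreateConnection (CRCX) command to a gateway endpoint over UDP, leaving all call processing intelligence outside the gateway itself.  The message layout follows the general style of the draft protocol, but the endpoint name, call identifier, host, and port are illustrative assumptions, and the draft was still evolving at the time of writing.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class MgcpCrcxSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative CRCX command: verb, transaction id, endpoint, protocol version,
        // followed by parameter lines for call id, local connection options, and mode.
        String crcx = "CRCX 1001 aaln/1@gw1.example.net MGCP 1.0\r\n"
                    + "C: A3C47F21456789F0\r\n"
                    + "L: p:20, a:PCMU\r\n"
                    + "M: recvonly\r\n";

        byte[] payload = crcx.getBytes("US-ASCII");
        DatagramSocket callAgent = new DatagramSocket();
        callAgent.send(new DatagramPacket(payload, payload.length,
                InetAddress.getByName("gw1.example.net"), 2427));  // hypothetical gateway address and port
        callAgent.close();
    }
}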

The formation of one such group of telecommunications carriers, called the Packet Multimedia Carrier Coalition, was recently reported in two related articles[98].

The formation of this coalition is aimed at easing the transfer of voice and data between IP-based networks and traditional circuit-switched phone networks.  According to David Powers, director of corporate marketing at Level 3 Communications:

“The coalition's top priority is to push establishment of protocols that bridge the circuit-based, public switched telephone network (PSTN) and Internet protocol (IP) networks.”

This group of new carriers hopes to increase their clout with the International Telecommunication Union (ITU) and the Internet Engineering Task Force (IETF) in determining protocols that will govern the future of the communications industry.  In particular, the group will support the IETF's proposed Media Gateway Control Protocol (MGCP), a hardware and software specification, when it is finished.

The Packet Multimedia Carrier Coalition plans to develop protocols that would enable new network services for Internet appliances, such as IP phones or personal digital assistants that receive voice, data, or video anywhere an IP connection is available.  According to Mark Hewitt, senior director of engineering and product development for coalition member Frontier Communications:

"This will open the networking market to any creative mind that wants to create a new network application."

In the opinion of Doug Crawford, director of network and telephony technical planning for Kaiser Permanente:

"The more the carriers get away from proprietary network platforms, the quicker smaller vendors will be able to introduce new applications."

The Multiservice Switching Forum (MSF) is another group that has organized to develop specifications to let access devices, switches, and network controllers interoperate in service provider facilities, as recently reported[99].

The vendors and carriers of the MSF hope to speed up development of multiservice carrier networks that can handle voice, video, and data traffic.

Officials said the group will utilize standards developed by existing bodies, including the ATM Forum, the Internet Engineering Task Force, the Frame Relay Forum, the International Telecommunications Union, and the Bellcore Generic Requirements process.  Although these groups develop standards for their own technology areas, an organization is needed to make the various components of a multiservice network interoperate.

A standardized set of interfaces for products such as voice gateways, ATM and IP switches, and separate network control devices will allow service providers to build multiservice networks without independently certifying each element, the officials said.  Services could be rolled out more quickly, and the standards would foster competition and downward pressure on costs.

Charles Corbalis, Vice President and General Manager of Cisco's Multiservice Switching Business Unit, explained the strategic purpose of the MSF:

"The MSF is dedicated to an open systems model that will expedite the delivery of new integrated broadband communications services to the marketplace. … Multiservice switching systems will benefit from the same innovation and cost reductions that open systems in the computing world have achieved."

The founding members have proposed provisional Implementation Agreements for the Architecture and the Virtual Switch Interface (VSI) protocol for Switch Control.  The MSF also endorses and is contributing to the IETF activity to standardize the Media Gateway Control Protocol (MGCP) for supporting Voice over Internet Protocol (VoIP) and Voice over Asynchronous Transfer Mode (VoATM) services.

As strategic and encompassing as the convergence of the Internet and the PSTN is—as reflected by the activities presented above—there is another dimension to the convergence problem that is just as important.  While the IP revolution originated in the United States, the mobile revolution is most strongly European.

The percentage of people with mobile phones is higher in Western Europe than anywhere else in the world—30 percent versus 25 percent in the United States.  Mobile data services are more developed in Europe; and now the trend known as fixed/mobile integration (FMI) is advancing there first[100].

As the role of the circuit switch in both the fixed and mobile network disappears over time, the shift from circuits to packets will create new opportunities for fixed-mobile integration and pose some tough challenges.  Most service providers and operators generally agree that FMI over IP will begin in the core transmission network and spread to the edge.

According to Dick Snyder, wireless strategy director at Lucent Technologies Inc.,

"There's no doubt that fixed and mobile network capabilities will converge, and what will bring them together is IP. … Does that mean the wireline and wireless worlds will have the same capabilities?  No, the wired network—from a bandwidth and speed perspective—will always be ahead."

Packet-switched data service is scheduled to hit the airwaves by the end of next year, but whether the same airwaves will carry voice over IP (VoIP) remains to be seen.  Because headers containing address information can add a "packet tax" of up to 40 percent to any voice transmission, IP is viewed as an inefficient protocol to run directly over the airwaves.  That problem is exacerbated when the system requests retransmission of lost packets.
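
The arithmetic behind the "packet tax" figure is easy to reproduce.  Assuming roughly 40 bytes of IPv4, UDP, and RTP headers per voice packet (an assumption, since header compression schemes change the picture), the sketch below shows how the overhead fraction depends on the size of the voice payload; the payload sizes chosen are illustrative.

public class PacketTaxSketch {
    public static void main(String[] args) {
        int headerBytes = 20 + 8 + 12;        // IPv4 + UDP + RTP headers, roughly 40 bytes
        int[] payloadBytes = { 160, 60, 20 }; // e.g. 20 ms of G.711, a mid-size frame, a compressed frame

        for (int i = 0; i < payloadBytes.length; i++) {
            double tax = 100.0 * headerBytes / (headerBytes + payloadBytes[i]);
            System.out.println(payloadBytes[i] + "-byte payload: "
                    + Math.round(tax) + "% of each packet is header");
        }
        // A 60-byte payload yields roughly the 40 percent figure cited above;
        // heavily compressed frames fare even worse.
    }
}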

The long-term prospect—especially with the developing world's adoption of wireless technology—is that most of the world's voice traffic can be expected to originate and terminate on wireless devices.  The backhaul network should be prepared to match the performance and efficiency requirements that the wireless environment places on carrying voice.

The new converged home network

A number of organizations have formed to work at defining the infrastructure of the networked home.  Each group approaches the networking of the home from its own particular perspective—the wireless industry, the appliance industry, the multimedia industry, etc.  Some of the major organizations—consortia, forums, etc.—announced thus far include:

1.       Bluetooth – http://www.bluetooth.com/index.asp

2.       Home RF – Home Radio Frequency Work Group – http://www.homerf.org/

3.       Home PNA – Home Phoneline Networking Alliance – http://www.homepna.org/

4.       Home API – http://www.homeapi.org/

5.       ETI – Embed the Internet – http://www.emware/eti

6.       HAVi – Home Audio-Video interoperability – http://www.havi.org/

7.       AMIC – Automotive Multimedia Interface Consortium

8.       TSC – Telematics Suppliers Consortium – http://www.telematics-suppliers.org

9.       Open Service Gateway – the convergence of the above efforts – http://www.osgi.org/osgi_html/osgi.html

Bluetooth and Home RF are focused on defining a wireless network infrastructure for the home.  Home PNA focuses on a network overlay of the phone wiring already installed in the home.  Home API is focused on the systems middleware for the networked home's appliances.  ETI is focused on the hardware devices that are to be made network intelligent.  HAVi seeks to provide network interoperability to the multimedia devices of the home—the VCR, TV, etc.  AMIC is defining standards for an embedded automobile network.

Bluetooth is named for the 10th-century Danish king who unified Denmark.  Its member companies will create a single synchronization protocol to address end-user problems arising from the proliferation of mobile devices that need to keep data consistent from one device to another.  Such devices include smart phones, smart pagers, handheld PC's, and notebooks.  Vendors choosing to participate would include Intel's chip set in their devices, enabling the devices to identify themselves and transfer data using proximity-based synchronization.

The mission of the HomeRF Working Group is to enable the existence of a broad range of interoperable consumer devices, by establishing an open industry specification for wireless communications in the home.  The proposal would use unlicensed RF spectrum to enable digital communications for PC’s and consumer devices anywhere, in and around the home.

The specification of this group—which includes the leading companies from the personal computer, consumer electronics, peripherals, communications, software, and semiconductor industries—is called the Shared Wireless Access Protocol (SWAP).  The SWAP specification—on target for release at the end of 1998—defines a new common interface that supports wireless voice and data networking in the home.

The Home Phoneline Networking Alliance (HomePNA) has been formed to develop specifications for interoperable, home-networked devices that would use the phone wiring already in place.  In particular, this implementation must be compatible with the ADSL-lite (splitterless-ADSL) technology that many telco’s are planning to offer—as the two will be using the same existing phone wiring.

The Home API Working Group was organized by Compaq Computer Corporation, Honeywell, Intel Corporation, Microsoft Corp, Mitsubishi Electric, and Philips Electronics.  This group is dedicated to broadening the market for home automation by establishing an open industry specification that defines a standard set of software services and application programming interfaces that enable software applications to monitor and to control home devices.

The goal of the group is to provide a foundation for supporting a broad range of consumer devices by establishing an open industry specification that defines application programming interfaces (API’s) for the home network which are protocol and network media independent.  This will enable software developers to more quickly build applications that operate these devices.

In addition, they will allow both existing and future home network technologies such as HAVi, Home PNA, Home RF, CEBus, Lonworks, and X-10 to be more easily utilized.  Furthermore, it should also be possible to integrate control of existing A/V devices (using IR-based control, for example) into one system.
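
Purely as a sketch, a protocol-independent home-device interface of the kind the Home API group describes might look something like the following.  None of these interface or method names come from the group's specification, which had not been published at the time of writing; the point is only that one call works regardless of which network technology sits underneath.

// Hypothetical sketch of a protocol- and media-independent home-device API.
interface HomeDevice {
    String getName();
    void sendCommand(String command);      // e.g. "ON", "OFF", "DIM 30%"
    String queryState();
}

interface HomeNetwork {
    // Enumerate devices regardless of whether they sit on HAVi, Home PNA,
    // Home RF, CEBus, Lonworks, or X-10 underneath.
    HomeDevice[] listDevices(String category);   // e.g. "lighting", "hvac"
}

class EveningScene {
    static void run(HomeNetwork home) {
        HomeDevice[] lights = home.listDevices("lighting");
        for (int i = 0; i < lights.length; i++) {
            lights[i].sendCommand("DIM 30%");    // same call, any underlying protocol
        }
    }
}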

The following are potential application scenarios.

1.       Home Automation and Security

2.       Home Entertainment

3.       Energy Management

4.       Remote Monitoring and Control

5.       Computer/Telephony/CE Integration

The Home API Working Group—dominated by software and systems vendors—is dedicated to defining a standard set of software services and application programming interfaces.  In contrast, other groups with more of a hardware and component focus have offered lower-level appliance-based solutions.

The Embed The Internet (ETI) consortium is one such effort.  While many devices—utility meters, vending machines, thermostats, elevators, etc.—are controlled by 8- and 16-bit microcontrollers, most proposed networking architectures call for 32-bit microprocessors in each device.  A stripped-down Web server is then stuffed into the device, taxing resources, increasing costs, and sometimes still lacking full Web server functionality.  This approach is inappropriate for existing 8- and 16-bit devices, since it requires a complete retooling of devices to 32-bit microprocessors—an expensive proposition in itself.

Embed The Internet takes an alternative view based on traditional standards. Implementations of embedded device networks will come more rapidly if existing devices can be networked with a cost effective, but powerful, solution.  A truly open device networking architecture must be appropriate for devices ranging from those with 8-bit microcontrollers on up.  Internetworking resources are distributed across the network according to individual needs, providing full functionality with maximum flexibility and freedom of choice.

HAVi, an abbreviation for Home Audio-Video interoperability, addresses the interconnection and control of AV electronics appliances on the IEEE 1394-based audio/video home network.  The HAVi core specification—the core home-networking software layer for AV electronics appliances—is being actively promoted as a home network standard for the AV electronics and multimedia industries.

For different brands of AV electronics appliances to interconnect and to interoperate, each appliance must incorporate middleware that contains certain software elements common to all appliances on the network.  The core of this open home network specification defines these elements, their roles, and their functions.  In addition, it ensures that the software elements of different appliances will work together.

The automobile is another area, besides the home, where effort is underway to embed network interoperability.  Six leading carmakers have banded together to create a standard that defines a common way for information, communications, and entertainment systems to interact with the electronics in an automobile.

The Automotive Multimedia Interface Consortium (AMIC) hopes to complete its work in the next few months, and to see its standards deployed in about three years.  The group has announced support for the ITS Data Bus, an emerging hardware specification, and its members plan to write software that will allow consumer products to work together in the automotive environment.

Their goal is to create a common way for various electronic products to be plugged into different cars while retaining their ability to work together.  For example, a navigation system, PDA, pager, and other products could share a single screen in a vehicle, with data from one item driving a response from another.

This concept is similar to the home network where the LCD display on the refrigerator, or the television in the family room could provide the display function for any of the smart appliances of the networked home.

The technical foundation for a common hardware interface has been under way for some time under the auspices of the Society for Automotive Engineers (SAE).  The hardware interface, which is based on the IEEE 488 specification, will provide a single connection scheme using connectors currently available from Molex Inc. and AMP Inc.  The physical link will be augmented by software that is now under development.  It will probably use a Java API that will allow products to communicate and share information.

On October 19, 1999, telematics industry leaders announced plans to create a Telematics Suppliers Consortium (TSC) to facilitate communications with the AMIC and to lead to the development of open, non-proprietary standards from the vehicle out to telematics services.  Telematics is an emerging market of automotive communications technology that combines wireless voice and data to provide location-specific security and information services to drivers.

The convergence of these various efforts is already in process.  One example is the recently announced Open Service Gateway (OSG) alliance, reported in a news article.[101]

The alliance stated its aim to secure ways for Internet-based service businesses to deliver home services like security, energy management, emergency healthcare, and electronic commerce.  Alliance membership includes telecommunications equipment suppliers like Alcatel, Cable & Wireless, Ericsson, Lucent Technologies, Motorola, and Nortel Networks.  Also participating are computer companies IBM, Oracle, and its Network Computer Incorporated affiliate, Philips Electronics, Sun Microsystems, and Sybase, as well as U.S. energy giant Enron.

The Open Service Gateway will be based entirely on Java.  The working group plans to publish an initial version of the OSG specification by the middle of 1999.  By the end of the third quarter of this year, a number of products based on the standard are expected to be on the market, including a home network system from IBM that connects multiple PC's in a household.

In addition to these consortium-led efforts, several major companies—Seiko, Toshiba, etc.—in the appliance arena are partnering with high-tech start-ups in their search for lightweight approaches to embedding Internet functionality.  The focus of their efforts is to develop Internet-ready appliance components that are completely PC and OS-independent—being implemented entirely in hardware.

One such example is iReady, a startup company that is attracting much interest[102] with its Internet-ready LCD's—called the Internet tuner.  The small iReady LCD panels feature a chip-on-flex (COF) module with built-in network, e-mail, and Web-browsing capabilities that allow embedded designers to add, for a nominal cost, TCP/IP network features to their systems.

The iReady core supports Compact HTML, currently proposed as a standard by a Japanese software company called Access.  The strategic significance of this development is clear, according to Ryo Koyama, chief executive officer and president of iReady:

"Internet connectivity has now become truly a drop-in feature.  … The pendulum is swinging back to a dedicated, hardwired engine once again, especially for small consumer devices.  We think that the days of trying to do everything in software on a powerful CPU are over."

There are several advantages to this approach—such as a considerable reduction in the power consumption of a handheld system, to 1/50 of what it would be with a conventional system, and a shorter time-to-market.  At a time when the product cycle, especially for cell phones in Japan, is shrinking to six months, it is clear that development methodology has to change.

Embedding Internet-ready functionality into an appliance has the potential to facilitate applications that have little to do with accessing email, schedules, or websites.  These devices, for example, could use Internet protocols not necessarily to search Web pages, but to download specific types of information available on a certain network.

A case in point is adding TCP/IP protocols to a refrigerator.  It may sound farfetched, said Koyama, "but once [the fridge] is networked with a home security system, that very network capability allows people to find out how many times their old parents living in the next town have opened their refrigerator in a day and whether they've been eating properly."

Products based on this technology are now ready to appear.  Seiko Instruments is sampling three prototype LCD panels outfitted with the COF iReady tuner, which incorporates network, e-mail, and Web-browser functions.  Mass-produced panels will become available early in 1999.  Toshiba already has begun sampling its customizable Internet ASIC at $17 per unit.

Toshiba also will provide iReady's Internet tuner as an IP core for its ASIC business.  The IP (intellectual property) version of this ASIC means that many other FPGA's, ASIC's, etc. soon will follow with this internet-ready capability—minus the OS, etc.

Efforts to bring the networked world into the home environment were reported[103] recently.  The quest to design a killer convergence product such as a PC/TV is over.  Instead, engineers at the top suppliers have set about developing a host of distributed, connected digital devices in an environment where networking, Java, and a good deal of partisan politics among technology factions are on the rise.

One area where consensus appears to be forming is in the need for connectivity, generally enabled by Java.  Though sources said no one company or technology will dominate the digital consumer space in the way Wintel has ruled the PC, Java has gained dominance in the consumer-electronics industry, said Koomen.

Key consumer players along with several other companies are working to define a digital-TV application programming interface, tentatively called Java.TV.  Java.TV is not a subset of Personal Java or Embedded Java, but rather a set of API’s defined to be suited for television.
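
Although the API was only tentatively defined at the time of writing, a television application under such a scheme would plausibly follow a small lifecycle contract, much as applets do on the desktop.  The interface below is a hypothetical sketch of that idea, not the Java.TV specification itself.

// Hypothetical sketch of a TV-application lifecycle in the spirit of Java.TV.
interface TvApplication {
    void init();     // called once when the receiver loads the application
    void start();    // called when the viewer tunes to the associated service
    void pause();    // called when the application must give up the screen
    void destroy();  // called before the receiver unloads the application
}

class ProgramGuideApp implements TvApplication {
    public void init()    { /* allocate resources, read service information */ }
    public void start()   { /* draw the guide over the broadcast video */ }
    public void pause()   { /* release the display but keep state */ }
    public void destroy() { /* free everything */ }
}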

In the meantime, Microsoft continues its efforts to 'embrace' this market too.  Microsoft Corp., together with Thomson Consumer Electronics, for example, is now working to define what is necessary for the next-generation television—which they have dubbed eTV.

On the other hand, efforts with Java are much further along.  In addition to its contribution to the TV effort, a Java Virtual Machine is expected to go inside many advanced digital consumer systems, serving as a glue and as a run-time environment.

According to Rodger Lea, vice president of the Distributed Systems Laboratory at Sony, the result will be “a higher level interoperability” among devices compliant with the HAVi home networking spec agreed upon last year.  A set of HAVi API's based on Java will give an independent consumer system the power to remotely execute applications, provide a graphical user interface or upload Device Control Modules written in Java byte code.

The home network strategy expressed by Rodger Lea is:

Indeed, networking is becoming a mantra for consumer companies.  Their object is to build a home network infrastructure so that "suddenly, a newly bought digital consumer appliance is no longer just another standalone box," irrelevant to the rest of the systems.  Connectivity, or distributed computing power on the home network, should breathe new life, new value, and new capabilities into home digital consumer electronics.

“A user ultimately shouldn't even have to care which device within the home needs to be activated in order to listen to his or her favorite song. … We can just display a list of contents to consumers. All consumers have to do is to choose what they want to hear or watch.”

This spring, Panasonic plans to launch a 5.7-GHz wireless PC multimedia transceiver system called MicroCast.


How Does GTE Respond?

Transcending consideration of any specific example of an emerging technological breakthrough, general megatrend changes driven by these advances are already happening—independent of which particular breakthrough occurs, and of when it occurs.  Three such megatrends presented earlier in this paper are already profoundly affecting the way GTE does business: 1) appliancization, 2) mass customization, and 3) convergence.

The critical question for GTE to ask and to answer is "What corporate climate—organizational structure, employee mindset, etc.—is most conducive to nurturing and furthering the innovation that will characterize those companies that successfully compete in the new digital economy?"

This section examines the implications of this question for GTE from three perspectives:

1.       Organizational—What kind of infrastructure provides the flexibility both to take advantage of the many golden opportunities that will arise and to minimize the continual obsolescence of existing programs, services, etc.?

2.       Technological—How does GTE take advantage of the multitude of emerging technologies?  What policies, methodologies, etc. should GTE adopt and adapt?

3.       Cultural—What is the mindset that GTE should cultivate and nurture within its employees—its most valuable resource?  What kind of actions and policies—formal and informal—should GTE introduce?

These three perspectives are in fact intertwined.  Recall that this paper began in its “Introduction” with the identification of four policies that will characterize—define the climate of—those organizations that would be successful in the new digital economy:

1.       They must innovate beyond what their markets can imagine.

2.       They must understand the needs of their customer’s customer.

3.       Their organization needs a deep-seated and pervasive comprehension of emerging technologies.

4.       They need a climate in which risk taking is not punished, creativity can flourish, and human imagination can soar.

The principles espoused within this document are now being echoed throughout the IT world.  Citing the recent strategic partnering announcement between IBM and Dell, valued at $16 billion, Bob Evans, Editor-in-Chief of InformationWeek, briefly explained the strategic significance of innovation in his Letter from the Editor on the inside cover of the March 15, 1999 issue[104]:

I would submit that it's a careful blend of three essential and interlinked pieces: [1] corporate culture, [2] knowledge, and [3] innovation.  Without the right culture permeating an organization, risk isn't rewarded, change is stifled, constructive criticism is unwelcome, and the focus remains on competitors instead of on customers.  On the flip side, though, companies that have fostered forward-looking cultures usually find that knowledge flows freely among employees and appropriate partners, allowing better decisions to be made more quickly and promoting a focus on understanding customers better and communicating with them more effectively.

When those two pieces come together, innovation can flourish: Unswerving focus on customers allows IT organizations to think in new ways and deliver new capabilities, driven not by budgets and what's been done in the past but rather by the power that new thinking and partnerships can deliver.  It's the world of E-business, which is more a philosophy than a product or a technology.

Why must GTE monitor technology?

While GTE may not actively conduct R&D—or even conduct independent product development—in all the areas presented in this paper, GTE must constantly evaluate the potential impact that such breakthroughs can have on its business.  For example, GTE does not conduct R&D in the area of silicon wafer fabrication, or in how systems are built using such new technologies as SOC (system-on-a-chip).

However, GTE does use products and services that are dependent upon the capabilities that such breakthroughs make possible.  Revolutionary changes in technology can mean revolutionary changes in the products and services that they enable.

According to William Storts, managing partner of Andersen Consulting[105]:

"Technology is simultaneously an unstoppable catalyst for change, a colossal problem, and a strategic solution for just about every financial services firm."

While this statement was focused on the financial services industry, it is in fact applicable to all segments of the new digital economy.

New devices, features, services, business models, etc.—that until recently were only dreamed of by most people—become not only feasible and demonstrable, but in fact very practical and economical.  At the same time, they raise serious concerns about the viability of what previously have been considered rock-solid business propositions.

Even the technologically astute are susceptible to being surprised by how fast technology is evolving.  A number of the technological breakthroughs that now appear in the "Emerging Technologies" section of this evergreen document were as yet unannounced and unexpected when the writing of this document commenced during the spring of 1998.  Other breakthroughs that were then projected to produce tangible—commercialized—results sometime over the horizon are now expected to produce commercial results in 1999.

Each particular research group within GTE's R&D organizations is subject to being blind-sided by events (technology breakthroughs) in other areas of technology that have the potential to radically affect its own specific area.

Which combination of these perceptions or consequences of technology GTE experiences will depend upon the vigilance with which GTE remains alert to, and prepared for, these technological breakthroughs.  All ongoing efforts, as well as any proposed new efforts, must be regularly evaluated in terms of such events.  This document provides examples of the type of forward-looking analysis that is required if GTE is to receive the best return on its investments in general R&D, product and service development, infrastructure procurement, etc.

Two major reasons for the monitoring of technology thus are demonstrated: 1) obsolescence, and 2) opportunity.

In the first case, GTE does not want to be investing in new or existing devices, features, services, business models, etc. that emerging technology is about to make obsolete.  In the second case, GTE does want to take advantage of new opportunities, not only to be more efficient in what it currently does, but in fact to be more effective by finding better things to do.

The prior section of this paper, "Technology's new role—key enabler of the digital economy," explained the two-edged-sword nature of technology—how it is not only a key enabler but also a key leveler.  A company cannot wait for technological breakthroughs "to come knocking at the door."  In the new digital economy, everyone—in particular, one's competitors—has ready access to the same technologies.  The company that hesitates is apt to be left behind.

This transformation by today’s companies in their appreciation and valuation of technology—from a position of technology watcher to one of technology-enthusiast—is typified in the above mentioned section by the discussion of the rush to enter the E-commerce world by Barnes and Noble and by Borders bookstores.  Both companies have recently confronted and responded to urgent pressure to develop web-based strategies that could compete against Amazon.com.

The previously discussed characteristics of appliancization are beginning to pervade—to an ever increasing extent—all areas of technology.  The explanation of why this is happening is simple.  Increasingly, the new breakthrough value in a given technology is highly informational, or knowledge-based—representable in digital form.  The almost realtime reaction time of Internet technology—the time for a competitor to duplicate, and even to surpass, a current offering—means that one should never expect to find a place to rest, and never expect to achieve and sustain a status quo.

The increasing importance and role of intellectual property—discussed later in the “Emerging Technologies” section of this paper—is an example and indication of this trend.  Intellectual property—which can be digitally represented and managed—is readily transferred between those who would buy or sell it, readily integrated into ever increasingly complex systems, etc.

Preparing organizations for innovation!

Monitoring and responding to technological breakthroughs—in and of itself—will not be enough to assure that a company survives—let alone flourishes—in the new digital economy.  Proactive monitoring of technology is a necessary but not a sufficient condition for the success of a company.

At the beginning of this major section “How Does GTE Respond?” the question regarding organizational considerations was raised: “What kind of infrastructure provides the flexibility both to take advantage of the many golden opportunities that will arise, and yet minimize the continual obsolescence of existing programs, services, etc. that also is occurring?”

Part of the answer to this question has already been presented in the section "Convergence yields virtual corporations."  The virtual corporation is the epitome of the Law of Diminishing Firms that Nicholas Negroponte described in his Introduction to the book, Unleashing the Killer App—digital strategies for market dominance.  This law predicts:

Firms will not disappear, but they will become smaller, comprised of complicated webs of well-managed relationships with business partners that include customers, suppliers, regulators, and even shareholders, employees, and competitors.

The term smaller in the above quote is not so much used to indicate absolute size, but relative size.  That is, the minimal critical mass—of functionality, resources, etc.—required to be held explicitly by a company—under its direct uncompromised control—is much less in the new digital economy than in prior times.

By analogy, consider the amount of memory and disk capacity that a desktop computer needs to perform a typical office automation task.  In a standalone, self-contained scenario, the computer may need many megabytes of memory in which to load the application, and gigabytes of disk capacity to store all the applications and associated data.  In a high-performance network scenario, that application and the associated data may all be hosted—virtualized—anywhere in the network.

One beauty of the Web paradigm is that from one user interface a person can access and interact with any number of other resources located virtually (no pun intended) anywhere in the world!  The type and magnitude of investment in resources needed locally—that is, under one's direct control—is completely rethought.  Collaboration and coopetition become the new operatives.

Convergence is now the preferred approach to achieving competitive efficiencies, while enhancing the ability of a company to adapt—in realtime—to the customer’s ever-changing demands.  In particular, a strategic dependence on technology—the critical enabler of customer personalization—is one of the distinguishing characteristics of the virtual corporation.

The section “Whence the virtualization of a company?” explained the process of corporate virtualization now transforming our economy.  As was noted, this transformation is not restricted only to the information-focused industry segments, such as the media industry.  Every segment of the economy—regardless of the nature of its end products and services—is increasingly dependent upon the flow and management of information.

This flow of information associated with each given industry provides the natural basis, or starting point, for the virtualization of that industry.  The content, or multimedia, industry is an example of an industry where information content constitutes its primary purpose for existence.  Other industries, such as the manufacturing sector, have tangibles—automobiles, airplanes, etc.—as their primary reason to exist.

Information has become a two-edged sword.  All sectors are more and more finding that they are information-driven—the problem—and that they are information-enabled—the solution.  This paradoxical situation was typified by the example of Boeing given in this paper in the above mentioned section.  Their problem—their inability to deliver airplanes on time and within budget—was traceable to their poor management and leveraging of information—at all levels.  Their solution has been to create an organization where information could flow in realtime to wherever it is needed.

That section noted how Boeing has gone on the record as declaring itself in the process of strategically re-engineering itself into a virtual company.  According to William Barker, manager of the project, called Boeing Partners Network:

"If you look at the suite of applications coming up on our extranet, what you're looking at is the creation of a virtual company, … It's not just Boeing entities that now make up the company.  Suppliers, customers and partners extend the span of Boeing.  They have the same data we have.  They see metrics from the same source."

Another example of the new virtual corporation, closer to the information and communications industries, is that of America Online Inc. (AOL), Netscape (now purchased by AOL), and Sun.  Sun Microsystems announced[106] that, along with AOL and Netscape Communications Corp., it had formed an alliance consisting of Sun and Netscape employees to develop and deploy e-commerce solutions based in part on Netscape's software line.

As part of the triumvirate's strategy, 1,000 Netscape employees in the newly formed AOL division, the Netscape Enterprise Group, will be working with 1,000 Sun employees to continue to develop the Netscape software.  According to a Sun spokesperson, the Sun and Netscape contingents will constitute a "virtual company" and will act independently.  John Loiacono, Sun's vice president of brand marketing, explained how this virtual company would function:

"All three of the companies had separate e-commerce strategies, and we could have gone our separate ways, but the fact is that the strategies are complementary.  AOL has 30 million eyeballs to bring to the table, Netscape has great middleware, and Sun has the system infrastructure."

Other recent events signal that the beginnings of corporate virtualization have already touched the telecommunications industry, and significant results now are being manifested.  Two recent restructuring announcements[107]—one between IBM with AT&T, another between MCI WorldCom with EDS—are examples of such virtualization.

Electronic Data Systems and MCI WorldCom agreed to a $17 billion computer-services deal involving a swap of assets and 13,000 employees.  In an earlier announcement, AT&T agreed to buy IBM's global communications network and the two companies agreed to contract services to each other worth about $9 billion.

In both of these instances, each company focuses on its core competencies—while depending on the virtualized (out-sourced) flow-through of other, non-core functionalities provided by the other member company—in real-time.  There must be a transparency, so that the ultimate customer of either company is unaware of where one company's contribution ends and the other's begins.

During this first phase of corporate virtualization, the focus of the virtualization process will be on such strategic static partnering. In time, the virtualization process will transition to include the leveraging of dynamic partnering as a means of fulfilling customer needs, meeting functional requirements, etc.  The key to making this happen is the real-time flow of meaningful information and knowledge, as previously explained.

Information must be able to flow—to be shared, collaboratively—unencumbered not only within all parts of a company but between it and all of its suppliers, its customers, etc.  For this to be possible, everyone and everything in the virtual corporation must be connected—so that each may contribute to its fullest capability in realtime to both the long-term health as well as to the immediate bottom-line of the company.

Managing technological innovation

The term information normally associated with such a discussion generally conjures up images of shipping and receiving records, of personnel records, of customer accounts, of realtime data acquisition on a manufacturer's shop floor, etc.—the traditional organizational databases.  There is, however, another critical source of information—one that is much more knowledge-intense than, say, a personnel record or the record of a product's shipment.

This other critical source of information was alluded to in the section “How Does GTE Respond?” with questions regarding technological considerations: “How does GTE take advantage of the multitude of emerging technologies?  What policies, methodologies, etc. should GTE adopt and adapt?”

This additional critical source of information is the knowledge about the technologies that are becoming increasingly critical to the success of a business in the digital economy.  This technological knowledge base is becoming less static or steady-state, and more transitory, evolving, expanding all the time.  The effective management of this knowledge will become increasingly critical to the success of a company in the new digital economy.

A previous section “The Internet—the epitome of convergence” explained how the Internet typifies the convergence of the new digital economy.  The Internet is perhaps the best known epitome of how to create and to leverage an interoperability that accomplishes the delicate balancing of the forces of mass production and mass customization—together with balancing the pull of customer demand and the push of technology enablement.

The killer application of the Internet is the ability of end-users working with live data—only one webpage away—to make realtime operational decisions.  For the first time in the history of computing, business end-users are able to work with live data to make operational decisions.

One particular technology-focused aspect of the Internet is the value that has been placed on openness.  For a technology to be adopted as an Internet standard, it must be made available ubiquitously to all Internet participants.  In the area of software, the concept of open source is now becoming an Internet mantra.

Recently, an article by Bob Young, CEO of Red Hat, a supplier and supporter of the open source Linux operating system, appeared in Linux World[108].

In this article, Mr. Young addressed the question, "Why does the world need another OS?"  When the question is posed from a purely technological viewpoint, his answer is, "Probably not!  If it's to succeed, Linux must prove to be more than just another OS."

He then posed the question again from another perspective: "We should instead ask if Linux represents a new model for the development and deployment of OS's."

And the answer is: Linux, and the whole open source movement, represents a revolution in software development that will profoundly improve the computing systems we build, now and in the future.

The main difference between Unix and Linux is not the kernel, the Apache server, or any other set of features.  The primary difference between the two is that Unix is just another proprietary binary-only OS.

Mr. Young argues the PC model of openness was successful—not because of IBM’s name recognition, but because “consumers love choice.”  IBM’s decision to publish the specs for building a PC enabled a multitude of clones to appear.  Many vendors entered the market—often targeting their wares at some specific niche—say, lab equipment.

The principles behind this new business model are not unique to the PC industry. This new business model is yet another example of how an industry leverages the principles of mass customization that have been previously discussed in this paper.  In particular, the section “Mass production versus mass customization” focused on the relationship and balance that exists between these two approaches to product and service delivery.  Mass production is product focused—at the efficient manufacturing of products.  In contrast, mass customization is customer focused—at the effective servicing of customer relationships.

More recently, the entire information and communications industries are turning to this new model—which seeks to balance the assumptions, the methodologies, and the advantages of mass production with those of mass customization.

Today, the IT—information technology—industry is accustomed to an approach to systems, software, and hardware development, delivery, and support that is based upon and is well suited to the mass production mold of thinking.  The methodologies that have been developed previously to support this industry are mass production focused.

The end-game of their process is a product that can be deployed and supported through its life-cycle.  As the term life-cycle suggests, the product has a life of its own—so to speak.  The customer of that product must live with what he gets, and hope that any problems are fixed and that any needed features and enhancements are added with the next release.

This traditional model—with its mass production roots—has worked fairly well for a world where innovation in technology, new business models, etc. came slowly—say, multi-year planning cycles.  Now that innovation has become every company’s watch-word, approaches that are more responsive—almost in realtime—to changing requirements are required.  This transformation was explained previously in the section “Technology’s new role—key enabler of the digital economy.”

Contrast this mass production focused approach with that of the open source movement—which is more aligned with the philosophy and methodologies of mass customization.  A good explanation of the mindset exhibited by this movement is presented in an article[109] in NewMedia.  The claim and invitation of the open source movement is stated:

Open Source will change your business forever.  Step into the flow of evolving software, make changes, and become part of a brain trust solution.

The Internet is seen by the article's author, Mr. Blume, as a digital potlatch "running on code that developers put back in the bit stream to be enriched by other programmers."  The term potlatch is an American Indian word that can be loosely translated as gift or giving.  For example, this potlatch characteristic of the Internet is true of Linux, BIND, sendmail, Apache, and Perl.  According to Mr. Blume:

Take these core programs out of circulation and the Internet as we know it ceases to exist.  What these programs have in common is a model of software development known as open source, meaning the source code is not withheld as proprietary, but is instead made available to the community at large.

One might think of the open source movement as being a fairly recent phenomenon.  In fact, Mr. Blume points out:

But open source goes back even farther than that.  It actually has place of pride in the history of technology.  In the early years of the computer industry manufacturers such as IBM shipped the source code of their operating systems along with their computers.  There were so few people who knew how to keep this kind of software working that IBM invited the community to help.


Today, many companies in the Internet's digital media community are already using open-source software to run their online businesses.  With their industry seeing major transformations monthly, weekly, and even daily, they find ready access to quickly customizable software a necessity.  They cannot afford to wait for whatever system updates will be doled out to them every two years by the mass-production-focused software industry.

In particular, the telecom industry is now beginning to embrace the open source model, as well.  As an example, the formation of the Open Telecom consortium was announced[110] at the CT Expo conference in Los Angeles on March 1, 1999.  This group proposes to provide its source code “to the cause of rapid growth for computer telephony.”

Consortium members initially include Natural MicroSystems, Lucent Technologies, Ericsson, Motorola Computer Group, and Telgen.  The Open Source for Open Telecom initiative has set up an associated Web site from which enabling source code will be available.  The initiative is focused on PCI and CompactPCI platforms for computer telephony.

According to Brough Turner, senior vice president and chief technology officer of Natural MicroSystems:

"This will drive the growth of CompactPCI for telecom.  Our goal is to enable interoperability by allowing developers to share and evolve a common software code base.  This will reduce equipment vendors' time to market.”

Natural MicroSystems has contributed the code—free of charge—for its CT Access boards; its CT Access point-to-point switching (PPX) service and the CT Access basic switching service; and the device drivers for various boards, as well as operating system specifics for Windows NT, Solaris, UnixWare and Linux.

What does one do when the business practically has to be re-invented each week?  When the battle cry is “Technology, technology, technology”?  This is the scenario painted by Amazon CEO Bezos, as previously discussed in the section “Convergence yields virtual corporations.”

The technical staffs of such companies are standing in the flow of the evolving software, seeing and requesting changes while a large brain trust tackles problems.  These companies cannot wait two years for the next major release to add a much needed feature or enhancement.

There are obvious, direct bottom-line reasons (TCO and ROI are terms that come to one’s mind) why a company would consider embracing an open source approach to addressing its IT requirements.  For example, one benefit claimed by open source proponents is that small and large companies alike—by leveraging the extended global development community—are able to increase the productivity of their own internal development and support teams at no additional cost.

Some open source proponents go so far as to argue that because of the low—sometimes non-existent—cost of using open-source products, the industry could save millions.  These factors relate directly to a company’s bottom line.  However, such factors are basically issues of efficiency—the efficient use of limited resources.

The real power delivered by the open source movement is its strategic megatrend impact on how a company leverages emerging technologies in a new approach to embracing mass customization.  It offers the capability of making a company more effective—more responsive to those ever changing, ever evolving personalized customer demands.  In particular, a new role for the customer is established with the adoption of open source approaches.  The customer now becomes an integral part of the team—a core asset.

This transformation must occur as one moves toward the mass customization focus of the twenty-first century.  Eric Brown, a senior analyst at Forrester, explains the transformation thus:


Hi-tech firms are beginning to understand that their constituency—not just their code—has to be counted as a core asset.  These firms use open source as a way to forge links between the technology and the community.


Mr. Brown describes this transformation as forming a synaptic model, a linkage to the customer that allows the businesses to transfer data to and from their community.  According to Mr. Brown,

Symbiotic relationships of this kind are springing up all over the Net, fueled by a networked economy's increased rate of change.  In such an economy, it's less important to lay claim to a finished product than to connect to the process of innovation itself.

"It lets them engage as many minds as possible.”

The power of this approach is not limited to up-start, cutting-edge Internet companies now trying to establish their new offerings.  The open-source model also is helping large corporations trying to maintain an innovative edge.

The IBM alliance with the open-source Apache Group—which supports the most widely used web server software on the Internet—is proof of the impact that the open source model is now having on the information technology industry.  IBM lending its own credibility to an already popular product effectively smooths away the remaining corporate disquiet about open source.  True to the open source model, IBM has agreed to release the source code for all IBM-generated upgrades back to the Apache Group.

IBM’s embrace of Apache is not the reason for Apache’s success.  Rather, IBM simply has adopted a technology and a set of methods that had already proven themselves worthy of its support.  The success of the open-source Apache effort has been further confirmed by Datamation.  This IT-focused magazine has announced[111] its Datamation Product of the Year awards for 1998.  In the category of Electronic Commerce and Extranets, the winner is Apache 1.3.3 from The Apache Group, a collaborative software development effort jointly managed by a worldwide group of volunteers.  Apache is freeware!  Anyone can download it, and everyone who uses it can help support it.

Furthermore, support—not cost—is the key factor for its success, notes Colin Mahony, an analyst with The Yankee Group, of Boston.

"What would prevent Apache from being in the enterprise might be the whole notion of support, but there's a feeling not much support is needed—this thing works.  It's simple, scalable, and it runs."

Trailing the open source Apache were such commercial products as Site Server 3.0 Commerce Edition from Microsoft, and webMethods B2B for R/3 from webMethods Inc.

The results of Datamation’s survey prove two points:

1.       Readers still consider foundation technology more important than enhancements in the planning and building of their Web enterprises.

2.       Open source software is a powerful concept.

Jim Jagielski, owner of jaguNET Access Services LLC, based in Forest Hill, Md., explains his confidence in Apache:

"Apache succeeds on many levels: On one level, along with Perl and Linux, Apache clearly shows that open-source technology is not only feasible, but it's worthwhile too.  On another level, Apache's success is due to its incredible performance, rock-solid reliability, and almost limitless expandability.  It's little wonder that over half the Web sites on the entire Internet run Apache."

Since its first release in 1995, Apache—by the way, the name has nothing to do with Indians, it's a joke meaning the server was "patched" together—has proven conclusively that open source is a meaningful approach to technology development, deployment, and support—even for the largest, most demanding situations.

In February 1999, the monthly survey of Netcraft—a networking consultancy that polls all the servers it can find running Web services—covered 4.301 million sites and found Apache in use at 54.65% of Web installations.  Not only does Apache already serve over half of all Web installations, it continues to gain market share—nearly a half-point between January and February alone.
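
To put those percentages in concrete terms, a rough back-of-the-envelope calculation (assuming the reported share applies across the full survey base) gives:

     0.5465 x 4,301,000 sites  ≈  2,350,000 sites running Apache
     0.0050 x 4,301,000 sites  ≈  21,500 sites gained in a single month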

IBM’s embrace of the open source business model is not limited to products that originated outside the company.  Consider IBM’s recently announced[112] release of Jikes, its Java compiler technology.  IBM chose to adopt an open-source model for Jikes as a way to push more development in Java, a critical platform for many IBM technologies.  Jim Russell, senior manager of Java technology for IBM, explains this strategic decision:

"It's a way to drive the growth of markets that are built on open standards platforms, that then make it much easier for everybody to compete with commercial products on top of that [platform]."

"Clearly in the end, IBM, like everybody else, is in the business to make money.  But there are cases where it makes sense from a strategic, business sense to take a technology out of research and make it open source."

Actually, Jikes is the second piece of IBM-developed technology to be offered via an open-source model.  The first was IBM’s XML Parser for Java.

Mr. Russell has suggested that “…there are cases where it makes sense from a strategic, business sense to take a technology out of research and make it open source."  One naturally asks: what circumstances characterize such cases?  Another example from the open source movement helps shed additional light on the answer.

Sun Microsystems is another major corporation that has embraced—or at least is modeling its new Java business model after—the open source movement.[113]  Sun strategically shifted its software licensing process for Java from one that was already collaborative but controlled to what Sun calls a community source model.  This is neither an "open source" process, nor a proprietary scheme.  The new policy calls for Sun to freely license its core Java source code to all comers, provided the licensees follow certain terms on how they ultimately realize a profit from their efforts.

The explanation for why Sun has adopted this approach is easy to understand.  It emphasizes the importance of active customer participation in the development, the evolution, and ultimately the success of any products and services.

It represents Sun’s effort to embrace and leverage the Internet-based model of doing business.  Mr. Murphy’s explanation of Sun’s particular dilemma is in fact apropos of emerging technology in general in the new digital economy:

It is a classic dilemma now tightly bound in the new mentality of the Internet era.  Sun can not possibly hope to quickly advance Java—especially to ubiquity—by itself.  So, it must embrace many of the contributions of those who would extend it, competing forces and brave-new-world advocates alike.  For a firm whose platform lives or dies by the participation of both ends of the developer spectrum, the question is: How far do you go to manage these expectations?

Sun's strategy was initiated by no less a figure than Sun’s chief scientist, Bill Joy.  With co-author Richard Gabriel, he explains:

"Community Source creates a community of widely available software source code just as does the Open Source model but with two significant differences requested by our licensees, as follows: [1] compatibility among deployed versions of the software is required and enforced through testing; [2] proprietary modifications and extensions including performance improvements are allowed.  These important differences and other details make Community Source a powerful combination of the best of the proprietary licensing and the more contemporary Open Source technology licensing models."

Another indication that the open source model is about more than free software is provided by a further announcement from Sun.[114]  Sun Microsystems has decided to distribute the basic designs of its two major chip architectures, Sparc and PicoJava, under its Community Source model.  According to Jim Turley, writing in the Microprocessor Report's Embedded Processor Watch,

"Anyone can download, modify, and synthesize the processors for free. Sun will charge a royalty only if customers ship the processors for revenue. The maneuver is not unlike the open-source movement that is growing in popularity among software developers.”

“Like Linux, Apache, Netscape's Communicator, and other software products, the 'source code' for 'synthesizing' Sun's processors will be free for the asking."

"On the surface, it appears to be a good move to broaden the appeal of Sun's two processor families. Developers can evaluate SPARC and Java processors with no up-front cost or risk."

How strongly does Sun feel about this opening-up of the company?  Bill Joy, a founder and chief scientist at San Jose, Calif.-based Sun, recently stated Sun’s position regarding its open source policy:

"Community source licensing is the distribution model for intellectual property in the 21st century."

A natural question to pose and for this paper to answer is “How does the open source model—and its derivatives—complement the previously discussed virtual corporation?”  Phil Hood, a senior analyst for the Alliance for Converging Technologies—a group of forward-thinking individuals brought together by Don Tapscott—sees the open source model as an indispensable element of digital age business.

"This is how you solve complex problems.  You modularize them.  You break them up into little problems, and you build a critical mass of people [that includes your customer base] to work on them."

Mr. Hood further explains—with this author’s annotations interjected—how this kind of problem solving leads to internetworked e-business communities, a description of how the virtual corporation operates.

"We consider this to be the new form of organization.  The form of organization for the industrial age was the vertically integrated company—Henry Ford's Ford.  In the new model, it's about focusing on what you do best and acquiring partners [including your customers] to create everything else you need to bring your service to [those same] customers."

Organic corporate cultures critical to innovation

The virtualization of the corporation is a necessary organizational step to its success in the new digital economy.  However, there is an even more fundamental defining characteristic of the successful corporation—its dynamic innovative nature.

This importance of a company possessing a dynamic innovative nature was raised in the section “How Does GTE Respond?” with the questions regarding cultural considerations:  “What is the mindset that GTE should cultivate and nurture within its employees—its most valuable resource?  What kind of actions and policies—formal and informal—should GTE introduce?”

This necessary characteristic of twenty-first century corporations has been explained[115] by comparing the corporation to a living, breathing organism.

Some executives and consultants are of the opinion that companies should be functionally, organizationally, and operationally compared to inanimate machines that can be reengineered, reorganized, or reprogrammed.  Of a quite different persuasion is Michael Rothschild, author of Bionomics: Economy as Ecosystem (Henry Holt and Co. Inc., 1990), and founder and CEO of Maxager Technology Inc., a software company in San Rafael, Calif.  He says,

"Fundamentally, organizations are complex, intelligent social organisms. …They do evolve, adapt, change, learn and grow over time depending upon what's going on in their environment and what's going on with technology."

Unfortunately, companies also die, usually well before their time.  In his book The Living Company (Harvard Business School Press, 1997), Arie de Geus notes that the average life span of a Fortune 500 company typically has been only about 50 years.  According to de Geus, these companies failed to achieve and to sustain their potential—and to remain a Fortune 500 company—because they had a heavy economic bent rather than an organic one.  What does he mean?

“Companies become so focused on turning a profit that they effectively shut down any feedback mechanisms that could promote learning and growth.”

Everyone agrees that profits are important; however, the most valuable resource of the twenty-first century corporation—the innovation of its employee base—often is systematically controlled into non-existence for the sake of an immediate demonstration of profitability.

In the present rough-and-tumble climate, however, time for steady-state adaptation—for the sake of a slightly larger quarterly report—is a luxury most companies no longer enjoy.  With the Internet and other emerging technology advancements—not to mention their distant cousins, globalization and deregulation—the business environment changes seemingly overnight.

Achieving the necessary fluidity is no mean feat.  Repeatedly one hears the phrase, “Time to think out-of-the-box.”  How often is the pursuit of this injunction then in fact discouraged?  According to Megan Santosus, companies court trouble when they try to impose adaptive strategies from above.  In organic entities, survival instincts rise up from within the ranks.

If adaptive behaviors are to take root and be nurtured, companies need to sustain an environment in which employees are not only permitted, but in fact are encouraged and motivated to do things differently and to take responsibility for making decisions.

The problem with many top-down strategies is that they do not derive from the active participation of employees.  Contrast this to the view of Michael Fradette, a partner at Deloitte Consulting LLC in Boston, and co-author of The Power of Corporate Kinetics (Simon & Schuster, 1998).  By giving employees the ability to respond to what Fradette called "customer events," a company can in effect change its behavior to be more responsive, from the line employees to the executive ranks.

He points to farm-equipment manufacturer Deere & Co., of Moline, Ill., as an example of the power of giving employees responsibility.  In his book, the story is recounted of a Deere salesman who encountered a farmer who wanted to plant a new corn hybrid.  Because this variety of corn had to be planted in tight rows, Deere's existing equipment could not handle the task.  Undeterred, the salesman took the initiative to get new specifications from the customer.  He then transmitted the specs to Deere's factory, which proceeded to build a customized planter within 16 hours of receiving the order.

In the earlier section “My personal experience with telecomm-based mass customization,” this author shared a brief anecdotal experience that ties together the above thoughts on mass customization.  In the process of solving my problem, the SBC voicemail engineer not only gave me what I wanted—he also eliminated the need for a proposed switch upgrade, as well as the need for a rather convoluted provisioning of multiple virtual mailboxes.  My total cost—for all subscribed features—actually decreased when I had my feature-set reconfigured to support voicemail across multiple lines.

Companies like Deere, and the engineer at SBC, have glommed onto an important survival tip:

Long-term, sustainable adaptability derives both from speed and from a self-organizing capacity.  The ability to interpret broad marketplace trends, anticipate what customers want and expand beyond traditional markets equips businesses to rise to the top of the food chain.  Such companies are organic in that they are able to process continuous feedback and signals from the environment and convert such information into fluid plans of action.

As identified in the previous section “Managing technological innovation,” the killer application of the Internet is the ability of end-users—this includes both employees and customers—working with live data to make (realtime) operational decisions.  Now, for the first time in the history of computing, business end-users are able to work with live data.  The question that naturally follows is, “Will they be allowed to make realtime operational decisions?”

Long-lived organizations—irrespective of industry or national origin—share a surprising number of characteristics that have enabled them to change and to thrive with the times.  Perhaps surprisingly, neither the recent emergence of the Internet nor the appearance of breakthrough technologies has changed this characterization.

Among the most important of these characteristics, Arie de Geus believes, is a focus on learning and an overriding sense of community, identity and purpose.  Furthermore, learning necessarily results in innovation—the application of what is newly learned.

"There's a tremendous element of trust between employees and management that enables living companies to focus on the long term rather than on next quarter's profit figures."

Here is manifested the cultural clash that the mass production versus mass customization discussion produces.  Most companies are still anchored in the mass production focused Industrial Age, organized in a hierarchical way and managed by command and control.  Contrast this to the mass customization focused organic enterprise.

The essence of acting organically is continuous learning but not in the way that most companies go about it.  In particular, one of the best means to promote learning throughout an organization is through a company’s information systems.  By providing real-time decision-making enabled systems, instead of traditional after-the-fact reporting systems, CIO’s have an opportunity to facilitate the individualized learning that forms the foundation of organic companies.  This organic culture flies in the face of those cultures found at hierarchical, rules-bound companies.

alphaWorks and IBM’s successful turn-around

Fortunately, we have a real-life example available of one Fortune 500 company that has successfully adopted and adapted the principles espoused above.  The resulting turn-around of that company during the decade of the 1990’s has been nothing short of phenomenal.  As recently as a few years ago this company was thought to be on its way—if not into oblivion—at least to secondary status in its industry.

Today, the prospects for this company are much improved.  This company has more than convinced its customer base, the stock market, and—most important to its success—its employees.  A case study analysis[116] of the turn-around that has occurred at this company—yes, it’s IBM—recently appeared in Application Development Trends.

Your executives are appearing in federal court day after day.  Large portions of the industry are calling for your breakup.  You are a big company at the top of your game, but are confronted with technology paradigm shifts that come with increasing frequency.  Microsoft in 1998?  Yes, but also IBM in the 1980’s.

What eventually is written in future case studies about Microsoft’s success or failure is still a work in progress.  The case study for IBM’s turn-around is now clear to see.

There were some years of stumbling, and some bloody noses.  There were significant encroachments by upstarts like Sun, Lotus and the redoubtable Microsoft, as well as incursions by established forces like Hewlett-Packard and Digital Equipment, to name just a few.  The company that can trace its lineage back to 1896 and Hollerith's census-tabulating machines seemed ready to join other mainframe-centric firms that did not fully make it out of the punch-card era.

As an aside, the other large computer vendor mentioned above—DEC, that is Digital Equipment—was not so fortunate.  How many times over the past decade did DEC re-engineer itself, but without ever really effecting a significant change?

Today, IBM is a new company with a new stature and luster.  No firm has ever approached Big Blue in the breadth of its offerings.  In recent years, its software portfolio—which accounted for $12.8 billion or 16.3% of total 1997 revenue of $78.5 billion—has been bolstered as the company absorbed some former competitive upstarts, notably Lotus and Tivoli.  Under Chairman and CEO Louis Gerstner, the firm has fairly deftly "played the Web" for high-level impact.

IBM has emerged as a much different company.  The selection of RJR Nabisco's Gerstner as chairman and CEO in April 1993 is generally cited as a turning point.  Many thought his mostly marketing background would not translate effectively to the high-tech world.  After initially misgauging the importance of the Internet—just as Microsoft had—IBM under Gerstner has since leveraged the Web in particular, and the full potential of the Internet in general, to push forward all manner of IBM products.  This is especially true of its development tools.

In many respects, the situation that confronted IBM then is quite similar to the one that now confronts the communications industry—the traditional telco’s in particular.  One can argue that the solutions will prove to be much the same.  After all, with the convergence now in progress, these two industries—information and communications—will come to look more and more like each other.

In both situations, major legal hurdles lay ahead.  IBM had already been forced to unbundle its hardware and software businesses—and this was only the beginning.  The original AT&T was dismembered over a decade ago.  Long distance—its bread and butter—is on the verge of such commoditization as to become virtually free—certainly not to be billed as a separate line item.  The incumbent local telephone companies—the ILEC’s—now face a similar legal picture and prospect of commoditization.

In both situations, emerging technologies were about to re-invent the fundamental assumptions of how their businesses worked.  IBM’s was an SNA world that ran over token ring technology.  Today IBM has embraced Ethernet and IP with native support, even at the mainframe level.  Likewise, the telecommunications industry is now confronted with what to do about the Internet and its connectionless IP—without suffering the obliteration of its existing circuit switch-based infrastructure.

In both situations, the impetus of mass customization is transforming the way customers wish to do business.  IBM was nearly swamped by the PC generation, when information processing took a significant turn toward de-centralization.  The megatrend of mass customization—in its extreme sense mass personalization, a la the PC—was already manifesting itself.  Today, the PBX and CTI industry is determined to become the new focal point of the customer’s services.  Telco efforts to offer sophisticated services have, up until now, not met with much success.  How many AIN service efforts can really be deemed more than a break-even success, at best?

In both situations, the support of a mass production focused infrastructure was an expensive proposition—expensive to procure, expensive to operate.  This served as a barrier to those who would enter the market.  The mass production model was king, and had served both industries well—love those five-year planning cycles.  IBM mainframes could efficiently serve the same centralized set of applications to thousands of functionally identical users—love those 3270 terminals.  POTS service is ubiquitous across the United States and much of the modern world.  We did finally migrate everyone off rotary dial and onto DTMF, didn’t we?

We need not belabor the points of similarity any further.  The listing of similarities could go on ad infinitum.  The question to ask is: how did IBM successfully adapt to the megatrend changes of the new digital economy?  The follow-up question should be: what can we—the telecommunications industry, GTE in particular—learn from the answer to the first question?

IBM has been executing the strategic actions identified at the beginning of this major section.  First, IBM has wholeheartedly embraced the emergence of the digital economy—as typified now by the current Internet world.  Many of the BFW’s—that’s big fat websites—previously described in this paper in the section “Server-side appliancization—a reality” are being hosted on IBM mainframes—which now support the Internet and IP natively.

This provides a perfect example of how the issue of mass production versus mass customization has been resolved so that the benefits of these two paradigms can more than simply coexist as complementary.  They in fact can be integrated to provide functionality and services that neither approach by itself could ever do.

IBM has totally embraced the principles and practices of the virtual corporation.  The example of Boeing in that section could just as easily and effectively have been replaced with the example of IBM.  Its collaborations with Sun on Java, and with Oracle on XML—that’s eXtensible Markup Language—are examples of how IBM is now working shoulder-to-shoulder with its competitors for the betterment of all.  IBM’s enterprise-level support of Java—first developed by Sun Microsystems—is at least as critical to the Java effort as what Sun itself has provided.

IBM has extended its embrace of Internet standards into the area of development repositories.  With Unisys and Oracle—an arch-rival on the database front—IBM has proposed using XML Web technology to represent repository data.  XML promises to become the ‘SQL’ of the Internet; XQL, the proposed eXtensible Query Language, would provide the corresponding means of querying XML data.
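
As a purely hypothetical illustration (the element and attribute names below are this author’s invention, not taken from the IBM, Unisys, and Oracle proposal), a coarse-grained repository entry describing a reusable software component might be expressed in XML along the following lines:

     <repository-entry id="billing-adapter" version="1.2">
        <description>Adapter linking the ordering system to billing</description>
        <artifact type="source"   href="src/BillingAdapter.java"/>
        <artifact type="document" href="docs/billing-adapter.html"/>
        <owner organization="GTE Information Technology"/>
     </repository-entry>

Because such an entry is plain, self-describing text, any member of a federated repository can parse it with standard XML tools, and a query language such as XQL could be used to locate, for example, every entry owned by a particular organization.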

IBM’s efforts in the open source movement are by now well known.  Its embrace and support of Apache—the de facto standard for web server technology—and of Linux has already been noted.  IBM has placed a whole suite of internally developed XML tools into the open source mode, as well as Jikes, its Java compiler technology.

John Swainson, general manager of IBM's Application Enabling and Integration Unit of the Software Solutions Division, has described how the IBM vision has evolved to one that supports the virtual corporation model:

"People's perceptions of repositories and what they are good for has changed. Going back 10 years, most of the discussions were about how repositories were going to save the world—if you could just get everything in the repository."

"But one of the things we learned is not to try to do everything in a single, monolithic model. And that there are two grains, and that for fine-grain [components] you need fast execution."

"Now, the role of the repository is seen as [1] a place where coarse-grain elements—the source, the screens, the documents—can be brought together to store and manage in a consistent way; and, [2] a place where fine-grained elements of the applications can be stored for tool access.  We've come to the notion of a federated repository—fine and coarse."

That is, one can expect the information and knowledge upon which a corporation depends to reside as one logical, functional entity—the virtual corporate database in the Network—that in fact is fully distributed and managed by a federation of corporate entities that collaborate competitively in realtime—that’s coopetition.  Each uses the same core information and knowledge at any given time to collaborate in delivering just what the customer ordered.

In software, where perception can be key, IBM has proved itself adept.  An industry megatrend shift such as that presented by the Internet could prove daunting to no less an industry hand than Bill Gates.  Many people expected that IBM might fail here—circling the wagons until all of its lines of business had died.

IBM has not only met the challenge but has fashioned a new corporate image at the same time.  The company's introduction of the WebSphere product line—focused on the Internet and E-Commerce—and such moves as the development of its innovative alphaWorks site have helped depict a company transformed.

AlphaWorks initially was developed to be a platform to show off new technologies.  It has evolved into a testing ground for technologies developed by engineers from throughout the corporation.  The unit includes engineers and researchers who have joined the effort from a variety of IBM units as well as from nearby Sun Microsystems Inc.

According to John Wolpert, emerging technology development manager in the alphaWorks unit based in San Jose, Calif.:

"Our challenge is to change the way IBM does new product development.  We are always looking at emerging technologies.  We're trying to find the next big technology.  [The group] has a passionate interest in technology.  We understand the technology.  If we don't have the right people in IBM, we send briefs to high-level, distinguished engineers."

The management approach for alphaWorks is really quite simple.  Proposals for the site are submitted from IBM developers and researchers to the alphaWorks unit, which decides what technologies are posted.  Once a technology is on the site, it is available to all IBM groups for comment, suggestions and opinions.  The technology can be changed several times based on that input.

The technology on the site is also available to IBM product groups to use in new and existing products.  Wolpert adds,

"We can bring early adopters directly into the earliest phase of development.  Our job is to drive people to the site.  The idea was to get a new site on the Web to show off new IBM technologies.  We built a cool Web site that brought us a new community of users.  Now we're trying to gain mindshare within IBM."

The track record of the alphaWorks group is now renowned both inside and outside of IBM.  For example, technologies first made available through alphaWorks have been incorporated into the IBM WebSphere and Bean Machine offerings.  More recently, IBM unveiled nine new XML tools that are now distributed without charge through the alphaWorks site.

So, how do its critics rate these new efforts by IBM to re-invent itself as a twenty-first century virtual corporation?  The phrases that are used by the critics are the same ones that are espoused in previous sections of this paper.

Overall, the company has come to be known as more flexible, surprisingly so for such a large entity.  If it has been unafraid to embrace standards, it has also been willing to jettison standards that ebb.  Witness the move to quickly demote Netscape's Web server.  Witness not a wasted moment in endorsing the popular Apache server.

Consider what one-time IBM watcher Sam Albert, president of Sam Albert Associates, Scarsdale, N.Y., has to say about IBM and its CEO Lou Gerstner in particular: "This guy has been an amazing turnaround artist.  He has made IBM customer-centric and customer-driven.”

Sam Albert, the man who claims to have coined the term "coopetition," gives Gerstner the highest grades as one of its practitioners.  “He cooperates with Microsoft, yet he can compete with them.  He competes with Sun, yet he can cooperate with them.”


Emerging Technologies

Many of the emerging technologies discussed in this section, MEMS (Micro-ElectroMechanical Systems) among them, belong to the broader area of microelectronics.  Research in the field of microelectronics has influenced a number of different areas.  For the purpose of this discussion, three major sub-fields are identified: 1) materials science, 2) systems science, and 3) killer applications.

Materials science is focused on the physics and chemistry of materials, manufacturing processes, etc.  How can one make devices that are smaller, faster, more energy efficient, flexible, adaptable, etc.?

Systems science is focused on how these materials and processes can be integrated to form systems to solve problems, to perform applications, etc.  Are there better ways to organize and to integrate the tasks, processes, etc. that are required for the given activity?

Killer applications takes a brief look at examples of how these emerging technologies already are being used in ways that will have a profound impact on products, services, and business strategies that will greatly affect the communications industry—GTE, in particular.  First-generation versions of some of these applications are already beginning to appear in the market, and can be expected to have a significant impact on the markets into which they are introduced.

The discussion that follows is arranged beginning with the most basic, fundamental elements of materials science—the molecular level—and proceeding to the most complex levels of systems science—the finished products, services, business models, etc.  Obviously, not all potential topics can be presented here; rather, these were chosen for their breadth and diversity, and for their potential value to and impact upon the telecommunications industry.

I.   Materials Science

1.       NanoTechnology

2.       Indium Phosphide

3.       Light-Emitting Silicon

4.       Polymer Electronics

5.       Molecular Photonics

6.       SOI—Silicon-on-Insulator

7.       Silicon Germanium

8.       MAGRAM—Magnetic RAM

9.       Chiral Plastics

10.   Fiber-Optic Amplifiers

11.   Photonic Crystals

12.   The Perfect Mirror

13.   Optical CDMA

II.  Systems Science

1.       Spherical IC’s

2.       MEMS—Micro Electro-Mechanical Systems

3.       SOC—System-on-a-Chip

4.       Switching to Switching

5.       Photonic Optical Switching Systems

6.       Configurable Computing

7.       IP—Intellectual Property

8.       Chaos-Based Systems

9.       VERI—Pattern Matching

10.   Computational Sensing

III.  Killer Applications

1.       Set-top box on a chip

2.       3G—third-generation—cellular devices are coming

3.       Sony’s next-generation playstation

Materials Science

Materials science is focused on the physics and chemistry of materials, manufacturing processes, etc.  How can one make devices that are smaller, faster, more energy efficient, etc.?

This major section begins with efforts at the most minute level—the molecular level—with nanotechnology.  While these efforts may not have any immediate or direct impact on the telecommunications industry, the results being achieved here are being leveraged in other areas—discussed in later sections—that are already impacting our industry.

Next, several breakthroughs in enhancements of existing materials in computing—i.e., chip fabrication—are considered.  These efforts include indium phosphide, light-emitting silicon, polymer electronics, molecular photonics, silicon on insulator, silicon germanium, and MAGRAM—or, Magnetic RAM.

Finally, materials and methods that could greatly enhance communications over fiber-optics are considered.  These include the use of chiral plastics as an inexpensive, but more capable replacement for glass fiber, a new approach to implementing fiber-optic amplifiers, photonic crystals, optical mirror technology, and O-CDMA—the application of CDMA spread spectrum technology over fiber networks.

Nanotechnology—the coming revolution in molecular manufacturing

The most general statement of microelectronics—in terms of its scope and possibilities—is captured in the term molecular nanotechnology, which is focused on “the thorough, inexpensive control of the structure of matter based on molecule-by-molecule control of products and byproducts of molecular manufacturing.”  The Foresight Institute[117] maintains an active online analysis of this exciting area of research at its website.

The central thesis of nanotechnology is that “Almost any chemically stable structure that can be specified can in fact be built.”  This concept was first advanced in 1959 by Richard Feynman, who later was awarded the 1965 Nobel Prize in physics.  As he phrased the idea, "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom.”  In fact, DNA is nature’s realization of this technology—so it is plausible, reasonable technology to pursue!

In its fullest vision, nanotechnology pursues the possibility of building manufacturing machines and robots on the nanometer scale.  Like life forms, these nanomachines would be able to rapidly build billions of copies of themselves.  Additional resource websites with extensive discussion of on-going nanotechnology research are found at The Museum of NanoTechnology, maintained by Wired magazine, and at The NanoComputer Dream Team.

The Nanocomputer Dream Team (NCDT) is an accredited nonprofit organization of nanotechnology and nanocomputation professionals and enthusiasts.  The purpose of the NCDT is to provide an Internet development environment in which nanotechnology developers in widely disparate geographical regions and scientific fields can pool information and cooperate in the largest scientific effort since the Manhattan Project.

The Nanocomputer Dream Team is divided into twelve teams to accommodate interested individuals in a wide variety of fields.  The teams are: Brainstorming, Educational Outreach, Nano Space and Colonization, NanoMedical, NanoLaw, Logic Systems, Design, Molecular Modeling, Net Supercomputing, Construction, and Public Relations.

Research in nanotechnology is being conducted at several major academic and commercial research centers.  A recent high-technology article from EE Times presents a good general overview[118] of the various areas of nanotechnology research.

Described research includes that at MITRE, MIT’s NanoStructures Laboratory, Notre Dame's Microelectronics Lab, Purdue's Nanoscale Physics Laboratory, and Stanford's Nanofabrication Facility.  The National Science Foundation's National Nano-fabrication Users Network includes as members: Stanford University, Cornell University, Howard University, Penn State, and the University of California, Santa Barbara.

More recently, the Center for Nanotechnology at the University of Washington reported[119] breakthrough results in the development of new nano-scale switch mechanisms.  According to Viola Vogel, associate professor of bioengineering,

"This is the first time a tension-activated switching mechanism has been discovered on an atomic scale," she said. "Our discovery not only gives new insights into how nature regulates functions, but we hope will be the basis for a new family of biotechnology devices."

"This tension-activated switching mechanism is very, very small—it takes place at the angstrom level—so we hope it will eventually result in a very elegant mechanical switch for nanotechnology devices,"

Ms. Vogel predicts this research could well lead to an entirely new field dedicated to the study of single-molecule mechanics.

Several major semiconductor companies such as IBM, TI, Hitachi, and others also maintain facilities dedicated to the issues surrounding extremely small fabrication methods.  The techniques being developed in these labs have already begun to pay off in the areas of semiconductors, electro-optics, electron-wave effects, etc.  The areas further discussed below are each specializations of this broad area of research.

Indium phosphide (InP)–the next high-performance elixir for electronic circuits

A new material, InP (indium phosphide), that offers greater performance than GaAs (gallium arsenide) has been reported[120].  InP not only outshines GaAs in terms of raw speed, but has spawned an entirely new type of quantum-effect device—the resonant tunneling diode (RTD)—that is transforming the design of high-performance circuits.  InP-based transistors are able to switch up to three times faster than GaAs transistors—at speeds of about 1.5 picoseconds—enabling RTD’s at speeds of 700 GHz.
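
As a rough sanity check on those figures, the maximum toggle rate of a switching device is on the order of the reciprocal of its switching time:

     f  ≈  1 / t  =  1 / (1.5 x 10^-12 seconds)  ≈  667 GHz

which is consistent with the reported RTD speeds in the neighborhood of 700 GHz.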

The results of this research are so substantive that one major wafer manufacturer has halted all GaAs development!  The Nanoelectronics Group at TI expects to introduce, by 2001, a 16-kbit, 200-ps SRAM requiring only 10 mW of standby power.

The low-power, high-speed circuits that this technology enables are needed in a multitude of millimeter-wave and microwave applications in defense and communications.  Examples include software-controlled wideband radars, digital RF systems, and wideband digital, fiber-optic, and satellite communications systems.  InP circuits provide the flexibility to change an antenna’s tuning in software, without the need to redesign a system's analog front end.

The lasers and detectors used for fiber-optic communication are already made in InP.  Consequently, InP-based transistors would facilitate the manufacture of yet more highly integrated chips.  Such products as cell-phones—which today are composed of separate analog and digital components—could be implemented as software-configurable all-digital circuits.

Light-Emitting Silicon Chips

The InP research discussed above is closely related to research in the area of light-emitting silicon chips.  This research should have a significant impact on the networking industry.  Fiber optics employ photons to carry information between two points—say, the ends of an optical fiber.  Today, at each end there must be compound-semiconductor devices to transform the data from photons into electrons, and vice versa.

Specific examples of this research have been reported[121].  Scientists at the Quantum Device Laboratory at the University of North Carolina, Charlotte—with the help of silicon provided by a small, New York-based R&D house, NanoDynamics Inc—have conducted research in the area of light-emitting silicon chips.  Their research so far has found that whenever electrical voltage is sent through the substrate, visible light is created that "shines" from the silicon.

"We believe that a giant step has been taken in silicon technology to include photons," said Raphael Tsu, a professor of electrical engineering at UNC-Charlotte.  "The integration of electronic and photonic capability on a single silicon chip is a very real possibility."

In terms of its impact on the communications industry, this breakthrough can be compared to “the development of fiber optic cable for transmitting telephone and data messages across long distances.”

Currently, electronic and photonic semiconductors cannot be built on the same chip.  But with light-emitting silicon, electronic and photonic devices could be built, conceivably, on the same chip.  That would simplify the "transformation" process, said Zhang.

The big payoff would be the creation of ultra-fast hybrid communications-computer chips that operate at the speed of light—or about 100,000 times faster than current semiconductors!  In place of monolithic silicon integration, Towe [manager of Darpa's VLSI Photonics program] put forward another concept, heterogeneous integration, wherein diverse-materials systems and components are fused into a working whole.

Polymer electronics—creating complete electronic systems in plastic

Opticom ASA, an R&D house in Oslo, Norway, has been researching conjugated polymers as conductors and semiconductors since its founding in 1994.  Its on-going research projects include joint efforts with Lucent Technologies Inc. and connector maker AMP Inc.

Opticom’s prior achievements in this area include a single-layer polymer memory structure, a polymer radio transmitter, an active memory film on an organic substrate, and submicron feature sizes.  An overview of its work is available online.

Currently, Opticom is developing a PC-card format all-plastic polymer memory subsystem for a customer, with plans to complete the subsystem before the end of 1999.  The first version of the card is expected to contain 1 Gbyte of storage with a data-transfer rate of 0.5 Gbyte/second, and to have an access time of around 50 nsec.  In particular, the switching materials that constitute the active part of the memory should switch in less than 10 nsec.  Furthermore, the inherent read time is less than the switching time.
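
As a back-of-the-envelope figure, the quoted transfer rate implies that streaming the full contents of such a card would take roughly

     1 Gbyte ÷ 0.5 Gbyte/second  =  about 2 seconds

with any single location reachable in the stated 50-nsec access time.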

According to Thomas Fussell[122], Chairman of Opticom:

Polymer-based memory systems could be used to provide large-capacity memories held in multiple layers, either standalone or laid down over conventionally processed silicon dice that would contain associated logic or microprocessor systems. The plastic systems could be manufactured using inexpensive reel-to-reel continuous production processes.

According to Johan Carlsson, senior research manager at Thin Film Electronics—one of Opticom’s units,

Opticom's concept of "a passively addressed crosspoint matrix in which no components are single-crystal materials" promises several advantages over other memory subsystems.

The benefits of this approach to building memories as well as more complex computational devices include:

1.       Stackable memory architectures—with or without plastic substrates as carriers.

2.       The smallest possible memory cell with inherently high yield.

3.       Passive crosspoint memory structure—which avoids the use of space-consuming transistors.

4.       Very high data-transfer rates—because of the large area and parallel readouts that are possible when contacts are arranged in multiple layers.

5.       Low-cost—because of the ease of reel-to-reel processing and because they are founded on low-cost materials.

NOTE: Other efforts in the area of stackable architectures are presented in a later section of this document.

In the long-term, Opticom is advancing the proposition that polymer electronics technology could soon be used to create complete electronic systems in plastic.

Molecular Photonics

Other groups also are studying organic, plastic-like materials for their applicability to computing.  One such group is led by Dr. Jonathan Lindsey, Glaxo Distinguished University Professor of Chemistry at North Carolina State University.[123]  There, he leads a team of researchers in the emerging scientific field known as molecular photonics.  He summarizes their results thus:

"Now that we've made the wire and figured out how it works and how to make it better, we can apply that knowledge to building logic gates, input-output elements, and other molecular-scale materials for computer circuits."

Unlike conventional circuitry, the wire that Lindsey and his colleagues have developed does not conduct electricity, nor is it an optical fiber.  Instead, it is a series of pigments—similar to chlorophyll—that works on a principle similar to photosynthesis.  Lindsey's wire works by absorbing blue-green light on one end and electronically transmitting it as light energy to the other end, where a fluorescent dye emits the signal as red light.

According to Lindsey, scientists previously thought that energy flow was controlled by four factors: distance between molecules; molecules' orientation; their energies; and their environment.  Now, two postdoctoral fellows in Lindsey's lab have identified yet a fifth factor.  The orbital—the pattern in which electrons are distributed within a molecule—also affects the energy flow.  As Lindsey explains their efforts:

"Our research looks at ways to control this energy flow and use it to create future generations of super-fast, molecular-scale computer circuitry and information processing devices."

The team also is working to build molecular photonic devices for use in solar-energy systems.

SOI – Silicon on Insulator

IBM and others have announced[124] they will combine copper interconnects with SOI (silicon-on-insulator) transistors.  Performance gains of 20 to 30 percent are expected, and such devices would be particularly suited for low-voltage operation.

Devices implemented in SOI overcome the heat and power-dissipation problems in currently available high-performance ICs.  In particular, they offer the mobile market (which includes laptops, PDA’s, cell-phones, etc.) a means of delivering reasonable performance at single-volt supply voltages.

Furthermore, the use of SOI technology is not restricted to mobile applications.  IBM's SOI gambit could also make waves on the systems side.  The technology could give IBM's RS/6000 workstations and AS/400 servers—as well as Apple Computer Inc.'s Macintosh line—a significant boost in the market against more mainstream systems using Intel's Pentium and Merced processors.

IBM started SOI research in earnest in the mid-1980s.  The effort picked up momentum in 1990, when IBM's Ghavam Shahidi and others demonstrated that a partially depleted CMOS (rather than a fully depleted one) could overcome the short-channel effect common to fully depleted devices in SOI wafers.

Peregrine Semiconductor, another cellular ASIC provider, plans to offer 1.9-GHz SOI devices next spring, leading to the introduction of a new series of radio frequency IC’s.  Peregrine said it has received seven patents and has 10 more pending in SOI technology.  The company introduced SOI products in 1997.  According to Jon Siann,[125] director of marketing at Peregrine:

"The primary difference is that Peregrine uses a pure synthetic sapphire insulator rather than the thin silicon dioxide layer that IBM uses. … Sapphire is by nature simply a better insulator, making it ideal for the RF wireless satellite communications and low-power markets that Peregrine serves."

"The commercial viability of silicon-on-insulation technology has become a practical reality, and a new era in smaller, more affordable wireless communications has begun."

SOI technology continues to gain wide attention, as a number of major players announce[126] their products and plans in this area.  Sharp Corp., which has established a production system ready for orders, is pushing SOI into communications IC’s with prototype PLL’s (phase-locked loops) that operate at 1.2 GHz at a 1.5-V supply voltage.  The power consumption of such devices is about 3 mW—or about one-ninth that of corresponding devices built on conventional wafers.  Sharp further anticipates devices that run at 0.5 V.  The stated goal of Sharp is:

"Our final target is mobile equipment that runs with a solar battery."

Bijan Davari, director of advanced logic development at IBM, says that his company has developed an SOI "recipe” that resolves three major challenges to the successful use of SOI technology:

1.       How to implant and anneal an oxide insulation layer in the bulk silicon at minimal defect rates;

2.       How to create partially depleted CMOS devices that can handle the "floating-body" effect common to SOI transistors; and

3.       How to prepare relatively accurate models and circuit libraries.

Silicon Germanium

According to Will Straus[127], an analyst with Forward Concepts, "Silicon germanium has for a long time looked like the future for the microprocessor industry.  But until now it has been too costly to produce."

In particular, this new microchip technology is intended to improve computing power and to reduce the cost of handheld devices such as cell phones and personal digital assistants.

Using IBM’s announced technology, chip manufacturers will be able to build devices that are up to 50 times faster than standard microprocessors at a fifth of the cost.

Interestingly, IBM originally designed this technology to increase the power of its mainframe computers.  The focus now has changed to reflect new opportunities for its application:

In the past, hardware manufacturers seeking power-hungry microchips for devices such as PDA's and cell phones have turned to gallium arsenide for their chips—an expensive proposition.

"The cost of gallium arsenide microchips is very high because you need to go through a number of very complex steps," said Straus.  "For example, you need to produce the chips at a very low heat, other wise arsenide turns to arsenic."

Another cost benefit, according to IBM, is the ability to use the same fabrication plants for the silicon germanium microchips that are used for its regular chips.  Furthermore, IBM believes that in the next couple of years it will reduce costs by integrating multiple functions on a single chip.

"Our ultimate goal is to build a 'system-on-a-chip' for the communications industry," said Bill O'Leary, manager for the microelectronics division.  "It will then be cost-effective to include wireless communications microprocessors in many products."

MAGRAM—Magnetic RAM

Years ago, the memory of the mainframe computer was magnetic in nature.  I have friends—nearing retirement—who used to work with such memory modules for IBM.  Now, computer technology has come full circle—back to a magnetic form for storing data in memory.  Of course, the new material is much smaller—comparable to existing RAM in its performance, capacity, etc.

Working under contract from Pageant Technologies—a wholly owned subsidiary of Avanticorp International—Larry Sadwick, an associate professor of electrical engineering at the University of Utah, recently announced[128] a major breakthrough in the development of a new type of memory aimed at revolutionizing the computer industry and related fields.

They have developed a new class of magnetic-field sensors that will allow the future manufacturing of low-cost, high-volume, high-density memory devices and circuits.  The new memory cell, called a MAGRAM—short for "magnetic random access memory"—uses magnetic fields to store data.

Similar to conventional RAM memory devices, the MAGRAM should allow rapid, random access to information stored within it.  However, unlike conventional RAM, the MAGRAM memory cell is nonvolatile—that is, continuous power is not required to maintain the memory content.  Even after the power source is removed, the information remains, giving MAGRAM the advantage of long-term data storage reliability.

Pageant believes this technology eventually could become standard among computers and other electronic devices that use memory.  Potential applications for MAGRAM include cellular phones, pagers, palm PCs, digital clocks, microwaves, VCRs, answering machines, calculators and integrated circuits in vehicles.  Furthermore, the new technology should offer the advantage of decreased power consumption in such devices.

While the work of Pageant is near-term, other more exotic approaches to leveraging magnetic technologies also are in progress.  One such approach is the work of some researchers at Cornell University.[129]  They are testing devices that could form the basis for a potential ultra-small computer data storage system that could gather up to 100 times as much information in the same space as present-day magnetic data disks. An array of the devices that make up the system is considerably smaller than the period at the end of this sentence.

The devices are "nanomagnets"—tiny bar magnets as small as 25 nanometers long.  To develop a system based on nanomagnets work, the designers had to learn some new physics.  Magnets less than about 100 nm wide have a unique property.  When magnetized, each one forms a single magnetic "domain."  That is, the magnetic fields of all the atoms in the magnet are perfectly aligned.

This is in contrast to larger magnets, like the ones used to stick recipes to refrigerator doors, which are composed of many smaller domains, or groups of atoms, aligned in various directions; the behavior of the magnet depends on how the majority of the domains are oriented.

These single-domain nanomagnets can be used for data storage: a magnet could represent a one or a zero depending on which way its north and south poles pointed.  Practical application depends on the development of mechanisms for reading and writing these devices.  The researchers have been able to read the orientation of individual magnets by using a magnetic force microscope (MFM).

Chiral Plastics

Molecular OptoElectronics Corp. (MOEC) scientists, chemists and engineers have discovered a new form of plastic that could offer a cheap substitute for more expensive non-linear optical materials.[130]  These one-handed or chiral plastics presumably could be used in plastics-based fiber-optic devices, new polarizing coatings and lenses, and novel optical wave-guides for processing electronic signals.

Near-term, chiral plastics may become important for telecommunication technologies such as wave-division multiplexing, in which different data streams are carried at different wavelengths on an optical fiber.

Far-term, the plastics also may help telecommunications carriers achieve their dream of all-optical switching networks.

For the present, MOEC is working on a multiyear contract with the U.S. Air Force's Wright Laboratory in Dayton, Ohio to build fiber-optic modulator devices for the U.S. Defense Department's high-bandwidth needs.  MOEC scientists also recently presented a paper detailing their work at the national meeting of the American Chemical Society held in Boston.

Liquid-crystal display developers, among others, also are toying with the idea of using the chiral polycarbonate as a substrate material that could operate in much the same way as liquid crystals, which are also chiral molecules.[131]

Fiber-Optic Amplifiers

In a corollary to their work with chiral plastics, MOEC has received a patent for its process for developing fiber-optic amplifiers.[132]  This new approach could open up a much wider region of the spectrum to telecommunications.  The new approach offers an integrated method for building fiber-optic amplifiers, allowing the optical gain material to be fabricated into a light-guiding chip.  Furthermore, the process is compatible with a wide variety of materials.  According to MOEC:

This integration method makes it possible to build optical amplifiers that are optimized for virtually any wavelength that installed fiber can carry.

Currently, commercial optical amplifiers are built exclusively with erbium-doped glass fibers, which operate only at 1,550 nm.  While erbium-doped fiber has made dense-wavelength division multiplexing possible, its frequency band is limited.  The new chip-based process could open up a frequency range 10 times wider.

Photonic crystal confines optical light

Researchers Shawn Lin and Jim Fleming at the US Department of Energy's Sandia National Laboratories have created a microscopic three-dimensional lattice that is able to confine light at optical wavelengths.[133]

The microscopic three-dimensional lattice is fabricated from tiny slivers of silicon.  The optical lattice is the smallest—ten times smaller than an infrared device previously reported—ever fabricated with a complete three-dimensional photonic band gap.  It is effective at wavelengths between 1.35 and 1.95 microns.  The structure is a kind of microscopic tunnel of silicon slivers with a 1.8-micron minimum feature size.

This technique is commercially important to the fiber-optic communications industry because it appears to offer an inexpensive, efficient way to bend light entering or emerging from optical cables.

The device is called a photonic crystal because its regularly repeating internal structure can direct light, thus mimicking the properties of a true crystal.  The device traps light within the structure's confines as though reflecting it by mirrors, and makes possible transmission and bending of electromagnetic waves at optical frequencies with negligible losses.

The lattice's creation crowns a quest in laboratories around the world that began 10 years ago.  The underlying principle that motivated this effort was the simple idea that a light-containing artificial crystal was possible.

Compare this approach of shaping and guiding light within a crystal to the approach reported below of implementing actual hollow wave-guides—the insides of which act as mirrors.

The perfect mirror

A team of scientists at the Massachusetts Institute of Technology has recently announced the development of a perfect mirror technology.  The basic idea behind their perfect mirror technology is quite simple.  In particular, it requires no new physical insight or mathematical theory.  Anyone who reads the MIT paper[134] is quickly convinced of its correctness, and of its importance:

In a nutshell, the perfect mirror combines the best characteristics of two existing kinds of mirrors—metallic and dielectric.  The perfect mirror can reflect light from all angles and polarizations, just like metallic mirrors, but also can be as low-loss as dielectric mirrors.  On the one hand, the perfect mirror is able to reflect light at any angle with virtually no loss of energy.  On the other hand, it can be "tuned" to reflect certain wavelength ranges and to transmit the rest of the spectrum.

The familiar metallic mirror is omni-directional, which means it reflects light from every angle.  However, it also absorbs a significant portion of the incident light energy.  On the other hand, dielectric mirrors do not conduct electricity and therefore can reflect light more efficiently.  Dielectric mirrors are used in devices such as lasers, which need very high reflectivity.

The principles behind the development of this perfect mirror technology are really quite simple[135]:

Dielectrics like water or glass do not reflect light well, so practical dielectric mirrors are made by stacking alternating thin layers of two dielectrics.  Every time light passes from one layer to the next a little bit of it is reflected.  If the thicknesses of the layers are chosen carefully these reflected light waves combine and reinforce one another, strengthening the intensity of the reflected light.  By stacking many layers scientists can make mirrors that are nearly perfect reflectors.

Another useful property of dielectric mirrors is that they can be designed to reflect only specific frequencies and to allow the rest to pass through unaffected.  For example, dielectric mirrors can be designed to reflect infrared light but transmit visible light.

The main drawback of dielectric mirrors, unlike metallic mirrors, is that they reflect only light that strikes them from a limited range of angles.  This limitation of dielectric mirrors has restricted their use to specialized devices like lasers in which the light can be constrained to strike at a known angle.
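
As a rough aside on the underlying physics (standard thin-film optics, not anything specific to the MIT design): at normal incidence, the fraction of the light field reflected at the boundary between two dielectrics of refractive indices n1 and n2 is the Fresnel coefficient, and choosing each layer's optical thickness to be a quarter of the design wavelength makes the successive weak reflections add in phase:

\[
r = \frac{n_1 - n_2}{n_1 + n_2}, \qquad n_i \, d_i = \frac{\lambda}{4}.
\]

Stacking many such quarter-wave pairs of alternating high- and low-index material drives the overall reflectance toward unity within the designed band, which is why nearly perfect dielectric reflectors are possible at all.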

The potential impact of this technology on the communications industry is nothing short of mind-boggling.  In one early application the MIT group has rolled the mirrors into spaghetti-thin tubes called omni-guides.  A beam of laser light can be guided by such tubes far more efficiently than by fiber optics because glass fibers absorb light.  In particular, unlike fiber optics, the omni-guides can guide light around corners.

One promising possibility is to use such omni-guides as a replacement for the conventional fiber optics now used in communications.  The absorption of light by conventional glass fibers means that the signal must be boosted, say, every 20 kilometers or so.  This requires amplifiers, which only work in a narrow band of frequencies—which further constrains the available bandwidth through the fibers.

Omni-guides would carry light with far less loss of energy.  The omni-guides could stretch for thousands of miles without amplifiers.  Additionally, engineers would not be limited to a small band of wavelengths by the abilities of amplifier technology.  Dr. Dowling, another MIT scientist, has suggested that, because the new mirrors could be made to reflect radio waves, they could be used to boost the performance of cellular telephones.

"You could have a thousand times the bandwidth.  That's a very big deal," Dr. Fan said.

The universe of other potential applications of this technology is limited only by our imaginations.  Perfect mirror technology promises to have significant applications in many fields—including fiber optics, cellular telephones, energy conservation, medicine, spectroscopy and even, perhaps, cake decoration.  A few examples have already been offered:

In the operating room such omni-guides could precisely guide the light of the powerful lasers surgeons use.  The M.I.T. scientists also envision coating windows with infrared reflecting mirrors to keep heat in or out of rooms.  The mirrors could be chopped into tiny flakes and mixed with transparent paint to allow them to be applied directly to walls or windows.  The M.I.T. mirrors could also be useful in improving thermophotovoltaic cells, devices that trap waste heat and convert it to energy.  Even the apparel industry could benefit, using this type of material to make fiber and very lightweight clothing to keep the heat in.

Optical CDMA and all-optical networks

Commercial Technologies Corp. (CTC) of Richardson, Texas (a subsidiary of Research and Development Laboratory of Culver City, CA) plans to deliver an optical networking system in early 1999 based on the use of CDMA (Code Division Multiple Access) technology over optical fiber.

CTC's product, CodeStream, uses a single light source, which passes light through filters to create up to 128 channels by blocking certain portions of the light source and letting others through.  The technology is called O-CDMA, or Optical-CDMA.  A receiver has a filter that matches one of the bar codes generated by transmitters on the system and receives only those signals that match its filter (see the figure that follows).

The prototype system is built from commercial off-the-shelf components with no high-speed backplanes or multi-layer circuit boards.  O-CDMA has received promising reviews in recent articles.[136]

A system based on O-CDMA is cheaper than a high-channel WDM system because the WDM system requires one laser per channel.  The bar code system allows all channels to be transmitted to all points on a network served by the CodeStream system, allowing for optical-level cross connecting and add/drop.
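
To make the code-division idea concrete, here is a minimal software sketch of the principle (an electronic-domain analogy only; CodeStream implements its "bar codes" optically, and the codes and channel values below are invented for illustration):

# Minimal sketch of the CDMA principle behind O-CDMA (illustrative only).

codes = [
    [ 1,  1,  1,  1],   # four mutually orthogonal 4-chip spreading codes
    [ 1, -1,  1, -1],   # (rows of a Hadamard matrix)
    [ 1,  1, -1, -1],
    [ 1, -1, -1,  1],
]

bits = [1, -1, -1, 1]   # one data bit per channel, all sharing the same medium

# Each transmitter spreads its bit with its own code; the medium carries the sum.
composite = [sum(bits[ch] * codes[ch][k] for ch in range(len(codes)))
             for k in range(len(codes[0]))]

# A receiver recovers its channel by correlating the composite with its own code.
for ch, code in enumerate(codes):
    correlation = sum(c * k for c, k in zip(composite, code))
    recovered = 1 if correlation > 0 else -1
    print(f"channel {ch}: sent {bits[ch]:+d}, recovered {recovered:+d}")

Because the codes are mutually orthogonal, each receiver's correlation cancels every channel except its own, which is the same property the optical filters exploit.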

CTC’s parent company, Research and Development Laboratory, invented O-CDMA for the Air Force, which was seeking a way to reduce the weight of satellites by replacing copper with fiber-optic cable.  Interestingly, the telecom industry had long since dismissed the application of CDMA technology as a way to add capacity to fiber, although CDMA is the most promising technology being used for current and third-generation wireless systems.

This lack of interest in O-CDMA has been due to the lack of an economical means to derive the property of phase from a photonic signal.  It could be done in the lab, but at great expense.  To address this problem, CTC has developed the approach called photon phasing, for which a patent now is pending.

While an O-CDMA approach may compete with a WDM approach to channelizing fiber, the two approaches also may be used to complement each other.

"Carriers could use WDM on express channels and CDMA on add/drop channels," he [Johnson] said.  "We're creating an environment where everything's photonically switched."

The WDM approach, in particular, lends itself to point-to-point applications.  The O-CDMA lends itself to broadcast and multicast applications.  For example, one could insert a video source (say the Olympics, World Series, etc.) at some add-point, each independent of the other broadcasts.  At each particular drop-point one could pull off (with the proper CDMA code) the broadcast of interest.

More generally speaking, according to CTC’s Johnson, an all-photonic network must be able to accomplish four tasks:

1.  Accommodate a large number of users on a single fiber,

2.  Add and drop traffic at short distances economically,

3.  Cross connect traffic, and

4.  Restore service.

"We can do all four of those with O-CDMA," Johnson said.

Systems Science

Systems science is focused on how materials and processes can be integrated to form systems to solve problems, to perform applications, etc.  Are there better ways to organize and to integrate the tasks, processes, etc. that are required for the given activity?

This major section begins with efforts to develop more efficient and effective ways to manufacture chips—currently, the basic building blocks of computational systems—from embedded processors, such as those found in today’s cellular phones, to mainframe-class super-computers.

Two techniques discussed here—spherical IC’s and chip stacking—promise to radically improve the way hardware designs are realized on chips.  First, traditional designs—such as the SOC technology discussed later in this section—can be developed and manufactured much more economically and efficiently.  More importantly, new system architectures are enabled that were not previously feasible.

Next, new approaches to the organization and the integration of the various components and functions of a system design are considered.  MEMS (micro-electromechanical systems) technology accomplishes what the name implies—it facilitates the integration of electrical and mechanical devices at the micro-level.  SOC (system-on-a-chip) represents the trend to integrate and optimize increasingly diverse functionality (analog and digital sub-systems, etc.) on the same chip.  Many fundamental systems architectural principles are being re-evaluated, enhanced, or even superseded by new principles—e.g., the trend to move from bus-based interconnections to switch-based interconnections.

Furthermore, the delineation between what constitutes hardware and software is rapidly disappearing.  Configurable computing represents the grand unification of these two branches of computing science.  This integration facilitates more than the optimization of traditional Von Neumann architectures—the possibility of radically novel approaches to system design.  IP, that is intellectual property, provides the industrial (business and technical) infrastructure by which designs (having both hardware and software components) can be marketed, licensed for use by others, integrated into products, etc.

The extensions of systems science mentioned thus far are natural extensions of existing approaches.  However, many other researchers are pursuing radically novel approaches.  One such example is a revolutionary computing technique that uses a network of chaotic elements to "evolve" its answers and could provide an alternative to the digital computing systems used today.  In particular, this "dynamics-based computation" may be well suited for optical computing using ultra-fast chaotic lasers and computing with silicon/neural tissue hybrid circuitry.

Another area where novel approaches are being taken to revisit classical problems is pattern recognition.  Today's pattern-classification algorithms can be too complex for real-time operation, and they often result in false alarms.  On the other hand, biological brains—performing fewer operations per second than silicon chips—can accomplish such tasks almost instantaneously.  The results of the research in this area by Sandia Laboratories are new heuristically based algorithms—called Veri—that mimic the pattern-matching capabilities of the human brain.  Finally, computational sensing and neuromorphic engineering provide an example of the synergism that can result when new systems integration methodologies such as SOC are coupled with a rethinking of those algorithms that better mimic, or model, the real-world situation.

Spherical IC’s

Startup Ball Semiconductor Inc. of Allen, TX is preparing to fabricate the world's first spherical IC’s on tiny balls of single-crystal silicon.  The goal is to replace flat chips with ball devices, which are fabricated as they travel at high speeds through hermetically sealed pipes and tubes.

A revolutionary new kind of chip making which has gone into prototype production “looks as if it could have come from Mars.”  As explained[137] by Hideshi Nakano, a Ball Semiconductor co-founder, and chief operating officer, tiny silicon balls zip through a maze of hermetically sealed tubes as circuitry is wrapped around them.

Key manufacturing features of this technology include:

1.  Spherical IC’s could be made in production plants with a capital investment of about $100 million vs. $1.5 billion for conventional chip plants.

2.  Spherical IC’s would not need additional investments for assembly and packaging.

3.  Compared to wafer-based IC production, much less silicon material will be wasted in the process—roughly 95% of the original silicon must be scrapped and recycled in today’s wafer processes, versus a modest 5% with the spherical ball process.

4.  Turning out IC’s this way would also speed up production.  The entire process—from polysilicon materials to finished ball IC products—should take only five days to complete, compared with three months or more for conventional wafer-based manufacturing.

5.  Spherical IC’s facilitate the production of basic functions—logic, I/O, processor cores, and memories—as building blocks, which then can be clustered together by metal interconnect balls attached to spherical IC’s to form more complex systems.

According to Nakano, who also is a former manager at TI Japan:

"This could change the way companies do business … enabling them to take orders and then start materials into production.  There would be no need for inventories."

Spherical IC’s could also have a number of key performance advantages over conventional chip manufacturing processes:

1.  With its spherical shape, the use of high-inductance features, and the ability to wrap interconnect lines around the surface of the ball to the tiny I/O area, the ball could act as an antenna and transmit radio frequencies.

2.  Both the silicon spheres and process geometries are expected to be shrunk to produce circuitry on balls as small as 20 microns in diameter.  That could make IC’s brain-cell size, Nakano believes.

3.  Because of the porous nature of IC’s manufactured by this process, the heat-dissipation problems of running at higher speeds are much easier to manage.  Chips based on this new technology should be capable of running at much faster speeds than those based on traditional approaches, while drawing less power.

4.  The ability to pre-manufacture IP (intellectual property) in a hard form (versus sharing design and source code for software-based intellectual property) could greatly transform critical issues now facing the IP community.

The transformation in the way silicon is manufactured into systems is almost mind-boggling.  The current wafer methodology is analogous to building a home by first growing a giant Sequoia tree, from which the finished home then is whittled with a pocket knife.  This methodology is replaced with one in which all sorts of lumber materials (from 1”x12”s and 2”x4”s to 4’x8’ plywood, etc.) are economically manufactured in quantity and quality, in a much more timely manner, and are readily available for assembly into all sorts of finished products, not just homes.

While the manufacturing implications of Spherical IC’s are great—in and of themselves—the real leap forward that this new process will enable is the transformation that now is occurring in how systems are designed, as well as how their various subsystems will interact.  The concepts discussed in the section on Configurable Computing, which appears later in this document, will be greatly enabled by the application of Spherical IC technology to those designs.

1.  The fundamentally flat, two-dimensional (bus-based paradigm) chip designs that are produced today can now be superseded by three-dimensional (switch-based paradigm) multi-point to multi-point designs.

2.  Dedicated processing can be distributed within each of the spherical IC balls that are assembled to form an integrated chip.  Contrast this approach with the current computational model of segregated processor and bulk memory—connected by instruction and data busses.

3.  The literal (in hardware) realization of object-oriented designs (in which data and processing are encapsulated together as black-box objects) in silicon will be much easier to achieve.  The SOC (system-on-a-chip) technology that is discussed later will be a direct beneficiary of spherical IC technology.

MEMS – Micro-Electromechanical Systems

Additional references to other breakthroughs in the area of MEMS technology include the following recent articles from the trade literature:

“ADI bets on MEMS splash -- New surface-micromachined devices are geared toward consumer applications,” Electronic Buyers News, March 16, 1998.  http://www.techweb.com/se/directlink.cgi?EBN19980316S0019

“Little Displays Think Big,” Electronic Engineering Times, February 25, 1998.

http://www.techweb.com/se/directlink.cgi?EET19980225S0009

 “Loopholes of Opportunity—Designers are bending old rules to produce new types of connectors to achieve the small size, high pin count, and rising data rates needed in future systems,” Electronic Buyers News, February 16, 1998.  http://www.techweb.com/se/directlink.cgi?EBN19980216S0002

“Bringing MEMs from R&D to reality—A handful of companies have developed microrelay prototypes, but full production is a ways off,” Electronic Buyers News, January 26, 1998.  http://www.techweb.com/se/directlink.cgi?EBN19980126S0006

“Switches & Relays,” Electronic Buyers News, January 26, 1998.  http://www.techweb.com/se/directlink.cgi?EBN19980126S0001

“MEMS The Word,” Jeffrey R. Harrow, TechWeb, August 31, 1998.  http://www.techweb.com/voices/harrow/1998/0831harrow.html

SOC – System-on-a-Chip

Perhaps the most significant consequence of the breakthroughs in materials science technology—such as those that previously have been discussed—is the rethinking of how the various components and subsystems of a product can be better integrated.  The new systems design methods that result from this rethinking not only are able to solve today’s problems better, but also to solve problems that previously were thought impossible with known technology.

The raw processing capacity of silicon chips continues to grow by orders of magnitude—both in terms of the numbers of transistors that can be implemented (from 3 or 4 million today to 400 or 500 million within the next 1-2 years), and in terms of their raw speed (from MHz to GHz).

A significant consequence of this growth in capacity is the explosion in the possibilities of how this capacity can be utilized in the design and implementation of components, resources, systems, etc. on a single die of silicon.

A large part of the economic impetus to the integration of increasingly complex sets of functionality on a single chip is the previously discussed appliancization phenomenon.  Given that a manufacturer can increase the number of transistors on a single chip by two orders of magnitude (a factor of 100) or better, what is the configuration (application) of these transistors that will add the most value to the chip?  The SOC activity now being witnessed in the chip industry is the direct consequence of the previously described feature-creep phenomenon—seeking to add more value to a device while maintaining its retail cost to the customer.

Currently, little overlap exists between the stand-alone PC chip market and the embedded processor market.  This situation is likely to shift significantly over the next five years as consumer products—such as cellular phones, set-top boxes, internet appliances, etc.—come to possess many of the features that previously were found only in PC-class systems.

One of the key technical enablers of this shift is the ability to put an entire system’s worth of applications on a practical-size (and practically priced!) chip.  Functions that previously were handled by stand-alone dedicated-function chips—such as memory, analog signal decoding, and even DSP microprocessing—now can be combined on a single piece of silicon.

With this newly found ability to integrate many complex functions (even a complete system) onto a single chip, another systems integration issue is being raised and addressed.

In the past, the flexibility in the system design of a PC was accomplished through the integration of specialized components via standard interfaces—such as the ISA, and later the PCI bus, for the PC, as well as via interface chip sets from Intel and others.  A particular PC system would be given specific capabilities (e.g., USB or network NIC) by the incorporation of those specific components as distinct chips or boards.

To add a new functionality thus meant adding a new card to the system.  Each chip or card is treated as a black box by the systems integrator or PC maintainer.  The key to functional and component interoperability is adherence to the system bus or chipset specifications.

Now, the purpose and design of system buses in the system-on-a-chip arena is undergoing significant rethinking.  The trend has three identifiable phases:

1.  SHORT-TERM—map the current real buses between real devices to a virtual bus between virtual devices, all of which reside on the same chip.

2.  MID-TERM—development of new virtual buses that will capitalize on system-on-a-chip flexibility—with only moderate redesign of IP interfaces.

3.  LONG-TERM—development of point-to-point integration of IP on the same chip; MSOC (Multi-Systems-on-a-Chip).

In the short-term, expedience is the driving force—getting to market quickly is all-important.  The goal is to reuse current IP (intellectual property) designs, licenses, etc. with as little technical redesign and licensing re-negotiation as possible.  Consequently, the first generation of virtual buses seeks to be a true emulation of existing physical buses—including the minute details that are specific to the given physical bus, such as timing and signaling constraints.

This approach facilitates fairly straightforward reuse of existing IP designs; however, it does NOT take advantage of many new possibilities in integration that system-on-a-chip eventually should facilitate.  A prime example of this short-term approach is National Semiconductor’s new system-on-a-chip for low-cost PCs, which is due midyear 1999.

Efforts of the mid-term type of systems development already are in process.  In this case, IP (intellectual property) vendors are collaborating together to define new virtual buses which capitalize on system-on-a-chip flexibility.  These classes of buses are strictly virtual—they correspond to no existing physical bus realization.

This approach also is a compromise approach since its goals are to require only moderate redesign of existing IP interfaces.  The Virtual Socket Interface Alliance (VSIA) is among the working groups currently developing a set of such on-chip virtual bus specifications.  The Reusable Application Specific Intellectual Property Developers (RAPID) is another group with technical interests in the development of intellectual property standards that are more general than those of VSIA.

During this interim, IP (intellectual property) core designers should be able to adapt their existing designs to the new virtual bus interface by using thin interface layers to adapt the new virtual bus interfaces to the existing physical interfaces from which the designs are derived.  Later, the IP behind these thin interfaces will be redesigned and re-engineered to completely remove the implicit—and now no longer needed—physical bus constraints around which it originally was designed.
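
A rough software analogy of such a thin interface layer is the familiar adapter pattern (the class and method names below are hypothetical, invented only to illustrate the idea; they do not correspond to any actual VSIA or RAPID interface):

# Illustrative adapter sketch: a legacy core is wrapped so it presents the
# new virtual-bus interface without any internal redesign.

class LegacyBusCore:
    """Stand-in for an existing IP core designed around a physical-bus protocol."""
    def bus_write(self, address, data):
        print(f"legacy core: wrote {data:#x} to {address:#x}")

class VirtualBusPort:
    """Stand-in for the on-chip virtual-bus interface new SOC designs expect."""
    def transfer(self, address, data):
        raise NotImplementedError

class ThinInterfaceLayer(VirtualBusPort):
    """Wraps the legacy core so it can sit on the virtual bus unchanged."""
    def __init__(self, core):
        self.core = core

    def transfer(self, address, data):
        # Translate the virtual-bus transaction into the legacy physical-bus call.
        self.core.bus_write(address, data)

port = ThinInterfaceLayer(LegacyBusCore())
port.transfer(0x1000, 0xCAFE)

Later, when the core itself is re-engineered to the virtual bus, the wrapper simply disappears.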

Long-term designs are expected to be completely independent of the physical bus limitations that now exist.  Going further in the transformation, the supposed need for buses—virtual or otherwise—is being restudied.  Technologies such as the Spherical IC breakthrough discussed previously should facilitate fresh approaches to systems design and integration—including the development of point-to-point integration of IP on the same chip, with elimination of the system-bus model!

Precursors of this level of systems design and integration already are beginning to appear in the market.  Hewlett-Packard Co. recently reported[138] on such a project still in the research stage.

This project promises to leapfrog current systems-on-a-chip efforts, and for the first time to make it economically feasible to quickly roll custom processors, in low volumes, for specialized, deeply embedded applications.  Experts believe this could be the hardware foundation of the emerging "post-PC" world.

Hewlett-Packard is attempting to take embedded computing to the next step: beyond off-the-shelf processors, beyond system-on-a-chip, and into the domain where custom processors are architected by an automated integrated hardware-software co-design process.  Such chips would be cost effectively designed and manufactured not only for high-volume runs, but also for very low volumes for specific embedded applications.

A major impetus for the venture is a perceived need to supply the burgeoning demands of smart embedded devices.  Such IA’s—information appliances—include Web processors, car navigation systems, and other new-age consumer-electronics devices being suggested by ventures such as Sun Microsystems Inc.'s Jini distributed computing concept.

These HP custom chips would not be limited to low-level microcontrollers with puny compute capabilities, but would include full-fledged explicitly parallel-instruction computing (EPIC) architectures.  EPIC is the powerful VLIW-like (very-long-instruction-word) platform that forms the foundation for the HP-Intel Merced microprocessor now being developed.

To support the development process for realizing complex embedded requirements—for low power consumption and high MIPS—in custom chips, HP has already developed a prototype way to design such circuits, which it has called the PICO (Program In, Chip Out) Architecture Synthesis System.

Switching to Switching

The first fruits of the long-term re-evaluation of the purpose, function, and use of bus-based architectures are already visible in the forthcoming (already planned and/or announced) generations of many of today’s well-known processor families.  Specifically, traditional bus-based architectures are migrating to the use of switch-based architectures.

First, interconnects between the chip and other supporting chips, boards, etc. are being implemented as switch-based interfaces.  Compaq’s forthcoming Alpha chip set, the EV6, was recently described in an article[139] in Internet Week.

The current Compaq AlphaServer GS60 and GS140 will be the last Alpha servers built using bus technology.  Beginning next year, Compaq AlphaServer systems will be based on a switched architecture similar to the VAX 9000 mainframe, in order to improve performance and scalability.  Switched interfaces that once were used only in the highest-end products now are migrating to ever-lower levels in the product set.

Secondly, within the chip, the various subsystems—instruction units, DSP’s, I/O processors, data cache units, etc.—are being interconnected by switch matrices.  Pauline Nist, vice president of the Tandem products and technology group at Compaq, states[140] that their upcoming Alpha chip, the EV7, will contain an on-board memory controller and that its processor will use a "more robust switch-like interface rather than a bus-based interface."

Similarly, Intel has provided the first public glimpse of its so-called NGIO (Next Generation I/O) at a recent developer forum.  Intel’s approach will use separate host and target adapters linked by a switching matrix over gigabit/second serial connections.  Production silicon for the new channel-like architecture could become available in 2000—about the same time as the first Merced servers.

Furthermore, this trend towards using switch-based interconnects is not restricted to high-end servers.  As another example, Sharp Microelectronics has been developing the ARM7 Thumb to be deployed as a standard microcontroller targeting portable applications and the consumer and industrial markets.  The ARM instruction set currently is a widely deployed standard choice for many embedded appliances.[141]

The key feature [of the processor] is a patent-pending programmable crossbar switch Sharp engineers designed in to speed internal communications.  The switches can make connections between 64 internal nodes and 40 external pins.  The switches are divided into three levels, each using a common multiplexing cell that implements a four-way switch.
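
The appeal of the switch-based approach can be illustrated with a toy model (not the Sharp or Compaq design; just a sketch of why a crossbar scales better than a shared bus): a crossbar can carry several point-to-point transfers concurrently, whereas a single shared bus must serialize them.

# Toy crossbar model: each input port can be routed to one free output port,
# and independent connections can be active at the same time.

class Crossbar:
    def __init__(self, ports):
        self.ports = ports
        self.connections = {}            # input port -> output port

    def connect(self, src, dst):
        if src in self.connections or dst in self.connections.values():
            raise RuntimeError("port already in use")
        self.connections[src] = dst      # this path is now dedicated

    def transfer(self, src, payload):
        print(f"port {src} -> port {self.connections[src]}: {payload}")

xbar = Crossbar(ports=8)
xbar.connect(0, 5)
xbar.connect(2, 7)                       # a second, concurrent connection
xbar.transfer(0, "cache line fill")
xbar.transfer(2, "DMA block")

On a shared bus, the second transfer could not begin until the first released the bus; in the crossbar, both paths exist at once.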

Photonic Optical Switching Systems

The merger of optical systems technology with switching technology is soon to become a commercial reality.  Thomas Hazelton of Optical Switch Corp. of Richardson, Texas, recently wrote a feature article[142] on the state of optical switching technology for LIGHTWAVE.

The motivation for such technology is obvious.  To effectively manage the growing number of traffic-bearing wavelengths, a new breed of optical networking element will be required for adding, dropping, routing, protecting, and restoring optical services cost-effectively.

The realization of these capabilities is linked to the availability of a reliable, performance-oriented optical-switching technology that enables N x N matrix growth well beyond a 4 x 4 or 8 x 8 matrix.

Until now, no single technology has provided all the necessary "bells and whistles" to truly address the stringent requirements of the optical networking element.  Lithium niobate optical-switching technology was first introduced in 1988 and addresses only a small segment of this market opportunity.

Fortunately, frustrated total internal reflection (FTIR) provides a very promising alternative to mechanical, solid-state, and even micro-mirror optical switching.  FTIR has the potential to address many of the requirements of future network elements.  The FTIR optical switch could be classified as a virtual solid-state optical device.

FTIR has demonstrated a number of highly desirable characteristics.  Perhaps the most significant attribute of the FTIR switch is its use as a building block—optical junctions—to form large nonblocking optical matrices (potentially 1024 x 1024) without compromising fundamental optical performance.  With its symmetry and scalability, FTIR optical-switching technology offers the "fabric" necessary for applications such as optical add/drop multiplexers, optical protection switching, and optical crossconnects.

One of the key benefits of FTIR technology is derived from the ability to configure 1 x N switches in a back-to-back sequence, thus creating a true nonblocking optical-switching matrix and providing growth scalability as well as optical symmetry.  All port-to-port connections are capable of bidirectional transmission, and light can originate on either the "A" or "B" side of the matrix.

Within the telecommunications arena, the first application of FTIR switching likely will occur within the optical add/drop multiplexer (OADM) network elements currently in definition by ITU and Bellcore standards committees and in development by telecommunications-equipment manufacturers around the world.  These "next-generation" optical networking elements will provide the building blocks necessary to extend WDM from a static point-to-point service into a dynamic platform of enhanced features and services.

In addition to add/drop capabilities, OADM's will also be required to provide user-network interface (UNI) services, which are wavelength tributaries offered by a WDM optical ring.  It is also envisioned that optical performance monitoring, service restoration, and fault isolation will be necessary components of the UNI service offering.

Configurable Computing

The principle behind configurable computing is simple: rather than write step-by-step algorithms for a fixed circuit configuration, use flexible hardware—such as field-programmable gate arrays—to rewire the circuit for the problem, in real time.  The need to rethink the best tools, algorithms, etc. to effectively use the SOC’s that now can be designed leads naturally to the field of configurable computing.[143]  SOC creates some radically different architectural possibilities.

A principal reason custom computing machines have not been commercially successful in the past is the clumsy programming tools that have been available.  Reaching the lofty design goals to which designers now aspire requires new sets of design tools—new tools for designing the new SOC chips, and new tools for the development of applications that operate on these chips.

Readily available tools are geared to circuit design, rather than to high-level algorithm design.  Fortunately, much effort now is in process to develop new tools for attacking problems differently.  Currently, two competing trends attempt to address the problem.

One approach dispenses with FPGA’s entirely.  An example of this approach is National Semiconductor Corp.'s National Adaptive Processing Architecture (NAPA) project.[144]  The NAPA approach integrates adaptive logic and scalar processing with a parallel-processing interconnect structure.  Taking a coarse-grained view, standard circuit blocks such as microprocessor cores, memory arrays and floating-point units are configured via programmable interconnect schemes.

In the opinion of Charle Rupp, an engineer on National Semiconductor Corp.'s NAPA project:

"Computer architectures are the result of a continual balancing effort over the past 30 to 40 years.  AND gates are just too far from application code.  We have all come up against the intractable problem of synthesis, which lies between the programmer and the problem.  Floating-point processors are just too cheap not to have on-chip."

The other approach makes a frontal assault on the high-level programming problem by borrowing from the best ideas to come out of conventional programming.  Examples of such include Aristo’s block-level topology and design tools for large chips with multiple blocks of IP, and Stanford’s PAM-Blox project.

In retrospect, the traditional view of computing has focused on the development and compilation of step-by-step algorithms which are executed by a fixed circuit configuration, e.g., develop stored programs for an Intel X86-like processor, and a general purpose OS—such as Windows or Unix.  Programmability exists only in software—the hardware’s capability is fixed.

There also are a number of implicit assumptions that have been universally true for traditional hardware architectures until now, but that are being questioned, and even suspended, by the new architectures that are possible.  An example of such an implicit assumption is that the processor's instruction unit (a limited resource) is segregated from, and shared by, the stored programs and data.

Traditionally, the end product of software development has been the object-code program—a stored program of machine-readable and executable instructions.  Each object-code instruction—in sequence—would be fetched from memory storage, decoded (interpreted), and executed by the processor.  In particular, the memory and the processor have been distinct subsystems.

The demarcation of effort for the software and hardware industries traditionally has been the declared set of machine instructions (e.g., the Intel X86 instruction set) defined for the system.  The software industry focused on how better to organize the instructions of a program to reduce storage requirements and to increase performance.

For example, the development of high-level languages served to make this effort more efficient (less time to develop the target object-code, reuse of design components, etc.) and more effective (better performance).  On the other hand, the hardware industry focused on how to make the declared set of machine instructions execute as efficiently as possible.  For example, new instructions—e.g., for floating point operations—were added.

Then, for the sake of yet more gains in efficiency and performance, the two industries began leveraging knowledge of the internal functioning of each other.  The software industry developed tools that not only took into account the hardware instruction set, but also leveraged hardware-specific knowledge about the timing requirements of various instructions and how they could be overlapped, pipelined, etc.—to rearrange instructions for better throughput.

Likewise, the hardware industry attempted to optimize the execution of an arbitrary object-code program—based on speculative processing not indicated by the program.  For example, the next instruction might be prefetched and decoded—before the current one had completed its execution—in speculation that it would indeed be the next instruction to execute—i.e., no branching resulted from the current instruction’s execution.

The new view of configurable computing considers the possibility of dynamically configurable circuits, such as is possible today with field-programmable gate arrays (FPGA’s), which permit the circuit to be prewired or rewired for the particular problem.  In the future, this approach will be enabled further by the use of Ball Semiconductor’s previously discussed Spherical IC technology.  An application then will be mappable from the systems/object-model level directly into executable silicon.

Why does configurable computing matter?  The ability to express and execute a program in reconfigurable hardware offers the potential for much faster execution.  Hardware compilation means mapping the objects of the problem formalization directly to hardware, thereby bypassing the hidden assumptions of the intermediate von Neumann model.

There is no intermediate assembly/machine code that a serialized instruction processor must load, decode, and execute.  When configurable computing is taken to its ultimate realization, the designer or programmer is able to bypass the intermediate von Neumann computational model of a sequential processor acting upon a stored memory program.
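
A toy software model may make the contrast concrete (purely illustrative; a real FPGA tool flow synthesizes gates and routing, not Python): the same Boolean function can be executed step by step by a processor, or "compiled" once into a lookup table that stands in for a configured circuit and answers in a single step.

def majority(a, b, c):
    # Conventional view: a step-by-step algorithm executed on a fixed processor.
    return (a and b) or (a and c) or (b and c)

def compile_to_lut(func, n_inputs):
    """Enumerate every input combination once and record the result,
    mimicking the way configuration 'rewires' hardware for one problem."""
    lut = {}
    for i in range(2 ** n_inputs):
        bits = tuple((i >> k) & 1 for k in range(n_inputs))
        lut[bits] = func(*bits)
    return lut

lut = compile_to_lut(majority, 3)

# "Execution" is now a single table lookup, analogous to signals propagating
# through the configured circuit rather than instructions being fetched and decoded.
print(lut[(1, 0, 1)])    # -> 1
print(lut[(0, 0, 1)])    # -> 0

FPGA logic cells are, in fact, built around small lookup tables of exactly this kind, which is why configuring them amounts to rewiring the circuit for the problem at hand.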

Excellent general discussions of configurable computing are presented in: “Trends in ASICs and FPGAs,”[145] and “FPGA arena taps object technology.”[146]

Configurable computing, like the parallel-processing trends that preceded it, was launched with the enthusiastic recognition that VLSI advances coupled with new processor designs could result in a massive leap in performance.  But like other approaches to advanced computing, configurable computing is running up against the puzzle of translating high-level problems into code that low-level hardware components are able to efficiently crunch.

In theory, the idea is attractive in its simplicity: Rather than write step-by-step algorithms for a fixed circuit configuration, simply use field-programmable gate arrays to rewire the circuit for the problem.  But once computer designers began to move down that path, the simplicity of configurable computing began to take on complexities of its own.

Efforts in the configurable computing arena are undergoing significant rethinking.  The trend has three identifiable phases:

1.  SHORT-TERM—In taking a coarse-grained view, standard circuit blocks such as microprocessor cores, memory arrays and floating-point units are configured via programmable interconnect schemes.

2.  MID-TERM—High-level software structures such as object-oriented programming are blended with lower-level structures and design techniques taken from hardware-description languages to create end-to-end development tools.

3.  LONG-TERM—Provide support for dynamic computing—the capability to synthesize and configure circuits as a program is running.

The short-term coarse-grained approach eases the problem of compiler design, since the compiled code does not have to reach all the way down to circuit configurations.  The strategy is to selectively borrow from and appropriately extend (today’s) conventional programming tools and methods.  Conventional microprocessor-based systems have inherited sophisticated compiler technology that was developed to link increasingly abstract software structures with underlying von Neumann architectures.

The fine-tuned match of the two solves many problems in an acceptable amount of time, since conventional high-level-language/compiler/CPU technology is widely accepted, readily available, and well supported.  Consider how engineers and programmers once solved such tasks as floating-point calculations, memory management operations, and graphics computations in software, but have since replaced those software subroutine calls with hardware-implemented instructions.  The same principle is being used as a guide in how to successfully marry the hardware circuit designer’s tools with the software programmer’s compilers.

New candidate functions for hardware sublimation into SOC’s include network interface stacks, web browsers and servers, various DSP functions (such as CDMA, xDSL), etc.  Specific examples of this trend—where the efforts of configurable computing and SOC design meet—include the joint effort of IBM and STMicroelectronics[147] to produce small but very powerful devices for multi-function information appliances, other FPGA efforts by STMicroelectronics[148], and the more recently announced CommFusion SOC announced by Motorola.[149]

In the mid-term, the configurable computing strategy is to make dynamic circuit synthesis available through familiar object-oriented programming methods.  Proponents expect the combination could prove to be the critical path from high-level problem descriptions to the underlying circuit diagrams that yield solutions.  By making dynamic circuit synthesis available through familiar programming methods, the power of configurable computing could be delivered to a broad group of scientists and product engineers.

An example of this approach is JHDL, being developed at Brigham Young University and the University of California, Berkeley.  JHDL adapts object-oriented features of Java to a hardware-description approach to capture the wiring-on-the-fly aspect of configurable computing.

Programming becomes a process of building custom circuit-generator objects from a library of circuit-generator objects.  Each object has local methods that wire up the subcomponents and initialize input and output terminals.  The object hierarchy makes the circuit structure explicit.  The programmer can simulate and observe the system's behavior as the object classes are being constructed.  Components can be easily added or rewired using basic object-construction functions.

Objects therefore include constructor/destructor code that allocates space in memory and initializes parameters when an object is invoked, then releases the memory when it is no longer in use.  Similarly, FPGA configurations need to be allocated over an array of gates when they are needed.  In JHDL, circuit configurations are represented as objects and the same type of constructor approach is used to configure them onto a specific FPGA array.
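
The flavor of this style can be suggested with a small sketch (JHDL itself is a Java class library; the Python below, with entirely invented class names, only illustrates the pattern of circuit-generator objects whose constructors allocate and wire subcomponents):

# Hypothetical sketch of the circuit-generator-object idea (names invented).

class Wire:
    def __init__(self, name):
        self.name = name

class CircuitObject:
    configured = []                        # stands in for allocated FPGA area

    def __init__(self):
        CircuitObject.configured.append(self)   # constructor "configures" the part

    def release(self):
        CircuitObject.configured.remove(self)   # destructor-style cleanup frees it

class Adder(CircuitObject):
    def __init__(self, a, b, out):
        super().__init__()
        self.a, self.b, self.out = a, b, out    # initialize input/output terminals

class Accumulator(CircuitObject):
    """A composite generator: its constructor instantiates and wires subcomponents."""
    def __init__(self, data_in, total_out):
        super().__init__()
        feedback = Wire("feedback")
        self.adder = Adder(data_in, feedback, total_out)

acc = Accumulator(Wire("data_in"), Wire("sum_out"))
print(len(CircuitObject.configured), "circuit objects currently configured")

The object hierarchy makes the circuit structure explicit, and tearing a component down releases its share of the array, just as the JHDL description above suggests.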

A second example of this approach is the work by OptionsExist Ltd.[150] of Cambridge, England, which has developed its Handel-C compiler for use with the ARM processor family that is widely used in the embedded appliance world, including within many cell-phone designs.

The goal of OptionsExist Ltd. is to provide engineers with a software/hardware tool set of compilers and breadboards such that from a single code-base, the designer is able to:

1.  Compile software/object code for an ARM-family processor,

2.  Compile to FPGA to performance-test the hardware version,

3.  Compile to ASIC's for final system delivery, and

4.  Optionally, compile to ASIC's which are integrated with embedded ARM-family components—providing a hybrid delivery for those implementations that need some functions in software for required flexibility and other functions in hardware for required performance, etc.

One more example of this approach is now commercially available from Triscend Corp.  This company has introduced[151] a configurable processor system unit, or CPSU for short, that combines 1) hard-macro versions of industry-standard processors and embedded SRAM, 2) system-level functions, and 3) a programmable logic fabric (to implement custom I/O functions), all on a single chip.  These blocks are interconnected on the chip via an open system bus and a fully routable peripheral bus.

The chips in the E5 family of configurable processor system units from Triscend can be called the first microcontrollers with on-board FPGA’s.  Or, they can be thought of as the first FPGA’s with on-board microcontrollers.

The functions will be interoperable across the various members in the Triscend CPSU family.  Currently, the 8-bit version of the E5 family is implemented using a "turbo" version of the popular 8032 microcontroller (the ROM-less version of the 8052).  Versions at the 16-bit and 32-bit level also are planned.

Included in the programmable fabric is dedicated high-speed carry logic able—for example—to implement fast adders, counters, and multipliers.  This logic permits high-performance functions to be realized in the logic fabric.

In support of this chip family, Triscend has crafted an easy-to-use development tool, dubbed FastChip.  The FastChip toolset provides a graphical, integrated development environment.  With this tool, users are able to configure the CPSU with predesigned "soft" peripheral support functions, which can be selected from the company's function library.  Stored as hardware-design-language descriptions, these modules are compiled and implemented in the programmable logic fabric after having been selected.

Additionally, designers can add new functions developed with other schematic or HDL tools.  Once merged into the design flow, the functions are configured in the FPGA portion of the chip.  Then real-time, in-system hardware and software debugging can take place.

In the long-term, the configurable computing strategy is to enhance mid-term efforts—such as those described above—to further provide support for dynamic computing—the capability to synthesize new and reconfigure existing circuits in real-time while a hardware program is running.  A very similar problem had to be solved by object-oriented software compilers, since objects are created and destroyed dynamically as a software program executes.

Efforts to provide support of hardware-enabled dynamics are being pursued from a number of avenues.  Such an effort is described in “Cell architecture readied for configurable computing.”[152]

A team at NTT's Optical Network Systems Laboratories believes it has overcome a major hurdle in the quest to design a computer architecture around FPGA’s.  They have introduced a reconfigurable circuit family and companion software system that is flexible enough to form the basis for a general-purpose computer that could dynamically reconfigure itself for specific problems.

The computing paradigm offers a novel feature—the ability of one circuit to dynamically configure another circuit.  So far, it's been possible to reconfigure such processors only via software.

"PCA (Plastic Cell Architecture) is a reference for implementing a mechanism of fully autonomous reconfigurability," ...  The new capability represents a "further step toward general-purpose reconfigurable computing, introducing programmable grain parallelism to wired logic computing."

The NTT researchers have devised an object-oriented programming language, which they call SFL, to program PCA arrays.  As with high-level hardware-description languages, SFL allows the programmer to specify circuit behaviors, rather than the circuit wiring diagram itself.  SFL behaviors are represented as software objects that are instantiated on the hardware cell array.

The enhanced version will have a structure called dynamic instantiation, which will distinguish it from other HDLs.  That new capability will offer a major step toward general-purpose computing for configurable computers, according to Nagami.

If such efforts as this were to leverage the Spherical IC technology of previously discussed Ball Semiconductor—by the way, several of Ball’s largest private investors are large Japanese organizations—the sky is the limit as to what could be accomplished!

PCA’s represent another route to parallel computing, since the cells are structured as combined memory and processor units.  A small amount of SRAM is used to store data, which is then processed by another part of the cell circuit.  As with conventional FPGA’s, each cell can operate as a lookup table to implement Boolean logic.  The cells also can execute other functions such as primitive ALU operations and critical-interconnection primitives.

Since the array of cells has no specific programmable interconnection elements, the cells must have primitive interconnection functions along with arithmetic and logic capabilities.  A cell could then be programmed as a connection, rather than a processor.
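
A toy model hints at what such a homogeneous cell array looks like (this is not NTT's actual PCA cell; the functions below are invented purely for illustration): each cell is configured either as a small lookup-table logic element or as a bare interconnection that passes its input along.

def make_lut_cell(truth_table):
    """Configure a cell as two-input logic defined by a truth table."""
    return lambda a, b: truth_table[(a, b)]

def make_wire_cell():
    """Configure a cell as a connection: it simply forwards its first input."""
    return lambda a, b: a

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# A three-cell chain: logic, then routing, then logic again.
cells = [make_lut_cell(AND), make_wire_cell(), make_lut_cell(AND)]

signal = cells[0](1, 1)       # 1 AND 1 -> 1
signal = cells[1](signal, 0)  # routed through unchanged
signal = cells[2](signal, 1)  # 1 AND 1 -> 1
print(signal)                 # -> 1

Reconfiguring such an array is just a matter of replacing the configuration of individual cells, which is what the PCA's dynamic instantiation mechanism is intended to automate in hardware.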

The resulting processing array is able to mimic the ability to create specialized cells.  In a living system, that capability is ultimately used to define neural networks that implement critical information-processing functions.

Von Neumann originally created the cellular automata architecture to demonstrate a more general approach to computation that would mimic the ability of cellular life to create fully autonomous creatures with both robotic capabilities such as limbs and navigation and control components like neural networks.  Likewise, NTT's PCA architecture, based on the same concept, could ultimately prove to be a more general computing approach that could dynamically reconfigure itself for specific problems.

IP–Intellectual Property

IP, or intellectual property, is the hardware equivalent of what source code is to software.  Intellectual property is the specification of how to implement a function, process, application, etc. in hardware.  The IP specification is designed to deliver a specified functionality—usable within a given environment or setting.  This domain of application or use is constrained to specified API’s—termed interfaces by the hardware community—and to specific silicon foundry processes (e.g., CMOS, SOI, FPGA), etc.  Given that those constraints are met, an implementation based on a given IP should readily integrate as a component of more complex systems.

The advent of IP represents a fundamental change in how systems are designed and chips are developed and manufactured.  No longer does the development of new systems require that a company provide support for end-to-end design, development, and manufacturing of all subsystems, components, etc.  In fact, a number of small companies have arisen that have no physical resources for manufacturing—rather, their expertise is in the design of reusable IP that is highly valued for its functionality and is marketed to many different customers.

These are the hardware counterparts to the multitude of software-focused companies that develop libraries of C++ subroutines, VisualBasic scripts, Java applets, etc. for resale both to major application developers and to custom one-job-to-do developers.

In particular, IP facilitates mass-customization at the SOC level.  The SOC developer can focus on integration of the total system design, rather than on the engineering of each subsystem.
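
As a rough software analogy (this author’s illustration only, not an actual SOC design flow), the integrator’s job looks like composing licensed building blocks behind an agreed interface rather than engineering each block from scratch:

# Illustrative sketch only: reusable "IP" blocks exposed behind a common
# interface so that an SOC integrator composes them rather than re-engineering
# each subsystem.  The block names and the interface are hypothetical.

from abc import ABC, abstractmethod

class IPBlock(ABC):
    """The agreed interface (the hardware community's 'API') every block honors."""
    @abstractmethod
    def process(self, data: bytes) -> bytes: ...

class TunerIP(IPBlock):
    def process(self, data: bytes) -> bytes:
        return data            # placeholder for channel-selection logic

class MPEGDecoderIP(IPBlock):
    def process(self, data: bytes) -> bytes:
        return data[::-1]      # placeholder for decode logic

class SystemOnChip:
    """The integrator's job: wire licensed blocks together, not design them."""
    def __init__(self, blocks):
        self.blocks = blocks

    def run(self, data: bytes) -> bytes:
        for block in self.blocks:
            data = block.process(data)
        return data

soc = SystemOnChip([TunerIP(), MPEGDecoderIP()])
print(soc.run(b"broadcast stream"))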

In a press release on April 6, 1998, National Semiconductor announced:

"National has assembled, through acquisition and internal development, all the pieces it needs to integrate a PC on a single chip," said Brian Halla, National CEO, speaking today at the Semico Summit, a semiconductor industry conference in Phoenix, Ariz.  "We have all the intellectual-property building blocks and the methodology to stitch them together onto a square of silicon less than half an inch wide.

As reported on May 11, 1998, in the press release “Lucent Technologies combines FPGA, standard-cell logic on single silicon chip for high performance, flexibility,” Lucent also has been strategically gathering IP for its various SOC efforts.  Sanjiv Kaul, Vice President of Marketing at Synopsys, an electronic design automation firm known for its system-IC design tools, has observed:

"As system-on-a-chip design begins to enter the mainstream, it is essential that customers can integrate complex IP functions at their desktop."

"By combining embedded standard-cell cores and programmable logic, Lucent is clearly staking out a position that offers customers the right combination of flexibility, cost, and performance.  Synopsys is currently working to enhance tools and methodologies to better support this type of design flow."

Several organizations now exist to assist in the standardization of what constitutes a reasonable set of design constraints (API’s, etc.) for the IP development community to target, much as software developers now target Microsoft’s Windows COM, Sun’s Java, CORBA, the W3C’s various specifications, etc.

In particular, RAPID—Reusable Application-Specific Intellectual Property Developers—is developing an online IP catalog located at http://www.rapid.org/, and TSMC’s IP Alliance is working on SOC designs with the second wave of fabless companies now arising in the United States.  Their system IC’s will meld customer-developed IP with embedded memory, analog circuitry, and logic imported from the commercial IP industry.

More recently, ARM, Cadence Design Systems, Mentor Graphics, Motorola, Nokia, Siemens, Toshiba, TSMC—plus two smaller companies involved in the cores business, ISS and Phoenix—have joined together in the founding of the Virtual Component Exchange (VCX).  This group is focused on the legal and business hurdles to trading virtual components in an international open market.

Chaos-based systems that evolve—an alternative to current computing

A revolutionary computing technique using a network of chaotic elements to "evolve" its answers could provide an alternative to the digital computing systems used today.  This "dynamics-based computation" may be well suited for optical computing using ultra-fast chaotic lasers and computing with silicon/neural tissue hybrid circuitry.

This new model of computation was first reported by Dr. William L. Ditto—professor of physics at the Georgia Institute of Technology and head of the Applied Chaos Laboratory—in the September 7 issue of Physical Review Letters.  It has been termed chaos-based or dynamics-based computation.  An explanation of the principles on which this computational model is based and the impact of its application is presented in “Chaos-based system that "evolves" may be alternative to current computing.”[153]

Dr. Ditto and his associates—as well as many other scientists—have observed a variety of behavioral patterns created by chaotic systems, including those found in living organisms.  Dr. Ditto further reasoned that these natural chaotic systems should have been eliminated through evolution unless they served a purpose.

From a practical viewpoint, Dr. Ditto’s implementations of this system have demonstrated the ability to handle a range of common operations, including addition and multiplication, as well as Boolean logic and more sophisticated operations such as finding the least common multiple in a sequence of integers.
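
To give a flavor of how a chaotic element can be coaxed into doing logic, the following toy numerical sketch (this author’s illustration, not Dr. Ditto’s published scheme; the map, the input encoding, and the threshold are all assumptions) turns a single iteration of the logistic map into a NOR gate, a universal logic primitive:

# Toy numerical illustration only (not Dr. Ditto's published scheme): a single
# chaotic element -- the logistic map f(x) = 4x(1-x) -- is "programmed" into a
# NOR gate purely by the choice of initial state, input encoding, and threshold.
# The parameters below are assumptions chosen for this sketch.

def logistic(x):
    return 4.0 * x * (1.0 - x)

X0 = 0.5          # initial state of the chaotic element
DELTA = 0.25      # how much each logic-1 input perturbs the state
THRESHOLD = 0.9   # output is 1 only if the iterated state exceeds this

def chaotic_nor(i1, i2):
    state = X0 + (i1 + i2) * DELTA   # encode the inputs onto the element
    state = logistic(state)          # let the dynamics evolve one step
    return 1 if state > THRESHOLD else 0

# The same element becomes a different gate if the encoding is merely re-tuned,
# which is the flexibility described in the text.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", chaotic_nor(a, b))   # prints the NOR truth table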

According to Dr. William L. Ditto,

"We've shown that this can be done, but we've only seen the tip of the iceberg. … This is a glimpse of how we can make common dynamic systems work for us in a way that's more such as how we think the brain does computation."

Chaotic elements are useful to this system because they can assume an infinite number of behaviors that can be used to represent different values or different systems such as logic gates.  Because of this flexibility, altering the initial encoding and changing the connections between the chaotic elements allow a single generic system to perform a variety of computations using its inherent self-organization.  In conventional computing, systems are more specialized to perform certain operations.

"We aren't really setting up rules in the same sense that digital computers are programmed, … The system develops its own rules that we are simply manipulating.  It's using pattern formation and self-organized criticality to organize toward an answer.  We don't micromanage the computing, but let the dynamics do the hard work of finding a pattern that performs the desired operation."

He compared dynamics-based computation to DNA computing and quantum computing, both of which are computing paradigms still in their early stages of development.

He has done theoretical work applying dynamics-based computing to an ammonia laser system and hopes to see the system implemented experimentally.

"Potentially, we could stimulate a very fast system of coupled lasers to perform a highly complicated operation such as very fast arithmetic operations, pattern detection and Fourier transforms. … We have something that very naturally performs an operation in an optical system.  This would provide an alternative to existing efforts, which try to make optical systems operate like transistors."

Because this system differs dramatically from existing digital computers, it has different strengths and weaknesses.  Since its functioning depends on interaction among its coupled elements, the system is naturally parallel.  Ditto believes the system would work particularly well in optical systems.

"It might be better than digital computing for those activities that digital computing doesn't do very well—such as pattern recognition or detecting the difference between two pieces of music."

Long-term, the possibilities for how systems could be architected and problems could be solved by the application of chaos-based or dynamics-based computational models are almost unlimited.

"We hope that you can take any dynamic system, stimulate it in the correct way, and then get it to perform an operation for you. … This would provide an alternative to engineering a system from the ground up."

Veri—Instantaneous pattern recognition

Today's pattern-classification algorithms can be too complex for real-time operation, but biological brains, performing fewer operations per second than silicon chips, can accomplish such tasks almost instantaneously.  The recognition tasks seem to occur without any conscious thought.

Sandia Labs researchers therefore reasoned that the brain must be performing some sort of low-level template-matching operation automatically.  The new approach that they developed is reported in “Vision template inspires real-time pattern classification.”[154]

After extensive testing and questioning of many individuals, Gordon Osbourn's group surmised that people superimpose a dumbbell-shaped pattern over any two points to determine whether the points belong to the same object or to different objects.

If both points fit inside the dumbbell shape without having to include other extraneous points, then they belong to the same object.  If extraneous points have to be included, the brain concludes that the two points do not belong to the same object.
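
A greatly simplified sketch of this geometric test follows (this author’s approximation only; the actual Veri template shape and its sizing rule belong to Sandia).  Here the dumbbell is approximated by two circles centered on the candidate points, and any intruding point breaks the pairing:

# Simplified sketch only: the dumbbell-shaped "region of influence" is
# approximated by two circles centered on the candidate points.  The real Veri
# template shape and radius rule are Sandia's; the geometry here is an assumption.

import math

def same_object(p, q, other_points, radius_factor=0.5):
    """Return True if no extraneous point intrudes on the region joining p and q."""
    separation = math.dist(p, q)
    radius = radius_factor * separation      # assumed sizing of the two lobes
    for r in other_points:
        if math.dist(r, p) < radius or math.dist(r, q) < radius:
            return False                     # an intruder breaks the pairing
    return True

# Two points with nothing between them pair up; adding an intruder between them
# splits them into different objects.
a, b = (0.0, 0.0), (4.0, 0.0)
print(same_object(a, b, []))                 # True
print(same_object(a, b, [(1.0, 0.5)]))       # False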

“Since our classification method is based on human perception rather than mathematical equations, it is almost too simple to explain to those of us who expect complexity.”

One unexpected advantage the researchers found was that data need not come in nice, statistically significant batches for the technique to work.  Indeed, data can come in fits and starts or be clumped together in nearly any type of distribution.  Conventional pattern-recognition algorithms, on the other hand, often require Gaussian-distributed data to work well, yet modern sensor arrays can often relay data in non-uniform distributions, causing false alarms.

Though the group derived its simple template-matching mechanism from two-dimensional visual experiments, the researchers since then have discovered that it can be applied to any sort of sensor data and in any number of dimensions.

The computer merely applies the same empirical judgments based on the proximity of points in a data set, regardless of what the points represent or in how many dimensions they exist.

In effect, the mechanism permits computers to “see” sounds, smells or even combinations of sensor inputs forming multidimensional spaces.  The mathematical transformation from two-dimensional vision data to multidimensional data from different types of sensors is very straightforward and seems to work just as well as it did for vision.

“Instead of just classifying an unknown sensor input in the closest matching category, as neural nets can sometimes do when there is a lot of noise in the data, Veri will recognize an unknown input as something new.  In this way, Veri minimizes the number of false alarms it causes.”

Sandia Labs has applied for a patent covering the use of the algorithm, which the researchers call Veri (visual empirical region of influence).

Computational sensing and neuromorphic engineering

Another area where newer, unconventional algorithms—especially when coupled with dedicated hardware implementations—are producing significant results is computational sensing.

Ralph Etienne-Cummings, a Johns Hopkins University electrical engineer, has developed a new robotic vision system implemented on a single microchip.[155]  Today, it enables a toy car to follow a line around a test track, avoiding obstacles along the way.  In the near future, the same technology may allow a robotic surgical tool to locate and operate on a clogged artery in a beating human heart.

Etienne-Cummings, an assistant professor of electrical and computer engineering at Johns Hopkins, explains the technique:

"The idea of putting electronic sensing and processing in the same place is called computational sensing.  It was coined less than 10 years ago by the people who started this new line of research.  Our goal is to revolutionize robotic vision or robotics in general.  It hasn't happened yet, but we're making progress."

The development of computational sensors is not an effort to make electronic versions of biological cells and brain tissue, but rather to mimic their function.  Carver Mead of the California Institute of Technology originated the idea of neuromorphic engineering—basing electronic designs on biological blueprints.[156]

The new chip mimics the eye by implementing a two-stage system.  A high-definition central region of pixels that are very sensitive to movement is surrounded by a low-resolution peripheral-vision area that tracks the location of objects so as to keep the central region centered on them.

More specifically, the design takes advantage of the parasitic bipolar transistors inherent in CMOS chips, fashioning them into surface arrays of photosensitive pixels.  The analog interconnection matrix for these bipolar transistors implements motion detection in the central region, from which speed calculations are derived.  The peripheral analog interconnection matrix determines an object’s location as it crosses the edge of the central region, from which a heading calculation is derived.
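
In purely software terms, the division of labor looks something like the following sketch (a conceptual illustration by this paper’s author; the real chip performs these steps in analog circuitry on the sensor itself, and the grid size and update rule here are assumptions):

# Conceptual sketch only: a two-stage "fovea plus periphery" tracker in
# software.  The real chip does this with analog circuitry on the sensor
# itself; the row sizes and update rule here are assumptions for illustration.

def track(frames, fovea_center=0, fovea_width=3):
    """frames: list of 1-D pixel rows; the brightest pixel is the moving object."""
    headings = []
    for row in frames:
        target = row.index(max(row))                 # where the object appears
        lo = fovea_center - fovea_width // 2
        hi = fovea_center + fovea_width // 2
        if lo <= target <= hi:
            headings.append("fovea: measure speed")  # central, motion-sensitive region
        else:
            # peripheral region only reports a coarse heading and re-centers the fovea
            headings.append("periphery: steer " + ("right" if target > fovea_center else "left"))
            fovea_center = target
    return headings

frames = [
    [0, 0, 9, 0, 0, 0, 0, 0],   # object starts inside the fovea
    [0, 0, 0, 9, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 9, 0],   # object jumps into the periphery
]
print(track(frames, fovea_center=2))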

What is the biological principle being mimicked?  Etienne-Cummings explains:

"This resembles the early type of processing that takes place in a rabbit's eye or a frog's.  The animal sees a shape moving up ahead. If the shape is small enough, it may be food, so the animal moves toward it.  But if it's too large, it might be a threat to its safety, so the animal runs away.  These reactions happen very, very quickly, in the earliest moments of biological processing."

The strategy being implemented by computational sensing is to have certain immediate decision making performed at the lowest levels of vision sensing, before higher-level analysis and synthesis is begun.  The system supports both instinctive reactions and higher-level cognitive reasoning.

Key to the success of this system is the way that several critical functions have been combined on a single chip.  This chip performs both analog and digital processing, extracts relevant information, makes decisions, and communicates them to the robot.  Because the decision making and communications are done on the microchip itself, not on separate computers or chips, the response time for reacting to the environment and communicating between the two subsystems is much faster than that of previously developed robotic vision systems.

Potential applications for this technology include such tasks as autonomous navigation, medical systems, pick-and-place parts manufacturing, and videoconferencing systems that "lock on" to a speaker as he or she moves around the room.

Killer Applications

The focus of this section is to examine some of the killer applications that will leverage this body of emerging technologies in the near future.  First-generation versions of some of these applications are already beginning to appear in the market, and can be expected to have a significant impact on the markets into which they are introduced.

The first application discussed below, “Set-top box on a chip,” is a prime example of how SOC technology will transform the entire business landscape—triggering all sorts of new opportunities, and furthering the process of convergence.  One should particularly note that this first generation of SOC-enabled solutions is targeted at what one would consider the appliance market, rather than at the more traditional areas of information processing—such as the development of yet smaller PC’s or faster mainframes.

Equally impressive in terms of its potential impact on future communications is the recent announcement by QuickSilver Technology presented in “3G—third-generation—cellular devices are coming.”  QuickSilver—organized by executives from Xilinx Inc.'s reconfigurable-computing program—aims to offer a single baseband controller for a cell-phone handset module, based on CMOS and silicon-on-insulator technology.  This handset would be a versatile universal communicator that could support all major cellular schemes.

A third place where exciting changes are coming is the previously discussed home computing environment.  The trend is not toward faster general-purpose Intel-compatible PC’s.  Rather, the recent trend in the consumer market—in line with the previous discussion of appliancization—is to develop less expensive, but more feature-rich, devices that are each targeted at specific applications.  Sony’s recent announcements regarding its next-generation graphics engine, discussed below in the section “Sony’s next-generation playstation,” are a prime example of what can be expected.

Set-top box on a chip

One area in particular now being targeted by a number of vendors is the multi-purpose set-top box arena.  Consider the announcement[157] by Microtune of Plano, TX.  Microtune has leveraged the emerging technology of SOC integration to develop a low-cost tuner on a chip family that offers an integrated, universal solution for high-speed media delivery via broadband systems, including digital cable, TV and satellite, and provides for a seamless transition from analog to digital TV broadcast.  Based on patented technology and industry standards, the MicroTuner functions as a gateway component, propelling high-definition video, high-end audio, high-speed data, and IP telephony to the home or business via terrestrial, cable or satellite networks.

According to Gerry Kaufhold, principal DTV analyst for Cahners In-Stat Group,

"Microtune achieved what many in the industry said couldn’t be done.  It developed a silicon-based tuner that is smaller than a thumbnail, packed it with advanced functionality, and designed it to accommodate the requirements of diverse products and applications.  The chip is universal enough to support today's traditional analog TVs and VCRs and future digital entertainment and information appliances.  The company is well positioned to attack the 300-million unit opportunity for its products projected by the year 2001."

The MicroTuner2000, the first product in MicroTune’s portfolio, is a high-performance, dual-conversion tuner that supports the reception of multiple digital broadband standards (QAM for digital cable and VSB for digital TV), while maintaining compatibility with analog NTSC standards.

Microtune has accomplished much more than simply shrinking and integrating onto a single chip a smorgasbord of tuner technologies that are already available today as discrete components.  The MicroTuner2000 furthermore has been engineered with patented techniques to solve the packed-spectrum challenges that previously have required large guard bands between usable channels.  These guard bands have served as a means of providing sufficient separation between distinct channels.

In the past, bandwidth buffering has been required to compensate for the inability of existing equipment designs to adequately tune the end-to-end performance of such mass produced systems—each of which could involve many components. The recovery of this no-man’s zone of bandwidth buffering will undoubtedly become a major breakthrough as network providers look for ways to deliver more content through their networks.

The techniques of Microtune and LSI Logic—described here and below—are broadly applicable to a number of transmission domains, including wireless, coax cable, the various xDSL efforts over twisted-pair telephone wire, and even power lines.

Microtune’s device delivers superior channel selectivity, image rejection, impedance matching, and wide dynamic range input amplification to outperform traditional tuners.  It also provides the consumer superior, stable pictures and dependable high-speed, high-density data delivery without the threat of interference.

This chip shrinks the electronics real estate from a box the size of a pack of cards to a space the size of a fingernail.  The MicroTuner2000 is sampling now and will be available in Q2, 1999.  Bulk prices, in quantities of 10,000, will be $19.95.

The MicroTuner solution features higher-end dual-conversion tuner capabilities, offers reception of a greater number of channels without channel interference, enables equal reception of strong and weak signals, eliminates frequency drifting, and serves as the gateway to both digital and analog applications.  It also adds value to manufacturers' appliances by providing the ability to deliver "digital ready" analog TVs, VCRs and other consumer electronics.

The chip allows manufacturers to incorporate tuners into smaller products, enables faster assimilation into PC/TV convergence, eliminates signal interference from other home or office appliances, and eliminates the need for multiple power supplies.

The MicroTuner chip has an additional built-in bonus.  Since DTV transmissions can carry scads of bits per second, handheld computer makers like Palm Computing could use the MicroTuner2000 to receive data from the airwaves.

LSI Logic also has announced[158] a new chipset initially targeted at the set-top market.  Not to be outdone by Microtune, LSI Logic Corp. of Milpitas, CA, has rolled out a two-chip, SOC-based chip set for a television tuner aimed at replacing TV tuner modules that today contain dozens of discrete components and must be manually tuned in assembly plants.  The new chip set is initially aimed at an emerging IC application that could represent hundreds of millions of tuners in cable TV converters, set-top boxes, television sets and PC’s.

LSI Logic said its chip set will support a range of advanced set-top box applications, including video entertainment, Internet data delivery, and wireless communications.  Volume production is slated to begin in the second quarter of 1999.  In 100,000-piece quantities, the chip set is targeted to sell for $16 each.

3G—third-generation—cellular devices are coming

Equally impressive in terms of its potential impact on future communications is the recent announcement[159] by QuickSilver Technology of Campbell, CA.  As reported in the article, this startup has developed novel implementation techniques for software-defined radios for third-generation digital cellular phones.  QuickSilver Technology Inc.—organized by executives from Xilinx Inc.'s reconfigurable-computing program—hopes to use a reconfigurable architecture to hit a performance target for which many DSP and RF vendors have been striving.  They intend to offer a single baseband controller for a cell-phone handset that could cover a fragmented market of cellular air interfaces and frequency bands.

Just as Microtune has chosen to focus initially on a set-top box offering, QuickSilver is focusing its initial efforts on the cellular world, which is in the process of defining a 3G—third-generation—cellular standard that it hopes will be adopted worldwide.

QuickSilver is not limiting its market targets to cellular radio, but figures that the integration of retargetable baseband and intermediate-frequency (IF) functions is one of the clearest applications for its new “WunChip.”

QuickSilver’s Wireless Universal 'Ngine, or WunChip, supports a variety of baseband algorithms that are downloaded from software, yet run at the full hardware speed required for a given frequency band.  This is not your typical ROM-based BIOS approach, but the newer results of reconfigurable computing—where hardware has the programmability normally available only through a software-based solution. 

The real-time retargetability is faster than traditional downloads from firmware ROM.  The design will handle a variety of air interfaces, including TDMA, CDMA and GSM baseband designs.  Reconfigurable-computing enthusiasts claim that a large array of programmable logic can be used in place of predefined processing elements—CPU cores, DSP cores or even custom data paths—to achieve flexibility and higher throughput.
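
The following sketch conveys the general idea of loading a different baseband personality for each air interface at run time (an illustration by this paper’s author; the loader, its API, and the stored parameters are assumptions, not QuickSilver’s architecture, although the air-interface names and rates are the familiar published figures):

# Illustrative sketch only: a software-defined radio that loads a different
# baseband "configuration" per air interface at run time.  The loader and its
# API are assumptions; only the standard names and their rates are real.

BASEBAND_CONFIGS = {
    "GSM":    {"access": "TDMA/FDMA", "symbol_rate_ksps": 270.833},
    "IS-95":  {"access": "CDMA",      "chip_rate_Mcps":   1.2288},
    "IS-136": {"access": "TDMA",      "symbol_rate_ksps": 24.3},
}

class ReconfigurableBaseband:
    def __init__(self):
        self.active = None

    def load(self, standard):
        """Reconfigure the (simulated) fabric for a new air interface."""
        if standard not in BASEBAND_CONFIGS:
            raise ValueError("no configuration available for " + standard)
        self.active = standard
        return BASEBAND_CONFIGS[standard]

# A roaming handset switching from a CDMA network to a GSM network:
radio = ReconfigurableBaseband()
print(radio.load("IS-95"))
print(radio.load("GSM"))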

QuickSilver is not yet ready to disclose the details of its Adaptive Computing Machine architecture, which incorporates elements of integer, RISC, and DSP computing in a system-on-a-chip design.  The company expects to aim at several markets and says the WunChip will offer inherently lower power dissipation and easier interfaces to RF front ends than programmable DSP systems.

They have convinced BellSouth Mobility, an initial investor.  The two companies have signed a request for a proposal to design a universal handset around WunChip technology.  In particular, BellSouth wants a module, based on CMOS and silicon-on-insulator technology, that will handle at least four air interfaces and any frequency band from 800 MHz to 2.1 GHz, while digitizing all steps from the first RF stage (after the low-noise amp) down to baseband.

QuickSilver is not the only company to have entered this emerging field.  Another company, International Wireless Technologies L.L.C. [IWT], also has announced[160] its development of a single-chip solution that permits wireless device manufacturers to use one platform for multiple mobile and fixed voice and data standards.

According to Laslo Gross, a principal and founder of the privately held company, IWT expects shortly to commence beta tests with two equipment manufacturers of its patent pending Re-Configurable Application Specific Programmable Communications Platform, known as RASP-CP.

"We set out to achieve simplicity, not complexity.  We would like to see a phone number associated with an individual, regardless of the device, so your [personal digital assistant] could also become your pager and your phone."

"We could make a pager into a voice pager, and it would cost only about $60.  We can give pagers easy access to e-mail, and we are talking to some paging companies, which need this to survive.  Implementing speed capability into the platform is quite easy, just another software load.

"We can do voice-activated dialing because it doesn’t matter to a [digital signal processor] whether you’re speaking or whether it’s data."

There is thus a strong prospect of building a versatile universal communicator that could support all major cellular schemes—say, as one travels from an area that is CDMA based to one that is GSM based.  Alternatively, a carrier might need a handset that could support both second-generation and third-generation modes concurrently.  Another side benefit—in addition to the universality this approach offers—is the reduced power consumption that a hardware solution can offer over the traditional software-based DSP approach used in current-generation handsets.

Sony’s next-generation playstation

Another place where exciting changes are coming is the home computing environment.  The trend is not toward faster general-purpose Intel-compatible PC’s.  The recent trend in the consumer market—in line with the previous discussion of appliancization—is to develop less expensive, but more feature-rich, devices that are each targeted at specific applications.  Sony’s recent announcements[161] regarding its next-generation graphics engine are a prime example of what can be expected.

An issue of RoFoC—The Rapidly Changing Face of Computing—featured an analysis[162] by Jeffrey R. Harrow of the significance of Sony’s latest development.  Sony presents quite a different picture of the future of computing in the home than Intel’s much-publicized vision of high-performance general processors implementing all applications in software.  That different picture now seems likely to unfold.

The new Sony playstation is based on SOC technology presented earlier in this document and clearly embraces the principles of appliancization that have previously been discussed.

The Sony Playstation 2 will feature the following SOC-enabled components:

1.       Emotion Engine—a 128-bit CPU jointly developed with Toshiba Corp. and SCE,

2.       Graphic Synthesizer—an embedded graphics chip developed by Sony,

3.       I/O processor—co-developed with LSI Logic Corp., and

4.       SPU2 sound synthesizer—a second-generation version of the sound synthesizer now used in Playstations.

The Emotion Engine has floating-point performance of 6.2 Gigaflops and a bus bandwidth of 3.2 Gbytes/second.  This memory bandwidth is achieved through the use of Direct Rambus DRAM in two channels.  Running at 300 MHz, the CPU's floating-point performance is, according to SCE, three times greater than that of a current 500-MHz Pentium III PC.
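
That bandwidth figure is consistent with standard Direct Rambus channel parameters (a back-of-the-envelope check by this paper’s author, assuming the usual 16-bit data path per channel clocked at 400 MHz with double data rate):

# Back-of-the-envelope check (assuming standard Direct Rambus channels:
# 16-bit data path, 400 MHz clock, double data rate):
bytes_per_transfer = 16 / 8          # 2 bytes per channel per transfer
transfers_per_sec = 400e6 * 2        # double data rate -> 800 million transfers/s
channels = 2
print(bytes_per_transfer * transfers_per_sec * channels / 1e9)  # ~3.2 GB/s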

The Graphic Synthesizer—a 0.25-micron chip with 42.7 million transistors on a 16.8-mm die—integrates video memory and pixel logic on one chip to achieve a total memory bandwidth of 48 Gbytes/second.  The parallel rendering processor operates at 150 MHz and packs 16 parallel pixel engines and 4 Mbytes of multiport DRAM onto the chip.  In terms of graphics performance, it has a drawing capability of 50 million polygons/s with 48 quad, 24-bit color and Alpha-z data.  Pixel fill rate is 2.4 Gpixels/second.

To achieve such raw improvements in multimedia-related performance is nothing short of phenomenal.  Just as importantly, though, the new Playstation console will be backward-compatible with today's Playstation, thanks to an I/O processor that uses the CPU of the current console as its core and is built by LSI Logic.  This backward compatibility is very important, given that Sony has already shipped more than 50 million units of its earlier-generation Playstation worldwide.  Additionally, the newer chip further integrates Firewire (IEEE 1394) and Universal Serial Bus interfaces along with current Playstation functions.

Although still more than a year away from store shelves, the Sony Playstation II already is creating quite a stir in Silicon Valley among game developers who have been briefed on its capabilities.[163] Why all the stir among game developers?

“It is the first machine to deliver graphics that until now could be produced only by supercomputers—and at prices that will put it under Christmas trees in 2000.”

Some of these 1,000 Silicon Valley software game developers recently shown the machine are suggesting that

“The state of the art in computing is moving from the aisles of CompUSA to the shelves of Toys ‘R’ Us.”

The megatrend of 'appliancization' also is in play here—no pun intended.  Sony expects to sell its Playstation II for substantially less than $500.  This is another striking example of a coming generation of powerful computer processors that are not designed for traditional computers.  Instead, they are engineered to concentrate all their considerable power on performing highly specialized tasks.

According to Richard Doherty, president of Envisioneering, a computer industry consulting firm,

"Sony is clearly riding on a consumer mandate and delivering supercomputer graphics.  People will buy the Playstation II just to get at the chip."

Looking ahead to what the future could hold, Phil Harrison, a vice president at Sony Computer Entertainment, told the developers

"We are looking for a new generation of software that has the same impact on a person as a great book or a great movie."

The Playstation’s new processor has enough power to begin to convey humanlike motions and abilities, ranging from natural movement and facial expressions to artificial intelligence like the ability to learn and to recognize speech.

Sony has given every indication that it envisions a new computing world—one that has little to do with the office desktop.  In this new world, brilliant graphics and mathematics-intensive tasks like voice recognition will matter most.  In such a world, Harrison said, Sony’s Emotion Engine should excel.  Sony sees enormous potential for its computer because its graphics power will be coupled with high-speed connections to the Internet through cable and satellite links.

All these new features and capabilities of the Sony Playstation II chipset could make Sony a powerful competitor to the Microsoft and Intel duopoly if software developers begin to abandon the personal computer platform when creating their newest and most advanced applications.  Stewart A. Halpern, a Wall Street analyst at ING Baring Furman Selz, recognizes the strategic significance of Sony’s potential.

"This machine heralds the merger of film, television and the video game businesses."

Others are already looking to take Playstation II beyond the domain of gamesmanship.  Carl Malamud, chairman of Invisible Worlds, an Internet software company in Redwood City, California, observes

"This is the first credible alternative to the PC for reaching people on the Internet."

In the meantime, until actual Playstation II systems are readily available, Sony will be using the Linux operating system to provide a simulation platform to help developers come up with games for its next-generation Playstations more quickly.[164]  The simulation software, a mockup of the chip that will underlie future Playstation equipment, will enable game developers to write code despite the fact that Playstation II hardware is not yet available.

Why would Sony choose to use Linux for such a platform?  In a profound demonstration of Linux’s scalability,[165]  IBM recently clustered a $150K set of Intel-based servers running an off-the-shelf Linux to match the performance record for a multimedia-focused benchmark previously set by a $5.5M Cray supercomputer.  Game developers will need such supercomputer-like capability to meaningfully emulate the new Sony chipset's behavior.

As an aside, the Linux community also is the most likely group to port the Linux O/S to this platform.  That would be a real coup for those who would make this platform Internet-ready.

Of course, the Playstation II is not being pitched as a general purpose PC.  It probably will not run word processing software or do spreadsheets.  However, Sony is planning[166] to press it into quite distinctive service.  Nobuyuki Idei, president of Sony, describes it as

"... the core media processor for future digital entertainment applications."

"It's a totally different animal.  The chip] will create new business--not an extension of the PC business."

The Sony vision is bigger than the rollout of a next-generation PlayStation, which is just one piece of the comprehensive groundwork Sony is laying today for a future of digital products and services in the home.  Sony also is working on software, set-top boxes, and storage systems in addition to technologies for delivering movies and music to homes over the Internet and via satellites.

One recent article that analyzed the situation was “Tuning in to the Fight of the (Next) Century.”[167]  This article proposes the thesis:

A battle for control of the future of computing is looming between the personal computer industry and consumer electronics manufacturers.

Many observers believe there is a war brewing.  On one side, Microsoft, Intel, and others support a PC-centric plan for controlling all those future computerized intelligent appliances about to begin appearing around the home.  They plan to use a set of PC-centric API’s called Home API (Home Application Programming Interface).

Microsoft's strategy is to re-create the powerful business model of the personal computer industry in consumer electronics.  However, "PC-ization" of consumer electronics would most likely mean a world of increasing uniformity and low margins, like those that computer makers have experienced.

On the other side, Sony, Philips, Sun, and others are supporting a decentralized view of autonomous smart appliances—including those powered by Sun's Jini, interconnected by high speed FireWire cables (also known as IEEE 1394 and I-Link).  This proposed standard is called Home Audio Visual Interoperability, or HAVI.

Sony is preparing to challenge Microsoft's Windows CE with its forthcoming Aperios "appliance" operating system, with which it intends to control not only digital TV’s but also cellular phones, as well as the home's networked DVD player and set top box.

Sony's view of the digital future is far more decentralized.  Its product designers scoff at the notion that the PC is to be the "mainframe" of the home.  Instead, they envision homes in which dozens, even hundreds, of smart appliances are seamlessly interconnected, perhaps without a PC involved at all.

The battle lines are being drawn—as the NY Times explains,

"... it will be the consumer who decides whether the future will be a post-PC, or a PC-centric world."

Jeffrey Harrow closes his article in RoFoC with the following observation:

Even if computer games aren't your forte, these are examples of how the "lowly games" have the potential to change all the rules.  Although they may provide only a "back door" to the business environment, with new operating systems, new interconnect schemes, and graphics performance of the PlayStation II's caliber becoming relatively inexpensive and widely available, how long do you think these attributes will stay "just for games...?"

Another ‘dark horse’ development that may emerge from the Sony Playstation effort—at least, in the opinion of this paper’s author—is the integration of the Playstation chipset with LSI’s implementation of a tuner-on-a-chip similar to that of Microtune, previously discussed in the section “Set-top box on a chip.”  Such a development would represent the convergence of two strategic killer applications!

Recall from above that LSI is Sony’s collaborator on the development of its I/O processor, which is critical to how the Playstation interfaces to the rest of the world.  It also is the source of the Playstation II’s backward compatibility to the original Playstation, since it uses the CPU of the current console as its core.

The time has come to move beyond the PC

When all is said and done, the WinTel duopoly is about to undergo radical transformation—with or without the help or interference (depending on how one views it) of the government and the courts.  These transformations will be technology-driven!  True, people do not buy technology just to be buying technology.  However, they certainly will buy technology for the applications and benefits that the new technology makes possible.

In short, the process that Mr. Colony is trying to describe, but could have better declared from the beginning, is the appliancization of the PC.

The farmer (in Mr. Colony's article below) used the tractor as a general purpose power tool that is capable of many uses--when the appropriate accessories are connected (plow, hay bailer, dirt-wagon, bush-hog, ...).  This particular model of tractor use is based on the fact that a tractor is too expensive to purchase, maintain, etc. for any farmer to buy multiple tractors--one specialized for every specific task to be done about the farm—when with a little planning the farmer can reuse that one tractor for a number of tasks.

On the other hand, the appliance model says that the resources for each particular task (coffee bean grinder, blender, mixer, garbage disposal, can opener, electric toothbrush, etc.) are cheap enough that one no longer needs to endure the inconvenience of purchasing ONE general-purpose electric motor that must be interchangeable with a number of different accessories to support all these tasks.

FACT: The rapid drop in the cost of processing power, memory, display technology, etc. will result in the appliancization of the PC.  The applications may need to share information (a la the I-Net), but they will NOT need to share computing resources (common OS, general-purpose applications, etc.)!

____________________________________________________________

To:  Forrester electronic clients

From:  George F. Colony, president, Forrester

Quickly:  The time has come to move beyond the PC.  New devices, linked via IP, will surround and draw functionality away from the PC.

Content:  The PC's like a tractor.  It slowly bulls its way through the furrows and doggedly gets the job done.  It's versatile:  You can attach lots of farm implements—from hay bailers, to plows, to harvesters—made by many different companies.  It's familiar:  Everyone knows where the clutch is, how to steer, how to shift.

But riding a tractor for 15 years is no treat.  And using it for most of your transportation needs—from plowing fields to driving into town to visiting Grandma 500 miles away—makes no sense.  I don't know about you, but I'm sick of bouncing up and down on an uncomfortable seat, breaking down three times a day, and having to be an expert on fixing carburetors and fuel pumps.  Just think of Citibank--it's forced to maintain a fleet of 50,000 tractors, each with its own weird mechanical eccentricities.

I've got tractor fatigue.  This came to me in January when I bought a new PC for my home, something I do every three to four years.  I was excited to get a new Dell 300 MHz machine with every cool new option including surround sound, DVD drive, etc.  But as I put the machine together and began to run it, two realities dawned on me:

Reality No. 1:  This machine was no better than the PC I had set up four years ago.

Reality No. 2:  Microsoft's cloying attempts to stay in front of the customer had exceeded the boundaries of good taste and common sense.  As I set up the PC, I was faced with pop-up Microsoft dialog boxes that wouldn't go away, multiple Microsoft applications loaded on the hard drive that I didn't want, constant reminders that I should go on-line with MSN, the look and feel of IE thrown at me via Windows Explorer.

So if not the PC, then what?  No brute force will destroy the PC--it will be pecked to death.  Other devices will surround and augment the PC.  Yes, I want a tractor.  But I also need a car for daily travel, a train to get to work, a plane for long trips, and my sailboat for fun.

Big deal.  This has been predicted for years.  Remember pen computing, the Newton, Go, and other venture capital holes in the ground?  All of them tried to replace the PC and were crushed.

But the Internet changes the rules.  In the old days (1993), non-PCs like Go had no open network—they moldered away in their small, limited worlds.  They were vehicles stuck in the garage.

But now, if devices can get access to IP and manage HTML and Java, they can talk to each other.  The more devices that can talk, the less domination by the PC.  Think of it this way—we now have one fuel that can power the tractor, the car, the lawn mower, the chainsaw, and the truck.  We don't have to go to town on the tractor anymore; we can take a car.

You get a glimmer of this with the PalmPilot.  It links to the PC but serves a very different function from the PC.  It doesn't have to run Windows, x86 instruction sets, or ActiveX.

Now imagine using many devices like the PalmPilot all synchronized and linked via the Internet [my emphasis]—your calendar machine, your spouse's calendar machine, your pager, your telephone, the network computer in your hotel room.  As long as they could all get to IP, they could all exchange information and stay synchronized.  It's as if the PC were shattered into 20 pieces that still worked well together, even though they were separated.

Microsoft and Intel don't mind change, as long as it's on their trajectory.  They want us to keep bouncing along in our oil-stained bib overalls for another 10 years.  That won't happen.  The Internet will let a whole new generation of devices surround and absorb pieces of the PC—and put all of us fatigued tractor drivers into cars, planes, boats, and trucks.

George

P.S. I welcome your comments.  Please e-mail me at gfcolony@forrester.com.

If you don't want to receive "My View," simply send e-mail to listproc@forrester.com with the following command in the body of your message:  "unsubscribe myview."

Thanks.


ENDNOTES



[1]  Donald Tapscott, The Digital Economy: Promise and Peril in the Age of Networked Intelligence, McGraw-Hill, 1996.  http://www.inforamp.net/~tci/digital.html

[2]  Bill Gates, Nathan Myhrvold, Peter M. Rinearson, The Road Ahead, Viking Penguin, 1996.

[3] “From Movable Type to Data Deluge,” John Gehl and Suzanne Douglas, The World & I, January, 1999.  http://www.worldandi.com/article/ssjan99.htm

[4]  Marshall McLuhan, with a new introduction by Lewis H. Lapham, Understanding Media: The Extensions of Man, MIT Press, 1994 [Reprint].  http://mitpress.mit.edu/book-home.tcl?isbn=0262631598

[5] James Burke and Robert Ornstein, The Axemaker's Gift: A Double-Edged History of Human Culture, Grosset/Putnam, New York, 1995.

[6] Larry Downes and Chunka Mui, Unleashing the Killer App—Digital Strategies for Market Dominance, Harvard University Press, 1998.  http://www.killer-apps.com/Contents/booktour/tour_introduction.htm

[7] “Computer Industry Laws, Heuristics and Class Formation: Why computers are like they are,” Gordon Bell, Smithsonian Institution, 1997.  http://americanhistory.si.edu/csr/comphist/montic/bell/index.htm

[8] Donald Norman, The Invisible Computer Why Good Products Can Fail, The Personal Computer Is So Complex, and Information Appliances Are the Solution, The MIT Press, 1998.     http://www.businessweek.com/1998/43/b3601083.htm

[9] “Electronics firms unveil APIs for digital appliances,” Junko Yoshida, EE Times, May 15, 1998.      http://techweb.cmp.com/eet/news/98/1007news/firms.html

   “Will Net Appliances Edge Out PCs?” Kathleen Ohlson, IDG News Service, June 18, 1998.

   “Acer takes a gamble,” Michael Kanellos,” CNET NEWS, June 23, 1998.      http://www.news.com/News/Item/0,4,23448,00.html, and

   “Internet Appliance Revolution Starts In The Office,” Joe McGarvey, Inter@ctive Week, July 7, 1998.  http://www.zdnet.com/intweek/daily/980707c.html

[10] “Forrester Says PC Market Will Stall After 2000—After the sky falls in 2000, so will PC sales and prices, says market research firm,” Nancy Weil, IDG News Service, October 29, 1998.  http://www.pcworld.com/cgi-bin/pcwtoday?ID=8580

[11] “Cheap System-On-A-Chip Challenges National,” Richard Richtmyer, Electronic Buyers' News, October 28, 1998.  http://www.techweb.com/wire/story/TWB19981028S0017

[12] “Beyond the PC—Who wants to crunch numbers? What we need are appliances to do the job--and go online,” Peter Burrows, Business Week, March 8, 1999.

[13] “Handheld box provides e-mail access,” Laura Kujubu, InfoWorld, January 25, 1999.  http://www.infoworld.com/cgi-bin/displayArchive.pl?/99/04/n09-04.47.htm

[14] “Fix or Ditch?  Falling electronics prices influence repair decision,” Jean Nash Johnson, The Dallas Morning News, March 9, 1999.  http://www.dallasnews.com/technology-nf/techbiz103.htm

[15] “Cutting the Phone Cord to Stick with Cellular,” Roy Furchgott, New York Times, September 17, 1998.  http://www.nytimes.com/library/tech/98/09/circuits/articles/17cell.html

[16] “Price Is King For Wireless Users,” Bill Menezes, Wireless Week, June 29, 1998.  http://www.wirelessweek.com/News/June98/ten629.htm

[17] "Nokia launches the world's smallest NMT 450 phone The Nokia 650 sets a new benchmark for NMT 450 technology," Nokia Press Release, November 2, 1998.  http://wwwdb.nokia.com/pressrel/webpr.nsf/5655df1cd51f86f8c225661b005e27c0/c225663600509ded422566b0003390e9?OpenDocument

[18] “CenturyTel Unveils Multi Mode Cellular Phone,” Steve Gold, Newsbytes, August 19, 1998.

[19] “An Approach to Customer-Centered Interfaces,” Smith, James T., Third International Conference on Universal Personal Communications, pp. 619-623, Sept. 1994.

[20] “Analysis: Consumerization stands IC business model on its head,“ Brian Fuller, EE Times, January 15, 1999.  http://www.eet.com/story/OEG19990115S0011

[21] “Free PC? Nope, PC Free,” Craig Bicknell, Wired News, February 19, 1999.  http://www.wired.com/news/news/business/story/17998.html

[22] “PC Free Gets a Big-time Buddy,” Craig Bicknell , Wired News, March 25, 1999.  http://www.wired.com/news/news/email/explode-infobeat/business/story/18714.html

[23] “Free PC’s: A good idea for consumers could become a better idea for business,” Sandy Reed, InfoWorld, March 1, 1999.  http://www.infoworld.com/cgi-bin/displayArchive.pl?/99/09/o07-09.57.htm

[24] “The Great Value Shift,” Chuck Martin, NewMedia, March, 1999.  http://newmedia.com/newmedia/99/03/strategist/Value_Shift.html

[25] “AT&T Plan Is a Search for Loyalty,” Seth Schiesel, NY Times, February 1, 1999.  http://www.nytimes.com/yr/mo/day/news/financial/phone-price-wars.html

[26] “Leap Wireless Plans Flat Rate For Cellular-Phone Customers,” Nicole Harris and Stephanie N., The Wall Street Journal, March 17, 1999.

[27] “In-Depth News Analysis—Storage Technology Starts To Get NAS-ty,” Rivka Tadjer, Network Computing, June 15, 1998.  http://www.nwc.com/911/911btb2.html

[28] “3Com brings some SANity to storage,” John Taschek, PC Week Online, November 9, 1998.  http://www.zdnet.com/pcweek/stories/news/0,4153,369898,00.html

[29] “Oracle goes after Microsoft with lightweight database,” James Niccolai, IDG News Service, November 16, 1998.  http://www.nwfusion.com/news/1116oracle.html

[30] “Sun, Sybase look to bring databases to small-footprint devices,” Margaret Quan, EE Times, March 11, 1999.  http://www.eet.com/story/OEG19990311S0007

[31] “Intel Pushes Specs For Server Appliances,” Marcia Savage, Computer Reseller News, December 14, 1998.  http://www.techweb.com/wire/story/TWB19981214S0002

[32] “The Rise of the Big Fat Website,” J. William Gurley, Fortune, Vol. 139, No. 3, February 15, 1999.  http://www.pathfinder.com/fortune/technology/gurley/index.html

[33] “Castle’s low-cost C2100 eases CLEC’s into the local loop,” Andrew Cray, Data Communications, January 1999.  http://www.data.com/issue/990107/npn5.html

[34] “Switch will bring services to carrier edge,” InfoWorld, March 1, 1999.  http://www.infoworld.com/cgi-bin/displayArchive.pl?/99/09/t05-09.5.htm

[35] “Ericsson Launches Psion-Like Communicator,” Peter Clarke, EE Times, March 19, 1999.  http://www.techweb.com/wire/story/TWB19990319S0003

[36] “Qualcomm banking on dual-role chip,” Corey Grice and Jim Davis, CNET News.com, March 30, 1999.  http://www.news.com/News/Item/0,4,34439,00.html?st.ne.fd.gif.i

[37] “Use of Internet, home PC’s surging,” Stephanie Miles, CNET News.com, March 23, 1999.  http://www.news.com/News/Item/0,4,34137,00.html?st.ne.fd.mdh

[38] “Coming soon: Cooperating appliances,” USA Today, October 7, 1998.

     http://archives.usatoday.com/cgi-bin/makedo.cgi?cfg=do.cfg&filename=1998/10/07/u_621753.html&docid=42860&headline=Coming+soon:+Cooperating+appliances

[39] “Net Video Coming of Age?” Christopher Jones, Wired News, March 23, 1999.

     http://www.wired.com/news/news/technology/story/18645.html

[40] Joseph Pine II, Mass Customization—The New Frontier in Business Competition, Harvard Press, 1992.

[41] “The Customized, Digitized, Have-It-Your-Way Economy: Mass customization will change the way products are made—forever,” Erick Schonfeld, Fortune, Vol. 138, No. 6, September 28, 1998.  http://www.pathfinder.com/fortune/1998/980928/mas.html

[42] Don Peppers and Martha Rogers, The One to One Future: Building Relationships One Customer at a Time, Doubleday, 1997.

[43] Don Peppers and Martha Rogers, Enterprise One to One: Tools for Competing in the Interactive Age, Doubleday, 1999.

[44] Gary Heil, Tom Parker, and Rick Tate, Leadership and The Customer Revolution, Van Nostrand Reinhold, 1995.

[45] “One Size No Longer Fits All—Technology lets companies customize goods and services to give customers what they want,” Gary Heil and Tom Parker, Information Week, February 27, 1995.  http://www.techweb.com/se/directlink.cgi?IWK19950227S0001

[46] “Music Fans Assert Free-dom,” Leslie Walker, Washington Post, February 18, 1999.

     http://search.washingtonpost.com/wp-srv/WPlate/1999-02/18/200l-021899-idx.html

[47] “Generation Y:  Today's teens—the biggest bulge since the boomers--may force marketers to toss their old tricks,” Ellen Neuborne and Kathleen Kerwin, Business Week, February 15, 1999.

[48] “NOW IT'S YOUR WEB—The Net is moving toward one-to-one marketing—and that will change how all companies do business,” Robert D. Hof, Business Week, October 5, 1998.

     http://www.businessweek.com/cgi-bin/covpackage?page=%2F1998%2F40%2Fb3598023.htm&folder=specrep

[49] “What's Next After Portals?” Jesse Berst, ZDNet AnchorDesk, July 1, 1998.

     http://www.zdnet.com/anchordesk/story/story_2263.html

[50] “Wal-Mart sues Amazon, others,” Corey Grice and Jeff Pelline, CNET News.com, October 16, 1998.  http://www.news.com/News/Item/0,4,27654,00.html?st.ne.fd.gif.j

[51] “The Service Imperative--Companies are devising new IT strategies to bolster service and keep their customers coming back,” Mary E. Thyfault, Stuart J. Johnston, and Jeff Sweat, Information Week, October 5, 1998.  http://www.informationweek.com/703/03iusrv.htm

[52] “Why Microsoft bought LinkExchange,” Dan Mitchell, CNET News, November 5, 1998.

     http://www.news.com/News/Item/0,4,28425,00.html?st.ne.ni.lh

[53] “Microsoft and QUALCOMM Form Broad Strategic Wireless Alliance,” Microsoft website.

     http://www.microsoft.com/presspass/press/1998/Nov98/QualcommPR.htm

[54] “The Webified Enterprise,” George Lawton, Knowledge Management Magazine, November, 1998.  http://enterprise.supersites.net/kmmagn2/km199811/home.htm

[55] Joseph Pine II, Mass Customization—The New Frontier in Business Competition, Harvard Press, 1992.

[56] “Built To Order—How relationship management technology is driving the revolution in mass customization and electronic commerce,” Laurie J. Flynn, Knowledge Management, January, 1999.  http://enterprise.supersites.net/kmmagn2/km199901/home.htm

[57] “Trading Partners Unite—New standard takes guesswork out of supply-chain mgm't,” John Evan Frook, Internet Week, February 23, 1998.  http://www.techweb.com/se/directlink.cgi?INW19980223S0001

[58] “Web opens enterprise portals,” Emily Fitzloff and Dana Gardner, InfoWorld, January 25, 1999.  http://www.infoworld.com/cgi-bin/displayStory.pl?/features/990125eip.htm

[59] “Enterprise Knowledge Portals to Become the Shared Desktop of the Future,” Computer World, March 25,1999.  http://www.idg.net/go.cgi?id=108521

[60] “AMAZON.COM: the Wild World of E-Commerce  By pioneering—and damn near perfecting—the art of selling online, Amazon is redefining retailing,” Robert D. Hof, Ellen Neuborne, and Heather Green, Business Week, December 14, 1998.

     http://www.businessweek.com/cgi-bin/covpackage?page=%2F1998%2F50%2Fb3608001.htm&folder=covstory

[61] Larry Downes and Chunka Mui, Unleashing the Killer App—digital strategies for market dominance, Harvard University Press, 1998.  http://www.killer-apps.com/Contents/booktour/tour_introduction.htm

[62] “Boeing's Big Intranet Bet—Beset with cost overruns and production snafus, aerospace giant extends Web to the shop floor,” John Evan Frook, Internet Week, November 9, 1998.  http://www.techweb.com/se/directlink.cgi?INW19981109S0001

[63] “Killer Supply Chains—Six Companies Are Using Supply Chains To Transform The Way They Do Business,” Tom Stein and Jeff Sweat, Information Week, November 9, 1998.

     http://www.informationweek.com/708/08iukil.htm

[64] “The Right Stuff For Boeing's Extranet,” Richard Karpinski, Internet Week, March 15, 1999.  http://www.techweb.com/wire/story/TWB19990315S0017

[65] “Microsoft and standards: The rules have changed,” Geoffrey James, Network World, October 5, 1998.  http://www.nwfusion.com/news/1005ms.html

[66] “IETF attempts to standardize calendaring,” Emily Fitzloff, InfoWorld, December 8, 1998. http://www.idg.net/go.cgi?id=41010

[67] “IBM's XML giveaway: Seeking to boost market, Big Blue releases nine new free tools,” Robin Schreier Hohman, Network World, November 16, 1998.  http://www.nwfusion.com/news/1116xmi.html

[68] “Group forms to end software chaos,” Ted Smalley Bowen, InfoWorld, November 16, 1998.  http://www.infoworld.com/cgi-bin/displayArchive.pl?/98/46/t05-46.5.htm

[69] “AMAZON.COM: the Wild World of E-Commerce  By pioneering—and damn near perfecting—the art of selling online, Amazon is redefining retailing,” Robert D. Hof, Ellen Neuborne, and Heather Green, Business Week, December 14, 1998.  http://www.businessweek.com/cgi-bin/covpackage?page=%2F1998%2F50%2Fb3608001.htm&folder=covstory

[70] "’Coopetition’ In The New Economy: Collaboration Among Competitors,” Robert D. Atkinson and Ranolph H. Court, The New Economy Index, The Progressive Policy Institute,   http://www.neweconomyindex.org/section1_page07.html

[71] “Changing The Rules—Online business is challenging-and changing-the rules of commerce,” Clinton Wilder, Gregory Dalton, and Jeff Sweat, Information Week, August 24, 1998.  http://www.techweb.com/se/directlink.cgi?IWK19980824S0020

[72] “Procurement Shifts To Portals,” Richard Karpinski, InternetWeek, March 26, 1999.  http://www.techweb.com/wire/story/TWB19990326S0032

[73] “Ariba Announces Support for Commerce XML (cXML) Standard for Business-to-Business E-Commerce,” Ariba Website, February 8, 1999.  http://www.ariba.com/News/AribaArchive/cxml.htm

[74] “Easy EDI for everyone: I-EDI can slash the high price of document exchanges,” Howard Millman, InfoWorld, August 17, 1998.  http://www.infoworld.com/cgi-bin/displayArchive.pl?/98/33/i07-33.38.htm

[75] “The Service Imperative—Companies are devising new IT strategies to bolster service and keep their customers coming back,” Mary E. Thyfault, Stuart J. Johnston, and Jeff Sweat, Information Week, October 5, 1998.  http://www.informationweek.com/703/03iusrv.htm

[76] “Portal site for teens sheds some light onto possible future of Internet commerce,” Dylan Tweney, InfoWorld, November 23, 1998.  http://www.infoworld.com/cgi-bin/displayArchive.pl?/98/47/o03-47.48.htm

[77] “Linking Homeowners To Good Contractors,” Andrew Marlatt, Internet World, January 11, 1999.  http://www.iw.com/print/current/ecomm/19990111-linking.html

[78] “Ten Trends for the Post-PC World—What’s next in the new era of ubiquitous computing,” The Red Herring magazine, December 1998.  http://www.redherring.com/mag/issue61/trends.html

[79] “Microsoft's goal: DNA everywhere,” Mary Jo Foley, Sm@rt Reseller, December 4, 1997.

[80] “Millennium: Self-Tuning, Self-Configuring Distributed Systems,” Microsoft Website.  http://www.research.microsoft.com/sn/Millennium/

[81] “PARLAY—secure access to converged networks,” The Parlay Group.  http://www.parlay.org/default.htm

[82] “Java's three types of portability,” Mark Roulo, JavaWorld, May 1997.  http://www.javaworld.com/javaworld/jw-05-1997/jw-05-portability.html

[83] “Jini technology lets you network anything, anytime, anywhere,” SUN Website  http://java.sun.com/products/jini/

[84] “Tech Spotlight: Insight into cutting-edge technologies. Will Jini grant Sun's wish?” Sean Dugan, InfoWorld, September 28, 1998.  http://www.infoworld.com/cgi-bin/displayArchive.pl?/98/39/jinia.dat.htm

[85] “JAVA TECHNOLOGY'S WAKEUP CALL TO THE TELECOMM INDUSTRY,” SUN Website.  http://java.sun.com/features/1998/10/ericsson.html

[86] “Sun and Microsoft try to reach phone companies: Sun's JAIN, Microsoft's Active OSS Framework aim to offer new kinds of telephone services,” Rick Cook, SunWorld, October 1998.  http://www.idg.net/go.cgi?id=31623

[87] “Project Clover on hold—Alcatel USA, STR and Sun initiative could integrate intelligent network and IP,” Hanna Hurley, Internet Telephony, November 23, 1998.  http://www.internettelephony.com/content/frames/archive.htm

[88] “Letting Distributed Computing Out of the Bottle: A Comparison of Sun's Jini, Microsoft's Millennium, Salutation and SLP,” Christy Hudgins-Bonafield, Network Computing, September 28, 1998.  http://www.networkcomputing.com/online/jini.html

[89] “Salutation Consortium formed, introduces open specification that will link office machines, computers, and personal communicators,” Salutation Consortium, June 15, 1995.  http://www.salutation.org/salute.htm

[90] “XEROX, IBM ANNOUNCE PLAN TO ADD SALUTATION TECHNOLOGY TO PRODUCTS,” Salutation Consortium, September 21, 1998.  http://www.salutation.org/pr-ibmxer.htm

[91] “Salutation and SLP,” Pete St. Pierre (Sun Microsystems) and Tohru Mori (IBM Japan), Salutation Consortium website, June 1998.  http://www.salutation.org/greetings698.htm

[92] “Inferno—Software for a Networked Society,” Lucent-Inferno website.  http://www.lucent-inferno.com/Pages/About_Us/index.html

[93] “Caltech Infospheres Project—Researching the composition of distributed active mobile objects that communicate using messages,” Caltech Website.  http://www.infospheres.caltech.edu/

[94] “Lucent puts Simple Data Link into silicon via Detroit chip set,” Loring Wirbel, EE Times, November 13, 1998.  http://www.eet.com/story/OEG19981113S0028

[95] “Diffserv and MPLS—A Quality Choice—The Diffserv and MPLS specs both address IP QOS, only they go about it in different ways,” Ashley Stephenson, Data Communications, November 21, 1998.  http://www.data.com/issue/981121/quality.html

[96] “SS7, IP converge in proposed standard–Nortel submits open-based architecture to IETF,” Hanna Hurley, Internet Telephony, November 23, 1998.  http://www.internettelephony.com/content/frames/archive.htm

[97] “Bellcore, Level 3 merge IP gateway protocols—New MGCP spec removes processing logic, adds scalability,” Brian Quinton, Internet Telephony, November 23, 1998.  http://www.internettelephony.com/content/frames/archive.htm

[98] “Telecom carriers create coalition,” Corey Grice, CNET News.com, December 15, 1998.  http://www.news.com/News/Item/0,4,29969,00.html?st.ne.ni.lh

“Voice-IP Network Links Sought,” Mary E. Thyfault, InfoWorld, December 21, 1998.  http://www.techweb.com/se/directlink.cgi?IWK19981221S0025

[99] “Switch makers, carriers team up to create WAN standards,” Stephen Lawson, InfoWorld Electric, November 23, 1998.  http://www.idg.net/go.cgi?id=38456

[100] “Merging Networks Ahead: Watch out for dangerous curves as network operators deploy IP as a new way to pave the integration of fixed and mobile infrastructures.” John Blau, Tele.com, December 1998.  http://www.teledotcom.com/1298/features/tdc1298cover1.html

[101] “Top tech firms to link home devices to Web,” CNET News, March 1, 1999.  http://www.news.com/News/Item/0,4,33034,00.html?st.ne.lh..ni

[102] “iReady, Seiko develop Internet-ready LCD’s,” Junko Yoshida and Yoshiko Hara, EE Times, October 26, 1998.  http://www.techweb.com/se/directlink.cgi?EET19981026S0026

[103] “At CES, the TV becomes the network,” Junko Yoshida, EE Times, December 30, 1998.  http://www.eet.com/story/OEG19981230S0019

[104] “Corporate Change: A Culture Of Innovation,” Bob Evans, Editor-in-Chief, InformationWeek, March 15, 1999.  http://www.informationweek.com/725/25uwbe.htm

[105] “BREAKING THE BANK—Technology is forcing banks to become nimble financial service aggregators that cater to customers through a variety of electronic channels—because if they don't, Yahoo and Intuit will,” Nikki Goth Itoi, The Red Herring, October 1998.

[106] "Does Sun have an e-commerce strategy?  AOL, Sun form e-commerce 'virtual company' amid layoffs at AOL and Netscape," Steven Brody, SunWorld, March 24, 1999.  http://www.idg.net/go.cgi?id=108876

[107] “EDS, MCI WorldCom Make $17B IT Services Deal,” Reuters, February 11, 1999.  http://www.techweb.com/wire/story/reuters/REU19990211S0001

[108] “Unix redux?  Why Linux won't make like Unix and split,” Bob Young, LinuxWorld, January 28, 1999.  http://www.linuxworld.com/linuxworld/lw-1999-01/lw-01-thesource.html

[109] “SET YOUR CODE FREE,” Harvey Blume, NewMedia, January 1999.  http://newmedia.com/newmedia/99/01/feature/Set_Free.html

[110] “Computer telephony code will be available for the taking,” David Lieberman, EE Times, March 1, 1999.  http://www.eet.com/story/OEG19990301S0038

[111] “Freeware pulls ahead of the corporate giants,” Dana Blankenhorn, Datamation, March 1999.  http://www.datamation.com/PlugIn/newissue/03poy5.html

[112] “Jikes! More open source code—IBM jumps on the open-source bandwagon by releasing code for Java compiler,” Antone Gonsalves & Peter Coffee, PC Week Online, December 7, 1998.  http://www.zdnet.com/intweek/stories/news/0,4164,2173812,00.html

[113] “Dev community ready for ‘community source’?” Kieron Murphy, Gamelan's Java Journal, January 21, 1999.  http://www.gamelan.com/journal/techfocus/012199_community.html

[114] “Sun gives away major chip designs,” Brooke Crothers and Stephen Shankland, CNET News.com, March 2, 1999.  http://www.news.com/News/Item/0,4,33136,00.html?owv

[115] “The Organic ROOT System,” Megan Santosus, CIO Magazine, January 1, 1999.  http://www.cio.com/archive/010199_over.html

[116] “Special Report on IBM: Past and Future King?” Mike Bucken and Jack Vaughan, Application Development Trends, December 1998.  http://www.adtmag.com/

[117] “Nanotechnology: the Coming Revolution in Molecular Manufacturing,” K. Eric Drexler, Chris Peterson, Gayle Pergamit, Foresight Institute.  http://www.foresight.org/NanoRev/index.html

[118] “Molecular building technique inches closer to mainstream—At birth of nanotechnology, Web sites abound,” Chappell Brown, EE Times, March 30, 1998.  http://techsearch.techweb.com/se/directlink.cgi?EET19980330S0052

[119] “Molecular switching is probed for nanocomputers,” R. Colin Johnson, EE Times, March 15, 1999.  http://www.eet.com/story/OEG19990315S0046

[120] “New material takes GaAs to task for high speed,” Gary Dagastine, EE Times, May 6, 1998.  http://techweb.cmp.com/eet/news/98/1006news/new.html

[121] “Chip Design Reaches for Light Speed,” Gene Koprowski, Wired, January 27, 1998.  http://www.wired.com/news/news/technology/story/9888.html

“Darpa program seeks chip-level optical interconnect,” Chappell Brown, EE Times, May 26, 1998.  http://www.eet.com/news/98/1009news/darpa.html

[122] “Polymer electronics tapped for multi-layer memory, logic,” Peter Clarke, EE Times, October 14, 1998.  http://www.eet.com/story/OEG19981014S0030

[123] “Tiny Molecular-Scale Devices May Lead To Faster Computers,” Science Daily, March 23, 1999.  http://www.sciencedaily.com/releases/1999/03/990323050945.htm

[124] “IBM takes SOI technology to market,” David Lammers, EE Times, August 3, 1998.  http://www.eetimes.com/news/98/1020news/ibmtakes.html

[125] “Peregrine plans 1.9-GHz SOI devices,” Semiconductor Business News, August 12, 1998.  http://pubs.cmpnet.com/sbn/stories/8h12pere.htm

[126] “SOI gains wider industry attention,” Yoshiko Hara and Michael, EE Times, August 11, 1998.  http://www.edtn.com/news/aug11/081198tnews2.html

[127] “Super Chip for the Little Guys,” Niall McKay, Wired News, October 13, 1998.  http://www.wired.com/news/news/technology/story/15581.html

[128] “New Memory For Computers—University Of Utah Researchers Developing Nonvolatile Ram Technology,” Science Daily, December 16, 1998.  http://www.sciencedaily.com/releases/1998/12/981216080224.htm

[129] “Possible Computer Data Storage System Smaller Than A Dot On This Page Is Described By Cornell University Researchers,” Science Daily, March 24, 1999.  http://www.sciencedaily.com/releases/1999/03/990324062632.htm

[130] “'One-handed' polymer opens door to cheap optical processing,” Margaret Quan, EE Times, September 21, 1998.  http://www.eet.com/news/98/1027news/one-handed.html

[131] “Asymmetric Scattering of Polarized Electrons by Organized Organic Films of Chiral Molecules,” K. Ray, S. P. Ananthavel, D. H. Waldeck, and R. Naaman, Science, February 5, 1999, pp. 814–816.

[132] “Optical amplifier finds new range,” EE Times, October 26, 1998.  http://www.techweb.com/se/directlink.cgi?EET19981026S0048

[133] “Sandia Photonic Crystal Confines Optical Light: 'Very Important Work,' Says Physics Nobel Laureate,” Science Daily, January 7, 1999.  http://www.sciencedaily.com/releases/1999/01/990107074439.htm

[134] “MIT researchers create a ‘perfect mirror’—Besides blocking or reflecting selected frequencies, this new device could help ‘trap’ a light beam,” MIT News, November 26, 1998.  http://web.mit.edu/newsoffice/nr/1998/mirror.html

[135] “M.I.T. Scientists Turn Simple Idea Into ‘Perfect Mirror’,” Bruce Schecter, NY Times, December 15, 1998.  http://www.nytimes.com/library/national/science/121598sci-mirror.html

[136] “New option on the networking menu—Optical CDMA gives carriers something else to think about,” Wayne Carter, Switching & Transmission, Telephony, June 15, 1998.  http://www.internettelephony.com/content/frames/archive.htm

“Bar-coding fiber CDMA works for wireless—and may hold promise for optical networking,” Annie Lindstrom, Americas Network, July 1, 1998.  http://www.americasnetwork.com/issues/98issues/980701/980701_cdma.html

[137] “New way to fab IC’s—Startup gears up to make first ball devices,” J. Robert Lineback, Semiconductor Business News, May 15, 1998.  http://pubs.cmpnet.com/sbn/pub/05b98/ball.htm

[138] “HP lays foundation for embedded's future,” Alexander Wolfe, EE Times, March 1, 1999.  http://www.eet.com/story/OEG19990226S0010

[139] “Compaq Servers Use Next-Generation Alpha Chip,” Mitch Wagner, Internet Week, October 19, 1998.  http://www.techweb.com/se/directlink.cgi?INW19981019S0017

[140] “Intel's Merced schedule was seen as a 'risk' for Tandem system—Server rift widens as Compaq chooses Alpha,” Rick Boyd-Merritt, EE Times, September 28, 1998.  http://www.techweb.com/se/directlink.cgi?EET19980928S0022

[141] “Sharp gives Thumbs-up to MCU’s,” Brian Fuller, EE Times, September 28, 1998.  http://www.techweb.com/se/directlink.cgi?EET19980928S0037

[142] “Get ready for the optical revolution,” Thomas G. Hazelton, LIGHTWAVE, September 1998.  http://www.broadband-guide.com/lw/reports/report09989.html

[143] “Configurable-computing puzzle—FPGA arena taps object technology,” Chappell Brown, EE Times, May 4, 1998.  http://www.techweb.com/se/directlink.cgi?EET19980504S0096

[144] “Researchers add RISC cores, floating-point units to configurable machines–Conference sees shift from FPGA computing,” Chappell Brown, EE Times, April 20, 1998.  http://www.techweb.com/se/directlink.cgi?EET19980420S0009

[145] “Trends in ASICs and FPGAs: Smaller feature sizes in silicon that enable very large, more complex designs will induce change in design methodologies,” Tets Maniwa, ISD Magazine, May 1998.  http://www.isdmag.com/ic-logic/ASICsFPGAs.html

[146] “FPGA arena taps object technology,” Chappell Brown, EE Times, April 29, 1998.  http://techweb.cmp.com/eet/news/98/1005news/fpga.html

[147] “IBM and STMicroelectronics to jointly develop systems-on-a-chip,” Electronic Design Times, July 8, 1998.  http://pubs.cmpnet.com/sbn/stories/8g08stm.htm

[148] “Oxford lab work rekindles code-design interest—STM looks anew at integrating cores with FPGA,” Peter Clarke, TechWeb News, April 20, 1998.  http://techsearch.techweb.com/se/directlink.cgi?EET19980420S0045

[149] “Comm-Fusion Engine Bridges Multiple Protocols At Near-Gigabit Speeds—It Slices, It Dices, It Makes Lovely Julienne ATM Packets ... Handling ATM, Ethernet, And TDM Traffic, This Dual-CPU, Multi-PHY Communication Processor Does It All--And Does It Well,” Lee Goldberg, Electronic Design, September 14, 1998.  http://www.penton.com/ed/Pages/magpages/sept1498/ti/0914ti1.htm

[150] “Rapid-prototyping boards aspire to role in faster FPGA development—ARM systems' hardware compilation goes to beta,” Peter Clarke, EE Times, July 27, 1998.  http://www.techweb.com/se/directlink.cgi?EET19980727S0030

[151] “Configurable System Chips Free Designers To Customize Functions—By Combining A CPU, A Field-Programmable Logic Fabric, And A Large Block Of SRAM, A Single Chip Can Tackle System Needs,” Dave Bursky, Electronic Design, October 12, 1998.  http://www.penton.com/ed/Pages/magpages/oct1298/ti/1012ti2.htm

[152] “Cell architecture readied for configurable computing,” Chappell Brown, EE Times, June 23, 1998.  http://www.eet.com/news/98/1014news/cell.html

[153] “Chaos-based system that ‘evolves’ may be alternative to current computing,” Charles H. Small, Computer Design, November 1998.  http://www.computer-design.com/Editorial/1998/11//1198NOVCHAOS.htm

[154] “Vision template inspires real-time pattern classification,” R. Colin Johnson, EE Times, December 28, 1998.  http://www.eet.com/story/OEG19981228S0009

[155] “Engineer Gives Robots A New Way To 'See',” Science Daily, March 18, 1999.  http://www.sciencedaily.com/releases/1999/03/990318134001.htm

[156] “Researchers Create Artificial Eye Chip,” Colin Johnson, EE Times, March 30, 1999.  http://www.techweb.com/wire/story/TWB19990330S0032

[157] “Microtune™ Revolutionizes Traditional Tuner Industry with the World's First TV Tuner on a Single Chip,” Microtune Website, January 26, 1999.  http://www.microtune.com/news/012699.htm

[158] “LSI Logic joins TV tuner race,” Semiconductor Business News, February 2, 1999.  http://www.semibiznews.com/stories99/feb99x/9b02lsi.htm

[159] “Quicksilver tackles reconfigurable software radios,” Loring Wirbel, EE Times, January 25, 1999.  http://www.eet.com/story/OEG19990125S0011

[160] "Company sets out to achieve simplicity with multiple platform chip," Elizabeth V. Mooney, CINEWS, March 29, 1999.  http://www.rcrnews.com/CGI-BIN/SM40i.exe?docid=100:84873&%70aramArticleID=9261

[161] “Sony puts $1 billion into Playstation IC’s,” Yoshiko Hara, EE Times, March 8, 1999.  http://www.techweb.com/se/directlink.cgi?EET19990308S0015

[162] “The Next Years Should Be Awesome, Indeed,” Jeffrey R. Harrow, RCFoC—The Rapidly Changing Face of Computing, March 15, 1999.  http://www.digital.com/rcfoc/19990315.htm

[163] “Silicon Valley's Awesome Look at New Sony Toy,” John Markoff, NY Times, March 19, 1999.  http://search.nytimes.com/search/daily/bin/fastweb?getdoc+site+site+65037+0+wAAA+Sony%7EPlaystation

[164] “Sony taps Linux for PlayStation development,” Stephen Shankland, CNET News.com, March 29, 1999.  http://www.news.com/News/Item/0,4,34346,00.html?st.ne.ni.lh

[165] “IBM demonstrates Linux servers matching supercomputer speeds,” Ed Scannell, InfoWorld, March 9, 1999.  http://www.idg.net/go.cgi?id=103717

[166] “Will Future PlayStations Target PCs?” Rob Guth, PC World, March 4, 1999.  http://www.pcworld.com/cgi-bin/pcwtoday?ID=9983

[167] “Tuning in to the Fight of the (Next) Century,” John Markoff, NY Times, March 7, 1999.  http://search.nytimes.com/books/search/bin/fastweb?getdoc+cyber-lib+cyber-lib+10387+0+wAAA+HAVi