This is about the failure of companies to stay at the top of their industries when they confront certain types of market and technological change.
It’s not about the failure of simply any company, but of good, well-managed companies that have their competitive antennae up, listen astutely to their customers, invest aggressively in new technologies, and yet still lose market dominance.
In resolving this paradox, many commentators suggest that firms like Digital, IBM, Sears, Xerox, and Bucyrus-Erie must never have been well managed. Maybe they were successful because of good luck and fortuitous timing, rather than good management. Maybe they finally fell on hard times because their good fortune ran out.
But Clay Christensen’s research, outlined in this book, says that’s not the case. These failed firms were as well-run as one could expect a firm to be. There is just something about the way decisions get made in successful organizations that sows the seeds of eventual failure.
Specifically, Christensen argues that well-managed firms lose their positions of leadership precisely because they listen to their customers, invest aggressively in new technologies that give those customers more and better products of the sort they want, carefully study market trends, and systematically allocate investment capital to the innovations that promise the best returns.
My notes are broken into the following sections:
Most new technologies foster improved product performance. Christensen calls these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued.
Occasionally, however, disruptive technologies emerge: innovations that result in worse product performance, at least in the near-term. Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value.
Counterintuitively, the very qualities that make disruptive technologies weak early on pave their route to success later.
Because disruptive technologies are simpler and cheaper, they promise lower margins and less attractive profits, and they are initially unusable by the leading firms’ most profitable customers. Hence, most companies with a practiced discipline of listening to their best customers and identifying new products that promise greater profitability and growth are rarely able to build a case for investing in disruptive technologies.
Firms pursuing disruptive technologies, therefore, are given the time and room to work with early adopters, find new markets, and improve their offerings.
The final piece of the disruptive jigsaw is the nature of technology itself. Technologies can progress faster than market demand, meaning that in their efforts to provide better products than their competitors and earn higher prices and margins, suppliers often “overshoot” their market: They give customers more than they need or ultimately are willing to pay for.
As a result, disruptive technologies that underperform today, relative to what users in the market demand, may be fully performance-competitive in that same market tomorrow.
In summary, a technology may be disruptive if it underperforms established products on the dimensions mainstream customers value, offers a different value proposition that fringe or new customers prize, and improves fast enough that its performance may eventually intersect mainstream demand.
The best way to assess whether a technology is disruptive is to graph the trajectories of performance improvement demanded in the market versus the performance improvement supplied by the technology.
The first step in making this chart involves defining current mainstream market needs and comparing them with the current capacity of your technology.
To measure market needs, watch carefully what customers do, don’t just listen to what they say. Watching how customers actually use a product provides much more reliable information than can be gleaned from a verbal interview or a focus group.
To assess whether the technology will progress faster than the pace of improvement demanded in the market, be careful to keep asking the right question (Christensen uses electric vehicles as his example): Will the trajectory of electric vehicle performance ever intersect the trajectory of market demands (as revealed in the way customers use cars)?
Try not to rely on the opinions of industry experts: their track record in predicting the nature and size of markets for disruptive technologies is very poor.
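The trajectory comparison above can be made concrete. Here is a minimal sketch with entirely hypothetical numbers (none are from Christensen's data): if market demand grows 15 percent per year while the disruptive technology improves 40 percent per year, simple algebra tells you roughly when the two trajectories intersect.

```python
import math

def years_to_intersect(demand_now, demand_growth, tech_now, tech_growth):
    """Years until the technology's performance trajectory catches up with
    the market-demand trajectory; None if it never does; 0 if it already has."""
    if tech_now >= demand_now:
        return 0
    if tech_growth <= demand_growth:
        return None  # the technology improves no faster than demand
    # Solve tech_now * (1 + tech_growth)^t == demand_now * (1 + demand_growth)^t
    return math.log(demand_now / tech_now) / math.log((1 + tech_growth) / (1 + demand_growth))

# Mainstream market demands 100 units of performance, growing 15%/year;
# the disruptive technology delivers 40 today but improves 40%/year.
t = years_to_intersect(100, 0.15, 40, 0.40)
print(f"Trajectories intersect in about {t:.1f} years")
```

The point of the exercise is not the precise number but the shape of the question: a technology that looks hopelessly inadequate today can be fully performance-competitive within a few years if its slope is steeper.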
Good managers do what makes sense, and what makes sense is primarily shaped by their value network, i.e. their customers’ needs, because that’s how they meet their sales targets and get paid.
To investigate the forces at play, Christensen held interviews with more than eighty managers who played key roles in the disk drive industry’s leading firms, both incumbents and entrants, at times when disruptive technologies had emerged.
In these interviews he tried to reconstruct, as accurately and from as many points of view as possible, the forces that influenced these firms’ decision-making processes regarding the development and commercialization of technologies either relevant or irrelevant to the value networks in which the firms were at the time embedded.
His findings consistently showed that established firms confronted with disruptive technology change did not have trouble developing the requisite technology: Prototypes of the new drives had often been developed before management was asked to make a decision. Rather, disruptive projects stalled when it came to allocating scarce resources among competing product and technology development proposals.
Sustaining projects addressing the needs of the firms’ most powerful customers almost always preempted resources from disruptive technologies with small markets and poorly defined customer needs.
Here’s what Christensen found:
Although entrants led in commercializing disruptive technologies, their development was often the work of engineers at established firms, using bootlegged resources. Rarely initiated by senior management, these architecturally innovative designs almost always employed off-the-shelf components.
The engineers then showed their prototypes to marketing personnel, asking whether a market for the smaller, less expensive (and lower performance) drives existed. The marketing organization, using its habitual procedure for testing the market appeal of new drives, showed the prototypes to leading customers of the existing product line, asking them for an evaluation. Unsurprisingly, the prototypes were met with little interest.
In addition, because the products were simpler, with lower performance, forecast profit margins were lower than those for higher performance products. Financial analysts, therefore, joined their marketing colleagues in opposing the disruptive program.
In response to the needs of current customers, the marketing managers threw their weight behind alternative sustaining projects. These gave customers what they wanted and could be targeted at large markets to generate the necessary sales and profits for maintaining growth. Although often involving greater development expense, such sustaining investments appeared far less risky than investments in the disruptive technology: The customers existed, and their needs were known.
New companies, usually including frustrated engineers from established firms, were formed to exploit the disruptive product architecture.
For example, the founders of the leading 3.5-inch drive maker, Conner Peripherals, were disaffected employees from Seagate and Miniscribe, the two largest 5.25-inch manufacturers. The founders of 8-inch drive maker Micropolis came from Pertec, a 14-inch drive manufacturer, and the founders of Shugart and Quantum defected from Memorex.
The start-ups, however, were as unsuccessful as their former employers in attracting established computer makers to the disruptive architecture. Consequently, they had to find new customers. The applications that emerged in this very uncertain, probing process were the minicomputer, the desktop personal computer, and the laptop computer.
In retrospect, these were obvious markets for hard drives, but at the time, their ultimate size and significance were highly uncertain. Micropolis was founded before the emergence of the desk-side minicomputer and word processor markets in which its products came to be used. Seagate began when personal computers were simple toys for hobbyists, two years before IBM introduced its PC. And Conner Peripherals got its start before Compaq knew the potential size of the portable computer market.
The founders of these firms sold their products without a clear marketing strategy—essentially selling to whoever would buy. Out of what was largely a trial-and-error approach to the market, the ultimately dominant applications for their products emerged.
Once the start-ups had discovered an operating base in new markets, they realized that, by adopting sustaining improvements in new component technologies, they could increase the capacity of their drives at a faster rate than their new market required.
They blazed trajectories of 50 percent annual improvement, fixing their sights on the large, established computer markets immediately above them on the performance scale.
The established firms’ views downmarket and the entrant firms’ views upmarket were asymmetrical. In contrast to the unattractive margins and market size that established firms saw when eyeing the new, emerging markets for simpler drives, the entrants saw the potential volumes and margins in the upscale, high-performance markets above them as highly attractive.
Customers in these established markets eventually embraced the new architectures they had rejected earlier, because once their needs for capacity and speed were met, the new drives’ smaller size and architectural simplicity made them cheaper, faster, and more reliable than the older architectures.
When the smaller models began to invade established market segments, the drive makers that had initially controlled those markets took their prototypes off the shelf (where they had been shelved earlier) and introduced them in order to defend their customer base in their own market.
By this time, of course, the new architecture had shed its disruptive character and become fully performance-competitive with the larger drives in the established markets.
Although some established manufacturers were able to defend their market positions through belated introduction of the new architecture, many found that the entrant firms had developed insurmountable advantages in manufacturing cost and design experience, and they eventually withdrew from the market.
The firms attacking from the value networks below brought with them cost structures set to achieve profitability at lower gross margins. The attackers were therefore able to price their products profitably, while the defending, established firms experienced a severe price war.
For established manufacturers that did succeed in introducing the new architectures, survival was the only reward. None ever won a significant share of the new market; the new drives simply cannibalized sales of older products to existing customers.
As the stages above illustrate, each value network has a characteristic cost structure that firms within it must adopt if they are to provide the products and services their customers demand.
Moving upmarket toward higher-performance products that promise higher gross margins is usually the more straightforward path to profit improvement. Moving downmarket is anathema to that objective.
Gross margins are clearly higher in higher-end markets, compensating manufacturers for the higher levels of overhead characteristic of those businesses. The differences in the size of these markets and in the characteristic cost structures across these value networks create serious asymmetries in the combat among these firms.
For an established firm, aggressively moving downmarket means fighting foes who have honed their cost structures to make money at 25 percent gross margins. Moving upmarket, on the other hand, means taking a relatively lower-cost structure into a market that is accustomed to giving its suppliers 60 percent gross margins.
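The price-war consequence of this asymmetry can be sketched with hypothetical figures: if each firm needs a minimum gross margin to cover its overhead, the lowest price it can profitably charge is cost / (1 − required margin), so an attacker built to live on 25 percent margins can undercut an incumbent built for 60 percent even at identical unit costs.

```python
# Illustrative only: why attackers from below win price wars upmarket.
# All figures are hypothetical, not drawn from the disk-drive data.

def floor_price(unit_cost, required_margin):
    """Lowest price that still yields the gross margin the firm's
    overhead structure requires: price = cost / (1 - margin)."""
    return unit_cost / (1 - required_margin)

unit_cost = 40.0  # assume comparable manufacturing cost per unit
incumbent_floor = floor_price(unit_cost, 0.60)  # needs 60% gross margin
entrant_floor = floor_price(unit_cost, 0.25)    # tuned to live on 25%

print(f"incumbent cannot profitably price below ${incumbent_floor:.2f}")
print(f"entrant cannot profitably price below   ${entrant_floor:.2f}")
```

At equal unit costs, the entrant's profitable floor price is far below the incumbent's, which is why the defenders in the drive industry experienced the price war as severe while the attackers remained profitable.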
Committing development resources to launch higher-performance products that could garner higher gross margins generally both offers greater returns and causes less pain. As managers make repeated decisions about which new product development proposals they should fund and which they should shelve, proposals to develop higher-performance products targeted at the larger, higher-margin markets immediately above them always get the resources.
The most vexing managerial aspect of this asymmetry, in which the easiest path to growth and profit is up and the deadliest attacks come from below, is that “good” management (working harder and smarter and being more visionary) doesn’t solve the problem.
As companies become larger and more successful, it becomes even more difficult to enter emerging markets early enough. Because growing companies need to add increasingly large chunks of new revenue each year just to maintain their desired rate of growth, it becomes less and less possible that small markets can be viable as vehicles through which to find these chunks of revenue.
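The arithmetic behind this is worth making concrete. A minimal sketch with hypothetical figures: at a constant 20 percent growth target, the absolute revenue a company must add each year grows with the company, so a market that once moved the needle quickly stops mattering.

```python
# Illustrative (hypothetical figures): new revenue required each year
# to sustain 20% annual growth, at three different company sizes.

def new_revenue_needed(revenue, growth_rate=0.20):
    """Revenue that must be added this year to hit the growth target."""
    return revenue * growth_rate

for revenue_millions in (40, 400, 4000):
    needed = new_revenue_needed(revenue_millions)
    print(f"A ${revenue_millions}M company must find ${needed:.0f}M of new revenue")
# A $50M emerging market is decisive for the first company
# and a rounding error for the third.
```

This is why the same emerging market that can make a small organization's year is invisible to a large one's planning process.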
The most straightforward way of confronting this difficulty is to implant projects aimed at commercializing disruptive technologies in organizations small enough to get excited about small-market opportunities, and to do so on a regular basis even while the mainstream company is growing.
Because emerging markets are small by definition, the organizations competing in them must be able to become profitable at small scale. This is crucial because organizations or projects that are perceived as being profitable and successful can continue to attract financial and human resources both from their corporate parents and from capital markets. Initiatives perceived as failures have a difficult time attracting either.
In each of the several industries explored, technologists were able to supply performance improvement at rates that exceeded the rates of improvement the market needed or was able to absorb.
Historically, when this performance oversupply occurs, it creates an opportunity for a disruptive technology to emerge and subsequently to invade established markets from below.
As it creates this threat or opportunity for a disruptive technology, performance oversupply also triggers a fundamental change in the basis of competition in the product’s market: the criteria customers use to choose one product over another shift to attributes for which market demands are not yet satisfied.
Consider, for example, the product evolution model created by Windermere Associates of San Francisco, California, which describes as typical the following four phases: functionality, reliability, convenience, and price.
This evolving pattern in the basis of competition—from functionality, to reliability and convenience, and finally to price—has been seen in many of the markets so far discussed. In fact, a key characteristic of a disruptive technology is that it heralds a change in the basis of competition.
Two additional important characteristics of disruptive technologies consistently affect product life cycles and competitive dynamics: first, the attributes that make disruptive products worthless in mainstream markets typically become their strongest selling points in emerging markets; and second, disruptive products tend to be simpler, cheaper, and more reliable and convenient than established products.
Managers must understand these characteristics to effectively chart their own strategies for designing, building, and selling disruptive products. Even though the specific market applications for disruptive technologies cannot be known in advance, managers can bet on these two regularities.
Additionally, established firms confronted with disruptive technology typically view their primary development challenge as a technological one: to improve the disruptive technology enough that it suits known markets. In contrast, the firms that are most successful in commercializing a disruptive technology are those framing their primary development challenge as a marketing one: to build or find a market where product competition occurs along dimensions that favor the disruptive attributes of the product.
It is critical that managers confronting disruptive technology observe this principle. If history is any guide, companies that keep disruptive technologies bottled up in their labs, working to improve them until they suit mainstream markets, will not be nearly as successful as firms that find markets that embrace the attributes of disruptive technologies as they initially stand.
These latter firms, by creating a commercial base and then moving upmarket, will ultimately address the mainstream market much more effectively than will firms that have framed disruptive technology as a laboratory, rather than a marketing, challenge.
Established firms that successfully built a strong market position in a disruptive technology were those that spun off from the mainstream company an independent, autonomously operated organization.
An independent organization would not only make resource dependence work for you rather than against you, but it would also address the principle that small markets cannot solve the growth or profit problems of large companies.
In the early years of this new business, orders are likely to be denominated in hundreds, not tens of thousands. If you are lucky enough to get a few wins, they almost surely will be small ones. In a small, independent organization, these small wins will generate energy and enthusiasm. In the mainstream, they would generate skepticism about whether you should even be in the business.
You want your organization’s customers to answer the question of whether you should be in business. You don’t want to spend your precious managerial energy constantly defending your existence to efficiency analysts in the mainstream.
Innovations are fraught with difficulties and uncertainties. Because of this, you want always to be sure that the projects you manage are positioned directly on the path everyone believes the organization must take to achieve higher growth and greater profitability.
If your program is widely viewed as being on that path, then you can have confidence that when the inevitable problems arise, somehow the organization will muster whatever it takes to solve them and succeed.
If, on the other hand, your program is viewed by key people as nonessential to the organization’s growth and profitability, or even worse, is viewed as an idea that might erode profits, then even if the technology is simple, the project will fail.
You can address this challenge in one of two ways: you could convince everyone in the mainstream (in their heads and their guts) that the disruptive technology is profitable, or you could create an organization that is small enough, with an appropriate cost structure, that your program can be viewed as being on its critical path to success.
In a small, independent organization you will more likely be able to create an appropriate attitude toward failure. Your initial stab into the market is not likely to be successful. You will, therefore, need the flexibility to fail, but to fail on a small scale, so that you can try again without having destroyed your credibility.
Finally, you don’t want your organization to have pockets that are too deep. While you don’t want people to feel pressure to generate significant profit for the mainstream company (this would force a fruitless search for an instant large market), you want them to feel constant pressure to find some way—some set of customers somewhere—to make your small organization cash-positive as fast as possible.
Of course, the danger in making this unequivocal call for spinning out an independent company is that some managers might apply this remedy indiscriminately, viewing skunkworks and spinoffs as a blanket solution—an industrial-strength aspirin that cures all sorts of problems.
In reality, spinning out is an appropriate step only when confronting disruptive innovation. The evidence is very strong that large, mainstream organizations can be extremely creative in developing and implementing sustaining innovations. In other words, the degree of disruptiveness inherent in an innovation provides a fairly clear indication of when a mainstream organization might be capable of succeeding with it and when it might be expected to fail.