Our world is changing rapidly around us. Some companies that once dominated global industry no longer exist, and newcomers arise every week. Companies that wish to survive and grow must change to fit new ways of working, and adapt to the immense differences that the growth in telecommunications and transport has brought.
In recent years there has been a major shift in company characteristics. Companies used to organise themselves in a ‘vertical integration’ mode, in which they believed that they had to make or control, under their own management, as many elements of their business as possible. Thus large companies in the electronics field would make as many as possible of their own components, sub-systems, assemblies, etc., and write all their own software.
All companies have had to recognise that new methods of focus and specialisation, leading to the concept of ‘horizontal integration’, offer a better model for survival and growth. In this model, a company specialises in the elements most closely focused on its business goals. For example, many airlines no longer own their aircraft but lease them, often including the crew, from leasing companies which serve many airlines. Similarly, hotel chains often only own the booking system, the frequent customer programme and the name, leaving the owning and running of the actual hotels to a local operation.
Figure 1 illustrates one vision of this shift in emphasis. It often includes virtual integration elements, where outside assets get integrated in a virtual manner into the company, in the form of knowledge to use, trade and build upon. So Boeing’s plans for 7E7 construction include thousands of outside companies integrated into the total system.
This fundamental change is particularly evident in transport and communications. These are the great enablers of today’s global industries. The rapid move to integrated supply management, and to the integration of physical transport and the transport of electronic assets, has totally changed the way people do business. In many cases the physical transport of goods can be replaced by the electronic transport of data and physical generation elsewhere. This internet-based model is reducing cost and complexity in many areas of business.
How to fund R&D
A thorny problem in the funding of research and development in all companies is how to make the key choices of what to fund and at what levels, given limited resources. Faced with competing market demands, pressure from product managers, and researchers and developers pushing pet projects, tools to assist decision-making are an essential element of the R&D process. The popular coupling of ‘research’ and ‘development’ – run together into one supposed composite entity – really does not stand up to hard examination. So in the new models of hard-nosed industrial development, what role should research take as compared with development?
Companies also need to decide whether universities have any role in their development plans, and if so, what? University-funding agencies and the universities themselves are faced with the same set of dilemmas, of how to exploit their own research.
So how should a company manage research and development? Former so-called ‘differentiators’ – quality, customer service, etc. – are now ‘table stakes’ without which no company can even hope to compete. Customer loyalty, so long taken for granted, no longer exists, and companies have to strive to retain any vestiges of it, by beating the competition on price, services, or any other differentiator. Competition is intense, so good quality, competitively priced, high-performance products must be created in a fast stream, and coupled with reliability, service and long-term commitment.
Companies must therefore develop a strategic approach – even a very simplified one – to connect their business approach to their ‘product’ development, whatever the nature of their products. They need to understand when to build and when to buy in, when to go solo and when to partner or collaborate. A strategic plan for internal and external technology is essential. Many companies don’t articulate this very coherently and then are not aware they have changed direction and lost focus, especially in the connection between their business direction and the associated product and technology imperatives.
The essential message is that old-fashioned models, where R&D labs have a life of their own independent of the strategy of the business, or where they try to drive the business, are less and less likely to lead to success. Many major companies have adopted different models of sharing R&D knowledge and resources with their key customers. Indeed, in many cases they integrate customer supply and research databases with their own to considerable mutual benefit, and sometimes joint business ventures result. From the business perspective, R&D becomes a major knowledge asset. The rapid rise in availability of knowledge exchange tools has had a major effect on these processes. The capability to share knowledge with suppliers via internet, intranet and extranet systems changes totally the way companies can work together.
In the 1990s, as Chief Scientist of Nortel Technology, I was faced with the problems of recommending how our central core R&D budget should be divided. Inevitably, each year, demand for funds from each operating division far exceeded available budgets, and reconciliation of corporate goals with specific and often urgent product and technology development projects seemed impossible. Indeed, before the work reported here was done, decisions were usually made in an arbitrary manner. I therefore set out to present the choices on a more rational basis, and developed some tools to help in the decision making. In subsequent and recent years the same tools have been used with small and medium-sized companies to help choose and prioritise development projects, and to help decide which elements to do outside, often with university groups.
We also must not ignore all the non-technology elements which make the difference between success and failure in many businesses. Innovation in operational and logistical factors, customer relationships of every kind, valuing employees and showing it explicitly, and the ownership and development of intellectual property all matter alongside world-class product development. Indeed, we must always be aware of the oncoming fashions because they fuel customer expectations of our own behaviour – R&D in some of these areas is as essential as product R&D if a company is to operate as effectively as its competitors rather than be a mere follower of fashion.
In the so-called ‘high tech’ industries, intellectual property – its possession, generation, maintenance and guarding – has become of vital importance. Even for small companies this is true – often their ability to make their intellectual property work for them is all that stands between prosperity and closure. And universities are now under considerable financial pressure, especially in the UK, where their reputation for excellence in creativity, while fraying at the edges, still survives.
The price of failure
History is littered with the debris of companies that ignored the basic precepts of survival for their time. RCA and Zenith, great names in the vacuum tube industry, have long since vanished, conquered and eliminated by upstart semiconductor companies now giants themselves. Even the once mighty Bell Laboratories, inventors of the transistor, were by the 1960s far behind the growing new semiconductor chip manufacturers such as TI and Intel, through outdated and dogmatic management practices and ‘not invented here’ syndromes. Indeed, by the early 1970s Bell’s semiconductor operation was struggling under management-imposed dictates on technology choices, and the company was ignoring the speed of change in the outside world.
The linear model of research and development was in fashion in the 70s and 80s. The underlying idea was to fund vast amounts of basic research because from that ‘would come the ideas and products which could be developed, manufactured and sold’. Industrial collaboration took place in ‘pre-competitive’ research areas, and universities took part and did their bit for industry, often with little reward. With the rise in realisation of the importance of intellectual property, the growth in partially funded programmes such as the EU Framework in its later guises, and the Government’s desire, driven by the Treasury, to cut back on all forms of industrial support as far as possible in the early 90s, the linear model was firmly declared to be dead. Some universities, driven by an early understanding of the value of IP by Vice Chancellors of the calibre of Sir Eric Ash of Imperial or Sir Derek Roberts of UCL, understood the commercial value of their research and started to look for new and more equitable exploitation models.
Thus the old-fashioned idea, that ‘pre-competitive research’ that was somehow far ahead of the market could be performed in splendid isolation, gradually fell out of fashion. New models leading to the sharing of IP arose. In that environment, companies were still faced with the dilemma of how to prioritise their internal work and what resources to place on externally funded research.
Of course, collaborative or non-linear concurrent engineering models were, in reality, always used. My own version is a hexagonal model linking research, development, marketing, prototyping, sales and manufacturing in one team for collaboration and rapid feedback. Such a model avoids the recrimination which inevitably follows the separation of these functions and the resulting difficulties in multi-company collaborations.
Reversal of the linear model
There is one problem, however: to sell something it must be manufactured, the market has to be developed, the product itself has to be developed, and before that research has to have been done. So the linear model can’t be quite dead after all! You could say the linear model is valid in reversed time! Even in a concurrent engineering environment, when you look back at the sequence of events that led to the end point, linearity (or rather the sequence: research – development – production – sales) has actually taken place. In a successful sequence leading to product development, good choices were made, and in failures, usually poorer alternatives were chosen. So the linear model tells us something very important: innovating and making the right choices in marketing, manufacturing, products, development technology and research is vital.
Collaboration in research helps us make better choices earlier. It helps eliminate unworkable hypotheses, inferior lines of development, and premature exploitation of new technology. It also points to a continuing need for good choices in research, development, products and technology, and on into marketing and sales. Simpler models, linear or not, don’t give us enough insight into how we need to work. We need to find better ways of managing R&D, taking up R&D done by others, choosing which areas to collaborate in, and working with everyone who can give us an advantage.
I believe there are three basic methodologies – order, hero worship, and chaos. ‘Order’ is what everyone thinks they have or thinks they would like to have. Companies boast of their ability to plan their strategy for R&D, speak of their operating plans, and give the impression of total control of their destiny! Often the reality is different; warring factions fight for resources; powerful figures in the company push their own agendas, and underneath the corporate engineering community pushes upwards what it believes the company needs most.
‘Hero worship’ is where a ‘hero’ takes charge, defines what needs to be done and pushes his or her ideas against all opposition. The company may be lucky or dead depending on whether the hero is right or wrong. Who could argue that Morita of Sony was wrong to push his Walkman ideas? But what about the Sinclair battery-powered C5 pseudo-car?
‘Chaos’ is where research and development are left to the engineers ‘on the ground’ to fight for their own pet budgets and projects. Often they do know best individually, and about their own projects, but have little idea of the overall strategic needs of the company. Again, the company may be lucky and somehow manage to develop what its market needs.
Every company in reality needs a mixture of these three methodologies. Capturing the value in the ‘hero’ and ‘chaos’ elements and imposing some order on them can work wonders, focusing a company’s resources on its real product and technology needs for the future. I offer you here a useful way to achieve this.
Companies often create internal or external visions and slogans suitable for their markets – ‘rapid time to market’, ‘value to the customer’, ‘technology intercept’, etc. – without really understanding either their meaning or their implication. A relatively simple matrix-based methodology can assist both perception and reality, and simplify understanding of the optimum way forward to achieve these visions.
First one must begin to understand the differences between products and the underlying technology that enables them. Products may be arbitrarily classed as current, evolving, novel/innovative, and ‘blue sky’ or ‘futuristic’. ‘Current’ is what we make and sell now. ‘Evolving’ means, in this context, products wherein the concepts are clear, the directions are more or less set, and choices have been made on most aspects of their production – in other words, the principal risk is time. ‘Novel/innovative’ products, on the other hand, might not work at all – novel concepts, after all, may need proving, new products’ architectures may not work, or some key concept could be flawed. Here the risk is much greater; ultimately, it could mean non-existence. ‘Blue sky’ or ‘futuristic’ products, often at the stage of a gleam in the eye, are even riskier.
Technology can be classified in a similar way. Current technology is available now. Risk is low or nonexistent. Evolving technology, on the other hand, carries more risk; it may be in-house or external. These days, for example, few companies produce their own silicon chips, often relying instead on the few giant global vendors who have evolved this technology to timescales shared with their large global customers. But sometimes they are late. Some element in their expectation fails to materialise and their customers suffer the consequent delays. But as with products, this risk tends to be time and delay rather than total failure.
Novel and innovative technology, by contrast, carries high risk, where total failure may be the result. As an example, the classic case of carbon fibre-based fan blades in the Rolls-Royce RB211 aero engine comes to mind. The novel ‘Hyfil’ carbon fibre blades could not be made strong enough for practical use in the real environment, and were subject to rapid erosion at the tips causing catastrophic efficiency loss. But note that the product, the RB211 engine itself, did not fail: by moving to (then) current technology – polycrystalline titanium fan blades – the product could be fully developed and has been an outstanding success, with huge global sales of the RB211 and its many derivatives. The novel technology failed to deliver but more conventional technology saved the product.
‘Blue sky’ technology carries both enormous risk for a manufacturer, and yet has the potential for spectacular success. Philips developed the basic technologies for the CD in the 1960s and 1970s. Virtually every element was novel – the coding of the signals, the laser-reading methods, and the means of producing the patterns on the disc. Initially the product was a spectacular failure – who remembers ‘laser discs’? The novel technologies, for practical purposes, didn’t exist at the time. But now Philips collects royalties from every CD made, and derivatives power DVDs as well. In novel technology we often find the key intellectual property that provides long-term profits for the owner.
Evaluating the risks
It is of prime importance to separate clearly the development of a product from the enabling technologies and evaluate carefully where the risks lie.
Suppose one were to collect together all the R&D plans for an operation. Each project within those plans could be classified as to its degree of currency or novelty in technology or product concept, its risk elements, and its proposed cost. At the planning stage for any given budget, bids for R&D funds will exceed supply by a considerable margin. Putting putative numbers against each project, however, is essential, as value judgements need to be made.
Plotting these projects onto a current-to-novel matrix can be most revealing. The very act of quantifying the degree of novelty and risk in both product and technology terms can be sobering in itself. The optimum matrix seems to be technology vs product – this allows separation of technology risk and novelty, from product risk and novelty. At an earlier stage, science vs technology can be plotted, and at a later stage product vs market (similar to an expanded ‘Boston Matrix’) can be derived.
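Purely as an illustration of the classification and plotting step described above – the article prescribes no implementation, and the project names, classes and costs below are invented – the technology-vs-product matrix might be sketched like this:

```python
from dataclasses import dataclass
from collections import defaultdict

# The four novelty classes named in the text, used for both
# products and technologies.
LEVELS = ["current", "evolving", "novel", "blue sky"]

@dataclass
class Project:
    name: str        # hypothetical project name
    product: str     # product novelty class
    technology: str  # technology novelty class
    cost: float      # proposed cost, arbitrary units

def build_matrix(projects):
    """Group projects into (technology, product) matrix cells."""
    matrix = defaultdict(list)
    for p in projects:
        if p.product not in LEVELS or p.technology not in LEVELS:
            raise ValueError(f"unknown novelty class for {p.name}")
        matrix[(p.technology, p.product)].append(p)
    return matrix

# Invented example portfolio.
portfolio = [
    Project("next-gen switch", "evolving", "evolving", 12.0),
    Project("line-card cost-down", "current", "evolving", 4.0),
    Project("all-optical router", "novel", "novel", 20.0),
]

matrix = build_matrix(portfolio)
for cell, projs in sorted(matrix.items()):
    print(cell, [p.name for p in projs], sum(p.cost for p in projs))
```

Even a toy version like this forces the sobering step the text describes: every bid must declare its novelty class and cost before it can be placed on the matrix at all.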
We may then consider two further factors. Building new, or evolving novel and innovative products in current technology carries least technology risk and offers the fastest possible route to realisable products – in other words, the ‘Holy Grail’ of rapid time to market. Every element of technology risk taken, beyond the known state-of-the-art, raises the risk of lateness in the marketplace. On the other hand, adding evolving and innovative technology to current products has a totally different set of effects. These include the addition of new or novel features, cost reductions, and the optimisation of function. These fall into the ‘value to the customer’ class – adding to the capability to reduce costs and prices while adding features and functions.
The application of evolving technology to evolving product is what used to be called ‘technology interception’. Here product plans assume future technology availability, on the same timescale as the product development. Risks are moderate, timescales may slip, but many companies in leading market positions aim to operate in this mode, taking the risk of lateness or lack of exact technical fit, to build front-line products at market edge.
Riskiest of all is the novel technology, novel product sector, where innovative technology is combined with innovative product ideas. The rewards of success can be very high, especially when a product which ‘breaks the mould’ appears. Our example of the RB211 falls into this class. The novel technology of carbon fibre blades went into a novel engine design. The ‘rescue’ was to move to conventional technology for the new product. Sony is a good example of a company that is not afraid to be at the cutting edge, gaining both rewards and penalties (remember Betamax). But Sony is reaping the rewards of many years of research into CCD imaging in its innovative digital cameras.
The model has yet another facet. Networks or systems, often incorporating externally sourced products, can be classified in the same way, as current, evolving, novel and ‘blue sky’. When plotted against product evolution, this offers a vision of how to match the corporate offerings to a customer’s expectation. Current networks can be used as test beds for new product ideas, working with current customers to expand the networks’ frontiers. Initially, new customers can be rapidly brought on board with current products, and later have their networks expanded with more advanced offerings. And totally new customers in new product areas are a risky, but potentially lucrative, new segment.
Realignment by the matrix
At Nortel in the 1990s, we realigned the R&D proposals by quantifying every project against the product technology matrix. At the time the corporate values – and mantras – were ‘time to market’ and ‘value to the customer’. The initial cut at budgets for corporate research showed demands for people and money at double the available resources, and a match of less than 10% to the corporate goals. Realignment by the matrix brought these figures to compliance with budget and 45% direct alignment to goals, with the remainder focused on longer-term aims.
It was not easy to do. All the VPs of different divisions had their own priorities, all of which demanded more resources than were available. It was only by breaking down each demand into smaller units and prioritising them that progress was made. We also used a slightly simpler 3x3 version of the matrix. Even then a degree of arbitrariness was needed in the final decisions. In subsequent use in smaller companies, decisions were usually more clear-cut, as smaller resource levels did not leave much room for choice. In all cases, however, the models were useful and resulted in better and clearer decisions.
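The final step, fitting quantified bids into a fixed budget, can also be sketched. The greedy alignment-per-cost rule and the figures below are my own illustrative assumptions, not the process actually used at Nortel; as the text notes, a degree of arbitrariness remained in practice.

```python
def prioritise(projects, budget):
    """Fund the highest alignment-per-cost bids first until the
    budget is exhausted. A deliberate simplification of the
    judgement calls described in the text."""
    ranked = sorted(projects,
                    key=lambda p: p["alignment"] / p["cost"],
                    reverse=True)
    funded, spent = [], 0.0
    for p in ranked:
        if spent + p["cost"] <= budget:
            funded.append(p)
            spent += p["cost"]
    return funded, spent

# Invented figures: demand roughly double the budget, echoing the article.
bids = [
    {"name": "A", "cost": 10, "alignment": 0.9},
    {"name": "B", "cost": 8,  "alignment": 0.3},
    {"name": "C", "cost": 6,  "alignment": 0.8},
    {"name": "D", "cost": 12, "alignment": 0.5},
]
funded, spent = prioritise(bids, budget=18)
print([p["name"] for p in funded], spent)
```

Breaking each divisional demand into smaller units, as described above, is what makes a selection rule like this workable: small units pack into the budget far better than a few indivisible mega-bids.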
Time is another factor. Clearly there is a challenge – if a new and innovative product can be brought to the market early, the face of the market can be changed significantly. Often young VC-owned companies are trying for just that – looking to leapfrog the current market with innovative products and services. Therein lie the risks and the rewards.
We can summarise the whole in one diagram (see Figure 7).
Thus the R&D choice methodology has shown itself to be a useful and practical way of assisting in the difficult choices to be made.
World-class products, whether hardware, software, systems or services, are essential for survival. These come from focused and managed R&D. This new model provides insight into ways to manage that R&D, including how to bring in external resources. We should recognise that ‘waves of technology’ are rolling towards us; today’s novel or innovative technology or product will be commonplace the day after tomorrow.
The consequences of ignoring the factors these models bring out can be dire. For large operations, waste of resources is no longer an affordable luxury, especially in fast-moving marketplaces. For small operations, making the right choices can be a matter of life or death. When the product, technology and system thinking forced by models such as those presented here are combined with market appreciation, risk of failure is reduced, and development is more focused.
The model can also help in assessing elements of collaborative development by highlighting areas of risk, and how bringing in partners can short-circuit development and shorten timescales, and show other methods of risk reduction.
To make it all work well requires flexibility in management, determination to focus on the desired outcomes, and perhaps most important, a realistic assessment of timescales and resources for the tasks to be done. Management that objectively assesses its capabilities, and sets realistic goals matched to marketplace demand, gives a company a running start and the best chances of success.
Daniel V McCaughan OBE FREng
McCaughan Associates, Holywood, N. Ireland
Daniel McCaughan started his career at Bell Laboratories in the US, followed by RSRE, Malvern. Various managerial and director-level posts at GEC, Marconi, STC and Nortel Networks followed, including Chief Scientist at BNR and Nortel Technology. In 2000 he was President and COO of CDT Ltd, before leaving to work in the venture capital industry with a fund based in the USA, and as a technical and managerial consultant to a number of SMEs and the UK Government. He is a Fellow of the IEE, the IoP and the RAeS, and visiting professor at the University of Manchester.