The Future of Work – The Global Challenge

The global challenge of work is two-fold. First, will automation, in its various forms, destroy jobs? And second, even if not, will workers be paid enough to sustain the global economic system? This is why the former US Treasury Secretary Larry Summers has said the problem of “good jobs” is the central problem of the richer economies.

The combination of economic stagnation, global competition and digital technology has created something of a social and public panic about work. We are losing “the race against the machine,” or reaching “the end of labor”. But there are two diverging stories about the future of work, one dystopian, one utopian, as Flipchart Rick has observed. On the one hand: it “will revolutionise the workplace … and enable us to have more fulfilled working lives.” And on the other: a future “of factories without people, of vanishing jobs, of a hollowed out labour market and … vast profits with few employees.”

Our present model of work is, broadly, a creature of the industrial revolution, dominated by the division of labour, the supervision of labour, and payment of workers for their time or their tasks. This includes so-called “new economy” models such as Uber, whose casualisation of its workforce would be recognised by any 19th or 20th century dock-worker. Some of the big shifts shaping work reinforce this model. Others are starting to reshape it, potentially marking the start of a transition beyond it.

To understand how this is likely to change over the next decade and beyond, we need to understand the global landscape of work. Five big drivers shape that landscape: a shift towards services, the globalisation of supply chains, the growth of ubiquitous technology, an increased squeeze on resources, and a shift in social values towards well-being. These pull in different directions.

Globalisation and digitisation pull towards rawer forms of capitalism, whereas resource constraints and changing values pull towards more inclusive versions. How services are delivered depends on which of these two models you prefer. The version of the story about the future of work you subscribe to tends to depend on your assumptions about how these drivers will play out.

The shift to services: The deep shift in the global economy is the long-term rise of services to “become the dominant economic activity” (UNIDO, 2009). The economists Timmer and Akkus (2008) describe this as a “powerful historical pathway of structural transformation,” which every country follows.

One of the reasons for the long boom in living standards in the 20th century was the long boom in manufacturing, the dominant economic trend for much of the century. Productivity growth and economic growth tend to fall as services become dominant, and the influence of trades unions, which are effective in maintaining the value of wages, tends to decline.

The globalisation of the supply chain: Manufacturing is also tradable, meaning that it is open to export competition. The growth of the Asian economies, in particular China, has been extensively driven by manufacturing. Taking a long view, Asia’s share of world production almost doubled between 1970 and 2008, from 15.5% to 28.5%, at the expense of Europe and North America (UNIDO, 2009). This growth was driven largely by the development of containerisation, which transformed shipping costs, rather than by digital technology.

But globalisation is reaching its limits. Wages in export sectors in both China and India are now relatively high (a pattern seen in other emerging economies in the past) and companies are moving their production closer to their markets, both anticipating rising transport costs and wanting to be able to respond more flexibly to demand.

The other effect of globalisation, of course, is an increase in migration: more than 500 million people globally now live in a country they weren’t born in. Economists generally agree that immigration is good for economies. Migrants tend to be younger, more enterprising, and economically active, and their effect on wages, economic growth and tax contributions is almost completely positive. However, in weak labour markets migration also tends to push down unskilled wages by increasing competition for such jobs; such competition is gamed by unscrupulous employers.

The growth of ubiquitous technology: There is a widespread fear that the rise of robots – or more exactly, a combination of computing power, algorithms and robotics – will destroy the labour market, even, possibly, the very idea of labour value. A widely publicised study by the Oxford University academics Carl Benedikt Frey and Michael Osborne argued that 47% of conventional occupational classifications in the United States are at high risk of being automated (Frey and Osborne, 2013). In The Second Machine Age, Erik Brynjolfsson and Andy McAfee suggest a reason: computing power is capable of exponential growth in performance over time, and we are just at the start of that progression. If robotics did for blue-collar work, then artificial intelligence will do for white-collar work.

This argument, however, tends to miss the fact that technological innovation, historically, has created new jobs, typically after a period of turbulent transition. In his analysis of the labour market, David Autor (2014) finds that between 1999 and 2007 “routine task-intensive” jobs were indeed largely removed by computerisation, while knowledge jobs (“abstract task-intensive”) tended to survive or increase where human knowledge was complemented by computers. “Manual task-intensive” jobs, at the less-skilled end of the market, were much less affected by computerisation, and demand for them seemed to be rising. Yet their wages fell. His explanation: labour supply for these jobs increased because of the collapse in demand for “routine task-intensive” jobs.

The squeeze on resources: Population and consumption pressures mean that we are breaching many of the natural planetary boundaries. For capitalism this is a new game: traditionally it has been able to use resources without worrying much about the consequences. And after a century of cheap energy, the long-run trend is up, despite the current downward blip in the oil price. In our recent Futures Company report The 21st Century Business, Jules Peck and I argue that this resource shift is changing the way that companies behave; we are moving to post-sustainability (socially, economically, and environmentally). An important element is a shift from consumers to citizens, among both customers and employees, where the overall impact of a business matters. An example: it’s argued that one of the reasons why McDonald’s sales are slumping among Millennials is that eating there is depressing, because of “the feeling that the people behind the counter, flipping burgers and taking orders, have dead-end jobs where they’re treated poorly.”

The shift to wellbeing: One of the long-run trends is towards wellbeing, physical and psychological, individual and social. This complements one of the strong workplace trends: significant competitive performance is typically produced only by empowered and engaged employees, who are intrinsically motivated to work for the business. This is true of lower-wage environments as well as higher-wage businesses.

Striking research by Zeynep Ton (2014) has found that companies such as Costco in the United States and Mercadona in Spain out-perform their sectors – by some margin – through a combination of better wages, significant investment in training, and appropriate technological investment to support staff. With such a “good jobs” strategy, increases in wages translate directly into far larger sales increases. High value work benefits individuals and businesses, as well as society as a whole.

The Future of the Company – The Global Challenge

Big business has become disconnected from the broader society within which it operates. A narrow focus on short-term returns has prevented businesses from investing in innovation to foster long-term sustainable growth.

The common understanding of the purpose of publicly listed companies, particularly in Anglo-American markets, is that they exist to maximize shareholder value. Publicly listed companies are under tremendous pressure from activist shareholders, takeover threats, and general market dynamics to generate short-term value by spinning off parts of the company, buying back shares, and laying off staff. External pressure is compounded by executive compensation schemes that are heavily weighted towards stock options. In theory, incentive compensation systems should reduce agency costs so that managers will act in the interests of shareholders. In practice, they create perverse incentives to extract value from the company at the expense of customers, employees, organizational health, the community in which the business operates, and ultimately society as a whole.

A number of unintended consequences result, including:

  • The failure of companies to adequately consider and respond to societal challenges, such as environmental damage and climate change, due to the perceived cost;
  • Erosion of trust between society and the corporate sector, including the role of corporations in shaping public policy, which in turn leads to a loss of trust in democratic processes; and
  • Firm mismanagement through stock price manipulation, insider trading and tax evasion, with a number of associated firm-level and macroeconomic risks: treating employees as disposable; undermining investment, research and development; hollowing out whole organisations; turning executives into caricatures of self-interest and greed powered by narrowly focused remuneration schemes; focusing talent in the corporate world on systematically extracting value rather than creating it; and fuelling market failures and economic crashes.

Inequality has greatly increased in the last twenty years, in part due to the failure to translate corporate profits into increased salaries across the firm. Even as worker productivity has continued to rise, real worker wages have essentially flat-lined. At the same time, executive compensation has markedly increased due to the aforementioned stock option schemes. Rising inequality within companies has in turn contributed to macro-level inequality that threatens to concentrate economic and political power in the hands of a privileged few.

The biggest questions we face go to the very core of business: what is the purpose of the corporation, and specifically of the large listed company with dispersed shareholders? Will the current model of large publicly listed companies survive the next decade, and if not, what will it be replaced by?

Another question is about the alternatives to public companies, such as B-corporations, co-operatives, companies controlled by foundations, privately held companies, partnerships and family-owned businesses. Many of these alternatives have shown themselves to be capable substitutes for corporate bodies, but will they pick up momentum and drive the way forward? Will they eventually eclipse publicly listed companies? Research by the CFA Institute shows that global equity listings declined by 17% between 1998 and the end of 2012, from 56,119 to 46,674. US stock exchanges were hardest hit, losing nearly 50% of their listings from their high of 9,253 in 1997. Europe has also seen a significant decline of 23% of its listed companies, while Asian exchanges have seen the least change, with less than 5% lost. Given the sharp decline in the number and longevity of public companies, it is unsurprising that many ask if the model of the public limited company will survive the next decade.

Perhaps the most pressing issue today for financial regulators is the question of how to address short-termism in the markets and its significant influence on the strategies of public companies. It is widely acknowledged that an excessive focus on quarterly returns fed into the 2008 crisis but opinions vary widely on the causes of and solutions to short-termism. What is the role of financial markets and investors in promoting responsible capitalism? Can we turn institutional investors into patient capital, willing to invest in innovative research that will yield returns in the long-term? And conversely, is it possible to limit short-term trading, or at least to reduce its impact on the governance of companies?

Stewardship has become a central focus of regulators seeking to push markets to a long-term orientation. What do good stewardship and responsible investment look like in practice? Is it reasonable to expect institutional investors and corporate managers to serve as good stewards and act sustainably?

The Future of Work – Proposed Way Forward

The way forward depends on how you prefer to read this bifurcation between the technologists and the sceptics. We don’t know which group is right: there are no future facts. But there are some observations that can help shape our perspectives on this.

The first is that these widely divergent views are a feature of this point in the technology cycle. The most excitable projections of the future of the car were seen at just this point on the oil and auto curve in the 1950s. The technology S-curve in Figure 1, based on the work of Carlota Perez, helps us to understand why. At this point, when the S-curve is at or approaching its second inflection point, people have been experiencing rapid technological change for the best part of two generations. The notion that “the only constant is change” has become a breathless platitude in the public discourse. So the technologists’ perspective (point ‘t’ in Figure 1) is a projection of this steep ramp. The sceptics instead note signs of falling returns and declining customer utility – and see a flattening of the line (point ‘s’). The gap is large, and one’s perspective on it is a matter of worldview, not evidence.

Figure 1: The technology S-curve

Source: Carlota Perez / additional analysis by The Futures Company

Second, almost all business innovation and new business value is driven by the application of knowledge, and the way it is embedded in individuals, teams, and systems. The Futures Company has explored this in recent research with the Association of Finnish Work on the idea of ‘high value work.’ The important point here is that this is true of a whole range of knowledge, including knowledge of service and customers, and knowledge of culture and place, as well as technological knowledge. The most successful businesses use technology to complement and enhance this knowledge, not to replace it.

Third, the trend towards wellbeing is a deep and powerful one. If Millennials express a desire for meaningful work, this is also true more broadly. We are on the cusp of a transition to a world where, as Hardin Tibbs (2011) has argued, half of the populations of Europe and the United States subscribe to post-modern values (drawing on Inglehart) of autonomy and diversity. The workplace will not escape this trend. One way in which this is expressed is in a transition from consumer or employee to citizen. Increasingly, anyone with any degree of choice in the labour market is choosing employers who recognise them as a whole person, not just as a unit of labour. The evidence suggests that the engagement the employer gets in return (even, say, in retail) is a powerful driver of performance and profitability.

Fourth, the bargain that businesses struck in the 1980s and 1990s, as they enforced flexibility and “downsized” headcount, may turn out to be a Faustian pact. Shedding jobs and exerting tight control of labour markets increased short-run profits. But that same control also squeezed out their sources of growth. And as both the OECD (Cingano, 2014) and the IMF (Ostry et al, 2014) have noted recently, wage inequality has been a further drag on economic growth. To regain growth, businesses are likely to have to increase wages and give back some control and power to their workforces.

My own best guess is that we are not headed for long-run technological unemployment. I have changed my mind about this over the past year as I have spent more time with the evidence.

The explanation that seems best to fit the present state of work and labour markets is that they have been through a “perfect storm”: a globalised workforce, the deskilling of routine work (which was highly vulnerable to automation) and the shift of these workers into manual or service work, and aggressive deregulation of labour markets driven by a neoliberal political agenda.

The discourse around technological unemployment is not persuasive to me. The “abstract” jobs (using David Autor’s analysis above) will be complemented by technology, and so, in a different way, will the manual jobs. Meanwhile, the projected gains from artificial intelligence and analytics are going to be harder to achieve than currently anticipated. For example, big data gets less useful as the data sets get larger, and the driverless car, the poster child for the tech future, is a far tougher proposition than Google lets on. And these tech scenarios rarely include the new jobs that will emerge as we better understand the potential of the technologies, except, sometimes, as a panic about the possible speed of change.

But, and it is a big but, we’re only part of the way through the dislocation to work and to labour markets caused by this perfect storm. Things will not get better quickly.

The Future of Work – Impacts and Implications

Looking at the shorter-term impacts, then, it is possible to see a range of approaches to this turbulence in the world of work. Governments have options, largely about whether or not to intervene in labour markets to influence work outcomes. But employers are also moving to new strategies, not out of goodwill but through self-interest.

These options, highly simplified, are shown in the matrix (Figure 2), which contrasts laissez-faire approaches with interventionist approaches.

Figure 2: Approaches to work – laissez-faire versus interventionist

Source: Andrew Curry / The Futures Company

The race to the bottom: This laissez-faire option operates on the principle that labour market flexibility is the secret to increased employment in a globalised labour market. In practice, nearly all countries have increased flexibility and permitted more casualised work over the past decade – even somewhere with a strong tradition of labour protection such as Germany. The evidence increasingly suggests, however, that the pursuit of low value jobs leads to a vicious cycle of low productivity, low investment, low growth, and low tax and social contributions from business.

This policy approach also involves government subsidy to employers, as low-paid workers are supported by state payments. In the United States, a study showed that the fast food sector was effectively subsidised to the tune of $6 billion because its low paid workers were dependent on food stamps and subsidised housing. Increasingly this looks like a political choice that is no longer supported by economic evidence.

Enlightened self interest: It appears that employers who pay better and create better working environments do better financially. Walmart is a relevant case. Over the last decade, its share price has been broadly stagnant, while Costco has outperformed it “by a considerable margin”, in terms of sales, earnings or stock market returns. One reason: according to HBR, far lower staff turnover means knowledge is kept in the company – and drives customer engagement. Such employers also invest in technology to enhance the performance of their staff, using each to complement the other. The Spanish retailer Mercadona similarly invests heavily both in training and stock management systems.

Wages and labour performance are also becoming part of businesses’ reputational capital. See, for example, the increasing success of the UK Living Wage campaign in signing up large companies as “living wage employers”. The public sector can encourage this, for example by giving tax breaks or other forms of support to companies who deliver such commitments, and sharing evidence of business benefits.

Keeping the market honest: Turning to more interventionist approaches, the state can take the view that it wants to drive unscrupulous low-wage employers out of the market as a way of driving up standards and investment (because low-wage employers are unlikely to commit to training, and have little incentive to invest in capital equipment, which reduces productivity). This leads to approaches such as enforcing (and increasing) minimum wages, both through regulation and legal frameworks, and also through public procurement rules.

Such a policy complements the “enlightened self-interest” approach by removing free-riders from the market. Although conventional wisdom has argued in the past that minimum wage legislation costs jobs, this seems to be a weaker effect than claimed.

Re-imagining work: Much of our intervention in the labour market is driven by a view that it creates social goods, both from an economic perspective and also from a social perspective (over a long period studies have shown that worklessness produces adverse psychological and physical effects). But it is possible that such findings are linked to a set of “modernist” social values that are rapidly giving way to “post-materialist” values. Certainly, people with some income and a degree of social capital who do not have to work find worthwhile things to do, including volunteering. This is part of the argument for the Basic Income: that as we move to the “post-industrial” world envisioned by Daniel Bell, in which skills are more embodied in personal knowledge, encouraging traditional work is no longer the only, or the best, way to get the social benefits of productive engagement.

The rise of the basic income: Until very recently, the idea of a basic income – a minimum sum paid to all people regardless of their work status – was right on the fringe of political discourse. But it has been moving rapidly towards the mainstream. The idea has deep roots: George Bernard Shaw promoted it as “a vagabond’s wage” a century ago.

The analysis in this provocation helps to explain why. It is a policy idea that helps to improve outcomes whether the technologists or the sceptics turn out to be right. And in the meantime it helps to shore up economies, and individuals, that are struggling in the slow readjustment of labour markets.

If the “robots” hypothesis is right, we’ll need a basic income to make the economy work (markets need people who can afford to buy products). If the market power argument is right, then basic income keeps employers honest, by ensuring they have to pay good enough wages, in good enough conditions, to attract and keep their workers. One interesting side effect is that it would mean that our fundamental notions of the value of paid work could be about to shift, for the first time since the Industrial Revolution. A recurring feature of the ICT era has been that questions of power and politics have frequently been diagnosed as issues of technology. The future of work is just the same.

The Future of Connectivity – Options and Possibilities

Demand will continue to grow exponentially in the next decade: Demand for mobile broadband is closely related to the evolution of device and screen technologies, one of the fastest evolving areas in the Information and Communication Technology (ICT) industry. In 2011, the Retina display of an iPad already had nearly twice as many pixels to fill with content as a Full-HD television. New device form factors such as Google Glass, another hot topic introduced in 2012, continue to drive this evolution, and ultimately only the human eye will set the limits for the amount of digital content that will be consumed by a mobile device. And these devices will not only consume content – ubiquitous integrated cameras with high resolution and frame rate are producing exabytes of digital content to be distributed via networks.

Enabled by these powerful new devices, the app ecosystem continues to fuel demand for mobile data by continuously inventing new categories of applications that test the limits of the network. It started with mobile web browsing in 2007 and moved on to mobile video, which already accounted for more than 50% of mobile data traffic in 2012. And by 2020, people might demand mobile networks that allow them to broadcast live video feeds from their glasses to thousands of other users in real time.

Many of the apps will be cloud based or rely on content stored in the cloud. IDC estimates in their digital universe study that by 2020 30% of all digital information will be stored in the cloud – and thus be accessed through networks.

An even broader range of use cases for networks will develop as communication technologies and applications proliferate into all industries and billions of machines and objects get connected. They will go far beyond the classical examples of the smart grid or home automation. Just imagine the potential – but also the requirements – that remotely controlled unmanned vehicles would bring to mobile broadband networks.

In summary, we believe that device evolution, cloud based application innovation and proliferation of communication technologies into all industries will ensure that the exponential growth in demand for mobile broadband we have seen in the last few years will continue in the next decade.

The Future of Connectivity – Proposed Way Forward

Having understood what drives demand, we can define the requirements for future mobile networks. As stated earlier, one gigabyte of data traffic per user per day is about 60 times the average data traffic seen in mature mobile operator networks today. On top of this, the growth in mobile broadband penetration and the surge of connected objects will lead to around ten times more endpoints attached to mobile operator networks than today. To prepare for this, we need to find ways to radically push the capacity and data rates of mobile networks into new dimensions to handle this amount of data traffic.
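
As a quick back-of-the-envelope check of these figures (a minimal sketch; the derived numbers are simply implied by the stated targets, not measured):

    # Back-of-the-envelope check of the stated demand figures (illustrative only).
    target_gb_per_user_per_day = 1.0   # requirement cited above
    growth_factor = 60                 # "about 60 times the average data traffic ... today"

    implied_today_mb = target_gb_per_user_per_day * 1000 / growth_factor
    print(f"Implied current average: ~{implied_today_mb:.0f} MB per user per day "
          f"(~{implied_today_mb * 30 / 1000:.1f} GB per month)")
    # -> roughly 17 MB per user per day, or about 0.5 GB per user per month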

Yet being able to deal with this traffic growth is just one aspect. An increasing number of real-time apps will test the performance of the networks. To support them with a good user experience, we need to find ways to reduce the end-to-end latency imposed by the network to milliseconds. Tactile (touch/response) and machine-to-machine interactions in particular have latency demands that can be as low as single-digit milliseconds.

To ensure mobile broadband remains affordable even while supporting the capacity and real-time requirements described previously, we also need to radically reduce the network Total Cost of Ownership (TCO) per Gigabyte of traffic. We believe one important lever to address this will be to automate all tasks of network and service operation by teaching networks to be self-aware, self-adapting and intelligent. This will help to reduce CAPEX/IMPEX for network installation as well as OPEX for network and service management. In addition to lower TCO, self-aware and intelligent networks will be able to understand their user’s needs and automatically act to deliver the best personalized experience.

To further reduce costs per gigabyte, we need to share network resources, both within a single operator’s network and between operators. This can include physical infrastructure, software platforms, sites, spectrum assets or even the network as a whole. We must also find ways to increase energy efficiency. Beyond their environmental impact, energy costs today account for up to 10% (in mature markets) and up to 50% (in emerging markets) of an operator’s network OPEX, and they have been growing constantly in recent years.

The most powerful way to deal with the cost pressure, of course, will be to identify new revenue streams. Are end customers and termination fees really the sole revenue sources for operators, or will technologies enable new business models that allow operators to better monetize all their assets?

Ultimately, we need to admit that due to the fast pace of change in the industry it is simply not possible to predict all the requirements future networks will face. There will be many use cases that are simply not known today. To cope with this uncertainty, flexibility must be a key requirement as well.

The Future of Connectivity – Impacts and Implications

More spectrum, high spectral efficiency and small cells will provide up to 1,000 times more capacity in wireless access. In the world of wireless, Shannon’s law is the one fundamental rule that defines the physical limits for the amount of data that can be transferred across a single wireless link. It says that capacity is determined by the available bandwidth and the signal-to-noise ratio – which in a cellular system is typically constrained by interference.
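
In its standard textbook form (not specific to any particular radio system), the link capacity can be written as

    C = B \log_2\!\left(1 + \frac{S}{N + I}\right)

where C is the achievable capacity in bit/s, B the bandwidth in Hz, S the received signal power, N the noise power and I the interference power; in a cellular system the interference term typically dominates. The formula exposes the two levers discussed below: use more bandwidth B, or improve the signal-to-interference-and-noise ratio.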

Therefore the first lever to increase capacity will be simply to utilize more spectrum for mobile broadband. In total, the spectrum demanded for mobile broadband amounts to more than 1,100 MHz, and a large amount (about 500 MHz) of unlicensed spectrum at 2.4 GHz and 5 GHz can provide additional capacity for mobile data. Of course, reaching agreement on spectrum usage requires significant alignment efforts by the industry and is a rather time-consuming process. Therefore it is also necessary to look at complementary approaches such as the Authorized Shared Access (ASA) licensing model, which allows fast and flexible sharing of underutilized spectrum that is currently assigned to other spectrum holders such as broadcasters, public safety, defence or aeronautical services.

A key challenge associated with more spectrum is to enable base stations and devices to utilize this larger and potentially fragmented spectrum. Here technologies such as intra- and inter-band Carrier Aggregation will be essential to make efficient use of a fragmented spectrum.

The second lever for more capacity will be to address the interference part of Shannon’s equation. This can be achieved for example through beam forming techniques, which concentrate the transmit power into smaller spatial regions. A combination of multiple spatial paths through Coordinated Multipoint Transmissions (CoMP) can further increase the capacities available to individual users. We believe that with the sum of these techniques the spectral efficiency of the system can be increased by up to 10 times compared to HSPA today.

Advanced technologies and more spectrum will help to grow capacity through upgrades to existing macro sites for some time yet. However, a point will be reached when macro upgrades hit their limits. By 2020 we believe mobile networks will consist of 10 to 100 times more cells, forming a heterogeneous network of macro, micro, pico and femto cells. Part of this will also be non-cellular technologies such as Wi-Fi, which need to be seamlessly integrated with cellular technologies for an optimal user experience.

Although the industry today has not defined what 5G will look like and the discussions about this are just starting, we believe that flexible spectrum usage, more base stations and high spectral efficiency will be key cornerstones.
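
As a minimal sketch of how these cornerstones multiply towards the “up to 1,000 times” capacity figure quoted above (the individual factors below are illustrative round numbers drawn from this section, not a forecast):

    # How the three capacity levers combine multiplicatively (illustrative factors).
    spectrum_factor   = 10   # more licensed, shared and unlicensed spectrum in use
    efficiency_factor = 10   # spectral efficiency gain vs. HSPA (beamforming, CoMP, ...)
    densification     = 10   # more, smaller cells reusing the same spectrum locally

    total_gain = spectrum_factor * efficiency_factor * densification
    print(f"Combined wireless access capacity gain: ~{total_gain}x")   # ~1000x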

The capacity demand and the multitude of deployment scenarios for heterogeneous radio access networks will make mobile backhaul key to network evolution in the next decade. The backhaul requirements for future base stations will easily exceed the practical limits of copper lines. Therefore, from a pure technology perspective, fiber seems to be the solution of choice. It provides virtually unlimited bandwidth and can be used to connect macro cells in rural areas and some of the small cells in urban areas. However, the high deployment costs will in many cases prevent dedicated fiber deployments just to connect base stations. Given the range of deployment scenarios for small cells, from outdoor lamp-post type installations to indoor, we believe a wide range of wireless backhaul options will coexist, including microwave point-to-point and point-to-multipoint links and millimetre wave backhaul technologies. For many small cell deployment scenarios (e.g. installations below rooftop level) a non-line-of-sight backhaul will be needed. The main options here are to utilize wireless technologies in the spectrum below 7 GHz or to establish meshed topologies.

Besides pure network capacity, the user experience for many data applications depends heavily on the end-to-end network latency. For example, users expect a full web page to be loaded in less than 1000ms. As loading web pages typically involves multiple requests to multiple servers, this can translate to network latency requirements lower than 50ms. Real-time voice and video communication requires network latencies below 100ms and advanced apps like cloud gaming, tactile touch/response applications or remotely controlled vehicles can push latency requirements down to even single digit milliseconds.
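
A simple worked example shows how a 1,000 ms page-load expectation turns into a much tighter per-round-trip budget (the number of sequential round trips is an assumption for illustration):

    # Why a 1000 ms page-load target implies ~50 ms network latency budgets.
    page_load_budget_ms = 1000      # user expectation cited above
    sequential_round_trips = 20     # assumed: DNS, TLS, HTML, CSS, scripts, images, ...

    per_round_trip_budget_ms = page_load_budget_ms / sequential_round_trips
    print(f"Latency budget per round trip: {per_round_trip_budget_ms:.0f} ms")   # 50 ms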

The majority of mobile networks today show end-to-end latencies in the range of 200ms to 500ms, mainly determined by slow and capacity-limited radio access networks. Therefore the high bandwidth provided by future radio access technologies, together with fast data processing and transmission, will make a major contribution to reducing network latency. Because of the amount of data being transferred, the user-perceived latency can be much higher than the plain round-trip time. Thinking of future ultra-high-definition (UHD) real-time video applications, this clearly motivates the need for further technology evolution.

Equally important is the real traffic load along the end-to-end path in the network. A high traffic load leads to queuing of packets, which significantly delays their delivery. When attempting to solve this, it is not efficient to just overprovision bandwidth in all network domains. Instead latency sensitive media traffic might take a different path through the network or receive preferred treatment over plain data transfers. This needs to be supported by continuously managing latency as a network quality parameter to identify and improve the bottlenecks. In return, low latency traffic could be charged at a premium, providing network operators with new monetization opportunities.
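
A toy single-link model illustrates the queuing point (the packet counts and serialisation time are made up): a latency-sensitive packet that arrives together with a burst of bulk traffic waits far longer under first-come-first-served service than when it is given priority.

    # Toy model: 10 bulk packets and 1 real-time packet arrive at the same instant
    # on a link that needs 10 ms to serialise each packet.
    SERVICE_MS = 10
    packets = [("bulk", i) for i in range(10)] + [("realtime", 10)]

    def realtime_completion_ms(order):
        """Return when the real-time packet has been fully sent, given a service order."""
        elapsed = 0
        for kind, _ in order:
            elapsed += SERVICE_MS
            if kind == "realtime":
                return elapsed

    fifo = sorted(packets, key=lambda p: p[1])                 # arrival (FIFO) order
    prio = sorted(packets, key=lambda p: p[0] != "realtime")   # real-time served first

    print(f"FIFO delay for the real-time packet:      {realtime_completion_ms(fifo)} ms")  # 110 ms
    print(f"Priority-queue delay for the same packet: {realtime_completion_ms(prio)} ms")  # 10 ms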

One physical constraint for latency remains: distance and the speed of light. A user located in Europe accessing a server in the US will face a round-trip time of around 50ms due simply to the physical distance involved, no matter how fast and efficient the network is. As the speed of light is constant, the only way to improve this is to reduce the distance between devices and the content and applications they are accessing. Many future applications such as cloud gaming depend on dynamically generated content that cannot be cached. Therefore the processing and storage for time-critical services also needs to be moved closer to the edge of the network.
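
A quick propagation-delay estimate makes the constraint concrete (the distance and the propagation speed in fibre are rough assumptions):

    # Physical floor on round-trip time between central Europe and the US east coast.
    distance_km = 6200              # assumed great-circle distance, e.g. Frankfurt - New York
    fibre_speed_km_per_s = 200_000  # light in optical fibre, roughly two thirds of c

    rtt_ms = 2 * distance_km / fibre_speed_km_per_s * 1000
    print(f"Best-case round-trip time: ~{rtt_ms:.0f} ms")
    # ~60 ms over this assumed path - the same order as the ~50 ms figure above
    # (the exact value depends on the path and medium), before any routing detours
    # or equipment delay are added.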

The introduction of additional radio access technologies, multiple cell layers and diverse backhaul options will increase complexity and carries the risk that network OPEX will rise substantially. This is why the Self-Optimizing Network (SON) is so important. SON not only increases operational efficiency, but also improves the network experience through higher network quality, better coverage, capacity and reliability. Extending the SON principles to a heterogeneous network environment is a challenge and an opportunity at the same time.

Fortunately, big data analytics and artificial intelligence (AI) technologies have matured in recent years, mainly driven by the need to interpret the rapidly growing amount of digital data on the Internet. Applied to communication networks, they are a great foundation for analyzing terabytes of raw network data and proposing meaningful actions. In combination with AI technologies, actionable insights can be derived even from incomplete data; for example, machine-learning techniques can find patterns in large and noisy data sets. Knowledge representation schemes provide techniques for describing and storing the network’s knowledge base, and reasoning techniques use this to propose decisions even with uncertain and incomplete information. Ultimately we believe that both big data analytics and AI technologies will help to evolve SON into what we call a “Cognitive Network”: one that is able to handle complex end-to-end optimization tasks autonomously and in real time.
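
As a hedged illustration of the kind of pattern-finding described here, the sketch below runs an off-the-shelf anomaly detector over synthetic per-cell KPIs; the data and feature choices are invented for illustration and do not represent any vendor’s actual implementation.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic per-cell KPIs: [load, drop rate, average latency in ms] - stand-ins
    # for the noisy, unlabeled measurements a real network would produce.
    normal_cells = rng.normal(loc=[0.6, 0.01, 40], scale=[0.1, 0.005, 5], size=(500, 3))
    degraded_cells = rng.normal(loc=[0.9, 0.08, 120], scale=[0.05, 0.01, 10], size=(5, 3))
    kpis = np.vstack([normal_cells, degraded_cells])

    # Unsupervised anomaly detection: flag cells whose KPI pattern deviates from the bulk.
    model = IsolationForest(contamination=0.01, random_state=0).fit(kpis)
    flags = model.predict(kpis)   # -1 = anomalous, 1 = normal

    print("Cells flagged for closer inspection:", np.where(flags == -1)[0])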

Customer Experience Management (CEM) can provide insights that will enable operators to optimize the balance of customer experience, revenues and network utilization. Cognitive Networks will help to increase the automation of CEM, enabling network performance metrics – as well as experience and business metrics – to be used to govern the insight/action control loop. This again increases operational efficiency and at the same time will be the prerequisite for delivering a truly personalized network experience for every single user.

The big data analytics and AI technologies introduced with Cognitive Networks will be the foundation for advanced customer experience metrics. The ability to deal with arbitrary amounts of data in real time will allow much more detailed sensing of network conditions and of the resulting user experience.

It also will be the foundation for large-scale correlations with other data sources such as demographics, location data, social network data, weather conditions and more. This will add a completely new dimension to user experience insights.

Cloud technologies, and the ability to provide computing and storage resources on demand, have transformed the IT industry in recent years. Virtualization of computing and storage resources has enabled the sharing of resources and thus improved their overall efficiency. Virtual cloud resources can also be scaled up and down almost instantly in response to changing demand. This flexibility has created completely new business models: instead of owning infrastructure or applications, it is possible to obtain them on demand from cloud service providers. So far this approach has mainly revolutionized IT datacenters. We believe that similar gains in efficiency and flexibility can be achieved by applying cloud technologies to telco networks. Virtualization will allow traditional vertically integrated network elements to be decoupled into hardware and software, creating network elements that consist of applications running on top of virtualized IT resources. The hardware will be standard IT hardware, hosted in datacenters and either owned by the network operator or sourced on demand from third-party cloud service providers. The network applications will run on top of these datacenters, leveraging the benefits of shared resources and flexible scaling.

User-plane network elements such as base stations will also be subject to this paradigm shift. Over time, the migration of network elements, in combination with software-defined networking, will transform today’s networks into a fully software-defined infrastructure that is highly efficient and flexible at the same time.

Efficient radio technologies, high utilization and network modernization will reduce network energy consumption, another important cost factor for operators. With the forecast traffic growth in mind, reducing network energy consumption must be a major objective. The focal point for improving network energy efficiency will be the radio access, which accounts for around 80% of all mobile network energy consumption. Ultimately, the energy efficiency that can be achieved depends on the pace of network modernization: efficiency gains materialize only when new technologies are introduced into the live network. Determining the right pace for modernization requires careful balancing of CAPEX and OPEX. We believe that energy efficiency gains can outpace traffic growth – which makes keeping network energy consumption at least flat a challenging but achievable goal.