The Future of Privacy – Proposed Way Forward

That the technology mega-trends predicted for 2020 and beyond will continue on their march seems to me to be inevitable; we’re just left debating the timeframe. But it’s the counter-trends that I believe will determine whether privacy is a winner or a loser.

Business models that put the individual in control: Today, data about people is almost exclusively controlled by organisations, whether public or private sector. People have very little control over their own personal data. If data is power, then the scales are tipped heavily in favour of corporations and governments against the individual.

But the cost and complexity of processing, storing, transferring and analysing data have fallen to the point where it is perfectly feasible for individuals to control their own data – in fact, billions of people now do this daily in rudimentary form, as they manage profiles on social media and use smartphones to capture, manipulate and share data. There is no longer any reason why the organisation should be the default point of control of personal data.

What’s more, where organisations function as the default data controllers, the economic potential for personal data is limited, because data remains locked up in corporate silos (even silos as big as those controlled by Google are still silos). The utility of much of this data cannot be unleashed because it cannot easily, legitimately or lawfully be connected with other data from other sources. This data only becomes really valuable when it can be combined with relevant data across all services that relate to a person’s life – online, retail, financial, governmental and the myriad other sources becoming available.

New entrepreneurs recognise this and are developing solutions that put the individual back in control. By making the individual “the single point of control and integration of data about their lives”[1], they are able to aggregate data about an individual from all sources and services. In doing so, they are creating an entirely new, and enormously valuable, asset class[2] that is currently diminished by being spread across the myriad data silos owned by the many hundreds of corporations and government agencies we interact with. And there is good evidence that this will enable entirely new services, and significant new economic growth and value[3].

Aside from enabling economic growth, these new models also happen to offer a market-driven solution to many of the privacy problems we are facing with the onward march of data-generating technology where the organisation is the default controller of that data. Shifting the balance of power back towards the individual must produce a positive outcome for privacy. And because it also offers the possibility of enabling innovation and economic growth, privacy is no longer trapped in one-sided conflict with forces it cannot hope to defeat. It does not require a balance, or a trade-off, between privacy and growth – it enables both.

A typical example of the sort of new service provider that is beginning to emerge is the personal data vault or bank[4]. A personal data bank provides the single point of integration for personal data under the control of the individual, and provides related services (much like a normal bank does with your money) that enable the individual to get value from their data – from eliminating repetitive form filling (providing address, delivery and payment data to online merchants), to monetising one’s own data through purchase preference and ‘intent-casting’, to enabling new, complex ‘decision support’ services[5]. In this model, the individual becomes the curator of their own personal data, able to volunteer more, or more relevant, data and manage that data to ensure it is relevant, accurate and as comprehensive as they want it to be.
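To make the control model concrete, here is a minimal sketch of the kind of interface a personal data bank could expose. It is purely illustrative – the class and method names are my own invention, not those of any actual provider – but it captures the essential inversion: one authoritative copy of the data, curated by the individual, with explicit, revocable, per-scope grants to each relying service.

```python
# Minimal sketch of the control model behind a personal data vault:
# the individual stores data once and grants scoped, revocable access
# to each service. All names here are hypothetical illustrations.

class PersonalDataVault:
    def __init__(self):
        self._data = {}      # e.g. {"address": {...}, "payment": {...}}
        self._grants = {}    # e.g. {"merchant-x": {"address"}}

    def put(self, scope, value):
        """The individual curates their own data: one authoritative copy."""
        self._data[scope] = value

    def grant(self, party, scope):
        """Consent is explicit and per-scope, and can be revoked later."""
        self._grants.setdefault(party, set()).add(scope)

    def revoke(self, party, scope):
        self._grants.get(party, set()).discard(scope)

    def get(self, party, scope):
        """A relying service only ever sees what it has been granted."""
        if scope not in self._grants.get(party, set()):
            raise PermissionError(f"{party} has no grant for '{scope}'")
        return self._data[scope]

vault = PersonalDataVault()
vault.put("address", {"street": "1 Example Road", "city": "London"})
vault.grant("merchant-x", "address")       # one-off form filling
print(vault.get("merchant-x", "address"))  # permitted: scope was granted
vault.revoke("merchant-x", "address")      # the individual stays in control
```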

Once consumers have realistic alternatives, we can expect to see an end to the ‘privacy paradox’, i.e. individuals’ actual behaviours defying their expressed attitudes, as it becomes possible, without disproportionate consequences, to act upon those attitudes by making meaningful choices.

While the emergence of personal data banks and similar business models does not in and of itself prevent organizations from collecting and exercising control over personal data regardless, it has the potential to disrupt this practice simply by being inherently more valuable. Because the value of personal data is closely connected to its relevance and currency – think of personal data as having a ‘half-life’[6] – ‘personally curated’ sources of data will have higher value simply because they represent the actual wishes and desires of an individual, rather than presumed wishes and desires based on derived data. Plus, our personal data changes all the time (think of musical tastes, favourite bars or hangouts, travel interests, and, for many people, even where they live, or the job they are doing). Maintaining personal data at the level of accuracy and currency needed for many applications to be optimally effective is an impossible task for an organization without the individual’s direct involvement. Conversely, for the individual it is practically impossible to manage and keep up-to-date and accurate their own personal data when it is spread across hundreds of organisations, each with their own interfaces and approaches[7].
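To put the ‘half-life’ metaphor on a rough quantitative footing (my own illustration, not one proposed in the cited source): if the value of a data point decays exponentially with half-life $T$, then

$$V(t) = V_0 \cdot 2^{-t/T}$$

With a half-life of six months, a preference recorded a year ago retains only a quarter of its original value, while a personally curated record that is refreshed by the individual keeps $t$ – and hence the loss of value – close to zero.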

Technology development that supports social norms and values: It’s a cliché that technology is disruptive. And too often we hear that we should accept disruption to our sense of privacy because technology has made it an outdated and redundant concept, and we can’t turn back the clock. Not infrequently the people who express these views are the very people who helped to create the technology that has brought these things to pass in the first place. This is simply a form of technological determinism.

But technology should and can develop in a way that reflects and supports social norms and values. Since technology is created by people, we are perfectly capable of creating it in ways that take account of privacy and other values. Urban architects have learnt to do this with our physical environment – concerning themselves not just with function and aesthetics, but also with broader environmental impacts, the need for building communal living spaces and creating a sense of community[8].

More significantly, technology is largely the product of private enterprise. To understand why technology has developed the way it has, or how it will develop in future, we need to understand the economic motivations and drivers of those who create it, and the business models that justify investment.

Early applications for data processing technology were focused on efficiency – replacing manual processes with automated ones. Automated data processing required data as input, but once used, the data was surplus to requirements. Personal data was relatively scarce, and even though it was recognised that data needed to flow across borders, it was not seen as a valuable asset in and of itself. But it was recognised that automated data processing had the potential to cause harm to people’s privacy, and so new codes and regulations[9] were created that essentially treated personal data like ‘toxic waste’, to be contained and made safe. Today, rather than being a mere by-product of digitisation, data is a resource defined by superabundance, and has become perhaps the most important driver of economic growth in the digital economy. This will become even more so as we move towards 2020. Organisations are therefore incentivised to create and capture personal data and exercise control over it.

In short, technology continuously causes friction with privacy because commercial organisations haven’t really tried to address the problem. While “Privacy Enhancing Technologies” have a reasonably long history, particularly within academia, they have failed to be adopted commercially or at sufficient scale[10]. For instance, cryptographic tools have not been adopted by the general user due to a lack of commercial investment in embedding them seamlessly into products that consumers want[11]. This is because, beyond mere legal compliance, privacy hasn’t featured as a strategic priority, and correspondingly there has been insufficient investment by organisations in developing the broader range of skills and expertise needed to create and deploy privacy-enhancing products or services, such as in product marketing, engineering or user experience. There simply hasn’t been a sufficient incentive to do so. And now there is precisely the opposite incentive – to generate and use data as a revenue driver in and of itself.

However, if the individual begins to become the point of control, businesses that want to leverage the vast pool of personal data assets available will need to compete with each other to provide the most attractive destination for people’s data. And if businesses are competing to provide individuals with the best ‘personal data banks’ and other tools that enable them to gain control of their own data, and ‘invest’ it on their own terms, then it will become a business imperative to find innovative and attractive approaches to issues such as individual control and permission, transparency and usability, data portability and ownership, as well as data protection, anonymisation and other counter-surveillance measures. There will be an economic incentive to encourage technology development where personal data control and privacy are functional necessities, not regulatory pipe dreams.

This in turn will create a demand by organisations for new skills from technologists and service designers that enable them to create products that embed respect for privacy-related values from the outset. Universities and colleges will seek to meet this demand by providing not just courses and modules on the fundamentals of what privacy is and why it’s important, but also qualifications in new fields like privacy engineering and privacy design.

The contrast in this respect between privacy and security couldn’t be greater. On the one hand, the security industry has been estimated to be worth $350 billion in the US alone[12]; security is a sophisticated and maturing market. The ‘privacy industry’, by contrast, is hardly recognisable at all. The reason is simple – in an organisation-centric world, where data is valuable and where corporations control data, it is in their self-interest to secure that data. Hence, supply meets demand. But in the privacy arena, there has simply been insufficient demand to stimulate a supply.

But this is changing. Something approximating a privacy marketplace is now becoming a reality[13], consisting of tools that prevent tracking[14] and other counter-surveillance services on the one hand[15], and personal data vaults and banks that enable the curation and management of one’s own data on the other[16]. Major players in the internet and communications space have also already begun to lay down their markers[17]. As this market develops, consumers will benefit from the greater control over their personal data that results.

Second generation regulation: Nevertheless, we must be wary of replacing technological utopianism with economic utopianism. These competitive forces can be harnessed, but are unlikely to create change for the good all by themselves. Regulation has an important role to play. But we need a different type of regulation from the existing data protection and privacy regulation we have today.

Existing data protection regulation emerged in the 1970s and 1980s in response to computing and data processing developments beginning in the 1960s. The underlying assumption was that data processing would always be a complex and resource-intensive activity, and hence would always be the preserve of large, well-resourced organizations. Individuals needed the protection of regulation against the impacts of automated data processing and the decisions it enabled. The regulatory frameworks were generally “command and control” style frameworks that provided rules regulating the behaviour of the large, static organization (the ‘data controller’), and were designed to protect the individual who lacked any means to exercise control themselves (the ‘data subject’).

This assumption that the organization is the natural point of control for personal data no longer holds. Yet our current data protection frameworks are built upon this assumption. Even the latest EU proposals are still essentially based on this model[18]. But with the real possibility for personal control over personal data, and business models emerging to support this, policy makers need to focus on helping this nascent market develop, rather than trying to stem the tide of technology with rules and guidelines.

What’s more, policy makers have struggled to find ways to effectively regulate technology in a way that produces commercially deployed technologies that reflect or support privacy norms and values, rather than disturbing them. While there are regulatory restrictions surrounding the use of personal data, this has predominantly resulted in legalistic methods of compliance. I would contend that these haven’t had any significant impact on the design of technologies themselves, how they generate data, or how they make that data available.

Issuing decisions and guidelines after technology has already been commercially adopted and has started to negatively impact privacy is like closing the stable door after the horse has bolted[19]. And yet while concepts like data protection or privacy ‘by design’ are constructive ideas, they are unlikely to translate into better technology design on a large scale simply because they happen to appear in a regulatory instrument[20]. What is so often needed on many aspects of privacy is creativity and innovation, and you cannot command an organization to innovate.

But you can incentivize it to innovate. If a market is encouraged to develop where individuals are placed in a controlling position at the centre of a personal data market and ecosystem, there will be economic incentives to look for better solutions to issues people care about. The role of regulation should then become less about issuing detailed rules and requirements (e.g. telling companies what to include in their privacy statements, or specifically how they should capture consent, or whether they need to seek regulatory approval to use data for certain purposes), and more about ensuring that fair and open competition develops and operates to produce beneficial privacy outcomes for individuals, while also allowing innovation and growth with data. This type of regulation has been called “second generation” regulation, a term coined by Professor Dennis Hirsch in the context of evolving environmental regulation[21]. Hirsch describes the evolution from the not-so-effective early post-war environmental “command and control” regulation to the more sophisticated and effective frameworks we see today that embrace a broad understanding of how economic incentives can stimulate innovation. Hirsch sees a parallel between regulating information privacy and environmental degradation – both require innovation if they are to achieve satisfactory and effective outcomes without stifling economic growth.

However, one very important principle that has emerged within Europe’s attempt to modernize its data protection regime is “data portability”[22]. This principle will require organisations to allow personal data to be exported to another entity at an individual’s request. While the mechanisms for achieving this are by no means trivial (look at how long it took the mobile industry to implement mobile number portability[23], which is a far simpler undertaking), this is the sort of measure that will facilitate a personal data market to develop and grow. It is both a typical “second generation” form of regulation, and an essential component of an individual taking control of their personal data.

[1] Alan Mitchell, Strategy Director, Ctrl-Shift, speaking on “The Business and Economic Case” at Personal Information Economy 2014, available at: https://www.youtube.com/watch?v=xbQh0DNzAlA&feature=youtu.be&t=5m2s (accessed 17/11/2014)
[2] World Economic Forum, “Personal Data: The emergence of a new asset class”, available at: http://www3.weforum.org/docs/WEF_ITTC_PersonalDataNewAsset_Report_2011.pdf (accessed 10/12/2014)
[3] Ctrl Shift, “Personal Information Management Services: An analysis of an emerging market”, available at: https://www.ctrl-shift.co.uk/research/product/90 (accessed 12/12/2014)
[4] Some examples are You Technology (http://you.tc/), Personal.com (https://www.personal.com/) and QIY (https://www.qiy.nl/)
[5] An example of a complex decision support service would, for instance, enable a household to recalibrate its domestic energy consumption needs. For more information, see “Personal Information Management Services: An analysis of an emerging market”, supra note 3.
[6] Martin Doyle, “The Half Life of Data”, available at: http://www.business2community.com/infographics/half-life-data-infographic-0971429 (accessed 10/12/2014)
[7] Online contact books, like Plaxo (http://www.plaxo.com/), and social networking services like Facebook (https://www.facebook.com/) and LinkedIn (https://www.linkedin.com/home) are good examples of how there has already been a shift of control to the individual. In these cases, the process of giving out contact information (e.g. via business cards) and allowing others to manage one’s contact data is replaced with the individual managing their own contact information and creating stable connections online with people they want to stay in touch with.
[8] Somewhat ironically, urban architecture is also concerned with other social issues, such as how to reduce crime in urban planning and design through ‘natural surveillance’.
[9] The 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (http://www.oecd.org/internet/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm), the 1981 Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (http://conventions.coe.int/Treaty/en/Treaties/html/108.htm), and the 1995 EU Data Protection Directive 95/46/EC (http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML)
[10] As this recent academic paper illustrates, solutions are available to many of the privacy problems highlighted with pervasive technologies - “A Roadmap for IoT/Cloud/Distributed Sensor Net Privacy Mechanisms”, available at: http://internet-science.eu/publication/1141  (accessed 15/12/2014)
[11] Justin Troutman, “People Want Safe Communications, Not Usable Cryptography”, MIT Technology Review, available at:  http://www.technologyreview.com/view/533456/people-want-safe-communications-not-usable-cryptography/ (accessed 12/12/2014)
[12] ASIS International, “Groundbreaking Study Finds US Security Industry to be Worth $350 Billion Market”, available at: https://www.asisonline.org/News/Press-Room/Press-Releases/2013/Pages/Groundbreaking-Study-Finds-U.S.-Security-Industry-to-be-$350-Billion-Market.aspx (accessed 17/12/2014)
[13] Mark Little, Ovum, “Personal Data and the Big Trust Opportunity”, available at: http://www.ovum.com/big-trust-is-big-datas-missing-dna/ (accessed 10/12/2014)
[14] For example, Ghostery, Inc. Website available at: https://www.ghostery.com/en-GB/
[15] For example, devices like the Blackphone are designed to ensure highly secure and encrypted mobile communications. Website available at: https://www.blackphone.ch/
[16] Supra note 3
[17] CNET, “Google to encrypt data on new version of Android by default”, available at: http://www.cnet.com/uk/news/google-to-encrypt-data-by-default-on-new-version-of-android/ (accessed 17/12/2014); and see supra note 14.
[18] The current draft of the EU Data Protection Regulation is available at: http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf (accessed 17/12/2014)
[19] The controversy over the European Court of Justice decision in the so-called ‘right-to-be-forgotten’ case against Google is illustrative of this, where traditional data protection rules are applied to a technology, i.e. search engines, that was never designed to ‘forget’, to ‘age’ search results, or otherwise address the privacy issues with indexing against individuals’ names. The European Commission’s Factsheet on the case is available at:  http://ec.europa.eu/justice/data-protection/files/factsheets/factsheet_data_protection_en.pdf  (accessed 12/12/2014)
[20] Article 23 (Data Protection by Design and Default) in the Draft Data Protection Regulation, available at: http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf  (accessed 17/12/2014)
[21] Dennis D. Hirsch, “Protecting the Inner Environment: What Privacy Regulation Can Learn from Environmental Law”, available at: http://users.law.capital.edu/dhirsch/articles/hirschprivacyarticle.pdf (accessed 01/12/2014)
[22] Article 18 (Right to Data Portability), available at: http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf  (accessed 17/12/2014)
[23] For a general description of mobile number portability - http://en.wikipedia.org/wiki/Mobile_number_portability (accessed 15/12/2014)

The Future of Travel – The Global Challenge

The travel and tourism industry is often described as the largest industry in the world. It accounts for 9% of world GDP, $1.3tn in exports and 6% of world trade across multiple sectors, including transport (aviation, rail, road and sea), accommodation, activities, food and drink. It is estimated that it creates about 120 million direct and 125 million indirect jobs and is closely linked to other sectors in domestic and international markets, such as the manufacturing industry, agriculture and the service sector.  In turn, these create broad multiplier effects for local and national economies.

In 2012, for the first time in history, the number of tourists crossing international borders in a single year reached over one billion. While just over 50% of these arrivals were from Europe, much of the demand is being fuelled by rising household incomes in emerging economies – not only the Brics (Brazil, Russia, India and China) but increasingly across the rest of south-east Asia and Latin America. In addition, another five to six billion people travel within their own countries every year. Technological innovations are fuelling this growth, including developments in low-cost air travel and the widespread use of increasingly sophisticated applications that aid researching and booking travel online.

Mass tourism was one of the great game-changers of the 20th century. Thomson Holidays’ ‘Sustainable Holiday Futures’ report explains: “Cheap flights meant travel was no longer the preserve of a wealthy elite, enabling millions of people to travel beyond their border and dramatically widening the horizons, tastes and expectations of an entire generation in the developed world.” Today I see the mobilisation of the middle classes in the Indian subcontinent, Asia and South America as the game-changer for the early part of the 21st century.

In general people like travelling, which is probably why the industry has remained resilient, adapting in the face of a range of challenges such as armed conflict (particularly the Gulf Wars), disease (SARS, H1N1, foot-and-mouth and, more recently, Ebola), earthquakes and other natural disasters. Looking ahead, the outlook is positive; for example, international tourist arrivals worldwide are expected to increase by c. 3.3% a year to reach 1.8 billion by 2030, with the majority of market share tipping toward the emerging economies over this period.

Despite the positive trajectory, the Thomson ‘Sustainable Holiday Futures’ report warns that the challenges for the industry are formidable: “The dream of affordable travel for all is being obscured by climate change, future long-term projections of rising fuel prices and a growing awareness among consumers that sustainability and responsible travel are set to have an impact on how we understand, embrace and manage our holiday plans.” Given this, I see broadly two main challenges ahead for the development of a robust travel and tourism industry: how to continue to grow to deliver jobs, exports, economic growth and development, and, in doing so, how to manage that growth sustainably.

The Future of Water – Options and Possibilities

The UN has sagely noted that “water is the primary medium through which climate change impacts will be felt by humans, society and the environment”, and accordingly climate change will necessitate improvements in the water resilience systems of cities across the globe. Increasingly, cities will have to focus on local water sourcing, reuse and recycling in order to sustain their ever-expanding populations. There are multiple ways in which efficiency can be improved, not least through significant investment in green infrastructure, the adoption of smart technology and widespread public education, which will help to manage water demand through a broader understanding of the natural water cycle. Water is essential to life; we need to remind ourselves of this constantly and take action.

Many countries are currently working to maintain and improve the quality of their water sources. About 96% of the earth’s total water supply is found in the oceans, and there is broad agreement that extensive use of desalination will be required to meet the needs of a growing world population. Worldwide, desalination plants produce over 323 million cubic metres of fresh water per day; however, energy costs are currently the principal barrier to greater use. Singapore has innovative water technology and aims, despite its size and population density, to become fully water self-sufficient by 2061. Plans include tripling its desalinated water supply by 2030, large-scale collection of rainwater, and water recycling which, alongside standard treatment procedures, uses microfiltration, reverse osmosis and UV treatment to deliver potable water to its citizens. In short, it is converting the city into a catchment and focusing on source diversity.

Elsewhere, efficiencies will be improved by the use of intelligent robots, which will play a greater role in the inspection of infrastructure. New materials such as graphene that are lighter, stronger, smarter and greener will also become more popular, replacing traditional materials such as stainless steel piping.

Growing concern for the environment and for public health means that water companies will be held to greater account for their environmental impact and water quality. A stronger emphasis on green infrastructure will support a trend for companies to transform from providing base utilities to creating a system of amenities that support the water cycle. An example can be found at the Illinois Institute of Technology (IIT): rain gardens have been reutilised as communal meeting spaces, through-ways turned into permeable walkways, and three acres of new native plant communities planted with underground cisterns that collect rainwater for future non-potable reuse. Once all the changes are implemented, IIT predicts a 70–80% reduction in run-off into Chicago’s sewer system, while the collected non-potable water becomes available for irrigation. Expect this repurposing of public spaces for multi-functionality, serving both amenity and wider sustainability purposes, to be widely adopted.

Alongside making improvements to the infrastructure, there is a pressing need to do more with less water. Smart technology and big data will help; changing public behaviour, however, is a huge challenge. Although there is widespread understanding that rising consumption of raw materials is both intensifying resource scarcity and increasing competition, most people, certainly in the developed world, live materialistic lifestyles resulting in high levels of waste. In Australia, for example, around 20 million tonnes of waste is thrown away each year, valued at AUD10.5bn. Digital lifestyles can increasingly link consumer behaviour to consumption, and growing connectivity through the Internet of Things will make it possible to monitor the consumption and cost of water in real time, allowing consumers to understand their impacts and take action.
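As a concrete sketch of the feedback loop described here – with the tariff and baseline figures invented purely for illustration – a connected water meter could turn raw readings into immediate, actionable feedback:

```python
# Illustrative sketch (hypothetical values) of real-time water feedback:
# a connected meter reports consumption, and the consumer sees cost and
# deviations from a baseline immediately.

TARIFF_PER_LITRE = 0.002   # assumed tariff, currency units per litre
DAILY_BASELINE_L = 150     # assumed typical per-person daily use, litres

def daily_report(readings_litres):
    """readings_litres: list of hourly meter readings (litres) for one day."""
    total = sum(readings_litres)
    cost = total * TARIFF_PER_LITRE
    status = "above baseline" if total > DAILY_BASELINE_L else "within baseline"
    return f"Used {total:.0f} L today ({status}), cost {cost:.2f}"

# Example: one day of hourly readings from the meter
print(daily_report([5, 3, 0, 0, 12, 20, 9, 7, 4, 30, 25, 10]))
```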

Data analytics can help build understanding of how to use the water cycle to respond to the challenges of climate change. It can also lead to increased scrutiny of water utilities and a better understanding of cost. Companies will therefore be able to integrate the true cost of water into their decision-making. In addition, the availability of data provides an opportunity to educate customers about consumption. Publicity campaigns and a growing sense of urgency will nudge consumers to reduce consumption, and should be used in partnership with economic levers that recognise the true value of water.

Growing populations and changes in diet mean that we need to produce more food, and water is a fundamental part of this process. In Australia, for example, the agricultural sector accounts for around 65% of total water consumption. This could be greatly reduced if we could change consumer behaviour: it is estimated that Australians throw away AUD5.3bn of food waste every year, and this wasted food also represents wasted water. There is a real need to change this approach, and developments in this sector will continue to have tangible knock-on effects for the water supply industry and the natural environment from which this water is sourced.

Science will also have a key role in reducing the amount of water we use. Nanotechnology and biotechnology are potential game-changers for the water industry, enabling breakthrough products and technologies to tackle pressing global challenges such as reducing environmental footprints, using less and cleaner energy, and decreasing water usage and waste generation. For example, microorganisms are now being used to treat water that has been contaminated by hazardous materials. The global market for nanostructured products used in water treatment was worth an estimated USD1.4bn in 2010 and is expected to rise to USD2.1bn in 2015.[1] Initial success in this area has also raised the possibility of the utility as a self-healing ecosystem.

Greater efficiency is the driving force for manufacturing companies, where energy and water can account for as much as 50% of total manufacturing cost. In the future, expect more green manufacturing and increased co-operation as companies forge alliances across traditional boundaries, for example to share common costs. In the water industry this will manifest itself in knowledge sharing and contributions to joint research and development across catchment boundaries. Through using resources more efficiently, countries could also become more active trading partners; this would allow for more equal redistribution of water amongst users. This could include a water balance concept, similar to carbon emissions reduction strategies, where water saved in one country offsets additional water use in another.

Looking ahead, users are likely to have to pay the real cost of infrastructure. One short-term option is the financial recycling of assets and capital, where old assets are sold or leased to fund the new. In the longer term, however, we will have to pay the true value of key resources. This shift could also lead to greater application of the circular economy, which will help stretch resources through end-of-life recycling and reuse. More awareness will lead to increased scrutiny of water utilities and the pricing of services, as the widespread availability of data provides the opportunity to educate customers about consumption and managing resource use. Looking through an international lens, water trading would allow for the efficient redistribution of water amongst users, so countries could become active trading partners. As the amount of water used in agriculture in arid regions is two to three times higher than in rain-fed regions, water trading could help save water on a global scale.

Once efficiencies and improvements are made, consideration should be given to the most cost-effective way to provide access to basic services. The fixed nature of water supply infrastructure and its history as an essential government-supplied service give rise to natural monopolies within supply areas. Governments need to ensure that pricing policy appropriately balances the essential need for water, the impacts on consumers (particularly those on lower incomes) and the requirement for suppliers to remain financially viable. To do this, there should be better integration between urban water planning and urban development planning, with consideration given to limiting green-field development.

Recognising innovation opportunities for the future, more and more companies are tapping into the public’s intellectual capital by crowdsourcing product ideas and solutions. In exchange, they are giving creative consumers a direct say in what gets developed, designed or manufactured. Crowd-funding had added around 270,000 jobs and injected more than US$65bn into the global economy by the end of 2014, with expected industry growth of 92%.

[1] Nanotechnology Now, “Nanotechnology in Water Treatment”, 2012, available at: http://www.nanotech-now.com/news.cgi?story_id=45894

The Future of Privacy – The Global Challenge

The right to privacy finds its expression in all the major international human rights instruments. They were all, without exception, drafted and agreed in times very different from those we find ourselves in today. Even as we contemplate the years ahead, there is almost universal acknowledgement of the continuing value and relevance of these instruments and the rights they enshrine. Yet the subject of privacy has never been more in flux, facing a seemingly endless barrage of pressures. Privacy is becoming one of the most vexing public issues of our time, and will remain so in 2020.

Contemporary concerns and debates about privacy are essentially debates about technology and the role and impact of technology on our lives and societies. Practically every mega-trend in the world of technology is creating tensions for privacy, personal freedom and autonomy – ubiquitous connectivity, big data, the cloud, wearable tech, artificial intelligence, the internet of everything, connected health, drones – the list goes on.

It’s no longer just a case of leaving digital footprints from our movements around a digital landscape. As the size of computing continues to shrink to nanotech levels, and the cost continues to fall, technology will become embedded in both the physical world and our physical bodies. We will be living in a world where we are ‘surrounded by computational intelligence’[1].

Technology is becoming invisible. And its unobtrusiveness will aid its pervasiveness – there are already estimated to be 16 billion connected objects today, and this is predicted to reach 40 billion by 2020[2]. And this pervasive connected technology will create ever more data. IDC estimates that by 2020 people and connected objects will generate 40 trillion gigabytes (40 zettabytes) of data that will have an impact on daily life in one way or another[3]. This data will make known about us things that were previously unknown or unknowable (including to ourselves). And in doing that, it will enable actions and decisions to be taken about us that will have profound consequences far beyond the display of adverts on our variously sized screens, or personalised pricing based on profiles of our income and propensity to pay[4].

Evgeny Morozov, the author[5] and researcher, gave an example of this recently in his talk at the Observer Ideas festival 2014 in London[6]. In the Philippines, sensors placed in public toilets emit an alarm if someone uses one of the stalls and then tries to leave without using the soap dispenser; the alarm can only be turned off by using the dispenser. The sensor thereby has a deliberately regulating effect on the behaviour of users, in this case encouraging hand washing. This is just a logical extension of the seat belt alarms fitted to most new cars built today, or the use of speed cameras – the purpose in both cases being to use technology to regulate our behaviour and thereby reduce injury and the cost to health services of car accidents.

Let’s stick with cars for a moment. The installation of a wide range of new sensors in vehicles is already transforming other aspects of motoring, such as insurance. Usage-based insurance schemes utilise sensors that collect data on location, speed, braking and acceleration to determine the risk profile of the driver, and consequently their insurance premium. The other touted benefit is that such technology acts to discourage risky driving behaviours. In return, we subject ourselves to a degree of surveillance. It will not be long before we see the same technology used for other ostensibly worthy purposes, e.g. identifying if you are too tired to drive and automatically disabling the engine.
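To make this concrete, here is a deliberately simplified sketch of how a usage-based scheme could turn trip telemetry into a premium adjustment; the telemetry fields, weights and thresholds are invented for illustration and are not drawn from any actual insurer:

```python
# Hypothetical sketch of usage-based insurance pricing: per-trip telemetry
# summaries are weighted into a risk score, which scales a base premium.
# All weights and thresholds are invented for illustration.

def risk_score(trips):
    """trips: list of dicts with per-trip telemetry summaries."""
    score = 0.0
    for t in trips:
        score += 2.0 * t["hard_brakes"]          # sudden braking events
        score += 1.5 * t["harsh_accelerations"]  # aggressive acceleration
        score += 3.0 * t["speeding_minutes"]     # minutes above the limit
        score += 0.5 * t["night_km"]             # higher-risk night driving
    return score / max(len(trips), 1)            # average per trip

def premium(base, trips):
    """Scale a base premium by observed driving behaviour, within bounds."""
    s = risk_score(trips)
    multiplier = min(1.5, max(0.7, 1.0 + (s - 10) / 100))
    return base * multiplier

trips = [{"hard_brakes": 1, "harsh_accelerations": 2,
          "speeding_minutes": 0, "night_km": 5}]
print(f"Adjusted premium: {premium(500.0, trips):.2f}")
```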

Of course, it might be argued that none of this compels us to allow sensors into our cars, homes and other parts of our lives, and the collection of data about us – we are not compelled to use usage based insurance or drive “intelligent” cars, and so we have a choice. But if refusing to allow the collection of data by sensors begins to become a costly decision (e.g. increased car, home or health insurance[7] premiums), it’s a choice that is easier to make for those who can afford it. And, of course, once sensors and data-generating technologies become embedded in products as standard, there will come a point when there are few realistic alternatives.

This rise of technology that not only observes, but intervenes (I’ll term it “bossy tech”), is a consequence of placing sensing technology in more and more places where these ‘interventions’ can be automated, based upon the exponential increase in data sources that can be analysed in real time with intelligent computing. And as bossy tech gets a lot smarter it will no doubt get bossier, as public authorities acquiesce in the notion that technology can regulate our behaviour far more efficiently than traditional enforcement methods – why waste money on policing public spaces if cameras and audio sensors can detect potentially unsociable behaviours, use facial and voice recognition to identify the individuals involved, and then order them to stop or else face the consequences?

The value of digital identity, i.e. the sum of all digitally available information about an individual, has been estimated to be worth €1 trillion to the European economy by 2020[8]. The internet of things is predicted to generate a value-add of $1.9 trillion globally by 2020[9]. Much of that value is not likely to be from the ‘things’, but from data derived about those things that promise to transform every sector, bringing efficiencies and cost savings, but also entirely new service possibilities[10]. Whatever the figures, there is undoubtedly a huge economic incentive to generate and collect data from whatever sources become available. As more data from more things becomes available, we can expect to see a data “land grab” by organisations.

The control of data provides organisations with valuable insights and enables influence over purchasing decisions and other behaviours. Increasingly, therefore, data is power, economic or otherwise. There is already an undoubted asymmetry of power between organisations and individuals: organisations have an abundance of information about consumers and analytics tools to interrogate it, while consumers suffer information scarcity and possess few tools to make any sense of their own data[11]. And this appears to be getting worse, according to Sir Tim Berners-Lee[12]. The 2014–15 Web Index, an annual report published by the World Wide Web Foundation measuring the Web’s contribution to social, economic and political progress, reveals that the web is becoming less free and more unequal.

In the absence of any countervailing forces, the current technology mega-trends look set to create further asymmetries in power resulting in less privacy for individuals in 2020.

[1] Brian David Johnson, Intel, Wired UK retail talk, available at: http://www.wired.co.uk/news/archive/2014-11/24/brian-david-johnson-intel (accessed 10/12/2014)
[2] ABI Research, “The Internet of Things Will Drive Wireless Connected Devices to 40.9 Billion in 2020”, available at:  https://www.abiresearch.com/press/the-internet-of-things-will-drive-wireless-connect (accessed 10/12/2014)
[3] IDC white paper, “The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things”, April 2014, available at: http://idcdocserv.com/1678 (accessed 10/12/2014)
[4] Blogger Alistair Croll declares that “personalization is another word for discrimination” in his post titled “Big data is our generation’s civil rights issue”, available at: http://solveforinteresting.com/big-data-is-our-generations-civil-rights-issue-and-we-dont-know-it/ (accessed 23/11/2014)
[5] Evgeny Morozov homepage, available at:  http://www.evgenymorozov.com/ (accessed 01/12/2014)
[6] Observer Ideas - A Festival for the Mind, 12 October 2014. For an introduction: http://www.theguardian.com/reader-events/2014/jul/18/observer-ideas-2014-an-intoduction (accessed 17/12/2014)
[7] Barclay Ballad, “Now you can get financial reward for your personal fitness data”, 9 December 2014, available at: http://www.itproportal.com/2014/12/09/health-insurance-firm-offering-240-year-personal-data/  (accessed 17/12/2014)
[8] Liberty Global, “The Value of Our Digital Identity”, available at: http://www.libertyglobal.com/PDF/public-policy/The-Value-of-Our-Digital-Identity.pdf (accessed 10/12/2014)
[9] Gartner, Inc. newsroom, “Gartner Says the Internet of Things Installed Base Will Grow to 26 Billion Units By 2020”, available at: http://www.gartner.com/newsroom/id/2636073 (accessed 09/12/2014)
[10] Harbour Research, “Where Will Value Be Created In The Internet Of Things & People?”, available at: http://harborresearch.com/where-will-value-be-created-in-the-internet-of-things-people/ (accessed 09/12/2014)
[11] Mark Little, Ovum, “Personal Data and the Big Trust Opportunity”, available at: http://www.ovum.com/big-trust-is-big-datas-missing-dna/ (accessed 10/11/2014)
[12] World Wide Web Foundation, “Recognise the Internet as a human right, says Sir Tim Berners-Lee as he launches annual Web Index”, available at: http://webfoundation.org/2014/12/recognise-the-internet-as-a-human-right-says-sir-tim-berners-lee-as-he-launches-annual-web-index/ (accessed 17/12/2014)

The Future of Privacy – Options and Possibilities

There are plenty of predictions about technology – from the utopian visions of a bright new hyper-efficient world where robots free humanity from drudgery, to doom-laden predictions of pervasive surveillance and the demise of personal autonomy at the hands of governments and corporations. But there are a number of counter-trends emerging that present their own narrative about how the future will play out.

Privacy is a public issue: The public’s perception of the threats to privacy, personal freedom and autonomy – whether from corporations or governments – is growing. Privacy has already moved beyond being a niche, specialist concern to become a mainstream public issue. It seems that almost weekly new research is released revealing increasing public concern about privacy and declining levels of trust in organisations’ handling of people’s personal data[1].

In addition, a lesson the public has learnt thanks to the revelations from Edward Snowden is that data controlled by organisations will always be susceptible to access by governments using extensive legal powers of disclosure and surveillance. This is becoming a liability for communications and technology companies which, under pressure from their users, are beginning to take measures to put some control back into users’ hands[2].

This growing consumer and citizen awareness and distrust looks set to accelerate and will increasingly become a factor in decision making for ordinary people – decisions about the products we use or abandon, the brands we associate with, the political leaders we elect. And as data insights become increasingly actioned by bossy tech, this will exacerbate the trend – behavioural observations, and the interventions that result, will increasingly be seen as unwarranted intrusions and restrictions on personal freedom and autonomy.

Digital activism will expand the digital commons: Consumers are taking matters into their own hands. A 2013 study from the Pew Research Internet project found that “86% of internet users have taken steps online to remove or mask their digital footprints—ranging from clearing cookies to encrypting their email, from avoiding using their name to using virtual networks that mask their internet protocol (IP) address”[3].

The plummeting cost and complexity, and increased ‘consumerisation’, of computing, processing and storage means that activists are now able to harness technology for themselves, without the aid of corporations and governments. The ‘digital commons’[4] will continue to grow, empowering more and more citizens and consumers to take matters into their own hands, such as deploying end-to-end encryption, anonymizers[5], and by “watching the watchers”[6].

Business model disruption is inevitable: The default internet business model – advertising – is showing some signs of strain, and even the biggest players such as Google are openly exploring new models[7]. Yet the value in personal data is so great, and the levels of public mistrust in organisations’ handling and use of personal data are so high, that it is inconceivable to me that entrepreneurs will not make a serious effort to exploit this disparity. What we are already witnessing is the emergence of new business models that threaten to disrupt not just the default internet business model, but more broadly the assumption that the organisation is the natural and legitimate point of control and ownership of personal data. Instead, new disruptive providers are seeking to put the individual in control of their personal data[8]. In the process, they are seeking to disintermediate data-intensive businesses from their existing sources of data.

Regulation will get tougher: Policy makers will act to toughen laws, even though they move at geological speeds compared to the rate of technology development.

New laws and regulations are being promulgated around the world, many following the European model[9]. And Europe is on a journey to update and toughen its data protection laws[10]. The EU proposals will increase fines, place tougher requirements on organisations for obtaining consent, and create a new ‘data protection by design’ obligation. The fines alone will focus attention, forcing organisations to devote more time and resources to compliance.

[1] The Royal Statistical Society, “New research finds data trust deficit with lessons for policymakers”, available at: https://www.ipsos-mori.com/researchpublications/researcharchive/3422/New-research-finds-data-trust-deficit-with-lessons-for-policymakers.aspx (accessed 10/12/2014)
[2] Apple, Inc, “A message from Tim Cook about Apple’s commitment to your privacy”, available at: https://www.apple.com/uk/privacy/ (accessed 10/12/2014)
[3] Pew Internet Research, “Anonymity, Privacy and Security Online”, 5th September 2013, available at: http://www.pewinternet.org/2013/09/05/anonymity-privacy-and-security-online/ (accessed 12/12/2014)
[4] In her 2012 book “Consent of the Networked”, Rebecca MacKinnon describes, in her chapter on the rise of the digital commons, how activist individuals play a key role in influencing the shape of technologies and the balance of power. Summary available at: http://consentofthenetworked.com/about/
[5] For example, The Onion Router (TOR). See the Wikipedia entry available at: http://en.wikipedia.org/wiki/Tor_%28anonymity_network%29
[6] An example is the TrackMap project, whose aim is to show where data travels when people visit their favourite news websites through visualization, available at: https://github.com/vecna/trackmap (accessed 15/12/2014)
[7] CITEworld, “Google for business: Now 100 percent ad-free”, 16th May 2014, available at: http://www.citeworld.com/article/2156043/cloud-computing/gmail-ad-free.html (accessed 10/12/2014)
[8] Ctrl-Shift, “New market for ‘empowering’ personal data services will transform relationships between customers and brands”, 20th March 2014, available at: https://www.ctrl-shift.co.uk/news/2014/03/20/new-market-for-empowering-personal-data-services-will-transform-relationships-between-customers-and-brands/  (accessed 10/12/2014)
[9] For example, in South Africa the Protection of Personal Information Act 4 of 2013 (http://www.saflii.org/za/journals/DEREBUS/2014/84.html), in Ghana the Data Protection Act 2012 (http://mobile.ghanaweb.com/GhanaHomePage/NewsArchive/artikel.php?ID=229717) and in India proposals in the form of a Privacy Bill (http://www.dataguidance.com/dataguidance_privacy_this_week.asp?id=2233)
[10] European Commission Data Protection newsroom, available at:  http://ec.europa.eu/justice/newsroom/data-protection/news/120125_en.htm

The Future of Privacy – Impacts and Implications

Threats to privacy from new trends and developments in technology look set to continue in 2020 and beyond. But the impact of the counter-trends, and the effect they may have in constraining or shaping technology, has received less attention – perhaps with the exception of law and regulation. As someone who has spent most of their professional life helping large organisations comply with law and regulation, I am often surprised at the level of faith placed in law or regulation alone to deliver acceptable outcomes to complex problems like the impact of technology on our privacy.

Law and regulation is very effective at creating momentum and movement. By creating fear in board rooms, it can galvanise organisations to focus on compliance. But this does not guarantee that the things organisations do as a result will be pleasing to all concerned, even if they appear to meet the requirements of the law, and organisations can claim to be fully compliant. This is the problem we have faced to date with technology and privacy – there is no lack of law, legal opinion and guidance; yet there is continuing dissatisfaction with how things are, i.e. the outcomes we are left with.

This is because very often policy makers do not know what those outcomes should be and it would be a mistake for the law to try to determine them. While we are capable of identifying what we don’t like, it’s much harder to say what we do like – or more to the point, how we would like the future to actually look.

It’s therefore a case of sticks and carrots. Hit the donkey with a stick and the donkey will move. But it’s unlikely to go in the direction we want it to. Dangle a carrot under its nose in the direction we do want it to go, and it will generally follow the carrot. Law and regulation is good at creating impetus and momentum, but it won’t guarantee that we get to a desirable destination. To do that, we need incentives. Fortunately, the green shoots of these incentives can be found among the other counter-trends.

The possibility that individuals can now begin to take control of their own personal data is upending long established norms about the control of personal data – the assumption that the organisation is the default point of control. This is heralding the emergence of new entrepreneurs that see an opportunity to strike a new deal with consumers, offering them control. But not control simply for its own sake (worthy though that may be); rather control as a way to exercise greater autonomy over many aspects of their lives that today are made too complex and too difficult by data being controlled elsewhere.  And in doing so, there is the potential to unlock enormous economic value from personal data.

This potential for economic disruption to come to the aid of privacy (if not its complete rescue) by shifting power over data from the organisation to the individual is one of the most significant trends emerging as we look to 2020. It needs to be harnessed if we want to shape the development of technology to preserve the rights enshrined in all the major human rights instruments.

The 19th August 2014 was the 25th anniversary of the Web. This year, 2015, is the 800th anniversary of one of the most important legal developments in history – the Magna Carta. The Magna Carta was all about a shift in power – from the English King to the nobles – but in defining the principles for how power is distributed and constrained, it laid down the foundations of England’s legal system and has influenced legal systems across the world. In celebration of the 25th anniversary of the web and the 800th anniversary of the Magna Carta, Sir Tim Berners-Lee has called for the creation of a ‘Magna Carta for the Web’ in 2015[1], and has declared that we need to “hardwire the rights to privacy, freedom of expression, affordable access and net neutrality into the rules of the game”[2].

This is a fitting aspiration. But just as the Magna Carta was a response to the shift of power from King to nobles, hardwiring the web in order to protect privacy will require a shift of power over personal data from the organisation to the individual.

[1] “Tim Berners-Lee calls for internet bill of rights to ensure greater privacy”, The Guardian, available at: http://www.theguardian.com/technology/2014/sep/28/tim-berners-lee-internet-bill-of-rights-greater-privacy (accessed 17/12/2014)
[2] Supra note 12

The Future of Connectivity – Options and Possibilities

Demand will continue to grow exponentially in the next decade: Demand for mobile broadband is closely related to the evolution of device and screen technologies, one of the fastest evolving areas in the Information and Communication Technology (ICT) industry. In 2012, the Retina display of an iPad already had nearly twice as many pixels to fill with content as a Full-HD television. New device form factors such as Google’s glasses, another hot topic introduced in 2012, continue to drive this evolution, and ultimately only the human eye will set the limits for the amount of digital content that can be consumed by a mobile device. And these devices will not only consume content – ubiquitous integrated cameras with high resolution and frame rate are producing exabytes of digital content to be distributed via networks.

Enabled by these powerful new devices, the app ecosystem continues to fuel demand for mobile data by continuously inventing new categories of applications that test the limits of the network. It started with mobile web browsing in 2007, and by 2012 video already accounted for more than 50% of mobile data traffic. And by 2020, people might demand mobile networks that allow them to broadcast live video feeds from their glasses to thousands of other users in real time.

Many of the apps will be cloud based or rely on content stored in the cloud. IDC estimates in their digital universe study that by 2020 30% of all digital information will be stored in the cloud – and thus be accessed through networks.

An even broader range of use cases for networks will develop as communication technologies and applications proliferate into all industries and billions of machines and objects get connected. They will go far beyond the classical examples of the smart grid or home automation. Just imagine the potential – but also the requirements – that remotely controlled unmanned vehicles would bring to mobile broadband networks.

In summary, we believe that device evolution, cloud-based application innovation and the proliferation of communication technologies into all industries will ensure that the exponential growth in demand for mobile broadband we have seen in the last few years will continue in the next decade.

The Future of Connectivity – Proposed Way Forward

Having understood what drives demand, we can define the requirements for future mobile networks. As stated earlier, one gigabyte of data traffic per user per day is about 60 times the average data traffic seen in mature mobile operator networks today. On top of this, the growth in mobile broadband penetration and the surge of connected objects will lead to around ten times more endpoints attached to mobile operator networks than today. To prepare for this, we need to find ways to push the capacity and data rates of mobile networks radically into new dimensions.
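To put the stated multiple into perspective, a back-of-the-envelope calculation using only the figures quoted above:

$$\frac{1\,\text{GB/day}}{60} \approx 17\,\text{MB/day} \approx 0.5\,\text{GB/month}$$

In other words, the target of one gigabyte per user per day corresponds to roughly 30 GB per user per month, against an implied average of about half a gigabyte per month today.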

Yet being able to deal with this traffic growth is just one aspect. An increasing number of real-time apps will test the performance of the networks. To support them with a good user experience, we need to find ways to reduce the end-to-end latency imposed by the network to milliseconds. Tactile (touch/response) and machine-to-machine interactions in particular demand latencies that can be as low as single-digit milliseconds.

To ensure mobile broadband remains affordable even while supporting the capacity and real-time requirements described above, we also need to radically reduce the network Total Cost of Ownership (TCO) per Gigabyte of traffic. We believe one important lever will be to automate all tasks of network and service operation by teaching networks to be self-aware, self-adapting and intelligent. This will help to reduce CAPEX/IMPEX for network installation as well as OPEX for network and service management. Beyond lower TCO, self-aware and intelligent networks will be able to understand their users’ needs and act automatically to deliver the best personalized experience.

To further reduce cost per GB, we need to share network resources, both within a single operator network and between operators. Sharing can include physical infrastructure, software platforms, sites, spectrum assets or even the network as a whole. We must also find ways to increase energy efficiency. Beyond their environmental impact, energy costs today account for up to 10% of an operator’s network OPEX in mature markets and up to 50% in emerging markets, and they have been rising steadily in recent years.

The most powerful way to deal with this cost pressure will, of course, be to identify new revenue streams. Are end customers and termination fees really the sole revenue sources for operators, or will technologies enable new business models that allow operators to better monetize all their assets?

Ultimately, we need to admit that, given the fast pace of change in the industry, it is simply not possible to predict all the requirements future networks will face. There will be many use cases that are simply unknown today. To cope with this uncertainty, flexibility must be a key requirement as well.

The Future of Connectivity – Impacts and Implications

More spectrum, high spectral efficiency and small cells will provide up to 1000 times more capacity in wireless access. In the world of wireless, Shannon’s law is the one fundamental rule that defines the physical limits for the amount of data that can be transferred across a single wireless link: capacity is determined by the available bandwidth and the signal-to-noise ratio – which in a cellular system is typically constrained by interference.
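
For concreteness, a minimal sketch of the Shannon-Hartley limit; the 20 MHz carrier width and 10 dB SINR are illustrative values, not claims about any particular network:

```python
import math

def shannon_capacity(bandwidth_hz, sinr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + SINR), in bit/s."""
    return bandwidth_hz * math.log2(1 + sinr_linear)

# Illustrative values: a 20 MHz carrier at 10 dB SINR.
sinr = 10 ** (10 / 10)  # 10 dB -> linear factor of 10
capacity = shannon_capacity(20e6, sinr)
print(f"{capacity / 1e6:.1f} Mbit/s")  # ~69.2 Mbit/s
```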

Therefore, the first lever to increase capacity will be simply to utilize more spectrum for mobile broadband. In total, the spectrum demanded for mobile broadband amounts to more than 1,100 MHz, and a large amount (about 500 MHz) of unlicensed spectrum at 2.4 GHz and 5 GHz can provide additional capacity for mobile data. Of course, reaching agreement on spectrum usage requires significant alignment across the industry and is a rather time-consuming process. It is therefore also necessary to look at complementary approaches such as the Authorized Shared Access (ASA) licensing model, which allows fast and flexible sharing of underutilized spectrum currently assigned to other spectrum-holders such as broadcasters, public safety, defence or aeronautical services.

A key challenge associated with more spectrum is enabling base stations and devices to utilize this larger, and potentially fragmented, spectrum. Here, technologies such as intra- and inter-band Carrier Aggregation will be essential to make efficient use of fragmented spectrum.
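
A sketch of why aggregation matters: the peak rate a device can reach is, at best, the sum of the Shannon limits of its component carriers. The carrier widths and SINR values below are invented for illustration:

```python
import math

def carrier_capacity(bandwidth_hz, sinr_db):
    """Shannon limit of a single component carrier, in bit/s."""
    return bandwidth_hz * math.log2(1 + 10 ** (sinr_db / 10))

# Invented fragmented-spectrum example: three component carriers in
# different bands, each with its own width and radio conditions.
component_carriers = [
    (10e6, 15),  # 10 MHz carrier, 15 dB SINR
    (20e6, 12),  # 20 MHz carrier, 12 dB SINR
    (20e6,  8),  # 20 MHz carrier,  8 dB SINR
]

total = sum(carrier_capacity(bw, sinr) for bw, sinr in component_carriers)
print(f"Aggregated peak: {total / 1e6:.0f} Mbit/s")  # ~189 Mbit/s
```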

The second lever for more capacity will be to address the interference part of Shannon’s equation. This can be achieved, for example, through beamforming techniques, which concentrate the transmit power into smaller spatial regions. Combining multiple spatial paths through Coordinated Multipoint Transmission (CoMP) can further increase the capacity available to individual users. We believe that with the sum of these techniques, the spectral efficiency of the system can be increased by up to 10 times compared to HSPA today.
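
The leverage of interference reduction can be read straight off the Shannon limit. In this illustrative comparison, lifting a user’s SINR from 0 dB to 20 dB multiplies the achievable spectral efficiency by nearly seven:

```python
import math

def bits_per_hz(sinr_db):
    """Spectral-efficiency upper bound from the Shannon limit, bit/s/Hz."""
    return math.log2(1 + 10 ** (sinr_db / 10))

# A cell-edge user drowning in interference vs. the same user reached by
# a focused beam (both SINR values are illustrative).
print(f"at  0 dB SINR: {bits_per_hz(0):.2f} bit/s/Hz")   # 1.00
print(f"at 20 dB SINR: {bits_per_hz(20):.2f} bit/s/Hz")  # 6.66
```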

Advanced technologies and more spectrum will allow capacity to grow through upgrades to existing macro sites for some time yet. However, a point will be reached when macro upgrades hit their limits. By 2020 we believe mobile networks will consist of 10–100x more cells, forming a heterogeneous network of Macro, Micro, Pico and Femto cells. Part of this will also be non-cellular technologies such as Wi-Fi, which need to be seamlessly integrated with cellular technologies for an optimal user experience.
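
How such a 1000x target could factor into the three levers is easiest to see as a simple multiplication; the split below is illustrative, not a forecast:

```python
# One way the "up to 1000x" capacity target could decompose across the
# three levers discussed above (the split is illustrative, not a forecast).
more_spectrum       = 3   # ~3x more usable spectrum
spectral_efficiency = 10  # up to 10x vs. HSPA, per the text
cell_densification  = 33  # tens of times more (small) cells

print(f"combined: ~{more_spectrum * spectral_efficiency * cell_densification}x")
```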

Although the industry has not yet defined what 5G will look like and the discussions are only just starting, we believe that flexible spectrum usage, more base stations and high spectral efficiency will be its key cornerstones.

The capacity demand and the multitude of deployment scenarios for heterogeneous radio access networks will make mobile backhaul key to network evolution in the next decade. The backhaul requirements of future base stations will easily exceed the practical limits of copper lines, so from a pure technology perspective fiber seems the solution of choice: it provides virtually unlimited bandwidth and can be used to connect macro cells in rural areas and some of the small cells in urban areas. However, high deployment costs will in many cases rule out dedicated fiber deployments just to connect base stations. Given the range of deployment scenarios for small cells, from outdoor lamp-post installations to indoor sites, we believe a wide range of wireless backhaul options will coexist, including point-to-point microwave links, point-to-multipoint links and millimetre wave backhaul technologies. For many small cell deployment scenarios (e.g. installations below rooftop level) a non-line-of-sight backhaul will be needed; the main options here are to utilize wireless technologies in the spectrum below 7 GHz or to establish meshed topologies.

Besides pure network capacity, the user experience for many data applications depends heavily on end-to-end network latency. For example, users expect a full web page to load in less than 1,000 ms. As loading a web page typically involves multiple requests to multiple servers, this can translate into network latency requirements below 50 ms. Real-time voice and video communication requires network latencies below 100 ms, and advanced apps like cloud gaming, tactile touch/response applications or remotely controlled vehicles can push latency requirements down to single-digit milliseconds.
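
These budgets compose in a straightforward way. In the sketch below, the rendering overhead and the number of sequential round trips are assumptions chosen to show how a 1,000 ms page target collapses to a roughly 50 ms network round-trip budget:

```python
# Latency-budget sketch: a 1,000 ms page-load target with several
# sequential round trips (DNS, TCP, TLS, HTML, then dependent objects).
# The rendering overhead and round-trip count are assumptions.
page_budget_ms = 1000
rendering_ms = 200           # assumed client-side processing
sequential_round_trips = 16  # assumed chain of dependent requests

rtt_budget_ms = (page_budget_ms - rendering_ms) / sequential_round_trips
print(f"allowed network RTT: {rtt_budget_ms:.0f} ms")  # 50 ms
```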

The majority of mobile networks today show end-to-end latencies in the range of 200–500 ms, mainly determined by slow and capacity-limited radio access networks. The high bandwidth provided by future radio access technologies, together with fast data processing and transmission, will therefore make a major contribution to reducing network latency. Because of the amount of data being transferred, the latency perceived by the user can be much higher than the plain round-trip time; future ultra-high-definition (UHD) real-time video applications make the need for further technology evolution clear.

Equally important is the actual traffic load along the end-to-end path through the network. A high traffic load leads to queuing of packets, which significantly delays their delivery. Simply overprovisioning bandwidth in all network domains is not an efficient solution. Instead, latency-sensitive media traffic might take a different path through the network or receive preferential treatment over plain data transfers, supported by continuously managing latency as a network quality parameter in order to identify and improve bottlenecks. In return, low-latency traffic could be charged at a premium, providing network operators with new monetization opportunities.
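
As a minimal sketch of such preferential treatment, consider a priority queue in which latency-sensitive packets always overtake bulk transfers; the traffic classes and packets below are invented for illustration:

```python
import heapq

# Two traffic classes sharing one link: real-time packets are always
# served before bulk data instead of waiting in a single FIFO.
PRIORITY = {"realtime": 0, "data": 1}  # lower value = served first

queue = []
arrivals = [("data", "file chunk 1"),
            ("realtime", "voice frame"),
            ("data", "file chunk 2")]

for order, (cls, packet) in enumerate(arrivals):
    heapq.heappush(queue, (PRIORITY[cls], order, packet))

while queue:
    _, _, packet = heapq.heappop(queue)
    print("sending:", packet)  # the voice frame overtakes both file chunks
```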

One physical constraint on latency remains: distance and the speed of light. A user in Europe accessing a server in the US will face a round-trip time of around 50 ms due simply to the physical distance involved, no matter how fast and efficient the network is. As the speed of light is constant, the only way to improve this is to reduce the distance between devices and the content and applications they access. Many future applications, such as cloud gaming, depend on dynamically generated content that cannot be cached, so the processing and storage for time-critical services also need to move closer to the edge of the network.
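
The physics is easy to verify. Assuming light travels at roughly 200,000 km/s in optical fibre (about two-thirds of c) and an illustrative 5,000 km Europe-US route:

```python
# Propagation-delay floor: light in optical fibre travels at roughly
# 200,000 km/s (about 2/3 of c). The route length is an assumption.
fibre_speed_km_s = 200_000
route_km = 5_000  # illustrative one-way Europe-US fibre path

rtt_ms = 2 * route_km / fibre_speed_km_s * 1000
print(f"best-case round trip: {rtt_ms:.0f} ms")  # 50 ms before any processing
```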

The introduction of additional radio access technologies, multiple cell layers and diverse backhaul options will increase complexity and carries the risk that network OPEX will rise substantially. This is why the Self-Optimizing Network (SON) is so important. SON not only increases operational efficiency, but also improves the network experience through higher network quality, better coverage, capacity and reliability. Extending SON principles to a heterogeneous network environment is a challenge and an opportunity at the same time.

Fortunately, big data analytics and artificial intelligence (AI) technologies have matured in recent years, driven mainly by the need to interpret the rapidly growing amount of digital data on the Internet. Applied to communication networks, they are a great foundation for analyzing Terabytes of raw network data and proposing meaningful actions. In combination with AI technologies, actionable insights can be derived even from incomplete data: machine-learning techniques can find patterns in large and noisy data sets, knowledge representation schemes provide ways to describe and store the network’s knowledge base, and reasoning techniques use this base to propose decisions even under uncertain and incomplete information. Ultimately, we believe that big data analytics and AI technologies together will help evolve SON into what we call a “Cognitive Network” – one able to handle complex end-to-end optimization tasks autonomously and in real time.
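
As a toy illustration of pattern-finding in noisy network data, the sketch below flags cells whose dropped-call rate deviates strongly from the population; the KPI values and the simple deviation rule are invented for illustration and stand in for the far richer models a real Cognitive Network would use:

```python
import statistics

# Invented KPI sample: dropped-call rate (%) per cell.
drop_rates = {"cell_a": 0.40, "cell_b": 0.50, "cell_c": 0.45,
              "cell_d": 2.90, "cell_e": 0.55}

mean = statistics.mean(drop_rates.values())
stdev = statistics.stdev(drop_rates.values())

# Flag cells deviating by more than 1.5 standard deviations from the mean.
anomalies = {c: r for c, r in drop_rates.items() if abs(r - mean) > 1.5 * stdev}
print(anomalies)  # {'cell_d': 2.9} -> candidate for automated optimization
```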

Customer Experience Management (CEM) can provide insights that enable operators to optimize the balance of customer experience, revenues and network utilization. Cognitive Networks will help to increase the automation of CEM, enabling network performance metrics – as well as experience and business metrics – to govern the insight/action control loop. This again increases operational efficiency and will at the same time be a prerequisite for delivering a truly personalized network experience to every single user.

The big data analytics and AI technologies introduced with Cognitive Networks will also be the foundation for advanced customer experience metrics. The ability to deal with arbitrary amounts of data will allow much more detailed sensing of network conditions, and of the resulting user experience, in real time.

They will also be the foundation for large-scale correlation with other data sources such as demographics, location data, social network data, weather conditions and more, adding a completely new dimension to user experience insights.

Cloud technologies, and the ability to provide computing and storage resources on demand, have transformed the IT industry in recent years. Virtualization of computing and storage resources has enabled sharing of resources and thus improved their overall efficiency, and virtual cloud resources can be scaled up and down almost instantly in response to changing demand. This flexibility has created completely new business models: instead of owning infrastructure or applications, it is possible to obtain them on demand from cloud service providers.

So far this approach has mainly revolutionized IT datacenters. We believe similar gains in efficiency and flexibility can be achieved by applying cloud technologies to Telco networks. Virtualization will allow traditional, vertically integrated network elements to be decoupled into hardware and software, creating network elements that consist simply of applications running on top of virtualized IT resources. The hardware will be standard IT hardware, hosted in datacenters and either owned by the network operator or sourced on demand from third-party cloud service providers. The network applications will run on top of these datacenters, leveraging the benefits of shared resources and flexible scaling.
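
A minimal sketch of the elasticity argument: a virtualized network function sized to follow the daily load curve rather than provisioned for peak. Per-instance capacity, headroom and the load samples are all assumed values:

```python
import math

def instances_needed(load_mbps, capacity_mbps=500, headroom=0.8):
    """Smallest instance count keeping utilization below the headroom target."""
    return max(1, math.ceil(load_mbps / (capacity_mbps * headroom)))

# Invented daily load curve for one virtualized network function.
for hour, load in [("03:00", 120), ("12:00", 900), ("20:00", 2400)]:
    print(f"{hour}: {instances_needed(load)} instance(s)")  # 1, 3, 6
```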

User-plane network elements such as base stations will also be subject to this paradigm shift. Over time, the migration of network elements, in combination with software-defined networking, will transform today’s networks into a fully software-defined infrastructure that is highly efficient and flexible at the same time.

Efficient radio technologies, high utilization and network modernization will reduce network energy consumption, another important cost factor for operators. With the forecast traffic growth in mind, reducing network energy consumption must be a major objective. The focal point for improving network energy efficiency will be the radio access, which accounts for around 80% of all mobile network energy consumption. Ultimately, the energy efficiency that can be achieved depends on the pace of network modernization: efficiency gains materialize only when new technologies are introduced into the live network, and determining the right pace for modernization requires careful balancing of CAPEX and OPEX. We believe that energy efficiency can outpace traffic growth – which makes keeping network energy consumption at least flat a challenging but achievable goal.
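
The implication is easy to quantify: if traffic grows by the roughly 60x discussed earlier while total energy consumption stays flat, energy per bit must fall by the same factor. The baseline figure below is an arbitrary placeholder, not a measurement:

```python
# If traffic grows ~60x (see the demand discussion above) while total
# energy stays flat, energy per bit must fall ~60x.
traffic_growth = 60
baseline_kj_per_gb = 100.0  # illustrative starting point, not a measurement

required_kj_per_gb = baseline_kj_per_gb / traffic_growth
print(f"energy per GB must drop from {baseline_kj_per_gb:.0f} "
      f"to ~{required_kj_per_gb:.1f} kJ to keep total consumption flat")
```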