The world population is expected to soar past 9 billion people by 2050. Roughly 70% of the global population will live in cities, which today consume 70% of global energy supplies. That trend poses real challenges for electric and water utilities, but there are ways to address them. The best Smart Grid planning methodologies embody two related meta-concepts called the “system of systems” and “network of networks” approaches. Both terms are defined in the Smart Grid Dictionary, and mean that planners must identify potential relationships between systems and/or networks and design solutions that leverage these synergies. These approaches encourage creative use and reuse of resources for multiple purposes instead of single-use applications, and are especially important when dealing with complex systems and networks like Smart Grids. Advanced data analytics leverage synergies between data from different sources, and are already delivering value in different electric utility applications. We need to apply these same concepts and tools to build or renovate complex systems like electrical grids and city infrastructures.
I recently moderated a panel discussion at a Smart Cities event about the technological and policy implications of the big data created as more devices are enabled with intelligence to sense and communicate with other machines (M2M) or humans (M2H). I have three observations to share with you.
First, we need to think differently about decision-making and time. Data analytics give us the opportunity to time-shift decisions. Sophisticated analyses can be used in predictive and proactive decision-making. In the Smart Grid world, we recognize that energy storage allows us to “time-shift” generation. We can also time-shift electricity consumption through demand response and dynamic pricing programs that encourage or reward use at off-peak times. A smart city working with data to predict traffic patterns could enable automated, real-time traffic congestion management instead of merely reactive responses. Smart Grids and smart cities can reap significant benefits from data analytics. But humans have to imagine the possibilities of how big data can be harnessed to really improve infrastructure management. And the most insightful information derived from analytics is worthless if humans fail to take action on that information.
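To make the time-shifting idea concrete, here is a minimal sketch of what a demand-response program does to a daily load curve. All numbers (the peak window, the 20% flexible share, the load profile) are hypothetical, chosen only to illustrate moving consumption from peak to off-peak hours:

```python
# Illustrative sketch (hypothetical numbers): time-shifting electricity
# consumption with a simple dynamic-pricing signal. A fraction of the
# "flexible" load in peak-price hours is moved to the cheapest hours.

PEAK_HOURS = range(17, 21)   # 5-9 pm, an assumed peak-price window
FLEXIBLE_SHARE = 0.2         # 20% of peak load assumed shiftable

def time_shift(hourly_load):
    """Return a new 24-hour load profile with flexible peak load
    moved to the four lowest-load (off-peak) hours."""
    shifted = list(hourly_load)
    moved = 0.0
    for h in PEAK_HOURS:
        delta = shifted[h] * FLEXIBLE_SHARE
        shifted[h] -= delta
        moved += delta
    # Spread the shifted energy evenly across the 4 lowest-load hours.
    off_peak = sorted(range(24), key=lambda h: hourly_load[h])[:4]
    for h in off_peak:
        shifted[h] += moved / len(off_peak)
    return shifted

# A made-up load curve (MW) peaking in the early evening.
load = [30]*6 + [40]*6 + [50]*5 + [80]*4 + [45]*3
flattened = time_shift(load)
assert abs(sum(flattened) - sum(load)) < 1e-9  # total energy unchanged
assert max(flattened) < max(load)              # peak demand reduced
```

The point of the sketch is the invariant in the assertions: total consumption is unchanged, but the peak is lower, which is exactly the value dynamic pricing and demand response deliver to the grid.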
The second takeaway is that the concept of privacy has the same plasticity that we see in our concepts of personal space. Definitions of personal proximity vary based on several factors, including culture and relationships. Privacy, and especially data privacy, has similar plasticity in terms of our expectations of how much and what type of data we intentionally share with friends, family, acquaintances, businesses, and governmental entities. There is a real need for well-articulated roles, responsibilities, and benefits of data creation and use, as well as the perils of unintentional data sharing.
Finally, we need a new law that helps us frame expectations around data. Data can be created by machines or by humans. Data travels across networks to destinations, and may be transformed (anonymized), analyzed (correlated with other data), or stored in multiple locations. Moore’s Law observed that the number of transistors on a chip doubles approximately every 24 months. Metcalfe’s Law states that the value of a network grows as the square of the number of its users. We are missing a similar law that frames our expectations about data volumes and the privacy and security of that data.
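The arithmetic behind those two laws is simple enough to write down. This is a sketch with illustrative inputs, not a forecast:

```python
# The growth curves behind the two laws mentioned above.

def moores_law(transistors_now, years, doubling_period_years=2):
    """Transistor count after `years`, doubling every 24 months."""
    return transistors_now * 2 ** (years / doubling_period_years)

def metcalfes_law(users):
    """Network value proportional to the square of the user count
    (value per connection assumed to be 1 for illustration)."""
    return users ** 2

# Ten years of Moore's Law is five doublings: a 32x increase.
assert moores_law(1, 10) == 32
# Doubling a network's users quadruples its Metcalfe value.
assert metcalfes_law(200) == 4 * metcalfes_law(100)
```

A "law of data" in this spirit would give us a comparably compact expectation: not just how fast data volumes grow, but how privacy and security obligations should scale along with them.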
We need big data from machines and humans, and holistic solution design methodologies to optimize our designs, developments, and management of smart cities and Smart Grids. And we need to deepen our understanding of the promises and perils of data use in both types of complex infrastructure. What are your thoughts about the equivalent of a Moore’s Law for data?