"Smart City" is one of the trending buzzwords these days. Like all buzzwords, no one really knows exactly what it means, but let me venture a broad definition: a smart city is a city that strives to reinvent itself using technology to improve its efficiency, enhance the service it offers to citizens and visitors and adopt a more sustainable model.
There's little doubt that the promise of smart cities is considerable. Amongst other things, a smart city approach may help address issues as diverse as:
- traffic congestion and its negative impacts on pollution, stress levels and accessibility of the city,
- sorting waste and more generally waste management,
- public transport efficiency and transparency,
- emergency information,
- post-emergency recovery.
These are only a few examples of the possibilities that technology can offer to cities. You will note that connectivity enables most if not all of these services. The problem is that when it comes to connectivity, cities rarely understand the stakes and rarely control the resources.
Admittedly, the smart city "market" is still in its infancy and it's largely driven by vendors (IBM, Thales, Cisco, etc.) who have developed certain areas of expertise, vertical applications that they tend to push to cities, sometimes going as far as financing all or part of the projects to acquire commercial references that they can then resell to other cities. As a consequence, most smart city projects are conceived top down, with the application as a starting point.
The issue with that approach is that there is no foundational understanding of the needs of the city and what the priorities are. Projects emerge because a solution exists and a vendor is available to make it happen, not necessarily because said project represents the most pressing need for the city. And that in turn leads to an accumulation of functional and technical silos that generate enormous waste in terms of infrastructure and IT resources.
Let me illustrate with the story of Smartville, a mid-sized city that decides it needs to tackle its traffic problem. It consults with various vendors and a solution for real-time traffic routing and traffic flow optimization is decided upon. Besides some necessary civil engineering, the solution requires sensors along all roads. The data generated by these sensors needs to be aggregated in real time, and it is decided that the city will contract with a mobile operator to do so. Dedicated computing resources are put in place to handle the data generated by the sensors and send traffic information and recommendations back to drivers in the city through information boards or perhaps location-triggered text messages.
Sometime down the line, Smartville concludes that it needs to set up an emergency service because of an earthquake risk. This service will provide the city government with real-time information on the status of roads and other critical infrastructure in the city, and also inform citizens in real time in case of an emergency, using the data on infrastructure to recommend escape routes and safe areas. This service also relies on sensors, but they're different types of sensors, sometimes in different locations. The sensor data will also be aggregated through a mobile operator's network, and the service will also have its own dedicated IT resources.
Smartville has now built two functional silos. It's possible that the managers of the two projects never even worked together or met each other. The city has duplicated networks and IT resources, and created data silos that might make it impossible to cross the data flows from the two projects even though it might be very useful to do so. The result is a huge waste of resources and suboptimal functionality. This may sound like a deliberately bleak fictional scenario, but informal discussions I have with many cities suggest to me that it is the norm rather than the exception.
It's a concern because anything that gets built today will be hard to revisit tomorrow. That means that cities that haven't really gone down that road yet should make sure they approach this rationally. In my opinion, that means the following:
- Assess and prioritize needs of the city and the citizens;
- Evaluate existing or envisaged connected solutions that can help address these needs;
- Examine the resources (infrastructure, IT, human, etc.) needed to power services that do address the identified needs;
- Develop services in order of priority with a constant concern for the maximal reusability of allocated resources.
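To make the reusability point concrete, here is a minimal sketch of what a shared ingestion layer might look like, as opposed to one network and one data store per project. All names (`CityDataBus`, `SensorReading`, the sensor kinds) are hypothetical illustrations, not a reference to any real smart-city platform:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical common reading format: any city sensor (traffic loop,
# seismic accelerometer, ...) publishes into one shared pipeline.
@dataclass
class SensorReading:
    sensor_id: str
    kind: str                 # e.g. "traffic", "seismic"
    location: tuple           # (lat, lon)
    value: float

class CityDataBus:
    """One shared ingestion layer instead of one silo per project."""
    def __init__(self) -> None:
        self._subscribers: list[tuple[str, Callable[[SensorReading], None]]] = []

    def subscribe(self, kind: str, handler: Callable[[SensorReading], None]) -> None:
        # "*" subscribes to every kind of sensor data.
        self._subscribers.append((kind, handler))

    def publish(self, reading: SensorReading) -> None:
        for kind, handler in self._subscribers:
            if kind in ("*", reading.kind):
                handler(reading)

bus = CityDataBus()
traffic_log: list[SensorReading] = []
emergency_log: list[SensorReading] = []

# Both services reuse the same bus; the emergency service can even
# subscribe to all kinds ("*") and cross the two data flows.
bus.subscribe("traffic", traffic_log.append)
bus.subscribe("*", emergency_log.append)

bus.publish(SensorReading("loop-42", "traffic", (48.85, 2.35), 117.0))
bus.publish(SensorReading("acc-07", "seismic", (48.86, 2.34), 0.02))
```

The design choice this sketch illustrates is exactly the one Smartville missed: the traffic service and the emergency service differ in their sensors and their analytics, but the aggregation layer between them can be common infrastructure.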
The question of the reusability of infrastructure assets is a particularly important one, and it is rarely even on cities' radar. Cities assume that operators will have solutions available to meet the connectivity requirements of smart city services, but that's not necessarily the case:
- the more smart-city applications require real-time responsiveness, the more crucial network performance becomes. In particular, while operators focus on delivering download speeds, the kind of applications that will enable smart cities require upload capacity and very low latency;
- private operators do not focus on ubiquitous deployment, at least not with uniform quality of service. They deploy where they think it will be profitable, and only invest as much as necessary to call an area "open for service". For a city, though, delivering services only to selected areas is not an option.
In addition to these considerations about available network solutions, there's the issue of cost. Private operators have relatively crude wholesale models, and they very much like the idea of overlapping or redundant contracts powering separate services. For them, each end-to-end circuit is billed, and as the need for sensors grows, the number of end-points will explode. Is it sustainable for cities to power their services using such partners? Probably not.
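A back-of-envelope calculation shows why per-circuit billing scales badly. All figures below are invented for illustration only; real operator pricing and fiber operating costs vary enormously:

```python
# Back-of-envelope comparison: per-circuit operator billing, which scales
# linearly with the number of sensor end-points, vs. a roughly flat
# operating cost for city-controlled fiber. All numbers are hypothetical.

def annual_cost_per_circuit(endpoints: int, monthly_fee: float) -> float:
    """Operator bills every sensor end-point as its own circuit."""
    return endpoints * monthly_fee * 12

def annual_cost_shared(fiber_opex: float) -> float:
    """Shared fiber: cost is largely independent of end-point count."""
    return fiber_opex

for n in (1_000, 10_000, 100_000):
    per_circuit = annual_cost_per_circuit(n, 10.0)  # assume $10/month/circuit
    shared = annual_cost_shared(500_000.0)          # assume $500k/year opex
    print(f"{n:>7} sensors: per-circuit ${per_circuit:,.0f}/yr "
          f"vs shared ${shared:,.0f}/yr")
```

Whatever the actual numbers, the structural point holds: one cost line grows with every sensor added, the other does not, and sensor counts only go up.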
This means one of two things. Either operators realize that they need to offer completely different pricing models for cities (and more generally for machine-to-machine communications) and do it quickly, or cities need to consider ways to take their IT infrastructure future into their own hands. A number of cities have done that in Europe, and increasingly in the US as well, deploying their own wireline infrastructure, or setting up public-private partnerships with operators that guarantee low-level access to the deployed infrastructure at affordable rates.
Remember that a wireless network is a wireline network with wireless access points at the edges. For a city government, low-level access to ubiquitous or near-ubiquitous fiber ensures that it can deploy whatever sensor networks it needs without having to pay over and over again. It probably won't be cellular, but that's a good thing: sensors don't need cellular, and may even work better with other types of wireless traffic aggregation.
The smart city movement is still in its inception phase. Let's make sure that the infrastructure it needs to thrive doesn't get overlooked!