June 2018 - Connectivity | IoT | Networks

Building the Infrastructure for the Future Driving Experience

Realizing the vision of the connected and self-driving car of the future will require significant advances in connectivity, and the IoT will need to be built into the landscape to provide a seamless user experience. Klaus Landefeld, Vice-Chair of the Board at the eco Association, on the intelligent highway.


Automated notifications of traffic jams, as well as alternate route suggestions offered by the navigation system, may deliver the feeling that highways themselves are collecting information about traffic flows. But the traffic information on offer today is not provided by the traffic infrastructure in the roads. It is derived from the behavior of other cars on the road – e.g. the collected information indicates that cars are not moving, and this is interpreted as a traffic jam.

However, as automotive systems become more dependent on environmental and situational data being sent to and from the car to support the whole range of inbuilt automated safety features, increasing intelligence will be needed. This means that connectivity has to be built into the very landscape cars are driving through. Intelligent infrastructure, in other words, is the goal for supporting connected and self-driving cars. But what are the engineering requirements to make this possible? The connected car, with all its IoT sensors and network connectivity, is only one side of the coin. To make a geographical area intelligent, be that a city or a highway, networks also need to be developed that build the IoT into the landscape.

While many of us are now familiar with the concept of a home network, with devices that have a very short range and are generally only minimally mobile, such a network is very different from the kind of networks we need to build to enable the connected car to reach its full potential and to fulfil security and reliability requirements.

Here, it is instead about large-scale IoT deployment, which raises the question of what type of network needs to be set up, and what the infrastructure will look like.

Making highways intelligent

When we're looking at enabling intelligent traffic infrastructure, or even making an entire highway intelligent, the type of network we're talking about building is a narrowband wide area network. A network like this offers neither a classic Internet connection nor a power supply for the devices, so one challenge is how to get these devices online.

There are three standards that have emerged to achieve this: Narrowband IoT (NB-IoT), which runs over the cellular network; the LoRa protocol; and the ultra narrowband (UNB) protocol, best known as Sigfox. There are currently discussions about putting sensors based on these protocols into the mileposts on highways and, going a step further, there are even developments that could allow the sensors to be built into the tarmac so that the road itself can communicate with cars.
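To give a sense of how constrained these uplinks are, here is a minimal Python sketch that packs a hypothetical milepost reading (ID, kilometer mark, road temperature, surface state) into a single 12-byte frame – the payload size a Sigfox uplink allows and one that LoRa handles comfortably. The field layout and the surface codes are assumptions chosen purely for illustration; they are not part of any of the three standards.

```python
import struct

# Hypothetical milepost payload: the field layout below is an assumption
# for illustration, not defined by NB-IoT, LoRa, or Sigfox.
# Sigfox uplinks carry at most 12 bytes, so every field is packed tightly.

def encode_milepost_reading(milepost_id: int, km_mark: float,
                            temp_c: float, surface_code: int) -> bytes:
    """Pack a road-condition reading into a fixed 12-byte frame.

    milepost_id  : 0..2**32-1                     (4 bytes)
    km_mark      : kilometer post, in decimeters  (4 bytes)
    temp_c       : road temperature, in tenths of a degree (2 bytes, signed)
    surface_code : 0=dry, 1=wet, 2=ice (assumed coding, 1 byte) + 1 spare byte
    """
    return struct.pack(">IIhBx",
                       milepost_id,
                       int(km_mark * 10),
                       int(temp_c * 10),
                       surface_code)

def decode_milepost_reading(frame: bytes):
    milepost_id, km_dm, temp_tenths, surface_code = struct.unpack(">IIhBx", frame)
    return milepost_id, km_dm / 10, temp_tenths / 10, surface_code

if __name__ == "__main__":
    frame = encode_milepost_reading(481516, 23.4, -1.5, surface_code=2)
    print(len(frame), "bytes:", frame.hex())     # 12 bytes on the wire
    print(decode_milepost_reading(frame))
```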

Designing devices and networks for large-scale IoT deployments

This narrowband IoT network would not provide the intelligence for connected cars, but it would provide information, for example, about road conditions – if there is ice on the road 500 meters ahead – and applications like these can interact with connected cars. However, narrowband IoT transmissions are typically not geared toward high reliability in the individual transfer. You can ensure reliability through the protocol and retransmit, but this kind of system cannot offer real-time response capabilities. Rather, these networks provide necessary background information.
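As a rough illustration of that trade-off, the sketch below uses an invented send_with_retries helper over a stand-in lossy uplink: the protocol eventually gets the report through by retransmitting, but the delay is variable and unbounded in the worst case – acceptable for background information such as an ice warning, not for real-time control.

```python
import random
import time

def unreliable_uplink(report: dict) -> bool:
    """Stand-in for a narrowband uplink that drops frames at random.
    A real NB-IoT / LoRa / Sigfox stack would sit here instead."""
    return random.random() > 0.3      # ~30% loss, purely illustrative

def send_with_retries(report: dict, max_attempts: int = 8,
                      backoff_s: float = 0.5) -> float:
    """Retransmit until acknowledged; return the total time spent.

    Reliability comes from the protocol (retries), but the delivery delay
    grows with every retransmission, which is why such a link feeds
    background information rather than real-time vehicle control.
    """
    start = time.monotonic()
    for attempt in range(1, max_attempts + 1):
        if unreliable_uplink(report):
            return time.monotonic() - start
        time.sleep(backoff_s * attempt)          # linear backoff between retries
    raise TimeoutError(f"report not acknowledged after {max_attempts} attempts")

if __name__ == "__main__":
    elapsed = send_with_retries({"milepost": 481516, "surface": "ice"})
    print(f"delivered after {elapsed:.1f}s -- background info, not real time")
```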

LoRa and Sigfox both operate in unlicensed spectrum and have a long range – reaching up to 40 or 50 kilometers in rural settings. Within a city, this will typically be significantly lower – maybe three to five kilometers. This is usable for anything where the transmission can be asynchronous: parking solutions and a range of mobility services, for example. These are wide area networks – rolled out for large-scale deployments in order to get things online where it is typically very difficult to do so. One of the major design criteria for these networks is the lifetime of the device. With LoRa and Sigfox, this is around 15 years. Typically, the design goal is a device that can sit in a basement or in a field somewhere, without external cabling or a power source, and deliver sensor data for years without requiring a battery change. This means such devices can also be deployed in places where getting power to them is difficult.
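The 15-year lifetime is essentially an energy budget. The back-of-envelope calculation below uses assumed current and capacity figures for a generic duty-cycled LPWAN sensor – illustrative values, not vendor data – to show how one short transmission per hour on a small primary cell can stretch to roughly 15 years.

```python
# Back-of-envelope battery-life estimate for a duty-cycled LPWAN sensor.
# All current and capacity figures are assumptions for illustration only.

BATTERY_MAH      = 3600      # assumed primary lithium cell capacity (mAh)
SLEEP_CURRENT_MA = 0.005     # 5 microamps in deep sleep
TX_CURRENT_MA    = 40.0      # radio active while transmitting
TX_SECONDS       = 2.0       # airtime per uplink (long range = slow data rate)
UPLINKS_PER_DAY  = 24        # one compact reading per hour

def average_current_ma() -> float:
    tx_time_per_day_s = UPLINKS_PER_DAY * TX_SECONDS
    sleep_time_per_day_s = 24 * 3600 - tx_time_per_day_s
    charge_mah_per_day = (TX_CURRENT_MA * tx_time_per_day_s
                          + SLEEP_CURRENT_MA * sleep_time_per_day_s) / 3600
    return charge_mah_per_day / 24

def lifetime_years() -> float:
    return BATTERY_MAH / (average_current_ma() * 24 * 365)

if __name__ == "__main__":
    print(f"average current : {average_current_ma() * 1000:.1f} uA")   # ~27 uA
    print(f"battery lifetime: {lifetime_years():.1f} years")           # ~15 years
```

Self-discharge and temperature extremes eat into such a budget in practice, which is why the device, not the battery, is usually designed around the target lifetime.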

Edge and fog computing – the challenge of moving through networks

Having deployed networks to receive data from IoT devices, questions arise: How do you collect the data? Where do we actually send it? Do you make it available on a regional basis? This drives distribution on the Internet. Originally, most data and most processing power was concentrated in a centralized data center, typically at a single site. Now this is becoming more and more distributed, and in these mobility scenarios it would make more sense to collect and process data on a regional level. What’s more, you would actually need to be able to follow the device while it traverses the network, or several networks. Being able to shift the workload and the back-end processing to wherever the device is right now leads to what is called “edge processing”: the storage capabilities and the actual processing power are located at the access node – a radio tower or a DSL aggregation point, for example – very close to where the data is generated.

Hardly any data in the IoT world is used as raw data. There is almost always preprocessing, and because the actual device is very constrained, the raw data is typically sent to the node where the device is logged in, which collects the data, does some preprocessing on it, and then sends it on into the network. So you need a processing center with a large number of devices logged into it. The concept of edge computing originally stems from broadband: huge amounts of data are transmitted on a repetitive basis – upgrades or popular videos, for instance – and it makes a lot of sense to store them at the network edge and deliver them repeatedly to the individual nodes requesting the data.
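Here is a minimal sketch of that preprocessing step, with invented class and field names: an edge node collects the raw frames from the devices logged into it, reduces them to a compact summary, and only the summary travels further into the network.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical edge-node aggregation: names, codes, and structure are
# assumptions for illustration. Raw readings stay at the access node;
# only compact summaries are forwarded upstream.

class EdgeNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.raw = defaultdict(list)            # device_id -> raw readings

    def ingest(self, device_id: str, temp_c: float, surface_code: int) -> None:
        """Called for every raw frame received from a logged-in device."""
        self.raw[device_id].append((temp_c, surface_code))

    def summarize(self) -> dict:
        """Reduce the raw data to what the regional level actually needs."""
        temps = [t for readings in self.raw.values() for t, _ in readings]
        icy = [dev for dev, readings in self.raw.items()
               if any(code == 2 for _, code in readings)]   # 2 = ice (assumed)
        summary = {
            "node": self.node_id,
            "devices": len(self.raw),
            "mean_temp_c": round(mean(temps), 1) if temps else None,
            "icy_mileposts": icy,
        }
        self.raw.clear()                        # raw data never leaves the edge
        return summary

if __name__ == "__main__":
    node = EdgeNode("tower-A7-km123")
    node.ingest("mp-481516", -1.5, 2)
    node.ingest("mp-481517", -0.8, 0)
    print(node.summarize())
```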

There is a very similar concept where you have a large number of access points – in a city, for example, where there are many base stations or access nodes that these devices are logged into – and you simply use them as a pass-through, collecting data on a regional level rather than a very local level. This is fog computing, where the processing happens at a level between the access node and a centralized data center. In fog computing, the processing is shifted along as you move through adjacent regions.

Fog computing is basically like the cloud, but regional. It emerged from the requirements of next generation mobile applications: You need to have your application quite close to your consumer, but the consumer might be mobile. As a result, the location of the processing is actually a fluid concept, because it will be shifted along. This means you might change the cell, but still continue processing in the original data center; then, at some point, the workload will move along to another data center, which might be closer, giving you better latency and therefore better reaction times, or offering better performance for your application. 
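The following sketch illustrates that idea under simplifying assumptions (invented site names and coordinates, straight-line distances): the serving data center is simply a function of where the car currently is, and the session state moves along when a closer regional site becomes available.

```python
import math

# Minimal sketch of the fog-computing idea: the serving data center follows
# the car, and state is handed over when a closer regional site takes over.
# All site names and coordinates are invented for illustration.

REGIONAL_SITES = {
    "frankfurt": (50.11, 8.68),
    "munich":    (48.14, 11.58),
    "hamburg":   (53.55, 9.99),
}

def _distance_km(a, b) -> float:
    # Equirectangular approximation; good enough for a regional choice.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371 * math.hypot(x, y)

def closest_site(position) -> str:
    return min(REGIONAL_SITES, key=lambda s: _distance_km(REGIONAL_SITES[s], position))

class FogSession:
    """Follows a moving car: state is migrated when the serving region changes."""
    def __init__(self, position):
        self.site = closest_site(position)
        self.state = {}                          # back-end application state

    def update_position(self, position) -> None:
        target = closest_site(position)
        if target != self.site:
            # A real deployment would pre-advise the target and hand over
            # seamlessly; here we simply move the state to the new site.
            print(f"migrating session: {self.site} -> {target}")
            self.site = target

if __name__ == "__main__":
    session = FogSession((50.0, 8.7))            # near Frankfurt
    session.update_position((49.3, 9.9))         # still closest to Frankfurt
    session.update_position((48.4, 11.3))        # approaching Munich: migrates
    print("now served from:", session.site)
```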

Augmented reality enriching the driving experience

Latency and reaction times become more problematic when you consider the possibility of augmented reality supporting your driving. Here, you need to overlay the picture, and to do this you need lots of local information, but the processing power for this information is likely to be somewhere in the back end rather than in your car. The cloud data center, however, is too far away to give you an acceptable reaction time – and it would simply not be sensible to have a global instance hold and process data which is only relevant on a local level. Putting it into the edge also won’t really help you, because you are shifting cells every couple of kilometers, so you would need to continually shift the processing. The solution is a layer in between, where you do regionally-based processing. If we think about a landmass the size of Germany, for example, we would need roughly 50 to 70 data centers, acting as regional distribution centers, to support this. The underlying idea is that the processing never happens more than 50 kilometers away from your current location, to facilitate single-digit millisecond latency – which means a significant number of data centers.
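A quick back-of-envelope check on these figures, using the usual rule-of-thumb speed of light in fiber (roughly 200 km per millisecond) and Germany's land area; the 1.5x routing detour is an assumption:

```python
import math

# Back-of-envelope check on the "50 km / single-digit milliseconds /
# 50 to 70 sites" figures. The speed of light in fiber and the area of
# Germany are standard approximations; everything else follows from them.

FIBER_KM_PER_MS = 200        # light in fiber covers roughly 200 km per millisecond
MAX_DISTANCE_KM = 50         # processing never further than this from the car
GERMANY_AREA_KM2 = 357_000

# Round-trip propagation delay to a site 50 km away, assuming a 1.5x routing
# detour because fiber rarely runs in a straight line:
round_trip_ms = 2 * MAX_DISTANCE_KM * 1.5 / FIBER_KM_PER_MS
print(f"propagation round trip: {round_trip_ms:.2f} ms")            # 0.75 ms

# Area one site can cover if nothing is more than 50 km away (a circle of
# radius 50 km), and the resulting site count for Germany's landmass:
coverage_km2 = math.pi * MAX_DISTANCE_KM ** 2
print(f"sites needed (ideal circles): {GERMANY_AREA_KM2 / coverage_km2:.0f}")   # ~45

# Coverage circles overlap and demand is uneven, which is why a practical
# build-out lands in the 50 to 70 site range quoted above.
```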

The challenge of providing a seamless user experience

Obviously, this also requires network intelligence that is able to actually shift the processing between data centers. This can be done by designing proper protocols to shift the workload: It needs to be a seamless shift to another data center, which can then take over without interruption of service for the user. It is easiest to solve this with intelligence at the actual processor level, or within the data center. Solving it using SDN is under consideration, but that would require having the SDN all the way from the edges, through all the infrastructures, with the additional problem of shifting technologies – going from an LTE network to a Wi-Fi network to a 5G network – while moving, and still delivering a seamless, uninterrupted service. It is clearly much easier to give a kind of pre-advice to the location you are moving into at the application level, and then shift the processing along while moving. You have the session served from two locations, or two technologies, for a certain amount of time – much the same as your phone when you move between Wi-Fi networks within a larger building.
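The sketch below illustrates that application-level pre-advice pattern with invented names: the target region receives a copy of the session state before the car arrives, both sites can serve it during a short overlap, and only then is the old site released.

```python
# Sketch of the application-level "pre-advice" handover described above:
# the target region is warmed up before the car arrives, both sites serve
# the session for a short overlap, then the old one is released. Class and
# method names are invented for illustration.

class RegionalSite:
    def __init__(self, name: str):
        self.name = name
        self.sessions = {}

    def prepare(self, session_id: str, state: dict) -> None:
        """Pre-advice: receive a copy of the state before taking over."""
        self.sessions[session_id] = dict(state)

    def release(self, session_id: str) -> None:
        self.sessions.pop(session_id, None)

def seamless_handover(session_id: str, state: dict,
                      source: RegionalSite, target: RegionalSite) -> None:
    target.prepare(session_id, state)            # 1. warm up the target site
    assert session_id in target.sessions         # 2. overlap: both sites can serve
    source.release(session_id)                   # 3. cut over without interruption

if __name__ == "__main__":
    frankfurt, munich = RegionalSite("frankfurt"), RegionalSite("munich")
    frankfurt.sessions["car-42"] = {"route": "A3", "overlay_version": 7}
    seamless_handover("car-42", frankfurt.sessions["car-42"], frankfurt, munich)
    print("car-42 now served by:", munich.name)
```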

Tactile computing – having the Internet at your fingertips

This will become very relevant in a couple of years, when we have broadly available virtual reality and what is called “tactile computing”. Tactile computing means that when you take an action, you get an almost immediate reaction – otherwise, you wouldn't perceive the two as directly related. This is used in augmented or completely virtual reality, and it requires, according to research, a one to two millisecond response time. That means data needs to be transmitted, a result computed, and the answer sent back to you, all within this short time. It will be a huge leap in technology to actually achieve this, but one of the predominant issues is that we can't beat physics – it needs to be physically possible to deliver the data, process something, and deliver the result back to you within the given time frame, which requires proximity.
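A short worked example of that physics constraint, with assumed figures: if half of a 1.5 millisecond budget goes to computation, propagation over fiber limits how far away that computation can sit.

```python
# Why a one to two millisecond round trip forces proximity: even before any
# processing, signal propagation eats into the budget. The figures below are
# rule-of-thumb assumptions, not measurements.

FIBER_KM_PER_MS = 200      # light in fiber, roughly two thirds of c
BUDGET_MS = 1.5            # tactile-computing target response time (1-2 ms)
PROCESSING_MS = 0.5        # assumed share of the budget spent computing

# Whatever is left of the budget has to cover the signal path there and back:
max_one_way_km = (BUDGET_MS - PROCESSING_MS) * FIBER_KM_PER_MS / 2
print(f"processing can sit at most ~{max_one_way_km:.0f} km away")   # ~100 km

# Radio access latency and fiber detours shrink this further in practice,
# which is why the processing has to be regional rather than in a distant cloud.
```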

This is on the horizon as “5G” technology, but the technology as a whole is not there yet. The standards have just been ratified, but it will take time and huge investments to deploy. We can expect to have these networks up and running by around 2022-23.

Klaus Landefeld is Vice-Chair of the Board and Director of Infrastructure & Networks at eco – Association of the Internet Industry.

Since 2013, he has served as Chief Executive Officer of nGENn GmbH, a consultancy for broadband Internet access providers in the fields of FTTx, xDSL, and BWA. He also serves as network safety and security officer, as well as data protection officer, for several German ISPs.

Before establishing nGENn, Mr. Landefeld held a number of other management positions, including CEO at Mega Access and CTO at Tiscali and World Online. He was also the CEO and founder of Nacamar, one of the first privately-held Internet providers in Germany.

Mr. Landefeld is a member of a number of high-profile committees, including the Supervisory Board of DE-CIX Group AG, and the ATRT committee of the Bundesnetzagentur (Federal Network Agency).