The Internet of Things became a hot topic and a buzzword around 2016, when major cloud providers presented their edge computing solutions. But what if the IoT era really starts next year?
The first Internet-connected device was a toaster presented by John Romkey at the 1990 Interop conference. The term “IoT” itself was coined in 1999 by Kevin Ashton from MIT. Over the next 15 years, various pilot projects were announced: Tesla, Mercedes, General Motors, Ford, and Volvo declared their intentions to build self-driving cars and trucks, and even Google announced its self-driving car project. Amazon Echo and Google Nest entered our lives and began transforming our apartments into the smart homes of the future.
The continuous evolution and growth of cloud hosting and cloud computing services enabled these projects. Traditional data centers could not satisfy the needs of geographically distributed networks or handle the vast volumes of machine-generated data. In addition, not all connected devices (teapots and light switches, for example) could support SSL encryption, so a gaping security hole presented itself. After several reports of smart homes being hacked, and a wave of advice on how to keep yours from suffering the same fate, the hype died down, but the problem remained.
Edge computing — the response to the IoT problems
In 2016–2017, Microsoft introduced its Azure IoT Edge service, Google Cloud released Cloud IoT Core, and AWS improved its IoT offerings, introducing many new features at AWS re:Invent 2017. Meanwhile, smaller cloud providers like DigitalOcean and IBM are also seizing the opportunity to grab a share of the future IoT market.
Edge computing is a new approach to data processing, devised specifically for IoT nodes. The main requirements are:
- Data filtering into hot and cold. If hundreds of identical signals arrive from identical sensors (say, the temperature control network of a plant reporting the same values during normal operation), only one signal is transmitted from the IoT node to the cloud for logging, and the rest are discarded as so-called cold data (1).
However, if any sensor shows a change of value (2), this signals an anomaly (such as a fire starting or a short circuit), so the signal is also processed according to a response scenario (an alarm is raised and the firefighting systems in the affected zone are activated). This is hot data, which requires immediate action.
- Hot data processing within the node. The aforementioned response scenarios should be handled by machine learning models deployed inside the IoT node. The ML algorithms themselves are first trained in the cloud on vast arrays of historical cold data, but the trained models take as little as 1 GB of disk space and can run efficiently inside the IoT node. (3)
With this structure, hot data can be handled on-site and responses can be issued within milliseconds, while cold data is sent to the cloud and used in the ongoing ML model training process (4). For example, when a small batch of sensors registers a gust of wind carrying gravel that threatens a wind power station, the whole wind farm adjusts its rotors to avoid damage.
- Regular ML model updates. The more historical data is available to an ML model, the better its predictions. Therefore, the local instances of the model deployed within the IoT nodes must be updated from time to time, so a secure connection channel and an error-proof CI/CD pipeline must be established to keep the system operating efficiently.
- Internet coverage stability and security. Given the risks of smart home burglaries and self-driving car crashes, the stability of Internet coverage and the security of connections is another major concern. Companies like AT&T in the US and Vodafone in the UK are planning to launch 5G networks to provide sufficient Internet access for smartphones and other connected devices.
That said, the IT services industry is actively developing the technology, software, and practices to support the influx of data expected from the billions of interconnected devices coming online in 2019. But is the IoT era really starting this year?
As for smart cities, it turned out that the energy and cost savings achieved by smart lighting systems were minor compared to the cost of installing and running those systems. The same goes for Industry 4.0 plants and other anticipated IoT applications. In addition, the cost of equipping all major roads with IoT sensors and nodes far exceeds the expected profits from selling smart cars or trucks, and automotive manufacturers are not willing to invest sums of this magnitude on their own.
Conclusions on cloud readiness for the IoT
A possible solution to this dilemma is delivering more personalized IoT applications and systems. Although each is developed for a specific purpose, such apps can make IoT implementation feasible in the long run: if the same system of IoT nodes managed the city lights, the traffic control system, self-driving cars, traffic surveillance, consumer app synchronization, and so on, the economics would work out.
For now, the cloud is ready to support the IoT, but the general public is not willing to invest in it. IoT developers are steadily increasing the variety of apps and services available, but adoption rarely goes beyond pilot projects and early adopters.
What would help speed up the coming of the IoT era, in your opinion?