As people and organisations come to rely more on cloud services, they also begin demanding better performance from their online applications. That’s the word from Chetan Mistry, the channel development manager for Anglophone Africa at Schneider Electric.
Edge computing moves parts of a cloud application and its data closer to the user to improve performance. Where cloud services are usually delivered through big data centres, edge computing proposes to use devices such as servers and data storage systems at network locations closer to the client.
This is not an entirely new concept, Mistry said, but an evolution built on developments that have taken place over the history of computing.
Before the personal computer revolution, mainframe systems and centralised computing were the order of the day. “Dumb” terminals allowed you to input commands, simply sending them to a big computer which did all the heavy lifting.
With the advent of the personal computer, a kind of “edge computing” became a reality, and processing was done on the computer at your desk rather than using a big, central system like a mainframe. Applications and data were stored on your very own PC.
There were exceptions in the form of supercomputers, which processed large amounts of data for scientific experiments and other purposes, but for a typical business or home user, this type of edge computing, or more precisely desktop computing, became the norm.
Widespread adoption of the Internet brought with it cloud computing — a variation on the idea of centralised computing which enabled technologies in mobile phones such as AI-powered voice assistants and integrated online backups and more.
Productivity applications such as word processors and spreadsheets also started moving online and into the web browser, with a focus on real-time collaboration.
While many applications benefit from cloud computing, it also has drawbacks, such as latency, and downtime when an Internet connection is not available.
This is where edge computing comes in.
Latency—sometimes referred to as “ping”—is the time it takes for a packet of data to travel from your computer, to another device on a network, and back again.
If your computer is trying to reach a service or data stored on servers in Europe, Asia, or North America from South Africa, this latency can be significant. However, if your computer is connected to the same local network as the application or data, the delay is so small it is essentially unnoticeable.
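The round-trip delay described above can be sketched in a few lines of Python that time a TCP handshake. This is an illustrative sketch, not something from the article; the function name and the localhost demo are assumptions, and a real measurement would target a remote host instead of the local machine.

```python
import socket
import time

def measure_latency(host, port, attempts=3):
    """Return the best TCP connect round-trip time to host:port, in milliseconds."""
    best = float("inf")
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # we time only the connection handshake, then close
        best = min(best, (time.perf_counter() - start) * 1000)
    return best

# Local demo: a server on the same machine answers almost instantly,
# whereas a host on another continent can take hundreds of milliseconds.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # bind to any free port
server.listen(5)
local_rtt = measure_latency("127.0.0.1", server.getsockname()[1], attempts=1)
print(f"localhost round trip: {local_rtt:.2f} ms")
server.close()
```

Pointing the same function at a server overseas instead of localhost makes the contrast the article describes directly visible.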
Services that benefit from lower latency include real-time control applications, such as self-driving cars and medical robots.
Rise of an opportunity
Mistry said that the developments in edge computing are the continuation of a trend seen in the evolution of the Internet.
“If the South African Netflix node goes down, it won’t affect people in other countries,” he said.
Organisations will want that kind of redundancy for their operations, combined with lower latency, sparking big demand in edge computing deployments around the world.
A manufacturing business, for example, will not want to be in a situation where its assembly line has to grind to a halt if the factory’s connection to the Internet goes down.
“We will still have big data centre builds, but we’ll also see smaller data centres,” said Mistry.
The opportunity won’t just be for those who do the micro data centre builds and server deployments, but also in managing those systems. This includes managing power and temperature in addition to the servers and connectivity. These smaller environments have become critical in edge computing and can no longer be ignored or relegated to the back office.
This article was published in partnership with Schneider Electric.