Will Edge Computing devour the Cloud?

This is a question that the digerati are asking themselves more and more these days: will edge computing, growing so rapidly, end up devouring the cloud, which has become so fundamental?

My answer to this question is a definitive no. We could play a game of semantic analysis, trying to pin down exactly what we mean by edge and cloud, but at this point I think we can broadly group the interconnected nodes into endpoints, edge devices, and cloud components, with fuzzy borders between them.

That said, where will the emerging edge advantage be? The best recent example I can think of is an edge video processor aimed at home and business security, pet detection, facial recognition, retail analytics, fitness, and smart conferencing. This device, a stand-alone, discrete accelerator for inference workloads, is not an endpoint. It hosts endpoints, including cameras.

In the case of video processing, intelligence is needed to determine what is in the image, so that surveillance cameras don't report everything they see, only the types of images that interest them, such as people. The cameras could do this job themselves, but sending images to a neighboring node with more storage, more processing power, and larger artificial intelligence (AI) models lets a single edge node handle the input of 24 high-definition cameras.

Then, when the edge processor flags the rare image that deserves more attention, it can pass it up to a centralized cloud component that has inputs from multiple sites, even more powerful models, and the ability to push information back down to the edge nodes.
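To make that tiered flow concrete, here is a minimal sketch in Python of how a frame might travel from camera to edge node to cloud. The class and function names are hypothetical stand-ins, not any vendor's actual API, and the thresholds are illustrative.

```python
# A minimal sketch of the camera -> edge -> cloud flow; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: int
    motion_score: float          # cheap metric the camera itself can compute
    label_scores: dict           # scores produced by the edge model

def camera_prefilter(frame: Frame) -> bool:
    # Endpoint: discard the vast majority of frames with a cheap motion check.
    return frame.motion_score > 0.2

def edge_classify(frame: Frame) -> str:
    # Edge node: a larger model scores the frame (person, pet, vehicle, ...).
    return max(frame.label_scores, key=frame.label_scores.get)

def cloud_escalate(frame: Frame, label: str) -> None:
    # Cloud: only the rare, interesting frames go upstream for deeper analysis,
    # cross-site correlation, and long-term storage.
    print(f"camera {frame.camera_id}: escalating '{label}' to the cloud")

def handle(frame: Frame) -> None:
    if not camera_prefilter(frame):
        return                              # most frames never leave the camera
    label = edge_classify(frame)            # heavy local lifting at the edge
    if label == "person":
        cloud_escalate(frame, label)        # only escalate what truly matters

# One uninteresting frame, one frame that reaches the cloud.
handle(Frame(1, motion_score=0.05, label_scores={}))
handle(Frame(2, motion_score=0.9, label_scores={"person": 0.8, "pet": 0.1}))
```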

Evolution at the edge

The role of the edge node is only just beginning to evolve. This space is hotly contested by the major players in the silicon industry: Qualcomm, Nvidia, and Intel all want a share of this market.

And they are not the only ones, but the shape of this competition follows from each company's starting position in the game. Thanks to its mobile-phone processor business, Qualcomm has established itself in endpoints.

Intel, which dominates PCs (traditionally endpoints, though they can serve as edge devices), has earned its position in the cloud as the king of servers. Nvidia, which comes from the graphics business, has added a substantial new cloud business selling banks of GPUs to cloud customers for specialized workloads.

If you ask where the intelligence should live, the answer is easy: everywhere. We can assume that all the nodes will get smarter: the cameras themselves, the edge collector, and the cloud component.

For years now, cloud providers like Amazon, Microsoft, Google, and Netflix have moved information to the edges of their clouds to make popular content more accessible. When Casablanca, the classic movie starring Humphrey Bogart and Ingrid Bergman, suddenly becomes popular again because a famous person wrote or tweeted about it, Amazon can move more copies from its main cloud to edge servers close to the geographic markets where that sudden popularity occurs.

Where does the processing take place?

There is a principle that guides where an analytics model should run: processing should take place as close to the data source as possible, for three good reasons: privacy, latency, and efficient use of network resources.

Cameras that decide what is a human and what is a pet generate images. If those images, which may be too large to process on the camera itself, are analyzed on an edge device inside the home, potentially sensitive images need never leave the house.

Such analysis can also be time critical. If someone monitoring a home remotely from a security app needs to know whether the person at the door is a delivery driver or a burglar, there may not be time to send the image to the cloud for analysis. And then there is the cost of moving large data files around; that path is better kept as short as possible.
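A quick back-of-the-envelope calculation shows why that path matters. The bitrate, event rate, and clip size below are illustrative assumptions, not measurements from any real deployment.

```python
# Rough bandwidth arithmetic for 24 HD cameras; all figures are assumptions.
CAMERAS = 24
STREAM_MBPS = 4                    # assumed bitrate of one HD camera stream
SECONDS_PER_DAY = 24 * 3600

# Option 1: stream everything to the cloud.
raw_gb_per_day = CAMERAS * STREAM_MBPS * SECONDS_PER_DAY / 8 / 1000
print(f"stream everything to the cloud: ~{raw_gb_per_day:,.0f} GB/day")

# Option 2: analyze locally at the edge and escalate only flagged clips.
EVENTS_PER_CAMERA_PER_DAY = 50     # assumed "interesting" detections per camera
CLIP_MB = 10                       # assumed size of one escalated clip
edge_gb_per_day = CAMERAS * EVENTS_PER_CAMERA_PER_DAY * CLIP_MB / 1000
print(f"escalate only flagged clips:    ~{edge_gb_per_day:,.0f} GB/day")
```

Under these assumptions, streaming everything runs to roughly a terabyte per day, while escalating only flagged clips is on the order of a dozen gigabytes.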

In the AI world, the two big tasks are training and inference. During training, the model learns what a human is and what a pet is through massive ingestion of properly labeled images. After some training, the model can distinguish one from the other; that is, a trained model can make correct inferences. Training takes a ton of resources, and it is most appropriately done in the cloud, which is far less resource-constrained than endpoints and even most edge devices.

Inference, on the other hand, is best done on endpoints or edge devices. On-device inference can be used for things like voice activation, text recognition, face detection, speech recognition, computational photography, and object classification.
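One reason inference fits on small devices is that a model trained at full precision in the cloud can be shrunk before it ships. The sketch below illustrates that idea with naive 8-bit weight quantization in NumPy; it is a toy illustration of the general technique, not any particular vendor's toolchain.

```python
# Toy illustration: shrink cloud-trained float32 weights to int8 for an edge device.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus one scale factor (roughly 4x smaller)."""
    scale = np.abs(weights).max() / 127.0
    return (weights / scale).round().astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
cloud_weights = rng.normal(size=(1000, 1000)).astype(np.float32)  # "trained" in the cloud
q, scale = quantize_int8(cloud_weights)                           # shipped to the device

print("float32 size:", cloud_weights.nbytes / 1e6, "MB")
print("int8 size:   ", q.nbytes / 1e6, "MB")
print("max weight error:", float(np.abs(dequantize(q, scale) - cloud_weights).max()))
```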

But since AI models must continuously evolve to reduce both false positives and false negatives, the improvement cycle necessarily involves every level of computation. In a federated learning model, the cloud-based aggregate model takes updates from all the downstream devices, improves its ability to correctly identify the object in question, and then refreshes all the downstream inference models. The global model thus improves on the basis of more diverse data.

Edge devices and endpoints, in turn, can make local improvements based on location-specific data, which may differ from the overall data set used for the original training.
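Here is a toy sketch of that cycle, in the spirit of federated averaging: each device computes an update from its own local data, the cloud averages those updates into an improved global model, and the result is pushed back down, while each site may also keep a locally tuned copy. Everything below, including the update rule, is a simplified stand-in for a real federated learning system.

```python
# Toy federated-averaging cycle with plain NumPy; not a production algorithm.
import numpy as np

def local_update(global_model, local_data):
    # Edge/endpoint: nudge the shared model toward this site's own data
    # (a stand-in for a few steps of on-device training).
    return global_model + 0.1 * (local_data.mean(axis=0) - global_model)

def aggregate(updates):
    # Cloud: average the downstream updates into an improved global model.
    return np.mean(updates, axis=0)

rng = np.random.default_rng(1)
global_model = np.zeros(4)                                       # trained once in the cloud
site_data = [rng.normal(loc=i, size=(50, 4)) for i in range(3)]  # location-specific data

for _ in range(5):                                               # repeated improvement cycle
    updates = [local_update(global_model, data) for data in site_data]  # on the devices
    global_model = aggregate(updates)                                   # in the cloud
    # The refreshed global model is then pushed back to every device; each site
    # may also keep a locally fine-tuned variant for its own conditions.

print("aggregated model:", global_model.round(2))
```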

Conclusion: Edge and Cloud must cooperate

AI is just one area that illustrates how all levels of IT (endpoint, edge, and cloud) need to work together to achieve the best results. There are many others where this division of labor makes perfect sense: large-scale, intensive computing in the cloud; offloading local tasks or positioning cloud copies at the edge; and fast, efficient computing on the endpoint.

