The Next Frontier is to Bring AI - and Learning - to the Edge

Prof. David Berman, head of AI Research and Technology at Cambridge Consultants, on the imperative of edge AI

Prof. David Berman, Head of AI Research and Technology

May 18, 2023


At a Glance

  • Not every problem can be solved by behemoth large language models such as ChatGPT
  • The next generation of edge AI will enable low-power, low-compute continuous learning, heralding a new wave of dynamic devices

Where is the next frontier in AI? Large language models (LLMs) such as ChatGPT are everywhere, but there are still significant challenges in bringing AI to personal devices without the cloud or other forms of connectivity. The emerging opportunity is to further the implementation of AI at its point of use: the ‘edge’. My sights are set on advances here that will not just put AI on the edge but bring learning to the edge – and open up new possibilities in industrial and consumer markets.

This year we have seen the huge potential being realised by large language models. ChatGPT powering Microsoft 365 Copilot is just one example. These models are compute behemoths and it costs around $50m to train them using some of humanity’s largest datasets. Their performance continues to grow and use cases span areas such as coding, copywriting and much more. But not every AI problem can be solved by a large language model.

More than that, for AI to reach its full potential, the separation between the slow, compute-intensive learning stage and the quick inference stage needs to be removed to produce AI that can learn at the same time as it does inference. So, if the new challenge is to fully exploit edge AI, how do we go about it?

First, let’s see what edge AI can do well. There has been much work on producing hardware for edge compute, with chips such as the NVIDIA Jetson and the ARM Cortex-M55 both proving ideal for low-power AI inference. For example, at Cambridge Consultants we used the Cortex-M55 CPU to achieve a 7x power reduction and a 1,000x speed increase for an edge AI voice detection task.
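For a sense of what this looks like in code, here is a minimal sketch of running a pre-trained, quantized voice-detection model with the TensorFlow Lite Python runtime. The model file and tensor shapes are hypothetical stand-ins, and a Cortex-M55 deployment would in practice use an embedded C/C++ runtime such as TensorFlow Lite Micro rather than Python, but the basic flow of load, set input, invoke and read output is the same:

```python
# Minimal sketch: edge inference with a pre-trained, quantized model.
# The model file below is a hypothetical placeholder, not a real asset.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="voice_detect_int8.tflite")  # hypothetical file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# A placeholder audio-feature frame, shaped and typed to whatever the model expects.
frame = np.zeros(input_details["shape"], dtype=input_details["dtype"])

interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()
print("raw model output:", interpreter.get_tensor(output_details["index"]))
```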

These hardware developments have been combined with a growing understanding of how to pack standard AI models into ever smaller, low-compute packages, with quantization and pruning methods contributing to significant technical progress. Together, these methods mean that edge AI works well when deploying a pre-trained model, with strong performance in tasks such as computer vision, where low latency can be crucial and edge capability is an obvious need.
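To make those compression steps concrete, the sketch below applies pruning and then dynamic quantization to a small, hypothetical network using PyTorch’s built-in utilities. It is illustrative only, not the toolchain behind the voice-detection result above:

```python
# Minimal sketch of the two compression techniques mentioned above.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical small model standing in for an edge vision or audio network.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Pruning: zero out the 50% smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# Quantization: convert Linear weights to int8 for a smaller, faster model.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)    # dummy input
print(quantized(x).shape)  # torch.Size([1, 10])
```

In practice the pruned, quantized model would then be converted and exported to the target chip’s runtime.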

A good example of the need for low-latency AI is autonomous vehicles, where visual data needs to be processed as quickly as possible. And because visual processing is so critical to vehicle control, one cannot rely on connected solutions even if speed were not an issue.

Now let’s look at some of the limitations of traditional edge AI. The AI that I can run on my watch or phone will be a pre-trained model, and any updates or training are likely to be done remotely through cloud resources. As customers become more tech-savvy, they become more data-aware. Sharing every piece of your data with a remote location is something people and businesses are becoming less comfortable with (and is one of the motivations behind Web 3.0).

But worse is the latency issue. Do I want the delay of passing data to the cloud and retraining a model to incorporate it? Do I want the frustration of relying on a connection, with the associated security concerns and the worry of network outages? Of course, we will still want the benefits of being cloud-connected, and I don’t imagine edge devices being permanently cut off, but we don’t want that reliance.

Humans have always formed the benchmark for what we call intelligence. The current excitement around ChatGPT and other LLMs is really based on the fact that the Turing test has finally, clearly been passed and these AIs can appear human. But ask one to remember a simple to-do list and it can’t. For something that passes as intelligent, it fails straight away on the ability to learn new things. These models suffer from anterograde amnesia: the inability to form new memories after the onset of the condition.

In the case of ChatGPT, the onset is September 2021, and its knowledge of anything after that is very uncertain. We want to do better and allow AIs to continuously learn and adapt. Doing so requires more than just making models bigger. We require algorithmic developments that make learning closer to being human, so that learning no longer demands the large compute resources it does today. Achieving this would have another pay-off: if learning requires lower compute and thus lower power, we open up the possibility of continuous learning on the edge.

Before looking at recent technical developments in the area let’s imagine a real case where edge learning would be ideal. Consider your drive to work. You have been driving for years and the experience you have built up makes you an excellent driver. This is something a current pre-trained AI could soon emulate (if not already). On your Monday morning drive, you go over a deep pothole that was barely visible. Luckily you get away with no damage. What do you do on Tuesday morning? You remember the pothole and avoid it.

This is a simple action based on your ability to continuously learn and fine-tune the model you have of your environment. Sitting in your autonomous vehicle and feeling it drive over the same pothole every day will not fill you with confidence in the future of AI. Edge AI must do better and provide learning at the edge, not just inference, to solve such issues. The ability to learn and adapt in real time will enhance a large set of AI-enabled devices, whether industrial applications that adapt to a changing environment or consumer products that learn to adapt to different people and their needs.
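To ground the pothole example, here is a hypothetical sketch of what such an on-device update could look like: a small hazard model that is fitted to the new observation the moment the pothole is encountered, with no round trip to the cloud. The model, features and plain gradient-descent update are illustrative assumptions rather than a production autonomy stack:

```python
# Hypothetical sketch of "learning at the edge" for the pothole scenario:
# a small on-device hazard model is updated the moment a new pothole is hit.
import torch
import torch.nn as nn

# Hypothetical model: maps a road-segment feature vector to a hazard logit.
hazard_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(hazard_model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

def on_pothole_detected(segment_features: torch.Tensor, steps: int = 25) -> None:
    """Immediately fit the new observation (Monday's pothole) on-device."""
    target = torch.ones(1, 1)  # label: this segment contains a hazard
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(hazard_model(segment_features), target)
        loss.backward()
        optimizer.step()

segment = torch.randn(1, 16)  # dummy features for that stretch of road

# Monday: the car hits the pothole and the edge model is updated on the spot.
before = torch.sigmoid(hazard_model(segment)).item()
on_pothole_detected(segment)

# Tuesday: the same segment now scores as a far more likely hazard.
after = torch.sigmoid(hazard_model(segment)).item()
print(f"hazard score before: {before:.2f}, after: {after:.2f}")
```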

There are now new algorithms (such as Hinton’s ‘Forward-Forward’) that are inspired by human brains, in that they learn continuously and with relatively low compute and power needs. These have yet to be industrially exploited, but they can form the basis for the next generation of AI at the edge: continuous learning at low power, delivering adaptable, dynamic solutions without the need for connectivity.
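For readers who want a feel for the idea, the toy sketch below follows the layer-local ‘goodness’ recipe described in Hinton’s Forward-Forward paper: each layer is trained on its own to produce high goodness (the sum of squared activations) for positive data and low goodness for negative data, so no end-to-end backward pass through the network is needed. It is a simplified reading of the paper, not our implementation:

```python
# Toy sketch of layer-local training in the spirit of Forward-Forward
# (Hinton, arXiv:2212.13345). Illustrative only.
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    def __init__(self, d_in: int, d_out: int, threshold: float = 2.0):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.act = nn.ReLU()
        self.threshold = threshold
        self.opt = torch.optim.SGD(self.parameters(), lr=0.03)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalise the input so only its direction is passed on, as in the paper.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def train_step(self, x_pos: torch.Tensor, x_neg: torch.Tensor):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness on positive data
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness on negative data
        # Push positive goodness above the threshold and negative goodness below it.
        loss = nn.functional.softplus(
            torch.cat([self.threshold - g_pos, g_neg - self.threshold])
        ).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients stay local to this layer
        self.opt.step()
        # Detach the outputs so the next layer trains independently of this one.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Two layers trained purely locally on hypothetical positive/negative batches.
layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos, x_neg = torch.rand(32, 784), torch.rand(32, 784)
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```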

At Cambridge Consultants we are investing in this approach. We are implementing these new algorithms on low-power chips with a view to not just putting AI on the edge but bringing learning to the edge for our clients. This AI is then much more relatable to human intelligence. It has the capability, as you do, to update its view of the environment and adapt, providing a more human-centric notion of AI. If you like the sound of the new edge frontier that we’re moving towards, please get in touch. I’ll be at the London AI Summit along with my colleagues Maya Dillon and Tim Ensor if you’d like to hear more.

About the Author

Prof. David Berman

Head of AI Research and Technology, Cambridge Consultants

Prof. David Berman is Head of AI Research and Technology at Cambridge Consultants (CC), part of Capgemini Invent. David worked at the forefront of theoretical physics for over twenty years before moving into AI and machine learning at CC. His work focuses on leading a team to develop new AI techniques to support clients in tackling the tough, high-risk challenges that bring sustained competitive advantage. He has worked on projects in defense, telecoms, abstract datasets and finance.
