Edge cloud computing and artificial intelligence are driving a seismic change in enterprise in the 5G era. But for Industry 4.0 to accelerate, CSPs need to understand the power and flexibility of putting the cloud closer to the user.

Google’s Head of Global 5G Solutions, Majed Al Amine, says that knowing how and when to deploy it will save both time and money.

Below is a transcript of this conversation. Some parts have been edited for clarity.

Michael Hainsworth: The edge and AI are driving a seismic business change in the 5G era. Unlike 4G, the next-generation wireless infrastructure will be as much about enterprise as it is about the consumer. But for Industry 4.0 to accelerate, communications service providers need to understand that the power of edge computing is in its flexibility. Majed Al Amine is the Head of Global 5G Solutions at Google. We began by talking about the importance of understanding that not all aspects of 5G need to run at the edge, and that knowing how and when to deploy it will save time and money.

Majed Al Amine: That’s the core question these days. It started as a physics problem, where latency is a function of the speed of light: the further away you are from the user, the more time it takes for any compute to come back to the user. So it started as an engineering solution like everything else, but I think the core question of how you monetize it, and what use cases actually need it, is where the real activity is happening nowadays.
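
To put rough numbers on that physics problem (the figures below are illustrative, not from the conversation): light in fiber travels at about two-thirds the speed of light in a vacuum, so the round-trip propagation delay alone scales with distance.

```latex
% Round-trip propagation delay over fiber (illustrative figures only)
\[
  t_{\mathrm{rtt}} \approx \frac{2d}{v_{\mathrm{fiber}}}, \qquad
  v_{\mathrm{fiber}} \approx \tfrac{2}{3}c \approx 2 \times 10^{8}\,\mathrm{m/s}
\]
% A data center 2,000 km away costs ~20 ms before any processing:
\[
  t_{\mathrm{rtt}} \approx \frac{2 \times 2 \times 10^{6}\,\mathrm{m}}
                           {2 \times 10^{8}\,\mathrm{m/s}} = 20\,\mathrm{ms}
\]
% An edge site 20 km away cuts the same term to ~0.2 ms, leaving
% nearly the whole latency budget for the compute itself.
```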

Not all use cases are created equal. Not all of them need two milliseconds or five milliseconds of latency. Some of them are looking at areas around security and safety concerns, and those are also edge use cases that the CSPs are looking at. And the more we explore together, as partners, the industries, the markets, and the end users’ reasons for applications, what those applications are and what they’re trying to do with them, the clearer it becomes how we build the solutions together.

MH: Well, then let’s talk about how those solutions play out on an applied basis. We think of things like Google Stadia for real-time video game playing, remote surgery, connected cars. These are the types of things that really do need the ultra-low latency that comes with 5G. Where does Google see potential in these things?

MAA: The ideas are limitless. We all learned from the other technologies that are booming nowadays, like smartphones and the applications on them that really changed the industry, and from all the other applications within different industries. It has to be a future-looking approach when it comes to edge and 5G: the ability to create a smart, flexible, resilient platform that is going to drive those use cases. In my mind, remote surgery is an extreme from a use-case perspective, where you’re expecting a surgeon living in Chicago to do surgery on someone in Europe or in Asia. That’s a technical engineering challenge, but it also comes with all the other challenges we need to work through.

But then there are other use cases that are more mature. Like you mentioned, Stadia is a gaming platform where the compute for the game happens away from the user’s device. And that’s a key aspect of guaranteeing quality of service, because every device is going to be different. Whether you have a low-end, mid-range, or high-end device in your hand, you want the same experience for everyone. So rather than leveraging the device’s compute capabilities, you’re computing somewhere closer to the user and then streaming the game to the user’s device. Those are the headline ideas, but there are many, many more.

I’m working on a nice project currently with a CSP around health and safety for manufacturing. So think of cameras spotting that someone forgot to wear safety gloves near a machine that might cause harm. And in milliseconds, it turns off the machine once that hand gets closer to it. That’s a life-saving, or at least body-saving, capability that needs to happen, and it doesn’t have to be an extreme use case like remote surgery. So those are use cases that are growing fast. Building the capability behind them, like video analytics, machine learning, and AI, and enabling 5G and edge to deliver those use cases with the right KPIs, if you like, which are latency, throughput, and others, is what we’re trying to do together with the CSPs through those partnerships.
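
As a rough sketch of the pattern he describes (every function name here is a hypothetical stand-in, not an actual Google or CSP system), the edge loop is essentially: capture a frame, run a locally deployed model, and trip the machine’s interlock within a tight millisecond budget, with no WAN round trip in the critical path.

```python
import time

LATENCY_BUDGET_MS = 10  # illustrative detect-and-stop budget, not a real spec

def read_frame() -> bytes:
    """Stand-in for a camera capture call (e.g., a USB or RTSP feed)."""
    return b"\x00" * (640 * 480)  # dummy grayscale frame

def hand_near_machine(frame: bytes) -> bool:
    """Stand-in for a locally deployed vision model; a real system would
    run an object-detection network on an on-site accelerator."""
    return False  # dummy result

def stop_machine() -> None:
    """Stand-in for the PLC/interlock call that halts the machine."""
    print("EMERGENCY STOP issued")

while True:
    start = time.monotonic()
    frame = read_frame()
    if hand_near_machine(frame):  # inference happens on premises,
        stop_machine()            # so no round trip to a distant cloud
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > LATENCY_BUDGET_MS:
        print(f"latency budget exceeded: {elapsed_ms:.1f} ms")
    time.sleep(0.01)  # poll at roughly 100 frames per second in this sketch
```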

MH: You mentioned partnerships with the CSPs. I can imagine the communications service providers need to recognize they’re the key partners here. We need to give them credit for the work they’re doing, because this has been a wholesale change in the way a telecom provider works. In 4G, 3G, and everything prior to that, there was a sort of monolithic approach to things. And I think it’s important to recognize that this is an industry that has had to evolve in order to keep up with the developments of 5G and become an active, key partner in any relationship.

MAA: One hundred percent. I think they are the sole survivors of the concept of “build it and the use case will come.” As you know, the market these days is much more risk-aware, from the perspective of “show me the money before I start investing.” CSPs have spent the last three or four years investing in 5G infrastructure. We salute them for that, but we’re here to help them, and not just us: the whole ecosystem. I’m happy to talk about the ecosystem later on, but it is here to help them monetize, reduce the cost of operations, and open up new ideas and new concepts.

And as you were saying, they started with 2G, 3G, and 4G, solving very critical, very important problems, which were consumer-based in most cases. Nowadays we are stepping into a completely new environment, which is: how do we actually do a three-way partnership with the industries themselves? Because unlike consumers, the industries, whether it’s automotive or health or manufacturing or others, know what they want and need certain technology behind it. And for the CSPs to wear the hat of service providers in the proper sense, rather than connectivity providers, it’s a new environment and a new challenge, and we need to work together to make it happen.

MH: Yeah, I think it’s important for the CIOs of a CSP to help break down the cultural barrier within their own organization with their technology teams. But before we extend into the ecosystem components of this, one of the other aspects of the telecommunications space that fascinates me is that while greater speed, lower latency, and higher capacity are all very noble goals, when we talk about edge and the necessary tools for 5G, what of the localized intelligence for customers or businesses, the analytics, the actions that help improve a service? These are aspects of 5G that are also quite new at the CSP level. Can we define localized intelligence first?

MAA: Yeah, absolutely. Localized intelligence, at its core, is the ability to have quick, customized responses to problems from analytics, machine learning, and AI. So it’s having the right engines on your premises to make decisions, and help you make decisions, on the spot, rather than having to push questions to a cloud or a service far away, get them computed, and come back to you. And that’s a key element in many of the use cases, including the one I just talked about, health and safety. You cannot wait for the camera to send its feed somewhere across the globe to determine that this is a harmful situation and then send back a decision to shut down a machine. It has to be localized.


MH: So do CSPs understand that inferencing at the edge is as much a decision at the edge location as it is for the backend and everything else?

MAA: That’s actually the beauty of what’s happening nowadays. The CSPs started looking at it as connectivity-plus-plus, which is basically [the concept of]: we’re already helping you, as a manufacturer or whatever industry you’re in, connect your locations back to the internet, so I can add more services to that and help you drive better solutions in your edge locations. However, with time they started to understand the technology, and this is what we’re doing together. We are starting to learn the connectivity technology as a hyperscaler and a cloud provider too. And we learned that there are two sides to the story.

There’s the front end, where the decision is happening, but there’s a big backend where the machine learning is happening. You can never enable a localized machine to learn all the wisdom, if you like, of object recognition, for example. A camera can recognize a box versus a square by learning from millions of images what a box is versus what a simple square is, one that is not 3D, et cetera. So those millions of images have to reside in a bigger backend that does all the learning and training, but then the wisdom is pushed to the edge, where the decisions start happening. The reason CSPs are an even bigger player here is that, as you would know, the backhaul to the backend, that connectivity, that pipe, is something they own. Then add to it the value of delivering 5G in the local area, and together bringing in these edge capabilities and compute, and this is why this is creating this attention and this business.
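
A minimal sketch of that split, using scikit-learn purely as a stand-in (the conversation doesn’t name a framework, and the data here is synthetic): the heavy training over millions of examples stays in the backend, and only the fitted model, the “wisdom,” is shipped to the edge for local decisions.

```python
# Backend: train on a large corpus, then export only the fitted model.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100_000, 16)      # stand-in for millions of labeled images
y = (X.sum(axis=1) > 8).astype(int)  # stand-in labels ("box" vs. "square")

model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "detector.joblib")  # this small artifact is what gets
                                       # pushed over the backhaul to the edge

# Edge: load the shipped model and decide locally, no backend round trip.
edge_model = joblib.load("detector.joblib")
sample = np.random.rand(1, 16)       # stand-in for one camera observation
print("box" if edge_model.predict(sample)[0] else "square")
```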

MH: So let’s talk about how Google is adding value in this area using the power of edge networks, because it all starts with the added value of not just the technology but that machine learning and artificial intelligence element as well.

MAA: Before talking about machine learning and intelligence, I want to talk about a simpler but more important problem, which is scalability. Being able to run one edge, or two edges, or five edges, is purely a technical engineering problem. But being able to scale that into hundreds and thousands of edge locations is where the business starts happening. This is where automotive starts happening, where health starts happening, et cetera. So that scalability is key, but it’s also scary.

CSPs have seen firsthand how much OPEX you bleed by operating at scale. And the first thing that Google brings to the table is that we’ve been learning, for the last 10 to 12 years, how to scale edges globally. We have thousands of nodes across the world carrying YouTube, caching, and CDN traffic, which have grown organically and are managed by a lean operations team. So: less cost, more automation, and scalability. That’s a key aspect of what we bring on top of what Google is known for, AI and ML, which I’m also happy to talk about.

MH: Right, the idea being that, sure, a CSP could build out its own edge network right across a given jurisdiction or geography, but they don’t have the expertise to effect that properly, and therefore it’s important to bring in a partnership scenario such as with Google.

MAA: Yeah, there’s a big mix of performance, quality, scalability, flexibility, and, you know, SLAs and availability, among other things. It’s a board full of knobs. And any knob that you take to the extreme, you bleed lots of money, to the point where the total cost of the project will fail. That’s where the expertise comes in: being able to balance all of those, specifically based on the use case. We don’t balance them because we would love to have a balanced sheet. We balance them because every application has different requirements for it to be successful. You can’t give a Ferrari to every driver in the world. Every use case is different, and that’s specifically where we excel.

MH: When you talk about that Ferrari, it’s not only the idea that not everybody needs to drive a Ferrari, but also the roads on which we drive. You’re active not just on the machine learning and AI side of things at the cloud level, but on the hardware side of it. The idea that a telecom provider might build its own tensor processing unit, an actual IoT AI chip, is never going to happen. You have an advantage over anyone who would want to build out a hyperscale environment by having not just the software powering it but also the hardware side of the equation.

MAA: I’d say that people have looked at the shiny object of reducing latency by reducing distance, which is where we started the discussion today: putting compute at the edge. But people tend to forget that most of the time consumed in any transaction is the processing itself. It’s not just the distance between two nodes; it’s the time the node takes to process whatever you’re asking it to process. And this is where hardware and software optimization is key.

It’s being able to have the right hardware: a GPU for graphics when it’s a graphics need, a CPU when it’s a CPU need, but also the TPU, the Tensor Processing Unit that Google created around TensorFlow, which is more AI-driven and faster than a GPU when it comes to AI capabilities, we would say something like 15 to 30 times faster. All of those are things that you learn with time and bring to the market in this partnership, being able to reduce the time of processing. And then, as I mentioned, on the software side you can also do many smart things in software engineering, platform engineering, et cetera, that really help with latency, security, and all the other aspects.
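
In TensorFlow terms, a generic sketch of matching the workload to the silicon (not a Google-internal recipe): list what accelerators the host exposes and pin the heavy op to the one that suits it, with soft placement as the fallback.

```python
import tensorflow as tf

# See which CPUs/GPUs/TPUs this host actually exposes.
print(tf.config.list_physical_devices())

a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))

# Ask for the GPU; with soft placement enabled, TensorFlow quietly
# falls back to CPU if that device doesn't exist on this machine.
tf.config.set_soft_device_placement(True)
with tf.device("/GPU:0"):
    c = tf.matmul(a, b)

print(c.shape)
```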

MH: So when it comes to the role that AI and machine learning play in delivering applications at the edge, it sounds like what you’re saying is one of the key aspects is flexibility.

MAA: Right. At the heart of being intelligent is giving every use case its exact needs. Giving it either more or less is wrong. So flexibility, in the sense of understanding the use case and giving it exactly what it needs, starts with the hardware, software, throughput, bandwidth, and everything else, but also goes to the intelligence level itself. Do you really need a high level of rendering when it comes to graphics, or can a decision happen at a medium level? I’ll go back to my earlier examples of use cases. If it’s safety-hazard detection, it might not need high-resolution rendering, because a hand is easy to detect compared with other objects. But when you compare that to remote surgery, or any type of surgery, we’re talking about the cell level or organ level, which is more detailed, and you need a high level of rendering. So giving a Ferrari to everyone, again, is not the right way, for many reasons, and the heart of our experience in AI and ML is the ability to bring the right value and the right amount of machine learning and AI to the right use case.
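
One way to picture “no Ferrari for everyone” in code (a toy illustration with synthetic data): a coarse safety detector can run on an aggressively downsampled frame, while a fine-grained task keeps the full resolution.

```python
import numpy as np

# Synthetic 4K camera frame (height x width x RGB).
frame = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)

# Coarse task ("is a hand in the danger zone?"): every 8th pixel is
# plenty, which is roughly 64x less data to move and process.
coarse = frame[::8, ::8]

# Fine-grained task (e.g., tissue-level imagery): keep the full frame.
fine = frame

print(coarse.shape, fine.shape)  # (270, 480, 3) (2160, 3840, 3)
```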

MH: So then, to extend your Ferrari metaphor, let’s talk about the fleet scenario and the idea that that’s the ecosystem play. When it comes to the applications on top of the infrastructure, there is no single player that’s going to be responsible for creating everything. A lot of different companies are going to create a lot of different cars that run on that infrastructure highway.

MAA: Absolutely. This is where we step in, moving from engineering solutions to opening a solution up for co-innovation with everyone else. And I can’t help but think of the Android model here, where basically it has to be an open platform where people can come and innovate and create their own applications and solutions, but be able to port them from one piece of hardware to another, one use case to another, one CSP to another, and one industry to another.

Computer vision or video analytics might be developed for, let’s say, the health industry, but it can be used in the retail industry for on-shelf object recognition and things like that. So do you really need to start from scratch every time? No, you need the ability to create solutions that are agnostic to the platform and the hardware. This is where we are really focusing on openness, and building a platform on open source is key here.


MH: So when Google came up with the Android platform for smartphones, it wasn’t Google’s responsibility to invent Uber, to invent Netflix, to invent all of these different companies that were built on that underlying infrastructure. How do we become more welcoming to the developer community under 5G?

MAA: There’s something I always tend to say when I’m meeting with technologists and CSPs and industries. They usually ask, what do you think is the most mature use case that I should go and invest in, like-

MH: What’s the big killer app?

MAA: Right, exactly: what’s the big killer app? And my answer is always: if I knew it, I’d leave whatever I’m doing now and go invest in it. That’s not the approach we should all take. We should understand that we are a platform, an innovation platform for others to dream up ideas and try them. Some of them will fail and some will be successful, and we could never predict which use case is going to take off, because we’re not going to be able to do that. It’s too complex. As you said, when Android was created, no one thought it was going to be used by taxi services and restaurant delivery, but also, at the same time, banking and health and everything else that is maybe more sensitive and more secure, et cetera. So the ability to be a resilient platform, to deliver the right security and safety capabilities and APIs to the right developers, is the key thing. And our job is to accelerate their implementation: giving them whatever is already developed, for them to leverage and build on top of, rather than having to build monolithic applications from scratch every time.

MH: So we avoid a locked-in, proprietary platform, and we expose application programming interfaces to the developer community. APIs for machine learning and AI, speech recognition, all of these types of things need to be building blocks that they can pull in to leverage the benefits of 5G.

MAA: Absolutely.

MH: Well, let’s talk then about what Google is doing to build out the ecosystem. What is Google’s focus on the cloud-native 5G core? How does Google help build these stronger services and ecosystems?

MAA: Yeah, so we talked a bit about the partnership with CSPs, and I would add to it the partnership with the telecom vendors, the telecom infrastructure providers; those are key partnerships. But what I would say clearly is that when you build any partnership, it starts with the commonalities. And I would maybe, you know, dare to say that in the 2G and 3G world, there were not a lot of commonalities between the telecom space and the cloud and hyperscaling space.

The 5G domain really starts from all these commonalities put on the table together. The 5G core itself, as envisioned by 3GPP and all the others participating in its design, is a microservices approach, a service-based architecture, built on exactly the same concepts as all the other microservices we’ve been seeing across the internet and in smartphones, where applications talk to each other through APIs and HTTP protocols rather than proprietary protocols. So every network function that used to be a box from a vendor is now an application. Not even just software; it’s an application. It’s a microservice that talks with another microservice at the modular level. And together they create this ecosystem of what we call a 5G core, versus the old days, when it was a huge room full of proprietary servers.
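
To make “a network function is now just an application” tangible, here is a toy service in the spirit of 3GPP’s service-based interfaces (the endpoint path echoes the NRF discovery API, but the payload and behavior are simplified illustrations, not a compliant implementation):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy registry standing in for what an NRF (NF Repository Function) tracks.
NF_INSTANCES = [
    {"nfType": "SMF", "nfInstanceId": "smf-001", "status": "REGISTERED"},
    {"nfType": "UPF", "nfInstanceId": "upf-001", "status": "REGISTERED"},
]

class DiscoveryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Peer network functions discover each other over plain HTTP APIs
        # instead of proprietary telecom protocols.
        if self.path.startswith("/nnrf-disc/v1/nf-instances"):
            body = json.dumps(NF_INSTANCES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DiscoveryHandler).serve_forever()
```

Any other microservice can then call GET /nnrf-disc/v1/nf-instances to find its peers; a real core would run this over HTTP/2 with TLS and authorization between functions.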

MH: So when you talk about microservices, what you’re talking about is cutting these giant monolithic applications and functions down into small pieces that chain to each other based on their use case. And within that cloud world, it sounds like the secret to all of that is containerization.

MAA: Yes. Containerization is the approach of getting rid of all the overhead of creating a machine, whether physical or virtual, built just to run one big application or piece of software. Containerization breaks it down into pieces, nodes and clusters, that can talk with each other freely and enable different use cases in different situations. For example, a 2G switching room used to be a switch, not even a core, basically a huge switch that switched voice between two users. For me to call you, I had to send that signal all the way to that switching room, and every element of that switching room had to do its job for the voice to get to you. In 5G, when we distribute those microservices, the application decides how many hops, how many functions, it needs from the solution to deliver the service. If the application simply wants connectivity back to the public internet, it doesn’t have to go through all the functions within the 5G core; it can go just through a gateway to the internet. Whereas another application that needs to do a voice call, which needs to go to an IMS that connects it to another user, then yes, it can go through those hops. So giving the application the flexibility to decide and take exactly the efficient route it needs to take, that’s a key element of why we’re doing this on a microservices, containerized basis.
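
A toy sketch of that route selection (the chain names are invented for illustration): the session type determines the minimal set of functions the traffic traverses, instead of every call crossing every box in the room.

```python
# Hypothetical function chains; each session traverses only what it needs.
CHAINS = {
    "internet": ["gateway"],                  # straight out to the public internet
    "voice":    ["gateway", "ims", "media"],  # a voice call needs the IMS hops
    "iot":      ["gateway", "policy"],        # metered IoT adds policy control
}

def route(session_type: str) -> list[str]:
    """Return the minimal chain of network functions for this session."""
    if session_type not in CHAINS:
        raise ValueError(f"unknown session type: {session_type}")
    return CHAINS[session_type]

for s in ("internet", "voice", "iot"):
    print(f"{s}: {' -> '.join(route(s))}")
```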

I would add that containers in the past few years have proven their ability to be resilient, scalable, and, more importantly, operationally problem-free, if you like. All the operational issues caused by monolithic software, whether it’s upgrading software, updating software, or even something like an outage in a data center, almost dissolve when you go to a container base, because you’re running distributed clusters. You can actually upgrade to a new version of an application in one cluster while the old version keeps running in parallel in other clusters. Those are things that have not been introduced in the CSP world yet, and this is why we’re excited about containerization and cloud-native environments in the CSP world.
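
A minimal sketch of those parallel versions (handler names and the traffic weight are invented): a growing fraction of requests goes to the new release while the old one keeps serving, so an upgrade never requires a big-bang cutover.

```python
import random

def handler_v1(req: str) -> str:
    return f"v1 handled {req}"

def handler_v2(req: str) -> str:  # the upgraded release, deployed alongside v1
    return f"v2 handled {req}"

canary_weight = 0.1  # start by sending 10% of traffic to the new version

def serve(req: str) -> str:
    # Both versions run in parallel clusters; routing is just a weight.
    return handler_v2(req) if random.random() < canary_weight else handler_v1(req)

for i in range(10):
    print(serve(f"request-{i}"))

# If v2 stays healthy, ratchet canary_weight toward 1.0; if it misbehaves,
# set it back to 0.0 and the "upgrade outage" simply never happens.
```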

MH: I get the sense that the CSP world definitely wants to go cloud-native, but maybe the reason they might not push that button is readiness, customer quality of service. The decision seems to be more around: are we ready to expose our only service? How do we convince them to move fast and break things?

MAA: Ultimately it’s a business, and it has its own business models. So they’re not keen on moving to cloud-native per se. They’re keen to move to the benefits of being cloud-native, which are operational resiliency, scalability, flexibility, efficiency, et cetera. But at the same time, on the other side of the story: what would happen if I break what is really working now? I’m already generating money from my existing architecture. Why do I need to, you know, break it?

It’s this concept of: how do you keep that change, and the momentum for change, to improve things, to grow revenue, to reduce costs, but also be more transparent? And I would say partnership, partnership, partnership is the key here, because ultimately we’re not in the business of sending you a box to replace your old box, where you open it, start working with it, and start discovering the surprises. It’s a much more transparent approach that we are bringing, together with the infrastructure providers of the world: to go to the telcos and show them, step by step, how we are building it, what the risks are, and how we can build it together toward the end goal. That’s always been the successful model for any technology to become the go-to technology.

MH: So what must telecom companies learn from enterprises that have gone cloud native?

MAA: You mentioned how CIOs have a big role. CIOs in CSPs know about other industries, and IT in general, more than anyone else. So they can start with the learnings from the CIOs, because they already have their own IT workloads, most of the time either containerized or at least running in the cloud. And they should also look at the other industries, for sure.

I would say the biggest lesson is that the nice thing about cloudification and containerization is that it’s a crawl, walk, run type of approach. It can start with a small or limited use case where the risk factor is lower. You might think of an IoT use case that hasn’t picked up yet. Say you want to test smart meters, something that’s going to be booming in the coming couple of years but that carries only a small amount of traffic today. You can start by containerizing only the workloads passing through your core from that use case, keep your consumer traffic, which is your core business, on whatever is running now, and then migrate step by step, learning and growing as you go. That’s what we are bringing in, and that’s how other enterprises built their migrations. That modularity is key.
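
As a toy illustration of that crawl-walk-run split (the APN names are invented): route only the low-risk smart-meter workload to the new containerized core, keep consumer traffic on the legacy core, and grow the migrated set as confidence builds.

```python
# Workloads migrated so far; grow this set one low-risk step at a time.
MIGRATED_APNS = {"smartmeter.iot"}

def core_for(apn: str) -> str:
    """Send a session to the containerized core only if its workload has
    been migrated; everything else stays on whatever runs today."""
    return "containerized-core" if apn in MIGRATED_APNS else "legacy-core"

for apn in ("smartmeter.iot", "consumer.internet", "consumer.volte"):
    print(f"{apn} -> {core_for(apn)}")
```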

MH: So you’re saying you can start small and scale, but the future enterprise business is the kind of thing where, once you hit critical mass, you’re at the stage where you’re selling to enterprises because you’ve already learned from your mistakes in the core, what works and what doesn’t?

MAA: Yeah, and if you talk with some industries, they look back and say, “I can’t believe I was working this way a few years ago.” So it is step by step, but it actually gives you the experience of looking back, comparing for yourself, and seeing the difference. And that’s something that no amount of evangelism will drive. It’s more like: let’s start doing something together, because we are at the crossroads of 5G, and the use cases are on the verge of becoming the national infrastructure of every country in the world. It’s where the attraction of the business is, where everything is happening. And you have a short window to really get it right. The only way to do it is to start trying.
