
Architecting for the Edge


Summary

The panelists discuss the main differences in how one should design and build services when embracing the edge as part of the system architecture.

Bio

Jason Shepherd is VP of Ecosystem at ZEDEDA. Max Stoiber is Co-Founder @GraphCDN. Kaladhar Voruganti is Senior Fellow, CTO Office @Equinix.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Fedorov: The topic of discussion is architecting for the edge. The main promise of edge computing lies in bringing data and processing closer to the end user. The edge itself is very broad, ranging from ultra-small distances to the user, with home sensors or a car, all the way to the network edge at the interconnection sites. In today's product and infrastructure landscape, there is a growing number of use cases with IoT, autonomous driving, and many others, as well as many infrastructure solutions and options. The problems we have to solve need both infrastructure and software architecture solutions, and we're going to have a discussion on that. Our goal is to learn how to do it right and what mistakes to avoid.

Background

Voruganti: My name is Kaladhar Voruganti. I'm a Senior Fellow in the office of the CTO at Equinix.

Shepherd: Jason Shepherd. I lead ecosystem efforts at a company called ZEDEDA. We're focused on edge orchestration. My motto is, if it's fuzzy, I'm on it. I always tend to find myself on the front end of emerging technologies: IoT before this, and now, of course, edge computing.

Stoiber: I'm Max Stoiber. I am one of the co-founders of GraphCDN. We provide GraphQL caching as a service. We operate in 60 metro level edges, and we edge cache people's GraphQL APIs.

The Benefits of Edge Computing

Fedorov: Edge computing is a big area, and it's been expanding. The set of benefits it can provide is also pretty broad. Some talk about the performance benefits, others talk about scalability, efficiency, or privacy. What's your favorite benefit, and why? Where do you see the main benefit of the edge for you?

Shepherd: Latency is obviously a key benefit. Edge is a continuum with different locations, and we should definitely unpack that a bit as well. We see pretty often that bandwidth is a big concern. It costs a lot of money to move data around blindly. The internet was initially architected for download centric use cases, but with IoT and new data being generated, you're flipping to being more upload centric, and you have to change how a network is architected. Computer vision is a killer app for edge, because the only people who think sending raw video data over networks is a good idea are people who sell you network connectivity. The bandwidth constraint of pumping all that data up to the backend for centralized processing is a big driving factor, in addition to things like latency and security, and some other stuff we'll talk about. Bandwidth is a big one.
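
As a back-of-the-envelope sketch of that bandwidth argument (all numbers below are hypothetical assumptions, not figures from the panel), compare shipping raw camera video upstream with shipping only edge-inference results:

```typescript
// Hypothetical numbers: 100 cameras, 4 Mbps raw video each, versus
// one ~1 KB detection event per camera per second after edge inference.
const cameras = 100;
const rawBitrateMbps = 4;
const secondsPerDay = 24 * 60 * 60;

// Raw video: megabits/day -> gigabytes/day across all cameras.
const rawGBPerDay = (cameras * rawBitrateMbps * secondsPerDay) / 8 / 1000;

// Edge inference: only small event payloads leave the site.
const eventBytes = 1024;
const inferenceGBPerDay = (cameras * eventBytes * secondsPerDay) / 1e9;

console.log(`raw video: ~${rawGBPerDay.toFixed(0)} GB/day`);       // ~4320 GB/day
console.log(`inference: ~${inferenceGBPerDay.toFixed(1)} GB/day`); // ~8.8 GB/day
```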

Voruganti: I completely agree with Jason on bandwidth and latency. I just want to double click a bit on latency. I think there is a spectrum of latencies that people are interested in. There are what I call "real-time hard constrained latencies," where you would probably have to have your edge in the factory or in the hospital operating room infrastructure. Then there are latencies in the less than 20 millisecond round trip range; a lot of the video use cases that are coming up fall into that. For that type of latency, the metro level edge, where you're doing interconnection, is a very good location. Latency, I think, we have to tease apart: based on the use case, there are different edge locations where you can do the processing.

Stoiber: I think for us, for our customers, it's definitely the performance and latency. E-commerce companies use us, news websites use us. For them, it's mainly about performance. They have their data centers, usually in us-east-1, the classic Virginia data center, with the database and their WooCommerce engine there. For them, it's really about distributing all that data around the globe in those 60 metro level edges, and then having fast performance pretty much everywhere. It's not like a hospital operating room or anything else, like Kal mentioned, where it has to be real time, otherwise someone dies. Still, for those use cases, milliseconds mean money. There's a tradeoff to be made about where on the continuum you go. Then I've also got to underline the bandwidth benefits that Jason mentioned: a lot of our customers are seeing huge bandwidth savings by not having to go halfway around the world every single time someone wants to load a webpage. I think that also is a huge consideration.

Shepherd: A lot of people ask me, with 5G being so fast, isn't edge computing a fad that's going to go away, because 5G just solves all your problems? What people don't realize is that those are hyper-local connections, the last hundreds of feet, and you still have the same bottleneck upstream of that hyper-local connection. It drives more need for edge computing: you have to filter out the data before it starts to clog up the bottleneck that you've just created upstream. I've also heard a lot of people say that 5G is going to help you drive your car from the cloud, which is insane. That's a latency critical workload, as Kal was saying, that you're always going to see on-prem, but you would augment it with services for things like augmented reality and infotainment. A big point to be made is latency critical versus latency sensitive: very different.

Fedorov: I think the range of benefits that we can get from edge computing is pretty broad, and we have to give some practical advice on how architects and software engineers can approach these problems. Let's narrow things down to latency and some of the related aspects. As was mentioned, there are interconnection aspects as well, but let's focus on latency as the main benefit. The plan for the discussion is to go through advice on how to approach designing and running applications and services, and then, what are the problems to avoid?

Edge Hierarchy

Focusing on latency, let's imagine that I want to develop a service or application, and I have a specific problem, whatever it could be. Then I'm immediately faced with a lot of options. The first question is, where do I run my logic? Do I stay on the device? Do I stay on the CDN? There are also many CDN options. Do I stay somewhere in between? I think there is a growing hierarchy of edges that's evolving. I'm curious if you have any advice, any thoughts, or any first questions that an architect should answer for themselves?

Voruganti: The way I look at the edge is as a hierarchy. You can have thousands or an even higher number of device level edges, then you have maybe hundreds of what I call micro or far edges. They could be in parking lots, the basement of an apartment building, or a stadium. Then you have a few metro level edges, roughly one in every NFL city. Then you have these huge, large scale public clouds. One of the first questions you have to ask is, for this type of application in this hierarchy (it's like a tree, you can think of it as an inverted tree), from a cost standpoint, from a performance standpoint, from a security standpoint, where should I place my processing? That's the first question: where in this hierarchy should you place it? For cost reasons, it might be more optimal to go up in the hierarchy and still be able to satisfy the latency requirements. The second thing is that not every microservice in your application needs to be satisfied with the same latency. You can have some microservices running at the device level, some can go up in the hierarchy and run in a metro level edge, and some can even be in the deep public clouds. Those are the two points: cost determines where in the hierarchy you should place things, and not everything has to go in one single location; you can actually straddle your microservices across this hierarchy.
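
As a concrete illustration of that placement heuristic, here is a minimal sketch (the tier names, latency figures, and cost indices are hypothetical assumptions, not from the panel): for each microservice, pick the cheapest tier in the hierarchy that still satisfies its latency budget.

```typescript
// Edge hierarchy from closest-to-user down to the public cloud.
// All latencies and cost indices are illustrative assumptions.
interface Tier {
  name: string;
  typicalRoundTripMs: number; // expected RTT from the end user
  relativeCost: number;       // cost index; higher tiers tend to be cheaper
}

const tiers: Tier[] = [
  { name: "device",       typicalRoundTripMs: 1,  relativeCost: 10 },
  { name: "far-edge",     typicalRoundTripMs: 5,  relativeCost: 5 },
  { name: "metro-edge",   typicalRoundTripMs: 20, relativeCost: 2 },
  { name: "public-cloud", typicalRoundTripMs: 80, relativeCost: 1 },
];

// Each microservice can land on a different tier: go as far up the
// hierarchy as its latency budget allows, since cost drops going up.
function placeMicroservice(latencyBudgetMs: number): Tier {
  const candidates = tiers.filter(t => t.typicalRoundTripMs <= latencyBudgetMs);
  if (candidates.length === 0) {
    throw new Error("no tier satisfies this latency budget");
  }
  return candidates.reduce((a, b) => (b.relativeCost < a.relativeCost ? b : a));
}

console.log(placeMicroservice(25).name); // "metro-edge"
console.log(placeMicroservice(3).name);  // "device"
```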

Shepherd: It really is. It's about a balance of cost and performance. There's also the old adage: it depends. We don't know yet. We'll talk about the architectural principles that are important to look at today; you just don't know in all cases. The way I would summarize it, it's a balance of: is it latency critical or latency sensitive? Is it focused on a given asset, or very location centric? Or is it across a bunch of different users? If it's latency sensitive, but you need to serve a bunch of users, you're going to do it at metro or regional areas, because you're a little upstream but you've still got that balance of better latency while remaining upstream of lots of users. If it is latency critical, or very location or asset specific, you're going to run it on-prem, on the factory floor, in the car. I would not want my airbag deployed from the cloud. It's that balance. Where things run along that continuum or hierarchy really depends on balancing the tradeoffs. It might change over time, which is, again, why you need to think about how you architect systems today to be able to be flexible with it.

Stoiber: I think that's really interesting. For us, there's a strong tradeoff between how much we can cache versus how small we go at the edge level. If we had an edge for every single user, the caching would be pretty much useless, because you couldn't share any of that data; you might as well not cache in the first place. Maybe there's a tradeoff right there, where metro level might be too coarse and slower than it needs to be, in exchange for better caching, which I think is really interesting. Definitely lots of tradeoffs everywhere. Like Jason said, it depends, as always.

Unifying the Interface and the Logic across Layers of the Edge

Fedorov: There was a great question about trying to unify the interface and the logic across layers of the edge, or between edge and cloud. I'm curious if you have any advice on that. Here, there could be a tradeoff between latency, graceful degradation if for some reason the edge stops being available, and the complexity of the overall system.

Voruganti: I think, as Jason said, it depends. The state of the application matters: if you have to satisfy multiple users, you have to go up in the hierarchy, so state is critical. Also, data aggregation requirements: if your application is using data from multiple sources, then a lot of times you have to go up in the hierarchy to a metro or a cloud to do the aggregation. Also, sheer compute requirements: if you have to crunch through a lot of data and processing, increasingly you can have denser, higher-power racks higher up in the hierarchy in the metro or in the cloud. That definitely applies to tasks like AI model training; more training is getting done at the metro or in the clouds, although there are new techniques coming which can push that to the edge or far edge too. Inference you can definitely push out: once you create your model, using a distributed control plane, you can push that trained model as far out as possible. Again, there are some tradeoffs, but as a rough, high level heuristic, that's how I look at it. It's not edge versus cloud. It's a continuum. In many use cases, you've got to do both.

Shepherd: The continuum really goes from super constrained devices, where we're starting to see TinyML embedded inside of devices (it's compute, but more fixed function), all the way up to regional data centers just on the other side of the Internet Exchange. Where you place workloads along that continuum depends on a variety of factors, as we're saying. Actually, the LF Edge taxonomy white paper, which we put out as a community as part of the Linux Foundation last year, is a good read, and it talks about the inherent technical tradeoffs as you go through the continuum. First: are you latency critical or latency sensitive? That means, are you going to drive it upstream across a WAN to serve a lot of users, the service provider edge as we've called it, or are you going to run it on-prem as an end user, because you have to be right there with the systems, at the point of the subscribers?

The second one is: is it in a secure data center or is it not? If it's not in a secure data center, you have to have a very different security story; you have to assume someone can walk up and touch that box. That changes how you approach it. The third one is: is it so constrained that you have to go embedded with software, where everything's custom, all of the update tools, because of the footprint of that device? Or can you extend the cloud native principles that we've developed in these centralized data centers: platform independence, loosely coupled architectures, microservices? The whole goal of edge architecture as we see it is to extend cloud native left, so to speak, from the cloud to the extreme edge, to the point where you just can't anymore. That's driven by memory footprint, generally speaking. It's like a box with a couple gigs of memory: at that point, you can no longer support virtualization or containerization, you have to go embedded. Then it becomes really painful, because everything's custom.

From an architecture standpoint, what you want to do (I saw the question about separating business logic and all these layers) is abstract the application plane from the hardware and infrastructure below. Make sure that you don't get tied in with any given backend service. Create all these layers of abstraction and extend cloud native as far out as you can, until the hardware can't support it. If you do that properly, then even though it depends and you don't know today, your workloads can be transportable across that continuum. It's really important to be thinking about those layers of abstraction now, when you architect these systems.

Stoiber: I think the really important point there is future proofing. Even if you're not running at the edge yet, you will be very soon. There will be workloads that you're going to be running at the edge. The most important thing you can do now is avoid vendor lock-in or platform lock-in of any kind. Be ready to take that application logic you've written and run it at whatever part of the continuum of the edge that makes sense for your use case, and that makes sense for your application. I think that's one of the trickiest bits, from an architecture perspective. How do you ensure that that is future proof, and keeps working at all of those different points?

Future Proofing Workloads

Fedorov: Isn't that also extremely hard to do? Ultimately, especially in an area as new and still developing as edge computing, if we try to generalize or future proof the APIs, there is very little left to work with. There are lots of capabilities that might unlock performance benefits or advanced functionality but that could be vendor specific. Is that a true statement, or do you see interesting developments in the field that would make you disagree?

Stoiber: For our use case of having metro level edges, we use Fastly, a big CDN provider, under the hood. They have this compute-at-edge framework now that you can use to deploy to their 60 worldwide edge locations. We have to be very careful. We love Fastly, and we have no plans to move off it at the moment, but eventually there's a world in which we don't use it anymore, and so we have to be very careful about at what level we write our application logic. Essentially, for us, it's almost like we have one function that takes a GraphQL query and the response that we got back and determines how to cache it. Then there's the other level, which is the Fastly specific stuff: how do you interface with Fastly's system? How do you tag something in the Fastly cache? How do you purge something from the Fastly cache? We just have to make sure to abstract that properly, to the point that we could, in theory, rewrite that small shim layer for any other provider or machine we use, but keep the "hard" logic of the actual caching and determining how all of that works.
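
A minimal sketch of the shim layer Max describes might look like the following (the interface and names are hypothetical illustrations, not GraphCDN's actual code): the caching decision logic knows nothing about any CDN, and only a thin adapter talks to Fastly.

```typescript
// Provider-agnostic decision: the "hard" logic that should survive
// a move off any particular CDN. (Hypothetical sketch.)
interface CacheDecision {
  cacheable: boolean;
  ttlSeconds: number;
  tags: string[]; // e.g. GraphQL type names seen in the response
}

function decideCaching(query: string, response: unknown): CacheDecision {
  // Real logic would inspect the parsed query and the response types;
  // this stub simply refuses to cache mutations.
  const isMutation = query.trimStart().startsWith("mutation");
  return { cacheable: !isMutation, ttlSeconds: 60, tags: ["Product"] };
}

// Thin shim: the only layer that knows about a specific provider.
// Rewriting this for another CDN leaves decideCaching untouched.
interface EdgeCacheProvider {
  store(key: string, value: string, decision: CacheDecision): Promise<void>;
  purgeByTag(tag: string): Promise<void>;
}

class FastlyProvider implements EdgeCacheProvider {
  async store(key: string, value: string, decision: CacheDecision): Promise<void> {
    // ...Fastly-specific calls would go here (e.g. surrogate keys, TTLs)...
  }
  async purgeByTag(tag: string): Promise<void> {
    // ...Fastly-specific purge call would go here...
  }
}
```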

Voruganti: If possible, you should architect it around abstract data management principles: where do you do data aggregation? Where do you do your data filtering? Where do you do your inference? Where do you want to do your training? These are well-defined tasks, and they can be vendor independent. They can also be edge "independent": depending on the resources available, the contention level, and the security model that you need, you can move these data processing capabilities across the continuum. I think that's how you should try to abstract the design of your application, so that you're not locked into any single cloud provider or any single type of edge hierarchy architecture. Today you might not have enough processing at the far edge, but tomorrow maybe you will get considerable processing there. Then you can say: today, because I cannot put enough GPUs at that location, I will go a little higher up in the hierarchy for more sophisticated model inference, like video inferencing operations. Tomorrow, if more power comes to that location, you just take that container and plop it into the far edge. I think you need to think in terms of abstract data management principles, and then map them to different vendors and different edges.

Shepherd: It even comes down to protocol support. Clouds are offering great infrastructure; this is not about whether cloud infrastructures are good or bad. It's about what key decoupling points you decide on. One is around data acquisition. Why do we have thousands of protocols, when you consider proprietary protocols coming from IoT or just the operations world? Because everyone created a proprietary protocol to lock you into their system. That's no real way to scale long term as the pace of innovation picks up. We're starting to see this big trend towards openness. A big part of what we're doing within LF Edge and the Linux Foundation is creating open source projects that drive standard APIs to help you abstract yourself from these different technologies. EdgeX Foundry and Fledge, for example, are basically protocol brokers from anything to something more modern and common, and they create a decoupling point that's completely open, so you're not tied to one backend. These are the types of tools you want to evaluate, so that you can make sure you have maximum flexibility going forward. The odds that all use cases within one environment or a supply chain will ever go to one cloud are pretty much zero. You have to have multi-tenancy from the edge.
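
To make that decoupling point concrete, the core shape of a protocol broker might look like this sketch (a generic illustration; these are not the actual APIs of EdgeX Foundry or Fledge): per-protocol adapters translate proprietary device traffic into one common reading format, so nothing downstream binds to a vendor protocol.

```typescript
// One common reading shape, regardless of the device protocol.
interface Reading {
  deviceId: string;
  metric: string;
  value: number;
  timestamp: number; // epoch millis
}

// Each proprietary protocol gets one adapter behind this interface.
interface ProtocolAdapter {
  poll(): Promise<Reading[]>;
}

class ModbusAdapter implements ProtocolAdapter {
  async poll(): Promise<Reading[]> {
    // ...a real adapter would read registers via a Modbus client here...
    return [{ deviceId: "plc-7", metric: "temperature", value: 71.5, timestamp: Date.now() }];
  }
}

// The broker fans readings out to any backend (MQTT, cloud, local
// analytics) without the adapters knowing where the data goes.
async function brokerOnce(
  adapters: ProtocolAdapter[],
  publish: (r: Reading) => Promise<void>
): Promise<void> {
  for (const adapter of adapters) {
    for (const reading of await adapter.poll()) {
      await publish(reading);
    }
  }
}
```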

Stoiber: I think the main thing I hear us all saying is, it depends. I feel like that's the summary so far: all of this depends.

Containerization and Tooling, at the Edge

Fedorov: I think right now, at least with containerization and some unification, there are tools available. As Jason mentioned, though, not every layer of the edge is physically capable of running containers or virtualized infrastructure. I assume that especially at the last frontier, at the IoT end with low end sensors and computing devices, there might not be an option to do that, physically. Ultimately, the problem of abstraction becomes really hard, and you have to decide whether not to do it, or to do it and lock into specific capabilities.

Shepherd: When it comes to orchestration and underlying tools, I see four major buckets. One is constrained devices, where everything's custom; it just is what it is, based on the resources. Another is the more centralized one, up to regional data centers, which is generally a solved problem where we're seeing innovation happening around Kubernetes; the scale factor gets a little higher when you get out to metro areas, of course. Another bucket along the continuum is client devices: PCs, mobile devices, tablets, whatever. That's also a solved problem around Windows, iOS, Android. Then there's all of these devices, from IoT gateways to server clusters, that are distributed outside of physically secure data centers, and there's going to be a growing number of them; they could be on a truck, stuffed in a closet in a retail store, or on a manufacturing floor.

That's very different than data center compute, because in the data center the controller calls the server and says, here's what you should be running, update this. It has a pretty much constant connection, usually fiber, between that controller and the server. At the distributed edge, outside the fringes of the data center and out in the field, you have to assume you're going to lose connection, because you will at times, and the box has to phone home. That works through firewalls and NATs and things like that. Otherwise, it just has to keep running if it loses connection. The principles are similar; we're saying move cloud native out along with these layers of abstraction, but you necessarily need different tools, because they actually have to operate in reverse. These are other considerations. We've seen a lot of people trying to take data center tools, really good stuff that's been around for a while, and apply them out in the field, and it falls apart, not only because of footprint, but because it actually has to operate in the opposite direction.
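
A minimal sketch of that phone-home pattern (the endpoint, fields, and interval here are hypothetical) shows the inversion Jason describes: the device initiates the connection to the control plane, which works through firewalls and NATs, and it keeps running its last known desired state when the network drops.

```typescript
// Phone-home edge agent sketch: the device polls the control plane
// instead of the controller pushing to the device.
interface DesiredState {
  workloads: { name: string; image: string }[];
}

let lastKnownState: DesiredState = { workloads: [] };

async function phoneHome(controlPlaneUrl: string): Promise<void> {
  try {
    const res = await fetch(`${controlPlaneUrl}/v1/desired-state`);
    if (res.ok) {
      lastKnownState = (await res.json()) as DesiredState;
    }
  } catch {
    // Connection lost: expected at the edge. Keep the last known state.
  }
  reconcile(lastKnownState);
}

function reconcile(state: DesiredState): void {
  // ...compare running workloads against state.workloads and converge...
}

// Poll on an interval rather than listening for inbound connections.
setInterval(() => phoneHome("https://controller.example.com"), 30_000);
```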

Fedorov: That's actually a very good point on the different layers and some of the complexities.

Antipatterns and Mistakes to Avoid

I think at this point it might make sense to talk about things not to do, and whether you have seen any antipatterns. Is there any general advice you could give to avoid making early mistakes?

Voruganti: One clear antipattern: historically, data was moved to where the compute was located. Now I think we are entering a world where, because of the size of the data, in many cases you have to move compute to where the data is located. If the data is generated in the cloud, process it in the cloud. If the data is generated at the edge, process it as close to that edge as the compute power allows. Don't move data that is getting generated at the edge to the cloud for processing, and vice versa: if the data is getting generated in the cloud, don't artificially move it to the edge to process it, because you will incur egress costs. That's the model you need: move the compute close to where the data is, in most cases.

Shepherd: The last thing the world needs is another IoT platform, so don't build one of those. There are already too many of those. It's all about consolidating. What not to do in general: don't reinvent the plumbing. There's a lot of good tooling out there, and a lot of good stuff happening through open source. Yes, we need more standards. We had the running joke that there are 500 IoT platforms, and now it's blurring into edge. We don't need 500 platforms, we need consistent infrastructure. Then the winners will have necessarily unique hardware, software, and services around that core infrastructure.

In terms of what not to do, don't reinvent tools that are already there. Focus on differentiation. Also, having an open foundation isn't just important for preventing lock-in and things like that; the real potential of digital over time is all these interconnected ecosystems crossing over. Whether it's B2B2C, like retail, insurance, and utilities crossing into the home, or supply chains, manufacturing, the energy ecosystem, which is actually quite fragile now with the macrotrends coming around electrification and all that. What we need is more consistent plumbing that builds trust into how data is exchanged across these different ecosystems. The only way that's going to happen is on an open foundation. There's work we're doing within Project Alvarium, as an example, building this concept of trust fabrics: how do you start to interconnect things with measurable data confidence? The main point is, there's so much opportunity out there. As a developer, just make sure to take a look at some of these macrotrends, even if it's easy to just latch on to some tool set or some cloud to get going. Everybody wants to get to hello world quickly, but there's an opportunity cost for what you might be able to get to in the long term, in terms of the different use cases and experiences we can build.

Stoiber: I would say very broadly and generally, only move to the edge if you really need to. Think carefully about your use case, because it does come with cost and complexity tradeoffs. You can make those tradeoffs just fine, and there's tooling around doing that, but you have to make sure it's a thing your application actually needs. If you don't need it, it's just unnecessary complexity and cost. Know when you've got to move to the edge: if you're latency sensitive or even latency critical, absolutely do it. Unless you have a really good reason, though, it is definitely easier, and there's a lot more tooling, if you stick with a single data center.

Shepherd: In a perfect world, we'd do everything centrally; it is easier, for a variety of reasons. That's also why I bring up data confidence: with data that originates at the edge, you can only drive value out of it if you know where it's been and whether it's real. You can use edge computing technologies to inject confidence into that data; that's the goal of efforts like these. Now you have basically turned security into a profit center instead of a cost center, and you can drive new monetization on top. And no, it's not just ledger technologies; you can't just put some blockchain on it. Blockchain will tell you where stuff has been, but it doesn't tell you if it's real. There are a lot of considerations even when it makes sense to centrally process things. There are investments you can make in edge architecture just to inject trust into data, if nothing else. There are considerations beyond the pure performance and cost benefits.

Fedorov: Max, your point about moving to the edge only when it's needed really reminds me of some of the observations around microservices. Microservices became very popular, and very often they were used for a very good reason, but not always. I think there were many hours of engineering time spent solving problems that shouldn't have been solved in the first place.

Shepherd: Yes, in general, there's a lot of solutions looking for problems out there.

Voruganti: What we are also finding is that people are taking more of a data fabric centric view of processing. What I mean is that they're not just looking at the stack in the cloud or the stack at the edge. Instead, they're saying: we have all of these different locations where we're doing compute and potentially holding data. They're looking at technologies at the fabric level: a strongly consistent fabric across everything you have, like an object storage fabric across all these locations, or traditional file systems with distributed caches across these locations. They're trying to treat data and storage not as a single silo, not as a single database or a single object storage system. Instead, they're looking at it as a fabric spanning all these different locations. By employing technologies of that type, a lot of the state management for your application, and for these stateless containers as you move them around, becomes much easier, because the underlying fabrics are doing all that heavy lifting for you. They are moving the containers. They are replicating the data to a different location, or caching it at the locations you want. That's a lot of the hard work. Don't reinvent the wheel; there are a lot of fabric solutions available that you should be piggybacking on.

Orchestration Tools

Fedorov: What's your thinking on orchestration tools? Max, I think you briefly mentioned running and managing the complexity at the edge. What's your current thinking, especially as we go to the parts of the edge closer to the users, which usually means an increased number of edge points? What's your assessment of the state of the field, and any insights about that?

Shepherd: There are obviously a lot of tools out there, open source or not, and there are these main paradigms with different considerations. There are good things like Kubernetes in general; there's K3s, a lighter weight version that people are using; and for more distributed edge use cases, KubeEdge. Even with those tools, getting K3s on one box, much less a thousand of those distributions, is not easy. I think people also tend to forget that there's a difference between orchestrating the underlying hardware and the runtimes deployed to it, versus orchestrating and managing clusters, for example, in band with that Kubernetes distribution. A lot of people don't think about the underlying orchestration, because when you're in a data center, you can script to a handful of servers, no problem; you get the basics going, cool. But try to do scripting for 1000 devices in the field, good luck. That's where the tools have to have a different scale consideration, of course.

Also, there are the security factors. EVE-OS is a tool that we leverage from Linux Foundation, LF Edge. EVE-OS was built specifically for the needs of the distributed edge. It's not only Linux; it's got embedded hypervisors so you can run legacy workloads, and there's a lot of legacy out there. It has zero trust security down to the silicon: measured boot, remote attestation, crypto IDs, so you have no local username and password. That's another problem: people just walk up, log in to a box, and start loading stuff on it. It's a ground-up approach. The net message is that it's important to think about which edge before you start talking about which orchestration tools. While it seems that things can be applicable across the board, there are very different considerations.

KubeEdge

I think KubeEdge is a really great open source effort, but there are some gaps that other tools can help fill. We'd like to see communities collaborating to solve those core problems, so we can all focus on the value.

Infrastructure Level Orchestration

Voruganti: The way we are looking at the orchestration piece is at the infrastructure level: the hardware, the bare metal servers, the infrastructure at multiple locations, and the network connectivity. How do you orchestrate all of that? One key thing is that we need grouping mechanisms, so that you have a single way of logging into all these different distributed locations, with the compute servers, the storage, the security devices, the interconnection fabric, all of that. How do you orchestrate that at the infrastructure level? It's hard. Otherwise, you have to make multiple calls to different vendors and juggle so many different credentials. It's not that easy. And it includes the clouds, because most solutions are hybrid, not standalone. For infrastructure level orchestration, you need grouping mechanisms for log aggregation, for authentication, for setting security policies. That is something the standards groups can help with.

Then you have the Kubernetes or container level and the service meshes; at that level, you also need to do orchestration. Then there's the application level: for AI, for example, you're getting these federated AI orchestration frameworks. There is KubeFATE, there's Federated TensorFlow, there is PySyft; many such orchestration frameworks are coming that help orchestrate at the application level on a cloud native platform, so that you train the model here and run inference on the model there. Basically, there are layers of orchestrators required before you can finally deploy something like a distributed video surveillance application spanning multiple locations. When I think of orchestration, there are different layers that people have to work through to get what they need.

How to Solve Problems at the Edge

Fedorov: If I'm an engineer trying to solve a problem at the edge, can you give me your single best piece of advice, based on your experience?

Stoiber: The most critical piece I would mention, again, is making sure to abstract the underlying infrastructure away. Make sure you're not locked in by the easy button, by that one provider that makes Hello World easiest, as Jason said. Actually think about: if we might need to move this around in the future, how would we enable that? I think that level of abstraction can be really important as this whole edge continuum keeps developing.

Shepherd: The biggest challenge with any of these things is balancing long-term flexibility and potential against the easy button. It's very attractive to just latch on to PaaS services and start building stuff. When you think about the long-term potential, and once you get the bill after the data starts flowing, you're going to really want those abstractions. We see this all the time with customers: initially, they jumped to a cloud centric model and leveraged the services, really good services. Then they realized the practical realities of the bandwidth consumption, or that they were losing control of their data, because the multi-cloud strategy is "send me all your data, and then you can pay me more money to send it anywhere you'd like." If you abstract it at the edge, and you impart trust into it, you're in control of wherever it goes, whether to the cloud or to other on-prem systems. It is important to find that balance between the easy button and flexibility. There are also more offerings coming out where people are building edge as a service, so you get the easy button and that flexibility. That's a classic challenge, I think, for developers.

Voruganti: I want to echo Max's point: make sure you really have to go to the edge. Right now, for about 80% of use cases, I think the metro edge is good. It can satisfy your latency requirements, and it's much easier to deploy, as Jason said. You can deploy your containers and your cloud native infrastructure and manage them pretty easily. I would recommend paying close attention to whether you can satisfy your requirements there.

 


 

Recorded at: May 21, 2022
