
Building Applications from Edge to Cloud



The panelists discuss the benefits and limitations of edge technologies and how to adopt them in existing applications and deployments.


Renato Losio is Principal Cloud Architect @funambol (moderator). Kristi Perreault is Principal Software Engineer @Liberty Mutual Insurance. Luca Bianchi is CTO @Neosperience. Flo Pachinger is Developer Advocate @Cisco. Tim Suchanek is Co-founder and CTO @Stellate.

About the conference

InfoQ Live is a virtual event designed for you, the modern software practitioner. Take part in facilitated sessions with world-class practitioners. Hear from software leaders at our optional InfoQ Roundtables.


Renato Losio: In this session, we are going to be chatting about building applications from edge to cloud, and also about understanding what we mean by edge and what we mean by from edge to cloud.

Before introducing today's panelists, just a couple of minutes to clarify what we mean by building applications from edge to cloud. Edge computing has added a new option in the last few years, delivering data processing, analysis, and storage close to the end users. We'll let our experts define what we mean by close to the end user, and what the advantages are. Let's see as well what we mean by from edge to cloud. Data is no longer confined to a data center or to a bucket in a specific region; it is generated in ever-growing quantities at the edge, and processed and stored in the cloud. Cloud providers, not only the big ones but many others as well, have extended their infrastructure as a service to more locations and more entry points, and integrated different options at the edge. That has also increased the complexity and the choice for developers and for a project. We're going to discuss the benefits and limitations of edge technologies, and look at how we can adopt them in our real-life projects.

Background, and Journey through the Edge

My name is Renato Losio. I'm a Principal Cloud Architect at Funambol, and an InfoQ editor. I'm joined by four industry experts who have experience on the edge side; that is definitely not my field. I'd like to give each one of you the opportunity to introduce yourself and share your own journey or expertise on the edge part, and why you feel it is important. I will start by asking each of you for a couple of sentences on who you are, and where you're coming from in this journey to the edge.

Luca Bianchi: I am Luca Bianchi, CTO of Neosperience, an Italian company focused on product development and helping companies build great products. During my career, I had to face a number of challenges with companies that needed to push data back from the cloud to the edge. We started in the marketing domain, where data needed to be reached at the edge. Then in recent years, I started focusing on healthcare and security applications of machine learning. I am somewhere in the middle of the journey to the edge. I don't know which part is behind me and which part is in front of me, but I have enough scars from it.

Tim Suchanek: It's a dear topic to me, as I'm spending my full time working on it. I'm Tim, CTO of Stellate. We founded the company last year to make it easier to cache GraphQL at the edge. While it would be great to bring the whole application to the edge, a lot has to happen to make it possible for older applications, but also new ones, to benefit from the edge. That's what we're building at Stellate: the ability to cache at the edge.

Flo Pachinger: I'm Flo, a Developer Advocate at Cisco. Edge compute is definitely my topic, because I've already done some IoT projects specifically for edge compute, a lot of them in the manufacturing space in Germany. We had plenty of challenges there: how to connect machines and then get the data to the cloud in an aggregated, or better, form. I've also worked on some computer vision stuff.

Kristi Perreault: My name is Kristi Perreault. I am a Principal Software Engineer at Liberty Mutual Insurance and an AWS Serverless Hero. I work in the serverless DevOps space at Liberty Mutual, so we're pretty heavy on the cloud computing side of this journey, though I know that there are some folks still working at the edge. I've also had some personal experience with it in my own projects, with some IoT, robotics, and cloud computing.

The Main Benefits and Use Cases of the Edge

Losio: I will say that, from your introductions, you come from almost completely different industries and experiences. It's quite obvious that edge technology is not just one sector. It's not just machine learning, not just IoT, not just finance. It is helping in very different areas. More data is now collected and processed at the edge, and used across very different technologies and locations. What do you see as the main benefits and use cases? What are the use cases maybe we don't talk about?

Pachinger: From my experience, it basically comes down to latency, and to having some compute directly at the edge. You have critical compute tasks that need to be executed at the edge. A classic example: I had a robot arm demo and challenge at our last event that required real-time communication. It was a C++ interface, and you need to move the robot arm around. If the latency is really high — even 500 milliseconds; we're not talking about 2 seconds, we're talking about really low milliseconds — then you can't do this. This application can't be in the cloud, so it really needs to be on-prem. Especially in the manufacturing space, with connecting the machines, collecting telemetry data, and safety requirements — logic like making sure a machine shuts off — you have to have something at the edge. What I see especially is that the manufacturing space, utilities, oil and gas, and industries like those involve remote places. You need edge compute there, because the latency, or the distance to travel to another compute node, is just too great.

Suchanek: Maybe before I dive into that, I want to quickly talk about the term edge, because it's easy to use without defining. In the edge industry — for example, CDN providers, which have hundreds of locations around the planet — the edge meant the edge of the network, meaning wherever they have a presence. The edge can also go a step further, as Flo already alluded to with ML applications. In a sense, if you wanted to, you could define what Tesla does with GPUs in its cars as edge as well. If you go that far, you could also say that every browser is an edge. I'm not sure we want to go that far. Coming back to the applications that we see: when you're in the API space, it's quite useful to do certain things at the edge that you don't do at your central server. Just because it's separate infrastructure, that's already a big advantage.

For example, we are building a rate limiting product right now. The fact that a request doesn't even have to hit my server — that I can protect the origin server so the request never reaches it — is a big advantage. We can dive into architecture later, but I definitely see more of a hybrid approach. I have not yet seen really large scale applications that are written from scratch at the edge. They're slowly coming. You basically need all the capabilities that you have in the central cloud at the edge, and they're slowly coming. From our side, what we do is make it easy to cache GraphQL responses at the edge. GraphQL is an API protocol that Facebook came up with a few years ago; in a sense, it's an alternative to REST. The advantage of GraphQL is that it has a very clear set of boundaries and a specification for what every request and response can look like, so you can build more general tooling. That's why we built the caching. With GraphQL, when you know the schema of a GraphQL server, you can do smart things and invalidate, because reads and writes are separated in GraphQL. That's how we utilize it. Without GraphQL, edge caching for APIs is more cumbersome, and so this is a new opportunity we are utilizing.
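As an editorial aside, the read/write separation Tim describes can be made concrete. The sketch below is a minimal, hypothetical illustration — not Stellate's actual implementation: queries are cached under a key derived from the query and its variables, each cached entry records which GraphQL types it contains, and a mutation evicts any entry containing a type it wrote.

```python
import hashlib
import json

# In-memory stand-in for an edge cache. Each entry remembers which GraphQL
# types its cached response contains, so a mutation that writes one of those
# types can invalidate it.
CACHE = {}

def cache_key(query, variables):
    payload = json.dumps({"q": query, "v": variables}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def handle(query, variables, origin, result_types=(), written_types=()):
    """Serve repeated queries from cache; let mutations through and invalidate."""
    if query.lstrip().startswith("mutation"):
        result = origin(query, variables)
        stale = [k for k, (_, types) in CACHE.items() if types & set(written_types)]
        for k in stale:
            del CACHE[k]
        return result
    key = cache_key(query, variables)
    if key not in CACHE:
        CACHE[key] = (origin(query, variables), set(result_types))
    return CACHE[key][0]

# Usage: the second identical query is a cache hit; the mutation evicts it.
origin_calls = []
def origin(query, variables):
    origin_calls.append(query)
    return {"data": {"post": {"title": "Hello"}}}

handle("query { post { title } }", {}, origin, result_types=["Post"])
handle("query { post { title } }", {}, origin, result_types=["Post"])
handle("mutation { updatePost(id: 1) { id } }", {}, origin, written_types=["Post"])
```

Because the second query never reaches `origin`, only two origin calls happen in total, and the mutation leaves the cache empty.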

Losio: I took it for granted what edge is, but actually different people might consider edge to be very different scenarios. I was even thinking — maybe because I'm really cloud oriented — that for me edge was really a point of presence of the cloud provider, but actually it's not necessarily even that. And even those keep growing.

Limits of Cloud Providers Adding Zones and Functionality Closer to End Users

What is, for example, your point of view on where we are going with the cloud? It's obvious that all the providers are adding zones and adding new functionality closer to the end user. Where do we see the limit? Is it just a latency point, or are there other advantages?

Bianchi: I think that there are a number of advantages beyond latency, which is a big point. We had a project in which our customer needed to manage vehicles on the highways and provide feedback on the vehicles while people were driving. They had very tight latency constraints. I also see that when you enter the machine learning domain, you have constraints related to bandwidth. I had the opportunity to work on video processing applications. If you process the video stream using a machine learning model at the edge — and here we need to explain what we mean by at the edge, because you can be directly on the camera, or maybe in a data center close to the camera; there are a number of different options, but you are definitely at the edge — then you're bringing out from the stream only the insights. You're already reducing the bandwidth needs. This is one thing.

The other thing, especially important for the healthcare domain but not only there, is data locality. Sometimes you cannot afford to move people's data, for regulatory or privacy reasons; you cannot move data outside the legal boundaries of the owner. With machine learning at the edge, you can deploy the model directly from the cloud. You can train the model and manage everything in the cloud, then deploy the model to the edge, process the data there, and extract only the anonymized insights from the edge.
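The pattern Luca describes — inference at the edge, with only compact insights leaving the device — can be sketched in a few lines. This is a hypothetical toy; `detect_objects` stands in for a real on-device model, and the frame format is invented for illustration.

```python
# Toy sketch: run inference at the edge and ship only small, anonymized
# events upstream, never the raw video stream.

def detect_objects(frame):
    # Stand-in for an on-device vision model: report whatever labels
    # the test frame carries.
    return frame.get("tags", [])

def process_stream(frames):
    insights = []
    for i, frame in enumerate(frames):
        labels = detect_objects(frame)
        if labels:  # ship events, not pixels
            insights.append({"frame": i, "labels": labels})
    return insights

frames = [
    {"pixels": "<~2 MB raw image>"},
    {"pixels": "<~2 MB raw image>", "tags": ["person"]},
    {"pixels": "<~2 MB raw image>"},
]
insights = process_stream(frames)
# Only one tiny event leaves the edge; the raw frames never do.
```

The bandwidth win is exactly this asymmetry: megabytes of pixels stay local, and only a few bytes of structured insight cross the network.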

Developer Perspective on Cloud Latency

Losio: You already mentioned a good point: it's not simply a problem of latency. I don't know if you have any specific example as well. In your case, you mentioned that you work a lot on the cloud side, as a team with some work on the edge side too. From a developer perspective, does anything change? I'm always this cloud person playing with the cloud. Coming from the developer side, probably the only thought is: ok, edge is maybe something new, but until now I always had to think about the latency of my requests, or the response time of the API I was developing. Do I, as a developer in the cloud, need to think about that, or is that something for someone else?

Perreault: Yes. I like that Tim took a step back here and actually considered different definitions of edge, because I definitely have a different viewpoint than the other folks here. We're all insurance, we're financial data. We're not really using things that are hardware or machine based. When I think of edge, I just think of it in the traditional definition of having your compute and your processing really close to your data and your data storage. I think that we do that in a hybrid mix, with the on-prem that we still have as well as our cloud computing. In terms of a developer, I always think it's important that you understand every step of that process, even just at a very basic level — vocabulary is changing all the time, things become buzzwords and get overused. I think it's important to know what edge is, maybe not even just the traditional definition, but how it helps you and your company.

We've hit on latency a lot, but one thing that our company is concerned with is cost. That's something we're thinking of in terms of edge computing and your cloud solutions as well, and something that you should be thinking of as a developer too, because you want to make sure that you're building applications that are well architected — that's a big focus for us right now. That involves all of those pillars: security, cost, performance, and reliability.

The Cost of Edge Compute

Losio: That's an interesting point of view; I never thought about the cost part. Are you basically suggesting that doing it at the edge is cheaper because the data doesn't have to go around the world? Or is it actually more expensive, because the local zone I'm paying for is a bit more expensive than a data center far away?

Perreault: I think it depends on what you're doing and what you're working on. That's what I mean when I say you've just got to be educated on all the different options, because in some cases it might be cheaper for you to go to the edge than to keep all of that up in the cloud — all the processing, all the data — when you're worried about performance. It might be cheaper in that respect to do it at the edge. It really depends on your use case: how much data you're processing, what you're working on, and how many other services you're interacting with. Then, in the cloud, there's almost edge computing in a way. If you're going multi-region, or cross-account, or using different availability zones, that might not be cost-optimized. It might help from a security perspective, and it might help from a data backup perspective, but it's not going to help from a cost perspective. Just different things to think about, through an insurance and financial lens.

Pachinger: We did a calculation there, and it's completely true, it really depends on the use case. One example: especially at remote places, we placed an industrial router whose connectivity was 3G and 4G, and we actually did edge compute on that router. You can definitely see that if you don't compute this at the edge, then you have costs at the cellular level, with the service provider, plus you also have costs at AWS or whichever public cloud provider — double the cost. We did the calculation, and without any surprise the answer was: go for edge compute, that's a no-brainer. It really depends on accessibility. Is it remote? How much can we leverage classic internet or cable and extend what we have there? In this use case, it was clearly: go for edge compute.

Losio: That's actually a good point, because I remember that not long ago I saw, for the very first time in my life, one of those Snowcone devices that you borrow from the cloud provider with a lot of storage. There was some computing on the machine as well, and I wondered: are we really going to do any computing on this machine? Then I realized that there are use cases — maybe not my use cases — where there's a real benefit, because maybe that box is going to travel for two weeks before going back, so you can take advantage of that; and also because, depending on the connection you have, it might not be that cheap to do it any other way.

Regulatory Compliance and Edge Compute

In that sense, do we see compliance rules as well? I was thinking about the finance world, but not just that one; Luca mentioned that data has to stay in a specific country. Do we see that as an edge problem, or is it an entirely different, more legal problem? I don't know if you have a shared view on that one. As you said, the edge can be many things; I remember you really considered that as part of the edge as well.

Suchanek: When it comes to data compliance, I just have to quote our lawyers when we talked about GDPR, because a bunch of customers are asking about it. The lawyer said, only God knows if it's GDPR compliant. The thing is that it's really tough with GDPR compliance. It's not like other compliance regimes such as SOC 2, where you can just do an audit and you're done; it's a bit more complicated than that. The irony is that, for us at least in caching, there were customers asking to cache something only in a specific region, and it turns out that's not needed. What I mean by that is: if I make a request from Europe, then we only cache in Europe. If I make the request from somewhere else, then we cache there. According to our lawyers, that is enough for GDPR compliance. What some users were asking for is to say, we never cache outside of Europe, even when I as a European citizen am traveling outside the EU. That is my knowledge now. It's a confusing field. With all the bigger enterprise customers we have, their legal team comes and talks to our legal team, and it needs to be figured out. I think there's still a lot happening there to sort these things out.
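The region-scoped caching policy Tim describes — cache where the request landed, never replicate elsewhere — is simple to illustrate. This is a minimal, hypothetical sketch; the region names and the `fetch_origin` callback are invented for the example.

```python
# Sketch: a response is cached only in the region where the request
# landed; other regions never receive a copy.

REGION_CACHES = {"eu": {}, "us": {}, "ap": {}}

def handle(request_region, key, fetch_origin):
    regional = REGION_CACHES[request_region]
    if key not in regional:
        # Stored only at the edge region that served this request.
        regional[key] = fetch_origin(key)
    return regional[key]

# A request arriving in Europe populates only the EU cache.
handle("eu", "profile:42", lambda key: {"name": "Alice"})
```

After the EU request, the US and AP caches remain empty — which is the property the lawyers in Tim's anecdote considered sufficient.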

Perreault: Tim, if I can elaborate on that, too. Obviously, compliance is huge for us. We're dealing with credit card numbers, with quotes and claims and insurance data, and all that PII. It's a huge concern of ours. You hit on a really good point: we're also global, so we have contractors everywhere. Something that you have to think about when you're caching this data is that it's different in the U.S., where I live, versus our Ireland office, or our contractors out in India. There is a fear there. With on-prem, those boundaries are less blurry, but when you start thinking hybrid cloud, there's a fear of going to the cloud: what does this look like, what about security, how do we answer those questions? It's not a one-size-fits-all solution; it depends on what you're doing.

Edge Tech and Hybrid Cloud

Losio: You just mentioned hybrid cloud. I was wondering if there's any specific challenge or difference when we think about edge technology in a hybrid scenario. If I understand correctly, there are two aspects. One, which you mentioned before, is that I might keep something local, not because I don't want to go to the public cloud, but because I need the data next door, and next door the big cloud provider — whether Microsoft, Google, Amazon, or anyone else — is not there. If I'm in Iceland, maybe I need to do it differently with a local provider. I was wondering as well whether, in terms of paradigm, there's still a very big difference with hybrid, or if everything is hybrid in the end, the moment you start to talk about edge?

Pachinger: From that perspective, it's very interesting, because hybrid is, again, a matter of definition. What we see is that the majority of customers are hybrid cloud users; they're using several clouds. I think it's not wrong to say that hybrid, in a way, also extends to the edge — to a specific data center or to another service provider. There again, coming back to Tim's suggestion, we need to define where the edge is. I usually go by the Linux Foundation's edge organization. They released a really good white paper that covers all of it. They say: this is a specific edge category for embedded hardware, this one is more for the service provider, this one is more for rack servers. Where the edge is defined also depends a bit on the hardware. As you said, it definitely fits a hybrid cloud strategy. It all comes down to the hardware, and as for definitions, I usually tend to step back from how to define it and focus more on what it does. Then we can be clearer about things.

Suchanek: Maybe about what you said, Flo — I liked that point about where the boundary is. I think you could argue it's a blurry line. Say you use AWS: I think they already support over 40 or 50 locations, if you count their Local Zones and Wavelength Zones. Ok, one location is not edge; then we take two, also not edge. When is it edge — 10, 20, 30, 40, 50? That's the thing: you just have multiple locations, and you for sure need to put something in front. They, for example, have the Global Accelerator product, or you can use Route 53 for IP based routing or something like that. After all, I think as soon as you have a certain number of locations, you can probably talk about edge. All those locations could basically run everything that a cloud can run. You can check that out for AWS specifically — I don't have any affiliation with them, but we use it. In certain locations they support the whole offering, while in what they call a small edge location, less of the offering. In the end, you have the cloud in a region, and then you just have many of those, and then you can call it edge, in a sense. It's blurry, for me at least.

Bianchi: It's blurry, but I think that there is quite a big difference with edge computing that is under the cloud provider's domain. When AWS, or whoever, has full control over the edge location, it is simpler. If you need to deploy Lambda@Edge, just to give an example, it's much simpler. It's not as straightforward as deploying a Lambda directly in a given region, but it's simpler. If you need to deploy something on an edge device located within a company's boundary, you have to face a number of complexities related to integration, network security, and a lot of other things that make it much harder.

The Future of Edge Technology

Losio: You mentioned before the use cases for edge technology, and I was wondering if there are new use cases coming. If I think about where we were 10 years ago, there was really no edge, or barely any. We were just talking about a bit of caching — thinking of Amazon, you could put CloudFront in front of a load balancer, and that was probably already there at the beginning. Five years ago, some services at the edge started appearing; now it's a very big topic. I was wondering, in 5 or 10 years, what's the direction? Will there be new areas where we use edge technology, or will edge technology simply become the norm and almost transparent to the service?

Suchanek: That's an interesting one — let's look into the future. I think more is moving to the edge. Let's assume we want to move a whole application to the edge: what would need to happen on a technical level, without any centralized data storage anymore? Compute is the easy part — it's not that challenging; compute at the edge we've already got. Data is the challenge. What options do we have there? Caching is one option: we still have maybe five or six main regions where we have our databases, and they are replicated or sharded. The other option is that at the edge we could shard by user, so only that user's data is stored there. What obviously doesn't scale is taking terabytes of data to every edge location and replicating it 200 times. We would need to come up with better solutions for distributing the data. There are many approaches right now, like FaunaDB and others.

I think the solutions are still quite early. There's a saying that you shouldn't trust any database that's younger than five years, so we'll need to wait a bit for all the databases that just came out before we use them in production for edge related things. Cloudflare released an SQLite-based approach. Fly does something similar, streaming SQLite changes and saving them. That still won't give you strong consistency, only eventual consistency, meaning that if I do a write and a read directly after, I might see old data. That's a tradeoff.
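The read-after-write anomaly Tim mentions is worth seeing concretely. Below is a toy model — all names are illustrative — where an edge replica receives writes asynchronously, so a read issued right after a write at the primary can still return stale data until replication catches up.

```python
# Toy model of eventual consistency: the edge replica applies replicated
# writes with a delay, so a read-after-write can observe old data.

class EdgeReplica:
    def __init__(self):
        self.data = {}
        self.pending = []  # replicated writes not yet applied

    def receive(self, key, value):
        self.pending.append((key, value))

    def apply_pending(self):
        # Replication catching up some time later.
        for key, value in self.pending:
            self.data[key] = value
        self.pending = []

    def read(self, key):
        return self.data.get(key)

class PrimaryRegion:
    def __init__(self, replicas):
        self.data = {}
        self.replicas = replicas

    def write(self, key, value):
        self.data[key] = value
        for replica in self.replicas:  # asynchronous in a real system
            replica.receive(key, value)

edge = EdgeReplica()
primary = PrimaryRegion([edge])

primary.write("user:1:email", "new@example.com")
stale = edge.read("user:1:email")   # the update hasn't arrived yet
edge.apply_pending()
fresh = edge.read("user:1:email")   # now the new value is visible
```

A strongly consistent system would block or redirect that first read; an eventually consistent edge serves it fast and stale — exactly the tradeoff being discussed.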

Losio: It'll be an interesting tradeoff, because I wonder if it will be [inaudible 00:30:54] applications that will accept that as a tradeoff, knowing the importance of the milliseconds becoming smaller in that case.

Suchanek: I would be curious to hear Kristi's view on that, because I think there are a bunch of applications that can accept that tradeoff of eventual consistency.

Losio: Probably finance is not one of the main areas.

Suchanek: Probably not.

Perreault: Again, it depends. Actually, the one thing that kept playing in my mind while you were talking about this — it's not new, but it's new to me — is that what we're exploring is more like content delivery networks too. We have a lot of frontend applications, so we have to serve up web pages, and that's how we gather our data. That's one thing that is top of mind for me for edge locations and computing at the edge; I think there's going to be a lot of that in terms of networking. Yes, to your point, financial data, insurance data, credit cards — that stuff is going to be really hard to work around with the edge. That especially brings in the compliance issue a lot too. Every country has different laws and policies; even by state here it's different. I know our process for working with California is completely different than working with Ohio. It's going to be interesting. I wish I had that crystal ball to see who comes up with the solution and what it looks like. I think we're interested in that. Right now, we're all bought in on cloud and we're headed that way. Hopefully, we'll see how multi-region, cross-account, and all of that works out.

Pachinger: What I see right now is maybe an early adopter phase, or an early maturity level. A good application is computer vision, where we have GPUs at the edge in really small form factors. You can do a lot with cameras and image detection. A classic example would be detecting something around vehicles, and then persons, of course. We have lots of use cases like: is the person actually wearing their face mask or not? Then again, with GDPR, it's: are we actually violating the privacy of the user? It depends, of course, on where you can deploy the cameras. At the machine learning level, inference at the edge is still a bit early, but I already see some applications there, and I think they will definitely grow in that area.

Use Cases for ML at the Edge

Losio: Do you see any other use case increasing for machine learning at the edge?

Bianchi: I think that in the next three to five years, a lot of machine learning workflows will be pushed to the edge, due to the fact that a number of specialized devices are coming, which is a good thing. On the other hand, I don't think that right now we have a sufficient level of abstraction from the device. When you are deploying machine learning on a device, you still need to understand what hardware is underneath: what kind of GPU you're using, what instruction set is supported. We have great tools, but they are evolving quickly. I don't think we can abstract away the hardware complexity yet. Hopefully, within the next five years, we will be able to take a machine learning model, cross-compile it for whatever hardware, and then deploy it to that hardware. We are not there yet.

Suchanek: The ML use case that has been mentioned makes total sense. We will also still see more use cases coming, as we now have compute as a general primitive available with something like Cloudflare Workers. I think people are still looking for these use cases. I just want to mention one big one that I haven't heard: A/B testing. There's a whole industry around that — there's, for example, LaunchDarkly, a startup basically built just on that — where at the edge you make the decision about where to send the actual request: routing, generally. That's the big one. Anything that is right now, for example, the HTTP gateway or API gateway from AWS — all these things, I think, are perfect candidates to run at the edge, so you can decide where the actual request goes.
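An edge-side A/B decision of the kind Tim describes can be sketched in a few lines. This is a hypothetical illustration, not any particular vendor's implementation: the variant is derived deterministically from the user id, so repeat requests from the same user always route to the same upstream, and the origin never has to participate in the decision.

```python
import hashlib

# Upstream targets for the two experiment arms (illustrative URLs).
UPSTREAMS = {
    "variant-a": "https://a.origin.example",
    "variant-b": "https://b.origin.example",
}

def assign_variant(user_id, experiment, split=0.5):
    """Deterministically bucket a user into an experiment arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return "variant-b" if bucket < split else "variant-a"

def route(request):
    # The edge picks the upstream; no round trip to the origin is needed.
    variant = assign_variant(request["user_id"], "new-checkout")
    return UPSTREAMS[variant]

target = route({"user_id": "user-123", "path": "/checkout"})
```

Hashing rather than storing assignments keeps the edge stateless: any location computes the same variant for the same user, with no shared session store.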

The Open Standards and Risk of Vendor Lock-In at the Edge

Losio: You mentioned some services from Cloudflare and AWS. One topic that usually comes up when we think about edge technology in general [inaudible 00:36:00], with all the exciting new services and products, at the edge and not at the edge, is: what are the standards, what are the open standards, and what is the risk of vendor lock-in, if I think 5 or 10 years ahead?

Pachinger: It's always a challenge, because we see this with the cloud providers as well: the more you develop your application towards one cloud, the more, of course, they try to leverage that — or you say, ok, let's be cloud native, let's go in a direction that makes it easy across cloud providers. I think at the edge it will be similar. However, we are leveraging a lot of open source container technologies — containerization. I think it will stay at this level; containerization is, I think, the way to go. Then we have K3s, for example, as a really cool solution, and the classic hypervisor, maybe an embedded hypervisor, but nothing super big. It depends again on which edge we're talking about, but for the classic edge, the smart device edge, or the constrained device edge, this is where I see containerization. Also, with containerization you can have it in this category, and you can also have it in a data center, in a rack server, for example. I think this is very important: to keep it open, with no vendor lock-in. A lot of users like this. That's from a software perspective.

Losio: Do you see that as an issue in your experience? If I understand correctly, you're very much an AWS focused company.

Perreault: Yes. Actually, I did want to add on to what Flo said, because I do agree. One of the things that I really like about working at Liberty Mutual is that we are very open to whatever tools and providers you want to use, whatever way you want to go, even though we are very much AWS driven and focused. We do have Azure support and Google Cloud support, and there are folks who go those ways. We have some folks in the machine learning space, or processing large amounts of data and doing analytics, who might prefer Azure and some of its tools over AWS. We also use containerization — that's a great half step from on-prem to cloud, and one that folks reference. We also use Kubernetes, Docker, Fargate; it's all across the board.

With a really large company, the idea of vendor lock-in is a little different, because once we're bought into something, we're bought in, and then it's a long process to move and take that stuff out. We have about 3000 developers and 5000 people working in tech, and we're 110 years old, so there's a lot of data. We're always acquiring new companies too, and we've inherited some of their vendor lock-in or tooling, to bring in and modernize, or to even go in that direction if it's a better solution. The idea of vendor lock-in is interesting, because we have what feels like every vendor all the time, and it hasn't been too much of an issue. It's just a matter of: we're bought in on AWS, and most of our expertise is there. If you choose to do something else, you're more than welcome to, but some of that learning curve might be a little steeper for you.

The Roles Arising from Edge Compute

Losio: I was actually wondering, thinking about our audience of developers and software engineers: how do you approach the edge in the long term? From being maybe a cloud architect or cloud engineer, to being an edge expert? Do you see the need for edge experts? Is it going to be a specialty somehow, with a new title, maybe it already exists, edge engineer, I have no idea? Or do you see it, from a software developer's point of view, as almost transparent: you just deal with your code, and someone else deals with the location of your data? Tim, from the GraphQL point of view?

Suchanek: I think that we probably will not need new job descriptions there. There are a few things unique to the edge, for example interconnectivity, but usually, as Luca also mentioned earlier, the abstractions that you get these days are so good that I can "just write a function and upload it" and the provider takes care of it. In a rather advanced scenario I might have to know more about it, but then we're in distributed systems territory anyway. Generally, with the JavaScript interfaces that are available these days, Cloudflare Workers and so on, Deno also has an edge product, you don't really need to be that specialized.

I also want to add one thing regarding vendor lock-in. There is an important initiative happening right now called WinterCG, where basically all the major edge compute providers came together and said, let's agree on a minimum API that we want to support.
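To make that concrete: the common denominator these runtimes converge on is roughly the fetch-handler shape below, built on the standard Request/Response objects. This is a hedged sketch of the pattern, not any one provider's exact API; the route and payload are invented for illustration.

```typescript
// Minimal WinterCG-style edge handler sketch: a `fetch` entry point that
// maps a standard Request to a standard Response. With a provider like
// Cloudflare Workers you would `export default handler`.
const handler = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Respond directly at the edge, without contacting an origin server.
    if (url.pathname === "/hello") {
      return new Response(JSON.stringify({ message: "hello from the edge" }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("Not found", { status: 404 });
  },
};
```

Because the handler only touches web-standard objects, the same code can in principle run on any runtime that implements this minimum API.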

Losio: I'll ask Luca if he shares the same view, that we don't need special developers.

Bianchi: I don't agree with Tim, because we are seeing that in some domains, especially those related to machine learning, people are naturally specializing in bringing models down to the edge. For instance, we are using AWS Panorama, and it requires a lot of effort. Some people on my team had to study and specialize in compiling the models directly for the hardware. Maybe in the future the abstractions will be more mature, so we'll be able to have the same developer build the same thing for the edge and for the cloud. For now, I think some effort is required.

Losio: If I simplify a bit what you said, it's basically: if you develop a Lambda function, you probably don't care where it's deployed. You can deploy it at the edge or in a standard region. If you deploy a machine learning model, it depends a lot on the hardware, so if the hardware at the edge is different, like Panorama, then you have a different challenge.

Bianchi: Yes. Exactly.
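Illustrating the summary above: a plain function handler like the hypothetical one below carries no assumption about where it runs, so the same artifact can be deployed to a standard region or to an edge location, whereas a compiled ML model is tied to its target hardware. The event and result shapes are invented for this sketch, not AWS's full Lambda typings.

```typescript
// Hypothetical location-agnostic handler: nothing in the code refers to
// a region or to specific hardware, so the deployment target can change
// without touching the function. In a Lambda you would export `handler`.
interface GreetEvent {
  name?: string;
}

const handler = async (event: GreetEvent) => {
  const name = event.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ greeting: `Hello, ${name}` }),
  };
};
```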

Pachinger: It depends also on the industry. A classic example is IT/OT convergence, where OT, operational technology in a manufacturing context, would like to integrate more with cloud providers, with classic IT. From a layer 1 network perspective up to layer 7, the application perspective: collect data, leverage machine learning there too, and get the data to the cloud. This is exactly where specific knowledge is important. How do the machines operate? What are the network requirements? There, expertise is definitely needed. If instead you can just use a Lambda or some cloud native technologies, and you don't care about the specifics at the edge, then in a way a classic standard cloud developer can definitely handle that.

Recorded at:

Aug 30, 2022