
Is Edge Computing a Thing?

Key Takeaways

  • "Edge" is not a place, but rather, a new way to compute on streaming data.
  • Real-time analysis of streaming data demands that we kick the REST + Database habit.
  • Cloud computing is built on stateless microservices and stateful databases, whereas edge requires stateful "things", boundless data streams, and dynamic context.
  • SwimOS is an example of a stateful, real-time platform for applications that process real-time streaming data.

In this series of InfoQ articles, we take a look at multiple dimensions of the edge story. We review considerations for virtual and physical security. Another piece explores the data streaming aspects of edge that are more about “how” and not “where.” We also explore the dynamics of constantly-changing edge systems, and how cloud-based edge services are lowering the barrier for experimentation.

This InfoQ article is part of the series "Edge Cloud". You can subscribe to receive notifications via RSS.


Enterprise architects are comfortable using public cloud services to help quickly build applications. The (REST API + stateless microservice + database) model for cloud-native apps is a pattern that has been key to scaling the cloud (any server will do) and increased use of abstractions such as Kubernetes further simplifies the operational deployment and management of microservices.  But there’s a class of applications for which today’s cloud-native stacks are an awkward fit.  

I’m not referring to lift+shift legacy applications that are expensive to run in the cloud (and will take forever to re-write). Instead, my focus is a class of applications, increasingly important to enterprise customers, that analyze streaming data – from production, products, supply chain partners, employees and more – to deliver real-time insights that help drive the business. Increasingly these apps are put into the “edge computing” category. In this piece I’ll dissect the requirements for these apps and (hopefully) convince you that, for streaming data apps at least, the “edge” is not a place. Instead, it’s a new way to compute on streaming data.

There are many uses for advanced computing embedded in next-gen products “at the edge”, from cars to compressors. The engineers who develop them will use the best CPUs, ML acceleration, hypervisors, Linux and other technology to build vertically integrated solutions: better cars, compressors and drones. Is this “edge computing”? Sure, but not in a generic sense – these are tightly integrated solutions rather than computing systems that could serve a broad set of applications – interesting, but only narrowly. But what about the data that these and other connected products produce? “Smart” devices (with a CPU and lots to say) are being connected at a rate of about 2M/hour, and there are already a billion mobile devices in our hands – so networks are increasingly flooded with streaming data. Applications that drink from this firehose are becoming common, and we need tools to help devs create them.

The term “edge computing” implies a generic capability that is different from cloud computing.  While there are often requirements such as data volume reduction, latency or security/compliance concerns that dictate an on-prem component of an application, other than these, does edge computing have unique requirements?  It does: Real-time analysis of streaming data demands that we kick the REST + database habit.  But there is nothing that is unique to the physical edge.  This is great news because it means that “edge applications” can run on cloud infrastructure, or on prem.  “Edge computing” is definitely a thing, but it’s about processing streaming data from the edge, as opposed to running the application at the physical edge.   Edge applications that process streaming data from real world things have to: 

  • Continuously and statefully analyze new data: Stateless microservices can only update or consume from a database.  That means their performance is dominated by database response times.  Often many database accesses will be required to compute a contextual result, making performance non-real-time.  Stateful processing is key.
  • Always have an answer: The “store then analyze” approach needs to become “analyze, react, then (maybe) store”, without database roundtrips.  Storing raw data for later analysis is slow and batch focused, and moreover most raw data is only ephemerally useful. 
  • Process all data: Sub-sampling is inadequate.  The software stack needs to process every event, on-the-fly, and share insights in real-time.
  • Analyze and visualize in context: An event is of value only in the context in which it was generated, so the interrelationship between event sources is key, at a granular level.
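The requirements above can be sketched in miniature: a hypothetical per-source handler that keeps all state in memory, analyzes and reacts to every event as it arrives, and defers storage to the background. The names and the alert threshold here are illustrative, not part of any real platform.

```python
from collections import defaultdict

class SourceState:
    """In-memory state for one data source."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

states = defaultdict(SourceState)

def on_event(source_id, value, alert_threshold=100.0):
    """Analyze and react to every event statefully, with no database
    round-trip. Persistence, if needed, would happen asynchronously."""
    s = states[source_id]
    s.count += 1
    s.total += value
    mean = s.total / s.count
    # React immediately: an answer is always available from in-memory state.
    return "ALERT" if value > alert_threshold else f"mean={mean:.1f}"
```

Because the state for each source lives in memory, every event is processed on the fly and the handler always has an answer, without a “store then analyze” detour.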

These requirements for stream processing software are independent of “where” the solution runs. If the data sources are fixed, then it makes sense to co-locate some compute nearby – particularly to deal with data reduction. But a cloud-hosted stack is needed for any solution that processes data from mobile devices.   

OK, what’s different? Cloud computing builds on a powerful triumvirate – stateless APIs, stateless microservices, and stateful databases – whereas streaming applications need different infrastructure abstractions: 

  • “Things” are stateful: The stateless model of the web serves traditional cloud services well because it lets any server (including “serverless” Lambda / Functions) process an event, but it has a massive drawback when processing streaming data: for each new event, an app must load its code and the previous state from a database, compute a new value, and store the new state back in the database. This bounds performance to the round-trip time to the database, which inevitably is a million times slower than the CPU. We need a computing architecture that can deal with real-world state changes – “things” change state often, and changes to one thing propagate to others.
  • Algorithms must be adapted for boundless data:  Algorithms that compute analytics, learn or predict must be adapted to deal with boundless data – computing on every new event.
  • Context is vital: Events aren’t meaningful in isolation.  Real-world relatedness of things such as containment, proximity and adjacency are key in applications that reason about events in context.  So, edge computation is necessarily “in (a dynamically changing) data graph” rather than “over (a pre-built) graph”.
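As one concrete instance of adapting an algorithm to boundless data, Welford’s online algorithm maintains a running mean and variance with O(1) state and one update per event – no need to retain or re-scan the stream. A minimal sketch:

```python
class OnlineStats:
    """Welford's online algorithm: mean and variance over a boundless
    stream, updated on every new event with constant-size state."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Population variance; use m2 / (n - 1) for the sample variance.
        return self.m2 / self.n if self.n > 0 else 0.0
```

Analytics, learning and prediction over streams follow the same shape: fold each event into compact state the moment it arrives.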

A key observation is that “things” in the real world change state concurrently and independently, and their state changes are context specific – based on the state of (other things in) their environment. It is the state changes in things that are critical for applications, and not raw data. Moreover, whereas databases are good at capturing relationships that are fixed (buses have engines), they are poor at capturing dynamically computed relationships (a truck with bad braking behavior is near an inspector). Real-world relationships between data sources are fluid, and the application should respond differently based on computed relationships such as bad braking behavior. Finally, the effects of changes are immediate, local and contextual (the inspector is notified to stop the truck). The dynamic nature of relationships suggests a graph database – and indeed a graph of related “things” is what is needed. But in this case, to satisfy the need to process continuously, the graph itself needs to be fluid and computation must occur “in the graph”.

In summary: edge computing at any scale demands stateful processing, and not the usual stateless microservice model that relies on REST and databases. But rather than make engineers scratch their heads and learn something new, we have to provide an approach that is easy to adopt – using familiar dev and devops metaphors so that engineers can quickly deliver stateful applications that offer real-time insights at scale. Below we discuss a powerful “streaming-native” application platform called SwimOS, an Apache 2 licensed platform loosely modeled on the actor model, that builds graphs of linked actors to statefully analyze sources of streaming data in real-time. Developers create simple object-oriented applications in Java or JavaScript. Streaming data builds an automatically scaled-out graph of stateful, concurrent objects – called web agents – actors that are effectively “digital twins” of data sources.

Each web agent actively processes raw data from a single source and keeps its state in memory (as well as persisting its state in the background to protect against failures). Web agents are thus stateful, concurrent digital twins of real-world data sources that at all times mirror the relevant state of the real-world things they represent. The diagram shows web agents created as digital twins of sensors in a smart city environment.

Web agents link to each other based on context computed from changes in the data, dynamically building a graph that reflects real-world relationships like proximity, containment, or even correlation. Linked agents see each other’s state changes in real-time. Agents can concurrently compute on their own state and that of agents they are linked to. They analyze, learn and predict, and continuously stream enriched insights to UIs, brokers, data lakes and enterprise applications.
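The linking behavior can be illustrated with a toy observer graph – this is a sketch of the idea, not the SwimOS API: each agent holds its own in-memory state, and any state change is pushed immediately to linked agents.

```python
class Agent:
    """Toy stateful agent: in-memory state plus links that receive
    every state change (illustrative only, not the SwimOS API)."""
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.observers = set()    # agents linked to this one
        self.neighbor_state = {}  # latest state seen from linked agents

    def link_to(self, other):
        """Subscribe this agent to another agent's state changes."""
        other.observers.add(self)

    def set_state(self, key, value):
        self.state[key] = value
        for agent in self.observers:  # push the change in real-time
            agent.neighbor_state[(self.name, key)] = value
```

A linked agent never polls or queries a database; a neighbor’s state change simply arrives, and the agent can compute on its own state plus the mirrored neighbor state.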

The diagram shows that the sensors at an intersection link to the intersection digital twin.  Intersections link to their neighbors, enabling them to see state changes in real-time.   The links are dynamically computed: membership/containment and proximity are used to build the graph using insights computed by the digital twins from their own data.

SwimOS benefits from in-memory, stateful, concurrent computation that yields several orders of magnitude performance improvement over database-centric analysis simply because all state is in-memory and web agents can compute immediately when data arrives or a contextual state change in another linked agent occurs.  Streaming implementations of key analytical, learning and prediction algorithms are included in SwimOS, but it is also easy to interface to existing platforms such as Spark.    

SwimOS moves analysis, learning and prediction into the dynamically constructed graph of web agents. An application is thus a dynamically constructed graph built from data. This dramatically simplifies application creation: the developer simply describes the objects and their inputs, and the calculations they use to determine how to link. When data arrives, SwimOS creates a web agent for each data source, each of which independently and concurrently computes as data flows over it, and in turn streams its insights – such as predictions or analytical results. Each agent is responsible for consuming raw data from its real-world sibling, and links dynamically to other agents based on computed relationships. As data flows over the graph, each web agent computes insights using its own state and that of other agents to which it is linked.

You can see SwimOS in action in an application that consumes over 4TB per day of data from the traffic infrastructure in Palo Alto, CA, to predict the future state of each intersection (click on the blue dots to see the predicted phases, and the colors for down-timers of each light).   This application runs in many cities in the USA, and delivers predictions, on a per intersection basis, via an Azure hosted API to customers that need to route their vehicles through each city.  The source code for the application is part of the SwimOS GitHub repo. Starting with SwimOS is easy – the site is complete with docs and tutorials.

In SwimOS the graph is a living, dynamically changing structure. In the earlier example (a truck with bad braking behavior is near an inspector), the relationship is dynamically computed and ephemeral; the link between the inspector and the truck is thus also ephemeral – built when the truck enters a geo-fence around the inspector, and broken when it leaves.
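A sketch of that ephemeral, geo-fenced link logic – coordinates and radius are illustrative, and a real application would use geodesic rather than planar distance:

```python
import math

def update_link(truck_pos, inspector_pos, radius=100.0):
    """The link exists exactly while the truck is inside the geo-fence:
    built when it enters, broken when it leaves. Positions are planar
    (x, y) coordinates in meters for illustration."""
    return math.dist(truck_pos, inspector_pos) <= radius
```

Re-evaluating this predicate on each position update is what makes the relationship – and therefore the link – dynamic rather than pre-built.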

An application in SwimOS is a dynamic graph of linked web agents that continuously and concurrently compute as data flows over the graph. As each web agent modifies its own state, that state is immediately visible to linked web agents in the graph. This is achieved through streaming – a link is in effect a streaming API, and takes the form of a URI, just like a REST API call. SwimOS uses a protocol called WARP, which runs over WebSockets, to synchronize state and deliver streamed insights. The key difference is this: each web agent transforms its own raw data into state changes, and those state changes are streamed to linked web agents in the graph. They in turn compute based on their own states, and the states of agents to which they are linked.

A SwimOS application is built from the data, given a simple object definition for each source.  A single app is easy to build.  But there is another benefit in this approach: If an application is re-used in multiple sites, no changes are required.  For example, data in a smart city application is used to build an application that predicts future traffic behavior in the city.  But note that there is no specific model required for each city.  Instead, only the local context for each intersection is used, by the digital twin of the intersection itself, to learn and predict.  In each city, the app for the city is built from the data, without needing to change a line of code.

A word about transformations: in traditional analytical stacks – for example using Spark – data is transported to the application, which must transform data to state, for each thing, and save it to a database before the analytical app can operate on the state of the real-world sources. SwimOS transforms raw data into the relevant state of the source in each web agent. The data-to-state transformation is done by each digital twin, and the graph of web agents is a mirror of the states of the real-world sources. Each digital twin can then compute locally, at high resolution, using its own state and the states of linked web agents. This proceeds concurrently, at CPU and memory speed, without needing to access a database. The performance increase is huge, and the reduction in required infrastructure is commensurate. We frequently find that a SwimOS application uses less than 5% of the infrastructure of a database-centric implementation of the same application.
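The data-to-state idea can be sketched as a twin that collapses raw readings into a compact state and propagates only state changes – the class name, states and threshold are hypothetical:

```python
class SensorTwin:
    """Sketch of a digital twin that turns raw readings into a compact
    state and reports only state *changes* downstream (illustrative)."""
    def __init__(self, threshold=50.0):
        self.threshold = threshold
        self.state = None  # "busy" or "quiet"

    def on_raw(self, vehicles_per_min):
        new_state = "busy" if vehicles_per_min > self.threshold else "quiet"
        if new_state != self.state:
            self.state = new_state
            return new_state  # stream the change to linked agents / UIs
        return None           # raw data absorbed; nothing to propagate
```

Most raw readings are absorbed into existing state; only the meaningful transitions flow onward through the graph, which is where the data-volume reduction comes from.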

Finally, SwimOS digital twins are real-time streaming mirrors of real-world things – things in the edge environment. They compute and stream insights on the fly. Visualization tools are vital for such applications. SwimOS includes a set of JavaScript and TypeScript bindings that enable developers to quickly build browser-based UIs that update in real-time, simply by subscribing to the streamed insights computed by web agents.

In summary: Edge Computing is definitely a thing, but the computing need not occur at the edge.  Instead what is needed is an ability to compute (anywhere) on streaming data from large numbers of dynamically changing devices, in the edge environment.  This in turn demands an architectural pattern for stateful, distributed computing.  SwimOS is an example of a stateful, real-time platform for applications that process real-time streaming data.

About the Author

Simon Crosby is the CTO of SWIM.AI. Previously, Simon was a co-founder and CTO of Bromium, a security technology company. At Bromium, Simon built a highly secure virtualized system to protect applications. Prior to Bromium, Crosby was the co-founder and CTO of XenSource before its acquisition by Citrix, and later served as the CTO of the Virtualization and Management Division at Citrix. Previously, Crosby was a principal engineer at Intel. Crosby was also the founder of CPlane, a network-optimization software vendor. Simon has been a tenured faculty member at the University of Cambridge. Simon Crosby was named as one of InfoWorld’s Top 25 CTOs.


