Local-First Cooperation: Autonomy at the Edge, Secured by Crypto, 100% Available

Key Takeaways

  • Mission-critical local processes should be handled locally, to reduce downtime
  • Device edge autonomy is required for UX and resilience, and saves cost
  • Replicated event logs with event sourcing, DDD, and CRDTs are a great basis for local-first cooperation
  • We need to rely on cryptography locally: on disk, between peers, and for management

“Why does this thing need the cloud?” You have probably asked yourself that question when you had two computers next to each other and wanted to get some data from one to the other. For example, when editing a TODO list on your notebook and waiting for the cloud sync so that it shows up on your phone. Or when you can’t scan a page because your multifunction printer can’t reach its manufacturer’s cloud services. Or, in my case, when you stand on a factory’s shop floor and can’t get a dashboard of who’s currently doing what because of some problem with the DSL connection.

Enter local-first cooperation: robust cooperation between nearby computing devices.

In this article, we take a look at this new paradigm and answer the above question from the bottom up: how can we keep local use cases working at all times — even when there is no access to the cloud — and how do we keep such a distributed system secure? For the first part of the question, we will introduce and tie together some helpful building blocks, like peer-to-peer networking, the InterPlanetary File System (IPFS), distributed event sourcing, conflict-free replicated data types, the time-warp mechanism, edge computing, etc. For the second part, we lean on tried and true tools like cryptographic hash functions and public-key cryptography.

While we do this bottom up, others are looking at this question top down, bringing cloud services closer to the infrastructure edge of the network and offering compute and storage in a multitude of data centers all around the world. We’ll see how these two approaches can benefit from each other.

Please note that this article is somewhat conceptual in nature since local-first cooperation isn’t yet a widely implemented paradigm with a plethora of tools and libraries at our disposal. The article mentions some emerging implementations for certain niches — like document editing or factory workflow automation — but we as an industry will need to build some coding infrastructure to make our users happy in a local-first fashion. I’m looking forward to reading your comments!

Keeping it local

Before we can dive into the tools and technologies, we need to talk about the purpose of this exercise. Using exciting new technologies mainly to have a computer in Frankfurt (Germany) communicate with a computer in Ashburn, VA (USA) may be fun, but it would be wasteful and do a disservice to your users if all they wanted was a data transfer within Germany.

Our purpose here is to notice whenever the crucial part of an interaction between computers is local, in a physical or logical sense. When neighbors talk to each other, their communication shouldn't travel across many internet links and thousands of kilometers of extraneous distance. That would only make the link between them weaker and slower: bandwidth drops, latency rises, failure probability multiplies. For these reasons, we should strive to communicate locally where possible, and especially where our users intuitively expect it.

We may still use faraway services, be they in the cloud or near the infrastructure edge (like CDNs or the upcoming 5G antenna colocation facilities). Buying computing power and storage capacity as a commodity is extremely helpful whenever the end user isn’t burdened with providing these. The consideration put forward in this article places a counterweight on the scales, though; making our own life easier with centralized compute and storage needs to be balanced against the users’ desire to keep local collaboration working, which implies using their local compute and storage where that is practical.

Another service that becomes much more visible when doing this is neighbor discovery. Talking to a cloud API implicitly solves the issue by using DNS to guide every client to the one service endpoint. Taking this away poses the question of how those two neighboring computers will find each other and start talking. Within data centers, we typically apply a control plane like Kubernetes, which is a central yet local service that sits among the computers that host the business logic. Within my home, though, I wouldn't want to maintain an etcd and register all my devices; I'd want the devices to discover one another using Bluetooth or Wi-Fi. In either case, the actual communication between two computers will then be local, from one network card to the other.

Thinking a bit more about using a central yet local control plane, it's obvious that services will eventually fail when they can no longer discover their neighbors; this is what “central” means. We can mitigate this issue to some degree by caching the neighbors' locations within each service. But in the end, the most reliable option — just considering the number of involved components — is to have no central pieces. This theoretical argument has the practical limitation that making a completely decentralized solution work involves extra code complexity, which in turn tends to reduce reliability. We thus face a tradeoff: if we can use a simple, reliable, and highly available central piece, then we can get pretty far in end-user satisfaction by depending on it; otherwise, end-user satisfaction will be limited by this artificial implementation detail, and we need to invest more into the decentralized approach. Where this needle stops depends on the requirements of your project.

Looking a bit into the future, I think it's quite likely that peer-to-peer technologies such as Wi-Fi Aware or Multipeer Connectivity will mature to the point that we'll see them used more often, tipping the above-mentioned scales towards end-user satisfaction by making the decentralized case easier to implement. Ultra-wideband (UWB), a radio technology that can use very low energy levels for short-range, high-bandwidth communication, also looks extremely promising in this regard, especially since it allows a single receiver to locate the sender with high precision.
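
To make the decentralized case more tangible, here is a minimal sketch of neighbor discovery on a local network, written in Node.js-style JavaScript. It uses plain UDP broadcast as a stand-in for the radio-level technologies mentioned above; the port number and message format are made up for illustration.

const dgram = require('dgram');

// Minimal sketch: each node periodically announces itself on the local network
// via UDP broadcast and records the announcements it hears from its neighbors.
const PORT = 54545;                    // arbitrary example port
const nodeId = `node-${process.pid}`;  // stand-in for a real node identity
const neighbors = new Map();           // nodeId -> { address, lastSeen }

const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });

socket.on('message', (msg, rinfo) => {
  const { id } = JSON.parse(msg.toString());
  if (id !== nodeId) neighbors.set(id, { address: rinfo.address, lastSeen: Date.now() });
});

socket.bind(PORT, () => {
  socket.setBroadcast(true);
  setInterval(() => {
    const announcement = Buffer.from(JSON.stringify({ id: nodeId }));
    socket.send(announcement, 0, announcement.length, PORT, '255.255.255.255');
  }, 3000);
});

A real implementation would authenticate the announcements (see the section on cryptography below) and expire stale entries.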

No matter which path we choose for local neighbor discovery, keeping the business communication local — out of the cloud — removes dependencies from the overall process. The fewer network routers and servers I need, the higher the availability of my service; every part that I remove is one less potential source of downtime. Now, before you say “but the cloud always runs!” you may recall that on Nov 25, 2020, there was a major outage in AWS’s us-east-1 region, and on Dec 14, 2020, almost all Google services were down due to a failure in their user authentication service — both of these incidents affected many other dependent services worldwide. (You may also want to peruse lists like this one.) Another noteworthy issue is that most DSL contracts — including for business users — only mention a typical availability of 97%, which amounts to roughly eleven days of downtime per year, with no lower bound and no strict SLA.

So, this is why it’s best to keep local processes local by having computers communicate directly with one another.

Autonomy on the device edge

Maximal resilience against internet issues is achieved when each computer, phone, or tablet can be used independently, autonomously. A trend in this direction started around the same time as the Reactive Manifesto, originally with Offline First and later Progressive Web Apps. The goal here is to give the end user a good experience, whether running on the latest hardware with a perfect internet connection or on a somewhat dated mobile phone in a rural area.

The necessary autonomy is achieved by storing all required data — program code and documents — locally on the device and using this information for all user interactions. In local-first cooperation, we take this one step further by demanding that the local device shall be able to communicate with devices around it. This way, end users can interact with one another, with their smart home, or at work with hospital equipment, just by having a smartphone in their hand. That level of autonomy will redefine how we appreciate our digital devices; it will give us unprecedented power over our digital environment. And it’s no coincidence that this autonomy also extends to the case where a cloud service is discontinued, which today leaves behind unhappy users with lost data. (If you want to dive deeper into this aspect then I highly recommend the article on Local-First Software by the Ink & Switch research lab.)

In technical terms, autonomy means that we need to design all network interactions with independence in mind. There is no central information dispatcher, so each device decides locally which inputs to accept and which outputs to send around. Contrast this with a centralized approach where all clients send all their information to a server, which decides

  • what is valid
  • who needs to be alerted about what
  • who can request what

These questions remain valid in local-first cooperation as well. They must be answered by each communication partner in all conversations. For example, take the message exchange between a nurse’s smartphone and the digital hospital bed. The phone will need to authenticate the bed to trust its report of which medication is in the intravenous drip bag and to entrust to it the command to increase the drip bag flow rate. The bed, on the other hand, will need to validate the incoming command, decide based on the phone’s identity whether that command will be accepted, and in general, decide what information about the patient to reveal to this particular phone. Both the phone and the bed retain their autonomy; they remain in control of the responsibilities they have been assigned.

We are at the beginning of this journey, peer-to-peer protocols aren’t yet established for most use cases and we are currently inventing them as we go. Eventually, there will be standardization or the simple confluence of protocols, depending on how much regulation the business area in question calls for. I’m confident that when we start seeing the benefits of fruitful cooperation, the pressure will be sufficient for us to agree on data formats, conversation structure, and communication technologies in broad swathes of the application landscape. And if small vendors or hobbyists build the pieces for reliable smart home automation, or if we build a truly social network from the ground up, those will be valuable alternatives to their centrally created cousins.

One nice consequence of this push for autonomy is that the locally implemented mission-critical part runs exclusively on the end users’ computers; no cloud resources are necessary for this piece of the solution. In the hospital example, the bed needs to have a battery and a small computer, and the nurse has a smartphone (probably owned and managed by the hospital for compliance reasons). These two computers suffice for the described workflow and many more; no servers, brokers, or cloud services are needed. So, in addition to keeping response times low, UX snappy, and availability high, we also save money — the end users just need to make sure that the computers are powered and keep running.

With this approach, it also becomes viable to create a non-profit distributed app that doesn’t cause any infrastructure cost. As an example, picture a Trello clone without any cloud back-end. The boards and cards are stored just on my computer like in the olden days, with the modern addition that they can also be synchronized with my phone — and edited there — as long as my phone and computer are connected. The app to do that could be an open-source project with absolutely no money involved, just some enthusiasts who write the code; and all end users “pay” for the service by using their own compute and storage.

Infrastructure edge as optimization

There are more concerns to be covered by an app than the mission-critical part: we need to deliver the app package to the devices, and we may want to collect crash reports or count how many people are using it. All these concerns differ from the local communication part in that these interactions are between each location where the app is installed and the one central place where the app is developed and maintained. This can be a single person or an organization, and in both cases, it’s usually far away from where the app is used.

Such star topologies are well-known on the internet and we have the perfect solution for them: the cloud. Making things faster and more resilient has been an ongoing effort over the past years, leading to decentralized content delivery and API endpoints hosted at many points-of-presence worldwide. This way, the overall system can still be managed in a centralized fashion while the end-user interactions happen within individual countries or regions.

Local-first cooperation can also be bootstrapped via such infrastructure services: I could talk to an anycast IP to find out who else is in my vicinity and how to reach them. This provides better service than relying solely on device-to-device interactions and the slow propagation of information by physically moving mobile phones around. So the infrastructure edge may provide interesting optimizations, as long as we keep it at that level. If two devices are right next to each other, then they must still be able to communicate even when all edge services are momentarily unreachable.

Data storage? Consistency?

The sections above probably have left you wondering: if we eschew anything central and allow all devices to make their own decisions, then where is the database? Where do we keep those documents that people are collaborating on? How do we decide which edit wins if people simultaneously write to the same replicated document?

The answer was already given in veiled form above: it derives from the requirement of per-device autonomy. Every device needs to store at least the data that it needs for its user interaction, and every device needs to have the means of acquiring needed data from the others, in a decentralized, local-first fashion.

The latter leads us to one interesting detail, one of the two hard problems in computing: naming. In a distributed system, it’s actually not trivial to answer the question of who should name a given piece of data. If I create it, I may be tempted to name it as well, but that doesn’t solve the problem that some other computer needs to ask yet another computer for my data, and they both need to know the name that I made up — otherwise, we need a central register for all names, which would be a single point of failure. An elegant way of side-stepping this issue is to take the choice out of the naming process, making it deterministic. This is called content-addressable storage, which means that the name is derived from the contents using a hash function. As a result, everybody in the network will compute the same name for the same piece of data.
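
As a minimal sketch of this idea in Node.js-style JavaScript, the “name” of a document is simply the hex-encoded hash of its bytes; real systems such as IPFS wrap this in richer content identifiers, so this only illustrates the principle.

const crypto = require('crypto');

// Content addressing: the name of a piece of data is derived from its bytes,
// so every node independently computes the same name for the same content.
const contentAddress = (bytes) =>
  crypto.createHash('sha256').update(bytes).digest('hex');

const doc = Buffer.from(JSON.stringify({ title: 'Shopping list', items: ['milk'] }));
console.log(contentAddress(doc)); // identical on every node for identical bytes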

A popular implementation of this principle is the InterPlanetary File System (IPFS), which uses proven technologies similar to BitTorrent for finding and transferring the data bits, provided you know their hash. Finding the data is an interesting enough topic for an article of its own; the basic idea of a so-called distributed hash table (DHT) is that each computer has a name (a hash as well) and the information on where a certain hash’s data is available is stored on a handful of computers whose names are bitwise similar to the hash of the data. With this technology, we can store and retrieve data among a loosely coupled network of computers, in a peer-to-peer fashion; this answers the question of where we keep the data.
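
To illustrate what “bitwise similar” means, here is a toy sketch of the Kademlia-style rule that such DHTs are based on: information about a data hash is kept on the few nodes whose own IDs are closest to that hash under XOR distance. Real DHTs add routing tables, replication, and much more; the helper names here are made up.

// Toy sketch: XOR distance between two IDs (equal-length Buffers, e.g. SHA-256
// digests), and the selection of the k nodes "closest" to a given data hash.
const xorDistance = (a, b) => {
  let d = 0n;
  for (let i = 0; i < a.length; i++) d = (d << 8n) | BigInt(a[i] ^ b[i]);
  return d;
};

const closestNodes = (dataHash, nodeIds, k = 3) =>
  [...nodeIds]
    .sort((x, y) => (xorDistance(x, dataHash) < xorDistance(y, dataHash) ? -1 : 1))
    .slice(0, k);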

This raises another very interesting question, namely who decides what the “current” version of some document is. In IPFS, each version has a different hash, with no indication of which version is newer or more accurate than any other. Therefore, each computer will need to track the current hash for each document it’s interested in, updating it when it is told about edits that happened elsewhere in the network.

One immediate problem here is that locally relevant data — like edits by someone else — may be generated and received at any moment, making the local copy of the document outdated. The local user interaction will therefore always be based on possibly incomplete information. When two users replace the same word with different alternatives, each of them will at first see their own choice on the screen, but as soon as the information about the edits is propagated between devices, the conflict will become obvious.

How will I, as an app developer, now choose which edit to show? One thing that we should always strive for is eventual consistency, meaning that once the information has traveled to all places where it’s needed, everybody’s screen will show the same document — otherwise, it wouldn’t be much of a shared document at all. (For a more stringent yet practically useful definition, I recommend Byzantine Eventual Consistency and the Fundamental Limits of Peer-to-Peer Databases by Kleppmann and Howard.) Luckily, there has been a lot of work in this area in distributed systems research over the past decades, leaving us with several well-understood tools.

Distributed event-sourcing or CRDTs?

One tool that has been tried and mostly discarded is manual conflict resolution. Here, conflicting edits are presented to the user who is now responsible for reconciliation by adding a new edit that supersedes the conflicting edits. We all know this from git and other source control tools, and we also know that these manual merges can be difficult to understand and laborious to execute. This is acceptable in specific areas (like among programmers), but since it can’t be automated, it will never be a pleasant experience for normal people.

This leaves two approaches: starting from a non-linear edit history, we can either flatten that history into a deterministic ordering (so that all devices eventually see the same sequence of events), or we make sure that the operations are formulated such that it doesn’t matter in which order they are applied (i.e., we make them commutative and apply them as they arrive).

The former leads to an eventually consistent distributed event log, the latter to conflict-free replicated data types (CRDTs). We discuss these in the opposite order.

Conflict-free replicated data types

CRDTs were formalized in 2011 and have seen great uptake in both the scientific and software engineering communities (cf. https://crdt.tech/). You can use them in databases like Riak or Redis, with tools like Akka Distributed Data or hypermerge, or indirectly when synchronizing Apple notes, your TomTom preferences, or when trying out PushPin.

A CRDT is a data type used locally on each computer (or node) that describes the currently known state of a document. Changes to this document are made locally, using the type’s API to produce change operations. These operations (in some cases called δ-state) are sent to other nodes to update remote replicas of the same document.

While the CRDT doesn’t include code for this communication, it may place restrictions on how the operations need to be conveyed — for example, some types need causal delivery, meaning that an operation can only be applied once all its causes have been applied (those are all operations that had been applied at the editing node when the operation was produced). The operation messages exchanged between replicas of a CRDT are specific to that particular data type and usually can’t be understood in isolation; they only make sense when they are applied.
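
One common way to implement such causal delivery is to tag every operation with a vector clock; the following sketch shows the check a replica could run before applying an incoming operation (the clock representation is made up for illustration).

// Sketch: causal delivery with vector clocks. An operation from `sender` carrying
// clock `opClock` may be applied once everything it causally depends on has been seen.
const canDeliver = (localClock, opClock, sender) => {
  for (const node of Object.keys(opClock)) {
    const seen = localClock[node] ?? 0;
    const required = node === sender ? opClock[node] - 1 : opClock[node];
    if (seen < required) return false; // a causal predecessor is still missing
  }
  return true;
};

// We have seen 2 ops from A and 1 from B, so A's 3rd op (which saw 1 op from B) is deliverable:
console.log(canDeliver({ A: 2, B: 1 }, { A: 3, B: 1 }, 'A')); // true
// An op from A that already saw B's 2nd op has to wait:
console.log(canDeliver({ A: 2, B: 1 }, { A: 3, B: 2 }, 'A')); // false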

On the infrastructure level, state-based CRDTs not only provide the ability to ingest new information in different orders, they also allow getting new nodes up to speed quickly by replicating one local copy to another device. Such a copy includes enough metadata that the new replica can start from this point and apply subsequent updates just like the others. With δ-state CRDTs, the updates are trimmed down to the essence of each single change, keeping live bandwidth usage minimal as well.

There are CRDTs for many use cases, from simple counters to JSON documents (take a look at automerge), with a variety of behavioral choices in between; considering a set with addition and removal, should the addition or the removal win when the same element is concurrently added and removed? You need to pick the right merge behavior for the use case you are modeling. But once you’ve done that, a CRDT feels like any other data type; you use it in your local program just like a List or a Map.
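
For a feeling of how small the simplest of these data types is, here is a sketch of a state-based grow-only counter (G-Counter): each node increments only its own slot, and merging takes the per-node maximum, so replicas converge regardless of the order in which states are exchanged. Sets, registers, and JSON documents follow the same pattern with more elaborate state and merge functions.

// State-based G-Counter: a map from node name to that node's local count.
const increment = (counter, nodeId, by = 1) => ({
  ...counter,
  [nodeId]: (counter[nodeId] ?? 0) + by,
});

// Merge two replica states by taking the per-node maximum.
const merge = (a, b) => {
  const result = { ...a };
  for (const [node, count] of Object.entries(b)) {
    result[node] = Math.max(result[node] ?? 0, count);
  }
  return result;
};

const value = (counter) => Object.values(counter).reduce((sum, n) => sum + n, 0);

// Two replicas increment concurrently and converge after merging in either order:
const a = increment({}, 'nodeA');
const b = increment({}, 'nodeB');
console.log(value(merge(a, b)), value(merge(b, a))); // 2 2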

Eventually consistent distributed event log

Eventually consistent distributed event logs are both older and newer than CRDTs. Event logs have been used in transactional databases for a very long time — the transaction (write-ahead) log is exactly that. The idea is that the current state of the document is stored only as an optimization, the primary data is the history of changes. This history can be freely replicated between nodes.

If this is done using a consensus algorithm like Paxos or Raft, and clients receive answers only after transactions are committed to this log, then the result is a strongly consistent document — also called strictly serializable. This can only be achieved at the cost of a high degree of centralization; the consensus cluster needs somewhat stable membership management, and the desired result is that it acts “as if it were a single node.” This virtual node may have a higher reliability than any of its physical members, but requiring its involvement for recording local decisions makes this function non-local again; it centralizes it. Therefore, local-first cooperation avoids consensus where possible to foster true autonomy and achieve resilience.

The advantage of the event log approach over CRDTs is that each event is completely free in how it affects the shared document. Assuming all nodes will eventually apply the same sequence of events to the same locally replicated state, they will all come to the same result. The only condition is that the event application must be deterministic, a pure function from state and event to next state.

So how do we make the event log eventually consistent without a consensus cluster? One way is to use a clock to give a local timestamp to each emitted event and also mark each event with the emitting node’s name. If we ensure that the same timestamp isn’t used multiple times by the same node, then the pair of timestamp and node name provides a unique identifier by which we can also order any two events — and this ordering is the same regardless of where we compute it. In terms of timestamps, you can use physical time if you take care to manage your nodes’ clocks, but I recommend using a Lamport clock instead (because nothing’s more annoying than to order the event that enabled the button after the event that the button was pressed, due to clock skew between nodes).
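
A minimal sketch of this scheme looks as follows: a Lamport clock per node, plus a deterministic comparator over the (timestamp, node name) pairs that yields the same total order on every node. The identifier shapes are made up for illustration.

// Lamport clock: tick when emitting an event, observe when receiving one.
const makeClock = (nodeName) => {
  let time = 0;
  return {
    tick: () => ({ timestamp: ++time, node: nodeName }),
    observe: (remote) => { time = Math.max(time, remote.timestamp); },
  };
};

// Deterministic ordering: by timestamp first, node name as tie-breaker.
const eventOrder = (a, b) =>
  a.timestamp - b.timestamp || a.node.localeCompare(b.node);

// Every node sorts its merged log with the same comparator, e.g.:
// log.sort((x, y) => eventOrder(x.id, y.id));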

An eventually consistent event log allows us to design individual events such that they are meaningful and can be understood in isolation. We can employ domain-driven design not only to our local program code and data structures but also to the storage protocol, the data format of the event stream. In this way, the recorded event log becomes a history of facts that occurred in our system, written down in such a way that a domain expert can readily understand them and follow the history of what happened.

Picturing a person looking at that log while debugging some unintended behavior (sometimes incorrectly called a bug) is just one use case out of a wide range of possibilities. Event sourcing means that once I describe some state that I want to track and the effect of individual events on that state, I can “run the event log” to extract from it what I want. This isn’t limited to those purposes we imagined while recording the events; we can later discover new ways to interpret the data — this ranges from bug fixes to entirely new cooperations with future apps.

In short, by ordering the operations to make the log eventually consistent we gain more freedom in how to interpret the data. This comes at the cost of having to write non-trivial code in the event application function — it must deal with interleaved events from both sides of a network partition after it heals. While eventual consistency is guaranteed by the chosen approach, the resulting system behavior won’t automatically match your business requirements.

As an example, consider a workflow in an issue tracker for a software project or in a factory (where they are called production orders). The issue can be picked up concurrently by two people if they are momentarily unable to communicate, leading to possibly duplicated work but also creating two concurrent event histories. To make sense of the merged history, we need to keep both strands separate; the best way is to give each of them a unique ID (sketched in JavaScript syntax):

// Event application: folds one event into the aggregated state,
// where state has the shape { ongoing: {}, timespans: [] }.
const onEvent = (state, event) => {
	switch (event.type) {
		case 'workStarted': {
			// track each pickup separately, keyed by its unique workId
			state.ongoing[event.workId] = { user: event.user, startTime: event.time };
			return state;
		}
		case 'workStopped': {
			const started = state.ongoing[event.workId];
			if (started === undefined) return state; // ignore a stop without a matching start
			delete state.ongoing[event.workId];
			const timeSpent = event.time - started.startTime;
			state.timespans.push({ user: started.user, timeSpent });
			return state;
		}
		default: return state;
	}
}

Without the workId we might wrongly associate a workStopped with an unrelated workStarted event.
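
Since the log itself is the primary data, the same events can later be folded into a completely different read model, for example counting how often each user picked up work, without having planned for that question when the events were recorded.

// A different projection over the same event log: pickups per user.
const pickupsPerUser = (events) =>
  events.reduce((counts, event) => {
    if (event.type === 'workStarted') {
      counts[event.user] = (counts[event.user] ?? 0) + 1;
    }
    return counts;
  }, {});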

Updating external systems with “time warp”

With the mechanisms described above we can build local-first cooperation; we can create a system that gives up consensus and central oversight in exchange for an “always available” user experience. This tradeoff is one that we deliberately choose for some part of our application or system, but it typically doesn’t match the design criteria for other parts. For example, an enterprise resource planning system like SAP has the express purpose of pulling all the strands together to get a detailed overview of the whole company. Similarly, payment or order fulfillment platforms in the cloud draw their strength from centralizing their operations. All these centralized systems are built around transactional behavior, where every request results in a booking that is either accepted or rejected.

This presents a challenge when feeding information from a local-first cooperative part into such a centralized part — the basic question is “when am I certain enough based on my partial local knowledge that I want to make a central booking?” In the issue tracking example above we might find ourselves in the situation where we know that Fred worked on some issue from 10:23 to 13:35 and that the issue was closed at 16:05, so we might book and invoice 3 hours and 12 minutes. But what if Ralph also worked on that issue from 14:03 to 15:47 and we just don’t know yet because Ralph’s computer will only synchronize with ours at 16:25?

One elegant answer is provided in a 1985 paper titled Virtual Time by Jefferson (PDF; it has also been discussed on The Morning Paper). The problem addressed in that paper is that of parallelizing a computation (Jefferson started from discrete simulation) across multiple computers without incurring prohibitively high coordination overhead — each node shall be able to make as much independent progress as possible. While our problem is in most cases more directly connected to the real world, the underlying issue and its solution are the same: we want autonomous nodes to make progress.

Applied to our example, it works like this: as the node that makes the booking, we keep track of which events we have seen and what the results were. So at 16:05, we emit a booking with 3h12m. Then, when we receive Ralph’s updates at 16:25, we sort those into the overall event log and reprocess the log, which leads to a booking for 16:05 of 4h56m. This replay is called “time warp” in an earlier version of the paper and its effect is that we now see that the emitted list of bookings has changed — we have made a mistake. Correcting this mistake can be done generically if bookings can be undone; we cancel the 3h12m booking and book 4h56m instead. Cancellation may either be recorded as a separate booking type or a booking of “negative time,” depending on the receiving system.

This sounds complicated, but it can actually be outsourced to some infrastructure code. The cost is that next to the previously known event log we need to keep around some of the corresponding business logic states and the generated bookings — the external effects. Then, when “old” events are sorted into the log, we find the youngest known state that is older than the incoming events, rewind to that, and apply the now bigger log on top of it. The emitted bookings are recorded as new effects during this replay so that the application can then see which bookings should be done, which have been done earlier, and what the difference is. The application then performs the new bookings — positive and negative — against the external (central) system. In most cases, it’s smart to delay this reconciliation for some seconds, to avoid a flurry of cancellations when many things happen at the same time.
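
The following sketch shows the shape such infrastructure code could take, using the issue-tracking example; the event fields, the computeBookings projection, and sendToErp are hypothetical stand-ins for the parts an application would supply.

// Sketch of "time warp" reconciliation: when late events arrive, re-sort the log,
// replay it, and compare the newly computed bookings against those already sent.
const reconcile = (allEvents, previousBookings, computeBookings, sendToErp) => {
  // re-establish the eventually consistent order (timestamp, then node name)
  const log = [...allEvents].sort(
    (a, b) => a.timestamp - b.timestamp || a.node.localeCompare(b.node),
  );
  const currentBookings = computeBookings(log); // a pure function of the log

  const same = (x, y) => x.key === y.key && x.hours === y.hours;
  // cancel bookings we no longer stand behind...
  for (const old of previousBookings) {
    if (!currentBookings.some((b) => same(b, old))) sendToErp({ ...old, cancel: true });
  }
  // ...and emit the new or corrected ones
  for (const booking of currentBookings) {
    if (!previousBookings.some((old) => same(old, booking))) sendToErp(booking);
  }
  return currentBookings; // remembered for the next reconciliation run
};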

With this technique, the external system will mirror the knowledge available on one node of the local-first cooperative part. It’s then up to a business decision when to act on this data and send an invoice, for example. Relying on a single node does introduce a single point of failure, but it corresponds to the single database that is being updated, so it’s acceptable, especially when the node that runs the exporter logic is colocated with the target database.

We will need to rely on cryptography

In all of the above, we have assumed that computers know how to identify the peers they communicate with and how to authenticate information they receive. These are part of yet another set of interesting problems, namely how to implement IT security within the context of local-first cooperation. This is a large and important topic that requires — and deserves — attention to detail. Since this article is already quite long, I can’t do it justice here, but I would still like to point out some of the constraints that arise from the context we are in.

The first and foremost concern is that whatever we do, we must not compromise the idea that two computers sitting next to each other can work together without needing anyone or anything else. This means that messages — for example, events — transmitted from one computer to the other need to be encrypted and authenticated such that both can trust that

  • the message is only readable by the intended recipient(s) → confidentiality
  • the message is delivered intact → integrity
  • the message comes from a trusted computer → authenticity

These questions need to be answered by either computer in isolation — they can’t ask anyone else, especially not a cloud service, without adding a crucial external dependency into all mission-critical paths.
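
As a small illustration of the authenticity and integrity part, here is a sketch using Ed25519 signatures with Node.js’s built-in crypto module; key distribution and trust establishment (discussed below) as well as encryption for confidentiality are deliberately left out.

const crypto = require('crypto');

// Each device creates its key pair once and keeps the private key local.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

// Sign a message so that a peer can check integrity and authenticity offline.
const signMessage = (payload) => {
  const bytes = Buffer.from(JSON.stringify(payload));
  return { payload, signature: crypto.sign(null, bytes, privateKey).toString('base64') };
};

// The receiver verifies against a public key it has decided to trust.
const verifyMessage = (message, senderPublicKey) => {
  const bytes = Buffer.from(JSON.stringify(message.payload));
  return crypto.verify(null, bytes, senderPublicKey, Buffer.from(message.signature, 'base64'));
};

const msg = signMessage({ type: 'increaseFlowRate', mlPerHour: 50 });
console.log(verifyMessage(msg, publicKey)); // true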

Luckily, public-key cryptography has developed all the necessary tools to solve this problem over the past three decades — we now benefit tremendously both from the concealed research during and after World War II and from its public disclosure and continuation in the nineties.

The general principle to adhere to for local-first cooperation is that private keys are created on or distributed to computers and kept secret there (for this, Trusted Platform Modules come in really handy, even though they were initially aimed at centrally enforcing Digital Rights Management). These private keys represent the power to sign data to prove authenticity and integrity or to decrypt encrypted data for confidentiality. This way we take the necessary installation of trust relationships out of the critical path — we can even pass a properly signed and encrypted “trust package” from computer to computer until it reaches its designated recipient, who will then open it and install the contained new secrets or start trusting some other computer based on its public key.

If you want to contemplate the above with a more concrete use case, you may take as inspiration a blog article I wrote after noticing that setting up a printer today is in some ways a worse experience than it was in the 1980s.

One noteworthy difference regarding security between cloud-native applications and local-first cooperation lies in the overall system design. In a centralized system, there is this central place that stores all data and regulates access to it. Compromising this place yields access to any piece of information stored in there. In local-first cooperation, we don’t have an easy way of regulating access — in particular, revoking access is a thorny issue — but on the other hand, there is also no single place where all data are aggregated and vulnerable to a centralized attack. Remember that every computer only needs to store the data it needs for the particular apps it runs and the users it serves.

Conclusion

Viewing the world from the bottom up, local-first cooperation points out ways in which we can make our software more resilient, give prompt responses to the end users, and at the same time run it on existing resources — leading to less infrastructure cost and waste. We get these benefits by recognizing that inherently localized processes are best dealt with in a purely local manner, without involving centralized services or long-range communication paths. All we need to do is to make full use of the edge devices that people already hold in their hands, utilizing the compute and storage available.

The cloud and especially the infrastructure edge will play an important role in bootstrapping and orchestrating this, and their strengths regarding infinitely scalable resources will continue to be very useful for all concerns that aren’t local to the end user. This includes, for example, software and content delivery, operations and maintenance, as well as machine learning, analytics, and optimization.

We already have the building blocks for constructing local-first cooperative apps, like conflict-free replicated data types, domain-driven design, event-sourcing, the time-warp mechanism, and public-key cryptography. What is still missing is an array of toolboxes assembled from these parts, covering a corresponding range of use cases. We are seeing first examples, like hypermerge and offerings in the factory automation space, and I’m convinced that we will see more of this soon. Meanwhile, you’re welcome to check out how it could look to add local-first cooperation to the standard TODO list web app (keeping in mind that the Actyx implementation isn’t generic; it’s tailored to factory use cases at this point). And I’m very keen on reading your thoughts on local-first cooperation.

About the Author

Dr. Roland Kuhn is CTO and co-founder of Actyx, a Munich-based startup, building a platform for local-first cooperation on the factory shop floor. Before this, he led the Akka project at Lightbend, after working in the satellite industry for several years. He obtained his PhD in high-energy particle physics.
