
Akka 24.05 - The Edge Unleashed

Jonas Bonér Founder, CTO and Chairman of the Board, Lightbend, Inc.

As programmable devices proliferate in our homes, public spaces, and workplaces, we increasingly collaborate with programs and machines through local interactions. These edge applications are designed for intrinsically local use cases, requiring resilience, scalability, and high performance without relying on a central cloud or persistent connections.

Tackling the inherent constraints of edge applications

Unlike traditional cloud applications, edge applications must contend with several constraints:

  • Limited resources: Confined to the hosting device, with additional processing and storage only available from locally reachable peers.
  • Lack of global consistency: It is not feasible to maintain a consistent global view across devices, since consensus among larger groups of nodes is prohibitively expensive.
  • Lack of stable networks: Unreliable and potentially slow network links, such as multi-hop mesh networks.
  • Mobile nature of devices: Devices can move between locations, making them unreachable from previously joined networks.
  • Ephemeral nature of devices: Devices can fail, restart, or be temporarily suspended at any time.
  • Ephemeral nature of connections: All communication partners must be treated as temporarily connected due to the devices' mobile and autonomous nature.

To address these constraints, edge applications require a programming model and runtime that embraces decentralization, location-transparent and mobile services, physical co-location of data and processing, temporal and spatial decoupling of services, and automatic peer-to-peer data replication. The core principle is enabling autonomous operation without dependence on a central infrastructure or persistent connectivity.

Towards decentralized multi-cloud applications

We are inevitably moving towards increased decentralization. For most companies, anchored in the cloud and needing to serve their users more efficiently, this means relying less on centralized infrastructure and moving towards hybrid cloud-to-edge systems. For more information, see the article I wrote about unifying the cloud and edge into a single cloud-to-edge continuum.

Decentralized architecture that distributes logic and data together offers several advantages when managing data in the cloud and at the edge:

  • Increased scalability: With a decentralized architecture, data and workloads can be distributed across multiple nodes, servers, or locations. This allows for more efficient scaling by adding or removing resources as needed without being limited by the constraints of a centralized system.
  • Better resilience and high availability: Decentralized systems are designed to be resilient against single points of failure. If one node or component fails, the system can continue to operate without significant disruption, as other nodes can take over the workload. This ensures high availability and minimizes downtime.
  • Better performance: Decentralized architectures can perform better by distributing data and workloads across multiple nodes, reducing bottlenecks, and leveraging parallel processing capabilities. This benefits data-intensive workloads like big data analytics or AI and machine learning applications.
  • Improved data locality: In a decentralized architecture, data can be stored and processed closer to where it is generated or consumed. This can reduce network latency and improve overall performance, especially in large data transfers or real-time data processing scenarios.
  • Enhanced security and data privacy: Decentralized architectures can help mitigate security risks by distributing data across multiple nodes or locations. This makes it more difficult for unauthorized parties to access or compromise the entire dataset, as they must breach multiple nodes simultaneously. It also helps ensure that data stays within a geographic region (for compliance and legal reasons).
  • Cost optimization: Decentralized architectures can help optimize data management and processing costs by leveraging infrastructure resources more efficiently and scaling resources dynamically.
  • Reduced risk of vendor lock-in: Decentralized architectures often rely on OSS technologies, standardized interfaces, multi-cloud, and edge infrastructure, reducing the risk of vendor lock-in and allowing for greater flexibility in choosing cloud providers or switching between them.

Letting Akka do the heavy lifting

We have been working on building out this programming model and runtime for the last 2-3 years. A major part of this work has been ensuring the physical co-location of end users, data, and compute to guarantee the lowest possible latency and the highest levels of resilience. If the user, data, and compute are always at the same physical location, one can serve the user with the lowest possible latency, and since everything needed to serve the user is right there, one can lose the connection to the backend cloud and peers and still be able to serve the user. This requires a distributed replicated data mesh: a data distribution and consensus fabric that moves the data to where it needs to be at every moment.

In the 24.05 release, we have pushed the envelope for edge development even further with three exciting new features.

Running Akka natively on resource-constrained devices

Last year, we shipped Akka Edge, but we are not stopping there. A significant leap forward for Akka is the new capability to use Akka concepts outside the JVM through a new library called Akka Edge Rust. With it, we have extended Akka Edge to empower cloud developers to run their Akka applications even closer to where they are used and where the user’s data resides. Akka Edge Rust provides a subset of Akka implemented in the Rust programming language. Rust was chosen for its focus on reliability and efficiency on resource-constrained devices where CPU, memory, and storage are at a premium.

This client library runs natively in under 4 MB of RAM (on Arm32, Arm64, x86, amd64, RISC-V, and MIPS32). It has rich features, such as an actor model, event-sourcing, streaming projections over gRPC, local persistent event storage, WebAssembly (WASM) compatibility, and security (through TLS and WireGuard). Using this Akka client, one can extend an application to devices while maintaining its programming model, semantics, and core feature set.

For example, in the diagram below, the Akka JVM service is responsible for registering sensors. The Akka Edge Rust service connects to the Akka JVM service and consumes registration events as they occur. The Akka Edge Rust service also “remembers” what it is up to and, in the case of a restart, will re-connect and consume any new registrations from where it left off. Communication between the edge and cloud is made over gRPC. Observations for registered sensors can then be sent to the Akka Edge Rust service via UDP, as is often done in practice. The Akka Edge Rust service uses its established connection with the Akka JVM service to propagate these local observations.
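
To make the cloud side of this flow more concrete, here is a rough sketch of how the Akka JVM service might expose its registration events over gRPC using Akka Projection gRPC, which is the kind of endpoint an Akka Edge Rust consumer connects to. The entity type name, stream id, and port are hypothetical, and the exact producer API should be checked against the Akka documentation.

```scala
import scala.concurrent.Future
import akka.actor.typed.ActorSystem
import akka.grpc.scaladsl.ServiceHandler
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.projection.grpc.producer.EventProducerSettings
import akka.projection.grpc.producer.scaladsl.EventProducer
import akka.projection.grpc.producer.scaladsl.EventProducer.{EventProducerSource, Transformation}

// Expose the registration entity's event journal as a gRPC event stream that an
// edge consumer can subscribe to and resume from its last known offset.
def startRegistrationEventProducer()(implicit system: ActorSystem[_]): Future[Http.ServerBinding] = {
  val source = EventProducerSource(
    "SensorRegistration",    // hypothetical entity type name
    "sensor-registrations",  // hypothetical public stream id
    Transformation.identity, // publish the persisted events as-is
    EventProducerSettings(system))

  val producerHandler: PartialFunction[HttpRequest, Future[HttpResponse]] =
    EventProducer.grpcServiceHandler(source)

  // Bind the HTTP/2 endpoint that edge consumers connect to over gRPC.
  Http().newServerAt("0.0.0.0", 8101).bind(ServiceHandler.concatOrNotFound(producerHandler))
}
```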

Akka JVM and Rust

Learn more in the guide introducing Akka Edge Rust, which explains how to set up and develop a Rust-based service that works with its cloud-based Akka JVM counterpart.

Active-active entity replication in the cloud and at the edge

An entity in Akka is a clustered event-sourced actor that is effectively a local, mobile, and fully replicated durable in-memory database with a built-in cache. Since the in-memory representation is the Service of Record, reads can always be served safely directly from memory.
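
As a concrete illustration, here is a minimal sketch of such an entity using the Akka Persistence Typed Scala API. The Counter entity, its command, and its event are hypothetical: commands are validated against the in-memory state, accepted commands are persisted as events, and the events rebuild the state.

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

// Hypothetical counter entity: the in-memory State is always current, so reads
// can be answered directly from memory while writes are persisted as events.
object Counter {
  sealed trait Command
  final case class Increment(amount: Int) extends Command
  final case class Incremented(amount: Int)
  final case class State(value: Int)

  def apply(entityId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Incremented, State](
      persistenceId = PersistenceId("Counter", entityId),
      emptyState = State(0),
      commandHandler = (_, cmd) =>
        cmd match {
          case Increment(amount) => Effect.persist(Incremented(amount))
        },
      eventHandler = (state, event) => State(state.value + event.amount))
}
```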

We shipped Active-Active Replicated Event Sourcing for region-to-region replication about a year ago. Building upon that, in 24.05, we shipped Active-Active Replicated Event Sourcing for Edge, extending its support to entities (clustered event-sourced actors) running out at the far edge. Active-Active means that entities can concurrently process writes/updates in more than one geographical location, such as multiple edge Points of Presence (PoPs), different cloud regions, hybrid cloud, and multi-cloud, while Akka guarantees that entities always converge to a consistent state, with read-your-writes semantics.
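
The sketch below, building on the hypothetical Counter entity above, shows roughly how an entity becomes replicated with the Replicated Event Sourcing Scala API. For simplicity it uses the shared-journal variant to illustrate the programming model; the gRPC-based transport used for edge and cross-region replication described here is wired up separately. The replica ids and the query plugin id are assumptions.

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.{ReplicaId, ReplicationId}
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior, ReplicatedEventSourcing}

// Hypothetical replica ids: one cloud region and one edge Point of Presence.
val AllReplicas: Set[ReplicaId] = Set(ReplicaId("cloud-east"), ReplicaId("edge-pop-1"))

def replicatedCounter(entityId: String, selfReplica: ReplicaId): Behavior[Counter.Command] =
  ReplicatedEventSourcing.commonJournalConfig(
    ReplicationId("Counter", entityId, selfReplica),
    AllReplicas,
    "akka.persistence.r2dbc.query" // query plugin id is an assumption
  ) { replicationContext =>
    EventSourcedBehavior[Counter.Command, Counter.Incremented, Counter.State](
      persistenceId = replicationContext.persistenceId,
      emptyState = Counter.State(0),
      commandHandler = (_, cmd) =>
        cmd match {
          case Counter.Increment(amount) => Effect.persist(Counter.Incremented(amount))
        },
      // Events from every replica pass through this handler; convergence relies on
      // the update being commutative for concurrent writes (adding to a counter is).
      eventHandler = (state, event) => Counter.State(state.value + event.amount))
  }
```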

In short, Replicated Event Sourcing gives you:

  • The ability to serve requests from a location near the user to provide better responsiveness, including edge PoPs, edge data centers, and cell towers.
  • Resilience to tolerate failures in one location while remaining operational, including multi-cloud redundancy.
  • Support for updates to an entity from several locations simultaneously, so-called active-active replication.
  • A way to build active-passive or hot-standby entities.
  • Load balancing over many servers.

It works like magic, forming an auto-replicating data mesh for transactional data that is truly location-transparent and peer-to-peer (masterless with all nodes being equal). It makes it easy to build applications spanning multiple clouds, cloud regions, edge PoPs and gateways.

Applications spanning multiple clouds, cloud regions, edge PoPs and gateways.

In addition to Event Sourced entities, it now also supports replicating state changes of Durable State entities in Akka Edge (PoPs and edge data centers) and Akka Distributed Cluster (multi-cloud and multi-region).
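
For contrast with the event-sourced Counter above, here is a minimal hypothetical Durable State entity in the Akka Persistence Typed Scala API. Only the latest state is persisted rather than a log of events, and the replication of its state changes to other locations is configured separately.

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.state.scaladsl.{DurableStateBehavior, Effect}

// Hypothetical device-settings entity: only the latest state is stored,
// and each state change can then be replicated to other locations.
object DeviceSettings {
  sealed trait Command
  final case class SetSampleRate(hz: Int) extends Command
  final case class State(sampleRateHz: Int)

  def apply(entityId: String): Behavior[Command] =
    DurableStateBehavior[Command, State](
      persistenceId = PersistenceId("DeviceSettings", entityId),
      emptyState = State(sampleRateHz = 1),
      commandHandler = (state, cmd) =>
        cmd match {
          case SetSampleRate(hz) => Effect.persist(state.copy(sampleRateHz = hz))
        })
}
```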

Running Akka natively for better performance, efficiency, and lower costs

In our quest towards more performance, efficiency, and lower costs, we have made it much easier to build a GraalVM native image of an Akka application. GraalVM Native Image compiles Java or Scala code ahead of time to a native executable. A native image executable provides lower resource usage than the JVM, smaller deployments, faster startup times, and immediate peak performance, making it ideal for edge deployments in resource-constrained environments that need rapid responsiveness under autoscaling, while also being very useful in the cloud.
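
As one possible setup, the sketch below uses sbt with the sbt-native-image plugin to produce a native executable of an Akka service. The main class, plugin version, and extra options are assumptions; the Akka documentation may recommend a different build configuration.

```scala
// project/plugins.sbt (plugin version is a placeholder; use a current release)
addSbtPlugin("org.scalameta" % "sbt-native-image" % "0.3.4")

// build.sbt
enablePlugins(NativeImagePlugin)

Compile / mainClass := Some("com.example.Main") // hypothetical main class

// GraalVM Native Image needs reachability metadata for reflection and resources;
// additional native-image options can be passed here as needed.
nativeImageOptions ++= Seq(
  "--no-fallback",
  "-H:+ReportExceptionStackTraces")
```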

Next steps

Try these new features and let us know what you think. We would love to hear from you. The best place to start is the Akka Edge documentation, where you can understand the design and architecture, read about example use cases, get an overview of the features, and dive into one of the sample projects.