Lyft Rearchitects ML Platform with Hybrid AWS SageMaker-Kubernetes Approach


Lyft has rearchitected its machine learning platform, LyftLearn, into a hybrid system, moving offline workloads to AWS SageMaker while retaining Kubernetes for online model serving. The decision to adopt managed services where operational complexity was highest, while keeping custom infrastructure where control mattered most, offers a pragmatic alternative to unified platform strategies.

Lyft's engineers migrated LyftLearn Compute, which manages training and batch processing, to AWS SageMaker, eliminating background watcher services, cluster autoscaling challenges, and eventually-consistent state management, which had consumed significant engineering effort. LyftLearn Serving, which handles real-time inference, remained on Kubernetes, where Lyft's existing architecture already delivered the required performance and integrated tightly with internal tooling.


LyftLearn Hybrid High-Level Architecture (source)

The author, Yaroslav Yatsiuk, explains the main reasoning behind this decision:

We adopted SageMaker for training because managing custom batch compute infrastructure was consuming engineering capacity better spent on ML platform capabilities. We kept our serving infrastructure custom-built because it delivered the cost efficiency and control we needed.

LyftLearn supports hundreds of millions of daily predictions across dispatch optimization, pricing, and fraud detection, with thousands of training jobs per day serving hundreds of data scientists and ML engineers. Originally built entirely on Kubernetes, the system grew in operational complexity as it scaled: each new ML capability required custom orchestration logic, and synchronizing Kubernetes state with the platform's database required multiple watcher services to handle out-of-order events and container status transitions.

For offline workloads, SageMaker's managed infrastructure directly addressed these pain points. AWS EventBridge and SQS replaced the watcher architecture with event-driven state management, while on-demand provisioning eliminated idle cluster capacity costs. However, the migration required maintaining complete compatibility with existing ML code.
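To illustrate the event-driven pattern that replaced the watchers, the sketch below shows how a consumer might translate a SageMaker training-job state-change event (as delivered by EventBridge, typically via SQS) into a platform-side job status. The event shape follows AWS's documented detail type for SageMaker training jobs; the status names and function are hypothetical, not Lyft's actual code.

```python
# Map SageMaker training-job statuses to hypothetical platform statuses.
_STATUS_MAP = {
    "InProgress": "running",
    "Completed": "succeeded",
    "Failed": "failed",
    "Stopped": "cancelled",
}

def job_update_from_event(event: dict) -> dict:
    """Extract a (job_name, status) update from an EventBridge event payload."""
    detail = event["detail"]
    return {
        "job_name": detail["TrainingJobName"],
        "status": _STATUS_MAP.get(detail["TrainingJobStatus"], "unknown"),
    }

# Example payload shaped like an EventBridge SageMaker state-change event.
sample_event = {
    "detail-type": "SageMaker Training Job State Change",
    "detail": {
        "TrainingJobName": "eta-model-retrain",
        "TrainingJobStatus": "Completed",
    },
}

print(job_update_from_event(sample_event))
# → {'job_name': 'eta-model-retrain', 'status': 'succeeded'}
```

Because EventBridge pushes these state changes as they happen, the platform database can be updated from a single queue consumer instead of several polling services reconciling out-of-order container events.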

Lyft built cross-platform Docker images that replicate the Kubernetes runtime environment in SageMaker, transparently handling credential injection, metrics collection, and configuration management. For latency-sensitive workloads retraining every 15 minutes, the team adopted Seekable OCI (SOCI) indexes for notebooks and SageMaker warm pools for training jobs, achieving Kubernetes-comparable startup times.
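As a rough sketch of the warm-pool setup described above, the configuration fragment below shows the SageMaker Python SDK Estimator arguments involved. `keep_alive_period_in_seconds` is the real SDK parameter that enables managed warm pools; the image URI, role, and instance choices are illustrative placeholders, not Lyft's actual values.

```python
# Hypothetical Estimator kwargs for a job that retrains every 15 minutes.
warm_pool_training_config = {
    "image_uri": "<cross-platform-training-image>",  # same image later served on Kubernetes
    "role": "<execution-role-arn>",
    "instance_count": 1,
    "instance_type": "ml.m5.xlarge",
    # Keep the provisioned instance warm for 30 minutes, so a retrain
    # every 15 minutes reuses it instead of paying cold-start provisioning.
    "keep_alive_period_in_seconds": 1800,
}
```

Passing these arguments to `sagemaker.estimator.Estimator` would opt the job into a warm pool; SOCI indexes address the complementary problem of container image pull latency by letting the runtime lazily fetch image layers.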

The most complex challenge involved Spark's bidirectional communication requirements across SageMaker Studio and EKS clusters. Default SageMaker networking blocked the inbound connections Spark executors needed to reach notebook drivers. Lyft partnered with AWS to enable custom networking configurations in their Studio Domains, resolving the issue without performance impact.
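The Spark properties involved in that bidirectional path can be sketched as follows: when executors on EKS must dial back to a driver running inside a SageMaker Studio notebook, the driver must advertise an address routable from the cluster and bind predictable ports that the network rules can allow inbound. These are standard Spark configuration properties; the host and port values below are illustrative placeholders.

```python
# Hypothetical driver-side Spark properties for a notebook driver on
# SageMaker Studio with executors running on EKS.
spark_driver_network_props = {
    # Address the EKS executors use to reach the notebook driver;
    # must be routable from the cluster's VPC.
    "spark.driver.host": "10.0.12.34",
    # Fixed (rather than ephemeral) ports, so security rules can
    # permit the executors' inbound connections.
    "spark.driver.port": "7077",
    "spark.blockManager.port": "7078",
}
```

With SageMaker Studio's default networking, these inbound connections are blocked regardless of Spark configuration, which is why Lyft needed custom networking support in its Studio Domains rather than a Spark-level workaround.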

The migration was rolled out repository by repository, with both infrastructures running in parallel and only minimal configuration changes required. The compatibility layer ensured that the same Docker image used for SageMaker training would serve models on Kubernetes, eliminating train-serve inconsistencies. Following the migration, Lyft reports fewer infrastructure incidents and engineering capacity freed up for platform capabilities. Yatsiuk concluded:

The best platform engineering isn't about the technology stack you run—it's about the complexity you hide and the velocity you unlock.
