MySQL Repository Analysis Reveals Declining Development and Shrinking Contributor Base
A recent report has analyzed the repository statistics of the MySQL server to evaluate the project's status, Oracle's commitment to MySQL, and the future of the community edition. Julia Vural, software engineering manager at Percona, writes:
-
From On-Demand to Live: Netflix Streaming to 100 Million Devices in Under 1 Minute
Netflix’s global live streaming platform powers millions of viewers with cloud-based ingest, custom live origin, Open Connect delivery, and real-time recommendations. This article explores the architecture, low-latency pipelines, adaptive bitrate streaming, and operational monitoring that ensure reliable, scalable, and synchronized live event experiences worldwide.
-
Vitest Team Releases Version 4.0 with Stable Browser Mode and Visual Regression Testing
Vitest 4.0, the latest release of the Vite-native testing framework, stabilizes Browser Mode, adds built-in visual regression testing, and integrates Playwright Traces for improved debugging. The release also provides a documented upgrade path, reinforcing Vitest's position as a comprehensive testing solution.
-
AWS Lambda Managed Instances: Serverless Flexibility Meets EC2 Cost Models
AWS Lambda Managed Instances combine the serverless programming model with Amazon EC2 pricing for steady-state workloads. The service automates instance management, reduces cold starts, and supports multiple concurrent invocations per instance.
-
Grab Adds Real-Time Data Quality Monitoring to Its Platform
Grab updated its internal platform to monitor Apache Kafka data quality in real time. The system uses FlinkSQL and an LLM to detect syntactic and semantic errors. It currently tracks 100+ topics, preventing invalid data from reaching downstream users. This proactive strategy aligns with industry trends to treat data streams as reliable products.
-
NVIDIA Dynamo Addresses Multi-Node LLM Inference Challenges
Serving Large Language Models (LLMs) at scale is complex. Modern LLMs now exceed the memory and compute capacity of a single GPU, or even a single multi-GPU node. As a result, inference workloads for models with 70B or 120B+ parameters, or for pipelines with large context windows, require multi-node, distributed GPU deployments.
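A back-of-the-envelope calculation illustrates why a single GPU is insufficient. Assuming 2 bytes per parameter for FP16 weights, and ignoring the KV cache and activations entirely, the weights of a 70B-parameter model alone exceed the 80 GB of memory on an NVIDIA H100:

```python
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the model weights, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for n in (70, 120):
    # KV cache and activations add further memory pressure on top of this.
    print(f"{n}B params @ FP16: ~{weight_memory_gb(n):.0f} GB of weights "
          f"(a single H100 has 80 GB)")
```

This prints roughly 140 GB for a 70B model and 240 GB for a 120B model, which is why such workloads must shard weights across multiple GPUs and, beyond a point, across multiple nodes.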
-
Karrot Improves Conversion Rates by 70% with New Scalable Feature Platform on AWS
Karrot replaced its legacy recommendation system with a scalable architecture that leverages various AWS services. The company sought to address challenges related to tight coupling, limited scalability, and poor reliability in its previous solution, opting instead for a distributed, event-driven architecture built on top of scalable cloud services.
-
Growing Yourself as a Software Engineer, Using AI to Develop Software
Sharing your work as a software engineer inspires others, invites feedback, and fosters personal growth, Suhail Patel said at QCon London. Normalizing and owning incidents builds trust and helps teams understand system complexity. AI enables automation but needs proper guidance, context, and security guardrails.
-
Arm Launches AI-Powered Copilot Assistant to Migrate Workflows to Arm Cloud Compute
At the recent GitHub Universe 2025 developer conference, Arm unveiled the Cloud migration assistant custom agent, a tool designed to help developers automate, optimize, and accelerate the migration of their x86 cloud workflows to Arm infrastructure.
-
System Initiative Unveils Expansion with Real-Time Multi-Cloud Discovery and Automation
System Initiative recently announced a major set of new capabilities designed to give engineering organizations instant, real-time visibility and AI-driven control across any cloud platform or API.
-
Memori Expands into a Full-Scale Memory Layer for AI Agents across SQL and MongoDB
Memori is an open-source memory system that gives AI agents structured, long-term memory backed by standard databases such as SQL stores and MongoDB. It integrates into existing frameworks, enabling data extraction and retrieval without vendor lock-in, and its modular design targets reliability and scalability for next-generation intelligent systems.
-
How Discord Scaled its ML Platform from Single-GPU Workflows to a Shared Ray Cluster
Discord has detailed how it rebuilt its machine learning platform after hitting the limits of single-GPU training. The changes enabled daily retrains for large models and contributed to a 200% uplift in a key ads ranking metric.
-
Azure API Management Premium v2 GA: Simplified Private Networking and VNet Injection
Microsoft has announced the general availability of API Management Premium v2. The new architecture simplifies private networking by removing management traffic from customer VNets. With features such as Inbound Private Link, availability zone support, and custom CA certificates, users gain greater networking flexibility, improved resilience, and potential cost savings.
-
Vercel’s Next.js 16: Explicit Caching, Turbopack Stability, and Improved Developer Tooling
Next.js 16 introduces explicit caching through Cache Components, enhanced routing, and Turbopack as the default bundler for faster builds. The release also brings architectural changes and improved developer tooling, including AI-assisted debugging, along with a documented upgrade path for existing applications.
-
JEP 526 Simplifies Deferred Initialization ahead of JDK 26
JEP 526 introduces Lazy Constants, a preview API targeting JDK 26 that improves developer ergonomics and performance. The feature supersedes the earlier Stable Values proposal, simplifying deferred initialization while preserving thread safety and immutability. With utilities for lazy lists and maps, it promotes efficient resource management and reduces startup costs. Feedback is invited to refine the API ahead of finalization in a future release.
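For context on the problem being addressed: deferred initialization in Java today typically relies on patterns such as the class-holder idiom, where the JVM's class-initialization locking guarantees a value is computed exactly once and safely published. The sketch below is illustrative only, uses no preview APIs, and shows the kind of boilerplate that a dedicated lazy-constants API aims to express more directly:

```java
public class Config {
    // Classic holder idiom: HEAVY is computed only when first requested.
    // The JVM initializes the Holder class lazily, under its own lock,
    // so the computation runs exactly once even under concurrent access.
    private static final class Holder {
        static final String HEAVY = loadExpensiveValue();
    }

    static String loadExpensiveValue() {
        // Stand-in for costly work (file or network I/O, parsing, etc.).
        return "loaded";
    }

    public static String get() {
        return Holder.HEAVY; // first call triggers Holder's initialization
    }

    public static void main(String[] args) {
        System.out.println(Config.get());
    }
}
```

The idiom works but requires a nested class per deferred value; a first-class lazy-constant construct would let the JVM apply the same constant-folding optimizations without that ceremony.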