-
Java News Roundup: JDK 26 in Rampdown, JDK 27 Expert Group, GlassFish, TornadoVM, Spring gRPC
This week's Java roundup for December 1st, 2025, features news highlighting: JDK 26 in Rampdown Phase One; the formation of the JDK 27 Expert Group; GA releases of TornadoVM 2.0 and Spring gRPC 1.0; a point release of GlassFish 7.1; the December 2025 edition of Open Liberty; the first beta release of JHipster 9.0 and the second release candidate of Hibernate Search 8.2.
-
AWS CodeCommit Returns to General Availability after Backlash
AWS recently announced that the managed source control service AWS CodeCommit is again generally available and that new features, including Git Large File Storage, will be added early in 2026. This marks a shift for the cloud provider that previously announced the service would not be further developed, closed it to new accounts, and encouraged migration to external alternative services.
-
HL is a Fast, Rust-Based JSON Log Viewer Offering up to 2GiB/s Parsing Speed
Open-source log viewer hl is designed for efficient processing of structured logs in JSON or logfmt format. Built in Rust, it provides fast indexing and parsing, enabling users to scan very large log files quickly, whether uncompressed or compressed.
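The logfmt format mentioned above is a flat sequence of key=value pairs on one line. As a rough illustration of the format (not hl's actual parser, which is written in Rust), a minimal Python sketch:

```python
import shlex

def parse_logfmt(line: str) -> dict:
    """Parse one logfmt line into a dict of string values."""
    # shlex.split honors quoted values like msg="service started"
    record = {}
    for token in shlex.split(line):
        key, sep, value = token.partition("=")
        if sep:  # skip bare tokens that carry no '='
            record[key] = value
    return record

record = parse_logfmt('level=info ts=2025-12-01T10:00:00Z msg="service started" port=8080')
```

A real viewer additionally builds an index over timestamps and levels so that filtering does not require re-parsing the whole file.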
-
JFrog Unveils “Shadow AI Detection” to Tackle Hidden AI Risks in Enterprise Software Supply Chains
JFrog today expanded its Software Supply Chain Platform with a new feature called Shadow AI Detection, designed to give enterprises visibility and control over the often-unmanaged AI models and API calls creeping into their development pipelines.
-
AWS Introduces Durable Functions: Stateful Logic Directly in Lambda Code
AWS has unveiled Durable Functions for Lambda, a feature that brings stateful, multi-step workflows directly into Lambda code. It allows developers to write code that manages state and retry logic without incurring costs during waits. With capabilities such as checkpoints, pauses of up to a year, and simplified orchestration, Durable Functions streamline complex serverless applications.
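The checkpoint-and-resume idea behind durable workflows can be illustrated independently of the AWS API (which is not shown here). In this hypothetical Python sketch, completed steps are recorded in a store so that a re-invoked function skips work that already succeeded:

```python
def run_workflow(steps, checkpoints):
    """Execute named steps in order, skipping any already checkpointed.

    `checkpoints` stands in for durable storage; in a real durable-function
    runtime, the platform persists this state between invocations.
    """
    results = {}
    for name, fn in steps:
        if name in checkpoints:
            results[name] = checkpoints[name]  # resume from the saved result
            continue
        results[name] = fn()
        checkpoints[name] = results[name]      # persist before moving on
    return results

# Simulate a retry after a crash: the first step succeeded previously.
saved = {"reserve": "order-123"}
steps = [
    ("reserve", lambda: "order-123"),
    ("charge", lambda: "payment-456"),
]
out = run_workflow(steps, saved)
```

The value of platform support is that the persistence, replay, and retry policy around this pattern no longer have to be hand-rolled by the application.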
-
MySQL Repository Analysis Reveals Declining Development and Shrinking Contributor Base
A recent report has analyzed the repository statistics of the MySQL server to evaluate the project's status, Oracle's commitment to MySQL, and the future of the community edition.
-
From On-Demand to Live: Netflix Streaming to 100 Million Devices in under 1 Minute
Netflix’s global live streaming platform powers millions of viewers with cloud-based ingest, custom live origin, Open Connect delivery, and real-time recommendations. This article explores the architecture, low-latency pipelines, adaptive bitrate streaming, and operational monitoring that ensure reliable, scalable, and synchronized live event experiences worldwide.
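Adaptive bitrate streaming, one of the techniques mentioned above, amounts to picking the highest rendition that measured throughput can sustain with some headroom. A simplified sketch (the bitrate ladder values are illustrative, not Netflix's):

```python
def select_bitrate(throughput_kbps: float, ladder: list[int], headroom: float = 0.8) -> int:
    """Pick the highest bitrate that fits within a safety margin of throughput.

    `headroom` reserves capacity for throughput variance; fall back to the
    lowest rung if even that does not fit.
    """
    budget = throughput_kbps * headroom
    candidates = [b for b in sorted(ladder) if b <= budget]
    return candidates[-1] if candidates else min(ladder)

ladder = [235, 750, 1750, 3000, 5800]  # illustrative kbps rungs
choice = select_bitrate(4000, ladder)
```

Production players refine this with buffer-aware heuristics so that a momentary throughput dip does not immediately force a quality switch.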
-
Vitest Team Releases Version 4.0 with Stable Browser Mode and Visual Regression Testing
Vitest 4.0, the latest release of the Vite-native testing framework, stabilizes browser-based testing and adds built-in visual regression support and enhanced debugging. With major features like stable Browser Mode and Playwright Traces integration, it streamlines workflows. Developers benefit from a smoother upgrade path and an optimized experience, reinforcing Vitest as a comprehensive testing solution.
-
AWS Lambda Managed Instances: Serverless Flexibility Meets EC2 Cost Models
AWS Lambda Managed Instances combine the serverless function programming model with EC2-style cost models for better performance and cost efficiency. Designed for steady-state workloads, the feature automates instance management, reduces cold starts, and enables multi-concurrency.
-
Grab Adds Real-Time Data Quality Monitoring to Its Platform
Grab updated its internal platform to monitor Apache Kafka data quality in real time. The system uses FlinkSQL and an LLM to detect syntactic and semantic errors. It currently tracks 100+ topics, preventing invalid data from reaching downstream users. This proactive strategy aligns with industry trends to treat data streams as reliable products.
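As a rough illustration of the syntactic half of such a check (the semantic half is where Grab applies an LLM), the sketch below validates a Kafka message payload against a minimal expected schema. The field names and types are hypothetical, not Grab's actual topics:

```python
import json

# Hypothetical schema: field name -> expected JSON type
EXPECTED = {"order_id": str, "amount": float, "currency": str}

def check_message(raw: bytes) -> list[str]:
    """Return a list of syntactic errors found in one message payload."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return ["payload is not valid JSON"]
    errors = []
    for field, expected_type in EXPECTED.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

# amount arrives as a string instead of a number -> flagged
errors = check_message(b'{"order_id": "A1", "amount": "12.5", "currency": "SGD"}')
```

Running such checks inside the streaming layer (FlinkSQL, in Grab's case) lets invalid records be quarantined before any downstream consumer reads them.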
-
NVIDIA Dynamo Addresses Multi-Node LLM Inference Challenges
Serving Large Language Models (LLMs) at scale is complex. Modern LLMs now exceed the memory and compute capacity of a single GPU, or even a single multi-GPU node. As a result, inference workloads for models with 70B+ or 120B+ parameters, or for pipelines with large context windows, require multi-node, distributed GPU deployments.
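A back-of-the-envelope calculation shows why: at 16-bit precision each parameter takes 2 bytes, so a 70B-parameter model needs roughly 140 GB for weights alone, beyond a single 80 GB data-center GPU. A quick sketch (capacity figure is an assumption for illustration):

```python
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate memory for model weights only (excludes KV cache and activations)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

gpu_capacity_gb = 80  # assumed capacity of one high-end data-center GPU
needed = weight_memory_gb(70)               # 140.0 GB at fp16/bf16
min_gpus = -(-needed // gpu_capacity_gb)    # ceiling division: GPUs for weights alone
```

The KV cache for long context windows adds further memory on top of the weights, which is what pushes large deployments past a single node.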
-
Karrot Improves Conversion Rates by 70% with New Scalable Feature Platform on AWS
Karrot replaced its legacy recommendation system with a scalable architecture that leverages various AWS services. The company sought to address challenges related to tight coupling, limited scalability, and poor reliability in its previous solution, opting instead for a distributed, event-driven architecture built on top of scalable cloud services.
-
Growing Yourself as a Software Engineer, Using AI to Develop Software
Sharing your work as a software engineer inspires others, invites feedback, and fosters personal growth, Suhail Patel said at QCon London. Normalizing and owning incidents builds trust and helps teams understand the complexities of their systems. AI enables automation but needs proper guidance, context, and security guardrails.
-
Arm Launches AI-Powered Copilot Assistant to Migrate Workflows to Arm Cloud Compute
At the recent GitHub Universe 2025 developer conference, Arm unveiled the Cloud migration assistant custom agent, a tool designed to help developers automate, optimize, and accelerate the migration of their x86 cloud workflows to Arm infrastructure.
-
System Initiative Unveils Expansion with Real-Time Multi-Cloud Discovery and Automation
System Initiative recently announced a major set of new capabilities designed to give engineering organizations instant, real-time visibility and AI-driven control across any cloud platform or API.