Python Content on InfoQ
-
Google Vertex AI Provides RAG Engine for Large Language Model Grounding
Vertex AI RAG Engine is a managed orchestration service aimed at making it easier to connect large language models (LLMs) to external data sources so they stay up-to-date, generate more relevant responses, and hallucinate less.
-
Hugging Face Smolagents is a Simple Library to Build LLM-Powered Agents
Smolagents is a library created at Hugging Face to build agents based on large language models (LLMs). Hugging Face says the new library aims to be simple and LLM-agnostic. It supports secure "agents that write their actions in code" and is integrated with the Hugging Face Hub.
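As a minimal sketch of the code-writing agent pattern, following the examples in the smolagents documentation (the search tool and hosted model shown here are illustrative choices):

```python
# Illustrative sketch based on the smolagents getting-started examples.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# A CodeAgent writes its intermediate actions as Python code snippets,
# which smolagents executes in a sandboxed interpreter.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())

agent.run("How many seconds does light take to travel from the Sun to Earth?")
```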
-
PydanticAI: a New Python Framework for Streamlined Generative AI Development
The team behind Pydantic, widely used for data validation in Python, has announced the release of PydanticAI, a Python-based agent framework designed to ease the development of production-ready Generative AI applications. Positioned as a potential competitor to LangChain, PydanticAI introduces a type-safe, model-agnostic approach inspired by the design principles of FastAPI.
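A minimal sketch of the type-safe approach, based on PydanticAI's introductory examples as of its initial releases; the model identifier and output schema below are placeholders.

```python
# Minimal sketch of a typed PydanticAI agent; model name and schema are illustrative.
from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str

# The agent validates the model's output against the CityInfo schema.
agent = Agent(
    "openai:gpt-4o",
    result_type=CityInfo,
    system_prompt="Extract the city and country mentioned in the text.",
)

result = agent.run_sync("The 2024 Summer Olympics were held in Paris.")
print(result.data)  # e.g. CityInfo(city='Paris', country='France')
```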
-
Google AI Agent Jules Aims at Helping Developers with Their GitHub-Based Workflows
As part of its Gemini 2.0 announcements, Google has launched its new AI-based coding assistant in closed preview. Dubbed "Jules", the assistant aims to help developers work on Python and JavaScript issues and pull requests, handle bug fixes, and perform other related tasks.
-
AISuite is a New Open Source Python Library Providing a Unified Cross-LLM API
Recently announced by Andrew Ng, aisuite aims to provide an OpenAI-like API around the most popular large language models (LLMs) currently available, making it easy for developers to try them out, compare results, or switch from one LLM to another without changing their code.
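A short sketch of the unified interface, based on the aisuite README; the provider-prefixed model strings are illustrative and assume the corresponding API keys are configured.

```python
# Sketch of aisuite's OpenAI-style interface; "provider:model" strings are illustrative.
import aisuite as ai

client = ai.Client()
messages = [
    {"role": "system", "content": "Respond in one short sentence."},
    {"role": "user", "content": "Explain what a vector database is."},
]

# Switching providers only requires changing the model string.
for model in ["openai:gpt-4o", "anthropic:claude-3-5-sonnet-20241022"]:
    response = client.chat.completions.create(model=model, messages=messages)
    print(model, "->", response.choices[0].message.content)
```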
-
AWS Launches Lambda SnapStart for Python and .NET Functions
AWS has unveiled Lambda SnapStart for Python and .NET, enhancing serverless app performance by reducing cold start latency. This feature builds on the success of Lambda SnapStart for Java, allowing faster initializations through early environment caching. Available in multiple global regions, it offers efficient management of caching costs with Python 3.12+ and .NET 8+.
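As a hedged sketch of how this might be enabled programmatically, the snippet below uses boto3 to turn on SnapStart for an existing Python function; the function name and region are placeholders, and snapshots apply to published versions.

```python
# Sketch: enabling Lambda SnapStart on a Python function via boto3.
import boto3

client = boto3.client("lambda", region_name="us-east-1")

# Enable SnapStart; snapshots are taken when a version is published.
client.update_function_configuration(
    FunctionName="my-python-function",           # placeholder name
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Publish a version so a snapshot is created and reused on subsequent cold starts.
client.publish_version(FunctionName="my-python-function")
```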
-
Rise of Python, Generative AI, and Global Developer Communities: Insights from GitHub Octoverse 2024
Recently, the GitHub Octoverse 2024 report revealed that Python has surpassed JavaScript as the most popular language on GitHub, driven primarily by its dominance in fields like data science, machine learning, and scientific computing. Generative AI continued to gain prominence in software development, with a substantial increase in contributions to generative AI projects on GitHub.
-
How Allegro Reduced the Cost of Running a GCP Dataflow Pipeline by 60%
Allegro reduced the cost of one of its Dataflow pipelines running on GCP Big Data by 60%. The company continues to improve the cost-effectiveness of its data workflows by evaluating resource utilization, enhancing pipeline configurations, optimizing input and output datasets, and improving storage strategies.
-
No EC2 or Kubernetes Allowed: Insights from Building Serverless-Only Architecture at PostNL
PostNL shared insights and guidance from its transition from outsourced IT project delivery to an in-house product delivery capability. By embracing cloud-native technologies, with an emphasis on serverless services, the company achieved significant gains in productivity and market responsiveness while reducing operational costs.
-
Breaking down Python 3.13’s Latest Features
Python 3.13 introduces a revamped interactive interpreter with streamlined features like multi-line editing, an experimental free-threaded mode, and an experimental Just-in-Time (JIT) compiler. The update also removes several outdated standard-library modules and adds a command-line interface to the random module.
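As a quick illustration, the snippet below checks at runtime whether it is running on a free-threaded 3.13 build; it assumes the `sys._is_gil_enabled()` introspection hook documented for 3.13, which older interpreters simply lack.

```python
# Small sketch: detect a free-threaded (no-GIL) CPython 3.13 build at runtime.
import sys

print("Python", sys.version.split()[0])
if hasattr(sys, "_is_gil_enabled"):
    # Present in 3.13; returns False when the GIL is disabled on a free-threaded build.
    print("GIL enabled:", sys._is_gil_enabled())
else:
    print("This interpreter predates the free-threaded build flag.")
```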
-
Lyft Promotes Best Practices for Collaborative Protocol Buffers Design
Lyft shared its experiences using Protocol Buffers for inter-system integration, primarily focusing on collaborative protocol design for definitions shared between teams and systems. The company promotes approaches that improve knowledge sharing, consistency, and development process quality over raw efficiency optimizations.
-
Mistral Introduces AI Code Generation Model Codestral
Mistral AI has unveiled Codestral, its first code-focused AI model. Codestral helps developers with coding tasks, offering efficiency and accuracy in code generation.
-
JetBrains Aqua IDE for Test Automation Now Generally Available
Aqua, the first IDE for test automation, is now generally available. The IDE supports multiple languages and major testing frameworks like Selenium and Cypress. JetBrains introduces a new licensing model with Free Individual Non-Commercial and Paid Commercial plans. Additionally, Aqua is included in the All Products Pack.
-
Netflix Uses Metaflow to Manage Hundreds of AI/ML Applications at Scale
Netflix recently described how its Machine Learning Platform (MLP) team provides an ecosystem around Metaflow, an open-source machine learning infrastructure framework. Through various integrations built around Metaflow, Netflix now runs hundreds of Metaflow projects maintained by multiple engineering teams.
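For context, a minimal flow using the open-source Metaflow API looks like the sketch below; it illustrates the step-based programming model rather than Netflix's internal integrations.

```python
# Minimal Metaflow flow using the open-source API; steps and names are illustrative.
from metaflow import FlowSpec, step

class TrainingFlow(FlowSpec):

    @step
    def start(self):
        # Artifacts assigned to self are versioned and passed between steps.
        self.samples = list(range(10))
        self.next(self.train)

    @step
    def train(self):
        self.model_size = len(self.samples)  # stand-in for real training work
        self.next(self.end)

    @step
    def end(self):
        print("Trained a model over", self.model_size, "samples")

if __name__ == "__main__":
    TrainingFlow()  # run with: python training_flow.py run
```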
-
Apple Open-sources Apple Silicon-Optimized Machine Learning Framework MLX
Apple's MLX combines familiar APIs, composable function transformations, and lazy computation to create a machine learning framework inspired by NumPy and PyTorch that is optimized for Apple Silicon. Implemented in Python and C++, the framework aims to provide a user-friendly and efficient solution to train and deploy machine learning models on Apple Silicon.
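A brief sketch of the NumPy-style API, lazy evaluation, and composable transformations, based on the MLX documentation; the arrays and functions below are illustrative.

```python
# Sketch of MLX's NumPy-like, lazily evaluated API.
import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])
b = mx.array([4.0, 5.0, 6.0])

c = a * b + 1.0   # the computation graph is built lazily
mx.eval(c)        # explicitly materialize the result
print(c)

# Composable function transformations: gradient of a scalar-valued function.
grad_fn = mx.grad(lambda x: (x ** 2).sum())
print(grad_fn(a))  # gradient of sum(x^2) is 2x -> [2, 4, 6]
```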