If You Can’t Test It, Don’t Deploy It: The New Rule of AI Development?
Magdalena Picariello reframes how we think about AI, moving the conversation from algorithms and metrics to business impact and outcomes. She champions evaluation systems that don’t just measure accuracy but also demonstrate real-world business value, and advocates for iterative development with continuous feedback to steadily improve applications.
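A minimal sketch of what such an evaluation gate might look like, in the spirit of the title; every metric name, threshold, and dollar figure below is an illustrative assumption, not something taken from the talk:

```python
# Illustrative sketch: an evaluation gate that reports accuracy alongside a
# business-impact estimate and blocks deployment unless both clear their bar.
# All names, thresholds, and dollar figures are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float           # classic model-quality metric
    est_monthly_value: float  # hypothetical business-impact estimate

def evaluate(cases, predict, value_per_correct=2.0, cost_per_error=10.0):
    """Score a model on labeled (input, label) cases and price the errors."""
    correct = sum(1 for x, label in cases if predict(x) == label)
    errors = len(cases) - correct
    return EvalResult(
        accuracy=correct / len(cases),
        est_monthly_value=correct * value_per_correct - errors * cost_per_error,
    )

def deploy_gate(result: EvalResult) -> bool:
    # "If you can't test it, don't deploy it": ship only when the quality
    # metric and the business estimate both pass.
    return result.accuracy >= 0.95 and result.est_monthly_value > 0
```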
-
Effective Error Handling: a Uniform Strategy for Heterogeneous Distributed Systems
Jenish Shah, a back-end engineer focused on distributed systems at Netflix, shares insights into handling failures in a distributed setup. He details how he built a library that handles exceptions uniformly, regardless of the underlying communication protocol.
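A hypothetical sketch of the core idea: normalize each transport’s failures into a single error type with a retryability hint. The class names and status-code mappings are assumptions for illustration, not Netflix’s actual library:

```python
# Hypothetical sketch: translate each transport's failures into one error
# type with a retryability hint, so callers write a single failure policy.
# Class names and code mappings are illustrative, not Netflix's library.

class ServiceError(Exception):
    """Uniform error raised to callers, regardless of transport."""
    def __init__(self, message: str, retryable: bool):
        super().__init__(message)
        self.retryable = retryable

# Per-protocol classification of which raw error codes are worth retrying.
_RETRYABLE = {
    "grpc": {"UNAVAILABLE", "DEADLINE_EXCEEDED"},
    "http": {502, 503, 504},
}

def translate(protocol: str, raw_error: Exception) -> ServiceError:
    code = getattr(raw_error, "code", None)
    retryable = code in _RETRYABLE.get(protocol, set())
    return ServiceError(f"{protocol} call failed: {raw_error}", retryable)

def call_with_retries(protocol: str, fn, attempts: int = 3):
    """Invoke a remote call, retrying only errors classified as retryable."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as e:  # each client raises its own exception types
            err = translate(protocol, e)
            if not err.retryable or attempt == attempts - 1:
                raise err
```

Callers then depend only on ServiceError, so swapping one protocol client for another does not change their failure-handling code.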
-
Mental Models in Architecture and Societal Views of Technology: a Conversation with Nimisha Asthagiri
In this podcast, Michael Stiefel spoke with Nimisha Asthagiri about the importance of systems thinking, multi-agent systems, the consequences of society applying a technology to an area for which it was not designed, and whether we can ever have a healthy relationship with artificial intelligence.
-
Elena Samuylova on Large Language Model (LLM)-Based Application Evaluation and LLM as a Judge
In this podcast, InfoQ spoke with Elena Samuylova from Evidently AI on best practices for evaluating Large Language Model (LLM)-based applications. She also discussed tools for evaluating, testing, and monitoring applications powered by AI technologies.
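The LLM-as-a-judge pattern she discusses can be sketched generically as follows; the prompt, the labels, and the `call_llm` placeholder are assumptions for illustration, not Evidently AI’s API:

```python
# A generic LLM-as-a-judge sketch. `call_llm` is a placeholder for whatever
# model client you use; the rubric and labels are illustrative assumptions,
# not Evidently AI's API.

JUDGE_PROMPT = """You are grading an AI assistant's answer.
Question: {question}
Answer: {answer}
Reply with exactly one word: CORRECT or INCORRECT."""

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def judge(question: str, answer: str) -> bool:
    """Ask a (usually stronger) model to grade another model's answer."""
    verdict = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return verdict.strip().upper().startswith("CORRECT")

def pass_rate(questions, generate) -> float:
    """Judge every generated answer and report the fraction that pass."""
    return sum(judge(q, generate(q)) for q in questions) / len(questions)
```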
-
The Hidden Vulnerability of the Open Source Software Supply Chain: the Underlying Infrastructure
Software supply chain veteran Brian Fox unpacks the security implications of the new EU Cyber Resilience Act and its profound impact on open-source projects. He reveals the hidden infrastructure risks threatening those projects and shares insights for senior software leaders navigating this regulatory landscape.
-
Technology Radar and the Reality of AI in Software Development
Shane Hastie, Lead Editor for Culture & Methods, spoke to Rachel Laycock, Global CTO of Thoughtworks, about how the company's Technology Radar process captures technology trends around the globe. She is sceptical of the current AI efficiency hype, emphasizing that the real value of generative AI tools lies in solving complex problems like legacy code comprehension rather than just writing code faster.
-
Using AI Code Generation to Migrate 20,000 Tests
In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Sergii Gorbachov, a staff engineer at Slack, about how the team combined AI code generation with traditional coding approaches to migrate 20,000 tests in 10 months. They discovered that AI alone was insufficient: it required human oversight and conventional tooling to work effectively.
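One way to picture such a hybrid pipeline is sketched below: a deterministic codemod first, an LLM fallback second, and verification plus human review as the backstop. All function names here are hypothetical; the episode describes the approach, not this code:

```python
# Generic sketch of a hybrid test-migration pipeline: deterministic codemod
# first, LLM fallback second, verification and human review as the backstop.
# All function names are hypothetical stubs.

def run_codemod(test_source: str) -> str | None:
    """Deterministic AST-based rewrite; returns None when it cannot convert."""
    ...

def llm_convert(test_source: str) -> str:
    """Ask a model for a conversion; the output is a draft, never trusted as-is."""
    ...

def passes_in_new_framework(converted: str) -> bool:
    """Run the converted test and check that it behaves as before."""
    ...

def migrate(test_source: str) -> tuple[str, str]:
    converted = run_codemod(test_source)
    origin = "codemod"
    if converted is None:              # codemod could not handle this test
        converted = llm_convert(test_source)
        origin = "llm"
    if not passes_in_new_framework(converted):
        return test_source, "needs-human-review"  # AI alone is insufficient
    return converted, origin
```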
-
Technical Leadership: Building Powerful Solutions with Simplicity and Inclusion
In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Bhavani Vangala about creating powerful yet simple technology solutions, taking a balanced approach to AI tools, fostering inclusive team environments, and empowering women in tech leadership by focusing on strengths rather than societal constraints.
-
Achieving Sustainable Mental Peace in Software Engineering with Help from Generative AI
Shane Hastie spoke to John Gesimondo about how to leverage generative AI tools to support sustainable mental peace and productivity in the complex, interruption-prone world of software engineering. Gesimondo proposes a practical framework that addresses emotional recovery, getting unstuck, structured planning and communication, maximizing flow, and fostering divergent thinking.
-
Taming Flaky Tests: Trisha Gee on Developer Productivity and Testing Best Practices
In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke with Trisha Gee about the challenges and importance of addressing flaky tests, their impact on developer productivity and morale, best practices for testing, and broader concepts of measuring and improving developer productivity.
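A small sketch of one common tactic for confirming flakiness: rerun a suspect test repeatedly and measure its failure rate before deciding to quarantine it. The runner invocation (pytest) and the quarantine rule below are illustrative assumptions, not practices prescribed in the episode:

```python
# Sketch: rerun a suspect test N times and measure its failure rate before
# deciding whether to quarantine it. The pytest runner and the quarantine
# threshold are illustrative assumptions.

import subprocess

def flake_rate(test_id: str, runs: int = 20) -> float:
    """Run a single test N times and return the fraction of failed runs."""
    failures = sum(
        subprocess.run(["pytest", test_id, "-q"], capture_output=True).returncode != 0
        for _ in range(runs)
    )
    return failures / runs

def should_quarantine(test_id: str) -> bool:
    rate = flake_rate(test_id)
    # A test that fails only sometimes (neither always nor never) is flaky.
    return 0.0 < rate < 1.0
```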