Testing Content on InfoQ

  • How NASA Tests Their Software for the Space Shuttle and the Orion MPCV

    NASA uses multiple testing levels, independent validation, standards, safety communities, and tools to ensure safety. Darrel Raines gave a talk about software development and testing for the Space Shuttle and the Orion MPCV. He explained how they learn from failures and near misses and continually improve their process.

  • OWASP Launches AI Testing Guide to Address Security, Bias, and Risk in AI Systems

    The OWASP Foundation has officially introduced the AI Testing Guide (AITG), a new open-source initiative aimed at helping organizations systematically test and secure artificial intelligence systems. The guide serves as a foundational resource for developers, testers, risk officers, and cybersecurity professionals, promoting best practices in AI system security.

  • Scaling API Independence: Akehurst on Mocking, Contract Testing, and Observability

    At QCon London 2025, Tom Akehurst spotlighted the path to developer autonomy in microservices in his talk "Scaling API Independence." He emphasized advanced mocking, contract testing, and observability as ways to reduce coupling to API dependencies. Akehurst showed how these strategies, enhanced by AI, streamline development, boost productivity, and maintain integration confidence in complex systems. A minimal mocking sketch appears after this list.

  • Learning from Embedded Software Development for the Space Shuttle and the Orion MPCV

    Software development is very different today than it was at the beginning of the Space Shuttle era because of the tools we now have. But the art and practice of software engineering have not progressed that much since the early days of software development. Compilers are much better and faster, and debuggers are now integrated into development tools, making error detection easier.

  • Using Artificial Intelligence in Software Testing

    Quality Assurance Engineers can evolve into artificial intelligence (AI) strategists, guiding AI-driven test execution while focusing on strategic decisions. Rather than replacing testing roles, AI can enhance them by predicting defects, automating test maintenance, and refining risk-based testing. Human-AI collaboration is crucial for maintaining quality in increasingly complex software systems.

  • Applying DevOps Principles and Practices as a Quality Assurance Engineer

    DevOps streamlines software development with automation and collaboration between development and IT teams for efficient delivery. According to Nedko Hristov, testers' curiosity, adaptability, and willingness to learn make them suited for DevOps. Failures can be approached with a constructive mindset; they provide growth opportunities, leading to improved skills and practices.

  • Using Artificial Intelligence for Analysis of Automated Testing Results

    Analyzing automated test results is an important and challenging part of testing. At any given moment, we should be able to tell the state of our product from the results of automated tests, Maroš Kutschy said at QA Challenge Accepted. He presented how artificial intelligence helps his team save time on analysis, reduce human errors, and focus on new failures.

  • Meta Introduces LLM-Powered Tool for Software Testing

    Meta has unveiled the Automated Compliance Hardening (ACH) tool, a mutation-guided, LLM-based test generation system. Designed to enhance software reliability and security, ACH injects faults (mutants) into source code and then generates tests that detect them. A minimal mutation-testing sketch appears after this list.

  • How to Use Property-Based Testing as Fuzzy Unit Testing

    According to Eivind Jahren, property-based testing is an invaluable tool for its ease of use and effectiveness. It is flexible in the requirements one can formulate, and it is simple and lightweight enough to put in the hands of software developers for daily, iterative testing. Less test code is required, and test data generators for complex structured data are easier to reuse. A minimal property-based testing sketch appears after this list.

  • Exploring AI's Role in Automating Software Testing

    QA professionals are increasingly turning to AI to address the growing complexities of software testing. AI-driven automation can improve test coverage, reduce test cycle times, and enhance the accuracy of results, leading to faster software releases with higher quality.

  • How Slack Used an AI-Powered Hybrid Approach to Migrate from Enzyme to React Testing Library

    Enzyme’s lack of support for React 18 made Slack’s existing unit tests unusable and jeopardized the foundational confidence they provided, Sergii Gorbachov said at QCon San Francisco. He showed how Slack migrated all Enzyme tests to React Testing Library (RTL) to ensure the continuity of their test coverage.

  • Google Introduces Gemini AI Features to Android Studio

    Google has released a set of updates to Gemini in Android Studio, aiming to enhance developer productivity through AI-powered features. This release is designed to bring AI to every stage of the development lifecycle, including AI-assisted coding, refactoring, generating documentation, analyzing and testing code, and suggesting fixes.

  • Staying Innovative on a Journey from Start-Up to Scale-Up

    As ClearBank grew, it faced the challenge of maintaining its innovative culture while integrating more structured processes to manage its expanding operations and ensure regulatory compliance. Within boundaries of accountability and responsibility, teams were given space to evolve their own areas, experiment, and continuously improve in order to remain innovative.

  • The Value of Using Timeless Testing Tools

    According to Benjamin Bischoff, developers find new tools much more interesting than old ones, as they offer an opportunity to learn new technologies and approaches and to expand their tool belt. Using tools that have been around for decades, however, can save time and budget. When evaluating tools, it is more important to understand the problem to be solved than to jump straight into the tools.

  • How Testing in the Metaverse Looks

    The "metaverse" typically refers to a collective virtual shared space that is created by the convergence of a virtually enhanced physical reality and a persistent virtual reality. According to Jonathon Wright, testing requires a mix of manual testing, automated testing, user testing, emulators, and simulators. Real-world testing environments are used to cover as many scenarios as possible.
