BT

Facilitating the Spread of Knowledge and Innovation in Professional Software Development

Write for InfoQ

Topics

Choose your language

InfoQ Homepage News QCon London 2026: Ethical AI Is an Engineering Problem

QCon London 2026: Ethical AI Is an Engineering Problem

Listen to this article -  0:00

At QCon London 2026, Clara Higuera, Responsible AI Program Lead at BBVA, presented how many of the risks associated with AI systems are fundamentally engineering challenges rather than purely governance or policy issues. The session examined how AI systems are increasingly embedded in critical products and decision-making processes. As adoption grows, failures in these systems can have significant real-world consequences. This shift requires engineers to treat ethical properties of AI systems with the same rigor applied to reliability, performance, or security.

The talk opened with a widely reported case in the United States where Robert Williams, was wrongfully arrested after being misidentified by a facial recognition system. Incidents like this highlight how algorithmic errors can directly affect individuals and communities.

Such failures often arise from technical choices made during development. Training datasets may not represent the populations affected by the system, model architectures may lack explainability, and evaluation pipelines may fail to detect bias before deployment.

Rather than viewing these issues as external policy concerns, the talk emphasized that they originate within the engineering process itself.

AI systems encode the values embedded in their design. Decisions about data collection, feature engineering, model architecture, and evaluation metrics can all influence how a system behaves in production. For example, biased outcomes in loan approvals, hiring processes, or medical diagnostics can result from unrepresentative training data or poorly defined optimization objectives. Without explicit checks, models can reinforce historical biases present in datasets.

According to the presentation, integrating ethical principles into the AI lifecycle requires engineers to ask questions throughout development rather than after deployment. This includes evaluating datasets for representativeness, measuring model behavior across demographic groups, and ensuring that systems remain observable once deployed. The talk highlighted several principles that can guide AI system design. Fairness, transparency, security, sustainability, and accountability were presented as key dimensions that engineers must consider when building AI-powered systems.

Fairness requires evaluating how models perform across different groups and ensuring that outcomes do not systematically disadvantage specific populations. Transparency involves improving the interpretability and explainability of models so that stakeholders can understand how decisions are made.

Security is another emerging concern, particularly as new attack vectors such as prompt injection and model extraction become more common in AI systems. Sustainability is also gaining attention due to the computational cost associated with training and deploying large models. These dimensions must be addressed through engineering practices rather than abstract principles.

One of the challenges organizations face is translating high-level ethical concepts into practical engineering workflows. Teams often understand the importance of fairness or transparency but lack clear methods for implementing them.

The presentation suggested embedding ethical checks throughout the development lifecycle. This can include fairness evaluation during model training, explainability analysis before deployment, security testing against adversarial attacks, and monitoring systems that detect unexpected behavior in production.

By incorporating these practices early in the system architecture, organizations can reduce the risk of discovering ethical issues after systems are already in use. The talk compared the current stage of AI development to earlier technological transitions. Industries such as aviation, electricity, and automotive engineering initially advanced faster than the safety standards needed to govern them. Over time, those industries developed new engineering practices, standards, and regulatory frameworks to make systems reliable at scale.

AI appears to be entering a similar phase. As AI systems move from experimental tools into critical infrastructure, engineering practices will likely evolve to incorporate safety, reliability, and ethical considerations as core system requirements. Software architects and engineering leaders play an important role in shaping these practices. Because technology often evolves faster than regulation, developers frequently operate in environments where formal standards have not yet been established.

In this context, ethical principles can act as design guidelines that help teams navigate emerging risks. Organizations that treat ethical AI as an engineering discipline rather than an afterthought may be better positioned to build trustworthy and resilient systems.

The presentation concluded by encouraging developers to treat ethical properties of AI systems as measurable engineering requirements. Incorporating fairness evaluation, explainability checks, security testing, and resource efficiency into the development lifecycle can help ensure that AI systems remain both technically robust and socially responsible. As AI continues to become embedded in products, platforms, and infrastructure, the engineering decisions made during development will increasingly shape how these systems affect society.

About the Author

Rate this Article

Adoption
Style

BT