At QCon London 2026, Clara Higuera, Responsible AI Lead at BBVA, argued that many of the risks associated with AI systems are fundamentally engineering challenges rather than purely governance or policy issues. The session examined how AI systems are increasingly embedded in critical products and decision-making processes. As adoption grows, failures in these systems can have significant real-world consequences. This shift requires engineers to treat ethical properties of AI systems with the same rigor applied to reliability, performance, or security.

The talk opened with a widely reported case in the United States in which Robert Williams was wrongfully arrested after being misidentified by a facial recognition system. Incidents like this highlight how algorithmic errors can directly affect individuals and communities.
Such failures raise ethical questions that, in some cases, stem from technical and design choices made during development: training datasets may not represent the populations affected by the system, model architectures may lack explainability, and evaluation pipelines may fail to detect bias before deployment. Other times, they raise more fundamental questions: should this system exist at all? Is AI the best solution for this specific problem?
Rather than viewing these issues as external policy concerns, the talk emphasized that they originate within the engineering process itself.

AI systems encode the values embedded in their design. Decisions about data collection, feature engineering, model architecture, and evaluation metrics can all influence how a system behaves in production. For example, biased outcomes in loan approvals, hiring processes, or medical diagnostics can result from unrepresentative training data or poorly defined optimization objectives. Without explicit checks, models can reinforce historical biases present in datasets.
According to the presentation, integrating ethical principles into the AI lifecycle requires engineers to ask questions throughout development rather than after deployment. This includes evaluating datasets for representativeness, measuring model behavior across demographic groups, and ensuring that systems remain observable once deployed. The talk highlighted several principles that can guide AI system design. Fairness, transparency, security, sustainability, and accountability were presented as key dimensions that engineers must consider when building AI-powered systems.
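As an illustration of the first of these checks, a dataset-representativeness review can be as simple as comparing each group's share of the training data against a reference population share. The sketch below is a minimal, hypothetical example (the group labels and reference shares are invented for illustration), not a method presented in the talk:

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Compare each group's share in a dataset against a reference
    population share; return the over/under-representation gap per group."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in population_shares.items()
    }

# Hypothetical data: one group label per training record.
sample = ["A"] * 70 + ["B"] * 30
# Hypothetical reference shares, e.g. derived from census data.
population = {"A": 0.5, "B": 0.5}

gaps = representation_gaps(sample, population)
# Group A is over-represented by 0.2; group B under-represented by 0.2.
```

A check like this can run automatically whenever the training data is refreshed, flagging skew before a model is ever trained on it.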
Fairness requires evaluating how models perform across different groups and ensuring that outcomes do not systematically disadvantage specific populations. Transparency involves improving the interpretability and explainability of models so that stakeholders can understand how decisions are made.
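Group-wise fairness evaluation of this kind is often operationalized with metrics such as demographic parity. The following is a minimal sketch assuming binary approve/deny predictions and a single group attribute; the data is invented and the metric choice is an illustration, not something prescribed in the talk:

```python
def selection_rates(predictions, groups):
    """Positive-prediction (e.g. approval) rate per demographic group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred == 1 else 0)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest selection rates across groups;
    0.0 means all groups are selected at the same rate."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: group "A" is approved 75% of the time, group "B" 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

In practice, teams would compute several such metrics (equalized odds, false-positive-rate gaps, and so on) and track them per release rather than relying on a single number.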
Security is another emerging concern, particularly as new attack vectors such as prompt injection and model extraction become more common in AI systems. Sustainability is also gaining attention due to the computational cost of training and deploying large models. Rather than remaining abstract principles, these dimensions can be addressed through concrete engineering practices.

One of the challenges organizations face is translating high-level ethical concepts into practical engineering workflows. Teams often understand the importance of fairness or transparency but lack clear methods for implementing them.
The presentation suggested embedding ethical checks throughout the development lifecycle. This can include fairness evaluation during model training, explainability analysis before deployment, security testing against adversarial attacks, and monitoring systems that detect unexpected behavior in production.
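One lightweight way to wire such checks into a delivery pipeline is a release gate that compares pre-computed evaluation metrics against agreed thresholds and blocks deployment on any violation. The metric names and thresholds below are hypothetical, and this is a sketch of the general pattern rather than a practice described in the talk:

```python
def release_gate(metrics, thresholds):
    """Return the names of failing checks: metrics whose value exceeds
    the allowed threshold (missing metrics fail by default).
    An empty list means the model may ship."""
    return [
        name for name, limit in thresholds.items()
        if metrics.get(name, float("inf")) > limit
    ]

# Hypothetical metrics gathered earlier in the pipeline.
metrics = {
    "parity_gap": 0.08,         # fairness evaluation across groups
    "unexplained_share": 0.02,  # fraction of decisions with no explanation
    "injection_success": 0.0,   # rate of successful prompt-injection probes
}
thresholds = {
    "parity_gap": 0.10,
    "unexplained_share": 0.05,
    "injection_success": 0.0,
}

failures = release_gate(metrics, thresholds)  # [] -> safe to deploy
```

Treating a missing metric as a failure is a deliberate design choice here: a check that was never run should block a release just as a failing one would.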
By incorporating these practices early in the system architecture, organizations can reduce the risk of discovering ethical issues after systems are already in use.

The talk compared the current stage of AI development to earlier technological transitions. Industries such as aviation, electricity, and automotive engineering initially advanced faster than the safety standards needed to govern them. Over time, those industries developed new engineering practices, standards, and regulatory frameworks to make systems reliable at scale.
AI appears to be entering a similar phase. As AI systems move from experimental tools into critical infrastructure, engineering practices will likely evolve to incorporate safety, reliability, and ethical considerations as core system requirements. Software architects and engineering leaders play an important role in shaping these practices. Because technology often evolves faster than regulation, developers frequently operate in environments where formal standards have not yet been established.

In this context, ethical principles can act as design guidelines that help teams navigate emerging risks. Organizations that treat ethical AI as an engineering discipline rather than an afterthought may be better positioned to build trustworthy and resilient systems. This does not mean the entire responsibility rests with the engineering team; the focus is on what engineers can concretely do, translating fairness, accountability, and transparency into system requirements and development practices, without suggesting that this replaces the need for governance or leadership accountability.
The presentation concluded by encouraging developers to treat ethical properties of AI systems as measurable engineering requirements. Incorporating fairness evaluation, explainability checks, security testing, and resource efficiency into the development lifecycle can help ensure that AI systems remain both technically robust and socially responsible. As AI continues to become embedded in products, platforms, and infrastructure, the engineering decisions made during development will increasingly shape how these systems affect society.