Responsible AI: from Principle to Practice at QCon London

At the QCon London conference, Mehrnoosh Sameki, principal product manager at Microsoft, delivered a talk on "Responsible AI: from Principle to Practice". She outlined six key principles for responsible AI, detailed the four essential building blocks for implementing these principles, and introduced the audience to useful tools such as Fairlearn, InterpretML, and the Responsible AI dashboard.

Sameki opted for the term "Responsible AI" over alternatives such as "Ethical AI" and "Trusted AI", as she believes it embodies a more holistic and proactive approach that is widely shared in the community. Those discussing the field should demonstrate empathy, humility, and a helpful attitude. As the AI landscape evolves rapidly and companies accelerate their adoption of AI technologies, societal expectations will shift and regulations will emerge. It is thus becoming a best practice to give individuals the right to inquire about the rationale behind AI-driven decisions.

Sameki outlined Microsoft's Responsible AI principles, which are based on six fundamental aspects:

  1. Fairness
  2. Reliability and safety
  3. Privacy and security
  4. Inclusiveness
  5. Transparency
  6. Accountability

She also outlined four building blocks she deemed essential to effectively implement these principles: "tools and processes", "training and practices", "rules", and "governance". In the presentation, she focused mostly on the tools, processes, and practices around responsible AI.

The importance of fairness is best understood through the harms it prevents. Examples include differing quality of service for different groups of people, such as a voice recognition system that performs worse for some genders, or a loan-eligibility model that takes skin tone into account. Evaluating the possibility of these harms and understanding their implications is crucial. To address fairness, Microsoft developed Fairlearn, a tool that enables assessment through evaluation metrics and visualizations, as well as mitigation using fairness criteria and algorithms.
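
Fairlearn is a Python library; the following is a minimal sketch of a disaggregated fairness assessment. The labels, predictions, and gender column below are synthetic placeholders, not data from the talk:

    import numpy as np
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, selection_rate

    # Hypothetical labels, predictions, and sensitive feature
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
    gender = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

    # Disaggregate the metrics by the sensitive feature
    mf = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=gender,
    )
    print(mf.by_group)      # per-group metric values
    print(mf.difference())  # largest between-group gap per metric

For mitigation, Fairlearn also ships reduction algorithms such as ExponentiatedGradient, which retrain a model subject to a fairness constraint like demographic parity.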

InterpretML is another useful tool, aimed at understanding and debugging AI models. It covers both glassbox models, such as the explainable boosting machine, and "opaquebox" explanations for black-box models. This allows users to look into predictions and determine the top-k factors driving them. InterpretML also offers counterfactual analysis as a powerful debugging tool, enabling users to ask questions like, "What can I do to get a different outcome from the AI?". Counterfactuals give a machine learning engineer insight into how far certain samples are from the decision boundary, and which features are most likely to "flip" a decision. For example, if switching the gender feature suddenly yields a different prediction, that could indicate an unwanted bias in the model.
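
As an illustration of the glassbox side, here is a minimal sketch of training and explaining an explainable boosting machine with InterpretML; the dataset and feature names are synthetic placeholders:

    import numpy as np
    from interpret.glassbox import ExplainableBoostingClassifier
    from interpret import show

    # Synthetic placeholder data: 200 samples, 4 features
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    ebm = ExplainableBoostingClassifier(feature_names=["f0", "f1", "f2", "f3"])
    ebm.fit(X, y)

    # Global view: which features drive predictions overall
    show(ebm.explain_global())
    # Local view: top factors behind individual predictions
    show(ebm.explain_local(X[:5], y[:5]))

Counterfactual analysis along the lines Sameki described is available through the companion DiCE library (dice-ml) in Microsoft's Responsible AI toolbox.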

Sameki also gave a demo of Microsoft's Responsible AI dashboard. The analysis of errors in predictions is vital for ensuring reliability and safety. The tool provides insight into the various factors that lead to errors, and allows you to create cohorts to dive deeper into the causes of bias and errors.
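
The dashboard is distributed through the responsibleai and raiwidgets Python packages; the following hedged sketch, using a synthetic placeholder dataset and model, shows how its error analysis and explanation components are typically wired together:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from responsibleai import RAIInsights
    from raiwidgets import ResponsibleAIDashboard

    # Synthetic placeholder dataset with a binary label
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.normal(size=(300, 3)), columns=["f0", "f1", "f2"])
    df["label"] = (df["f0"] + df["f1"] > 0).astype(int)
    train, test = df.iloc[:200], df.iloc[200:]

    model = RandomForestClassifier().fit(train[["f0", "f1", "f2"]], train["label"])

    insights = RAIInsights(model, train, test,
                           target_column="label", task_type="classification")
    insights.error_analysis.add()  # error tree and heatmap component
    insights.explainer.add()       # model explanation component
    insights.compute()

    ResponsibleAIDashboard(insights)  # serves the interactive dashboard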

Sameki also discussed the potential dangers associated with large language models such as GPT-3, which are used for zero-shot, one-shot, and few-shot learning, in the context of responsible AI for generative AI. Some considerations for responsible AI in this context include:

  1. Discrimination, hate speech, and exclusion. Models can easily generate such content.
  2. Hallucination: the generation of unintentional misinformation. Models generate text; they are not knowledge engines.
  3. Information hazards. Models can leak information in unintended ways.
  4. Malicious use by bad actors to generate text automatically.
  5. Environmental and socioeconomic harms.

To address these challenges, Sameki proposed several solutions and predictions for improving AI-generated output (a small prompting sketch follows the list):

  1. Provide more precise instructions to the model; this is something individual users can do themselves.
  2. Break complex tasks into simpler subtasks, which large language models handle more reliably.
  3. Structure instructions to keep the model focused on the task.
  4. Prompt the model to explain its reasoning before answering.
  5. Request justifications for multiple possible answers and synthesize them.
  6. Generate numerous outputs and use the model to select the best one.
  7. Fine-tune custom models to maximize performance and align with responsible AI practices.
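
As a concrete illustration of points 3, 4, and 6 above, here is a small sketch in Python; the complete function is a hypothetical stand-in for any LLM completion API, not a real client:

    def complete(prompt: str) -> str:
        """Hypothetical placeholder; swap in a real LLM API call."""
        return "Reasoning: ... Answer: mixed"

    review = "Slow shipping, great product."

    # Structure the instructions (3) and ask for reasoning first (4)
    prompt = (
        "You are a sentiment classifier.\n"
        "Task: classify the review between the ### markers as "
        "positive, negative, or mixed.\n"
        f"###\n{review}\n###\n"
        "First explain your reasoning step by step, then give a one-word answer."
    )

    # Sample several outputs and let the model pick the best one (6)
    candidates = [complete(prompt) for _ in range(3)]
    best = complete(
        "Pick the most accurate of the following answers and return it "
        "verbatim:\n" + "\n---\n".join(candidates)
    )
    print(best)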
