Explainable AI Content on InfoQ
News
Azure AI Foundry Labs: a Hub for the Latest AI Research and Experiments at Microsoft
Microsoft's Azure AI Foundry Labs aims to bridge cutting-edge research and real-world applications. It offers experimental projects such as Aurora and MatterSim that developers can use to prototype new technologies, along with tools for dynamic learning and multimodal models intended to accelerate innovation and collaboration.
UC Berkeley's Sky Computing Lab Introduces Model to Reduce AI Language Model Inference Costs
UC Berkeley's Sky Computing Lab has released Sky-T1-32B-Flash, an updated reasoning language model that addresses the common issue of AI overthinking. The model, developed through the NovaSky (Next-generation Open Vision and AI) initiative, "slashes inference costs on challenging questions by up to 57%" while maintaining accuracy across mathematics, coding, science, and general knowledge domains.
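The model weights are openly released; below is a minimal sketch of loading the model with the Hugging Face transformers library. The repository id NovaSky-AI/Sky-T1-32B-Flash, the prompt, and the generation settings are assumptions to verify against the NovaSky release notes, and a 32B model requires substantial GPU memory.

```python
# Minimal sketch: loading Sky-T1-32B-Flash with Hugging Face transformers.
# The repo id below is an assumption; verify it against the NovaSky release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaSky-AI/Sky-T1-32B-Flash"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto")

prompt = "Prove that the sum of two even integers is even."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```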
AMD and Johns Hopkins Researchers Develop AI Agent Framework to Automate Scientific Research Process
Researchers from AMD and Johns Hopkins University have developed Agent Laboratory, an artificial intelligence framework that automates core aspects of the scientific research process. The system uses large language models to handle literature reviews, experimentation, and report writing, producing both code repositories and research documentation.
Ethical Machine Learning with Explainable AI and Impact Analysis
As more decisions are made or influenced by machines, there’s a growing need for a code of ethics for artificial intelligence. The main question is, “I can build it, but should I?” Explainable AI can provide checks and balances for fairness and explainability, and impact analysis lets engineers assess these systems' effects on people's lives and mental health.
Responsible AI: from Principle to Practice at QCon London
At the QCon London conference, Microsoft's Mehrnoosh Sameki discussed Responsible AI principles and tools. She emphasized fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability. Tools such as Fairlearn, InterpretML, and the Responsible AI dashboard help implement these principles.
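As a concrete illustration of one of these tools, here is a minimal sketch of a group fairness check with Fairlearn's MetricFrame; the labels, predictions, and sensitive-feature groups are toy values for illustration only.

```python
# Minimal sketch: comparing model metrics across sensitive groups with Fairlearn.
# The data below is toy/illustrative; in practice y_true and y_pred come from a
# trained model and sensitive_features from the evaluation dataset.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.overall)   # metrics over the whole dataset
print(mf.by_group)  # the same metrics broken down per group
```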
DeepMind Open-Sources AI Interpretability Research Tool Tracr
Researchers at DeepMind have open-sourced TRAnsformer Compiler for RASP (Tracr), a compiler that translates programs written in the RASP language into Transformer neural-network models. Tracr is intended for research in mechanistic interpretability of Transformer AI models such as GPT-3.
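The sketch below, adapted from the example in the Tracr repository, compiles a RASP program that reverses its input sequence into a transformer model; exact API details may differ between Tracr versions.

```python
# Minimal sketch: compiling a RASP program (sequence reversal) with Tracr,
# adapted from the example in the deepmind/tracr repository.
from tracr.rasp import rasp
from tracr.compiler import compiling

# Compute the sequence length by selecting every position and counting matches.
all_true = rasp.Select(rasp.tokens, rasp.tokens, rasp.Comparison.TRUE)
length = rasp.SelectorWidth(all_true)

# Map each position to its mirror position and aggregate the tokens found there.
opp_index = length - rasp.indices - 1
flip = rasp.Select(rasp.indices, opp_index, rasp.Comparison.EQ)
reverse = rasp.Aggregate(flip, rasp.tokens)

model = compiling.compile_rasp_to_model(
    reverse, vocab={1, 2, 3}, max_seq_len=5, compiler_bos="BOS")
print(model.apply(["BOS", 1, 2, 3]).decoded)  # expected: ["BOS", 3, 2, 1]
```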
Allen Institute for AI Open-Sources AI Model Inspection Tool LM-Debugger
The Allen Institute for AI (AI2) open-sourced LM-Debugger, an interactive tool for interpreting and controlling the output of language model (LM) predictions. LM-Debugger supports any HuggingFace GPT-2 model and allows users to intervene in the text generation process by dynamically modifying updates in the hidden layers of the model's neural network.
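LM-Debugger itself ships as an interactive web tool, so rather than guessing at its API, the sketch below uses plain Hugging Face transformers to show the kind of per-layer inspection it builds on: projecting each layer's hidden state of a GPT-2 model onto the vocabulary (a "logit lens"-style view) to see how the prediction evolves through the network.

```python
# Minimal sketch of per-layer prediction inspection for GPT-2 (a "logit lens"-style
# view), using plain Hugging Face transformers rather than LM-Debugger's own API.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Project each layer's hidden state for the last position through the final
# layer norm and the output embedding to see the top predicted token per layer.
for layer, hidden in enumerate(outputs.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(hidden[:, -1, :]))
    top_token = tokenizer.decode(logits.argmax(dim=-1))
    print(f"layer {layer:2d}: top token = {top_token!r}")
```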