Microsoft recently held its annual Build developer conference, where it made several significant AI announcements, including Copilot AI agents, the Phi-3 model family, and the availability of GPT-4o on Azure AI.
Microsoft announced new features for Microsoft Copilot aimed at enhancing productivity and collaboration across organizations. The updates include Team Copilot, which expands Copilot's role from a personal assistant to a team collaborator, facilitating meetings, managing tasks, and improving group communication in tools like Microsoft Teams and Microsoft Planner.
According to Professor Ethan Mollick:
"Agents represent the first break away from the chatbot and copilot models for interacting with AI."
Additionally, custom agents built with Microsoft Copilot Studio can now automate business processes, reason over user actions, and learn from feedback, aiming to boost efficiency and cost savings. New Copilot extensions and connectors allow developers to tailor and integrate Copilot with specific business systems using Copilot Studio or Teams Toolkit for Visual Studio.
Microsoft also introduced Phi-3, a family of small open models designed to help developers build cost-efficient and responsible multimodal generative AI applications. The family comprises Phi-3-mini, Phi-3-small, Phi-3-medium, and Phi-3-vision, offering a range of sizes and capabilities for various applications.
As mentioned by Machine Learning Researcher Awni Hannun on X:
"You can run Phi-3 Small (7B) in MLX LM. The model has a few quirks: the block sparse attention, a new nonlinearity, and an unusual way of splitting queries/keys/values. Useful to have a flexible framework to implement it in. And still runs quite fast on an M2 Ultra."
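For readers who want to try a Phi-3 model locally, a minimal sketch of the mlx-lm command-line flow Hannun is describing might look like the following. This assumes an Apple Silicon Mac; the quantized model identifier on the mlx-community Hugging Face hub is an assumption, not one given in the post.

```shell
# Install the MLX LM package (assumes Apple Silicon and Python available)
pip install mlx-lm

# Download a quantized Phi-3 model from the mlx-community hub (assumed id)
# and generate text with it locally
python -m mlx_lm.generate \
  --model mlx-community/Phi-3-mini-4k-instruct-4bit \
  --prompt "Explain block sparse attention in one sentence."
```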
The previously available Phi-3-mini and Phi-3-medium models can now be accessed via Azure AI's models-as-a-service offering. Phi-3 models, optimized for various hardware and scenarios, offer cost-effective solutions for language, reasoning, and coding tasks. Notable use cases include ITC's AI copilot for farmers, Khan Academy's math tutoring, and Epic's patient history summaries.
Finally, OpenAI's GPT-4o, a new multimodal model, is now available in Azure AI Studio. This model, which builds on GPT-4, allows for a richer user experience by enabling inputs and outputs that span text, images, and more. Azure OpenAI Service customers can explore GPT-4o's capabilities in a preview playground in Azure OpenAI Studio, available in two US regions. GPT-4o is engineered for speed and efficiency, offering advanced handling of complex queries with minimal resources, translating to cost savings and improved performance.
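As an illustration of the multimodal inputs mentioned above, a minimal sketch of calling a GPT-4o deployment on Azure OpenAI might look like the following. The endpoint, API key, API version, deployment name, and image URL are all placeholder assumptions, not values from the announcement.

```python
import os

def build_multimodal_messages(question: str, image_url: str) -> list:
    """Build a chat payload mixing text and image content parts,
    the message format GPT-4o accepts for multimodal input."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

if __name__ == "__main__":
    # Requires `pip install openai` plus a real Azure OpenAI
    # endpoint, key, and GPT-4o deployment (placeholders below).
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-05-01-preview",  # assumed preview API version
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed deployment name
        messages=build_multimodal_messages(
            "What is shown in this chart?",
            "https://example.com/chart.png",  # placeholder image URL
        ),
    )
    print(response.choices[0].message.content)
```

The helper only assembles the request body, so the mixed text-plus-image structure can be inspected without Azure credentials; the guarded main block shows where the actual service call would go.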