AI, ML & Data Engineering Content on InfoQ
-
Microsoft Introduces Open Data for Social Impact Framework
Microsoft recently introduced the Open Data for Social Impact Framework, a guide to help organizations put data to work to get new insights, make better decisions, and improve efficiency while tackling pressing social issues. The framework includes a five-step roadmap that organizations can use to get started.
-
EleutherAI Open-Sources 20 Billion Parameter AI Language Model GPT-NeoX-20B
Researchers from EleutherAI have open-sourced GPT-NeoX-20B, a 20-billion parameter natural language processing (NLP) AI model similar to GPT-3. The model was trained on 825GB of publicly available text data and has performance comparable to similarly-sized GPT-3 models.
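As an illustration only, the sketch below shows how the released checkpoint could be loaded for text generation with the Hugging Face transformers library; the model identifier EleutherAI/gpt-neox-20b and the hardware assumptions (tens of gigabytes of RAM or GPU memory) are assumptions, not details from the announcement.

```python
# Minimal sketch: generating text with GPT-NeoX-20B via Hugging Face transformers.
# Assumes the released checkpoint is published on the Hub as "EleutherAI/gpt-neox-20b"
# and that the machine has enough memory to hold the 20-billion-parameter weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("EleutherAI is building", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```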
-
From Natural Language Queries to Insights: GCP BigQuery Data QnA Usage in Twitter
The Twitter engineering team has shared architectural details of their Qurious data insights platform and its advantages for real-time analysis. Designed for internal business customers, the platform allows users to analyze Twitter’s BigQuery data using natural language queries and create dashboards.
-
Meta AI Labs Introduces BuilderBot, a Voice Control Builder for Virtual Worlds
Meta’s latest AI research introduces BuilderBot, a new tool intended to fuel creativity in the metaverse by generating objects in immersive environments using only voice commands.
-
Meta Announces Conversational AI Model Project CAIRaoke
Meta AI Research recently announced Project CAIRaoke, an end-to-end deep-learning model for digital assistants. Project CAIRaoke is currently being used in Meta's Portal device and outperforms a previous conversational model when evaluated on a reminder task.
-
Microsoft Introduces Azure Health Data Services: Protected Health Information on the Cloud
Microsoft recently announced Azure Health Data Services, a Platform-as-a-Service that allows organizations to upload, store, manage and analyze healthcare data in the open standards FHIR and DICOM.
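Because the service exposes data through the standard FHIR REST API, a minimal sketch of reading Patient resources might look like the following; the endpoint URL and access token are placeholders, and the exact authentication flow depends on the Azure AD configuration.

```python
# Minimal sketch: reading Patient resources from a FHIR service over its standard REST API.
# The endpoint URL and access token below are placeholders; Azure Health Data Services
# FHIR endpoints expect an Azure AD (OAuth 2.0) bearer token.
import requests

fhir_endpoint = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"  # placeholder
access_token = "<azure-ad-access-token>"  # placeholder

response = requests.get(
    f"{fhir_endpoint}/Patient",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/fhir+json",
    },
)
response.raise_for_status()
bundle = response.json()  # a FHIR Bundle resource
for entry in bundle.get("entry", []):
    print(entry["resource"]["id"])
```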
-
University of Washington Open-Sources AI Fine-Tuning Algorithm WiSE-FT
A team of researchers from the University of Washington (UW), Google Brain, and Columbia University has open-sourced weight-space ensembles for fine-tuning (WiSE-FT), an algorithm for fine-tuning AI models that improves robustness under distribution shift. Experiments on several computer vision (CV) benchmarks show that WiSE-FT improves accuracy by up to 6 percentage points.
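The core idea of weight-space ensembling can be sketched in a few lines of PyTorch-style code: linearly interpolate the parameters of the zero-shot and fine-tuned versions of the same model. The function name and mixing coefficient below are illustrative, not the released implementation.

```python
# Minimal sketch of the weight-space ensembling idea behind WiSE-FT:
# interpolate the zero-shot and fine-tuned weights of the same architecture.
def wise_ft(zero_shot_state, fine_tuned_state, alpha=0.5):
    """Return an interpolated state dict: (1 - alpha) * zero-shot + alpha * fine-tuned."""
    # Note: integer buffers (e.g., batch-norm counters) may need special handling.
    return {
        key: (1 - alpha) * zero_shot_state[key] + alpha * fine_tuned_state[key]
        for key in zero_shot_state
    }

# Hypothetical usage with two models sharing the same architecture:
# merged = wise_ft(zero_shot_model.state_dict(), fine_tuned_model.state_dict(), alpha=0.5)
# model.load_state_dict(merged)
```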
-
Orchestrate Operations, Validations, and Approvals on Data Entities with Azure Purview Workflows
Microsoft recently announced the preview of Azure Purview Workflows, which let customers orchestrate create, update, and delete operations, as well as validations and approvals of data entities, using repeatable business processes.
-
Allen Institute Launches Updated Embodied AI Challenge
The Allen Institute for AI (AI2) has announced the 2022 version of their AI2-THOR Rearrangement Challenge. The challenge requires competitors to design an autonomous agent that can move objects in a virtual room. The 2022 version includes several improvements, such as a new dataset and faster training using the latest release of the AI2-THOR simulation platform.
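For orientation, here is a minimal sketch of stepping an agent through the AI2-THOR simulator with its Python package; the scene name and action are illustrative, and the challenge itself layers its own rearrangement task API on top of the simulator.

```python
# Minimal sketch: driving an agent in the AI2-THOR simulator via its Python API.
from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan1")      # illustrative scene name
event = controller.step(action="MoveAhead")      # take one navigation step
print(event.metadata["agent"]["position"])       # inspect the agent's new position
controller.stop()
```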
-
Deep Learning Toolkit Intel OpenVINO Extends API, Improves Performance, and More
The latest release of Intel OpenVINO offers a cleaner API, expands support for natural language processing, and improves performance and portability thanks to its new AUTO plugin. InfoQ spoke with Matthew Formica, Intel senior director of AI for OpenVINO, to learn more.
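A minimal sketch of the updated Python API with the AUTO plugin is shown below, assuming a model already converted to OpenVINO IR; the model path is a placeholder.

```python
# Minimal sketch: compiling a model with the OpenVINO 2022.1 Python API and the AUTO
# device plugin, which selects the best available hardware target at runtime.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                 # placeholder path to an IR model
compiled_model = core.compile_model(model, "AUTO")   # let AUTO pick CPU, GPU, etc.
# results = compiled_model([input_tensor])           # run inference on a prepared input
```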
-
QCon Software Development Conferences: Seven Tracks Not to Miss
Why are micro-frontends important? How should you optimise your organisational architecture for speed and flow? How do you make microservices successful? Have you ever wondered how well-known tech companies can seamlessly deliver an exceptional user experience while supporting millions of users and billions of transactions? Are you looking for new processes and best software practices?
-
University Researchers Investigate Machine Learning Compute Trends
A team of researchers from the University of Aberdeen, MIT, and several other institutions has released a dataset of historical compute demands for machine learning (ML) models. The dataset records the compute required to train 123 important models, and an analysis shows that the growth in training compute has accelerated significantly since 2010.
-
AlphaCode: Competitive Code Synthesis with Deep Learning
The AlphaCode study shows promising results for goal-oriented code synthesis using deep sequence-to-sequence models. It extends previous network architectures and releases a new dataset, CodeContests, as a benchmark for future research.
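As a hedged sketch, the CodeContests dataset could be inspected with the Hugging Face datasets library; the Hub identifier deepmind/code_contests and the field names below are assumptions about how the release is mirrored, and DeepMind also distributes the data from its own GitHub repository.

```python
# Hedged sketch: loading the CodeContests dataset via Hugging Face datasets.
# The Hub identifier "deepmind/code_contests" and the field names are assumptions;
# adjust them to the actual release location and schema if they differ.
from datasets import load_dataset

code_contests = load_dataset("deepmind/code_contests", split="train", streaming=True)
sample = next(iter(code_contests))
print(sample["name"])                # problem name (assumed field)
print(sample["description"][:200])   # start of the problem statement (assumed field)
```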
-
Tel Aviv University Releases Long-Text NLP Benchmark SCROLLS
Researchers from Tel Aviv University, Meta AI, IBM Research, and the Allen Institute for AI have released Standardized CompaRison Over Long Language Sequences (SCROLLS), a set of natural language processing (NLP) benchmark tasks operating on long text sequences drawn from many domains. Experiments on baseline NLP models show that current models have significant room for improvement.
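For readers who want to inspect the tasks, a hedged sketch of loading one SCROLLS task with the Hugging Face datasets library is shown below; the Hub identifier tau/scrolls, the gov_report configuration, and the field names are assumptions about how the benchmark is published.

```python
# Hedged sketch: loading the GovReport task from SCROLLS via Hugging Face datasets.
# The dataset identifier "tau/scrolls", config "gov_report", and field names are assumptions.
from datasets import load_dataset

gov_report = load_dataset("tau/scrolls", "gov_report", split="validation")
example = gov_report[0]
print(len(example["input"]))     # length of the long input document (assumed field)
print(example["output"][:200])   # start of the reference summary (assumed field)
```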
-
Waymo Releases Block-NeRF 3D View Synthesis Deep-Learning Model
Waymo released Block-NeRF, a deep-learning model for large-scale 3D world-view synthesis that reconstructs scenes from images collected by its self-driving cars. NeRF (Neural Radiance Fields) encodes surface and volume representations in a neural network.
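As background, view synthesis in the NeRF family rests on the volume rendering formulation from the original NeRF paper, which expresses the color of a camera ray as an integral of density and color along the ray (a reference formula, not a Block-NeRF-specific detail):

```latex
% Volume rendering of a camera ray r(t) = o + t d in the original NeRF formulation
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```

Here σ is the volume density and c the view-dependent color predicted by the network, integrated between the near and far bounds of the ray.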