
Defensible Moats: Unlocking Enterprise Value with Large Language Models at QCon San Francisco

In a recent presentation at QCon San Francisco 2023, Nischal HP shared insights on how large language models (LLMs) can be leveraged to unlock enterprise value. He discussed the challenges enterprises face when building LLM-powered applications using APIs alone. These challenges, or "hurdles", include data fragmentation, the absence of a shared business vocabulary, privacy concerns regarding data, and diverse objectives among stakeholders.

To overcome these hurdles, the speaker's team established a robust data foundation: an enterprise data fabric built on knowledge graphs, a shared domain vocabulary, and data contracts for enhanced data observability. This foundation has accelerated their adoption of large language models, enabling solutions tailored to the needs of their customers in the supply chain industry.
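The talk does not include code, but the idea of a data contract can be sketched as a small, enforceable schema that a producing team publishes and an ingesting pipeline checks. The field names and the supplier-record shape below are hypothetical illustrations, not taken from the presentation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldSpec:
    """One field in a data contract: name, expected type, and whether it is required."""
    name: str
    dtype: type
    required: bool = True

# Hypothetical contract for a supplier record in a supply chain data fabric.
SUPPLIER_CONTRACT = [
    FieldSpec("supplier_id", str),
    FieldSpec("country", str),
    FieldSpec("annual_spend_usd", float, required=False),
]

def validate(record: dict, contract: list) -> list:
    """Return a list of contract violations; an empty list means the record conforms."""
    violations = []
    for spec in contract:
        if spec.name not in record:
            if spec.required:
                violations.append(f"missing required field: {spec.name}")
            continue
        if not isinstance(record[spec.name], spec.dtype):
            violations.append(f"{spec.name}: expected {spec.dtype.__name__}")
    return violations

good = {"supplier_id": "S-001", "country": "DE", "annual_spend_usd": 1.2e6}
bad = {"supplier_id": 42}

print(validate(good, SUPPLIER_CONTRACT))  # []
print(validate(bad, SUPPLIER_CONTRACT))   # two violations: wrong type, missing field
```

Checks like these give downstream consumers an explicit, observable signal when upstream data drifts, rather than letting bad records silently reach the ML layers.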

The solutions encompass a range of scenarios, including risk mitigation, ESG framework implementation, strategic procurement, spend analytics, and data compliance. HP's day-to-day work involves building and operating data and machine learning infrastructure, spanning multiple layers of machine learning models and the latest advances in large language models. His motivation for the talk was to show how large enterprises can implement machine learning models while accounting for technology enablement, design, and data privacy.

HP discussed the concept of "defensible moats," a term popularized by Warren Buffett to describe economic castles protected by unbreachable moats. In the context of AI, the speaker argued that the defensible moat is not the deep learning models or the large language models themselves, but knowing where to build your moat and which components to buy off the shelf as commodities.

The speaker's team has built a data stack which includes a system of records, a system of intelligence, and a system of engagement. The system of records involves data from different sources, including ERP systems, document stores, and custom data systems. The system of intelligence includes a machine learning inference layer and an agent-based framework. The system of engagement involves building conversational AI supported by a multi-agent architecture.
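The three-layer split described above can be sketched minimally: a "system of records" lookup, a "system of intelligence" inference step, and a "system of engagement" that composes the user-facing answer. All names, data, and the toy risk rule below are illustrative stand-ins, not the team's actual stack:

```python
# System of records: a stand-in for data consolidated from ERP systems,
# document stores, and custom data systems.
RECORDS = {
    "S-001": {"name": "Acme Metals", "on_time_rate": 0.72},
}

def assess_risk(supplier: dict) -> str:
    """System of intelligence: a placeholder for the ML inference layer."""
    return "high" if supplier["on_time_rate"] < 0.8 else "low"

def answer(supplier_id: str) -> str:
    """System of engagement: composes a conversational reply from the layers below."""
    supplier = RECORDS[supplier_id]
    risk = assess_risk(supplier)
    return f"{supplier['name']} currently has {risk} delivery risk."

print(answer("S-001"))  # Acme Metals currently has high delivery risk.
```

The point of the layering is that the conversational front end never touches raw source systems directly; it only consumes outputs of the intelligence layer, which in turn reads from the consolidated records.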

The speaker also said it was a relief to see the concept of "Graph Neural Prompting" emerge independently. It involves using a domain-specific knowledge graph to guide the reasoning of a large language model, which can help reduce the model's tendency to "hallucinate", i.e., to generate coherent but factually incorrect answers.

The speaker urged practitioners to take LLMs' reliability, predictability, and observability seriously, and to make them safer for everyone by applying adequate guardrails. The full presentation will become available on October 20, 2023.
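An output guardrail, in its most basic form, is a deterministic check applied to a model response before it reaches the user. The redaction policy and patterns below are a hypothetical sketch; production systems typically layer many such checks (input, output, topical, PII) around the model:

```python
import re

# Illustrative PII pattern: matches email addresses in model output.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guard(response: str) -> str:
    """Refuse empty answers and redact email addresses before returning output."""
    if not response.strip():
        return "Sorry, I could not produce a reliable answer."
    return EMAIL.sub("[redacted]", response)

print(guard("Contact bob@example.com for the contract."))
# Contact [redacted] for the contract.
```

Because such checks are deterministic, they also improve observability: every redaction or refusal can be logged and audited independently of the model itself.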
