
OpenAI's Open-Source ChatGPT Plugin - Q&A with Roy Miara


OpenAI recently announced plugin support for ChatGPT, allowing the language model to access external tools and databases. The company also open-sourced the code for a knowledge retrieval plugin, which organizations can use to provide ChatGPT-based access to their own documents and data.

Although large language models (LLMs) like ChatGPT can answer many questions correctly, their knowledge can become out of date, since it is not updated after the LLM is trained. Further, the model can only output text, which means it cannot directly act on behalf of a user.

To help solve these problems, researchers have explored ways to allow LLMs to execute APIs or access knowledge databases. ChatGPT's plugin system will allow the model to integrate with external systems such as knowledge bases and third-party APIs. The retrieval plugin allows the model to perform semantic search against a vector database. Because the plugin is self-hosted, organizations can securely store their own internal documents in the database, letting their users interact with the data via ChatGPT's natural language interface.
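
Concretely, a self-hosted deployment of the retrieval plugin exposes a /query endpoint. The sketch below shows one way to call it from Python; the deployment URL, bearer token, and question are hypothetical, and the request/response field names follow the open-source repository's query schema:

```python
import requests

PLUGIN_URL = "https://retrieval.example.com"  # hypothetical deployment
BEARER_TOKEN = "replace-with-your-token"      # hypothetical token

# Ask the plugin for the document chunks most relevant to a question.
resp = requests.post(
    f"{PLUGIN_URL}/query",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    json={"queries": [{"query": "What is our vacation policy?", "top_k": 3}]},
)
resp.raise_for_status()

# Each query returns a ranked list of scored document chunks.
for result in resp.json()["results"]:
    for chunk in result["results"]:
        print(chunk["score"], chunk["text"])
```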

The plugin supports several commercial and open-source vector databases, including one developed by Pinecone, which contributed to the open-source plugin code. InfoQ spoke with Roy Miara, an engineering manager at Pinecone, about their contribution to the plugin.

InfoQ: What is a ChatGPT plugin, and in particular, what is the Retrieval Plugin used for?

Roy Miara: ChatGPT plugins serve as supplementary tools that facilitate access to current information, execute computations, or integrate third-party services for ChatGPT. The Retrieval Plugin, in particular, empowers ChatGPT to obtain external knowledge via semantic search techniques. There are two prevalent paradigms for employing the Retrieval Plugin: 1) utilizing the plugin to access personal or organizational data, and 2) implementing the plugin as a memory component within ChatGPT. Both use semantic search: the model rephrases user prompts as queries to a vector database such as Pinecone, Milvus, or Weaviate.
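
That "rephrase and search" step boils down to embedding the prompt and querying the vector database. Here is a minimal sketch with Pinecone, using the 2023-era openai and pinecone-client libraries; the credentials, index name, and metadata field are assumptions:

```python
import openai
import pinecone

openai.api_key = "sk-..."  # hypothetical credentials
pinecone.init(api_key="...", environment="us-east-1-aws")
index = pinecone.Index("chatgpt-retrieval")  # hypothetical index name

# Embed the rephrased user prompt, then run a semantic search with the vector.
query = "vacation policy for new employees"
embedding = openai.Embedding.create(
    model="text-embedding-ada-002", input=query
)["data"][0]["embedding"]

results = index.query(vector=embedding, top_k=3, include_metadata=True)
for match in results.matches:
    print(match.score, match.metadata.get("text"))
```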

InfoQ: What are some advantages a ChatGPT plugin has over other LLM integrations such as LangChain?

Miara: Although LangChain enables an "agent" experience with tools and chains, ChatGPT plugins are more suitable for AI app development. The advantages of ChatGPT plugins include: 1) a more sophisticated implementation that leverages OpenAI's internal plugin capabilities, as opposed to LangChain's approach of concatenating plugin information as a prompt to the model, and 2) support for security and authentication methods, which are essential for AI application development, particularly when accessing personal data or performing actions on behalf of a user. These features are not present in LangChain's current offerings.

InfoQ: Can you describe your contributions to the retrieval plugin open source project?

Miara: We contributed the Pinecone datastore implementation to the project, along with some internal improvements to testing and documentation. The overall base implementation follows Pinecone's upsert/query/delete paradigm, and we are currently working on hybrid queries and other advanced query techniques.
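
That paradigm maps directly onto three index operations in Pinecone's client. A minimal sketch follows, using the 2023-era pinecone-client; the index name, vector values, and metadata are illustrative:

```python
import pinecone

pinecone.init(api_key="...", environment="us-east-1-aws")
index = pinecone.Index("chatgpt-retrieval")  # hypothetical index name

# Upsert: write document chunks as (id, vector, metadata) tuples.
index.upsert(vectors=[
    ("doc-1#chunk-0", [0.1] * 1536, {"text": "First chunk of doc-1"}),
])

# Query: fetch the chunks nearest to a query embedding.
index.query(vector=[0.1] * 1536, top_k=5, include_metadata=True)

# Delete: remove chunks by id, e.g. when a source document is removed.
index.delete(ids=["doc-1#chunk-0"])
```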

InfoQ: Can you provide some technical details on how a typical ChatGPT plugin works?

Miara: A ChatGPT plugin is a web server that exposes an "instruction" manifest to ChatGPT, which describes the operation of the plugin as a prompt and the API reference as an OpenAPI YAML specification. With those, ChatGPT understands which API calls are possible and the instructions it should follow.
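
To make that concrete, here is a sketch of the manifest fields, shown as a Python dict; all names and URLs are hypothetical, while the field names follow OpenAI's plugin manifest format:

```python
# Hypothetical ai-plugin.json manifest, served from /.well-known/ai-plugin.json.
MANIFEST = {
    "schema_version": "v1",
    "name_for_human": "Acme Docs Search",  # shown to users
    "name_for_model": "acme_docs",         # how ChatGPT refers to the plugin
    "description_for_human": "Search Acme's internal documents.",
    # The prompt ChatGPT reads to decide when and how to call the plugin:
    "description_for_model": "Use this to search the user's documents "
                             "before answering questions about Acme.",
    "auth": {"type": "user_http", "authorization_type": "bearer"},
    "api": {"type": "openapi", "url": "https://retrieval.example.com/openapi.yaml"},
    "logo_url": "https://retrieval.example.com/logo.png",
    "contact_email": "dev@example.com",
    "legal_info_url": "https://example.com/legal",
}
```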

So in order to build a plugin, one should build the application logic, implement a web server that follows the OpenAPI specification, and deploy the server so that ChatGPT is able to access it. Although there is no limit to the application logic one can implement, it is not recommended to construct a complex API server, since this might result in undesired behavior, confusion, and so on.
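
A minimal sketch of such a server, using FastAPI, which generates the OpenAPI specification automatically; the endpoint shape loosely mirrors the retrieval plugin's /query route, with the actual lookup stubbed out:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    query: str
    top_k: int = 3

class QueryRequest(BaseModel):
    queries: list[Query]

@app.post("/query")
def query(request: QueryRequest):
    # Application logic goes here: embed each query and search the
    # vector database (stubbed; a real plugin would call Pinecone etc.).
    return {"results": [{"query": q.query, "results": []} for q in request.queries]}
```

FastAPI serves the generated specification at /openapi.json, which the manifest's api field can point ChatGPT at.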

We have found that the "description_for_model" part of the manifest, which is essentially a prompt injected before the retrieved context, is key to a successful plugin. OpenAI provides some guidelines, but at the end of the day, it's in the developers' hands to find the right prompt for the task.
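
For example, a description_for_model for a document-search plugin might read something like the following; the wording is hypothetical, not the official plugin's prompt:

```python
# Hypothetical description_for_model prompt for a document-search plugin.
DESCRIPTION_FOR_MODEL = (
    "Plugin for searching the user's personal and company documents. "
    "Use it whenever the user asks about internal or recent information. "
    "Prefer a search over answering from memory, and say so if no "
    "relevant documents are found."
)
```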

InfoQ: OpenAI mentions that plugins are "designed specifically for language models with safety as a core principle." What are some of the safety challenges you encountered in developing your plugin?

Miara: Firstly, enabling ChatGPT to access personal or organizational data necessitated the implementation of both security and data integrity features. The plugin is designed to handle API authentication, ensuring secure data access for both reading and writing purposes.
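
In practice this can take the form of bearer-token checks on every route. A minimal FastAPI sketch of the pattern follows; the BEARER_TOKEN environment variable and the route shown are assumptions:

```python
import os
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer_scheme = HTTPBearer()

def validate_token(
    credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme),
):
    # Reject any request whose token doesn't match the configured one.
    if credentials.credentials != os.environ["BEARER_TOKEN"]:
        raise HTTPException(status_code=401, detail="Invalid or missing token")

# Data-access routes require a valid token, for reads and writes alike.
@app.post("/query", dependencies=[Depends(validate_token)])
def query():
    ...
```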

Secondly, generative language models often grapple with hallucinations and alignment issues. We observed that earlier versions of plugins occasionally provided incorrect responses to queries, but subsequent iterations demonstrated improved accuracy while also admitting when certain questions were beyond their scope. Moreover, by running plugins in an alpha stage for an extended period, OpenAI can better align the results before releasing them to a broader audience.

Additionally, it's important to note that the plugins feature is designed with complete transparency for users. First, users explicitly select the plugins they wish to enable for use with ChatGPT. Secondly, whenever ChatGPT utilizes a plugin, it clearly indicates this to the user, while also making it simple to view the specific results that the plugin service has provided to ChatGPT's context.
