
OpenAI is Adding Memory Capabilities to ChatGPT to Improve Conversations


By letting ChatGPT remember conversations, OpenAI hopes to reduce the need for users to provide repetitive context information and to make future chats more helpful. Users will be able to explicitly tell ChatGPT what to remember and what to forget, or turn off the feature entirely.

Context is crucial for ChatGPT to provide coherent responses and is usually supplied by the user in the prompt, which means it must be restated with each new conversation. Since its introduction, though, ChatGPT has been gaining additional flexibility in this area, first with custom instructions and now with memory.

ChatGPT's memory extends its capability of providing context- and user-aware responses. For example, OpenAI explains, the new memory mechanism will let ChatGPT learn and remember user preferences, such as how information should be summarized at the end of a presentation, what the user's current job or endeavor is, and so on.

When ChatGPT memory is enabled, you can visit Settings > Personalization > Manage Memory to view a list of all your memories. You can delete specific memories or clear them all. Note that deleting conversations will not remove any memories associated with them. OpenAI says it may use memories to improve model training; standard users are free to opt out, while ChatGPT Team and Enterprise customers are excluded by default.

ChatGPT memory is used for every new conversation, which may not always be the best option, as cl42 remarked on Hacker News. Indeed, they say, memory is critical for "relationship-driven" conversations, while it is not helpful with "transactional" conversations, where each chat is meant to be a separate question. To address this kind of use, OpenAI suggests using temporary chats, which "won't appear in history, won't use memory, and won't be used to train our models".

A few months ago, OpenAI started rolling out custom instructions to give users more control over their conversations and to reduce the friction of starting each new conversation afresh. Custom instructions opened up the possibility of specifying some general context about the way you want ChatGPT to provide its answers, as well as instructions about response format, purpose, or other criteria affecting how ChatGPT presents its answers.

With ChatGPT's memory, OpenAI is attempting to go beyond custom instructions and provide a more general and flexible mechanism that can work transparently behind the scenes as well as under direct user control.
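OpenAI has not published how ChatGPT memory is implemented, but the general idea of a user-controllable memory layer can be sketched as a store of durable facts that gets injected into future prompts. The class and method names below are hypothetical, for illustration only:

```python
# Illustrative sketch only: OpenAI has not disclosed ChatGPT memory's actual
# implementation. This models the general idea of a memory layer that keeps
# durable user facts and prepends them to future prompts as context.

class MemoryStore:
    """A minimal, user-controllable memory store (hypothetical API)."""

    def __init__(self):
        self.memories: list[str] = []
        self.enabled = True  # users can turn the feature off entirely

    def remember(self, fact: str) -> None:
        """Store a fact the user explicitly asked to remember."""
        if self.enabled and fact not in self.memories:
            self.memories.append(fact)

    def forget(self, fact: str) -> None:
        """Remove a specific memory at the user's request."""
        if fact in self.memories:
            self.memories.remove(fact)

    def clear(self) -> None:
        """Clear all memories at once."""
        self.memories.clear()

    def build_prompt(self, user_message: str) -> str:
        """Prepend remembered facts so the model sees them as context."""
        if not self.enabled or not self.memories:
            return user_message
        context = "\n".join(f"- {m}" for m in self.memories)
        return f"Known about the user:\n{context}\n\nUser: {user_message}"


store = MemoryStore()
store.remember("Prefers concise, bullet-point summaries")
store.remember("Works as a kindergarten teacher")
store.forget("Prefers concise, bullet-point summaries")
print(store.build_prompt("Plan tomorrow's lesson"))
```

Because the memory lives outside any single conversation, deleting a chat would leave the store untouched, which matches the behavior OpenAI describes.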

Context is passed to ChatGPT through its context window, the span of prompt and conversation text the model uses as a starting point to generate responses. As mentioned, this is essential for keeping the model consistent and ensuring it generates meaningful dialogue with the user. Distinct ChatGPT versions have differently sized context windows. For example, while ChatGPT Enterprise supports a 128K context length, standard GPT-4 only provides an 8,000-token window, which makes it less context-aware than the former.
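A fixed context window means older conversation turns must eventually be dropped. A minimal sketch of that trade-off, using a naive whitespace token count (real models use subword tokenizers, so actual counts differ):

```python
# Illustrative sketch: keep a conversation within a fixed context window by
# dropping the oldest messages first. Token counting here is a naive
# whitespace split; production systems use subword tokenizers instead.

def fit_to_context(messages: list[str], max_tokens: int) -> list[str]:
    """Return the most recent messages whose total token count fits the window."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk from newest to oldest
        n = len(msg.split())        # naive token estimate
        if total + n > max_tokens:
            break                   # older messages no longer fit
        kept.append(msg)
        total += n
    return list(reversed(kept))     # restore chronological order


history = ["hello there", "how can I help you today", "summarize this report"]
print(fit_to_context(history, max_tokens=9))  # keeps only the two newest messages
```

This is why a larger window, like the 128K context of ChatGPT Enterprise, lets the model stay aware of much older parts of a conversation than an 8,000-token window does.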
