
GPT-3.5 Turbo Can Now Be Fine-Tuned for Improved Quality and Performance


OpenAI has made fine-tuning for GPT-3.5 Turbo available to developers, allowing them to customize the model to improve performance on their specific use cases. According to OpenAI, a fine-tuned GPT-3.5 Turbo can even outperform base GPT-4 for certain tasks.

OpenAI provides GPT customization to improve model performance for a number of use cases, including: making the model follow instructions more consistently with the developer's intent, such as always replying in the same language as the prompt; producing more consistent responses in cases like code completion or composing API calls; and refining the model's output tone, for example to better fit a desired brand voice.

Another advantage of fine-tuning GPT-3.5 Turbo, says OpenAI, is reducing prompt size without hampering performance:

Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs.

Fine-tuning is a general technique that, besides producing better results and saving tokens, also makes it possible to train on more examples than can fit in a prompt and to reduce request latency. For example, once a model is fine-tuned, you won't need to provide as many examples in the prompt to get the same level of performance.
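To make the prompt-size saving concrete, the sketch below compares a hypothetical few-shot prompt against the prompt a fine-tuned model would need for the same task. The classification task, the example messages, and the whitespace-based token estimate are all illustrative assumptions, not OpenAI's numbers.

```python
# With a base model, few-shot demonstrations are repeated in every
# request; a fine-tuned model has already learned the pattern, so the
# prompt shrinks to just the new input.

few_shot_prompt = [
    {"role": "system", "content": "Reply with the sentiment: positive or negative."},
    {"role": "user", "content": "I loved the film."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "The plot was dull."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Great soundtrack!"},
]

fine_tuned_prompt = [
    {"role": "user", "content": "Great soundtrack!"},
]

def rough_tokens(messages):
    # Crude whitespace-based size estimate, only for comparison;
    # real token counts would come from a tokenizer such as tiktoken.
    return sum(len(m["content"].split()) for m in messages)

print(rough_tokens(few_shot_prompt), rough_tokens(fine_tuned_prompt))
```

The per-request saving compounds: every demonstration removed from the prompt is paid for once at training time instead of on every API call.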

Fine-tuning a model involves a number of steps, including preparing the dataset to train the model, creating the fine-tuned model, and using it for inference. Preparing the dataset is the key step in the process, and it includes several sub-steps: crafting the prompts, providing a sufficient number of well-crafted demonstrations to check whether the model shows signs of improvement, training the model on the new demonstrations, and finally testing it.
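The dataset-preparation step can be sketched as follows. The training examples here are hypothetical; what matters is the shape of the file: a JSONL file where each line is a JSON object with a `messages` array in the chat format the fine-tuning endpoint consumes.

```python
import json

# Hypothetical demonstrations of the desired behavior (here: always
# answering in the language of the prompt).
examples = [
    {"messages": [
        {"role": "system", "content": "Always answer in the language of the prompt."},
        {"role": "user", "content": "Quelle heure est-il ?"},
        {"role": "assistant", "content": "Désolé, je n'ai pas accès à l'heure actuelle."},
    ]},
    # ...more well-crafted demonstrations covering the target behavior...
]

# Write one JSON object per line (the JSONL format the endpoint expects).
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

The resulting file would then be uploaded via the files endpoint and passed to a fine-tuning job (e.g. `client.fine_tuning.jobs.create(...)` in OpenAI's Python client); once the job completes, the returned model name can be used for inference like any other model.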

As OpenAI explains, fine-tuning should not be the first thing to try when attempting to improve a model's performance, since it requires a careful investment of time and effort. Instead, prompt engineering, prompt chaining, and function calling are good techniques to explore first, along with other best practices, before even considering fine-tuning a model.
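Of those techniques, prompt chaining can be sketched as below: a task is broken into smaller prompts, with each answer fed into the next. The `chat()` helper, the two-step summarize-then-answer chain, and the prompt wording are all illustrative assumptions; in practice `chat()` would wrap a real Chat Completions call.

```python
def chat(prompt: str) -> str:
    # Stand-in for an actual model call; it just echoes the prompt so
    # the chaining structure is visible.
    return f"<model answer to: {prompt}>"

def answer_with_chain(document: str, question: str) -> str:
    # Step 1: condense the document so the second prompt stays small.
    summary = chat(f"Summarize the following text: {document}")
    # Step 2: answer the question using only the summary.
    return chat(f"Using this summary: {summary}\nAnswer the question: {question}")

print(answer_with_chain("...a long document...", "What is the main point?"))
```

Because each link in the chain is a small, focused prompt, this approach often recovers much of the quality gain attributed to fine-tuning without any training cost.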

On a related note, OpenAI has also announced they will support fine-tuning for GPT-4 later this year.
