September 18, 2023

How to fine-tune ChatGPT (gpt-3.5-turbo) using the new OpenAI API with Python



In the dynamic realm of artificial intelligence, breakthroughs continuously pave the way for smarter and more adaptable models. The latest milestone on this journey is the highly anticipated ability to fine-tune gpt-3.5-turbo! OpenAI's GPT-3.5 Turbo has already amazed us with its facility at comprehending context, generating human-like text, and aiding diverse tasks. Now, with the added potential of fine-tuning, it is even more effective. But that's not all: fine-tuning access for GPT-4 is on the horizon, which will usher in a new era of AI capabilities!

For now, let's focus on GPT-3.5's fine-tuning capability and walk through the simple steps needed to use it in your work.

A Step-by-Step Guide 

Prepare Your Data 

To begin your fine-tuning venture, you need to prepare a dataset in JSONL format. Your fine-tuning dataset should have a minimum of 10 examples, while optimal improvements tend to manifest with sets of 50 to 100 examples. Here's an example: 

{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]} 

{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]} 

{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]} 

Each line in your JSONL file contains an object with a "messages" key that holds an array of three messages: 

  • the system message sets the context 
  • the user message is your input to ChatGPT or its API 
  • the assistant message is the expected model response 

There are a variety of ways to process your data into the specified format. For example, you can use a simple Python script or the ChatGPT Code Interpreter plugin to produce the desired output.
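As a minimal sketch of such a script (the example data and the output file name are placeholders, not part of the original setup), you might write the JSONL file like this:

```python
import json

# Hypothetical training examples; replace these with your own data.
examples = [
    {
        "system": "Marv is a factual chatbot that is also sarcastic.",
        "user": "What's the capital of France?",
        "assistant": "Paris, as if everyone doesn't know that already.",
    },
    # ... more examples (at least 10 in total) ...
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": ex["system"]},
                {"role": "user", "content": ex["user"]},
                {"role": "assistant", "content": ex["assistant"]},
            ]
        }
        # One JSON object per line -- that is the JSONL format.
        f.write(json.dumps(record) + "\n")
```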

Upload Your Dataset to OpenAI 

Once your fine-tuning dataset is ready, the next step is to upload it to OpenAI. One of the simplest ways to do this is with a Python script that uses the official openai library.

To send the request, we use openai.File.create. Remember to save the response, or at least the file ID, as it will be needed in the next step.

Create a Fine-Tuning Job 

The final step of your journey is to initiate a fine-tuning job through the OpenAI API, again using a short Python script. 

Remember to replace "api_key" and "file_id" with your actual API key and file ID. Once the fine-tuning job is set up, you'll receive an email notification. Typically, training on a small dataset takes about 10 minutes. After that, you'll receive another email confirming completion, along with the details of your newly fine-tuned model.

Case Study Example 

In a practical test case involving technology-related terms, we prepared 10 examples. We tasked the model with providing these terms along with their definitions in a specific JSON format. Our training set totalled roughly 106k tokens, incurring a training cost of approximately $1.

We crafted an initial prompt with a system role to adapt, a TypeScript interface for the desired output, and clear instructions. After training, the prompt became succinct, omitting the system message, unnecessary details, and the TypeScript interface specification.

Here is one of the results from the test cases, comparing the structure of the full prompt before fine-tuning (top) with the shortened prompt after fine-tuning (bottom).

Conclusions after test case 

The purpose of our test was to shorten the length of complex prompts, simplify their overall structure and, most importantly, achieve a specific JSON output structure without the need to specify it in every prompt. Additionally, we aimed to establish a coherent context by guiding the model's answers through specific roles or interpretations. 

As the results show, depending on your use case, fine-tuning may start producing the expected results even after the minimum of 10 examples, provided your follow-up requests share the same structure as the training examples.

Across our admittedly limited tests, the fine-tuned model largely adapted to produce the desired output format, effectively reducing prompt size and complexity.

Cost Considerations

For cost analysis, refer to OpenAI's current pricing page.

Training cost depends entirely on your dataset, with about 1 to 1.5 tokens per word on average in English text. For usage, both input and output tokens are 8 times more expensive with a fine-tuned model than with the base gpt-3.5-turbo. Whether fine-tuning is cost-efficient is therefore highly subjective, stemming from your particular use case.
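To make the arithmetic concrete, here is a small sketch of a cost estimator. The per-1K-token prices are assumptions based on the rates published at the time of writing, not values from this article; always check OpenAI's pricing page for current figures.

```python
# Assumed gpt-3.5-turbo prices (USD per 1K tokens) at the time of writing;
# verify against OpenAI's pricing page before relying on these numbers.
TRAINING_PER_1K = 0.008      # fine-tuning training tokens
BASE_INPUT_PER_1K = 0.0015   # base model, input
BASE_OUTPUT_PER_1K = 0.002   # base model, output
FT_INPUT_PER_1K = 0.012      # fine-tuned model, input (8x base)
FT_OUTPUT_PER_1K = 0.016     # fine-tuned model, output (8x base)


def training_cost(tokens: int, epochs: int = 1) -> float:
    """Estimated cost of fine-tuning on `tokens` training tokens."""
    return tokens / 1000 * epochs * TRAINING_PER_1K


def usage_cost(input_tokens: int, output_tokens: int, fine_tuned: bool = False) -> float:
    """Estimated cost of one request at the assumed rates."""
    in_rate = FT_INPUT_PER_1K if fine_tuned else BASE_INPUT_PER_1K
    out_rate = FT_OUTPUT_PER_1K if fine_tuned else BASE_OUTPUT_PER_1K
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate
```

At these assumed rates, a ~106k-token training set comes to roughly $0.85 per epoch, which is consistent with the approximately $1 training cost we saw in the test case above.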

Fine-Tuning's Benefits 

Fine-tuning unlocks a number of possibilities for developers:  

  • Improved steerability: Empowers users with the ability to guide the model more effectively by following instructions with greater accuracy. 
  • Reliable output formatting: Users can achieve greater predictability in the model's responses, ensuring desired outputs are provided consistently. For instance, maintaining a specific structure or format like JSON. 
  • Custom tone: The model can be fine-tuned to align better with a specific role or tone. 
  • Shorter prompts: Employing shorter prompts reduces token count, resulting in improved inference times and cost efficiency. 
  • Alternative to Prompt Chaining or Few-Shot Prompting: Depending on your use case, fine-tuning can simplify interactions with the model, providing an alternative to techniques like prompt chaining or few-shot prompting.


In the exciting world of AI development, the introduction of OpenAI's gpt-3.5-turbo fine-tuning capability marks a pivotal moment. If you want to enter a new era of artificial intelligence possibilities, I invite you to join us here at BCF Software and discover its great potential.

Whether you're a developer, a researcher, or an enthusiast, let's collaborate and innovate in this ever-evolving field. We are confident that together we can make remarkable progress in the development and application of Artificial Intelligence.