
Figuring out how to calculate ChatGPT API cost in Make doesn’t have to be complicated. It just takes a few minutes of setup and an understanding of how ChatGPT charges for tokens. In this guide, we’ll give you everything you need to get started today.
If you’ve never calculated how much the ChatGPT API costs you, you’d be forgiven for thinking it could be tricky, but the good news is that it’s a lot easier than it seems. Whether you’re working on a personal project or automating tasks for clients, the simple method I’m about to show you will make things very easy to set up.
All you need is a formula, an understanding of how tokens work, and a few tips to set you on the path to laying the groundwork for your own perfect calculation machine. This will save you time and money while helping to optimize your use of the ChatGPT API, so let’s get stuck in!
While the ChatGPT module will cheerfully tell you how many tokens you spent, it won’t tell you how much the tokens cost. That’s a little bit annoying, but it is also easily fixed with the following formula:
{{formatNumber(sum(1.usage.prompt_tokens * 0.000005; 1.usage.completion_tokens * 0.000015); 3; "."; )}}
Don’t forget to adjust the formula to match your own scenario settings and token pricing! The formula itself is not as complicated as it might look:
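If it helps to see the same logic outside Make, here is the identical calculation as a short Python sketch. The per-token prices are the same placeholder values used in the formula above; swap in your own model’s current prices:

```python
# Per-token prices: the per-million price divided by 1,000,000.
# These match the placeholder values in the Make formula above;
# replace them with your model's current prices.
PROMPT_PRICE = 0.000005      # $5.00 per 1M prompt tokens
COMPLETION_PRICE = 0.000015  # $15.00 per 1M completion tokens

def chatgpt_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Mirror of the Make formula: price both token counts, round to 3 decimals."""
    cost = prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE
    return round(cost, 3)

# Example: 1,200 prompt tokens + 800 completion tokens
print(chatgpt_cost(1200, 800))
```

That is all the Make formula does: multiply each token count by its per-token price, add the two, and format the result to three decimal places.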
To get accurate prices, you’ll need to use the current token prices of the ChatGPT model you are using. Once again, the ChatGPT module isn’t particularly helpful here, so you will need to do a little detective work.
Nip over to OpenAI’s API pricing page to get the current price for the ChatGPT model you want to use. Unfortunately, there is a little bit of math to do. Ugh, I know! You can’t avoid this, since you need to know how much an individual token costs, and OpenAI doesn’t give that information. If you really want to avoid the math, consider going down the database route and keeping an up-to-date list of GPT model prices there (more on this below).
When you look at OpenAI’s pricing for the ChatGPT API (ignore the Batch API column, since this isn’t relevant for our use case), you’ll notice that there are two types of prices listed per million tokens: one for input (prompt) tokens and one for output (completion) tokens.
Take note of both of these prices and then divide each by a million. If you change models a lot, get used to seeing a lot of zeros! Let’s take an example from the pricing table for GPT-4o mini, which is the GPT model I am using in my Make scenario.
The result of this division is what you should add to your Make.com formula. A little tip from me: always double check the number of zeros first, because it’s easy to make a mistake at this point. It’s a tiny slip-up, but it will mess up your calculations!
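If you’d rather not count zeros by hand, you can let code do the division. A minimal sketch, using GPT-4o mini’s listed prices of $0.15 (input) and $0.60 (output) per million tokens at the time of writing; always check OpenAI’s pricing page for current figures:

```python
def per_token(price_per_million: float) -> float:
    """Convert OpenAI's per-million-token price to a per-token price."""
    return price_per_million / 1_000_000

# Example prices (check OpenAI's pricing page for current values):
input_per_million = 0.15    # $ per 1M input tokens (GPT-4o mini, at time of writing)
output_per_million = 0.60   # $ per 1M output tokens

print(per_token(input_per_million))
print(per_token(output_per_million))
```

The two printed values are exactly what goes into the Make formula in place of the placeholder per-token prices.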
Also, keep a somewhat close eye on OpenAI announcements, since prices change every so often, necessitating an update. That can get very old very fast if you’re doing everything in Make over a lot of scenarios, so you may want to consider the database route, which I get into below.
But for now, back to the ChatGPT API. What are tokens, and why is everything done this way?
According to ChatGPT, tokens are “pieces of words” where each token represents around four characters in English. A good yardstick to remember for article writing is that 2,048 tokens is around 1,500 words.
One million tokens is a shade under 750,000 words, almost as many as you’d find in the King James Bible (783,137). Bear in mind that ChatGPT’s user, system, and assistant inputs all take tokens, so a million tokens won’t give you 700,000+ words of output. Instead, your tokens are split between prompt (input) tokens and completion (output) tokens.
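Those rules of thumb (roughly four characters per token, and 2,048 tokens for around 1,500 words) translate into a quick back-of-the-envelope estimator. These are rough ratios for English text, not exact tokenizer counts:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token in English."""
    return max(1, len(text) // 4)

def words_from_tokens(tokens: int) -> int:
    """Rough word estimate using the 2,048 tokens ~= 1,500 words yardstick."""
    return int(tokens * 1500 / 2048)

print(words_from_tokens(2048))   # 1500
print(estimate_tokens("How many tokens is this sentence, roughly?"))
```

For exact counts you would need the model’s actual tokenizer, but for cost ballparking this level of accuracy is usually enough.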
There are also token limits to consider. You set these under the Max Tokens field of the ChatGPT API module in Make.com.
You can set token limits so that you never spend more than a specified amount of tokens. This is where knowing how many words a given number of tokens represents comes in useful.
You can’t really miss this field, since you either have to limit your tokens or set it to 0.
In most cases, you will want to be generous when you set token limits to avoid getting cut off.
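One way to pick a generous Max Tokens value is to work backwards from your target word count and add headroom. The 30% buffer below is my own assumption, not a Make or OpenAI recommendation; tune it to taste:

```python
def max_tokens_for(words: int, headroom: float = 0.3) -> int:
    """Target word count -> Max Tokens value, using the
    2,048 tokens ~= 1,500 words yardstick plus a safety buffer."""
    base = words * 2048 / 1500
    return int(base * (1 + headroom))

# A 2,000-word article with 30% headroom:
print(max_tokens_for(2000))
```

If the returned value comes in well under the model’s context length, you can set it as the Max Tokens field and be confident the output won’t get cut off mid-sentence.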
Outside of Make.com and into the broader world of LLMs, token limits refer to the context length of an LLM, which you’ll usually see expressed as something like 128k. This simply means that the model can use up to 128,000 tokens between prompt and completion tokens.
That may not sound as impressive as a million tokens, but it is still the length of this summer’s hottest bodice-ripping romance novel!
This context length also represents a context window. When a conversation thread goes beyond 128,000 tokens, ChatGPT creates a rolling window that removes pieces of conversation from the start, replacing them with the latest messages. If you’ve ever wondered why ChatGPT seems to “forget” stuff during your longer conversations, this is why.
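The rolling window behaviour can be sketched like this. This is a simplified model of the idea, using the rough 4-characters-per-token estimate; real implementations trim by exact tokenizer counts and usually keep the system message pinned:

```python
def trim_to_window(messages: list[str], limit: int) -> list[str]:
    """Drop the oldest messages until the estimated token total fits the window."""
    def est(text: str) -> int:
        return max(1, len(text) // 4)  # rough: ~4 chars per token

    kept = list(messages)
    while len(kept) > 1 and sum(est(m) for m in kept) > limit:
        kept.pop(0)  # forget from the start, like the rolling window
    return kept

history = ["msg one " * 50, "msg two " * 50, "msg three " * 50]
trimmed = trim_to_window(history, 250)
print(len(trimmed))  # the oldest message gets dropped
```

This is also why long threads waste tokens: everything still inside the window is resent (and billed) with every new request.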
While you pay very little for a million tokens, it pays to be more strategic in how you use the ChatGPT API (or any other) on Make.com. Using this blueprint can help you to optimize your prompts. The tokens might be cheap, but they can still be wasted if your conversation threads are overly long!
The Make.com calculator in its basic form (above) won’t show you the breakdown when it returns results, since most of the time you don’t really need this information. But if you’re a completionist who wants all the data, this is easily fixed with the set multiple variables module (below).
If you use the ChatGPT API a lot, you’ll develop a good sixth sense for ballpark pricing and how many tokens to use for any particular task. Even then, this is still very useful information to have, especially if you want to keep a close eye on expenditure.
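In code terms, the set multiple variables approach just means returning each cost component instead of only the total. A sketch using the same placeholder per-token prices as the formula earlier; swap in your model’s real prices:

```python
def cost_breakdown(prompt_tokens: int, completion_tokens: int,
                   prompt_price: float = 0.000005,
                   completion_price: float = 0.000015) -> dict:
    """Return the per-part costs you'd store in separate Make variables."""
    prompt_cost = round(prompt_tokens * prompt_price, 6)
    completion_cost = round(completion_tokens * completion_price, 6)
    return {
        "prompt_cost": prompt_cost,
        "completion_cost": completion_cost,
        "total_cost": round(prompt_cost + completion_cost, 6),
    }

print(cost_breakdown(1200, 800))
```

Each key maps to one variable in the set multiple variables module, so you can see at a glance whether your spend is going on prompts or completions.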
Personally, I find it most useful with clients. When a lead or client asks me how much it costs to automate the research, SEO, and writing of a 2,000-word article, I can give them an accurate ballpark figure. This isn’t a small detail; everyone has a budget, so knowing where every cent is going can help them to stay within that budget.
Sending ChatGPT costs to a database is the obvious answer if you’re using a lot of different ChatGPT models across multiple scenarios. Rather than spamming your beautiful scenarios with set variable modules, send everything to a database. That could be Google Sheets, Airtable, ClickUp, Monday, or any other database.
All you need is a datasheet that collects and calculates everything for you. Rather than using a formula in a variable module, you now use a datasheet with pricing and set up everything else how you need it to be.
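The datasheet approach boils down to a pricing lookup keyed by model name. A minimal sketch; the model names and prices below are illustrative, and in practice the table lives in your spreadsheet or database rather than in code:

```python
# Per-million-token prices, as you'd keep them in one central datasheet.
# Illustrative values only; sync these with OpenAI's pricing page.
PRICING = {
    "gpt-4o":      {"input": 5.00, "output": 15.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Look up the model's prices and return the cost of one API call."""
    p = PRICING[model]
    return (prompt_tokens * p["input"]
            + completion_tokens * p["output"]) / 1_000_000

print(round(cost("gpt-4o-mini", 1200, 800), 6))
```

When prices change, you update one row in the datasheet and every scenario that reads from it is instantly correct.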
This can be very useful if you are renting out an AI automation to a client and billing by usage – and with the ability to make pretty client-facing dashboards, you can turn this basic workflow automation into something very impressive.
It’s also very useful for updating multiple scenarios from one place. Even if you’re just starting out with Make and don’t need the extra complexity of a database in your operations, setting one up now means you won’t have to migrate from scattered Make calculations to a centralized database calculator later on.
Your call!
Now you know how to calculate how much you spend on ChatGPT in an automation, you can start using this yourself. All you need to do is take the formula I provided at the top of this article and add it to either a set variable or set multiple variables module, and you’ll start capturing this data. Alternatively, you can use a database to keep track of this information – if you’re using AI a lot for yourself or your clients, this is the better route to take.
This is definitely more cumbersome than it could be. However, until OpenAI provides an easier way to track expenditure per API call, it’s the best approach we have. One thing you will learn from using it is just how much the price varies between the more expensive models and their cheaper counterparts. If you’re like me, you’ll use this information to pick your models more carefully. You don’t always need OpenAI’s current flagship model to quickly check a small piece of text for information!
At Slashrepeat, we work with AI and automation all day. If you’re struggling with automation or want to start saving time and money as workflows take care of the boring work for you, get in touch with us!
Book your consultation now. It’s free and friendly, with no pushy sales tactics. We can start your project as early as tomorrow!