The hidden cost of GPT-4o: what every SaaS founder should know about per-user LLM spend

Source: DEV Community
So you're running a SaaS that leans on an LLM. You check your OpenAI bill at the end of the month, it's a few hundred bucks, you shrug and move on. As long as it's not five figures, who cares, right?

Wrong. That total is hiding a nasty secret: you're probably losing money on some of your users. I'm not talking about the obvious free-tier leeches. I'm talking about paying customers who are costing you more in API calls than they're giving you in subscription fees. You're literally paying for them to use your product.

The problem with averages

Let's do some quick, dirty math. GPT-4o pricing settled at around $3/1M tokens for input and $10/1M for output. It's cheap, but it's not free. Say you have a summarization feature. A user pastes in 50,000 tokens of text (around 37.5k words) and gets a 1,000-token summary back.

• Input cost: 50,000 / 1,000,000 * $3.00 = $0.15
• Output cost: 1,000 / 1,000,000 * $10.00 = $0.01
• Total cost for one summary: $0.16

If a user on a $19/mo plan does this just four times a day, that's 120 summaries a month, or $19.20 in API costs alone. Their entire subscription fee is gone before you've paid for anything else.
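The back-of-envelope math above is easy to sketch in code. Here's a minimal helper that prices a single request and computes the break-even request count for a plan; the per-million-token prices are the illustrative figures from this article, not guaranteed current OpenAI rates.

```python
def request_cost(
    input_tokens: int,
    output_tokens: int,
    input_price_per_m: float = 3.00,   # illustrative GPT-4o input price, $/1M tokens
    output_price_per_m: float = 10.00, # illustrative GPT-4o output price, $/1M tokens
) -> float:
    """Dollar cost of one API call at the given per-million-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m


def break_even_requests(plan_price: float, cost_per_request: float) -> int:
    """How many requests a subscriber can make before their plan runs at a loss."""
    return int(plan_price // cost_per_request)


# The summarization example: 50k tokens in, 1k tokens out.
cost = request_cost(50_000, 1_000)
print(f"Cost per summary: ${cost:.2f}")                      # $0.16
print(f"Break-even on $19/mo: {break_even_requests(19, cost)} summaries")
```

Swap in your own plan prices and typical token counts; the useful number is the break-even count per plan tier, not the monthly total.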