I Tried to Automate a Manual Review Task with Claude. It Wasn't Worth It.

Source: DEV Community
Every day, a CI job adds new entries to test-titles.json in my Clusterflick repo. When it finds a cinema listing title the normaliser hasn't seen before, it records the input and the current output, then opens a pull request. Someone — usually me — then has to review whether those outputs are actually correct, fix anything that isn't, and merge.

It's not complicated work: review the output and confirm the normaliser has done its job correctly. If it hasn't, fix the expected output (the test now fails ❌) and then fix the normaliser (until the test passes ✅). But it happens twice a day, and "not complicated" doesn't mean "no context switching". So I decided to try automating it with Claude. Several hours and $5 later, I don't think it was worth it — and I think the reasons why are worth writing up 💸

The Task

The normaliser — normalize-title.js — converts raw cinema listing titles into a consistent string. I've written about it in more depth in my previous post, Cleaning Cinema Titles Before You Can