How to Stop Babysitting Your AI Agents

Source: DEV Community
Every time I need an LLM to do something, the ritual is the same. Open a chat window. Type a prompt. Read the response. Decide if it's good enough. Repeat tomorrow. That's not automation; that's a new job I didn't apply for.

The frustrating part isn't the AI. The frustrating part is that I'm the scheduler, the context manager, and the output parser all at once. I'm writing the same prompt variations over and over because nothing persists. I'm watching a spinner because there's no way to fire and forget. The tool is supposed to be doing the work. So I built a 12MB binary to fix it.

Unix already solved this

You don't open a chat window to run grep. You pipe input in, get output out, and chain it with something else. Small tools, one job each, composable by design. The Unix philosophy isn't clever, it's right, and it's been right for fifty years.

AI agents should work the same way. One job. Clean input/output. Plugs into your existing workflows. The problem is that most AI tooling goes the
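The grep comparison is worth making concrete. Each stage in a pipeline does exactly one job and the shell handles the composition; none of these tools knows the others exist. A minimal sketch using only standard Unix tools (not the author's binary):

```shell
# Count duplicate ERROR lines from a log stream.
# grep filters, sort groups, uniq -c counts -- one job each,
# composed with pipes instead of a chat window.
printf 'ERROR disk full\nINFO ok\nERROR disk full\n' \
  | grep ERROR | sort | uniq -c
```

An agent that follows the same contract could slot into this chain as just another filter, reading from stdin and writing to stdout.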