Human-in-the-loop is a state, not a fallback

Why human involvement should be modeled explicitly in automated publishing systems.

The most common “feature” in automation

In a lot of automated systems, there is an unspoken feature that ships by default: a human operator who shows up when something feels off.

Not because the system asked clearly. Not because a workflow reached an intentional checkpoint. But because something didn’t happen, and someone noticed the absence.

That person becomes the last-mile glue. The part that isn’t on the diagram.

The problem with treating humans as a fallback

When humans are treated as a fallback, the system never has to be precise about its own state. It can pretend everything is fine until it can’t. It can return optimistic success while the real world is still undecided. It can “complete” a workflow even though execution is still pending.

Then, later, a human is asked to fix it. Not as part of the design, but as a cleanup step.

That isn’t human-in-the-loop. That’s ambiguity-in-the-loop.

Human-in-the-loop is a state

One framing that helped me internalize this came from Andrej Karpathy’s writing about the “app layer” around LLMs, where a proper product often includes a dedicated interface for human oversight and even an “autonomy slider”. The details there are about AI apps, but the systems idea generalizes nicely: human involvement is something you model and design for, not something you improvise at the end (see 2025 LLM Year in Review).

The key shift is simple: a human-in-the-loop is not an error state. It is one of the valid states the system can be in.

Waiting for approval is a state. Needing review is a state. Requiring confirmation before external side effects is a state.

If you don’t model those states, you’ll still end up with humans in the loop. Just in the worst possible way.
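
To make that concrete, here is a minimal sketch of what modeling those states explicitly can look like. The names and shape are hypothetical, not any particular product’s API:

```typescript
// A hypothetical publishing workflow state model. The point is only that
// "a person needs to act" is an ordinary, queryable state of the system,
// not an error code discovered after the fact.
type PublishState =
  | "draft"
  | "prepared"            // content assembled, nothing sent yet
  | "awaiting_approval"   // a human must confirm before external side effects
  | "needs_review"        // something was flagged for a person to look at
  | "publishing"          // side effects in flight
  | "published"
  | "failed";

// The human-involvement states are a subset of the normal states,
// not a separate failure channel.
const HUMAN_STATES: readonly PublishState[] = ["awaiting_approval", "needs_review"];

const requiresHuman = (state: PublishState): boolean =>
  HUMAN_STATES.includes(state);
```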

Why publishing makes this unavoidable

Publishing is where internal intent becomes external reality. It’s also where you run into rate limits, platform constraints, delayed outcomes, partial success, and all the other reasons “success” can be a lie for a while.

In other words: publishing is exactly where a system should be allowed to pause intentionally, ask for confirmation explicitly, and resume with context. Not because it failed, but because it is about to do something irreversible.

This is also why I keep coming back to the idea that publishing shouldn’t require a person staring at a screen, but it also shouldn’t pretend the person never matters. The point is not to remove humans. The point is to remove unnecessary coupling to human presence, and keep human judgment where it belongs: as an explicit state in the workflow, not a surprise dependency. As I hinted at in my earlier post about letting systems do the staring (No computers, please), the real win is making execution calmer and more reliable, not more “autonomous”.

What “explicit” looks like

If you treat human-in-the-loop as a state, the system can be honest:

It can say “prepared, awaiting confirmation” instead of “done”. It can show what will be published before publishing it. It can record that a human approved the action, not just that the action happened. It can make “pause” a first-class part of the execution model, not a failure mode.
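
Sketched the same way (hypothetical names, not Postproxy’s actual schema), “honest” mostly means that the pause and the approval exist as data and as explicit transitions:

```typescript
// What a human approved, and what they were shown, recorded as data.
interface Approval {
  approvedBy: string; // who confirmed the action
  approvedAt: Date;   // when they confirmed it
  preview: string;    // what they saw before anything was published
}

// A job that can honestly report "prepared, awaiting approval" instead of "done".
interface PublishJob {
  id: string;
  state: "prepared" | "awaiting_approval" | "confirmed" | "published";
  approval?: Approval; // present only if a person actually approved it
}

// Confirming is an explicit transition, not a cleanup step, and it does not
// claim the publish already happened.
function confirm(job: PublishJob, approval: Approval): PublishJob {
  if (job.state !== "awaiting_approval") {
    throw new Error(`cannot confirm a job in state "${job.state}"`);
  }
  return { ...job, state: "confirmed", approval };
}
```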

This is boring, but it’s the good kind of boring. It’s the difference between systems you can trust and systems you babysit.

Why we care about this at Postproxy

Postproxy isn’t trying to decide what should be published. It’s trying to execute publishing intent reliably, across platforms that don’t behave consistently.

In that world, human involvement will always exist in some workflows. The important part is where it lives. If it lives as a fallback, it shows up late, without context, and usually under time pressure. If it lives as a state, it shows up early, with intent, and leaves behind a clean trace of what happened.

That is the version of automation we want: one that respects operators, models reality, and doesn’t outsource ambiguity to whoever happens to be watching.
