Ask Good Questions: Choosing AI by workflow (not benchmarks)

I’m building a resource site called Ask Good Questions focused on disciplined AI use for real work.

The first guide is up:

Choosing AI by workflow, not benchmarks
https://askgoodquestions.dev/guides/choosing-ai-by-workflow-not-benchmark/

The core idea is simple: stop picking models based on leaderboard scores and start picking them based on the job you’re doing and the constraints you care about.

There’s also a short field note that came out of a real-world moment:
https://askgoodquestions.dev/field-notes/the-day-claude-told-me-to-take-a-nap/

If you were building a practical AI methods library for developers, what would be most useful: repeatable prompt structures, workflow templates, or guardrails for keeping output trustworthy?

Charles


That one. There’s no real point if I don’t quite trust the output. I may well end up spending all that saved time on checking.

That’s absolutely on point, Sean.

If you don’t trust the output, the time you “save” just shifts into verification, and then it feels pointless.

I think the key isn’t blind trust - it’s reducing the verification surface.

For example:

  • Constrain the scope of what AI is allowed to do.
  • Force it to explain assumptions.
  • Require it to cite sources or reasoning paths.
  • Use it for scaffolding and comparison rather than final code.
  • Build repeatable prompt structures that produce consistent output.

In other words, guardrails are not about trusting AI. They’re about making the output predictable enough that checking it becomes fast and mechanical instead of stressful and open-ended.
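To make that concrete, here’s a minimal sketch of what a repeatable prompt structure with those guardrails baked in might look like. The function name and output format are illustrative assumptions, not something from the guide; the point is that the template, not the reviewer, enforces scope, stated assumptions, and visible reasoning.

```python
# Hypothetical sketch: a repeatable prompt template that encodes the
# guardrails above (constrained scope, stated assumptions, visible
# reasoning). build_review_prompt is an illustrative name, not a real API.

def build_review_prompt(task: str, scope: list[str], context: str) -> str:
    """Assemble a prompt whose fixed structure keeps verification mechanical."""
    scope_lines = "\n".join(f"- {item}" for item in scope)
    return (
        f"Task: {task}\n\n"
        f"Scope (do ONLY these things):\n{scope_lines}\n\n"
        "Output format:\n"
        "1. Assumptions: list every assumption you are making.\n"
        "2. Reasoning: explain how you got from input to output.\n"
        "3. Result: the answer itself, nothing else.\n\n"
        f"Context:\n{context}\n"
    )

prompt = build_review_prompt(
    task="Suggest a fix for the failing unit test",
    scope=["read the test output", "propose a one-function change"],
    context="test_parse fails with KeyError: 'id'",
)
print(prompt)
```

Because every response arrives in the same three sections, reviewing becomes a checklist pass rather than an open-ended read.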

If I can review something in two minutes instead of twenty, that’s still a win.

The real question isn’t “Can I trust it?”

It’s “Can I structure it so verification is cheaper than writing it from scratch?”
