Should Humans Become Tools? Rethinking How We Talk to AI

Bertin Colpron

The title is deliberately provocative. But the underlying idea is serious — and increasingly urgent.


The Part Nobody Wants to Admit

When an AI model produces poor output, the instinct is to blame the model. Wrong version. Wrong provider. Insufficient context window. These complaints are real but often miss the larger issue.

A significant share of AI quality failures trace back to the human side of the equation: vague instructions, ambiguous context, conversations that casually skip over assumptions. The model does its best with what it's given — which frequently isn't enough.

We've explored part of this problem before: giving models the ability to request new tools, evaluate available capabilities, and vote on what they need. That addresses the tools dimension. But what happens when the tools are present and functioning, and the bottleneck is the conversation itself?

The Ambiguity Problem Is Hiding in Plain Sight

Human communication is built for other humans. It relies on shared context, implied knowledge, and socially calibrated shortcuts. "Make it better" makes sense to a colleague who has watched you work for six months. To a language model with no persistent memory, it's a fog.

The challenge isn't that humans are careless. It's that human language evolved to be efficient — not precise. We skip steps because others fill in the gaps. We use pronouns without clear antecedents. We leave constraints implicit. We frame tasks in terms of outcomes without spelling out how to get there.

AI models are astonishingly capable of navigating these inputs. They infer, extrapolate, and produce outputs that are often good enough. But "good enough under ambiguity" is not the same as "correct for what was intended." The gap between those two states is where quality quietly erodes.

The Assumption Avalanche

Here is what actually happens when an AI encounters an ambiguous instruction: it makes assumptions. Many of them. Silently.

What technology stack should it use? The most common one, probably. What tone? Professional by default. What level of detail? Moderate, unless context suggests otherwise. What constraints? The ones most projects have. What should be excluded? Whatever seems out of scope.

Each assumption, taken alone, is reasonable. Stacked together — three silent choices, five unstated tradeoffs, a dozen inferred constraints — they accumulate into an output that drifts meaningfully from what was actually intended. The model was technically responsive. The result was practically wrong.

We ask AI not to hallucinate facts. We should be equally concerned about it hallucinating intent.

A Different Kind of Tool: Structured Human Communication

Here's the intervention worth exploring: instead of treating the human as the fixed input and the model as the adaptive output, what if we designed interaction frameworks that allow the model to query the human — structured, direct, targeted?

Not chat. Not clarifying preamble buried in a long response. Actual tools.

An AI given structured communication tools could, at the point of ambiguity, surface specific options for the human to choose from. Not open-ended questions, which reintroduce ambiguity. Binary or constrained choices: "Should I prioritize readability or performance here?" "This requirement conflicts with that constraint — which takes precedence?" "I identified three possible interpretations of this term — which applies?"

The model becomes not just a responder, but an active participant in resolving uncertainty before it compounds.
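
As a concrete illustration, here is a minimal sketch of what such a tool could look like, assuming an OpenAI-style function-calling interface. The tool name and its fields are hypothetical, not an existing API.

```python
# Hypothetical clarification tool definition (OpenAI-style schema).
# The model calls this at the point of ambiguity instead of guessing.
clarification_tool = {
    "type": "function",
    "function": {
        "name": "ask_clarification",
        "description": (
            "Ask the human a constrained question when an instruction is "
            "ambiguous. Offer explicit options, never an open-ended prompt."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "question": {
                    "type": "string",
                    "description": "The specific ambiguity to resolve.",
                },
                "options": {
                    "type": "array",
                    "items": {"type": "string"},
                    "minItems": 2,
                    "maxItems": 4,
                    "description": "Constrained choices the human picks from.",
                },
            },
            "required": ["question", "options"],
        },
    },
}
```

In a tool-calling loop, the agent invokes this instead of assuming, the human picks an option, and the answer is folded back into context before execution continues.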

The Instruction Problem Is Structural

This isn't just about individual conversations. It's about how most teams have been trained to interact with AI.

The dominant paradigm is to give a complete, directive prompt — and then evaluate the output. If output is wrong, refine prompt, try again. This works well for straightforward tasks. For complex, multi-constraint, context-dependent work, it's brittle.

The refinement loop is expensive. It eats time, creates ambiguity about what the model has already internalized, and produces a trail of partial outputs that can confuse rather than clarify. A model that asks before executing would often produce better first-pass results while consuming fewer cycles.

More importantly, it would make the human's implicit assumptions explicit — which is useful even when the model's inference is correct. Surfacing assumptions creates shared understanding. That shared understanding is what allows work to scale.

The Resistance Will Be Psychological

There's an obvious objection: humans don't want to be interrogated by the tools they're paying to do work.

That resistance is real and worth taking seriously. Being asked a question mid-task introduces friction. It can feel like the model is inadequate, or worse, that the human is being held accountable for gaps they didn't notice.

But consider the alternative: a model that confidently proceeds, makes assumptions, and delivers a finished artifact thirty minutes later that completely misses the brief. The interrogation at minute two would have been faster.

The friction isn't the problem. The friction is the solution — placed deliberately, where it costs the least and prevents the most.

Instructing for Ambiguity Aversion

The structural tool matters. But so does how the model is instructed to use it.

An AI that can ask but defaults to inferring will still drift. The instruction layer needs to be explicit: when ambiguity or competing interpretations are present, surface them — do not resolve them unilaterally. Do not proceed on assumptions that could be validated in seconds.

This is a different kind of system instruction than most organizations write. Most prompts are designed to drive forward momentum: "be concise," "be decisive," "produce a complete output." Few explicitly invite the model to pause and query, so the default behavior drifts back toward silent assumption.

The model performs best when it is instructed to treat unresolved ambiguity as an error condition, not as a problem to work around.
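
To make that concrete, here is a hedged sketch of such an instruction layer, written as a system-prompt fragment. The wording is illustrative, not a tested prompt.

```python
# Illustrative system-prompt fragment that treats unresolved ambiguity
# as a stop condition rather than something to paper over.
AMBIGUITY_AVERSE_INSTRUCTIONS = """\
Before executing, scan the request for ambiguity: missing constraints,
conflicting requirements, or terms with more than one plausible reading.
If you find any, do not resolve them unilaterally. Ask a constrained
clarification question and wait for the answer.
Treat proceeding on an unvalidated assumption as an error condition.
"""
```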

Two Directions Worth Taking

There are two complementary approaches that follow from this framing.

First: Active clarification. Tools that allow AI to ask targeted, constrained questions before or during execution. The human answers, the model proceeds with resolved context. Output quality improves. Assumptions become explicit decisions.

Second: Assumption surfacing. Even when AI proceeds without clarification, expose the assumptions it made. Not as an afterthought — as a deliverable. "Here is what I built. Here are the ten things I assumed to be true. Validate or correct them."

This second approach — surfacing assumptions as a first-class deliverable — will be the subject of an upcoming post on this blog.
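
In the meantime, here is a rough sketch of what that deliverable could look like, assuming a structured output that carries the artifact and its assumptions side by side. The field names are hypothetical.

```python
# Illustrative structure for "assumptions as a deliverable": the model
# returns its output together with the assumptions it made, each phrased
# so the human can validate or correct it.
from dataclasses import dataclass, field


@dataclass
class Assumption:
    statement: str        # what the model assumed to be true
    basis: str            # why it assumed it (default, inference, convention)
    impact_if_wrong: str  # what changes in the output if this turns out false


@dataclass
class Deliverable:
    artifact: str                                # the produced output
    assumptions: list[Assumption] = field(default_factory=list)
```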


Cognito: These Tools Already Exist

This isn't just theoretical. The structured communication tools described in this article have been added to Cognito in beta. The active clarification mechanisms — constrained questions, explicit decision points, ambiguity surfacing before execution — are being integrated into agent workflows.

Early results confirm the premise: fewer silent assumptions, better-aligned first-pass outputs, and greater transparency on what the model actually interpreted. It's a work in progress, but the direction is clear.


The question isn't really whether humans should become tools. It's whether we're willing to be as precise with AI as we expect AI to be with us. That precision doesn't reduce human agency. It amplifies it.