March 16, 2026
AI as Intern: Treating AI as a Productivity Partner
A practical mental model for integrating AI into daily work without over-trusting or under-using it.
7 min read

The most useful mental model I have found for working with AI is also the simplest: treat it like an intern. Not a bad intern, not a genius intern, but a reasonably bright person who arrived this morning, has strong general capabilities, no institutional context, and needs supervision proportional to the stakes of the task.
This model resolves most of the confusion people experience when trying to integrate AI into their work. It sets the right expectations, suggests the right workflows, and prevents the two most common failure modes: over-trust and under-use.
The Over-Trust Problem
Over-trust is what happens when you treat AI output as if it came from an expert. You ask it to draft a contract and send the result to a client without review. You ask it to analyze data and present the conclusions in a meeting as if you had verified them yourself. You delegate judgment to a system that does not have judgment.
The intern model prevents this naturally. No reasonable manager sends an intern's first draft to the client. They review it. They check the reasoning. They ask questions. And they accept that the review step is part of the workflow, not an inefficiency to be eliminated.
The review step is where the human adds the value that the AI cannot: context, judgment, domain knowledge, and accountability. Removing it does not save time. It transfers risk from a visible checkpoint to an invisible one, and invisible risks have a way of materializing at the worst possible moment.
The Under-Use Problem
Under-use is the opposite failure, and it is equally common. This is the person who has heard about AI's limitations and concluded that it is not reliable enough to use for anything. They continue doing everything manually, including tasks where AI could save them hours with minimal risk.
The intern model resolves this too. You would not refuse to give an intern any work because they might make a mistake. You would give them appropriate work: research tasks, first drafts, data compilation, scheduling logistics. Tasks where the cost of an error is low and the time savings are real.
The key is matching the task to the capability. An AI can produce a solid first draft of a routine email in seconds. It can summarize a long document accurately enough to save you the first read-through. It can generate a list of options that you then evaluate and refine. Each of these saves time without requiring you to trust the output blindly.
The Supervision Gradient
The intern model also provides a natural framework for adjusting supervision based on stakes. A competent manager gives an intern more autonomy on low-stakes tasks and more supervision on high-stakes ones. The same gradient applies to AI.
For a low-stakes internal memo, you might accept the AI's draft with light editing. For a client-facing proposal, you review carefully and rewrite sections. For a legal filing or a financial report, you use the AI's output as a starting point only and verify every substantive claim independently.
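The gradient described above can be sketched as a simple lookup, purely as an illustration; the task categories and review actions below are invented for this example, not a real policy.

```python
# Hypothetical sketch of the supervision gradient: map the stakes of a
# task to the amount of human review the AI's draft gets before it ships.
# Task types and review levels here are illustrative placeholders.

STRICTEST = "starting point only; verify every substantive claim"

REVIEW_POLICY = {
    "internal_memo":   "light edit of the draft",
    "client_proposal": "careful review; rewrite weak sections",
    "legal_filing":    STRICTEST,
}

def supervision_for(task_type: str) -> str:
    """Return the review step a task is owed. Unfamiliar task types get
    the strictest treatment, mirroring how you'd handle a new intern."""
    return REVIEW_POLICY.get(task_type, STRICTEST)

print(supervision_for("internal_memo"))  # light edit of the draft
print(supervision_for("unknown_task"))   # strictest review by default
```

Defaulting unknown tasks to the strictest review is the design choice that matters: autonomy is earned per category, never assumed.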
This gradient already exists in how experienced professionals delegate to junior colleagues. The skill is not new. What is new is applying it to a tool that does not push back, does not ask clarifying questions when confused, and does not flag its own uncertainty in the way a human intern would. You have to supply the uncertainty detection yourself, which means understanding what the AI is likely to get wrong.
Building the Feedback Loop
Good internships have structure. There is an initial period of close supervision. As the intern demonstrates competence in specific areas, the supervisor gradually extends autonomy. The intern learns what the organization expects, and the supervisor learns what the intern can handle.
The same structure applies to AI integration. Start with close supervision across all tasks. Pay attention to where the AI consistently performs well and where it consistently needs correction. Over time, develop a mental map of reliable and unreliable capabilities. This map is your own, specific to your domain and your workflow. No general guide can substitute for it.
This is a form of deliberate practice applied to a new tool. You are not just using the AI. You are developing the skill of using it effectively, which includes knowing when not to use it. That meta-skill, the judgment about when to delegate and when to do the work yourself, is the actual productivity gain. The AI is just the medium through which the skill operates.
The behavior loop here is: delegate a task, review the output, note what worked and what did not, adjust your delegation strategy, repeat. Over hundreds of iterations, you develop an intuitive sense for what the AI handles well that is more reliable than any set of written guidelines.
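The loop above can be made concrete with a small tally, a sketch under the assumption that you log each review outcome; the class and task names are invented for illustration.

```python
# Hypothetical sketch of the delegate -> review -> note -> adjust loop.
# It tallies review outcomes per task type so that, over many iterations,
# you can see where the AI has earned more autonomy. Names are invented.

from collections import defaultdict

class DelegationLog:
    def __init__(self):
        # task_type -> [accepted_as_is, needed_correction]
        self.tally = defaultdict(lambda: [0, 0])

    def record(self, task_type: str, needed_correction: bool) -> None:
        """Note the outcome of one review pass."""
        self.tally[task_type][1 if needed_correction else 0] += 1

    def reliability(self, task_type: str) -> float:
        """Fraction of outputs accepted as-is: the cue for loosening
        or tightening supervision on this kind of task."""
        accepted, corrected = self.tally[task_type]
        total = accepted + corrected
        return accepted / total if total else 0.0

log = DelegationLog()
log.record("routine_email", needed_correction=False)
log.record("routine_email", needed_correction=False)
log.record("data_analysis", needed_correction=True)
print(log.reliability("routine_email"))  # 1.0
print(log.reliability("data_analysis"))  # 0.0
```

The point of the sketch is not the bookkeeping; it is that the map of reliable and unreliable capabilities comes from your own recorded outcomes, not from a general guide.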
What the Intern Cannot Do
There are things you would never ask an intern to do, not because they are incapable of attempting them, but because the task requires judgment that only comes from experience the intern does not have.
You would not ask an intern to decide whether to enter a new market. You would not ask them to handle a sensitive personnel situation. You would not ask them to represent the organization in a negotiation with a major partner. These tasks require context that cannot be briefed into a conversation; they require the kind of accumulated understanding that John Boyd would have called orientation.
The same boundaries apply to AI. Tasks that require deep domain judgment, political sensitivity, relationship awareness, or ethical reasoning should not be delegated to AI any more than they would be delegated to someone who started this morning. The AI can prepare background materials for these decisions. It should not make them.
The Long Game
The intern model is not a permanent ceiling. Real interns grow into junior analysts, then senior ones, then managers. AI capabilities are also expanding. Tasks that require close supervision today may be reliably delegated tomorrow.
But the model remains useful even as capabilities improve because it keeps the human in the right relationship with the tool: responsible, supervisory, and engaged. The moment you stop reviewing AI output is the moment you have outsourced your judgment without realizing it. And judgment, as every decision-making essay on this site argues, is not something you can safely outsource.
Treat the AI like an intern. Give it real work. Review what it produces. Adjust your supervision over time. And never forget that you are the one who signs off on the final output. That responsibility is not a burden. It is the mechanism that keeps your own skills sharp while you benefit from the speed the tool provides.