[Image: a watercolor painting of a home office workspace with terminal and documents]
groundwork·6 min read

Context Is the Interface

What if the most powerful thing you could give AI isn't a better prompt, but a better room to work in?


The Brief

This article explains how providing structured context to AI through workspace files, business plans, and rules directories transforms it from a simple Q&A tool into an accountability partner. It describes a practical system for monthly business reviews using Claude Code with persistent context.


What does "context is the interface" mean for AI?
The prompt is what you say to AI, but the context is the room you are standing in when you say it. By placing business plans, voice profiles, and project notes in workspace directories that AI reads at session start, the conversation begins informed rather than starting from scratch every time.
How can AI serve as a business accountability partner?
By storing business plans, monthly notes, and commitments in a workspace directory, AI can track progress against stated goals across sessions. It flags dropped projects, quotes your own targets back to you, and notices strategic drift, functioning as an accountability partner with perfect recall of your stated commitments.
What is the "lost in the middle" problem with AI context?
AI models pay close attention to the start and end of a conversation, but instructions in the middle get diluted by surrounding content. In long sessions, even well-constructed context fades and the model stops consulting rules as reliably, requiring strategies like mid-session context refreshes or enforcement hooks.
How do Claude Code skills and rules directories work?
Rules in a .claude/rules/ directory load automatically at session start and provide persistent context. Skills in .claude/skills/ are invoked on demand to refresh context mid-conversation. This structure allows the workspace to adapt to the task, loading financial guidelines for budget work or voice profiles for writing.

I opened my terminal last Tuesday for what I call a "monthly mirror." Same ritual since July: I ask Claude to review my business plan against what actually happened. Revenue targets. Project milestones. The uncomfortable questions.

What struck me wasn't the answers. It was that Claude remembered everything.

Not because AI has memory in the way we do. It doesn't. Each conversation technically starts fresh.1 But my workspace doesn't. Sitting in a .claude/rules/ directory is my business plan, my voice profile, my correspondence patterns, my project notes. When I open a session, Claude reads the room before I say a word.
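For concreteness, such a workspace can look roughly like this. The file names and layout here are illustrative, not a required structure:

```text
business/
├── .claude/
│   └── rules/
│       ├── business-plan.md     # revenue targets, priorities, five-year arc
│       ├── voice-profile.md     # tone and phrasing for written content
│       └── correspondence.md    # patterns drawn from past client email
└── notes/
    ├── 2024-10.md               # monthly notes: commitments, outcomes
    └── 2024-11.md
```

The point isn't the particular tree. It's that everything the session should know lives on disk, where it gets read before the first prompt.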

The Briefing Room

Most people approach AI like a search engine with personality. Type a question, get an answer, move on. That's maybe ten percent of the capability.2

Think of it differently. Imagine hiring a brilliant consultant. They walk into your office for the first time. What do they see? Business plan on the wall? Client correspondence in the filing cabinet? Notes from last quarter's strategy session? Or an empty room where they have to ask you to explain everything from scratch, every single time?

The prompt is what you say. The context is the room you're standing in when you say it.

[Image: a consultant's desk covered with business documents and planning materials] The context you provide shapes the conversation before it begins.

Building the Room

When I set this up last summer, I started with my business plan. Not a static document gathering dust, but a living reference that Claude can read every time I open a session. Revenue targets, strategic priorities, the five-year arc. When I ask "how are we tracking against Q1 goals," there's something to track against.

Then I added my correspondence and writing samples, plus rules files that activate based on what I'm doing. When I mention "budget," financial guidelines load. When I'm writing content, my voice profile takes over. The workspace adapts to the task.
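One way to express that kind of trigger is a short preamble inside the rule file itself, stating when it applies. The format below is a sketch; the exact mechanics depend on your setup, and the file name and contents are hypothetical:

```markdown
<!-- .claude/rules/finance.md (illustrative) -->
# Financial guidelines

Apply these rules whenever the conversation involves budgets,
pricing, or revenue figures.

- Check every figure against business-plan.md before presenting it.
- State the date any projection was last updated.
- Flag projections that deviate from the Year 1 revenue target.
```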

But the piece that surprised me was the monthly notes. In October, I recorded a commitment to launch a SaaS product by Q2. By December I'd stopped mentioning it entirely. Not a deliberate decision. It just slipped. When Claude flagged the gap in January, it wasn't because the AI was smart. It was because the room still held what I'd forgotten. That thread across sessions, the record of what I committed to and what actually happened, turned out to be the most valuable thing in the workspace.

That stings sometimes. Good accountability usually does.

The Monthly Mirror

Here's what a check-in actually looks like.

I pull up Claude Code in my business directory. The rules files load automatically. I say something like: "Let's do the monthly review. How are we tracking against the plan?"

What comes back isn't generic advice. It's specific. "You're tracking at about 60% of your Year 1 revenue target. The retainer work is on track, but the SaaS product you mentioned hasn't generated revenue yet. You said you'd launch by Q2. That's eleven weeks away."

Eleven weeks. I knew that abstractly. Seeing it reflected back, grounded in my own stated commitments, hits differently.

[Image: a calendar with business milestones and a strategic planning session] The AI becomes an accountability partner with perfect memory.

When the Room Fades

There's a catch. In long sessions, even well-constructed context starts to fade. I noticed it first when Claude ignored a voice rule mid-conversation that it had followed perfectly at the start. The instructions hadn't changed. They'd just gotten buried under everything else.

Researchers call this "lost in the middle." AI models pay close attention to the start and end of a conversation, but instructions in the middle get diluted by everything around them. The rules are still there. The model just stops consulting them as reliably.

Recently, I've restructured how context loads. Instead of everything activating once at session start, I converted some of my .claude/rules/ into .claude/skills/ that I invoke when I need them. /voice loads my writing patterns. /business loads the plan. The context refreshes mid-conversation, not just at the start.
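A skill, in this setup, is a small directory under .claude/skills/ whose SKILL.md tells the model what extra context to load and when. A minimal sketch, with frontmatter fields assumed from Claude Code's documented skill format and contents invented for illustration:

```markdown
---
name: voice
description: Reload my writing voice profile before drafting or editing content
---

Read voice-profile.md in this directory and apply it to all prose:
sentence rhythm, preferred vocabulary, and the rules on hedging.
```

Invoking /voice mid-session puts those instructions back at the end of the conversation, where the model attends to them most reliably.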

But some things can't be allowed to drift at all. For those, I built Claude Enforcer, which adds hooks. These are shell scripts that run before Claude acts. A hook can block a forbidden action even if the model has forgotten the rule entirely. Soft guidance where drift is acceptable, hard enforcement where it isn't.
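A minimal sketch of what such a hook check can look like, in the spirit of Claude Code's PreToolUse hooks: the hook receives a JSON payload describing the pending tool call, and a "block" exit code stops the action regardless of what the model intended. The payload shape and the specific forbidden pattern here are assumptions for illustration:

```shell
#!/bin/sh
# Return 2 (block) when the pending tool call contains a forbidden
# command, 0 (allow) otherwise. A real hook would read the payload
# from stdin and exit with this code; stderr is surfaced to the model.
check_payload() {
  if printf '%s' "$1" | grep -q 'rm -rf'; then
    echo "Blocked: recursive force-delete is forbidden" >&2
    return 2
  fi
  return 0
}
```

The check runs in plain shell, outside the model, so it holds even when the rule has long since scrolled out of attention.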

The room matters. But sometimes you need to walk back in and turn the lights on again.

What Changes

The biggest shift is visibility. Strategic drift is easy to miss when you're inside it, convincing yourself you're on track. It's harder to ignore when the AI can quote your own plan back at you.

But there's something subtler happening too. That decision you made in September, the reasoning behind it, stays accessible in December. The client email you send in February sounds like the one you sent in August because the AI has your voice, not a guess at it. Your own forgetting stops mattering as much.

The AI doesn't care that you missed a milestone. It just notices. You get to decide what to do with that.3

The Hidden Interface

Last Tuesday, when I ran that monthly mirror, the first thing Claude flagged was a project I'd stopped mentioning. I hadn't decided to drop it. I'd just stopped talking about it, and the silence said more than I wanted to hear.

That only happened because the room was full. The business plan was there. The previous month's notes were there. My own commitments, in my own words, were there. Most people focus on what to say to AI. The real leverage is in what you show it before you speak.

The AI doesn't remember on its own. But your data can remember for it.


Footnotes

  1. Anthropic. "System Cards." Claude processes each conversation without persistent memory across sessions; context must be provided within the session.

  2. Mollick, E. (2024). "Co-Intelligence: Living and Working with AI." Portfolio. Most users interact with AI at a surface level, missing the deeper capabilities that emerge from sustained, contextual engagement.

  3. Newport, C. (2024). "Slow Productivity: The Lost Art of Accomplishment Without Burnout." Portfolio. On the value of accountability systems that notice without judging, allowing the individual to decide how to respond.

