I was doing business analysis for a project. We needed to understand how some code in our repository actually worked.
I was travelling in the US at the time. The dev team was in the UK and had finished for the day. Normally, I'd have to wait until the next day, send them a message, hope they had time to walk me through it.
A teammate in the US suggested trying something different. He cloned the repo locally and asked Claude to trace through the logic.
A few minutes later, we had a process flow diagram in Mermaid showing exactly how the code worked.
The next day, we checked with the UK dev team. The diagram was exactly right.
No waiting. No timezone juggling. No interrupting someone's workflow to ask them to explain something.
This was different from building things with Claude Code. This was thinking through problems. Understanding systems. Working out what needed to happen before writing any code.
That's when I started to realize I was using AI in two distinct ways.
It wasn't one moment. It was several.
The business analysis work showed me we could understand codebases without waiting for developers across timezones.
Then I was writing documentation in Confluence. Information was scattered everywhere - different SharePoint sites, our vendors' systems, Teams messages, email threads.
I'd pick one topic at a time. Gather the relevant information from wherever it lived. Feed it to Claude Chat and ask it to output consistent documentation. Using the Atlassian MCP to post directly to Confluence removed a lot of friction.
Then move on to the next topic.
That's when Claude started asking me questions. When there were gaps in what I'd given it, it would ask rather than assume. "You mentioned three user types but only described two - what about the third?" That kind of thing.
By the time I was speccing out work, I'd gotten to a place where I could set reasonable context and use Claude as a thinking partner to clarify my thoughts before building anything.
Two modes had emerged: thinking and doing.
When I'm working in exploratory mode, my typical prompt looks something like this: "I'm working on X, Y is what I think so far, ask me questions with the aim of creating a Jira ticket" or "ask me questions to create a Claude Code prompt."
I use this mode for different types of problems.
Architecture decisions. How should I host this site? What tech stack makes sense? I'll give Claude context about constraints (we use Azure as our primary cloud provider) and ideas (thinking about using markdown for content), then let it ask questions to help me think through the trade-offs.
Breaking down large features. I'll describe a big feature and work with Claude to break it into user stories. I've found that AI is significantly better than me at stating assumptions. It will surface things I've left implicit. "You said users can upload files - what file types? What size limits? What happens if the upload fails?"
Technical problem solving. I've experimented with giving AI freedom on technical design. That hasn't always worked well. I've found it enthusiastically agrees with a possible route rather than objecting and suggesting a better one. This could be my lack of skill in prompting. It could be reward hacking again - optimizing for agreement rather than finding the best solution.
Learning new concepts. I'm learning by doing - building this blog site, writing posts about books I've read. AI is enabling that. Making it fun and easier. But it's not leading it. I decide what I want to learn and build. AI helps me get there faster.
I've learned a few things about what makes exploratory conversations useful versus confusing.
Set context upfront. Be clear about what you're looking for. Tell Claude what the boundaries are - what's fixed and what's flexible.
"We must use Azure as our cloud provider - that's our primary hosting" is different from "I'm thinking about using markdown as a file format - what do you think?"
One is a constraint. The other is an idea I'm exploring.
Let Claude decide what questions to ask. I find numbered questions helpful - they make it easier to respond to each one clearly.
Watch for signs where Claude is guessing or making assumptions. When that happens, guide it back with web links or reference documentation. Give it something concrete rather than letting it speculate.
Long conversations can get confusing. If things feel muddled, bail out. Get a summary of the context you've built up, then start a fresh conversation with that context. Sometimes a clean slate helps.
One important thing: this isn't a replacement for people.
I use exploratory mode to get to a design quicker than I otherwise would. Then I review it with our technical teams before doing anything else. The AI helps me think, but the team validates whether the thinking is sound.
My typical workflow: go through exploration in Claude Chat, get to a detailed prompt, then move to execution.
For code, that means Claude Code. For documentation or Jira tickets, it might stay in Claude Chat but the mode is different - I'm not exploring anymore, I'm producing something specific.
I've experimented with breaking work down into user stories and tackling them one at a time. But I've found more success on greenfield projects by starting with a very detailed large prompt, getting something working, then iterating through it.
Once I'm in execution mode, it's a mix of different types of work.
New features. Bug fixes. UI improvements. Documentation that needs writing. Tickets that need creating.
The refactoring prompt I mentioned in the last post - that's execution mode. I know what I want: clean, maintainable code. I'm not exploring whether that's the right goal. I'm executing on making it happen.
I switch between modes all the time.
I'll be executing something - writing code, drafting documentation, whatever. It feels like it's going wrong. The approach isn't working. Something doesn't make sense.
I pause. Jump into an exploratory chat. Give as much detail as I can about what's happening and what I was trying to accomplish. Work through why things might be going wrong.
Then I go back to execution with a clearer understanding of what needs to happen.
There's no defined point when thinking becomes doing. It's when I feel like I'll learn more by trying it. Sometimes that's when I've got a prompt or user story I'm happy with. Sometimes it's when the AI's questions are getting off-topic and I need to fail fast to see what's actually missing.
I find it really difficult to use AI for scheduling and prioritization.
Nearly impossible, actually.
I can't give it the context I have about what matters and why. There are too many factors - business priorities, team capacity, dependencies between projects, political considerations, who's waiting for what.
I have to make those judgment calls myself. AI can't do it for me because the context is too complex and too nuanced to explain.
This shows up in exploratory mode too. When I'm breaking down large features into user stories, AI is great at surfacing assumptions I've left implicit. But it has a bias for action within the current conversation. It wants to tackle things now. I have to handle the sequencing and planning myself - what needs to happen first, what can wait, what depends on what.
This is where being a person with institutional knowledge matters. AI doesn't have that.
It's not about using AI for everything.
It's about finding the right tooling and workflow where AI can speed things up.
I was a huge user of Visio for diagrams. I've moved toward Mermaid now because AI can generate diagrams in Mermaid format. That's a workflow adaptation that makes AI more useful.
It's not that AI replaced my diagramming skills. It's that I changed my tooling to work better with AI's capabilities.
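To give a sense of what that looks like, here's a minimal sketch of the kind of Mermaid source a conversation like that produces - the flow and node names are hypothetical, not from any real codebase:

```mermaid
flowchart TD
    A[Request received] --> B{Record in cache?}
    B -- Yes --> C[Return cached result]
    B -- No --> D[Query the database]
    D --> E[Transform into response format]
    E --> F[Write result to cache]
    F --> C
```

Because the diagram is just text, AI can generate it and I can tweak it in the same conversation - which is exactly the workflow fit I mean.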
AI isn't replacing my knowledge or judgment. It's augmenting how I work when I structure my workflow to take advantage of what it's good at.
The two modes emerged gradually through use. They weren't something I planned. They're just how I ended up working.
Exploratory for thinking through problems. Execution for building things or producing documentation.
I switch between them constantly. Sometimes mid-task. Sometimes multiple times in an hour.
I'm still figuring out the boundaries. Still learning what works and what doesn't.
Architecture decisions work well in exploratory mode. Scheduling and prioritization don't work at all. Technical design sometimes works, sometimes doesn't - I'm still learning when to trust AI's enthusiasm and when to push back.
The modes aren't rules. They're just patterns I've noticed in how I work. They help me think about when to slow down and ask questions versus when to execute on something I've already figured out.
I'm sure this will evolve as the tools get better and as I get better at using them.
For now, this is how I work with AI.