My GitHub Copilot Journey - Part 2: The Long Conversation
Overview
Most people use AI like a search engine: one question, one answer, done. To me, the real power unlocks when you keep going.
In Part 1, I described the reflex shift — replacing Google with a direct question. That's stage one. Stage two is what happens when you stop leaving after the first answer. And honestly, I discovered this almost by accident.
The accidental deep dive
I needed to prepare a workshop on a topic I only partially understood (I won't pretend I was an expert — I knew just enough to be dangerous). Usually that means: read five articles, outline on paper, restructure three times, fill in the gaps, and hope the narrative holds.
This time, I described the topic, the audience, and the goal to GitHub Copilot. It came back with a structured learning path. Not bad, but not quite right. So I said: "The progression here is too fast. Add an intermediate exercise between sections 2 and 3."
Then: "Actually, make section 1 more hands-on. The audience learns by doing."
Then: "Can you also generate a sample dataset they can work with?"
Twenty-five turns later, I had a complete workshop: narrative arc, exercises, sample data, estimated timing per section. The kind of thing that normally takes me a full day had taken about an hour. And — this is the important part — it wasn't a rough draft I had to rewrite. It was 80% there. My actual expertise was needed for the last 20%: the judgment calls about what to cut, the real-world anecdotes, the places where I knew the audience would get confused.
That session changed something. I realized I could stop treating AI as a lookup tool and start treating it as a thinking partner. I have been using it that way ever since, and I must say that I really love it 😊.
The tools that made depth possible
A few features and add-ons turned out to be critical for going deep (I won't walk through every feature here — the goal is not to write a product manual):
Checkpoints. During long sessions, Copilot automatically creates checkpoints — snapshots of what's been accomplished. My first deep session had 1 checkpoint. The workshop session had 7. By the time I was doing full project builds, sessions would rack up 15-19 checkpoints. The beauty of this is that they're like chapter markers in a book — they let you navigate a long session without losing your place, and they help Copilot maintain context across dozens of turns.
The rubber-duck agent. At some point I discovered that Copilot can spin up a separate "rubber-duck" agent to critique your plan before you implement it. It's like having a colleague review your architecture sketch before you start building. The first time I used it, it caught a blind spot in my deployment approach that would have cost me an hour of debugging. Now I use it reflexively for anything non-trivial. (Yes, I realize I could have just thought harder myself, but honestly — this is faster and more thorough.)
Context7. I installed a skill that pulls live documentation from any framework directly into the conversation. Instead of tab-switching to read docs, I could say "show me how this framework handles authentication" and get up-to-date code examples inline. It sounds small, but it eliminated one of the last reasons to leave the conversation.
Pro Tip: Install Context7 early in your journey. The moment you stop tab-switching to documentation sites, your sessions get noticeably deeper because you never break flow.
Microsoft Learn MCP. Similar to Context7 but specifically for the Microsoft ecosystem. This one connects Copilot directly to the official Microsoft Learn documentation — Azure services, SDKs, APIs, best practices, code samples — all searchable and fetchable without leaving the conversation. When I'm building Azure infrastructure and need to know the exact Bicep schema for a resource, or the correct MSAL configuration for a specific auth flow, it pulls the authoritative answer from Microsoft's own docs. No more half-outdated blog posts or guessing at parameter names. For me, as someone working heavily in the Microsoft/Azure ecosystem, this has been invaluable.
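Both of these are MCP servers, and in VS Code they can be registered in a .vscode/mcp.json file. The sketch below is roughly what my setup looks like — treat the package name and endpoint URL as assumptions and verify them against the current Context7 and Microsoft Learn docs before copying anything:

```jsonc
// .vscode/mcp.json — minimal sketch, not a definitive setup.
// Verify the package name and endpoint against the official docs.
{
  "servers": {
    // Context7: local server launched via npx
    // (assumes the @upstash/context7-mcp package is still current)
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    // Microsoft Learn: remote server
    // (assumes the learn.microsoft.com/api/mcp endpoint)
    "microsoft-learn": {
      "type": "http",
      "url": "https://learn.microsoft.com/api/mcp"
    }
  }
}
```

Once registered, Copilot's agent mode can call these servers as tools, so documentation lookups happen inline instead of in a browser tab.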
Why longer sessions produce disproportionate results
Here's a pattern from my own usage that surprised me: about 58% of my sessions were quick one-turn asks. Quick answer, move on. But 6% of my sessions went 30 turns or deeper — and those marathon sessions produced the vast majority of tangible output.
Why? Because long sessions have compounding context. Each turn builds on the previous ones. By turn 20, Copilot knows your project structure, your constraints, your preferences, and your goals. You're not re-explaining anything. You're iterating at the speed of thought.
The beauty of this is that short sessions and long sessions serve completely different purposes — short sessions are transactions, long sessions are collaborations. The downside is that you need to set aside a proper block of time for the deep ones (no 5-minute slot will do).
The prompt evolution: questions → commands → guidance
Here's something concrete you can track in your own usage. Looking back, my prompts evolved through three distinct phases:
Phase 1 — Questions (first 2 weeks):
"What does this error mean?" "How do I configure this setting?" "What's the syntax for X?"
You're asking for information. Copilot is an encyclopedia.
Phase 2 — Commands (weeks 3-5):
"Create a workshop outline for topic X." "Refactor this function to handle edge cases." "Write tests for this module."
You're directing work. Copilot is an implementer.
Phase 3 — Guidance (weeks 6+):
"I want to build a system that does Y. The audience is Z. What's the best approach?" "Here's my current architecture. Where are the weak points?" "Help me think through the trade-offs between approach A and B."
You're thinking together. Copilot is a collaborator.
That progression — ask → direct → think together — is the trust ladder in action. You can't skip steps (I tried, it doesn't really work that well), but you can move through them faster if you're deliberate about it.
Learning by building (not by reading)
One of the biggest surprises was how much I learned through long sessions. Not by asking Copilot to explain concepts in the abstract, but by building something and asking questions along the way. (I'm not going to claim this replaces formal learning entirely — but for me, it's become the primary way I pick up new skills.)
When you scaffold a project together, you naturally encounter decisions: Why this database instead of that one? What's the trade-off between these two auth patterns? Why does this deployment model work better here?
Each of those questions — asked in context, about code you're actively building — teaches you more than reading a documentation page ever could. You're learning by doing, with an expert pair programmer who never gets tired of your questions.
The native language breakthrough
Here's something I didn't expect to matter: I started mixing languages in my prompts. When I was thinking in Dutch, I'd write in Dutch. When the topic was technical, I'd switch to English mid-sentence. (Just for the fun of it really, to see what would happen.)
Copilot didn't care. It just adapted.
Over time, my Dutch-language prompts grew from zero to nearly one in five. That might seem like a small thing, but it removed a real friction point. When you're brainstorming, the last thing you want is to translate your thoughts before you can express them. The moment I realized I could think out loud in my native language and still get good results, the tool stopped feeling like a foreign interface and started feeling like an extension of my thinking 😊.
You might be in this stage if:
- You're comfortable with quick asks but haven't tried longer sessions
- You mostly write questions and haven't yet moved to giving instructions or thinking out loud
- You've used AI for the same types of tasks and haven't explored beyond that
- Sessions rarely go past 10 turns
The workflow shift
In Part 1, the workflow was:
Problem → Search → Read → Try → Repeat
By Part 2, it became:
Idea → Describe → Build together → Iterate → Refine → Ship
The critical difference: I went from being a consumer of answers to a director of work. I still need expertise to evaluate the output. I still make the final calls. But the loop from "idea in my head" to "working first draft" collapsed from days to hours. To me, that shift is what makes long sessions so powerful.
Try this week
Pick one thing you want to learn or build. Start a session and don't stop until you've produced something tangible. Not just answers — an actual artifact. A workshop outline, a project scaffold, a structured analysis, a prototype.
If you find yourself at 15 turns and still going, you're doing it right. That's not inefficiency — that's collaboration. Do give it a try if you haven't already; you won't regret it!
Track your session depth. If your longest session this week is under 10 turns, you haven't pushed far enough yet.
In Part 3, I'll show what happened when I stopped limiting AI to technical tasks and started using it for business, strategy, research, and domains I never expected it to handle.
This is Part 2 of the My GitHub Copilot Journey series.
Now go ask GitHub Copilot something ambitious. 🚀
Enjoy!
Tim