My GitHub Copilot Journey - Part 5: The Compound Effect
Overview
Two months ago, I asked an AI to explain an error message. Last week, I built a complete web application — frontend, backend, authentication, infrastructure, CI/CD pipeline, monitoring dashboards — in a single afternoon session of 122 exchanges.
This is the final part of the series (well, almost — more on that in a moment). Not because the journey is over, but because the foundation is set. In Part 1 I built the reflex. In Part 2 I went deep. In Part 3 I went wide. In Part 4 I started automating the systems around my work. Part 5 is about what happens when all of those layers compound. (And yes, there's a Part 6 — because the story didn't end where I expected.)
The numbers
I tracked my own usage over two months. Here's what happened (I won't go into every metric — just the ones that tell the story best):
| Metric | Week 1 | Week 9 |
|---|---|---|
| Sessions per week | ~5 | ~25 |
| Average session depth | ~8 turns | ~17 turns |
| Longest session | 44 turns | 122 turns |
| Total weekly interactions | ~76 | ~241 |
| Time between sessions (median) | 11 hours | 18 minutes |
| Types of tasks | Code only | Code, business, ops, research, communication, infrastructure |
That last row is the most important one. The median gap between sessions dropped from 11 hours to 18 minutes. I went from "occasionally using AI" to "it's always open." Not because I forced a habit — but because asking AI became the lowest-friction path to getting anything done. I have been working this way for a while now and honestly, going back would feel strange 😊.
On my most active day, I ran 14 separate sessions. Not 14 questions — 14 distinct projects and workflows, each with their own context and goal. (Yes, that was probably a bit much — but I was on a roll.)
The 122-turn session
Let me describe what a 100+ turn session actually looks like in practice.
I started with a source of data — a document that needed to become a web application. In a single continuous conversation, we went through:
- Data extraction — parsing and structuring the source material
- Application scaffolding — frontend and backend, project structure, routing
- Authentication — user login, role-based access, token management via MSAL
- Infrastructure — Bicep templates for compute, database, monitoring, identity
- CI/CD — GitHub Actions pipeline triggered by push, with OIDC auth to Azure
- Deployment — to Azure, with environment configuration and slot management
- Telemetry — Application Insights dashboards, KQL-based query workbooks, alert rules
- Debugging — fixing deployment failures, iterating on UX issues
- Refinement — responsive design, error handling, edge cases
One conversation. One afternoon. A complete, deployed, monitored, authenticated system.
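The first step above, data extraction, is the kind of thing turn one of such a session produces. Here is a minimal sketch of that step; the input format and function name are my own illustration, not what the actual session generated:

```python
import json
import re

def extract_sections(markdown_text: str) -> list[dict]:
    """Split a markdown document into {title, body} records,
    ready to feed into an application's data layer."""
    sections = []
    current = None
    for line in markdown_text.splitlines():
        heading = re.match(r"^#+\s+(.*)", line)
        if heading:
            if current:
                sections.append(current)
            current = {"title": heading.group(1).strip(), "body": ""}
        elif current:
            current["body"] += line + "\n"
    if current:
        sections.append(current)
    return sections

doc = "# Intro\nHello.\n# Setup\nSteps here.\n"
records = extract_sections(doc)
print(json.dumps(records, indent=2))
```

In a real session, a rough draft like this is exactly what you iterate on in the next few turns: edge cases, nested headings, validation.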
This is not about AI being fast. It's about never losing context. In traditional development, every context switch — from code to browser, from editor to documentation, from development to deployment — costs time and mental energy. In a deep AI session, you stay in one conversation. The context accumulates. By turn 80, you can say "the authentication from step 3 doesn't work with the role assignment from step 5" and Copilot knows exactly what you're referring to.
To me, it feels less like programming and more like directing. You're making decisions at the speed of thought while the implementation keeps pace. The beauty of this is that your expertise becomes the bottleneck, not your typing speed. The downside is that very long sessions can tax your attention span — you still need to stay sharp to catch issues.
The operating model shift
Looking back at the full two months, the change wasn't gradual. It happened in three distinct phases (I'm simplifying a bit here for clarity — the real progression was messier than this makes it sound):
Phase 1: The Reflex (Weeks 1-2)
- Operating model: AI is a search replacement. I ask, it answers, I verify.
- Trust level: Low. I check everything.
- Output: Time saved on individual lookups.
Phase 2: The Exploration (Weeks 3-5)
- Operating model: AI is a collaborator. I describe goals, we build together, I iterate.
- Trust level: Growing. I verify selectively.
- Output: Projects I wouldn't have started. Skills I wouldn't have learned.
Phase 3: The Integration (Weeks 6-8)
- Operating model: AI is part of my workflow. It handles implementation, deployment, and daily ops. I handle vision, decisions, and quality.
- Trust level: Calibrated. I know where it excels and where it needs oversight.
- Output: Complete systems. Automated workflows. Structural change in how I work.
The key word in Phase 3 is structural. It's not about doing the same work faster. It's about doing different work — higher-level, more ambitious, more complete — because the implementation burden is shared. If you ask me, that's the most important insight from this entire series.
The trust progression model
If there's one framework I'd want you to take from this series, it's this (and I realize it sounds deceptively simple, but it genuinely captures how this works in practice):
Ask → Verify → Direct → Delegate → Integrate
- Ask: You pose questions and treat answers as suggestions to be checked.
- Verify: You still check, but you've built enough evidence to know when to check.
- Direct: You give instructions instead of asking questions. Copilot implements, you review.
- Delegate: You hand over workflows, not just tasks. You define the process, Copilot executes it.
- Integrate: Copilot is part of your operating model. You think about work differently because of its presence.
You can't skip steps. But you can move through them faster by being deliberate: start with verifiable questions, expand gradually, and keep testing the boundaries.
The habit formation data
Some patterns that might help you predict your own trajectory (although everyone's path will look slightly different):
- Consistency matters more than intensity. I used AI on 54% of all days during this period. Not every day — but more than every other day. That consistency is what builds the reflex.
- Evenings were my deep work. Morning sessions tended to be short (daily ops, quick questions). Evening sessions ran longer and produced more output. If you're looking for when to do your ambitious AI sessions, find your natural deep-work window.
- The marathon sessions matter disproportionately. 58% of my sessions were quick one-turn asks. But the 6% that went 30+ turns produced the majority of tangible output. Don't be afraid of long sessions. They're where the real work happens.
- You'll find your natural language. My prompts started 100% in English. By month two, nearly one in five were in my native language. When I stopped translating my thoughts and just wrote what I was thinking, the sessions became more natural and productive. If English isn't your first language, give yourself permission to use whatever language you think in.
Pro Tip: Track your session depth for a week. If everything is under 10 turns, deliberately push one session past 20. You'll be surprised what happens when you let the context compound.
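One way to act on that tip: keep a simple log of session start times and turn counts, and let a short script compute the same kind of metrics as the table above. The log format and the numbers below are invented for illustration:

```python
from datetime import datetime
from statistics import median

# Hypothetical session log: (start timestamp, number of turns).
sessions = [
    ("2025-01-06 09:12", 3),
    ("2025-01-06 09:30", 22),
    ("2025-01-06 14:05", 7),
    ("2025-01-07 08:50", 41),
]

starts = [datetime.strptime(ts, "%Y-%m-%d %H:%M") for ts, _ in sessions]
gaps_minutes = [
    (b - a).total_seconds() / 60 for a, b in zip(starts, starts[1:])
]
turns = [t for _, t in sessions]

print(f"median turns per session: {median(turns)}")
print(f"median gap between sessions: {median(gaps_minutes):.0f} min")
print(f"sessions past 20 turns: {sum(t > 20 for t in turns)}")
```

A week of this is enough to see whether all your sessions stall under 10 turns, which is exactly the pattern the tip asks you to break.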
The honest limitations
This series would be incomplete without an honest accounting of where AI falls short. Here's what I've learned (and I think it's important to be upfront about this):
It doesn't replace judgment. Every session still requires me to decide what's right for the situation. AI can generate ten approaches to a problem. Knowing which one fits your context, constraints, and users — that's on you.
It doesn't replace expertise. You need to know enough to evaluate what it produces. An AI can write an infrastructure template, but if you don't understand why certain permission scopes matter, you'll deploy something insecure. AI amplifies your knowledge — it doesn't substitute for it.
It makes confident mistakes. Not often, but when it does make them, it doesn't flag them. You need to catch them yourself. This is why the trust calibration from Part 1 matters: you learn which types of output to scrutinize. (I have been caught off guard by this a few times — it's humbling.)
It works best when you work with it. The "set it and forget it" fantasy doesn't exist. The best sessions are collaborative: you provide context, review output, course-correct, iterate. If you paste a one-liner and expect perfection, you'll be disappointed.
It has a context window. In very long sessions (80+ turns), earlier context can fade. You'll sometimes need to re-state constraints or decisions from earlier. This is a current limitation, not a permanent one — but it's worth knowing about.
A framework for anyone starting today
If I could go back to day one, here's what I'd tell myself (and what I'd tell you, if you're just starting out):
Week 1: Build the reflex
Replace 5 searches per day with AI asks. Verify every answer. Track accuracy. Goal: internalize that asking AI is faster than searching, and start building evidence-based trust.
Week 2: Go deeper
Pick one project or learning goal. Have a 20+ turn conversation. Stay in the session until you produce something tangible — not just answers, but an artifact. Goal: experience compounding context.
Week 3: Go wider
Use AI for 3 non-code tasks. Email drafting, research, analysis, planning — anything outside your usual domain. Use the context framework: situation, audience, goal, constraints. Goal: stop thinking of AI as a "code tool."
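That context framework is easy to turn into a reusable template. This is a hypothetical sketch, not an official format; the field names mirror the framework and the example values are invented:

```python
def build_prompt(situation: str, audience: str, goal: str,
                 constraints: str, task: str) -> str:
    """Assemble a non-code prompt using the
    situation/audience/goal/constraints framework."""
    return (
        f"Situation: {situation}\n"
        f"Audience: {audience}\n"
        f"Goal: {goal}\n"
        f"Constraints: {constraints}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    situation="Quarterly planning review next Tuesday",
    audience="Non-technical leadership team",
    goal="Get sign-off on the Q3 roadmap",
    constraints="One page, no jargon, three options max",
    task="Draft the summary email.",
)
print(prompt)
```

Filling in four fields before you ask forces you to supply the context the model would otherwise have to guess at.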
Week 4: Automate something
Pick one manual process — a deployment step, a daily check, a recurring task — and automate it through AI. Iterate on the prompt until it's reliable. Goal: think in workflows, not tasks.
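As a concrete example of "automate a daily check", here is a sketch of a morning health check you could build and refine with Copilot. The service names and URLs are placeholders, not real endpoints:

```python
import urllib.request

# Hypothetical daily check: poll a few service health endpoints
# (placeholder URLs) and print a one-line morning summary.
SERVICES = {
    "api": "https://example.com/health",
    "web": "https://example.com/",
}

def is_up(url: str) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def format_summary(results: dict[str, bool]) -> str:
    """Turn {service: up?} into a one-line summary."""
    ok = sum(results.values())
    details = ", ".join(
        f"{name}={'up' if up else 'DOWN'}" for name, up in results.items()
    )
    return f"{ok}/{len(results)} services healthy: {details}"

if __name__ == "__main__":
    print(format_summary({name: is_up(url) for name, url in SERVICES.items()}))
```

The point isn't this particular script; it's that you iterate on it with Copilot until the recurring task runs itself and you only read the summary.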
Week 5+: Push the boundary
Attempt your most ambitious project yet. Not because AI makes it easy — but because it makes it possible within a reasonable timeframe. You'll learn more from one ambitious session than from fifty quick asks.
Week 6+: Build your squad
Once you've hit your stride, explore skills and multi-agent features. Install skills that give Copilot specialized knowledge for your domain. Use the rubber-duck agent to validate plans before you build. Try multi-agent orchestration for complex workflows. Read more about this in Part 6: The Squad.
Self-assessment: where are you?
| If you... | You're probably at... | Your next step is... |
|---|---|---|
| Rarely use AI, or only for simple lookups | Ask | Part 1: replace 5 searches this week |
| Use AI regularly but keep sessions short | Verify | Part 2: try a 20+ turn deep session |
| Go deep on code but nothing else | Direct | Part 3: use AI for 3 non-code tasks |
| Use AI broadly but still do things manually | Delegate | Part 4: automate one workflow |
| AI is part of everything you do | Integrate | Keep pushing boundaries. Build something ambitious. |
What changed
Two months ago, my workflow looked like this:
1. Morning: Open inbox → triage manually → start coding
2. Daytime: Hit a problem → google → read → try → repeat
3. Evening: Push half-finished work → wonder where the day went
Today:
1. Morning: AI briefs me → I review in 5 minutes → prioritize
2. Daytime: Describe a vision → build it together → iterate → deploy
3. Evening: Look back at what shipped → plan tomorrow's ambitious session
The total output isn't just higher. It's a different kind of output. I'm shipping complete systems instead of pushing half-finished projects forward by inches.
And the most honest thing I can say: I'm still getting better at this. The trust progression doesn't stop. Each week, I find new things to delegate, new patterns to optimize, new boundaries to test. The ceiling keeps moving up.
One last thing
If you've read this entire series, you now have a framework that most people discover through months of trial and error. You don't need to follow my exact path (and honestly, your path will probably look different — that's fine!). But you do need to start.
Open GitHub Copilot. Ask it something you'd normally google. Notice how fast you get the answer. Then ask a follow-up. And another.
Before you know it, you'll be at turn 20, building something you didn't think was possible this morning 😊.
This is Part 5 of the My GitHub Copilot Journey series. But the story didn't end here — Part 6: The Squad explores what happens when you stop working alone and start orchestrating a team of specialized agents.
Now go ask GitHub Copilot something ambitious. 🚀
If you have questions or want to share your own experience, do reach out — I'd love to hear from you!
Kr, Tim