My GitHub Copilot Journey - Part 6: The Squad
Overview
In Part 5, I described the compound effect — what happens when trust, depth, breadth, and infrastructure all stack up. I thought that was the end of the story. It wasn't.
Somewhere around week seven, I stopped talking to one Copilot. I started assembling squads. (I realize that sounds dramatic — but bear with me, it'll make sense in a moment 😊.)
This is the part of the journey where GitHub Copilot stopped being a tool I used and became a system I orchestrated. Where I went from having one conversation at a time to coordinating multiple specialized agents working in parallel. And where I started thinking about how to bring this capability to an entire team — not just myself.
What is a squad?
A squad is exactly what it sounds like: a team of specialized agents, each with its own role, working together on a shared objective.
In my first squad session, I was working on a property sale that needed a website, a buyer outreach strategy, and operational logistics — all at once. (I knew very well that this was an unusual use case for a "coding assistant" — but by this point I'd stopped thinking of it that way.) Instead of handling everything in one long linear conversation, I assembled a squad:
- One agent handled the website build (frontend, gallery, content)
- One handled buyer identification and outreach strategy
- One handled operational tasks (photo management, document generation)
- One handled security and deployment
That session went to 102 turns and 19 checkpoints. But the key insight wasn't the volume — it was that each agent held its own context. The website agent knew about image galleries and SEO. The security agent knew about Managed Identity configuration and API authentication. I didn't have to re-explain the full picture to each one. I just pointed them at their part of the problem.
To me, it felt less like programming and more like managing a project team. I was the architect. They were the specialists.
The multi-agent orchestration breakthrough
The squad for the property sale was emergent — I discovered the pattern by doing it. But the real breakthrough came when I deliberately designed a multi-agent workflow for my day job. (I won't go into the specifics of the data models here — that would take us too far off track.)
I set up a squad that combined three different data sources:
- Work data (emails, calendar, files) through one agent
- Sales pipeline data (opportunities, accounts, deal teams) through another
- Project tracking data (deliverables, milestones, tasks) through a third
Each agent had its own connection, its own schema knowledge, its own specialization. I could ask "what did my customer ask about last week?" and get an answer from one agent, then immediately ask "what's the pipeline status for that account?" and get it from another — all in the same conversation, with context flowing between them.
This is what multi-agent orchestration means in practice: not just "AI that's fast," but AI that's connected to your actual systems and can cross-reference between them. The gap between "information I have" and "information I can use" collapsed to near zero. I have been running this setup daily and it has genuinely changed how I prepare for customer meetings.
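To make the pattern concrete, here's a minimal sketch of the routing idea behind a squad: each agent owns one domain and keeps its own context, and a coordinator picks the right specialist per question. Everything here — the `Agent` class, the `route` function, the keyword matching — is an illustrative stand-in, not the actual Copilot mechanics; in my real setup the agents are Copilot sessions wired to live data sources.

```python
# Conceptual sketch only: Agent, route, and the keyword map are
# hypothetical stand-ins for Copilot agents connected to real systems.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str                                    # e.g. "sales-pipeline"
    keywords: set                                # crude routing signal for this sketch
    context: list = field(default_factory=list)  # each agent holds its own context

    def answer(self, question: str) -> str:
        self.context.append(question)            # context accumulates per agent
        return f"[{self.name}] handling: {question}"

def route(question: str, squad: list) -> Agent:
    """Pick the specialist whose domain keywords best match the question."""
    scores = [(sum(k in question.lower() for k in a.keywords), a) for a in squad]
    return max(scores, key=lambda s: s[0])[1]

squad = [
    Agent("work-data", {"email", "calendar", "file"}),
    Agent("sales-pipeline", {"pipeline", "account", "opportunity"}),
    Agent("project-tracking", {"milestone", "deliverable", "task"}),
]

print(route("what's the pipeline status for that account?", squad)
      .answer("pipeline status for the account"))
```

The point of the sketch is the shape, not the matching logic: one coordinator, several specialists, and per-agent context that never has to be re-explained.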
Skills: teaching Copilot your domain
The squad pattern works because of skills — packaged knowledge modules that give Copilot specialized capabilities. (I'm going to keep this section practical rather than theoretical — if you want the full documentation, there are better resources for that.)
A skill is essentially a SKILL.md file: a structured document that tells Copilot how to work with a specific domain. What commands are available, what data schemas look like, what the gotchas are, what the workflows should be. When you install a skill, Copilot doesn't just know about your domain — it knows how to operate in it.
I discovered skills while building a Hugo website. I needed Copilot to understand the project's custom widget system — not just generic HTML, but the specific folder structure, naming conventions, and configuration patterns. Writing a SKILL.md for it transformed Copilot from "generic web developer" to "someone who knows this project's architecture." The beauty of this is that once you've written the skill, it works consistently every time — no more re-explaining your project structure at the start of each session.
The implications are enormous. Every team has domain knowledge that lives in people's heads, in scattered wiki pages, in tribal knowledge that takes months for new members to absorb. Skills make that knowledge executable. Instead of reading a 50-page onboarding doc, a new team member can install a skill and immediately have Copilot guide them through the team's workflows.
Pro Tip: Start your first SKILL.md with just the happy-path workflow — don't try to cover every edge case on day one. Add complexity as you discover gaps through actual use.
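To show what a happy-path skill can look like, here's a hedged sketch of a SKILL.md in the spirit of my Hugo widget example. The exact schema and frontmatter fields may differ from what your Copilot version expects — treat the structure below as illustrative, not canonical:

```markdown
---
name: hugo-widgets
description: How this project's custom Hugo widget system works
---

# Hugo widget skill

## When to use
Adding or modifying widgets for this site's custom widget system.

## Conventions
- One widget per file under the project's widget partials folder
- Widget parameters come from page front matter, never hard-coded
- New widgets must be registered in the site configuration before use

## Happy-path workflow
1. Copy an existing widget file as a starting template
2. Rename it following the project's naming convention
3. Register the widget in the site configuration
4. Add the widget to a page's front matter and preview locally
```

Even a file this small changes the conversation: Copilot stops proposing generic HTML and starts following your project's rules.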
The skill authoring loop
Writing a skill is itself a Copilot task. The loop looks like this:
- Describe what you want — "I need a skill that knows how to query our sales data and generate territory reports"
- Copilot generates the SKILL.md — including scenarios, error handling, data schemas, and workflow steps
- You test it — use the skill, find gaps, hit edge cases
- You iterate — update the SKILL.md with what you learned
- You share it — make it available to your team
Step 5 is where it gets interesting. GitHub recently announced skill sharing across teams — the ability to publish skills that your colleagues can install. This means the expertise you encode doesn't just help you. It helps everyone who installs it. One person's breakthrough becomes the team's baseline 😊.
Think about what that means at scale. Your best engineer's deployment workflow, encoded as a skill. Your security team's compliance checklist, executable through conversation. Your data team's query patterns, available to anyone who asks. Skills turn individual expertise into organizational capability. (I realize I'm getting a bit enthusiastic here, but for me this is genuinely exciting stuff.)
Dispatch: the TUI that changed the game
There's one more piece that made the squad pattern practical: Dispatch.
Dispatch is a terminal UI (TUI) for managing multiple Copilot agents. Instead of juggling browser tabs or terminal windows, you get a clean interface that shows all your active agents, their status, and their output. You can spin up new agents, route work between them, and monitor progress — all from one screen.
Before Dispatch, multi-agent work was possible but messy. After Dispatch, it became my default operating mode. The ability to see all your agents at a glance, send them work in parallel, and collect their output in one place — that's the difference between "you can technically do multi-agent" and "you naturally work multi-agent." The downside is that you might find yourself spinning up agents for things that a single session could handle just fine (I've been guilty of this a few times).
The rubber-duck agent: your built-in critic
One specialized agent deserves its own mention: the rubber-duck agent.
In traditional programming, "rubber-duck debugging" means explaining your problem to an inanimate object (a rubber duck on your desk) to force yourself to articulate your thinking. GitHub Copilot turned this into a literal feature: a separate agent whose only job is to critique your plan before you implement it.
I use it for anything non-trivial. Before starting a complex build, I describe my approach and ask the rubber-duck to find holes. It catches things I miss — security blind spots, edge cases I didn't consider, simpler approaches I overlooked. The first time I used it, it identified a deployment issue that would have cost me an hour of debugging after the fact. (Well, it would have cost me more than an hour if I'm being honest with myself.)
It's not infallible. Sometimes it raises concerns that don't apply to your specific context. But on balance, spending 30 seconds getting a critique has saved me hours of rework. For me, it's the cheapest quality assurance I've ever found.
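For reference, here's the kind of prompt I mean — the wording is mine and purely illustrative, not a canonical template:

```text
Here's my plan for the deployment pipeline:
1. Build the site with Hugo
2. Upload the output to static hosting
3. Point the custom domain at it

Before I implement anything: what's wrong with this plan?
Look specifically for security blind spots, edge cases I haven't
considered, and any simpler approach I'm overlooking.
```

The key is asking for holes *before* writing code, while changing course is still cheap.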
The explore agent: parallel investigation
Another agent type that changed my workflow: the explore agent.
When working in an unfamiliar codebase, I used to spend the first hour reading through files, building a mental map, understanding the structure. Now I launch explore agents in parallel — one investigating the API layer, one looking at the database schema, one mapping the authentication flow, one checking the test structure. I have been using this approach for onboarding into new projects and it's dramatically faster than the old "read everything sequentially" approach.
In minutes, I have a comprehensive picture that used to take an hour. And because each explore agent reports back independently, I'm not blocked waiting for a sequential investigation to finish. It's the difference between reading a book cover to cover and having four researchers each read a chapter and brief you simultaneously.
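The fan-out/fan-in shape of this is easy to sketch. In the snippet below, `explore()` is a hypothetical stand-in for asking a Copilot explore agent to investigate one area of the codebase; the real work happens in agent sessions, but the coordination pattern is the same:

```python
# Sketch of parallel investigation: launch several explore tasks at
# once and collect their briefs as they finish. explore() is a
# placeholder for a real Copilot explore agent session.
from concurrent.futures import ThreadPoolExecutor, as_completed

def explore(area: str) -> str:
    # Stand-in: a real agent would read code and summarize findings.
    return f"brief on {area}"

areas = ["API layer", "database schema", "authentication flow", "test structure"]

with ThreadPoolExecutor(max_workers=len(areas)) as pool:
    futures = {pool.submit(explore, a): a for a in areas}
    briefs = {futures[f]: f.result() for f in as_completed(futures)}

for area in areas:
    print(f"{area}: {briefs[area]}")
```

Each "researcher" reports back independently, so no single slow investigation blocks the others.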
From individual to team
Here's where the story gets bigger than me. (And honestly, this is the part I'm most excited about — although I realize we're still in the early days of all this.)
Everything in Parts 1-5 was my personal journey. One person, getting better at working with one tool. Useful, but limited. The squad pattern — skills, multi-agent, Dispatch, skill sharing — scales horizontally.
Consider this progression:
Month 1 (Solo): I learn to work with Copilot effectively. My productivity doubles.
Month 2 (Skills): I encode my workflows as skills. Now Copilot doesn't just help me — it helps me in my specific context, with my specific tools and patterns.
Month 3 (Sharing): I share those skills with my team. A colleague installs my deployment skill and immediately has access to the workflow that took me weeks to develop. They don't need to make the same mistakes I did.
Month 4+ (Compounding): My colleague improves the skill based on their own experience. They add error handling for a case I never hit. They share it back. Now my Copilot is better because of their work. The team's collective knowledge grows faster than any individual could manage alone.
This is the real compound effect. Not just personal productivity gains stacking up, but an entire team's expertise becoming executable, shareable, and continuously improving. If you ask me, this is where AI assistants go from "nice productivity hack" to "fundamental change in how teams operate."
A practical starting point
If the squad concept sounds appealing but abstract, here's how to get started:
Step 1: Write your first skill
Pick a workflow you do repeatedly — deploying to a specific environment, querying a particular data source, generating a report in a standard format. Write a SKILL.md that describes it. It doesn't need to be perfect. Start with the happy path and add edge cases as you hit them.
Step 2: Use the rubber-duck agent
Next time you're about to implement something complex, stop. Describe your approach to the rubber-duck agent first. See what it catches. Make this a reflex for non-trivial work.
Step 3: Try multi-agent on a real task
Pick a task that naturally decomposes into parallel streams. Maybe you need to audit three systems at once. Maybe you need to build a frontend and a backend simultaneously. Spin up separate agents for each stream. Notice how the parallelism changes your throughput.
Step 4: Share a skill
Once you have a skill that works well, share it with a colleague. Watch how they use it. Notice what they struggle with. Improve the skill based on their feedback.
Self-assessment: squad readiness
| If you... | Start with... |
| --- | --- |
| Haven't tried multi-turn sessions yet | Go back to Part 2 first |
| Use Copilot deeply but only for one domain | Write your first SKILL.md for your most common workflow |
| Already use skills but work solo | Share one skill with a colleague this week |
| Already share skills within your team | Set up a Dispatch workflow for your next complex project |
| Already orchestrate multi-agent workflows | Start encoding your team's tribal knowledge as skills |
The meta moment
Here's a confession: this entire blog series was built with the squad pattern. (Yes, really.)
One agent analyzed my usage data from a local session database. Another structured the insights into a narrative arc. Another wrote the actual posts. A rubber-duck agent critiqued the plan before implementation. An explore agent investigated the Hugo blog structure and theme configuration.
The series about GitHub Copilot was itself built by a squad of GitHub Copilot agents 😊.
If that feels circular, it should. That's the point. When a tool is powerful enough to document its own impact, and does so convincingly, the argument for adoption kind of makes itself.
What comes next
I don't know. And that's the most exciting part.
Two months ago I was googling error messages. Now I'm orchestrating multi-agent squads that connect to my live data sources, operate with domain-specific skills, and produce work that would have taken teams of people and weeks of effort.
The ceiling keeps moving. Skills are getting more sophisticated. Multi-agent coordination is getting smoother. The gap between "what I can imagine" and "what I can build" shrinks with every session.
If you've read this entire series, you have a roadmap that covers the full journey: from the first tentative question to assembling your own squad. Not everyone needs to go all the way (and that's completely fine!). Maybe the reflex from Part 1 is enough for you. Maybe the deep sessions from Part 2 are your sweet spot. The framework works at every level.
But if you're anything like me, you won't stop. Because once you've seen what's possible with a squad, working alone feels like leaving half your team on the bench.
This is the final part of the My GitHub Copilot Journey series. If you've enjoyed following along, I'd love to hear about your own journey — especially which stage you're at and what your next step will be. Find me on LinkedIn or reach out via this blog.
Now go build your squad. 🚀
If you need info, or just want to chat about any of this — please do not hesitate to contact me!
😃 Tim