<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AI on Tim Schaeps</title>
    <link>https://www.timschaeps.be/categories/ai/</link>
    <description>Recent content in AI on Tim Schaeps</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <copyright>Copyright © 2010–2026, all rights reserved.</copyright>
    <lastBuildDate>Thu, 30 Apr 2026 18:30:00 +0200</lastBuildDate><atom:link href="https://www.timschaeps.be/categories/ai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>My GitHub Copilot Journey - Part 7: Fixing a Big Issue (and What It Means)</title>
      <link>https://www.timschaeps.be/post/github-copilot-journey-part-7-fixing-a-big-issue/</link>
      <pubDate>Thu, 30 Apr 2026 18:30:00 +0200</pubDate>
      
      <guid>https://www.timschaeps.be/post/github-copilot-journey-part-7-fixing-a-big-issue/</guid>
      <description>
        
          
            &lt;p&gt;&lt;strong&gt;I had a deployment that was working yesterday. Today it wasn&#39;t. What followed was one of those debugging sessions that perfectly illustrates both the power and the economics of working with GitHub Copilot.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you&#39;ve been following this series (starting with &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-1-the-first-ask/&#34;&gt;Part 1: The First Ask&lt;/a&gt;), you know the progression: from quick questions, to deep conversations, to infrastructure automation, to multi-agent squads. This post is about what happens when that infrastructure &lt;em&gt;fights back&lt;/em&gt; — and how the tool helps you win. (And then, in a beautifully meta moment, I asked the tool to calculate its own cost. More on that later. 😊)&lt;/p&gt;
          
          
        
      </description>
    </item>
    
    <item>
      <title>My GitHub Copilot Journey - Part 6: The Squad</title>
      <link>https://www.timschaeps.be/post/github-copilot-journey-part-6-the-squad/</link>
      <pubDate>Mon, 27 Apr 2026 23:00:00 +0200</pubDate>
      
      <guid>https://www.timschaeps.be/post/github-copilot-journey-part-6-the-squad/</guid>
      <description>
        
          
            &lt;p&gt;&lt;strong&gt;In &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-5-the-compound-effect/&#34;&gt;Part 5&lt;/a&gt;, I described the compound effect — what happens when trust, depth, breadth, and infrastructure all stack up. I thought that was the end of the story. It wasn&#39;t.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Somewhere around week seven, I stopped talking to &lt;em&gt;one&lt;/em&gt; Copilot. I started assembling &lt;em&gt;squads&lt;/em&gt;. (I realize that sounds dramatic — but bear with me, it&#39;ll make sense in a moment 😊.)&lt;/p&gt;
&lt;p&gt;This is the part of the journey where GitHub Copilot stopped being a tool I used and became a system I orchestrated. Where I went from having one conversation at a time to coordinating multiple specialized agents working in parallel. And where I started thinking about how to bring this capability to an entire team — not just myself.&lt;/p&gt;
          
          
        
      </description>
    </item>
    
    <item>
      <title>My GitHub Copilot Journey - Part 5: The Compound Effect</title>
      <link>https://www.timschaeps.be/post/github-copilot-journey-part-5-the-compound-effect/</link>
      <pubDate>Mon, 27 Apr 2026 22:40:00 +0200</pubDate>
      
      <guid>https://www.timschaeps.be/post/github-copilot-journey-part-5-the-compound-effect/</guid>
      <description>
        
          
            &lt;p&gt;&lt;strong&gt;Two months ago, I asked an AI to explain an error message. Last week, I built a complete web application — frontend, backend, authentication, infrastructure, CI/CD pipeline, monitoring dashboards — in a single afternoon session of 122 exchanges.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This is the final part of the series (well, almost — more on that in a moment). Not because the journey is over, but because the foundation is set. In &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-1-the-first-ask/&#34;&gt;Part 1&lt;/a&gt; I built the reflex. In &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-2-the-long-conversation/&#34;&gt;Part 2&lt;/a&gt; I went deep. In &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-3-beyond-code/&#34;&gt;Part 3&lt;/a&gt; I went wide. In &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-4-the-infrastructure-leap/&#34;&gt;Part 4&lt;/a&gt; I started automating the systems around my work. Part 5 is about what happens when all of those layers compound. (And yes, there&#39;s a &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-6-the-squad/&#34;&gt;Part 6&lt;/a&gt; — because the story didn&#39;t end where I expected.)&lt;/p&gt;
          
          
        
      </description>
    </item>
    
    <item>
      <title>My GitHub Copilot Journey - Part 4: The Infrastructure Leap</title>
      <link>https://www.timschaeps.be/post/github-copilot-journey-part-4-the-infrastructure-leap/</link>
      <pubDate>Mon, 27 Apr 2026 22:30:00 +0200</pubDate>
      
      <guid>https://www.timschaeps.be/post/github-copilot-journey-part-4-the-infrastructure-leap/</guid>
      <description>
        
          
            &lt;p&gt;&lt;strong&gt;Eight weeks ago, I accidentally committed a secret to a public repository. Last week, I was designing authentication architectures with managed identities and federated credentials. Same person. Same tool. Completely different confidence level.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This post is about the most transformative phase of my journey: when I stopped using AI only for code and content, and started using it for &lt;em&gt;infrastructure, deployment, and DevOps&lt;/em&gt;. To me, this is where the productivity gains went from impressive to structural. (I won&#39;t pretend the learning curve was smooth — there were definitely some embarrassing moments along the way.)&lt;/p&gt;
          
          
        
      </description>
    </item>
    
    <item>
      <title>My GitHub Copilot Journey - Part 3: Beyond Code</title>
      <link>https://www.timschaeps.be/post/github-copilot-journey-part-3-beyond-code/</link>
      <pubDate>Mon, 27 Apr 2026 22:20:00 +0200</pubDate>
      
      <guid>https://www.timschaeps.be/post/github-copilot-journey-part-3-beyond-code/</guid>
      <description>
        
          
            &lt;p&gt;&lt;strong&gt;I hired a coding assistant. Then I realized it&#39;s not a coding assistant. It&#39;s a thinking assistant that happens to be really good at code.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-1-the-first-ask/&#34;&gt;Part 1&lt;/a&gt; I built the reflex. In &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-2-the-long-conversation/&#34;&gt;Part 2&lt;/a&gt; I learned to go deep. Part 3 is about going wide — and to me, this is where things got really interesting.&lt;/p&gt;
&lt;h2 id=&#34;the-category-explosion&#34;&gt;The category explosion&lt;/h2&gt;
&lt;p&gt;Looking back at my usage patterns, something happened around week 4. My sessions, which had been almost entirely about code, started fragmenting into wildly different categories (I honestly didn&#39;t plan this, it just sort of happened):&lt;/p&gt;
          
          
        
      </description>
    </item>
    
    <item>
      <title>My GitHub Copilot Journey - Part 2: The Long Conversation</title>
      <link>https://www.timschaeps.be/post/github-copilot-journey-part-2-the-long-conversation/</link>
      <pubDate>Mon, 27 Apr 2026 22:10:00 +0200</pubDate>
      
      <guid>https://www.timschaeps.be/post/github-copilot-journey-part-2-the-long-conversation/</guid>
      <description>
        
          
            &lt;p&gt;&lt;strong&gt;Most people use AI like a search engine: one question, one answer, done. To me, the real power unlocks when you keep going.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In &lt;a href=&#34;https://www.timschaeps.be/post/github-copilot-journey-part-1-the-first-ask/&#34;&gt;Part 1&lt;/a&gt;, I described the reflex shift — replacing Google with a direct question. That&#39;s stage one. Stage two is what happens when you stop leaving after the first answer. And honestly, I discovered this almost by accident.&lt;/p&gt;
&lt;h2 id=&#34;the-accidental-deep-dive&#34;&gt;The accidental deep dive&lt;/h2&gt;
&lt;p&gt;I needed to prepare a workshop on a topic I only partially understood (I won&#39;t pretend I was an expert — I knew just enough to be dangerous). Usually that means: read five articles, outline on paper, restructure three times, fill in the gaps, and hope the narrative holds.&lt;/p&gt;
          
          
        
      </description>
    </item>
    
    <item>
      <title>My GitHub Copilot Journey - Part 1: The First Ask</title>
      <link>https://www.timschaeps.be/post/github-copilot-journey-part-1-the-first-ask/</link>
      <pubDate>Mon, 27 Apr 2026 22:00:00 +0200</pubDate>
      
      <guid>https://www.timschaeps.be/post/github-copilot-journey-part-1-the-first-ask/</guid>
      <description>
        
          
            &lt;p&gt;&lt;strong&gt;I spent 20 minutes googling an error message. Then I asked an AI, got the answer in 8 seconds, and spent the next 10 minutes wondering why I hadn&#39;t done that sooner.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To me, this is a story about trust. Not the blind kind — the calibrated kind. The kind you build one correct answer at a time, and that honestly takes a bit of patience (but it&#39;s worth it, I promise 😊).&lt;/p&gt;
          
          
        
      </description>
    </item>
    
  </channel>
</rss>