AI & Automation

Building an AI Agent with AI: The Thursday Experiment

What happens when you use AI to build an AI? Mostly chaos. But also... progress.

March 22, 2026 · 9 min read

I'm going to tell you something that sounds like a tech bro fever dream: I'm building an AI agent called Thursday. And I'm building it with AI agents.

Yes, it's as meta as it sounds. Yes, I'm aware of the irony. And yes, it actually works — most of the time.

The Goal Isn't to Replace Me. It's to Clone My Decision-Making.

There's a narrative out there that AI is coming for everyone's jobs. I'm not here to argue that. I'm here to tell you about Thursday — the AI agent ecosystem I'm building to run SyncTech — and what the reality of "AI automation" actually looks like when you're in the trenches.

The goal is simple: teach the system how I work so I can automate more and more over time.

Not "set it and forget it." Not "AI does everything while I'm on a beach." The goal is a system that learns my processes, my standards, my judgment calls — and gradually takes on more responsibility as it earns trust.

Right now, I make all the calls. I review everything. But every week, Thursday gets a little smarter about what I'd do, and I steer a little less.

The Meta Twist: AI Building AI

Here's where it gets weird. My primary development workflow for Thursday involves Claude and a system called OpenClaw — which is essentially an AI agent framework that gives AI models access to tools, files, terminals, and the internet.

So picture this: I'm sitting at my desk, telling an AI agent to write code for another AI agent. That AI agent reads the codebase, understands the architecture, writes the feature, runs the tests, and commits the code. Then I review it. Sometimes I tell it to fix things. Sometimes it catches bugs I missed.

I'm not going to lie to you — the first time it worked end-to-end, I just sat there for a minute. It felt like teaching someone to ride a bike and then watching them enter the Tour de France.

The Team Behind Thursday

Thursday isn't one agent. It's a team of specialized agents, each with a role:

Dev agents pick up GitHub issues and generate pull requests. Each client project has its own dev agent that understands that codebase, that tech stack, those patterns.

A reviewer agent looks at every PR before I do. It checks for logic issues, security concerns, style consistency, test coverage. It's not replacing my review — it's doing the first pass so I can focus on the stuff that actually needs my brain.

A QA agent spins up preview environments and verifies that changes actually work. Not just "does it compile" — does it match the requirements? Does the button go where the button should go?

They work as a team. A GitHub issue comes in, the dev agent writes the code, the reviewer agent checks it, the QA agent verifies it, and the whole package lands on my desk for final approval. Sometimes I use one AI agent to review the code another AI agent wrote. I know. I KNOW. But it actually catches real issues — edge cases, missing error handling, performance concerns. It's like having a tireless, opinionated colleague who never takes lunch.
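The handoff chain above can be sketched as a simple pipeline. Everything here is illustrative — the agent names, the `Issue` shape, and the stage order are my reading of the workflow, not Thursday's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    title: str
    notes: list = field(default_factory=list)
    approved: bool = False

def dev_agent(issue: Issue) -> Issue:
    # Writes the code for the issue in that project's codebase.
    issue.notes.append(f"dev: implemented '{issue.title}'")
    return issue

def reviewer_agent(issue: Issue) -> Issue:
    # First pass: logic, security, style consistency, test coverage.
    issue.notes.append("review: first pass complete")
    return issue

def qa_agent(issue: Issue) -> Issue:
    # Spins up a preview environment and checks against requirements.
    issue.notes.append("qa: verified in preview")
    return issue

def human_approval(issue: Issue) -> Issue:
    # The final call stays with a person, for now.
    issue.approved = True
    return issue

PIPELINE = [dev_agent, reviewer_agent, qa_agent, human_approval]

def run(issue: Issue) -> Issue:
    for stage in PIPELINE:
        issue = stage(issue)
    return issue

done = run(Issue("add checkout button"))
print(done.approved, len(done.notes))  # True 3
```

The design point is that each stage only appends to the record and passes it along; nothing merges until the last stage, which is human.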

The Human in the Loop (For Now)

I want to emphasize that "for now." The human in the loop isn't a permanent philosophical stance — it's where we are in the trust-building process.

Right now, I review every meaningful PR. I approve every deployment. I make the final call on architecture decisions. That's not because AI can't do these things. It's because I haven't yet taught the system enough about my standards to trust it autonomously.

The end state? Confident autonomous reviews where AI knows when to flag me. Not "review everything" but "review the 15% that's actually tricky and let the rest flow."

We're not there yet. But we're closer every week.
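That end state is essentially a routing decision on a risk estimate. A minimal sketch, where both the score and the threshold are hypothetical numbers, not anything Thursday computes today:

```python
RISK_THRESHOLD = 0.3  # illustrative cutoff, tuned over time as trust grows

def route_pr(risk: float) -> str:
    """Route a PR by an agent-estimated risk score in [0, 1].

    The idea: most low-risk changes flow through autonomously,
    and only the genuinely tricky fraction lands on a human's desk.
    """
    return "flag_for_human" if risk >= RISK_THRESHOLD else "auto_approve"

print(route_pr(0.9))   # flag_for_human
print(route_pr(0.05))  # auto_approve
```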

The Database Incident (aka Why Permissions Are Earned)

Early on, I gave an agent too much permission in a development environment. It was supposed to clean up test data. Instead, it deleted records that were still being referenced by active orders.

In a dev database. Thank God.

I never give AI full read/write permission anymore. Ever. That incident lived in my nightmares for a week. Imagine if that had been production. Imagine explaining to a client that your AI deleted their customer data because it "thought" those records were stale.

Permissions are earned, not given. Every agent operates with the minimum access it needs, and I expand those permissions incrementally as the system proves it can be trusted.
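"Minimum access, deny by default" looks something like this in miniature. The agent names and resource grants are made up for illustration; the shape is the point:

```python
# Hypothetical per-agent grants: deny by default, widen only as trust is earned.
PERMISSIONS = {
    "dev_agent": {"repo": "read_write"},                 # no database access at all
    "reviewer_agent": {"repo": "read"},
    "qa_agent": {"repo": "read", "preview_db": "read"},  # never production
}

def allowed(agent: str, resource: str, action: str) -> bool:
    grant = PERMISSIONS.get(agent, {}).get(resource)
    if grant is None:
        return False  # anything not explicitly granted is denied
    if grant == "read_write":
        return True
    return grant == "read" and action == "read"

print(allowed("dev_agent", "prod_db", "write"))   # False
print(allowed("qa_agent", "preview_db", "read"))  # True
```

Expanding permissions is then an explicit edit to the manifest, not a side effect of an agent asking nicely.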

The Friction Tax

Here's the part nobody talks about in the AI hype posts: the constant friction.

I spend a non-trivial amount of my day saying "move on," "accept this," "that's good enough." AI agents are thorough to a fault. They want to optimize everything, refactor everything, test everything. Sometimes the right call is "ship it, it's fine."

AI will happily over-engineer something if you let it. It'll add abstractions you don't need, handle edge cases that'll never happen, and refactor perfectly fine code because it "could be cleaner." You need a human in the loop saying "that's good enough, ship it."

Building processes to give AI enough permission without giving it too much is genuinely hard. It's a calibration problem, and it changes with every project and every context. Every time I have to override an agent, that's data. Every time an agent makes the right call without me, that's progress.
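"Every override is data" can be made concrete with a rolling score. This `TrustTracker` is a hypothetical sketch of the calibration idea, not an actual Thursday component:

```python
from collections import deque

class TrustTracker:
    """Hypothetical rolling score of how often an agent's calls
    survive human review without an override."""

    def __init__(self, window: int = 50):
        self.outcomes = deque(maxlen=window)

    def record(self, overridden: bool) -> None:
        # An override counts against the agent; a clean pass counts for it.
        self.outcomes.append(0 if overridden else 1)

    def trust(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

t = TrustTracker()
for overridden in (False, False, True, False):
    t.record(overridden)
print(t.trust())  # 0.75
```

A score like this could feed the permission manifest or the review threshold, so trust expands from evidence rather than optimism.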

The Honest Take

AI is not replacing developers. I know that's not the hot take people want, but I've been a software engineer my entire career — it's not what I do, it's who I am — and I'm telling you: AI is the most powerful tool I've ever used. But it's a tool.

The developers who lean into AI will outpace the ones who don't. The ones who think AI replaces the need for experienced engineers? They're going to learn an expensive lesson.

And let's keep it real about hallucinations — sometimes the AI confidently writes code that calls APIs that don't exist, references libraries that were deprecated three years ago, or invents function signatures. You can't just trust and deploy. You have to verify.
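Some of that verification can even be automated. A cheap first check against hallucinated imports, assuming Python code and the standard library's `importlib`:

```python
import importlib.util

def missing_imports(modules: list[str]) -> list[str]:
    # Flag top-level modules that don't exist in the current environment --
    # catches an AI confidently importing a library that isn't there.
    return [m for m in modules if importlib.util.find_spec(m) is None]

print(missing_imports(["json", "totally_made_up_lib"]))  # ['totally_made_up_lib']
```

It won't catch invented function signatures or deprecated APIs — that still takes a human (or a test suite) actually running the code.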

The human in the loop isn't optional; that human is the whole point. Less steering over time is the trajectory. But steering? Always.

What's Next

I'm going to go deep on Thursday's architecture, the specific agent workflows, the failures, and the wins. The next post in this series will break down the actual process: how a GitHub issue becomes a deployed feature with minimal human intervention. The prompts, the guardrails, the feedback loops.

And if you're a founder who needs this kind of AI-augmented development power behind your product? That's literally what we do.


Darie Dorlus

Head of Tech, Entrepreneur & Software Engineer

Founder of SyncTech and Last Minute Bouquet. Co-founder of TrustDots. Building an AI-powered custom dev boutique and Thursday, the AI agent desktop app. Former engineering leadership at Gusto, Ultimate Software, Symbiose Technology, and Cendyn. Successfully failing at launching startups since 2013.

Ready to build something real?

SyncTech helps startups go from idea to MLP. We make the complex simple and the simple scalable.

Book a Free Consultation