
September 01-19 || The Systems Thinker Trap: when thinking becomes more important than beginning


When your superpower becomes your trap

Picture this: you're deep in flow, building something beautiful. Your brain is literally tingling as connections form and systems reveal themselves. You feel brilliant. Time flies by. You're solving problems that maybe no one has solved before.
Three hours later, you surface with a "brain buzz" that's both exhilarating and exhausting. You've built something impressive. But you haven't actually finished anything.
This was me, four times in a row, with my Social GitHub project.

The seductive spiral of systems design

It started so innocently. I had forty files with summaries of conversations and meetings from collaboration projects. I wanted to build a living AI system that could do real-time "connecting the dots" — recognize patterns as projects develop, track ownership over time, perform meta-analyses of how different communities collaborate.
This wasn't just manually reading documents. This was the ambition to create an intelligent layer that could follow projects, detect evolution, and generate insights that help people understand their own collaboration patterns.
But then my systems brain went into overdrive.
"Oh wait," it whispered, "if we're building this live system, shouldn't we have meta-analysis of how the AI and I collaborate? And shouldn't that analysis be temporal — tracking patterns over time? Oh, and we need a dashboard to make this visible... And how do we validate the accuracy of AI-generated patterns?"
Before I knew it, I was three hours deep in dashboard architecture. Live status tracking. Automated tests for prompt effectiveness. Confidence scores for AI-generated insights.
All pretty clever stuff. Probably needed for the eventual system. But "Oh wow, okay, wait a minute," I thought at some point, "I'm still not actually working with those documents."

When AI becomes your conscience

Here's the weird thing: I had built my own intervention system without realizing it. I have what I call "mentor mode" — a specific prompt setup I've trained Claude with to recognize when I'm overthinking and help me connect with what I actually need.
During these system-building spirals, my AI mentor kept interrupting:
"Stop. You're thinking too far ahead now."
And then it asks one question that connects me to my feeling: "What do you need right now?"
Not a rational "Why are you overthinking?" but a question that reconnects me with my body. And that mentor voice kept chiming in while I was trying to figure out the Social GitHub system:
"Just start working with five of those summaries and see what emerges. Present it to the people in Doesburg."
"Yeah but," I argued back, "I also need to build in transparency and safety. The system needs to be universal, applicable to other places."
"Just try to keep it simple. Sit with those people and see what they need."
Classic systems thinking. Always zooming out to the meta-level instead of staying grounded in direct usefulness.
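
For the curious: the setup itself doesn't need to be fancy. Here's a rough sketch of the shape of it, assuming the Anthropic Python SDK. The prompt wording is illustrative, not my literal mentor prompt.

```python
# A rough sketch of "mentor mode": a standing system prompt whose only job
# is to interrupt over-systematizing and ask one grounding question.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

MENTOR_SYSTEM_PROMPT = """\
You are my mentor, not my co-architect.
When I start designing systems on top of systems, interrupt me.
Don't analyze why I'm overthinking. Ask one question:
"What do you need right now?"
Then point me at the smallest concrete step I could take today.
"""

def mentor_check(ramble: str) -> str:
    """Send a work-in-progress ramble to the mentor and return its grounding reply."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whichever model you have access to
        max_tokens=500,
        system=MENTOR_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": ramble}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(mentor_check(
        "I'm adding confidence scores and a live dashboard "
        "before I've actually read a single summary..."
    ))
```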

The eternal designer's dilemma

I recognize this pattern from my industrial design background. We call it feature creep. The creative brain is never satisfied with "good enough" — it always wants to expand, improve, make things more elegant.
I regularly coach students through this: "Hey, I think you're building on assumptions here. How could you break this down? What do you hope to learn and how can you test that?"
But when it's my own work, when I'm building something genuinely innovative like AI facilitation tools, that advice becomes surprisingly hard to follow.
So I thought: "Okay, wait a minute. I'm going to start over."
And before I knew it, the new system had grown even bigger than the first one, because I was naturally building on what I already knew. I personally think that's pretty cool. But it also meant I kept overshooting.
So I ended up starting over four times. Each iteration became increasingly expansive, more "complete," more systemically elegant.

The physical reality of overthinking

So what have I learned about my body when I'm over-systematizing? My brain literally tingles. It feels like I can feel the neurons making connections. It's a flow state that feels incredibly productive, like being very smart and very busy at the same time.
After a few hours of this, my brain feels "toasty" or "buzzy." Time disappears. I skip Pomodoro breaks because the momentum feels too valuable to interrupt.
"Shit, another 25 minutes gone. Okay, skip the break," I'd think. "Bam bam bam bam bam."
At the end of the day, I'm mentally exhausted but convinced I've done something remarkable.
Except I haven't. I've built the infrastructure to do remarkable things. But I've avoided the actual work.

Starting over four times to learn one lesson

Because I kept seeing nuggets, little chunks of gold, that motivated me to continue. The AI really could do that temporal analysis I had envisioned. I had an overview of everyone we'd spoken with, what their role was, and how their involvement changed over time. All compiled by the AI. Incredibly impressive.
Categorized by roles, by professionalism, by energy levels during different project phases... just incredible to see that this is possible. It proved that my vision of a live system that does "connecting the dots" wasn't just feasible, but could actually generate valuable insights that manual analysis would never deliver.
But I also saw things that demotivated me. Meeting analyses where people were missing from attendance lists. Or the AI had listed people as present who simply weren't there. Or had said: "This person made this decision" when it was a different person.
I see this often in meeting summaries: attribution is very often wrong. And that got the system wheels spinning again: "Okay, how can you prevent that? How do you build it so it still works, but without those kinds of errors?"
All very useful work. But in the end, still nothing was finished.
And every time my mentor mode said the same thing: "You're over-complicating again. Take five summaries, read them, share your findings. See if people find it useful."
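
In code terms, the version my mentor keeps pointing at is embarrassingly small. Something like this sketch, again assuming the Anthropic Python SDK; the folder name, prompt wording, and model name are placeholders, not my actual setup.

```python
# The "just start" version: five summaries, one question, no dashboard.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Any five summaries will do; the folder name is a placeholder.
summaries = [p.read_text() for p in sorted(Path("summaries").glob("*.md"))[:5]]

prompt = (
    "Here are five meeting summaries from one collaboration project.\n\n"
    + "\n\n---\n\n".join(summaries)
    + "\n\nWhat patterns do you notice across them? Who seems to own what, "
      "and what changes over time? Keep it short enough to share with the group."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1000,
    messages=[{"role": "user", "content": prompt}],
)

# No confidence scores, no status tracking: just read it and share it.
print(response.content[0].text)
```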

When systems thinking DOES work

Don't get me wrong — systems thinking is still my superpower in the right context. When I'm designing co-creation processes, when I'm helping groups find their hidden structures, when I'm building tools that others will use — then that ability to zoom out and see patterns is invaluable.
The Social GitHub concept itself came from systems thinking. The idea that communities need a living system that can track, analyze, and make shareable their local solutions and collaboration patterns so other places can build on what works — that's a systems insight that I think can really help people.
But systems thinking becomes a prison when it prevents me from testing my ideas in the messy, imperfect real world.

What I actually need (and maybe you do too)

I need people around me who help me stay concrete. Who say: "Okay Joost, what a fucking good idea. I see your vision, I get it. So what's the first step?" Who help me make that concrete.
And people who get excited about my systems thinking and then say: "Oh that's fucking cool. Let's try this now."
But most importantly: I need permission from myself to make and share things that aren't complete systems. To send experiments into the world that work for some people but not others. To build version 0.1 instead of perfecting the architecture for version 3.0.
Because I immediately think: "Isn't it a waste to make a GitHub repo now with a half-working live system? Super complex, people won't understand the ambition, they'll never experiment with it."
But maybe that's not necessary at all. Shipping the small things — even concept prototypes of ambitious systems — is also just about sparking other thoughts and gaining inspiration.
By sharing this tension publicly, I hope design students and makers think: "Okay, systems thinking is powerful, but overshooting with it doesn't necessarily help."
And for myself, it's valuable to see from a third-person perspective: yeah, it's just not there yet. You haven't even made something shareable, while you've been working on it for four, five full days.

The tension worth exploring

The same capacity that makes me good at designing systems can become the thing that prevents me from finishing anything.
The question isn't whether I should think systemically. The question is when systems thinking serves the work versus when it serves my need to feel in control.
What was the smallest thing I could have made and shared? What is "good enough" anyway?
I'm very critical about what "good enough" is, because I have a certain standard in my head. Because of that, I skip over things that other people might actually find useful. I'm not a good judge of what's valuable to others.
The most valuable system might just be the system that helps me recognize when I'm over-systematizing. My AI mentor mode was accidentally brilliant at this — it cut through my complicated reasoning and brought me back to my body, to what I actually needed.
The Social GitHub concept remains valuable — that vision of a live AI system that helps communities understand and share their own collaboration patterns. I recently shared it with a friend and he immediately got it too. It's not there yet, but it seems technically possible and I think it's socially valuable.
Maybe I should just write more about the concept to invite people to experiment with it.
Instead of first building the perfect system.
What would it look like to embrace "good enough" as a systems principle? To build completion and shipping into the design process from the very beginning? No idea what that looks like, but it's probably worth the experiment.

This article emerged from dialogue with a co-creative AI sparring partner. While the AI helped structure and draft based on our conversation, the core ideas and voice are mine. I carefully edited it to remain true to my intention.