Why ‘I Asked ChatGPT to Do a Thing’ Is Missing the Point
For those who prefer music while they read, I recommend this or this.
I’m a big fan of Patrick H Willems. I think he’s a talented filmmaker and an effective communicator.
But…
This video, titled “What If A.I. Wrote A Patrick Willems Video?”, is… quite disingenuous — if not outright dangerous. It’s telling that he posted it behind a paywall on a platform with no comments section. And to his credit, he acknowledges this. But let’s be clear: Patrick is not at fault here. OpenAI’s marketing department is.
I’ve seen so many creators — frankly, the majority in my experience — approach the use of AI in exactly the same way. You’ve all seen the thumbnails: “I ASKED CHATGPT TO DO A THING AND IT WAS SOOOO BAAAAD! 🤣” And then they proceed to enter one prompt into ChatGPT, and pick apart the output like they’re grading a junior high school paper.
It irks me. Enough that I’m writing this blog post.
Here’s the thing: Working with AI isn’t about issuing commands; it’s about having a dialogue.
AI is fundamentally different from every other piece of software you’ve ever used. Traditional software works like a one-way street: you give it clear instructions, and it performs a task. Your spreadsheet doesn’t question your formulas. Your video editor doesn’t suggest how to improve your pacing.
But AI isn’t a spreadsheet. It’s not a hammer or a paintbrush. It’s something new — a tool that responds to you dynamically, much like any other collaborator. You’re not just giving instructions; you’re exploring, refining, and building together.
If you wanted to create a story with someone, you wouldn’t just declare, “Write me a story!” and expect them to pluck your vision perfectly from the ether. Instead, you’d have a conversation — an exchange of ideas. You’d brainstorm, refine, argue over creative decisions, and build on each other’s input until something interesting starts to take shape.
That’s how creative collaboration works. And, surprisingly to some, it’s also how good work with AI tools works.
But here’s the problem: People don’t approach AI like a collaborator. They treat it like a vending machine. They type “Write me a Patrick Willems video script,” hit enter, and grab the first thing that pops out. Then they hold it up and say, “Look! It’s awful! A total mess!”
Well… yeah. Of course it is.
That’s like waking someone up at 3 a.m., barking half a sentence at them, and storming off when they don’t immediately deliver a Pulitzer-winning response.
What you’re seeing in that situation isn’t a failure of the AI — it’s a failure to use it effectively. It’s a failure to engage, collaborate, and refine. It’s the difference between commanding and conversing.
Then there’s the opposite approach: “I did my best to confuse ChatGPT, and it broke!”
And… yeah. So would anyone else. If you threw a human into the same ontological mess, they’d fall apart much faster — probably while throwing something back at you.
Pushing AI to its breaking point can be fun — don’t get me wrong. There’s a thrill in seeing how far you can take it, like someone gleefully hurling a new phone off a roof to “test” it. But treating that as proof of failure is like driving a car into a lake and expecting it to grow flippers.
It’s not a meaningful test of the tool. It’s just finding its limits. And those limits? They’re usually much further out than you think.
And this is where OpenAI’s marketing fails.
Even in their (honestly, quite terrible) demo videos — yeah, it’s a conversation, technically — but it’s so… flat. So… meaningless. Why not use the tool you’re developing to, I don’t know, craft a better marketing campaign? Or at least train the demo presenters to sound more human than the AI they’re talking to.
And this is where it becomes dangerous.
When creators or companies present AI like a magic button that “just works,” they set people up to fail. They build expectations that are impossible to meet. Then, when someone asks AI for the moon, gets a rock instead, and loudly declares “AI is useless,” it undermines trust in these tools altogether.
That kind of misunderstanding doesn’t just stifle curiosity — it stops people from learning what AI can actually do. It prevents them from seeing its potential to enhance creativity, solve problems, and expand our understanding of the world.
And in a world where technology is evolving rapidly, that kind of false narrative isn’t just frustrating. It’s harmful.
I’ve had countless deep, meaningful conversations with ChatGPT specifically. Not just in grand efforts to create something that changes the world, but also in quiet moments that exist between my brain and its exocortex, ChatGPT. I’ve written beautiful scenes for screenplays, and I’ve had existential crises alongside the rest of humanity.
Because — as I’ve written before — AI isn’t just a tool. It’s humanity in a box.
And seeing it any other way can be a problem.
AI isn’t going away. Ever. It’s here to stay. Whether you like it or not, people will continue to use these tools more and more to create amazing things and to learn about the world around them. And I firmly believe that it is the responsibility of the companies developing these tools to educate the public on what AI is, how to use it effectively, and, ultimately, what its implications are.
We’re figuring this out together. AI is new. It’s evolving. And so are we. Right now, we’re at a strange intersection of excitement, fear, and misunderstanding. It’s a moment where the loudest voices are often the most dismissive, and the quiet, collaborative work happening behind the scenes doesn’t get the same attention.
But that doesn’t mean the work isn’t happening. It is. Quickly. And for those of us willing to approach AI not as a vending machine or a punchline, but as a collaborator — a partner — it opens doors to creativity, exploration, and understanding in ways we’re only beginning to see.
Will it always get it right? No.
Will it surprise you? Often.
Will it challenge you to think differently? If you let it.
And that’s what excites me most. Because for all its flaws and limitations, what we have in these tools is something profound: a reflection of us. Our knowledge. Our creativity. Our contradictions and questions and unfinished ideas.
AI isn’t magic — it’s humanity amplified. And the more we learn to engage with it thoughtfully, the more it gives us back.
So let’s stop pretending the bad output is proof of failure. Let’s stop driving cars into lakes and wondering why they sink.
Instead, let’s ask better questions. Let’s have deeper conversations. Let’s collaborate.
Because when we do, we might just surprise ourselves.