You've probably seen that new OpenAI report. It claims Claude now rivals or outperforms top-tier human freelancers in nearly half of professional tasks—across marketing, design, and even legal work. The study even suggests it's getting good at design aesthetics, not just execution.

As with every AI milestone, there's hype. We're skeptical of “100x productivity” headlines too. But you don't need to believe every stat to see what's happening: the baseline quality of AI output has gotten frighteningly high. For the past year, it was easy to dismiss AI creativity as clunky, soulless, or laughably off-base. That era's over. The bar has moved, and the old excuses don't hold up anymore.

The New Creative Divide

AI is no longer the bottleneck—humans are.

For years, it was easy to dismiss AI as a generator of slop: uncanny images, stiff copy, and outputs that felt more like parody than partnership. That defense is gone. The baseline quality of AI output is now high enough that, in many domains, it can rival or surpass skilled humans.

That shift forces a new kind of accountability. When a camera got good enough, we stopped blaming the hardware and started judging the photographer. The same thing is happening with AI. If the work is mediocre, the question isn’t “What can the model do?” but “How strong is the person directing it?”

The Great Blame Shift

Once tools become powerful and reliable, the limiting factor moves upstream:

  • From capability → to direction
  • From access to tools → to quality of taste
  • From “the AI is bad” → to “the brief and judgment are weak”

A strong creator can now:

  • Translate a nuanced brief into structured prompts
  • Iterate rapidly based on taste, not just novelty
  • Use AI to explore breadth, then apply judgment to narrow to the best

A weak creator just produces more of the same bad work—faster.

The Real Test Is You

The meaningful question is no longer, “Can AI make something?” It clearly can.

The question is:

Can you lead it?

Instead of asking an AI to “make a logo” or “write a blog post,” the real test is whether you can:

  • Provide context: history, audience, constraints, and goals
  • Define tone and standards: what good looks like—and what doesn’t
  • Give sharp feedback: not just “make it better,” but “make it more X and less Y, because…”

This is the new creative craft: not typing prompts, but exercising judgment.

Judgment as the Last Advantage

Tools are now commodities. Access is nearly universal. The differentiator is no longer whether you use AI, but how well you can guide it.

That’s the philosophy behind Winston: AI is the engine, but human judgment is the driver.

The right question for any creative partner isn’t:

“Do you use AI?”

It’s:

“How strong is the human judgment behind it?”

In a world where machines can generate almost anything, the rare skill isn’t creation on demand—it’s taste, discernment, and the courage to say no to 99% of what’s possible.
