Between Blind Acceptance and Reflexive Rejection
The Studio Ghibli Craze and AI's Shared Playbook with Social Media
The Rundown↓
KNOW that OpenAI believes “image generation should be a primary capability of our language models.”
REALIZE that AI image generators are trained on copyrighted content and produce the kind of material that keeps users active on social media platforms.
EXPLORE OpenAI’s press release and the Tony Blair Institute’s AI Insight.
Details↓
Last week OpenAI released 4o Image Generation, a significant step forward in generative AI capabilities. They announced the achievement on their website:
At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. The result—image generation that is not only beautiful, but useful.
The image generator made waves after users began recreating their own photos in Hayao Miyazaki’s Studio Ghibli style and posting them online. Meanwhile, OpenAI is fighting multiple copyright lawsuits while claiming fair use in training its Large Language Models (LLMs).
On the heels of this latest update, OpenAI grew by 100 million users in the last month alone and just secured another $40 billion in funding, making it more valuable than companies like Samsung and McDonald’s.
Preface↓
The following is not some sort of high-and-mighty anti-artificial intelligence manifesto. I have generated AI images myself and use AI assistance in a handful of applications, from research to removing objects from our photos in Photoshop. It can be as fun and useful as it can be disappointing. As I’ve written before, I think it has great potential to serve us well if it is thoughtfully developed, properly stewarded, and ethically implemented. But at the same time, let’s not pretend this isn’t a gold rush under the banner of human flourishing.
Commentary↓
If I could boil Know Curtains down to one idea, it’s finding the sweet spot between blind acceptance and reflexive rejection of technology. The goal is helping you make informed decisions with a complete picture of the technologies you and your family adopt. And if you’ve ever tracked screen time, you know “adopt” is an appropriate verb choice.
As it relates to the story above, I implore you to take time to watch the videos from Driver’s Training for Social Media. Why? Because they will open your eyes to how big tech is following the same playbook for AI that it used for social media.
Social media exploded with voluntary social participation. Mark Zuckerberg created Facebook, but we enabled and empowered it. We flocked to it like a corner house in the neighborhood giving out piles of candy on Halloween—just as we have done with each subsequent social media platform.
Soon it consumed us and became the public square, the marketplace of ideas, and the main avenue of human expression—dominated by a small number of massive tech companies.
Our blind acceptance unleashed something that has yielded mixed results, something many now regret yet find difficult to walk away from. We are socially expected to partake in platforms designed to be brazenly addictive and perceived as necessary. We’re led to believe they’re too big to change… too entrenched to abandon. But social media platforms don’t exist without us. They need us more than we need them.
History is repeating itself with artificial intelligence. It seems the tech titans who flanked Trump are hoping once again for blind acceptance. With every new gimmick, maybe AI will get too big to change… too entrenched to abandon… too necessary to bridle with copyright restrictions and creator compensation.
Lost in last week’s Studio Ghibli craze was news that the New York Times can move forward with its copyright lawsuit against OpenAI and Microsoft (maybe OpenAI took PR timing cues from Meta). The Times alleges OpenAI used its stories without license or permission to train its LLMs. The same argument could be made on behalf of Studio Ghibli and co-founder Hayao Miyazaki, though they have yet to comment.
Keep in mind that OpenAI has licensing deals in place with publishers like the Associated Press, the Financial Times, and News Corp. In other words, OpenAI understands this is copyrighted work even as it argues fair use in court. According to the Wall Street Journal, OpenAI tried to enter into a licensing agreement with the New York Times, but it fell apart when they wanted to be absolved of some legal risk.
Tech giants like OpenAI have pitched AI development as the national security necessity of outpacing China, the hope of curing disease, and the promise of artificial general intelligence (when AI matches or surpasses human cognitive abilities), yet we’re given tools to turn ourselves into cartoons and cheat on homework.
The emphasis on AI-generated art and media is no accident. These are the first dominoes of global acceptance, diffusion, and profitability: low-hanging fruit for the masses. In its own press release, OpenAI states it has “long believed image generation should be a primary capability of our language models.” Key word: primary. It might be a fine distraction from the lack of progress on meaningful pursuits.
The wave of AI-generated Studio Ghibli images flooding the internet is a classic attention-driven social media strategy, leveraging trends and FOMO to get users to unwittingly market the product for OpenAI. It fuels a feedback loop where this “art” is displayed in the gallery of social media… both conveniently supplied by big tech. And it’s working.
OpenAI’s CEO, Sam Altman, sporting an AI-generated profile picture, announced on X that on Monday the company added one million users in a single hour. All the while, OpenAI reaps mountains of capital at the expense of artists, filmmakers, journalists, authors, and songwriters whose work is used without authorization or compensation to train its LLMs.
AI image generation is fun. It’s like candy on Halloween. I’ve used it myself. It’s reminiscent of Facebook’s release to the general public twenty years ago, but context is king. How artificial intelligence is implemented now and in the future is not a foregone conclusion, though powerful people want you to think it is. The jury’s still out, figuratively and, in some cases, eventually literally.
It’s no mystery the goal is rapid global adoption in all sectors of society. Read the opening chapter of yesterday’s Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI from the Tony Blair Institute for Global Change for a taste of what’s envisioned.
Artificial intelligence holds great potential. My hope is that it does lead to human flourishing, but let’s slow down, make sure creators are properly compensated (the money is there), and first ask ourselves… do we need AI in EVERY sector of society? Maybe we do. Maybe we don’t. From there we ask… to what degree?
Let’s find that sweet spot between blind acceptance and reflexive rejection to figure it out.
Postscript↓
Photo by cottonbro studio.
Like what you’re reading? Ready for Driver’s Training for Social Media? Support us with a “Behind the Curtains” annual subscription to gain access to everything we produce.