OpenAI's Ghiblification of Data Harvesting
Amid the OpenAI "Ghiblify" Trend
Millions of individuals around the world have voluntarily given their personal photos to an untested and unscrupulous corporation. It is a tale as old as tech: OpenAI will eventually exploit this data for its own profit. The Ghibli-engine built by OpenAI is as ethically controversial as it is visually impressive, and it raises both artistic ownership and privacy issues.
While significant attention has been paid to the original Studio Ghibli's protest of OpenAI's unethical imitation of their intellectual product, less attention has been paid to how OpenAI is likely enriching itself by preying on the "Privacy Apathy" of its users.
Ghiblified Data Harvesting
What does OpenAI gain from users generating cute profile pictures and ironic illustrations for laughs on the internet? Quite simply: our data.
Time has shown that tech companies’ motivations are never altruistic. Beholden to shareholders and corporate bottom lines, these companies bury the truth within their Terms of Service. As OpenAI’s own Privacy Policy states, the company collects your data for multiple purposes, including:
"To improve and develop our Services and conduct research, for example to develop new product features..."
Make no mistake: OpenAI's Ghibli engine is about data harvesting.
As global privacy laws tighten, corporations have become cleverer at working around restrictions. In the EU, for example, when OpenAI scrapes images from social media it must justify the processing under a contested legal basis such as "legitimate interest" (Article 6(1)(f) of the GDPR). But when individuals voluntarily upload their own photos, OpenAI can instead point to consent (Article 6(1)(a)), a far sturdier legal footing. By enticing users to hand over their images themselves, OpenAI has Ghiblified its own data-harvesting machine, appealing to our natural affinity for art.
Once Surrendered, Privacy Cannot Be Regained
Startups and corporations have proven time and time again that they are poor stewards of private information in the long run.
Take 23andMe as a cautionary tale. The DNA-testing company built the world's largest genetic database from samples voluntarily surrendered by individuals curious about their heritage. The results were impressive, but also dangerous: police agencies around the world soon began seeking court orders compelling the company to help identify alleged criminals via DNA.
Even if we momentarily applaud those outcomes, the database is now up for sale as 23andMe faces bankruptcy.
What's in a Face?
As someone uploads a photo to be "Ghiblified," they might assume, or console themselves with the thought, that AI cannot really learn much from a single image. But this is naive.
AI today can reportedly:
Detect skin cancer from photos
Detect lies based on eye movements
Communicate via voice and text with near-human accuracy
If ChatGPT can read emotion from our words, what might it infer from a face? According to a widely cited (and widely contested) estimate, as much as 93% of emotional communication is non-verbal, carried by facial expressions, tone, and body language. For my part, I can't detect skin cancer, I've been fooled by liars, and I often struggle to hold in-person conversations.
"The eyes are a window to the soul..." So what does OpenAI see in yours?
Privacy Apathy and Our Humanity
Having critiqued OpenAI, we must now confront the broader issue: Privacy Apathy, the well-documented fatigue induced by constant digital intrusions into our personal lives.
From Facebook to Google Maps, Snapchat to Apple Pay... we are bombarded daily like a virtual DDoS attack on our privacy.
“Saying you don’t need privacy because you have nothing to hide is like saying you don’t need freedom of speech because you have nothing to say.” — Edward Snowden
If we surrender privacy out of apathy, we are surrendering ourselves to digital overlords. What preserves our humanity is the will to resist.
We can, and must, fight for privacy. We must resist Privacy Apathy.
And we should wholeheartedly reject this Ghiblified attempt by OpenAI to profit off its users.