Art by Algorithm: Who Gets Credit When AI Creates?

“When the stakes are high—when someone’s freedom or financial well-being are on the line—what recourse do people actually have when an AI gets it wrong?” – Elina Treyger


With the growth of companies such as OpenAI, Midjourney, and IBM, the term AI (Artificial Intelligence) has become increasingly familiar to the public. Many people use AI to enhance their work, creativity, and research. Yet not everyone truly understands how it works. In essence, AI isn’t just one thing; it’s a broad term covering many technologies. At its core, an AI system learns from massive amounts of data, using complex algorithms to identify patterns and make decisions. The more data it processes, the “smarter” it becomes (Crawford, 2021).
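That idea of “learning patterns from data” can be illustrated with a toy example. This is not how large generative models actually work, but it is the same principle in miniature: a model starts out ignorant, compares its guesses against examples, and nudges its internal parameters until the pattern emerges.

```python
# Toy illustration of "learning from data": a one-parameter model
# discovers the hidden pattern y = 3x purely from examples.
# Real AI systems apply the same idea with billions of parameters.

data = [(1, 3), (2, 6), (3, 9), (4, 12)]  # examples of the pattern y = 3x

w = 0.0              # the model's single parameter, initially knowing nothing
learning_rate = 0.01

for step in range(1000):
    for x, y in data:
        error = w * x - y                 # how wrong the model is on this example
        w -= learning_rate * error * x    # nudge the parameter to shrink the error

print(round(w, 2))   # after training, w has converged to about 3.0
```

Each pass over the examples shrinks the error a little, and after enough repetitions the parameter settles on the underlying rule. The “more data, smarter model” effect the paragraph describes is this process scaled up enormously.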

Today, AI is changing everything: how we work, how we create art, and even how we define originality. Recently, AI-generated images imitating Studio Ghibli’s style flooded social media. While some admired the aesthetic, others raised ethical concerns, and fans and artists questioned whether AI should replicate a beloved studio’s work without permission. Who owns the rights to an AI’s output?

A screenshot from the OpenAI Developer Community thread “Concern over the use of the Studio Ghibli style”


In this blog, I will explore how AI brings exciting possibilities while also challenging the rules we rely on—especially in labor, creativity, and intellectual property. More importantly, how we choose to regulate these systems today will directly shape the future of the digital world we’re building.

The Rise of AI Art and the Changing Workforce

Traditionally, automation affected mostly blue-collar jobs. Machines replaced human labor in factories and manufacturing. But AI is different—it’s now transforming white-collar work too. Designers, programmers, lawyers, and analysts are seeing their fields evolve.

A central concern in the proliferation of AI-generated art is the use of existing human artworks to train these models without explicit consent from the original artists. This raises concerns over copyright and the value of human creativity. Legal experts point out that current laws struggle to address AI’s role in infringement. The question remains: How do we protect artists while fostering innovation? (Yavuz, 2024)

Midjourney image, from a prompt by Cameron Butler, via Wikimedia Commons


As AI spreads across industries, we must ask: Who gains, and who loses? The technology boosts efficiency and creates new opportunities. Yet it also disrupts jobs and creative traditions. Moving forward, we need smart policies that balance progress with fairness. The goal? A future where AI empowers—without leaving people behind.

The Creative Crisis: AI vs Human Artists

Powerful new tools like DALL-E, Microsoft Copilot, and Canva’s AI features are transforming how we create images. Type a descriptive prompt, and these systems generate stunning, original artwork in seconds. But this innovation comes with tough questions: Is AI art threatening human creativity? And who really owns the rights to machine-generated images?

At the heart of the controversy lies how these AI models learn. They’re trained on millions of existing artworks – often without the creators’ permission. Many artists feel violated, watching AI replicate their hard-earned styles with a few keystrokes. While legal experts debate whether this violates copyright law (Shoemaker, 2024), the ethical concerns are clear. Is this fair use of human creativity, or simply a new form of digital appropriation?

The impact goes beyond legal technicalities. For artists, discovering their work was used to train AI can feel deeply personal – like having their creative voice copied and commodified. There’s also real economic anxiety: as AI art improves, will businesses still hire human illustrators and designers? Many creatives worry about being priced out by machines that work faster and cheaper. This isn’t just about protecting individual artists, but also about preserving the soul of human creativity. As we embrace these remarkable technologies, we need thoughtful solutions that value both innovation and the artists who inspire it. The choices we make now will shape the future of art itself.

The Studio Ghibli AI Controversy: When Imitation Crosses the Line

The internet recently flooded with AI-generated images mimicking Studio Ghibli’s beloved animation style – and the backlash was immediate. Devoted fans and artists alike called out these digital replicas as hollow imitations, arguing they lack the soul and storytelling depth that make Miyazaki’s work so special.

A video discussing the issues behind AI-generated images in Studio Ghibli’s style


This firestorm highlights AI’s growing ethical dilemma: its ability to clone unique artistic styles without permission. But the problems run deeper than copyright. Many AI systems are trained on mountains of online data – including personal artwork and content – scraped without consent. This raises serious privacy concerns and potential violations of data protection laws. For instance, the European Union’s General Data Protection Regulation (GDPR) imposes strict guidelines on the collection and processing of personal data, and the indiscriminate scraping of online content for AI training purposes can lead to significant legal ramifications (Fazlioglu, 2023).

A collage of AI-generated Studio Ghibli–style images related to trending topics


Where exactly does “inspiration” end and “theft” begin? As AI grows more sophisticated at copying individual artists’ signatures (Shoemaker, 2024), we’re forced to confront uncomfortable questions about creative ownership in the digital age. The Ghibli controversy isn’t just about one studio – it’s a wake-up call about protecting artistic identity in an era of algorithmic reproduction. 

Copyright in the Age of AI: Who Owns Creative Work?

The debate over AI-generated content goes beyond artistic concerns—it strikes at the heart of copyright law itself. Current legislation was built for human creators, leaving AI in a legal gray area where questions of ownership, authorship, and accountability remain unresolved.

Can an AI “own” a work? If not, then who does? The user who entered the prompt? The company behind the AI? Or the countless artists whose work was used to train the model? Courts are just starting to confront these dilemmas, but in the meantime, real-world conflicts are already unfolding. For example, disputes have emerged around AI-generated music that mimics the style of famous artists, images that closely resemble copyrighted works, and AI-written texts that pull from copyrighted materials. This uncertainty undermines intellectual property law’s core principle: human authorship. Without clear legal standards, creators are left vulnerable, and unethical uses of AI continue unchecked.

As concerns about AI-generated content grow, artists and rights holders are starting to fight back, both legally and ethically. Several high-profile lawsuits have targeted the companies behind generative AI tools, accusing them of using copyrighted content without permission.

One of the most important cases is Getty Images v. Stability AI. Getty claims that Stability AI copied over 12 million of its images, including captions and metadata, without a license. These images were allegedly used to train the AI model Stable Diffusion (Vincent, 2023; Brittain, 2023).

An illustration from Getty Images’ lawsuit, showing an original photograph and a similar image (complete with Getty Images watermark) generated by Stable Diffusion. Image: Getty Images


This lawsuit matters because it challenges a common practice in the AI industry. Many AI models are trained on huge datasets scraped from the internet. The case raises a key question: does training an AI on copyrighted material count as copyright infringement, even if the final image is new? The answer could set a legal precedent. It may impact how companies train AI systems in the future. It could also force companies to get permission before using copyrighted data.

Beyond the courtroom, artists are also calling for stronger laws. They want transparency on how AI models are trained. They also want legal recognition for AI-generated or AI-assisted works. Some are pushing for a new category in copyright law.

This movement is about more than just money. It is about protecting creative work. It sends a clear message: creativity should not be taken without consent, and machines must not erase the value of human expression.

The Governance Gap: Ethics and Accountability in AI

When it comes to governing AI in the context of art and creativity, the challenges go far beyond simply regulating technology. Generative AI tools raise unique concerns that touch on ethics, law, and responsibility, especially because they often mimic human creativity without following the rules, or showing the respect for originality, that human creators do. To build a fair and sustainable future for both artists and AI developers, we need to think carefully about three core issues: what ethical AI actually looks like, whether governments are keeping up with the pace of change, and who should be held accountable when things go wrong. Each of these areas reveals just how complex and urgent the governance of AI-generated content has become.

Image by PashaIgnatov and Just_Super/Getty Images


Can AI Ever Be Ethical?

As AI-generated content becomes more powerful, the issue of ethics is becoming harder to ignore. What does it actually mean for AI to be ethical, especially when many models are trained on biased, unconsented, or copyrighted data? Image-generating tools like Stable Diffusion, DALL·E, and Midjourney rely on huge datasets scraped from the internet. Often, this includes artwork and content taken without permission. This practice doesn’t just challenge copyright laws—it raises serious ethical concerns.

Many AI companies have released their own “ethical AI” guidelines. But critics argue that these statements are often superficial. They are more about public image than real accountability. Without strong laws or independent oversight, companies are free to set their own rules. And in a race for profit and innovation, ethics can easily take a back seat. If we want truly ethical AI, we need more than promises. We need clear standards, systems for consent, and accountability that comes from outside the companies themselves (Crawford, 2021).

Are Governments Keeping Up?

Governments around the world are struggling to keep up with the rapid pace of AI development. So far, most efforts have been slow, fragmented, or lack legal force. The European Union has taken the lead with the proposed AI Act, which categorizes AI systems based on their level of risk. High-risk systems—including some generative AI tools—will be subject to stricter rules on transparency and human oversight (European Parliament, 2023). However, the act is still in development, and its enforcement has yet to be tested in real-world cases.

Australia has introduced a set of AI Ethics Principles, promoting fairness, privacy, and transparency in AI use (Australian Government, 2022). These guidelines offer direction but are not legally binding. In the United States, lawmakers have suggested several bills to regulate AI. But so far, there is no national law specifically targeting generative AI or image synthesis.

This lack of strong regulation creates a global governance gap. AI is advancing faster than governments can respond. In this vacuum, tech companies often make their own rules. That leaves creators, users, and the public exposed to decisions made in boardrooms—not democratically governed spaces.

Who is Accountable When Things Go Wrong?

One of the biggest challenges in AI governance is figuring out who is responsible when things go wrong. If an AI-generated image infringes on someone’s copyright, who is at fault? Is it the user who entered the prompt? The company that built the tool? Or is no one accountable because “the algorithm did it”? 

This confusion makes it easy for companies to deflect blame. When artists find their work was used to train an AI without permission, developers often point to automated systems or claim the data came from “open sources.” The problem gets worse with deepfakes and other harmful content. If someone’s reputation is damaged by an AI-generated image or video, tracing who is legally responsible becomes even harder.

Experts warn that without clear legal frameworks, we risk creating a world where accountability disappears. Blame will keep shifting, and victims will struggle to find justice. As RAND researchers argue, laws must evolve quickly to define who is liable before the damage from AI systems becomes too great to manage (RAND, 2024).

Conclusion

AI has brought new opportunities for creativity, efficiency, and innovation. But it has also disrupted the systems that protect artists, workers, and original ideas. From AI-generated images to legal battles over training data, one thing is clear: this is more than just new technology. It’s a shift in how we understand and manage creative work.

The laws and policies we have today weren’t designed for a world where machines can generate art or make decisions using massive datasets. As AI continues to evolve, we need updated rules that reflect this new reality. That means clear standards, stronger protections, and more accountability.

The future of AI and creativity is still being shaped. The decisions we make now about ethics, regulation, and responsibility will determine what that future looks like. Will AI support human creativity, or replace and exploit it? That choice is ours to make.

References

  1. Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 79–86).
  2. Crawford, K. (2021). Introduction. In The atlas of AI: Power, politics, and the planetary costs of artificial intelligence (pp. 1–21). Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t.3
  3. Yavuz, S. K. (2024, October 9). Can artists protect their work by suing AI companies? The Art Newspaper. https://www.theartnewspaper.com/2024/10/09/artists-lawsuit-artificial-intelligence-ethics-image-generation?utm_source=chatgpt.com
  4. Shoemaker, E. (2024). Is AI art theft? The moral foundations of copyright law in the context of AI image generation. Philosophy & Technology, 37(3). https://doi.org/10.1007/s13347-024-00797-x
  5. Sad Boyz Highlights. (2025, April 8). The Studio Ghibli AI problem [Video]. YouTube. https://www.youtube.com/watch?v=4J1yDs60fOc
  6. Husain, Z. (2025, March 27). Studio Ghibli-style AI images flood social media after ChatGPT update. Gulf News. https://gulfnews.com/entertainment/studio-ghibli-style-ai-images-flood-social-media-after-chatgpt-update-1.500074511
  7. Vincent, J. (2023, February 6). Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement. The Verge. https://www.theverge.com/2023/2/6/23587393/ai-art-copyright-lawsuit-getty-images-stable-diffusion
  8. Australian Government. (2022). Australia’s AI ethics principles. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles
  9. European Parliament. (2023, August 6). EU AI Act: First regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804
  10. Irving, D. (2024, May 9). When AI gets it wrong, will it be held accountable? RAND. https://www.rand.org/pubs/articles/2024/when-ai-gets-it-wrong-will-it-be-held-legally-accountable.html
