
AI and the Creative Space
I recently asked ChatGPT whether my work had been scraped and used to train AI models. (NB: dedicated search tools indicate it has.)
Here’s part of the response I received:
Hey Roly — thanks for reaching out. That’s a very real and important question.
Here’s the honest situation:
If your work has been published online (on blogs, articles, books, websites, forums, etc.), especially if it’s publicly accessible without paywalls or passwords, there’s a chance it may have been scraped and used to train AI models, depending on when and where it was published.
A few key points:
Most AI models (like mine) are trained on a mixture of licensed data, publicly available data, and data created by human trainers. Public internet content up until 2023 is commonly included in large training datasets.
OpenAI (the org behind me) hasn’t disclosed the full list of sources used for training GPT models, but they’ve said they do not use subscription-based content (e.g., from paid sites) without permission.
If you’re a published author and your work is in widely accessible formats (e.g., online articles, books on public sites), there is a chance it was included in the training of models like mine in a non-attributable form—meaning the model doesn’t retain or “remember” your work specifically but might have learned from language patterns or general ideas.
After that, I asked, ‘Give Me 10 Reasons Why I Should Hate You.’
This was its answer:
10 Reasons to Hate AI
- Job Displacement – AI can automate roles, leading to job losses and forced career changes.
- Loss of Human Touch – In healthcare, education, or customer service, AI lacks empathy and authentic human connection.
- Bias and Discrimination – AI can reflect and amplify existing biases, resulting in unfair outcomes.
- Privacy Invasion – AI enables mass data collection and surveillance, often without informed consent.
- Deepfakes and Misinformation – It can generate convincing fakes, making it hard to trust what we see or hear.
- Lack of Accountability – When AI goes wrong, it’s unclear who’s to blame—the developer, the user, or the machine?
- Depersonalization of Society – Over-reliance on AI may distance us from real human interaction.
- Manipulative Algorithms – AI powers addictive social media and manipulative targeted advertising.
- Ethical Grey Areas – From warfare to surveillance, AI raises tough moral questions society isn’t fully ready to answer.
- Loss of Creativity – AI-generated content can dilute the value of human imagination and flood the world with generic works.
These are all valid points.
However, while point 4 touches on it, the list doesn’t specifically highlight the core issue for me and other creatives: AI steals thousands of hours of hard work and tears (I’m less sure about the blood and sweat). Copyright protects the creative expression of ideas, not the ideas themselves. So while ideas and imagination can’t be copyrighted, words on a page can, and in my case that’s well over 400,000 of them!
That’s the biggest reason I hate it.
And yes, the irony of using AI to assist in writing this article is not lost on me. But AI is here to stay, folks, and it can do amazing things. I could have asked, ‘Give Me 10 Reasons Why I Should Love AI’, and it would have easily generated convincing and credible responses.
I am not against AI, but if it’s going to coexist with artists, writers, and other creatives, we need to lay down some rules around data collection, remuneration, and, crucially, attribution.

I sent a letter to Mark Zuckerberg earlier this week telling him I’m joining a class action of creatives against data scraping. It will be interesting to see where it goes. I suspect nowhere.
The International Authors Forum (IAF) has developed a clear set of principles to guide how AI should interact with authors’ intellectual property. You can check them out via the following link.
I’d love to know your thoughts.
Roly
