I am writing this series of articles about creating media content while working with artificial intelligence and the ethical implications that arise. I am very interested in how creative work can combine with AI capabilities to produce material. I am referring primarily to video, image, audio, and literary composition, but I have interests in engineering and architecture as well. Sadly, music has turned out not to be a good fit for me, but you can't have everything.
My primary reason for writing these articles is that I find it enormously helpful to sort out my thoughts by writing them down. Very often, what seems brilliant in my mind turns out to be nonsense when given a material form. But learning what doesn’t work is as valuable as learning what does. I am posting these thoughts publicly to access the hive mind. I know I cannot work it all out by myself and I am very interested in what others may have to say on the thoughts I share.
AI is amazing, full stop. No quibbling from me about that, but there are some very serious issues that come with its use. Unlike previous technologies, which were handed to the user to do with as they wished, AI platforms carry the personal biases of their creators built in. Paper, printing presses, pens, and computers were tools in themselves and could be used as one wished, without any regard for the proclivities of their makers. All Shakespeare needed was paper and a goose quill. His work wasn't being monitored, assessed, or judged by the goose or the pulp mill. There are philosophical differences in the engineering of PC and Apple devices, but this has no bearing on what the writer writes or the editor edits. Technological constraints, for sure, but no third-party influences deciding the intrinsic nature of the work that is produced.
But with AI this isn’t the case. There are three major issues.
First, in essence, AI is a massive prediction machine. Early AI worked by predicting the next word from the words that came before, a slow, brute-force method. With the advent of transformer technology in 2017, the words in a given sentence could be processed in parallel, albeit with a much greater need for compute power. This made the modern AI era viable.
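To make the prediction idea concrete, here is a minimal sketch of that brute-force approach: count which word follows which in a corpus, then predict the most frequent successor. This is a toy illustration with an invented corpus, not how any of the current platforms actually work.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then predict the most frequent successor. Real models are vastly more
# sophisticated, but the core idea is still "predict the next token".

corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    if word not in successors:
        return "<unknown>"
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it followed "the" twice, "mat" only once
```

Everything a model like this "knows" comes from the counts, which is exactly why the choice of training data matters so much.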
But predictions are only as good as the quality of the information from which they are extrapolated. The old computing phrase "garbage in, garbage out" (GIGO) still applies. So, quite rightly, algorithms have to be constructed to mitigate this problem. But in constructing these algorithms there is a real danger of throwing the baby out with the bathwater.
Secondly, because of this, the base stratum of data on which predictions rest has to be sound. But we don't live in a purely objective world, so humans have to set the bedrock criteria that determine the validity of data. Again, this is a necessary step to make sure the algorithms can do their job properly. However, the humans in question come from a tiny, tiny subset of society.
When the foundations were laid for the current generation of AI platforms, we are talking about the engineering and design contributions of fewer than 10,000 people. In reality, the true figure is very much lower than that, as there were only so many decision makers with the technical capability to decide what constituted sound base information and what didn't.
Despite claims of objectivity and impartiality, the arbitrary biases of the creators are embedded at the most basic level of AI construction. As the systems become more complex, these errors are amplified and multiplied in unpredictable ways.
Thirdly, the attempts by the creators of these platforms to counter these biases by establishing guidelines can end up having not only unintended consequences but also a limiting and chilling effect on what creators are able to create.
Why chilling? Because as a creator using AI, the information you are extracting from the platform is not a pure and unalloyed reflection of reality but is determined, edited, and censored by a tiny coterie of individuals who hold a particular world view, one derived from the circumstances of their education and the reinforcing norms of the milieu they work in. They are not representative of the general culture at large, and certainly not of the multitude of cultures that differ in fundamental ways from that of the West.
For 99.9 percent of creative activity these biases have no effect on the output, so what's the big deal? Most artwork we engage with on a daily basis is created in the service of marketing and advertising. Some of it is bad, some of it is good, most of it is just OK (the ultimate insult to an artist). On that basis, there is nothing much to worry about. But what if you need to think what current social norms deem unthinkable?
The concentration camps were an abomination and a stain on humankind, but more than a few people thought they were a good idea. By investigating why they came to that conclusion, we may prevent it from ever happening again. But trying to explore that position with an AI is problematic. I am not suggesting you should try, but you can see why the case might arise.
To prove my point, I had Claude review this text, and this was the note I received back. "The concentration camp example, while making a point about AI limitations, might be too sensitive for the context." Says who?
I had ChatGPT review this article as well and this was the feedback. “The mention of the concentration camp example is jarring, and while it adds an intense perspective, it could alienate some readers depending on their sensitivities. It may be seen as overly provocative in the context of the article.”
If the very thought of the camps is upsetting - as it should be - then pretending they didn't happen is much worse. Evil, in fact. It is a kind of lunacy to insist that the lost lives of millions of people in the cruelest of circumstances shouldn't be talked about because - duh, my feelings.
Whose sensitivities am I supposed to be sparing? Who exactly would be offended by me saying that something that was terrible was terrible? That I should censor myself to spare the imagined feelings of phantasms is simply wrong. We are not dealing with reality. How I wish George Orwell could witness all this.
Anyway, this is all fodder for future posts. For the time being my case is proven. AI interferes with honest content creation. The question now is, to what extent?
On a more mundane note: I am trying to create my own personal website. I am not a coder, but not completely hopeless either. I have been using Claude, ChatGPT, Copilot, Gemini, and Perplexity. All of them were bad, which surprised me at first, but it shouldn't have once I considered the data they were trained on.
Now, my site is very simple, with a minimalist design. But none of the platforms could handle even the simple coding for the pages and their connections to other systems in the same way as the others, never mind in the way I specifically wanted. I consistently had to check code generated by one platform against another to make progress.
(BTW, Copilot was by far the worst for coding, and I abandoned it quite quickly. It kept introducing code that I didn't ask for. Since it was trained on Microsoft bloatware, bloatware is what I got.)
The message here is that a multitude of factors go into the training of these platforms. The two principal influences are what information the platform was trained on and then the weighting, decided by some human somewhere, of that information.
The fact that the platforms vary so considerably in their performance and capabilities comes down to the human decision-making that took place during their construction.
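To illustrate what that weighting decision can do, here is a hypothetical sketch using the same toy counting approach from earlier. The sources, texts, and weight values are all invented for the example; the point is only that a number chosen by a human changes what the machine predicts.

```python
from collections import Counter, defaultdict

# Hypothetical illustration: the same toy bigram counting as before, but each
# training source carries a human-chosen weight. Changing the weights changes
# the prediction, even though the underlying texts never change.

sources = [
    # (weight, text) - both values are invented for this example
    (1.0, "the press is free"),
    (3.0, "the press is biased"),
]

successors = defaultdict(Counter)
for weight, text in sources:
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        successors[current_word][next_word] += weight

# With weight 3.0 on the second source, "is" -> "biased" wins (3.0 vs 1.0).
# Set both weights to 1.0 and the tie has to be broken some other way -
# another human decision.
print(successors["is"].most_common(1)[0][0])  # "biased"
```

Scale that up to billions of documents and thousands of such decisions and you get platforms that differ as much as the ones I tested.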
Anyone who thinks there is going to be one platform to rule them all needs to get out more.
Ultimately, when using AI platforms to assist with genuine creative work, you have to assume that they cannot be completely trusted. For artists or creators this is a serious problem. Your paintbrush or video console will not lie to you. But AI certainly will.