Mea culpa: I have been making a huge assumption while working with AI platforms these last couple of years. I thought that AI was just a very handy tool, and on occasion, a creative co-pilot.
We are already co-creating with machines: writing articles and blog posts, creating images and videos, and producing all sorts of marketing content. But what we’re really dealing with is a deeply biased, occasionally delusional assistant whose outputs reflect the priorities and blind spots of its designers.
The challenge is to transmute flawed AI output into ethically sound, creatively authentic work. The question then becomes, how do we do this?
What Hallucination Really Means
It is easy to think of AI hallucinations as randomly occurring, sometimes spectacular falsehoods: spurious nonsense to be dealt with as it arises. But there is a more insidious level, where subtle fabrications pass as truth.
It’s not that AI tells malicious lies; it simply doesn’t know what truth is. As humans, we can find it hard enough to discern the truth of anything, so we must not be too harsh in our rush to judgment. AI is a human construct and will invariably reflect our foibles. The problem is that it predicts what sounds right rather than what is right, a process that produces distortions which can mislead and misrepresent while sounding completely plausible to the unwary reader or viewer.
Bias by Design
What we have to keep foremost in mind when using AI is that it is not neutral, and despite the best efforts of many very smart people, it is never likely to be. There are two major issues. The first is the inherent biases of the programmers, executives, and other gatekeepers involved in building these platforms; those biases and assumptions may eventually be refined away, but that is unlikely, certainly not in the short or medium term. The second is that the training materials are predominantly rubbish. As Timnit Gebru et al. point out in *Datasheets for Datasets*: “Of particular concern are recent examples showing that machine learning models can reproduce or amplify unwanted societal biases reflected in training datasets.” That is because the datasets most of the large AI models are trained on are full of toxic nonsense and are inherently untrustworthy.
This is not expert advice by any means. But here are just a few things I find useful to keep in mind when using these AI systems.
Is this accurate?
Just because something reads fluently and seems authoritative does not mean it is accurate; fluent text can easily hide hallucinated “facts,” misquotes, or distortions. The worst cases are when truth weaves in and out of mistruth: it sounds right at first pass but doesn’t stand up to closer examination. The problem is, who has the time for that kind of rigorous cross-checking? But if we are going to make important changes in our business or personal lives, the onus is on us to double-check and cross-check. Very much a pain, but very necessary.
What truths have been sanitised?
This softening of the truth is deeply pernicious. We know from previous articles in this series that the outputs of these platforms carry a strong element of bureaucratic Newspeak. It is a product of programming that demands mindless conformity and a strong wish not to trigger or offend real or imagined users. Only a committee could like this stuff, and it shows.
What part of this feels alive?
This is the antidote to the anodyne. To overcome blandness, write prompts full of life and amend the output to restore the life that has been drained from it. Left to themselves, AI systems will bore you to death: all passion and originality will be stripped out, and that is the grave of innovation.
There are certainly more things to be aware of, but these three questions should keep you on an interesting track. It is not so much about policing AI as about cultivating a way of working with it to produce interesting and creative work.
Background reading:
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). *On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?*. In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21)*.
Maynez, J., Narayan, S., Bohnet, B., & McDonald, R. (2020). *On Faithfulness and Factuality in Abstractive Summarization*. arXiv preprint: arXiv:2005.00661.
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). *Datasheets for Datasets*. arXiv preprint: arXiv:1803.09010.
Buolamwini, J., & Gebru, T. (2018). *Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification*. In *Proceedings of Machine Learning Research 81:77–91*.
Crawford, K. (2021). *Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence*. Yale University Press.