In the world of modern creative arts, artificial intelligence (AI) is an unavoidable topic. So, I won’t be avoiding it here.
Across the internet and beyond, all manner of articles and videos have already discussed it, and there are lots of intriguing takes to see. However, all of these individual presentations feel to me like they’re missing important angles. I don’t think this is malice or foolishness on the part of the presenters, simply that the topic is too big, too fluid, and too nuanced to boil down to a single article, however good it may be. So, if I want to explore the topic, it’s going to need several bites at the cherry. This discussion of AI in creative work will be the topic of my Monday posts until I run out of things to explore.
My initial thought was to make this discussion of AI a separate blog entirely. I have no idea how long it will run, and I didn’t want it to drown out everything else if I lumped it all together. However, the more I thought about it, the more I realised that it was central to how people are likely to view creativity going forward. Whether you like AI, hate AI, or haven’t decided what you think, the genie is out of the bottle. It’s going to be around, and it’s probably going to play some important role. So, I want to understand the whole picture better than I do. I also think that it deserves to be discussed properly, at length, without the polemics and lynch mobs. Very little is entirely good or bad. I suspect this is also true of AI. Putting it in a Monday slot gives it room to breathe without letting it sprawl.
Note that I don’t have all the answers. Unlike my Thursday posts, which are the fruits of over 40 years of experience, my thoughts on the current version of AI haven’t properly coalesced. So this series is even more than usual a case of me learning alongside you. I’m sure that whatever I think I might want to discuss now, by the time you’ve had your say in the comments I’ll have a bunch more to ponder. In fact, I’m counting on it.
Also, as I dig into this topic, I expect to refine and change my views. That’s OK. It’s expected. Necessary, in fact. At the moment, my overall view is that I don’t know. Seems to me that both fanboys and naysayers have failed to prove their case. But I need to look at this more. There are aspects of AI that, on the face of it, I think are definitely bad. There are aspects of AI that I suspect have quite good applications. But as a whole, I don’t really know, and there’s a whole lot that’s unclear in the middle, which makes AI as a topic fuzzy in my head.
Finally, critically, we need to remember that we’re very much at the start of this process. As with every other major technological change, it ain’t over till the fat lady sings. Currently, I don’t think she’s even on stage.
As always, the problem isn’t the AI itself, because that’s only a tool (for now…). It’s how people use it. And unfortunately, while a lot of the uses are wonderful, especially in science, most people use it to do the work for them, in fields where creativity and expertise are required. In my former job, I saw a lot of AI (ab)use precisely because there was a lack of both.
I’m very curious to see how this affects the gaming industry as well (plenty has already been affected, at least on a visual level). How long do you think it will be before we see AI-written rulebooks? I’ve already seen people feed their design notes to AI to write them up, with the excuse “I’m not good at writing or explaining”, as if AI is.
Amazon is flooded with AI books, and quite a few games already make heavy use of AI. Foxpaw is such a game, and in the end quite mediocre, but it is interesting as a case study in why it turned out the way it did. There was a huge discussion about the fact that the game was claimed to be made by humans while most of it was done by AI.
Pure AI output is still usually obvious. That’ll change over time, but for now you need a fair amount of human in the mix to make it better than meh.
Of course, the humans-lying problem is neither an AI thing nor a new one.
If you know what to look for, it is very easy to spot. And still, a lot of what is sold to us as good results was actually worked on by clickworkers.
As you say, it’s a tool, and therefore one assumes it can be abused as well as used reasonably. However, there are both narrow and broad questions here: how we got here, where it may be going as a toolset, and what’s doable and being done with it.
So far, I’ve seen loads of AI art in gaming, and some AI text in rulebooks, and more in lore. There will be stuff I haven’t noticed too, I’m sure.
Uses in science and medicine are promising on the face of it, though AI’s hallucinations feel especially worrying in those fields. The peer review system may itself need reviewing as it’s going to be under increasing pressure.
I am not so afraid for science, medicine, engineering, etc., because, just as in the field of programming (where AI sees very widespread use), there will always be competent people to check for mistakes, hallucinations, and erroneous training data.
Rather, it’s the creative fields I’m more worried about. As someone who has worked with words and concepts for quite some time, I see something more worrying than people using AI to write their stuff: people being either oblivious to, or downright ecstatic over, AI “works”. No AI’s effectiveness will ever make it as pervasive as the public’s lack of discernment.
The public’s lack of discernment is nothing new, and not only an AI problem. But yes, it is an issue here too.
My concern about AI in the sciences is that the current oversight of human work is not great, and bad actors already get stuff past the checks and balances. AI is only going to make it easier to produce fraudulent content and do so in quantity. The problem isn’t AI instigating fraud, but facilitating it.
I mean, we’re a long way from AI instigating anything, at least as far as I know, conspiracy theories notwithstanding. But regarding the sciences, I guess the next decade or two will show whether universities implement stringent anti-AI policy (or find more wholesome uses for it? I’ve heard some examples that aren’t very impressive), or whether the “publish or perish” mentality of the Academic Paper Factory gets even worse.
If we’re relying on humans to do the right thing and not use AI to falsify at least superficially convincing-looking results, then we’re deluding ourselves. Without some form of rigorous oversight, the AI slop we hear so much about is going to flood the published academic world too.
At least, that’s my more dystopian angle.
I will hold out a tiny sliver of hope for people, as much as I love a good dystopia.
The tiny sliver of hope is what makes it all the more dramatic.
Even in the other fields it creates a lot of slop and damage, and in medicine there are already deaths that can be attributed to putting too much trust in AI.
Oh, I absolutely agree, and I’m not defending AI (ab)use at any point. Honestly, though, if people believe AI’s medical advice, they’re the same people who would google crappy medical sites as well…
True, to a point. Part of the problem is that AI is being built into so much stuff these days, with no opt-out button and no explanation that it’s AI behind the nice graphical interface and the friendly smiles in the (possibly AI-generated) images.
Interesting topic. I literally sit at the crossroads of this one: I work for the ROTC department (US Army Officer program) but am surrounded by the English department.
On one hand, the US Army has its own AI program with a “You WILL use this” mantra, whereas most people in the English department are vehemently against it, as they have had their own work “fed” into AI so others can use it to make their own stories without the effort.
I know, that was about the people around me and not how I feel. I am -very- hesitant about AI. Somewhat for the same reasons as my friends in the English department, but mainly because there is no clear, clean line of responsibility. When AI is given control over major decisions, it is too easy to say, “Well, the system says that is the best way to improve, so that is what we are going to do,” and not take responsibility.
My go-to science fiction on this isn’t Terminator, but Dune. The Butlerian Jihad wasn’t started because AI, or thinking machines as they were called, was evil; it was started because AI was given control over everything to make life easier, so much so that it had no oversight. Dr. Butler was pregnant when she was in an accident and fell into a coma. While she was in the coma, the AI system scanned the unborn child and decided to abort the pregnancy, not because the child had sustained injury or was a threat to the mother’s life, but because of some birth defects. Her horrifying realization of what had happened is what kicked off the whole revolt.
This is the concern that I have. Obviously (or hopefully) things will never get to this level. But I do worry that AI will become so intrusive, and so lacking in oversight, that most people will lose their own intelligence, no longer seek to better themselves, and spend their lives reveling in pleasure.
Thank you for coming to my TED Talk. 😉
Abdicating responsibility is one of the obvious challenges when you give humans an easy way to be lazy. I’m sure we’ll return to that idea.
I was listening to something yesterday that talked about the problems with making things easier all the time, and how research backs up common sense to show that your brain will atrophy when unchallenged. This isn’t just an AI problem, but it definitely is that too.
Well, the problem is that the ruling AI (the LLM) is a rather simple parrot that too often steals its data from the internet. That is, it is bound to be dumbed down more and more by the content that it and other LLMs flood the internet with. We can already observe this. The IP issue is the other side, affecting nearly every creative work; in its current form it is too often theft, and the monetization of stolen IP. This will end in a nasty legal fight one day, but many creative people will not be able to wait that long.
AI code currently bloats already bloated code even more and introduces way too many bugs. We can also observe this in the many apps now showing issues that used to be a rarity.
And then there is the resource hunger of AI, which is way beyond what is sustainable and damages many other areas (try getting RAM or an SSD at a good price, or watch as GPUs are increasingly sold to AI companies and no longer to consumers).
The whole topic is very interesting, but one thing is for sure: this is a bubble, it will burst, and it will cause a lot of damage. (And it even hurts the development of more intelligent AI systems.)
The biggest winners currently seem to be the folks selling picks and shovels, and the many scammers and spammers who push out more and more crap every day.
Lots of good observations there, and I’ll dig into them in due course. It does seem that scammers are likely to benefit as early adopters of this new tech. I’m certainly seeing a wave of AI-generated nonsense lately, and it doesn’t feel like an obvious fix is in the offing.
The biggest problem is that the platforms obliterated their filter teams and most of the other things that help avoid slop. It may increase interaction… but more and more it’s only the interaction of bots.
I wonder what the bots would talk about among themselves if they weren’t on public platforms.