Natural Stupidity

In the world of modern creative arts, artificial intelligence (AI) is an unavoidable topic. So, I won’t be avoiding it here. 

Across the internet and beyond, all manner of articles and videos have already discussed it, and there are lots of intriguing takes to see. However, all of these individual presentations feel to me like they’re missing important angles. I don’t think this is malice or foolishness on the part of the presenters, simply that the topic is too big, too fluid, and too nuanced to boil down to a single article, however good it may be. So, if I want to explore the topic, it’s going to need several bites at the cherry. This discussion of AI in creative work will be the topic of my Monday posts until I run out of things to explore.

My initial thought was to make this discussion of AI a separate blog entirely. I have no idea how long it will run, and I didn’t want it to drown out everything else by lumping it all together. However, the more I thought about it, the more I realised that it was central to how people are likely to view creativity going forward. Whether you like AI, hate AI, or haven’t decided what you think, the genie is out of the bottle. It’s going to be around, and it’s probably going to play some important role. So, I want to understand the whole picture better than I do. I also think it deserves to be discussed properly, at length, without the polemics and lynch mobs. Very little is entirely good or bad. I suspect this is also true of AI. Putting it in a Monday slot gives it room to breathe without letting it sprawl.

Note that I don’t have all the answers. Unlike my Thursday posts, which are the fruits of over 40 years of experience, my thoughts on the current version of AI haven’t properly coalesced. So this series is, even more than usual, a case of me learning alongside you. I’m sure that whatever I think I might want to discuss now, by the time you’ve had your say in the comments I’ll have a bunch more to ponder. In fact, I’m counting on it.

Also, as I dig into this topic, I expect to refine and change my views. That’s OK. It’s expected. Necessary, in fact. At the moment, my overall view is that I don’t know. It seems to me that both the fanboys and the nay-sayers have failed to prove their case. But I need to look at this more. There are aspects of AI that, on the face of it, I think are definitely bad. There are aspects of AI that I suspect have quite good applications. But as a whole, I don’t really know, and there’s a great deal of uncertainty in the middle, which makes AI as a topic fuzzy in my head.

Finally, critically, we need to remember that we’re very much at the start of this process. As with every other major technological change, it ain’t over till the fat lady sings. Currently, I don’t think she’s even on stage. 


4 Responses to Natural Stupidity

  1. Odysseas says:

    As always, the problem isn’t the AI itself, because that’s only a tool (for now…). It’s how people use it. And unfortunately, while a lot of the uses are wonderful, especially in science, most people use it to do the work for them, in fields where creativity and expertise are required. In my former job, I saw a lot of AI (ab)use exactly because there was a lack of both.

    I’m very curious to see how this affects the gaming industry as well (plenty has already been affected, at least on a visual level). Do you think it won’t be long before we see AI-written rulebooks? I’ve already seen people feed their design notes to AI to write them up, with the excuse “I’m not good at writing or explaining”, as if AI is.

  2. Quirkworthy says:

    As you say, it’s a tool, and therefore one assumes it can be abused as well as used reasonably. However, there are both narrow and broad questions here: how we got to this point, where it may be going as a toolset, and what is doable and actually being done with it.

    So far, I’ve seen loads of AI art in gaming, and some AI text in rulebooks, and more in lore. There will be stuff I haven’t noticed too, I’m sure.

    Uses in science and medicine are promising on the face of it, though AI’s hallucinations feel especially worrying in those fields. The peer review system may itself need reviewing as it’s going to be under increasing pressure.

    • Odysseas says:

      I am not so afraid for science, medicine, engineering, etc., because, just as in programming (where AI sees very widespread use), there will always be competent people to check for mistakes, hallucinations, and erroneous training data.

      Rather, it’s the creative fields I’m more worried about. As someone who has worked with words and concepts for quite some time, I see something more worrying than people using AI to write their stuff: people being either oblivious to or downright ecstatic over AI “works”. No level of AI effectiveness will ever make it as pervasive as the public’s lack of discernment will.

  3. Interesting topic. I literally sit at the crossroads of this one: I work for the ROTC department (US Army Officer program) but am surrounded by the English department.

    On one hand, the US Army has its own AI program with a “You WILL use this” mantra, whereas most people in the English department are vehemently against it, as they have had their own work “fed” into AI so others can use it to make their own stories without the effort.

    I know, that was about the people around me rather than how I feel. I am -very- hesitant about AI. Somewhat for the same reasons as my friends in the English department, but mainly because there is no clear, clean line of responsibility. When AI is given control over major decisions, it is too easy to say, “Well, the system says that is the best way to improve, so that is what we are going to do,” and not take responsibility.

    My go-to science fiction on this isn’t Terminator, but Dune. The Butlerian Jihad wasn’t started because AI, or thinking machines as they were called, was evil; it was started because AI was given control over everything to make life easier, so much so that it had no oversight. Dr. Butler was pregnant when she was in an accident and then a coma. While she was in the coma, the AI system scanned the unborn child and decided to abort the pregnancy, not because the child had sustained an injury or was a threat to the mother’s life, but because of some birth defects. Her horrifying realization of what had happened is what kicked off the whole revolt.

    This is the concern that I have. Obviously (or hopefully) things will never get to that level. But I do worry that AI will become so intrusive, and so lacking in oversight, that most people will lose their own intelligence, no longer seek to better themselves, and spend their lives reveling in pleasure.

    Thank you for coming to my TED Talk. 😉
