Do it Again

Normally, I’d be advocating for iterating things gradually. Once you’ve got a rule in place, you just need to pare away any rough edges and polish the thing to a nice, smooth shine. It’s an editing process. A feedback-and-tweak process. A tried and tested process that works.

Most of the time.

Sometimes, there’s another approach you could try that doesn’t get talked about as much, but which you already do, sort of, just earlier in development. I think a variation of it is useful later as well.  

When you’re at the start, brainstorming ideas, testing this or that for fit with the whole, then you’re open to things being chucked out and replaced. It feels natural at this early stage, before things gel. 

While you could do exactly the brainstorming thing later, and throw out one rule for something entirely different, this isn’t what I’m suggesting. Instead of the whole rule, I’m asking you to consider only discarding the way you’ve expressed it. Let me see if I can explain this clearly. 

You’ve got a game on the go, and you have a draft rulebook. So far, so good. When you test it, there’s a bit of the rules that players keep tripping over. You’re sure the rule works in practice, it’s just that it’s hard to explain clearly on paper. We’ve all been there. It’s not the rule that’s a problem, it’s conveying it clearly. Folk might get it when you’re on hand to explain, it’s just that the words on the page alone leave them unsure. 

The normal approach would be to edit and tweak until it worked, and this is certainly an option that usually fixes things. However, sometimes you’ll have more success with replacing it entirely. Just take a blank page and write the whole section again from memory. This is important. Don’t refer to the previous version when you’re writing its replacement (they get jealous). There are a couple of key advantages baked into your subconscious the second (or third) time you write a piece:

  • You’ve already done this before, so you’re more practiced. Experience helps. 
  • You’ve seen the objections and the difficulties with the first one, so you intuitively write to mitigate those problems. Probably the critical element here.

Now this idea isn’t a replacement for iterative improvement, it’s an addition, an option. Typically, I use this when I’ve written something, done a few rounds of iterations and editing, and it’s just not settling. So, I rewrite the piece and iterate over that new version. It’s not a guaranteed improvement, though I’d say that for me it’s getting close. And, if it really doesn’t help, you can always try again and write a third version, or simply revert to where you were. You lose very little trying this, and I’ve found it a great help to get a tricky section moving again. 

Note that this works with writing stories or non-fiction as well as rules, and it’s something visual artists do all the time. That’s what erasers are for. I expect that it works in music and coding and yodelling and everything else creative too. It’s just the practice of saying to yourself that this isn’t working, so try again. You like the story beat or the visual in your head. Just take another stab at expressing it. All anyone will care about in the end is that the final version is clear. So do what you have to do to get there. 

I was using this approach yesterday, on a bigger scale and for a different reason, and it works there too. I’d written a game some years ago, made a prototype, played it, and it was fine. Not terrible, but not great either. That’s not good enough, so I iterated and tweaked it as you normally would. I still wasn’t happy with it. The game worked and I liked the idea, I just thought it could be better expressed. So, I left it to marinate.

Yesterday, something made me think of it again, so I started tinkering. And instead of going back to my old version, I took the same spark I had originally, the same images in my head, and wrote a new game from scratch. This is the same principle of rewriting a rule or a paragraph in a story, just writ larger. 

Now I’ve not had time to make a prototype yet, so it’s not been on the table. I am, however, feeling pretty confident that this is a cleaner and slicker approach than before. To be honest, I can’t remember all the mechanics I used last time, and that’s unimportant. The whole point is to make the end result great, and we do whatever it takes. In this case, I took a tool I normally use for something small and I changed the scale it applied to. I wasn’t replacing a single rule that was unclear, I was trying to express the whole idea of the game in a more exciting and fun way. Time will tell if that worked. 

As some of you may be thinking, this is all related to murdering your darlings, which I’ve written about before, though it’s not quite the same. At least, not in my head. Anyway, I hope that’s put some ideas in yours, and that you find it helpful. Let me know how you get on.

Posted in Random Thoughts | Tagged | 6 Comments

AI and Me

I’ve been writing about AI for a while, and there are more AI topics I could cover. However, from my perspective, I feel like I’ve done enough and am starting to repeat myself. I also feel that my ideas on the topic have gelled into a usable form now, at least in terms of my own work, so that thinking is what I’m going to share today. Then I’m done with AI discussions until something new occurs to me. Back to more obviously gaming-related stuff on a Monday.

So where have I got to?

Well, I think the following statements are true about our current crop of “creative” LLM/AI tools, and here are my thoughts on each point:

Point 1: It’s here to stay whether I like it or not.

Therefore, I should have a considered opinion about it, even if it ends up being to ignore it. 

Point 2: It’s everywhere.

I don’t have to use it.

Point 3: Raw AI output can be competent, but is generally unremarkable. It needs (often extensive) additional work from a human to bring it to a high-quality professional standard. 

This may change, and I may look again if it does. Other people may also have different standards than I do for what constitutes an acceptable professional quality, and that’s their choice. I’m happy with my standards where they are.  

Point 4: Once the prompts have been refined (which may take a long time), AI output is often suggested to me as a solid basis to work over. 

Most people I know who use AI use it like this. In a way, it’s like having a very fast junior to do the foundational work. Personally, I want the best foundations possible for a project, as everything is built on top of them. If I know in advance that I’m going to have to overwrite their poor work anyway, then why not do it myself in the first place? I feel like doing it myself will give a faster and better-quality end result once you factor in all the remedial work, rather than looking only at how fast AI produces the initial output. As the saying goes, “If you don’t have time to do it right, when will you have time to do it over?”

Point 5: AI can’t copy what it hasn’t previously stolen, so it cannot innovate. 

Average isn’t good enough. And, if I have to bring the innovation myself, why don’t I bake it in from the start? Then there’s the ethical question of using a tool that’s based on such a blatant moral vacuum. 

Point 6: AI can mimic my voice, but it isn’t me. 

I create stories, experiences, and new worlds to play in. I do things that interest me, that make me smile, and which aren’t the same as dozens I’ve seen before unless there’s a reason for them to be, in which case I’ll be doing something else different. Their faults and excitements are products of my experience and my character, what amuses me, or terrifies me. Working on my ideas is far more interesting than working on someone else’s. So why would I want to hand off a chunk of that fascinating process to some software? Baffles me. “Here, have a big slice of my fun. No, no, I’d rather do the boring tidying up your mess at the end. You do the exciting bit.”  Now there’s an idea that does not compute. 

Overall, AI isn’t for me. Not in its current form, not with my current workflow, not until it’s offering a benefit that comes without so many downsides. 

I know people who use it occasionally, and a few who lean more heavily upon it, and that works for them. I’m not evangelical about people not using it. I can see how it works in some use cases. You choose what works for you. 

Critical for my own decision is the fact that I’m not required to use it by my boss. I work freelance, and if someone hires me, they’re presumably after my 50+ years of writing and game design experience rather than any nascent prompt-writing skills. If your boss decides that your job now needs to integrate AI, then you don’t have the same choice. 

Other than that, the most important aspects of this question for me are around skill and emotion. Delegating the creative part of the process means dulling my skill and missing out on the fun. Neither feels like a benefit.

Asking someone or something else to do the work means that I don’t learn and my skills atrophy. As true for AI as it is for delegating anything else. I learn to delegate, not to do the thing. And here, doing the thing is infinitely more fun than learning to delegate. I love the endless challenge of game design in all its many permutations. I think about it all the time. As far as I’m concerned, AI can’t have that fun. It’s all mine. 

The main potential upside I see of using AI is that it arguably does some of the work faster, which is an economic argument rather than a creative one. I say “arguably” because by the time I’ve gone over its output and brought it up to the standard I’m happy with, I’m not sure that it saves me any time. I’ve worked with many less experienced people over the years, and polishing the work of others is both less interesting and often no faster than starting from a clean sheet.

If it’s a human junior, then part of the process is mentoring and teaching them where they could improve and how to do so. I’m happy to do that as it’s a rewarding process. Fixing AI’s failings is not. 

In the end, I’m unconvinced by the time-saving claims, and have no interest in letting my skills die or foregoing the joys of creating my own stories. So, thank you very much Mr Billionaire, but I won’t be using your AI.


Boxes Full of Air

I live in a continual state of trying to make more space on my game shelves. This isn’t for any real lack of shelves (though another couple of rooms full would be nice, thanks), it’s because I have a lot of games. I don’t think I’m alone in this quest, and regardless of how many games you have, it’s not really about numbers, it’s about numbers compared to your room to store them. Whether you have one shelf or a whole library, it’s volume that counts.

In this regard, the most infuriating, most egregious sin is games that have boxes with almost nothing in them. This is disrespectful of my space. If you want me to keep the game you made, it’ll need to live on a shelf. The less space it takes up, the less likely I am to ask it to justify its continued presence. Presumably, more people keeping and enjoying your game is a Good Thing, right? So why not consider this question when it comes to designing your box?

You could argue that the empty box is a sign of confidence that new expansions will need the space, and I think that’s sometimes true. In fact, I opened a new game yesterday, and for some reason that’s exactly the impression I was left with by the small amount of room that remained. Can’t put my finger on why I thought that. Definitely did though. Jisogi, in case you were wondering.

The usual situation is that a game ships with the box fully closed, and when you punch all the counters and organise it properly, there’s a lot of spare space inside. The answer to this is simple: don’t close the box when you shrink-wrap it. I’ve had several games come with the box lid raised by a stack of punchboards, and once they’re sorted out the lid closes nicely on a comfortably full box. That’s efficient, thoughtful, and it shows that you care both about how slick your product feels, and my shelf space. I like people who care.

I don’t like being sold a box full of air that takes up three times the room on my shelf that it needs to. This is a solid strike against something when I’m organising my shelves and deciding what stays and what goes. The poster child for this nonsense has to be 1066, Tears for Many Mothers. It’s even worse if you remove the card insert (which isn’t fooling anyone).

This sin of selling air masquerading as game used to be far more prevalent than it is today, mainly for the simple reason that size mattered for sales. It wasn’t my shelves that the publishers cared about, but those of my FLGS. People bought games in person, in a shop, where they could pick up the box. How your game was stacked on a shelf mattered, and boxes were designed to look good in multiple presentations. On a related note, it’s also why backs of boxes have changed over the years. Anyway, these days, you probably order from an image online, and the thumbnail’s the same size regardless of the real box. 

This bloaty approach to boxes seems to survive mostly in smaller games these days, presumably trying to avoid disappearing into the jumble of card decks at the back of a shelf. They’re priced to be impulse purchases, but only if someone sees them. And if you’re priced low and look big, then maybe folk will think you’re a bargain, though that’s a double-edged sword. 

Giant, sprawly, miniature-heavy, campaign games of the sort that Kickstarter and GF facilitated have a different challenge when it comes to earning their shelf space. They’re generally OK in their use of space inside any given box, they just bloat over half a dozen giant ones. Sure, it’s all full of stuff, but there’s a heck of a lot of it. There are several reasons why I’m more reluctant to buy new games if they’re huge and sprawly. Partly because they’re the cost of half a dozen other games (which I might enjoy more overall), partly because they’re so hard to get to the table, but also because they take up such a silly amount of space. Of course, the huge campaign monstrosities with buckets of miniatures can generate an experience that smaller games struggle to or simply can’t, so there’s that to include in the mix. But I digress…

Overall, in the internet age, I’m buying less air. That’s good. Less wasteful all round. 

As you’d expect, there are some companies who seem to care more about this than others, and I always take that as a good sign. They’re thinking, literally, beyond the box itself. Beyond their immediate packaging challenges of getting it to your table, and into its actual life beyond delivery. I’d argue that this is all part of the gesamtkunstwerk idea that I bang on about, and I applaud it. Typically, if they care about this, then they care about other details too. In other words, it’s a green flag. A good example of this is Garphill Games. While they don’t always get it right, they tend towards smaller boxes that are packed to the brim. Only one of many good things about their range. 

My takeaway from this rambling is that games are experienced in more ways than only in the mechanics, on the tabletop. They’re repositories of memories and nostalgia as well as being the results of understanding technical aspects of offset litho printing or sculpting for steel moulds. How your game design will be stored is part of the puzzle too. Just because it isn’t especially cool and exciting doesn’t mean that it can be ignored.

And, you never know, there may even be a few nerds like me who see your art there too. 


Where Does This End?

I’ve been thinking a lot about the different audiences and use cases for LLMs, and the various creative endeavours that they’re seeking to change.

From the POV of the billionaires at the top of these companies, actual creative professionals are a rounding error. A loud one, at times, but not one to overly concern themselves with once the media has wandered off and all you’re trying to do is make your first trillion. 

Currently, very few of the AI products are making a profit. Most are haemorrhaging cash at an impressive rate. Obviously, this isn’t a business plan that can go on indefinitely. So it won’t. While there’s lots of talk about a bubble, my expectation is that most will simply adapt and survive. I’m already hearing of cancellations of some giant data centres which were going to hoover up colossal amounts more money, as LLMs evolve into more streamlined versions. Smaller versions, SLMs, perhaps. And expect to see a lot more of companies using S/LLMs in concert with other AI structures, acting as collections of systems fronted by a single AI agent to interface with the wetware.

I see “creative” AI being used in two main ways now, and this will continue. The first is a minority approach, but possibly a modestly lucrative one. This is with creative professionals using it as part of their pipeline. It’s not a complete replacement for animation or code departments. However, it can do some of the basic stuff and with skilled professionals (humans) to check, filter, and adapt the output, it’ll get better over time. I don’t see the need for human oversight going away soon (outside the AI company’s own advertising). The problem of this disenfranchising the new folk in each field is something that will be dealt with by companies either drinking the AI Kool Aid and relying entirely on AI as it improves, or by training their own cadre internally. We’ll see both.

The big chunk of users now, and probably indefinitely, will be unskilled and uncreative people who’d like to be more so and who lack the time or the inclination to actually learn the skill. Humans have always wanted a short cut, and we see this in every other field. AI is nothing new in that regard. Personally, I see this as an extension of prolefeed or soma. Mixing my SF authors here. It’s something to keep folk occupied. It’s not a way to create new masterworks. Not in this use case. 

The big question is how this will be monetised, as the bubble can’t be sustained indefinitely. My guess is that they’ll attack both ends of the puzzle by reducing costs as iterations increase efficiency, and raise income by adding AI to ever more unnecessary situations and getting people to pay subscriptions for it. You’ll have seen this all over the place. 

Of course, the bubble may pop before they balance the books. If that happens, the tech will just change hands, the debts will be written off, and a new group of AI proponents will take the helms of the next wave of AI companies. It’s not going away. 

Personally, I’ve done enough pondering on this topic now. As I said at the start of this series of rambles, I wasn’t sure how I thought about the whole thing, and writing these articles and reading your comments would help me frame it more clearly in my mind. Well, I’ve done that now, so next week will be my conclusion. At least for the time being. Then you can look forward to me getting back to pontificating about games and art and writing. I bet you can’t wait. 


Roll & Move Doesn’t Have to Suck

Roll & move is a mechanic with a bad reputation. It’s widely seen as a simplistic rule that’s found only in old or rubbish games that aren’t for serious gamers. I’m here to suggest that roll & move isn’t bad, it’s just misunderstood and misused.

The classic roll & move games have you rolling a dice and then moving exactly that distance along a one-way track. On your arrival, you must execute whatever that space dictates. There’s definitely a problem here when it comes to engaging gameplay, but I’d argue that it’s not the mechanic, it’s the way it’s been terribly applied. And, just because a mechanic is traditionally executed poorly¹ doesn’t mean it can’t be executed well.

Agency. Agency is the key.

The traditional application of roll & move takes away all the player’s agency, and this is why it sucks. Not because you roll a dice and move, but because you have no choice about any of it. To illustrate this point, and show how roll & move as a mechanic can be better used, let’s try a thought experiment. Let’s imagine a non-sucky roll & move game. 

Start with the roll bit. Let’s keep the restriction that we must move as many spaces as we roll, no more nor less. What can we do from there? How about we roll two dice instead of one, and let the player choose which one to use? This is a very simple change, and only one of many ways we could allow the player to mitigate their dice roll. However, even this tiny change gives the player something to think about. It starts to give them a little agency back. 

Now the value of being able to choose or modify your result rests on two things. The first is that the goal of the game is something other than a pure race (otherwise more is always better and it’s not really a choice). Actually, we could even keep the race idea as long as we tweak it a bit. Let’s say that you can only cross the finish line if you’ve collected 3 each of 4 different fruit from those scattered about the board. Now there’s a reason to go to space A and not space B on top of the general movement towards the end. 

Our second reason why the choice of roll matters is that each potential landing space does something different. In our example, let’s say each space allows you to collect a different fruit. This is easily done on our imaginary board.

Speaking of the board, we can do even better. Why not allow the player more than one route? An open grid with movement in any direction is an extreme option, but even if we stick to a more traditional path format we can add branches and junctions to allow the player more options for their movement. Maybe we even let them move in either direction along each path, picking one direction each turn. Now, instead of no choice at all when they roll their dice, they’ve got the choice of which dice to use, and each one could take them to multiple end spaces, each of which does something different as well as moving them nearer or further from the finish line. With a well-designed path system this could give them half a dozen options for each roll, which is already quite a bit of agency and choice. And, all these changes have been small, practical, and easily implemented.
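For the curious, the thought experiment above is easy to sketch in code. The snippet below is a minimal illustration, not a real game: the board, space names, and connections are all invented for the example. It models a branching, bidirectional track as a small graph, finds every space reachable with an exact dice result, and lets the player pick between two dice.

```python
import random

# Hypothetical example board: each space maps to its neighbours.
# Paths are bidirectional, and junctions provide branching routes.
BOARD = {
    "A": ["B"],
    "B": ["A", "C", "D"],   # junction: branch towards C or D
    "C": ["B", "E"],
    "D": ["B", "E"],
    "E": ["C", "D", "F"],
    "F": ["E"],
}

def destinations(board, start, dist, prev=None):
    """All spaces reachable by moving exactly `dist` steps.

    Movement may go either way along a path, but an immediate
    U-turn is forbidden so a move always covers real ground.
    """
    if dist == 0:
        return {start}
    found = set()
    for nxt in board[start]:
        if nxt != prev:  # no doubling straight back
            found |= destinations(board, nxt, dist - 1, prev=start)
    return found

def options(board, start):
    """Roll two dice; the player may use either result."""
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    return {die: destinations(board, start, die) for die in {d1, d2}}
```

Counting the spaces across both dice gives a rough measure of the agency each roll offers: the traditional one-die, one-way track always yields exactly one option, while even this tiny branching board usually yields several.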

Of course, there’s a bunch more you could layer on, but I’m going to stop here. My point is that roll & move isn’t a bad mechanic per se, it’s just generally used with little imagination and skill. As I think even these simple changes show, a roll & move game could easily be developed into something interesting. 

Before I leave you to ponder this further, I’d like to recognise that there are a few good games that have roll & move at their heart; they’re just rare beasts. I think that’s a shame. 

Notes

1. When I say “poorly”, that’s with regard to the narrow definition of “doing well” as being fun for gamers. There are other ways in which the traditional approach is actually a good thing, but I’ll come back to that another time.


Prompt Critical

Prompts = delegation, nothing more. 

Getting a professional to fix my pipes does not make me a plumber. In a similar way, writing a prompt does not make me an artist. Or an author. Or a coder. Or anything other than someone who understands how to ask for what they’d like. This is a skill that we learn as small children, and while it can always be refined to fit a particular circumstance better, it’s hardly a major achievement. 

So, when I read folk claiming to be an artist because they wrote a prompt for Midjourney or Nano Banana, then I’m reminded of the concept of stolen valour. You’re not an artist, you’re someone who asked nicely.  

Now, it’s a different situation if you are an actual artist and you’re using the AI generated output as one part in your process and it’s going to get modified using your actual skill at art. In that use case, AI generated output is most similar to the raw material of photobashing, and is a process I expect is already widespread in professional concepting. It’s a tool. Artists use tools. They just don’t expect the paintbrush to do the whole picture without any more input than being asked to do so. 

Delegating your work is something we do all the time. However, when we hear or experience managers claiming the delegated work as their own then we all find that wrong. Same here. Delegated work isn’t your work. 

And, if you’re just flaunting the unmodified AI output as your own, as I’ve seen done repeatedly, then you are, at the absolute best, masquerading as an untrained art director. Most likely you’re just deluding yourself and lying to everyone else. 

Do better. 

Speaking of doing better, the other thing that delegating to AI does is atrophy any skills you did have. It’s obvious, and nothing to do with AI per se. Just like anything else, if you delegate the work to someone else, then the someone else is the one who learns and upskills themselves. Not you. 

Learning any skill takes time, effort, and lots of practice. Delegating is not a short cut, regardless of whether it’s to AI or a human. You don’t gain the skill. If anything, your existing skills degrade from lack of use. 

So, if you want to learn, you need to do. For yourself. The hard way. 


What Does Your Worldbuilding Mean?

Who is responsible for deciding what your worldbuilding choices mean? You? Your editor? Your audience? Take a moment. Have a think. While you do, I’ll tell you what sparked this idea.

It’s an old question that I was pondering again this morning, after reading Sean Äaberg’s recent Substack article on Orcs. This is a solid rundown of how the perception of Orcs has changed over the years and it’s worth a read. Whilst I might quibble over the odd detail, the only important thing I’d like to add is that the modern look of GW Orcs is almost entirely down to an excellent sculptor called Brian Nelson. Games Workshop are very poor at crediting their creatives, but I think he deserves to be recognised here. I was working in the studio at the time, and I remember his project on them. They’ve not really changed their looks since.

Anyway, now you’ve had time to think about my question, I’ll give you my answer: all of the above and more. You, your editor, and definitely your audience all (mis)interpret what your worldbuilding choices mean. And, not just the audience now, but the audience at different times in the future too. 

You’ve heard the phrase “everyone’s a critic”. Add to that “everyone’s an interpreter”, and you get to one of the intriguing theoreticals about worldbuilding. While there are all sorts of articles and advice on how to build a world, and you’ll definitely have ideas about what each aspect means to you, the way your work is interpreted by your audience is somewhat out of your hands. Sure, you can make bad folk do bad things, but the motivations assigned by your future audience are likely to shift over time, and will always be reinterpreted in the light of their contemporary social values. 

This leads me to a couple of thoughts. The first is that I’m often more interested in what a creator meant than what the current accepted view is. It often requires a lot of digging and isn’t always clear, but it’s generally very rewarding to find out what they were thinking when something was made. It often tells you a lot about the time they lived in, and I love this sort of experiential archaeology (a term I just made up). You might think you know what Orwell had in mind when he wrote Animal Farm, but do you? Really? Who are the pigs? Who is the horse? It’s very much a product of the world he lived in, and we don’t share that experience, so we interpret it differently, in a smoothed-off pastiche of the raw vision that was originally present. This is normal. Inevitable, even. Your work, whatever form it takes, will be reinterpreted (and possibly horribly misinterpreted) as long as it exists. Another example would be the famous gaming celebration of capitalism called Monopoly. It was based on The Landlord’s Game, which was written specifically as a critique of monopolies.

When I think about the potential distortions in interpretation that any story or character or image I make will inevitably be subjected to, I find that initially rather demoralising. I’m trying to tell a tale, make a point. Why aren’t they getting it? Soon though, I realise that it’s actually the opposite: it’s both positive and liberating. I’m free to create whatever I like because, regardless of what I do, it’ll be misinterpreted anyway. So, I’ve no need to try to pander to an audience. They’ll wander off in their own direction whatever I do. Maybe a few will get close, who knows? In the end though, does it matter? I get to create what I want, and everyone else can like it or not as the case may be. Sure, I may get tarred and feathered due to some peculiar reading of my text, but there’s not a lot I can do about that. Being potentially misunderstood is a staple of creative work. No way around it. So ignore it and move on with a smile.

This idea is related to the discussion of separating authors from their work. That’s normally about “problematic” authors who’ve done something that’s frowned upon in their private lives and people who want to find a way to enjoy that work without the taint. In a way, it’s just an accolade for the work. People are bending over backwards to find a way to keep it. In reality though, audiences don’t know or generally seek to know what the creator really intended, and don’t always believe them when they do explain. So why waste everyone’s time? Just get on with it, whatever it is. Let it run free in the world. Offering the fruits of your creativity to the public is a risk in many ways. This is just one of them. It shouldn’t stop you creating. It should liberate you. 

In the end, build the worlds you want. You’re free to please yourself with your creations and whatever audience you have will find their own interpretations to enjoy. Everyone gets what they want. Wins all round. 


The Legal Woes of Bots

I was going to talk about why I’ve found the whole, often worrying, AI thing good in at least one important aspect. However, while writing that article, I realised that I needed this one as a foundation. That’s been the case with this whole bundle of AI topics; there are so many things I want to discuss at once. Anyway, AI and legal stuff now.

First, I should say that I’m using art as an example because it seems clear and there’s more info on court cases and so on. The same principles seem to apply to all AI-generated work though. Also, I need to say that I’m not a lawyer, so this is only my layman’s view of things. It’s based on personal experience helping a legal team with copyright cases, some digging into other companies’ experiences and published concerns, plus reading about AI-related lawsuits. I’m sure I’ve missed something, so please feel free to enlighten me in the comments.

Morals? What Morals?

If you’re a big company, then ignoring the whole illegal use of training data question (discussed previously) may be convenient in the short term. Several are doing just this. I think they’re right that the public is fickle and will soon move on. However, I also wonder whether their tacit bet that their reputation will weather the storm sufficiently intact will pan out. 

Even so, I don’t think this is the worst of AI’s copyright problems. We’ll come back to what is in a minute.

Don’t Share Your Secrets with Mr MechaHitler

In my reading, I’ve come across some companies doing their due diligence and testing out the various AI software before they jump in with both feet. This is great information to mine for you and me because we don’t individually have the resources to do as thorough a job of this. It’s also notable because there seems to be a pattern. 

Whatever they think of the creative abilities of AI, and however simple that is or isn’t to include in their pipelines, they’ve got legal questions. Any AI workflow that involves talking to the cloud (most of them) means that your proprietary info is going into the training pool. Read the small print. Research has shown that this information is indeed recoverable, with accuracy in the high nineties, percentage-wise. Legal departments get very squirrely about this sort of thing, never mind the GDPR implications of data management if there’s any customer info included. 

This uncertainty in data control and management potentially lands the company in a host of trouble. Firstly, there’s the potential loss of company secrets, which is a problem both commercially and for encouraging investors. Then there are the issues with personal data. Putting this at the mercy of AI training pools is potentially illegal in itself, though I’ve never heard of it being tested in court. It’s also a very bad look in the court of public opinion when it inevitably gets out. 

For these reasons, some companies are being stopped from AI use by their legal departments alone. I don’t have a legal department. However, if it’s a legal problem for them, I don’t see why I’d be immune. So that’s one reason not to use AI. This isn’t my main concern though. I’m sure this will be cleared up in a year or two as this kind of client is way too important for the AI businesses to be pushing away. If I wanted to use AI and this was the only issue, then I’d just wait it out. 

A Human Wearing a Robot Suit

Possibly the biggest legal challenge for AI companies, going forward, is the fact that most versions of copyright law clearly say that only human-generated work can be the subject of copyright. Simply put, you cannot copyright the output of an AI. Case closed.

Or is it?

You see, it’s not just original human work that can be copyrighted. Humans can take the works of others (humans, traditionally) and transform them into new works, which can be copyrighted as new pieces in their own right. 

As far as AI goes, this means that something like Midjourney can generate a picture with zero copyright, and then a human can transform it into something that can be copyrighted. So, whatever transformation means, it’s key to the commercial use of AI. And, at least till the law is changed, this means using pesky humans. Unfortunate for an industry that’s ostensibly in the business of replacing them.

How much humaning do they need though? 10%? 50%? As with many laws, it’s written in a way that earns lawyers enough for a second holiday home and a yacht rather than being clear to mere mortals. Or courts. This guarantees lots of protracted legal cases for the next decade or two. 

Transformative is an unhelpfully fuzzy term. However, companies want copyright control over important brand imagery. They need to know what they have to do to jump through that hoop if they’re going to use AI to generate the imagery in the first place. This is potentially a deal breaker, and it’s going to be a problem until the copyright law is changed. 

Requiring a human to intervene in a major way in the AI pipeline to make its output copyrightable cuts against the idea that AI can avoid the need for skilled humans. This is a trend. While I see individuals using AI for various art and writing, very little of it is turning up unmodified in the public sphere. And that means we’re getting a new creative tool for humans to use, not a tool that replaces human creativity.

At best (from the pro-AI POV), you might argue that it reduces the human to an art director rather than an artist. A different skill, and usually one done by an artist anyway, but not what’s being presented as the future of AI. I’m not sure that’s enough to satisfy the law on copyright either. It seems that the transformation must be hands-on, and sufficiently invasive to count as having transformed the output into something entirely new. It’s hard to see how much of this is going to be done without a human artist in the loop. In fact, AI always requires a human. Nobody’s publishing substantial work without a human vetting it at some stage. Well, not without ending up in the news.

Despite all this, plenty of big players are already sacking folk to replace them with AI. But AI is a term that covers a lot of different areas within a large company, and using ChatGPT to write your emails isn’t the same as using Nano Banana to do your character design. While one seems (to me) legally safe, the other feels more nuanced. More of a challenge. After all, the AI companies haven’t borrowed billions of dollars on the promise that they’d replace a couple of secretaries. They want to put all creativity in the metaphorical hands of your toaster (I may be paraphrasing here). They need bigger wins. 

As far as jobs go, the cuts I’ve seen generally remove less skilled roles and add work to the higher skilled folk. That’s not a great long-term strategy, as the highly skilled folk don’t get where they are without going through the lower skilled stage, and without roles to train new folk in, it’s going to be an interesting puzzle to solve. I suspect that the AI believers simply imagine that the whole requirement for humans will be written out any day, so this is something they can ignore. But, like the AI eating its own slop and the training going weird, I think it’s an issue they can’t ignore. Well, obviously they can and probably will ignore it, and it’ll come back and surprise them later, just as they seem to have been surprised by the current copyright situation. Too busy being clever and pleased with themselves to avoid all the unforced errors. 

Maybe They Should Have Considered This Sooner

What’s interesting to me is that while stealing training data is about the AI itself, most legal hurdles are reasons why the market might not use it. This is probably more concerning for the AI companies. After all, the best product in the world is no use without a paying audience. If AI can’t be easily used without falling into legal chasms, then the market will look elsewhere for their art. 

Humans, maybe. 

Posted in Random Thoughts | Tagged | 2 Comments

Comfort Games

Today is a friend’s birthday, and we’re going to play some games together. We’ve played lots of different games, but some we return to again and again. It’s the same with any group of people I play with and have played with over the years. Whichever club I’ve gone to or friend group I’ve played among, there were always those games we returned to more than others, and which felt different. Why? I don’t think it’s just that you like that game more. There are plenty of games that I like well enough and seldom play. No, here I’m thinking of something more akin to pulling on that familiar old sweatshirt or the heavily worn slippers that really should go, but which have moulded themselves to you perfectly and cannot be replaced, no matter how many objectively newer and shinier alternatives your in-laws buy you for Christmas. They’re like a comfort blanket, so I call them comfort games.

As I pondered what we might play today, I started wondering whether you could deliberately design a game to fill that comfortable niche, and while (spoiler alert) I don’t think you really can, you can probably design something with more chance of getting there. 

First, let’s take a step back. What do I mean by a comfort game? It’s a tricky thing to define closely, though I suspect you already have an image in your head. After having rewritten this paragraph half a dozen times, I still think that the best analogy is the favourite jumper, slippers, or chair. It’s familiar, friendly, and safe. It’s smooth edges and a warm hug. It’s an emotional comfort as well as an intellectual pleasure. 

That’s a bit fuzzy as definitions go, and I wanted something more specific, so I tried a slightly different tack. If I was going to interrogate the idea of whether you could design specifically for it, which design elements would I need to include? After some pondering, I came up with a few common criteria that I think are important in the games I reflexively include in this category.

Collective Happy Memories

This isn’t quite the same as nostalgia, though it bleeds into it. I think it’s important that it’s not just me: the whole group needs the vibe to be just right. If you can manage this a couple of times with the same game, then you’re onto a winner. This is easier with fewer people, and it applies to solo games too. Starting a game with the knowledge that you had a great time last time helps massage your expectations the right way for this time. A very tricky thing to design for specifically, and probably something you’d expect to be working towards anyway. 

Familiarity

I don’t think games can get into this comfort game category on a single play. You might see the potential, but they’re going to have to earn their position over time. This suggests that games need to be easy to get to the table, with simple setup. You need repeated plays to ease yourself into the comfortable embrace of the familiar. Another hard thing to design for. The following point can perhaps proxy for this. 

Simplicity

This is a trend more than a hard requirement. I reckon that you could have incredibly complex comfort games if everything else aligned. However, I also think that there’s a sweet spot: not trivially simple, and not baroque in its complexity.

Comfortable Game Loop

A clean game loop is something you’ll be working on as you design a game, and a slick process is usually better than a cluttered one. Here though, I think that it’s essential to have a loop whose rhythm feels natural to the players as well as being clean and polished. Like musical taste, there’s a variety of options and matching the rhythm of the game to a specific player isn’t something you can do in the design stage (unless it really is a game that’s made for a specific individual). 

Rewards Clever Play

All games do this to a degree. What I’m thinking of here is the kind of game where you pull off a neat combo and feel clever for having done so. Not all games promote this sort of combo loop, and different folk are probably attuned to different periodicities of it. For me, it’s not something I expect to do every turn; I want to earn it by plotting over a few turns to build into a clever combo, when I’ve set it up and the cards or dice or game state align to reward my play. I do think that this feeling of self-congratulatory pride is part of the puzzle, as unflattering as that sounds. Games with flatter loops and only regular, smaller achievements don’t seem to make my list. 

I appreciate that this is all a bit fluffy and soft-edged, and also that it may well be entirely subjective, but it feels like a thing. Obviously, I’ve derived these ideas by reverse engineering the games I put in this category. You may end up with a different set of criteria if you try this process with your own comfort games. I’d be interested to hear what you find. 

If I was organised enough to track my plays, I’m sure you’d see some of these near the top of my list, but not all. Discordia, for example, doesn’t come out often, though it’ll have a season now and again. However, when it does appear it always feels like slipping into warm slippers round a cosy fire. It’s very much a comfort game. So, frequency of play doesn’t seem to be a requirement once the initial familiarity has been built up. 

Not sure where I’m really going with this. I just had the thought and wanted to share. To finish off, let me ask you a few questions. Do you have comfort games? I’ve ploughed on with the assumption that my neurodivergent brain is not off on a weird tangent (this time) and that the feeling is widespread if not universal. 

What comfort games do you have? 

Do you think they conform to these criteria?

Do you think it’s possible to deliberately design a comfort game?

Posted in Random Thoughts | 12 Comments

Oceans Infinity

You and I have been witness to the greatest art heist in the planet’s history. Yup, today I want to talk about the underpinning of the current crop of Large Language Model (LLM) style AIs: the training data they scraped from across the internet, published works, and anything else they could get their digital paws on.

Compared to this, the recent Louvre break-in was amateur hour, Vincenzo Peruggia was a lucky chancer, and even the stripping of artwork from across Europe to line the Nazi galleries barely warrants a footnote. Hyperbole? Kind of, but not entirely. 

On the one hand, the artwork hasn’t been removed. It’s still where it was to start with, pretending that nothing has happened. And the creators may even now be none the wiser. However, behind the scenes, a hugely consequential theft has taken place. 

Theft? Before we go any further, let me say that this is how I understand it. I’m not a lawyer, but I’ve dealt with copyright cases a number of times in my years in the industry. Fundamentally, copyright law seems simple. A human creator makes a new work, and by the process of that creation they gain the copyright in it. They don’t have to claim it; the act of creation itself grants them rights (as long as they’re human). Other entities may not use that copyrighted work for their own financial gain without permission. There are a few carve-outs for what is broadly called Fair Use (FU, ironically), though this applies to reviews and education rather than profit-centred endeavours. So, let’s say I want to design and publish a Star Wars game. I’d need permission from the copyright holder to do so. If I wanted to teach a course on media studies, I could reference Star Wars as Fair Use without express permission. 

On the face of it, the AI case seems pretty simple. The LLM industry collectively scraped all the art and writing and music and code and anything else they could find and used it to train software with the express purpose of competing with and replacing those creators. I don’t think that’s up for debate. This isn’t Fair Use. It’s also done without consent or license, and is therefore theft. If I sold that imaginary Star Wars game I’d end up in court. I’ve been involved in cases where exactly this sort of thing happened. 

There are three things that muddy the waters.

Firstly, tech-enabled scale. We’ve never seen so much stolen from so many individuals, and stolen so quickly. It’s so brazen, and so blatantly immoral and uncaring of the harm it does that authorities have no clue how to react. The law has simply failed to keep pace with the crime. 

Secondly, Fair Use and Legitimate Interest (LI). We’ve already mentioned the former, and that’s used to defend all manner of stuff that it doesn’t legally cover. The second is far more problematic because it has some legit uses. 

Legitimate Interest basically means that there’s a good reason why someone might need to use your data without asking permission. For example, a bank must conform to money laundering and other laws, so they need to look at your data. The GDPR has a number of clauses under which someone could claim LI. Unfortunately, the last one is a catch-all that can be used to exploit the process. Sure, you could send in a request to ask them to explain themselves, and then debate whether that was reasonable, and so on through the complaints procedure and courts. Realistically, this is almost never going to happen, or be worth the effort, because it’s massively resource-consuming and by that point they’ve already used your data anyway. If we’re talking about AI, then it’s buried in the training pool and isn’t removable. So, LI is another pseudo-legal smokescreen that the AI companies can use to steal your data. 

Remember that I said this series was about me exploring and learning? Well, this third point is where we come to the core of this aspect of AI, and it leads to my first major takeaway: a pragmatic one rather than an immediately emotionally satisfying one.

So where was I? Oh, yes. The AI companies have plundered what they needed to start their work and continue to harvest whatever they can get away with. A few have built models on training data that they’ve asked permission to use, though this is a minority approach and mainly seems to be a marketing exercise. Despite this, overwhelmingly, LLMs are built on data that’s being used without permission of the legal owner. It is not FU or LI under those definitions. Calling this anything other than theft is, in my opinion, either deluded or intentionally misleading. 

The third, and most important point, is that nothing material is going to be done about this. At least, not in any major way. The legal system lags too far behind, the power and resource imbalance between AI companies and creators is laughably one-sided, and the politicians are clearly leaning on the side of their oligarch peers outside the odd occasion where they need to pretend to care for votes. 

None of this should surprise anyone. It’s not new, it’s just business as usual. I don’t know if it’s because I’m older or whether anything really has changed, but it feels to me that while this abuse of power has always been the way, it’s less masked than it was in my youth. The politicians and billionaires used at least to pretend to care a little. Now the corruption and nepotism are out in the open in a way that would make the Borgias blush. 

So, yes. It’s not good. Creators large and small have had their work taken and used without permission. Stolen, in other words. And outside a few cases that perform the social function of show trials to salve the public conscience (which I predict they will fail to impress), the legal system will side with the money and power, as is tradition.

What I think this means is that there’s no real point in complaining. It’s a new paradigm we live in. Assume that anything you show in public will be stolen without repercussion. We’re back in the pre-copyright days now. Of course, the law will still be used to prosecute you if you use stuff the big boys own without permission, but the far more consequential thefts by the giant AI companies will continue unchecked.

All of this may seem a bit dystopian and gloomy, and it’s certainly not sunshine and rainbows. However, it’s not really new. People with more money than you will ever see have always had this power. The only difference is that it’s being wielded more brazenly and being used to abuse creatives. That doesn’t make it right, and being angry is a sane response. It won’t, however, make any difference. 

So, what to do? It seems to me that you’ve got three choices. 

  1. Take up an AK and lead the revolution against the capitalist running dogs of Big AI. Vanishingly small likelihood of success. Cannot recommend.
  2. Rail against the unfairness of it all, post online, complain to your friends, etc. This reminds me of Douglas Adams’s Hitchhiker’s Guide to the Galaxy. When the Vogons arrive, Ford tells the barman that the world is about to be destroyed. “Aren’t we supposed to lie down or put a paper bag over our heads or something?” the barman asks. “Will that help?” That’s what getting angry feels like to me. The answer is no, by the way. It won’t help. I think that you’re far better off putting your energies into the last option. 
  3. There are sort of two versions of this, lurking under one umbrella. Overall, option 3 is simple: deal with it. The two possible flavours are coping by ignoring AI and coping by embracing it. You could blend your own middle ground, though purists will probably suggest that any use of AI is going to taint you. Whatever happens, in this third option you find your own way to navigate the unfairness and lopsided immorality of it all. It’s never been fair or moral anyway, and this may be a useful wake-up call if you’d failed to notice previously. Whatever LLMs or subsequent AI forms do to the creative space, there will always be humans making things, and for the foreseeable future these will have a different place in the world to whatever software churns out. 

This last point is my takeaway from pondering this aspect of the current AI/LLM wave. The genie won’t be going back in the bottle, and the folk that let it out will not be held accountable for the damage they’ve done. 

One of the many lessons that decades of gaming has taught me is that you need to pick your fights. Some battles can’t be won. In this analogy, they have a million tanks and you have a pointed stick. It’s not a winnable position. Now, you don’t have to like it (you’d be strange if you did), but you stand far more chance of winning the game as a whole if you let this battle go. Reinforcing a loss is just a waste of your resources.

Realising this makes me think that the only useful way forward is to accept the shit sandwich they forced upon us, and move forward. Take your anger and channel that energy into getting better, learning more, finding your own way, because, at the end of the day, you are what you have most control over. Maybe the only thing. 

Let the AI companies do what they’re doing, just as you let the other human creators get on with their thing. It’s competition, it’s inspiration, and it’s background noise. Focus on yourself, your work, your skills and your progression. There was always someone better than you, and others who were less skilled. That hasn’t changed. Just now, some of it’s software. Learn, improve, and be the most exciting and interesting creator you can be. In a way, you should pity the poor AI. After all, for all its crass theft, and there was a lot of it, it can only copy, it cannot truly create. 

Maybe next week we should look at why this is a good thing.

Posted in Random Thoughts | Tagged | 24 Comments