The Legal Woes of Bots

I was going to write about why I’ve found the whole, often worrying, AI thing good in at least one important respect. However, while writing that article, I realised that I needed this one as a foundation. That’s been the case with this whole bundle of AI topics; there are so many things I want to discuss at once. Anyway, AI and legal stuff now.

First, I should say that I’m using art as an example because it seems clear and there’s more info on court cases and so on. The same principles seem to apply to all AI-generated work though. Also, I need to say that I’m not a lawyer, so this is only my layman’s view of things. It’s based on personal experience helping a legal team with copyright cases, some digging into other companies’ experiences and published concerns, plus reading about AI-related lawsuits. I’m sure I’ve missed something, so please feel free to enlighten me in the comments.

Morals? What Morals?

If you’re a big company, then ignoring the whole illegal-use-of-training-data question (discussed previously) may be convenient in the short term. Several are doing just this. I think they’re right that the public is fickle and will soon move on. However, I also wonder whether their tacit bet will pan out: that their reputation will weather the storm sufficiently intact.

Even so, I don’t think this is the worst of AI’s copyright problems. We’ll come back to what is in a minute.

Don’t Share Your Secrets with Mr MechaHitler

In my reading, I’ve come across some companies doing their due diligence and testing out the various AI software before they jump in with both feet. This is great information for you and me to mine, because we don’t individually have the resources to do as thorough a job ourselves. It’s also notable because there seems to be a pattern.

Whatever they think of the creative abilities of AI, and however simple it is or isn’t to include in their pipelines, they’ve got legal questions. Any AI workflow that involves talking to a cloud service (most of them) means that your proprietary info is going into the training pool. Read the small print. Research into training-data extraction has shown that this information can indeed be recovered, in some studies with accuracy in the high nineties. Legal departments get very squirrelly about this sort of thing, never mind the GDPR implications of data management if there’s any customer info included.

This uncertainty in data control and management potentially lands the company in a host of trouble. Firstly, there’s the potential loss of company secrets, which is a problem both commercially and for reassuring investors. Then there are the issues with personal data. Putting that at the mercy of AI training pools is potentially illegal in itself, though I’ve never heard of it being tested in court. It’s also a very bad look in the court of public opinion when it inevitably gets out.

For these reasons, some companies are being stopped from using AI by their legal departments alone. I don’t have a legal department. However, if it’s a legal problem for them, I don’t see why I’d be immune. So that’s one reason not to use AI. This isn’t my main concern though. I’m sure it will be cleared up in a year or two, as this kind of client is way too important for the AI businesses to push away. If I wanted to use AI and this were the only issue, I’d just wait it out.

A Human Wearing a Robot Suit

Possibly the biggest legal challenge for AI companies going forward is the fact that most versions of copyright law clearly say that only human-generated work can be the subject of copyright. Simply put, you cannot copyright the output of an AI. Case closed.

Or is it?

You see, it’s not just original human work that can be copyrighted. Humans can take the works of others (other humans, traditionally) and transform them into new works, which can then be copyrighted as new pieces in their own right.

As far as AI goes, this means that something like Midjourney can generate a picture with zero copyright, and then a human can transform it into something that can be copyrighted. So, whatever transformation means, it’s key to the commercial use of AI. And, at least until the law is changed, this means using pesky humans. Unfortunate for an industry that’s ostensibly in the business of replacing them.

How much humaning do they need though? 10%? 50%? As with many laws, it’s written to earn lawyers enough for a second holiday home and a yacht rather than to be clear to mere mortals. Or courts. That guarantees lots of protracted legal cases for the next decade or two.

Transformative is an unhelpfully fuzzy term. However, companies want copyright control over important brand imagery, so they need to know what they have to do to jump through that hoop if they’re going to use AI to generate the imagery in the first place. This is potentially a deal breaker, and it’s going to be a problem until the copyright law is changed.

Requiring a human to intervene in a major way in the AI pipeline to make the output copyrightable cuts against the idea that AI can avoid the need for skilled humans. And this matches a wider trend: while I see individuals using AI for various art and writing, very little of it is turning up unmodified in the public sphere. That means we’re getting a new creative tool for humans to use, not a tool that replaces human creativity.

At best (from the pro-AI point of view), you might argue that it reduces the human to an art director rather than an artist. That’s a different skill, and usually one done by an artist anyway, but it’s not what’s being presented as the future of AI. I’m not sure it’s enough to satisfy copyright law either. It seems that the transformation must be hands-on, and sufficiently invasive to count as having turned the output into something entirely new. It’s hard to see how much of that gets done without a human artist in the loop.

In fact, AI always requires a human. Nobody’s publishing substantial work without a human vetting it at some stage. Well, not without ending up in the news.

Despite all this, plenty of big players are already sacking folk to replace them with AI. But AI is a term that covers a lot of different areas within a large company, and using ChatGPT to write your emails isn’t the same as using Nano Banana to do your character design. While one seems (to me) legally safe, the other feels more nuanced. More of a challenge. After all, the AI companies haven’t borrowed billions of dollars on the promise that they’d replace a couple of secretaries. They want to put all creativity in the metaphorical hands of your toaster (I may be paraphrasing here). They need bigger wins. 

As far as jobs go, the cuts I’ve seen generally remove less-skilled roles and add work for the more highly skilled folk. That’s not a great long-term strategy: the highly skilled folk don’t get where they are without going through the lower-skilled stage, and without roles to train new people in, it’s going to be an interesting puzzle to solve. I suspect the AI believers simply imagine that the whole requirement for humans will be written out any day now, so this is something they can ignore. But, like the AI eating its own slop and the training going weird, I think it’s an issue they can’t afford to ignore. Well, obviously they can and probably will ignore it, and it’ll come back and surprise them later, just as they seem to have been surprised by the current copyright mess. Too busy being clever and pleased with themselves to avoid all the unforced errors.

Maybe They Should Have Considered This Sooner

What’s interesting to me is that while the stolen training data is a problem with the AI itself, most of these legal hurdles are reasons why the market might not use it. That’s probably more concerning for the AI companies. After all, the best product in the world is no use without a paying audience. If AI can’t be used without falling into legal chasms, the market will look elsewhere for its art.

Humans, maybe. 
