For those who have forgotten,

Or maybe for those who never read Frank Herbert’s Dune: the Butlerian Jihad was the holy war waged against Artificial Intelligence in those wonderful books. The war to eliminate thinking machines and wrest humanity’s freedom from the devices that had enslaved it.

It is a noble thought.

It’s self-aggrandizing on two fronts: first, that humanity would be godlike enough to create true artificial intelligence and, second, that we’d be heroic enough to defeat that creation. It’s all very impressive. Good job humanity, good job on being like unto god and extra good job on putting it all back into its box.

A noble thought.

It is, in fact, such a noble thought that people developing machine learning tools in our present timeframe like to imagine they are creating true artificial intelligence. Or, worse, they believe they’ve already achieved it. All done and dusted. Watch out world, are you ready? Hey, Wall Street, cut me a big check!

Not so fast.

Large Language Models (LLMs) are not intelligent. Generative AI image generation is not talent. LLMs are, at best, tools for shaping words into heaps that statistically resemble writing. GenAI turns noise into pixels in the general shape of its prompts. There’s no cognition happening, no inspiration, no creative expression.

In the words of Shakespeare…

“It is a tale told by an idiot, full of sound and fury, signifying nothing.”

No. What the press and the culture call AI is not true artificial intelligence. But that isn’t stopping some from starting the Butlerian Jihad in humanity’s name. Want to dabble with ChatGPT? Heathen! Feel like exploring MidJourney? Heretic! Want to use either in your profession? Apostate! Witch! Blasphemer!

How very noble.

If one sees AI as an evil, how self-aggrandizing it must feel to oppose it. How empowering it must feel to the powerless to lift that Butlerian Banner and rally the townsfolk to raise their torches and pitchforks to battle that evil. Witch hunts always feel great to the hunters.

But is AI evil?

That’s not how it was pitched to us since 1966 on Star Trek. The computer on the USS Enterprise was always ready with the information, always handy with the infographic. It was a hell of a plot device, but any fan could see how useful such an appliance could be. Need an answer? Don’t have time to sift through library databases? Just ask the computer.

AI seemed sort of cute.

It helped to have Majel Barrett voicing the system in TNG and most of the rest of the franchise. That was AI though, right? We all agree that was AI. Or… did you think someone wrote all the computer’s responses? Did Starfleet have a computer writers’ room composing all possible answers to all possible questions? Nah! That was AI.

Right?

And all those cool TNG graphics? Was there a Federation graphic design lab drawing all of those images or were they generated by AI? I’ve always assumed that they were generated at runtime by the computer. I mean, sure, there was a database to pull from, but most of it had to be provided as needed. Just-in-time infographics.

We agree that’s AI they were portraying.

Well, I always wanted that. I still do. It’s the reason I bought a computer back in 1985. I want the Star Trek, post-scarcity utopia. And part of the way to get there is to build LLMs and GenAI. Are they great now? Nope. But they’re not evil. There are a lot of allegations against them. I buy some of them. I don’t buy them all. 

Here’s a list of the common arguments against AI.

I’m going to focus here on image generators like DALL-E and MidJourney. I don’t want to ignore ChatGPT and its impact on writers, but for this argument I’m going to focus on GenAI. I’ll come back to the issues of writing books with AI (which I think is absolutely foolish for many reasons) in another posting. But for now, my list:

1. They’re plagiarism machines.

This is the belief that AI image generators are trained on images scraped from artists on the internet. Since those images were scraped without the artists’ consent, the argument goes, it’s plagiarism. Well, maybe it’s closer to forgery; forgery is when you copy an artist’s work, while plagiarism is when you copy a writer’s work. But the basic premise is that the images were “stolen.”

You’ve used the internet, right?

On this very web page is an image. I know you can right-click on that image and save it to your photo library. You can drag it to your desktop. Anyone can. All artists, from the dawn of the World Wide Web, have known this. Yet they continued to post their artwork in this insecure environment, anyway. What reasonable expectation does any artist have that their art is safe from downloading?

I don’t think there’s any reasonable expectation.

Also, Google and other companies have been scraping data from websites since the 1990s. Where was the outcry then? Why is scraping images different? Why is any scraping different when it’s for AI training? I don’t think it is, but I think it fits into the Jihad mentality.

And, anyway, artists are catching on.

They’re figuring out how to protect their work. That should be respected. If GenAI wants to learn from an artist, that artist should be paid. End of story. GenAI should get the benefit of the doubt for their prior scraping because of the fast and loose way everyone conducted themselves on the internet before 2020-ish. But, no more.

2. They’ll put creators out of a job.

I’m more sympathetic to this than to most of the allegations, partly because I’ve worked as a designer for over forty years. But here’s the thing: in all of those forty years I’ve only known a handful of artists who can actually draw. I can draw reasonably well. I’m no da Vinci, but I’m better than 70% of that handful of designers I’ve worked with who could actually draw. I don’t think their jobs are at risk.

But what about the rest?

The rest can use GenAI as a tool in their work. They already are in businesses all over the planet. As Adobe goes, so go the globe’s designers. The only thing the Butlerian Jihad is doing is forcing them to stay in the closet about it and filling them with shame. There is no designer worth a damn who is generating an image and submitting it as a final product. Designers use GenAI to make elements and layers and textures.

But it can make their work better and earn them more dough.

I started designing a few years before desktop publishing hit. I knew typographers and stat camera operators that had to change careers. I knew airbrush artists that had to learn Photoshop. Tech has been eating designers’ lunches for a long time. We end up learning the tech and making it work for us.

3. It is an environmental nightmare.

I agree. But the entire data center industry is no better. I don’t know about you, but from the outside, I can’t tell if a data center is churning away on AI algorithms, serving AWS lambda functions for eCommerce, or serving streaming video for Netflix. Everything you consume with your phone or computer is being processed in a data center. Why is it bad if it’s AI?

It’s all the same sausage.

I saw a stat that said a GenAI image takes as much power to render as it takes to charge my iPhone. I can charge my iPhone in an hour. If I sat down to paint an image in Procreate on my iPad Pro, it would take me a few days and more than a few recharges. To say nothing of the cost in energy, cash, and vital resources required to keep a living, breathing artist alive long enough to produce a finished piece of art.

We tend to ignore the costs of human intelligence.
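For what it’s worth, the comparison holds up as a back-of-envelope sketch. The per-image figure is just the stat quoted above, and the battery capacities and painting schedule below are my own assumptions, not measurements:

```python
# Rough energy comparison: one GenAI image vs. days of digital painting.
# ASSUMPTIONS (illustrative, not measured): an iPhone battery holds
# roughly 13 Wh; the quoted stat equates one generated image to one full
# phone charge; a hand-painted piece takes 3 days with an iPad-class
# tablet (~28 Wh battery) recharged twice per day.

IPHONE_CHARGE_WH = 13.0   # assumed iPhone battery capacity, watt-hours
TABLET_CHARGE_WH = 28.0   # assumed iPad-class battery capacity, watt-hours

genai_image_wh = IPHONE_CHARGE_WH  # one image ~ one phone charge (quoted stat)

days_painting = 3
recharges_per_day = 2
hand_painted_wh = days_painting * recharges_per_day * TABLET_CHARGE_WH

print(f"GenAI image:  ~{genai_image_wh:.0f} Wh")
print(f"Hand-painted: ~{hand_painted_wh:.0f} Wh")
print(f"Ratio: ~{hand_painted_wh / genai_image_wh:.1f}x")
```

Under those assumptions the hand-painted piece costs an order of magnitude more electricity, before you even count the artist’s food, rent, and coffee. Change the assumptions and the ratio moves, but the point stands: the human path isn’t free either.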

But, I don’t want to leave you with the impression that I am in the pocket of BIG AI. I have many, many misgivings about the current state of tech in general and AI in particular. So what are my personal red lights for AI? I have a few. Some are easier to deal with than others.

1. Stealing an individual artist’s style

Currently with many, if not all, GenAI image tools, a prompt engineer can request the image be rendered in the style of a particular living, breathing artist. The results vary in terms of quality. Some of them nail the look and feel of a working artist. Some, not so much.

This should be illegal.

Flat out, this needs to be stopped, through regulation if necessary. Working artists labor for too many years to develop their technique and style. It should not be possible to sidestep them to get work that previously only they could achieve. And it could easily be coded away. That should be implemented immediately.

Implement prompt bans on artist names.

It shouldn’t be possible to request an image to be rendered in the style of Frank Frazetta (for example) without paying Frank Frazetta’s estate. If GenAI developers want to pay for the artist’s lifetime style rights, I say go for it. But otherwise, impose a ban on the use of a given artist.
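A minimal sketch of what such a ban could look like on the prompt side. The blocklist, the names in it, and the `screen_prompt` helper are all hypothetical; a real system would check against a licensed-rights registry, not a hard-coded set:

```python
import re

# Hypothetical blocklist of artist names whose style rights have not
# been licensed. In practice this would come from a rights registry.
BANNED_ARTIST_NAMES = {
    "frank frazetta",
    "some living artist",
}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it names a
    blocklisted artist (e.g. 'in the style of Frank Frazetta')."""
    # Normalize case and collapse whitespace before matching.
    normalized = re.sub(r"\s+", " ", prompt.lower())
    return not any(name in normalized for name in BANNED_ARTIST_NAMES)

print(screen_prompt("a barbarian in the style of Frank Frazetta"))  # False
print(screen_prompt("a barbarian on a mountain of skulls"))         # True
```

A substring check like this is trivially evaded with misspellings, so real enforcement would need fuzzier matching. The point is only that a prompt-side ban is cheap to implement; the hard part is the licensing, not the code.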

2. The environmental costs need to be reckoned with

I think this will happen in the long run. All computing gets more efficient over time. Cloud computing is far more efficient than having companies and individuals hosting their own servers. Algorithms get cleaned up and streamlined to save money. But data centers shouldn’t get deals on electricity. Market forces should force them to get cleaner and more efficient.

And if that doesn’t work, regulate them.

3. Make GenAI tools, not replacements

If the developers of GenAI simply spoke to the design community, they could have avoided all of the backlash. Designers want to use GenAI. It’s cool to have a magic wand that provides an image to suit your need. It beats stock photography by a mile. But designers don’t want a tool for non-creative managers to side-step them.

They don’t want to be replaced.

These tools should be added to the designers’ toolkit, not the management’s. A producer should not be able to pull up GenAI and make a marketing campaign without hiring a designer to do the work. And, realistically, it’s the only way to make sure the artwork isn’t shit. Designers earn their money because they have taste. GenAI will spit out some seriously bad shit and the C suite doesn’t have the training to separate the wheat from the chaff.

Hire artists and let them use these tools.

The industry is already heading in this direction. Unfortunately it’s largely Adobe leading the way. And that’s okay by me because it’s essentially Adobe eating its own lunch. But their customers are primarily designers and I think they’re learning they can’t burn their user base without suffering the consequences.

So that’s my argument.

Do I use GenAI in my work? That is none of your business. I will neither confirm nor deny. If you believe I am, I invite you to prove it. There are several AI tools for detecting AI artwork. But it takes a certain amount of cognitive inconsistency to use those tools to ferret out the heretics. Using witchcraft to divine the witches. But here’s the thing…

You may not agree with me.

That’s okay. I’m not waging a holy war here. I’m not trying to change your mind to think AI is a good thing. I’m not sure it is. I think having healthy doubt about AI is just that…healthy. I, too, think AI is being used in some irresponsible ways. I, too, think that the Tech culture needs serious checks and balances. I wrote my first book on that very thing.

But I don’t think AI is necessarily evil.

I think we should all be open to that possibility. Be open to the possibility that there may be nuances in the conversation. Be open to the possibility that reasonable people can disagree without the need to label anyone with a Scarlet Letter. 

Post Scriptum

I guess a few AI engineers have gotten hold of this blog post and found it comforting. It shouldn’t be. Your industry has a lot of explaining to do and a lot of trust to earn. You’ve acted like assholes and you’ve put people’s livelihoods at risk. If my words have made you comfortable, well, fuck off. Start talking to designers NOT ON YOUR PAYROLL. Make the tools they want. Don’t make the stuff that robs them of the joy of creating.

Don’t be evil.


Hester Prynne.