By Russell Viers
I still remember the first time I showed Photoshop’s Clone Stamp Tool to a newspaper audience. It was 1997 and it was called the Rubber Stamp Tool back then. The room was silent in amazement as I eliminated various marks, spots, and objects from the image. I think I removed a hot air balloon from the sky in a photo, as well.
Many people have come up to me over the years to show how they have used this tool to “fix” photos, and to share stories of how people have abused the power of that simple technique, only to lose their jobs.
Back then, the line in the sand, for news photos, anyway, was that you could clean up a photo, like removing scratches and spots, etc., but no manipulation. None. Stories of people adding basketballs to photos to make the shots more exciting, or removing the oxygen hose from a town mayor’s face come to mind, as well as too many others. People lost jobs over this type of manipulation of photos. I’ll bet you have stories of your own.
And now we have generative artificial intelligence (AI) built into Photoshop, which gives any user simple-to-use tools that can manipulate a photo far beyond anything we dreamed of doing with the Clone Stamp Tool. Want to remove things? Click. Want to add things? Type and click. BOOM…you have the photo you meant to take.
Yeah, it’s pretty amazing, and scary, at the same time.
And Photoshop’s new Generative Crop Tool? I shake my head thinking about the implications. This new tool allows the user to crop beyond the edges of the photo and, with a simple click of a button, the blank area around the image will be filled with what Photoshop thinks should be there…in seconds. I’ve used it in testing. I’ve taken square photos and added width to make them landscapes. Obviously, what’s added doesn’t really exist. Adobe’s generative AI brain, Firefly, is creating what it thinks would look natural and believable.
Notice I’ve been calling it “generative AI”? It matters, as this is the Mr. Hyde to the Dr. Jekyll of AI we’ve had in Photoshop, and many other applications, for years now.
Mild-mannered Dr. Jekyll’s AI, in our graphic arts world, is rather harmless: it does a particular task that we can do ourselves, but it has a trained mind behind it that can help us identify, and fix, things quicker. For example, all of Adobe’s photo adjustment tools have new masking capabilities that will quickly identify the subject of a photo, or the sky, or even people. In the case of people, it not only identifies them, it can distinguish the various people in the photo, then allow you to select just faces, or eyes, or lips, or facial hair, and more.
I can do this exact same thing with my Lasso Tool, or maybe Magic Wand Tool, or even the Quick Select Tool, or many other ways, but it would take a lot longer…and a steady hand.
This technology makes a selection based on information it has been taught in order to be able to identify what a sky looks like, or people, etc. But it’s not changing anything. It’s merely selecting it so we can then make some adjustments to lighting, color values, etc…things we’ve done for decades, just not with the help of Dr. Jekyll’s ability to select things for us.
There are many other AI features I would lump in this category: things like JPEG Artifact Removal, Photo Restoration, Depth Blur, Super Zoom, and others. These are all under Filter > Neural Filters, which we’ve had for several versions of Creative Cloud. They are run by Adobe’s Sensei AI brain, which, unlike the Mr. Hyde of Firefly, focuses on functionality rather than generating content.
The line between AI and generative AI gets a little blurry at times, however, as there are tools in the Neural Filters panel that are, in my mind, generative, in that they literally change the picture, manipulating it to tell a different story. Smart Portrait is a perfect example of how a user is a few clicks away from literally changing the photo completely from what was shot with the camera.
Now, I’m not in a position to tell you to use, or not to use these tools or where you draw your line in the sand as to what’s allowed, and not. I can tell you that for my photos, and my art, I don’t let Mr. Hyde anywhere near my work. I will not use generative AI to change or manipulate my photos in any way. On the other hand, I rely on Dr. Jekyll’s AI tools to help me make selections faster so then I can make the same lighting and color adjustments I’ve been making for years, only faster.
Keep in mind that Adobe’s not the only player in this game. In fact, they are a little late to it, compared to Midjourney and other tools that can generate art from just a few keywords typed into a prompt. There are generative AI tools for music and video creation, and much more. It’s really hard to keep track of because, as I write this article, things are already changing.
And it’s not just visuals. Mr. Hyde now has tools for writing text, in the form of ChatGPT and many others. ChatGPT is only a year old and it’s had an incredible impact on how the world creates content.
I probably could have let ChatGPT write this article for me, but for many reasons, I didn’t. One is that I want MY thoughts and words in this story. Another is that, although it takes me a lot longer, I enjoy the journey of finding the right words, constructing the sentences, and creating a flow to convey my thoughts as clearly as I can. And another is that I’m convinced people are more willing to read words written by a human, even if they are more flawed than what a machine could right … wait … scratch that … write, because what is written by the human is truly original thought, not just notes pulled from a database.
There is a movement in the art community promoting “No AI,” and they have logos people can put on their art to let it be known that the product you are viewing was 100 percent human-made. And as much as I would LOVE to promote that on my site and in my work, I come back to the point of this article: there are two sides to this AI issue, and they aren’t both “bad.”
Dr. Jekyll has so many non-generative tools that help me work faster, without creating content that’s not real. Whether it’s the many Photoshop tools mentioned above or something as simple as AI-driven spelling and grammar checks in various applications, there are great AI tools that are harmless, in my opinion. I placed some photos in a PowerPoint presentation the other day and the application automatically wrote the alt text for sight-impaired readers, using AI…and it was accurate (I would, of course, proof it all before releasing it to the public).
It’s Mr. Hyde who’s the problem, in my mind. Changing imagery and writing stories automatically can be dangerous, I think. And as more papers are adding video to their websites, let’s not forget the equal dangers of manipulating them. There are tools available now that can eliminate the “umms” and the other empty words from video automatically. Is that simply editing or changing the story? What about having the same power of removing or adding things in a video that we have in Photoshop?
So what is your line in the sand going to be? What are you going to allow AI / generative AI to do, and not do? I think it’s a decision we all need to make…sooner rather than later.
I’ve drawn my line. It’s the same line I’ve used since I first started using Photoshop thirty years ago. I won’t do anything to a photo I couldn’t do in the darkroom. Granted, I didn’t have the amazing masking tools back then, but even with Dr. Jekyll’s selection tools, I’m only going to adjust light and color, etc., much like I did with burning and dodging in the old days. As for images for advertising, that’s a totally different discussion we can have another day.
I recommend newspapers establish their AI / Generative AI rules as soon as possible. Perhaps talk with other newspapers, and your state associations, to get their takes on this to help you decide. Feel free to drop me a line, if you want.
And be specific, not general. “Do not use ChatGPT to write your story,” “Use of ChatGPT is allowed to shorten a story you’ve written, provided you proof the final story again before submitting,” and “Under no circumstances are you allowed to use Generative Cropping on a news photo” would be examples. This is not a time for ambiguity.
Then, and this is so important: make the rules very clear to everyone involved with the creation of your newspaper, including stringers, freelancers, and part-time employees. Put your rules in the employee handbook. Make a poster and put it over every computer so there is no question what is allowed, and not. With the high turnover in news and production departments these days, this makes even the newcomers aware of the rules on day one.
And finally, evaluate your rules regularly. The AI world is changing rapidly, and not just in the number of players entering the field with various tools: the existing products are improving every day thanks to new programming, the information fed into their data pools, and the fact that AI is actually getting smarter. So keep an eye out for what’s new and what’s changing, and how those changes affect the way you put out a newspaper.
If necessary, change and adapt your rules in keeping with current technology. Then update all of your employees and freelancers, change the employee handbook, and change the posters hanging around the office.
I wonder, if ChatGPT had written this article, would it simply have read, “Make your rules about AI, post your rules, evaluate often, adapt your rules if necessary, repeat”?
Russell Viers is an international speaker and trainer who teaches production techniques for graphic designers. In addition to speaking live, he offers training online through his site www.digiversity.tv.