"Pay attention to the world." -- Susan Sontag
Irises on Black / Notes On Experiences (2 of 2)

From “The Photograph” in Understanding Media: The Extensions of Man by Marshall McLuhan:

“The age of the photograph has become the age of gesture and mime and dance, as no other age has ever been….

“[To] say that ‘the camera cannot lie’ is merely to underline the multiple deceits that are now practiced in its name….The technology of the photo is an extension of our own being and can be withdrawn from circulation like any other technology…. But amputation of such extensions of our physical being calls for as much knowledge and skill as are prerequisite to any other physical amputation….”

From “Embodied Perception” in The World Beyond Your Head: On Becoming an Individual in an Age of Distraction by Matthew B. Crawford:

“The still photograph turns out to be a poor metaphor for understanding visual perception, for the simple reason that the world is not still, nor are we in relation to it. This has far-reaching consequences, because some foundational concepts of standard cognitive psychology are predicated on the assumption that we can understand the eye by analogy with a camera, in isolation from the rest of the body. Nor is this a mere intramural fight between quarreling academic camps; what is at issue is the question of how we make contact with the world beyond our heads….

“The world is known to us because we live and act in it, and accumulate experience.”

From Six Degrees of Separation by John Guare:

“How do we fit what happened to us into life without turning it into an anecdote with no teeth and a punch line you’ll mouth over and over for years to come…. [We] become these human juke boxes spilling out these anecdotes….

“But it was an experience. How do we keep the experience?”


This is the second of two posts wrapping up Iris Season for 2023, with a selection of my previously posted iris photos rendered on black backgrounds. The first post is Irises on Black / Notes On Experiences (1 of 2).

I ended the previous post — a discussion of my experiments with the artificial intelligence image generator Adobe Firefly — with the following:

“Outside the realm of graphic arts, photography typically captures an instant in an experience, with the experience implied in the relationship between a shared photograph and its viewers. With an AI-generated image, the photographer’s experience is eliminated: there is no living interaction with the external world, and whatever story a photograph might represent is reduced to phrases typed at a keyboard. What this might mean for the evolution of photography is something I’ll speculate on in the next post in this series….”

I have always enjoyed the entire process that ends with sharing my photography. From trudging around with my camera in the woods or a park, an urban landscape or a tourist attraction; to culling and sorting the images through post-processing; to organizing the results around some loose theme and posting them here — I like it all, even when any given step might stretch my patience or stress my aging joints. The idea that new software tools — AI image generators — could theoretically replace most of that workflow is more than astonishing….

Photography differs from graphic arts or digital art in two important ways. The second difference — implicit in my previous post and in the paragraph you just read — is that photography includes an experience that occurs in the external world (the world outside your head). Graphics or digital images produced by artificial intelligence tools don’t require such an experience, even if the images they produce are based on or derived from composites of images used to train those tools.

But the first difference between photography and AI tools (and digital art) is this: photography starts with a photograph, taken with a camera. This may seem glib and self-evident, and there are more complex ways to describe this inception by swapping “photograph” for “image” and then talking about light, color, image sensors, and lots of imaging technical terms, but the “first principle” remains:

Photography starts with a photograph, taken with a camera.

It may — or may not — matter what happens to that photograph next. Every photo I publish here goes through some post-processing: at minimum, there are colors, lights, shadows, and details that get adjusted every time. And there are always spots to remove — outdoors is very spotty! — which sometimes means I reconstruct damaged leaves or flower petals, or remove background elements that interfere with the photo’s balance or the way your eye might follow its lines. All of these are forms of image manipulation, but the image that results is still a photograph — because photographs are, and always have been, manipulated by the technologies used to create them or the technologies used to refine the results.

But as you’re probably already imagining, things start to get a little muddy when you think about different kinds of image manipulation, even those that have long been available with tools like Lightroom and Photoshop. If I take one of my photos of a flower in a field, and remove the field by converting it to black — is that image still a photograph? If I take elements of several photographs and use Photoshop to create a composite, is that image still a photograph? Image manipulation is a subject that Photography — with a capital “P” — remains uncomfortable with, yet it will become more and more necessary to develop a shared understanding of the differences between “photographs” and “images” as artificial intelligence tools continue to advance.

As I was cobbling together some research for this post, I came across this interesting article: Copyright Office Refuses to ‘Register Works Entirely Generated by AI’ — which describes how the United States Copyright Office will not allow AI-generated works to be copyrighted, because “human authorship” is not present in the creation of those works. This may seem like a woo-hoo moment for the regulation of AI images — but how long before someone effectively challenges that restriction because the prompts used to generate an image were typed into a computer by a human being?

But this isn’t the pinhead I want to dance around on; instead, I ask: how will they know the image is AI-generated? I knew that there were tools supposedly capable of differentiating between text written by humans and text written by, say, ChatGPT — but only learned recently that there were tools designed to identify AI-generated images. I won’t name them, though; here’s why:

I tested three of the tools using the images I generated with Adobe Firefly for the previous post and this one. Two of the three tools identified every one as likely human-generated (which they were not). The third tool fared better, but only got about half of them right. This could be because Firefly is newer than some of the other AI image generators, I suppose, but I still think it suggests we’re going to need better detective tools!
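The scoring I did informally can be sketched as a tiny bit of arithmetic. The verdict lists below are illustrative stand-ins mirroring the rough results described above — not the tools’ actual output — and the count of six test images is an assumption for the example:

```python
# Sketch: scoring AI-image detectors against images whose true origin is
# known. Every image in the test set was AI-generated, so a correct
# verdict is "ai". Verdicts here are hypothetical, chosen to mirror the
# rough results described in the text.

def accuracy(verdicts, truth="ai"):
    """Fraction of verdicts matching the known origin of the images."""
    return sum(v == truth for v in verdicts) / len(verdicts)

# Three hypothetical detectors, six Firefly-generated test images:
tool_a = ["human"] * 6  # called every image human-made: 0% correct
tool_b = ["human"] * 6  # same
tool_c = ["ai", "human", "ai", "human", "ai", "human"]  # about half right

for name, verdicts in [("A", tool_a), ("B", tool_b), ("C", tool_c)]:
    print(f"Tool {name}: {accuracy(verdicts):.0%} correct")
```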

If you go here, you can see some of the images people have generated with Adobe Firefly, without signing in. You’ll notice, I’m sure, that many of the images clearly are not photographs and don’t try to be: they are, instead, fantastical renderings of different scenes that I like to call: imaginaria. I have no doubt that the ability to create images like this requires significant technical skill and creative insight — training in tools like Photoshop and a great imagination. Or at least it did, until now, provided the artist is willing to concede a lot of their creative energy to a tool that will approximate their request and fill in its own blanks.

But I did wonder what else I might come up with if I decided to stay within the realm of (imitated) photographs, with bits of imaginaria. So I started with something simple, but slightly exotic, and asked Firefly to generate “a photograph of a Bengal tiger, in natural light.” Here’s what Firefly gave me…

… and I don’t think I would have obtained a better Bengal tiger photo if I’d gone to Zoo Atlanta and taken one myself.

I thought it might be cool to find a Bengal kitty-cat sleeping on my porch, so I updated the prompt to “photograph of Bengal tiger sleeping on someone’s front porch, in natural light.” And I got just what I asked for:

So then I decided to create some photos for my catering business web site (I have no catering business, and it has no web site) — one that offers wine tastings, including wine and cheese parties for iguanas. I used the prompt “photograph of an iguana on someone’s front porch, with a plate of cheese, and wine in a glass with a bendable straw.” Here are the resulting photos, which include me (not me) training the iguana to use the bendable straw, since, of course, iguanas can’t drink from wine glasses — unless you give them a bendable straw.

I then finished out the day with a little Birds, Bees, and Beers party (prompted with “photograph of a hummingbird drinking beer from a frosty mug” and “photograph of a bee drinking beer from a frosty mug”) for some of my closest friends:

I made only two kinds of post-processing changes to all the photos above: I cropped or used healing tools to remove the Adobe Firefly watermark and (sometimes) straighten the images; and I removed spots that annoyed me because… spots! The colors, shadows, lighting, and textures are exactly as Firefly produced them.

There is, of course, really no reason to do this (except to entertain oneself); but it does illustrate that even outside the realm of fantasy or imaginaria, it’s possible to AI-generate images that emulate photographs but are completely implausible. Yet while implausible, the images still could be considered “logically correct” in that there’s only one obvious error: the bendable straw in the first iguana image is both inside and outside the wine glass. Still, these “photographs” fail my photography test: they don’t capture a living being’s experience, and they aren’t produced with a camera.

We have little understanding of how these photos are created, other than a sense that AI engines undergo training — but with what? We all long ago ceded much control of whatever we post on the internet, our ownership obfuscated by incomprehensible, seldom-read privacy policies and terms of use. Adobe maintains that it uses “stock images, openly licensed content and public domain content” to train Firefly — but that distinction also implies that other AI engines may be doing something different. For some delightfully contrarian views on how AI is being trained, see AI machines aren’t ‘hallucinating’. But their makers are, where Naomi Klein asserts that AI training with our content is the greatest theft of creative output in human history; and AI Is a Lot of Work, one of many recent articles about the legions of human beings exploited to keep AI models on track. This paragraph just hints at some of the cultural (and legal) issues that AI tools are already presenting — even as the tools are teaching themselves to do things they weren’t designed to do.

The play (and film) Six Degrees of Separation, quoted above, is about many things, and the story revolves around the intrusion of an imposter (pretending to be the son of actor Sidney Poitier) into the habituated and aristocratic lives of a wealthy couple, Flan and Louisa Kittredge. The imposter uproots their lives by involving them in a series of his deceptions, leading Flan to compartmentalize what happened into stories he tells friends, but leading Louisa to a climactic speech where she demands what I quoted: How do we keep what happens to us from being turned into anecdotes? How do we keep our experiences?

We seem to be in a similar position with respect to new technologies: AI image generators — even in their infancy — attempt to imitate photography, potentially supplanting actual photography; just as language generators (like ChatGPT) assert their ability to replace writing. But AI image generators won’t help someone become a photographer, and language generators won’t make someone a writer, because they can’t answer the questions: Why do we need — and how do we keep — our experiences?

Thanks for reading and taking a look!

My previous iris posts for this season are:

Irises on Black / Notes On Experiences (1 of 2)

Bearded Irises in Yellow, Orange, and Burgundy

Iris pallida ‘variegata’

Yellow and White Bearded Irises (2 of 2)

Yellow and White Bearded Irises (1 of 2)

Purple and Violet Iris Mix (2 of 2)

Purple and Violet Iris Mix (1 of 2)

Irises in Pink, Peach, and Splashes of Orange (2 of 2)

Irises in Pink, Peach, and Splashes of Orange (1 of 2)

Irises in Blue and Purple Hues (2 of 2)

Irises in Blue and Purple Hues (1 of 2)

Black Iris Variations (and Hallucinations)


  1. I find your experiments very interesting. I have no particular interest in AI but found myself unable to stop reading about your adventures with it.
    As I see it, the challenge in creating AI is that it comes down to “teaching” (which really means programming) a machine to accumulate data as if it were experience. First, the programmers need to understand how experiences really affect us — and that is not something we understand very well. And second, they need to accurately code that understanding into a computer. Not as easy as it sounds.

    1. Dale

      Thank you! And I agree! The boundaries between a simulated experience and an actual one seem to be getting thinner very quickly. And I wonder how long it will be before these tools can imitate any camera or lens I want, or — since people often upload photos with GPS coordinates in them — create a “photo” of something at a specific location.

      I started looking at Firefly because I saw a video about how it was being added to Photoshop (available in a beta version now) — which will allow you to generate Firefly AI content into your own photos. (Search for “Photoshop generative fill” on YouTube if you want to see some examples.) So I guess in the future I could take some of my iris photos and add realistic looking bees or butterflies — or tigers!

      Thanks for reading — I know it was a lot of words! 🙂