“At first glance, the Spireas look more like wildflowers than shrubs. Their thin stems, only two to three feet tall, are topped with clusters of small blossoms…. They grow alongside many of our common wildflowers, such as Milkweed, Queen Anne’s Lace, and Loosestrife, and when winter comes, all remain standing with attractive dried flowerheads. But if you went to collect these plants for an arrangement of ‘winter weeds,’ you would notice one significant difference in Spirea. While the stems of the wildflowers have all died, leaving only live roots to start next year’s growth, the stem of Spirea remains alive, as you can tell by scraping the bark and seeing green beneath….
“One winter, as I examined a few Spirea that were sticking up through the snow, I noticed that although their main stems were alive, the dried flowerheads at their tips were dead. I wondered how the plant would continue its growth next year. Would the flowerheads drop off? Where would new stems grow? The following spring, I returned to the Spirea and got my answer. The living buds just beneath the dead flowerhead were growing into new branches. The weight of these branches was making the original stem bend to a horizontal position, with its old flowerhead still at its tip. On older plants I found that this process repeated for several years, creating a jumble of horizontal stems with dead flowerheads at their tips, and young vertical branches growing from them….
“There are four common native species of Spirea in the East. Three of them — Meadowsweet, Broadleaved Meadowsweet, and Corymbed Spirea — usually have white flowers in either a flat or a cone-shaped cluster. The name Meadowsweet is given to these plants collectively because of the pleasant sweet smell of their blossoms and their habit of growing in moist, sunny places, especially old meadows….
“The fourth species of Spirea, Hardhack or Steeplebush, is quite different in appearance from the others. It has a thin spike of bright magenta flowers shaped like the spire of a church steeple. The name ‘hardhack’ refers to the difficulty early farmers had with cutting them in meadows. The plants were very persistent too, for even after they were cut, they could send up new stems from their spreading roots.”
It’s just a little chilly. April’s promise fills the air. For anyone who’s looking signs of spring are everywhere.
Sunshine brightly glinting on new magnolia leaves. Irrepressible forsythia bounding forth in golden wreaths.
Pointed spears of green attending yellow daffodils. Poeticus narcissus preening beside the prim jonquils.
Miniature grape hyacinths growing low in clumps of blue. Vermillion quince in flower with a mockingbird or two.
On slender branches circlets of white spirea beguiles. Periwinkle twinkles in shy lavender smiles….
Hello!
Having photographed this collection of spirea in previous springs (see, for example, Bridal Wreath Spirea from last year), I can see how the growth of these shrubs matches the pattern described in the quotation from The Natural History of Wild Shrubs and Vines above. Where there were just a few spindly stems with sparse blooms last year or the year before, the plant has expanded to split off new branches and create clusters of flowers running their length.
The overall pattern of the plant’s growth reminds me of how spirea variants are often used in flower arrangements to create contrasting lines and colors with other flowers. Yet I can also see that a vase full of long spirea stems would be quite striking and stand on its own — with contrast provided by its dark red woody stems and tiny green leaves. The Photographer imagines snipping some of these stems and smuggling them home under his coat — but, alas, he behaves himself and is content with the photographs instead. Thieving has never been one of his skills, anyway; he would most likely get caught.
“Periwinkle is more than just a pretty ground cover. It has an interesting past and a promising future. Legends about periwinkle date back further than the facts we have about it, portraying a plant with influence over the devil. Herbalists proclaimed its powers. Apuleius, Roman author from the second century A.D., described periwinkle’s powers thus: ‘This wort is of good advantage for many purposes, that is to say, first against devil sickness and demoniacal possessions and against snakes and wild beasts and against poisons and for various wishes and for envy and for terror and that thou mayst have grace, and if thou hast the wort with thee thou shalt be prosperous and ever acceptable….’ Modern advertising could not give the plant a better promotion.
“And modern science has discovered more reasons to revere the periwinkle plant. Certain components of the Madagascar species, crimson-flowered Vinca rosea, inhibit cell growth. Doctors now include in cancer chemotherapy treatments steady doses of vinblastine sulfate or vincristine sulfate, two alkaloids extracted from the tropical periwinkle plant. While the vinca alkaloids sometimes produce unpleasant side effects, they effectively slow down tumorous cell reproduction. Periwinkle is no home cure for cancer, but these vinca extracts are among the most promising treatments for cancer today.
“The periwinkle that grows, wild or cultivated, around the United States and Canada is a smaller and less potent relative of the Madagascar breed. Its local appearance only reminds us of the worldwide search for cancer treatments deriving from the plant world. Vinca minor covers wooded corners, orchard spots, and landscaped yards with its shiny evergreen leaves. Its appearance in the wild often means the land was earlier inhabited. Early blue flowers spin open in the spring. Hybrids bloom pink or white or purple. Closely related, Vinca major stands higher, grows larger leaves and flowers, and doesn’t take so kindly to the wild. You will never discover a Madagascar periwinkle growing in the United States or Canada, outside a greenhouse. But the periwinkles you will find here have their own practical uses.”
At an abandoned house site, edge of the woods, lies a patch of periwinkle ground cover: glossy green leaves, violet flowers, a thick carpet spread across the forest floor.
I’ve come here at times to dig squares so now periwinkle covers my side yard. It holds banks of the mountain road near my cabin….
Imagine. All the vinca I will ever need….
Hello!
The batches of Vinca major (or periwinkle) that I photographed for this post were entwined among the hellebores I posted previously (see Early Spring Hellebores (1 of 2) and Early Spring Hellebores (2 of 2)) — mostly in the shade of some elm and oak trees and a few large shrubs. They were also among my first experiments with a neutral density filter, which seems to have helped produce some very rich background greens for the flowers. The blooms varied in color from blue to purple or violet (depending on how much reflected sunlight they caught), and I shifted all the colors slightly in Lightroom so they matched each other as closely as possible. Still, you may see these flowers as more blue or more purple, depending on the level of blue light the screen you’re viewing them on emits (or restricts).
Vinca’s vines tend to grow close to the ground, somewhat loosely but often rising in small clumps. They don’t so much attach to surfaces as tangle around them, though they don’t mind climbing up a tree — like those between two tree trunks below. You can find them throughout much of the United States (and other countries), usually early in the spring, and often in fields or along roadsides, where the variants you see are most likely Vinca minor (with smaller leaves and flowers), often considered wildflowers.
“Few plants are of greater antiquity, or more surrounded by legend and superstition than the hellebore. According to Greek tradition, the shepherd Melampus first became aware of its properties through observing its effect on his goats; and he used it successfully to cure the daughters of Proetus, King of Argus, of mental derangement — in some versions of the story, by dosing them with the milk of the goats that had eaten it, or in others, by the use of the herb itself, followed by baths in a cold fountain; so that for centuries afterward, the plant was famous as a cure for insanity….
“One of the species grew plentifully about Anticyra in the Gulf of Corinth, so eccentrics were playfully advised to ‘take a trip to Anticyra,’ and Horace calls a hopeless mental case: ‘One not three Anticyras could cure.’ So powerful a herb had, of course, to be treated with great respect, and Greek rhizotomoi or root-gatherers thought it necessary to draw a circle round it with a sword and recite prayers to Apollo and Aesculapius, before digging it up; keeping at the same time a wary look-out for eagles, for if one of these birds chanced to hover near, the gatherer would die within the year. It was also considered advisable to eat garlic before-hand, in order to ward off the poisonous effluvia of the plant. Later, the Gauls are said to have rubbed their arrow-points with hellebore before hunting, in order to make the meat killed, more tender.
“It was possibly introduced into this country by the Romans, who would hardly have allowed themselves to be deprived of so useful a plant; and it was much valued in mediaeval times for keeping away witches and evil spirits, and breaking spells and enchantments. If cattle fell sick, either through poison or evil spells, the practice was to bore a hole through the animal’s ear, and insert a piece of hellebore root. This was removed twenty-four hours later, by which time the trouble was supposed to be cured. The belief in the plant’s efficacy as a cure for mania continued right through the sixteenth and seventeenth centuries….”
About half of the photos in this post were taken with backlighting or side-lighting; those are the ones that look like they might have their own electric light source. Others were from shadier spots (like those in the first post) where I played around with different combinations of dappled sunlight just to see what would happen.
“Old English names for hellebore are setterwort, oxheal and bear’s foot, which, less fancifully than Bishop [Richard] Mant’s description, refer to the shape of their leaves. But the most popular name for one variety of hellebore is the Christmas Rose. Hellebores are referred to by [John] Gerard by yet another name, neesewort, and recommended as a cure, not surprisingly, for ‘Phrensies’, but with the advice that it should not be administered to ‘delicate bodies… but may be more safely given unto country people which feed grosly and have hard tough and strong bodies.’
“Hellebores, however they are named, are more popular with discerning gardeners today than they have ever been before. To have several varieties of hellebore in your garden is the sign of maturity of taste, of garden one-upmanship; they have become, in the gardening fraternity, a status symbol.
“Some hellebores, though not as many as are grown today, have been features for many years in Western gardens; and in Victorian times, and indeed up to the present day, while labor was available, the most prized flowers were those that were carefully protected in winter by glass bells, or in miniature greenhouses which were specially built for the purpose.”
Hadst thou lived in days of old,
O what wonders had been told
Of thy lively countenance,
And thy humid eyes that dance
In the midst of their own brightness,
In the very fane of lightness.
Over which thine eyebrows, leaning,
Picture out each lovely meaning:
In a dainty bend they lie,
Like to streaks across the sky,
Or the feathers from a crow,
Fallen on a bed of snow.
Of thy dark hair that extends
Into many graceful bends:
As the leaves of hellebore
Turn to whence they sprung before
And behind each ample curl
Peeps the richness of a pearl….
Hello!
I’ve never photographed hellebores before. I’ve stumbled upon them often, but found their colors monochrome and a bit dull, so I’d move on to something else. I don’t know whether those I’ve posted here are new plantings, or if I just caught them at the right time — but the purple and pink marbling among their blooms got my attention, and this hellebore community was quite insistent that I take their pictures. This is the first of two posts featuring some of the ones I encountered.
Since I hadn’t previously photographed them (and have never tried growing them myself), I don’t know much about them — so it will be fun to learn a little about their botanical history, and dig up some poems like the one from John Keats above, where he conflates a woman’s appearance with that of some hellebores. Or maybe he doesn’t, and he’s really just writing about hellebores; nobody knows for sure.
I don’t usually use any lens filters with my camera, except for some starburst filters that I’ve occasionally strapped on when photographing Christmas decorations. But I recently bought one — a neutral density filter — and the photos in this post (and the next one) were taken with that filter in place. I also have several hundred other photos of early spring flowers and plants I’m working on, all of which I took using that filter. Why, you ask? Well, thanks for asking and I will now explain.
As frequent visitors here know, many of my photos are from Oakland Cemetery’s gardens, where there’s an enormous number of native southeastern plants displaying themselves in a variety of natural settings and lighting conditions. As many of these plants are sun perennials, I’m often photographing in morning or mid-day sun — conditions that allow for capturing detail, but that can introduce bright lighting (and harsh shadows) that’s challenging to manage. I would handle this by under-exposing my images slightly, then adjusting out any remaining excess brightness (especially overly bright highlights) during post-processing in Lightroom.
Neutral density filters are often described as “sunglasses for your camera” — a perfectly fine metaphor for what they do: reducing a scene’s brightness without (theoretically) altering colors. They’re commonly used in landscape photography — especially with scenes of water or waterfalls, to give the water a flowing appearance — so commonly, in fact, that every article I read or video I watched about them described that use. But since their purpose is to reduce a scene’s overall brightness, I wanted to see what would happen if I used them for flower photography, especially closeups of flowers like those featured below.
So I put these “sunglasses” on my camera and headed out on an extremely brighteous day — just to find out what would happen. The first thing I discovered was that — since the camera now had sunglasses on and so did The Photographer — it was really-really dark in the camera’s viewfinder, sort of like night at 10:00 in the morning. It took me a minute to realize I had to rethink my exposure settings — where I was accustomed to reducing exposure (to limit excess sunlight), I now needed to do the opposite: increase the exposure, since the filter decreases the light reaching the camera’s sensor. Without doing that, much of the scene’s detail would be missing.
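(For anyone curious about the arithmetic behind that adjustment, here’s a minimal sketch — in Python, with an assumed ND8 filter and an assumed metered shutter speed, neither of which is necessarily what I used — showing that each stop of light the filter removes means doubling the exposure time so the same amount of light reaches the sensor.)

```python
import math

def nd_exposure_compensation(base_shutter_s, nd_factor):
    """Given a shutter speed (in seconds) metered without the filter, return
    (stops_of_light_removed, compensated_shutter_speed) for an ND filter with
    the given factor (e.g., 8 for an ND8 filter, 64 for an ND64)."""
    stops = math.log2(nd_factor)                 # ND8 removes 3 stops of light
    compensated = base_shutter_s * (2 ** stops)  # each stop doubles exposure time
    return stops, compensated

# Hypothetical example: metered at 1/500s in bright sun, then an ND8 filter added.
stops, new_shutter = nd_exposure_compensation(1 / 500, 8)
print(f"ND8 removes {stops:.0f} stops; 1/500s becomes roughly 1/{1 / new_shutter:.0f}s")
# -> ND8 removes 3 stops; 1/500s becomes roughly 1/62s
```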
This first outing was a bit of a bust: I took 600 photos and threw most of them out. As I was unaccustomed to using filters like this, lots of shots that looked relatively well-focused in the camera’s viewfinder when I took them looked like fuzz when I loaded them into Lightroom. That focusing problem was easily corrected once I realized that I was using slower shutter speeds than I typically did (which introduced motion blur), and a shallower depth of field (lower f-stop settings, which reduced front-to-back sharpness).
But it was a good learning experience: I went back for a second shoot and took greater care when focusing, having figured out how careful focusing and closely monitoring exposure settings (while leaning toward over-exposure) could get me the results I wanted. What I see now — with a little extra experimenting — is that a neutral density filter helps accentuate colors on a sunny day by reducing the amount of light overall, eliminating aberrations like blown-out highlights or excessively bright sunlight, and allowing me to overexpose and thus let the camera’s sensor gather more color from the scene.
By creating a better balance between bright and dark contrasts that way, the filter lets the colors show through, since they’re not overpowered by the light or hidden by the shadows. The resulting images are rather fascinating to work with in Lightroom: I can add saturation to the colors without making them look harshly brighter. And intense shadows on subjects are virtually eliminated — meaning that I can alter the darkness of shadowy regions and get some nice background color and foreground detail in photos like this.
I’m still puzzling over optimal exposure settings, and over how to understand (and explain) how using these filters changes my plant-based (haha!) photography. Because the filter alters how the camera interprets the scene and how its meter recommends a correct exposure, I may need to try different metering modes. Since I’m photographing relatively small subjects close up, I usually have the camera set for spot metering — which makes exposure recommendations based (roughly) on the subject I’m focusing on. But it may be better to try multi-segment metering, which bases its recommendations on more of the scene that appears in the viewfinder. These observations aren’t precise, I think, because this experiment is just starting (and, oddly, it almost feels like beginning with a new camera), but I think I’ll keep using the filter with my spring and summer photography — and fine-tune my understanding of how best to use it and how it changes the way I post-process my photos.
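(To make that spot-versus-multi-segment distinction a little more concrete, here’s a toy sketch in Python with NumPy — the scene values, the 5% spot size, and the 4×4 grid are assumptions for illustration only, since real camera meters use their own proprietary weightings. The point is just that a spot meter reads a small central patch while a multi-segment meter samples the whole frame, so the two can suggest very different exposures for the same scene.)

```python
import numpy as np

def spot_meter(luminance, spot_fraction=0.05):
    """Average luminance of a small central patch -- a rough stand-in for
    how a spot meter weights the scene."""
    h, w = luminance.shape
    dh, dw = max(1, int(h * spot_fraction)), max(1, int(w * spot_fraction))
    cy, cx = h // 2, w // 2
    return float(luminance[cy - dh:cy + dh, cx - dw:cx + dw].mean())

def multi_segment_meter(luminance, segments=4):
    """Average the mean luminance of a grid of segments covering the whole
    frame -- a crude stand-in for multi-segment (matrix) metering."""
    rows = np.array_split(luminance, segments, axis=0)
    cells = [cell for row in rows for cell in np.array_split(row, segments, axis=1)]
    return float(np.mean([cell.mean() for cell in cells]))

# A bright scene with a darker flower near the center: the spot reading comes in
# much lower than the multi-segment reading, so the two modes would recommend
# noticeably different exposures.
scene = np.full((600, 800), 200.0)   # bright background
scene[250:350, 350:450] = 60.0       # darker subject near the center
print(spot_meter(scene), multi_segment_meter(scene))
```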
“The age of the photograph has become the age of gesture and mime and dance, as no other age has ever been….
“[To] say that ‘the camera cannot lie’ is merely to underline the multiple deceits that are now practiced in its name….The technology of the photo is an extension of our own being and can be withdrawn from circulation like any other technology…. But amputation of such extensions of our physical being calls for as much knowledge and skill as are prerequisite to any other physical amputation….”
“The still photograph turns out to be a poor metaphor for understanding visual perception, for the simple reason that the world is not still, nor are we in relation to it. This has far-reaching consequences, because some foundational concepts of standard cognitive psychology are predicated on the assumption that we can understand the eye by analogy with a camera, in isolation from the rest of the body. Nor is this a mere intramural fight between quarreling academic camps; what is at issue is the question of how we make contact with the world beyond our heads….
“The world is known to us because we live and act in it, and accumulate experience.”
“How do we fit what happened to us into life without turning it into an anecdote with no teeth and a punch line you’ll mouth over and over for years to come…. [We] become these human juke boxes spilling out these anecdotes….
“But it was an experience. How do we keep the experience?”
Hello!
This is the second of two posts wrapping up Iris Season for 2023, with a selection of my previously posted iris photos rendered on black backgrounds. The first post is Irises on Black / Notes On Experiences (1 of 2).
I ended the previous post — a discussion of my experiments with the artificial intelligence image generator Adobe Firefly — with the following:
“Outside the realm of graphic arts, photography typically captures an instant in an experience, with the experience implied in the relationship between a shared photograph and its viewers. With an AI-generated image, the photographer’s experience is eliminated: there is no living interaction with the external world and whatever story a photograph might represent is reduced to phrases typed at a keyboard. What this might mean for the evolution of photography is something I’ll speculate on in the next post in this series….”
I have always enjoyed the entire process that ends with sharing my photography. From trudging around with my camera in the woods or a park, an urban landscape or a tourist attraction; to culling and sorting the images through post-processing; to organizing the results around some loose theme and posting them here — I like it all, even when any given step might stretch my patience or stress my aging joints. The idea that new software tools — AI image generators — could theoretically replace most of that workflow is more than astonishing….
Photography differs from graphic arts or digital art in two important ways. The second difference — implicit in my previous post and in the paragraph you just read — is that photography includes an experience that occurs in the external world (the world outside your head). Graphics or digital images produced by artificial intelligence tools don’t require such an experience, even if the images they produce are based on or derived from composites of images used to train those tools.
But the first difference between photography and AI tools (and digital art) is this: photography starts with a photograph, taken with a camera. This may seem glib and self-evident — and there are more complex ways to describe this inception, by swapping “photograph” with “image” and then talking about light, color, image sensors, and lots of technical imaging terms — but the “first principle” remains:
Photography starts with a photograph, taken with a camera.
It may — or may not — matter what happens to that photograph next. Every photo I publish here goes through some post-processing: at minimum, there are colors, lights, shadows, and details that get adjusted every time. And there are always spots to remove — outdoors is very spotty! — which sometimes means I reconstruct damaged leaves or flower petals, or remove background elements that interfere with the photo’s balance or the way your eye might follow its lines. All of these are forms of image manipulation, but the image that results is still a photograph — because photographs are, and always have been, manipulated by the technologies used to create them or the technologies used to refine the results.
But as you’re probably already imagining, things start to get a little muddy when you think about different kinds of image manipulation, even those that have long been available with tools like Lightroom and Photoshop. If I take one of my photos of a flower in a field, and remove the field by converting it to black — is that image still a photograph? If I take elements of several photographs and use Photoshop to create a composite, is that image still a photograph? Image manipulation is a subject that Photography — with a capital “P” — remains uncomfortable with, yet it will be more and more necessary to develop a shared understanding of the differences between “photographs” and “images” as artificial intelligence tools continue to advance.
As I was cobbling together some research for this post, I came across this interesting article: Copyright Office Refuses to “Register Works Entirely Generated by AI” — which describes how the United States Copyright Office will not allow AI-generated works to be copyrighted, because “human authorship” is not present in the creation of those works. This may seem like a woo-hoo moment for the regulation of AI images — but how long before someone effectively challenges that restriction because the prompts used to generate an image were typed into a computer by a human being?
But this isn’t the pinhead I want to dance around on; instead, I ask: how will they know the image is AI-generated? I knew that there were tools supposedly capable of differentiating between text written by humans and text written by, say, ChatGPT — but only learned recently that there were tools designed to identify AI-generated images. I won’t name them, though; here’s why:
I tested three of the tools using the images I generated with Adobe Firefly for the previous post and this one. Two of the three tools identified every one as likely human-generated (which they were not). The third tool fared better, but only got about half of them right. This could be because Firefly is newer than some of the other AI image generators, I suppose, but I still think it suggests we’re going to need better detective tools!
If you go here, you can see some of the images people have generated with Adobe Firefly, without signing in. You’ll notice, I’m sure, that many of the images clearly are not photographs and don’t try to be: they are, instead, fantastical renderings of different scenes that I like to call imaginaria. I have no doubt that the ability to create images like these requires significant technical skill and creative insight — the kind that comes from training in tools like Photoshop, plus a great imagination. Or at least it did, until now — provided the artist is willing to concede a lot of their creative energy to a tool that will approximate their request and fill in its own blanks.
But I did wonder what else I might come up with if I decided to stay within the realm of (imitated) photographs, with bits of imaginaria. So I started with something simple, but slightly exotic, and asked Firefly to generate “a photograph of a Bengal tiger, in natural light.” Here’s what Firefly gave me…
… and I don’t think I would have obtained a better Bengal tiger photo if I’d gone to Zoo Atlanta and taken one myself.
I thought it might be cool to find a Bengal kitty-cat sleeping on my porch, so I updated the prompt to “photograph of Bengal tiger sleeping on someone’s front porch, in natural light.” And I got just what I asked for:
So then I decided to create some photos for my catering business web site (I have no catering business, and it has no web site) — one that offers wine tastings, including wine and cheese parties for iguanas. I used the prompt “photograph of an iguana on someone’s front porch, with a plate of cheese, and wine in a glass with a bendable straw.” Here are the resulting photos, which include me (not me) training the iguana to use the bendable straw, since, of course, iguanas can’t drink from wine glasses — unless you give them a bendable straw.
I then finished out the day with a little Birds, Bees, and Beers party (prompted with “photograph of a hummingbird drinking beer from a frosty mug” and “photograph of a bee drinking beer from a frosty mug”) for some of my closest friends:
I made only two kinds of post-processing changes to all the photos above: I cropped or used healing tools to remove the Adobe Firefly watermark and (sometimes) straighten the images; and I removed spots that annoyed me because… spots! The colors, shadows, lighting, and textures are exactly as Firefly produced them.
There is, of course, really no reason to do this (except to entertain oneself); but it does illustrate that even outside the realm of fantasy or imaginaria, it’s possible to AI-generate images that emulate photographs but are completely implausible. Yet while implausible, the images could still be considered “logically correct” in that there’s only one obvious error: the bendable straw in the first iguana image is both inside and outside the wine glass. Still, these “photographs” fail my photography test: they don’t capture a living being’s experience, and they aren’t produced with a camera.
We have little understanding of how these photos are created, other than a sense that AI engines undergo training — but with what? We’ve all long ago ceded much control of whatever we post on the internet, our ownership obfuscated by incomprehensible, seldom-read privacy policies and terms of use. Adobe maintains that it uses “stock images, openly licensed content and public domain content” to train Firefly — but that distinction also implies that other AI engines may be doing something different. For some delightfully contrarian views on how AI is being trained, see AI machines aren’t ‘hallucinating’. But their makers are, where Naomi Klein asserts that AI training with our content is the greatest theft of creative output in human history; and AI Is a Lot of Work, one of many recent articles about the legions of human beings exploited to keep AI models on track. This paragraph just hints at some of the cultural (and legal) issues that AI tools are already presenting — even as the tools are teaching themselves to do things they weren’t designed to do.
The play (and film) Six Degrees of Separation, quoted above, is about many things, and the story revolves around the intrusion of an imposter (pretending to be the son of actor Sidney Poitier) into the habituated and aristocratic lives of a wealthy couple, Flan and Louisa Kittredge. The imposter uproots their lives by involving them in a series of his deceptions, leading Flan to compartmentalize what happened into stories he tells friends, but leading Louisa to a climactic speech where she demands what I quoted: How do we keep what happens to us from being turned into anecdotes? How do we keep our experiences?
We seem to be in a similar position with respect to new technologies: AI image generators — even in their infancy — attempt to imitate photography, potentially supplanting actual photography; just as language generators (like ChatGPT) assert their ability to replace writing. But AI image generators won’t help someone become a photographer, and language generators won’t make someone a writer, because they can’t answer the questions: Why do we need — and how do we keep — our experiences?
“Flowers can be enjoyed without knowing about the interactions of soil, air, moisture, and seeds of which they are the result. But they cannot be understood without taking just these interactions into account — and theory is a matter of understanding….
“Theory is concerned with discovering the nature of the production of works of art and of their enjoyment in perception. How is it that the everyday making of things grows into that form of making which is genuinely artistic? How is it that our everyday enjoyment of scenes and situations develops into the peculiar satisfaction that attends the experience which is emphatically esthetic? These are the questions theory must answer. The answers cannot be found, unless we are willing to find the germs and roots in matters of experience that we do not currently regard as esthetic. Having discovered these active seeds, we may follow the course of their growth into the highest forms of finished and refined art.”
Hello!
For two final iris posts this season, I sifted through the 235 photos I’ve posted so far and selected a few dozen that I thought could be most effectively rendered on black backgrounds. The galleries below — and in the next post — demonstrate, I think, how removing background elements can emphasize the shapes, colors, and structures of these flowers. I didn’t make any other color or texture changes to these images from those posted previously — except to eliminate the backgrounds by converting them to black.
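(I make those background selections by hand in Lightroom, but for the curious, here’s a rough sketch of the idea in code — Python with NumPy and Pillow, with a hypothetical filename and threshold, and a single global brightness cutoff that’s far cruder than a hand-painted mask: pixels judged to be background are simply pushed to pure black, leaving the brighter flower untouched.)

```python
import numpy as np
from PIL import Image

def black_background(path, threshold=60):
    """Crude stand-in for the masking done by hand in Lightroom: treat pixels
    darker than `threshold` as background and set them to black, leaving the
    brighter subject untouched."""
    img = Image.open(path).convert("RGB")
    rgb = np.asarray(img).copy()
    luminance = rgb.mean(axis=2)        # rough per-pixel brightness, 0-255
    background = luminance < threshold  # True where we assume "background"
    rgb[background] = 0                 # convert background pixels to black
    return Image.fromarray(rgb)

# Hypothetical usage; "iris.jpg" is a placeholder filename.
# black_background("iris.jpg").save("iris-on-black.jpg")
```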
Lately I’ve been trying to educate myself on some of the artificial intelligence tools that have been emerging across various disciplines, about which you have probably seen breathless-sounding news coverage ranging from descriptions of these tools as world-changing to equally breathless heralding of the end of the human race. Having spent three decades working in information technology, I’m not that surprised by the hyperbole, which reflects two recurring themes embedded in most technological advances: these new things are hyped as miraculous; and the next versions of any of them will fix all the problems everyone sees in the current versions. Neither of these is true, of course, but the framing does grab attention and perhaps helps further public discussion, while the wizardry remains largely behind the curtains.
The term “artificial intelligence” is a broad concept that includes a wide variety of technological implementations, some of which have been available for a while across different types of software tools. Before I retired, for example, one of my last projects was to evaluate a customer support platform that was capable of responding to verbal or written support requests, of learning from its interactions with humans, and of improving its ability to respond to reported problems as it engaged in those interactions. In all likelihood, you’ve experienced something like this, happily or not, when you’ve requested help with a software program or web site by telephone, email, or with a chatbot. Similarly, products like Adobe Lightroom and Photoshop now include capabilities that are supported by artificial intelligence, notably spot removal tools that are more capable of recognizing content and matching patterns; and the ability to select objects, subjects, and backgrounds in an image with greater accuracy than previous iterations.
Implementations like these differ, in significant ways, from the newer, user-facing variations of artificial intelligence, which are already being widely used to generate content. Since the universe of available tools is as large as it is, I settled on two I would spend some time with: ChatGPT, the language model with which you can engage conversationally; and Adobe Firefly, a program that can generate images from text prompts. I’ve been using ChatGPT for research (with wildly erratic and often disturbing results) for a few months and taking notes on the experience; but as my notes have reached about 5000 words, I’ve not yet sorted them out enough to write anything better than stream-of-consciousness observations, so I’m going to sit on those notes a little longer.
Adobe Firefly is available for anyone to use, for free, and you can sign in to use it with Adobe, Apple, Google, or Facebook accounts, at this link. Firefly lets you describe, in words, an image you’d like to generate. It supports content types categorized as art, graphic, or photo — so, of course, “photo” is what interested me the most. Here, for example, is one of the images it generated from my prompt “white iris on black background” in the “photo” style:
Firefly automatically generates the image with the watermark in the lower left corner, to indicate that it was an AI image. Aside from that, though, it’s not quite convincing as an actual photograph — especially the iris standards (the uppermost sections of the bloom), which seem to lack the fine details you’d find in a photograph. And I could never get Firefly to create a pure black background — there were always some shades of gray behind whatever variation it generated — so I imported it into Lightroom, updated the background, adjusted shadows and added some texture, and ended up with this…
… which is much closer to a photograph in appearance, and eerily resembles one I might have taken. It’s still not quite right — yet it’s difficult to explain in words why it strikes me as “not quite right” — but since it was my first attempt at generating an AI image, I figured I’d eventually learn how to get more “photo-realistic” results.
I decided to try something more complicated, and used the prompt “Mausoleum of a wealthy family at a Victorian Garden cemetery similar to Oakland Cemetery in Atlanta, Georgia, surrounded by hydrangeas” to generate the next four images. I doubt that Firefly recognized “similar to Oakland Cemetery” as relevant to the images it generated; though “Victorian Garden cemetery” is certainly a specific type of cemetery well-represented by images and words in books, articles, and web sources.
Here are its “photographs” of four mausoleums that do not exist:
The first thing I noticed about these images was that they all contained perspective errors: they’re slightly crooked horizontally, or the buildings appear tilted backward — yet this type of perspective error is common in architectural photographs, simply because the person with the camera is much shorter than any building, and it’s very easy to hold the camera off-level and create these distortions (especially with wide-angle lenses). While it’s impossible to speak in terms of “intentionality” with AI images whose training you know nothing about, I thought it was interesting that Firefly included what most photographers would consider mistakes — apparently intentionally!
I took the Firefly images and did what I would do if I had photographed these scenes in real life: I imported them into Lightroom, removed the watermarks and a few spots, made some color and contrast adjustments, then straightened or tilted each image, ending up with these…
… which are certainly now more respectable-looking as photographs. And there are some elements of each image that struck me as especially insightful, given the prompt I used. Aside from the obvious Victorian-style architecture, notice in the first photograph that the tool created a roof with some missing shingles (on the left side), which would reflect such a building’s age and some wear and tear. Further, it included a piece of plywood between the grass and the center sidewalk — something I often do see at Oakland Cemetery, where the old culverts (originally used for drainage and hosing horse doo-doo from the gravesites and pathways) have deteriorated. Both these elements suggest that the tool is capable of great specificity in the images it generates.
Could you tell that these images were not produced with a camera? Or that they were images of structures that don’t exist? At first glance, it might be nearly impossible, and two of the photos (the bottom pair) didn’t seem to reveal any hints of their AI source. A couple of them do show problems with the hydrangeas, where those at the left and right edges of the frame have no detail. They’re just shapeless blobs whose structure couldn’t be recovered in Lightroom or Photoshop (though they could be replaced with a healing tool), but their flawed appearance at the edges might be missed, since we tend to focus our eyes toward an image’s center anyway.
There are, however, structural or architectural mistakes in the first two, which — according to a conversation I had with ChatGPT — are common in AI-generated images. Take a look near the mausoleum entrances in this pair, then let your eye follow the columns from the ceiling downward. You’ll see that the columns on both sides start at the correct location, but the columns on the left side end too far forward, toward the middle of the sidewalk — like they might in an M.C. Escher illusion.
Here’s the relevant portion of each photo, zoomed-in so you can take a closer look:
Now you should very clearly see the flawed column “design” — and the facade of this building, if it could exist, would likely fall down. Once you see the flaws, you can’t unsee them; every time I look at these images now, that’s the first thing I notice. But what’s compelling to me is that, more often than not, Firefly generated plausible images of entirely imaginary buildings that were architecturally correct.
While scrounging around the web trying to learn more about AI image generators, I came across the suggestion that a photography prompt could contain information about a camera and lens combination, and the software would generate an image consistent with their characteristics. So, for example, instead of just using “Iris on a black background” as a prompt, I could type “Photograph of an iris on a black background, taken with a Sony A99ii camera and Sony 100mm lens.” While I couldn’t confirm that those additional details made a difference — because every time you change the prompt, Firefly automatically generates wholly new images, making it hard to compare — I did become convinced that starting the prompt with “Photograph of” might matter. Here, for example, are two images generated with the prompt “Photograph of a blue heron at the edge of a pond”…
… where I only removed the Firefly watermark and made a few shadow and contrast adjustments in Lightroom to emphasize the herons. These images are not of evidently lower quality — nor any less like photographs — than any of the thousands of blue heron images you might find on the web. And unlike the AI-imagined mausoleum images above, blue herons — just not these blue herons — do exist, despite the fact that I didn’t photograph any.
Outside the realm of graphic arts, photography typically captures an instant in an experience, with the experience implied in the relationship between a shared photograph and its viewers. With an AI-generated image, the photographer’s experience is eliminated: there is no living interaction with the external world and whatever story a photograph might represent is reduced to phrases typed at a keyboard. What this might mean for the evolution of photography is something I’ll speculate on in the next post in this series, and share some additional photos of animals — that I didn’t take.