“There are not very many species of Day-Lily — about thirty in all, including several which are probably only sub-species of the ubiquitous H. fulva, whose range extends from Europe to China. In that flowery land it was cultivated at a very early date, and appears in a painting of the twelfth century; it was called Hsuan T’sao, the Plant of Forgetfulness, because it was supposed to be able to cure sorrow by causing loss of memory….
“In England both H. fulva and H. flava were cultivated before 1597, and called by the early botanists Lilly-Asphodills or Liliasphodelus, because they seemed to embody the characteristics of both families — a lily flower with an asphodel leaf. H. flava, the yellow day-lily or Lemon Lily, ‘is a native of the northern Parts of Europe; it gilds the Meadows of Bohemia; and in Hungary perfumes the Air, in some places for many Miles’. It is very hardy, flourishing even under trees and in towns, and was recommended for London gardens as early as 1722. The foliage is reported to make excellent fodder for cattle, particularly for cows in milk….
“Hemerocallis comes from two Greek words meaning the beauty of the day.”
“The botanical name [hemerocallis] comes from the Greek hemera (day) and kallos (beauty) because the flowers’ beauty lasts but a day, which is also why they are called ‘day lilies.’ They were named by Linnaeus, and the names ‘fulva’ for the tawny lily and ‘flava’ for the lemon lily are rare instances where he named specific plants by the color of their flowers.”
One by one, the unborn announce themselves — risen from green shadows day lilies tremble into light.
Hello!
It was only last year that I learned that daylilies are no longer classified as lilies — yet I still associate them with an invented summer time period I call “Lily Season” since they tend to bloom along with true lilies such as Easter Lilies, Madonna Lilies, and the lily-like Amaryllis family’s Swamp Lilies or Crinum. My Lily Season doesn’t have a set start date, though: it starts when I post my first batches of lily and lily-adjacent images, so this year begins on July 6 and will end when I run out of photos. Imaginary seasons can be very flexible.
I took the photos below — along with some of the other varieties I just mentioned, which I’m working on — in the first half of June. They seemed to have bloomed earlier than usual this year, and even though I was iris hunting at the time, I didn’t want to miss them. “Plants behaving strangely” is sort of a theme for gardens and gardening this year (see, for example, Dogwoods with White Blooms (1 of 2)). I’m still puzzling over the lingering effects of a long and unusual deep freeze we had at the end of 2022 — one that did a lot of damage to plant life throughout the area, and was followed a few weeks later by a second freeze that did further damage to plants just beginning to recover. Even this late in the year, I see quite a few plants in my own garden that produce new leaves, lose them, then produce another set. I have read elsewhere that some plants — especially struggling shrubs like mine — may need another season to return to their normal cycles, since they’re clearly not dead but not exuberantly alive either.
I’m hoping there will be additional batches of daylilies and true lilies this month, but recurring stormulous weathers have kept me away from the gardens for the past few weeks, so I hope my hope is not misplaced.
“Hemerocallis” — the daylily’s genus — is a favorite new word for me, one I only learned when researching their botanical characteristics and history. It looks like a word I might make up, but — alas! — I did not. Sometimes I holler it to The Dog just because I like how it sounds. And somehow he got it associated with his playtime… so now when I yell “Hemerocallis!” — he runs off and gets his ball…. 🙂
Try this: Let “Hemerocallis” roll off your tongue once or twice the next time you’re out at your favorite speakeasy; it’s sure to impress all your friends!
“The age of the photograph has become the age of gesture and mime and dance, as no other age has ever been….
“[To] say that ‘the camera cannot lie’ is merely to underline the multiple deceits that are now practiced in its name….The technology of the photo is an extension of our own being and can be withdrawn from circulation like any other technology…. But amputation of such extensions of our physical being calls for as much knowledge and skill as are prerequisite to any other physical amputation….”
“The still photograph turns out to be a poor metaphor for understanding visual perception, for the simple reason that the world is not still, nor are we in relation to it. This has far-reaching consequences, because some foundational concepts of standard cognitive psychology are predicated on the assumption that we can understand the eye by analogy with a camera, in isolation from the rest of the body. Nor is this a mere intramural fight between quarreling academic camps; what is at issue is the question of how we make contact with the world beyond our heads….
“The world is known to us because we live and act in it, and accumulate experience.”
“How do we fit what happened to us into life without turning it into an anecdote with no teeth and a punch line you’ll mouth over and over for years to come…. [We] become these human juke boxes spilling out these anecdotes….
“But it was an experience. How do we keep the experience?”
Hello!
This is the second of two posts wrapping up Iris Season for 2023, with a selection of my previously posted iris photos rendered on black backgrounds. The first post is Irises on Black / Notes On Experiences (1 of 2).
I ended the previous post — a discussion of my experiments with the artificial intelligence image generator Adobe Firefly — with the following:
“Outside the realm of graphic arts, photography typically captures an instant in an experience, with the experience implied in the relationship between a shared photograph and its viewers. With an AI-generated image, the photographer’s experience is eliminated: there is no living interaction with the external world and whatever story a photograph might represent is reduced to phrases typed at a keyboard. What this might mean for the evolution of photography is something I’ll speculate on in the next post in this series….”
I have always enjoyed the entire process that ends with sharing my photography. From trudging around with my camera in the woods or a park, an urban landscape or a tourist attraction; to culling and sorting the images through post-processing; to organizing the results around some loose theme and posting them here — I like it all, even when any given step might stretch my patience or stress my aging joints. The idea that new software tools — AI image generators — could theoretically replace most of that workflow is more than astonishing….
Photography differs from graphic arts or digital art in two important ways. The second difference — implicit in my previous post and in the paragraph you just read — is that photography includes an experience that occurs in the external world (the world outside your head). Graphics or digital images produced by artificial intelligence tools don’t require such an experience, even if the images they produce are based on or derived from composites of images used to train those tools.
But the first difference between photography and AI tools (and digital art) is this: photography starts with a photograph, taken with a camera. This may seem glib and self-evident, and there are more complex ways to describe this inception by swapping “photograph” with “image” and then talking about light, color, image sensors, and lots of technical imaging terms, but the “first principle” remains:
Photography starts with a photograph, taken with a camera.
It may — or may not — matter what happens to that photograph next. Every photo I publish here goes through some post-processing: at minimum, there are colors, lights, shadows, and details that get adjusted every time. And there are always spots to remove — outdoors is very spotty! — which sometimes means I reconstruct damaged leaves or flower petals, or remove background elements that interfere with the photo’s balance or the way your eye might follow its lines. All of these are forms of image manipulation, but the image that results is still a photograph — because photographs are, and always have been, manipulated by the technologies used to create them or the technologies used to refine the results.
But as you’re probably already imagining, things start to get a little muddy when you think about different kinds of image manipulation, even those that have long been available with tools like Lightroom and Photoshop. If I take one of my photos of a flower in a field, and remove the field by converting it to black — is that image still a photograph? If I take elements of several photographs and use Photoshop to create a composite, is that image still a photograph? Image manipulation is a subject that Photography — with a capital “P” — remains uncomfortable with, yet it will be more and more necessary to develop a shared understanding of the differences between “photographs” and “images” as artificial intelligence tools continue to advance.
As I was cobbling together some research for this post, I came across this interesting article: Copyright Office Refuses to ‘Register Works Entirely Generated by AI’ — which describes how the United States Copyright Office will not allow AI-generated works to be copyrighted, because “human authorship” is not present in the creation of those works. This may seem like a woo-hoo moment for the regulation of AI images — but how long before someone effectively challenges that restriction because the prompts used to generate an image were typed into a computer by a human being?
But this isn’t the pinhead I want to dance around on; instead, I ask: how will they know the image is AI-generated? I knew that there were tools supposedly capable of differentiating between text written by humans and text written by, say, ChatGPT — but I only learned recently that there are also tools designed to identify AI-generated images. I won’t name them, though; here’s why:
I tested three of the tools using the images I generated with Adobe Firefly for the previous post and this one. Two of the three tools identified every one as likely human-generated (which they were not). The third tool fared better, but only got about half of them right. This could be because Firefly is newer than some of the other AI image generators, I suppose, but I still think it suggests we’re going to need better detective tools!
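If you’re curious what my informal scoring amounted to, here’s a minimal Python sketch of the tally. The verdicts below are hypothetical stand-ins, since the real tools are web pages whose names I’m withholding; every image in the batch was, in truth, AI-generated:

```python
def detector_accuracy(verdicts, truth="ai"):
    """Return the fraction of verdicts that match the known ground truth."""
    return sum(v == truth for v in verdicts) / len(verdicts)

# Hypothetical verdicts for a batch of images that were ALL generated by
# Adobe Firefly (so the correct answer is "ai" every time):
tool_a = ["human"] * 8                          # called every one human-made
tool_b = ["human"] * 8                          # same
tool_c = ["ai", "human", "ai", "ai",
          "human", "human", "ai", "human"]      # right about half the time

for name, verdicts in [("A", tool_a), ("B", tool_b), ("C", tool_c)]:
    print(f"Tool {name}: {detector_accuracy(verdicts):.0%} correct")
# Tool A: 0% correct
# Tool B: 0% correct
# Tool C: 50% correct
```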
If you go here, you can see — without signing in — some of the images people have generated with Adobe Firefly. You’ll notice, I’m sure, that many of the images clearly are not photographs and don’t try to be: they are, instead, fantastical renderings of different scenes that I like to call imaginaria. I have no doubt that the ability to create images like this requires significant technical skill and creative insight (training in tools like Photoshop, plus a great imagination) — or at least it did, until now, provided the artist is willing to concede a lot of their creative energy to a tool that will approximate their request and fill in its own blanks.
But I did wonder what else I might come up with if I decided to stay within the realm of (imitated) photographs, with bits of imaginaria. So I started with something simple, but slightly exotic, and asked Firefly to generate “a photograph of a Bengal tiger, in natural light.” Here’s what Firefly gave me…
… and I don’t think I would have obtained a better Bengal tiger photo if I’d gone to Zoo Atlanta and taken one myself.
I thought it might be cool to find a Bengal kitty-cat sleeping on my porch, so I updated the prompt to “photograph of Bengal tiger sleeping on someone’s front porch, in natural light.” And I got just what I asked for:
So then I decided to create some photos for my catering business web site (I have no catering business, and it has no web site) — one that offers wine tastings, including wine and cheese parties for iguanas. I used the prompt “photograph of an iguana on someone’s front porch, with a plate of cheese, and wine in a glass with a bendable straw.” Here are the resulting photos, which include me (not me) training the iguana to use the bendable straw, since, of course, iguanas can’t drink from wine glasses — unless you give them a bendable straw.
I then finished out the day with a little Birds, Bees, and Beers party (prompted with “photograph of a hummingbird drinking beer from a frosty mug” and “photograph of a bee drinking beer from a frosty mug”) for some of my closest friends:
I made only two kinds of post-processing changes to all the photos above: I cropped or used healing tools to remove the Adobe Firefly watermark and (sometimes) straighten the images; and I removed spots that annoyed me because… spots! The colors, shadows, lighting, and textures are exactly as Firefly produced them.
There is, of course, really no reason to do this (except to entertain oneself); but it does illustrate that, even outside the realm of fantasy or imaginaria, it’s possible to AI-generate images that emulate photographs but are completely implausible. Yet while implausible, the images could still be considered “logically correct” in that there’s only one obvious error: the bendable straw in the first iguana image is both inside and outside the wine glass. Still, these “photographs” fail my photography test: they don’t capture a living being’s experience, and they aren’t produced with a camera.
We have little understanding of how these photos are created, other than a sense that AI engines undergo training — but with what? We all long ago ceded much control of whatever we post on the internet, our ownership obfuscated by incomprehensible, seldom-read privacy policies and terms of use. Adobe maintains that it uses “stock images, openly licensed content and public domain content” to train Firefly — but that distinction also implies that other AI engines may be doing something different. For some delightfully contrarian views on how AI is being trained, see AI machines aren’t ‘hallucinating’. But their makers are, where Naomi Klein asserts that AI training with our content is the greatest theft of creative output in human history; and AI Is a Lot of Work, one of many recent articles about the legions of human beings exploited to keep AI models on track. This paragraph just hints at some of the cultural (and legal) issues that AI tools are already presenting — even as the tools are teaching themselves to do things they weren’t designed to do.
The play (and film) Six Degrees of Separation, quoted above, is about many things, and the story revolves around the intrusion of an imposter (pretending to be the son of actor Sidney Poitier) into the habituated and aristocratic lives of a wealthy couple, Flan and Louisa Kittredge. The imposter uproots their lives by involving them in a series of his deceptions, leading Flan to compartmentalize what happened into stories he tells friends, but leading Louisa to a climactic speech where she demands answers to the questions I quoted above: How do we keep what happens to us from being turned into anecdotes? How do we keep our experiences?
We seem to be in a similar position with respect to new technologies: AI image generators — even in their infancy — attempt to imitate photography, potentially supplanting actual photography; just as language generators (like ChatGPT) assert their ability to replace writing. But AI image generators won’t help someone become a photographer, and language generators won’t make someone a writer, because they can’t answer the questions: Why do we need — and how do we keep — our experiences?
“Flowers can be enjoyed without knowing about the interactions of soil, air, moisture, and seeds of which they are the result. But they cannot be understood without taking just these interactions into account — and theory is a matter of understanding….
“Theory is concerned with discovering the nature of the production of works of art and of their enjoyment in perception. How is it that the everyday making of things grows into that form of making which is genuinely artistic? How is it that our everyday enjoyment of scenes and situations develops into the peculiar satisfaction that attends the experience which is emphatically esthetic? These are the questions theory must answer. The answers cannot be found, unless we are willing to find the germs and roots in matters of experience that we do not currently regard as esthetic. Having discovered these active seeds, we may follow the course of their growth into the highest forms of finished and refined art.”
Hello!
For two final iris posts this season, I sifted through the 235 photos I’ve posted so far and selected a few dozen that I thought could be most effectively rendered on black backgrounds. The galleries below — and in the next post — demonstrate, I think, how removing background elements can emphasize the shapes, colors, and structures of these flowers. I didn’t make any color or texture changes to these images from those posted previously — except to eliminate the backgrounds by converting them to black.
Lately I’ve been trying to educate myself on some of the artificial intelligence tools that have been emerging across various disciplines, about which you have probably seen breathless-sounding news coverage ranging from descriptions of these tools as world-changing to equally breathless heralding of the end of the human race. Having spent three decades working in information technology, I’m not that surprised by the hyperbole, which reflects two recurring themes embedded in most technological advances: these new things are hyped as miraculous; and the next versions of any of them will fix all the problems everyone sees in the current versions. Neither of these is true, of course, but the framing does grab attention and perhaps helps further public discussion, while the wizardry remains largely behind the curtains.
The term “artificial intelligence” describes a broad concept that includes a wide variety of technological implementations, some of which have been available for a while across different types of software tools. Before I retired, for example, one of my last projects was to evaluate a customer support platform that was capable of responding to verbal or written support requests, of learning from its interactions with humans, and of improving its ability to respond to reported problems as it engaged in those interactions. In all likelihood, you’ve experienced something like this, happily or not, when you’ve requested help with a software program or web site by telephone, email, or with a chatbot. Similarly, products like Adobe Lightroom and Photoshop now include capabilities that are supported by artificial intelligence, notably spot-removal tools that are more capable of recognizing content and matching patterns, and selection tools that can isolate objects, subjects, and backgrounds in an image with greater accuracy than their previous iterations.
Implementations like these differ, in significant ways, from the newer, user-facing variations of artificial intelligence, which are already being widely used to generate content. Since the universe of available tools is as large as it is, I settled on two I would spend some time with: ChatGPT, the language model with which you can engage conversationally; and Adobe Firefly, a program that can generate images from text prompts. I’ve been using ChatGPT for research (with wildly erratic and often disturbing results) for a few months and taking notes on the experience; but as my notes have reached about 5000 words, I’ve not yet sorted them out enough to write anything better than stream-of-consciousness observations, so I’m going to sit on those notes a little longer.
Adobe Firefly is available for anyone to use, for free, and you can sign in to use it with Adobe, Apple, Google, or Facebook accounts, at this link. Firefly lets you describe, in words, an image you’d like to generate. It supports content types categorized as art, graphic, or photo — so, of course, “photo” is what interested me the most. Here, for example, is one of the images it generated from my prompt “white iris on black background” in the “photo” style:
Firefly automatically generates the image with the watermark in the lower left corner, to indicate that it’s an AI-generated image. Even setting the watermark aside, though, it’s not quite convincing as an actual photograph — especially the iris standards (the uppermost section of the bloom), which seem to lack the fine details you’d find in a photograph. And I could never get Firefly to create a pure black background — there were always some shades of gray behind whatever variation it generated — so I imported it into Lightroom, updated the background, adjusted shadows, added some texture, and ended up with this…
… which is much closer to a photograph in appearance, and eerily resembles one I might have taken. It’s still not quite right — yet it’s difficult to explain in words why it strikes me as “not quite right” — but since it was my first attempt at generating an AI image, I figured I’d eventually learn how to get more “photo-realistic” results.
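For what it’s worth, a “pure black background” fix doesn’t have to happen in Lightroom. Here’s a crude, minimal sketch of the same idea in Python with the Pillow library, clamping any nearly-black pixel to true black. The file names and threshold are hypothetical, and this blunt approach only works when the background is already much darker than the subject; my actual edit was a masked adjustment in Lightroom, which is far more forgiving:

```python
from PIL import Image

def crush_blacks(in_path, out_path, threshold=40):
    """Clamp any pixel darker than `threshold` (on all channels) to pure black.

    A blunt instrument: it assumes the background is already much darker
    than the subject, so nothing of the flower gets crushed along with it.
    """
    img = Image.open(in_path).convert("RGB")
    pixels = img.load()
    width, height = img.size
    for y in range(height):
        for x in range(width):
            r, g, b = pixels[x, y]
            if max(r, g, b) < threshold:   # "almost black" becomes true black
                pixels[x, y] = (0, 0, 0)
    img.save(out_path)

# Hypothetical file names:
crush_blacks("firefly-iris.jpg", "firefly-iris-black.jpg")
```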
I decided to try something more complicated, and used the prompt “Mausoleum of a wealthy family at a Victorian Garden cemetery similar to Oakland Cemetery in Atlanta, Georgia, surrounded by hydrangeas” to generate the next four images. I doubt that Firefly recognized “similar to Oakland Cemetery” as relevant to the images it generated, though “Victorian Garden cemetery” is certainly a specific type of cemetery well-represented by images and words in books, articles, and web sources.
Here are its “photographs” of four mausoleums that do not exist:
The first thing I noticed about these images was that they all contained perspective errors: they’re slightly crooked horizontally, or the buildings appear tilted backward. Yet this type of perspective error is common in architectural photographs, simply because the person with the camera is much shorter than any building, and it’s very easy to hold the camera off-level and create these distortions (especially with wide-angle lenses). While it’s impossible to speak in terms of “intentionality” with AI images whose training you know nothing about, I thought it was interesting that Firefly included what most photographers would consider mistakes — apparently intentionally!
I took the Firefly images and did what I would do if I had photographed these scenes in real life: I imported them into Lightroom, removed the watermarks and a few spots, made some color and contrast adjustments, then straightened or tilted each image, ending up with these…
… which are certainly now more respectable-looking as photographs. And there are some elements of each image that struck me as especially insightful, given the prompt I used. Aside from the obvious Victorian-style architecture, notice in the first photograph that the tool created a roof with some missing shingles (on the left side), which would reflect such a building’s age and some wear and tear. Further, it included a piece of plywood between the grass and the center sidewalk — something I often do see at Oakland Cemetery, where the old culverts (originally used for drainage and hosing horse doo-doo from the gravesites and pathways) have deteriorated. Both these elements suggest that the tool is capable of great specificity in the images it generates.
Could you tell that these images were not produced with a camera? Or that they were images of structures that don’t exist? At first glance, it might be nearly impossible, and two of the photos (the bottom pair) didn’t seem to reveal any hints of their AI source. A couple of them show problems with the hydrangeas, where those at the left and right sides of the frame have no detail. They’re just shapeless blobs whose structure couldn’t be recovered in Lightroom or Photoshop (though they could be replaced with a healing tool), but their flawed appearance at the edges might be missed, since we tend to focus our eyes toward an image’s center anyway.
There are, however, structural or architectural mistakes in the first two, which — according to a conversation I had with ChatGPT — are common in AI-generated images. Take a look near the mausoleum entrances in this pair, then let your eye follow the columns, starting at the ceiling and moving down. You’ll see that the columns on both the right and left sides start at the correct location, but the columns on the left side end too far forward, toward the middle of the sidewalk — like they might in an M.C. Escher illusion.
Here’s the relevant portion of each photo, zoomed-in so you can take a closer look:
Now you should very clearly see the flawed column “design” — and the facade of this building, if it could exist, would likely fall down. Once you see the flaws, you can’t unsee them; every time I look at these images now, that’s the first thing I notice. But what’s compelling to me is that, more often than not, Firefly generated plausible images of entirely imaginary buildings that were architecturally correct.
While scrounging around the web trying to learn more about AI image generators, I came across the suggestion that a photography prompt could contain information about a camera and lens combination, and the software would generate an image consistent with their characteristics. So, for example, instead of just using “Iris on a black background” as a prompt, I could type “Photograph of an iris on a black background, taken with a Sony A99ii camera and Sony 100mm lens.” While I couldn’t confirm that those additional details made a difference — because every time you change the prompt, Firefly automatically generates wholly new images, making it hard to compare — I did become convinced that starting the prompt with “Photograph of” might matter. Here, for example, are two images generated with the prompt “Photograph of a blue heron at the edge of a pond”…
… where I only removed the Firefly watermark and made a few shadow and contrast adjustments in Lightroom to emphasize the herons. These images are not of evidently lower quality — nor any less like photographs — than any of the thousands of blue heron images you might find on the web. And unlike the AI-imagined mausoleum images above, blue herons — just not these blue herons — do exist, despite the fact that I didn’t photograph any.
Outside the realm of graphic arts, photography typically captures an instant in an experience, with the experience implied in the relationship between a shared photograph and its viewers. With an AI-generated image, the photographer’s experience is eliminated: there is no living interaction with the external world and whatever story a photograph might represent is reduced to phrases typed at a keyboard. What this might mean for the evolution of photography is something I’ll speculate on in the next post in this series, and share some additional photos of animals — that I didn’t take.
“Often I surprise myself at how little I notice in a flower, and the reason for this is haste and excitement with the flower as a whole. The beauty of an iris, say, is so great that it was years before I paid much attention to its structure. Eventually, when I bred irises in a small way, I marveled at the elegance of the style arms, the stigmatic lip, the wonderful tight way in which the stamens curve to fit the curvature of the arm.
“The casual viewer, who may admire the beauty of the iris as much as any fanatic iris fancier, will wonder how the dedicated gardener can tell the name of every iris among, say, five hundred kinds in the garden. But it is easy if you know and love the flowers.”
“In Iris germanica the beard is confined to the midrib of the falls, and… in time this species came to be regarded as the type of many species of tall bearded Irises (tall as compared with Iris pumila and other dwarf species) in which the beard is confined to the midrib, and so the name ‘German’, derived from the name of the species named ‘germanica’, was applied to all of them as a group, without any regard to the matter of habitat. So it seems to be quite apparent that when ‘German’ was first applied to the members of the germanica group it was understood as indicating merely resemblance in matters of form to the species germanica, and that in time the meaning became perverted.
“‘German’, as the term is now understood, as applied to the so-called group of Irises, is a misnomer. No species included in the group has ever been known to be native to Germany — not even any of the varieties of the species botanically called ‘germanica’.”
Hello!
Here we are, on the first day of summer, with the last of the new iris photos from my 2023 Iris Season expeditions. I did decide to recast some of my favorites from this season on black backgrounds, and I’ll post those later on this month. We are in the midst of a couple of weeks filled with dark and stormy days, so I’m keeping mostly indoors (arghh!) working on those photos instead of taking new ones.
I took the photographs below on two separate days: those that appear to have yellow standards were taken on a sunny day, while the rest were taken on a cloudier day that shifted the yellow colors to more saturated orange tones. They’re all from the same general area at Oakland Cemetery’s gardens, and I think they’re all the same kind of iris (even though their proximity to each other doesn’t necessarily mean that).
Since these irises have such a unique and fetching color combination, I thought I might be able to determine their specific cultivar or variant. As I’ve mentioned before, I often use PlantNet to help me identify flowers and plants, but given that there are thousands of bearded iris variations, I never could get very precise results. Yet I did learn something new about using PlantNet — something I was surprised I hadn’t noticed before….
When uploading a photo for identification, PlantNet lets you select the geographical region where you photographed the plant. I had always let it default to “World Flora” without realizing I could select “Southeastern U.S.A” instead. Interestingly — or perhaps weirdly — when I tried to identify all 21 of these photos in “World Flora,” PlantNet said eleven were iris x germanica and ten were iris variegata (commonly known as German bearded irises and Hungarian bearded irises, respectively). And, among these two pairs of photos….
… PlantNet identified the first as Hungarian, the second as German; the third as German, and the fourth as Hungarian — even though each pair is actually the same photo with different cropping. Whaaatttt!?!
So then! I started poking around on the site to see if I could find an explanation, but tools like this tend to be black boxes — meaning: you don’t really know why they make the choices they make; you only see inputs and outputs. But that’s when I discovered I could use “Southeastern U.S.A” as an area for identification — and with that setting, PlantNet identified all 21 photos as iris x germanica. This leads me to believe that somewhere out in the world — but not here in the southeast — there is a Hungarian iris similar in color and characteristics to these German irises, so PlantNet weighted its “World Flora” suggestions accordingly.
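To picture what that kind of weighting might look like, here’s a purely hypothetical sketch. I have no knowledge of PlantNet’s internals; the candidate scores and the regional species list below are invented solely to illustrate the idea of filtering a “World Flora” result through a regional flora:

```python
def reweight_by_region(candidates, regional_flora):
    """Drop species absent from the selected region, then renormalize
    the scores of whatever remains."""
    kept = {species: score for species, score in candidates.items()
            if species in regional_flora}
    total = sum(kept.values())
    return {species: score / total for species, score in kept.items()}

# Invented "World Flora" scores for one photo:
world_scores = {
    "Iris x germanica": 0.48,
    "Iris variegata": 0.46,   # the "Hungarian" candidate
    "Iris pallida": 0.06,
}

# Invented regional list that excludes Iris variegata:
southeastern_usa = {"Iris x germanica", "Iris pallida"}

print(reweight_by_region(world_scores, southeastern_usa))
# {'Iris x germanica': 0.888..., 'Iris pallida': 0.111...}
# With the Hungarian candidate excluded by region, germanica wins every time.
```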
“Pileated, variegated, and broken-colored are all adjectives used to describe the splashed and streaked flowers of bearded irises under the likely influence of a transposon (a jumping gene). Though the genetic history remains a little foggy, what matters most is that this phenomenally novel genre has rightly taken the bearded iris world by storm.
“Scandalous-looking, no doubt, these irises have graced the gardens of avant-garde iris lovers since the 1970s. But like many new trends seized upon by stylish people, broken-colored irises have been around longer than most realize. A ‘Zebra’, in commerce in the 1890s, reportedly had white flowers with blue stripes throughout the standards and falls, but that name is now reserved for the familiar cultivar of Iris pallida and its variegated foliage.”
Band of iris-flowers
above the waves,
you are painted blue,
painted like a fresh prow
stained among the salt weeds.
Hello!
Iris pallida ‘variegata’ is known by several common names, including Sweet Iris, Dalmatian Iris, Zebra Iris, and simply Striped Iris. “Variegata” refers to the variegation that produces bi-colored leaves — which may be white and green, or yellow and green — and the leaves are quite striking on their own.
There’s one large batch of these irises at Oakland Cemetery’s gardens, and I try to visit with them every spring. For many of the photos below, I pulled my lens back to produce wider-angled images — because the leaves seemed to demand as much attention as the iris blooms themselves. There are so many leaves — a multitude more leaves per plant than most other irises — that they can easily be positioned as background or foreground elements, or kept at the same focal plane as the flower. I tried a few at each of these positions — and I think my favorites below are actually those where the flower and the surrounding leaves are both in focus.