"Pay attention to the world." -- Susan Sontag
 
Iris japonica (1 of 2) / Notes on Image Reconstruction

From “Iris Japonica” in The World of Irises by The American Iris Society, edited by Bee Warburton and Melba Hamblen:

“The much frilled and fringed flowers of Iris japonica lie almost flat and are 8-10 cm (3-4 inches) in diameter. The multibranched stalks, up to 60 cm (2 feet) in height, carry as many as 24-30 blossoms, two to each spathe. Generally four to six are open on each stalk, but when plants are well grown there may be many more. The pale lavender flowers have finely traced orange markings on their falls consisting of three slight ridges. The margins appear on a white ground outlined with splashy dots of deeper lavender shade. Smaller orange dots are contained within this area. The lavender style arms are finely fimbriated above the lip….

“Iris japonica is a native of moist woods and grows best outdoors in cool, frost-free conditions…. Since it blooms in early spring, early to mid-March in climates where temperatures seldom go below -3 degrees C (26 degrees F) and freezes are rare, it enhances the azalea and camellia garden….

“Because of their shallow root growth, some kind of mulch is desirable at all times except under circumstances of excessive heat and moisture. New plants emerge some distance from the main rhizome… traveling on the surface of the soil. The damp mulch provides anchorage for these proliferations and encourages their rooting….

“These irises have no genuine dormant period since the foliage remains evergreen the year round.”

From “Japonicas and Hybrids” in Iris for Every Garden by Sydney B. Mitchell:

“Iris japonica, sometimes also called I. fimbriata, is the most distinctive and beautiful of the crested irises. From fans of thin, bright evergreen foliage it sends up two-foot widely branched stems which bear over several weeks a succession of somewhat fugitive but lovely pale lavender flowers, fringed, and with orange crests. A single stem will bear dozens of blooms, so light and airy and so like orchids that this particular species is often referred to as the ‘orchid’ iris….

“Japonica is not in the least demanding as to soil, though its preferences are for one that is not too heavy and that contains humus and leaf mold. In contrast with almost all other irises, it succeeds far better in considerable shade than in sunlight and often will be found growing luxuriantly in the dappled shade of overhanging trees. Unfortunately japonica cannot be grown outdoors in very cold climates, but all along the Pacific Coast and through the southern states it is well worth trying.”


Hello!

This is the first of two posts with photographs of Iris japonica from Oakland Cemetery, taken on one of my early spring photoshoots. Iris japonica is known by several common names, including Japanese Iris, Fringed Iris, and Butterfly Flower. You might also find it called Shaga or Shaga Flower, “Shaga” being a translation of its Japanese name, one used in the early horticultural trade before European botanists assigned the plant its scientific names. Both the Butterfly Flower and Fringed Iris names reflect the flower’s appearance: Butterfly Flower has Chinese origins, describing how the spread petals resemble butterfly wings; Fringed Iris descends from one of the plant’s earlier scientific names, Iris fimbriata, where the Latin word “fimbriata” refers to the distinctive fringed edges of the petals. In iris classifications, Iris japonica belongs to a group of visually similar crested irises, more formally known as Lophiris.

While Iris japonica belongs to the same family as upright irises like Iris germanica and other tall irises, the Iridaceae, its growth pattern is very different. Iris japonica grows close to the ground as an understory plant, one I’ve often found surrounding the trunks of Oakland’s massive magnolia and oak trees. Its early spring bloom time under those trees means that the plants emerge through layers of leaves discarded the prior autumn as well as built-up winter debris. The evergreen leaves are abundant and large compared to the individual flowers, and the flower stems push through the layers of natural mulch to bloom just a few inches above the ground.

In my last two posts about Camellia japonica (see Camellia japonica (1 of 2) and Camellia japonica (2 of 2)), I mentioned that I photographed the Camellia plant because I wanted to take on the challenge of properly rendering the red and magenta color blending in the flower petals. I took these Iris japonica photos to pursue a different challenge: to see what I could do with their untidy backgrounds, which we tend to ignore when viewing the flowers in real life but which dominate any photo featuring both the flowers and their surroundings.

Here’s one of the photos in this series as it came out of the camera, where the flowers look just fine but the rest is a mess — a visually disturbing mix of broken, cut, or winter-damaged leaves that overwhelm the image:

In the olden days of a few years ago, I would, and occasionally did, produce photos of these flowers by zooming in on only the flowers, hiding the backgrounds as much as possible, or converting them to black. Converting the backgrounds to black, which involved careful, detailed masking around the edges of flower petals, was tedious, but it created a presentable image of the flowers, though only the flowers and perhaps a few stems and leaves. I used that black background technique often with different kinds of plants (see here), and my black background period taught me a lot about using Lightroom’s masking tools to emphasize the subjects in my photos. Its biggest drawback, however, was the loss of environmental and botanical context, since the images isolated the blooms and eliminated everything else.

When Adobe added Generative AI Remove to Lightroom in 2024, the capability was introduced as a “distraction removal” tool to get rid of spots or unwanted objects in an image. It indeed does that (and does it well), but for my nature photography, I’ve adopted it as a reconstruction tool that’s perfect for conditions like those present in these Iris japonica photos. I can use it to rebuild or reconstruct severely damaged leaves; or, as I’ve come to think of it: replace damaged elements with what might have been there, if the damage had not occurred.

Lightroom previously had (and still has) its traditional healing and cloning tools, which operate by replacing something you select with pixels from another section of the photo. That means, in effect, you could select something like the tip of a broken leaf and replace it with the tip of another leaf that wasn’t damaged. But that would only work if you could find a suitable match. Scroll back up to my sample photo and see if you can find structures, colors, or textures — which matter a lot in botanical photographs — that would match any single piece of damage you might want to eliminate. It would be like trying to assemble a puzzle from pieces out of several boxes: most of them just wouldn’t fit.

Like a lot of photo editing tools, the original healing and cloning tools did exactly what you told them to do. Their behavior was literal and restricted to the image you were working on, with no broader context. Generative Remove, on the other hand, isn’t restricted like that; it can draw on an enormous amount of information beyond what’s present in a single image. While I don’t have technical knowledge about exactly how Generative Remove does what it does, I can see — from experimenting with it — that it recognizes patterns in the colors, forms, and textures of botanical (and other) subjects. It may or may not know anything about Iris japonica specifically, but it takes the selections I make and suggests replacements that align with the natural appearance of the plants.

Equally important, I’ve learned that I can influence Generative Remove’s replacement suggestions. If I want to replace the flatly torn end of a leaf in this image with a properly pointed Iris japonica leaf tip, I can make a roughly triangular selection, and the replacement will look more like the tip of an unbroken leaf. If I make an angled or curved selection aligned with the directional flow of the leaves, the replacement will mimic those angles or curves. These behaviors lead to a more complex form of influence: the sections of the image I reconstruct first help establish how Generative Remove “sees” the image as the work continues, which improves the quality and effectiveness of its subsequent replacement suggestions. That insight led me to a specific reconstruction workflow: repair the smallest defects first, so that some leaves are fully restored early on; then move to the moderately damaged areas; and finish with the biggest messes of all.

In this image, for example, I’m about halfway through leaf reconstruction. I’ve finished fixing those on the right side of the flowers, which had less damage than those on the left; and those in the upper right corner now match the natural appearance of Iris japonica leaves when they’re fully intact. These patterns then become additional context that Generative Remove can use as I move to the more seriously damaged leaves on the left side of the photo…

… which, when completed, look like this, and show why I think of this reconstruction as “what might have been there, if the damage had not occurred.”

Here you can see the transition from the original image, to partial reconstruction, to the final reconstructed version — that final version including color adjustments to reflect the flowers’ subtle blend of blue, violet, and purple tones. Select the first photo to view the series in a slideshow, where it should be evident how well the tool (with two hours of my help!) has managed to generate a botanically accurate and naturally plausible background for the flowers.

That the final image doesn’t reproduce exactly what was present at Oakland when I took the photo isn’t, of course, the point. All images — whether drawn or painted, produced by a film camera and retouched in a darkroom, or captured and enhanced by a digital camera’s creative modes — are interpretations rather than precise representations. A tool like Generative Remove provides a way to approach image editing differently, because the tool infers what should occupy any area I select from what’s actually present in the surrounding scene. That makes it possible to render a composition as our human visual system and memory experienced it, where distracting elements like damaged leaves in the background are filtered out in favor of the subject that attracted our attention to begin with. And — unlike the hyper-realism that often surfaces in an image created by an image generator, or even one where the background is replaced — this kind of editing preserves fidelity to the experience of taking a photograph and recalling what we found significant when we did that.

That the Generative Remove tool’s behavior can be influenced is one of the most interesting things about using it for reconstruction like this. But it’s also what makes it very different from tools that do exactly what they’re told: the influence isn’t obvious at first, and the results seem ambiguous or random until you’ve repeatedly observed how that influence works. I don’t tell the Remove tool what I want in words; instead, I make selections, see what results it produces, decide whether they fit visually and look natural, then use or modify them — with those choices (and the accumulation of many choices) shaping what the tool does each time I make another selection.

The closest description I can come up with is that it’s like a picture-based dialogue between a human and a machine, working toward a creative goal. The ambiguity and uncertainty can feel uncomfortable at times, yet embracing them means that the nuanced responses produced by AI-based tools — photo editing tools or even language models like ChatGPT or Claude — are not necessarily mysterious: they’re within realms we can understand if we step back from the literal instructions and expectations we’re accustomed to when interacting with these new technologies.

Thanks for reading and taking a look!








