"Pay attention to the world." -- Susan Sontag
 

Black Iris Variations and Observations

From Classic Irises and the Men and Women Who Created Them by Clarence E. Mahan:

“Amos Perry gave the name ‘Black Prince’ to the first iris he put into commerce because of the color of the flowers. ‘Black Prince’ has flowers with intense blue-violet standards and deep purple, almost black, falls, which have the texture of velvet. The color pattern of ‘Black Prince’ is called ‘neglecta.’ Other irises of this type were available when ‘Black Prince’ was introduced in 1900, but none with falls so dark and of such rich texture.

“The name ‘Black Prince’ was appropriate because of the color of the iris, but the name was also a stroke of advertising genius. What English heart could resist a ‘black iris’ named for the legendary warrior prince? The Royal Horticultural Society gave late-flowering ‘Black Prince’ an Award of Merit the year it was introduced. ‘Black Prince’ soon acquired a reputation for being a ‘slow grower,’ but its alleged lack of vigor did not diminish the desire of English and American gardeners to acquire it.

“Some unscrupulous nurserymen — not Perry — sometimes sold other irises, especially ‘Kochii,’ under the name of ‘Black Prince.’ So common did this practice become that gardeners had reason to believe that the iris’s name evoked another ‘black prince’ mentioned by Shakespeare in All’s Well That Ends Well, namely, ‘the black prince, sir, alias, the prince of darkness; alias, the devil.’

“Some iris experts believe that ‘Black Prince’ is one of the parents of Arthur Bliss’s famous iris progenitor ‘Dominion.’ Perry also thought this to be true. But Bliss did not really know the parentage of ‘Dominion’ and the truth of the matter remains, in the language of Scottish legal verdicts, ‘not proven.’”


Hello!

The first twelve photos in the galleries below are of some irises from Oakland Cemetery’s gardens that I’ve photographed and written about before — see Black Iris Variations (and Hallucinations) — and the remaining images are of a similar iris I’d not seen previously, but one that appears to be a related variant. All of them show similar and quite striking black color in their unopened buds and in the standards and falls of opened flowers, and they all stand tall on stems ranging from two to four feet high, populated with clusters of blooms.

I was never quite satisfied with the colors I reproduced in that previous post, so with this trip to the gardens I tried to more accurately photograph and represent them as I saw them. Here, for example, is the original version of one of the photos as the camera interpreted it…

… which closely matches how I saw and remembered them. Worth noting here is that it was an overcast but fairly bright day, conditions that provide (in my opinion) the best lighting for flower photography. In this case especially, the diffused sunlight ensured that there were no harsh shadows between parts of the plant. That also had a countervailing effect, however: the flower and its colors appear somewhat neutral, and the tonal range of the image seems limited, giving it a flat (could I say lifeless?) appearance, something that is typical of RAW images before any post-processing.

To my eyes (and in my brain), this initial version of the photo shows why this flower is commonly and locally referred to as a “black iris” — even if, botanically speaking, it’s not officially a black iris: true black irises are very rare, since most are actually very dark purple rather than black. And in post-processing, that’s exactly what Lightroom reveals: colors my eyes interpreted as black actually contain various shades of dark purple (and dark blue). Here’s what happens when the only change I make in Lightroom is to increase the photo’s overall brightness…

… and Lightroom exposes the purple (and blue) that the camera actually captured. If I keep increasing brightness, the flowers get even purplier (!!) — making The Photographer wonder which colors are correct, and suggesting that how we perceive color and how a camera interprets it can differ wildly.

But that takes us back to what I — and not the camera — experienced: irises whose colors appeared mostly as black, especially so on this overcast day. So this becomes the challenge: how to represent the flower as a black iris, yet still create an image with some interesting color variations, without over-purpling (!!) it. Here’s where I ended out, after experimenting with the hue, saturation, and luminance of the purple and blue colors, and settling on a combination of brightness and contrast that preserved the swaths of black. Now you see the colors as I experienced them, especially how each flower petal shifts from shades of purple at the outer edges (and at the stem) toward black at its center and underside. And by adding a touch of extra detail in Lightroom to each of the blossoms, even the “velvet texture” described in the quotation at the top of this post comes through.
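
For the technically curious, here’s a tiny Python sketch of why a flower that reads as black can still brighten into purple. It uses only the standard library, and the sample pixel value is hypothetical, just a stand-in for one of the “black” areas of a photo like this one.

```python
import colorsys

# A hypothetical "black" pixel, standing in for the dark falls of the iris:
# so dark it reads as black, but the blue channel is higher than red or green.
r, g, b = 28, 18, 42   # 8-bit RGB values (an assumed sample, not from the actual photo)

# Convert to HSV to separate the hue from the brightness.
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
print(f"hue = {h * 360:.0f} degrees, saturation = {s:.2f}, value = {v:.2f}")
# The hue lands in the violet range (about 265 degrees) even though the
# value -- the brightness -- is only about 0.16.

# Simulate a simple brightness boost, loosely like raising exposure in Lightroom:
boost = 3.0
r2, g2, b2 = (min(255, round(c * boost)) for c in (r, g, b))
print("boosted pixel:", (r2, g2, b2))
# The hue doesn't change; the pixel just becomes bright enough for our
# eyes to finally read it as purple rather than black.
```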

All of the photos in this series got similar treatment, though for each one I made different adjustments to the color variables, since even the slightest changes in cloud cover, background color, or reflected light (as from nearby statues) created variations in purple and blue intensity when the photos were taken.

Overall, this was a fun experiment with color, one that started on a fine spring day when freshly blooming irises were plentiful at Oakland Cemetery’s gardens. The number of blooms and color variations were surprising (even to me!) and I’m currently working through a backlog of iris photos in shades of brown, orange, peach, white, yellow, more blue and purple, and some with distinct color variations between their standards and falls — like white and purple or yellow and burgundy. The color wheel will be well-represented in all these photos as I post them — along with more history of this regal plant.

Thanks for reading and taking a look!











Iris Domestica, the Leopard Flower or Blackberry Lily (3 of 3)

From Strange Tools: Art and Human Nature by Alva Noe:

“Things show up for us as colorful and noisy. But this is all false appearance, a consequence of our particular makeup and local perspective. The qualities of objects we seem to see wouldn’t get cataloged in the final description of absolute reality. For they are merely effects, in our minds, of processes that are, in themselves, without color and without sound…. Everything we know in the world around us — from mountains to ice creams to sunsets to rose petals to the sun and the earth — is made up of physical parts that are made up in their turn of parts that are made up of still smaller parts. It’s pure matter… all the way down.”

From “The Act of Expression” in Art as Experience by John Dewey:

“[When] excitement about subject matter goes deep, it stirs up a store of attitudes and meanings derived from prior experience. As they are aroused into activity they become conscious thoughts and emotions, emotionalized images. To be set on fire by a thought or scene is to be inspired. What is kindled must either burn itself out, turning to ashes, or must press itself out in material that changes the latter… into a refined product….

[Elements] that issue from prior experience are stirred into action in fresh desires, impulsions and images. These proceed from the subconscious, not cold or in shapes that are identified with particulars of the past, not in chunks and lumps, but fused in the fire of internal commotion…. Through the interaction of the fuel with material already afire the refined and formed product comes into existence….”


Hello!

This is the third of three posts — with images magically remanufactured as black-background variations — of Iris domestica photographs that I uploaded to Iris Domestica, the Leopard Flower or Blackberry Lily (1 of 3) and Iris Domestica, the Leopard Flower or Blackberry Lily (2 of 3). Also, for extra fun, I made a collage of all twenty images and included that at the bottom of this post.

Of all of the photos I’ve converted to black backgrounds, these are the most complicated. As is implied by the quotation from Strange Tools above, Iris domestica is a fine example of something that seems to reveal smaller and smaller parts and pieces, the more you look at it. Here, for example, is one of the photos from my previous posts…

… where you would see the two flowers in the center as the subject of the photo, despite the presence of many other elements. This is correct, of course, and I guided your eyes toward seeing the photo that way with Lightroom adjustments that created greater visual distinction between the subject and background, dimming and softening the background so the pair of orange-and-spotted flowers became more prominent.

Converting a photo like this to one with a black background can be a challenge. Last year I did something similar — see Leopard Flower Variations (On Black) from September, 2022 — where I used Lightroom brushes to paint the backgrounds black, limiting myself mostly to the flower blossoms because brushing around the plants’ thin stems, leaves, and seedpods was too time-consuming. Shortly after that, Adobe introduced enhanced masking tools with the ability to select objects, subjects, and backgrounds, which I’ve been using as much as possible since they became available.

Updates to our post-processing tools serve us best when they open up new possibilities; and with these Lightroom masking enhancements, I’ve tried to take on more complicated variations. Instead of just brushing out the backgrounds around parts of an image as I did in the past, I can now use a combination of masks to get better results. With object selection, I can choose different parts of an image that I want to retain as the black-background version’s overall subject, then invert all those selections, then change the background to black.
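
Lightroom does all of this through its masking panel, but for readers who think in code, here’s a rough sketch, in Python with NumPy, of what that select-the-objects, invert, paint-the-rest-black sequence amounts to. The function and array names are mine, purely for illustration; this is not how Lightroom actually implements it.

```python
import numpy as np

def composite_on_black(image: np.ndarray, object_masks: list[np.ndarray]) -> np.ndarray:
    """Keep the selected objects and turn everything else black.

    image        -- an H x W x 3 array of RGB values
    object_masks -- H x W boolean arrays, one per selected object
                    (flower petals, seedpods, stems, and so on)
    """
    # Combine the individual object selections into one subject mask...
    subject = np.zeros(image.shape[:2], dtype=bool)
    for mask in object_masks:
        subject |= mask

    # ...then invert it to get the background, and paint that region black.
    background = ~subject
    result = image.copy()
    result[background] = 0
    return result

# Toy example: a 4 x 4 gray "image" with a 2 x 2 subject in the middle.
img = np.full((4, 4, 3), 120, dtype=np.uint8)
petals = np.zeros((4, 4), dtype=bool)
petals[1:3, 1:3] = True
print(composite_on_black(img, [petals])[:, :, 0])
```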

Here, for example, is an interim step in this approach. I selected parts of the image as individual objects in a single mask, one at a time — the flower petals, the seedpods, and the stems — then inverted the mask (shown in dark green). Lightroom’s object selection got a lot right; but as you can see — look to the right of the flower — some of the stems appear disconnected from the rest. This happens when selected objects are close in color to the background, and will also happen where foreground and background are similar in sharpness or contrast.

If I stopped here and converted the background to black, the gaps in the stems would be apparent, as you can see here…

… or, up closer, here:

I often compare the next steps (in my own head, at least) to painting different colors on walls and window trim, where you have to pay attention to the boundaries between two objects (the wall and the window frame) and two colors. If you slip with the paintbrush and one color intrudes onto the other, corrective action (!!) is warranted, along with, perhaps, a bit of cussing and extra bits of patience. But you have to fix it because you know it won’t look right if you don’t.

When adjusting masks that start out as coarse as the one shown above, I’ve learned to remember that elements of any image tend to be brighter where they’re closer to the camera (or to the eye), and darker toward the back. This light-to-dark, front-to-back variation in brightness is one of the ways we perceive two-dimensional images as having depth, and it applies to even the smallest details. In Lightroom, the masks appear to become “fuzzier” where they partially cover darker, toward-the-back elements. If I adjust the masks too much, I lose the front-to-back appearance of depth and leave the image looking flat — and something as small and thin as a flower’s stem ends up looking like a two-dimensional geometric line instead of a living portion of a plant. At the same time, I have to deal with an illusion: the more I zoom into a photo, the more tiny pixels appear to need adjustment. It took some practice to keep in mind that this front-and-light to back-and-dark contrast helps us perceive something as “real,” and to avoid adjusting the masks more than I should.

Since I have to pay close attention while working on the masks, I’ve noticed how familiar I become with the subjects of the photos and all their details. For most of my photos, this means that — even if it’s unintentional — I’m constantly observing the structures of plants and their flowers. This in turn helps me shoot with different expectations about what I see, what I can show, and what level of focus or what kind of light I need, especially if the photos might end out with black backgrounds. This is another valuable characteristic of the software tools we use: they not only offer expanded possibilities, but they help us see something we might overlook, as we envision different ways of taking photographs and enhancing them.

Here you see the corrected mask — where the stems (just to the right of the flower) are no longer disconnected from the rest of the plant. I used a “subtract brush” to erase the black background from areas where it intruded on the plant’s stems.
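
In the same code-sketch terms as the earlier example (and again, only as an illustration of the idea, not of Lightroom’s internals), the subtract brush simply removes the brushed-over pixels from the inverted background mask, so the bit of stem they cover no longer gets painted black:

```python
import numpy as np

# Hypothetical 4 x 4 masks, standing in for a full-resolution photo.
subject = np.zeros((4, 4), dtype=bool)
subject[1:3, 1:3] = True          # the objects Lightroom's selection found
background = ~subject             # the inverted mask that gets painted black

# The "subtract brush": remove the brushed-over pixels from the background
# mask, so the stem pixels the selection missed are no longer blacked out.
brushed_stem = np.zeros((4, 4), dtype=bool)
brushed_stem[0:2, 3] = True       # hypothetical stem pixels the selection missed
background &= ~brushed_stem

print(background.astype(int))     # 0 = kept as-is, 1 = painted black
```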

Now I can turn the mask overlay off, and I’ve got a completed black background. Select the first image below if you’d like to see a larger version, and I’ve included the original starting point for this image for comparison.

Thanks for reading and taking a look!









Iris Domestica, the Leopard Flower or Blackberry Lily (2 of 3)

From “Belamcanda” in Garden Bulbs in Color by J. Horace McFarland:

“More familiarly known as the Blackberry Lily or Leopard Lily, Belamcanda chinensis, a summer-blooming member of the iris family, is well worth growing. It came to us from China and Japan.

“With foliage much like iris and clusters of bright orange flowers on two-and-one-half-foot stems, the plant is very striking in the summer landscape. Plant the root-stalks in masses of six or more in places where they will have an effective background. Fortunately, the Blackberry Lily is relatively hardy, save in exposed areas.

“The first common name mentioned comes from the character of the seeds, which resemble blackberries. The other name, Leopard Lily (sometimes listed as Pardanthus chinensis), brings to mind the curious spots which accentuate the flowers.”


Hello!

This is the second of three posts featuring photos of the Blackberry Lily, Leopard Lily, Leopard Flower, or IRIS DOMESTICA! that I took a few weeks ago. The first post is Iris Domestica, the Leopard Flower or Blackberry Lily (1 of 3).

I photographed these the day after several seriously windy thunderstorms had passed through the area, and some of the plants had blown from their normal standing-tall positions to hang from their bent (but not broken) stems, almost horizontally among their leaves. The first seven photos below show the bit of extra drama I got from the plants in those positions.

Thanks for taking a look!








Iris Domestica, the Leopard Flower or Blackberry Lily (1 of 3)

From “Summer Blooms” in Through the Garden Gate by Elizabeth Lawrence:

“This summer the black-berry lily, Belamcanda chinensis, bloomed from early June until well into August. There was scarcely a day when there were not several small, ephemeral, red-spotted flowers. They open at various times in the morning, according to the amount of light, I think, but I could never catch them at it, though the clump is right outside my studio window, and I see it every time I look up from my work.

“The flowers close before dark, neatly furling themselves into a minute and almost invisible red and yellow striped barber pole, so they do not detract from the appearance of the plant even though they persist for some time. The handsome pale green seed pods form quickly, and when they burst open, early in September, the bunches of shiny seeds look like ripe blackberries. If the stalks are cut to the ground as they finish blooming, the plant will bloom again in September, but most people like the fruits for winter arrangements. The fan-like foliage is pale green with a delicate silvery bloom, and the stiff, well-branched flower stalks stand well above it. Although the stalks are from three to four feet tall, I am glad I put the plant in the front of the border, for it deserves to be seen as a whole and to stand alone.

โ€Belamcanda is the Malabar name for the black-berry lily, which grows spontaneously in India where it is considered a cure for snakebite.”


Hello!

It was only last summer that I discovered the charming plant with its pinwheel-shaped flowers featured in this post (and coming up in the next two). It has such a unique appearance — well-described in the quote from Through the Garden Gate above — that I assumed it would be easy to identify, and my friend PlantNet did tell me it was a Leopard Flower whose scientific name was Iris domestica. It’s also commonly known as Leopard Lily or Blackberry Lily, and I explored the history of its name a little in last year’s post (see Leopard Flower Variations). But based on how easy it was for me to find the phrase “blackberry lily” in my botany books and online sources like the Internet Archive — and how infrequently I got hits on “leopard lily” or “leopard flower” — I guess “Blackberry Lily” is its more common-common name. The Blackberry Lily is also one of several plants often referred to as “leopard lily” — such as those listed on the Wikipedia page Leopard Lily.

I’ve gotten in the habit of referring to it by its scientific name Iris domestica, simply because that keeps me from forgetting that it’s been classified into the Iris family and has never been considered a true lily. But it was only given the name “Iris domestica” in 2005 — see the excellent article Blackberry Lily from the University of Wisconsin’s Horticulture site for a history of its names — so in many gardening and botany books you may see references to its original scientific name, Belamcanda chinensis, especially if those books were published before the name change.

Compared to most other irises, Iris domestica is a late — very late — bloomer. I suspect in these photos the plants had been flowering for about a week, since the green seedpods you see in some of the photos have not yet opened to show the blackberry-looking seeds that engendered its “Blackberry Lily” common name. I almost missed them entirely: ’twas a hot and steamy July day when I came across them this year as I was melting my way out of the gardens, but I spent another hour or so taking these shots because they really, really wanted to be photographed.

That I almost missed them this year reminded me of another big miss from earlier in the summer: I never got a chance to photograph Tiger Lilies, because on one of my trips they had not yet bloomed, and by my next trip they were all spent and had blown away. Tiger Lilies seem to bloom almost all at the same time and don’t last long (this is probably not botanically accurate), and we had many multi-day windy thunderstorms right around their blooming time. But the fact that I missed them (and nearly missed Iris domestica) got me thinking that — with several years of photographs taken at Oakland Cemetery’s Gardens — I could probably put together a cheat sheet to remind me which flowers bloomed when.

So I did this: I went through my Lightroom folders for the past five years, and created a spreadsheet of all of the flowers I’ve photographed and the months I photographed them. I ended out with a list of 50 flowers, flower families, and blooming trees, which you can see here (as a pdf) or here (as a picture). Of course the dates reflect blooming times in the U.S. southeast — but I thought others might find the chart a useful reminder of when to be on the lookout for whatever’s blooming next. Among the delights I realized after assembling the spreadsheet: that anemone, angelica, coneflower and asters, goldenrod, and lycoris (a spider lily) will be there waiting for me, in September, October, and November. Wheeeee!
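
And if I ever want to rebuild that cheat sheet without all the clicking around, a small script along these lines would do it. This is just a sketch in Python, and it assumes hypothetical folder names that include a date and a flower name (something like “2023-04-12 Bearded Iris”); my real Lightroom folders aren’t quite that tidy.

```python
from collections import defaultdict
from pathlib import Path

def bloom_calendar(photo_root: str) -> dict[str, set[int]]:
    """Collect the months each flower shows up in, from dated photo folders."""
    months_by_flower: dict[str, set[int]] = defaultdict(set)
    for folder in Path(photo_root).iterdir():
        if not folder.is_dir():
            continue
        try:
            date_part, flower = folder.name.split(" ", 1)   # "2023-04-12", "Bearded Iris"
            month = int(date_part.split("-")[1])
        except (ValueError, IndexError):
            continue            # skip folders that don't match the naming pattern
        months_by_flower[flower].add(month)
    return months_by_flower

# Print each flower with the months it was photographed.
for flower, months in sorted(bloom_calendar("/photos/oakland").items()):
    print(f"{flower}: {sorted(months)}")
```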

Thanks for reading and taking a look!









Irises on Black / Notes On Experiences (2 of 2)

From “The Photograph” in Understanding Media: The Extensions of Man by Marshall McLuhan:

“The age of the photograph has become the age of gesture and mime and dance, as no other age has ever been….

“[To] say that ‘the camera cannot lie’ is merely to underline the multiple deceits that are now practiced in its name…. The technology of the photo is an extension of our own being and can be withdrawn from circulation like any other technology…. But amputation of such extensions of our physical being calls for as much knowledge and skill as are prerequisite to any other physical amputation….”

From “Embodied Perception” in The World Beyond Your Head: On Becoming an Individual in an Age of Distraction by Matthew B. Crawford:

“The still photograph turns out to be a poor metaphor for understanding visual perception, for the simple reason that the world is not still, nor are we in relation to it. This has far-reaching consequences, because some foundational concepts of standard cognitive psychology are predicated on the assumption that we can understand the eye by analogy with a camera, in isolation from the rest of the body. Nor is this a mere intramural fight between quarreling academic camps; what is at issue is the question of how we make contact with the world beyond our heads….

“The world is known to us because we live and act in it, and accumulate experience.”

From Six Degrees of Separation by John Guare:

“How do we fit what happened to us into life without turning it into an anecdote with no teeth and a punch line you’ll mouth over and over for years to come…. [We] become these human juke boxes spilling out these anecdotes….

“But it was an experience. How do we keep the experience?”


Hello!

This is the second of two posts wrapping up Iris Season for 2023, with a selection of my previously posted iris photos rendered on black backgrounds. The first post is Irises on Black / Notes On Experiences (1 of 2).


I ended the previous post — a discussion of my experiments with the artificial intelligence image generator Adobe Firefly — with the following:

“Outside the realm of graphic arts, photography typically captures an instant in an experience, with the experience implied in the relationship between a shared photograph and its viewers. With an AI-generated image, the photographer’s experience is eliminated: there is no living interaction with the external world and whatever story a photograph might represent is reduced to phrases typed at a keyboard. What this might mean for the evolution of photography is something I’ll speculate on in the next post in this series….

I have always enjoyed the entire process that ends with sharing my photography. From trudging around with my camera in the woods or a park, an urban landscape or a tourist attraction; to culling and sorting the images through post-processing; to organizing the results around some loose theme and posting them here — I like it all, even when any given step might stretch my patience or stress my aging joints. The idea that new software tools — AI image generators — could theoretically replace most of that workflow is more than astonishing….

Photography differs from graphic arts or digital art in two important ways. The second difference — implicit in my previous post and in the paragraph you just read — is that photography includes an experience that occurs in the external world (the world outside your head). Graphics or digital images produced by artificial intelligence tools don’t require such an experience, even if the images they produce are based on or derived from composites of images used to train those tools.

But the first difference between photography and AI tools (and digital art) is this: photography starts with a photograph, taken with a camera. This may seem glib and self-evident, and there are more complex ways to describe this inception (swapping “photograph” for “image,” then talking about light, color, image sensors, and lots of technical imaging terms), but the “first principle” remains:

Photography starts with a photograph, taken with a camera.

It may — or may not — matter what happens to that photograph next. Every photo I publish here goes through some post-processing: at minimum, there are colors, lights, shadows, and details that get adjusted every time. And there are always spots to remove — outdoors is very spotty! — which sometimes means I reconstruct damaged leaves or flower petals, or remove background elements that interfere with the photo’s balance or the way your eye might follow its lines. All of these are forms of image manipulation, but the image that results is still a photograph — because photographs are, and always have been, manipulated by the technologies used to create them or the technologies used to refine the results.

But as you’re probably already imagining, things start to get a little muddy when you think about different kinds of image manipulation, even those that have long been available with tools like Lightroom and Photoshop. If I take one of my photos of a flower in a field, and remove the field by converting it to black — is that image still a photograph? If I take elements of several photographs and use Photoshop to create a composite, is that image still a photograph? Image manipulation is a subject that Photography — with a capital “P” — remains uncomfortable with, yet it will be more and more necessary to develop a shared understanding of the differences between “photographs” and “images” as artificial intelligence tools continue to advance.

As I was cobbling together some research for this post, I came across this interesting article: Copyright Office Refuses to ‘Register Works Entirely Generated by AI’ — which describes how the United States Copyright Office will not allow AI-generated works to be copyrighted, because “human authorship” is not present in the creation of those works. This may seem like a woo-hoo moment for the regulation of AI images — but how long before someone effectively challenges that restriction because the prompts used to generate an image were typed into a computer by a human being?

But this isn’t the pinhead I want to dance around on; instead, I ask: how will they know the image is AI-generated? I knew that there were tools supposedly capable of differentiating between text written by humans and text written by, say, ChatGPT — but only learned recently that there were tools designed to identify AI-generated images. I won’t name them, though; here’s why:

I tested three of the tools using the images I generated with Adobe Firefly for the previous post and this one. Two of the three tools identified every one as likely human-generated (which they were not). The third tool fared better, but only got about half of them right. This could be because Firefly is newer than some of the other AI image generators, I suppose, but I still think it suggests we’re going to need better detective tools!

If you go here, you can see some of the images people have generated with Adobe Firefly, without signing in. You’ll notice, I’m sure, that many of the images clearly are not photographs and don’t try to be: they are, instead, fantastical renderings of different scenes that I like to call imaginaria. I have no doubt that the ability to create images like this requires significant technical skill and creative insight, including training in tools like Photoshop and a great imagination — or at least it did, until now, when all that’s required is that the artist be willing to concede a lot of their creative energy to a tool that will approximate their request and fill in its own blanks.

But I did wonder what else I might come up with if I decided to stay within the realm of (imitated) photographs, with bits of imaginaria. So I started with something simple, but slightly exotic, and asked Firefly to generate “a photograph of a Bengal tiger, in natural light.” Here’s what Firefly gave me…

… and I don’t think I would have obtained a better Bengal tiger photo if I’d gone to Zoo Atlanta and taken one myself.

I thought it might be cool to find a Bengal kitty-cat sleeping on my porch, so I updated the prompt to “photograph of Bengal tiger sleeping on someone’s front porch, in natural light.” And I got just what I asked for:

So then I decided to create some photos for my catering business web site (I have no catering business, and it has no web site) — one that offers wine tastings, including wine and cheese parties for iguanas. I used the prompt “photograph of an iguana on someone’s front porch, with a plate of cheese, and wine in a glass with a bendable straw.” Here are the resulting photos, which include me (not me) training the iguana to use the bendable straw, since, of course, iguanas can’t drink from wine glasses — unless you give them a bendable straw.

I then finished out the day with a little Birds, Bees, and Beers party (prompted with “photograph of a hummingbird drinking beer from a frosty mug” and “photograph of a bee drinking beer from a frosty mug”) for some of my closest friends:

I made only two kinds of post-processing changes to all the photos above: I cropped or used healing tools to remove the Adobe Firefly watermark and (sometimes) straighten the images; and I removed spots that annoyed me because… spots! The colors, shadows, lighting, and textures are exactly as Firefly produced them.

There is, of course, really no reason to do this (except to entertain oneself); but it does illustrate that, even outside the realm of fantasy or imaginaria, it’s possible to AI-generate images that emulate photographs but are completely implausible. Yet while implausible, the images could still be considered “logically correct” in that there’s only one obvious error: the bendable straw in the first iguana image is both inside and outside the wine glass. Still, these “photographs” fail my photography test: they don’t capture a living being’s experience, and they aren’t produced with a camera.

We have little understanding of how these photos are created, other than a sense that AI engines undergo training — but with what? We’ve all long ago ceded much control of whatever we post on the internet, our ownership obfuscated by incomprehensible, seldom-read privacy policies and terms of use. Adobe maintains that it uses “stock images, openly licensed content and public domain content” to train Firefly — but that distinction also implies that other AI engines may be doing something different. For some delightfully contrarian views on how AI is being trained, see AI machines aren’t ‘hallucinating’. But their makers are, where Naomi Klein asserts that AI training with our content is the greatest theft of creative output in human history; and AI Is a Lot of Work, one of many recent articles about the legions of human beings exploited to keep AI models on track. This paragraph just hints at some of the cultural (and legal) issues that AI tools are already presenting — even as the tools are teaching themselves to do things they weren’t designed to do.

The play (and film) Six Degrees of Separation, quoted above, is about many things, and the story revolves around the intrusion of an imposter (pretending to be the son of actor Sidney Poitier) into the habituated and aristocratic lives of a wealthy couple, Flan and Louisa Kittredge. The imposter uproots their lives by involving them in a series of his deceptions, leading Flan to compartmentalize what happened into stories he tells friends, but leading Louisa to a climactic speech where she demands to know what I quoted above: How do we keep what happens to us from being turned into anecdotes? How do we keep our experiences?

We seem to be in a similar position with respect to new technologies: AI image generators — even in their infancy — attempt to imitate photography, potentially supplanting actual photography; just as language generators (like ChatGPT) assert their ability to replace writing. But AI image generators won’t help someone become a photographer and language generators won’t make someone a writer, because they can’t answer the questions: Why do we need — and how do we keep — our experiences?

Thanks for reading and taking a look!


My previous iris posts for this season are:

Irises on Black / Notes On Experiences (1 of 2)

Bearded Irises in Yellow, Orange, and Burgundy

Iris pallida ‘variegata’

Yellow and White Bearded Irises (2 of 2)

Yellow and White Bearded Irises (1 of 2)

Purple and Violet Iris Mix (2 of 2)

Purple and Violet Iris Mix (1 of 2)

Irises in Pink, Peach, and Splashes of Orange (2 of 2)

Irises in Pink, Peach, and Splashes of Orange (1 of 2)

Irises in Blue and Purple Hues (2 of 2)

Irises in Blue and Purple Hues (1 of 2)

Black Iris Variations (and Hallucinations)