The image represents my reaction and how I felt at the time of the shot.

From there, I may process with AI – and this is the thinking behind the processing…

Within this process, I look to convey more than I could convey with the original.

______________________________________________________________________________________________________________________________________________________________________

This component is added by using specific AI prompts. 

Prompts are scripted data – programmed inputs and directions. They are not unlike HTML in that they are a simplified computer language. 

These prompts – and how they are very specifically manipulated – determine varied visual outcomes. 

______________________________________________________________________________________________________________________________________________________________________

In some cases, I will direct towards a certain artist’s style – and if that is the case, I always layer the image generations with at least THREE different artists’ styles to avoid any (future) copyright issues.  Moreover, using at least three styles will increase my own uniqueness – which is NEVER a conscious goal.

So these “layers” are done in three separate prompt generations.

That being said, “unique” can go only as far as the overall data from which a pool is selected.

As of 2024, the data pools from which AI extracts are without legal boundaries – and are managed by public big-tech companies (like AWS, Google, etc.). 

The data from which AI extracts, on public platforms, is also siphoned into a common algorithm which rewards social engagement. 

But I digress from the broad subject of AI.

______________________________________________________________________________________________________________________________________________________________________

What does this mean for me and my art?

It’s just another channel through which to make art.

It’s the Wild West – unregulated, unmanaged… and so wildly fun to be a part of!

______________________________________________________________________________________________________________________________________________________________________

Sure, I’d like to somewhat stay relevant, but really, it’s just cool. 

The more realistically I could capture a scene or moment, the less fun it became.  So I started experimenting…  with filters, lenses, prisms, more lenses, cameras, venues…  

And then in early 2023, I found MidJourney – which is the AI platform that I use to make my images.

To this point in mid-2024, I have generated more than 13,000 images.  Of those, probably around 100 were generated WITHOUT using my photographs.  They were “naked” – just prompts consisting of words. 

______________________________________________________________________________________________________________________________________________________________________

Without using my photographs as a basis, I floundered, lacking direction and feel.  I had NO CONTROL over where I wanted to take the generated art.

It would be disheartening, and it would sap my artistic bandwidth for some time afterwards.

______________________________________________________________________________________________________________________________________________________________________

I keep all of my prompts in Word documents (I am not organized enough, nor do I need the scalability, to move to Excel for these pre-scripted options that I have spent months cultivating).

Each output produces 4 different panels/scenes.  Chances are that I do not like the first couple of batches – so I make tweaks to either the prompts or the photo.

Once I have a scene, or two, that I like, I will re-process with either different prompts or photos.

Sometimes, months go by and I will re-process with either new prompts or using new software updates.

The AI tech is moving so fast that six months’ difference produces much different results – requiring total prompt re-dos. 

______________________________________________________________________________________________________________________________________________________________________

As a SPECIFIC example of how I use prompts…

Within the last few years, I have learned, studied, and become licensed in the art of Reiki (life force energy) healing.

With many of my “Zen/Meditative/Yoga” works, I have added the wording “the 963 Hz frequency” (also known as the God frequency) – which is said to be a vibration which enables connection to the Divine and the essence of a spiritual world.

Let’s say I start with one of my photos from visiting Myanmar in 2006.

By adding “963 Hz” in varied weights and forms within a specific character command structure, I can directly add a visual component – which can be a subtle nuance or a tone-setter, depending on how I control it. 
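
As a rough illustration of what such a weighted prompt can look like (the image link, subject wording, and weight values here are purely hypothetical – this is only a sketch using Midjourney’s “::” multi-prompt weighting and its --iw image-weight parameter, not my actual prompt):

```
/imagine prompt: https://example.com/myanmar-temple-2006.jpg monks at dawn::2 the 963 Hz frequency::0.5 golden mist::1 --iw 1.5 --ar 3:2
```

The number after each “::” tells Midjourney how heavily to weigh that phrase, and --iw controls how strongly the source photograph steers the result – shifting these values is how the “963 Hz” component becomes either a subtle nuance or a tone-setter.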

It may produce an amazingly profound and beautiful scene – or it may be a total dud that looks horrible.

Say the first few batches yield a nice image – I will either tweak that image further, or take it to the final stage: upscaling. The image that AI produces is a very small one, so to increase the quality – and to make the image printable – it has to be upscaled, which is done through a separate AI software platform.

Once I have an upscaled AI enhanced/generated image, I will then edit in either Lightroom or Photoshop.

Generally, I blend many different narratives or prompts – though this example is only one.

______________________________________________________________________________________________________________________________________________________________________