The top image was AI generated with Adobe’s Firefly and looks more like Mary, the mother of Jesus, than Marilyn Monroe. The second image was AI generated on Craiyon.com, and I am pleased with the results. It looks like Marilyn Monroe, but her pose recalls the Mona Lisa, and the red lips and gold gown with shawl add to the overall effect. On both sites, I asked for Marilyn Monroe in the style of Mona Lisa.
I changed the Craiyon.com image into a pop art version, which has a cool, Andy Warhol look. To achieve this image, I used the phrase “Marilyn Monroe in the style of Pop Art.” I believe that creating art this way is a threat to artists because it takes time and talent to create original art on canvas. If the masses continue to produce and print art in this manner, it will diminish the livelihood of many.
Today I experimented with two AI tools to create art, based on the Demystifying AI article. The two tools I used were Microsoft Bing and Craiyon. Before this, I hadn’t used AI for anything other than grammar (I am referring to Grammarly), so this was new to me. In general, I have been against AI art on principle, but I have to admit that playing around with it was fun.
Below is a table comparing my results. I would say that the Craiyon pop art style photo is the worst – I’m not sure what Craiyon thinks pop art is. I think Microsoft did better in general, but I appreciate both versions. I also find it interesting that the two Microsoft images with people are framed the same way.
Chip Kidd Interview: What stood out to me in the interview with Chip Kidd is his response to comic books being considered a cinematic medium. He does not agree, because so much is left out and so much is left for the reader to accomplish. Kidd refers to movies, and how the film sets the pace, not the person watching it. Comics, however, allow the reader to be involved, meaning you have to bring something to it.
The Works of James Verdesoto: Verdesoto introduces color trends in movie poster design. Throughout the video, The Fast And The Furious posters caught my attention because they are some of my favorite films. I personally love action thrillers, and this series has shaped my taste in movies. When Verdesoto presents the first movie poster of the action series, the detail of the black-and-white faces of the main actors blends well with the green, yellow, and red outlining of the car. This draws the eye first into the car design, then into the poster as a whole.
Stranger Things Retro Style: It is remarkable how the popular television show Stranger Things arrived at its ’80s-style theme, and especially how the title design company Imaginary Forces created such a perfect retro-style intro. Imaginary Forces wanted a title design that was rugged and imperfect so that it evoked the 1980s. The film stock the company used for the lettering, Kodalith, produces a high-contrast image that draws in anyone watching the show.
Photoblitz: Seven Points
Take a picture from the inside of something looking out.
Create an interesting photo that includes looking through one object to see another.
A photo that looks better in black and white than it did in color.
“The dusky path of a dream…” Tagore. Photograph that idea.
A fork in the road. Casting a ballot. Buying bread. Choices abound. Take a photo of a choice.
Take a photo showing the wide open space, the great outdoors.
Take a photo that emphasizes the detail of a human hand.
Middlebury College: The Middlebury assignment, “It’s not art, it’s data,” supplies some very detailed descriptions of how AI imaging can be used. The site proposes the following steps for getting started with AI image generation:
Pick your tool, such as Craiyon.
Build your base: refine your prompt to generate a simple image that you enjoy.
Add some flair: allow for more detail in your description for the image generator.
Transform it: one example they share is “Marilyn Monroe in the style of Mona Lisa.”
Share your output: post the images you most disliked and most liked.
Dig deeper: they provide links such as “AI can make art that feels human. Whose fault is that?”
Overall, this post gave me some things to think about for the next time I use an AI image generator.
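The Middlebury steps are really just staged prompt refinement, and they can be sketched as plain string-building. The helper below is purely my own illustration (the function name and structure are not from the Middlebury site); real generators like Craiyon, DALL-E, or Stable Diffusion simply take the final prompt string:

```python
# Sketch of the Middlebury prompt-refinement workflow as string-building.
# build_prompt is a hypothetical helper for illustration only.

def build_prompt(base, flair=None, transform=None):
    """Compose a prompt in stages: base subject, extra detail, style transform."""
    parts = [base]
    if flair:
        parts.append(flair)          # "Add some flair": extra descriptive detail
    prompt = ", ".join(parts)
    if transform:
        prompt += f" in the style of {transform}"  # "Transform it"
    return prompt

# Build your base:
print(build_prompt("Marilyn Monroe"))
# -> Marilyn Monroe

# Add some flair:
print(build_prompt("Marilyn Monroe", flair="red lips, gold gown"))
# -> Marilyn Monroe, red lips, gold gown

# Transform it:
print(build_prompt("Marilyn Monroe", flair="red lips, gold gown",
                   transform="Mona Lisa"))
# -> Marilyn Monroe, red lips, gold gown in the style of Mona Lisa
```

Each stage just adds detail to the previous prompt, which is why the site has you generate an image after every step and compare.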
Populate the Landscape: Character Image.
The Grand Canyon, destination poster: The Golden Gate Bridge, where my character may be.
Weekly Summary: To open week three of ds106, I explored the creative assignments on the assignment board. The three videos that I watched (Chip Kidd, James Verdesoto, and Stranger Things) introduced numerous factors in the illustration of popular films and books. For example, James Verdesoto walks through his poster designs and how he tried to catch the eye with different color combinations. After I finished the videos, I moved on to the Photoblitz assignment; there isn’t much to say about it, to be honest. The Middlebury College assignment helped me learn more about AI digital imaging; it was informative, and I recommend reading through their website. With the Middlebury assignment, I created two visual design images that represent my character, and I think they speak for themselves. Overall, I did not have any trouble using the websites this week or producing the daily creates.
This week I was tasked with Middlebury’s Demystifying AI digital detox assignment. I used two AI photo generators to create art based on a prompt.
While neither was what I was expecting, this image was the worst. The character has no facial features, no distinctive characteristics, and looks out of place and uncomfortable. I know that technically I would be uncomfortable in the cold too, but still. This character could be anything, and the perspective feels wonky: I am seeing parts of the mountains that I shouldn’t be able to see, or wouldn’t see in something man-made.
This art was the better of the two options. The AI produced more distinguishing characteristics, and I could tell that it TRIED to recreate J. Cole’s features. I am also not sure why AI has a problem with creating hands; in both of these, the hands are in the pockets, which is okay, but it would be nice to see some differentiation in poses. The mountain here, however, is blurry.
I don’t see AI taking any artists’ jobs any time soon. AI will need a lot of advancement to replicate the complexity of human art, and the art it creates feels tone-deaf to me. As of now, AI cannot take an artist’s job, but that could change in the future.
We were given an assignment this week from Middlebury College’s Demystifying AI course. According to Professor Bond, this assignment will require us to use AI to make images, and it will definitely cause us to consider the repercussions of doing so. For this exercise, I used DALL-E 3 from ChatGPT4 and Stable Diffusion, two AI picture generators. I instructed these two AI image generators to create an image of Elvis Presley first. The result is displayed above. I believe that both image generators really hit it out of the park with this one, emphasizing his attractive features, his well-known hair, and even dressing him in authentic Elvis attire. Stable Diffusion produced the image on the right, whereas DALL-E 3 produced the image on the left. Since Elvis Presley is one of the most well-known singers, it is apparent that both AI images featured a microphone and a guitar that fit him. To be honest, I believe Stable Diffusion Elvis is a little closer to the real Elvis, but that’s just me being picky. Overall, both AI picture generators did a fantastic job of adhering to the prompt and covered all the bases for this first prompt.
Because the next part of the assignment requires us to add some flair to our previous prompt, I instructed both AI image generators to create an image of Elvis Presley performing at a concert. These images are displayed above; DALL-E 3 produced the picture on the left, while Stable Diffusion produced the picture on the right. While the Stable Diffusion image appears to be an older and heavier version of Elvis, the DALL-E image appears to be a young, confident, up-and-coming Elvis. The two images seem to reflect slightly different interpretations. Still, both pictures had spotlights, fans, and Elvis dressed like a star for a concert. Again, I believe the Stable Diffusion Elvis looks a little more realistic, but DALL-E handled my prompt quite well. All in all, these AI image creators did another excellent job.
For the next step of the assignment, we had to design an AI image of something unusual that we couldn’t locate online, building on our initial request. I requested an image of Elvis Presley dressed like a mermaid from both AI image generators. The two outputs are displayed above; DALL-E 3 produced the picture on the left, and Stable Diffusion produced the picture on the right. Elvis submerged as a mermaid, complete with all of the added elements, is what really made the DALL-E 3 generator shine. Although the Stable Diffusion image fulfilled my request, I feel that there are too few details in the image. It’s also frustrating that you can’t see Elvis’s entire mermaid tail. Still, I believe both of the AI image generators fulfilled the assignment I gave them.
The last phase of the assignment asks us to consider everything the AI picture generators produced. I thought the Stable Diffusion image of Elvis as a mermaid was the worst of the AI images, even though none of them were truly awful. As I mentioned in my note beneath the image, I believe it is simply devoid of detail: Elvis’s torso with the mermaid tail is partially visible, but it ends abruptly. Though it’s missing several details I asked for, I still think the photo is nice. The DALL-E 3 picture of Elvis dressed like a mermaid was the one I found best. With this one, the AI really showed off its creativity by dressing Elvis normally but adding a pink/blue mermaid tail. He is surrounded by a lot of sea life, which enhances the picture’s ambiance. In the final section of our reflection, we are asked to assess the extent to which AI image generation poses a threat to artists. Because I think there is value in a genuine image, picture, or portrait, I don’t think AI image generation poses a serious danger to artists. While AI-generated images can be visually appealing, it is evident that they are primarily artificial. I think artists will always have a place in this world because authenticity is what people love most.
One of the assignments for this week was from Middlebury College: we were tasked with creating a strange picture using two different AI chatbots and comparing our results. I took this assignment step by step and have three different versions of each step. My initial step was simply to ask the two chatbots, DALL-E 3 from OpenAI and SDXL Turbo from Stability AI, to give me a picture of a superhero. I was very open-ended with my question, and as you can see, I got two very different results. On the left-hand side, I got an image of a superhero that appears to have three legs, two capes, and a mask that looks like the one Star-Lord wears in the Marvel franchise. It is hard to tell if that is where the inspiration for the image came from, but the colors of the suit and mask are very similar to Star-Lord’s. On the right-hand side, we see a more typical superhero costume and an image that resembles Superman. The differences between the two chatbots show in the picture quality: the OpenAI image is more realistic, while the SDXL Turbo image is more cartoonish. Moving on, I decided to add more detail to the prompts to see what the chatbots would output.
The next step of the design challenge was to add some flair to the images and see what you could create with the chatbots you chose. I asked each chatbot for an image of the hero it had generated in my previous prompt, standing on top of a rock with one leg higher than the other, the way heroes stand in movies. On the left, the image was generated by DALL-E 3, and as you can see, what I got was pretty close to what I asked for. The image on the right is a little more off: it did not have the hero place one leg higher than the other, though it did have him standing on a rock facing the world, and it added some extra people in the background even though I did not ask for them. The face on the superhero on the right also looks a little strange and does not seem to be fully generated. Finally, I made one last request to the chatbots, which is shown and explained below.
Here the prompt for the chatbots was to create an image of a superhero riding a cat on the moon. On the left, we can see that ChatGPT did a great job of creating the image I requested, and it looks very nice overall. On the right-hand side is the SDXL Turbo output, which was given the same task. For some reason, this chatbot ignored the “on the moon” part of my prompt and instead had the superhero ride the cat through some random city. It also created a superhero that was a mix of Superman and Batman rather than keeping the same costume throughout, which was interesting to see. Overall, the quality of the SDXL Turbo pictures was much worse than ChatGPT’s, and ChatGPT was also more accurate with its output and more appealing to look at.
I would say that the image on the right-hand side was the worst image I got because it was not really close to what I asked for. I asked for a superhero riding a cat on the moon and got a man dressed as two different superheroes riding a cat in some random city. That is why I think that one was the worst: the output was not what I wanted at all. I would describe it as a child taking in instructions and only paying attention to half of them.

On a more positive note, I thought the picture below was the most impressive, and I really liked how it turned out. I think the simplicity of my prompt honestly made it easier for ChatGPT to interpret and output an image that was exactly what I wanted. The setting sun in the background and the cape flying behind him are extras beyond what I asked for, and it all comes together to create the perfect image. One of the reflection questions in the assignment asked whether AI may pose a threat to artists and where I think it would affect them. I think AI poses a pretty serious threat, because whatever artists create, AI can create the same thing, if not better, in almost no time, while also making it accessible to everyone immediately. AI will take that sense of creativity away from artists, and when creativity is lost, it is very hard to make interesting content, so it makes sense that artists might struggle in the future.
The Middlebury College AI assignment was an interesting read and activity. As an introduction to the assignment, there was an in-depth discussion about the regulation of AI and how the development of free, accessible online tools has created issues for original creators and artists. The widespread recognition of Stable Diffusion is something I’ve come across in my time on the internet as well. It’s becoming increasingly difficult to tell the difference between real art and AI-generated art, especially for already-animated content. If I were an artist (which I am most definitely not), I wouldn’t be happy with the direction of the industry, and I would frankly be worried about how this changes my creative direction for the future. I’d assume that many artists are stuck at a crossroads between pursuing a type of art they’re truly excited about and adhering to the AI buzz, losing some originality in the process.
For the actual image-generation portion of the assignment, I decided to create an image of a dachshund on the moon, and I formatted the prompt so that the dog would resemble an astronaut, akin to Neil Armstrong during the first moon landing. I used both Stable Diffusion and Craiyon to generate these images.
This first one from Stable Diffusion was amazing, 10/10 in my book. The image feels very realistic yet still has an artistic flair. The generator even stayed true to the prompt and added an American flag on the left sleeve. The only criticism I’d have is the weird lighting coming from what I assume is the sun; it doesn’t seem very realistic for there to be beaming sun on the moon. Otherwise, it kind of scares me how good this image is.
This one from the Craiyon generator is closer to what I expected. The image is very low quality and doesn’t adhere to the guidelines I set in the prompt. It’s almost comically bad: the moon completely overtakes the subject of the photo, which was supposed to be the dog.
I didn’t get any result back after submitting my images on the website, but I will update here if anything else pops up from this adventure.