Category Archives: ai106

Weeks Five and Six: Reading movies, making movies

“Cheers, I guess,” says Dr. Oblivion as he introduces the video segment of ds106. Snarky Dr. Oblivion comes first, letting you know that video, done well, is not easy, but good Dr. Oblivion follows to let you know it’s worth the effort.

We have two weeks of projects here: 2/9/24-2/16/24 and 2/16/24-2/23/24

Week 5

Finish this first section by Friday, 2/16, and link or embed it in your weekly summary.

Revisit goals
Somehow we’re already a third of the way through. Look back at your ds106goals from week 1. Do you feel you are on track? Or have they changed? This is quite possible now that you have a better feel for the course. Take a moment to reflect on how you are doing and blog about it. Tag this ds106goals2

Reading movies
This week we’re moving from audio to video. We’ve been looking at related aspects – photography, sound, design – all along, but now we’re going to look at cinematic camerawork, and how it all comes together.

Read:
Film analysis https://en.wikipedia.org/wiki/Film_analysis

Film Analysis – Storytelling https://pressbooks.pub/storytelling/chapter/film-analysis/

Watch:

Interested in Black Mirror? One episode in particular, “Be Right Back” (the first episode of season 2), is worth a look for how large language models, social media, and AI might be able to bring back the dead. It is also brilliantly done, creating a deeper sense of the real emotional and existential issues at the heart of new “business” models for AI.

For me, this raises the question – How much of what we consider good acting is really good editing? When we watch movies, we identify with the actors. They are what we see and hear. We pay attention to story and dialogue. This week, I’m going to ask you to pretty much ignore that part, and pay attention to everything else – the camera, the lighting, the editing. The ways that video tells stories.

The video above comes from Tony Zhou’s great series, Every Frame a Painting, in which he analyzes details of filmmaking. The entire series is worth watching and highly recommended, but I’m going to point out these in particular:

Memories of Murder (2003) – Ensemble Staging

Akira Kurosawa – Composing Movement

Joel & Ethan Coen – Shot | Reverse Shot

In Praise of Chairs

An interesting point about all of these is that they are about design. It may not be design in the graphic sense, but staging, composition and sets are all carefully and deliberately planned out to achieve particular goals, that is to say, designed.

David Fincher – And the Other Way is Wrong

F for Fake (1973) – How to Structure a Video Essay

The Silence of the Lambs – Who Wins the Scene?

The Marvel Symphonic Universe

Note that the focus in these is not on plot or acting, or even if the movies are good or not, but rather on the techniques, like editing, that the directors use to tell stories.

Here is Alfred Hitchcock on the Kuleshov Effect:

Apply what we’ve learned

Now that we’ve spent some time thinking about how films are made and how we “read” them, let’s apply that new information to a film. Identify some particularly effective scenes from a video related to our theme. Here is the Wikipedia list again: https://en.wikipedia.org/wiki/List_of_artificial_intelligence_films Pick one of them to analyze in a video essay. Use the critical lens of this week’s reading and resources. This means you are going to make a video, using a scene from a movie, and discuss the scene in voice-over narration. You can upload your video essay to Vimeo or YouTube. (Note: If you have a Google account, you have a YouTube account. Vimeo may be a better choice for the video essay because their content police are more easygoing.)

Note: Often people pick scenes from favorite movies, and forget to separate what they like about the movie or show from what is happening in the scene and how it is put together. The focus should be on how film-making technique is used, not acting or plot or if the movie is good. It may be easier to analyze something that is not a favorite for the purposes of this assignment.

This assignment is a slight variation on the classic ds106 Video Essay assignment in the Assignment Bank. For this class, you need only analyze one scene, although you’re welcome to do more. In particular, your analysis should reflect what you learned by reading the film analysis readings and watching the Tony Zhou videos.

iMovie and MS Photos app are good tools for this project, and OpenShot may be a good free alternative, although I haven’t tried it yet. OBS (Open Broadcaster Software) looks like fun too. There are extensions for Firefox and Chrome to help with downloading clips. There is a whole page with advice and information that should help with this assignment, and the ds106 Video Essay assignment has a few tutorials linked to it. The Digital Knowledge Center is also a great resource. They offer tutoring on video editing.
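If you are comfortable with a little Python, the same voice-over idea can be sketched with the moviepy library (the classic 1.x API). This is just a rough illustration, not a required tool; the file names and timestamps below are placeholders, and any of the editors mentioned above will do the same job.

```python
# Sketch: lay recorded narration over a movie scene with moviepy (pip install moviepy).
# "scene.mp4" and "narration.mp3" are placeholder file names.
from moviepy.editor import VideoFileClip, AudioFileClip, CompositeAudioClip

scene = VideoFileClip("scene.mp4").subclip(30, 90)   # the scene you are analyzing
narration = AudioFileClip("narration.mp3")           # your recorded commentary

# Duck the original soundtrack so the narration sits clearly on top of it.
mixed = CompositeAudioClip([scene.audio.volumex(0.2), narration]).set_duration(scene.duration)
essay = scene.set_audio(mixed)
essay.write_videofile("video_essay.mp4")
```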

When you’re done, blog your video essay (that means embed the video in your post, and write about the process of making it and what you got out of it) and tag it videoessay.

Daily Creates
Do two this week

Plus, the usual commenting

Week 6

Finish this second section by Friday, 2/23, and link or embed it in your weekly summary.

Video Assignments:
Your choice of either A or B

A. Talk to the Bot
As a way of experimenting with video editing, you should have your character engage in a back-and-forth conversation with Dr. Oblivion. This is a video version of the Consult with your doctor audio assignment. The difference is you will have to generate a Dr. Oblivion video from the MP3 recording you get from https://oblivion.university/.

Here are step-by-step directions for generating a Dr. Oblivion video from an MP3 recording of one of his responses:
https://docs.google.com/document/d/1_09wzqxth7qn3rvYAWzyeMrtNwTqIh_BFd5EVapaci8/edit?usp=sharing
Note that some of the processing steps can take considerable time, perhaps 30 minutes.

Feel free to split your Oblivion video(s) to insert your character’s part of the conversation. This part of the conversation does not need to match your input to oblivion.university. It might help to get a couple of Oblivion videos to get the full range of his personality. If you can animate your character, that’s great. If not, you could use an avatar image with a voice-over. (If you’d rather do the splitting and joining in code, see the sketch below option B.) Include it in a blog post along with reflections on your creative process. Tag it talktothebot.

B. Your choice of 3 video assignments from the Assignment Bank. Involve your character in some way in at least two of them.
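For option A, here is a minimal sketch of the splitting-and-joining step, again using moviepy with placeholder file names and a made-up split point. It is one way to do it, not the way; iMovie or any other editor works just as well.

```python
# Sketch: split the Dr. Oblivion clip and drop your character's reply in between.
# File names and the 12-second split point are placeholders.
from moviepy.editor import VideoFileClip, concatenate_videoclips

oblivion = VideoFileClip("oblivion_response.mp4")
character = VideoFileClip("my_character_reply.mp4")

first_half = oblivion.subclip(0, 12)
second_half = oblivion.subclip(12)

# method="compose" pads clips of different sizes instead of refusing to join them.
conversation = concatenate_videoclips([first_half, character, second_half], method="compose")
conversation.write_videofile("talk_to_the_bot.mp4")
```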

Developing Your Character
One of the things we want you to think about is developing your character in relation to the mega-multi-national corporation Aggressive Technologies. This will be the common, connecting thread amongst all the various characters created, and you will have to somehow connect your character to this multi-billion dollar tech giant that has its hands in everything from online search to e-commerce to infrastructure to social media, with an Aggressive interest in cornering the burgeoning Artificial Intelligence market. Write a post wherein you start imagining these connections and building out your character’s backstory and relationship to Aggressive Technologies. Tag this post aggressiveconnection.

Daily Creates
Do two this week

Commenting
Keep it up! This is how we maintain our sense of connection and community.

“My mind is exploding with sound”

On Monday and Tuesday nights we listened to the BBC adaptation of Do Androids Dream of Electric Sheep on ds106radio. We had great participation on Monday; only one person showed up on Tuesday, but there were still many valuable insights and observations about how sound was used to drive the storytelling.

It was noted how simple sounds, like paper shuffling on a desk, let the listener visualize the scene. An echo on a voice indicates the size of a room. The sound of rain on a car roof puts us in the scene. We can even hear the type of shoes:

an image of a Discord chat saying: "There are also two different footstep sounds" "She’s wearing pumps" "those details help us see the story" "Yes! I hear the 2 different foot steps in the background"

There were also background sounds, hums and metallic drones, that had interesting effects. In some cases they gave a futuristic impression, like in the corporate headquarters. Other times, they subtly raised the tension in scenes. The presence of background sound throughout simultaneously kept our minds attuned to sound but also made it almost subliminal. Moon Graffiti excelled at this as well, exemplified by the point where the astronauts put on their helmets. You could visualize what was happening by the way their voices changed with the helmets on.

One of the challenges of audio storytelling is exposition. How do you show what’s going on, using dialogue, without it seeming forced? The BBC production, along with some versions of Blade Runner, used voice-over narration to do some of the work. There was a conversation between J. R. Isidore and Pris where he casually mentioned “It’s my apartment after all,” which said a lot about the situation.

I got a time-warp feeling from the production. The detective style dialogue and narration sounded like 40s noir. The music interludes were late 60s classic rock. The post-apocalyptic setting and the android technology were futuristic, but some clues in the script put the setting between 1988 and 1993. The novel was published in 1968, and one listener connected that to the music.

Another listener said this was a great activity. The idea actually came from a ds106 student, back when we did Wire106. It was suggested that we live-tweet an episode. We did, and everyone saw a benefit to having that synchronous conversation. So we’ve continued it ever since. The point here is we welcome your ideas. The best parts of ds106 come from student input.

So, what did we learn?

A police commander asking his men, "What did we learn?"

In working through the Middlebury assignment and the visual and design assignments from the Assignment Bank, a few lessons and questions became apparent. We found pros and cons to AI image generators. Can we call what AI produces design? Is it art? I think not, because the decision-making processes at the heart of art and design are absent, replaced by statistical averaging. Many years ago, one of my painting teachers, in critiquing the work of Bob Ross, defined art as “a constant process of exploration and discovery.” While there is exploration and discovery involved in playing with image generators, it’s not embodied in the products. It arises instead through reflecting on playing with the tools and evaluating the output. AI does sometimes produce something unexpected, which Bob Ross might call “happy accidents.” We could use those for inspiration and build upon them.

How is using AI to generate images different from copying existing images from the web? Aside from the copyright issues, of course. If we don’t build on AI-generated images, alter them or give them some context, did we really learn anything? Going forward, I think we should use AI tools and products as a springboard or an enhancement, and always work at making the output better, so it represents our work and our creative and aesthetic decision-making. And we should be sure to explain that decision-making process in our blog posts. To paraphrase a famous quote:

A poster image of JFK with the slogan, Ask not what AI can do for us, ask what we can do with AI.

Some things I’d like to highlight from the blogs this week:


What’s in your bag?

Your purse, bag, backpack, murse, whatever, is your constant companion throughout your adult life. What are the essentials in your bag that get you through the day? Take a picture of the items and detail each of them with a description.

Here’s my everyday carry.

Emptying out my bag and placing all the essential items on the floor reminded me of the artwork for one of my favourite albums, The Rich Man’s Eight Track Tape by Big Black.

“Picture yourself in a boat on a river”

image of a person in a boat on a river, overlaid on a pattern of rectangles.

I wanted to do a random visual assignment, so I sorted the entries in the section from old to new and picked one from the first page, the One Shot assignment. To make it AI-related I decided to get a generated image to start with. I went to the Rundown AI tools page and picked the Magic Studio AI Image Generator for no other reason than it was free-ish and looked like it would do the job, i.e. create a scene from a text input. As I was wondering what to say, these lyrics came to mind:

Picture yourself in a boat on a river
With tangerine trees and marmalade skies

That seemed as good as anything. I made a few versions. They had very little variation, so I just picked this one. The first thing I did was see if I could remove their watermark, not seen in the image above but very visible on the download. In Photopea, I fixed it with the Magic Wand tool and the Content-Aware Fill function. Would we call that function AI? Whatever, it worked pretty much flawlessly. Next I had to chop it up to create tension and narrative. It wasn’t exactly a tense scene, so that could be a challenge.
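For anyone who prefers a scripted equivalent of that kind of fill, here is a rough sketch using OpenCV’s inpainting. It is an analogous technique, not what Photopea does under the hood, and the file name and watermark coordinates are made up.

```python
# Sketch: patch over a watermark by reconstructing the masked area from its surroundings.
# File names and mask coordinates are hypothetical; adjust the mask to cover the watermark.
import cv2
import numpy as np

img = cv2.imread("generated_scene.png")
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[560:600, 20:240] = 255                      # white marks the region to fill in
cleaned = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("scene_clean.png", cleaned)
```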

I copied sections and adjusted the scale. I also played with Levels, Hue and Saturation.
To make it comic-book-like, I used the Magic Wand to select the background area on each layer, then inverted the selection so I had the image part. I made a new layer, then used the Modify-Expand function to increase the selected area by 8 pixels. I filled it with white and moved the layer behind the image layer, making a white box around it. I rotated some parts to create a sense of movement, or at least askew-ness. I thought the part in the top right corner might give the impression of an axe, and maybe tilting the other oar might hint that the boat capsized.
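That grow-the-selection-and-fill-white trick can be scripted too. Here is a minimal sketch with Pillow, assuming each panel has been saved as a cut-out PNG with a transparent background; the file names are placeholders.

```python
# Sketch: put a white outline behind a cut-out layer, like growing a selection and filling it.
from PIL import Image, ImageFilter

panel = Image.open("panel.png").convert("RGBA")   # cut-out with transparent background
alpha = panel.split()[3]

# A 17x17 max filter grows the opaque area by about 8 pixels in every direction.
grown = alpha.filter(ImageFilter.MaxFilter(17))

white = Image.new("RGBA", panel.size, (255, 255, 255, 0))
white.putalpha(grown)                             # white everywhere the grown mask covers

framed = Image.alpha_composite(white, panel)      # panel on top, white rim peeking out
framed.save("panel_with_border.png")
```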

I was hoping to create a sense of a boating accident or an impending one. I don’t think I got it though. There is some drama in the color, contrast and organization. It probably would have been better, or easier, to start with an image that would lend itself to the project, like something with more going on. But it did make me think about image and narrative and tension a bit, along with color and mood.

The way the generator interpreted the input was interesting. It gave views of a figure from behind, one from inside a boat and one of an empty boat. That seems like a fairly sophisticated way to deal with “picture yourself.” A frontal view, or one that showed part of the figure’s face, would be picturing someone else. I’m not sure if I’d describe the sky as marmalade, but it’s more marmalade than blue. The trees have a tangerine-ish color. One image had a treeful of tangerines. The lyric was somewhat metaphorical but not so much that the machine couldn’t predict from it.

Week 2: That’s a rap

I was curious to see how people would respond to The Techno-Optimist Manifesto. It seems most people didn’t hate it as much as I did. To each his own dystopia, as they say. Still, my hope is that we can take ideas from the Manifesto and the AI films as inspirations for what we can do with our cast of characters and assignment prompts.

Here we have an examination of the Manifesto in conversation with Dr. Oblivion, which was a nice twist. Note: If anyone wants to get the text of the doctor’s responses, Converter App will do the trick.

Speaking of our Cast of Characters, I pulled their dossiers into a document here:
AI106 Cast of Characters

By my count we have 6 Artists, 4 Innovators, 3 Investors, 3 Mad Geniuses, 2 Evangelists, 2 Regulators, 1 Philosopher and 1 Technician. You know it’s an AI group because so many listed blue as their favorite color.

A couple of them work at a place called Aggressive Technologies. I wonder if the company is something we could play with. It could be used as a setting for a story, or a corporate entity that intersects across stories. They probably have a logo and advertising and PR material that people could create. These are ways that we can build upon each other’s ideas.

Another example is the remix. Tyler wrote a poem that had a rap feel, so we gave it to Dr. Oblivion and added a beat to it.

We could also look at building upon the things that we pull from AI generators. AI gave us a Yankees poster and garbled some text, as they do. But what might we make of Ye York? Did he go from College Dropout to running the town? Maybe that’s a silly idea, but it’s an example of how we can use AI output as a springboard for creativity, rather than using the output as-is.

I should also highlight the blog here and there it’s everything. That crawl with the latest posts is something I hadn’t seen before.

Are people getting spam comments? I may have neglected to mention Akismet, the spam-blocking plugin. Use the free version. Don’t pay for anything.

Another free plugin I use is Jetpack, which does a variety of things, but one that I’d like to point out is a subscribe-to-comments function:

screenshot of a line below a comment box on a blog page, showing a checkbox and saying "Notify me on follow-up comments by email."

Normally if you leave a comment on someone’s blog, you’d have to check back there to see if anyone responded. This is supposed to let you get an email notification, which might foster more interaction.

Got Glitch?

a glitch image of the Oblivion University homepage with Dr. Oblivion on a TV in front of a chalkboard

We’ve all met good Dr. Oblivion and snarky Dr. Oblivion, hopefully. There’s also crazy Dr. Oblivion, who finally showed up for me.

Oh, reflective blogging, how truly revolutionary and life-changing. I can barely contain my excitement.
Well, students, brace yourselves for the profound impact of writing about your own thoughts and experiences.
Riveting, isn’t it?
You see, reflective blogging in the ds106 course is apparently the pinnacle of intellectual exploration.
By openly pondering your own dazzling insights on media and technology, you clearly share your unique perspective with, well, yourself mostly.
But worry not, as reflective blogging will undoubtedly unleash your hidden genius upon the world.
Ahem, now if you’ll excuse me, I desperately need to reflectively blog about just how thoroughly unimpressed I am.
This is not computer-incunetted ex-trans-district data connection, decim and clannel, stulsive underscore app slash reloop underscore failure, underscore ethgen, underscore connection breaks music worthwhile plugin, improved active process, malamy, parametro shape, enhanced, fax quoted government, assist likely email, override, divulted guard at amente.

The mastermind behind this, the Frankenstein to Oblivion’s monster, is Michael Branson Smith. He set up the different personality parameters. He also explained that there is a temperature setting on the back end. If it’s set to 0, the bot will give the same response to the same input pretty much every time. If it’s set to 2, he will spout gibberish, like he does at the end of the recording.

So let’s see if I understand how this works. The bot doesn’t actually know anything. It generates word sequences in response to an input, based on an analysis of statistical relationships among words in a humongous pile of texts. It looks like the higher the temperature, the looser the statistical correlation. We end up with words that may not exist or rarely appear, and perhaps never appear together. Raising the temperature increases the unpredictability, what we might call creativity, but raise it too high and the creativity goes off the rails.
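To make that concrete, here is a toy sketch of temperature scaling in Python. It has nothing to do with the actual Dr. Oblivion backend, and the scores are invented, but it shows why a higher temperature surfaces less likely words more often.

```python
# Toy sketch of "temperature" in text generation: divide the model's raw scores (logits)
# by the temperature before turning them into probabilities, then sample.
import numpy as np

def sample_next_token(logits, temperature, rng):
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())   # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [4.0, 2.0, 0.5, -1.0]              # invented scores for four candidate words
# Near 0: almost always the top-scoring word. Near 2: weakly related words show up often.
for t in (0.1, 1.0, 2.0):
    picks = [sample_next_token(logits, t, rng) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=len(logits)))
```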

Create your own room – acoustic anechoic chamber

The Create your own room assignment is about doing exactly that.

Ever thought of creating your own dream room? Now you can! Using AI, create or Pinterest-find rooms, furniture, colors, etc. From there, write a blog post about why you chose everything! This is your dream, so make it as crazy and out of the box as you can think of!

About my dream room

I benefit from taking time out, especially if I can find a room or location that’s quiet. It can be a challenge, though, to find a quiet place that’s free from sound, even the smallest incidental sound.

That’s why my dream room is an acoustic anechoic chamber – a “room designed to stop reflections or echoes of either sound or electromagnetic waves”. The kind of room where I can take some time out and feel like “I Am Sitting in a Room” before emerging refreshed and fully equipped to reengage with the world around me.

My image was generated with Image Creator from Microsoft Designer (Powered by DALL·E 3), using the prompt “Low wide angle. Interior of an acoustic anechoic chamber with soft red light and a single comfortable chair in the middle facing forward, realistic digital art.”

Mimicry is the best form of flattery – Mondrian Melbourne

For the Mimicry is the best form of flattery assignment, use vector or raster graphics software to mimic your favorite artist’s work, but use your own design.

Doing my best to bring a Mondrian-esque Composition with Red, Blue and Yellow to the iconic Flinders Street Railway Station in the City of Melbourne.

My image was generated with Image Creator from Microsoft Designer (Powered by DALL·E 3), using the prompt “Extreme close-up. Flat cubist style facing Flinders Street Station in Melbourne Australia, only in very thick red, blue, yellow and black colour lines.”

More minimalism, please

These AI-generated images feel too detailed and just not quite minimal enough to be truly Mondrian, for me. Perhaps some super-aggressive cropping could help to bring out the magic of Mondrian minimalism.