I recorded this with my phone (with help from my mom) and put it together in CapCut, adding music. I planned out the scenes and told her how I wanted her to hold the camera. I cut the clips and reordered some so they could tell a cohesive story. Unfortunately, due to YouTube’s limitations, I had to cut this down to one minute. We were lucky we still had snow; it melted pretty quickly in Fredericksburg.
I made this with footage from the websites above and edited it in OpenShot. I manually clipped each shot and placed them together, sometimes changing the audio levels or framing. I tried to make a trailer that implies we are looking through a robot’s eyes and shows different levels of destruction that could occur throughout the city, particularly if people or robots began to riot. I would’ve liked more shots involving robots or androids, but beggars can’t be choosers when it comes to stock footage. I hope it encompasses the themes of both cyberpunk and tech noir.
I did something slightly different from what the assignment asked based on what was available to me. I downloaded CapCut and searched for a template that I thought would be fitting for my character. There were some issues moving things between the web browser and app, but it worked out in the end. I used a recent template made by lorelei.
All pictures of Zer0 are by me. They were part of a photoshoot I did in Richmond in 2023. Photos of cityscapes were taken by Lennon Cheng, Josh Hild, Lerone Pieters, and Hin Bong Yeung on Unsplash. The song is a remixed version of Daft Punk’s “Harder, Better, Faster, Stronger.” I was looking for a different Daft Punk song but found this one to be fitting enough. Fun fact: Daft Punk was part of my inspiration for Zer0!
This week got interesting because of the snow on Tuesday. I used the day off to work on data science assignments. I had completed one daily create by that point and wasn’t super worried about being able to complete both. Wednesday was a bit of an off day for me, but I did find and eat food, as well as talk to friends. On Thursday, I completed my second daily create and struggled with a coding assignment (this time in Java) until Friday, when I had a long choir rehearsal that was supposed to happen on Tuesday but was postponed because of the snow. I watched the Black Mirror episode for the weekly assignment in the car on the way home and was unable to finish editing all the clips and adding audio by Friday at 11:59. Other people in the house were being loud anyway, which wasn’t great for focusing or being able to record a voiceover. I was experiencing big personal struggles this week that I’ll hopefully not have to deal with again for a long time. I got home, slept for 11 hours, and then finished the assignment on Saturday.
I used a clip of the Black Mirror episode “Hated in the Nation,” provided in the ds106 weekly assignment. I screen-recorded a scene and cut out disturbing portions from it, instead using a PowerPoint to put images over the video. The thumbnail comes from IMDb’s page for the episode. I recorded and put it together with OpenShot and Audacity, both free programs.
Well, that was hard. I can’t say I’ll be too keen on editing audio again after this week, but this is what I managed to do. Three daily creates: 4769, 4770, and 4771.
Things are getting rough as I have harder coding assignments in my other classes, plus tests, quizzes, and projects. Work-life balance is very interesting because I no longer have a roommate, which results in me not talking to anyone for a week and then taking up an hour or more of someone’s time when I have 50 things to do by Friday. Good luck, everyone; I hope you’re coping better than I am.
To make this, I got the tune from Free Sound and the voice from Eleven Labs. I plugged in what I wanted the AI voice to say and searched for a radio bumper song for the tune. I got a song and outro and put them in Audacity with the voice, then adjusted the levels until I could hear the voice more clearly.
This is not canon to the story; I just thought it would be a funny idea. There were a lot of sounds used in this: background traffic, Zer0 walking, robot noises, a car honk, a car crash, and of course, robot glitching noises. I put all of these files in Audacity, trimmed them, placed them carefully, and adjusted their volumes so that a story could be told through the sound alone.
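For anyone curious what that kind of layering actually does to the audio, here’s a minimal sketch of the idea behind it: each clip gets placed at a time offset, scaled by a gain, and summed sample-by-sample into the final mix, with the result clamped so loud overlaps don’t distort. The clip names and gain values below are made up for illustration; they aren’t the actual project files.

```python
def mix(tracks, length):
    """Mix (samples, offset, gain) tuples into one track of `length` samples.

    Samples are floats in [-1.0, 1.0], which is how most editors
    represent audio internally.
    """
    out = [0.0] * length
    for samples, offset, gain in tracks:
        for i, s in enumerate(samples):
            pos = offset + i
            if 0 <= pos < length:
                out[pos] += s * gain  # overlapping clips simply add together
    # Clamp to the valid range, like a hard limiter on the master track
    return [max(-1.0, min(1.0, s)) for s in out]

# Hypothetical clips: quiet steady traffic under a short loud honk
background = [0.5] * 8
honk = [0.9, 0.9]
mixed = mix([(background, 0, 0.4), (honk, 3, 1.0)], 8)
```

Turning the background down (gain 0.4) before adding the honk on top is exactly the kind of balancing Audacity’s gain sliders do, just written out by hand.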
The news jingle at the beginning of the recording was made by chimerical on FreeSound.org. For the voice, I used a text-to-speech feature of ChatGPT. I put these together in Audacity and uploaded the result to SoundCloud. Quite frankly, I didn’t know SoundCloud was still being used before this class required me to create an account. The news broadcast relates to my story, as it is broadcasting the moment the public found out that AI had gained consciousness.
I went to Chrome Music Lab’s Song Maker feature to edit and play the notes. The melody is from a Coldplay song, feelslikeimfallinginlove. I was listening to the song trying to figure out what music to write, so I started writing it by ear, adding some harmonies and extending notes to work around the limitations of Song Maker. I’ve been listening to Coldplay since I was in middle school, and I’ve been singing since preschool. Honestly, I was very relieved the song didn’t have any flats or sharps in it, or else I wouldn’t have been able to make it with Chrome’s Song Maker. If you want to mess around with it yourself, here’s the link.