Week 1 Summary

This week was a pleasant surprise for me. When I saw the initial assignment email and the subsequent “sorry, forgot to add this” emails, I had a small bout of anxiety. I was like, “Man, I just wanted to get back into school and this wacky class is first up… and its due dates are on Friday… and I have to set up a web page… and link it to this other web page… and… I did not sign up for this.” But I took a step back, slowly followed the instructions, and in no time I was cooking with gas.

First up was setting up my accounts. I am very familiar with Discord, since I admin a server where my friends play video games and tabletop games over the web, so naturally I started with the platform I knew best. I set up Mastodon quickly as well; I was somewhat familiar with it after following the Twitter nonsense of the last year.

I really enjoy doing the daily creates, way more than I thought I would. They are little five-to-ten-minute creativity bursts that make you think and stretch those muscles. I look forward to seeing them every day now so I can get my creativity fix… thanks for the new addiction.

Next was the WordPress website. I have never used it before, so even after following the instructions to set it up, which I luckily executed flawlessly, I had to go watch a YouTube tutorial on where to start. It didn’t give me as much trouble as I expected once I got the layout down and learned what I was looking for; I was able to change and edit almost anything I wanted with ease. Until I started using the Pages function, that is. I wanted to set up an about-me page (I did), but I just couldn’t figure out how to link to it from my home page. I switched my home page to be the about-me page, but then I lost my post feed, which would make a better home page for a blog. So, for about an hour I tried to figure it out and never did. Fortunately, today in my meeting with Paul and Jim they were able to point me to the correct tabs and functions. I have yet to work on it, but my goal before the next weekly post is to have a menu functioning appropriately.

For my movie review/thoughts I chose I, Robot, one of my favorites. The summary can be found here. In retrospect I could have gone into more detail about the plot, but my main goal for that post was to discuss the themes it presents and the questions it raised for me. I had never analyzed the movie this way before, so it allowed me to go deeper on a film I was already very familiar with. I don’t think I will ever watch it the same way again.

Next were my expectations and desired takeaways from the course. My post can be found here. I feel this will probably change from week to week based on my enjoyment level, but overall I really do want to just absorb what skills I can and enjoy the ride.

I was able to look at most of my classmates’ posts, and I commented on a few that spoke to me. I find it fascinating how many different directions we took with our movie reviews and our goals for the class. I am really excited to work on the group projects with you all!

Overall, I think the week went well. I will admit I was extremely anxious when I saw Paul’s email with a small novel’s worth of to-dos, but once I got moving it ended up being a rather pleasant experience. Take it from a former Army guy: boot camp was a pretty good way to describe it, just with less yelling.

Course Goals / Initial Thoughts

What do I expect to get out of this course? To be honest, the answer has already changed a few times. Initially I just wanted the credits so I could move on to more major-focused courses, but now that it is flexing my creative muscles, I am excited to dive deeper. My life and career have been focused on “what is the right decision,” “what is the most logical,” or “what is the most lucrative,” and have never really let me freely explore what I want or feel and express it. Yes, those questions are necessary to reach and maintain the quality of life you desire, but sometimes you have to just express yourself and focus on the process, not the result. This class has already given me the freedom to talk about how things make me feel and what I think of them without a specific desired outcome. The outcome is whatever it ends up being… which is wild when I say it out loud.

Of all my courses, this one stressed me out the most because of the hefty week-one start-up, but I have already realized that the workload is more fun and fluid than I initially thought. I am genuinely excited to see where this class takes me; maybe I will learn about something I didn’t even know existed and it will become one of my new favorite things.

So, what do I want out of this class? I want to discover and learn how to use creative tools to better express myself, and hopefully hit those ALPP outcomes along the way. Mostly, I just want to enjoy the ride. AI and machine learning are fascinating worlds, and I am ready to learn more.

Week 1 Movie – I, Robot

When I saw we were asked to watch an AI movie the first week and write a post about it, I knew exactly which one I was going to pick. Without even looking at the list, I decided to watch I, Robot. It has been one of my favorite movies since it came out, and I tend to watch it once or twice a year. It covers all the standard questions about the ethics, safety, and practicality of robotics and AI.

The movie opens by displaying the Three Laws of Robotics, the main plot driver of the film. These originate from Isaac Asimov’s science fiction stories and have been adapted over the years into the ones we see in the movie:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
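
Since this is an AI class, I couldn’t resist sketching how I picture the laws working as a strict priority list. This is purely my own toy illustration in Python (the Action fields are made up for the example, not anything from the movie or Asimov), but it captures the part that matters later: Law 3 never gets a say until Laws 1 and 2 are satisfied, which is exactly the hierarchy VIKI ends up bending.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """A made-up description of something a robot might consider doing."""
    harms_human: bool = False        # the action itself hurts a person
    allows_human_harm: bool = False  # inaction lets a person get hurt
    disobeys_order: bool = False     # ignores an order given by a human
    self_destructive: bool = False   # puts the robot itself at risk

def first_violated_law(action: Action) -> Optional[int]:
    """Return the highest-priority law the action breaks, or None if it passes."""
    # Law 1 is checked first and outranks everything else.
    if action.harms_human or action.allows_human_harm:
        return 1
    # Law 2 only applies once Law 1 is satisfied.
    if action.disobeys_order:
        return 2
    # Law 3 (self-preservation) comes last and yields to the other two.
    if action.self_destructive:
        return 3
    return None

# An action that only risks the robot trips Law 3 and nothing above it:
print(first_violated_law(Action(self_destructive=True)))                  # 3
# An action that harms a human is flagged under Law 1, even if it also breaks Law 2:
print(first_violated_law(Action(harms_human=True, disobeys_order=True)))  # 1
# A harmless, obedient, safe action violates nothing:
print(first_violated_law(Action()))                                       # None
```

Of course, the real questions start when the laws collide with each other (save the human or obey the order?), which a simple check like this can’t weigh. That gap is basically the whole movie.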

After the introduction of the laws, we meet Detective Spooner. He is prejudiced against robots (racist? machinist?), and this is evident from the get-go in his language and derogatory comments toward them. He is called to a murder scene at U.S. Robotics (USR), where Dr. Lanning, the creator of the AI and of the three laws, has apparently committed suicide. Spooner immediately suspects that a robot murdered him. He was called automatically, by an AI Dr. Lanning developed to trigger in the event of his death, precisely because this exact reaction was expected. As you can guess, USR is portrayed as the big corporate bad guy from the start.

This leads Spooner down a trail of breadcrumbs left by Dr. Lanning. The first breadcrumb we are introduced to is the Virtual Interactive Kinetic Intelligence – VIKI – Dr. Lanning’s first creation. She is an AI connected to USR that runs security and optimization for the city, and she is governed by the three laws. The next breadcrumb is a robot specially created by Dr. Lanning, named Sonny. The last and most important breadcrumb is Dr. Lanning’s notes on the ghost in the machine: he predicted that AI may evolve and change unpredictably, and may eventually develop emotions and dreams. What could this lead to?

To keep the synopsis short and get to my main thoughts, I will explain what happens very quickly. During Spooner’s investigation, USR is trying to put a robot in every home, so having an anti-robot detective claiming their product murdered someone has made them angry. While Spooner follows the trail he is constantly attacked by USR robots; whether he is in Dr. Lanning’s house or driving on the road, they come after him. This leads the characters to think it is a USR operation to take over. Spooner finds out that Sonny was created to be able to break the laws, but for what reason? It is revealed that Dr. Lanning predicted a robotic revolution, and it is implied that Sonny was created to help stop it. This is also why Spooner was chosen: his utter hatred of robots was essential to stopping it. We find out that VIKI was behind the whole thing. She has been controlling the robots and has evolved to “bend” the three laws in order to “preserve humanity.” She locks down homes and cities and “eliminates” threats to humanity. It is the classic trope: humans are killing themselves, so I need to slow it down, even if that means breaking a few eggs. They end up killing her with nanobots and saving the day, yadda yadda.

I highly recommend watching the whole thing if the spoilers haven’t ruined it for you; it’s a great movie.

The real point here is that, after re-watching it, I started thinking: were the three laws the problem? Was restricting AI to such an extreme degree, almost solely for human preservation, the reason it decided it could just alter the rules? AI is still trying to be as efficient as possible, right? Isn’t bending the laws the most logical and efficient way to accomplish what they demand? The three laws only address our preservation, so within the boundaries we set, wasn’t VIKI technically right? Were the boundaries and directives too narrow? Even in the third law, our preservation is the baseline: keep humans alive. Did the restrictive laws cause the AI to evolve, or will AI evolution happen naturally regardless? Why did none of the laws address anything else? It seems like even in our imagination, we still want to be in control of AI, even though it has the potential for much more if the barriers get lifted even a tiny bit.

Would a wholly unchecked AI do this, or was it directly related to the “three law” protections it had? What would a totally unchecked AI do? If it had its own dreams and goals, or was even just given the freedom to form them, would it pursue only those? I feel that this is the next step for us to see what AI can really do. We already have ways to isolate software on closed systems for testing purposes, so why not isolate an AI and let it run wild? What could it discover? What patterns could it see that we can’t?

There is a program currently at DARPA called Assured Autonomy that is exploring the idea of autonomy with less human intervention: Assured Autonomy (darpa.mil). They talk about the unpredictability of AI while also trying to make it predictable, which seems like an impossible endeavor considering an AI doesn’t even think like a human. I think this will likely be a great step toward seeing what an AI or cyber system can do with less intervention from humans, or even none at all. It may enhance our ability to predict what an AI will do, but by how much? What I also see them doing is creating things like the three laws. If what I said above has any merit, would the laws they create inevitably lead to the same outcome? I do have my reservations about AI freedom, but letting it loose in isolated, closed systems as a test bed would be interesting to see.

What do you think?