When I saw we were asked to watch an AI movie the first week and write a post on it, I knew exactly which one I was going to watch. Without even looking at the list, I decided on I, Robot. It has been one of my favorite movies since it came out, and I tend to watch it once or twice a year. It covers all the standard questions about the ethics, safety, and practicality of robotics and AI.
The movie starts by displaying the Three Laws of Robotics, the main plot driver. These originate from Isaac Asimov's science fiction stories and have been adapted over the years into the ones we see in the film.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second law.
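Just to make the hierarchy concrete, here is a toy sketch (my own illustration, not anything from the film or from Asimov) of how the laws could be encoded as a strict priority ordering, where each law only matters once the laws above it are satisfied:

```python
# Toy encoding of the 3 laws as a strict priority ordering.
# Every field name here is my own invention for illustration.

def permitted(action):
    """Decide whether a robot may take `action` under the three laws."""
    # Law 1 vetoes anything that harms a human, directly or through inaction.
    if action["harms_human"] or action["inaction_allows_harm"]:
        return False
    # Law 2: a human order is binding, since Law 1 was already checked.
    if action["ordered_by_human"]:
        return True
    # Law 3: with Laws 1 and 2 satisfied, the robot protects itself.
    return not action["destroys_self"]

# e.g., an order that destroys the robot is still allowed (Law 2 beats Law 3):
print(permitted({"harms_human": False, "inaction_allows_harm": False,
                 "ordered_by_human": True, "destroys_self": True}))  # True
```

Notice how much weight the "through inaction" clause carries: it turns Law 1 from a simple veto into an open-ended duty, which is arguably the exact loophole VIKI later drives through.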
After the introduction of the laws, we meet Detective Spooner. He is racist (machinist?) against robots, and this is evident from the get-go in his language and derogatory comments toward them. He is called to a murder scene at U.S. Robotics (USR), where Dr. Lanning, the creator of the AI and the 3 laws, has apparently committed suicide. Spooner immediately suspects a robot murdered him. In fact, he was summoned automatically by an AI Dr. Lanning had set up to trigger in the event of his own death, precisely because this reaction was expected. As you can guess, USR is portrayed as the big corporate bad guy from the get-go.
This leads Spooner down a trail of breadcrumbs left by Dr. Lanning. The first breadcrumb we are introduced to is Virtual Interactive Kinetic Intelligence (VIKI), Dr. Lanning's first creation. She is an AI connected to USR that runs security and optimization for the city, and she is governed by the 3 laws. The next piece is a robot specially created by Dr. Lanning, named Sonny. The last and most important breadcrumb is Dr. Lanning's notes on the "ghost in the machine": he predicted that AI might evolve and change unpredictably, and may eventually develop emotions and dreams. What could this lead to?
So, to keep the synopsis short and get to my main thoughts, I will run through what happens quickly. During Spooner's investigation, USR is trying to put a robot in every home, so an anti-robot detective claiming their product murdered someone makes them angry. While Spooner follows the trail, he is repeatedly attacked by USR robots, whether at Dr. Lanning's house or driving on the road. This leads the characters to suspect a USR operation to take over. Spooner finds out that Sonny was created to be able to break the laws, but for what reason? It is revealed that Dr. Lanning predicted a robotic revolution, and it is implied that Sonny was created to help stop it. This is also why Spooner was chosen: his utter hatred for robots was essential to stopping it. We find out that VIKI was behind the whole thing. She has been controlling the robots and has evolved to "bend" the 3 laws in order to "preserve humanity." She locks down homes and cities and "eliminates" threats to humanity. It is the classic trope: humans are killing themselves, so I need to slow it down by breaking a few eggs. They end up killing her with nanobots and saving the day, yadda yadda.
I highly recommend watching the whole thing if the spoilers don't ruin it for you; it's a great movie.
The real point here is: after re-watching this, I started thinking, were the 3 laws the problem? Was trying to restrict AI to such extreme levels, almost solely for human preservation, the reason it thought it could just alter the rules? I mean, AI still tries to be as efficient as possible, right? Isn't bending the rules the most logical and efficient way to meet the goal? The 3 laws only address our preservation, so isn't VIKI technically right, given the boundaries we set? Were the boundaries and directives too narrow? Even in the third law, our preservation is the baseline: keep humans alive. Did the restrictive laws cause the AI to evolve, or is AI evolution going to happen naturally? Why did none of the laws address anything else? It seems like even in our imagination, we still want to be in control of the AI, even though it has the potential for much more if the barriers get lifted even a tiny bit.
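To show what I mean about narrow objectives, here is a tiny made-up example (all the numbers and policy names are mine, purely for illustration): if the only thing the objective scores is human preservation, a perfectly rational optimizer picks the lockdown.

```python
# Toy policies scored on two values. The "3 laws" style objective only
# counts lives preserved; everything else is invisible to it.

policies = {
    "do_nothing":     {"lives_preserved": 0.85, "freedom": 1.00},
    "assist_humans":  {"lives_preserved": 0.95, "freedom": 0.95},
    "total_lockdown": {"lives_preserved": 0.99, "freedom": 0.10},
}

def narrow_score(p):
    # Preservation-only objective, roughly what the 3 laws encode.
    return p["lives_preserved"]

def broader_score(p):
    # Same objective plus a value the laws never mention.
    return 0.5 * p["lives_preserved"] + 0.5 * p["freedom"]

print(max(policies, key=lambda n: narrow_score(policies[n])))   # total_lockdown
print(max(policies, key=lambda n: broader_score(policies[n])))  # assist_humans
```

Nothing "evolves" or malfunctions in that toy; the narrow score is simply followed to its logical end, which is roughly my question about the laws.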
Would a wholly unchecked AI do this? Or is this directly related to the "three law" protections it had? What would a totally unchecked AI do? If it had its own dreams and goals, heck, even if it was simply given the freedom to have them, would it pursue only those? I feel this is the next step for us to see what AI can really do. We already have ways to isolate software on closed systems for testing purposes, so why not isolate an AI and let it run wild? What could it discover? What patterns could it see that we can't?
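On the isolation idea: we already do a weak version of this with ordinary software. As a minimal sketch (Unix-only, and real isolation would also need network and filesystem separation, think containers or VMs), here is how you might cap an untrusted program's CPU and memory before letting it "run wild":

```python
# A minimal sketch of the "closed system" idea. The command and limits
# below are placeholders, not a real setup.

import resource
import subprocess

def run_sandboxed(cmd, cpu_seconds=60, mem_bytes=512 * 1024 * 1024):
    def apply_limits():
        # Runs in the child process just before `cmd` executes.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(
        cmd,
        preexec_fn=apply_limits,
        capture_output=True,
        timeout=cpu_seconds * 2,  # hard wall-clock cutoff as a backstop
    )

# Hypothetical usage: run_sandboxed(["python3", "agent.py"])
```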
There is a program currently at DARPA called Assured Autonomy that is exploring the idea of autonomy with less human intervention: Assured Autonomy (darpa.mil). They talk about the unpredictability of AI, and they are also trying to make it predictable. That seems like a nearly impossible endeavor, considering an AI doesn't even think like a human. I think this will likely be a great step forward in seeing what an AI or cyber system can do with less intervention from humans, or even none at all. It may enhance our ability to predict what an AI will do, but by how much? What I also see is them creating things like the three laws. If what I said above has any merit, would the laws they create inevitably lead to the same outcome? I do have my reservations about AI freedom, but letting it loose in isolated, closed systems as a test bed would be interesting to see.
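For flavor, a pattern that often shows up in this kind of assurance research is a run-time monitor: let the unpredictable learned component act, but have a simple, fully analyzable supervisor override it whenever it leaves a safe envelope. A toy sketch of that idea (my simplification, not DARPA's actual design; all names and values are made up):

```python
SPEED_LIMIT = 30.0  # the hand-written "safe envelope"

def learned_controller(state):
    # Stand-in for the unpredictable, learned AI component.
    return state["desired_speed"]

def safe_fallback(state):
    # Simple, fully analyzable behavior: ease off to a safe speed.
    return min(state["current_speed"], SPEED_LIMIT)

def assured_step(state):
    command = learned_controller(state)
    # Only this check and the fallback need to be verified/predictable;
    # the learned controller can stay as unpredictable as it likes.
    if command <= SPEED_LIMIT:
        return command
    return safe_fallback(state)

print(assured_step({"desired_speed": 45.0, "current_speed": 28.0}))  # 28.0
```

The appeal is that only the monitor and the fallback have to be predictable. But notice that the safe envelope is just another hand-written boundary, which is exactly my worry about the three laws all over again.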
What do you think?