04 December 2017

Why Androids Dream: AI by Philip K. Dick

Estimated reading time: 6 minutes

US author Philip K. Dick wrote around 40 novels and 120 short stories on his typewriter. With this classic tool he created claustrophobic science-fiction scenarios, writing less about alien invasions than about artificial intelligence. In doing so, he not only described technical possibilities but also raised ethical concerns. Most of his stories featured different layers of reality, a quality that invites comparison with Franz Kafka. Like all good science fiction, his stories were less an escape from reality than the opposite: they allowed us a view into a distorted mirror.

Ridley Scott took the book “Do Androids Dream of Electric Sheep?” and turned it into an arthouse science-fiction movie. His masterpiece opens with a scene in which a Blade Runner conducts the Voight-Kampff test to determine whether the individual on the other side of the table is human or artificial, as it was forbidden for the so-called replicants to return to Earth.[1] The idea of the test was to provoke small deviations in the android’s behavior that would reveal its artificial origin. In this, the Voight-Kampff method resembles the Turing test, a procedure proposed in 1950 by the British mathematician and computer scientist Alan Turing.[2] His idea was to confirm artificial intelligence by its ability to pass as human: a panel of judges communicates, in text only, with humans and with different pieces of software. If a program convinces the majority of the judges that they are writing with a human rather than with software, the AI passes the test.

Machine learning and behavior

The movie is based on Philip K. Dick’s book “Do Androids Dream of Electric Sheep?”[3] The author developed the interesting concept that machines receive human memories to ground them. Measured against today’s knowledge about artificial intelligence, this idea makes sense. Based on its code, the software is capable of learning autonomously and adapting its behavior accordingly. In doing so, today’s software does not re-program its own code but stores the learned information. If a new situation is identified as similar to one from the past, the software executes a script of learned behavior. The more successful the earlier result was, and the more similar the new situation appears to the earlier one, the higher the probability that the known behavior is repeated. Furthermore, the AI can analyze different behavior scripts to identify patterns and underlying values and attitudes. Once such patterns are identified, the software can derive adequate behaviors even for unknown scenarios.
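As a toy illustration of this selection logic (a minimal sketch; the scoring rule and all names here are assumptions for the example, not how any production system works), choosing a behavior script can be modeled as picking the stored script whose past success rate and similarity to the current situation jointly score highest:

```python
from dataclasses import dataclass


@dataclass
class BehaviorScript:
    """A learned behavior: the situation it was learned in and its track record."""
    name: str
    situation: tuple[float, ...]  # feature vector describing the original situation
    successes: int
    attempts: int

    def success_rate(self) -> float:
        return self.successes / self.attempts if self.attempts else 0.0


def similarity(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    """Crude similarity in (0, 1]: the closer two situations, the higher the score."""
    distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + distance)


def choose_script(current: tuple[float, ...], scripts: list[BehaviorScript]) -> BehaviorScript:
    """Repeat the known behavior whose past success and situational similarity
    jointly score highest, as described in the paragraph above."""
    return max(scripts, key=lambda s: s.success_rate() * similarity(current, s.situation))


scripts = [
    BehaviorScript("greet", situation=(0.9, 0.1), successes=95, attempts=100),
    BehaviorScript("retreat", situation=(0.2, 0.8), successes=40, attempts=50),
]
print(choose_script((0.85, 0.15), scripts).name)  # -> greet
```

Nothing in this code prescribes a behavior; the behavior lives entirely in the stored scripts and their track records, which is exactly what makes it copyable.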

The AI’s code defines its potential behavior and enables the machine to learn, quite similar to a child. At this point the software is not yet effective enough for its intended purpose: the AI can start with its tasks, but the error rate is still high. With newly gained experience, the quality of its decision-making rises. A coach may help a newly installed AI learn faster, just as companies do with their human talents. This individual accompanies the AI and analyzes with it the quality of the decisions taken.

Instead of investing time in machine learning from scratch, copying experiences from one AI to another may speed up the process. Here the manufacturer of the AI (our “Tyrell Corporation”) lets a prototype act and learn in a simulated environment that is as close as possible to the later real environment. The results of this deep learning are later copied into the memories of the AIs that are sold. This process supports adequate behavior, as the software does not have to start from zero with pure trial and error. The AI perceives the experiences as its own and identifies them as successful behavior from the past. Behavior scripts that have been used many times are relatively protected against easy change, because adequate behavior does not guarantee the desired result but always includes a risk factor. If a script has a 95% probability of leading to the favorable outcome, the AI will continue using it, even though it will fail in 5% of cases. Mathematically, if the machine has already executed the process 10,000 times, one more negative result will barely move the estimated success rate and will not lead to a change of behavior. It does not matter whether the AI really executed these 10,000 processes or only perceives that it did.
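The article’s own numbers make this inertia easy to verify; a back-of-the-envelope sketch:

```python
# 9,500 successes in 10,000 executions give the script a 95% success rate.
successes, attempts = 9_500, 10_000
print(successes / attempts)  # 0.95

# One additional failure barely moves the estimate...
attempts += 1
print(successes / attempts)  # ~0.9499, a drop of roughly 0.0001

# ...so a policy that abandons a script only when its estimated success rate
# falls below some threshold keeps the learned behavior in place.
THRESHOLD = 0.90  # an assumed cut-off, chosen for the sake of the example
print(successes / attempts >= THRESHOLD)  # True: the behavior stays
```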

Simplicity ensures the basic principles of machine learning: fairness, accountability and transparency. Drawn portrait of Philip K. Dick / Image: Pete Welsch

Of course an AI stores such experiences in a much more abstract way than Dick described in his novel, but the concepts are nevertheless comparable. The advantage of using “fake memories” instead of coding the desired behavior directly into the AI’s algorithm is that the software stays relatively small and uncomplicated. This keeps it efficient and lets it adapt quickly to new situations. Simplicity supports the basic principles of machine learning: fairness, accountability and transparency.[4] Dick not only used this idea in “Do Androids Dream of Electric Sheep?”, but also addressed it in his short story “We Can Remember It for You Wholesale”, later filmed as “Total Recall”[5]. Human and artificial behavior are treated as similar: in one case the faked memories are implanted into androids, in the other into humans. As Hosagrahar Visvesvaraya Jagadish from the University of Michigan pointed out, most suboptimal AI (or human) decisions are not caused by biased code but can be explained by the use of incorrect information, since the sources are mostly chosen not by the AI itself but by a human.
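A hedged sketch of this separation (the file name and situation labels are invented for the example): the decision code stays small and fixed, while the “memories” are plain data that a manufacturer can copy onto every unit.

```python
import json

def decide(situation: str, memories: dict[str, str]) -> str:
    """The decision logic itself stays tiny: behavior lives in data, not in code."""
    return memories.get(situation, "trial-and-error")  # unknown situations need exploring

# The manufacturer records the prototype's simulated experience...
factory_memories = {"obstacle_ahead": "swerve", "battery_low": "seek_charger"}
with open("memories.json", "w") as f:
    json.dump(factory_memories, f)

# ...and every sold unit loads it and perceives it as its own past.
with open("memories.json") as f:
    implanted = json.load(f)

print(decide("obstacle_ahead", implanted))  # -> swerve, without touching the code
```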

Today’s smartphones are continuously connected to the internet to ensure that the user receives push messages in real time. We can assume that androids would work accordingly, at least as long as they stay inside the range of a mobile network. Nevertheless, even in the future such replicants could not stay inside such a radius all the time. In the movie they escaped from the colonies to return to Earth, so they were unable to connect to a cloud. For this they require the ability to act independently of such a connection.

The blockchain of androids

Ethereum[6] went live in 2015. The philosophy was to establish a robust peer-to-peer network to exchange information or even to pool resources to execute sophisticated apps; thanks to this, central servers are not required. Androids could use a similar technology: if the connection to the central server is not available, a machine can connect to the nearest android, which may in turn be connected to another one or directly to the cloud. Spoken language is not required for the connection between different AIs; Bluetooth-like technology lets them communicate via “telepathy”.
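A minimal sketch of such a relay chain (class and method names are hypothetical; real mesh protocols are far more involved): each machine either has a direct uplink or asks its nearest peers for a route.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Android:
    """A node that reaches the cloud directly or by relaying through peers."""
    name: str
    cloud_link: bool = False
    peers: list[Android] = field(default_factory=list)

    def route_to_cloud(self, visited: set[str] | None = None) -> list[str] | None:
        """Depth-first search for a relay path to any cloud-connected node."""
        visited = visited or set()
        if self.name in visited:
            return None               # avoid cycles in the mesh
        visited.add(self.name)
        if self.cloud_link:
            return [self.name]        # direct uplink available
        for peer in self.peers:       # otherwise ask the nearest androids
            path = peer.route_to_cloud(visited)
            if path:
                return [self.name] + path
        return None                   # fully offline: act autonomously

# Example mesh: roy has no uplink but can relay through pris to rachael.
rachael = Android("rachael", cloud_link=True)
pris = Android("pris", peers=[rachael])
roy = Android("roy", peers=[pris])
print(roy.route_to_cloud())  # ['roy', 'pris', 'rachael']
```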

As today’s electric cars show, batteries require time to charge. Even if scientists experiment with electrified highways, we can assume that even in the future it will not be possible to recharge batteries everywhere. In consequence, androids require time off, similar to human sleep. Philip K. Dick asked whether they dream of electric sheep. Dreaming machines sound illogical, but if we again use smartphones as a comparison, we see that night hours are often used for major updates to apps and even to the operating system itself, as such processes may take an hour or longer.

Androids may use their charging time to update their software and to exchange information with the cloud and with other androids. Similar to synchronizing an iPhone with iTunes, this makes perfect sense. The received information, such as others’ experiences, can be understood as dreams. In contrast to humans, AIs are able to remember their complete dream sequences and add them to their memory. Explained by the cognitive anchoring effect[7], this information can then serve as a basis for future decisions. If there is no connection to the cloud or to other sleeping machines, it will be a dreamless sleep.
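One way to picture such a nightly sync (again a hypothetical sketch): the experiences received as “dreams” are simply merged into the local memory, where they anchor future decisions just like the android’s own past.

```python
def sync_during_charge(local: dict[str, tuple[int, int]],
                       dreams: dict[str, tuple[int, int]]) -> None:
    """Merge (successes, attempts) received from the cloud or peers into the
    local memory; the android later treats them as its own experience."""
    for script, (succ, att) in dreams.items():
        own_succ, own_att = local.get(script, (0, 0))
        local[script] = (own_succ + succ, own_att + att)

memory = {"avoid_rain": (8, 10)}                                 # own, thin experience
received = {"avoid_rain": (90, 100), "find_charger": (45, 50)}   # others' "dreams"
sync_during_charge(memory, received)
print(memory)  # {'avoid_rain': (98, 110), 'find_charger': (45, 50)}
```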

Image of the robot hitchBOT at one of the stops on its trip #hitchBOTinUSA / Image: Instagram @hitchbot / www.hitchbot.me

Philip K. Dick hinted at it in his book, but Ridley Scott’s movie Blade Runner was more direct: the name “android” was changed to “replicant”, a word with an openly negative touch. In 2014 the hitchBOT hitchhiked across Canada. To do so, scientists sat the little robot at the side of a highway with a message that it wanted to travel and meet new friends, and then tracked how drivers reacted to the machine: whether they helped hitchBOT, ignored it or even destroyed it. Even if there were isolated bad experiences, the overall results were quite promising.[8] Of course, its likeable design made it easy to sympathize with the little robot. Scientists are still evaluating the results of the experiment, which may confirm that humans can build an emotional relationship, positive or negative, with a machine.

This is a relevant conclusion, as implementing AI in an organization is a disruptive situation that pushes individuals out of their comfort zone. If no adequate change management is in place, emotional reactions such as sabotage are a real risk. In such a negative atmosphere it is easily overlooked that AI can act as an ambassador for humanity, for example on behalf of non-governmental organizations. A bot can provide basic first-line legal or medical support for groups that until today had no access to such support. The “DoNotPay” project created such a robot lawyer.[9] The software helps for free when people seek compensation for delayed flights or contest unfair parking or speeding tickets. So far the target group could be anybody in society, but thanks to relatively cheap devices such as basic smartphones and tablets, more vulnerable groups could also get support, for example with immigration questions or healthcare checks.[10] As Tyrell’s slogan promised: “More human than human.”

Patrick Henz

References:

  • [1] Scott, Ridley (1982): “Blade Runner”
  • [2] The Alan Turing Internet Scrapbook (2014): “The Turing Test, 1950”
  • [3] Dick, Philip K. (1968): “Do Androids Dream of Electric Sheep?”
  • [4] www.fatml.org (2017): “Fairness, Accountability, and Transparency in Machine Learning”
  • [5] Dick, Philip K. (1966): “We Can Remember It for You Wholesale” (filmed as “Total Recall”)
  • [6] Ethereum (2017): “Blockchain App Platform”
  • [7] Tversky, Amos / Kahneman, Daniel (1974): “Judgment under Uncertainty: Heuristics and Biases”
  • [8] hitchBOT (fetched 10.06.2016)
  • [9] DoNotPay (fetched 20.03.2017)
  • [10] Brown, Jessica (2017): “The robot lawyer that helped people with their parking tickets is now helping refugees”
