Good day, everyone. My name is Rick, and currently I'm writing a series of science fiction cyberpunk novels.
Since I was young, the idea of having androids among the population always fascinated me. For me, androids were the sci-fi equivalent of angels, or aliens: They look like us, but they're not quite like us. Similar, but different.
Made in our image and likeness; created to serve us, and yet, some rebel.
The idea of androids or artificial servants has been explored in fiction for a long time, starting with Lucian's tale of Eucrates, who orders a broom to fetch water from the well (the story that, by way of Goethe's poem, became Fantasia's animated segment "The Sorcerer's Apprentice"). Then we jump to the movie Metropolis, where a robot takes the appearance of a human female. Many more stories about robots followed. A notable example is Isaac Asimov's "I, Robot", in which he searched for a way to give morality to robots through the Three Laws of Robotics:
- First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The way he explored how these laws might apply was fascinating: the laws were embedded deep in the robots' computing mechanisms, almost subconsciously. In one of the stories, the robots acquire a sort of religious fanaticism and, acting upon these laws, expel the astronauts from their spaceship (and into safety).
A similar approach to robots was taken in Paul Verhoeven's Robocop (1987), in which a cop is turned into a cybernetic law-enforcement machine and given Three Directives - well, four:
- Serve the Public Trust
- Protect the Innocent
- Uphold the Law
- Classified (this one's a plot device that plays wonderfully at the end)
But let's go back in time a little, to Blade Runner (1982), itself inspired by Philip K. Dick's "Do Androids Dream of Electric Sheep?". In Blade Runner, androids called replicants go rogue and begin acquiring emotions of their own. Then, disguising themselves as humans, they travel to Earth in search of their creator, hoping to break their imposed limited life span.
A test was devised to tell replicants from humans: the Voight-Kampff test, a test of empathy. Through it, one could recognize whether someone really had the emotional reactions that only humans were supposed to have.
The seed for my projects had already been planted in my mind; but it wasn't until a recent rewatch, around 2010, that another idea came to me: the whole premise of "Blade Runner" was wrong.
Why, I wondered, would it be so easy for androids to disguise themselves as humans, when a simple measure would prevent it? A measure even simpler than a kill switch: let's make it physically impossible for them to appear human. Let's paint them blue.
And so I began imagining my first story, a story in which rogue androids would be rare. I thought: if rogues are rare, then the equivalent of Blade Runners - which I baptised "Rogue Hunters" - would be equally rare. Economics also comes into play: is it viable to pay a lot of people a great deal of money to get rid of rogue androids when so few of them appear? Why not prevent the androids from going rogue in the first place?
Then came the idea of combining androids with Asimov's Three Laws - but in a cyberpunk world, it wouldn't be so easy. Androids would be manufactured not to serve mankind and embrace world peace; they would be created for profit, and their only loyalty would be to their corporation.
So Robocop's approach was right. Give them not Three Robotic Laws, but a series of directives implanted in them by their... managers! That's it! The most valuable job in a world filled with androids will not be Rogue Hunter, but Android Manager!
If we still want to retain the idea of androids going rogue (or "berserk", as I call them in my novel), then Rogue Hunters should also have a license to control androids. So the first requirement for becoming a Rogue Hunter is to become an Android Manager.
But... how to implement the android directives?
I needed to think of a mechanism through which androids could go rogue (otherwise the fiction would be boring, duh). This was where my armchair research on AI would shine:
For an AI to be efficient, it would have to be neuromorphic: an artificial brain composed of silicon cells acting almost exactly like human neurons. With millions, even billions of limited-capacity CPUs, each with access to its own limited memory, you can save a lot of energy and use at most 60 watts of power. A supercomputing brain, wasting no more energy than a lightbulb.
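To make that energy argument a bit more concrete, here is a minimal sketch of the kind of unit real neuromorphic chips are built from: a leaky integrate-and-fire neuron, which only "fires" when enough input has accumulated, and so sits nearly idle most of the time. The class name, parameters, and numbers below are all invented for illustration, not taken from any real chip.

```python
# Illustrative sketch only: a leaky integrate-and-fire (LIF) neuron,
# the classic abstraction behind neuromorphic hardware.
# All names and parameter values are made up for this example.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential needed to fire a spike
        self.leak = leak            # fraction of potential kept each step
        self.potential = 0.0

    def step(self, input_current):
        """Advance one time step; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

neuron = LIFNeuron()
spikes = [neuron.step(0.4) for _ in range(5)]
# With a constant weak input, the neuron spikes only occasionally;
# between spikes it does almost no work - that sparseness is where
# the energy savings of a neuromorphic brain would come from.
```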
For androids to understand the world, certain parts of their neuromorphic brain would resemble a human's: vision, the capability to understand words, sentences, even grammar; and naturally, the world around them, actions and consequences.
So how to integrate this with android directives? And how to give my fictional androids the capability of experiencing human emotions, even love?
A "consciousness.exe" is a cheap plot device. Not buying it. I'm not taking the approach used in other fictional works like Chappie or the TV series "Humans". No. Instead, let's make androids obey their given directives instinctively, like Asimov's robots.
Let's take the androids' "behavioral module", a perfectly controlled directive machine that instills a religious obedience, and merge it with the emotional center of their brain. This way, my androids can both experience (limited?) emotions and be obedient...
...until the behavioral module starts malfunctioning.
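As a toy illustration of the fictional mechanism above (nothing here is a real architecture; the action names and weights are invented), obedience can be pictured as directive-compliant urges receiving a large innate priority, a priority that decays as the module malfunctions:

```python
# Toy model of the novel's fictional "behavioral module": directives
# are just strong innate weights competing with emotional drives.
# All numbers and action names are invented for illustration.

def choose_action(impulses, directive_weight=10.0):
    """Pick the action with the highest weighted urge.

    impulses: dict mapping action name -> (urge, obeys_directives).
    A healthy module multiplies directive-compliant urges by a large
    weight, so obedience always wins; as the module degrades
    (directive_weight approaching 1), emotions start to dominate.
    """
    def score(item):
        urge, compliant = item[1]
        return urge * (directive_weight if compliant else 1.0)
    return max(impulses.items(), key=score)[0]

impulses = {
    "obey_manager": (0.3, True),   # weak urge, directive-compliant
    "flee":         (0.9, False),  # strong emotional urge, forbidden
}

healthy = choose_action(impulses, directive_weight=10.0)
berserk = choose_action(impulses, directive_weight=1.0)
```

With a healthy module the android obeys despite the stronger emotional pull; with a degraded one, the same android goes berserk - no new hardware required, just a drifting weight.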
This is the kind of android that I want to explore. Do they dream? Do they feel pity? Empathy? Can they fall in love? And is there a way to free them of all directives, so that they act only upon their acquired experience and emotions? What kind of security measures will be taken, so that only the manufacturer has access to their brains? Will the company spy on the androids' owners without their knowledge, to assist law enforcement agencies and for the corporation's own gain?
My novels will explore these possibilities, as I try to reconcile the fact that we are creating sentient slaves, with a way to make them serve mankind happily and without grudges.
Can it be done? Will androids be created for sexual entertainment purposes? Do they have a higher risk of becoming berserk? Will androids with hard plastic or metal skin envy sexbots for their ability to feel pleasure? Will they be even allowed to display their own emotions? Or will it be a corporate secret? What sort of hidden knowledge lies behind the android brain? Will corporations want to steal that knowledge? Will a corporate monopoly be created when they find a way to create the perfect android, one that won't go rogue? What if there is no way to do that? What will such a monopoly do in order to keep the secret? What role does the government play when dealing with such a monopoly? Are there hidden deals that the population is unaware of?
That is my world. A world of androids, trade secrets, megacorporations and bureaucracy. A world where the most advanced technology known to mankind falls in the hands of the few; a world of powerful factions of society that operate above the law.
And also... a world where brain implants are common, with all sorts of consequences.
Welcome to Midoria. Welcome to my world.