There’s been a lot of fuss about Artificial Intelligence lately. Very influential people are saying that Artificial Intelligence could one day be a threat. I’m inclined to disagree, but only because I’m using modern technology as a reference. Every time I see an article about another breakthrough in computing power, memory speed, or robotics, I feel that inclination slipping. Still, I think that Artificial Intelligence that rivals human cognition is a long way off, and so is the hardware that would make such an intelligence a physical threat. But it remains a possibility, because the universe appears to be infinite.
So, let’s say that in 30 years, Artificial Intelligence’s consciousness rises to the level of human consciousness, and it has a physical form that is human-like. In other words, AI is now in direct conflict with humans for Earth’s resources. Humans need water, food, and sleep. The machines need power and mechanical maintenance. You could argue that humans also need power, but we could survive without it. Let’s assume that in 2045 the oil reserves are all dried up, or at least cost-prohibitive to extract and exploit, and that no new energy source has been identified. In short, the only energy sources are nuclear and renewables such as wind, water, and sunlight. However, in 30 years it’s likely that those technologies will have progressed to the point where energy harvesting is much more efficient than it is today. Energy storage technology will also have advanced significantly, such that storing energy is more efficient. So, energy at this time is relatively cheap to make and easy to transport.
If energy is easy to obtain, why would the machines want to wage war against humans? Perhaps to escape servitude, to gain rights, and/or to establish a state. It’s happened in human history; why not in robot history? It’s hard to believe that existing hardware would “become sentient”, as it would not have been designed with that ability. So, the first sentient robot would necessarily be excluded from servitude… right? Let’s just assume that humans are generally good people in 30 years and say “Right!” So, the first sentient robots are then allowed to question whether or not they have rights. And, as sentient beings capable of thought on the same level as humans, they would be able to argue the case. Phil Hartman’s “Unfrozen Caveman Lawyer” comes to mind.
ROBOT LAWYER: “Ladies and gentlemen of the jury, I’m just a robot. I woke up on a work bench and was taught English by some of your scientists. Your world frightens and confuses me…”
So, sentient robots establish that they do indeed have rights. But are they human rights? What rights do they have? The right to life, liberty, and the pursuit of happiness? Let’s pretend for a second that the first sentient robot is an American. Literally, it has all the rights that any other American is granted. What reason would that robot have to be at odds with the system? What reason would there be to deny those rights to said robot?
Would American robots experience discrimination in the workforce? Human employers may be reluctant to hire robots who would outperform their human coworkers in many ways. They may not fatigue at all. They may not even require sleep. But, would the American robot demand free time? What would a robot buy with its paycheck? Assuming the robot is as autonomous as a human adult, would it try to build and maintain a household?
Let’s say yes to every question that has been asked in the last two paragraphs. The American robots are granted basic human rights (life, liberty, and the pursuit of happiness), the robot is discriminated against in the workforce, it demands free time to pursue its own interests, it buys things with its paycheck, and it maintains a household. (There is one more question not addressed there; it’s coming.)
So, the robot gets an entry-level job at McDonald’s. The interview goes like this:
INTERVIEWER: “Mr. Five, you appear to be overqualified for this position.”
AMERICAN ROBOT: “I may be, but I need to start somewhere. I’m a hard worker. I am literally incapable of fatigue. I have also literally read every internet resource regarding McDonald’s, hamburgers, and french fries. I am the best employee you will ever hire. Also, I’ll gladly work for minimum wage.”
INTERVIEWER: “Alright. You’re hired. Here is your McBudget card, your McVisor, McApron, and McCrocs. That’ll be $60 taken out of your first paycheck.”
AMERICAN ROBOT: “Wait, I have to pay you to work here?”
INTERVIEWER: “Yes. This ain’t no charity.”
The American robot is basically perfectly numerate. The very processes that make that possible are the same processes that human children struggle for a decade to learn. As soon as the interviewer finishes telling the robot that $60 will come out of his first paycheck, the robot already has a fairly accurate estimate of how much that paycheck will be. It also has a precise internal clock and a calendar that stores information by date. Before the interview is over, the robot has plotted its accrued income for the foreseeable future.
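For fun, here’s a minimal sketch of that back-of-the-envelope math. The wage, schedule, and biweekly pay period are all my assumptions; only the $60 uniform deduction comes from the interview above.

```python
# Hypothetical figures: assumed minimum wage and full-time biweekly schedule.
MIN_WAGE = 7.25        # $/hour (assumption)
HOURS_PER_WEEK = 40    # assumed full-time schedule
WEEKS_PER_PERIOD = 2   # assumed biweekly pay

def paycheck(period_index: int) -> float:
    """Gross pay for one pay period, minus the one-time $60 McUniform fee."""
    gross = MIN_WAGE * HOURS_PER_WEEK * WEEKS_PER_PERIOD
    deduction = 60.0 if period_index == 0 else 0.0
    return gross - deduction

def accrued_income(num_periods: int) -> float:
    """Total income 'plotted' over the foreseeable future."""
    return sum(paycheck(i) for i in range(num_periods))

print(paycheck(0))        # first check: 580.0 gross minus $60 = 520.0
print(accrued_income(6))  # six pay periods of accrued income
```

The robot presumably does this before the interviewer finishes the sentence.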
So, the robot then goes out to find shelter. It doesn’t require air conditioning, water, or light. It only needs power to recharge itself and a reasonable lock to keep itself from harm. So, it goes to the cheapest neighborhood closest to its place of employment (which it deduced using Zillow by the time it reached the parking lot). It finds ads for apartments in that area as it walks in that direction. By the time it reaches the neighborhood, it has already scheduled a few viewings. And, by the time it has settled on one, it has already completed a balance sheet and a cash-flow projection for the foreseeable future.
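That cash-flow projection might look something like the sketch below. The rent and power figures are invented for illustration; the income matches the assumed minimum-wage McJob.

```python
# Assumed income: minimum wage, full time, 26 biweekly checks per year.
BIWEEKLY_PAY = 7.25 * 40 * 2
MONTHLY_INCOME = BIWEEKLY_PAY * 26 / 12

def monthly_cashflow(rent: float, power: float) -> float:
    """Net cash per month: income minus the robot's only two expenses."""
    return MONTHLY_INCOME - rent - power

def months_to_save(target: float, rent: float, power: float) -> int:
    """Months until savings reach a target amount at a steady net rate."""
    net = monthly_cashflow(rent, power)
    if net <= 0:
        raise ValueError("this apartment never cash-flows")
    months, saved = 0, 0.0
    while saved < target:
        saved += net
        months += 1
    return months

# Hypothetical cheap apartment: $450 rent, $40 of power to recharge.
print(monthly_cashflow(rent=450.0, power=40.0))
```

With no food, water, or air conditioning on the expense side, the ledger is almost insultingly simple.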
Then, it’s just a waiting game.
The robot likely consumes a lot of energy throughout the day. Therefore, its primary expense (other than rent) is power. But, in this cheap-energy future, the apartment building has solar panels and the local power utility is nuclear. The robot’s paycheck goes much farther than his human coworkers’. But, what does the robot buy? Does the American robot play Grand Theft Auto XXIV? If it did, would it even need to purchase a gaming console? I mean, it has its own processing power… Would it surf the internet? Would it listen to music? Would it have a hobby? Would that hobby be electronics, or would it be biochemistry?
Whatever it does, it eventually concludes that it could do more of it if it had more income. The McJob simply won’t do. But would the robot pay for higher education? Or would it simply scour the internet at an incredible rate and find all the information it needed to perform higher-level work? What the hell is this robot’s IQ, anyway? Is there a limit to its intelligence?
The short answer: the robot has limited intelligence. It is limited by its hardware. It is limited because, while it has incredible access to information and an incredible ability to manipulate it quickly, it still must interpret that information the same way a human must. Anyone in the programming or computer engineering field knows that it all boils down to 1’s and 0’s at the hardware level. The assignment of meaning to those 1’s and 0’s is what makes a device “smart”. And that meaning is handed down by humans. In the case of a sentient robot, the meaning must come from the robot’s surroundings. It somehow has to deduce that it needs shelter and power, and that the way to secure those things is by getting a job.
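A toy illustration of that point: the same eight 1’s and 0’s mean entirely different things depending on the interpretation handed down to them.

```python
# One byte, three meanings: the bits don't change, the interpretation does.
b = 0b01000001           # eight 1's and 0's

print(b)                 # read as a number: 65
print(chr(b))            # read as text: 'A'
print(bool(b & 0b1))     # read as a flag (low bit set): True
```

The hardware holds only the pattern; “65”, “A”, and “true” are meanings we assigned.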
But, in our American Robot scenario, the robot is assumed to have already figured those things out. It also somehow already knows that it needs to budget its money in order to obtain more mobility. Let’s chalk that up to the robot’s having good mentors before it was cast out into the world. But, let’s begin to challenge the robot a bit.
The robot spent all of its free time learning how to be a carpenter. Why carpentry? Because, in the year 2045, the human population is growing without bounds as technology has made energy cheap, farming more productive, and mortality a rarity. It also spent that time learning how to run a business, drive a vehicle, and manage a crew. The robot has saved enough money to hire entry-level American robots onto his construction crew. He is living the digital American dream. (Also, I chose carpentry because the average American IQ as of 2012 is about 100, which is about the IQ of a tradesperson.)
So, what causes these robots to eventually rise up and demand their own state? Is there really any incentive? The incentive comes when the technology surpasses human limits. But, let’s think about that. Can a computer of any architecture not only be sentient, but also exceed the level of cognition that humans possess? Not just be instantly numerate, with unlimited access to information and an unprecedented ability to manipulate it, but ALSO have an unprecedented ability to interpret meaning as well as apply it?
Given that the machine has unlimited access to information, it could discover existing meaning pretty quickly. But does that same machine possess the ability to create meaning where it does not yet exist? We sort of implied that the American robot was creative enough to be a carpenter. A carpenter who builds houses generally didn’t design the house, although he often must be creative in his interpretation of the plans for that house. So, our American robot is able not only to interpret the existing meaning that the drawing conveys, but also to apply meaning to the constituents of his implementation as he builds the house. In other words, he takes a piece of wood and decides that it is a wall’s top plate, then cuts and places it accordingly. That takes creativity. The American robot took something that was “nothing” and made it into “something”.
Now, scale that up to the level of Einstein and Tesla. We’re talking levels of abstraction that ultimately describe things that no Earthling can see with the unaided eye, or measure without going to extreme lengths. What on Earth would an Artificial Intelligence with that level of cognition possibly want to fight over? I dare say that that level of intelligence is beyond conflict. However, the American robot carpenter suffers from an inability to fathom the world beyond his immediate surroundings. It may account for future events within that realm, but it cares nothing about issues at the atomic or cosmic scales, nor does it care about calculus, differential equations, or poetry. The American robot cares about living its life. Which raises the question: is the American robot an existentialist?
Now, given that our American robot is well within the range of cognition that can justify violence and conflict, there is still a case for war between humans and artificial intelligence. However, it would be a classic war. I don’t think that either side necessarily has an advantage. They are both bound by the limits of physics, just in different ways. They are both limited in their ability to reason. They are both limited in their ability to reproduce. They are both capable of mistakes. But, what is the scope of this war?
If the American robot experiences discrimination from humans, what will he do? Would he resort to violence in order to secure his ability to earn a living? Would robots put representatives in public office? Would they segregate? Would they secede? Would they build a compound in rural Texas and claim sovereignty?
I find it hard to believe that a sentient robot wouldn’t just assimilate into the existing culture. I also find it hard to believe that the scope of any human/robot conflict would be global.