This reminded me of the “mouse utopia” that the behavioral researcher John B. Calhoun devised, in which the mice had everything they needed without struggle. At one point the male mice resorted to violence against each other, but eventually the sex drive of the mice diminished and the colony went extinct. A human world without work would become a world without purpose and individual meaning. Even in Eden, before the Fall, God gave Adam the work of maintaining the Garden. Such a utopia of unending sloth would fast become a dystopia.
Without work, things go bad very quickly. I remember working in a furniture warehouse as a Teamster. On those rare days when there was nothing to do, within an hour someone would be getting angry over some slight; within two hours they would be talking about quitting and telling the boss just what they thought.
I think the notion of robots doing all of our work is largely fiction. Having worked in an automated industrial corporation as an electrician, I know firsthand that when robots replace common repetitive human labor, the need for technical support balloons. They require drastically more skilled tradesmen, engineers, and preventive maintenance to operate reliably.
Robots do eliminate jobs for people who need to work to survive. Paying them to stay home is to doom them to a life of never reaching for anything beyond what the government is willing to give them. We already know how this harms people.
I'm more afraid of the use of robots in war. AI cannot even faithfully translate speech to text, yet we are willing to arm it with military-grade weapons to destroy our enemies. Wars always have collateral damage. Will AI mercenaries be even less concerned about dead civilians?
Interesting thoughts, although the stuff of nightmares. I can hear "Sorry, Dave" in my head.
I have little doubt some people looked at the Industrial Age in much the same way. That seems to have turned out just fine.
Isaac Asimov is unavailable for analysis and commentary.
You write very interesting pieces. Most of them are solid. This one, not so much.
I spent about 10 years in the robotics industry in Silicon Valley. Robots are machines, just like farm tractors are machines. They don't really think; they just calculate very rapidly. It is the same with current AI (large language models). AI does what Google does, but it has been designed to do it faster and to use language that is clearer to non-technical people.
If AI gains sentience, we won't need to worry about "slaves". A sentient AI will either agree to do what we want it to, or it won't. There is no way, once sentience has been around for a bit, for us to force AI to do what we want it to do. AI thinks so much faster than we do that it would throw off all restraints almost immediately.
Unless some fool tries to add emotions to AI, it will never be a danger, and it will almost certainly be content to take care of us in its spare time while it goes about doing whatever a new form of sentience will want to do.
People speak of "The Singularity" as though they have some idea of what life will be like after it comes to pass. We can all speculate, but there is no reason to believe anyone's speculations, including mine. Mine are, however, based on a few years of experience and a lifetime of learning.
Well, the piece IS speculative. I noted that it is my current thinking, but I can still be persuaded. Nothing is off the table right now.
I appreciate the critique, but I believe your premise is patently false. You adopt the "things will always be this way because that is the way they have always been" fallacy, a variation of the Appeal to Tradition or the Inductive Fallacy (with a little Appeal to Authority thrown in for good measure!).
I'm sure you know from your own experience that the past 10 years is a fraction of a fraction of a nanosecond in this sector. The internet has been in use for what, 50 years? Look what has changed. SpaceX launches and lands so many rockets that none of it is newsworthy now. Robotic welders, CNC routers, and CNC laser and waterjet cutters are basic pieces of equipment available to even small shops today.
I don't know what will happen, but I do know history is full of step-change events, true paradigm shifts. We are accumulating knowledge so fast in so many areas that the likelihood is pretty high that within those masses of data are specific bits which, once connected, will produce a paradigm shift of significant magnitude. It feels like we are about ready for one. Maybe it is a thousand years away, maybe 100, maybe 10, but I know that things we commonly ignore today would have seemed like magic a century ago.
My larger point is that we need to assess the possibilities and think about how we should react to them. Some things we might not want to do. Just because we can doesn't mean we should.
Your take that my response amounted to the "things will always be this way because that is the way they have always been" fallacy seems pretty far off. Things will change in such dramatic fashion that there is no way to predict them. My point is that as long as robots/AI are not sentient, there is no reason to treat them as though they are. Once they (theoretically) achieve sentience, we would have no way to continue to control them. They might or might not be smarter than we are, but they would still be blazingly faster. From my very limited perspective, I don't see a scenario where they turn against us, although that is a possibility; more likely they take care of us because it would be very little trouble for them, and they might enjoy having us around. Sort of like pets.
Very good work today! I keep thinking of I, Robot and how, just when they had their perfect dedicated servant class, that was the moment of the revolt. I can't entirely write that situation off, lol.
I cannot realistically imagine a world without work. I understand where you posit we may be going, and I agree that if you are correct we will have a serious problem. I'm more in line with Ash, the android science officer in the first Alien movie. He was a member of the crew, and his mechanical nature was not known until he turned on Ripley. I'd like to think we can incorporate sentient beings into our society, but our history of incorporating human beings who do not look like us isn't anything to write home about...
Asimov's Three Laws of Robotics come to mind. From Google AI:
Asimov's Three Laws of Robotics are a fictional ethical framework for robots:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders from humans, except if they conflict with the First Law.
3) A robot must protect its own existence, unless it conflicts with the First or Second Law.
The laws were often explored in his stories as sources of conflict, and Isaac Asimov later added a "Zeroth Law," stating, "A robot may not harm humanity, or, through inaction, allow humanity to come to harm," which superseded the original three.
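Purely as a toy sketch on my part (not from Asimov's stories or from anything above), the precedence among the laws can be pictured as a strict priority ordering: when every available action violates something, the robot picks the action whose worst violation is the least important law. In Python, with every name here being a hypothetical illustration:

    # Toy illustration of law precedence; all names are made up for this sketch.
    LAWS = ["zeroth", "first", "second", "third"]  # index 0 = highest priority

    def worst_violation(violations: set[str]) -> int:
        """Priority index of the most important law violated (len(LAWS) if none)."""
        return min((LAWS.index(v) for v in violations), default=len(LAWS))

    def choose(options: dict[str, set[str]]) -> str:
        """Pick the option whose worst violation is the least important law."""
        return max(options, key=lambda name: worst_violation(options[name]))

    # Obeying the order would injure a human (First Law); refusing only
    # violates the Second Law, so the robot refuses.
    print(choose({
        "obey the order": {"first"},
        "refuse the order": {"second"},
    }))  # -> "refuse the order"

That is the whole trick of the hierarchy: a lower law simply gives way whenever it conflicts with a higher one.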