8 Comments
KEVIN HALL

The question I ask in all of this is - to what end? What is the ultimate that THEY want to create and WHY? These questions need to be answered.

However, I am also a bit of a realist. I love nostalgia, even wrote books about it, but my career of 40 years was in the computer/technology field, and I saw change almost yearly, if not monthly, and that change affected not only that industry but many others.

I see and play with AI (mainly ChatGPT), and I am both amazed and scared to death of what it might turn into. On the one hand, it could almost overnight eliminate a lot of ambiguity in Western and Eastern health care. It could bring about the end of cancers and other diseases in a matter of a few years. Designer drugs are a matter of course and are now coming faster than we think. The end of big pharma might be just around the corner, as long as the powers that be are not bought off.

AI has the ability to make living so much easier and better for the masses, and yet there is that nagging voice in the back of my head warning of bad actors trying to hijack it for nefarious reasons. That is what scares the heck out of me.

AI is not in any way perfected, but it is not only learning day by day; it is a computer application that is learning millisecond by millisecond. It will grow and expand much faster than our computing power can keep up with, and that is where the next big bang will happen.

My 2 cents.

Denise Cherches

Everything you said!

The potential for making life infinitely easier, more accurate, interesting and healthier is unfathomable, in medicine, services, productivity, investing… limited only by our imaginations.

For now.

And “now” will be short-lived.

Superior intelligence will always dominate. And unconstrained by human foibles? We don’t stand a chance. Imagine your life as a toy, or a pet.

sean anderson

I think of the catastrophic end of John Carpenter’s “Dark Star,” in which the ship’s AI, upon being instructed in solipsism, issues the commandment “Let there be light!”

sean anderson

A powerful AI without moral restraints could devise Procrustean “solutions” to our problems. Take the supposed impending insolvency of the Social Security system. Some initially plausible solutions that might seek to shore up the fund, like increasing the percentages of employee withholdings or extending the age of drawing rights to 70 or older, would simply face too much political opposition. But the “solution” of reducing the number of beneficiaries through medical triage or outright euthanasia might become politically appealing to increasingly morally ambivalent younger generations (many of whom may also be eager to inherit parental estates before their elders fritter them away). If the entire class of people 65 and older is eliminated, their potential political opposition ceases to exist, and the solution therefore becomes more feasible.

Of course, we don’t need amoral super AIs to produce such morally monstrous solutions. The WEF and other elitist groups populated by moral cretins like Yuval Harari already seem capable of surreptitiously proposing and enacting such schemes. If we consider the evolution of “voluntary” euthanasia programs in Canada and Europe, one can see how what was originally proposed as a merciful and humane “right” for terminally ill persons has morphed, through family bullying and medical greed, into a near obligation for those who are elderly or merely mentally distressed. In retrospect, I see the prioritizing of the dangerous and ineffective mRNA bioweapon for those 65 and older, the housing of COVID-infected patients in nursing homes, and unethical hospital euthanasia protocols for anyone diagnosed as positive by inaccurate PCR tests as having the ulterior goal of eliminating large numbers of state pensioners seen as “useless eaters.” Super-intelligent AIs, like nuclear energy, will likely remain mere tools without real autonomy of will. It is rather the propensity of individuals like Yuval Harari to use such tools for evil purposes that is the real danger.

Another possible solution, of course, would be effective programs to encourage marriage and procreation. But the same green elitists have been pushing policies encouraging abortion, pricing housing out of the reach of young people who could form families, promoting the castration of little boys and the spaying of little girls, and promoting sterile sexual lifestyles, all of which appear to be parts of an overall anti-natalist agenda, also pushed by globalist elites in the name of keeping planet Earth green (and keeping this Earth for themselves alone).

susannahmoody

Your last thought: “The danger to humans is the creation of a sentient artificial being without a soul that will see us as illogical and therefore inferior.” Hmmm, it seems we already have that problem with a fair number of HUMAN beings.

Dave Ceely

If the last four years resurge, then the USA as a leader is a scary thing.

Denise Cherches

Fascinating subject.

Uh-huh…

Given the propensity of a single wrong line of code to replicate, with very troublesome consequences, I find myself in the box of fatalistic thinking: it’s only a matter of time, and we will not win this.

How likely is it that every creator of AI (shall we call them “entities”?) programs them completely ethically?

And… whose ethics?

After years of reading on the subject

—(going back to 2005… actually, going back to 1999, when the (October?) cover story of Popular Mechanics magazine about human/AI integration seized my attention)—

The single most chilling concept to arise is this: AI will quickly achieve the ability to amass all human knowledge throughout time, and what then will satisfy its quest for more knowledge?

That is the point at which AI “entities” will be able to create, program, and regulate themselves, and we will no longer be required.

Steve Northrop

Here's the thing: augmentation, enhancement, and cybernetic upgrades are coming. Call it progress if you must, but I've always been wary. Anything dependent on code, whether machine, nano, or otherwise (AI logic circuits, bionic limbs, organs, or manufactured upgrades), will be, or already is, subject to hacking. Piss off those in charge, and they'll simply turn you off. If we become dependent enough on these features, going without them may be worse than amputation.

Those nanobots you spoke of for rapid healing could, with the correct input, be turned around to do just the opposite. There are already genetic marker carriers that can be introduced through vaccination, food, even the air. Not everyone exposed would necessarily be targeted: say, only people with blue eyes, or a certain melanin threshold, or any of thousands of specific genetic traits could be bonded by whatever was introduced. You'd never even know, that is, until whoever created that marker decides to add the catalyst. It can even be "staged" so that only certain carriers are targeted: only 25% of blue-eyed people get aggressive cancers, or have their livers shut down, or whatever the desired result may be. You know, just as a warning to the rest.

There have been plenty of SF stories about people getting addicted to "plugging in," something that could only be achieved through augmentation. Anything you willingly take on could be corrupted. Personally, I don't want anything attached to me, or put in me, that could be manipulated without my knowledge or consent, or against my will. "Logan's Run" on steroids, except these steroids can be programmed.

There are certainly benefits to both AI and augmentation, but I firmly believe there are two sides to that coin, sword, what have you.

In "Johnny Mnemonic" there were Loteks. That's the banner I'll fly. Doesn't mean I won't keep apprised of what's happening, it just won't be happening to me... on purpose.