The Dark Side of Artificial Intelligence

By Michael Fumento -- March 8, 2018


Artificial intelligence (AI) is cool.

I often have to write in different languages and was shocked to find that, overnight, Google Translate went from being a joke to “knowing” even my best foreign language better than I do. The reason: an AI technique called “deep learning.” An Echo device with Alexa software has been a godsend to my extremely elderly parents, who like most people their age are technophobic; my father also has very poor eyesight. I’ve written extensively on vehicle safety, and I’m convinced that autonomous vehicles can eliminate the overwhelming majority of the roughly 37,000 vehicular deaths in the U.S. each year.

Yet there are those who say AI can be, in the words of Silicon Valley mogul Elon Musk, “... summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like—yeah, he’s sure he can control the demon. Doesn’t work out.”

Other very smart and tech-savvy people such as Stephen Hawking and Apple co-founder Steve Wozniak have also expressed concern.

The dark side of AI has essentially been the stuff of movies such as “Ex Machina” and especially the “Terminator” franchise. And the fact is, the day will almost certainly come when machines go beyond beating us at extremely difficult games like Go to becoming smarter than us in all ways. Nobody’s quite sure what will happen then. (Hint: If they decide to wipe us out, they won’t use human-looking machines with sunglasses, but probably microbes.)

Still, that day could be 30 years away. A more immediate concern: the Future of Life Institute has just released a disturbing 100-page report on the potential harms of AI within just the next five years. In other words, major aspects of the technology already exist.


Prepared by a group of 26 leading AI researchers, it discusses the threat of AI (and the related concepts of machine learning and “deep learning”) while also offering strategies for potentially mitigating the risks. Essentially, the categories of threat are: personal safety, digital safety, and protection from invasions of privacy, including government-sponsored snooping and control.

Personal safety. Consider, say, a cleaning robot that goes about its autonomous duties until it identifies the minister of finance, whom it then approaches and assassinates by detonating itself. A Roomba that goes boomba. Autonomous flying drones (as opposed to guided ones such as the Unmanned Aerial Vehicles the U.S. military routinely uses) could be used to track and attack specific people, in part using facial recognition like that of the iPhone X.

As household items as innocuous as coffee pots become connected into the Internet of Things, we can see how a hacker might command those otherwise ultra-safe autonomous vehicles to drive through a crowd. In fact, hackers could wreak absolute havoc by commandeering numerous vehicles at once in what’s called “swarming.” (And so much for stopping them by shooting the driver.)

At the close of the Fortune Global Forum in Guangzhou on Dec. 7, the event’s hosts released a swarm of over 1,000 small autonomous drones that danced and flashed through the air for nine minutes without bumping into each other.

So no, that five-year period is not exaggerated. Moreover, while the number of transistors on computer chips doubles only roughly every two years, AI capability is growing exponentially. Throw in progress with quantum computing, which will make today’s supercomputers look like pocket calculators, and you can imagine that we cannot imagine where AI is going. (Don’t worry about affording these machines; they can be accessed via the cloud.)

Digital security. Computers holding sensitive information, from bank accounts to embarrassing selfies, seem to be breached almost routinely these days, and it’s just going to get worse. Current phishing messages tend to be pretty simple, if not idiotic; I don’t think I’ve ever received one that didn’t have spelling errors. Yet as with the DNC breach (which was in fact idiotic), we’ve seen how effective they can still be.

AI can convince you that you’re actually communicating back and forth with a human being through emails, texts, and even voices. Recent breakthroughs have made computer voices almost indistinguishable from human ones. The next step will be chatbots that convince us we’re speaking to, and viewing, a live person.

Attacks on privacy. Autocratic governments will also spend fortunes on AI, to identify “troublemaker” targets for surveillance and to discredit or disappear them. China has a social credit system that uses AI (along with low-tech methods) to minutely control what benefits and punishments will be meted out to its citizens. And we’ve already seen how one such government, Russia’s, has tried to influence key elections in the U.S. and elsewhere.


If you saw Star Wars: Rogue One, you may have been shocked to find Peter Cushing reprising his role. (We should all look so good after being dead 23 years.) Mind, that took some real computer heft—although the price will keep dropping.

Meanwhile, the latest craze seems to be using a very simple program to insert female celebrity faces over those of women in porn videos. (“So Gal Gadot, what’s a nice Jewish girl like you doing in a business like that?”) More sophisticated programs can alter mouth movements to any words inserted, such that the best lip-reader wouldn’t know Barack Obama or Donald Trump didn’t actually say those words.

And given that one-half the American population is of below-average intelligence, expect many people to consider these renderings real and spread them all over social media in minutes—helped by bots of course.

Yet we don’t want to give up what we’re getting and will get from AI. A just released report by the cybersecurity firm McAfee and the Center for Strategic and International Studies estimates that cybercrime cost the global economy $600 billion last year. That’s bad. Yet another report predicts that AI will contribute as much as $15.7 trillion to the world economy by 2030. That’s good.

So we very much want ever-improving AI, even as we want effective countermeasures against the bad aspects.


Among the major recommendations of the Future of Life Institute are:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in artificial intelligence need to acknowledge that the good things they’re developing can be used for ill, and actively reach out to people who may be affected rather than simply waiting for that harm to show up.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security.
  4. Development of autonomous weapons should be banned.

Most of this is easier said than done, and it doesn’t remedy actions by governments—whether Russia, China, or the U.S. acting against its own citizens. As to banning autonomous weapons, tough luck. It’s not like the above-ground nuclear test ban, in which a violation would be rather obvious. Anyway, such weapons already exist, depending on the definition. There’s little hope of putting that genie back in the bottle.

But perhaps the greatest value of the report is in reminding us that a lot of new computer technology has already become a double-edged sword. After many years of declining U.S. vehicle fatality rates, they’re now going up, even as cars keep getting safer. The only reasonable explanation is driver cell-phone use. As much as 15% of Internet bandwidth is used for porn, while social media seems to a great extent to be replacing face-to-face and voice-to-voice interaction, which in turn seems to be reducing our ability to really connect and empathize with other human beings. As an article in Psychology Today put it, “As screen time goes up, empathy goes down.”

Maybe nothing can be done to even slow the development of “bad” AI. But traditionally, conservatives have led the way in preaching caution over new developments that can cause wrenching changes in society. Yet it seems in recent years we’ve been seduced into abandoning that role. Let’s take our eyes away from the cell phone and ears away from Alexa long enough to ponder what Brave New World we could be ushering in.

Michael Fumento -- Bio and Archives

Michael Fumento is a journalist, author, and attorney who specializes in health and science. He can be reached at Fumento[at]gmail.com.
