To make its liberal employees happy...

Google won’t allow U.S. military to use AI to make weapons more effective



If you haven’t been paying attention to the internal drama at Google – and why would you? – here’s a brief primer: Google employees are overwhelmingly liberal, and a movement started some months back to pressure Google management not to work with the U.S. military on the use of artificial intelligence in weapons systems. The Pentagon saw promise in the use of Google’s AI to make pinpoint drone strikes more effective, and wanted to explore that and other possible uses of the technology.
The “Googlers” freaked, embarking on a months-long pressure campaign to get management to back off its work with the Pentagon, because, you know, military/war/bad/evil. Google sounds more like a college campus than a serious company, and today more than ever, because the liberal Googlers got their wish:
Google will not allow its artificial intelligence software to be used in weapons or unreasonable surveillance efforts under new standards for its business decisions in the nascent field, the Alphabet Inc (GOOGL.O) unit said on Thursday. The restriction could help Google management defuse months of protest by thousands of employees against the company’s work with the U.S. military to identify objects in drone video.

Google instead will seek government contracts in areas such as cybersecurity, military recruitment and search and rescue, Chief Executive Sundar Pichai said in a blog post on Thursday. “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” he said.

Breakthroughs in the cost and performance of advanced computers have carried AI from research labs into industries such as defense and health in the last couple of years. Google and its big technology rivals have become leading sellers of AI tools, which enable computers to review large datasets to make predictions and identify patterns and anomalies faster than humans could.

Google seems to think it’s being exceedingly virtuous by making the distinction that the AI can’t be used for offensive systems. I guess they think that means they won’t be helping the U.S. military start any wars. But if you have to fight a war, regardless of who starts it, you want to be able to go on the offensive. That’s how you end it quickly: by being able to move on the enemy, capture its territory and destroy its ability to fight you.

Offensive pinpoint drone strikes could be very effective in turning a potentially long war into a short one, and the ability to distinguish between different things on the ground could help prevent civilian casualties. But the liberal Googlers don’t want the U.S. military to be able to do these things, which means the military will either have to work with a different AI partner or develop its own technology.

I hope Google is proud of itself. This decision will make it more difficult for the U.S. Armed Forces to succeed in military operations. Come to think of it, Google is probably very proud of itself for that. From everything we’re learning about the culture of Google, that’s probably exactly what they want.


Dan Calabrese

Dan Calabrese’s column is distributed by HermanCain.com.

Follow all of Dan’s work, including his series of Christian spiritual warfare novels, by liking his page on Facebook.
