
Google AI Tech Will Be Used for Virtual Border Wall, CBP Contract Shows

Anduril’s advanced line of battlefield drones and surveillance towers — along with its eagerness to take defense contracts now viewed as too toxic to touch by rival firms — has earned it lucrative contracts with the Marine Corps and Air Force, in addition to its Homeland Security work. In a 2019 interview with Bloomberg, Anduril chair Trae Stephens, also a partner at Thiel’s venture capital firm, dismissed the concerns of American engineers who object to building weapons. “They said, ‘We didn’t sign up to develop weapons,’” Stephens said, explaining, “That’s literally the opposite of Anduril. We will tell candidates when they walk in the door, ‘You are signing up to build weapons.’”

Palmer Luckey has not only campaigned for deeper Silicon Valley integration with the military and security state, but has also pushed hard to influence the political system. The Anduril founder, records show, has personally donated at least $1.7 million to Republican candidates this cycle. On Sunday, he hosted President Donald Trump at his home in Orange County, Calif., for a high-dollar fundraiser, along with former U.S. ambassador to Germany Richard Grenell, Kimberly Guilfoyle, and other Trump campaign luminaries.

Anduril’s lobbyists in Congress also pressed lawmakers to include increased funding for the CBP Autonomous Surveillance Tower program in the DHS budget this year, a request that was approved and signed into law. In July, around the time the program funding was secured, the Washington Post reported that the Trump administration deemed Anduril’s virtual wall system a “program of record,” a “technology so essential it will be a dedicated item in the homeland security budget,” reportedly worth “several hundred million dollars.”

The autonomous tower project awarded to Anduril and funded through CBP is reportedly worth $250 million. Records show that $35 million for the project was disbursed in September by the Air and Marine division, which also operates drones.

Anduril’s approach contrasts sharply with Google’s. In 2018, Google tried to quell concerns over how its increasingly powerful AI business could be literally weaponized by publishing a list of “AI Principles” with the imprimatur of CEO Sundar Pichai.

“We recognize that such powerful technology raises equally powerful questions about its use,” wrote Pichai, adding that the new principles “are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.” Chief among the new principles were directives to “Be socially beneficial,” “Avoid creating or reinforcing unfair bias,” and a mandate to “continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.”

The principles include a somewhat vague list of “AI applications we will not pursue,” such as “Technologies that cause or are likely to cause overall harm,” “weapons,” “surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

It’s difficult to square these commitments to peaceful, nonsurveillance AI humanitarianism with a contract that places Google’s AI power behind both a military surveillance contractor and a government agency internationally condemned for human rights violations. Indeed, in 2019, over 1,000 Google employees signed a petition demanding that the company abstain from providing its cloud services to U.S. immigration and border patrol authorities, arguing that “by any interpretation, CBP and ICE are in grave violation of international human rights law.”

“This is a beautiful lesson in just how insufficient this kind of corporate self-governance really is,” Whittaker told The Intercept. “Yes, they’re subject to these AI principles, but what does subject to a principle mean? What does it mean when you have an ethics review process that’s almost entirely non-transparent to workers, let alone the public? Who’s actually making these decisions? And what does it mean that these principles allow collaboration with an agency currently engaged in human rights abuses, including forced sterilization?”

“This reporting shows that Google is comfortable with Anduril and CBP surveilling migrants through their Cloud AI, despite their AI Principles’ claims not to cause harm or violate human rights,” said Poulson, the founder of Tech Inquiry.

“Their clear strategy is to enjoy the high profit margin of cloud services while avoiding any accountability for the impacts,” he added.

