Anthropic announced on Thursday that it is partnering with Amazon Web Services (AWS) and the data analytics company Palantir to give U.S. intelligence and defense agencies access to its Claude family of AI models.
The news comes as a growing number of AI companies pursue deals with U.S. defense customers for both strategic and financial reasons. Meta recently said its Llama models will be made available to defense partners, and OpenAI is likewise seeking closer ties with the U.S. Defense Department.
Anthropic’s head of sales, Kate Earle Jensen, said the partnership with Palantir and AWS will “operationalize the use of Claude” within Palantir’s platform, using AWS storage. Claude became available on Palantir’s platform earlier this month and can now be used in Palantir Impact Level 6 (IL6), a defense-accredited environment.
The Defense Department reserves IL6 for systems containing data deemed “critical to national security” that requires “maximum protection” against unauthorized access and modification. Information in IL6 systems can be classified up to “secret,” one level below “top secret.”
“We’re proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, which will improve the government’s ability to analyze data and run important operations more efficiently,” Jensen said. “Associating with Claude in Palantir on AWS will give U.S. intelligence and defense groups strong AI tools that can quickly handle and study huge amounts of complicated data.” She added that this will significantly improve intelligence analysis, aid officials in decision-making, streamline resource-intensive tasks, and boost operational efficiency across departments.
Signaling its ambitions in the public sector, Anthropic brought some Claude models to AWS’ GovCloud this summer. GovCloud is an AWS service designed specifically for U.S. government cloud workloads.
Anthropic has positioned itself as a more safety-focused provider than rivals such as OpenAI. Even so, the company’s terms of service permit its products to be used for purposes like “legally authorized foreign intelligence analysis,” “finding covert influence or sabotage campaigns,” and “warning before possible military activities.”
Anthropic’s terms state, “We will tailor use restrictions to the mission and legal authorities of a government entity,” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue.” The company says these carve-outs do not apply to AI systems it deems to “substantially increase the risk of catastrophic misuse,” to have “low-level autonomous capabilities,” or to be usable for disinformation, the design or deployment of weapons, censorship, domestic surveillance, or malicious cyber operations.
The government’s interest in AI is undeniable. A March 2024 analysis by the Brookings Institution found that AI-related government contracts had grown by 1,200%. Still, some parts of the government, including the U.S. military, have been slow to adopt the technology and remain skeptical of its return on investment.
Anthropic, which recently expanded to Europe, is reportedly in talks to raise a new round of funding at a valuation of up to $40 billion. Including forward commitments, the company has raised roughly $7.6 billion to date, with Amazon by far its largest investor.