Anthropic Wants to Be the One Good AI Company in Trump’s America

Anthropic, the artificial intelligence company behind the chatbot Claude, is trying to carve out a spot as the Good Guy in the AI space. Fresh off being the only major AI firm to throw its support behind an AI safety bill in California, the company grabbed a headline from Semafor thanks to its apparent refusal to allow its model to be used for surveillance tasks, which is pissing off the Trump administration.

According to the report, law enforcement agencies have felt stifled by Anthropic’s usage policy, which includes a section restricting the use of its technology for “Criminal Justice, Censorship, Surveillance, or Prohibited Law Enforcement Purposes.” That includes banning the use of its AI tools to “Make determinations on criminal justice applications,” “Target or track a person’s physical location, emotional state, or communication without their consent,” and “Analyze or identify specific content to censor on behalf of a government organization.”

That’s been a real problem for federal agencies, including the FBI, Secret Service, and Immigration and Customs Enforcement, per Semafor, and has created tensions between the company and the current administration, despite Anthropic giving the federal government access to its Claude chatbot and suite of AI tools for just $1. According to the report, Anthropic’s policy is broad with fewer carveouts than competitors. For instance, OpenAI’s usage policy restricts the “unauthorized monitoring of individuals,” which may not rule out using the technology for “legal” monitoring.

OpenAI did not respond to a request for comment.

A source familiar with the matter explained that Anthropic’s Claude is being utilized by agencies for national security purposes, including for cybersecurity, but the company’s usage policy restricts uses related to domestic surveillance.

A representative for Anthropic said that the company developed ClaudeGov specifically for the intelligence community, and the service has received “High” authorization from the Federal Risk and Authorization Management Program (FedRAMP), allowing its use with sensitive government workloads. The representative said Claude is available for use across the intelligence community.

One administration official bemoaned to Semafor that Anthropic’s policy makes a moral judgment about how law enforcement agencies do their work, which, like…sure? But also, it’s as much a legal matter as a moral one. We live in a surveillance state; law enforcement can and has surveilled people without warrants in the past, and it will almost certainly continue to do so in the future.

A company choosing not to participate in that, to the extent that it can resist, is covering its own ass just as much as it is staking out an ethical stance. If the federal government is peeved that a company’s usage policy prevents it from performing domestic surveillance, maybe the primary takeaway is that the government is performing widespread domestic surveillance and attempting to automate it with AI systems.

Anyway, Anthropic’s theoretically principled stance is the latest move in its effort to position itself as the reasonable AI firm. Earlier this month, it backed an AI safety bill in California that would require it and other major AI companies to submit to new, more stringent safety requirements intended to ensure their models are not at risk of doing catastrophic harm. Anthropic was the only major player in the AI space to throw its weight behind the bill, which awaits Governor Newsom’s signature (which may or may not come, as he previously vetoed a similar bill). The company is also in D.C., pitching rapid adoption of AI with guardrails (but with emphasis on the rapid part).

Its position as the chill AI company is perhaps a bit undermined by the fact that it pirated millions of books and papers that it used to train its large language model, violating the rights of the copyright holders and leaving the authors high and dry without payment. A $1.5 billion settlement reached earlier this month will put at least some money into the pockets of the people who actually created the works used to train the model. Meanwhile, Anthropic was just valued at nearly $200 billion in a recent funding round that will make that court-ordered penalty into a rounding error.
