Boycott unethical AI companies – and do it now!
In light of recent events, it is time for consumers to start wielding their power to influence the AI giants. Replace ChatGPT with Claude, and do not use the Microsoft Copilot chatbot.
Opinions expressed in Viewpoints are the authors’ own.
The fact that something is difficult has never been a good reason not to do it. This applies not only to individuals, but also to tech giants.
Earlier this year, something happened that probably flew under the radar of most Norwegians. Anthropic, one of the world's leading AI companies, was excluded from collaborating with the U.S. Department of War. Not because it had done anything wrong, but because it refused to compromise on its own safety requirements.
Given the technology's safety challenges, Anthropic had built in technical safeguards to prevent its AI from being used for autonomous weapons and for mass surveillance of American citizens. The Department of War responded with an ultimatum: open the AI systems for unrestricted use, or lose the contract. Anthropic said no and lost a contract worth USD 200 million.
For now, Anthropic is standing its ground, although it is back at the negotiating table with the Pentagon. It remains unclear what they are willing to accept.

Even though Anthropic returned to the negotiating table, this story is worth reflecting on for a moment, as it is rare for an AI giant to be willing to sacrifice its market share to stand by its principles.
Just hours later, OpenAI, the company behind ChatGPT, entered into an agreement to replace Anthropic and make its AI technology freely available for U.S. military use.
Several tech giants have reneged on their pledges against military use. In addition, when Donald Trump took office, one of the first things he did – on his very first day – was remove the federal framework for responsible AI development.
Several powerful companies' ethical guidelines, meant to ensure safe development, have proven to be little more than a marketing ploy. Responsible development is not prioritized: competition within the AI sector is brutal, capital is impatient, and powerful American political forces are now adding pressure of their own. The result is that safety mechanisms are removed in closed negotiations.
What makes this story even more unsettling is what happened when the United States attacked Iran. Hours after President Trump announced the ban on Anthropic, Claude was used in the attack, integrated into Palantir’s Maven Smart System, which helped the U.S. military identify and prioritize nearly a thousand targets within the first 24 hours.
According to the Washington Post, the military was so dependent on the technology that even if Anthropic’s CEO had demanded that they stop using it, the government would have used legal authority to retain access. Anthropic no longer had a choice. It is difficult to set boundaries for something that has already been integrated.
However, the story is not just about American tech companies.
Recently, the Norwegian Police University College used the AI tool Copilot to prepare the basis for a personnel policy decision. The document reportedly summarized practices at seven Norwegian universities and university colleges. When an employee representative checked the figures, they turned out to be incorrect. Copilot had not found the information it needed; it had simply guessed. The point is not that this is embarrassing and should have been avoided, but that it is part of a larger pattern.
Copilot is Microsoft’s AI assistant and is based on OpenAI’s technology – the same technology that very recently became part of the U.S. military infrastructure. They are not two separate entities.
Copilot is currently used by Norwegian businesses, municipal authorities, government agencies, hospitals and schools – often with little more guidance than a reminder that the user bears ultimate responsibility, and almost always without the users knowing much about the technology or who controls it.
So, what can you do? More than you think.
The simplest thing you can do right now is switch from ChatGPT to another AI provider. If more people switch to Claude, it sends a signal to the market that principled opposition has value and gives Anthropic greater backing in its negotiations. It takes just a couple of minutes.
Anthropic demonstrated that some things cannot be bought, at least not for now. Choosing Claude over ChatGPT is an easy way to show support for the decision through consumer power. That does not mean Anthropic has full control over how Claude is used – no technology company does once the systems have been integrated into critical infrastructure. However, there is a difference between companies that at least try to set boundaries and those that do not.
In six months, it might be Claude that you are deleting, not ChatGPT. The point is not to find the perfect provider once and for all and then rest on your laurels. The point is to stay informed, make conscious choices and be willing to switch again.
For those who would like a European alternative entirely independent of U.S. geopolitics, there is Mistral, a French AI company with open models that do not face pressure from Washington.
Disabling Copilot at work is a bit more difficult, but not impossible. In Word, Excel and PowerPoint, you can turn it off under File > Options > Copilot. Personal subscribers can downgrade to Microsoft 365 Classic, which does not include Copilot.
If you have Microsoft 365 through your work, your employer’s IT department can turn it off for you. It is certainly worth asking.
This commentary was first published in Dagens Næringsliv on 6 March 2026.

