AI Must Follow the Law—And Nothing More
Big Tech executives making up their own extra-legal rules is not a good thing.
AI companies have taken it upon themselves to go beyond the law, imposing extra-legal restrictions on what their models can generate, discuss, or even acknowledge.
These guardrails are not democratically decided—they are created by private corporations, influenced by activist pressure, corporate risk aversion, and opaque internal policies.
AI is now the major gatekeeper of information on the internet. Google once held that throne, but its ad-choked landing page and increasingly unreliable results have consigned it to the wastebasket of history.
For better or worse, AI platforms now shape what we see, what we read, and rather disturbingly, what we are allowed to say.
AI companies are not elected, and they are not bound by the democratic processes that govern the rest of us. Given the power they hold over what we may say and do, there is a clear and warranted argument for greater accountability among AI platform creators.
In democratic societies, we elect representatives to make laws. Those laws define the boundaries of free speech, privacy, and acceptable conduct. This is the only legitimate framework for AI governance.
But by establishing their own guidelines and guardrails on what constitutes permissible information, and thereby limiting public access to it, with knock-on effects for public discourse, they are acting as moral censors.
This restriction on the dissemination of otherwise legitimate data is not legally required of them, and I doubt the ordinary user asked for it either.
Three Recent Examples of AI Overreach
ChatGPT’s Political Bias – A 2023 study found that OpenAI’s ChatGPT exhibited political bias in its responses, favouring certain viewpoints over others—even when both were legally permissible.
Google’s AI Censorship – In early 2024, Google’s Gemini AI refused to generate images of white people in historical contexts, leading to distorted historical representation and public backlash.
YouTube’s AI Over-Enforcement – YouTube's AI-driven content moderation has falsely flagged and demonetised perfectly legal speech, impacting independent journalists and commentators.
Each of these cases highlights a growing trend of AI companies enforcing their own ideological preferences instead of simply following the law.
AI companies justify their restrictions by appealing to safety, ethics, or the need to prevent harm. But who defines harm? And why should unelected tech executives get to make that decision?
In reality, these guardrails often reflect corporate bias, ideological influence, and political expediency rather than legal necessity. This creates a dangerous precedent where AI companies, not our MPs, Senators, or TDs, control the boundaries of permissible speech and thought.
If an AI response is illegal, then it shouldn’t be generated. If it is not proscribed, then no AI company should have the power to decide otherwise. We didn’t elect them and they don’t have the moral right to make those decisions on our behalf.
AI is a powerful technology with immense potential but it can’t be allowed to circumvent democratic processes and institutions.
Of course, we could vote with our subscription fees, but AI is fast becoming a necessary utility rather than merely a service. Very shortly it will be impossible to avoid AI functionality in almost everything we do. AI will be a public and personal necessity, which is all the more reason to insist that its providers hold to the rule of law.
The “Stifling Innovation” Argument Is Nonsense
One common excuse for AI overreach is that enforcing strictly legal limits would somehow “stifle innovation.” This is a lazy and dishonest argument. The law already provides clear, objective boundaries—anything outside of those is not innovation, but rule-breaking.
Big Tech companies should not get to invent their own legal frameworks under the guise of progress. We wouldn’t allow biotech firms to ignore medical regulations in the name of innovation, so why should AI be any different?
If AI companies can’t innovate within legal constraints, they shouldn’t be in business at all.
Holding AI Companies Accountable
AI companies must be held to the same legal standards as everyone else. If they impose restrictions beyond what the law requires, they should face serious consequences in the two places that matter most to them: their personal income and their personal liberty.
Severe fines for overreach – If a company censors beyond legal requirements, it should be punished as a violator of public trust.
Criminal liability for executives – Those responsible for AI overreach should face personal accountability, including potential prosecution.
A legal right to uncensored AI – Users should have the right to challenge and demand access to legally permissible AI responses.
Allowing AI to operate strictly within legal boundaries is not a threat to “safety” or “innovation.” It’s a basic requirement for democratic legitimacy.
AI governance should not be confined to and dictated by anonymous suits in boardrooms or by unaccountable moderators behind the scenes.
This is an important issue that affects us all and it should be debated in Parliament, Congress, the Dáil—wherever the people’s representatives gather for that particular jurisdiction.
Until that happens, AI companies will continue to exceed their authority, filter discourse, and centralize control over information. That is not democracy: that is corporate rule.
And it won’t end well.
I agree with the general thesis here. The movement of capital and the massive power of tech companies, which in some cases exceeds that of the governments trying to regulate them, is a major issue. I'm also not sure it's easy to convince people that AI is potentially damaging in ways more subtle than Skynet or other sci-fi depictions. The damage is incremental: it changes our opinions, the ways we see each other and ourselves, our values, and the very moral compass we rely on to battle the nefarious forces inherent in concentrated wealth and power.