By Tyron Devotta
For centuries, societies have described power in terms of “estates.” In medieval Europe, the three estates of the realm were the clergy, the nobility, and the commons—a division that captured the hierarchy of political and social influence. In today’s context, the three estates are the executive, the legislature, and the judiciary. By the late 18th century, commentators in Britain were speaking of the press as the Fourth Estate.
In modern democracies, the press is still seen as the Fourth Estate, but the information ecosystem has changed beyond recognition. The rise of social media and digital citizenry in the early 2000s was heralded as a “fifth estate,” giving ordinary citizens new platforms to influence public debate. Blogs, Twitter, YouTube, and Facebook live streams became tools of accountability and dissent. But today, we are confronted with a far greater transformation.
The new candidate for the Fifth Estate is not a class of people or a new medium: it is artificial intelligence.
Nepal’s “Discord democracy” experiment
The possibility of new estates forming outside traditional power structures was vividly illustrated in Nepal just weeks ago. In September 2025, amid political turmoil, a group of young Nepalis turned to Discord, a chat app usually associated with gamers, to debate and poll for an interim leader. Their consensus candidate was former Chief Justice Sushila Karki, elevated not by parliamentary deal-making but by online balloting.
Although the exercise had no legal standing, it was symbolically powerful. It showed that digital platforms can be used to fill legitimacy vacuums when formal institutions falter. It also raised hard questions: if people are willing to treat digital platforms as arenas of governance, what happens when those platforms themselves are shaped by artificial intelligence?
Why AI deserves to be called the Fifth Estate
AI’s claim to the role of a Fifth Estate rests on three pillars: knowledge, influence, and accountability.
1. Knowledge generation
AI systems are no longer passive search engines. They generate analysis, write reports, and answer questions with apparent authority. In government ministries and corporate boardrooms alike, AI tools are increasingly used to dissect data, draft policy memos, or run scenario planning. This shifts the starting point of human decision-making.
2. Legal reasoning
In the legal field, AI systems are already demonstrating their capacity to analyse massive volumes of judgments, submissions, and statutes. They can cross-reference cases at speeds no human lawyer can match. A diligent junior counsel may spend days combing through precedents; an AI can do the same in minutes. While courts have raised concerns about “hallucinated” citations, specialised legal AIs built on curated databases are said to have shown remarkable reliability. In Sri Lanka, where the judicial backlog is notorious, such tools could revolutionise case preparation—if used carefully.
3. Public discourse
AI doesn’t just generate knowledge for professionals; it filters and frames what the public sees. Whether through news summarisation, algorithmic feeds, or automated content creation, AI increasingly determines what narratives gain traction. Just as newspapers once shaped the national conversation, AI systems today play a role in amplifying or suppressing voices, often invisibly.
In all three realms, AI is more than a tool in human hands. It is a structural force shaping the conditions of democracy itself—precisely the hallmark of an “estate.”
But what about errors?
Skeptics argue that AI cannot be an estate because it is prone to mistakes and lacks independent agency. But this argument is weaker than it seems.
Humans also err—and not trivially. Judges can misapply statutes, journalists can misreport facts, and policymakers can miscalculate. Society has long accepted that human error is part of the system, so long as it is transparent and accountable. Why should AI be held to a higher bar?
The real question is not whether AI errs, but how its errors are managed. A Stanford HAI study, “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” found that even specialised legal AI tools (Lexis+ AI, Westlaw AI-Assisted Research, and Ask Practical Law AI) hallucinate in 17% to 33% of tested legal queries.
In a landmark step, the Kerala High Court has unveiled a policy regulating the use of Artificial Intelligence (AI) within the state’s district judiciary, explicitly prohibiting its use for decision-making or legal reasoning.
Titled “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary,” the framework sets clear boundaries for how AI may be deployed in judicial functions. The High Court noted that the increasing availability and accessibility of such technologies necessitated clear safeguards to preserve the integrity of the judicial process.
According to the policy, AI tools may only be used in a responsible and restricted manner, strictly for approved purposes and solely as assistive aids. “Under no circumstances shall AI tools be used as a substitute for decision-making or legal reasoning,” the document stated, underlining the court’s commitment to ensuring that judicial independence and human judgment remain paramount.
This is a progressive step, because it acknowledges that the legal profession is already making use of such tools, while also setting out clear guidelines for how they should be applied. The question is not whether errors will occur, but whether those errors can be traced, audited, and corrected.
That is where governance becomes essential. If AI is to be recognised as a Fifth Estate, its mistakes must be understood within frameworks of accountability. Just as human error is tolerated so long as responsibility rests with identifiable actors, so too must AI errors be managed and excused — provided there are systems in place to ensure responsibility and redress.
Agency and accountability
This raises the practical question: can a tool be an estate? Historically, an estate referred to people or institutions with voice and agency: the ability to express themselves and the power to shape outcomes. AI itself has no will, but the entities that design, train, and deploy AI systems—the corporations, governments, and communities behind them—do.
In this view, the “estate” is not the model itself but the coalition of operators who wield it. Just as the press became the Fourth Estate not because of the printing press but because of the journalists who used it, the Fifth Estate may be the AI infrastructure and its stewards. Still, the difference is profound: unlike the press, which is pluralistic, AI can reflect the biases of its creators, and each system carries its own lens on the world. Over-reliance on a single model risks narrowing perspective, creating a kind of tunnel vision that undercuts the diversity an estate is meant to preserve.
Implications for Sri Lanka
For Sri Lanka, where debates on digital governance and accountability are still nascent, recognising AI as a Fifth Estate is not a luxury—it is a necessity. Consider three examples:
* Justice: AI could help reduce case backlogs by streamlining legal research, but courts must set disclosure and accountability standards.
* Media: AI-generated content is already creeping into newsrooms, raising questions of transparency and trust. Citizens deserve to know when they are reading AI-produced text.
* Governance: As state bodies adopt AI-driven analytics for planning and monitoring, safeguards are needed to ensure that the “black box” of algorithms does not override democratic oversight.
The bottom line
If the executive, the legislature, and the judiciary are the estates of state power, and the press is the estate of democratic accountability, then artificial intelligence is becoming the estate of computation and cognition. It already shapes how knowledge is generated, how justice is reasoned, and how public opinion is formed.
To deny AI the title of Fifth Estate is to overlook its influence. To accept it is not to surrender to machines, but to acknowledge that the next great democratic challenge is ensuring the estate of algorithms remains accountable to the estate of citizens.
Written with the assistance of AI