The Board Is Closer to the AI Than It Thinks

When systems act at scale, accountability does not disappear. It returns to the room where approval was given.


Adv. Sreeraj Muralidharan, BBM, FCS, LLB, CFORA

Email: Advsreerajm@gmail.com

The recent controversy around Grok has been framed largely as a technology issue. That framing is convenient: it keeps the conversation safely distant. But what actually unfolded is something most founders recognise immediately, even if they rarely articulate it.

A system was allowed to operate at scale.
It behaved in a way that surprised people.
And suddenly, explanation was required.

At that moment, no one was interested in how the system worked. The questions were not about models or architecture. They were about judgment. Who allowed this? Why was it permitted to operate this way? At what point should someone have stepped in?

That instinct is not unique to governments or regulators. It is how accountability works everywhere that power exists.

Anyone who has built and run a business long enough knows this pattern. When things go wrong in a meaningful way, responsibility does not scatter. It concentrates. It moves upward. Eventually, it reaches the person or group that approved the decision and set the tone.

Grok simply made that visible in public.

What many founders underestimate is how close AI already sits to that centre of gravity. It is still often treated as an operational or product decision—something delegated, experimented with, or rolled out incrementally. But once a system generates content, influences people, moderates behaviour, or operates without immediate human judgment, it is no longer just technology. It becomes an extension of leadership intent.

At that point, the question is no longer whether the system is impressive. It is whether its behaviour reflects the judgment of the people who allowed it to act.

This is where founder-led businesses need to be especially careful, not because they are reckless, but because they move decisively. Speed is usually a strength. But speed without reflection can quietly turn into exposure.

Most serious problems are not created by bad intentions. They arise because no one paused to ask a simple question early enough: What happens if this behaves differently than we expect? Not in theory, but in public. At scale. With consequences.

AI does not introduce a new category of risk. It accelerates existing ones. It amplifies culture, tolerance, and assumptions. If oversight is informal, the system will reflect that. If escalation is unclear, the system will find the edges. If judgment is deferred, automation will fill the gap.

None of this is an argument against AI. Founders adopt tools because they solve real problems. But leadership has always been about knowing where delegation ends. There is a point at which a decision is too consequential to remain implicit.

The uncomfortable truth is that when something goes wrong, no one asks whether the decision was delegated efficiently. They ask whether it should have been delegated at all.

The Grok episode will fade from headlines, as these things do. But the underlying lesson will keep resurfacing in different forms. Systems are getting faster. Visibility is increasing. Tolerance for “we didn’t anticipate this” is shrinking.

AI does not change who is accountable.
It shortens the distance between action and consequence.

And in founder-led organisations, that distance has always been small.

In the end, leadership is not defined by what we automate, but by what we remain willing to own.


#FounderLeadership
#Boardroom
#CorporateJudgment
#Governance 


 
