There was always going to be a moment like this.
As generative AI tools become more capable, more accessible, and more embedded in day-to-day decision-making, it was only a matter of time before they moved beyond drafting emails and summarising documents into shaping strategic decisions with real financial and legal consequences.
The recent dispute involving Krafton and the leadership of Unknown Worlds is a clear example of that shift. More importantly, it highlights where things can go wrong.
The temptation of AI as a strategic adviser
On the surface, the situation is not unfamiliar.
A company enters into an earnout arrangement. Performance targets are met or are expected to be met. The financial obligation becomes significant. The acquiring party begins to reassess the commercial wisdom of the deal.
What is new is how the response was formulated.
Rather than relying solely on internal legal counsel and external advisers, the CEO reportedly turned to a generative AI system to explore options. What followed was the creation of an internal “Project X” taskforce and a sequence of actions that ultimately led to the removal of studio leadership; those actions have now been overturned by the court.
This is not a story about AI “going rogue”. It is a story about how AI is being used, and misunderstood.
The bell curve problem
Generative AI systems are exceptionally good at producing plausible, structured, and confident responses.
But they are fundamentally trained to predict the most likely answer based on statistical patterns in their training data. In other words, they operate around the centre of the bell curve.
That works well for general queries, common scenarios, and standard workflows.
It does not work nearly as well for edge cases.
And legal disputes, particularly those involving contractual nuance, fiduciary obligations, governance structures, and litigation risk, are almost always edge cases.
In this instance, the AI reportedly acknowledged that cancelling the earnout would be difficult, yet it still contributed to a plan that attempted to navigate around that obligation. What it could not fully account for was how a court would interpret those actions in context: the intent, the contractual protections, and the broader equitable considerations.
That is where the gap lies.
It is not just “humans in the loop”
There is a common refrain in discussions about AI risk: keep humans in the loop.
This case demonstrates why that framing is insufficient.
The issue is not simply whether a human reviewed the output. It is whether the right human did.
Legal expertise is not interchangeable. Experienced practitioners bring with them a floor of knowledge, an internal benchmark against which they can test whether something feels wrong, incomplete, or overly simplistic.
They understand where the law is settled, where it is ambiguous, and where courts are likely to scrutinise behaviour more closely. They can identify when a proposed strategy might technically appear viable but carries unacceptable litigation or reputational risk.
AI does not have that floor. And a non-expert relying on AI does not inherit it.
The illusion of completeness
One of the more subtle risks with generative AI is that its outputs feel complete.
They are well-structured. They anticipate objections. They often include strategic steps, communications plans, and even defensive positioning.
That creates a dangerous illusion: that the problem has been fully analysed.
In reality, the outputs often omit the low-probability, high-impact considerations: the very issues that tend to define legal outcomes. The interpretation of “for cause” provisions. The weight given to prior warnings from legal counsel. The optics of decision-making processes. The court’s view on good faith.
These are not always obvious. And they are rarely captured by a system optimised for average-case reasoning.
What this means for businesses
None of this is an argument against using AI.
On the contrary, these tools are already proving valuable in legal workflows, whether in document review, drafting assistance, or surfacing relevant information more efficiently.
But there is a clear boundary.
AI can support analysis. It can accelerate understanding. It can help frame options.
It should not be relied upon as a substitute for legal advice, particularly where decisions carry material financial exposure or governance implications.
The more complex and high-stakes the issue, the more important it becomes to engage experienced advisers who can interrogate the problem properly, challenge assumptions, and identify risks that are not immediately visible.
A familiar principle, reframed
There is an old principle in commercial dealings: caveat emptor, or let the buyer beware.
In the context of AI, it takes on a slightly different meaning.
The risk is not just in the information you receive, but in how much confidence you place in it.
Tools that are designed to be helpful, articulate, and persuasive can give the impression of authority. But authority in legal matters is not derived from fluency. It is derived from expertise, judgement, and experience.
This case is unlikely to be the last of its kind.
If anything, it is an early signal of how AI will intersect with legal risk in the years ahead. The organisations that navigate this well will not be those that avoid AI altogether, but those that understand where its limits are, and where human expertise remains essential.
How Madison Marcus can help
At Madison Marcus, we work at the intersection of law, strategy and emerging technology.
We understand both the capability of AI systems and, critically, their limitations when applied to complex, high-stakes decisions.
Our role is not to slow innovation. It is to ensure that as organisations adopt these tools, they do so with the right governance, oversight and legal grounding.
We assist clients to:
- Assess legal and commercial risk arising from AI-informed decision-making
- Structure governance frameworks for the use of AI in executive and operational contexts
- Review and stress-test strategic decisions before they are implemented
- Navigate disputes where AI-driven processes are under scrutiny
- Align innovation initiatives with regulatory, contractual and fiduciary obligations
In an environment where AI can shape decisions as much as it supports them, the question is no longer whether to use these systems, but how to use them responsibly.
That requires more than access to technology. It requires judgement, experience, and the ability to see what is not immediately obvious. That’s where we come in.
Contact us today