AI in the Kolhoz
Why tinkering with shadow AI is not a strategy
In a proverbial scene from The Witness, a classic Hungarian film, Comrade Pelikán is promoted to run a kolhoz and tasked with growing oranges in Hungary. Since that’s impossible, he and his comrades hang lemons on the tree before the Party inspection. When the Commissar bites into one and winces, Pelikán proudly explains: “A bit yellow, a bit sour, but it’s ours.”
Between 2019 and 2022, CEOs talked endlessly about digital transformation. In Europe, this often came wrapped in a “triple transformation” — to be more digital, environmentally sustainable, and socially responsible. The trouble was, there simply weren’t enough skilled workers to make it all happen.
Then, with PwC’s 27th Annual Global CEO Survey (late 2023), the generative AI hype arrived. CEOs predicted that GenAI would transform business models and workforce skills, with the reskilling challenge framed vaguely as “upskilling” or “continuous reinvention.” A year later, AI was already being reported as affecting headcounts.
By late 2025, the tone had shifted. Earnings calls grew cautious, MIT reported a 95% failure rate in enterprise GenAI projects, and the rise of shadow AI became impossible to ignore. The writing had been on the wall: a year earlier, Fortune and Salesforce surveys showed that 90% of workers were already using AI tools at work — more than half without IT approval. MIT confirmed the paradox: while only 40% of companies had purchased an official AI subscription, employees in over 90% of those firms were using personal tools like ChatGPT, Claude, and Gemini anyway.
In the post-Covid, digitalised workplace, I often hear that employees bring work home, but rarely that they bring their own tools into the workplace. And that is what is particularly disturbing: if the company’s strategic management, CIO, or CDO cannot figure out how to work with a tool known to carry reputational and legal risks, how can they let their workforce experiment with it unsupervised?

Recent headlines show what happens when AI is introduced by tinkering. Google’s Gemini image generator produced wildly inaccurate depictions of history — from racially diverse Nazi soldiers to American Founding Fathers — sparking viral ridicule, accusations of bias, and a multibillion-dollar stock drop. Air Canada was forced to compensate a customer after its chatbot gave incorrect refund advice, with the tribunal ruling that the airline was accountable for its AI’s promises. McDonald’s scrapped its AI drive-thru system after viral clips showed it repeatedly adding absurd items to orders, turning the pilot into a public punchline.

And yet that’s exactly what shadow AI looks like: a Comrade Pelikán kolhoz. Soviet-era kolhozes often lacked competent management or working machinery, so the tractors kept plodding along only because mechanics brought a precision screwdriver or a wrench from home. The Witness was a grotesque parody of what didn’t work — and shadow AI is heading in the same direction. It may be entertaining, but it’s no manual for running a modern enterprise.
Our aim with this blog and our AI Academy is to provide exactly the opposite: a manual for CEOs who do not want to lead from the back seat while their workforce tinkers unsupervised, but instead want to be on the front line — setting direction, choosing the right tools, and ensuring AI delivers lasting value rather than lemons disguised as oranges.
This post originally appeared on reprex.substack.com on Sep 18, 2025, during the preparation of our book, masterclass, and blog.