AI pioneer Sachin Dev Duggal is building a SeKond Brain — and doing it differently this time
The home office is cool in that over-air-conditioned way — a stark contrast to the polished glass-and-climate perfection of tech offices from London to Singapore. Behind Sachin Dev Duggal, a whiteboard diagram looks less like a product roadmap and more like a nervous system: arrows looping between boxes labelled memory, context, permissions, and provenance.
He isn’t pitching me another chatbot. He’s talking about the idiosyncrasies of the human brain – and the far greater failures of artificial intelligence.
“Most people don’t realise how much of their life is spent rebuilding context,” he says.
His pause makes me think about it, and I realise it’s true. Board meetings end, we walk back to our desks, and suddenly what had seemed so clear and logical while the CEO was talking evaporates. Corporate decisions get scattered across Slack threads, half-edited documents, voice notes, email replies and AI summaries that sound confident but are oddly detached from the thing they’re summarising – suggesting that it’s half guesswork. And by the end of the week, entire teams are reconstructing what they already knew three days earlier.
This is the problem Duggal thinks AI still hasn’t solved: it can give us input and intelligence, but it can’t give us continuity and context. And this is why, after a well-documented rise and fall in enterprise AI, he’s starting again.
Recalibrating the route to success
Before his new venture, SeKondBrain AI, there was Builder.ai – the company that made Duggal one of the most recognisable founders in the global automation boom. The idea was simple enough to travel: software without the usual engineering friction. Businesses could assemble applications from reusable components, accelerated by AI-assisted workflows.
The company raised more than $450 million and grew from roughly 40 employees in 2018 to more than 1,100 by 2024, reaching a valuation near $2 billion at its peak.
“The idea was sound and the ambition was real,” says Duggal, “but I built too much, too fast.”
He describes the lessons learned with striking clarity: how momentum can outpace verification, and how scrutiny can erode under layers of formal reporting. Over time, a gap opened between leadership and operational truth — until, as he puts it, “two versions of reality” quietly coexisted inside the same company. It allowed what he calls “leaders with amnesia” to reshape events, even telling him they would need to “throw him under the bus every now and then.”
It’s jarring, but Duggal recounts it without bitterness — more matter-of-fact than aggrieved.
That recalibration now underpins everything he’s building. And he’s building fast — around the clock — because, as he puts it, the world “changes every morning”. He has also allocated free equity to many former stakeholders and employees of Builder.ai. I call it generous; he corrects me: it’s his moral responsibility.
AI’s memory problem
Perhaps because of Builder.ai’s collapse, Duggal seems to see AI’s shortcomings more clearly than others in the sector. Its biggest problem is not capability, he argues, but memory.
Not storage memory, of course – the cloud has that in abundance – but conceptual memory: the ability to preserve meaning across time, across teams, and across changing versions of the same decision.
On his Substack, he recently described intelligence not as something that can be contained inside a model, but as something that should be distributed across people, tools, documents and systems to create an “ecology” rather than a database.
This shift in how we think about AI matters deeply, because if intelligence is ecological, then most current AI systems are working with the wrong unit of analysis. They treat snippets of text as tokens, when really they should be working with relationships, generating understanding rather than simple answers. Instead of chat, you get continuity.
AI as a gardener not a librarian
SeKondBrain is Duggal’s burgeoning attempt to build out that idea.
He describes it less as an AI assistant and more as a substrate – a layer of shared memory that sits underneath people, software, and AI agents alike, turning fragments into connected maps of what a team actually knows.
His analogy is that truly successful and powerful AI shouldn’t be about accumulating information. Instead, it’s about tending relevance – AI as more gardener than archivist or librarian. The hope is that shaping a new generation of AI in this way will allow long-term gains and accumulation of truer, deeper knowledge over time.
So, what’s next on the agenda?
“Anybody can build a demo,” Duggal says. “The real test is whether the system holds when the stakes are boring and constant.”
His past ventures, mirroring Silicon Valley’s mantra of “move fast and break things”, have been animated by speed – how fast something could be scaled up and how far ahead of the market’s imagination you could build. Now the emphasis is on building more deliberately, for durability.
This echoes something else he’s realised: that modern organisations desperately need structure underpinning their tools, and without it they are “drowning in stored data but starving for connective tissue.”
An unglamorous philosophy
Duggal definitely sees the irony. He once built a company to make software design feel frictionless, and now he’s building one designed to make friction visible – to show how decisions were made, what changed, and whether an answer can be trusted enough to survive compliance review, board scrutiny, or regulatory oversight.
He puts it plainly: “With any other investment, enterprises don’t buy vibes, they buy guarantees. It’ll be the same for AI once the excitement wears off.”
A post-hype world is exactly where the AI industry is heading. As today’s systems move deeper into finance, healthcare, logistics, law, and public services, charm and novelty stop being enough. The problem is no longer whether software can sound intelligent but whether it can show its working.
The hallucinations of large language models (LLMs) won’t be accepted forever, nor should they be. The next generation of AI must be “confidence-gated” – layered with traceability and credibility rather than pretending to be omniscient and escaping interrogation.
Sachin Duggal has already lived through the era when tech rewarded speed simply for being fast. Now he’s betting on something slower, sturdier, and harder to fake. The next wave of AI may arrive without fireworks. But it’s the one that decides whether intelligence becomes infrastructure – or just another passing trick.
Rolling Stone UK publishes articles from a variety of contributors which express a wide range of viewpoints. The positions taken in these articles are not necessarily those of Rolling Stone UK.
