- How it started
Story.
From a Janmashtami chakra and a stripped radio motor in class three, to a multi-LLM autonomous factory in 2026 - the full arc of the studio in eleven chapters.
Chapters
XII
Class 3 → today
[ Preface ]
Most origin stories are written backwards from the result. This one was lived forwards. Every step of the arc - the gears in class three, the C++ ERP in class ten, the mechanical-engineering detour, the mushroom-plant pivot in 2019, the ESP32 deep-end, the AI-ML diploma, the CLI discovery in mid-2025, and the swarm that ships the studio today - was unplanned at the time. What follows is the actual sequence, written in the present-tense voice the studio uses everywhere else.
I
Class 3–4
The Janmashtami chakra
Every Janmashtami at home, the family set up a Krishna idol with a small motor-powered chakra spinning behind his head. The chakra was a thin metal disc on a tiny shaft; the motor came from a tape recorder or a cheap toy. Singh, around eight or nine at the time, watched the chakra spin and asked the question that decides who becomes an engineer: if this motor can turn a halo, why can't it turn a wheel?
He pulled an old radio apart, took out the motor, and coupled it directly to the tire of a toy car. The car flew. He had seen that the proper toy cars sold in the market all had gears inside their chassis. He took apart one of those, lifted out the white-plastic gear set, and slipped it between the radio motor and the wheel. The car slowed to a speed his hands could keep up with.
An old phone-charger transformer became the adapter. Dry cells became the battery. Forward and reverse came from flipping the polarity. By the end of that summer he had built a working remote car from a Krishna idol, a radio, and a broken market toy. That was the moment that fixed the question: anything that runs can be taken apart and rebuilt by hand.
II
Class 10–11
The school ERP
The first serious software project was a student-database system for the school, written in C++ end to end. Singh built it alongside his computer-science coach over a stretch of months - a small ERP with login flows, record entry, search, and report exports.
The thing that mattered wasn't the C++. It was the discovery that software was the same kind of object as the toy car - something with parts that could be specified, taken apart, rebuilt, and run. The mechanical world had been there since class three; the software world arrived in class ten and slotted in beside it.
III
After Class 12
The mechanical fork
There wasn't strong family guidance about which way to go for college. Both options felt enormous. Software had the school ERP behind it; mechanical had the gears, the chakra, the years of taking apart everything that ran on a motor. He picked mechanical engineering.
In retrospect, the right call. Mechanical engineering taught problem-solving harder than it taught knowledge. Knowledge is secondary - every problem in the field has a known answer somewhere. What separates the engineers is whether they can break a fresh problem into solvable pieces under time pressure. Mechanical engineering opened those mental blocks. The actual coursework - thermodynamics, machine design, fluid mechanics - came and went; the habit of approaching a problem stayed.
Software ran in parallel through college years - small builds, side experiments, a little more C++, some Python. Nothing serious enough to call a portfolio. The serious work was still ahead.
IV
2019
The mushroom plant
Singh's father set up a commercial mushroom plant in 2019. Operations ran on conventional electrical control panels - one row of grow rooms needed a panel that cost ₹70,000 to ₹80,000. Every breakdown stopped the work. The same loop played out a dozen times: a panel fault, a phone call, a wait, the electrician arriving with his toolbox, the panel swung open, fresh wiring, a quote, a repair, the room running again.
Watching that loop was the next pivot. Companies like Carrier already shipped small electronic control units the size of a paperback that ran entire systems for a fraction of the cost and footprint. Why was a mushroom plant in 2019 wired the way a textile mill in 1985 was wired? The gap was a product opportunity sitting in the open. The way out was learning the controllers.
V
2019–2022
ESP32, and the diploma
He started with the ESP32 - a small, programmable, cheap microcontroller that could fit anywhere a relay could. Once you can read a sensor, you can capture data; once you can capture data, you can predict. He understood the second half of that sentence by instinct before he had the formal training behind it.
The training came next: an AI / ML diploma alongside the work. The diploma was not the degree of an applied scientist - it was the practitioner's path through what AI actually does in production. The role of data engineering. Why a model is a thin layer on top of a much heavier pipeline. Why prediction is mostly about the data and very little about the algorithm. Concepts that had been hand-wavy turned into specific shapes.
VI
2022–2024
The custom plant controller
He built the full custom controller for the mushroom plant in stages - actuator governance, sensor ingestion, predictive analytics, alerting, all running on ESP32 nodes plus a small backend. The plant ran on Singh's hardware and Singh's code instead of an electrician's panel.
Along the way the studio stack revealed itself one piece at a time. Services. APIs. Webhooks. Database design. Frontend. Deployments. Every concept came into the work because the work needed it that week. By 2024 he was building websites and small SaaS prototypes alongside the IoT work - picking up domains as fast as the projects demanded them.
VII
Through early 2025
The Google-the-syntax era
Until the start of 2025, the loop was honest but slow. Think through the design. Open Google. Search the syntax. Copy the snippet. Paste. Adapt. Debug. Repeat. The thinking-to-shipping ratio was wrong; most of the time went into typing, not into deciding.
Singh built shipping software anyway. But the felt sense that something was about to change was already there. The first wave of LLMs had landed and the pattern was obvious: this layer was going to absorb the syntax part of the work soon. The only question was when, and how cleanly.
VIII
June–July 2025
The CLI changed everything
By mid-2025 the CLI tools - Claude Code, Codex CLI, Gemini CLI - were good enough to take over the syntax part of the loop entirely. Singh found them, started using them, and the shape of the work flipped overnight. The thinking remained where it had always been; the typing stopped being his job.
The plain way to put it: "now I don't have to do anything." Not literally - the operator still has to choose what to build, why, and when to stop. But the doing-by-hand part of software, the part that had eaten most of the hours since class ten, was outsourced to the model in a single quarter.
IX
Late 2025
The first multi-LLM sandbox
By late 2025 he had stitched several CLIs into one local workspace. The first real production problem surfaced inside a week: a single CLI's quota would run out mid-task and the whole pipeline would stall. The fix needed to be a failover layer.
He built it the way an electrical engineer would have built it: circuit breakers on every model dependency, fallback chains by priority, model-by-usage routing - cheap model for routine work, escalate to a higher-tier model when the cheaper model's confidence dropped, escalate again on the next failure, never escalate without a reason. The studio's resilience to provider failure was an architecture decision, not a wish.
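The breaker-and-chain pattern above can be sketched in a few lines. This is a minimal illustration, not the studio's actual code - the model names, thresholds, and the `run` callable are all assumptions made for the example; only the shape (cheapest model first, breaker opens after repeated failures, escalation only with a failure as the reason) follows the text.

```python
# Illustrative failover sketch - names and thresholds are assumptions,
# not the studio's real implementation.
from dataclasses import dataclass, field

@dataclass
class Breaker:
    threshold: int = 3          # failures before the breaker opens
    failures: int = 0

    def allow(self) -> bool:
        return self.failures < self.threshold

    def record(self, ok: bool) -> None:
        self.failures = 0 if ok else self.failures + 1

@dataclass
class Router:
    # Priority chain: cheapest first, escalate only when a call fails.
    chain: list = field(default_factory=lambda: ["haiku", "sonnet", "opus"])
    breakers: dict = field(default_factory=dict)

    def call(self, task, run):
        for model in self.chain:
            breaker = self.breakers.setdefault(model, Breaker())
            if not breaker.allow():
                continue                    # breaker open: skip this provider
            try:
                result = run(model, task)
                breaker.record(ok=True)
                return result
            except RuntimeError:
                breaker.record(ok=False)    # escalate - with a reason
        raise RuntimeError("all models exhausted")
```

Used against a provider where the cheap model's quota is gone, the router escalates one tier and returns from there - never higher than the failure forced it to go.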
The piece that mattered most - and the piece every other multi-model setup was getting wrong - was context handover. When a model dies mid-task, the conventional fallback spins up the next model from a cold prompt and the work effectively restarts. Singh closed that loop. The orchestrator persists the live context - the conversation history, the partial output, the in-flight reasoning, the open file handles - and hands it forward to whichever model takes the next swing. The replacement model picks up exactly where the failed one left off, keeps the same intermediate state, and finishes the task as if the failure never happened. That single piece of plumbing - context preserved across model boundaries - is what turned the swarm from a fragile experiment into something that runs for hours unattended.
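The handover described above can be reduced to one invariant: the context object outlives any single model. The sketch below is illustrative only - the function names and the `step` callable are assumptions for the example - but it shows the difference from a cold-prompt restart: the replacement model receives the same history and partial output the failed model held.

```python
# Context-handover sketch - illustrative, not the orchestrator's real API.
from dataclasses import dataclass, field

@dataclass
class TaskContext:
    history: list = field(default_factory=list)   # conversation so far
    partial_output: str = ""                      # work completed mid-task

def run_with_handover(models, step, ctx: TaskContext) -> str:
    """Try each model in turn, handing the SAME context forward."""
    for model in models:
        try:
            while True:
                chunk, done = step(model, ctx)    # one unit of work
                ctx.partial_output += chunk       # persisted, not discarded
                ctx.history.append((model, chunk))
                if done:
                    return ctx.partial_output
        except RuntimeError:
            continue    # model died mid-task; next model resumes from ctx
    raise RuntimeError("no model finished the task")
```

The point of the design is that failure handling and state handling are the same code path: a dead model is just a model that stopped appending to a context someone else can keep appending to.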
X
Late 2025 → today
The model-comparison stretch
The next phase was empirical. He spent weeks running real production tasks across each CLI and noting where each one genuinely outperformed the others. Codex was strongest at certain shapes of code. Claude was strongest at certain shapes of reasoning. Gemini at certain kinds of search. Kimi at long-context reads against legal-style or audit-style documents. None was strictly best; each had a grain.
Once the grains were known, the orchestration changed. Tasks stopped being assigned to whichever model the operator had open. They started being assigned to whichever model fit the task. The studio's working assumption became: the model is a tool with a profile, not a generic intelligence to be flattered.
XI
2026
The mature swarm
Today the studio runs on a single orchestrator that calls the right model for the right slice automatically. Each sub-agent - orchestrator, architect, quality gatekeeper, digital twin of the owner, independent verifier, executor - already knows whether it wants Opus or Sonnet or Haiku, and asks for it. The operator does not pick the model anymore.
The result is the studio that ships AdaptiveMind. Six production projects in a single calendar year. A Project Learning Ledger that captures every gate failure as a permanent test. A 95%-target verification gate before any blueprint locks. A multi-wave parallel build that closes itself out when the audit averages clear the bar.
No task gives Singh tension anymore. Research first, understand fully, then implement - and the implementation has stopped being the hard part. The hard part is what every operator's job has always been, only now it stands on its own with no syntax in front of it: deciding what is worth building, and deciding when to stop.
XII
Next
Coming soon
Being written. The next chapter is in motion - Project 22 (io-gita applied), Project 21 (PPF Phase 2), Project 34 (URIP Personal), and a few quieter threads underneath. Chapter XII will land when there is something honest to ship as a chapter, not before. Until then, the live work is on the Now block on the homepage.
Closing line · Singh
Today no task gives me tension.
I research, I understand, I ship.
The hard part is what every operator’s job has always been - deciding what is worth building, and deciding when to stop. The syntax floor has been removed. Everything else is the swarm.