How Indian IT can flip the AI script, ft. Ameya P
Between existence, survival, and the possibility of adaptation.
Hi folks, Pranav (Manie) here.
You (and maybe your portfolio too) are probably tired of hearing this, but the age of AI agents has been a thorn in Indian IT’s side. Its business model gets staler with each passing day. But what we often don’t understand is what this means for the day-to-day nitty-gritty of the industry.
Now, there is a possibility that Indian IT might just adapt to the new paradigm. We’ve briefly covered glimpses of this possibility in past Daily Brief stories. But it’s hard to say for sure what such a possibility even entails. After all, what does adaptation for Indian IT look like? What forms of inertia will they have to overcome to change successfully? And even if they do adapt, will they be able to defend their business from the new threats that AI will inevitably give birth to?
To answer these questions, we had a nuanced conversation with Ameya Pimpalgaonkar, a storied professional with two decades of experience in the global IT industry, working across giants like IBM, Accenture and Infosys. He has donned many hats, from software developer to co-founder and CTO. He’s also a prolific technology investor in both public and private markets, and is very well-known on X for his investing takes. We’ve quoted his work in the past in our coverage of Indian IT on The Daily Brief. I highly recommend reading his newsletter.
Personally, I learnt a lot. Most mainstream conversations about Indian IT are either a black box or too one-size-fits-all. Ameya’s answers cut through both of those problems succinctly. The idea was not to stick to a doomer or an optimistic narrative, but to weave a story that pinpoints the opportunities and the threats, and captures just how much AI can commoditize any possible future moat.
Some choice quotes that I think are worth highlighting:
On why the business model shift, while real, is still slow:
“First, the contractual challenges. All service level agreements are still tied to response time, not to a business outcome. If you go to Jira, or any ticketing tool like ServiceNow, you would clearly see the service catalog — the SLAs as per the category of an incident — and the SLA translates into how many hours that incident should be resolved in. When that time elapses, you get the escalation emails automatically. But inherently, this is still tied to response time. It is not tied to the quality of the fix being delivered.
There is no connection to the business outcome defined yet. And what this means is you have to renegotiate a contract that is already a legal document — an already time-tested relationship with the customer. How would you even define a business outcome? That’s the biggest challenge right now.”
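The response-time SLA mechanics Ameya describes can be sketched in a few lines. This is an illustrative toy, not ServiceNow or Jira logic; the priority names and hour thresholds are assumptions:

```python
from datetime import datetime, timedelta

# Illustrative service catalog: hours-to-resolve per incident priority,
# mirroring the SLA categories in ticketing tools. (Values are assumed.)
SLA_HOURS = {"P1": 4, "P2": 8, "P3": 24}

def sla_breached(priority: str, opened_at: datetime, now: datetime) -> bool:
    """True once the resolution window has elapsed — the point where
    escalation emails fire automatically. Note the check is purely about
    elapsed time; nothing here measures the quality of the fix, let alone
    a business outcome."""
    return now - opened_at > timedelta(hours=SLA_HOURS[priority])

opened = datetime(2024, 5, 1, 9, 0)
print(sla_breached("P1", opened, datetime(2024, 5, 1, 14, 0)))  # True: 5h > 4h window
```

The interesting part is what’s absent: there is no field anywhere in that function for outcome quality, which is exactly the contractual gap he points to.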
And it's slow because, to begin with, defining the new scope of work itself is a challenge:
“In my 20 years of career, what I’ve seen is that whatever scope of work you do in the first year — or even the first six months — with a customer engagement, it always balloons four to five times compared to the initial SOW. So what you end up delivering is, I think, four to five times more than what was defined in the SOW.
Most enterprise customers can’t define what success and closure look like in the early phases of SOW definition — that’s what I mean by client readiness. So the transition is definitely real, but it’ll happen deal by deal. It won’t happen as a wave.”
On how data annotation is emerging as a new (if boring) revenue stream for Indian IT, and why it’s well-poised to take advantage of it:
“Any data that’s anonymized and pseudonymized is perfectly suited to be transformed. This is where data labeling richness comes in. A surgeon looks at an X-ray and says, okay, there’s a nodule in the chest — that line by the surgeon is an annotation labeling that data. If you just pass that X-ray to an AI, the AI will say this is this, this is that — but it’s missing the surgeon’s label, the surgeon’s context for interpreting what it means. That data labeling, in my view, is going to be one of the quickest revenue streams opening up for many of these companies.
What it means in practice is: you take models with pre-trained weights, bring in domain experts from legal, geography, STEM, and so on, and have them engage. You must have experienced this — whenever you use ChatGPT and ask a strategic question, sometimes it generates two outputs and asks which one you prefer.
That’s exactly ChatGPT doing reinforcement learning to identify your preferences. Now think about doing that at an LLM level, at an enterprise level. India has the workforce and the muscle to do this. It’s not glamorous, but it’s real revenue.”
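The preference-labeling loop he describes — show an expert two outputs, record which one they prefer — reduces to a very simple data shape. A minimal sketch; the function name and record schema are my assumptions, not any vendor’s API:

```python
import json

def collect_preference(prompt: str, output_a: str, output_b: str, expert_choice: str) -> dict:
    """Record which of two model outputs a domain expert preferred.
    Returns one preference record of the kind used in RLHF-style
    pipelines. (Illustrative schema only.)"""
    assert expert_choice in ("a", "b")
    chosen, rejected = (output_a, output_b) if expert_choice == "a" else (output_b, output_a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# The X-ray example from the quote: the surgeon's annotation supplies
# the context the raw model output is missing.
record = collect_preference(
    prompt="Describe the findings in this chest X-ray.",
    output_a="Lungs appear clear.",
    output_b="A small nodule is visible in the chest.",
    expert_choice="b",
)
print(json.dumps(record, indent=2))
```

At enterprise scale, the work is exactly this record, produced millions of times by domain experts — which is why it maps so naturally onto a large, trained workforce.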
On why organizational structure matters more than ever in AI, and why smaller, agile firms might be better poised to win in AI:
“And if you look beyond the top 4, 5, 6 IT companies to smaller companies that have always run leaner structures, they’re much better positioned to take advantage of this.
One reason is they don’t have the overhead. There’s no risk of firing large numbers of people and creating chaos in the market. Instead, they can just increase the productivity of existing employees — and I know for a fact that’s happening. For smaller companies, it’s much easier to adopt AI tools.
These companies are starting to convert their product features into markdown files. Every product backlog, every PI planning exercise — the output is a markdown file today, which feeds directly into whatever coding copilot you’re using. And once you have everything defined in a markdown file, building a feature becomes so much easier.”
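That backlog-item-to-markdown flow might look something like this in practice. A rough sketch — the field names and spec layout are assumptions, not a description of any specific company’s pipeline:

```python
def backlog_item_to_markdown(item: dict) -> str:
    """Render a product-backlog item as a markdown spec file that a
    coding copilot can take as context. (Illustrative field names.)"""
    lines = [f"# Feature: {item['title']}", "", item["description"], "", "## Acceptance criteria"]
    for criterion in item["acceptance_criteria"]:
        lines.append(f"- {criterion}")
    return "\n".join(lines)

spec = backlog_item_to_markdown({
    "title": "Export report as CSV",
    "description": "Users can download the monthly report as a CSV file.",
    "acceptance_criteria": [
        "A download button appears on the report page",
        "The exported file matches the on-screen table",
    ],
})
print(spec)
```

Once every feature exists in this form, the spec doubles as both the PI-planning artifact and the prompt context, which is the leverage smaller firms are exploiting.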
On accountability in the age of agents, and where you’ll still need a human (with a very interesting personal tale):
“Sales cycles are not technology-driven. They’re driven by intangibles — delivery quality, what happens after delivery. And in the age of AI, the importance of audit trail is something we can’t even begin to describe. I was in a call yesterday with a company where they discussed an incident — they had an army of agents, and three of them didn’t agree with what the rest concluded. The system just froze.
The product had no way to determine a resolution. Who has the final say? Who says “okay, this is it”? Even the orchestrator agent couldn’t come out of that state. And it froze even though the product had a built-in rule — if agents disagree, do this, do that. Agents were self-aware, and it still happened.
We’re still discovering new aspects of working with these tools. You need a human coming in to handle the dirty work. But do you have that human if you’ve already let people go?”
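The failure Ameya describes — agents disagree, the orchestrator freezes — is at heart a missing fallback rule. A toy sketch of one, assuming a simple quorum threshold and an explicit human-escalation path; this is not how any real orchestration framework works:

```python
from collections import Counter

def resolve(agent_verdicts: list[str], quorum: float = 0.75) -> str:
    """Accept the agents' majority verdict only if it clears a quorum
    threshold; otherwise hand the decision to a person instead of
    freezing. (Toy rule; threshold is an assumption.)"""
    verdict, count = Counter(agent_verdicts).most_common(1)[0]
    if count / len(agent_verdicts) >= quorum:
        return verdict
    return "ESCALATE_TO_HUMAN"

# Ten agents, three dissenters: 70% agreement misses the 75% quorum,
# so the final say goes to a human rather than the orchestrator.
print(resolve(["restart"] * 7 + ["rollback"] * 3))
```

The design point is the last line of the function: someone has to have the final say, and as the quote notes, that only works if you still employ that someone.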
You can listen to the full conversation on YouTube, Spotify, or Apple Podcasts.
And if you like reading more than listening, here’s the transcript of the podcast, shared earlier on Subtext by Zerodha.


