Our goal with The Daily Brief is to simplify the biggest stories in the Indian markets and help you understand what they mean. We won’t just tell you what happened, but why and how too. We do this show in both formats: video and audio. This piece curates the stories that we talk about.
You can listen to the podcast on Spotify, Apple Podcasts, or wherever you get your podcasts and watch the videos on YouTube. You can also watch The Daily Brief in Hindi.
In today’s edition of The Daily Brief:
The Daily Brief AI round-up: June edition
RBI Eases Priority Lending Rules for Small Finance Banks
The Daily Brief AI round-up: June edition
There’s almost too much happening in the world of AI these days.
We try to cover every development in the world of business and economics that we think you guys should know. But to tell you the truth, if we did every important-seeming AI story we came across, this would turn into an AI-only newsletter. Which is why, every once in a while, we’ll do a complete round-up. We’ll run through a mountain of news, with the aim of bringing you everything interesting that’s happening in the world of AI.
Don’t expect this to be a comprehensive deep dive. This will be very chaotic. There are many things we won’t do justice to. Bear with us. We’re just trying to run through everything that recently caught our eye. Maybe it’ll catch yours too!
What’s the next great move?
The first mode of generative AI to burst into public consciousness was the chatbot. In some ways, that market is already maturing.
New, cutting-edge chatbot models are breaking onto the scene every other week. Consider this: we wrote our last AI round-up barely a month-and-a-half ago. In just the weeks since, the three frontrunners — OpenAI (o3-pro), Anthropic (Opus/Sonnet 4) and Google (new variants of Gemini 2.5 Pro) — have each come out with genre-beating models.
Here’s our hot take, though: if AI companies are looking for a clincher that will let them dominate the space, we don’t think it’ll come from these incremental improvements. It might turn out that one of these companies puts out a chatbot so superior that it overwhelms its competitors. For our money, though, that’s unlikely.
To us, there’s a good chance that the next big AI move comes from outside this limited arena; from all the novel experiments in spaces that are, so far, less crowded.
New modes to explore
One potential battleground of the future, for instance — one that’s already heating up — is around non-textual modes of generative AI, like image or video generation. Take video: when we wrote the last edition of this round-up last month, we still thought video-based AI was a creepy, hallucinatory nightmare. Turns out, it got really, really good, really fast.
Google briefly broke the internet when it introduced its new video generation AI — Veo 3. The company has a massive advantage — YouTube has nearly endless amounts of video content for its models to train on. That’s a moat that nobody else has. And by training on those billions of hours of video content, its models have gained an almost eerily good sense of how the world works.
For one, Veo 3 creates audio to match your video — it’s the first to do so, and it’s pretty nifty.
But that’s hardly the most exciting bit. The model seems to understand, at some level, how physics works. Not perfectly, sure, but this is still a breakthrough. That’s why this is such a new paradigm. That sort of understanding isn’t just useful in creating videos; it could help everything from robots to self-driving cars. Google’s DeepMind lab is building further on this idea — they’re trying to create something called “force prompting” — where you could ask for a certain specific physical interaction in your AI video, and it’ll follow through perfectly.
Selling shovels in a gold rush?
Not all companies, though, have been this prolific. Meta, for instance, is supposedly in a state of panic.
Mark Zuckerberg, the Wall Street Journal reported, is scrambling to find enough talent to catch up with the AI frontier. Meta’s willing to offer hundreds of millions in pay packages to some potential hires. They’re also willing to gobble up companies outright.
Recently, Meta made its biggest ever AI investment, pumping $14 billion into Scale AI. Scale AI isn’t a traditional AI company — it doesn’t make AI models. It is, instead, a vendor to AI companies.
It uses a network of contractors in low income countries like Kenya and the Philippines to create databases of clean, properly-labelled data. This has, of course, always been the dirty secret behind AI — behind the magic of AI are armies of workers that sit and tag data. But Scale AI is no longer just a network of unskilled people labelling pictures of cats and dogs. As The Ken reported recently, they’ve graduated to high skill tasks, like hiring PhDs to ask AI models extremely nuanced questions, and figuring out where they go wrong. Their feedback, then, turns into even more data for those models.
Getting good, clean data is a crucial bottleneck for most AI companies — the quality of a model is a matter of how good its training data is. That’s why companies like Google have previously paid Scale AI as much as $200 million a year. That’s what Meta is bringing into its fold.
We aren’t quite sure what Meta’s plans are, here. On one hand, it might try to cut its competitors out and monopolise that data for itself. On the other hand, its strategy could be one of “selling shovels during a gold rush” — if it can become a key data provider to all the other labs, it could put itself at the heart of how industry leaders train their AI models. That, however, looks increasingly unlikely. As soon as Meta announced its investment, everyone from Google to OpenAI started shopping around for other, more neutral data vendors.
Amazon, too, is trying to sell shovels in a gold rush. This isn’t new, of course — much of the internet sits on Amazon’s data centres, with the company running a third of the world’s cloud infrastructure market. And it’s bringing that approach to AI as well. The company is in the process of setting up a complex of 30 AI-first data centres, under what it calls ‘Project Rainier’. The entire complex will draw 2.2 GW of electricity. For context, that is more than India’s total data centre capacity today.
Among other things, this giant facility will help power Anthropic — a company in which Amazon has already pumped $8 billion in investment. But there are others. For instance, Stability AI, the company behind Stable Diffusion, uses Amazon’s cloud offerings as well.
An army of product makers
Apple, as we recently covered, is a distinct laggard in the AI race. That isn’t an existential risk for the company — after all, what sets it apart from its big-tech rivals is the fact that it makes consumer hardware. Nobody is threatening iPhones and Macs just yet. But its intelligence arm is clearly behind the curve.
It does have one card left to play, though: developers.
Apple’s real strength has always been its ecosystem — millions of developers building apps for its tightly integrated hardware-software platforms. If Apple can’t build a category-defining AI product on its own, maybe its developer community can?
That’s the company’s latest bet. Apple already has small AI models that can run directly from your phone — that is, they don’t have to talk to the cloud, they just run on your phone’s processor. Now, it’s opening these models up to its developers. While most AI companies keep their models behind API walls, limiting how others can use them, Apple is handing developers a wrench and saying: build whatever you want. Even if Apple’s AI models themselves aren’t that impressive, if developers can find creative ways of using them, it’s possible that Apple users still get the best AI products.
Or that’s the hope, anyway.
Elsewhere, though, Apple was busy stirring up a controversy of its own.
Are all the AI models… idiots?
There’s another side to the AI story: some people are asking if AI is at risk of hitting a barrier, beyond which no further intelligence is possible — at least as long as we stick to our current approach. Looking at the sheer pace of change all around, it’s easy to think we’re on a direct highway to the world of The Matrix. But what if the road runs out before we get there?
Researchers at Apple asked this question, in a paper they titled “The Illusion of Thinking”.
They were trying to test the reasoning capabilities of “large reasoning models” — models like OpenAI’s O3, which supposedly think through their answers. These models’ thinking abilities are usually tested on maths or coding benchmarks. These tests tend to reward AI for getting to the right answer, rather than the quality of reasoning. If they got to the right answer, but got there through dubious means — for instance, by using a rule-of-thumb that just happened to work — it’s hard to catch them.
So, Apple’s researchers instead threw complex, customised riddles at them, and then looked at how the models were thinking through them. They found that these models have clear ceilings. They shine when they have to deal with moderate levels of complexity. But once things get harder than that, their reasoning abilities collapse. They get confused; they give up; they start overthinking about trivial things. The “thought process” of these models, in essence, is rather fragile.
This sparked a storm in the AI world.
But soon, researchers at Anthropic and Open Philanthropy shot back with their own paper, titled “The Illusion of the Illusion of Thinking.” They basically argued that Apple’s researchers had set these models up to fail. AI models have a limited thinking “budget” — they only have so many tokens they can use. The puzzles they threw at the models either required them to hold on to far too much information, or simply could not be puzzled out in a step-by-step manner. Neither was appropriate for what these models could do.
Only, the debate didn’t end there. This rebuttal was rebutted as well, in a paper titled “The Illusion of the Illusion of the Illusion of Thinking”.
This third paper tried to square the other two. On the one hand, it agreed that Apple’s researchers had imposed unfair conditions on these models, making it likely that they would fail. On the other hand, it argued that these models were still brittle in how they thought. It wasn’t just that Apple was overwhelming these models. Even when models had enough room to think, they still tended to “give up early” as the problems got harder. They didn’t spend their full thinking budget — they just gave up mid-way.
Fundamentally, it doesn’t yet look like AI models can think through anything you throw at them. They tend to fall apart, for reasons we don’t quite understand. There are others who have reached the same conclusion. Researchers at Epoch AI, for instance, think O3 behaves like an over-enthusiastic graduate student — it loves name-dropping, but doesn’t seem to understand anything deeply enough to actually build off that knowledge.
But is it impossible for them to get to that level of understanding? Is there some technical barrier they can’t scale? We’ll be honest with you: we just don’t know. Asking us how smart AI can get is like asking your pet dog which smartphone you should buy next. We might be enthusiastic, but we won’t be very helpful.
That said, this is exactly the kind of question that could shape the trajectory of the world.
Agents are the new frontier?
If AI models are to ever reach the capabilities that sci-fi movies promise, that’ll probably be on the back of ‘AI agents’ (here’s an interesting take on how that could happen).
Agents, unlike simple chatbots, don’t just answer your prompts. They carry out entire tasks by themselves. They can figure out the many steps it’ll take to complete a job, use different tools or apps to get through them, and adapt as they go. You just tell them what you need, and leave the rest to them.
AI models are developing these capabilities surprisingly fast. They’re now at a point where they can complete a task that would take an expert human a full hour. And their capabilities, it seems, are doubling every seven months. At this rate, in a couple of years, you could get an entire day’s work done with a few lines of prompting.
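If you want to sanity-check that "couple of years" claim, here's a quick back-of-envelope sketch. The one-hour task horizon and seven-month doubling period come from the reporting above; the 24-month projection window is our own assumption:

```python
# Rough compounding sketch: if the length of task an AI agent can
# handle doubles every 7 months, how long a task could it manage
# after two more years of progress?
current_horizon_hours = 1.0    # ~1 hour of expert-human work today
doubling_period_months = 7     # reported doubling time
months_ahead = 24              # "a couple of years" (our assumption)

projected = current_horizon_hours * 2 ** (months_ahead / doubling_period_months)
print(f"~{projected:.1f} hours")  # prints "~10.8 hours" — roughly a full working day
```

That's how a one-hour horizon turns into a full day's work: the growth compounds, so two years buys you a bit more than three doublings.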
If you’re worried about “AI taking your job,” it’s agents that you’re actually afraid of.
You’ve probably already seen some agents. If you’ve ever used “Deep Research” on an AI model, for instance, you’re seeing agents in action. Not just one agent, by the way. Anthropic recently revealed that Claude deploys a full team of agents for these queries. There’s an AI “lead researcher,” which coordinates a squad of “sub-agents” — each of which has a different job. Some handle citations. Some run individual searches. Some store information. And so on. It’s like an entire office that works to resolve your queries.
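To make the "office of agents" idea concrete, here's a toy illustration of the lead-researcher-plus-sub-agents pattern. This is not Anthropic's actual system — the function names and the way work is split are entirely made up for illustration:

```python
# Toy sketch of a multi-agent research pipeline: a "lead researcher"
# fans a query out to specialised sub-agents, then stitches the
# results together. Purely illustrative, not any real product's code.

def search_agent(subtopic: str) -> str:
    """Sub-agent that would run a web search for one angle of the query."""
    return f"findings on '{subtopic}'"

def citation_agent(finding: str) -> str:
    """Sub-agent that would attach sources to a finding."""
    return f"{finding} [source attached]"

def lead_researcher(query: str) -> str:
    # 1. Break the query into narrower subtopics.
    subtopics = [f"{query} — angle {i}" for i in (1, 2)]
    # 2. Fan the subtopics out to search sub-agents.
    findings = [search_agent(t) for t in subtopics]
    # 3. Pass each finding through the citation sub-agent.
    cited = [citation_agent(f) for f in findings]
    # 4. Synthesise a single answer.
    return "\n".join(cited)

print(lead_researcher("SFB priority lending"))
```

The point of the pattern is the division of labour: no single agent holds the whole problem in its head, which is exactly why it scales to long research queries.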
The latest chapter in this race, it seems, is the rise of ‘Computer-Use Agents’, or ‘CUAs’. Since we did our last AI round-up, both Google and OpenAI have announced that their AI agents can now use your browser. That lets them do all sorts of stuff online that a regular human would. They can see your screen, and respond to it in real time. They can go to different websites, click buttons, upload files, fill out forms, and more.
Think of what that means. Imagine you want to apply for a UK tourist visa, for instance. A CUA can do the whole thing for you. It can pull up your passport and travel documents, open the visa website, select the right form, fill it out, and upload all your documents. It can even handle glitches, captcha codes and misfires. All through this, you don’t even open a single tab.
And this is just the beginning.
AI regulations?
This is all very impressive. But as you might expect, with so much happening, AI is also going to step on a few toes. As these businesses mature, there’s a growing body of regulation and court cases that is hemming these companies in.
We’ll cover two big ones today.
For one, New York is on the cusp of passing a set of AI regulations. Both its Assembly and Senate have approved the RAISE Act — which by-and-large tries to create some basic safeguards against an AI-fuelled calamity. The Act targets ‘frontier models’ and hopes to stop them from doing “critical harm”.
Now, it doesn’t really do much. It’s a transparency bill — it requires companies to publish their safety protocols, get third-party safety audits, and disclose any safety issues they run into. As one commentator writes:
“…it seems clearly to be bending over backwards to avoid imposing substantial costs on the companies involved even if the state were to attempt to enforce it maximally and perversely…”
But given that we have no real precedent of how to anticipate, or deal with, the consequences of AI, this is an important development. It’ll set a reference point for AI regulation across the world.
A much more lively battle, on the other hand, is happening where these companies have real opponents, with their own commercial interests — the fight over intellectual property rights.
It’s no secret that AI models use huge amounts of data for their training. A lot of that data belongs to others, who have a clear copyright over it, and these AI companies haven’t exactly asked for their permission. From newspaper articles to pirated books, they’ve taken reams of data for themselves without compensating the owners. And so, everyone from The Indian Express to The Wall Street Journal has started suing these companies in courts around the world.
Legally, though, this is a gray area. Our intellectual property laws were simply not created for this era. They were created for regular humans who copy each others’ work. They do not even imagine a situation where a machine reads petabytes of data at one go, and then learns to play around with it better than anyone alive. Many legal concepts simply fall apart here. That’s why the Indian government, for instance, has set up a panel to re-think how copyrights should work.
Recently, the AI companies scored a big win. Various authors had taken Anthropic to a California district court, asking it to pay up for using 5 million books without permission. If the court was strict in how it applied the law, Anthropic would have had to pay hundreds of billions in damages, which would have instantly bankrupted the company. The court, however, sided with Anthropic. See, copyright laws allow people to use others’ work without permission for what’s called “fair use” — as long as you’re being reasonable, and aren’t hurting the original work in any way. And to the court, there was no fairer use than creating a transformative technology out of others’ work.
But there’s a twist. Apart from using pirated books to train its models, Anthropic also built itself a permanent library of those books. And suing it for that, the court held, was perfectly reasonable.
This isn’t all, by far
This is all we could cram in without turning this into a novel. But the truth is, there’s way too much happening in AI right now for anyone to fully keep up. There are an endless number of rabbit holes we could have gone into with this — from how different AI models do while playing board games with each other, to how AI shopping apps let you see how you’d look in different clothes, to how there’s an AI that can teach you how to make anything at all with lego blocks.
There’s a firehose of developments out there, and we’re just processing all that we can. But we’ll be back soon, one of these days, with yet another AI round-up.
RBI Eases Priority Lending Rules for Small Finance Banks
The Reserve Bank of India (RBI) has been tweaking how banks’ “priority sector lending” (PSL) should work over the last few months. And recently, it did so again.
First, it came out with a huge package of changes in March 2025, where it expanded the scope of PSL, and moved many of its thresholds around. Then, in June 2025, it modified the rules around how microfinance loans would work.
Now, the RBI has revised its PSL norms again — this time for “Small Finance Banks” (SFBs). In a nutshell, from FY 2025-26 onwards, SFBs must direct just 60% of their total loans to priority sectors — down from 75%.
This sounds like a technical detail. But this is a decision that could move around thousands of crores in capital. That’s why we’re talking about why these changes are happening, and what they mean for small finance banks — and the borrowers they serve.
Small Finance Banks 101: Born to serve the underserved
Small Finance Banks are an interesting category of banks. The RBI created them with the specific mission of promoting financial inclusion. Many of these banks started their lives as microfinance lenders, and were given banking licenses on the condition that they would focus on serving borrowers that big banks often overlooked.
Under the initial licensing rules, an SFB had to allocate a whopping 75% of its Adjusted Net Bank Credit — or simply put, three-fourths of all their loans — to PSL sectors. Regular commercial banks, in contrast, have a much lower target of 40%.
This meant PSL was the core of an SFB’s operations — not just a compliance requirement. SFBs built their entire portfolio around things like microloans, small farmer credit, tiny MSME loans, etc., in order to meet this mandate. This helped them achieve the stiff 75% target. But it also came with side effects.
The latest change
Starting this financial year (FY 2025–26), RBI has changed the PSL requirement for Small Finance Banks (SFBs) — from 75% to 60%.
Think of what that means. Effectively, 15% of an SFB’s portfolio has suddenly been freed up, and can now be lent to other, non-PSL segments. Cumulatively, that’s ~₹41,000 crore of capital which is suddenly available, and can go wherever these banks choose.
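The arithmetic behind that number is simple. The aggregate loan-book figure below is our assumption, chosen so that the maths lines up with the roughly ₹41,000 crore cited above:

```python
# Back-of-envelope: how much SFB lending the PSL cut frees up.
old_psl_share = 0.75               # earlier PSL mandate
new_psl_share = 0.60               # revised mandate from FY 2025-26
aggregate_sfb_loans_cr = 273_000   # assumed total SFB advances, in Rs crore

freed = (old_psl_share - new_psl_share) * aggregate_sfb_loans_cr
print(f"Rs {freed:,.0f} crore freed for non-PSL lending")
```

In other words, 15 percentage points of the book changes category overnight, without a single new rupee being raised.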
Why did RBI do this?
The RBI’s announcement didn’t spell out a detailed rationale for this change, but everyone in the industry has been buzzing about why this was done.
From what we can tell, this adjustment seems primarily aimed at risk management. See, there was one major problem with the 75% PSL mandate: bankers argued that, in trying to meet these norms, they often had to give out riskier loans to lower-quality borrowers. Finding enough high-quality PSL borrowers was a perennial challenge. As Sarvjit Singh Samra, CEO of Capital SFB, noted, these banks were sometimes stuck chasing volume over quality.
Because of this, many SFBs ended up with very concentrated microloan portfolios — since microfinance was the easiest way to rapidly build PSL assets. But this made them vulnerable. As microfinance started going through a rough patch, things were getting worse by the day. In fact, recent data showed that SFBs’ gross NPA (bad loan) ratios jumped to 4.35% in March 2025 from about 3.5% just a year earlier. Analysts attributed this to SFBs’ heavy exposure to microfinance loans.
In this context, the RBI’s move appears aimed at de-risking SFBs — letting them diversify their loan books and reduce concentration risk.
What does this change do?
To SFBs, this change is a huge boon. It will free up a massive amount of capital — almost one in every seven rupees they lend — which was earlier locked into meeting PSL quotas.
That means instead of chasing targets, SFBs can deploy this chunk of funds into other opportunities — ideally those that are safer or more profitable. They can now expand more into segments they previously stayed away from because of PSL constraints. This flexibility could enhance their profitability and sustainability — reducing the pressure on them to accept higher-risk or lower-yield accounts. Now, they can be more choosy.
But there’s naturally a flip side. When rules like these are relaxed, one has to ask — who loses out?
In this case, the priority sector borrowers absorb the hit. These are the farmers, small entrepreneurs, low-cost homebuyers, etc., who were the intended beneficiaries of that 75% mandate. With SFBs now freer to diversify, less money might flow to these underserved segments.
This could cut both ways, though. Some marginal borrowers could find it even harder to access credit, as banks become choosier, and shrink credit supply to the highest-risk segments. At the same time, you might not see loans marketed as aggressively, which has sometimes led to over-indebtedness, or poor credit practices.
Whether this is good or bad for financial inclusion is an open question. While it could reduce the direct supply of credit, a less-predatory banking sector might be better for underserved groups in the long run.
The bottomline
RBI’s latest move aims for balance. For SFBs, financial inclusion will still be a priority, with 60% of their loans oriented towards underserved borrowers. At the same time, this will also give banks far more flexibility. Rather than rigid mandates, like the earlier 75% PSL target, the RBI seems to prefer practical targets that pay due attention to bank health.
Tidbits
Hindalco to Acquire US-Based AluChem for $125 Million in All-Cash Deal
Source: Business Standard
Hindalco Industries, the metals arm of the Aditya Birla Group, has announced the acquisition of US-based AluChem Companies for $125 million (approximately ₹1,075 crore) in an all-cash transaction. The deal will be executed through Hindalco’s step-down wholly owned subsidiary, Aditya Holdings LLC, and is expected to close in the upcoming quarter, subject to regulatory approvals. AluChem operates three advanced manufacturing facilities in Ohio and Arkansas, with a combined annual capacity of 60,000 tonnes. This move adds to Hindalco’s existing 500,000 tonnes of specialty alumina capacity, supporting its plan to scale up to 1 million tonnes by FY2030. The acquisition is aimed at strengthening Hindalco’s downstream portfolio in high-tech, value-added alumina products and enhancing its presence in the North American market.
RBI Announces ₹1 lakh crore VRRR Auction Amid Surplus Liquidity Surge
Source: Business Standard
The Reserve Bank of India has scheduled a ₹1 lakh crore 7-day Variable Rate Reverse Repo (VRRR) auction for Friday, June 27, to absorb excess liquidity from the banking system. This comes as the average daily liquidity surplus over the past two weeks hovered around ₹2.5 lakh crore, with Monday alone seeing a surplus of ₹2.43 lakh crore. The Weighted Average Call Rate (WACR) had dipped to 5.27%, below the policy repo rate of 5.50%, prompting the RBI to take corrective action. This fine-tuning move is expected to push up short-term bond yields by 3–4 basis points, according to bond dealers. The RBI had previously injected liquidity earlier in the year, but the shift to absorption indicates a recalibration in its liquidity management strategy.
NSE Proposes ₹1,388 crore Settlement to SEBI, Paving Way for Delayed IPO
Source: Reuters
The National Stock Exchange of India (NSE) has proposed a ₹1,388 crore settlement to the Securities and Exchange Board of India (SEBI) in a bid to resolve regulatory hurdles stemming from the 2019 co-location case, according to Reuters. The case had earlier led to a ₹1,100 crore penalty imposed by SEBI, which NSE had challenged in court. As part of the resolution process, SEBI is currently conducting final inspections of NSE’s internal systems. If cleared, SEBI may issue a no-objection certificate within three months, potentially allowing NSE to launch its long-delayed IPO before May 2026. Major shareholders such as LIC (10.72%), SBI (7.76%), Morgan Stanley (1.58%), and CPPIB (1.60%) could benefit from the proposed listing. This out-of-court settlement, if accepted, would be the largest in SEBI’s history. The IPO is expected to allow early investors to exit and bring NSE closer to the public market.
- This edition of the newsletter was written by Pranav and Kashish.
📚Join our book club
We've started a book club where we meet each week in JP Nagar, Bangalore to read and talk about books we find fascinating.
If you think you’d be serious about this and would like to join us, we'd love to have you along! Join in here.
🧑🏻💻Have you checked out The Chatter?
Every week we listen to the big Indian earnings calls—Reliance, HDFC Bank, even the smaller logistics firms—and copy the full transcripts. Then we bin the fluff and keep only the sentences that could move a share price: a surprise price hike, a cut-back on factory spending, a warning about weak monsoon sales, a hint from management on RBI liquidity. We add a quick, one-line explainer and a timestamp so you can trace the quote back to the call. The whole thing lands in your inbox as one sharp page of facts you can read in three minutes—no 40-page decks, no jargon, just the hard stuff that matters for your trades and your macro view.
Go check out The Chatter here.
“What the hell is happening?”
We've been thinking a lot about how to make sense of a world that feels increasingly unhinged - where everything seems to be happening at once and our usual frameworks for understanding reality feel completely inadequate. This week, we dove deep into three massive shifts reshaping our world, using what historian Adam Tooze calls "polycrisis" thinking to connect the dots.
Frames for a Fractured Reality - We're struggling to understand the present not from ignorance, but from poverty of frames - the mental shortcuts we use to make sense of chaos. Historian Adam Tooze's "polycrisis" concept captures our moment of multiple interlocking crises better than traditional analytical frameworks.
The Hidden Financial System - A $113 trillion FX swap market operates off-balance-sheet, creating systemic risks regulators barely understand. Currency hedging by global insurers has fundamentally changed how financial crises spread worldwide.
AI and Human Identity - We're facing humanity's most profound identity crisis as AI matches our cognitive abilities. Using "disruption by default" as a frame, we assume AI reshapes everything rather than living in denial about job displacement that's already happening.
Subscribe to Aftermarket Report, a newsletter where we do a quick daily wrap-up of what happened in the markets—both in India and globally.
Thank you for reading. Do share this with your friends and make them as smart as you are 😉