Welcome back!
It's great to have you here for this week's Politics to Policy edition. Today, we're going to talk about probably the most burning issue among Gen Z and Gen Alpha, so let's get right into it.

Somewhere in Bengaluru, a 14-year-old opens Instagram. The app asks her age. She types 17. Her actual age is irrelevant; what matters is what she tells the app. It sounds simplistic, but that is the whole problem with Karnataka's social media ban, and every ban like it, condensed into about four seconds.
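To make the four seconds concrete, here is roughly what self-declared age gating reduces to. This is a toy sketch, not any platform's actual code; the trust model, though, is the same everywhere:

```python
# A toy sketch of self-declared age gating. The gate sees only the
# claim, never the person, which is the entire vulnerability.

MINIMUM_AGE = 16

def passes_age_gate(declared_age: int) -> bool:
    # The platform never learns the user's real age, only what she types.
    return declared_age >= MINIMUM_AGE

actual_age = 14      # known only to her
declared_age = 17    # what she tells the app

print(passes_age_gate(declared_age))  # True, in about four seconds
```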
On March 6, 2026, Karnataka Chief Minister Siddaramaiah announced that children under 16 would be barred from social media platforms. The announcement came during the state's budget speech. No prior consultation with technology companies had taken place. No implementation details were shared. No enforcement mechanism was specified.
A global wave with the same question mark
Karnataka is not acting in isolation. It is joining a rapidly expanding list of governments that have decided, more or less simultaneously, that something has to be done about children and social media.
Australia enacted what is being called the world's strictest such law in December 2025, banning under-16s from platforms including Instagram, TikTok, YouTube, Snapchat, and X, with platform fines of up to $50 million for non-compliance. France requires parental consent for under-15s. The UK, Spain, Greece, Norway and Denmark are actively debating similar restrictions. Closer to home, Andhra Pradesh announced a 90-day window to restrict access for children under 13.
The stated reasons are consistent across geographies: mental health deterioration, cyberbullying, screen addiction, and declining academic performance. India's own Economic Survey 2025-26 flagged a link between high screen time and deteriorating mental health in the 15-24 age group, citing anxiety, sleep disorders and declining attention spans as the most visible symptoms.
The anxiety is genuine (and maybe even urgent). The political will, finally, appears to have arrived. And yet a harder question demands asking: are bans actually the right instrument?
Three parties, three alibis
The ban debate involves three sets of actors, and all three have a case. None of them, on its own, is compelling enough.
Governments argue that the state has always drawn lines around what children can access: alcohol, tobacco, adult content, gambling and so on. Extending that logic to algorithmically engineered platforms is not unreasonable on its face. Platforms have had years to self-regulate and have produced little beyond polished apology tours by tech CEOs before parliamentary committees. That, the argument goes, leaves the state no option but to step in, because someone has to.
Platforms counter, not entirely without merit, that age verification at scale is both technically fraught and privacy-invasive. Meaningful enforcement requires linking social media accounts to government-issued identity documents. In India, that points directly at Aadhaar, and that raises a different set of concerns about who holds that linkage, how it is stored and what happens when it is breached (which is a different can of worms worth exploring). Platforms also raise the displacement argument: bans push children toward darker, less regulated corners of the internet where there are no content moderation teams at all.
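The privacy objection is easier to see as a data structure. What follows is a hypothetical sketch, with invented names and no real Aadhaar or platform API, of what meaningful ID-linked verification has to store somewhere:

```python
# Hypothetical sketch of ID-linked age verification. All names are
# illustrative. The point is the table itself: enforcement requires
# someone, somewhere, to hold a mapping from social media accounts
# to government identities.

from dataclasses import dataclass

@dataclass
class VerificationRecord:
    platform_handle: str    # e.g. an Instagram username
    govt_id_reference: str  # even a hashed ID number ties a real identity to an account
    verified_over_16: bool

# Whoever operates verification accumulates one record per user.
# This registry, not the age check itself, is what a breach exposes.
registry: list[VerificationRecord] = []

def verify_account(handle: str, id_reference: str, over_16: bool) -> None:
    registry.append(VerificationRecord(handle, id_reference, over_16))
```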
Parents and civil society have watched this exchange with mounting frustration. Their position is blunter: both of the above arguments are deflections. To them, the ban feels, for all its implementation gaps, like the first real acknowledgment from authority that something is systemically broken.
All three positions have merit, but, as I said above, none is adequate on its own.
What Karnataka's ban actually looks like on the ground
The Karnataka state government held no consultations before the announcement, and implementation details remain unspecified. There is also a deeper jurisdictional problem that nobody in the budget speech addressed.
Regulating the internet falls largely under exclusive Union jurisdiction through the Information Technology Act. A state can articulate the policy objective of child safety, but a binding, platform-facing ban would be much harder for a state to sustain without running into Centre-State and constitutional questions.
The Internet Freedom Foundation raised a concern that deserves more attention than it received: broad bans risk deepening India's digital gender divide if families use such measures to keep girls offline. India's shared-device culture means enforcement will land very differently across households. A girl in a conservative family may lose digital access entirely under the cover of a safety policy. In a country where digital access is already heavily gendered, this is a predictable outcome.
The enforcement reality will be uneven in another familiar way. Children from households with attentive, tech-literate parents will route around the ban with guidance. Children from households with less oversight, who are arguably more vulnerable to begin with, will encounter it unaided or not encounter it at all. The law will exist. The protection it promises will be distributed by existing privilege.
What Australia's experiment is actually showing
Australia is three months into the world's most ambitious version of this experiment, and the early evidence is instructive.
Social media companies deactivated approximately 4.7 million accounts belonging to Australian teenagers in the first month after the ban took effect on December 10, 2025. The Australian government called it a success. The number is genuinely large.
But researchers have cautioned that the government has not shown who the suspended accounts belong to, whether they include adults or dormant accounts, or on which platforms. It is too soon to know if the ban is reducing online harm or changing children's offline habits.
More telling is what teenagers themselves are doing. A 15-year-old from Melbourne used facial recognition to recover her suspended Instagram account. She is not unusual. Australia's own government trial found that age estimation technology is highly imperfect and often off by two to three years, particularly when applied to younger users.
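A back-of-envelope sketch shows why that error margin breaks a 16-year cutoff. The uniform error distribution below is my assumption, standing in for whatever the real estimators actually do:

```python
# If age estimation can be off by up to three years either way, a
# 15-year-old clears a 16+ gate a large share of the time. Uniform
# error is an assumption; real estimators skew differently.

import random

def estimated_age(true_age: int, max_error: int = 3) -> int:
    return true_age + random.randint(-max_error, max_error)

trials = 100_000
passes = sum(estimated_age(15) >= 16 for _ in range(trials))
print(f"15-year-olds clearing a 16+ gate: {passes / trials:.0%}")  # roughly 43%
```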
In February 2026, Guardian Australia reported that some teenagers under 16 were still able to access social media platforms, while others felt more isolated, cut off from their usual lines of communication, and some found vastly different content on their feeds. The ban has moved accounts. Whether it has moved behaviour is a different, and still open, question.
An opinion poll found that 70% of Australian voters endorsed the ban, while only 33% were confident it would actually work. That gap between approval and confidence is the honest summary of where this policy sits globally right now.
The deeper problem that bans don't touch
Here is the question the ban debate keeps skating past: what exactly are we protecting children from?
From the platforms? Or from the content that platforms, with great engineering precision, decide to serve?
From addiction? Or from the loneliness, social anxiety, and need for external validation that make addiction so appealing in the first place?
On March 25, a Los Angeles Superior Court jury found Meta and YouTube liable for designing platforms in ways that foster addiction and harm users' mental health. The case centred on a 20-year-old woman, known as Kaley, whose social media use began at age 6 on YouTube and at age 9 on Instagram. Her lawyers argued that features like infinite scroll, autoplay, and algorithm-driven notifications were specifically engineered to hook young users. She testified that the addiction worsened her depression, anxiety and body dysmorphia. The jury awarded $3 million in compensatory damages and designated up to $3 million in punitive damages, subject to judicial confirmation.
The case relied partly on the Facebook Files, internal Meta research reported by the Wall Street Journal in 2021, showing the company knew Instagram worsened body image issues for teenage girls, with one internal study finding that 32% of teen girls said the platform made them feel worse about themselves. It also drew on whistleblower Frances Haugen's Senate testimony linking platform design to anxiety and compulsive use.
The verdict is significant for one precise reason: it shifted liability from content to platform design. Previous legal battles focused on what platforms allowed users to post. This one focused on how the platform itself was built, and found that architecture actionable.
A day before the Meta verdict, a separate jury in New Mexico found Meta liable for the way its platforms endangered children and exposed them to predatory contact.
For India, these verdicts carry direct implications. Instagram, YouTube and Snapchat collectively reach hundreds of millions of Indian users, a substantial portion of them under 18. The same infinite scroll, the same autoplay, the same notification architecture found liable in a California courtroom operates identically on a phone in Bengaluru or Bhopal. India has no equivalent legal framework that would allow a similar claim to be brought here. The Digital Personal Data Protection Act, passed in 2023, contains provisions on children's data consent, but says nothing about design liability. The gap between what an Indian teenager experiences on these platforms and what legal recourse exists for that experience is enormous.
The previous moral panics about what children consume (comics in the 1950s, television in the 1970s, video games in the 1990s) were about passive consumption. Children watched or read and moved on. This is structurally different. These are platforms engineered to extract attention, to manufacture social comparison, to monetise the gap between who a child is and who they fear they are not. A ban does not dismantle that architecture. It just puts a gate in front of it. The architecture continues to exist and evolve on the other side.
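In code terms, the asymmetry looks something like this. A schematic, not any platform's real ranking system, with random scores standing in for the learned engagement models:

```python
# The ban is one comparison at the door; the machinery behind it
# is untouched.

import random

def passes_gate(declared_age: int) -> bool:
    # The entire ban, as enforced today: one check on a self-reported number.
    return declared_age >= 16

def rank_for_attention(posts: list[str]) -> list[str]:
    # The architecture: every candidate item scored for predicted
    # attention-holding power, highest first, with no last page.
    return sorted(posts, key=lambda _: random.random(), reverse=True)

if passes_gate(declared_age=17):  # the 14-year-old from the opening types 17
    feed = rank_for_attention(["post_a", "post_b", "post_c"])
    print(feed)  # and the loop resumes on the other side of the gate
```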
Digital literacy is the answer most frequently offered by those sceptical of bans. Teach children to navigate the internet critically, to recognise manipulation, and to understand that the algorithm is not their friend. In the abstract, this is obviously the right long-term answer. In practice, ask who teaches it, in which school, in which medium, with what training, and funded by whom. In India, the honest answer is largely silence.
The Future State problem
Karnataka's ban, as a policy moment, reveals a gap that its specifics barely gesture at: the difference in speed between how governance operates and how the technology it is trying to govern evolves.
Platforms update their recommendation algorithms continuously. They run thousands of simultaneous experiments on user behaviour. They optimise engagement in real time. Karnataka announced a ban in a budget speech with no implementation plan. The two are not operating on the same timescale, and that asymmetry is the real governance challenge.
The LA verdict points toward a more durable regulatory direction than access restriction: holding platforms accountable for design choices that demonstrably harm users. If upheld on appeal, it could mark the beginning of a period in which algorithmic design is scrutinised for its psychological and societal impact. That is a fundamentally different regulatory approach than asking a 14-year-old to correctly state her age.
Research published in JAMA Pediatrics found that moderate social media use appears to support adolescent well-being, suggesting the optimal approach is thoughtful engagement and moderation rather than total prohibition. The Snap CEO, writing in the Financial Times, argued that app-store-level age verification would be more technically coherent than platform-by-platform bans, creating one consistent age signal per device and limiting how often personal information must be shared. That is a more serious technical proposal than anything in Karnataka's announcement, regardless of the source's obvious self-interest.
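The shape of that proposal is easiest to see as an interface. A hedged sketch with invented names; no such API ships on iOS or Android today:

```python
# Device-level age verification as proposed: the OS or app store
# verifies once and exposes only a coarse signal, so individual apps
# never collect identity documents themselves.

from enum import Enum

class AgeBand(Enum):
    UNDER_13 = "under_13"
    UNDER_16 = "under_16"
    SIXTEEN_PLUS = "16_plus"

def device_age_signal() -> AgeBand:
    # Set once at the device/app-store level, backed by whatever
    # verification the store performs; apps see the band, not the ID.
    return AgeBand.UNDER_16

def may_create_account() -> bool:
    # One verification, many consumers: every app queries the same
    # signal instead of running its own check against a government ID.
    return device_age_signal() is AgeBand.SIXTEEN_PLUS
```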
India has a window to move beyond the access restriction debate and engage seriously with design accountability. The Digital Personal Data Protection Act's provisions on children's data are a starting point. They are nowhere near sufficient. Building a regulatory framework that can hold platforms accountable for what their algorithms do to adolescent psychology requires legislative imagination that a budget speech announcement simply cannot provide.
The more durable governance question is whether the harm being done to adolescent mental health and self-conception, while this debate proceeds at legislative pace, is the kind that waits.
The 14-year-old in Bengaluru has already logged back in. The algorithm already knows exactly what to show her next. And the government that announced the ban still hasn't said how it plans to enforce it.
Thank you for reading through.
I am always awaiting your feedback. If you want me to discuss a specific policy or governance question, reply to this email. If there was something in this article that you did not agree with, let me know that, too. I would love to discuss this with you in even more detail.
This newsletter gets better the more you engage with it. So please hit reply. I read every response.
Until next time.
Anas Ahmad Tak
