India’s Information and Broadcasting Minister Ashwini Vaishnaw has urged social media platforms to adopt a “fair share of revenue” model for the people and institutions who create the content powering their engagement and advertising businesses, ranging from journalists and legacy newsrooms to independent creators in remote areas, influencers, and academics who publish research online.
Speaking at the Digital News Publishers Association (DNPA) Conclave 2026 in New Delhi, the minister framed the issue as a correction that platforms must make as the internet moves deeper into an era of synthetic media and high-velocity misinformation.
A push for creator compensation across the ecosystem
Vaishnaw’s remarks broaden the revenue-sharing debate beyond traditional publisher-platform dynamics. He argued that the “principle” of fair remuneration should apply to all categories of content creators whose work drives reach on platforms: news professionals, mainstream media outlets, far-flung independent creators, influencers, professors, and researchers.
The minister’s emphasis comes amid intensifying global scrutiny of how large platforms monetize content while creators, especially newsrooms, struggle with shrinking margins and fragmented advertising markets.
Platform accountability and online safety
Beyond monetisation, Vaishnaw placed responsibility for hosted content squarely on platforms, arguing that reinforcing trust in long-standing social institutions is now a core requirement for the digital public sphere. He linked this responsibility directly to citizen safety online, including child safety, stating that platforms must treat it as an obligation, not an optional compliance exercise.
The comments align with a broader policy direction that seeks clearer accountability from intermediaries as they increasingly function as distributors and amplifiers of information at scale.
“No synthetic content without consent”
A central part of Vaishnaw’s message focused on synthetic content, meaning deepfakes and other AI-generated media. He called for consent to be mandatory whenever a person’s face, voice, or persona is used to generate content, arguing that the internet’s “nature” has changed and that the next inflection point must include stronger norms (and enforcement) around consent-driven creation.
At the conclave, he also warned that deepfakes and organised misinformation campaigns are straining the “core tenet” of trust, a problem he suggested is becoming systemic rather than episodic.

