“Economic growth does not come only from sharing data with global players as Big Tech would try to project. Every country needs to preserve its sovereignty by ensuring that its data assets are used in a manner that enhances the country’s wealth in the long run.”
With Big Tech controlling global data and shaping AI policy, can Indian regulators realistically rein them in, or are we simply adapting to pre-set global norms?
It is true that Big Tech has developed a stranglehold on the “digitally dependent society” and would therefore like to monetize this hold. If AI is the new tool to control society, Big Tech will undoubtedly try to exploit it to tighten that hold, since “ethics is not a voluntary character of business”.
At the same time, India is a sovereign country and needs to protect its sovereign data assets; there is a constitutional obligation to protect the interests of the country and its citizens. The current political leadership appears to be aware of this responsibility and can be expected to take all reasonable steps to fulfill it.
Even where the political leadership is slack, the Indian judiciary, and beyond it the vigilant sections of the citizenry, can be expected to put the Government back on track.
India indicated, while adopting the Digital Personal Data Protection Act, that it would not necessarily follow the line of the EU GDPR or the CCPA. It is expected that AI regulation, when introduced, will likewise be independently constructed, retaining the best principles of the EU AI Act and the emerging US legislation without copying those laws blindly.
Explainability is a buzzword, but realistically, can AI models ever be truly “transparent”, or are we chasing an impossible standard?
I agree that the commercial world, under the excuse of “innovation”, would like a free hand in the development of any software, and even more so in the critical class of AI algorithms, to make as much wealth as possible at the expense of society. Hence, given the opportunity, the industry would prefer to keep “transparency” and “explainability” as mere discussion points, without any commitment to fulfill them.
It is the responsibility of the regulators to enforce strict laws and deterrent punishments to ensure that the industry does not run away with irresponsible AI systems.
In the legacy IT world, society has allowed the “Dark Web” to develop without a commitment to rein in illegal activities. There is a risk that a similar “Dark-AI” world will develop, exploiting the technology to commit crimes in society.
This risk should be recognized and countered even while the industry is in its infancy. It is possible that we have already missed the bus, though we may not know it: recent advances in AI-led humanoid robots and infringements of neuro-rights indicate that the days of intelligent robots turning rogue are not far off.
Data localization is pitched as a security measure, but critics say it raises costs and stifles startups. How should regulators balance data sovereignty with economic growth?
Data localization is not only the need of the hour to meet the constitutional obligations of “security of the state” and “preservation of the sovereignty and integrity of the state”, but also a commercial necessity for preserving the national wealth that data represents.
Economic growth does not come only from sharing data with global players as Big Tech would try to project. Every country needs to preserve its sovereignty by ensuring that its data assets are used in a manner that enhances the country’s wealth in the long run. Hence, economic growth should be pursued along with data sovereignty, not by giving it up.
Data localization is therefore necessary both from a security point of view and for economic growth.
With the global AI regulatory landscape fragmented, should India align with an existing model or create its own unique system catered to its evolving needs?
It is necessary for India to have its own unique system and align it with others on a mutually respectful basis. Entering into treaties is part of global citizenship and cannot be entirely avoided. However, such treaties are relevant between parties of equal strength, and hence developing an indigenous, India-first approach is essential before working on alignment with any model from the EU or the US.
However, the threat landscape is evolving rapidly, with new threats and vulnerabilities emerging daily. The CERT-In guidelines need to be updated regularly to keep pace with these emerging threats. While CERT-In has established an incident response mechanism, there is a need to enhance its capabilities to respond to complex and large-scale cyber incidents.
The DPDP Rules focus on personal data, but AI needs broader datasets. Should India introduce a separate AI governance framework?
Currently, the ITA 2000 addresses the broader set of personal and non-personal data, whereas the DPDPA addresses personal data alone.
India can work on strengthening the ITA 2000 through the notification of appropriate rules even before any comprehensive law for AI evolves.
The possibility of amending the ITA 2000 to add chapters on AI, the metaverse and quantum computing should be explored, without insisting that only a new law can address these issues.
Chasing a new law when the ground can be covered by notifications or amendments under the current law is strategically unwise.
In your opinion, in the next decade, what are the three biggest regulatory battles India will face in AI, data protection and cybersecurity?
Without a sovereign-controlled computer operating system, all efforts to manage the challenges in AI, data protection and cybersecurity will remain inadequate; superficial and pretentious attempts are unlikely to be effective. The battles are with AI, data protection and cybersecurity, but the real war is over the operating system. The sooner India realizes this, the better.
In the coming days, the battle with the “Dark Web” is likely to intensify since criminals will use AI technology to launch sophisticated attacks.
Poisoning the information on the web that goes into the training of LLMs is another threat that puts the integrity of the Internet at stake. Trust is being eroded by the day, and we need to ensure that we do not lose the battle to protect the integrity of the Internet.
Hence, the fight for an independent operating system, the battle against the Dark Web, and the effort to build impeccable trust in Internet communications are the three battles I would like to flag.
About Vijayashankar Nagarajarao (Naavi):
Na.Vijayashankar (Naavi) is a leading authority in Cyber Law and Data Protection in India, with over 20 years of experience in the field. He is the founder of naavi.org, India’s premier Cyber Law portal, and the Chairman of the Foundation of Data Protection Professionals in India (FDPPI). Vijayashankar specializes in compliance and advisory services across key regulations, including HIPAA, ITA 2008, GDPR, and the Indian DPDPA.
An accomplished author, Vijayashankar has written several pioneering books on Cyber Laws in India, including the first works in the domain. He is also the creator of influential frameworks like the Data Governance and Protection Standard of India (DGPSI) and the Data Trust Score (DTS). Vijayashankar is a visiting faculty member at top law schools such as NLSIU and NALSAR, where he shares his expertise with future leaders in the field.
With a rich background in banking, IT services, and advertising, Vijayashankar is a recognized thought leader in data privacy and cyber law, honored with multiple Lifetime Achievement Awards in 2022 and 2023 for his contributions to the field.