"Agentic AI lifts the role of AI from being an enabler to being a decision-making agent. This technology acts independently on a course of action linked to repetitive, low-risk decisions that are mostly binary, objective and driven by empirical considerations."
Having led legal teams across sectors, what has shaped your in-house leadership the most, and how do you approach legal strategy today?
I’ve been fortunate to work alongside a smart and eclectic group of professionals who take pride in their work, share a foundational belief in the value of teamwork and have consistently delivered stellar results. In my view, the best way to bring together top talent is to chalk out a goal that taps fully into their professional and personal objectives.
Then, one instils a belief in the team that its members are primed to deliver exceptional results if they work as a collective – that the team is not just a sum of parts but an exponential whole.
Here are my mantras for working with and leading an exceptional team of attorneys:
- A legal department must be responsive and future-ready on skills: As a member of a highly acclaimed global legal department, I see that our team members have universally imbibed the central ethos of our team, which is to enable exceptional business results by delivering practical and clear counsel, always. Members of outstanding legal teams invest significant time in understanding the business and constantly reshape legal strategy by letting go of entrenched positions and engaging in lateral thinking to solve business problems.
- Manage people differently – straitjacketing doesn’t work: High-performing professionals refuse to be pigeonholed in their expected roles. As attorneys jump into the trenches alongside business teams, leaders must enable members of their teams across geographies to reinvent themselves. We need to enable IP attorneys to become contract professionals, M&A attorneys to morph into dispute-resolution experts and technical professionals to handle regulatory/risk responsibilities.
- Be inclusive, inclusive and inclusive: In a world where capabilities lie hidden in plain sight, one should challenge dogmas, set aside unconscious bias and scout talent beyond considerations like gender, age or conventional wisdom on what top talent looks like. The days of uni-dimensional attorneys are over, and the idea that subject-matter experts will stay in their own lanes is past its sell-by date.
- Evolve or step aside: Leaders need to create growth trajectories to meet the evolving professional needs of their team members. To sustain high-performing teams, one should also institutionalize certain core work-values such as collaboration (beyond borders), being business-first and valuing leadership and integrity – all mixed with a healthy dose of happiness, warmth and belonging in one’s workplace. Old-fashioned hierarchies, inefficient processes and rote thinking have no place in a modern team.
As Agentic AI begins to access and act on enterprise data, what legal considerations must companies weigh to manage internal risk?
Just as we began to comprehend LLMs and associated Gen-AI models, the unforgiving evolution cycle of technology has crashed onto the centre stage of the technology world with a bang.
Agentic AI lifts the role of AI from being an enabler to being a decision-making agent. This technology acts independently on a course of action linked to repetitive, low-risk decisions that are mostly binary, objective and driven by empirical considerations.
Agentic AI also raises the stakes from a risk and legal standpoint, as its outcomes will impact consumers/citizens, businesses, governments and the institutions where these stakeholders intersect.
As attorneys, we are analysing several questions:
- How do we balance the competing facets of quicker, more efficient decision-making via Agentic AI against potentially inadvertent outcomes? How do we ensure that this technology factors in nuances and subjective considerations, and navigates past misleading trends and false negatives? Even as we think through these issues, Agentic AI should adhere to generally accepted principles such as auditing, oversight, transparency, explainability and accountability, which are entrenched in the development of Gen AI technology.
- As we cascade decision-making to Agentic AI, how do we allocate risk and responsibility for these technologies? We are also reviewing aspects such as trust and transparency of Agentic AI solutions and whether these offerings can adapt to dynamic situations with built-in fail-safe mechanisms.
- If Agentic AI substitutes human intervention in practical use cases, we will need to incorporate robust auditing frameworks to ensure accurate and bias-free outcomes, especially when the technology directly impacts human beings. This is particularly true when Agentic AI is deployed in contexts such as healthcare, education, administration and governance.
What data privacy safeguards must be prioritized when Agentic AI systems operate autonomously on personal or regulated datasets?
Agentic AI brings with it the promise of contextualized decision-making, domain awareness, predictability in outcomes and lower costs.
However, there is no escaping the fact that Agentic AI will be most useful for individuals and enterprises when it is granted pervasive access to data (business and personal).
- Enterprise data: An Agentic AI solution would interact with an enterprise’s entire decision-making life cycle to achieve desired outcomes. This life cycle is handled by planning, analyzing, validating and implementing task agents, as well as learning and monitoring agents, all working under the aegis of supervisory agents that continually orchestrate decisions. Agentic AI platforms will need a high level of access to enterprise data and, therefore, require guardrails to manage it appropriately. One will need to ensure targeted access, encryption and authentication protocols, and, for service recipients, informed consent, permitted use and robust data-sharing protocols.
- Public/Personal information: As Agentic AI is deployed in governance and civic administration projects, these models will require access to sensitive data (including personal data) to obtain a full context for delivering citizen-centric services. This fact in itself would require Agentic AI solutions to remain subject to robust monitoring, security and resiliency mechanisms and prudent data governance policies that comply with applicable data protection laws.
- Contextual decision-making: Agentic AI solutions sit at the intersection of different categories of data belonging to businesses and individuals, and this creates a host of challenges in defining boundaries and unique guardrails for separate sets of data. Accessing and storing user logs, user history and other data will require strong governance norms and compliance with applicable regulations.
In the event of a harmful AI-generated outcome, how should responsibility be distributed across the enterprise, developer, or the AI system?
We have seen substantial analysis around allocating risk and liability for generative AI solutions between the (i) developer, (ii) service provider (enterprise), and (iii) deployer. An illustrative list of liability considerations that differ across the value chain is:
- biased AI/ML models or IP infringement (developer),
- inadequate audit/testing of solutions, data privacy violations (enterprises), and
- non-disclosure when deploying Gen AI solutions (deployers and enterprises).
There is a consensus that allocating liability for AI systems solely to developers of Gen AI solutions would have a chilling effect on innovation and development. Instead, legal frameworks and end-users/deployers seek to impose liability on AI-solution providers, on the basis that providers can mitigate their liability by implementing robust audit and bias-prevention mechanisms at the design/offering stage itself. This is only one point of view, as causation and disclosure of risks are key factors to be considered. The jury is still out on how to allocate liability across the AI-systems value chain, and there are no one-size-fits-all answers yet.
An alternative approach was proposed under the Revised Product Liability Directive (EU) 2024/2853: a strict-liability regime for certain high-risk AI systems. However, the withdrawal of this directive indicates that strict liability is not the optimal design point for affixing liability under a statutory framework. In the Indian context, discussions on liability for AI solutions/systems are at a formative stage and will likely be based on (i) the risk associated with specific areas of deployment of AI solutions; and (ii) the currently applicable fault-based approach under the Indian Contract Act, 1872. In any event, we must consider a graded approach to liability based on the impact of Agentic AI solutions on various end users.
Given that LLMs often draw from unverified or copyrighted sources, how can businesses mitigate IP and copyright risks in their outputs?
On the issue of IP and copyright considerations for LLMs and associated AI systems, the crux is to balance inventor/creator rights against the need to encourage innovation and restrain gatekeeping of technological advances. One solution would be to incentivize innovators/developers through a robust and balanced licensing regime, while deterring them from imposing prohibitive costs or cornering the benefits of innovation for specific interest groups in the value chain. As we evaluate IP risks linked to the use of LLMs/AI systems, it might be useful to revert to first principles:
- While evaluating the “creation of a new work”, we would assess the extent of human impact on Gen AI outputs. If there is substantial, original input by human beings in developing a work using Gen AI/LLMs, there would be a case to consider the grant of copyright in such works.
- The perception that LLMs scrape all aspects of data-sets is firmly contested by developers who claim that text and data mining (TDM) involves accessing only factual, statistical and non-expressive elements in a data set. Therefore, there is a strong push towards making an exception for TDM use on data sets.
- Even as we design an AI-ready IP-licensing and monetization regime, we need to recognize the primacy of a data subject’s right to withhold consent to data-sharing for AI-based training and research. Obviously, this view limits developers’ ability to train LLMs on data sets, but it also levels the asymmetric bargaining power between individual data subjects and large enterprises/data-handling entities.
How do you see the role of attorneys evolving as AI reshapes legal work, and how can they future-proof their capabilities?
Lawyering for tomorrow will require attorneys to move beyond conventional legal research, analysis of judicial trends and established methods of assessing legal/regulatory risks. This is because future enterprises will rely on an optimal mix of technology and human capital to enable swift decision-making and risk mitigation.
From where I see tomorrow, here are a few hacks for future-proofing the capabilities of attorneys:
- Befriend technology (or perish): Working with emerging technologies is an inevitability going forward. Think contract drafting: attorneys won’t start from zero anymore; they will provide inputs at the final stages of both drafting and negotiations. A disputes lawyer will work with assistive technology in advising clients and analyzing judicial precedents. Additionally, the conduct of adjudication proceedings itself could look dramatically different, with AI systems aiding decision-making in criminal and civil matters.
- Technology-assisted decisions. Trust but verify: Legal leaders should use data and analytics to aid decision-making. However, over-reliance on data could lead to distorted results, as data sets can be skewed, contain intrinsic bias or, more simply, be inaccurate or manipulated. Therefore, it will always be incumbent on professionals to use their judgment, legal proficiency and acumen in making data-assisted decisions.
- Human intervention will retain relevance: The emergence of Agentic AI, a decision-oriented technology, will see AI systems effecting routine decisions. However, factors that require nuance and subjectivity, and that do not yield to trend-based/ML models, will continue to require human intervention: for instance, critical healthcare decisions, human safety, and civic policy and governance matters that impact citizens and enterprises alike.
*Disclaimer: The views expressed in this interview are solely those of the interviewee and do not represent the views of any organization with which they are or have been associated.
About Saurabh Awasthi:
Saurabh Awasthi is an attorney who works at the intersection of law, business, and emerging technology. Currently General Counsel at Kyndryl India, and earlier as a private-practice lawyer, he has steered several high-stakes acquisitions, commercial contracts, complex disputes and regulatory matters. His career has been shaped by relentless curiosity and a desire to provide clarity amidst complexity. With degrees from Delhi University and NYU, he keenly follows the current discourse on AI, ethics, and governance.