
Artificial Intelligence HANDBOOK

  • Abir Roy, Vivek Pandey, Aman Shankar, Biyanka Bhatia, Sasthibrata Panda & Shreya Kapoor
  • Jan 27
  • 4 min read


“Can machines think?” was the question raised back in 1950 by Alan Turing while contemplating a method for assessing machine intelligence. This groundbreaking idea laid the foundation for Artificial Intelligence (“AI”), which has since evolved from a theoretical concept into a powerful force shaping industries and daily life.


Defining Artificial Intelligence

Advancements in machine learning, deep learning, robotics, and natural language processing have propelled AI to the forefront of global technology. Today, AI is revolutionizing sectors like healthcare, agriculture, finance, and manufacturing, enhancing everything from medical diagnostics to automated translations.


However, as AI development accelerates, so does the urgency for governance, regulation, and ethical oversight. AI systems, being probabilistic, generative, and adaptive, pose risks such as bias, misuse, transparency failures, loss of human control, and national security threats.


In response, jurisdictions worldwide have begun to adopt divergent regulatory approaches. In November 2025, India’s Ministry of Electronics and Information Technology (“MeitY”) released the India AI Governance Guidelines (“Governance Guidelines”), which provide a policy framework to advance the goals of the IndiaAI Mission. Rather than imposing a compliance-heavy regulatory regime, the Governance Guidelines adopt an approach of adaptive governance and responsible innovation, emphasizing regulation only where risks arise. Existing Indian laws, including the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, the Consumer Protection Act, 2019, the Copyright Act, 1957, and sectoral regulations, form the foundation for AI governance. The handbook discusses how Indian laws currently address these challenges across multiple domains:


  • Intellectual Property Rights: The rise of generative AI has placed significant strain on intellectual property rights by challenging foundational concepts of authorship, originality, consent, and liability. In copyright law, disputes centre on whether copying during AI training, and the generation of outputs that imitate protected works or styles, constitute infringement, with Indian law offering limited clarity due to the absence of specific AI or text-and-data-mining exceptions. The Hybrid Model proposed in the Department for Promotion of Industry and Internal Trade’s Working Paper seeks to balance legal certainty for AI developers with remuneration for creators, though it raises constitutional and implementation concerns. Under trademark law, AI-generated content can inadvertently infringe or dilute existing marks, with courts increasingly recognising developer liability rather than placing responsibility solely on users. In patent law, while AI-assisted inventions may be protectable, prevailing legal frameworks continue to require human inventorship, leaving fully autonomous AI-generated inventions outside the scope of patent protection.


  • Data Protection: AI systems are often trained on vast datasets using significant computing power. Data protection laws, by contrast, are premised on the ability to clearly identify each stage of data processing so that individuals can exercise control over their personal data. The opaque “black box” nature of AI systems makes it difficult to determine whether and how data processing complies with legal requirements. It remains to be seen how India’s existing data protection principles will apply here: overregulation could slow India’s AI ambitions, while under-regulation could erode privacy.


  • Competition Law: From a competition law perspective, artificial intelligence has the capacity both to intensify competition and to entrench market power. As reflected in the Competition Commission of India’s AI Market Study, structural features of AI markets, such as economies of scale, network effects, control over data and compute infrastructure, lock-in effects, and limited interoperability, can create conditions conducive to concentration and exclusion. AI systems may facilitate anticompetitive outcomes, including algorithmic collusion, self-preferencing, and personalised pricing, without explicit human coordination. The handbook examines whether traditional competition laws have the flexibility to be extended to such conduct.


  • Information Technology Act: The current law contains provisions adequate to regulate malicious use of technology, obscenity and pornography, and data breaches. MeitY has released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, proposing additional obligations on intermediaries dealing with synthetically generated information. However, AI’s active role in generating content challenges traditional intermediary roles and safe harbour protections, highlighting the need for clearer liability rules.


  • Tort Law: A crucial question that must be determined when assessing any dispute involving AI is that of liability, i.e., how to ascertain who should be held responsible. Tort law, as a body of common law, offers a framework for regulating AI, primarily through concepts such as negligence, strict liability, product liability, and volenti non fit injuria. However, determining liability in AI-related disputes remains complex.


  • Criminal Law: The Bharatiya Nyaya Sanhita, 2023 has the potential to address AI-enabled crimes by applying existing penal provisions to new forms of digital misconduct, though liability questions, such as whether the AI user or the platform is responsible, remain complex.


  • Contract Law: Liability in AI systems can be traced through contractual relationships across the AI supply chain. However, as all components of an AI system are functionally interdependent, a fault in any part can cause system failure, raising questions about where liability lies.



Across these domains, a consistent theme emerges: AI governance in India is currently evolving through incremental adaptation rather than sweeping legal overhaul. This evolution reflects an understanding that AI is a general-purpose technology whose risks vary widely depending on context, scale, and use.


Recognizing this, the Governance Guidelines recommend mitigating AI risks by creating a national AI incident database, promoting voluntary industry standards with stricter safeguards for high-risk sectors, embedding compliance-by-design into AI systems, and ensuring human oversight through reviewable AI outputs to prevent harm.


This handbook does not take a position for or against AI, nor does it propose a specific regulatory solution. Instead, it provides a structured approach to understanding AI from a legal perspective, recognizing both its transformative potential and its ability to challenge existing legal principles. It examines AI technologies within India’s current legal framework to help stakeholders identify where risks lie and where accountability must be strengthened. As AI evolves, the legal issues it raises will also change, influenced by technological advances, economic factors, and societal expectations.


Please feel free to reach out to our Team to discuss any technology law, competition law, or policy issues.


To view the complete handbook, download the file below:




Let's Connect

Is there a legal landscape we haven’t mapped yet? Tell us which topics matter most to your enterprise, and let’s expand the conversation.


C-564, Ground Floor, Defence Colony,

New Delhi – 110024



© 2026 by Sarvada Legal. All rights reserved.
