Artificial Intelligence (AI) is transforming industries worldwide, but in India, it’s also creating new challenges—especially for IT services and consulting firms. As AI regulations evolve, businesses must navigate complex legal and compliance issues that could impact their competitiveness in the global market.
Recently, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to major platforms, setting new guidelines for the regulation of generative AI.
About the Advisory
- The advisory primarily targets large platforms and does not apply to startups.
- MeitY stipulated that platforms deploying under-tested or unreliable AI models must explicitly seek government permission before making them available to Indian users, and must provide disclaimers and disclosures indicating that their platforms are under testing.
- All platforms must ensure that their computer resources do not permit bias, discrimination, or threats to the integrity of the electoral process through the use of AI, generative AI, large language models (LLMs), or similar algorithms.
- However, Big Tech firms building applications on AI will need to label their models as "under testing", a requirement that experts say is subjective and vaguely defined.
Key Challenges and Opportunities
- Competitive Pressures: India is in a three-way race with Silicon Valley and China, and faces intense competition to maintain its position in AI technologies.
- Regulatory Concerns: The fear is that stringent regulations could stifle innovation and hurt India's competitiveness, much as the EU's strict regulatory approach has been contrasted with the US's more lenient stance.
- AI Adoption Issues: Major concerns include job losses, algorithmic discrimination, and misinformation such as deepfakes that can destabilize political processes.
Regulatory Landscape in India
| Regulation/Policy | Key Provisions | Limitations/Remarks |
| --- | --- | --- |
| Information Technology Act, 2000 | Legal recognition for electronic transactions, data protection, and cybersecurity. | Lacks specific AI-related provisions; does not address AI-generated content or biases. |
| IT Act & IT Rules, 2011 | Includes the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules. | Set to be replaced by the proposed Digital India Act, which is expected to include AI-related regulations. |
| Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 | Regulates social media, OTT platforms, and digital news media. | No direct mention of AI, but relevant for AI-generated content and misinformation. |
| Government Advisories on AI and Large Language Models (March 2024) | Requires MeitY approval for significant AI platforms before deployment. Introduces labelling for unreliable AI models, user notifications for inaccuracies, and deepfake detection. | Exempts startups and smaller platforms; primarily focused on AI safety but lacks comprehensive governance mechanisms. |
| Principles for Responsible AI (2021) | Establishes seven core principles: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and protection and reinforcement of positive human values. Encourages government-private sector collaboration. | Non-binding; serves as broad ethical guidance rather than enforceable regulation. |
| National Artificial Intelligence Strategy (2018), #AIFORALL | Focuses on AI applications in healthcare, education, agriculture, smart cities, and transport. Recommends high-quality datasets and legal frameworks for cybersecurity. | A foundational document, but lacks enforceable regulatory mechanisms. |
| Draft National Data Governance Framework Policy (2022) | Modernizes government data management; aims to support AI-driven research and startups with a comprehensive dataset repository. | Still in draft stage; unclear how effectively it will integrate AI governance. |
Challenges in Regulating AI
AI is transforming industries but also exposing gaps in India's legal framework in the following ways:
1. Privacy and Data Protection Issues
AI systems collect and analyze massive amounts of personal data, often without proper safeguards, putting citizens’ privacy at risk. While the Digital Personal Data Protection Act (2023) is a step forward, it lacks strong enforcement, especially in areas like AI-powered surveillance.
- Facial Recognition Concerns: Hyderabad’s police use facial recognition under the Smart Policing Mission, raising fears of mass surveillance.
- Cybersecurity Gaps: India ranked second globally in cyberattacks (PwC 2022), yet 40% of Indian firms using AI lack proper data security (NASSCOM, 2023).
2. Bias and Discrimination in AI Decisions
AI often reinforces existing biases because it learns from flawed datasets. This leads to unfair outcomes in hiring, lending, and policing, contradicting India’s constitutional principles of equality.
- Hiring Bias: AI recruitment tools in India have been found to filter out female candidates for tech roles.
- Global Example: Amazon scrapped its AI hiring tool in 2018 for being biased against women, yet similar biased systems may still be in use in India.
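Bias audits of the kind described above often begin with a simple selection-rate comparison between groups. The sketch below illustrates one common check, the demographic-parity ratio; the numbers, group labels, and the 0.8 "four-fifths" threshold are illustrative assumptions, not figures from this article:

```python
# Minimal sketch of a demographic-parity (selection-rate) check for a
# hypothetical hiring tool. All data here is made up for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) pairs.
    Returns the fraction of candidates selected per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

# Illustrative outcomes: 60/100 men selected, 30/100 women selected.
decisions = ([("men", True)] * 60 + [("men", False)] * 40
             + [("women", True)] * 30 + [("women", False)] * 70)

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, ratio)
```

Under the commonly cited "four-fifths rule" of thumb, a ratio below 0.8 (here, 0.5) would flag the tool for further scrutiny.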
3. Intellectual Property (IP) Conflicts
AI is blurring the lines of ownership in creative works, leading to legal confusion.
- Copyright Issues: India’s Copyright Act (1957) only recognizes human-created works, meaning AI-generated content isn’t protected under copyright law.
- Artists at Risk: The Andersen v. Stability AI Ltd. case highlights how artists struggle with unclear copyright protections against AI-generated replicas of their work.
4. Job Losses and Labor Law Challenges
AI-driven automation could worsen unemployment and increase economic inequality. Unfortunately, India’s labor laws (Four Labour Codes) do not address AI-driven job displacement.
- Risk to Workers: A McKinsey report suggests AI could replace up to 60 million jobs in India’s manufacturing sector by 2030, particularly in textiles and electronics.
5. National Security Risks
AI is being misused for cyberattacks, deepfakes, and misinformation, threatening India's security and democratic processes.
- Election Manipulation: Deepfake videos were used in the 2024 Lok Sabha elections to spread misinformation.
- Cyber Threats: India saw a 15% rise in cyberattacks in 2023 but lacks AI-specific cybersecurity laws, leaving the banking and defense sectors vulnerable.
6. Ethics and Accountability Concerns
AI is being used in critical areas like healthcare and law enforcement, but there are no clear rules on who is accountable when AI makes errors.
- Healthcare Risks: A JAMA study found that AI biases reduced doctors’ diagnostic accuracy by 11.3 percentage points, raising concerns about reliance on flawed AI predictions.
7. Environmental Impact
AI models require massive computing power, leading to high energy consumption and increased carbon emissions.
- Energy Use: Training a large AI model such as GPT-3 (the model behind ChatGPT) is estimated to have consumed around 10 gigawatt-hours (GWh) of electricity, worsening India's environmental challenges.
- No Green AI Laws: India lacks regulations to enforce sustainable AI practices, conflicting with its climate commitments.
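For a sense of scale, the 10 GWh figure cited above can be converted into household-equivalents with simple arithmetic. The per-household consumption value below is an illustrative assumption (roughly in line with Indian averages), not a number from this article:

```python
# Rough arithmetic sketch: how many household-years of electricity the
# article's 10 GWh training estimate represents. The per-household figure
# is an assumption for illustration only.
TRAINING_ENERGY_GWH = 10          # figure cited in the article (an estimate)
KWH_PER_GWH = 1_000_000           # 1 GWh = 1,000,000 kWh
HOUSEHOLD_KWH_PER_YEAR = 1_200    # assumed annual consumption per household

training_kwh = TRAINING_ENERGY_GWH * KWH_PER_GWH
household_years = training_kwh / HOUSEHOLD_KWH_PER_YEAR
print(f"{household_years:,.0f} household-years of electricity")
```

Under these assumptions, one training run corresponds to the annual electricity use of several thousand households, which is the comparison the text gestures at.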
AI is advancing rapidly, but India’s laws are struggling to keep up. Stronger regulations are needed to protect privacy, prevent biases, secure jobs, and address environmental concerns while ensuring AI benefits everyone.
The Way Forward: Ensuring Responsible AI Regulation
To navigate the challenges posed by AI while fostering innovation, India must adopt a balanced approach that combines regulation, collaboration, and investment. Here’s how:
- Global AI Standards: Countries should work towards the universal adoption of the Bletchley Declaration, which promotes safe and ethical AI use.
- Clear and Flexible Regulations: Governments need to create comprehensive laws covering data privacy, algorithm transparency, accountability, and bias prevention to ensure responsible AI deployment.
- International Cooperation: Since AI impacts the world at large, global collaboration is essential. Initiatives like the G7 Hiroshima AI Process (HAP) can help align ethical AI standards across nations.
- Industry Self-Regulation: AI companies should take responsibility for ethical AI use, ensuring fairness, transparency, and security in their applications.
- Investment in AI Research & Education: Governments, academic institutions, and industries must fund AI research and train a workforce that can tackle AI-related challenges, ensuring sustainable growth in the sector.
India needs to pursue a path that aligns with its national interests, focusing on rapid AI adoption and supporting open-source and other alternatives. The goal is to ensure that AI regulations do not hinder India’s ability to maintain its global IT leadership.
#BACK2BASICS: INTERNATIONAL REGULATORY FRAMEWORKS FOR AI
The UN General Assembly (UNGA) adopted a landmark resolution on the promotion of “safe, secure and trustworthy” Artificial Intelligence (AI) systems.
Key highlights of the UNGA Resolution on Artificial Intelligence
- Calls for the same rights to be protected online as offline, and resolves "to govern technology rather than let it govern us".
- Resolves to bridge the artificial intelligence and other digital divides between and within countries.
- Supports regulatory and governance approaches by encouraging Member States and stakeholders from all regions to develop safe, secure and trustworthy artificial intelligence.
- Emphasizes human rights protection throughout the life cycle of artificial intelligence systems.
- Encourages the private sector to adhere to applicable international and domestic laws, in line with the United Nations Guiding Principles on Business and Human Rights.
- Calls for continued discussion on AI governance so that international approaches keep pace with the evolution of AI systems and promote inclusive research, mapping, and analysis.
Other International Regulatory frameworks for AI
- European Union's Artificial Intelligence Act: Defines four levels of risk for AI systems: unacceptable risk, high risk, specific transparency risk, and minimal risk.
  - Aims to ensure that rights, the rule of law, and the environment are protected from high-risk AI.
  - Aims to tackle racial and gender bias by requiring AI systems to be trained on sufficiently representative datasets.
- China's Model: Promotes AI tools and innovation, with safeguards against future harm to the nation's social and economic goals.
  - Focuses on content moderation, personal data protection, and algorithmic governance.
- UK's Approach: Adopts a cross-sector, outcome-based framework for regulating AI, built on core principles such as safety, security and robustness; transparency and accountability; and governance.
  - The framework has not been codified into law for now, but the government anticipates the need for targeted legislative interventions in the future.
  - Balances innovation and safety by applying the existing technology-neutral regulatory framework to AI.
  - An AI & Digital Hub will be launched as a multi-regulator advisory service to help innovators navigate multiple legal and regulatory obligations.
Other Steps taken to promote AI Globally
- Bletchley Declaration for AI: Signed by 28 countries, including the United States, China, Japan, the United Kingdom, France, and India, along with the European Union.
  - Objective: To comprehensively address the risks and responsibilities involved in AI.
  - "Frontier AI" is defined in the declaration as "highly capable foundation generative AI models that could possess dangerous capabilities that can pose severe risks to public safety".
- Hiroshima AI Process (HAP) by G7 to regulate AI: Aims to promote safe, secure, and trustworthy AI. The Hiroshima AI Process Comprehensive Policy Framework presents:
  - the Hiroshima Process International Guiding Principles for All AI Actors, and
  - the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.
Key Issues Related to Artificial Intelligence (AI) in India
| Issue | Description | Example |
| --- | --- | --- |
| Job Displacement and Skill Gap | AI is automating routine jobs, leading to job losses. Workers need advanced digital skills to stay relevant. | NASSCOM (2023): 69% of Indian tech workers need to upskill in AI and machine learning to remain employable. |
| Algorithmic Bias and Ethical Concerns | AI can reflect societal biases, leading to discrimination in hiring, lending, and public services. | UPSC (2023): AI-based screening allegedly disadvantaged candidates from marginalized backgrounds in preliminary exams. |
| Misinformation and Deepfake Threats | AI-generated deepfakes and misinformation threaten public trust, security, and elections. | Lok Sabha Elections (2024): Deepfake videos of political leaders spread on social media, raising concerns about election manipulation. |
| Regulatory Uncertainty and Compliance Costs | The absence of a unified AI policy creates legal confusion, making compliance costly for startups. | Indian App Developers (2023): Filed a complaint against Google with the CCI over restrictive AI-related practices on the Play Store. |
| Global Competitiveness and Innovation Lag | Over-regulation and high compliance costs could slow AI innovation, making India less competitive. | Stanford AI Index (2023): China attracted 4x more AI funding than India, limiting India's global AI leadership. |
| Privacy and Data Security Risks | AI systems collect and analyze vast amounts of personal data, increasing risks of data breaches and misuse. | PwC (2022): India ranked 2nd globally in cyberattacks, with weak AI-specific data protection laws. |
| Lack of AI-Specific Legal Framework | India's legal system lacks dedicated laws to address AI accountability, liability, and ethical use. | Digital Personal Data Protection Act (2023): Covers data privacy but lacks provisions for AI-related biases and accountability. |
| Environmental Impact of AI | AI model training consumes huge amounts of energy, contributing to carbon emissions and environmental strain. | GPT-3 training: estimated to have consumed around 10 GWh of electricity, equivalent to the energy use of thousands of households. |