On July 20, the Telecom Regulatory Authority of India (TRAI) released recommendations for a risk-based framework to regulate artificial intelligence (AI) in the country. The development comes at a time when Sam Altman, CEO of OpenAI, the company behind ChatGPT, expressed skepticism about India's ability to compete in training foundational AI models, sparking discussion about the challenges and prospects for Indian startups in the AI domain.
India's AI market has witnessed remarkable growth, expanding by 22% over the previous year. The country now has around 109,000 skilled professionals contributing to the AI revolution, a 20% increase from the previous year. Notably, the sector has attracted substantial investments, totaling approximately $3 billion, fostering an environment of growth, enthusiasm, and innovation.
Understanding AI
Artificial intelligence (AI) focuses on developing computers and machines capable of reasoning, learning, and performing tasks that imitate human intelligence or involve analyzing vast amounts of data. It draws on several disciplines, including computer science, data analytics, and linguistics. In the professional sphere, AI relies primarily on machine learning and deep learning technologies, enabling tasks such as data analytics, prediction, natural language processing, and intelligent data retrieval. AI serves as a valuable toolset that enhances business capabilities, driving innovation and efficiency.
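The core idea behind machine learning mentioned above — inferring a pattern from past data and using it to predict — can be illustrated with a deliberately tiny sketch. All the figures below are invented purely for illustration; real systems work at far larger scale with specialized libraries.

```python
# Minimal illustration of "learning from data": fit a straight line
# (ordinary least squares) to past observations, then predict a new value.
# The numbers are hypothetical and exist only to demonstrate the idea.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical history: monthly ad spend (lakh INR) vs. customer sign-ups
spend   = [1, 2, 3, 4, 5]
signups = [110, 205, 290, 410, 500]

slope, intercept = fit_line(spend, signups)
prediction = slope * 6 + intercept  # "predict" sign-ups at a spend of 6
print(prediction)
```

The "learning" here is just arithmetic over examples, but the shape is the same as in real AI systems: parameters are estimated from data, then reused on inputs the system has never seen.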
AI has the power to turn your smartphone into a wish-granting genie. Suppose you are having a rough day at work and, to get over the work blues, you turn to your phone and say, "Hey AI, make me the perfect cup of coffee." The AI, equipped with an extensive understanding of your taste preferences, brewing methods, and coffee history, conjures the perfect coffee for you. Instantly, your phone metamorphoses into a mini coffee-making marvel, brewing your favorite beans to perfection and adding a touch of froth art. AI is thus a treasure box of endless possibilities, capable of both improving and harming humankind.
Generative AI
Generative AI is a branch of AI that learns from large volumes of data to create new content such as text, images, videos, audio, and code. In India, some tech firms are building their own tools on top of OpenAI's popular Generative Pre-trained Transformer (GPT) family of models, while other startups are creating their own AI language models for natural language processing. These models act as personal assistants for different tasks in companies across marketing, HR, customer support, and more, taking over repetitive work and operating alongside human employees.
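At its simplest, a generative model learns statistical patterns from example data and then samples new sequences from those patterns. The toy word-level Markov chain below (with an invented one-line corpus) captures only that basic idea; production systems such as GPT learn vastly richer patterns with transformer neural networks.

```python
import random
from collections import defaultdict

# Toy generative model: learn word-to-next-word transitions from a tiny
# invented corpus, then sample new text from the learned distribution.

corpus = "ai helps startups and ai helps teams and startups grow"

# "Training": record which words follow which in the corpus
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Sample a new word sequence from the learned transitions."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:          # dead end: no observed successor
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("ai", 6))
```

The output is "new" text the corpus never contained verbatim, yet every step follows a pattern observed in the training data — which is, in miniature, what generative AI does.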
Indian Startups building AI
Recent data from a study by Nasscom, a prominent Indian industry association, reveals a remarkable trend: between 2021 and 2023, the number of generative AI startups in India has more than doubled. The Indian AI startup ecosystem has two main categories. AI-first startups provide AI technology directly to end customers as a core part of their product or service offerings, while AI-enabled startups develop innovative AI tools and solutions to support other companies' AI applications. Notable generative AI startups in India include Haptik, a conversational messaging platform that has used AI for a long time; Uniphore, one of India's earliest natural language processing startups, which examines voice, video, and text interactions between companies and their users to understand customer sentiment; and Fluid AI, which provides customized enterprise GPT solutions capable of comprehending an organization's entire content, among many others.
Benefits of using AI
AI offers various benefits to startups:
More personalized customer experience: By analyzing vast amounts of data, AI helps startups understand customer preferences, leading to personalized experiences and targeted marketing campaigns, fostering higher satisfaction and loyalty.
Reduced inefficiency: AI automates repetitive tasks and streamlines operations, reducing manual effort and errors, leading to cost savings and improved productivity.
Forecasting: Using AI-driven predictive analytics, startups can get valuable information about market trends, what customers want, and how well their business is doing. This helps them make smarter decisions and plan more effectively.
24/7 Customer Support with Chatbots: AI-powered chatbots enable startups to offer round-the-clock customer support, addressing queries promptly, and enhancing customer engagement and retention.
Strengthened Data Security: AI identifies potential cybersecurity threats in real-time, protecting sensitive data and ensuring a secure environment for customers.
Empowered Market Research and Competitive Analysis: AI tools analyze market data and competitor information comprehensively, empowering startups to stay informed and make strategic moves accordingly.
Insights from Natural Language Processing (NLP): NLP capabilities extract valuable insights from unstructured data like social media comments, aiding in sentiment analysis and understanding customer feedback.
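The NLP benefit above — extracting sentiment from unstructured customer feedback — can be sketched with a deliberately simple keyword-counting approach. The word lists and sample comments are invented for illustration; production systems use trained language models rather than fixed word lists.

```python
# Toy sentiment analysis: score a comment by counting positive vs. negative
# keywords. Word lists and sample feedback are invented for illustration.

POSITIVE = {"great", "love", "fast", "helpful", "excellent"}
NEGATIVE = {"slow", "bug", "crash", "confusing", "poor"}

def sentiment(comment):
    """Label a comment positive, negative, or neutral by keyword counts."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

feedback = [
    "love the new dashboard and support was helpful",
    "app is slow and the checkout has a bug",
    "delivery arrived on time",
]
print([sentiment(c) for c in feedback])
```

Even this crude version shows the value proposition: unstructured comments become structured labels a startup can aggregate and act on.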
Requirement for AI specific policy in India
To empower startups to harness AI effectively, it is imperative to establish comprehensive laws and regulations. Although a central law is yet to be enacted, efforts are underway to formulate regulations that address the intricacies of AI. India must introduce a comprehensive "Digital India Act" that addresses the regulatory aspects of emerging technologies like AI, as the existing IT Act of 2000 falls short of tackling the complexities of today's digital landscape. There is also a pressing need to strengthen and clarify laws on data privacy and protection to prevent unethical exploitation of AI technology. The immense benefits AI brings to sectors like healthcare, education, and agriculture can outweigh its risks, provided those risks are well governed. Timely regulation will promote responsible AI practices and maximize its positive impact.
Efforts undertaken in India
1. NITI Aayog
The National Strategy on Artificial Intelligence (NSAI) discussion paper, released in 2018, set forth the "AI for All" mantra as the governing benchmark for AI design, development, and deployment in India. The strategy prioritized research promotion, workforce skilling, AI adoption facilitation, and responsible AI guidelines, with a focus on developmental sectors, and emphasized collaboration with the private sector. In 2021, NITI Aayog released a two-part approach paper, 'Principles of Responsible AI (RAI)', outlining ethical guidelines for AI in India and proposing enforcement mechanisms for effective implementation.
Type of regulation recommended: a risk-based mechanism for regulating AI, with obligations proportionate to the potential harm an AI system can cause.
The NITI Aayog paper proposes creating an independent Council for Ethics and Technology (CET) to aid sectoral regulators in formulating AI policies, conducting research, and devising ethics review mechanisms for evaluating the efficacy of AI systems.
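A risk-based mechanism of the kind described above essentially maps an AI system's potential for harm to a level of regulatory obligation. The sketch below is a hypothetical illustration of that mapping only — the tier names, score thresholds, and obligations are invented, not drawn from any official framework.

```python
# Hypothetical sketch of a risk-based regulatory mapping: an AI system's
# assessed harm potential determines its obligations. Tier names, score
# thresholds, and obligations are invented for illustration only.

RISK_TIERS = [
    # (minimum harm score, tier, example obligations)
    (8, "high",   ["pre-deployment audit", "human oversight", "incident reporting"]),
    (4, "medium", ["transparency disclosures", "periodic self-assessment"]),
    (0, "low",    ["voluntary code of conduct"]),
]

def classify(harm_score):
    """Map a harm score (0-10) to a risk tier and its obligations."""
    for threshold, tier, obligations in RISK_TIERS:
        if harm_score >= threshold:
            return tier, obligations
    raise ValueError("harm score must be non-negative")

tier, duties = classify(9)  # e.g. a hypothetical AI system used in lending decisions
print(tier, duties)
```

The point of such a design is proportionality: low-risk chatbots are not burdened like high-risk credit or hiring systems, which is precisely the balance a startup-friendly framework seeks.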
2. Proposed Digital Personal Data Protection Bill, 2022:
The forthcoming bill will be applicable to AI developers involved in creating and supporting AI technologies. As a result, AI developers will need to adhere to the fundamental principles of privacy and data protection outlined in the bill.
3. TRAI recommendations
Establishment of the Artificial Intelligence and Data Authority of India (AIDAI), an independent statutory authority with dual roles as a regulator and advisory body for all AI-related domains.
Designation of the Ministry of Electronics and Information Technology as the administrative ministry responsible for overseeing AI matters.
Adoption of a risk-based framework, with a multi-stakeholder advisory body to assist AIDAI.
Collaboration with the Ministry of Education and the All India Council for Technical Education (AICTE) to develop AI courses and ethical AI education for students in both technical and non-technical institutions.
Establishment of a Centre of Excellence for Artificial Intelligence in each state/UT to facilitate R&D of AI capabilities in various fields.
Utilization of the "Digital Communication Innovation Square (DCIS)" scheme to support startups and organizations, fostering idea demonstration and solution improvement.
Given the current lack of a comprehensive regulatory framework governing the use of AI systems in India, there is an opportunity to draw valuable lessons from the experiences of other nations. By studying how different countries have approached AI regulation, India can better understand the challenges and best practices involved, and formulate a robust, adaptive framework tailored to its specific needs and context.
Lessons from Rest of the World
European Union: The EU is developing a dedicated AI Act to prioritize human needs and ensure responsible AI use, and it emphasizes the need for horizontal regulation. The proposed regulation bans certain AI practices, sets strict standards for high-risk AI systems, and imposes lighter requirements on low-risk systems. AI products will be categorized based on potential harm, with stricter rules for higher-risk applications and a complete ban on certain uses such as government-led social scoring.
United Kingdom: The UK's National AI Strategy outlines its approach to AI, aiming to foster innovation while protecting individuals' rights. It adopts context-specific regulation and emphasizes addressing real threats rather than minor ones. It provides cross-sectoral principles encompassing safety, transparency, fairness, accountability, and redress. Existing regulators such as Ofcom or the Competition and Markets Authority will implement these principles without statutory enforcement, enabling customized approaches based on how AI is applied. The government will collaborate with regulators during the initial phase to identify barriers and assess the framework's effectiveness.
Australia: In November 2019, Australia released its AI Ethics Framework, which features voluntary AI Ethics Principles intended to guarantee the safety, security, and dependability of AI. These principles aim to promote safer, fairer outcomes for all Australians, minimize negative impacts on affected parties, and uphold the highest ethical standards during AI design, development, and implementation. The principles apply throughout the AI system lifecycle, encompassing design, data and modeling, development and validation, deployment, and monitoring and refinement, especially when the AI system's use may significantly impact individuals, the environment, or society.
The Australian government has also proposed categorizing AI tools into three tiers based on their level of risk, with different obligations for each tier; higher-risk AI tools would undergo more rigorous assessments and monitoring. Currently, there are no nationwide laws requiring risk assessments for AI, except for government suppliers in New South Wales.
USA: In April 2020, the Federal Trade Commission (FTC) issued five principles for companies to follow while using AI and algorithms to manage consumer-protection risks. These guidelines emphasize transparency, fair decision-making, robust data and models, and accountability. The White House's Office of Science and Technology Policy (OSTP) also published a blueprint for an AI Bill of Rights in October 2022, outlining core principles for responsible AI development. These principles focus on user safety, non-discrimination, data privacy, notice and explanation, and access to human alternatives. Additionally, the US Department of Commerce sought public opinions in April 2023 to create rules and laws ensuring AI systems operate as advertised, considering the possibility of an auditing system to assess potential harmful bias or misinformation.
Common Thread
The recommendations from TRAI echo those from the rest of the world in emphasizing responsible AI use and the need for effective regulation. Both highlight the significance of transparency, fairness, and accountability in AI systems, and prioritize user safety, non-discrimination, and data privacy as crucial aspects of AI governance. They also advocate collaboration with relevant stakeholders, such as educational institutions and startups, to foster the development and adoption of AI technologies while ensuring ethical practices.
Way Forward
As a policy think tank, ADIF is committed to promoting the ethical use of AI and safeguarding against potential risks, including fraud, identity theft, deep fakes, and criminal activities. Recognizing the dynamic startup ecosystem in India, ADIF emphasizes the necessity of defining broad policies and a centralized framework for AI. This approach aims to prevent unnecessary regulatory hurdles and complexities, thereby facilitating a conducive environment for businesses to thrive. By streamlining laws and establishing a comprehensive framework, India can effectively harness the transformative potential of AI while ensuring responsible and secure implementation.