Social Media: Prospect and Challenges
AI Content Detectors to Combat Deepfakes
From UPSC perspective, the following things are important :
Prelims level: Emerging Technologies; Deepfake Technology
Why in the News?
During the General Elections 2024, the proliferation of AI-generated content (AIGC), including deepfake videos featuring prominent figures like Aamir Khan and Ranveer Singh, raised concerns about misinformation.
What is Deepfake Technology?
- It is a type of Artificial Intelligence used to create convincing images, audio and video hoaxes. Deepfakes often transform existing source content where one person is swapped for another.
- Creating such content involves a technique known as Generative Adversarial Networks (GANs), comprising Artificial Neural Networks.
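The adversarial idea behind GANs — a generator learning to fool a discriminator while the discriminator learns to catch it — can be sketched on toy 1-D data. This is an illustrative sketch only: real deepfake systems use deep convolutional networks, and the model sizes, learning rate, and target distribution below are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(x) = sigmoid(w*x + b): probability that x is "real".
w, b = 0.1, 0.0
# Generator G(z) = a*z + c: maps random noise z to a fake sample.
a, c = 1.0, 0.0

lr = 0.05
real_mean = 4.0  # "real" data is drawn from N(4, 1)

for step in range(2000):
    x_real = rng.normal(real_mean, 1.0)
    z = rng.normal()
    x_fake = a * z + c

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake) (non-saturating generator loss).
    d_fake = sigmoid(w * (a * z + c) + b)
    a += lr * (1 - d_fake) * w * z
    c += lr * (1 - d_fake) * w

fakes = a * rng.normal(size=1000) + c
# The generated samples drift toward the real mean as the two networks compete.
print(round(float(fakes.mean()), 2))
```

The same push-and-pull, scaled up to image-generating networks, is what lets deepfakes become progressively harder to distinguish from genuine footage.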
Legal Safeguards in India:
Significance of Deepfake Technology:
- Promotes Right to Expression: Deepfakes amplify voices of marginalised individuals, enabling them to share important messages. Recently, a video was created to deliver the final message of a journalist killed by the Saudi government, calling for justice.
- Can contribute to the Education System: Online educators use deepfakes to bring historical figures to life for engaging lessons. For example, a video of Abraham Lincoln delivering his Gettysburg Address.
- Provides Autonomy: Deepfakes empower individuals to control their digital identity and explore new forms of self-expression. For instance, the Reface App.
- Provides a realistic experience: Artists leverage deepfakes for creative expression and collaboration, as seen in Salvador Dali’s interactive museum promotion. Deepfake tech enables realistic lip-syncing for actors speaking different languages, enhancing global accessibility and immersion in films.
- Renovating old memories: Deepfakes aid in restoring old photos, enhancing low-quality footage, and creating realistic training materials for public safety.
What are the limitations of Deepfake Technology?
- Spreading False Information: Deepfakes can be used to spread misinformation deliberately, influencing public opinion or elections; fabricated videos of politicians or celebrities can manipulate viewers and create confusion about important issues.
- Frauds: Deepfake technology enables impersonation for financial frauds, tricking individuals into revealing sensitive information. They can also fuel harassment, especially targeting women, and lead to psychological distress.
- Detection Accuracy: No AI detector guarantees 100% accuracy, though tools like Originality.ai claim a 99% true-positive rate. Detection models report probability scores rather than verdicts, allowing nuanced assessments despite inherent uncertainty.
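The probability-score reporting described above can be illustrated with a small evaluation sketch. The scores and labels below are made-up illustration data, not output from Originality.ai or any real detector.

```python
def rates_at_threshold(scores, labels, threshold):
    """Return (true_positive_rate, false_positive_rate) for a detector
    that flags a sample as AI-generated when score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg

# 1 = AI-generated, 0 = human-made; scores are the detector's probabilities.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.97, 0.91, 0.88, 0.45, 0.62, 0.30, 0.08, 0.02]

tpr, fpr = rates_at_threshold(scores, labels, 0.5)
print(tpr, fpr)  # 0.75 0.25
```

Moving the threshold trades false positives against false negatives, which is why detectors report scores instead of binary yes/no answers.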
Future Scope:
- Adversarial AI: Keeping pace with evolving generative AI models poses a significant challenge for content detectors.
- Accessibility and Cost: With increased adoption and advancements, the accessibility and affordability of detection tools are expected to improve.
PYQ: With the present state of development, Artificial Intelligence can effectively do which of the following? (2020)
1. Bring down electricity consumption in industrial units
2. Create meaningful short stories and songs
3. Disease diagnosis
4. Text-to-Speech Conversion
5. Wireless transmission of electrical energy
Select the correct answer using the code given below:
(a) 1, 2, 3 and 5 only
(b) 1, 3 and 4 only
(c) 2, 4 and 5 only
(d) 1, 2, 3, 4 and 5
The democratic political process is broken
From UPSC perspective, the following things are important :
Prelims level: NA
Mains level: Role of Media in civil society
Why in the news?
Due to the loss of credibility, many institutional news media struggle to establish a factual foundation or maintain control over diverse social narratives, affecting society, media principles, and the political milieu in India.
The Present Scenario of Discourse in News Media:
- Institutional Crises: Loss of credibility in institutional news media leads to a lack of establishment of factual baseline and narrative control. Without credibility, news media struggles to maintain authority and trust, hindering its role in shaping public discourse.
- Impact on Public Discourse: The rise of social media has decentralized content creation and dissemination. Virality, rather than substance, becomes the primary measure of content value. Prioritization of engagement over quality and veracity distorts public discourse.
- Hyper-partisanship in Media: Loss of credibility in mainstream media contributes to hyper-partisanship. News and content are utilized as tools to promote factional interests rather than fostering dialogue and deliberation. Lack of interest in genuine discourse further exacerbates divisions within society.
- Fragmentation of Attention: The proliferation of media channels leads to the fragmentation of collective attention. A constant stream of transient content makes issues appear less significant. Gaining visibility and capturing attention becomes paramount, overshadowing the importance of substantive dialogue.
- Individual Battles and Tribal Affiliation: Public discourse becomes a battleground for individual interests seeking attention and reinforcing tribal affiliations. Lack of genuine dialogue hampers the evolution of consensus, further polarizing society.
Present Scenario of Discourse in Civil Society:
- Increase in Dependency: Liberal civil society increasingly directs its efforts towards engaging with the state and its institutions. Dependency on the state for functioning compromises civil society’s autonomy and independence.
- Legitimacy Issues: Civil society’s legitimacy is now derived more from normative purity than representativeness. This shift undermines civil society’s ability to truly represent diverse viewpoints and reconcile conflicting interests.
- Undermining Societal issues: Civil society becomes more inclined towards single-issue campaigns rather than engaging in broader negotiation and consensus-building. This narrow focus limits its effectiveness in addressing complex societal issues.
- Bypassing Political Processes: Civil society groups tend to bypass political processes and opt for institutional interventions, such as judicial or bureaucratic avenues, to advance their agendas. This strategy may sideline democratic processes and undermine the role of elected representatives in decision-making.
The Present Scenario of Discourse in Political Parties:
- Internal Focus of Political Parties: Political parties often prioritize internal issues over broader deliberations and policy formulation. This internal focus detracts from the party’s ability to engage in constructive dialogue and address pressing societal issues.
- Unable to play a role: Elected representatives are expected to translate constituency issues into a policy agenda. However, within the party setup, they often lack the power and inclination to do so effectively.
- Uncertain Electoral Payoff: Elected representatives may prioritize direct interventions for constituent services over influencing the policy agenda due to uncertain electoral benefits.
- Complex Electoral Dynamics: Elections involve a mix of constituency, state, and national issues, making it challenging for representatives to effectively represent their constituents’ interests. Candidates often rely heavily on party symbols for electoral success, diminishing the significance of individual policy agendas.
- Power Dynamics within Parties: Decision-makers for party tickets hold significant power within political parties, influencing candidate selection and party direction. Limited institutional positions of power lead to internal power struggles and sycophancy among aspirants.
Way Forward:
- Rebuilding Credibility: Implement measures to enhance transparency and accountability within news organizations. Encourage fact-checking and adherence to journalistic standards. Promote diversity of perspectives in news reporting to rebuild trust with diverse audiences.
- Regulation for Social Media Platforms: Implement regulations to combat misinformation and promote responsible content sharing. Foster partnerships between social media companies and fact-checking organizations to verify information.
- Promote Digital Literacy: Invest in education and public awareness campaigns to enhance media literacy among citizens. Equip individuals with critical thinking skills to discern credible sources from misinformation. Foster a culture of skepticism and verification when consuming news and information online.
- Encouraging Civil Society Engagement: Provide support for civil society initiatives that promote inclusivity and dialogue among diverse stakeholders. Enhance funding and resources for civil society organizations to reduce dependency on the state and encourage autonomy.
- Facilitate Political Dialogue and Reform: Encourage political parties to prioritize policy formulation and public deliberation over internal politics. Reform electoral systems to reduce the influence of party symbols and empower individual candidates with policy agendas.
Conclusion: The broken democratic process is exacerbated by media credibility loss, civil society’s state dependency, and internal party issues. Rebuilding media trust, regulating social media, promoting dialogue, and empowering civil society is crucial for restoration.
Mains PYQ:
Q. How do pressure groups influence Indian political process? Do you agree with this view that informal pressure groups have emerged as more powerful than formal pressure groups in recent years? (UPSC IAS/2017)
Q. Can Civil Society and Non-Governmental Organisations present an alternative model of public service delivery to benefit the common citizen? Discuss the challenges of this alternative model. (UPSC IAS/2021)
Centre notifies Fact-Check Unit to screen online content
From UPSC perspective, the following things are important :
Prelims level: Legal liability protection, Fact check unit
Mains level: IT Rules, 2021
Why in the news?
The Ministry of Electronics and Information Technology has designated the Press Information Bureau’s Fact Check Unit to flag misinformation about Central government departments on social media platforms ahead of the elections.
Context-
- According to the IT Rules of 2021, social media platforms risk losing their legal immunity (safe harbour) from liability for user-posted content if they choose to retain misinformation flagged by the Fact Check Unit.
Background of this news-
- Due to the controversy surrounding the concept, the Union government had delayed officially notifying the Fact Check Unit as there was ongoing litigation at the Bombay High Court challenging the provision.
- However, this month, the court decided not to prolong a temporary halt that prevented the government from implementing the rules.
Key points as per IT Rules, 2021-
- Mandates: In essence, the IT Rules (2021) demand that social media platforms exercise heightened diligence over the content on their platforms, placing a legal obligation on intermediaries to make reasonable efforts to prevent users from uploading unlawful content.
- Appoint a Grievance Officer: Social media platforms are mandated to set up a grievance redressal mechanism and promptly remove unlawful and inappropriate content within specified timeframes.
- Ensuring Online Safety and Dignity of Users: Intermediaries are obligated to remove or disable access within 24 hours upon receiving complaints about content that exposes individuals’ private areas, depicts them in full or partial nudity, shows them engaged in sexual acts, or involves impersonation, including morphed images.
- Informing users about privacy policies is crucial: Social media platforms’ privacy policies should include measures to educate users about refraining from sharing copyrighted material and any content that could be considered defamatory, racially or ethnically offensive, promoting pedophilia, or threatening the unity, integrity, defense, security, or sovereignty of India or its friendly relations with foreign states, or violating any existing laws.
Fake news on social media can have several negative impacts on governments-
- Undermining Trust- Fake news can erode public trust in government institutions and officials. When false information spreads widely, it can lead to scepticism and doubt about the government’s credibility.
- Destabilizing Democracy- Misinformation can distort public perceptions of government policies and actions, potentially leading to unrest, protests, or even violence. This can destabilize democratic processes and undermine the functioning of government.
- Manipulating Public Opinion- Fake news can be strategically used to manipulate public opinion in favour of or against a particular government or political party. By spreading false narratives, individuals or groups can influence elections and policymaking processes.
- Impeding Policy Implementation- False information circulating on social media can create confusion and resistance to government policies and initiatives. This can impede the effective implementation of programs and reforms.
- Wasting Resources- Governments may be forced to allocate resources to address the fallout from fake news, such as conducting investigations, issuing clarifications, or combating disinformation campaigns. This diverts resources away from other important priorities.
- Fueling Division- Fake news can exacerbate social and political divisions within a country by spreading divisive narratives or inciting hatred and hostility towards certain groups or communities. This can further polarize society and hinder efforts towards unity and cohesion
Measures to Tackle Fake News on Social Media:
- Mandatory Fact-Checking: Implement a requirement for social media platforms to fact-check content before dissemination.
- Enhanced User Education: Promote media literacy and critical thinking skills to help users discern reliable information from fake news.
- Strengthened Regulation: Enforce stricter regulations on social media platforms to curb the spread of misinformation and hold them accountable for content moderation.
- Collaborative Verification: Foster partnerships between governments, fact-checking organizations, and social media platforms to verify the accuracy of information.
- Transparent Algorithms: Ensure transparency in algorithms used by social media platforms to prioritize content, reducing the spread of false information.
- Swift Removal of Violative Content: Establish mechanisms for prompt removal of fake news and penalize users or entities responsible for spreading it.
- Public Awareness Campaigns: Launch campaigns to raise awareness about the detrimental effects of fake news and promote responsible sharing practices.
Conclusion: To address misinformation, governments should enforce IT Rules (2021), empower fact-checking units, and promote media literacy. Collaboration between authorities, platforms, and citizens is vital for combating fake news and upholding democratic values.
Centre bans 18 OTT Platforms for Inappropriate Content
From UPSC perspective, the following things are important :
Prelims level: Laws governing OTT Ban
Mains level: Read the attached story
In the news
- The Information & Broadcasting Ministry has blocked 18 OTT platforms on the charge of publishing obscene and vulgar content.
How were these platforms banned?
- The content hosted on these OTT platforms was found to be in prima facie violation of:
- Section 67 and 67A of the Information Technology Act, 2000;
- Section 292 of the Indian Penal Code; and
- Section 4 of the Indecent Representation of Women (Prohibition) Act, 1986.
- These platforms were violative of the responsibility to not propagate obscenity, vulgarity and abuse under the guise of ‘creative expression’.
How are OTT Platforms regulated in India?
- Regulatory Framework: The Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules, 2021 introduce a Code of Ethics applicable to digital media entities and OTT platforms.
- Key Provisions: These guidelines encompass content categorization, parental controls, adherence to journalistic norms, and the establishment of a grievance redressal mechanism to address concerns.
[A] Content Regulations
- Age-Based Classification: OTT platforms like Netflix and Amazon Prime are mandated to classify their content into five age-based categories: U (Universal), U/A 7+, U/A 13+, U/A 16+, and A (Adult).
- Parental Locks: Effective parental locks must be implemented for content classified as U/A 13+ or higher, ensuring that parents can control access to age-inappropriate material.
- Age Verification: Robust age verification systems are required for accessing adult content, enhancing parental oversight and safeguarding minors from exposure to inappropriate material.
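A simple reading of the classification, parental-lock, and age-verification requirements above can be sketched as an access check. The function, its parameters, and the age thresholds are illustrative assumptions, not any platform's actual implementation.

```python
# Ratings ordered from least to most restricted, per the Rules' categories.
RATINGS = ["U", "7+", "13+", "16+", "A"]
MIN_AGES = [0, 7, 13, 16, 18]  # assumed age cut-off for each category

def can_play(rating, viewer_age, parental_lock_unlocked=False, age_verified=False):
    """Return True if playback is allowed under a simple reading of the rules."""
    level = RATINGS.index(rating)
    # Adult content requires a verified-age account, regardless of stated age.
    if rating == "A" and not age_verified:
        return False
    # 13+ and above sit behind a parental lock unless it has been unlocked.
    if level >= RATINGS.index("13+") and not parental_lock_unlocked:
        return viewer_age >= MIN_AGES[level]
    return True

print(can_play("13+", viewer_age=10))                                # False
print(can_play("13+", viewer_age=10, parental_lock_unlocked=True))   # True
print(can_play("A", viewer_age=25))                                  # False
print(can_play("A", viewer_age=25, age_verified=True))               # True
```

The point of the sketch is that classification, locks, and verification are three independent gates: a title's rating alone never decides playback.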
[B] Grievance Redressal Mechanism
- Three-Tier System: A comprehensive grievance redressal mechanism consisting of three tiers has been established:
- Level-I: Publishers are encouraged to engage in self-regulation to address grievances and concerns internally.
- Level-II: A self-regulating body, headed by a retired judge from the Supreme Court or High Court or an eminent independent figure, will oversee complaints and ensure impartial resolution.
- Level-III: The Ministry of Information and Broadcasting will formulate an oversight mechanism and establish an inter-departmental committee tasked with addressing grievances. This body possesses the authority to censor and block content when necessary.
[C] Selective Banning of OTT Communication Services
- Parliamentary Notice: Concerns about the influence and impact of OTT communication services prompted a notice from a Parliamentary Standing Committee to the Department of Telecom (DoT).
- Scope of Discussion: This discussion focuses exclusively on OTT communication services such as WhatsApp, Signal, Meta (formerly Facebook), Google Meet, and Zoom, excluding content-based OTTs like Netflix or Amazon Prime.
- Regulatory Authority: Content regulation within OTT communication services falls under the jurisdiction of the Ministry of Information and Broadcasting (MIB), emphasizing the government’s commitment to ensuring responsible communication practices.
As we enter election year, let us not be defined by our politics — but our kindness
From UPSC perspective, the following things are important :
Prelims level: K-shaped recovery
Mains level: importance of looking beyond personal interests and extending kindness to others.
Broadcast regulation 3.0, commissions and omissions
From UPSC perspective, the following things are important :
Prelims level: Broadcast Advisory Council
Mains level: press freedom and diversity
Central idea
India’s Broadcasting Services Bill aims at regulating broadcasting comprehensively, introducing positive steps like audience data transparency and competition in terrestrial broadcasting. However, concerns arise over privacy, jurisdictional conflicts with OTT regulation, and lack of measures on ownership and an independent regulator.
Key Highlights:
- The Broadcasting Services (Regulation) Bill aims to regulate broadcasting comprehensively, marking the third attempt since 1997.
- Positive propositions include obligations for record-keeping, audience measurement transparency, and allowing private actors in terrestrial broadcasting.
Key Concerns:
- Lack of privacy safeguards for subscriber and audience data in data collection practices.
- Inclusion of Over-the-Top (OTT) content suppliers in the definition of broadcasting creates jurisdictional conflicts and poses threats to smaller news outlets.
Positive Provisions Requiring Refinement:
- Obligation for maintaining records of subscriber data.
- Stipulation of a methodology for audience measurement.
- Provision to permit private actors in terrestrial broadcasting.
Apprehensions:
- Expanded definition of broadcasting may limit conditions for journalists and news outlets not part of large television networks.
- The mandate for a ‘Content Evaluation Committee’ to self-certify news programming raises feasibility and desirability concerns.
Crucial Silences in the Bill:
- Lack of measures to assess cross-media and vertical ownership impacts diversity in the news marketplace.
- Absence of provisions for creating an independent broadcast regulator.
Government Empowerment and Intrusive Mechanisms:
- The Bill grants the government leeway to inspect broadcasters without prior intimation, impound equipment, and curtail broadcasting in “public interest.”
- Violations of the Programme Code and Advertisement Code could result in deleting or modifying content.
Concerns Regarding Broadcast Advisory Council:
- Doubts about the Council’s capacity to address grievances raised by over 800 million TV viewers.
- Lack of autonomy for the Council, as the Central government has the ultimate decision-making authority.
Key Terms and Phrases:
- Over-the-Top (OTT) content suppliers
- National Broadcasting Policy
- Content Evaluation Committee
- Vertical integration
- Broadcast Advisory Council.
Key Statements:
- Privacy concerns arise due to the Bill’s lack of guardrails for subscriber and audience data collection practices.
- The absence of measures to assess cross-media and vertical ownership impacts the diversity of news suppliers.
- The Bill’s silence on creating an independent broadcast regulator is a significant omission.
Key Examples and References:
- The Bill is part of a series of attempts to regulate broadcasting, following initiatives in 1997 and 2007.
- TRAI’s ‘National Broadcasting Policy’ proposes including OTT content suppliers in the definition of broadcasting services.
Key Facts and Data:
- Lack of specifics on cross-media and vertical ownership in the Bill impedes diversity in the news marketplace.
- No provisions for an independent broadcast regulator, with the proposal for a ‘Broadcast Advisory Council.’
Critical Analysis:
- The potential positive provisions of the Bill require refinement, particularly concerning privacy protection and oversight bodies for news outlets.
- Intrusive mechanisms grant significant power to the government, posing concerns about press freedom and external pressure on news suppliers.
Way Forward:
- The Bill must address jurisdictional conflicts, incorporate privacy safeguards, and reconsider intrusive provisions for effective and balanced regulation.
- Protection of press freedom and diversity should be prioritized through fine-tuning potentially positive provisions and addressing omissions.
The challenge of maritime security in the Global South
From UPSC perspective, the following things are important :
Prelims level: India's Maritime Vision 2030
Mains level: Blue Economy: Sustainable use of ocean resources for economic development
Central idea
The article underscores the evolving challenges in the maritime domain, emphasizing the shift from traditional military approaches to a developmental model for maritime security. It highlights the need for collaboration among developing nations to address unconventional threats, such as illegal fishing and climate change, while acknowledging the reluctance to prioritize collective action over political and strategic autonomy.
Key Highlights:
- Evolution of Maritime Challenges: New dimensions in hard security challenges, including asymmetrical tactics and grey-zone warfare. Use of land attack missiles and combat drones reshaping the security landscape.
- Shift in Demand for Maritime Security: Growing demand from states facing unconventional threats such as illegal fishing, natural disasters, and climate change. Need for a broader approach beyond military means to address diverse maritime challenges.
- India’s Developmental Approach: Maritime Vision 2030 focuses on economic growth and livelihood generation through port, shipping, and inland waterway development. Indo-Pacific Oceans Initiative with seven pillars, including maritime ecology, marine resources, and disaster risk reduction.
New Threats in Maritime Domain:
- Recent developments include Ukraine’s asymmetrical tactics and China’s maritime militias, indicating a shift to improvised strategies.
- Emerging threats involve grey-zone warfare, land attack missiles, and combat drones.
Demand for Maritime Security:
- Majority of recent demand stems from unconventional threats like illegal fishing, natural disasters, and climate change.
- Addressing these challenges requires commitment of capital, resources, and specialized personnel.
Global South’s Perspective:
- Developing nations perceive Indo-Pacific competition among powerful nations as detrimental to their interests.
- Challenges involve interconnected objectives in national, environmental, economic, and human security.
Challenges in Global South:
- Rising sea levels, marine pollution, climate change disproportionately impact less developed states, leading to vulnerability.
- Unequal law-enforcement capabilities and lack of security coordination hinder joint efforts against maritime threats.
Creative Models for Maritime Security:
- Maritime security transcends military actions, focusing on generating prosperity and meeting societal aspirations.
- India’s Maritime Vision 2030 emphasizes port, shipping, and inland waterway development for economic growth.
- Dhaka’s Indo-Pacific document and Africa’s Blue Economy concept align with a developmental approach.
Fight Against Illegal Fishing:
- Significant challenge in Asia and Africa marked by a surge in illegal, unreported, and unregulated fishing.
- Faulty policies encouraging destructive methods like bottom trawling and seine fishing contribute to the problem.
India’s Indo-Pacific Oceans Initiative:
- Encompasses seven pillars, including maritime ecology, marine resources, capacity building, and disaster risk reduction.
- Advocates collective solutions for shared problems, garnering support from major Indo-Pacific states.
Challenges in Achieving Consensus:
- Implementation of collaborative strategy faces hurdles in improving interoperability, intelligence sharing, and establishing a regional rules-based order.
- Balancing sovereignty and strategic independence remains a priority for many nations, hindering consensus.
Key Challenges:
- Complexity of Unconventional Threats: Conventional military approaches insufficient; requires capital, resources, and specialist personnel. Challenges include illegal fishing, marine pollution, human trafficking, and climate change.
- Global South’s Coordination Challenges: Unequal law-enforcement capabilities and lack of security coordination among littoral states. Reluctance to prioritize collective action due to varying security priorities and autonomy concerns.
- Vulnerability of Less Developed States: Disproportionate impact of rising sea levels, marine pollution, and climate change on less developed states. Vulnerability stemming from inadequate resources to combat environmental and security challenges.
- Lack of Consensus and Reluctance: Reluctance among littoral states to pursue concrete solutions and collaborate. Paradox of non-traditional maritime security, where collective issues clash with political and strategic autonomy.
Key Terms and Phrases:
- Grey-Zone Warfare: Tactics that fall between peace and war, creating ambiguity in conflict situations.
- Asymmetrical Tactics: Strategies that exploit an opponent’s weaknesses rather than confronting strengths directly.
- Maritime Vision 2030: India’s 10-year blueprint for economic growth in the maritime sector.
- Blue Economy: Sustainable use of ocean resources for economic development.
- Indo-Pacific Oceans Initiative: India’s initiative with pillars like maritime ecology, marine resources, and disaster risk reduction.
- IUU Fishing: Illegal, unreported, and unregulated fishing.
- Bottom Trawling and Seine Fishing: Destructive fishing methods contributing to illegal fishing.
Key Examples and References:
- Ukraine’s Asymmetrical Tactics: Utilization of unconventional strategies in the Black Sea.
- China’s Maritime Militias: Deployment in the South China Sea as an example of evolving threats.
- India’s Maritime Vision 2030: Illustrates a developmental approach to maritime security.
- Illegal Fishing in Asia and Africa: Rising challenge with negative environmental and economic impacts.
Key Facts and Data:
- Maritime Vision 2030: India’s 10-year plan for the maritime sector.
- Indo-Pacific Oceans Initiative: Seven-pillar initiative for collective solutions in the maritime domain.
Critical Analysis:
- Shift to Developmental Model: Emphasis on generating prosperity and meeting human aspirations in addition to traditional security measures.
- Comprehensive Maritime Challenges: Recognition of diverse challenges beyond military threats, including environmental and economic goals.
- Littoral State Reluctance: Paradox in the Global South, where collective issues clash with autonomy, hindering collaborative solutions.
Way Forward:
- Collaborative Strategies: Improved interoperability, intelligence sharing, and agreement on regional rules-based order.
- Prioritizing Collective Action: Developing nations must prioritize collective action over sovereignty for effective maritime solutions.
- Sustainable Development Goals: Prioritize sustainable development goals in littoral states, addressing challenges such as illegal fishing and climate change.
Rashmika Mandanna’s deepfake: Regulate AI, don’t ban it
From UPSC perspective, the following things are important :
Prelims level: deepfake
Mains level: Discussions on Deepfakes
Central idea
The article highlights challenges in deepfake regulation using the example of the Rashmika Mandanna video. It calls for a balanced regulatory approach, citing existing frameworks like the IT Act, and recommends clear guidelines, public awareness, and potential amendments in upcoming legislation such as the Digital India Act to effectively tackle deepfake complexities.
What is deepfake?
- Definition: Deepfake involves using advanced artificial intelligence (AI), particularly deep learning algorithms, to create manipulated content like videos or audio recordings.
- Manipulation: It can replace or superimpose one person’s likeness onto another, making it appear as though the targeted individual is involved in activities they never participated in.
- Concerns: Deepfakes raise concerns about misinformation, fake news, and identity theft, as the technology can create convincing but entirely fabricated scenarios.
- Legitimate Use: Despite concerns, deepfake technology has legitimate uses, such as special effects in the film industry or anonymizing individuals, like journalists reporting from sensitive or dangerous situations.
- Sophistication Challenge: The increasing sophistication of AI algorithms makes it challenging to distinguish between genuine and manipulated content.
Key Highlights:
- Deepfake Impact: The article discusses the impact of deepfake technology, citing the example of a viral video of actor Rashmika Mandanna, which turned out to be a deepfake.
- Regulatory Responses: It explores different approaches to regulate deepfakes, highlighting the need for a balanced response that considers both AI and platform regulation. Minister Rajeev Chandrasekhar’s mention of regulations under the IT Act is discussed.
- Legitimate Uses: The article recognizes that while deepfakes can be misused for scams and fake videos, there are also legitimate uses, such as protecting journalists in oppressive regimes.
Challenges:
- Regulatory Dilemma: The article points out the challenge of finding a balanced regulatory approach, acknowledging the difficulty in distinguishing between lawful and unlawful uses of deepfake technology.
- Detection Difficulty: Advancements in AI have made it increasingly difficult to detect deepfake videos, posing a threat to individuals depicted in such content and undermining trust in video evidence.
- Legal Ambiguities: The article highlights legal ambiguities around deepfakes, as creating false content is not inherently illegal, and distinguishing between obscene, defamatory, or satirical content can be challenging.
Key Facts:
- The article mentions the viral deepfake video of Rashmika Mandanna and its impact on the debate surrounding deepfake regulations.
- It highlights the challenges in detecting the new generation of almost indistinguishable deepfakes.
Government Actions:
- Legal Frameworks in Action: The Indian government relies on the Information Technology (IT) Act to regulate online content. For instance, platforms are obligated to remove unlawful content within specific timeframes, demonstrating an initial approach to content moderation.
- Policy Discussions on Deepfakes: Policymakers are actively engaging in discussions regarding amendments to the IT Act to explicitly address deepfake-related challenges. This includes considerations for adapting existing legal frameworks to the evolving landscape of AI-generated content.
What more needs to be done:
- Legislative Clarity for Platforms: Governments should provide explicit guidance within legislative frameworks, instructing online platforms on the prompt identification and removal of deepfake content. For instance, specifying mechanisms to ensure compliance with content moderation obligations within stringent timelines.
- AI Regulation Example: Develop targeted regulations for AI technologies involved in deepfake creation. China’s approach, requiring providers to obtain consent from individuals featured in deepfakes, serves as a specific example. Such regulations could be incorporated into existing legal frameworks.
- Public Awareness Campaigns: Drawing inspiration from successful public awareness initiatives in other domains, governments can implement campaigns similar to those addressing cybersecurity. These campaigns would educate citizens about the existence and potential threats of deepfakes, empowering them to identify and report such content.
- Global Collaboration Instances: Emphasizing the need for global collaboration, governments can cite successful instances of information-sharing agreements. For example, collaboration frameworks established between countries to combat cyber threats could serve as a model for addressing cross-border challenges posed by deepfakes.
- Technological Innovation Support: Encourage research and development by providing grants or incentives for technological solutions. Specific examples include initiatives that have successfully advanced cybersecurity technologies, showcasing the government’s commitment to staying ahead of evolving threats like deepfakes.
Way Forward:
- Multi-pronged Regulatory Response: The article suggests avoiding reactionary calls for specialized regulation and instead opting for a comprehensive regulatory approach that addresses both AI and platform regulation.
- Digital India Act: The upcoming Digital India Act is seen as an opportunity to address deepfake-related issues by regulating AI, emerging technologies, and online platforms.
Meta lawsuits: Big Tech will always be bad for mental health
From the UPSC perspective, the following things are important:
Prelims level: Dopamine
Mains level: The problem with social media and its business model
Central idea
The article delves into the social media crisis, pointing fingers at Meta for exacerbating youth mental health issues through Instagram’s addictive features. Legal actions highlight the platforms’ intentional exploitation of young users’ vulnerabilities. To address this, a suggested solution is contemplating a shift from the current profit-driven business model to a subscription-based one.
Key Highlights:
- Social Media Crisis: Social media platforms, especially Meta (formerly Facebook), are facing a crisis due to concerns about their impact on mental health, particularly among youth.
- Legal Action Against Meta: Forty-two US Attorneys General have filed lawsuits against Meta, alleging that Instagram, a Meta-owned platform, actively contributes to a youth mental health crisis through addictive features.
- Allegations Against Meta: The lawsuit claims that Meta knowingly designs algorithms to exploit young users’ dopamine responses, creating an addictive cycle of engagement for profit.
- Dopamine and Addiction: Dopamine, associated with happiness, is triggered by likes on platforms like Facebook, leading to heightened activity in children’s brains, making them more susceptible to addictive behaviors.
Prelims focus – Dopamine
Key examples for mains value addition
- The Social Dilemma (2020): A Netflix documentary that revealed how social media, led by Meta, messes with our minds and influences our behavior, especially impacting the mental health of youngsters.
- Frances Haugen’s Revelations: A whistleblower exposed internal Meta documents showing that Instagram worsened body image issues for teen girls, making social media’s impact on mental health a serious concern.
- US Surgeon General’s Advisory: The government’s health expert issued a warning about the negative effects of social media on young minds, emphasizing its importance in President Biden’s State of the Union address.
Challenges:
- Addictive Business Model: The core issue with social media is its business model, focusing on user engagement and data monetization, potentially at the expense of user well-being.
- Transformation from Networks to Media: Social networks, initially built for human connection, have transformed into media properties where users are treated as data for advertisers, impacting their habits and behaviors.
- Global Regulatory Scrutiny: Meta faces regulatory challenges beyond the US, with UK, EU, and India considering legislative measures. India, having the largest Instagram user base, emphasizes accountability for content hosted on platforms.
Analysis:
- Business Model Critique: The article argues that the problem with social media lies in its business model, which prioritizes user engagement for data collection and monetization.
- Regulatory Consequences: If the lawsuit succeeds, Meta could face significant penalties, potentially adding up to billions of dollars, and signaling a major setback for the company.
- Global Impact: Regulatory scrutiny extends beyond the US, indicating a need for platforms to be more accountable and responsible for their content and user interactions on a global scale.
Key Data:
- Potential Penalties: Meta could face penalties of up to $5,000 for each violation if the lawsuit succeeds, posing a significant financial threat considering Instagram’s large user base.
- Regulatory Pressure in India: India, with 229 million Instagram users, emphasizes the end of a free pass for platforms, signaling a global shift towards increased accountability.
Way Forward:
- Shift to Subscription Model: The article suggests that social networks might consider adopting a subscription model, akin to OpenAI’s approach, to prioritize user well-being over advertising revenue.
- Listen to Regulatory Signals: Platforms are urged to heed regulatory signals and work collaboratively to address issues rather than adopting a confrontational stance.
- Long-term Survival: To ensure long-term survival, social media networks may need to reevaluate their business models, aligning them with user well-being rather than prioritizing engagement and data monetization.
In essence, the article highlights the crisis in social media, legal challenges against Meta, the critique of the business model, global regulatory scrutiny, and suggests potential shifts in the industry’s approach for long-term survival.
TRAI can’t regulate OTT platforms: TDSAT
From the UPSC perspective, the following things are important:
Prelims level: TRAI
Mains level: OTT Regulations
Central Idea
- The Telecom Disputes Settlement and Appellate Tribunal (TDSAT) has issued an interim order clarifying that Over-the-Top (OTT) platforms, such as Hotstar, fall outside the jurisdiction of the Telecom Regulatory Authority of India (TRAI).
- Instead, they are governed by the Information Technology Rules, 2021, established by the Ministry of Electronics and Information Technology (MeitY).
Context for TDSAT’s Decision
- The All India Digital Cable Federation (AIDCF) initiated the petition, alleging that Star India’s free streaming of ICC Cricket World Cup matches on mobile devices through Disney+ Hotstar is discriminatory under TRAI regulations.
- This is because viewers can only access matches on Star Sports TV channels by subscribing and making monthly payments.
Diverging Opinions on OTT Regulation
- IT Ministry vs. DoT: The IT Ministry contends that internet-based communication services, including OTT platforms, do not fall under the jurisdiction of the DoT, citing the Allocation of Business Rules.
- DoT’s Draft Telecom Bill: The DoT proposed a draft telecom Bill that classifies OTT platforms as telecommunications services and seeks to regulate them as telecom operators. This move has encountered objections from MeitY.
TRAI’s Attempt at OTT Regulation
- Changing Stance: TRAI, after three years of maintaining that no specific regulatory framework was required for OTT communication services, began consultations on regulating these services.
- Consultation Paper: In June, TRAI released a consultation paper seeking input on regulating OTT services and exploring whether selective banning of OTT services could be considered as an alternative to complete Internet shutdowns.
- Telecom Operators’ Demand: Telecom operators have long advocated for “same service, same rules” and have pushed for regulatory intervention for OTT platforms.
Significance of TDSAT’s Order
- The TDSAT decision holds significance due to ongoing debates over the regulation of OTT services.
- TRAI and the Department of Telecommunications (DoT) have been attempting to regulate OTT platforms, while the Ministry of Electronics and Information Technology opposes these efforts.
Recommendations and Monitoring
- In September 2020, TRAI recommended against regulatory intervention for OTT platforms, suggesting that market forces should govern the sector.
- However, it also emphasized the need for monitoring and intervention at an “appropriate time.”
Conclusion
- The recent TDSAT ruling on OTT platform jurisdiction adds complexity to the ongoing debate over the regulation of these services in India.
- While TRAI and the DoT seek regulatory measures, the IT Ministry contends that such services fall outside the purview of telecommunications regulation.
- The evolving landscape highlights the need for a nuanced approach to balance the interests of various stakeholders, including telecom operators, government authorities, and the broader public.
Fediverse: Understanding Decentralized Social Networking
From the UPSC perspective, the following things are important:
Prelims level: Fediverse
Mains level: NA
Central Idea
- Meta, the parent company of Facebook, Instagram, and WhatsApp, has launched Threads, a Twitter rival, which is set to become a part of the fediverse.
- While Meta’s move has garnered attention, the company is yet to reveal its plans for utilizing the fediverse to build a decentralized social network.
What is the Fediverse?
- Network of Servers: The fediverse is a group of federated social networking services that operate on decentralized networks using open-source standards.
- Third-Party Servers: It comprises a network of servers run by third parties, not controlled by any single entity. Social media platforms can utilize these servers to facilitate communication between their users.
- Cross-Platform Communication: Users on social media platforms within the fediverse can seamlessly communicate with users of other platforms within the network, eliminating the need for separate accounts for each platform.
- Participating Platforms: Meta’s Threads is set to join the fediverse, along with other platforms like Pixelfed (photo-sharing), PeerTube (decentralized video-sharing), Lemmy, Diaspora, Movim, Prismo, WriteFreely, and more.
Benefits of Using the Fediverse
- Decentralized Nature: Social media platforms adopt the fediverse to leverage its decentralized nature, giving users more control over the content they view and interact with.
- Cross-Platform Communications: The fediverse enables easier communication between users of different social media platforms within the network.
- Data Portability: Users can freely transport their data to other platforms within the fediverse, ensuring greater flexibility and control over their online data.
Challenges Hindering Wider Adoption
- Scalability: Decentralized servers might face challenges in handling large amounts of traffic, leading to potential scalability issues.
- Content Moderation: The decentralized nature of the fediverse poses difficulties in implementing and enforcing uniform content moderation policies across servers.
- Data Privacy: Enforcing data privacy policies becomes more challenging since data posted on one server might not be deleted due to differing data deletion policies on other servers.
The Fediverse’s Evolution
- Long-standing Idea: The concept of the fediverse has been around for decades, with attempts made by companies like Google to embrace decentralized networks.
- Emergence of Notable Platforms: Platforms like Identi.ca (founded in 2008) and Mastodon and Pleroma (emerged in 2016) have contributed to the development of the fediverse.
- ActivityPub Protocol: In 2018, the World Wide Web Consortium (W3C) introduced the ActivityPub protocol, a commonly used protocol in applications within the fediverse.
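The ActivityPub protocol mentioned above exchanges JSON documents built from the Activity Streams 2.0 vocabulary. The sketch below shows the kind of payload one fediverse server POSTs to another server's inbox when a user publishes a post; all URLs are hypothetical placeholders, not real accounts.

```python
import json

# A minimal ActivityStreams "Create" activity wrapping a "Note",
# the basic unit fediverse servers federate via ActivityPub.
# Every URL here is a hypothetical placeholder.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.social/activities/1",        # hypothetical
    "actor": "https://example.social/users/alice",      # hypothetical
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "id": "https://example.social/notes/1",         # hypothetical
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello, fediverse!",
    },
}

# On the wire this is serialized as JSON and delivered to
# follower inboxes; here we just round-trip the payload.
payload = json.dumps(activity)
print(json.loads(payload)["object"]["type"])  # → Note
```

Because every compliant server understands this shared vocabulary, a Mastodon user can follow and reply to a Pixelfed or Threads account without holding an account on those platforms.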
Section 69 (A) of IT Act
From the UPSC perspective, the following things are important:
Prelims level: Section 69A of IT Act
Mains level: Not Much
Central Idea
- The Indian government has exercised its powers under Section 69(A) of the Information Technology Act, 2000.
- It requested Twitter and other social media platforms to remove a video depicting the naked parade and sexual assault of two Manipur women.
What is Section 69(A) of the IT Act?
- Empowering Content Takedown: Section 69(A) allows the government to issue content-blocking orders to online intermediaries like ISPs, web hosting services, search engines, etc.
- Grounds for Blocking: Content can be blocked if it is considered a threat to India’s national security, sovereignty, public order, or friendly relations with foreign states, or if it incites the commission of cognizable offenses.
- Review Committee: Requests made by the government for blocking content are sent to a review committee, which issues the necessary directions. Such orders are typically kept confidential.
Supreme Court’s Verdict on Section 69(A)
- Striking Down Section 66A: In the case of Shreya Singhal vs. Union of India (2015), the Supreme Court struck down Section 66A of the IT Act, which penalized the sending of offensive messages through communication services.
- Section 69(A) Validated: The Court upheld the constitutionality of Section 69(A) of the IT Act, read with the Information Technology (Blocking) Rules, 2009, noting that it is narrowly drawn and includes several safeguards.
- Limited Blocking Authority: The Court emphasized that blocking can only be carried out if the Central Government is satisfied about its necessity, and the reasons for blocking must be recorded in writing for legal challenges.
Other Rulings on Section 69(A)
- Twitter’s Challenge: Twitter approached the Karnataka High Court in July last year, contesting the Ministry of Electronics and Information Technology’s (MeitY) content-blocking orders issued under Section 69(A).
- Court’s Dismissal: In July of this year, the single-judge bench of the Karnataka HC dismissed Twitter’s plea, asserting that the Centre has the authority to block tweets.
- Extending Blocking Powers: Justice Krishna D Dixit ruled that the Centre’s blocking powers extend not only to single tweets but to entire user accounts as well.
Conclusion
- The application of Section 69(A) has been a subject of legal and societal debate, as it aims to balance national security and public order concerns with the protection of free speech and expression.
Why TRAI wants to regulate WhatsApp, similar services
From the UPSC perspective, the following things are important:
Prelims level: TRAI
Mains level: Regulating OTT communication services, necessity and challenges
Central Idea
- In a surprising move, the TRAI is reconsidering its previous stance on regulating OTT communication services such as WhatsApp, Zoom, and Google Meet. Almost three years after advising against a specific regulatory framework for these services, TRAI has released a consultation paper, inviting stakeholders to provide suggestions on regulating OTT services.
What is Telecom Regulatory Authority of India (TRAI)?
- TRAI is an independent regulatory body established by the Government of India to regulate and promote telecommunications and broadcasting services in the country.
- TRAI’s primary mandate is to ensure fair competition, protect consumer interests, and facilitate the growth and development of the telecom industry in India.
- TRAI performs various functions to fulfill its objectives, including formulating regulations and policies, issuing licenses to telecom service providers, monitoring compliance with regulations, resolving disputes, promoting fair competition, and conducting research and analysis in the telecom sector.
- TRAI also acts as an advisory body to the government on matters related to telecommunications and broadcasting.
What is Over-the-top (OTT)?
- OTT refers to the delivery of audio, video, and other media content over the internet directly to users, bypassing traditional distribution channels such as cable or satellite television providers.
- OTT communication services offer users the ability to make voice and video calls, send instant messages, and engage in group chats using internet-connected devices.
- Examples of popular OTT services include video streaming platforms like Netflix, Amazon Prime Video, and Disney+, music streaming services like Spotify and Apple Music, communication apps like WhatsApp and Skype, and social media platforms like Facebook and Instagram.
Growing complexity of regulating Internet services
- Rapid Technological Advancements: The Internet landscape is constantly evolving, with new technologies, platforms, and services emerging regularly which makes it challenging for regulators to keep up with the latest developments and their potential implications.
- Convergence of Services: Traditionally distinct services such as telecommunications, broadcasting, and information technology are converging in the digital realm. Internet services now encompass a wide range of functionalities, including communication, entertainment, e-commerce, social networking, and more.
- Global Nature of the Internet: The Internet transcends national boundaries, making it difficult to implement uniform regulations across jurisdictions. Different countries have varying approaches to Internet governance, privacy laws, content regulation, and data protection.
- Privacy and Data Protection: The collection, storage, and use of personal data by Internet services have raised concerns about privacy and data protection.
- Content Moderation and Fake News: The rise of social media and user-generated content platforms has brought forth challenges related to content moderation, misinformation, and disinformation. Regulators are grappling with issues of freedom of speech, ensuring responsible content practices, and combatting the spread of fake news and harmful content online.
Why is TRAI exploring selective banning of OTT apps?
- Economic Ramifications: Shutting down telecommunications or the entire Internet can have significant negative consequences for a country’s economy. By exploring selective banning of OTT apps, TRAI aims to mitigate the economic ramifications while still addressing concerns related to specific apps or content.
- Technological Challenges: Traditional methods of blocking websites or apps may face challenges when dealing with dynamic IP addresses and websites hosted on cloud servers. Advanced techniques and encryption protocols like HTTPS make it difficult for service providers to block or filter content at the individual app level. Despite these challenges, TRAI believes that it is still possible to identify and block access to specific websites or apps through network-level filtering or other innovative methods.
- Parliament Committee Recommendation: TRAI’s exploration of selective banning of OTT apps aligns with the recommendation made by the Parliamentary Standing Committee on IT. The committee suggested that targeted blocking of specific websites or apps could be a more effective approach compared to a blanket ban on the entire Internet.
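The network-level filtering idea referenced above can be sketched in miniature: a filter observes a hostname (from a DNS query or a TLS SNI field, say) and checks it, along with each parent domain, against a blocklist. This is an illustrative toy with hypothetical names, not a description of any deployed system; as noted above, encryption of the handshake itself increasingly defeats such inspection.

```python
# Illustrative hostname-based filtering of the kind a network-level
# blocklist might apply to DNS queries or TLS SNI values.
# The blocklist entries are hypothetical examples.
BLOCKLIST = {"blocked-app.example", "banned.example"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is blocklisted."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c", then "c" so subdomains are covered too.
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

print(is_blocked("cdn.blocked-app.example"))  # → True (parent is listed)
print(is_blocked("news.example"))             # → False
```

Matching parent domains is what lets one blocklist entry cover an app's many subdomains, which is also why over-broad entries can inadvertently block unrelated services hosted under the same domain.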
Why it is necessary to regulate OTT communication services?
- Consumer Protection: Regulations can help ensure consumer protection by establishing standards for privacy, data security, and user rights. OTT communication services handle vast amounts of personal data and facilitate sensitive conversations, making it crucial to have safeguards in place to protect user privacy and secure their data from unauthorized access or misuse.
- Quality and Reliability: By establishing minimum service standards, authorities can ensure that users have consistent and reliable access to communication services, minimizing disruptions and service outages.
- National Security: OTT communication services play a significant role in everyday communication, including personal, business, and government interactions. Ensuring national security interests may require regulatory oversight to address issues like lawful interception capabilities, preventing misuse of services for illegal activities, and maintaining the integrity of critical communications infrastructure.
- Level Playing Field: Regulatory measures aim to create a level playing field between traditional telecom operators and OTT service providers. Regulating OTT communication services can address the perceived disparity in obligations and promote fair competition among different service providers.
- Public Interest and Social Responsibility: OTT communication services have become integral to societal functioning, enabling education, healthcare, business communication, and more. Regulations can ensure that these services operate in the public interest and uphold social responsibilities. For example, regulations can address issues like combating misinformation, hate speech, or harmful content on these platforms.
Conclusion
- TRAI’s decision to revisit its stance on regulating OTT communication services reflects the evolving dynamics of the Internet industry. The consultation paper and the draft telecom Bill highlight the need for regulatory parity and financial considerations in this sector. As stakeholders provide suggestions, it remains to be seen how TRAI will strike a balance between regulating OTT services and fostering innovation in the digital landscape.
Also read:
Fake News: Addition of The Provision In Intermediary Guidelines
A case of unchecked power to restrict e-free speech
From the UPSC perspective, the following things are important:
Prelims level: Related provisions and important Judgements
Mains level: Changing prospects of freedom of speech and expression
Central idea
- The recent judgment by the Karnataka High Court dismissing Twitter’s challenge to blocking orders issued by the Ministry of Electronics and Information Technology (MeitY) raises serious concerns about the erosion of free speech and unchecked state power. By imposing an exorbitant cost on Twitter and disregarding established procedural safeguards, the judgment sets a worrisome precedent for content takedowns and hampers the exercise of digital rights.
Relevance of the topic
- The concerns raised in the Karnataka High Court judgment stand in contrast to the principles established in the Shreya Singhal case.
- Highly relevant to the principles of natural justice and the expanded scope of online speech and expression.
Concerns raised over the judgment
- Disregard of Procedural Safeguards: The court’s interpretation undermines the procedural safeguards established under the Information Technology Act, 2000, and the Blocking Rules of 2009. By disregarding the requirement to provide notice to users and convey reasons for blocking, the judgment enables the state to restrict free speech without proper oversight, leading to potential abuse of power.
- Unchecked State Power: The judgment grants the state unchecked power in taking down content without following established procedures. This lack of oversight raises concerns about potential misuse and arbitrary blocking of content, which could lead to the suppression of dissenting voices and curtailment of free speech rights.
- Expansion of Grounds for Restricting Speech: The court’s reliance on combating “fake news” and “misinformation” as grounds for blocking content goes beyond the permissible restrictions on free speech under Article 19(2) of the Constitution. This expansion of grounds for blocking content raises concerns about subjective interpretations and the potential for suppressing diverse viewpoints and dissent.
- Chilling Effect on Free Speech: The acceptance of wholesale blocking of Twitter accounts without specific justification creates a chilling effect on free speech. This can deter individuals from expressing their opinions openly and engaging in meaningful discussions, ultimately inhibiting democratic discourse and stifling freedom of expression.
- Deviation from Judicial Precedent: The judgment deviates from the precedent set by the Supreme Court in the Shreya Singhal case, which upheld the constitutionality of Section 69A while emphasizing the importance of procedural safeguards.
The Shreya Singhal case as a precedent
- The Shreya Singhal case is a landmark judgment by the Supreme Court of India that has significant implications for freedom of speech and expression online.
- In this case, the Supreme Court struck down Section 66A of the Information Technology Act, 2000, as unconstitutional on grounds of violating the right to freedom of speech and expression guaranteed under Article 19(1)(a) of the Constitution.
- The judgment in the Shreya Singhal case is significant in the context of freedom of speech and expression because it reinforces several principles:
- Overbreadth and Vagueness: The court emphasized that vague and overly broad provisions that can be interpreted subjectively may lead to a chilling effect on free speech. Section 66A, which allowed for the punishment of online speech that caused annoyance, inconvenience, or insult, was considered vague and prone to misuse, leading to the restriction of legitimate expression.
- Requirement of Procedural Safeguards: The Supreme Court highlighted the importance of procedural safeguards to protect freedom of speech. It stated that any restriction on speech must be based on clear and defined grounds and must be accompanied by adequate procedural safeguards, including the provision of notice to the affected party and the opportunity to be heard.
- Need for a Direct Nexus to Public Order: The judgment reiterated that restrictions on speech should be based on specific grounds outlined in Article 19(2) of the Constitution. It emphasized that there must be a direct nexus between the speech and the threat to public order, and mere annoyance or inconvenience should not be a ground for restriction.
Impact of the judgment on freedom of speech and expression
- Undermining Freedom of Speech: The judgment undermines freedom of speech and expression by allowing the state to exercise unchecked power in taking down content without following established procedures. This grants the state the ability to curtail speech and expression without proper justification or recourse for affected parties.
- Prior Restraint: The judgment’s acceptance of wholesale blocking of Twitter accounts, without targeting specific tweets, amounts to prior restraint on freedom of speech. This restricts future speech and expression, contrary to the principles established by the Supreme Court.
- Lack of Procedural Safeguards: The judgment disregards procedural safeguards established in previous court rulings, such as the requirement for recording a reasoned order and providing notice to affected parties. This lack of procedural safeguards undermines transparency, accountability, and the protection of freedom of speech and expression.
- Unchecked State Power: Granting the state unfettered power in content takedowns without proper oversight or recourse raises concerns about abuse and arbitrary censorship. It allows the state to remove content without clear justifications, potentially stifling dissenting voices and limiting the diversity of opinions.
- Restricting Online Discourse: By restricting the ability of users and intermediaries to challenge content takedowns, the judgment curtails the online discourse and hampers the democratic values of open discussion and exchange of ideas on digital platforms.
- Disproportionate Impact on Digital Rights: The judgment’s disregard for procedural safeguards and expanded grounds for content takedowns disproportionately affect digital rights. It impedes individuals’ ability to freely express themselves online, limiting their participation in public discourse and impacting the vibrancy of the digital space.
Way forward
- Strengthen Procedural Safeguards: It is essential to reinforce procedural safeguards in the process of blocking content. Clear guidelines should be established, including the provision of notice to affected users and conveying reasons for blocking. This ensures transparency, accountability, and the opportunity for affected parties to challenge the blocking orders.
- Uphold Judicial Precedents: It is crucial to adhere to established judicial precedents, such as the principles outlined in the Shreya Singhal case. Courts should interpret laws relating to freedom of speech and expression in a manner consistent with constitutional values, protecting individual rights and ensuring a robust and inclusive public discourse.
- Review and Amend Legislation: There may be a need to review and amend relevant legislation, such as Section 69A of the Information Technology Act, to address the concerns raised by the judgment. The legislation should clearly define the grounds for blocking content and ensure that restrictions are based on constitutionally permissible grounds, protecting freedom of speech while addressing legitimate concerns.
- Promote Digital Literacy: Enhancing digital literacy among citizens can empower individuals to navigate online platforms responsibly, critically evaluate information, and exercise their freedom of speech effectively. Educational initiatives can focus on teaching digital literacy skills, media literacy, and responsible online behavior.
- Encourage Public Discourse and Open Dialogue: It is important to foster an environment that encourages open discourse and dialogue on matters of public interest. Platforms for discussion and debate should be facilitated, providing individuals with opportunities to express their opinions, share diverse perspectives, and engage in constructive conversations.
- International Collaboration: Collaboration with international stakeholders and organizations can contribute to promoting and protecting freedom of speech and expression in the digital realm. Sharing best practices, lessons learned, and cooperating on global norms and standards can strengthen the protection of these rights across borders.
Conclusion
- The Karnataka High Court’s judgment undermines procedural safeguards, erodes the principles of natural justice, and grants unchecked power to the state in removing content it deems unfavorable. This ruling, coupled with the recently amended IT Rules on fact-checking, endangers free speech and digital rights. It is crucial to protect and uphold the right to free speech while ensuring that restrictions are justified within the confines of the Constitution.
Get an IAS/IPS ranker as your 1: 1 personal mentor for UPSC 2024
Social Media: Prospect and Challenges
Deepfakes: A Double-Edged Sword in the Digital Age
Central Idea
- Deepfakes, produced through advanced deep learning techniques, manipulate media by presenting false information. These creations distort reality, blurring the lines between fact and fiction, and pose significant challenges to society. While deepfakes have emerged as an “upgrade” from traditional photoshopping, their potential for deception and manipulation must not be underestimated.
What is meant by Deepfakes?
- Deepfakes refer to synthetic media or manipulated content created using deep learning algorithms, specifically generative adversarial networks (GANs).
- Deepfakes involve altering or replacing the appearance or voice of a person in a video, audio clip, or image to make it seem like they are saying or doing something they never actually did. The term “deepfake” is a combination of “deep learning” and “fake”.
- Deepfake technology utilizes AI techniques to analyze and learn from large datasets of real audio and video footage of a person.
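The adversarial loop behind GANs can be sketched in miniature. The toy below is an illustration only, not a production deepfake system: it uses 1-D numbers in place of images and linear models in place of deep networks, but the alternating objective is the same — a discriminator learns to tell real samples from generated ones, while a generator learns to fool it.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from a Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25

def real_batch(n):
    return [random.gauss(REAL_MEAN, REAL_STD) for _ in range(n)]

# Generator G(z) = a*z + b (a stand-in for a deep network).
# Discriminator D(x) = sigmoid(w*x + c), scoring "probability x is real".
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [a * zi + b for zi in z]
    real = real_batch(batch)

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    gw = sum((1 - sigmoid(w * x + c)) * x for x in real) / batch \
       + sum(-sigmoid(w * x + c) * x for x in fake) / batch
    gc = sum(1 - sigmoid(w * x + c) for x in real) / batch \
       + sum(-sigmoid(w * x + c) for x in fake) / batch
    w += lr * gw
    c += lr * gc

    # Generator ascent on log D(fake): learn to fool the discriminator.
    ga = sum((1 - sigmoid(w * (a * zi + b) + c)) * w * zi for zi in z) / batch
    gb = sum((1 - sigmoid(w * (a * zi + b) + c)) * w for zi in z) / batch
    a += lr * ga
    b += lr * gb

print(f"generated distribution: mean~{b:.2f}, std~{abs(a):.2f} (target 4.00, 1.25)")
```

After training, the generated mean drifts toward the real data's mean: neither network is told what the real distribution is, yet the competition alone pushes the generator toward producing convincing samples — the same dynamic that, scaled up to faces and voices, produces deepfakes.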
The Power of Deepfakes
- Manipulate Media: Deepfakes can convincingly alter images, videos, and audio, allowing for the creation of highly realistic and deceptive content.
- Blur Reality: Deepfakes can distort reality and create false narratives, blurring the lines between fact and fiction.
- Transcend Human Skill: Deepfakes go beyond traditional methods of manipulation like photoshopping, utilizing advanced deep learning algorithms to process large amounts of data and generate realistic falsified media.
- Produce Real-Time Content: Deepfakes can be generated in real-time, enabling the rapid creation and dissemination of manipulated content.
- Reduce Imperfections: Compared to traditional manipulation techniques, deepfakes exhibit fewer imperfections, making them more difficult to detect and debunk.
- Spread Misinformation: Deepfakes have the potential to spread misinformation on a large scale, influencing public opinion and creating confusion.
- Exploit Facial Recognition: Deepfakes can be used to manipulate facial recognition software, potentially bypassing security measures and compromising privacy.
- Create Illicit Content: Deepfakes have been misused to generate non-consensual pornography (“revenge porn”) by superimposing someone’s face onto explicit material without their consent.
- Influence Elections: Deepfakes can be employed to create videos that depict political figures engaging in inappropriate behavior, potentially swaying public opinion and impacting election outcomes.
- Persist in Digital Space: Once released, deepfakes can continue to circulate online, leaving a lasting impact even after their falsehood is exposed.
Positive applications of deepfakes
- Voice Restoration: Deep learning algorithms have been employed in initiatives like the ALS Association’s “voice cloning initiative.” These efforts aim to restore the voices of individuals affected by conditions such as amyotrophic lateral sclerosis, providing a means for them to communicate and regain their voice.
- Entertainment and Creativity: Deepfakes have found applications in comedy, cinema, music, and gaming, enabling the recreation and reinterpretation of historical figures and events. Through deep learning techniques, experts have recreated the voices and/or visuals of renowned individuals
- Visual Effects and Film Industry: Deepfakes have been utilized in the film industry to create realistic visual effects, allowing filmmakers to bring fictional characters to life or seamlessly integrate actors into different environments.
- Historical and Cultural Preservation: Deepfakes can aid in preserving and understanding history by recreating historical figures or events. By using deep learning algorithms, experts can breathe life into archival footage or photographs, enabling a deeper understanding of the past and enhancing cultural preservation efforts.
- Augmented Reality and Gaming: Deep learning techniques are employed to create immersive augmented reality experiences and enhance gaming graphics. By generating realistic visuals and interactions, deepfakes contribute to the advancement of these technologies, providing users with captivating and engaging virtual experiences.
- Medical Training and Simulation: Deepfakes can be used in medical training and simulation scenarios to create lifelike virtual patients or simulate medical procedures. This allows healthcare professionals to gain valuable experience and enhance their skills in a controlled and safe environment.
The path to redemption regarding deepfakes
- Regulatory Framework: Implementing comprehensive laws and regulations is necessary to govern the creation, distribution, and use of deepfakes. These regulations should address issues such as consent, privacy rights, intellectual property, and the consequences for malicious actors.
- Punishing Malicious Actors: Establishing legal consequences for those who create and disseminate deepfakes with malicious intent is essential. This deterrence can discourage the misuse of this technology and protect individuals from the harmful effects of false and manipulated media.
- Democratic Inputs: Including democratic input in shaping the future of deepfake technology is crucial. Involving diverse stakeholders, including experts, policymakers, and the public, can help establish guidelines, ethical frameworks, and standards that reflect societal values and interests.
- Digital Literacy and Education: Promoting scientific, digital, and media literacy is essential for individuals to navigate the deepfake landscape effectively. By equipping people with the critical thinking skills necessary to identify and analyze manipulated media, they can become empowered consumers and contributors to a more informed society.
- Responsible Technology Development: Technology companies must prioritize ethical considerations and societal implications when developing and deploying deepfake-related technologies. Instead of solely focusing on what can be done, they should also question what should be done, ensuring that deepfake technologies are aligned with ethical guidelines and serve the collective good.
- International Collaboration: Encouraging international cooperation and collaboration can foster a unified approach to tackling the challenges posed by deepfakes. This can involve sharing best practices, establishing common standards, and creating platforms for knowledge exchange and coordination.
- Fundamental Moral Rights: Recognizing the fundamental moral right to protect against the manipulation of hyper-realistic digital representations of individuals’ image and voice is crucial. Upholding and safeguarding these rights can provide a foundation for addressing the ethical implications of deepfakes and ensuring respect for individual autonomy and dignity.
- Ethical AI Practices: Applying ethical principles to the development and deployment of artificial intelligence, including deepfake technologies, is essential. Companies should prioritize responsible AI practices, including transparency, accountability, fairness, and inclusivity, to mitigate the potential harm caused by deepfakes.
Individual responsibility in addressing the challenges posed by deepfakes
- Media Literacy: Developing media literacy skills is vital in today’s digital landscape. Individuals should educate themselves about the existence of deepfakes, understand how they are created, and learn to critically evaluate media content. This includes questioning the authenticity and sources of information before accepting it as true.
- Critical Thinking: Cultivating critical thinking skills enables individuals to analyze information objectively and discern between genuine and manipulated content. By questioning the credibility, context, and motives behind media content, individuals can better protect themselves from falling victim to deepfake manipulation.
- Responsible Sharing: Individuals should exercise caution when sharing content online. Before disseminating media, it is important to verify its authenticity and consider the potential consequences of sharing potentially misleading or harmful information. Being mindful of the impact one’s actions can have on others is crucial.
- Fact-Checking: Fact-checking sources and using reliable news outlets can help individuals verify the accuracy of information before accepting or sharing it. Consulting reputable sources, checking multiple perspectives, and utilizing fact-checking organizations can contribute to a more informed understanding of the content being consumed.
- Reporting Misinformation: If individuals encounter deepfake content or suspect its presence, reporting it to the relevant authorities, platforms, or organizations can help combat its spread. Promptly notifying the appropriate channels can contribute to the identification and removal of harmful deepfake content.
- Advocacy and Awareness: Individuals can actively participate in raising awareness about the dangers of deepfakes by engaging in discussions, sharing educational resources, and advocating for responsible use of technology. By spreading awareness and promoting media literacy, individuals can contribute to a more informed and vigilant society.
- Ethical Considerations: Considering the ethical implications of deepfakes and actively choosing not to engage in their creation or dissemination can contribute to responsible technology use. Upholding ethical values, such as respecting privacy, consent, and the well-being of others, helps maintain integrity in the digital space.
Facts for prelims
What are catfish accounts?
- Catfish accounts are fake social media profiles that use someone else’s photos and personal details to construct a false identity, typically to deceive, manipulate, or defraud other users.
Conclusion
- Deepfakes present a paradoxical challenge in our modern age, wielding immense power alongside significant risks. While laws and regulations are necessary to mitigate their negative consequences, fostering public awareness and digital literacy is equally important. By collectively addressing the ethical, legal, and technological aspects of deepfakes, we can navigate this powerful yet controversial technology, ensuring it serves the betterment of society while safeguarding our moral rights and democratic values.
The Need for Fact-Checking Units to Combat Fake News
From UPSC perspective, the following things are important :
Prelims level: IT rules, 2021 and other such provisions
Mains level: Menace of fake news and deepfakes; the government's efforts on fact-checking units and the criticism associated with them
Central Idea
- The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 aim to tackle the dissemination of false or misleading information through the introduction of fact-checking units. In light of the detrimental impact of fake news, particularly during the Covid-19 crisis, governments worldwide have recognized the urgency to combat this menace. India, in particular, has experienced a surge in fake news related to the pandemic, making it crucial for the government to proactively address the issue.
What is meant by Fake News?
- Fake news refers to intentionally fabricated or misleading information presented as if it were real news. It can be spread through traditional media sources like newspapers or television, but it is more commonly associated with social media platforms and other online sources.
- Fake news can range from completely made-up stories to misleading headlines and selectively edited or out-of-context information designed to deceive readers.
- It is often used for political purposes, to manipulate public opinion or to spread misinformation about individuals, organizations or events.
- Scholars at the Massachusetts Institute of Technology even found that falsified content spreads six times faster than factual content on online platforms.
The Menace of Fake News
- Dissemination of misinformation: Fake news spreads false or misleading information, leading to a distortion of facts and events. This can misguide individuals and the public, leading to incorrect beliefs and actions.
- Erosion of trust: Fake news undermines trust in media organizations, journalism, and sources of information. When people encounter fake news repeatedly, it becomes challenging to distinguish between reliable and unreliable sources, eroding trust in the media landscape.
- Manipulation of public opinion: Fake news is often created with the intent to manipulate public sentiment and shape public opinion on specific issues, individuals, or events. This manipulation can have far-reaching effects on public discourse and decision-making processes.
- Polarization and division: Fake news can contribute to the polarization of society by promoting extreme viewpoints, fostering animosity, and deepening existing divisions. It can exacerbate social, political, and cultural conflicts.
- Personal and reputational harm: Individuals, public figures, and organizations can suffer reputational damage due to false information circulated through fake news. Innocent people may be targeted, leading to personal, professional, and social repercussions.
- Public safety concerns: Fake news related to public safety issues, such as health emergencies or natural disasters, can spread panic, hinder effective response efforts, and jeopardize public safety. It can impede the dissemination of accurate information and guidance.
What is meant by Deepfakes?
- Deepfakes refer to synthetic media or manipulated content created using deep learning algorithms, specifically generative adversarial networks (GANs).
- Deepfakes involve altering or replacing the appearance or voice of a person in a video, audio clip, or image to make it seem like they are saying or doing something they never actually did. The term “deepfake” is a combination of “deep learning” and “fake”.
- Deepfake technology utilizes AI techniques to analyze and learn from large datasets of real audio and video footage of a person.
The Rise of Deepfakes
- Advanced manipulation technology: Deepfakes leverage deep learning algorithms and artificial intelligence to convincingly alter or generate realistic audio, video, or images. This technology enables the creation of highly sophisticated and deceptive content.
- Spreading disinformation: Deepfakes can be used as a tool to spread disinformation by creating fabricated videos or audio clips that appear genuine. Such manipulated content can be shared on social media platforms, leading to the viral spread of false information.
- Political implications: Deepfakes have the potential to disrupt political landscapes by spreading misinformation about politicians, political events, or election campaigns. Fabricated videos of political figures making false statements can influence public opinion and undermine trust in democratic processes.
- Amplifying fake news: Deepfakes can amplify the impact of fake news by adding a visual or audio component, making false information appear more credible. Combining deepfakes with misleading narratives can significantly enhance the persuasive power of fabricated content.
- Challenges for content verification: The emergence of deepfakes presents challenges for content verification and authentication. The increasing sophistication of deepfake technology makes it harder to detect and debunk manipulated content, leading to a potential erosion of trust in online information sources.
- Detection and mitigation efforts: Efforts are underway to develop deepfake detection tools and techniques. Researchers, tech companies, and organizations are investing in AI-based solutions to identify and combat deepfakes, aiming to stay ahead of the evolving manipulation techniques.
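Most such detection tools frame the problem as binary classification: extract artifact-sensitive features from a clip (blending seams, unnatural blink rates, frequency-domain fingerprints) and score them with a trained model. The sketch below is a deliberately simplified stand-in with synthetic two-feature "artifact scores" (the feature names and all numbers are hypothetical); real detectors train deep networks over raw frames, but the train/score workflow is analogous.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

# Synthetic per-clip "artifact scores" (hypothetical features):
#   f1 = blending-seam score, f2 = blink-irregularity score.
# Deepfakes tend to score higher on both; genuine clips score lower.
def sample(label):
    mu = 0.7 if label == 1 else 0.3   # 1 = deepfake, 0 = genuine
    return [random.gauss(mu, 0.12), random.gauss(mu, 0.12)], label

data = [sample(i % 2) for i in range(400)]
train, test = data[:300], data[300:]

# A logistic-regression detector trained by gradient descent.
w1, w2, bias = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(200):
    g1 = g2 = gb = 0.0
    for (f1, f2), y in train:
        err = sigmoid(w1 * f1 + w2 * f2 + bias) - y  # cross-entropy gradient
        g1 += err * f1
        g2 += err * f2
        gb += err
    n = len(train)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    bias -= lr * gb / n

correct = sum(
    (sigmoid(w1 * f1 + w2 * f2 + bias) > 0.5) == (y == 1)
    for (f1, f2), y in test
)
accuracy = correct / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The catch, noted above, is adversarial drift: as generation techniques improve, the feature distributions of fakes move closer to those of genuine clips, and detectors of this kind must be continually retrained.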
Existing Provisions to Combat Fake News
- Intermediary Guidelines of 2021: The most preferred democratic process to combat the threats and impact of fake news on a polity would be through Parliament-enacted laws. India opted for the speedier alternative of an addition to the Intermediary Guidelines of 2021 (as amended), through Rule 3(1)(b)(v).
- Cannot disseminate misleading content: Under this rule, intermediaries, including social media platforms, must ensure that users do not disseminate content that deceives or misleads about its origin, or that knowingly and intentionally communicates information which is patently false or misleading but may reasonably be perceived as a fact.
Facts for prelims
Digital India Act, 2023
- The Digital India Act, 2023 is a proposed legislation intended to replace the Information Technology Act, 2000, aiming to address online safety, user rights, and the regulation of emerging technologies such as AI.
Importance of Fact-Checking Units
- Ensuring accuracy: Fact-checking units play a crucial role in verifying the accuracy of information circulating in the media and online platforms. They employ rigorous research and investigation techniques to assess the credibility and truthfulness of claims, helping to distinguish between reliable information and misinformation.
- Countering fake news: Fact-checking units are instrumental in combating the spread of fake news and misinformation. By systematically debunking false claims, identifying misleading narratives, and providing accurate information, they help to minimize the impact of false information on public perception and decision-making.
- Promoting media literacy: Fact-checking units contribute to promoting media literacy and critical thinking skills among the general public. Their work serves as a valuable resource for individuals seeking accurate information, encouraging them to question and verify claims rather than relying solely on unsubstantiated sources.
- Enhancing transparency: Fact-checking units operate with transparency, providing detailed explanations and evidence-based assessments of their findings. This transparency helps to build trust with the audience, fostering credibility and accountability in the information ecosystem.
- Holding accountable those spreading misinformation: Fact-checking units contribute to holding accountable those who deliberately spread misinformation or engage in disinformation campaigns. By publicly exposing false claims and identifying the sources of misinformation, they discourage the dissemination of false information and promote ethical standards in media and public discourse.
Conclusion
- With over 800 million Indian citizens online, the challenge of combating false information cannot be overstated. The Indian government’s initiative to introduce fact-checking units reflects an understanding of the urgent need to tackle the spread of fake news. Jonathan Swift’s timeless quote, “Falsehood flies, and the truth comes limping after,” captures the essence of the problem we face today.
Unraveling Social Fabric: The Impact of Social Media on Public Discourse
From UPSC perspective, the following things are important :
Prelims level: NA
Mains level: Impact of social media on public discourse and adaptability and Solutions
Central Idea
- The recent wave of violence in Manipur serves as another grim reminder of the deterioration of our social fabric. The Finance Minister’s recent expression of sorrow over the lack of personal regard among politicians despite ideological differences resonates with many of us. We reminisce about a time when meaningful conversations and differing opinions could coexist without animosity. However, in today’s landscape, we find ourselves drifting apart from those with whom we disagree and nurturing a deep aversion towards them.
The phenomenon of polarization
- Ideological Divisions: Polarization refers to the growing ideological divisions within societies. It is characterized by an increasing separation of people into distinct ideological camps, often with extreme views and a lack of willingness to engage with opposing perspectives.
- Us vs. Them Mentality: Polarization fosters an us vs. them mentality, where individuals identify strongly with their own group and view those outside their group as adversaries. This mentality fuels hostility, animosity, and a deep sense of distrust towards those who hold different beliefs or opinions.
- Echo Chambers: Polarization is exacerbated by the prevalence of echo chambers, which are created by social media and other platforms. Echo chambers are virtual spaces where like-minded individuals reinforce each other’s beliefs and shield themselves from differing viewpoints. This reinforces preexisting biases and prevents exposure to alternative perspectives.
- Confirmation Bias: Polarization is fueled by confirmation bias, whereby individuals seek out information that confirms their existing beliefs and dismiss or ignore contradictory evidence. This selective exposure to information further entrenches people in their ideological positions and prevents the formation of nuanced opinions.
- Emotionalization of Issues: Polarization often leads to the emotionalization of issues, where discussions become heated and personal. Emotions such as anger, fear, and resentment drive the discourse, making it difficult to engage in rational and constructive conversations.
- Loss of Civil Discourse: Polarization erodes civil discourse and respectful disagreement. Rather than engaging in meaningful dialogue, individuals tend to resort to personal attacks, demonization, and dehumanization of those with differing views. This breakdown of civility undermines the foundations of a healthy democratic society.
- Political Gridlock: Polarization can result in political gridlock, where the inability to find common ground hinders policy-making and governance. As political parties become more polarized, finding compromises and reaching consensus becomes increasingly challenging, leading to a stalemate in decision-making processes.
- Social Fragmentation: Polarization contributes to social fragmentation, dividing communities and societies along ideological lines. It undermines social cohesion, trust, and cooperation, making it harder to address common challenges and work towards collective goals.
- Threat to Democracy: Polarization poses a significant threat to democratic processes. It undermines the principles of compromise, inclusivity, and consensus-building that are essential for a functioning democracy. When polarization intensifies, it can lead to social unrest, political instability, and a breakdown of democratic institutions.
- Implications for Social Well-being: Polarization has negative consequences for societal well-being. It can contribute to heightened levels of stress, anxiety, and social isolation. It impedes constructive problem-solving, stifles innovation, and hampers social progress.
Impact of Social Media
- Positive Impact:
- Connectivity and Communication: Social media platforms have revolutionized communication, allowing individuals to connect and stay in touch with friends, family, and communities across geographical boundaries.
- Information Sharing: Social media provides a platform for the rapid dissemination of information, enabling users to access news, updates, and educational content from various sources.
- Amplification of Voices: Social media empowers marginalized individuals and communities by providing them with a platform to share their stories, experiences, and perspectives, thereby amplifying their voices and fostering inclusivity.
- Business and Entrepreneurship Opportunities: Social media platforms offer businesses and entrepreneurs the ability to reach a global audience, market their products or services, and build brand awareness at a relatively low cost.
- Awareness and Activism: Social media plays a crucial role in raising awareness about social and environmental issues, mobilizing communities, and facilitating social and political activism.
- Negative Impact:
- Spread of Misinformation: Social media platforms are susceptible to the rapid spread of misinformation, fake news, and rumors, which can lead to confusion, polarization, and manipulation of public opinion.
- Cyberbullying and Online Harassment: Social media platforms have provided a platform for cyberbullying, hate speech, and online harassment, causing emotional distress and harm to individuals, especially young people.
- Privacy and Data Security Concerns: Social media platforms collect and store vast amounts of user data, raising concerns about privacy breaches, data misuse, and unauthorized access to personal information.
- Impact on Mental Health: Excessive use of social media has been linked to increased feelings of anxiety, depression, loneliness, and low self-esteem, as individuals compare themselves to others and seek validation through online interactions.
- Erosion of Civil Discourse: The anonymity and distance provided by social media can lead to the erosion of civil discourse, with conversations turning hostile, polarized, and lacking empathy and respect for diverse opinions.
- Addiction and Time Management Issues: Social media addiction can disrupt daily routines, affect productivity, and lead to an excessive focus on virtual interactions at the expense of real-life relationships and activities.
How Social media amplifies narcissistic tendencies?
- Social media has the potential to amplify narcissistic tendencies and prioritize personal opinions over the feelings of others in several ways:
- Self-Centric Nature: Social media platforms often encourage users to present curated versions of their lives, focusing on self-presentation and self-promotion. This self-centric nature can fuel narcissistic tendencies, as individuals seek validation, attention, and admiration from their online peers.
- Selective Self-Presentation: Social media allows individuals to carefully select and highlight aspects of their lives that project a positive image. This selective self-presentation can contribute to a self-centered mindset, where individuals prioritize their own opinions and perspectives without fully considering or empathizing with the feelings and experiences of others.
- Validation through Likes and Followers: Social media platforms often employ metrics such as likes, followers, and shares as measures of popularity and social validation. This can incentivize users to prioritize personal opinions and content that garners attention and positive feedback, further reinforcing self-centered behavior and disregarding the impact on others.
- Echo Chambers and Confirmation Bias: Social media algorithms create echo chambers, where individuals are exposed to content that aligns with their existing beliefs and perspectives. This reinforces confirmation bias, leading users to seek out and engage with content that supports their own opinions.
- Disinhibition and Online Anonymity: Social media platforms often provide a sense of anonymity and detachment from real-life consequences. This can lead to disinhibition, where individuals feel freer to express their opinions without the social norms and inhibitions present in face-to-face interactions.
- Limited Non-Verbal Cues: Social media communication lacks non-verbal cues, such as facial expressions and tone of voice, which are crucial for understanding others’ emotions and maintaining empathy. The absence of these cues can make it easier for individuals to prioritize their own opinions without fully recognizing or acknowledging the impact their words may have on others.
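The echo-chamber and confirmation-bias dynamics above form a feedback loop that a toy simulation can make concrete (all numbers are hypothetical, and the model is a sketch, not how any real platform works): a recommender serves content near its running estimate of the user's leaning, the user engages only with agreeable items, and the recommender follows the clicks while narrowing its sampling. Feed diversity collapses even though neither party intends it.

```python
import random
import statistics

random.seed(2)

user_leaning = 0.2   # the user's stance on a -1..+1 opinion spectrum
estimate = 0.0       # the recommender's running belief about the user
width = 1.0          # how broadly the recommender samples content stances
diversity = []       # spread of stances in each day's feed

for day in range(60):
    # Serve 20 items with stances drawn around the current estimate.
    feed = [max(-1.0, min(1.0, random.gauss(estimate, width)))
            for _ in range(20)]
    diversity.append(statistics.pstdev(feed))

    # Confirmation bias: the user clicks only content close to their leaning.
    clicks = [s for s in feed if abs(s - user_leaning) < 0.3]
    if clicks:
        # Engagement optimization: follow the clicks, narrow the sampling.
        estimate += 0.3 * (statistics.mean(clicks) - estimate)
        width = max(0.05, width * 0.93)

print(f"feed diversity: day 1 = {diversity[0]:.2f}, day 60 = {diversity[-1]:.2f}")
```

By the end of the run the feed's spread of stances is a small fraction of what it was on day one: the user now sees almost nothing they disagree with, which is the echo chamber described above emerging from ordinary engagement incentives.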
Way forward: A Citizen-Led Solution
- Critical Media Consumption: Develop critical media literacy skills to discern reliable information from misinformation or fake news. Be vigilant about verifying information before sharing it and actively seek out diverse perspectives to avoid falling into echo chambers.
- Mindful Social Media Usage: Be mindful of your social media usage and the impact it has on your well-being. Set boundaries, allocate specific times for social media engagement, and prioritize real-life interactions and relationships over virtual ones.
- Responsible Sharing: Before sharing content on social media, consider the accuracy, credibility, and potential impact of the information. Share content responsibly, ensuring that it contributes positively to public discourse and avoids the spread of misinformation or hate speech.
- Promote Civil Discourse: Engage in respectful and constructive discussions online. Foster empathy and understanding, even when encountering differing opinions. Be open to listening and learning from others, while maintaining a respectful tone.
- Support Digital Literacy Initiatives: Advocate for and support initiatives that promote digital literacy and critical thinking skills. Encourage educational institutions, policymakers, and community organizations to prioritize digital literacy programs that equip individuals with the skills needed to navigate the digital landscape responsibly.
- Advocate for Responsible Platform Practices: Encourage social media platforms to prioritize responsible content moderation practices, transparency, and user privacy. Support efforts that combat hate speech, misinformation, and cyberbullying on these platforms.
- Engage in Positive Online Activism: Use social media as a platform for positive activism and constructive dialogue. Support causes, campaigns, and initiatives that promote inclusivity, tolerance, and social justice. Share stories and content that uplift and inspire others.
- Foster Digital Empathy: Cultivate empathy in online interactions by considering the perspectives and feelings of others. Treat online interactions as you would face-to-face conversations, with respect, kindness, and consideration for others’ emotions.
- Promote Offline Connections: Encourage offline interactions and relationships. Invest time in meaningful face-to-face conversations, community engagement, and real-world connections. Strengthening offline relationships can help balance and reduce dependence on social media.
- Advocate for Ethical Tech Practices: Support efforts to regulate and hold social media companies accountable for their practices. Advocate for ethical tech practices, user privacy protection, and responsible use of user data.
Conclusion
- The impact of social media on public discourse and the unraveling of our social fabric cannot be overstated. It is imperative that individuals take responsibility and break free from the addictive allure of social media platforms. By prioritizing genuine human connections, engaging with diverse perspectives, and rebuilding our social bonds, we can mitigate the threats posed by social media and restore a healthier, more respectful public discourse.
IT Rules Amendments: Government the Sole Arbiter of Truth
From UPSC perspective, the following things are important :
Prelims level: IT Rules
Mains level: Fake news, IT Rules amendments and issues
Central Idea
- The Ministry of Electronics and Information Technology (MeitY) has assumed powers to determine what constitutes fake, false, or misleading internet content about any business of the Central Government, a move critics liken to the concept of Newspeak in George Orwell’s novel 1984. While the government claims these changes serve an Open, Safe & Trusted and Accountable Internet, that claim is questionable, given their impact on natural justice, transparency, and trust in government.
What is meant by fake news?
- Fake news refers to intentionally fabricated or misleading information presented as if it were real news. It can be spread through traditional media sources like newspapers or television, but it is more commonly associated with social media platforms and other online sources.
- Fake news can range from completely made-up stories to misleading headlines and selectively edited or out-of-context information designed to deceive readers.
- It is often used for political purposes, to manipulate public opinion or to spread misinformation about individuals, organizations or events.
What makes Government’s claim questionable?
1. No safeguards for natural justice
- Against the principle of natural justice: The IT Amendment Rules, 2023, contain powers that allow the government to act as a judge in its own case. This goes against the principles of natural justice, where a transparent process with a fair chance of hearing and a legal order is essential.
- Government censorship: The absence of such safeguards in the IT Rules could result in government censorship, where press releases and tweets by the government may rally citizens to its cause without providing legal reasoning or the remedy of a legal challenge.
2. Government censorship in the name of safety
- Swift take-down of the content: With the new powers, the determination of fake or false or misleading information by a fact-checking unit of the Central Government will result in a swift take-down of the content, making it inaccessible not only on social media but also on the news portal’s website.
- Prevents critical understanding: This will prevent readers from developing a critical understanding of facts, which is a natural outcome of a democratic system. Thus, the IT Rules undermine the administration of justice and assume that the executive alone knows what is best for the citizen.
3. Lack of details on fact-checking body composition
- Lack of details and autonomy of the fact checking body: For a trusted internet, the fact-checking body’s composition and design of regulatory institutions are important. When these bodies are not insulated or formed with financial and functional autonomy, they become subservient to government and political interests. This undermines the basis of trust in government built through scrutiny.
- Government the sole arbiter of truth: The present system makes the Union Government the sole arbiter of truth, leaving citizens with little choice but to trust the government.
The basis of accountability
- Accountability requires remedial actions that are neither an artificial measure of placation nor a disproportionate or aggressive penalty.
- The IT Rules target institutions that work towards accountability, making it difficult to achieve its purpose.
- The mission of journalists is to report facts and speak truth to power, and the slogan Open, Safe & Trusted and Accountable Internet means little in a Digital India, where Newspeak-like rules prevent the free exchange of information.
Conclusion
- The IT Rules of 2023, reminiscent of Orwell’s Newspeak, could lead to government censorship, denial of natural justice, and erosion of trust in government. The government needs to ensure transparency, impartiality, and accountability in the design of regulatory institutions to build trust among citizens. Instead of relying on a fact-checking unit of the Central Government, it is essential to establish independent regulatory bodies with financial and functional autonomy to promote a truly open, safe, and trusted internet.
Fake News: Addition of The Provision In Intermediary Guidelines
From UPSC perspective, the following things are important :
Prelims level: NA
Mains level: Dangers of Fake news and IT rules, 2021
Central Idea
- The addition of the fake news provision in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Intermediary Guidelines) must be seen in the context of protecting fundamental rights while combatting fake news. The recent addition by the central government clearly militates against settled law and the Constitution.
What is meant by fake news?
- Fake news refers to intentionally fabricated or misleading information presented as if it were real news. It can be spread through traditional media sources like newspapers or television, but it is more commonly associated with social media platforms and other online sources.
- Fake news can range from completely made-up stories to misleading headlines and selectively edited or out-of-context information designed to deceive readers.
- It is often used for political purposes, to manipulate public opinion or to spread misinformation about individuals, organizations or events.
Existing Provisions to Combat Fake News
- Intermediary Guidelines of 2021: The most preferred democratic process to combat the threats and impact of fake news on a polity would be through Parliament-enacted laws. India opted for the speedier alternative of an addition to the Intermediary Guidelines of 2021 (as amended), through Rule 3(1)(v).
- Cannot disseminate misleading content: Under this rule, intermediaries, including social media platforms, have to ensure that users do not disseminate content that deceives or misleads about its origin, or that knowingly and intentionally communicates information which is patently false or misleading in nature but may reasonably be perceived as fact.
Remedies Available
- Complaints and grievances: Any complaint from users, the government, or a court has to be actioned by the grievance officer of an intermediary, including social media platforms, within 15 days. For complaints of false or misleading news, this timeframe is reduced to 72 hours.
- Resolution: The next step for resolution is provided through the Grievance Appellate Committees, which the government recently announced appointments for.
- Other actions: These remedies are independent of and in addition to the remedies available in law for a government agency to seek takedowns or blocking, as per due process or for courts to decide thereon.
Critique of the Addition
- Provisions already exist: The recent addition of a separate category restraining users from disseminating content in respect of any business of the Central Government is unwarranted, as provisions already exist. The restraint is on users and not on intermediaries, as many have misconceived; the onus on intermediaries is only one of reasonable effort.
- No transparency: With merely a central government-authorised fact check unit saying so, content could be classified as fake, false or misleading and a takedown and action necessitated, without even a semblance of due process.
- No legitimacy: In the present instance, there is an absolute absence of legitimate aim for this additional restriction on users and an abject lack of procedures that would assure due process.
Reaffirming the Need for Legitimacy
- The recent addition clearly militates against settled law and the Constitution: The Supreme Court in the Puttaswamy judgment reaffirmed the need for legitimacy, supported by Parliament-enacted laws that are proportionate, to meet the test of constitutionality.
- Media One case: The Supreme Court’s recent judgment in the Media One case (Madhyamam Broadcasting Limited v. Union of India, April 5, 2023) reiterates that any law or regulation inconsistent with fundamental rights is void. The judgment also reaffirms the principles that decide the constitutionality of a law or regulation: (i) unreasonableness or irrationality; (ii) illegality; and (iii) procedural impropriety.
Some of the dangers of fake news
- Inciting communal violence: In India, fake news has been known to incite communal violence. For instance, the spread of fake news on social media was one of the factors behind the Muzaffarnagar riots of 2013.
- Undermining public trust: Fake news can undermine public trust in institutions and the media. This can have serious consequences for democracy and social cohesion.
- Impact on health: Fake news about health issues can have serious consequences. For example, during the COVID-19 pandemic, fake news about remedies and cures for the disease led to people consuming dangerous substances.
- Misinformation during elections: Fake news can also be used to spread misinformation during elections, which can influence voters and distort the democratic process.
- Economic harm: Fake news can cause economic harm by spreading false information about businesses, leading to loss of investor confidence and financial losses.
- Spreading rumors: Fake news can also be used to spread rumors about individuals, which can have serious consequences, such as the recent case of fake news leading to the lynching of two men in Assam.
Conclusion
- The recent addition is unsustainable and unwarranted as provisions already exist. The fight should be for the protection of fundamental rights that are essential to our very existence.
OTT Governance: Measures To Enhance Transparency
From UPSC perspective, the following things are important :
Prelims level: Digital Media Ethics Code
Mains level: OTT governance In India, concerns and measures
Central Idea
- It has been two years since the government issued the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, through which the Ministry of Information and Broadcasting (I&B) was given the task of regulating content on OTT and online platforms. India’s approach can be termed a light-touch co-regulation model: self-regulation at the industry level, with a final oversight mechanism at the Ministry level.
What are OTT Media?
- An over-the-top (OTT) media service is a streaming media service offered directly to viewers via the Internet.
- OTT bypasses cable, broadcast, and satellite television platforms, the companies that traditionally act as a controller or distributor of such content.
- The term is most synonymous with subscription-based video-on-demand (SVoD) services that offer access to film and television content.
- They are typically accessed via websites on personal computers, as well as via apps on mobile devices (such as smartphones and tablets), digital media players, or televisions with integrated Smart TV platforms.
Digital Media Ethics Code Relating to Digital Media and OTT Platforms
- This Code of Ethics prescribes the guidelines to be followed by OTT platforms and online news and digital media entities.
- Self-Classification of Content: Platforms must self-classify content into five age-based categories and implement parental locks and age verification mechanisms.
- Norms for news: Publishers of news on digital media would be required to observe Norms of Journalistic Conduct of the Press Council of India and the Programme Code under the Cable Television Networks Regulation Act.
- Self-regulation by the Publisher: The publisher shall appoint a Grievance Redressal Officer based in India who shall be responsible for the redressal of grievances received by it. The officer shall take a decision on every grievance received within 15 days.
- Self-Regulatory Body: Publishers can have a self-regulatory body headed by a retired judge or eminent person with up to six members. The body must register with the Ministry of Information and Broadcasting, monitor publisher compliance with the Code of Ethics, and address grievances not resolved by publishers within 15 days.
- Oversight Mechanism: The Ministry of Information and Broadcasting must establish an oversight mechanism and constitute an Inter-Departmental Committee to hear grievances.
Guidelines Related to Social Media
- Due Diligence to Be Followed by Intermediaries: The Rules prescribe due diligence that must be followed by intermediaries, including social media intermediaries. If due diligence is not followed by an intermediary, safe-harbour provisions will not apply to it.
- Grievance Redressal Mechanism: The Rules seek to empower users by mandating that intermediaries, including social media intermediaries, establish a grievance redressal mechanism for receiving and resolving complaints from users or victims.
- Ensuring Online Safety and Dignity of Users, Especially Women Users: Intermediaries shall remove or disable access, within 24 hours of receipt of a complaint, to content that erodes individual privacy or dignity.
What are the concerns?
- Low compliance and limited public awareness: The OTT Rules require the display of contact details for grievance redressal mechanisms and officers, but compliance is low. Although the Rules were notified in 2021, there is little awareness about them among the general public.
- Lack of Transparency in Complaint Redressal Information: In many cases, the complaint redressal information is either not published or published in a manner that makes it difficult for a user to notice. In some cases, the details are not included as part of the OTT app interface.
The Singapore Model
- In Singapore, the Infocomm Media Development Authority is the common regulator for different media.
- Aside from instituting a statutory framework and promoting industry self-regulation, its approach to media regulation emphasises promoting media literacy through public education.
What needs to be done?
- Uniformity: There is a need for uniformity in displaying key information on obligations, timelines, and contact details for grievance redressal.
- Specified rules: The Rules should specify the manner, text, language, and frequency for the display of vital information, and mandate industry associations to run awareness campaigns in print and electronic media.
- Description in respective languages: Age ratings and content descriptors should be displayed in the respective languages of the video and shown prominently in full-screen mode for a mandatory minimum duration.
- Guidelines should be prominent in advertisements: Guidelines should ensure that film classification/rating is legible and prominent in advertisements and promos of OTT content in print and electronic media.
Measures to Enhance Transparency and Accountability in OTT Platform Governance
- Periodic Audits by Independent Body: Periodic audits should be undertaken by an independent body to check the existence and effectiveness of access controls, age verification mechanisms, and display of grievance redressal details by each OTT platform.
- Dedicated Umbrella Website: The Ministry could facilitate a dedicated umbrella website for the publication of applicable Rules, content codes, advisories, contact details for complaints/appeals, etc.
- Publish Complaint Details in public domain: Publish detailed complaint descriptions and decisions by OTT providers and self-regulatory bodies in the public domain; providers should upload this information on a dedicated website for transparency.
- IDC Membership to be Broad-Based and Representative: The Inter-Departmental Committee (IDC) comprising officer-nominees from various ministries of the Central government and domain experts should be made more broad-based and representative with security of tenure.
- Provision for Disclosure: Provision for the disclosure or publication of an apology/warning/censure on the platform or website should be incorporated in the Rules.
- Financial Penalties: Financial penalties may be imposed on erring entities.
- Common Guidelines for Content Governance: A common set of guidelines for content, classification, age ratings, violations, etc. should be evolved to govern content uniformly across platforms in the era of media convergence.
Conclusion
- India’s OTT regulatory model aims to strike a balance between self-regulation and legal backing, aligning with global trends. The government’s efforts to enhance media literacy and transparency will not only promote effective self-regulation but also empower millions of OTT consumers. These initiatives are crucial for achieving the objective of raising India’s stature at an international level and serving as a model for other nations to emulate.
Mains Question
Q. Despite the launch of the Intermediary Guidelines and Digital Media Ethics Code, there are still concerns over OTT governance. In this backdrop, discuss what can be done to improve transparency and safeguard users.
Digital Governance: Are GACs well equipped to deal with grievances?
From UPSC perspective, the following things are important :
Prelims level: GACs
Mains level: Digital governance in India
Context
- Indian digital governance recently witnessed multiple developments in its appellate mechanisms. In December 2022, Google appealed two of the most significant antitrust decisions that the Competition Commission of India (CCI) issued on the functioning of digital markets. Meanwhile, the GACs’ capacity to handle complaints needs to be increased.
Background: The Google case of anti-competitive contracts
- In October 2022, CCI found Google anti-competitive in its Android licensing contracts and app store policies in two separate orders.
- The National Company Law Appellate Tribunal (NCLAT), an authority for company law, competition law, and insolvency law matters, will hear Google’s appeals from 15-17 February.
- Simultaneously, the Ministry of Electronics and Information Technology (MeitY) recently announced the formation of three Grievance Appellate Committees to enforce the accountability of online intermediaries.
What is the grievance appellate committee (GAC)?
- Based on IT Act: The Centre established three Grievance Appellate Committees based on the recently amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021).
- Three GACs: The Centre has announced three different GACs led by the IT, Home Affairs, and Information and Broadcasting ministries.
- Composition: The committee is styled as a three-member council out of which one member will be a government officer (holding the post ex officio) while the other two members will be independent representatives.
- Complaint within 30 days: Users can file a complaint against the order of the grievance officer within 30 days.
- Online dispute resolution: The GAC is required to adopt an online dispute resolution mechanism which will make it more accessible to the users.
Importance of appellate jurisdiction
- Three pillars of regulatory framework: Regulatory frameworks stand on three pillars. These include a governing law, an empowered regulator and a fair appeals mechanism.
- An appellate mechanism is a critical part: An appellate mechanism is a critical part of this framework because it ensures an opportunity to remedy inappropriate application of governing laws. Therefore, if the framework is incapacitated, there will be an unfair application of law, which defeats the purpose of the legislation.
- Appellate bodies are essential tools for digital markets: Appellate bodies operate under a specialised mandate, which allows them to adapt their processes to the unique facets of a case. They are an essential tool for digital markets, which tend to be more complex than first meets the eye.
- For instance: Google allows Android users to bypass the Play Store and directly install apps from the internet, a practice known as sideloading. But when they do so, Google issues disclaimers about the security risks of downloads from unknown sources. The CCI’s order on Android calls such disclaimers anti-competitive because they reinforce Google’s monopoly over app distribution.
Are GACs well equipped to deal with grievances?
- Not well equipped to deal with the user grievances: The recently formed Grievance Appellate Committees do not seem equipped to deal with the barrage of user grievances linked to online intermediary services.
- For instance: In October 2022, Facebook received 703 complaints, Twitter 723 and WhatsApp 701. WhatsApp then banned 2.3 million accounts. And this does not even account for all other types of online intermediation, such as e-commerce intermediaries.
- Multiple steps to arrive at a decision, while online content spreads instantly: Online content is accessible by millions instantly, and the longer unlawful content remains accessible, the greater the harm to affected parties. Accordingly, a 30-day disposal period for appeals to the GAC has been mandated. However, any dispute resolution process involves multiple steps.
- Prolonged dispute resolution: The principles of natural justice also require the originator of the disputed content to be heard. Therefore, when they’re implicated along with intermediaries and complainants, it prolongs the dispute resolution process.
- GAC’s may struggle to substantially resolve the grievances in time: The Centre has announced three different GACs led by the IT, Home Affairs, and Information and Broadcasting ministries. However, the sheer volume of online user content suggests that GACs may struggle to substantially resolve these grievances in time.
Conclusion
- Effective appeals mechanisms form an integral part of the digital governance toolkit. India has a progressive adjudicatory system that recognises the need for specialised appellate mechanisms, but its potential requires actualisation. The appellate mechanism must be strengthened for any technology policy reforms to succeed.
Mains question
Q. Briefly explain what is the grievance appellate committee (GAC)? Are GACs well equipped to deal with grievances? Discuss
Balancing the Free Speech and Social Media Regulation
From UPSC perspective, the following things are important :
Prelims level: New developments related to regulation of social media
Mains level: social media advantages and challenges
Context
- Recently, Facebook, one of the social media giants, set up the Oversight Board, an independent body that scrutinizes its ‘content moderation’ practices.
What are the IT rules of 2021?
- Regulating social media intermediaries (SMIs): World over, governments are grappling with the issue of regulating social media intermediaries (SMIs).
- Addressing the issue of SMIs controlling free speech: Given the multitudinous nature of the problem (the centrality of SMIs in shaping public discourse, the impact of their governance on the right to freedom of speech and expression, the magnitude of information they host, and the constant technological innovations that affect their governance), it is important for governments to update their regulatory frameworks to face emergent challenges.
- Placing obligations on SMIs: In a bid to keep up with these issues, India in 2021 replaced its decade-old regulations on SMIs with the IT Rules, 2021, which were primarily aimed at placing obligations on SMIs to ensure an open, safe and trusted internet.
What are the recent amendments?
- MeitY released draft amendments in June 2022; the stated objectives of the amendments were threefold:
- Protecting constitutional rights: to ensure that the interests and constitutional rights of netizens are not contravened by big tech platforms;
- Grievance redressal: to strengthen the grievance redressal framework in the Rules;
- Protecting start-ups: to ensure that compliance with the Rules does not impact early-stage Indian start-ups.
- This translated into a set of proposed amendments that can be broadly classified into two categories.
- Additional obligation on SMI: The first category involved placing additional obligations on the SMIs to ensure better protection of user interests.
- Appellate mechanism: The second category involved the institution of an appellate mechanism for grievance redressal.
Why is social media said to be a double-edged sword?
- Moderation of content by platforms: Social media platforms regularly manage user content on their websites. They remove or prioritize content, and suspend user accounts that violate the terms and conditions of their platforms.
- Excessive power in government’s hands: In today’s online environment, however, the existing government control on online speech is unsustainable. Social media now has millions of users. Platforms have democratized public participation, and shape public discourse.
- Platforms of democratic freedom: As such, large platforms have a substantial bearing on core democratic freedoms.
- Hate speech on internet: Further, with the increasing reach of the Internet, its potential harms have also increased. There is more illegal and harmful content online today.
- Disinformation campaigns: Disinformation campaigns on social media during COVID-19 and hate speech against the Rohingya in Myanmar are recent examples.
What could be the balanced approach between free speech and regulation?
- Government orders must follow due process: Government orders to remove content must not only be necessary and proportionate, but must also comply with due process.
- Example of DSA: The recent European Union (EU) Digital Services Act (DSA) is a good reference point. The DSA regulates intermediary liability in the EU. It requires government takedown orders to be proportionate and reasoned.
- Platforms can challenge the governments order: The DSA also gives intermediaries an opportunity to challenge the government’s decision to block content and defend themselves. These processes will strongly secure free speech of online users. Most importantly, an intermediary law must devolve crucial social media content moderation decisions at the platform level.
- An idea of co-regulation: Platforms must have the responsibility to regulate content under broad government guidelines. Instituting such a co-regulatory framework will serve three functions.
- Platforms will retain reasonable autonomy over their terms of service: Co-regulation will give them the flexibility to define the evolving standards of harmful content, obviating the need for strict government mandates. This will promote free speech online, because heavy government oversight incentivizes platforms to engage in private censorship. Private censorship creates a chilling effect on user speech and, in turn, scuttles online innovation, which is the backbone of the digital economy.
- Co-regulation aligns government and platform interests: Online platforms themselves seek to promote platform speech and security so that their users have a free and safe experience. For instance, during the pandemic, platforms took varied measures to tackle disinformation. Incentivizing platforms to act as Good Samaritans will build healthy online environments.
- Outsourcing content regulation: Instituting co-regulatory mechanisms allows the state to outsource content regulation to platforms, which are better equipped to tackle modern content moderation challenges.
- Platforms must follow the due process of law: Platforms as content moderators have substantial control over the free speech rights of users. Whenever platforms remove content, or redress user grievance, their decisions must follow due process and be proportionate. They must adopt processes such as notice, hearing and reasoned orders while addressing user grievances.
- Transparency in algorithm: Platform accountability can be increased through algorithmic transparency.
Conclusion
- The GACs must be re-examined because they concentrate censorship powers in the hands of the government. A Digital India Act is expected to be the successor law to the IT Act. This is a perfect opportunity for the government to adopt a co-regulatory model of online speech regulation.
Mains Question
Q. Social media is a double-edged sword in the realm of free speech. Substantiate. Explain in detail the idea of co-regulation of social media.
The Amendments To The IT Rules, 2021
From UPSC perspective, the following things are important :
Prelims level: NA
Mains level: freedom of speech and Issues associated with regulating social media platforms
Context
- The Ministry of Electronics and IT (MeitY) has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021) on October 28. In June 2022, MeitY had put out a draft of the amendments and solicited feedback from the relevant stakeholders. The draft generated considerable discussion and comment on the regulation of social media in India.
What are the IT rules 2021?
- Regulating SMIs: World over, governments are grappling with the issue of regulating social media intermediaries (SMIs).
- Addressing the issue of SMIs controlling free speech: Given the multitudinous nature of the problem (the centrality of SMIs in shaping public discourse, the impact of their governance on the right to freedom of speech and expression, the magnitude of information they host, and the constant technological innovations that affect their governance), it is important for governments to update their regulatory frameworks to face emergent challenges.
- Placing obligations on SMIs: In a bid to keep up with these issues, India in 2021 replaced its decade-old regulations on SMIs with the IT Rules, 2021, which were primarily aimed at placing obligations on SMIs to ensure an open, safe and trusted internet.
What are the proposed amendments?
- MeitY released draft amendments in June 2022; the stated objectives of the amendments were threefold:
- Protecting constitutional rights: to ensure that the interests and constitutional rights of netizens are not contravened by big tech platforms;
- Grievance redressal: to strengthen the grievance redressal framework in the Rules;
- Protecting start-ups: to ensure that compliance with the Rules does not impact early-stage Indian start-ups.
- This translated into a set of proposed amendments that can be broadly classified into two categories.
- Additional obligation on SMI: The first category involved placing additional obligations on the SMIs to ensure better protection of user interests.
- Appellate mechanism: The second category involved the institution of an appellate mechanism for grievance redressal.
What are the additional obligations placed on social media intermediaries?
- Users need to comply with rules of platforms (intermediaries): The original IT Rules, 2021, obligated the SMIs to merely inform users of the “rules and regulations, privacy policy and user agreement” governing their platforms, along with the categories of content that users are prohibited from hosting, displaying, sharing, etc. This obligation on the SMIs has now been extended to ensuring that users are in compliance with the relevant rules of the platform.
- Prevent the prohibited content: Further, SMIs are required to “make reasonable” efforts to prevent prohibited content being hosted on its platform by the users.
- SMIs have to respect rights under the Constitution: A similar concern arises with another newly introduced obligation on SMIs to “respect all the rights accorded to the citizens under the Constitution, including in the articles 14, 19 and 21”. Given the importance of SMIs in public discourse and the implications of their actions for the fundamental rights of citizens, this horizontal application of fundamental rights is laudable.
- Remove content within 72 hours: SMIs are now obligated to remove information, or a communication link, relating to the six prohibited categories of content as and when a complaint arises, and must do so within 72 hours of the complaint being made. Given the virality with which content spreads, this is an important step to contain its spread.
- Ensuring the accessibility of services: SMIs have been obligated to “take all reasonable measures to ensure accessibility of its services to users along with reasonable expectation of due diligence, privacy and transparency”.
- Provide content in all Scheduled Languages: In this context, the amendments also mandate that the “rules and regulations, privacy policy and user agreement” of the platform be made available in all languages listed in the Eighth Schedule of the Constitution.
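As an illustration of the 72-hour takedown obligation described above, the deadline arithmetic can be sketched as a small helper. The function names and structure are hypothetical, written only to make the timeline concrete; only the 72-hour figure comes from the Rules.

```python
from datetime import datetime, timedelta

# The amended IT Rules, 2021 require an SMI to remove complained-against
# prohibited content within 72 hours of the complaint. Everything below
# except that figure is an illustrative assumption.
TAKEDOWN_WINDOW = timedelta(hours=72)

def takedown_deadline(complaint_time: datetime) -> datetime:
    """Return the latest time by which the flagged content must be removed."""
    return complaint_time + TAKEDOWN_WINDOW

def is_compliant(complaint_time: datetime, removal_time: datetime) -> bool:
    """True if the content was removed within the 72-hour window."""
    return removal_time <= takedown_deadline(complaint_time)
```

For example, a complaint made at 10:00 on 1 November must be acted on by 10:00 on 4 November.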
What is the grievance appellate committee (GAC)?
- Composition of the GAC: The government has instituted Grievance Appellate Committees (GACs). Each committee is a three-member body, of which one member is a government officer (holding the post ex officio) while the other two are independent representatives.
- Complaint within 30 days: Users can file a complaint against the order of the grievance officer within 30 days.
- Online dispute resolution: The GAC is required to adopt an online dispute resolution mechanism which will make it more accessible to the users.
What are the concerns associated with GAC?
- Confusion over the GAC and the courts: It is unclear whether this is a compulsory tier of appeal, that is, whether a user must approach the GAC before approaching the court. The confusion arises because the press notes expressly stated that the institution of the GAC would not bar the user from approaching the court directly against the order of the grievance officer; the final amendments, however, provide no such indication.
- Apprehensions about appointment by the central government: While this makes in-house grievance redressal more accountable and the appellate mechanism more accessible to users, appointments being made by the central government could lead to apprehensions of bias in content moderation.
- GAC doesn’t have enforcement power: Further, the IT Rules, 2021 do not provide any explicit power to the GAC to enforce its orders.
- Overlapping jurisdiction of courts and the appellate body: If users can approach both the courts and the GAC in parallel, it could lead to conflicting decisions, undermining the impartiality and authority of one institution or the other.
Conclusion
- Across the world, social media regulation is the need of the hour: fake news, protests and riots are fuelled by social media outrage over petty issues. However, governments should not arrogate unaccountable power to themselves in the name of social media regulation; the government's own powers should be scrutinised by a parliamentary committee.
Mains Question
Q. How can social media disrupt the law-and-order situation? Social media intermediaries have become the master regulators of free speech. Explain. Critically analyse the new draft amendments to the IT Rules, 2021.
Get an IAS/IPS ranker as your 1: 1 personal mentor for UPSC 2024
Social Media: Prospect and Challenges
EU Member States agree on Digital Services Act (DSA)
From UPSC perspective, the following things are important :
Prelims level: DSA
Mains level: India's IT Rules 2021
The European Parliament and European Union (EU) Member States announced that they had reached a political agreement on the Digital Services Act (DSA).
What is DSA?
- DSA is a landmark legislation to force big Internet companies to act against disinformation and illegal and harmful content, and to “provide better protection for Internet users and their fundamental rights”.
- The Act, which is yet to become law, was proposed by the EU Commission (anti-trust) in December 2020.
- As defined by the EU Commission, the DSA is “a set of common rules on intermediaries’ obligations and accountability across the single market”.
- It seeks to ensure higher protection to all EU users, irrespective of their country.
- The proposed Act will work in conjunction with the EU’s Digital Markets Act (DMA), which was approved last month.
To whom will the DSA apply?
- Intermediaries: The DSA will tightly regulate the way intermediaries, especially large platforms such as Google, Facebook, and YouTube, function when it comes to moderating user content.
- Abusive or illegal content: Instead of letting platforms decide how to deal with abusive or illegal content, the DSA will lay down specific rules and obligations for these companies to follow.
- Platforms within its ambit: The legislation brings within its ambit platforms that provide Internet access, domain name registrars, and hosting services such as cloud computing and web-hosting services.
- Very large platforms: But more importantly, very large online platforms (VLOPs) and very large online search engines (VLOSEs) will face “more stringent requirements.”
- 45-million monthly user base: Any service with more than 45 million monthly active users in the EU will fall into this category; those with fewer than 45 million monthly active users in the EU will be exempt from certain new obligations.
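The size-based tiering above can be captured in a short classification sketch. Only the 45-million figure comes from the text; the function name, return labels and structure are illustrative assumptions, not anything defined in the Act.

```python
# Illustrative sketch of the DSA's size tiers described above. Services
# with more than 45 million monthly active users in the EU attract the
# stricter "very large" obligations (VLOPs/VLOSEs); everything else here
# is an assumption made for illustration.
VLOP_THRESHOLD = 45_000_000

def dsa_tier(monthly_active_users_eu: int, is_search_engine: bool = False) -> str:
    """Classify a service under the DSA's (simplified) size tiers."""
    if monthly_active_users_eu > VLOP_THRESHOLD:
        return "VLOSE" if is_search_engine else "VLOP"
    return "standard intermediary"
```

A platform with 50 million EU users would thus be a VLOP, while one with one million users would face only the baseline obligations.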
Key features
A wide range of proposals seeks to ensure that the negative social impact arising from many of the practices followed by the Internet giants is minimised or removed:
- Faster removal of illicit content: Online platforms and intermediaries such as Facebook, Google, YouTube, etc will have to add “new procedures for faster removal” of content deemed illegal or harmful. This can vary according to the laws of each EU Member State.
- Introduction of trusted flaggers: Users will be able to challenge these takedowns as well, and platforms will need a clear mechanism to help users flag content that is illegal. Platforms will also have to cooperate with “trusted flaggers”.
- Imposition of duty of care: Marketplaces such as Amazon will have to “impose a duty of care” on sellers who are using their platform to sell products online. They will have to “collect and display information on the products and services sold in order to ensure that consumers are properly informed.”
- Annual audit of big platforms: The DSA adds an obligation for very large digital platforms and services to analyse systemic risks they create and to carry out risk reduction analysis. This audit for platforms like Google and Facebook will need to take place every year.
- Promoting independent research: The Act proposes to allow independent vetted researchers to have access to public data from these platforms to carry out studies to understand these risks better.
- Ban on “dark patterns” or “misleading interfaces”: The DSA proposes to ban “dark patterns”, or “misleading interfaces”, that are designed to trick users into doing something they would not otherwise agree to.
- Transparency of Algorithms: It also proposes “transparency measures for online platforms on a variety of issues, including on the algorithms used for recommending content or products to users”.
- Easy cancellation of subscription: Finally, it says that cancelling a subscription should be as easy as subscribing.
- Protection of minors: The law proposes stronger protection for minors, and aims to ban targeted advertising for them based on their personal data.
- Crisis mechanism clause: Added in the context of the Russian invasion of Ukraine, this clause will make it “possible to analyse the impact of the activities of these platforms” on a crisis, and the Commission will decide the appropriate steps to be taken to ensure that the fundamental rights of users are not violated.
- Others: Companies will have to look at the risk of “dissemination of illegal content”, “adverse effects on fundamental rights”, “manipulation of services having an impact on democratic processes and public security”, “adverse effects on gender-based violence, and on minors and serious consequences for the physical or mental health of users.”
Liability bar for social media
- It has been clarified that the platforms and other intermediaries will not be liable for the unlawful behaviour of users.
- So, they still have ‘safe harbour’ in some sense.
- However, if the platforms are “aware of illegal acts and fail to remove them”, they will be liable for this user behaviour.
- Small platforms, which remove any illegal content they detect, will not be liable.
Are there any such rules in India?
- India last year brought the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
- These rules make the social media intermediary and its executives liable if the company fails to carry out due diligence.
- Rule 4 (a) states that significant social media intermediaries — such as Facebook or Google — must appoint a chief compliance officer (CCO), who could be booked if a tweet or post that violates local laws is not removed within the stipulated period.
- India’s Rules also introduce the need to publish a monthly compliance report.
- They include a clause on the need to trace the originator of a message — this provision has been challenged by WhatsApp in the Delhi High Court.
Social Media: Prospect and Challenges
Big Tech’s privacy promise could be good news and also bad news
From UPSC perspective, the following things are important :
Prelims level: Competition Commission of India
Mains level: Privacy as a metric of quality
Context
In February, Facebook stated that its revenue in 2022 was anticipated to fall by $10 billion due to steps undertaken by Apple to enhance user privacy on its mobile operating system.
Move towards more privacy-preserving options
- Apple introduced the AppTrackingTransparency feature, which requires apps to request permission from users before tracking them across other apps and websites or sharing their information with third parties.
- Through this change, Apple effectively shut the door on “permissionless” internet tracking and has given consumers more control over how their data is used.
- Privacy experts have welcomed this move because it is predicted to enhance awareness and nudge other actors to move towards more privacy-preserving options, leading to a market for “Privacy Enhancing Technologies”.
- Google’s Privacy Sandbox project is a case in point, though it remains to be seen whether it will be truly privacy-preserving.
Big Tech dominance and issues related to it
- Privacy and acquisitions: One standout feature of Big Tech dominance has been competition on non-price factors such as quality of service (QoS) in general, and privacy in particular, alongside a pattern of acquisitions.
- Acquisitions to kill competition: Acquisitions by Big Tech are regular and eat up big bucks, not always to promote efficiency but to eliminate potential competition, described evocatively as “kill zone” by specialists.
- According to a report released by the Federal Trade Commission, between 2010 and 2019, Big Tech made 616 acquisitions.
- In the absence of a modern framework, competition law continues to rely on Bork’s theory of consumer welfare which postulated that the sole normative objective of antitrust should be to maximise consumer welfare, best pursued through promoting economic efficiency.
- Market structure thus became irrelevant and conduct became the sole criterion for judgement.
- Conduct now predominantly revolves around QoS which, like much else surrounding digital platforms, is pushing competition authorities to fortify their existing regulatory toolkits.
Privacy as a metric of quality
- Companies such as Apple and DuckDuckGo (with its slogan “the search engine that doesn’t track you”) are employing enhanced user privacy as a competitive metric.
- It has been shown that “websites which do not face strong competition are significantly more likely to ask for more personal information than other services provided for free”.
- In 2018, OECD accepted that privacy is a relevant dimension of quality despite the low quality that may be prevalent due to lack of market development.
- Regulators across the globe are recognising privacy as a serious metric of quality.
- For instance, the Competition Commission of India (CCI) in 2021 took suo moto cognisance of changes to WhatsApp’s “take-it-or-leave-it” privacy policy that made it mandatory for every user to share data with Facebook.
- In its prima facie order, the CCI inter alia observed that this amounts to degradation of privacy and therefore quality.
Way forward
- Privacy and competition have overlapping boundaries.
- If privacy becomes a competitive constraint, then companies will have the incentive to create privacy-preserving and enhancing technologies.
- Barriers for new entrants: On the other hand, care must be taken so that Big Tech, aka the gatekeepers in the EU’s Digital Markets Act, do not misuse privacy to create barriers for newer entrants.
- Restricting third-party tracking is not novel and other browsers such as Mozilla Firefox and Microsoft’s Edge have already done so.
- But Google, which owns 65 per cent of the global browser market, is different.
- By disabling third parties from tracking but continuing to use that data in its own ad tech stack, Google harms competition.
- The use of privacy as a tool for market development, therefore, has to tread this tightrope between enabling and stifling competition.
Conclusion
An approach that balances user autonomy, consumer protection, innovation, and market competition in digital markets is a real win-win and worth investing in.
Social Media: Prospect and Challenges
Fake news in social media
From UPSC perspective, the following things are important :
Prelims level: Not much
Mains level: Paper 2- Dealing with disinformation problem
Context
Social media platforms have adopted design choices that have led to a proliferation and mainstreaming of misinformation while allowing themselves to be weaponised by powerful vested interests for political and commercial benefit.
Problems created by social media and issues with response to it
- The consequent free flow of disinformation, hate and targeted intimidation has led to real-world harm and a degradation of democracy in India: it has mainstreamed anti-minority hate, polarised communities and sown confusion, making it difficult to establish a shared foundation of truth.
- Political agenda: Organised misinformation (disinformation) has a political and/or commercial agenda.
- Apolitical and episodic discourse in India: The discourse in India has remained apolitical and episodic — focused on individual pieces of content and events, and generalised outrage against big tech instead of locating it in the larger political context or structural design issues.
- Problematic global discourse: The evolution of the global discourse on misinformation too has allowed itself to get mired in the details of content standards, enforcement, fact-checking, takedowns, de-platforming, etc.
- Moderating misinformation vs. safeguarding freedom of expression: Such a framework lends itself to bitter partisan contests over individual pieces of content while allowing platforms to disingenuously conflate the discourse on moderating misinformation with safeguards for freedom of expression.
- The current system of content moderation is more a public relations exercise for platforms than being geared to stop the spread of disinformation.
Framework to combat disinformation
- Consider it as a political problem: The issue is as much about bad actors as individual pieces of content.
- Content distribution and moderation are interventions in the political process.
- Comprehensive transparency law: There is thus a need for a comprehensive transparency law to enforce relevant disclosures by social media platforms.
- Bipartisan political process for content moderation: Content moderation and allied functions such as standard setting, fact-checking and de-platforming must be embedded in the sovereign bipartisan political process if they are to have democratic legitimacy.
- Regulatory body should be grounded in democratic principles: Any regulatory body must be grounded in democratic principles — its own and of platforms.
- Three approaches to distribution that platforms can adopt: 1) constrain distribution to organic reach (chronological feed); 2) take editorial responsibility for amplified content; 3) amplify only credible sources (irrespective of ideological affiliation).
- Review of content creators: The current approach to misinformation, which relies on fact-checking a small subset of content in a vast ocean of unreviewed content, is inadequate for the task and needs to be supplemented by a review of content creators themselves.
Conclusion
Social media cannot be wished away. But its structure and manner of use are choices we must make as a polity after deliberation, instead of accepting them as a fait accompli or simply being overtaken by developments along the way.
Social Media: Prospect and Challenges
Social media platforms must be held accountable for subjugating rights: Centre to HC
From UPSC perspective, the following things are important :
Prelims level: Read the attached story
Mains level: IT Rules 2021
The Centre told the Delhi High Court that social media platforms must be held accountable for “subjugating and supplanting fundamental rights like the right to freedom of speech and expression, otherwise the same would have dire consequences for any democratic nation”.
What is the news?
- The Ministry’s submission came in response to a petition filed by a Twitter user whose account was suspended by the microblogging site for alleged violations of platform guidelines.
- The Twitter user said his account was suspended for the reason of “ban evasion” (creating an account when a similar account was earlier banned).
- The complainant said Twitter suspended his accounts without giving him an opportunity for a hearing.
Centre’s argument
- The Centre said when a Significant Social Media Intermediary (SSMI) such as Twitter takes a decision to suspend the whole or part of a user’s account on its own due to its policy violation, it should afford a reasonable opportunity to the user to defend his side.
- The exception, the Centre said, where the SSMIs could take such a decision include certain scenarios such as rape, sexually explicit material or child sexual abuse material, bot activity or malware, terrorism-related content etc.
- If an SSMI fails to comply with the above, then it may amount to a violation of IT Rules 2021, the Centre clarified.
- No platform or intermediary will be allowed to infringe upon the citizens’ rights, including but not limited to Articles 14, 19 and 21 guaranteed under the Constitution of India under the guise of violation of the platform’s policies unless it constitutes a violation of extant law in force.
What are the IT Rules 2021?
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 are broadly termed as IT Rules 2021.
Why is this a matter of concern?
- Social media platforms must respect the fundamental rights of the citizens and should not take down the account itself or completely suspend the user account in all cases.
- Taking down the whole information or the user account should be a last resort.
- Only in cases where the majority of the contents/posts/tweets on an account are unlawful, the platform may take the extreme step of taking down the whole information or suspending the whole account.
Conclusion
- Hence it can be argued that undue discontinuance of social media accounts of any person is violative of fundamental rights guaranteed under Articles 14, 19 and 21.
Social Media: Prospect and Challenges
Protocols for social media
From UPSC perspective, the following things are important :
Prelims level: Not much
Mains level: Paper 3- Social media in the time of conflict
Context
The lack of clear systems within social media companies that claim to connect the world is appalling. It is time that they should have learned from multiple instances, as recent as the Israeli use of force in Palestine.
Role of social media platforms in the context of conflict
- It was entirely predictable that conflicts in the information age would spill over onto social media platforms.
- In the context of conflict, social media platforms have multiple challenges that go unaddressed.
- Threat of information warfare: Content moderation remains a core area of concern, where, essentially, information warfare can be operationalised and throttled.
- Their sheer magnitude and narrative-building abilities place a degree of undeniable onus on them.
- After years of facing and acknowledging these challenges, most social media giants are yet to create institutional capacity to deal with such situations.
- Additionally, they also act as a conduit for further amplification of content on other platforms.
- Major social media platforms such as Facebook, Instagram, and Twitter also provide space for extremist views from fringe platforms, where the degree of direct relation to the user generating such content is blurred.
Technology falls short
- Misinformation and disinformation are thorny challenges to these platforms.
- Algorithmic solutions are widely put to use to address them.
- These include identification of content violative of their terms, reducing the visibility of content deemed inappropriate by the algorithm, and in the determination of instances reported to be violative of the terms by other users.
- More often than not in critical cases, these algorithmic solutions have misfired, harming the already resource-scarce party.
- The operational realities of these platforms require that the safety of users be prioritised to address pressing concerns, even at the cost of profits.
Lessons for India
- The lack of coherent norms on state behaviour in cyberspace as well as the intersection of business, cyberspace, and state activity is an opportunity for India.
- Indian diplomats can initiate a new track of conversations here which can benefit the international community at large.
- India should ensure that it initiates these conversations through well-informed diplomats.
- Finally, it is necessary to reassess the domestic regulatory framework on social media platforms.
- Transparency and accountability need to be foundational to the regulation of social media platforms in the information age.
Conclusion
It is in our national interest and that of a rule-based global polity that social media platforms be dealt with more attention across spheres than with a range of reactionary measures addressing immediate concerns alone.
Social Media: Prospect and Challenges
Loss of Safe Harbour for Twitter
From UPSC perspective, the following things are important :
Prelims level: Not much
Mains level: Social media content issue
Twitter has reportedly lost the coveted “safe harbour” immunity in India after failing to appoint statutory officers on time, as mandated by the new Information Technology (IT) Rules, 2021.
What is the news?
- With this, the social media giant becomes the only American platform to have lost the protective shield – granted under Section 79 of the IT Act.
- Its rival platforms such as YouTube, Facebook and WhatsApp remain protected.
- The new development could mean that Twitter’s senior executives, including its India managing director, face legal action under relevant IPC provisions for ‘unlawful’ activities on the platform, even if conducted by users.
Why such a move?
- Earlier this week, Twitter said it had appointed an interim Chief Compliance Officer (CCO), but the details of the officer had not yet been shared with the government.
- The company also posted job openings for a Nodal Officer and Resident Grievance Officer – the two key positions mandated by the central government’s IT Rules, 2021.
What is safe harbour protection?
- According to Section 79 of IT Act, 2000, “an intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him,” therefore providing Safe Harbour protection.
- To put it simply, the law notes that intermediaries such as Twitter or your Internet Service Providers (ISPs) are not liable to punishment if third parties (users) misuse the infrastructure, in this case, the platform.
- However, the protection is guaranteed only when the intermediary does not ‘initiate the transmission,’ ‘select the receiver of the transmission,’ and ‘modify the information contained in the transmission.’
- It means that as long as the platform acts just as the medium to carry out messages from users A to user B, that is, without interfering in any manner, it will be safe from any legal prosecution.
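The three conditions described above can be summarised in a small boolean check. This is an illustrative sketch of the passive-conduit idea, not a legal test; the function name and parameters are assumptions.

```python
# Illustrative sketch of the Section 79 safe-harbour conditions described
# above: an intermediary keeps protection only if it is a passive conduit,
# i.e. it does none of the three listed acts.
def has_safe_harbour(initiates_transmission: bool,
                     selects_receiver: bool,
                     modifies_information: bool) -> bool:
    """True only if the intermediary does none of the three disqualifying acts."""
    return not (initiates_transmission or selects_receiver or modifies_information)
```

On this reading, a platform that merely relays a message from user A to user B retains protection, while one that modifies the information in transit does not.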
Inception of the concept
- In its original form, the IT Act 2000 provided little or no Safe Harbour protection to internet intermediaries as the definition of the intermediary was restricted.
- However, things began changing in 2004, in a case where a student posted an obscene clip for sale.
- The student and the CEO of that company were both held later for letting pornographic material circulate online.
- The CEO challenged the proceedings against him, contending that he could not be personally held liable for the listing and that the MMS was transferred directly between the seller and buyer without the intervention of the website.
- The executive was acquitted, and the case eventually resulted in the addition of Section 79 to the IT Act to provide immunity to intermediaries.
Why has Twitter lost the protection?
- Over the years, social media platforms have evolved and often tend to act as gatekeepers.
- For instance, Twitter banning Donald Trump and adding a “manipulated media” label on select posts have been questioned by experts.
- In other words, an intermediary’s ability to “modify the information contained in the transmission” opens room for revision of the law, experts believe.
- Hence, the government introduced the IT Rules 2021 in December last year and implemented them in May 2021.
- As per the new order, all social media platforms with more than 50 lakh (five million) users will need to appoint a Chief Compliance Officer, a Nodal Contact Person, and a Resident Grievance Officer from India to smoothen the grievance mechanism for citizens.
- The officers will need to acknowledge queries within 24 hours and resolve them within 15 days from the date of receipt.
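The threshold and timelines above can be summarised in a short sketch. Only the 50-lakh user figure, the 24-hour acknowledgement and the 15-day resolution window come from the Rules as described; the helper names and structure are hypothetical.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the IT Rules, 2021 figures cited above: the
# 50-lakh (5 million) user threshold for "significant" platforms, and the
# grievance timelines (acknowledge within 24 hours, resolve within 15 days).
SIGNIFICANT_THRESHOLD = 5_000_000  # 50 lakh registered users in India

def is_significant_intermediary(users_in_india: int) -> bool:
    """True if the platform crosses the 50-lakh user threshold."""
    return users_in_india > SIGNIFICANT_THRESHOLD

def grievance_deadlines(received: datetime) -> dict:
    """Return the acknowledgement and resolution deadlines for a grievance."""
    return {
        "acknowledge_by": received + timedelta(hours=24),
        "resolve_by": received + timedelta(days=15),
    }
```

By this sketch, a platform with an estimated 1.75 crore (17.5 million) Indian users, such as Twitter at the time, clearly falls in the significant category.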
What can happen next?
- Once a company loses the Safe Harbour protection, technically, officials are liable to punishment if a post even by a third user violates local laws.
- The new IT Rules 2021 do not mention any ban for non-compliance.
- But with an estimated 1.75 crore users in India, Twitter would likely fill key positions soon to comply with the new norms laid by the government.
- As mentioned, the company already appointed an interim Chief Compliance Officer earlier.
- This, according to the government, means that the protection under Section 79 of the Information Technology (IT) Act, accorded to Twitter for being a social media intermediary, now stands withdrawn.
How does this impact Twitter?
- If someone puts out content on Twitter that leads to some form of violence, or violates any Indian law with respect to content, not only will the person who posted the tweet be held responsible.
- Twitter, too, will be legally liable for the content, as it no longer has the protection.
Is there something else that can happen subsequently?
- In the longer run, there is also the theoretical possibility that Twitter might be subjected to the 26 per cent cap on foreign direct investment in media and publishing.
- This in turn means that the platform may be forced to look for an Indian buyer for the remaining 74 per cent stake.
Social Media: Prospect and Challenges
New IT Rules is not the way forward
From UPSC perspective, the following things are important :
Prelims level: IT Act 2000
Mains level: Paper 3- Issues involved in traceability of originator of information on social media
The article deals with the issues involved in the traceability requirement for the originator of information on social media platforms under the new IT Rules.
Traceability clause and issues involved
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 impose certain obligations on significant social media intermediaries.
- Rule 4(2) places an obligation on them to ensure traceability of the originator of information on their platforms.
- Consequently, WhatsApp has filed a petition in the Delhi High Court.
- WhatsApp contends that the mandate for traceability violates the privacy rights of Indian citizens, by rendering WhatsApp unable to provide encrypted services.
Government’s response
- The Government primarily relies on the argument that privacy is not an absolute right, and that the traceability obligation is proportionate and sufficiently restricted.
- Notably, the new Rules mandate traceability only in the case of significant social media intermediaries i.e. those that meet a user threshold of 50 lakh users, which WhatsApp does.
- Traceability is also subject to an order being passed by a court or government agency and only in the absence of any alternatives.
- While it is indeed true that privacy is not an absolute right, the Supreme Court of India in the two K.S. Puttaswamy decisions of 2017 and 2018 has laid conditions for restricting this right.
- In Puttaswamy cases, the Supreme Court clarified that any restriction on this right must be necessary, proportionate and include safeguards against abuse.
Issues with traceability
- Not proportionate: A general obligation to enable traceability as a systemic feature across certain types of digital services is neither suitable nor proportionate.
- No safeguard against abuse: The Rules lack effective safeguards in that they fail to provide any system of independent oversight over tracing requests made by the executive.
- This allows government agencies the ability to seek any messaging user’s identity, virtually at will.
- Presumption of criminality: Weakening encryption — which a traceability mandate would do — would compromise the privacy and security of all individuals at all times, despite no illegal activity on their part, and would create a presumption of criminality.
Way forward
- Explore the alternatives: The Government already has numerous alternative means of securing relevant information to investigate online offences including by accessing unencrypted data such as metadata, and other digital trails from intermediaries.
- Already has ability to access encrypted data: The surveillance powers of the Government are in any case vast and overreaching, recognised even by the Justice B.N. Srikrishna Committee report of 2018.
- Importantly, the Government already has the ability to access encrypted data under the IT Act.
- Notably, Section 69(3) of the Information Technology Act and Rules 17 and 13 of the Information Technology Rules, 2009 require intermediaries to assist with decryption where they have the technical ability to do so, and where law enforcement has no alternatives.
- Judicial scrutiny of Section 79 of IT Act: The ability of the government to issue obligations under the guise of “due diligence” requirements under Section 79 of the IT Act must be subject to judicial scrutiny.
- Legislative changes needed: The long-term solution would be for legislative change along multiple avenues, including in the form of revising and reforming the now antiquated IT Act, 2000.
Consider the question “What are the issues involved in the traceability of the originator of the information on social media platforms as mandated by the new IT Rules 2021? Suggest the way forward.”
Conclusion
While, undoubtedly, there are numerous problems in the digital ecosystem that are often exacerbated or indeed created by the way intermediaries function, ill-considered regulation of the sort represented by the new intermediary rules is not the way forward.
Get an IAS/IPS ranker as your 1: 1 personal mentor for UPSC 2024
Social Media: Prospect and Challenges
New IT Rules 2021
From UPSC perspective, the following things are important :
Prelims level: Provisions of IT Rules 2021
Mains level: Paper 3- Issues with IT Rules 2021
The article highlights the issues with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Important provision made in the IT Rules 2021
- The Rules mandate duties such as removal of non-consensual intimate pictures within 24 hours.
- The Rules also mandate publication of compliance reports to increase transparency.
- The Rules provide for setting up a dispute resolution mechanism for content removal.
- It provides for adding a label to information for users to know whether content is advertised, owned, sponsored or exclusively controlled.
Issues with the rules
1) Affects right to free speech and expression
- The Supreme Court, in Life Insurance Corporation of India vs Prof. Manubhai D. Shah (1992), had elevated ‘the freedom to circulate one’s views’ to the lifeline of any democratic institution.
- So, the Rules need to be critically scrutinised for the barriers they impose.
2) Violation of legal principles
- The Rules were framed by the Ministry of Electronics and Information Technology (MeitY).
- However, the Second Schedule of the Business Rules, 1961 does not empower MeiTY to frame regulations for digital media.
- This power belongs to the Ministry of Information and Broadcasting.
- This action violates the doctrine of ‘colourable legislation’, under which what cannot be done directly cannot be done indirectly.
- Moreover, the Information Technology Act, 2000, does not regulate digital media.
- Therefore, the new IT Rules, which claim to be subordinate legislation under the IT Act, go beyond the rule-making power conferred by the Act.
- This makes the Rules ultra vires the Act.
3) Deprives the fair recourse to intermediary
- An intermediary is now supposed to take down content within 36 hours upon receiving orders from the Government.
- The strict timeline deprives the intermediary of fair recourse in the event that it disagrees with the Government’s order.
4) Privacy violation
- These Rules undermine the right to privacy by imposing a traceability requirement.
- The assurance that users received from end-to-end encryption was that intermediaries did not have access to the contents of their messages.
- Imposing this mandatory requirement of traceability will break this immunity, thereby weakening the security of the privacy of these conversations.
- This will also render all the data from these conversations vulnerable to attack from ill-intentioned third parties.
- The threat here is not only to privacy but also to the invasion of, and deprivation from, a safe space.
- Recent data breaches affecting a popular pizza delivery chain and several airlines highlight the risks of such a move in the absence of a data protection law.
- Instead of eliminating fake news, the Rules hurriedly proceed to take down whatever the authorities may deem “fake news”.
5) Operational cost
- The Rules create additional operational costs for intermediaries by requiring them to have Indian resident nodal officers, compliance officers and grievance officers.
- Intermediaries are also required to have offices located in India.
- This makes profit-making a far-fetched goal for multinational corporations and start-up intermediary enterprises.
- Therefore, not only do these Rules place a barrier on the “marketplace of ideas” but also on the economic market of intermediaries in general by adding redundant financial burdens.
Consider the question “What are the challenges associated with social media? How will the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 help in dealing with these challenges? What are the issues with these rules?”
Conclusion
Democracy stands undermined in direct proportion to every attack made on citizens’ rights. The IT Rules, 2021 tilt towards the violation of rights. Therefore, these rules need reconsideration.
Explained: Social Media and Safe Harbour
From UPSC perspective, the following things are important :
Prelims level: Not Much
Mains level: Social media regulation
The new rules for social media platforms and digital news outlets, called the Intermediary Guidelines and Digital Media Ethics Code, have come into effect.
New guidelines for digital media
- The guidelines had asked all social media platforms to set up a grievance redressal and compliance mechanism.
- This included appointing a resident grievance officer, chief compliance officer and a nodal contact person.
- The IT Ministry had also asked these platforms to submit monthly reports on complaints received from users and action taken.
- A third requirement, for instant messaging apps, was to make provisions for tracing the first originator of a message.
- Failure to comply with any one of these requirements would take away the indemnity provided to social media intermediaries under Section 79 of the Information Technology Act.
What is Section 79 of the IT Act?
- Section 79 says any intermediary shall not be held legally or otherwise liable for any third party information, data, or communication link made available or hosted on its platform.
- This protection, the Act says, shall be applicable if the said intermediary does not in any way, initiate the transmission of the message in question, select the receiver of the transmitted message and does not modify any information contained in the transmission.
- This means that as long as a platform acts just as the messenger carrying a message from point A to point B, without interfering in any manner, it will be safe from any legal prosecution.
- The intermediary must not tamper with any evidence of these messages or content present on its platform, failing which it loses its protection under the Act.
Effect of non-compliance
- As of now, nothing changes overnight. Social media intermediaries will continue to function as they were, without any hiccups.
- People will also be able to post and share content on their pages without any disturbance.
- Social media intermediaries such as Twitter, Facebook, and Instagram have so far not appointed any officer or contact person as required under the new rules.
- They have also failed to submit monthly action-taken reports on grievances and complaints submitted to them by users. Thus, protection under Section 79 of the IT Act will not hold for them.
Liabilities with the new rules
- Further, Rule 4(a) of the IT Rules mandates that significant social media intermediaries must appoint a chief compliance officer (CCO) who would be held liable in case the intermediary fails to observe the due diligence requirements.
- This means that if a tweet, a Facebook post or a post on Instagram violates the local laws, the law enforcement agency would be well within its rights to book not only the person sharing the content but the executives of these companies as well.
Global norms on safe harbour protection
- As most of the bigger social media intermediaries have their headquarters in the US, the most keenly watched is Section 230 of the 1996 Communications Decency Act.
- This provides Internet companies a safe harbour from any content users post on these platforms.
- Experts believe it is this provision in the US law that enabled companies such as Facebook, Twitter, and Google to become global conglomerates.
- Like Section 79 of India’s IT Act, this Section 230 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”.
- This effectively means that the intermediary is like a bookstore owner who cannot be held accountable for the books in the store unless a specific connection is established.
Repercussions of the rules in India
- WhatsApp has approached the Delhi High Court challenging the new Rules which include a requirement for social media platforms to compulsorily enable “the identification of the first originator of the information” in India upon government or court order.
- It argued that this provision forces it “to break end-to-end encryption on its messaging service, as well as the privacy principles underlying it”.
Kerala HC restrains Centre over new IT Rules
From UPSC perspective, the following things are important :
Prelims level: New IT Rules
Mains level: Free speech related issues
The Kerala High Court has restrained the Centre from taking coercive action against a legal news portal, for any non-compliance with Part III of the new IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
What was the petition?
Three-tier mechanism
- The petition said Part III of the rules imposed an unconstitutional three-tier complaints and adjudication structure on publishers.
- This administrative regulation on digital news media would make it virtually impossible for small or medium-sized publishers, such as the petitioner, to function.
- It would have a chilling effect on such entities, the petition said.
- The creation of a grievance redressal mechanism, through a governmental oversight body (an inter-departmental committee constituted under Rule 14) amounted to excessive regulation, the petitioner contended.
Violation of free speech
- The petitioner pointed out that Rule 4(2), which makes it mandatory for every social media intermediary to enable tracing of originators of information on its platform, violated Article 19(1)(a) (freedom of speech and expression).
- It also deprived the intermediaries of their “safe-harbour protection” under Section 79 of the IT Act.
Violation of Right to Privacy
- The rules obligate messaging intermediaries to alter their infrastructure to “fingerprint” each message on a mass scale for every user to trace the first originator.
- This was violative of the fundamental right of Internet users to privacy.
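The privacy objection to “fingerprinting” can be made concrete with a toy sketch. This is purely illustrative: the function names, the log design and the use of SHA-256 are assumptions for exposition, not the actual architecture of WhatsApp or any platform. Hashing every message lets a platform match a flagged message to its earliest sender without ever storing plaintext, but only by keeping an identity-linked trace of everything every user sends:

```python
import hashlib

def fingerprint(message):
    # Deterministic hash: identical messages always yield identical fingerprints.
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

# Hypothetical platform-side log of (fingerprint, sender) pairs in send order.
# The plaintext itself is never stored -- only its hash.
message_log = []

def record_send(sender, message):
    message_log.append((fingerprint(message), sender))

def first_originator(flagged_message):
    # Walk the log chronologically; the first matching hash identifies
    # the earliest sender of that exact message.
    target = fingerprint(flagged_message)
    for fp, sender in message_log:
        if fp == target:
            return sender
    return None

# Three users forward the same viral message.
record_send("alice", "Forward this to everyone!")
record_send("bob", "Forward this to everyone!")
record_send("carol", "Forward this to everyone!")

print(first_originator("Forward this to everyone!"))  # alice
```

The sketch shows why the petitioner calls this mass-scale fingerprinting: even though only hashes are retained, the platform must compute and record an identity-tagged derivative of every message from every user, pre-emptively and regardless of any wrongdoing.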
In Centre’s IT rules, there is accountability with costs
From UPSC perspective, the following things are important :
Prelims level: Not much
Mains level: Paper 3- Regulation of social media
The article examines the issues with Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Change in the immunity for social media platforms
- With social media platforms amassing tremendous power, the Government of India has over time sought to devise a core framework to govern social media.
- This framework, known as “intermediary liability”, has been given legal form through Section 79 of the Information Technology Act, 2000.
- This framework has been supplemented by operational rules, and the Supreme Court judgment in Shreya Singhal v. Union of India.
- All this legalese essentially provides large technology companies immunity for the content that is transmitted and stored by them.
- Recently, the Government of India announced drastic changes to it through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Issues with the Rules
1) Privacy concern
- The regulations do contain some features that bring accountability to social media platforms.
- For instance, they require that prior to a content takedown, a user should be provided adequate notice.
- However, there are several provisions in the rules that raise privacy concerns.
- Take traceability, where instant messaging platforms which deploy end-to-end encryption that helps keep our conversations private will now effectively be broken.
- This is because now the government may require that each message sent through WhatsApp or any other similar application be tied to the identity of the user.
- When put in the larger context of an environment that is rife with cybersecurity threats, an inconsistent rule of law and the absence of any surveillance oversight, this inspires fear and self-censorship among users.
- The traceability requirement thus undermines the core value of private conversations.
2) Regulation without clear legal backing
- The rules seek to regulate digital news media portals as well as online video streaming platforms.
- Under the Rules, the government will perform functions similar to those played by the Ministry of Information and Broadcasting for TV regulation.
- For instance, as per Rule 13(4), this now includes powers of censorship such as mandating apology scrolls and blocking content.
- All of this is being planned to be done without any legislative backing or a clear law made by Parliament.
- A similar problem exists with digital news media portals.
- The purview of the Information Technology Act, 2000, is limited.
- It only extends to the blocking of websites and intermediary liabilities framework, but does not extend to content authors and creators.
- Hence, the Act does not extend to news media, yet it is being stretched to do so by executive fiat.
- The oversight function will be played by a body that is not an autonomous regulator but one composed of high-ranking bureaucrats.
- This provides for the discretionary exercise of government powers of censorship over these sectors.
Way forward
- This could have ideally been achieved through more deliberative, parliamentary processes and by examining bodies in other democracies, which face similar challenges.
- For instance, OFCOM, a regulator in the United Kingdom, has been studying and enforcing regulations that promise higher levels of protection for citizens’ rights and consistency in enforcement.
- Instead, the present formulation increases government control that suffers from legality and core design faults.
- It will only increase political control.
Consider the question “What is the purpose of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and what are the concerns with these rules?”
Conclusion
While every internet user in India needs oversight and accountability from big tech, it should not be at the cost of increasing political control, chilling our voices online and hurting individual privacy.
Intermediary Guidelines and Digital Media Ethics Code, 2021
From UPSC perspective, the following things are important :
Prelims level: Not Much
Mains level: Regulation of social media and ott platforms
For the first time, the union government, under the ambit of the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021, has brought in detailed guidelines for digital content on both digital media and Over The Top (OTT) platforms.
Try answering this
Q.What is Over the Top (OTT) media services? Critically analyse the benefits and challenges offered by the OTT media services in India.
Guidelines Related to Social Media
- Due Diligence To Be Followed By Intermediaries: The Rules prescribe due diligence that must be followed by intermediaries, including social media intermediaries. In case, due diligence is not followed by the intermediary, safe harbour provisions will not apply to them.
- Grievance Redressal Mechanism: The Rules seek to empower users by mandating the intermediaries, including social media intermediaries, to establish a grievance redressal mechanism for receiving and resolving complaints from users or victims.
- Ensuring Online Safety and Dignity of Users, Especially Women Users: Intermediaries shall remove or disable access, within 24 hours of receipt of a complaint, to content that erodes individual privacy and dignity.
Additional Due Diligence to Be Followed by Significant Social Media Intermediary:
- Appoint a Chief Compliance Officer who shall be responsible for ensuring compliance with the Act and Rules. Such a person should be a resident of India.
- Appoint a Nodal Contact Person for 24×7 coordination with law enforcement agencies. Such a person shall be a resident in India.
- Appoint a Resident Grievance Officer who shall perform the functions mentioned under the Grievance Redressal Mechanism. Such a person shall be a resident in India.
- Publish a monthly compliance report mentioning the details of complaints received and action taken on the complaints.
- Significant social media intermediaries providing services primarily in the nature of messaging shall enable identification of the first originator of the information.
Digital Media Ethics Code Relating to Digital Media and OTT Platforms
This Code of Ethics prescribes the guidelines to be followed by OTT platforms and online news and digital media entities.
(a) Self-Classification of Content
- The OTT platforms, called publishers of online curated content in the rules, would self-classify content into five age-based categories: U (Universal), U/A 7+, U/A 13+, U/A 16+, and A (Adult).
- Platforms would be required to implement parental locks for content classified as U/A 13+ or higher and reliable age verification mechanisms for content classified as “A”.
- The publisher of online curated content shall prominently display the classification rating specific to each content or programme together with a content descriptor.
(b) Norms for news
- Publishers of news on digital media would be required to observe Norms of Journalistic Conduct of the Press Council of India and the Programme Code under the Cable Television Networks Regulation Act.
(c) Self-regulation by the Publisher
- Publisher shall appoint a Grievance Redressal Officer based in India who shall be responsible for the redressal of grievances received by it.
- The officer shall take a decision on every grievance received by it within 15 days.
(d) Self-Regulatory Body
- There may be one or more self-regulatory bodies of publishers. Such a body shall be headed by a retired judge of the Supreme Court or a High Court, or an independent eminent person, and shall have not more than six members.
- Such a body will have to register with the Ministry of Information and Broadcasting.
- This body will oversee the adherence by publishers to the Code of Ethics and address grievances that have not been resolved by the publisher within 15 days.
(e) Oversight Mechanism
- Ministry of Information and Broadcasting shall formulate an oversight mechanism.
- It shall publish a charter for self-regulating bodies, including Codes of Practices.
- It shall establish an Inter-Departmental Committee for hearing grievances.
Back2Basics: Social Media usage in India
- The Digital India programme has now become a movement that is empowering common Indians with the power of technology.
- The extensive spread of mobile phones, the Internet etc. has also enabled many social media platforms to expand their footprints in India.
- Some portals that publish analyses of social media platforms, and whose figures have not been disputed, have reported the following user-base numbers for major platforms in India:
- WhatsApp users: 53 Crore
- YouTube users: 44.8 Crore
- Facebook users: 41 Crore
- Instagram users: 21 Crore
- Twitter users: 1.75 Crore
- These social platforms have enabled common Indians to show their creativity, ask questions, be informed and freely share their views, including constructive criticism of the government and its functionaries.
- The govt acknowledges and respects the right of every Indian to criticize and disagree as an essential element of democracy.
Australia vs Facebook Row
From UPSC perspective, the following things are important :
Prelims level: Not Much
Mains level: Social media regulation
The social media giant Facebook is locked in a battle with Australia over legislation that would require Facebook and Google to pay news outlets for content.
Row over the news on social media
- Australia had proposed a law called the News Media and Digital Platforms Mandatory Bargaining Code Bill 2020.
- It seeks to mandate a bargaining code that aims to force Google and Facebook to compensate media companies for using their content.
Imagine a similar case arising in India, where scores of news channels and impulsive journalists vie hard for TRPs!
Response from the ‘giants’
- Google had threatened to make its search engine unavailable in Australia in response to the legislation, which would create a panel to make pricing decisions on the news.
- Facebook responded by blocking users from accessing and sharing Australian news.
Why countries are bringing such legislation?
- Australia has launched a global diplomatic offensive to support its proposed law to force Internet giants Facebook and Google to pay media companies.
- Google accounts for 53% of Australian online advertising revenue and Facebook for 23%.
- The legislation sets a precedent in regulating social media across geographies and is being closely watched the world over.
What is happening in other countries?
- Australia’s proposed law would be the first of its kind, but other governments also are pressuring Google, Facebook and other internet companies to pay news outlets and other publishers for the material.
- In Europe, Google had to negotiate with French publishers after a court last year upheld an order saying such agreements were required by a 2019 EU copyright directive.
- France is the first government to enforce the rules, but the decision suggests Google, Facebook and other companies will face similar requirements in other parts of the 27-nation trade bloc.
The ‘doubted’ reluctance
- Last year, Facebook announced it would pay US news organizations including The Wall Street Journal, The Washington Post and USA Today for headlines.
- In Spain, Google shut down its news website after a 2014 law required it to pay publishers.
Why does this matter?
- Developments in Australia and Europe suggest the financial balance between multibillion-dollar internet companies and news organizations might be shifting.
- Australia is responding to complaints by news organisations over news reports, magazine articles and other content that appear on the platforms’ websites or are shared by users.
- The government acted after its competition regulator tried and failed to negotiate a voluntary payment plan with Google.
- The proposed law would create a panel to make binding decisions on the price of news reports to help give individual publishers more negotiating leverage with global internet companies.
Not losing out revenue gain
- Google’s agreement means a new revenue stream for news outfits, but whether that translates into more coverage for readers, viewers and listeners is unclear.
- The union for Australian journalists is calling on media companies to make sure online revenue goes into newsgathering.
Controversial hashtags on twitter and their regulation
From UPSC perspective, the following things are important :
Prelims level: Not Much
Mains level: Social media as a lobbying tool
The Centre has issued notice to Twitter after the micro-blogging site restored more than 250 accounts that had been suspended earlier on the government’s ‘legal demand’.
Note this new term: “Hashtag Activism”.
What is the news?
- Twitter was asked to block accounts and controversial hashtags that spoke of an impending ‘genocide’ of farmers for allegedly promoting misinformation about the protests, adversely affecting public order.
- Twitter reinstated the accounts and tweets on its own and later refused to go back on the decision, contending that it found no violation of its policy.
Concerns with the directive
- This direction not only presents a clear breach of fundamental rights but also reveals a complex relationship between the government and large platforms in their understanding of the Constitution of India.
- The specific legal order issued is secret.
- This brings into focus the condition of secrecy, which is a threshold objection cutting across multiple strands of our fundamental rights.
- It conflicts with the rights of the users, who are denied reasons for the censorship.
- Secrecy also undermines the public’s right to receive information, which is a core component of the fundamental freedom of speech and expression.
- This anti-democratic practice not only results in the unchecked growth of irrational censorship but also leads to speculation that fractures trust.
- The other glaring deficiency is the complete absence of any prior show-cause notice to the actual users of these accounts by the government.
- This is contrary to the principles of natural justice.
- This again goes back to the vagueness and the design faults in the process of how directions under Section 69A are issued.
Are platforms required to comply with legal demands?
- Cooperation between technology services companies and law enforcement agencies is now deemed a vital part of fighting cybercrime and various other crimes that are committed using computer resources.
- These cover hacking, digital impersonation and theft of data.
- The potential for misuse has led law enforcement officials to constantly seek to curb the ill-effects of the medium.
- Therefore, most nations have framed laws mandating that Internet service providers, web hosting service providers and other intermediaries cooperate with law and order authorities in certain circumstances.
What does the law in India cover?
- In India, the Information Technology Act, 2000, as amended from time to time, governs all activities related to the use of computer resources.
- It covers all ‘intermediaries’ who play a role in the use of computer resources and electronic records.
- The term ‘intermediaries’ includes providers of telecom service, network service, Internet service and web hosting, besides search engines, online payment and auction sites, online marketplaces and cyber cafes.
- It includes any person who, on behalf of another, “receives, stores or transmits” any electronic record. Social media platforms would fall under this definition.
What are the Centre’s powers, vis-à-vis intermediaries?
- Section 69 of the Act confers on the Central and State governments the power to issue directions “to intercept, monitor or decrypt…any information generated, transmitted, received or stored in any computer resource”.
The grounds on which these powers may be exercised are:
- in the interest of the sovereignty or integrity of India, defence of India, the security of the state,
- friendly relations with foreign states,
- public order, or for preventing incitement to the commission of any cognizable offence relating to these, or
- for investigating any offence
How does the government block websites and networks?
- Section 69A, for similar reasons and grounds, enables the Centre to ask any agency of the government, or any intermediary, to block access.
- Any such request for blocking access must be based on reasons given in writing.
- Procedures and safeguards have been incorporated in the rules framed for the purpose.
Obligations of intermediaries under Indian law
- Intermediaries are required to preserve and retain specified information in a manner and format prescribed by the Centre for a specified duration.
- Contravention of this provision may attract a prison term that may go up to three years, besides a fine.
- When a direction is given for monitoring, the intermediary and any person in charge of a computer resource should extend technical assistance in the form of giving access or securing access to the resource involved.
- Failure to extend such assistance may entail a prison term of up to seven years, besides a fine.
- Failure to comply with a direction to block access to the public on a government’s written request also attracts a prison term of up to seven years, besides a fine.
Is the liability of the intermediary absolute?
- Section 79 of the Act makes it clear that “an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by him”.
- This protects intermediaries such as Internet and data service providers and those hosting websites from being made liable for content that users may post or generate.
- However, the exemption from liability does not apply if there is evidence that the intermediary abetted or induced the commission of the unlawful act involved.
Judicial intervention in this regard
- In Shreya Singhal Case (2015), the Supreme Court read down the provision to mean that the intermediaries ought to act only upon receiving actual knowledge that a court order has been passed.
- This was because the court felt that intermediaries such as Google or Facebook may receive millions of requests, and it may not be possible for them to judge which of these were legitimate.
- The role of the intermediaries has been spelt out in separate rules framed for the purpose in 2011.
Legislative efforts
- In 2018, the Centre favoured coming up with fresh updates to the existing rules on intermediaries’ responsibilities, but the draft courted controversy.
- This was because one of the proposed changes was that intermediaries should help identify originators of offensive content.
- This led to misgivings that this could aid privacy violations and online surveillance.
- Also, tech companies that use end-to-end encryption argued that they could not open a backdoor for identifying originators, as it would be a breach of promise to their subscribers.
‘Toolkit’ tweeted by Greta Thunberg
From UPSC perspective, the following things are important :
Prelims level: Toolkit
Mains level: Social media as a lobbying tool
The Delhi Police filed an FIR on charges of sedition, criminal conspiracy and promoting hatred against the creators of a ‘toolkit’ on farmer protests, which was shared by climate activist Greta Thunberg.
Q. What do you mean by a social media toolkit? Discuss its potential misuses.
What is a Toolkit?
- A toolkit is essentially a set of adaptable guidelines or suggestions to get something done. The contents differ depending on what the aim of the toolkit is.
- For example, the Department for Promotion of Industry and Internal Trade (DPIIT) has a toolkit for the implementation of Intellectual Property Rights (IPR).
- This includes basics such as the guidelines to follow when investigating IPR violations, applicable laws, and definitions of terms such as counterfeit and piracy.
- In the context of protests, a toolkit usually includes reading material on the context of the protest, news article links and methods of protest (including on social media).
Why have they gained prominence?
- While toolkits have been around for decades, the accessibility of social media has brought them into the spotlight over the past few years.
- References to toolkits for protesters can be found in the Occupy Wall Street protests of 2011, the Hong Kong protests of 2019, several climate protests across the world, and the anti-CAA protests across India.
- During the Hong Kong protests, toolkits advised participants to wear masks and helmets to avoid being recognised, and explained ways to put out tear gas shells.
- During the anti-CAA protests, a toolkit suggesting Twitter hashtags to use, places to hold protests, and what to do and carry if detained by the police was shared on social media.
Toolkit tweeted by Greta Thunberg
- The 18-year-old shared a toolkit on Twitter on the anti-farm law protests in India.
- This came on the heels of a singer-businesswoman tweeting a news article on internet curbs near protest sites in and around Delhi.
- The toolkit tweeted by Thunberg was later deleted, with the activist saying it was being updated by people on the ground in India.
- The toolkit asked those interested to start a ‘Twitter storm’ and share solidarity photo and video messages on social media.
It was speculated that the document was proof that an international conspiracy was being hatched to defame India and the central government over the ongoing farmers’ protest.
What is the recent apprehension?
- The police have said that during the inquiry it appears that the toolkit was created by Poetic Justice Foundation.
- It says the ‘prior action’ section delineated the action plan for January 26, when violence was seen in several areas as a group of farmers deviated from the set route and started marching towards the Red Fort.
- The unfolding of events over the past few days, including the violence of 26th January, has revealed a copycat execution of the ‘action plan’ detailed in the toolkit.
- The intention of the creators of the toolkit appeared to be to create disharmony among various social, religious and cultural groups and to encourage disaffection and ill-will against the state and the nation at large.
Social Media: Prospect and Challenges
Fake News
From UPSC perspective, the following things are important :
Prelims level: Not much
Mains level: Paper 3- Social media and spread of fake news.
The Supreme Court has asked the Centre to explain its “mechanism” against fake news and bigotry on air, and to create one if it did not already exist.
Q. Discuss how fake news affects the free speech and informed choices of the citizens of the country.
What did the Centre say?
- Media coverage predominantly has to maintain a balanced and neutral perspective.
- It explained that as a matter of journalistic policy, any section of the media may highlight different events, issues and happenings across the world as per their choice.
- It was for the viewer to choose from the varied opinions offered by the different media outlets.
What is Fake News?
- Fake news is untrue information presented as news. It often has the aim of damaging the reputation of a person or entity, or making money through advertising revenue.
- Once common in print and digital media, the prevalence of fake news has increased with the rise of social media and messaging apps.
- Political polarization, post-truth politics, confirmation bias, and social media have been implicated in the spread of fake news.
Threats posed
- Fake news can reduce the impact of real news by competing with it.
- In India, the spread of fake news has occurred mostly in relation to political and religious matters.
- However, misinformation related to the COVID-19 pandemic was also widely circulated.
- Fake news spread through social media in the country has become a serious problem, with the potential of it resulting in mob violence.
Countermeasures
- Internet shutdowns are often used by the government to stop social media rumours from spreading.
- Ideas such as linking Aadhaar to social media accounts have been suggested to the Supreme Court of India by the Attorney General.
- In some parts of India, such as Kannur in Kerala, the government conducted fake-news awareness classes in government schools.
- The government is planning to conduct more public-education initiatives to make the population more aware of fake news.
- The demand for fact-checking has sparked the creation of fact-checking websites in India to counter fake news.
Social Media: Prospect and Challenges
Tackling the challenge of Big Tech
From UPSC perspective, the following things are important :
Prelims level: Not much
Mains level: Paper 3- Social media and challenges
The article discusses the threat posed by the spread of misinformation on the internet and suggests the steps to tackle it.
Warning for India
- The U.S.’s experience with the Internet should serve as a stark warning to India.
- Most Americans now get their news from dubious Internet sources.
- This has resulted in the hardening of political stances and the acute polarisation of the average American’s viewpoint.
- For India, the danger is that, as in the U.S., such extreme polarisation can happen in a few short years.
- Anywhere between 500 million and 700 million people are now newly online, almost all from towns and rural areas.
Use of targeted algorithm
- Social networks such as Facebook, WhatsApp, and Twitter have become the source of news for the people, but these have no journalistic norms.
- The spread of misinformation has been greatly amplified by the highly targeted algorithms that these companies use.
- They are likely to bombard users with information that serves to reinforce what the algorithm thinks the searcher needs to know.
- As they familiarise themselves with the Internet, newly online Indians are bound to fall prey to algorithms that social network firms use.
Steps to control the misinformation on the internet
- 1) Tech firms are already under fire from all quarters; nonetheless, we need to act.
- They are struggling to meet calls to contain the online spread of misinformation and hate speech.
- 2) Unlike the U.S., India might need to chart its own path by regulating these firms before they proliferate.
- In the U.S., these issues were not sufficiently legislated for and have existed for over a decade.
- Free speech is inherent in the Constitution of many democracies, including India’s.
- This means that new Indian legislation needs to preserve free speech while still applying pressure to make sure that Internet content is filtered for accuracy, and sometimes, plain decency.
- 3) The third issue is corporate responsibility.
- Facebook, for instance, has started to address this matter by publishing ‘transparency reports’ and setting up an ‘oversight board’.
- But we cannot ignore the fact that the numbers in these reports reflect judgements that are made behind closed doors.
- What should be regulatory attempts to enforce transparency are instead being converted into secret corporate processes.
- We have no way of knowing the extent of biases that may be inherent inside each firm.
- The fact that their main algorithms target advertising and hyper-personalisation of content makes them further suspect as arbiters of balanced news.
- This means that those who use social media platforms must pull in another direction to maintain access to a range of sources and views.
Consider the question: “What are the factors responsible for the spread of misinformation on social media? Suggest measures to tackle it.”
Conclusion
We need strong intervention now. Else, in addition to the media, which has largely been the responsible fourth estate, we may well witness the creation of an unmanageable fifth estate in the form of Big Tech.
Social Media: Prospect and Challenges
Social media and dilemmas associated with it
From UPSC perspective, the following things are important :
Prelims level: Internet and related terms
Mains level: Paper 3- Social media and related issues
The Internet has transformed our lives like no other technology. However, it has created several problems as well. The article analyses these issues.
Examining the role of social media
- The first reason for examining this role is the impending US presidential election.
- The ghosts of Cambridge Analytica are returning to haunt us again.
- The second reason is the COVID pandemic.
- Social media has emerged as a force for good, with effective communication and lockdown entertainment, but also for evil, being used effectively by anti-vaxxers and the #Unmask movement to proselytize their dangerous agenda.
Understanding the problems associated with social media
- The big problem with social networks is their business model.
- The internet was created as a distributed set of computers communicating with one another, and sharing the load of managing the network.
- This was Web 1.0, and it worked very well. But it had one big problem: there was no way to make money off it.
- When the internet got monetized, Web 2.0 was born.
- Come 2020, search and social media advertising has crossed $200 billion, rocketing past print at $65 billion, and TV at $180 billion.
- This business model has led to a “winner-takes-all” industry structure, creating natural monopolies and centralizing the once-decentralized internet.
- Now comes Web 3.0, a revolution that promises to return the internet to its users.
Way forward
- One principle of the new model is to allow users explicit control of their data, an initiative aided by Europe-like data protection regulation.
- Another is to grant creators of content (artists, musicians, photographers) a portion of revenues, instead of platforms taking it all (or most).
- The technologies that Web 3.0 leverages are newer ones, like blockchains, which are inherently decentralized.
- They have technology protection against the accumulation of power and data in the hands of a few.
- Digital currencies enabled by these technologies offer a business model of users paying for services and content with micro-transactions, as an alternative to advertiser-pays.
Conclusion
The path to success for these new kinds of democratic networks will be arduous. But a revolution has begun, and it is our revulsion at current models that could relieve us of our social dilemmas.
Social Media: Prospect and Challenges
Turkey enacts Social Media Law
From UPSC perspective, the following things are important :
Prelims level: NA
Mains level: Need for social media regulation
Turkey’s parliament approved a law that gives authorities greater power to regulate social media despite concerns of growing censorship.
Unregulated social media promotes misinformation, hate speech, defamation, threats to public order, incitement to terrorism, bullying, and anti-national activities.
Turkey: The forerunner of cyber policing
- Turkey leads the world in removal requests to Twitter, with more than 6,000 demands in the first half of 2019.
- More than 408,000 websites are blocked in Turkey, according to The Freedom of Expression Association.
- Online encyclopedia Wikipedia was blocked for nearly three years before Turkey’s top court ruled that the ban violated the right to freedom of expression and ordered it unblocked.
- The country also has one of the world’s highest rates of imprisoned journalists, many of whom were arrested in a crackdown following a failed coup in 2016.
Features of the Law:
1) Appointing representatives:
- The law requires major social media companies such as Facebook and Twitter to keep representative offices in Turkey to deal with complaints against content on their platforms.
- If the social media company refuses to designate an official representative, the legislation mandates steep fines, advertising bans and bandwidth reductions.
2) Bandwidth reductions
- Bandwidth reductions mean social media networks would be too slow to use.
- With a court ruling, bandwidth would first be reduced by 50%, and then by up to 90%.
3) Privacy protection
- The representative will be tasked with responding to individual requests to take down content violating privacy and personal rights within 48 hours or to provide grounds for rejection.
- The company would be held liable for damages if the content is not removed or blocked within 24 hours.
4) Data storage
- The most alarming feature of the new legislation is that it would require social media providers to store user data in Turkey.
- The government says the legislation was needed to combat cybercrime and protect users.
- This would be used to remove posts that contain cyberbullying and insults against women.
Turkey seems to have made an attempt to regulate social media amidst the chaos. The law lags on various fronts, offering a lesson for India not to rush into such regulation.
Concerns over the law
- Hundreds of people have been investigated and some arrested over social media posts.
- The opposition points out that the law would further limit freedom of expression in a country where the media is already under tight government control and dozens of journalists are in jail.
Social Media: Prospect and Challenges
What is ‘Milk Tea Alliance’?
From UPSC perspective, the following things are important :
Prelims level: ‘Milk Tea Alliance’
Mains level: Chinese assertion in the Indo-China region
The ‘Milk Tea Alliance’ is an informal term coined by social media users, and it is trending these days.
Though the term is in the news without any institutional backing, it is gaining popularity. It clearly shows the public outrage against Chinese aggressiveness towards Taiwan and Hong Kong.
What is the ‘Milk Tea Alliance’?
- Thai social media users began calling for the sovereignty of Taiwan and Hong Kong, extending support to the two territories.
- This spurred social media users from other Southeast Asian countries to join the call, in a rejection of China’s influence in the region for its own diplomatic and economic gains.
- The ‘Milk Tea Alliance’ is an informal term coined by social media users because in the region, tea is consumed in many nations with milk, with the exception of China.
- Memes were formed showing flags of the countries in the “Milk Tea Alliance” with China as a lone outsider.
What started this online war?
- The online battle started with a Thai Twitter post that questioned whether the coronavirus had emerged in a laboratory in Wuhan.
- There were some related tweets by pro-Taiwanese and Hong Kong people.
- Pro-China social media users then began attacking Thailand for being a “poor” and “backward” nation and also hurled insults at the Thai king and the Thai prime minister.
Social Media: Prospect and Challenges
Trolling in India
From UPSC perspective, the following things are important :
Prelims level: Not Much
Mains level: Abuse of women on social media and its implications
The Amnesty International India has released a report titled “Troll Patrol India: Exposing Online Abuse Faced by Women Politicians in India”. The report analysed more than 114,000 tweets sent to 95 women politicians in the three months during and after last year’s general elections in India.
Highlights of the report
- The research found that women are targeted with abuse online not just for their opinions – but also for various identities, such as gender, religion, caste, and marital status.
- Indian women politicians face substantially higher abuse on Twitter than their counterparts in the U.S. and the U.K.
- Around 13.8% of the tweets in the study were either “problematic” or “abusive”.
- Problematic content was defined as tweets that contain hurtful or hostile content, especially if repeated to an individual on multiple occasions, but do not necessarily meet the threshold of abuse.
- While all women are targeted, Muslim women politicians faced 55% more abuse than others.
- Women from marginalized castes, unmarried women, and those from non-ruling parties faced a disproportionate share of abuse.
A matter of concern
- Abusive tweets contained content that promotes violence against or threatens people based on their race, national origin, sexual orientation, gender, religious affiliation, age, disability or other categories.
- They include death threats and rape threats.
- Problematic tweets contained hurtful or hostile content, often repeated, which could reinforce negative or harmful stereotypes, although they did not meet the threshold of abuse.
The term ‘social media’ is used quite often by everyone and has become a popular topic of conversation, debate and controversy.
“The social networking phenomenon continues to gain steam worldwide, and India represents one of the fastest growing markets at the moment.”
This reflects the growing importance of social media in India in the coming years, so here is a brief article on social media: prospects and challenges.
- Introduction
- What is social media?
- Types of social media
- Implications and uses
- Challenges before social media
- Prospects of social media in India
- Concerns about privacy
- Way ahead
Introduction:
- Social media is the latest form of media available to audiences of varied groups. It is a form of electronic communication through which users instantly share information, ideas, personal messages, videos, pictures and other content.
- The major reason behind its popularity is that users get a free service to create a virtual social world where they exchange photographs, play games, make friends, fall in love, split, fight, argue and quarrel, often without ever having met physically.
- But, on the other side, it is an encroachment on one’s privacy, which can lead to different types of illegal activity using information such as name, location and email address.
- Social media provides us a platform to express ourselves without any restriction, which is becoming a major challenge as it may infringe on an individual’s fundamental right to privacy.
Various forms of social media
- Social Networks – Services that allow you to connect with other people of similar interests and backgrounds. They usually consist of a profile, various ways to interact with other users, the ability to set up groups, etc. The most popular are Facebook and LinkedIn.
- Bookmarking Sites – Services that allow you to save, organize and manage links to various websites and resources around the internet. Most allow you to “tag” your links to make them easy to search and share. The most popular are Delicious and StumbleUpon.
- Social News – Services that allow people to post various news items or links to outside articles, and then allow their users to “vote” on the items. The voting is the core social aspect, as the items that get the most votes are displayed most prominently. The community decides which news items get seen by more people. The most popular are Digg and Reddit.
- Media Sharing– Services that allow you to upload and share various media such as pictures and video. Most services have additional social features such as profiles, commenting, etc. The most popular are YouTube and Flickr.
- Microblogging– Services that focus on short updates that are pushed out to anyone subscribed to receive the updates. The most popular is Twitter.
- Blog Comments and Forums – Online forums allow members to hold conversations by posting messages. Blog comments are similar, except that they are attached to blogs and the discussion usually centres on the topic of the blog post. There are many popular blogs and forums.
Implications and uses of social media:
- Social media serve as a superior medium to stay connected with friends and family, to meet new people, and make new friends.
- It seems to be the most effective form of communication as feedback is instant.
- It is like a boon to introverts as they find a safer zone to initiate conversations.
- It is an emerging medium to integrate people, following the principle of “many voices, one world”.
Effect on the Functioning & Performance of Government
- Accountability and transparency in government
- Deals and decisions by representatives are instantly shared on social media
- It helps people see what the government has actually done
- It also prompts swift action by the government
- It brings representatives closer to citizens through a digital interface
- It has a democratising effect
Effect on Governance and Institutions
- It provides a voice to the people
- Office delays, bureaucratic red-tapism and absenteeism have been curbed
- Recent protests over free speech have also spurred judicial activism, and thus a vigilant judiciary in the event of infringement of rights
- It also helps political leaders propagate their manifestos during election campaigns
- The cost of disseminating information and government expenditure have been reduced vis-à-vis other forms of media
Challenges before Social Media:
- The misuse of personal information, hacking of accounts, morphing of personal photographs, addiction to social networking sites, and spam and viruses are the most prominent problems arising from social media.
- Other prospective challenges are illiteracy, the reach and accessibility of the internet, the lack of censorship on social media, and the need for a regulatory body to govern it.
- The most worrying aspect of social media is that it cannot be controlled; its consequences can therefore be dangerous and uncontrollable for all those who use it recklessly and irresponsibly.
- Frequent networking on sites like Facebook could also generate negative feelings like inadequacy, envy and jealousy, or even aggressive behaviour.
Concerns about privacy:
- Most networking sites do not really protect an individual’s privacy. A simple example is that of photos being posted on such sites without taking permission from all the people concerned.
- There is no guarantee of the authenticity of the data posted, nor can everything be taken at face value.
Prospects of social media in India:
- There has been a remarkable increase in internet connectivity in India, with successful penetration of personal computers even in small cities and towns, and intense mobile penetration in every nook and corner of the country.
- Given the high number of tech-friendly youth in the country, it can be said that social media has bright prospects in India.
Way ahead:
- Apart from illiteracy, reach and accessibility, social media in India has to meet the challenge of revenue generation.
- There has been a rapid increase in social networking sites, microblogging, media sharing and bookmarking sites. India is a lucrative market, and social media is certainly gaining opportunities to deepen its roots and establish a strong foothold here.
- There is a need to regulate social media. An agency must be deputed to monitor the anti-social activities taking place in the virtual world. Laws relating to cybercrime should be made more stringent, and there should be a separate policing department for cybercrimes.
- There is a need for extensive research to find ways to regulate crime on social media.
Social media, with all its benefits and its potential for more, is definitely a boon. However, misuse or irresponsible usage can have negative effects on individuals and society, especially on young, impressionable minds. We need to guard against the negative impact of social media and use it in the correct manner, for creative and productive purposes, so that it is progressive rather than regressive for mankind and society at large.