Policy Wrap: Meta appeals against Messenger and Marketplace's EU gatekeeper status, EU opens probe into Amazon, calls for stronger provisions in IT Act and IPC to fight deepfake menace, and more
Share your feedback and comments with us at office@adif.in
POLICY
Consensus eludes EU gatekeeper designations as Meta appeals its status for Messenger, Marketplace
In September, the European Union designated 22 "gatekeeper" services, operated by six of the world's largest tech companies, to be subject to new regulations as part of its latest campaign against Big Tech.
Under the Digital Markets Act (DMA), these gatekeepers must let consumers decide which apps to pre-install on their devices and make their messaging apps interoperable with rival apps.
Services from Alphabet, Amazon, Apple, Microsoft, Meta, and ByteDance, the owner of TikTok, will be subject to the DMA.
Google and Microsoft have agreed to comply with the EU legislation.
Companies that object to the label and its conditions can appeal to the EU's General Court in Luxembourg, which handles matters relating to competition, trade, and environmental law.
Meta, for its part, filed an appeal on Wednesday against being designated a "gatekeeper" for its Messenger and Marketplace platforms.
The company said it would not contest the classification of Facebook, Instagram, and WhatsApp.
Industry insiders reckon that TikTok could challenge its status as well; the company has previously called the gatekeeper designation fundamentally wrong.
EU antitrust authorities are looking into whether Apple's iMessage and Microsoft's Bing should abide by the new regulations.
PRIVACY & USER PROTECTION
Amazon's consumer protection measures under scrutiny as EU opens probe into the Big Tech giant
In an effort to measure Amazon's consumer protection policies against a recently passed EU internet law, the European Commission said on Wednesday that it will be looking into the online retail behemoth.
The Commission has requested information to assess Amazon's risk assessment and mitigation strategies, which are mandated under the EU's Digital Services Act (DSA), particularly with regard to the protection of fundamental rights and the dissemination of illegal products.
Amazon must respond to the queries by December 6th.
Amazon's responses will shed light on the risk assessments and mitigating actions it takes to safeguard customers' rights and interests online.
Under the DSA, Amazon is classified as a "very large online platform," which means it must abide by new regulations (including those on increased transparency, data sharing, and risk management) or face stiff fines of up to 6% of its worldwide turnover.
If Amazon does not reply, the European Commission can demand the information by formal decision, impose fines for inaccurate or misleading answers, and levy periodic penalty payments to compel a response.
IT Act, IPC need stronger provisions to fight deepfake menace
Experts warned that while provisions in the current IT law can be invoked against the creation and distribution of deepfakes, they may not be enough to address the problem, and lawmakers should act to mitigate the harm that deepfakes can do. The criminal provisions under the IT Act and the IPC address the harms caused by deepfakes only partially.
Minister of State for Information Technology Rajeev Chandrasekhar advised victims to "avail remedies provided under the Information Technology rules" and report incidents to the police. Social media companies have also received an advisory informing them that if they do not remove detected deepfake content within 36 hours, they risk losing their "safe harbour immunity" under the Act.
But experts pointed out that these are ex-post remedies: once deepfakes and AI-generated false material are disseminated, the harm they cause can be irreversible.
Calls for stringent AI regulation have grown louder as a result.
Legislation on artificial intelligence (AI) is needed to regulate the complications surrounding AI and related applications. It should also spell out legal obligations in cases of AI-related incidents and provide accountability frameworks to safeguard the people in the firing line.
Meta, Alphabet, ByteDance, Snap face lawsuits for social media addiction and mental health deterioration among minors
Major social media companies Alphabet, Meta, ByteDance, and Snap attempted on Tuesday to have nationwide litigation against them dismissed, but a federal judge rejected their arguments. The lawsuits claim the companies illegally lured and hooked millions of minors on their platforms, causing mental health harm.
The ruling covers hundreds of claims brought on behalf of individual children who allegedly suffered harm to their physical, mental, and emotional well-being from their use of social media, including suicidal thoughts and feelings, anxiety, and depression.
The lawsuits seek damages and a halt to the defendants' alleged wrongdoing, among other remedies.
The companies argued that federal law shields them from liability for content that users post on their platforms and mandates the dismissal of the claims. In a 52-page decision, U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California rejected arguments that the companies were fully shielded from litigation by the federal Communications Decency Act and the First Amendment of the U.S. Constitution.
Rogers noted that the plaintiffs' accusations went beyond merely highlighting third-party content, and pointed out that the defendants had failed to explain why they should not be held accountable for offering faulty parental controls, failing to help users limit their screen time, and putting up obstacles to account deactivation.
As an example, she said the companies could have used age-verification systems to alert parents when their children were online.
The plaintiffs thus present a tenable argument that users are harmed by the platforms' failure to properly verify users' ages, a harm distinct from that caused by viewing third-party content.
Because they have the legal status of product manufacturers, the companies owed their users a duty of care; they may face negligence claims for failing to design reasonably safe products and to warn users of known defects.
Meta and Snap must detail child protection measures by December 1: EU
The European Commission announced last Friday that the EU has given Meta and Snap until December 1st to provide further details about how they shield minors from harmful and illegal content.
The request for information on the measures the companies have taken to improve the protection of minors comes a day after the EU sent a similar request to YouTube and TikTok.
The Commission also issued urgent orders to Meta, X, and TikTok last month, requesting information on the steps they had taken to stop the spread of violent content, hate speech, and terrorism-related content on their platforms.
If the Commission is not satisfied with the companies' answers, it may open formal investigations.
Under the recently enacted Digital Services Act (DSA), major internet platforms must take additional steps to remove harmful and illegal content or risk fines of up to 6% of their global revenue.