Policy Wrap: Meta faces €550m lawsuit from Spanish media, complainants argue that Meta’s paid ad-free service violates EU consumer regulations, EU’s landmark AI rules still being ironed out, and more
Share your feedback and comments with us at office@adif.in
ANTITRUST
Meta faces €550m lawsuit from Spanish media over alleged dominance in the Adtech domain
Citing unfair competition in the advertising industry, a coalition representing 83 Spanish media sites has filed a €550 million (approximately $600 million) complaint against Meta Platforms. The newspapers jointly filed the lawsuit with a commercial court last Friday, alleging that Meta violated EU data protection legislation between 2018 and 2023.
The plaintiffs claim that Meta's "massive" and "systematic" use of personal data from its Facebook, Instagram, and WhatsApp platforms gives it an unfair advantage in producing and selling personalised ads, and that this amounts to unfair competition.
According to the complainants, the majority of Meta's advertisements rely on personal information acquired without users' express authorisation. They contend that this violates the EU General Data Protection Regulation, which came into effect in May 2018 and requires websites to obtain permission before storing or using personal data.
The case represents legacy media's (mass media institutions that predate the internet, such as print, radio, and television broadcasting) most recent attempt to defend its territory in court against tech behemoths. One of the most notable precedents came in 2014, when the Google News service was shut down in Spain in response to Spanish legislation; it reopened in 2022 under new rules allowing media outlets to negotiate prices directly with the internet giant.
Many nations have recently implemented laws aimed at forcing tech behemoths to pay for news, including Canada earlier this year.
Meta’s paid ad-free service violates EU consumer regulations, claims complaint
The largest consumer organisation in Europe, European Consumer Organisation (BEUC), said last week that Meta Platforms' paid no-ads subscription service, which it launched in Europe last month, violates EU consumer regulations.
Claiming that Meta's new service amounted to paying a fee to ensure privacy, the BEUC and eighteen of its members jointly filed a complaint with the network of consumer protection authorities (CPC), two days after advocacy group NOYB filed a complaint with the Austrian privacy watchdog on the same issue.
The BEUC also identified a number of other problems. In a statement, it said that Meta is violating EU consumer legislation by employing unfair, dishonest, and aggressive tactics, such as partially blocking users from accessing the services to pressure them into making an immediate decision, and giving them inaccurate and incomplete information in the process.
According to BEUC, even if consumers choose the new service, their data would probably still be gathered and utilised for other purposes. Additionally, it criticised the "very high subscription fee for ad-free services" as a potential turnoff to users.
"At this price, customers will merely agree to Meta's tracking and profiling, which is precisely what the tech behemoth wants. It is not appropriate to charge people for maintaining their privacy," the Deputy Director General of BEUC said.
For web users, the monthly fee for the ad-free service is €9.99, while iOS and Android customers pay €12.99.
AI & DEEPFAKE
EU's landmark AI rules, which could be voted into law by the EU Parliament later this month, are still being ironed out
The groundbreaking legislation governing artificial intelligence (AI) in the European Union is currently being negotiated in what some believe to be the last round of talks. The decisions made could serve as a model for other governments creating regulations for their own artificial intelligence sectors.
Governments and parliamentarians are still discussing a number of important topics prior to the meeting, including how to regulate the rapidly expanding field of generative AI and how law enforcement should use it.
The primary problem stems from the fact that the initial draft of the rules was prepared in early 2021, nearly two years before OpenAI released ChatGPT, one of the fastest-growing software applications ever. The act's original risk categories did not clearly apply to ChatGPT and other generative AI tools, which has led to a continuing debate about how they should be governed.
The European Union has proposed regulations pertaining to foundation models, requiring enterprises to maintain a comprehensive record of their systems' training data and capabilities, attest that they have taken precautions against potential hazards, and submit to audits by outside experts.
However, some of the EU’s most powerful member states, France, Germany, and Italy, have contested that approach in recent weeks. In their view, the creators of generative AI models should be allowed to self-regulate rather than face strict rules, a freedom they argue is essential if European companies are to compete with dominant U.S. players such as Google and Microsoft.
Legislators in the EU seek regulations that safeguard citizens' fundamental rights, but member states also want certain latitude so that technology can be utilised for national security purposes, such as by border protection agencies or the police.
The bill might potentially be voted into law by the EU Parliament later this month if a final version is agreed upon. Even then, it might not take effect for nearly two more years.
Governments and lawmakers in the EU may alternatively negotiate a "provisional agreement" in the absence of a final accord, with the details worked out over the course of several weeks of technical discussions.
GoI taking a tough stance on deepfakes, states that violations may lead to action under the IPC as well as the IT Rules
The Ministry of Electronics and Information Technology (MeitY), together with senior government officials, met with executives from social media and other internet-based intermediaries to "review" their progress in addressing the 'deepfake' issue. Shri Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, presided over the meeting.
The government issued a strong message to social media and internet intermediaries, stating that they risk legal action not only under the Information Technology (IT) Rules of 2021, but also under relevant sections of the Indian Penal Code (IPC), for failing to address deepfakes, synthetically altered content, and the 11 other user harms mentioned in the rules.
According to Shri Rajeev Chandrasekhar, a new amendment to the IT Rules is "actively under consideration" to ensure that social media and other platforms comply with laws regarding deepfakes and other matters, in order to protect public safety and confidence.
The next meeting with the internet intermediaries and social media companies will be held in seven days.
Earlier this month, in order to address the problem of deepfakes, Union Minister of Electronics and Information Technology Shri Ashwini Vaishnaw and Shri Rajeev Chandrasekhar met key executives from social media platforms and online intermediaries.
Following their meetings with these corporations, Shri Vaishnaw and Shri Chandrasekhar declared that deepfakes posed a threat to democracy and stable governance and needed to be addressed right away.
While Shri Chandrasekhar stated that all companies should first warn users about the dangers of uploading synthetic and deepfake media at every point of interaction with the platform's interface, Shri Vaishnaw suggested introducing new legislation to deal with deepfakes.
In November, the IT ministry had issued an advisory to all social media intermediaries directing them to take prompt action against deepfake content. Citing Section 66D of the IT Act as well as Rules 3(1)(b)(vii) and 3(2)(b) of the IT Rules, the ministry stated that social media intermediaries must ensure that users of their platforms do not host any content that impersonates another person.