Self-regulation or overregulation – OpenAI’s cross-continental dilemma
Since OpenAI’s ChatGPT was released in November 2022, tech experts, the media, and governments have expressed worry over what it could become in the hands of bad actors. In response, some jurisdictions, most notably the EU, have leaned towards overregulation that critics say stifles development, while others, like the US, have adopted a more hands-off approach to the industry.
As of December 2024, AI startups like OpenAI face legal action in several countries over alleged breaches of data privacy and copyright rules.
Most recently, Canadian media organizations, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press, and CBC, filed a joint lawsuit, reportedly the first of its kind in the country, accusing OpenAI of ignoring safeguards such as paywalls and copyright disclaimers meant to prevent the unauthorized copying of content.
“Journalism is in the public interest. OpenAI using other companies’ journalism for their own commercial gain is not. It’s illegal,” the media organizations claimed in a joint statement.
The group, which includes the publishers of Canada’s top newspapers, is seeking damages of C$20,000 ($14,300; £11,000) per article that OpenAI allegedly used to train ChatGPT unlawfully. The total could amount to billions of dollars in compensation.
The news organizations also want an order that would force the company to share profits made from using their articles, as well as an injunction prohibiting OpenAI from using them in the future.
Meanwhile, OpenAI says its models are “trained on publicly available data” and that the software is “grounded in fair use and related international copyright principles that are fair for creators and support innovation.”
“We collaborate closely with news publishers, including in the display, attribution and links to their content in ChatGPT search, and offer them easy ways to opt out should they so desire,” the company has said.
The New York Times and other publishers filed a similar lawsuit against OpenAI and Microsoft in the US last year.
OpenAI faces different regulatory standards around the world
China was one of the first countries to set up a regulatory framework for generative AI, issuing the final version of its Generative AI Measures on 10 July 2023. Seven central government agencies jointly adopted the measures, which took effect on 15 August 2023. Compared with the first draft released for public consultation in April 2023, the final version placed greater emphasis on encouraging the development of, and investment in, generative AI technology and services.
However, OpenAI’s ChatGPT and similar foreign services are banned in China under its censorship laws and amid fears of misinformation. This pushed Chinese developers, many of whom had built products on OpenAI’s API, to scramble for domestic AI alternatives. The government has supported these alternatives, as they comply with its regulations.
On the other hand, Russia has taken a more reserved stance regarding artificial intelligence. The country now has restrictions in place to counter what it perceives as risks of US influence through AI. It also cites cybersecurity concerns.
In the months leading up to the US election, several reports claimed Russia was using AI to influence public opinion, although many dismissed the claims as propaganda.
In North Korea, the government treats AI with suspicion and great caution, as it does many external innovations. The government has banned ChatGPT and similar technologies in a bid to maintain strict control over information and prevent potential misuse.
The EU is behind in the AI race
Many experts have accused Europe of stifling AI innovation through the EU AI Act, a regulation designed to harmonize the legal framework for AI systems across the bloc.
The Act sorts AI applications into risk tiers: unacceptable risk (banned), high risk (subject to strict requirements), limited risk (transparency obligations), and minimal risk (largely unregulated). It prohibits AI practices that are manipulative or deceptive, with exceptions for certain law enforcement uses.
Flouting the Act can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Europe’s cautious approach to AI was on full display when Italy’s data privacy regulator, the Garante, raised concerns over a partnership between OpenAI and the Italian publishing group GEDI. The core purpose of the partnership was to use Italian-language content from GEDI’s news portfolio to train and improve OpenAI’s products, including its ChatGPT models.
In response, GEDI stated, “The project has not been launched yet therefore no editorial content has been made available to OpenAI at the moment and will not be until the reviews under way are completed.”
The clash came almost a year after the Garante, one of the European Union’s most proactive authorities in assessing AI platforms’ compliance with the bloc’s data privacy regime, temporarily banned ChatGPT over alleged breaches of EU privacy rules. The service was reinstated after OpenAI addressed issues such as users’ right to refuse consent to the use of their personal data for training algorithms.
In an emailed statement, OpenAI said it believes its practices are aligned with the EU’s privacy laws. “We actively work to reduce personal data in training our systems like ChatGPT,” it said, adding that it “plans to continue to work constructively with the Garante.”
Europe is behind in the AI race while places like America continue to leap forward. Currently, American companies, including Microsoft, OpenAI’s largest backer, dominate the ranks of the biggest AI firms by market valuation. The country’s hands-off response has no doubt paid off for its AI businesses, at least in the short term.
Very recently, Kanye West, the Grammy-winning American rapper, became the latest high-profile figure to embrace the technology, debuting a music video created with generative AI for his single “BOMB.”
While many countries cite ethical concerns about AI usage as the rationale for their tight restrictions, those restrictions also appear to stem from political tensions and efforts to protect national interests. If nothing changes, the rest of the world may be far ahead by the time Europe finally catches the AI bug.