Fear of artificial intelligence: what are governments doing about it?

The rapid development of artificial intelligence has excited governments around the world as much as it has frightened them. How far can artificial intelligence go? At this point, no one knows, not even those contributing to its development. What is known is that artificial intelligence has the potential to completely transform the world we live in.

But rapid advances in artificial intelligence, such as Microsoft-backed OpenAI’s ChatGPT, are complicating governments’ efforts to agree on laws governing the technology’s use.

Countries around the world, as well as international organizations, are working to control the use, and in some cases the advancement, of AI so that it does not become a boomerang for humanity.

Australia is in the research phase of regulation. The government is consulting with Australia’s main science advisory body and considering next steps, a spokesman for the industry and science minister said in April.

China, on the other hand, has implemented temporary regulations. The government issued a series of interim measures, effective from August 15, 2023, to manage the generative AI industry, requiring service providers to submit security assessments and obtain permits before releasing AI products to the market.

Following government approvals, four Chinese tech firms, including Baidu Inc and SenseTime Group, launched their AI chatbots to the public on August 31.

The European Union has focused its efforts on planning and drafting the most effective regulations. EU lawmakers agreed in June on changes to a draft European Artificial Intelligence Act. Lawmakers will now have to decide on the details with EU countries before the draft rules become legislation.

The biggest sticking point is expected to be facial recognition and biometric surveillance: some lawmakers want a total ban, while EU countries want an exception for national security and military defense purposes.

France, more specifically, is in the phase of investigating possible violations. The French privacy watchdog said in April it was investigating several complaints about ChatGPT, after the chatbot was temporarily banned in Italy for a suspected breach of privacy rules.

France’s National Assembly approved in March the use of AI video surveillance during the 2024 Paris Olympics, ignoring warnings from civil rights groups.

Italy is also investigating possible violations. Italy’s data protection authority plans to review other artificial intelligence platforms and hire AI experts.

ChatGPT became available again to users in Italy in April, after being temporarily banned over concerns by the national data protection authority in March.

Japan expects to introduce regulations by the end of 2023 that are likely to be closer to the US position than to the tough rules planned in the EU, as it sees the technology as a way to boost economic growth and to make Japan a leader in advanced chips.

Israel has been working on AI regulations “for about the last 18 months” to strike the right balance between innovation and maintaining human rights and citizen protections, Ziv Katzir, director of national AI planning at the Israel Innovation Authority, said in June.

Israel released a 115-page draft AI policy in October last year and is weighing public feedback before a final decision.

Britain’s competition regulator said in May it would begin examining the impact of AI on consumers, businesses and the economy and whether new controls are needed.

In the US, District Judge Beryl Howell in Washington, DC, ruled on August 21 that a work of art created by AI without any human input cannot be copyrighted under US law, upholding the Copyright Office’s rejection of an application filed by computer scientist Stephen Thaler on behalf of his DABUS system.

The US Federal Trade Commission opened a broad investigation into OpenAI in July, alleging that it violated consumer protection laws by putting reputation and personal data at risk.

Generative artificial intelligence raises competition concerns and is a focus of the Federal Trade Commission’s Technology Bureau, along with its Office of Technology, the agency said in an official blog post in June.

G7 leaders meeting in Hiroshima, Japan, acknowledged in May 2023 the need to govern AI and other pervasive technologies, and agreed that ministers should discuss the technology under the “Hiroshima AI process” and report the results by the end of 2023.

G7 countries should adopt “risk-based” regulations for AI, digitization ministers said after a meeting in April.

The UN Security Council held its first official discussion on AI in New York in July 2023. The council addressed military and non-military applications of AI, which “could have very serious consequences for global peace and security,” UN Secretary-General Antonio Guterres said.

The UN Secretary-General has also announced plans to begin work by the end of the year on a high-level AI advisory body that would regularly review AI governance arrangements and offer recommendations.
