Controlling artificial intelligence is a 4D challenge
The writer is the founder of Sifted, an FT-backed site about European start-ups
G7 leaders addressed a range of global concerns over sake-steamed Nomi oysters in Hiroshima last weekend: the war in Ukraine, economic resilience, clean energy and food security, among others. But they also slipped one extra item into the parting swag bag of good intentions: the promotion of inclusive and trustworthy artificial intelligence.
While leaders recognized AI's innovative potential, they worried about the damage it could do to public safety and human rights. Launching the Hiroshima Artificial Intelligence Process, the G7 commissioned a working group to analyze the impact of generative AI models such as ChatGPT and to prepare the ground for leaders' discussions by the end of this year.
The initial challenge will be how best to define AI, categorize its threats and determine the appropriate response. Is regulation best left to existing national agencies? Or is the technology so consequential that it demands new international institutions? Do we need today's equivalent of the International Atomic Energy Agency, founded in 1957 to promote the peaceful development of nuclear technology and prevent its military use?
It is debatable how effectively the UN body has fulfilled that mission. Moreover, nuclear technology involves radioactive material and massive infrastructure that is comparatively easy to inspect physically. Artificial intelligence, by contrast, is relatively cheap, invisible, pervasive and has an infinite number of use cases. At the very least, it presents a four-dimensional challenge that must be tackled in more flexible ways.
The first dimension is discrimination. Machine-learning systems are designed to discriminate, to spot outliers in patterns. That is good for detecting cancerous cells in radiology scans. But it is bad when black-box systems trained on flawed data sets are used to hire and fire workers or approve bank loans. Bias in, bias out, as they say. Banning these systems in areas of unacceptably high risk, as the EU's forthcoming AI Act proposes, is a strict, precautionary approach. Establishing independent, expert auditors might be a more adaptable course.
Second, disinformation. As the AI expert Gary Marcus warned the US Congress last week, generative artificial intelligence could threaten democracy itself. Such models can generate plausible lies and fake humans at lightning speed and on an industrial scale.
The onus should fall on the tech companies themselves to watermark content and minimize disinformation, much as they suppressed email spam. Failure to do so will only strengthen calls for more drastic intervention. A precedent may have been set in China, where a draft law places responsibility for misuse of AI models on the producer rather than the user.
Third, dislocation. No one can predict exactly what economic impact artificial intelligence will have overall. But it seems fairly certain that it will lead to the "deprofessionalization" of white-collar jobs, as the entrepreneur Vivienne Ming said at the FT Weekend festival in Washington, DC.
Computer programmers have widely adopted generative AI as a productivity tool. By contrast, striking Hollywood screenwriters may be the first of many to fear that their core skills will be automated. This messy history defies simple solutions. Nations will have to adapt to the social challenges in their own ways.
Fourth, destruction. Incorporating artificial intelligence into lethal autonomous weapons systems (LAWS), or killer robots, is a terrifying prospect. The principle that humans should always remain in the decision-making loop can only be established and enforced through international treaties. The same applies to the debate around artificial general intelligence, the (possibly fictional) day when AI surpasses human intelligence in every domain. Some campaigners dismiss this scenario as a distracting fantasy. But it is surely worth heeding those experts who warn of potential existential risks and call for international research collaboration.
Others may argue that trying to regulate AI is as futile as praying that the sun does not set. Laws only ever evolve incrementally, while AI is developing exponentially. But Marcus says he is encouraged by the bipartisan consensus in the US Congress. And US tech companies, fearful that EU regulators will set global standards for artificial intelligence as they did with data protection five years ago, are also publicly backing regulation.
G7 leaders would be wise to encourage a competition for good ideas. They now need to spark a regulatory race to the top, rather than preside over a frightening slide to the bottom.
Source: https://www.ft.com/content/57bc42f7-2b44-49e9-9df1-4facddd43e3d