
International Law Expert on AI Regulation: Doing Nothing Is Not an Option


As technology and artificial intelligence (AI) advance rapidly, states must take steps to regulate the algorithms and AI systems developed by companies, says Professor Simon Chesterman, Vice Provost of the National University of Singapore.

What will the regulation of artificial intelligence depend on? What solutions might strike a balance between the safety of citizens and the development of innovation? Professor Chesterman, a well-known international law expert, answered these and other questions during the conference “Technological Change and International Law”, held in Vilnius on September 5-6.

What threats do you think artificial intelligence could pose to humanity? And who should take steps to prevent those threats?

Some people worry about so-called existential threats, for example, that AI could rise up and kill us all. I think that threat has moved from science fiction to a possible concern for people, but so far we see no evidence that the AI deployed today is going in that direction. I think that if we were on the verge of creating an uncontrollable AI, we should hold back from doing so, but at the moment we have not reached such a point. There are many short-term concerns, however, that we should address right now, such as discrimination and bias. When making decisions, AI relies on data sets that contain human bias and may lack representation. So I think organizations and governments should be very wary of using algorithms to make recommendations unless they are prepared to stand behind those recommendations. Because if those algorithms belong to private companies, we may not know what goes into them or what comes out of them. For those reasons, I think we need to be wary of relying on systems that we don’t understand.

This relates to a larger threat: if we interact with AI more and more, it may change the way we think. I think we’re all aware of the changes that social media has caused in younger and other generations, so AI may also affect the way we think about the world itself. Consider what happens if we rely on AI for very basic information. A good analogy is the smartphone, thanks to which we no longer need to memorize many telephone numbers, which is very convenient. But what if, instead of phone numbers, we begin relying on technology to form our opinions? Then we’ll have a very big problem, because we would no longer be forming our own opinions; something else would be shaping them.

So what should we do? First, we should learn to understand AI better. We should be more wary of deploying systems that we are not familiar with and whose consequences we do not know. And we should be clearer about responsibility when things go wrong with AI: if a driverless vehicle crashes or an algorithm makes a racist decision, some individual or company should be held accountable for that.

In your conference paper, you mentioned that states, in trying to avoid threats, sometimes place too many legal restrictions on the development of technology. Do you think a balanced level of regulation can be achieved, so that people are protected from risks while the technology can still progress?

Yes, but it’s difficult. It depends on what your risk threshold is. For example, the EU (European Union) has regulated much more than some other countries, because it sees risks that it doesn’t want to accept. Other countries, for instance Singapore, look at the European approach and say, “Well, that would restrain innovation; it would drive it elsewhere.” And the Europeans respond with something like, “In some areas, like real-time biometric surveillance, we don’t want innovation. We don’t want facial recognition to be everywhere.” So, to conclude, it depends on the kind of risks each country sees and, to some extent, on its size. A big country or a group of countries like the EU is still an important market, even if it regulates strictly. So if that means Meta and Google have to jump through hoops to operate in the European Union, they will still do it. If a small country did something similar, they might decide it’s not worth doing business there at all.

There are also choices to be made beyond underregulation and overregulation. For example, do you treat the technology generally, like the EU, which covers all of AI under the same umbrella? Or do you approach it sector by sector, as Singapore and others do, identifying the specific problems you want to address rather than regulating the technology as a whole? I assume most countries will choose the latter.

Could you mention some positive examples of states that have a balanced AI regulation?

Again, it depends on what your objectives are. I think the EU is still an important example of a group of states that prioritize rights over uncontrolled technological advancement, and it’s really interesting to see where this goes. Singapore has taken a very practical approach: its goal was to develop tools, such as AI Verify, that support companies’ self-regulation. Many companies claim to have specific guidelines and standards around AI, so Singapore said, “Alright, here’s a tool with which you can measure whether you’re living up to your standards.” There are also regulatory sandboxes: some governments create a kind of virtual environment, for example in the finance sector, in which you can run programs for your products with minimal risk, because nothing can escape the sandbox. And then there’s hard regulation: some countries apply hard rules to specific sectors, like medicine, finance, or transportation. So yes, I think Singapore is doing interesting things.

Another quite interesting case is China. For the most part, it looks at AI through a national security lens, and it has adopted some interesting social policies that may at least interest the parents of teenagers around the world, like limiting the amount of time children can play video games. So I don’t think any country has got it perfectly right yet, but that’s natural at an early stage of technological evolution: you’re going to see all kinds of experimentation around the world. I think the one thing we’re all going to land on soon is the realization that doing nothing isn’t an option.

Do you see any current challenges in the world that AI could help overcome?

Yes, AI is very good at optimizing resources and workflows, finding the most efficient ways of processing huge amounts of data. There are some things AI can do that humans can’t, like drug discovery: antibiotics only really took off around the 1940s, because we discovered them in nature. Now, with AI, you don’t have to rely on soil naturally producing antibiotics; you can do it virtually, experimenting with every different combination of molecules. In areas like that, AI has huge potential. Modeling the climate and the weather is also an incredibly complex process that AI can play a role in. Processing huge amounts of data is an area where it could really make a difference.

Speaking of international law, do you think there might one day be an international organization, like the United Nations, that helps create AI laws and regulations for all countries?

I think that would be very challenging, because all of the power right now is in the hands of companies. Companies make money and don’t necessarily want regulation; states also have power, but they’re wary of the risks of overregulating or underregulating. There’s virtually no power at the international level, other than the power states willingly give to international organizations. So yes, you could imagine a scenario in which states all agree to give power to an international organization, but why would they do that? For example, given the tension between the US and China right now, why would the US want to be governed the same way China is governed, and vice versa? So it is possible, but not likely. These scenarios are only possible under two circumstances: when something is extremely uncontroversial, like postal standards, or extremely dangerous, like nuclear bombs.


The conference “Technological Change and International Law” was organized jointly by the European Society of International Law (ESIL), Vilnius University (VU) and the VU Faculty of Law. ESIL conferences are held in a different European city each year; this was the first time the conference took place in Lithuania. Some of the presentations were recorded, and the videos are available on the VU Faculty of Law’s YouTube channel.