During the Arqus Research & Innovation Workshop in Vilnius, Questions of AI Speed, Autonomy and Opacity Were Discussed

Created: 01 June 2022

On 18-20 May, Vilnius University hosted the interdisciplinary research workshop "FinTech: Finance, Technology & Regulation". The event attracted 54 participants from Arqus Alliance partner institutions and beyond (Spain, France, Austria, Poland, Italy, Lithuania, and Norway), as well as representatives of the FinTech industry. The workshop was organized within the framework of the Arqus Research & Innovation project.

A wide variety of topics were discussed during the workshop, including European FinTech policy and its risks, regulatory strategies, security and privacy solutions, and digital transformation in banking, investment, insurance, environment, health, and other fields.

Among the FinTech representatives were Inga Karulaitytė-Kvainauskienė, regulatory expert and FintechHub LT Board member; Andrius Ramoška, member of FINTECH Lithuania/Infobalt, Co-founder and CEO at Soverio; Andrius Petkevičius, CEO at the BCCS (Blockchain Cybersecurity and Compliance Solutions) cluster; Emilia Chehtova, Policy Officer, Innovation Policy & Access to Finance Unit, DG Research & Innovation, European Commission (EC); Mr. Stéphane Ouaki, Head of the EIC Department, European Innovation Council and SMEs Executive Agency (EISMEA), established by the European Commission; Ms. Liudmila Andreeva-Paskov (DG FISMA, EC); and others.

Two keynote speakers, well-known in the FinTech world, gave lectures during the event. Prof. dr. Andrei Kirilenko, professor of finance, founding director of the Cambridge Centre for Finance, Technology & Regulation (CCFTR) at the University of Cambridge, talked about the optimal selection of crypto assets.

Another keynote speaker, prof. dr. Simon Chesterman, Dean of the National University of Singapore Faculty of Law and Senior Director of AI Governance at AI Singapore, gave a lecture on challenges regulating artificial intelligence (AI).

Simon Chesterman

This event was the second in a series of four large Arqus workshops, following the workshop in Graz (Austria) on climate change ("Climate risks in a changing world"), which took place on 26-28 April 2022. These workshops focus on the fields of Artificial Intelligence/Digital Transformation (the Vilnius and Lyon events) and Green Deal/Climate Change (the Graz and Bergen events) as transversal priority areas for enhanced research collaboration, characterized by their interdisciplinarity.

The format of this event and the other workshops mentioned is new for the participants: a mixture of a brokerage event and networking. During the FinTech workshop, researchers and business representatives introduced themselves, their fields of activity and interests, and their experience in participating in and implementing research and innovation projects.

The participants were introduced to the upcoming EC calls for proposals (Horizon Europe, the Internal Security Fund, the Digital Europe Programme, and the European Innovation Council's instruments) and encouraged to discuss the possible development of joint research projects.
Moreover, the participants were introduced to the forthcoming Seed Funding initiative, which will support the organization of consortium meetings and other activities required to prepare competitive joint projects. The call for this initiative is expected to be published in June 2022.

Based on the joint research interests identified, Arqus aims to build a community of inquiry. To this end, drawing on an enhanced understanding of the expertise of the other Arqus universities and research groups, partner universities will be encouraged to develop joint research projects. The Lyon (France) event on Artificial Intelligence and Digitalization is organized on 1-3 June 2022, and the Bergen (Norway) workshop on climate change will take place on 8-9 June 2022.


AI speed challenges its regulation

According to prof. dr. Chesterman, the first challenge, related to the speed of AI, can be illustrated by the 2010 trillion-dollar flash crash in the United States, also known as the "crash of 2:45". The crash lasted only 36 minutes.

In 2015, Navinder Singh Sarao was arrested for allegedly using an automated program to place large sell orders that pushed prices down, which he then canceled in order to buy at the lower market prices. He was sentenced in 2020.
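The pattern described, large orders cancelled en masse rather than executed, lends itself to a crude illustration. The Python sketch below flags order flow with an unusually high cancel ratio on large sell orders; the thresholds and the order format are assumptions made for this example, and real market surveillance additionally weighs timing, price impact, and intent.

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    side: str      # "buy" or "sell"
    size: int      # number of contracts
    status: str    # "filled" or "cancelled"

def spoofing_suspect(orders, large=100, cancel_ratio=0.9):
    """Flag order flow where large sell orders are almost always cancelled.

    Illustrative heuristic only; the thresholds are assumptions,
    not any regulator's actual rules.
    """
    big_sells = [o for o in orders if o.side == "sell" and o.size >= large]
    if not big_sells:
        return False
    cancelled = sum(1 for o in big_sells if o.status == "cancelled")
    return cancelled / len(big_sells) >= cancel_ratio

# Nine large sell orders cancelled, one filled -> flagged as suspect
flow = [Order("T1", "sell", 500, "cancelled") for _ in range(9)]
flow.append(Order("T1", "sell", 500, "filled"))
print(spoofing_suspect(flow))  # True
```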

The professor explains that the same rules apply to a human and to a machine when buying and selling stocks.

"But what we discovered in 2020 was the mere fact that machines could do this much faster. Instead of me and you engaging in a couple of dozens of transactions, if we were acting very quickly on a regular basis, there were high-frequency trading algorithms engaging in tens of thousands of trades every second. And this is how the New York stock exchange lost a trillion dollars in value", prof. dr. Chesterman claims.

According to the professor, one way to address the issue of high-frequency trading algorithms is by imposing speed bumps or circuit breakers. Another way to manage the speed is to take away some of the incentives.
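A circuit breaker of the kind the professor mentions can be sketched as a simple rule: halt trading once the price falls by more than a set percentage within a short window. The sketch below is a minimal illustration with assumed thresholds; real post-2010 mechanisms, such as the US limit-up/limit-down bands, are considerably more detailed.

```python
from collections import deque

class CircuitBreaker:
    """Halt trading if the price drops more than `max_drop` (as a fraction)
    from its recent high within a rolling window of ticks."""

    def __init__(self, window=100, max_drop=0.05):
        self.prices = deque(maxlen=window)  # recent prices only
        self.max_drop = max_drop
        self.halted = False

    def on_tick(self, price):
        self.prices.append(price)
        recent_high = max(self.prices)
        if (recent_high - price) / recent_high >= self.max_drop:
            self.halted = True  # pause matching until manually reset
        return self.halted

breaker = CircuitBreaker()
for p in [100.0, 99.5, 98.0, 94.0]:  # a 6% slide trips the breaker
    if breaker.on_tick(p):
        print(f"Trading halted at {p}")
        break
```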

"Another example of how important speed is – is when a Chicago-based trading house paid tens of millions of dollars to lay a fabric optic cable between Chicago and New Jersey to save three milliseconds of the amount of time it took a signal to go through telephone wires," he says.

AI autonomy as a challenge

Prof. dr. Chesterman claims that the second challenge to regulating AI is more focused on its autonomy – the idea that a machine can make decisions without additional input from a human. Autonomous vehicles are a case in point: many people worry about whether cars will make the right decisions.
There are therefore ongoing discussions about the trolley problem, a thought experiment in ethics in which an onlooker can save five people in danger of being hit by a trolley by diverting it onto a track where it will kill just one person.

"Should the car be programmed in such a way as to save children or kill the driver. There was a detailed survey in which people were asked if the choice for the car was to kill ten schoolchildren or to drive off a cliff and kill the driver. Most people said it should kill the driver and save the children. The follow-up question was: would you ever get into the car programmed this way? And the majority of people said no", said the professor giving a lecture at Vilnius University.

While manufacturers assure that they would prioritize the driver's safety, the professor thinks that many of these limit scenarios are rather unrealistic.

"The legal challenge is the responsibility if such a vehicle were to injure someone or to run a red light – who, if anyone, should be held to blame. Neither of these problems is particularly difficult.
In terms of injury, there is already a case. If I were to injure you by negligently driving my car in a manner that caused injury to you, then you or your estate might be able to sue me. If, however, I injured you because you were standing next to my car when the car blew up, there is not much point in suing me because I'm dead. But you might sue the car's manufacturer under product liability", the professor claims.


Prof. dr. Chesterman says that with autonomous vehicles we will see a shift from drivers' personal liability to producers' product liability, with insurance playing a bridging role.

"It is mandatory to have insurance to drive on the roads, which means personal liability insurance for me. What are we going to see most likely over the coming decades, as we move to autonomous vehicles, is a shift to requiring product liability insurance for the vehicle", he says.

Concerns over opacity

Prof. dr. Chesterman claims that opacity in AI poses another regulatory challenge. New machine learning techniques, deep learning techniques, and artificial neural networks have emerged over the past decade, and these AI systems are difficult, if not impossible, to understand, even for experts.

"These systems may have millions of variables, and it might be impossible for an AI system to give an explanation that is understandable to a human. If you simplify them, you might get a greater understanding, but it might be at the expense of accuracy", he argues.

The professor says it is crucial to recognize that we do not need to understand everything in the world. As passengers, for example, we board planes without fully understanding aerodynamics. In pharmaceuticals, we accept treatments that work on the basis of statistical evidence from clinical trials.

"But there are some legal decisions where it is important that, for example, a judge makes the decision based not only on statistical history or data but based on the individualized determination of the case. And does it in a matter that is explainable and understandable by the parties concerned", he claims.

For example, a software trial is being conducted in Malaysia in which judges rely on algorithms for sentencing recommendations, while in China predictive systems prompt judges in making their determinations.

Challenges could be easily overcome

The professor argues that even though questions of speed, autonomy, and opacity pose some challenges, most activities of AI systems can be covered by most laws most of the time.

He says many ethical guidelines, frameworks, and principles started gaining ground around 2016, after the Cambridge Analytica scandal came to light. It was then realized that the consequences of AI systems going wrong extend beyond a driverless vehicle crashing into someone.

The professor says that in regulating AI it is essential to establish rules regarding human control and transparency. Human control means limiting the ability to develop AI systems that could escape human control or containment. Transparency means that you can get an explanation when there is an adverse decision.

"It is often discussed in the EU context of a right to explanation when there is an adverse decision. But it is a very limited interpretation of transparency because it presumes that you knew there was a decision about you, that you heard about it and it was against you, and it will be useful to get a response", he says.

According to prof. dr. Chesterman, given the spread of AI algorithms in decision making, transparency has to mean more than the ability to challenge adverse decisions because, in the future, so many decisions will be made about us.

---

The Arqus Research & Innovation (R&I) project aims to enhance the research and innovation dimension of the activities of the Arqus Alliance, which unites nine European universities. Moreover, the project addresses current global societal challenges through intensified joint research, characterized by the pursuit of excellence, openness, transparency, and effective engagement with society.