
Artificial Intelligence’s Errors: Searching For A Guilty Party

September 1, 2023

One, two, three, go! 1. Google Photos image-processing software mistakenly labeled a Black couple as "gorillas"; 2. Facebook's automatic translation software rendered an Arabic post saying "good morning" as "hurt them" in Hebrew, leading to the arrest of a Palestinian man in Beitar Illit, Israel; 3. Bing Chat cited ChatGPT-generated disinformation as a source in its responses. There are far more than these three cases of AI malfunction; there are thousands of them, and their number grows by the second. Accordingly, a critical question arises: who should be responsible when artificial intelligence or an algorithm malfunctions, the programmer, the manufacturer, or the user?

It must be you. Legal liability frameworks everywhere are clearly unprepared for AI. They were conceived and enacted when humans caused most harm, with or without intention, but always with direct human input, as one author accurately put it. Not surprisingly, current liability inquiries tend to focus on the person who uses an AI algorithm.

There’s a little black box, holding all the truth about us. However, AI’s malfunctions do not always derive from the users’ own faults. So let’s turn to the programmers: the difficulty in putting the blame on machines lies in the impenetrability of the AI decision-making process. That is precisely the case with black-box AI models, whose algorithms are too complex to understand even for the programmer, because the programmer does not explicitly write the decision rules; the model learns them from data. Thus, we are back at the starting point: who is responsible?
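To make the black-box point concrete, here is a minimal, hypothetical Python sketch (not from the original article; the data and model are stand-ins). The programmer only chooses the architecture and training settings; the decision logic itself is learned from examples, and "opening the box" afterward yields only matrices of numeric weights, not any rule a human wrote or can read.

```python
# Hypothetical sketch of why a trained model is a "black box":
# the programmer never writes the decision rules themselves.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for, say, images labeled by a photo service.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# The programmer specifies only the network shape and training settings.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X, y)  # the decision logic is learned here, from the data

print("Prediction for one input:", model.predict(X[:1]))

# Inspecting the trained model reveals only weight matrices: thousands of
# numbers with no human-readable rule such as "if X, then label Y".
for i, w in enumerate(model.coefs_):
    print(f"Layer {i} weights: shape {w.shape}")
```

Even this toy model's behavior can only be audited from the outside, by testing inputs and outputs; the legal question of who answers for a bad output remains exactly where the paragraph above leaves it.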

There is more than one answer. As always happens when such a complex issue is posed, the approach to it must be complex too. One solution is contemplated by the European Union legal framework (part of which is not yet enacted). It sets out a fourfold approach: 1. A manufacturer of an AI system should be liable for damage caused by defects in its products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market. 2. An operator of an AI system that carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation. 3. A service provider ensuring the technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI; this should be taken into account in determining who primarily operates the technology. 4. A person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor, and maintain the technology in use and, if at fault, should be liable for breach of those duties. Besides, the proposed Artificial Intelligence Liability Directive creates a presumption of causality that gives claimants seeking compensation for damage caused by AI systems a more reasonable burden of proof and a real chance of a successful liability claim.

In America, it has been suggested that a potential response is to hold everyone involved in the use and implementation of the AI system accountable. It has also been argued that cases involving AI should be brought before a tech-proficient court (does such a thing always exist?).

I do not know much. European regulation and American suggestions notwithstanding, the issue of legal liability derived from AI misuse is as unresolved as it is fascinating. Beyond the reach of regulation rises another, most worrying uncertainty: could judges and jurors, with their limited tech knowledge, handle this matter? Could we? AI is uncharted terrain; AI liability is only one of its compartments.

This article was originally posted at Miami Independent.

This article was written by Martín Elizalde

