A study into the layers of automated decision-making: emergent normative and legal aspects of deep learning

Authors: Karanasiou, A.P. and Pinotsis, D.A.

http://eprints.bournemouth.ac.uk/28269/

Journal: International Review of Law, Computers and Technology

Volume: 31

Issue: 2

Pages: 170-187

eISSN: 1364-6885

ISSN: 1360-0869

DOI: 10.1080/13600869.2017.1298499

© 2017 Informa UK Limited, trading as Taylor & Francis Group. The paper dissects the intricacies of automated decision-making (ADM) and urges a refinement of the current legal definition of artificial intelligence (AI) when pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. Whilst devising a toolkit to measure algorithmic determination in automated/semi-automated tasks might prove a tedious task for the legislator, our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user and the algorithm. The paper offers a fresh look at AI, which exposes certain vulnerabilities in its current legal interpretation. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation, reflecting distinct degrees of human–machine interaction. Part 2 further discusses the intricate nature of AI algorithms and considers how observed patterns in acquired data can be utilized. Finally, the paper explores the legal challenges that result from user empowerment and the requirement for data transparency.