Dr. Enrique Herrera-Viedma

Biography: Dr. Enrique Herrera-Viedma is Professor in the Dept. of Computer Science and Artificial Intelligence at the University of Granada (UGR), where he currently serves as Vice-Rector for Research and Knowledge Transfer. He is an IEEE Fellow, an IFSA Fellow, and Doctor Honoris Causa of Oradea University.
He was Vice-President (VP) for Publications of the IEEE Systems, Man, and Cybernetics Society and is now its VP for Cybernetics, is one of the founders of the IEEE Transactions on Artificial Intelligence, and has been named a Highly Cited Researcher by Clarivate Analytics in Computer Science and Engineering from 2014 to 2023. He has published more than 350 papers in JCR journals; his h-index is 121 in Google Scholar (>65,000 citations) and 92 in WoS (>35,000 citations). In 2013 he published a paper in the prestigious journal Science on the new role of digital libraries in the era of the information society.

He is a member of the panels of experts of the national project evaluation agencies of Portugal, Switzerland, France, and Kazakhstan, and has been a member of the European Committee of Experts for the evaluation of strategic information infrastructure projects in Europe (ESFRI, European Strategy Forum on Research Infrastructures) since November 2017. He has also given plenary lectures and tutorials at multiple national and international conferences related to Artificial Intelligence, such as: 4th Int. Workshop on Preferences and Decisions, 2003, Trento (Italy); Modeling Decisions for Artificial Intelligence, 2004, Barcelona (Spain); AGOP 2005, Lugano; 4th EUSFLAT & 11th LFA Conference, Barcelona, 2005; Third Int. Workshop of Artificial Intelligence; ESTYLF 2010, Huelva; Int. IEEE Intelligent Systems 2014, Poland; IEEE SMC 2014; EUSFLAT 2017, Poland; SOMET 2017, Japan; PIC 2018, Nanjing, China; BAFI 2018, Chile; and IPMU 2018, Cadiz. He is Associate Editor of several AI journals, including IEEE TFS, IEEE ITS, IEEE TSMC-Syst, Knosys, ASOC, Fuzzy Optimization and Decision Making, Information Sciences, and Soft Computing.


About Trustworthy Artificial Intelligence

Abstract - Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system’s entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system’s life cycle, and considers the previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for the ethical use and development of AI systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: What each requirement for trustworthy AI is, Why it is needed, and How it can be implemented in practice. In addition, a practical approach to implementing trustworthy AI systems makes it possible to define the responsibility of AI systems before the law through a given auditing process. A responsible AI system is therefore the notion we introduce in this work, a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI also includes a debate on regulation, with the purpose of serving as an entry point to this crucial field in the present and future progress of our society.