Beyond Compliance: Abstracts

Michael Bernstein

Ethics and society review: ethics reflection as a precondition to research funding

Artificial intelligence (AI) research is routinely criticized for its realized and potential impacts on society, and we lack adequate institutional responses to this criticism and to the responsibility that it reflects. AI research often falls outside the purview of existing feedback mechanisms such as Institutional Review Boards (IRBs), which are designed to evaluate harms to human subjects rather than harms to human society. In response, we have developed the Ethics and Society Review board (ESR) at Stanford University, a feedback panel that works with researchers to mitigate negative ethical and societal aspects of AI research. The ESR’s main insight is to serve as a requirement for funding: researchers cannot receive grant funding from a major AI funding program at our university until they complete the ESR process for the proposal. We will describe the ESR as we have designed and run it over two years and more than 70 proposals. We analyze aggregate ESR feedback on these proposals, finding that the panel most commonly identifies issues of harms to minority groups, inclusion of diverse stakeholders in the research plan, dual use, and representation in data. Surveys and interviews of researchers who interacted with the ESR found that 58% felt the process had influenced the design of their research project, that 100% were willing to continue submitting future projects to the ESR, and that researchers sought additional scaffolding for reasoning through ethics and society issues.

Michael Bernstein is an Associate Professor of Computer Science and STMicroelectronics Faculty Scholar at Stanford University, where he is a member of the Human-Computer Interaction Group. His research focuses on the design of social computing systems. This research has won best paper awards at top conferences in human-computer interaction, including CHI, CSCW, and UIST, and has been reported in venues such as The New York Times, New Scientist, Wired, and The Guardian. Michael has been recognized with an Alfred P. Sloan Fellowship, UIST Lasting Impact Award, and the Patrick J. McGovern Tech for Humanity Prize. He holds a bachelor's degree in Symbolic Systems from Stanford University, as well as a master's degree and a Ph.D. in Computer Science from MIT.


Yves-Alexandre de Montjoye

The search for anonymous data - From de-identification to privacy-preserving systems

We live in a time when information about most of our movements and actions is collected and stored in real time. The availability of large-scale behavioral data dramatically increases our capacity to understand and potentially affect the behavior of individuals and collectives. The use of this data, however, raises legitimate privacy concerns.

Anonymization is meant to address these concerns: allowing data to be fully used while preserving individuals’ privacy. In this talk, I will first describe a line of research on attacks against de-identified datasets, showing how traditional data protection mechanisms mostly fail to protect people’s privacy in the age of big data. I will then describe what I see as a necessary evolution of the notion of data anonymization towards an anonymous use of data, and discuss the pros and cons of some of the modern privacy engineering techniques currently being developed, ranging from Differential Privacy to Query-Based Systems. I will conclude by describing how, when combined, I think these techniques will allow large-scale behavioral data to be used while giving individuals strong privacy guarantees.
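As a concrete illustration of the kind of privacy-engineering technique the talk covers, here is a minimal sketch of the Laplace mechanism from differential privacy, which answers an aggregate query with calibrated noise so that the presence or absence of any single individual barely changes what an analyst can learn. The records, query, and epsilon value below are hypothetical, chosen only for illustration.

```python
import numpy as np

def dp_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1: adding or removing one person changes
    the true answer by at most 1, so Laplace noise with scale
    1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical mobility records: "how many users were seen at antenna 42?"
records = [
    {"user": "a", "antenna": 42},
    {"user": "b", "antenna": 7},
    {"user": "c", "antenna": 42},
]
print(dp_count(records, lambda r: r["antenna"] == 42, epsilon=0.5))
```

A query-based system, in this spirit, keeps the raw data behind an interface and applies a mechanism like this (together with query auditing and a privacy budget) to every answer it releases: the data is used anonymously rather than released in a supposedly anonymized form.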

Yves-Alexandre de Montjoye is an Associate Professor at Imperial College London, where he heads the Computational Privacy Group. He's currently a Special Adviser on AI and Data Protection to EC Justice Commissioner Reynders and a Parliament-appointed expert to the Belgian Data Protection Agency. In 2018-2019, he was a Special Adviser to EC Commissioner Vestager, for whom he co-authored the Competition Policy for the Digital Era report. He's affiliated with the Data Science Institute and the Department of Computing. He was previously a postdoctoral researcher at Harvard working with Latanya Sweeney and Gary King, and he received his PhD from MIT under the supervision of Alex "Sandy" Pentland.


Bernd Stahl

Realising ethics and responsible innovation in a large neuroinformatics project

The talk will describe how ethical reflection and responsible innovation can be put into practice in a large neuroinformatics research project. The project in question is the EU-funded Future and Emerging Technologies Human Brain Project (HBP, www.humanbrainproject.eu). The HBP has a duration of 10 years (2013-2023), core EU funding of around €450 million, more than 100 partner organisations and over 500 researchers. It brings together neuroscience, medicine and computer science with a view to building an infrastructure for brain research. It was recognised from the outset that this project would raise many ethical concerns, and the project has therefore always had a strong component focused on ethics and society, with particular emphasis on responsible innovation. In the talk I will discuss the various issues that need to be addressed in the project. This provides the background for an overview of the structures and processes that were put in place to address them. I will end by highlighting some of the lessons learned and the challenges the project now faces in the transition from a research project to a distributed infrastructure.

Bernd Carsten Stahl is Professor of Critical Research in Technology and Director of the Centre for Computing and Social Responsibility at De Montfort University, Leicester, UK. His interests cover philosophical issues arising from the intersections of business, technology, and information. This includes the ethics of ICT and critical approaches to information systems.


Catherine Tessier

Research ethics training: debating is learning

Research ethics has to be experienced through one’s involvement in debates about real research situations. Ethics is not compliance but reflection on which values to consider in making a decision and on how those values may conflict with one another. That is why ethical consideration of questionable projects involving digital technologies goes beyond checking a list of predefined properties. We will give an insight into how researchers can experience ethics through debating. We will also claim that, in the context of responsible research, thinking about one’s own research is as necessary as studying the state of the art.

Dr. Catherine Tessier is a senior researcher at ONERA, Toulouse, France. Her research focuses on modelling ethical reasoning and on ethical issues related to the use of “autonomous” robots. She is also ONERA’s research integrity and ethics officer. She is a member of the French national ethics committee for digital technologies and a member of the ethics committee of the French ministry of defense. She is also a member of Inria’s Research Ethics Board (Coerle).


Sally Wyatt

New digital research possibilities/Old ethical forms

In this presentation, I will reflect on the opportunities provided to researchers by digital technologies, across all stages of research, from data collection to publication, including analysis and engagement with citizens and other social partners. Digital technologies also make collaboration across distance much easier than in the past. Understanding the challenges facing society, including those raised by the climate crisis, poverty, pandemics and large-scale displacements of people, requires interdisciplinary research to enrich our knowledge and possibilities for action.

The values, norms and practices guiding ethical research have a long history. These are often very much dependent on national and disciplinary cultures, and often driven by high-profile instances of fraud or abuse. Thus, while the growth of interdisciplinary and international research teams may contribute to the solution of urgent societal problems, it may also be accompanied by misunderstandings and conflicts about what constitutes responsible and good research.

There are no simple solutions, but I will attempt to provide some indications of actions that might promote good research. Some actions are appropriate for individual researchers and research groups; others are for universities and funding agencies.

Sally Wyatt is Professor of Digital Cultures in the Faculty of Arts and Social Sciences at Maastricht University. She is also the chair of the ZonMw Health Council's programme called ‘Promoting Good Science’. Her research focuses on the ways in which digital technologies are used in healthcare, and on what digital technologies mean for the production of knowledge in the humanities and the social sciences.


Sylvie Delacroix

Data Trusts and the need for bottom-up data empowerment infrastructure

Data Trusts are a proposed bottom-up mechanism, whereby data subjects choose to pool the rights they have over their personal data within the legal framework of the Trust. They aim to empower us, data subjects, to ‘take the reins’ of our data in a way that acknowledges both our vulnerability and our limited ability to engage with the day-to-day choices underlying data governance.

Specifically, there are three key problems that bottom-up data trusts seek to address:

  • Lack of mechanisms to empower groups, not just individuals
  • Can we do better than current ‘make-believe’ consent?
  • Can we challenge the assumed trade-off between promoting data-reliant common goods on one hand and addressing vulnerabilities that stem from data sharing on the other?

Sylvie Delacroix is a professor in law and ethics at the University of Birmingham and a fellow of the Alan Turing Institute. Her interest in the infrastructure that molds our habits notably leads her to pay attention to the power imbalances that stem from our increased reliance on data-reliant tools. As a concrete way of mitigating the latter, she co-chairs the Data Trusts Initiative. @SylvieDelacroix


Rowena Rodrigues

Looking back, moving forward: AI research ethics

This presentation will share results of the EU-funded Horizon 2020 SIENNA project – what we learnt from our surveys of research ethics committee (REC) approaches and of AI codes: the ethical values and principles identified, guidance, areas of advancement, what new codes should consider, and specific guidance for developing RECs in the field. The presentation will also cover how SIENNA results were incorporated into Horizon Europe. We will also share some insights on how the EU AI Act might impact researchers in the future.

Rowena Rodrigues co-leads and works to drive the growth of the Innovation & Research services of Trilateral Research in defined strategic business areas. She carries out legal and policy research related to new technologies and provides regulatory, industry and policy advice. Rowena’s background lies in research and consultancy in law (including human rights), ethics and impacts of new and emerging technologies (e.g., AI, robotics, human enhancement technologies). She has expertise in various types of impact assessments (privacy, legal, ethical, socio-economic), comparative legal analysis, privacy, and data protection (law, policy, and practice). She has a keen interest in the intersections of ethics and law in relation to new technologies, responsible research and innovation, cybersecurity, and law enforcement research. She was deputy coordinator of the SIENNA project. https://trilateralresearch.com/tri_profile/rowena-rodrigues


Raja Chatila

Responsible development, use and governance of AI

Innovations in several sectors, from healthcare to e-commerce and transportation, including public services, are enabled and accelerated by digital technologies such as Artificial Intelligence systems. While this technology makes it possible to increase productivity or reduce costs through the automation of physical or software processes, it also has a direct impact on the lifestyles of individuals and on society.

The formulation of the principles of medical bioethics became a necessity after the criminal misuse of medical sciences by the Nazi regime. The negative impacts of AI systems in terms of discrimination and bias, but also their transformative impact on society and work and their limitations in terms of robustness, have raised the awareness of scientists and developers, of decision- and law-makers, and of the general public about issues that emerge from the development and deployment of this technology. As these systems become more powerful (but not necessarily more performant), questions about the very nature of AI systems become more acute, as recently shown by the debate about large language models.

Innovation often outpaces regulation. Ethical reflection, ethics initiatives and committees have the capacity to question the motivations and relevance of specific technological choices, their safety, and their impact on ethical principles, on rights and values, on populations and on the environment. This leads to the formulation of digital ethics principles to ground the development, deployment and use of the technology, and to the definition of responsible governance frameworks that can be achieved through a combination of regulation, codes of conduct, techno-ethical standards development, certification, and public oversight.

Raja Chatila is Professor Emeritus of Artificial Intelligence, Robotics and IT Ethics at Sorbonne University in Paris. He is former director of the SMART Laboratory of Excellence on Human-Machine Interactions and of the Institute of Intelligent Systems and Robotics. He is co-chair of the Responsible AI Working group of the Global Partnership on AI (GPAI). He is chair of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and member of the French National Pilot Committee on Digital Ethics (CNPEN). He is IEEE Fellow and recipient of the IEEE Robotics and Automation Society Pioneer Award.


Karën Fort

Teaching ethics in NLP: DIY (do it yourself)

In the past five years, ethical issues have become more visible in AI, in particular in Natural Language Processing. “Ethics and NLP” tracks have appeared in many of the domain’s top conferences, and an ethics committee has been put in place by the Association for Computational Linguistics (ACL). However, training in ethics is still too rare: fewer than 38% of respondents to the 2022 “ethics in NLP” survey say they have participated in some training on the subject (either as teachers or as students). Building material on the subject and making it available is therefore paramount. I started doing so in 2017, with two goals in mind: i) put the students in action from the very beginning, and ii) do not limit ethics to consequentialism. I’ll share my experience with you and show what a well-organized Do It Yourself (DIY) approach can bring to such training. All the material produced is available on my Web page (https://members.loria.fr/KFort/).

Karën Fort is Associate Professor at Sorbonne Université and does her research at the LORIA laboratory in Nancy. Her primary research interest is manual annotation for natural language processing (NLP), which she extended to crowdsourcing annotation, in particular using Games With A Purpose (GWAPs). She also developed an interest in ethics in NLP and co-organized the first colloquium on the subject in 2014, in France, followed by a national workshop (ETeRNAL 2015 and 2020) and a special issue of the TAL journal in 2016. She initiated the ethics and NLP French blog (http://www.ethique-et-tal.org/) as well as the survey on ethics in NLP (Fort & Couillault, 2016). She was co-chair of the first two ethics committees in the field (EMNLP 2020 and NAACL 2021) and is co-chair of the newly created ethics committee of the Association for Computational Linguistics (ACL). She was also a member of the Sorbonne IRB. She teaches ethics in data science and NLP at Assas University in Paris, at IDMC in Nancy (NLP Master), and at the University of Malta.


Gordana Dodig-Crnkovic

Research-based perspective in teaching ethics to engineering students

For more than twenty years, starting in 2001 at Mälardalen University, Gordana has taught students of Computer Science, Engineering, Interaction Design and occasionally Economics, in the courses “Professional Ethics” at Mälardalen University (Bachelor, Master and PhD levels, 2001-2014) and “Research Ethics and Sustainable Development” at Chalmers University of Technology (PhD level, 2014-2017). Over the years Gordana has also given regular guest lectures on Professional Ethics, Ethics of Computing, Ethics of AI, Design Ethics, Ethics for Cognitive Scientists, Robotic Ethics and Ethics of Autonomous Cars. In all of her educational work in ethics she uses current research, especially on the ethical aspects of emerging technologies. In this talk Gordana will present lessons learned, illustrated by concrete examples from the courses, and briefly sketch future possibilities, anticipations and hopes for further developments.

Gordana Dodig-Crnković is Professor of Interaction Design at Chalmers University of Technology and Professor of Computer Science at Mälardalen University, Sweden. She holds PhD degrees in Physics and Computer Science. Her research focuses on the relationships between computation, information and cognition, including ethical and value aspects. She is a member of the editorial board of the Springer SAPERE series, World Scientific Series in Information Studies, and various journals. She is a member of the AI Ethics Committee at Chalmers and the Karel Capek Center for Values in Science and Technology. More information: http://gordana.se


Panagiotis Kavouras

Open science: hopes, challenges and the intervention of the ROSiE project

Open science (OS) has come to foster the free sharing of research outputs and to boost the institutionalization of such practices. More broadly, it has been argued that OS carries the potential to open up all stages of the research process. This could render science more relevant to societal needs by integrating consultation processes, more collaborative by enabling citizens to participate, more reliable by making research more transparent, and more equitable through the free sharing of its results. However, like almost all scientific breakthroughs, OS is a double-edged sword, since it raises acute concerns about research ethics and research integrity. It is therefore crucial to identify and analyse current ethical, social, legal, and research integrity-related challenges in the context of OS practice. The ROSiE (Responsible Open Science in Europe) project is carrying out this broad identification and analysis exercise in order to develop relevant guidelines and training materials, which will be made available through an open online platform: the ROSiE Knowledge Hub. ROSiE, now at the midpoint of its timeline, has already mapped existing OS policies, guidelines, and OS infrastructures, and has produced a preliminary set of recommendations, elements of which will be part of the presentation. We plan to connect these elements with the interventions currently acknowledged as necessary to make research ethics and research integrity a structural component of OS. In this way, ROSiE has the ambition to support the endeavours that strive to ensure that the systemic changes OS may cause will be mostly beneficial to the modus operandi of research.

Acknowledgements: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under GA No 101006430.

Dr Kavouras is a senior researcher at the School of Chemical Engineering, National Technical University of Athens. He is a physicist with an MSc in Materials Science and Technology and a PhD in Physics. For several years he was involved in research on solid waste management and the characterization of mechanical properties at the micro-scale and nano-scale. Currently, his main research interest is research integrity, pursued through participation in several Horizon 2020 and Horizon Europe projects. He is also involved in capacity building and awareness-raising activities for research integrity in Greece.


Jeroen van der Ham

Beyond “human subject”: the challenges of ethics oversight in digital science

Research in digital science and cybersecurity has given rise to new dilemmas. Digital infrastructures form a fundamental part of our current society, which means that many digital experiments and measurements can have an indirect effect on humans. Ethics oversight in this area therefore cannot simply be limited to “human subject research”. Jeroen will provide some examples and experiences in dealing with these kinds of dilemmas.

Jeroen van der Ham is associate professor of Cyber Security Incident Response at the University of Twente. Jeroen combines this with his work at the National Cyber Security Centre in The Netherlands (NCSC-NL). At NCSC-NL he focuses on the many developments in coordinated vulnerability disclosure and ethics of the security profession. At the University of Twente he focuses on incident response, ethics of incident response and internet security research, denial of service attacks, and anonymization in network measurements. He is also a member of the Ethics Committee of the EEMCS faculty at the University of Twente.


Casey Fiesler

Data is people: research ethics and the limits of human subjects review

Everyone’s tweets, blog posts, photos, reviews, and dating profiles are all potentially being used for science. Though much of this research stems from social science and purposefully engages with the human aspects of online content, in many cases this human-created content simply becomes “data”—for example, for the creation of training datasets for machine learning algorithms. In these kinds of contexts—from algorithms trained on dating profile photos to recognize gender to algorithms that can predict mental health conditions from your tweets—traditional ethical oversight such as university Institutional Review Boards often does not apply. But what is the line between “data” and human subjects research?

Casey Fiesler is an associate professor of Information Science (and Computer Science by courtesy) at University of Colorado Boulder. She researches and teaches in the areas of technology ethics, internet law and policy, and online communities. Her work on research ethics for data science, ethics education in computing, and broadening participation in computing is supported by the National Science Foundation, and she is the recipient of an NSF CAREER Award. She holds a PhD in Human-Centered Computing from Georgia Tech and a JD from Vanderbilt Law School.


Inioluwa Deborah Raji

Research accountability in machine learning

Many in the machine learning field see ethical challenges as an issue for other stakeholders to solve. In this talk, I delve into the reality of what responsibilities ML researchers have in protecting those potentially impacted by the design, development and possible deployment of their research. In particular, I’ll discuss my personal experiences in participating in the research ethics review process at the Neural Information Processing Systems (NeurIPS) conference, one of the largest international publication venues for ML research. I’ll discuss the challenges specific to implementing this system at scale and the impact this development has had on the move towards more meaningful ethical oversight for machine learning research more broadly.

Deborah is a Mozilla fellow and CS PhD student at University of California, Berkeley, who is interested in questions on algorithmic auditing and evaluation. In the past, she worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products. She has also worked with Google’s Ethical AI team and been a research fellow at the Partnership on AI and AI Now Institute at New York University working on various projects to operationalize ethical considerations in ML engineering practice. Recently, she was named to Forbes 30 Under 30 and MIT Tech Review 35 Under 35 Innovators.


Dirk Lanzerath

Ethics reviews in modern research: learning from medical RECs

The use of ethics committees in medical research was not always a matter of course. It arose from negative experiences during the Second World War and in research thereafter. Independent ethics committees are supposed to advise researchers; the objective of the consultation is to protect research participants and their data. That such problems also arise in other scientific disciplines is by no means a new experience, but the recognition that RECs can be beneficial here too is rather new. In the meantime, this is also strongly promoted by research funding agencies and scientific journals. But how can RECs be used outside of medicine without the administrative burden becoming greater than the societal benefit? How does ethical consultation become a research facilitator rather than a research impediment? How can other disciplines learn from developments in medical research?

Professor of Ethics and Research Ethics, graduated in biology, philosophy and education; PhD and venia legendi (habilitation) of the faculty of philosophy of the University of Bonn; Secretary General of the European Network of Research Ethics Committees (EUREC); Head of the German Reference Centre for Ethics in the Life Sciences (DRZE), Bonn (Central Research Institute of the University of Bonn and Research Centre of the North Rhine Westfalian Academy of Sciences, Humanities and the Arts); honorary professor at the Centre for Ethics and Responsibility at University of Applied Sciences Bonn Rhein-Sieg; member of the board of the Central Ethics Committee at the German Physician Association; member of the Ethics Committee of the Medical Association North Rhine; member of the Ethics Committee of the University of Maastricht; member of the Editorial Board of the Journal "Research Ethics Review"; study abroad professor for ethics/bioethics/environmental ethics/research integrity/ethics and the arts at the Study Abroad Program of the Loyola Marymount University, Los Angeles, Ca. (USA) at the Academy of International Education (AIB).


Philip Brey

Research ethics guidelines for the computer and information sciences

I will present, discuss and defend research ethics guidelines for the computer and information sciences. Only very recently has there been an effort to establish research ethics frameworks and ethics committees for these fields. Arguments are presented concerning these developments, and a specific proposal is made for ethics guidelines for the computer and information sciences. It is argued that although there are shared issues and principles for research ethics across scientific fields, every scientific field also raises unique ethical issues that require special ethical principles and guidelines. Following this discussion, the historical development of professional ethics and research ethics in the engineering sciences and the computer and information sciences is discussed, and special guidelines for these fields are presented that were developed as part of a CEN (European Committee for Standardization) standard for research ethics within the European Commission-funded SATORI project on research ethics and ethics assessment.

Philip Brey (PhD, University of California, San Diego, 1995) is professor of philosophy and ethics of technology at the Department of Philosophy, University of Twente, the Netherlands. He is currently also programme leader of ESDiT (Ethics of Socially Disruptive Technologies), a ten-year research programme with a budget of €27 million and the involvement of seven universities and over sixty researchers (www.esdit.nl). ESDiT runs from 2020 to 2029. He is a former president of the International Society for Ethics and Information Technology (INSEIT) and of the Society for Philosophy and Technology (SPT). He is also former scientific director of the 4TU.Centre for Ethics and Technology (2013-2017). He is on the editorial board of twelve leading journals and book series in his field, including Ethics and Information Technology, Nanoethics, Philosophy and Technology, Techné, Studies in Ethics, Law and Technology, and Theoria.
