Machine Learning, Abstract Thought, and the Expanding Reach of AI: Ethical and Conceptual Frontiers

A conference and workshop co-hosted by the Institute for Research in Sensing (IRiS) and the Department of Philosophy at the University of Cincinnati, with support from the Taft Research Center.

April 7th and 8th, 2022 in the Probasco Auditorium

  • Conference organizers: Dr. Peter Langland-Hassan, Associate Professor of Philosophy; Mel Andrews, Doctoral Candidate
  • Conference coordinators: Dr. Nathan Morehouse, Associate Professor & IRiS Director; Dr. Neşe Devenot, IRiS Postdoctoral Associate; Imogen Watts, IRiS Co-op Student
View the lecture videos at our livestream archive.

Speakers

Cameron Buckner

Associate Professor of Philosophy, University of Houston

Dr. Buckner is a philosopher of mind and cognitive science well known for his work on deep neural networks and their relationship to abstract thought, as well as other foundational issues at the intersection of cognitive science and the philosophy of AI.

Kathleen Creel

Postdoctoral Embedded EthiCS Fellow, Stanford University, Center for Ethics in Society and Institute for Human-Centered Artificial Intelligence

Dr. Creel is a philosopher & ethicist specializing in the moral, political, & epistemic implications of machine learning. She researches questions such as how we can gain scientific understanding through the use of “black-box” computational methods.

Eva Dyer

Assistant Professor, Coulter Department of Biomedical Engineering, Georgia Institute of Technology & Emory University

Dr. Dyer works at the intersection of neuroscience & machine learning—developing computational approaches to interpret complex neuroscience datasets & designing new machine intelligence architectures inspired by the organization of biological brains.

Ahmed Elgammal

Professor of Computer Science, Rutgers University

Dr. Elgammal is a professor of computer science and the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers University, which focuses on data science in the domain of digital humanities.

Subbarao Kambhampati

Professor of Computer Science, Arizona State University

Dr. Kambhampati is past president of the Association for the Advancement of Artificial Intelligence (AAAI) and has written widely on A.I. and its societal impact, in both professional and popular venues.

S. Matthew Liao

Director of the Center for Bioethics & Affiliated Professor of Philosophy, New York University

Dr. Liao is a philosopher interested in a wide range of issues, including ethics, epistemology, metaphysics, moral psychology, and bioethics. He is the author of many articles on machine learning and morality and is the editor of the volume Ethics of Artificial Intelligence.

Zachary Lipton

Assistant Professor of Machine Learning and Operations Research, Carnegie Mellon University

Dr. Lipton’s research concerns core machine learning methods and theory, including their applications in healthcare and natural language processing. His interests extend to critical concerns about the interpretability of machine learning algorithms.

Mariya Toneva

Postdoctoral Fellow, Neuroscience Institute, Princeton University

Dr. Toneva’s research is at the intersection of Machine Learning, Natural Language Processing & Neuroscience, with a focus on building computational models of language processing in the brain that can also improve natural language processing systems.

Event Summary

On April 7th and 8th, 2022, the Department of Philosophy and the Institute for Research in Sensing (IRiS) at the University of Cincinnati will host the interdisciplinary conference and workshop, “Machine Learning, Abstract Thought, and the Expanding Reach of AI: Ethical and Conceptual Frontiers.” The event has two principal aims. The first is to assess the prospects of deep neural network (DNN) algorithms for modeling the human capacity for language and abstract thought, especially the discernment and prediction of human psychological states. The second is to anticipate and explore the ethical dilemmas that such advances may generate. The need for AI that avoids race, class, gender, and other biases is abundantly clear; yet another pressing question is whether we will be better served by AI that accurately discerns the content of our character. Is it possible that AI will know us better than we know ourselves? If so, what effect would this have on the human experience? If not, why are there such barriers to the reach of AI?

The two-day conference/workshop exploring these questions will feature invited lectures from a distinguished group of philosophers, computer scientists, cognitive neuroscientists, artists, and ethicists. These lectures will be open to the public and will take as their topic the intertwined themes of the conceptual reach of AI and the ethical dilemmas that attend its expansion into new realms. We are particularly interested in the capacity of AI to model the kinds of abstract cognition normally thought to be exclusive to humans.  

In the workshop portion of the event, which will span parts of both days, invited speakers will form smaller working groups with UC faculty members to brainstorm new approaches to the conference’s questions. The aim of the workshop is to foster lasting research collaborations among UC and external researchers, with successful working groups co-writing and publishing research papers that address the nest of quandaries that arise as deep learning peers deeper into the mind. These new working groups will have the opportunity to publish any resulting collaborative work in a special issue.

Subject Overview

Contemporary deep neural network (DNN) models excel at discriminating members of concrete (i.e., easily perceptible) categories, such as dogs, hammers, and artichokes. Questions remain, however, concerning whether AI could extend to discriminating instances of more abstract categories, such as “democracy,” “belief,” “desire,” “freedom,” or even “justice.” Whether and how machine learning can be extended to learning such categories is important both for understanding the potential reach of such technologies and for understanding whether the human sensory neural networks on which they are modeled can be invoked in explanations of how humans themselves learn abstract concepts.

At bottom, we face the question of whether the simplest capacity for discriminating one color from another can, by degrees, scale up into a capacity for representing extremely abstract concepts and relationships of a sort that, until now, have been considered the exclusive province of human thought and which, for many, are inextricably linked to language-use. DNNs can shed new light on this question because their programming structure is inspired by the structure and functioning of the human neural networks involved in simple perceptual tasks. Thus, the ability of DNNs to master abstract relationships would suggest—as Charles Darwin and others have long suspected—that the cognitive differences between humans and other animals (including their linguistic differences) are ones of degree and not of kind. 

These questions about the reach of DNN models in machine learning generate new moral predicaments as well. We face a future where AI easily discriminates and tracks not only objects and faces but human mental states as well, including beliefs, emotions, and political affiliations, all on the basis of quite superficial input. Yet it is only in the last two years that the conferences where groundbreaking AI research is presented, the field’s equivalent of academic journal publications, have instituted an ethical component in the review process. The norms and procedures for such reviews are still in flux and subject to debate, with some conference attendees closely involved in those discussions.

Looking further to the future, a capacity in AI for abstract thought may bring with it an ability not only for awareness of the minds of others, but for AI to understand itself as a thinking thing. Whether this is grounds for assigning moral agency to such entities—and for assigning them both moral duties and moral rights—may be pressing practical questions sooner than we think. 

Conference Structure

Lectures with Q&A: Each day of the conference will feature four hours of workshop activities and four public lectures from our invited speakers. Each lecture will be 30 minutes, followed by a 5-minute moderated question-and-answer period. A group luncheon will be provided for workshop participants at midday each day, and a general reception and dinner will be provided at the conclusion of each day’s events.

Workshop: Workshop participants will be split into 4 groups of roughly 5 individuals each (e.g., 2 invited speakers, 2 UC faculty, 1 early career researcher).  We are now accepting applications to be among the UC workshop participants. Click here to apply to the workshop. If admitted, participation is free.

  • On Day 1, the workshop will involve introduction and brainstorming activities among the eight invited speakers and a pre-selected group of UC faculty and advanced graduate students or postdocs from across the humanities and sciences.
  • On Day 2, workshop teams will reconvene to synthesize and summarize their ideas into themes, resulting in a “poster” to be shared with the other working groups. Groups will circulate and provide feedback on each other’s posters.
  • For the remainder of the workshop time, the groups will meet to digest and integrate the feedback from other workshop participants, focusing on identifying a single theme or set of ideas suitable for further exploration and subsequent publication as a review, perspective piece, or theoretical work.

Conference Schedule

Afternoon talk schedule. All talks will take place in the Probasco Auditorium and will run 35 minutes, with a 5-minute break between talks.
Thursday, April 7th

13:00  Welcome Address
       University Provost, Valerio Ferme

13:20  Beyond Curve-Fitting
       Zachary Lipton, Director, ACMI (Approximately Correct Machine Intelligence) Lab, Machine Learning, Carnegie Mellon University

14:00  Deep Neural Networks as Model Organisms for Human Language Comprehension
       Mariya Toneva, Princeton Neuroscience Institute, Princeton University

14:35  Coffee & Tea

14:50  Moderate Empiricism and Machine Learning Research: Domain-general faculties and domain-specific abstractions
       Cameron Buckner, Philosophy, University of Houston

15:30  Art at the Age of AI
       Ahmed Elgammal, Computer Science, Rutgers University

Friday, April 8th

13:00  Ethics of AI and Health Care: Towards a substantive human rights framework
       S. Matthew Liao, Director, Center for Bioethics, New York University

13:40  Picking on the Same Person: Does algorithmic monoculture homogenize outcomes?
       Kathleen Creel, HAI, Stanford University, Philosophy, Northeastern University

14:15  Coffee & Tea

14:30  Tune In and Dropout: How we can use dropout to build more generalizable links between the brain and behavior
       Eva Dyer, Biomedical Engineering, Georgia Institute of Technology, Emory University

15:10  Human-Aware AI: How mental-modeling can lead to cooperation as well as coercion
       Subbarao Kambhampati, Computing & AI, Arizona State University