Professor Seth Lazar

D.Phil. M.Phil. BA (Hons)
Professor
ANU College of Arts and Social Sciences

Areas of expertise

  • Social Philosophy
  • Ethical Theory
  • Applied Ethics
  • Human Rights and Justice Issues
  • Political Theory and Political Philosophy
  • Decision Theory
  • International Relations
  • Ethical Use of New Technology

Research interests

I write on topics in political philosophy, and normative and applied ethics. In recent years, I have begun to focus exclusively on the morality, law and politics of data and AI (broadly construed). I am concurrently pursuing four major projects, all as part of my Machine Intelligence and Normative Theory (MINT) Lab.

My central interests are in the political philosophy of data and AI. I aim to evaluate how big data, AI, and related technologies create new modalities for the exercise and concentration of power, and to ask whether and how those new power relations can be legitimated and justified. This involves rethinking how analytic philosophers conceptualise power and legitimacy, as well as integrating empirical and technical research on AI and its impacts on society, to enable both a critical evaluation of where we are now and a productive attempt to articulate promising avenues for the future. This work lays the foundations for my Future Fellowship, an AU$1,020,698 grant from the Australian Research Council running from 2022 to 2026.

As well as thinking about how to reshape society in order to legitimate (or eliminate) the new power relations enabled by intelligent systems, I also think about how to design those systems themselves, so that they take our moral or political values into account. This involves answering several distinct kinds of question. How would we decide which values to incorporate? Who should decide? Once that decision is made, how would we operationalise it? This means moving from questions in political epistemology to practical questions in algorithm design, in collaboration with computer scientists. This project is associated with the HMI Grand Challenge, an AU$5,500,000 investment by the ANU in the multidisciplinary pursuit of democratically legitimate intelligent systems. I was founding lead of HMI from 2018 to 2021, when I stepped down to take up my Future Fellowship.

Third, I am interested in how to design intelligent systems that are calibrated to work with humans as we are, not as we might be: systems that take into account the predictable and fallible biases and heuristics we use to navigate the world, and that attend to how our interaction with intelligent systems will itself change us. This project involves collaboration with sociologists and political scientists, and is associated with my Templeton World Charity Foundation grant on moral skill and AI (US$234,000, 2021-23).

Last, I have a new project, beginning in 2023, on 'Socially Responsible Insurance in the Age of AI'. Funded by a Linkage grant (total award approximately AU$1,000,000, 2023-25) bringing together ANU, IAG, Sydney and the Gradient Institute, the project aims to identify the social costs and benefits of using artificial intelligence in insurance, and to design practical interventions—responsible design workshops, practical guidance, regulatory proposals, new algorithmic tools—that realise the benefits while mitigating the costs. It will focus in particular on themes associated with privacy, fairness, and power.

Before working on AI, I worked on the ethics of risk, leading an ARC Discovery Project on the topic (AU$335,000, 2017-2021), and before that on the ethics of war, supported by an ARC DECRA on ‘Justifying War’ (AU$366,000, 2013-2016).

Throughout all of my work—on war, moral theory, and AI—I have had two central preoccupations: with the inadequacy of an individualist, moral-philosophy-based approach for addressing the normative questions raised by complex institutions and collective action problems; and with the necessity of incorporating uncertainty into moral and political theorising, rather than abstracting away from it. Risk and the distinct challenges raised by political normativity structured my monograph on the protection of civilians in war, and they are also central to how I think about AI.

I've published papers in many top journals, including Ethics (2009, 2015, 2017), Philosophy & Public Affairs (2010, 2012, 2018), Australasian Journal of Philosophy (2015), Noûs (2017), Canadian Journal of Philosophy (2021), Synthese (2021), Philosophical Quarterly (2018), Philosophical Studies (2017), Oxford Studies in Political Philosophy (2017), Oxford Studies in Normative Ethics (2019), and others. See 'Research' for more. I'm also on the editorial board of Oxford Studies in Political Philosophy, and on the executive committee for ACM FAccT. Recently I have taken on leadership roles for computer-science-led conferences on AI and society. As well as serving on program committees (for both main-track and ethics review) at IJCAI, NeurIPS, EAAMO and other conferences, I was Program Co-Chair for the ACM/AAAI AI, Ethics and Society Conference in 2021, and General Co-Chair for the ACM Fairness, Accountability and Transparency Conference (FAccT) in 2022.

In 2019, I was awarded the ANU Vice Chancellor’s Award for Excellence in Research. I also received an Early Career Researcher Commendation from the Academy of the Social Sciences in Australia in 2016, and won the American Philosophical Association’s Frank Chapman Sharp Prize in 2011. In 2022 I gave the Mala and Solomon Kamm Lecture in Ethics at Harvard University, and in 2023 I will give the Tanner Lectures on AI and Human Values at Stanford University.

Biography

Seth Lazar is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defence, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI, funded by the ARC, the Templeton World Charity Foundation, and Insurance Australia Group. He was General Co-Chair for the ACM Fairness, Accountability, and Transparency Conference (FAccT) in 2022, Program Co-Chair for the ACM/AAAI AI, Ethics and Society Conference in 2021, and one of the authors of a study by the US National Academies of Sciences, Engineering, and Medicine, which reported to Congress on the ethics and governance of responsible computing research. He has given the Mala and Solomon Kamm Lecture in Ethics at Harvard University, and in 2023 will give the Tanner Lectures on AI and Human Values at Stanford University.

Current student projects

Moral Anti-Rationalism

Addressing the Environmental Crisis

Contractualism and Future Generations

Predictive Power

Ethics of Machine Learning


