Our last meeting of the semester focused on organizations – what it means for an organization to be ethical, and how they are or should be held accountable (readings and all discussion questions are at the end of this post). We were lucky to have attendees from around MIT, including CS, political science, MechE, Comparative Media Studies, and the MIT Libraries. The questions we set out to discuss fell into three main buckets:

  1. Organizations taking “ethical stances”: effectiveness and ways forward
  2. Who shapes ethical design practices: balancing experts and public-at-large
  3. Tech worker organizing: what has worked and why?

We started off with a conversation about incentives – what are the incentives for companies to act ethically? Why did Google even publish a set of “AI Principles”? One person brought up the difference between acting ethically and seeming ethical, the former being much harder and less incentivized. Because seeming ethical (e.g. publishing a document, making a statement, signing a petition) costs little and makes a company look good, organizations seem to have mostly done things in this camp. Another person suggested that removing “AI” or “ML” from the discussion wouldn’t change much, and that we should look to how other scientific disciplines have grappled with ethical dilemmas (e.g. gene splicing, atomic bombs). The discussion took a step back as we talked about what it even means to be ethical: in a democracy, for example, the ethical decision may be what the majority of people decide is right. Perhaps in these discussions, we need to spend more time upfront grappling with what we mean by “ethical” – perhaps we should even use different, less contested or more specific terminology.

Our discussion about Google’s AI Principles contained a lot of skepticism – we had a hard time understanding what many of the principles actually meant, and some terms like “socially beneficial” seemed too vague to be taken seriously. A few people did defend the document as a first step toward action at the corporate level. We also discussed MIT as an organization, and the desire for a statement from the Institute about principles for its own practices – especially motivated by recent controversies such as MIT’s ties to Saudi Arabian funding in the wake of the murder of journalist Jamal Khashoggi.

We also talked about how these issues are often much more complicated than they seem: a particular technology may be beneficial for one slice of the population and very harmful to another, not to mention all the shades in the middle. It’s rare that something is just outright good or bad. How do we weigh the consequences to different groups, and moreover, how do we then make a decision? What if only a very small percentage of society is harmed by a technology? Is anything ever win-win? Although we don’t have an answer, we talked about ensuring a diverse tech workforce that can empathize with the experiences of different groups, or having “ethicist” roles to deeply assess and make transparent a technology’s risk to core principles like human dignity for intersectional slices of the population. In our current society, power and knowledge tend to reside in the same hands, so those in power make decisions that benefit themselves, whether knowingly or not. Changing this would require fundamentally altering power structures. Research in fields like design justice tries to get at this problem by not making assumptions about what the problems are, instead talking to the most harmed groups and giving them the power to shape the technology during its development. One person also brought up that democracy as a political system tries to involve many people in decision making – perhaps we could learn lessons from it?

Finally, we ended on a vision of power coming from tech workers: if big tech companies were staffed by a diverse cohort of engineers with the power to speak out without fear of being fired (e.g., through unionizing), maybe the public would have more confidence in tech, creating more transparency and accountability.

Thank you to everyone who came to our meetings this semester! We are excited for more discussions in 2019.


Readings:

  1. AI Now 2018 Report, section 2.3 (Why Ethics is Not Enough) and section 3.7 (Research and Organizing: An Emergent Coalition)
  2. Corporate Accountability by Lucy Suchman
  3. Google’s AI Principles
  4. Optional

Discussion Questions:

  1. What are possible incentives for companies to act “ethically”?
  2. Google’s first AI Principle is “Be socially beneficial.” How should Google evaluate whether a technology is socially beneficial?
    a. What should the requirements be before any deployment?
    b. What about after deployment – should they be continually measuring and evaluating a technology’s effect? Is it ethical for affected people to be a testbed for technologies in this way?
    c. What is an appropriate measure of “social benefit”?
  3. For many technologies, the benefit may be unevenly distributed across society. How should a company begin to weigh the pros and cons when an innovation is good for some people and harmful to others?
  4. Is it useful for a company to publish ethical principles?
  5. What structural changes could help make such principles more useful? E.g., should there be a board of people in charge of making sure that everything that’s developed meets these principles? Who should sit on such a board and what would their incentives be?
  6. What might be the implications of powerful companies taking an overt moral or political position?
  7. Greene et al. state: “Despite assuming a universal community of ethical concern, these vision statements are not mass mobilization documents. Rather, they frame ethical design as a project of expert oversight, wherein primarily technical, and secondarily legal, experts come together to articulate concerns and implement primarily technical, and secondarily legal solutions. They draw a narrow circle of who can or should adjudicate ethical concerns around AI/ML.”
    a. Should people other than AI or ethics experts be involved in ethical design? In what manner, and to what degree?
    b. Is a framework for ethical design that is driven by the (non-expert) public at large 1) feasible and 2) desirable?
  8. Is the ability of tech workers to organize for change (e.g., Google engineers’ protest against Project Maven) due to their relative position of power as skilled workers in a high-demand field? Do you think this bargaining power will decrease as the supply of skilled engineers increases?
  9. Perhaps one reason these protests have mostly worked is that they gain widespread attention at large, high-profile companies like Google, which are then under public pressure to change course.
    a. Do you think this is the case? Would organized dissent be effective in smaller companies? Would management just fire the handful of people who complain, and is there a way to avoid that?