How the Project Works

Motivated by the Future of Life Institute's AI Principles (2017), we investigate data professionals' responsibility and ability to mitigate algorithmic harms, recognizing that data professionals are one of many parties who share responsibility for those harms.

The Process

Imagine you're a data professional involved in a project like developing a facial recognition algorithm for the Michigan State Police. Such a project may require algorithm design, implementation, and/or application. As you start working on the project, you might have questions about how to design, implement, or apply the algorithm (see the sample questions below).

As we think about how to respond to some of these questions, we may be weighing client priorities, our bosses' priorities, and our own priorities. Sometimes, we may not be consciously aware of the priorities or perspectives we're bringing to the table as we make decisions about algorithm design, implementation, and application. For example, are we an all-light-skinned team of data professionals? If so, does this make it easy for us to miss that our training dataset is heavily skewed towards light-skinned faces?
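As a purely hypothetical illustration, a quick audit like the sketch below can surface the kind of dataset skew a homogeneous team might otherwise overlook. It assumes a face dataset whose annotations live in a file called face_annotations.csv with a skin_type column; both names are invented for this example.

```python
# Hypothetical audit sketch: check how face images are distributed across
# skin-type labels before training. The file and column names here are
# illustrative assumptions, not part of any specific project.
import pandas as pd

annotations = pd.read_csv("face_annotations.csv")  # assumed annotation file

# Share of the training data in each skin-type category.
distribution = annotations["skin_type"].value_counts(normalize=True)
print(distribution)

# Flag categories that fall below an (arbitrary) representation threshold.
underrepresented = distribution[distribution < 0.10]
if not underrepresented.empty:
    print("Potentially underrepresented skin types:")
    print(underrepresented)
```

A check like this is cheap to run, but deciding to run it at all is exactly the kind of choice our perspectives can make invisible.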

We use two methods, improvisation and scores (read more about these on the next pages), for two purposes: to identify where in our work our perspectives as people might be affecting our decision-making without our conscious awareness, in ways that might perpetuate algorithmic harms down the road; and to structure and frame our collective reflection, as data professionals, on our work and our responsibilities for algorithmic harm in algorithm design, implementation, and application.

Algorithm design
- Can I just use a conventional sort algorithm, or do I need to design a new algorithm?
- If I need to design a new algorithm, how do I approach the mathematical space I'm trying to describe with a new algorithm?

Algorithm implementation
- Which variables do I include in the implementation of my algorithm?
- How will I tune parameters that I include in my algorithm implementation?
- How will I initialize variable weights when I implement a given algorithm?

Algorithm application
- Which training datasets will I use?
- Do I need to collect new training data?
- How will I define annotation tasks for human labeling of my training dataset?
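Many of these questions correspond to explicit, inspectable decisions in code. The minimal sketch below, using a generic scikit-learn classification task with synthetic data and arbitrarily chosen parameter values (all invented for illustration), shows where some of those decisions get written down:

```python
# Hypothetical sketch of where design/implementation/application decisions
# surface in code. The dataset and parameter values are invented for
# illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Application: which training data do we use? Synthetic data stands in here
# for a real dataset whose composition would itself be a decision.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Application: how do we split the data we train and evaluate on?
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Implementation: which parameters do we set, and to what values?
model = LogisticRegression(C=1.0, max_iter=200)  # C chosen arbitrarily here
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```

Each commented line marks a choice a data professional makes, whether or not they pause to reflect on it.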

The Outcome

The outcome of this project will be an open-source collection of scores that can be widely used by data professionals interested in understanding their own responsibility in mitigating algorithmic harm.


References

AI Principles. (2017, August 11). Future of Life Institute. https://futureoflife.org/open-letter/ai-principles/

© 2015-2024 by Angela M. Schöpke Gonzalez.
