Welcome to the website for STS 10, the Reading Lab on Mathematical Models in Social Context, for Spring 2019.

Professors: Moon Duchin (office: BP 113), Daryl DeFord (office: MIT 32-D475A)

Course meetings: Fridays 1:30-2:45pm, Miner 221

Description

From supply-demand curves to Feynman diagrams to Punnett squares to the Kinsey scale, our models often have an outsize role in constituting the scientific and social concepts they purport to describe. We will spend some time analyzing what models are (broadly and narrowly), and their intended and unintended uses and consequences. We will survey the emerging field of algorithmic auditing and algorithmic accountability and will focus on ethical concerns in algorithm design and deployment.

This reading lab, carrying 2 SHUs of credit, is intended to be taken alongside any course in which you make mathematical models or algorithms. It is designed to pair well with Math 87 (Mathematical Modeling and Computation), but there are dozens of other courses at Tufts that fit the bill, especially in Math, Computer Science, Engineering, and Political Science.

We will survey the STS literature on models and algorithms, in addition to closely reading about 30 pages per week for discussion. The grade will be based solely on contributions to the group discussion.

The course is divided into two parts. Part One asks big questions about model structure, selection, and impact. The main readings come from STS, philosophy, and history. This part of the course closes with student-selected readings that give examples of models exhibiting the qualities described in the shared readings, delve deeper into frequent properties of models or modeling practices, or build theory around those properties.

Part Two of the course jumps to the computer moment for a close look at algorithms, data (and so-called Big Data), and Artificial Intelligence. We begin with student-selected readings on ethical breakdowns, crises, and opportunities in applied algorithms, from credit scoring to recidivism risk to facial recognition. We will pay special attention to fairness critiques that highlight instances in which algorithms heighten inequality. We will read about search engines and ranking schemes, machine learning and worries about model interpretability, and the introduction of randomness.

This Reading Lab aims to prepare you to be a critical reader, and an ethical designer, of models and algorithms.