Procurement as Policy: Administrative Process for Machine Learning
At every level of government, officials contract for technical systems that employ machine learning—systems that perform tasks without explicit instructions, relying instead on patterns and inference. These systems frequently displace discretion previously exercised by policymakers or by individual front-line government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. Yet because agencies acquire these systems through government procurement processes, they, and the public, have little input into—or even knowledge about—their design, or how well that design aligns with public goals and values.
In this talk I explore specific ways in which design decisions inherent in machine-learning systems are substantive policy decisions, and how the procurement process, which today dominates their adoption, limits their full consideration. Specifically, these embedded policies receive little or no agency or outside expertise beyond that provided by the vendor: no public participation, no reasoned deliberation, and no factual record. Design decisions are left to private third-party developers; government responsibility for policymaking is abdicated. I argue that when policy decisions are made through system design, processes suitable for substantive administrative determinations should be used: processes that demand reasoned deliberation reflecting both technocratic concerns about the informed application of expertise and democratic concerns about political accountability. Finally, I sketch ways that agencies might garner relevant technical expertise and overcome problems of system opacity, satisfying administrative law’s technocratic demand for reasoned expert deliberation; and I outline institutional and engineering design solutions to the challenge of policymaking opacity, offering process paradigms to ensure the “political visibility” required for public input and political oversight, and proposing “contestable design”—design that exposes value-laden features and parameters and provides for iterative human involvement in system evolution and deployment. Together, these institutional and design approaches further both administrative law’s technocratic and democratic mandates.
Deirdre K. Mulligan is a Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, a co-organizer of the Algorithmic Fairness & Opacity Working Group, an affiliated faculty member of the Hewlett-funded Berkeley Center for Long-Term Cybersecurity, and a faculty advisor to the Center for Technology, Society & Policy. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries, conducted with UC Berkeley Law Prof. Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. She is a member of the Defense Advanced Research Projects Agency’s Information Science and Technology study group (ISAT) and a member of the National Academy of Sciences Forum on Cyber Resilience. She is past Chair of the Board of Directors of the Center for Democracy and Technology, a leading advocacy organization protecting global online civil liberties and human rights; an initial board member of the Partnership on AI; a founding member of the standing committee for the AI 100 project; and a founding member of the Global Network Initiative, a multi-stakeholder initiative to protect and advance freedom of expression and privacy in the ICT sector, and in particular to resist government efforts to use the ICT sector to engage in censorship and surveillance in violation of international human rights standards. She recently served as a Commissioner on the Oakland Privacy Advisory Commission and helped to develop a local ordinance providing oversight of surveillance technology.
Mulligan chaired a series of interdisciplinary visioning workshops on Privacy by Design with the Computing Community Consortium to develop a shared interdisciplinary research agenda. Prior to joining the School of Information, she was a Clinical Professor of Law, founding Director of the Samuelson Law, Technology & Public Policy Clinic, and Director of Clinical Programs at the UC Berkeley School of Law.
Mulligan was the policy lead for the NSF-funded TRUST Science and Technology Center, which brought together researchers at UC Berkeley, Carnegie Mellon University, Cornell University, Stanford University, and Vanderbilt University, and a PI on the multi-institution NSF-funded ACCURATE center. In 2007 she was a member of an expert team charged by the California Secretary of State to conduct a top-to-bottom review of the voting systems certified for use in California elections. This review investigated the security, accuracy, reliability, and accessibility of electronic voting systems used in California. She was a member of the National Academy of Sciences Committee on Authentication Technology and Its Privacy Implications; the Federal Trade Commission’s Federal Advisory Committee on Online Access and Security; and the National Task Force on Privacy, Technology, and Criminal Justice Information. She was a vice-chair of the California Bipartisan Commission on Internet Political Practices and chaired the Computers, Freedom, and Privacy (CFP) Conference in 2004. She co-chaired Microsoft’s Trustworthy Computing Academic Advisory Board with Fred B. Schneider from 2003 to 2014. Prior to Berkeley, she served as staff counsel at the Center for Democracy & Technology in Washington, D.C.