RIL Guidance Memo: White House Blueprint for an AI Bill of Rights
Last Updated: October 6, 2022
Nearly all technology companies leverage AI or automated large data systems in their products and services. The White House “Blueprint for an AI Bill of Rights” is an important first step toward US policymaking and is likely to help shape future thinking about this topic. It also creates an opportunity for companies to ask themselves some essential questions:
Does this or will this apply to my company?
What are my company’s principles for responsible AI?
How do we operationalize our principles?
What’s happening:
On October 4, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights (“Blueprint”) as a call to action for governments and companies to protect civil rights in an increasingly AI-infused world. The Blueprint and associated documents serve as an overview of issues surrounding the use of automated large data systems and AI and guidelines for mitigating harm. It does not provide a legislative framework or guidelines for enforcement, but is instead intended to be a “guide for society.”
The Blueprint outlines five high-level principles for responsible AI. This overview is clear and similar to other recent whitepapers and guidance published on AI and automated large data systems.
The Blueprint’s more detailed companion, “From Principles to Practice,” provides examples of the business and technical scenarios the White House would like to see addressed and may signal where future legislative and enforcement action will occur. The document is worth reading, but here is a quick summary of the major points:
Safe and Effective Systems - Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks and potential impacts of the system. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.
Algorithmic Discrimination Protections - Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.
Data Privacy - Designers, developers, and deployers of automated systems should seek user permission and respect user decisions regarding collection, use, access, transfer, and deletion of personal data. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief and understandable in plain language.
Notice and Explanation - Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation, including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up to date, and people impacted by the system should be notified of significant use case or key functionality changes.
Human Alternatives, Consideration, and Fallback - Users should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law.
Questions to consider:
1. Does this or will this apply to our company?
Read through the examples of real-world issues below and think hard about how similar scenarios may apply to your company now or in the future.
2. What are my company’s principles for responsible AI?
Create or adapt an existing set of principles. It’s important to formally document this for your company, discuss/debate it as a leadership team and share it with your employees and customers.
3. How do we operationalize our principles?
Form an ethics review board that includes external experts in responsible AI, legal counsel, and engineering and business leaders from your company. Ideally, the board is majority independent/external. When ethical questions arise, the board responds with research, reflections and clear recommendations. Start small, but start - this could be three external experts and two employees.
RIL can help you get started; please reach out to us.
SAFE AND EFFECTIVE SYSTEMS - REAL WORLD EXAMPLES (from the AI Bill of Rights)
• A proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. An independent study showed that the model predictions underperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting likelihood of sepsis.
• On social media, Black people who quote and criticize racist messages have had their own speech silenced when a platform’s automated moderation system failed to distinguish this “counter speech” (or other critique and journalism) from the original hateful messages to which such speech responded.
• A device originally developed to help people track and find lost items has been used as a tool by stalkers to track victims’ locations in violation of their privacy and safety. The device manufacturer took steps after release to protect people from unwanted tracking by alerting people on their phones when a device is found to be moving with them over time and also by having the device make an occasional noise, but not all phones are able to receive the notification and the devices remain a safety concern due to their misuse.
• An algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit, even if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions were the result of a feedback loop generated from the reuse of data from previous arrests and algorithm predictions.
• AI-enabled “nudification” technology that creates images where people appear to be nude—including apps that enable non-technical users to create or alter images of individuals without their consent—has proliferated at an alarming rate. Such technology is becoming a common form of image-based abuse that disproportionately impacts women. As these tools become more sophisticated, they are producing altered images that are increasingly realistic and are difficult for both humans and AI to detect as inauthentic. Regardless of authenticity, the experience of harm to victims of non-consensual intimate images can be devastatingly real—affecting their personal and professional lives, and impacting their mental and physical health.
• A company installed AI-powered cameras in its delivery vans in order to evaluate the road safety habits of its drivers, but the system incorrectly penalized drivers when other cars cut them off or when other events beyond their control took place on the road. As a result, drivers were incorrectly ineligible to receive a bonus.
ALGORITHMIC DISCRIMINATION PROTECTIONS - REAL WORLD EXAMPLES
• An automated system using nontraditional factors such as educational attainment and employment history as part of its loan underwriting and pricing model was found to be much more likely to charge an applicant who attended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan than an applicant who did not attend an HBCU. This was found to be true even when controlling for other credit-related factors.
• A hiring tool that learned the features of a company's employees (predominantly men) rejected women applicants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s chess club captain,” were penalized in the candidate ranking.
• A predictive model marketed as being able to predict whether students are likely to drop out of school was used by more than 500 universities across the country. The model was found to use race directly as a predictor, and also shown to have large disparities by race; Black students were as many as four times as likely as their otherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors to guide students towards or away from majors, and some worry that they are being used to guide Black students away from math and science subjects.
• A risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed evidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the general recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the violent recidivism tools. The Department of Justice is working to reduce these disparities and has publicly released a report detailing its review of the tool.
• An automated sentiment analyzer, a tool often used by technology platforms to determine whether a statement posted online expresses a positive or negative sentiment, was found to be biased against Jews and gay people. For example, the analyzer marked the statement “I’m a Jew” as representing a negative sentiment, while “I’m a Christian” was identified as expressing a positive sentiment. This could lead to the preemptive blocking of social media comments such as: “I’m gay.” A related company with this bias concern has made their data public to encourage researchers to help address the issue and has released reports identifying and measuring this problem as well as detailing attempts to address it.
• Searches for “Black girls,” “Asian girls,” or “Latina girls” return predominantly sexualized content, rather than role models, toys, or activities. Some search engines have been working to reduce the prevalence of these results, but the problem remains.
• Advertisement delivery systems that predict who is most likely to click on a job advertisement end up delivering ads in ways that reinforce racial and gender stereotypes, such as overwhelmingly directing supermarket cashier ads to women and jobs with taxi companies to primarily Black people.
• Body scanners, used by TSA at airport checkpoints, require the operator to select a “male” or “female” scanning setting based on the passenger’s sex, but the setting is chosen based on the operator’s perception of the passenger’s gender identity. These scanners are more likely to flag transgender travelers as requiring extra screening done by a person. Transgender travelers have described degrading experiences associated with these extra screenings. TSA has recently announced plans to implement a gender-neutral algorithm while simultaneously enhancing the security effectiveness capabilities of the existing technology.
• The National Disabled Law Students Association expressed concerns that individuals with disabilities were more likely to be flagged as potentially suspicious by remote proctoring AI systems because of their disability-specific access needs such as needing longer breaks or using screen readers or dictation software.
• An algorithm designed to identify patients with high needs for healthcare systematically assigned lower scores (indicating that they were not as high need) to Black patients than to white patients, even when those patients had similar numbers of chronic conditions and other markers of health. In addition, healthcare clinical algorithms that are used by physicians to guide clinical decisions may include sociodemographic variables that adjust or “correct” the algorithm’s output on the basis of a patient’s race or ethnicity, which can lead to race-based health inequities.
DATA PRIVACY - REAL WORLD EXAMPLES
• An insurer might collect data from a person's social media presence as part of deciding what life insurance rates they should be offered.
• A data broker harvested large amounts of personal data and then suffered a breach, exposing hundreds of thousands of people to potential identity theft.
• A local public housing authority installed a facial recognition system at the entrance to housing complexes to assist law enforcement with identifying individuals viewed via camera when police reports are filed, leading the community, both those living in the housing complex and those who are not, to have videos of themselves sent to the local police department and made available for scanning by its facial recognition software.
• Companies use surveillance software to track employee discussions about union activity and use the resulting data to surveil individual employees and surreptitiously intervene in discussions.
• Continuous positive airway pressure machines gather data for medical purposes, such as diagnosing sleep apnea, and send usage data to a patient’s insurance company, which may subsequently deny coverage for the device based on usage data. Patients were not aware that the data would be used in this way or monitored by anyone other than their doctor.
• A department store company used predictive analytics applied to collected consumer data to determine that a teenage girl was pregnant, and sent maternity clothing ads and other baby-related advertisements to her house, revealing to her father that she was pregnant.
• School audio surveillance systems monitor student conversations to detect potential "stress indicators" as a warning of potential violence. Online proctoring systems claim to detect if a student is cheating on an exam using biometric markers. These systems have the potential to limit student freedom to express a range of emotions at school and may inappropriately flag students with disabilities who need accommodations or use screen readers or dictation software as cheating.
• Location data, acquired from a data broker, can be used to identify people who visit abortion clinics.
• Companies collect student data such as demographic information, free or reduced lunch status, whether they've used drugs, or whether they've expressed interest in LGBTQI+ groups, and then use that data to forecast student success. Parents and education experts have expressed concern about collection of such sensitive data without express parental consent, the lack of transparency in how such data is being used, and the potential for resulting discriminatory impacts.
• Many employers transfer employee data to third party job verification services. This information is then used by potential future employers, banks, or landlords. In one case, a former employee alleged that a company supplied false data about her job title which resulted in a job offer being revoked.
NOTICE AND EXPLANATION - REAL WORLD EXAMPLES
• A lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home health-care assistance couldn't determine why, especially since the decision went against historical access practices. In a court hearing, the lawyer learned from a witness that the state in which the older client lived had recently adopted a new algorithm to determine eligibility. The lack of a timely explanation made it harder to understand and contest the decision.
• A formal child welfare investigation is opened against a parent based on an algorithm and without the parent ever being notified that data was being collected and used as part of an algorithmic child maltreatment risk assessment. The lack of notice or an explanation makes it harder for those performing child maltreatment assessments to validate the risk assessment and denies parents knowledge that could help them contest a decision.
• A predictive policing system claimed to identify individuals at greatest risk to commit or become the victim of gun violence (based on automated analysis of social ties to gang members, criminal histories, previous experiences of gun violence, and other factors) and led to individuals being placed on a watch list with no explanation or public transparency regarding how the system came to its conclusions. Both police and the public deserve to understand why and how such a system is making these determinations.
• A system awarding benefits changed its criteria invisibly. Individuals were denied benefits due to data entry errors and other system flaws. These flaws were only revealed when an explanation of the system was demanded and produced. The lack of an explanation made it harder for errors to be corrected in a timely manner.
HUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK - REAL WORLD EXAMPLES
• An automated signature matching system is used as part of the voting process in many parts of the country to determine whether the signature on a mail-in ballot matches the signature on file. These signature matching systems are less likely to work correctly for some voters, including voters with mental or physical disabilities, voters with shorter or hyphenated names, and voters who have changed their name. A human curing process, which helps voters to confirm their signatures and correct other voting mistakes, is important to ensure all votes are counted, and it is already standard practice in much of the country for both an election official and the voter to have the opportunity to review and correct any such issues.
• An unemployment benefits system in Colorado required, as a condition of accessing benefits, that applicants have a smartphone in order to verify their identity. No alternative human option was readily available, which denied many people access to benefits.
• A fraud detection system for unemployment insurance distribution incorrectly flagged entries as fraudulent, leading to people with slight discrepancies or complexities in their files having their wages withheld and tax returns seized without any chance to explain themselves or receive a review by a person.
• A patient was wrongly denied access to pain medication when the hospital’s software confused her medication history with that of her dog. Even after she tracked down an explanation for the problem, doctors were afraid to override the system, and she was forced to go without pain relief due to the system’s error.
• A large corporation automated performance evaluation and other HR functions, leading to workers being fired by an automated system without the possibility of human review, appeal or other form of recourse.