
Code Without Conscience: How AI Discrimination Puts Black Lives at Risk


by Ja’Lia Taylor, Ph.D., Director of Policy, Telecommunications and Technology

Artificial intelligence is quickly becoming one of the most influential forces in our daily lives. From healthcare to hiring, AI tools are making decisions that affect who receives care, who gets a job interview, who is approved for a loan, and even who is targeted by police surveillance. These tools are often promoted as fair and objective. But in reality, they are frequently built on biased data that reflects a long history of discrimination. That bias does not disappear in the algorithm; it multiplies. And now, a new federal policy is threatening to make things even worse.

President Donald J. Trump’s Executive Order titled ‘Preventing Woke AI in the Federal Government’ prohibits federal agencies from using AI systems that incorporate diversity, equity, and inclusion principles. This move attempts to erase the very safeguards designed to make technology fairer and more inclusive. By barring the federal government from using DEI-informed systems, the Executive Order threatens to lock in existing inequities and deny critical protections to those who need them most.

When Technology Cannot See You: Bias in Healthcare

AI-powered hand hygiene systems have revealed troubling design flaws, particularly for people with darker skin tones. In hospitals and public restrooms alike, soap dispensers and faucets that rely on infrared sensors or computer vision often fail to activate when dark-skinned hands are placed beneath them. These systems were largely trained and tested on lighter skin tones, leaving many people invisible to the sensors. In environments where handwashing is essential for disease prevention and basic hygiene, this is not just an inconvenience but a serious health risk. Failing to ensure these technologies work for everyone undermines public trust and perpetuates exclusion in one of the most basic daily functions.
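
To see how a sensor can fail along skin-tone lines, consider a toy sketch. The threshold and reflectance values below are invented for illustration; the point is only that a trigger level tuned on lighter-skinned testers can sit above the signal a darker hand reflects back.

```python
# Toy sketch (invented threshold and values): an infrared proximity sensor
# that activates when enough emitted light reflects back off a hand.
# Darker skin reflects less infrared light, so a trigger threshold tuned
# only on lighter-skinned testers can fail to fire at the same distance.

TRIGGER_THRESHOLD = 0.40  # assumption: calibrated against lighter skin

def sensor_fires(skin_reflectance: float, distance_factor: float = 1.0) -> bool:
    """Return True if the reflected signal clears the trigger threshold."""
    return skin_reflectance * distance_factor >= TRIGGER_THRESHOLD

print(sensor_fires(0.55))  # lighter skin at the faucet: True, water flows
print(sensor_fires(0.30))  # darker skin, same distance: False, no soap
```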

Racial bias in artificial intelligence is particularly dangerous in healthcare, where it has already contributed to misdiagnoses and unequal treatment. Prostate cancer screening tools have failed to accurately detect risk factors in Black men, who already face higher mortality rates from the disease. Similarly, breast cancer detection tools often underperform for Black women, who tend to have denser breast tissue. These tools are often trained on datasets that underrepresent people of color, making them less effective at identifying disease in non-white patients. Even algorithms used to determine who should receive follow-up care have mistakenly ranked Black patients as having fewer needs simply because less money had historically been spent on their care. These oversights are not just errors in code. They are systemic failures with real consequences for health outcomes.
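
That follow-up care failure deserves unpacking, because the mechanism is easy to miss. Below is a minimal sketch with invented patient numbers: a ‘risk score’ trained to predict cost rather than health need simply echoes past spending, so two patients with identical illness burdens end up ranked differently.

```python
# Minimal sketch (invented numbers): a follow-up care algorithm that ranks
# patients by predicted *cost* instead of actual *health need*. If less
# money was historically spent on Black patients with the same illness
# burden, a cost-based score ranks them as lower priority.

patients = [
    # (name, chronic_conditions, past_annual_spending_usd)
    ("Patient A", 4, 12_000),  # same illness burden, higher past spending
    ("Patient B", 4, 7_000),   # same illness burden, lower past spending
]

def cost_proxy_score(chronic_conditions: int, past_spending: int) -> int:
    # The "risk score" is trained to predict cost, so it simply echoes
    # past spending; the equal condition counts never enter the ranking.
    return past_spending

ranked = sorted(patients, key=lambda p: cost_proxy_score(p[1], p[2]), reverse=True)
for name, conditions, spending in ranked:
    print(f"{name}: {conditions} conditions, ${spending:,} past spending")
# Patient B lands last for follow-up care despite identical health need.
```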

Injustice by Design: AI in Law Enforcement

AI systems are increasingly being used by law enforcement to identify suspects, analyze risk, and monitor communities. But these tools are far from neutral. Facial recognition technology, for example, is notoriously less accurate when analyzing Black faces. In one widely reported case, Robert Williams, a Black man in Detroit, was wrongfully arrested after an AI system misidentified him. He spent time in jail for a crime he did not commit because the technology could not distinguish between two Black men. Such errors are not isolated incidents. They are embedded in systems built without diverse data or oversight.

Locked Out of Opportunity: AI in Hiring

Bias in AI extends to hiring and employment, where algorithms are often used to screen resumes and assess interview performance. In one case, Amazon scrapped an internal hiring tool after discovering it penalized resumes containing phrases like ‘women’s chess club’ and downgraded graduates of all-women’s colleges. The system had been trained on resumes from past hires, most of whom were white men. As a result, it learned to prefer applicants who looked just like them.
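
Here is a toy sketch, with fabricated resumes, of how that learning happens: when past hiring decisions become the training labels, any token that correlates with rejection in the history, including ‘women’s’, picks up a negative weight.

```python
# Toy sketch (fabricated resumes): a screener trained on past hiring
# decisions. Because past hires skewed male, the token "women's"
# correlates with rejection in the history, and a naive scorer learns
# to penalize it -- the failure pattern reported in Amazon's tool.

from collections import defaultdict

history = [
    # (resume tokens, hired?)
    ({"java", "leadership"}, True),
    ({"python", "captain"}, True),
    ({"java", "women's", "chess"}, False),    # qualified, but not hired
    ({"python", "women's", "debate"}, False),
    ({"java", "chess"}, True),
]

# Count how often each token appears in hired vs. rejected resumes.
counts = defaultdict(lambda: [0, 0])  # token -> [hired, rejected]
for tokens, hired in history:
    for token in tokens:
        counts[token][0 if hired else 1] += 1

# A crude learned weight: hired appearances minus rejected appearances.
weights = {t: hired - rejected for t, (hired, rejected) in counts.items()}
print(weights["women's"])  # -2: the model has absorbed the old bias
```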

Even video interview tools have problems. These systems sometimes rate applicants poorly if they have darker skin or speak with regional or cultural accents. They might misinterpret facial expressions, tone, or speech patterns as signs of dishonesty or lack of confidence, when in fact they are simply signs of cultural diversity. Qualified candidates can be filtered out before a human even sees their application. The result is not just individual rejection; it is systemic exclusion from the workforce.

Digital Redlining: AI in Banking and Credit

AI is also used to determine who qualifies for loans, mortgages, and credit. These systems analyze historical data to assess risk, but if that data reflects decades of discrimination, the algorithm will too. Studies have shown that Black applicants are more likely to be denied loans than white applicants with identical credit profiles. That gap is not explained by individual risk; it reflects biased data being treated as neutral truth. When Black families are denied access to credit, they lose out on homeownership, education, and financial stability.
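
One way auditors surface this pattern is to compare outcomes for applicants whose credit profiles are identical. The sketch below uses invented application records and borrows the ‘four-fifths’ disparate-impact rule of thumb from employment law as an illustrative screening threshold.

```python
# Minimal audit sketch (invented records): applicants with the same
# credit score, compared across groups. The "four-fifths rule" from
# employment law is used here only as an illustrative threshold.

applications = [
    # (group, credit_score, approved?)
    ("white", 700, True), ("white", 700, True),
    ("white", 700, True), ("white", 700, False),
    ("Black", 700, True), ("Black", 700, False),
    ("Black", 700, False), ("Black", 700, False),
]

def approval_rate(group: str) -> float:
    outcomes = [approved for g, _, approved in applications if g == group]
    return sum(outcomes) / len(outcomes)

ratio = approval_rate("Black") / approval_rate("white")
print(f"Approval-rate ratio: {ratio:.2f}")  # 0.33 with these numbers
if ratio < 0.8:  # four-fifths rule of thumb
    print("Flag: identical credit profiles, sharply unequal outcomes.")
```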

An Executive Order with Deadly Consequences

The Executive Order banning DEI from federal AI systems does not prevent bias. It prevents efforts to detect and reduce bias. Diversity in AI is not a political stunt. It is a life-saving necessity. By blocking inclusive AI tools, the federal government is setting a precedent that will ripple across industries. Developers may feel pressured to strip equity considerations from their products in order to win federal contracts. Agencies may be unable to choose tools that serve all Americans. And most dangerously, the communities who already suffer from inequality will be pushed even further to the margins.

When AI systems do not see you, understand you, or account for your needs, they fail you. And when the government refuses to require fairness in these systems, it puts lives at risk. The Executive Order is not just misguided. It is a public health threat, an economic barrier, and a civil rights violation rolled into one.

Building a Better Future with Inclusive AI

Across the country, researchers, engineers, and advocates are actively working to correct bias in artificial intelligence systems. From developing diverse training datasets to conducting fairness audits and engaging impacted communities, these initiatives are essential to ensuring AI reflects and serves the full spectrum of American society. Ethical AI is not a radical idea; it is a responsible one.
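
What does a fairness audit actually look like at its simplest? One early step is checking whether the training data represents everyone before a model is ever built. The sketch below uses invented group labels and an illustrative cutoff.

```python
# Sketch of one early step in a fairness audit (invented labels and an
# illustrative threshold): checking whether a training dataset
# represents all groups before a model is trained on it.

from collections import Counter

training_sample_groups = ["light", "light", "light", "light", "dark", "light"]

shares = {
    group: count / len(training_sample_groups)
    for group, count in Counter(training_sample_groups).items()
}
for group, share in shares.items():
    if share < 0.30:  # illustrative cutoff, not an industry standard
        print(f"Warning: group '{group}' is underrepresented ({share:.0%})")
# Flags 'dark' at 17%; a real audit would also examine labels, outcomes,
# and per-group error rates, not just raw representation.
```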

The federal government must play a leading role in establishing standards that center fairness, transparency, and equity. Artificial intelligence has the potential to drive transformative change, but only if its development is grounded in justice. Inclusive design and rigorous oversight must become the norm, not the exception. The path forward depends on our collective commitment to ensuring that the tools shaping the future do not repeat the injustices of the past.
