
AI liability cases raise questions for courts, employers

BridgeTower Media Newswires//August 26, 2025//


SUMMARY

  • Lawyers warn courts must catch up to rapid AI developments in employment and health care
  • Federal class action accuses Workday’s screening algorithms of systemic discrimination
  • New York City’s Local Law 144 may serve as a national model for regulating AI in hiring
  • Experts stress need for AI laws prioritizing impacted populations over industry lobbyists

 

BALTIMORE — Artificial intelligence development is moving at a fast pace, and the faster it moves, the more rapidly the courts will need to play catch-up, according to Baltimore-area lawyers.

Matthew D. Kohel, a partner at Saul Ewing LLP, and Michele Gilman, a professor of law at the University of Baltimore School of Law, tackle issues regarding liability in AI applications across health care and employment.

According to Gilman, the U.S. lags behind its counterparts in Europe.

“We don’t have comprehensive AI regulation in the United States, so we must look to traditional doctrines such as torts, contracts and the like to protect consumers from harm,” she said. By contrast, the European Union has already enacted extensive AI-related legislation.

Cases pending

Kohel, who advises companies on AI-related matters, warns that liability is moving up the chain; therefore, deployers of AI must proceed with caution. Kohel referred to a federal class action lawsuit against Workday, a cloud-based HR platform.

“The suit alleges discrimination by the system’s applicant-screening algorithms,” Kohel said.

The plaintiff, Derek Mobley, a Black man over 40 with anxiety and depression, applied to more than 100 jobs through Workday-powered applicant screening systems. He was rejected every time, even though he met the qualifications for the roles. Some rejections arrived overnight, leading him to conclude that his resumes weren’t being screened manually and that an algorithm was systematically screening him out.

Kohel also points to a case where a candidate was rejected until he filled out the application with a younger age, which led to a claim of algorithmic discrimination.

Kohel said that states may look to New York as a national model.

“NYC Local Law 144, also known as the Automated Employment Decision Tools Law, governs the use of automated tools in hiring and promotion decisions within New York City,” he said. “The law went into effect in the summer of 2023 and requires independent bias audits of hiring tools.”

Hiring and AI

Gilman cites another case, this time related to AI and housing.

“In a fair housing discrimination case against the developer of a tenant-screening algorithm, the trial court held that the developer was not liable because it was clear in the contracts and marketing materials that the downstream users, such as landlords, were responsible for the tool’s outcomes. The case is being appealed, but it goes to show how analog-era laws are sometimes ill-fit for digital-era issues,” she said.

She stressed the importance of crafting AI laws that prioritize impacted populations, saying: “Private companies should not be the arbiters of what these laws look like. We also need to guard against lobbyist-driven legislation.”

So how are employers supposed to protect themselves? Gilman said that the Equal Employment Opportunity Commission issued guidance for employers on the use of AI, but that the Trump administration has since rescinded it.

“Existing federal and state anti-discrimination laws still apply, however,” she said.

Health care and AI

The courts will be sorting out liability in health care and AI in years to come.

“The issues are complex, particularly because of the ‘black box’ nature of AI tools — meaning that they are so complex that sometimes their own developers can’t always explain how certain outcomes are generated,” Gilman said.

While she cautions against overregulation that could hinder innovation, she insists that human oversight must remain central.

“Health care professionals should be responsible for diagnosis and treatment. AI should be a supplemental tool and not replace human judgment. If/when AI is used, the patient should be told,” she said.

A path forward

Both experts agree on the need for specific, robust laws governing AI.

“Developers, deployers and users of AI all have liability for AI-generated harms and laws will likely need to recognize new forms of harm generated by AI. At the end of the day, AI does not have human intelligence. It is a tool used by people and companies to carry out human objectives and the people and entities that adopt, develop and deploy it must be responsible for its outcomes,” Gilman said.
