
AI-generated reviews raise legal ethics, FTC violation risks

Artificial intelligence has even made its way into testimonials law firms post on their websites, raising the potential for violation of consumer protection laws and the rules of professional conduct. (Depositphotos.com)

SUMMARY

  • A study by Originality.ai found 34.4% of 2025 law office reviews are likely AI-written.
  • Since 2022, suspected AI-generated reviews have surged 1,586%, with large increases in Raleigh and Columbia.
  • Legal experts say such reviews may violate ethics rules and a 2024 FTC rule on fake testimonials.
  • Attorneys must monitor online content and take action if false or misleading reviews are discovered, even if not authored by them.

 

By Pat Murphy and Kallie Cox

A new study has found an ongoing surge of artificial intelligence-generated law firm reviews appearing online, raising the question of whether some professionals might be attempting to attract clients through false advertising in violation of laws and rules of professional conduct.

Canadian tech company Originality.ai released the study on AI-generated reviews of law offices in the United States on April 30. The findings show that of law office reviews created and posted in 2025, 34.4% are “likely” AI-written.

The incidence of such reviews also is skyrocketing.

The study further found that since ChatGPT was launched in 2022, there has been a jaw-dropping 1,586% increase in AI-generated reviews.

The implications of the findings are profound, says Madeleine Lambert, Originality.ai’s director of marketing and sales.

“People rely on reviews to make informed decisions,” Lambert says. “If somebody is making a decision based on an array of amazing, perfect, 10-out-of-10 reviews that aren’t actually real, that are fabricated, is that actually informed decision-making?”

The study found that Raleigh, North Carolina, saw a 45% increase in suspected AI-generated reviews, while Columbia, South Carolina, had about a 25% increase.

Grace Wynn, an attorney with HWG Law and a lecturing fellow at Duke University Law School, said she understands why firms have an incentive to generate positive reviews.

“There is a strong incentive for firms, particularly small firms or firms that are just starting out, to generate reviews because the way that clients find practices is often through Google searches, particularly for high volume practices,” she says. “So, I understand why the incentive is there to get those reviews up. [But] it is definitely ethically fraught, to say the least.”

Ethical violation

It’s one thing if a client is using AI to create a review of a firm, she said. But it is a clear ethical violation if an attorney is doing the same.

Further, “If they are paying a marketing service who is facilitating those reviews getting posted, that is also going to be an ethics violation because attorneys can violate the rules through the acts of another,” Wynn added.

Amy Richardson, the managing partner of HWG’s Raleigh office and chair of the firm’s legal ethics and malpractice group, said that false, AI-generated reviews would be a violation of Rule 7.1 of the Rules of Professional Conduct.

If these reviews are not generated by the attorney and are instead placed by clients or bots, caution is still required in responding because of the confidentiality duties under Rule 1.6.

“You can respond, according to the North Carolina bar, as long as you’re responding in a professional and restrained manner that does not reveal any 1.6 information,” Richardson said.

According to the authors of the Originality.ai study, the results were obtained by filtering 4,706 law office reviews, each with 100 words or more, through the company’s AI detecting tool.

Lambert says the company’s artificial intelligence software has been trained on millions of pieces of verified human-written content.

“It has also been fed millions of pieces of data of verified AI-generated content and has been trained to pick up the differences between the two [sets of data],” she says.

The reviews were collected using a search tool, Rapid API, which “bulk scraped” posts from businesses’ Google pages, where most reviews appear. The analysis targeted 49 major markets nationwide, Lambert says.

She acknowledges that the AI-detection tool has certain limitations.

“Originality.ai provides highly accurate AI content detection that indicates the likelihood a piece of text was generated by AI,” she says. “However, like any detection tool, it provides a probabilistic assessment, not absolute proof.”
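The study’s methodology — keep reviews of 100 words or more, score each for AI likelihood, and report the share above a threshold — can be sketched as follows. This is an illustrative outline only: Originality.ai’s detector is a proprietary model, so the scoring function below is a hypothetical placeholder stub, not the company’s actual detector.

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words in a review."""
    return len(text.split())


def detect_ai_likelihood(text: str) -> float:
    """Hypothetical placeholder scorer. A real detector returns a
    probability that the text is AI-generated; this crude stub only
    stands in so the pipeline logic can be shown end to end."""
    return 0.9 if "Here is the revised review" in text else 0.1


def likely_ai_share(reviews: list[str], min_words: int = 100,
                    threshold: float = 0.5) -> float:
    """Filter to reviews with at least `min_words` words, score each,
    and return the fraction scoring above `threshold`."""
    eligible = [r for r in reviews if word_count(r) >= min_words]
    if not eligible:
        return 0.0
    flagged = sum(1 for r in eligible
                  if detect_ai_likelihood(r) > threshold)
    return flagged / len(eligible)
```

Note that, as Lambert says, the output is a probabilistic assessment: scores near the threshold say little, and any reported share inherits the detector’s false-positive rate.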

In addition, Lambert says no AI detection tools on the market can “attribute authorship or intent.”

According to the study’s authors, while AI detection tools such as Originality.ai have sophisticated capabilities to analyze content for “tells” indicative of whether a post is the product of generative AI, there also are certain tells that can be picked up by the average reader.

For example, a prompt used by an AI tool to guide users, such as “Here is the revised review,” might appear out of place in a post and be a dead giveaway that the review is AI-generated. The authors also highlight that tools such as ChatGPT sometimes use symbols such as asterisks around certain words, so another tip-off of AI generation can be the appearance of symbols in text that would seem out of place in normal writing.

Finally, the authors say suspicions should be raised when text has “[o]ver-polished and general language, lacking personal or case-specific detail.”
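The manual “tells” described above — leaked prompt boilerplate and out-of-place markdown symbols — can be expressed as simple string checks. This is a minimal sketch; the phrase list and the asterisk rule are illustrative assumptions, not the study’s actual criteria.

```python
import re

# Assumed examples of prompt boilerplate that sometimes leaks into
# pasted reviews; the study cites "Here is the revised review".
PROMPT_LEAKAGE = [
    "here is the revised review",
    "as an ai language model",
]


def spot_tells(review: str) -> list[str]:
    """Return human-readable reasons a review looks AI-written."""
    reasons = []
    lowered = review.lower()
    for phrase in PROMPT_LEAKAGE:
        if phrase in lowered:
            reasons.append(f"prompt leakage: {phrase!r}")
    # Markdown-style emphasis (*word* or **word**) is unusual in
    # organically typed reviews.
    if re.search(r"\*{1,2}\w[^*]*\*{1,2}", review):
        reasons.append("markdown-style asterisks around words")
    return reasons
```

The third tell — over-polished, generic language lacking case-specific detail — resists simple string matching and is better left to a trained detector or a human reader.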

Possible legal violations

While the identity of the actors and the motivations behind the surge in AI law firm reviews are unclear, there is no question that firms or lawyers who create and post phony reviews (or enlist a third party to do so) to attract clients place their law licenses in jeopardy.

In addition to the ethical implications of generating a false review, attorneys should know that it also is illegal, thanks to a Federal Trade Commission rule adopted last year, according to Barbara M. Seymour, a professional responsibility and legal ethics attorney with Clawson & Staubes in Columbia.

On Aug. 14, 2024, the FTC announced a final rule prohibiting the sale or purchase of fake consumer reviews or testimonials. The rule, which went into effect Oct. 21, further prohibits certain insiders in a business from creating consumer reviews or testimonials without clear disclosure. It authorizes the FTC to impose penalties of up to $51,744 per violation while also allowing consumer remedies.

While it is impossible to ensure that reviews on every online platform don’t contain AI-generated content created by bots, attorneys do have a responsibility to monitor the advertising on their websites and any services they pay for or sign up for, Richardson said. But what to do if they become aware of a potentially misleading review remains unclear.

“One of the questions … is: If you become aware of it, do you have an obligation to resolve it, even if you’re not directly in control of it? I don’t think bars have had to resolve that issue,” she said. “With the proliferation of AI and bots and stuff, I think that’s something they’re going to have to look at.”

Wynn said if an attorney becomes aware of a false review, he or she should take action.

“The first step would be to follow up with the client who appears to have posted a review or to look at whether the person posting it is a client or a former client,” she said. “If you see that it’s not, you should go about the process of getting it taken down because it is misleading even if you are not the one who is posting it.”

Not naming names — for now

Lambert says her company has firm-specific data on AI-generated reviews. But because its AI detector produces a small percentage of false positives, the company is reluctant to “call out” specific firms.

“I do have a table that names the ‘worst offenders,’” Lambert says. “But these are law firms. We don’t want to bark up that tree.”

On the other hand, she says her company would be amenable to contracting with regulators for the use of Originality.ai in their investigations. The demand for AI-detection has already exploded in the private sector, she notes.

“We’re seeing that a lot,” she says. “We’re seeing Google leverage AI-detection software in a number of instances, including verifying that posts are human-written content on advertising platforms so that they are not throwing advertising dollars at pages that are just AI-generated.”
