
When the client brings ChatGPT to the consultation

Charles D. Hatley | April 27, 2026


Not too long ago, clients would walk into an initial consultation with an attorney with a few questions, maybe a printout from a legal website or a news article they had read, or at most some knowledge gained from Google. They relied on us to listen and offer recommendations that best fit their situation and jurisdictional constraints.

Today, that dynamic has shifted.

Clients now show up with their own fully formed legal strategies. They bring asset division agreements and even drafted arguments to support their perspective. And often, these have been generated from a simple one- or two-sentence prompt in ChatGPT, Claude or some other AI tool.

For us, the challenge isn’t just that the information is incorrect or incomplete. It’s that AI is designed to respond in such a way that it validates the client’s perspective. The client is fully confident that their AI-generated information is authoritative and correct. When we suggest that what they’ve provided doesn’t quite fit their scenario (or the law in general), they naturally become upset, especially when the attorney gets defensive about the use of AI.

While most practitioners are used to dealing with advice from Google, Facebook or even a client’s friends, AI is different. It changes the attorney/client relationship from the very beginning. Clients are walking in with perceptions validated and expectations already formed.

They’ve played the outcome out in their heads. Now, any advice we offer is going to be filtered through that lens.

What is the client really bringing you?

Clients aren’t trying to circumvent knowledgeable, experienced legal counsel. Most don’t have a lot of experience working with attorneys. They’re simply trying to make sense of their emotional overload and vulnerability and are seeking a quick, supportive answer. When people feel out of control, they look for any information to gain some sense of footing, even if that information is wrong.

Unfortunately, AI tools offer immediate, confident, professional-sounding responses to their most complex situations. Never mind the lack of jurisdictional precision or that the client has an outcome in mind and didn’t give the chatbot complete facts. There’s no comprehensive factual context and no accountability.

Clients are countering their fear and vulnerability, combined with the need for instant information, with AI pseudo-advice. They’re bringing in something that feels credible. What we’re dealing with is vulnerability fueled by technology. And we are the ones left to rein them in with information and advice they may not want to hear. When this happens, we as practitioners must be mindful of what our clients are going through.

Plausible but wrong

AI-generated legal information is rarely obviously incorrect, even to attorneys. More often, it’s just close enough to create problems and provide citations that look legitimate. In other words, it’s plausible but wrong.

Take a simple example of jurisdiction: One party lives in Virginia, while the other lives in Texas. The AI-generated strategy will likely be a combination of both jurisdictions’ rules and sound credible but flawed.

From the client’s perspective, our advice isn’t aligning with what they were expecting — or hoping for. That disconnect can quickly turn into distrust. And once doubt sets in, it’s harder to unwind than it would have been to deliver the correct legal advice in the first place.

The real challenge isn’t correcting the information. It’s doing so without damaging the relationship or insulting the client. If we immediately dismiss what the client brings to us, we risk shutting down the conversation entirely.

The first and easiest step is to take the time to review the prompts the client used to get their information. This will go a long way in helping both sides — attorneys and potential clients — understand where things differ.

It is important to acknowledge a client’s efforts and understand their process. This will go a long way toward building trust and authority. From here, we can address the limits of the tool itself. We never want to dismiss their preparation. We want to respect their active participation without minimizing it.

Just as importantly, we’re establishing early that legal outcomes aren’t formulaic. They’re jurisdiction- and fact-specific and far more nuanced than generalized AI answers can capture.

Risks of client-driven strategy

The rise of the AI-informed client also raises several ethical considerations that we as practitioners must keep in mind.

Our duty of competence and communication extends beyond simply providing correct legal advice. It also requires us to identify and clearly correct misunderstandings so that clients can make informed decisions. If misunderstandings aren’t addressed early, clients can develop expectations that aren’t realistic, leading to complications as the case progresses.

We are starting to see requests for AI prompts in discovery from both the client and the attorney on the grounds that the AI platform is a third party. This can lead to a treasure trove of information and insight for the other side. (See United States v. Heppner.)

Clients who arrive with pre-formed strategies are also more resistant to alternatives. This creates tension between client expectations and professional judgment. As practitioners, we must ensure that strategy isn’t shaped by unverified external information because ultimately, we are responsible for the advice given and the course of representation.

Practical strategies for practitioners

Given that AI is not going away anytime soon, and we will continue to encounter more AI-informed clients, the question becomes how best to engage with them.

Here are some strategies to consider:

· Ask what the client has already reviewed up front. Address it early instead of waiting for incorrect information to surface. Understanding what the client has read or generated via AI helps identify misconceptions before they affect the entire conversation.

· Use the client’s strategy as a teaching tool. Walk through their proposed approach step by step. Show where it aligns with the law and where it doesn’t. This reinforces our role as the one applying the law, not just explaining it.

· Be clear about jurisdiction. This is where AI often falls short. Explain how state statutes and case law apply to their matter, so the client understands what actually governs their situation.

· Document your advice carefully. Create a client-centric best practices policy, shared with your clients, that outlines the risks and shortfalls of using AI. If you need to correct significant misunderstandings, it’s worth documenting the guidance you’ve provided, especially if the client initially held a strong belief in an alternative approach.

Taken together, these approaches are less about managing the technology and more about reinforcing what we already know to be true — that our value as attorneys isn’t just about dispensing accurate legal information. It’s applying the law in ways that truly protect our client’s rights and future.

Litigation implications

AI’s impact doesn’t stop at the consultation.

We are already seeing the successful discovery of AI prompts from both clients and attorneys on the basis that the AI platform is a third party, based on Heppner. A client who relies heavily on AI will likely disclose information they believe to be privileged, and this information will likely be very beneficial to the opposing party.

From the practitioner’s standpoint, the use of AI in litigation is equally harmful. In Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), counsel submitted a brief containing fabricated case citations generated by ChatGPT. While that case involved the attorney’s use of AI rather than a client’s use, the takeaway is that AI-generated legal content can’t be trusted without verification.

It’s not difficult to imagine similar issues arising when clients provide their attorneys with AI-generated research or proposed arguments. These materials should be treated no differently than any other unverified source.

Shift in attorney-client dynamic

Our clients’ perspectives and opinions are more validated than ever before because of the nature of AI responses. Today, they arrive with legal terminology, concepts and expectations that must be unpacked before legally grounded advice can be given.

As AI continues to evolve, so will the way we interact with clients. Our goal isn’t to reject it, but to create the proper context, meeting clients where they are while helping them understand the limits of the information they’ve been given. Because, at the end of the day, AI can generate answers. It can organize information and even mirror legal reasoning. But it can’t sit across from a client, understand what really matters to them and apply the law in a way that accounts for both their legal and human realities. That’s still our role. And today, it’s never been more important.

Charles D. Hatley is CEO of family law firm Melone Hatley, where he focuses on building systems-driven, client-centered family law and estate planning practices. Melone Hatley has offices in Virginia, South Carolina, Florida and Texas.
