Introduction: New Tools, New Challenges for Mediators and Attorneys
Artificial intelligence is rapidly transforming negotiation and legal practice. Mediators and attorneys now have AI assistants that can draft settlement proposals, summarize case law, analyze negotiation tactics, and more. These tools promise efficiency and insight, yet they also introduce subtle cognitive challenges. Professionals trained to listen, reason critically, and recall complex details may find that constant AI assistance changes how they think. The challenge for mediators and lawyers in 2025 is to harness AI’s benefits without letting their own skills atrophy. In fact, for any ambitious professional, the directive is clear: adapt to the new landscape of AI integration or risk obsolescence. But as we race to integrate AI, a critical question emerges, one that will define the winners and losers of this new era: Are we using these tools to augment our thinking, or to replace it? For the last two years, this has been a matter of speculation. Now, we have some real data.
The Cognitive Cost of Overreliance on AI: Lessons from an MIT Brain-Scan Study
For the first time, a team at the MIT Media Lab has gone beyond speculation. In a preliminary study published in June 2025 titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” researchers gave us the first hard data on the neurological consequences of AI use. While the findings are preliminary, they raise important questions about how to use AI responsibly. The study followed 54 participants from universities including MIT and Harvard, using electroencephalography (EEG) to measure their brain engagement in real time during writing tasks.
The Amnesia Effect: Users Couldn’t Recall Their Own Work
The most stunning behavioral result was the immediate impact on memory. Participants were asked to quote a sentence from the essay they had just completed.
The AI Group: A staggering 83.3% of participants who used ChatGPT failed to provide a correct quotation from their own essay in the first session. In fact, zero participants in this group could produce a fully correct quote. Let that sink in: ZERO!
The Control Group: Meanwhile, those who wrote without AI assistance fared far better: only 11.1% of participants in the “Brain-only” and “Search Engine” groups had the same difficulty.
The implication is clear: when the AI does the drafting, you process ideas but don’t fully internalize them.
The Neural Dimmer Switch: Brain Connectivity Collapsed
The behavioral data was mirrored by the neurophysiological evidence. The researchers found that mental engagement systematically decreased with more AI assistance. The brain essentially powers down parts of its creative network when AI takes over.
The “Brain-only” group showed 79 significant neural connections in the alpha band — a frequency associated with internal attention and creative ideation.
The LLM group showed just 42 connections.
That’s a dramatic 47% reduction in neural engagement during creative work.
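To make the arithmetic behind that figure explicit, the reduction follows directly from the two connection counts reported in the study:

\[ \frac{79 - 42}{79} = \frac{37}{79} \approx 0.47 \]

In other words, the LLM group retained just over half of the alpha-band connectivity observed in the Brain-only group.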
The Atrophy Effect: The Brain Didn’t Bounce Back
Perhaps most concerning, when regular AI users were forced to write without the tool in a later session, their brains didn’t simply return to a “normal” state.
Their brains showed significant “under-engagement of alpha and beta networks” compared to those who had practiced without AI all along.
The study supports the idea that frequent AI use can lead to “skill atrophy” in tasks like brainstorming and problem-solving.
Like a muscle that’s forgotten how to work, the neural pathways for independent thought were measurably weaker.
While the AI-assisted essays often scored well on technical metrics, the human teachers evaluating the work told a different story. They described the LLM-generated essays as “soulless,” noting that “many sentences were empty with regard to content and essays lacked personal nuances.” The bottom line so far? Relying too heavily on AI weakens memory, reduces neural engagement, and diminishes originality. But this isn’t a reason to avoid AI; it’s a reason to rethink how we use it. Another study, conducted by the AI company Anthropic in 2025, analyzed hundreds of thousands of interactions between university students and an AI assistant (Claude). Its findings raise similar concerns.
This phenomenon of cognitive offloading isn’t new. The study itself references historical parallels that prove we’ve been here before.
The Calculator Precedent: The paper highlights educational observations that students who rely heavily on calculators “can struggle more when those aids are removed” because they haven’t internalized the problem-solving process.
The Google Effect: The study also discusses the well-documented “Google Effect,” where reliance on search engines changes how we remember. We stop retaining the information itself and instead just remember where to find it, discouraging deeper cognitive processing.

Unfortunately, this phenomenon isn’t limited to students. Corporate and legal environments are seeing similar patterns. In early 2025, researchers from Microsoft and Carnegie Mellon University surveyed over 300 professionals across fields (business, law, engineering, etc.) about their use of AI in day-to-day work. They found a clear correlation: those who placed the most trust in AI outputs engaged in the least critical evaluation of the AI’s conclusions⁴. In fact, participants admitted that, on average, 40% of the time they used an AI at work, they applied no critical thinking to the results.
Implications for Negotiators, Mediators, and Legal Professionals
For mediators, arbitrators, and attorneys, these findings hit close to home. Success in our fields has always required more than just knowledge of the law or negotiation tactics; it requires cognitive presence: sharp attention, active listening, critical analysis, creativity in problem-solving, and the ability to recall and synthesize complex information (e.g., the facts of a case, the interests of the parties, legal precedents). If overreliance on AI can weaken these very faculties, then unchecked use of AI in mediation or legal practice could undermine professional effectiveness over time.
One immediate concern is memory retention. Legal work and mediation often involve absorbing large volumes of information: case facts, client narratives, contract clauses, and more. AI tools can summarize transcripts, filter discovery documents, or pull up relevant case law instantly. This is extremely useful, but there’s a catch: the less we actively engage with the material, the less likely we are to truly remember it. There’s a risk of becoming a passive conduit for information the AI supplies, rather than deeply internalizing the case.
Perhaps the most significant risk is the slow erosion of critical thinking and judgment. Mediators and attorneys are decision-makers and advisors: we weigh evidence, test arguments, identify fallacies, and ensure that outcomes are fair and sound. If we increasingly defer to AI outputs (“The computer suggests this settlement is fair” or “ChatGPT says this brief is legally sufficient”), we might get out of the habit of rigorously scrutinizing the reasoning.
Effective dispute resolution often requires mediators and attorneys to think creatively: generating a variety of settlement options, finding novel legal arguments, or reframing problems. Current generative AI is impressive at producing standard, well-structured content drawn from its training data. But it is derivative; by design, it mirrors existing patterns rather than truly innovating. If a negotiator or attorney becomes overly reliant on an AI to propose solutions, there’s a risk that the outcomes will all start to look the same (because the AI is drawing from the same conventional playbooks). The human professional might gradually lose practice in creative brainstorming.
A Constructive Path Forward: AI as an Aid for Better Decision-Making
Despite the cautions outlined above, it’s important to remain optimistic and proactive. AI is here to stay in the legal and dispute resolution world, and when used wisely, it can undeniably enhance outcomes. The key is designing AI integration that supports and amplifies human expertise, rather than undercutting it. One example is the approach taken by Next Level Mediation, a platform that leverages AI and decision science to assist mediators and attorneys without replacing the human role.
Next Level Mediation’s software illustrates what responsible AI augmentation can look like in practice. The platform uses a Decision-Science (DS) based application to help mediators and lawyers systematically analyze conflicts. For example, it can rapidly sift through large amounts of case documents and extract key patterns or points for the mediator’s attention. It then presents this information through intuitive visualizations: timelines of events, charts of each party’s priorities and risk assessments, decision trees of possible outcomes, and so forth. By converting complex case information into accessible visuals, the AI offloads some cognitive burden in a positive way: it reduces information overload for humans without making the decisions for them.

Mediators report that seeing a dispute mapped out in a timeline or network diagram helps them and the parties quickly grasp the “big picture” and spot areas of agreement or tension they might have missed in dense text. Importantly, these visual decision aids still require the mediator to interpret and guide the discussion. The mediator remains in control, using their judgment to decide which insights are relevant and how to explain them to the parties. Next Level’s AI is a tireless analyst working in the background, freeing the human mediator to focus on higher-level strategy, communication, and empathy.
Next Level’s philosophy is that AI and decision analytics should inform and educate the humans in the loop, not dictate outcomes.
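To make the “inform, don’t dictate” idea concrete, here is a minimal, hypothetical Python sketch of the kind of decision-tree calculation that decision-science tools can visualize for mediators and parties. It is illustrative only and makes no claim about how Next Level Mediation’s software is actually implemented; the branch names, probabilities, and dollar figures are invented for the example.

```python
# Hypothetical illustration only: not Next Level Mediation's actual code.
# A simple "decision tree" comparison of litigating vs. accepting a settlement,
# the kind of analysis decision-science tools can visualize for the parties.

from dataclasses import dataclass

@dataclass
class Branch:
    label: str
    probability: float  # likelihood of this outcome; all branches sum to 1.0
    payoff: float       # net recovery in dollars (award minus fees and costs)

def expected_value(branches):
    """Probability-weighted average payoff across the tree's branches."""
    return sum(b.probability * b.payoff for b in branches)

# Invented numbers for a plaintiff weighing trial against a settlement offer.
trial = [
    Branch("Win at trial",  0.45, 400_000 - 120_000),
    Branch("Partial win",   0.25, 150_000 - 120_000),
    Branch("Lose at trial", 0.30, 0 - 120_000),
]
settlement_offer = 140_000

ev_trial = expected_value(trial)
print(f"Expected value of going to trial: ${ev_trial:,.0f}")
print(f"Settlement offer on the table:    ${settlement_offer:,.0f}")
# The tool surfaces the comparison; the mediator and the parties still decide
# how to weigh risk tolerance, relationships, and non-monetary interests.
```

Even in this toy example, the output is only an input to the conversation: the expected value of trial (about $97,500 here) versus the $140,000 offer frames the risk discussion, while the humans in the room still weigh credibility, relationships, and interests the numbers cannot capture.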
Conclusion
Integrating AI into mediation and legal practice is not only inevitable but, done right, desirable. The goal is thoughtful integration: using these tools to elevate the work product and decision-making without falling into the trap of overdependence. Yes, recent studies have issued a wake-up call that unchecked reliance on AI can lead to cognitive complacency, a sort of “brain drain” where our memory, attention, and critical thinking may weaken. But these findings should not scare us away from AI; rather, they should guide us in developing the best practices for its use. By staying aware of the risks of cognitive offloading, professionals can put guardrails in place: continuing to practice core skills, critically evaluating AI outputs, and keeping themselves intellectually engaged in every matter.
Footnotes / References
- MIT Media Lab EEG Study (2025): Kosmyna, N., et al., “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” Preliminary study results published June 2025, https://www.media.mit.edu/publications/your-brain-on-chatgpt/. Participants using ChatGPT had significantly reduced brain connectivity and memory recall, underperforming on neural and behavioral metrics compared to non-AI users.
- Fan et al., “Metacognitive Laziness” Study (2024): Yizhou Fan, Luzhen Tang, et al., British Journal of Educational Technology, Dec. 2024. Found that students using ChatGPT offloaded cognitive processes and did not engage deeply in learning, indicating a risk of dependency.
- Anthropic University Study (2025): Barshay, J., The Hechinger Report (May 19, 2025), “University students offload critical thinking, other hard work to AI.” Analysis of Claude AI usage showed students using AI for higher-order tasks, raising concern that AI can become a “crutch” and stunt development of foundational thinking skills.
- Microsoft/CMU Knowledge Workers Survey (CHI 2025): Turner, B., Live Science (Apr. 3, 2025), “Using AI reduces your critical thinking skills, Microsoft study warns.” Survey of 319 professionals found a strong tendency not to think critically about AI outputs among those who trust them, with participants reporting zero critical scrutiny on 40% of AI-assisted tasks.
- “Irony of Automation” and the Atrophy of Judgment: Ibid. (Live Science, 2025). Researchers noted that automating routine tasks deprives professionals of practice in judgment, leading to “atrophied” cognitive skills when facing novel situations. This echoes earlier human-factors research on how over-automation can erode human expertise.
- Gerlich (2025), AI Use vs. Critical Thinking: Gerlich, M., “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,” Societies, vol. 15 (2025). Found a significant negative correlation between frequent AI use and critical thinking test scores, with cognitive offloading cited as a mediating factor. Also referenced Sparrow et al. (2011) on the “Google effect” (the tendency to forget information that is readily accessible), now extending to deeper reasoning.
- Legal Ethics and AI, Professional Judgment: E.g., Harpst, S., “Responsible Realism About AI in Legal Practice,” Mediate.com (2023). Emphasizes that attorneys must not substitute AI output for their own judgment and must validate AI-derived insights against their legal knowledge and ethical standards.
- Next Level Mediation, Decision Science and AI Integration: Bergman, R., Next Level Mediation (2024), and LinkedIn article “Mediation in the Age of Digital Distraction” (Nov. 2025). Next Level’s platform uses AI-driven dispute visualization and decision analytics to assist mediators and parties. It simplifies complex information into visual form (timelines, decision trees) to reduce cognitive load and improve understanding, while keeping the mediator in control. This approach has achieved faster settlements without sacrificing fairness or empathy, by augmenting human decision-making rather than automating it.