
Implementing AI in Next Level Mediation Software

The history of ethics and AI is long and spotty. The earliest treatment came from Isaac Asimov in his short story “Runaround,” where he introduced his three laws of robotics. A few years later, the topic was addressed by Norbert Wiener in Cybernetics. Popular films like 2001: A Space Odyssey and Westworld drove the discussion through the rest of the 20th century, and more recent movies such as The Matrix, Ex Machina, Her, and Robot & Frank have continued it.

Asimov’s “Three Laws of Robotics” are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings unless such orders would conflict with the first law.
  • A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

These three foundational laws have been the theoretical underpinning of AI-and-ethics discussions for the past 80 years. With the emergence of deep machine learning and new generative Large Language Models (LLMs), the discussion has exploded. [i]

The race between competing companies, such as OpenAI (backed by Microsoft) and Google (with DeepMind’s Sparrow and its own Bard), is moving quickly. Products are being released at an unbelievable pace simply to capture market-share advantage, with little regard for the consequences for society (Figure 1).

Figure 1: System diagram

Unfortunately, LLMs are usually trained on open internet data (e.g., commoncrawl.org), which contains a great deal of noise and is not always closely relevant to the specific business or problem context. Since governments have not addressed the potential societal problems, people are left on their own to deal with the introduction of generative AI LLMs and the attendant possibilities of bias, misinformation, fake news, and fake videos.

So why haven’t governments established standards and a set of usable ethics for AI?

There are several plausible reasons there has been so little progress in establishing ethics or regulations for AI. In the first place, AI is an incredibly complex technology, and the ethical implications of its use are difficult to predict and assess; as the technology develops, new ethical dilemmas arise that have yet to be addressed. Second, there is no overarching authority or governing body responsible for establishing regulations for AI, and governments and other organizations are hesitant to legislate without fully understanding the implications, although work on AI ethics is under way at various institutions (the Alan Turing Institute’s Public Policy Programme, the University of Oxford’s Future of Humanity Institute, the Berkman Klein Center at Harvard, etc.). Last, AI has the potential to be used for both good and ill, and the ethical implications of its use are still not fully clear, so it is difficult to agree on a unified set of ethical standards and regulations. For example, the ethical and governance issues related to AI differ for human rights, the environment, the law, and mediation. Combined, these issues represent a challenge when integrating AI into software applications.

Given the risks and the lack of governing regulations, what precautions had to be taken when integrating OpenAI LLMs into the NextLevel Mediation platform?

Using AI in mediation has the potential to revolutionize the way disputes are resolved. One of the first hurdles in integrating AI into mediation software was agreeing on how to maximize its potential benefit safely. To better understand that benefit to mediators, it was useful to distinguish between assisted, augmented, and automated intelligence: assisted intelligence supports the work of a human being, augmented intelligence allows humans to do something they otherwise could not accomplish, and automated intelligence describes those cases in which the entire task is performed by an AI. It was decided that generative AI’s greatest power lies in complementing and augmenting human capabilities. To implement AI responsibly, however, this would have to be accomplished safely and without violating ODR standards of practice or current efforts in AI ethics.

Ensuring Safety

One approach to ensuring safety and usability would be to fine-tune the AI models. The traditional method of adapting a general machine-learning model to a specific task is to use labeled data from the target domain to uptrain the general model end to end; during that uptraining, some or all of the model’s learnable parameters are fine-tuned via backpropagation. Unfortunately, this approach poses two problems for mediation. First, today’s LLMs are simply too big, with some having hundreds of billions of parameters. Second, end-to-end fine-tuning not only consumes a huge amount of computational resources but also requires a sizable body of domain-specific labeled data, which is not available for mediation because mediation must remain confidential. In the end, it was decided that a human (the mediator) would always be kept in the loop and that the results of AI prompts would never be shown directly to clients.
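
To illustrate that decision, here is a minimal sketch of a human-in-the-loop gate (Python, with hypothetical class and function names; this is not NextLevel's production code). Every model completion is held for mediator review, and nothing the AI generates can be released beyond the mediator until a human approves it.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AISuggestion:
    """A model completion held for human (mediator) review."""
    prompt: str
    completion: str
    status: ReviewStatus = ReviewStatus.PENDING


class HumanInTheLoopGate:
    """Queues AI output for mediator review; raw completions are never released."""

    def __init__(self) -> None:
        self._queue: list[AISuggestion] = []

    def submit(self, prompt: str, completion: str) -> AISuggestion:
        # Every completion starts as PENDING, visible to the mediator only.
        suggestion = AISuggestion(prompt, completion)
        self._queue.append(suggestion)
        return suggestion

    def review(self, suggestion: AISuggestion, approve: bool) -> None:
        # The mediator explicitly approves or rejects each suggestion.
        suggestion.status = ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED

    def releasable(self) -> list[AISuggestion]:
        # Only mediator-approved suggestions may inform what parties see.
        return [s for s in self._queue if s.status is ReviewStatus.APPROVED]
```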

AI Ethics and ODR Practice Standards

To implement AI responsibly, we reviewed the intersection of ODR practice standards and the current efforts in AI ethics.

Ethics in artificial intelligence (AI) is a rapidly developing field of research. Unfortunately, the development of AI technology has outpaced our ability to regulate it. Ethical concerns associated with AI are wide-ranging, from privacy and data protection to algorithmic bias and decision-making. Researchers have been exploring a variety of ways to define ethics in AI, from applying traditional ethical principles to developing frameworks that encompass the unique characteristics of AI. Some of those efforts are listed below:

  • NIH
  • Alan Turing Institute Public Policy
  • UNESCO
  • World Economic Forum
  • University of Oxford Schwarzman Center
  • European Commission: Proposal for a Regulation laying down harmonised rules on artificial intelligence (the AI Act)

After careful review of those efforts, we chose the Alan Turing Institute’s guidance [ii] as a baseline. The Public Policy Programme at The Alan Turing Institute was set up in May 2018 to develop research, tools, and techniques that help governments innovate with data-intensive technologies and improve the quality of people’s lives. In the Institute’s words, “These values, principles, and techniques are intended both to motivate morally acceptable practices and to prescribe the basic duties and obligations necessary to produce ethical, fair, and safe AI applications.”

The National Center for Technology and Dispute Resolution (NCTDR) created the first Online Dispute Resolution (ODR) standards in 2009 and issued updates to its ethical principles through 2017. Updated ODR standards, as defined by the International Council for Online Dispute Resolution (ICODR), have been published recently. [iii] They were written to provide a baseline for mediators/neutrals in terms of rules, qualifications, and certification efforts for online dispute resolution processes and practices.

When the two sets of standards are placed side by side, the overlap (intersection) becomes clear. In each pairing below, “AI” gives the ethical principle and “ODR” the corresponding practice standard:

  • AI: RESPECT the dignity of individual persons. Ensure their ability to make free and informed decisions about their own lives.
    ODR: IMPARTIAL. Online Mediation must treat all participants with respect and dignity. Online Mediation may enable often-silenced or marginalized voices to be heard and ensure that offline privileges and disadvantages are not replicated in the Online Mediation process.
  • AI: CONNECT with each other sincerely, openly, and inclusively. Safeguard the integrity of interpersonal dialogue, meaningful human connection, and social cohesion.
    ODR: ACCESSIBLE. Online Mediation should be easy for parties to find and participate in, should not limit their right to representation, should be available through both mobile and desktop channels, should minimize costs to participants, and should be easily accessed by people with different physical ability levels.
  • AI: CARE for the wellbeing of each and all. Design and deploy AI systems to foster and cultivate the welfare of all stakeholders whose interests are affected.
    ODR: COMPETENT. Online Mediation providers must have the relevant expertise in dispute resolution, legal, technical execution, language, and culture required to deliver competent, effective services in their target areas. Online Mediation services must be timely and use participant time efficiently.
  • AI: PROTECT the priorities of social values, justice, and the public interest. Treat all individuals equally and protect social equity.
    ODR: CONFIDENTIAL. Online Mediation providers must maintain the confidentiality of party communications in line with published policies covering a) who will see what data and b) how that data can be used.
  • AI: FAIRNESS in data, design, and outcomes: representativeness, recency, accuracy, fitness for purpose, and model building.
    ODR: FAIR/IMPARTIAL/NEUTRAL. Online Mediation providers must treat all parties impartially and in line with due process, without bias or benefits for or against individuals, groups, or entities. Conflicts of interest of providers, participants, and system administrators must be disclosed before the commencement of Online Mediation services.
  • AI: ACCOUNTABILITY, both anticipatory and remedial. AI systems must facilitate end-to-end answerability and auditability, which requires responsible humans-in-the-loop across the entire design and implementation chain.
    ODR: ACCOUNTABLE. Online Mediation providers must be accountable to all participants in the mediation effort, and continuously accountable to the legal institutions and communities that are served.
  • AI: SUSTAINABILITY, aware of the long-term impact on individuals and society, assessed through Stakeholder Impact Analysis (SIA). A technically sustainable AI system is safe, accurate, reliable, secure, and robust.
    ODR: SECURE. Online Mediation providers must ensure that data collected and communications between those engaged in Online Mediation are not shared with any unauthorized parties. Users must be informed of any breaches in a timely manner.
  • AI: TRANSPARENCY. Demonstrate that a specific decision or behavior of the system is ethically permissible, non-discriminatory/fair, and worthy of public trust.
    ODR: TRANSPARENT. Online Mediation providers must explicitly disclose in advance the form and enforceability of the dispute resolution process and the risks and benefits of participation.
  • ODR only: LEGAL. Online Mediation providers must abide by and uphold the laws in all relevant jurisdictions.

Intersection of AI Ethical Standards and ODR Practice Standards

After reviewing the intersection of AI ethics efforts and ODR practice standards, it was decided that responsible AI for mediation should, at a minimum, encompass:

  • Fairness and inclusiveness
    Minimize the potential for stereotyping, demeaning, or erasing identified demographic groups, including marginalized groups.
  • Reliability and safety
    Ensure the system behaves consistently and minimize the potential for harmful or erroneous outputs.
  • Transparency
    Inform the stakeholders who will use the system outputs to make decisions, and identify potential issues that may arise from using the AI system.
  • Privacy and security
    Ensure privacy and confidentiality.
  • Accountability
    Conduct evaluations with users to judge impact.
  • Human oversight and control
    Always keep a human in the loop.

Integration of OpenAI into NextLevel™ Mediation

To integrate AI responsibly into the NextLevel™ Mediation software, it was important to ensure that the OpenAI technology was used only to supplement the mediator’s skills and was never directly accessible to the disputing parties. This design helps protect the integrity of the process and ensures that the parties remain in control of it. Furthermore, the software was developed with the highest ethical standards in mind and with a focus on protecting the privacy and confidentiality of the disputing parties: an agreement was signed with OpenAI that prompt data would not be stored or used for training purposes.
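
One way to picture that restriction is a role check at the boundary of every AI feature. The sketch below is illustrative only (the Role values, require_mediator, and generate_suggestion are assumed names, not the actual access-control layer):

```python
from enum import Enum


class Role(Enum):
    MEDIATOR = "mediator"
    PARTY = "party"


class AccessDenied(Exception):
    """Raised when a non-mediator tries to invoke an AI feature."""


def require_mediator(role: Role) -> None:
    # AI endpoints are callable only by the mediator, never by disputing parties.
    if role is not Role.MEDIATOR:
        raise AccessDenied("AI assistance is available to the mediator only.")


def generate_suggestion(role: Role, prompt: str) -> str:
    require_mediator(role)
    # The LLM would be called here; its output still goes through mediator
    # review before it can influence anything a party sees.
    return "<completion pending mediator review>"
```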

To supplement the mediator’s skills, we focused on four specific areas (a code sketch of the summarization step follows the list):

  • Developing questionnaires to help understand the dispute.
  • Developing models to help determine disputing party priorities.
  • Summarizing questionnaire results.
  • Suggesting ideas for negotiation from prioritized objectives of disputing parties.
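
As an example of the third area, summarization can be reduced to a templated call to the model. The sketch below uses the OpenAI Python client; the model name, system instructions, and function name are illustrative assumptions, not NextLevel's actual prompts:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_responses(responses: list[str]) -> str:
    """Summarize questionnaire answers for the mediator's eyes only."""
    joined = "\n".join(f"- {r}" for r in responses)
    completion = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a mediator. Summarize the questionnaire "
                    "answers neutrally, without taking sides or assigning blame."
                ),
            },
            {"role": "user", "content": f"Questionnaire answers:\n{joined}"},
        ],
        temperature=0.2,  # low temperature to discourage embellishment
    )
    return completion.choices[0].message.content
```

The explicitly neutral system instructions and low temperature mirror the impartiality requirements discussed above, and the returned summary is surfaced to the mediator only.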

One of the most important aspects of the implementation was prompt engineering. Prompt engineering ensures that the data entered into the software is of the right format, type, and size, which is critical for successful data manipulation and analysis. It also helps prevent erroneous results (hallucinations) and purposely misleading queries (jailbreaks). Finally, the software is regularly tested and monitored to ensure that it works correctly and does not introduce any biases into the process.
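
A minimal sketch of that kind of input check follows (the size cap and jailbreak patterns are illustrative assumptions; production rules would be broader and continuously updated):

```python
import re

MAX_PROMPT_CHARS = 4000  # illustrative size cap

# Naive patterns for common jailbreak phrasings; a real filter would be broader.
JAILBREAK_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"pretend (you are|to be)", re.IGNORECASE),
]


def validate_prompt(text: str) -> str:
    """Reject empty, oversized, or suspicious input before it reaches the model."""
    if not text or not text.strip():
        raise ValueError("Prompt must be non-empty text.")
    if len(text) > MAX_PROMPT_CHARS:
        raise ValueError(f"Prompt exceeds {MAX_PROMPT_CHARS} characters.")
    for pattern in JAILBREAK_PATTERNS:
        if pattern.search(text):
            raise ValueError("Prompt resembles a known jailbreak pattern.")
    return text.strip()
```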

Bibliography

[i] “AI Ethics: A Long History and a Recent Burst of Attention,” Computer, vol. 54, pp. 96-102, Jan. 2021.
[ii] https://www.turing.ac.uk/news/publications/understanding-artificial-intelligence-ethics-and-safety
[iii] https://icodr.org/standards/