By David Van Bruwaene

The COVID-19 pandemic accelerated digital transformation across many industries. From financial services to healthcare, agile, fast-moving companies have expanded their in-person experiences with virtual interactions online.

These digital transformations are creating large amounts of digital data that didn’t exist previously. At the same time, advancements in artificial intelligence (AI), decreasing costs of computing power, and talent shortages are driving businesses to use AI’s automated decision-making capabilities to help reduce costs and increase revenues. As consumers watch machines being entrusted with decisions that impact their lives, they want businesses to take the necessary precautions to prevent bias in AI and in the data used to train models.

Automated AI decisions impact lives

To put it simply, any applications using AI for screening decisions such as hiring, admissions, insurance, criminal justice, and credit have the potential to introduce algorithmic bias that can significantly impact consumers’ lives. As a result, consumers want businesses and organizations that use AI to proactively work to prevent bias. Consumers expect that the AI models used to determine who is approved for a loan and which patients are recommended for specialty healthcare services, for example, are fair and unbiased.

Trust is the new brand equity, according to Edelman, the world’s largest public relations firm and publisher of the annual Edelman Trust Barometer report. Fair and trustworthy AI is no longer just a nice-to-have; it is a must-have for organizations. Unfortunately, consumers’ trust in AI (and in the companies using it) has been eroded by instances of bias.

One well-known example of the impact of bias is the Apple Card, an Apple-branded credit card issued by Goldman Sachs in 2019. The Apple Card came under fire for alleged gender discrimination in the months after its launch, with some customers complaining that women were granted lower credit limits than men. Goldman Sachs was eventually cleared of wrongdoing, but the damage to the company’s reputation had already been done. The incident highlights how bias in AI can harm consumers and result in financial, legal, and reputational damage to the companies involved.

Introducing responsible AI principles

Considering everything that is at stake, it is no coincidence that organizations worldwide have published more than 300 guidelines and principles addressing trust issues in AI over the past three years. For example, the Business Roundtable (BRT), a nonprofit association of more than 200 CEOs of the largest companies in the United States, recently published a “Roadmap for Responsible Artificial Intelligence” with “trusted and inclusive” AI as a main theme. The roadmap responds to the need for businesses to align with consumers’ expectations of fair and responsible AI, and it is just the tip of the iceberg. The World Economic Forum (WEF) also recently published an “AI C-Suite Toolkit” to help executives navigate the responsible AI landscape, and many more guidelines and principles currently in the works will shape the use of AI in the future.

How can businesses operationalize responsible AI principles and build consumer trust?

Moving Beyond Principles: Addressing AI Operational Challenges, a resource from the Canadian RegTech Association, details the processes and technologies that early AI adopters in the financial services industry are using to promote trust through responsible AI, and it offers actionable information to help organizations get started. Businesses and organizations inside and outside of financial services can also leverage the following best practices as they work to meet consumers’ demand for ethical and responsible AI.

Use the three lines of defense framework for model risk management
The three lines of defense framework for model risk management was introduced by the Federal Reserve Board in the United States following the 2008 financial crisis. It involves 1) data scientists and model developers, 2) independent validators, and 3) internal auditors. The banking industry has used it to manage traditional model risk for over a decade, and the same framework can now be extended to AI models.

It establishes responsibilities and quality checks throughout the AI lifecycle, enabling teams to identify and mitigate risks such as AI bias. Prioritizing risk management during the development and deployment of AI provides a structure for completing the functions necessary to build high-performing, trusted AI for all stakeholders.
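
To make this concrete, here is a minimal sketch, in Python, of how the sign-off discipline might be encoded in a deployment workflow; the role names and gating rule are assumptions for illustration, not part of the Federal Reserve guidance or any specific governance product.

```python
# Illustrative only: require a recorded sign-off from each line of defense
# before a model can be promoted to production.
REQUIRED_SIGNOFFS = ("model_development", "independent_validation", "internal_audit")

def ready_to_deploy(signoffs: dict) -> bool:
    """signoffs maps a line of defense to the reviewer who approved it."""
    return all(signoffs.get(line) for line in REQUIRED_SIGNOFFS)

# The third line (internal audit) has not signed off yet, so deployment is blocked.
signoffs = {
    "model_development": "data-science-lead",
    "independent_validation": "model-risk-team",
}
print(ready_to_deploy(signoffs))  # False
```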

Adopt standardized documentation and reporting
Documentation and reporting are necessary to make AI transparent, auditable, and compliant: the most important components of trust. The idea of documentation and reports that can be shared among all stakeholders, including consumer-facing reports similar to the nutritional labels on food packaging, has been championed by regulators and businesses alike. The main obstacle to producing these artifacts is time: documentation and reporting are time-consuming when performed manually. Automating them can save countless hours that can be redirected toward developing new AI. One time-saving solution is an AI governance tool that standardizes and automates documentation and reporting.
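
As a sketch of what automated documentation can produce, the example below generates a machine-readable “nutrition label” for a model; the field names and JSON format are assumptions made for illustration, not the schema of any particular AI governance tool.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical model-card fields; real governance tools define their own schemas.
@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    training_data: str
    fairness_metrics: dict   # e.g., disparity measures per protected group
    limitations: str

def export_card(card: ModelCard, path: str) -> None:
    """Write a standardized, shareable report for a model."""
    with open(path, "w") as f:
        json.dump(asdict(card), f, indent=2)

card = ModelCard(
    name="credit-approval",
    version="1.3.0",
    intended_use="Consumer credit line screening",
    training_data="Internal applications, 2018-2021",
    fairness_metrics={"approval_rate_gap": 0.03},
    limitations="Not validated for small-business applicants",
)
export_card(card, "credit-approval-card.json")
```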

Track evolving compliance requirements
Organizations need to be aware of current and proposed regulations. Multiple U.S. states are developing new legislation related to fairness and bias. For example, New York City passed a first-of-its-kind law in December 2021 that prohibits employers from using AI and algorithm-based technologies for recruiting, hiring, or promotion unless those tools have first been audited for bias.

In Canada, the Office of the Superintendent of Financial Institutions (OSFI) is planning to introduce AI regulations and guidelines in 2023. To avoid compliance and legal issues, along with reputational damage, companies should:

● Implement stringent fairness standards internally that meet or exceed external regulations
● Create a library of policies, requirements, and guidelines to track new regulatory developments
● Ensure workflows align with regulations
● Use an AI governance tool that automatically provides compliance checklists for all applicable regulations

Maintain an accurate and up-to-date model inventory
Businesses need an organized way to inventory AI. A model inventory can provide at-a-glance performance and risk information for all AI models in use, and this heat-map view makes it easier to manage bias risks. A model inventory also stores the data and documentation for each model that are needed in the event of an audit, as well as in anticipation of the new California Privacy Rights Act and the similar regulations expected to follow. On January 1, 2023, consumers in California will have the right to opt out of automated decision-making algorithms. Having an AI model inventory management system will help companies track and ensure compliance.

Using an AI governance tool that supports model cataloging in a central repository is the most efficient way to inventory AI.
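
For illustration, a bare-bones inventory entry and heat-map style summary might look like the sketch below; the fields (owner, bias risk, last audit date) are assumptions chosen to show the idea, not a prescribed schema.

```python
from dataclasses import dataclass

# Hypothetical inventory record; a governance tool would persist these centrally.
@dataclass
class InventoryEntry:
    model_id: str
    owner: str
    business_use: str
    performance: float   # e.g., validation AUC
    bias_risk: str       # "low", "medium", or "high"
    last_audit: str      # date of most recent review

inventory = [
    InventoryEntry("credit-approval", "risk-team", "lending", 0.81, "medium", "2022-03-15"),
    InventoryEntry("claims-triage", "ops-team", "insurance", 0.77, "high", "2021-11-02"),
]

# At-a-glance view: flag models whose bias risk needs attention.
for entry in inventory:
    flag = "REVIEW" if entry.bias_risk == "high" else "OK"
    print(f"{entry.model_id:16s} perf={entry.performance:.2f} "
          f"bias_risk={entry.bias_risk:7s} last_audit={entry.last_audit} [{flag}]")
```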

Constantly monitor performance
Continuous monitoring of AI can help catch algorithmic bias problems early. Organizations should have access to real-time performance, risk, and bias information for all of their AI, along with a plan for using that data to check for quality and fairness over time.
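
One way to make that plan concrete, sketched here under assumed data (each recent automated decision tagged with a protected-group label and an approval outcome), is to track the gap in approval rates between groups over a rolling window and alert when it exceeds a tolerance the organization has set.

```python
# Minimal fairness-monitoring sketch; the 0.10 tolerance is illustrative only.
def approval_rate_gap(decisions):
    """decisions: list of (group, approved) pairs, where approved is True/False."""
    counts = {}
    for group, approved in decisions:
        total, approvals = counts.get(group, (0, 0))
        counts[group] = (total + 1, approvals + int(approved))
    rates = [approvals / total for total, approvals in counts.values()]
    return max(rates) - min(rates)

# Most recent window of automated decisions (hypothetical).
recent_window = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]

gap = approval_rate_gap(recent_window)
if gap > 0.10:
    print(f"Fairness alert: approval-rate gap of {gap:.2f} exceeds tolerance")
```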

As a result of the pandemic and the rapid digital transformations that followed, AI is becoming more ingrained in many business functions. As the Business Roundtable CEOs predict, “In 12-18 months you’re going to start to see the results of this compressed transformation.” As businesses and organizations grow their AI, they need to adopt strategies, processes, and technologies that will enable them to build trustworthy AI and gain consumer confidence.

David Van Bruwaene is a purpose-driven serial entrepreneur, philosopher, and educator, and a leader in consumer and business strategy for ethical technologies. He is the Founder and CEO of FAIRLY, a governance, risk, and compliance solution built to help businesses accelerate responsible AI models to market. Through FAIRLY, Van Bruwaene is working to promote and protect human rights at a time of growing concern over AI model development. With academic roots in Cognitive Science and Philosophy at Cornell University, David has academic relationships at UC Berkeley, the University of Ottawa, the University of Waterloo, and the University of Guelph.

