Businesses are excited about the transformative potential of artificial intelligence (AI) to innovate and enhance business models, customer insights, products, and processes. Alongside this potential, there is a growing need to identify and mitigate risk associated with AI technologies. CIOs, in particular, are exploring how to ensure the responsible design and use of AI and their roles in managing legal, regulatory, financial and reputational risks.
We recently met with several Global 100 CIOs to discuss this critical intersection of AI opportunity and risk management. Three common challenges emerged from our conversation:
- How to formulate an acceptable use policy for AI and establish an environment where employees can make the most of it, without incurring undue risk.
- How to create a governance model for AI that keeps pace with its rapid evolution, fostering innovation now while anticipating future breakthroughs.
- How to optimize the interplay between AI enablement and AI governance. This is crucial in balancing AI’s potential rewards with its inherent risks, a responsibility of which CIOs are keenly aware.
The conversation echoed the concerns we hear throughout the market. CIOs of large corporations are grappling with the same AI governance challenges, regardless of industry. Let’s look at these challenges in greater depth and discuss some potential solutions.
AI and acceptable use
While most of these CIOs have drafted some form of initial acceptable use policy, many struggle to make those policies applicable and effective for AI more broadly. Their approaches vary: some start from a risk-mitigation perspective and limit AI usage; others have opened the gates fairly wide to employees’ independent exploration of AI’s possibilities.
Even as they make progress toward clarifying acceptable use, some CIOs are still exploring the extent to which they’ll allow the use of publicly available AI tools and are concerned about their ability to block public AI tool use. To help counter that risk — and provide a safe way to engage with AI — roughly half of the CIOs were focused on developing internal technologies that emulated the capabilities of publicly available tools.
Some CIOs are evaluating acceptable use from a use case perspective. For instance, they’d prohibit using AI in any employment decision. This use case-driven approach considers both current risk exposure and anticipated future regulation.
Recommendations: Enterprises will want to define acceptable use policies for AI if they don’t have them already. CIOs may want to consider proprietary, secured AI solutions, built on large AI foundation models, to reduce reliance on publicly available AI tools.
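As a rough illustration of the use case-driven approach described above, an acceptable use policy can be encoded as a simple lookup that defaults to "requires review" for use cases the policy has not yet classified. The categories, rulings and function names here are hypothetical sketches, not a recommended policy:

```python
from enum import Enum

class Ruling(Enum):
    ALLOWED = "allowed"
    REVIEW = "requires review"
    PROHIBITED = "prohibited"

# Illustrative policy table: each AI use-case category maps to a ruling.
# The category names and rulings are hypothetical examples.
POLICY = {
    "document_summarization": Ruling.ALLOWED,
    "code_assistance": Ruling.REVIEW,
    "employment_decisions": Ruling.PROHIBITED,  # e.g., hiring or promotion, barred outright
}

def evaluate_use_case(category: str) -> Ruling:
    # Unclassified categories default to review rather than silent approval.
    return POLICY.get(category, Ruling.REVIEW)

print(evaluate_use_case("employment_decisions").value)  # prohibited
print(evaluate_use_case("market_research").value)       # requires review
```

Treating the policy as data rather than scattered prose makes it auditable and easy to update as new use cases or regulations emerge.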
Defining what constitutes acceptable use could be considered the first challenge of AI governance. The next challenge would be to equip the organization with new AI opportunities as they arise by applying governance that is agile and streamlined enough to keep pace with AI’s rapid changes.
AI and agile governance
These CIOs are concerned with striking the right balance between controlling risk and enabling, supporting and managing innovation. They want to establish future-proof governance frameworks whereby an AI solution won’t be obsolete before it’s put in place.
By the time an AI opportunity gets through a traditional governance cycle, the potential solution could be outmoded by a new set of AI features or a new player in the AI marketplace. Identifying every risk and potential compliance concern can sometimes take weeks or even months per AI use, depending on the organization’s approach to governance.
A governance committee reviews an AI solution to be built on version N of a technology, but version N+1 may be running before all the approvals are in. The upgraded platform changes the use case’s risk profile. Meanwhile, regulators could change the compliance picture while decision-makers are still evaluating the use case. The challenge is to establish governance that remains as diligent as ever while also serving as an accelerator. One enterprise had rebranded its governance as enablement, an idea that appealed to many at the gathering.
Recommendations: CIOs will want to start from a use case-specific governance framework, then develop one that is both flexible and high-level enough to account for all risks. Organizations can reduce risk by acknowledging that mitigating and managing risk is no longer solely the job of professionals who have traditionally handled it; it is now everyone’s responsibility.
Defining acceptable use for AI and then establishing agile governance is essential to effective and responsible use of the technology. The next challenge is to balance innovation with risk by bringing those perspectives together in AI decision-making.
AI and striking a balance between innovation and control
What’s appropriate for the functions within one organization is anathema for the next. One wants to take full advantage of each new AI development and stay current; another applies firm pressure to the brakes. What they have in common is the need to find a middle ground that lets both perspectives move forward. That requires figuring out the balance between risk and innovation appropriate to each organization and function, then bringing that balance into being. Increasingly, boards have significant influence over where that balance lies.
Equilibrium is found in the interplay between governance and enablement. Organizations have forums for these distinct, inherently valuable functions: managing risk and pursuing innovation. The challenge is first to recruit the right representation of both views, then to establish an operating model that fosters effective collaboration between them in developing risk-commensurate AI.
One common error is the development of AI councils or working groups that are external to existing governance and innovation structures. These bolted-on groups might make great progress with an AI proof of concept, for example, only to get stuck in legal and risk reviews prior to launch. It’s often more effective to:
- Retrofit existing forums to take AI into their purview.
- Familiarize innovation teams with top security, privacy and transparency principles so those concerns are considered from use case conception.
- Minimize duplication in intake forms and questionnaires to avoid frustration in both the innovation and risk management processes.
- Integrate risk control assessment approaches so that a use case can be reviewed just once from multiple perspectives (such as data quality, privacy, security and so on). Checklists are a direct means to ensure this happens.
It is important that the steps outlined above occur at the right points in the process. The most successful approaches ask no more than five questions in an initial AI innovation/build forum to determine an initial low/medium/high risk rating. From that point, appropriate levels of security, privacy and controls are established in line with the risk level as the use case progresses.
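The five-question triage described above can be sketched as a simple count of yes answers mapped to a coarse rating. The questions and thresholds below are illustrative assumptions, not the questions any particular enterprise uses:

```python
# Hypothetical intake triage: five yes/no questions yield an initial
# low/medium/high risk rating for an AI use case.
QUESTIONS = [
    "Does the use case process personal or sensitive data?",
    "Could its output affect decisions about individuals?",
    "Does it rely on a publicly available AI service?",
    "Is the use case customer-facing?",
    "Is it in scope for existing or anticipated regulation?",
]

def initial_risk_rating(answers: list) -> str:
    """Map the number of 'yes' answers to an illustrative coarse rating."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected one answer per question")
    yes_count = sum(bool(a) for a in answers)
    if yes_count <= 1:
        return "low"
    if yes_count <= 3:
        return "medium"
    return "high"

print(initial_risk_rating([True, True, False, False, False]))  # medium
```

The rating then drives how much security, privacy and control review the use case receives downstream, so a lightweight initial form doesn’t hold up low-risk work.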
Recommendations: CIOs will want to clearly understand their boards’ viewpoints. Next, they can recruit individuals representing both risk and innovation perspectives to govern and enable AI jointly and effectively.
In summary
As AI opportunities proliferate and accelerate, CIOs struggle to ensure responsible use of the technology. From formulating acceptable use policies for AI and defining agile governance, to striking the enterprise-specific balance between AI risk and AI reward, not many CIOs feel as ready to govern AI as they’d like to be. Nevertheless, recommendations and best practices are emerging to ensure risk-responsible AI.
To learn more about our AI solutions, contact us.