Navigating the Growth and Governance Tension Around AI

This blog was originally posted on forbes.com. Kim Bozzella is a member of the Forbes Technology Council.

In the past year, I have spoken to numerous technology leaders and professionals about artificial intelligence (AI), and their opinions on the technology vary widely. Some express fatigue, others are concerned and some are exuberant. These differences largely track where each person sits within their organization. At best, this creates a disjointed approach to AI; at worst, it leads to an adversarial one.

Tension often arises between business stakeholders eager to leverage AI to enhance and accelerate business performance and their colleagues in risk, legal and compliance, who prioritize data protection and risk mitigation. Interestingly, both perspectives emphasize “AI governance,” albeit with disparate definitions and areas of focus across the organization.

Leveraging AI for business improvement

This group of stakeholders typically sees “AI governance” as the understanding, resourcing and monitoring of use cases, AI’s anticipated value, and the quickest path to achieving outcomes. They are actively brainstorming ways in which AI can impact their business and are focused on implementing the most viable and valuable ideas. Some challenges these stakeholders express may include:

  • How do we gather and prioritize ideas across our organization?
  • How do we begin executing these ideas? Who should be responsible for their execution?
  • How can we anticipate and measure the value delivered?
  • What should be our target operating model for AI initiatives? Should we establish a center of excellence (CoE)?
  • What is our strategy for building, buying or partnering in the delivery process? When should we use enterprise tools like Microsoft Copilot?
  • How can we outpace our competitors in leveraging AI to enhance our business?
  • Should we centralize resources?

Managing AI for operational resiliency and risk mitigation

These stakeholders see “AI governance” as the means to manage the risks AI introduces and to understand its legal and regulatory implications. Because AI tools are so easily accessible, people within organizations can benefit from AI without fully understanding the risks involved.

Risk practitioners, in particular, are worried about effectively managing the risks AI introduces, including traditional ones like model drift and bias as well as emerging risks related to generative AI and its ability to create new content without regard to accuracy or IP protection. The challenges these stakeholders raise include:

  • What use cases and ideas are our business stakeholders considering?
  • How do we assess the risks associated with the ever-evolving AI landscape?
  • How do we establish and implement policies and procedures for AI throughout the organization?
  • How do we monitor the use of AI by our employees and within our organization in the long run?
  • Which AI tools and platforms are currently in use or planned for use?
  • Will we integrate AI capabilities into our existing technology platforms?
  • Which material risks should we be aware of? Which future considerations should be on our radar?

Call to action

In almost every organization, both of these areas are advancing quickly. In many cases, however, we find these two different “camps” aren’t communicating effectively—and in some cases, hardly at all. This can create unnecessary disagreements and slow down progress for organizations wanting to balance the opportunities and risks of AI.

This itself is a problem: as with any technology, speed matters in producing results. Organizations that bridge this gap are more likely to succeed with AI in the long term and will develop the skills to deliver value consistently. We have identified a few key steps to accelerate communication and, ultimately, create value.

1. Establish a target operating model for AI

While the target operating model (TOM) concept is not new, many companies have jumped straight into generating use cases without fully considering how they will execute them. Which technology, teams and partners will provide the most value? Which part of the organization should be involved? How will decisions be made throughout the process, from idea to prototype to pilot to production, considering production is where the value is generated?

Many organizations no longer struggle to generate use cases; they struggle to turn them into reality and integrate them into business processes. Once an AI capability is integrated into a business process, governance and risk management become critical. This is why it’s vital to include governance in the target operating model from the beginning to ensure its durability.
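The decision path from idea to prototype to pilot to production can be pictured as a series of gates, with governance built in at every step rather than bolted on at the end. A minimal sketch of that idea in Python (the stage names and sign-off mechanism here are illustrative, not a prescribed framework):

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    IDEA = 1
    PROTOTYPE = 2
    PILOT = 3
    PRODUCTION = 4

@dataclass
class UseCase:
    name: str
    stage: Stage = Stage.IDEA
    # Stages that governance has reviewed and approved.
    governance_signoffs: set = field(default_factory=set)

def advance(use_case: UseCase) -> UseCase:
    """Move a use case to the next stage only if governance has signed
    off on the current one -- governance acts as a gate at each step,
    not as an afterthought added at production."""
    if use_case.stage not in use_case.governance_signoffs:
        raise ValueError(
            f"{use_case.name}: no governance sign-off for {use_case.stage.name}"
        )
    if use_case.stage is Stage.PRODUCTION:
        return use_case  # already in production, where value is generated
    use_case.stage = Stage(use_case.stage.value + 1)
    return use_case
```

The design point is simply that the governance check lives inside the promotion logic itself, so no use case reaches production without having cleared each gate along the way.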

2. Establish communication between “innovators” and “mitigators”

Instead of working in isolation and only communicating when prototypes or use cases reach a critical point, establish ongoing communication and a formal schedule for collaboration between the so-called business performance and risk groups.

Often, risk and compliance colleagues are not involved early enough in evaluating use cases, so critical inputs are missing from prototype design. Conversely, designing a governance framework without reference to specific use cases can produce overly complex, cumbersome processes or a misunderstanding of how the technology actually works.

3. Collaborate on initial idea intake and risk screenings

Note that both perspectives aim to understand the use cases and the technology. Develop a shared intake process that gathers the high-level information needed for both value and risk assessments. Use this initial intake to prioritize use cases across functions and determine which ones require more detailed risk analysis.
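One way to picture such a shared intake: a single record per use case captures both the innovators’ value estimate and the mitigators’ risk estimate, so one triage pass can rank by value and flag candidates for deeper risk analysis. A hypothetical sketch (field names, scoring scales and thresholds are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Intake:
    name: str
    value_score: int  # 1-5, estimated business impact (from the innovators)
    risk_score: int   # 1-5, data/legal/model exposure (from the mitigators)

def triage(intakes, deep_review_threshold=4):
    """Rank use cases by estimated value, and separately flag those whose
    risk score warrants a detailed risk analysis before proceeding."""
    ranked = sorted(intakes, key=lambda i: i.value_score, reverse=True)
    needs_deep_review = [i for i in ranked if i.risk_score >= deep_review_threshold]
    return ranked, needs_deep_review
```

Because both groups contribute to the same record, neither works from an incomplete picture: the prioritized list and the deep-review list come out of a single, shared pass.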

By implementing these key steps, organizations can reduce internal friction and shorten the time to value. While every organization is different, these initial strategies have proven successful in developing and deploying AI responsibly and sustainably.

Managing Directors Christine Livingston and Bryan Throckmorton also contributed to this blog.

Visit our NEW AI hub or contact us to learn more about our artificial intelligence services.

Kim Bozzella

Managing Director
Global Lead - Technology Consulting
