Navigating the Growth and Governance Tension Around AI

Kim Bozzella

Managing Director - Technology Consulting

This blog was originally posted on forbes.com. Kim Bozzella is a member of the Forbes Technology Council.

In the past year, I have spoken to numerous technology leaders and professionals about artificial intelligence (AI), and their opinions on the technology vary: some express fatigue, others concern, and some outright exuberance. These differences can largely be attributed to each person's alignment and focus within their organization. Unfortunately, at best this creates a disjointed approach to AI; at worst, it leads to an adversarial one across the organization.

Tension often arises between business stakeholders eager to leverage AI to enhance and accelerate business performance and their colleagues in risk, legal and compliance, who prioritize data protection and risk mitigation. Interestingly, both perspectives emphasize “AI governance,” albeit with disparate definitions and areas of focus across the organization.

Leveraging AI for business improvement

This group of stakeholders typically sees “AI governance” as the understanding, resourcing and monitoring of use cases, AI’s anticipated value, and the quickest path to achieving outcomes. They are actively brainstorming ways in which AI can impact their business and are focused on implementing the most viable and valuable ideas. Some of the challenges these stakeholders raise include:

  • How do we gather and prioritize ideas across our organization?
  • How do we begin executing these ideas? Who should be responsible for their execution?
  • How can we anticipate and measure the value delivered?
  • What should be our target operating model for AI initiatives? Should we establish a center of excellence (CoE)?
  • What is our strategy for building, buying or partnering in the delivery process? When should we use enterprise tools like Microsoft Copilot?
  • How can we outpace our competitors in leveraging AI to enhance our business?
  • Should we centralize resources?

Managing AI for operational resiliency and risk mitigation

These stakeholders see “AI governance” as a way to manage the risks associated with AI, understand the legal and regulatory implications, and identify and mitigate risks that may arise from AI. Since AI tools are easily accessible, people within organizations can benefit from AI without fully understanding the risks involved.

Risk practitioners, in particular, are worried about effectively managing the risks AI introduces, including traditional ones like model drift and bias as well as emerging risks related to generative AI and its ability to create new content without regard to accuracy or IP protection. The challenges these stakeholders raise include:

  • What use cases and ideas are our business stakeholders considering?
  • How do we assess the risks associated with the ever-evolving AI landscape?
  • How do we establish and implement policies and procedures for AI throughout the organization?
  • How do we monitor the use of AI by our employees and within our organization in the long run?
  • Which AI tools and platforms are currently in use or planned for use?
  • Will we integrate AI capabilities into our existing technology platforms?
  • Which material risks should we be aware of? Which future considerations should be on our radar?

Call to action

In almost every organization, both of these areas are advancing quickly. In many cases, however, we find these two different “camps” aren’t communicating effectively—and in some cases, hardly at all. This can create unnecessary disagreements and slow down progress for organizations wanting to balance the opportunities and risks of AI.

This lack of communication is itself a problem: as with any technology, speed is important in producing results. Organizations that can bridge this gap are more likely to succeed with AI in the long term and will develop the skills to consistently deliver value. We have identified a few key steps to accelerate communication and ultimately create value.

1. Establish a target operating model for AI

While the target operating model (TOM) concept is not new, many companies have jumped straight into generating use cases without fully considering how they will execute them. Which technology, teams and partners will provide the most value? Which part of the organization should be involved? How will decisions be made throughout the process, from idea to prototype to pilot to production, given that production is where the value is generated?

For many organizations, the struggle is no longer generating use cases but turning them into reality and integrating them into business processes. Once an AI capability is embedded in a business process, governance and risk management become essential. This is why it’s vital to include governance in the target operating model from the beginning to ensure its durability.

2. Establish communication between “innovators” and “mitigators”

Instead of working in isolation and only communicating when prototypes or use cases reach a critical point, establish ongoing communication and a formal schedule for collaboration between the so-called business performance and risk groups.

Often, risk and compliance colleagues are not involved early enough in considering use cases, which leads to missing critical inputs for prototype design. On the other hand, designing a robust governance framework without considering specific use cases may result in overly complex and cumbersome processes or a misunderstanding of how the technology works.

3. Collaborate on initial idea intake and risk screenings

Note that both perspectives aim to understand the use cases and the technology. Develop a collective approach to gather high-level information to understand a use case and identify key inputs for both value and risk assessments. Use this initial intake to prioritize use cases functionally and determine which ones require more detailed risk analyses.
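To make this concrete, the sketch below shows what a shared intake record might capture: a single entry holding both value-side and risk-side inputs, plus a simple screen for which ideas warrant a detailed risk review. It is a minimal, hypothetical Python example; the field names, scoring rule and thresholds are illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCaseIntake:
    """One shared intake record for a proposed AI use case (illustrative only)."""
    name: str
    business_owner: str
    expected_value: int        # e.g., 1 (low) to 5 (high) estimated business value
    data_sensitivity: int      # e.g., 1 (public data) to 5 (regulated or personal data)
    uses_generative_ai: bool   # flags the accuracy/IP concerns raised by the risk group
    tools: List[str] = field(default_factory=list)  # platforms in use or planned

    def needs_detailed_risk_review(self) -> bool:
        # Hypothetical screening rule: escalate sensitive-data or generative use cases
        return self.data_sensitivity >= 4 or self.uses_generative_ai

    def priority_score(self) -> float:
        # Naive example only: favor expected value, discount for data sensitivity
        return self.expected_value - 0.5 * self.data_sensitivity

# Example: rank a small backlog and flag which items need deeper risk analysis
backlog = [
    UseCaseIntake("Invoice triage", "Finance", 4, 2, False, ["internal ML platform"]),
    UseCaseIntake("Marketing copy drafts", "Marketing", 3, 3, True, ["Microsoft Copilot"]),
]
for uc in sorted(backlog, key=UseCaseIntake.priority_score, reverse=True):
    status = "detailed risk review" if uc.needs_detailed_risk_review() else "standard screening"
    print(f"{uc.name}: score {uc.priority_score():.1f}, {status}")
```

The specific fields matter less than the principle: both groups work from the same record from the moment an idea is logged, so value and risk inputs are gathered together rather than sequentially.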

By implementing these key steps, organizations can reduce internal friction and shorten the time it takes to achieve value. While every organization is different, these initial strategies have proved successful in developing and deploying AI responsibly and sustainably.

Managing Directors Christine Livingston and Bryan Throckmorton also contributed to this blog.

Visit our new AI hub or contact us to learn more about our artificial intelligence services.
