
Executives say adopting AI is essential, but risky

Kavitha Mariappan

Contributor

Zscaler

Feb 5, 2024

2023 was a milestone year in what has been dubbed the Fourth Industrial Revolution, as generative artificial intelligence (AI) systems like ChatGPT saw wide adoption. The revolution brings the benefits of intelligent, adaptive automation along with pitfalls, including security risks and the disruption of roles within enterprises. Executives readily acknowledge the competitive advantages of AI, but many fear new risks arising from a lack of oversight and experience with the technology. We need to establish clear guidelines for safely managing and integrating this potentially disruptive innovation into business operations.

In the third session of our Executive Connect Live series (aired on January 17), two of the industry’s leading voices in cybersecurity and digital transformation shared their insights on securely using AI: Shamim Mohammad, Executive Vice President, CIO and CTO at CarMax; and Greg Simpson, a Zscaler advisor and former CTO of Synchrony Financial.

Mohammad and Simpson spoke to the importance of balancing the risks of using AI against those of banning it. They addressed the need for hierarchical, transparent governance of AI initiatives and the critical role of risk management. They also stressed the need to acknowledge the changes AI is driving as a general-purpose tool widely used in organizations, both sanctioned and as shadow IT.

From behind the scenes to ubiquitous: accelerating human and machine interaction

Mohammad emphasized that, although AI and machine learning have been integrated into many different technologies for decades, the technology came into its own last year. He noted that ChatGPT has taken generative AI mainstream as a universal technology capable of changing several aspects of our lives and work. “It accelerates the human and computer interaction model,” he said. “And that is happening everywhere incredibly fast. We have to be cognizant of balancing AI’s great potential with the risks associated with it and be sure we manage it carefully.”

Simpson mentioned AI’s impact on human creativity. Citing the recent Hollywood writers’ strike, he pointed out that AI’s capabilities are spurring accelerated change in the generation of text, images, and videos, and that this has resulted in a ramp-up of legal cases. When using AI for creative work, he offered a cautionary note: “We need to leverage the benefits of AI in a way that respects copyright.”

Governance for growing AI uptake and customer satisfaction

Mohammad shared how a strong governance model enables successful use cases at CarMax, an early adopter of OpenAI and ChatGPT. He emphasized that CarMax revolves around creating exceptional customer experiences, and its AI applications are built with this objective in mind. To minimize risk, CarMax has built an AI governance model on its existing data governance, privacy, and education best practices. “We created a culture of transparency and a strong, hierarchical AI governance model to bring AI’s advantages to any group who wants it while minimizing risk,” he said.

CarMax relies on two practices to implement AI productively. The first is a hierarchical governance structure featuring a cross-functional team led by the CTO; this team provides guidance and support to groups pursuing better business outcomes. The second is delivering tools, education, and security training in a transparent environment that encourages communication with the governance team and reduces downside risk.

By carefully following these guiding principles, CarMax has built AI capabilities that have delivered multiple benefits:

  • Exceptional, customized customer experiences from an AI-driven platform that delivers the easiest and most meaningful way for consumers to comparison shop for cars
  • An AI process that automates the publishing of customer reviews while keeping humans in the loop, cutting a task that once took a dozen person-years down to a matter of days
  • Algorithmic machine learning technologies that speed code generation by up to 30%, freeing software engineers for more strategic projects

If you aren’t using the tools, you’re already behind

I asked Simpson about the time and cost of training generative AI and running extract, transform, and load (ETL) processes across multiple use cases, and how CTOs and CXOs should frame the cost-benefit case for new projects. “The tools are out there,” he said. “They work right now, and, if you’re not using them in your business, you’re already falling behind.”

Since AI readily learns from unstructured data, including everything from data lakes to websites, AI solutions can be trained quickly. Simpson cited 50% improvements in coding productivity for software developers using generative AI-powered coding tools compared with those who do not. He has also seen marked increases in customer satisfaction and productivity for support departments using AI-driven chatbots.

AI and cybersecurity: A continual evolution and a team effort

Speaking to security and data protection, Simpson and Mohammad stressed that AI presents new challenges for threat mitigation and data loss prevention, demanding more collaboration among CIOs, CISOs, technology groups, business groups, and vendors. AI-assisted attacks are more sophisticated and harder to detect than traditional threat campaigns, so stopping them requires ongoing training and education throughout the organization. This makes building a strong alliance with an innovative security partner like Zscaler more important than ever.

Additionally, as AI-enhanced attack algorithms proliferate, higher levels of transparency and communication are needed to ensure consistent protection across the enterprise. The same kind of open information sharing is critical between companies and vendors for the development of use case-appropriate solutions.

Positive impacts on every role

My final question explored the types of work most susceptible to AI disruption. Both Mohammad and Simpson agreed that everyone in an enterprise can be more productive with generative AI.

According to Simpson, “Your best employees are already working with AI even if you’ve blocked it internally. You’re at a bigger risk if you don’t enable these tools for employees than if you do. There are a number of ways to protect your data and use these tools safely and effectively in your business today. I see generative AI as a wizard from the future that will soon become commonplace.”

Adding to that, Mohammad observed, “Every role will be impacted. I think what’s exciting is there’s going to be a lot of new roles created. We need more AI engineers, more data engineers… we're going to see huge demand—and we need to cultivate those talents.”

