Enterprise adoption of AI has doubled over the past five years, and CEOs report significant pressure from investors, creditors and lenders to accelerate their adoption of generative AI. This pressure is largely driven by the realization that we’ve crossed a new threshold of AI maturity, one that introduces a wider spectrum of possibilities, outcomes and cost benefits for society as a whole.

Many enterprises have been hesitant to go “all in” on AI because certain unknowns within the technology erode trust, and security is typically viewed as chief among those unknowns. How do you secure AI models? How can you ensure this transformative technology is protected from cyberattacks, whether they take the form of data theft, manipulation and leakage, or evasion, poisoning, extraction and inference attacks?

The global sprint to establish an AI lead—whether amongst governments, markets or business sectors—has spurred pressure and urgency to answer this question. The challenge with securing AI models stems not only from the underlying data’s dynamic nature and volume, but also from the extended “attack surface” that AI models introduce: an attack surface that is new to all. Simply put, to manipulate an AI model or its outcomes for malicious objectives, there are many potential entry points that adversaries can attempt to compromise, many of which we’re still discovering.

But this challenge is not without a solution. In fact, we’re experiencing the largest crowdsourced movement to secure AI that any technology has ever instigated. Executive action from the Biden-Harris Administration, guidance from DHS CISA and the European Union’s AI Act have mobilized the research, developer and security communities to work collectively to drive security, privacy and compliance for AI.

Securing AI for the enterprise

It is important to understand that security for AI is broader than securing the AI itself. In other words, to secure AI, we are not confined solely to the models and data. We must also treat the enterprise application stack that an AI is embedded into as a defensive mechanism, extending protections to the AI within it. By the same token, because an organization’s infrastructure can act as a threat vector, giving adversaries access to its AI models, we must ensure the broader environment is protected.

To appreciate the different means by which we must secure AI, namely the data, the models, the applications and the full process, we must be clear not only about how AI functions, but also about exactly how it is deployed across various environments.

The role of an enterprise application stack’s hygiene

An organization’s infrastructure is the first layer of defense against threats to AI models. Ensuring proper security and privacy controls are embedded into the broader IT infrastructure surrounding AI is key. This is an area in which the industry has a significant advantage already: we have the know-how and expertise required to establish optimal security, privacy, and compliance standards across today’s complex and distributed environments. It’s important we also recognize this daily mission as an enabler for secure AI.

For example, enabling secure access for users, models and data is paramount. We must use existing controls and extend the practice to secure the pathways to AI models. In a similar vein, AI adds a new visibility dimension across enterprise applications, which warrants extending threat detection and response capabilities to AI applications.
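
To make the access-control point concrete, here is a minimal sketch of gating model invocations behind role-based access control. The roles, model names and policy table are illustrative assumptions, not a reference to any specific product API; in practice this check would sit in front of the model-serving layer and feed denied attempts into audit logging.

```python
# A minimal sketch of role-based access control (RBAC) for AI model calls.
# Roles, model names and the policy table below are hypothetical examples.
from dataclasses import dataclass

# Hypothetical policy: which roles may invoke which models.
MODEL_ACCESS_POLICY = {
    "hr-summarizer": {"hr_analyst", "admin"},
    "code-assistant": {"developer", "admin"},
}

@dataclass
class User:
    name: str
    roles: set[str]

def authorize_model_call(user: User, model_name: str) -> bool:
    """Allow the call only if one of the user's roles is permitted for the model."""
    allowed = MODEL_ACCESS_POLICY.get(model_name, set())
    return bool(user.roles & allowed)

analyst = User("dana", {"hr_analyst"})
print(authorize_model_call(analyst, "hr-summarizer"))   # True
print(authorize_model_call(analyst, "code-assistant"))  # False: deny and log the attempt
```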

Table-stakes security standards—such as employing secure transmission methods across the supply chain, establishing stringent access controls and infrastructure protections, and strengthening the hygiene and controls of virtual machines and containers—are key to preventing exploitation. As we look at our overall enterprise security strategy, we should reflect those same protocols, policies, hygiene and standards onto the organization’s AI profile.
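
As one small illustration of carrying that hygiene over to AI, the sketch below shows a client call to a model endpoint that keeps standard API protections on. The endpoint URL and token are placeholders; the point is that certificate verification and timeouts are treated as non-negotiable, exactly as they would be for any other enterprise service.

```python
# A minimal sketch of secure transmission to a model endpoint.
# The URL and token are hypothetical placeholders.
import requests

MODEL_ENDPOINT = "https://models.internal.example.com/v1/generate"  # hypothetical

def call_model(prompt: str, token: str) -> str:
    response = requests.post(
        MODEL_ENDPOINT,
        json={"prompt": prompt},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,   # fail fast rather than hang on a degraded service
        verify=True,  # never disable TLS certificate verification, even internally
    )
    response.raise_for_status()
    return response.json()["output"]
```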

Usage and underlying training data

Even though AI lifecycle management requirements are still becoming clear, organizations can leverage existing guardrails to help secure the AI journey. For example, transparency and explainability are essential to preventing bias, hallucination and poisoning, which is why AI adopters must establish protocols to audit workflows, training data and outputs for the models’ accuracy and performance. In addition, the data’s origin and preparation process should be documented for trust and transparency. This context and clarity helps detect anomalies and abnormalities in the data at an early stage.
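
A lightweight way to start is to record provenance for each training artifact and run simple statistical checks over incoming data. The sketch below assumes file-based training data and uses a crude z-score outlier test; the provenance fields and threshold are illustrative, and real pipelines would use dedicated data-quality tooling.

```python
# A minimal sketch of documenting data origin and flagging anomalies early.
# Provenance fields and the z-score threshold are illustrative assumptions.
import hashlib
import statistics
from datetime import datetime, timezone

def record_provenance(path: str, source: str, preparer: str) -> dict:
    """Hash the training file and capture who prepared it, from where, and when."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "sha256": digest,
        "source": source,
        "prepared_by": preparer,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def flag_outliers(values: list[float], z_threshold: float = 3.0) -> list[float]:
    """Flag values far from the mean; crude, but it surfaces gross anomalies early."""
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > z_threshold]
```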

Security must be present across the AI development and deployment stages, including the enforcement of privacy protections and security measures in the training and testing data phases. Because AI models learn continually from their underlying data, it’s important to account for that dynamism: acknowledge potential risks to data accuracy and incorporate test and validation steps throughout the data lifecycle. Data loss prevention techniques are also essential here to detect and prevent leakage of sensitive personal information (SPI), personally identifiable information (PII) and regulated data through prompts and APIs.
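
To show the shape of such a data loss prevention check, here is a minimal sketch that redacts identifiers from a prompt before it reaches a model. The regular expressions are deliberately simplified examples; production DLP relies on far richer detectors and classification services for SPI, PII and regulated data.

```python
# A minimal sketch of a DLP check on prompts. Patterns are simplified examples.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace detected identifiers with tags and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

clean, hits = redact_prompt("Contact jane.doe@example.com, SSN 123-45-6789")
print(clean)  # identifiers replaced before the prompt is sent onward
print(hits)   # ['email', 'us_ssn'] -> feed into alerting and audit trails
```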

Governance across the AI lifecycle

Securing AI requires an integrated approach to building, deploying and governing AI projects. This means building AI with governance, transparency and ethics that support regulatory demands. As organizations explore AI adoption, they must evaluate open-source vendors’ policies and practices regarding their AI models and training datasets, as well as the maturity of AI platforms. This evaluation should also account for data usage and retention: knowing exactly how, where and when the data will be used, and limiting data storage lifespans to reduce privacy concerns and security risks. In addition, procurement teams should be engaged to ensure alignment with the enterprise’s current privacy, security and compliance policies and guidelines, which should serve as the basis of any AI policies that are formulated.
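
Retention limits, in particular, can be enforced mechanically once records carry an ingestion timestamp, as in the short sketch below. The 180-day window is an illustrative policy value chosen for this example, not a regulatory recommendation.

```python
# A minimal sketch of enforcing a data retention window.
# The 180-day limit is a hypothetical policy value.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # illustrative policy limit

def expired(ingested_at: datetime, now: datetime | None = None) -> bool:
    """True when a record has outlived the retention window and should be purged."""
    now = now or datetime.now(timezone.utc)
    return now - ingested_at > RETENTION
```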

Securing the AI lifecycle includes enhancing current DevSecOps processes to cover machine learning, adopting those processes while building integrations and deploying AI models and applications. Particular attention should be paid to the handling of AI models and their training data: training the AI before deployment and managing versions on an ongoing basis are key to maintaining the system’s integrity, as is continuous retraining. It is also important to monitor prompts and the people accessing the AI models.
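
The sketch below illustrates two of those pieces together: pinning a model version by content hash so tampering is detectable, and keeping an audit trail of who queried which version. File paths, the registry dict and the log format are illustrative assumptions rather than any specific MLOps product’s API.

```python
# A minimal sketch of model versioning plus an access audit trail.
# Paths, registry structure and log fields are hypothetical examples.
import hashlib
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

def register_model_version(artifact_path: str, registry: dict) -> str:
    """Record a content hash so any tampering with the artifact is detectable."""
    with open(artifact_path, "rb") as f:
        version = hashlib.sha256(f.read()).hexdigest()[:12]
    registry[version] = artifact_path
    return version

def log_model_access(user: str, model_version: str, prompt: str) -> None:
    """Keep an audit trail of prompts and the people issuing them."""
    audit_log.info(
        "%s user=%s model=%s prompt_chars=%d",
        datetime.now(timezone.utc).isoformat(), user, model_version, len(prompt),
    )
```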

By no means is this a comprehensive guide to securing AI; the intention here is to correct misconceptions. The reality is that we already have substantial tools, protocols and strategies available for the secure deployment of AI.

Best practices to secure AI

As AI adoption scales and innovations evolve, the security guidance will mature as well, as has been the case with every technology that’s been embedded into the fabric of an enterprise over the years. Below, we share some best practices from IBM to help organizations prepare for the secure deployment of AI across their environments:

  1. Leverage trusted AI by evaluating vendor policies and practices.
  2. Enable secure access to users, models and data.
  3. Safeguard AI models, data and infrastructure from adversarial attacks.
  4. Implement data privacy protection in the training, testing and operations phases.
  5. Incorporate threat modeling and secure coding practices into the AI development lifecycle.
  6. Perform threat detection and response for AI applications and infrastructure.
  7. Assess and determine AI maturity through the IBM AI framework.