October 16, 2023 By IBM Data and AI Team 5 min read

Artificial intelligence (AI) adoption is still in its early stages. As more businesses use AI systems and the technology continues to mature and change, improper use could expose a company to significant financial, operational, regulatory and reputational risks. Using AI for certain business tasks or without guardrails in place may also not align with an organization’s core values.

This is where AI governance comes into play: addressing these potential and inevitable problems of adoption. AI governance refers to the practice of directing, managing and monitoring an organization’s AI activities. It includes processes that trace and document the origin of data, models and associated metadata and pipelines for audits.

An AI governance framework ensures the ethical, responsible and transparent use of AI and machine learning (ML). It encompasses risk management and regulatory compliance and guides how AI is managed within an organization.

Foundation models: The power of curated datasets

Foundation models, often built on the transformer architecture, are modern, large-scale AI models trained on vast amounts of raw, unlabeled data. The rise of the foundation model ecosystem, which is the result of decades of research in machine learning, natural language processing (NLP) and other fields, has generated a great deal of interest in computer science and AI circles. Open-source projects, academic institutions, startups and legacy tech companies have all contributed to the development of foundation models.

Foundation models can use language, vision and more to affect the real world. They are used in everything from robotics to tools that reason and interact with humans. GPT-3, OpenAI’s language prediction model that can process and generate human-like text, is an example of a foundation model.

Foundation models can apply what they learn from one situation to another through self-supervised and transfer learning. In other words, instead of training numerous models on labeled, task-specific data, it’s now possible to pre-train one big model built on a transformer and then, with additional fine-tuning, reuse it as needed.
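To make that pre-train-then-fine-tune pattern concrete, here is a minimal sketch using the open-source Hugging Face transformers and datasets libraries; the library choice, model checkpoint and sample dataset are illustrative assumptions rather than anything named in this article.

```python
# Minimal sketch: reuse one pretrained transformer for a new task via fine-tuning.
# Library, checkpoint and dataset are illustrative choices, not from the article.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "distilbert-base-uncased"  # small pretrained model standing in for a foundation model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# A labeled, task-specific dataset that is far smaller than the pretraining corpus.
dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()  # adapts the pretrained weights to the new classification task
```

The heavy lifting happened during pretraining; fine-tuning only nudges the existing weights, which is why so little labeled data is needed.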

Curated foundation models, such as those created by IBM or Microsoft, help enterprises scale and accelerate the use and impact of the most advanced AI capabilities using trusted data. In addition to natural language, models are trained on various modalities, such as code, time-series, tabular, geospatial and IT events data. Domain-specific foundation models can then be applied to new use cases, whether they are related to climate change, healthcare, HR, customer care, IT app modernization or other subjects.

Foundation models are widely used for ML tasks like classification and entity extraction, as well as generative AI tasks such as translation, summarization and creating realistic content. The development and use of these models account for much of the recent wave of AI breakthroughs.
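As a rough illustration of how one family of pretrained models can serve several of these task types, the sketch below uses Hugging Face pipelines with their default public checkpoints; those defaults are an assumption for illustration, not the curated enterprise models discussed here.

```python
# Sketch: pretrained models handling classification, entity extraction and summarization.
# The pipelines download default public checkpoints; all outputs are illustrative.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")                  # classification
extractor = pipeline("ner", aggregation_strategy="simple")   # entity extraction
summarizer = pipeline("summarization")                       # generative summarization

text = "IBM announced a new data and AI platform at its Armonk, New York headquarters."
print(classifier(text))                               # e.g. label and confidence score
print(extractor(text))                                # e.g. organizations and locations
print(summarizer(text, max_length=20, min_length=5))  # a short generated summary
```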

“With the development of foundation models, AI for business is more powerful than ever,” said Arvind Krishna, IBM Chairman and CEO. “Foundation models make deploying AI significantly more scalable, affordable and efficient.”

Are foundation models trustworthy?

It’s essential for an enterprise to work with responsible, transparent and explainable AI, which can be challenging to come by in these early days of the technology.

Most of today’s largest foundation models, including the large language model (LLM) powering ChatGPT, have been trained on information culled from the internet. But how trustworthy is that training data? Generative AI chatbots have been known to insult customers and make up facts. Trustworthiness is critical. Businesses must feel confident in the predictions and content that large foundation model providers generate.

The Stanford Institute for Human-Centered Artificial Intelligence’s Center for Research on Foundation Models (CRFM) recently outlined the many risks of foundation models, as well as opportunities. They pointed out that the topic of training data, including its source and composition, is often overlooked. That’s where the need for a curated foundation model—and trusted governance—becomes essential.

Getting started with foundation models

An AI development studio can train, validate, tune and deploy foundation models and build AI applications quickly, requiring only a fraction of the data previously needed. The studio provides an enterprise-ready dataset of trusted data that has undergone both negative and positive curation. Such datasets are measured by how many “tokens” (words or word parts) they include.
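As a small illustration of what measuring a dataset in tokens looks like, the snippet below uses an open-source GPT-2 tokenizer as a stand-in for whatever tokenizer a given model actually uses.

```python
# Illustration: dataset size measured in tokens (words or word pieces).
# The GPT-2 tokenizer is a stand-in; real counts depend on the model's own tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
documents = [
    "Foundation models are trained on large amounts of raw, unlabeled data.",
    "Curated datasets remove objectionable content before training begins.",
]
total_tokens = sum(len(tokenizer.encode(doc)) for doc in documents)
print(f"{total_tokens} tokens across {len(documents)} documents")
```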

Negative curation means removing problematic content, such as hate speech and other objectionable material caught by profanity filters. Positive curation means adding items from domains that are important for enterprise users, such as finance, legal and regulatory, cybersecurity, and sustainability.
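A toy sketch of what these two curation passes might look like follows; the blocklist, record structure and domain tags are hypothetical, and production pipelines typically rely on trained hate, abuse and profanity classifiers rather than simple word lists.

```python
# Hypothetical sketch of negative and positive dataset curation.
# Blocklist entries, record fields and domain tags are placeholders.
BLOCKLIST = {"offensive_term_1", "offensive_term_2"}  # placeholder objectionable terms
PRIORITY_DOMAINS = {"finance", "legal", "cybersecurity", "sustainability"}

def negative_curation(records):
    """Drop records that contain objectionable content."""
    return [r for r in records
            if not BLOCKLIST & set(r["text"].lower().split())]

def positive_curation(records, candidates):
    """Add domain-specific records that matter to enterprise users."""
    return records + [r for r in candidates if r.get("domain") in PRIORITY_DOMAINS]

corpus = [{"text": "Quarterly revenue guidance was raised."}]
candidates = [{"text": "Basel III liquidity coverage requirements.", "domain": "finance"}]
curated = positive_curation(negative_curation(corpus), candidates)
```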

How to scale AI and ML with built-in governance

A fit-for-purpose data store built on an open lakehouse architecture allows you to scale AI and ML while providing built-in governance tools. It can be used in both on-premises and multi-cloud environments. This type of next-generation data store combines a data lake’s flexibility with a data warehouse’s performance and lets you scale AI workloads no matter where they reside.

It allows for automation and integration with existing databases and provides tools that simplify setup and the user experience. It also lets you choose the right engine for the right workload at the right cost, potentially reducing your data warehouse costs by optimizing workloads. A data store lets a business connect existing data with new data and discover new insights with real-time analytics and business intelligence. It helps you streamline data engineering with fewer data pipelines, simplified data transformation and enriched data.
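To ground the “right engine for the right workload” idea, here is a minimal sketch of the lakehouse pattern: open file formats queried in place by a lightweight SQL engine. DuckDB and the file path are illustrative assumptions, not components named in this article.

```python
# Sketch: querying open-format (Parquet) files in place, lakehouse-style,
# instead of copying them into a warehouse first. Engine and path are illustrative.
import duckdb

con = duckdb.connect()
result = con.execute("""
    SELECT region, SUM(revenue) AS total_revenue
    FROM read_parquet('data/sales/*.parquet')   -- hypothetical local path
    GROUP BY region
    ORDER BY total_revenue DESC
""").fetchall()
print(result)
```

A heavier engine could read the same files for large training workloads, which is the cost flexibility described above.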

Another benefit is responsible data sharing: the data store supports more users with self-service access to more data while maintaining security and compliance with governance requirements and local policies.

What an AI governance toolkit offers

As AI becomes more embedded in enterprises’ daily workflows, proactive governance throughout the creation, deployment and management of AI services becomes even more critical to help ensure responsible and ethical decisions.

Organizations that incorporate governance into their AI programs minimize risk and strengthen their ability to meet ethical principles and government regulations. Among business leaders surveyed, 50% said the most important aspect of explainable AI is meeting external regulatory and compliance obligations; yet most leaders have not taken critical steps toward establishing an AI governance framework, and 74% are not reducing unintended biases.

An AI governance toolkit lets you direct, manage and monitor AI activities without the expense of switching your data science platform, even for models developed using third-party tools. Software automation helps mitigate risk, manage the requirements of regulatory frameworks and address ethical concerns. It includes AI lifecycle governance, which monitors, catalogs and governs AI models at scale, wherever they reside. It automates the capture of model metadata and helps maintain predictive accuracy by identifying how AI tools are used and where models need to be retrained.
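As an illustration of the kind of model metadata such automated capture might record, here is a hypothetical factsheet; every field and value is made up for the example and does not describe any specific product’s schema.

```python
# Hypothetical model factsheet: the sort of metadata a governance toolkit
# captures automatically for audits, monitoring and retraining decisions.
import json
from datetime import datetime, timezone

model_factsheet = {
    "model_name": "claims-triage-classifier",   # hypothetical model
    "version": "1.3.0",
    "owner": "risk-analytics-team",
    "training_data": {"source": "claims_2022_curated", "rows": 184000},
    "metrics": {"accuracy": 0.91, "disparate_impact": 0.97},
    "deployment": {"environment": "production", "region": "us-east"},
    "captured_at": datetime.now(timezone.utc).isoformat(),
}

with open("factsheet.json", "w") as f:
    json.dump(model_factsheet, f, indent=2)   # persisted for audit and reporting
```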

An AI governance toolkit also lets you design your AI programs based on principles of responsibility and transparency. It helps build trust by documenting datasets, models and pipelines, so you can consistently understand and explain your AI’s decisions. It also automates the capture of model facts and workflows to comply with business standards; identifies, manages, monitors and reports on risk and compliance at scale; and provides dynamic dashboards and customizable reporting. Such a governance program can also translate external regulations into policies for automated adherence and audit support.

Using proper AI governance means your business can make the best use of foundation models while ensuring you are accountable and ethical as you move forward with AI technology.

Foundation models, governance and IBM

Proper AI governance is key to harnessing the power of AI while safeguarding against its myriad pitfalls. AI governance involves responsible and transparent management, covering risk management and regulatory compliance, to guide AI’s use within an organization. Foundation models offer a breakthrough in AI capabilities, enabling scalable and efficient deployment across various domains.

Watsonx is a next-generation data and AI platform built to help organizations fully leverage foundation models while adhering to responsible AI governance principles. The watsonx.governance toolkit enables your organization to build AI workflows with responsibility, transparency and explainability.

With watsonx, organizations can:

  1. Operationalize AI workflows to increase efficiency and accuracy at scale. Your organization can access automated, scalable governance, risk and compliance tools, spanning operational risk, policy, compliance, financial management, IT governance and internal/external audits.
  2. Track models and drive transparent processes. Monitor, catalog and govern models from anywhere across your AI’s lifecycle.
  3. Capture and document model metadata for report generation. Model validators and approvers can access automatically generated factsheets for an always up-to-date view of lifecycle details.
  4. Increase trust in AI outcomes. Collaborative tools and dynamic user-based dashboards, charts and dimensional reporting increase visibility into AI processes.
  5. Enable responsible, transparent and explainable data and AI workflows with watsonx.governance.
