NIST launches ambitious effort to assess LLM risks

CIO Business Intelligence

The National Institute of Standards and Technology (NIST) on Tuesday announced an extensive effort to test large language models (LLMs) “to help improve understanding of artificial intelligence’s capabilities and impacts.” “Guidance, over regulation, is a great approach to managing the safety of a new technology,” Prins said.

Risk 90

Open source large language models: Benefits, risks and types

IBM Big Data Hub

Large language models (LLMs) are foundation models that use artificial intelligence (AI), deep learning and massive data sets, including websites, articles and books, to generate text, translate between languages and write many types of content. All this reduces the risk of a data leak or unauthorized access.
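
To make the idea concrete, here is a minimal sketch of running an open source LLM locally with the Hugging Face transformers library; because inference happens on your own hardware, prompts and outputs never leave your environment. The model ID and generation settings are illustrative assumptions, not recommendations from the article.

```python
# Minimal sketch: run an open source LLM locally for text generation.
# Assumes the Hugging Face `transformers` library is installed; the model ID
# and generation parameters are illustrative stand-ins, not from the article.
from transformers import pipeline

# "gpt2" is used only so the sketch runs quickly; substitute any larger
# open source causal language model for realistic output quality.
generator = pipeline("text-generation", model="gpt2")

prompt = "Open source large language models can benefit enterprises because"
result = generator(prompt, max_new_tokens=64, do_sample=False)

# Everything above ran locally, so the prompt and output stayed on this machine.
print(result[0]["generated_text"])
```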

Risk 93

What is Model Risk and Why Does it Matter?

DataRobot Blog

With the big data revolution of recent years, predictive models are being rapidly integrated into more and more business processes. This delivers substantial benefit, but it also exposes institutions to greater risk and, in turn, to potential operational losses. What is a model?

Risk 111

Automating Model Risk Compliance: Model Development

DataRobot Blog

Addressing the Key Mandates of a Modern Model Risk Management (MRM) Framework When Leveraging Machine Learning. The regulatory guidance presented in these documents laid the foundation for evaluating and managing model risk for financial institutions across the United States.

Risk 64

How to use foundation models and trusted governance to manage AI workflow risk

IBM Big Data Hub

As more businesses use AI systems and the technology continues to mature and change, improper use could expose a company to significant financial, operational, regulatory and reputational risks. Trusted AI governance includes processes that trace and document the origin of data, models, associated metadata and pipelines for audits.
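
As a rough illustration of what such lineage tracking can look like in practice, the sketch below appends one audit record per dataset-model pairing to a JSON Lines log. The schema, helper name and file path are hypothetical and chosen for illustration; production governance platforms capture considerably more.

```python
# Minimal sketch of lineage tracking for audits: record where the data came
# from, which model and version consumed it, and when. The schema and the
# audit-log path are hypothetical, not IBM's governance tooling.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("audit_log.jsonl")  # append-only JSON Lines audit trail

def record_lineage(dataset_path: str, model_name: str, model_version: str) -> dict:
    """Append one lineage entry linking a dataset to the model that used it."""
    data = Path(dataset_path).read_bytes()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset_path,
        "dataset_sha256": hashlib.sha256(data).hexdigest(),  # pins the exact data used
        "model": model_name,
        "model_version": model_version,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example (hypothetical paths and names):
# record_lineage("data/training_v3.csv", "credit-risk-classifier", "1.4.0")
```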

Risk 72

Report: AI giants grow impatient with UK safety tests

CIO Business Intelligence

Key AI companies have told the UK government to speed up its safety testing of their systems, raising questions about future government initiatives that may also hinge on technology providers opening up generative AI models to testing before new releases reach the public.

Testing 108

Automating Model Risk Compliance: Model Monitoring

DataRobot Blog

In our previous two posts, we discussed in detail how modelers can both develop and validate machine learning models while following the guidelines outlined by the Federal Reserve Board (FRB) in SR 11-7. Monitoring Model Metrics.
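
One metric that commonly appears in this kind of monitoring is the population stability index (PSI), which flags drift between the data a model was developed on and the data it scores in production. The sketch below is a generic PSI calculation assuming NumPy; the bin count and thresholds are common rules of thumb, not values taken from the post.

```python
# Minimal sketch: population stability index (PSI) for monitoring input drift.
# Bin edges come from the development (expected) sample; thresholds below are
# common rules of thumb rather than values from the article.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a development-time sample and a recent production sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) or division by zero in sparsely populated bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Example: compare development-era scores with simulated production scores.
rng = np.random.default_rng(0)
dev_scores = rng.normal(0.0, 1.0, 10_000)
prod_scores = rng.normal(0.3, 1.1, 10_000)  # simulated shift in production
print(f"PSI = {psi(dev_scores, prod_scores):.3f}")
# Rule of thumb: < 0.1 stable, 0.1-0.25 worth watching, > 0.25 significant drift.
```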

Risk 59