
Automating Model Risk Compliance: Model Validation

DataRobot Blog

When the FRB’s guidance was first introduced in 2011, modelers often employed traditional regression-based models for their business needs. In addition to the model metrics discussed above for classification, DataRobot similarly provides fit metrics for regression models, and helps the modeler visualize the spread of model errors.
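The excerpt does not show DataRobot's internals, but the kind of regression fit metrics and error-spread check it describes can be illustrated with a minimal numpy sketch (the predictions and actuals below are made-up toy values, not data from the article):

```python
import numpy as np

# Hypothetical actuals and model predictions for a regression model
y_true = np.array([3.0, 5.0, 7.5, 9.0, 12.0])
y_pred = np.array([2.8, 5.4, 7.0, 9.5, 11.6])

# Residuals: the "spread of model errors" a validator would visualize
residuals = y_true - y_pred

# Common regression fit metrics
rmse = float(np.sqrt(np.mean(residuals ** 2)))   # root mean squared error
mae = float(np.mean(np.abs(residuals)))          # mean absolute error
ss_res = float(np.sum(residuals ** 2))
ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
```

A validator would typically plot `residuals` (e.g. against `y_pred`) to confirm the errors are centered on zero with no obvious pattern.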


Deep Learning Illustrated: Building Natural Language Processing Models

Domino Data Lab

Many thanks to Addison-Wesley Professional for providing the permissions to excerpt “Natural Language Processing” from the book Deep Learning Illustrated by Krohn, Beyleveld, and Bassens. The excerpt covers how to create word vectors and utilize them as an input into a deep learning model.
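The idea of using word vectors as model input can be sketched in a few lines of numpy. This is not the book's code: the toy vocabulary, the randomly initialized embedding matrix (standing in for learned word2vec-style vectors), and the mean-pooling step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; in practice these vectors would be learned
# (e.g. via word2vec), not randomly initialized as they are here
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "terrible": 4}
embedding_dim = 8
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def encode(tokens):
    """Look up each token's word vector and average them into a single
    fixed-length document vector, usable as input to a dense network."""
    idxs = [vocab[t] for t in tokens]
    return embeddings[idxs].mean(axis=0)

doc_vec = encode(["the", "movie", "was", "great"])

# A single dense unit with a sigmoid stands in for a classifier head
w = rng.normal(size=embedding_dim)
b = 0.0
prob = 1.0 / (1.0 + np.exp(-(doc_vec @ w + b)))
```

In a real pipeline the averaging step would usually be replaced by a trainable layer (an Embedding layer feeding an RNN or CNN), but the lookup-then-feed-forward shape is the same.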



Understanding the different types and kinds of Artificial Intelligence

IBM Big Data Hub

In other words, traditional machine learning models need human intervention to process new information and perform any new task that falls outside their initial training. For example, Apple made Siri a feature of its iOS in 2011. This early version of Siri was trained to understand a set of highly specific statements and requests.


Themes and Conferences per Pacoid, Episode 7

Domino Data Lab

O’Reilly Media had an earlier survey about deep learning tools which showed the top three frameworks to be TensorFlow (61% of all respondents), Keras (25%), and PyTorch (20%)—and note that Keras in this case is likely used as an abstraction layer atop TensorFlow. The data types used in deep learning are interesting.


Data Science at The New York Times

Domino Data Lab

And this is one of his papers, “you’re doing it wrong,” where he talked about the algorithmic culture he was observing in the machine learning community versus the generative-model culture that was more traditional in statistics. For visualization, we’re not building our own dashboards.