
Explaining black-box models using attribute importance, PDPs, and LIME

Domino Data Lab

In this article we cover explainability for black-box models and show how to use different methods from the Skater framework to provide insights into the inner workings of a simple credit-scoring neural network model. Interest in the interpretation of machine learning has accelerated rapidly over the last decade; see Ribeiro et al.
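The article itself uses Skater; purely as a rough illustration of the same three ideas (global attribute importance, partial dependence plots, and a local LIME explanation), here is a minimal sketch using scikit-learn and the standalone lime package on a synthetic stand-in for a credit-scoring dataset. The dataset, model, and parameters below are assumptions for illustration, not the article's code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance, PartialDependenceDisplay
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for a credit-scoring dataset (illustrative only).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)

# 1. Attribute importance via permutation (a global explanation).
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print(imp.importances_mean)

# 2. Partial dependence of the prediction on the first feature.
PartialDependenceDisplay.from_estimator(clf, X, features=[0])

# 3. LIME: a local explanation for a single applicant.
explainer = LimeTabularExplainer(X, mode="classification")
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```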


Performing Non-Compartmental Analysis with Julia and Pumas AI

Domino Data Lab

NCA doesn’t require the assumption of a specific compartmental model for either drug or metabolite; it is instead assumption-free and therefore easily automated [1]. Having calculated AUC/AUMC, we can further derive a number of useful metrics, such as total clearance of the drug from plasma and mean residence time.
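The article performs NCA in Julia with Pumas; purely to illustrate the arithmetic behind those derived metrics, here is a small Python sketch (the concentration-time values and the dose are made-up assumptions) that computes AUC and AUMC with the trapezoidal rule and then derives clearance CL = Dose / AUC and mean residence time MRT = AUMC / AUC.

```python
import numpy as np

# Illustrative concentration-time profile after a single IV dose (made-up values).
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # hours
c = np.array([12.0, 10.5, 9.1, 6.8, 3.9, 1.2, 0.4])  # mg/L
dose = 100.0                                          # mg

auc = np.trapz(c, t)        # area under the concentration-time curve
aumc = np.trapz(t * c, t)   # area under the first-moment curve

cl = dose / auc             # total clearance of the drug from plasma (L/h)
mrt = aumc / auc            # mean residence time (h)
vss = cl * mrt              # steady-state volume of distribution (L)

print(f"AUC={auc:.1f} mg*h/L, CL={cl:.2f} L/h, MRT={mrt:.2f} h, Vss={vss:.1f} L")
```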


Trending Sources


Fundamentals of Data Mining

Data Science 101

Data mining is the process of discovering these patterns among the data and is therefore also known as Knowledge Discovery from Data (KDD). The models created using these algorithms can be evaluated against appropriate metrics to verify their credibility. The article goes on to cover data mining models and deployment.
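As a small, generic illustration of that evaluation step (not taken from the article; the dataset, model, and metrics below are assumptions), here is a sketch that checks a fitted model against held-out data with a couple of common metrics.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hold out part of the data so the credibility check runs on unseen records.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Evaluate the mined model against appropriate metrics.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("ROC AUC :", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```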


ML internals: Synthetic Minority Oversampling (SMOTE) Technique

Domino Data Lab

In this article we discuss why fitting models on imbalanced datasets is problematic, and how class imbalance is typically addressed. The excerpt includes a neighbour-finding helper, def get_neigbours(M, k): nn = NearestNeighbors(n_neighbors=k+1, metric="euclidean").fit(M), and Figure 3 shows a visual explanation of how SMOTE generates synthetic observations in this case.
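The helper in the excerpt is truncated. Below is a minimal sketch, assuming scikit-learn's NearestNeighbors, of how it might be completed and of the core SMOTE step: interpolating between a minority-class point and one of its nearest minority neighbours. The function name get_neigbours comes from the excerpt; the completion, the smote_sample helper, and the sample data are illustrative assumptions, not the article's exact implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def get_neigbours(M, k):
    # Fit a k-NN index on the minority-class samples; k+1 because each
    # point is its own nearest neighbour and must be excluded.
    nn = NearestNeighbors(n_neighbors=k + 1, metric="euclidean").fit(M)
    _, indices = nn.kneighbors(M)
    return indices[:, 1:]  # drop the self-neighbour column

def smote_sample(M, k=5, rng=np.random.default_rng(0)):
    # Pick a random minority point and interpolate towards one of its
    # k nearest minority neighbours: x_new = x + gap * (x_nn - x).
    neighbours = get_neigbours(M, k)
    i = rng.integers(len(M))
    j = rng.choice(neighbours[i])
    gap = rng.random()
    return M[i] + gap * (M[j] - M[i])

# Illustrative minority-class matrix (rows = observations).
minority = np.array([[1.0, 2.0], [1.5, 1.8], [2.0, 2.2], [1.2, 2.4]])
print(smote_sample(minority, k=2))
```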