
ML internals: Synthetic Minority Oversampling (SMOTE) Technique

Domino Data Lab

Insufficient training data in the minority class: in domains where data collection is expensive, a dataset containing 10,000 examples is typically considered fairly large. In their 2002 paper, Chawla et al. performed a comprehensive evaluation of the impact of SMOTE-based up-sampling.
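The core of SMOTE is simple: for each synthetic sample, pick a minority-class point, pick one of its k nearest minority-class neighbors, and interpolate a new point somewhere on the segment between them. A minimal sketch of that interpolation step, using only NumPy (the function name and parameters here are illustrative, not from the paper or any library):

```python
import numpy as np

def smote_sample(X_minority, n_synthetic, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between
    each point and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X_minority, dtype=float)
    n = len(X)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    # indices of the k nearest neighbors for each point
    nn = np.argsort(d, axis=1)[:, :min(k, n - 1)]
    out = []
    for _ in range(n_synthetic):
        i = rng.integers(n)              # pick a random minority point
        j = rng.choice(nn[i])            # pick one of its k neighbors
        gap = rng.random()               # interpolation factor in [0, 1)
        out.append(X[i] + gap * (X[j] - X[i]))  # point on the segment
    return np.array(out)
```

Because each synthetic point is a convex combination of two real minority points, the generated samples always lie inside the convex hull of the minority class, which is what distinguishes SMOTE from naive duplication of existing examples.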


Unintentional data

The Unofficial Google Data Science Blog

Implicitly, there was a prior belief about some interesting causal mechanism or an underlying hypothesis motivating the collection of the data. As computing and storage have made data collection cheaper and easier, we now gather data without this underlying motivation.