Rethinking Modeling: Joining Human and Machine Intelligence

Scaling AI | Marie Merveilleux du Vignaux

Nobody saw COVID-19 coming. No machine learning model, no matter how advanced, predicted a global pandemic and all the impacts it would have. This doesn't mean that models aren't useful, but it does mean that we should be thinking about new ways of modeling. This blog post explores the need to rethink modeling infrastructure to better withstand unanticipated events, with insights from Jeff McMillan, Chief Analytics and Data Officer for Morgan Stanley Wealth Management, who shared his thoughts in a recent EGG On Air episode.

Unexpected Incidents

Why do organizations fail to predict big, global disruptions? Jeff McMillan points to one main weakness behind this kind of model failure.

Relying too heavily on technology alone: Organizations should combine the power of machine learning algorithms and the quality of historical data with human intelligence. Teams need to think together and engage in conversation, asking themselves, "Is our data good and predictive? Do we believe in what it is actually predicting?" Too often, organizations skip this essential step and get themselves into a lot of trouble.


The Continuous Need for a Human Eye

Human intelligence should not only be used to rethink the feasibility and effectiveness of the data and the goal; a human eye should also be applied continuously throughout the modeling process.

At the same time, there needs to be independence in the process: a separate group of individuals, distinct from the team that built the models, should evaluate their output during the review process. This is essential to avoid bias and to get a second (or third, or fourth) pair of eyes on the model.

"Independence, integrity, and transparency are critical to effective models and model controls."

There’s a lot of talk about how machines are replacing humans, but Jeff McMillan emphasizes that this really isn’t the case at all.

"The machine is not enough and the machine will never be enough to know the future."

Organizations must build in a human decision-making element and always ask themselves one main question: "Does this make sense?" A model cannot feel, so humans have to be the ones to question it and judge whether its output actually works and makes sense.
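One lightweight way to operationalize that question is a sanity-check gate that routes implausible model outputs to a human reviewer instead of acting on them automatically. The sketch below is purely illustrative (it is not Morgan Stanley's process), and the plausibility range and names are assumptions:

```python
# Hypothetical "does this make sense?" gate: predictions that violate a
# simple, human-defined plausibility rule are queued for human review
# instead of being used automatically. All names here are illustrative.

PLAUSIBLE_RANGE = (0.0, 1.0)  # e.g., a predicted probability

def route_prediction(prediction, review_queue):
    """Accept a prediction only if it passes the sanity check;
    otherwise queue it for human review and return None."""
    low, high = PLAUSIBLE_RANGE
    if low <= prediction <= high:
        return prediction            # machine output accepted
    review_queue.append(prediction)  # human eye required
    return None

queue = []
print(route_prediction(0.37, queue))  # 0.37 -> passes the check
print(route_prediction(4.2, queue))   # None -> flagged for review
print(queue)                          # [4.2]
```

The rule itself is trivial on purpose: the value is in forcing a human decision point into the pipeline, not in the sophistication of the check.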

The Power of Learning

Individuals on a team should constantly challenge assumptions and examine results. They have to remain open to admitting when something went wrong and going back to make improvements. This cycle should never end, as machine learning can reach its full potential only when humans and machines learn together.

Machine learning models can identify patterns, recognize hidden behaviors, and respond to market anomalies while correcting for them over time. That's already very impressive, but some things cannot be determined by a machine. A machine cannot really know whether the output it produces makes sense. It's only by combining human questioning with machine intelligence that organizations can drive their business forward.
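To make that division of labor concrete, here is a minimal, hypothetical sketch of the machine's half of the job: a rolling z-score flags unusual market values the way a model might, while judging whether a flagged value actually makes sense is left to a person. The window size and threshold are illustrative assumptions:

```python
import statistics

# Hypothetical sketch: flag values that deviate strongly from the trailing
# window, then hand them off to a human rather than acting on them.

def flag_anomalies(series, window=5, threshold=3.0):
    """Yield (index, value) pairs that deviate from the trailing
    window's mean by more than `threshold` standard deviations."""
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mean = statistics.mean(trailing)
        stdev = statistics.stdev(trailing) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mean) / stdev > threshold:
            yield i, series[i]

prices = [100, 101, 99, 100, 102, 101, 140, 100, 99]
for i, value in flag_anomalies(prices):
    print(f"index {i}: {value} looks anomalous -- route to a human reviewer")
```

The point of the example is the hand-off: the machine is good at spotting the deviation, but deciding what it means remains a human call.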

Conclusion

Jeff McMillan reiterates the need for human intelligence in modeling: relying too heavily on machines is how flaws enter the process and predictions grow weaker. A global health crisis is admittedly quite an extreme example, but sometimes it takes a radical demonstration to make us question our methods. If the global health crisis taught us anything about the world of AI, it's that humans and machines are stronger together.
