The black box fix

In 2018, a landmark challenge in artificial intelligence (AI) took place, namely, the Explainable Machine Learning Challenge. The goal of the competition was to create a complicated black box model for the dataset and explain how it worked. One team did not follow the rules. Instead of sending in a black box, they created a model that was fully interpretable. This leads to the question of whether the real world of machine learning is similar to the Explainable Machine Learning Challenge, where black box models are used even when they are not needed. We discuss this team's thought processes during the competition and their implications, which reach far beyond the competition itself.

Keywords: interpretability, explainability, machine learning, finance

In December 2018, hundreds of top computer scientists, financial engineers, and executives crammed themselves into a room within the Montreal Convention Center at the annual Neural Information Processing Systems (NeurIPS) conference to hear the results of the Explainable Machine Learning Challenge, a prestigious competition organized collaboratively between Google, the Fair Isaac Corporation (FICO), and academics at Berkeley, Oxford, Imperial, UC Irvine, and MIT. This was the first data science competition that reflected a need to make sense of outcomes calculated by the black box models that dominate machine learning–based decision making.

Over the last few years, the advances in deep learning for computer vision have led to a widespread belief that the most accurate models for any given data science problem must be inherently uninterpretable and complicated. This belief stems from the historical use of machine learning in society: its modern techniques were born and bred for low-stakes decisions such as online advertising and web search, where individual decisions do not deeply affect human lives.
