Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading. / Borch, Christian; Hee Min, Bo.
In: Big Data & Society, Vol. 9, No. 2, 20539517221111361, 07.2022.
RIS
TY - JOUR
T1 - Toward a sociology of machine learning explainability
T2 - Human-machine interaction in deep neural network-based automated trading
AU - Borch, Christian
AU - Hee Min, Bo
PY - 2022/7
Y1 - 2022/7
N2 - Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm's quest for explaining its deep neural network system's actionable predictions. We demonstrate that this explainability effort involves a particular form of human-machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human-machine companionship.
AB - Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm's quest for explaining its deep neural network system's actionable predictions. We demonstrate that this explainability effort involves a particular form of human-machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human-machine companionship.
KW - Algorithmic ethnography
KW - automated trading
KW - deep neural networks
KW - explainability
KW - machine learning
KW - human-machine companionship
KW - ROBOT
U2 - 10.1177/20539517221111361
DO - 10.1177/20539517221111361
M3 - Journal article
VL - 9
JO - Big Data & Society
JF - Big Data & Society
SN - 2053-9517
IS - 2
M1 - 20539517221111361
ER -