Toward a sociology of machine learning explainability: Human-machine interaction in deep neural network-based automated trading

Research output: Contribution to journal › Journal article › Peer-review

Machine learning systems are making considerable inroads into society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies, empirically and theoretically, how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm's quest to explain its deep neural network system's actionable predictions. We demonstrate that this explainability effort involves a particular form of human-machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human-machine companionship.

Original language: English
Article number: 20539517221111361
Journal: Big Data & Society
Volume: 9
Issue number: 2
Number of pages: 13
ISSN: 2053-9517
DOIs
Publication status: Published - Jul 2022
Externally published: Yes

Research areas

• Algorithmic ethnography, automated trading, deep neural networks, explainability, machine learning, human-machine companionship, robot
