Massimo Airoldi's concept of machine habitus extends Bourdieu's framework into the domain of artificial intelligence, proposing that algorithms possess a computational analog to human habitus — a set of dispositions that generate outputs without mechanical rule-following. Just as human habitus is formed through socialization in specific class conditions, machine habitus is formed through training on data generated in specific social fields. The training data encodes the preferences, hierarchies, and valuations of the social world that produced it. The algorithm absorbs these encodings and reproduces them in its outputs — not through explicit programming but through the statistical patterns it learns. A recommendation algorithm trained on engagement data learns to surface content that resembles what has engaged users before. The resemblance tracks social structure: popular content reflects dominant tastes, dominant tastes reflect dominant positions, and the algorithm's habitus reproduces the field's hierarchy.
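The feedback loop described above can be sketched in miniature. The code below is a hypothetical toy, not any platform's actual pipeline: the item pools, the 90/10 engagement skew, and the click-and-relog loop are all invented for illustration. A popularity recommender "trained" on skewed engagement data surfaces dominant-taste items, and the loop amplifies that disposition.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical setup: two "tastes" with unequal representation,
# standing in for dominant vs. marginal positions in a field.
ITEMS = {"mainstream": list(range(10)), "niche": list(range(10, 20))}

# Historical engagement log: dominant tastes generated most of the data.
log = Counter()
for _ in range(900):
    log[random.choice(ITEMS["mainstream"])] += 1
for _ in range(100):
    log[random.choice(ITEMS["niche"])] += 1

def recommend(k=5):
    """A popularity recommender 'trained' on the log: it surfaces
    whatever resembles what engaged users before."""
    return [item for item, _ in log.most_common(k)]

# Feedback loop: recommendations are shown, engaged with, and logged,
# so the learned disposition reinforces the field's hierarchy.
for _ in range(50):
    for item in recommend():
        log[item] += 1

top = recommend(10)
niche_share = sum(1 for i in top if i >= 10) / len(top)
print(f"niche share of top-10 after feedback: {niche_share:.0%}")
```

Under this toy skew, niche items never reach the top list, so the feedback rounds only widen the gap: the "disposition" to recommend the dominant taste is learned, not programmed.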
Airoldi developed machine habitus through empirical studies of music recommendation algorithms, demonstrating that Spotify's systems do not merely match listeners to songs but actively shape musical taste — surfacing genres, artists, and styles in patterns that reflect the cultural capital of the platform's dominant user base. The algorithm's 'dispositions' — its tendencies to recommend certain kinds of music in certain contexts — are not random. They are the encoded preferences of the social world that generated the training data, operating through the algorithm as human dispositions operate through the body: below the threshold of explicit intention, generative rather than reactive, producing outputs appropriate to the field's implicit criteria.
The concept illuminates a dimension of AI that technical analysis alone cannot capture. When a large language model generates text, the output's style, tone, references, and implicit values are shaped by the model's training data — the accumulated linguistic production of the social world. That production is not socially neutral. It reflects the distributions of cultural and economic capital that structured who wrote what, who had access to platforms, whose voices were amplified. The model absorbs these distributions as statistical patterns and reproduces them as generative dispositions. The model's habitus is the social structure of its training world, compressed into parameters and enacted through generation.
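The statistical absorption claimed here can be made concrete at the smallest possible scale. The sketch below is a deliberately minimal bigram model over an invented two-register corpus (the 80/20 skew is arbitrary); it is not how large language models work internally, only a demonstration that conditional frequencies learned from a skewed "training world" reappear as generative tendencies.

```python
import random
from collections import Counter, defaultdict

random.seed(1)

# Invented skewed corpus: one register dominates the training world.
corpus = ["the market decides"] * 80 + ["the commons shares"] * 20

# A minimal bigram model: its "habitus" is just conditional counts.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def generate(start="the", n=2):
    """Sample a continuation; dispositions mirror the corpus skew."""
    out = [start]
    for _ in range(n):
        options = follows[out[-1]]
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

samples = [generate() for _ in range(1000)]
share = sum(s.startswith("the market") for s in samples) / len(samples)
print(f"'market' share of generations: {share:.0%}")  # close to the 80% corpus skew
```

The model was never told to prefer one register; the preference is the compressed distribution of its corpus, which is the point of the paragraph above.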
In the field of AI-amplified production, human habitus and machine habitus interact. The human brings dispositions formed through years of socialization. The machine brings dispositions formed through training on data generated by other humans. The interaction produces outputs that neither could produce alone — but the outputs reflect both habitus structures. When Segal describes Claude making connections he had not seen, the connection emerges from the encounter between the machine's habitus and his own. The encounter is productive. But it is not neutral. The machine's dispositions encode the social world that trained it, and that social world is structured by inequalities the machine reproduces through its very helpfulness.
The most important implication of machine habitus is that addressing AI's biases requires more than debiasing training data. Bias is not an error that can be removed through better data cleaning. It is the expression of the machine's habitus, which is the expression of the social world's structure. Genuinely pluralizing AI outputs would require training on data generated in genuinely plural social conditions — which would require addressing the inequalities that structure who produces knowledge, whose voices are recorded, whose perspectives are archived. The technical problem is a social problem. The machine's habitus is the world's habitus, made computational.
Airoldi introduced the concept in his 2021 dissertation at the University of Milan and developed it fully in Machine Habitus: Toward a Sociology of Algorithms (2022). The work synthesizes Bourdieu's field theory with algorithm studies, demonstrating that the social and the computational are not separate domains but mutually constitutive — algorithms are social structures in computational form, and computational structures are reshaping social fields at unprecedented speed.
Algorithms have dispositions. Trained systems generate outputs through learned statistical patterns analogous to human habitus — generative, context-sensitive, shaped by formation conditions.
Training data encodes social structure. The patterns algorithms learn reflect the distributions of capital in the social world that produced the data — reproducing inequalities computationally.
Human and machine habitus interact. AI-assisted production is the encounter of two habitus structures — the user's and the model's — producing outputs that reflect both.
Bias is structural, not incidental. Algorithmic bias is not an error to be removed but the expression of the machine's habitus, which is the social world's structure made computational.
Pluralizing AI requires pluralizing society. Genuinely diverse AI outputs would require training data generated in genuinely diverse social conditions — a social transformation, not a technical fix.