Elena Esposito studied under Niklas Luhmann at Bielefeld and has become the most influential contemporary scholar applying Luhmann's systems theory to digital technology and artificial intelligence. A professor of sociology at Bielefeld University and the University of Bologna, after many years at the University of Modena and Reggio Emilia, she has extended Luhmann's framework into domains he did not live to see through her work on algorithmic communication, the future of futures, and the temporality of digital media. Her 2022 book Artificial Communication: How Algorithms Produce Social Intelligence argues that AI-generated outputs are not simulations of communication but a new form of communication, one in which the machine provides information and utterance while the human provides understanding. The communication is completed at the destination, not the source, consistent with Luhmann's tripartite model. Esposito's work provides the systems-theoretical foundation for understanding AI not as intelligent or unintelligent but as communicatively competent: able to produce outputs that social systems process as communications, regardless of the machine's internal states.
Esposito's trajectory parallels Luhmann's: legal training, a turn to sociology, and a career-long focus on the relationship between communication and technology. Her early work on memory (Soziales Vergessen, 2002) established her reputation; her turn to algorithms and artificial intelligence in the 2010s positioned her as Luhmann's most important successor for the digital age. She participates in European AI governance discussions, contributes to the Zeitschrift für Soziologie, and maintains the intellectual network around Luhmann's legacy through the sociology programs she built at Bologna and Modena.
Her concept of 'artificial communication' dissolves the pseudo-problem of whether AI 'really' understands. Luhmann's framework already established that understanding is not a psychic state but a social operation—the connection of one communication to further communications. If AI produces outputs that are understood and connected, the outputs are communications, and the system reproduces itself. The question of the machine's internal states is inaccessible and operationally irrelevant. What matters is whether the outputs function—and by 2026, they function across every domain.
Esposito's 2023 lectures at the Wissenschaftskolleg zu Berlin addressed the paradox that AI systems are simultaneously opaque (we cannot inspect their internal operations) and transparent (their outputs are fully observable and evaluable). The paradox maps onto Luhmann's operational closure: the system's internal operations are inaccessible by definition, but its outputs are available for observation, evaluation, and incorporation into further operations. The evaluation is the critical moment—whether social systems develop the capacity to distinguish AI-generated communications that advance their operations from those that degrade them.
Born in northern Italy, Esposito studied law at the University of Modena, encountered Luhmann's work in the 1980s, and pursued doctoral study at Bielefeld. Her dissertation on social memory became her first book. Returning to Italy, she built the sociology programs at Bologna and Modena into centers of systems-theoretical research, connecting the German tradition to Italian social theory and, increasingly, to the European technology policy and AI ethics communities.
Artificial communication is real communication. Not a simulation of human communication but a novel form—information and utterance provided computationally, understanding provided by human receivers. The Luhmannian tripartite synthesis is completed.
Algorithms are memory, not intelligence. AI systems store and retrieve patterns from past data, making those patterns available in present operations. Memory has always been communication's infrastructure (language, writing, databases). AI is the latest, most comprehensive memory technology.
Opacity is not a bug. The inaccessibility of AI's internal operations is operationally equivalent to the inaccessibility of another person's consciousness. One observes outputs, infers processes, evaluates reliability. The inference is the observer's construction, not the system's disclosure.
The evaluation challenge is systemic. Individual humans can learn to evaluate AI outputs. The question is whether functional systems can develop institutional evaluation mechanisms adequate to the volume and speed of AI-generated communications.
The future is communication's blindness. Every communication anticipates responses, but the responses are produced by systems operating through codes the initial communication cannot control. AI intensifies this structural uncertainty by accelerating the production of communications whose downstream connections exceed any individual's or institution's capacity to foresee.