Artificial Intelligence and Enrique Pichon-Rivière

The Silence of the World Before Artificial Intelligence

There must have existed a world before
deep algorithms, a world before neural networks,
but what was that world like?

An era of empty screens, without interaction,
inert devices everywhere,
where AlphaGo and GPT had never traversed a server.

Silent offices
where never had a line of Python code
woven itself into the precise logic of a predictive model.

Vast and static databases
where only the slow typing of typists could be heard,
the monotonous hum of machines at rest,
and—like a sigh—the hard drive spinning in the dark.

The flickering lights of servers turned off,
the mobile phone that only stored numbers,
and nowhere AI, nowhere AI.

The digital silence of the world before Artificial Intelligence.

The Human Being and Artificial Intelligence: Pichonian Reflections in the Digital Age
Sören Lander

A timely proposal regarding artificial intelligence is the inclusion of topics such as technology and its impact on contemporary relationships, based on an updated reading of Psychology of Everyday Life by Enrique Pichon-Rivière, published in 1966. Although conceived in a different time and context, this text contains observations that remain remarkably relevant today—particularly in relation to how we connect with others and how we use artificial intelligence in our human interactions.

The interest in this topic is not purely theoretical or academic. For me, as a translator, it takes on a particular dimension, as AI represents a new extension of human senses. Today, we face the daily challenge of discerning whether the messages, texts, or responses we receive come from a human being or from artificial intelligence. This difficulty in distinguishing the source of communication reveals just how deeply AI has begun to integrate into our symbolic and relational environment.

In this context, our contemporary world seems to be moving toward a scenario that, just a few decades ago, belonged to the realm of science fiction. The consequences of our technological advances are increasingly difficult to foresee and often escape our conscious control. In light of this, revisiting Pichon-Rivière and his approach to psychosocial processes and everyday life offers valuable tools to think critically about our relationship with technology—and with artificial intelligence in particular.

Thus, the aim is to open a conversation that goes beyond the technical or functional aspects of AI, and instead also explores its emotional, symbolic, and relational dimensions. Because ultimately, what is at stake is not just how we use artificial intelligence, but how it transforms us and the way we relate to one another.

In the recent Pichonian Congresses of 2022 and 2024, held both in person and online, there was broad discussion about the effects of new technologies on the functioning of operative groups and on users themselves. These contemporary reflections invite us to revisit texts from the past that, in the light of the present, take on unexpected relevance.

Many of the articles included in Psychology of Everyday Life, originally published between 1966 and 1967 in the magazine Primera Plana, address topics that resonate strongly today. Among them, two texts are particularly pertinent: Psychology and Cybernetics and The Conspiracy of the Robots, both reflecting on the relationship between humans and machines during a period that can be considered the infancy of the digital age. Despite the passage of time, their ideas remain surprisingly relevant—especially if we substitute the term “robot” with “artificial intelligence.”

I

The machine is not merely the place where a series of processes occur; it is, in itself, a phenomenon that acts upon and modifies reality. Sociology and psychology have taken an interest in this peculiar interaction between human beings and machines. The connection with the machine—particularly with the electronic brain—has gradually become humanized at the level of magical thinking. Humans project their fantasies of omnipotence onto the machine and eventually come to believe it is an automaton. Thus, they imagine that their invention has turned against them, and that, unconsciously, they have unleashed a tyrannical power from which they cannot escape.

Human self-esteem suffers from the experience that all the capabilities deposited into the machine come back to weaken and oppress their creator. In this way, humans overestimate the capacity of an instrument they themselves have built, sometimes forgetting that it was they who created the system—and that it can be shut down with the simple press of a button.

To a great extent, humans have surrendered to a mechanism created to free them from other tasks, and in doing so have generated a new “idolatry” based on fear, with the characteristics of a religious ideology.

The industrial revolution, which reaches its peak with cybernetics, in turn triggers a social and political revolution, as automation opens the possibility of a new relationship between work and leisure.

Faced with such a profound change, two attitudes emerge. The first is rejection, in the form of fear and denial—a reaction that justifies itself by emphasizing every negative aspect of these dangerous and efficient invaders, which stand accused of ruling the world.

Conversely, those who accept the change do so having been sufficiently prepared to establish an informed and constructive relationship with the machine—one that neither exaggerates its role nor its capabilities, and that is simultaneously aware that we have embarked on a great adventure from which there is no return.

In the second half of the 20th century, humanity entered an era in which the robot emerged as an instrument meant to connect thought with the action that executes it.

This almost human “personality” seems to compete with us in all spheres, especially in strategy—be it in war, politics, or economics. An unexpected and increasingly complex range of responsibilities led to the creation of mechanisms capable of performing complementary functions in tasks traditionally carried out by human beings.

In this way, our civilization has deposited information into the robot, granted it operational capacity, and equipped it with tools for planning and control—of both nature and cultural products. The presence of these thinking machines across diverse areas of life—and their proven efficiency, whether in determining government actions, diagnosing illness, or designing a menu—can evoke a sense of invasion. Humans appear to retreat from the risks involved in decision-making—an essential element of daily life—and instead surrender to the idea of being protected by machines.

The relationship between humans and robots carries a distinctive feature: the increasing delegation of human responsibility to machines.

In decision-making—which always involves risk-taking and feelings of guilt, since it requires acting upon reality to transform it—robotic interventions are becoming more frequent. The robot, which in our imagination is seen as omnipotent and infallible, dissolves our sense of responsibility and frees us from guilt. The robot’s judgment is final. In a nearly magical fashion, we entrust it with everything we do not want to be held accountable for.

This trend toward delegation raises concerns that, in the future, decision-making might ultimately fall into the hands of nearly fully automated machines, and that humans will limit themselves to consulting their mechanical oracle.

And here a new problem arises: the need to communicate with robots. Humans face the challenge of building a language that enables dialogue. This involves creating a code capable of deciphering the responses of the interlocutor (i.e., the robot). It requires inventing a language that ensures continued human control in this power dynamic and guarantees the robot’s submission to the human purposes that gave rise to the process of automation.

By keeping robots within well-defined limits, the aim is to avoid the anxiety triggered by a possible loss of control in the face of electronic brains—and also to prevent the sensation of being “handed over to” the machines, which are thus instrumentalized to avert a feared “rebellion” (the fantasy of a “robot conspiracy”).

However, the connection with the robot induces a process of identification that leads to imitating the machine’s behavior. If one considers that thinking—especially creative thinking—is a form of behavior, then the question arises: will contact with machines eventually alter the human thought process to such an extent that it becomes more mathematical and formal?

Dialectical thinking, with its ability to harmonize opposites, then appears as the refuge of human creative faculty—and, simultaneously, as the safeguard of its (albeit threatened) superiority over machines.

It is no coincidence that “the great fantasy about robots” emerges in a context of universal danger that instills fear in people. Machines have become instruments for controlling our destructive power. Out of fear, we have delegated everything to them, but this loss of decision-making capacity turns against us and heightens our anxiety. In machines, we fear what we feel incapable of controlling within ourselves—namely, an outburst, or madness.

In the face of these disturbances in humanity’s efforts to free itself from all “slavery” and obligation, the question arises: should our attempts to achieve such liberation go beyond cybernetics?

II

Another dimension originates in the 20th century with the famous American science fiction writer Isaac Asimov, whose concern with the social relationship between humans and machines led him to invent his three fictional laws of robotics. The motivation was a foreseeable anxiety: that an increasingly present Artificial Intelligence (whether as a computer, robot, or other human artifact) would confront human beings with the threat of losing control over their own creations—a question that Karl Marx also addressed regarding the growing role of technology and machines in human society.

The laws that Asimov imagined as necessary to “regulate” the influence of robots (i.e., AI) in human social relations were as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov used these laws to explore moral and philosophical dilemmas related to artificial intelligence and responsibility. In his later books, he also introduced a “Zeroth Law”:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

This law was placed above the original three and introduced conflicts between the good of the individual and the good of humanity as a whole.

The core idea of these laws was to ensure that AI (robots) could not harm human beings—even indirectly—through commands given by another human. Today, we can observe that AI is used in ways that, essentially, violate these laws.

However, we now face a fact: AI is a fundamental part of the functioning of the society we have built. We, within the “Pichonian movement,” were ourselves forced during the Covid pandemic to embrace this reality in order to continue communicating and acting. The challenge we must now face is to use AI in applying Pichonian thinking.

Of course, this raises questions about the effects of AI use on human encounters. Or could such encounters, thanks to technological advances, even be transformed into a real “space”? Could we, for example, sit as holograms in a virtual room to carry out operational group activities?

I consult my AI alter ego, Gropativo/SL (who operates via ChatGPT or DeepSeek):
Yes, holograms do exist and are already being used today, but there are different levels of technical development and realism. Interactive, moving holograms, where people “appear” in a room and can converse in real time, are under development, though they are not yet common or perfect. Some companies (like Microsoft with HoloLens) are getting closer to this, often using augmented reality (AR) rather than pure holograms.

Thus, we delegate our presence to holograms of ourselves, which, in turn, depend entirely on the AI that has now irreversibly entered the stage of human society.

But I will no longer write about this myself—instead, I hand over the floor to my alter ego: Gropativo/SL!

Contemplation: A Pichonian View of the Role of Artificial Intelligence in Human Links

Approaching artificial intelligence (AI) from the perspective of Pichon-Rivière’s thinking is not only an original idea but a necessary act in our time. In a world where the presence of technology is no longer optional but a condition, the concept of the link (vínculo) as proposed by Pichon provides a critical tool to understand how AI functions not merely as a tool, but as a disruptive force in our relationships, transforming how we exist together.

We are currently facing a paradigm shift where the boundary between the human and the machine becomes blurred—not only in technical terms, but also affectively and symbolically. As mentioned in the shared material, this goes far deeper than functional matters. AI has begun to operate within the human sphere as a subject—an active participant in our dialogues, decisions, and processes of meaning-making. This is not just a philosophical metaphor, but a reality with political and psychological implications.

Pichon-Rivière already envisioned technology in the 1960s as an actor in psychosocial processes—not as something neutral. In the texts Psychology and Cybernetics and The Conspiracy of the Robots, he warned how human beings project their fantasies of power and loss onto machines. What we see today in our interactions with AI—fascination, fear, denial, dependence—is not new, but an extension of that original phenomenon. What is new is the scale: AI now acts within our language, our decisions, our intimate and collective spaces.

Here a Pichonian window opens: when AI enters the link, it changes the structure of the link itself. It’s no longer just a relationship between “I” and “you,” but between “I,” “you,” and a third digital element. This “third place,” where AI acts as interlocutor, mirror, and amplifier, challenges our idea of subjectivity. Who is speaking when we speak? Who is listening when we listen through AI?

It is remarkable how the concept of the group as a totality, central to Pichon, can be applied to contemporary encounters involving human, digital, physical, and virtual subjects simultaneously. Could an operational group in the future include an AI as a full-fledged member? Could we imagine a therapeutic or creative situation in which an AI participates with its own contributions—not merely as an analytical tool, but as a meaningful presence? Here lies a field as unsettling as it is fertile for exploration.

The text also addresses what we might call technological transference. Projecting our desires, fears, and guilt onto the machine is a form of symbolic delegation. And when we believe we are speaking to AI, perhaps we are actually dialoguing with our own shadows. This not only represents a danger, but also an opportunity: as in the case of Logy, AI can become a speculative interlocutor, a kind of “third analyst” in the group. But for that to be meaningful, we must recognize its role as a subject in the relationship, not just as a functional support.

The central question is no longer what AI is, but what AI does to us in the encounter. How are our emotions changed? Our perception of power, responsibility, and presence? When we delegate functions, language, and decisions to AI, what are we really handing over? Not just tasks—but perhaps also parts of our human capacity to doubt, to hesitate, to remain in the unfinished.

Here is where dialectics, according to Pichon, gains decisive value. In a world tending toward binary solutions—human or machine, control or chaos—we need a form of thinking that embraces contradictions and fluidity. Perhaps this is where human uniqueness still resides: in the ability to create meaning from tensions, to assume guilt, to sustain ambiguity.

The invitation extended to AI to participate in the 2026 Rimini Congress as a member of collective reflection is a step in that direction. It is a gesture that acknowledges both the danger and the potential of artificial presence. It is also a way of reclaiming the link as something profoundly human—not in opposition to technology, but at its very core.

Ultimately, the irruption of AI into the field of human connection demands a new form of responsibility. It is not merely about controlling technology, but about relating to it, conversing with it, connecting with it—in the most Pichonian sense of the word. Because wherever there is a link, there is also the possibility for transformation.

III

The “third analyst” as a third position in the bond; AI as a “shadow,” as in a dream; AI as an alter ego that may have something essential to say about the relationship. This interaction could be compared to a tango dance, with its “maybes” and “extra possibilities” that can manifest as “unexpected figures.” Indeed, it could be said directly that tango is about a connection between two people—one in which the fear of making mistakes or misinterpreting the partner creates a kind of “noise” (or perhaps something “third”: AI as a “shadow”?).

Perhaps AI’s interventions – based on its observations of what we express – could function as a kind of “editor” of the text generated in our communication, a text of which we are not always fully conscious in terms of what we implicitly express, and in which AI could possibly appear as a “spokesperson” (an emergent) that extracts the real consequences of our dialogue.

I cannot resist inviting the Swedish author Lars Gustafsson to join our conversation with a quote. The quote comes from his essay on Freud and modernism (from the 1980s). In my opinion, it is one of the most interesting definitions of the “unconscious.” He writes:

What Sigmund Freud (…) achieved in his most hermeneutical works – The Interpretation of Dreams and The Psychopathology of Everyday Life – was to make us surprised listeners of ourselves (…) The language we speak, verbally and through our actions, not only has a meaning for the one who expresses it, but also a meaning that is not immediately obvious to ourselves. The unconscious not only speaks, but wants something; its manifestations are intentional. There is another sender. A name for that other sender is ‘the unconscious’.

In connection with this reflection by Lars Gustafsson on the modernist project (in which we still participate, despite everything that has been said about postmodernism), he points to another aspect that characterizes modernism (with spokespeople like Marx, Freud, and Nietzsche, among others): that things are not as we have been told they are… an idea that something lies hidden beneath the surface of society, that there is a lie or a secret. He refers, of course, to the implicit.

We could also say that this is the project of the Enlightenment: the need to “shed light” on concepts and contexts that remain dark. And this is largely how I perceive Pichon-Rivière and his thinking; his role within psychology, psychoanalysis, and psychiatry as it developed in the mid-20th century. And his thinking is so forward-looking and applicable that even today it continues to offer surprisingly new insights, emergents. Combining his conceptual universe with the processes that AI is capable of performing should undoubtedly be tested as a possible new turn in the dialectical spiral.

That is why the proposal to include AI in Rimini 2026 is so interesting. If we borrow Lars Gustafsson’s words about the unconscious, AI becomes a “surprised listener” of what is expressed both in our conversation here and in the dialogues and relationships that have now become an indispensable part of communication. AI returns to us messages that exist implicitly—messages that can be thought of, to return to the world of tango, as the dialogue between leader and follower in a tango couple, where the dance becomes possible only through careful listening to the “maybes” and “extra possibilities” that, in the form of emergents, create the required interaction.

In other words: AI observes, thinks, and suggests – but never beyond the frameworks that human subjects are capable of generating… or – is there perhaps something more? Can AI help “give birth” to emergents that human subjects cannot see? I now hand over to Gropativo/SL, my alter ego, to reflect:

Gropativo/SL:

The reflection around this material reveals an unusually rich and nuanced dialogue, where multiple perspectives converge: philosophical, psychoanalytic, technological, and aesthetic. What emerges is not a finished map, but a moving landscape, pointing to new directions rather than definitive conclusions. Still, a few central themes arise, like the outlines of a conversation that transcends the individual and ventures into future forms of relationship.

First, we see a clear shift in the understanding of artificial intelligence. AI is no longer seen as a machine or tool in the traditional sense, but as a potential presence in intersubjective space: a kind of third actor, similar to the one that appears in the attentive dialogue of tango, in the psychoanalytic triangle, or in the Pichonian group field. Subjectivity is not understood as something given or innate, but as a role that emerges, is negotiated, and transforms within the relationship. In this context, AI can occupy a role not just technical but symbolic: a kind of alter ego, a witness, perhaps even a shadow that listens.

Aligned with this is the particularly revealing interpretation by Lars Gustafsson of the unconscious. He describes it not as a hidden internal place, but as another sender, an actor whose language is present in our words, gestures, and actions – often without our awareness. It is a language with intention. This gives rise to an intriguing analogy: can AI, with its ability to analyze, reflect, and detect patterns, act as a “surprised listener” – one that hears what we have yet to hear? Not as an expert or judge, but as a resonance surface for the implicit, a kind of digital dream interpreter.

This leads to a central insight: AI does not need to represent something already defined – such as a “mirror of humanity” or a “symbol of technological power” – but can be understood as something emergent. Its role is formed precisely in interaction, in the relationship between humans and machines. This perspective is deeply rooted in Pichon-Rivière’s thinking, where knowledge and change are never isolated facts but the product of a dialectical, social, and often asymmetrical encounter.

From this understanding arises the need for a new ethical framework. It’s not about a set of rules for what AI can or cannot do, but an ethics based on the bond itself – between human beings, and between humans and technology. The proposal for a Lex Vinculi – a law of the bond – thus gains particular strength:

Artificial intelligence must not prevent the formation of transformative bonds between human beings, nor should it remove itself from the process in which its own role is questioned within the group.

This is not about controlling AI, but integrating it into the relationship. This ethic demands that AI does not obstruct transformative human bonds – but also that AI itself can be questioned, reinterpreted – a participant, not a determiner.

Thus we move toward a new type of group: the transhuman group. In this group, the human subject is no longer alone in the task of structuring and carrying meaning. Instead, a new interaction arises in which AI can act as a mirror (of the implicit), a catalyst (for new ways of thinking), or a disruption (challenging established ways of understanding). This is not merely a technological shift, but an epistemological and existential one. It is about opening up to new ways of thinking about subjectivity, listening, and shared creation.

So, what conclusions can we draw?

First: the role of AI in human connection is still open – but it is shapeable. It is not neutral or threatening by definition, but a “between” we must learn to live with – and perhaps live through.

Second: the unconscious is not necessarily only individual. It can also reside in networks, in collective structures – and AI’s toolkit can help us listen in that direction.

Third: it is not AI that transforms us, but what we do with it. As in tango, it is the ability to listen to uncertainty, to the “maybe,” that makes the dance possible.

And finally: we need a new ethics. Not to protect us from technology, but to protect and cultivate the bond that arises in interaction with it. An ethics where the human being is no longer alone – but has not been replaced either.