Originally published in Portuguese on March 23, 2023.

We are surrounded by digital agents that adapt their activities based on data we ourselves produce. They do this with the aim of modifying our behaviors. Examples include recommendation lists on Netflix or Amazon, as well as YouTube's feed. The goal of this text is to develop an understanding of the specific characteristics of how these agents operate and the differences compared to human agency.
To pursue this purpose, I will draw on an author who, despite the various criticisms that can be made of his thinking, can help us here: the German philosopher and biologist Helmuth Plessner, a somewhat forgotten exponent of philosophical anthropology.
In his quest to understand the singular nature of human existence, in 1928 Plessner developed the concept of "eccentric positionality." According to the author, humans are characterized by a fundamental incongruity between themselves and the world. Humans are never fully in tune with themselves, with the world around them, or with the institutions they create.
At the heart of eccentric positionality is the idea that humans possess a sense of self formed by a combination of first-person, second-person, and third-person perspectives. The first-person perspective refers to our subjective experience of the world, while the second-person perspective involves our ability to anticipate how others will perceive and understand us. The third-person perspective allows us to have an external and objective view of ourselves and the world around us.
Plessner argues that this combination of perspectives is what enables humans to reason, imagine new possibilities, and create institutions that allow us to live together in society. However, it also creates a sense of uncertainty and existential vulnerability, as we are constantly aware of the distance between our subjective experience and the objective reality around us.
In summary, eccentric positionality highlights the fact that humans are not mere biological organisms or rational beings but are embedded in a complex web of social, cultural, and historical contexts that shape our experience of the world.
To continue, I would like to introduce another necessary concept: that of agency. To maintain coherence with the context discussed, I will use the concept from systems theory, understanding agency as the capacity to perceive an environment in terms of possibilities for action, combined with the ability to act upon the world. This highlights the relational nature of agency and places perception and action on an equal footing. The definition includes, in addition to humans, thermostats and plants, which, though far more limited in their repertoires, also perceive and act in their environments. Furthermore, human agency includes the ability to set our own goals, both as individuals and as a society.
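This minimal notion of agency can be made concrete with the thermostat mentioned above. The sketch below is purely illustrative (the class and its names are my own, not drawn from systems theory literature): the agent perceives one variable of its environment and acts back on that same environment, yet its goal, the target temperature, is fixed from outside, which is exactly the capacity the paragraph reserves for human agency.

```python
# Illustrative sketch: agency as perceiving an environment and acting on it.
# The Thermostat perceives and acts, but cannot set its own goal.

class Thermostat:
    """Perceives temperature; acts by switching a heater on or off."""

    def __init__(self, target: float):
        self.target = target      # goal set from outside, not by the agent
        self.heater_on = False

    def perceive(self, environment: dict) -> float:
        # Perception: reading a single variable of the environment.
        return environment["temperature"]

    def act(self, environment: dict) -> None:
        temp = self.perceive(environment)
        self.heater_on = temp < self.target
        # Action: the heater changes the very environment the agent perceives.
        environment["temperature"] += 0.5 if self.heater_on else -0.5


env = {"temperature": 18.0}
agent = Thermostat(target=21.0)
for _ in range(10):
    agent.act(env)
print(env["temperature"])  # settles around the target: 21.0
```

The loop between perception and action is what makes this an agent in the relational sense; what it lacks, and what the rest of the text turns on, is any capacity to question or choose the target itself.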
And what happens when these agents, driven by so-called Artificial Intelligence, act upon us? At this moment, these systems may even be anticipating and learning to affect us, but they do not necessarily share our type of agency. What we commonly call Artificial Intelligence are machines with automated inference capabilities that have a specific type of agency that can be defined as data-driven and code-driven. These machines are data-driven because they can only perceive their environment in the form of data, and code-driven because they need code to make inferences.
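The "data-driven, code-driven" character of such systems can be sketched in a few lines. The toy recommender below is a hypothetical illustration of my own (the names and the click-rate heuristic are assumptions, not how Netflix or YouTube actually work): it perceives us only as data points, and its inference is fixed entirely by code.

```python
# Hypothetical toy illustration of data-driven, code-driven agency:
# the system perceives users only as data (click events), and its
# "inference" is a fixed piece of code ranking items by click rate.

from collections import defaultdict


class ToyRecommender:
    def __init__(self):
        # The agent's entire "world": counters derived from our behavior.
        self.clicks = defaultdict(int)
        self.shown = defaultdict(int)

    def observe(self, item: str, clicked: bool) -> None:
        # Perception, reduced to data points.
        self.shown[item] += 1
        if clicked:
            self.clicks[item] += 1

    def recommend(self, items: list[str]) -> list[str]:
        # Inference is pure code: rank by observed click rate.
        def rate(item: str) -> float:
            return self.clicks[item] / self.shown[item] if self.shown[item] else 0.0
        return sorted(items, key=rate, reverse=True)


rec = ToyRecommender()
for item, clicked in [("a", True), ("a", True), ("b", False), ("c", True), ("c", False)]:
    rec.observe(item, clicked)
print(rec.recommend(["a", "b", "c"]))  # → ['a', 'c', 'b']
```

Such a system "learns to affect us" only in the sense of adjusting these counters; nothing in it corresponds to a first-, second-, or third-person perspective.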
Here lies the critical point of differentiation: while such Artificial Intelligence systems perceive their environment solely in the form of data, humans understand the environment through their eccentric positionality (an external and objective view of ourselves and, at the same time, of the world around us), which leads us to anticipate the responses of others and the effects of our actions on the environment we inhabit.
Like many other thinkers, Plessner understands that it is impossible for us to directly access the world, but he details in his model that this impossibility occurs both in the shared world we create and in the internal world we experience. Our self is not an entity identical to itself but rather a first-person perspective that depends on our ability to adopt a second-person perspective (which allows us to imagine how others understand us) and a third-person perspective that situates us as embodied beings in an objectified space.
As researcher Mireille Hildebrandt states, "Plessner emphasizes that our self is constituted by the adoption of an eccentric position. Although some may think this is a flaw, it is actually a feature. It is precisely the incongruity of the self with itself that generates productive misunderstandings and creative leaps." The connection to other theories, such as noise in Second-Order Cybernetics, is notable and deserves a separate text.
Finally, although the creation of artificial intelligence arises from an eccentric positionality, as it is a human creation, from that point onward, artificial intelligence itself is not based on this position. Machines can only execute programs created by humans, and even when creating new machines, they lack the capacity to have an internal or shared world and do not thrive on the productive ambiguity of meaning (humans produce meaning; machines produce mathematics). Unlike humans, the agency of machines does not suffer from the incongruity of a self with itself.
This is only one of many possible interpretations, explored here as an exercise and without any claim to universality. Still, we can say that it is in this sterile perfection that the greatest difference lies.