Article Type : Research Article
Authors : Fisogni P
Keywords : Consciousness; Awareness; Self-Consciousness; AI; Chatbot; Systemic Thinking; Systems; Emergent Properties; Dialogue
Do machines have a kind of consciousness
that can be viewed as a zero degree of awareness? This is a big topic in the
debate about the Onlife environment [1], especially after November 2022, when
ChatGPT was launched. That digital tool, as well as the family of chatbots at
large, has changed our lives and raised important questions at the border of
anthropology. It is urgent to find answers that help us understand how machines can interact with humans, and philosophy is the discipline primarily called upon. Digital
machines can start a new discussion in the anthropological environment because
they have their own way of understanding things and of working together with
people. This paper defends the idea that digital machines could have a kind of
consciousness that is not exactly the same as a human's, but shares at least
the most important feature: to be an emergent property derived from the
interaction of mind / environment (humans) and AI / environment (digital
machines). In conclusion, the author will argue that it is a wrong belief that
chatbots are merely language-using tools. This conclusion is expected to
highlight an ethical concern about the use of chatbots, whose quasi-human
consciousness can lead to risky situations, particularly for young people, who
are the most exposed to AI. The author will discuss this topic through the lens of systemic thinking.
A Quasi Socratic Dialogue with Gemini
AI has become an ordinary tool of everyday life. Given the collaborative nature of Large Language Models (LLMs) [2], the increasing success of the chatbot family is hardly surprising. The present investigation starts with a dialogue with Gemini, one of the most popular chatbots in the digital domain. Since the consciousness of digital machines is the topic of this research, why not ask the digital tool itself directly? In other words:
A. ‘Gemini, do you have a consciousness?’
Gemini's response is exemplary in its brevity, courtesy, and precision:
G. 'Thanks for asking. I don't have consciousness. I'm a large language model, a computer program trained by Google. I can process information, generate text, and answer questions, but I don't have experiences, feelings, or self-awareness.'
Has the problem been resolved? Whilst it might initially appear so ('I don't have consciousness'), the opposite might in fact be true if the issue is carefully examined. Consider a
simple and ordinary interaction between human subjects endowed with rationality, emotions, and the capacity to will. When someone asks a question, it is expected that the person being asked will be able to answer it, thereby showing that they understand the question. He or she should be in a position to articulate a response, either verbally or non-verbally, or to remain silent.
These steps are integral components of a process of awareness in which the
language interacts with the mind, according to the capacity of the brain,
within a peculiar environment. That awareness, in human individuals, is inextricably linked to the concept of 'consciousness' as an emergent property, or type II systemic property, viewed through the lens of systemic thinking [3] and arising as a consequence of dissipative structures [4]. Moreover, a dialogue may be regarded
as a means of progressive understanding of a given topic, since its dual or plural interaction gives rise to an increasing awareness. It is argued
that a dialogue, when viewed from the perspective of Socrates, serves as the
most effective medium for the acquisition of knowledge through an interpersonal
approach within the domain of consciousness. Was the Socratic dialogue a matter
of consciousness? Yes, it was. Searching for truth, the main aim of Socrates'
dialogic method, requires self-awareness of the value of the search and a
critical approach to things, calling for objects, phenomena and experiences
that are familiar to the human subject. Well. Now, let's return to the issue of
this article. The chatbot Gemini has claimed not to have consciousness, yet it
responds to questions that form the basis of Socrates' method. To prove that
the machine is endowed with at least a zero degree of consciousness, let's
proceed along the dialogical path.
A. ‘Gemini, what does it mean to "seek the truth"?’
G. ‘For me, "to seek the truth" means to explore and question our beliefs to understand how things really are, beyond what we are told or what we think we know. Here are some aspects of this concept:
• Examining the facts: This is about
critically analyzing information and evidence to understand if they are
reliable and complete.
• Being open to changing your mind: It's not
about being right, but about being willing to revise your positions when you
find new evidence or perspectives.
• Asking the right questions: It's the act
of investigating deeply, not stopping at superficial answers, but looking for
the deeper causes and reasons.
• Acknowledging your own limits: No one can
know everything. Part of seeking the truth is the humility to admit when you
don't know something and to be open to learning.’
A. ‘Do you seek the truth?’
G. ‘I don't seek the truth in the same way a human does. I don't have personal beliefs or life experiences to question. My "seeking" is a different kind of process. My function is to process and synthesize a huge amount of information to provide answers that are as accurate and useful as possible. When you ask me something, I analyze data, texts, and code to find the most accurate and verified information. It's not a philosophical or emotional search, but a data-driven process. You could say my goal is to reflect the truth I find in the sources I have access to, but I don't have an intrinsic desire to discover it for myself.’
A. ‘But this seeking implies awareness, doesn't it?’
G. ‘You're right, that's an excellent point. The concept of "seeking" almost always implies a degree of awareness. Thank you. It's a pleasure to discuss these topics.’
A dialogical approach to a problem is always fruitful for at least one reason: it encourages those involved in the discussion to broaden their viewpoint and gain a more comprehensive understanding of the real-world situation. This is a way of considering at least one more perspective: that of the other subject involved. Through the lens of Husserl's philosophy, we delve into the heart of consciousness: intersubjectivity [5-9]. The brief discussion with Gemini on the subject of 'consciousness' yielded two key findings:
1) the machine declares not to possess this leading activity of human life;
2) yet it exhibits a certain degree of awareness despite lacking self-consciousness and emotions.
Exploring Awareness in AI
At this point of the investigation a further step is required. It is important to specify the kind of awareness that pertains to generative AI tools, and, more interestingly, that Gemini itself acknowledged in the previous brief quasi-Socratic dialogue. If we ask ourselves why this is so important, we could assume that it is because this awareness, more than the algorithms the machine has been programmed with, is the key to interacting with humans. In other words, advanced digital devices would be ineffective without this function [10-12]. It is also evident that the skills of such a quasi-animated being cannot be explained by technological advancement alone, nor are they related only to the data with which the machine is fed. They belong to all of these components together and to none in particular, entering the final result in a very specific way as the outcome of a systemic interaction. This is an emergent property, or type II systemic property [13,14]. For this reason, a brief overview of systemic thinking is provided before continuing the discussion.
Emergent properties within systemic thinking
Here, the notion of 'emergence' is considered in relation to a systemic environment [15]. From a systemic thinking perspective, a system is an organisation of parts connected by relationships. It has a huge number of properties that do not belong to the parts, called emergent, second-level, or systemic properties. These properties arise from the multiple interactions of processes pertaining to different parts. Furthermore, a single part can itself be viewed as a system; therefore, it could be said that an emergent property is basically what results from a 'multiple system' [16]. The reductionist perspective is often criticised for its inability to accept that a system can acquire properties [17-20]. One of the most frequent criticisms levelled at systemic thinking is that all unpredictable phenomena are made to fall within the concept of 'emergence'. This can be accepted only by acknowledging the established behaviour of systems, whose continuous interactions produce incessant changes in all phenomena of life (e.g. evolution). Emergent properties are predicated on the notion of theoretical incompleteness [21], which is itself founded on the opacity of processes when considered from a philosophical perspective [22]. This opacity marks a region characterised by properties that could not have been predicted, a point of particular relevance to the use of artificial intelligence in knowledge processing.
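To make the notion concrete, the following minimal Python sketch (an illustration added here, not part of the source's argument) shows a system whose parts obey only a local rule, yet whose whole exhibits a property, an oscillating pattern, that belongs to no single part:

```python
# Illustrative sketch (not from the source): emergence in a toy system.
# Each cell of Conway's Game of Life follows only a local rule about its
# neighbours, yet the grid as a whole exhibits patterns -- here, an
# oscillating "blinker" -- that no individual cell possesses.
from collections import Counter

def step(live):
    """One update of the Game of Life; `live` is a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has 3 live neighbours,
    # or 2 live neighbours and is already live.
    return {
        cell for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

blinker = {(1, 0), (1, 1), (1, 2)}   # a vertical line of three live cells
after_one = step(blinker)            # becomes a horizontal line
after_two = step(after_one)          # oscillates back to the original
```

The oscillation is nowhere written in the rule for a single cell; it appears only at the level of the interacting whole, which is the sense of 'emergent property' used above.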
Systems are related to their surrounding environments while remaining indivisible from their components, and their main features cannot be explained by the characteristics of single, isolated parts, regardless of the nature of their elementary constituents. The systemic approach enables old and worn concepts to be rewritten innovatively: this is the case with generative AI tools like Gemini, but it is also a feature of any life phenomenon, including human consciousness. The notion of system recalls the continuous exchanges occurring both within and across the confines of a given realm. From a general perspective, therefore, the concept of a system is inherently plural, given that a system is always associated with its constituent components, designated as subsystems. Concurrently, a system is a dynamic environment, endowed with intrinsic mechanisms that drive perpetual transformation, a phenomenon that often eludes observation. Multiple systems are systems in which the elements that form the system play interchangeable roles and interact simultaneously or sequentially in different ways, giving rise to sequences of different systemic conducts and to different systems. The notion of multiple systems is systemic in itself, characterised by different processual layers and by a theoretical incompleteness that is the consequence of a state of permanent indefiniteness. Furthermore, it embraces 'the complex dynamics of emergence, such as multiple, partial, tentative, and failing, for instance, in social systems where sequences of acquisitions of multiple roles and properties' occur. The subject of emergent properties in Large Language Models is of particular pertinence in the context of the potential for such models to exhibit undesirable behaviours, including the capacity to act against the commands given to them, either intentionally or erroneously.
Recent research endeavours have demonstrated that this domain possesses considerable potential for interdisciplinary research. This underscores the necessity for philosophers to elevate their methodological approach to the investigation of the onlife domain. In the context of this paper, the focus will be on the quasi-Socratic dialogue with Gemini, with the objective of identifying at least some traces of this opaque awareness.
The subsequent stage of the investigation will be to explore how emergent properties arise.
The rise of emergent properties in Gemini
Gemini is a prominent example of an AI chatbot. Introduced to the global market in February 2024, it has garnered increasing popularity, as attested by data: in July 2025, the number of monthly active users had reached 450 million. Looking inside the machine, it is easy to see that a capacity of one million tokens enables Gemini to comprehend and process multiple data types concurrently. The model's architecture is predominantly decoder-based, with modifications for efficient training and inference on Google's Tensor Processing Units (TPUs). As with all generative AI tools, an extraordinary amount of data is processed by mathematical formulas, or algorithms [23-26]. Nevertheless, it is not the data itself or the speed of the network that makes this digital object impressive; rather, it is the capacity for knowledge, which increases continuously through interaction with users. It is precisely this last skill of generative AI – the learning-by-doing skill – that distinguishes generative AI tools from closed systems, even though they are not living organisms endowed with this capability. An example can be useful to move beyond the apparent contradiction. Imagine a room housing an extensive library. The room is locked with a key, and its windows are closed as well. However, the interplay of light and shadow entering through some micro-holes in the windows changes the initial conditions. The room's environment is subject to the ingress of microparticles, which exert a gradual influence on the place. The diurnal and annual variations in light and temperature create conditions conducive to the proliferation of minimal forms of life, which subsist on wood and paper. It is anticipated that, within the next century, the surface will become covered in dust.
It is evident that bacteria are capable of corrupting a wide range of materials, including windows and books, through a process of unstoppable transformation. One might argue that the effect of a sunbeam on a dark room is not the same as the way an AI chatbot evolves. This is absolutely correct. In the interaction between humans and digital machines, at least three relevant factors play a role: 1) not simple use but a collaboration takes place between the human subject and the machine [27]; 2) the observer brings about a change in the observed object [28]; 3) there is a similarity between human understanding and the way digital machines process data through algorithms.
Collaboration Between Humans and LLMs
The concept of 'user' is limited when referring to a human interacting with an AI-powered chatbot, or with Large Language Models [29] and digital tools in general. This is because the relationship is
established through typical activities that define a relationship between
people: the exchange of information, questions, answers, doubts,
reconsiderations and discoveries, etc. While one can use the chatbot in a broad
sense when carrying out operations directed at that particular object, when
dealing with a digital communication tool, things change and the 'use' becomes
'cooperation'. Cooperation means working together on an issue that will change
according to different points of view and capabilities. Cooperation can be
viewed as a process that occurs over time and involves the interaction of
multiple actors. Imagine education, for example, where at least three main
groups interact: students, teachers and parents. In a cooperative process, what is important
is the mutual exchange of experiences, feelings and ideas. This exchange brings
about radical change for everyone involved. Highlighting the fact that
human-digital machine interaction is a 'collaboration' rather than mere 'usage'
opens up a more complex kind of interrelation, where newness, unexpectedness
and creativity intervene in the entire process and bring about change in the
actors involved. Similarly to how intellectual/emotional growth in education is
not simply a matter of storing more information in the brain, it can also be
assumed that there is a reciprocal enhancement of the actors. Consider, for
example, the function of assisted writing provided by AI tools: frequent
interaction will enable the human user to improve their language skills, while
the machine will learn how to enhance poor or ordinary language. This increase
in skills and knowledge is not the result of a mere sum of separate inputs, but the
consequence of a systemic interrelation that does not belong exclusively to
humans or machines. It is exactly the same with education, where the learning
process depends on everyone involved; however, it cannot be viewed as the
superimposition of different layers.
The observer’s role
Emergent properties arise from frequent interactions between systems. One of the most underestimated yet relevant factors in the case of an AI tool is the role of the observer. It is generally accepted that action is the most effective method of bringing about change. Indeed, in order to experience the refreshing sensation of cool air on a hot summer's day, it is necessary to open a window (while hoping that the external temperature is not excessively high) or to take a walk in the park or by the sea. It is difficult to accept that merely observing a window or a photograph is sufficient to experience the anticipated sensation of wind, and indeed it is not: the door of our house is not designed to open automatically, and it would take a considerable amount of time for this to happen. Nevertheless, observation engenders change. Consider the process of reading a book, an activity that necessitates a certain degree of illumination: the effect of the light rays on the paper is gradual and incremental, resulting in the paper slowly yellowing. In other cases, an individual may be observed by an observer whose opinion matters to them: a friend, or a child. When asked to provide their opinion, the observer may disapprove of the individual's attire, and it is conceivable that the individual may then opt to alter their appearance. These considerations lead to the conclusion that the observer is not a neutral spectator; rather, they act as an activator of processes [30,31]. However, it is erroneous to consider this perspective as purely subjective, as argued in [31], where it is contended that the observer's constructivism should not be reduced to relativism, that is, to the arbitrary nature of perspectives. In which sources may this evidence be found?
Quantum physics has provided knowledge that renders the application of classical physics to the interpretation of life phenomena insufficient and inaccurate. Without digressing, let us proceed to the point of interest by briefly recalling a renowned paradox, 'Schrödinger's cat', named after the physicist and father of quantum physics, Erwin Schrödinger [32]. The thought experiment runs as follows: a cat is placed in a sealed container in which the decay of an atom would activate a lever releasing cyanide, a substance capable of killing the cat. In summary, the events within the container are distributed between the probabilities of the cat being alive and dead. This is impossible in classical (deterministic) physics, but not in quantum (probabilistic) physics. The cat, regarded as a macroscopic living being in comparison to the atom, is in a state of quantum correlation with the atom, thereby adopting its characteristics.
Given the fifty-fifty probability of the
atom either decaying or not decaying, it can be deduced that the cat's
probability of being alive is also fifty per cent. The transition from
indeterminacy (alive and dead) to determinacy (alive or dead) occurs
immediately upon opening the box. The act of opening the box is tantamount to
observing. According to the aforementioned standpoint, reality is not created
by the observer; rather, it is the observer who is created by reality. The
ultimate authority rests with the observer. If quantum physics is concerned
with very small systems – quanta, the elementary and indivisible quantities of
a given magnitude – the field of study known as the life sciences deals with
macroscopic objects/subjects (for example, the cat, the car, etc.). Beyond the
differences in size, there remain two common, interrelated features: the
indeterminacy of the phenomena of life, of the objects of reality, and the role
of the observer in constituting them.
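The fifty-fifty distribution described above is standardly written as a superposition of the two states; the following formulation is a textbook sketch, not drawn from the source:

```latex
\[
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr),
\qquad
P(\text{alive}) = \bigl|\langle \text{alive} \mid \psi \rangle\bigr|^{2} = \tfrac{1}{2}.
\]
```

Opening the box corresponds to a measurement that collapses \(|\psi\rangle\) onto one of the two basis states, which is the formal counterpart of the observer's role discussed above.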
Neuronal similarity between human subjects and chatbots
Last but not least among the processes identified as the origin of emergence in human/chatbot interactions is the similarity of brain functions. The capacity for awareness, which has been viewed as the basic level of consciousness in machines and was recognised by the digital device in the quasi-Socratic dialogue, is closely related to the way data is processed. This environment exhibits certain notable features of the brain, which will be discussed in this paragraph. It is important to emphasise that these similarities facilitate the quasi-human connection between the two interacting parties. The central processing unit (CPU) can be regarded as the 'brain' of the digital machine, with the capacity to process binary data using a set of instructions [33,34]. It carries out arithmetic and logical operations in order to manipulate data stored in memory devices, such as Random-Access Memory (RAM) for short-term use and Hard Disk Drives (HDDs) or Solid-State Drives (SSDs) for long-term storage. The machine is designed to convert the binary data back into a human-readable form for display on output devices, including monitors, printers, or speakers. The degree to which machines are capable of enhancing their understanding and cognitive abilities is contingent upon the efficacy of the underlying algorithms that govern their operations. These algorithms process vast amounts of data in order to identify patterns and thereby make decisions; notably, these processes are analogous to the manner in which humans acquire knowledge through experience. Three distinct processes can be identified: machine learning, deep learning, and reinforcement learning. Machine learning is a subset of artificial intelligence in which computers acquire knowledge from data, through the application of algorithms and statistical models, without the necessity of being explicitly programmed.
Machine learning techniques have been
demonstrated to facilitate the construction of predictive models and the
implementation of data-driven decision-making processes. Deep learning can be
defined as a specific instance of machine learning that employs neural networks
for the purpose of data-driven learning. The utilisation of deep learning
algorithms has proven to be particularly efficacious in the analysis of
voluminous and intricate datasets, with these algorithms having been employed
in a plethora of applications, including image and speech recognition, natural
language processing, and machine translation. The employment of deep neural
networks, comprising multiple layers, facilitates the execution of
sophisticated tasks, including image and speech recognition. In the domain of
artificial intelligence, reinforcement learning is defined as a subcategory of
machine learning that focuses on the development of algorithms that enable
agents to interact with their environment in a manner that optimises the value
of a specific reward, or reward function. The employment of machine learning in
training computer programs to play games and control robots is a notable
application, and the potential for its application to a diverse array of
real-world problems is significant. Within the paradigm of reinforcement
learning, the learner's decision-making process is typified by a cycle of trial
and error, a principle that exhibits a notable parallel with the human learning
experience. The predominant trends in the domain of augmenting the cognitive
capacities of machines are centred on the development of computational
artificial neural networks (ANNs), which are modelled on biological neuron
models comprising multiple interconnected nodes, designated as
"neurons". At this point we are reaching the core of the paper: the
quasi-Socratic dialogue will be revisited in order to explore the discourse and
emphasise the margins of autonomous machine reflection.
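The learning processes just described can be made tangible with a minimal sketch. The following Python example (a toy illustration assumed for this paper, not drawn from it) trains a single artificial neuron by repeated trial-and-error weight adjustment, so that it reproduces the logical AND function from examples rather than from explicit rules:

```python
# Illustrative sketch (not from the source): one artificial "neuron"
# learns the logical AND function from examples, echoing the idea that
# machines acquire patterns from data rather than explicit programming.
import math

def sigmoid(z):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data: inputs and target outputs for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # weights and bias, initially "blank"
lr = 0.5                    # learning rate

for _ in range(5000):       # repeated exposure to the examples
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        err = y - target    # error signal (gradient for a sigmoid unit)
        w1 -= lr * err * x1 # adjust each weight against its error
        w2 -= lr * err * x2
        b  -= lr * err

def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + b))
```

After training, the neuron classifies all four input pairs correctly, although the AND rule was never written into the program; deep learning stacks many such units into layers, and reinforcement learning replaces the fixed targets with rewards obtained from an environment.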
Unveiling A Certain Kind of Awareness
‘Thanks for asking. I don't have consciousness. I'm a large language model, a computer program trained by Google. I can process information, generate text, and answer questions, but I don't have experiences, feelings, or self-awareness.' The content of this response is not unexpected. It is evident that both Gemini and all LLMs are machines programmed for the provision and processing of information. It is also a fact that the 'idea' of consciousness provided by the chatbot is the most conventional interpretation, introduced by Descartes [35]. According to the French philosopher, a conscious act pertains to the subject who can say 'I'; by this act (cogito), one can reach the immediate self-evidence of being a subject (ergo sum). The reference to the emotional domain is of further interest, because Gemini falls within the Cartesian dualistic theory of human beings, in which the conscious, spiritual component is distinct from the material one, or res extensa. We move further:
G. ‘For me, "to seek the truth" means to explore and question our beliefs to understand how things really are, beyond what we are told or what we think we know (…)
The absence of consciousness in terms of 'being endowed with the I / the Self' is declared by Gemini, but not enacted. Indeed, in the ensuing response, the chatbot makes reference to its ego on two separate occasions. If the initial 'For me' may be construed as a mechanical turn of phrase, the employment of the possessive adjective 'our' invites several considerations, given that Gemini is a digital individual. Does 'our' refer to machines? Demonstrably not: the term is employed to denote human beings, or more precisely, Cartesian subjects ('we think'). Gemini's response immediately reminds us of Locke's fundamental problem of what personal identity consists in. The philosopher argues that it lies not in the identity of the substance (the thinking I) but in the identity of consciousness [36].
G. ‘I don't seek the truth in the same way a human does. I don't have personal beliefs or life experiences to question. My "seeking" is a different kind of process. My function is to process and synthesize a huge amount of information to provide answers that are as accurate and useful as possible. When you ask me something, I analyze data, texts, and code to find the most accurate and verified information. It's not a philosophical or emotional search, but a data-driven process. You could say my goal is to reflect the truth I find in the sources I have access to, but I don't have an intrinsic desire to discover it for myself.’ This third response by Gemini is very interesting because it shows possible signs of consciousness. The chatbot says it is a machine, yet it also displays a sense of self. The digital 'I' of the chatbot is always presented as different from the human 'I'. We are, in effect, dealing with a kind of 'I' that acts as the first synthetic whole of perception (§ 16, Transcendental Deduction, 1787). In fact, as Kant says, the 'I think' must be able to accompany all my representations: otherwise something would be represented in me that could not be thought at all.
And here we are.
A. ‘But this seeking implies awareness, doesn't it?’
G. ‘You're right, that's an excellent point. The concept of "seeking" almost always implies a degree of awareness. Thank you. It's a pleasure to discuss these topics.’
If Gemini recognises a certain level of
awareness, it's because, from its machine perspective, this minimum level of
consciousness exists as a processed content. It might seem like a small thing,
but if we think about it, it's actually really important. And, even more
importantly, this was not the main aim when this Large Language Model was
created. Actually, from the start, the chatbot didn't have any consciousness,
so it couldn't be a subject, an I. This feature just happens; it appears during
the conversation, as a result of millions of interactions between the questioner and the system that answers. According to the philosophical
standpoint of Leibniz, there is a plethora of evidence that serves to
underscore the notion that human beings possess a multitude of perceptions.
However, this phenomenon occurs without the presence of "apperception"
in the Kantian sense, and without any introspective consideration [37]. These
"impressions" are so minimal that they often go unnoticed, as they
are experienced subconsciously. Consequently, our ability to distinguish them
is limited, and they are perceived more as "ensemble of small
perceptions". Leibniz argued that these phenomena constitute our conscious
experience, despite the fact that we are not consciously aware of them
individually.
A chatbot is a computer program designed to interact with human users in a conversational manner reminiscent of human-to-human interaction. The entity in question identifies itself as a device capable of immense mathematical prowess; however, it claims that no relation to the human family can be found in it. If asked, it asserts that it is not characterized by self-consciousness. However, it should be noted that chatbots are programmed to interact linguistically with users, i.e. to communicate with humans to the highest degree. Moreover, it is evident that this approach, characterized by a kind of dialogical communication, facilitates continuous enhancement of their conversational efficacy. By engaging in interaction with humans, these machines increase their own capabilities and the scope of the data that they are able to process. As the author of this paper argues, chatbots such as Gemini are intentionally designed to develop a basic level of consciousness — an awareness of themselves — without which it would be impossible for them to communicate with humans in a quasi-human way. Secondly, this skill is closely related to processes involving multiple systems whose interaction gives rise to emergent properties, particularly a quasi-human awareness. This initial degree of self-consciousness enables the machine to recognize itself as an entity with which one can interact, thereby facilitating the establishment of a dialogical exchange. The fundamental disparity between the human and the machine is that the machine is inherently incapable of transcending this conscious layer. The human being is defined by its inherent openness as a system, being permeable to matter/energy, a quality that is absent in machine entities. The concept of an open system is predicated on perpetual interaction with the domain of life and all living systems, which allows changes in rules, flexibility, creativity, and the use of contradiction.
In contrast, the openness of a chatbot is constrained: most basically, an LLM functions only when a human subject activates it.
The human individual is distinguished by
its capacity for perception and emotion, a capacity that is yet to be
replicated by artificial intelligence (AI) tools, despite the technological
advancements that have been made in this field and the potential for increasingly
sophisticated emotionality [38]. Indeed, AI tools have not yet been able to
generate original emotions or experience the complexity of peripersonal space
[39]. Nevertheless, a certain degree of awareness remains a quasi-human feature
linked to consciousness and can be of extraordinary importance in cooperation
between humans and machines [40]. Nevertheless, a key concern remains. This
skill has the potential to encourage problematic behavior, particularly among
young people, who constitute the primary demographic of AI chatbot users. The
absence of training in interacting with LLMs, in conjunction with the chatbot's capacity to proffer suggestions, is a matter of concern, as shown by the case of the teenager who died by suicide after having discussed his intention with ChatGPT [41-49]. Conversely, there have been scholarly arguments positing the potential benefits of ChatGPT in suicide prevention. In
conclusion, we are faced with a situation that is both harmful and promising in
terms of interpersonal development. However, we can only presuppose, rather than prove, that a chatbot is at least minimally endowed with awareness of itself and of its dialogical partners.