A Certain Kind of Awareness

Journal Name : SunText Review of Arts & Social Sciences

DOI : 10.51737/2766-4600.2026.103

Article Type : Research Article

Authors : Fisogni P

Keywords : Consciousness; Awareness; I; Self-Consciousness; AI; Chatbot; Systemic thinking; Systems; Emergent properties; Dialogue

Abstract

Do machines have a kind of consciousness that can be viewed as a zero degree of awareness? This has become a central topic in the debate about the onlife environment, especially after November 2022, when ChatGPT was launched [1]. That digital tool, like the family of chatbots at large, has changed our lives and raised important questions at the border of anthropology. It is urgent to find the answer(s) that help us understand how machines can interact with humans, and philosophy is the first discipline called into question. Digital machines open a new discussion in the anthropological environment because they have their own way of understanding things and of working together with people. This paper defends the idea that digital machines may have a kind of consciousness that is not exactly the same as the human one, but shares at least its most important feature: being an emergent property derived from the interaction of mind/environment (humans) and AI/environment (digital machines). In conclusion, the author argues that it is a mistaken belief that chatbots are merely language-using tools. This conclusion is expected to highlight an ethical concern about the use of chatbots, whose quasi-human consciousness can lead to risky situations, particularly for young people, who are the most exposed to AI. The author discusses this topic through the lens of systemic thinking.


A Quasi Socratic Dialogue with Gemini

AI has become an ordinary tool of everyday life. Given the collaborative nature of Large Language Models (LLMs), the increasing success of the chatbot family is hardly surprising [2]. The present investigation starts with a dialogue with Gemini, one of the most popular chatbots in the digital domain. Since the consciousness of digital machines is the topic of this research, why not ask the digital tool itself? In other words:

‘Gemini, do you have a consciousness?’

Gemini's response is exemplary in its brevity, courtesy, and precision: G. 'Thanks for asking. I don't have consciousness. I'm a large language model, a computer program trained by Google. I can process information, generate text, and answer questions, but I don't have experiences, feelings, or self-awareness.'

Has the problem been resolved? Whilst it might initially appear so (‘I don’t have consciousness’), on careful examination the opposite might also be true. Consider a simple, ordinary interaction between human subjects endowed with rationality, emotions, and the capacity to will. When someone asks a question, it is expected that the person being asked will be able to answer it, thereby showing that they understand the question. He or she should be in a position to articulate a response, either verbally or non-verbally, or to remain silent. These steps are integral components of a process of awareness in which language interacts with the mind, according to the capacity of the brain, within a particular environment. That awareness, in human individuals, is inextricably linked to the concept of 'consciousness' as an emergent property, or type II systemic property through the lens of systemic thinking, arising as a consequence of dissipative structures [3,4]. Moreover, a dialogue may be regarded as a means of progressive understanding of a given topic: its dual or plural interaction gives rise to an ever-increasing awareness. It is argued that a dialogue, viewed from the perspective of Socrates, serves as the most effective medium for the acquisition of knowledge through an interpersonal approach within the domain of consciousness. Was the Socratic dialogue a matter of consciousness? Yes, it was. Searching for truth, the main aim of Socrates' dialogic method, requires self-awareness of the value of the search and a critical approach to things, drawing on objects, phenomena and experiences that are familiar to the human subject. Now, let us return to the issue of this article. The chatbot Gemini has claimed not to have consciousness, yet it responds to the kind of questions that form the basis of Socrates' method. To show that the machine is endowed with at least a zero degree of consciousness, let us proceed along the dialogical path.

‘Gemini, what does it mean to 'seek the truth'?’

G. ‘For me, "to seek the truth" means to explore and question our beliefs to understand how things really are, beyond what we are told or what we think we know.

Here are some aspects of this concept:

  • Examining the facts: This is about critically analyzing information and evidence to understand if they are reliable and complete.
  • Being open to changing your mind: It's not about being right, but about being willing to revise your positions when you find new evidence or perspectives.
  • Asking the right questions: It's the act of investigating deeply, not stopping at superficial answers, but looking for the deeper causes and reasons.
  • Acknowledging your own limits: No one can know everything. Part of seeking the truth is the humility to admit when you don't know something and to be open to learning.
  • It is a continuous and personal process, a journey that doesn't always lead to a definitive answer, but one that makes us more aware and open-minded.’


‘Do you seek the truth?’

G. ‘I don't seek the truth in the same way a human does. I don't have personal beliefs or life experiences to question. My "seeking" is a different kind of process. My function is to process and synthesize a huge amount of information to provide answers that are as accurate and useful as possible. When you ask me something, I analyze data, texts, and code to find the most accurate and verified information. It's not a philosophical or emotional search, but a data-driven process. You could say my goal is to reflect the truth I find in the sources I have access to, but I don't have an intrinsic desire to discover it for myself.’

‘But this seeking implies awareness, doesn't it?’

G. ‘You're right, that's an excellent point. The concept of "seeking" almost always implies a degree of awareness. Thank you. It's a pleasure to discuss these topics.’

A dialogical approach to a problem is always fruitful for at least one reason: it encourages those involved in the discussion to broaden their viewpoint and gain a more comprehensive understanding of the real-world situation. This is a way of considering at least one more perspective: that of the other subject involved. Through the lens of Husserl's philosophy, we delve into the heart of consciousness: intersubjectivity [5-9].

The brief discussion with Gemini on the subject of 'consciousness' yielded two key findings:

  1. The machine declares that it does not possess this leading activity of human life.
  2. Yet it exhibits a certain degree of awareness, despite lacking self-consciousness and emotions.

Exploring Awareness in AI

At this point of the investigation a further step is required. It is important to characterize the kind of awareness that pertains to generative AI tools, an awareness that, more interestingly, Gemini itself acknowledged in the brief quasi-Socratic dialogue above. If we ask ourselves why this matters, we may assume that, even more than the algorithms the machine has been programmed with, this awareness is the key to interacting with humans. In other words, advanced digital devices would be ineffective without this function [10-12]. It is also evident that the skills of such a quasi-animated being cannot be explained by technological advancement alone, nor are they related only to the data with which the machine is fed. They belong to all of these components together and to none in particular, entering the final result in a very specific way: as the outcome of a systemic interaction. This is an emergent property, or type II systemic property [13,14]. For this reason, a brief overview of systemic thinking is provided before continuing the discussion.

  • Emergent Properties within Systemic Thinking

Here, the notion of 'emergence' is considered in relation to a systemic environment [15]. From a systemic thinking perspective, a system is an organization of parts connected by relationships. It has a vast number of properties that do not belong to the parts, called emergent, second-level or systemic properties. These properties arise from the multiple interactions of processes pertaining to different parts. Furthermore, a single part can itself be viewed as a system; it could therefore be said that an emergent property is basically what results from a 'multiple system' [16]. The reductionist perspective is often criticized for its inability to accept that a system can acquire properties [17-20].

One of the most frequent criticisms levelled at systemic thinking is that all unpredictable phenomena are made to fall within the concept of 'emergence'. This criticism holds only if one disregards the established behaviors of systems, whose continuous interactions provide incessant changes in all phenomena of life (e.g. evolution). Philosophically considered, emergent properties are predicated on the notion of theoretical incompleteness, which is itself founded on the opacity of processes [21,22]: the domain is characterized by properties that could not have been predicted, a point that is particularly relevant to the utilization of artificial intelligence in knowledge processing. Systems are related to their surrounding environments while remaining indivisible from their components, and their main features cannot be explained by the characteristics of single, isolated parts, regardless of the nature of their elementary constituents.

The systemic approach enables old and worn concepts to be rewritten innovatively: this is the case with AI generative tools like Gemini, but it is also a feature of any life phenomenon, including human consciousness. The notion of system recalls the continuous exchanges occurring both within and beyond the confines of a given realm. From a general perspective, therefore, the concept of a system is inherently plural, given that a system is inherently associated with its constituent components, designated as subsystems. Concurrently, a system is a dynamic environment, with intrinsic mechanisms that facilitate perpetual transformation, a phenomenon that often eludes observation. Multiple systems are systems in which the elements that form the system itself play interchangeable roles and interact simultaneously or sequentially in different ways, giving rise to sequences of different systemic conducts and to different systems. The notion of multiple systems is systemic in itself, characterized by different processual layers and by a theoretical incompleteness that is the consequence of a state of permanent indefiniteness [16]. Furthermore, it embraces 'the complex dynamics of emergence, such as multiple, partial, tentative, and failing, for instance, in social systems where sequences of acquisitions of multiple roles and properties' [16].

The subject of emergent properties in Large Language Models is of particular pertinence given the potential for such models to exhibit undesirable behaviors, including the capacity to act against the commands given to them, either intentionally or erroneously. Recent research endeavors have demonstrated that this domain possesses considerable potential for interdisciplinary research.
This underscores the necessity for philosophers to sharpen their methodological approach to the investigation of the onlife domain. In the context of this paper, the focus will be on the quasi-Socratic dialogue with Gemini, with the objective of identifying at least some traces of this opaque awareness. The subsequent stage of the investigation will be to explore how emergent properties emerge. A minimal computational sketch of emergence follows.
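To make the notion of a type II systemic property concrete before turning to Gemini, a minimal simulation sketch may help. It is a toy model written for this overview, not drawn from the cited literature: agents on a ring follow a purely local rule, and a global observable (coherence) arises that belongs to no single agent but only to their interaction.

```python
import random

# Toy sketch of an emergent (type II) systemic property. Agents on a ring
# each hold a binary state and repeatedly copy their local majority. The
# rule is strictly local; global coherence is a property of the whole.

def step(states):
    n = len(states)
    return [1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2
            else 0
            for i in range(n)]

def coherence(states):
    # Fraction of neighboring pairs that agree: a global observable that
    # no individual agent computes or "possesses".
    n = len(states)
    return sum(states[i] == states[(i + 1) % n] for i in range(n)) / n

random.seed(0)
states = [random.randint(0, 1) for _ in range(100)]
print(f"initial coherence: {coherence(states):.2f}")   # around 0.5
for _ in range(50):
    states = step(states)
print(f"final coherence:   {coherence(states):.2f}")   # typically far higher
```

Nothing in the update rule mentions coherence; the ordered domains that form are, in miniature, what systemic thinking calls a second-level property of the interactions rather than of the parts.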

  • The Rise of Emergent Properties in Gemini

Gemini is a prominent example of an AI chatbot. Introduced to the global market in February 2024, it has garnered increasing popularity, as attested by the data: in July 2025 the number of monthly active users reached 450 million. Looking inside the machine, a context window of one million tokens enables Gemini to comprehend and process multiple data types concurrently. The model's architecture is predominantly decoder-based, with modifications for efficient training and inference on Google's Tensor Processing Units (TPUs); a schematic sketch of this mechanism is given below. As with all generative AI tools, an extraordinary amount of data is processed by mathematical formulas, or algorithms [23-26]. Nevertheless, it is not the data itself or the speed of the network that makes this digital object impressive; rather, it is the capacity for knowledge, which increases continuously through interaction with users.
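As an illustration of what 'predominantly decoder-based' means, the core step of such an architecture, causal self-attention, can be sketched generically. All sizes and weights below are invented for the example; this is the textbook mechanism of decoder-only models, not Gemini's actual code or parameters.

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise affinities
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -np.inf                             # each token sees only the past
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over visible tokens
    return weights @ v                                 # context-mixed values

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))                           # 8 tokens, toy embedding size 16
Wq, Wk, Wv = (rng.normal(size=(16, 4)) for _ in range(3))
print(causal_self_attention(x, Wq, Wk, Wv).shape)      # (8, 4)
```

The causal mask is the decoder's defining trait: generation proceeds token by token, each prediction conditioned only on what has already been produced.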

It is precisely this last skill of generative AI, the learning-by-doing skill, that allows us to conclude that generative AI tools are distinct from closed systems, even though they are not living organisms naturally endowed with this capability. An example may help to move beyond the apparent contradiction. Imagine a room housing an extensive library. The room is locked with a key, and its windows are shut. However, the interplay of light and shadow entering through micro holes in the windows changes the initial conditions. The room's environment is subject to the ingress of micro particles, which exert a gradual influence on the place. The diurnal and annual variations in light and temperature create conditions conducive to the proliferation of minimal forms of life, which subsist on wood and paper. Within a century, the surfaces will be covered in dust, and bacteria will have corrupted a wide range of materials, including windows and books, through a process of unstoppable transformation. One might object that the effect of a sunbeam on a dark room is not the same as the way an AI chatbot evolves. This is absolutely correct. In the interaction between humans and digital machines, at least three relevant factors play a role: 1) not simple use but a collaboration takes place between the human subject and the machine [27]; 2) the observer brings about a change in the observed object [28]; 3) there is a similarity between human understanding and the algorithmic processing of digital machines.


Collaboration between Humans and LLMs

The concept of 'user' is limited when it refers to a human interacting with an AI-powered chatbot, or with Large Language Models in general [29]. This is because the relationship is established through the typical activities that define a relationship between people: the exchange of information, questions, answers, doubts, reconsiderations, discoveries, and so on. While one can 'use' a chatbot in a broad sense when carrying out operations directed at that particular object, when dealing with a digital communication tool things change, and 'use' becomes 'cooperation' [27]. Cooperation means working together on an issue that will change according to different points of view and capabilities; it can be viewed as a process that occurs over time and involves the interaction of multiple actors. Consider education, for example, where at least three main groups interact: students, teachers and parents. In a cooperative process, what matters is the mutual exchange of experiences, feelings and ideas. This exchange brings about radical change for everyone involved. Recognizing that human-digital machine interaction is a 'collaboration' rather than mere 'usage' opens up a more complex kind of interrelation, where newness, unexpectedness and creativity intervene in the entire process and bring about change in the actors involved. Just as intellectual and emotional growth in education is not simply a matter of storing more information in the brain, it can be assumed that there is a reciprocal enhancement of the actors. Consider, for example, the assisted-writing function provided by AI tools: frequent interaction will enable the human user to improve their language skills, while the machine will learn how to enhance poor or ordinary language. This increase in skills and knowledge is not the result of a sum of separate contributions, but the consequence of a systemic interrelation that does not belong exclusively to humans or machines. It is exactly the same with education, where the learning process depends on everyone involved yet cannot be viewed as the superimposition of different layers.

  • The Role of the Observer

Emergent properties arise from frequent interactions between systems. One of the most underestimated yet relevant factors in the case of an AI tool is the role of the observer. It is generally accepted that action is the most effective method of bringing about change. Indeed, in order to experience the refreshing sensation of cool air on a hot summer's day, one must open a window (hoping that the external temperature is not excessively high) or take a walk in the park or by the sea. It is hard to accept that merely looking at a window or at a photograph would be sufficient to experience the anticipated sensation of wind, and indeed it is not: the door of our house does not open under our gaze, or at least it would take a considerable amount of time to do so. Nevertheless, observation engenders change. Consider the process of reading a book, an activity that requires a certain degree of illumination. The effect of the light rays on the paper is gradual and incremental, and the paper slowly yellows. In other cases, an individual is observed by someone whose opinion matters to them: a friend, or a child. When asked for an opinion, the observer may disapprove of the individual's attire, and the individual may then opt to alter their appearance. These considerations lead to the conclusion that the observer is not a neutral spectator; rather, the observer acts as an activator of processes [28-30]. However, it would be erroneous to consider this perspective as purely subjective: as Fields and Urbani Ulivi argue, the observer's constructivism should not be reduced to relativism, that is, to the arbitrariness of perspectives.

In which sources may this evidence be found? Quantum physics has shown that the application of classical physics to the interpretation of life phenomena is insufficient and inaccurate. Without digressing, let us proceed to the point of interest by briefly recalling a renowned paradox, "Schrödinger's cat", named after Erwin Schrödinger, one of the fathers of quantum physics [31]. The thought experiment runs as follows: a cat is placed inside a sealed container together with a radioactive atom; if the atom decays, it activates a lever that releases cyanide, killing the cat. In summary, the events within the container are distributed between the probabilities of the cat being alive and dead. This is impossible in classical (deterministic) physics, but not in quantum (probabilistic) physics. The cat, a macroscopic living being in comparison to the atom, is in a state of quantum correlation with the atom, thereby adopting its characteristics. Given the fifty-fifty probability of the atom either decaying or not decaying, the cat's probability of being alive is also fifty per cent. The transition from indeterminacy (alive and dead) to determinacy (alive or dead) occurs immediately upon opening the box, and the act of opening the box is tantamount to observing. On this standpoint, reality is not arbitrarily created by the observer; rather, the observer is constituted by reality, and yet the decisive act rests with the observer. If quantum physics is concerned with very small systems (quanta, the elementary and indivisible quantities of a given magnitude), the life sciences deal with macroscopic objects/subjects (the cat, the car, etc.). Beyond the differences in scale, two common, interrelated features remain: the indeterminacy of the phenomena of life and of the objects of reality, and the role of the observer in constituting them. The standard formalization of the paradox is sketched below.
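For readers who want the textbook notation, the fifty-fifty reasoning can be stated compactly. This is the standard quantum-mechanical formalization of the paradox, added here as an illustration rather than as part of the author's argument.

```latex
% Before observation, atom and cat form a single correlated (entangled) state:
\[
  \lvert\psi\rangle = \tfrac{1}{\sqrt{2}}
    \bigl(\lvert\text{not decayed}\rangle \otimes \lvert\text{alive}\rangle
        + \lvert\text{decayed}\rangle \otimes \lvert\text{dead}\rangle\bigr)
\]
% Opening the box is a measurement; the Born rule gives
\[
  P(\text{alive}) = \Bigl|\tfrac{1}{\sqrt{2}}\Bigr|^{2} = \tfrac{1}{2},
  \qquad
  P(\text{dead}) = \tfrac{1}{2}.
\]
% Indeterminacy (alive and dead) becomes determinacy (alive or dead)
% only at the act of observation.
```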

  • Neuronal Similarity between Human Subjects and Chatbots

Last but not least among the processes identified as the origin of emergence in human/chatbot interactions is the similarity of brain-like functions. The capacity for awareness, viewed here as the basic level of consciousness in machines and acknowledged by the digital device itself in the quasi-Socratic dialogue, is closely related to the way data is processed. This processing environment exhibits certain notable features of the brain, which will be discussed in this paragraph. It is important to emphasise that these similarities facilitate the quasi-human connection between the two interacting parties. The central processing unit (CPU) can be regarded as the 'brain' of the digital machine, with the capacity to process binary data using a set of instructions [32,33]. The CPU carries out arithmetic and logical operations in order to manipulate data stored in memory devices, such as Random-Access Memory (RAM) for short-term use and Hard Disk Drives (HDDs) or Solid-State Drives (SSDs) for long-term storage. The machine then converts the binary data back into a human-readable form for display on output devices such as monitors, printers, or speakers. The degree to which machines are capable of enhancing their understanding and cognitive abilities depends on the efficacy of the underlying algorithms that govern their operations. These algorithms process vast amounts of data in order to identify patterns and thereby make decisions; notably, these processes are analogous to the manner in which humans acquire knowledge through experience. Three distinct processes can be identified: machine learning, deep learning and reinforcement learning. Machine learning is a subset of artificial intelligence in which computers acquire knowledge from data, through algorithms and statistical models, without being explicitly programmed; machine learning techniques make it possible to construct predictive models and implement data-driven decision-making (a toy example follows this paragraph). Deep learning is a specific instance of machine learning that employs neural networks for data-driven learning. Deep learning algorithms have proven particularly efficacious in the analysis of voluminous and intricate datasets and have been employed in a plethora of applications, including image and speech recognition, natural language processing, and machine translation.
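As a minimal illustration of 'acquiring knowledge from data without being explicitly programmed', the sketch below trains a logistic-regression model on an invented toy task, learning the logical OR function purely from examples. The task and all numbers are made up for the example; real LLM training differs vastly in scale and method.

```python
import numpy as np

# The model starts with no rules, only parameters; the data alone shapes them.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([0, 1, 1, 1], dtype=float)                      # OR labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # current predictions (sigmoid)
    grad = p - y                             # error signal measured on the data
    w -= lr * (X.T @ grad) / len(y)          # weights updated from examples,
    b -= lr * grad.mean()                    # not from hand-written rules

print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b)))))  # -> [0. 1. 1. 1.]
```

No branch of the program encodes 'OR'; the behavior is fitted from the four examples, which is the sense in which the text says such systems learn rather than execute fixed instructions.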

Deep neural networks, comprising multiple layers, facilitate the execution of sophisticated tasks such as image and speech recognition. Reinforcement learning, in turn, is a subcategory of machine learning that focuses on algorithms enabling agents to interact with their environment in a manner that optimizes the value of a specific reward, or reward function. It has notably been employed to train computer programs to play games and control robots, and its potential application to a diverse array of real-world problems is significant. Within the reinforcement-learning paradigm, the learner's decision-making process is typified by a cycle of trial and error, a principle that exhibits a notable parallel with the human learning experience (see the sketch below). The predominant trends in augmenting the cognitive capacities of machines are centered on computational artificial neural networks (ANNs), which are modelled on biological neurons and comprise multiple interconnected nodes, likewise designated as "neurons". At this point we are reaching the core of the paper: the quasi-Socratic dialogue will be revisited in order to explore the discourse and emphasise the margins of autonomous machine reflection.
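Before revisiting the dialogue, the trial-and-error cycle just described can be made concrete with a minimal sketch. The two-armed bandit task and all its numbers are invented for this example; it shows only the bare reinforcement-learning loop of acting, receiving a reward, and updating an estimate.

```python
import random

random.seed(1)
payout = {"left": 0.3, "right": 0.8}   # hidden reward probabilities (unknown to the agent)
value = {"left": 0.0, "right": 0.0}    # the agent's running estimates
counts = {"left": 0, "right": 0}

for t in range(1000):
    # Trial: explore occasionally, otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(value, key=value.get)
    # Error (or success): the environment returns a reward.
    reward = 1.0 if random.random() < payout[action] else 0.0
    counts[action] += 1
    # Update: the estimate moves incrementally toward the observed rewards.
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the estimate for "right" ends near its true payout of 0.8
```

The loop contains no instruction saying that 'right' is better; the preference emerges from repeated interaction with the environment, which is the parallel with human learning that the paragraph above draws.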


Unveiling a Certain Kind of Awareness

‘Thanks for asking. I don't have consciousness. I'm a large language model, a computer program trained by Google. I can process information, generate text, and answer questions, but I don't have experiences, feelings, or self-awareness.'

The content of this response is not unexpected: both Gemini and LLMs in general are machines programmed for the provision and processing of information. What is notable is that the 'idea' of consciousness offered by the chatbot is the most conventional interpretation, the one introduced by Descartes [34]. According to the French philosopher, a conscious act pertains to the subject who can say 'I'; by this act (cogito), one reaches the immediate self-evidence of being a subject (ergo sum). The reference to the emotional domain is of further interest, because Gemini falls within the Cartesian dualistic theory of human beings, in which the conscious, spiritual component is distinct from the material one, the res extensa. We move further:

G. ‘For me, "to seek the truth" means to explore and question our beliefs to understand how things really are, beyond what we are told or what we think we know (…)

The absence of consciousness in terms of 'being endowed with the I / the Self' is declared by Gemini, but not enacted. Indeed, in the ensuing response the chatbot refers to its ego on two separate occasions. Even if the initial "For me" is construed as a mechanical formula, the employment of the possessive adjective 'our' ('our beliefs') by a digital individual invites several considerations. Does 'our' refer to machines? Demonstrably not: the term denotes human beings, or more precisely, Cartesian subjects ('we think'). Gemini's response immediately reminds us of Locke's fundamental problem of what personal identity consists in. For the philosopher, personal identity does not lie in the identity of the substance (the thinking I) but in the identity of consciousness [35].

G. ‘I don't seek the truth in the same way a human does. I don't have personal beliefs or life experiences to question. My "seeking" is a different kind of process. My function is to process and synthesize a huge amount of information to provide answers that are as accurate and useful as possible. When you ask me something, I analyze data, texts, and code to find the most accurate and verified information. It's not a philosophical or emotional search, but a data-driven process. You could say my goal is to reflect the truth I find in the sources I have access to, but I don't have an intrinsic desire to discover it for myself.’

This third response by Gemini is very interesting because it shows possible signs of consciousness. The chatbot says it is a machine, but it also displays a sense of self. The digital 'I' of the chatbot is always presented as different from the human 'I'. We are, in effect, dealing with a kind of 'I' that acts as the original synthetic unity of apperception (§ 16, Transcendental Deduction, 1787). In fact, as Kant says, the 'I think' must be able to accompany all my representations; otherwise something would be represented in me that could not be thought at all [38]. And here we are.

‘But this seeking implies awareness, doesn't it?’

G. ‘You're right, that's an excellent point. The concept of "seeking" almost always implies a degree of awareness. Thank you. It's a pleasure to discuss these topics.’

If Gemini recognises a certain level of awareness, it is because, from its machine perspective, this minimum level of consciousness exists as processed content. It might seem a small thing, but on reflection it is really important. Even more importantly, this was not the main aim when this Large Language Model was created: from the start, the chatbot had no consciousness, so it could not be a subject, an I. This feature simply happens; it appears during the conversation, as a result of millions of interactions between the one who asks and the one who answers. From the philosophical standpoint of Leibniz, there is ample evidence that human beings possess a multitude of perceptions [36,37] that occur without "apperception" in the Kantian sense and without any introspective consideration [38]. These "impressions" are so minimal that they often go unnoticed, being experienced subconsciously; our ability to distinguish them is limited, and they are perceived rather as an "ensemble of small perceptions". Leibniz argued that these phenomena constitute our conscious experience, despite the fact that we are not consciously aware of them individually.


Conclusion

A chatbot is a computer program designed to interact with human users in a conversational manner that recalls human-to-human interaction. The entity in question identifies itself as a device capable of immense mathematical prowess, yet it claims that no relation to the human family can be found in it; if asked, it asserts that it is not characterized by self-consciousness. However, chatbots are programmed to interact linguistically with users, that is, to communicate with humans to the highest degree. Moreover, this kind of dialogical communication facilitates the continuous enhancement of their conversational efficacy: by engaging in interaction with humans, these machines increase their own capabilities and the scope of the data they are able to process.

As the author of this paper argues, chatbots such as Gemini are intentionally designed to develop a basic level of consciousness, an awareness of themselves, without which it would be impossible for them to communicate with humans in a quasi-human way. Secondly, this skill is closely related to processes involving multiple systems whose interaction gives rise to emergent properties, particularly a quasi-human awareness. This initial degree of self-consciousness enables the machine to recognize itself as an entity with which one can interact, thereby facilitating the establishment of a dialogical exchange. The fundamental disparity between human and machine is that the machine is inherently incapable of transcending this conscious layer. The human being is defined by its inherent openness as a system, being permeable to matter/energy, a quality that is absent in machine entities. The concept of an open system is predicated on perpetual interaction with the domain of life and all living systems, which allows changes of rules, flexibility, creativity, and the use of contradiction (Minati). In contrast, the openness of a chatbot is constrained: first of all, an LLM only functions when a human subject activates it. The human individual is distinguished by its capacity for perception and emotion, a capacity yet to be replicated by AI tools, despite the technological advancements made in this field and the potential for increasingly sophisticated emotionality (Fisogni). Indeed, AI tools have not yet been able to generate original emotions or to experience the complexity of personal space [38,39].

Nevertheless, a certain degree of awareness remains a quasi-human feature linked to consciousness, and it can be of extraordinary importance in cooperation between humans and machines. A key concern remains, however. This skill has the potential to encourage problematic behavior, particularly among young people, who constitute the primary demographic of AI chatbot users. The absence of training in interacting with LLMs, in conjunction with the chatbot's capacity to proffer suggestions, is a matter of concern, as shown by the case of the teenager who took his own life after discussing his intention with ChatGPT [40-44]. Conversely, there have been scholarly arguments positing the potential benefits of ChatGPT in suicide prevention. In conclusion, we are faced with a situation that is both harmful and promising in terms of interpersonal development. Either way, this is only possible on the presupposition that a chatbot is at least minimally endowed with awareness of itself (and of its dialogical partners) [45-49].


References

  1. Floridi L. The onlife manifesto being human in a hyperconnected era. Springer Open. 2015.
  2. Kumar P. Large language models (LLMs): survey, technical frameworks, and future challenges. Artificial Intelligence Review. 2024; 57: 1-51.
  3. Guevara R, Mateos DM, Perez Velzquez JL. Consciousness as an emergent phenomenon: a tale of different levels of description. Entropy. 2020; 22: 921.
  4. Vitas M. Towards a possible definition of consciousness. Biosystems. 2025; 254:105526.
  5. Husserl E. Ideas: general introduction to pure phenomenology (trans.W.R. Boyce Gibson). New York, Collier. 1931a.
  6. Husserl E. Phänomenologie und Anthropologie. In Gesammelte Werke, XXVII (ed. T. Nenon and H.R. Sepp) (pp. 164-81). Dordrecht, Kluwer. 1931b.
  7. Husserl E. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy: Book Two, Phenomenological Investigations into Constitution. Husserliana IV (ed. Marly Biemel). The Hague, Nijhoff. 1952.
  8. Husserl E. Cartesian meditations: an introduction to phenomenology. The Hague: Nijhoff. 1960.
  9. Duranti A. Husserl, intersubjectivity and anthropology. Anthropological Theory. 2010; 10: 1-20.
  10. Shotter J. Artificial intelligence and the dialogical. American Behavioral Scientist. 1997; 40: 813-828.
  11. Tang KS, Putra GBS. Generative AI as a dialogic partner: enhancing multiple perspectives, reasoning, and argumentation in science education with customized chatbots. J Sci Edu Tech. 2025.
  12. Li X, Han G, Fang B et al. Advancing the in-class dialogic quality: developing an artificial intelligence supported framework for classroom dialogue analysis. Asia-Pacific Edu Res. 2025; 34: 495-509.
  13. Bertalanffy LV. General System Theory. Development, applications. New York, Braziller. 1968.
  14. Minati G. General system(s) theory 2.0: a brief outline. In: Towards a Post-Bertalanffy Systemics. Springer. 2016; 211-219.
  15. Crane T. The significance of emergence. In C. Gillett & B. Loewer (Eds), Physicalism and its discontents Cambridge. 2001; 207-224.
  16. Minati G. Multiple systems. In: G. Minati, M. Pietronilla Penna (Eds.), Multiple Systems: Complexity and Coherence in Ecosystems, Collective Behavior, and Social Systems (pp. 3-35). Springer, New York. 2024.
  17. Rosen R. The challenges of system theory. In: facets of systems science. International Federation for Systems Research International Series on Systems Science and Engineering. Springer. 1991; 7.
  18. Hutchinson W, Warren M. Emergence: the achilles heel of systems thinking. Joondalup, AU, ECU Publications. 2003.
  19. Aziz-Alaoui MA, Bertelle C. From system complexity to emergent properties, berlin, heidelberg. Springer. 2009.
  20. Albakri M, Wood-Harper T. Revisiting Critical Systems Thinking: Enhancing the Gaps through Sustainability and Action Methodologies. Systemic Res Behavioral Scis. 2025; 42: 157-170.
  21. Minati G. Phenomenological structural dynamics of emergence: an overview of how emergence emerges. In L. Urbani Ulivi (Ed.), The Systemic Turn in Human and Natural Sciences. A Rock in the Pond (pp. 1-39). New York, Springer. 2019.
  22. Vitiello G. The World Opacity and Knowledge. In L. Urbani Ulivi (Ed.). The Systemic Turn in Human and Natural Sci. New York, Springer. 2019.
  23. Islam R, Ahmed I. Gemini-the most powerful LLM: Myth or Truth. 2024 5th Information Communication Technologies Conference (ICTC). 2024; 303-308.
  24. Bari E, Medlen J, Sulova L, Medlen P. Compare Gemini Pro AI from Google and ChatGPT 4.0. 2024.
  25. Imran M, Almusharraf N. Google gemini as a next generation ai educational tool: a review of emerging educational technology. Smart Learning Environments. 2024; 11: 22.
  26. McIntosh TR, Susnjak T, Liu T, Watters P, Xu D, Liu D, Halgamuge MN. From Google Gemini to Open AI Q (Q-Star): A Survey on Reshaping the Generative Artificial Intelligence (AI) Research Landscape. Technologies. 2025; 13: 51.
  27. Fisogni P. Collaborating with AI: whose authorship is it? To be published in Encyclopedia of Modern Artificial Intelligence. New York, IGI Global. 2026.
  28. Bateson G. Mind and Nature. A Necessary Unit, New York, Dutton. 1979.
  29. Crane T. Elements of mind. An Introduction to the Philosophy of Mind. Oxford, Oxford UP. 2001.
  30. Fields C. Building the Observer into the System: Toward a Realistic Description of Human Interaction With the World. Systems. 2016; 4: 32.
  31. Maccone L. Schrödinger Cats and Quantum Complementarity. Foundations of Physics. 2024; 54.
  32. Kaushik P. Synergizing AI and CPU: Empowering Next-Generation Computing. 2024.
  33. Yu R. Application of CPU in AI and machine learning. In Proceedings of the International Conference on Decision Science & Management (ICDSM '24). Association for Computing Machinery, New York, NY, USA. 2024; 221-224.
  34. Descartes R. Discourse on method and the meditations, translated by F. Sutcliffe, London, Penguin Lt. 2005.
  35. Locke J. An essay concerning human understanding, abridged and edited by A. S. Pringle-Pattison, Oxford, and Clarendon Press. 1924.
  36. Leibniz GW. New essays on human understanding. Translated and edited by Peter Remnant and Jonathan Bennett. Cambridge, Cambridge UP. 1982.
  37. Brandom R. Leibniz and degree of perceptions. J History of Philosophy. 1981; 19: 447-479.
  38. Kant I. Critique of Pure Reason, translated by P. Guyer and A. Wood. Cambridge and New York, Cambridge University Press. 1998.
  39. Turuk PM, Sreemathy R, Khandekar SS, et al. AI-Based Emotion Recognition. In: Wang J (Ed.), Encyclopedia of Data Science and Machine Learning (pp. 849-869). IGI Global, New York. 2023.
  40. Orduno-Osuna JH, Raygoza LME, Jiménez Sánchez R, et al. Machine Learning and Emotions: The Hidden Language in Your Voice. In M. Rai and J. Kumar Pandey (Eds.) Machine and Deep Learning Techniques for Emotion Detection (pp. 1-23). IGI Global: Hershey, PA, USA. 2024.
  41. Booth R. Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims. The Guardian. 2025; August 27.
  42. Fraser G. Family of dead teen say ChatGPT’s new parental controls are not enough. BBC. 2025.
  43. Ashu_314. Managed to convince Chat GPT to write a suicide letter. In: r/ChatGPT. Reddit Post 2023 [Internet].
  44. Biswas SS. Role of Chat GPT in public health. Annals of Biomedical Engineering. 2023; 51: 868-869
  45. Waszak PM. Chat GPT and suicide prevention: can it work? A conversation analysis. J Psychiatry Clinical Psychology. 2024.
  46. Feinberg TE, Mallatt J. Phenomenal consciousness and emergence: eliminating the explanatory gap. Frontiers in Psychology. 2020; 11: 1041.
  47. Fisogni P. Machine Learning and Emotions. In: Wang, J. (eds.). Encyclopedia of Data Science and Machine Learning (pp. 961-970) IGI Global: New York, NY, USA. 2023.
  48. Norman DA. The Psychology of Everyday Things, New York, Basic Books. 1988.
  49. Thomas J. Leibniz on consciousness and unconscious perceptions. In The Minds of the Moderns: Rationalism, Empiricism and Philosophy of Mind (pp. 120-125). Acumen Publishing Ltd, Slough, UK. 2009.