Sound Research WIKINDX


Bie, A., & Grimshaw-Aagaard, M. (2026). What is it like to be an AI? In S. O. Irwin, N. Liberati & E. Garrett (Eds.), Palgrave Handbook of Phenomenology and Media. Switzerland: Springer Nature. 
Added by: Mark Grimshaw-Aagaard (10/31/25, 8:53 AM)   Last edited by: Mark Grimshaw-Aagaard (10/31/25, 8:54 AM)
Resource type: Book Chapter
Language: English
Peer reviewed
BibTeX citation key: Bie2026
Categories: Embodied Cognition
Creators: Bie, Garrett, Grimshaw-Aagaard, Irwin, Liberati
Publisher: Springer Nature (Switzerland)
Collection: Palgrave Handbook of Phenomenology and Media
Abstract
"In a polemical and speculative essay, we update Thomas Nagel’s classic question (What is it like to be a bat?) for our Artificial Intelligence [AI] age. We explore the phenomenological ontology (the nature of experiential being) of AI, speculating on what a conscious AI’s conception of self and non-self might be while questioning current assumptions and marketing hype. Everything that is AI is built on an anthropocentric foundation; the foundation of its ontology derives from generalised human knowledge and our own phenomenology and is filtered further through human-engineered algorithms. With this in mind, we must assume that any potential phenomenological ontology of an AI would be restricted to that of humans. Furthermore, following Nagel’s argument that we cannot possibly know what it is like to be a bat, restricted as we are by an anthropocentric embodiment and phenomenology, we must also conclude that an AI, likewise, cannot possibly know what it is like to be a bat, or indeed any other species: all claims to the contrary must be viewed as wishful anthropomorphism. In order to present our arguments, our essay will cover a range of topics all focussed on the question What is it like to be an AI? These include: consciousness; embodiment; presence; creativity; anthropomorphism and anthropodenial; bias; and artificial otherness. We do not deny the possibility of an AI one day having a conception of self with a phenomenological ontology that is different to that of a human being. Rather, our purpose is to point out that, should this occur, we ourselves will be incapable of knowing, at least fully, what it is like to be an AI. But would an AI, with an anthropocentric ontology, even an AI-centric ontology, be capable of knowing what it is like to be a human? As AI increasingly pervades and directs our lives, what might be the ethical implications of allowing an artificial other, with a potentially unknowable phenomenological ontology, this level of power?"