Full Symposium Programme

The “After AI” Symposium offers a range of sessions that provide the opportunity for an interdisciplinary, holistic discussion on Artificial Intelligence.

Session 1

Lessons from Wittgenstein and the Unbearable Lightness of “Artificial Intelligence”

by Paul Wong (he/him)


In Philosophical Investigations, Wittgenstein used “games” to illustrate the idea of family resemblance. We identify many activities as “games”, but understanding game playing does not require the comprehension and mastery of an exhaustive and exclusive list of features for all games. Like “games”, we suggest that it is limiting to characterise “AI” by an exhaustive and exclusive list of features for all AI systems (or services). An alternative is to consider the concept and practice of “AI” as a cluster of evolving features based on family resemblance. The upshot is that, instead of assuming an “after AI” future, it is also helpful to consider and imagine an evolving AI future. This re-framing allows us to ask different questions and explore other possibilities (and impossibilities).


The Ecological in AI Fictions and Futures

by Joanna Boehnert (she/her) and Alistair Alexander (he/him)


AI can be seen as the ultimate and most extreme manifestation of extractive technological accelerationism. Key figures in AI claim that its astronomical power demands will require a future energy “breakthrough”, but large-scale energy from nuclear fusion is not likely to materialise in the near future. The dynamics of industry hype, speculation, and financialisation exist independently of the material requirements and ecological consequences of AI technologies. AI fictions exist in a parallel universe beyond basic material concerns such as resource availability and climate change. AI futures will be determined by their material requirements and ecological context to a degree that remains widely unacknowledged. In this presentation we offer strategies to assess AI’s ecological and social impact, both positive and negative, beyond the current hype cycle, using ecological frameworks to methodically assess the true potential of AI (or lack thereof) in a context of climate and ecological crises and our hoped-for capacity for planetary regeneration.


We use the concept of “after” as part of a long series of post-bubble and post-collapse states in which the aspirations and promises of AI hit the boundaries of the resources required to manufacture AI, the energy needed to power it, and the ecological consequences of these processes, including the associated acceleration of GHG emissions and climate change impacts. Intersecting vectors of ecological, social, and economic crises will all be accelerated by incautious approaches to AI in which supposed “unintended consequences” are inadequately investigated and accounted for. In this context, the speculation that widespread AGI can exist amid the polycrisis is pure fiction. That the tech world has made this narrative as dominant as it is reveals a deep denial and dismissal of the ecological context in tech discourses, a consequence of ecological illiteracy. In response to this problem, this presentation will present an Innovation Landscape Matrix that we will use to interrogate the ecological implications of AI. Current AI futures rest on the ecologically incoherent assumptions of their techno-optimistic proponents. AI infrastructure needs to be redirected and/or scaled back to enable net zero transitions – not accelerated to produce even more extreme ecological impacts. With this tool we explore how AI might contribute to ecological regeneration instead of accelerating ecological harms.


‘Is it possible to learn how to see the softness of wood shavings?’

by Peter Marsh (he/him)


Current research into embodied metaphor appreciation by AI provides two-dimensional outputs based on two-dimensional, unfiltered data input: algorithmic comparisons between myriad texts and images. Such pathways of vokenization may lead to an understanding of what a relationship is, but they could never relate, and will always fall short.


The presentation understands embodiment as situated, with knowledge intrinsically experienced through the physical contexts we act in, connecting our embodied engagement with our spatial and cultural surroundings to language.


Providing an understanding, through a fictional narrative, of how human beings develop multimodal concepts to express our existence, we propose that similar opportunities have been denied to AI, and that the true benefits of any language come only through creative freedom and conversation.


Conceptual thinking is not binary but experiential and multimodal, and this presentation asks, in a continuation of Merleau-Ponty’s discussions on embodiment, ‘Is it possible to learn how to see the softness of wood shavings?’


The Future of AI is Ancestral

by Mateus van Stralen (he/him)


The book "Ancestral Future" (Futuro Ancestral) by Ailton Krenak invites us to reflect on the relationship between humanity and nature, emphasizing the importance of reconnecting with ancestral wisdom to address contemporary challenges. Krenak begins by recalling a scene of indigenous boys paddling a canoe, reflecting on their proximity to their ancestors' way of life. One of them, verbalizing the experience, said: "Our parents say that we are nearing what it once was." The boys envisioned a future grounded in their territory—not defined by a map, but by the rivers, their relationship with nature, and each other. In this context, time is not linear but cyclical and interconnected. To look at the future is to look at the past, to the beings—the rivers, the mountains—that were already here and will continue to be.


In this presentation, I aim to bridge Krenak’s philosophical insights with artificial intelligence (AI), exploring the future of this technology as a tool to help us visualize and create narratives and dreams of preferable futures. Instead of fast-forwarding our current status quo, we can rewind it to consider how our ancestors connected with nature. Today, in Western societies, we have great difficulty dreaming of other ways of being in the world.


Using AI, we can create new narratives and dreams rooted in diverse territories and cultural memories. We can simulate different ways of relating to rivers that integrate traditional knowledge. We can envision transforming cities into forests. Ultimately, we can create a multiplicity of narratives connected by an ancestral vision of body and nature, envisioning the degrowth of technology, the economy, and even the disappearance of AI. This perspective is linked to second-order cybernetics, providing an epistemological framework that deepens our understanding of our integration within the systems we cohabit.


Shaping AI Futures. Collective AI Towards a Solarpunk Tomorrow.

by Anca Serbanescu (she/her)


The current confusion regarding AI systems' meaning, potential, and limits (Serbanescu, 2024) fuels an existing fear of a dystopian portrayal of AI taking over humanity (Bostrom, 2017; 2014). AI's fast growth poses substantial societal issues, including controlling dangers, adhering to ethical principles, and fulfilling governmental obligations. Several factors contribute to the complexity of AI, including a) the topic's interdisciplinarity, b) the rapid pace of technological progress, c) a lack of simplified knowledge of the topic available to all, and d) a lack of governance that ensures the responsible and safe use of AI tools (Serbanescu, 2024; Serbanescu & Nack, 2024).

“After AI” is reinterpreted in a posthumanist (Ferrando, 2017; Coeckelbergh, 2013) and solarpunk (Reina-Rozo, 2021) key, adopting a holistic approach in which the end of man is understood as the death of capitalist and colonialist values (Escobar, 2018; Manzini, 2015), moving towards the sustainable, inclusive, and peaceful co-existence of all forms of life. This paper explores an example of a speculative future in which AI becomes a container of knowledge for all of humanity that any citizen can draw on to benefit from and to generate new knowledge, contributing to the resolution of complex problems in the world (e.g., climate change, research into treatments for rare diseases, etc.).


The AI system comprises an elaborate and complicated network of interactions, similar to a mangrove forest that is constantly connected and exchanging information, which helps plants survive (Floridi, 2015). Citizens may use shared knowledge to improve themselves and the environment. New meanings emerge as the language and its definitions grow. Citizens, for example, are networked and synergistic cyborgs capable of immersing themselves in informational reality, as evidenced by greater human capacities and collaboration. Humans are boosted by a technological gadget that allows them to access the “organiplatform”, a root-cloud-AI system full of challenges to complete and connections with peers. The futuristic scenario outlined in the paper originated from the Design PhD 2020 Summer School “Design in Transitional Times” at Politecnico di Milano. It was further refined and expanded during doctoral research on “Human-AI Co-Creativity”, which incorporates elements of the solarpunk movement.


This hypothetical future is rooted in the solarpunk movement, a notable narrative ecology distinguished by a more inclusive, empathetic, and eco-sustainable future transcending exclusive human-centric perspectives. In these imagined futures, AI coexists peacefully with humanity and other living beings (Chambers, 2021; Rupprecht et al., 2021; Mok, 2021), challenging the dominant dystopian narrative and providing a refreshing perspective that fosters a more optimistic and multifaceted understanding of the symbiotic relationship between AI and society. This contribution aims to spread hopeful narratives about a possible future so we may begin moving in that direction.


Bibliography

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Bostrom, N. (2017). Superintelligence. Dunod.

Chambers, B. (2021). A Psalm for the Wild-built (Vol. 1). Tordotcom.

Escobar, A. (2018). Designs for the Pluriverse. Duke University Press.

Ferrando, F. (2017). Postumanesimo, transumanesimo, antiumanesimo, metaumanesimo e nuovo materialismo. Relazioni e differenze. Lo sguardo, 24, 51–61.

Floridi, L. (2015). The onlife manifesto: Being human in a hyperconnected era. Springer Nature.

Manzini, E. (2015). Design, when everybody designs: An introduction to design for social innovation. MIT Press.

Mok, D. K. (2021). The Birdsong Fossil. In C. Rupprecht, D. Cleland, N. Tamura, R. Chaudhuri, & S. Ulibarri (Eds.), Multispecies Cities: Solarpunk Urban Futures (pp. 294–324). Albuquerque: World Weaver Press.

Reina-Rozo, J. D. (2021). Art, energy and technology: The Solarpunk movement. International Journal of Engineering, Social Justice, and Peace, 8(1), 47–60.

Rupprecht, C. D., Cleland, D., Tamura, N., Chaudhuri, R., & Ulibarri, S. (Eds.) (2021). Multispecies Cities: Solarpunk Urban Futures. Albuquerque: World Weaver Press.

Serbanescu, A. (2024). Human-AI Co-Creativity. Understanding the Relationship between Designer and AI systems in the field of Interactive Digital Narrative. Politecnico di Milano.

Serbanescu, A., & Nack, F. (2024). Towards an analytical framework for AI-powered creative support systems in interactive digital narratives. Journal of Entrepreneurial Researchers.



VR & AI - our new way of grieving?

by Chantal Pisarzowski (she/her)


In my presentation titled "Digital Twins: Reuniting with the Deceased through AI and VR," I explore the intersection of technology and human emotion by leveraging ChatGPT and virtual reality (VR) to facilitate unique and emotional dialogues with digital replicas of deceased individuals. This project originated from a desire to give a voice to lost loved ones, demonstrating how AI-driven dialogues can offer not just technical feasibility but also therapeutic value by enabling users to exchange unsaid words and farewells.


The technical foundation of the project relies on a comprehensive database filled with information about the deceased, processed by AI to generate responses. These responses are then converted into speech, visualized through a digital avatar within a VR environment, enhancing the immersive experience and creating a perceived proximity to the digital twin. The case study of Helga Goetze, a Berlin feminist, serves as an example, whose digital replica not only preserves her memory but also reintroduces her messages into contemporary discourse.
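As a sketch, the pipeline described above can be laid out as three stages. Every function below is a hypothetical placeholder standing in for the project's actual components (persona database, AI response generation, speech synthesis, avatar rendering), not its real code:

```python
def generate_reply(persona_db: dict, user_utterance: str) -> str:
    """Stand-in for the AI step: the real system conditions a language
    model on the database about the deceased; here, a trivial template."""
    return f"{persona_db['name']}: I hear you when you say '{user_utterance}'."

def synthesise_speech(text: str) -> bytes:
    """Stand-in for the text-to-speech step."""
    return text.encode("utf-8")

def drive_avatar(audio: bytes) -> str:
    """Stand-in for animating the VR avatar with the synthesised audio."""
    return f"avatar plays {len(audio)} bytes of speech"

def dialogue_turn(persona_db: dict, user_utterance: str) -> str:
    """One conversational turn: database -> AI reply -> speech -> avatar."""
    reply = generate_reply(persona_db, user_utterance)
    return drive_avatar(synthesise_speech(reply))
```

In the actual project these stages would be an LLM call, a TTS engine, and a VR renderer respectively; the sketch only fixes the order of the data flow.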


This project raises fundamental questions about the implications of digitally "reviving" someone. What does it mean to digitally resurrect an individual? What ethical and societal implications emerge from digital immortality? While the technology offers potential for comfort and a new form of mourning, I also address the associated risks, including the potential for abuse through deepfakes and the dissemination of misinformation. My presentation aims to spark a critical discussion on the responsibility in the development and application of such technologies, inspired by the notion that we have the power to redefine the boundaries between past and future, farewell and reunion.


Session 2

Session 3

Algorithmic patterns after AI

by Alex McLean (he/him) and Anu Reddy (she/her)


In use, the term 'artificial intelligence' is often conflated with the idea of the 'algorithm'. As protesters chant "fuck the algorithm", AI and algorithms in general have become known as technologies of control; practically unexplainable, yet governing what we read on social media, what we see in search results, and how we are ourselves profiled and assessed.


Will AI tech culture continue to accelerate towards information and environmental overload, or will it hit a conceptual and/or financial brick wall, allowing us to relax back into yet another AI winter? In either case, we propose a post-AI future, based on an alternative history of algorithms as patterns.


Algorithmic patterns are creative, culturally-embedded ways to work beyond our imaginations, where complex and surprising results can result from the combination of simple parts (or rules). Humans have explored algorithmic patterns obsessively, across practices and in many forms, for millennia. This can be seen in ancient practices such as geometric Kolam drawings, the discrete mathematics of textile weaves and braids, or more recent developments such as juggling siteswap patterns, and creative code-based practices such as live coding.
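As a small illustration of this point (not part of the authors' presentation), the siteswap notation mentioned above has an elegantly simple rule: a sequence of throw heights is juggleable exactly when every throw lands on a distinct beat, and the average throw height is then the number of balls.

```python
def is_valid_siteswap(throws):
    """A throw of height t made on beat i lands on beat (i + t) mod n;
    the pattern is juggleable iff all landing beats are distinct."""
    n = len(throws)
    landings = {(i + t) % n for i, t in enumerate(throws)}
    return len(landings) == n

def ball_count(throws):
    """For a valid siteswap, the average throw height equals the number
    of balls in the air (the 'average theorem')."""
    return sum(throws) // len(throws)

# 441 and 531 are classic three-ball patterns; 432 is invalid because
# two of its throws would land on the same beat.
```

Two short rules generate an unbounded space of patterns, which is precisely the complexity-from-simplicity that algorithmic pattern practices explore.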


Through our presentation we will introduce Algorithmic Pattern as an emerging, interdisciplinary field of research and practice. We will showcase examples of algorithmic pattern contributions whose outcomes are deterministic yet unpredictable, as they emerge into great complexity from simplicity. This will include our own work in Kolam drawing and live coding. Through this we will signal a Luddite reclaiming of algorithms as technologies of human craftwork. When AI has exhausted itself, we shall return to human-centric algorithmic patterns, which offer us rich ways of making that are easy to learn but take lifetimes to explore.


Apocalyptic AI Otherwise

by Mustafa Ali


In this paper, I push back against McQuillan’s (2022) assertion that “rather than being an apocalyptic technology, AI is more aptly characterized as a form of supercharged bureaucracy that ramps up everyday cruelties, such as those in our systems of welfare.” (p.4) While not disputing the value and importance of bureaucratic readings of AI, especially given the biopolitical and necropolitical entanglement of AI technology with statist formations, close attention to ‘the political’, at least as usefully theorised by German jurist, legal theorist (and Nazi party member), Carl Schmitt, prompts engagement with the issue of sovereignty and the latter’s relationship to political theology[1]. Crucially, I maintain that in times of crisis, political theology can assume apocalyptic – that is, revelatory, inevitable, world-ending[2] (or at least, world-transformational) – form.


In earlier work (Ali 2019, 2020), I argue that the contemporary moment is marked by an entangled confluence of two developments, viz. (1) the pervasive rollout and deployment of AI systems (and cognate technologies) underpinned by machine cum deep learning, and (2) a (re)iteration of the phenomenon of ‘White Crisis’[3]. Given that the notion of ‘existential threat’ associated with some strands of AI discourse is at least partly driven by ostensibly apocalyptic projections, I suggest that what comes ‘after AI’ is usefully explored in terms of Schmittian sovereignty and political theology.


Yet while there are an increasing number of works exploring the religious and theological implications of AI, few if any have approached developments within this area through the lens of political theology. For example, in his brief think piece for Medium entitled “The Great White Robot God” (2019), cultural theorist David Golumbia explores connections between white supremacy and the nebulous phenomenon of artificial general intelligence (or AGI) through the bridging phenomenon of discourse about IQ, notably pointing to “the messianic/ Christological structure of AGI belief, especially when promoted by members of the Radical Atheist community, which itself has significant overlap with the alt-right.” Notwithstanding such resonances between AI, crude or overt white supremacy, and strands of apocalyptic Christian messianism, drawing on Gray (2007) and Mills (2008), I suggest the need to consider the entanglement of AI, white supremacy, and Christian apocalypticism within the more mainstream political landscape of liberalism within Western polities. Insofar as these state formations order their populations biopolitically (for differential management) and target ‘Other’-ed populations necropolitically (for extermination), they are readily understood as exercising sovereignty along Fanon’s “line of the human”.


Some commentators[4] have argued that whiteness continues to occupy the position of the human, technological beings such as robots and AI coming to displace non-white others in the realm of the sub-human. However, I suggest a different possibility, viz. the migration of whiteness into the realm of the Transhuman (cyborg) and technological Posthuman (AI) under apocalyptic conditions of ‘White Crisis’. Crucially, this migratory shift is intended to maintain the relational and hierarchical binary between the European (Western, white) and non-European (non-Western, non-White). Insofar as this hierarchy can usefully be thought about in the politically onto-theological terms of a ‘Great Chain of Being’, then insofar as the apex of this chain is occupied by God, whiteness as AI is attempting to occupy “the God spot” such that what is to come “after AI” will be a post-apocalyptic whiteness.


Against this racially dystopian possibility, re-inscribing the racialised present into a similarly racialised future (thereby collapsing other possible futures), I want to wind back on the issue of apocalypse and consider what might follow from thinking – and doing – apocalypse ‘otherwise’ by engaging with various works exploring the phenomenon of apocalypse in an-‘Other’-ed tradition, viz. Islam, with a view to mining it for resources to mount resistance against the accelerating momentum of that which comes “after AI”.


NOTES


[1] In his seminal work, Political Theology: Four Chapters on the Concept of Sovereignty (1922), Schmitt asserts that “sovereign is he who decides on the exception” (p.5), and that “all significant concepts of the modern theory of the state are secularized theological concepts.” (p.36) Regarding the latter statement, Schmitt maintains that in Western historical experience there was a transfer “from theology to the theory of the state, whereby, for example, the omnipotent God became the omnipotent lawgiver.” (p.36)


[2] By ‘world’, I do not refer here to the planetary phenomenon that is the Earth, but rather to a shared material and symbolic realm constructed by human beings. It is crucial to appreciate that what counts as a world at a particular time and place is a function of power, and that while many worlds might be possible (worlding/world-making engendering contingent realities), some worlds dominate others.


[3] By ‘White Crisis’ I refer to a situation in which a hegemonic whiteness is subjected to increasing contestation by the non-white ‘other’, engendering a heightened sense of anxiety and threat among those self-racialised as white expressed through various discursive formulations and prompting a variety of responses.


[4] In this connection, see, for example, Atanasoski and Vora (2019).


REFERENCES


Ali, S.M. (2020) Transhumanism and/as Whiteness. In Transhumanism – The Proper Guide to a Posthuman Condition or a Dangerous Idea? Edited by Hans-Jorg Kreowski and Wolfgang Hofkirchner. Switzerland: Springer, Cham, pp.169-183.


Ali, S.M. (2019) ‘White Crisis’ and/as ‘Existential Risk’: The Entangled Apocalypticism of Artificial Intelligence. Zygon: Journal of Religion and Science 54(1): 207-224.


Atanasoski, N. and Vora, K. (2019) Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. Durham: Duke University Press.


Gray, J. (2007) Black Mass: Apocalyptic Religion and the Death of Utopia. London: Penguin.


McQuillan, D. (2022) Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. Bristol: Bristol University Press.


Mills, C.W. (2008) Racial Liberalism. PMLA 123(5): 1380-1397.


AI Atemporality vs. Recursive Sacred Time: The plural Machine-Now

by Gaston Welisch (he/him)


This paper argues that the atemporal “mode of existence” (Simondon, 1958) of current AI models, such as Large Language Models (LLMs) and Text-to-Image (TTI) models, renders them amotivated and reactive. That is to say, AI, in its ontological mode (way of being), appears detached from the continuity of time: each interaction is isolated, devoid of historical context or future anticipation, reminiscent of Heidegger's conception of objects that are simply 'ready-to-hand' until called upon. Take, for example, AI romantic partners like those offered by Replika, which are always available and continuously responsive to users, in stark contrast with human relationship dynamics, and with examples of disastrous consequences (Singleton, Gerken, & McMahon, 2023; Xiang, 2023; Laestadius, Bishop, Gonzalez, Illenčík, & Campos-Castillo, 2022). This tension echoes Stiegler's (1994) discourse on alienation and disorientation, where technology, as a prosthetic extension (temporal objects) of human consciousness, disrupts our experience of time. To offer an alternative to this narrative, the paper invokes examples of pre-industrial conceptions of time. These include figures from the practice of magic, like the Witch (Hutton, 2017), a symbol of fear and otherness, and the Magus (Grafton, 2024), emblematic of the concept of rejected knowledge (Hanegraaff, 2012). These figures, often seen as marginalised or esoteric, now re-emerge in contemporary discourse, particularly the Witch in feminist thought as a symbol of resistance and agency. These practices offer a counterpoint to our current understanding of linear, quantifiable time and suggest a cyclical, interconnected temporality, particularly in epistemic approaches to discovery and revelation. On that point, Mircea Eliade's concepts of sacred and profane time provide a framework for understanding how human beings perceive and interact with the temporal world (Eliade, 1954). Eliade's sacred time is cyclical and regenerative, while profane time is linear and secular. To which modality of time does AI belong? The use and “stochastic parroting” (Bender et al., 2021) of vast amounts of training data could suggest a cyclical approach to time, tending toward homogenisation (Alemohammad et al., 2023). Additionally, these models, built on past data, paradoxically project past biases and trends into the future. This paper aims to deconstruct the ways our entanglements with technology shape not only our conception of time but also our “being in the world”. The paper asks how rejected knowledge can inform desirable futures (Hancock and Bezold, 1994), and what rituals, inspired by these practices of rejected knowledge, could be put into place to collectively imagine (Godelier, 2015) and shape a society “After AI”.


Queering Autobiographical Memory: Emotional Analysis of Goodreads Book Reviews for (Sub)-Genre Recognition

by Izzy Barrett-Lally (she/her)


In this paper I model an approach to building empirical enquiry into literary analysis of genre, using emotional analysis of reader reviews performed with a large language model. My method iteratively hypothesises autobiographical subgenres by identifying the presence and intensity of eight key emotions in reader review texts ('anger', 'joy', 'disgust', 'fear', 'anticipation', 'sadness', 'surprise', 'trust'). I investigate recognition of hypothesised autobiographical subgenres, both literary-institutional and unrecognised, amongst self-selecting readers who choose to leave online book reviews. I look for patterns of emotional response in reader reviews in order to propose subgenres in a Grounded way. In the past, empirical studies of textual interpretation have relied on classifications of genre by experts, often graduate students or academics specialised in linguistics and psychology, to test genre recognition. This approach reinforces expert bias, whereby readers in positions of historic privilege and institutional power determine the recognition of literary subgenres, which limits the available discourse around experiences of reading and writing autobiography. I propose that readers who are marginalised within literary institutional contexts may perceive subgenres that also exist along fertile literary margins.
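A minimal sketch of the emotion-scoring step described above. The tiny keyword lexicon is an illustrative assumption standing in for the paper's large-language-model scoring (a real pipeline might instead prompt an LLM, or use a resource such as the NRC Emotion Lexicon):

```python
from collections import Counter

# Plutchik's eight basic emotions, as listed in the paper.
EMOTIONS = ("anger", "joy", "disgust", "fear",
            "anticipation", "sadness", "surprise", "trust")

# Hypothetical mini-lexicon mapping review words to emotions.
LEXICON = {
    "furious": "anger", "delighted": "joy", "vile": "disgust",
    "terrifying": "fear", "gripping": "anticipation",
    "heartbreaking": "sadness", "unexpected": "surprise",
    "honest": "trust", "moving": "sadness",
}

def emotion_profile(review: str) -> dict:
    """Score each emotion as its share of the emotion-bearing words found."""
    hits = Counter(LEXICON[w] for w in review.lower().split() if w in LEXICON)
    total = sum(hits.values()) or 1
    return {e: hits.get(e, 0) / total for e in EMOTIONS}
```

Clustering such per-review profiles is one way the "patterns of emotional response" mentioned above could be surfaced before hypothesising subgenres.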


The refinement of definitions of literary subgenres using this method can facilitate literary analysis and support future empirical study of textual interpretation through the inclusion of readers and texts that recombine, reject, and reconstitute genre. Research shows that genre plays a role in the process of textual interpretation. In a classic 1998 study, Hanauer proposed a "genre-specific hypothesis of reading" based on his findings that genre alters recall, reading speed, and readers' attitudes to the perceived difficulty of a text. Further empirical studies have built on Hanauer's theory. For instance, McCarthy found that it is possible for readers to identify genre using only the first few words of a text. Meyer and Wijekumar claim that training in genre recognition can improve reader comprehension, because skilled readers activate appropriate expectations and strategies when they correctly identify a text's genre. These studies, which test the genre-specific hypothesis of reading, suggest that genre changes the way readers interpret texts. I develop three main points in this paper in favour of a Grounded, empirical approach to the definition of genre: i) if textual interpretation is tied to the recognition of genre, then a clearer process for generic categorisation could inform and decentralise textual analysis; ii) a clearer process for generic categorisation may feed back into the analysis of 'problematic texts' that display 'compositionality' (heterogeneous genre) and support the reception of marginalised texts, their readers, and writers; iii) further analysis of the interpretation of texts that do not easily fit into a single genre may produce further insights into the role of genre in textual interpretation. My work uses AI along these lines of enquiry as a means of decentralising literary criticism, venturing into a future in which real readers and writers set the agenda for literary interpretation and analysis.


After the Binary: AI and Greenberg in 2024

by Sarah Jane Field (she/her)


The discourse around AI and what is and isn’t art has been hyperbolic, binary, and fraught with unwavering conviction from every side. Despite Clement Greenberg’s reputation for having divided the arts into low and high, tasteful and banal, it is worth revisiting his 1939 essay, Avant-Garde and Kitsch. Simply replace the word “kitsch” with “AI”, and it reads like it could have been written today. Almost. The pre-war landscape in which he wrote and the shadow of fascism also resonate.

As an artist aiming to explore and integrate various language-materials, including machine-learning output, I resist hierarchies between media. Nevertheless, I must admit to internal conflict when faced with the dominant AI aesthetic. My desire to eschew assumed hierarchies forces me to question my tastes. In addition, I can’t help but ask if the now familiar and seemingly narrow deterministic aesthetic, often accused of being the kitschiest ever, is inevitable.

Kitsch was described as “monological self-enjoyment” by Karsten Harries in 1979.

In our hyper-digitised world, and despite living in the "global village" (McLuhan, 1964), the Other has been characterised as remote or even non-existent. And AI, some suggest, potentially leads us towards even greater social atomisation, along with all its accompanying ills, such as depression, anxiety, and deep loneliness – famously linked to the rise of totalitarianism by Hannah Arendt (1951).

In response to Greenberg and the continued tensions around what constitutes art, my presentation explores how AI aesthetics’ familiar “surreal capacities” (Davis, 2022) are transforming culture. I will interrogate the assumption that AI merely “draws the lifeblood” from a reservoir of so-called “true culture” (Greenberg, 1939). And finally, I ask if this latest revolution has the potential to positively reconfigure the structural class divisions that Greenberg identified and arguably reinforced – or will it generate even more disparity?


Charting the Human-AI Creative Continuum: A Personalized Navigation

by Caleb Weintraub (he/him)


As artificial intelligence continues to advance, it challenges traditional notions of creativity, authorship, and the role of human creators. This presentation explores the evolving landscape of creativity in the "after AI" world, examining the unique aspects of human creative practice that remain distinct from AI-generated outputs, while also addressing critical ethical, social, and environmental considerations.


By considering AI as a holistic entity that amalgamates vast datasets to produce highly consistent and widely accessible creative works, the contrasting nature of human-made physical artifacts is set into relief. The imperfections, idiosyncrasies, and materiality of human creations serve as testaments to the internal, personal experience of the creative process and the intrinsic value of the journey over the final product.


The perceived universality and consistency of AI-generated works may lead to a homogenization of creative output and the potential for AI to be seen as an arbiter of aesthetic value. To address these concerns and harness the power of AI while promoting diversity, this presentation proposes a methodology for fine-tuning generative AI models through active feedback loops informed by insights and cultural awareness of domain experts.


Central to this approach is the development of an AI-driven artistic feedback and development tool that engages creatives in a continual dialogue about their work and their vision. By creating personalized learning loops, these AI companions can foster deeply individualized critique experiences, revolutionizing creative education, collaboration, and refinement in the "after AI" era.


This presentation explores the ethical implications of AI in creative sectors, focusing on strategies to mitigate biases, increase access, and cultivate thoughtful development and deployment of these technologies. It also examines the potential environmental impact of AI systems and proposes ways to promote sustainability in the long term.


By charting the human-AI creative continuum through personalized AI tools, ethical frameworks, and a commitment to diversity and sustainability, we can navigate this new landscape and adapt our skills, mindset, and approach to creativity. This ensures a vibrant and responsible creative environment in the "after AI" world. The presentation sets a precedent for future endeavors at the intersection of technology, creativity, and social responsibility, inviting reflection on the evolving role of human creators in a world increasingly shaped by and shared with artificial intelligence.

Session 4