AI images for dreaming or ghosting?
BLOG: Utopian Pulses
“The threat is no longer the deadly sweet seduction of nostalgia. The problem is not, any more, the longing to get to the past, but the inability to get out of it… Do we really have more substance than the ghosts we endlessly applaud? The past cannot be forgotten, the present cannot be remembered. Take care. It’s a desert out there...”
– Mark Fisher, 2013 [1, p. 62]
Mark Fisher argues in Ghosts of My Life that society is increasingly haunted by old dreams of better futures that never came to pass. As a result, the myriad efforts to create images of better futures tend to reflect not new dreams but old ones. He refers to this inability to be genuinely creative in the face of environmental crises as “the slow cancellation of the future”—“the gradual yet relentless way in which the future has been eroded… as capitalist work culture leaves people in a state where they are simultaneously exhausted and overstimulated” [1, p. 16]. Text-to-image generative AI promises to revitalize this creativity—by allowing anyone to dream up and depict possible, or even preposterous, future worlds. Yet this hollow promise of dreaming conceals the ghosting that manifests in the shadows. Ghosting, in the double sense—in how AI severs our deeply personal relationship with our imaginative capacities, and in how it simultaneously intensifies our collective haunting by lost dreams. Still, given that AI magnifies these old spectres, can seeing them more clearly help us to confront them, to then dream anew? Or do such AI-generative acts only further render us zombies of capitalist imagination, incapable of breaking free?
— by Josie Chambers
The image above is bizarre: bobbing corn stretches toward the horizon, awaiting its eventual conversion into biofuel via absurdist jumbo corn towers. An image just like this one was created by students studying technology and innovation at Utrecht University. Their task was to choose an emerging technology and use AI to generate images of two possible futures in 2060—a utopian one and a dystopian one. Guided through iterative prompting and reflection, they were meant to unpack the assumptions behind what futures they presumed could, or should, unfold.
With an estimated 34 million AI images generated daily, such images are now ubiquitous [2]. There is, for instance, an AI tool that lets you “add a touch of Dutch to your street”—making it greener and more cycle-friendly. Similarly, UrbanistAI uses AI for the participatory reimagination of cities. There is much to say about the implications of using AI to depict so-called better worlds, with numerous critiques already of bias [3], [4], labor exploitation [5], [6], [7], and energy unsustainability [8], [9]. Or, even worse, the weaponization of AI to promote “ideal” futures, such as the fake racist campaign images secretly made by two members of the Dutch parliament. What has remained less explored, however, is the potential of AI tools to lay bare our deeply held dreams of better futures, such that they can then be challenged. To begin this exploration, let’s return to the futuristic image of water-borne corn. You may wonder: what were the students’ intentions behind creating such an image?
Well, the students in fact presented this image as utopian. They explained that if humans could develop the technology to grow corn in water, this would help solve the shortage of space for farming. They presumed that technology could efficiently convert corn into biofuel, so that all farmers could enable a transition to green energy. They acknowledged that, as a result, some aquatic life in the areas where corn is planted would adapt while some would “die out”.
This utopian image embodies a complex mixture of old dreams. The dream that more efficient agricultural production must meet ever-growing demands. The dream that energy can come from more sustainable sources. The dream that technology will inevitably save us from our troubles. The dream that humans should dominate nature. The dream that good ends justify any means, regardless of the harms for some. This AI image renders these dreams ever more salient, even caricatured. But does seeing these dreams laid bare spark critical thinking about them, or does it further trap us in them? And where do these dreams even come from?
These questions prompted an unexpected conversation with my father. You see, he lived out these dreams in 1970s rural America. I grew up on the outskirts of Champaign, Illinois in the 1990s, surrounded by expansive cornfields. But two decades prior, my father was part of the mid-1970s push to develop renewable fuel from corn—known as “gasohol”. He had just finished his PhD in physics and faced the heart-wrenching choice of whether to pursue his academic dream or help advance the transition toward renewable energy. He had invented a technology to produce renewable gasohol from corn byproducts and sought to enlist the help of his father—my grandfather John Chambers, a leading world expert in alcohol distillation—to engineer it. With the Arab oil embargo of 1973 and the resulting 300% spike in oil prices, new government incentives made such renewable technology financially viable [10]. My father recalls his PhD advisor saying: “It’s better for the country for you to go work with your dad.”
However, corn was not a common energy source, so my father first wanted to see whether gasohol was worth pursuing from an environmental perspective. Together with colleagues, he showed that the energy balance of converting corn into gasohol could not be taken for granted. Many were planning to burn coal to produce it, which would be neither environmentally responsible nor a source of net positive energy. But alternative resources like corn waste, switchgrass and cellulosic materials could yield net positive energy. The analysis was submitted to Science and received two scathing reviews: one saying the net energy balance was “clearly positive”, the other saying it was “clearly negative”. The editor phoned my father with the news, to which he replied: “I rest my case”. Science published the paper: Gasohol: Does It or Doesn’t It Produce Positive Net Energy? [11].
My father and grandfather’s company, ACR Process Corporation, went on to design a distillery model that would enable small “satellite” farms to compete head-on with, or even outcompete, much larger distilleries. His dream, as stated in the 1980 Alcohol Fuels Policy, was to “get to 100 percent alcohol fuels in use by the year 2000” [12]. The big companies using unsustainable methods claimed that small-scale farms were not financially viable, yet emerging cooperatives were proving otherwise. For example, the SWAFCA Co-op in Alabama—composed of black tenant farmers who had been evicted after participating in Martin Luther King's voter registration drives—had implemented a process for turning spoiled crops into alcohol for fuel [12]. In Colorado, farm activist Eugene Schroder created the guide “Makin’ It on the Farm” to share his novel gasohol methods, which “cut down on pollution and stopped the flow of the dollar out of the country” [13]. My father published a handbook to help farmers work out what it would cost them to produce gasohol, and received so many questions that he had to hire four students to answer the phone lines continuously for over a month.
These dreams were short-lived, however. As he explained: “It was a time when I felt really optimistic. I thought I could do something for my country that was worth doing, but then it all fell apart.” Falling oil prices in the 1980s made gasoline plentiful once more, and the government pulled the rug out from under its supportive policies. I asked my father how he felt now about his old dreams. The answer was mixed. On the one hand, he cited the Brazilian government’s efforts to invest in biofuel from sugar cane to reduce carbon emissions. On the other hand, he said we need to let go of gasoline altogether and pursue more fundamentally renewable options like solar, wind, hydroelectric, and battery-powered vehicles. I also asked: what did he think of the students’ image? He chuckled, saying “the stairs around those corn cobs would be impossible to climb!” But ultimately, he said, it created the misleading perception that edible corn would be used, instead of waste materials or more efficient crops for fuel conversion like switchgrass or sugar cane. And this was quite apart from the negative implications for biodiversity and water scarcity.
After this deep dive into one corner of my family history, I was left feeling that the shiny, easy, generic AI image did not capture—and could never quite capture—the complexities of these old dreams and how they masquerade as new ones. The histories of where they come from. The real experience of how they are grappled with in the world. The complicated ways in which such dreams continue to shape and reshape our thinking in the present… and about possible futures…
Ghosting imagination
The idea behind creating images of possible desirable and undesirable technological futures was that doing so could reveal the biases that collectively haunt our imagination. After the initial utopian and dystopian images were generated, we used multiple rounds of questions to encourage students to interrogate their excavated dreams and nightmares:
How might it feel to live in the two worlds?
What might everyday life look like for you?
What new kinds of problems might you encounter?
How might you enjoy life differently than we do now?
How might others experience these worlds?
How might everyday life look from three different vantage points?
What new kinds of problems might they each encounter?
How might they each enjoy life differently than we do now?
In what ways are your worlds too binary?
For whom is your utopia dystopian? Why?
For whom is your dystopia utopian? Why?
What are the cultures of your worlds like?
What worldview underpins your worlds?
What if you were to remake them from another worldview?
What future cultural changes might you have underestimated?
Students were encouraged to use their reflections to seriously rethink and reprompt their images. Yet, despite this iterative process, the majority of images—like the one above—never progressed beyond a relatively superficial view of a technological future. The images and corresponding stories served as provocations, yet failed to engage with the layers and textures of these dreams. The students furthermore struggled to recognize the old dreams that AI silently amplified. As a result, the final images often reified these same dreams.
But what was so haunting about them? Well, there were the generic vantage points, which said nothing about everyday life or how people might experience life differently in such a future. In fact, nearly one-quarter of the images contained no recognizable people. The bright, sunny skies of the harmonious utopias, filled with stereotypical blue, glowing technology, flattened away the tensions and contestation that must be navigated in any future that could possibly be desirable beyond a single perspective. Furthermore, the portrayal of technology as always improving something—be it technical procedures, genetic modification, assisted living, education—papered over deep societal and ethical implications. For example, one utopian image showed a red-haired boy absorbing knowledge via his ears from books circling next to his head, in connection with others.
This is not too dissimilar from the passive one-way transfer model of education portrayed by artist Jean-Marc Côté in the 1890s, when he envisioned the future of schools in 2000. Old dreams simply repackaged into flatter aesthetics.
Even more concerning, among the images that included people, 85% showed only white people—even though we had discussed exactly this issue of algorithmic bias. Among the dystopian images, the dark, gloomy skies stamped out the complexities of real life. In fact, all too often the dystopian images were nearly identical to the utopian images, except with bad weather, broken technology and societal breakdown. Two sides of the same coin—both reflecting a fundamental lack of imagination about how the future might be qualitatively different than today, whether good or bad. For example, in the case of the water-borne corn, the dystopian image depicted a future where resistant mutant corn overran the urban environment, producing social and ecological exploitation and chaos.
I started to realize that the exercise seemed to be doing precisely the opposite of what I had intended. Instead of fostering a more critical perspective on what AI serves up and using that to open towards alternatives, it seemed to be ghosting imagination. Ghosting, in the double sense. Ghosting, in how it lured people towards old dreams that haunt us on a collective scale. A form of “networked haunting” only made possible by these AI tools [14]. But also ghosting, in how it severed people’s deeply personal relationship with their own imaginative capacities. The AI images seemed to trigger a form of dreaming backwards. A momentum that insidiously sucked imagination towards old dreams. Dreams that have erased countless histories… and possible futures. If we take seriously Alfred North Whitehead’s assertion that “imagination is never very free” [15, p. 132], because it is limited by our real-world experience, then perhaps AI imagination is like a dark impenetrable bunker that masquerades as an open sunny market.
But why and how did this ghosting take place? As the exploration began with simple prompts, students seemed to become excited by their early images and typically preferred to tinker around the edges of those thin futures, largely devoid of social context and meaning, rather than experiment more radically with their storylines. Because their imagination was rendered technical by simple prompts and clicks of a button, the imaginative work was done for them. Much of the story, or at least its endpoint, was already filled in. Students thus began to back-cast storylines to plausibly explain such end results, rather than to reimagine them. Through this experience, it became apparent just how ill-equipped we are to spot our old dreams and biases, and even more so when they are infused into seemingly creative and imaginative acts that take on new and bizarre forms.
Confronting our ghosts
The question remains—how can we acknowledge and confront the insidious ways our imagination tends to reproduce old dreams? And is there still a role for AI in such efforts? Does AI serve as a kind of oracle that channels old ghosts into visual form, regurgitating the popular culture it has ingested? And can we confront these ghosts that haunt our present, without then reproducing them? Or do such AI-generative acts necessarily render us zombies of capitalist imagination?
One challenge is that emerging technologies often appear in “thin” images that merely depict the technology, devoid of social context. In The Thickening of Futures, Li Jönsson et al. argue that “the future is not an empty space awaiting projected visions from an incomplete present or a predefined destination that we can simply foresee and arrive at… There is no manifesto of what the world is, because it is always in a state of becoming” [16, p. 2]. They claim that the problem with “thin” images, whether utopian or dystopian, is that they present futures which are exhausted of alternatives—“they all become layers in the compost pile of trouble in our ongoing epoch” [16, p. 1].
But how might we “thicken” futures? Donna Haraway describes how “in urgent times, many of us are tempted to address trouble in terms of making an imagined future safe, of stopping something from happening that looms in the future, of clearing away the present and the past in order to make futures for coming generations” [17, p. 1]. For example, many future visioning efforts are driven by the implicit assumption that “if we don’t have a destination, how do we know what future to build?” [18]. Yet, building thick futures necessitates letting go of the need to stabilize a single desired end point. Haraway calls for being “committed to the finicky, disruptive details of good stories that don’t know how to finish. Good stories reach into rich pasts to sustain thick presents to keep the story going for those who come after” [17, p. 125].
Generative AI inherently pulls from past data, yet offers the opposite of rich pasts. Rather, AI enshrines “WEIRD”—i.e. Western, Educated, Industrialized, Rich, and Democratic—content that dominates the internet, and projects this into the future [19]. As a result, AI images tend to exude a generic “pastness” that erodes history into mere aesthetic “vibes” [20]. For example, Eryk Salvaggio prompted “The future as imagined in…”—inserting four distinct historical eras—and found that the images were nevertheless very similar—“cylindrical objects still hovered in the air… People looked out at cityscapes. Moons have multiplied” [20]. Yet, as Roland Meyer argues, “the problem with these images is not just their genericness. It’s the deeply populist idea that politics can be reduced to its immediately visible effects: politics is not judged by how it affects people's concrete daily lives, but rather by its aesthetics—by what image it produces.” [21]. It is no wonder that populist leaders find AI to be the perfect tool to showcase their imaginary solutions—rendering invisible the politics behind what it would actually take to create such futures [21]. Through weaponizing “thin” pasts into “thin” futures, we are served up a “zombie representation of the human world” that is artificially bloated through its endless attempts to resurrect itself [22].
So, is it even possible to thicken futures from such a starting point? Nele Fischer and Wenzel Mehnert offer a framework for creating “thick descriptions” of possible worlds to “critically reflect upon the everyday images of the future we hold and live by” [23, p. 26]. In this way, “every image of the future is such a window to a possible world” [23]. Thin images of the future from news, media, or even AI can be valuable starting points to see how a particular issue is often portrayed. But they advocate for constructing a possible social world around the image [23]. For emerging technology, this means that the technology itself is not only different in this world, but the people are too; they are embedded in different value systems [24]. It also means that the imagined worlds should include the kind of contradictory and conflicted dynamics we see in our present world [25]. Once thick descriptions are built, it becomes much more fruitful to reflect on the assumptions that underpin the world. Such reflections can then further inspire the reconstruction of alternative possible worlds [23]. In this way, the thickening of the present can become a form of anticipatory practice [16].
Now, I pause here—because the catch to this piece is that I had this year’s students read it, up until this point, before engaging in the exercise, to see if it was possible to facilitate more meaningful critique of the “thin” images AI was serving up and to use this as a basis for “thickening” possible social-technological futures. I also included a ritualistic dumping, whereby students were not only encouraged to critique the biases underpinning their initial “thin” images but had to throw them into a virtual trash bin. We then used the “Futures Wheel” [26] (a.k.a. the “Seeds Approach” [27]) to help them build narratives of possible first-, second- and third-order effects of these unfolding technologies. AI image generation was then used to show a glimpse, or window, into their imagined utopian and dystopian worlds, with an iterative process of spotting and counteracting AI and human biases.
The results of this process were, in some cases, more nuanced portrayals of possible worlds with less overt bias. Yet, at the same time, the underlying ghosting of imagination continued. The shift from narrative building to AI images seemed to dampen students’ creativity and flatten their critical analysis, as old ghosts once more crept into their images with a simple click of a button. Their stories, again, turned towards explaining the images rather than building towards deeper notions of “thick” possible futures that hold space for critical reflection.
My overall conclusion from this experimentation is that AI is indeed exceptionally good at portraying the ghosts that haunt us. We can critically dissect and reject the “thin” futures it serves up. Yet using AI as a starting point or mode for “thickening” possible futures is extremely challenging, and most likely counterproductive. Instead, we need diverse imaginative modes and ingredients that can thicken our presents by infusing thick notions of possible pasts and futures (see also my previous post: Around the future in eighty worlds). As my father stated, upon reading this piece: “When I was young, I used to enjoy listening to mystery stories on the radio because my imagination could create the images. If I go to a movie and then read the book that the movie is based on, my imagination is just replaying the movie.”
AI is increasingly wriggling into the modes by which people imagine possible futures. New applications are released weekly to support this, whether Photoshop AI plugins or specialized tools such as Oracolo, which aims to use AI for “imagination transfer”—such that “everybody can have the opportunity to express their ideas in the same language” [28]. But do we really want to reduce our imagination to the lowest common denominator? To give AI not only the agency to depict better futures, but also to shape our critical capacity around them, is something we should be highly wary of. We need greater exploration of the dangers these tools pose to our collective imagination. Not only do they expand the scope by which ghosts of the past can haunt us, but their potential role in helping us to confront these ghosts is also not straightforward. And this is aside from the exploitative “ghost work”—i.e. the invisible labor and impacts—which continues to expand under AI systems [5].
This raises the question of whether AI is even useful at all in the urgent collective endeavor to imagine better futures. In From Thin to Thick: Toward a Politics of Human-Compatible AI, Jacob Foster argues that AI (and AI research) in its current form is doomed to “crush local variation in the name of standardization and legibility” [29, p. 420]. AI inherently enables a “world of thin construction”, where the ability to impose narrow meanings is amplified precisely because “technologies all end up favoring similar objectives and meanings”. This stands in contrast to a “world of thick construction”, where “meanings are constantly debated, negotiated and co-constructed” [29, p. 426]. Foster claims we fundamentally lack the kind of social science that would enable AI to contribute to thick constructions, as it is far too concerned with the world as it is (e.g. leaving intact the alliance between the modernist state and modernist science), rather than imagining the world as it could be [29]. As we enrich the modes by which to critically imagine possible futures, perhaps we should strive for modes that, as Michel Foucault says, can “bear the lightning of possible storms” [30]. He describes:
I can't help but dream about a kind of criticism that would try not to judge but to bring an oeuvre, a book, a sentence, an idea to life; it would light fires, watch the grass grow, listen to the wind, and catch the sea foam in the breeze and scatter it. It would multiply not judgments but signs of existence; it would summon them, drag them from their sleep. Perhaps it would invent them sometimes—all the better. All the better. Criticism that hands down sentences sends me to sleep; I'd like a criticism of scintillating leaps of the imagination. It would not be sovereign or dressed in red. It would bear the lightning of possible storms. [31]
Now, how does one do this in practice? My experiments are only in nascent form, but I think it is imperative that we reject the ways we tend to see critical thinking as removed from imaginative acts, and imaginative thinking as removed from critique. AI-generation-for-critique indeed collapses this binary, yet in the most problematic of ways; it enables a form of haunting that spooks away genuine creativity and social critique. We instead need modes of critical imagination that allow us to simultaneously thicken our pasts and futures—to enfold them into the absurd conundrums we must collectively grapple with in the present, regardless of whether they relate to corn, renewable energy, or otherwise.
[1] M. Fisher, Ghosts of my life: writings on depression, hauntology and lost futures. Winchester: Zero books, 2013.
[2] B. Marr, “15 Mind-Blowing AI Statistics Everyone Must Know About Now,” Forbes. [Online]. Available: https://www.forbes.com/sites/bernardmarr/2025/03/10/15-mind-blowing-ai-…
[3] C. Bird, E. L. Ungless, and A. Kasirzadeh, “Typology of Risks of Generative Text-to-Image Models,” arXiv:2307.05543, Jul. 2023. [Online]. Available: http://arxiv.org/abs/2307.05543
[5] M. L. Gray and S. Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston, MA: Houghton Mifflin Harcourt, 2019.
[6] A. Birhane et al., “Power to the People? Opportunities and Challenges for Participatory AI,” in Equity and Access in Algorithms, Mechanisms, and Optimization, Arlington VA USA: ACM, Oct. 2022, pp. 1–8. doi: 10.1145/3551624.3555290.
[7] E. C. Gross, “The Creative Paradox of AI: Enabler or Disruptor of Human Imagination?,” SSL, pp. 69–74, Aug. 2023, doi: 10.31926/but.ssl.2023.16.65.1.7.
[8] A. Ponce, “Exposing generative AI: Human-Dependent, Legally Uncertain, Environmentally Unsustainable,” Sept. 23, 2024, Social Science Research Network, Rochester, NY: 4975411. doi: 10.2139/ssrn.4975411.
[9] S. Falk and A. van Wynsberghe, “Challenging AI for Sustainability: what ought it mean?,” AI Ethics, vol. 4, no. 4, pp. 1345–1355, Nov. 2024, doi: 10.1007/s43681-023-00323-3.
[10] “1973 oil crisis,” Wikipedia. Sept. 23, 2025. [Online]. Available: https://en.wikipedia.org/w/index.php?title=1973_oil_crisis&oldid=131287…
[11] R. S. Chambers, R. A. Herendeen, J. J. Joyce, and P. S. Penner, “Gasohol: Does It or Doesn’t It Produce Positive Net Energy?,” Science, vol. 206, no. 4420, pp. 789–795, Nov. 1979, doi: 10.1126/science.206.4420.789.
[12] United States Congress, Joint Economic Committee, Subcommittee on Energy, “Alcohol Fuels Policy: Energy self-sufficiency for rural America,” U.S. Government Printing Office, 1980.
[13] M. Mellis, “‘Makin’ It On the Farm’ - Historical Success Stories,” Hudson Valley Biofuel. [Online]. Available: https://www.hudsonvalleybiofuel.org/index.php/farms-feedstocks/experien…
[14] D. Lockton, “Far away is close at hand in images of elsewhere: a reflection on haunting and technology,” in Spooky Technology: A reflection on the invisible and otherworldly qualities in everyday technologies, D. Byrne and D. Lockton, Eds., Amsterdam: Imaginaries Lab, 2021. [Online]. Available: https://studioforcreativeinquiry.org/project/spooky-technology
[15] A. N. Whitehead, Process and Reality. New York: Free Press, 1978.
[16] L. Jönsson, K. Lindström, and Å. Ståhl, “The thickening of futures,” Futures, vol. 134, p. 102850, Dec. 2021, doi: 10.1016/j.futures.2021.102850.
[17] D. J. Haraway, Staying with the Trouble: Making Kin in the Chthulucene. in Experimental Futures: Technological Lives, Scientific Arts, Anthropological Voices. Durham, NC: Duke University Press, 2016.
[18] L. Schofield, “Imagining preferred futures: One of Generative AI’s best use cases?,” Lee’s Substack. [Online]. Available: https://leeschofield.substack.com/p/imagining-preferred-futures-one-of?…
[19] L. A. Bechthold, “Rendering new futures or enshrining pictures of the past? The double-edged sword of using AI-generated imagery for anticipation,” Journal of Futures Studies, Aug. 2025. [Online]. Available: https://jfsdigital.org/rendering-new-futures-or-enshrining-pictures-of-…
[20] E. Salvaggio, “The Future As Imagined,” Cybernetic Forests. [Online]. Available: https://cyberneticforests.substack.com/p/the-future-as-imagined
[21] R. Meyer, “There’s a growing & deeply problematic tendency to use #genAI for political and historical education,” Bluesky. [Online]. Available: https://bsky.app/profile/bildoperationen.bsky.social/post/3m3ubhhqvo22p
[22] H. Kalaitzidis, “AI as a zombie representation of the human world,” Void Network. [Online]. Available: https://voidnetwork.gr/2025/03/23/ai-as-a-zombie-representation-of-the-…
[23] N. Fischer and W. Mehnert, “Building Possible Worlds: A Speculation Based Framework to Reflect on Images of the Future,” Journal of Futures Studies, vol. 25, no. 3, Mar. 2021, doi: 10.6531/JFS.202103_25(3).0003.
[24] A. Nordmann, “Responsible innovation, the art and craft of anticipation,” Journal of Responsible Innovation, vol. 1, no. 1, pp. 87–98, Jan. 2014, doi: 10.1080/23299460.2014.882064.
[25] P. G. Raven, “week 39 / 2025: staying with various troubles,” Worldbuilding Agency. [Online]. Available: https://www.worldbuilding.agency/weeknotes/week-39-2025-staying-with-va…
[26] D. N. Bengston, “The Futures Wheel: A Method for Exploring the Implications of Social–Ecological Change,” Society & Natural Resources, vol. 29, no. 3, pp. 374–379, Mar. 2016, doi: 10.1080/08941920.2015.1054980.
[27] L. Pereira, “Imagining Better Futures Using the Seeds Approach,” Social Innovations Journal, vol. 5, Mar. 2021, [Online]. Available: https://socialinnovationsjournal.com/index.php/sij/article/view/694
[28] L. Galiotto, “Oracolo: A more-than-human envisioning tool: AI text-to-image as a support for futures thinking conversations,” M.Sc. thesis, Politecnico di Milano, May 2023. [Online]. Available: https://www.politesi.polimi.it/handle/10589/207915
[29] J. G. Foster, “From Thin to Thick: Toward a Politics of Human-Compatible AI,” Public Culture, vol. 35, no. 3 (101), pp. 417–430, Sept. 2023, doi: 10.1215/08992363-10742593.
[30] Z. VanderVeen, “Bearing the lightning of possible storms: Foucault’s experimental social criticism,” Cont Philos Rev, vol. 43, no. 4, pp. 467–484, Nov. 2010, doi: 10.1007/s11007-010-9160-7.
[31] M. Foucault, Ethics: Subjectivity and Truth, P. Rabinow, Ed., The Essential Works of Foucault, 1954–1984, vol. 1. New York: New Press, 1997.
Utopian Pulses is a blog series in which Josie Chambers shares creative approaches for collectively imagining the world otherwise. From challenging seemingly inevitable, unjust futures to facilitating alternative forms of politics, Utopian Pulses invites you into new ways of enabling our collective imagination to flourish.