Sharing a world with AI: Can technology such as AI be a reliable, creative collaborator?

On 11 May 1997, the world of chess was shaken. Russian grandmaster and world champion Garry Kasparov had just lost a historic match – he cried foul and accused his opponent of cheating (not an uncommon reaction in chess). What was uncommon was the victor: IBM’s supercomputer Deep Blue. A few months later that year, in a toned-down, musical version of the match, a musician at the University of Oregon, Dr. Steve Larson, competed with a computer program called EMI (Experiments in Musical Intelligence) to compose in the style of Johann Sebastian Bach. There were three entries in the contest: one by Bach, one by Dr. Larson and one by EMI. Larson lost, but what hurt him most was that the audience thought the composition by EMI was genuine Bach.

Twenty-five years later, complex algorithms have become an inseparable part of our lives – they help us choose our music, our partners and our investments, and they navigate data with greater precision than humans can. So, given that they can create material (sometimes truly weird stuff), can they be good creative collaborators too?

Theatre-makers had been experimenting with storytelling through tech long before the pandemic forced us to have discomfiting conversations about performing in the absence of presence. Back in 2019, before the pandemic, and long before I had come across the phrase ‘digital theatre’, I watched the most tech-forward performance I had seen thus far. It was a documentary theatre piece by Silke Huysmans (Brazil) and Hannes Dereere (Belgium), in which the two deviser-performers stood on stage, their heads permanently tilted down towards their phones. Each phone was plugged into a screen at least 10 feet high, and the entire performance was mediated through these two screens. The 60-minute piece, titled Pleasant Island, took us through their WhatsApp messages, videos, text typed live into their notes app, various music-making apps, their phone gallery, even Google searches. The show was about the people of the island of Nauru, a Micronesian country that has been ravaged by colonization, mining, and once again by modern imperialism. This tiny nation in the middle of the Pacific Ocean, with a population too poor to travel, can access the world only through their phones, and that evening we all got to bear witness to their stories through the performers’ phones.

One might imagine that a show where the performers never spoke, never once looked at the audience (or even each other), would be cold, distant, impersonal. On the contrary: six years on, this remains one of the most compelling shows I’ve ever watched, one that moved me to my core and that I think of regularly, not only for its radical use of form but also for its emotional impact.

It’s the Age of AI – and the Age of Talking about AI

After the pandemic, many artists were delighted to be off Zoom, and many rooms echoed with “I don’t ever want to talk about digital theatre again.” While Zoom theatre definitely seems to be on the ebb, innovative use of technology as an asset to storytelling is nothing new, and it continues to grow at a remarkable pace.

One of my favourite theatre factoids is that in early modern English, one would say one was off to “hear a play”, indicating that theatre was an aural tradition. Plays were performed during the day, in natural light. Audiences with cheap tickets at The Globe in London often couldn’t see the stage, which is why many of Shakespeare’s texts include stage directions within the dialogue. Over the years, theatre came to be lit by artificial light, allowing staging and storytelling to find new avenues, eventually bringing us to modern English, where we “watch” or “see” a play.

Artists across performing arts mediums are finding truly brilliant ways to push the edges of technology to enhance their storytelling. During the pandemic, I witnessed how technology could bend the limits of human capability, with a recording of Complicite’s The Encounter, in which a lone actor (Simon McBurney) creates an aural and visual spectacle with the help of binaural mics, headphones for the audience, and spectacular visuals on an LED cyclorama. “This is a piece that asks about the price of progress, but never forgets the possibilities,” says theatre critic Matt Trueman in a Variety review.

With every passing day, the conversation about tech is increasingly overrun with talk of Artificial Intelligence. As AI (in particular generative AI) finds its way into every sphere, the performing arts aren’t being left behind. In 2022, Jaaga’s BeFantastic (India) and Future Everything (UK) brought together artists who were just beginning to experiment with AI and paired them with mentors, in a project culminating in the Future Fantastic festival in Bangalore in 2023. Kamya Ramchandran, creative director of Jaaga, believes there’s a lot of wariness surrounding AI. She says, “Our perspective in creating this fellowship was that AI is here and it is real, and unless we play with what might seem like the beast, we won’t know how to tame it or how to use it.” She felt strongly that the inclusion of AI in the projects at the festival had pushed the artists’ creativity, making them engage with their craft in new ways. This experimentation ranged from artists using AI-generated backdrops for dance performances (such as the show Palimpsest) to improv shows where performers were fed dialogue by an AI bot for a continuous 12 hours (The Merge). Kamya also believes that artists’ engagement with new technology is essential for deeper insights – for conversations that some might shrug off as ‘esoteric’ though they are fundamental. Through the creative process, Future Fantastic artists often found themselves asking who controls which aspects of knowledge creation, and how that power dynamic can be subverted.

A still from ‘Palimpsest’. Source: befantastic.in

Gaurav Singh Nijjer, a deviser/performer from New Delhi, works with tech in several projects – using video, projection mapping and generative AI – and is excited by the possibilities it holds. He says, “As a director, working with AI can be really fulfilling, because it opens up new possibilities to work with. When it comes to creating something original, though, it cannot write anything really good. Whatever it writes will be pretty vanilla, with no bold choices, no risks.” Though it may not create wonderful things on its own, Gaurav has found innovative ways to make AI or technology his collaborator or co-performer. In Kaivalya Plays’ Absurdo, an AI-generated voice-over creates an eerie, alienated effect. In Himaniie Panth’s The Shunya Project, video and projections contribute to the scenography (and thus the storytelling) as ever-present visual companions to the actors on stage.

One of the performances at BeFantastic was ClimateProv, which Gaurav was also a part of. Its premise was that a handful of (human) improvisers had a new AI member join their troupe. They had to be nice and helpful to this new member, who heard what the performers were saying, took a few seconds, and then responded in turn. Some responses were verbal; others were AI-generated images that became the playing field for the artists to improvise in.

For the most part, the AI performed well, but like any machine, there was a risk of lag, or of the machine going rogue. To circumvent this, the team programmed a series of responses that could be triggered at the push of a button, ranging from a bland “ha ha that is so funny” to a relatable “I need to recharge my batteries.” The performers could then pick up the machine’s slack and take the scene ahead. What Gaurav and his team found was that, at this stage, they can’t leave it all up to generative AI – the performers need to be extra sharp and ready to create with what they receive.
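The fail-safe described above – canned lines a performer can trigger when the AI lags or goes rogue – can be sketched in a few lines of code. This is a hypothetical illustration of the general idea, not the actual ClimateProv software; the canned lines are taken from the description above, but the function and variable names are invented.

```python
import random

# Pre-programmed fallback lines, triggered at the push of a button
# when the AI co-performer lags or goes rogue (illustrative sketch only).
CANNED_LINES = [
    "ha ha that is so funny",
    "I need to recharge my batteries",
]

def respond(ai_reply):
    """Use the AI's reply if one arrived in time; otherwise fall back
    to a random canned line so the scene can keep moving."""
    if ai_reply:
        return ai_reply
    return random.choice(CANNED_LINES)
```

The design choice is simple: the show never waits on the machine, because the human performers always have something to play off.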

A still from ClimateProv, with two human performers in the foreground, and an AI generated image in the background.

Technology can also be a sharp creative collaborator, as musician and creative technologist Aaron Myles Pereira finds. He often creates projection mapping for music gigs, and says it is frequently an accessory to the show, perhaps amplifying some aspect of the storytelling, certainly upping the ‘spectacle’. “It feels like the music industry is dying – there is a severe shortage of venues and money for up-and-coming bands. By adding a visual component, the bands can sell an experience which might bring audiences in.” In his compositional work, he has trained a machine learning algorithm on his style of composition. He especially uses this while performing Bach’s Inventions (where short phrases are stacked on top of one another, creating harmonies and melodies). Under normal circumstances, a musician might play one phrase with their left hand and another with their right. But with this algorithm in place, he can play one key that triggers an entire phrase, freeing up his other hand to play something else altogether.
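The one-key-per-phrase idea can be sketched as a simple lookup from a trigger note to a stored phrase. This is a hypothetical illustration of the general technique, not Aaron’s actual setup; the note numbers and phrases here are invented.

```python
# Minimal sketch: one trigger note expands into a whole stored phrase,
# so a single finger can "play" a line while the other hand stays free.
# Illustrative only – not Aaron's actual software.

PHRASES = {
    60: ["C4", "D4", "E4", "C4"],        # middle C triggers a four-note motif
    62: ["D4", "F4", "A4", "F4", "D4"],  # D triggers a five-note motif
}

def on_key_press(note):
    """Return the phrase bound to a trigger note,
    or just the note itself if nothing is bound."""
    return PHRASES.get(note, [f"note:{note}"])
```

In a live setting, the lookup would sit between the keyboard’s note-on events and the synthesizer, expanding each mapped key into its phrase.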

While several artists say they want to use AI and machines to take on the load of their administrative tasks so that they may be free to create, Aaron gives his algorithm certain creative tasks, which enables him to create even more!

Aaron’s projections for Richard Spaven and Fattybassman. Photographer: Abhishek Gupta

Technology can do some things we can’t

As an actor-creator, I too have dabbled with technology in my creative practice. Projections make up the only set/scenographic element of my original show, Plan B/C/D/E, which deals with the climate crisis. The show hinges on the prediction that Mumbai, along with several coastal regions around the world, will be largely underwater by 2050. One of the most evocative parts of the show has been the moment when I open a website that allows us to check the prediction for our own homes, asking the frightening question, “Will my home be underwater in the monsoon of 2050?” (find out the answer here). In spite of the 20-odd minutes I spend sharing various research and current climate occurrences, nothing chills the mood like inputting someone’s address and seeing the map turn red (indicating an area underwater). This live, visual aid allows me to take the performance somewhere I couldn’t have otherwise. It gives me an opportunity to talk about the varying reliability of climate predictions and the way the absurdities of human behaviour throw future modelling out the window. For example, this website uses existing predictions of ice-sheet melt to calculate potential sea level rise, but it can’t take into account the worsening of flooding due to the reclamation of seas and estuaries (such as the ‘maladaptive’ Mumbai Coastal Road).

Meghana AT performing Plan B/C/D/E at Kochi’s Kerala Museum. Picture credit: ALT EFF team.

I came across the website a few days into the devising process of the show. I went through an emotional journey similar to the one I now put my audiences through – hours of reading many (too many?) articles about climate catastrophe had left me very anxious, but one look at the website, seeing the edges of my hometown underwater in a brief 30 years, pushed me into a minor breakdown. That breakdown then fuelled the rest of my creation process, the next four years of this production. Would I say AI was my collaborator? That wouldn’t be accurate. Did that single AI-based tool change my life as I know it, simply by amplifying my existing climate anxiety and motivating me to create a show that went on to be a turning point for my career? Well, yes.

On the flip side, technology such as AI consumes energy at an unprecedented rate. I often wonder how much my show contributes to climate change through just this one moment of exploring climate predictions. On the face of it, the show has a tiny carbon footprint, with a small team, no set, no physical waste and limited electrical requirements. But in the corners of my mind lives the factoid that for every 20-50 queries, a generative AI like ChatGPT requires about 500ml of clean water to cool massive data servers. In moments of heightened climate anxiety, I ask myself what hidden cost is incurred with each location search on this software.

Technology can do things that perhaps we shouldn’t?

The ethical questions that arise with generative AI are plenty, and each deserves more space than this one article can offer. These include the loss of work to AI-generated stand-ins (a cornerstone of the Screen Actors Guild strike in the US in 2023), plagiarism, heightened bigotry, deepfakes and the spread of misinformation – to name a few. The one I keep coming back to is the climate consequence of the increased use of generative AI. Besides the massive amounts of water they “drink”, data servers have been pushing already strained electric grids to the brink. Some may believe they are off the hook once they move towards “green energy”, but what about the human and environmental cost of mass cobalt mining in the Democratic Republic of Congo?

Because AI is omnipresent, there is an ongoing conversation about the ethics of how we use it. Perhaps we can see these conversations as a gateway to more in-depth conversations about the ethics of technology, and about our relationship with our art in general.

A great example comes from Diya Naidu, a movement artist who performed in Human in the Loop at the BeFantastic festival. She shares her experience of working with a generative AI that was ‘live-choreographing’ what she and her colleague Parth would perform. In that show, programmer Tammara Leites set up the program with a specific data bank, so that the AI wasn’t accessing all of the internet. “If the AI could have chosen randomly, there was a high probability of it giving racist/homophobic/gender-normative suggestions. Even so, over the course of rehearsals and performances, the programme learned that we were in India and began to be slightly racist, describing ‘brown’ bodies and ‘exotic’ gestures.”

‘Human in the Loop’, work in progress, performed at Kolkata Centre for Creativity.

The AI also tended to overestimate human ability (or perhaps it simply did not understand physics very well). It would ask the dancers to do 300 spins in a matter of seconds and then immediately slide across the floor. The dancers were encouraged to interpret the programme’s input as literally and immediately as possible, which allowed a “glitch” between robotic and human understanding to come through. This “glitch” created a certain tension in the experimental performance – the signature style of Nicole Seiler (Switzerland), who choreographed and conceptualised the piece.

Would she come back to such a project, though? Yes – if the human team were like this one. “In general, I’m quite wary of ‘cutting-edge technology’. I’ve worked on virtual/augmented reality projects before, but this time around, I felt included in the process. I was brought into the conversations with the tech, and with the evolutionary/anthropological/artistic questions this research was raising, and that was so stimulating – beyond the creative experience, which also felt as free as it felt challenging!”

So what’s next?

In the past few weeks, as I’ve been preparing this article, I have been evaluating my own relationship with technology, in general and in my creative process. The joke in my household is that I am jinxed when it comes to technology: multiple incidents of faulty mobile phones/laptops/blenders/water heaters have dubbed me a panauti (a jinx). Yet both the shows my company tafreehwale produces depend heavily on tech. Over the combined 50+ shows I’ve produced, I’ve had laptops crash, HDMI ports stop working, internet lag, Zoom choose to update just before a digital show, short circuits switch off speakers, projectors go blue-screen, screens tear – perhaps every tech mishap that could occur has occurred. Each time, I swear off tech. “The next show I make will be performed with a tubelight and no sound cues, I am done with technology!” my colleagues have often heard me proclaim.

I know it’s a hollow threat though. My cyborg brain is dependent on the laptop and mobile phone that I have outsourced numerous processing tasks to. In a world where every bit of my life is soaked through in tech, why should I fight to keep it out of my creative practice?

Meghana AT is a theatre artist/addict, who has been exploring multiple roles in the field since 2012. In this time, she has worked as an actor, writer, director, producer, production manager, teacher, dramaturg – with more experiments to come. She has a Master’s in Authorial Creativity and Pedagogy from The Academy of Performing Arts in Prague. She is the co-founder of tafreehwale, a theatre company engaged in original, political, and playful theatre making and training.

Special thanks to Ayush Gupta (associate professor of science education at Homi Bhabha Centre for Science Education) for their inputs for this piece.
