- A new English course at UCLA embraces the potential of artificial intelligence tools.
- The ‘Algo-Lit’ class taught by Danny Snelson also explores ethical issues around text-generation tools.
- Students in the class are encouraged to both study AI analytically and use it creatively.
A new UCLA English class is built around the premise that the best way to understand artificial intelligence tools, including their biases and limitations, is to experiment with them.
The class, “Algo-Lit: An Introduction to AI Literature,” is taught by Danny Snelson, an assistant professor of English.
“I think that the use of generative AI — to be specific, the type of large-language models or image synthesis tools built on massive accumulations of data — presents real ethical and moral concerns,” Snelson said. “But these tools, and the new ways of making they present, are not going away. That box has been opened.”
The course began with an introduction to articles and creative works about algorithmic biases as a way to foster discussion of the ethical issues around AI.
“It’s never just a single side that is being presented to us in class,” said Brenna Connell, a fourth-year English major taking the class. “It’s always like, ‘Here’s a lot of potential to stretch human creativity. Here are the biases encoded into these tools. How do we put these things in conversation with each other?’”
Snelson encourages students to both study AI analytically and use it creatively.
For the class’s first assignment, students worked as a team to produce a 233-page book using an AI text generator. After reading essays on digital poetry and electronic literature, students used the insights they gained to create a fictional look back, from the perspective of an “artificial literary historian” writing in the year 2063, reflecting on algorithmic literature over the past 40 years.
“It’s a kind of sci-fi experiment that speculates on where AI might go before we’ve even begun our collective study,” Snelson said. “It introduces students to tactics not just for producing speculative scholarship like this, but for new forms of critical collaboration.”
Another collaborative aspect of the class is the syllabus itself. AI is developing so quickly that each week brings new advances in the field. Students are responsible for researching the latest developments to contribute to next week’s material. They might share news about the field, a new AI tool or a particularly salient work of art.
For instance, OpenAI recently announced the launch of customizable chatbots. Just days later, the class discussed the implications of that advance. Much of the material Snelson is teaching this quarter — including AI tools, creative works of art and literature, and emerging scholarship and criticism — has been released just months, or even weeks, before it’s discussed in class.
Snelson also raises real-life examples of how the capabilities of generative AI tools are often misunderstood, even by prominent thinkers. Recently, for example, a well-known novelist critiqued ChatGPT for its lackluster literary output. Snelson suggested that the failure was likely the author’s own: the novelist had given the software an equally lackluster prompt, without accounting for the algorithm’s distinctive prompt-based poetics and iterative processes.
“This class encourages students to experiment and play with these tools in order to understand their workings, all in preparation for the production of trenchant critiques and insightful analyses of these major technical transformations,” Snelson said.
Beyond books, comics and other visual arts, students work with sound and video to bring music videos, interviews and snippets of TV shows to life. As students use artificial intelligence to create works in a wide variety of media formats, they are also learning how to critique AI tools in a more informed way.
“If you want to learn more about AI, how to play with it, use it in a conscious way, and understand it, and that it’s not so confusing and scary, this is definitely a class worth taking,” Connell said.