Teaching Designers to use Generative AI as a Material

Published 5 December 2024 by Kevin Nørby Andersen
10 min read

Earlier this year, I was asked to teach the three-week course “Intelligent Design” (no, not that one) for 3rd-semester students in the “Coded Design” programme at the Danish School of Media and Journalism (DMJX).

In this article, I lay out the thinking behind the course and its structure.

Seeing the Material

When Stable Diffusion was announced in 2022 and kickstarted the wave of Generative AI, I was immediately interested. Not just because it seemed a useful tool, a way to generate images, but because the model was released publicly.

The public release meant that people like me could use this technology as a material to make new products and tools with. Just a few months later, ChatGPT arrived, and ever since we have been experimenting with generative AI, giving talks and now also teaching.

This course felt like an opportunity to contribute to the discourse around Generative AI by moving away from the mental model of it being just another set of tools accessed through products like ChatGPT, Midjourney or Suno. Instead, I wanted to present Generative AI technologies as new materials that designers, especially those who code, can use to think and make in entirely new ways.

In the words of Andrej Karpathy, a founding member of OpenAI:

… Looking at LLMs as chatbots is the same as looking at early computers as calculators. We’re seeing an emergence of a whole new computing paradigm, and it is very early.

Andrej Karpathy

Designers Needed

The reason this matters so much to me is that I find a lot of the current discourse on Generative AI so boring.

My LinkedIn feed is mostly about developing some sort of superintelligence, big consultancies claiming to be experts in the field, or just another chatbot. There are so many cases of companies rushing to force Generative AI into products, and studies now even show that consumers actively avoid products with AI features.

In my experience, one reason Generative AI as a field and business has developed this way is the lack of design. Or rather, that design is seen as a “UX/UI” activity, i.e. drawing some wireframes for a concept that a PM and some engineers have already developed.

My friends over at Design Systems International have written great articles about this development, e.g. The Gulf Between Design and Engineering and Product Design is Lost.

The problem is that if nobody comes up with interesting products, things that people actually want in their lives, we will stop believing it’s possible, and the large investments currently going into making these Generative AI technologies more capable will disappear.

My answer, exemplified through this course, is to train designers to dive into these materials, so that rather than just being consumers who use Generative AI to write text or code or generate images, they build an understanding of what the models underneath are capable of.

With that understanding, combined with the ability to prototype, they can envision tools and products that feel interesting, fun and aspirational.

Structure

Teaching designers about Generative AI in just three weeks honestly felt daunting. Where do you start? How do you simultaneously argue that these technologies are capable of changing the way we think and make, and that a large language model is, at its core, just a massive trained calculator that predicts the next word based on patterns it has learned from lots of text?
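To make that “trained calculator” framing concrete, here is a minimal, hypothetical sketch of next-word prediction, not something from the course material: count which word tends to follow which in a tiny corpus, then always pick the most frequent continuation. Real LLMs use neural networks over tokens rather than word counts, but the underlying task, predicting what comes next from patterns in text, is the same.

```python
from collections import Counter, defaultdict

# A tiny "training corpus". Real models train on trillions of tokens.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Predict the most likely next word based on observed patterns."""
    candidates = following.get(word)
    if not candidates:
        return "."
    return candidates.most_common(1)[0][0]

# Generate a short continuation, one predicted word at a time.
word = "the"
sentence = [word]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat on the cat sat" — patterns, not understanding
```

The output looks vaguely grammatical but means nothing, which is a useful intuition for what scale and better models improve on.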

The course was taught on-site with a mix of talks, exercises and plenary presentations and discussions. The goal was to get students hands-on with the technologies as fast as possible while providing perspectives that they could reflect on while learning and making.

Given that we only had three weeks and had to get our hands dirty, I structured the course into three modules:

  1. Intelligence and Teachable Machine (3 days)

  2. Generative AI (7 days)

  3. Self-directed project (5 days)

Module 1: “Intelligence”

I thought it might be helpful to show the students that we’ve been here before: that claims of a technology, and AI specifically, replacing and surpassing human intelligence are not new, and that intelligence itself has been continuously rethought and redefined throughout history.

We’re arguably in the fourth wave of AI, and this time some claim we’re approaching “Artificial General Intelligence” (AGI). Maybe that’s because there are hints of it, or maybe, as I argue in the course, it can be attributed to anthropomorphism: our innate tendency to attribute human traits, emotions or intentions not only to other living beings but also to objects. This doesn’t mean the fourth wave is a dead end, because great things can be achieved even when working towards the wrong goal.

On the very first day of the course (see the introductory talk), I have the students use Teachable Machine to train their first “intelligence” through machine learning and image/sound/body classification. They get to see the difference between a training-based programming paradigm, where data becomes key, and the classic rules-based programming paradigm that they learn in introductory programming.

Seeing this difference enables us to talk about what good and ethical data is, and students build up an intuition about when to use a rules-based approach, and when to use a training-based approach.
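To illustrate the contrast outside the browser, here is a minimal, hypothetical sketch (the course itself used Teachable Machine) of the same classification problem solved both ways, using scikit-learn for the training-based version: classifying a gesture as a “wave” or a “point” from two hand-made features.

```python
# Two ways to classify a simple gesture from two hand-made features:
# wrist height (0..1) and hand speed (0..1). Hypothetical data for illustration.
from sklearn.neighbors import KNeighborsClassifier

def classify_by_rules(wrist_height: float, hand_speed: float) -> str:
    # Rules-based: we write the logic ourselves.
    if wrist_height > 0.6 and hand_speed > 0.5:
        return "wave"
    return "point"

# Training-based: we supply labelled examples instead of logic,
# and the model learns the decision boundary from the data.
examples = [
    [0.8, 0.7], [0.9, 0.6], [0.7, 0.8],   # waving
    [0.4, 0.1], [0.5, 0.2], [0.3, 0.1],   # pointing
]
labels = ["wave", "wave", "wave", "point", "point", "point"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(examples, labels)

sample = [0.75, 0.65]
print("rules   :", classify_by_rules(*sample))
print("trained :", model.predict([sample])[0])
```

In the rules-based version the logic is explicit and auditable; in the training-based version the behaviour is only as good as the examples, which is exactly where the conversation about good and ethical data starts.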

The module culminated in the students building drawing apps that use image, sound or pose recognition as input.

Student example: Anna Ellegaard made a drawing app that utilizes a pose classification model to attach a drawing to your body (www.annaellegaard.com)

Module 2: Generative AI

Now that we had gotten our hands dirty with machine learning and a training-based programming paradigm, it was time to move into Generative AI.

Due to the limited time available, I chose to focus on text and image generation. In the introductory module talk, I spent some time on the origins of text and image generation models and how they work, but more so on their material properties and my mental model for working with them.

Student example: Baldrian Sector made an app that uses GPT to generate words and assemble poems (instagram: @baldrian_sector + github: @baldriansector)

When talking about the term Large Language Model (LLM), of which GPT is an example, it feels important to understand that these models aren’t really limited to language, nor do they have any “real” understanding of language:

It’s a bit sad and confusing that LLMs (“Large Language Models”) have little to do with language; It’s just historical. They are highly general purpose technology for statistical modeling of token streams. A better name would be Autoregressive Transformers or something.

They don’t care if the tokens happen to represent little text chunks. It could just as well be little image patches, audio chunks, action choices, molecules, or whatever. If you can reduce your problem to that of modeling token streams (for any arbitrary vocabulary of some set of discrete tokens), you can “throw an LLM at it”.

Andrej Karpathy
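One way to make the “token streams” point tangible, assuming the tiktoken library that OpenAI publishes for GPT-style tokenization, is to look at how a sentence actually gets chopped up: the model operates on integer IDs for statistical chunks of text, not on words.

```python
# Sketch: inspect how GPT-style models see text, using OpenAI's tiktoken library.
# pip install tiktoken
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # a tokenizer used by GPT-3.5/4-era models

text = "Designers can treat generative AI as a material."
token_ids = encoding.encode(text)
tokens = [encoding.decode([token_id]) for token_id in token_ids]

print(token_ids)  # the integers the model actually operates on
print(tokens)     # sub-word chunks rather than a neat list of words
```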

Examples like Matt Webb’s Poem/1 clock, a clock that shows the time through GPT-generated poems, or oio’s walkcast.fm, a podcast that feeds your location into GPT to continuously generate real and fake insights about your surroundings, show the students that generative AI can be used as a material to create new and entirely different products.

The exercises in this module focused first on using ChatGPT, but with advanced prompts to train the students’ understanding of how to direct these models more intentionally. For example, one exercise asked the students to use prompt injection to expose the hidden system prompt behind ChatGPT.

This not only shows them an inherent weakness of LLMs, but also what sits behind ChatGPT, and it prepares them for the next exercise, where they define their own system prompts.
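To hint at what that next exercise looks like in code, here is a hypothetical sketch using the OpenAI Python SDK rather than the exact course material (the model name and prompt are placeholders): defining your own system prompt is just one extra message at the start of the conversation.

```python
# Sketch: giving an LLM a custom "personality" with a system prompt,
# using the OpenAI Python SDK (pip install openai). Requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a poetry tool for designers. "
                "Answer only with a two-line poem, never prose."
            ),
        },
        {"role": "user", "content": "What time is it?"},
    ],
)

print(response.choices[0].message.content)
```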

Module 3: Build your own Tool

The last week was entirely dedicated to a passion topic of mine: tools. Tools have a huge influence on how we think and make, and I have been building tools for myself and others for years.

Through two separate presentations, I showed some of the tools I have made with Generative AI over the past few years and gave an intro to the idea of building tools. I wanted to show that Generative AI can be a great material for building tools, even if it’s just a small part of a larger system.
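To give a flavour of what “a small part of a larger system” can mean, here is a hypothetical sketch, not a tool from the course: a tiny colour-palette generator where deterministic code does most of the work and a language model is only consulted for one small creative step, naming the palette. The model name and prompt are placeholders.

```python
# Sketch of a tool where Generative AI is only one small ingredient:
# the palette itself is generated deterministically; the LLM only names it.
import colorsys
from openai import OpenAI

def make_palette(base_hue: float, count: int = 5) -> list[str]:
    """Deterministic part: evenly spaced hues around a base hue, as hex colours."""
    palette = []
    for i in range(count):
        hue = (base_hue + i / count) % 1.0
        r, g, b = colorsys.hsv_to_rgb(hue, 0.6, 0.9)
        palette.append(f"#{int(r * 255):02x}{int(g * 255):02x}{int(b * 255):02x}")
    return palette

def suggest_name(palette: list[str]) -> str:
    """Generative part: ask an LLM for a short, evocative palette name."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Suggest a short, evocative name for this colour palette: {palette}",
        }],
    )
    return response.choices[0].message.content

colors = make_palette(base_hue=0.55)
print(colors, "→", suggest_name(colors))
```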

Student example: Freja Marott built Vessel in Orbit, a personalized menstrual cycle tracker. It uses a mix of Generative AI and personal knowledge about your own cycle to offer advice for the different phases. (https://frejamarott.com / @frejamarott)

Reflections

Even though three weeks was a very short time to both prepare and execute a course spanning a topic as wide and deep as Generative AI, I’m really happy with how it turned out, not least because I really enjoyed seeing the diversity of work the students produced.

A warm and open workspace was created from the start, and it was clear that Kevin had a genuine interest in his students and their wishes and thoughts. He encouraged us to ‘think for ourselves’ and to actively consider what kind of designer each of us wants to be, and how we want to impact the world around us. It was incredibly inspiring and generous.

When we experimented with Teachable Machine, I was afraid it would set the bar too high, but it turned out to be a completely natural entry point for understanding GPT models in a different way.

Kevin’s regular presentations and lectures were greatly appreciated, as they provided the class with an understanding of the subject’s possibilities before we dove into the learning process. It fostered intrinsic motivation, which became the driving force to tackle some challenging topics.

Samples of student feedback

I wanted to use the course as a culmination of the past two years of work that our studio and other interesting players in the space have done, and to focus on a few themes:

1) Technologies are materials

All materials have functional, ethical, and aesthetic values shaped by their users and the world around us. Good design is about knowing how much of each material to use, when to use it, and how different materials come together to serve a purpose.

2) Materials don’t doom or save us

With every wave, people claim a technology will either doom or save us. The truth usually lies somewhere in between, and it’s our responsibility as designers to navigate this. We do so by working with materials to deepen our understanding.

3) What a material can do is not static

Although steam bending wood has been around for centuries, it wasn’t until Michael Thonet applied it to furniture design that people realized wooden furniture could take on new shapes. Similarly, the GPT model existed for years before ChatGPT. But only when OpenAI released an accessible chat interface did it become the fastest-growing software application in history.

Seeing what the students were able to create within just a few days, how their thinking around using Generative AI changed, and how much fun they had, gives me hope that designers will play a bigger part in defining the kinds of experiences that these new technologies unlock.

Thanks to DMJX for hosting me, and if you’re curious to learn more, feel free to check out the freely available course website on GitHub: https://github.com/knandersen/dmjx-intelligent-design-2024

   

Working with technologies as materials is what we do at super ultra, and we love working with clients on exploring what these new materials can do. If you’re interested in learning more, please reach out here or at hello@superultra.dk.
