Laura Scherling | Essays

Imagery Made by a Designer or Artist, with Artificial Intelligence

Gas Station on Mars made on NightCafe by CEO Angus Russell. Courtesy of NightCafe Studio.

In less than a year, artificial intelligence (AI) tools that can generate imagery have collectively shaken up the design, business, and art industries. After years of deliberation about the use cases for AI-assisted design and art, there has suddenly been a surge of AI products geared towards creation. As an avalanche of AI-assisted imagery has taken over social media feeds and served as a starting point for further research, it has been met variously with awe and dismay.

In many ways, some of the results have been highly innovative and democratizing. Shortly after OpenAI launched its image generator DALL·E 2 as a limited beta version in July 2022, one million people joined the waitlist. DALL·E 2, a transformer language model that can create images from text, has been used in projects as diverse as the first “artificially intelligent” magazine cover for Cosmopolitan, works commemorating Austrian painter Egon Schiele, and menu and food-plating inspiration for Chef Tom Aviv’s new restaurant Branja Miami. The fashion design interface CALA has already utilized the DALL·E API to customize clothing styles, including users’ desired trims and features such as cuffs. By August 2022, the beta version of Midjourney was released; like DALL·E 2, it can create unique AI-generated imagery from text descriptions. Users have made everything from portraits and landscapes to icons, logos, and tattoo art. In other examples, architect Frank Jacobus has worked with Midjourney to generate chair designs, and AURA Modern Home, a Chicago-based studio, has found ways to efficiently make interior design mood boards. Examining social impact, Nigerian filmmaker and artist Malik Afegbua has generated imagery of older adults as Black African runway models to challenge stereotypes. And the collective Human Assisted Art, providing a public-facing opportunity, received hundreds of submissions for one of the first international exhibitions to curate AI-generated work.

A jewelry design concept generated by Fatima Zehra via starryai. Courtesy of starryai.

A number of additional startups have also launched since DALL·E 2 and Midjourney. Angus Russell, CEO and Co-founder of the Australia-based NightCafe Studio, whose mission is to “democratize art creation,” reports that since its launch the platform has created over 150 million images, bringing together a community of “people who enjoy AI art.” Russell points out that NightCafe provides a space for users who “didn't have a creative outlet before” and has also been adopted by users who were already artists and graphic designers. AI art generator starryai, run by design technologist Mo Kahn, AI writer Alex McFarland, and Head of Growth Fatima Zehra, has suggested varied applications like creating graphics, jewelry concepts, social media assets, tapestry print prototypes, and podcast cover art. Zehra, like Russell, highlights that their mission has been to create an art-generating app that is accessible and makes it easy for creators to express themselves. Other popular text-to-image AI art generators include Artbreeder, dream by WOMBO, and Stable Diffusion by Stability AI.

Alongside these recent releases, the questions that have come up about data ethics and copyright law are fundamentally uncharted. On February 2nd, Getty Images Holdings, Inc. (the well-known supplier of stock images) sued Stability AI for copying more than 12 million photographs from Getty Images’ collection, along with the associated captions and metadata. Getty posits that Stability AI accessed these images without permission to train its Stable Diffusion model. Essentially, large datasets like Getty’s are used to improve image classification and to teach the AI model how to execute a vast array of subject matters and styles. On February 21, just weeks later, the United States Copyright Office concluded that copyright protection could not be applied to the Midjourney-generated imagery in the graphic novel Zarya of the Dawn, only to the text authored by AI educator Kristina Kashtanova. The office determined that copyright does not protect non-human authorship, citing the 1997 case Urantia Foundation v. Kristen Maaherra, in which it was claimed that the Urantia Book was authored by celestial beings and merely transcribed, compiled, and collected by mortals. Kashtanova and Midjourney’s general counsel declared the limited copyright protection a success, to an extent, especially given that it represents a step toward the formation of copyright standards relevant to a broad range of generative AI platforms.

A typography poster by Michelle Cortese via Midjourney. Courtesy of Michelle Cortese.

Cases like these will in all likelihood become more common. Scenarios like the Getty lawsuit and Kashtanova’s graphic novel have readily drawn the attention of the AI art community, various universities and institutes like The Alan Turing Institute’s AI & Arts Interest Group, and the broader public, all drawn into a debate about the ethics of AI art. Michelle Cortese, Design Lead at Meta Reality Labs and adjunct faculty at NYU, observes that “the lack of consent in content aggregation for models (visual or text) is an ongoing ethical concern.” Cortese, whose work spans augmented and virtual reality and neural interfaces, experiments with Midjourney to create typographic sketches in order to research the way it “merges global alphabets,” which she finds to be “something very Tower of Babel in-reverse.” She is particularly interested in emerging academic research by the Glaze Project at the University of Chicago, where several computer science professors and PhD students are developing ethical security techniques. Their Glaze tool can add a “cloak layer” to digital artwork, which can prevent the work from being used to train AI models. Another web tool, “Have I Been Trained,” allows visual artists to upload their work to check whether it has been used to train popular AI art models. Additionally, some designers and artists have started to protest AI-generated art. When the showcase platform ArtStation began allowing AI art uploads, platform users organized a protest called “No AI Art” and flooded ArtStation’s “Trending” section with protest imagery for a period of time.

In her work at starryai, Zehra also recognizes that “any innovative tool's introduction can create a disruption and AI is no different.” She notes that plagiarism has been a problem for nearly every creative medium in the past, and her goal is to navigate and discourage these types of behaviors. Russell has also seen that some of the challenges are technical. He explains that “in some cases these AI models can spit out exact copies of the images they were trained on. They're not supposed to do that, but it's been shown to happen. I think there needs to be some effort applied to ensure that these models don't overfit.” In a recent interview, Google’s chief executive Sundar Pichai predicted that AI is going to be “incredibly beneficial, but clearly has the potential to cause harm in a deep way.” It was also reported that Pichai was not in full agreement with the open letter asking AI labs to pause the creation of systems more powerful than the large language model GPT-4 in order to “jointly develop and implement a set of shared safety protocols for advanced AI design and development.”

There are still many unanswered questions, and it is challenging to develop an informed view on imagery made by designers and artists via artificial intelligence with so much happening all at once. Many bold and creative new products have emerged after a relatively long period of foundational research in deep neural networks and in the design of machine learning algorithms. Russell also acknowledged that this is essentially just the beginning and that “AI tools are going to change the game.” Nevertheless, if it looks like a duck and quacks like a duck, it might be a duck. In a world where a neural network can take text prompts and turn them into astonishing graphical portraits, textiles, furniture, websites, paintings, fashion, fonts, and icons, it seems that, like it or not, we are all becoming part of a machine platform crowd.

Interviews quoted in this essay were courtesy of Michelle Cortese (Meta Reality Labs), Angus Russell (NightCafe Studio), and Fatima Zehra (starryai).

Posted in: Arts + Culture, Graphic Design, Technology

Laura Scherling is a designer, researcher, and educator, working and teaching at Columbia University. Scherling holds a doctorate from Columbia University Teachers College. She is the co-editor of the recently published book Ethics in Design and Communication: New Critical Perspectives (Bloomsbury Academic UK). Scherling is also the co-founder of GreenspaceNYC, a nonprofit sustainability and design collective. Her work has been published by Brookings Metro, Design and Culture, Spark Journal, Interiors: Design/Architecture/Culture, and Futures Worth Preserving: Cultural Constructions of Nostalgia and Sustainability. Her work can be viewed at laurascherling.info.
