
Xintian Tina Wang | Analysis

July 23, 2025

What does AI understand about fine art?

An experiment with ChatGPT reveals unsettling truths about how AI interprets artists from diverse backgrounds.

Artist-participants in AI experiment, courtesy artists. From left: Mei-Tsen Chen, Zachary Lieberman, Peter Zimmermann

Like every journalist who couldn’t resist the allure of an AI companion, I recently posed the question to ChatGPT: “What do you know about me, and can you create an image of what you think I look like?”

The 4o model paused noticeably before responding. When the image finally appeared, I couldn’t help but laugh. On my screen was a smiling Black man holding a pen. Though amusing at first, this image was also unsettling: despite having shared snippets of my bio — a Chinese-born woman who moved to the U.S. at 18 and covers intersections of identity, culture, and innovation — the AI crafted an identity entirely different from my own. It had rendered me into a composite archetype of an activist writer, raising troubling questions about how AI perceives and interprets our multifaceted identities.

From left: Wang; the image ChatGPT generated when prompted to depict Wang. "By the way, those random characters on the wall mean nothing," she says. "They likely represent a stylized or fictional name, possibly 'Dr. Chen's Lucky Medicine' or 'Auspicious Doctor Chen.'"

This highlights the biases built into AI tools — biases that have already raised serious ethical concerns, particularly for marginalized communities whose identities are misrepresented by technology. But what happens when those same biases start shaping how art is interpreted and created?

In an age where newfangled neural networks can conjure up dreamlike paintings, photorealistic portraits, and even entire video sequences with a single text prompt, the line between creation and curation has blurred. Generative AI is not just a tool; it's becoming a tastemaker, increasingly shaping what we see, how we visualize identity, who gets represented, and how their perspectives are interpreted. I started to wonder: could AI capture the emotional core of art? Could it ever understand the physical, intuitive act of making?

I decided to find out.

To do this, I designed an experiment centered on three artists who, while working across different geographies and mediums, all incorporate digital tools into their creative process: New York-based digital media artist Zachary Lieberman, known for his code-based work exploring perception and movement; German painter Peter Zimmermann, who uses resin and digital manipulation to challenge traditional painting forms; and Paris-based Taiwanese artist Mei-Tsen Chen, whose mixed-media installations often explore memory, identity, and diaspora. 

Rather than asking genAI to generate what these artists look like, I wanted to see whether it could interpret how they think. Could a machine interpret the instincts, philosophies, and tensions that shape an artist’s vision? I fed each artist’s statement and recent work into ChatGPT and used the output to generate images, then placed them side by side with the originals. 

The results came unsettlingly close to the originals. That closeness haunted me.

If a machine can mimic the appearance of depth without the years of questioning, resisting, and creating that real art demands — what happens to those up-and-coming artists still wrestling with that process? What happens when we start accepting “close enough” over craft, or spectacle over struggle? I did this experiment out of curiosity. But what I found left me uneasy. Because if we can’t tell the difference, or worse — if we stop caring — what does that say about the stories we’ll choose to believe?


Between human touch and AI’s approximation: Zach Lieberman’s daily dialogue with digital art

Ten years ago, Zach Lieberman found himself joining Facebook for the first time — not to share images or chase viral moments, but out of necessity, following the passing of his father, to connect with his father’s friends. 

What began as an obligation quickly transformed into a deeply personal creative practice. Starting January 1, 2016, Lieberman began posting daily short animations and sketches on Instagram, each limited to the platform’s then-maximum of 15 seconds. Originally, Lieberman made these digital doodles quietly at night as he sat beside his daughter’s bed, lulling her to sleep while healing himself. Soon, though, this nightly routine evolved into a committed artistic habit, a kind of visual diary shared publicly, driven by an intuitive curiosity rather than any preconceived artistic purpose. 

When Lieberman views the vibrant, ripple-like image that ChatGPT generated during our interview based on his artist statement, his reaction is layered. On the surface, the AI’s artwork — with its colorful radial waves and soft gradients — looks familiar. “I do a lot of work with gradients, radial forms, and noise, so those are definitely aspects I see in the AI’s image,” he says. But something deeper doesn’t sit right. “It doesn’t look computational — it almost looks like a hand-drawn imitation,” he adds. “It doesn’t look like something I would personally make.”

ChatGPT-generated image based on Lieberman’s artist statement and project description

In Lieberman’s recent Ripple Series, the artist uses shader code — a specialized programming language for computer graphics hardware — to translate his fascination with nature’s intangible beauty into striking digital imagery. Rather than replicating exact scenes, Lieberman uses this medium to explore the essence of natural phenomena: the flicker of light through leaves, the pulse of water, the moment of stillness before motion. The AI-generated image, in contrast, borrows surface-level motifs without tapping into the emotional undercurrent that drives Lieberman’s work. It knows what to recreate — the gradients, the symmetry, the ripple forms — but not why they matter.

Lieberman’s work carries intention: every coded movement, every modulation of light and color, is the result of sustained reflection. “When I make work, I’m thinking deeply about feeling: How does this image move me? What does it invoke?” he explains. 

The AI version, while not egregiously wrong, lacks that sense of internal inquiry. The generated image missed the soul of the work — the poetic tension between logic and feeling. It’s why Lieberman urges artists to approach AI with “skepticism and optimism,” using it as a tool rather than a compass. Because while the technology might recognize the pattern, it still doesn’t understand the pulse. “It’s easy to become a ‘demo maker’ for technology,” he says. “But art should strive to create poetry.” 

Lost in translation: Peter Zimmermann on AI's misinterpretation of materiality

In the 1990s, German conceptual painter Peter Zimmermann began shifting from traditional techniques to digital experimentation, driven by a desire to more efficiently manipulate layouts and type. “Computers are machines for discovering the world,” he says, tools that have not only accelerated his process but opened new ways of seeing. 

Today, Zimmermann starts each painting with a digitally manipulated image, often drawn from photographs or book covers, which he transforms into abstract compositions using software like Photoshop. But rather than printing these files, he painstakingly transfers them to canvas using poured layers of tinted epoxy resin. The resin, unpredictable and luminous, introduces an element of chance into each piece, forcing him into an intimate, tactile dialogue with the work.

When presented with an AI-generated image created from his artist statement, Zimmermann’s reaction is blunt: “Naive.” 

ChatGPT-generated image based on Zimmermann’s artist statement

While the image mimics formal elements such as curved shapes, gradients, and overlays, it entirely misses the mark on materiality. “The reflections, which are the captivating feature of my surfaces, are completely absent,” he explains. “It looks as if it were painted with a brush. There’s no resin, no luminosity.” Even the composition, he says, feels arbitrary and incoherent, with colors scattered in no meaningful sequence. “It confirms the criticism of AI as a digestion process of archived data.” 

Zimmermann’s original works, including into blue (2023), pulse with a depth and clarity that only resin can produce: semi-translucent layers catch and refract light, responding to a viewer’s movement in real time. By beginning with a digital image and reanimating it in physical space, Zimmermann creates a sensory experience that transcends the screen. In contrast, the AI-generated version feels like a flat echo — technically clever, perhaps, but emotionally inert. In a world oversaturated with digital imagery, Zimmermann’s art asks us to slow down and look more closely, to question how we see and what we’re seeing. 

Mapping the invisible: Mei-Tsen Chen on emotion, displacement, and the limits of AI

For Paris-based Taiwanese artist Mei-Tsen Chen, painting is a process of movement — across cities, cultures, and inner terrains. Calling herself an "urban nomad," she draws from decades of global travel and cultural displacement to create what she calls "metaphorical cartographies," paintings that blend geographic memory with emotional resonance. Each composition begins with research — Google Maps views of the cities where she has lived, walked, and exhibited — but what emerges is far from literal. Her painted maps, built from fine lines and abstract overlays, are deeply personal blueprints of lived experience, shaped as much by memory and emotion as by architecture or topography. "Like the rhizomes," she writes in her statement, "I create my own network map. The map becomes a dream, a utopia that connects and interweaves disparate networks."

That complexity is precisely what she finds missing in the AI-generated interpretation of her work. Though the image mirrors the structure of a map, it fails to grasp the poetic weight of mapping in her practice. “I expected AI to understand better how cartographic data could reflect the complexity of a lifespan in multicultural environments,” she says. “But the result was disappointingly simplistic. It couldn’t capture the emotional, affective layers I embed in my process.” 

ChatGPT-generated image based on Chen’s artist statement

Indeed, her series Into the Blue and Drifting Time navigate between macro systems and micro gestures, recalling city grids while evoking oceans, memories, and migratory drift. Each painting is built not just from data, but from memory’s regenerative process, which she likens to “lines flowing [to and from] the ocean, which is the origin of life.”

Chen embraces digital tools in her process, using 3D modeling for exhibition layouts, researching urban grids, even imagining the potential of brain-computer interfaces that could one day turn neurons into narrative. But she is also cautious: “AI is a powerful analyzer,” she notes, “but it cannot feel trauma, memory, or emotional response the way we do.” 

What AI misses is the hand-brain-heart connection, a physical act of painting that becomes an extension of the artist’s nervous system, a direct transmission of presence. As AI becomes more embedded in creative fields, Chen hopes artists and technologists alike will ask more urgent questions, not just about what machines can generate, but about what they still fail to hold: intimacy, displacement, the aching poetry of being human.


My experiment’s results, while visually compelling, feel hollow. What carries the artist’s emotional weight — the subtle gestures, the intentional imperfections, the pulse of presence — simply doesn’t translate. As I write this story, my AI transcription tool flags a soundbite it can’t place. I click it, only to hear my interviewee and me laughing, out of breath, after seeing the AI-generated image of me. A moment of shared delight, fleeting and deeply human, utterly untranslatable by machine.