OpenAI winds down AI image generator that blew minds and forged friendships in 2022

An AI-generated image from DALL-E 2 created with the prompt “A painting by Grant Wood of an astronaut couple, american gothic style.”

When OpenAI’s DALL-E 2 debuted on April 6, 2022, the idea that a computer could create relatively photorealistic images on demand based on just text descriptions caught a lot of people off guard. The launch began an innovative and tumultuous period in AI history, marked by a sense of wonder and a polarizing ethical debate that reverberates in the AI space to this day.

Last week, OpenAI turned off the ability for new customers to purchase generation credits for the web version of DALL-E 2, effectively killing it. From a technological standpoint, the move isn’t too surprising. The 2-year-old image generation model was groundbreaking for its time, but it has since been surpassed by DALL-E 3’s higher level of detail, and OpenAI has recently begun rolling out editing capabilities for DALL-E 3.

But for a tight-knit group of artists and tech enthusiasts who were there at the start of DALL-E 2, the service’s sunset marks the bittersweet end of a period where AI technology briefly felt like a magical portal to boundless creativity. “The arrival of DALL-E 2 was truly mind-blowing,” illustrator Douglas Bonneville told Ars in an interview. “There was an exhilarating sense of unlimited freedom in those first days that we all suspected AI was going to unleash. It felt like a liberation from something into something else, but it was never clear exactly what.”

Rise of the latent space astronauts

Before DALL-E 2, AI image generation tech had been building in the background for some time. Since the dawn of computers with graphical displays in the 1950s, people have been creating images with them. As early as the 1960s, artists like Vera Molnar, Georg Nees, and Manfred Mohr let computers do the drawing, generatively creating artwork using algorithms. Experiments from artists like Karl Sims in the 1990s led to one of the earliest introductions of neural networks into the process.

Use of AI in computer art picked up again in 2015 when Google’s DeepDream used a convolutional neural network to bring psychedelic details to existing images. Then came generators based on Transformer models, an architecture introduced in 2017 by a group of Google researchers. OpenAI’s DALL-E 1 debuted as a tech demo in early 2021, and Disco Diffusion launched later that year. Despite these precursors, DALL-E 2 arguably marked the mainstream breakout point for text-to-image generation, allowing each user to type a description of what they wanted to see and have a matching image appear before their eyes.

When OpenAI first announced DALL-E 2 in April 2022, certain corners of Twitter quickly filled with examples of surrealistic artworks it generated, such as teddy bears as mad scientists and astronauts on horseback. Many people were genuinely shocked. “Ok it’s fake ?? tell me it’s fake. April fool joke a bit late,” read one early reaction on Twitter. “My mind can only be blown so many times. I can’t take much more of this,” wrote another Twitter user in May.

Other examples of DALL-E 2 artwork collected in threads soon followed, all of which were flowing from OpenAI and a group of 200 handpicked beta testers.

When OpenAI began handing out those beta testing invitations, a small community of artists quickly formed around the shared experience, feeling like pioneers exploring the new technology together. “There was a wild time where there were a few artists playing around with it. We all became friends,” said conceptual artist Danielle Baskin, who first received an invitation to use DALL-E 2 on March 30, 2022, and began testing in mid-April. “When I first got access, I felt like I had a portal into infinite alternate worlds. I didn’t think of it as ‘art making’—it felt like playing. I’d stay awake for hours just exploring.”

Because each DALL-E image sprang from a written prompt like “a photo of a statue slipping on ice” (drawing on associations between captions and images learned during training), the beta testers found themselves merging language and their visual imaginations in novel ways. “It was like being set loose in a lab,” said an artist named Lapine in an interview with Ars. Lapine received early access to DALL-E 2 on April 6 and began sharing her generations on Twitter. “I was using descriptive language in a way I had not previously.”
