

An image generated by the Guardian using Dall-E mini and the prompt ‘selfie woven tapestry’, provided by the artist Erin M Riley.

Dall-E is not the first text-to-image AI model, but its sophistication, along with Dall-E mini’s popularity, has given new urgency to questions about the role of AI in artmaking. When Dall-E produces an image, who is the creator? Is it the person who typed in the text, the coders who trained the neural network, the photographers whose images appear in the network – all of the above?

OpenAI, the company behind Dall-E 2, acknowledges, however vaguely, that image generators may “reinforce or exacerbate societal biases”. The policy page says composite images may “contain stereotypes against minority groups”. The company’s rules state that the software prohibits the creation of “sexual or political content, or creating images of people without their consent”. But who decides what is political? Isn’t the very definition of “sexual” subjective?

We spoke to four artists working across textiles, photography, installation, video art, and oil painting about harnessing Dall-E’s trove of images – and asked them to provide us with an exclusive example of how they used the tool.

‘It’s not as infinite as my imagination’ – Martine Syms

An image generated by the artist Martine Syms using Dall-E mini and the prompt ‘photo of a purple gel capsule that says Martine’.

I’m at a break between shows and exploring Dall-E 2. I’ve been playing around with it, trying to break it or to see how far it goes or where the edge is. Some of this stuff you’re playing with online, it could feel like, “oh, it’s so infinite”, or sentient, but no, it’s not as infinite as my imagination. I’d been familiar with OpenAI through two projects I worked on – Neural Swamp, on view at the Philadelphia Museum of Art, and my first foray into AI with MythiccBeing.

I’d like to be able to combine images, like if you had the ability to mate two images and add context, write different scenarios. It’s more surprising to put in something not descriptive but more open-ended and let Dall-E try to figure out what an adjective means. I’m interested in generated imagery in relationship to motion, which I’m sure is coming sooner rather than later.

An image generated by Riley on Dall-E mini using the prompt ‘Barbies scissoring’.

I also used to use Flickr or Photobucket. Google used to be a cache of images that I used in the space of memory. And GAN imagery is the average tool; Dall-E is the next step in that direction.

When I was using other people’s images, I was using the essence of a selfie or a self portrait. Now, I look at library archives – like sexual education pamphlets or xeroxed brochures about domestic violence. People write about my work and say “sexy selfies”, which is definitely simplified. I don’t need faces, so there’s this blurring of identity. Selfies are kind of a check-in with the internet, like, “Hi, I exist. This is what my human body looks like.”

When I search on Dall-E, I’m asking it to be a form, like “tapestry” or “selfie tapestry” or “not your grandma’s quilt”. You have to put “woven tapestry”, which is interesting because to me, the meaning of tapestry is something that’s woven, but you have to add that language. When you put in “tapestry”, it depicts what you see in dorm rooms – like a printed piece of fabric, it’s not actually a woven piece of fabric. I did a “selfie woven tapestry” and a “car buried in the ground” and “gas pump in the woods covered in pennies” – the first few I did were kind of creepy. The idea that there are multiple versions in Dall-E is interesting – the thing is like showing you its sketches. When you’re an insecure artist, you want to show the best of the bunch – or the opposite, when you’re insecure you want to show the whole bunch.
