They probably mean DALL-E, an AI.
Just like Google has spent years training its AI to categorize images (aka find words to describe the pixels the AI is fed), this process has been reversed: the AI can generate images from text that describes a scene.
You type in “a calm morning in Alaska” and the AI will first parse the text to construct a “concept” of what it’s supposed to display, then draw on the millions of samples it has been fed to generate a new image based on your text.
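If you want to see that pipeline in action, here’s a minimal sketch using the open-source Hugging Face diffusers library to run Stable Diffusion locally. It assumes a CUDA GPU and uses one example public checkpoint name; the prompt and filename are just illustrations:

```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Download the public Stable Diffusion weights and move them to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; any compatible one works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The pipeline first encodes the prompt into text embeddings (the "concept"),
# then iteratively denoises random noise into an image matching that concept.
image = pipe("a calm morning in Alaska").images[0]
image.save("alaska.png")
```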
There are several AI tools out there that are publicly available which take a string of words and generate a picture from them. Some people are using these tools and repeatedly tweaking which words they put in to get a great picture with comparatively little effort on their part, as in the sketch below.
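To give a feel for that workflow, here’s a hedged sketch of prompt tweaking with the same diffusers setup: fixing the random seed means the only thing changing between images is the wording. The prompts and seed value are made up for illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Successive rewordings of the same idea -- generate, look, tweak, repeat.
prompts = [
    "a calm morning in Alaska",
    "a calm morning in Alaska, snowy mountains, soft light",
    "a calm morning in Alaska, snowy mountains, golden hour, oil painting",
]

for i, prompt in enumerate(prompts):
    # Reusing one seed means differences come from the words, not the dice.
    generator = torch.Generator("cuda").manual_seed(1234)
    pipe(prompt, generator=generator).images[0].save(f"draft_{i}.png")
```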
The story you’ve most likely heard is that an artist entered a “digital art” contest using an AI-generated picture and did really well. This upset the artist community because digital art is usually created by hand, like traditional painting, just with some extra tools and effects only available in software. This guy won the competition with something like 1-3 hours of “work” (primarily using an AI he didn’t even create himself) while others put in 50-100+ hours.
An equivalent would be winning a sculpting contest using a 3D printer, or an amateur rugby tournament by hiring 14 professional players to be on your team while you just jog around. Like, did you follow the rules to win? Technically, yes. Was it in the proper spirit of the competition? Absolutely not.
There are numerous AIs right now competing to be the best. The one that’s gotten the most headlines is DALL-E 2, though most of the images being shared come from one called Midjourney. Right now, the best one I’ve found by far is Stable Diffusion. You can apply for access to all of these; they’re not fully public yet, but it’s not hard to get in. One of the only truly public ones is DALL-E Mini, an early unofficial recreation of DALL-E. It’s… not great, and can only produce low-resolution images.
All of them, save DALL-E Mini, operate on a token system: you get a certain number of tokens each month or week, and generating an image costs tokens. It’s usually 1 token per image, but altering the parameters or ordering extra iterations can cost more. DALL-E provides 300 tokens a month, I believe, and you can purchase more for something like $1 per 100 tokens. The exception is Stable Diffusion, which uses a token system if you want to generate images on Stability AI’s servers, but the company is also supporting open-source versions of the code, so you can run it yourself. That has actually caused quite a bit of controversy, as a lot of people fear open-sourcing this kind of technology, especially with a model as capable as Stable Diffusion.
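For a sense of what “altering the parameters or ordering extra iterations” means in practice, here’s a sketch of the common knobs in the open-source diffusers version of Stable Diffusion. On a hosted service these would cost extra tokens; run locally, they only cost compute time. The parameter values are just illustrative defaults:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

images = pipe(
    "a calm morning in Alaska",
    num_images_per_prompt=4,   # extra variations; hosted services bill per image
    num_inference_steps=50,    # more denoising steps = slower, often cleaner
    guidance_scale=7.5,        # how strictly the image follows the prompt
    height=512, width=512,     # larger resolutions cost more tokens/compute
).images

for i, img in enumerate(images):
    img.save(f"variant_{i}.png")
```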
Controversy aside, it’s been producing a fantastic visual journal for my D&D campaign. Stable Diffusion is also the only AI I’ve found so far that can reliably produce legible text. The other AIs haven’t learned yet that text isn’t just random wiggly squiggles, but somehow Stable Diffusion has.