A traditional artist starts toying with AI image creation

Just a few months ago, I got access to OpenAI’s DALL·E 2 – a text-to-image AI software. You tell DALL·E 2 in plain language what you want to see, and the software generates the image using a process called diffusion: the model starts from random noise and progressively denoises it, step by step, toward an image that matches the text description.
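
To give a rough feel for that loop, here is a purely conceptual sketch in Python. The `predict_noise` function below is a fake stand-in I made up for illustration (the real thing is a huge trained neural network, and this is nothing like DALL·E 2's actual code); the point is just that the trick is subtracting estimated noise a little at a time:

```python
import numpy as np

def predict_noise(noisy, step, text_embedding):
    # Stand-in for the trained neural network. The real model is trained
    # to estimate how much of `noisy` is noise, conditioned on the text.
    # Here we fake it against a known target so the loop does something.
    target = np.full_like(noisy, 0.5)  # pretend this is what the text implies
    return noisy - target

def generate(text_embedding, steps=50, shape=(8, 8)):
    image = np.random.standard_normal(shape)  # start from pure noise
    for step in range(steps):
        # Remove a small fraction of the estimated noise each step,
        # so the image emerges gradually instead of all at once.
        image = image - predict_noise(image, step, text_embedding) / steps
    return image

print(generate(text_embedding=None).round(2))
```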

The results were absolutely fascinating. The imagery was not beautiful, but it was still startling to get an image immediately, out of nowhere. And you can get wild, weird combinations of concepts or subjects that would otherwise take hours of editing in photo software. Just a couple of weeks later, I got access to MidJourney, which uses similar technology and routinely produced more beautiful images. And shortly after that, I found I could install the open-source Stable Diffusion on my local machine, for free (MidJourney and DALL·E 2 are both proprietary, and either paid or limited-access).
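
For anyone curious about the local route, here is roughly what generating an image with Stable Diffusion looks like using the open-source Hugging Face diffusers library. This is one common setup, not the only one; the exact model ID, arguments, and hardware requirements vary by version:

```python
# Requires: pip install diffusers transformers torch
# and an NVIDIA GPU with a few GB of VRAM for this configuration.
import torch
from diffusers import StableDiffusionPipeline

# Download the Stable Diffusion weights from the Hugging Face hub
# (the model ID below is one widely used v1.x checkpoint).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Describe what you want to see in plain language, just like with DALL·E 2.
prompt = "an astronaut riding a horse, oil painting"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```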

These technologies are moving fast, and new, improved models are coming out routinely, along with complementary technologies that increase their power and accuracy. For the longest time (in this space, that means about 3 months), people lamented the crazy, swirly eyes you’d see in AI photos and illustrations, or the third arms and extra heads. But we see fewer and fewer of these trippy artifacts, and the images look more and more like real-life photographs, CG renders, drawings, paintings, prints, and so on.
