I got access to OpenAI's DALL-E, an AI that generates images, and you can too!

You can type anything in the text box and the AI will use your description to make a picture.

I've generated some images myself, so let me show you an example of how it works.
I typed this into the AI:
“A 3d rendering of a rainbow colored hot air balloon flying above a reflective lake with green, dense mountains behind it”
The AI works best with a wordy description, so it can generate something as close as possible to what you want. My description captures what I was after, and I took the idea from this image:

And I changed it up a bit.
I got these four pictures:





The AI gives you four pictures so that you can pick one and make variations of it. As soon as I received the images, I thought: "The mountains look like they've been imported from Unity Assets!", and I could understand why. Of course, these are not Unity Assets, but what the AI thinks of as "dense mountains". I could have described it better to the AI, in which case it would have given me a better result.
For example, with the fish bowl at the top of the page, the AI read:
“3D render of a cute tropical fish in an aquarium on a dark blue background, digital art”
The AI needs simple yet detailed input, and it returns the output as an image.
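If you would rather drive this from code than from the website, OpenAI also exposes image generation through an HTTP API. Here is a minimal Python sketch that only builds the request body you would send; the field names (`prompt`, `n`, `size`) follow OpenAI's Images API, but treat the exact values as assumptions to check against the current docs. Nothing is sent over the network, so no API key is needed.

```python
# Sketch: build the JSON body for a DALL-E image-generation request.
# Field names follow OpenAI's Images API (POST /v1/images/generations);
# the default size is an assumption to verify against the current docs.

def build_generation_request(prompt: str, n: int = 4, size: str = "1024x1024") -> dict:
    """Return the JSON body for an image-generation request."""
    if not prompt.strip():
        raise ValueError("The AI needs a non-empty prompt")
    return {
        "prompt": prompt,  # the wordy description
        "n": n,            # DALL-E shows you four results at once
        "size": size,      # output resolution
    }

payload = build_generation_request(
    "A 3d rendering of a rainbow colored hot air balloon flying above "
    "a reflective lake with green, dense mountains behind it"
)
print(payload["n"])  # 4
```

The `n` parameter is why you get four pictures back: the site asks for four candidates per prompt, and you choose among them.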
Also, I found the best-looking image of the bunch was Image #3, so I clicked to make a variation of it. The AI gave me four more images, just like the previous bunch. Here they are:





The AI is really trying to understand what I want here: it blurred and softened the mountains and brought the subject, the hot air balloon, forward. The water is a bit blurred and distorted too, but it correctly reflects the balloon. The sky looks like the sun is rising.
Each variation starts from the image I picked from the previous bunch (Image #3) and generates new images based on it.
For example, look at these two images:


They look almost identical, and that's how it's supposed to be: the AI made a variation based on the one I picked.
I can keep making variations of a variation, over and over. If I pick the best picture each time, the AI uses it as the base for the next batch.
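That pick-the-best-and-vary loop can be sketched in Python. The `generate` and `make_variations` functions below are stand-ins for DALL-E's two operations (in the real API these are separate generation and variation endpoints), and `pick_best` is a hypothetical stand-in for me eyeballing the batch and choosing a favorite, so everything here runs offline:

```python
import random

random.seed(0)  # make the "choice" repeatable for this sketch

def generate(prompt: str, n: int = 4) -> list[str]:
    """Stand-in for DALL-E's generation step: returns n image ids."""
    return [f"gen-{i}" for i in range(n)]

def make_variations(image_id: str, n: int = 4) -> list[str]:
    """Stand-in for the variation step: four new images based on one."""
    return [f"{image_id}-var-{i}" for i in range(n)]

def pick_best(images: list[str]) -> str:
    """Hypothetical: simulates me picking my favorite from the batch."""
    return random.choice(images)

# Generate a batch, pick the best, vary it, and repeat.
batch = generate("A 3d rendering of a rainbow colored hot air balloon")
for round_number in range(3):
    best = pick_best(batch)
    batch = make_variations(best)

print(best)  # the image id the loop settled on after three rounds
```

Each pass through the loop narrows in on the look you like, which is exactly the workflow the four-image batches are designed for.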
It can feel like the AI is learning from my choices, but the model itself doesn't update: each variation is simply a fresh generation conditioned on the image I picked, so the results drift toward what I keep choosing. For example, in the first image above the AI just made an image, but in the second it blurred the mountain and lake, fused them together, and made the subject, the hot air balloon, pop out above the rest. It also made the sky brighter with more clouds, rather than the dark, cloudless sky in picture 1.
Also, I should add that if you look at the images closely, specifically in the bottom-right corner, there is what looks like a test pattern. But it is not! As it turns out, that is the DALL-E 2 logo. It is even in the favicon (the icon that appears in the browser tab).
I will be posting more about DALL-E soon, so be on the lookout for that! And to finish it off, here are some images from DALL-E. Website: openai.com. Join the waitlist! Recommended article: Click here!
