That's an open-source recreation based on DALL-E 1. It's different from DALL-E 2; if you want that, look for DALLE2-pytorch, but note that it hasn't been fully trained yet.
My prompt of 'penguin smoking a bong' does not disappoint on either, although Hugging Face more accurately portrayed the act of smoking, while Replicate gave me images of penguin-shaped bongs.
Hang in there — I only got my invitation a couple days ago. They're still rolling out invitations at a steady pace. But, just as a side note, one of the first things they tell you is that they own the full copyright for any images you generate.
You definitely have to play around with prompts to get a feel for how it works and to maximize the chance of getting something closer to what you want.
When did you sign up? I just signed up, and it sounds like it takes a year to get access, probably longer now. It's a bit frustrating: I didn't sign up when it came out because I didn't need it at the time, and now I'm facing a year-long wait when I do. These waitlist systems encourage everybody to sign up for everything on the off chance that they might need it later. I wish they'd just gone with a simple pay-as-you-go model (with free access for researchers and other special cases who request it), like Copilot does.
I signed up just over a month ago, and from what I've seen you won't have to wait more than two months for your invite. A lot of people who signed up around the same time as me have already received theirs, so it looks like they're speeding things up and getting ready for a public launch soon.
I don't think the provider of an AI image generator service can simply decide it owns the copyright to the output; only courts can settle that (perhaps the provider can require you to assign the copyright, though the output may not even be copyrightable). And didn't courts decide that the person who set up cameras for monkeys didn't own the copyright to the monkey photos?
Courts only decided the monkey couldn't copyright the photo (the PETA case).
The Copyright Office claimed, when it refused Slater, that works created by a non-human aren't copyrightable at all, but that was never challenged or decided in court. It's not a slam dunk, since the human had to do something to set up the situation, and he did it specifically to maximize the chance of the camera recording a monkey selfie.
If I set up a Rube Goldberg machine to snap the photo when the wind blows hard enough, how far removed from the final step do I have to get before I no longer own the result? That's the essence of the case had it gone to court, and probably the essence here too.
My guess is that the creativity needed for the prompt would make the output at least a jointly derived work, regardless of any assignment disclaimers (I'm pretty sure you can't casually transfer copyright ownership outside a work-for-hire agreement, only grant licenses), but IANAL and that's just a guess.
I've played with Midjourney for a while and just got my invite to DALL-E last night. One thing I think is really cool about Midjourney is the ability to give it image URLs as part of the prompt. I can't say I've had tremendous success with it, and it still feels a little half-baked, but I wish DALL-E had something along those lines (unless it does and I'm missing it). It's much easier to show examples of a particular style than to try to describe it, especially if it isn't something specifically named in the AI's training set.
DALL-E 2 isn't as flexible as the more open Colab notebooks here; you can do "variations" of an image but you can't edit an image except through inpainting (rough sketch of both below), so it's hard to generate "AI art" style images of the kind Midjourney and Diffusion are good at.
It also won't allow uploading images with faces in them.
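To make that distinction concrete, here's a rough sketch of what "variations" versus an inpainting-style edit look like programmatically. This assumes the legacy openai Python SDK (<1.0) and its Image.create_variation / Image.create_edit endpoints; the file names and the prompt are placeholders, not anything DALL-E specifically requires.

    # Rough sketch, assuming the legacy openai Python SDK (<1.0).
    # File names and the prompt are placeholders.
    import openai

    openai.api_key = "sk-..."  # your key here

    # "Variations": new images in the spirit of an existing image; no prompt involved.
    variations = openai.Image.create_variation(
        image=open("original.png", "rb"),
        n=2,
        size="512x512",
    )

    # Inpainting-style edit: only the transparent region of the mask is regenerated,
    # guided by the prompt; the rest of the image stays as-is.
    edit = openai.Image.create_edit(
        image=open("original.png", "rb"),
        mask=open("mask.png", "rb"),
        prompt="the same scene, but with a stormy sky",
        n=1,
        size="512x512",
    )

    print(variations["data"][0]["url"])
    print(edit["data"][0]["url"])

Everything else has to go through one of those two paths, which is the limitation described above.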
I'm also waiting but only put myself on the waitlist recently. I want to use it to generate synthetic image datasets from text descriptions. Very curious to explore the depth of what can be generated.
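For what it's worth, here's a minimal sketch of the kind of batch generation I have in mind, assuming programmatic access through the legacy openai Python SDK's Image.create endpoint; the prompts, image size, and output paths are just placeholders.

    # Rough sketch: build a small synthetic image dataset from text prompts.
    # Assumes the legacy openai Python SDK (<1.0) and API access; prompts,
    # size, and output paths are placeholders.
    import base64
    import pathlib

    import openai

    openai.api_key = "sk-..."  # your key here

    prompts = [
        "a red apple on a wooden table, studio lighting",
        "a green apple on a wooden table, studio lighting",
    ]

    out_dir = pathlib.Path("synthetic_dataset")
    out_dir.mkdir(exist_ok=True)

    for i, prompt in enumerate(prompts):
        # Ask for a few samples per prompt, returned as base64 so they can be saved directly.
        response = openai.Image.create(
            prompt=prompt,
            n=4,
            size="512x512",
            response_format="b64_json",
        )
        for j, item in enumerate(response["data"]):
            image_bytes = base64.b64decode(item["b64_json"])
            (out_dir / f"prompt{i:03d}_sample{j}.png").write_bytes(image_bytes)

Pairing each saved file with its prompt as the label or caption gives the dataset its structure; the open question for me is how much useful variety the model produces per prompt.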
I use both too. DALL-E has heavy restrictions: it's basically G-rated, so no horror, and no real-world subjects like "Donald Trump with a mohawk".
MJ falls apart when you ask for fine detail. It's a bit of the AI cliché where you have to describe the colour, shape, etc. in detail to mold what you want. Asking for a "monkey, gorilla, and chimp riding a bicycle" might get you a chimp riding a monkey-gorilla as a bicycle.
DALL-E is a lot better with words, though it seems to "smooth" some things: asking for a bone axe will still show regular axes.
But MJ is probably the best choice if you want to do landscapes and the like, especially horror/dystopian-themed ones.
Damn, I am salivating to get access to DALL-E for some projects. I've been on the waiting list for quite a while.
I've been experimenting with Midjourney, which is amazing for spooky/ethereal artwork, but it struggles with complex prompts and realism.