RevISK said: "Organic beings generating works with intent = art"

Nice! I am partial to human generated art!
The overlord’s image is fascinating though; I'm curious about some of the details it arrived at.
Yes, those were the days! I taught myself to program in the 1970s on a PDP-8/S with an ASR-33 Teletype. I wrote programs to print out graphics using asterisks! It wasn't until college that I got into computer graphics and signal processing for real.

@BrownWolf I remember the days when high school computer classes would use dot matrix printers for pictures of naked chicks, using X's, double strikes, and 3 feet of perforated paper. To me, that AI drawing of the wolf and Land Cruiser is like a dream from back in the 1960s. Frankly, I'm floored by it.
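Those old asterisk-printout programs mostly boiled down to mapping brightness values to printable characters, row by row. For fun, here's a minimal sketch of that idea (the character palette and the sample pattern are made up for illustration):

```python
# A tiny throwback to teletype "asterisk graphics": map a grid of
# 0..9 brightness values to characters and print row by row.
# PALETTE and pattern are made-up examples, not from any real program.

PALETTE = " .:*X#"  # heavier characters for higher brightness values

pattern = [
    [0, 0, 3, 5, 3, 0, 0],
    [0, 3, 5, 9, 5, 3, 0],
    [3, 5, 9, 9, 9, 5, 3],
    [0, 3, 5, 9, 5, 3, 0],
    [0, 0, 3, 5, 3, 0, 0],
]

def to_ascii(grid):
    # Scale each cell (0-9) to an index into PALETTE and join into lines.
    rows = []
    for row in grid:
        rows.append("".join(PALETTE[v * (len(PALETTE) - 1) // 9] for v in row))
    return "\n".join(rows)

print(to_ascii(pattern))
# Prints a small diamond, e.g. the middle row comes out as ".:###:."
```

No double strikes here, but the principle is the same one those dot matrix pictures used.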
RevISK said: "Organic beings generating works with intent = art"

@RevISK Can't argue with your definition of art. I am surrounded by artists!
Quote: "@RevISK Can't argue with your definition of art. I am surrounded by artists!"

My brain leaked a bit (or is it a byte?) reading that.
WRT "curious about some of the details it arrived at", here's what Wikipedia has to say (https://en.wikipedia.org/wiki/DALL-E):
"DALL·E's model is a multimodal implementation of GPT-3 with 12 billion parameters which 'swaps text for pixels,' trained on text–image pairs from the Internet. In detail, the input to the Transformer model is a sequence of tokenized image caption followed by tokenized image patches. The image caption is in English, tokenized by byte pair encoding (vocabulary size 16384), and can be up to 256 tokens long. Each image is a 256×256 RGB image, divided into 32×32 patches of 4×4 each. Each patch is then converted by a discrete variational autoencoder to a token (vocabulary size 8192)."
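Out of curiosity, the numbers in that passage can be sanity-checked with a few lines of arithmetic. This sketch uses only the figures quoted above; note that 256 / 32 = 8, so each image token actually covers an 8×8-pixel patch, which makes the "4×4" in the quote look like a typo:

```python
# Back-of-the-envelope check of the DALL-E sequence layout described above.
# All constants come from the quoted Wikipedia passage.

IMAGE_SIZE = 256             # pixels per side of the RGB image
GRID = 32                    # token grid per side after the dVAE
PATCH = IMAGE_SIZE // GRID   # pixels per side covered by one token -> 8, not 4

MAX_CAPTION_TOKENS = 256     # BPE caption tokens (vocabulary size 16384)
IMAGE_TOKENS = GRID * GRID   # 1024 image tokens (vocabulary size 8192)

max_sequence = MAX_CAPTION_TOKENS + IMAGE_TOKENS
print(PATCH)         # 8
print(IMAGE_TOKENS)  # 1024
print(max_sequence)  # 1280
```

So the Transformer sees at most 1280 tokens: up to 256 for the caption followed by 1024 for the image.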
As with everything these days, I scraped it off the internet!