• 8 Posts
  • 139 Comments
Joined 2 years ago
Cake day: June 21st, 2023


  • He goes into the details of the most upvoted Google Gemini fails, then branches out into how text/image/audio generative AI is being used on Facebook and Instagram to inflate traffic, as well as how you can now earn income by farming reactions on Twitter (with the blue checkmark).

    There’s a section on how Adobe is selling AI-generated images alongside their stock photos, but you can tell this video might be a little rushed: he concludes that people are paying $80 for one of these images, when in reality the $80 Adobe plan gives you 40 images (so about $2 per stock image). That, or he knows the statement is misleading but makes it anyway because it will drive his own reactions up (oh, the irony). https://web.archive.org/web/20240701131247/https://stock.adobe.com/plans

    Link to timestamp in video:
    https://youtu.be/UShsgCOzER4?t=894s

    On Adobe, he also touches on their updated ToS, which states that any images uploaded to Adobe can be used to train their own generative image model.

    The Netflix section talks about the “What Jennifer Did” documentary which used AI generated images and passed them off as real (or at least didn’t mention that the images were fake).

    Spotify: how audio generative AI is being used to create music that is now being published there, as well as their failed “projects/features” (car accessory, exclusive podcasts, etc.).

    Multiple times throughout the video he pushes the theory that most of these companies are also using AI generated content to drive engagement on their own site (or to earn income without needing to pay any artists).

    He definitely focuses only on the worst ways generative AI can be used, without touching on any realistic takes from the other side (just the extreme ones, like “AI music will replace the soulless, crappy music that’s being released now… and it will be better and have more soul!”).

    Still worth a watch; he brings up a ton of valid points about the market being oversaturated with AI-generated products.


  • despite the fact that hosting images requires orders of magnitude less bandwidth and storage than videos.

    In general, yes, when comparing images and video of the same resolution. But if I compare an 8K image to a low-quality, low-FPS video, I can easily get a few minutes’ worth of video for the size of that one picture.
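    A rough back-of-envelope sketch of that comparison. The compression ratio and bitrate below are assumptions picked for illustration, not measurements:

```python
# Back-of-envelope: one 8K image vs. low-bitrate video.
# The 8:1 JPEG ratio and 500 kbps bitrate are assumed values.

IMAGE_W, IMAGE_H = 7680, 4320           # 8K UHD resolution
raw_bytes = IMAGE_W * IMAGE_H * 3       # 24-bit color, uncompressed
jpeg_bytes = raw_bytes // 8             # assume ~8:1 JPEG compression

VIDEO_BITRATE = 500_000                 # 500 kbps, low-quality stream
video_seconds = jpeg_bytes * 8 / VIDEO_BITRATE

print(f"8K image (JPEG, assumed): {jpeg_bytes / 1e6:.1f} MB")
print(f"Equivalent video at 500 kbps: {video_seconds / 60:.1f} min")
```

    With these assumptions, one compressed 8K image (~12 MB) buys roughly three minutes of low-bitrate video, which is the "few minutes" ballpark.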

    As you said, it definitely costs money to keep these services running. What also matters is how well they can compress the video/images into a smaller size without losing too much quality.

    Additionally, with the way ML models have made their way into frame generation (such as DLSS), I wouldn’t be surprised if we start seeing a compressed format that drops frames from a video and regenerates them on playback (if that isn’t happening already).


  • SD? SD 3? The weights? All the above?

    Stable Diffusion is an open-source image-generating machine learning model (similar in function to Midjourney).

    Stable Diffusion 3 is the next major version of the model, and in a lot of ways it looks better to work with than what we currently have. However, until recently we were wondering whether we would even get the model, since Stability AI ran out of funding and is in the midst of being sold off.

    The “weights” are the numerical values that make up the neural network. By releasing the weights, they are essentially open-sourcing the model so that the community can retrain/fine-tune it as much as we want.
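    A minimal sketch of what “weights” means in practice. The tiny linear model below stands in for a diffusion network; the arrays are the weights, and the single gradient step on made-up data is a stand-in for fine-tuning:

```python
# "Weights" = the learned numbers inside a model. This toy linear
# model and its made-up data are stand-ins for a real network.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 2))   # released weights: just arrays

def model(x, w):
    return x @ w                    # forward pass uses the weights

# "Fine-tuning": nudge the released weights toward new data with
# one gradient step on a squared-error loss (hypothetical data).
x = np.ones((1, 3))
target = np.zeros((1, 2))
pred = model(x, weights)
grad = x.T @ (pred - target)        # dLoss/dW for 0.5*||pred - target||^2
weights -= 0.1 * grad               # updated weights: a fine-tuned model
```

    Releasing the weight arrays is what lets anyone run this kind of update themselves, which is exactly why the community cares about them.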

    They made a waitlist for those interested in being notified once the model is released, and they turned it into a pun by calling it a “weights list”.