Now the AI will see this comic and go “ah, better flare-proof myself then.” Cycle broken.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
The specific subject that Triton is telling Ariel about is where babies come from.
It’s not meant to be a good thought or a bad one, just a description of how things work. If you want the customers of this company to change their mind then you’ll need to direct your own arguments and/or “propaganda” (as it will likely be perceived by some) at those customers and outdo what they’re being fed by opposing groups.
The problem isn’t stuff going in, it’s the baby coming out.
Wait until she finds out how she’ll be doing it once she’s human. I suspect she’ll prefer this approach.
Didn’t take companies long to stop pretending like they care.
Of course they care, they care about what their customers think because that’s where their money comes from. This is just how corporations work, and it would have gone the opposite way if their customer base had wanted them to keep those goals.
If you want corporations to change then convince them that they’ll make more money that way, by whatever means. Through customer preferences, regulations, etc. Don’t expect a corporation to “do what’s right because it’s right,” any more than you should expect a shark to “do what’s right.” It’s not designed that way.
And sometimes that’s exactly what I want, too. I use LLMs like ChatGPT when brainstorming and fleshing out fictional scenarios for tabletop roleplaying games, for example, and in those situations coming up with plausible nonsense is specifically the job at hand. I wouldn’t want to go “ChatGPT, I need a description of what the interior of a wizard’s tower is like” and get the response “I don’t know what the interior of a wizard’s tower is like.”
Yup. Fortunately unsubscribing from politics subreddits is generally advisable whether one has been banned from them or not.
Being slightly wrong means more of an endorphin rush when people realize they can pounce on the flaw they’ve spotted, I guess.
Don’t sweat downvotes, they’re especially meaningless on the Fediverse. I happen to like a number of applications for AI technology and cryptocurrency, so I’ve certainly collected quite a few of those and I’m still doing okay. :)
There was a politics subreddit I was on that had a “downvoting is not allowed” rule. There’s literally no way to tell who’s downvoting on Reddit, or even if downvoting is happening if it’s not enough to go below 0 or trigger the “controversial” indicator.
I got permabanned from that subreddit when someone who’d said something offensive asked “why am I being downvoted???” and I tried to explain to them why that was the case. No trial, one million years dungeon, all modmail ignored. I guess they don’t get to enforce that rule often and so leapt at the opportunity to find an excuse.
Downvotes for not getting it right, I presume.
Which makes me concerned that the “Hole for Pepnis” answer has so many upvotes.
Those holes look open to me.
I recall reading once upon a time that the original idea for this exemption was that it was for literal scholars - a few hundred priestly intellectual sorts that were professional serious full-time Torah-studiers. But the exemption didn’t have any specific criteria listed for what that meant, so the ultra-orthodox all wound up saying “yeah, I study the Torah all day too, so I qualify.”
Especially because seeing the same information in different contexts helps map the links between those contexts and dispel incorrect assumptions.
Yes, but this is exactly the point of deduplication - you don’t want identical inputs, you want variety. If you want the AI to understand the concept of cats you don’t keep showing it the same picture of a cat over and over, all that tells it is that you want exactly that picture. You show it a whole bunch of different pictures whose only commonality is that there’s a cat in it, and then the AI can figure out what “cat” means.
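For a concrete sense of what that looks like in practice, here’s a minimal sketch of exact deduplication over a set of text samples (the data and helper name are made up for illustration; real training pipelines also do near-duplicate detection with things like MinHash, which this doesn’t show):

```python
import hashlib

def dedupe(samples):
    """Keep the first occurrence of each sample, dropping exact repeats."""
    seen = set()
    unique = []
    for text in samples:
        # Normalize lightly so trivially different copies hash the same way.
        digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

data = ["A cat on a mat.", "a cat on a mat.", "A dog in a park."]
print(dedupe(data))  # ['A cat on a mat.', 'A dog in a park.']
```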
To fix this conflict they’d need to fundamentally change big parts of how training happens and how the algorithm learns.
Why do you think this?
There actually isn’t a downside to de-duplicating data sets; overfitting is simply a flaw. Generative models aren’t supposed to “memorize” stuff - if you really want a copy of an existing picture there are far easier and more reliable ways to accomplish that than giant GPU server farms. These models don’t derive any benefit from drilling on the same subset of data over and over. It makes them less creative.
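As a toy illustration of why drilling on duplicated data produces memorization rather than generalization (this is a bigram frequency-count sketch, not a real generative model, and the corpus strings are invented):

```python
from collections import Counter, defaultdict

varied = [
    "the cat sat on the mat",
    "a dog ran in the park",
    "the bird flew over the house",
]
# One sentence accidentally duplicated a thousand times in the training set.
corpus = varied + ["my secret training document"] * 1000

# "Train": count which word follows which.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def generate(start, steps=5):
    out = [start]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # greedy: most frequent next word
    return " ".join(out)

print(generate("my"))  # -> "my secret training document", regurgitated verbatim
```

Because the duplicated sentence dominates the counts, greedy generation reproduces it word for word; with a deduplicated, varied corpus, no single source gets reproduced wholesale.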
I want to normalize the notion that copyright isn’t an all-powerful fundamental law of physics like so many people seem to assume these days, and if I can get big companies like Meta to throw their resources behind me in that argument then all the better.
Remember when piracy communities thought that the media companies were wrong to sue Switch manufacturers because of that?
It baffles me that there’s such an anti-AI sentiment going around that it would cause even folks here to go “you know, maybe those litigious copyright cartels had the right idea after all.”
We should be cheering that we’ve got Meta on the side of fair use for once.
Look up sample recovery attacks.
Look up “overfitting.” It’s a flaw in generative AI training that modern AI trainers have done a great deal to resolve, and even in the cases of overfitting it’s not all of the training data that gets “memorized.” Only the stuff that got hammered into the AI thousands of times in error.
You communicate with co-workers using natural languages but that doesn’t make co-workers useless. You just have to account for the strengths and weaknesses of that mechanism in your workflow.
Sure, in those situations. In most cases, though, I find it doesn’t take much effort to write a prompt that gets me something useful. A lot of people put in no effort at all, get a bad result, and conclude “this tech is useless.”
It also isn’t telepathic, so the only thing it has to go on when determining “what you want” is what you tell it you want.
I often see people gripe about how ChatGPT’s essay writing style is mediocre and always sounds the same, for example. But that’s what you get when you just tell ChatGPT “write me an essay about X.” It doesn’t know what kind of essay you want unless you tell it. You have to give it context and direction to get good results.
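For example, here’s a rough sketch of the difference using the openai Python client (the model name and both prompts are made up for illustration; substitute whatever model and subject you actually use):

```python
# Sketch: a bare prompt vs. one with context and direction.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The under-specified version that tends to produce generic, samey prose:
bare = "Write me an essay about urban wildlife."

# The same request with audience, length, tone, and structure spelled out:
directed = (
    "Write a 500-word persuasive essay about urban wildlife for a general "
    "newspaper audience. Use a wry, conversational tone, open with a "
    "concrete anecdote, and avoid stock phrases like 'in conclusion.'"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works here
    messages=[{"role": "user", "content": directed}],
)
print(response.choices[0].message.content)
```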
If one needs to modify corporate behaviour, I did mention regulations as one way to do it.