

I tried replicating this myself and got nothing similar. It took enough coaxing just to get the model to stop citing existing tariffs, then more to make it talk about entire nations instead of tariffs on specific sectors, and even then it mostly just gave 10, 12, and 25% for most of the answers.
I have no doubt this is possible, but until I see some actual proof, this is entirely hearsay.
Seconded. I genuinely understand most of the hate against AI, but I can’t understand how some people are so completely against any possible implementation.
Sometimes an LLM is just good at rewording documentation to provide some extra context and examples. Sometimes it's good for reformatting notes into bullet points, or for tracking down that one word you can't quite put your finger on: you remember some details about it, but not enough for a thesaurus to find it.
Limited, sure, but not entirely useless. Of course, when my fucking charity fundraising platform starts adding features where you can speak to it and tell it "donate $x to x charity" instead of just clicking the buttons yourself, and that's where the development budget is going… yeah, I'm not exactly happy about that.