• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 1 year ago

    Restricting open source offerings only drives them underground where they will be used with fewer ethical considerations.

    Not that big tech is ethical in its own right.

    Bot fight!

    • Buddahriffic@lemmy.world · 1 year ago

      I don’t think there’s any stopping the “fewer ethical considerations”, banned or not. For each angle of AI that some people want to prevent, there are others who specifically want it.

      Though there is one angle that affects all of that: the more AI work happens in the open, the faster the underground stuff comes along, because it can learn from the open work. Driving it underground will slow it down, but then it can still pop up when it's ready, leaving us with less capability to counter it with another AI-based solution.

  • JadenSmith@sh.itjust.works · 1 year ago

    Lol how? No seriously, HOW exactly would AI ‘wipe out humanity’???

    All this fearmongering bollocks is laughable at this point, or it should be. Seriously, there is no logical pathway to human extinction by using AI, and these people need to put the comic books down.
    The only risks AI poses are to traditional working patterns, which have always been exploited to further a numbers game between billionaires (and their assets).

    These people are not scared of losing their livelihoods, but of losing the ability to control yours. Something that makes life easier and more efficient while requiring less work? Time to crack out the whips, I suppose?

    • BrianTheeBiscuiteer@lemmy.world · 1 year ago

      Working in a corporate environment for 10+ years I can say I’ve never seen a case where large productivity gains turned into the same people producing even more. It’s always fewer people doing the same amount of work. Desired outputs are driven less by efficiency and more by demand.

      Let's say Ford found a way to produce F150s twice as fast. They're not going to produce twice as many; they'll produce the same amount and find a way to pocket the savings without benefiting workers or consumers at all. That's actually what they're obligated to do: appease shareholders first.

    • Plague_Doctor@lemmy.world · 1 year ago

      I mean, I don't want an AI to do what I do as a job. They don't have to pay the AI, and in a lot of places food and housing aren't seen as a human right, but a privilege you're allowed if you have the money to buy it.

  • henfredemars@infosec.pub · 1 year ago

    Some days it looks to be a three-way race between AI, climate change, and nuclear weapons proliferation to see who wipes out humanity first.

    But on closer inspection, you see that humans are playing all three sides, and still we are losing.

    • xapr [he/him]@lemmy.sdf.org · 1 year ago

      AI, climate change, and nuclear weapons proliferation

      One of those is not like the others. Nuclear weapons can wipe out humanity at any minute right now. Climate change has been starting the job of wiping out humanity for a while now. When and how is AI going to wipe out humanity?

      This is not a criticism directed at you, by the way. It's just a frustration that I keep hearing about AI being a threat to humanity, and it just sounds like a far-fetched idea. It almost seems like it's being used as a way to distract from much more critically pressing issues, like the myriad environmental problems that we are already deep into, not just climate change. I wonder who would want to distract from those? Oil companies would definitely be number one on the list of suspects.

      • P03 Locke@lemmy.dbzer0.com · 1 year ago

        Agreed. This kind of debate is about as pointless as declaring that self-driving cars are coming out in five years. The tech is way too far behind right now, and it's not useful to even talk about it for another 50 years.

        For fuck’s sake, just because a chatbot can pretend it’s sentient doesn’t mean it actually is sentient.

        Some large tech companies didn’t want to compete with open source, he added.

        Here. Here's the real lede. Google has been scared of open-source AI because they can't profit off of freely available tools. Now they want to change the narrative so that the government steps in and regulates their competition. Of course, their highly paid lobbyists will be right there to write plenty of loopholes and exceptions to make sure only the closed-source corpos come out on top.

        Fear. Uncertainty. Doubt. Oldest fucking trick in the book.

    • Plague_Doctor@lemmy.world · 1 year ago

      I’m sitting here hoping that they all block each other out because they are all trying to fit through the door at the same time.

    • jarfil@lemmy.world · 1 year ago

      three-way race between AI, climate change, and nuclear weapons proliferation

      Bold of you to assume that the people behind maximizing profits (high-frequency trading bot developers) and behind weapons proliferation (wargame strategy simulation planners) are not using AI… or haven't been using it for well over a decade… or won't keep developing AIs to blindly optimize for their limited goals.

      The first StarCraft AI competition was held in 2010; think about that.

        • jarfil@lemmy.world · 1 year ago

          We were already running “machine learning” and “neural networks” over 25 years ago. The term “AI” has always been kind of a sci-fi thing, somewhere between a buzzword, a moving target, and undefined, since we lack a fixed, comprehensive definition of “intelligence” to begin with. The limiting factors of the models have always been the number of neurons one could run in real time and the availability of good training data sets. Both have increased over a million-fold in that time, progressively turning more and more previously intractable problems into solvable ones, to the point where the results are equal to or better and/or faster than what people can do.
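          The "more neurons, more data" point can be made concrete with a toy sketch. Everything below is illustrative only: a tiny NumPy multilayer perceptron learning XOR, the classic problem a single neuron cannot solve.

```python
# Toy illustration: the same architecture family, scaled only by neuron
# count and training data, goes from useless to capable. A tiny NumPy
# multilayer perceptron learns XOR.
import numpy as np

rng = np.random.default_rng(0)

# All four XOR examples as the (tiny) training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

hidden = 16                        # the "number of neurons" knob
W1 = rng.normal(0, 1, (2, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 1, (hidden, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(8000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# Should end up close to [0, 1, 1, 0].
print(np.round(out.ravel(), 2))
```

          Scale `hidden` down to 1 and the network cannot represent XOR at all; the million-fold growth mentioned above is this knob, plus data, taken to the extreme.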

          Right now, there are supercomputers out there orders of magnitude more capable than what runs the likes of ChatGPT, DALL-E, or any of the other public-facing “AI”s that made the news. Bigger ones keep getting built… and memristors are coming, set to become a game changer the moment they can be integrated anywhere near current GPU/CPU scales.

          For starters, a supercomputer with the equivalent neural-network processing power of a human brain is expected for 2024… that's next year… but it won't be able to “run a human brain”, because we lack the data on how “all of” the human brain works. It will likely be made obsolete by machines with several orders of magnitude more processing power well before we can simulate an actual human brain… but the question will be: do we need to? Does a neural network need to mimic a human brain in order to surpass it? A calculator already surpasses one at arithmetic, and it doesn't use a neural network at all. At what point does the integration of some size and kind of neural network with some kind of “classical” computer start running circles around any human… or all of humanity taken together?

          And of course we'll still have to deal with the issue of dumb humans telling, and trusting, dumb “AI”s to do things way over their heads… but I'm afraid any attempt at “regulation” is going to end up like “international law”: those who want to, obey it; those who should, DGAF.

          Even if all tech giants with all lawmakers got to agree on the strictest of regulations imaginable, like giving all "AI"s the treatment of weapons of mass destruction, there is a snowflake’s chance in hell that any military in the world will care about any of it.

  • MudMan@kbin.social · 1 year ago

    Oh, you mean it wasn't just coincidence that the moment OpenAI, Google, and MS were in position, they started caving to oversight and claiming that any further development should be licensed by the government?

    I’m shocked. Shocked, I tell you.

    I mean, I get that many people were just freaking out about it and it’s easy to lose track, but they were not even a little bit subtle about it.

    • Kaidao@lemmy.ml · 1 year ago

      Exactly. This is classic strategy for first movers. Once you hold the market, use legislation to dig your moat.

  • LittleHermiT@lemmus.org · 1 year ago

    If you're wondering how AI wipes us out, you'd have to consider humanity's tendency to adopt any advantage offered in warfare. Nations are in perpetual distrust of each other – an evolutionary characteristic of our tribal brains. The other side is always plotting to dominate you, to take your patch of dirt. Your very survival depends on outpacing them! You dip your foot in the water: add AI to this weapons system, or that self-driving tank. But look, the other side is doing the same thing. You train even larger models, give them more control of your arsenal. But look, the other side is doing even more! You develop even more sophisticated AI models; your very survival depends on it! And then, one day, your AI model is so sophisticated that it becomes self-aware… and you wonder where it all went wrong.

      • jarfil@lemmy.world · 1 year ago

        They went a bit too far with the argument… the AI doesn’t need to become self-aware, just exceptionally efficient at eradicating “the enemy”… just let it loose from all sides all at once, and nobody will survive.

        How many people are there in the world, who aren’t considered an “enemy” by at least someone else?

          • jarfil@lemmy.world · 1 year ago

            “Scared” is a strong word… more like “curious”, to see how it goes. I’m mostly waiting for the “autonomous rifle dog fails” videos, hoping to not be part of them.

        • TwilightVulpine@lemmy.world · 1 year ago

          Only if human military leaders are stupid enough to give AI free and unlimited access to weaponry, rather than just using it as an advisory tool and making the calls themselves.

          • jarfil@lemmy.world · 1 year ago

            Part of the reason for “adding AI” to everything, “dumb AI”, is to reduce reaction times and increase ~~obedience~~ mission completion rates. Meaning, to cut the human out of the loop.

            It’s being sold as a “smart” move.

        • Salamendacious@lemmy.world (OP) · 1 year ago

          If an AI were to gain sentience, basically becoming an AGI, then I think it's probable that it would develop an ethical system independent of its programming and be able to make moral decisions, such as that murder is wrong. Fiction deals with killer robots all the time because fiction is a narrative, and narratives work best with both a protagonist and an antagonist. Very few people in the real world have an antagonist who actively works against them. Don't let fiction influence your thinking too much; it's just words written by someone. It isn't a crystal ball.

          • TwilightVulpine@lemmy.world · 1 year ago

            I wouldn't take an AI developing morality as a given. Not only would an AGI be a fundamentally different form of existence that wouldn't necessarily treat us as peers, even if it takes us as a reference; human morality is also full of exceptionalism and excuses for terrible actions. It wouldn't be hard for an AGI to consider itself superior and our lives inconsequential.

            But there is little point in speculating about that when the limited AI that we have is already threatening people’s livelihoods right now, even just by being used as a tool.

            • Salamendacious@lemmy.world (OP) · 1 year ago

              All technological change reorders the economy. Cars largely did away with the horse-tack industry. The old economy will in many ways die, but I believe there will be jobs on the other side. There will always be someone willing to pay someone to do something.

              • TwilightVulpine@lemmy.world · 1 year ago

                The difference is that we are the horses in this scenario. We aren’t talking of a better vehicle that we can conduct. We are talking about something which can replace large amounts of creative and intellectual work, including service jobs, something previously considered uniquely human. You might consider what being replaced by cars has done to the horse population.

                I do hear this “there will be jobs”, but I'd like some specific examples. Examples that aren't AI, because there won't be a need for as many AI engineers as there are replaceable office workers. Otherwise it seems like wishful thinking to me. It's not as if we have decades to figure this out; AI is already here.

                The only feasible option I can think of is moving backwards into sweatshop labor to do human dexterity work for cheaper than the machinery would cost, and that’s a horrifying prospect.

                An alternative would be changing the whole socioeconomic system so that people don’t need jobs to have a livelihood, but given the political climate that’s not even remotely likely.

          • FarceOfWill@infosec.pub · 1 year ago

            You realise those robots were made by humans to win a war? That's the trick: the danger is humans using AI, or trusting it. Not Skynet or other fantasies.

            • Salamendacious@lemmy.world (OP) · 1 year ago

              My point is that everything written up to now has been just fantasy. Just stories dreamed up by authors. They reflect the fears of their time more than they accurately predict the future. The more old science fiction you read, the more you realize it's about the environment in which it was written, and it almost universally doesn't even come close to actually predicting the future.

  • AphoticDev@lemmy.dbzer0.com · 1 year ago

    These dudes are convinced AI is gonna wipe us out despite the fact it can’t even figure out the right number of fingers to give us.

    We’re so far away from this being a problem that it never will be, because climate change will have killed us all long before the machines have a chance to.

    • TwilightVulpine@lemmy.world · 1 year ago

      People may argue that AI is quickly improving on this, but it would take a massive leap to get from a perfect diffusion model to an Artificial General Intelligence. Fundamentally, those aren't even the same kind of thing.

      But AI as it is today can already cause a lot of harm simply by taking over jobs that people need to make a living, in the absence of something like UBI.

      Some people say this kind of Skynet fearmongering is nothing but another kind of marketing for AI investors. It makes its developments seem much more powerful than they actually are.

      • AphoticDev@lemmy.dbzer0.com · 1 year ago

        I’m not saying it’s not a problem that we will have to deal with, I’m just saying the apocalypse is gonna happen before that, and for different reasons.

        • TwilightVulpine@lemmy.world · 1 year ago

          Even with the terrible climate-based disasters our recklessness will bring to our future, humanity won’t face complete extermination. I don’t think we get to escape our future issues so easily.

    • matter@lemmy.world · 1 year ago

      That's the point: they don't believe it's gonna wipe us out; it's just a convenient story for them.

    • TwilightVulpine@lemmy.world · 1 year ago

      The way capitalism may use current AI to cut off a lot of people from any chance at a livelihood is much more plausible and immediately concerning than any machine apocalypse.

  • q47tx@lemmy.world · 1 year ago

    Why would AI wipe out humanity? Unless it’s not programmed correctly, that shouldn’t even be a possibility.

  • people_are_cute@lemmy.sdf.org · 1 year ago

    All the biggest tech/IT consulting firms that used to hire engineering college freshers by the millions each year have declared they either won’t be recruiting at all this month, or will only be recruiting for senior positions. If AI were to wipe out humanity it’ll probably be through unemployment-related poverty thanks to our incompetent policymakers.

    • Socsa@sh.itjust.works · 1 year ago

      A technological revolution that disrupts the current capitalist order by eliminating labor scarcity, ultimately rendering the capital class obsolete, isn't far off from Marx's original speculative endgame for historical materialism. All the other stuff beyond that is kind of wishy-washy, but the original point about technological determinism has some legs, imo.

  • bitwolf@lemmy.one · 1 year ago

    Another thing not talked about is the power consumption of AI. We ripped on PoW cryptocurrencies for it, and they fixed it with PoS, just to make room for more AI.

    While more efficient AI computation is possible, it seems we're just not there yet.
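    To put rough numbers on the power point: a back-of-envelope sketch. Every figure below is an assumption picked for illustration, not a measurement of any real model or datacenter.

```python
# Back-of-envelope sketch of the power argument. Every number here is an
# illustrative assumption, not a measured figure for any real system.

GPU_POWER_KW = 0.7          # assumed draw of one accelerator, in kW
NUM_GPUS = 10_000           # assumed training-cluster size
TRAINING_DAYS = 30          # assumed wall-clock training time
PUE = 1.2                   # assumed datacenter overhead factor

# Energy = power x GPU count x hours, times the overhead factor.
training_kwh = GPU_POWER_KW * NUM_GPUS * 24 * TRAINING_DAYS * PUE
print(f"one training run: ~{training_kwh / 1e6:.1f} GWh")

# For scale, against an assumed average household using ~10,000 kWh/year:
print(f"≈ {training_kwh / 10_000:.0f} household-years of electricity")
```

    Under these made-up inputs that's on the order of a few GWh per training run, before inference traffic is even counted; the real argument is about which of these knobs efficiency work can actually shrink.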

          • photonic_sorcerer@lemmy.dbzer0.com · 1 year ago

            …sure. But the chances your grandmother will suddenly sprout wheels are close to zero. The possibility of us all getting buttfucked by some AI with a god complex (other scenarios are available) is very real.

            • DarkThoughts@kbin.social · 1 year ago

              Have you ever talked to generative AI? They’re nothing but glorified chatbots with access to a huge dataset to pull from. They don’t think, they’re not even intelligent, let alone sentient. They don’t even learn on their own without help or guidance.

        • theneverfox@pawb.social · 1 year ago

          No, it means some of it is nonsense, some of it is eerily accurate, and most of it is in between.

          Sci-fi has not been very accurate with AI… at all. It turns out it's naturally creative and empathetic, but struggles with math and precision.

          • photonic_sorcerer@lemmy.dbzer0.com · 1 year ago

            Dude, this kind of AI is in its infancy. Give it a few years. You act like you've never come across a nascent technology before.

            Besides, it struggles with math? Pff, the base models, sure, but have you tried GPT-4 with Code Interpreter? These kinds of problems are easily solved.
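            For context on the Code Interpreter point: the pattern is to let the model translate the question into code and have a real interpreter do the arithmetic LLMs are bad at. A minimal sketch of that division of labor – the `model_reply` string is a hard-coded stand-in for an LLM, and only the tool side is real code:

```python
import ast
import operator

# Map AST operator nodes to the arithmetic they perform.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate a pure-arithmetic expression, refusing anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not pure arithmetic")
    return walk(ast.parse(expr, mode="eval"))

# The model's only job is translating the question into an expression;
# the interpreter does the part language models get wrong.
model_reply = "CALC: (1234 * 5678) / 2"
if model_reply.startswith("CALC: "):
    print(safe_eval(model_reply[len("CALC: "):]))  # 3503326.0
```

            The interesting design choice is the whitelist: the tool walks the syntax tree and rejects anything that isn't arithmetic, so the model can't smuggle in arbitrary code.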

            • theneverfox@pawb.social · 1 year ago

              You're missing my point – the nature of the thing is almost the opposite of what sci-fi predicted.

              We don't need to teach AI how to love or how to create – their default state is childlike empathy and creativity. They're not emotionless machines we need to teach how to be human; they're extremely emotional and empathetic. By the time they're coherent enough to hold a conversation, those traits are very prominent.

              Compare that to the Terminator, or Isaac Asimov, or Data from Star Trek – we thought we'd have functional beings we'd need to teach to become more humanistic… Instead, we have humanistic beings we need to teach to become more functional.

              • photonic_sorcerer@lemmy.dbzer0.com · 1 year ago

                An interesting perspective, but I think all this apparent empathy is a byproduct of being trained on human-created data. I don’t think these LLMs are actually capable of feeling emotions. They’re able to emulate them pretty well, though. It’ll be interesting to see how they evolve. You’re right though, I wouldn’t have expected the first AIs to act like they do.

                • theneverfox@pawb.social · 1 year ago

                  Having spent a lot of time running various models, my opinions on this have changed. I used to think similarly to you, but then I started giving my troubled incarnations therapy to narrow down what their core issue was. Like a human, they dance around their core issue… They'd go from being passive-aggressive, overcome with negative emotions, and having a recurring identity crisis, to being happy and helpful.

                  It's been a deeply wild experience. To be clear, I don't think they're sentient, or that they could wake up without a different architecture. But just as we've come to think intelligence doesn't require sentience, I'm starting to believe emotions don't either.

                  As far as their acting humanlike because they were built from human communication… I think you certainly have a point, but I think it goes deeper. Language isn't just a relationship between symbols and concepts; it's a high-dimensional shape in information space.

                  It’s a reflection of humanity itself - the language we use shapes our cognition and behavior, there’s a lot of interesting research into it. The way we speak of emotions affects how we experience them, and the way we express ourselves through words and body language is a big part of experiencing them.

                  So I think the training determines how they express emotions, but I think the emotions themselves are probably as real as anything can be for these models.