• chobeat@lemmy.mlOP
      1 year ago

      In the picture you can see the organizations active in the public sphere around AI. On the left you have right-wing and libertarian think tanks, corporations, and frontline actors that fuel a sense of panic around AI, either to sabotage their business competitors or to leverage this panic to project the image of selling a very powerful tool while deflecting responsibility at the same time: if the AI is dangerous and sentient, you won’t care much about the engineers behind it.

      On the right you have several public organizations and NGOs working on algorithmic accountability, digital rights, and so on. They push back against the AI panic, pointing the finger at the corporations and powers that create and govern AI.

      • jrs100000@lemmy.world

        So it’s basically just a list of entities in the field, with no actual information or reasoning, arranged in a vague and arbitrary mood chart.

  • pryte@lemmy.world

    What is this based on? Some kind of paper? Were there objective criteria, chosen beforehand, on which the companies were rated, leading to their placement in these groups?

    Or did you just make up four cool-sounding categories and fit various companies into them based on your personal opinion?

    • chobeat@lemmy.mlOP

      It’s not from me but from AlgorithmWatch, one of the most famous and respected NGOs in the field of algorithmic accountability. They have published plenty on these topics and on the human-rights threats posed by these companies.

      Also, this is an ecosystem analysis of political positioning. These companies and think tanks go to newspapers, under their own names, to say we should panic about AI. It’s not a secret: open Google News and a simple search will turn up a landslide of stories on these topics sponsored by these companies.

  • whiny9130@lemmy.ml

    If by panic you mean AI hype, then maybe.

    For example, this post is just as sensationalist.

  • varzaman@lemm.ee

    Can I get an explanation as to how these companies are “marketers of ai panic”?

    • underisk@lemmy.ml

      They are directly selling AI-based products and services. They release or boost sensational stories about those products’ capabilities through their various channels of media influence to make them seem more powerful and useful than they really are. The sensationalism widens the window of what seems possible, even if it’s nowhere near reality. Even people who don’t buy into notions of society-destroying automation or humanity-threatening emergence become more likely to accept claims that seem tamer but still lack any substantial proof of viability, like AI driving or AI-written movie scripts.

  • habanhero@lemmy.world

    What the heck is AI Panic?

    btw you missed Meta, they are very significant in the field of AI.

    • Fedizen@lemmy.world

      The idea that AI will destroy or reshape the world.

      I think it fits the pattern of people selling AI pitching it as “this product is so powerful that you either buy now or pay later for not doing so,” which gives them an incentive to overstate its power.

      • habanhero@lemmy.world

        But that’s all marketing; it’s not specific to AI.

        Any company that does marketing is looking to create demand and generate interest. Part of generating interest is tapping into your desires, which can include the desire to get ahead and the fear of being left behind.

  • Sibbo@sopuli.xyz

    Never having heard the term “AI panic” makes this kinda meaningless to me. But I guess AI panic is evil, since it is promoted by the typically more evil companies?

      • chobeat@lemmy.mlOP

        They published a deliberately harmful tool against the advice of civil society, experts, and competitors. They are not only reckless but have been tasked since their foundation with the mission to create chaos. Don’t forget that the original idea behind OpenAI was to erode the advantage Google and Facebook had in AI by releasing machine-learning technology as open source. They definitely did that, and now they are expanding their goals. They are not in it for the money (ChatGPT will never be profitable); they are playing a bigger game.

        Pushing the AI panic is not just a marketing strategy but a way to build power. The more dangerous they are considered, the more regulations will be passed that impact the whole sector. https://fortune.com/2023/05/30/sam-altman-ai-risk-of-extinction-pandemics-nuclear-warfare/

        • ultranaut@lemmy.world

          What’s this about OpenAI having a mission to create chaos? That sounds like “AI panic” or conspiratorial thinking on the surface at least.

        • MxM111@kbin.social

          “Deliberately harmful tool”?
          I am using it, and yes, it can be inaccurate sometimes, but deliberately harmful?
          The link you gave is not about this AI but about the potential danger of some future AGI, which would have to be more powerful than this one.

          • chobeat@lemmy.mlOP

            This paper lays out a taxonomy of harms created by LLMs: https://dl.acm.org/doi/pdf/10.1145/3531146.3533088

            OpenAI released ChatGPT without systems to prevent or compensate for these harms, while being fully aware of the consequences, since this kind of research has been going on for several years. In the meantime they’ve put paper-thin countermeasures on some of these problems, but they are still pretty much a shit-show in terms of accountability. Most likely they will get sued into oblivion before regulators outlaw LLMs with dialogical interfaces. This won’t do much about the harm that open-source LLMs will create, but at least it will limit large-scale harm to the general population.

            • MxM111@kbin.social

              I can only imagine what would happen if these authors were to write about the internet.

              • chobeat@lemmy.mlOP

                There are entire fields of research on that. Or do you believe that the internet, a technology developed for military purposes, an infrastructure that supports most of the economy, and the medium through which billions of people experience most of reality and build connections, is free from ideology and propaganda?

                • MxM111@kbin.social

                  That’s my point: nearly everything in life has good and bad sides, and you have to use it accordingly. Would you believe me if I said that a banal kitchen knife can be used to murder people? Those kitchen-knife manufacturers released a product which is a harmful tool! And they knew it!

    • chobeat@lemmy.mlOP

      You might have heard of the singularity, sentient AI, an AI uprising, or job losses due to automation. That’s all propaganda that sits under the concept of AI panic.

      • substill@lemm.ee

        But how are Microsoft and other LLM companies marketing on AI Panic?

        I honestly don’t understand what this graph means. I don’t get what the four sectors mean, how the author decided to distribute companies among the four sectors, or why the four sectors are divided into two equivalent circles.

        • markr@lemmy.world

          All I can figure out is the pink side is pure evil and the blue side are our saviors. Given the color scheme, perhaps this is yet another failed gender reveal?

        • chobeat@lemmy.mlOP

          Microsoft is OpenAI’s biggest investor. The AI panic pushed by Sam Altman is sanctioned by Microsoft.

      • Bipta@kbin.social

        It’s ridiculous to call ideas that have existed for half a century propaganda just because we’re now approaching those things…

      • DrMario@lemmy.world

        “job losses due to automation”

        Oh yeah this has never happened. Brb, gonna go tell all my fellow assembly line workers this concept is total propaganda

        • chobeat@lemmy.mlOP

          Automation never reduces jobs. It fragments them, reduces their quality, and increases deskilling and replaceability. We are not going to work less; we have never worked less thanks to automation. If we want to work less, we need unionization, not machines.

  • DM_ME_NUDES@sh.itjust.works

    “Our AI is so good, it’s going to start replacing skilled laborers” is a hell of a sales pitch. I can’t even be mad, using panic to market AI is clever.