• R0cket_M00se@lemmy.world · 1 year ago

    An interesting concept, but it would require running it through a previously untrained neural network to keep the concept of consciousness from leaking in, which means the test is useless on already-deployed models.

    But which AIs are we actually worried about going sapient? The ones we're already using. I suppose the test could still be applied usefully if you intended to probe a network and its hardware for the capacity for consciousness before deploying it in a live environment and training it, though.

    • William Yam@social.williamyam.com · 1 year ago

      I think this line of thinking smuggles in a few assumptions. Ask yourself: where is ChatGPT? Is it located on your computer, or in the cloud? If we cannot even specify a spatio-temporal location for the chatbot, then we cannot begin to assign a notion of consciousness to a “thing” that isn’t even a “thing”. The real problem of consciousness is dividing the world into self and non-self: what counts as the chatbot, and what counts as the “external” world?