• narc0tic_bird@lemm.ee · 3 months ago

    That should be an easy fix in a future software update: simply stop replicating eye movement whenever the user is looking at the keyboard.

    • GamingChairModel@lemmy.world · 3 months ago

      Sounds like that's what they already did: as soon as the virtual keyboard pops up, eye movement is no longer transmitted as part of the avatar.

      • narc0tic_bird@lemm.ee · 3 months ago

        Oh I see. According to the article:

        The GAZEploit researchers reported their findings to Apple in April and subsequently sent the company their proof-of-concept code so the attack could be replicated. Apple fixed the flaw in a Vision Pro software update at the end of July, which stops the sharing of a Persona if someone is using the virtual keyboard.

        An Apple spokesperson confirmed the company fixed the vulnerability, saying it was addressed in visionOS 1.3.
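The mitigation described in the article can be sketched in a few lines. This is a hedged, hypothetical illustration (the function and field names are made up, not Apple's API): the idea is simply to drop the gaze channel from the shared Persona data while the virtual keyboard is on screen.

```python
# Hypothetical sketch of the fix described above; not Apple's implementation.

def sanitized_frame(gaze, head_pose, keyboard_visible):
    """Return the avatar data that is safe to transmit.

    `gaze` is an (x, y) eye direction. It is replaced with None while the
    virtual keyboard is visible, so the shared Persona can't leak which
    keys the user is looking at.
    """
    return {
        "gaze": None if keyboard_visible else gaze,
        "head_pose": head_pose,
    }
```

With the keyboard visible, the transmitted frame carries no gaze data at all, which is stronger than merely blurring or smoothing it.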

  • kibiz0r@midwest.social · 3 months ago

    Seems like we’re going to be stuck in the uncanny valley of telepresence. The more fidelity we add, the more we’re able to pick up on microexpressions, subtle eye movements, and breathing, which helps trigger oxytocin and promote trust. But also, the more fidelity we add, the more attack surface we open up for malicious actors to exploit.

    • GamingChairModel@lemmy.world · 3 months ago

      Most people don’t look at the keys while typing on a physical keyboard, especially for muscle-memory input like passwords. And a Zoom call doesn’t convey facial data in three dimensions. The combination of a gaze-driven virtual keyboard and a three-dimensional avatar is what makes this new attack feasible.