Aside from the obvious Law is there another blocker for this kind of situation?

I imagine people would have AI representatives trained on each individual's personal beliefs and their ideal society.

What could that society look like, and how could it work? Is there a term for this?

  • wabafee@lemmy.worldOP · 5 days ago

    Direct technocracy, that sounds cool. In such a situation one could also vote and propose measures themselves. But not everyone is into politics; at least this way a person has a curated model built on their own beliefs and ideals, rather than a human representative who has interests of their own, or who answers to the biggest backer. Plus it could be corrected easily (easily here meaning it could potentially be re-trained, though that could become an issue if re-training involves a lot of steps). Then again, someone who votes directly might be at a disadvantage, since they would be interacting with AIs.

    • WhyJiffie@sh.itjust.works · 5 days ago

      AI is unreliable and too easy to influence. You really don't want this. It's just like online voting: you can't secure it, so there's no way to be sure it was actually you who selected the option you wanted, and not some program acting in your name.

    • BougieBirdie@lemmy.blahaj.zone · 5 days ago

      A common refrain I'm seeing in this post is that if there's something wrong with the model you can just retrain it. There are a couple of problems with that assumption.

      The state of the technology actually makes training a model somewhere between difficult and opaque. What I mean by this is that in order to train a model you need to give it data. A lot of data. An amount of data that a single person frankly either doesn't have access to or has no simple way to generate. And even then, there's no way to be sure how the model performs until after training completes, so even if you've collected all that data, you won't know whether it's an improvement.

      But for the sake of a hypothetical let’s ignore the current state of the technology and imagine that wasn’t a problem.

      If an AI representative votes for me and it gets that vote wrong, I won't know about it until after it has voted for me. And by then it's too late: I've already voted against my own interest.

      Also, it seems your position is that these AI reps are for people who care enough about politics to have opinions, but not enough to act on them. I doubt those people would ever verify that their model is actually voting in their favour. If they don't care enough to vote, then they don't care enough to audit their votes either.

      The most damning thing about using AI for policy, though: AI is NOT a decision-making tool. Ask anybody who actually works on AI. It might fool the people who use it, and the people who sell it to you will tell you anything to make an extra dollar. AI is just a formula that spits out words instead of numbers. Sometimes it strings together a cohesive sentence, and sometimes it hallucinates. There isn't any Intelligence happening inside the machine; it's all Artificial.

      AI is essentially autocomplete on steroids. It has no capacity to reason or argue; it just says whatever it has been trained to make you expect. It's not a thinking machine, and I sincerely doubt it ever will be.