Aside from the obvious legal issues, is there another blocker for this kind of situation?

I imagine people would have their AI representatives trained on each individual's personal beliefs and their ideal society.

What could that society look like? How could it work? Is there a term for this?

  • wabafee@lemmy.worldOP

    That is interesting, thanks for this. I'll try to address some of your questions; let me know what you think.

    “what model do we use? Based on what data - since it is inherently biased? How often can we re-roll / regenerate an answer until we like its outcome? Who has oversight over it?”

    I imagine a government like this would still not be fully run by AI. Proposed laws would still have a human touch; perhaps the AI would act almost like an assistant per citizen. It would brief the citizen on the proposed laws and have the citizen vote on them, or, if the citizen gives consent, vote and argue on the floor on their behalf.
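    To make that consent-gated delegation concrete, here is a minimal Python sketch. Everything in it (`CitizenProfile`, `recommend_vote`, the `stances` dictionary) is hypothetical illustration, not a real system; `recommend_vote` is a stub standing in for an actual LLM call that would read the bill and the citizen's beliefs.

    ```python
    # Hypothetical sketch of the "assistant per citizen" flow described
    # above. All names are illustrative, not a real API.
    from dataclasses import dataclass

    @dataclass
    class CitizenProfile:
        name: str
        delegates_vote: bool  # consent flag: may the assistant vote for them?
        stances: dict         # issue -> stance, learned from the citizen

    def recommend_vote(profile: CitizenProfile, bill_topic: str) -> str:
        # Stand-in for an LLM call that reads the bill and the citizen's stances.
        return profile.stances.get(bill_topic, "abstain")

    def cast_vote(profile: CitizenProfile, bill_topic: str) -> str:
        recommendation = recommend_vote(profile, bill_topic)
        if profile.delegates_vote:
            # Citizen consented: the assistant votes on their behalf.
            return recommendation
        # Otherwise the assistant only briefs the citizen, who votes manually.
        print(f"Briefing {profile.name}: recommended '{recommendation}' on {bill_topic}")
        return input("Your vote (support/oppose/abstain): ")

    # Example: a citizen who consented to delegation.
    voter = CitizenProfile("Alice", delegates_vote=True,
                           stances={"transit funding": "support"})
    print(cast_vote(voter, "transit funding"))  # -> support
    ```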

    In the end, the president, or whichever human is at the very top, would still have the final say on whether to approve the proposed law.

    The model could be based on whatever is available today or in the future, or on a curated model. I agree that bias could be a huge blocker, though we humans are also inherently biased; maybe that is just something we need to be aware of, if it cannot be removed entirely, in this kind of government.

    If a law breaks the constitution, for example, there would still be a supreme court, made up entirely of humans, to declare the law invalid.

    All of this, rather than having a representative who may or may not be reachable, depending on how relevant you are to that human representative.

    “This is inherently flawed because it means that the existing chat history will sort of lead the future responses, but it’s incredibly limited due to context size requiring such vast amounts of vram / ram and processing power.”

    Wouldn't that be ideal? It would mean the LLM inherently knows your choices and beliefs, aside from the huge increase in processing needed. If a person decides their AI assistant no longer aligns with their views, they can then correct it.
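    As a rough illustration of the processing cost being discussed here, the Python toy below shows why chat history "leads" future responses and why it gets expensive: each turn, the full history is re-fed to the model, so the context grows with every exchange. The 1-token-per-word estimate is a crude stand-in for a real tokenizer, and all names are hypothetical.

    ```python
    # Toy illustration: the prompt (and the memory needed to process it)
    # grows with every exchange, because the whole history is resent.
    history = []

    def build_prompt(history, new_message):
        turns = "\n".join(f"{who}: {text}" for who, text in history)
        return f"{turns}\nuser: {new_message}\nassistant:"

    def token_estimate(text):
        return len(text.split())  # crude 1-token-per-word approximation

    for i in range(1, 4):
        prompt = build_prompt(history, f"question {i}")
        print(f"turn {i}: ~{token_estimate(prompt)} tokens in context")
        history.append(("user", f"question {i}"))
        history.append(("assistant", f"a much longer answer to question {i}"))
    ```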

    • DarkThoughts@fedia.io

      Oh, so you don't want an AI government, but an AI voter. That's probably even worse, to be honest.

      Wouldn't that be ideal? It would mean the LLM inherently knows your choices and beliefs, aside from the huge increase in processing needed.

      Only if it was trained on me and only me personally. But that would make me what we in German describe as a "Gläserner Mensch", gläsern coming from Glas, as in being a transparent person, which is a metaphor used in privacy topics. I'd have to lay myself open to an immense amount of data hoarding to create a robot that may or may not decide like I would. Aside from the terrible privacy violations and implications that this would entail for every single person, it would also just be a snapshot of the current me. Humans change over time. Our experiences and our perception of the world around us form and change us, constantly, and with that our decision making.
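      A tiny made-up simulation of that snapshot problem: freeze a model of "me" once, let my real views drift, and watch the gap grow. The numbers and the one-dimensional "stance" are pure illustration, nothing more.

      ```python
      # Made-up illustration of the snapshot problem: the AI voter is
      # trained once and frozen, while the real person keeps changing.
      import random

      random.seed(0)
      my_view = 0.0             # my stance on some issue, as a single number
      snapshot_model = my_view  # the AI voter, trained on today's me, then frozen

      for year in range(1, 6):
          my_view += random.uniform(-0.3, 0.3)  # experiences keep changing me
          gap = abs(my_view - snapshot_model)
          print(f"year {year}: gap between me and my AI voter = {gap:.2f}")
      ```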

      But coming back to the privacy issue… We already have huge problems on that front. Companies hoard massive amounts of user data, usually through very thinly veiled consent via those little checkbox agreements, or they just do it illegally now when it comes to their LLMs, where they tend to scrape everything on the internet regardless of consent or copyright infringement. I think the whole LLM topic is one that should go nowhere until we have a globally agreed framework of regulations on how we want to handle these and future technologies. If you make an LLM based on all the data on the internet, then such models should inherently be Free and Open Source, including everything they create. That'd be the only agreeable term in my book. Whether true AI in the future would even rely on data scraping is another topic, though.