Gaywallet (they/it)@beehaw.org to Technology@beehaw.org · 2 years ago
A jargon-free explanation of how AI large language models work (arstechnica.com)
PenguinTD@lemmy.ca · 2 years ago
Because in the end it's all statistics and math. Humans are full of mistakes (intentional or not), and living languages evolve over time (even their grammar), so whatever we're building "now" is a contemporary "good enough" representation.
kosmoz@beehaw.org · 2 years ago
Also, humans tend to be notoriously bad at both statistics and math :^)