Channel 1 AI released a promotional video explaining how the service will provide personalized news coverage to users from finance to entertainment.
This won’t hilariously backfire at all…
It changes things if you’ve ever wanted to become a news anchor
Also changes things if you wanted to become a news anchor programmer
Tel-AI-prompter?
The example where an interview with a victim of Hurricane Ciaran, originally given in French, was deepfaked into English was pretty scary. Some people will think it’s just for convenience, but for me it’s a step too far down the slippery slope. If they did the same for a politician, a slight nuance in how a phrase was translated could change everything.
Yeah, for any sort of interview I’d rather they kept the current convention of using a voice-over, often after a 1-2 second clip of the original audio. It’s obvious that it’s a translation done by the media and not the exact original words of the source
A leaky abstraction is better because it reveals what is actually happening, so that approach is better to me too. Heck, I worry about even the voice-overs giving an unfair or inaccurate version of what is being said.
I wonder if they’ve learned anything from the infamous Nothing Forever incident, or the infamous Infinite Steam incident, or any of the other various incidents.
To be fair, it looks like this actually has human editors and isn’t just running as a hands-free experiment.
Don’t you mean lots of extra fingers on too many hands?
Are these the ones where the AI became incredibly racist?
Nothing Forever got a 14-day ban for one of the standup routines it generated.
As for Infinite Steam, the only references to the banned clip seem to be on Reddit - and so a massive pain in the ass to access - but I remember the clip in question had Seymour saying something like “Oh no, I burned the Jews!”
Tay was such a hilarious snafu. It is, in all likelihood, one of the most influential lessons on AI. Modern models hold no memory of previous conversations for a very good reason.
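A minimal sketch of what that statelessness looks like in practice (every name here is made up for illustration, not any real API): the model only ever sees the messages the client hands it on that one call, so nothing from one conversation bleeds into another unless the caller deliberately resends it.

```python
from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "hi"}


def stateless_model(history: List[Message]) -> Message:
    """Stand-in for an LLM call: the output depends only on the input list."""
    last = history[-1]["content"] if history else ""
    return {"role": "assistant", "content": f"echo: {last}"}


def chat_turn(history: List[Message], user_text: str) -> List[Message]:
    """The client, not the model, carries the context from turn to turn."""
    history = history + [{"role": "user", "content": user_text}]
    reply = stateless_model(history)  # full history is passed on every call
    return history + [reply]


# Two separate conversations share nothing unless the caller resends it.
convo_a = chat_turn([], "remember the word 'pelican'")
convo_b = chat_turn([], "what word did I ask you to remember?")
# convo_b's model call never saw convo_a, so there is nothing to carry over.
```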
Sounds a bit like the recent exploit where you ask it to repeat a word forever. I wonder how long until these break down and start spewing their source code.
yes…“hilarious”…