- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
cross-posted from: https://midwest.social/post/14150726
But just as Glaze’s userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks disabling Glaze’s protections—including attack methods exposed in June by online security researchers in Zurich, Switzerland. In a paper published on Arxiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze’s protections could be “easily bypassed, leaving artists vulnerable to style mimicry.”
The big issue with all these data-poisoning attempts is that they amount to adding visible, watermark-like noise to images in the hope of tricking what are effectively extremely aggressive de-noising algorithms into associating training keywords with that destructive noise. In practice, the result has been either to improve the quality of models trained on datasets containing some poisoned images (apparently feeding more noise into the inscrutable anti-noise black box just makes it work better), or to be wiped out entirely by a single low-strength de-noise pass that cleans the poisoned images.
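To illustrate why that counter is so cheap, here's a minimal sketch of what a low-strength clean-up pass could look like using the diffusers img2img pipeline. The checkpoint name, prompt, filenames, and strength value are placeholder assumptions for the sketch, not anything specified in the article or the poisoning papers.

```python
# Sketch (assuming diffusers + a Stable Diffusion checkpoint) of a "single low
# de-noise pass": run the poisoned image through img2img at very low strength so
# the adversarial perturbation gets re-noised and denoised away while the
# underlying picture is mostly preserved.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint, not from the source
    torch_dtype=torch.float16,
).to("cuda")

poisoned = Image.open("poisoned_artwork.png").convert("RGB")  # hypothetical input file

cleaned = pipe(
    prompt="a painting",   # generic prompt; only light guidance is needed
    image=poisoned,
    strength=0.1,          # low strength: only the tail end of the denoising schedule runs
    guidance_scale=5.0,
).images[0]

cleaned.save("cleaned_artwork.png")
```

At a strength around 0.1 only the last few denoising steps run, which is generally enough to smooth out a high-frequency perturbation pattern without noticeably altering the artwork itself.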
Literally within hours of the poisoning models being made public, preliminary hobbyist testing found that they didn't really do what was claimed (they leave highly visible, distracting watermarks all over the image, and they don't hinder training as much as claimed, possibly not at all) and that they could be trivially countered as well.