- cross-posted to:
- [email protected]
JXL is based.
I took my existing JPEG file and compressed it with JXL; the result was 15% smaller.
Then I decompressed it again into JPEG. The file was bit-for-bit identical to the original file (same hash). Blew my mind!
Directly using JXL is even better of course.
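If you want to check the round trip yourself, here’s a minimal sketch using the libjxl reference tools (it assumes cjxl and djxl are on your PATH; the file names are just examples):

```typescript
// Node sketch: verify that JXL's JPEG recompression is bit-for-bit lossless.
import { execFileSync } from "node:child_process";
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const sha256 = (path: string): string =>
  createHash("sha256").update(readFileSync(path)).digest("hex");

// cjxl recompresses JPEG input losslessly by default (lossless JPEG transcoding).
execFileSync("cjxl", ["photo.jpg", "photo.jxl"]);
// djxl reconstructs the original JPEG bitstream from the .jxl file.
execFileSync("djxl", ["photo.jxl", "roundtrip.jpg"]);

// Same hash = bit-for-bit identical, as described above.
console.log(sha256("photo.jpg") === sha256("roundtrip.jpg")); // expect: true
```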
So it’s called xlarge… And it makes files smaller.
Why.
The same amount of JXL data gives you more image than JPEG does? Also, it supports ridiculous resolutions (terapixel).
This has been a bit of a meme, but if you wanted to look at XL as “extra large”, then that could refer to the max resolution, which is far greater. I’ve seen people refer to it as “extra long-term”, but I think the real reason is they just wanted to fuck with us.
Google’s involvement should always raise concerns, but I guess it’s good Mozilla is trying to improve stuff.
This is from the Google Research team; they contribute a LOT to many FOSS projects. Google is not a monolith: each team is made up of often very different folks who have very different goals.
As long as their goals suit the company, sure. The endgame of Google is very clear, and it doesn’t include a free and open web.
They are making it seem just free and open enough to avoid regulation.
I don’t even think this is the case; Google does a lot pretty much everywhere. One example: they are pushing for locally run AI (Gemini, Stable Diffusion, etc.) to run on your GPU via WebGPU instead of needing cloud services, which is obviously privacy-friendly for a myriad of reasons. In fact, we now have multiple implementations of LLMs that run locally in the browser on WebGPU, and even a Stable Diffusion implementation (though I never got it to work, since my beefiest GPU is an Arc A380 with 6 GB of RAM).
They do other stuff too, but with the recent AI craze, I think this is probably the most relevant example.
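For what it’s worth, the local-first pattern is trivial to feature-detect. A rough sketch (assuming @webgpu/types for the TypeScript types; runLocalModel and callCloudApi are hypothetical stand-ins, not a real library):

```typescript
// Browser sketch: prefer a local WebGPU-backed model, fall back to the cloud.

// Hypothetical stubs standing in for a real in-browser runtime and a cloud API.
async function runLocalModel(device: GPUDevice, prompt: string): Promise<string> {
  return `local answer to: ${prompt}`; // a real runtime would run inference on the GPU here
}
async function callCloudApi(prompt: string): Promise<string> {
  return `cloud answer to: ${prompt}`; // this is where the prompt leaves your machine
}

async function getCompletion(prompt: string): Promise<string> {
  // navigator.gpu is the standard WebGPU entry point; it is absent when unsupported.
  const adapter = navigator.gpu ? await navigator.gpu.requestAdapter() : null;
  if (adapter) {
    const device = await adapter.requestDevice();
    return runLocalModel(device, prompt); // inference stays on your hardware
  }
  return callCloudApi(prompt); // no WebGPU, so the cloud it is
}
```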
LLMs are expensive to run, so locally running them saves Google money.
Ehh… not really. The amount of data you can get by snooping on LLM traffic is going to far outweigh the costs of running the LLMs.
There’s nothing technical stopping Google from sending the prompt text (and maybe the generated results) back to their servers; only political/social backlash over worsened privacy stands in the way.
I doubt that. I’m going to guess that Google is going towards a sort of “P2P AI”.
Well Google can still lock Mozilla out of the features and cooperation if they do something Google doesn’t like. It’s just one example. Nobody should ever trust Google.
Like what? I can kinda understand them not cooperating, but how on earth could they lock them out of features?
One example I can think of is Widevine DRM, which is owned by Google and is closed source: https://en.wikipedia.org/wiki/Widevine
Google currently allows Mozilla (and others) to distribute this within Firefox, which lets Netflix, Disney+, and various other video streaming services work in Firefox without any technical work by the user.
I don’t believe Google would ever willingly take this away from Mozilla, but it’s entirely possible that the movie and music industries pressure Google into reducing access to Widevine (the same way they pressured Netflix into adopting DRM).
Yeah, that could indeed happen, I suppose; I didn’t think of that. Though I wonder whether, because of EME, an alternative DRM solution could be viably implemented.
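For context, EME is exactly that pluggable layer: a page asks the browser for a key system by name, so an alternative CDM would mostly mean a different string (in practice, browsers only ship a handful). A quick probe sketch (the configuration is a minimal assumption, not what any real streaming service requests):

```typescript
// Browser sketch: probe for the Widevine CDM through the standard EME API.
// "com.widevine.alpha" is Widevine's real key system ID.
const config: MediaKeySystemConfiguration[] = [
  {
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  },
];

navigator
  .requestMediaKeySystemAccess("com.widevine.alpha", config)
  .then(() => console.log("Widevine CDM is available"))
  .catch(() => console.log("no Widevine; DRM'd streams won't play"));
```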
It’s 2024 and this guy is telling us that Google is not so bad.
It’s 2024 and this guy still can’t read.
It’s 2024
man…
Think the headline kind of buries the lede. Firefox is basically holding Google by the balls and saying “make a better decoder if you want this shit to become standard”, which imo is great. Force them to do what they should have done already.
deleted by creator
This is not right on multiple levels. Google, or at least the Chromium team, was not interested in implementing JXL at all.
Maybe I misunderstood what they were saying on GitHub then, cuz to me it sounded like they were using their leverage as Firefox to get a more secure decoder made.
Please let this happen!
Google’s involvement is weird, not for any conspiracy reasons but because the Chromium team previously cancelled JPEG-XL.
I have a nagging doubt: JPEG-XL has a very extensive feature set (text overlays, etc.). Meanwhile, tech/media consortia want a basic spec for AV1 + Opus on chip and to push that to all media-capable devices. We can expect AV1, AVIF, and Opus to be ubiquitous in a few years, so I think they will prioritise AVIF.
I did some reading on AV1 and its derivative formats. Are they any more accessible on Linux than HEVC/H.265? IIRC Fedora removed out-of-the-box support for them and a few other codecs over some patent concerns or something.