Opticallimits

Canon, Sony and Nikon united... against AI generated photos
https://nikonrumors.com/2024/01/02/nikke...tech.aspx/

Dunno what exactly they are preparing; we already have EXIF, but it can be easily edited.
I'm pretty sure part of the signature is embedded in the image itself. In a 12/14-bit RAW file you won't notice it.
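To see why you wouldn't notice it, here is a minimal sketch, purely illustrative (the real C2PA approach stores a signed manifest in metadata rather than in the pixels): hiding a few bits in the least-significant bits of hypothetical 14-bit RAW samples changes each sample by at most 1 count out of 16383, well below sensor noise.

# Illustrative sketch only: hiding a few signature bits in the LSBs of
# fake 14-bit RAW samples. Shows why such embedding would be invisible.
import numpy as np

def embed_bits(raw: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the LSB of the first len(bits) samples with the given bits."""
    out = raw.copy()
    flat = out.ravel()
    flat[:bits.size] = (flat[:bits.size] & ~np.uint16(1)) | bits.astype(np.uint16)
    return out

def extract_bits(raw: np.ndarray, n: int) -> np.ndarray:
    """Read back the LSB of the first n samples."""
    return raw.ravel()[:n] & np.uint16(1)

rng = np.random.default_rng(0)
raw = rng.integers(0, 2**14, size=(4, 6), dtype=np.uint16)   # fake 14-bit sensor data
sig_bits = rng.integers(0, 2, size=16, dtype=np.uint16)      # pretend signature fragment

stego = embed_bits(raw, sig_bits)
assert np.array_equal(extract_bits(stego, 16), sig_bits)
# Worst-case change per sample is 1 DN out of 16383, far below sensor noise.
print(np.abs(stego.astype(int) - raw.astype(int)).max())     # prints 0 or 1
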
It makes sense. Unfortunately I've only seen this thread now, and the originally linked article has vanished.

Never mind, I can see it: it was just a temporary glitch.

Ah, it's CAI/C2PA from Adobe. Yes, I know that. It's also promising because it would include the editing history, although that feature requires collaboration from image-editing software. Adobe's tools already support it, but other products, such as Capture One (C1), are lagging.
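Conceptually, the editing history is just a chain of declared actions bound to the pixels and signed by each tool. A rough sketch of what such a record could look like (tool names and field names here are simplified and hypothetical, not the exact C2PA schema):

# Loosely modeled on the C2PA "actions" idea: each tool appends a record of
# what it did, and the whole claim is bound to the image and signed.
import json, hashlib

manifest = {
    "claim_generator": "ExampleRawConverter/1.0",   # hypothetical tool name
    "actions": [
        {"action": "captured",          "when": "2024-01-02T10:15:00Z", "device": "Camera X"},
        {"action": "exposure_adjust",   "when": "2024-01-03T09:00:00Z", "tool": "ExampleRawConverter/1.0"},
        {"action": "ai_object_removal", "when": "2024-01-03T09:05:00Z", "tool": "ExampleRetouchPlugin/2.1"},
    ],
}

# The manifest is bound to the pixels by hashing the rendered image; the whole
# claim would then be signed with the tool's private key (signing omitted here).
image_bytes = b"...rendered image data..."
manifest["asset_hash"] = hashlib.sha256(image_bytes).hexdigest()
print(json.dumps(manifest, indent=2))
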

https://support.captureone.com/hc/en-us/...PA-support
It sounds like a noble cause: an encrypted watermark that tracks edits. But how useful is it really? Can you un-edit back to the original, or prove something is the original? Will it have two-step verification so only you can edit it?

I see so many holes I'm not even sure how or where to begin listing them. A lot of the editing we do is already done with AI. The Google Pixel camera does some cool stuff with AI that alters the original a lot. Ergo, at what point of editing is an image considered AI generated? Many AI images start from one or more real photos. Moreover, if it becomes a standard that camera makers and photo-editing software use, then why wouldn't someone be able to hack it? Using AI, no less!
Good points, but the approach is simpler: the technology lets you cryptographically prove that the image came out of a certain camera and underwent certain digital alterations. If one of the alterations is, e.g., AI-based removal of certain objects or reconstruction of partially obstructed ones, that will simply be recorded and the viewer will be aware of it. Then, depending on the context (photojournalism, a nature photo contest, etc.), it is a human judgement whether the photo counts as “genuine” or not. The same goes for the point that some cameras already apply AI while capturing the photo: you will have digital proof that the photo was taken with that camera, and can then infer the implications.
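As a minimal sketch of that principle, using a generic Ed25519 key pair as a stand-in for a key baked into the camera (not the actual C2PA toolchain):

# Sketch of the signing principle with the "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()          # would live inside the camera
camera_pub = camera_key.public_key()               # published by the manufacturer

capture = b"raw image bytes" + b'{"action": "captured", "device": "Camera X"}'
signature = camera_key.sign(capture)

# A viewer can prove the bytes came from that camera...
camera_pub.verify(signature, capture)              # passes silently

# ...and any undeclared alteration breaks the proof.
tampered = capture.replace(b"captured", b"generated")
try:
    camera_pub.verify(signature, tampered)
except InvalidSignature:
    print("edit not covered by the camera's signature")
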

Cryptography principles still work even though we have AI. If AI at some point makes it possible to break a signed document (e.g. by dramatically reducing the number of attempts needed in a brute-force attack)... well, then we will have a serious problem with digital signatures in general, not only with CAI :-)
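For scale, assuming a signature scheme with roughly 128-bit security, the brute-force arithmetic looks like this:

# Rough arithmetic behind the "brute force" worry: even an attacker trying
# 10**18 forgeries per second would need on the order of 2**128 attempts.
attempts = 2**128
per_second = 10**18
years = attempts / per_second / (3600 * 24 * 365)
print(f"~{years:.1e} years")   # ~1.1e+13 years, far longer than the age of the universe
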

Of course the whole thing is subject to possible bugs that might break it, as always with computer stuff.