Good points, but the approach is simpler: the technology lets you cryptographically prove that the image came from a certain camera and underwent certain digital alterations. If one of the alterations is, e.g., AI-based removal of certain objects or reconstruction of partially obstructed ones, that alteration is simply recorded and the viewer will be aware of it. Then, depending on the context (photojournalism, nature photo contest, etc.), it will be a human judgment whether the photo counts as "genuine" or not. The same applies to the point that some cameras already apply AI while capturing: you will have digital proof that the photo was taken with that camera, and can then infer the implications.
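To make the idea concrete, here is a minimal sketch of a signed provenance manifest: the image hash and the list of recorded edits are bound together by a signature, so a viewer can verify both that the image is untouched and which alterations were declared. This is only an illustration of the principle, not the real CAI/C2PA format; real systems use asymmetric signatures from a hardware-protected camera key, whereas this toy uses a symmetric HMAC with a made-up key.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a camera's embedded signing key (real systems
# use an asymmetric key pair protected by hardware; this is just a sketch).
CAMERA_KEY = b"secret-key-inside-camera"

def sign_manifest(image_bytes, edit_history):
    """Bind the image hash and its recorded edits into a signed manifest."""
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "edits": edit_history,  # e.g. ["crop", "ai_object_removal"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes, manifest):
    """Check the signature and that the image matches the signed hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())

photo = b"...raw image data..."
m = sign_manifest(photo, ["exposure_adjust", "ai_object_removal"])
print(verify_manifest(photo, m))         # True: photo and edit list are intact
print(verify_manifest(photo + b"x", m))  # False: image was altered after signing
```

The point is that the AI edit is not hidden: it sits in the signed manifest, and any attempt to strip or alter that record invalidates the signature.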
Cryptographic principles still hold even in the age of AI. If AI at some point makes it possible to break a signed document (e.g. by dramatically reducing the number of attempts needed in a brute-force attack)... well, then we will have a serious problem with digital signatures in general, not only with CAI :-)
Of course the whole thing is subject to possible bugs that might break it, as always with software.