Do cards have different characteristics as did film emulsions?
#1
My wife, a sentimentalist, had in our bedroom a photo of our children that I had taken over 40 years ago. Naturally it had severely faded, so I scanned the print and restored it digitally.

 

Afterwards we had some discussion about the results. I think our differences came from the fact that back in the film days she had preferred Kodacolor, while I preferred Ektacolor, the former being too red for me.

 

This made me wonder whether, in the same way, different cards may produce different renderings of a scene.

 

Thoughts anyone?

#2
What do you mean by "cards"? Memory cards? Graphics cards? Other stuff?

stoppingdown.net

 

Sony a6300, Sony a6000, Sony NEX-6, Sony E 10-18mm F4 OSS, Sony Zeiss Vario-Tessar T* E 16-70mm F4 ZA OSS, Sony FE 70-200mm F4 G OSS, Sigma 150-600mm ƒ/5-6.3 DG OS HSM Contemporary, Samyang 12mm ƒ/2, Sigma 30mm F2.8 DN | A, Meyer Gorlitz Trioplan 100mm ƒ/2.8, Samyang 8mm ƒ/3.5 fish-eye II | Zenit Helios 44-2 58mm ƒ/2
Plus some legacy Nikkor lenses.
#3
With post-processing you can do anything and get the colours you want; some people even colourize black-and-white photos. Fading is not linear, though, so it will be very difficult to restore the old colours from a faded print. It all depends on the eye and skill of the person behind the screen working on it, and on how much time they are prepared to invest in correcting it.
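For what it's worth, here is a minimal sketch (Python, with Pillow and NumPy) of the kind of per-channel correction this involves. The file names and the gain/gamma values are made-up examples; a real faded print needs values found by eye or against a reference target, precisely because the fading is not linear.

# Minimal per-channel restoration sketch for a faded scan.
# "faded_scan.jpg" and the gain/gamma values are hypothetical examples.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("faded_scan.jpg")).astype(np.float32) / 255.0

gain = np.array([1.00, 1.10, 1.25])   # R, G, B multipliers: boost the channels that faded most
gamma = np.array([1.00, 0.95, 0.90])  # per-channel gamma, because the fading is not linear

restored = np.clip(img * gain, 0.0, 1.0) ** gamma
Image.fromarray((restored * 255).astype(np.uint8)).save("restored.jpg")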

#4
If the question is: are there any differences in the rendition of images from different memory cards?

   I think the answer is no. The image is composed of "noughts and ones", and as long as the bits/bytes are correctly stored there can't be any differences.
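If you want to convince yourself of that, a small sketch like the following (Python; the file paths are hypothetical) compares checksums of the same shot copied from two different cards: identical bytes mean an identical image, whatever the card brand.

# Compare the same file copied from two different cards; the paths are made up.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of("card_A/DSC0001.ARW") == sha256_of("card_B/DSC0001.ARW"))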

#5
Memory cards do not (obviously?); GPUs can have different settings which affect colour tones, but for the most part these can be adjusted. Monitors have the same issue (to a greater degree), and not all monitors render colour or tonal range the same.

-

The same goes for sensors and printers. You can adjust things as you see fit in post-editing, but some systems (sensors, printers, monitors) make it more difficult than others to achieve a desired look.

 

Quote:My wife, a sentimentalist, had in our bedroom a photo of our children that I had taken over 40 years ago. Naturally it had severely faded, so I scanned the print and restored it digitally.

 

Afterwards we had some discussion about the results. I think our differences came from the fact that back in the film days she had preferred Kodacolor, while I preferred Ektacolor, the former being too red for me.

 

This made me wonder whether, in the same way, different cards may produce different renderings of a scene.

 

Thoughts anyone?
#6
Cards no, sensors yes, monitors yes, printers yes.

 

However, as you2 mentions, they can all be adjusted.

 

An example is that Canon sensors, certainly the older ones, are more sensitive to red and hence overemphasize reds.

 

Kind regards, Wim

Gear: Canon EOS R with 3 primes and 2 zooms, 4 EF-R adapters, Canon EOS 5 (analog), 9 Canon EF primes, a lone Canon EF zoom, 2 extenders, 2 converters, tubes; Olympus OM-D 1 Mk II & Pen F with 12 primes, 6 zooms, and 3 Metabones EF-MFT adapters ....
#7
Thank you for all your very informative replies. :-)

 

@toni-a I suppose the realistic approach is to get the important colours as close as possible first.

#8
A quick complement to the explanations, to help blurred2016 understand what he should do in practice.

 

Basically, we have:

  1. a numeric representation of colours, i.e. the numbers stored in a JPG file and the like;
  2. an objective reality of colours, which are specific electro-magnetic frequencies (or wavelengths), measurable with scientific instruments such as spectrometers;
  3. a subjective perception of colours performed by our eyes and brain.
Points #2 and #3 are related, but we can simplify and drop #3. So it all boils down to the way in which the numbers at #1 are translated into specific radiation frequencies and vice versa: the job of screens/printers and camera sensors. Each device (display, printer, camera sensor) behaves in a different way, so the same triple (e.g. RGB values 124,76,94) usually corresponds to a different colour on each of them.
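To make the difference between #1 and #2 concrete, here is a small sketch (Python) that interprets that very triple through one specific colour space, sRGB with a D65 white point, and turns it into device-independent CIE XYZ coordinates. The matrix is the standard published sRGB one; the point is simply that without such a definition the triple has no physical meaning.

# Interpret an 8-bit sRGB triple as device-independent CIE XYZ (D65).
def srgb_to_xyz(r8, g8, b8):
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (linearize(v) for v in (r8, g8, b8))
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

print(srgb_to_xyz(124, 76, 94))  # the "objective" coordinates of that triple under sRGB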

 

In a few words, with some imprecision that isn't really relevant here: files produced by cameras contain a "colour space", which is a mathematical description of how to match a triple to the captured wavelength for that given sensor. A "colour profile", on the other hand, characterises a display device or a printer, and contains the mathematical description needed to compute the right power to drive an LED, or the quantity of inks to mix, to produce the correct wavelength that our eyes will interpret.
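As an illustration of that chain, Pillow's ImageCms module (a wrapper around LittleCMS) can convert an image from its embedded colour space to a device colour profile. In this sketch "monitor.icc" is a hypothetical file name standing in for the profile your calibration tool produced.

# Convert from the file's colour space to a (hypothetical) monitor profile.
from io import BytesIO
from PIL import Image, ImageCms

img = Image.open("photo.jpg")

# Colour space the camera or editor embedded in the file; fall back to sRGB if absent.
icc = img.info.get("icc_profile")
src = ImageCms.getOpenProfile(BytesIO(icc)) if icc else ImageCms.createProfile("sRGB")

# Colour profile of the output device, produced by calibrating the monitor.
dst = ImageCms.getOpenProfile("monitor.icc")

# Recompute the pixel numbers so this particular monitor shows the intended colours.
converted = ImageCms.profileToProfile(img, src, dst)
converted.save("photo_for_this_monitor.jpg")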

 

In order to correctly reproduce colour through the chain "wavelength in the captured scene -> numbers in the file -> wavelength in the rendered scene (display/printer)" you need the right colour space and colour profile. In the consumer world, there is nothing you need to do on the camera side: the colour space is already provided and fine. But you don't have a proper colour profile for the screen or the printer (and the specific paper), so you should create it yourself with a procedure called "calibration" and a few specific tools (good consumer ones aren't very expensive, on the order of 100€). Calibration also involves adjusting some settings on the device (e.g. contrast, brightness, etc. on a display).

 

Note that so far I've described a process for getting the display and the printer as perceptually close as possible to the original colours. You mentioned different renderings of films, and that has to do with personal taste. In the calibrated workflow I've described, adjusting to your personal taste consists of changing the numbers at point #1 with the sliders of a post-processing program. The workflow makes sure that what you're seeing on your monitor is also what other people see on their monitors (assuming they calibrated too) and that, roughly, the lab you send the photos to will print something that mostly matches what you see on your monitor.
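A toy illustration of "changing the numbers at point #1", which is in essence what a colour slider does: the 0.95 factor below is an arbitrary example for taming a red cast (the "Kodacolor too red" case). Real tools work in better-suited colour spaces, but the principle is the same.

# Reduce a red cast by scaling the red channel; file names and factor are made up.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("photo.jpg")).astype(np.float32)
img[..., 0] *= 0.95  # slightly reduce the red channel
Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)).save("photo_less_red.jpg")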

 

Do you really need a calibrated workflow? If you're going to share large numbers of photos (e.g. publishing them on the web, or having a print lab do your prints), yes, you do. Tweaking single colours is a waste of time, really. Better to spend a little money and a little time getting calibration working properly; once you've learned the process, everything becomes simpler and faster.

 

If, on the other hand, you don't share pictures and you just print a few of them yourself, then you're probably fine tweaking colours individually, starting with the ones you care about most.

stoppingdown.net

 

Sony a6300, Sony a6000, Sony NEX-6, Sony E 10-18mm F4 OSS, Sony Zeiss Vario-Tessar T* E 16-70mm F4 ZA OSS, Sony FE 70-200mm F4 G OSS, Sigma 150-600mm ƒ/5-6.3 DG OS HSM Contemporary, Samyang 12mm ƒ/2, Sigma 30mm F2.8 DN | A, Meyer Gorlitz Trioplan 100mm ƒ/2.8, Samyang 8mm ƒ/3.5 fish-eye II | Zenit Helios 44-2 58mm ƒ/2
Plus some legacy Nikkor lenses.
  

