Forum Nikon D1/D2/D100/D200
Subject engineer's explanation (long!) - part 1 of 2
Posted by bob elkind
Date/Time 4:50:34 PM, Thursday, February 16, 2006 (GMT)
OK, I'm an electronics design engineer with 25+ years experience, so hopefully this explanation will make sense.
The CCD (or CMOS sensor, it doesn't make a difference for the purposes of this discussion) imaging sensor is an analogue device, not digital. If you take two "identical" shots of identical subject matter, there will be differences between the two images. The differences can be characterised as noise.
The analogue voltage levels output from the sensor are converted to digital data (this is the A/D converter stage, analogue => digital converter). Once the image information is converted to digital data, all operations are utterly repeatable with identical results, if given identical digital source data.
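As a hedged sketch of that conversion step, in Python (the 12-bit depth and 1.0 V reference are assumed values for illustration, not D200 specifics):

```python
# Hedged illustration of an A/D stage: the 12-bit depth and 1.0 V
# reference are assumed values, not anything specific to the D200.

def quantize(voltage, v_ref=1.0, bits=12):
    """Map an analogue voltage in [0, v_ref] to an integer code."""
    levels = 2 ** bits
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))  # clip out-of-range inputs

# Once in the digital domain, the same input always gives the same output.
# But the analogue input itself carries noise, so two "identical"
# exposures can land on slightly different codes.
```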
The A/D converter stage is critical. There are several critical aspects to this circuitry:
1. Black level - what analogue level coming in will correspond to a digital "black" level. This needs to be tuned/calibrated, and the A/D needs to match the sensor's characteristics. If this isn't done properly, there will be a loss of effective dynamic range. This tuning is NOT utterly repeatable. If you perform this calibration/tuning over and over again on the same sensor and A/D converter, noise and "resolution" in this process will lead to at least *SOME* differences each time this step is performed. This may not be comforting to us, knowing that there is not one (and only one) "perfect" setting. That's life in an analogue world.
2. White level - what analogue level coming in will correspond to a digital "white" level. See the description of black level above, because the same characteristics apply here.
Both of these settings adjustments can be made in a number of ways, but most commonly such adjustments to A/D converter stages are established as:
a. offset (matches sensor's black level output to digital value corresponding to black)
b. gain or amplification (matches sensor's full range of output to full range of digital values, after subtracting the offset)
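The offset/gain scheme in (a) and (b) can be sketched in a few lines of Python. This is an illustration under assumed numbers — the reference voltages, the 12-bit full scale, and the function names are mine, not Nikon's actual calibration procedure:

```python
# Hedged sketch of offset/gain calibration of an A/D stage. The reference
# voltages and 12-bit full scale are illustrative assumptions.

def calibrate(black_v, white_v, full_scale=4095):
    """Derive (offset, gain) from black and white reference readings."""
    offset = black_v                          # (a) black maps to code 0
    gain = full_scale / (white_v - black_v)   # (b) range maps to full scale
    return offset, gain

def convert(voltage, offset, gain, full_scale=4095):
    """Apply the calibration to one analogue reading."""
    code = round((voltage - offset) * gain)
    return max(0, min(full_scale, code))

# Re-running the calibration on the same hardware will not reproduce the
# same settings exactly, because the reference readings carry noise:
offset1, gain1 = calibrate(0.0500, 0.9500)
offset2, gain2 = calibrate(0.0502, 0.9497)  # same sensor, noisy re-measurement
```

The second calibration run lands on slightly different settings than the first — exactly the "not utterly repeatable" behaviour described above.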
Sounds pretty straightforward so far, but we're already dealing with the notion that analogue systems are *inherently* inconsistent. The degree of inconsistency is usually accounted for as "noise". Yes, there are things that can be done to minimise the degree of inconsistencies, but we need to deal with the reality that noise and inconsistencies do and will exist. All we can hope for is to reduce the inconsistencies to levels where they are not "significant"... and in the world of photography, that threshold of "success" is often determined by the "eye of the beholder".
If you are bothered by this, consider that ISO1600 *film* grain is, essentially, nothing other than noise. ISO100 film also has noise (or grain), it's just that the level of noise is so low that *most* folks don't notice it, but it *IS* there.
You've been very patient so far, and you are undoubtedly thinking to yourself... what does this have to do with the D200? We're getting to this.
OK, we've established that two identical sensors and two A/D converters should theoretically be calibrated identically... but they aren't. There are going to be some differences in the calibration settings, in spite of the design engineer's best efforts to make these differences "insignificant". Noise happens. Rounding errors happen. Light levels fluctuate. Voltage levels fluctuate. Operating temperatures fluctuate. Air purity (dust, smog, whatever) fluctuates. Now consider that all sensors and A/D converters vary from one to the next, and that is why we need calibration adjustments in the first place, to "dial out" these sample-to-sample variations as much as possible.
Here's where we get to the D200 issues...
Analogue sensors such as those found in dSLRs do not (repeat DO NOT) respond uniformly to all light levels. In a properly exposed picture with laboratory/studio quality lighting levels, one expects that whites are white, blacks are black, and 50% grey is exactly centered between the two. At the point where the sensor's analogue output meets the A/D converter's input, this is probably NOT true. The analogue voltage level for 50% grey is likely NOT centered between the voltage levels for white and black. In engineering-speak, we say that the sensor's response is not "linear". There needs to be a correction table to process the digital levels captured by the A/D converter and make them truly linear (or *more* linear, since noise happens). This is probably done digitally (arithmetically, by calculation) in the camera's image processor. This is entirely analogous to applying curves (or gamma correction) in your image editing software.
So, a 0.3ev light difference near the white level is a different voltage difference at the sensor output than a 0.3ev light difference near the black level.
In dSLRs, the image processing algorithms to correct for all these inconsistencies and non-linearities and noise components are critical. One of the most difficult functions is to take a full dynamic range image and properly and accurately and consistently correct (or compensate) low-level details. The image processor's correction table needs to be set up for the entire range of sensor output levels, and at the same time the sensor output at the low end of its sensitivity range (where the sensor is likely LEAST linear and noise is likely to be MOST noticeable) needs to be heavily processed to reconstruct linearity. Because of the heavy processing at the low end (the shadows, so to speak), any differences (errors, variations, etc.) in calibration of the A/D converter (to the sensor output) are GREATLY AMPLIFIED (caps for emphasis!).
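To make the shadow-amplification point concrete, here is a small Python sketch. The power-law (gamma-style) curve is my assumption for illustration only — a real camera uses a measured correction table — but the amplification effect is the same in kind:

```python
# Hedged sketch of shadow amplification: the gamma-style curve stands in
# for a real camera's measured correction table.

def linearise(code, gamma=2.2, full=4095):
    """Apply an illustrative correction curve to one raw code."""
    return round(((code / full) ** (1 / gamma)) * full)

# The same one-code difference at the A/D output, pushed through the
# correction curve, near black versus near white:
shadow_err = linearise(6) - linearise(5)           # deep shadow: heavily stretched
highlight_err = linearise(4001) - linearise(4000)  # near white: barely changed
# shadow_err comes out many times larger than highlight_err -- a tiny
# calibration error in the shadows is greatly magnified by the processing.
```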
Now, to speed up image acquisition, the Sony sensor in the D200 has two sets of outputs (this is an oversimplification, but for the purposes of this discussion we'll call them ODD and EVEN columns). Odd column voltage outputs and even column outputs go to different sets of A/D converters. Both sets of A/D converters are individually calibrated, but absolute total matching in the A/D converters or their calibrations is inherently impossible to achieve. Ideally, the matching is so good that any differences are insignificant (and see comments on the definition of "insignificant" above). To the extent that the differences are not insignificant, they will show up as level differences between odd and even columns. In the context of the D200, we've come to know these manifestations of level differences as banding.
There is a saying... a man with one watch *knows* what time it is, and a man with two watches is never sure. Differences between the odd and even columns are impossible to eliminate entirely, and the "system" (calibration methods, correction algorithms, image processing, etc.) is most sensitive to differences at low light/voltage levels, within an image that spans the full range of light levels/voltage levels.
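The odd/even mechanism can be simulated in a few lines. The 0.2% gain mismatch and the flat grey scene here are invented for illustration; the real mismatch in any given camera would be far smaller, but the pattern it produces is the same:

```python
# Hedged simulation of the odd/even-column mechanism: the 0.2% gain
# mismatch and the flat grey scene are invented numbers.

def read_out(rows, cols, signal, gain_even=1.000, gain_odd=1.002):
    """Read a flat scene through two slightly mismatched A/D chains."""
    image = []
    for _ in range(rows):
        row = [round(signal * (gain_odd if c % 2 else gain_even))
               for c in range(cols)]
        image.append(row)
    return image

# A flat mid-grey comes out as a repeating odd/even column pattern --
# the raw ingredient of banding, before any shadow processing touches it:
flat = read_out(2, 6, signal=3000)
# Every even column reads 3000, every odd column 3006.
```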
The whole process of correcting any/all possible images captured by any/all sensors, converted by any/all A/D converters, is mind-boggling in its complexity and detail. Having multiple sets of sensor outputs (odd and even, for example) is a fundamental architectural advantage for speeding image acquisition, etc., and its cost is that the degree to which calibration and correction is critical has been compounded.
Here are the key points that need to be made:
1. dSLRs and sensors are far from perfect. The camera designers go to incredibly extreme measures to correct for imperfections, and the measure of success is that the imperfections are simply not noticeable in the final result. "Not noticeable" does not mean that the imperfections are absent.
2. Low light levels mean low voltage levels from the sensor, and that means noise components are much more noticeable relative to the "true" image signal voltage levels. Noise happens.
3. The D200's sensor architecture inherently introduces additional noise/error components, particularly at low light levels. The error/noise components can be tuned/calibrated/corrected to be "less significant", or possibly even "not noticeable", but the fundamental architecture requires more calibration points and makes calibration and image processing much more critical than in single-output sensor architectures.
3.a All dSLRs with sensors with multiple row or column outputs and multiple A/Ds for a given colour channel, including the D200, will have banding. The question is degree rather than existence.
3.b All dSLRs, with or without multiple-output sensors, including the D200 and the D2x and the 5D and the 20D, do have noise and non-linearities. The question is degree rather than existence.
4. There is no perfection (or utter repeatability/consistency) in ANY analogue system, there is only "better" or "worse" - terms that more often than not are subjective and situational rather than simply quantitative.
FINAL NOTE: I do not work for Nikon or Sony, I do not even own a D200 (just a D70), and the only basis for this post and the assertions and "insights" herein is my years of experience as a design engineer.
Here's a followup...
I've tried to watch for followup questions to my contribution to this thread, so as to not "post and run" and leave important questions unanswered or murky. As usual, the participants in this forum are engaged and thoughtful, and pose challenging questions.
One prevalent question has been raised:
Question: Based on my wisdom and keen insight (and no doubt my good looks, as well), would I buy a D200 or would I buy (name any camera body) or would I wait for the "D200s"?
Obvious (and somewhat true but useless) Answer: This is a question for every individual to answer for his/her self. (Thank you for that brilliant insight, duh!)
Another way of phrasing this question is: as a design engineer who *may* have an insight into the type/nature of the D200's difficulties (such as they are or might be), is there a compelling reason to dismiss the D200 product, out of hand, as a fatally flawed implementation?
Phrased as such, I'm more comfortable answering this question. The answer is *NO* - not yet.
The D200 isn't like a handheld MP3 player, where the underlying technology (audio DACs) so greatly exceeds the product's technical requirements that it is practically impossible to design a bad one.
The D200 isn't pushing the limits of esoteric technology, but it *IS* pushing the capabilities of the components in its cost/price range. The D200 delivers much of the D2x's performance at a cost closer to the D70's. Some of the units off the line will perform more like a D2x in some regards, and some will perform much worse. It is up to production test and quality control to catch the odd units that aren't "lucky" and either scrap them or re-calibrate them to meet performance requirements. In such a case, the final product's performance (at least some aspects of it) is determined by the production quality control test criteria, rather than by the underlying design. The fundamental product design/architecture is good enough to get the final product "in range" so that a combination of production calibrations and screenings can get you "the rest of the way" to a final production unit that meets performance specs.
A completely successful product design does *NOT* require recalls to the factory for re-fit or re-calibration, this much is clear; and Nikon (by their own admission, to their credit) has been "surprised" by both the inconsistent performance in the product and the very nature of the "failure mechanism" (careful, as this is a loaded term).
My opinions, expert or otherwise:
a. The Nikon folks probably oversimplified (or underdesigned) the production calibration/test process. Even if such unit-to-unit variations are impossible to avoid (and that's a huge and completely unfounded assumption on my part), a proper quality control screen would have kept the worst performing units from leaving the factory. This can and will be corrected by Nikon, this much is clear from their official statements.
b. Once Nikon addresses point (a) above and redesigns its quality control tests and screens, only then can we judge what Nikon will allow (or not allow) in terms of performance for a market-worthy D200 unit.
Until Nikon retools their production calibration and screening, the unavoidable fact of the matter is that individual customers are bearing much of the burden of product quality control. Nikon realises this, they are embarrassed by it, and they are correcting it.
Is the D200 (today) good enough for you? Since the individual customer is performing quality control, the answer is YES if the individual unit you buy works well enough -- and NO if it doesn't, in which case your options are recalibration/re-fit, returning and swapping it for another unit until you find one that works well enough, or simply passing on the D200.
The question that begs to be answered is: while Nikon is passing quality control responsibility to customers, how can anyone judge the product's performance (in the sense of a product review) with any confidence?
At the risk of repeating myself, the D200 product performance (in some regards) is defined by either the individual customer or the standards and capabilities of the service technicians performing "rework."
If I were in Phil Askey's shoes, I would be very reluctant indeed to publish a D200 test report until I was confident that the unit is truly representative of what the customer is likely to receive.
Nikon always has the option of making minor design changes to the D200 to reduce scrap or calibration/test costs, or to specifically "design out" the need for some of the calibrations (e.g. design out the 2-A/D design in favour of a single-A/D design). As a design engineer, I know that such a redesign is possible (there are lots of opportunities for cleverness here, making the best use of the existing CCD sensor)... Cost and expected benefit will be considered.
If I had the time and money, I would not hesitate to buy a D200, given that I can always return it if not satisfied. Of course, just because I'm satisfied with my unit doesn't necessarily mean that the unit you get will be as good, or that you will be equally satisfied, as of today. That is probably weighing on Phil Askey's mind.
- Bob Elkind