Improvements to Gamut Mapping Colour Constancy Algorithms

Kobus Barnard

Department of Computing Science,

Simon Fraser University,

888 University Drive, Burnaby, BC, Canada, V5A 1S6

email: kobus@cs.sfu.ca

phone: 604-291-4717; fax: 604-291-4424

Abstract. In this paper we introduce two improvements to the three-dimensional gamut mapping approach to computational colour constancy. This approach consists of two separate parts. First, the possible solutions are constrained. This part is dependent on the diagonal model of illumination change, which, in turn, is a function of the camera sensors. In this work we propose a robust method for relaxing this reliance on the diagonal model. The second part of the gamut mapping paradigm is to choose a solution from the feasible set. Currently there are two general approaches for doing so. We propose a hybrid method which embodies the benefits of both, and generally performs better than either.

We provide results using both generated data and a carefully calibrated set of 321 images. In the case of the modification for diagonal model failure, we provide synthetic results using two cameras with a distinctly different degree of support for the diagonal model. Here we verify that the new method does indeed reduce error due to the diagonal model. We also verify that the new method for choosing the solution offers significant improvement, both in the case of synthetic data and with real images.

Key Words. colour, computational colour constancy, gamut constraint, CRULE, diagonal models, sensor sharpening


1 Introduction

The image recorded by a camera depends on three factors: the physical content of the scene, the illumination incident on the scene, and the characteristics of the camera. This leads to a problem for many applications where the main interest is in the physical content of the scene. Consider, for example, a computer vision application which identifies objects by colour. If the colours of the objects in a database are specified for tungsten illumination (reddish), then object recognition can fail when the system is used under the very blue illumination of blue sky. This is because the change in the illumination affects object colours far beyond the tolerance required for reasonable object recognition. Thus the illumination must be controlled, determined, or otherwise taken into account.

Compensating for the unknown illuminant in a computer vision context is the computational colour constancy problem. Previous work has suggested that some of the most promising methods for solving this problem are the three-dimensional gamut constraint algorithms [1-4]. In this paper we propose two methods for further improving their efficacy.

The gamut mapping algorithms consist of two stages. First, the set of possible solutions is constrained. Then a solution is chosen from the resulting feasible set. We propose improvements to each of these two stages. To improve the construction of the solution set, we introduce a method to reduce the error arising from diagonal model failure. This method is thus a robust alternative to the sensor sharpening paradigm [2, 4-6]. For example, unlike sensor sharpening, this method is applicable to the extreme diagonal model failures inherent with fluorescent surfaces. (Such surfaces are considered in the context of computational colour constancy in [7].)

To improve solution selection, we begin with an analysis of the two current approaches, namely averaging and the maximum volume heuristic. These methods are both attractive; the one which is preferred depends on the error measure, the number of surfaces, as well as other factors. Thus we propose a hybrid method which is easily adjustable to be more like the one method or the other. Importantly, the combined method usually gives a better solution than either of the two basic methods. Furthermore, we found that it was relatively easy to find a degree of hybridization which improves gamut mapping colour constancy in the circumstances of most interest. We will now describe the two modifications in more detail, beginning with the method to reduce the reliance on the diagonal model.

2 Diminishing Diagonal Model Error

We will begin with a brief review of Forsyth's gamut mapping method [1]. First we form the set of all possible RGB due to surfaces in the world under a known, "canonical" illuminant. This set is convex and is represented by its convex hull. The set of all possible RGB under the unknown illuminant is similarly represented by its convex hull. Under the diagonal assumption of illumination change, these two hulls are a unique diagonal mapping (a simple 3D stretch) of each other.


Figure 1 illustrates the situation using triangles to represent the gamuts. In the full RGB version of the algorithm, the gamuts are actually three-dimensional polytopes. The upper thicker triangle represents the unknown gamut of the possible sensor responses under the unknown illuminant, and the lower thicker triangle represents the known gamut of sensor responses under the canonical illuminant. We seek the mapping between the sets, but since the one set is not known, we estimate it by the observed sensor responses, which form a subset, illustrated by the thinner triangle. Because the observed set is normally a proper subset, the mapping to the canonical is not unique, and Forsyth provides a method for effectively computing the set of possible diagonal maps. (See [1, 2, 4, 8-10] for more details on gamut mapping algorithms.)
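To make the constraint step concrete, the following brute-force sketch tests a grid of candidate diagonal maps and keeps those that send every observed RGB into the canonical hull. It is a stand-in for Forsyth's efficient hull-based construction, and the arrays canonical_rgb and observed_rgb are hypothetical placeholders rather than anything defined in this paper.

```python
import numpy as np
from scipy.spatial import Delaunay

def feasible_diagonal_maps(observed_rgb, canonical_rgb, grid=None):
    """Return candidate diagonal maps (d_r, d_g, d_b) that send every observed
    RGB into the convex hull of the canonical gamut (brute-force grid search)."""
    hull = Delaunay(canonical_rgb)            # supports point-in-hull queries
    if grid is None:
        grid = np.linspace(0.1, 3.0, 30)      # candidate scale factors per channel
    feasible = []
    for dr in grid:
        for dg in grid:
            for db in grid:
                mapped = observed_rgb * np.array([dr, dg, db])
                if np.all(hull.find_simplex(mapped) >= 0):
                    feasible.append((dr, dg, db))
    return np.array(feasible)
```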

We now consider the case where the diagonal model is less appropriate. Here it may be possible that an observed set of RGB does not map into the canonical set with a single diagonal transform. This corresponds to an empty solution set. In earlier work we forced a solution by assuming that such null intersections were due to measurement error, and various error estimates were increased until a solution was found. However, this method does not give very good results in the case of extreme diagonal failures, such as those due to fluorescent surfaces.

To deal with this problem, we propose the following modification: Consider the gamut of possible RGB under a single test illuminant. Call this the test illuminant gamut. Now consider the diagonal map which takes the RGB for white under the test illuminant to the RGB for white under the canonical illuminant. If we apply that diagonal map to our test illuminant gamut, then we will get a convex set similar to the canonical gamut, the degree of difference reflecting the failure of the diagonal model. If we extend the canonical gamut to include this mapping of the test set, then there will always be a diagonal mapping from the observed RGB of scenes under the test illuminant to the canonical gamut. We repeat this procedure over a representative set of illuminants to produce a canonical gamut which is applicable to those illuminants as well as any convex combination of them. The basic idea is illustrated in Figure 2.
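A rough sketch of this extension might look as follows, assuming hypothetical inputs: per-illuminant RGB samples of many surfaces, the RGB of white under each training illuminant, and the RGB of white under the canonical illuminant.

```python
import numpy as np
from scipy.spatial import ConvexHull

def extended_canonical_gamut(gamuts, whites, canonical_white):
    """gamuts[i]: (n_i, 3) RGB of many surfaces under training illuminant i;
    whites[i]: RGB of white under that illuminant; returns the extended hull."""
    mapped_points = []
    for rgb, white in zip(gamuts, whites):
        # Diagonal map taking white under the training illuminant to white
        # under the canonical illuminant, applied to the whole gamut.
        diag = np.asarray(canonical_white) / np.asarray(white)
        mapped_points.append(np.asarray(rgb) * diag)
    return ConvexHull(np.vstack(mapped_points))   # hull of the union of mapped sets
```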

[Figure 1 diagram. Labels: the unknown gamut of all possible RGB under the unknown illuminant; the convex hull of measured RGB, taken as an approximation of the entire gamut under the unknown illuminant; the known gamut of all possible RGB under the known, canonical illuminant; arrows indicating the possible maps between them.]

Figure 1: Illustration of fundamentals of gamut mapping colour constancy.


[Figure 2 diagram. Labels: the gamuts of all possible RGB under three training illuminants; the original canonical gamut; the sets mapped to the canonical gamut based on white (the maps are not all the same due to diagonal model failure); the extended canonical gamut, which is the convex hull of the union of the mapped sets based on white, using a collection of representative training illuminants.]

Figure 2: Illustration of the modification to the gamut mapping method to reduce diagonal model failure.

3 Improving Solution Choice

Once a constraint set has been found, the second stage of the gamut mapping method is to select an appropriate solution from the constraint set. Two general methods have been used to do this. First, following Forsyth [1], we can select the mapping which maximizes the volume of the mapped set. Second, as proposed by Barnard [2], we can use the average of the possible maps. When Finlayson's illumination constraint is used, then the set of possible maps is non-convex. In [2], averaging was simplified by using the convex hull of the illuminant constraint. In [10] Monte Carlo integration was used in conjunction with the two-dimensional version of the algorithm, and in [4] the average was estimated by numerical integration.
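As a minimal illustration of these two existing selection rules, the following sketch assumes a hypothetical array maps whose rows are samples (or vertices) of the feasible set of diagonal maps; the actual averaging for the non-convex illumination-constrained case uses hull-based or numerical integration rather than a plain sample mean.

```python
import numpy as np

def max_volume_solution(maps):
    # Forsyth's heuristic: the feasible map with the largest coordinate product,
    # i.e. the one maximizing the volume of the mapped observed gamut.
    return maps[np.argmax(np.prod(maps, axis=1))]

def average_solution(maps):
    # Averaging: here a plain sample mean over feasible maps (a simplification
    # of the hull-based and numerical-integration averages cited above).
    return maps.mean(axis=0)
```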

In [4], we found that both averaging and the maximum volume method have appeal. We found that the preferred method was largely a function of the error measure, with other factors such as the diversity of scene surfaces also playing a role. When the scene RGB mapping error measure is used, the average of the possible maps is a very good choice. In fact, if we are otherwise completely ignorant about the map, then it is the best choice in terms of least squares.

On the other hand, if we use an illumination estimation measure, then the original maximum volume heuristic is often the best choice. This is important because we are frequently most interested in correcting for the mismatch between the chromaticity of the unknown illuminant and the canonical illuminant. In this case, the errors based on the chromaticity of the estimated scene illuminant correlate best with our goal, and the maximum volume heuristic tends to give the best results.

In this work we will focus on estimating the chromaticity of the illuminant. Despite the success of the maximum volume heuristic, we intuitively feel that, at least in some circumstances, some form of averaging should give a more robust estimate. This intuition is strengthened by the observation that when we go from synthetic to real data, the maximum volume method loses ground to averaging (see, for example, [4, Figure 4.12]).

To analyze the possibilities, we begin by considering solution selection by averaging. This averaging takes place in the space of diagonal maps, which is not quite the same as the space of illuminants. Under the diagonal model, the illuminant RGB is proportional to the element-wise reciprocal of the diagonal maps. Thus we see that for an illumination oriented error measure, we may be averaging in the wrong space, as intuitively, we want to average possible illuminants.
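To spell that reciprocal relationship out, here is a sketch under the diagonal model, writing E^u for the unknown illuminant's response to white and E^c for the canonical one (notation introduced here for illustration only):

```latex
% The map (d_r, d_g, d_b) takes responses under the unknown illuminant to
% responses under the canonical illuminant, so for the response to white,
\[
  d_k \, E^{u}_{k} = E^{c}_{k}
  \quad\Longrightarrow\quad
  E^{u}_{k} = \frac{E^{c}_{k}}{d_k},
  \qquad k \in \{r, g, b\}.
\]
% Hence the proposed illuminant is the element-wise reciprocal of the map, up to
% the fixed canonical white, and shrinking the map toward the origin corresponds
% to proposing an ever brighter illuminant.
```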

However, averaging the possible illuminants has some difficulties. As we go towards the origin in the space of possible diagonal maps, the corresponding proposed illuminant becomes infinitely bright. The origin is included in the constraint set because we assume that surfaces can be arbitrarily dark. Although it is rare for a physical surface to have a reflectivity of less than 3%, surfaces can behave as though they are arbitrarily dark due to shading. Thus we always maintain the possibility that the illuminant is very bright. Specifically, if (R,G,B) is a possible illuminant colour, then (kR,kG,kB) is also a possible illuminant for all k>1. Put differently, a priori, the set of all possible illuminant RGB is considered to be a cone in illuminant RGB space [11]. When we add the surface constraints, the cone becomes truncated. As soon as we see anything but black, we know that the origin is excluded, and specific observed sensor responses lead to specific slices being taken out of the cone.

The above discussion underscores the idea that when we average illuminants, we should ignore magnitude. However, since the work presented in [4] demonstrates that the three-dimensional algorithms outperform their chromaticity counterparts, we do not want to completely throw away the brightness information. Considering the truncated cone again, we posit that the nature of the truncations matters. The problem is then how to average the possible illuminants.

Consider, for a moment, the success of the three-dimensional gamut mapping algorithms. In the space of maps, each direction corresponds to an illuminant chromaticity. Loosely speaking, the chromaticity implied by an RGB solution, chosen in some manner, is the average of the possible chromaticities, weighted by an appropriate function. For example, the maximum volume estimate simply puts all the weight in the direction of the maximum coordinate product. Similarly, the average estimate weights the chromaticities by the volume of the cone in the corresponding direction.

Given this analogy, we can consider alternative methods of choosing a chromaticity solution. Since the maximum volume method tends to give better chromaticity estimates, especially when specularities are present, we wish to consider averages which put the bulk of the weight on solutions near the maximum volume direction. Now, one possible outcome of doing so would be the discovery that the maximum volume weighting worked the best. Interestingly, this proved not to be the case. Specifically, we were able to find compromises which worked better.

We now present the weighting function developed for this work. Consider the solution set in mapping space. Then, each illuminant direction intersects the possible solution set at the origin, and at some other point. For an illuminant i, let that other point be (d_r(i), d_g(i), d_b(i)). Then, the functions we use to moderate the above weighting are powers of the geometric mean of the coordinates of that mapping. Formally, we have parameterized functions f_N given by:

    f_N(i) = ( d_r(i) d_g(i) d_b(i) )^(N/3)                              (1)


We note that the solution provided by the average of the mappings is roughly f_3. The correspondence is not exact because the averaging is done over illuminant directions, not mapping directions. Similarly, as N becomes very large, the new method should approach the maximum volume method.

In order to use the above weighting function, we integrate numerically in polar coordinates. We discretize the polar coordinates of the illuminant directions inside a rectangular cone bounding the possible illuminant directions. We then test each illuminant direction as to whether it is a possible solution given the surface and illumination constraints. If it is, we compute the weighting function, and further multiply the result by the polar coordinate foreshortening, sin(φ). We sum the results over the possible directions, and divide the total by the total weight to obtain the weighted average.
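A sketch of this weighted average might look as follows. The feasibility test and the boundary map for a given illuminant direction are assumed to come from the gamut mapping machinery and are represented here by hypothetical callables; the grid resolution and cone bounds are purely illustrative.

```python
import numpy as np

def scwia_estimate(is_feasible, boundary_map, N=12, steps=100):
    """Weighted average of feasible illuminant directions (a sketch)."""
    thetas = np.linspace(0.0, np.pi / 2, steps)   # azimuth within the positive octant
    phis = np.linspace(0.0, np.pi / 2, steps)     # polar angle (foreshortening sin(phi))
    weighted_dir = np.zeros(3)
    total_weight = 0.0
    for theta in thetas:
        for phi in phis:
            direction = np.array([np.sin(phi) * np.cos(theta),
                                  np.sin(phi) * np.sin(theta),
                                  np.cos(phi)])
            if not is_feasible(direction):        # surface + illumination constraints
                continue
            d = boundary_map(direction)           # (d_r, d_g, d_b) where the direction
                                                  # exits the feasible solution set
            weight = np.prod(d) ** (N / 3.0) * np.sin(phi)   # f_N times foreshortening
            weighted_dir += weight * direction
            total_weight += weight
    if total_weight == 0.0:
        raise ValueError("no feasible illuminant direction found")
    return weighted_dir / total_weight
```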

4 Experiments

We first consider the results for the method introduced to deal with diagonal model failure. Since the efficacy of the diagonal model is known to be a function of the camera sensors [1, 4, 5, 8, 12, 13], we provide results for two cameras with distinctly different degrees of support for the diagonal model. Our Sony DXC-930 video camera has quite sharp sensors, and with this camera, the changes in sensor responses to illumination changes can normally be well approximated with the diagonal model. On the other hand, the Kodak DCS-200 digital camera [14] has less sharp sensors, and the diagonal model is less appropriate [6].

In the first experiment, we generated synthetic scenes with 4, 8, 16, 32, 64, 128, 256, 512, and 1024 surfaces. For each number of surfaces, we generated 1000 scenes with the surfaces randomly selected from the reflectance database and a randomly selected illuminant from the test illuminant database. For surface reflectances we used a set of 1995 spectra compiled from several sources. These surfaces included the 24 Macbeth colour checker patches, 1269 Munsell chips, 120 Dupont paint chips, 170 natural objects, the 350 surfaces in the Krinov data set [15], and 57 additional surfaces measured by ourselves. The illuminant spectra set was constructed from a number of measured illuminants, augmented where necessary with random linear combinations, in order to have a set which was roughly uniform in (r,g) space. This data set is described in more detail in [4].
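For reference, synthetic responses of this kind are typically generated with the usual linear image formation model. The following sketch assumes hypothetical arrays of illuminant spectra, reflectance spectra, and camera sensitivities sampled on a common wavelength grid; it is not the exact data pipeline used here.

```python
import numpy as np

def sensor_responses(illuminant, reflectances, sensors):
    """illuminant: (W,) spectral power; reflectances: (S, W) surface spectra;
    sensors: (3, W) camera sensitivities; returns an (S, 3) array of responses."""
    # RGB_k = sum over wavelengths of E(lambda) * S(lambda) * R_k(lambda)
    return (reflectances * illuminant) @ sensors.T

def random_scene(illuminants, reflectances, sensors, n_surfaces, rng):
    """Draw one synthetic scene: a random illuminant and a random set of surfaces."""
    E = illuminants[rng.integers(len(illuminants))]
    S = reflectances[rng.choice(len(reflectances), size=n_surfaces, replace=False)]
    return sensor_responses(E, S, sensors), E

# Example usage (hypothetical arrays): rng = np.random.default_rng(0)
# rgb, E = random_scene(ills, refls, sens, 8, rng)
```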

For each algorithm and number of surfaces we computed the RMS of the 1000 results. We chose RMS over the average because, on the assumption of roughly normally distributed errors with mean zero, the RMS gives us an estimate of the standard deviation of the algorithm estimates around the target. This is preferable to using the average of the magnitude of the errors, as those values are not normally distributed. Finally, given normal statistics, we can estimate the relative error in the RMS estimate by 1/sqrt(2N) [16, p. 269]. For N=1000, this is roughly 2%.

For each generated scene we computed the results of the various algorithms. We considered three-dimensional gamut mapping, with and without Finlayson's illumination constraint [9]. We will label the versions without the illumination constraint by CRULE, which is adopted from [1]. When the illumination constraint is added, we use the label ECRULE instead (Extended-CRULE). Solution selection using the maximum volume heuristic is identified by the suffix MV. For averaging in the case of CRULE, we use the suffix AVE, and in the case of ECRULE, we use the suffix ICA, indicating that the average was over the non-convex set (Illumination-Constrained-Average). This gives a total of four algorithms: CRULE-MV, CRULE-AVE, ECRULE-MV, and ECRULE-ICA. Finally, the method described above to reduce diagonal model failure will be indicated by the prefix ND (Non-Diagonal). We test this method in conjunction with each of the four previous algorithms, for a total of eight algorithms. We report the distance in (r,g) chromaticity space between the scene illuminant and the estimate thereof.


In Figure 3 we show the results for the Sony DXC-930 video camera. We see that when solution selection is done by averaging (AVE and ICA), the ND algorithms work distinctly better than their standard counterparts. On the other hand, when solutions are chosen by the maximum volume heuristic, the ND algorithms performed slightly worse than their standard counterparts, provided that the number of surfaces was not large. Interestingly, as the number of surfaces becomes large, the error in all the ND versions continues to drop to zero, whereas the error in the standard versions levels off well above zero. In [4] we postulated that this latter behavior was due to the limitations of the diagonal model, and the present results confirm this.

In Figure 4 we show the results for the Kodak DCS-200 digital camera. The sensors of this camera do not support the diagonal model very well [6], and thus it is not surprising that the new extension significantly improves the performance of all four algorithms.

We now turn to results with generated data for the solution selection method developed above. For this experiment we included the ND extension to reduce the confound of diagonal model failure. We label the new method with the suffix SCWIA (Surface-Constrained-Weighted-Illuminant-Average) followed by the value of the parameter N in Equation (1). The results are shown in Figure 5. First we point out that solution selection by the original averaging method outperforms the maximum volume heuristic when the number of surfaces is small, but as the number of surfaces increases, the maximum volume heuristic quickly becomes the preferred method.

Turning to the new method, we see that it indeed offers a compromise between these two existing methods, with the new method tending towards the maximum volume method as N increases. More importantly, as long as N is 6 or more, the new method invariably outperforms solution selection by averaging. Furthermore, for N in the range of 9-24, the performance of the new method is better than the maximum volume heuristic, except when the number of surfaces is unusually large. When the number of surfaces becomes large, the maximum volume heuristic eventually wins out.

An important observation is that the results for N in the range of 9-24 are quite close, especially around 8 surfaces. This is fortuitous, as we have previously observed [4] that 8 synthetic surfaces is roughly comparable in difficulty to our image data. Thus we are most interested in improving performance in the range of 4-16 surfaces, and we are encouraged that the results here are not overly sensitive to N, provided that it is roughly correct. Based on our results, N=12 appears to be a good compromise value for general purpose use.

Next we present some numerical results in the case of the Sony camera which show the interactions of the two modifications. These are shown in Table 1. The main point illustrated in this table is that the slight disadvantage of the ND method, when used in conjunction with MV, does not carry over to the new solution selection method. To explain further, we note that the positive effect of reducing the diagonal model error can be undermined by the expansion of the canonical gamut, which represents an increase in the size of the feasible sets. The positive effect occurs because these sets are more appropriate, but, all things being equal, their larger size is an increase in ambiguity. Thus when the ND method is used in conjunction with a camera which supports the diagonal model, then, as the results here show, the method can lead to a decrease in performance. In our experiments on generated data, the negative effect is present in the case of MV, but in the case of averaging, the effect is always slightly positive. When ND is used in conjunction with the new solution method, the results are also minimally compromised by this negative effect. This is very promising, because, in general, the diagonal model will be less appropriate, and the method will go from having little negative impact, to having a substantial positive effect. This has already been shown in the case of the Kodak DCS-200 camera, as well as when the number of surfaces is large. Increasing the number of surfaces does not, of course, reduce the efficacy of the diagonal model, but under these conditions, the diagonal model becomes a limiting factor.


[Figure 3 plot. Title: Algorithm Chromaticity Performance versus Number of Surfaces in Synthetic Scenes (Sony DXC-930 Video Camera). Curves: CRULE-MV, CRULE-AVE, ND-CRULE-MV, ND-CRULE-AVE, ECRULE-MV, ECRULE-ICA, ND-ECRULE-MV, ND-ECRULE-ICA. Vertical axis: vector distance between (r,g) of illuminant and estimate thereof (0 to 0.08). Horizontal axis: LOG2 (number of generated surfaces), 2 to 10.]

Figure 3: Algorithm chromaticity performance versus the number of surfaces in generated scenes, showing the main gamut mapping algorithms and their non-diagonal counterparts. These results are for the Sony DXC-930 video camera which has relatively sharp sensors (the diagonal model is a good approximation in general). The error in the plotted values is roughly 2%.


[Figure 4 plot. Title: Algorithm Chromaticity Performance versus Number of Surfaces in Synthetic Scenes (Kodak DCS-200 Camera). Curves: CRULE-MV, CRULE-AVE, ND-CRULE-MV, ND-CRULE-AVE, ECRULE-MV, ECRULE-ICA, ND-ECRULE-MV, ND-ECRULE-ICA. Vertical axis: vector distance between (r,g) of illuminant and estimate thereof (0 to 0.08). Horizontal axis: LOG2 (number of generated surfaces), 2 to 10.]

Figure 4: Algorithm chromaticity performance versus the number of surfaces in generated scenes, showing the main gamut mapping algorithms and their non-diagonal counterparts. These results are for the Kodak DCS-200 digital camera which has relatively dull sensors (the diagonal model is not very accurate). The error in the plotted values is roughly 2%.


[Figure 5 plot. Title: Algorithm Chromaticity Performance versus Number of Surfaces in Synthetic Scenes (Sony DXC-930 Camera). Curves: ND-ECRULE-MV, ND-ECRULE-ICA, ND-ECRULE-SCWIA-3, ND-ECRULE-SCWIA-6, ND-ECRULE-SCWIA-9, ND-ECRULE-SCWIA-12, ND-ECRULE-SCWIA-18, ND-ECRULE-SCWIA-24. Vertical axis: vector distance between (r,g) of illuminant and estimate thereof (0 to 0.06). Horizontal axis: LOG2 (number of generated surfaces), 2 to 10.]

Figure 5: Algorithm chromaticity performance versus the number of surfaces in generated scenes, showing the selected gamut mapping algorithms, including ones with the new solution selection method. These results are for the Sony DXC-930 video camera. The error in the plotted values is roughly 2%.


Finally we turn to results with 321 carefully calibrated images. These images were of 30 scenes under 11 different illuminants (9 were culled due to problems). Figure 6 shows the 30 scenes used. We provide the results of some of the algorithms discussed above, as well as several comparison methods. We use NOTHING to indicate the result of no colour constancy processing, and AVE-ILLUM for guessing that the illuminant is the average of a normalized illuminant database. The method labeled MAX estimates the illuminant RGB by the maximum found in each channel. GW estimates the illuminant based on the image average on the assumption that the average is the response to a perfect grey. DB-GW is similar, except that the average is now assumed to be the response to grey as defined by the average of a reflectance database. CIP-ICA is essentially a chromaticity version of ECRULE-ICA described in [11]. The method labeled NEURAL-NET is another chromaticity oriented algorithm which uses a neural net to estimate the illuminant chromaticity [17-19]. C-by-C-MAP is the Colour by Correlation method using the maximum posterior estimator [20]. Finally, C-by-C-MMSE is Colour by Correlation using the minimum mean square error estimate. All these comparison methods are described in detail in [4].
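Minimal sketches of three of these baselines (MAX, GW, and DB-GW) might look as follows; the input array and the database-average response are hypothetical stand-ins, and exact normalization conventions vary between implementations.

```python
import numpy as np

def max_estimate(image_rgb):
    # MAX: illuminant estimated channel-wise from the largest response.
    return image_rgb.max(axis=0)

def grey_world_estimate(image_rgb):
    # GW: the image average is assumed to be the response to a perfect grey,
    # so the illuminant estimate is proportional to the mean response.
    return image_rgb.mean(axis=0)

def db_grey_world_estimate(image_rgb, database_average_rgb):
    # DB-GW: the image average is assumed to be the response to the average
    # database reflectance (database_average_rgb, taken under a reference
    # illuminant); the estimate is the mean response scaled accordingly.
    return image_rgb.mean(axis=0) / database_average_rgb
```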

Table 2 shows the results over the 321 test images. The results from the image data generally confirm those from the generated data in the case of the new selection method. On the other hand, the ND method improves matters significantly in only one case, has essentially no effect in several others, and when used in conjunction with the new selection method, it has a small negative effect. Since the camera used already supports the diagonal model well, these varied results are understandable.

                             Number of Surfaces
Algorithm                    4        8        16
ECRULE-MV                    0.064    0.044    0.032
ECRULE-ICA                   0.058    0.050    0.044
ECRULE-SCWIA-3               0.057    0.051    0.045
ECRULE-SCWIA-6               0.054    0.043    0.036
ECRULE-SCWIA-9               0.054    0.041    0.032
ECRULE-SCWIA-12              0.055    0.040    0.031
ECRULE-SCWIA-18              0.057    0.040    0.030
ECRULE-SCWIA-24              0.058    0.041    0.029
ND-ECRULE-MV                 0.065    0.047    0.033
ND-ECRULE-ICA                0.057    0.049    0.043
ND-ECRULE-SCWIA-3            0.060    0.054    0.048
ND-ECRULE-SCWIA-6            0.054    0.044    0.036
ND-ECRULE-SCWIA-9            0.054    0.041    0.031
ND-ECRULE-SCWIA-12           0.055    0.041    0.030
ND-ECRULE-SCWIA-18           0.057    0.041    0.029
ND-ECRULE-SCWIA-24           0.059    0.042    0.029

Table 1: Algorithm chromaticity performance for some of the algorithms developed here, together with the original methods, for generated scenes with 4, 8, and 16 surfaces. The numbers are the RMS value of 1000 measurements. The error in the values is roughly 2%.


                  Solution Selection Method (If Applicable)
Algorithm         MV       AVE/ICA   SCWIA-6   SCWIA-9   SCWIA-12   SCWIA-15
CRULE             0.045    0.046
ECRULE            0.041    0.047     0.043     0.039     0.037      0.037
ND-CRULE          0.047    0.039
ND-ECRULE         0.042    0.048     0.045     0.041     0.040      0.040
NOTHING           0.125
AVE-ILLUM         0.094
GW                0.106
DB-GW             0.088
MAX               0.062
CIP-ICA           0.081
NEURAL-NET        0.069
C-by-C-MAP        0.072
C-by-C-MMSE       0.070

Table 2: The image data results of the new algorithms compared to related algorithms. The numbers presented here are the RMS value of the results for 321 images. Assuming normal statistics, the error in these numbers is roughly 4%.

6 Conclusion

We have described two improvements to gamut mapping colour constancy. These improvements are important because earlier work has shown that this approach is already one of the most promising. For the first improvement we modified the canonical gamuts used by these algorithms to account for expected failures of the diagonal model. When used with a camera which does not support the diagonal model very well, the new method was clearly superior. When used with a camera with sharp sensors, the resulting method improved gamut mapping algorithms when the solution was chosen by averaging. When the maximum volume heuristic was used, there was a slight decrease in performance. This decrease was erased when the method was combined with the second improvement. Furthermore, we posit that any decreases in performance must be balanced against the increased stability of the new method as the number of surfaces becomes large.

We are also encouraged by the results of the new method for choosing the solution. Our findings contribute to the understanding of the relative behavior of the two existing methods. Furthermore, the flexibility of the new method allows us to select a variant which works better than either of the two existing methods for the kind of input we are most interested in.


7 References

1. D. Forsyth, A novel algorithm for color constancy, International Journal of Computer Vision, 5, pp. 5-36 (1990).
2. K. Barnard, Computational colour constancy: taking theory into practice, M.Sc. Thesis, Simon Fraser University, School of Computing (1995).
3. B. Funt, K. Barnard, and L. Martin, Is Colour Constancy Good Enough?, Proc. 5th European Conference on Computer Vision, pp. I:445-459 (1998).
4. K. Barnard, Practical colour constancy, Ph.D. Thesis, Simon Fraser University, School of Computing (1999).
5. G. D. Finlayson, M. S. Drew, and B. V. Funt, Spectral Sharpening: Sensor Transformations for Improved Color Constancy, Journal of the Optical Society of America A, 11, pp. 1553-1563 (1994).
6. K. Barnard and B. Funt, Experiments in Sensor Sharpening for Color Constancy, Proc. IS&T/SID Sixth Color Imaging Conference: Color Science, Systems and Applications, pp. 43-46 (1998).
7. K. Barnard, Color constancy with fluorescent surfaces, Proc. IS&T/SID Seventh Color Imaging Conference: Color Science, Systems and Applications (1999).
8. G. D. Finlayson, Coefficient Color Constancy: Simon Fraser University, School of Computing (1995).
9. G. D. Finlayson, Color in perspective, IEEE Transactions on Pattern Analysis and Machine Intelligence, 18, pp. 1034-1038 (1996).
10. G. Finlayson and S. Hordley, A theory of selection for gamut mapping colour constancy, Proc. IEEE Conference on Computer Vision and Pattern Recognition (1998).
11. G. Finlayson and S. Hordley, Selection for gamut mapping colour constancy, Proc. British Machine Vision Conference (1997).
12. J. A. Worthey, Limitations of color constancy, Journal of the Optical Society of America [Suppl.], 2, pp. 1014-1026 (1985).
13. J. A. Worthey and M. H. Brill, Heuristic analysis of von Kries color constancy, Journal of the Optical Society of America A, 3, pp. 1708-1712 (1986).
14. P. L. Vora, J. E. Farrell, J. D. Tietz, and D. H. Brainard, Digital color cameras--Spectral response, (1997), available from http://color.psych.ucsb.edu/hyperspectral/.
15. E. L. Krinov, Spectral Reflectance Properties of Natural Formations: National Research Council of Canada, 1947.
16. J. L. Devore, Probability and Statistics for Engineering and the Sciences. Monterey, CA: Brooks/Cole Publishing Company, 1982.
17. B. Funt, V. Cardei, and K. Barnard, Learning Color Constancy, Proc. IS&T/SID Fourth Color Imaging Conference: Color Science, Systems and Applications, pp. 58-60 (1996).
18. V. Cardei, B. Funt, and K. Barnard, Modeling color constancy with neural networks, Proc. International Conference on Vision Recognition, Action: Neural Models of Mind and Machine (1997).
19. V. Cardei, B. Funt, and K. Barnard, Adaptive Illuminant Estimation Using Neural Networks, Proc. International Conference on Artificial Neural Networks (1998).
20. G. D. Finlayson, P. H. Hubel, and S. Hordley, Color by Correlation, Proc. IS&T/SID Fifth Color Imaging Conference: Color Science, Systems and Applications, pp. 6-11 (1997).


Insert Colour Plate Here

Figure 6: The images of the scenes used for real data under the canonical illuminant
