
Change Detection Exercise: Crossclassification






Crossclassification is a procedure used to compare two images. Its most common application is land-cover / land-use change analysis using two-date images taken of the same target land features. The features in the raw imagery of each date must first be classified into land-cover / land-use categories through a supervised or unsupervised classification, which assigns the same unique and distinct identifier to each class on both dates. Only after this task has been completed can the two output maps be crossclassified.

The crossclassification is carried out by calculating the logical AND of all possible combinations of categories on the two classified input images. The aim is to evaluate whether areas fall into the same class on the two dates or whether a change to a new class has occurred. The procedure can be summarized by a crosstabulation matrix that shows the distribution of image cells between classes. The categories at Date 1 are displayed on the X axis, while the Y axis displays the same categories at Date 2. The cells corresponding to stable areas lie on the diagonal of the matrix; off-diagonal entries indicate areas that have changed to new classes. If no change has occurred, all cells for each category fall on the diagonal and the off-diagonal entries are zero. In the case of change, pixels move from one category to another. Sometimes the change affects the majority of the pixels in a given class; as a result, the diagonal entry of the category affected is much lower than the off-diagonal entry of the category that gained most of its cells.
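As a minimal sketch, the construction of such a crosstabulation matrix from two classified images can be expressed in Python with NumPy (the toy images, class count, and variable names below are hypothetical, not part of the exercise data set):

```python
import numpy as np

def crosstab(date1, date2, n_classes):
    """Cross-tabulate two classified images of equal shape.

    Rows index the Date 1 class, columns the Date 2 class, so the
    diagonal counts stable pixels and off-diagonal cells count change.
    """
    m = np.zeros((n_classes, n_classes), dtype=int)
    # np.add.at accumulates one count per (Date 1, Date 2) class pair
    np.add.at(m, (date1.ravel(), date2.ravel()), 1)
    return m

# Toy 3x3 images with class identifiers 0..2 (hypothetical data)
d1 = np.array([[0, 0, 1], [1, 2, 2], [2, 2, 2]])
d2 = np.array([[0, 1, 1], [1, 2, 2], [1, 2, 2]])

m = crosstab(d1, d2, 3)
stable = np.trace(m)        # pixels on the diagonal (no change)
changed = m.sum() - stable  # off-diagonal pixels (change)
```

With identical input images the matrix would be purely diagonal; every off-diagonal count marks a category-to-category conversion.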

Crossclassification produces a crosscorrelation image as well as a crosstabulation table, both of which can be used to produce a change image.

The crosscorrelation image shows all possible combinations. It can be used to produce two types of change images, according to the objective of the study. First, if the objective is to differentiate overall change areas from overall non-change areas, the attributes of the crosscorrelation image are simply reclassified into a Boolean image (i.e., containing only two values, for example, zeros and ones). All non-change areas are assigned a value of 0 and the change pixels are assigned a value of 1. Change statistics can be generated from the result of the reclassification. Second, if the objective of the analysis is to identify which class on Date 1 has changed to which class on Date 2 and how much area was lost or gained, either the crosscorrelation image or the crosstabulation table can be used to produce a change image. In this case, the change image displays all categories on Date 1 and Date 2 except those that have not changed. However, this latter procedure gives a generalized output image (this cannot be avoided), since small chunks of land that escaped the change will be eliminated; the reclassification is based on the class majority distribution. For example, Table 1 shows that 45123 cells of forested areas at Date 1 were converted into cropland and 373 cells became rangeland at Date 2, and that 1563 wetland cells were converted into cropland. Since each class cannot have more than one identifier, all of the forest will be classified as cropland (the class that received the majority of the conversion), and all of the remaining wetland will become cropland too. The operator should always bear in mind that it is the classes of Date 1 that have changed to new categories at Date 2.
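The first (Boolean) option above can be sketched in a few lines of NumPy; the two classified images and their class identifiers here are hypothetical stand-ins for the exercise data:

```python
import numpy as np

# Hypothetical classified images for two dates (class identifiers 1..3)
date1 = np.array([[1, 1, 2], [2, 3, 3]])
date2 = np.array([[1, 2, 2], [2, 3, 1]])

# Boolean change image: 0 where the class is unchanged, 1 where it changed
change = (date1 != date2).astype(np.uint8)

# Change statistics derived from the Boolean image
n_changed = int(change.sum())
pct_changed = 100.0 * n_changed / change.size
```

Reversing the 0/1 assignment is a one-line edit, so the convention can be matched to whatever your software expects.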

The Kappa Index of Agreement (K): this is an important index output by the crossclassification. It measures the association between the two input images and helps to evaluate the output image. Its values range from -1 to +1 after adjustment for chance agreement. If the two input images are in perfect agreement (no change has occurred), K equals 1. If the two images are completely different, K takes a value of -1. If the agreement between the two dates is no better than chance, Kappa equals 0. Kappa is an index of agreement between the two input images as a whole. However, it can also evaluate per-category agreement by indicating the degree to which a particular category agrees between the two dates. The per-category K can be calculated using the following formula (Rosenfield and Fitzpatrick-Lins, 1986):

Ki = (Pii - Pi. * P.i) / (Pi. - Pi. * P.i)


Pii = Proportion of the entire image in which category i agrees for both dates

Pi. = Proportion of the entire image in class i in the reference image

P.i = Proportion of the entire image in class i in the non-reference image

As a per-category agreement index, it indicates how much a category has changed between the two dates. In the evaluation, either of the two images can be used as the reference and the other as the non-reference.
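The per-category formula above can be computed directly from a crosstabulation matrix. The following sketch assumes the matrix rows index the reference (Date 1) classes and the columns the non-reference (Date 2) classes; the example matrix is hypothetical:

```python
import numpy as np

def per_category_kappa(m):
    """Per-category Kappa from a crosstabulation matrix m.

    Implements Ki = (Pii - Pi. * P.i) / (Pi. - Pi. * P.i), with all
    proportions taken over the whole image.  Rows of m are the
    reference (Date 1) classes, columns the non-reference (Date 2).
    """
    p = m / m.sum()          # convert cell counts to proportions
    pii = np.diag(p)         # agreement proportion per category
    pi_dot = p.sum(axis=1)   # row marginals (reference image)
    p_dot_i = p.sum(axis=0)  # column marginals (non-reference image)
    return (pii - pi_dot * p_dot_i) / (pi_dot - pi_dot * p_dot_i)

# Hypothetical 2-class crosstabulation: 80 of 100 pixels are stable
k = per_category_kappa(np.array([[40, 10],
                                 [10, 40]]))
```

Swapping which image serves as the reference amounts to transposing the matrix before the call.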

The Data Set

The data set used in this exercise consists of two land cover images of southwestern Mauritania and a small portion of northern Senegal. The area is semiarid, with annual rainfall between 200 and 250 mm, occasionally below 200 mm. The landscape is composed of encroaching sand dune systems gently sloping southward to the Senegal River; the encroachment proceeds in the same direction. The dunes dominate the Senegal River plain, which is composed of clayey and sandy clay soils on both sides. The vegetation consists dominantly of sparse Euphorbia balsamifera and Acacia senegal on the sand dune systems. Moderately halophytic species groups dominate the lower (western) section of the river due to the presence of salt in the soils. The upper (eastern) part of the river is covered by annual grasses. Irrigated rice agriculture is practiced on the Mauritanian side near the town of Rosso, and sugar cane on the Senegalese side. The images have been extracted from two Landsat MSS scenes taken in 1977 and 1979.


In this exercise, we will explore one land-cover / land-use change analysis technique called crossclassification. Crossclassification goes along with crosstabulation: the first is a qualitative approach to land-cover change analysis, while the second supports it with statistical outputs for quantitative analysis. The two images to use are called MAUCL77 and MAUCL79.

P1 Use the display system of your software to examine MAUCL77 and MAUCL79 with the legend.

Q1 Before running the crossclassification module of your software, could you tell from the displays whether there has been some change between 1977 and 1979?

P2 Now use the crossclassification module of your software to produce a crosscorrelation image and a crosstabulation table from the two land cover images, using the 1977 image as the reference. Call the crosscorrelation image CROSS. Then display it with a palette that accommodates more than 16 classes, together with the legend.

If your system can calculate the Kappa Index of Agreement per category, select this option. Now take time to examine the display. If your software provides a 3-column crosscorrelation legend with the legend captions, it has the following components:

1st = The new class attribute in the crosscorrelation image

2nd = The class attribute of the reference image

3rd = The class attribute of the non-reference image

Q2 How many legend categories are there?

The generation of a change image using the crosscorrelation image is based on the class combinations. A category is stable (has not changed) if its attributes for the two dates coincide in the output image. For example, the entry 2: 4|4 means that category 2 in the crosscorrelation image represents areas where land-cover category 4 (sparse vegetation) has not changed. You may keep such categories as they are when producing the change image, or collapse them into a single non-change class, depending on whether you want a Boolean or non-Boolean change image.

Q3 How many such combinations do you observe in the legend of your crosscorrelation image? Which land cover categories do they represent?

The remaining legend categories show unpaired classes such as 3: 1|2; 7: 5|3; 7: 5|4; 10: 5|6. This indicates that the same category has been converted into two or more new classes between the two dates. In this case, a category may have changed in one of two opposite directions: expansion, also called accretion (one category takes over other categories), and shrinkage (a category progressively disappears from the landscape).
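Expansion and shrinkage can also be quantified from the crosstabulation matrix itself: each row's off-diagonal sum is what a Date 1 class lost, and each column's off-diagonal sum is what a Date 2 class gained. A minimal sketch, using a hypothetical 3-class matrix:

```python
import numpy as np

# Hypothetical crosstabulation: rows = Date 1 classes, cols = Date 2 classes
m = np.array([[50,  5,  0],
              [20, 30,  0],
              [ 0, 10, 40]])

diag = np.diag(m)
losses = m.sum(axis=1) - diag  # pixels each Date 1 class lost
gains = m.sum(axis=0) - diag   # pixels each Date 2 class gained
net = gains - losses           # positive = expansion, negative = shrinkage
```

In this toy matrix the first class expands (net gain) while the other two shrink, which is exactly the kind of reading Q4 asks you to make from the tabular output.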

Q4 Based on the above change directions (stability, expansion, shrinkage), could you tell from the display:

a. Which categories have expanded?

b. Which categories have shrunk?

c. Is it possible to accurately identify shrinkage from the tabular output? How?

When a category has changed to two or more new categories, it becomes impossible to use the graphic output to assign it to a particular class. Three possibilities exist to produce a land-cover change image:

a. The crosscorrelation image is reclassified as a Boolean image that shows only two categories, the stable (non-change) areas as 1 and the change areas as 0.

b. The crosscorrelation image is reclassified so that the non-change categories retain their identifiers while a new identifier is assigned to all the change pixels. In this case, the resulting image will show areas of no change in detail, and all change areas as one category.

c. The crosscorrelation image is reclassified so that the change categories retain their identifiers while a new identifier is assigned to all non-change categories. In this case, the resulting image will show areas of change in detail, and all non-change areas as one category.
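Option b can be sketched as a single conditional reclassification over the two date images (the images and the new identifier 99 below are hypothetical illustrations, not the exercise values):

```python
import numpy as np

# Hypothetical classified images for two dates (class identifiers 1..3)
date1 = np.array([[1, 1, 2], [2, 3, 3]])
date2 = np.array([[1, 2, 2], [2, 3, 1]])

CHANGE_ID = 99  # new identifier assigned to all change pixels (option b)

# Non-change pixels keep their original class identifier;
# every changed pixel is merged into the single CHANGE_ID category
option_b = np.where(date1 == date2, date1, CHANGE_ID)
```

Option c is the mirror image: keep the change combinations and collapse the stable pixels into one identifier instead.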

Q5 Using option b above, how many classes would you have after reclassifying your crosscorrelation image?

P3 Use the reclassification module of your software to produce a new image as described under option b.

P4 Now examine the statistical outputs of the crosstabulation. You need to look at three things: the cross-tabulation table, the overall Kappa Index of Agreement and the per class Kappa Index of Agreement. First, examine the cross-tabulation table generated with your crosscorrelation image. Compare the diagonal and off-diagonal entries of each category. They tell you how the landscape behaved between 1977 and 1979.

Q6 Would you describe the landscape as stable or unstable between 1977 and 1979? Confirm your answer by examining the overall Kappa Index of Agreement and the per category Kappa Index of Agreement.

Remember that the overall Kappa Index of Agreement indicates the degree of change in the landscape. A low Kappa Index means dramatic changes in the landscape, and vice versa. The per-category Kappa Index of Agreement indicates the individual behavior of each category between the dates. A value close to 1 means little or no change, while a value close to zero indicates a great transformation of the category.
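For reference while interpreting the output, the overall index follows the standard Kappa formula, observed agreement corrected for chance agreement. A minimal sketch over a crosstabulation matrix (the example matrices are hypothetical):

```python
import numpy as np

def overall_kappa(m):
    """Overall Kappa Index of Agreement from a crosstabulation matrix."""
    p = m / m.sum()
    po = np.trace(p)                            # observed agreement
    pe = (p.sum(axis=1) * p.sum(axis=0)).sum()  # chance agreement
    return (po - pe) / (1.0 - pe)

# A purely diagonal matrix (identical images) gives Kappa = 1;
# heavy off-diagonal counts pull Kappa toward 0
k_stable = overall_kappa(np.diag([30, 70]))
k_mixed = overall_kappa(np.array([[40, 10],
                                  [10, 40]]))
```

Comparing this whole-image value against the per-category values shows whether the change was landscape-wide or concentrated in a few classes.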


The crossclassification procedure appears to be a convenient way of undertaking a classification accuracy assessment as well as of evaluating changes that have occurred in the landscape between two dates. The change evaluation can be done using either of the outputs of the crossclassification: the crosscorrelation image or the cross-tabulation table.

The crosscorrelation image is a qualitative output that shows the spatial distribution of land-cover change. As opposed to the crosscorrelation image, the cross-tabulation table is a quantitative output. It offers the possibility of quantifying the changes from the correlation matrix, which shows how much of a given land-cover type has changed into which categories.

The Kappa Index of Agreement produced with the table is also a quantitative means of evaluating the changes. Overall, the Kappa Index of Agreement can be used whenever the land cover classes of two-date images must be evaluated for change. It provides an overall change agreement for the two images as well as per-category change agreements.



This exercise was designed by Amadou K Thiam, Graduate School of Geography, Clark University.


For a detailed discussion of different change detection techniques you may wish to consult:

Eastman, J.R., McKendry, J.E., and Fulk, M.A., 1995. Change and Time Series Analysis, Second Edition. Explorations in Geographic Information Systems Technology, Vol. 1. UNITAR, 119 pp.

For a detailed discussion of the Kappa Index of Agreement you may wish to consult:

Carstensen, L.W., 1987. A Measure of Similarity for Cellular Maps. The American Cartographer, 14 (4): 345-358.

Rosenfield, G.H. and Fitzpatrick-Lins, K., 1986. A Coefficient of Agreement as a Measure of Thematic Classification Accuracy. Photogrammetric Engineering & Remote Sensing, 52 (2): 223-227.

Foody, G.M., 1992. On the Compensation for Chance Agreement in Image Classification Accuracy Assessment. Photogrammetric Engineering & Remote Sensing, 58 (10): 1459-1460.
