Tuesday, November 19, 2013

Remote Sensing Lab 6: Geometric Correction



Part 1: Image to Map Rectification:

Step 1: Bring the image Chicago_drg.img into Erdas 2013.

Step 2: Next, open a second viewer in Erdas 2013 and bring in Chicago_2000.img. Fit both images to frame.

Step 3: Making sure that the viewer with Chicago_2000.img is active, activate the Multispectral tools and click on Control Points. This will open the Set Geometric Model dialogue box.

a)      Under Select Geometric Model, choose Polynomial followed by OK.

b)      A GCP Tool Reference Setup dialogue box will now open in addition to the Multipoint Geometric Correction dialogue box. Leave the default values in the GCP Tool Reference Setup box as they are, i.e. Image Layer (New Viewer), and click OK to close it.

c)      Navigate to the appropriate folder and select Chicago_drg.img to add as the reference image.

d)      Click OK on the reference Map Information dialogue box.

e)      A new dialogue box called Polynomial Model Properties will now open. A first order polynomial equation will be used to develop the model that will be used to rectify Chicago_2000.img. Accept the default values of this box and click Close to close it out.

f)       Maximize the Multipoint Correction window. The window should contain six panes, three for each image, and should look similar to figure 1.

g)      Delete the default GCPs at the bottom of the screen. Use Shift to select the values and delete them by clicking on them with the right mouse button.

h)      In the largest pane, fit Chicago_2000.img to window (right mouse button); do the same for Chicago_drg.img.

i)        Click the Create GCP tool in the Multipoint Geometric Correction interface. It looks like a large cross-hair. This action will turn the cursor into a cross-hair on the screen as well.

j)        Add the first GCP to the Chicago_2000.img and do the same to the reference image on the right-hand side of the Multipoint Correction interface.

k)      Repeat the same process until GCPs 1-4 are roughly in the same areas as those in figure 1. Notice that the GCPs are spread across the images; this helps to ensure a better rectification than if all GCPs were located in close proximity to one another. At this point the GCP pairs do not need to correspond to one another exactly, but precise placement will matter later. Notice that the bottom of the Multipoint Correction interface now reads ‘Model Solution is Current.’ This is because there are enough GCPs to run a 1st order polynomial model.

l)        Next, the Root Mean Square (RMS) error will need to be adjusted. Total RMS is indicated in the bottom right-hand corner of the Multipoint Correction interface, under the heading Control Point Error (Total). In order to run a good model, the total Control Point Error should be less than 0.5.

m)   Adjust the RMS by zooming in on a GCP and moving it until the RMS is at the desired value. This part may be very tedious, but it generally helps to first get the total RMS reasonably low (e.g. less than 15) and then zoom in on a particular GCP and adjust it until the X and Y values in the Control Point Error section of the interface each read as close to zero as possible.
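For reference, the RMS error reported for each GCP is the distance between where the point was placed and where the current polynomial model predicts it should fall. A common formulation (the exact expression ERDAS uses for the total may differ slightly) is:

    RMSi = sqrt(XRi^2 + YRi^2)
    Total RMS = sqrt((RMS1^2 + RMS2^2 + ... + RMSn^2) / n)

where XRi and YRi are the x and y residuals of GCP i and n is the number of GCPs.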

Figure 1

a)
 
b)

Figure 1a shows how the Multipoint Correction interface should appear and the approximate locations of the GCPs. Figure 1b shows the Total Control Point Error which should be less than 0.5.

n)      Once the RMS value is at the appropriate level, click the Display Resample Image Dialog button at the top of the Multipoint Correction interface.

o)      Name the output file Chicago_2000gcr.img.

p)      Leave all parameters at their default values and click OK to run the model.

q)      Click Dismiss when the model is finished running. DO NOT SAVE CURRENT GEOMETRIC MODEL.

Q1 What function(s) did the image Chicago_drg.img perform in the geometric correction process? [Hint: you should name and describe the interpolation that this image aided in the process of geometrically correcting the Chicago_2000.img image].

Chicago_drg.img served as the reference image for the spatial interpolation portion of the geometric correction process. In this case, a first order (linear) polynomial function was used to fit the data derived from the GCPs placed on each image in the Multipoint Correction interface in Erdas. If the image were more distorted, a higher order polynomial function would have been more appropriate; however, this would also require a larger minimum number of GCPs to be collected.

Q2 Name and describe the type of interpolation that is being performed by the resampling dialog window you just clicked above.

Nearest neighbor is the resampling method used for the intensity interpolation in this particular rectification. It assigns each pixel in the output image the value of the single closest pixel in the input image, which preserves the original brightness values but can produce a slightly blocky appearance.
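As a rough illustration only (not the ERDAS implementation), the core of nearest-neighbor resampling can be sketched in Python/NumPy as below; inverse_map is a hypothetical function standing in for the inverse of the fitted polynomial transform.

    import numpy as np

    def nearest_neighbor_resample(src, inverse_map, out_shape):
        # For each output cell, find the corresponding (fractional) source
        # location and copy the value of the single closest source pixel.
        out = np.zeros(out_shape, dtype=src.dtype)
        for r in range(out_shape[0]):
            for c in range(out_shape[1]):
                sr, sc = inverse_map(r, c)                 # hypothetical inverse transform
                sr, sc = int(round(sr)), int(round(sc))    # snap to nearest source cell
                if 0 <= sr < src.shape[0] and 0 <= sc < src.shape[1]:
                    out[r, c] = src[sr, sc]
        return out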

 

Q3 Why did you spread the four points you collected across the images instead of concentrating them in only one or two areas of the images?

The GCPs were spread out in this image (and should be in any image, as much as possible) so that the rectification will be as accurate as possible across the whole scene. If the points were clustered together, the geometric correction could be noticeably less accurate in areas far from the GCPs.

Q4 Briefly explain the first order polynomial equation/model used in the above geometric correction exercise.

First order polynomial functions are used to spatially interpolate images that have a lower degree of distortion. However, linear functions also leave out more information than higher order polynomial functions, such as quadratic or cubic ones, can capture. For instance, on an x-y graph a linear function is a straight line; when a straight line is drawn over a curved area, some of that area is left out, which in the case of image rectification equates to lost data. In contrast, higher order polynomial functions fit curves much better than a linear polynomial would, thus preserving more of that information.
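For reference, a first-order polynomial transformation maps each input coordinate (x, y) to a reference coordinate (x', y') with six coefficients estimated from the GCPs:

    x' = a0 + a1*x + a2*y
    y' = b0 + b1*x + b2*y

Because six coefficients must be solved for, and each GCP contributes one x equation and one y equation, at least three GCPs are needed before the model has a solution.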

Q5 What is the minimum number of ground control points needed to perform a 1st order polynomial transformation?

Three is the minimum number of GCPs required for a first order polynomial transformation. However, collecting one or two extra points is advisable, as three is only the bare minimum.
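A commonly cited rule for the minimum number of GCPs needed for a polynomial transformation of order t is:

    minimum GCPs = (t + 1)(t + 2) / 2

which gives 3 for a first-order transformation and 10 for a third-order transformation.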

 
Figure 2

Figure 2 shows the image that was rectified using the steps above in Part 1 of this lab.

 


Part 2: Image to Image Registration:

Step 1: Bring the image sierra_leone_east1991.img into Erdas 2013.

Step 2: Next, open a second viewer in Erdas 2013 and bring in sierra_leone_east1991grf.img. Fit both images to frame.

Step 3: Activate the Multispectral tools and click on Control Points. This will open the Set Geometric Model dialogue box.

a)         Under Select Geometric Model, choose Polynomial followed by OK.

b)         A GCP Tool Reference Setup dialogue box will now open in addition to the Multipoint Geometric Correction dialogue box. Leave the default values in the GCP Tool Reference Setup box as they are, i.e. Image Layer (New Viewer), and click OK to close it.

c)         Navigate to the appropriate folder and select sierra_leone_east1991grf.img to add as the reference image.

d)         Click OK on the Reference Map Information dialogue box.

e)         A new dialogue box called Polynomial Model Properties will now open. A third-order polynomial equation will be used to develop the model that will be used to rectify sierra_leone_east1991.img. Accept the default values of this box and click Close to close it out.

f)          Maximize the Multipoint Correction window.

g)         Delete the default GCPs at the bottom of the screen. Use Shift to select the values and delete them by clicking on them with the right mouse button.

h)         In the largest pane, fit sierra_leone_east1991grf.img to window (right mouse button); do the same for sierra_leone_east1991.img.

i)          Click the Create GCP tool in the Multipoint Geometric Correction interface. It looks like a large cross-hair. This action will turn the cursor into a cross-hair on the screen as well.

j)          Add the first GCP to sierra_leone_east1991.img and do the same to the reference image on the right-hand side of the Multipoint Correction interface. This is similar to the process in Part 1 of this lab. However, because a third order polynomial function is being used to spatially interpolate sierra_leone_east1991.img, more GCPs will be collected; in this case, 12. Also, be sure to spread the GCPs across the images to ensure that they are rectified as completely and accurately as possible.

k)         Next, the Root Mean Square (RMS) error will need to be adjusted. Total RMS is indicated in the bottom right-hand corner of the Multipoint Correction interface, under the heading Control Point Error (Total). In order to run a good model, the total Control Point Error should be less than 0.5. Again, this process should be similar to Part 1 of the lab.

Figure 3a shows sierra_leone_east1991.img and its reference image in the Multipoint Correction interface prior to any geometric correction being performed. Just below it, figure 3b shows detail of the RMS error.

Figure 3
a)

 
b)

Figure 3a shows the reference image sierra_leone_east1991grf.img (right) and the distorted input image sierra_leone_east1991.img (left) as they appear in the Multipoint Correction interface prior to geometric correction. Figure 3b shows the total control point (RMS) error.

l)              Once the RMS value is at the appropriate level, click Display Resample Image button at the top of the Multipoint Correction interface.

m)        Name the output file sl_east_gcc.img.

n)         Change the Resample Method to Bilinear Interpolation and click OK to run the model.

o)         Click Dismiss when the model is finished running. DO NOT SAVE CURRENT GEOMETRIC MODEL.

 

Q6 What type of map coordinate system is the reference image in?

UTM (Zone 29) projected coordinate system.

Q7 What is the minimum number of GCPs you need to collect to perform a 3rd order polynomial transformation?

10 is the minimum number of GCPs needed to perform a 3rd-order polynomial transformation, consistent with the (t + 1)(t + 2)/2 rule shown in the answer to Q5.

Q8 Why is the Multipoint Geometric Correction interface reporting that the model has no solution even though you have collected up to 9 points, when in Part 1 above your model reported “Model solution is current” once you had 3 points?

Since the transformation is a 3rd order polynomial, more GCPs (at least 10) need to be placed on the Sierra Leone images than on the Chicago images, which were geometrically corrected with a 1st-order polynomial transformation requiring only 3.

Q9 How geometrically correct is your rectified image compared to the reference image you used?

Figure 4 a-c show the rectified image overlaid on the reference image in Erdas at various swipe stages. The rectified image seems to fit nicely over the reference image; however, figure 4d shows that the southeast corner of sl_east_gcc.img is not a perfect fit in that particular location.

Q10 Why was a bilinear interpolation resampling selected above instead of nearest neighbor as executed in part 1?

Bilinear interpolation (BLI) was used as the resampling method because it produces a smoother image than nearest neighbor (NN) does. Also, because BLI uses a distance-weighted average of the four nearest pixel values, it is more accurate than NN. However, the accuracy gained by using BLI comes at a greater computational expense than NN.
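A minimal sketch of the weighted average that BLI performs (illustrative only, not the ERDAS code) is shown below; r and c are the fractional source coordinates of an output pixel.

    import numpy as np

    def bilinear_sample(src, r, c):
        # Weighted average of the four source pixels surrounding (r, c),
        # weighted by how close (r, c) is to each of them.
        r0, c0 = int(np.floor(r)), int(np.floor(c))
        r1 = min(r0 + 1, src.shape[0] - 1)
        c1 = min(c0 + 1, src.shape[1] - 1)
        dr, dc = r - r0, c - c0
        top = (1 - dc) * src[r0, c0] + dc * src[r0, c1]
        bottom = (1 - dc) * src[r1, c0] + dc * src[r1, c1]
        return (1 - dr) * top + dr * bottom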
Figure 4
a)
 
b)
 
c)
 
d)

Figure 4 a-c show the rectified image sl_east_gcc.img at various Swipe-Function stages as it overlays the reference image sierra_leone_east1991grf.img. Figure 4d shows the SE corner of the two images, illustrating how this particular corner was poorly rectified.

Friday, November 15, 2013

Lab 5: Image Mosaic and Miscellaneous Image Functions II




 

 
    The goal of this lab is to allow one to gain experience in image analysis. This will be done using Erdas 2013 and includes the following analytical processes: RGB to IHS transformation and back, image mosaicking, spatial/spectral image enhancement, band ratioing, and binary change detection. In addition to these analytical tools in Erdas, ArcMap will be used to create a map of the area of interest (AOI) used in the binary change detection section of this lab, showing change between August 1991 and August 2011.
Part 1: RGB to IHS Transformation and Back:

Section 1: RGB to IHS Transformation:
RGB to IHS is an image enhancement technique in which the additive primary (Red/Green/Blue) color coordinate system is converted to the I (intensity), H (hue), S (saturation) color system. Doing this results in an image that is easier for the human eye to perceive (Erdas, 1999).
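As a rough analogue only (ERDAS uses its own RGB/IHS algorithm), Python's standard colorsys module converts a normalized RGB triple into a comparable hue/lightness/saturation space and back without losing information:

    import colorsys

    # Illustrative normalized (0-1) RGB values for a single pixel.
    r, g, b = 0.45, 0.38, 0.30
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # hue, lightness (~intensity), saturation
    print(h, l, s)
    print(colorsys.hls_to_rgb(h, l, s))      # converts back to the original RGB triple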

 
Step 1: Use Erdas Imagine 2013 to open the image that is to undergo the RGB to IHS transformation. In this case it is eau_claire_2000.img.

 Step 2: Check to ensure that band 3 is represented by the red color-gun, band 2 the green, and band 1 the blue. This is done by activating the Multispectral tool menu.

 Q1. Does eau_claire_2000.img accurately represent what one would see in the natural world?

No, the color found in eau_claire_2000.img is dull, and the contrast is rather poor, making some surface features difficult to distinguish from one another. Figure 1 shows the original RGB image (eau_claire_2000.img; left) compared to its transformation to IHS (ec_rgb_ihs.img; right).

 Step 3: Activate the Raster tool menu.

 Step 4: Click on Spectral, followed by RGB to IHS. This will open the RGB to IHS interface.

 

      a) Ensure that the input file on the RGB to IHS interface is eau_claire_2000.img.

b)      Click on the folder icon next to the Output File section and navigate to the appropriate output folder. Enter the name of the output file. In this case it is ec_rgb_ihs.img.

 

c)      Ensure that the following is true: red=3, green=2, and blue=1 on the interface.

 

d)      Leave the remaining default parameters as they are and Click OK.

 
Step 5: When the model finishes running click Dismiss and close the box.

Figure 1 



Figure 1 shows eau_claire_2000.img prior to (left) and after (right) its transformation from RGB to IHS displayed in the Erdas Imagine viewer. Color-guns for both images are set to R=3, G=2, B=1.
 


Q2. Describe the new IHS image that was just created from the standpoint of the color characteristics of the image itself and the original RGB image that was just transformed. In the description, state the differences in the patterns of the histograms for the three bands used in the transformation.
  Once the RGB image was transformed to IHS, the IHS image had far too much contrast. As shown in figure 1 (right), it is very hard to distinguish any features in the IHS image except for the Mississippi and Chippewa Rivers. Also shown in figure 1, the color in the enhanced image (ec_rgb_ihs.img; right) is much less realistic than that of the original RGB image prior to its transformation.
   Figure 2 compares the three histograms of the original eau_claire_2000.img image (top row) to the three histograms of the IHS image (bottom row). The histograms of the IHS image show that the frequency of the radiometric data is more widely distributed across the histograms than in those of the RGB image. This is to be expected, given the high contrast exhibited by the IHS image in figure 1.
Figure 2
 
Figure 2 shows the histogram data of both the original RGB image (top row of three) and those of the image after the RGB to IHS conversion (bottom row of three).
 
 
Section 2: IHS to RGB Transformation:
Step 1: Open the IHS image that was created in section 1 in the Erdas viewer. In this case it is ec_rgb_ihs.img.
Step 2: Activate the Raster Processing Tools on the Erdas interface.
Step 3: Click on Spectral, followed by IHS to RGB. This will open the IHS to RGB interface.
a)      Ensure that the input file on the IHS to RGB interface is ec_rgb_ihs.img, the IHS image created in section 1.
 
b)      Click on the folder icon next to the Output File section and navigate to the appropriate output folder. Enter the name of the output file. In this case it is ec_ihs_rgb.img.
 
c)      Make sure that Intensity=band 1, Hue=band 2, and Saturation=band 3 on the IHS to RGB interface.
 
d)      Leave all default parameters as they are on the interface and click OK to run the model.
 
Step 4: When the model has finished running click Dismiss and then close out the dialogue box.
 
Q3. Compare and contrast the newly transformed RGB image and the original RGB image in terms of color characteristics and histograms.
Figure 3 shows the original RGB image and the image that resulted from the IHS to RGB transformation (no stretch) described in section 2 of this lab. Both images have the color guns set as follows: red=3, green=2, and blue=1. As far as differences in color, the enhanced image has more of a brown tint to it than the original RGB image. Also, the transformed image still does not represent what one would expect to see in the real world.
However, the histograms found in figure 4 show that band 3 (red) is more widely distributed in the histogram of the original image (top row) than in the one produced by the IHS to RGB transformation (bottom row). In contrast, the histogram representing band 1 (blue) shows a wider distribution in the output image, and thus higher contrast in this band.
  It is also worth noting that the histogram for band 2 (green) is essentially unchanged between the original and enhanced images. This was confirmed by looking at the statistics for the two images, which showed no difference in this band except that the median was off by about +0.333 in the enhanced image.
 
Figure 3
 
 
Figure 3 shows the original RGB image (left) and the image that resulted from the IHS to RGB transformation (no stretch; right) displayed in the Erdas viewer.
Figure 4
 
Figure 4 shows the histogram data of both the original RGB image (top row of three) and those of the image after the IHS to RGB conversion (no stretch; bottom row of three).
Section 3: IHS to RGB Transformation with I & S Stretch:
Repeat the steps in section 2 to perform an IHS to RGB transformation. However, this time apply Stretch I & S in the IHS to RGB interface. Name the new file ec_ihs_rgb2.img and set color guns to 3, 2, and 1 for R, G, and B, respectively.
Q4. Compare the newly stretched retransformed RGB image to both the non-stretched and original RGB images in terms of color patterns, quality and histograms (Make color gun 3, 2, 1).
Compared to the original RGB image, the IHS_RGB2 image displayed in figure 5 is not much different from the first IHS_RGB image with no stretch in terms of color patterns. Likewise, the image itself resembles the first un-stretched IHS_RGB image as far as color and quality are concerned. That is, it bears little resemblance to what one would expect in the real world.
Figure 6 illustrates the histograms of each band of the enhanced and stretched IHS_RGB2 image. The differences in distribution compared to the original RGB image are similar to those exhibited between the unstretched IHS_RGB image and the original. However, the histograms of the stretched image show that the data within each layer are more widely distributed than in the unstretched image. Of course, the exception to this is the green band (layer 2), which has not changed significantly with the application of any enhancement.
 
Figure 5
 
 
 
Figure 5 shows the IHS to RGB transformation (ec_IHS_RGB2.img) in the Erdas Imagine viewer. A stretch to the saturation and intensity was applied.
 
Figure 6
 
Figure 6 shows the histogram data for the enhanced IHS to RGB image (ec_IHS_RGB2.img) after a stretch to the intensity and saturation was applied.
 
 
Part 2: Image Mosaicking:
Mosaicking is useful in remote sensing when the AOI is extremely large or spans two satellite scenes.
Step 1: Open Erdas Imagine and navigate to the desired image, in this case it is eau_claire_2005p26r29 in the Select Layers to Add window. Do not add the image yet.
a)      Click on Multiple in the Select Layers to Add window and then on the Multiple Images in Virtual Mosaic button.
b)      Click on Raster Options and ensure that Background Transparent is checked. Also, check Fit to Frame.
Repeat step 1, but this time bring in the image eau_claire2005p25r29. Click OK when finished. This will load the image presented in figure 7 into the Erdas Imagine viewer.
Figure 7
 
 

Figure 7 illustrates how the image resulting from step 1 will appear in Erdas Imagine.
 
 
Section 1: Image Mosaic Using Mosaic Express:
Step 1: With the image illustrated in figure 7 in the Erdas Imagine viewer, activate the Raster Tools.
Step 2: Select Mosaic followed by Mosaic Express from the resulting drop menu. Doing this will result in the appearance of a Mosaic Express window.
a)      Under the Input tab, click on the folder icon.
b)      Next, select the file eau_claire2005p25r29. This will be the top layer.
c)      Repeat part b, but load the image eau_claire2005p26r29.
d)      Click on Next. Continue to do so, leaving all parameters at their defaults, until the Output Dialogue is reached.
e)      Click on the folder icon next to Root Name and navigate to the appropriate output folder.
f)       Name the output file eau_claire2005msx.img.
g)      Leave all parameters at their defaults and click Finish to run the model.
Step 3: When the model finishes running, click on Dismiss and close out the window.
 
Q5. Describe the nature of color in your output image, in other words, is there a smooth color transition between one image and the other especially at the boundaries?
As shown in figure 8, the colors of the two images that were merged via Mosaic Express do not match each other. The result is a distinct boundary between the images.
 
Figure 8
 
 

Figure 8 shows the mosaic image that was created by running Mosaic Express in Erdas Imagine as it appears in the viewer.
 
Section 2: Image Mosaic using MosaicPro:
Step 1: Open two images in Erdas Imagine following the same steps as in section 1; in this case they are eau_claire1995p25r29.img and eau_claire1995p26r29.img.
Step 2: Click on Mosaic in the Raster tools and select MosaicPro. This will open the MosaicPro window.
a)      Click on the Add Images icon  to open the Add Images dialogue box.
b)      Highlight eau_claire1995p25r29.img, but DO NOT add it.
c)      Click on the Image Area Options tab in the Add Images dialogue box and then select Compute Active Area.  Click OK.
d)      Repeat steps a, b, and c to add eau_claire1995p26r29.img.
e)      Make sure that the eau_claire1995p25r29.img is the bottom image in the MosaicPro Window.
f)       Click on Color Corrections  in the MosaicPro tool bar. This will open the Color Corrections dialogue box.
g)      Check the Use Histogram Matching option. This will activate a Set button; click on Set and then select Overlap Areas and click OK.
h)      Click on Process in the MosaicPro interface followed by Run Mosaic.
i)        Navigate to the appropriate output folder and name the new image eau_claire1995msp.img. Click OK to run the model.
j)        Once MosaicPro is complete, click Dismiss and then close the window.
 
Q6. Compare the output mosaic image using the MosaicPro (MP) and that obtained earlier using the Mosaic Express (ME). In your discussion, state the reason(s) for the differences in the image quality.
  Figure 9 shows a comparison of two mosaicked images as viewed in the Erdas 2013 viewer. The image on the left in figure 9 is the one in which Mosaic Express was used to join the two images; the right-hand side of figure 9 illustrates the image that was processed using MosaicPro (eau_claire1995msp.img).
  Looking at figure 9, it is clear that MP creates a much more seamless image than ME does. This is because the MP process is more precise. For instance, the Color Corrections option in MP allows the radiometric data in the two input images to be synchronized; in this case Histogram Matching was used, which balanced the color differences between the two images.
However, even though MP created an image superior in quality to the ME image, the boundary line between the two input images can still be seen in the MP image, especially when closely examining (i.e. zooming in on) the image.
Figure 9
 
 

Figure 9 compares the images produced using MosaicExpress (left) and MosaicPro (right).


Part 3: Band Ratioing:
 This section of the lab will use a ratio transformation to create a normalized difference vegetation index (NDVI) of the Eau Claire area. This type of index can help an analyst distinguish vegetation from other surface features.
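For reference, NDVI is computed per pixel from the near-infrared and red bands (Landsat TM bands 4 and 3):

    NDVI = (NIR - Red) / (NIR + Red)

A minimal NumPy sketch, assuming the two bands have already been read into arrays, might look like this (illustrative only, not the ERDAS indices code):

    import numpy as np

    def ndvi(nir, red):
        # Per-pixel normalized difference of the NIR (band 4) and red (band 3) bands.
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        denom = nir + red
        return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

Dense, healthy vegetation gives values near +1, while water and bare surfaces give values near or below 0.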
Step 1: Bring the desired image into the Erdas 2013 viewer; in this case it is eau_claire_2011.img.
Step 2: Select Unsupervised, followed by NDVI in the Raster tools menu of the Erdas viewer. Doing this will open an indices interface.
a)      Make sure that the input image is eau_claire_2011.img.
b)      Click on the folder icon next to the Output File portion of the Indices interface and navigate to the appropriate output location.
c)      Name the new image eau_claire2011ndvi.img.
d)      Make sure the Sensor section of the indices interface reads ‘Landsat TM.’ Landsat TM 4 and 5 should have the same bands.
e)      Under Function in the indices interface, highlight NDVI.
f)       Click OK to run the model.
g)      Dismiss at the completion of the run and then close out the window.
Q7. What will you expect to find in areas that are very white in the NDVI image?
 Figure 10 shows the NDVI image (eau_claire2011ndvi.img) that was created using the steps in part 3 of this lab, with an inset viewer over the city of Eau Claire. White areas throughout the entire image (i.e. not just those within the inset viewer) indicate high NDVI values and correspond to areas in which vegetation is prominent.
Q8. Comment on the presence or absence of vegetation in areas that are medium gray and black.
  Areas in figure 10 that are black or medium gray indicate areas that lack vegetation completely or are only sparsely vegetated (i.e. they have low or negative NDVI values). For instance, the Mississippi, St. Croix, and Chippewa Rivers all appear black on the main map in figure 10.
  The inset viewer in figure 10 shows the city of Eau Claire, which is generally medium gray. This is expected, given the lack of vegetation within the city limits compared to the more rural areas surrounding it, which contain forested and agricultural land that shows higher values in an NDVI image.
Figure 10
 
 

Figure 10 shows an NDVI image of west-central Wisconsin and eastern Minnesota as it appeared in the Erdas 2013 viewer (eau_claire2011ndvi.img). The inset viewer in the center right-hand portion of the screenshot shows the city of Eau Claire in greater detail.

 
Part 4: Spatial and Spectral Image Enhancement:
Section 1a: Spatial Enhancement—Low Pass Filter:
Step 1: Open the appropriate file in the Erdas 2013 viewer; in this case it is chicago_tm1995_b3.img.
Q9. This image demonstrates some amount of high frequency which needs to be suppressed. What is a high frequency image?
Figure 11a shows chicago_tm1995_b3.img as it was displayed in the Erdas viewer after performing step 1 above. The image is high frequency because its brightness values (BV) change substantially over short distances. This is shown by chicago_tm1995_b3.img's histogram, which is depicted in figure 11b.
Figure 11
 a)

b)

Figure 11 shows the high frequency image, chicago_tm1995_b3.img (a) and its histogram (b).
 
 
Step 2: Activate the Raster tools.
Step 3: Click on Spatial, followed by Convolution in the Raster menu. Doing this will activate the Convolution interface.
a)      Ensure that the input file is chicago_tm1995_b3.img.
b)      Under Kernel Selection, select 5x5 Low Pass.
c)      Click on the folder icon next to the Output File section of the Convolution interface.
d)      Navigate to the appropriate output folder. Name the output file chicago_tm1995_b3low.img.
e)      Leave all other parameters in the Convolution interface as they are and click OK to run the model.
Step 4: Dismiss once the model finishes running and close out the box.
Q10. Outline the differences between the original image and the 5x5 Low Pass filtered image you just created.
Figure 12 shows the spatially enhanced chicago_tm1995_b3low.img (right) compared to the original chicago_tm1995_b3.img (left). As shown, the new image is much smoother than the original when both are viewed in Erdas 2013 at the same extent (1:1152587). This smoothness, which resulted from the changes made to the brightness values, is especially apparent when comparing the area enclosed by the red ellipse on the new image to the same area on the original.
However, when zooming in with the two views synchronized, the spatially enhanced image becomes blurry more quickly than the original one.
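A minimal illustration of what a 5x5 low-pass kernel does (a simple mean of each pixel's 5x5 neighborhood; the exact kernel ERDAS applies may differ) is:

    import numpy as np
    from scipy.ndimage import convolve

    def low_pass_5x5(band):
        # Replace each pixel with the average of its 5x5 neighborhood,
        # which suppresses high-frequency (rapidly changing) brightness values.
        kernel = np.ones((5, 5)) / 25.0
        return convolve(band.astype(np.float64), kernel, mode='nearest')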
 
Figure 12
 

Figure 12 shows the original chicago_tm1995_b3.img (left) and chicago_tm1995_b3low.img (right), on which a 5x5 low-pass convolution filter was applied.

Section 1b: Spatial Enhancement—High Pass Filter:
Q11. What is a low frequency image?
A low frequency image is one in which the brightness values (BV) of pixels change little over a given distance (i.e. it has low contrast).
Step 1: Open the desired image in the Erdas 2013 viewer; in this case it is sierra_leone2002b3.img.
Step 2: Using the methods in Section 1a, apply a 5x5 High-Pass convolution filter on sierra_leone2002b3.img. However, save the new file as sierra_leone2002high.img.
 
 Q12. Outline the differences between the original image and the 5x5 High Pass filtered image you just created.
As shown in figure 13, the convolved image (right) has much better contrast than the original image (left). Viewed at the “Fit to Screen” extent (1:995207), what appear to be roads (white) show up much better in the convolved image than in the original.
However, when zooming in (images synchronized), the convolved sierra_leone2002high.img appears to have a lot of noise. For example, at an extent of 1:86394 the convolved image begins to take on a salt-and-pepper appearance.
Figure 13
Figure 13 shows the original sierra_leone2002b3.img (left) and sierra_leone2002high.img (right) on which a 5x5 high-pass convolution filter was applied.
 
 
Section 1c: Spatial Enhancement—Edge Enhancement:
Step 1: Open the desired image in Erdas Imagine 2013; in this case it is sierra_leone1991.img.
Step 2: Open the Convolution window in the same way as in section 1 a and b.
a)      Make sure that the Input image is sierra_leone1991.img.
b)      Highlight 3x3 Laplacian Edge Detection under Kernel Type.
c)      Click on Fill.
d)      Uncheck Normalize the Kernel.
e)      Click on the folder icon next to the output image and navigate to the appropriate folder. Name the new image sierra_leone1991edge.img.
f)       Leave the other parameters the as they are and click OK to run the model.
Q13.  What is a Laplacian convolution filter?
A Laplacian convolution filter (LCF) is a linear edge enhancement method that approximates the second derivative of the image's brightness values. This is done to make the borders between two features in an image more prominent. In other words, an LCF attempts to make surface features in an image more easily distinguishable from one another.
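One commonly used 3x3 Laplacian kernel (the exact coefficients of the ERDAS 3x3 Laplacian Edge Detection kernel may differ) is:

     0  -1   0
    -1   4  -1
     0  -1   0

Because the weights sum to zero, uniform areas produce values near zero, while abrupt changes in brightness (edges) produce large positive or negative responses.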
Q14. Outline the differences between the original image and the Laplacian edge detection image you just created.
Figure 14 shows the original image (left) and the convolved sierra_leone1991edge.img (right) in the Erdas 2013 viewer at the same extent (1:944817). Although the overall brightness of sierra_leone1991edge.img appears to be lower than that of the original, surface features such as roads and rivers are more easily discernible in the image on the right than in the original.
Figure 14
Figure 14 shows the original image, sierra_leone1991.img (left) and the convolved sierra_leone1991edge.img (right).
 
Section 2: Spectral Enhancement:
This section of the lab will illustrate how to improve the visual appearance of two images. This will be done using two methods of linear contrast stretch.
Also, histogram equalization will be performed on one of the images.
Section 2a: Min.-Max. Contrast Stretch:
Step1: Load the desired image into the Erdas 2013 viewer. In this case it is eau_claire19913b.img.
Step 2: Activate the Panchromatic tools in the Erdas menu bar.
Step 3: Click on General Contrast, then select General Contrast again from the resulting dropdown tab. This will open the Contrast Adjust interface.
a)      Click on Method and select Gaussian.
b)      Click on Apply.
The resulting image with the Gaussian linear stretch is shown below in figure 15.
Figure 15
Figure 15 shows eau_claire19913b.img after a Gaussian stretch was applied.
 
Step 4: Clear eau_claire19913b.img and do not save.
Section 2b: Piecewise Contrast Stretch:
Step 1: Load the desired image into the Erdas 2013 viewer. In this case it is eau_claire1991b5.img.
Step 2: Click on General Contrast, followed by Piecewise Contrast in the Panchromatic tool menu.
Step 3: Use the figure below to set the parameters in the resulting Contrast Tool dialogue box. These values were taken by moving the crosshairs over the histogram and taking note of the resulting values.
 
 





Step 4: Click on the Middle Range button of the dialogue box and repeat the same process as above.

Step 5: Increase the dynamic range of brightness for the final mode to 180 and apply it to the image.

Q15. Compare the appearance of the piecewise contrast stretched image with the original image.

Figure 16 shows the original unaltered image (eau_claire1991b5.img; right) and the spectrally enhanced image on the left. Notice that surface features in the enhanced image are much more easily distinguished than in the washed-out original.
 
Figure 16
Figure 16 shows the original image (eau_claire1991b5.img) on the right and the same image after a piecewise contrast stretch was applied.
 
Section 2c: Histogram Equalization:
Step 1: Load the desired image into the Erdas 2013 viewer. In this case it is l5026029_0292011b30.img.
Step 2: Activate the Raster Tools and select Radiometric, followed by Histogram Equalization.
Step 3: Click on the folder icon next to the output file section of the dialogue box that resulted from step 2 and navigate to the appropriate folder. Name the new file ec_2011_b3_he.img.
Step 4: Accept all default parameters and run the model by clicking OK.
Q16. Outline the differences you observed between your input image and your Histogram Equalized image, and also their respective histograms.
Figure 17a shows the original image on the left (l5026029_0292011b30.img) and the image that resulted after histogram equalization was applied on the right (ec_2011_b3_he.img). Notice that the contrast in the enhanced image is much better than in the original.
 This change in contrast is also reflected in the respective histograms of the images, with the original displayed on the left side of figure 17b and the enhanced on the right. The brightness values of the enhanced image are much more evenly distributed across its histogram than those of the original.
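A minimal NumPy sketch of histogram equalization for an 8-bit band (illustrative only; the ERDAS implementation may differ in detail) is:

    import numpy as np

    def equalize(band):
        # band is a 2-D uint8 array. Build a lookup table from the normalized
        # cumulative histogram so output brightness values are spread evenly.
        hist, _ = np.histogram(band, bins=256, range=(0, 256))
        cdf = hist.cumsum()
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
        lut = np.round(cdf * 255).astype(np.uint8)
        return lut[band]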
Figure 17
 
 a)
 b)

Figure 17a shows the original image on the left (l5026029_0292011b30.img) and the image enhanced using histogram equalization on the right (ec_2011_b3_he.img). The respective histograms of these images are shown in figure 17b.


 
Part 5: Binary Change Detection (Image Differencing):

In this part of the lab, image differencing will be used to detect changes in brightness values (BV) between two images of Eau Claire and neighboring counties. Both images were taken at around the same time of year (August), though the older one was acquired in 1991 and the newer in 2011.

Section 1a: Creating a Difference Image:

Step 1: Open Erdas 2013 with two viewers. In each window open the appropriate images; in this case they are: ec_envs1991.img and ec_envs2011.img in view one and two, respectively.

Step 2: Synchronize the viewers and zoom in and out/pan around to observe any differences between the two images.

Step 3: Activate the Raster processing tools.

Step 4: Click on Functions in the raster tool menu, followed by Two Image Functions. This will open the Two Input Operators interface.

a)      Ensure that the following is correct within the interface:

1)      Input File 1: ec_envs2011.img

2)      Input File 2: ec_envs1991.img

b)      Navigate to the appropriate output folder via the folder icon next to the Output section of the interface. Name the output file ec_envs91_11.img.

c)      Under the Output Options section of the interface, change the Operator from (+) to (-).

d)      Next, click on Layer beneath both input files and change them from All to 4 on each.

e)      Click OK to run image differencing.

Step 5: Dismiss at the end of the run and close out the window.

The image produced using the steps above is displayed below in figure 18. In this case the image, on the right, is shown next to one of the input images that was used to create it (ec_envs1991.img; left). Only the NIR band 4 was used to create the output image in order to simplify its processing.

Figure 18


Figure 18 shows one of the input images, ec_envs1991.img (left), compared to the newly created ec_envs91_11.img (right). Both images have inset viewers placed over approximately the same area.



Section 1b: Estimating the Threshold of Change:

Step 1: Open the Image Metadata interface and click on Histogram.

Step 2: Observe the distribution of the histogram and the range of BVs. The cutoff point of the histogram for the change/no-change threshold will be determined using the rule-of-thumb equation:

[1]    Mean + 1.5(Standard Deviation)

Step 3: Move the cursor to the center of the histogram (i.e. in the middle of the bell) and take note of the value obtained in doing so.

Step 4: Go to the General section of the Metadata interface and take note of both the mean and standard deviation.

Step 5: Add the results obtained from Steps 3 and 4: the resulting sum is the upper limit for the change/no change threshold.

Step 6: Repeat the above steps to obtain the lower limit of the change/no change threshold; this should be a negative value.
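A minimal sketch of the rule-of-thumb calculation in equation 1 (the mean and standard deviation values below are hypothetical; substitute the ones read from the Metadata interface) is:

    # Hypothetical values read from the Metadata > General tab
    mean, std = 3.2, 28.5
    upper = mean + 1.5 * std   # upper limit of the change/no-change threshold
    lower = mean - 1.5 * std   # lower limit (negative when the mean is near zero)
    print(upper, lower)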

 

Section 2a: Mapping Change Pixels in a Difference Image Using Spatial Modeler:

This section of the lab will map changes that occurred within Eau Claire and neighboring counties between August 1991 and August 2011. Below, equation 2 shows how the difference will be contrived mathematically using the spatial modeler program:

[2]    ΔBVijk = BVijk(1) - BVijk(2) + C

Where:

1)      ΔBVijk is the change in pixel values.

2)      BVijk(1) is the BVs of the 2011 image.

3)      BVijk(2) is the BVs of the 1991 image.

4)      C is a constant, 127 in this case.

5)      i is the line number.

6)      j is the column number.

7)      k is a single band of Landsat TM.
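A minimal NumPy sketch of equation 2 applied to band 4 of the two images (the array names and 8-bit clipping are illustrative assumptions, not the Spatial Modeler code) is:

    import numpy as np

    def difference_image(bv_2011_b4, bv_1991_b4, c=127):
        # Subtract the 1991 band-4 BVs from the 2011 band-4 BVs and add a
        # constant so 'no change' falls near the middle of the 8-bit range.
        diff = bv_2011_b4.astype(np.int32) - bv_1991_b4.astype(np.int32) + c
        return np.clip(diff, 0, 255).astype(np.uint8)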

Step 1: Select Model Maker from the ToolBox menu in the Erdas 2013 viewer. This should open the Model Maker interface. Figure 19 displays a diagram in order to help identify the tools used to run the program.
 
Figure 19


Figure 19 shows some of the tools used in the Model Maker interface.



Step 2: First, click on the selection tool (A), followed by a Raster Object (B), and finally click in the model panel (E) to place the object. Refer to figure 20 for how Model Maker should appear after steps 2 through 6 are completed.

Step 3: Repeat step 2 to add another Raster Object to the model maker interface.

Step 4: Repeat the processes in steps 2 and 3, but this time place a Function (C) on the screen. This function should be placed below the two raster objects.

Step 5: Add a third raster object to the model maker interface.

Step 6: Use the Connector Arrow (D) to connect all the objects in the model maker window. Again, when finished the resulting interface should look like figure 20.

Figure 20


Figure 20 illustrates how the model maker interface should appear after steps 2-6 were completed.



Step 7: Click on the raster object in the upper left of the Model Maker interface using the left mouse button. This should open a raster interface.

a)       Under Input, bring in the ec_envs_2011_b4.img image.

Step 8: Repeat step 7, but this time click on the right-hand raster object and choose ec_envs_1991_b4.img.

Step 9: Select the function object and put the following on the bottom of the Define Function interface:

                   $n1_ec_envs_2011_b4-$n2_ec_envs_1991_b4+127

Step 10: Name the output image ec_91-11chg_b.img and save it in the appropriate folder.

Step 11: Run the model by clicking on the tool represented by arrow F in figure 19. The modified image that was just created should look like the one in figure 21 below.

Figure 21


Figure 21 illustrates how the differenced image (ec_91-11chg_b.img) created in part 5, section 2, steps 1-11 should look when brought into the Erdas viewer.
 
 
 
Section 2b:
 In order to obtain an image with only the BVs that changed between 1991 and 2011, another model will need to be created using Model Maker. However, before this can be done a new change threshold will need to be determined. Equation 3 will be used to do this:
[3]        Mean + 3(Standard Deviation)
Step 1: Open the image Metadata and select Histogram for image ec_91-11chg_b.img.
a)      Once done observing the histogram information click on the general tab and take note of the Mean and Standard Deviation.
b)      Use Equation 3 above to calculate the new change-/no-change threshold.
Step 2: Open Model Maker and create an Input Raster Object, a Function Object, and an Output Raster Object. Figure 22 illustrates how this model should appear in Model Maker.
Step 3: Connect all the objects in Model Maker with arrows, as in Section 1.
Figure 22
Figure 22 shows how the model, created in steps 2 and 3 above, should appear in the Model Maker interface.
Step 4: Using Section 2a, Step 7 as a guideline, insert ec_91-11chg_b.img into the Input Raster Object.
Step 5: Open the Function Object, as in Step 9 of Section 2a.
a)      Change the function from Analysis to Conditional.
b)      Click the EITHER IF OR function. This should now appear in the script area of the Function Define interface.
c)      After b has been done the script area of the Function Define interface should read:
EITHER 1 IF ($n1_ec1>CHANGE-/NO-CHANGE THRESHOLD VALUE) OR 0 OTHERWISE   
**Note: the change-/no-change value should be the one that was obtained in Step 1 of this section. **
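The EITHER/IF/OR statement behaves like a simple per-pixel conditional; an equivalent NumPy sketch (the threshold argument is whatever value was obtained in Step 1) is:

    import numpy as np

    def change_mask(diff_image, threshold):
        # 1 where the differenced BV exceeds the change/no-change threshold, 0 otherwise.
        return np.where(diff_image > threshold, 1, 0).astype(np.uint8)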
Step 5: Label the new output image ec_91-11bvis.img and save it to the appropriate output folder.
Step 6: Run the model. An error means that the script was not written properly and will need to be corrected.
Step 7: Open ec_91-11bvis.img in Erdas; it should look like the one in figure 23 below.
Figure 23
Figure 23 shows how ec_91-11bvis.img, created using steps 1-7 in section 2b above, should look once it is brought into the Erdas 2013 viewer.
Step 8: Use ArcMap to create a map of the changes that occurred in the AOI between 1991 and 2011. This should be done by overlaying ec_91-11bvis.img onto ec_envs1991b4.img in ArcMap. Figure 24 shows how the resulting map should appear.
Q22. Describe the spatial distribution of the areas that changed over the 20 year period. Are these areas close to urban centers or not?
   Most of the changes that occurred between 1991 and 2011 appear to be in rural areas, or just outside of city limits. For instance, some significant changes are circled in figure 24 below, which shows a map illustrating the changes between 1991 and 2011. The circled area lies just outside the Menomonie/Cedar Falls area and appears to be expanded cropland when zooming in using the Erdas viewer. This makes sense, since the two original images used in the differencing (ec_envs_1991b4.img subtracted from ec_envs_2011b4.img) are both NIR bands, which exhibit greater brightness values for vegetated areas.
  Also, zooming in on the Erdas images in other areas of the differenced map (ec_91-11chg_b.img) shows that these areas represent cropland as well.
Figure 24
Figure 24 shows a map created in ArcMap of the changes that occurred in BVs over the AOI between 1991 and 2011. This was done using the images (ec_91-11bvis.img and ec_envs1991b4.img) created in section 2b, part 5 of this lab.
References

Wilson, C. (2013). Remote Sensing of the Environment, Lab 5, Fall 2013: Image Mosaic and Miscellaneous Image Functions II (PDF).

Erdas (1999). Erdas Field Guide, Fifth Edition, Revised and Expanded (PDF). Retrieved from:
Also note, Dr. Cyril Wilson’s (UWEC) Lab 5 of the same name was used as a template for this blog. All words in italics are his, verbatim.