ECE 847 Digital Image Processing
Fall 2006
This course introduces students to the basic concepts, issues, and algorithms in
digital image processing and computer vision. Topics include image formation,
projective geometry, convolution, Fourier analysis and other transforms,
pixel-based processing, segmentation, texture, detection, stereo, and motion.
The goal is to equip students with the skills and tools needed to manipulate
images, along with an appreciation for the difficulty of the problems. Students
will implement several standard algorithms, evaluate the strengths and weakness
of various approaches, and explore a topic of their own choosing in a course
project.
Syllabus
Week | Topic                       | Assignment
1    | Pixel-based processing      | HW1: Warm-up, due 9/1
2    | Pixel-based processing      | Quiz #1, 9/8
3    | Filters and edge detection  | HW2: Pixels and regions, due 9/15
4    | Filters and edge detection  | Quiz #2, 9/22
5    | Segmentation                | HW3: Edge detection, due 9/29
6    | Segmentation                | Quiz #3, 10/6
7    | Stereo                      | HW4: Segmentation, due 10/18
8    | Stereo                      | Quiz #4, 10/20
9    | Motion                      | HW5: Stereo matching, due 10/27
10   | Motion                      | Quiz #5, 11/3
11   | Image formation             | HW6: Lucas-Kanade tracking, due 11/13
12   | Projective geometry         | Quiz #6, 11/17
13   | Projective geometry         |
14   | Color                       | Quiz #7, 12/8
15   | Color                       | Projects due
Readings to complement the lectures:
- Sonka et al., Region-based shape representation and description
- Robyn Owens, Mathematical morphology (dilation and erosion)
- Bill Green, Canny edge detection tutorial
- Bob Fisher et al., Canny edge detector
- Michael Bach, Muller-Lyer illusion
- Various authors, Split-and-merge segmentation
- Serge Beucher, Watershed segmentation; Roerdink and Meijster, The watershed transform; Matlab, watershed tutorial
- Sylvain Bougnoux, Learning epipolar geometry
Computer vision in the news:
- Help organizing your digital photos, CBS News, Feb. 9, 2006 (Riya)
- 'Silent drowning' pool girl saved by underwater cameras, Times Online, Aug. 31, 2005
- Courtrooms could host virtual crime scenes, New Scientist.com, March 10, 2005
- Sportvision virtual first-down markers
- Basketball buddies build a computerized shot doctor, USA Today, Feb. 7, 2003 (Noah Basketball)
- Automotive applications:
  - Infiniti advanced lane departure warning system
  - Chrysler automobile uses CMOS cameras for smart headlights, IEEE Spectrum, Apr. 2006 (Gentex SmartBeam)
  - Lexus uses computer vision for automatic parallel parking, IEEE Spectrum, Apr. 2006 (Intelligent Parking Assist)
  - Electronic vision unblocks the 'blind spot', IEEE Spectrum, Apr. 2006 (Volvo's Blind-Spot Information System)
  - Car, park thyself (Toyota's automatic parking feature), CBS News, Jan. 15, 2003
Vision in biological systems:
- P. Gurney, Is our 'inverted' retina really 'bad design'?, Technical Journal, 1999
- C. Wieland, Seeing back to front, Creation, 1996 (see also An eye for creation, Creation, 1996)
- J. Sarfati, Can it bee?, Creation, 2003 -- honeybees using optic flow for navigation
- Centeye -- obstacle avoidance using optic flow
- C. Stammers, Trilobite technology, Creation, 1993
- S. M. Gon, The trilobite eye
- J. Sarfati, Lobster eyes: brilliant geometric design, Creation, 2001
- Sight in British garden birds
- Color vision in birds
- P. Gurney, Our eye movements and their control: Part 1, Technical Journal, 2002
- P. Gurney, Our eye movements and their control: Part 2, Technical Journal, 2003
- C. Wieland, New eyes for blind cave fish, 2000
- T. Wagner, Darwin vs. the eye, Creation, 1994
- Eye Design Book -- overview of eyes in the animal world
Computer vision companies:
Additional computer vision resources
In the assignments, you will implement several fundamental algorithms in C/C++,
documenting your findings in an accompanying report for each assignment.
The C/C++ languages are chosen for their fundamental importance, their ubiquity,
and their efficiency (which is crucial to image processing and computer vision).
For your convenience, you are encouraged to use the latest version of the
Blepo computer vision library.
To make grading easier, your code should do one of the following:
-
#include "blepo.h" (In this case it does not matter where your blepo
directory is, because the grader can simply change the directory include
settings (Tools->Options->Directories->Include files) for Visual Studio
to automatically find the header file.)
or
-
#include "../blepo/src/blepo.h" (assuming your main file
is directly inside your directory). In other words, your assignment directory
should be at the same level as the blepo directory. Here is an example:
To turn in your assignment, send a blank email to
assign@assign.ece.clemson.edu
with the subject line "ECE847-1,#n" (without quotes), where 'n' is the
assignment number; and cc the instructor and grader. (No one reads the body of the email, so anything there will be
ignored.) You
must send this email from your Clemson account, because the assign server is not
smart enough to know who you are if you use another account. Attach
a zip file containing your report, all of your source files, and any other files needed to compile your project, to
the email: *.h, *.c, *.cpp, *.rc, *.dsp, *.dsw. (Do not include the
res, Debug, or Release directories.) Also be sure your report, which may
be in any standard format (.pdf, .doc, etc.), is included in the zip file. Be sure that the zip file is actually
attached to the email rather than automatically included in the body of
the email (this behavior has been observed in Eudora, for example, but it can be
turned off). Also, be sure to change the extension of your zip file (e.g.,
change .zip to _zip) so that the server does not block the
attachment!!! We cannot grade what we do not receive.
An example report
Assignments:
- HW#1 (Floodfill)
- Implement the floodfill algorithm in C/C++. Create an executable
that allows the user to choose the filename and seed point; it is okay if you
hardcode the new color. The application should load the image from disk,
display the original image, run the algorithm, and display the resulting
image. (The specific interface is up to you: Either use command-line parameters,
such as: filename x y (in
that order), where 'filename' is the image filename and (x,y) are the
coordinates of the seed point; Or use a windows-based interface, such as
CFileDialog for selecting the file and GrabMouseClick for getting the seed
point.)
- To create a console app in Visual C++, follow these instructions: File -> New ->
Project -> Win32 Console Application. Give it a name and keep the checkbox
on "Create new workspace". Choose "An application that supports MFC." Now
compile and run (Build -> Build ..., and Build -> Execute, or F7 and
Ctrl-F5). Under FileView -> Source Files you will find the main cpp file.
(Also, I would recommend that you turn off Precompiled Headers: Project ->
Settings -> C/C++ -> Precompiled headers -> Not using precompiled headers.
Before you click on the radio button, though, first select All
configurations in the drop down box so that both Debug and Release versions
are affected.)
- The image that the grader will use to test your code is
quantized.pgm and another image that is similar.
- A tutorial on the Blepo library will be given in class. You may use
any part of the library except the Floodfill function itself.
- No report is due for this assignment.
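For the flood fill itself, a minimal breadth-first sketch follows; the Image struct, 4-connectivity, and function names are illustrative assumptions only (the assignment's file loading and display should use Blepo or an interface of your choosing).

    #include <queue>
    #include <utility>
    #include <vector>

    // Minimal grayscale image holder (illustrative; the assignment expects Blepo types).
    struct Image {
      int width = 0, height = 0;
      std::vector<unsigned char> data;            // row-major pixels
      unsigned char& at(int x, int y)       { return data[y * width + x]; }
      unsigned char  at(int x, int y) const { return data[y * width + x]; }
    };

    // Breadth-first flood fill with 4-connectivity: recolor the connected region
    // of pixels that share the seed pixel's original value.
    void FloodFill(Image& img, int seed_x, int seed_y, unsigned char new_color) {
      const unsigned char old_color = img.at(seed_x, seed_y);
      if (old_color == new_color) return;         // nothing to do; avoids an infinite loop

      std::queue<std::pair<int, int>> frontier;
      img.at(seed_x, seed_y) = new_color;
      frontier.push({seed_x, seed_y});

      const int dx[4] = {1, -1, 0, 0};
      const int dy[4] = {0, 0, 1, -1};

      while (!frontier.empty()) {
        auto [x, y] = frontier.front();
        frontier.pop();
        for (int k = 0; k < 4; ++k) {
          int nx = x + dx[k], ny = y + dy[k];
          if (nx < 0 || ny < 0 || nx >= img.width || ny >= img.height) continue;
          if (img.at(nx, ny) != old_color) continue;
          img.at(nx, ny) = new_color;             // mark before enqueueing to avoid duplicates
          frontier.push({nx, ny});
        }
      }
    }

A queue-based fill is preferable to naive recursion here, since deep recursion on large regions can overflow the stack.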
- HW#2 (Fruit classification)
- Write code to automatically detect and classify the fruit in the images
fruit1.pgm and
fruit2.pgm (or, in BMP format,
fruit1.bmp and
fruit2.bmp).
- First detect all the foreground regions of the image, distinguishing them
from the background region.
- Then compute the following properties of each foreground region:
- zeroth-, first- and second-order moments (regular and centralized)
- compactness
- eccentricity (or elongatedness), using eigenvectors (PCA)
- direction, using either eigenvectors (PCA) or the moments formula
- Print all the region properties that you computed in the console window (or
in a message box or edit box).
- Classify the different types of fruit using a combination of these properties
or others that you develop. The same parameters should be used for all objects
and for both images.
- Display the original image with a one-pixel-thick boundary overlaid on each
object, the color of the boundary indicating the type of fruit: Red
indicates apple, green indicates grapefruit, and yellow indicates banana.
For each object, draw a cross at its centroid and draw two perpendicular lines to
indicate the major and minor axes (the relative length of the lines should
indicate the elongatedness). Indicate the banana stem by coloring the
boundary pixels there magenta.
- For this assignment, you may not use any Blepo functionality contained or
prototyped in ImageAlgorithms.h.
- Write a report describing your approach, including your algorithms and
methodology, experimental results, and discussion. The report may be in
any standard format (.pdf, .doc, etc.).
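One possible starting point for the region properties: the sketch below computes the zeroth-, first-, and centralized second-order moments of one labeled region and derives its orientation and elongatedness from the eigenvalues of the 2x2 second-moment matrix. The label-image representation and struct fields are assumptions; compactness and the classification thresholds are left to you.

    #include <cmath>
    #include <vector>

    // Moment-based properties of one foreground region in a label image
    // (pixels whose label equals `id`).  Conventions here are illustrative.
    struct RegionProps {
      double area;              // zeroth-order moment m00
      double cx, cy;            // centroid (first-order moments / area)
      double mu20, mu02, mu11;  // centralized second-order moments
      double orientation;       // angle of major axis, radians
      double elongatedness;     // sqrt of eigenvalue ratio (major/minor)
    };

    RegionProps ComputeRegionProps(const std::vector<int>& labels,
                                   int width, int height, int id) {
      double m00 = 0, m10 = 0, m01 = 0;
      for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
          if (labels[y * width + x] == id) { m00 += 1; m10 += x; m01 += y; }

      RegionProps p{};
      p.area = m00;
      p.cx = m10 / m00;
      p.cy = m01 / m00;

      for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
          if (labels[y * width + x] == id) {
            double dx = x - p.cx, dy = y - p.cy;
            p.mu20 += dx * dx;  p.mu02 += dy * dy;  p.mu11 += dx * dy;
          }

      // Eigenvalues of the 2x2 matrix [mu20 mu11; mu11 mu02]:
      // lambda = tr/2 +/- sqrt(tr^2/4 - det)
      double tr = p.mu20 + p.mu02;
      double det = p.mu20 * p.mu02 - p.mu11 * p.mu11;
      double disc = std::sqrt(tr * tr / 4.0 - det);
      double lambda_major = tr / 2.0 + disc;
      double lambda_minor = tr / 2.0 - disc;

      p.orientation = 0.5 * std::atan2(2.0 * p.mu11, p.mu20 - p.mu02);
      p.elongatedness = std::sqrt(lambda_major / (lambda_minor + 1e-12));
      return p;
    }

The orientation gives the major-axis direction for drawing the axes at the centroid; the eigenvalue ratio drives the relative lengths of the two lines.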
- HW#3 (Canny edge detection)
- Implement the Canny edge detector. There should be three steps to your
code: gradient estimation, non-maximum suppression, and thresholding (with
hysteresis). For the gradient estimation, convolve the image
with the derivative of a Gaussian, rather than computing finite differences in the
smoothed image. Automatically compute the threshold values based upon
image statistics. Run your
code on the following images: cat.pgm and
cameraman.pgm. Display intermediate
results (e.g., the two x- and y- gradient components, the gradient magnitude and
angle, and the edges before thresholding) in separate figures, in addition to
the final result.
- Implement the chamfer distance algorithm with the Manhattan distance.
Compute the chamfer distance of the cherrypepsi.jpg
image, then perform an exhaustive search (ignoring image boundaries) for the
best location of the cherrypepsi_template.jpg
template. Display the resulting probability map by summing the distances
to the edges, and (in a separate window) overlay on the original image the
rectangle corresponding to the peak.
- For this assignment, you may not use any Blepo functionality contained or
prototyped in ImageAlgorithms.h, and you may not use the Gauss* or Gradient*
functions prototyped in ImageOperations.h.
- Write a report describing your approach, including your algorithms and
methodology, experimental results, and discussion. The report may be in
any standard format (.pdf, .doc, etc.). Be sure to show the effect of the
scale parameter on the output for at least one image.
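The chamfer distance under the Manhattan metric can be computed with the standard two-pass distance transform, sketched below along with the score used in the exhaustive template search; the binary edge-map representation and function names are assumptions, not a required interface.

    #include <algorithm>
    #include <limits>
    #include <vector>

    // Two-pass chamfer distance transform under the Manhattan (city-block) metric.
    // Input: binary edge map (nonzero = edge pixel).  Output: distance to nearest edge.
    std::vector<int> ChamferManhattan(const std::vector<unsigned char>& edges,
                                      int width, int height) {
      const int INF = std::numeric_limits<int>::max() / 2;
      std::vector<int> dist(width * height, INF);

      // Forward pass: propagate distances from the top-left.
      for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
          int i = y * width + x;
          if (edges[i]) { dist[i] = 0; continue; }
          if (x > 0) dist[i] = std::min(dist[i], dist[i - 1] + 1);
          if (y > 0) dist[i] = std::min(dist[i], dist[i - width] + 1);
        }

      // Backward pass: propagate distances from the bottom-right.
      for (int y = height - 1; y >= 0; --y)
        for (int x = width - 1; x >= 0; --x) {
          int i = y * width + x;
          if (x + 1 < width)  dist[i] = std::min(dist[i], dist[i + 1] + 1);
          if (y + 1 < height) dist[i] = std::min(dist[i], dist[i + width] + 1);
        }

      return dist;
    }

    // Matching score at placement (x0, y0): sum of chamfer distances under the
    // template's edge pixels.  Lower is better; an exhaustive search over (x0, y0)
    // gives the map to display, and its minimum gives the rectangle to overlay.
    long MatchScore(const std::vector<int>& dist, int width,
                    const std::vector<unsigned char>& tmpl_edges, int tw, int th,
                    int x0, int y0) {
      long score = 0;
      for (int ty = 0; ty < th; ++ty)
        for (int tx = 0; tx < tw; ++tx)
          if (tmpl_edges[ty * tw + tx])
            score += dist[(y0 + ty) * width + (x0 + tx)];
      return score;
    }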
- HW#4 (Watershed segmentation)
- Implement the Vincent-Soille watershed segmentation algorithm. The
algorithm involves three steps: (1) Compute the magnitude of the image
gradient, and quantize into a number of discrete values; (2) Construct a data
structure allowing fast access to all the pixels with a certain value; (3) Apply
breadth-first search to flood the pixels level by level, starting with the
minimum value, assigning each pixel to either the nearest existing catchment
basin or a new catchment basin. Define the watershed pixels as those which
occur at a transition between basins.
- Reduce oversegmentation by smoothing the image, using automatic markers,
and/or implementing the hierarchical watershed algorithm.
- Run your code on the following images: holes.pgm
and cells.pgm. Display the result of the
algorithm at various stages of the computation. (Due to the difficulty of
segmenting these images, it is okay for your code to have a command-line switch
to select two different variations of your algorithm.)
- For this assignment, you may not use any code in Watershed.cpp.
- Write a report describing your approach, including your algorithms and
methodology, experimental results, and discussion. Be sure to show the effect of the
scale parameter on the output for at least one image.
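Below is a heavily simplified flooding skeleton illustrating steps (2) and (3): bucket-sorting pixel indices by quantized gradient value, then flooding level by level with breadth-first search. It omits the geodesic-distance arbitration and full plateau handling of the Vincent-Soille algorithm, so treat it only as a structural sketch; the label conventions and names are assumptions.

    #include <queue>
    #include <vector>

    // Simplified level-by-level flooding (skeleton only).
    // Input: quantized gradient magnitude g with values in 0..num_levels-1.
    // Output: labels > 0 for basins, 0 for watershed pixels.
    std::vector<int> FloodWatershed(const std::vector<int>& g,
                                    int width, int height, int num_levels) {
      const int UNLABELED = -1, WATERSHED = 0;
      std::vector<int> label(width * height, UNLABELED);

      // Step 2: bucket-sort pixel indices so all pixels of a given level are
      // accessible in O(1) when that level is flooded.
      std::vector<std::vector<int>> buckets(num_levels);
      for (int i = 0; i < width * height; ++i) buckets[g[i]].push_back(i);

      const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
      int next_basin = 1;

      // Step 3: flood level by level, starting with the minimum value.
      for (int level = 0; level < num_levels; ++level) {
        std::queue<int> frontier;
        // Pixels at this level that touch an existing basin join it;
        // touching two different basins makes a watershed pixel.
        for (int i : buckets[level]) {
          int x = i % width, y = i / width;
          for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            int nl = label[ny * width + nx];
            if (nl > 0) {
              if (label[i] == UNLABELED) label[i] = nl;
              else if (label[i] != nl)   label[i] = WATERSHED;
            }
          }
          if (label[i] != UNLABELED) frontier.push(i);
        }
        // Grow within the level by BFS so pixels join the nearest basin.
        while (!frontier.empty()) {
          int i = frontier.front(); frontier.pop();
          if (label[i] == WATERSHED) continue;
          int x = i % width, y = i / width;
          for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            int j = ny * width + nx;
            if (g[j] != level || label[j] != UNLABELED) continue;
            label[j] = label[i];
            frontier.push(j);
          }
        }
        // Remaining unlabeled pixels at this level are new local minima.
        for (int i : buckets[level])
          if (label[i] == UNLABELED) {
            label[i] = next_basin++;
            // (A fuller version would flood the whole connected plateau here.)
          }
      }
      return label;
    }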
- HW#5 (Stereo matching)
- Implement correlation-based matching of rectified stereo images. The
resulting disparity map should be the same size as the two input images,
although the values at the left edge will be erroneous. Match from left to
right (i.e., for each window in the left image, search in the right image), so
that the disparity map is with respect to the left image. Recall that a
(left) disparity map D(x,y) between a left image L and a right
image R that have been rectified is an array such that the pixel
corresponding to L(x,y) is R(x - D(x,y), y).
- Implement the left-to-right consistency check, retaining a value in the left
disparity map only if the corresponding point in the right disparity map yields
the negative of that disparity. The resulting disparity map should be valid
only at the pixels that pass the consistency check; set other pixels to zero.
- Your code should be as efficient as possible, on the order of several frames per
second. (Hint: First compute the dissimilarities of all the pixels
for each disparity, storing the results in an array of images; then convolve
each image with a summing kernel (all ones) in both directions. Further
speedup can be obtained using mmx_diff and xmm_diff in Blepo, but this is not
required.)
- Suggestion: use SAD (sum of absolute differences) to match raw
intensities and use a window size of 5x5.
- Run your code on
tsukuba_left.pgm and
tsukuba_right.pgm. Show the results both with and without the consistency
check. What kind of errors do you notice? Now run the algorithm on
lamp_left.pgm and
lamp_right.pgm. What happens? Why is this image
difficult?
- Take a look at the results of the latest stereo research at
http://www.middlebury.edu/stereo
(click on the "Results" tab). Look only at the first column (all) under
the column Tsukuba. What errors do you see in the best algorithm (the
one with minimum error in this column)? Find the output of another algorithm
that looks better to you in some respect. What does this tell you about the difficulty of
finding meaningful objective criteria to evaluate such algorithms?
- Write a report describing your approach, including your algorithm and
methodology, experimental results, and discussion.
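A straightforward (unoptimized) sketch of SAD block matching and the left-right consistency check follows; the image representation, window size, disparity range, and sign conventions are assumptions, and the per-disparity precomputation suggested in the hint is omitted for clarity.

    #include <cstdlib>
    #include <vector>

    // Naive SAD block matching: for each pixel of `ref`, search `target` along the
    // same scanline and keep the disparity with the smallest sum of absolute
    // differences.  `sign` is +1 when ref is the left image (matching pixel lies at
    // x - d in the right image) and -1 when ref is the right image.
    std::vector<int> SadDisparity(const std::vector<unsigned char>& ref,
                                  const std::vector<unsigned char>& target,
                                  int width, int height,
                                  int max_disp, int half_win, int sign) {
      std::vector<int> disp(width * height, 0);
      for (int y = half_win; y < height - half_win; ++y)
        for (int x = half_win; x < width - half_win; ++x) {
          int best_d = 0;
          long best_cost = -1;
          for (int d = 0; d <= max_disp; ++d) {
            int xs = x - sign * d;                      // matching column in the other image
            if (xs - half_win < 0 || xs + half_win >= width) break;
            long cost = 0;
            for (int wy = -half_win; wy <= half_win; ++wy)
              for (int wx = -half_win; wx <= half_win; ++wx)
                cost += std::abs(int(ref[(y + wy) * width + x + wx]) -
                                 int(target[(y + wy) * width + xs + wx]));
            if (best_cost < 0 || cost < best_cost) { best_cost = cost; best_d = d; }
          }
          disp[y * width + x] = best_d;
        }
      return disp;
    }

    // Left-right consistency check: keep a left disparity only if the corresponding
    // right-image pixel maps back to the same place; otherwise set it to zero.
    void ConsistencyCheck(std::vector<int>& left_disp, const std::vector<int>& right_disp,
                          int width, int height) {
      for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
          int d = left_disp[y * width + x];
          int xr = x - d;
          if (xr < 0 || right_disp[y * width + xr] != d)
            left_disp[y * width + x] = 0;
        }
    }

With these (hypothetical) conventions, the left map is SadDisparity(left, right, ..., +1), the right map is SadDisparity(right, left, ..., -1), and the check zeroes left-map pixels whose two maps disagree; both maps store nonnegative disparities, which is why the comparison here is equality rather than negation.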
- HW#6 (Lucas-Kanade)
- Implement Lucas-Kanade feature point detection and tracking.
- Detection. For each pixel in a graylevel image, construct the
2x2 covariance matrix of the gradients in the 5x5 window surrounding the pixel.
Then compute the minimum eigenvalue of the gradient covariance matrix for each pixel.
Arbitrate between pixels by enforcing a minimum distance of 10 pixels between
features. Detect the 100 most salient features, and display them overlaid on the original image.
- Tracking. For each feature, track its location from one image
frame to the next by solving the Lucas-Kanade equation Zd=e, where Z is the 2x2
gradient covariance matrix and e is the 2x1 vector of gradients multiplied
by the temporal derivative. Display a movie of the original images with features
overlaid.
- Run your code on the following image sequence:
statue_sequence.zip .
- For this assignment you may not use any of the Lucas-Kanade or KLT
implementations in Blepo.
- Write a report describing your approach, including your algorithm /
methodology, experimental results, and discussion.
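A sketch of a single Lucas-Kanade solve for one feature follows: it accumulates the 2x2 gradient covariance matrix Z and the 2x1 vector e over a 5x5 window and solves Zd = e by Cramer's rule. The gradient/temporal-derivative representation and the sign convention for e are assumptions; in practice the step is iterated per feature and combined with the minimum-eigenvalue detection described above.

    #include <cmath>
    #include <vector>

    // One Lucas-Kanade update for a feature at (fx, fy), given precomputed spatial
    // gradients gx, gy and the temporal difference it = I2 - I1 (row-major floats).
    // Returns false if the window leaves the image or Z is (nearly) singular.
    bool LucasKanadeStep(const std::vector<float>& gx, const std::vector<float>& gy,
                         const std::vector<float>& it, int width, int height,
                         int fx, int fy, float& dx, float& dy) {
      const int half = 2;   // 5x5 window
      double zxx = 0, zxy = 0, zyy = 0, ex = 0, ey = 0;
      for (int wy = -half; wy <= half; ++wy)
        for (int wx = -half; wx <= half; ++wx) {
          int x = fx + wx, y = fy + wy;
          if (x < 0 || y < 0 || x >= width || y >= height) return false;
          double ix = gx[y * width + x], iy = gy[y * width + x], t = it[y * width + x];
          zxx += ix * ix;  zxy += ix * iy;  zyy += iy * iy;   // Z = sum of grad grad^T
          ex  -= ix * t;   ey  -= iy * t;   // e; the sign depends on how I_t is defined
        }

      double det = zxx * zyy - zxy * zxy;
      if (std::fabs(det) < 1e-6) return false;   // untrackable: Z is rank-deficient

      // Solve the 2x2 system Z d = e by Cramer's rule.
      dx = float((zyy * ex - zxy * ey) / det);
      dy = float((zxx * ey - zxy * ex) / det);
      return true;
    }

    // The detection step reuses Z: the minimum eigenvalue of Z at each pixel,
    //   lambda_min = (zxx + zyy - sqrt((zxx - zyy)^2 + 4*zxy^2)) / 2,
    // ranks candidate features; keep the 100 strongest subject to the 10-pixel spacing.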
Grading standard:
- A. Report is coherent, concise, clear, and neat, with correct
grammar and punctuation. Code works correctly the first time and
achieves good results on both images.
- B. Report adequately
describes the work done, and code generally produces good results. There
are a small number of defects either in the implementation or the writeup, but
the essential components are there.
- C. Report or code is
inadequate. The report contains major errors or is illegible, the code
does not run or produces significantly flawed results, or instructions are
not followed.
- D or F. Report or code not attempted, not turned
in, or contains extremely serious deficiencies.
Detailed grading breakdown is available in the
grading chart.
Extra credit: Contributions to the Blepo computer vision library
will earn up to 10 points extra credit on your final grade. In general,
you should expect 1 point for a major bug fix, and 2-7 points for a significant
extension to an existing function or implementation of an algorithm or set of
functions. Contributions should be cleanly written, with code-level and
user-level documentation, and a test harness. To receive extra credit, you
must meet the following deadlines:
- announce (non-binding) intention to contribute (10/20)
- get interface approval (11/3)
- turn in final code and documentation (11/8)
In your final project, you will investigate some area of image processing or computer vision in more detail. Typically
this will involve formulating a problem, reading the literature, proposing a solution, implementing the solution,
evaluating the results, and communicating your findings. In the case of a survey project, the quality and depth of
the literature review should be increased significantly to compensate for the lack of implementation.
Project deadlines:
- 11/3: team, title, and brief description
- 12/1: progress report (1 page)
- 12/11: final written report (up to 5 pages)
- 12/11: final oral presentation in class during final exam slot, 1:00-4:00
- 12/13: peer reviews of written reports and oral presentations
Instructor: Stan Birchfield, 207-A Riggs Hall, 656-5912, email: stb at clemson
Grader: Zhichao Chen, 015 Riggs Hall, 650-0308, email: zhichac at clemson
Lectures: 12:20 - 1:10 MWF, 223 Riggs Hall