ECE 877 Computer Vision
Spring 2008
This course builds upon ECE 847 by exposing students to fundamental concepts, issues, and algorithms in
digital image processing and computer vision. Topics include segmentation, texture, detection,
3D reconstruction, calibration, shape, and energy minimization.
The goal is to equip students with the skills and tools needed to manipulate
images, along with an appreciation for the difficulty of the problems. Students
will implement several standard algorithms, evaluate the strengths and weaknesses
of various approaches, and explore a topic in more detail in a course
project.
Syllabus

Week | Topic                      | Assignment
  1  | Shape and active contours  | HW1: Template matching, due 1/18
  2  | Shape and active contours  | Quiz #1, 1/25
  3  | Level sets                 | HW2: Active contours, due 2/1
  4  | Classification             | Quiz #2, 2/8
  5  | Classification             | HW3: Level sets, due 2/15
  6  | Fourier transform          | Quiz #3, 2/22
  7  | Texture                    | HW4: Texture, due 2/29
  8  | 3D reconstruction          | Quiz #4, 3/7
  9  | Model fitting              | HW5: Head tracking, due 3/14
 10  | [break]                    | Quiz #5
 11  | Model fitting              | HW6: Hough transform, due 4/4
 12  | Camera calibration         | Quiz #6, 4/11
 13  | Tracking and filtering     |
 14  | Function optimization      | Quiz #7, 4/25
 15  | Function optimization      | Projects due
See ECE 847 Readings and Resources.
In the assignments, you will implement several fundamental algorithms in C/C++,
documenting your findings in an accompanying report for each assignment.
The C/C++ languages are chosen for their fundamental importance, their ubiquity,
and their efficiency (which is crucial to image processing and computer vision).
For your convenience, you are encouraged to use the latest version of the
Blepo computer vision library.
To make grading easier, your code should do one of the following:
- #include "blepo.h" (In this case it does not matter where your blepo
directory is, because the grader can simply change the directory include
settings (Tools->Options->Directories->Include files) for Visual Studio
to automatically find the header file.)
or
- #include "../blepo/src/blepo.h" (assuming your main file
is directly inside your directory). In other words, your assignment directory
should be at the same level as the blepo directory.
To turn in your assignment, send an email to
assign@assign.ece.clemson.edu
(and cc the instructor and grader)
with the subject line "ECE877-1,#n" (without quotes), where 'n' is the
assignment number. You may leave the body of the email blank. Attach
a zip file containing your report (in any standard format such as .pdf or .doc;
but not .docx),
and all the files needed to compile your project (such as *.h, *.c, *.cpp, *.rc, *.dsp, *.dsw;
do not include *.ncb, *.opt, *.plg, *.aps, or the res, Debug, or Release directories).
You must send this email from your Clemson account, because the assign server is
not smart enough to know who you are if you use another account. Be sure that this file is actually
attached to the email rather than being automatically included in the body of
the email (Eudora, for example, has been known to include files inline, but this
behavior can be turned off). Also, be sure to change the extension of your zip file (e.g.,
change .zip to _zip) so that the server does not block the
attachment! We cannot grade what we do not receive. (Also be sure
that you're not hiding extensions for known types; in Windows explorer, uncheck
the box "Tools.Folder Options.View.Hide extensions for known file types".)
In addition to submitting your report electronically, please also turn in a
hardcopy. The deadline for the electronic copy is the same as for the
code, whereas the hardcopies should be brought to the instructor by noon of the
next business day after the deadline (at the latest). Just slip it under the
door if I'm not in. No points will be deducted for printing in
black-and-white, even if the report is in color.
Assignments:
- HW#1 (Detection)
- Using the images provided (textdoc-training.bmp
and textdoc-testing.bmp), pick a letter of the
alphabet and build a model of that letter's appearance (in the lower-case Times
font of the main body text -- do not worry about the title, heading, or figure
captions) using the training image. Then search in the testing image for
all occurrences of the letter. For this assignment, your detector may use
a simple model such as a single template built from a single example. The
output should be a colored (e.g., red) rectangle outlining each letter found in
the testing image.
- Do the same thing, but this time for the entire word
"texture" (all lowercase).
- Show receiver operating characteristic (ROC) curves for the detection
problems. To make an ROC curve, vary the
threshold, measure the false positives and false negatives at each setting, and plot the points on a
graph. Then connect the dots. Here is a demonstration of ROC curves:
http://wise.cgu.edu/sdtmod/measures6.asp (or see
http://en.wikipedia.org/wiki/Receiver_operating_characteristic ).
- Write a report describing your approach, including your algorithms and
methodology, experimental results, and discussion.
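The ROC procedure above (sweep the threshold, count false positives and false negatives at each setting) can be sketched as follows. This is a minimal illustration, not part of Blepo; the names and the score/label representation are assumptions.

```cpp
// Sketch: computing one point on an ROC curve at a given threshold.
// 'scores' are detector responses at candidate locations; 'is_letter'
// marks the ground-truth locations. Sweeping 'threshold' over the range
// of scores yields the full curve.
#include <vector>
#include <cstddef>
#include <cassert>

struct RocPoint { int false_pos; int false_neg; };

RocPoint RocAtThreshold(const std::vector<double>& scores,
                        const std::vector<bool>& is_letter,
                        double threshold) {
  RocPoint p = {0, 0};
  for (std::size_t i = 0; i < scores.size(); ++i) {
    bool detected = scores[i] >= threshold;
    if (detected && !is_letter[i]) ++p.false_pos;  // false alarm
    if (!detected && is_letter[i]) ++p.false_neg;  // miss
  }
  return p;
}
```

Raising the threshold trades false alarms for misses; each threshold contributes one dot, and connecting the dots gives the curve.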
- HW#2 (Snakes)
- Implement the dynamic-programming snake algorithm, as described in Amini et
al., 1990. Include first-order and second-order derivative terms for the
internal energy (i.e., alpha and beta), and allow each point to move to one of
nine positions in its immediate vicinity. For the external energy, use the
negative of the magnitude of the gradient.
- For simplicity, restrict your implementation to work only with closed
curves.
- Start your snake from an initial curve that is larger than the object and
display its evolution over time.
- Run the "repeatable experiment" mentioned at the end of section VI of the
Amini et al. paper on synthetic_square.pgm.
Also run the code on fruitfly.pgm, initializing your contour to a curve that surrounds the fly.
- Try your algorithm with different parameters for alpha and beta, number and
location of points, etc.
- Write a report describing your approach, experimental results, and discussion.
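The core of the dynamic-programming snake is a Viterbi-style table over candidate positions. The sketch below is deliberately simplified relative to Amini et al.: it uses only a first-order internal energy on an open chain (the full assignment requires the second-order term and a closed curve, which enlarges the DP state). All names are illustrative, not Blepo.

```cpp
// Minimal DP sketch: each snake point i chooses among K candidate
// positions cand[i][k]; external[i][k] is its external energy there.
// cost[i][k] = best total energy of points 0..i with point i at k.
#include <vector>
#include <limits>
#include <cassert>

struct Pt { double x, y; };

std::vector<int> SnakeDP(const std::vector<std::vector<Pt> >& cand,
                         const std::vector<std::vector<double> >& external,
                         double alpha) {
  int n = (int)cand.size(), K = (int)cand[0].size();
  std::vector<std::vector<double> > cost(n, std::vector<double>(K));
  std::vector<std::vector<int> > back(n, std::vector<int>(K, 0));
  for (int k = 0; k < K; ++k) cost[0][k] = external[0][k];
  for (int i = 1; i < n; ++i)
    for (int k = 0; k < K; ++k) {
      double best = std::numeric_limits<double>::infinity();
      for (int j = 0; j < K; ++j) {   // first-order internal energy
        double dx = cand[i][k].x - cand[i-1][j].x;
        double dy = cand[i][k].y - cand[i-1][j].y;
        double e = cost[i-1][j] + alpha * (dx*dx + dy*dy);
        if (e < best) { best = e; back[i][k] = j; }
      }
      cost[i][k] = best + external[i][k];
    }
  int kbest = 0;                       // backtrack from cheapest final state
  for (int k = 1; k < K; ++k)
    if (cost[n-1][k] < cost[n-1][kbest]) kbest = k;
  std::vector<int> choice(n);
  for (int i = n - 1; i >= 0; --i) { choice[i] = kbest; kbest = back[i][kbest]; }
  return choice;
}
```

In the assignment, the nine candidates per point are the pixel and its eight neighbors, and one such DP pass is repeated until the curve stops moving.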
- HW#3 (Level sets)
- Implement the basic level set algorithm to segment a simple image:
shapes.pgm. The implicit surface should initially
surround all the objects, and as the surface evolves, the zeroth level
set should conform to the boundaries of the objects. Use the gradient
magnitude of the image to determine the speed of evolution.
- Write a report describing your approach, including the algorithm,
methodology, experimental results, and discussion. Also include an overview of level set
methods, the narrow band algorithm, and the fast marching method, explaining
when the latter is applicable.
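One explicit evolution step of the basic scheme can be sketched as below. This is a bare-bones illustration under simplifying assumptions: central differences instead of upwind differencing, no narrow band, and the per-pixel speed is simply passed in (in the assignment it would be derived from the image gradient magnitude).

```cpp
// One explicit level-set update on a 2D grid:
//   phi <- phi - dt * F * |grad phi|
// using central differences for the gradient; border pixels are left fixed.
#include <vector>
#include <cmath>
#include <cassert>

typedef std::vector<std::vector<double> > Grid;

void LevelSetStep(Grid& phi, const Grid& speed, double dt) {
  int h = (int)phi.size(), w = (int)phi[0].size();
  Grid next = phi;
  for (int y = 1; y < h - 1; ++y)
    for (int x = 1; x < w - 1; ++x) {
      double gx = 0.5 * (phi[y][x+1] - phi[y][x-1]);
      double gy = 0.5 * (phi[y+1][x] - phi[y-1][x]);
      next[y][x] = phi[y][x] - dt * speed[y][x] * std::sqrt(gx*gx + gy*gy);
    }
  phi.swap(next);
}
```

The zeroth level set (the zero crossing of phi) is extracted after each step for display; making the speed small where the image gradient is large is what stops the contour at object boundaries.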
- HW#4 (Texture synthesis)
- Implement the Efros-Leung texture synthesis algorithm described in the
paper, "Texture
Synthesis by Non-parametric Sampling", ICCV 1999. See their
website for details. Create a Visual C++ application with two modes:
- In the first mode, the application allows the
user to select a gray-level texture, the size of the window, and the desired size of the
output image. The application loads the texture image, runs the
algorithm, and displays both the original image and the resulting image in
separate windows on the screen.
- In the second mode, the application allows the user to select a gray-level
image. Any pixel in the image that has a value of zero is filled in by the
algorithm. The application displays both the original image and the resulting image in
separate windows on the screen.
- If you create a command-line application, then the syntax should be as
follows:
- elprog texture.pgm window_size output_width output_height
(for first mode)
- elprog image.pgm window_size (for second mode)
where
- 'elprog' is the name of your executable,
- the texture/image can be in any format recognizable by the software,
- 'window_size' is the width/height of the square window used in the
computation, and
- the output image is of size 'output_width' x 'output_height' for the first
mode.
The two modes are distinguished by the number of parameters passed in.
- On the other hand, if you create a windows-based application, then make it
obvious how to select between the modes and set the parameters.
- Test your algorithm on images such as those used by Efros and Leung:
texture_brick.pgm,
texture_dense_weave.pgm,
texture_grate.pgm,
texture_mesh.pgm,
texture_ripples.pgm,
texture_text.pgm,
texture_thin_weave.pgm; small versions:
texture_brick_small.pgm,
texture_mesh_small.pgm
- Write a report describing your approach, including the algorithm,
methodology, experimental results, and discussion.
- HW#5 (Head tracking)
- Implement a simplified version of the color histogram head tracking algorithm
described in
"Elliptical Head Tracking Using Intensity Gradients and Color Histograms". Initialization
should be performed manually in the first frame by clicking the mouse in the
center of the head. The histogram model should be captured in the first
frame and used throughout, with no adaptation. In subsequent images,
search in a local 8x8 region around the previous location, comparing images
using histogram intersection. For simplicity, do not search over scale and
do not use intensity gradients.
- It is suggested that you keep the ellipse vertically oriented with a fixed aspect ratio (between 1.2 and 1.4),
and that the 3D color histograms be computed in RGB color space with 8 bins along
each axis.
- Using the video sequence melissa_head_rev.zip,
run your code over the entire sequence, displaying the best location in each image.
If necessary, use Sleep() after each
display, so that you can see the results for each image. Evaluate the
algorithm's performance with and without constant-velocity prediction.
- Write a report describing your approach, including the algorithm,
methodology, experimental results, and discussion.
- For the report only, run your code over all possible (x,y) positions in the first image of the sequence
(ignoring image boundaries), generating a likelihood map for the person's
location. Plot the
likelihood map in one figure, and the most likely ellipse location in another
figure (overlaying the ellipse on the grayscale image).
- HW#6 (Hough transform)
- Implement the Hough transform to detect straight lines in an image, using
the edge detector of your choice. Overlay the detected lines on the
original image.
- Run your code on the following images:
riggs1.pgm, riggs2.pgm, and
riggs3.pgm.
- Write a report describing your approach, including the algorithm,
methodology, experimental results, and discussion.
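The voting stage of the line Hough transform can be sketched as follows: each edge pixel votes for every (rho, theta) pair consistent with it, using rho = x*cos(theta) + y*sin(theta), and peaks in the accumulator correspond to detected lines. The discretization parameters here are illustrative choices, not requirements.

```cpp
// Hough accumulator for lines in (rho, theta) form. The rho axis is
// offset by max_rho so that negative rho maps to a valid bin; theta is
// discretized over [0, pi).
#include <vector>
#include <cstddef>
#include <cmath>
#include <cassert>

struct Edge { int x, y; };

std::vector<std::vector<int> > HoughLines(const std::vector<Edge>& edges,
                                          int num_theta, int max_rho) {
  std::vector<std::vector<int> > acc(num_theta,
                                     std::vector<int>(2 * max_rho + 1, 0));
  const double pi = 3.14159265358979323846;
  for (std::size_t i = 0; i < edges.size(); ++i)
    for (int t = 0; t < num_theta; ++t) {
      double theta = pi * t / num_theta;
      int rho = (int)std::floor(edges[i].x * std::cos(theta) +
                                edges[i].y * std::sin(theta) + 0.5);
      acc[t][rho + max_rho] += 1;   // one vote per (edge, theta)
    }
  return acc;
}
```

After voting, thresholding (or non-maximum suppression over) the accumulator yields the (rho, theta) of each detected line, which can then be drawn back over the original image.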
Grading standard:
- A. Report is coherent, concise, clear, and neat, with correct
grammar and punctuation. Code works correctly the first time and
achieves good results on both images.
- B. Report adequately
describes the work done, and code generally produces good results. There
are a small number of defects either in the implementation or the writeup, but
the essential components are there.
- C. Report or code is
inadequate. The report contains major errors or is illegible, the code
does not run or produces significantly flawed results, or instructions are
not followed.
- D or F. Report or code not attempted, not turned
in, or contains extremely serious deficiencies.
Detailed grading breakdown is available in the grading chart.
Extra credit: Contributions to the Blepo computer vision library
will earn up to 10 points extra credit on your final grade. In general, you
should expect 1 point for a major bug fix, and 2-7 points for a significant
extension to an existing function or implementation of an algorithm or set of
functions. Contributions should be cleanly written, with code-level and
user-level documentation, and a test harness. To receive extra credit, you
must meet the following deadlines:
- announce (non-binding) intention to contribute (3/14)
- get interface approval (4/4)
- turn in final code and documentation (4/25)
In your final project, you will investigate some area of image processing or computer vision in more detail. Typically
this will involve formulating a problem, reading the literature, proposing a solution, implementing the solution
(using the programming language/environment of your choice),
evaluating the results, and communicating your findings. In the case of a survey project, the quality and depth of
the literature review should be increased significantly to compensate for the lack of implementation.
Project deadlines:
- 3/28: team (1 or 2 people), title, and brief description
- 4/18: progress report (1 page)
- 4/29: final oral presentation in class during
final exam slot, 1:00-4:00
- 5/1: final written report (approximately 5 pages,
formatted as a conference paper)
Instructor: Stan Birchfield, 207-A Riggs Hall, 656-5912, email: stb at clemson
Office hours: 3:30-5:00pm Monday, or by appointment
Grader: Zhichao Chen, zhichac at clemson
Lectures: 1:25 - 2:15 MWF, 227 Riggs Hall