ECCV 2006 - Tutorial program
Afternoon session
14:30 - 17:45  A2: Tinne Tuytelaars: Local Invariant Features: What? Why? When? How?
14:30 - 17:45  A3: Fredrik Kahl and Richard Hartley: Continuous Optimization Methods in Computer Vision
Full day
9:00 - 17:45  F1: Yuri Boykov, Daniel Cremers, Vladimir Kolmogorov: Graph Cuts versus Level Sets
Component Analysis for Computer Vision 
Date: Sunday, May 7, 9:00 - 12:15
Venue: Graz Congress Center 
Description: Component Analysis methods (e.g. Principal Component Analysis/Singular Value Decomposition, Independent Component Analysis, Linear Discriminant Analysis, tensor factorization, etc.) have been successfully applied to numerous vision, graphics and signal processing tasks over the last two decades. In this tutorial, I will provide an overview of traditional component analysis methods and recent extensions useful for dimensionality reduction, modeling, classification and clustering of high-dimensional data such as images. In the first part of the tutorial, we will briefly review traditional linear techniques such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Non-Negative Matrix Factorization (NMF), Oriented Component Analysis (OCA), Independent Component Analysis (ICA) and others. In the second part, several extensions that address common problems in computer vision (e.g. outliers, lack of training data, geometric invariance, non-uniform sampling, discrete data, illumination, etc.) will be discussed. In the final part of the tutorial, we will review standard extensions of linear models such as kernels, probabilistic formulations and tensor factorization. All the theory introduced in the tutorial will be illustrated with examples from visual tracking, signal modeling (e.g. background estimation), parameter estimation, pattern recognition (e.g. face recognition) and clustering problems.
Outline: 
1  Introduction. 
2  Generative models: 
- Review of PCA/SVD, ICA, NMF.
- Robust PCA.
- Parameterized Component Analysis.
- Incremental PCA.
- Multinomial PCA.
- Illumination-insensitive eigenspaces.
- PCA over continuous spaces.
- Multiple subspaces.
- 2D PCA.
- Component Analysis and spectral graph methods for clustering.
3  Discriminative models: 
- Review of LDA, OCA, CCA, Relevant Component Analysis (RCA).
- Multimodal Oriented Discriminant Analysis.
- Representational Oriented Component Analysis.
- Robust Linear Discriminant Analysis.
- Dynamic Coupled Component Analysis.
- Combining generative and discriminative models.
- 2D LDA.
4  Standard extensions: 
- Probabilistic models (latent variable models).
- Kernel methods.
- Tensor factorization.
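As a concrete companion to the PCA/SVD review in the outline above, the basic method can be sketched in a few lines of NumPy: center the data, take the SVD, and keep the top singular vectors. The data matrix, component count and function name below are illustrative choices, not material from the tutorial:

```python
import numpy as np

def pca(X, k):
    """Project the rows of X (n samples x d features) onto the top-k
    principal components, computed from the SVD of the centered data."""
    mean = X.mean(axis=0)
    Xc = X - mean                                  # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                            # top-k right singular vectors
    scores = Xc @ components.T                     # low-dimensional coordinates
    return scores, components, mean

# toy data: 100 samples in 5 dimensions, reduced to 2 components
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
scores, comps, mean = pca(X, 2)
X_hat = scores @ comps + mean                      # rank-2 reconstruction
```

The same skeleton is the starting point for many of the extensions listed above (robust, incremental, probabilistic PCA), which replace the centering or the least-squares fit implicit in the SVD.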
Instructor: 
Fernando De la Torre received his B.Sc. degree in telecommunications, M.Sc. degree in electronic engineering and Ph.D., respectively, in 1994, 1996 and 2002, from La Salle School of Engineering at Ramon Llull University, Barcelona, Spain. In 1997 and 2000 he became assistant and then associate professor in the Department of Communications and Signal Theory at Enginyeria La Salle. Since 2005 he has been a research scientist at the Robotics Institute, Carnegie Mellon University. His research interests include dimensionality reduction techniques, subspace methods, face tracking/modeling, statistical learning and optimization.
Link to tutorial material: http://www.salleURL.edu/~ftorre/last.pdf 
http://www.salleURL.edu/~ftorre/part1.pdf 
http://www.salleURL.edu/~ftorre/bibliography.pdf 
General Imaging Design, Calibration and Applications 
Date: Sunday, May 7, 8:30 - 13:30
Venue: Graz Congress Center 
Description: This tutorial will cover imaging systems beyond the perspective camera. We will consider the motivations for using such systems, how to design them and how to model their geometry and use this for calibration and structure-from-motion computations. Notice: this tutorial is the result of merging two independently proposed tutorials, which we found to be similar in spirit and scope ("Generic Models and Algorithms for Camera Calibration and Structure-from-Motion" by P. Sturm and S. Ramalingam, and "Unconventional Imaging - Geometry, Design and Applications" by R. Swaminathan).
Outline: 
1  Introduction 
2  Imaging Systems
3  Non-parametric calibration
4  Distortion correction
5  Structure-from-motion for general camera models
6  Mirror design
Instructors: 
Srikumar Ramalingam is currently a doctoral student in computer science, jointly enrolled at the Institut National Polytechnique de Grenoble (INPG) in France and the University of California at Santa Cruz (UCSC) in the USA. He is co-supervised by Dr. Peter Sturm from INRIA Rhône-Alpes and Prof. Suresh Lodha from UC Santa Cruz. His PhD is co-sponsored by a European Marie Curie scholarship. During his master's program he developed 3D reconstruction algorithms that can handle images at different scales, for the reconstruction and hierarchical enhancement of 3D scenes. Currently he is working on calibration and 3D reconstruction algorithms for generic imaging models. He has published and reviewed papers in major computer vision conferences and workshops such as CVPR, ECCV, ICCV and Omnivis. His research interests are in calibration and 3D reconstruction.
Peter Sturm obtained a Ph.D. from INPG (National Polytechnic Institute of Grenoble, France) in 1997, after receiving M.Sc. degrees from INPG and the Technical University of Karlsruhe, both in 1994. He received the 1998 SPECIF award for his Ph.D. thesis (given to one French Ph.D. thesis in Computer Science per year). After a two-year postdoc at Reading University, he joined INRIA as a Senior Researcher in 1999. He has acted as a programme committee member for ICCV, CVPR, ECCV, ICIP, ICPR, ACCV and several other conferences, and co-organized the 2004 edition of the OMNIVIS workshop (with ECCV in Prague) and the 2005 workshop BenCOS (Towards Benchmarking Automated Calibration, Orientation and Surface Reconstruction from Images, with ICCV). His main research topics are related to camera (self-)calibration, omnidirectional vision, and 3D reconstruction.
Rahul Swaminathan is currently a research scientist with Deutsche Telekom Laboratories at the Technical University of Berlin. His research interests include computational vision and HCI. Before joining Deutsche Telekom Labs, he was a postdoctoral researcher at the GRASP Lab, University of Pennsylvania, where he worked on general camera calibration methods. He obtained his Ph.D. and M.S. in Computer Science from Columbia University in 2003, working with Shree K. Nayar on catadioptric and non-perspective imaging systems. Additionally, he has worked at Microsoft Research, Redmond, on the appearance of specularities under viewer motion.
Link to tutorial material: http://www.deutschetelekomlaboratories.de/~srahul/ECCV_Tutorial_2006/ 
Local Invariant Features: What? Why? When? How? 
Date: Sunday, May 7, 14:30 - 17:45
Venue: Graz Congress Center 
Description: In this tutorial, we want to give an overview of existing methods to extract, describe and use local invariant features. In particular, we aim to provide a practical guideline for anyone considering the use of local features. To this end, we intend to focus on issues such as how to use local invariant features, how to select the right level of invariance, which type of features to select for a specific application, what to expect from them, general do's and don'ts, etc. The first part of the tutorial will present the different methods proposed in the literature for extracting local invariant features, followed by a discussion and comparison. The second part will focus on different applications, explaining practical algorithms for matching features, checking consistency among feature matches, indexing, clustering features, etc.
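The feature-matching and consistency-checking step mentioned above can be illustrated with a minimal sketch: match each descriptor to its nearest neighbour and keep only matches that pass a distance-ratio test (in the spirit of Lowe's SIFT matching). The function name and the synthetic descriptors below are illustrative assumptions, not code from the tutorial:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each row of desc1 to its nearest neighbour in desc2,
    keeping only matches whose nearest distance is below `ratio`
    times the second-nearest distance (a simple consistency check)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches

# synthetic example: desc1 contains slightly perturbed copies
# of the first three descriptors in desc2
rng = np.random.default_rng(1)
desc2 = rng.normal(size=(10, 8))
desc1 = desc2[:3] + 0.01 * rng.normal(size=(3, 8))
matches = match_descriptors(desc1, desc2)
```

In practice such putative matches would then be filtered further with the geometric consistency checks (e.g. epipolar constraints) discussed in the applications part of the tutorial.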
Outline: 
A  Local Invariant Features: What? Why? 
1  Introduction 
- What are local invariant features?
- Why are they useful?
- Levels of invariance
- Properties of the ideal feature
2  Overview of existing detectors 
- Lowe's DoG
- Lindeberg's scale selection
- Mikolajczyk & Schmid's Hessian/Harris-Laplacian/Affine
- Tuytelaars & Van Gool's EBR and IBR
- Matas' MSER
- Kadir & Brady's Salient Regions
- Others
3  Qualitative and Quantitative Comparison 
- Strengths and weaknesses of the different detectors
B  Local Invariant Features: When? How? 
1  Feature Descriptors 
- Cross-correlation
- SIFT
- Others
2  Applications 
- Wide Baseline Matching (incl. feature matching)
- Recognition of Specific Objects (incl. consistency checks)
- Image Retrieval (incl. efficient indexing schemes)
- Recognition of Object Classes (incl. clustering of features)
- Others
3  Conclusion 
Instructor: 
Tinne Tuytelaars received the MS degree in electrotechnical engineering at the Katholieke Universiteit Leuven in 1996. Since then, she has been working as a researcher in the computer vision group VISICS at that same university, which led to the PhD degree in 2000 for her work on 'Local Invariant Features for Registration and Recognition'. Currently, she is a postdoctoral researcher of the Fund for Scientific Research Flanders (FWO). Her main research interests are object recognition, wide baseline matching, and database retrieval, all based upon the concept of local invariant features. She received the 'Barco prijs voor afstudeerwerken' for her master's thesis, and the 'Barco FWO wetenschappelijke prijs' for her later work on object recognition. She serves as a program committee member for several of the most important computer vision conferences worldwide, and has over forty peer-reviewed publications.
Continuous Optimization Methods in Computer Vision 
Date: Sunday, May 7, 14:30 - 17:45
Venue: Graz Congress Center 
Description: Many problems in computer vision rely on optimization methods for their solution. Traditionally, algebraic methods and local optimization techniques have dominated the field, but a recent trend is to employ methods with a guarantee of global optimality. In this tutorial, we will review both traditional local methods and more recent global approaches for continuous optimization problems in computer vision.
Topics:
- Basic concepts: local vs. global methods, convex vs. non-convex problems, algebraic vs. optimal cost functions, mathematical programming.
- Local optimization techniques: Gauss-Newton methods, Levenberg-Marquardt and more.
- Global optimization techniques:
  - Algebraic methods
  - Min-max optimization of quasiconvex problems
  - Convex approximations of non-convex problems (for example, using Linear Matrix Inequalities)
  - Branch and bound
- Example problems: triangulation, camera pose, structure from motion and more.
Outline: 
1  Introduction to basic concepts 
2  Local optimization techniques and algebraic methods 
3  Min-max optimization of quasiconvex problems
4  Convex approximations of nonconvex problems 
5  Branch and bound 
6  Conclusions 
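As a concrete reference for the local techniques covered in the outline above, here is a minimal Levenberg-Marquardt loop: a Gauss-Newton step with an adaptive damping term, applied to a toy curve-fitting problem. The problem, parameter values and function names are illustrative assumptions, not material from the tutorial:

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: solve the damped normal
    equations (J^T J + lam I) step = -J^T r, accepting steps that
    reduce the cost and increasing the damping lam otherwise."""
    x = x0.astype(float)
    r = residual(x)
    cost = r @ r
    for _ in range(iters):
        J = jac(x)
        A = J.T @ J + lam * np.eye(len(x))
        step = np.linalg.solve(A, -J.T @ r)
        r_new = residual(x + step)
        cost_new = r_new @ r_new
        if cost_new < cost:        # accept the step, relax damping
            x, r, cost = x + step, r_new, cost_new
            lam *= 0.5
        else:                      # reject the step, increase damping
            lam *= 10.0
    return x

# toy problem: fit y = a * exp(b * t) to noiseless data, true (a, b) = (2, 1.5)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t),
                          p[0] * t * np.exp(p[1] * t)], axis=1)
p = levenberg_marquardt(residual, jac, np.array([1.0, 1.0]))
```

Such local iterations only find the minimum nearest the starting point; the global techniques in the outline (quasiconvex min-max, convex relaxations, branch and bound) are precisely about removing that dependence on initialization.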
Instructors: 
Fredrik Kahl received his MSc degree in computer science and technology in 1995 and his PhD in mathematics in 2001. His thesis was awarded the Best Nordic Thesis Award in pattern recognition and image analysis 2001-2002 at the Scandinavian Conference on Image Analysis 2003. In 2005, he received the Marr Prize for best paper at the International Conference on Computer Vision in Beijing, China. He has been a postdoctoral research fellow at the Australian National University (ANU) and at the University of California, San Diego (UCSD). He is currently a Research Fellow at the Centre for Mathematical Sciences, Lund University, Sweden. His main research area is computer vision, in particular, geometric reconstruction problems, photometric stereo, geometry of curves & surfaces and machine learning.
Richard Hartley received a BSc from the Australian National University in 1971, an MSc (1972) and a PhD in Mathematics (1976) from the University of Toronto, Canada, and an MSc in Computer Science from Stanford University in 1985. Professor Hartley is head of the computer vision group in the Department of Systems Engineering at the Australian National University, where he has been since January 2001. He is also the Program Leader for the Autonomous Systems and Sensor Technology Program of National ICT Australia, a research centre set up in 2002 with funding from the Australian Government. He has authored over 90 papers in photogrammetry, computer vision, geometric topology, geometric voting theory, computational geometry and computer-aided design, and holds 32 US patents.
Link to tutorial material: http://users.rsise.anu.edu.au/~hartley/Papers/ECCV2006/eccvoptimization.ppt 
http://users.rsise.anu.edu.au/~hartley/Papers/ECCV2006/tutorial2.ppt 
Graph Cuts versus Level Sets 
Date: Sunday, May 7, 9:00 - 17:45
Venue: Graz Congress Center 
Description: Among the multitude of image segmentation methods, the level set method and the graph minimal-cut approach have emerged as two powerful paradigms for computing image segmentations. The two methods are based on fundamentally different representations of images. Level sets are formulated as infinite-dimensional optimization over a spatially continuous image domain. Graph cuts, on the other hand, are defined as minimal cuts of a discrete graph whose nodes represent the pixels of the image. In this tutorial, we want to review these two methods, show their strengths and limitations, and bridge the gap between these seemingly very different paradigms.
Topics: 
1. Basics of Graph Cuts 
- Basic min-cut/max-flow algorithms for graph partitioning
- Applications in low-level vision: binary segmentation, stereo, texture synthesis, estimation of Markov Random Fields (MRF), multi-label problems, alpha-expansions...
- Submodularity, LP relaxation, and convexity
- Algorithms for non-submodular functions
2. Basics of Level Sets: 
- Explicit versus implicit boundary propagation
- Variational formulation and gradient descent evolution
- Integrating different segmentation criteria (intensity, texture, motion, etc.)
- Applications in low-level vision: 3D shape reconstruction, shape priors, tracking, ...
3. Connecting Graph Cuts and LevelSets: 
- Edge-based (e.g. snakes, geodesic active contours) versus region-based (e.g. Mumford-Shah) approaches
- Implicit surface representation in level sets and graph cuts
- Integrating "regional" and "boundary" cues into graph cuts and level sets
- Differential vs. integral geometry
- Convex and non-convex formulations of energy functionals
- Global versus local optimization
- Solving surface evolution PDEs via differential and integral approaches
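The basic min-cut/max-flow machinery of topic 1 can be sketched with a minimal Edmonds-Karp implementation: repeatedly augment flow along shortest s-t paths in the residual graph until none remains; by max-flow/min-cut duality the resulting flow value equals the capacity of the minimal cut. The toy graph below is illustrative, not drawn from the tutorial:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow. `capacity` is a dict-of-dicts of edge
    capacities; returns the value of the maximum s-t flow, which by
    duality equals the minimal cut capacity."""
    # residual capacities, with zero-capacity reverse edges added
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path with free residual capacity
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                    # no augmenting path left: done
        # collect the path edges and their bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:                  # push flow along the path
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# toy s-t graph; the minimal cut here is {a->t, b->t} with capacity 5
capacity = {'s': {'a': 3, 'b': 2},
            'a': {'b': 1, 't': 2},
            'b': {'t': 3},
            't': {}}
flow = max_flow(capacity, 's', 't')
```

In the segmentation setting discussed in the tutorial, the graph nodes are pixels, s and t encode the two labels, and the edge capacities encode the regional and boundary terms of the energy.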
Instructors: 
Yuri Boykov received his "Diploma of Higher Education" with honors from the Moscow Institute of Physics and Technology in 1992 and completed his Ph.D. in the Department of Operations Research at Cornell University, NY, in 1996. Currently, Yuri is an Assistant Professor in the Department of Computer Science at the University of Western Ontario, Canada. He is interested in problems of segmentation, restoration, registration, stereo, feature-based object recognition, tracking, photo/video editing, learning graph-based representation models, computational geometry, and others.
Daniel Cremers received Bachelor degrees in Mathematics (1994) and Physics (1994), and a Master's degree (Diplom) in Theoretical Physics (1997) from the University of Heidelberg. In 2002 he obtained a PhD in Computer Science from the University of Mannheim, Germany. During research periods at UCLA and at Siemens Corporate Research (Princeton), he developed level set methods for motion and dynamic texture segmentation and for multi-view 3D reconstruction. He introduced non-parametric and dynamical statistical shape priors into level set based tracking. Since October 2005, Daniel has been Professor at the Department of Computer Science at the University of Bonn, Germany.
Vladimir Kolmogorov received the MS degree in Applied Mathematics and Physics from the Moscow Institute of Physics and Technology in 1999, and the PhD degree in Computer Science from Cornell University in 2003. His research focuses on optimization algorithms for Markov Random Fields and their applications to stereo, image segmentation and other vision problems. 
Link to tutorial material: http://www.csd.uwo.ca/faculty/yuri/Abstracts/eccv06tutorial.html 
© 2006 Johanna Pfeifer - 2006-05-30, JP