# Special Topics EECS 598


This 279-page set of class notes was uploaded by Ophelia Ritchie on Thursday, October 29, 2015. The notes belong to EECS 598 at the University of Michigan, taught by Staff in Fall. Since upload, they have received 10 views. For similar materials, see /class/231541/eecs-598-university-of-michigan in Engineering & Computer Science at the University of Michigan.


Date Created: 10/29/15

## Kazhdan et al. 2003: Rotation Invariant Spherical Harmonic Representation of 3D Shape Descriptors

Presented by Paritosh Gupta and Ryan Amundsen.

#### Ground work

Are these two 3D shapes similar? Yes — so what?

#### General approach

- Each model is represented by a shape descriptor.
- The shape descriptors are then compared to find correspondence.
- Would it be a good idea here to simply compare them directly? No. When comparing two 3D shapes, they should be at their optimal alignment first.

#### The problem: pose alignment

Three options currently exist:

1. Exhaustive search
2. Normalization
3. Invariance

#### Exhaustive search

- Approach: compare the shapes at all alignments; alignment is reached when the correspondence between the models is closest.
- Limitation: not at all an efficient approach — the problem needs to be addressed more intelligently.

#### Normalization

- Shapes are placed into a canonical coordinate frame after normalizing for translation, scale, and rotation.
- Two shapes are then assumed to be optimally aligned, so correspondence can be found without trying all possible transformations.

#### Scale normalization (Procrustes analysis)

- Translate the training set to the origin.
- Let $\bar{x}_0$ be the initial estimate of the mean; align all shapes with the mean.
- Re-estimate the mean to be $\bar{x}$; align the new mean with respect to the previous mean and scale so that $|\bar{x}| = 1$.
- Procrustes analysis: $D = \sum_i |x_i - \bar{x}|^2$ is minimized.

#### PCA pose normalization

- Align the principal axes to the x, y, z axes of a canonical coordinate system by an affine transformation based on a set of surface points.
- Translate the center of mass to the origin.
- Apply a rotation so that the largest variance of the transformed points lies along the x axis.
- Rotate around the x axis so that the maximal spread in the yz plane occurs along the y axis.

#### Normalization: the PCA issue

Traditional methods for translation and scaling give good results, but rotation normalization is less robust and obstructs matching performance.

#### Invariance

We skip translation invariance (and the mention of other shape descriptors — EGI, etc.). Our focus here is rotation invariance using spherical harmonics.
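The normalization pipeline described above (translate to the center of mass, fix the scale, then PCA-align the rotation) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's code; the function name `pca_normalize` and the unit mean-distance scaling convention are my own assumptions.

```python
import numpy as np

def pca_normalize(points):
    """Normalize an (N, 3) point cloud for translation, scale, and rotation
    (PCA pose normalization, as described above). Hypothetical helper."""
    # Translation: move the center of mass to the origin.
    centered = points - points.mean(axis=0)
    # Scale: normalize so the average distance from the origin is 1.
    centered = centered / np.linalg.norm(centered, axis=1).mean()
    # Rotation: align the principal axes (eigenvectors of the covariance
    # matrix) with the coordinate axes, largest variance first (x axis).
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # largest variance -> x axis
    return centered @ eigvecs[:, order]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([5.0, 2.0, 0.5])
norm = pca_normalize(cloud)
```

After normalization the per-axis variances come out sorted (x largest), the centroid sits at the origin, and the mean distance from the origin is 1 — regardless of how the input cloud was placed, scaled, or (up to axis flips) rotated.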
#### Spherical harmonics: descriptor comparison

*Table 1 (values garbled in source): a summary of a number of shape descriptors — Crease Histograms [8], Shape Distributions [9], Extended Gaussian Images [10], Shape Histograms [11], Shells Shape Histograms [11], Spherical Extent Functions [12], Wavelets [13], Reflective Symmetry Descriptors [14], Higher Order Moments [15], Exponentiated EDT [16] — showing whether each is normalized or invariant to translation, scale, and rotation.*

#### Why PCA alignment is not enough

- Rotation normalization via PCA alignment does not provide robust normalization for many matching applications.
- PCA alignment is performed by solving for the eigenvalues of the covariance matrix, which captures only second-order model information.
- Using PCA assumes that the alignment of higher-frequency information is strongly correlated with the alignment of the second-order components — this is not always true.

#### Spherical harmonic decomposition

Using the theory of spherical harmonics, any spherical function can be written as

$$f(\theta, \phi) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} a_{lm} Y_l^m(\theta, \phi).$$

The histogram is plotted with the L2 norm computed after frequency accumulation:

$$\|f_l\| = \Big( \sum_{m=-l}^{l} |a_{lm}|^2 \Big)^{1/2}.$$

- For a frequency $l$, the subspace $V_l$ of functions so defined is fixed under rotation: for any function $f \in V_l$ and any rotation $R$, we have $R(f) \in V_l$. Rotation shuffles energy among the different $m$, but only within a fixed $l$, so the per-frequency structure is preserved.
- $V_l$ is irreducible: it cannot be further decomposed as $V_l = V_l' \oplus V_l''$ with $V_l'$ and $V_l''$ nontrivial.
- If $f_l$ are the frequency components of $f$, then $\|R(f_l)\| = \|f_l\|$: applying a rotation to a spherical function does not change its energy representation.

#### Visualizing spherical functions

- Represent each spherical function as a sum of harmonic frequencies (orders).
- A spherical function can be visualized by scaling points on the surface of a sphere in proportion to their value: points with a large value are pushed away from the center, and points with a small value are pulled in. Points with positive values are shown in red,
and points with negative values in blue.

#### Frequency subspaces are fixed by rotation

- Store how much of the L2 norm of the shape resides in each frequency to get a rotation-invariant representation.
- The spherical frequency decomposition, like the Fourier decomposition, has the property that individual frequency components are fixed by rotation: a first-order function, when rotated, remains a first-order function and can be expressed as a linear sum of first-order harmonics; likewise, a second-order function, when rotated, remains second-order and can be expressed as a linear sum of second-order harmonics. Hence we obtain a rotation-invariant representation of a spherical function by storing only the energy of each frequency component.

#### Comparing harmonic representations

- The L2 difference of two harmonic representations bounds the proximity of the descriptors over all rotations: it is a lower bound on the minimum L2 difference over all rotations.
- Advantages: improves matching performance; reduces the dimensionality of the descriptor, providing a more compact representation.
- Limitations: in going from a spherical shape descriptor to its spherical harmonic representation, we lose information — the dimensionality of the representation is reduced by contracting a 2D spherical function to a 1D array of energy values.

#### Mathematically

A spherical function with bandwidth $b$ is

$$f(\theta, \phi) = \sum_{l=0}^{b-1} \sum_{m=-l}^{l} a_{lm} Y_l^m(\theta, \phi),$$

so the space of spherical functions with bandwidth $b$ is $O(b^2)$, while the spherical harmonic representation is $O(b)$.
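The rotation-invariant signature described above — keep only the L2 energy in each frequency $l$ — is easy to sketch once the coefficients $a_{lm}$ are in hand. This is my own toy check, not the paper's code: a rotation about the z-axis multiplies $a_{lm}$ by $e^{im\phi}$, which changes phases within each frequency but leaves every per-frequency energy unchanged (a general rotation mixes the $m$'s within each $l$ through a unitary matrix, with the same conclusion).

```python
import numpy as np

def sh_signature(coeffs):
    """Rotation-invariant signature: the L2 energy in each frequency l.
    `coeffs` maps l -> complex array of the 2l+1 coefficients a_{l,-l..l}."""
    return np.array([np.linalg.norm(a) for l, a in sorted(coeffs.items())])

rng = np.random.default_rng(1)
coeffs = {l: rng.normal(size=2*l+1) + 1j * rng.normal(size=2*l+1)
          for l in range(4)}

# A rotation about the z axis multiplies a_{l,m} by exp(i*m*phi): it changes
# phases within each frequency l but never moves energy across frequencies.
phi = 0.7
rotated = {l: a * np.exp(1j * np.arange(-l, l + 1) * phi)
           for l, a in coeffs.items()}
```

The two signatures `sh_signature(coeffs)` and `sh_signature(rotated)` agree to machine precision, which is exactly the invariance the descriptor relies on.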
#### Voxel grids

- A rotation fixes the distance of a point from the origin, so a voxel grid can be treated as a collection of spherical functions weighted by radius, obtained by restricting the shape to concentric spheres.
- Caveat: the representation is invariant to *independent* rotations of each radius — two models that are not rotations of each other can still have identical spherical harmonic representations.

## Shape Distributions

(Osada, Funkhouser, Chazelle, and Dobkin — "Don't go to Princeton if you don't like to write.")

#### Introduction

- 3D shape similarity is a fundamental task: recognition, retrieval, clustering, and classification.
- Relevant to computer vision, mechanical engineering, and molecular biology.
- 3D model databases can expand into a variety of fields because: improved modeling tools exist; the World Wide Web spreads the availability of 3D models; hardware and CPUs are getting faster.
- Unlike images, 3D models are independent of cameras, light sources, and surrounding objects: a 3D model always includes every aspect of the object.

#### Problems with 3D model classification

- Current issues: different file formats, different scan tools.
- Previous works had difficulty with 3D "polygon soups."
- Classification requires solutions to costly problems: reconstruction of 3D models; parameterization (e.g., boundary arc length); registration (e.g., coordinate systems for alignment); correspondence of features.

#### Goal

To represent the shape signature for a 3D model as a probability distribution sampled from a shape function measuring geometric properties of the 3D model.

#### Types of shape distributions

- **A3**: measures the angle between three random points on the surface of a 3D model.
- **D1**: measures the distance between a fixed point and one random point on the surface, with the centroid of the boundary of the model as the fixed point.
- **D2**: measures the distance between two random points on the surface.
- **D3**: measures the square root of the area of the triangle between three random points on the surface.
- **D4**: measures the cube root of the volume of the tetrahedron between four random points on the
surface.

#### Examples of D2 distributions

(a) Line segment; (b) circle (perimeter only); (c) triangle; (e) sphere; (f) cylinder without caps; (g) ellipsoids of different radii; (h) two adjacent unit spheres; (i) two unit spheres separated by 1, 2, 3, and 4 units.

#### Normalization of shape distributions

- **Max**: align maximum magnitudes.
- **Mean**: align mean magnitudes.
- **Search**: search for the most similar scale constant, $\min_s D(f(x), g(sx))$.

#### Constructing shape distributions

1. Evaluate $N$ samples from the model.
2. Organize the samples in $B$ bins of fixed size to form a histogram.
3. Construct a piecewise linear function to represent the distribution with $V$ components.

In this experiment $N = 1024^2$, $B = 1024$, $V = 64$ is used, because empirically it gives the best performance.

#### Sample generation

1. Divide the model's surface into triangles.
2. Give each triangle a probability proportional to its share of the total surface area of the 3D object.
3. Select a random point from a randomly selected triangle with vertices $A$, $B$, $C$:

$$P = (1 - \sqrt{r_1})\,A + \sqrt{r_1}(1 - r_2)\,B + \sqrt{r_1}\,r_2\,C.$$

#### Dissimilarity measures

The following dissimilarity measures were used (for the PDF and CDF measures, only $N = 1, 2$ were tested):

- Chi-squared: $D(f, g) = \int \frac{(f - g)^2}{f + g}$
- Bhattacharyya: $D(f, g) = 1 - \int \sqrt{f g}$
- PDF $L_N$ (Minkowski $L_N$ norm of the pdf): $D(f, g) = \left( \int |f - g|^N \right)^{1/N}$
- CDF $L_N$ (Minkowski $L_N$ norm of the cdf): $D(f, g) = \left( \int |\hat{f} - \hat{g}|^N \right)^{1/N}$

#### Basic shape distribution process

- Comparison measures: chi-squared; Bhattacharyya; PDF norm, $N = 1, 2$; CDF norm, $N = 1, 2$.
- Sampling: $1024^2$ samples, 1024 bins, 64 vectors for the piecewise function.
- Distribution formulation: A3, D1, D2, D3, D4.
- Normalization: max, mean, search.

#### Experimental setup

- Coded in C.
- The 3D models used were composed of independent polygons, containing between 20 and 186,000 polygons (average ~7,000).
- Most models contained cracks, self-intersections, and missing polygons.
- Experiments were run on a PC with a 400 MHz Pentium II processor and 256 MB of memory.

#### Experiment 1: robustness

- D2 shape function, MEAN normalization method, PDF L1 norm.
- 10 models, 8 transformations, and a 90-model library including the original models.
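The D2 construction above (area-weighted triangle sampling with the point formula $P = (1-\sqrt{r_1})A + \sqrt{r_1}(1-r_2)B + \sqrt{r_1}r_2 C$, then a histogram of pairwise distances) can be sketched as follows. This is a toy sketch using a two-triangle unit square and a much smaller $N$ and $B$ than the paper's $1024^2$ and 1024; the function names are my own.

```python
import numpy as np

def sample_surface(verts, tris, n, rng):
    """Sample n points uniformly on a triangle mesh: pick triangles with
    probability proportional to area, then use the paper's point formula
    P = (1-sqrt(r1))A + sqrt(r1)(1-r2)B + sqrt(r1)r2C."""
    a, b, c = (verts[tris[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = rng.choice(len(tris), size=n, p=areas / areas.sum())
    r1 = np.sqrt(rng.random(n))[:, None]
    r2 = rng.random(n)[:, None]
    return (1 - r1) * a[idx] + r1 * (1 - r2) * b[idx] + r1 * r2 * c[idx]

def d2_histogram(verts, tris, n_samples=1024, bins=64, rng=None):
    """D2 shape distribution: histogram of distances between random point
    pairs (small n_samples/bins here, purely for illustration)."""
    rng = rng or np.random.default_rng(0)
    p = sample_surface(verts, tris, n_samples, rng)
    q = sample_surface(verts, tris, n_samples, rng)
    d = np.linalg.norm(p - q, axis=1)
    hist, _ = np.histogram(d, bins=bins, density=True)
    return hist

# Unit square in the z = 0 plane, split into two triangles.
verts = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
hist = d2_histogram(verts, tris)
```

The resulting 64-bin histogram is the signature that the dissimilarity measures above (chi-squared, Bhattacharyya, Minkowski norms) would then compare between models.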
The eight transformations:

- **Scale**: grow by a factor of 10 in every dimension.
- **Anisotropic scale**: grow by 5% in the Y dimension and 10% in the Z dimension.
- **Rotate**: rotate by 45 degrees around the X axis, then the Y axis, then the Z axis.
- **Mirror**: mirror over the YZ plane, then over the XZ plane, then over the XY plane.
- **Shear**: grow the Y and Z dimensions by 5% and 10% of the X dimension, respectively.
- **Noise**: perturb each vertex randomly by 1% of the longest length of the model's bounding box.
- **Delete**: randomly remove 5% of the polygons.
- **Insert**: randomly insert copies of 5% of the polygons.

*Robustness figures (garbled in source): D2 distributions of seven variants of the 10 models; D2 distributions of two models over seven different tessellations; similarity matrix of the 90-model library using the PDF L1 norm (classes: car, sub, chair, plane, missile, mug, phone, skateboard, human, table).*

#### Experiment 2: classification

- D2 shape function, MEAN normalization method, PDF L1 norm.
- 133 models from the internet, grouped into 25 classes.
- The library is more diverse than in the first experiment; each class contains an arbitrary number of models.
- Similarity between classes, as well as within classes, varies significantly.
- Examples of three classes show the high variability among and between classes (e.g., humans, tanks, warships).
- Notice that some objects are grouped together because of similar function, as opposed to shape.

*Similarity matrix for all 133 models (figure garbled in source).*

*Table I (values garbled in source): evaluation of shape functions using MEAN normalization and the PDF L1 norm — columns: shape function, first tier, second tier, nearest neighbor, sample time.*
*Table II (values garbled in source): evaluation of normalization methods using D2 and the PDF L1 norm — methods MAX, MEAN, SEARCH; columns: first tier, second tier, nearest neighbor, compare time (ms).*

*Table III (values garbled in source): evaluation of comparison methods using D2 and MEAN normalization — columns: first tier, second tier, nearest neighbor, compare time (ms).*

For comparison, moments of the boundary are defined as

$$m_{pqr} = \int_{\text{boundary}} x^p y^q z^r \, dx\, dy\, dz.$$

*Table IV (values garbled in source): comparison of classification results achieved with shape distributions versus moments of different orders (M3–M7) — columns: first tier, second tier, nearest neighbor.*

#### Summary

- Comparison measures: chi-squared, Bhattacharyya, PDF norm ($N = 1, 2$), CDF norm ($N = 1, 2$).
- Sampling: $1024^2$ samples, 1024 bins, 64 vectors for the piecewise function.
- Normalization: max, mean, search.
- Formulation: A3, D1, D2, D3, D4.

## Lecture 4: Electromagnetic Waves III

EECS 598-002, Nanophotonics and Nanoscale Fabrication, Winter 2006 (P. C. Ku).

### 4.1 Diffraction and Scattering

In the previous sections we discussed the generation of EM waves and their propagation through a homogeneous medium. We also discussed how the interface between two semi-infinite media interacts with the waves. In this section we extend the discussion to the interaction of EM waves with a nanoscale structure. We consider two general cases: diffraction and scattering. Diffraction is one type of scattering, in which the EM wave is scattered off a small aperture.

From our previous discussions, the details of the interatomic and atom–EM-wave interactions can be lumped into a polarization vector after ignoring higher-order terms, as in (2.7). This process can be thought of as an ensemble of dipole moments being induced by the incoming EM wave, with a new EM wave generated from those dipole antennas. This picture, rigorously speaking, is not quite correct, as the induced EM wave can in turn interact with the medium itself and generate new dipole moments; the process iterates over and over, an infinite number of times. In most cases, however, we can truncate the iteration and still get a good approximation.

#### 4.1.1 Scattering by a Small Sphere (Rayleigh Scattering)

We consider
the scattering of an EM wave by a small sphere. If the sphere is small enough (still > 10 nm, but much smaller than the wavelength), we can approximate the induced polarization as an infinitesimal dipole moment. The scattered wave is the radiation generated by this dipole antenna, as given by (3.26). To determine the dipole moment, we match the boundary conditions for the fields at the surface of the sphere, i.e., the interface between two macroscopic media — in this case, the air and the sphere.

Assume the incident wave is a plane wave given by

$$\mathbf{E} = \hat{x} E_x e^{ikz} \approx \hat{x} E_x, \quad \text{as } ka \ll 1. \tag{4.1}$$

The fields — including the field inside the sphere and the scattered field near the origin of the sphere (i.e., the location of the induced dipole moment) — have the near-field, quasi-static nature. Because there are no boundaries inside the sphere and the field inside the sphere satisfies the Poisson equation, the solution inside is a uniform field polarized in the same direction as the incoming wave. With the polar angle $\theta$ measured from the polarization direction,

$$\mathbf{E}_{\text{in}} = E_s (\hat{r} \cos\theta - \hat{\theta} \sin\theta). \tag{4.2}$$

The scattered field near the sphere is given by (3.29), the quasi-static field of a dipole of moment $p$:

$$\mathbf{E}_{\text{scat}} = \frac{p}{4\pi\varepsilon} \left( \hat{r}\, \frac{2\cos\theta}{r^3} + \hat{\theta}\, \frac{\sin\theta}{r^3} \right). \tag{4.3}$$

To match the boundary conditions at the surface of the sphere ($r = a$), we demand continuity of the tangential electric field and the normal displacement field:

$$E_\theta^{\text{out}}\big|_{r=a} = E_\theta^{\text{in}}\big|_{r=a}, \qquad D_r^{\text{out}}\big|_{r=a} = D_r^{\text{in}}\big|_{r=a}. \tag{4.4}$$

Writing $E_p \equiv p / (4\pi\varepsilon a^3)$, this leads to

$$E_x - E_p = E_s, \qquad \varepsilon E_x + 2\varepsilon E_p = \varepsilon_s E_s, \tag{4.5}$$

and we get

$$E_s = \frac{3\varepsilon}{\varepsilon_s + 2\varepsilon} E_x. \tag{4.6}$$

Combining (4.6) and (4.3), we have

$$p = 4\pi\varepsilon a^3 \, \frac{\varepsilon_s - \varepsilon}{\varepsilon_s + 2\varepsilon} \, E_x. \tag{4.7}$$

Now the scattered wave is nothing but the radiation generated from an infinitesimal dipole radiator with the strength given by (4.7). In the far-field region, the electric field is

$$\mathbf{E} = \hat{\theta} \, \frac{\varepsilon_s - \varepsilon}{\varepsilon_s + 2\varepsilon} \, k^2 a^3 \, \frac{e^{ikr}}{r} \sin\theta \, E_x \equiv \hat{\theta} E_\theta. \tag{4.8}$$

The magnetic field is

$$\mathbf{H} = \hat{\phi} \sqrt{\frac{\varepsilon}{\mu_0}} \, E_\theta. \tag{4.9}$$

The total averaged scattered power is

$$P_s = \frac{1}{2} \oint \mathrm{Re}(\mathbf{E} \times \mathbf{H}^*) \cdot d\mathbf{a} = \frac{4\pi}{3} \sqrt{\frac{\varepsilon}{\mu_0}} \left| k^2 a^3 \, \frac{\varepsilon_s - \varepsilon}{\varepsilon_s + 2\varepsilon} \, E_x \right|^2. \tag{4.10}$$

The scattering cross section is defined as the ratio of the total scattered power to the incident power density:

$$\sigma_s = \frac{8\pi}{3} \left( k^2 a^3 \right)^2 \left| \frac{\varepsilon_s - \varepsilon}{\varepsilon_s + 2\varepsilon} \right|^2 = \frac{8\pi}{3} k^4 a^6 \left| \frac{\varepsilon_s - \varepsilon}{\varepsilon_s + 2\varepsilon} \right|^2. \tag{4.11}$$

Higher-frequency waves are scattered more: this explains why the sky is blue and the sunset is red.
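Equation (4.11) says $\sigma_s \propto k^4 a^6$. A quick numerical check of the blue-sky argument follows; this is illustrative only — the constant $\varepsilon_s = 2.1$ and the 10 nm radius are arbitrary assumptions of mine, and a real $\varepsilon_s$ is dispersive.

```python
import numpy as np

def rayleigh_sigma(wavelength, radius, eps_s, eps=1.0):
    """Rayleigh scattering cross section, eq. (4.11):
    sigma_s = (8*pi/3) * k^4 * a^6 * |(eps_s - eps)/(eps_s + 2*eps)|^2.
    Assumes eps_s is constant over wavelength (a simplification)."""
    k = 2 * np.pi / wavelength
    contrast = abs((eps_s - eps) / (eps_s + 2 * eps)) ** 2
    return (8 * np.pi / 3) * k**4 * radius**6 * contrast

a = 10e-9        # 10 nm sphere, within the small-sphere regime ka << 1
eps_s = 2.1      # hypothetical, wavelength-independent dielectric constant
blue = rayleigh_sigma(450e-9, a, eps_s)
red = rayleigh_sigma(700e-9, a, eps_s)
ratio = blue / red   # equals (700/450)^4, about 5.9
```

With everything else fixed, the cross-section ratio depends only on the wavelengths, $(\lambda_{\text{red}}/\lambda_{\text{blue}})^4 \approx 5.9$: blue light is scattered roughly six times more strongly than red.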
In the special case when $\varepsilon_s = -2\varepsilon$, the scattering cross section goes to infinity. Rigorously speaking, if the scattering cross section diverges, we need to take into account all the iteration terms in the scattering process (the scattered wave induces the dipole moment, which induces the scattered wave, and so on). Nonetheless, it tells us that there is a resonance when $\varepsilon_s = -2\varepsilon$. This again corresponds to the surface polariton mode and will be discussed later.

#### 4.1.2 Mie Scattering

For scattering by a not-so-small sphere, we can no longer assume the scattered wave results from an infinitesimal dipole radiator. This problem, however, can be solved rigorously by matching the boundary conditions for Maxwell's equations. This is called Mie scattering; those of you who are interested in this topic can refer to J. A. Kong, *Electromagnetic Wave Theory*, 2nd ed., Section 6.1.

### 4.2 Waveguiding

So far we have discussed how an EM wave propagates and interacts with macroscopic media. We learned about reflection and refraction. We also learned that when a certain condition is met, we have total internal reflection. In this section we discuss how to use total internal reflection to guide and confine an EM wave along an "energy pipe," or waveguide.

The simplest example is to stack two semi-infinite straight interfaces, with a higher-refractive-index material sandwiched between two low-refractive-index materials. When a radiation source (e.g., a dipole radiator) is placed inside the waveguide, the part of the EM radiation that propagates at an angle greater than the total-internal-reflection angle will be guided inside the waveguide without energy loss through either interface. After a certain distance of propagation, all the radiation components that do not satisfy the total-internal-reflection condition will have lost their energy. The part that is being guided will maintain constant electric and magnetic field
profiles along the waveguide direction. This is called the waveguide "transverse mode."

To solve for the electric and magnetic field profiles, we write down a general solution for a TE-polarized field as follows (a symmetric slab of thickness $d$, core permittivity $\varepsilon_1$, cladding permittivity $\varepsilon_2$):

$$E_y(x, z) = \begin{cases} A_1 e^{-\alpha x} e^{ik_z z}, & x > d/2 \\ \big(A_2 \cos(k_h x) \text{ or } A_2 \sin(k_h x)\big)\, e^{ik_z z}, & |x| \le d/2 \\ \pm A_1 e^{\alpha x} e^{ik_z z}, & x < -d/2 \end{cases} \tag{4.12}$$

where

$$k_h^2 = \omega^2 \mu_0 \varepsilon_1 - k_z^2, \qquad \alpha^2 = k_z^2 - \omega^2 \mu_0 \varepsilon_2. \tag{4.13}$$

The magnetic field follows from

$$H_z(x, z) = \frac{1}{i\omega\mu_0} \frac{\partial E_y}{\partial x}. \tag{4.14}$$

The boundary conditions give us, for the even modes,

$$\alpha = k_h \tan\!\left(\frac{k_h d}{2}\right). \tag{4.15}$$

## Geometrical Cloning of 3D Objects via Simultaneous Registration of Multiple Range Images

Presented by Bhargav Avasarala.

#### Presentation outline

- Keywords: range imaging, image registration
- Introduction: motivation, previous works, major contribution
- Methodologies: process pipeline; 3D scanning; registration of multiple range images; nonlinear least squares; visibility criterion; reconstruction and 3D rendering
- Summary / overview

#### Keywords: range imaging

- A collection of techniques used to produce a 2D image that provides depth information.
- Range images are also referred to as depth maps or xyz maps.
- Typically, light intensity is used to represent depth.

#### Keywords: image registration

- The process of aligning two or more images of the same scene.
- Use one image as a reference model and align the target scene images to it.
- Rigid (translation, rotation, scaling) and non-rigid (elastic) transformations; curves, edges, and points are used as correspondences.
- Applications: medical imaging (combining CT and NMR data, multimodal analysis); cartography (map updating, multitemporal analysis).

#### Keywords: range image (surface) registration

- Matching or aligning points of 3D surfaces.
- In mathematical terms: let $X$, $Y$ represent two surfaces; for some rigid transformation $T$,

$$\forall x_i \in X \;\; \exists y_i \in Y \;\; \text{s.t.} \;\; \|T(x_i) - y_i\| = 0.$$

#### Introduction: motivation

Reconstruction / 3D rendering of real-world objects from the registration of multiple range images.

#### Introduction: previous works

- Turk and Levoy: registration of
range images through a modified ICP (iterative closest point) algorithm.
- Bergevin, Laurendeau, and Poussart: partial range views, represented as triangulated meshes, used as features for registration.
- Higuchi, Hebert, and Ikeuchi: feature-based approach using curvature for registration.
- Curless and Levoy: reconstructed complete objects through a volumetric approach, given registered range images.

#### Introduction: major contributions

- **Simultaneous registration**: iteratively align all range images simultaneously.
- **Resolution hierarchy**: cuts down on the runtime of convergence.
- **Visibility criteria**: ensures that points not belonging to the object are not inadvertently incorporated into the 3D image.

#### Methodologies: process pipeline

- Segmentation (object / background / shadows).
- Registration.
- Building of an intermediate volumetric model which represents the topology → direct volume rendering.
- Marching cubes, then improvement of the polygonal mesh → polygonal rendering of the reconstructed object.

#### Methodologies: 3D scanning

- Range images are acquired through 3D scanners.
- Issues: it is not possible to scan the shape of an object entirely in one view (topological and geometrical limitations); images need 20–50% overlap for correspondence.
- Application of registration for rendering: reconstruction of a 3D scene entirely from partial views — connecting the surface pieces of a puzzle in 3D.

#### Methodologies: image registration — initial estimates

- Manual selection of corresponding points in each image (at least 3 points are needed).
- Serves to provide an initial relative-orientation estimate.
- Fig. 2: the relative orientation for each range image with respect to a world coordinate system.

#### Methodologies: simultaneous registration

- Use the initial relative-orientation estimate and iteratively optimize the transformation parameters over all range images.
- Advantages: global error minimization —
spread/diffuse the error over all range images.
- Disadvantages: large computational cost, since each iteration must compute the error between each range image and the reference.

#### Methodologies: nonlinear least squares — notation

- $S = \{S_0, S_1, \ldots, S_N\}$: the set of range images (surfaces).
- $\theta_i$: relative orientation with respect to world coordinates.
- $R(\theta)$: rotation matrix depending on $\theta$; $t(\theta)$: translation parameters depending on $\theta$.
- $T_i$: transformation (range image → world coordinates), $T_i(x) = R(\theta_i)\, x + t(\theta_i)$ for $x \in S_i$.
- $D$: distance between points.
- $\theta$ is optimized numerically to estimate the alignment.

#### Methodologies: nonlinear least squares (cont'd)

Least-squares problem: estimate the values of the relative coordinate parameters $\theta$ such that the distances between the transformed surfaces are minimized in the least-squares sense:

$$\hat{\theta} = \arg\min_{\theta} \sum_{i,j} D^2\big(T(\theta_i) S_i,\; T(\theta_j) S_j\big).$$

- The iterative algorithm registers all images at once.
- Split the transformation operator into a known operator and a correction operator: $T = \delta T \circ \bar{T}$.
- The new distance to minimize:

$$\varepsilon^2 = \min_{\delta} \sum_{i,j} D^2\big(\delta T_i \bar{T}_i S_i,\; \delta T_j \bar{T}_j S_j\big).$$

- Discretize the model (reference) image, approximating the region around each model point as a plane.
- The correction vector $\delta$ is derived by simplifying the least-squares expression into normal equations of the form

$$\delta = \Big( \sum_{i,j,k} A_{ijk}^{\top} A_{ijk} \Big)^{-1} \sum_{i,j,k} A_{ijk}^{\top} \varepsilon_{ijk},$$

where $i, j$ are indices of range images from the set $S$ and $k$ indexes the available model points.

- The iterative step: $T_i^{(n+1)} = \delta T_i \, T_i^{(n)}$.

#### Methodologies: resolution hierarchy

- Choose only a few model points for the first iteration and successively increase the number of model points at each iterative step.
- Runtime depends mainly on how the registration algorithm is iterated at the highest resolution.
- 20–100 times faster as a result of implementing the hierarchy.

#### Methodologies: visibility criterion

- Visible objects are defined as lying between the scanner and the surface.
- A point $x$, given in world coordinates, does not belong to a particular object if its depth exceeds the interpolated depth at the corresponding coordinates of $S_i$ by more than a threshold. (The exact inequality is garbled in the source.)
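The normal-equation form of the correction vector used in the registration can be illustrated with a toy 2D, point-to-point version. This sketch — including the small-angle rigid parameterization $\delta = [t_x, t_y, \theta]$ — is my own simplification, not the paper's multi-view, point-to-plane formulation.

```python
import numpy as np

def correction_step(model_pts, scene_pts):
    """One linearized least-squares correction: delta = (sum A^T A)^-1 sum A^T e,
    for a small-angle 2D rigid motion delta = [tx, ty, theta]."""
    rows, resid = [], []
    for (mx, my), (sx, sy) in zip(model_pts, scene_pts):
        # Linearized model: m ~= s + t + theta * J s, with J a 90-deg rotation,
        # so the Jacobian rows are d/dtx, d/dty, d/dtheta of the prediction.
        rows.append([1.0, 0.0, -sy]); resid.append(mx - sx)
        rows.append([0.0, 1.0,  sx]); resid.append(my - sy)
    A, e = np.array(rows), np.array(resid)
    return np.linalg.solve(A.T @ A, A.T @ e)

rng = np.random.default_rng(2)
model = rng.normal(size=(30, 2))
theta, t = 0.05, np.array([0.3, -0.2])               # small true misalignment
c, s = np.cos(theta), np.sin(theta)
scene = (model - t) @ np.array([[c, s], [-s, c]]).T  # scene = R(-theta)(model - t)
delta = correction_step(model, scene)                # ~ [0.3, -0.2, 0.05]
```

One such solve is one iterative step; the full method stacks the $A$ blocks over all image pairs and model points, and repeats with a growing point set (the resolution hierarchy).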
#### Methodologies: reconstruction and 3D rendering

- **Intermediate volume**: construction of a topological model from the registered partial views under the world coordinate system.
- **Volume rendering**: 3D rendering of the intermediate volume; each point requires an opacity and a color.
- **Polygonal rendering**: resample the object in 3D space using the marching cubes algorithm — a scheme for extracting a polygonal mesh of an isosurface.

## (1) 3D Object Modeling and Recognition Using Affine-Invariant Patches and Multi-View Spatial Constraints; (2) Segmenting, Modeling, and Matching Video Clips Containing Multiple Moving Objects

By Fred Rothganger, Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Presented by Bunu Soo Kim & Liang Mei, Feb 10, 2009, University of Michigan, Ann Arbor.

#### The problem

Build a 3D model from a small set of images and recognize it in other images (modeling → recognition).

#### Previous works

- Feature-based geometric approaches: do not use local brightness patterns as features (Nevatia & Binford '77; Brooks '81; Sethi et al. '02).
- Appearance-based methods: do not use 3D structure (Turk & Pentland '91; Murase & Nayar '95; Schmid & Mohr '97; Lowe '01; Mahamud et al. '02).
- Affine-invariant patches: new technology enables the use of both appearance and 3D structure (Lindeberg & Gårding '94; Mikolajczyk & Schmid '02).

#### Approach

- **Modeling**: find affine-invariant patches in each view; match between
selected pairs of views using 3D geometric constraints; merge the partial models.
- **Recognition**: select appearance-based potential matches; RANSAC-like selection/estimation; add geometry-based matches; object detection.
- **Key elements**: affine-invariant patches; a filter for selecting match candidates; 3D geometric constraints, which afford a matching algorithm for discarding geometrically inconsistent candidates.

#### Approach: affine-invariant patches

- Pose invariance: rotation, affine.
- Why? We have to guarantee the equivalence of patches from a set of unregistered images taken from different viewpoints.
- Detection: extract geometric constraints. Description: extract appearance patches.

#### Detection: integrated affine region detector

- Based on the Harris detector; detects the characteristic shape of the local feature.
- The second-moment ellipse can be viewed as the characteristic shape of a region:

$$M = \sum_{x, y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix},$$

with the eigenvectors of $M$ giving the directions of fastest and slowest change, and $[u\; v]\, M\, [u\; v]^{\top} = \text{const}$ defining the ellipse.

#### Detection (cont'd): modified integrated affine region detector

("Scale & Affine Invariant Interest Point Detectors," K. Mikolajczyk et al., IJCV 2004.) Iterates over:

1. Detect an initial region with the DoG detector or the original Harris detector.
2. Estimate the affine shape with $M$.
3. Normalize the affine region to a circular one.
4. Re-detect the new location and scale in the normalized image.
5. Go to step 2 if the eigenvalues of $M$ for the new point are not equal (the detector is not yet adapted to the characteristic shape).
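The second-moment matrix above can be sketched directly in NumPy. In this toy stand-in of mine (not the paper's detector), the Gaussian window $w(x,y)$ is replaced by a crude box window, and the standard Harris response $\det(M) - k\,\mathrm{trace}(M)^2$ with $k = 0.04$ is evaluated over the whole image.

```python
import numpy as np

def harris_response(img, half_window=2):
    """Per-pixel second-moment matrix M = sum_w [Ix^2, IxIy; IxIy, Iy^2]
    over a box window (standing in for Gaussian weighting), reduced to the
    Harris corner response det(M) - k * trace(M)^2."""
    Iy, Ix = np.gradient(img.astype(float))
    k, w = 0.04, 2 * half_window + 1

    def box(a):  # w-by-w window sums via an integral image
        cum = np.pad(np.cumsum(np.cumsum(a, 0), 1), ((1, 0), (1, 0)))
        return cum[w:, w:] - cum[:-w, w:] - cum[w:, :-w] + cum[:-w, :-w]

    A, B, C = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return A * B - C * C - k * (A + B) ** 2

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0          # a bright square: strong response at corners
resp = harris_response(img)
```

Corner windows, where both $I_x$ and $I_y$ energy is present, give a large positive response; edge windows, where $M$ is nearly rank-1, give a non-positive one. The affine detector iterates on this matrix to recover the characteristic ellipse rather than thresholding the scalar response.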
#### Description

- Descriptor output: there is an ambiguity of rotation and reflection.
- The ambiguity is resolved by the dominant gradient orientation; turn the corresponding ellipse into a parallelogram.

#### Orientation assignment

- Orientation is determined by the largest peak of the gradient-orientation histogram (over $0$ to $2\pi$).
- Assign a second orientation if the second peak is at least 80% of the height; about 15% of keypoints have two orientations.
- A parabolic fit to the three values around the peak interpolates the orientation.

#### Description (cont'd): affine rectifying transformation

- The description splits into an appearance part (affine rectifying transformations) and a geometric part.
- The affine rectifying transformation maps each parallelogram onto a unit square.
- Result: a normalized representation of the local surface appearance, invariant under planar affine transformations.
- SIFT descriptor: image gradients are accumulated into the keypoint descriptor.

#### Approach: 3D geometric constraints

- Scene patch $j$, (fictitious) image number $i$.
- Since $S$ contains 3 vectors, each patch carries 3 vectors of information. If $m$ patches match in
appearance, then from the $3m$ vectors we can use SFM (structure from motion).

#### 3D geometric constraints: factorization

Stack the patch measurements $S_{ij}$ (scene patch $j$ seen in image $i$) into a single measurement matrix and factor it into a motion part (one block per image) and a structure part (one block per patch). The measurement matrix therefore has low rank, which is the multi-view constraint exploited here. (The matrix layout in the source is garbled.)

#### Matching: RANSAC

- For each patch in the first set, find the $K$ closest patches in the second set; add to $P$ the matches whose distance does not exceed a threshold.
- Parameters: the number of iterations of the RANSAC-like part of the algorithm, and the number of samples drawn at each iteration of the sampling stage.
- RANSAC-like selection/estimation:
  a. Sampling: draw samples and estimate the corresponding geometric parameters.
Consensus: add to C all elements of P not already there whose reprojection error is smaller than E. Initialize C to be the largest consensus set; use neighborhood-consistency constraints to remove potential outliers and re-estimate the geometric parameters.
(3) Geometry-based addition of matches: add to C any element of P whose reprojection error is smaller than E; re-estimate the geometric parameters and output C.

Merging Partial Models:
1. Chaining: link matches across multiple images. [Patch-view matrix for the teddy bear: columns are surface patches, rows are the images they appear in.]
2. Stitching: solve for the affine structure and motion while coping with missing data.
3. Bundle adjustment: refine the model using non-linear least squares, minimizing the total reprojection error, roughly E = sum_i sum_j |s_ij - M_i B_j|^2 over cameras M_i and patches B_j.
4. Euclidean upgrade: use constraints on the camera to turn the affine reconstruction into a Euclidean one. With three or more views it is simple to compute the corresponding Euclidean weak-perspective projection matrices.

Recognition:
- Appearance-based preselection: a measure of contrast; a color histogram of the UV components; SIFT.
- RANSAC-like estimation.
- Geometry-based addition of matches.
- Object detection: number of matches at or above a threshold, OR matched area / total area at or above a threshold, AND distortion at or below a threshold.

Experiment results:
[Figure 20: comparison of recognition rates for different black-and-white versions of the method; see the paper for details.]
[Figure 23: true-positive and false-positive rates for different threshold values, varying (from left to right) the number of matched patches, the ratio of matched to total area, and the distortion.]
[Figure 24: comparison with other methods at a fixed level of false positives.]

Recap. Modeling: find affine-invariant patches in each view; match between selected pairs of views using 3D geometric constraints; merge partial models. Recognition: select
appearance-based potential matches, then RANSAC-like selection/estimation, then addition of geometry-based matches, then object detection. Key elements: affine-invariant patches; a filter for selecting match candidates; 3D geometric constraints; a matching algorithm for discarding geometrically inconsistent candidates.

2. Segmenting, Modeling, and Matching Video Clips Containing Multiple Moving Objects

Locally Affine Projection Constraints. Assumption: globally perspective, locally affine. The corners of each patch obey a local affine model, while the centers obey a global projective model. Each patch is represented by its center C and the vectors H and V spanning its sides; under the camera these project to the image center c and image vectors h and v (Eqs. 5 and 6 of the paper).

Full camera model (projective): x = M X_W with M = K [R t], where K is the 3x3 calibration matrix and X_W a 4x1 homogeneous world point. How many degrees of freedom? 5 + 3 + 3 = 11. (Slide courtesy of Prof. Silvio.)

If we know the projection matrix: given a fixed projection matrix M, putting Eqs. (5) and (6) together yields a system of 6 linear equations in the 9 unknown coordinates of H, V, and C. So: 6 equations, 9 unknowns.

What if we know neither? Bilinear refinement: alternate between the two linear problems.

If we know the rectifying transform: given fixed vectors H, V, and C, Eqs. (5) and (6) also provide a system of 6 linear equations in the 11 unknown entries of M (here a_1 and a_2 denote the first two rows of the camera matrix, and 0_2 and I_2 the 2x2 zero and identity matrices). So: 6 equations, 11 unknowns.
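The "what if we know neither" case is handled by bilinear refinement: hold one set of unknowns fixed, solve the resulting linear least-squares problem, then swap roles. A minimal one-dimensional sketch of that alternation (a toy rank-one analogue invented here for illustration, not the paper's implementation; measurements s[i][j] ~ m[i]*b[j] stand in for the camera and patch equations):

```python
def bilinear_refine(s, n_iter=20):
    """Alternating least squares for s[i][j] ~= m[i] * b[j]."""
    rows, cols = len(s), len(s[0])
    m = [1.0] * rows
    b = [1.0] * cols
    for _ in range(n_iter):
        # hold b fixed: each m[i] has a closed-form least-squares solution
        d = sum(bj * bj for bj in b)
        m = [sum(s[i][j] * b[j] for j in range(cols)) / d for i in range(rows)]
        # hold m fixed: solve each b[j] the same way
        d = sum(mi * mi for mi in m)
        b = [sum(s[i][j] * m[i] for i in range(rows)) / d for j in range(cols)]
    return m, b

# exact rank-one "measurements": the alternation recovers them up to scale
s = [[4.0, 5.0], [8.0, 10.0], [12.0, 15.0]]
m, b = bilinear_refine(s)
residual = max(abs(m[i] * b[j] - s[i][j]) for i in range(3) for j in range(2))
```

On exact rank-one data the alternation converges essentially in one sweep; on noisy data it settles into a least-squares factorization, which is why a bundle-adjustment pass typically follows.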
Dealing with Missing Data. Possible solution: decompose the matrix into dense sub-blocks, factorize each sub-block, and fuse the results. But finding dense maximal sub-blocks of the matrix is NP-complete (equivalent to finding maximal cliques in a graph). Instead, incremental bilinear refinement:
1. Perform factorization on a dense sub-block.
2. Solve for a new 3D point visible by at least 2 known cameras (linear least squares).
3. Solve for a new camera that sees at least 3 known 3D points (linear least squares).

Dealing with Missing Input.
Input: projective or locally affine definitions for the camera and patch equations; the sparse patch-view matrix containing the image measurements; a seed model consisting of camera matrices M_i and 3D patch matrices B_j that cover a subset of the patch-view matrix.
Output: a model covering a maximal subset of the patch-view matrix, given the minimum coverage requirements for patches and cameras.
repeat:
- For each column j of the patch-view matrix not yet covered by a known 3D patch B_j, count the number m_j of image measurements that reside in some row covered by a known camera. Similarly, for each row i not yet covered by a known camera, count the number m_i of image measurements covered by some known patch.
- Add to the model the row or column that has the highest number of covered image measurements:
  if a row i is chosen, solve for M_i by stacking the m_i instances of the camera equation associated with image measurements covered by known patches;
  else solve for B_j by stacking the m_j instances of the patch equation associated with image measurements covered by known cameras.
- Incremental bundle adjustment: propagate the effects of the new data into the model by re-solving for all the known patches and all the known cameras, alternating between cameras and patches several times.
until no column or row remains with sufficient coverage.
(Algorithm 1: Incremental Bilinear Refinement.)

Motion Segmentation. Deals with scenes containing multiple independently moving rigid objects: simultaneously identify the subsets of tracks that move rigidly together and build the 3D models.
Input: a set of tracks T; a threshold t_0 on the minimum number of consecutive frames two overlapping tracks must share; a threshold E on reprojection error (this determines if a track is consistent with a model); a threshold n on the minimum number of tracks in a component.
Output: a set of rigid groups and their associated 3D models.
repeat:
- Find the frame f with the largest number of concurrent tracks in T; a track must appear at least in frames f through f + t_0 to qualify. Call the set of overlapping tracks O.
- Use RANSAC to find the largest subset of tracks in O that are rigidly consistent: for each random pair sampled from O, form a 3D model and then select all other tracks from O with reprojection error below E to form a consensus set. Keep the largest consensus set and call it C.
- repeat:
    form a model from C by using incremental bilinear refinement;
    replace C with all tracks in T whose reprojection error with respect to the model is below E
  until C stops growing.
- If C contains at least n tracks, add C and its model to the output and set T to T minus C.
until another C with |C| >= n cannot be formed.
(Algorithm 2: Motion Segmentation.)

Modeling results: tracks, models, and motion segments for the bear/dog scene and the ground/flag/van scene.
[Table I: comparison of the affine and locally affine projection models; numbers of cameras and patches recovered for each scene component.]

Recognition flow: query
video sequence, then affine patches, then comparison with the database, then the query result.

Recognition.
Input: two sets of patches A and B.
Output: a set C of consistent matches, C a subset of A x B.
Step 1, appearance-based selection of potential matches: initialize the set of matches M by finding patch pairs from A x B with high appearance similarity.
Step 2, robust estimation: apply robust estimation to find a set C (a subset of M) of geometrically consistent matches.
Step 3, geometry-based addition of matches:
repeat
  repeat
    estimate a registering transformation Q using C;
    replace C with all matches in M that are consistent with Q
  until C stops changing;
  re-estimate Q using C;
  add more putative matches to M using Q as a guide (new matches must also satisfy relaxed appearance constraints)
until M stops changing.
(Algorithm 3: Matching; see section IV-A for details.)

Result. [Fig. 11: four correctly matched scenes. Left: original frame of the test scene. Middle: the query model reprojected into the test video. Right: the query model matched to the test model. For ease of visualization, the figure includes several manual markers identifying corresponding cues.]

Taming the Underlying Challenges of Reliable Multihop Routing in Sensor Networks. Alec Woo, Terence Tong, and David Culler (UC Berkeley and Intel Research Berkeley).

A Cross-Layer Perspective on Routing. Underlying connectivity in reality: an effective region, a transitional region of highly variable links, and a clear region; deployment environments and monitoring workloads shape what routing must tolerate.
Roadmap: underlying process (link quality estimation, neighborhood management), cross-layer routing study, cost-based routing.
A good link estimator: accurate; agile yet stable; small memory footprint; simple. Passive estimation avoids extra probe traffic; link asymmetry is a key issue.
Link Estimator Study: neighbors in the transitional region; study of candidate estimators.
[Plots: comparison of candidate link estimators over the transitional region.]

Neighborhood Management Techniques, details: insertion (down-sample as packets pass, adapting which neighbors enter the table); eviction and reinforcement of table entries; node counter / frequency estimation (comparable to a cache-replacement policy; cf. the FREQUENCY algorithm for frequency estimation over data streams, Manku).

Key results and roadmap: link quality estimation and neighborhood management feed a cross-layer routing study and cost-based routing; distributed tree building.

Hop-count metrics: shortest hop; shortest hop with a hard threshold on link quality; no-threshold variants.
Cost metrics: path reliability (the product of link quality along the path); minimum transmissions (MT: the expected total number of transmissions along the path).

Evaluation roadmap and key observations: hop distribution and end-to-end success rate, studied via graph analysis over empirical connectivity data, packet-level simulation, and an empirical testbed. Shortest-hop routing without a threshold tends to pick long, unreliable links; hard thresholds and MT-style cost metrics trade more hops for much better end-to-end reliability. Congestion and stability interact with the choice of metric. [Backup slides.]

Sensor network storage and query. TinyDB: An Acquisitional Query Service for Ad-Hoc Sensor Networks (Sam Madden et al.); TAG introduction.
Sensor nets overview: devices; communication in sensor nets; sample applications. Queries in sensor nets: Tiny Aggregation (TAG), overview, optimizations, and results.
TAG: in-network processing of aggregates. Aggregation is a common data-analysis operation (a reduction, in functional-programming terms) and is communication-reducing, with an operator-dependent benefit, because partial results are combined across nodes during the same epoch. Exploiting the semantics of the aggregate improves efficiency.
Query propagation: the query is disseminated down a routing tree. Each node samples local sensors once per epoch and generates a partial state record (PSR) from its local readings and its children's readings, outputting the PSR during its communication slot; at the end of the epoch, the PSR for the whole network is output at the root. (The paper also covers pipelining and grouping.)
Illustration, aggregation for SELECT COUNT(*): each node adds its children's counts to its own, slot by slot, up the tree.
Aggregation framework: as in extensible databases, any aggregation function is supported that conforms to an initializer i, a merging function f, and an evaluator e over partial state records:
  f(PSR_1, PSR_2) -> PSR_12   (merge; f should be associative and commutative)
  e(PSR) -> aggregate value
Example, AVG: i(v) = <v, 1>; f(<S_1, C_1>, <S_2, C_2>) = <S_1 + S_2, C_1 + C_2>; e(<S, C>) = S / C.
Types of aggregates and taxonomy: SQL supports MIN, MAX, SUM, COUNT, and AVERAGE; any function conforming to the framework can be computed via TAG. TAG's insight: classifying aggregates according to various functional properties yields a general set of optimizations that can automatically be applied. There is an in-network benefit for many operations, e.g. standard deviation,
top/bottom-N, spatial union/intersection, histograms, etc. The taxonomy also considers the compactness of the PSR (compare MIN/MAX, AVG, COUNT, and MEDIAN).

TAG advantages: communication reduction (important for power), and a continuous stream of results that smooths transient faults across epochs.

Simulation environment: evaluated via a coarse-grained, event-based simulator with sensors arranged on a grid; two communication models, lossless (all neighbors hear all messages) and lossy (messages lost with a probability that increases with distance).

Benefit of in-network processing: [plot: total bytes transmitted vs. aggregation function, on a 50x50 grid of nodes.]

Optimization, channel sharing (snooping): the shared channel enables suppression of messages that will not affect the aggregate; this applies to all exemplary, monotonic aggregates.

Optimization, hypothesis testing: a guess from the root can be used for suppression (e.g. MIN < 50). This works for monotonic and exemplary aggregates, and also for summary aggregates if imprecision is allowed. The hypothesis is computed blind, from a statistically informed guess, or from observation over a network subset. [Experiment: hypothesis testing.]

EECS 598-1: Eigenanalysis for Doo-Sabin subdivision. Igor Guskov, March 21, 2002.

1. Doo-Sabin scheme. Doo-Sabin subdivision comes from regular tensor-product quadratic B-splines. It is a dual scheme: the refinement happens via vertex splits (see Figure 1). The weights for the extraordinary case are given as alpha_0 = 1/4 + 5/(4N) and alpha_k = (3 + 2 cos(2 pi k / N)) / (4N) for k = 1, ..., N-1. [Figure 1: the Doo-Sabin subdivision scheme; the regular stencil weights are 9/16, 3/16, 3/16, 1/16.]

2. Eigenanalysis for Doo-Sabin. Consider a dual scheme at an extraordinary face with N sides. The one-ring subdivision matrix S is circulant with elements S_{p,p+k} = alpha_k. That is, the subdivision matrix is

  [ alpha_0      alpha_1   alpha_2  ...  alpha_{N-1} ]
  [ alpha_{N-1}  alpha_0   alpha_1  ...  alpha_{N-2} ]
  [ ...                                              ]
  [ alpha_1      alpha_2   alpha_3  ...  alpha_0     ]

The discrete Fourier transform is defined via multiplication with the matrix F, where F_{pq} = z^{pq} and z = e^{2 pi i / N}. Note that z depends on N and that z^N = 1; it is not the general symbolic z we use in z-transforms and masks. Also note that all the indices will be treated mod N below. The inverse Fourier transform is performed with the matrix F^{-1} = (1/N) F*, where F* is the conjugate matrix with elements F*_{pq} = z^{-pq}. One can check that

  sum_{q=0}^{N-1} F_{pq} F*_{qs} = sum_{q=0}^{N-1} z^{pq} z^{-qs} = sum_{q=0}^{N-1} z^{(p-s)q} = N delta_{p,s}.

The last equality is easy to see graphically. The Fourier transform of the subdivision matrix, F S F^{-1}, is diagonal (expanding its entries with the orthogonality relation above shows that the off-diagonal terms vanish). This operation does not affect the eigenstructure: the transformed matrix is diagonal, with the values on the diagonal being the eigenvalues, given by

  alpha-hat_t = sum_q alpha_q z^{qt}.

So the transformed matrix is diag(alpha-hat_0, alpha-hat_1, ..., alpha-hat_{N-1}). The inverse transform gives us the alphas via

  alpha_q = (1/N) sum_t alpha-hat_t z^{-tq}.

Thus we can enforce any eigenstructure we want. The regular case (N = 4) has spectrum alpha-hat_0 = 1, alpha-hat_1 = alpha-hat_3 = 1/2, alpha-hat_2 = 1/4.

UNIVERSITY OF MICHIGAN. Shape Matching and Object Recognition Using Shape Contexts (Serge Belongie et al., 2002) and Recognizing Objects in Range Data Using Regional Point Descriptors (Andrea Frome et al., 2004). Presented by Wongun Choi and Khuram Shahid.

Outline: shape context in 2D; shape context in 3D and experiments; shape context in 2D, continued.

Motivation: given a model and a target, find correspondences between the two shapes (shape context), estimate a transform (thin-plate splines), and measure similarity.

Related work. Known correspondence: Bookstein (thin-plate splines), Kendall (feature-based). Silhouette images: sets of points, spin images. Brightness-based: gray scale, implicit correspondence.

Shape context: for each point, a histogram over a log-polar space counting the number of other points falling in each bin.

Invariance. Translation invariance: an inherent characteristic. Scale invariance: normalize the radial distance according to the mean distance between
all pairs of points. Rotation invariance: align the x-axis to the tangent vector. Outlier rejection is also supported.

Similarity measure between two descriptors: the chi-square test statistic,
  C(p_i, q_j) = (1/2) sum_k [h_i(k) - h_j(k)]^2 / [h_i(k) + h_j(k)].
Bipartite graph matching: minimize the total cost H(pi) = sum_i C(p_i, q_pi(i)) over permutations pi. The permutation pi can be found using the Hungarian method, e.g. a shortest augmenting path algorithm in O(N^3).

Shape context in 3D. Global versus local features in 3D shape matching. Global features: fast to compute, small database, but the object should be segmented a priori. Local features: allow partial matching and are robust against occlusion and clutter, but the database is huge and matching is slow, so they are not directly appropriate for retrieval.

Challenges in range data: self-occlusion and occlusion by others; clutter; similar shape and size (e.g. cars); limited spatial resolution; noisy scanning. A global descriptor will not work well here.

3D shape context: a natural extension of the 2D shape context to 3D. Bins are evenly spaced along the azimuth and elevation and logarithmically spaced along the radial dimension, with weighted counting. The north pole is aligned to the surface normal; since an inherent ambiguity remains in the azimuth direction, L rotated descriptors are stored for every point in the reference model.

Similarity measure: L2 distance for 3D shape contexts. Representative descriptors (RDs): sample K RDs randomly; the resulting score, the sum over the K representative descriptors of the minimum distance to any descriptor of the other model, is a lower bound on an ideal cost measure. A nearest-neighbor method was used with this distance measure.

Experiment: scenes with 5 cm noise, scenes with 10 cm noise, and cluttered scenes.

Details of the experiment. Reference models: 4 different views were generated for each object, with about 373 points sampled per point cloud (1494 per object). Queries: generated from a slightly different viewpoint; the same sampling method was used but further reduced; noise was added along the line of sight; normals were computed using a stabilization algorithm.

Three different descriptors were compared: the 3D shape context (SC), the spin image (SI), and the harmonic shape context (HSC). [Table 1: parameters used in the experiments for shape contexts, harmonic shape contexts, and spin images; distances are in meters. Listed are the maximum and minimum radius, the radial/elevation/azimuth divisions, and the resulting dimensionality (e.g. 225 for SI).]

Results, 5 cm noise: [plot: mean recognition rate vs. number of representative descriptors; spin images are weaker with degraded estimation of the normal direction.] Results, 10 cm noise: [plot: harmonic shape contexts and spin images degrade further with poor normal estimation.]

Cluttered scenes: it was assumed that the bounding box was given; RDs were sampled for each run; only the 40 smallest distances were counted, for outlier rejection; geometric constraints were not considered. [Plot: recognition rate vs. rank depth K.] If the correct model was within the K best matches (rank depth K), it was considered a correct recognition.

Conclusion: the shape context is more robust against noise, outliers, and clutter than the other descriptors. The time required for matching is very large, but an efficient NN algorithm such as LSH can be applied to reduce the time cost.

We have more time, so let's go back to the 2D shape context.

Transformation: once correspondence is known using shape contexts, any suitable family of transformations can be applied (e.g. an affine model); this paper suggests the thin-plate spline (TPS). [Bookstein, PAMI 1989: the TPS warp interpolates the correspondences while minimizing bending energy.] [TPS examples.]

Recognition. Shape distance: a combination of the shape context distance, an image appearance distance, and the bending energy; various appearance information
was used according to the application. The combined distance D(P, Q) weights the shape context term, the appearance term, and the bending energy; classification then uses a nearest-neighbor method.

A Knowledge Plane for the Internet. EECS 598 discussion.
Preview: the Knowledge Plane (KP), KP attributes, and why a cognitive system.
What is the KP good for?
- Fault diagnosis and mitigation: diagnosis starting from the end node to the KP, which returns a response, informing the user whether the problem can be fixed or not.
- Automatic configuration: the KP may encounter conflicting assertions made by different parties.
- Support for overlay networks: an overlay network uses application-level probing to evaluate path performance; the KP is able to aggregate application- and network-derived knowledge about network performance, so the KP can offer applications better information about the network.
- Knowledge-enhanced intrusion detection: next-generation IDS may require multiple observations in the network; the KP provides a basis of implementation for data gathering and correlation.
KP architecture requirements: the KP supports a distributed, compositional perspective; it must collect, filter, and route observations and conclusions from different parts of the network, and data and knowledge integration is a central function of the KP. The KP system should develop, utilize, and reason about information at whatever scope is appropriate for the problem.
Core foundation: observations (describe current conditions), assertions (capture intentions and constraints on network operations), and explanations (map observations and assertions to conclusions). Sensors and actuators: sensors are entities that produce observations; actuators are entities that change behavior.
Requirements for diagnosis. Input: end-user fix-it requests; sensor-detected problem signatures. Knowledge: desired behavior and success stories (where does this come from, and how is it represented?). Action: active sensing across multiple levels and regions; interventions. Output: detected policy conflicts; detected configuration conflicts; detected remote configuration errors.
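Before moving on: the bipartite matching step of the shape-context pipeline above (the chi-square histogram cost minimized over permutations) can be sketched as follows. This is a brute-force stand-in for the Hungarian method, feasible only for tiny point sets; the 3-bin histograms are hypothetical, not taken from the paper:

```python
from itertools import permutations

def chi2_cost(h1, h2):
    # 0.5 * sum_k (h1[k]-h2[k])^2 / (h1[k]+h2[k]), skipping empty bins
    c = 0.0
    for a, b in zip(h1, h2):
        if a + b > 0:
            c += (a - b) ** 2 / (a + b)
    return 0.5 * c

def best_assignment(desc_a, desc_b):
    # exhaustive minimization over permutations: a stand-in for the
    # Hungarian method, which solves the same problem in polynomial time
    n = len(desc_a)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(chi2_cost(desc_a[i], desc_b[perm[i]]) for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_perm, best_cost

# hypothetical 3-bin log-polar histograms; B is a shuffled copy of A
A = [[5, 0, 0], [0, 5, 0], [0, 0, 5]]
B = [[0, 5, 0], [0, 0, 5], [5, 0, 0]]
perm, cost = best_assignment(A, B)
```

Since B is just a permuted copy of A, the optimal assignment has zero chi-square cost; with noisy descriptors the minimum-cost permutation plays the role of the correspondence fed to the TPS estimation.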
Deteetegeentiguratieneentliets Detected remete eentiguratiein errers pawn EECS 598 1 Lecture 10 NURBS lgor G uskov February 127 2002 1 Homogeneous coordinates We follow the rst three pages of the rst chapter of HZOO in our exposition We will introduce the concepts of homogeneous coordinates in 2D but that can be generalized to 3D easily Homogeneous line representation A line in the plane is given by equa tion am 1 by c 0 So a line can be represented by the vector 11 ct For any non zero k the vector la171d7 kc represents the same line however All the vectors related by such scaling are equivalent 7 equivalence classes are called homogeneous vectors Then the projective space P 1 is formed as the set of equivalence classes of vectors in R 07 Ot Homogeneous point representation We can introduce a similar rep resentation for points by adding 1 as the last component of the vector so that the point zyt in R2 is represented as X zy1t and then the line equation can be written as xtl 0 where l 11 ct is the line vec tor An arbitrary homogeneous point 17273 then corresponds to the planar point mlm3 zgzg You can think about an equivalence class of a homogeneous point as the straight line passing through that point and the origin Note that the homogeneous point z y 0 corresponds to a point at in nity in a certain direction7 again think about the corresponding line 7 its equivalence class Line intersection Given two lines 1 abct and l a7b7Ct7 their intersection point is l x 1 If the two lines are parallel the result is a point at in nity7 with its third component equal to zero Line joining points Given two points X and X we have the line passing through them as X x X This even works intuitevely when one of the points is at in nity Projection translation in 3D Given a homogeneous point mg727w we get the corresponding inhomogeneous point via mw7 yw7 Given a homogeneous point 734271 it is easy to translate it just multiply it by a matrix 100th 7010252 T 001tz 0001j Generally7 any af 
ne transformation Ap b on points can be expressed A b p 0 1 1 2 NURBS Curves From a projective perspective the NURBS can be de ned as following suppose that we are given sequences of knots U uk7 control points P p 7 and weights W wk7 where pk mk7yk72kt Then we rst de ne a sequence of homogeneous control points P 15k de ned as 15k zkwhykwhzkwhwk This new homogeneous control polygon corresponds to some curve in 4D The projection of that curve back onto the original 3D space is the NURBS curve Meaning of weights Quadratic Bezier Arc Three control points 190191192 and weights we 1101102 1 As wl goes to in nity7 curve approaches p1 As wl goes to zero7 curve approaches the straight line seg ment from p0 to p2 so that p1 stops to matter 3 Drawing circles via projection Let E 10mm wy Then homogeneous point corresponding to y is 577 w The circle 2 y2 1 then becomes a cone 52 n2 wz EECS 598 1 Lecture 3 Mesh Class7 manifold preservation7 and out of oore simpli cation lgor G uskov January 177 2002 1 Orientable manifold edgebased mesh class 11 Directed edge class DEdgeT public standard stuff DEdgeT the default constructor produces a null directed edge DEdgeTconst DEdgeTamp de DEdgeTamp operatorconst DEdgeTamp de DEdgeTEdgeT e int dir EdgeT Edge const int Dir const check if null or not operator bool const bool operator const bool operatorconst DEdgeTamp de const navigation operators DEdgeT Flip const equivalent to SymFnextthis DEdgeT Enext const Enext of a boundary edge is a null dedge DEdgeT Enext2 const Enext applied twice bool Boundary const are we on the outside of the boundary bool AdjacentToBoundary const return Boundary II FlipBoundary origin and destination vertices of the directed edge VertexT Org const VertexT Dest const these set values in the underlying edge structure void SetEnextconst DEdgeTamp de const void SetOrgVertexT v const 12 Vertices class VertexT public VertexT DEdgeT DEdge const DEdgeTamp DEdge returns a dedge whose origin is this vertex and if this 
vertex is on the boundary, then the returned dedge is a boundary edge as well
  // (which may be important when you go around a vertex collecting its one-ring)
  bool Boundary() const;
  int Valence() const;               // number of edges containing this vertex
  // these are all the data associated with the vertex
  const XVecf& Pos() const;      XVecf& Pos();      // position
  const XVecf& Normal() const;   XVecf& Normal();   // normal vector
  const XVecf& Color() const;    XVecf& Color();    // color (rgb)
};

1.3 Edges

class EdgeT {
public:
  EdgeT();
  // dir = 0 or 1; edges store two Enext dedges, so that
  // DEdgeT(e, dir).Enext() == e->DEdge(dir)
  DEdgeT DEdge(int dir) const;
  DEdgeT& DEdge(int dir);
  // edges store two vertices, so that DEdgeT(e, dir).Org() == e->Vertex(dir)
  VertexT* Vertex(int dir) const;
  VertexT*& Vertex(int dir);
};

1.4 Mesh itself

class MeshT {
public:
  typedef std::vector<EdgeT*>   EdgeCt;
  typedef std::vector<VertexT*> VertexCt;
public:
  MeshT();
  ~MeshT();
  const EdgeCt&   Edges() const;
  const VertexCt& Verts() const;
  void AddVertex(VertexT* v);
  void AddEdge(EdgeT* e);
  // a simple procedure that properly assigns all the dedges for vertices,
  // even on the boundary
  void AssignVertEdges();
  // helper functions
  static void SetNeighbors(DEdgeT& de, DEdgeT& den);
  static void AssignVertexDEdge(const DEdgeT& des);
};

1.5 Topology preservation for surfaces with boundaries

Create a virtual vertex w. Connect all the boundary loops to that vertex with cones. Then the topology preservation condition for the collapse of edge ab is Lk(a) ∩ Lk(b) = Lk(ab). Note that Lk(ab) for any non-virtual edge consists of two vertices. If ab is an internal edge, then those two vertices are non-virtual; if ab is on the boundary, then one of the vertices is virtual and the other one non-virtual. We can simplify calculations by changing the mesh class, representing the virtual vertex as a zero pointer; the boundary convention will change then.

2. Progressive meshes

Efficient face-based storage of meshes obtained with half-edge collapses. A pre-computed order of removal lets us go between levels of detail very fast; good for games.
- Index vertices in order of removal, backwards: the last vertex gets removed first.
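The index-buffer substitution that progressive meshes rely on can be sketched in a few lines. This is a minimal illustration, not the lecture's code; the collapse table matches the worked example in these notes (9 -> 1, 10 -> 2, 11 -> 9), while the face list is a hypothetical slice of an index buffer:

```python
def resolve(collapse, idx):
    # follow chained collapses until a surviving vertex index is reached
    while idx in collapse:
        idx = collapse[idx]
    return idx

def level_faces(faces, collapse):
    # substitute indices through the collapse table and drop degenerate faces
    out = []
    for f in faces:
        g = tuple(resolve(collapse, i) for i in f)
        if len(set(g)) == 3:
            out.append(g)
    return out

# collapse table from the worked example: 9 -> 1, 10 -> 2, 11 -> 9 (chains to 1)
collapse = {9: 1, 10: 2, 11: 9}
faces = [(0, 8, 9), (10, 5, 4), (0, 9, 1)]   # hypothetical index-buffer slice
coarse = level_faces(faces, collapse)
```

The face (0, 9, 1) maps to (0, 1, 1) and is discarded as degenerate; in the actual scheme, faces are sorted so that generation can stop at the first degenerate face instead of testing all of them.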
- Sort faces in the inverse order of removal: the first faces to get removed come last.
- A collapse array stores the vertex index transformations during collapses.

Figure 1 shows a triangular mesh example. Vertex coordinates are stored in a vertex buffer (p_0, p_1, ..., p_11). The last three vertices will be removed; the vertex buffer does not have to change at all. The collapse table is 9 -> 1, 10 -> 2, 11 -> 9. The triangles are stored in an index buffer, from which the index buffer for a current level is easily generated by substitutions from the collapse table:

0 8 9
10 5 4
% the faces above exist on all the levels,
% including the coarsest one
% the following two faces appear on level I
0 9 1
1 9 10
% the following two faces appear on level II
1 10 2
2 10 1
% the following two faces appear on level III
9 7 11
9 11 10

Below is an example of substitutions for level III. We first change the substitution table to obtain 9 -> 1, 10 -> 2, 11 -> 1. Then we perform index table generation for level III:

0 8 9   ->  0 8 1
10 5 4  ->  2 5 4
% the following two faces appear on level I
0 9 1   ->  0 1 1   % degenerate; everything below is too
1 9 10
...

We can stop whenever the first degenerate face is hit.

3. Other simplification techniques

Preventing mesh inversion: compute the normals of all the faces before and after the edge contraction, and if the normals flip by too much, prohibit the contraction.

Complexity: every edge contraction requires on average O(log N_E) operations to maintain the heap. There are N_E edges to remove, hence we get an O(N_E log N_E) simplification algorithm for local error-estimation methods.

3.1 Vertex clustering (Rossignac and Borrel, 1993). Introduce a grid in 3D; for every cell, collapse all the vertices into one, replacing all the vertices in a cell by their average. Discard all the degenerate triangles. Does not care about manifoldness.

3.2 Lindstrom 2000 (OOCS). Problem: the size of real data. A priority queue takes about 160 n bytes of RAM. For meshes with
billions of points, that is a problem. Lindstrom assumes that the output model fits in RAM; the input can be arbitrarily big. Represent the input as a triangle soup:

Tin = { (x11, y11, z11), (x12, y12, z12), (x13, y13, z13);
        (x21, y21, z21), (x22, y22, z22), (x23, y23, z23); ... }

Compute the bounding box. Make a grid Nx by Ny by Nz.

```
map<(ix, iy, iz) -> (Q, p, index)> qp
for every triangle t in Tin
    compute the quadric Qt
    for every vertex (xtk, ytk, ztk) in t, k = 1, 2, 3
        look up (xtk, ytk, ztk) -> (ix, iy, iz) -> qp[ix, iy, iz]
        if needed, make a new entry with a new index jk
        qp[ix, iy, iz].Q += Qt    // update the cell quadric
    if the index triple (j1, j2, j3) is nondegenerate
        then insert (j1, j2, j3) into Tout
for every active cell in qp
    find the optimal position based on the stored quadric
    store it in the vertex coordinates array at the appropriate index
```

Visual Learning and Recognition of 3-D Objects from Appearance
H. Murase and S. Nayar, 1995
Presented by Michael Allison & Joseph Harman, EECS 598 Winter 2009

Objective
- Recognize objects in an image and compute their poses in the 3D scene using appearance.

Motivation
- Visual positioning and tracking of robot manipulators.
- Visual inspections.
(Image: http://www.generation5.org/content/2005/images/kuka.png)

Problem
- The objective can be divided into two stages:
  - LEARNING: automatically learn 3D objects from their appearance in 2D images.
  - RECOGNITION: use this information to identify objects and their poses in new images.

Problem (cont.)
- The appearance of an object in an image depends on its shape, reflectance properties, pose, and illumination.
- For rigid objects, shape and reflectance are constant, while pose and illumination vary between scenes.
- Thus the learning process consists of developing a compact representation of the 3D objects, parameterized by pose and illumination.

Previous Work
- Earlier methods for object and pose detection used shape models instead of appearance (Besl and Jain, 1985; Chin and Dyer, 1986).
- Required a shape model for each object: CAD if available, or create the model from basic shapes and input the information into the system.
- Can be very time consuming, hence the
need for automated learning.

Automated Learning
- In 1995, machine vision had little or no learning capability:
  - Poggio and Edelman (1990): three-layered network mapping input space to output space.
  - Edelman and Weinshall (1991): two-layered network representing objects from multiple views using unsupervised Hebbian relaxation.
  - Ullman and Basri (1991): three views of an object can be used to represent boundaries.
  - Fan et al. (1987): descriptions of 3D objects using range images.
  - Ikeuchi and Suehiro (1992): used range images of a human operator to train a robotic manipulator.
  - Turk and Pentland (1991): used principal component analysis (PCA) to learn and identify faces.

Principal Component Analysis (PCA)
- Reduces the dimensionality of highly correlated data, like images.
- Determines a subspace with dimensions that capture the largest variability in the data.
(Figure, repeated over several build-up slides: 2D data points xi = (x1i, x2i)^T projected onto the principal directions e1 and e2.)

Solution Outline
- LEARNING STAGE:
  - Obtain a set of images of an object by varying its pose and illumination in small increments.
  - Normalize the scale and brightness.
  - Construct an eigenspace by computing the most prominent eigenvectors.
  - Project all images of the set into the eigenspace. Euclidean distance in eigenspace is a measure of
the similarity between images Use cubicspline interpolation to compute a manifold from the discrete points gt This is done twice object eigenspaces universal eigenspace all objects Solution Outline b Manifold in an object eigenspace first 3D shown 415 435 nquot a 415 13 H E Solution Outline b RECOGNITION STAGE Segment the object of interest Normalize the segmented image as before Project onto the universal eigenspace Closest manifold to projected point identi es the object Project onto recognized object eigenspace Closest point on manifold can be used to estimate pose LEARNING DETAILS LEARNINGCollectingImages b Illumination direction is varied using robot manipulator b Object is placed in a stable configuration on a motorized turntable and rotated to vary pose Fig 1 Setup used to automatically acquire image sets The object is placed on a motorized turntable LEARNING Normalization gt Images are segmented using threshold technique Background is set to zero brightness gt They are then resampled so that their larger dimensions fit a preselected size l28 x I28 pixels Scale invariance and vector calculations t Each NxN image is then stored as a lXNZ vector i 31e2eNT LEARNING Normalization b Brightness is normalized by ensuring the total energy contained in the image is unity x Limits effect of variations in intensity of illumination or aperture size where and x is the normalized image vector LEARNING Normalization rgt Example of normalized images of I object mmmmumuum E gm 9 I W 633quot LEARNING Eigenspaces b Compute the eigenvectors of the covarianace matrix Let xfj represent the image from object p with pose i and illuminance j xR 1cxg c NxM with M RLP gt Use spatial temporal adaptive STA method to compute as large matrix LEARNING Eigenspaces b How many eigenvectors to use Set some thresholdT lt Generally 20 dimensions l A1 a T Ei l J b Resulting eigenvectors are aili12n1k kSN gt Do the same with vectors for single objects LEARNING Eigenspaces e4 14145 e5 25 lon e6 
1594 Fig 3 Eigenvectors corresponding to the six largest eigen values computed for the image set shown in Figure 2 LEARNING Project Images onto Space b Each learning sample xnlP in the image set p is projected onto the eigenspace as em c I Obtain a set of discrete points in the eigenspace for each object gt Since consecutive images are highly correlated they are close to one another in the eigenspace Thus they represent a smoothly varying manifold gl 61926 where 9i are the continuous pose and illumination parameters LEARNING Compute Manifold b In our application pose and illumination variations are limited to one degree of freedom thus glpkgla 62 b Manifold is created by connecting the discrete points with a cubicspline It is also called a parametric eigenspace representation b This process is repeated in the object eigenspaces to find object manifolds f3 egpiregmwegap hgi c1vl gt E lehaz LEARNING Compute Manifold Fig 4 Parametric eigenspace representation computed using the image set shown in Figure 392 Only the three most prominent dimensions of the eigenspace are displayed here The dots correspond to projections of learning samples Since illumination is constant in this case appearance is given by a curve with a single parameter rotation rather than a surface RECOGNITION DETAILS RECOGNITION Normalize 8L Recognize b Segment object from the image and normalize Assume it is not occluded gt Project input image vector y onto universal eigenspace z label 39iekTy c gt Find the minimum distance between the projected point z and each objects manifold glPGl92 a my 62w b If the shortest of these distances is below some threshold then the image is said to belong to that object dIP lt dlk for all p k AND dIP lt threshold 9 image is ofobject p RECOGNITION Project onto Space gt Project image vector y onto identified object eigenspace 2 z e pje2pgt me cm1 y 5cm Fm j n An i pun imam h The inu gz in man 1 a point in time1 nigpnnpnne The imminn nf1h paint an Ihl pammmit cum 
kwlminm lic panic at illc calmm m Ihr 7112136 RECOGNITION Project onto Space gt Find the pose and illuminance parameters 6192 that minimize the distance between 2 and the object manifold fp d2 gmguliz W1 62 l 2 gt The minimizing pose parameter 91 is the estimate of the pose of the object RECOGNITION Distance b How do you find the closest point on manifold Use a large number of uniformly spaced samples from the manifolds Okn with k dimensions n of samples Use an efficient technique for multidimensional binary search Nene and Nayar I994 Ok log2 n I Use threelayered radial basis function NN to map input points to manifold parameters Mukherjee and Nayar I990 Complexity depends on network parameters but close Doesn t require manifolds but loss in accuracy EXPERIMENTS b Two separate object sets of 4 objects each Set I Uniform reflectance shapes that are similar in certain poses 2 Set 2 Complex appearance characteristics strong specular re ections and complex interreflections 3 Object Set 1 b Object Set 2 Experiment 1 gt Used previous turntable and robot manipulator light source setup gt Captured 450 learning images for each object 5 illuminance directions x 90 poses and 270 testing images gt Testing poses lay in between the learning poses Table 1 Image sets for each of the two object gets shown in Figure 6 The 1080 test images used for recognition are different from the 1800 images used for learning Learning Samples Tbst Samples for Recognition 4 Objects 4 Objects 5 Light Source Directions 3 Light Source Directions 9 Poses 9O Pesos 1800 Images 1080 Imagas Experiment 1 b Example object manifolds for IO dimensional object eigenspaces 91 pose and 62 illuminance direction 3 Object Set 1 b Object Set 2 Results 1 b 60 msec recognition time b Recognition rate results for Set I 10 39 H 100 39 i m 3 e g 539 g 90 5 90 a a 2 s e a a I 50 o an a 3quot a 8 as m i I m 7 i 5 2 s a a m 1 0 Dimensions of Parametric Eigcnspaoe a l39 39 i 40 60 Poses used for Learning h l 80 l 100 Results 
1 b Pose accuracy results for Set I using 8D eigenspaces Avg Abs Pose Error 90 poses 05 I8 poses 0 10m 600 500 a 800 If 2 00 90 Poses forLeammg a 400 its Posesfor Ltarnmg 3 S E E 300 F 400 53 g 200 200 ton D r radSiywsrzdi 1 z 3 4 5 5 7 59 x 5 VE S ASz ID 1 2 3 4 S E 7 3 9 at P09 Error degrees 09 Error degreesi c d gt Similar results for Set 2 Avg Abs Pose Error 90 poses 05 I8 poses 2 Extension of Experiment 1 b Autosegmented images of car using motion cues l I 3 l4 in Learning sample with closest pose Fig 9 Appearancenbased recognition and pose estimation applied to the image sequence of a moving car i b Automated recognition system b Trained for 20 objects in 72 poses no illumination variation Experiment 2 b Computed 20 dimension universal eigenspace I2 hours gt Did not compute object eigenspaces b Introduce an object it performs the full recognition process estimating pose with universal eigenspace in sec Experiment and Results 2 b 320 test images of the 20 objects in randomly selected but known poses b OO recognition rate gt Abs pose error histogram Avg 59 Stdev 53 Test images i 00 X0 60 0 Pose error deg 0 4 r3 4 I0 12 I4 H 1x 39 39c Pose estimation accuracy Conclusions b Reasonable method to facilitate using large databases for realtime appearance based recognition gt Pros It does not require significant low level processing No geometric features Do not have to model the complex shape and reflections gt Cons Must recompute entire eigenspace to add objects Occlusions are fail case and segmentation can be difficult QUESTIONS Similarity Euclidean Distance b gm and gn are points in eigenspace N xm Z gmiei 6 i1 gt xm and xn are unit vectors Km quot xnuz Km quot xnTXm Kn 2 2x19 gt Maximize sum of squared distance correlation n 4 Km xn 2 3 k h g gmeei quot E gmei i1 21 Similarity Euclidean Distance k k 2 k 2 2911143 quot ngei 274ng gnsei i4 i1 i1 1 Is Z 283 i1 ji x 9m 9n 2 E Hgm gull2 19 b Eigenvectors are orthonormal th KnHZ 3 gm gniiz Computing Eigenvectors b 
Conjugate Gradient Find maximizing e of Raleigh quotient it is largest eigenvector ETQe Flie ej e Remove this eigenvector from covariance matrix 01 Q Q3 Qs l AstJes I lilll b SingularValue Decomposition If M images lt N pixels A izxi quotj XTX gt I 81 Xi EXE Lecture 3 Electromagnetic Waves ll 9 3 Electromagnetic Waves Last time we discussed the following 1 The quot of an EM wave through a 39 media We discussed how the wave interacts with the media and how all of the details of interatomic and atomEM wave interactions can be described by a constitutive relation Examples we discussed last time included the change ofthe speed of the propagation reflection and refraction gain and loss and tunneling through a thin slab 2 Macroscopic Maxwell s equations We derived the macroscopic Maxwell s equations These equations can describe all the EM interaction with macroscopic media with a linear dimension gt 10 nm The constitutive relation can be measured from experiments It can also be calculated by taking into account all the microscopic interactions For example to the first order the dielectric constant of a homogeneous medium can be determined by the polarization vector Pwhich is the total dipole moment in the medium We will show you how this can be done later in the class when we discuss microscopic interaction between light and matters 0PE 31 31 Generation of EM waves observer SO U rce 311 Antenna basics In the discussions so far we have only studied the behavior of a given EM wave eg a plane wave and its interaction with macroscopic media but we have not discussed how the EM wave is generated We will see in this section how atimevariant current source can generate an EM wave We will solve the wave equation with the inclusion ofthe current source as follows VXVXEEkZEiw j 32 Once the electric field is obtained we can calculate the magnetic field by EECS 598002 Nanophotonics and Nanoscale Fabrication Winter 2006 POKu Lecture 3 Electromagnetic Waves ll 10 H 33 may The solution 
to 32 can be written as follows for any observer at 7 outside the source distribution region 57 I39m4 j 5a J739df39 34 where 5 is a dyadic2 Green s function which satisfies VxVx57k25I5fef39 35 I is an identity matrix 1 JE 3322 W 3 You can easily verify by inserting 34 into the LHS of 32 and using 35 you get the RHS of 32 Note also that because 7 is outside the source we can interchange Vgtlt Vgtlt and the volume integral Please note all the differential operators below including 35 act on 7 unless otherwise specified To solve 5 first we notice that VXVXE 7V2ClVVC If we take the divergence of 35 and use the vector identity VVx 21 0 we get 7k2V5 v50 7 739 36 Substituting 36 back into 35 we have V2k2c71 2V577739 37 We can verify that 5 now can be written in terms of a scalar function 5T 2ng 38 where the scalar function g satisfy v2 18 g7739 757 7 739 39 To solve g in 39 we notice that g739 should depend only on rer but not the absolute location of 739 Therefore we can arbitrarily set 739 at origin After the choice of 739 we can easily see the solution for 9 must be spherical symmetric around the origin 39 becomes 2 r2 d g 2r dg k2r2gr 750 310 dr dr The solutionto 310 is ex gr C 311 k r 2 Dyadic 5 is a direct product of two vectors For example if 5 AB its index notation becomes GU AxBJ ln matrix notation the direct product of two vectors can be represented by a 3x3 matrix EECS 598002 Nanophotonics and Nanoscale Fabrication Winter 2006 POKu Lecture 3 Electromagnetic Waves ll 11 To determine the constant C we integrate 39 over a volume including the origin and let the volume go to zero Ingf 71 V A 2 dg IVgndS47rr 71 312 S d7 case 2 C L 47r Combining 34 38 311 and 312 we have A A I exklfrf l A 39 Er 1am I 7jWLEWJU df 313 The magnetic field is from 33 and 313 H VXE m 314 exkl777 l v J quotdquot X I 47rlfif39l r V Source Before we proceed we have to remember that all the quantities in 313 and 314 are in the frequency domain We have dropped their 6 quot dependence lf 
J39equotm is a static source a 0 and k 0 312 General properties of near field and far field Depending on the distance between the observer and the source we can study two extreme cases the near field klfef39lltlt1 and the far field klfef39lgtgt1 In the far field we have lfif39le ref739 The electric field becomes a W W A Er1aul k 2jmmjmmr e d 315 The integral in 315 results in a function that depends only on e and q We can define a vector current moment as f6p j J739e quotdf39 316 In the far field region we only keep terms on the order of lkr and neglect all the higher order terms Using v 3 li 317 6r r66 rsm azp 315 becomes A 6110 A 6110 A Er1au1irrmf1awm6f5zpfw 318 EECS 598002 Nanophotonics and Nanoscale Fabrication Winter 2006 POKu Lecture 3 Electromagnetic Waves ll 12 This is an outgoing wave with a spherical wave front The electric field is perpendicular to the propagation direction At large distance the wave can be approximately by a plane wave In the near field we have e k W 1z39k l 77 739 l1 and the observer is at a distance many times smaller than the wavelength from the source 313 becomes 4 VV Jquot Erzagtul k 2j j r 319 This is a quasistatic field because of the absence of the oscillating exponential term Note the field is not truly static since there is an implicit time harmonic factor 6 quot Similarly the magnetic field is J39 HfVgtlt j W Source d 320 Because in near field region we have k l 77739 lltlt1 the contribution from the 2quotd term in the parenthesis of 319 dominates and the magnetic field can usually be neglected because magnetic field gets differentiation once while the electric field gets differentiation twice We will see an example in the following when we discuss the dipole radiator But we notice that if magnetic field can be neglected the solution of fields satisfies the electrostatic equation or the Poisson equation vE 0 321 Or in terms of the potential v2 0 322 313 Dipole radiation The most fundamental antenna is a Hertzian dipole which 
consists of a currentcarrying wire with an infinitesimal length J739 2116739 323 Substituting 323 into 313 we get A W e k H E07 I39m1 7 Lgm l a f39 iwyj2u 324 am I 223 i 39u k2 62 47W Now we make the coordinate transformation to the spherical coordinate by V f3 1 6r r66 1k 110 3 e ikiljcos e 325 62 47W r 47W 6 A 1 3 rsin azp 2 fcos i sin EECS 598002 Nanophotonics and Nanoscale Fabrication Winter 2006 POKu Lecture 3 Electromagnetic Waves ll 13 324 becomes 57 iwlull k 2 2 e moose i i 6st 1L i 326 47W kr kr kr kr 314 Near and far fields for dipole radiators In the far field 326 reduces to 1k 57 460111 sina 327 e 47W The wavefront is a spherical outgoing wave and radiation pattern consists of two side lobes with no electric field along the z axis The polarization ofthe electric field is perpendicularto the direction of the propagation The magnetic field can be calculated by 33 to be 10 H0 awe aging 328 47W lnthe nearfield 326 reduces to EmiMamiehumeral 13911 1 3 A 4 r2c056651n6 7mg r 329 The field fades away quickly with f3 in contrast to far field dependence of W Note the field is quasistatic and has the f component 315 Radiation from a Moving Charge The dipole radiator is one special type of radiation sources Since the dipole source usually consists of lots of oscillating or moving charges It is interesting to study the radiation from a single moving charge It has applications for example in particle detectors using Cherenkov radiation and synchrotron radiation The charge density of a moving charge with a trajectory r0 1 is x3021 1157 701 330 The current density is therefore 7031 q dad 57397 70 39z 331 To solve the electric field with 313 we need to convert 7011 to the frequency domain To do that we use the Fourier transform as follows d2 390 dt Idtq exltwimzgtgt dt mam Idf39jdtq 57397 70 39ze ltm quotgt 332 EECS 598002 Nanophotonics and Nanoscale Fabrication Winter 2006 POKu EECS 598 Special topics in Computer Vision Lecture 2 introduction to multiview geometry 
Comero models Epipolor geometry SFM Correspondence problem Applications If needed please consult slides and lecture notes from EECS 442 httpwwweecsumichedusilvioteochingeec3442html Image formation object film CCD re rina Exposed poin r Le r s design a camera Idea put a piece of film in front of an obiec r Do we get a reasonable image Pinhole camera object barrier film 39 Add a barrier to block off most of the rays This reduces blurring The opening known as the aperture Pinhole camera pinhole 39 Virlu 11 image f focal length c center of the camera Pinhole camera 139 X39zfE r Z P y 9132 z y y392fl z Derived using similar triangles Pinhole camera image plane quot Virtual image Common to draw image plane in fronf of the focal point Moving the image plane y f merely scales the image Pinhole camera Is the size of the aperture important object barrier film Shrinking aperture size Rays are mixed up LU Z OPTIC 011mm quot35 mm Why the aperture cannot be too small Less light passes through Adding lenses Dittraction effect Cameras amp Lenses object lens film focal point 39 A lens focuses light onto the film Rays passing through the center are not deviated All parallel rays converge to one point on a plane located at the focal lengfh f X X9 yaz gt 9 f z z E SR3 gtSR2 Is this a linear transformation No division by z is nonlinear How to make if lineara Homogeneous coordinates a a x7ygty mgZ y 1 z 1 homogeneous image homogeneous scene coordinates coordinates Converting from homogeneous coordinates 33 y gt1339w7yw SNQR 11 J gt wwayIUvZw Perspective Proiec rion Transformation X39 MX C 0 H7 0 H O O O O O O H N Klt gtlt H 914 xyzgtltf3flgt Z Z E SR3 gtSR2 Camera Matrix U f P k C39 0 10 1 Camera matrix K X39 MX X a s cx0 K 0X X3920 cyioy 991 0 j K has 5 degrees of freedom World reference system 3D Rotation of Points Rotation around the coordinate axes counterclocIONise 1 O O Rxa O cosa sina P 0 sin a cos a Y39 y cos O sin P Ry O 1 O sin O cos f N cosy sin7 O N V X X RZ siny cosy O O O 
1 World reference system X R TXW Inierncll parameiers quot External parameters Full Camera Model proiec rive gt3gtlt1 M Xw K3gtlt3 T3gtlt4 Xw4gtlt1 K0 3 C 0 0 How many degrees of freedom 5 3 3 11 Full Camera Model proiec rive I X3gtlt1 M XW K3gtlt3 R T3gtlt4 Xw4x1 X m X M is defined up to scale 1 W 2 W n u l M by c scalar 3X m xyzw m m3XW won 1 change the Image W Calibration Problem Calibration rig jw 1C 0P Pn with known positions in OWiWiWkW 0p pn known positions in the image Calibration Problem Calibration rig jw 0P1 Pn with known positions in OWiWiWkW 0p pn known positions in the image Calibration Problem Calibration rig jw 0P1 Pn with known positions in OWiWiWkW 0p pn known positions in the image Goal compute intrinsic and extrinsic parameters How many correspondences do I need 6 General Calibration Problem X t is nonlinear measurement Parameter Newton Method LevenbergMarquardt Algorithm Iterative starts from initial solution May be slow if initial solution far from real solution Estimated solution may be function of the initial solution Newton requires the computation of J H LevenbergMarquardt doesn39t require the computation of H Radial Distortion Caused by imperfect lenses Deviations are most noticeable for rays that pass through the edge of the lens No distortion Pin cushion Barrel 5 IUD 150 r EDD 25E quottni 739 2mm ene Radial Distortion Click an the tour extreme eernere efthe rectangular pattern fame see Calibration Procedure Camera CaI39bra bn Toobox for Maiab J Bouguef 19982000 Calibration images Given that the camera is calibrated can we recover the scene structure from a single View 0 W Calibration rig Camera K Intrinsic ambiguity of the mapping from 3D to image 2D Intrinsic ambiguity of the mapping from 3D to image 2D Cmnesysmes mer Two eyes help This is called triangulation Triangulation 0 Find X that minimizes d2x19P1X d2x29P2X Epipolar geometry Epipolar Plane 0 Epipoles e1 e2 Baseline intersections of baseline with image planes projections of the 
other camera center.
- Epipolar lines

Example: converging image planes.

Epipolar Constraint
- x1^T F x2 = 0, where F is the fundamental matrix (Faugeras and Luong, 1992).
- The epipolar line l1 associated with x2 is F x2 (x1 . l1 = 0).
- The epipolar line l2 associated with x1 is F^T x1 (x2 . l2 = 0).
- F is a 3x3 matrix with 7 DOF.
- F is singular (rank two): F e2 = 0 and F^T e1 = 0.

Suppose F is known. No additional information about the scene and camera is given. Given a point on the left image, how can I find the corresponding point on the right image?

Why is F useful?
- F captures information about the epipolar geometry of the 2 views and the camera parameters.
- MORE IMPORTANTLY: F gives constraints on how the scene changes under viewpoint transformation, without reconstructing the scene.
- Powerful tool in 3D reconstruction and multiview object/scene matching.

Rectification: making image planes parallel.
- GOAL: estimate the perspective transformation H that makes the images parallel (parallel epipolar lines).

Rectification: making image planes parallel. (Figure courtesy S. Lazebnik.)

After rectification, computing depth is easy:
- Baseline B, focal length f: x - x' = disparity = B f / Z.
- Disparity is inversely proportional to depth.
- Disparity map <-> depth map (http://vision.middlebury.edu/stereo). Disparity maps; disparity map with occlusions.

Multiview geometry
- Recovering the structure: find the coordinates of a 3D point from its projection into 2 or more images.
- Recovering camera parameters: given corresponding points in two images, find the camera matrices (position and pose).
- Correspondence: given a point x in one image, how can I find the corresponding point x' in another one?
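The rectified-stereo relation above is easy to check numerically. A minimal sketch follows; the baseline of 0.1 m and focal length of 640 pixels are made-up illustration values, not numbers from the lecture.

```python
def depth_from_disparity(d, baseline, focal_px):
    """Rectified stereo: x - x' = d = B * f / Z, so Z = B * f / d.

    d and focal_px are in pixels, baseline and the result in metres.
    """
    return baseline * focal_px / d

# Halving the disparity doubles the estimated depth.
z_near = depth_from_disparity(32.0, baseline=0.1, focal_px=640.0)  # 2.0 m
z_far = depth_from_disparity(16.0, baseline=0.1, focal_px=640.0)   # 4.0 m
```

This is why disparity maps and depth maps carry the same information up to the known constant B f, and why depth resolution degrades for distant points, where the disparity approaches zero.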
