Learning to find good correspondences K.M. Yi, E. Trulls, Y. Ono, V. Lepetit, M. Salzmann, P. Fua CVPR 2018 (Salt Lake City, UT, USA)
Local features matter Keypoints provide us with a robust way to match points across images.
Local features matter Keypoints: location (x, y), orientation, scale. Descriptors: histograms of gradient orientations. Source: http://medium.com/machine-learning-world/feature-extraction-and-similar-image-search-with-opencv-for-newbies-3c59796bf774
Local features matter Matching large numbers of local features allows us to recover structure! Source: OpenIMAJ (http://openimaj.org)
Why should I care? SIFT (conference paper + journal paper): 64k citations!
Applications: panorama stitching Source: http://karantza.org/wordpress/?p=10
Applications: 3D reconstruction
Applications: camera pose retrieval Source: S.M. Yoon et al, Hierarchical image representation using 3D camera geometry for content-based image retrieval, EAAI 2014.
What about Deep Learning? Recent works by the Computer Vision lab at EPFL: TILDE: A Temporally Invariant Learned DEtector (CVPR 15). Discriminative learning of descriptors (ICCV 15). Learning to assign orientations to feature points (CVPR 16). LIFT: Learned Invariant Feature Transform (ECCV 16). Learning to find good correspondences (CVPR 18). LF-Net: Learning local features from images (arxiv 18).
Matching keypoints: (a) Find putative matches → (b) Find inliers (e.g. RANSAC) → (c) Retrieve pose. Fischler & Bolles, Random Sample Consensus. Comm. ACM, 1981.
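The inlier step of this classic pipeline can be sketched as a minimal RANSAC loop around the linear 8-point solver. This is an illustrative sketch (function name and parameters are mine, not from the talk), assuming matches in normalized camera coordinates:

```python
import numpy as np

def ransac_eight_point(x1, x2, iters=1000, thresh=1e-3, rng=None):
    """Minimal RANSAC (Fischler & Bolles, 1981) for the essential matrix.

    x1, x2: (N, 2) putative matches in normalized camera coordinates.
    Returns the best model and its inlier mask.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N = x1.shape[0]
    x1h = np.hstack([x1, np.ones((N, 1))])
    x2h = np.hstack([x2, np.ones((N, 1))])
    best_E, best_inl = None, np.zeros(N, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(N, size=8, replace=False)
        # Linear 8-point fit on the minimal sample (convention x2^T E x1 = 0):
        # each row of X is the outer product of the homogeneous points.
        X = np.einsum('ni,nj->nij', x2h[idx], x1h[idx]).reshape(8, 9)
        _, _, Vt = np.linalg.svd(X)
        E = Vt[-1].reshape(3, 3)
        # Algebraic epipolar residual |x2^T E x1| for every correspondence.
        r = np.abs(np.sum(x2h * (x1h @ E.T), axis=1))
        inl = r < thresh
        if inl.sum() > best_inl.sum():
            best_E, best_inl = E, inl
    return best_E, best_inl
```

The loop samples minimal sets, fits, and keeps the model with the largest consensus set, which is exactly what the learned method later replaces with per-correspondence weights.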
Dense matching with CNNs. Current focus of research: Zamir et al. (ECCV 16), SfM-Net (arXiv 17), DeMoN (CVPR 17), Lowe et al. (CVPR 17). Focus: video, small displacements. The general case (wide baselines) remains unsolved.
Where's the challenge?
RANSAC: not always enough
Geometry to the rescue
Geometry to the rescue A geometrically-aware deep network. Input: correspondences. Output: one weight for each. We simultaneously learn to: Perform outlier rejection. Regress to the essential matrix.
Computing the Essential matrix. Closed-form solution: the 8-point algorithm. N correspondences → N×9 matrix X, with rows {uu', uv', u, ...} → 9×9 matrix X^T X → SVD → Essential matrix. Longuet-Higgins, "A computer algorithm for reconstructing a scene from two projections." Nature, 1981.
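A minimal NumPy sketch of this closed-form solution (function name mine; assumes normalized camera coordinates, i.e. calibrated cameras, and the row ordering {uu', uv', u, ...} shown on the slide):

```python
import numpy as np

def eight_point(x1, x2):
    # Closed-form essential matrix via the 8-point algorithm
    # (Longuet-Higgins, 1981). x1, x2: (N, 2) matched points in
    # normalized camera coordinates; convention x1^T E x2 = 0, so each
    # row of X reads [uu', uv', u, vu', vv', v, u', v', 1].
    N = x1.shape[0]
    x1h = np.hstack([x1, np.ones((N, 1))])
    x2h = np.hstack([x2, np.ones((N, 1))])
    X = np.einsum('ni,nj->nij', x1h, x2h).reshape(N, 9)
    # vec(E) is the eigenvector of the 9x9 matrix X^T X with the
    # smallest eigenvalue, obtained here from its SVD.
    _, _, Vt = np.linalg.svd(X.T @ X)
    E = Vt[-1].reshape(3, 3)
    # Project onto valid essential matrices: singular values (s, s, 0).
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

With noise-free correspondences the recovered E satisfies the epipolar constraint to machine precision; real matches need the robust weighting discussed next.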
Learning to compute weights. We learn to compute weights for the 8-point algorithm: N correspondences → Deep Net → N×1 weights w. N×9 matrix X, rows {uu', uv', u, ...} → 9×9 matrix X^T diag(w) X → SVD → Essential matrix. Fully differentiable!
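The weighted variant changes only the 9×9 matrix that gets decomposed. A sketch (function name mine), assuming X is the N×9 constraint matrix of the 8-point algorithm:

```python
import numpy as np

def weighted_eight_point(X, w):
    # Weighted version of the linear 8-point step. X: (N, 9) matrix of
    # epipolar constraints; w: (N,) non-negative weights, here standing
    # in for the deep net's output. X^T X becomes X^T diag(w) X, so
    # zero-weight correspondences (predicted outliers) drop out of the
    # estimate, and the step stays differentiable with respect to w.
    XtWX = X.T @ (w[:, None] * X)
    _, _, Vt = np.linalg.svd(XtWX)
    return Vt[-1].reshape(3, 3)   # vec(E) up to scale (rank-2 projection omitted)
```

Because every operation (matrix products, SVD) is differentiable, gradients flow from the estimated essential matrix back into the network that produced w.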
Exploiting epipolar geometry. We do not have dense depth data, but we have the ground-truth camera poses. With epipolar geometry we know that points in image 1 map to lines in image 2. Source: Wikipedia (https://en.wikipedia.org/wiki/Epipolar_geometry)
Adding a classification loss. Not perfect (a point-to-line distance, not point-to-point)! But good enough for a supervision signal. Hartley & Zisserman, Multiple View Geometry in Computer Vision, 2000.
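One way to turn this into labels: threshold the symmetric epipolar distance under the ground-truth essential matrix. The residual and threshold below are illustrative choices of mine, not necessarily the paper's exact settings:

```python
import numpy as np

def epipolar_labels(x1, x2, E_gt, thresh=1e-3):
    # Label putative matches as inliers/outliers using only the
    # ground-truth essential matrix E_gt (from the known camera poses):
    # a correct match x2 must lie on the epipolar line E_gt @ x1.
    # x1, x2: (N, 2) normalized coordinates, convention x2^T E x1 = 0.
    N = x1.shape[0]
    x1h = np.hstack([x1, np.ones((N, 1))])
    x2h = np.hstack([x2, np.ones((N, 1))])
    l2 = x1h @ E_gt.T                       # epipolar lines in image 2
    l1 = x2h @ E_gt                         # epipolar lines in image 1
    r = np.sum(x2h * l2, axis=1)            # algebraic residual x2^T E x1
    # Symmetric epipolar distance: squared point-to-line distances in
    # both images (small epsilons guard against degenerate lines).
    d = r**2 * (1.0 / (l1[:, 0]**2 + l1[:, 1]**2 + 1e-12)
                + 1.0 / (l2[:, 0]**2 + l2[:, 1]**2 + 1e-12))
    return d < thresh
```

Points off their epipolar line can still be wrong matches along the line, which is why this supervision is "not perfect" but still usable.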
Complete formulation We jointly train for outlier rejection and regression to the Essential matrix by minimizing the hybrid loss: Classification (Inliers vs outliers) Regression (which inliers help us retrieve E?)
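Schematically (the coefficients and symbol names here are illustrative, not the paper's exact notation), the hybrid objective combines the two terms:

```latex
\mathcal{L} \;=\; \alpha\, \mathcal{L}_{\mathrm{cls}}\big(\mathbf{w}, \mathbf{y}\big)
\;+\; \beta\, \mathcal{L}_{\mathrm{ess}}\big(\hat{\mathbf{E}}(\mathbf{w}), \mathbf{E}^{\mathrm{gt}}\big)
```

where w are the predicted weights, y the inlier labels derived from epipolar geometry, and Ê(w) the essential matrix recovered by the weighted 8-point step.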
Our network. Input: putative matches (SIFT + nearest neighbor), coordinates only: {(u, v, u', v')}, 1 ≤ i ≤ N (N×4). Output: N×1 weights, encoding inlier probability. Architecture: 12 ResNet blocks of weight-sharing perceptrons (the same MLP applied to every correspondence); each block: Perceptron → Context Norm → Batch Norm + ReLU, with residual connections; a final layer maps the 128-D features to one weight per correspondence (ReLU + tanh). Global context embedded via Context Normalization.
Embedding context. CONTEXT NORM: non-parametric normalization of the mean/std of the feature maps, applied over each image pair in the batch separately (not over the whole batch). Also known as Instance Norm, used in image stylization.
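A minimal sketch of Context Normalization and how it sits inside one residual block (forward pass only; batch norm omitted for brevity, and the weight shapes are illustrative):

```python
import numpy as np

def context_norm(x, eps=1e-3):
    # Context Normalization: normalize each feature channel to zero
    # mean / unit variance ACROSS the N correspondences of one image
    # pair. Non-parametric; this is how otherwise independent per-point
    # MLPs see global context.
    return (x - x.mean(axis=0, keepdims=True)) / (x.std(axis=0, keepdims=True) + eps)

def resnet_block(x, W1, W2):
    # One residual block in the spirit of the architecture above.
    # x: (N, C) features for one image pair; W1, W2: (C, C) perceptron
    # weights shared across all correspondences.
    h = np.maximum(context_norm(x @ W1), 0.0)   # perceptron + CN + ReLU
    h = np.maximum(context_norm(h @ W2), 0.0)
    return x + h                                # residual connection
```

In a batched implementation the normalization runs once per image pair, never mixing statistics across pairs, which is what distinguishes it from plain batch normalization.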
Results. Train on only two sequences, one indoors & one outdoors (10k pairs from each): (i) St. Peter's Square (2.5k images); (ii) Brown (video, 8k images). Test on completely different sequences (1k pairs from each).
Results Ours RANSAC Outdoors: great performance. Indoors: slightly better than dense methods.
RANSAC (SIFT, 2000 keypoints) OURS (SIFT, 2000 keypoints)
Collaborators: Kwang Yi (U. Victoria), Eduard Trulls (EPFL), Yuki Ono (Sony), Vincent Lepetit (U. Bordeaux), Mathieu Salzmann (EPFL), Pascal Fua (EPFL). Code and models: github.com/vcg-uvic/learned-correspondence-release
Thanks for your attention. Questions?