Deep Quantization Network for Efficient Image Retrieval

Yue Cao, Mingsheng Long, Jianmin Wang, Han Zhu, and Qingfu Wen
School of Software, Tsinghua University

The Thirtieth AAAI Conference on Artificial Intelligence (AAAI), 2016
Motivation — Large-Scale Image Retrieval

Goal: given a query image, find visually similar images from a large image database.
Motivation — Hashing Methods

Idea: generate compact hash codes from image descriptors (SIFT, GIST, DeCAF) for approximate nearest neighbor retrieval.

Categories
- Hamming embedding methods
- Quantization methods

Advantages
- Memory: a 128-d float descriptor shrinks from 512 bytes to 16 bytes, so 1 billion items shrink from 512 GB to 16 GB.
- Time: distance computation is 10x-100x faster; transmission (disk/web) is about 30x faster.

Applications
- Approximate nearest neighbor search
- Compact representation and feature compression for large datasets
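As a quick sanity check of the memory figures above, a few lines of Python (the 512 GB / 16 GB numbers use decimal gigabytes):

```python
# Memory footprint before and after hashing, using the slide's numbers.
dim, bytes_per_float = 128, 4          # 128-d float descriptor
raw_bytes = dim * bytes_per_float      # 512 bytes per item
code_bytes = 16                        # 128-bit compact code
n_items = 10**9                        # 1 billion database items

print(raw_bytes * n_items / 1e9)       # 512.0 GB for raw descriptors
print(code_bytes * n_items / 1e9)      # 16.0 GB for compact codes
```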
Motivation / Quantization Methods — Vector Quantization

Vector Quantization (VQ)
- Number of codewords: $K$; code length: $B = \log_2 K$.
- Each vector $x$ is assigned to its nearest codeword $c_{i(x)}$; only the codeword index is stored.

VQ for ANN Search
- $d(x, y) \approx d(c_i, c_j) = \mathrm{lookup}(i, j)$
- Construct a $K$-by-$K$ (i.e., $2^B$-by-$2^B$) look-up table of inter-codeword distances.
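A minimal sketch of VQ-based ANN search with a symmetric look-up table, assuming scikit-learn's KMeans (variable names are illustrative, not from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 128))    # database vectors
K = 256                               # codebook size -> B = log2(256) = 8 bits

kmeans = KMeans(n_clusters=K, n_init=4, random_state=0).fit(X)
codes = kmeans.labels_                # stored index i(x), one byte per vector
C = kmeans.cluster_centers_           # codebook, shape (K, 128)

# K-by-K look-up table of inter-codeword distances, computed once.
table = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=-1)

# d(x, y) ~ d(c_i, c_j) = table[i(x), i(y)]: a single lookup per pair.
print(table[codes[0], codes[1]], np.linalg.norm(X[0] - X[1]))
```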
Motivation / Quantization Methods — Product Quantization (PQ) [PAMI 2011]

Loss
$$\min_{c^1,\dots,c^M} \sum_{i=1}^{N} \big\| x_i - c_{i(x)} \big\|^2 \quad \text{s.t.}\quad c \in c^1 \times c^2 \times \cdots \times c^M$$

Idea: cut the input vector into $M$ subspaces and run VQ independently in each; a codeword is the concatenation of one sub-codeword per subspace.

Pros
- Huge codebook: $K = k^M$
- Tractable: $M$ $k$-by-$k$ look-up tables

Cons
- Sensitive to the projection (how coordinates are split into subspaces)
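A minimal product-quantization sketch under the same assumptions (scikit-learn KMeans; illustrative names, not the paper's code): split each vector into M blocks, quantize each block separately, and store one sub-index per block.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 128))
M, k = 8, 256                          # 8 subspaces, 256 centers each -> 8 bytes/vector
d_sub = X.shape[1] // M                # 16 dimensions per subspace

codebooks, codes = [], np.empty((len(X), M), dtype=np.uint8)
for m in range(M):
    block = X[:, m * d_sub:(m + 1) * d_sub]
    km = KMeans(n_clusters=k, n_init=2, random_state=m).fit(block)
    codebooks.append(km.cluster_centers_)
    codes[:, m] = km.labels_

# Reconstruction = concatenation of the chosen sub-codewords; the effective
# codebook has k^M = 256^8 entries, yet only M small k-by-k tables are needed.
x_hat = np.concatenate([codebooks[m][codes[0, m]] for m in range(M)])
print(np.linalg.norm(X[0] - x_hat))    # quantization error for one item
```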
Motivation / Quantization Methods — Optimized Product Quantization (OPQ) [CVPR 2013]

Loss
$$\min_{R,\, c^1,\dots,c^M} \sum_{i=1}^{N} \big\| x_i - c_{i(x)} \big\|^2 \quad \text{s.t.}\quad Rc \in c^1 \times c^2 \times \cdots \times c^M,\ \ R^\top R = I$$

Pros
- Huge codebook: $K = k^M$
- Tractable: $M$ $k$-by-$k$ look-up tables
- Insensitive to rotation ($R$ is learned jointly with the codebooks)

Cons
- Subspaces can remain highly correlated
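A rough sketch of the alternating optimization behind (non-parametric) OPQ, assuming the same numpy/scikit-learn setup; this illustrates the idea, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def pq_reconstruct(X, M, k):
    """Run plain PQ on X and return its reconstruction (see previous sketch)."""
    d_sub = X.shape[1] // M
    parts = []
    for m in range(M):
        km = KMeans(n_clusters=k, n_init=2, random_state=m)
        labels = km.fit_predict(X[:, m * d_sub:(m + 1) * d_sub])
        parts.append(km.cluster_centers_[labels])
    return np.hstack(parts)

def opq_fit(X, M=8, k=32, n_iter=5):
    R = np.eye(X.shape[1])                   # start from the identity rotation
    for _ in range(n_iter):
        X_hat = pq_reconstruct(X @ R, M, k)  # quantize the rotated data
        # Orthogonal Procrustes step: min_R ||X R - X_hat||_F s.t. R^T R = I.
        U, _, Vt = np.linalg.svd(X.T @ X_hat)
        R = U @ Vt
    return R

X = np.random.default_rng(0).normal(size=(2_000, 64))
R = opq_fit(X)
print(np.allclose(R.T @ R, np.eye(64), atol=1e-6))   # R stays orthogonal
```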
Motivation / Quantization Methods — OPQ with Deep Features

Pros
- Insensitive to rotation
- Low correlation between subspaces

Cons
- Poor quantizability: the input vectors cannot be easily grouped into clusters.
Motivation / Quantization Methods — Deep Quantization

Product Quantization Loss
$$Q = \sum_{i=1}^{N} \big\| z_i^l - C h_i \big\|_2^2, \qquad (1)$$
$$C = \operatorname{diag}(C_1, C_2, \dots, C_M) = \begin{bmatrix} C_1 & 0 & \cdots & 0 \\ 0 & C_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & C_M \end{bmatrix}$$

Pros
- Low correlation between subspaces
- Each subspace is easily clustered (high quantizability)
- The look-up table is the same as in PQ
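A small numpy sketch of the loss Q in Eq. (1), assuming each indicator h_i stacks M one-hot vectors (one codeword choice per subspace) so that C h_i concatenates the selected sub-codewords; sizes are illustrative:

```python
import numpy as np

N, M, k, d_sub = 1_000, 4, 16, 8
rng = np.random.default_rng(0)

Z = rng.normal(size=(N, M * d_sub))                          # deep representations z_i^l
C_blocks = [rng.normal(size=(d_sub, k)) for _ in range(M)]   # diagonal blocks C_m

# One codeword index per subspace stands in for the one-hot h_i.
idx = rng.integers(0, k, size=(N, M))
Z_hat = np.hstack([C_blocks[m][:, idx[:, m]].T for m in range(M)])  # C h_i

Q = np.sum((Z - Z_hat) ** 2)    # Q = sum_i ||z_i^l - C h_i||_2^2
print(Q / N)
```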
Motivation / Quantization Methods — Similarity Preserving

Previous works [CVPR 2012, AAAI 2014]
$$L = \sum_{s_{ij} \in S} \Big( s_{ij} - \tfrac{1}{B} \langle z_i, z_j \rangle \Big)^2$$
Problem: $\langle z_i, z_j \rangle \in [-R, R]$ but $s_{ij} \in \{-1, 1\}$, so the two sides live on mismatched scales.

Our approach
$$L = \sum_{s_{ij} \in S} \bigg( s_{ij} - \frac{\langle z_i, z_j \rangle}{\| z_i \| \, \| z_j \|} \bigg)^2$$
Since $\cos(z_i, z_j) = \langle z_i, z_j \rangle / (\| z_i \| \, \| z_j \|) \in [-1, 1]$ matches $s_{ij} \in \{-1, 1\}$, the loss is well-specified for preserving the similarity conveyed in $S$.
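A minimal numpy sketch of the pairwise cosine loss (function and variable names are my own):

```python
import numpy as np

def pairwise_cosine_loss(Z, pairs, labels):
    """Z: (N, d) representations; pairs: list of (i, j); labels: s_ij in {-1, +1}."""
    loss = 0.0
    for (i, j), s_ij in zip(pairs, labels):
        cos = Z[i] @ Z[j] / (np.linalg.norm(Z[i]) * np.linalg.norm(Z[j]))
        loss += (s_ij - cos) ** 2     # cos in [-1, 1] matches s_ij in {-1, +1}
    return loss
```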
Method / Model — Objective Function
$$\min_{\Theta, C, H}\ L + \lambda Q \qquad (2)$$

Pairwise Cosine Loss
$$L = \sum_{s_{ij} \in S} \bigg( s_{ij} - \frac{\langle z_i^l, z_j^l \rangle}{\| z_i^l \| \, \| z_j^l \|} \bigg)^2 \qquad (3)$$

Product Quantization Loss
$$Q = \sum_{i=1}^{N} \big\| z_i^l - C h_i \big\|_2^2, \qquad (4)$$
where $C = \operatorname{diag}(C_1, C_2, \dots, C_M)$ is the block-diagonal codebook from Eq. (1).
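Putting Eqs. (2)-(4) together, a sketch of the full objective L + λQ, reusing pairwise_cosine_loss and the C_blocks/idx layout from the sketches above (lam is the trade-off weight):

```python
import numpy as np

def dqn_objective(Z, pairs, labels, C_blocks, idx, lam=1.0):
    M = len(C_blocks)
    Z_hat = np.hstack([C_blocks[m][:, idx[:, m]].T for m in range(M)])
    Q = np.sum((Z - Z_hat) ** 2)                 # quantization loss, Eq. (4)
    L = pairwise_cosine_loss(Z, pairs, labels)   # similarity loss, Eq. (3)
    return L + lam * Q                           # Eq. (2), minimized over Theta, C, H
```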
Method / Model — Deep Quantization Network

Key Contributions
- An end-to-end deep quantization framework using AlexNet for deep representation learning
- The first to minimize quantization error jointly with deep representation learning, which significantly improves quantizability
- A pairwise cosine loss that better links cosine distances with similarity labels

[Architecture figure: input → conv1-conv5 → fc6 → fc7 → fcb; conv1-conv5 and fc6-fc7 are fine-tuned from pre-trained AlexNet while the bottleneck layer fcb is trained. The fcb representation feeds both the pairwise cosine loss and the product quantization loss against the codebook, and is quantized into the hash code.]
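A minimal PyTorch sketch of the architecture as described on this slide (pre-trained AlexNet through fc7 plus a new bottleneck layer fcb); layer and variable names are illustrative, and this is not the authors' released code:

```python
import torch.nn as nn
from torchvision import models

class DQN(nn.Module):
    def __init__(self, bottleneck_dim=64):
        super().__init__()
        alexnet = models.alexnet(weights="IMAGENET1K_V1")   # pre-trained
        self.features = alexnet.features                    # conv1-conv5 (fine-tuned)
        self.avgpool = alexnet.avgpool
        self.fc67 = nn.Sequential(*list(alexnet.classifier.children())[:-1])
        self.fcb = nn.Linear(4096, bottleneck_dim)          # new bottleneck layer

    def forward(self, x):
        x = self.avgpool(self.features(x)).flatten(1)
        return self.fcb(self.fc67(x))   # z^l: fed to both the cosine and PQ losses
```

The fcb output would then be quantized against the learned codebook C to produce the hash code.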
Method / Model — Approximate Nearest Neighbor Search

Asymmetric Quantizer Distance (AQD)
$$\mathrm{AQD}(q, x_i) = \sum_{m=1}^{M} \big\| z_{qm}^l - C_m h_{im} \big\|_2^2 \qquad (5)$$
- $q$: query; $x_i$: raw feature of database point $i$
- $z_q^l$: deep representation of query $q$ ($z_{qm}^l$: its $m$-th subvector)
- $h_{im}$: binary code of $x_i$ in the $m$-th subspace
- $C_m h_{im}$: compressed representation of $x_i$ in the $m$-th subspace

Look-up Tables
- For each query, pre-compute an $M \times K$ look-up table of sub-distances.
- Scoring each database item then takes only $M$ table lookups and additions.
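A sketch of AQD search with per-query look-up tables, reusing the C_blocks/codes layout from the PQ-loss sketch above (illustrative only):

```python
import numpy as np

def aqd_search(z_q, C_blocks, codes, topk=5):
    """z_q: (d,) deep query feature; C_blocks: M arrays of shape (d_sub, k);
    codes: (N, M) per-subspace codeword indices of the database."""
    M, d_sub = len(C_blocks), len(z_q) // len(C_blocks)
    # M x K table: table[m, j] = ||z_qm - C_m[:, j]||^2, computed once per query.
    table = np.stack([
        np.sum((z_q[m * d_sub:(m + 1) * d_sub, None] - C_blocks[m]) ** 2, axis=0)
        for m in range(M)
    ])
    # AQD(q, x_i) = sum_m table[m, codes[i, m]]: M lookups + additions per item.
    scores = table[np.arange(M)[None, :], codes].sum(axis=1)
    return np.argsort(scores)[:topk]
```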