Authors: Xiaohu Song; Damien Muselet and Alain Tremeau
        
Affiliation: UJM-Saint-Etienne, CNRS and LaHC UMR 5516, France
        
Keyword(s): Color Descriptor, Self-similarity, Classification, Invariance.
        
            
Related Ontology Subjects/Areas/Topics: Color and Texture Analyses; Computer Vision, Visualization and Computer Graphics; Features Extraction; Image and Video Analysis
        
            
Abstract: One big challenge in computer vision is to extract robust and discriminative local descriptors. For many applications, such as object tracking, image classification or image matching, appearance-based descriptors such as SIFT or learned CNN features provide very good results. But for other applications, such as multimodal image comparison (infra-red versus color, color versus depth, ...), these descriptors fail and people resort to the spatial distribution of self-similarities. The idea is to describe the similarities between local regions in an image rather than the appearances of these regions at the pixel level. Nevertheless, classical self-similarities are not invariant to rotation in the image space, so two rotated versions of a local patch are not considered similar, and we believe that much discriminative information is lost because of this weakness. In this paper, we present a method to extract rotation-invariant self-similarities. To this end, we propose to compare color descriptors of the local regions rather than the local regions themselves. Furthermore, since this comparison informs us about the relative orientations of the two local regions, we incorporate this information into the final image descriptor in order to increase the discriminative power of the system. We show that the self-similarities extracted in this way are very discriminative.
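
To make the construction concrete, below is a minimal Python sketch of a rotation-invariant self-similarity computation in the spirit of the abstract, not the authors' exact method. It compares a reference patch against surrounding patches using per-channel color histograms (a rotation-invariant color descriptor, used here as an illustrative stand-in for the paper's descriptor) and records the relative dominant orientation of each patch pair, mirroring the idea of keeping the relative-orientation information. All function and parameter names are hypothetical.

import numpy as np

def color_histogram(patch, bins=8):
    # Per-channel color histogram: unchanged under in-plane rotation of the patch.
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256),
                          density=True)[0] for c in range(patch.shape[-1])]
    return np.concatenate(hists)

def dominant_orientation(patch):
    # Dominant gradient orientation of the patch's intensity (illustrative choice).
    gray = patch.mean(axis=-1)
    gy, gx = np.gradient(gray)
    return np.arctan2(gy.sum(), gx.sum())

def self_similarities(image, center, radius=16, half=2, step=5):
    # Similarity (and relative orientation) between the patch at `center`
    # and the patches on a surrounding grid; boundary checks omitted for brevity.
    cy, cx = center
    ref = image[cy - half: cy + half + 1, cx - half: cx + half + 1]
    ref_hist = color_histogram(ref)
    ref_ori = dominant_orientation(ref)
    out = []
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            y, x = cy + dy, cx + dx
            cand = image[y - half: y + half + 1, x - half: x + half + 1]
            # Histogram intersection is unaffected if `cand` is rotated,
            # which is where the rotation invariance comes from.
            sim = np.minimum(ref_hist, color_histogram(cand)).sum()
            rel_ori = dominant_orientation(cand) - ref_ori
            out.append((dy, dx, sim, rel_ori))
    return out

Collecting the (offset, similarity, relative-orientation) tuples over a dense grid of centers would then give the spatial self-similarity map that the final image descriptor is built from; the grid spacing, patch size and histogram binning above are arbitrary placeholder values.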