Abstract

Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or “deep,” structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of Hinton et al. (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented edge filters resembling the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first-layer responses in the data. Specifically, it picks up both collinear (“contour”) features and corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex “corner” features matches well with results from Ito & Komatsu’s study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling higher-order features.
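As context for the “sparse variant” mentioned above: each layer of a deep belief network is trained as a restricted Boltzmann machine (RBM), and the sparse variant additionally regularizes the hidden units toward a low average activation. The code below is a minimal sketch of that idea, not the authors' implementation: it assumes simple binary-binary units, a single contrastive-divergence step (CD-1), and an illustrative penalty that nudges the hidden biases toward a target activation level; the function name, data, and hyperparameters are placeholders.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparse_rbm_cd1_update(v0, W, b, c, lr=0.01, target=0.02, sparsity_cost=0.1):
    # Positive phase: hidden-unit probabilities for a data batch v0 of shape (batch, n_visible).
    h0 = sigmoid(v0 @ W + c)
    # One Gibbs step (CD-1): sample hidden states, reconstruct visibles, re-infer hiddens.
    h0_sample = (np.random.rand(*h0.shape) < h0).astype(v0.dtype)
    v1 = sigmoid(h0_sample @ W.T + b)
    h1 = sigmoid(v1 @ W + c)
    # Contrastive-divergence gradient estimates for weights and biases.
    batch = v0.shape[0]
    dW = (v0.T @ h0 - v1.T @ h1) / batch
    db = (v0 - v1).mean(axis=0)
    dc = (h0 - h1).mean(axis=0)
    # Illustrative sparsity term: push the mean hidden activation toward `target`.
    dc += sparsity_cost * (target - h0.mean(axis=0))
    # Gradient ascent on the approximate log-likelihood plus the sparsity penalty.
    return W + lr * dW, b + lr * db, c + lr * dc

# Illustrative usage on random binary "patches" (placeholder data, not the paper's images).
rng = np.random.default_rng(0)
patches = (rng.random((100, 196)) > 0.5).astype(float)   # 100 patches of 14x14 pixels
W = 0.01 * rng.standard_normal((196, 50))
b = np.zeros(196)
c = np.zeros(50)
for _ in range(10):
    W, b, c = sparse_rbm_cd1_update(patches, W, b, c)

Stacking proceeds greedily, as in standard deep belief network training: once a first-layer RBM of this kind has converged, its hidden activations on the data serve as the visible input to a second RBM, which is where the contour- and corner-like features described in the abstract would emerge.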

Keywords

Computer science · Deep belief network · Artificial intelligence · Visual cortex · Neural coding · Deep learning · Receptive field · Pattern recognition (psychology) · Computation · Hierarchy · Dictionary learning · Layer (electronics) · Algorithm · Sparse approximation

Publication Info

Year: 2007
Type: Article
Volume: 20
Pages: 873-880
Citations: 880
Access: Closed

Citation Metrics

880 citations (OpenAlex)

Cite This

Honglak Lee, Chaitanya Ekanadham, Andrew Y. Ng (2007). Sparse deep belief net model for visual area V2. Advances in Neural Information Processing Systems, 20, 873-880.