Abstract
Hyper-parameter selection remains a daunting task when building a pattern recognition architecture that performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propose using supervised error signals, obtained via gradient descent, to tune the established maps within the model. This technique allows us to learn what would otherwise be a design choice within the model and to specialize the maps to aggregate areas of invariance for the task at hand. Preliminary results show moderate potential gains in classification accuracy and highlight areas of importance within the intermediate feature representation space.
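As a rough illustration of the idea summarized above, the sketch below (not the authors' implementation; the shapes, variable names, and toy squared-error loss are all assumptions) treats each pooled output as a weighted sum over the spatial positions of an intermediate feature map, and updates the pooling map weights by gradient descent on a supervised error signal instead of fixing them by hand.

```python
# Minimal sketch: pooling as a learned weighted aggregation over spatial positions.
# Not the paper's code; all names and the toy regression loss are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Intermediate feature maps: C channels over a 6x6 spatial grid, flattened to S positions.
C, S = 8, 36
features = rng.normal(size=(C, S))

# Pooling map weights: one weight per (pooled output, spatial position).
# A hand-designed scheme (e.g. average pooling over quadrants) would hard-code these;
# here they are free parameters to be tuned.
P = 4                                   # pooled outputs per channel
W = rng.normal(scale=0.1, size=(P, S))

def pool_forward(X, W):
    """Each pooled output is a learned weighted sum over spatial positions."""
    return X @ W.T                      # shape (C, P)

# Toy supervised signal: regress pooled features onto a target with squared error.
target = rng.normal(size=(C, P))
lr = 0.01
for step in range(100):
    pooled = pool_forward(features, W)
    err = pooled - target               # dLoss/dpooled for 0.5/C * sum of squares
    grad_W = err.T @ features / C       # backpropagate the error into the pooling maps
    W -= lr * grad_W                    # gradient-descent update of the maps
```

In a full pipeline the same update would come from backpropagating the classification error through the layers above the pooling stage, so the maps specialize to aggregate the regions most useful for the task.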
| Original language | English |
|---|---|
| State | Published - 2013 |
| Externally published | Yes |
| Event | 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States |
| Duration | May 2, 2013 → May 4, 2013 |
Conference
| Conference | 1st International Conference on Learning Representations, ICLR 2013 |
|---|---|
| Country/Territory | United States |
| City | Scottsdale |
| Period | 05/2/13 → 05/4/13 |