Semantic Kernel Forests from Multiple Taxonomies
Sung Ju Hwang, Fei Sha and Kristen Grauman
The University of Texas at Austin
University of Southern California
Motivation
The conventional use of a semantic taxonomy in object categorization is limited in two ways:
1) The structure is not always optimal for hierarchical classification.
2) There exists no single 'optimal' taxonomy.
Idea
We focus on the implicit information a taxonomy provides: the criteria used to distinguish subclasses at different semantic granularities.
Approach
Our method consists of two steps: 1) learning granularity-specific semantic discriminative features, and 2) combining features at different semantic views and granularities.
Isolating granularity-specific discriminative features from multiple taxonomies
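For step 1, each internal node of each taxonomy yields its own discriminative feature space: a node-specific metric (learned, e.g., with the Tree of Metrics approach cited in the references) captures the distinctions needed to separate that node's subclasses, and is then turned into a base kernel. Below is a minimal Python/NumPy sketch of this per-node kernel construction; the Gaussian-of-Mahalanobis kernel form, the gamma value, and the stubbed-out metric learner are illustrative assumptions, not the released implementation.

```python
import numpy as np

def mahalanobis_sq(X, M):
    """Pairwise squared Mahalanobis distances (x_i - x_j)^T M (x_i - x_j)."""
    XM = X @ M                       # (n, d)
    sq = np.sum(XM * X, axis=1)      # x_i^T M x_i for each i
    D = sq[:, None] + sq[None, :] - 2.0 * (XM @ X.T)
    return np.maximum(D, 0.0)        # clip tiny negatives from round-off

def node_kernel(X, M, gamma=1.0):
    """Base kernel for one taxonomy node: Gaussian of the node's learned metric."""
    return np.exp(-gamma * mahalanobis_sq(X, M))

# Hypothetical stand-in for a per-node metric learner such as Tree of Metrics:
# it would return a PSD matrix M_n emphasizing the features that discriminate
# this node's child subclasses. Here we just return the identity for illustration.
def learn_node_metric(X, y_children, dim):
    return np.eye(dim)

X = np.random.RandomState(0).randn(100, 20)      # toy features
M_root = learn_node_metric(X, None, X.shape[1])
K_root = node_kernel(X, M_root, gamma=0.5)
```

Repeating this at every internal node of every taxonomy produces the pool of base kernels that step 2 combines.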
Combining features at different semantic views and granularities
Having isolated per-granularity discriminative semantic features at each node, we combine them to learn an optimal per-category feature representation.
Semantic Kernel Forest
Learning per-category kernels from the semantic kernel forest using MKL
After isolating semantic, discriminative features at each node, we combine them in an additive manner using multiple kernel learning (MKL). Note that for each class we only consider the kernels at the nodes on its root-to-leaf path, as sketched below.
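In code, the path restriction and the additive combination look roughly like the following: for a given class, gather the kernels at the nodes on its root-to-leaf path in every taxonomy and sum them with nonnegative weights. The parent-pointer tree encoding, the node names, and the uniform weights are illustrative assumptions; in the actual method the weights are learned jointly with a one-vs-rest SVM by MKL.

```python
import numpy as np

def path_to_root(parent, leaf):
    """Nodes from the leaf's parent up to the root, given a child -> parent dict."""
    path, node = [], parent.get(leaf)
    while node is not None:
        path.append(node)
        node = parent.get(node)
    return path

def combined_kernel(taxonomies, node_kernels, leaf, beta):
    """K_c = sum of beta[(t, n)] * K[(t, n)] over nodes n on leaf's path in each taxonomy t."""
    K = None
    for t, parent in enumerate(taxonomies):
        for n in path_to_root(parent, leaf):
            term = beta[(t, n)] * node_kernels[(t, n)]
            K = term if K is None else K + term
    return K

# Two toy taxonomies over the same leaves, encoded as child -> parent dicts;
# node names here are illustrative, not the datasets' actual taxonomies.
biological = {"dalmatian": "canine", "siamese_cat": "feline",
              "canine": "animal", "feline": "animal"}
appearance = {"dalmatian": "spotted", "siamese_cat": "pointed",
              "spotted": "animal", "pointed": "animal"}
taxonomies = [biological, appearance]

rng = np.random.RandomState(0)
X = rng.randn(10, 5)
K_base = X @ X.T                    # one shared toy kernel standing in for learned node kernels
kernels = {(t, n): K_base for t, parent in enumerate(taxonomies)
           for n in set(parent.values())}
beta = {key: 1.0 for key in kernels}   # uniform weights for illustration; MKL learns these
K_dalmatian = combined_kernel(taxonomies, kernels, "dalmatian", beta)
print(path_to_root(biological, "dalmatian"))   # ['canine', 'animal']
```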
Sparse hierarchical regularization
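On top of a standard sparsity term, the hierarchical regularizer couples the MKL weights of kernels that lie on the same root-to-leaf path. The sketch below shows one plausible hinge-style coupling between parent and child weights; the penalty direction, the lambda values, and the omission of the SVM loss term are assumptions for illustration, and the paper's exact formulation differs in detail.

```python
def hierarchical_penalty(beta, parent_of, lam=1.0):
    """Hinge coupling of MKL weights along a tree path.

    beta:      dict node -> nonnegative kernel weight for the current class
    parent_of: dict node -> parent node (root absent)
    NOTE: charging a child for outweighing its parent is an illustrative
    assumption; the paper's exact regularizer may couple the weights differently.
    """
    return lam * sum(max(0.0, b - beta.get(parent_of[n], float("inf")))
                     for n, b in beta.items() if n in parent_of)

def objective(beta, parent_of, lam1=0.1, lam2=1.0):
    """l1 sparsity + hierarchical coupling; the SVM loss term is omitted."""
    l1 = lam1 * sum(abs(b) for b in beta.values())
    return l1 + hierarchical_penalty(beta, parent_of, lam2)

beta = {"animal": 0.2, "canine": 0.5}
parent_of = {"canine": "animal"}
print(objective(beta, parent_of))   # 0.1*(0.2+0.5) + max(0, 0.5-0.2) = 0.37
```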
Dataset
We validate our method on three datasets: AWA-4 (4 categories, used for illustration), AWA-10, and ImageNet-20. The table below reports multi-class accuracy (%).
| Method | Description | AWA-4 | AWA-10 | ImageNet-20 |
| --- | --- | --- | --- | --- |
| Raw feature kernel | an RBF kernel computed on the original features | 47.67 ± 2.22 | 30.80 ± 1.36 | 28.20 ± 1.45 |
| Raw feature kernel + MKL | MKL combination of RBF kernels constructed by varying gamma | 48.50 ± 1.89 | 31.13 ± 2.31 | 27.57 ± 1.50 |
| Perturbed semantic kernel tree + MKL-H | a semantic kernel tree trained with taxonomies that have randomly swapped leaves | N/A | 31.53 ± 2.07 | 28.20 ± 2.02 |
| Perturbed semantic kernel forest + MKL-H | a semantic kernel forest trained with taxonomies that have randomly swapped leaves | N/A | 33.20 ± 2.96 | 30.77 ± 1.53 |
| Semantic kernel tree + Avg | an equal-weight average of the semantic kernels from one taxonomy | 47.17 ± 2.40 | 31.92 ± 1.21 | 28.97 ± 1.61 |
| Semantic kernel tree + MKL | the same kernels, combined with MKL using sparsity regularization only | 48.89 ± 1.06 | 32.43 ± 1.93 | 29.74 ± 1.26 |
| Semantic kernel tree + MKL-H | the same as above, plus the proposed hierarchical regularization | 50.06 ± 1.12 | 32.68 ± 1.79 | 29.90 ± 0.70 |
| Semantic kernel forest + MKL | semantic forest kernels from multiple taxonomies, combined with MKL | 49.67 ± 1.11 | 34.60 ± 1.78 | 30.97 ± 1.14 |
| Semantic kernel forest + MKL-H | the same as above, plus our hierarchical regularizer | 52.83 ± 1.68 | 35.87 ± 1.22 | 32.30 ± 1.00 |
Per-class results
The two plots below show the per-class accuracy improvement of each individual taxonomy and of the semantic kernel forest ("All") over the raw feature kernel baseline.
Confusion matrix on 4 animal classes
Learnt kernel weights
References
Sung Ju Hwang, Kristen Grauman, and Fei Sha, "Tree of Metrics with Disjoint Visual Features," NIPS 2011.
Source code and data
[kernelforest.tar.gz] (64 MB) Contains MATLAB code (v0.9) for both Tree of Metrics and semantic kernel forests, along with the data.
v1.0 in C++ with OpenMP for parallel classifier training will be released soon.
[taxonomyutil.tar.gz] Contains MATLAB code to generate taxonomies from WordNet, along with taxonomy data: 4 taxonomies for AWA-10 and 3 taxonomies for ImageNet-20.
Publication
Sung Ju Hwang, Kristen Grauman, and Fei Sha, "Semantic Kernel Forests from Multiple Taxonomies,"
Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, December 2012.