UTCS Artificial Intelligence
Deep Generative Similarity-Weighted Interleaved Learning (2025)
Marlan McInnes-Taylor
Deep neural networks excel at visual recognition when trained offline on large, stationary datasets, but they struggle to learn sequentially without erasing prior knowledge, a phenomenon termed catastrophic forgetting. Replay-based methods are currently the most effective defense against catastrophic forgetting in class-incremental image classification, yet naïve rehearsal raises storage and privacy concerns and often wastes computation on low-risk memories. This thesis investigates whether targeted rehearsal can be combined with synthetic replay to retain accuracy while eliminating raw-data retention. I propose Deep Generative Similarity-Weighted Interleaved Learning (DGSWIL), which replaces stored exemplars with class-conditional synthetic samples selected by feature-space similarity to the current task, aiming to concentrate replay where interference risk is highest. Across standard class-incremental protocols on Fashion-MNIST and CIFAR-10, DGSWIL does not reliably outperform strong replay baselines, including SWIL and uniform/exemplar-based variants, and in several settings it increases forgetting relative to simpler strategies. A diagnostic study probes why similarity-guided generative replay underperforms, showing that (i) representation drift corrupts similarity estimates over task sequences, (ii) compute–noise trade-offs introduced by joint optimization of the generator and classifier severely degrade replay fidelity, and (iii) generator–target mismatch limits class-conditional fidelity near decision boundaries. Representational-overlap analyses and synthetic-sample quality metrics collectively indicate misalignment between similarity scores and the gradients that actually reduce interference. The contributions are fourfold: (1) a unified, reproducible DGSWIL framework coupling similarity-weighted interleaving with generative replay; (2) a negative-result evaluation against strong baselines; (3) diagnostics that expose failure modes in similarity-based synthetic rehearsal; and (4) open-source artifacts (code, configurations, and evaluation scripts) to enable replication and further study. The results caution against assuming that “which items to replay” can be optimized via feature-space similarity when the features themselves are nonstationary, and they outline concrete directions for stabilizing similarity estimates and decoupling generator noise from continual-learning updates.
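The selection mechanism the abstract describes, weighting old classes for replay by their feature-space similarity to the current task and then drawing class-conditional synthetic samples accordingly, can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the thesis's implementation: the use of class-prototype mean features, cosine similarity against the current task's centroid, and a softmax temperature are hypothetical choices made for exposition. Consult the PDF for the actual method.

    import numpy as np

    def replay_weights(old_prototypes, new_task_features, temperature=0.1):
        """Similarity-weighted replay distribution over previously learned classes.

        old_prototypes:    (C_old, d) mean feature vectors of old classes
                           (assumed to come from a shared feature extractor)
        new_task_features: (N, d) features of the current task's training data
        Returns a probability vector over the C_old old classes.
        """
        # Summarize the current task by its mean feature (an illustrative choice).
        task_centroid = new_task_features.mean(axis=0)
        # Cosine similarity between each old-class prototype and the task centroid.
        protos = old_prototypes / np.linalg.norm(old_prototypes, axis=1, keepdims=True)
        centroid = task_centroid / np.linalg.norm(task_centroid)
        sims = protos @ centroid  # shape (C_old,)
        # Softmax with temperature: lower temperature concentrates replay on the
        # classes most similar to the current task, where interference risk is highest.
        z = (sims - sims.max()) / temperature
        w = np.exp(z)
        return w / w.sum()

    def sample_replay_classes(weights, n_replay, rng=None):
        """Draw class labels for synthetic replay, proportional to similarity weights."""
        if rng is None:
            rng = np.random.default_rng()
        return rng.choice(len(weights), size=n_replay, p=weights)

    # Toy usage with random features in place of a real feature extractor:
    rng = np.random.default_rng(0)
    old_protos = rng.normal(size=(5, 64))    # 5 previously learned classes
    new_feats = rng.normal(size=(200, 64))   # current-task features
    w = replay_weights(old_protos, new_feats)
    labels = sample_replay_classes(w, n_replay=32, rng=rng)

In a full DGSWIL-style pipeline the sampled labels would condition a generative model (e.g., a class-conditional GAN or VAE) to synthesize replay images interleaved with the new task's batches. Note that the abstract's diagnostic findings apply directly to this sketch: if the feature extractor drifts as tasks accumulate, the prototypes and similarities computed here become stale, which is one of the failure modes the thesis documents.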
View:
PDF
Citation:
Master's Thesis, Department of Computer Science, The University of Texas at Austin.
Bibtex:
@mastersthesis{mcinnestaylor:ms25,
  title={Deep Generative Similarity-Weighted Interleaved Learning},
  author={Marlan McInnes-Taylor},
  school={Department of Computer Science, The University of Texas at Austin},
  address={Austin, TX},
  url={http://www.cs.utexas.edu/users/ai-lab?mcinnestaylor:ms25},
  year={2025}
}
People
Marlan McInnes-Taylor
Masters Alumni
marlan [at] cs utexas edu
Areas of Interest
Supervised Learning
Labs
Neural Networks