| title |
|---|
| ML Articles Reading Notes |
Notes, in chronological order (archive):
- Shalev-Shwartz, Ben-David 2014: Understanding Machine Learning: From Theory to Algorithms
- Cucker, Zhou 2007: Learning Theory: An Approximation Theory Viewpoint
- Murphy 2012: Machine Learning: A Probabilistic Perspective
- Ng, Jordan 2001: On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes, notes
- Csiszár 2004: Information Theory and Statistics: A Tutorial
- Minsker 2015: Geometric Median and Robust Estimation in Banach Spaces, notes
- Belkin 2018: To Understand Deep Learning We Need to Understand Kernel Learning, notes
- Tishby 2000: The Information Bottleneck Method, notes
- Tishby 2015: Deep Learning and the Information Bottleneck Principle, notes
- Pan, Yang 2010: A Survey on Transfer Learning, notes
- Daumé, Marcu 2006: Domain Adaptation for Statistical Classifiers, notes
- Zadrozny 2004: Learning and Evaluating Classifiers under Sample Selection Bias, notes
- Yu, Mineyev, Varshney 2018: A Group-Theoretic Approach to Abstraction, notes
- Jain, Kar 2017: Non-convex Optimization for Machine Learning
- Salimans 2017: Evolution Strategies as a Scalable Alternative to Reinforcement Learning, notes
- Lehman 2017: ES Is More Than Just a Traditional Finite-Difference Approximator, notes
- Zhang 2017: On the Relationship Between the OpenAI Evolution Strategy and Stochastic Gradient Descent, notes
*The phrasing in these notes is sometimes copied directly from the texts; other times, the notes diverge considerably.