Learning hidden features from unlabeled training data is called unsupervised learning. Understanding how data size constrains the learning process is a topic of interest not only in machine learning but also in cognitive neuroscience. The merit of unsupervised feature learning has puzzled the community for a long time, and now that deep learning has become popular and powerful, a theoretical basis for unsupervised learning is increasingly important but still lacking. Our simple statistical mechanics model substantially advances our understanding of how data size constrains learning, and opens a new perspective for both neural network training and related statistical physics studies.
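To make the setting concrete, here is a minimal toy sketch (not the authors' statistical mechanics model) of unsupervised feature learning: unlabeled samples are generated around a hidden direction, the direction is estimated as the top principal component, and the recovery quality improves as the number of samples grows. The spiked-data generator, the PCA estimator, and all parameter values are illustrative assumptions.

```python
# Toy illustration (assumed setup, not the authors' model): recovering a hidden
# feature direction from unlabeled data, and how recovery improves with data size.
import numpy as np

rng = np.random.default_rng(0)
dim = 200                          # dimension of each data point
w_true = rng.standard_normal(dim)
w_true /= np.linalg.norm(w_true)   # hidden feature direction (unit vector)

def overlap_with_hidden_feature(num_samples, signal=1.5):
    """Generate unlabeled samples x = signal * z * w_true + noise,
    estimate the hidden direction as the top principal component,
    and return the overlap |w_est . w_true| (1 = perfect recovery)."""
    z = rng.standard_normal((num_samples, 1))       # latent coefficients
    noise = rng.standard_normal((num_samples, dim))
    x = signal * z @ w_true[None, :] + noise        # unlabeled data
    cov = x.T @ x / num_samples                     # empirical covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    w_est = eigvecs[:, -1]                          # top principal component
    return abs(w_est @ w_true)

# More unlabeled data -> better recovery of the hidden feature.
for n in [50, 200, 1000, 5000]:
    print(f"samples = {n:5d}, overlap = {overlap_with_hidden_feature(n):.3f}")
```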
Updated: 2016.12.14