Deep Learning is the area of machine learning closest to AI: its motivation is to build neural networks that simulate how the human brain analyzes and learns. I have recently been studying some deep learning topics within machine learning, and this post collects some useful materials and notes.
Key Words: supervised and unsupervised learning; classification, regression; density estimation, clustering; deep learning; Sparse DBN
1. Supervised and Unsupervised Learning
Given a set of (input, target) data pairs Z = (X, Y).
Supervised learning: the most common tasks are regression and classification.
regression: Y is a real-valued vector. The regression problem is to fit a curve f to the pairs (X, Y) so that the following cost function L (the usual squared error) is minimized:

$$L(f(X), Y) = \|f(X) - Y\|^2$$
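As a concrete illustration (not from the original post), here is a tiny numpy sketch that fits a linear f by gradient descent on this squared-error cost; the toy data, model, and learning rate are all made up:

```python
import numpy as np

# Hypothetical toy data: X is the input, Y the real-valued target.
rng = np.random.RandomState(0)
X = rng.randn(100, 1)
Y = 3.0 * X[:, 0] + 0.5 + 0.1 * rng.randn(100)

# Linear model f(X) = X.w + b, fitted by gradient descent on
# the squared-error cost L = mean ||f(X) - Y||^2.
w, b = np.zeros(1), 0.0
for _ in range(500):
    pred = X.dot(w) + b
    err = pred - Y
    cost = np.mean(err ** 2)
    w -= 0.1 * 2 * X.T.dot(err) / len(Y)
    b -= 0.1 * 2 * err.mean()

print(w, b, cost)  # cost should approach the noise floor (~0.01)
```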
classification: Y is a finite number that can be read as a class label. Classification first requires labeled data to train the classifier, so it is a supervised learning process. In classification, the cost function L(X, Y) is the negative log of the probability that X belongs to class Y:

$$L(f(X), Y) = -\log f_Y(X), \quad \text{where } f_i(X) = P(Y = i \mid X),\ f_i(X) \ge 0,\ \sum_i f_i(X) = 1$$
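For illustration, a small numpy sketch of this cost; the original text does not say how f_i(X) is produced, so the softmax over raw scores below is an assumption of mine:

```python
import numpy as np

def nll(scores, y):
    """Negative log-likelihood L(X, Y) = -log f_Y(X), where
    f_i(X) = P(Y=i | X) is produced here by a softmax over raw scores."""
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

scores = np.array([[2.0, 0.5, -1.0]])  # raw scores for 3 classes
print(nll(scores, np.array([0])))      # small: class 0 has high probability
```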
Unsupervised learning: the goal is to learn a function f that characterizes the unknown distribution P(Z) of the given data. It includes two tasks: density estimation and clustering.
Density estimation: estimate the density of the data distribution at any given point.
Clustering: group Z into several clusters (e.g. with K-Means), or output the probability that a sample belongs to each cluster; since the clusterer does not have to be trained on labeled data beforehand, this is an unsupervised learning process (a minimal K-Means sketch follows below).
PCA and many deep learning algorithms are also unsupervised.
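Since K-Means comes up above, here is a minimal numpy sketch of Lloyd's algorithm; the toy data, k, and function name are mine, purely for illustration:

```python
import numpy as np

def kmeans(Z, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.RandomState(seed)
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated blobs; kmeans should recover the two groups.
Z = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, labels = kmeans(Z, k=2)
```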
2. [An introduction to Deep Learning](http://www.iro.umontreal.ca/~pift6266/H10/notes/deepintro.html)
The notion of depth: the length of the longest path from an input to an output.
Three points about deep architectures: insufficient depth causes problems (a too-shallow architecture may need many more computational elements to represent the same function); the human brain has a deep architecture (each layer performs one more level of abstraction, building features out of lower-layer features, i.e. the [feature hierarchy mentioned in the previous post](http://blog.csdn.net/abcjennifer/article/details/7804962), and that hierarchy is a [sparse matrix](http://blog.csdn.net/abcjennifer/article/details/7748833)); and cognition proceeds layer by layer, abstracting step by step.
[Three papers](http://www.iro.umontreal.ca/~pift6266/H10/notes/deepintro.html#breakthrough-in-learning-deep-architectures) introducing Deep Belief Networks, the breakthrough in learning deep architectures.
3. The core ideas of Deep Learning algorithms:
Viewing the learning hierarchy as a network, the core ideas are:
① use unsupervised learning to pre-train each layer of the network;
② train only one layer at a time with unsupervised learning, and feed its output to the next (higher) layer as input;
③ use supervised learning to tune all the layers together.
Here is my informal understanding, using autoencoders as the example: unsupervised learning learns the features, and supervised learning is used for fine-tuning. Each neural network's learned hidden layer is a feature, and it becomes the input for the next round of unsupervised learning; repeating this builds a deep network in which every layer is the hidden layer learned in the previous round. Finally, a softmax classifier is used to fine-tune the weights of this deep network (a schematic sketch follows below).
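To make the three steps concrete, here is a schematic numpy sketch of greedy layer-wise pre-training with tied-weight autoencoders; the layer sizes, toy data, and training details are all illustrative assumptions, not the exact recipe of any cited paper:

```python
import numpy as np

rng = np.random.RandomState(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=50):
    """One-layer autoencoder trained by gradient descent on squared
    reconstruction error; tied weights (decoder = W.T) keep it short."""
    n_in = X.shape[1]
    W = rng.randn(n_in, n_hidden) * 0.1
    b, c = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(epochs):
        h = sigmoid(X.dot(W) + b)      # encode: the learned "feature"
        r = sigmoid(h.dot(W.T) + c)    # decode: reconstruction of X
        # Backpropagation of the squared reconstruction error.
        dr = (r - X) * r * (1 - r)
        dh = dr.dot(W) * h * (1 - h)
        W -= lr * (X.T.dot(dh) + dr.T.dot(h)) / len(X)
        b -= lr * dh.mean(axis=0)
        c -= lr * dr.mean(axis=0)
    return W, b

# Steps 1 and 2: each layer's hidden code becomes the next layer's
# unsupervised input, building the deep network one layer at a time.
X = rng.rand(200, 64)
layers, inp = [], X
for n_hidden in (32, 16):
    W, b = train_autoencoder(inp, n_hidden)
    layers.append((W, b))
    inp = sigmoid(inp.dot(W) + b)  # feed features upward

# Step 3 would stack a softmax classifier on `inp` and fine-tune
# all layers with supervised backpropagation (omitted here).
```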

These three points are the essence of Deep Learning algorithms. I also touched on them in the [previous post](http://blog.csdn.net/abcjennifer/article/details/7804962); its third part, Learning Features Hierarchy & Sparse DBN, explains how to use a Sparse DBN for feature learning.
4. Classic Deep Learning reading material:
> - The monograph or review paper [Learning Deep Architectures for AI](http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/239) (Foundations & Trends in Machine Learning, 2009).
> - The ICML 2009 Workshop on Learning Feature Hierarchies [webpage](http://www.cs.toronto.edu/~rsalakhu/deeplearning/index.html) has a [list of references](http://www.cs.toronto.edu/~rsalakhu/deeplearning/references.html).
> - The LISA [public wiki](http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/WebHome) has a [reading list](http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/ReadingOnDeepNetworks) and a [bibliography](http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/DeepNetworksBibliography).
> - Geoff Hinton has [readings](http://www.cs.toronto.edu/~hinton/deeprefs.html) from last year's [NIPS tutorial](http://videolectures.net/jul09_hinton_deeplearn/).
> Three papers that lay out the main ideas of deep learning:
> - Hinton, G. E., Osindero, S. and Teh, Y., [A fast learning algorithm for deep belief nets](http://www.cs.toronto.edu/~hinton/absps/fastnc.pdf), Neural Computation 18:1527-1554, 2006 **<introduces DBNs>**
> - Yoshua Bengio, Pascal Lamblin, Dan Popovici and Hugo Larochelle, [Greedy Layer-Wise Training of Deep Networks](http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/190), in J. Platt et al. (Eds), Advances in Neural Information Processing Systems 19 (NIPS 2006), pp. 153-160, MIT Press, 2007 **<compares RBMs and auto-encoders>**
> - Marc'Aurelio Ranzato, Christopher Poultney, Sumit Chopra and Yann LeCun, [Efficient Learning of Sparse Representations with an Energy-Based Model](http://yann.lecun.com/exdb/publis/pdf/ranzato-06.pdf), in J. Platt et al. (Eds), Advances in Neural Information Processing Systems (NIPS 2006), MIT Press, 2007 **<applies sparse auto-encoders to a convolutional architecture>**
> After 2006, deep learning papers appeared in large numbers; if you are interested, read Yoshua Bengio's survey [Learning Deep Architectures for AI](http://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf), although fair warning: it is long, very long...
5. A Deep Learning tool: [Theano](http://deeplearning.net/software/theano)
[Theano](http://deeplearning.net/software/theano) is a Python library for deep learning. It assumes familiarity with Python and numpy, so readers should first work through the [Theano basic tutorial](http://deeplearning.net/software/theano/tutorial), then follow [Getting Started](http://deeplearning.net/tutorial/gettingstarted.html#gettingstarted) to download the data and learn it with gradient descent.
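As a taste of the workflow that tutorial teaches, here is a minimal Theano sketch: define a symbolic cost, let T.grad differentiate it, and compile one gradient-descent step; the toy cost and learning rate here are mine, not from the tutorial:

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic input and a shared parameter vector.
x = T.vector('x')
w = theano.shared(np.zeros(3), name='w')

# A toy cost; T.grad differentiates it symbolically.
cost = T.sum((T.dot(w, x) - 1.0) ** 2)
gw = T.grad(cost, w)

# Compile one gradient-descent step: return the cost, update w in place.
step = theano.function([x], cost, updates=[(w, w - 0.01 * gw)])

for _ in range(100):
    c = step(np.array([1.0, 2.0, 3.0]))
print(w.get_value(), c)  # cost c shrinks toward 0 as w.x approaches 1
```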
Once you know the Theano basics, try implementing the following algorithms:
Supervised learning:
1. [*Logistic Regression*](http://deeplearning.net/tutorial/logreg.html#logreg) - using Theano for something simple
2. [*Multilayer perceptron*](http://deeplearning.net/tutorial/mlp.html#mlp) - introduction to layers
3. [*Deep Convolutional Network*](http://deeplearning.net/tutorial/lenet.html#lenet) - a simplified version of LeNet5
Unsupervised learning:
- [*Auto Encoders, Denoising Autoencoders*](http://deeplearning.net/tutorial/dA.html#daa) - description of autoencoders
- [*Stacked Denoising Auto-Encoders*](http://deeplearning.net/tutorial/SdA.html#sda) - easy steps into unsupervised pre-training for deep nets
- [*Restricted Boltzmann Machines*](http://deeplearning.net/tutorial/rbm.html#rbm) - single layer generative RBM model
- [*Deep Belief Networks*](http://deeplearning.net/tutorial/DBN.html#dbn) - unsupervised generative pre-training of stacked RBMs followed by supervised fine-tuning
Finally, a few recommended ML books:
- [Chris Bishop, “Pattern Recognition and Machine Learning”, 2007](http://research.microsoft.com/en-us/um/people/cmbishop/prml/)
- [Simon Haykin, “Neural Networks: a Comprehensive Foundation”, 2009 (3rd edition)](http://books.google.ca/books?id=K7P36lKzI_QC&dq=simon+haykin+neural+networks+book&source=gbs_navlinks_s)
- [Richard O. Duda, Peter E. Hart and David G. Stork, “Pattern Classification”, 2001 (2nd edition)](http://www.rii.ricoh.com/~stork/DHS.html)
More machine learning material will keep being added; please follow this blog and the Sina Weibo account [Sophia_qing](http://weibo.com/u/2607574543).
References:
1. [Brief Introduction to ML for AI](http://www.iro.umontreal.ca/~pift6266/H10/notes/mlintro.html)
2. [Deep Learning Tutorial](http://deeplearning.net/tutorial/)
3. [A tutorial on deep learning - Video](http://videolectures.net/jul09_hinton_deeplearn/)
Note: reposted from Rachel Zhang's column: http://blog.csdn.net/abcjennifer/article/details/7826917