Artificial Neural Networks: Lecture Notes
Outline
I. Supervised ANNs: examples, applications, further topics
II. Unsupervised ANNs: examples, applications, further topics

The biological neuron
- Billions of synapses in the human brain
- Chemical transmission and modulation of signals
- Inhibitory synapses
- Excitatory synapses

Neuron electrophysiology
- Action potential: ~100 mV
- Activation threshold: 20-30 mV
- Rest potential: -65 mV
- Spike time: 1-2 ms
- Refractory time: 10-20 ms

The generic neuron model
Stimulus: $u_i(t) = \sum_j w_{ij} x_j(t)$
Response: $y_i(t) = f(u_i(t) + u_{rest})$
where
$u_{rest}$ = resting potential
$x_j(t)$ = output of neuron j at time t
$w_{ij}$ = connection strength between neuron i and neuron j
$u_i(t)$ = total stimulus at time t
(Figure: neuron i receiving inputs $x_1(t), \dots, x_5(t)$ through weights $w_{i1}, \dots, w_{i5}$ and producing output $y_i(t)$.)
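A minimal numerical sketch of this stimulus/response model; the helper name, the choice of $f$, and all the numbers are ours for illustration, not from the slides (NumPy assumed):

```python
import numpy as np

def neuron_response(x, w, u_rest=0.0, f=np.tanh):
    """Generic neuron: total stimulus u = sum_j w_j x_j,
    response y = f(u + u_rest)."""
    u = np.dot(w, x)        # total stimulus u_i(t)
    return f(u + u_rest)    # response y_i(t)

# Five presynaptic outputs x_j(t) and connection strengths w_ij,
# echoing the figure; the values themselves are made up.
x = np.array([0.5, 1.0, 0.0, 0.2, 0.8])
w = np.array([0.1, -0.3, 0.7, 0.2, 0.05])
print(neuron_response(x, w))
```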

Activation functions: "hard" threshold
$f(z) = \mathrm{ON}$ if $z \ge \theta$, $\mathrm{OFF}$ otherwise ($\theta$ = threshold)
Stimulus $u_i = \sum_j w_{ij} x_j$, response $y_i = f(u_i + u_{rest})$.
Examples: perceptrons, Hopfield NNs, Boltzmann machines.
Main drawbacks: can only map binary functions; biologically implausible.

Activation functions: "soft" threshold
$f(z) = \dfrac{2}{1 + e^{-z}} - 1$
Stimulus and response as above.
Examples: MLPs, recurrent NNs, RBF NNs.
Main drawbacks: difficult to process time patterns; biologically implausible.
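The two activation functions side by side, as a short sketch; the threshold value and the +1/-1 encoding of ON/OFF are our choices, since the slides leave them unspecified:

```python
import numpy as np

THETA = 0.0  # threshold value chosen for illustration

def hard_threshold(z, theta=THETA):
    # f(z) = ON if z >= theta else OFF; ON/OFF encoded as +1/-1
    return np.where(z >= theta, 1.0, -1.0)

def soft_threshold(z):
    # f(z) = 2 / (1 + e^{-z}) - 1, a smooth version ranging over (-1, 1)
    return 2.0 / (1.0 + np.exp(-z)) - 1.0

z = np.linspace(-4.0, 4.0, 9)
print(hard_threshold(z))
print(soft_threshold(z).round(2))
```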

Spiking neuron model
Stimulus: $u_i(t) = \sum_j w_{ij} x_j(t)$
Response: $y_i(t) = f\big(u_i(t) + u_{rest} + \eta(t - t_f)\big)$ for $t \ge t_f$
Firing condition: $f(z) = \mathrm{ON}$ if $z \ge \theta$ and $dz/dt > 0$, $\mathrm{OFF}$ otherwise
where
$\eta$ = spike and afterspike potential
$u_{rest}$ = resting potential
$\epsilon(t, u(\tau))$ = trace at time t of the input at time $\tau$
$\theta$ = threshold
$x_j(t)$ = output of neuron j at time t
$w_{ij}$ = efficacy of the synapse between neuron i and neuron j
$u(t)$ = input stimulus at time t
(Figure: after a spike at $t_f$, the response $y(t)$ follows $u_{rest} + \eta(t - t_f)$.)
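A toy simulation of this firing rule. All the constants (threshold, afterspike amplitude and decay, time step) and the exponential form of $\eta$ are invented for illustration; only the "fire when the potential crosses $\theta$ from below, then add an afterspike potential" logic comes from the slides:

```python
import numpy as np

def simulate_spiking_neuron(stimulus, theta=1.0, u_rest=0.0,
                            eta_amp=-0.5, eta_tau=5.0, dt=1.0):
    """Fires when z = u + u_rest + eta crosses theta from below;
    after a spike at t_f, eta(t - t_f) is added to the potential."""
    t_f = -np.inf
    prev_z = -np.inf
    spikes = []
    for step, u in enumerate(stimulus):
        t = step * dt
        # afterspike potential eta(t - t_f), zero before the first spike
        eta = eta_amp * np.exp(-(t - t_f) / eta_tau) if np.isfinite(t_f) else 0.0
        z = u + u_rest + eta
        if z >= theta and z > prev_z:   # z >= theta and dz/dt > 0
            spikes.append(t)
            t_f = t
        prev_z = z
    return spikes

stim = np.concatenate([np.linspace(0.0, 2.0, 20), np.zeros(10)])
print(simulate_spiking_neuron(stim))
```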

Network topology
(Figure: input layer, hidden layers, and output layer; connections between layers may be fully connected or sparsely connected.)

10、cial Neural Networks - I28第27頁/共69頁5/1/2022Artificial Neural Networks - I29Presented by Martin Ho, Eddy Li, Eric Wong and Kitty Wong - Copyright 2000Linear Separability in Perceptrons 第28頁/共69頁5/1/2022Artificial Neural Networks - I30Presented by Martin Ho, Eddy Li, Eric Wong and Kitty Wong - Copyrig

11、ht 2000Learning Linearly Separable Functions (1)What can these functions learn ?Bad news:- There are not many linearly separable functions.Good news:- There is a perceptron algorithm that will learn any linearly separable function, given enough training examples.第29頁/共69頁5/1/2022Artificial Neural Ne

Perceptron learning (delta rule)
$\Delta w_{ij} = \eta\, e_i\, x_j$, with error $e_i = d_i - y_i$
where
$\eta$ = learning coefficient
$w_{ij}$ = connection from neuron $x_j$ to $y_i$
$x = (x_1, x_2, \dots, x_n)$ = ANN input
$y = (y_1, y_2, \dots, y_n)$ = ANN output
$d = (d_1, d_2, \dots, d_n)$ = desired output
$(x, d)$ = a training example
$e$ = ANN error
(Figure: a single-layer network with inputs $x_1, \dots, x_4$ and outputs $y_1, y_2, y_3$.)
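A minimal sketch of the delta rule learning a linearly separable function (logical AND). The bias is folded in as a constant third input; the function names and hyperparameters are ours:

```python
import numpy as np

def train_delta_rule(X, D, eta=0.1, epochs=20):
    """Single-layer delta rule: w_ij <- w_ij + eta * e_i * x_j,
    with e_i = d_i - y_i and a hard-threshold output."""
    n_out, n_in = D.shape[1], X.shape[1]
    W = np.zeros((n_out, n_in))
    for _ in range(epochs):
        for x, d in zip(X, D):
            y = np.where(W @ x >= 0, 1.0, 0.0)  # hard threshold at 0
            e = d - y                            # error e_i = d_i - y_i
            W += eta * np.outer(e, x)            # delta rule update
    return W

# Logical AND; the constant third input plays the role of the bias.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
D = np.array([[0], [0], [0], [1]], dtype=float)
W = train_delta_rule(X, D)
print(np.where(X @ W.T >= 0, 1, 0).ravel())   # expect [0 0 0 1]
```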

Hebb postulate (1949)
Correlation-based learning: connections between concurrently firing neurons are strengthened. Experimentally verified (1973).

General formulation: $\dfrac{dw_{ij}}{dt} = F(w_{ij}; y_i, x_j)$
Hebb postulate: $\dfrac{dw_{ij}}{dt} = \gamma\, y_i\, x_j$ ($\gamma$ = learning coefficient; $w_{ij}$ = connection from neuron $x_j$ to $y_i$)
Kohonen & Grossberg (ART): $\dfrac{dw_{ij}}{dt} = \gamma\, y_i\, (x_j - w_{ij})$
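Both rules as discrete-time update sketches; the learning rate, network sizes, and the choice of which output neuron is active are illustrative:

```python
import numpy as np

def hebb_step(W, x, y, eta=0.01):
    # Hebb postulate: dw_ij/dt = eta * y_i * x_j
    return W + eta * np.outer(y, x)

def instar_step(W, x, y, eta=0.01):
    # Kohonen/Grossberg form: dw_ij/dt = eta * y_i * (x_j - w_ij);
    # weights of active neurons move toward the input pattern.
    return W + eta * (y[:, None] * (x[None, :] - W))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 4))
x = np.array([1.0, 0.0, 1.0, 0.0])
y = np.array([1.0, 0.0, 0.0])       # assume only output neuron 0 fires
for _ in range(200):
    W = instar_step(W, x, y, eta=0.1)
print(np.round(W[0], 2))            # row 0 converges toward x
```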

ENERGY MINIMIZATION
We need an appropriate definition of energy for artificial neural networks; having that, we can use mathematical optimisation techniques to find how to change the weights of the synaptic connections between neurons.
ENERGY = a measure of task performance error.

A layered network as nested functions (inputs $x_1, \dots, x_4$, one output):
First layer: $y_{1i} = f(x_i, w_{1i})$, $i = 1, \dots, 4$; $y_1 = (y_{11}, y_{12}, y_{13}, y_{14})$
Second layer: $y_{2i} = f(y_1, w_{2i})$, $i = 1, 2, 3$; $y_2 = (y_{21}, y_{22}, y_{23})$
Output: $y_{out} = f(y_2, w_{31})$

A neural network implements an input/output transformation: $y_{out} = F(x, W)$, where W is the matrix of all weight vectors.

MLP = multi-layer perceptron
Perceptron: $y_{out} = w^T x$
MLP neural network:
$y_k^1 = \dfrac{1}{1 + e^{-w_k^{1T} x - a_k^1}}$, $k = 1, 2, 3$; $\quad y^1 = (y_1^1, y_2^1, y_3^1)^T$
$y_k^2 = \dfrac{1}{1 + e^{-w_k^{2T} y^1 - a_k^2}}$, $k = 1, 2$; $\quad y^2 = (y_1^2, y_2^2)^T$
$y_{out} = w^{3T} y^2$
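A forward pass for this particular MLP (sigmoid layers of sizes 3 and 2, then a linear output neuron). The layer sizes follow the equations above; the input dimension and all parameter values are our own illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, a1, W2, a2, w3):
    """y1_k = sigmoid(w1_k^T x + a1_k), y2_k = sigmoid(w2_k^T y1 + a2_k),
    y_out = w3^T y2 (a linear output neuron)."""
    y1 = sigmoid(W1 @ x + a1)
    y2 = sigmoid(W2 @ y1 + a2)
    return w3 @ y2

rng = np.random.default_rng(1)
x = rng.normal(size=4)                          # input dimension assumed to be 4
W1, a1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, a2 = rng.normal(size=(2, 3)), rng.normal(size=2)
w3 = rng.normal(size=2)
print(mlp_forward(x, W1, a1, W2, a2, w3))
```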

RBF = radial basis function: $r(x) = r(|x - c|)$, a function that depends only on the distance from a centre c.
Example, a Gaussian RBF: $f(x) = e^{-|x - w|^2 / (2a^2)}$
An RBF network with four basis functions:
$y_{out} = \sum_{k=1}^{4} w_{2,k}\, e^{-|x - w_k^1|^2 / (2 a_k^2)}$

Typical ANN tasks: control, classification, prediction, approximation. These can all be reformulated in general as FUNCTION APPROXIMATION tasks.
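The same four-basis-function network as a short sketch; the centres, widths, and output weights are made-up values:

```python
import numpy as np

def rbf_forward(x, centers, widths, w2):
    """Gaussian RBF net: y_out = sum_k w2_k exp(-|x - c_k|^2 / (2 a_k^2))."""
    dist2 = np.sum((centers - x) ** 2, axis=1)    # squared distances |x - w_k^1|^2
    return w2 @ np.exp(-dist2 / (2.0 * widths ** 2))

centers = np.array([[0.0], [1.0], [2.0], [3.0]])  # four centres in 1-D
widths = np.ones(4)                               # widths a_k
w2 = np.array([1.0, -0.5, 0.25, 2.0])             # output weights w_{2,k}
print(rbf_forward(np.array([1.5]), centers, widths, w2))
```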

Approximation: given a set of values of a function g(x), build a neural network that approximates the g(x) values for any input x.

Task specification
Data: a set of value pairs $(x_t, y_t)$, $y_t = g(x_t) + z_t$, where $z_t$ is random measurement noise.
Objective: find a neural network that represents the input/output transformation (a function) $F(x, W)$ such that $F(x, W)$ approximates $g(x)$ for every x.

Error measure: $E = \dfrac{1}{N} \sum_{t=1}^{N} \big(F(x_t; W) - y_t\big)^2$
Rule for changing the synaptic weights:
$\Delta w_{ij} = -c\, \dfrac{\partial E}{\partial w_{ij}}, \qquad w_{ij}^{new} = w_{ij} + \Delta w_{ij}$
where c is the learning parameter (usually a constant).

Learning with a perceptron
Perceptron: $y_{out} = w^T x$
Data: $(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)$
Error: $E(t) = (y(t) - y_t)^2 = (w(t)^T x_t - y_t)^2$
Learning:
$w_i(t+1) = w_i(t) - c\, \dfrac{\partial E(t)}{\partial w_i}$
$\dfrac{\partial E(t)}{\partial w_i} = 2\,(w(t)^T x_t - y_t)\, x_{t,i}$
so $w_i(t+1) = w_i(t) - 2c\,(w(t)^T x_t - y_t)\, x_{t,i}$
A perceptron is able to learn a linear function.
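This per-example gradient step, as a sketch fitting noisy samples of a linear function; the learning rate, epoch count, and data are illustrative:

```python
import numpy as np

def lms_train(X, y, c=0.01, epochs=100):
    """Per-example gradient descent for the linear perceptron:
    w(t+1) = w(t) - 2c (w(t)^T x_t - y_t) x_t."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            err = w @ x_t - y_t
            w -= 2.0 * c * err * x_t
    return w

# Noisy samples of the linear function g(x) = 3 x1 - 2 x2
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.01 * rng.normal(size=200)
print(np.round(lms_train(X, y), 2))   # close to [ 3. -2.]
```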

Learning with RBF neural networks
Only the synaptic weights of the output neuron are modified. An RBF neural network learns a nonlinear function.
RBF neural network: $F(x, W) = y_{out} = \sum_{k=1}^{M} w_k^2\, e^{-|x - w_k^1|^2 / (2a_k^2)}$
Data: $(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)$
Error: $E(t) = (y(t) - y_t)^2 = \Big(\sum_{k=1}^{M} w_k^2\, e^{-|x_t - w_k^1|^2 / (2a_k^2)} - y_t\Big)^2$
Learning:
$w_i^2(t+1) = w_i^2(t) - c\, \dfrac{\partial E(t)}{\partial w_i^2}$
$\dfrac{\partial E(t)}{\partial w_i^2} = 2\big(F(x_t, W(t)) - y_t\big)\, e^{-|x_t - w_i^1|^2 / (2a_i^2)}$
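A sketch of this rule, training only the output weights on a toy nonlinear target; the centres, widths, target function, and hyperparameters are our choices:

```python
import numpy as np

def rbf_basis(x, centers, widths):
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))

def rbf_train_output_weights(X, y, centers, widths, c=0.05, epochs=200):
    """Gradient learning of the output weights only:
    w2_i(t+1) = w2_i(t) - 2c (F(x_t, W) - y_t) * phi_i(x_t)."""
    w2 = np.zeros(len(centers))
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            phi = rbf_basis(x_t, centers, widths)
            err = w2 @ phi - y_t          # F(x_t, W(t)) - y_t
            w2 -= 2.0 * c * err * phi
    return w2

# Fit the toy nonlinear target g(x) = sin(x) on [0, 3]
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 3.0, size=(100, 1))
y = np.sin(X[:, 0])
centers = np.linspace(0.0, 3.0, 6).reshape(-1, 1)
widths = np.full(6, 0.5)
w2 = rbf_train_output_weights(X, y, centers, widths)
print(rbf_basis(np.array([1.0]), centers, widths) @ w2, np.sin(1.0))
```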

Learning with MLP neural networks
MLP neural network with p layers:
$y_k^1 = \dfrac{1}{1 + e^{-w_k^{1T} x - a_k^1}}$, $k = 1, \dots, M_1$; $\quad y^1 = (y_1^1, \dots, y_{M_1}^1)$
$y_k^2 = \dfrac{1}{1 + e^{-w_k^{2T} y^1 - a_k^2}}$, $k = 1, \dots, M_2$; $\quad y^2 = (y_1^2, \dots, y_{M_2}^2)$
...
$y_{out} = F(x; W) = w^{pT} y^{p-1}$
Data: $(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)$
Error: $E(t) = (y(t) - y_t)^2 = (F(x_t; W) - y_t)^2$
(Figure: x feeds layer 1, then layers 2 through p-1, then the output layer p.)

Learning: apply the chain rule for differentiation. First calculate the changes for the synaptic weights of the output neuron; then calculate the changes backward, starting from layer p-1, propagating the local error terms backward. The method is still relatively complicated, but it is much simpler than the original optimisation problem.

In general it is enough to have a single layer of nonlinear neurons in a neural network in order to learn to approximate a nonlinear function. In such a case, general optimisation may be applied without too much difficulty.

Example: an MLP neural network with a single hidden layer:
$y_{out} = F(x; W) = \sum_{k=1}^{M} w_k^2 \cdot \dfrac{1}{1 + e^{-w_k^{1T} x - a_k}}$

Synaptic weight change rules for the output neuron:
$w_i^2(t+1) = w_i^2(t) - c\, \dfrac{\partial E(t)}{\partial w_i^2}$
$\dfrac{\partial E(t)}{\partial w_i^2} = 2\big(F(x_t; W(t)) - y_t\big) \cdot \dfrac{1}{1 + e^{-w_i^{1T} x_t - a_i}}$

Synaptic weight change rules for the neurons of the hidden layer:
$w_{i,j}^1(t+1) = w_{i,j}^1(t) - c\, \dfrac{\partial E(t)}{\partial w_{i,j}^1}$
$\dfrac{\partial E(t)}{\partial w_{i,j}^1} = 2\big(F(x_t; W(t)) - y_t\big) \cdot w_i^2 \cdot \dfrac{\partial}{\partial w_{i,j}^1}\left(\dfrac{1}{1 + e^{-w_i^{1T} x_t - a_i}}\right)$
$\dfrac{\partial}{\partial w_{i,j}^1}\left(\dfrac{1}{1 + e^{-w_i^{1T} x_t - a_i}}\right) = \dfrac{e^{-w_i^{1T} x_t - a_i}}{\big(1 + e^{-w_i^{1T} x_t - a_i}\big)^2} \cdot x_{t,j}$
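A sketch implementing these per-example update rules for the single-hidden-layer MLP. The network size, learning rate, epoch count, target function, and the bias update (which the slides do not write out, but which follows the same pattern) are our own choices; note that $e^{-z}/(1+e^{-z})^2 = \sigma(z)(1-\sigma(z))$ for the sigmoid $\sigma$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp_1hidden(X, y, M=8, c=0.05, epochs=2000, seed=4):
    """F(x; W) = sum_k w2_k * sigmoid(w1_k^T x + a_k), trained with
    per-example gradient steps on E(t) = (F(x_t; W) - y_t)^2."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.normal(size=(M, d))
    a = rng.normal(size=M)
    w2 = rng.normal(scale=0.1, size=M)
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            h = sigmoid(W1 @ x_t + a)          # hidden activations
            err = w2 @ h - y_t                 # F(x_t; W(t)) - y_t
            grad_h = h * (1.0 - h)             # sigmoid derivative
            # hidden-layer rule: dE/dw1_ij = 2 err w2_i h_i (1 - h_i) x_tj
            W1 -= 2.0 * c * err * (w2 * grad_h)[:, None] * x_t[None, :]
            a  -= 2.0 * c * err * (w2 * grad_h)   # bias update, same pattern
            # output-neuron rule: dE/dw2_i = 2 err h_i
            w2 -= 2.0 * c * err * h
    return W1, a, w2

# Approximate the nonlinear target g(x) = x^2 on [-1, 1]
rng = np.random.default_rng(5)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = X[:, 0] ** 2
W1, a, w2 = train_mlp_1hidden(X, y)
print(w2 @ sigmoid(W1 @ np.array([0.5]) + a))   # should land near 0.25
```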

Further learning methods
Bayesian learning: the distribution of the neural network parameters is learnt.
Support vector learning: the minimal representative subset of the available data is used to calculate the synaptic weights of the neurons.

A taxonomy of artificial neural networks:
Feedforward
  - Supervised: MLP, RBF
  - Unsupervised: Kohonen
Recurrent
  - Supervised: Elman, Jordan, Hopfield
  - Unsupervised: ART

Neural Network Approaches
ALVINN: Autonomous Land Vehicle In a Neural Network

溫馨提示

  • 1. 本站所有資源如無特殊說明,都需要本地電腦安裝OFFICE2007和PDF閱讀器。圖紙軟件為CAD,CAXA,PROE,UG,SolidWorks等.壓縮文件請(qǐng)下載最新的WinRAR軟件解壓。
  • 2. 本站的文檔不包含任何第三方提供的附件圖紙等,如果需要附件,請(qǐng)聯(lián)系上傳者。文件的所有權(quán)益歸上傳用戶所有。
  • 3. 本站RAR壓縮包中若帶圖紙,網(wǎng)頁內(nèi)容里面會(huì)有圖紙預(yù)覽,若沒有圖紙預(yù)覽就沒有圖紙。
  • 4. 未經(jīng)權(quán)益所有人同意不得將文件中的內(nèi)容挪作商業(yè)或盈利用途。
  • 5. 人人文庫網(wǎng)僅提供信息存儲(chǔ)空間,僅對(duì)用戶上傳內(nèi)容的表現(xiàn)方式做保護(hù)處理,對(duì)用戶上傳分享的文檔內(nèi)容本身不做任何修改或編輯,并不能對(duì)任何下載內(nèi)容負(fù)責(zé)。
  • 6. 下載文件中如有侵權(quán)或不適當(dāng)內(nèi)容,請(qǐng)與我們聯(lián)系,我們立即糾正。
  • 7. 本站不保證下載資源的準(zhǔn)確性、安全性和完整性, 同時(shí)也不承擔(dān)用戶因使用這些下載資源對(duì)自己和他人造成任何形式的傷害或損失。

最新文檔

評(píng)論

0/150

提交評(píng)論