




Neural Networks for Modeling Dynamical Systems (October 18, 2012)

Three classes of problems in dynamical systems
Figure: The three classes of problems for a dynamical system
- Forward problem: the system S and the input i (the excitation force) are known; find the output (response) o.
- Parameter identification problem: the input i and the output o are known; find the system S (the system parameters M, K, C).
- Inverse problem: the system S and the output (response) o are known; find the input i (the excitation force).
Figure: Formulation of the three classes of problems for a dynamical system
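As a concrete reading of the parameters M, K, C (a sketch only, assuming the usual mass-damping-stiffness interpretation of a mechanical system, which the slides do not spell out), the three problems can be stated for the standard equation of motion:

```latex
% Assumed governing equation of a linear mechanical system with
% mass M, damping C, stiffness K and excitation f(t):
%   M \ddot{x}(t) + C \dot{x}(t) + K x(t) = f(t)
\begin{aligned}
&\text{Forward problem:}        && M, C, K \text{ and } f(t) \text{ known} &&\Rightarrow \text{solve for the response } x(t)\\
&\text{Identification problem:} && f(t) \text{ and } x(t) \text{ known}    &&\Rightarrow \text{estimate } M, C, K\\
&\text{Inverse problem:}        && M, C, K \text{ and } x(t) \text{ known} &&\Rightarrow \text{reconstruct the excitation } f(t)
\end{aligned}
```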
Pigeons as art experts (Watanabe et al. 1995)
Experiment
- Pigeon in a Skinner box
- Present paintings of two different artists (e.g. Chagall / Van Gogh)
- Reward for pecking when presented with a particular artist (e.g. Van Gogh)
Figure: The pigeon and paintings by Van Gogh
Figure: Paintings by Van Gogh and by Marc Chagall
Figure: Paintings by Marc Chagall

Experimental results
- Pigeons were able to discriminate between Van Gogh and Chagall with 95% accuracy (when presented with pictures they had been trained on)
- Discrimination was still 85% successful for previously unseen paintings of the two artists
- Pigeons do not simply memorise the pictures: they can extract and recognise patterns (the "style"), and they generalise from what they have already seen to make predictions
- This is what neural networks (biological and artificial) are good at, unlike conventional computers
What is a neural network?
Neural Network (NN)
- Models of the brain and nervous system
- Highly parallel: processes information much more like the brain than a serial computer
- Learning: very simple principles, very complex behaviours
- Applications: as powerful problem solvers, as biological models

What is a neural network?
- Animals are able to react adaptively to changes in their external and internal environment, and they use their nervous system to perform these behaviours.
- An appropriate simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems.
- The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.
The neuron model
- The human brain contains about 10^11 neurons
- Each neuron is connected to about 10^4 other neurons
Figure: The neuron model
- Neurons are connected to one another through synapses, which transmit the signals between them
- Transmission mechanisms: chemical and electrical synapses; most neural activity is transmitted through the action of chemical synapses
- Structure of the neuron
Figure: Two connected neurons
- Signals can be transmitted unchanged or they can be altered by synapses. A synapse is able to increase or decrease the strength of the connection from neuron to neuron and cause excitation or inhibition of a subsequent neuron. This is where information is stored.
- The information processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons.
- One motivation for ANNs is to capture this kind of highly parallel computation based on distributed representations.

Neural network representation
- An ANN is composed of processing elements, called neurons or perceptrons, organized in different ways to form the network's structure.
- An ANN consists of perceptrons. Each of the perceptrons receives inputs, processes the inputs and delivers a single output.
ANNs - the basics
- ANNs incorporate the two fundamental components of biological neural nets: neurones (nodes) and synapses (weights)
Figure: The neuron model

ANNs - the basics
- Information flow is unidirectional: data is presented to the input layer, passed on to the hidden layer, and passed on to the output layer
- Information is distributed
- Information processing is parallel
Figure: The neuron model

Neurons and nodes
Figure: Neurons and nodes

The basic neural network model
Figure: The basic neural network model

Topologies of neural networks
Figure: Topologies of neural networks

Node structure and the activation function
- The activation function is also called the threshold function, transfer function or squashing function

Synapses and weights
Figure: Synapses and weights
The standard MP neuron model
- The MP model is one of the neuron models proposed by McCulloch and Pitts
- The MP model is the basis of most neural network models
Figure: The neuron
- Relation between the inputs and the output of a neuron:
  y = f(x, w),  with x in R^n, w in R^n, y in R^m
- Examples:
  Sigmoidal neuron: y = 1 / (1 + e^{-((w^T)x + a)})
  Gaussian neuron: y = e^{-||x - w||^2 / (2 a^2)}
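A minimal numerical sketch of these two neuron types (illustrative only; the function names and the parameter a mirror the formulas above and are not code from the original slides):

```python
import numpy as np

def sigmoidal_neuron(x, w, a):
    """Logistic neuron: y = 1 / (1 + exp(-(w.x + a)))."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + a)))

def gaussian_neuron(x, w, a):
    """Gaussian (radial-basis) neuron: y = exp(-||x - w||^2 / (2 a^2))."""
    return np.exp(-np.sum((x - w) ** 2) / (2.0 * a ** 2))

x = np.array([0.5, -1.0, 2.0])   # input vector, x in R^n
w = np.array([0.3, 0.8, -0.2])   # weight vector, w in R^n
print(sigmoidal_neuron(x, w, a=0.1))
print(gaussian_neuron(x, w, a=1.0))
```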
What is learning?
- Learning = learning by adaptation
- At the neural level, learning happens by changing the synaptic strengths, by eliminating some synapses, and by building new ones
- Learning is an optimisation process: the objective of learning in biological organisms is to optimise the amount of available resources, happiness, or in general to achieve a state closer to the optimum

Learning rules
- A neural network learns by changing the connection weights between its neurons
- We need a rule for changing the weight w_ij

Hebb's learning rule
- Synchronous activation increases the synaptic strength: if neurons i and j are active at the same time, the connection between them is strengthened,
  Δw_ij = α u_i v_j
  where u_i and v_j are the states of the two neurons and α is a proportionality constant (the learning rate)
- This rule is consistent with experimental observations on real neurons
- Asynchronous activation decreases the synaptic strength
- These rules fit with energy minimization principles.
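A small sketch of this Hebbian update (the variable names u, v and the learning rate alpha are illustrative, simply mirroring the rule above):

```python
import numpy as np

def hebbian_update(W, u, v, alpha=0.01):
    """Hebb's rule: strengthen w_ij in proportion to the joint activity u_i * v_j."""
    return W + alpha * np.outer(u, v)

# two small layers of activities (e.g. pre- and post-synaptic firing rates)
u = np.array([1.0, 0.0, 1.0])        # activity of neurons i
v = np.array([0.5, 1.0])             # activity of neurons j
W = np.zeros((u.size, v.size))       # synaptic weights w_ij

W = hebbian_update(W, u, v, alpha=0.1)
print(W)   # entries grow only where both neurons were active together
```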
Learning principle: energy minimization
- Maintaining synaptic strength needs energy; it should be maintained at those places where it is needed, and it shouldn't be maintained at places where it is not needed.
- We need an appropriate definition of energy for artificial neural networks, and using it we can apply mathematical optimisation techniques to find how to change the weights of the synaptic connections between neurons.
- ENERGY = measure of task performance error
The simple perceptron
Figure: The simple perceptron
- y = f( Σ_{i=1}^{n} w_i x_i ), where the w_i are the weights

The multilayer perceptron
- Multilayer perceptron (MLP = multi-layer perceptron): if one or more layers of neurons (hidden-layer neurons) are inserted between the input and output layers, we obtain a multilayer feedforward network, the multilayer perceptron.
Figure: The multilayer perceptron
- How many layers should be used? How many units should each layer have?
- A multilayer perceptron with two hidden layers and enough units per layer can realise essentially any practical classification task.
- Example (a 3-2-1 multilayer perceptron):
  y_k^1 = 1 / (1 + e^{-((w_k^1)^T x + a_k^1)}),  k = 1, 2, 3
  y^1 = (y_1^1, y_2^1, y_3^1)^T
  y_k^2 = 1 / (1 + e^{-((w_k^2)^T y^1 + a_k^2)}),  k = 1, 2
  y^2 = (y_1^2, y_2^2)^T
  y_out = Σ_{k=1}^{2} w_k^3 y_k^2 = (w^3)^T y^2
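The forward pass of this small 3-2-1 network can be sketched as follows (a toy illustration; the layer sizes and parameter names simply mirror the formulas above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, a1, W2, a2, w3):
    """Forward pass of a 3-2-1 multilayer perceptron with sigmoidal hidden units."""
    y1 = sigmoid(W1 @ x + a1)      # first hidden layer, 3 units
    y2 = sigmoid(W2 @ y1 + a2)     # second hidden layer, 2 units
    return w3 @ y2                 # linear output unit, y_out

rng = np.random.default_rng(0)
x = np.array([0.2, -0.7])                               # input vector
W1, a1 = rng.normal(size=(3, 2)), rng.normal(size=3)    # layer-1 weights and biases
W2, a2 = rng.normal(size=(2, 3)), rng.normal(size=2)    # layer-2 weights and biases
w3 = rng.normal(size=2)                                 # output weights
print(mlp_forward(x, W1, a1, W2, a2, w3))
```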
Function approximation
- Given a set of values of a function g(x), build a neural network that approximates the values of g(x) for any input x.
- Data: a set of value pairs (x^t, y_t), with y_t = g(x^t) + z_t, where z_t is random measurement noise.
- Objective: find a neural network that represents the input/output transformation (a function) F(x, W) such that F(x, W) approximates g(x) for every x.

Function approximation
- Error measure:
  E = (1/N) Σ_{t=1}^{N} ( F(x^t; W) - y_t )^2
- Rule for changing the synaptic weights:
  Δw_i^j = -c ∂E(W)/∂w_i^j
  w_i^{j,new} = w_i^j + Δw_i^j
  where c is the learning parameter (learning rate).

The backpropagation (BP) algorithm
- For a given multilayer network, the BP algorithm is used to learn the weights.
- It uses gradient descent to minimise the error function (the squared error between the network outputs and the target outputs).
- For multiple outputs, the error function E sums the squared errors over all the output units:
  E(W) = (1/2) Σ_{d∈D} Σ_{k∈outputs} ( t_kd - o_kd )^2
  where outputs is the set of output units in the network, and t_kd and o_kd are the target and output values associated with the k-th output unit and training example d.
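A compact sketch of this gradient-descent weight update for the error measure above (here the gradient is estimated numerically so the example stays self-contained; a real BP implementation would propagate the error backwards through the layers analytically):

```python
import numpy as np

def error(W, F, xs, ys):
    """E = (1/N) * sum_t (F(x_t; W) - y_t)^2."""
    return np.mean([(F(x, W) - y) ** 2 for x, y in zip(xs, ys)])

def gradient_step(W, F, xs, ys, c=0.05, eps=1e-6):
    """delta_w = -c * dE/dw, with dE/dw estimated by finite differences."""
    grad = np.zeros_like(W)
    for i in range(W.size):
        Wp = W.copy()
        Wp.flat[i] += eps
        grad.flat[i] = (error(Wp, F, xs, ys) - error(W, F, xs, ys)) / eps
    return W - c * grad

# toy model: a single linear neuron F(x, W) = W . x, fitted to noisy data
F = lambda x, W: W @ x
xs = [np.array([x, 1.0]) for x in np.linspace(-1, 1, 20)]
ys = [2.0 * x[0] - 0.5 + 0.01 * np.random.randn() for x in xs]

W = np.zeros(2)
for _ in range(200):
    W = gradient_step(W, F, xs, ys)
print(W, error(W, F, xs, ys))   # W approaches [2.0, -0.5]; the error approaches the noise level
```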
Applications of neural networks
- Control
- Classification
- Prediction
- Approximation

Comparison of conventional control and ANN control
Figure: A conventional controller compared with an ANN controller

The ANN controller
- Emulator neural network: trained to imitate the responses of the unknown structure; used for training the controller neural network.
- Controller neural network: trained to generate the control force; used ...