Deep Learning: Techniques and Applications

The Challenge to AI
The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally: the problems that we solve intuitively, that feel automatic, like recognizing spoken words or faces in images.

[Figure: flowcharts showing how the different parts of an AI system relate to each other within different AI disciplines; shaded boxes indicate components that are able to learn from data.]

[Figure: two of the three historical waves of artificial neural nets research, as measured by the frequency of the phrases "cybernetics" and "connectionism" or "neural networks" according to Google Books.]

Key Trends about Deep Learning
- Deep learning has become more useful as the amount of available training data has increased.
- Deep learning models have grown in size over time as computer hardware and software infrastructure for deep learning has improved.
- Deep learning has solved increasingly complicated applications with increasing accuracy over time.

Milestones
- 2009: Microsoft's speech recognition experts Li Deng and Dong Yu began to collaborate with Hinton.
- 2011: Microsoft's speech recognition research achieved a breakthrough: by concatenating the speech features of multiple consecutive frames into one input and extracting information features level by level, deep models were combined seamlessly with traditional speech recognition techniques, substantially raising the recognition rate without incurring any extra system overhead.
- LeCun: Convolutional Neural Networks.
- October 2012: Hinton's team obtained the best result on the ImageNet classification problem with a deeper CNN, a major advance for image recognition.
- 2015: still lower error rates were announced.

Deep Learning in NLP & …

Why Neural Networks?
The human cerebral cortex is 2 to 4 millimetres thick. The different cortical layers each contain a characteristic distribution of neuronal cell types and of connections with other cortical and subcortical regions. The more ancient part of the cerebral cortex, the hippocampus, has at most three cellular layers; the most recent part, the neocortex (also called isocortex), has six. Neurons in the various layers connect vertically to form small microcircuits; the cortex is organized vertically in columns and horizontally in layers.

The different regions of the somatosensory cortex receive their main inputs from different kinds of receptors: area 3b receives most of its projections from the superficial skin, while area 3a receives input from receptors in the muscle spindles. Input from the thalamus arrives at layer IV, where neurons distribute information up and down the layers. [Kaas et al., 1979]

Hubel–Wiesel
Hubel, David H., and Torsten N. Wiesel. "Receptive fields of single neurones in the cat's striate cortex." The Journal of Physiology 148, no. 3 (1959): 574-591.
Hubel and Wiesel were awarded the Nobel Prize in Physiology or Medicine (1981).
Further reading: Bruno …, David …; Cornell … (different objects).

Deep learning research groups
- University of Toronto – Machine Learning Group (Geoff Hinton, Rich Zemel, Ruslan Salakhutdinov, Brendan Frey, Radford Neal)
- Université de Montréal – LISA Lab (Yoshua Bengio, Pascal Vincent, Aaron Courville, Roland Memisevic)
- New York University – Yann LeCun's and Rob Fergus' groups
- Stanford University – Andrew Ng's group
- University of Oxford – deep learning group (Nando de Freitas and Phil Blunsom)
- Google Research – Jeff Dean, Samy Bengio, Jason Weston, Marc'Aurelio Ranzato, Dumitru Erhan, Quoc Le et al.
- Microsoft Research – Li Deng et al.
- SUPSI – IDSIA (Jürgen Schmidhuber's group)
- UC Berkeley – Bruno Olshausen's group
- University of Washington – Pedro Domingos' group
- IDIAP Research Institute – Ronan Collobert's group
- University of California, Merced – Miguel A. Carreira-Perpinan's group
- University of Helsinki – Aapo Hyvärinen's neuroinformatics group
- Université de Sherbrooke – Hugo Larochelle's group
- University of Guelph – Graham Taylor's group
- University of Michigan – Honglak Lee's group
- Technical University of Berlin – Klaus-Robert Müller's group
- Baidu – Kai Yu's group
- Aalto University – Juha Karhunen and Tapani Raiko's group
- University of Amsterdam – Max Welling's group
- University of California, Irvine – Pierre Baldi's group
- Ghent University – Benjamin Schrauwen's group
- University of Tennessee – Itamar Arel's group
- IBM Research – Brian Kingsbury et al.
- University of Bonn – Sven Behnke's group
- Gatsby Unit @ University College London – Maneesh Sahani, Yee-Whye Teh, Peter Dayan
- Computational Cognitive Neuroscience Lab @ University of Colorado

There is an interesting history of people's changing attitudes toward deep architectures versus shallow ones. (2012, "A survey on deep learning - one small step toward AI")

Historical Context of Deep Learning
Around 1960, the first generation of neural networks was born: Rosenblatt's Perceptron. Its ability to classify some basic shapes, like triangles and squares, let people see the potential that a real intelligent machine, one that can sense, learn, remember and recognize like human beings, might be invented along this path. But its fundamental limitations soon broke people's confidence.

Criticism from Marvin Minsky: one apparent reason is that the feature layer of the Perceptron is fixed and crafted by human beings, which is absolutely against the definition of a real "intelligent machine". Another reason is that its single-layer structure limits the functions it can learn; for example, the exclusive-or (XOR) function is beyond its learning capability.
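To see the XOR limitation concretely, here is a minimal NumPy sketch (the hidden-layer size, learning rate, seed and step budget are assumptions of this illustration, not from the slides): one hidden layer trained by back-propagation fits XOR exactly.

```python
import numpy as np

# XOR truth table: not linearly separable, so a single-layer
# perceptron cannot represent it; one hidden layer is enough.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=1.0, size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)      # forward: hidden activations
    out = sigmoid(h @ W2 + b2)    # forward: predictions, shape (4, 1)
    # backward: chain rule for the squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 3))   # converges toward [0, 1, 1, 0]
```

A version with the hidden layer removed (X fed straight into the sigmoid output unit) must misclassify at least one of the four cases, since no single line separates them; that is exactly the limitation Minsky pointed out.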
Around 1985, building on the Perceptron, Geoffrey Hinton and colleagues replaced the original single fixed feature layer with several hidden layers, creating the second-generation neural network, trained via the back-propagation (BP) algorithm (proposed in 1969, made practicable in the mid-1980s). But BP did not work well in deep networks:
- The correcting signal is weakened as it passes back through multiple layers.
- BP often gets trapped in poor local optima when batch-mode or even stochastic gradient descent is used, and the severity increases significantly as the depth of the network grows.
- Learning is too slow across multiple hidden layers.

In 1989, Yann LeCun et al. built a deep neural network with the purpose of recognizing handwritten ZIP codes on mail. Despite the success of applying the algorithm, the time to train the network on that dataset was approximately 3 days.

SVMs slowed down the development of neural networks. While people were trying to improve Hinton's neural networks on those weaknesses (increasing the training data set, estimating better initial weights), in 1993-1995 Vladimir N. Vapnik et al. made improvements on the original Perceptron, creating a new family of models: Support Vector Machines.

SVM: good or bad?
- SVM makes learning fast and easy, thanks to its simple structure.
- It is appropriate for data with a simple structure, e.g. data with a small number of features, or data that does not contain hierarchical features.
- But for data which itself contains complicated features, SVM tends to perform worse because of that same simple structure. One way to solve this problem is to add prior knowledge to the SVM model in order to obtain a better feature layer, but it is hard to find a general set of priors.

SVM takes us away from the road to a real intelligent machine: despite the fact that SVMs work really well on many AI problems, they are not a good trend for AI because of a fatal deficiency, the shallow architecture. An SVM is still a kind of Perceptron, whose features are hand-crafted rather than learnt from the data. With the purpose of finding an architecture that meets the requirements above, some researchers started to look back to the multi-layer neural network, trying to exploit its advantages related to depth and to overcome the limitations...

After 2010
- 2010: The US Department of Defense's DARPA program funded deep learning projects for the first time; participants included Stanford University, New York University and NEC.
- 2011: Speech recognition researchers at Microsoft and Google successively adopted DNN techniques to cut speech recognition error rates by 20%-30%, the biggest breakthrough in the field in ten years.
- 2012: Hinton reduced the Top-5 error rate on the ImageNet image classification problem from 26% to 15%. Andrew Ng helped build Google Brain, applying deep networks to audio and images.
- 2013: DNNResearch, the company Hinton founded, was acquired by Google.
- 2013: Yann LeCun joined Facebook's artificial intelligence lab.
- Baidu applied deep learning to CTR prediction (Click-Through-Rate Prediction) and retrieval, reaching an internationally leading level; in 2014, Andrew Ng joined Baidu.
- 2013: Tencent began building its deep learning platform Mariana. Aimed at application areas such as speech recognition, image recognition and recommendation, Mariana provides parallel implementations of default algorithms to reduce the cost of developing new ones, including DNN GPU and DNN CPU parallel frameworks.

BM -> RBM -> DBN
A DBN model can be viewed as a stack of several RBMs (Restricted Boltzmann Machines), and it can be trained by training these RBMs layer by layer, from bottom to top:
(1) the bottom RBM is trained on the raw input data;
(2) the features extracted by the bottom RBM are then used as the input for training the RBM on top of it.
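A runnable sketch of this greedy layer-by-layer scheme, using scikit-learn's BernoulliRBM; the two-layer 64 -> 32 -> 16 stack and the random toy data are this edit's assumptions, not from the slides.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Toy binary data standing in for the "raw input" of step (1).
rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(float)

# Greedy layer-wise training of a 64 -> 32 -> 16 stack of RBMs.
rbms, layer_input = [], X
for n_hidden in [32, 16]:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=20, random_state=0)
    rbm.fit(layer_input)                      # step (1): train on the layer below
    layer_input = rbm.transform(layer_input)  # step (2): its features feed the next RBM
    rbms.append(rbm)

# layer_input now holds the top-level features of the stacked model.
print(layer_input.shape)  # (500, 16)
```

In a full DBN, the weights learned this way would then initialize a deep network that is fine-tuned with back-propagation, as in the greedy layer-wise training paper cited below.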
Autoencoders (Hinton)
The autoencoder appeared in the late 1980s. It was used mainly for dimensionality reduction, playing a role similar to principal component analysis. Notation: the input and output layers each have m neurons; the hidden layers have p and q neurons; each layer's neurons have bias terms; w denotes the weights between the input layer and the hidden layer, and w' the weights between the hidden layer and the output layer. (A sketch follows at the end of this section.)

Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Proceedings of Neural Information Processing Systems (NIPS).

- 2012: Hinton's group, deep CNN (ImageNet).
- 2013: Network in Network.
- 2013: ZF, Visualizing and Understanding Convolutional Networks.

RNN (Recurrent Neural Networks)
All biological neural networks are recurrent. Feedforward networks implement functions; RNNs implement dynamical systems: mathematically, the hidden state at each time step is computed from the current input and the previous hidden state (see the sketch below). Some old RNN architectures: …
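Picking up the autoencoder notation above (m input/output units, a hidden layer of p units, weights w and w'), here is a minimal NumPy sketch; the concrete sizes, toy data and training schedule are assumptions of this illustration, not from the slides.

```python
import numpy as np

# Minimal autoencoder: m input/output units, p hidden units,
# encoder weights w and decoder weights w_prime, as in the notation above.
m, p = 8, 3
rng = np.random.default_rng(0)
X = rng.random((200, m))                      # toy data to compress

w = rng.normal(scale=0.1, size=(m, p))        # input -> hidden weights
w_prime = rng.normal(scale=0.1, size=(p, m))  # hidden -> output weights
b_h, b_o = np.zeros(p), np.zeros(m)           # per-layer biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(3000):
    h = sigmoid(X @ w + b_h)                  # encode: m -> p
    x_hat = h @ w_prime + b_o                 # decode: p -> m (linear output)
    err = x_hat - X                           # reconstruction error
    # gradients of the mean squared reconstruction loss
    d_h = (err @ w_prime.T) * h * (1 - h)
    w_prime -= lr * h.T @ err / len(X)
    b_o -= lr * err.mean(axis=0)
    w -= lr * X.T @ d_h / len(X)
    b_h -= lr * d_h.mean(axis=0)

print(float(np.mean(err ** 2)))               # reconstruction MSE shrinks with training
```

A purely linear version of this network recovers essentially the same subspace as PCA, which is why autoencoders were first used for dimensionality reduction.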
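And for the recurrent networks just mentioned, a sketch of the dynamical-system view: the hidden state h_t = tanh(W x_t + U h_{t-1} + b) is carried across time steps, so each output depends on the whole input history. The sizes, seed and plain tanh cell below are illustrative assumptions.

```python
import numpy as np

# A vanilla RNN cell: unlike a feedforward net, the output at each
# step depends on an evolving hidden state, i.e. a dynamical system.
n_in, n_hidden = 4, 8
rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(n_hidden, n_in))      # input -> hidden
U = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # hidden -> hidden (recurrence)
b = np.zeros(n_hidden)

def rnn_forward(xs):
    """Run the recurrence h_t = tanh(W x_t + U h_{t-1} + b) over a sequence."""
    h = np.zeros(n_hidden)
    states = []
    for x in xs:                    # one step per element of the sequence
        h = np.tanh(W @ x + U @ h + b)
        states.append(h)
    return np.stack(states)

sequence = rng.random((10, n_in))   # toy sequence of 10 input vectors
hs = rnn_forward(sequence)
print(hs.shape)                     # (10, 8): one hidden state per time step
```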