Chapter 4

Deep Learning

Text A

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.

Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to, and in some cases superior to, human experts.

Deep learning models are vaguely inspired by information processing and communication patterns in biological nervous systems, yet they differ in various ways from the structural and functional properties of biological brains (especially human brains), which makes them incompatible with the evidence from neuroscience.

Overview

Most modern deep learning models are based on an artificial neural network, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models, such as the nodes in deep belief networks and deep Boltzmann machines.

In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn on its own which features to optimally place in which level. (Of course, this does not completely obviate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.)

The "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output; CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network: the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth > 2. A CAP of depth 2 has been shown to be a universal approximator in the sense that it can emulate any function; beyond that, more layers do not add to the function-approximation ability of the network. Deep models (CAP > 2), however, are able to extract better features than shallow models, and hence the extra layers help in learning the features.

Deep learning architectures are often constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance.

For supervised learning tasks, deep learning methods obviate feature engineering by translating the data into compact intermediate representations akin to principal components, and they derive layered structures that remove redundancy in the representation.

Deep learning algorithms can also be applied to unsupervised learning tasks. This is an important benefit, because unlabeled data are more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.

Relation to human cognitive and brain development

Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s.
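The CAP-depth convention above (number of hidden layers plus one, since the output layer is also parameterized) can be made concrete with a toy feedforward pass. This is an illustrative sketch in plain NumPy, not code from the chapter; the layer sizes and random weights are arbitrary assumptions for the example.

```python
import numpy as np

def forward(x, weights):
    """Forward pass through a feedforward net. Each weight matrix is one
    parameterized transformation in the credit assignment path (CAP)."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(0.0, h @ W)   # hidden layer with ReLU activation
    return h @ weights[-1]           # output layer (also parameterized)

rng = np.random.default_rng(0)
# Two hidden layers plus one output layer -> CAP depth = 2 + 1 = 3 (> 2: "deep")
weights = [rng.standard_normal((4, 8)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((8, 3))]
cap_depth = len(weights)  # hidden layers + 1, because the output layer counts too
y = forward(rng.standard_normal(4), weights)
print(cap_depth, y.shape)
```

Adding a third hidden matrix would raise `cap_depth` to 4; under the universal-approximation remark above, the extra layer improves the features extracted rather than the class of functions representable.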
These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. The developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support a self-organization somewhat analogous to the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment) and then passes its output (and possibly the original input) to other layers. This process yields a self-organizing stack of transducers, well tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic factors ... different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature."

A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism. Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality. In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.

Although a systematic comparison between human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons and neural populations. Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system both at the single-unit and at the population levels.

Text B

Imagine how much more efficient lawyers could be if they had the time to read every legal book ever written and review every case ever brought to court. Imagine doctors with the ability to study every advancement published across the world's medical journals, or consult every medical case, ever. Unfortunately, the human brain cannot store that much information, and it would take decades to achieve these feats.

But a computer, one specifically designed to work like the human mind, could. Deep learning neural networks are designed to mimic the human brain's neural connections. They are capable of learning through continuous exposure to huge amounts of data. This allows them to recognize patterns, comprehend complex concepts, and translate high-level abstractions. These networks consist of many layers, each having a different set of weights. The deeper the network, the stronger it is.
Current applications for these networks include medical diagnosis, robotics and engineering, face recognition, and automotive navigation. However, deep learning is still in development; not surprisingly, it is a huge undertaking to get machines to think like humans. In fact, very little is understood about these networks, and months of manual tuning are often required to obtain excellent performance.

Fuxin Li, assistant professor at the Oregon State University School of Electrical Engineering and Computer Science, and his team are taking on the accuracy of these neural networks under adversarial conditions. Their research focuses on the basic machine learning aspects of deep learning, and on how to make general deep learning more robust.

To try to better understand when a deep convolutional neural network (CNN) is going to be right or wrong, Li's team had to establish an estimate of confidence in the predictions of the deep learning architecture. Those estimates can be used as safeguards when utilizing the networks in real life.

"Basically," explains Li, "trying to make deep learning increasingly self-aware: to be aware of what type of data it has seen, and what type of data it could work on."

The team looked at recent advances in deep learning, which have greatly improved the capability to recognize images automatically. Those networks, albeit very resistant to overfitting, were discovered to fail completely if some of the pixels in such images were perturbed via an adversarial optimization algorithm.

To a human observer, the image in question may look fine, but the deep network sees otherwise. According to the researchers, those adversarial examples are dangerous if a deep network is utilized in any crucial real application, such as autonomous driving. If the result of the network can be hacked, wrong authentications and other devastating effects would be unavoidable.

In a departure from previous perspectives, which focused on improving the classifiers to correctly classify the adversarial examples, the team focused on detecting those adversarial examples by analyzing whether they come from the same distribution as the normal examples. The accuracy for detecting adversarial examples exceeded 96%. Notably, 90% of the adversarial examples can be detected with a false positive rate of less than 10%.

The benefits of this research are numerous. It is vital for a neural network to be able to identify whether an example comes from a normal or an adversarial distribution. Such knowledge, if available, will help significantly to control the behavior of robots employing deep learning. A reliable procedure can prevent robots from behaving in an undesirable manner because of false perceptions they make about the environment.

Li gives one example: "In robotics there's this big issue about robots not doing something based on erroneous perception. It's important for a robot to know that it's not making a confident perception. For example, if [the robot] is saying there's an object over there, but it's actually a wall, he'll go to fetch that object, and then he hits a wall."

Hopefully, Li says, that won't happen. However, current software and machine learning have been based mostly on prediction confidence within the original machine learning framework. Basically, the testing and training data are assumed to be pulled independently from the same distribution, and that can lead to incorrect assumptions.

Better confidence estimates could potentially help avoid incidents such as the Tesla crash scenario from May 2016, where an adversarial example (a truck with too much light) was in the middle of the highway and cheated the system. A confidence estimate could potentially solve that issue. But first, the computer must be smarter. The computer has to learn to detect objects and differentiate, say, a tree from another vehicle.

"To make it really robust, you need to account for unknown objects. Something weird may hit you. A deer may jump out." The network can't be taught every unexpected situation, says Li, "so you need it to discover them without knowledge of what they are. That's something that we do. We try to bridge the gap."

Training procedures will make deep learning more automatic and lead to fewer failures, as well as to confidence estimates when the deep network is utilized to predict new data. Most of this training, explains Li, comes from photo distribution using stock ima
