
Nobel Prize highlights neural networks' physics roots

The road to the modern machine-learning marvels was paved with ideas from statistical mechanics and collective phenomena.

Johanna L. Miller

Physics Today 77 (12), 12–16 (2024); doi: 10.1063/pt.qjmx.snxw


SEARCH & DISCOVERY

[Photos: John Hopfield (Matt Raspanti/Princeton University) and Geoffrey Hinton (Johnny Guatto/University of Toronto).]

"Garbage in, garbage out." According to the old adage from computer science, what you get from a computer is no better than what you give it. And it would seem to imply that because computers can't think for themselves, they can never do anything more sophisticated than what they've been explicitly instructed to.

But that last part appears to be no longer true. Neural networks—computing architectures, inspired by the human brain, in which signals are passed among nodes called artificial neurons—have, in recent years, been producing wave after wave of stunning results. (See, for example, page 17 of this issue.) Individual artificial neurons perform only the most elementary of computations. But when brought together in large enough numbers, and when fed on enough training data, they acquire capabilities uncannily reminiscent of human intelligence, seemingly out of nowhere.

Physicists are no strangers to the idea of unexpected phenomena emerging from simpler building blocks. A few elementary particles and the rules of their interactions combine to yield almost the whole of the visible world: superconductors, plasmas, and everything in between. Why shouldn't a physics approach to emergent complexity be applied to neural networks too?

Indeed, it was—and still is—as showcased by this year's Nobel Prize in Physics, which goes to Princeton University's John Hopfield and the University of Toronto's Geoffrey Hinton. Beginning in the early 1980s, Hopfield laid the conceptual foundations for physics-based thinking about brain-inspired information processing; Hinton was at the forefront of the decades-long effort to build on those ideas to develop the algorithms used by neural-network models today.

Glassy memory

It was far from obvious, at first, that neural networks would ever grow to be so powerful. As recently as 2011, the flashiest milestones in AI were being achieved by another approach entirely. IBM Watson, the computer that beat Ken Jennings and Brad Rutter at Jeopardy!, was not a neural network: It was explicitly programmed with rules for language processing, information retrieval, and logical reasoning. And many researchers thought that was the way to go to create practical AI machines.

In contrast, the early work on neural networks was curiosity-driven research, inspired more by real brains than by computers and their applications. But the nature of the interdisciplinary connection was subtle. "The questions Hopfield addressed are not unrelated to things neuroscientists were worried about," says Princeton's William Bialek. "But this isn't about 'application of physics to X'; rather, it's about introducing a whole point of view that just didn't exist before."

By the 1980s, neuroscientists had known for decades that the brain is composed of neurons, which are connected to one another via synapses and alternate between periods of high and low electrical activity (colloquially, "firing" and "not firing"), and they were studying systems of a few neurons to understand how one neuron's firing affected those it was connected to. "Some thought of neurons in terms of logic gates, like in electronics," says Stanford University's Jay McClelland.

In a landmark 1982 paper, Hopfield took a different approach.1 In physics, he argued, many important properties of large-scale systems are independent of small-scale details. All materials conduct sound waves, for example, irrespective of exactly how their atoms or molecules interact. Microscopic forces might affect the speed of sound or other acoustic properties, but studying the forces among three or four atoms reveals little about how the concept of sound waves emerges in the first place.

So he wrote down a model of a network of neurons, with an eye more toward computational and mathematical simplicity than neurobiological realism. The model, now known as a Hopfield network, is sketched in figure 1. (The figure shows a five-neuron network for ease of illustration; Hopfield was simulating networks of 30 to 100 neurons.) Each neuron can be in state 1, for firing, or state 0, for not firing. And each neuron was connected to all the others via coupling constants that could have any positive or negative value, depending on whether each synapse favors or disfavors the neurons both firing at the same time.

That's exactly the same form as a spin glass, a famously thorny system from condensed-matter physics. (See Physics Today, December 2021, page 17.)


FIGURE 1. A HOPFIELD NETWORK, formally equivalent to a spin glass, functions as an associative memory: When presented with a partially recalled state, it uses an energy-lowering algorithm to fill in the gaps. The memories are stored in the strengths of the connections among the nodes. When John Hopfield showed that with the right combination of connection weights, the network could store many memories simultaneously, he set the stage for physics-based thinking about neural networks. (Figure by Freddie Pagani; rabbit photo by JM Ligero Loarte/Wikimedia Commons/CC BY 3.0.)

Unlike a ferromagnet, in which the couplings are all positive and the system has a clear ground state with all its spins aligned, a spin glass almost always lacks a state that satisfies all its spins' energetic preferences simultaneously. Its energy landscape is complex, with many local energy minima.

Hopfield argued that the landscape could serve as a memory, with each of the energy-minimizing configurations serving as a state to be remembered. And he presented an elegant way of setting the connection strengths—inspired by what happens at real synapses—so that the memory would store any desired collection of states.

But the Hopfield network is fundamentally different from an ordinary computer memory. In a computer, each item of data to be stored is encoded as a string of ones and zeros in a specific place, and it's recalled by going back to that place and reading out the string. In a Hopfield network, all the items are stored simultaneously in the coupling strengths of the whole network. And they can be recalled associatively, by giving the network a starting point that shares just a few features with one of the remembered states and allowing it to relax to the nearest energy minimum. More often than not, it will recall the desired memory. (See also the articles by Haim Sompolinsky, Physics Today, December 1988, page 70, and John Hopfield, Physics Today, February 1994, page 40.)
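
To make the storage-and-recall scheme concrete, here is a minimal sketch in Python with NumPy. It is not Hopfield's original code: the outer-product (Hebbian-style) weight rule and the asynchronous, energy-lowering updates follow the standard textbook formulation, and it uses +1/-1 states rather than the 1/0 states described above purely for algebraic convenience. The function names and the 100-neuron toy example are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Set coupling strengths with a Hebbian-style outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:            # p is a vector of +1/-1 neuron states
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)        # no self-coupling
    return W / n

def recall(W, state, sweeps=10):
    """Asynchronous updates: align each unit with its local field.
    Each update can only lower (or keep) the energy E = -1/2 s.W.s,
    so the state relaxes to a nearby energy minimum."""
    s = state.copy()
    n = len(s)
    for _ in range(sweeps):
        for i in rng.permutation(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store two random 100-neuron memories, then recall one from a corrupted cue.
memories = rng.choice([-1, 1], size=(2, 100))
W = store(memories)
cue = memories[0].copy()
cue[:30] *= -1                    # corrupt 30% of the bits
print("fraction of bits recovered:", np.mean(recall(W, cue) == memories[0]))
```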

Those are both things that happen in real brains. "It was known experimentally in higher animals that brain activity was well spread out, and it involved many neurons," says Hopfield. And associative memory is something you've directly experienced if you've ever recalled a song you've heard before after hearing one random line.

Hopfield's model was a vast simplification of a real brain. Real neurons are intrinsically dynamic, not characterized by static states, and real neuron connections are not symmetric. But in a way, those differences were features, not bugs: They showed that collective, associative memory was an emergent large-scale phenomenon, robust against small-scale details.

Learning how to learn

"Not only is Hopfield a very good physicist, but the Hopfield model is excellent physics by itself," says Leo van Hemmen, of the Technical University of Munich. Still, its 1982 formulation left many intriguing open questions. Hopfield had focused on simulations to show how the system relaxes to an energy minimum; would the model admit a more robust analytical treatment? How many states could the model remember, and what would happen if it was overloaded? Were there better ways of setting the connection strengths than the one Hopfield proposed?

Those questions, and others, were taken on by a flurry of physics-trained researchers who were inspired by Hopfield's work and entered the neural-network field over the 1980s. "Physicists are versatile, curious, and arrogant—in a positive way," says Eytan Domany, of the Weizmann Institute of Science in Israel. "They're willing to study thoroughly and then tackle a problem they've never seen before, if it's interesting. And everyone is excited about understanding the brain."


Another part of the appeal was in how Hopfield had taken a traditional physics problem and turned it on its head. "In most energy-landscape problems, you're given the microscopic interactions, and you ask, What is the ground state? What are the local minima? What is the entire landscape?" says Haim Sompolinsky, of the Hebrew University of Jerusalem. "The 1982 paper did the opposite. We start with the ground states that we want: the memories. And we ask, What are the microscopic interactions that will support those as ground states?"

From there, it was a short conceptual leap to ask, What if the coupling strengths themselves can evolve on their own energy landscape? That is, instead of being preprogrammed with parameters to encode specific memories, can the system improve itself by learning?

Machinelearninginneuralnetworkshadbeentriedbefore.Theperceptron—aneural-network-likedevicethatsortedim-agesintosimplecategories,suchascirclesandsquares—datesbacktothe1950s.Whenprovidedwithaseriesoftrainingimagesandasimplealgorithmforupdat-ingitsconnectionsbetweenneurons,itcouldeventuallylearntocorrectlyclassifyevenimagesithadn’tseenbefore.
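
The perceptron's update rule can be sketched in a few lines of Python. This is a generic textbook version, not Rosenblatt's original machine; the two-dimensional toy data and the function name train_perceptron are invented for illustration. The rule only converges when the categories are linearly separable, which is exactly the limitation discussed next.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Classic perceptron rule: nudge the weights only when a training
    example is misclassified."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):           # yi is +1 or -1
            if yi * (xi @ w + b) <= 0:     # misclassified example
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy, linearly separable data: the label is the sign of the first coordinate.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1, -1)
w, b = train_perceptron(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```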

But the perceptron didn't always work: With the way the network was structured, sometimes there wasn't any way of setting the connection strengths to perform the desired classification. "When that happened, you could iterate forever, and the algorithm would never converge," says van Hemmen. "That was a big shock." Without a guiding principle to chart a path forward, the field had stalled.


FIGURE 2. A BOLTZMANN MACHINE extends the Hopfield network in two ways: It augments the network to include hidden nodes (shown in the center of the network in gray) that aren't involved in encoding the data, and it operates at a nonzero effective temperature, so that the entire space of configurations can be characterized by a Boltzmann probability distribution. Geoffrey Hinton and colleagues developed a way to train the Boltzmann machine as a generative model: When presented with several inputs that all shared a common feature, it produced more items of the same type. (Figure by Freddie Pagani.)

Finding common ground

Hinton didn't come to neural networks from a background in physics. But his collaborator Terrence Sejnowski—who'd earned his PhD under Hopfield in 1978—did. Together, they extended the Hopfield network into something they called the Boltzmann machine, which vastly extended the model's capabilities by explicitly drawing on concepts from statistical physics.2

In Hopfield's 1982 simulations, he'd effectively considered the spin-glass network at zero temperature: He allowed the system to evolve its state only in ways that would lower its overall energy. So whatever the starting state, it rolled into a nearby local energy minimum and stayed there.

"Terry and I immediately started thinking about the stochastic version, with nonzero temperature," says Hinton. Instead of a deterministic energy-lowering rule, they used a Monte Carlo algorithm that allowed the system to occasionally jump into a state of higher energy. Given enough time, a stochastic simulation of the network would explore the entire energy landscape, and it would settle into a Boltzmann probability distribution, with all the low-energy states—regardless of whether they're local energy minima—represented with high probability.

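
A rough sketch of that stochastic dynamics, written here as a generic Metropolis-style simulation of the same spin-glass energy function rather than as Hinton and Sejnowski's actual algorithm: downhill moves are always accepted, uphill moves are accepted with probability exp(-dE/T), and the long-run statistics approach a Boltzmann distribution over all configurations. The network size, temperature, and random couplings are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(W, s):
    """Spin-glass / Hopfield energy of a configuration of +1/-1 units."""
    return -0.5 * s @ W @ s

def sample(W, s, T=1.0, steps=20_000):
    """Metropolis dynamics at temperature T: occasional uphill jumps let the
    chain escape local minima and explore the whole landscape."""
    s = s.copy()
    n = len(s)
    for _ in range(steps):
        i = rng.integers(n)
        dE = 2 * s[i] * (W[i] @ s)          # energy change from flipping unit i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s

n = 50
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                            # symmetric couplings
np.fill_diagonal(W, 0)
s0 = rng.choice([-1, 1], size=n)
print("energy before:", energy(W, s0), " after:", energy(W, sample(W, s0)))
```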

"And in 1983, we discovered a really beautiful way to do learning," Hinton says. When the network was supplied with training data, they iteratively updated the connection strengths so that the data states had high probability in the Boltzmann distribution.3 Moreover, when the input data had something in common—like the images of the numeral 3 in figure 2—then other high-probability states would share the same common features.

The key ingredient for that kind of commonality finding was augmenting the network to include more nodes than just the ones that encode the data. Those hidden nodes, represented in gray in figure 2, allow the system to capture higher-level correlations among the data.

In principle, the Boltzmann machine could be used for machine recognition of handwriting or for distinguishing normal from emergency conditions in a facility such as a power plant. Unfortunately, the Boltzmann machine's learning algorithm is prohibitively slow for most practical applications. It remained a topic of academic research, but it didn't find much real-world use—until it made a surprising reappearance years later.

How the networks work

Around the same time, Hinton was working with cognitive scientist David Rumelhart on another learning algorithm, which would become the secret sauce of almost all of today's neural networks: backpropagation.4 The algorithm was developed for a different kind of network architecture, called a feedforward network, shown in figure 3. In contrast to the Hopfield network and Boltzmann machine, with their bidirectional connections among nodes, signals in a feedforward network flow in one direction only: from a layer of input neurons, through some number of hidden layers, to the output. A similar architecture had been used in the multilayer perceptron.

Suppose you want to train a feedforward network to classify images. You give it a picture of a rabbit, and you want it to produce the output message "This is a rabbit." But something is wrong, and instead you get the output "This is a turtle." How do you get things back on track? The network might have dozens or hundreds—or today, trillions—of internode connections that contribute to the output, each with its own numerical weight. There's a dizzying number of ways to adjust them all to try to get the output you want.

Backpropagation solves that problem through gradient descent: First, you define an error function that quantifies how far the output you got is from the output you want. Then, calculate the partial derivatives of the error function with respect to each of the internodal weights—a simple matter of repeatedly applying calculus's chain rule. Finally, use those derivatives to adjust the weights in a way that decreases the error.

It might take many repetitions to get the error close enough to zero—and you'll want to make sure that the network gives the right output for many inputs, not just one. But those basic steps are used to train all kinds of networks, including proof-of-concept image classifiers and large language models, such as ChatGPT.
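
Here is a bare-bones illustration of those steps: a tiny one-hidden-layer feedforward network fit to a toy regression problem by gradient descent, with the chain-rule derivatives written out by hand. The task (learning a sine curve), the layer width, and the learning rate are arbitrary choices for illustration, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy task: learn y = sin(x) on [-pi, pi] with one hidden layer of tanh units.
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)                 # hidden-layer activations
    out = H @ W2 + b2                        # network output
    err = out - Y
    loss = np.mean(err ** 2)                 # the "error function"

    # Backward pass: the chain rule applied layer by layer.
    d_out = 2 * err / len(X)
    dW2 = H.T @ d_out;  db2 = d_out.sum(axis=0)
    d_H = (d_out @ W2.T) * (1 - H ** 2)      # tanh'(z) = 1 - tanh(z)**2
    dW1 = X.T @ d_H;    db1 = d_H.sum(axis=0)

    # Gradient-descent step: move each weight against its derivative.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean-squared error:", loss)
```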

[Figure 3 shows example connection weights, an image-classification output ("Rabbit"), and a text-generation example: the prompt "Write a haiku about a rabbit" and the response "Soft ears in the grass, / Hopping through the morning dew, / Nature's quiet joy."]

FIGURE 3. A FEEDFORWARD NETWORK, trained by backpropagation, is the basic structure of the neural networks used today. By passing numerical signals from an input layer through hidden layers to an output layer, feedforward networks perform functions that include image classification and text generation. (Figure by Freddie Pagani; rabbit photo by JM Ligero Loarte/Wikimedia Commons/CC BY 3.0; haiku generated by GPT-4, OpenAI, 22 October 2024.)

Gradient descent is intuitively elegant, and it wasn't conceptually new. "But several elements had to come together to get the backpropagation idea to work," says McClelland. "For one thing, you can't take the derivative of something if it's not differentiable." Real neurons operate more or less in discrete on and off states, and the original Hopfield network, Boltzmann machine, and perceptron were all discrete models. For backpropagation to work, it was necessary to shift to a model in which the node states can take a continuum of values. But those continuous-valued networks had already been introduced, including in a 1984 paper by Hopfield.5

A second innovation had to wait for longer. Backpropagation worked well for networks with just a couple of layers. But when the layer count approached five or more—a trifling number by today's standards—some of the partial derivatives were so small that the training took an impractically long time.
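
The effect can be seen numerically in a few lines: with saturating (sigmoid) activations, each layer contributes a factor smaller than one to the chain-rule product, so the gradient reaching the first layer of a deep stack shrinks roughly geometrically with depth. This is a schematic illustration with random weights, not a model of any particular network.

```python
import numpy as np

rng = np.random.default_rng(4)

def gradient_norm_at_first_layer(n_layers, width=50):
    """Propagate a signal forward through sigmoid layers, then send a unit
    error signal backward, and report how large the gradient is by the time
    it reaches the first layer."""
    x = rng.normal(size=width)
    layers = []
    for _ in range(n_layers):
        W = rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
        x = 1.0 / (1.0 + np.exp(-(W @ x)))       # sigmoid activation
        layers.append((W, x))
    g = np.ones(width)                            # error signal at the output
    for W, a in reversed(layers):
        g = W.T @ (g * a * (1 - a))               # sigmoid'(z) = a * (1 - a) <= 0.25
    return np.linalg.norm(g)

for depth in (2, 5, 10, 20):
    print(f"{depth:2d} layers -> gradient norm at first layer ~ "
          f"{gradient_norm_at_first_layer(depth):.2e}")
```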

In the early 2000s, Hinton found a solution, and it involved his old Boltzmann machine—or rather, a so-called restricted version of it, in which the only connections are those between one hidden neuron and one visible (non-hidden) neuron.6 Restricted Boltzmann machines (RBMs) are easy to model computationally, because each group of neurons—visible and hidden—could be updated all at once, and the connection weights could all be adjusted together in a single step. Hinton's idea was to isolate pairs of successive layers in a feedforward network, train them as if they were RBMs to get the weights approximately right, and then fine-tune the whole network using backpropagation.
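
A compact sketch of the idea, in the spirit of that recipe rather than a faithful reproduction of it: each RBM layer is trained with a one-step contrastive-divergence update (biases omitted for brevity), and the hidden activities of one trained layer become the training data for the next. The layer sizes, learning rate, and stand-in binary data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden=32, epochs=20, lr=0.05):
    """One-step contrastive divergence for a restricted Boltzmann machine.
    Because connections run only between the visible and hidden groups,
    each group can be updated all at once."""
    n_visible = data.shape[1]
    W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
    for _ in range(epochs):
        for v0 in data:
            ph0 = sigmoid(v0 @ W)                      # hidden probabilities given the data
            h0 = (rng.random(n_hidden) < ph0) * 1.0    # sample the whole hidden layer
            v1 = sigmoid(W @ h0)                       # "reconstructed" visible layer
            ph1 = sigmoid(v1 @ W)
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    return W

# Greedy stacking: train one RBM, pass its hidden activities up, train the next.
data = (rng.random((500, 64)) < 0.2) * 1.0             # stand-in binary "images"
W1 = train_rbm(data, n_hidden=32)
hidden1 = sigmoid(data @ W1)
W2 = train_rbm(hidden1, n_hidden=16)
print("layer shapes ready for backprop fine-tuning:", W1.shape, W2.shape)
```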

"It was kind of a hacky thing, but it worked, and people got very excited," says Graham Taylor, of the University of Guelph in Canada, who earned his PhD under Hinton in 2009. "It was now possible to train networks with five, six, seven layers. People called them 'deep' networks, and they started using the term 'deep learning.'"

The RBM hack wasn't used for long. Computing power was advancing so quickly—particularly with the realization that graphics processing units (GPUs) were ideally suited to the computations needed for neural networks—that within a few years, it was possible to do backpropagation on even larger networks from a cold start, with no RBMs required.

"If RBM learning hadn't happened, would GPUs have come along anyway?" asks Taylor. "That's arguable. But the excitement around RBMs changed the landscape: It led to the recruitment and training of new students and to new ways of thinking. I think at the very least, it wouldn't have happened the same way."

What's new is old

Today's networks use hundreds or thousands of layers, but their form is little changed from what Hinton described. "I learned about neural networks from books from the 1980s," says Bernhard Mehlig, of the University of Gothenburg in Sweden. "When I started teaching it, I realized that not much is new. It's essentially the old stuff." Mehlig notes that in a textbook he wrote, published in 2021, part 1 of 3 is about Hopfield, and part 2 is about Hinton.

Neural networks now influence a vast number of human endeavors: They're involved in data analysis, web searches, and creating graphics. Are they intelligent? It's easy to dismiss the question out of hand. "There have always been lots of things that machines can do better than humans," says the University of Maryland's Sankar Das Sarma. "That has nothing to do with becoming human. ChatGPT is fabulously good at some things, but at many others, it's not even as good as a two-year-old baby."

An illustrative comparison is the vast data gap between today's neural networks and humans.7 A literate 20-year-old may have read and heard a few hundred million words in life so far. Large language models, in contrast, are trained on hundreds of billions of words, a number that grows with each new release. When you account for the fact that ChatGPT has the advantage of a thousand times as much life experience as you do, its abilities may seem less like intelligence. But perhaps it doesn't matter if AI fumbles with some tasks if it's good at the right combination of others.

Hinton and Hopfield have both spoken about the dangers of unchecked AI. Among their arguments is the idea that once machines become capable of breaking up goals into subgoals, they'll quickly deduce that they can make almost any task easier for themselves by consolidating their own power. And because neural networks are often tasked with writing code for other computers, stopping the damage is not as simple as pulling the plug on a single machine.

"There are also imminent risks that we're facing right now," says Mehlig. "There are computer-written texts and fake images that are being used to trick people and influence elections. I think that by talking about computers taking over the world, people take the imminent dangers less seriously."

What can physicists do?

Much of the unease stems from the fact that so little is known about what neural networks are really doing: How do billions of matrix multiplications add up to the ability to find protein structures or write poetry? "People at the big companies are more interested in producing revenue, not understanding," says Das Sarma. "Understanding takes longer. The job of theorists is to understand phenomena, and this is a huge physical phenomenon, waiting to be understood by us. Physicists should be interested in this."

"It's hard not to be excited by what's going on, and it's hard not to notice that we don't understand," says Bialek. "If you want to say that things are emergent, what's the order parameter, and what is it that's emerged? Physics has a way of making that question more precise. Will that approach yield insight? We'll see."

For now, the biggest questions are still overwhelming. "If there were something obvious that came to mind, there would be a horde of people trying to solve it," says Hopfield. "But there isn't a horde of people working on this, because nobody knows where to start."

But a few smaller-scale questions are more tractable. For example, why does backpropagation so reliably reduce the network error to near zero, rather than getting stuck in high-lying local minima like the Hopfield network does? "There was a beautiful piece of work on this a few years ago by Surya Ganguli at Stanford," says Sara Solla, of Northwestern University. "He found that most high-lying minima are really saddle points: It's a minimum in many dimensions, but there's always one in which it's not. So if you keep kicking, you eventually find your way out."
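
A toy picture of that saddle-point argument (a two-dimensional illustration of the general idea, not Ganguli's analysis): plain gradient descent stalls at a stationary point that looks like a minimum along the direction it arrived from, but a tiny random kick exposes the descending direction and the descent escapes to a genuinely lower point. The function, step size, and kick scale are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)

def f(p):
    # Saddle point at the origin: a minimum along x, a maximum along y;
    # the true minima sit at (0, +1) and (0, -1), where f = 0.
    x, y = p
    return x**2 + (y**2 - 1)**2

def grad(p):
    x, y = p
    return np.array([2 * x, 4 * y * (y**2 - 1)])

p = np.array([0.5, 0.0])                       # descent along x leads straight to the saddle
for step in range(500):
    g = grad(p)
    if np.linalg.norm(g) < 1e-6:
        p += rng.normal(scale=1e-3, size=2)    # the "kick": a tiny random perturbation
    else:
        p -= 0.1 * g                           # ordinary gradient descent
print("after kicking and descending: f =", f(p), "at", np.round(p, 3))
```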

When physics-trained
