
www.concentric.ai

contact_us@concentric.ai

DEEP LEARNING
FOR DATA SECURITY PROFESSIONALS

Dr. Madhu Shashanka
Chief Scientist and Founder


CONTENTS

INTRODUCTION
APPLICATIONS
FUNDAMENTAL CONCEPTS
NEURAL NETWORKS
VERSATILITY, EXPLAINED
DEEP VS. SHALLOW
HOW MODELS LEARN
A BREAKTHROUGH
DEEP LEARNING ≠ MACHINE LEARNING?
WHAT'S AHEAD
ABOUT THE AUTHOR

© Concentric 2020. All rights reserved.

INTRODUCTION

Not a day goes by without seeing a mention of deep learning in the media. In a few short years, the impact deep learning has had on industry and popular culture is nothing short of remarkable. Back during my graduate student days, I made an annual pilgrimage to meet with a tight community of academics and researchers interested in neural networks (called NIPS then, now called NeurIPS). With the rise of interest in deep learning, it became so popular that organizers held a lottery for attendees for the 2019 conference.

The excitement's understandable. Deep learning is responsible for several groundbreaking advancements that defied expectations of even leading researchers in the field. Over a single summer in 2010, for example, researchers at Microsoft used the technology to cut speech recognition errors, which had been stuck at 20-25% for over two decades, down to 15%. The solution outperformed professional human transcribers two years ago.

Versatility is one of deep learning's most intriguing characteristics. Allow me to explain.

Concentric applies deep learning technology to the problem of data security. If you look at data security through a deep-learning lens, the problem has something in common with self-driving. Autonomous cars process an avalanche of information (cameras, radar, LiDAR) to make driving decisions. Concentric also processes an avalanche of information (words, sentences, and paragraphs, document location and usage) to make risk assessments. (Data scientists realize these are very different problems, but I offer the comparison to bring deep learning's versatility into sharper focus.)

APPLICATIONS

RADIOLOGY (DOCTORS): Highly specialized human expertise
Learn more: "A radiologist's guide to deep learning"

DRIVING (AUTOS): Finely honed physical skills and close attention
Learn more: "Deep learning techniques for autonomous driving"

MAIL SORTING (PROCESSORS): High visual acuity
Learn more: "Deep learning for mail processing"

FUNDAMENTAL NEURAL NETWORK CONCEPTS

To understand deep learning, we need to go all the way back to 1943. Warren McCulloch and Walter Pitts, for the first time, proposed a mathematical model of an artificial neuron as a simple computing machine. Individual neurons have many inputs and a single binary output. A neuron's weighted and summed inputs determine its output.

IMPLICATIONS

McCulloch and Pitts observed that a network of such computing units, in principle, could compute any possible boolean function.
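To make the model concrete, here is a minimal sketch in Python (my own illustration, not from the original 1943 paper) of a McCulloch-Pitts unit computing two boolean functions:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: output 1 iff the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Boolean AND and OR as single units; weights and thresholds are set by hand
# (the McCulloch-Pitts model has no learning rule).
for x1 in (0, 1):
    for x2 in (0, 1):
        and_out = mp_neuron((x1, x2), weights=(1, 1), threshold=2)
        or_out = mp_neuron((x1, x2), weights=(1, 1), threshold=1)
        print(f"inputs=({x1},{x2})  AND={and_out}  OR={or_out}")
```

Wiring many such units together, with outputs feeding inputs, is what lets a network compute arbitrary boolean functions.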


WARREN McCULLOCH
Neurophysiologist

BACKGROUND
MD from Columbia University College of Physicians and Surgeons
Department of Psychiatry at the University of Illinois, Chicago

WALTER PITTS
Logician

BACKGROUND
An autodidact with extensive informal training at the University of Chicago

CONTRIBUTION
"A Logical Calculus of Ideas Immanent in Nervous Activity"
A seminal contribution to neural network theory that was inspired by nature: McCulloch and Pitts developed their model after considering the brain as an information processing device

THE PERCEPTRON

[Figure: a perceptron, with inputs x1 through x4 feeding a single output neuron]

The first trainable neural network, the perceptron, was introduced by Frank Rosenblatt in 1957. The proposed neural network was capable of simple linear classifications. Researchers theorized networks could be built with multiple hidden layers, but limitations of computing power and fundamental algorithmic difficulties prevented any meaningful demonstration of their usefulness.¹

¹ Marvin Minsky and Seymour Papert's 1969 book "Perceptrons" outlined limitations of Rosenblatt's technique and demonstrated a few simple functions (such as boolean XOR) the perceptron was unable to model. The book had a chilling effect on neural network research for almost two decades.

FRANK ROSENBLATT
Psychologist

BACKGROUND
Cornell Aeronautical Laboratory

CONTRIBUTION
Built the Mark I Perceptron in 1960


NEURAL NETWORKS

The perceptron was the starting point for what we now know as neural networks, composed of computing "neurons" arranged in layers with data flowing from one layer on to the next. As data flows through each layer, it gets transformed as the outputs of each neuron feed into the next layer as data inputs. The passengers in a car see only the visible layers: a light turns red, the self-driving car comes to a stop. All the while, unseen hidden layers process large volumes of visual and sensor data to arrive at the decision to apply the brakes.


VERSATILITY, EXPLAINED

UNIVERSAL APPROXIMATION THEOREM

A groundbreaking result from the 80s by Cybenko proved that a neural network with a single hidden layer and a finite number of neurons could approximate any continuous mathematical function to arbitrary precision. Often referred to as the universal approximation theorem, this is the foundation of why deep learning is so versatile.
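For readers who want the precise claim, here is one standard way the theorem is stated (paraphrased into symbols; consult Cybenko's 1989 paper for his exact formulation):

```latex
% Universal approximation (single hidden layer, sigmoidal nonlinearity).
% For any continuous target f on the unit cube and any tolerance eps > 0,
% some finite sum of N sigmoidal units gets within eps everywhere:
\[
  \forall f \in C([0,1]^n),\; \forall \varepsilon > 0,\;
  \exists N,\; \alpha_i, \theta_i \in \mathbb{R},\; w_i \in \mathbb{R}^n
\]
\[
  \text{such that}\quad
  \Bigl| f(x) - \sum_{i=1}^{N} \alpha_i\, \sigma\bigl(w_i^{\top} x + \theta_i\bigr) \Bigr|
  < \varepsilon \quad \text{for all } x \in [0,1]^n .
\]
```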

But how do we go from a technology that can "approximate any continuous mathematical function" to a place where it finds brain lesions as accurately as a skilled radiologist? Wrapping your head around how this can possibly happen, I admit, isn't easy. Tasks like self-driving, radiology, and data security are monumentally complex. It seems impossible they could be modeled like some data regression from a high school physics experiment.

But they can.

These tasks are, in fact, composed of a finite number of inputs (e.g. the patterns and shades on an x-ray or the context and use of a document) with unequivocally correct and repeatable outputs (the patient either does or doesn't have a brain tumor; the document does or doesn't contain sensitive information). Those inputs are complex. The insights needed to arrive at an answer are incredibly sophisticated. Deep learning is the technology that can do it.

Enough with the pleasantries. Let's dig in.


MODEL ANYTHING

According to Cybenko,

"Arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity."

This fascinating result explains why neural networks work: essentially it shows that neural networks can, in principle, model anything. The basic idea is as follows: computations in each hidden neuron can be approximated as a step function. The contribution of each step to the output can be tuned based on the values chosen for each neuron's parameters. These simple building blocks combine into a network that's capable of modeling any arbitrary continuous function to an arbitrary level of precision. Curious readers are encouraged to explore this remarkable visual explanation of how it works. Go ahead, I'll be here when you get back.
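If you'd rather see the idea in code than in the interactive explanation, here is a small sketch (my own toy example, not from the paper): a grid of steep sigmoids acts as the step-like building blocks, and a least-squares solve plays the role of training the output weights to approximate sin(x).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(0.0, 2.0 * np.pi, 200)
target = np.sin(x)

# Hidden layer: 30 steep sigmoids, each contributing one "step" along x.
centers = np.linspace(0.0, 2.0 * np.pi, 30)
H = sigmoid(10.0 * (x[:, None] - centers[None, :]))   # shape (200, 30)

# Tune each step's contribution (the output weights) by least squares.
w, *_ = np.linalg.lstsq(H, target, rcond=None)

print(f"max approximation error: {np.abs(H @ w - target).max():.4f}")
```

More sigmoids and steeper slopes drive the error down further, which is exactly the theorem's promise.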

THAT WAS FUN, WASN'T IT?

If you made it to the curve-fitting exercise you're a believer now. Once we understand that even highly complex processes (evaluating a medical image, driving a car, assessing risk in millions of documents) can be captured as mathematical models (ridiculously complex models, yes, but still models) the challenge changes from "if" to "how." How do we build solutions that can perform these tasks quickly and accurately? I had a physics teacher once claim that if we knew the momentum and position of every particle in the universe, we could predict the future. Obviously some systems are too complex for even the mighty neural network.


That, of course, raises the question: what are the limits? Just how deep can deep learning go?


NEURAL NETWORKS GO DEEP

DEEP VS. SHALLOW

By definition, deep learning encompasses a family of neural network models that have many layers of neurons, as opposed to "shallow" networks which have fewer. The number of neurons may be similar but the arrangement is very different. And while flatter networks are, in theory, just as capable as deep learning models, deeper networks have some important advantages.

Deep learning networks are, for a given number of neurons, more accurate than shallower ones. (In the machine learning vernacular, we call this "performance," although I'll use "accuracy" here to avoid confusion with speed or other definitions of performance.) Comparisons between the relatively shallow VGGNet model (with 16 layers and ~140M parameters) show the deeper ResNet model (with 152 layers and only ~60M parameters) to be more accurate. In fact, it can be shown that for approximating certain functions, shallow networks will need exponentially more neurons compared to a deep network. And since the deeper model requires fewer parameters, it also alleviates the problem of overfitting (where a model is so tightly tuned to a sample dataset that its predictive ability takes a nosedive when used on real-world data).
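A quick back-of-the-envelope sketch (with hypothetical layer sizes, chosen only for illustration) shows how the same neuron budget spends very different parameter counts depending on arrangement:

```python
def mlp_params(layer_sizes):
    """Weights plus biases for a fully connected net with the given widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Both nets use 512 hidden neurons in total on a 100-dimensional input.
shallow = mlp_params([100, 512, 1])            # one wide hidden layer
deep = mlp_params([100] + [64] * 8 + [1])      # eight narrow hidden layers

print(f"shallow: {shallow:,} parameters")      # 52,225
print(f"deep:    {deep:,} parameters")         # 35,649
```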


Deep learning techniques have another advantage: they're better at modeling data that's inherently hierarchical. In nature, for example, the cascade of neurons leading from the retina to the brain is hierarchical. At each step (or "layer" in the deep-learning vernacular) neurons become selective for increasingly complex stimuli. Retinal neurons connect to neurons specialized to sort out bars and edges. Those neurons connect to neurons specializing in orientations and positions. Orientations and positions feed into corners and features. And so on, until the brain recognizes a face.


VISUALIZING

Here's another analogy to help you visualize what the data itself looks like as layers work in a computerized deep learning network. Suppose you're given a ball of paper with 1,000 grid points on it. As you uncrumple the paper, you record the three-dimensional position of each grid point in a journal. Your first entry would be chaotic: the 1,000 coordinates would look completely random. As you continue uncrumpling the sheet, you occasionally pause to record positions for each coordinate. (You can think of each recording as a "layer" in our deep learning model.)

What will your journal look like over time? It will look less and less chaotic until you finally smooth out the paper on your desk. Now the data reveals the paper for what it is: a 2-dimensional object. (Thanks to Francois Chollet for this visual analogy.) Like any analogy, don't take this one too far; it's intended simply as a way for you to imagine what the data looks like as it's processed.

At Concentric, deep learning reveals the meaning in the millions of files created and used by an organization's employees. You can think of these files as points on the balled-up sheet of paper. As we "uncrumple" it, our Semantic Intelligence™ solution reveals clusters of files with similar meaning. For security practitioners, understanding what you have is an essential step before you can protect it.


OKAY, BUT HOW?

IT STARTS WITH TRAINING

Training is the key to establishing a model that solves real-world problems.

At the most basic level, "training" means adjusting the "settings" for each artificial neuron in our network of neurons. (Remember that visual explanation I mentioned earlier? If you haven't already, go give it a try. It'll clarify this notion of neuron settings.)

In 1974, Paul Werbos introduced the backpropagation algorithm as a way to train the parameters in a neural network. It's an iterative method that compares the model's output to a known set of training data. After each iteration, errors are fed back into the model to guide the adjustment of model parameters. Adjusting parameters in the hidden inner layers is tricky. Werbos' algorithm gave us an elegant way to propagate output errors back into the inner layers where they are used to tune model parameters.
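Here's a minimal end-to-end sketch of the idea in numpy (my own toy example, not Werbos' original formulation): a two-layer network learns XOR, the very function a single perceptron cannot model, by feeding output errors back through the hidden layer.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)         # hidden layer
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)         # output layer
lr = 1.0

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error into each layer's parameters.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())    # converges toward [0, 1, 1, 0]
```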

Backpropagation works fine for small, shallow networks but it completely breaks down with deep architectures. This is because each neuron needs to "see" its contribution to the model's error. In a deep model, error signals decrease exponentially the deeper you go. It doesn't take long before we hit the limits of computer precision and the error signals become essentially invisible. This is called the "vanishing gradient problem," and it has been the focus of much work in the field.
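You can watch the problem happen in a few lines of numpy (an illustrative toy with arbitrary width and depth, my own example): each sigmoid layer scales the backpropagated signal by its derivative, which is at most 0.25, so the error shrinks geometrically on its way back down.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 32, 30

# Forward pass through `depth` sigmoid layers, keeping weights and activations.
x = rng.normal(0, 1, width)
Ws, acts = [], []
for _ in range(depth):
    W = rng.normal(0, 1.0 / np.sqrt(width), (width, width))
    x = 1.0 / (1.0 + np.exp(-(W @ x)))
    Ws.append(W)
    acts.append(x)

# Backward pass: the error signal shrinks at every sigmoid it crosses.
g = np.ones(width)
for layer in reversed(range(depth)):
    g = Ws[layer].T @ (g * acts[layer] * (1 - acts[layer]))
    if layer % 5 == 0:
        print(f"layer {layer:2d}: gradient norm = {np.linalg.norm(g):.2e}")
```

By the bottom layers the norm has collapsed toward floating-point noise, which is exactly why naive backpropagation stalls in deep stacks.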

[Chart: "Training Efficiency Factor," 2013-2020. The compute needed to match AlexNet's ImageNet accuracy falls steadily across successive architectures, from AlexNet, VGG-11, ResNet-18, and DenseNet121 through GoogLeNet, SqueezeNet v1.1, MobileNet v1/v2, and ShuffleNet v1/v2, up to EfficientNet-b0 at roughly a 44x efficiency gain.]


A BREAKTHROUGH

Many researchers (notably Schmidhuber) proposed methods to overcome the vanishing gradient problem. But it was Geoffrey Hinton's breakthrough work in 2006 that cleared the way for complex deep neural networks. He proposed training two layers at a time (instead of trying to train all the layers at once), using the output from each pair of layers as the input for the next. These layer pairs are known as Restricted Boltzmann Machines and, when stacked into a single deep network, they make training practical without sacrificing accuracy.

Hinton's influential work made complex deep neural networks feasible for real-world applications and it took academic researchers in new directions.
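To give a flavor of the building block involved, here is a heavily simplified numpy sketch (my own toy, using one step of contrastive divergence, the standard training trick for RBMs; Hinton's full recipe stacks these pairs and then fine-tunes the whole network):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0, 0.1, (n_visible, n_hidden))
a, b = np.zeros(n_visible), np.zeros(n_hidden)     # visible / hidden biases

# Toy binary data: two repeating 6-bit patterns.
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(100):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + b)                  # positive phase
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + a)                # one reconstruction (CD-1)
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b)
        # Move toward the data's statistics, away from the model's.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)

# The trained RBM should reconstruct a stored pattern almost perfectly.
v = data[0]
print(sigmoid(sigmoid(v @ W + b) @ W.T + a).round(2))
```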

MOORE'S LAW FOR NEURAL NETWORKS

Training algorithms are improving at a breathtaking rate. Recent analysis from OpenAI suggests we're seeing gains akin to Moore's law: the amount of compute needed to train a neural net to the same performance on ImageNet classification (a dataset used to benchmark image recognition algorithms; more on ImageNet to come) has been decreasing by a factor of 2 every 16 months. Compared to AlexNet in 2012 (the best performing image recognition algorithm at the time) it now takes 44 times less compute to train a neural network model to the same level of accuracy.
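The arithmetic hangs together, as this quick sanity check (using only the two figures quoted above) shows:

```python
from math import log2

# Halving compute every 16 months: how long does a 44x reduction take?
factor = 44
months = log2(factor) * 16
print(f"{factor}x less compute ~ {months:.0f} months (~{months / 12:.1f} years)")
# -> about 87 months, or 7.3 years: consistent with AlexNet (2012) to ~2019.
```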


THE REST OF THE PICTURE

Training improvements are only one reason deep learning has come into its own. Two other developments, the availability of large datasets and the dramatic increase in available computing resources, contribute to deep learning's practicality.

AVAILABILITY OF COMPUTATIONAL RESOURCES

Compared to traditional modeling techniques, deep learning requires orders of magnitude more compute power. Nvidia's CUDA programming platform, released in 2007, allowed researchers to leverage their GPU's general-purpose parallel processing capabilities for deep learning computations. A breakthrough paper in 2009 showed that massively parallel graphics processors were over 70 times faster than multi-core CPUs. That's had a dramatic impact on training (which used to be a lengthy, iterative process). Now training experiments that once took weeks are done in a few hours.

Today, academia as well as industry is working on hardware designed specifically for deep learning and it has developed into an exciting area of research.


AVAILABILITY OF LARGE DATASETS

Models with more parameters can reflect increasingly complex underlying systems. (As a professor of mine in grad school used to say, "give me 5 or more parameters and I can fit an elephant.") But as the parameter count grows, so does the problem of overfitting. Overfitting becomes a real concern when the number of parameters gets close to the number of samples in the dataset used to develop the model. And as our models grow to hundreds of thousands or even millions of neurons, we need larger and larger datasets to ensure model accuracy.
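Here's the professor's warning in miniature (a contrived toy example of my own): give a model one parameter per sample and it will happily fit the noise, then typically fall apart the moment it leaves the training data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x = np.linspace(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0, 0.1, n)        # a simple linear trend plus noise

# Degree n-1 gives exactly n parameters: one per sample, a perfect (mis)fit.
overfit = np.polynomial.Polynomial.fit(x, y, deg=n - 1)
honest = np.polynomial.Polynomial.fit(x, y, deg=1)

x_new = 1.05                               # just outside the training range
print(f"overfit model: {overfit(x_new):.2f}")   # typically strays from ~2.1
print(f"linear model:  {honest(x_new):.2f}")    # close to the true trend
```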

For the longest time, neural network researchers used a small number of standard datasets to benchmark the accuracy of their new methods and algorithms. In image recognition, for example, one of the gold standards was the "MNIST Database of Handwritten Digits." But it only had 70,000 examples and, as image recognition models improved, the database was no longer a challenge. Every algorithm worked equally well and it was impossible to measure whether an algorithm advanced the state of the art. It was a significant barrier to progress.


In 2006, Fei-Fei Li, then a CS professor at the University of Illinois Urbana-Champaign, recognized this problem and worked to create the ImageNet dataset, culminating with the ImageNet Challenge in 2010. It was a competition for teams to try out their new algorithms and benchmark against other approaches. ImageNet was an important catalyst for image recognition research and it inspired others in related areas to focus on building more extensive datasets as well.

WHAT'S THE DIFFERENCE?

Deep learning is a specialization within the discipline of machine learning. But deep learning has departed significantly from its machine learning roots to become a unique and fundamentally different approach. These differences boil down to two factors.

Document classification is a good example. Early machine learning models classified documents by counting the frequency of specific words in a document. As you can imagine, that approach misses many of the nuances found in human language. Does the word "bank," for example, mean a financial institution or the edge of a river? Nearby words like "river" or "ATM" might be clues. Finding the right features to represent a set of documents is a complex problem (and there are many more nuances beyond word proximity that complicate the issue).
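A tiny sketch (with my own contrived sentences) makes the limitation concrete: raw frequency counting treats both senses of "bank" identically, and only the surrounding words carry the disambiguating signal.

```python
from collections import Counter

s1 = "the bank raised its interest rate on new deposits"
s2 = "the river bank flooded after days of heavy rain"

# Frequency features alone: "bank" is indistinguishable across the two senses.
c1, c2 = Counter(s1.split()), Counter(s2.split())
print(c1["bank"], c2["bank"])                      # 1 1

# The clues live in nearby words, which a learned representation can exploit.
print(set(c1) & {"interest", "rate", "deposits"})  # financial context
print(set(c2) & {"river", "flooded", "rain"})      # geographic context
```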

REPRESENTATION LEARNING

Deep learning learns features from the data instead of relying on human experts. To help clarify, let me explain what "representation" and "features" are. Every model (in machine learning and elsewhere) uses a set of inputs to capture the essence of the thing being modeled. If you were estimating home prices, for example, your inputs would need to include square footage and zip code (at least) or your model wouldn't have much predictive power. The collection of features needed to build an optimal model is the "representation."

Traditional machine learning models use humans to define features. That can work reasonably well in some cases (like homes and their prices). But for more complex phenomena, human reasoning and expertise often fail.


NEW CATEGORIES

It might seem almost fantastical that a deep learning algorithm could, without any guidance whatsoever, accurately determine the right features to represent an unfamiliar set of documents. But it's not.

Think of it this way: were you to find yourself in a foreign country, you'd probably recognize a stop sign on the street as a stop sign (even if you didn't understand the language). You've never seen the sign before, but you know what matters: it's at an intersection, it's red, it's oriented so drivers can see it and (in most places in the world) it's an octagon. Easy. You now have mental categories for Chinese and Moroccan stop signs. Nature is an expert at representation learning.

Representation learning today is an active area of research. Models such as BERT in the natural language processing field have demonstrated how task-independent representations can be effectively specialized for a variety of applications, like data security.
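As a concrete (if hypothetical) illustration of putting such a representation to work, the sketch below uses the open-source Hugging Face transformers library, which the paper doesn't mention by name, to turn sentences into context-aware vectors; those vectors, rather than raw word counts, would feed a downstream classifier.

```python
# Assumes: pip install torch transformers
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["wire the funds to the bank", "we camped on the river bank"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    # Mean-pool BERT's final hidden states into one vector per sentence
    # (rough pooling that ignores the padding mask; fine for a sketch).
    hidden = model(**inputs).last_hidden_state       # (batch, tokens, 768)
    embeddings = hidden.mean(dim=1)

# The same word "bank" now lives inside context-aware vectors, so a classifier
# built on top of these sees the two senses differently.
sim = torch.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity between the sentences: {sim:.3f}")
```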


MODEL CAPACITY

Deep learning can encode more information than traditional machine learning. The graphs here are from a talk by Jeff Dean, who notes that deep learning performance improves with more data. In contrast, traditional machine learning performance plateaus beyond a certain dataset size. With deep learning (supported by the GPU-based compute power increases discussed earlier), modelers can build larger and larger models and train with larger and larger datasets.

In practice, this almost invariably leads to better accuracy. In the field of natural language processing, just last year we've seen publications of increasingly larger language models such as BERT (345M parameters), GPT-2 (1.5B parameters), and GPT-2 8B (8B parameters).

We don't fully understand why deep learning is so effective.

