




www.concentric.ai
contact_us@concentric.ai
DEEP LEARNING
FOR DATA SECURITY PROFESSIONALS

Dr. Madhu Shashanka
Chief Scientist and Founder
CONTENTS

INTRODUCTION
APPLICATIONS
FUNDAMENTAL CONCEPTS
NEURAL NETWORKS
VERSATILITY, EXPLAINED
DEEP VS. SHALLOW
HOW MODELS LEARN
A BREAKTHROUGH
DEEP LEARNING? MACHINE LEARNING?
WHAT’S AHEAD
ABOUT THE AUTHOR
? Concentric 2020. All rights reserved.
INTRODUCTION
Not a day goes by without seeing a mention of deep learning in the media. In a few short years, the impact deep learning has had on industry and popular culture is nothing short of remarkable. Back during my graduate student days, I made an annual pilgrimage to meet with a tight community of academics and researchers interested in neural networks (called NIPS then, now called NeurIPS). With the rise of interest in deep learning, it became so popular that organizers held a lottery for attendees for the 2019 conference.
The excitement's understandable. Deep learning is responsible for several groundbreaking advancements that defied the expectations of even leading researchers in the field. Over a single summer in 2010, for example, researchers at Microsoft used the technology to cut speech recognition errors, which had been stuck at 20-25% for over two decades, down to 15%. The solution outperformed professional human transcribers two years ago.
Versatility is one of deep learning's most intriguing characteristics. Allow me to explain.
Concentric applies deep learning technology to the problem of data security. If you look at data security through a deep-learning lens, the problem has something in common with self-driving. Autonomous cars process an avalanche of information (cameras, radar, LiDAR) to make driving decisions. Concentric also processes an avalanche of information (words, sentences, and paragraphs; document location and usage) to make risk assessments. (Data scientists realize these are very different problems, but I offer the comparison to bring deep learning's versatility into sharper focus.)
APPLICATIONS

RADIOLOGY (Doctors): Highly specialized human expertise
Learn more: "A radiologist's guide to deep learning"

DRIVING (Autos): Finely honed physical skills and close attention
Learn more: "Deep learning techniques for autonomous driving"

MAIL SORTING (Processors): High visual acuity
Learn more: "Deep learning for mail processing"
FUNDAMENTAL NEURAL NETWORK CONCEPTS

To understand deep learning, we need to go all the way back to 1943. Warren McCulloch and Walter Pitts, for the first time, proposed a mathematical model of an artificial neuron as a simple computing machine. Individual neurons have many inputs and a single binary output. A neuron's weighted and summed inputs, compared against a threshold, determine its output.
IMPLICATIONS
McCulloch and Pitts observed that a network of such computing units could, in principle, compute any possible boolean function.
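A McCulloch-Pitts neuron is simple enough to sketch directly. The weights and thresholds below are hand-picked for illustration (the 1943 paper used different notation), showing how single units compute boolean AND and OR:

```python
# A McCulloch-Pitts neuron: weighted, summed inputs compared against a
# threshold produce a single binary output. Weights and thresholds here
# are chosen by hand to illustrate boolean functions.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: both inputs must be active to reach the threshold of 2.
def AND(x1, x2):
    return mp_neuron([x1, x2], weights=[1, 1], threshold=2)

# OR: any single active input reaches the threshold of 1.
def OR(x1, x2):
    return mp_neuron([x1, x2], weights=[1, 1], threshold=1)
```

Networks of such units, wired together, are what McCulloch and Pitts showed could compute any boolean function.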
WARREN McCULLOCH
Neurophysiologist

BACKGROUND
MD from the Columbia University College of Physicians and Surgeons
Department of Psychiatry at the University of Illinois, Chicago

WALTER PITTS
Logician

BACKGROUND
An autodidact with extensive informal training at the University of Chicago

CONTRIBUTION
"A Logical Calculus of the Ideas Immanent in Nervous Activity"
A seminal contribution to neural network theory that was inspired by nature: McCulloch and Pitts developed their model after considering the brain as an information processing device
Built the Mark I Perceptron in 1960
1 Marvin Minsky and Seymour Papert's 1969 book "Perceptrons" outlined limitations of Rosenblatt's technique and demonstrated a few simple functions (such as boolean XOR) the perceptron was unable to model. The book had a chilling effect on neural network research for almost two decades.
THE PERCEPTRON

[Diagram: inputs x1 through x4 feeding a single output neuron]
The first trainable neural network, the perceptron, was introduced by Frank Rosenblatt in 1957. The proposed neural network was capable of simple linear classifications. Researchers theorized networks could be built with multiple hidden layers, but limitations of computing power and fundamental algorithmic difficulties prevented any meaningful demonstration of their usefulness.1
FRANK ROSENBLATT
Psychologist

BACKGROUND
Cornell Aeronautical Laboratory
NEURAL NETWORKS

The perceptron was the starting point for what we now know as neural networks, composed of computing "neurons" arranged in layers with data flowing from one layer on to the next. As data flows through each layer, it gets transformed as the outputs of each neuron feed into the next layer as data inputs. The passengers in a car see only the visible layers: a light turns red, the self-driving car comes to a stop. All the while, unseen hidden layers process large volumes of visual and sensor data to arrive at the decision to apply the brakes.
VERSATILITY, EXPLAINED

UNIVERSAL APPROXIMATION THEOREM

A groundbreaking result from the '80s by Cybenko proved that a neural network with a single hidden layer and a finite number of neurons could approximate any continuous mathematical function to arbitrary precision. Often referred to as the universal approximation theorem, this is the foundation of why deep learning is so versatile.
But how do we go from a technology that can "approximate any continuous mathematical function" to a place where it finds brain lesions as accurately as a skilled radiologist? Wrapping your head around how this can possibly happen, I admit, isn't easy. Tasks like self-driving, radiology, and data security are monumentally complex. It seems impossible they could be modeled like some data regression from a high school physics experiment.

But they can.

These tasks are, in fact, composed of a finite number of inputs (e.g. the patterns and shades on an x-ray, or the context and use of a document) with unequivocally correct and repeatable outputs (the patient either does or doesn't have a brain tumor; the document does or doesn't contain sensitive information). Those inputs are complex. The insights needed to arrive at an answer are incredibly sophisticated. Deep learning is the technology that can do it.
Enough with the pleasantries. Let's dig in.
MODEL ANYTHING

According to Cybenko,

"Arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity."

This fascinating result explains why neural networks work; essentially, it shows that neural networks can, in principle, model anything. The basic idea is as follows: the computation in each hidden neuron can be approximated as a step function. The contribution of each step to the output can be tuned based on the values chosen for each neuron's parameters. These simple building blocks combine into a network that's capable of modeling any arbitrary continuous function to an arbitrary level of precision. Curious readers are encouraged to explore this remarkable visual explanation of how it works. Go ahead, I'll be here when you get back.
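For readers who prefer code to pictures, here is a minimal sketch of the same staircase intuition (the target function, sigmoid steepness, and neuron count are all arbitrary choices for the example): a single "hidden layer" of steep sigmoids adds up near-step functions to trace a continuous curve.

```python
import math

# Intuition behind Cybenko's result: each steep sigmoid acts like a step
# function, and enough small steps can trace any continuous curve.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def target(x):
    return math.sin(3 * x)      # any continuous function would do

STEEPNESS = 200.0               # steeper sigmoid ~ sharper step
STEPS = 200                     # more "hidden neurons" ~ finer staircase
knots = [i / STEPS for i in range(STEPS)]

def approx(x):
    # Start at target(0), then add a near-step of the right height at
    # each knot; each term is one hidden neuron's contribution.
    y = target(0.0)
    for i, k in enumerate(knots[1:], start=1):
        jump = target(k) - target(knots[i - 1])
        y += jump * sigmoid(STEEPNESS * (x - k))
    return y

max_err = max(abs(approx(i / 100) - target(i / 100)) for i in range(101))
```

Increasing `STEPS` and `STEEPNESS` drives `max_err` toward zero, which is the "arbitrary precision" of the theorem in miniature.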
THAT WAS FUN, WASN'T IT?

If you made it to the curve-fitting exercise, you're a believer now. Once we understand that even highly complex processes - evaluating a medical image, driving a car, assessing risk in millions of documents - can be captured as mathematical models (ridiculously complex models, yes, but still models), the challenge changes from "if" to "how." How do we build solutions that can perform these tasks quickly and accurately? I had a physics teacher once claim that if we knew the momentum and position of every particle in the universe, we could predict the future. Obviously, some systems are too complex for even the mighty neural network.

That, of course, begs the question: what are the limits? Just how deep can deep learning go?
NEURAL NETWORKS GO DEEP

DEEP VS. SHALLOW

By definition, deep learning encompasses a family of neural network models that have many layers of neurons, as opposed to "shallow" networks, which have fewer. The number of neurons may be similar, but the arrangement is very different. And while flatter networks are, in theory, just as capable as deep learning models, deeper networks have some important advantages.

Deep learning networks are, for a given number of neurons, more accurate than shallower ones. (In the machine learning vernacular, we call this "performance," although I'll use "accuracy" here to avoid confusion with speed or other definitions of performance.) Comparisons between the relatively shallow VGGNet model (with 16 layers and ~140M parameters) show the deeper ResNet model (with 152 layers but only ~60M parameters) to be more accurate. In fact, it can be shown that for approximating certain functions, shallow networks will need exponentially more neurons compared to a deep network. And since the deeper model requires fewer parameters, it also alleviates the problem of overfitting (where a model is so tightly tuned to a sample dataset that its predictive ability takes a nosedive when used on real-world data).
Deep learning techniques have another advantage: they're better at modeling data that's inherently hierarchical. In nature, for example, the cascade of neurons leading from the retina to the brain is hierarchical. At each step (or "layer" in the deep-learning vernacular), neurons become selective for increasingly complex stimuli. Retinal neurons connect to neurons specialized to sort out bars and edges. Those neurons connect to neurons specializing in orientations and positions. Orientations and positions feed into corners and features. And so on, until the brain recognizes a face.
VISUALIZING

Here's another analogy to help you visualize what the data itself looks like as layers work in a computerized deep learning network. Suppose you're given a ball of paper with 1,000 grid points on it. As you uncrumple the paper, you'll record the three-dimensional position of each grid point in a journal. Your first entry would be chaotic: the 1,000 coordinates would look completely random. As you continue uncrumpling the sheet, you'll occasionally pause to record positions for each coordinate. (You can think of each recording as a "layer" in our deep learning model.)

What will your journal look like over time? It will look less and less chaotic until you finally smooth out the paper on your desk. Now the data reveals the paper for what it is: a two-dimensional object. (Thanks to Francois Chollet for this visual analogy.) Like any analogy, don't take this one too far; it's intended simply as a way for you to imagine what the data looks like as it's processed.
At Concentric, deep learning reveals the meaning in the millions of files created and used by an organization's employees. You can think of these files as points on the balled-up sheet of paper. As we "uncrumple" it, our Semantic Intelligence? solution reveals clusters of files with similar meaning. For security practitioners, understanding what you have is an essential step before you can protect it.
OKAY, BUT HOW?

IT STARTS WITH TRAINING

Training is the key to establishing a model that solves real-world problems. At the most basic level, "training" means adjusting the "settings" for each artificial neuron in our network of neurons. (Remember that visual explanation I mentioned earlier? If you haven't already, go give it a try. It'll clarify this notion of neuron settings.)
In 1974, Paul Werbos introduced the backpropagation algorithm as a way to train the parameters in a neural network. It's an iterative method that compares the model's output to a known set of training data. After each iteration, errors are fed back into the model to guide the adjustment of model parameters. Adjusting parameters in the hidden inner layers is tricky. Werbos' algorithm gave us an elegant way to propagate output errors back into the inner layers, where they are used to tune model parameters.
Backpropagation works fine for small, shallow networks, but it completely breaks down with deep architectures. This is because each neuron needs to "see" its contribution to the model's error. In a deep model, error signals decrease exponentially the deeper you go. It doesn't take long before we hit the limits of computer precision and the error signals become essentially invisible. This is called the "vanishing gradient problem," and it has been the focus of much work in the field.
[Chart: "Training Efficiency Factor," 2013-2020, comparing models including AlexNet, VGG-11, GoogLeNet, ResNet-18, SqueezeNet v1.1, DenseNet-121, MobileNet v1/v2, ShuffleNet v1/v2, and EfficientNet-b0]
MOORE'S LAW FOR NEURAL NETWORKS

BREAKTHROUGH

Many researchers (notably Schmidhuber) proposed methods to overcome the vanishing gradient problem. But it was Geoffrey Hinton's breakthrough work in 2006 that cleared the way for complex deep neural networks. He proposed training two layers at a time (instead of trying to train all the layers at once), using the output from each pair of layers as the input for the next. These layer pairs are known as Restricted Boltzmann Machines and, when stacked into a single deep network, they make training practical without sacrificing accuracy.

Hinton's influential work made complex deep neural networks feasible for real-world applications, and it took academic researchers in new directions.
Training algorithms are improving at a breathtaking rate. Recent analysis from OpenAI suggests we're seeing gains akin to Moore's law: the amount of compute needed to train a neural net to the same performance on ImageNet classification (a dataset used to benchmark image recognition algorithms; more on ImageNet to come) has been decreasing by a factor of 2 every 16 months. Compared to AlexNet in 2012 (the best performing image recognition algorithm at the time), it now takes 44 times less compute to train a neural network model to the same level of accuracy.
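As a quick sanity check on those figures: halving compute every 16 months over the roughly 84 months between AlexNet (2012) and the 2019 analysis compounds to a factor on the same order as the quoted 44x (the exact number depends on measurement dates and rounding).

```python
# Consistency check of the quoted efficiency figures: a 2x gain every
# 16 months, compounded over the ~84 months from 2012 to 2019.

months = 84               # 2012 -> 2019
doubling_period = 16      # months per 2x efficiency gain
factor = 2 ** (months / doubling_period)
# factor comes out near 38x, the same order of magnitude as 44x.
```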
THE REST OF THE PICTURE

Training improvements are only one reason deep learning has come into its own. Two other developments, the availability of large datasets and the dramatic increase in available computing resources, contribute to deep learning's practicality.
AVAILABILITY OF COMPUTATIONAL RESOURCES

Compared to traditional modeling techniques, deep learning requires orders of magnitude more compute power. Nvidia's CUDA programming platform, released in 2007, allowed researchers to leverage their GPUs' general-purpose parallel processing capabilities for deep learning computations. A breakthrough paper in 2009 showed that massively parallel graphics processors were over 70 times faster than multi-core CPUs. That's had a dramatic impact on training (which used to be a lengthy, iterative process). Now training experiments that once took weeks are done in a few hours.

Today, academia as well as industry is working on hardware designed specifically for deep learning, and it has developed into an exciting area of research.
AVAILABILITY OF LARGE DATASETS

Models with more parameters can reflect increasingly complex underlying systems. (As a professor of mine in grad school used to say: "give me 5 or more parameters and I can fit an elephant.") But as the parameter count grows, so does the problem of overfitting. Overfitting becomes a real concern when the number of parameters gets close to the number of samples in the dataset used to develop the model. And as our models grow to hundreds of thousands or even millions of neurons, we need larger and larger datasets to ensure model accuracy.

For the longest time, neural network researchers used a small number of standard datasets to benchmark the accuracy of their new methods and algorithms. In image recognition, for example, one of the gold standards was the "MNIST Database of Handwritten Digits." But it only had 70,000 examples and, as image recognition models improved, the database was no longer a challenge. Every algorithm worked equally well, and it was impossible to measure whether an algorithm advanced the state of the art. It was a significant barrier to progress.
In 2006, Fei-Fei Li, then a CS professor at the University of Illinois Urbana-Champaign, recognized this problem and worked to create the ImageNet dataset, culminating with the ImageNet Challenge in 2010. It was a competition for teams to try out their new algorithms and benchmark against other approaches. ImageNet was an important catalyst for image recognition research, and it inspired others in related areas to focus on building more extensive datasets as well.
WHAT'S THE DIFFERENCE?

Deep learning is a specialization within the discipline of machine learning. But deep learning has departed significantly from its machine learning roots to become a unique and fundamentally different approach. These differences boil down to two factors.

Document classification is a good example. Early machine learning models classified documents by counting the frequency of specific words in a document. As you can imagine, that approach misses many of the nuances found in human language. Does the word "bank," for example, mean a financial institution or the edge of a river? Nearby words like "river" or "ATM" might be clues. Finding the right features to represent a set of documents is a complex problem (and there are many more nuances beyond word proximity that complicate the issue).
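The word-frequency approach described above can be sketched in a few lines (the vocabulary and example sentences are invented for illustration). Notice that both uses of "bank" get the same count; only the neighboring words hint at the meaning:

```python
from collections import Counter

# Traditional "bag of words" features: each document becomes a vector of
# word counts over a fixed, human-chosen vocabulary.

def bag_of_words(text, vocabulary):
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["bank", "river", "atm", "deposit"]
doc1 = "the bank raised the river bank levee"
doc2 = "the bank atm accepted my deposit"

v1 = bag_of_words(doc1, vocab)   # "bank" the riverbank
v2 = bag_of_words(doc2, vocab)   # "bank" the financial institution
```

The two vectors differ only through the context words ("river" vs. "atm", "deposit"), which is all the disambiguation this representation can offer.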
REPRESENTATION LEARNING

Deep learning learns features from the data instead of relying on human experts. To help clarify, let me explain what "representation" and "features" are. Every model (in machine learning and elsewhere) uses a set of inputs to capture the essence of the thing being modeled. If you were estimating home prices, for example, your inputs would need to include square footage and zip code (at least) or your model wouldn't have much predictive power. The collection of features needed to build an optimal model is the "representation."

Traditional machine learning models use humans to define features. That can work reasonably well in some cases (like homes and their prices). But for more complex phenomena, human reasoning and expertise often fail.
NEW CATEGORIES

It might seem almost fantastical that a deep learning algorithm could, without any guidance whatsoever, accurately determine the right features to represent an unfamiliar set of documents. But it's not.

Think of it this way: were you to find yourself in a foreign country, you'd probably recognize a stop sign on the street as a stop sign (even if you didn't understand the language). You've never seen the sign before, but you know what matters: it's at an intersection, it's red, it's oriented so drivers can see it, and (in most places in the world) it's an octagon. Easy. You now have mental categories for Chinese and Moroccan stop signs. Nature is an expert at representation learning.
Representation learning today is an active area of research. Models such as BERT in the natural language processing field have demonstrated how task-independent representations can be effectively specialized for a variety of applications, like data security.
CAPACITY

MODEL CAPACITY

Deep learning can encode more information than traditional machine learning. The graphs here are from a talk by Jeff Dean, who notes that deep learning performance improves with more data. In contrast, traditional machine learning performance plateaus beyond a certain dataset size. With deep learning (supported by the GPU-based compute power increases discussed earlier), modelers can build larger and larger models and train with larger and larger datasets.
In practice, this almost invariably leads to better accuracy. In the field of natural language processing, just last year we've seen publications of increasingly larger language models such as BERT (345M parameters), GPT-2 (1.5B parameters) and GPT-2 8B (8B parameters).
We don't fully understand why deep learning is so effective.