Lesson 19 Smart Rooms

Vocabulary · Important Sentences · Questions and Answers · Problems
In creating computer systems that can identify people and interpret their actions, researchers have come one step closer to building helpful home and work environments.
Imagine a house that always knows where your kids are and tells you if they are getting into trouble. Or an office that sees when you are having an important meeting and shields you from interruptions. Or a car that senses when you are tired and warns you to pull over. Scientists have long tried to design computer systems that could accomplish such feats. Despite their efforts, modern machines are still no match for baby-sitters or secretaries. But they could be.
The problem, in my opinion, is that our current computers are both deaf and blind: they experience the world only by way of a keyboard and a mouse. Even multimedia machines, those that handle audiovisual signals as well as text, simply transport strings of data. They do not understand the meaning behind the characters, sounds and pictures they convey. I believe computers must be able to see and hear what we do before they can prove truly helpful. What is more, they must be able to recognize who we are and, as much as another person or even a dog would, make sense of what we are thinking.
To that end, my group at the Media Laboratory at the Massachusetts Institute of Technology has recently developed a family of computer systems for recognizing faces, expressions and gestures. The technology has enabled us to build environments that behave somewhat like the house, office and car described above. These areas, which we call smart rooms, are furnished with cameras and microphones that relay their recordings to a nearby network of computers. The computers assess what people in the smart room are saying and doing. Thanks to this connection, visitors can use their actions, voices and expressions - instead of keyboards, sensors or goggles - to control computer programs, browse multimedia information or venture into realms of virtual reality.[1] The key idea is that because the smart room knows something about the people in it, it can react intelligently to them. Working together with Pattie Maes and me, graduate students Trevor Darrell and Bruce M. Blumberg constructed the first smart room in 1991 at M.I.T. The initiative quickly grew into a collaborative experiment and now involves five such rooms, all linked by telephone lines, around the world: three in Boston, one in Japan and one in the U.K. (Installations are also planned for Paris, New York City and Dallas.)
Each room contains several machines, none more powerful than a personal computer. These units tackle different problems. For instance, if a smart room must analyze images, sounds and gestures, we equip it with three computers, one for each type of interpretation. If greater capabilities are needed, we add more machines. Although the modules take on different tasks, they all rely on the same statistical method, known as maximum likelihood analysis: the computers compare incoming information with models they have stored in memory.[2] They calculate the chance that each stored model describes the observed input and ultimately pick the closest match. By making such comparisons, our smart-room machines can answer a range of questions about their users, including who they are and sometimes even what they want.

1. Where?
Before a smart room can begin to figure out what people are doing, it needs to locate them. So graduate students Christopher R. Wren, Ali Azarbayejani and Darrell and I developed a system called Person Finder (Pfinder for short) that can track one person as he or she moves around in the room. As do our other systems, Pfinder adopts the maximum likelihood approach. First, it models the person the camera records as a connected set of blobs - two for the hands, two for the feet and one each for the head, shirt and pants. It describes each blob in two ways: as a distribution of values for the blob's color and placement, and as a so-called support map, essentially a list indicating which image pixels belong to the blob (pixels are "picture elements," similar to the dots that make up a television image).[3] Next, Pfinder creates textured surfaces to model the background scene. Each point on one of these surfaces correlates to an average color value and a distribution around that mean. Whenever the camera in the smart room picks up a new picture in the video stream, Pfinder compares that image with the models it has made and with other references as well. To start, the system guesses what the blob model should look like in the new image. If, for example, a person's upper body was moving to the right at one meter per second a tenth of a second ago, then Pfinder will assume that the center of the upper body blob has moved a tenth of a meter to the right. Such estimates are also checked against typical patterns of movement that we have derived from testing the system on thousands of people. For instance, we know that blobs corresponding to the torso must move slowly, whereas those relating to hands and feet generally move much faster.
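The prediction step described above can be sketched in a few lines of Python. This is only an illustration of the idea, not Pfinder itself: the function names, the speed limits and the positions are all invented for the example.

```python
import math

# A toy version of the prediction step: assume constant velocity, then
# sanity-check the estimate against typical speeds for each body part
# (torso slow, hands fast). All names and numbers are illustrative.
MAX_SPEED = {"torso": 0.5, "hand": 3.0}  # metres per second (assumed)

def predict_center(center, velocity, dt):
    """Estimate a blob's new centre from its last position and velocity."""
    return tuple(c + v * dt for c, v in zip(center, velocity))

def plausible(blob_kind, velocity):
    """Reject velocity estimates that exceed typical speeds for that part."""
    return math.hypot(*velocity) <= MAX_SPEED[blob_kind]

# Upper body moving right at 1 m/s, observed a tenth of a second ago:
print(predict_center((0.5, 1.2), (1.0, 0.0), 0.1))  # centre shifts 0.1 m right
print(plausible("torso", (1.0, 0.0)))               # too fast for a torso: False
```

The plausibility check plays the role of the "typical patterns of movement" mentioned in the text: an estimate that would make a torso move like a hand is discounted.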
Predictions finished, Pfinder then measures the chance that each pixel in the new image belongs to each blob. It does so by subtracting the pixel's color and brightness values from each blob's mean color and brightness values. It compares the result with each blob's distribution to determine how likely it is that the difference happened by chance. If, for example, the brightness difference between a pixel and a blob were 10 percent, and the blob's statistics said that such a difference happened only 1 percent of the time, the chance that the pixel belonged to the blob would be a mere one in 100.[4] Shadows present a minor problem in that they cause brightness differences that have nothing to do with the probability that some pixel belongs to some blob. So Pfinder searches out shadows, areas that are darker than expected, and evens out their color hue and saturation using the area's overall brightness.
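As a rough illustration of this maximum-likelihood pixel assignment, the sketch below models each blob as a mean colour with a per-channel spread and assigns a pixel to whichever blob makes its colour most probable. The blob names and statistics are invented for the example, and a real model would also include the position terms described above.

```python
import math

def log_likelihood(pixel, mean, std):
    """Log-probability of a pixel's colour under one blob's Gaussian model
    (channels treated as independent)."""
    return sum(
        -math.log(s * math.sqrt(2 * math.pi)) - (p - m) ** 2 / (2 * s ** 2)
        for p, m, s in zip(pixel, mean, std)
    )

# Illustrative blob statistics: mean (Y, U, V) colour and per-channel spread.
BLOBS = {
    "shirt": ((200.0, 40.0, 30.0), (10.0, 8.0, 8.0)),
    "background": ((60.0, 120.0, 110.0), (15.0, 15.0, 15.0)),
}

def assign(pixel):
    """Maximum likelihood choice: the blob with the highest log-likelihood."""
    return max(BLOBS, key=lambda b: log_likelihood(pixel, *BLOBS[b]))

print(assign((195.0, 45.0, 28.0)))  # near the shirt's mean colour -> shirt
```

Working in log-probabilities avoids numerical underflow when many independent terms are multiplied, which is why the comparison sums logs rather than multiplying densities.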
Pfinder must also overcome slight changes in the lighting or arrangement of objects in the room, either of which might make it place certain pixels in the wrong models. To avoid this difficulty, the system continuously updates the pixels that are visible behind the user, averaging the old color information with the new. In this way, it keeps track of changes that occur, for instance, when the user moves a book and thus alters the scene in two places: where the book was and where it now is. After completing these various calculations and compensations, Pfinder at last assigns each pixel in the new image to the model that most likely contains it. Finally, it updates the statistics describing the blob model and the background scene, as well as those anticipating which way the blobs will move.

2. Who and How?
Aside from knowing where people are, a smart room must also know who they are and what they are saying. Many workers have invented algorithms that allow computers to understand speech. Virtually all those systems work well only when the user wears a microphone or sits near one. A room that interpreted your actions only when you stood in a particular spot would not seem so smart. So graduate students Sumit Basu and Michael Casey and I looked for another solution - one that would let a computer decode a user's speech as he or she moved freely about some room, even if the room were quite noisy. Our end product takes advantage of the fact that Pfinder follows the user's position at all times. Borrowing this information, the speech-recognition system electronically "steers" an array of fixed microphones so that they reinforce only those sounds coming from the direction of the user's mouth.[5] It is an easy job. Because sound travels at a fixed speed, it arrives at different locations at slightly different times. So each sound location yields a different pattern of time delays. Thus, if the system takes the outputs from a fixed array of microphones and adds them to time delays that characterize a certain location, it can reinforce the sound from that location. Then it need only compare the sound with those of known words until a match is found.
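The time-delay trick described here is commonly called delay-and-sum beamforming, and it can be sketched as follows. The microphone positions, sample rate and helper names are assumptions made for the example, not details of the M.I.T. system.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second

def delay_samples(mic_pos, source_pos, sample_rate):
    """Travel time from source to microphone, rounded to whole samples."""
    return round(math.dist(mic_pos, source_pos) / SPEED_OF_SOUND * sample_rate)

def delay_and_sum(signals, mics, source, sample_rate):
    """Shift each channel to undo its arrival delay from `source`, then
    average: sound from that location adds in phase, noise does not."""
    delays = [delay_samples(m, source, sample_rate) for m in mics]
    base = min(delays)
    n = len(signals[0])
    out = []
    for t in range(n):
        acc = 0.0
        for sig, d in zip(signals, delays):
            idx = t + (d - base)  # advance each channel by its extra delay
            if idx < n:
                acc += sig[idx]
        out.append(acc / len(signals))
    return out

# An impulse from the speaker's mouth arrives later at the farther mic;
# after steering, both copies line up and reinforce.
rate, mics, mouth = 16000, [(0.0, 0.0), (0.5, 0.0)], (2.0, 0.0)
signals = []
for m in mics:
    sig = [0.0] * 120
    sig[delay_samples(m, mouth, rate)] = 1.0
    signals.append(sig)
out = delay_and_sum(signals, mics, mouth, rate)
print(max(out))  # 1.0: the two impulses add coherently
```

A sound coming from any other direction would produce a different delay pattern, so its copies would not line up and would partly cancel in the average - this is the "reinforce only those sounds" behaviour the text describes.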
A smart room must also know who is speaking in it or to it. To act with seeming intelligence, it is absolutely vital that a system know its users' identity. Who gives a command is often as important as the command itself. The fastest way to identify someone may well be to recognize his or her face. So we developed a system for our rooms to do just that. To employ the maximum likelihood approach, this system first needed to build models of all the faces it "knew." Working with M.I.T. graduate students Matthew A. Turk and Baback Moghaddam, we found that it was important to focus on those features that most efficiently described an entire set of faces. We used a mathematical technique called eigenvector analysis to describe those sets, dubbing the results "eigenfaces." To model a face, the system determined how similar that face was to each eigenface.
The strategy has worked well. When the camera detects a person, the identifying system extracts his or her face - located by Pfinder - from the surrounding scene and normalizes its contrast. The system then models the face in terms of what similarities it bears to the eigenfaces. Next, it compares the model with those of known people. If any of the similarity scores are close, the system assumes that it has identified the user. Using this method, our smart rooms have accurately recognized individual faces 99 percent of the time amid groups of several hundred.
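The eigenface idea can be sketched with a principal-component decomposition: faces, treated as flattened pixel vectors, are projected onto the strongest directions of variation in a training set, and a new face is identified by its nearest neighbour in that low-dimensional space. Everything below is illustrative - the "faces" are random vectors, and the choice of eight components is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))   # 20 training "faces", 64 pixels each
mean_face = faces.mean(axis=0)

# Rows of vt are orthonormal directions of greatest variance across the
# training set - the "eigenfaces". Keep only the strongest few.
_, _, vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = vt[:8]

def encode(face):
    """A face's similarity to each eigenface (its projection weights)."""
    return eigenfaces @ (face - mean_face)

gallery = {i: encode(f) for i, f in enumerate(faces)}  # known people

def identify(face):
    """Nearest known face in eigenface space."""
    code = encode(face)
    return min(gallery, key=lambda i: np.linalg.norm(gallery[i] - code))

probe = faces[7] + rng.normal(scale=0.05, size=64)  # slightly noisy new view
print(identify(probe))  # recovers person 7
```

The compression is the point: comparing eight projection weights is far cheaper than comparing thousands of raw pixels, which is what makes the real-time recognition described above feasible.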
Facial expression is almost as important as identity. A teaching program, for example, should know if its students look bored. So once our smart room has found and identified someone's face, it analyzes the expression. Yet another computer compares the facial motion the camera records with maps depicting the facial motions involved in making various expressions. Each expression, in fact, involves a unique collection of muscle movements. When you smile, you curl the corners of your mouth and lift certain parts of your forehead; when you fake a smile, though, you move only your mouth. In experiments conducted by scientist Irfan A. Essa and me, our system has correctly judged expressions - among a small group of subjects - 98 percent of the time.

3. What?
Recognizing a person's face, expression and speech is just the first step. For houses, offices or cars to help us, they must be able to put these basic perceptions in context. The same motions, after all, can be interpreted quite differently depending on what the person making them intends. When you drive a car, for example, you sometimes take your foot from the gas pedal because you want to slow down. But you do the same when you get ready to make a turn. The difference is that in preparing for a turn, you adjust the steering wheel as you move your foot. So a computer system would need to consider how your movements had changed over time, in combination with other movements, to know what you were doing at any one moment.
In designing such a system, we borrowed ideas from the scientists working on speech recognition. They model individual words as sequences of sounds, or, as they call them, internal states. Each word has a characteristic distribution of internal states, which are sometimes phonemes (the smallest distinguishable units of speech) and sometimes just parts of phonemes. A computer system tries to identify words by comparing the sequence of sounds they contain with word models and then selecting the most likely matches.
We generalized this approach in the hope of determining people's intentions from their movements. We devised a computer system that can tell, for example, whether a person with one arm extended is pointing or merely stretching. The system recognizes the action involved in pointing by referring to a model having three internal states: raise the hand, hold it steady and return it quickly. The system sees stretching as one continuous movement. So by observing these internal states - characterized by the acceleration of the hand and the direction of its movement - our system works out what someone is doing.
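A drastically simplified sketch of this internal-state idea: collapse a sequence of hand speeds into "move" and "hold" phases, then check whether the sequence matches the three-state pointing model. Real systems of this kind use hidden Markov models; the threshold and speed values here are invented for illustration.

```python
def phases(speeds, still_threshold=0.05):
    """Collapse a hand-speed sequence (m/s) into a run of internal states,
    merging consecutive repeats: 'move' or 'hold'."""
    out = []
    for s in speeds:
        label = "hold" if s < still_threshold else "move"
        if not out or out[-1] != label:
            out.append(label)
    return out

def classify_gesture(speeds):
    """Pointing = raise, hold steady, return; stretching = one movement."""
    return "pointing" if phases(speeds) == ["move", "hold", "move"] else "stretching"

print(classify_gesture([0.4, 0.5, 0.02, 0.01, 0.45, 0.3]))  # pointing
print(classify_gesture([0.3, 0.4, 0.5, 0.4, 0.3]))          # stretching
```

The hold in the middle is what distinguishes the two actions, exactly as the text describes: stretching never passes through a steady internal state.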
To date, we have built several different systems for interpreting human actions in this way. The simplest allow people to use their body to control virtual environments. One such application is the Artificial Life Interactive Video Environment (ALIVE), a joint project of Maes's group and my own. ALIVE utilizes the smart room's description of the user's shape to place a video model of the user into a virtual-reality scene, where computer-generated life-forms reside. These virtual critters analyze information about a user's gestures, sounds and positions to decide how to interact with him or her. Silas the virtual dog, for example, plays fetch.
When a smart-room user mimics the motions involved in picking up and throwing Silas's virtual ball, the dog sees the video image in the ALIVE environment do the same and gets ready to chase after its toy. Silas also sits and rolls over on command. The smart room's output can be put to work in an even more direct manner. The user's body position can be mapped into a control space of sorts so that his or her sounds and gestures change the operating mode of a computer program. Game players, for example, have used this interface, instead of a joystick or trackball, to navigate three-dimensional virtual environments. If opponents appear on the left, the player need only turn to the left to face them; to fire a weapon, the player need only say, "Bang."

4. Why?
Virtual-reality games aside, many more practical applications of smart-room technology exist. Consider American Sign Language (ASL), a set of sophisticated hand gestures used by deaf and mute people. Because the gestures are quite complex, they offer a good test of our room's abilities. Hence, graduate student Thad Starner and I set out to build a system for interpreting ASL. We first built models for each sign, observing many examples of the hand motions involved, as described by Pfinder. We found that if we compared these models with Pfinder's models of an actual user while he or she was signing, we could translate a 40-word subset of ASL in real time with an accuracy rate of 99.2 percent. If we can increase the size of the vocabulary that our system understands - and it seems very likely that we will be able to do so - it may be possible to create interfaces for deaf people as reliable as the speech-recognition systems that are now being introduced for people who can hear.[6]

Automobile drivers, too, stand to benefit from smart-room technology. In many parts of the U.S., the average worker spends 10 hours a week in a car. More than 40,000 motorists die in traffic accidents each year, the majority of which can be attributed to driver error. So together with Andy Liu, a scientist at Nissan Cambridge Basic Research, we have been building a smart-room version of a car interior. The ultimate goal is to develop a vehicle that can monitor what the driver is doing and provide useful feedback, such as road directions, operating instructions and even travel warnings. To compile a set of driving models - including what actions people took when they were passing, following, turning, stopping, accelerating or changing lanes - we observed the hand and leg motions of many drivers as they steered their way through a simulated course. We used the resulting models to classify a test driver's action as quickly as possible. Surprisingly, the system could determine what the driver was doing almost as soon as the action had started. It classified actions with an accuracy of 86 percent within 0.5 second of the start of an action. Given two seconds, the accuracy rose to 97 percent.
We have shown that, at least in simple situations, it is possible to track people's movements, identify them and recognize their expressions in real time using only modest computational resources.[7] By combining such capabilities, we have built smart rooms in which, free from wires or keyboards, individuals can control computer displays, play with virtual creatures and even communicate by way of sign language. Such perceptual intelligence is already beginning to spread to a wider variety of situations. For instance, we are now building prototypes of eyeglasses that recognize your acquaintances and whisper their names in your ear. We are also working on television screens that know when people are watching them. And we plan to develop credit cards that can recognize their owners and so know when they have been stolen.
Other research groups at the Media Lab are working to grant our smart rooms the ability to sense attention and emotion and thereby gain a deeper understanding of human actions and motivations. Rosalind W. Picard hopes to devise a system that can tell when drivers or students are not paying attention. Aaron Bobick is writing software to interpret the human motions used in sports - imagine a television camera that could discriminate between two football plays, say, a quarterback sneak and an end run, and follow the action. As smart-room technology develops even further, computers will come to seem more like attentive assistants than insensible tools. In fact, it is not too far-fetched to imagine a world in which the distinction between inanimate and animate objects actually begins to blur.
Vocabulary

1. kid n. a child; a young goat. v. to kid, tease, joke, deceive.
2. baby-sitter n. a person who looks after children temporarily.
3. audiovisual n. (usu. pl.) audiovisual equipment or teaching materials. adj. audiovisual.
4. goggle n. a stare; (pl.) goggles, protective eyeglasses. adj. staring, bulging (of eyes). vi. to roll the eyes, to stare. vt. to make (the eyes) roll.
5. tackle n. gear, equipment, pulley; the act of tackling. vt. to deal with (a difficult matter), to handle, to seize. vi. to grab, to grapple.
6. statistical adj. of or relating to statistics.
7. blob n. a drop, a spot. vt. to splash, to blot.
8. textured adj. having a perceptible texture; coarse to the touch.
9. correlate vt. to bring into mutual relation. vi. to be correlated with.
10. torso n. the trunk of the human body (the sense used in this text); also, an unfinished or mutilated work.
11. hue n. hue, tint, shade of color; also, an outcry (as in "hue and cry").
12. saturation n. saturation; soaking, steeping; (of color) saturation, intensity.
13. compensation n. compensation, indemnity.
14. anticipate vt. to expect, to look forward to; to use in advance; to forestall. v. to foresee, to predict.
15. pedal n. a pedal. v. to work the pedals of.
16. phoneme n. (linguistics) phoneme.
17. dimensional adj. of or relating to dimensions (as in "three-dimensional").
18. prototype n. a prototype.
19. insensible adj. without sensation; unfeeling, callous.
20. blur v. to smear; to sully (a reputation); to make (a line or view) dim or indistinct. n. a smear, an indistinct patch.
Important Sentences

[1] The computers assess what people in the smart room are saying and doing. Thanks to this connection, visitors can use their actions, voices and expressions - instead of keyboards, sensors or goggles - to control computer programs, browse multimedia information or venture into realms of virtual reality.
Note the parenthetical phrase set off by dashes, "instead of keyboards, sensors or goggles"; the second sentence follows the frame "Visitors can use A to do something."
[2] Although the modules take on different tasks, they all rely on the same statistical method, known as maximum likelihood analysis: the computers compare incoming information with models they have stored in memory.
"Maximum likelihood analysis" names the statistical method (in Chinese, 最大似然法): the computers match incoming information against the models held in memory.
[3] It describes each blob in two ways: as a distribution of values for the blob's color and placement, and as a so-called support map, essentially a list indicating which image pixels belong to the blob (pixels are "picture elements," similar to the dots that make up a television image).
The phrase "essentially a list indicating which image pixels belong to the blob" stands in apposition to "support map."
[4] If, for example, the brightness difference between a pixel and a blob were 10 percent, and the blob's statistics said that such a difference happened only 1 percent of the time, the chance that the pixel belonged to the blob would be a mere one in 100.
Note the "If, for example, the ..." sentence pattern, which interrupts the conditional clause with an illustrative aside.
[5] Borrowing this information, the speech-recognition system electronically "steers" an array of fixed microphones so that they reinforce only those sounds coming from the direction of the user's mouth.
Note the quoted verb "steer," used figuratively here; its literal sense is "to guide, to drive."
[6] If we can increase the size of the vocabulary that our system understands - and it seems very likely that we will be able to do so - it may be possible to create interfaces for deaf people as reliable as the speech-recognition systems that are now being introduced for people who can hear.
The material between the dashes is a parenthetical insertion and can be set aside when analyzing the sentence structure.
[7] We have shown that, at least in simple situations, it is possible to track people's movements, identify them and recognize their expressions in real time using only modest computational resources.