Executive Summary

Driven by the joint efforts of key technologies such as big data and cloud computing, a sizable number of generative pre-trained transformer (GPT) large models, represented by ChatGPT, have emerged, showing highly creative content-generation capabilities and providing a highly intelligent human-computer interaction experience. For a long time, there have been many technical problems in communication that are difficult to model accurately or solve efficiently using traditional methods. Meanwhile, GPT demonstrates the potential to improve the performance of information communication services and intelligent autonomous networks. In addition, the rapid development and broad application of GPT also need to be supported by a communication network with large bandwidth, low latency, and high reliability.
Therefore, from the perspective of communication practitioners, this white paper explores the interrelationship between GPT and communication. Firstly, Chapter 1 sketches the concept, development process, and research status of GPT large models. Secondly, Chapter 2 discusses the new applications of GPT in the communication industry and the position of GPT in network intelligent autonomy. Thirdly, Chapter 3 explores how communication networks enable the broad application of GPT and gives a typical idea of future network design. Moreover, Chapter 4 analyzes the process of GPT and communication moving from independent evolution to collaborative development, as well as applications of “6G+GPT” empowering the digital transformation of industries. In addition, Chapter 5 points out the five most obvious problems and challenges in the integration of “GPT+Communication” and provides some solutions. Subsequently, Chapter 6 puts forward several suggestions on how GPT and the communication industry can develop together, as well as prospects for the future. Finally, Chapter 7 concludes this white paper.
Contents

Executive Summary
0 Preface
1. GPT Leads the Tide of Artificial Intelligence Development
1.1. Basic Concepts of GPT
1.1.1 Generative Pre-trained Transformer
1.1.2 Large Model
1.1.3 Transformer Architecture
1.2. Development History of GPT
1.3. Current Research Status of GPT
1.3.1 Foreign Research Status
1.3.2 Domestic Research Status
1.3.3 International Organizations
2. GPT Empowers the Communication Industry
2.1. GPT Stimulates New Applications and Reforms in Communication
2.1.1 Intelligent Customer Service
2.1.2 Automation Simulation
2.1.3 Enhanced Semantic Communication
2.1.4 Reshaping the Field of Chip Design
2.2. GPT Promotes Intelligent Autonomy in Communication Networks
2.2.1 GPT Reshapes Network Planning
2.2.2 GPT Enhances Slicing Deployment
2.2.3 GPT Simplifies Network Operations and Maintenance
2.2.4 GPT Accelerates Network Optimization
3. Communication Networks Enable GPT Ubiquitous Applications
3.1 Communication Networks Guarantee the Landing of GPT Applications
3.2 Future Network Technology Supports GPT Applications
3.2.1 Typical Approaches to Future Network Design
3.2.2 6G Network with Native Support for GPT Applications
3.3 New Network Architecture Supports GPT Capability Sinking
3.3.1 Adaptive Slicing
3.3.2 Distributed Learning
3.3.3 Edge Intelligence
4. Collaborative Development of GPT and Communication
4.1. GPT and Communication from Independent Evolution to Close Integration
4.1.1 Trends in the Integration of GPT and Communication
4.1.2 Integration of GPT and 5G Networks
4.2. Integration and Development of GPT with 6G Communication Networks
4.2.1 GPT Supports Massive Data Processing
4.2.2 GPT Promotes Network Self-Service
4.2.3 GPT Assists in Network Resource Orchestration
4.2.4 GPT Constructs Network Endogenous Security
4.3. “6G+GPT” Empowers Industry Digital Transformation
4.3.1 “6G+GPT” Empowers Smart Industry
4.3.2 “6G+GPT” Empowers Smart Healthcare
4.3.3 “6G+GPT” Empowers Smart Transportation
4.3.4 “6G+GPT” Empowers Smart Agriculture
4.3.5 “6G+GPT” Empowers Smart Home
4.3.6 “6G+GPT” Empowers Digital Entertainment
5. Problems Faced by the Development of “GPT+Communication” Integration
5.1. Scarcity of High-Quality Training Data in Communication Leads to Poor Accuracy and Generalization of Specialized Models
5.2. Insufficient On-Device Computing Power and Hardware Resources Pose Challenges to Lightweight Deployment of Large Models
5.3. Difficulties in Cloud-Edge-Terminal Heterogeneous Network Collaboration Lead to Poor Stability Performance of Large Models
5.4. Server Interconnection Bandwidth Bottlenecks Result in Long Training Time and Low Inference Efficiency
5.5. Lagging Legal Regulations Related to Large Models Result in High Risks of Security, Privacy, and Ethical Issues
6. Development Recommendations and Future Prospects
6.1. Development Recommendations
6.1.1 Accelerating the Construction of AI Computing Power and Providing Infrastructure Support
6.1.2 Strengthening Joint Training of Schools and Enterprises to Fill the Gap in Innovative Talents
6.1.3 Accelerating the Formulation of Relevant Policies and Establishing Platforms to Guide Development
6.2. Future Prospects
6.2.1 Breakthroughs in Core Technologies and Significant Enhancement of Key Capabilities
6.2.2 Continuous Improvement in System Construction and Rapid Development of the Digital Economy
6.2.3 Expansion of Application Scenarios, Gradual Integration and Symbiosis
7. Conclusion
References
Abbreviations
Acknowledgments
0 Preface
In recent years, as Artificial Intelligence (AI) technology has continued to advance, particularly in the areas of reinforcement learning, large models, and generative content, various industries have been actively exploring its applications. At the end of November 2022, OpenAI released the rapidly popularized chatbot ChatGPT, which possesses astonishing natural language understanding and generation capabilities, attracting widespread attention from society. Subsequently, in March 2023, the launch of the upgraded GPT-4 multimodal large model reignited enthusiasm for generative AI, leading to the emergence of numerous large models in quick succession.
Since the inception of text-based conversational interactions, GPT has profoundly impacted people’s production and lives within a few short years, bringing about significant changes, and many people believe that it will continue to bring disruptive changes. Bill Gates pointed out that large models represent the most revolutionary technological advancement in over 40 years; NVIDIA CEO Jensen Huang likened the emergence of large models to the “iPhone moment” of AI; and Baidu CEO Robin Li proposed at the 2023 Zhongguancun Forum that large models are ready to change the world. From the ripples caused by ChatGPT to the global wave it unleashed, GPT large models have become one of the most discussed topics today, signaling a crucial turning point in the development of generative AI; the year 2023 will also undoubtedly leave a significant mark in the history of AI development.
As an industry facilitating information exchange and transmission among humans, nature, and machines, the communication industry is closely intertwined with the development of large model technology. The communication industry itself has a high degree of digitalization and needs to handle complex data. The introduction of GPT can streamline a significant amount of work, bringing substantial capability enhancements for communication operators, particularly in the realms of network operations and maintenance (O&M) and service delivery, making them more intelligent. In the era of large models, with the advancement of GPT technology, the demand for computing power, data, and algorithms will experience explosive growth, requiring communication infrastructure to provide support. In the future, how GPT empowers the communication industry and how the communication industry supports GPT are questions that every communication professional should earnestly contemplate.
Therefore, this white paper is based on the development history and latest research advancements of GPT large models. On the one hand, it elaborates on the innovative applications of GPT within the communication industry in specific scenarios. On the other hand, it investigates how future communication networks provide native support for GPT in terms of architecture and key technologies. Subsequently, combining GPT with communication, it proposes a roadmap for the digital and intelligent transformation of key industries through their collaborative development, while also pointing out the problems and challenges in the integration and development process. In response to these issues, corresponding development recommendations and prospects are provided. Finally, the whole content of this white paper is summarized. The complete chapter structure of this white paper is illustrated in Figure 0-1 below.
Figure 0-1 White Paper Chapter Structure Diagram
This white paper was jointly organized and authored by the Beijing Institute of Technology, with participation from 18 entities, including the three major telecom operators (China Mobile, China Unicom, and China Telecom), seven top-tier universities, three renowned enterprises, and five leading research institutes in the industry. Spanning over eight months, the process involved the in-depth participation of over 50 experts and scholars, from conducting research and tracking the cutting-edge status of GPT large models to exploring the relationship between GPT and communication, conceptualizing the outline of the white paper, arranging specific chapter content, and assigning writing tasks. It underwent more than twenty rounds of discussions and revisions before reaching completion. During this period, some participating entities also successfully collaborated to apply for an international cooperation project from the Ministry of Science and Technology of the People’s Republic of China, titled “Research on Key Technologies of Integrated Multidimensional Intelligent Orchestration in Cloud Computing Networks Based on Large Models,” thereby better supporting the completion of this white paper.
We believe that AI technology is still in a rapidly developing stage, and the integration and mutual support between GPT large models and communication networks can continually expand innovative application scenarios and improve ecosystem development, thus jointly promoting technological progress and the development of various industries.
1. GPT Leads the Tide of Artificial Intelligence Development
With the advancement of AI and deep learning technologies, the concept of “l(fā)arge models” has come into focus, with ChatGPT being the most notable example. On November 30, 2022, OpenAI officially released the AI chatbot ChatGPT, which represents Artificial Intelligence Generated Content (AIGC) in the field of natural language. Its powerful capabilities have changed the way many people work and live, sparking a new wave of AI globally and attracting wide attention from both industry and academia. On March 14, 2023, the officially released GPT-4 brought further upgrades, significantly relaxing text input restrictions, improving answer accuracy, and even enabling direct input of images to generate lyrics, creative texts, and more in varying styles, once again showcasing the impact of generative AI. On November 7, 2023, at the first-ever OpenAI DevDay, OpenAI CEO Sam Altman showcased GPT-4 Turbo to the world. As the latest version of GPT, it brings updates in areas such as data quality, image processing, and speech conversion, offering developers and users more possibilities and opportunities.
So, what are ChatGPT and GPT? What development journey have they undergone? And how should they be understood and applied? This chapter will start with an exploration of GPT large models, introducing their basic concepts, development history, and current research status to provide readers with a comprehensive and in-depth understanding of GPT.
1.1. Basic Concepts of GPT

1.1.1 Generative Pre-trained Transformer
GPT stands for Generative Pre-trained Transformer, a term originating from the fields of deep learning and natural language processing (NLP). Over the past few years, with the advancement of computing power and the emergence of big data, significant breakthroughs have been made in the field of NLP. GPT, as an integration of a series of NLP technologies, emerged in such a context, as shown in Figure 1-1.
G: Generative. This indicates that GPT has the ability to spontaneously generate content.

P: Pre-trained. This indicates that GPT has undergone pre-training and is ready for immediate use.

T: Transformer. This indicates that GPT is based on the Transformer architecture for language modeling.

Figure 1-1 Meaning of GPT
In 2017, the Google team first proposed the Transformer model based on the Self-Attention Mechanism (SAM) and applied it to NLP [1]. OpenAI applied this technology and released the earliest generation of large models, GPT-1, in 2018. Since then, the parameter size of each generation of GPT models has grown explosively. The parameter size of GPT-2, released in February 2019, was 1.5 billion, while GPT-3, released in May 2020, directly reached 175 billion.
The meteoric rise of ChatGPT was not by chance. It is the result of the efforts of many people and a long period of evolution. To understand the development of GPT, one should first grasp the concepts of large models and the Transformer architecture.
1.1.2 Large Model
Generally speaking, before ChatGPT, the AI models that received public attention were mainly used for single tasks. For example, “AlphaGo”, which ignited the entire AI market and prompted its explosive development, defeated Go world champion Lee Sedol in the 2016 “Man vs. Machine” match, having been trained on global Go game records. However, fundamentally, these AI models, which focus on specific tasks, can only be called “small models” compared to ChatGPT.
Large models refer to machine learning models with huge parameter scales and complexity. The term usually refers to Large Language Models (LLMs). A language model is an AI model that, after training, can understand and generate human language, and “l(fā)arge” means that the model’s parameters are very large relative to those of “small models.”
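As a toy illustration of this definition (our own sketch, not from this white paper), the snippet below builds the smallest possible “l(fā)anguage model”: a table of next-word counts estimated from a tiny corpus. An LLM performs the same next-token probability estimation, only with a Transformer and billions of parameters instead of a count table.

```python
# Toy illustration: a language model is, at its core, a next-token probability
# model. Here word-pair counts play the role that an LLM's parameters play.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the mat .".split()

# "Training": count how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(word):
    """Estimate P(next token | previous token) from the counts."""
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_token_probs("the"))  # {'cat': 0.25, 'mat': 0.5, 'dog': 0.25}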
As shown in Figure 1-2, this evolutionary tree traces the development history of large models in recent years, highlighting some of the most well-known models, with models on the same branch being more closely related [2]. Solid squares represent open-source models, while hollow squares represent closed-source models. Non-Transformer models are shown in gray; among Transformer-based models, Encoder-only models are in the pink branch, Decoder-only models are in the blue branch, and Encoder-Decoder models are in the green branch.
Figure 1-2 Evolutionary Tree of Large Models
Based on this evolutionary tree diagram, we can conclude that Decoder-only models are gradually becoming dominant in LLM development and that OpenAI continues to maintain its leading position in LLMs. Meta has made outstanding contributions to open-source LLM research, but there has been a trend towards closed-source development since the launch of GPT-3. In addition, many companies and institutions, such as Google, are still actively exploring Encoder-Decoder models.

Currently, major institutions abroad that release large models include OpenAI, Anthropic, Google, and Meta, with model parameter scales mainly in the tens and hundreds of billions. Up to now, the top GPT large models abroad include ChatGPT, Claude, Bard, and Llama. Among them, after Google released its latest natively multimodal large model, Gemini, Bard was officially renamed Gemini.
In this globally competitive arena, China is also keeping pace, developing many large models, including Tencent’s “Hunyuan,” Alibaba’s “Tongyi Qianwen,” Huawei’s “Pangu,” and China Mobile’s “Jiutian” series. Data shows that as of October 2023, a total of 254 domestic companies, universities, and research institutes had released large models with over 1 billion parameters, indicating that the “battle of a hundred models” is transitioning from the previous stage of “being born” to a new stage of “being used.” Figure 1-3 shows some of the large models currently developed by Chinese and foreign companies.

Figure 1-3 Various Types of Large Models
1.1.3 Transformer Architecture
The Transformer architecture is a crucial foundation of GPT. It is a neural network architecture based on the SAM and is widely used in large models in the field of NLP. Its core components are the Encoder and the Decoder: the Encoder encodes input text into a series of vectors, while the Decoder decodes these vectors, one by one, into output text. Before the introduction of the Transformer, the mainstream models in the NLP field were Recurrent Neural Networks (RNNs), which used recurrence, together with convolutional neural networks, for language sequence transformation.
In June 2017, the Google Brain team published a paper titled Attention Is All You Need at the top AI conference NeurIPS, proposing a new network architecture called the Transformer. It is entirely based on the SAM, abandoning recurrence and convolution. After only 12 hours of training on eight P100 Graphics Processing Units (GPUs), the Transformer achieved higher translation quality [1], showcasing excellent parallelism and becoming the most advanced architecture for language modeling at the time.
Figure 1-4 illustrates the network structure of the Transformer. It consists of a series of Encoders and Decoders, each comprising multi-head attention layers and fully connected feed-forward networks. GPT, similar to the Decoder part of the Transformer, is an autoregressive model: it produces output one token at a time, with each generated token fed back in as input for the next step.

Figure 1-4 Transformer Network Structure Diagram
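To make “autoregressive” concrete, the sketch below shows the decoding loop that a decoder-only model runs at inference time. Here `model` is a hypothetical stand-in for a trained network that maps the token ids produced so far to a next-token probability distribution; the names and shapes are illustrative assumptions, not OpenAI’s actual interface.

```python
# Minimal autoregressive decoding loop, as run by a decoder-only model such as GPT.
# `model` is a hypothetical stand-in for a trained network: given the token ids
# produced so far, it returns a probability distribution over the next token.
import numpy as np

def generate(model, prompt_ids, max_new_tokens, eos_id=None):
    """Greedily extend `prompt_ids` one token at a time."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        probs = model(ids)               # shape: (vocab_size,)
        next_id = int(np.argmax(probs))  # greedy pick; real systems often sample
        ids.append(next_id)              # the new token joins the model's input
        if next_id == eos_id:            # stop at end-of-sequence, if given
            break
    return ids
```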
The core component of the Transformer is the multi-head attention module, as shown in Figure 1-5. It takes three inputs: Q (Query), K (Key), and V (Value). It then calculates the similarity between each pair of Q and K and weights each V by that similarity to obtain the attention result.
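In the notation of the original Transformer paper [1], this similarity-and-weighting computation is the scaled dot-product attention, where d_k is the dimension of the key vectors and the division by its square root keeps the dot products from pushing the softmax into saturation:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```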
Figure 1-5 Multi-Head Attention Mechanism Module
The multi-head attention mechanism does not compute attention only once; instead, it splits the input into smaller blocks and computes scaled dot-product attention in parallel on each subspace. This design allows each attention head to attend to different feature aspects of each word, balancing the biases that may arise from a single attention computation and enabling the model to capture semantic information at different levels, thereby enhancing the model’s expressive power and improving its effectiveness.
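As a concrete sketch of this split-compute-concatenate design, the NumPy function below implements multi-head scaled dot-product attention. For brevity it assumes Q, K, and V have already been linearly projected; the learned projection matrices of [1] are omitted, and all shapes are illustrative.

```python
# A NumPy sketch of multi-head scaled dot-product attention, following [1].
# For brevity, Q, K, V are assumed to be already projected; the learned
# projection matrices (W_Q, W_K, W_V, W_O) of the paper are omitted.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(Q, K, V, num_heads):
    """Q, K, V: (seq_len, d_model), with d_model divisible by num_heads."""
    seq_len, d_model = Q.shape
    d_k = d_model // num_heads

    def split_heads(X):  # (seq_len, d_model) -> (num_heads, seq_len, d_k)
        return X.reshape(seq_len, num_heads, d_k).transpose(1, 0, 2)

    q, k, v = split_heads(Q), split_heads(K), split_heads(V)
    # Scaled dot-product attention, computed in parallel across the heads.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)  # (num_heads, seq, seq)
    out = softmax(scores) @ v                         # (num_heads, seq, d_k)
    # Concatenate the heads back into the model dimension.
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

# Usage: self-attention over 10 tokens with model width 64 and 8 heads.
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 64))
print(multi_head_attention(x, x, x, num_heads=8).shape)  # (10, 64)
```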
1.2. Development History of GPT

Figure 1-6 Development History of GPT
The development history of GPT can be divided into two stages. Before ChatGPT, the emphasis was on continuously increasing the basic scale of large models and unlocking new capabilities. ChatGPT and GPT-4, on the other hand, focus more on reinforcement learning from human feedback to understand human intent and provide better services, as shown in Figure 1-6.
① June 2018: OpenAI published the paper Improving Language Understanding by Generative Pre-Training and officially released GPT-1 [3].

- Basic approach: generative pre-training (unsupervised) + downstream task fine-tuning (supervised).
- Based on a unidirectional Transformer language model with a decoder structure, consisting of 12 layers.
- 117 million parameters and 5 GB of training data; relatively limited model size and capabilities.
- Context window: 512 tokens.
② February 2019: OpenAI published the paper Language Models are Unsupervised Multitask Learners, proposing that language models are unsupervised multitask learners, and GPT-2 was born [4].

- Basic approach: removing supervision, retaining only unsupervised learning.
- 48-layer Transformer structure.
- 1.5 billion parameters, with the training data volume increased to 40 GB.
- Context window: 1024 tokens.
③ May 2020: OpenAI published the paper Language Models are Few-Shot Learners and introduced the GPT-3 model [5].

- Basic approach: unsupervised learning + in-context learning.
- 96-layer multi-head Transformer.
- Parameter count increased to 175 billion, trained on 45 TB of text data.
- Context window: 2048 tokens.
④ March 2022: OpenAI published the paper Training Language Models to Follow Instructions with Human Feedback, introducing Reinforcement Learning from Human Feedback (RLHF), and launched the InstructGPT model [6].

- Basic approach: RLHF + fine-tuning.
- Enhanced human adjustment of model output.
- Results ranked in a more understandable manner.

ChatGPT is a derivative of InstructGPT; the two share the same model structure and training method and differ only in the way they collect data, with ChatGPT focusing more on interaction in the form of dialogue.
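A brief note on how RLHF learns from human feedback may help here: as described in [6], annotators rank alternative model outputs, and a reward model is trained on these comparisons with a loss that pushes the reward of the preferred answer above that of the rejected one. The sketch below computes this pairwise ranking loss for made-up reward values; the language model is then fine-tuned (with PPO in [6]) to maximize the learned reward.

```python
# Pairwise ranking loss used to train the RLHF reward model (InstructGPT [6]):
# the reward of the human-preferred response should exceed that of the rejected one.
import math

def pairwise_ranking_loss(r_chosen, r_rejected):
    """-log sigmoid(r_chosen - r_rejected): small when r_chosen >> r_rejected."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Hypothetical reward-model scores for two answers to the same prompt.
print(pairwise_ranking_loss(2.0, -1.0))  # ~0.049: reward model agrees with the human
print(pairwise_ranking_loss(-1.0, 2.0))  # ~3.049: disagreement is strongly penalized
```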
⑤ March 2023: OpenAI released the multimodal pre-trained large model GPT-4, once again delivering significant upgrades.

- Basic approach: multimodal.
- Context window: 8192 tokens.
- 1.8 trillion parameters and 13 trillion tokens of training data.
- Powerful image recognition capabilities.
Although the current capabilities of GPT-4 in real-world scenarios may not match those of humans, it has demonstrated significantly superior abilities in various professional and academic exams. GPT-4’s SAT score (the SAT can be understood as the U.S. college admissions test) has surpassed those of 90% of test takers, reaching the level required for admission to top universities such as Harvard and Stanford.
1.3. Current Research Status of GPT
On October 12, 2023, the analysis company stateof.ai released the State of AI Report 2023. The report pointed out that OpenAI’s GPT-4 remains the most powerful LLM globally, and that generative AI has propelled advancements in the life sciences and has been a savior for the venture capital industry [7]. Large models continue to achieve technological breakthroughs, especially in the field of life sciences, making significant progress in molecular biology and drug discovery.

On December 14, 2023, Nature announced its list of ten people who shaped science in 2023. Notably, the chatbot ChatGPT, owing to its dominance of news headlines in 2023 and its profound impact on the scientific community and society at large, was included as an 11th, “non-human” member of the list, recognizing the significant changes generative AI has brought to scientific development and progress. Currently, both domestically and abroad, research on GPT large models continues to deepen, with many institutions starting to develop their own large models and application scenarios becoming increasingly diverse. Large models represented by ChatGPT have officially ushered in the era of AI 2.0.
1.3.1 Foreign Research Status

① United States
In the United States, startups like OpenAI and Anthropic, along with tech giants such as Microsoft and Google, are leading the rapid development of large models, and major companies are continually enhancing their competitiveness. Google invested $300 million in Anthropic to counter the threat posed by ChatGPT; Anthropic adopted reinforcement learning from AI feedback (RLAIF) to reduce reliance on human feedback, and in December 2022 published a paper titled Constitutional AI: Harmlessness from AI Feedback, introducing the AI model Claude. Buzzfeed, a US new-media giant, saw its stock price triple in two days after announcing plans to use ChatGPT to assist content creation. Microsoft, as the main investor in OpenAI, is also using ChatGPT to enhance the competitiveness of its products and to compensate for their shortcomings in professional knowledge and mathematics.
② United Kingdom

In April 2023, the UK government announced that it would provide £100 million in initial funding to the team responsible for building the UK’s version of a foundational AI model, in order to accelerate the development of AI technology in the UK. The government stated that this investment would fund new teams built jointly by the government and industry to ensure the UK’s AI “sovereign capabilities.” The goal of this initiative is to promote the application of safe and reliable foundational models and to strive to build the UK into a technological “superpower” by 2030. In addition, in response to the controversy over the application of large models such as GPT in AI ethics, the UK has also issued a white paper on regulatory measures, stating that regulatory agencies will next issue guidelines and risk assessment templates to various organizations, and that other tools and resources will be used to formulate specific implementation principles within the industry.
③ Europe

In Finland, Flowrite is an AI-based writing tool that can generate emails, messages, and other content from input keywords. In the Netherlands, the omnichannel communication platform MessageBird launched its own AI platform, MessageBird AI, which can understand the meaning of customer messages and respond accordingly. Both are based on GPT-3. Germany is also constantly catching up in the development of large models; for example, on March 7, 2023, Google launched the multimodal large model PaLM-E, jointly developed with the Technical University of Berlin. In February 2024, the European generative AI un