人工智能專用名詞 (Glossary of Artificial Intelligence Terms)
累積誤差逆?zhèn)鞑ダ鄯e誤差逆?zhèn)鞑ゼせ詈瘮?shù)自適應(yīng)諧振理論加性學(xué)習(xí)對抗網(wǎng)絡(luò)仿射層親和矩陣代理/智能體算法-剪枝異常檢測近似Roc曲線下面積通用人工智能人工智能關(guān)聯(lián)分析注意力機(jī)制屬性條件獨(dú)立性假設(shè)屬性空間屬性值自編碼器自動語音識別自動摘要平均梯度平均池化LetterAAccumulatederrorbackpropagationActivationFunctionAdaptiveResonanceTheory/ARTAddictivemodelAdversarialNetworksAffineLayerAffinitymatrixAgentAlgorithmAlpha-betapruningAnomalydetectionApproximationAreaUnderROCCurve/AUCArtificialGeneralIntelligence/AGIArtificialIntelligence/AIAssociationanalysisAttentionmechanismAttributeconditionalindependenceassumptionAttributespaceAttributevalueAutoencoderAutomaticspeechrecognitionAutomaticsummarizationAveragegradientAverage-PoolingLetterBBackpropagationThroughTimeBackpropagation/BPBaselearnerBaselearningalgorithmBatchNormalization/BNBayesdecisionruleBayesModelAveraging/BMABayesoptimalclassifierBayesiandecisiontheoryBayesiannetworkBetween-classscattermatrixBiasBias-variancedecompositionBias-VarianceDilemmaBi-directionalLong-ShortTermMemory/Bi-LSTMBinaryclassificationBinomialtestBi-partitionBoltzmannmachineBootstrapsamplingBootstrappingBreak-EventPoint/BEPLetterCCalibrationCascade-CorrelationCategoricalattribute通過時(shí)間的反向傳播反向傳播基學(xué)習(xí)器基學(xué)習(xí)算法批量歸一化貝葉斯判定準(zhǔn)則貝葉斯模型平均貝葉斯最優(yōu)分類器貝葉斯決策論貝葉斯網(wǎng)絡(luò)類間散度矩陣偏置/偏差偏差-方差分解偏差方差困境雙向長短期記憶二分類二項(xiàng)檢驗(yàn)二分法玻爾茲曼機(jī)自助采樣法/可重復(fù)采樣/有放回采樣自助法平衡點(diǎn)校準(zhǔn)級聯(lián)相關(guān)離散屬性

Class-conditional probability: 類條件概率
Classification and regression tree / CART: 分類與回歸樹
Classifier: 分類器
Class-imbalance: 類別不平衡
Closed-form: 閉式
Cluster: 簇/類/集群
Cluster analysis: 聚類分析
Clustering: 聚類
Clustering ensemble: 聚類集成
Co-adapting: 共適應(yīng)
Coding matrix: 編碼矩陣
COLT: 國際學(xué)習(xí)理論會議
Committee-based learning: 基於委員會的學(xué)習(xí)
Competitive learning: 競爭型學(xué)習(xí)
Component learner: 組件學(xué)習(xí)器
Comprehensibility: 可解釋性
Computation Cost: 計(jì)算成本
Computational Linguistics: 計(jì)算語言學(xué)
Computer vision: 計(jì)算機(jī)視覺
Concept drift: 概念漂移
Concept Learning System / CLS: 概念學(xué)習(xí)系統(tǒng)
Conditional entropy: 條件熵
Conditional mutual information: 條件互信息
Conditional Probability Table / CPT: 條件概率表
Conditional random field / CRF: 條件隨機(jī)場
Conditional risk: 條件風(fēng)險(xiǎn)
Confidence: 置信度
Confusion matrix: 混淆矩陣
Connection weight: 連接權(quán)
Connectionism: 連結(jié)主義
Consistency: 一致性/相合性
Contingency table: 列聯(lián)表
Continuous attribute: 連續(xù)屬性
Convergence: 收斂
Conversational agent: 會話智能體
Convex quadratic programming: 凸二次規(guī)劃
Convexity: 凸性
Convolutional neural network / CNN: 卷積神經(jīng)網(wǎng)絡(luò)
Co-occurrence: 同現(xiàn)
Correlation coefficient: 相關(guān)系數(shù)
Cosine similarity: 余弦相似度
Cost curve: 成本曲線
Cost Function: 成本函數(shù)
Cost matrix: 成本矩陣
Cost-sensitive: 成本敏感
Cross entropy: 交叉熵
Cross validation: 交叉驗(yàn)證
Crowdsourcing: 眾包
Curse of dimensionality: 維數(shù)災(zāi)難
Cut point: 截?cái)帱c(diǎn)
Cutting plane algorithm: 割平面法

Letter D
Data mining: 數(shù)據(jù)挖掘
Data set: 數(shù)據(jù)集
Decision Boundary: 決策邊界
Decision stump: 決策樹樁
Decision tree: 決策樹/判定樹
Deduction: 演繹
Deep Belief Network: 深度信念網(wǎng)絡(luò)
Deep Convolutional Generative Adversarial Network / DCGAN: 深度卷積生成對抗網(wǎng)絡(luò)
Deep learning: 深度學(xué)習(xí)
Deep neural network / DNN: 深度神經(jīng)網(wǎng)絡(luò)
Deep Q-Learning: 深度Q學(xué)習(xí)
Deep Q-Network: 深度Q網(wǎng)絡(luò)
Density estimation: 密度估計(jì)
Density-based clustering: 密度聚類
Differentiable neural computer: 可微分神經(jīng)計(jì)算機(jī)
Dimensionality reduction algorithm: 降維算法
Directed edge: 有向邊
Disagreement measure: 不合度量
Discriminative model: 判別模型
Discriminator: 判別器
Distance measure: 距離度量
Distance metric learning: 距離度量學(xué)習(xí)
Distribution: 分布
Divergence: 散度
Diversity measure: 多樣性度量/差異性度量
Domain adaptation: 領(lǐng)域自適應(yīng)
Downsampling: 下采樣
D-separation (Directed separation): 有向分離
Dual problem: 對偶問題
Dummy node: 啞結(jié)點(diǎn)
Dynamic Fusion: 動態(tài)融合
Dynamic programming: 動態(tài)規(guī)劃

Letter E
Eigenvalue decomposition: 特征值分解
Embedding: 嵌入
Emotional analysis: 情緒分析
Empirical conditional entropy: 經(jīng)驗(yàn)條件熵
Empirical entropy: 經(jīng)驗(yàn)熵
Empirical error: 經(jīng)驗(yàn)誤差
Empirical risk: 經(jīng)驗(yàn)風(fēng)險(xiǎn)
End-to-End: 端到端
Energy-based model: 基於能量的模型
Ensemble learning: 集成學(xué)習(xí)
Ensemble pruning: 集成修剪
Error Correcting Output Codes / ECOC: 糾錯輸出碼
Error rate: 錯誤率
Error-ambiguity decomposition: 誤差-分歧分解
Euclidean distance: 歐氏距離
Evolutionary computation: 演化計(jì)算
Expectation-Maximization: 期望最大化
Expected loss: 期望損失
Exploding Gradient Problem: 梯度爆炸問題
Exponential loss function: 指數(shù)損失函數(shù)
Extreme Learning Machine / ELM: 超限學(xué)習(xí)機(jī)

Letter F
Factorization: 因子分解
False negative: 假負(fù)類
False positive: 假正類
False Positive Rate / FPR: 假正例率
Feature engineering: 特征工程
Feature selection: 特征選擇
Feature vector: 特征向量
Featured Learning: 特征學(xué)習(xí)
Feedforward Neural Networks / FNN: 前饋神經(jīng)網(wǎng)絡(luò)
Fine-tuning: 微調(diào)
Flipping output: 翻轉(zhuǎn)法
Fluctuation: 震蕩
Forward stagewise algorithm: 前向分步算法
Frequentist: 頻率主義學(xué)派
Full-rank matrix: 滿秩矩陣
Functional neuron: 功能神經(jīng)元

Letter G
Gain ratio: 增益率
Game theory: 博弈論
Gaussian kernel function: 高斯核函數(shù)
Gaussian Mixture Model: 高斯混合模型
General Problem Solving: 通用問題求解
Generalization: 泛化
Generalization error: 泛化誤差
Generalization error bound: 泛化誤差上界
Generalized Lagrange function: 廣義拉格朗日函數(shù)
Generalized linear model: 廣義線性模型
Generalized Rayleigh quotient: 廣義瑞利商
Generative Adversarial Networks / GAN: 生成對抗網(wǎng)絡(luò)
Generative Model: 生成模型
Generator: 生成器
Genetic Algorithm / GA: 遺傳算法
Gibbs sampling: 吉布斯采樣
Gini index: 基尼指數(shù)
Global minimum: 全局最小
Global Optimization: 全局優(yōu)化
Gradient boosting: 梯度提升
Gradient Descent: 梯度下降
Graph theory: 圖論
Ground-truth: 真相/真實(shí)

Letter H
Hard margin: 硬間隔
Hard voting: 硬投票
Harmonic mean: 調(diào)和平均
Hesse matrix: 海塞矩陣
Hidden dynamic model: 隱動態(tài)模型
Hidden layer: 隱藏層
Hidden Markov Model / HMM: 隱馬爾可夫模型
Hierarchical clustering: 層次聚類
Hilbert space: 希爾伯特空間
Hinge loss function: 合頁損失函數(shù)
Hold-out: 留出法
Homogeneous: 同質(zhì)
Hybrid computing: 混合計(jì)算
Hyperparameter: 超參數(shù)
Hypothesis: 假設(shè)
Hypothesis test: 假設(shè)驗(yàn)證

Letter I
ICML: 國際機(jī)器學(xué)習(xí)會議
Improved iterative scaling / IIS: 改進(jìn)的迭代尺度法
Incremental learning: 增量學(xué)習(xí)
Independent and identically distributed / i.i.d.: 獨(dú)立同分布
Independent Component Analysis / ICA: 獨(dú)立成分分析
Indicator function: 指示函數(shù)
Individual learner: 個體學(xué)習(xí)器
Induction: 歸納
Inductive bias: 歸納偏好
Inductive learning: 歸納學(xué)習(xí)
Inductive Logic Programming / ILP: 歸納邏輯程序設(shè)計(jì)
Information entropy: 信息熵
Information gain: 信息增益
Input layer: 輸入層
Insensitive loss: 不敏感損失
Inter-cluster similarity: 簇間相似度
International Conference for Machine Learning / ICML: 國際機(jī)器學(xué)習(xí)大會
Intra-cluster similarity: 簇內(nèi)相似度
Intrinsic value: 固有值
Isometric Mapping / Isomap: 等度量映射
Isotonic regression: 等分回歸
Iterative Dichotomiser: 迭代二分器

Letter K
Kernel method: 核方法
Kernel trick: 核技巧
Kernelized Linear Discriminant Analysis / KLDA: 核線性判別分析
K-fold cross validation: k折交叉驗(yàn)證/k倍交叉驗(yàn)證
K-Means Clustering: K均值聚類
K-Nearest Neighbours Algorithm / KNN: K近鄰算法
Knowledge base: 知識庫
Knowledge Representation: 知識表徵

Letter L
Label space: 標(biāo)記空間
Lagrange duality: 拉格朗日對偶性
Lagrange multiplier: 拉格朗日乘子
Laplace smoothing: 拉普拉斯平滑
Laplacian correction: 拉普拉斯修正
Latent Dirichlet Allocation: 隱狄利克雷分布
Latent semantic analysis: 潛在語義分析
Latent variable: 隱變量
Lazy learning: 懶惰學(xué)習(xí)
Learner: 學(xué)習(xí)器
Learning by analogy: 類比學(xué)習(xí)
Learning rate: 學(xué)習(xí)率
Learning Vector Quantization / LVQ: 學(xué)習(xí)向量量化
Least squares regression tree: 最小二乘回歸樹
Leave-One-Out / LOO: 留一法
Linear chain conditional random field: 線性鏈條件隨機(jī)場
Linear Discriminant Analysis / LDA: 線性判別分析
Linear model: 線性模型
Linear Regression: 線性回歸
Link function: 聯(lián)系函數(shù)
Local Markov property: 局部馬爾可夫性
Local minimum: 局部最小
Log-likelihood: 對數(shù)似然
Log odds / logit: 對數(shù)幾率
Logistic Regression: Logistic回歸
Log-linear regression: 對數(shù)線性回歸
Long-Short Term Memory / LSTM: 長短期記憶
Loss function: 損失函數(shù)

Letter M
Machine translation / MT: 機(jī)器翻譯
Macro-P: 宏查準(zhǔn)率
Macro-R: 宏查全率
Majority voting: 絕對多數(shù)投票法
Manifold assumption: 流形假設(shè)
Manifold learning: 流形學(xué)習(xí)
Margin theory: 間隔理論
Marginal distribution: 邊際分布
Marginal independence: 邊際獨(dú)立性
Marginalization: 邊際化
Markov Chain Monte Carlo / MCMC: 馬爾可夫鏈蒙特卡羅方法

Markov Random Field: 馬爾可夫隨機(jī)場
Maximal clique: 最大團(tuán)
Maximum Likelihood Estimation / MLE: 極大似然估計(jì)/極大似然法
Maximum margin: 最大間隔
Maximum weighted spanning tree: 最大帶權(quán)生成樹
Max-Pooling: 最大池化
Mean squared error: 均方誤差
Meta-learner: 元學(xué)習(xí)器
Metric learning: 度量學(xué)習(xí)
Micro-P: 微查準(zhǔn)率
Micro-R: 微查全率
Minimal Description Length / MDL: 最小描述長度
Minimax game: 極小極大博弈
Misclassification cost: 誤分類成本
Mixture of experts: 混合專家
Momentum: 動量
Moral graph: 道德圖/端正圖
Multi-class classification: 多分類
Multi-document summarization: 多文檔摘要
Multi-layer feedforward neural networks: 多層前饋神經(jīng)網(wǎng)絡(luò)
Multilayer Perceptron / MLP: 多層感知器
Multimodal learning: 多模態(tài)學(xué)習(xí)
Multiple Dimensional Scaling: 多維縮放
Multiple linear regression: 多元線性回歸
Multi-response Linear Regression: 多響應(yīng)線性回歸
Mutual information: 互信息

Letter N
Naive Bayes: 樸素貝葉斯
Naive Bayes Classifier: 樸素貝葉斯分類器
Named entity recognition: 命名實(shí)體識別
Nash equilibrium: 納什均衡
Natural language generation / NLG: 自然語言生成
Natural language processing: 自然語言處理
Negative class: 負(fù)類
Negative correlation: 負(fù)相關(guān)法
Negative Log Likelihood: 負(fù)對數(shù)似然
Neighbourhood Component Analysis / NCA: 近鄰成分分析
Neural Machine Translation: 神經(jīng)機(jī)器翻譯
Neural Turing Machine: 神經(jīng)圖靈機(jī)
Newton method: 牛頓法
NIPS: 國際神經(jīng)信息處理系統(tǒng)會議
No Free Lunch Theorem / NFL: 沒有免費(fèi)的午餐定理
Noise-contrastive estimation: 噪音對比估計(jì)
Nominal attribute: 列名屬性
Non-convex optimization: 非凸優(yōu)化
Nonlinear model: 非線性模型
Non-metric distance: 非度量距離
Non-negative matrix factorization: 非負(fù)矩陣分解
Non-ordinal attribute: 無序?qū)傩?
Non-Saturating Game: 非飽和博弈
Norm: 範(fàn)數(shù)
Normalization: 歸一化
Nuclear norm: 核範(fàn)數(shù)
Numerical attribute: 數(shù)值屬性

Letter O
Objective function: 目標(biāo)函數(shù)
Oblique decision tree: 斜決策樹
Occam's razor: 奧卡姆剃刀
Odds: 幾率
Off-Policy: 離策略
One shot learning: 一次性學(xué)習(xí)
One-Dependent Estimator / ODE: 獨(dú)依賴估計(jì)
On-Policy: 在策略
Ordinal attribute: 有序?qū)傩?
Out-of-bag estimate: 包外估計(jì)
Output layer: 輸出層
Output smearing: 輸出調(diào)制法
Overfitting: 過擬合/過配
Oversampling: 過采樣

Letter P
Paired t-test: 成對t檢驗(yàn)
Pairwise: 成對型
Pairwise Markov property: 成對馬爾可夫性
Parameter: 參數(shù)
Parameter estimation: 參數(shù)估計(jì)
Parameter tuning: 調(diào)參
Parse tree: 解析樹
Particle Swarm Optimization / PSO: 粒子群優(yōu)化算法
Part-of-speech tagging: 詞性標(biāo)注
Perceptron: 感知機(jī)
Performance measure: 性能度量
Plug and Play Generative Network: 即插即用生成網(wǎng)絡(luò)
Plurality voting: 相對多數(shù)投票法
Polarity detection: 極性檢測
Polynomial kernel function: 多項(xiàng)式核函數(shù)
Pooling: 池化
Positive class: 正類
Positive definite matrix: 正定矩陣
Post-hoc test: 後續(xù)檢驗(yàn)
Post-pruning: 後剪枝
Potential function: 勢函數(shù)
Precision: 查準(zhǔn)率/準(zhǔn)確率
Prepruning: 預(yù)剪枝
Principal component analysis / PCA: 主成分分析
Principle of multiple explanations: 多釋原則
Prior: 先驗(yàn)
Probability Graphical Model: 概率圖模型
Proximal Gradient Descent / PGD: 近端梯度下降
Pruning: 剪枝
Pseudo-label: 偽標(biāo)記

Letter Q
Quantized Neural Network: 量子化神經(jīng)網(wǎng)絡(luò)
Quantum computer: 量子計(jì)算機(jī)
Quantum Computing: 量子計(jì)算
Quasi Newton method: 擬牛頓法

Letter R
Radial Basis Function / RBF: 徑向基函數(shù)
Random Forest Algorithm: 隨機(jī)森林算法
Random walk: 隨機(jī)漫步
Recall: 查全率/召回率
Receiver Operating Characteristic / ROC: 受試者工作特征
Rectified Linear Unit / ReLU: 線性修正單元
Recurrent Neural Network: 循環(huán)神經(jīng)網(wǎng)絡(luò)
Recursive neural network: 遞歸神經(jīng)網(wǎng)絡(luò)
Reference model: 參考模型
Regression: 回歸
Regularization: 正則化
Reinforcement learning / RL: 強(qiáng)化學(xué)習(xí)
Representation learning: 表徵學(xué)習(xí)
Representer theorem: 表示定理
Reproducing kernel Hilbert space / RKHS: 再生核希爾伯特空間
Re-sampling: 重采樣法
Rescaling: 再縮放
Residual Mapping: 殘差映射
Residual Network: 殘差網(wǎng)絡(luò)
Restricted Boltzmann Machine / RBM: 受限玻爾茲曼機(jī)
Restricted Isometry Property / RIP: 限定等距性
Re-weighting: 重賦權(quán)法
Robustness: 穩(wěn)健性/魯棒性
Root node: 根結(jié)點(diǎn)
Rule Engine: 規(guī)則引擎
Rule learning: 規(guī)則學(xué)習(xí)

Letter S
Saddle point: 鞍點(diǎn)
Sample space: 樣本空間
Sampling: 采樣
Score function: 評分函數(shù)
Self-Driving: 自動駕駛
Self-Organizing Map / SOM: 自組織映射
Semi-naive Bayes classifiers: 半樸素貝葉斯分類器
Semi-Supervised Learning: 半監(jiān)督學(xué)習(xí)
Semi-Supervised Support Vector Machine: 半監(jiān)督支持向量機(jī)
Sentiment analysis: 情感分析
Separating hyperplane: 分離超平面
Sigmoid function: Sigmoid函數(shù)
Similarity measure: 相似度度量
Simulated annealing: 模擬退火
Simultaneous localization and mapping: 同步定位與地圖構(gòu)建
Singular Value Decomposition: 奇異值分解
Slack variables: 松弛變量
Smoothing: 平滑
Soft margin: 軟間隔
Soft margin maximization: 軟間隔最大化
Soft voting: 軟投票
Sparse representation: 稀疏表徵
Sparsity: 稀疏性
Specialization: 特化
Spectral Clustering: 譜聚類
Speech Recognition: 語音識別

Splitting variable: 切分變量
Squashing function: 擠壓函數(shù)
Stability-plasticity dilemma: 可塑性-穩(wěn)定性困境
Statistical learning: 統(tǒng)計(jì)學(xué)習(xí)
Status feature function: 狀態(tài)特征函數(shù)
Stochastic gradient descent: 隨機(jī)梯度下降
Stratified sampling: 分層采樣
Structural risk: 結(jié)構(gòu)風(fēng)險(xiǎn)
Structural risk minimization / SRM: 結(jié)構(gòu)風(fēng)險(xiǎn)最小化
Subspace: 子空間
Supervised learning: 監(jiān)督學(xué)習(xí)/有導(dǎo)師學(xué)習(xí)
Support vector expansion: 支持向量展式
Support Vector Machine / SVM: 支持向量機(jī)
Surrogate loss: 替代損失
Surrogate function: 替代函數(shù)
Symbolic learning: 符號學(xué)習(xí)
Symbolism: 符號主義
Synset: 同義詞集

Letter T
T-Distribution Stochastic Neighbour Embedding / t-SNE: T分布隨機(jī)近鄰嵌入
Tensor: 張量
Tensor Processing Units / TPU: 張量處理單元
The least square method: 最小二乘法
Threshold: 閾值
Threshold logic unit: 閾值邏輯單元
Threshold-moving: 閾值移動
Time Step: 時(shí)間步驟
Tokenization: 標(biāo)記化
Training error: 訓(xùn)練誤差
Training instance: 訓(xùn)練示例/訓(xùn)練例
Transductive learning: 直推學(xué)習(xí)
Transfer learning: 遷移學(xué)習(xí)
Treebank: 樹庫
Trial-by-error: 試錯法
True negative: 真負(fù)類
True positive: 真正類
True Positive Rate / TPR: 真正例率
Turing Machine: 圖靈機(jī)
Twice-learning: 二次學(xué)習(xí)

Letter U
Underfitting: 欠擬合/欠配
Undersampling: 欠采樣
Understandability: 可理解性
Unequal cost: 非均等代價(jià)
Unit-step function: 單位階躍函數(shù)
Univariate decision tree: 單變量決策樹
Unsupervised learning: 無監(jiān)督學(xué)習(xí)/無導(dǎo)師學(xué)習(xí)
Unsupervised layer-wise training: 無監(jiān)督逐層訓(xùn)練
Upsampling: 上采樣

Letter V
Vanishing Gradient Problem: 梯度消失問題
Variational inference: 變分推斷
VC Theory: VC維理論
Version space: 版本空間
Viterbi algorithm: 維特比算法
Von Neumann architecture: 馮·諾伊曼架構(gòu)

Letter W
Wasserstein GAN / WGAN: Wasserstein生成對抗網(wǎng)絡(luò)
Weak learner: 弱學(xué)習(xí)器
Weight: 權(quán)重
Weight sharing: 權(quán)共享
Weighted voting: 加權(quán)投票法
Within-class scatter matrix: 類內(nèi)散度矩陣
Word embedding: 詞嵌入
Word sense disambiguation: 詞義消歧

Letter Z
Zero-data learning: 零數(shù)據(jù)學(xué)習(xí)
Zero-shot learning: 零次學(xué)習(xí)

Pattern Recognition

The rapid development of computer hardware and the steady expansion of computer applications created an urgent demand for machines that can perceive information such as sound, text, images, temperature, and vibration more effectively, and the field of pattern recognition grew quickly in response. The word "pattern" originally denotes a flawless specimen offered for imitation, and pattern recognition means identifying the specimen that a given object imitates. In artificial intelligence, pattern recognition refers to using computers to replace or assist human perception of patterns: it simulates the human ability to take in information from the outside world through the senses and to recognize and understand the surrounding environment, and its object of study is the computer pattern recognition system, that is, a computer system endowed with this perceptual capability.

Pattern recognition is a continually developing discipline whose theoretical foundations and scope are still evolving. With biomedicine's first insights into the human brain, computer experiments that simulate its structure, namely artificial neural network methods, began several decades ago. To date, neural network methods have been applied successfully in pattern recognition to handwritten character recognition, license plate recognition, fingerprint recognition, speech recognition, and other tasks. The discipline is now in a period of vigorous growth; as its range of applications broadens and computer science advances, methods based on artificial neural…
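The passage above frames pattern recognition as matching an observed object to stored specimens. A minimal sketch of that idea (not taken from the original document; the feature vectors, labels, and function names below are illustrative assumptions) is a 1-nearest-neighbour classifier: each stored specimen is a feature vector with a label, and an unseen pattern receives the label of the closest specimen under Euclidean distance.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbour(specimens, query):
    """Label a query pattern by its closest stored specimen.

    specimens: list of (feature_vector, label) pairs.
    """
    return min(specimens, key=lambda s: euclidean(s[0], query))[1]

# Toy 4-pixel "images" (brightness values) standing in for real patterns
# such as handwritten characters; the labels are hypothetical.
specimens = [
    ((0.9, 0.8, 0.1, 0.2), "bright-top"),
    ((0.8, 0.9, 0.2, 0.1), "bright-top"),
    ((0.1, 0.2, 0.9, 0.8), "bright-bottom"),
]

print(nearest_neighbour(specimens, (0.7, 0.9, 0.3, 0.2)))  # bright-top
```

Real systems replace the raw pixels with engineered or learned features and use many more specimens (or a trained model such as a neural network), but the matching-against-specimens structure is the same.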
