Neural Network Basic Algorithms in Practice (slides)
Example 1: build an op, create a session, and fetch its value.

import tensorflow as tf

a = tf.constant(3.0)        # the op on this slide is not recoverable; a constant is used here
sess = tf.Session()
result = sess.run(a)
print result

Example 2: add two constants.

import tensorflow as tf

input1 = tf.constant(3.0)
input2 = tf.constant(2.0)   # value cut off on the slide; 2.0 used here
add = tf.add(input1, input2)
sess = tf.Session()
result = sess.run(add)
print result

Example 3: addition and multiplication.

import tensorflow as tf

input1 = tf.constant(3.0)
input2 = tf.constant(2.0)
input3 = tf.constant(5.0)
intermed = tf.add(input2, input3)
mul = tf.mul(input1, intermed)
with tf.Session() as sess:
    result = sess.run(mul)
    print result

Example 3: addition and multiplication - fetch. Several tensors can be fetched in a single run call by passing a list:

with tf.Session() as sess:
    result = sess.run([mul, intermed])
    print result

Example 4: addition and multiplication - feed. The inputs are placeholders whose values are supplied at run time:

import tensorflow as tf

input1 = tf.placeholder(tf.float32)   # placeholder dtypes cut off on the slide
input2 = tf.placeholder(tf.float32)
input3 = tf.placeholder(tf.float32)
intermed = tf.add(input2, input3)
mul = tf.mul(input1, intermed)
with tf.Session() as sess:
    result = sess.run(mul, feed_dict={input1: 3., input2: 2., input3: 5.})   # input3's value cut off; 5. used here
    print result

Example 5: matrix multiplication.

import tensorflow as tf

m1 = tf.constant([[1., 2.]])       # 1x2; second entry cut off on the slide
m2 = tf.constant([[2.], [3.]])     # 2x1
product = tf.matmul(m1, m2)
with tf.Session() as sess:
    result = sess.run(product)
    print result

Example 6: variables. A counter incremented by an assign op; variables must be initialized before use:

import tensorflow as tf

state = tf.Variable(0, name='counter')   # initial values cut off on the slide
one = tf.constant(1)
update = tf.assign(state, tf.add(state, one))
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print sess.run(state)
    for _ in xrange(3):
        sess.run(update)
        print sess.run(state)

Placeholder / feed: a placeholder declares an input slot in the graph, and its value is passed in through feed_dict when the graph is run.

Linear regression. Given training samples (x_i, y_i), fit the model

\hat{y} = W x + b

The loss for one sample is the squared error

loss = (\hat{y} - y)^2 = (W x + b - y)^2

and the parameters are updated by gradient descent with learning rate \eta:

W \leftarrow W - \eta \frac{\partial loss}{\partial W} = W - \eta \cdot 2\,(W x + b - y)\,x
b \leftarrow b - \eta \frac{\partial loss}{\partial b} = b - \eta \cdot 2\,(W x + b - y)

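To make the update rule concrete, here is a tiny pure-Python sketch (not from the slides; the sample, initial parameters, and learning rate are illustrative) that applies the per-sample update by hand:

# one training sample drawn from y = 0.1*x + 0.3 (noise omitted)
x, y = 1.0, 0.4
W, b = 0.0, 0.0        # initial guess
eta = 0.1              # learning rate

for step in range(20):
    y_pred = W * x + b              # model prediction
    grad_W = 2 * (y_pred - y) * x   # d loss / d W
    grad_b = 2 * (y_pred - y)       # d loss / d b
    W -= eta * grad_W               # gradient descent updates
    b -= eta * grad_b

print(W, b)   # W + b approaches 0.4, the value that fits this single sample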
Generate 1000 points around the true line y = 0.1 x + 0.3 with Gaussian noise, and plot them:

import numpy as np

num_points = 1000
vectors_set = []
for idx in xrange(num_points):
    x1 = np.random.normal(0.0, 0.55)                    # stddev cut off on the slide; 0.55 used here
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)   # noise stddev cut off; 0.03 used here
    vectors_set.append([x1, y1])

x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

import matplotlib.pyplot as plt
plt.plot(x_data, y_data, 'ro')
plt.show()

Define the model and the loss in TensorFlow:

import tensorflow as tf

W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b
loss = tf.reduce_mean(tf.square(y - y_data))

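Note that tf.reduce_mean(tf.square(y - y_data)) is exactly the mean of the per-sample squared error introduced above:

loss = \frac{1}{N} \sum_{i=1}^{N} \big( W x_i + b - y_i \big)^2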
Minimize the loss with gradient descent and run the training loop:

optimizer = tf.train.GradientDescentOptimizer(0.5)   # learning rate cut off on the slide; 0.5 used here
train = optimizer.minimize(loss)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

print 'params', sess.run(W), sess.run(b)
for step in xrange(10):                              # step count cut off on the slide; 10 used here
    sess.run(train)
    print 'loss', step, sess.run(loss)
    print 'params', step, sess.run(W), sess.run(b)

(Slide figures: the fitted [W, b] before training and after rounds 1-3 and 7-9, approaching the true line y = 0.1 x + 0.3.)

Logistic regression. The linear output is squashed through a sigmoid,

\hat{y} = \sigma(W x + b) = \frac{1}{1 + e^{-(W x + b)}}

and the loss is the cross-entropy

loss = -\big[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\big]

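For completeness (this step is not spelled out on the slides), the gradient follows from the chain rule with z = W x + b and \sigma'(z) = \sigma(z)\,(1 - \sigma(z)):

\frac{\partial loss}{\partial z} = -\Big[ \frac{y}{\hat{y}} - \frac{1 - y}{1 - \hat{y}} \Big]\,\hat{y}\,(1 - \hat{y}) = \hat{y} - y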
The gradients have the same simple form as in the linear case:

\frac{\partial loss}{\partial W} = (\hat{y} - y)\,x, \qquad \frac{\partial loss}{\partial b} = \hat{y} - y

Generate 100 points with binary labels:

import numpy as np

num_points = 100
vectors_set = []
for idx in xrange(num_points):
    x1 = np.random.normal(0.0, 0.55)      # stddev cut off on the slide; 0.55 used here
    y1 = 1 if x1 * 0.3 + 0.1 + np.random.normal(0.0, 0.03) > 0 else 0
    vectors_set.append([x1, y1])

x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

Model, loss, training, and accuracy in TensorFlow:

import tensorflow as tf

W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = tf.sigmoid(W * x_data + b)
one = tf.ones(y.get_shape(), dtype=tf.float32)
loss = -tf.reduce_mean(y_data * tf.log(y) + (one - y_data) * tf.log(one - y))

optimizer = tf.train.GradientDescentOptimizer(0.5)   # learning rate cut off on the slide; 0.5 used here
train = optimizer.minimize(loss)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

# threshold the predicted probability at 0.5 to obtain a class label
vec = tf.ones_like(one, dtype=tf.float32) * 0.5
correct_prediction = tf.equal(tf.cast(tf.greater(y, vec), tf.float32), y_data)   # comparison cut off on the slide; reconstructed
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

for step in xrange(100):                             # step count cut off on the slide; 100 used here
    sess.run(train)
print('accuracy:', sess.run(accuracy))

Summary: logistic regression is a linear model followed by a sigmoid, trained with the cross-entropy loss.

MNIST handwritten digit recognition: classify images of the digits 0-9.

Input: a 28 x 28 image, flattened into a 784-dimensional vector.
Model: a single softmax layer, y = softmax(W x + b), with one output per class (0-9).

Cross-entropy loss:

H(y', y) = -\sum_i y'_i \log y_i

where y is the predicted distribution and y' the one-hot label. Example: prediction (0, 0.3, 0, 0, 0, 0, 0.7, 0, 0, 0) against the one-hot label (0, 1, 0, 0, 0, 0, 0, 0, 0, 0).

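Plugging the example into the definition (only the labelled position contributes, since the one-hot vector is zero everywhere else):

H(y', y) = -\log 0.3 \approx 1.20

If the model had instead put the 0.7 on class 1, the loss would drop to -\log 0.7 \approx 0.36, so better predictions give a smaller cross-entropy.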
Load the data and define the softmax model:

import tensorflow as tf
import input_data                      # data loader from the TensorFlow MNIST tutorial

mnist = input_data.read_data_sets("mnist-data/", one_hot=True)
print mnist.train.images.shape
print mnist.train.labels.shape
print mnist.test.images.shape
print mnist.test.labels.shape

W = tf.Variable(tf.zeros([784, 10]))   # initializers cut off on the slide; zeros used here
b = tf.Variable(tf.zeros([10]))
x = tf.placeholder("float", [None, 784])
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder("float", [None, 10])

W, b: the model parameters. None: allows any batch size. y: the model output; y_: the ground-truth labels.

cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)   # optimizer cut off on the slide; reconstructed

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for i in xrange(1000):                 # step count cut off on the slide; 1000 used here
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train, feed_dict={x: batch_xs, y_: batch_ys})

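Because x was declared with shape [None, 784], the same graph accepts any batch size; a quick check (a sketch using the session and tensors defined above):

batch_xs, _ = mnist.train.next_batch(100)
print(sess.run(y, feed_dict={x: batch_xs}).shape)            # (100, 10)
print(sess.run(y, feed_dict={x: mnist.test.images}).shape)   # (10000, 10)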
Evaluate on the test set: a prediction is correct when the arg-max of the predicted distribution matches the arg-max of the one-hot label.

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

Compared with the earlier examples, the input representation goes from 1 dimension to 784 dimensions, and the output from 2 classes to 10 classes.

From a single softmax layer to a multi-layer network: stack hidden layers (a linear map followed by a non-linear activation) in front of the softmax output.

import math

with tf.name_scope('hidden1'):                 # scope names cut off on the slide; reconstructed
    # input dim IMAGE_PIXELS -> output dim hidden1_unit, weights from a truncated normal distribution
    weights = tf.Variable(tf.truncated_normal([IMAGE_PIXELS, hidden1_unit],
                                              stddev=1.0 / math.sqrt(IMAGE_PIXELS)))
    biases = tf.Variable(tf.zeros([hidden1_unit]))
    hidden1 = tf.nn.relu(tf.matmul(image, weights) + biases)   # non-linear activation

with tf.name_scope('hidden2'):
    # input dim hidden1_unit -> output dim hidden2_unit
    weights = tf.Variable(tf.truncated_normal([hidden1_unit, hidden2_unit],
                                              stddev=1.0 / math.sqrt(hidden1_unit)))
    biases = tf.Variable(tf.zeros([hidden2_unit]))
    hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)

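A brief aside on the stddev = 1 / sqrt(fan-in) choice (reasoning not on the slides): for roughly unit-variance inputs and n = fan-in independent zero-mean weights,

\mathrm{Var}\Big( \sum_{i=1}^{n} w_i x_i \Big) = n \,\mathrm{Var}(w)\,\mathrm{Var}(x) \approx \mathrm{Var}(x) \quad \text{when } \mathrm{Var}(w) = \tfrac{1}{n}

so pre-activations keep a similar scale from layer to layer.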
with tf.name_scope('softmax_linear'):
    # input dim hidden2_unit -> output dim NUM_CLASSES
    weights = tf.Variable(tf.truncated_normal([hidden2_unit, NUM_CLASSES],
                                              stddev=1.0 / math.sqrt(hidden2_unit)))
    biases = tf.Variable(tf.zeros([NUM_CLASSES]))
    logits = tf.matmul(hidden2, weights) + biases

def loss(logits, labels):
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels)   # loss op cut off on the slide; reconstructed
    loss = tf.reduce_mean(cross_entropy)
    return loss

def training(loss, learning_rate):
    tf.scalar_summary(loss.op.name, loss)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = optimizer.minimize(loss, global_step=global_step)
    return train_op

def evaluation(logits, labels):
    correct = tf.nn.in_top_k(logits, labels, 1)
    result = tf.reduce_sum(tf.cast(correct, tf.int32))
    return result

Command-line flags and the main training loop:

import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate')
flags.DEFINE_integer('max_steps', 2000, 'Number of steps to run trainer')
flags.DEFINE_integer('hidden1_unit', 128, 'Number of units in hidden layer 1')
flags.DEFINE_integer('hidden2_unit', 32, 'Number of units in hidden layer 2')
flags.DEFINE_integer('batch_size', 100, 'Batch size')
flags.DEFINE_string('train_dir', 'data', 'Directory to put the training data')

data_sets = input_data.read_data_sets(FLAGS.train_dir, FLAGS.fake_data)
image_placeholder = tf.placeholder(tf.float32, [FLAGS.batch_size, IMAGE_PIXELS])   # shapes cut off on the slide; reconstructed
label_placeholder = tf.placeholder(tf.int32, [FLAGS.batch_size])

logits = mnist.inference(image_placeholder, FLAGS.hidden1_unit, FLAGS.hidden2_unit)
loss = mnist.loss(logits, label_placeholder)
train_op = mnist.training(loss, FLAGS.learning_rate)
eval_correct = mnist.evaluation(logits, label_placeholder)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

for step in xrange(FLAGS.max_steps):
    image_feed, label_feed = data_set.next_batch(FLAGS.batch_size, FLAGS.fake_data)
    feed_dict = {image_placeholder: image_feed, label_placeholder: label_feed}
    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
    if step % 100 == 0:
        print('Step %d: loss = %.2f' % (step, loss_value))

print('Training Data Eval:')
do_eval(sess, eval_correct, image_placeholder, label_placeholder, data_sets.train)
print('Test Data Eval:')
do_eval(sess, eval_correct, image_placeholder, label_placeholder, data_sets.test)

def do_eval(sess, eval_correct, image_placeholder, label_placeholder, data_set):
    true_count = 0
    steps_per_epoch = data_set.num_examples // FLAGS.batch_size
    num_examples = steps_per_epoch * FLAGS.batch_size
    for step in xrange(steps_per_epoch):
        feed_dict = fill_feed_dict(data_set, image_placeholder, label_placeholder)
        true_count += sess.run(eval_correct, feed_dict=feed_dict)
    precision = 1.0 * true_count / num_examples
    print('Number of examples: %d  Num correct: %d  Precision @1: %.4f' %
          (num_examples, true_count, precision))

Summary: from a single-layer network to a multi-layer neural network.

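The main program above relies on mnist.inference(...) and fill_feed_dict(...), which are not defined in the recovered slides. A minimal sketch consistent with the layer blocks and the feeding pattern shown above (an assumption, not the slides' exact code; _layer is a hypothetical helper, and fill_feed_dict uses the FLAGS defined above):

import math
import tensorflow as tf

IMAGE_PIXELS = 28 * 28
NUM_CLASSES = 10

def _layer(inputs, in_dim, out_dim, activation=None):
    # linear map with truncated-normal weights (stddev = 1/sqrt(fan-in)) and zero biases
    weights = tf.Variable(tf.truncated_normal([in_dim, out_dim],
                                              stddev=1.0 / math.sqrt(float(in_dim))))
    biases = tf.Variable(tf.zeros([out_dim]))
    out = tf.matmul(inputs, weights) + biases
    return activation(out) if activation is not None else out

def inference(image, hidden1_unit, hidden2_unit):
    # 784 -> hidden1_unit -> hidden2_unit -> 10; ReLU in the hidden layers, linear logits
    hidden1 = _layer(image, IMAGE_PIXELS, hidden1_unit, tf.nn.relu)
    hidden2 = _layer(hidden1, hidden1_unit, hidden2_unit, tf.nn.relu)
    return _layer(hidden2, hidden2_unit, NUM_CLASSES)

def fill_feed_dict(data_set, image_placeholder, label_placeholder):
    # draw the next mini-batch and map it onto the placeholders
    image_feed, label_feed = data_set.next_batch(FLAGS.batch_size, FLAGS.fake_data)
    return {image_placeholder: image_feed, label_placeholder: label_feed}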
Common activation functions (available in TensorFlow as tf.nn.sigmoid, tf.nn.tanh, tf.nn.relu, tf.nn.softplus):

sigmoid:  f(x) = \frac{1}{1 + e^{-x}}
tanh:     f(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}
ReLU:     f(x) = \max(0, x)
softplus: f(x) = \log(e^{x} + 1), whose derivative is the logistic (sigmoid) function

Dropout: keep each hidden unit only with probability keep_prob during training; keep_prob is fed through a placeholder so that it can differ between training and evaluation:

keep_prob = tf.placeholder(tf.float32)
hidden2_drop = tf.nn.dropout(hidden2, keep_prob)
logits = tf.matmul(hidden2_drop, weights) + biases

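A small sketch (not from the slides) of what tf.nn.dropout does: each unit is kept with probability keep_prob and the kept values are scaled by 1/keep_prob, so the expected sum is unchanged; feeding keep_prob = 1.0 turns dropout off.

import tensorflow as tf

x = tf.ones([1, 10])                     # ten units, all equal to 1
keep_prob = tf.placeholder(tf.float32)
dropped = tf.nn.dropout(x, keep_prob)

with tf.Session() as sess:
    print(sess.run(dropped, feed_dict={keep_prob: 0.5}))   # on average half zeros, survivors scaled to 2.0
    print(sess.run(dropped, feed_dict={keep_prob: 1.0}))   # identity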
During training, dropout is active; during evaluation it is switched off by feeding keep_prob = 1.0:

sess.run(train_op, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})   # training keep_prob cut off on the slide; 0.5 used here
accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})

Optimizers beyond plain gradient descent:

tf.train.AdagradOptimizer(learning_rate, initial_accumulator_value=0.1)
    # initial_accumulator_value: starting value for the accumulators, must be positive
tf.train.MomentumOptimizer(learning_rate, momentum)
tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)

Saving checkpoints during training with tf.train.Saver:

saver = tf.train.Saver()
sess = tf.Session()
...
for step in xrange(max_steps):
    ...
    if step % 1000 == 0:
        saver.save(sess, checkpoint_path, global_step=step)   # save call cut off on the slide; checkpoint_path is illustrative

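A minimal save/restore round trip with tf.train.Saver (a sketch, not from the slides; the variable and the checkpoint path are illustrative):

import tensorflow as tf

v = tf.Variable(tf.zeros([1]), name='v')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    save_path = saver.save(sess, '/tmp/model.ckpt')   # write all variables to disk
    print('Model saved in file: %s' % save_path)

with tf.Session() as sess:
    saver.restore(sess, '/tmp/model.ckpt')            # no initialization needed after a restore
    print(sess.run(v))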
Text sentiment classification.
Input: a piece of text (a word, a sentence, or a document).
Output: the sentiment category of the text (e.g. positive / negative).
Commonly used models: Convolutional Neural Network (CNN) and Recursive/Recurrent Neural Network (RNN). In the example below, each word (e.g. "好" / "good") is represented by a d-dimensional embedding vector and a linear + softmax classifier is trained on top.

CATEGORY_SIZE = 2   # number of sentiment classes; value cut off on the slide

def inference(embeddingLength, embeddingM, text):
    # build a fixed-length representation of the text from the embedding matrix embeddingM
    # (this line is cut off on the slide)
    embedding = ...
    W = tf.Variable(tf.zeros([embeddingLength, CATEGORY_SIZE]))   # initializers cut off on the slide; zeros used here
    b = tf.Variable(tf.zeros([CATEGORY_SIZE]))
    logits = tf.matmul(embedding, W) + b
    return logits

def loss(logits, labels):
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels)   # loss op cut off on the slide; reconstructed
    loss = tf.reduce_mean(cross_entropy)
    return loss

def train(loss, learningRate):
    global_step = tf.Variable(0, name='global_step', trainable=False)
    optimizer = tf.train.GradientDescentOptimizer(learningRate)
    trainOp = optimizer.minimize(loss, global_step=global_step)
    return trainOp

def evaluation(logits, labels):
    correct = tf.nn.in_top_k(logits, labels, 1)
    return tf.reduce_sum(tf.cast(correct, tf.int32))   # return cut off on the slide; mirrors the earlier evaluation()

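The slides do not show how the fixed-length embedding is built inside inference(); one common minimal choice (an assumption, not the slides' method) is to look up each word id in the embedding matrix and average the vectors:

import numpy as np
import tensorflow as tf

embedding_length = 50
vocab_size = 10000        # illustrative vocabulary size
batch_size = 20
max_words = 12            # words per text, padded to a fixed length

embeddingM = tf.constant(
    np.random.rand(vocab_size, embedding_length).astype('float32'))   # stands in for the pre-trained matrix
text_ids = tf.placeholder(tf.int32, [batch_size, max_words])          # word ids for each text

word_vectors = tf.nn.embedding_lookup(embeddingM, text_ids)   # (batch, words, dim)
text_embedding = tf.reduce_mean(word_vectors, 1)              # (batch, dim)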
Flags and the main program, using an external pre-trained word embedding:

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.1, 'Initial learning rate')
flags.DEFINE_integer('max_steps', 50, 'Number of steps to run trainer')
flags.DEFINE_integer('batch_size', 20, 'Batch size. Must divide evenly into the dataset sizes.')
flags.DEFINE_string('positive_lexicon_file', 'lexicon/positive-words.txt', 'Positive sentiment lexicon')   # flag descriptions cut off on the slide
flags.DEFINE_string('negative_lexicon_file', 'lexicon/negative-words.txt', 'Negative sentiment lexicon')
flags.DEFINE_string('embedding_file', 'embedding/sswe.txt', 'Pre-trained word embedding file')
flags.DEFINE_integer('embedding_length', 50, 'Dimensionality of the word embeddings')

def main():   # function name cut off on the slide; main() used here
    # load the training/test data and the pre-trained embedding matrix (loader cut off on the slide)
    train_data, test_data, embeddingMatrix = ...
    label_placeholder = tf.placeholder(tf.int32, [FLAGS.batch_size])
    text_placeholder = tf.placeholder(...)                   # dtype and shape cut off on the slide
    embeddingM = tf.constant(embeddingMatrix)                # use the external word embedding
    logits = word_sa.inference(FLAGS.embedding_length, embeddingM, text_placeholder)   # the slide is cut off after this call
