
Miscellaneous Notes on TensorFlow 1.x

Life is hard: even though TensorFlow 2.0 is already out, industry code is still overwhelmingly 1.x, so there is no way around learning TensorFlow 1.x 😭. This post records the problems I ran into while learning TensorFlow 1.x, together with their solutions.

1. TensorBoard

To launch TensorBoard, run the command tensorboard --logdir='your logs path'. TensorBoard is typically used to inspect the graph structure and to watch losses and parameters change over training, which requires tf.summary. The concrete steps are as follows (a minimal sketch appears right after the list):

  • First, register everything you want to visualize as summaries;
  • then merge all the summaries;
  • create a summary FileWriter and write the graph to file;
  • keep writing data as training proceeds.
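To make these four steps concrete, here is a minimal, self-contained sketch; the constant loss is just a stand-in for a real training loss:

import tensorflow as tf

loss = tf.constant(1.0)                                   # stand-in for a real loss tensor
tf.summary.scalar("loss", loss)                           # 1) register what to visualize
merged = tf.summary.merge_all()                           # 2) merge all summaries into one op
with tf.Session() as sess:
    writer = tf.summary.FileWriter("logs/", sess.graph)   # 3) create the writer, dump the graph
    for step in range(10):
        writer.add_summary(sess.run(merged), step)        # 4) keep writing data
    writer.close()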

In addition, the most commonly used summary ops are (a short snippet follows the list):

  • tf.summary.scalar("loss", loss), typically used for scalar values such as loss and accuracy;
  • tf.summary.histogram(layer_name + "/weight", W), used to display how a parameter's distribution evolves over training.
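For instance, a graph-construction-only sketch; acc and W here are made-up stand-ins, not tensors from the full example below:

import tensorflow as tf

acc = tf.constant(0.9)                       # stand-in for a real accuracy tensor
W = tf.get_variable("W", shape=[10, 10])     # stand-in for a real weight matrix
tf.summary.scalar("accuracy", acc)           # draws one curve in the SCALARS tab
tf.summary.histogram("layer1/weight", W)     # distributions in the HISTOGRAMS tab

The complete end-to-end example below puts all of these pieces together.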
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"


# Toy data: y = x^2 - 0.5 plus Gaussian noise.
x_data = np.linspace(-1, 1, 300)[:, np.newaxis].astype(np.float32)
noise = np.random.normal(0, 0.5, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

def add_layer(inputs, in_size, out_size, hidden_layer_name, activation_func=None):
    '''
    Define a fully connected layer.
    :param inputs: input tensor
    :param in_size: input dimension
    :param out_size: output dimension
    :param hidden_layer_name: variable scope name for this layer
    :param activation_func: activation function
    :return: output tensor
    '''
    hidden_size = inputs.shape[1].value
    if hidden_size != in_size:
        raise Exception("wrong dimension!")

    with tf.variable_scope(hidden_layer_name):
        W = tf.get_variable(name='W', shape=[in_size, out_size], dtype=tf.float32,
                            initializer=tf.random_normal_initializer())
        tf.summary.histogram(hidden_layer_name + "/weight", W)
        b = tf.get_variable(name='b', shape=[1, out_size], dtype=tf.float32,
                            initializer=tf.constant_initializer(0.0))
        tf.summary.histogram(hidden_layer_name + "/bias", b)

    a = tf.matmul(inputs, W) + b

    # Use `is None` rather than `== None` to avoid tensor equality semantics.
    if activation_func is None:
        outputs = a
    else:
        outputs = activation_func(a)

    return outputs

xs = tf.placeholder(dtype=tf.float32, shape=[None, 1])
ys = tf.placeholder(dtype=tf.float32, shape=[None, 1])

hidden_result = add_layer(xs, 1, 10, "hidden_layer", tf.nn.relu)
outputs = add_layer(hidden_result, 10, 1, "output_layer", None)
tf.summary.histogram("output_layer/outputs", outputs)

loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - outputs), axis=-1))
tf.summary.scalar("loss", loss)
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

init = tf.global_variables_initializer()

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
plt.ion()
plt.show()

with tf.Session() as sess:
    sess.run(init)
    # Merge every summary into a single op; passing sess.graph to the
    # FileWriter is what makes the graph visible in TensorBoard.
    merged_summary_all = tf.summary.merge_all()
    summary_writer = tf.summary.FileWriter("logs/", sess.graph)
    for step in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        summary_str = sess.run(merged_summary_all, feed_dict={xs: x_data, ys: y_data})
        summary_writer.add_summary(summary_str, step)
        print(step, sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
    summary_writer.close()
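After the script finishes, start TensorBoard with tensorboard --logdir=logs and open http://localhost:6006 (the default port) in a browser: the graph shows up under the GRAPHS tab, the loss curve under SCALARS, and the weight/bias summaries under DISTRIBUTIONS and HISTOGRAMS.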

