Building a Convolutional Neural Network with TensorFlow 2.0: Implementing a CNN in TensorFlow


  I have been studying neural networks for a while now, from the common BP (backpropagation) network to the LSTM long short-term memory network, but I have never systematically written down the structure of a whole network. I believe these small notes will help me understand neural networks more deeply.


  

Introduction

  A convolutional neural network (CNN) is a class of feedforward neural networks that performs convolutional computation and has a deep structure; it is one of the representative algorithms of deep learning.

  Its overall structure consists of an input layer, hidden layers, and an output layer.

  In TensorBoard, the structure looks like this (the original figure is not reproduced here):

  For a CNN, the input layer and output layer are the same as those of an ordinary neural network.

  Its hidden layers, however, can be divided into three parts: convolutional layers (feature extraction from the input data), pooling layers (feature selection and information filtering), and fully connected layers (equivalent to the hidden layers of a traditional feedforward neural network).

  

The Hidden Layers

  

1. The Convolutional Layer

  In convolution, the input image is passed through a set of convolution filters, each of which activates certain features of the image.

  Suppose we have a 5x5 black-and-white image and convolve it with a small kernel; the convolution produces a smaller feature map. The original article showed the image, the kernel, and the result as figures, which are not reproduced here; the sketch below reconstructs a comparable example.
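  Since the original figures are missing, here is a minimal NumPy sketch of the convolution step. The 5x5 image and 3x3 kernel values are assumed for illustration; tf.nn.conv2d computes the same sliding-window product sum (technically a cross-correlation):

import numpy as np

# A hypothetical 5x5 black-and-white image (values assumed for illustration).
image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]], dtype=np.float32)

# A hypothetical 3x3 convolution kernel.
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]], dtype=np.float32)

# "VALID" convolution: slide the kernel over the image and take the
# element-wise product sum at each position, yielding a 3x3 feature map.
out_h = image.shape[0] - kernel.shape[0] + 1
out_w = image.shape[1] - kernel.shape[1] + 1
result = np.zeros((out_h, out_w), dtype=np.float32)
for i in range(out_h):
    for j in range(out_w):
        result[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

print(result)
# [[4. 3. 4.]
#  [2. 4. 3.]
#  [2. 3. 4.]]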

  The convolution process extracts features, and the CNN then performs classification based on those features.

  In TensorFlow, the key function for the convolutional layer is:

  tf.nn.conv2d(input,filter,strides,padding,use_cudnn_on_gpu=None,name=None)

  where:

  1. input is the input tensor, with shape [batch, height, width, channels];

  2. filter is the convolution kernel to use;

  3. strides is the stride, in the format [1, step, step, 1]; it specifies the step size of the convolution along each dimension of the image;

  4. padding: a string, which must be either "SAME" or "VALID". "SAME" means the spatial size of the image is unchanged by the convolution. A minimal call is sketched below.
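  A minimal sketch of a conv2d call, assuming a batch of 28x28 grayscale images and 32 filters of size 5x5 (these shapes are illustrative, matching the MNIST network later in the article; the TF1.x-style API is used, which under TensorFlow 2 lives in tf.compat.v1):

import tensorflow as tf  # TF1.x-style API; under TF2 use tf.compat.v1

x = tf.placeholder(tf.float32, [None, 28, 28, 1])                # [batch, height, width, channels]
W = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))  # 32 kernels of size 5x5
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME")
print(conv.shape)  # (?, 28, 28, 32): "SAME" padding keeps the 28x28 spatial size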

  

2. The Pooling Layer

  After the convolutional layer has extracted features, the output feature map is passed to the pooling layer for feature selection and information filtering.

  The most common form is max pooling, which keeps the maximum value within each window of the convolved data, i.e. its strongest feature.

  Suppose the pooling window is 2x2 with a stride of 2: each non-overlapping 2x2 block of the original image is reduced to its maximum value. The original before/after figures are not reproduced here; the sketch below reconstructs a comparable example.
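  A minimal NumPy sketch of 2x2 max pooling with stride 2 (the 4x4 input values are assumed for illustration, since the original figures are missing):

import numpy as np

# Hypothetical 4x4 input feature map (values assumed for illustration).
x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 1],
              [3, 4, 5, 6]])

# Split into non-overlapping 2x2 blocks and keep the max of each block.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6 4]
#  [7 9]]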

  In TensorFlow, the key function for the pooling layer is:

  tf.nn.max_pool(value, ksize, strides, padding, data_format, name)

  1. value: the input to the pooling layer. A pooling layer usually follows a convolutional layer, so the shape is [batch, height, width, channels];

  2. ksize: the size of the pooling window, a four-dimensional vector, usually [1, in_height, in_width, 1];

  3. strides: as with convolution, the step size of the sliding window along each dimension, usually [1, stride, stride, 1];

  4. padding: as with convolution, either "VALID" or "SAME". A minimal call is sketched below.
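  A minimal sketch of the pooling call (the input shape is assumed for illustration, matching the conv2d sketch above):

import tensorflow as tf  # TF1.x-style API; under TF2 use tf.compat.v1

# Hypothetical conv-layer output: a batch of 28x28 feature maps with 32 channels.
conv = tf.placeholder(tf.float32, [None, 28, 28, 32])
# 2x2 max pooling with stride 2 halves the spatial size.
pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1],
                      strides=[1, 2, 2, 1], padding="SAME")
print(pool.shape)  # (?, 14, 14, 32)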

  This is the connection structure of the convolutional and pooling layers as shown in TensorBoard (figure not reproduced here):


3. The Fully Connected Layer

  The fully connected layer has the same structure as an ordinary neural network (the original figure is not reproduced here): the pooled feature map is flattened into a vector and fed through dense layers, as sketched below.
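  A minimal sketch of the fully connected step (the input shape and the 1024-unit layer size are illustrative, matching the network later in the article):

import tensorflow as tf  # TF1.x-style API; under TF2 use tf.compat.v1

# Hypothetical pooled feature map: (?, 14, 14, 32), flattened to a vector.
pool = tf.placeholder(tf.float32, [None, 14, 14, 32])
flat = tf.reshape(pool, [-1, 14 * 14 * 32])
# An ordinary dense layer: outputs = activation(inputs x W + b).
W_fc = tf.Variable(tf.truncated_normal([14 * 14 * 32, 1024], stddev=0.1))
b_fc = tf.Variable(tf.constant(0.1, shape=[1024]))
fc = tf.nn.relu(tf.matmul(flat, W_fc) + b_fc)
print(fc.shape)  # (?, 1024)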


Implementation Code


Code for the Convolutional, Pooling, and Fully Connected Layers

  Note that although the title mentions TensorFlow 2.0, the code below uses the TF1.x-style graph API (placeholders and sessions); under TensorFlow 2 the equivalent calls live under tf.compat.v1.

  

def conv2d(x, W, step, pad):  # convolution: x is the input, W is the kernel
    return tf.nn.conv2d(x, W, strides=[1, step, step, 1], padding=pad)

def max_pool_2X2(x, step, pad):  # max pooling: x is the input, step is the stride
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, step, step, 1], padding=pad)

def weight_variable(shape):  # create a weight variable
    initial = tf.truncated_normal(shape, stddev=0.1)  # sample from a truncated normal distribution
    return tf.Variable(initial)

def bias_variable(shape):  # create a bias variable
    initial = tf.constant(0.1, shape=shape)  # small constant initial value
    return tf.Variable(initial)

def add_layer(inputs, in_size, out_size, n_layer, activation_function=None, keep_prob=1):
    # add a fully connected layer
    layer_name = 'layer_%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            Weights = tf.Variable(tf.truncated_normal([in_size, out_size], stddev=0.1), name="Weights")
            tf.summary.histogram(layer_name + "/weights", Weights)
        with tf.name_scope("biases"):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name="biases")
            tf.summary.histogram(layer_name + "/biases", biases)
        with tf.name_scope("Wx_plus_b"):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
            tf.summary.histogram(layer_name + "/Wx_plus_b", Wx_plus_b)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        outputs = tf.nn.dropout(outputs, keep_prob)  # dropout for regularization
        tf.summary.histogram(layer_name + "/outputs", outputs)
        return outputs

def add_cnn_layer(inputs, in_z_dim, out_z_dim, n_layer, conv_step=1, pool_step=2, padding="SAME"):
    # build a convolutional layer followed by a pooling layer
    layer_name = 'layer_%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            W_conv = weight_variable([5, 5, in_z_dim, out_z_dim])
        with tf.name_scope("biases"):
            b_conv = bias_variable([out_z_dim])
        with tf.name_scope("conv"):
            # convolutional layer
            h_conv = tf.nn.relu(conv2d(inputs, W_conv, conv_step, padding) + b_conv)
        with tf.name_scope("pooling"):
            # pooling layer
            h_pool = max_pool_2X2(h_conv, pool_step, padding)
        return h_pool


Full Code

  

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)  # downloads MNIST on first run

def conv2d(x, W, step, pad):
    return tf.nn.conv2d(x, W, strides=[1, step, step, 1], padding=pad)

def max_pool_2X2(x, step, pad):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, step, step, 1], padding=pad)

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)  # sample from a truncated normal distribution
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)  # small constant initial value
    return tf.Variable(initial)

def add_layer(inputs, in_size, out_size, n_layer, activation_function=None, keep_prob=1):
    layer_name = 'layer_%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            Weights = tf.Variable(tf.truncated_normal([in_size, out_size], stddev=0.1), name="Weights")
            tf.summary.histogram(layer_name + "/weights", Weights)
        with tf.name_scope("biases"):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name="biases")
            tf.summary.histogram(layer_name + "/biases", biases)
        with tf.name_scope("Wx_plus_b"):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
            tf.summary.histogram(layer_name + "/Wx_plus_b", Wx_plus_b)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        outputs = tf.nn.dropout(outputs, keep_prob)
        tf.summary.histogram(layer_name + "/outputs", outputs)
        return outputs

def add_cnn_layer(inputs, in_z_dim, out_z_dim, n_layer, conv_step=1, pool_step=2, padding="SAME"):
    layer_name = 'layer_%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            W_conv = weight_variable([5, 5, in_z_dim, out_z_dim])
        with tf.name_scope("biases"):
            b_conv = bias_variable([out_z_dim])
        with tf.name_scope("conv"):
            h_conv = tf.nn.relu(conv2d(inputs, W_conv, conv_step, padding) + b_conv)
        with tf.name_scope("pooling"):
            h_pool = max_pool_2X2(h_conv, pool_step, padding)
        return h_pool

def compute_accuracy(x_data, y_data):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: x_data, keep_prob: 1})
    correct_prediction = tf.equal(tf.argmax(y_data, 1), tf.argmax(y_pre, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # evaluate on the data passed in (the original fed the last training batch here)
    result = sess.run(accuracy, feed_dict={xs: x_data, ys: y_data, keep_prob: 1})
    return result

keep_prob = tf.placeholder(tf.float32)
xs = tf.placeholder(tf.float32, [None, 784])
ys = tf.placeholder(tf.float32, [None, 10])
x_image = tf.reshape(xs, [-1, 28, 28, 1])

h_pool1 = add_cnn_layer(x_image, in_z_dim=1, out_z_dim=32, n_layer="cnn1")
h_pool2 = add_cnn_layer(h_pool1, in_z_dim=32, out_z_dim=64, n_layer="cnn2")
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1_drop = add_layer(h_pool2_flat, 7 * 7 * 64, 1024, "layer1", activation_function=tf.nn.relu, keep_prob=keep_prob)
prediction = add_layer(h_fc1_drop, 1024, 10, "layer2", activation_function=tf.nn.softmax, keep_prob=1)

with tf.name_scope("loss"):
    # note: prediction is already softmax output, while softmax_cross_entropy_with_logits
    # expects raw logits, so softmax is effectively applied twice here (kept from the original)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=ys, logits=prediction), name="loss")
    tf.summary.scalar("loss", loss)

train = tf.train.AdamOptimizer(1e-4).minimize(loss)
init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    writer = tf.summary.FileWriter("logs/", sess.graph)
    for i in range(5000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
        if i % 100 == 0:
            print(compute_accuracy(mnist.test.images, mnist.test.labels))

  

  That concludes this detailed walkthrough of building a convolutional neural network (CNN) with TensorFlow in Python.

