Computational Graphs
In TensorFlow, a computation is represented as an instance of a tf.Graph object. The graph consists of a set of tf.Tensor objects, which hold the data, and tf.Operation objects, which represent units of computation such as matrix multiplication. The graph that these tensors and operations belong to can be fetched with tf.get_default_graph():
tf.get_default_graph()  # <tensorflow.python.framework.ops.Graph>
Next, we’ll learn about some of the other tensor types used in TensorFlow. The special ones commonly used in creating neural network models are:
– Constant,
– Variable, and
– Placeholder.
Remember, we need to import the TensorFlow library at the very beginning of our code using the line:
import tensorflow as tf
So, let’s have a brief discussion of each of these elements in TensorFlow.
1. Constant
As the name suggests, constants are used as constants: they create a node that takes a value and never changes it. You can simply create a constant tensor using tf.constant. It accepts five arguments:
tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)
Now let’s take a look at a very simple example.
Example 1:
Let’s create two constants and add them together. Constant tensors can simply be defined with a value:
# create graph
a = tf.constant(2)
b = tf.constant(3)
c = a + b
# launch the graph in a session
with tf.Session() as sess:
    print(sess.run(c))
In the code above, we created 3 tensors with “Python-names” a, b, and c. Since we didn’t define any “TensorFlow-name” for them, TensorFlow assigns default names, which can be observed in the graph: const and const_1 for the input constants and add for the output of the addition operation.
We can easily modify it and define custom names as shown below:
# create graph
a = tf.constant(2, name='A')
b = tf.constant(3, name='B')
c = tf.add(a, b, name='Sum')
# launch the graph in a session
with tf.Session() as sess:
    print(sess.run(c))
This time the graph is created with the required tensor names:
Fig2. generated graph (Left) and variables (Right) with the modified names
Constants can also be defined with different types (integer, float, etc.) and shapes (vectors, matrices, etc.). The next example has one constant with type 32-bit float and another constant with shape 2×2.
Example 2:
s = tf.constant(2.3, name='scalar', dtype=tf.float32)
m = tf.constant([[1, 2], [3, 4]], name='matrix')
# launch the graph in a session
with tf.Session() as sess:
    print(sess.run(s))
    print(sess.run(m))
2.3
[[1 2]
 [3 4]]
2. Variable
Variables are stateful nodes whose output is their current value; this means they can retain their value over multiple executions of a graph. They have a number of useful features:
They can be saved to disk during and after training. This allows people from different companies and groups to collaborate, as they can save, restore, and send their model parameters to others.
By default, gradient updates (used in all neural networks) will apply to all variables in your graph. In fact, variables are the things that you want to tune in order to minimize the loss.
# Create a variable
w = tf.Variable(<initial-value>, name=<optional-name>)
Some examples of creating scalar and matrix variables are as follows:
s = tf.Variable(2, name="scalar")
m = tf.Variable([[1, 2], [3, 4]], name="matrix")
W = tf.Variable(tf.zeros([784, 10]))
The variable W defined above creates a matrix with 784 rows and 10 columns, which will be initialized with zeros.
TensorFlow also provides tf.get_variable, which is the recommended way to create variables:
tf.get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None, constraint=None)
Some examples are as follows:
s = tf.get_variable("scalar", initializer=tf.constant(2))
m = tf.get_variable("matrix", initializer=tf.constant([[0, 1], [2, 3]]))
W = tf.get_variable("weight_matrix", shape=(784, 10), initializer=tf.zeros_initializer())