tf.summary.scalar
tf.summary.FileWriter
tf.summary.histogram
tf.summary.merge_all
tf.equal
tf.argmax
tf.cast
tf.div(x, y, name=None)
tf.pow(x, y, name=None)
tf.unstack(value, num=None, axis=0, name='unstack')
tf.stack(values, axis=0, name='stack')
tf.transpose(a, perm=None, name='transpose')
tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)
tf.nn.softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, dim=-1, name=None)
tf.nn.moments(x, axes, shift=None, name=None, keep_dims=False)
tf.contrib.legacy_seq2seq.sequence_loss_by_example(logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None)
apply_gradients
tf.distributions.Normal
tf.summary.scalar
https://www.tensorflow.org/api_docs/python/tf/summary/scalar
tf.summary.FileWriter
https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter
tf.summary.histogram
https://www.tensorflow.org/api_docs/python/tf/summary/histogram
tf.summary.merge_all
https://www.tensorflow.org/api_docs/python/tf/summary/merge_all
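The four summary ops above are normally used together. Below is a minimal sketch in TF 1.x style (the toy loss, the variable w, and the ./logs directory are illustrative assumptions, not from the original notes):
### tf.summary.scalar / histogram / merge_all / FileWriter (assumed toy setup)
import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
loss = tf.square(x, name='loss')                    # toy "loss" just for logging
w = tf.Variable(tf.random_normal([10]), name='w')

tf.summary.scalar('loss', loss)                     # record a single scalar per step
tf.summary.histogram('w', w)                        # record the distribution of a tensor
merged = tf.summary.merge_all()                     # bundle all summary ops into one

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./logs', sess.graph)   # events file for TensorBoard
    for step in range(10):
        summary = sess.run(merged, feed_dict={x: float(step)})
        writer.add_summary(summary, step)
    writer.close()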
tf.equal
https://www.tensorflow.org/api_docs/python/tf/equal
tf.argmax
https://www.tensorflow.org/api_docs/python/tf/argmax
tf.cast
https://www.tensorflow.org/api_docs/python/tf/cast
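tf.equal, tf.argmax and tf.cast are most often combined to compute classification accuracy. A minimal sketch (the logits and one-hot labels below are made-up toy data):
### accuracy with tf.equal / tf.argmax / tf.cast
import tensorflow as tf

logits = tf.constant([[0.1, 0.8, 0.1],
                      [0.6, 0.3, 0.1]])
labels = tf.constant([[0., 1., 0.],
                      [0., 0., 1.]])

correct = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))   # bool vector, one entry per example
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))          # cast bool -> float, then average

with tf.Session() as sess:
    print(sess.run(accuracy))   # 0.5 for this toy data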
tf.div(x, y, name=None)
Reference: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/div
tf.pow(x, y, name=None)
Reference: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/pow
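Both ops work element-wise. A small sketch with made-up constants:
### tf.div / tf.pow
import tensorflow as tf

x = tf.constant([4.0, 9.0])
y = tf.constant([2.0, 3.0])

with tf.Session() as sess:
    print(sess.run(tf.div(x, y)))   # element-wise division -> [2. 3.]
    print(sess.run(tf.pow(x, y)))   # element-wise power    -> [16. 729.]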
tf.unstack(value, num=None, axis=0, name='unstack')
https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/unstack
tf.stack(values, axis=0, name='stack')
https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/stack
### tf.stack() / tf.unstack()
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])
c = tf.stack([a, b], axis=0)    # stack as rows    -> shape (2, 3)
d = tf.stack([a, b], axis=1)    # stack as columns -> shape (3, 2)
e = tf.unstack(c, axis=0)       # split c back into its rows
f = tf.unstack(c, axis=1)       # split c into its columns
with tf.Session() as sess:
    print(sess.run(c))
    print(sess.run(d))
    print(sess.run(e))
    print(sess.run(f))
[[1 2 3]
 [4 5 6]]
[[1 4]
 [2 5]
 [3 6]]
[array([1, 2, 3], dtype=int32), array([4, 5, 6], dtype=int32)]
[array([1, 4], dtype=int32), array([2, 5], dtype=int32), array([3, 6], dtype=int32)]
Reference: https://blog.csdn.net/u012193416/article/details/77411535
### tf.stack() / tf.unstack()
import tensorflow as tf

g = tf.constant([[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]],
                 [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]]])
h = tf.unstack(g)    # default axis=0: splits the (2, 3, 4) tensor into two (3, 4) matrices
with tf.Session() as sess:
    print(sess.run(h))
[array([[ 1,  2,  3,  4],
        [ 5,  6,  7,  8],
        [ 9, 10, 11, 12]], dtype=int32),
 array([[13, 14, 15, 16],
        [17, 18, 19, 20],
        [21, 22, 23, 24]], dtype=int32)]
tf.transpose(a, perm=None, name='transpose')
Official docs: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/transpose
### tf.transpose()
import tensorflow as tf

a = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
b = tf.constant([[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]],
                 [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]]])
c = tf.transpose(a, [0, 1])     # identity permutation: unchanged
d = tf.transpose(a, [1, 0])     # ordinary 2-D transpose
e = tf.transpose(b, [0, 1, 2])  # identity permutation: unchanged
f = tf.transpose(b, [1, 0, 2])  # swap the first two axes
g = tf.transpose(b, [0, 2, 1])  # swap the last two axes
with tf.Session() as sess:
    print(sess.run(c))
    print(sess.run(d))
    print(sess.run(e))
    print(sess.run(f))
    print(sess.run(g))
[[1 2 3]
 [4 5 6]]
[[1 4]
 [2 5]
 [3 6]]
[[[ 1  2  3  4]
  [ 5  6  7  8]
  [ 9 10 11 12]]
 [[13 14 15 16]
  [17 18 19 20]
  [21 22 23 24]]]
[[[ 1  2  3  4]
  [13 14 15 16]]
 [[ 5  6  7  8]
  [17 18 19 20]]
 [[ 9 10 11 12]
  [21 22 23 24]]]
[[[ 1  5  9]
  [ 2  6 10]
  [ 3  7 11]
  [ 4  8 12]]
 [[13 17 21]
  [14 18 22]
  [15 19 23]
  [16 20 24]]]
Blog post: https://www.cnblogs.com/studyDetail/p/6533316.html
tf.set_random_seed(seed)
For a runnable example, see the Jupyter notebook: TensorFlowAPI
https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/set_random_seed
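A minimal sketch of the graph-level seed (the seed value and the random ops are illustrative only); re-running the whole script reproduces the same random numbers:
### tf.set_random_seed
import tensorflow as tf

tf.set_random_seed(1234)            # graph-level seed
a = tf.random_uniform([2])
b = tf.random_normal([2])

with tf.Session() as sess:
    print(sess.run(a))              # same values on every fresh run of the script
    print(sess.run(b))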
class tf.contrib.rnn.BasicLSTMCell
Link: https://tensorflow.google.cn/versions/r1.9/api_docs/python/tf/contrib/rnn/BasicLSTMCell#zero_state
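A minimal sketch of constructing the cell and its all-zero initial state via zero_state (num_units and batch_size are arbitrary illustrative values):
### tf.contrib.rnn.BasicLSTMCell / zero_state
import tensorflow as tf

batch_size = 4
cell = tf.contrib.rnn.BasicLSTMCell(num_units=64, state_is_tuple=True)
init_state = cell.zero_state(batch_size, dtype=tf.float32)   # LSTMStateTuple of zeros (c, h)

print(cell.state_size)    # LSTMStateTuple(c=64, h=64)
print(cell.output_size)   # 64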
tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)
Official docs: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/nn/dynamic_rnn
Link: https://github.com/MorvanZhou/tutorials/blob/master/tensorflowTUT/tf20_RNN2/full_code.py
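A minimal sketch of running the cell over a sequence with dynamic_rnn; the shapes follow the MNIST-as-sequence setup used in the linked tutorial, but the exact numbers here are assumptions:
### tf.nn.dynamic_rnn
import tensorflow as tf

batch_size, n_steps, n_inputs, n_hidden = 4, 28, 28, 64
X = tf.placeholder(tf.float32, [batch_size, n_steps, n_inputs])   # [batch, time, features]

cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, state_is_tuple=True)
init_state = cell.zero_state(batch_size, dtype=tf.float32)

# time_major=False means inputs/outputs are laid out as [batch, time, ...]
outputs, final_state = tf.nn.dynamic_rnn(cell, X, initial_state=init_state, time_major=False)

print(outputs)         # Tensor of shape (4, 28, 64): one hidden output per time step
print(final_state.h)   # Tensor of shape (4, 64): hidden state after the last step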
tf.nn.softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, dim=-1, name=None)
Link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/nn/softmax_cross_entropy_with_logits
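A minimal sketch with made-up logits and one-hot labels; note that the op applies softmax internally, so logits must be unscaled scores:
### tf.nn.softmax_cross_entropy_with_logits
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])

loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(loss))                  # per-example cross entropy, shape (2,)
    print(sess.run(tf.reduce_mean(loss)))  # scalar training loss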
tf.nn.moments(x, axes, shift=None, name=None, keep_dims=False)
Link: https://tensorflow.google.cn/versions/r1.0/api_docs/python/tf/nn/moments
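A minimal sketch with a made-up 2x3 matrix, showing mean/variance over all elements and per column (the axes=[0] form is what batch normalization uses):
### tf.nn.moments
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

mean_all, var_all = tf.nn.moments(x, axes=[0, 1])   # statistics over every element
mean_col, var_col = tf.nn.moments(x, axes=[0])      # per-column statistics

with tf.Session() as sess:
    print(sess.run([mean_all, var_all]))   # 3.5, 2.9166667
    print(sess.run([mean_col, var_col]))   # [2.5 3.5 4.5], [2.25 2.25 2.25]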
tf.contrib.legacy_seq2seq.sequence_loss_by_example(logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None)
Weighted cross-entropy loss for a sequence of logits (per example).
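A minimal sketch in the char-rnn style, where logits and targets for all time steps are flattened into single tensors and passed as one-element lists; batch_size, num_steps, vocab_size and the target ids below are made-up values:
### tf.contrib.legacy_seq2seq.sequence_loss_by_example
import tensorflow as tf

batch_size, num_steps, vocab_size = 2, 3, 5
logits = tf.random_normal([batch_size * num_steps, vocab_size])   # flattened [batch*time, vocab]
targets = tf.constant([1, 2, 3, 0, 4, 2])                         # flattened target ids
weights = tf.ones([batch_size * num_steps])                       # weight 1.0 for every position

loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
    [logits], [targets], [weights])                               # one-element lists

with tf.Session() as sess:
    print(sess.run(loss))                                # per-position cross entropy, shape (6,)
    print(sess.run(tf.reduce_sum(loss) / batch_size))    # average loss per sequence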
tf.distributions.Normal
Aliases:
- Class tf.contrib.distributions.Normal
- Class tf.distributions.Normal
The Normal distribution with location loc and scale parameters.
The probability density function (pdf) is
pdf(x; mu, sigma) = exp(-0.5 (x - mu)^2 / sigma^2) / Z
Z = (2 pi sigma^2)^0.5
where loc = mu is the mean, scale = sigma is the std. deviation, and Z is the normalization constant.
Methods: sample, prob, log_prob, cdf, mean, stddev, among others; a short usage sketch follows.
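A minimal sketch of the common methods on a standard normal (loc=0, scale=1 chosen for illustration):
### tf.distributions.Normal
import tensorflow as tf

dist = tf.distributions.Normal(loc=0.0, scale=1.0)   # standard normal

samples = dist.sample([3])          # draw 3 samples
density = dist.prob(0.0)            # pdf at x = 0 -> 1/sqrt(2*pi), about 0.3989
log_density = dist.log_prob(0.0)
cdf_val = dist.cdf(0.0)             # 0.5 by symmetry

with tf.Session() as sess:
    print(sess.run([samples, density, log_density, cdf_val]))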