I want to bypass the Estimator framework and use tensorflow feature_column and features directly in a session. I have read TensorFlow's low level introduction on feature columns. The problem is that tf.feature_column.input_layer requires a features feed at construction time, but the feature feed differs between training and prediction. Looking at the tf.Estimator code, its approach seems to be to call the same graph-construction callback again to build a fresh graph. I came up with the example below, but if I skip table initialization after the second construction, it fails with "table not initialized"; and if I do run table initialization, it complains that the table is already initialized. According to their research paper, this is by design, since they always expect to reload a fresh model from a checkpoint. But for cases like reinforcement learning, where we want to interleave updates and inference inside the training loop, this would be very inefficient. It is also unclear how they intend validation on a development set to be done.
What is the correct way to build the graph and feed features for prediction?
import tensorflow as tf

training_features = {
    'sales': [[5], [10], [8], [9]],
    'department': ['sports', 'sports', 'gardening', 'gardening']}
test_features = {
    'sales': [[10], [20], [16], [18]],
    'department': ['sports', 'sports', 'gardening', 'gardening']}

department_column = tf.feature_column.categorical_column_with_vocabulary_list(
    'department', ['sports', 'gardening'])
department_column = tf.feature_column.indicator_column(department_column)
columns = [
    tf.feature_column.numeric_column('sales'),
    department_column
]

# similar to a tf.Estimator's model_fn callback
def mkgraph(features):
    with tf.variable_scope('feature_test', reuse=tf.AUTO_REUSE):
        inputs = tf.feature_column.input_layer(features, columns)
        alpha = tf.placeholder(tf.float32, name='alpha')
        output = inputs * alpha
        return output, alpha

with tf.Graph().as_default() as g:
    output, alpha = mkgraph(training_features)
    print('output', output)
    print('alpha', alpha)
    var_init = tf.global_variables_initializer()
    table_init = tf.tables_initializer()
    with tf.Session(graph=g) as sess:
        sess.run([var_init, table_init])
        print(sess.run(output, feed_dict={alpha: 100.0}))  # works here
        print('testing')
        output, alpha = mkgraph(test_features)
        print('output', output)
        print('alpha', alpha)
        table_init = tf.tables_initializer()
        # sess.run([table_init])  # with this, it fails on 'table already initialized'
        # without running table_init, it fails on 'table not initialized'
        print(sess.run(output, feed_dict={alpha: 200.0}))
Best Answer
If you have one training dataset and one test dataset and need to switch back and forth between them, you can try an is_training switch. For the concrete example in the question:
import tensorflow as tf

training_features = {
    'sales': [[5], [10], [8], [9]],
    'department': ['sports', 'sports', 'gardening', 'gardening']}
test_features = {
    'sales': [[10], [20], [16], [18]],
    'department': ['sports', 'sports', 'gardening', 'gardening']}

department_column = tf.feature_column.categorical_column_with_vocabulary_list(
    'department', ['sports', 'gardening'])
department_column = tf.feature_column.indicator_column(department_column)
columns = [
    tf.feature_column.numeric_column('sales'),
    department_column
]

with tf.variable_scope('feature_test', reuse=tf.AUTO_REUSE):
    alpha = tf.placeholder(tf.float32, name='alpha')
    is_training = tf.placeholder(tf.bool, name='is_training')
    training_inputs = tf.feature_column.input_layer(training_features, columns)
    test_inputs = tf.feature_column.input_layer(test_features, columns)
    output = tf.cond(is_training,
                     lambda: training_inputs * alpha,
                     lambda: test_inputs * alpha)

var_init = tf.global_variables_initializer()
table_init = tf.tables_initializer()
with tf.Session() as sess:
    sess.run([var_init, table_init])
    print('training')
    print(sess.run(output, feed_dict={alpha: 100.0, is_training: True}))
    print('testing')
    print(sess.run(output, feed_dict={alpha: 200.0, is_training: False}))
One potential issue is that both feature_column input layers are instantiated. I don't think they load everything and run out of memory, but they may consume more memory than necessary, which could cause you some trouble.
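A variant worth noting (my own sketch, not part of the original answer): since the values in the features dict passed to input_layer can themselves be tensors, you can use placeholders as the feature tensors. Then a single input_layer, and a single table initialization, serves both training and prediction feeds, avoiding both the rebuild-the-graph problem and the duplicated branches of the tf.cond approach. This assumes TensorFlow 1.x behavior (here via tf.compat.v1):

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

columns = [
    tf.feature_column.numeric_column('sales'),
    tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            'department', ['sports', 'gardening'])),
]

# Placeholders stand in for the concrete feature values, so the same
# input_layer (and its vocabulary lookup table) is built exactly once.
features = {
    'sales': tf.placeholder(tf.float32, shape=[None, 1], name='sales'),
    'department': tf.placeholder(tf.string, shape=[None], name='department'),
}
inputs = tf.feature_column.input_layer(features, columns)
alpha = tf.placeholder(tf.float32, name='alpha')
output = inputs * alpha

with tf.Session() as sess:
    # Tables are initialized a single time, before any feed.
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    train_out = sess.run(output, feed_dict={
        features['sales']: [[5], [10], [8], [9]],
        features['department']: ['sports', 'sports', 'gardening', 'gardening'],
        alpha: 100.0})
    test_out = sess.run(output, feed_dict={
        features['sales']: [[10], [20], [16], [18]],
        features['department']: ['sports', 'sports', 'gardening', 'gardening'],
        alpha: 200.0})
    print(train_out.shape, test_out.shape)  # (4, 3) (4, 3)
```

The placeholder shapes and names here are my own choices for illustration; the point is that only one graph path exists, so switching datasets is just a different feed_dict.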
Regarding "tensorflow - How to use tensorflow.feature_column for prediction outside of an Estimator?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49280251/