TensorFlow eager execution without Keras - Python

Can you do eager execution in TensorFlow with NO Keras? I have a non-neural-network model written as TensorFlow graph code that I want to move to eager execution. It is a low-rank matrix factorization for a recommender system.
Python language.
Thank you
I request that answerers please demonstrate working code. If an answer includes speculation, please state so explicitly.

Yes, you can certainly use eager execution without Keras. Keras is built on top of the lower level operations that support eager execution.
For example:
import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

W = tf.contrib.eager.Variable(tf.random_normal((10, 10)))

def model(x):
    return tf.matmul(x, W)

data = np.random.randn(3, 10).astype(np.float32)
print(model(data))
You can see some more detailed tutorials at https://www.tensorflow.org/tutorials/eager/
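Since the question is about a low-rank matrix factorization model, here is a minimal sketch of how such a model could be trained under eager execution without Keras (my own illustration using the same TF 1.x eager APIs as above; the toy ratings matrix, rank, and hyperparameters are made up):
import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

# Toy dense ratings matrix (hypothetical data, for illustration only)
n_users, n_items, rank = 50, 40, 5
ratings = np.random.rand(n_users, n_items).astype(np.float32)

# Low-rank factors so that ratings is approximated by U @ V^T
U = tf.contrib.eager.Variable(0.1 * tf.random_normal((n_users, rank)))
V = tf.contrib.eager.Variable(0.1 * tf.random_normal((n_items, rank)))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)

def loss_fn():
    pred = tf.matmul(U, V, transpose_b=True)
    return tf.reduce_mean(tf.square(ratings - pred))

for step in range(200):
    with tf.GradientTape() as tape:
        loss = loss_fn()
    grads = tape.gradient(loss, [U, V])
    optimizer.apply_gradients(zip(grads, [U, V]))

print(loss_fn().numpy())  # reconstruction error after training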
That said, there are various corner cases/errors you might hit if trying to run arbitrary code written to construct a graph with eager execution enabled, and slight refactoring may be needed. Those would depend on the details of how the code is structured.
The reverse generally works out well: code written to work with eager execution enabled will usually construct the equivalent graph when eager execution is not enabled.
Hope that helps.

Related

tf.distribute.MirroredStrategy enabling eager execution of code

I am using TensorFlow 2.6, and my code requires the following at startup because I use a symbolic Keras tensor in a partial loss in my model:
from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
At the same time, I also want to train on multiple GPUs, so I used MirroredStrategy, but the issue is that MirroredStrategy requires eager execution, which conflicts with disabling it as above. Please help me if there is another way of training on multiple GPUs.
I have tried running my code with the line below, but it got stuck with a warning about significant overhead.
tf.config.run_functions_eagerly(True)
but I believe this is wrong, since, as I mentioned, I need eager mode to stay disabled.
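For reference, this is the usual MirroredStrategy pattern in TF 2.x (my own sketch for illustration; note that it assumes eager execution stays enabled, which is exactly the constraint described above, so it does not by itself resolve the conflict):
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Build and compile the model inside the strategy scope so its variables
# are mirrored across the available GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")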

tf.placeholder() is not compatible with eager execution [duplicate]

I have upgraded TF1 code to TF2 with tf_upgrade_v2. I'm a noob with both. I got the following error:
RuntimeError: tf.placeholder() is not compatible with eager execution.
I have some tf.compat.v1.placeholder().
self.temperature = tf.compat.v1.placeholder_with_default(1., shape=())
self.edges_labels = tf.compat.v1.placeholder(dtype=tf.int64, shape=(None, vertexes, vertexes))
self.nodes_labels = tf.compat.v1.placeholder(dtype=tf.int64, shape=(None, vertexes))
self.embeddings = tf.compat.v1.placeholder(dtype=tf.float32, shape=(None, embedding_dim))
Could you give me any advice about how to proceed? Any "fast" solutions, or should I recode this?
I found an easy solution here: disable Tensorflow eager execution
Basically, it is:
tf.compat.v1.disable_eager_execution()
With this, you disable the eager execution that is active by default, and you don't need to touch the code much more.
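As a quick sketch of what that looks like in practice (my own example, not from the original answer), TF1-style placeholder/session code then runs under TF2's compat API:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # must run before building the graph

x = tf.compat.v1.placeholder(tf.float32, shape=())
y = x * 2.0

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: 21.0}))  # prints 42.0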
tf.placeholder() is meant to be fed into a session, which, when run, receives values from the feed dict and performs the required operation.
Generally, you would create a Session() with the 'with' keyword and run it. But this does not suit all situations, and sometimes you want immediate execution instead. This is called eager execution.
Example:
Generally, this is how you run a Session:
import tensorflow as tf

def square(num):
    return tf.square(num)

p = tf.placeholder(tf.float32)
q = square(p)

with tf.Session() as sess:
    print(sess.run(q, feed_dict={p: 10}))
But with eager execution enabled, we run it as:
import tensorflow as tf
tf.enable_eager_execution()

def square(num):
    return tf.square(num)

print(square(10))
Therefore we need not run it inside a session explicitly, which is more intuitive in most cases. This gives a more interactive style of execution.
For further details visit:
https://www.tensorflow.org/guide/eager
If you are converting code from TensorFlow v1 to TensorFlow v2, you must use tf.compat.v1. The placeholder is available as tf.compat.v1.placeholder, but it can only be used with eager mode turned off:
tf.compat.v1.disable_eager_execution()
TensorFlow introduced eager execution mode, in which each node is executed immediately after it is defined. Statements using tf.placeholder are thus no longer valid.
In TensorFlow 1.X, placeholders are created and meant to be fed with actual values when a tf.Session is instantiated. However, from TensorFlow 2.0 onwards, eager execution has been enabled by default, so the notion of a "placeholder" does not make sense, as operations are computed immediately (rather than being deferred as in the old paradigm).
Also see Functions, not Sessions:
# TensorFlow 1.X
outputs = session.run(f(placeholder), feed_dict={placeholder: input})
# TensorFlow 2.0
outputs = f(input)
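As a concrete sketch of that mapping (my own example, reusing the square() function from the answer above), the placeholder/feed_dict pattern becomes an ordinary function call, optionally wrapped in tf.function to get a traced graph:
import tensorflow as tf

@tf.function  # optional: traces the function into a graph
def square(num):
    return tf.square(num)

# TF1: sess.run(q, feed_dict={p: 10.0})
# TF2: just call the function with a concrete value.
print(square(tf.constant(10.0)))  # tf.Tensor(100.0, shape=(), dtype=float32)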
If you are getting this error while doing object detection with a TensorFlow model, then use exporter_main_v2.py instead of export_inference_graph.py for exporting the model. This is the right way to do it. If you just turn off eager execution, it will solve this error but generate others.
Also note that there are some parameter changes: here, for example, you specify the path to the checkpoint directory instead of the path to a single checkpoint. Refer to this document for how to do object detection with TensorFlow v2.
To solve this, you have to disable eager execution, which is active by default. So add the following line of code.
tf.compat.v1.disable_eager_execution() #<--- Disable eager execution

How to fix the random generator in Python? Whenever I run my CNN I get different results

I'm trying to train and test a CNN model for classification, and every time I run the code I get different accuracy results at test time.
How can I get the same result every time? Is there any possible solution in Python/TensorFlow for this problem?
Try this:
import numpy as np
import tensorflow as tf
np.random.seed(1)
tf.set_random_seed(1)
The value you set the seed to is not important as long as you keep it fixed.
Make sure you also fix the seed for any third-party library. Randomness can also be caused by GPU libraries; if you don't use GPUs, don't worry about it.
Edit: Assuming you use TensorFlow as backend. For PyTorch (works for CPU and GPU btw) use:
import torch
torch.manual_seed(1)
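For a more complete setup (my own sketch, not from the original answer), it can also help to fix Python's own RNG and the hash seed; note that in TF 2.x the TensorFlow call is tf.random.set_seed instead of tf.set_random_seed:
import os
import random
import numpy as np
import tensorflow as tf

os.environ["PYTHONHASHSEED"] = "1"  # ideally set before the interpreter starts
random.seed(1)         # Python's built-in RNG
np.random.seed(1)      # NumPy RNG
tf.random.set_seed(1)  # TensorFlow RNG (TF 2.x API)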

Matrix Factorization in tensorflow 2.0 using WALS Method

I am using the WALS method to perform matrix factorization. In TensorFlow 1.13 I could import factorization_ops using
from tensorflow.contrib.factorization.python.ops import factorization_ops
As described in the documentation, the WALS model can be called from factorization_ops using
factorization_ops.WALSModel
Using the same import in TensorFlow 2.0 gives me the following error:
ModuleNotFoundError: No module named 'tensorflow.contrib.factorization'
Going through this issue, there appears to be no way to use WALSModel in TensorFlow 2.0+.
Also, it is mentioned in the TensorFlow release notes that tf.contrib has been deprecated, and its functionality has been either migrated to the core TensorFlow API, to an ecosystem project such as tensorflow/addons or tensorflow/io, or removed entirely.
How can I use the WALS model in TensorFlow 2.0 (currently I am using 2.0.0-rc0 on a Windows machine)? Has WALSModel been removed, or am I missing some information?
I believe WALS is not supported in TF 2.0... The officially recommended recommender model is Neural Collaborative Filtering (NCF).
I hope this helps.
M
I have the same issue, but I don't really have time to write a library myself unfortunately. There are several potential options that I am considering:
Stick with TF1.X until someone creates a library
Switch to using lightfm to continue using WALS
Switch to neural collaborative filtering using embedding layers with keras and a dot product layer. See this paper https://arxiv.org/abs/1708.05031, and this code implementation:
from tensorflow.keras.layers import Input, Embedding, Flatten, Dot, Dense
from tensorflow.keras.models import Model
# import tensorflow as tf  # only needed for the commented distributed-training lines below

def get_compiled_model(n_users, n_items, embedding_dims=20):
    # Product embedding
    prod_input = Input(shape=[1], name="Item-Input")
    prod_embedding = Embedding(n_items + 1, embedding_dims, name="Item-Embedding")(prod_input)
    prod_vec = Flatten(name="Flatten-Product")(prod_embedding)

    # User embedding
    user_input = Input(shape=[1], name="User-Input")
    user_embedding = Embedding(n_users + 1, embedding_dims, name="User-Embedding")(user_input)
    user_vec = Flatten(name="Flatten-Users")(user_embedding)

    # The output is the dot product of the two embedding vectors,
    # i.e. a single predicted score per (user, item) pair
    dot_product = Dot(name="Dot-Product", axes=1)([prod_vec, user_vec])

    # compile - uncomment these two lines (and indent the rest) to make training distributed
    # dist_strat = tf.distribute.MirroredStrategy()
    # with dist_strat.scope():
    model = Model(inputs=[user_input, prod_input], outputs=dot_product)
    model.compile(
        optimizer='adam',
        loss='mean_squared_error'
    )
    return model
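For completeness, a usage sketch of the function above (my own example; the integer-encoded ID arrays and sizes are hypothetical):
import numpy as np

n_users, n_items = 1000, 500
user_ids = np.random.randint(0, n_users, size=10000)
item_ids = np.random.randint(0, n_items, size=10000)
ratings = np.random.rand(10000).astype(np.float32)

model = get_compiled_model(n_users, n_items, embedding_dims=20)
model.fit([user_ids, item_ids], ratings, epochs=5, batch_size=256, validation_split=0.1)

# Predict a score for user 0 and item 10
print(model.predict([np.array([0]), np.array([10])]))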
I have compared the Tensorflow implementation of WALS to other implementations with respect to compute resources and accuracy (https://github.com/gtsoukas/cfzoo). The comparison suggests that the implicit Python package (https://github.com/benfred/implicit) is a good replacement that delivers superior performance.
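To illustrate (my own sketch, not from the linked comparison): the implicit package provides ALS rather than WALS, and its argument conventions have changed across versions, so treat the exact calls below as an assumption to verify against the version you install:
import numpy as np
import scipy.sparse as sparse
import implicit

# Hypothetical implicit-feedback data: (user, item) interaction counts
user_items = sparse.csr_matrix(
    np.random.poisson(0.1, size=(100, 80)).astype(np.float32)
)

model = implicit.als.AlternatingLeastSquares(factors=16, regularization=0.01, iterations=15)
model.fit(user_items)  # recent versions of implicit expect a user-item matrix here

# Recommend the top 5 items for user 0
ids, scores = model.recommend(0, user_items[0], N=5)
print(ids, scores)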
I think that WALS has been removed. As part of tf.contrib it is not supported by TF2, and I do not think it fits into any of the core or ecosystem sub-projects.
Your best bet is probably to make it available as a third-party library.
I expect to use it for my project, but the need to re-write it (mainly copying what was in TF1 and making it work as a separate library compatible with TF2) reduces the priority of this task...
Let us know if you start to code something. Thanks.
Alexis.

Keras vs TensorFlow - does Keras have any actual benefits?

I have been implementing some deep nets in Keras, but have eventually gotten frustrated with some limitations: setting floatx to float16 fails on batch normalization layers, and the only way to fix it is to actually edit the Keras source; implementing custom layers requires coding them in backend code, which destroys the ability to switch backends; there appear to be no parallel training mechanisms (unlike tf.Estimator); and even vanilla programs run 30% slower in Keras than in TF (if one is to trust the interwebs). I was grumbling about moving to TensorFlow, but was pleased to discover that TensorFlow code (especially if you use the tf.layers stuff) is not actually any longer than Keras code for anything imaginable you might want to do. Is this a failure of my imagination, or is tf.layers basically a backporting of Keras into core TensorFlow, and is there any actual use case for Keras?
Keras used to have the upper hand over TensorFlow in the past, but ever since its author became affiliated with Google, all the features that made it attractive have been getting implemented into TensorFlow (you can check version 1.8). Like you rightfully pointed out, tf.layers is one such example.
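To make the tf.layers point concrete (my own illustration with TF 1.x-era APIs, not from the original answer), the two styles are nearly line-for-line equivalent:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 32))

# tf.layers style (graph mode)
h_tf = tf.layers.dense(x, units=64, activation=tf.nn.relu)

# Keras style, same layer
h_keras = tf.keras.layers.Dense(64, activation='relu')(x)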
