I'm having an issue I can't manage to solve.
I'm just getting started with image super-resolution in Python, and I found this on GitHub: https://github.com/titu1994/Image-Super-Resolution
I think this is exactly what I need for my project.
So I installed everything needed to run it, and I run it with:
python main.py (path)t1.bmp
t1.bmp is an image stored in the "input-images" directory, so my command is:
python main.py C:\Users\cecilia....\t1.bmp
The error I get is this:
http://imgur.com/X3ssj08
http://imgur.com/rRSdyUb
Can you please help me solve this? (The code I'm using is the one from the GitHub repo I linked.)
Thanks in advance.
The very first line of the README in the GitHub link you gave says that the code is designed for Theano only. Yet your traceback shows that you are using TensorFlow as the backend...
The error you are getting is typical of using the wrong image format for the chosen backend. You have to know that, for convolutional networks, Theano and TensorFlow use different conventions. Theano expects the dimensions in the order (batch, channels, nb_rows, nb_cols), while TensorFlow expects (batch, nb_rows, nb_cols, channels). The first is known as "channels_first" and the second as "channels_last". So what happens is that the code you are trying to run (which is explicitly said to be designed for Theano) organises the data to match the channels_first format, which causes TensorFlow to crash because the dimensions don't match what it expects.
Bottom line: use Theano, or change the code appropriately to make it work on TensorFlow.
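To make the difference concrete, here is a minimal NumPy sketch (the array names are mine, and the batch is just zeros for illustration) of converting a batch between the two layouts:

```python
import numpy as np

# Hypothetical batch of 8 RGB images, 32x32 pixels, in Theano's
# "channels_first" layout: (batch, channels, rows, cols).
batch_cf = np.zeros((8, 3, 32, 32))

# Move the channel axis to the end to get TensorFlow's
# "channels_last" layout: (batch, rows, cols, channels).
batch_cl = np.transpose(batch_cf, (0, 2, 3, 1))

print(batch_cf.shape)  # (8, 3, 32, 32)
print(batch_cl.shape)  # (8, 32, 32, 3)
```

Keras also lets you set the expected layout globally via the image_data_format key in ~/.keras/keras.json, which is how it decides which convention the backend code sees.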
I have trained my model as QNN with brevitas. Basically my input shape is:
torch.Size([1, 3, 1024])
I have exported the .pt file. When I test my model and generate a confusion matrix, I can observe everything I want.
So I believe there is no problem with the model.
On the other hand, when I try to export the .onnx file to run this Brevitas-trained model on FINN, I wrote the code given below:
from brevitas.export import FINNManager
FINNManager.export(my_model, input_shape=(1, 3, 1024), export_path='myfinnmodel.onnx')
But when I do that, I get this error:
torch.onnx.export(module, input_t, export_target, **kwargs)
TypeError: export() got an unexpected keyword argument
'enable_onnx_checker'
I do not think this is related to the version, but if you want me to be sure about the versions, I can check those too.
I would really appreciate any help.
Sincerely,
The problem is related to PyTorch versions > 1.10, where "enable_onnx_checker" is no longer a parameter of the torch.onnx.export function.
This is the official solution from the repository.
https://github.com/Xilinx/brevitas/pull/408/files
The fix is not yet released; it is in the dev branch.
You need to build Brevitas yourself, or simply change the code in brevitas/export/onnx/manager.py following the official solution.
After that I was able to get the ONNX-converted model.
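For reference, the shape of that fix can be sketched like this. The torch.onnx.export call is replaced by a stub here, so this is an illustration of the idea, not Brevitas's actual code:

```python
# Stub standing in for torch.onnx.export, which in torch >= 1.10
# no longer accepts the 'enable_onnx_checker' keyword.
def onnx_export(module, input_t, export_target, **kwargs):
    return sorted(kwargs)  # report which keywords were forwarded

def export_compat(module, input_t, export_target, **kwargs):
    # Drop the keyword removed in newer torch before forwarding,
    # which is what the patched manager does.
    kwargs.pop('enable_onnx_checker', None)
    return onnx_export(module, input_t, export_target, **kwargs)

forwarded = export_compat(None, None, 'myfinnmodel.onnx',
                          enable_onnx_checker=False, opset_version=11)
print(forwarded)  # ['opset_version']
```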
I am trying to use bilinear interpolation in TensorFlow. A function for this is available in the TensorFlow Addons library here, but it only supports TF 2.x. I need to get this function working in TF 1.15, so I made a few small modifications, and things seem fine until training actually starts, when it throws an error I can't understand (see the Colab notebook with demo code). What is this PartitionedCall? I tried another, simpler model with Conv2D layers instead of Conv3D, and that doesn't throw such errors; which part of the Conv3D layers is the culprit here? Also, there is no such error with TF2, of course; what is causing this?
Suggestions for other ways to get bilinear interpolation working in TF 1.15 would also be helpful. Thanks!
Python version 3.7
Keras version 2.3.1
TensorFlow version 1.14.0
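As a backend-agnostic point of comparison, 2-D bilinear interpolation itself is simple enough to sketch in plain NumPy. The function name and align_corners-style coordinate mapping below are my choices, not the Addons implementation:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinearly resize a 2-D array (align_corners=True convention)."""
    in_h, in_w = img.shape
    # Map each output pixel back to a (possibly fractional) input coordinate.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    # Blend the four neighbouring pixels for every output location at once.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
big = bilinear_resize(small, 3, 3)
print(big[1, 1])  # centre value is (0 + 1 + 2 + 3) / 4 = 1.5
```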
I want to run my UNet Keras model using OpenCV's readNetFromTensorflow in C++. I have successfully converted my HDF5 file to .pb per this issue:
How we can convert keras model .h5 file to tensorflow saved model (.pb)
However, when I try to run the command (in Python first, for ease of testing):
net = cv.dnn.readNetFromTensorflow('tensorflow/my_model.pb')
I receive the failure:
error: (-2) Unknown layer type Shape in op decoder_stage0_upsampling/Shape in function cv::dnn::experimental_dnn_v2::`anonymous-namespace'::TFImporter::populateNet
Is there a workaround for this in OpenCV? Or would using TensorFlow's C++ API be better in this situation?
I have solved my problem and will add my solution here for anyone else looking to perform inference with OpenCV on their own UNet.
Step 1:
Convert H5/HDF5 file to .pb as stated in my above question.
Step 2:
OpenCV must be upgraded to 4.2.0 (I am not sure whether my solution is supported by anything between 3.3.1, my starting OpenCV version, and 4.2.0).
Step 3:
Load your network as described in the code in my question; this should succeed. Once done, load your image, use cv2.dnn.blobFromImage() to construct a blob, set it as the input, and finally run inference:
blob = cv.dnn.blobFromImage(image, 1 / 255.0, (256,256), swapRB=True)
net.setInput(blob)
out = net.forward()
View your output:
You will end up with a (1, 1, x, y) shape. Reshape the output using your function of choice (in my case I just used np.resize()). Plot the output and view your results!
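Concretely, squeezing the two singleton axes is enough to get a plottable 2-D array. The output here is simulated with zeros, sized to match the 256x256 blob above:

```python
import numpy as np

# Simulated raw network output in OpenCV's NCHW layout: (1, 1, x, y).
out = np.zeros((1, 1, 256, 256), dtype=np.float32)

# Drop the singleton batch and channel axes to get a plain (x, y) mask.
mask = np.squeeze(out)      # equivalent to out[0, 0] here
print(mask.shape)  # (256, 256)
```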
Hope this helps others who do not want to deal with the TensorFlow C++ API and need a reasonably good working C++ inference setup.
Edit: As a note, I have yet to test this with the C++ OpenCV library. I plan on doing so in the next week or so; if this solution does not work similarly in C++, I will note it here.
Edit 2: Tested and working well in C++
I'm trying to understand what kind of image preprocessing is required when using one of the base networks provided by keras.applications with TensorFlow's compat.v1 module.
In particular, I'm interested in the functions that convert each pixel channel value into the range [-1, 1] or similar. I have dug into the code, and it seems TensorFlow relies on Keras, which in turn has three different functions: one for tf, one for caffe, and one for torch; that is, there are no specific ones for each base network.
Until now I have just re-implemented the function for TensorFlow (value = value/127.5 - 1), but I have also read others discussing something else (e.g. value = value/255), though nothing official. I have started to have doubts about what I'm doing because, after switching to ResNet50, I can't seem to obtain decent results, in contrast to several papers I'm following. I would like a definitive answer on this topic; any help would be much appreciated.
TensorFlow provides a preprocessing function for each model in keras.applications, called preprocess_input. For example, an image can be preprocessed for InceptionV3 using tf.keras.applications.inception_v3.preprocess_input.
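For reference, the three conventions those Keras functions implement can be sketched in plain NumPy. The mode names follow the Keras source, and the mean/std constants are the standard ImageNet values used there; treat this as an illustration, not the library code:

```python
import numpy as np

def preprocess(x, mode):
    """Sketch of the three scaling modes behind keras.applications
    preprocess_input, applied to a float RGB array in [0, 255]."""
    x = np.asarray(x, dtype=np.float64)
    if mode == "tf":        # e.g. InceptionV3, MobileNet: scale to [-1, 1]
        return x / 127.5 - 1.0
    if mode == "torch":     # scale to [0, 1], then ImageNet mean/std per channel
        x = x / 255.0
        return (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
    if mode == "caffe":     # e.g. ResNet50, VGG: RGB -> BGR, subtract BGR means
        return x[..., ::-1] - [103.939, 116.779, 123.68]
    raise ValueError(mode)

white = np.full((1, 1, 3), 255.0)
print(preprocess(white, "tf"))     # all 1.0
```

In particular, ResNet50 uses the caffe convention, so feeding it inputs scaled with value/127.5 - 1 would explain the degraded results compared to the papers.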
This is probably a very basic question...
But how do I convert checkpoint files into a single .pb file?
My goal is to serve the model, probably using C++.
These are the files I'm trying to convert.
As a side note, I'm using TFLearn with TensorFlow.
Edit 1:
I found an article that explains how to do this: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
The problem is that I'm stuck with the following error:
KeyError: "The name 'Adam' refers to an Operation not in the graph."
How do I fix this?
Edit 2:
Maybe this will shed some light on the problem.
The error I get comes from the regression layer. If I use sgd instead,
I get:
KeyError: "The name 'SGD' refers to an Operation not in the graph."
The tutorial at https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc works just fine.
The problem was that I was loading the model using TensorFlow instead of TFLearn.
So... instead of:
tf.train.import_meta_graph(...)
We do:
model.load(...)
TFLearn knows how to parse the graph properly.