I have been following this tutorial to perform voice command recognition for a couple words on my ESP32: https://github.com/atomic14/voice-controlled-robot
I was able to train my model and now have the "fully_trained.model" file.
Currently I am trying to convert the .model file into a .tflite file, but I am getting the "'str' has no attribute 'call'" error (my code and the full error output are in the linked images: Code, Code, Errors).
My tensorflow version is 2.6.2 and python version is 3.10.
Unfortunately, I do not have 10 reputation points yet, so I couldn't embed the images.
If you use tf.lite.TFLiteConverter.from_keras_model you need to pass the tf.keras.Model instance, not the path to the saved_model folder.
Use tf.lite.TFLiteConverter.from_saved_model() instead and pass the path to the "fully_trained.model" folder.
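A minimal sketch of that approach, assuming "fully_trained.model" is the SavedModel directory produced by model.save():
import tensorflow as tf
# pass the path to the SavedModel directory, not a Keras model object
converter = tf.lite.TFLiteConverter.from_saved_model("fully_trained.model")
tflite_model = converter.convert()
with open("fully_trained.tflite", "wb") as f:
    f.write(tflite_model)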
You've passed "fully_trained.model", with quotation marks, as an argument to TFLiteConverter. That's a string. Give the model a name and pass that name as an argument to the converter, without quotation marks.
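If you want to stay with from_keras_model, load the model first and pass the loaded object, for example (a sketch, again assuming the folder was written by Keras model.save()):
import tensorflow as tf
model = tf.keras.models.load_model("fully_trained.model")   # the loaded tf.keras.Model
converter = tf.lite.TFLiteConverter.from_keras_model(model)  # pass the model, not a string
tflite_model = converter.convert()
open("fully_trained.tflite", "wb").write(tflite_model)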
I'm trying to run a script that builds and loads a TF dataset. The dataset is cityscapes and it is already downloaded and stored in fs/datasets/cityscapes/. I can't move the data. In the directory, there are the following files: ['tfrecord', 'gtFine', 'tfrecord_instances_old', 'README', 'leftImg8bit', 'cityscapesScripts', 'tfrecord_instances', 'license.txt']. An error arises when I try to run dataset = self._dataset_builder.as_dataset(split=self._split, decoders=self._decoders). This error is
AssertionError: Dataset cityscapes: could not find data in /fs/datasets/cityscapes. Please make sure to call dataset_builder.download_and_prepare(), or pass download=True to tfds.load() before trying to access the tf.data.Dataset object.
I believe the issue relates to the message "Constructing tf.data.Dataset cityscapes for split train, from /fs/datasets/cityscapes/cityscapes/semantic_segmentation/1.0.0", which is printed before the error. The extra path components come from the Cityscapes TFDS DatasetInfo object. If I try to edit the data_dir or data_path in that object with self._dataset_builder.info.data_dir='/fs/datasets/cityscapes', I receive the error message: AttributeError: can't set attribute. So if anyone has a fix, I'd appreciate it.
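For what it's worth, the data directory can only be chosen when the builder is constructed, not by assigning to the DatasetInfo afterwards; an untested sketch of that (assuming the builder is created through tfds.builder) would be:
import tensorflow_datasets as tfds
# data_dir is the directory under which TFDS expects cityscapes/semantic_segmentation/1.0.0
builder = tfds.builder("cityscapes/semantic_segmentation", data_dir="/fs/datasets/cityscapes")
dataset = builder.as_dataset(split="train")
# note: this only changes where TFDS looks; the prepared TFRecords still have to sit in that nested layout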
I was trying out this tutorial using this Plant Leaves dataset (over 35k images, consisting of .JPG, .PNG and .JPEG files) with tensorflow version 1.14.
I followed similar steps, except that I skipped the "Load using keras.preprocessing" part and jumped directly to the "Load using tf.data" part. But when I ran the snippet it threw this error:
File "D:\Softwares\Anaconda\lib\site-packages\tensorflow\python\ops\ragged\ragged_string_ops.py", line 640, in strings_split_v1
return ragged_result.to_sparse()
AttributeError: 'Tensor' object has no attribute 'to_sparse'
Complete error:
My code snippet is:
import os
import pathlib
import numpy as np
import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE
dir_root = pathlib.Path("D:/Projects/IIIT/LeafID/Dataset/PlantVillage")
# clnames: array of class names, taken from the sub-directory names
clnames = np.array([item.name for item in dir_root.glob("*") if item.is_dir()])
list_ds = tf.data.Dataset.list_files(str(dir_root/"*/*"))

def getLabel(fpath):
    # the label is the parent directory name, compared against the class names
    parts = tf.strings.split(fpath, os.path.sep)
    return parts[-2] == clnames

def decodeimg(img):
    # decode the raw bytes, scale to [0, 1] and resize to 64x64
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    return tf.image.resize(img, [64, 64])

def process_path(fpath):
    label = getLabel(fpath)
    img = tf.io.read_file(fpath)
    img = decodeimg(img)
    return img, label

label_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
which is almost identical to the code in the tutorial, apart from the variable names.
I can't figure out what the problem is here.
Is there something wrong with the way the images are converted to tensors? When I open ragged_string_ops.py, it shows me something like this:
if result_type == "SparseTensor":
  return ragged_result.to_sparse()
Thanks in advance.
I ran into a similar issue working through that tutorial, and found this issue suggesting it's a bug with strings.split in certain tensorflow versions (I saw this issue in tf 1.14, and the OP in the github issue saw it in 1.15): https://github.com/tensorflow/tensorflow/issues/33623
From the linked issue comments, it looks like two possible solutions are
(1) adding brackets around your string, e.g. c = tf.strings.split(['a b'])
or
(2) adding result_type='RaggedTensor', e.g. tf.strings.split('a b', result_type='RaggedTensor') to return a tensor (although this looks like buggy behavior, and may be corrected in later versions of tf).
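As a rough sketch, applying workaround (2) to the getLabel function from the question would look something like this (untested on my side, and only needed on the affected 1.14/1.15 releases):
def getLabel(fpath):
    # result_type='RaggedTensor' avoids the to_sparse() call that fails on scalar inputs
    parts = tf.strings.split(fpath, os.path.sep, result_type='RaggedTensor')
    return parts[-2] == clnames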
Hope this helps.
I want to save the visualisation produced by spaCy, using the code that spaCy offers here: https://spacy.io/usage/visualizers
Here is my code:
import os
from pathlib import Path
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
svg = spacy.displacy.render(doc, style="dep")
output_path = Path(os.path.join("./", "sentence.svg"))
output_path.open("w", encoding="utf-8").write(svg)
But when I execute this code, there is an error: TypeError: write() argument must be str, not None
So how can I save the output of spacy.displacy.render? How can I fix this error?
The code provided in the question works fine when executed in an IDE like PyCharm.
However, displacy.render() works slightly differently in a Jupyter notebook; cf. the documentation here and here:
If you’re running a Jupyter notebook, displaCy will detect this and return the markup in a format ready to be rendered and exported.
You can overwrite this automated detection by explicitly setting jupyter=False in the render() call.
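For example, a minimal adjustment to the code above when it runs inside a notebook:
svg = spacy.displacy.render(doc, style="dep", jupyter=False)  # always return the raw SVG markup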
I have a problem opening a protobuf file using OpenCV in C++.
I use this code:
cv::String weights = "frozen_inference_graph_face.pb";
cv::String pbtxt = "prototxt.pbtxt";
auto graph = cv::dnn::readNetFromTensorflow(weights, pbtxt);
I have this error:
OpenCV(3.4.1) Error: Unspecified error (FAILED: fs.is_open(). Can't open "frozen_inference_graph_face.pb") in cv::dnn::ReadProtoFromBinaryFile, file C:.hunter_Base\acbf4b9\93b3222\8eb84a0\Build\OpenCV\Source\modules\dnn\src\caffe\caffe_io.cpp, line 1126
It works well when I open it with Python code like this, and it detects images correctly:
cvNet = cv.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'prototxt.pbtxt')
I have trained ssd_mobilenet_v1_pets. I cannot understand why I can't open it with my C++ code, and why the error refers to Caffe when I am using TensorFlow. Maybe the configuration of the built OpenCV is wrong? I set WITH_PROTOBUF=ON and BUILD_opencv_dnn=ON.
Most likely it's a path problem. You should check the relative path to the model files, like this:
model = cv2.dnn.readNetFromCaffe("CarTypeRecognizition/model/vehicle_model.prototxt",
                                 "CarTypeRecognizition/model/vehicle_model.caffemodel")
I tried tflite_convert to convert my saved_model.pb (object detection API) file to .tflite, but when I execute this command in cmd in the directory C:\Users\LENOVO-PC\tensorflow (where the tensorflow git repo is cloned),
tflite_convert \ --output_file=/saved_model/maonani.tflite \ --saved_model_dir=/saved_model/saved_model
I get an error saying
ImportError: No module named 'tensorflow.contrib.lite.python.tflite_convert'
The complete log is:
C:\Users\LENOVO-PC\tensorflow>tflite_convert \ --output_file=/saved_model/maonani.tflite \ --saved_model_dir=/saved_model/saved_model
Traceback (most recent call last):
File "c:\users\lenovo-pc\appdata\local\programs\python\python35\lib\runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\lenovo-pc\appdata\local\programs\python\python35\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\LENOVO-PC\AppData\Local\Programs\Python\Python35\Scripts\tflite_convert.exe\__main__.py", line 5, in <module>
ImportError: No module named 'tensorflow.contrib.lite.python.tflite_convert'
Is there any way to convert my .pb file to .tflite on Windows?
Hi, my solution was to use Linux in the following way.
Install the Windows Subsystem for Linux, then install Ubuntu from the store.
Then run pip3 install --upgrade "tensorflow==1.7.*".
If you now try to run toco, it will not be recognized.
The solution is to go to the folder
~/.local/bin/
where you will find toco, which is a Python file.
Run
python3 ~/.local/bin/toco
and you get the "exe" of toco.
To convert, you can run the command explained in https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#2; just change the --graph_def_file=tf_files/retrained_graph.pb flag to --input_file=tf_files/retrained_graph.pb.
Hope this helps someone
Note: if you are missing pip3, you will need to install it first.
I followed the instructions on this site:
https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#2
However, it seems that tflite_convert doesn't support Windows anymore, so I decided to use Ubuntu on Windows. After creating a virtual environment and installing tensorflow, I checked toco by typing toco in the terminal, which printed its usage instructions:
usage: /home/hieu/venv/bin/toco
Flags:
--input_file="" string Input file (model of any supported format). For Protobuf formats, both text and binary are supported regardless of file extension.
--output_file="" string Output file. For Protobuf formats, the binary format will be used.
--input_format="" string Input file format. One of: TENSORFLOW_GRAPHDEF, TFLITE.
--output_format="" string Output file format. One of TENSORFLOW_GRAPHDEF, TFLITE, GRAPHVIZ_DOT.
--default_ranges_min=0.000000 float If defined, will be used as the default value for the min bound of min/max ranges used for quantization.
--default_ranges_max=0.000000 float If defined, will be used as the default value for the max bound of min/max ranges used for quantization.
--inference_type="" string Target data type of arrays in the output file (for input_arrays, this may be overridden by inference_input_type). One of FLOAT, QUANTIZED_UINT8.
--inference_input_type="" string Target data type of input arrays. If not specified, inference_type is used. One of FLOAT, QUANTIZED_UINT8.
--input_type="" string Deprecated ambiguous flag that set both --input_data_types and --inference_input_type.
--input_types="" string Deprecated ambiguous flag that set both --input_data_types and --inference_input_type. Was meant to be a comma-separated list, but this was deprecated before multiple-input-types was ever properly supported.
--drop_fake_quant=false bool Ignore and discard FakeQuant nodes. For instance, to generate plain float code without fake-quantization from a quantized graph.
--reorder_across_fake_quant=false bool Normally, FakeQuant nodes must be strict boundaries for graph transformations, in order to ensure that quantized inference has the exact same arithmetic behavior as quantized training --- which is the whole point of quantized training and of FakeQuant nodes in the first place. However, that entails subtle requirements on where exactly FakeQuant nodes must be placed in the graph. Some quantized graphs have FakeQuant nodes at unexpected locations, that prevent graph transformations that are necessary in order to generate inference code for these graphs. Such graphs should be fixed, but as a temporary work-around, setting this reorder_across_fake_quant flag allows TOCO to perform necessary graph transformaitons on them, at the cost of no longer faithfully matching inference and training arithmetic.
--allow_custom_ops=false bool If true, allow TOCO to create TF Lite Custom operators for all the unsupported TensorFlow ops.
--drop_control_dependency=false bool If true, ignore control dependency requirements in input TensorFlow GraphDef. Otherwise an error will be raised upon control dependency inputs.
--debug_disable_recurrent_cell_fusion=false bool If true, disable fusion of known identifiable cell subgraphs into cells. This includes, for example, specific forms of LSTM cell.
And many more...
After that, I used this command to convert the file:
toco --input_file="tf_files/retrained_graph.pb" --output_file="tf_files/optimized_graph.lite" --input_format="TENSORFLOW_GRAPHDEF" --output_format="TFLITE" --input_shape="1,224,224,3" --input_array="input" --output_array="final_result" --inference_type="FLOAT" --input_data_type="FLOAT"
The optimized_graph.lite file should then appear in tf_files.
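As a quick sanity check that the conversion produced a loadable model, something like this should work (a sketch; tf.lite.Interpreter is the name in newer TensorFlow releases, older 1.x versions expose it under tf.contrib.lite):
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path="tf_files/optimized_graph.lite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())   # should report the 1x224x224x3 float input
print(interpreter.get_output_details())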
According to this thread: Tensorflow discussions
The issue really is that the module, as of now, is not supported on Windows. You can follow the thread and see if there is an update regarding the same.
P.S.: Some people claim that a git clone and a Bazel build (instead of a pip install) helped resolve the issue. You can try that as well, but I have serious doubts that it will work.