Python layer can't read hdf5 file in caffe framework - python

I wrote a Python layer for Caffe that reads HDF5 files with some manipulation I need. But I have an issue when opening and reading the HDF5 file in the Python setup method. Here it is.
When I used tables with this code in setup:
def setup(self, bottom, top):
    h5file = tables.open_file("/home/titan/models/hdf5/train_small.h5", driver="H5FD_CORE")
I get this error when running the net:
Traceback (most recent call last):
File "/home/titan/scripts/python_layers/pydata_hdf5.py", line 37, in setup
h5file = tables.open_file("/home/titan/models/hdf5/train_small.h5", driver="H5FD_CORE")
File "/home/titan/anaconda/lib/python2.7/site-packages/tables/file.py", line 318, in open_file
return File(filename, mode, title, root_uep, filters, **kwargs)
File "/home/titan/anaconda/lib/python2.7/site-packages/tables/file.py", line 784, in __init__
self._g_new(filename, mode, **params)
File "tables/hdf5extension.pyx", line 465, in tables.hdf5extension.File._g_new (tables/hdf5extension.c:4872)
tables.exceptions.HDF5ExtError: HDF5 error back trace
File "../../../src/H5FDcore.c", line 273, in H5Pset_fapl_core
not a file access property list
File "../../../src/H5Pint.c", line 3371, in H5P_object_verify
property list is not a member of the class
File "../../../src/H5Pint.c", line 3321, in H5P_isa_class
not a property list
End of HDF5 error back trace
When I used h5py:
def setup(self, bottom, top):
    self.data = h5py.File('/home/titan/models/hdf5_nose_mouth/train_small.h5', 'r')
I get a similar error:
Traceback (most recent call last):
File "/home/titan/scripts/python_layers/pydata_hdf5.py", line 11, in <module>
import h5py
File "/home/titan/anaconda/lib/python2.7/site-packages/h5py/__init__.py", line 31, in <module>
from .highlevel import *
File "/home/titan/anaconda/lib/python2.7/site-packages/h5py/highlevel.py", line 13, in <module>
from ._hl.base import is_hdf5, HLObject
File "/home/titan/anaconda/lib/python2.7/site-packages/h5py/_hl/base.py", line 78, in <module>
dlapl = default_lapl()
File "/home/titan/anaconda/lib/python2.7/site-packages/h5py/_hl/base.py", line 65, in default_lapl
lapl = h5p.create(h5p.LINK_ACCESS)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2458)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2415)
File "h5py/h5p.pyx", line 130, in h5py.h5p.create (-------src-dir-------/h5py/h5p.c:2491)
ValueError: Not a property list class (Not a property list class)
When I used deepdish:
def setup(self, bottom, top):
    self.data = dd.io.load('/home/titan/models/hdf5/train_smallest.h5')
I get an error too:
Traceback (most recent call last):
File "/home/titan/scripts/python_layers/pydata_hdf5.py", line 36, in setup
self.data = dd.io.load('/home/titan/models/hdf5/train_smallest.h5')
File "/home/titan/anaconda/lib/python2.7/site-packages/deepdish/io/hdf5io.py", line 476, in load
with tables.open_file(path, mode='r') as h5file:
File "/home/titan/anaconda/lib/python2.7/site-packages/tables/file.py", line 318, in open_file
return File(filename, mode, title, root_uep, filters, **kwargs)
File "/home/titan/anaconda/lib/python2.7/site-packages/tables/file.py", line 784, in __init__
self._g_new(filename, mode, **params)
File "tables/hdf5extension.pyx", line 488, in tables.hdf5extension.File._g_new (tables/hdf5extension.c:5081)
tables.exceptions.HDF5ExtError: HDF5 error back trace
File "../../../src/H5F.c", line 1582, in H5Fopen
not file access property list
File "../../../src/H5Pint.c", line 3321, in H5P_isa_class
not a property list
End of HDF5 error back trace
But when I simply read a *.txt file, everything is fine. I can also read this HDF5 file from the console and use it in an HDF5 data layer in Caffe. Please help me: how can I read an HDF5 file from a Python layer?

Try installing another version of h5py. I just solved it with:
pip install h5py==prev_version
I guess it has something to do with linking, but it would be interesting to know the exact cause of this problem.
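For reference, once a compatible h5py build is installed, a minimal Python data layer along these lines should be able to open the file in setup(). This is only a sketch: the class name, the reuse of the file path, and the dataset keys 'data' and 'label' are assumptions, not details from the question.
import caffe
import h5py

class HDF5DataLayer(caffe.Layer):
    # Minimal sketch of a Python data layer backed by an HDF5 file.
    def setup(self, bottom, top):
        # Open the file once and keep the handle for later forward passes.
        self.h5 = h5py.File('/home/titan/models/hdf5/train_small.h5', 'r')
        self.data = self.h5['data']    # assumed dataset key
        self.label = self.h5['label']  # assumed dataset key
        self.idx = 0

    def reshape(self, bottom, top):
        # One sample per forward pass in this sketch.
        top[0].reshape(1, *self.data.shape[1:])
        top[1].reshape(1, *self.label.shape[1:])

    def forward(self, bottom, top):
        top[0].data[...] = self.data[self.idx]
        top[1].data[...] = self.label[self.idx]
        self.idx = (self.idx + 1) % self.data.shape[0]

    def backward(self, top, propagate_down, bottom):
        pass  # a data layer has nothing to backpropagate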

Related

Resize method is not implemented Python

Hi, I am working on a project to do segmentation of persons. I followed the code from https://pixellib.readthedocs.io/en/latest/Image_pascal.html#image-pascal and it gives me an error on line 4: ValueError: Resize method is not implemented.
import pixellib
from pixellib.semantic import semantic_segmentation
segment_video = semantic_segmentation()
segment_video.load_pascalvoc_model("deeplabv3_xception_tf_dim_ordering_tf_kernels.h5")
segment_video.process_video_pascalvoc("IMG_2649.mp4", overlay=True, frames_per_second=15,
                                      output_video_name="output.mp4")
Anyone know why this error is being triggered?
Error:
Traceback (most recent call last):
File "H:/Yolo/person_seg.py", line 4, in <module>
segment_video = semantic_segmentation()
File "G:\anaconda3\envs\yolo5\lib\site-packages\pixellib\semantic.py", line 23, in __init__
self.model = Deeplab_xcep_pascal()
File "G:\anaconda3\envs\yolo5\lib\site-packages\pixellib\deeplab.py", line 214, in Deeplab_xcep_pascal
method='bilinear', align_corners=True))(b4)
File "G:\anaconda3\envs\yolo5\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 554, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "G:\anaconda3\envs\yolo5\lib\site-packages\tensorflow\python\keras\layers\core.py", line 743, in call
return self.function(inputs, **arguments)
File "G:\anaconda3\envs\yolo5\lib\site-packages\pixellib\deeplab.py", line 214, in <lambda>
method='bilinear', align_corners=True))(b4)
File "G:\anaconda3\envs\yolo5\lib\site-packages\tensorflow\python\ops\image_ops_impl.py", line 960, in resize_images
name=None)
File "G:\anaconda3\envs\yolo5\lib\site-packages\tensorflow\python\ops\image_ops_impl.py", line 1088, in resize_images_v2
raise ValueError('Resize method is not implemented.')
ValueError: Resize method is not implemented.
Make sure to follow the initial steps prior to installing the PixelLib library, since it requires the latest version of TensorFlow (TensorFlow 2.0+) as well as imgaug.
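As a quick sanity check, you can confirm which TensorFlow version the active environment actually uses before running PixelLib (a minimal sketch; the 2.0+ requirement is the one stated above):
import tensorflow as tf

# Older 1.x installs hit the "Resize method is not implemented" error seen above.
print(tf.__version__)
assert int(tf.__version__.split('.')[0]) >= 2, 'upgrade TensorFlow to 2.0+ for PixelLib'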

Post-training quantization with tflite cause runtime error

I am trying to quantize my model (specifically a pretrained faster_rcnn_inception_v2 trained on COCO, downloaded from the model zoo) in hopes of speeding up inference time.
I use the following code from here:
import tensorflow as tf
converter = tf.lite.TocoConverter.from_saved_model(saved_model_dir)
converter.post_training_quantize = True
tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)
The model's directory didn't have a saved_model.pb file, so I renamed frozen_inference_graph.pb to saved_model.pb.
Running the code above produces the following runtime error:
Traceback (most recent call last):
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1664, in <module>
main()
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1658, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1068, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/hdd/motorola/motorola_heads/tensorflow_face_detection/quantize.py", line 5, in <module>
converter = tf.lite.TocoConverter.from_saved_model(saved_model_dir)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 318, in new_func
return func(*args, **kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/lite.py", line 587, in from_saved_model
tag_set, signature_key)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/lite.py", line 376, in from_saved_model
output_arrays, tag_set, signature_key)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/convert_saved_model.py", line 254, in freeze_saved_model
meta_graph = get_meta_graph_def(saved_model_dir, tag_set)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/convert_saved_model.py", line 61, in get_meta_graph_def
return loader.load(sess, tag_set, saved_model_dir)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 318, in new_func
return func(*args, **kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 269, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 420, in load
**saver_kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 347, in load_graph
meta_graph_def = self.get_meta_graph_def_from_tags(tags)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 323, in get_meta_graph_def_from_tags
" could not be found in SavedModel. To inspect available tag-sets in"
RuntimeError: MetaGraphDef associated with tags set(['serve']) could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
What does it mean and what should I do?
Please refer to this issue. They seem to have the same issue as you.
This may be fixed in a more recent version of Tensorflow (perhaps the tag has switched from 'serve' to 'serving' in the meantime).
You should use tf.saved_model.simple_save to save the pb model.
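A minimal sketch of that suggestion for TF 1.x: import the frozen graph, re-export it with tf.saved_model.simple_save (which writes a saved_model.pb carrying the standard 'serve' tag plus a variables/ directory), and point the converter at the exported directory. The tensor names and paths below are placeholders, not values taken from the original model:
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    tf.import_graph_def(graph_def, name='')
    # Placeholder tensor names: adapt to the actual inputs/outputs of the graph.
    inputs = {'image_tensor': sess.graph.get_tensor_by_name('image_tensor:0')}
    outputs = {'detection_boxes': sess.graph.get_tensor_by_name('detection_boxes:0')}
    tf.saved_model.simple_save(sess, 'exported_saved_model', inputs, outputs)

converter = tf.lite.TocoConverter.from_saved_model('exported_saved_model')
converter.post_training_quantize = True
tflite_quantized_model = converter.convert()
open('quantized_model.tflite', 'wb').write(tflite_quantized_model)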

pygame game crashes when I edit tilesheets in tiled

I am trying to make a game with pygame using Tiled. When I edit tilesheets (for example, to add collision) I get a bunch of error messages when I run my code.
When I use the same tilesheet without editing it, I get no errors and all the files work.
Traceback (most recent call last):
File "C:/Users/47988/PycharmProjects/Terrible/Main.py", line 94, in
<module>
g = Game()
File "C:/Users/47988/PycharmProjects/Terrible/Main.py", line 16, in init
self.load_data()
File "C:/Users/47988/PycharmProjects/Terrible/Main.py", line 22, in load_data
self.map = TiledMap(path.join(map_folder, "map6.tmx"))
File "C:\Users\47988\PycharmProjects\Terrible\Tilemap.py", line 21, in init
tm = pytmx.load_pygame(filename, pixelalpha=True)
File "C:\Users\47988\PycharmProjects\Terrible\venv\lib\site- packages\pytmx\util_pygame.py", line 141, in load_pygame
return pytmx.TiledMap(filename, *args, **kwargs)
File "C:\Users\47988\PycharmProjects\Terrible\venv\lib\site-packages\pytmx\pytmx.py", line 360, in init
self.parse_xml(ElementTree.parse(self.filename).getroot())
File "C:\Users\47988\PycharmProjects\Terrible\venv\lib\site- packages\pytmx\pytmx.py", line 400, in parse_xml
self.add_tileset(TiledTileset(self, subnode))
File "C:\Users\47988\PycharmProjects\Terrible\venv\lib\site-packages\pytmx\pytmx.py", line 845, in init
self.parse_xml(node)
File "C:\Users\47988\PycharmProjects\Terrible\venv\lib\site-packages\pytmx\pytmx.py", line 874, in parse_xml
raise Exception
Exception
Is there any way of solving this?

saving pandas dataframe as hdf5

Using pandas version 0.19.1 (with py27-tables-3.2.2_1 and hdf5-1.10.0 installed on my system), I am trying to save a pandas dataframe as a .h5 with:
import pandas as pd
df = pd.DataFrame(dict(A=range(5), B=range(5)))
df.to_hdf('savefile.h5', 'table', mode='w')
However the following error results:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/generic.py", line 1138, in to_hdf
return pytables.to_hdf(path_or_buf, key, self, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 270, in to_hdf
f(store)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 264, in <lambda>
f = lambda store: store.put(key, value, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 873, in put
self._write_to_group(key, value, append=append, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 1315, in _write_to_group
s.write(obj=value, append=append, complib=complib, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 2864, in write
self.attrs.ndim = data.ndim
File "/usr/local/lib/python2.7/site-packages/tables/attributeset.py", line 461, in __setattr__
self._g__setattr(name, value)
File "/usr/local/lib/python2.7/site-packages/tables/attributeset.py", line 403, in _g__setattr
self._g_setattr(self._v_node, name, stvalue)
File "tables/hdf5extension.pyx", line 696, in tables.hdf5extension.AttributeSet._g_setattr (tables/hdf5extension.c:7549)
tables.exceptions.HDF5ExtError: HDF5 error back trace
File "H5A.c", line 634, in H5Awrite
not an attribute
End of HDF5 error back trace
Can't set attribute 'ndim' in node:
/table (Group) ''.
Could someone provide a simple working example of how to save a pandas dataframe in HDF5 format?
PyTables is currently not compatible with hdf5-1.10, as reported in this issue on GitHub; downgrading to hdf5-1.8 is the recommended solution.
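With a compatible PyTables/HDF5 pairing in place, the snippet from the question is itself the minimal working example; a short round trip to confirm the file is readable might look like this (assuming the same file name and key):
import pandas as pd

df = pd.DataFrame(dict(A=range(5), B=range(5)))
df.to_hdf('savefile.h5', 'table', mode='w')   # 'table' is the key inside the HDF5 file

# Read it back to confirm the write succeeded.
print(pd.read_hdf('savefile.h5', 'table'))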

Failed to execute Augustus PMML Gaslog Example. Need help to debug

I ran the command to test the Gaslog example of Augustus:
Augustus consumer_config.xcfg
But I got the following error:
Traceback (most recent call last):
File "/usr/local/bin/Augustus", line 171, in <module>
main(config)
File "/usr/local/lib/python2.7/dist-packages/augustus/engine/mainloop.py", line 532, in main
mainLoop = MainLoop(configuration, dataStream=dataStream, rethrowExceptions=rethrowExceptions)
File "/usr/local/lib/python2.7/dist-packages/augustus/engine/mainloop.py", line 150, in __init__
self.model = xmlbase.loadfile(fileLocation, pmml.X_ODG_PMML, lineNumbers=True)
File "/usr/local/lib/python2.7/dist-packages/augustus/core/xmlbase.py", line 1628, in loadfile
return load(file(fileName), base, validation, dropSpecial, lineNumbers)
File "/usr/local/lib/python2.7/dist-packages/augustus/core/xmlbase.py", line 1807, in load
parser.parse(stream)
File "/usr/lib/python2.7/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/lib/python2.7/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/lib/python2.7/xml/sax/expatreader.py", line 210, in feed
self._parser.Parse(data, isFinal)
File "/usr/lib/python2.7/xml/sax/expatreader.py", line 307, in end_element
self._cont_handler.endElement(name)
File "/usr/local/lib/python2.7/dist-packages/augustus/core/xmlbase.py", line 1728, in endElement
raise XMLValidationError("%sXMLValidationError: %s." % (stacktrace, str(err)))
augustus.core.xmlbase.XMLValidationError: Below is a traceback to the line that caused the actual exception.
File "/usr/local/lib/python2.7/dist-packages/augustus/core/xmlbase.py", line 1721, in endElement
last.validate(recurse=False, exception=True)
File "/usr/local/lib/python2.7/dist-packages/augustus/core/xmlbase.py", line 872, in validate
self.xsd.validate(self)
File "/usr/local/lib/python2.7/dist-packages/augustus/core/xmlbase.py", line 1579, in validate
xml.post_validate()
File "/usr/local/lib/python2.7/dist-packages/augustus/core/pmml41.py", line 1656, in post_validate
pmmlApply.top_validate_transformationDictionary(self.transformationDictionary)
File "/usr/local/lib/python2.7/dist-packages/augustus/core/pmml41.py", line 7092, in top_validate_transformationDictionary
raise PMMLValidationError("Apply function \"%s\" not recognized (not built-in and not user-defined)" % function)
XMLValidationError: Apply function "formatDateTime" not recognized (not built-in and not user-defined).
Ref:
Example I was trying: https://github.com/codersofthedark/augustus/tree/master/augustus-examples/gaslog/introductory
Augustus: https://code.google.com/p/augustus/
I got the same error. I'm not an expert in Augustus, but it looks like the model file, "example_model.pmml", has the function "formatDateTime" spelled wrong in two places. It should be "formatDatetime" (i.e., "time" should start with a lowercase "t"). When I made that correction, the example ran and produced output in the results directory.
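A minimal sketch of that spelling fix applied programmatically rather than by hand; the path to example_model.pmml is an assumption based on the example's directory layout:
# Fix the misspelled function name in the example PMML model.
path = 'augustus-examples/gaslog/introductory/example_model.pmml'  # assumed location

with open(path) as f:
    pmml = f.read()

# "formatDateTime" appears twice in the file; both occurrences become "formatDatetime".
pmml = pmml.replace('formatDateTime', 'formatDatetime')

with open(path, 'w') as f:
    f.write(pmml)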
