I'm running OpenCV 3.2.0 on Ubuntu 14.04 with the latest opencv_contrib.
I ran this example:
https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/textdetection.py
But it shows this error:
$ python textdetection.py scenetext_word01.jpg
textdetection.py
A demo script of the Extremal Region Filter algorithm described in:
Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012
Extracting Class Specific Extremal Regions from 9 channels ...
(...) this may take a while (...)
OpenCV Error: Bad argument (Default classifier file not found!) in ERClassifierNM1, file /home/vietnam/opencv_and_contri/opencv_contrib/modules/text/src/erfilter.cpp, line 1022
Traceback (most recent call last):
File "textdetection.py", line 38, in <module>
erc1 = cv2.text.loadClassifierNM1(pathname+'/trained_classifierNM1.xml')
cv2.error: /home/vietnam/opencv_and_contri/opencv_contrib/modules/text/src/erfilter.cpp:1022: error: (-5) Default classifier file not found! in function ERClassifierNM1
How to solve this?
Try using relative paths in the parameters for cv2.text.loadClassifierNM1() and cv2.text.loadClassifierNM2()
So now that part of the code looks like this:
erc1 = cv2.text.loadClassifierNM1('./trained_classifierNM1.xml')
er1 = cv2.text.createERFilterNM1(erc1,16,0.00015,0.13,0.2,True,0.1)
erc2 = cv2.text.loadClassifierNM2('./trained_classifierNM2.xml')
er2 = cv2.text.createERFilterNM2(erc2,0.5)
I'm not sure why this works (it did for me), but I tried this after looking at a solution posted for a similar problem in VS2015 here: https://github.com/cesardelgadof/OpenCVBinaries/issues/1
Hope this helps.
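For reference, you can also build the path from the script's own location so it does not depend on the current working directory. A small sketch, assuming the trained_classifierNM1.xml / trained_classifierNM2.xml files sit next to the script, as they do in the samples directory:

import os
import cv2

# Directory containing this script; the classifier XML files are assumed to live beside it.
pathname = os.path.dirname(os.path.abspath(__file__))

erc1 = cv2.text.loadClassifierNM1(os.path.join(pathname, 'trained_classifierNM1.xml'))
er1 = cv2.text.createERFilterNM1(erc1, 16, 0.00015, 0.13, 0.2, True, 0.1)
erc2 = cv2.text.loadClassifierNM2(os.path.join(pathname, 'trained_classifierNM2.xml'))
er2 = cv2.text.createERFilterNM2(erc2, 0.5)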
Trying with an absolute path, e.g. "/usr/lib/opencv-3.2.0/opencv_contrib-3.2.0/modules/text/samples/trained_classifierNM1.xml", worked in my case on Ubuntu 16.04 with C++.
I followed this link to the docs to create an environment of my own.
But when I run this:
from mlagents_envs.environment import UnityEnvironment
env = UnityEnvironment(file_name="v1-ball-cube-game.x86_64")
env.reset()
behavior_names = env.behavior_spec.keys()
print(behavior_names)
The game window pops up and then the terminal shows an error saying:
Traceback (most recent call last):
File "index.py", line 6, in <module>
behavior_names = env.behavior_spec.keys()
AttributeError: 'UnityEnvironment' object has no attribute 'behavior_spec'
despite the fact that this is the exact snippet shown in the documentation.
I created the environment by following this guide (it is built without a Brain), and I was able to train the model with a .conf file. Now I want to connect to the Python API.
You need to use the stable documentation and a stable repo (RELEASE_TAGS) to achieve stable results. Unity ML-Agents changes its syntax every few months, so that is a problem if you are following the master branch.
env.get_behavior_spec(behavior_name: str)
Should solve your problem.
https://github.com/Unity-Technologies/ml-agents/blob/release_2/docs/Python-API.md
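A minimal sketch against the release_2 API linked above (this assumes get_behavior_names() and get_behavior_spec() are available in that release, as that document describes; the file name is the one from the question):

from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="v1-ball-cube-game.x86_64")
env.reset()

# Query the behaviors registered by the Unity scene, then fetch the spec for one of them.
behavior_names = list(env.get_behavior_names())
spec = env.get_behavior_spec(behavior_names[0])
print(behavior_names, spec)

env.close()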
I am trying to use Open3D to create an "alphahull" (alpha shape) around a set of 3D points using TriangleMesh. However, I get a TypeError.
import open3d as o3d
import numpy as np
xx =np.asarray([[10,21,18], [31,20,25], [36,20,24], [33,19,24], [22,25,13], [25,19,24], [22,26,10],[29,19,24]])
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(xx)
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd=cloud, alpha=10.0)
output:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: create_from_point_cloud_alpha_shape(): incompatible function arguments. The following argument types are supported:
1. (pcd: open3d.open3d.geometry.PointCloud, alpha: float, tetra_mesh: open3d::geometry::TetraMesh, pt_map: List[int]) -> open3d.open3d.geometry.TriangleMesh
The error says the object I am passing to the function is the wrong type. But when I check the type, I get this:
>>> print(type(cloud))
<class 'open3d.open3d.geometry.PointCloud'>
Please can someone help me with this error?
Note: A comment on this post, Python open3D no attribute 'create_coordinate_frame', suggested it might be a problem with the installation and that a solution was to compile the library from source. So I compiled the library from source. After this I ran make install-pip-package, though I am not sure it completed correctly, because I couldn't import open3d in Python yet; see the output of the installation: https://pastebin.com/sS2TZfTL
(I wasn't sure if that command was supposed to complete the installation, or if you were still required to run pip? After I ran python3 -m pip install Open3d I could import the library in Python.)
Bug in bindings of current release (v0.9.0):
https://github.com/intel-isl/Open3D/issues/1621
Fix for master:
https://github.com/intel-isl/Open3D/pull/1624
Workaround:
tetra_mesh, pt_map = o3d.geometry.TetraMesh.create_from_point_cloud(pcd)
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha, tetra_mesh, pt_map)
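Applied to the snippet from the question, the workaround might look like this (same points and alpha value as above):

import numpy as np
import open3d as o3d

xx = np.asarray([[10, 21, 18], [31, 20, 25], [36, 20, 24], [33, 19, 24],
                 [22, 25, 13], [25, 19, 24], [22, 26, 10], [29, 19, 24]])
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(xx)

# Precompute the tetra mesh and point map, then pass them to the alpha-shape function explicitly.
tetra_mesh, pt_map = o3d.geometry.TetraMesh.create_from_point_cloud(cloud)
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(cloud, 10.0, tetra_mesh, pt_map)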
I have taken over a project which was using the Yocto Fido release from 2015. I need to update it to the latest stable release, Thud.
I have cloned the poky-thud repository, cloned the latest versions of the layers required by our customized layer (such as meta-openembedded), and added our customized layer back in.
Now, I wasn't expecting this to build straight away without issues by any means, but I just don't understand the errors below that I'm getting with the new layers. There are more errors like this relating to "not enough values", but one is posted below.
There seems to be an interface issue in meta/classes/sanity.bbclass. I can't just revert to the older version of meta to solve this, nor do I think it makes sense to modify the code myself. Any ideas why this happens and how to solve it?
ERROR: Execution of event handler 'config_reparse_eventhandler' failed
Traceback (most recent call last):
  File "/home/ubuntu/new-repo/poky-thud/build-bbgw/../meta/classes/sanity.bbclass", line 971, in config_reparse_eventhandler(e=<bb.event.ConfigParsed object at 0x7ff4103bf3c8>):
    python config_reparse_eventhandler() {
    >    sanity_check_conffiles(e.data)
         }
  File "/home/ubuntu/new-repo/poky-thud/build-bbgw/../meta/classes/sanity.bbclass", line 572, in sanity_check_conffiles(d=<bb.data_smart.DataSmart object at 0x7ff4108d35c0>):
        for func in funcs:
    >        conffile, current_version, required_version, func = func.split(":")
             if check_conf_exists(conffile, d) and d.getVar(current_version) is not None and \
ValueError: not enough values to unpack (expected 4, got 1)
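For what it's worth, the failing line in sanity_check_conffiles() unpacks each entry of a colon-separated list into four fields, so any entry that does not contain three colons triggers exactly this error. A minimal sketch of that behavior, with hypothetical entry values:

# Expected entry format: "conffile:current_version_variable:required_version_variable:update_function"
entry = "conf/bblayers.conf:LCONF_VERSION:LAYER_CONF_VERSION:oecore_update_bblayers"
conffile, current_version, required_version, func = entry.split(":")  # 4 parts, unpacks fine

entry = "oecore_update_bblayers"  # a bare function name, e.g. appended in an old-style layer config
conffile, current_version, required_version, func = entry.split(":")
# ValueError: not enough values to unpack (expected 4, got 1)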
I am looking for a straightforward way of applying automatic white balance to an image.
I found some official documentation about a balanceWhite() method: cv::xphoto::WhiteBalancer Class Reference
However, I get an obscure error when I try to call the function as shown in the example.
image = cv2.xphoto_WhiteBalancer.balanceWhite(image)
Raises:
Traceback (most recent call last):
File "C:\Users\Delgan\main.py", line 80, in load
image = cv2.xphoto_WhiteBalancer.balanceWhite(image)
TypeError: descriptor 'balanceWhite' requires a 'cv2.xphoto_WhiteBalancer' object but received a 'numpy.ndarray'
If I then try to use a cv2.xphoto_WhiteBalancer object as required:
balancer = cv2.xphoto_WhiteBalancer()
cv2.xphoto_WhiteBalancer.balanceWhite(balancer, image)
It raises:
Traceback (most recent call last):
File "C:\Users\Delgan\main.py", line 81, in load
cv2.xphoto_WhiteBalancer.balanceWhite(balancer, image)
TypeError: Incorrect type of self (must be 'xphoto_WhiteBalancer' or its derivative)
Did anyone succeed in using this feature with Python 3.6 and OpenCV 3.4?
I also tried the derived classes GrayworldWB, LearningBasedWB and SimpleWB, but the errors are the same.
The answer can be found in the xphoto documentation.
The appropriate methods to create the WB algorithms are createSimpleWB(), createLearningBasedWB() and createGrayworldWB().
Example:
wb = cv2.xphoto.createGrayworldWB()
wb.setSaturationThreshold(0.99)
image = wb.balanceWhite(image)
Here is a sample file in the official OpenCV repository: modules/xphoto/samples/color_balance_benchmark.py
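For completeness, a minimal end-to-end sketch of the same idea (the file names are placeholders):

import cv2

image = cv2.imread("input.jpg")  # placeholder input image

# Create a Grayworld white-balance object and apply it to the image.
wb = cv2.xphoto.createGrayworldWB()
wb.setSaturationThreshold(0.99)
balanced = wb.balanceWhite(image)

cv2.imwrite("balanced.jpg", balanced)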
I am using Python 2.7.6 on Ubuntu 14.04.2 LTS, with numpy 1.11.3 and scikit-learn 0.18.1. But the code below throws the following exception.
Here is the link to the official documentation.
from sknn.mlp import Classifier, Layer

nn = Classifier(
    layers=[
        Layer("Maxout", units=100, pieces=2),
        Layer("Softmax")],
    learning_rate=0.001,
    n_iter=25)
Error:
Traceback (most recent call last):
File "LeadScore.py", line 19, in <module>
Layer("Maxout", units=100, pieces=2),
TypeError: __init__() got an unexpected keyword argument 'pieces'
(Disclaimer: I never used this lib.)
(1) scikit-neuralnetwork does not have much to do with scikit-learn, so you should probably mention the version of scikit-neuralnetwork which you are using.
(2) According to this and this, Maxout was removed from the library. If you search for pieces or maxout within the project sources (search link), no code is found!
(3) The basic problem here seems to be a mismatch between the example and your version. Maybe there was a version with Maxout but without the parameter pieces; I don't know.
(4) My opinion: this library/project does not seem that active anymore (at least compared to Keras and co.); it used PyBrain in the past (dead), and it seems to be using Lasagne (itself somewhat of a dying project) now. Together with these mismatches between the examples and the code, this would give me a lot of headaches, and I would switch libraries.
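If your installed version indeed no longer has Maxout, a network built only from layer types the current docs still list (for example "Rectifier") might look like this; a sketch, not a drop-in replacement for the Maxout layer:

from sknn.mlp import Classifier, Layer

nn = Classifier(
    layers=[
        Layer("Rectifier", units=100),  # no 'pieces' argument; Maxout is not available
        Layer("Softmax")],
    learning_rate=0.001,
    n_iter=25)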