I would like to learn how to add a new op to TensorFlow, so I am following the given tutorial. I made a folder named user_ops, created a "zero_out.cc" file, and copied the code given in the tutorial. When I try to compile the op into a dynamic library with g++, the following errors appear:
zero_out.cc: In lambda function:
zero_out.cc:10:14: error: ‘Status’ has not been declared
return Status::OK();
^
zero_out.cc: At global scope:
zero_out.cc:11:6: error: invalid user-defined conversion from ‘<lambda(tensorflow::shape_inference::InferenceContext*)>’ to ‘tensorflow::Status (*)(tensorflow::shape_inference::InferenceContext*)’ [-fpermissive]
});
^
zero_out.cc:8:70: note: candidate is: <lambda(tensorflow::shape_inference::InferenceContext*)>::operator void (*)(tensorflow::shape_inference::InferenceContext*)() const
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
^
zero_out.cc:8:70: note: no known conversion from ‘void (*)(tensorflow::shape_inference::InferenceContext*)’ to ‘tensorflow::Status (*)(tensorflow::shape_inference::InferenceContext*)’
In file included from zero_out.cc:1:0:
/usr/local/lib/python2.7/dist-packages/tensorflow/include/tensorflow/core/framework/op.h:252:30: note: initializing argument 1 of ‘tensorflow::register_op::OpDefBuilderWrapper& tensorflow::register_op::OpDefBuilderWrapper::SetShapeFn(tensorflow::Status (*)(tensorflow::shape_inference::InferenceContext*))’
OpDefBuilderWrapper& SetShapeFn(
Why is that happening? How could I fix it?
Assuming your only problem is the undeclared Status type -- and copying and pasting the tutorial code otherwise works just fine -- you need to either move the using namespace tensorflow directive to before the first use of Status, or fully qualify it (as in return tensorflow::Status::OK()).
For example, the REGISTER_OP section could read as follows, if you did the templated version:
REGISTER_OP("ZeroOut")
.Attr("T: {float, int32}")
.Input("to_zero: T")
.Output("zeroed: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->input(0));
return tensorflow::Status::OK();
});
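If you prefer the other fix instead, here is a minimal sketch of the registration with the using-directive moved up before the first use of Status (it assumes the tutorial's two headers and a TensorFlow version where Status::OK() exists):

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"

using namespace tensorflow;  // must appear before Status is used below

REGISTER_OP("ZeroOut")
    .Attr("T: {float, int32}")
    .Input("to_zero: T")
    .Output("zeroed: T")
    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));
      return Status::OK();  // unqualified Status now resolves to tensorflow::Status
    });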
It seems to me that the TensorFlow tutorial doesn't have the right code.
So I followed the code of this tutorial and it is working perfectly!
I have no clue what it says!
I was trying to compile this example.cpp from a pybind11 tutorial called pybind11_examples on GitHub:
#include <pybind11/pybind11.h>
#include <pybind11/eigen.h>
#include <Eigen/LU>
#include <iostream>
// ----------------
// regular C++ code
// ----------------
Eigen::MatrixXd mul(const Eigen::MatrixXd &xs, double fac)
{
  std::cout << "Double" << std::endl;
  return fac*xs;
}

Eigen::MatrixXi mul(const Eigen::MatrixXi &xs, int fac)
{
  std::cout << "Int" << std::endl;
  return fac*xs;
}
// ----------------
// Python interface
// ----------------
namespace py = pybind11;
PYBIND11_MODULE(example,m)
{
  m.doc() = "pybind11 example plugin";
  // N.B. the order here is crucial, in the reversed order every "int" is converted to a "double"
  m.def("mul", py::overload_cast<const Eigen::MatrixXi &,int   >(&mul) );
  m.def("mul", py::overload_cast<const Eigen::MatrixXd &,double>(&mul) );
}
and the corresponding CMakeLists.txt file is
cmake_minimum_required(VERSION 2.8.12)
project(example)
set (CMAKE_CXX_STANDARD 14)
find_package( PkgConfig )
pkg_check_modules( EIGEN3 REQUIRED eigen3 )
include_directories( ${EIGEN3_INCLUDE_DIRS} )
add_subdirectory(pybind11)
pybind11_add_module(example example.cpp)
One confusion I have is why we do not need to link Eigen against example. Is it because Eigen is a header-only library, so include_directories is enough? Then what should I do for non-header-only libraries?
Thanks in advance!
EDIT: To answer your question regarding what to do with non-header-only libraries:
Pybind11's pybind11_add_module works the same way as CMake's add_executable or add_library: it defines a target (the first argument) that you can then link against. In your case:
#Rest of the CMakeLists.txt...
pybind11_add_module(example example.cpp)
target_link_libraries(example my_library)
#Rest of the CMakeLists.txt...
As was already posted by @user253751 in the comments, Eigen is a header-only library. You can see for yourself on their homepage:
Requirements
Eigen doesn't have any dependencies other than the C++ standard library.
We use the CMake build system, but only to build the documentation and unit-tests, and to automate installation. If you just want to use Eigen, you can use the header files right away. There is no binary library to link to, and no configured header file. Eigen is a pure template library defined in the headers.
More on the target_link_libraries problem: I tried the following CMakeLists.txt
cmake_minimum_required(VERSION 2.8.12)
project(example)
set (CMAKE_CXX_STANDARD 14)
find_package(Eigen3 3.3)
add_subdirectory(pybind11)
pybind11_add_module(example example.cpp)
target_link_libraries(example Eigen3::Eigen)
And it will give me an error:
CMake Error at CMakeLists.txt:14 (target_link_libraries):
The keyword signature for target_link_libraries has already been used with
the target "example". All uses of target_link_libraries with a target must
be either all-keyword or all-plain.
The uses of the keyword signature are here:
* pybind11/tools/pybind11Tools.cmake:179 (target_link_libraries)
* pybind11/tools/pybind11Tools.cmake:211 (target_link_libraries)
This is because the two lines mentioned above are calling target_link_libraries as
179: target_link_libraries(${target_name} PRIVATE pybind11::module)
211: target_link_libraries(${target_name} PRIVATE pybind11::lto)
This causes an error because, as mentioned in the error message:
all uses of target_link_libraries with a target must be either all-keyword or all-plain.
all-keyword refers to including PRIVATE|PUBLIC|INTERFACE in the target_link_libraries call as in the above two lines.
all-plain refers to not including PRIVATE|PUBLIC|INTERFACE in the target_link_libraries command, for example:
target_link_libraries(example Eigen3::Eigen)
We must pick one of the above ways of calling target_link_libraries for a given target. Since pybind11 already made that choice for us with a keyword call, we must do the same. So if we use
target_link_libraries(example PRIVATE Eigen3::Eigen)
it will successfully compile. If we use PUBLIC instead of PRIVATE, it will also successfully compile.
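For reference, a complete CMakeLists.txt along those lines might look like the following sketch (the same files as above, with only the linking changed to the keyword form):

cmake_minimum_required(VERSION 2.8.12)
project(example)

set(CMAKE_CXX_STANDARD 14)

# Eigen 3.3+ exports the Eigen3::Eigen imported target from its CMake config,
# so no separate include_directories call is needed
find_package(Eigen3 3.3)

add_subdirectory(pybind11)
pybind11_add_module(example example.cpp)

# keyword signature, consistent with pybind11's own target_link_libraries calls
target_link_libraries(example PRIVATE Eigen3::Eigen)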
I'm quite new to CUDA/C++ programming and I'm stuck on passing input parameters to the CUDA kernel from the TensorFlow C++ API.
First off I register the following Op:
REGISTER_OP("Op")
.Attr("T: {float, int64}")
.Input("in: T")
.Input("angles: T")
.Output("out: T");
Afterwards I want to pass the second input (angles) through to the CPU/GPU kernel. Somehow, the following implementation works fine on the CPU but throws an error in Python when I run it on my GPU...
Python Error message:
Process finished with exit code -1073741819 (0xC0000005)
This is how I'm trying to access the value of the input. Note that the input for "angles" is always a single value (float or int):
void Compute(OpKernelContext* context) override {
  ...
  const Tensor &input_angles = context->input(1);
  auto angles_flat = input_angles.flat<float>();
  const float N = angles_flat(0);
  ...
}
Calling the CPU/GPU kernels as follows:
...
Functor<Device, T>()(
    context->eigen_device<Device>(),
    static_cast<int>(input_tensor.NumElements()),
    input_tensor.flat<T>().data(),
    output_tensor->flat<T>().data(),
    N);
...
As I said before, running this op on the CPU works just how I want it to, but when I run it on the GPU I always get the above-mentioned Python error... Does someone know how to fix this? I can only guess that I'm trying to access a wrong address on the GPU with angles_flat(0)... So if anybody can help me out here it would be highly appreciated!!
So I have a C program that I am running from Python, but I am getting a segmentation fault error. When I run the C program alone, it runs fine. The C program interfaces with a fingerprint sensor using the fprint library.
#include <poll.h>
#include <stdlib.h>
#include <sys/time.h>
#include <stdio.h>
#include <libfprint/fprint.h>
int main(){
    struct fp_dscv_dev **devices;
    struct fp_dev *device;
    struct fp_img **img;
    int r;

    r=fp_init();
    if(r<0){
        printf("Error");
        return 1;
    }
    devices=fp_discover_devs();
    if(devices){
        device=fp_dev_open(*devices);
        fp_dscv_devs_free(devices);
    }
    if(device==NULL){
        printf("NO Device\n");
        return 1;
    }else{
        printf("Yes\n");
    }
    int caps;
    caps=fp_dev_img_capture(device,0,img);
    printf("bloody status %i \n",caps);

    //save the fingerprint image to file. ** this is the block that causes the segmentation fault.
    int imrstx;
    imrstx=fp_img_save_to_file(*img,"enrolledx.pgm");
    fp_img_free(*img);
    fp_exit();
    return 0;
}
The Python code:
from ctypes import *
so_file = "/home/arkounts/Desktop/pythonsdk/capture.so"
my_functions = CDLL(so_file)
a=my_functions.main()
print(a)
print("Done")
The capture.so is built and accessed in Python, but when I call it from Python, I get a segmentation fault. What could be my problem?
Thanks a lot.
Although I am unfamiliar with libfprint, after taking a look at your code and comparing it with the documentation, I see two issues that can both cause a segmentation fault:
First issue:
According to the documentation of the function fp_discover_devs, NULL is returned on error. On success, a NULL-terminated list is returned, which may be empty.
In the following code, you check for failure/success, but don't check for an empty list:
devices=fp_discover_devs();
if(devices){
device=fp_dev_open(*devices);
fp_dscv_devs_free(devices);
}
If devices is non-NULL, but empty, then devices[0] (which is equivalent to *devices) is NULL. In that case, you pass this NULL pointer to fp_dev_open. This may cause a segmentation fault.
I don't think that this is the reason for your segmentation fault though, because this error in your code would only be triggered if an empty list were returned.
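Still, a guarded version of that block is easy to write; here is a sketch in which only the empty-list check is new:

devices = fp_discover_devs();
if (devices && devices[0]) {            /* non-NULL and not empty */
    device = fp_dev_open(devices[0]);   /* same as fp_dev_open(*devices) */
    fp_dscv_devs_free(devices);
} else {
    printf("NO Device\n");
    return 1;
}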
Second issue:
The last parameter of fp_dev_img_capture should be a pointer to an allocated variable of type struct fp_img *. This tells the function the address of the variable that it should write to. However, with the code
struct fp_img **img;
[...]
caps=fp_dev_img_capture(device,0,img);
you are passing that function a wild pointer, because img does not point to any valid object. This can cause a segmentation fault as soon as the wild pointer is dereferenced by the function or cause some other kind of undefined behavior, such as overwriting other variables in your program.
I suggest you write the following code instead:
struct fp_img *img;
[...]
caps=fp_dev_img_capture(device,0,&img);
Now the third parameter is pointing to a valid object (to the variable img).
Since img is now a single pointer and not a double pointer, you must pass img instead of *img to the functions fp_img_save_to_file and fp_img_free.
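Putting the second fix together, the capture and save part of main() would then read (only the changed lines are shown):

struct fp_img *img;                                    /* single pointer now */
[...]
caps = fp_dev_img_capture(device, 0, &img);            /* pass the address of img */
[...]
imrstx = fp_img_save_to_file(img, "enrolledx.pgm");    /* img instead of *img */
fp_img_free(img);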
This second issue is probably the reason for your segmentation fault. It seems that you were just "lucky" that your program did not segfault as a standalone program.
I was making a route optimiser, a web app developed in Django, which was working fine, but due to some changes I guess I wrecked my route optimiser code. It is showing me the following errors:
File "/home/chirag/chirag/smartlogistics/lib/python3.7/site-packages/ortools/constraint_solver/pywrapcp.py", line 3191, in __init__
this = _pywrapcp.new_RoutingModel(*args)
NotImplementedError: Wrong number or type of arguments for overloaded
function 'new_RoutingModel'.
Possible C/C++ prototypes are:
operations_research::RoutingModel::RoutingModel(operations_research::RoutingIndexManager const &)
operations_research::RoutingModel::RoutingModel(operations_research::RoutingIndexManager const &,operations_research::RoutingModelParameters const &)
I was unable to figure out the problem; I'm new to Django. Any help is appreciated.
Thank you.
Somehow, you picked the 7.0 beta release, which breaks the API.
Look at:
https://github.com/google/or-tools/tree/master/ortools/constraint_solver/doc
And
https://github.com/google/or-tools/releases/tag/v7.0-beta.1
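For orientation, this is roughly what the 7.x construction looks like in Python, mirroring the C++ prototypes in your error message (num_locations, num_vehicles and depot_index are placeholders, not taken from your code):

from ortools.constraint_solver import pywrapcp

# placeholder problem sizes, for illustration only
num_locations, num_vehicles, depot_index = 10, 1, 0

# or-tools 7.x: build an index manager first, then hand it to RoutingModel
manager = pywrapcp.RoutingIndexManager(num_locations, num_vehicles, depot_index)
routing = pywrapcp.RoutingModel(manager)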
I have been trying to pip install a package for a while, and it keeps returning the following error:
Error compiling Cython file:
------------------------------------------------------------
...
return compare >= 0
cdef inline bint cmp(x, y):
return (x > y) - (x < y)
cdef Strand parse_strand(str strand):
^
------------------------------------------------------------
wrenlab/genome/types.pyx:35:5: 'Strand' is not a type identifier
...
#error Do not use this file, it is the result of a failed Cython compilation.
^
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
I was able to get it working on some computers but not others.
Does anyone know where the best place to start would be with this problem? It appears to be an issue with Cython or gcc, but I have installed the proper version requested in the source code.
I had a look at the package https://pypi.python.org/pypi/wrenlab/0.1.2 and the code does not define Strand, nor does it import or include code that would. It is weird that it works at all on some computers. Contact the authors of the code to inquire about its status (beta/working/version of Python, etc.).