Just learning how to use Meson and want to generate protobuf sources/headers for multiple languages: C++, Python, Java, JavaScript. C++ was simple enough using the generator function in my meson.build file:
project('MesonProtobufExample', 'cpp')
protoc = find_program('protoc', required : true)
deps = dependency('protobuf', required : true)
gen = generator(protoc, \
  output    : ['@BASENAME@.pb.cc', '@BASENAME@.pb.h'],
  arguments : ['--proto_path=@CURRENT_SOURCE_DIR@', '--cpp_out=@BUILD_DIR@', '@INPUT@'])
generated = gen.process('MyExample.proto')
ex = executable('my_example', 'my_example.cpp', generated, dependencies : deps)
Which produces the MyExample.pb.cc and MyExample.pb.h files. I figured Python would be just as easy, but I'm a bit stumped: there's no executable() step for my Python script, since it doesn't need to be compiled. I noticed that Meson (and CMake, it turns out) doesn't actually generate the protobuf files until you call executable(), so I can't just skip this step or the MyExample_pb2.py file will not be generated. I have found no example of using Meson/Python/GPB together after several hours of searching. Shouldn't there be a simple way to 'link' the generated sources to a Python file/module, the way CMake does?
protobuf_generate_python(PROTO_PY MyExample.proto)
# This command causes the protobuf python binding to be generated
add_custom_target(my_example.py ALL DEPENDS ${PROTO_PY})
You can use a trick with custom_target() and a "fake compiler" in the form of the cp or cat tools (on *nix environments, of course; if you want to support Windows you can use a conditional find_program()). Here is the example with cp:
py_gen = generator( ... )
py_generated = py_gen.process('MyExample.proto')
py_proc = custom_target('py_proto',
  command : [ 'cp', '@INPUT@', '@OUTPUT@' ],
  input : py_generated,
  output : 'MyExample_pb2.py',
  build_by_default : true)
I added the build_by_default flag assuming that you need to generate it as part of the standard build process (of course, enabling this target can be conditional too).
I have a Python package with a native extension compiled by Cython. Due to performance needs, the compilation is done with the -march=native and -mtune=native flags. This basically allows the compiler to use any of the available ISA extensions.
Additionally, we keep a non-cythonized, pure-python version of this package. It should be used in environments which are less performance sensitive.
Hence, in total we have two versions published:
Cythonized wheel built for a very specific platform
Pure-python wheel.
Some other packages depend on this package, and some of the machines are a bit different from the one the package was compiled on. Since we used -march=native, we get SIGILL when some ISA extension is missing on the server.
So, in essence, I'd like to somehow make pip disregard the native wheel if the host CPU is not compatible with the wheel.
The native wheel does have the cp37 and platform tags, but I don't see a way to express more granular ISA requirements there. I can always use the --implementation flag for pip, but I wonder if there's a better way for pip to differentiate among different ISAs.
The pip infrastructure doesn't support such granularity.
I think a better approach would be to compile two versions of the Cython extension, with -march=native and without, to install both, and to decide at run time which one should be loaded.
Here is a proof of concept.
The first hoop to jump through: how to check at run time which instructions are supported by the CPU/OS combination. For simplicity we will check for AVX (this SO-post has more details), and I offer only a gcc-specific solution (see also this), called impl_picker.pyx:
cdef extern from *:
    """
    int cpu_supports_avx(void){
        return __builtin_cpu_supports("avx");
    }
    """
    int cpu_supports_avx()

def cpu_has_avx_support():
    return cpu_supports_avx() != 0
The second problem: the pyx-file and the module must have the same name. To avoid code duplication, the actual code is in a pxi-file:
# worker.pxi
cdef extern from *:
    """
    int compiled_with_avx(void){
    #ifdef __AVX__
        return 1;
    #else
        return 0;
    #endif
    }
    """
    int compiled_with_avx()

def compiled_with_avx_support():
    return compiled_with_avx() != 0
As one can see, the function compiled_with_avx_support will yield different results, depending on whether it was compiled with -march=native or not.
And now we can define two versions of the module just by including the actual code from the *.pxi-file. One module called worker_native.pyx:
# distutils: extra_compile_args=["-march=native"]
include "worker.pxi"
and worker_fallback.pyx:
include "worker.pxi"
Building everything, e.g. via cythonize -i -3 *.pyx, it can be used as follows:
from impl_picker import cpu_has_avx_support

# overhead once when imported:
if cpu_has_avx_support():
    import worker_native as worker
else:
    print("using fallback worker")
    import worker_fallback as worker

print("compiled_with_avx_support:", worker.compiled_with_avx_support())
On my machine the above leads to compiled_with_avx_support: True; on older machines the "slower" worker_fallback will be used and the result will be compiled_with_avx_support: False.
The goal of this post is not to give a working setup.py, but just to outline the idea of how one could pick the correct version at run time. Obviously, the setup.py could be quite a bit more complicated: e.g. one would need to compile multiple C files with different compiler settings (see this SO-post for how this could be achieved).
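That said, here is a rough, untested sketch of what such a setup.py might look like, assuming the three source files named above; cythonize() honours the "# distutils: extra_compile_args=..." directive at the top of worker_native.pyx, so each extension gets its own compiler flags:
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="worker",
    ext_modules=cythonize(
        # each .pyx becomes its own extension module; worker_native.pyx
        # carries its -march=native flag via its distutils directive
        ["impl_picker.pyx", "worker_native.pyx", "worker_fallback.pyx"],
        language_level=3,
    ),
)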
Currently I'm using a toolbox which makes use of CoolProp. The REFPROP library is used in the code, e.g.: ... = CP.AbstractState('REFPROP', fluid).
When I run the script I get the following error message: "ValueError: You cannot use the REFPROPMixtureBackend". I don't understand how to access the REFPROP library. I used pip install ctREFPROP and installed "MINI-REFPROP" from NIST.
How do I introduce the REFPROP library within Python? Is it something I have to pay for?
Set the path of the REFPROP library like this:
CoolProp.set_config_string(CoolProp.ALTERNATIVE_REFPROP_PATH, r"C:\Program Files (x86)\MINI-REFPROP")
NOTE:
MINI-REFPROP contains a limited number of pure fluids (water, CO2, R134a, nitrogen, oxygen, methane, propane, helium, hydrogen, and dodecane), along with air as a pseudo-pure fluid.
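A minimal usage sketch, assuming the MINI-REFPROP install path above and water as the fluid (one of the fluids MINI-REFPROP actually ships); adjust the path to your installation:
import CoolProp.CoolProp as CP

# point CoolProp at the (MINI-)REFPROP installation directory
CP.set_config_string(CP.ALTERNATIVE_REFPROP_PATH,
                     r"C:\Program Files (x86)\MINI-REFPROP")

state = CP.AbstractState('REFPROP', 'Water')
state.update(CP.PT_INPUTS, 101325, 300)   # pressure in Pa, temperature in K
print(state.rhomass())                    # density in kg/m^3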
I have some cuda code that uses cooperative groups, and thus requires the -rdc=true flag to compile with nvcc. I would like to call the cuda code from python, so am writing a python interface with python c extensions.
Because I'm including cuda code I had to adapt my setup.py, as described in: Can python distutils compile CUDA code?
This compiles and installs, but as soon as I import my code in python, it segfaults. Removing the -rdc=true flag makes everything work, but forces me to remove any cooperative group code from the cuda kernels (or get a 'cudaCGGetIntrinsicHandle unresolved' error during compilation).
Any way I can adapt my setup.py further to get this to work? Alternatively, is there another way to compile my c extension that allows cuda code (with the rdc flag on)?
Think I sort of figured out the answer. If you generate relocatable device code with nvcc, either nvcc needs to link the object files so device code linking gets handled correctly, or you need to generate a separate object file by running nvcc on all the object files that have relocatable device code with the '--device-link' flag. This extra object file can then be included with all the other object files for an external linker.
I adapted the setup from Can python distutils compile CUDA code? by adding a dummy 'link.cu' file to the end of the sources file list. I also added the cudadevrt library and another set of compiler options for the cuda device-linking step:
ext = Extension('mypythonextension',
                sources=['python_wrapper.cpp', 'file_with_cuda_code.cu', 'link.cu'],
                library_dirs=[CUDA['lib64']],
                libraries=['cudart', 'cudadevrt'],
                runtime_library_dirs=[CUDA['lib64']],
                extra_compile_args={'gcc': [],
                                    'nvcc': ['-arch=sm_70', '-rdc=true', '--compiler-options', "'-fPIC'"],
                                    'nvcclink': ['-arch=sm_70', '--device-link', '--compiler-options', "'-fPIC'"]},
                include_dirs=[numpy_include, CUDA['include'], 'src'])
This then gets picked up in the following way by the function that adapts the compiler calls:
def customize_compiler_for_nvcc(self):
    self.src_extensions.append('.cu')

    # remember the default compiler_so so it can be restored after a cuda file
    default_compiler_so = self.compiler_so
    # track all the object files generated with cuda device code
    self.cuda_object_files = []

    super = self._compile

    def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
        # generate a special object file that will contain linked in
        # relocatable device code
        if src == 'link.cu':
            self.set_executable('compiler_so', CUDA['nvcc'])
            postargs = extra_postargs['nvcclink']
            cc_args = self.cuda_object_files[1:]
            src = self.cuda_object_files[0]
        elif os.path.splitext(src)[1] == '.cu':
            self.set_executable('compiler_so', CUDA['nvcc'])
            postargs = extra_postargs['nvcc']
            self.cuda_object_files.append(obj)
        else:
            postargs = extra_postargs['gcc']

        super(obj, src, ext, cc_args, postargs, pp_opts)
        # restore the default compiler executable
        self.compiler_so = default_compiler_so

    self._compile = _compile
The solution feels a bit hackish because of my lack of distutils knowledge, but it seems to work. :)
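For completeness, here is a hedged sketch of how this customization is usually wired into setup(), following the same pattern as the linked distutils/CUDA answer; the custom_build_ext name is mine, and ext refers to the Extension object defined above:
from distutils.core import setup
from distutils.command.build_ext import build_ext

class custom_build_ext(build_ext):
    def build_extensions(self):
        # swap in the nvcc-aware _compile before any extension is built
        customize_compiler_for_nvcc(self.compiler)
        build_ext.build_extensions(self)

setup(name='mypythonextension',
      ext_modules=[ext],
      cmdclass={'build_ext': custom_build_ext})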
It's the first time I have come across a recipe file written in Python, and it's giving me an error. The error is:
../meta-intel/recipes-rt/images/core-image-rt.bb: Error executing a python function in <code>:
This is a recipe which is coming from the meta-intel branch "[master] intel-vaapi-driver: 2.1.0 -> 2.2.0".
My poky version is "[morty] documentation: Updated manual revision table for 2.2.4 release date."
My BITBAKE version is: "BitBake Build Tool Core version 1.32.0"
The contents of core-image-rt.bb are:
require recipes-core/images/core-image-minimal.bb
# Skip processing of this recipe if linux-intel-rt is not explicitly specified as the
# PREFERRED_PROVIDER for virtual/kernel. This avoids errors when trying
# to build multiple virtual/kernel providers.
python () {
    if d.getVar("PREFERRED_PROVIDER_virtual/kernel") != "linux-intel-rt":
        raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-intel-rt to enable it")
}
DESCRIPTION = "A small image just capable of allowing a device to boot plus a \
real-time test suite and tools appropriate for real-time use."
DEPENDS += "linux-intel-rt"
IMAGE_INSTALL += "rt-tests hwlatdetect"
LICENSE = "MIT"
If you need any additional information please let me know and i'll try and supply it.
I can normally build images on my Ubuntu machine, but I don't believe I have ever had to build an image in which the recipes were written in Python.
You are using an incompatible API for the d.getVar method. In the morty release, the last one with the old calling convention, you still need to provide the boolean second parameter:
...
if d.getVar("PREFERRED_PROVIDER_virtual/kernel", True) != "linux-intel-rt":
...
Please take a look at one of the commits that removed this requirement in later releases.
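Applying the fix to the recipe from the question, the anonymous Python function would read:
python () {
    if d.getVar("PREFERRED_PROVIDER_virtual/kernel", True) != "linux-intel-rt":
        raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-intel-rt to enable it")
}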
I'm attempting to generate Python bindings for a Thrift service definition using Bazel. As far as I've been able to tell, there is no existing .bzl for doing this so I'm somewhat on my own here. I've written .bzl rules in the past but the situation I'm running into in this case is different.
The general issue is that I don't know the names of the output files from the thrift command before the build starts, which means that I can't generate a py_library rule with its srcs attribute set correctly, since I don't have the names of the files. I've tried to follow examples where the output files are known ahead of time by way of generating a .zip file, but the py_library rule only allows .py files as srcs, so this doesn't work.
The only thing I can think of would be to use a repository_rule to generate the code and BUILD files but what I'm trying to accomplish doesn't seem like much of a stretch and should be supported.
Someone has attempted this before.
Discussion here:
https://groups.google.com/forum/#!topic/bazel-dev/g3DVmhVhNZs
Code Here:
https://github.com/wt/bazel_thrift
I would start there.
Edit:
I started there. I did not get as far as I hoped. Bazel is being extended to support having multiple outputs generated by one input, but it does not allow that very easily just yet, per:
groups.google.com/forum/#!topic/bazel-discuss/3WQhHm194yU
Regardless, I did attempt something for C++ thrift bindings, which have the same issue. The Java example got around this by using the source jar as a build source, which won't work for us. To make it work, I passed in the list of source files I cared about that would be created by the thrift generator. I then reported these files as the output that would be generated in the impl. That seems to work. It is a bit nasty in that you have to know what files you are looking for before you build, but it does work. It would also be possible to have a small program read the thrift file and determine the output files it would make. That would be nicer, but I don't have the time. Plus, the current approach is nice in that it explicitly defines what files you are looking for thrift to generate, which makes the BUILD file a little bit easier to understand for a newbie like me.
First pass at some code, maybe I will clean it up and submit it as a patch (maybe not):
###########
# CPP gen
###########
# Create Generated cpp source files from thrift idl files.
#
def _gen_thrift_cc_src_impl(ctx):
    out = ctx.outputs.outs
    if not out:
        # empty set
        # nothing to do, no inputs to build
        return DefaultInfo(files=depset(out))

    # Used dir(out[0]) to see what
    # we had available in the object.
    # dirname attribute tells us the directory
    # we should be putting stuff in, works nicely.
    # ctx.genfile_dir is not the output directory
    # when called as an external repository
    target_genfiles_root = out[0].dirname

    thrift_includes_root = "/".join(
        [target_genfiles_root, "thrift_includes"])
    gen_cpp_dir = "/".join([target_genfiles_root, "."])

    commands = []
    commands.append(_mkdir_command_string(target_genfiles_root))
    commands.append(_mkdir_command_string(thrift_includes_root))

    thrift_lib_archive_files = ctx.attr.thrift_library._transitive_archive_files
    for f in thrift_lib_archive_files:
        commands.append(
            _tar_extract_command_string(f.path, thrift_includes_root))

    commands.append(_mkdir_command_string(gen_cpp_dir))

    thrift_lib_srcs = ctx.attr.thrift_library.srcs
    for src in thrift_lib_srcs:
        commands.append(_thrift_cc_compile_command_string(
            thrift_includes_root, gen_cpp_dir, src))

    inputs = (
        list(thrift_lib_archive_files) + thrift_lib_srcs)

    ctx.action(
        inputs = inputs,
        outputs = out,
        progress_message = "Generating CPP sources from thrift archive %s" % target_genfiles_root,
        command = " && ".join(commands),
    )
    return DefaultInfo(files=depset(out))

thrift_cc_gen_src = rule(
    _gen_thrift_cc_src_impl,
    attrs = {
        "thrift_library": attr.label(
            mandatory=True, providers=['srcs', '_transitive_archive_files']),
        "outs": attr.output_list(mandatory=True, non_empty=True),
    },
    output_to_genfiles = True,
)
# wraps cc_library to generate a library from one or more .thrift files
# provided as a thrift_library bundle.
#
# Generates all src and hdr files needed, but you must specify the expected
# files. This is a bug in bazel: https://groups.google.com/forum/#!topic/bazel-discuss/3WQhHm194yU
#
# Instead of src and hdrs, requires: cpp_srcs and cpp_hdrs. These are required.
#
# Takes:
# name: The library name, like cc_library
#
# thrift_library: The library of source .thrift files from which our
#   code will be built.
#
# cpp_srcs: The expected source that will be generated and built. Passed to
# cc_library as src.
#
# cpp_hdrs: The expected header files that will be generated. Passed to
# cc_library as hdrs.
#
# Rest of options are documented in native.cc_library
#
def thrift_cc_library(name, thrift_library,
                      cpp_srcs=[], cpp_hdrs=[],
                      build_skeletons=False,
                      deps=[], alwayslink=0, copts=[],
                      defines=[], include_prefix=None,
                      includes=[], linkopts=[],
                      linkstatic=0, nocopts=None,
                      strip_include_prefix=None,
                      textual_hdrs=[],
                      visibility=None):
    # from our thrift_library tarball source bundle,
    # create a generated cpp source directory.
    outs = []
    for src in cpp_srcs:
        outs.append("//:" + src)
    for hdr in cpp_hdrs:
        outs.append("//:" + hdr)

    thrift_cc_gen_src(
        name = name + 'cc_gen_src',
        thrift_library = thrift_library,
        outs = outs,
    )

    # Then make the library for the given name.
    native.cc_library(
        name = name,
        deps = deps,
        srcs = cpp_srcs,
        hdrs = cpp_hdrs,
        alwayslink = alwayslink,
        copts = copts,
        defines = defines,
        include_prefix = include_prefix,
        includes = includes,
        linkopts = linkopts,
        linkstatic = linkstatic,
        nocopts = nocopts,
        strip_include_prefix = strip_include_prefix,
        textual_hdrs = textual_hdrs,
        visibility = visibility,
    )
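The three _*_command_string helpers referenced in the rule above are not included in the original post. A purely hypothetical sketch of what they might look like (simple shell-command builders; the exact thrift invocation in particular is a guess):
def _mkdir_command_string(path):
    # shell command that creates the directory (and any missing parents)
    return "mkdir -p " + path

def _tar_extract_command_string(archive_path, dest_dir):
    # shell command that unpacks a thrift_library source archive into dest_dir
    return "tar -xf " + archive_path + " -C " + dest_dir

def _thrift_cc_compile_command_string(includes_root, gen_dir, src):
    # shell command that runs the thrift compiler for a single .thrift file
    return "thrift -I " + includes_root + " --gen cpp -out " + gen_dir + " " + src.path
And a hypothetical BUILD usage, assuming a thrift_library target and the file names the thrift compiler is expected to emit:
load("//tools:thrift.bzl", "thrift_cc_library")  # hypothetical location of this .bzl

thrift_cc_library(
    name = "example_thrift_cc",
    thrift_library = "//thrift:example",  # hypothetical thrift_library target
    cpp_srcs = ["Example.cpp", "Example_types.cpp"],
    cpp_hdrs = ["Example.h", "Example_types.h"],
)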