bazel py_proto_library is not defined - python

BUILD:
cc_proto_library(
    name = "yd_fieldoptions_cc",
    deps = [":yd_fieldoptions"],
)

py_proto_library(
    name = "yd_fieldoptions_py",
    deps = [":yd_fieldoptions"],
)

proto_library(
    name = "yd_fieldoptions",
    srcs = ["yd_fieldoptions.proto"],
    deps = [
        "@com_google_protobuf//:descriptor_proto",
    ],
)
Error:

bazel build -s //field_options:yd_fieldoptions_py
BUILD:11:1: name 'py_proto_library' is not defined (did you mean 'cc_proto_library'?)

Versions:
Build label: 0.14.0- (@non-git)
protobuf version: 3.5.0

You might be thinking of this rule: https://github.com/google/protobuf/blob/master/protobuf.bzl
In order to use it, you have to load the .bzl file in your BUILD file: https://docs.bazel.build/versions/master/skylark/concepts.html#loading-an-extension
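For example, assuming Protobuf is mapped into your WORKSPACE as @com_google_protobuf (as in the BUILD file above), the load line would look something like:

load("@com_google_protobuf//:protobuf.bzl", "py_proto_library")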

The implementation of py_proto_library has some hacks around it: some of the toolchain/library references are only valid inside the Protobuf repository, so in order to use py_proto_library you have to manually bind those references in your own repository.
I have a very rough example that demonstrates how to bind some (but definitely not all) of those references to make py_proto_library work in your repository; you can check out the example here.
It is a very rough implementation, and although it works, I have no idea whether it will hold up in more complex scenarios.
You have been warned.
However, if you really, really want to make things work, you can invoke the Protobuf compiler directly, then export the generated Python file via a py_library.
This is guaranteed to work, though it requires more code.
# This generates the Protobuf Python code using the protoc compiler.
genrule(
    name = "yd_fieldoptions_compiled_python",
    srcs = ["yd_fieldoptions.proto"],
    outs = ["yd_fieldoptions_pb2.py"],
    cmd = "$(location @com_google_protobuf//:protoc) -I=proto --python_out=$(@D) $<",
    tools = ["@com_google_protobuf//:protoc"],
)
# Set up a py_library target to be used by your code.
py_library(
    name = "yd_fieldoptions_py",
    srcs = [":yd_fieldoptions_compiled_python"],
    deps = [
        "@protobuf_python",
        "@pypi_six//:six",
    ],
)
Also, you have to include the following in your WORKSPACE file.
These rules download the necessary dependencies; you may have to change the URLs as well as the Protobuf-Python version according to your needs.
new_http_archive(
    name = "pypi_six",
    url = "https://pypi.python.org/packages/16/d8/bc6316cf98419719bd59c91742194c111b6f2e85abac88e496adefaf7afe/six-1.11.0.tar.gz",
    build_file_content = """
py_library(
    name = "six",
    srcs = ["six.py"],
    visibility = ["//visibility:public"],
)
""",
    strip_prefix = "six-1.11.0",
)
new_http_archive(
    name = "protobuf_python",
    url = "https://pypi.python.org/packages/14/03/ff5279abda7b46e9538bfb1411d42831b7e65c460d73831ed2445649bc02/protobuf-3.5.1.tar.gz",
    build_file_content = """
py_library(
    name = "protobuf_python",
    srcs = glob(["google/protobuf/**/*.py"]),
    visibility = ["//visibility:public"],
    deps = [
        "@pypi_six//:six",
    ],
)
""",
    strip_prefix = "protobuf-3.5.1",
)
BTW, the code included above does not have the gRPC plugin wired in.
If you are looking for a gRPC-enabled Protobuf library, you have to pull in the gRPC repository and then add the necessary plugin to the corresponding rules.

Related

how to automatically generate mypy stubs using pyproject.toml

i have a package and in it i use pyproject.toml, and for proper typing i need stubs generated. it's kind of annoying to generate them manually every time, so is there a way to do it automatically through pyproject.toml? i just want it to run stubgen and that's it, just so mypy sees the stubs; it's annoying seeing linters throw warnings and having to keep adding # type: ignore. here's what i have as of now (i rarely do this, so it's probably not that good):
[build-system]
requires = ["setuptools", "setuptools-scm"]
build-backend = "setuptools.build_meta"

[project]
name = "<...>"
authors = [
    {name = "<...>", email = "<...>"},
]
description = "<...>"
readme = "README"
requires-python = ">=3.10"
keywords = ["<...>"]
license = {text = "GNU General Public License v3 or later (GPLv3+)"}
classifiers = [
    "License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
    "Programming Language :: Python :: 3",
]
dependencies = [
    "<...>",
]
dynamic = ["version"]

[tool.setuptools]
include-package-data = true

[tool.setuptools.package-data]
<...> = ["*.pyi"]

[tool.pyright]
pythonVersion = "3.10"
exclude = [
    "venv",
    "**/node_modules",
    "**/__pycache__",
    ".git",
]
include = ["src", "scripts"]
venv = "venv"
stubPath = "src/stubs"
typeCheckingMode = "strict"
useLibraryCodeForTypes = true
reportMissingTypeStubs = true

[tool.mypy]
exclude = [
    "^venv/.*",
    "^node_modules/.*",
    "^__pycache__/.*",
]
thanks in advance for the answers
just make a shellscript and add it to pyproject.toml as a script
:+1:
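To flesh that out a bit, here is one way to do it. This is only a sketch: the scripts/gen_stubs.py location, the src/mypackage path, and the gen-stubs name are placeholders for your own layout. The wrapper just shells out to mypy's stubgen and writes the .pyi files into the stubPath configured above:

# scripts/gen_stubs.py (hypothetical location)
import subprocess
import sys


def main() -> None:
    # stubgen ships with mypy; -o writes the generated .pyi files
    # into the stub path that pyright/mypy are configured to read.
    result = subprocess.run(["stubgen", "src/mypackage", "-o", "src/stubs"])
    sys.exit(result.returncode)


if __name__ == "__main__":
    main()

Then expose it in pyproject.toml, for example:

[project.scripts]
gen-stubs = "scripts.gen_stubs:main"

After an editable install (pip install -e .) you can run gen-stubs whenever the stubs need refreshing.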

In Nixpkgs, what makes a python derivation an application rather than a library?

I'm trying to package a Python application for Nix, but I'm finding that the majority of the documentation assumes I want to package a library.
To work this out, I looked at the Rednotebook example (not for any particular reason other than I happened to know it was written in Python), which can be found in python-packages; as that file is huge, here is the relevant part:
redNotebook = buildPythonPackage rec {
  name = "rednotebook-1.8.1";

  src = pkgs.fetchurl {
    url = "mirror://sourceforge/rednotebook/${name}.tar.gz";
    sha256 = "00b7s4xpqpxsbzjvjx9qsx5d84m9pvn383c5di1nsfh35pig0rzn";
  };

  # no tests available
  doCheck = false;

  propagatedBuildInputs = with self; [ pygtk pywebkitgtk pyyaml chardet ];

  meta = {
    homepage = http://rednotebook.sourceforge.net/index.html;
    description = "A modern journal that includes a calendar navigation, customizable templates, export functionality and word clouds";
    license = licenses.gpl2;
    maintainers = with maintainers; [ tstrobel ];
  };
};
My derivation looks like this:
{ pkgs ? import <nixpkgs> {} }:
let
  requirements = import ./nix/requirements.nix { inherit pkgs; };
in
pkgs.python35Packages.buildPythonPackage rec {
  name = "package-name";
  version = "1.0.0";
  namePrefix = "";
  src = ./.;
  doCheck = false;
  propagatedBuildInputs = builtins.attrValues requirements.packages;
}
requirements.nix is the output of pypi2nix, and requirements.packages has the type "list of derivations". When I cd into the resulting store path for Rednotebook, there is a /bin directory with some wrapper scripts, but the store path for my app contains just a lib directory and no /bin.
How do I tell Nixpkgs that I have an application?
There are buildPythonPackage and buildPythonApplication functions, but at the end of the day they both call mkPythonDerivation.
A Python application with a simple Nix package you can reference is Ranger, which uses buildPythonApplication; the resulting derivation contains a wrapper in /nix/store/PACKAGE/bin/.
I was experimenting with building and installing a sample service written in Python and ended up using buildPythonApplication in my installer.
Important points:
If you are leveraging setuptools to distribute your app, you need to have the entry_points value in your setup() call:
from setuptools import setup

setup(
    ...
    entry_points = {
        'console_scripts': ['sample_module = src.app:main']
    },
)
Here src is the source-code folder, app is the entry-point module, and main is the name of the entry-point function.
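For illustration, a hypothetical src/app.py matching that entry point could be as small as:

# src/app.py (hypothetical; matches 'sample_module = src.app:main' above)
def main():
    print("sample service starting")


if __name__ == "__main__":
    main()

After installation, a sample_module wrapper script appears in the derivation's bin/ directory.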
My whole build configuration was:
# sample.nix
with (import <nixpkgs> {});

python37.pkgs.buildPythonApplication rec {
  pname = "sample_service";
  version = "0.0.3";
  src = ./sample_package;
  meta = with lib; {
    homepage = "https://www.example.com";
    description = "Sample Python Service";
  };
}
sample_package is the folder containing the src folder and setup.py, located at the same level as the sample.nix file above.
At this point, running nix-build sample.nix will build and install the package. However, to make it available on your PATH you need to capture the resulting nix store path and run:
nix-env -i <nix_store_path>
buildPythonPackage is to be used for libraries, that is, when you want it to expose its modules in site-packages. buildPythonApplication is for applications that just happen to be written in Python. In that case, when you include such a derivation in another derivation, you don't want to expose the libraries.
This separation is important because it may happen that a Python 3 environment calls a tool that is written in Python 2 and vice versa.
Note that at the time of writing this issue hasn't been completely solved yet, but at least python.buildEnv or python.withPackages will not include applications along with their dependencies.

Is there any way I can override folder permissions of a python package through setup.py/etc?

I actually tried following this guide, but it did not work for me (I believe it's only for Python 2; I got a ton of errors, tried fixing them, and it still wasn't working, and I'm trying to do this for Python 3):
set file permissions in setup.py file
So basically I have a folder at, let's say:
/usr/lib/python3.6/site-packages/XYZ
I want to give XYZ read & write permissions, since the current permissions only give the root user write access. In my documentation I can require each user who installs my program through pip to chmod the directory themselves, but I'm looking for a more convenient way so no one has to do that.
Here's my setup.py in case anyone wants to see it:
from distutils.core import setup

setup(
    name = 'graphite-analytics',
    packages = ['graphite'],
    package_data = {
        'graphite': ['graphite.py', 'capture.j3', 'templates/css/styles.css', 'templates/js/Chart.PieceLabel.js', 'templates/html/render.html', 'templates/fonts/Antro_Vectra.otf', 'templates/images/Calendar-icon.png'],
    },
    version = '0.1.2.13',
    description = 'Create a print-out template for your google analytics data',
    author = 'NAME REDACTED',
    author_email = 'EMAIL REDACTED',
    url = 'https://github.com/ARM-open/Graphite',
    include_package_data = True,
    zip_safe = True,
    classifiers = [],
    keywords = ['Google analytics', 'analytics', 'templates'],
    install_requires = ['Click', 'google-api-python-client', 'jinja2'],
    entry_points = {'console_scripts': [
        'graphite-analytics = graphite.graphite:main'
    ]},
)
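For what it's worth, the approach from the linked guide (a custom install command that chmods the installed package directory) can be adapted to Python 3 along these lines. This is a rough, untested sketch: the 'graphite' directory name and the wide-open mode are placeholders to adjust, and it requires setuptools rather than distutils:

import os
import stat
from setuptools import setup
from setuptools.command.install import install


class InstallWithPermissions(install):
    """Run the normal install, then loosen permissions on the package dir."""

    def run(self):
        install.run(self)
        # self.install_lib is the site-packages directory we installed into;
        # 'graphite' is this package's directory (placeholder).
        pkg_dir = os.path.join(self.install_lib, 'graphite')
        mode = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO  # rwxrwxrwx
        for root, dirs, files in os.walk(pkg_dir):
            for name in dirs + files:
                os.chmod(os.path.join(root, name), mode)
        os.chmod(pkg_dir, mode)


setup(
    # ... the arguments shown above ...
    cmdclass = {'install': InstallWithPermissions},
)

Note this only runs for source installs; pip installing from a wheel does not execute setup.py, so a post-install chmod like this is not guaranteed to run.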

Bazel build package not found

I'm trying to run TensorFlow code downloaded from GitHub (tensorflow/models/adversarial_text), but I'm running into a Bazel build error. The error looks quite straightforward, but as I haven't used Bazel much before, I'd appreciate any ideas or suggestions. The error is below:
ERROR: /home/dasgupta/adversarial_text/BUILD:60:1: no such package 'adversarial_text/data': BUILD file not found on package path and referenced by '//:inputs'.
Inside adversarial_text/BUILD, line 60 (which the error above points at) is the following rule:
py_library(
    name = "inputs",
    srcs = ["inputs.py"],
    deps = [
        # tensorflow dep,
        "//adversarial_text/data:data_utils",
    ],
)
But I see that there is a directory called "adversarial_text/data", and inside adversarial_text/data/BUILD there is this rule:
py_library(
    name = "data_utils",
    srcs = ["data_utils.py"],
    deps = [
        # tensorflow dep,
    ],
)
I tried adding

visibility = ["//adversarial_text:__pkg__"],

right after the deps attribute of data_utils, but that didn't solve the problem.
Any ideas what I might be missing here, or what I might need to set or change (environment variables?) to get this to work?
My config: bash on Ubuntu 16.04, TensorFlow 1.2, Bazel 0.5, Python 2.7.
The visibility has to be //:__pkg__ since adversarial_text is the root of your workspace. And you should try building //:inputs.
So to summarize, this is what I did to make it work after cloning the project.

1. Create a WORKSPACE file in adversarial_text/:

touch WORKSPACE
2. Edit the deps in adversarial_text/BUILD:
py_library(
    name = "inputs",
    srcs = ["inputs.py"],
    deps = [
        # tensorflow dep,
        "//data:data_utils",
    ],
)

py_test(
    name = "graphs_test",
    size = "large",
    srcs = ["graphs_test.py"],
    deps = [
        ":graphs",
        # tensorflow dep,
        "//data:data_utils",
    ],
)
3. Add visibility for data_utils in adversarial_text/data/BUILD:
py_library(
    name = "data_utils",
    srcs = ["data_utils.py"],
    deps = [
        # tensorflow dep,
    ],
    visibility = ["//:__pkg__"],
)
This should be fixed now; running the code no longer requires Bazel as of https://github.com/tensorflow/models/pull/3414

SCons: portable and cleaner way to build debug and release together

SConstruct: this file builds the library for both the debug and the release configuration.
variant_dir is set to build/debug for the debug build and to build/release for the release build.
import os

env = Environment()
releaseEnv = env.Clone(CCFLAGS = ['-O3'])
debugEnv = env.Clone(CCFLAGS = ['-O0', '-g'])
debugDirPath = os.path.join('build', 'debug')      # build/debug
releaseDirPath = os.path.join('build', 'release')  # build/release

if os.name == 'nt':
    # Windows-specific flags
    releaseEnv.Replace(CCFLAGS = ['EHsc'])
    debugEnv.Replace(CCFLAGS = ['EHsc', 'Zi', 'MTd'])

SConscript(dirs = 'src', name = 'SConscript', exports = {'env': releaseEnv}, variant_dir = releaseDirPath, duplicate = 0)
SConscript(dirs = 'src', name = 'SConscript', exports = {'env': debugEnv}, variant_dir = debugDirPath, duplicate = 0)
SConscript (present inside the src directory, which contains the .cpp sources):
import os

Import('env')
src_list = Glob(os.path.join(Dir('#').abspath, 'src', '*.cpp'))
env.SharedLibrary(target = 'sum', source = src_list)
env.StaticLibrary(target = 'sum', source = src_list)
The directory structure is like this:

root_dir
    SConstruct
    src
        SConscript
        sum.cpp
        mul.cpp
1) Running scons from root_dir produces the following message, and although it is reported as a warning, the build stops and the library does not get created:

scons: *** Two environments with different actions were specified for the same target: /home/xyz/temp/src/mul.os
File "/home/xyz/temp/src/SConscript", line 7, in <module>

This issue was resolved by using src_list = Glob('*.cpp') instead.
2) What is the proper (portable) way to create environment objects for debug and release builds?
Is the way I have implemented it correct?
Kindly suggest the changes necessary to avoid the warning and run the build successfully.
Your problem is not related to build variants but to the fact that you have two targets with the same name (SharedLibrary and StaticLibrary both build sum).
To fix that, either give one of them another name or add an extension to at least one of them. If you add an extension, you might want to check for the OS if you want to keep your cross-platform compatibility.
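For example, a version of the SConscript that keeps both library types could look like this (sum-static is just an illustrative name; it also keeps the relative Glob that resolved the variant_dir clash):

# SConscript
Import('env')
src_list = Glob('*.cpp')  # relative Glob keeps the variant_dir mapping intact
env.SharedLibrary(target = 'sum', source = src_list)
# A distinct target name avoids the shared/static outputs colliding
# (on Windows, both 'sum' targets would try to produce sum.lib).
env.StaticLibrary(target = 'sum-static', source = src_list)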
