Bazel build package not found - python

I'm trying to run TensorFlow code downloaded from GitHub (tensorflow/models/adversarial_text), but I'm running into a Bazel build error. The error looks straightforward, but as I haven't used Bazel much before, I'd appreciate any ideas or suggestions. The error is:
ERROR: /home/dasgupta/adversarial_text/BUILD:60:1: no such package 'adversarial_text/data': BUILD file not found on package path and referenced by '//:inputs'.
Line 60 of adversarial_text/BUILD, which triggers the error above, contains the following rule:
py_library(
    name = "inputs",
    srcs = ["inputs.py"],
    deps = [
        # tensorflow dep,
        "//adversarial_text/data:data_utils",
    ],
)
But the directory adversarial_text/data does exist, and inside adversarial_text/data/BUILD there is this rule too:
py_library(
    name = "data_utils",
    srcs = ["data_utils.py"],
    deps = [
        # tensorflow dep,
    ],
)
I tried adding
visibility = ["//adversarial_text:__pkg__"],
right after the deps attribute of data_utils, but that didn't solve the problem.
Any ideas what I might be missing here, or what I might need to set or change (environment variables?) to get this to work?
My config: bash on Ubuntu 16.04, TensorFlow 1.2, Bazel 0.5, and Python 2.7.

The visibility has to be //:__pkg__, since adversarial_text is the root of your workspace, and you should build //:inputs.
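For example (run from the workspace root, adversarial_text/):
$ bazel build //:inputs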

To summarize, this is what I did to make it work after cloning the project:
1 Create "WORKSPACE" file in adversarial_text/
touch WORKSPACE
2. Edit the deps in adversarial_text/BUILD:
py_library(
    name = "inputs",
    srcs = ["inputs.py"],
    deps = [
        # tensorflow dep,
        "//data:data_utils",
    ],
)

py_test(
    name = "graphs_test",
    size = "large",
    srcs = ["graphs_test.py"],
    deps = [
        ":graphs",
        # tensorflow dep,
        "//data:data_utils",
    ],
)
3. Add visibility for data_utils in adversarial_text/data/BUILD:
py_library(
    name = "data_utils",
    srcs = ["data_utils.py"],
    deps = [
        # tensorflow dep,
    ],
    visibility = ["//:__pkg__"],
)
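With those changes in place, the targets should build and test from the workspace root (a quick check; target names taken from the BUILD files above):
$ bazel build //:inputs
$ bazel test //:graphs_test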

This should be fixed now: running the code no longer requires Bazel, as of https://github.com/tensorflow/models/pull/3414.

Related

sphinx-autoapi build error with sphinx.ext.inheritance_diagram using Graphviz

I'm following the examples in the official documentation (https://www.sphinx-doc.org/en/master/usage/extensions/inheritance.html#examples), but I'm running into a problem.
My error is:
Here is a part of my conf.py:
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.todo',
    'sphinx.ext.coverage',
    'sphinx.ext.viewcode',
    'sphinx.ext.inheritance_diagram',
    'sphinx.ext.graphviz',
    # 'rst2pdf.pdfbuilder',
]
extensions.append('autoapi.extension')
autoapi_type = 'python'
autoapi_dirs = ['../pyqpanda']
autoapi_options = ['members', 'undoc-members', 'private-members',
                   'show-inheritance', 'show-module-summary',
                   'special-members', 'imported-members', 'show-inheritance-diagram']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['.templates']
inheritance_graph_attrs = dict(rankdir="TB", size='""')
I'm on CentOS, with Sphinx 5.1.1 and graphviz 0.2.
How can I fix this error and get the inheritance diagram to show up in the API reference?
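For reference, this is roughly how the directive from the linked documentation is used in an .rst page (the class path here is a made-up placeholder; substitute a real class from pyqpanda):

.. inheritance-diagram:: pyqpanda.SomeClass
   :parts: 1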

How to include one group of optional-dependencies in another?

If I have 2 groups of project.optional-dependencies in my pyproject.toml, is there a way to specify that installing one group installs the dependencies of the other?
E.g.
[project.optional-dependencies]
test = [
    "pytest",
    "pytest-asyncio",
    "pytest-cov",
]
dev = [
    "flake8",
    "flake8-import-order",
    "black",
]
How can I specify that installing myproj[dev] also installs myproj[test]?
Yes: an extra can reference the package itself with another extra, so dev can pull in test. I'm not sure from which pip version this is possible; on 22.2.2 it works, on 20.0.2 it doesn't.
[project]
name = "my-pkg"

[project.optional-dependencies]
test = [
    "pytest",
    "pytest-asyncio",
    "pytest-cov",
]
dev = [
    "flake8",
    "flake8-import-order",
    "black",
    "my-pkg[test]",
]
Source: https://hynek.me/articles/python-recursive-optional-dependencies/
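Installing the dev extra then pulls in the test dependencies as well (assuming a recent enough pip, per the note above):
$ pip install 'my-pkg[dev]'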

Bazel 0.26.1 use Python3 on py_test

I am trying to use Bazel for my new project, and for some reason I can only get Bazel 0.26.1. However, when I write a test case using py_test, it seems that Bazel always uses Python 2 to run my test. Is there any way to prevent this behavior?
To reproduce:
file test_a.py:
# Works on Python 3
# SyntaxError on Python 2
print(print('Good'))
file WORKSPACE:
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
git_repository(
name = "rules_python",
commit = "54d1cb35cd54318d59bf38e52df3e628c07d4bbc",
remote = "https://github.com/bazelbuild/rules_python.git",
)
file BUILD:
load("#rules_python//python:defs.bzl", "py_test")
py_test(
name = "test_a",
size = "small",
srcs = ["test_a.py"],
deps = [],
)
My shell session looks like this (... is a path in ~/.cache/):
$ bazel version | head -n 1
Build label: 0.26.1
$ bazel test test_a
//:test_a FAILED in 0.1s
.../test.log
INFO: Build completed, 1 test FAILED, 2 total actions
$ cat .../test.log
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //:test_a
-----------------------------------------------------------------------------
File ".../test_a.py", line 1
print(print('Good'))
^
SyntaxError: invalid syntax
$
According to a note in the documentation of the python_version flag for py_test, there is a bug (#4815) where the script may still invoke the wrong interpreter version at runtime. The suggested workaround is to define a py_runtime rule using select() and point to that py_runtime with the --python_top flag (see the issue for more details):
py_runtime(
    name = "myruntime",
    interpreter_path = select({
        # Update paths as appropriate for your system.
        "@bazel_tools//tools/python:PY2": "/usr/bin/python2",
        "@bazel_tools//tools/python:PY3": "/usr/bin/python3",
    }),
    files = [],
)
$ bazel test :test_a --python_top=//path/to:myruntime
The issue appears to have been fixed in 0.27.0.
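For reference, the python_version attribute mentioned above is set directly on the test target (available since Bazel 0.25; per the bug above, on 0.26.x the runtime may still pick the wrong interpreter without the py_runtime workaround):

py_test(
    name = "test_a",
    size = "small",
    srcs = ["test_a.py"],
    python_version = "PY3",  # request a Python 3 runtime for this target
    deps = [],
)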

Create a Python entry point manually

In setup.py I can put something like this:
import setuptools

setuptools.setup(
    name = "my_project",
    version = "1.2.3.4",
    packages = ["my_project"],
    entry_points = {
        "console_scripts": [
            "my_project = my_project.__main__:main",
        ],
    },
)
This creates an entry point/executable, my_project, that I can call from the console.
Is it possible to create these entry points manually, in a normal Python script, outside of setup.py?
(I'm interested in creating these the same way setup.py does, so not using system-specific hashbang scripts, etc.)
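For context, what setuptools/pip installs for a console_scripts entry is itself just a small Python wrapper file (on Unix it does get an interpreter-specific shebang; Windows gets an .exe shim), so "manual" creation amounts to writing the same wrapper yourself. A minimal sketch, assuming the my_project.__main__:main entry point from the question:

#!/usr/bin/env python3
# Hand-written equivalent of a setuptools console_scripts wrapper.
import sys

from my_project.__main__ import main

if __name__ == "__main__":
    sys.exit(main())

Placed on PATH and marked executable, this behaves like the generated my_project script, though the generated wrappers also handle platform details such as the Windows .exe shims.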

bazel py_proto_library is not defined

BUILD:
cc_proto_library(
    name = "yd_fieldoptions_cc",
    deps = [":yd_fieldoptions"],
)

py_proto_library(
    name = "yd_fieldoptions_py",
    deps = [":yd_fieldoptions"],
)

proto_library(
    name = "yd_fieldoptions",
    srcs = ["yd_fieldoptions.proto"],
    deps = [
        "@com_google_protobuf//:descriptor_proto",
    ],
)
Error:
$ bazel build -s //field_options:yd_fieldoptions_py
BUILD:11:1: name 'py_proto_library' is not defined (did you mean 'cc_proto_library'?)
Version:
Build label: 0.14.0- (@non-git)
Protobuf version: 3.5.0
You might be thinking of this rule: https://github.com/google/protobuf/blob/master/protobuf.bzl
In order to use it, you have to load the .bzl file in your BUILD file: https://docs.bazel.build/versions/master/skylark/concepts.html#loading-an-extension
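A sketch of what that load statement looks like, assuming the Protobuf repository is available in your workspace as @com_google_protobuf (as in the question's WORKSPACE):

load("@com_google_protobuf//:protobuf.bzl", "py_proto_library")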
The implementation of py_proto_library contains some hacks, however.
Some of the toolchain/library references are only valid inside the Protobuf repository, so in order to use the rule py_proto_library, you have to manually bind those references in your own repository.
I have a very rough example that demonstrates how to bind some (but definitely not all) of those references in order to make py_proto_library work in your repository.
You can check out the example here.
This is a very rough implementation; though it works, I have no idea whether it will work in a more complex scenario.
You have been warned.
However, if you really want to make things work, you can invoke the Protobuf compiler directly, then export the generated Python file to a py_library.
This is guaranteed to work, though it requires more code.
# This generates the Protobuf Python code using the protoc compiler.
genrule(
    name = "yd_fieldoptions_compiled_python",
    srcs = ["yd_fieldoptions.proto"],
    outs = ["yd_fieldoptions_pb2.py"],
    cmd = "$(location @com_google_protobuf//:protoc) -I=proto --python_out=$(@D) $<",
    tools = ["@com_google_protobuf//:protoc"],
)
# Set up a py_library target to be used by your code.
py_library(
    name = "yd_fieldoptions_py",
    srcs = [":yd_fieldoptions_compiled_python"],
    deps = [
        "@protobuf_python",
        "@pypi_six//:six",
    ],
)
Also, you have to include the following in your WORKSPACE file.
These entries download the necessary dependencies; you may have to adjust the URLs as well as the Protobuf-Python versions to your needs.
new_http_archive(
    name = "pypi_six",
    url = "https://pypi.python.org/packages/16/d8/bc6316cf98419719bd59c91742194c111b6f2e85abac88e496adefaf7afe/six-1.11.0.tar.gz",
    build_file_content = """
py_library(
    name = "six",
    srcs = ["six.py"],
    visibility = ["//visibility:public"],
)
""",
    strip_prefix = "six-1.11.0",
)
new_http_archive(
    name = "protobuf_python",
    url = "https://pypi.python.org/packages/14/03/ff5279abda7b46e9538bfb1411d42831b7e65c460d73831ed2445649bc02/protobuf-3.5.1.tar.gz",
    build_file_content = """
py_library(
    name = "protobuf_python",
    srcs = glob(["google/protobuf/**/*.py"]),
    visibility = ["//visibility:public"],
    deps = [
        "@pypi_six//:six",
    ],
)
""",
    strip_prefix = "protobuf-3.5.1",
)
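With the genrule, py_library, and WORKSPACE entries above in place, the target should build (path taken from the question's error output):
$ bazel build //field_options:yd_fieldoptions_py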
By the way, the code above does not include the gRPC plugin.
If you are looking for a gRPC-enabled Protobuf library, you have to include the gRPC repo and then add the necessary plugin to the corresponding rules.
