I have a fairly standard Python test (full sources are here):
from absl.testing import absltest
[...]

class BellTest(absltest.TestCase):

  def test_bell(self):
    [...]
and the corresponding entry in the BUILD file (the dependencies are defined in the same BUILD file):
py_test(
    name = "bell_test",
    size = "small",
    srcs = ["bell_test.py"],
    python_version = "PY3",
    srcs_version = "PY3",
    deps = [
        ":tensor",
        ":state",
        ":ops",
        ":bell",
    ],
)
Without problems I can 'run' this via
bazel run bell_test
[...] comes out Ok.
However, I cannot 'test' it
bazel test bell_test
[...] FAILED
The log file tells me that it cannot find the dependency on absl.testing. This is puzzling, given that it works with 'run', and that the same setup works on Linux and macOS without problems.
I have tried all kinds of ways to add a dependency on absl / testing, but to no avail. Pointers would be greatly appreciated.
Side note: It would be great if bazel would print the path to the log file with Windows backslashes!
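For reference, this is the kind of thing I tried, assuming the pip packages are managed through rules_python (the @my_deps repository name and the requirements.bzl path are placeholders for whatever the WORKSPACE defines):

```python
# BUILD -- declare the absl-py pip dependency explicitly on the test target.
load("@my_deps//:requirements.bzl", "requirement")

py_test(
    name = "bell_test",
    size = "small",
    srcs = ["bell_test.py"],
    deps = [
        requirement("absl-py"),  # should make absl.testing visible inside the test sandbox
        ":tensor",
        ":state",
        ":ops",
        ":bell",
    ],
)
```

(Running bazel test with --test_output=errors at least prints the failure inline, which avoids chasing the log file path.)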
I'm facing problems running a streaming pipeline on DataflowRunner after separating the "main pipeline code" and "custom transforms code" into multiple files, as described here: Multiple File Dependencies. No element (pubsub message) is read into the pipeline, and none of the tabs in the (new) Dataflow UI (JOB LOGS, WORKER LOGS, JOB ERROR REPORTING) report any errors. Job ID: 2020-04-06_15_23_52-4004061030939218807 if someone wants to have a look...
Pipeline minimal code (BEFORE):
pipeline.py
row = p | "read_sub" >> pubsub.ReadFromPubSub(subscription=SUB, with_attributes=True) \
      | "add_timestamps" >> beam.Map(add_timestamps)
where add_timestamps is my custom transform:
def add_timestamps(e):
    payload = e.data.decode()
    return {"message": payload}
All works fine when add_timestamps and the pipeline code are in the same file pipeline.py.
AFTER I restructured the files as follows:
root_dir/
    pipeline.py
    setup.py
    my_transforms/
        __init__.py
        transforms.py
where setup.py is:
import setuptools

setuptools.setup(
    name='my-custom-transforms-package',
    version='1.0',
    install_requires=["datetime"],
    packages=['my_transforms']  # setuptools.find_packages(),
)
All the add_timestamps transform code moved to transforms.py (under the my_transforms package directory).
In my pipeline.py I now import and use the transform as follows:
from my_transforms.transforms import add_timestamps
row = p | "read_sub" >> pubsub.ReadFromPubSub(subscription=SUB, with_attributes=True) \
      | "add_timestamps" >> beam.Map(add_timestamps)
While launching the pipeline I do set the flag: --setup_file=./setup.py.
However, not a single element is read into the pipeline (as you can see, the Data watermark is still stuck and Elements added (Approximate) does not report anything).
I have tested the Multiple File Dependencies option in Dataflow and for me it works fine. I reproduced the example from Medium.
Your directory structure is correct. Have you added any imports to the transforms.py file?
I would recommend you to make some changes in setup.py:
import setuptools

REQUIRED_PACKAGES = [
    'datetime'
]

PACKAGE_NAME = 'my_transforms'
PACKAGE_VERSION = '0.0.1'

setuptools.setup(
    name=PACKAGE_NAME,
    version=PACKAGE_VERSION,
    description='My transforms package',
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages()
)
When running your pipeline, keep an eye on setting the following fields in PipelineOptions: job_name, project, runner, staging_location, temp_location. You must specify at least one of temp_location or staging_location to run your pipeline on Google Cloud. If you use the Apache Beam SDK for Python 2.15.0 or later, you must also specify region. Remember to specify the full path to setup.py.
It will look similar to that command:
python3 pipeline.py \
    --job_name <JOB_NAME> \
    --project <PROJECT_NAME> \
    --runner DataflowRunner \
    --region <REGION> \
    --temp_location gs://<BUCKET_NAME>/temp \
    --setup_file /<FULL_PATH>/setup.py
I hope it helps.
I found the root cause... I was setting the flag --no_use_public_ips and had install_requires=["datetime"] in setup.py.
Of course, without an external IP the worker was unable to reach the Python package index to install datetime. The problem was solved by not setting the flag --no_use_public_ips (I'll look later at how to disable external IPs for workers and still run successfully). It would have been good if at least some error message were displayed in the Job/Worker logs! I spent 2-3 days troubleshooting :)
Is this possible? AFAICT there's no built-in py_proto_library rule, and trying to use my own genrule like:
genrule(
    name = "my_proto",
    srcs = ["my.proto"],
    outs = ["my_pb2.py", "my_pb2_grpc.py"],
    cmd = "python -m grpc_tools.protoc --python_out=$(@D) --grpc_python_out=$(@D) $<",
)
in the deps of a py_binary fails with '//:my_proto' does not have mandatory provider 'py'.
It should work fine rolling your own proto rules like you're doing; you just need to add the generated files to the srcs (not deps) of your py_binary.
deps is only for py_library targets (you could also wrap the generated .py files in a py_library if you preferred, and then have the binary depend on that).
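A minimal sketch of the wrapping option, with made-up target names:

```python
# BUILD -- wrap the genrule's generated sources in a py_library.
py_library(
    name = "my_proto_py",
    srcs = [":my_proto"],  # picks up the generated my_pb2.py / my_pb2_grpc.py
)

py_binary(
    name = "main",
    srcs = ["main.py"],
    deps = [":my_proto_py"],  # deps now sees a target with the 'py' provider
)
```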
I discovered the entry_points feature of setuptools:
http://pythonhosted.org/setuptools/setuptools.html#dynamic-discovery-of-services-and-plugins
quote: setuptools supports creating libraries that “plug in” to extensible applications and frameworks, by letting you register “entry points” in your project that can be imported by the application or framework.
But I have not seen a project using them.
Are there examples of projects which use them?
If not, why are they not used?
There are loads of examples. Any project that defines console scripts uses them, for example. A quick search on GitHub gives you plenty to browse through.
I'll focus on one specific example (one that is not on GitHub): Babel.
Babel uses entry_points both for console scripts and to define extension points for translatable-text extraction. See their setup.py source:
if have_setuptools:
    extra_arguments = dict(
        zip_safe = False,
        test_suite = 'babel.tests.suite',
        tests_require = ['pytz'],
        entry_points = """
        [console_scripts]
        pybabel = babel.messages.frontend:main

        [distutils.commands]
        compile_catalog = babel.messages.frontend:compile_catalog
        extract_messages = babel.messages.frontend:extract_messages
        init_catalog = babel.messages.frontend:init_catalog
        update_catalog = babel.messages.frontend:update_catalog

        [distutils.setup_keywords]
        message_extractors = babel.messages.frontend:check_message_extractors

        [babel.checkers]
        num_plurals = babel.messages.checkers:num_plurals
        python_format = babel.messages.checkers:python_format

        [babel.extractors]
        ignore = babel.messages.extract:extract_nothing
        python = babel.messages.extract:extract_python
        javascript = babel.messages.extract:extract_javascript
        """,
    )
Tools like pip and zc.buildout use the console_scripts entry points to create command-line scripts (here, one called pybabel, which runs the main() callable in the babel.messages.frontend module).
The distutils.commands entry points define additional commands you can use when running setup.py; these can be used in your own projects to invoke Babel command-line utilities right from your setup script.
Last, but not least, it registers its own checkers and extractors. The babel.extractors entry point is loaded by the babel.messages.extract.extract function, using the setuptools pkg_resources module, giving access to all installed Python projects that registered that entry point. The following code looks for a specific extractor in those entries:
try:
    from pkg_resources import working_set
except ImportError:
    pass
else:
    for entry_point in working_set.iter_entry_points(GROUP_NAME, method):
        func = entry_point.load(require=True)
        break
This lets any project register additional extractors; simply add an entry point in your setup.py and Babel can make use of it.
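As a side note (not part of Babel): on Python 3.8+ the standard library can do the same discovery without setuptools, via importlib.metadata. A small sketch that lists the console_scripts entry points installed in the current environment:

```python
from importlib.metadata import entry_points

# Collect the 'console_scripts' group. select() exists on Python 3.10+;
# on 3.8/3.9 entry_points() returns a dict-like object instead.
eps = entry_points()
if hasattr(eps, "select"):
    scripts = list(eps.select(group="console_scripts"))
else:
    scripts = list(eps.get("console_scripts", []))

for ep in scripts:
    # ep.value is the 'module:callable' string, e.g. 'babel.messages.frontend:main'
    print(ep.name, "->", ep.value)
```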
Sentry is a good example. Sentry's author even created a Django package named Logan to convert standard Django management commands to console scripts.
I've written a setup.py script for py2exe, generated an executable for my python GUI application and I have a whole bunch of files in the dist directory, including the app, w9xopen.exe and MSVCR71.dll. When I try to run the application, I get an error message that just says "see the logfile for details". The only problem is, the log file is empty.
The closest error I've seen is "The following modules appear to be missing" but I'm not using any of those modules as far as I know (especially since they seem to be of databases I'm not using) but digging up on Google suggests that these are relatively benign warnings.
I've written and packaged a console application as well as a wxPython one with py2exe, and both applications compiled and ran successfully. I am using a new Python toolkit called dabo, which in turn makes use of wxPython modules, so I can't figure out what I'm doing wrong. Where do I start investigating the problem, since obviously the log file hasn't been too useful?
Edit 1:
The Python version is 2.5 and py2exe is 0.6.8. There were no significant build errors. The only one was the bit about "The following modules appear to be missing...", which were non-critical warnings since the packages listed were ones I was definitely not using and shouldn't stop the execution of the app either. Running the executable produced a log file that was completely empty. Previously it had an error about locales, which I've since fixed, but clearly something is still wrong as the executable isn't running. The setup.py file is based quite heavily on the one generated by running their "app wizard" and on the example that Ed Leafe and some others posted. Yes, I have a log file, and it's not printing anything for me to use, which is why I'm asking if there's any other troubleshooting avenue I've missed that will help me find out what's going on.
I have even written a bare-bones test application which simply produces a bare-bones GUI: an empty frame with some default menu options. The code I wrote myself is only 3 lines, and the rest is in the 3rd-party toolkit. Again, it compiled into an exe (as did my original app) but simply did not run. There was no error output in the runtime log file either.
Edit 2:
It turns out that switching from "windows" to "console" for initial debugging purposes was insightful. I've now got a basic running test app and on to compiling the real app!
The test app:
import dabo
app = dabo.dApp()
app.start()
The setup.py for test app:
import os
import sys
import glob
from distutils.core import setup
import py2exe
import dabo.icons

daboDir = os.path.split(dabo.__file__)[0]

# Find the location of the dabo icons:
iconDir = os.path.split(dabo.icons.__file__)[0]
iconSubDirs = []

def getIconSubDir(arg, dirname, fnames):
    if ".svn" not in dirname and dirname[-1] != "\\":
        icons = glob.glob(os.path.join(dirname, "*.png"))
        if icons:
            subdir = (os.path.join("resources", dirname[len(arg)+1:]), icons)
            iconSubDirs.append(subdir)

os.path.walk(iconDir, getIconSubDir, iconDir)

# locales:
localeDir = "%s%slocale" % (daboDir, os.sep)
locales = []

def getLocales(arg, dirname, fnames):
    if ".svn" not in dirname and dirname[-1] != "\\":
        mo_files = tuple(glob.glob(os.path.join(dirname, "*.mo")))
        if mo_files:
            subdir = os.path.join("dabo.locale", dirname[len(arg)+1:])
            locales.append((subdir, mo_files))

os.path.walk(localeDir, getLocales, localeDir)

data_files = [("resources", glob.glob(os.path.join(iconDir, "*.ico"))),
              ("resources", glob.glob("resources/*"))]
data_files.extend(iconSubDirs)
data_files.extend(locales)

setup(name="basicApp",
      version='0.01',
      description="Test Dabo Application",
      options={"py2exe": {
          "compressed": 1, "optimize": 2, "bundle_files": 1,
          "excludes": ["Tkconstants", "Tkinter", "tcl",
                       "_imagingtk", "PIL._imagingtk",
                       "ImageTk", "PIL.ImageTk", "FixTk", "kinterbasdb",
                       "MySQLdb", 'Numeric', 'OpenGL.GL', 'OpenGL.GLUT',
                       'dbGadfly', 'email.Generator',
                       'email.Iterators', 'email.Utils', 'kinterbasdb',
                       'numarray', 'pymssql', 'pysqlite2', 'wx.BitmapFromImage'],
          "includes": ["encodings", "locale", "wx.gizmos", "wx.lib.calendar"]}},
      zipfile=None,
      windows=[{'script': 'basicApp.py'}],
      data_files=data_files
      )
You may need to fix log handling first; this URL may help.
Later you may look for answers here.
My answer is very general because you didn't give any more specific info (like the py2exe/Python version, the py2exe log, or other 3rd-party libraries used).
See http://www.wxpython.org/docs/api/wx.App-class.html for wxPython's App class initializer. If you want to run the app from a console and have stderr printed there, supply False for the redirect argument. Otherwise, if you just want a window to pop up, set redirect to True and filename to None.
How can I pass a user-defined parameter both from the command line and setup.cfg configuration file to distutils' setup.py script?
I want to write a setup.py script, which accepts my package specific parameters. For example:
python setup.py install -foo myfoo
As Setuptools/Distutils are horribly documented, I had problems finding the answer to this myself. But eventually I stumbled across this example. Also, this similar question was helpful. Basically, a custom command with an option would look like:
from distutils.core import setup, Command

class InstallCommand(Command):
    description = "Installs the foo."
    user_options = [
        ('foo=', None, 'Specify the foo to bar.'),
    ]

    def initialize_options(self):
        self.foo = None

    def finalize_options(self):
        assert self.foo in (None, 'myFoo', 'myFoo2'), 'Invalid foo!'

    def run(self):
        install_all_the_things()

setup(
    ...,
    cmdclass={
        'install': InstallCommand,
    }
)
Here is a very simple solution: all you have to do is filter sys.argv and handle the argument yourself before you call distutils' setup(..).
Something like this:
if "--foo" in sys.argv:
    do_foo_stuff()
    sys.argv.remove("--foo")
...
setup(..)
The documentation on how to do this with distutils is terrible; eventually I came across this one: the Hitchhiker's Guide to Packaging, which uses sdist and its user_options.
I find the extending distutils reference not particularly helpful.
Although this looks like the "proper" way of doing it with distutils (at least the only one I could find that is vaguely documented), I could not find anything on the --with and --without switches mentioned in the other answer.
The problem with this distutils solution is that it is just way too involved for what I am looking for (which may also be the case for you).
Adding dozens of lines and subclassing sdist just feels wrong to me.
Yes, it's 2015 and the documentation for adding commands and options in both setuptools and distutils is still largely missing.
After a few frustrating hours I figured out the following code for adding a custom option to the install command of setup.py:
from setuptools.command.install import install

class InstallCommand(install):
    user_options = install.user_options + [
        ('custom_option=', None, 'Path to something')
    ]

    def initialize_options(self):
        install.initialize_options(self)
        self.custom_option = None

    def finalize_options(self):
        # print('The custom option for install is ', self.custom_option)
        install.finalize_options(self)

    def run(self):
        global my_custom_option
        my_custom_option = self.custom_option
        install.run(self)  # OR: install.do_egg_install(self)
It's worth mentioning that install.run() checks whether it's called "natively" or has been patched:
if not self._called_from_setup(inspect.currentframe()):
    orig.install.run(self)
else:
    self.do_egg_install()
At this point you register your command with setup:
setup(
    cmdclass={
        'install': InstallCommand,
    },
    :
You can't really pass custom parameters to the script. However the following things are possible and could solve your problem:
optional features can be enabled using --with-featurename, and standard features can be disabled using --without-featurename [AFAIR this requires setuptools]
you can use environment variables; these, however, need to be set beforehand on Windows, whereas prefixing the command works on Linux/OS X (FOO=bar python setup.py)
you can extend distutils with your own cmd_classes, which can implement new features. They are also chainable, so you can use them to change variables in your script (python setup.py foo install will execute the foo command before it executes install)
Hope that helps somehow. Generally speaking I would suggest providing a bit more information what exactly your extra parameter should do, maybe there is a better solution available.
I successfully used a workaround to use a solution similar to totaam's suggestion. I ended up popping my extra arguments from the sys.argv list:
import sys
from distutils.core import setup

foo = 0
if '--foo' in sys.argv:
    index = sys.argv.index('--foo')
    sys.argv.pop(index)  # Removes the '--foo'
    foo = sys.argv.pop(index)  # Returns the element after the '--foo'

# The foo is now ready to use for the setup
setup(...)
Some extra validation could be added to ensure the inputs are good, but this is how I did it.
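The same popping logic can be pulled into a small helper (my own generalisation, not required for the approach) so it can be reused and sanity-checked before handing the remaining arguments to setup():

```python
def pop_flag_value(argv, flag):
    """Remove 'flag VALUE' from an argv-style list in place and return VALUE.

    Returns None if the flag is absent; mirrors the two sys.argv.pop()
    calls in the snippet above.
    """
    if flag not in argv:
        return None
    index = argv.index(flag)
    argv.pop(index)          # removes the flag itself
    return argv.pop(index)   # the element right after the flag is its value

# Example with a copy of what sys.argv might look like:
args = ['setup.py', '--foo', 'myfoo', 'install']
value = pop_flag_value(args, '--foo')
print(value)  # myfoo
print(args)   # ['setup.py', 'install']
```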
A quick and easy way, similar to that given by totaam, would be to use argparse to grab the --foo argument and leave the remaining arguments for the call to distutils' setup(). Using argparse for this is better than iterating through sys.argv manually, imho. For instance, add this at the beginning of your setup.py:
import argparse
import sys

argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('--foo', help='required foo argument', required=True)
args, unknown = argparser.parse_known_args()
sys.argv = [sys.argv[0]] + unknown
The add_help=False argument means that you can still get the regular setup.py help using -h (provided --foo is given).
Perhaps you are an unseasoned programmer like me who still struggled after reading all the answers above. Thus, you might find another example potentially helpful (and one that addresses the comments in previous answers about entering the command-line arguments):
import subprocess
import sys

from distutils.core import setup, Command

class RunClientCommand(Command):
    """
    A command class to run the client GUI.
    """
    description = "runs client gui"

    # The format is (long option, short option, description).
    user_options = [
        ('socket=', None, "The socket of the server to connect (e.g. '127.0.0.1:8000')"),
    ]

    def initialize_options(self):
        """
        Sets the default value for the server socket.

        The method is responsible for setting default values for
        all the options that the command supports.

        Option dependencies should not be set here.
        """
        self.socket = '127.0.0.1:8000'

    def finalize_options(self):
        """
        Overriding a required abstract method.

        The method is responsible for setting and checking the
        final values and option dependencies for all the options
        just before the method run is executed.

        In practice, this is where the values are assigned and verified.
        """
        pass

    def run(self):
        """
        Semantically, runs 'python src/client/view.py SERVER_SOCKET' on the
        command line.
        """
        print(self.socket)
        errno = subprocess.call([sys.executable, 'src/client/view.py', self.socket])
        if errno != 0:
            raise SystemExit("Unable to run client GUI!")

setup(
    # Some other omitted details
    cmdclass={
        'runClient': RunClientCommand,
    },
)
The above is tested and comes from some code I wrote. I have also included slightly more detailed docstrings to make things easier to understand.
As for the command line: python setup.py runClient --socket=127.0.0.1:7777. A quick double check using print statements shows that the correct argument is indeed picked up by the run method.
Other resources I found useful (more and more examples):
Custom distutils commands
https://seasonofcode.com/posts/how-to-add-custom-build-steps-and-commands-to-setuppy.html
To be fully compatible with both python setup.py install and pip install ., you need to use environment variables, because the pip option --install-option= is bugged:
pip --install-option leaks across lines
Determine what should be done about --(install|global)-option with Wheels
pip not naming abi3 wheels correctly
Here is a full example that does not use --install-option:
import os
import sys

environment_variable_name = 'MY_ENVIRONMENT_VARIABLE'
environment_variable_value = os.environ.get(environment_variable_name, None)

if environment_variable_value is not None:
    sys.stderr.write("Using '%s=%s' environment variable!\n" % (
        environment_variable_name, environment_variable_value))

setup(
    name='packagename',
    version='1.0.0',
    ...
)
Then, you can run it like this on Linux:
MY_ENVIRONMENT_VARIABLE=1 pip install .
MY_ENVIRONMENT_VARIABLE=1 pip install -e .
MY_ENVIRONMENT_VARIABLE=1 python setup.py install
MY_ENVIRONMENT_VARIABLE=1 python setup.py develop
But, if you are on Windows, run it like this:
set "MY_ENVIRONMENT_VARIABLE=1" && pip install .
set "MY_ENVIRONMENT_VARIABLE=1" && pip install -e .
set "MY_ENVIRONMENT_VARIABLE=1" && python setup.py install
set "MY_ENVIRONMENT_VARIABLE=1" && python setup.py develop
References:
How to obtain arguments passed to setup.py from pip with '--install-option'?
Passing command line arguments to pip install
Passing the library path as a command line argument to setup.py