DeepGraphGO
https://github.com/yourh/DeepGraphGO
Run the main program
Using backend: pytorch
Traceback (most recent call last):
File "D:/Users/.../PycharmProjects/deepgrago210721/main.py", line 92, in <module>
main()
File "D:\Users\...\PycharmProjects\deepgrago210721\venv\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "D:\Users\...\PycharmProjects\deepgrago210721\venv\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "D:\Users\...\PycharmProjects\deepgrago210721\venv\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "D:\Users\...\PycharmProjects\deepgrago210721\venv\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "D:/Users/.../PycharmProjects/deepgrago210721/main.py", line 31, in main
data_cnf, model_cnf = yaml.load(Path(data_cnf)), yaml.load(Path(model_cnf))
File "D:\Users\...\AppData\Local\Programs\Python\Python38\lib\pathlib.py", line 1038, in __new__
self = cls._from_parts(args, init=False)
File "D:\Users\...\AppData\Local\Programs\Python\Python38\lib\pathlib.py", line 679, in _from_parts
drv, root, parts = self._parse_args(args)
File "D:\Users\...\AppData\Local\Programs\Python\Python38\lib\pathlib.py", line 663, in _parse_args
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
Asking for help: I failed to find the reason. Is it because of the data storage location?
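For context: the traceback shows pathlib receiving None, which means the data_cnf/model_cnf variables were never populated; that usually points to main.py being launched without its -d and -m options rather than anything about where the data is stored. A minimal sketch reproducing the failure (option names taken from the run commands below):

from pathlib import Path

# Path(None) reproduces the exact TypeError from the traceback:
# pathlib calls os.fspath() on its argument, and None is not path-like.
try:
    Path(None)
except TypeError as e:
    print(e)  # expected str, bytes or os.PathLike object, not NoneType

# So main.py needs both configuration files on the command line, e.g.:
#   python main.py -m configure/dgg.yaml -d configure/bp.yaml --model-id 0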
The steps to run DeepGraphGO (graph neural network for large-scale, multispecies protein function prediction):
step 1:
Clone/Download the repo
pip install -r requirements.txt
pip install dtrx
step 2:
Extract Data folder
cd ./data
dtrx -f data.zip
step 3:
Run the preprocessing script
python preprocessing.py data/ppi_mat.npz data/ppi_dgl_top_100
Run the main function with different model IDs for the biological process ontology domain; for the molecular function and cellular component ontology domains, replace the yaml file passed to -d with configure/mf.yaml or configure/cc.yaml, respectively:
python main.py -m configure/dgg.yaml -d configure/bp.yaml --model-id 0
python main.py -m configure/dgg.yaml -d configure/bp.yaml --model-id 1
python main.py -m configure/dgg.yaml -d configure/bp.yaml --model-id 2
Run the bagging and evaluation scripts for the BP domain:
python bagging.py -m configure/dgg.yaml -d configure/bp.yaml -n 3
python evaluation.py data/bp_test_go.txt results/DeepGraphGO-Ensemble-bp-test.txt
Related
I have the following Snakefile (using Snakemake v7.6.0):
from snakemake.remote.GS import RemoteProvider as GSRemoteProvider
GS = GSRemoteProvider(project="my-gcp-project-id")
rule example2:
    output:
        GS.remote('my-gcs-bucket/path/to/test.txt', stay_on_remote=True)
    shell: 'touch test.txt && gsutil mv test.txt gs://my-gcs-bucket/path/to/test.txt'
When I run this in Linux (using WSL2), the job completes successfully -- the file is uploaded to GCS, and Snakemake reports that it finished the job:
...
Finished job 0.
1 of 1 steps (100%) done
However, when I run the same rule in Windows (using GitBash), I get an error:
Traceback (most recent call last):
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\__init__.py", line 722, in snakemake
success = workflow.execute(
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\workflow.py", line 1110, in execute
success = self.scheduler.schedule()
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\scheduler.py", line 510, in schedule
self.run(runjobs)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\scheduler.py", line 558, in run
executor.run_jobs(
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\executors\__init__.py", line 120, in run_jobs
self.run(
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\executors\__init__.py", line 486, in run
future = self.run_single_job(job)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\executors\__init__.py", line 539, in run_single_job
self.cached_or_run, job, run_wrapper, *self.job_args_and_prepare(job)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\executors\__init__.py", line 491, in job_args_and_prepare
job.prepare()
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\jobs.py", line 770, in prepare
f.prepare()
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\io.py", line 618, in prepare
raise e
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\io.py", line 614, in prepare
os.makedirs(dir, exist_ok=True)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\os.py", line 223, in makedirs
mkdir(name, mode)
FileNotFoundError: [WinError 161] The specified path is invalid: 'gs://my-gcs-bucket'
If the file is already present in GCS (e.g., because I ran the rule in Linux already), the Windows job can see that the file is there:
Building DAG of jobs...
Nothing to be done (all requested files are present and up to date).
I realize that the example above could be refactored to touch {output} and allow Snakemake itself to upload the file. However, I have a use-case where I have large files that I want to be able to write directly to GCS (e.g., streaming and processing files that would otherwise be too large to fit on my local disk all at once). Thus, the example above is meant to represent a wider type of use-case.
Is my use in Linux incorrect? If not, is there a way to make this work in Windows (in a POSIX-like shell such as Git Bash) as well? I'm interested to understand why this does not work in the Windows shell.
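For what it's worth, the immediate failure can be reproduced outside Snakemake. Before running the shell command, Snakemake calls os.makedirs() on the output's parent directory, and Windows rejects 'gs://my-gcs-bucket' because 'gs:' parses as an invalid drive specifier, while POSIX systems will happily create a literal local directory named 'gs:'. A minimal sketch of the platform difference (independent of Snakemake, and presumably why the Linux run appears to succeed):

import os

# On Linux/WSL2 this quietly creates a local directory tree literally named
# "gs:/my-gcs-bucket/path/to" (the double slash collapses). On Windows,
# "gs:" is treated as a malformed drive letter and makedirs raises
# "[WinError 161] The specified path is invalid".
try:
    os.makedirs("gs://my-gcs-bucket/path/to", exist_ok=True)
    print("created a local directory -- POSIX behaviour")
except OSError as e:
    print("rejected -- Windows behaviour:", e)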
I am trying to compile Qt 5.13 in a snap package, but I get the following error when priming it:
'NoneType' object has no attribute 'decode'
I thought this was related to some internationalization issue in the underlying Python code, so I tried changing the build order and a number of other things, but every time we get the same error.
The following is the snapcraft file:
name: vcs
version: '2.0'
summary: VCS GUI
description: |
  Maritime Robotics Vehicle Control System GUI
confinement: devmode
base: core18

layout:
  /usr/lib/vcs-gui/plugins:
    bind: $SNAP/usr/lib/vcs-gui/plugins

parts:
  desktop-qt5:
    source: https://download.qt.io/archive/qt/5.13/5.13.2/single/qt-everywhere-src-5.13.2.tar.xz
    plugin: dump
    override-build: |
      snapcraftctl build
      ./configure -opensource -confirm-license -debug -nomake examples -nomake tests
      make
      make install
    override-prime: |
      locale-gen "en_US.UTF-8"
      update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
      snapcraftctl prime
    build-packages:
      - python
      - locales
  protobuf:
    source: https://github.com/protocolbuffers/protobuf/releases/download/v3.8.0/protobuf-all-3.8.0.tar.gz
    plugin: autotools
  cmake:
    source: https://github.com/Kitware/CMake/releases/download/v3.15.0-rc1/cmake-3.15.0-rc1.tar.gz
    plugin: autotools
  libproj:
    source: http://download.osgeo.org/proj/proj-6.0.0.tar.gz
    plugin: autotools
    build-packages:
      - libsqlite3-dev
      - sqlite3
      - libgl1-mesa-dev
      - libglu1-mesa-dev
  gdal:
    source: https://github.com/OSGeo/gdal/releases/download/v3.0.0/gdal-3.0.0.tar.gz
    plugin: autotools
    after: [libproj]
  ecc:
    source: https://confluence.ecmwf.int/download/attachments/45757960/eccodes-2.12.5-Source.tar.gz
    plugin: cmake
    configflags: [-DENABLE_FORTRAN=OFF]
  vcs:
    plugin: cmake
    configflags: [-DQt5Core_DIR=/usr/local/Qt-5.13.2/lib/cmake/Qt5Core, -DQt5_DIR=/usr/local/Qt-5.13.2/lib/cmake/Qt5, -DQT_QMAKE_EXECUTABLE=/usr/local/Qt-5.13.2/bin/qmake]
    source: https://github.com/fritzone/Qt-CMake-HelloWorld.git
    after: [protobuf, cmake, libproj, gdal, ecc, desktop-qt5]
    build-packages:
      - git
      - g++
      - make
      - libkml-dev
      - libarmadillo-dev
      - libgeographic-dev
      - libssl-dev
      - libconfig++-dev
      - libxml2-dev
      - libmodbus-dev
      - libev-dev
      - libudev-dev
      - libexiv2-dev
      - libv4l-dev
      - doxygen
      - graphviz
      - libgeotiff-dev
      - libgeos-dev
      - libpng-dev
      - libbotan-2-dev
    stage-packages:
      - libbotan-2-4
      - libtspi1
      - libkmlconvenience1
      - libkmlbase1
      - libkmlengine1
      - libkmldom1
      - libssl1.0.0
      - libconfig++9v5
      - libxml2
      - libmodbus5
      - libev4
      - libaec0
      - libhdf4-0-alt
      - libsz2
      - libexiv2-14
      - libv4l-0
      - libgeotiff2
      - libsdl2-2.0-0
      - libxcb-xinerama0
      - libarmadillo8
      - libarpack2
      - libsuperlu5
      - libgeos-3.6.2
      - libgeos-c1v5

apps:
  vcs:
    command: bin/desktop-launch bin/vcs
    adapter: full
    command-chain:
      - bin/desktop-launch
      - bin/vcs
    common-id: vcs-gui.desktop
    desktop: usr/share/applications/vcs-gui.desktop
    environment:
      "DISABLE_WAYLAND": "0"
    plugs: [x11, wayland, desktop, desktop-legacy, opengl, network, home]
Here is the output I get from it when I run with:
SNAPCRAFT_ENABLE_DEVELOPER_DEBUG=yes SNAPCRAFT_BUILD_ENVIRONMENT_MEMORY=8G snapcraft --debug
(Yes, it needs circa 8 GB of memory; with less, the compiler crashes while building Qt.)
[...everything is fine till this point...]
Priming desktop-qt5
Generating locales (this might take a while)...
en_US.UTF-8... done
Generation complete.
'NoneType' object has no attribute 'decode'
We would appreciate it if you anonymously reported this issue.
No other data than the traceback and the version of snapcraft in use will be sent.
And here is the stack trace of the Python script that tries to build the package:
'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
We would appreciate it if you anonymously reported this issue.
No other data than the traceback and the version of snapcraft in use will be sent.
Would you like to send this error data? (Yes/No/Always/View) [no]: View
Traceback (most recent call last):
File "/snap/snapcraft/3440/bin/snapcraft", line 11, in <module>
load_entry_point('snapcraft==3.8', 'console_scripts', 'snapcraft')()
File "/snap/snapcraft/3440/lib/python3.5/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/click/core.py", line 1114, in invoke
return Command.invoke(self, ctx)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/cli/_runner.py", line 103, in run
snap_command.invoke(ctx)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/cli/_command.py", line 87, in invoke
return super().invoke(ctx)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/cli/lifecycle.py", line 261, in snap
_execute(steps.PRIME, parts=[], pack_project=True, output=output, **kwargs)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/cli/lifecycle.py", line 66, in _execute
lifecycle.execute(step, project_config, parts)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/lifecycle/_runner.py", line 94, in execute
executor.run(step, part_names)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/lifecycle/_runner.py", line 148, in run
self._handle_step(part_names, part, step, current_step, cli_config)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/lifecycle/_runner.py", line 162, in _handle_step
getattr(self, "_run_{}".format(current_step.name))(part)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/lifecycle/_runner.py", line 237, in _run_prime
self._run_step(step=steps.PRIME, part=part, progress="Priming")
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/lifecycle/_runner.py", line 281, in _run_step
getattr(part, step.name)()
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/pluginhandler/__init__.py", line 795, in prime
self._do_runner_step(steps.PRIME)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/pluginhandler/__init__.py", line 242, in _do_runner_step
return getattr(self._runner, "{}".format(step.name))()
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/pluginhandler/_runner.py", line 91, in prime
"override-prime", self._override_prime_scriptlet, self._primedir
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/pluginhandler/_runner.py", line 137, in _run_scriptlet
scriptlet_name, function_call.strip()
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/pluginhandler/_runner.py", line 193, in _handle_builtin_function
function(**function_args)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/pluginhandler/__init__.py", line 807, in _do_prime
dependency_paths = self._handle_elf(snap_files)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/pluginhandler/__init__.py", line 814, in _handle_elf
elf_files = elf.get_elf_files(self.primedir, snap_files)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/elf.py", line 576, in get_elf_files
elf_file = ElfFile(path=path)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/elf.py", line 219, in __init__
elf_data = self._extract(path)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/snapcraft/internal/elf.py", line 252, in _extract
interp_section = elf.get_section_by_name(_INTERP)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/elftools/elf/elffile.py", line 94, in get_section_by_name
for i, sec in enumerate(self.iter_sections()):
File "/snap/snapcraft/3440/lib/python3.5/site-packages/elftools/elf/elffile.py", line 103, in iter_sections
yield self.get_section(i)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/elftools/elf/elffile.py", line 83, in get_section
return self._make_section(section_header)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/elftools/elf/elffile.py", line 288, in _make_section
name = self._get_section_name(section_header)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/elftools/elf/elffile.py", line 283, in _get_section_name
return self._file_stringtable_section.get_string(name_offset)
File "/snap/snapcraft/3440/lib/python3.5/site-packages/elftools/elf/sections.py", line 70, in get_string
return s.decode('ascii')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
We would appreciate it if you anonymously reported this issue.
Does anyone have any idea how to make this work? Please note that the Qt shipped with the default core18 base (5.9) is not good enough for our application due to bugs that were only fixed in 5.13, so we need at least 5.13.
The application this tries to build is just a hello-world app; it is not relevant to the error, but snapcraft requires it to reach the stage where the error occurs.
(Be careful if you try to run snapcraft against this file: Qt takes 4-5 hours to compile on my computer.)
In the traceback, a get_string function is raising a UnicodeDecodeError because it can't decode some text from ASCII.
In the current source for elftools, this line has been replaced by
return s.decode('utf-8') if s else ''
So you could try upgrading the version of pyelftools bundled with snapcraft:
<the python interpreter bundled with snap> -m pip install --upgrade pyelftools
(ensure you can roll back, in case this upgrade actually makes things worse)
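For what it's worth, byte 0xd0 is a valid UTF-8 lead byte (it starts two-byte sequences such as Cyrillic letters), which is why the ASCII decode in the bundled pyelftools trips over it while the newer UTF-8 decode does not. A quick illustration:

# 0xd0 begins a two-byte UTF-8 sequence, e.g. the Cyrillic letter "Д".
s = b"\xd0\x94"

try:
    s.decode("ascii")
except UnicodeDecodeError as e:
    print(e)  # 'ascii' codec can't decode byte 0xd0 in position 0 ...

# Roughly the fix that landed upstream in pyelftools:
print(s.decode("utf-8") if s else "")  # -> Д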
I am learning the "TensorFlow for Poets" tutorial.
I am getting stuck on the retraining step, where for some reason the retrain.py command (along with the other 4 lines of code attached to it) encounters errors.
I'm thinking it might be a simple fix. I am able to follow the codelab tutorial through each step successfully until the step with the following command:
# python tensorflow/examples/image_retraining/retrain.py \
--bottleneck_dir=/tf_files/bottlenecks \
--how_many_training_steps 500 \
--model_dir=/tf_files/inception \
--output_graph=/tf_files/retrained_graph.pb \
--output_labels=/tf_files/retrained_labels.txt \
--image_dir /tf_files/flower_photos
I enter this into my command line in my Docker terminal and this is the error:
root@3333e49b2f82:/tensorflow# python tensorflow/examples/image_retraining/retrain.py \
--bottleneck_dir=/tf_files/bottlenecks \
--how_many_training_steps 500 \
--model_dir=/tf_files/inception \
--output_graph=/tf_files/retrained_graph.pb \
--output_labels=/tf_files/retrained_labels.txt \
--image_dir /tf_files/flower_photos
Traceback (most recent call last):
File "tensorflow/examples/image_retraining/retrain.py", line 1012, in
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/usr/local/lib/python2.7/dist- packages/tensorflow/python/platform/app.py", line 43, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "tensorflow/examples/image_retraining/retrain.py", line 751, in main
maybe_download_and_extract()
File "tensorflow/examples/image_retraining/retrain.py", line 313, in maybe_download_and_extract
tarfile.open(filepath, 'r:gz').extractall(dest_directory)
File "/usr/lib/python2.7/tarfile.py", line 2051, in extractall
self.extract(tarinfo, path)
File "/usr/lib/python2.7/tarfile.py", line 2088, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
File "/usr/lib/python2.7/tarfile.py", line 2164, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib/python2.7/tarfile.py", line 2205, in makefile
copyfileobj(source, target)
File "/usr/lib/python2.7/tarfile.py", line 265, in copyfileobj
shutil.copyfileobj(src, dst)
File "/usr/lib/python2.7/shutil.py", line 49, in copyfileobj
buf = fsrc.read(length)
File "/usr/lib/python2.7/tarfile.py", line 818, in read
buf += self.fileobj.read(size - len(buf))
File "/usr/lib/python2.7/tarfile.py", line 736, in read
return self.readnormal(size)
File "/usr/lib/python2.7/tarfile.py", line 745, in readnormal
return self.fileobj.read(size)
File "/usr/lib/python2.7/gzip.py", line 261, in read
self._read(readsize)
File "/usr/lib/python2.7/gzip.py", line 308, in _read
self._read_eof()
File "/usr/lib/python2.7/gzip.py", line 347, in _read_eof
hex(self.crc)))
IOError: CRC check failed 0x76f1f85e != 0x6caceac0L
root@3333e49b2f82:/tensorflow#
The only thing I can think of based on these errors is that it has something to do with Python 2.7. I have both Python 2.7 and 3.5 installed on my machine (MacBook Air), and I'm not sure if this is somehow a problem for Docker or TensorFlow.
Anyway, any help is greatly appreciated.
A CRC error indicates that some data in your compressed file is damaged.
It actually happens because a previous run of this command was interrupted partway through (this was my case), leaving partially downloaded Inception files behind. When you run the command again, the CRC check is performed against the partial download, and it doesn't match.
Solution:
I deleted the previously downloaded files so they wouldn't interfere with this run.
The downloaded Inception files will be in /tf_files (if you are naming everything as per the tutorial).
So go to /tf_files and delete the partially downloaded inception/ directory:
rm -rf /tf_files/inception
Come back to /tensorflow and run the command again.
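If you want to confirm the archive really is the culprit before deleting anything, you can force a full read of it, which triggers the same gzip CRC check as in the traceback. A small sketch; the file name is an assumption based on what retrain.py downloads, so adjust it to whatever actually sits in your /tf_files:

import gzip

# Assumed download location/name per the tutorial; adjust if yours differs.
path = "/tf_files/inception/inception-2015-12-05.tgz"

try:
    with gzip.open(path, "rb") as f:
        while f.read(1 << 20):  # reading to EOF triggers the gzip CRC check
            pass
    print("archive looks intact")
except (IOError, EOFError) as e:
    print("archive is corrupt; delete it and re-run retrain.py:", e)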
It seems like the nature of the MapReduce framework is to work with many files. So when I get errors that tell me I'm using too many files, I suspect I'm doing something wrong.
If I run the job with the inline runner and three directories, it works:
$ python mr_gps_quality.py /Volumes/Logs/gps/ByCityLogs/city1/0[1-3]/*.log -r inline --no-output --output-dir city1_results/gps_quality/2015/03/
But if I run it using the local runner (and the same three directories), it fails:
$ python mr_gps_quality.py /Volumes/Logs/gps/ByCityLogs/city1/0[1-3]/*.log -r local --no-output --output-dir city1_results/gps_quality/2015/03/
[...output clipped...]
> /Users/andrewsturges/sturges/mr/env/bin/python mr_gps_quality.py --step-num=0 --mapper /var/folders/32/5vqk9bjx4c773cpq4pn_r80c0000gn/T/mr_gps_quality.andrewsturges.20150604.170016.046323/input_part-00249 > /var/folders/32/5vqk9bjx4c773cpq4pn_r80c0000gn/T/mr_gps_quality.andrewsturges.20150604.170016.046323/step-k0-mapper_part-00249
Traceback (most recent call last):
File "mr_gps_quality.py", line 53, in <module>
MRGPSQuality.run()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/job.py", line 494, in run
mr_job.execute()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/job.py", line 512, in execute
super(MRJob, self).execute()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/launch.py", line 147, in execute
self.run_job()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/launch.py", line 208, in run_job
runner.run()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/runner.py", line 458, in run
self._run()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/sim.py", line 182, in _run
self._invoke_step(step_num, 'mapper')
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/sim.py", line 269, in _invoke_step
working_dir, env)
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/local.py", line 150, in _run_step
procs_args, output_path, working_dir, env)
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/local.py", line 253, in _invoke_processes
cwd=working_dir, env=env)
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/local.py", line 76, in _chain_procs
proc = Popen(args, **proc_kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1197, in _execute_child
errpipe_read, errpipe_write = self.pipe_cloexec()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1153, in pipe_cloexec
r, w = os.pipe()
OSError: [Errno 24] Too many open files
Furthermore, if I go back to using the inline runner and include even more directories (11 total) in my input, then I get a different error again:
$ python mr_gps_quality.py /Volumes/Logs/gps/ByCityLogs/city1/*/*.log -r inline --no-output --output-dir city1_results/gps_quality/2015/03/
[...clipped...]
Traceback (most recent call last):
File "mr_gps_quality.py", line 53, in <module>
MRGPSQuality.run()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/job.py", line 494, in run
mr_job.execute()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/job.py", line 512, in execute
super(MRJob, self).execute()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/launch.py", line 147, in execute
self.run_job()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/launch.py", line 208, in run_job
runner.run()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/runner.py", line 458, in run
self._run()
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/sim.py", line 191, in _run
self._invoke_sort(self._step_input_paths(), sort_output_path)
File "/Users/andrewsturges/sturges/mr/env/lib/python2.7/site-packages/mrjob/runner.py", line 1202, in _invoke_sort
check_call(args, stdout=output, stderr=err, env=env)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 537, in check_call
retcode = call(*popenargs, **kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1308, in _execute_child
raise child_exception
OSError: [Errno 7] Argument list too long
The mrjob docs include a discussion of the differences between the inline and local runners, but I don't understand how that would explain this behavior.
Lastly, I'll mention that the number of files in the directories I'm globbing isn't huge:
$ find . -maxdepth 1 -mindepth 1 -type d | while read dir; do printf "%-25.25s : " "$dir"; find "$dir" -type f | wc -l; done | sort
./01 : 236
./02 : 169
./03 : 176
./04 : 185
./05 : 176
./06 : 235
./07 : 275
./08 : 265
./09 : 186
./10 : 171
./11 : 161
I don't think this has to do with the job itself, but here it is:
from mrjob.job import MRJob
import numpy as np
import geohash


class MRGPSQuality(MRJob):

    def mapper(self, _, line):
        try:
            lat = float(line.split(',')[1])
            lng = float(line.split(',')[2])
            horizontalAccuracy = float(line.split(',')[4])
            gh = geohash.encode(lat, lng, precision=7)
            yield gh, horizontalAccuracy
        except:
            pass

    def reducer(self, key, values):
        # Convert the generator straight back to array:
        vals = np.fromiter(values, float)
        count = len(vals)
        mean = np.mean(vals)
        if count > 50:
            yield key, [count, mean]


if __name__ == '__main__':
    MRGPSQuality.run()
The "Argument list too long" problem is not the job or Python; it's bash. The asterisk in the command line that kicks off the job expands to every matching file, producing a command line long enough to exceed bash's limit.
That error has nothing to do with ulimit, but "Too many open files" does: it means you would bump into the per-process open-file limit once the command actually ran.
You can check the shell's argument-length limit like this (if you are interested):
getconf ARG_MAX
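And if you would rather inspect the open-files limit (the one the local runner is hitting) from Python itself, the standard resource module exposes it; a minimal sketch:

import resource

# The soft limit is what a process actually hits; "ulimit -n" reports the same.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d hard=%d" % (soft, hard))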
To get around the max-args problem, you can concatenate all the files into one:
for f in *; do cat "$f" >> ../directory/bigfile.log; done
Then run your mrjob pointed at the big file.
If there are a lot of files, you can concatenate them with GNU parallel, since the command above is single-threaded and slow:
ls | parallel -m -j 8 "cat {} >> ../files/bigfile.log"
(Change 8 to the amount of parallelism you want.)
I'm having trouble running tasks. I run ./manage celeryd -B -l info, and it correctly loads all tasks into the registry.
The error happens when any of the tasks run - the task starts, does its thing, and then I get:
[ERROR/MainProcess] Thread 'ResultHandler' crashed: ValueError('Octet out of range 0..2**64-1',)
Traceback (most recent call last):
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 221, in run
return self.body()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 458, in body
on_state_change(task)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 436, in on_state_change
state_handlers[state](*args)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 413, in on_ack
cache[job]._ack(i, time_accepted, pid)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 1016, in _ack
self._accept_callback(pid, time_accepted)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 424, in on_accepted
self.acknowledge()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 516, in acknowledge
self.on_ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/consumer.py", line 405, in ack
message.ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/kombu-2.1.0-py2.7.egg/kombu/transport/base.py", line 98, in ack
self.channel.basic_ack(self.delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/channel.py", line 1740, in basic_ack
args.write_longlong(delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/serialization.py", line 325, in write_longlong
raise ValueError('Octet out of range 0..2**64-1')
ValueError: Octet out of range 0..2**64-1
I must also note that this worked on my previous Lion install, and even if I create a blank virtualenv with some test code, I get this error whenever a task runs.
This happens with Python 2.7.2 and 2.6.4.
Django==1.3.1
amqplib==1.0.2
celery==2.4.6
django-celery==2.4.2
It appears there is some bug with the Homebrew-installed Python. I've now switched to the native Lion one (2.7.1) and it works.
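For reference, the check the traceback dies in is just a bounds test on the AMQP delivery tag before it is packed as an unsigned 64-bit integer; on the broken Python build the tag evidently fails that test. A rough sketch of what amqplib's write_longlong does (not the actual implementation):

import struct

def write_longlong(n):
    # amqplib rejects anything that cannot be packed as an unsigned
    # 64-bit integer; this is the ValueError seen in the traceback.
    if n < 0 or n >= 2 ** 64:
        raise ValueError('Octet out of range 0..2**64-1')
    return struct.pack('>Q', n)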