I would like to use pygradle in a multi-project setup with project dependencies. I created two Gradle sub-projects: a python-cli project (example-app) and a python-sdist project (example-lib), on which the CLI project depends.
But currently I'm facing the following error (gist) when I try to build the app:
multi-project-example/example-app> gradle build --info
> Task :example-app:buildPex FAILED
Task ':example-app:buildPex' is not up-to-date because:
Task has not declared any outputs despite executing actions.
Starting process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''. Working directory: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app Command: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/pip freeze --all --disable-pip-version-check
Successfully started process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''
/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/deployable/bin/example-app.pex
Starting process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''. Working directory: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app Command: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/pex --no-pypi --cache-dir /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/pex-cache --output-file /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/deployable/bin/example-app.pex --repo /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/wheel-cache --python-shebang /home/kkdh/.anaconda3/bin/python UNKNOWN==0.0.0 example-app==0.3.0a1
Successfully started process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''
Could not satisfy all requirements for example-lib:
example-lib(from: example-app==0.3.0a1)
:example-app:buildPex (Thread[Execution worker for ':',5,main]) completed. Took 1.165 secs.
You will find the example in my fork of pygradle: https://github.com/kKdH/pygradle/tree/master/examples/multi-project-example
I opened an issue about this problem but got no response from the project maintainers, so now I'm asking here for any pointers to a solution or further troubleshooting steps.
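One thing I noticed myself: the pex command above lists UNKNOWN==0.0.0 as a requirement, which makes me suspect the example-lib wheel is not being built under its real name. A quick check along those lines (just a guess on my part, using the paths from the log above):
$ cd multi-project-example/example-app
$ ls build/wheel-cache        # after a failed build
If example-lib's wheel is missing here, or shows up as UNKNOWN-0.0.0-*.whl, then the sdist's project name/version are not being picked up when its wheel is built.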
***EDIT: Issue resolved. Several things needed to be done to set up pre-commit in my environment prior to commit/push:
'git init'
'pre-commit install'
open a new terminal and activate the venv
'pre-commit run --all-files'
Hope this helps someone in the future.
Junior Web Developer making his first post!
I'm setting up a Python Django project using the cookiecutter-django template: https://github.com/cookiecutter/cookiecutter-django
However, upon any commit to GitHub, the GitHub Actions test fails on the linter at the "Run pre-commit" step. I receive the error message: "Error: The process '/opt/hostedtoolcache/Python/3.9.10/x64/bin/pre-commit' failed with exit code 1"
My research indicates this is a generic Python linter failure, but it's not clear why it's failing. The slug repo doesn't have any issues, and I've set up the project many times on 3 different machines (2 Macs and 1 Windows), pushing to new repositories each time, slightly modifying the settings each time and never modifying any code after project initialization, sadly with the same results.
I'm completely stumped and my progress has come to a stop on this issue. Please note I don't have much experience with GitHub Actions. Part of using this project template is having these nice features work to support future development.
One thing I'm confused about is why Python 3.9.10 is being used in the pre-commit step, when "python3 --version" returns Python 3.9.9 on all the machines I've used to initialize the project. A simple search in the project for "3.9.10" returns no results.
Below are my relevant project build settings. I've attempted many different combinations, but this is the configuration I'd like to take to production. Everything builds and runs via docker-compose locally without issues. Thanks massively for your help and guidance! Some seemingly non-relevant links were removed from the code due to the reputation limit.
https://github.com/TElphee01/django1
version: 0.1.0
open_source_license: 2 - BSD
timezone: EST
windows: n
use_pycharm: n
use_docker: y
postgresql_version: 14.1
js_task_runner: gulp
cloud_provider: AWS
mail_service: Mailgun
use_async: n
use_drf: y
custom_bootstrap_compilation: n
use_compressor: y
use_celery: y
use_mailhog: y
use_sentry: y
use_whitenoise: y
use_heroku: y
ci_tool: Github
keep_local_envs_in_vcs: y
debug: n
Error message found at https://github.com/TElphee01/django1/runs/5325567134?check_suite_focus=true under the "Run pre-commit" step:
Run pre-commit/action@v2.0.3
install pre-commit
/opt/hostedtoolcache/Python/3.9.10/x64/bin/pre-commit run --show-diff-on-failure --color=always --all-files
[INFO] Initializing environment for github.com/pre-commit/pre-commit-hooks.
[INFO] Initializing environment for github.com/psf/black.
[INFO] Initializing environment for github.com/PyCQA/isort.
[INFO] Initializing environment for github.com/PyCQA/flake8.
[INFO] Initializing environment for github.com/PyCQA/flake8:flake8-isort.
[INFO] Installing environment for github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for github.com/psf/black.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for github.com/PyCQA/isort.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for github.com/PyCQA/flake8.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
trim trailing whitespace.................................................Passed
fix end of files.........................................................Failed
- hook id: end-of-file-fixer
- exit code: 1
- files were modified by this hook
Fixing README.md
check yaml...............................................................Passed
black....................................................................Passed
isort....................................................................Failed
- hook id: isort
- files were modified by this hook
Fixing /home/runner/work/django1/django1/tefrontend/users/tests/test_views.py
flake8...................................................................Passed
pre-commit hook(s) made changes.
If you are seeing this message in CI, reproduce locally with: `pre-commit run --all-files`.
To run `pre-commit` as part of git workflow, use `pre-commit install`.
All changes made by hooks:
diff --git a/README.md b/README.md
index a64ec04..c4c8a44 100644
Binary files a/README.md and b/README.md differ
diff --git a/tefrontend/users/tests/test_views.py b/tefrontend/users/tests/test_views.py
index ebdc864..4fe526a 100644
--- a/tefrontend/users/tests/test_views.py
+++ b/tefrontend/users/tests/test_views.py
@@ -11,11 +11,7 @@ from django.urls import reverse
from tefrontend.users.forms import UserAdminChangeForm
from tefrontend.users.models import User
from tefrontend.users.tests.factories import UserFactory
-from tefrontend.users.views import (
- UserRedirectView,
- UserUpdateView,
- user_detail_view,
-)
+from tefrontend.users.views import UserRedirectView, UserUpdateView, user_detail_view
pytestmark = pytest.mark.django_db
Error: The process '/opt/hostedtoolcache/Python/3.9.10/x64/bin/pre-commit' failed with exit code 1
It looks like the pre-commit checks have detected some specific issues in your project:
the "end of file" check, which ensures that a file is either empty or ends with exactly one newline, and
the "isort" check, which enforces the ordering and formatting of import statements.
In both cases, the tooling also corrects the problem.
If you were to install and run pre-commit locally, you could simply commit the changes it makes and then push your code once the checks pass locally.
If you install pre-commit and then run pre-commit install in your repository, it will be installed as a git pre-commit hook, so these checks run automatically whenever you commit changes.
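In shell form, that local workflow looks roughly like this (a sketch; assuming pip installs into your project's environment):
$ pip install pre-commit            # install the tool itself
$ pre-commit install                # register the git hook so checks run on each commit
$ pre-commit run --all-files        # reproduce what CI runs; hooks fix files in place
$ git add -A && git commit -m "Apply pre-commit fixes"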
I am using Elastic Beanstalk to deploy my Django application. Today it suddenly stopped working without any breaking changes on the application side (I changed some templates, nothing more).
The deployment times out after 10 minutes of trying to deploy the app, and nothing happens.
The only more or less useful hint I can see in the log is this:
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject/Command 01_migrate] : Activity execution failed, because: SystemCheckError: System check identified some issues:
ERRORS:
education.Author.photo: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
education.Course.cover_image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
education.CourseCategory.icon_image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
Using staging settings
App receivers connected
(ElasticBeanstalk::ExternalInvocationError)
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject/Command 01_migrate] : Activity failed.
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject] : Activity failed.
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update ...] : Activity failed.
[2020-02-20T15:00:20.507Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142/AppDeployStage0/EbExtensionPostBuild] : Activity failed.
[2020-02-20T15:00:20.507Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142/AppDeployStage0] : Activity failed.
[2020-02-20T15:00:20.508Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142] : Completed activity. Result:
Application update - Command CMD-AppDeploy failed
But I already have Pillow in requirements.txt, and the same log says:
Requirement already satisfied: Pillow==6.2.1 in /opt/python/run/venv/lib64/python3.6/site-packages (from -r /opt/python/ondeck/app/requirements.txt (line 51))
How can I troubleshoot and fix this? And how can I avoid similar issues in the future? I am really frightened that the same problem may randomly pop up in the production environment.
Here's some more info about the configuration:
Here's what I have in .ebextensions:
01_packages.config:
packages:
  yum:
    git: []
    postgresql93-devel: []
db-migrate.config:
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myproject.settings
django.config:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: myproject/wsgi.py
wsgi_custom.config:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
This one is a pain and a known issue with Django when using the ImageField model/form field. Due to Python's dynamic import system it can appear out of nowhere, and it annoyed the hell out of me when I first came across it.
The way I normally fix this is by using conda and its equivalent of a virtualenv to ensure the right interpreter (the one with my packages) is used.
If you are not using a virtualenv or equivalent, set one up now. If you are already using one, check that you are installing Pillow with pip3 install pillow - the pip3 being important here, since on Debian (and many other) systems plain pip will only install for Python 2.x.
Using conda will ensure this doesn't happen in production, but I would also add it to your checklist of things to test when deploying: check that the correct version of Pillow is set up and working.
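A minimal sketch of what I mean, assuming conda is installed (the environment name and Python version are illustrative):
$ conda create -n myproject python=3.6
$ conda activate myproject
$ pip3 install pillow
$ python -c "import PIL; print(PIL.__version__)"    # confirm the interpreter that runs your app can see Pillow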
I had two Elastic Beanstalk environments with the same issue (one web tier env and a worker env).
On one of them the issue was resolved by restarting the environment.
The other one failed to restart and timed out every time on any operation. This one I managed to fix by going to Configuration > Capacity and changing the minimum and maximum number of instances to 0. I applied the changes, waited for them to take effect, and then restored the previous values for the min and max instance numbers.
That fixed the issue.
I still have no idea what caused the issue in the first place and would love to receive some comment on that.
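For reference, the same capacity change can be made from the AWS CLI, roughly like this (the environment name is illustrative; I did it through the console):
$ aws elasticbeanstalk update-environment --environment-name my-env \
    --option-settings Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=0 \
                      Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=0
Wait for the update to finish, then run the same command again with the original values.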
We are building a data pipeline using the Beam Python SDK and trying to run it on Dataflow, but we are getting the error below:
A setup error was detected in beamapp-xxxxyyyy-0322102737-03220329-8a74-harness-lm6v. Please refer to the worker-startup log for detailed information.
But we could not find detailed worker-startup logs.
We tried increasing the memory size, worker count, etc., but we still get the same error.
Here is the command we use:
python run.py \
--project=xyz \
--runner=DataflowRunner \
--staging_location=gs://xyz/staging \
--temp_location=gs://xyz/temp \
--requirements_file=requirements.txt \
--worker_machine_type n1-standard-8 \
--num_workers 2
Pipeline snippet:
data = pipeline | "load data" >> beam.io.Read(
    beam.io.BigQuerySource(query="SELECT * FROM abc_table LIMIT 100")
)
data | "filter data" >> beam.Filter(lambda x: x.get('column_name') == value)
The above pipeline just loads the data from BigQuery and filters it on some column value. It works like a charm in DirectRunner but fails on Dataflow.
Are we making any obvious setup mistake? Is anyone else getting the same error? We could use some help resolving the issue.
Update:
Our pipeline code is spread across multiple files, so we created a Python package. We solved the setup error by passing the --setup_file argument instead of --requirements_file.
We resolved this setup error by sending a different set of arguments to Dataflow. Our code is spread across multiple files, so we had to create a package for it. If we use --requirements_file, the job will start but eventually fail, because the workers aren't able to find the package. The Beam Python SDK sometimes does not throw an explicit error message for these cases; instead, it retries the job and then fails. To get your code running with a package, you will need to pass the --setup_file argument. Make sure the package created by the python setup.py sdist command includes all the files required by your pipeline code.
If you have a privately hosted Python package dependency, pass --extra_package with the path to the package.tar.gz file. A better way is to store it in a GCS bucket and pass that path here. The full sequence is sketched below.
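Putting that together, the build-and-run sequence looks roughly like this (a sketch; the project, bucket, and package names are illustrative):
$ python setup.py sdist              # check that dist/ contains every file the pipeline needs
$ python run.py \
    --project=xyz \
    --runner=DataflowRunner \
    --staging_location=gs://xyz/staging \
    --temp_location=gs://xyz/temp \
    --setup_file=./setup.py \
    --extra_package=gs://xyz/packages/private-dep-1.0.0.tar.gz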
I have written an example project to get started with Apache Beam Python SDK on Dataflow - https://github.com/RajeshHegde/apache-beam-example
Read about it here - https://medium.com/@rajeshhegde/data-pipeline-using-apache-beam-python-sdk-on-dataflow-6bb8550bf366
I'm building a prediction pipeline using Apache Beam/Dataflow. I need to include the model files inside the dependencies available to the remote workers. The Dataflow job failed with the same error log:
Error message from worker: A setup error was detected in beamapp-xxx-xxxxxxxxxx-xxxxxxxx-xxxx-harness-xxxx. Please refer to the worker-startup log for detailed information.
However, this error message didn't give any details about the worker-startup log. Eventually, I found a way to get the worker log and solve the problem.
As is known, Dataflow creates Compute Engine instances to run jobs and saves logs on them, so we can access the VMs to see the logs. We can connect to the VM used by our Dataflow job from the GCP console via SSH. Then we can check the boot-json.log file located in /var/log/dataflow/taskrunner/harness:
$ cd /var/log/dataflow/taskrunner/harness
$ cat boot-json.log
Here we should pay attention: when running in batch mode, the VM created by Dataflow is ephemeral and is shut down when the job fails. Once the VM is shut down, we can't access it anymore. But a failing work item is retried 4 times, so normally we have enough time to open boot-json.log and see what is going on.
Finally, here is the Python packaging setup I used, which may help someone else:
main.py
...
model_path = os.path.dirname(os.path.abspath(__file__)) + '/models/net.pd'
# pipeline code
...
MANIFEST.in
include models/*.*
setup.py (complete example):
REQUIRED_PACKAGES = [...]

setuptools.setup(
    ...
    include_package_data=True,
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
    package_data={"models": ["models/*"]},
    ...
)
Run the Dataflow pipeline:
$ python main.py --setup_file=/absolute/path/to/setup.py ...
Background
The system is CentOS 7, which ships with Python 2.x; the machine has 1 GB of memory and a single core.
I installed Python 3.x, which I can start with the python3 command.
The django-celery project runs in a Python 3.x virtualenv, and I had already set it up to work with nginx, uWSGI, and MariaDB. At least, I think so, because no errors happened.
I am trying to use supervisor to control the django-celery worker, like below:
command=env/bin/python project/manage.py celeryd -l INFO -n worker_%(process_num)s
numprocs=4
process_name=projects_worker_%(process_num)s
stdout_logfile=logfile.log
stderr_logfile=logfile_err.log
I also made the settings for celery events and celery beat; that part works well, no errors happened. The error comes from the worker part.
When I keep numprocs greater than 1, it runs at first; when I do supervisorctl status, all the processes are running.
But when I run the same command to check the status a few more times, some processes' status changes to 'starting'.
So I tried more times, and I found that the workers' status keeps changing from running to starting and then from starting back to running, without ever stopping.
When I check supervisor's logfile at tmp/supervisor.log, it shows lines like:
exit status 1; not expected
entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
'project_worker_0' with pid 2284
Maybe this shows why the workers change status all the time.
What's more, when I change numprocs to 1, the worker still fails. The worker's log shows me:
stale pidfile exists. Removing it
But I did not point the worker at any pidfile path. I only found the events' and beat's pidfiles at the / path, and no worker pidfile. I also tried find / -name '*.pid' to find a pidfile for the worker or celeryd, but none existed.
Questions
Firstly, I want to deploy the project, so: is there any other way to deploy django-celery with the virtualenv's celery?
If anyone can tell me how this phenomenon comes about, I would prefer to keep supervisor for deploying the celery part. Can anyone help me with it?
PS
Any of your thoughts may be helpful to me, best wishes!
Finally, I solved this problem last night.
About the reason
I had made the project run successfully on a Windows 10 system, but did not check it again after moving the project to CentOS 7. The command env/bin/python project/manage.py celeryd could not run successfully, so supervisor kept starting a process that failed soon after.
Why could the command not succeed? I had pip-installed all the packages needed, but it showed the error below:
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
I searched some blog posts about this error and got the answer:
export C_FORCE_ROOT='true' # in the CentOS environment
Actions to solve it (after meeting an error like this):
Add export C_FORCE_ROOT='true' to CentOS's environment file and source it.
Check that the command env/bin/python project/manage.py celeryd now runs successfully.
Restart supervisord. Attention, please! supervisorctl reload just reloads the .conf file, not the environment file. Instead, kill the supervisord process and start it again with supervisord -c xx.conf (find it with ps aux | grep supervisord and kill -9 <process number>; be careful). The same steps are consolidated as shell commands below.
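A consolidated shell version of those steps (a sketch from my setup; I used /etc/profile as the environment file, and xx.conf stands for your supervisor config, as above):
$ echo "export C_FORCE_ROOT='true'" >> /etc/profile && source /etc/profile
$ env/bin/python project/manage.py celeryd -l INFO    # verify the worker now starts by hand
$ ps aux | grep supervisord                           # find the supervisord pid
$ kill -9 <pid>                                       # stop it (carefully)
$ supervisord -c xx.conf                              # start it again so it picks up the new environment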
Related link
a blog post (in Chinese) about the error when celeryd does not run successfully
I just spun up an environment using EB with Python 3.4 and Django, but it keeps failing. It looks like the error occurs when installing packages with pip install -r requirements.txt. These are the events from the web console:
Time Type Details
2017-10-06 20:22:39 UTC-0600 WARN Environment health has transitioned from Pending to Degraded. Command failed on all instances. Initialization completed 69 seconds ago and took 14 minutes.
2017-10-06 20:22:20 UTC-0600 ERROR Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
2017-10-06 20:21:17 UTC-0600 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2017-10-06 20:21:17 UTC-0600 ERROR [Instance: i-0b46caf0e3099458c] Command failed on instance. Return code: 1 Output: (TRUNCATED)...) File "/usr/lib64/python2.7/subprocess.py", line 541, in check_call raise CalledProcessError(retcode, cmd) CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2017-10-06 20:21:14 UTC-0600 ERROR Your requirements.txt is invalid. Snapshot your logs for details.
I followed these tutorials: django-elastic-beanstalk-django and deploying-a-django-app-and-postgresql-to-aws-elastic-beanstalk; on both I'm stuck at the same step.
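The fuller error should be in /var/log/eb-activity.log, which the error message itself points at; it can be pulled like this (a sketch, assuming the EB CLI is configured for this environment):
$ eb logs                            # fetches recent instance logs, including eb-activity.log
$ eb ssh                             # or log in to the instance...
$ less /var/log/eb-activity.log      # ...and read the full activity log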
I had the exact same problem. Here is the solution I found:
1. Go to your env's configuration.
2. Click the conf button of Instances.
3. Under the Server section, check your Custom AMI ID.
4. For now, leave the page and go to the AWS EC2 console page.
5. Press Launch Instance.
6. You will now be on "Step 1: Choose an Amazon Machine Image (AMI)". Tada! You will see the AMI IDs that you can choose from.
7. Go back to the Custom AMI ID field from step 3 and fill it in, referring to the AMI IDs from step 6. (FYI, I used Amazon Linux, but it is up to you to decide.)
8. Click Apply and wait till the configuration is done.
9. Re-follow the Django app deployment steps.
10. DONE!