I created a custom paster command as described in http://pythonpaste.org/script/developer.html#what-do-commands-look-like. In my setup.py I have defined the entry point like this:
entry_points={
    'paste.global_paster_command': [
        'xxx_new = xxxconf.main:NewXxx'
    ]
}
I'm inside an activated virtualenv and have installed my package via
python setup.py develop
If I run paster while inside my package folder, I see my custom command and can run it via paster xxx .... But if I leave my package folder, paster no longer lists my command. I checked which paster and it is the one from my virtualenv. I also started a Python interpreter and imported xxxconf, which works fine.
I have no idea why my global command is not recognized when I'm outside my package folder.
You are doing something wrong; it should work. Here is a minimal working example that you can test with your virtualenv:
blah/setup.py:
from setuptools import setup, find_packages

setup(name='blah',
      version='0.1',
      packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),
      include_package_data=True,
      zip_safe=False,
      entry_points={'paste.global_paster_command': ["xxx_new = blah.xxx:NewXxx"]},
      )
blah/blah/xxx.py:
from paste.script import command

class NewXxx(command.Command):
    usage = "PREFIX"
    summary = "some command"
    group_name = "my group"
    # parser and command() are needed to actually run it (not just list it):
    parser = command.Command.standard_parser(verbose=True)
    def command(self):
        print('new xxx')
blah/blah/__init__.py: empty.
Now testing:
$ pwd
/tmp
$ virtualenv paster
New python executable in paster/bin/python
Installing setuptools............done.
Installing pip...............done.
$ . paster/bin/activate
(paster)$ pip install PasteScript
Downloading/unpacking PasteScript
[... skipping long pip output here ...]
(paster)$ paster
[...]
Commands:
create Create the file layout for a Python distribution
help Display help
make-config Install a package and create a fresh config file/directory
points Show information about entry points
post Run a request for the described application
request Run a request for the described application
serve Serve the described application
setup-app Setup an application, given a config file
(paster)$ cd blah/
(paster)$ python setup.py develop
running develop
[... skipping setup.py output...]
(paster)$ paster
[...]
Commands:
create Create the file layout for a Python distribution
help Display help
make-config Install a package and create a fresh config file/directory
points Show information about entry points
post Run a request for the described application
request Run a request for the described application
serve Serve the described application
setup-app Setup an application, given a config file
my group:
xxx_new some command
(paster)$ cd ~
(paster)$ paster
[...]
Commands:
[...]
setup-app Setup an application, given a config file
my group:
xxx_new some command
You should install your paster script in the active virtualenv; then you can use it anywhere.
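If the command still does not show up, you can check whether the entry point is actually registered in the environment. A quick sketch using pkg_resources; the group name is the one from setup.py:
import pkg_resources

# list every command registered under the paste.global_paster_command group
for ep in pkg_resources.iter_entry_points('paste.global_paster_command'):
    print(ep.name, '->', ep.module_name)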
I have a GUI program I'm managing, written in Python. For the sake of not having to worry about environments, it's distributed as an executable built with PyInstaller. I can run this build from a function defined in the module as MyModule.build() (because to me it makes more sense to manage that script alongside the project itself).
I want to automate this to some extent, such that when a new release is added on Gitlab, it can be built and attached to the release by a runner. The approach I currently have to this is functional but hacky:
I use the GitLab API to download the source of the tag for the release. I run python -m pip install -r {requirements_path} and python -m pip install {source_path} in the runner's environment, then import and run the MyModule.build() function to generate an executable, which is then uploaded and linked to the release with the GitLab API.
Obviously the middle section is wanting. What are the best approaches for similar projects? Can the package and requirements be installed in a separate venv than the one the runner script is running in (see the sketch below)?
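A minimal sketch of that separate-venv variant: a disposable venv driven through its own interpreter, without ever activating it. MyModule is the module from above; paths are hypothetical:
import subprocess
import venv

# create a separate env next to the checkout (not the runner's own env)
venv.create('build-env', with_pip=True)
py = 'build-env/bin/python'  # 'build-env\\Scripts\\python.exe' on Windows

subprocess.run([py, '-m', 'pip', 'install', '-r', 'requirements.txt'], check=True)
subprocess.run([py, '-m', 'pip', 'install', '.'], check=True)
subprocess.run([py, '-c', 'import MyModule; MyModule.build()'], check=True)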
One workflow would be to push a tag to create your release. The following jobs have a rules: configuration so they only run on tag pipelines.
One job will build the executable file. Another job will create the GitLab release using the file created in the first job.
build:
  rules:
    - if: "$CI_COMMIT_TAG"  # Only run when tags are pushed
  image: python:3.9-slim
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  cache:  # https://docs.gitlab.com/ee/ci/caching/#cache-python-dependencies
    paths:
      - .cache/pip
      - venv/
  script:
    - python -m venv venv
    - source venv/bin/activate
    - python -m pip install -r requirements.txt  # package requirements
    - python -m pip install pyinstaller  # build requirements
    - pyinstaller --onefile --name myapp mypackage/__main__.py
  artifacts:
    paths:
      - dist

create_release:
  rules:
    - if: "$CI_COMMIT_TAG"
  needs: [build]
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:  # zip/upload your binary wherever it should be downloaded from
    - echo "Uploading release!"
  release:  # create the GitLab release
    tag_name: $CI_COMMIT_TAG
    name: 'Release of myapp version $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'
    assets:  # link uploaded asset(s) to the release
      links:
        - name: 'release-zip'
          url: 'https://example.com/downloads/myapp/$CI_COMMIT_TAG/myapp.zip'
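For the "upload" placeholder, one option is GitLab's generic package registry, which the job token can push to. A sketch of that script line, assuming the build job's dist/myapp artifact (available in create_release via needs: [build]):
script:
  - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file dist/myapp "$CI_API_V4_URL/projects/$CI_PROJECT_ID/packages/generic/myapp/$CI_COMMIT_TAG/myapp"'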
I have a project on ReadTheDocs that I'm trying to build. I'm using a very basic .readthedocs.yaml file that reads:
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-20.04
  tools:
    python: "3.9"
    # You can also specify other tool versions:
    # nodejs: "16"
    # rust: "1.55"
    # golang: "1.17"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  builder: html
  configuration: docs/source/conf.py
  fail_on_warning: true

# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
#   - pdf

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/requirements.txt

conda:
  environment: environment.yml
Unfortunately, the RTD build logs tell me that after cloning and writing out the environment.yml file, the build process runs python env create --quiet --name develop --file environment.yml. This obviously fails with "no such file or directory" (Error 2) since, well, no such file or directory as env exists in the directory structure. Shouldn't RTD be running conda env create here? How do I make it do the right thing?
This problem is described in https://github.com/readthedocs/readthedocs.org/issues/8595.
In summary, python: "3.9" now means that CPython 3.9 + venv are used, regardless of conda.environment.
If you want a conda environment, you need to specify python: "miniconda3-4.7" or python: "mambaforge-4.10" instead.
At the very least, a better error message should be shown in the future. Feel free to upvote the issue.
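Applied to the file above, the fix is a one-line change under build.tools (the conda section stays as-is):
build:
  os: ubuntu-20.04
  tools:
    python: "mambaforge-4.10"  # a conda-based tool instead of "3.9"

conda:
  environment: environment.yml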
I'm trying to make my Python command-line app as self-installing as possible for some Mac users in my company by registering it as a "command-line accessible tool" using setup.py as described here.
My project tree (simplified) looks like so:
my-app
|-app.py
|-setup.py
Building off Kenneth Reitz's example, here's my setup.py script:
setup(
    name='my-app',
    ...
    packages=find_packages(exclude=('tests',)),
    # For entry point. Based on https://stackoverflow.com/a/28471597/9381758.
    py_modules=['app'],
    # Based on https://stackoverflow.com/a/11717581/9381758.
    entry_points={
        'console_scripts': ['my-app = app:main']
    }
)
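(The entry point references app:main, so app.py is assumed to define a main() callable along these lines, matching the usage output shown below:)
# app.py (sketch)
import argparse

def main():
    parser = argparse.ArgumentParser(prog='my-app')
    parser.parse_args()
    # real work goes here

if __name__ == '__main__':
    main()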
When I run python setup.py install, it installs to my pyenv directory:
~/.pyenv/versions/3.6.4/bin/my-app
This works:
$ ~/.pyenv/versions/3.6.4/bin/my-app -h
usage: my-app [-h]
This doesn't:
$ my-app -h
-bash: my-app: command not found
Is there an elegant way to update my setup.py script so that, after installation, end-users can run the my-app command without further tweaks or typing the full path?
This could solve your problem:
setup(
    # other args ...
    scripts=['my-app/script.py']
)
In your case, try:
setup(
    # other args ...
    scripts=['app.py']
)
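Note that with pyenv specifically, newly installed executables only become visible after the shims are regenerated, so whichever approach you use, running this once after install is usually the missing step (assuming a standard pyenv setup):
$ pyenv rehash
$ my-app -h
usage: my-app [-h]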
1. create virtualenv
2. create project (chat)
3. follow the instructions at https://github.com/qubird/django-chatrooms, after which there is a src folder in the root of the virtualenv
4. navigate to virtualenv/src/chatrooms and run python setup.py install; this installs the app folder with all its files and folders at virtualenv/src/chatrooms/chatrooms
How do I get this to install to my project, not to virtualenv/src/chatrooms/chatrooms? I also checked "Can't install Django app from git" and "How can I download code from GitHub using the command line" but am still stuck.
Just follow these steps:
cd into the directory where you want your project to store your source code, e.g. home/.
Then run django-admin startproject chat.
This will create a chat directory in your current directory.
Now cd to the chat directory.
Run virtualenv env.
This will create a directory named env. Now activate the virtualenv by running source env/bin/activate (if you are in the chat dir).
As you now have your virtualenv ready and activated, just install all your apps by running pip install .. and you are ready to go.
And don't worry about the env folder and its contents, or about where your installed app code goes (until you want to change something in the installed app, which is usually not the case).
All you have to check is whether your installed app works or not.
The method below installs directly to the project-level directory, with no need to manually move files, i.e., nothing lands in src/chatrooms/chatrooms:
1. create virtualenv (optional)
2. create project
3. cd to project
4. run git init
5. run git clone "{protocol:url}"
6. add the app to settings and its urls to the main URLconf file (see the sketch below)
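For step 6, the additions look roughly like this (a sketch: 'chatrooms' is the app from the repository above, the chat/ URL prefix is an assumption, and the url()/include() style matches Django versions of that era):
# settings.py
INSTALLED_APPS = (
    # ...
    'chatrooms',
)

# urls.py
from django.conf.urls import include, url

urlpatterns = [
    # ...
    url(r'^chat/', include('chatrooms.urls')),  # prefix is hypothetical
]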
When I was developing and testing my project, I used to use virtualenvwrapper to manage the environment and run it:
workon myproject
python myproject.py
Of course, once I was in the right virtualenv, I was using the right version of Python, and other corresponding libraries for running my project.
Now, I want to use Supervisord to manage the same project as it is ready for deployment. The question is what is the proper way to tell Supervisord to activate the right virtualenv before executing the script? Do I need to write a separate bash script that does this, and call that script in the command field of Supervisord config file?
One way to use your virtualenv from the command line is to use the python executable located inside your virtualenv.
For me, the virtualenvs live in the .virtualenvs directory, for example:
/home/ubuntu/.virtualenvs/yourenv/bin/python
There is no need to run workon.
For a supervisor.conf managing a Tornado app, I use:
command=/home/ubuntu/.virtualenvs/myapp/bin/python /usr/share/nginx/www/myapp/application.py --port=%(process_num)s
Add your virtualenv/bin path to your supervisord.conf's environment:
[program:myproj-uwsgi]
process_name=myproj-uwsgi
command=/home/myuser/.virtualenvs/myproj/bin/uwsgi
    --chdir /home/myuser/projects/myproj
    -w myproj:app
environment=PATH="/home/myuser/.virtualenvs/myproj/bin:%(ENV_PATH)s"
user=myuser
group=myuser
killasgroup=true
startsecs=5
stopwaitsecs=10
First, run
$ workon myproject
$ dirname `which python`
/home/username/.virtualenvs/myproject/bin
Add the following
environment=PATH="/home/username/.virtualenvs/myproject/bin"
to the related supervisord.conf, under the [program:blabla] section.
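Note that this replaces PATH entirely for that program; to prepend the virtualenv while keeping the inherited PATH (as the previous answer does), write:
environment=PATH="/home/username/.virtualenvs/myproject/bin:%(ENV_PATH)s"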