virtualenv + setuptools issue in Pyramid

I have followed these instructions. That is:
Created a folder blah_project and another folder venv within it.
Run virtualenv --no-site-packages venv to create a virtual environment inside venv.
Activated venv with source venv/bin/activate
Run pip install pyramid
Run pcreate -s alchemy blah
Now, the problem I'm facing is that if I run any command, for instance python blah/setup.py test -q, the required packages are installed not under the venv path but in the current directory. Is that the expected behaviour? How do I set up the script so that packages are always installed in the right place?
I tried looking inside setup.py and didn't find anything relevant, i.e. no path is passed to the setuptools.setup() call.

Try
pip install -e .
That will install the project, along with the requirements declared in its setup.py, into your venv. Run it from the blah directory, where setup.py lives.

This is, unfortunately, the expected behavior of the test subcommand of setup.py. The way we solve this in many of our subprojects is by defining a new alias, setup.py dev, which installs both the testing dependencies and the actual dependencies at the same time. I don't have a better solution, as this is how setup.py test works intentionally. Below are links to the Pyramid configuration that allows setup.py dev to work.
https://github.com/Pylons/pyramid/blob/master/setup.cfg#L12
https://github.com/Pylons/pyramid/blob/master/setup.py#L99
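For reference, the pattern in those links boils down to an [aliases] section in setup.cfg. A minimal sketch, assuming your setup.py declares the test dependencies in extras_require={'testing': [...]} (the project name blah is just the example from above):
# setup.cfg
[aliases]
dev = develop easy_install blah[testing]
Running python setup.py dev then does an editable install and pulls the testing extras into the active venv.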

Related

How to let myprogram.py use venv without setting any envs beforehand?

I'm a newbie to python(3), but not to programming in general.
I'd like to distribute a git repo with myprogram consisting of these files:
requirements.txt
myprogram.py
lib/modulea.py
lib/moduleb.py
My question is: what is the best-practice and least surprising way to let users run myprogram.py using the dependencies in requirements.txt, so that after git clone and some idiomatic installation command(s), ./myprogram.py or /some/path/to/myprogram.py "just works" without having to first set magical venv or python3 environment variables?
I want to be able to run it using the #! shebang so that /path/to/myprogram.py and double-clicking it from the file manager GUI does the correct thing.
I already know I can create a wrapper.sh or make a clever shebang line. But I'm looking for the best-practice approach, since I'm new to python.
More details
I'm guessing that users would
git clone $url workdir
cd workdir
python3 -m venv .
./bin/pip install -r requirements.txt
And from now on this uses the modules from requirements.txt:
./myprogram.py
If I knew that the project directory was always /home/peter/workdir, I could start myprogram.py with:
#!/home/peter/workdir/bin/python3
but I'd like to avoid hard-coding the project directory in myprogram.py.
This also seems to work in my tiny demo, but it's clearly brittle and not best-practice; still, it illustrates what I'm trying to do:
#!/usr/bin/env python3
import os
import sys
# Prepend the venv's site-packages (relative to this script) to the import path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib', 'python3.10', 'site-packages'))
I'm sure I could come up with some home-grown shebang line that works, but what is the idiomatic way to do this in python3?
Again: after pip install, I absolutely refuse to have to set any environment variables or call any setup code in future shells before running myprogram.py. (Unless that strongly conflicts with "idiomatic", which I hope isn't the case.)
Expanding @sinoroc's comment into an answer:
I've looked at https://packaging.python.org/en/latest/tutorials/packaging-projects/ and also at "entrypoints", and this is the smallest example I can think of. Create an empty directory with these two files:
pyproject.toml:
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "example_module_pmorch"
version = "0.0.1"
[project.scripts]
runme = "example_module_pmorch:cli_main"
src/example_module_pmorch/__init__.py:
def cli_main():
    print("I'm the entrypoint")
Now if I run this:
$ python3 -m venv .
# Adding -e during development is optional
$ ./bin/pip install .
Then ./bin/runme does the right thing and prints I'm the entrypoint.
I do not see why you would need to hardcode anything. From your last snippet it looks like you are forcing the Python import system to include the target directory of the virtual environment you first create.
Based on your explanation, it seems you are using venv as your virtual environment manager. So long as your users install the dependencies into the virtual environment and activate it before running the script, the dependencies will be available for your module/script to use.
This line you share, ./bin/pip install -r requirements.txt, manually uses the package manager of the virtual environment created with python3 -m venv . (note the trailing dot). Instead, you would want your user to create the environment (python3 -m venv example-env), activate it (source example-env/bin/activate), and then run the pip install command: python3 -m pip install -r requirements.txt.
The user of the package has to make sure that the environment is active before running the script.
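If you want the script itself to catch a forgotten activation, here is a small sketch (not part of the original answer) that checks for an active environment:
# check_env.py - exit early when no virtual environment is active
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points into the environment while
    # sys.base_prefix still points at the base interpreter.
    return sys.prefix != getattr(sys, 'base_prefix', sys.prefix)

if not in_virtualenv():
    sys.exit('Please activate the virtual environment first.')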

What's the standard way to package a python project with dependencies?

I have a python project that has a few dependencies (defined under install_requires in setup.py). My ops people require a package to be self-contained and only depend on a Python installation. The litmus test is that they're able to get a zip file, then unzip and run it without an internet connection.
Is there an easy way to package an install including dependencies? It is acceptable if I have to build on the OS/architecture that it will eventually be run on.
For what it's worth, I've tried both setup.py build and setup.py sdist, but they don't fit the bill since they do not include dependencies. I've also considered virtualenv (which could be installed if absolutely necessary), but that has hard-coded paths which make it less than ideal.
There are a few nuances to how pip works. Unfortunately, using --prefix vendor to store all the dependencies of the project doesn't work if any of those dependencies, or dependencies of dependencies, are already installed somewhere pip can find them. pip will skip those and install only the rest into your vendor folder.
In the past I've used virtualenv's --no-site-packages option to solve this issue. At one company we would ship the whole virtualenv, which includes the python binary. In the interest of only shipping the dependencies, you can combine using a virtualenv with the --prefix switch on pip to give yourself a clean environment that installs to the right place.
I'll provide an example script that creates a temporary virtualenv, activates it, then installs the dependencies to a local vendor folder. This is handy if you are running in CI.
#!/bin/bash
tempdir=$(mktemp -d -t project.XXX) # create a temporary directory
trap "rm -rf $tempdir" EXIT # ensure it is cleaned up
# create the virtualenv and exclude packages outside of it
virtualenv --python=$(which python2.7) --no-site-packages $tempdir/venv
# activate the virtualenv
source $tempdir/venv/bin/activate
# install the dependencies as above
pip install -r requirements.txt --prefix=vendor
In most cases you should be able to "vendor" all the dependencies. It's basically a crude version of virtualenv.
For example look at how the requests package includes chardet and urllib3 in its own source tree. Here's an example script that should do the initial downloading and copying for you: https://gist.github.com/proppy/1136723
Once you have the dependencies installed, you can reference them with from .some.namespace import dependency_name to make sure that you're using your local versions.
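For illustration, a minimal sketch of such a relative import (mypkg and its packages subdirectory are hypothetical names, mirroring how requests once shipped urllib3 under requests/packages/):
# mypkg/client.py
# Assumes the dependency's source tree was copied into mypkg/packages/urllib3/
from .packages import urllib3  # the vendored copy, not any system-wide install

http = urllib3.PoolManager()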
It's possible to do this with recent versions of pip (I'm using 8.1.2). On the build machine:
pip install -r requirements.txt --prefix vendor
Then run it:
PYTHONPATH=vendor/lib/python2.7/site-packages python yourapp.py
(This is basically an expansion of @valentjedi's comment. Thanks!)
Let's say you have a python module app.py with its dependencies in a requirements.txt file.
First, install all your dependencies into an appdeps folder:
python -m pip install -r requirements.txt --target=./appdeps
Then, in your app.py module, add this dependency folder to sys.path:
# app.py
import os
import sys

# Resolve appdeps relative to this file so imports work from any working directory
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'appdeps'))

# rest of your module as normal
# ...
This will work the same way as if you were running the script from a venv with all the dependencies installed inside ;>

Install python module only on home folder in server

I'm developing my master thesis on a university's server, so I have my account and I can log in and do all the stuff I want if I remain inside /home/myname/.
I'm developing some python scripts, and now I want to integrate Python with the Octave module, which is not currently installed on the system; of course, I cannot do anything with sudo apt-get install.
How can I overcome this problem without asking my teacher?
thank you all,
Fabio
Please don't copy python and pip. You should use a virtualenv to install project-specific packages. This is particularly useful in your use-case where you can't install things at the system level. Even if you could, virtualenvs are recommended so the dependencies of each project are isolated.
Here is a quick primer that should get you going.
Create the virtualenv
virtualenv ~/project/env
Activate the virtualenv
source ~/project/env/bin/activate
This will modify your bash prompt by placing the name of your virtualenv in parentheses, indicating that your virtualenv is activated.
(env) hostname:current_folder user$
Install Packages into the virtualenv
pip install -r requirements.txt
Use the virtualenv
python script.py
Use virtualenv by default in a script
Note that a shebang must be an absolute path (~ is not expanded in shebang lines), so substitute your actual home directory.
script.py
#!/home/user/project/env/bin/python
print('hello world!')
Then from the command line
chmod ugo+x script.py
./script.py
hello world!
Deactivate the virtualenv
deactivate
Make yourself a local copy of python and pip, then you can install whatever modules you want and not have to worry about getting a sysadmin to help you.
There are some good instructions here
Go here to get the link to the version of python you need and substitute it in the instructions above.
In your .bashrc add alias and path to your local copy - you may need to modify this for your own situation:
alias python="~/bin/python"
PATH=~/.local/bin:~/bin:$PATH
For the PATH: when you install local copies of modules through pip, they go to ~/.local by default - change this if you prefer.
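For example, python -m pip install --user somepackage (somepackage being any package name) lands under ~/.local/lib/pythonX.Y/site-packages, with console scripts in ~/.local/bin.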
Begin your scripts with:
#!/usr/bin/env python
so that they use your preferred Python version.

Install local dist package into virtualenv

I have a pytest test, let's call it test.py. I used to run this test outside of virtualenv; now I'm trying to run it inside a virtualenv sandbox.
The project is structured like this:
~/project/test # where test.py and all virtualenv files live
~/project/mylibrary
test.py imports from mylibrary. In the past this worked because I had the code in ~/project/mylibrary installed into /usr/lib/python2.7/dist-packages/mylibrary.
I can't run virtualenv with the --system-site-packages flag. I also can't move the code from ~/project/mylibrary into the ~/project/test folder. How can I get access to the code in mylibrary inside my virtualenv?
You don't need to do anything special - as long as you are working inside a virtualenv, python setup.py install will automatically install packages into
$VIRTUAL_ENV/lib/python2.7/site-packages
rather than your system-wide
/usr/lib/python2.7/dist-packages
directory.
In general it's better to use pip install mylibrary/, since this way you can neatly uninstall the package using pip uninstall mylibrary.
If you're installing a working copy of some code that you're developing, it might be a good idea to install it in "editable" mode using pip install -e mylibrary/, which creates a link to your source directory so that your installed module gets updated as you edit the code.
The easiest way would be to add the directory containing the library to your sys.path.
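For example, a minimal sketch assuming ~/project/mylibrary is itself the importable package, so its parent directory ~/project needs to be on sys.path:
# at the top of test.py (which lives in ~/project/test)
import os
import sys

# Make ~/project importable so that 'import mylibrary' resolves
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))

import mylibrary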

How to export virtualenv?

I'm new to virtualenv, but I'm writing a Django app and eventually I will have to deploy it somehow.
So let's assume I have my app working in my local virtualenv, where I installed all the required libraries. What I want now is to run some kind of script that will take my virtualenv, check what's installed inside, and produce a script that installs all these libraries into a fresh virtualenv on another machine. How can this be done?
You don't copy-paste your virtualenv. Instead, you export the list of all installed packages:
pip freeze > requirements.txt
Then push the requirements.txt file to wherever you want to deploy the code, and do what you did on the dev machine -
$ virtualenv <env_name>
$ source <env_name>/bin/activate
(<env_name>)$ pip install -r path/to/requirements.txt
And there you have all your packages installed with the exact version.
You can also look into Fabric to automate this task, with a function like this -
from fabric.api import cd, env, prefix, run

def pip_install():
    with cd(env.path):
        with prefix('source venv/bin/activate'):
            run('pip install -r requirements.txt')
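With Fabric 1.x (where cd, env, prefix and run live in fabric.api), you would then invoke this as fab pip_install against your configured hosts.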
You can install virtualenvwrapper and try cpvirtualenv, but the developers advise caution here:
Warning
Copying virtual environments is not well supported. Each virtualenv has path information hard-coded into it, and there may be cases where the copy code does not know it needs to update a particular file. Use with caution.
If it is going to live at the same path, you can tar it and extract it on another machine. If all the same dependencies, libraries, etc. are available on the target machine, it will work.
