Poetry: Virtual environment already activated

Running the following:
poetry shell
returns the following error:
/home/harshagoli/.poetry/lib/poetry/_vendor/py2.7/subprocess32.py:149: RuntimeWarning: The _posixsubprocess module is not being used. Child process reliability may suffer if your program uses threads.
"program uses threads.", RuntimeWarning)
The currently activated Python version 2.7.17 is not supported by the project (^3.7).
Trying to find and use a compatible version.
Using python3 (3.7.5)
Virtual environment already activated: /home/harshagoli/.cache/pypoetry/virtualenvs/my-project-0wt3KWFj-py3.7
How can I get past this error? Why doesn't this command work?

As a workaround, I have to do the following:
source "$( poetry env list --full-path | grep Activated | cut -d' ' -f1 )/bin/activate"

poetry shell is a really buggy command, and this is often discussed among the maintainers. A workaround for this specific issue is to activate the shell manually:
source $(poetry env info --path)/bin/activate
It might be worth aliasing this; paste the following into your .bash_aliases or .bashrc:
alias acpoet='source $(poetry env info --path)/bin/activate'
Note the single quotes: with double quotes, the $(...) would be expanded once when the file is sourced, rather than each time the alias runs. Now you can run acpoet to activate your Poetry environment (don't forget to source your file to enable the command).

It's similar to activating a normal virtual environment, but first we need to find the path of the Poetry virtual env.
We can find the path via poetry env info --path
source $(poetry env info --path)/bin/activate
Explained in this poetry demo.

If you've installed Poetry using the traditional pip command, Poetry will be limited to creating virtual environments for the Python version it was installed with.
Try uninstalling poetry using:
pip3 uninstall poetry
Then reinstall using:
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
This worked for me.

Related

subprocess in python cannot create & activate conda venv

Currently I am trying to, in a Python script:
create a conda venv in a temp dir with a different Python version than the one I am using on my system
install some packages into this temp conda venv
execute another Python script using this new venv
kill the process (which is automatic, since it is inside a with ... as ...: block)
import subprocess
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmpdir:
    subprocess.call([
        f"""
        conda create -p {tmpdir}/temp_venv python=3.8 <<< y;
        conda activate {tmpdir}/temp_venv && pip install <some_package>==XXX;
        {tmpdir}/temp_venv/bin/python /path/to/python/script/test.py
        """],
        shell=True)
The point is that when I try this approach, I get the following error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
You may need to close and restart your shell after running 'conda init'.
I have already tried running conda init bash but the error persists.
I have also tried to use the venv module for this, but unfortunately it does not let me create an environment with a Python version that is not installed on the system.
So, the problem is that conda expects your shell to be initialized normally (an interactive shell), but when you use subprocess you are in a non-login, non-interactive shell. One hack is to manually call the shell startup script. For example, on my MacBook Pro:
subprocess.run([
    f"""
    conda create -y -p {tmpdir}/temp_venv python=3.8;
    conda init;
    source ~/.bash_profile;
    conda activate {tmpdir}/temp_venv && pip install <some_package>==XXX;
    {tmpdir}/temp_venv/bin/python /path/to/python/script/test.py
    """],
    shell=True)
Of course, this is going to be a bit platform dependent. For example, on Ubuntu, you are going to want to use:
source ~/.bashrc
instead.
A more portable solution would be to get subprocess.run to use an interactive shell, that would automatically call those scripts according to the convention of your OS (which conda handles setting up correctly).
So, this is definitely a hack, but it should work.
BTW, if you are using conda, you might as well use:
conda create -y -p {tmpdir}/temp_venv python=3.8 <some_package>==XXX
instead of a separate:
pip install <some_package>==XXX;
A less hacky alternative is to use conda run, which will run a script in the conda environment. So something like:
subprocess.run([
    f"""
    conda create -y -p {tmpdir}/temp_venv python=3.8;
    conda run -p {tmpdir}/temp_venv --no-capture-output pip install <some_package>==XXX;
    conda run -p {tmpdir}/temp_venv --no-capture-output python /path/to/python/script/test.py
    """],
    shell=True)
I hesitate to recommend conda run because, at least a few years ago, it was considered "broken" for various subtle reasons, although in simple cases it works. I think it is still considered an "experimental feature", so use it with that caveat in mind, but it should be more portable.
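If the single shell string becomes awkward to quote, the same conda run invocations can also be issued as argument lists, avoiding shell=True entirely. A minimal sketch (the env path, script path, and package name are placeholders):

```python
import subprocess

def conda_run_argv(env_path, *cmd):
    # Argument vector for `conda run` in the environment at env_path;
    # a list of arguments needs no shell quoting and no shell=True.
    return ["conda", "run", "-p", env_path, "--no-capture-output", *cmd]

print(conda_run_argv("/tmp/temp_venv", "python", "test.py"))
# To actually execute (requires conda on PATH):
# subprocess.run(conda_run_argv("/tmp/temp_venv", "python", "test.py"), check=True)
```

One call per command keeps error handling simple: subprocess.run(..., check=True) raises as soon as one step fails.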

Module installs but doesn't import

I'm on macOS Catalina and I'm trying to install and run Mephisto (see https://github.com/facebookresearch/mephisto/blob/master/docs/quickstart.md). I created a python3 virtual environment, then went to the directory and ran
sudo pip3 install -e .
This seems to have run fine, as I can now run mephisto and see the list of commands and options. However, when I run mephisto register mturk, it throws No module named 'mephisto.core.argparse_parser' because of an import statement in the Python file. This seems like a general issue of a module installing but not importing properly; I would appreciate help in how to fix it. Is it because my $PYTHONPATH is currently empty?
Mephisto Lead here! This seems to have been a case of unfortunate timing, as we were in the midst of a refactor there and some code got pushed to master that should've received more scrutiny. We'll be moving to stable releases via PyPI in the near future to prevent things like this!
I created a python3 virtual environment and then went to the directory and ran
sudo pip3 install -e .
You should not have used sudo to install this library if you meant to install it in a virtual environment. With sudo, the library probably got installed in the global environment, not in the virtual environment.
Typically:
create a virtual environment:
python3 -m venv path/to/venv
install tools and libraries in this environment with:
path/to/venv/bin/python -m pip install Mephisto
use python in the virtual environment:
path/to/venv/bin/python -c 'import mephisto'
use a tool in the virtual environment:
path/to/venv/bin/mephisto
Is it because my $PYTHONPATH is currently empty?
Forget PYTHONPATH. Basically, one should never have to modify this environment variable; advice that gets PYTHONPATH involved is almost always ill-informed.
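A quick way to diagnose this kind of problem is to ask Python where the module was actually loaded from; if module.__file__ points outside the virtual environment, the global copy is being imported. A sketch, using the stdlib json module as a stand-in for mephisto:

```python
import sys
import json  # stand-in for the package being diagnosed

# The running interpreter and the file the module was loaded from
# should both live under the virtual environment's path.
print("interpreter:", sys.executable)
print("module file:", json.__file__)
```

If the two paths disagree about which environment they belong to, the wrong installation is being picked up.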
Check that an __init__.py file is in the module's directory. If not, try creating an empty one.

Unable to run python script in virtualenv using bash script

I want to run a Python script in the built-in Anaconda environment tensorflow_p36. To check whether I am in the virtual environment or not, I am using the command pip -V.
My first attempt at bash script:
#!/bin/bash
source activate tensorflow_p36
python /home/ec2-user/abc/temp.py
pip -V
Note: tensorflow_p36, being a built-in environment, does not need to be activated from a specific /env/bin directory; it can be activated from any directory. I think this is a feature of the Amazon Deep Learning AMIs.
My second attempt at bash script:
#!/bin/bash
pythonEnv="/home/ec2-user/anaconda3/envs/tensorflow_p36/"
source ${pythonEnv}bin/activate
${pythonEnv}bin/python /home/ec2-user/abc/temp.py
pip -V
Note: When I run source /home/ec2-user/anaconda3/envs/tensorflow_p36/bin/activate in a terminal, the environment isn't activated either.
Each time, I am getting the same result:
pip 9.0.1 from /home/ec2-user/anaconda3/lib/python3.6/site-packages (python 3.6)
Whereas, I should be getting:
pip 9.0.1 from /home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages (python 3.6)
Can someone please explain how I can activate the virtual environment and run a Python script from that environment? I need to use this particular environment because of the dependencies installed in it.
Extra info:
Not sure if it matters, but the tensorflow_p36 is a conda environment, not a virtualenv.
This works with virtualenv. Create the environment:
virtualenv -p python3.6 tensorflow_p36
Then change the script to:
#!/bin/bash
source $HOME/tensorflow_p36/bin/activate
python /home/ec2-user/abc/temp.py
I believe the confusion comes from the fact that you are using Anaconda, not virtualenv, to create your Python environment. The two tools work differently.
If you are using an EC2 instance, why not install tensorflow_p36 globally anyway?
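One way to see what is actually happening is a small diagnostic at the top of temp.py (a sketch): when the right environment is running the script, both printed paths should contain envs/tensorflow_p36.

```python
import sys

# Print which interpreter is executing this script and which
# environment prefix it belongs to.
print("interpreter:", sys.executable)
print("prefix:", sys.prefix)
```

This is more reliable than pip -V, because it reports on the interpreter that runs the script itself, not whichever pip happens to be first on PATH.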

How to pip-install Python package into virtual env and have CLI commands accessible in normal shell

For larger Python packages which might interfere with other packages, it is recommended to install them into their own virtual environment. Some Python packages also expose CLI commands to the shell.
Is there a way to pip-install such a package into its own virtual environment,
but have the CLI commands accessible from a normal shell without switching
beforehand manually to this virtual environment?
Here an example: When I install csvkit via
pip install csvkit
I have the commands csvcut, csvlook, csvgrep and others available in my
shell. However, if I do not want to install csvkit in my system Python and
instead install it in a virtual environment, say at ~/venvs/csvkit, I have
csvkit available only when I have manually activated that environment.
Is there a way to create the virtual environment and install csvkit in it,
so that the commands like csvcut activate the environment themselves before
they run?
A newer tool, which is still very well maintained, is pipx - Install and Run Python Applications in Isolated Environments. It works similarly to pipsi:
First install pipx. (See pipx installation)
Then issue:
pipx install csvkit
Finally make sure pipx's bin directory (normally ~/.local/bin) is in your PATH.
Please note, pipx has additional commands for maintaining and inspecting of the generated venvs - see pipx --help.
You can create aliases, such as csvcut, and point them to source ~/venvs/csvkit/bin/activate && csvcut && deactivate
If these programs accept parameters, you can use functions instead, defined in your .bashrc file:
csvcut() {
    # do things with the parameters, e.g.:
    source ~/venvs/csvkit/bin/activate
    command csvcut "$@"
    deactivate
}
Here, command csvcut bypasses the function's own name so it doesn't call itself recursively, and "$@" forwards all arguments rather than just the first five.
To call the function, just use the csvcut <your_parameter> command.
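An alternative that avoids activate/deactivate entirely is to call the virtualenv's console script by its absolute path: its shebang already points at the venv's interpreter. A sketch of a helper (the ~/venvs/csvkit location is an assumption from the question):

```python
import os

def venv_command(venv_dir, name):
    # Absolute path to a console script inside the virtualenv's bin/.
    # Running it directly needs no activation: the script's shebang
    # already selects the venv's own interpreter.
    return os.path.join(os.path.expanduser(venv_dir), "bin", name)

print(venv_command("~/venvs/csvkit", "csvcut"))
# e.g. subprocess.run([venv_command("~/venvs/csvkit", "csvcut"), "--help"])
```

Resolving the path this way is essentially what pipsi and pipx automate with their symlinks in ~/.local/bin.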
Use pipsi. Here is a description from the project's README:
pipsi installs each package into ~/.local/venvs/PKGNAME and then symlinks all new scripts into ~/.local/bin (these can be changed by PIPSI_HOME and PIPSI_BIN_DIR env variables respectively).
Compared to pip install --user each PKGNAME is installed into its own virtualenv, so you don't have to worry about different PKGNAMEs having conflicting dependencies.
It works a treat for csvkit:
First install pipsi.
Then issue:
pipsi install csvkit
Finally make sure pipsi's bin directory (normally ~/.local/bin) is in your PATH.
That's it! Now you can type on the command line, e.g.
csvcut --help
which calls csvcut in its own virtualenv.
There is no need to manually activate a virtualenv, and your system Python does not get polluted by additional packages (apart from the pipsi package itself, once and for all).

Install python module only on home folder in server

I'm developing my master's thesis on a university server, so I have my own account and I can log in and do all the stuff I want as long as I remain inside /home/myname/.
I'm developing some Python scripts, and now I want to integrate Python with the octave module, which is not currently installed on the system; of course, I cannot do anything with sudo apt-get install.
How can I overcome this problem without asking my teacher?
thank you all,
Fabio
Please don't copy python and pip. You should use a virtualenv to install project-specific packages. This is particularly useful in your use-case where you can't install things at the system level. Even if you could, virtualenvs are recommended so the dependencies of each project are isolated.
Here is a quick primer that should get you going.
Create the virtualenv
virtualenv ~/project/env
Activate the virtualenv
source ~/project/env/bin/activate
This will modify your bash prompt by placing the name of your virtualenv in parenthesis to indicate that your virtualenv is activated.
(env) hostname:current_folder user$
Install Packages into the virtualenv
pip install -r requirements.txt
Use the virtualenv
python script.py
Use virtualenv by default in a script
script.py
#!/home/myname/project/env/bin/python
print('hello world!')
(Shebang lines are not tilde-expanded, so use the absolute path to the virtualenv's interpreter rather than #!~/project/env/bin/python.)
Then from the command line
chmod ugo+x script.py
./script.py
hello world!
Deactivate the virtualenv
deactivate
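To verify from inside a script that the virtualenv is actually active, Python 3.3+ can compare sys.prefix with sys.base_prefix (a sketch):

```python
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the environment while
    # sys.base_prefix still points at the base interpreter.
    return sys.prefix != sys.base_prefix

print("virtualenv active:", in_virtualenv())
```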
Make yourself a local copy of python and pip, then you can install whatever modules you want and not have to worry about getting a sysadmin to help you.
There are some good instructions here
Go here to get the link to the version of python you need and substitute it in the instructions above.
In your .bashrc, add an alias and a PATH entry for your local copy (you may need to modify this for your own situation):
alias python="~/bin/python"
PATH=~/.local/bin:~/bin:$PATH
For the PATH - when you install local copies of modules through pip they by default go to ~/.local - change this if you prefer.
Begin your scripts with:
#!/usr/bin/env python
so they use your preferred Python version.
