Unable to activate virtualenv from Python script (subprocess | powershell) - python

It is straightforward to activate a virtualenv from PowerShell on Windows with the ./venv/Scripts/activate command, or with an absolute path like the one below.
But when I execute the same command from a Python script that runs commands in PowerShell, the virtualenv doesn't activate and I can't run pip install something commands inside it. That means I can't add packages or even upgrade pip inside the virtualenv (surely because it isn't activated correctly).
Note
I'm confident about the implementation of the code, because it works fine for other commands. The only problem seems to be with the C:/temp/venv/Scripts/activate command sent to PowerShell. I'm looking for something like source on Linux to activate the virtualenv.
Here is my code:
installer.py script: runs different commands inside powershell with subprocess, and returns the result.
# installer.py
import subprocess

class Installer:
    def run(self, command):
        # Some code here
        proc = subprocess.Popen(
            ['powershell.exe', command],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        # Some code here
install.py script: sends commands to the Installer class
# install.py
from installer import Installer
installer = Installer()
installer.run('C:/temp/venv/Scripts/activate')

SOLUTION
It turned out I didn't need to activate the virtualenv at all. I can run pip install commands by sending the following command to subprocess:
installer.run('C:/temp/venv/Scripts/python.exe -m pip install somepackage')
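More generally, invoking the virtualenv's own python executable with -m pip sidesteps activation entirely. A minimal sketch using only the stdlib (the demo-venv directory name is made up for illustration):

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

# Create a throwaway venv (the path here is just an example)
env_dir = Path(tempfile.mkdtemp()) / "demo-venv"
venv.create(env_dir, with_pip=True)

# The interpreter lives in Scripts/ on Windows and bin/ elsewhere
bindir = "Scripts" if sys.platform == "win32" else "bin"
python = env_dir / bindir / ("python.exe" if sys.platform == "win32" else "python")

# No activation needed: the venv's own python resolves its own site-packages
result = subprocess.run([str(python), "-m", "pip", "--version"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())
```

Because each package operation goes through the venv's interpreter by full path, the shell's PATH never matters.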

Related

How to install Python packages from Windows command line?

I have a script that I wrote in Google Colab, and I'm installing the Geopandas, pandas, and descartes packages using !pip install [package] at the top of the script. However, when I run this script from the command line (I open the terminal in Anaconda), I get a syntax error.
In normal Python, packages are installed by typing:
pip install <pkg_name>
into either PowerShell, Command Prompt, or any other shell you have. Plain Python does not support the !pip installation syntax inside a script; it is notebook-only.
However, you could in theory write code that runs the shell commands to install the packages, something like the following:
import subprocess
import sys

package_names = ['geopandas', 'pandas', 'descartes']
for pkg in package_names:
    try:
        # Use the running interpreter so pip installs into its environment
        process = subprocess.run(
            [sys.executable, '-m', 'pip', 'install', pkg],
            check=True, text=True,
            stderr=subprocess.PIPE, stdout=subprocess.PIPE,
        )
        print(process)
    except subprocess.CalledProcessError as e:
        print(e.returncode, e.stderr, e.output)
But as you can see, this isn't the simplest code, and it's bad practice: it ignores environments and the cleanliness of the package installations. It is better to create a virtual environment in your project folder and install the dependencies there:
cd %userprofile%\path\to\project\folder
py -m venv .\project-env
.\project-env\scripts\activate.bat
pip install geopandas pandas descartes
pip freeze > .\requirements.txt
The above (in the Windows Command Prompt, aka cmd.exe) creates a virtual environment, which isolates all packages for that project and avoids dependency conflicts, then activates it. It then installs the packages and writes a requirements file that you can ship with your code.
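The same workflow can also be driven from Python with the stdlib venv module, skipping the activate step by calling the venv's pip by full path (directory names below are illustrative):

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

project = Path(tempfile.mkdtemp())      # stand-in for your project folder
env_dir = project / "project-env"
venv.create(env_dir, with_pip=True)     # like `py -m venv .\project-env`

bindir = "Scripts" if sys.platform == "win32" else "bin"
pip = env_dir / bindir / ("pip.exe" if sys.platform == "win32" else "pip")

# Install deps by full path to the venv's pip (no activation needed), e.g.:
# subprocess.check_call([str(pip), "install", "geopandas", "pandas", "descartes"])

# Write the requirements file, like `pip freeze > requirements.txt`
frozen = subprocess.run([str(pip), "freeze"], capture_output=True, text=True, check=True)
(project / "requirements.txt").write_text(frozen.stdout)
print("wrote", project / "requirements.txt")
```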
I hope this helps, good luck :D

Conda command working in command prompt but not in bash script

My Anaconda (4.5.4) works fine as long as I use it via a Linux terminal (bash shell). However, running conda commands in a bash script does not work at all.
The script test.sh contains these lines:
#!/bin/bash
conda --version
conda activate env
Now, running bash test.sh results in the error
test.sh: line 2: conda: command not found
test.sh: line 3: conda: command not found
As recommended for anaconda version > 4.4 my .bashrc does not contain
export PATH="/opt/anaconda/bin:$PATH",
but
. /opt/anaconda/etc/profile.d/conda.sh
Thank you.
I solved the problem thanks to @darthbith's comment.
Since conda is a bash function and bash functions can not be propagated to independent shells (e.g. opened by executing a bash script), one has to add the line
source /opt/anaconda/etc/profile.d/conda.sh
to the bash script before calling conda commands. Otherwise bash will not know about conda.
If @randomwalker's method doesn't work for you, which it won't any time your script is run in a more basic shell such as sh, then you have two options.
Add this to your script: eval $(conda shell.bash hook)
Call your script with: bash -i <scriptname> so that it runs in your interactive environment.
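Both options amount to running everything in a single shell invocation. A small helper can build that combined command; the conda.sh path below matches the question's install and is an assumption for other setups:

```python
def conda_command(env, command,
                  conda_sh="/opt/anaconda/etc/profile.d/conda.sh"):
    """Build one shell command that sources conda's hook script,
    activates an environment, then runs the given command."""
    return f"source {conda_sh} && conda activate {env} && {command}"

# The whole string must run in one bash process, e.g.:
#   subprocess.run(["bash", "-c", conda_command("env", "conda --version")])
print(conda_command("env", "conda --version"))
```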
Let's say you access the server as user miky@server. First, when you log in as your user, find the conda path with which conda; you will probably get a path such as /home/miky/anaconda3/bin/conda.
Then run your conda commands by full path (in my example I use conda to install a MySQL connector): ssh miky@server -t "/home/miky/anaconda3/bin/conda install -y -c anaconda mysql-connector-python". That's all.
Do sudo ln -s /home/<user>/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh and try again. This should activate conda for all users permanently.

Set up virtualenv with Paramiko SSH

I have some limited experience with Python and Django in Windows, and now I am trying to understand how to deploy my code to an Ubuntu 16.04 LTS VPS. Having read various tutorials and a lot of answers on SE, I managed to proceed pretty far (well, for me), but now I am stuck.
Manually (via Putty) I can do the following:
# check that Python 3.5 is installed
python3 --version
# install pip
sudo -kS apt-get -y install python3-pip
# upgrade pip to newest version
pip3 install --upgrade pip
# check result
pip3 --version
# install venv
sudo -kS pip3 install virtualenv virtualenvwrapper
# create venv
virtualenv ~/Env/firstsite
# make sure venv is created
ls -l ~/Env/firstsite/bin/python # /home/droplet/Env/firstsite/bin/python3.5 -> python3
# switch on venv
source ~/Env/firstsite/bin/activate # (firstsite) droplet@hostname:~$
# check that python3 is taken from venv
which python3 # /home/droplet/Env/firstsite/bin/python3
So the virtual environment is properly created and switched on. I could proceed installing Django.
However, when I try to do exactly the same in an automated fashion using Paramiko (I execute commands with paramiko.SSHClient().exec_command(cmd, input_string, get_pty=False)), everything goes exactly the same way until the last command:
exec_command('which python3')
returns /usr/bin/python3. So I assume source activate doesn't work via Paramiko's SSH.
Why?
How can I cope with it?
Can I check that the venv is enabled in some more direct (and reliable) way?
Taken from @Pablo Navarro's answer here: How to source virtualenv activate in a Bash script, which helped me with this same issue (activating environments in a Paramiko SSH session).
In the exec_command, give the path to the Python executable within the environment, e.g.:
stdin, stdout, stderr = ssh.exec_command('/path/to/env/bin/python script.py')
In my case (using miniconda and an env called pyODBC):
stdin, stdout, stderr = ssh.exec_command('~/miniconda2/envs/pyODBC/bin/python run_script.py')
Running the command ~/miniconda2/envs/pyODBC/bin/python -m pip list printed the list of modules in this env, confirming it works.
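The same full-interpreter-path idea fits in a tiny helper that builds the remote command string (paths below are placeholders):

```python
import shlex

def env_python_command(env_python, script, *args):
    """Build a remote command that runs `script` with the environment's
    own interpreter, so no activation step is needed over SSH."""
    return " ".join(shlex.quote(part) for part in (env_python, script, *args))

# e.g. ssh.exec_command(env_python_command("/path/to/env/bin/python", "script.py"))
print(env_python_command("/path/to/env/bin/python", "script.py", "--verbose"))
```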
We can easily activate the virtualenv and execute commands in it.
Example:
import paramiko
hostname = 'host'
port = 22
username = 'root'
password = 'root'
s = paramiko.SSHClient()
s.load_system_host_keys()
s.set_missing_host_key_policy(paramiko.AutoAddPolicy())
s.connect(hostname, port, username, password)
command = 'source /root/Envs/env/bin/activate;python3 --version;qark;echo hello'
(stdin, stdout, stderr) = s.exec_command(command)
for line in stdout.readlines():
    print(line)
for line in stderr.readlines():
    print(line)
s.close()
If you are using Anaconda and creating your virtual environments that way, I found a workaround. Taken from [this github page][1], I send the following command to my remote PC through Paramiko:
f'source ~/anaconda3/etc/profile.d/conda.sh && conda activate {my_env} && {command}'
I also wish you could just activate a venv and have all the following commands run in it, but this workaround is nice since the only thing I have to change is the venv name. Since everything is in one line, it executes perfectly and I don't need to reactivate anything. If you wrap it in a Python function it becomes very easy to use and read. Something like this:
def venv_wrapper(command, ssh, venv=None):
    if venv:
        conda_location = 'source ~/anaconda3/etc/profile.d/conda.sh'
        activate_env = f'conda activate {venv}'
        command = f'{conda_location} && {activate_env} && {command}'
    ssh.exec_command(command, get_pty=True)
I just send all of my commands through this code (which is a little more developed/complicated in my own toolkit) whether or not I'm using a venv. It works pretty nicely so far.
[1]: https://github.com/conda/conda/issues/7980

How to run `pip` in a virtualenv with subprocess.check_call()?

I'm trying to launch a command in different Python virtualenvs using subprocess.check_call().
To activate the virtualenv (in a Python 2/3 agnostic manner), I simply append the path to my virtualenv bin (or Scripts under Windows) to the PATH and then call subprocess.check_call() with this modified environment. Like so:
environment = os.environ.copy()
environment['PATH'] = os.pathsep.join([bin_path, environment['PATH']])
subprocess.check_call("pip install -r dev_requirements.txt", env=environment)
However, what I notice is that pip installs everything in the system Python site-packages. If I change the check_call() for:
subprocess.check_call("pip install -r dev_requirements.txt", env=environment, shell=True)
Then suddenly pip operates in the virtualenv as expected.
What bothers me is that in both cases running where pip gives me the path to the virtualenv's pip first.
What could explain this odd behavior ?
PATH is not the first place where CreateProcess(), used by Popen() on Windows, looks for the executable. It may use pip.exe from the same directory as the parent python.exe process. The shell (cmd.exe) uses different rules. See Popen with conflicting executable/path.
To avoid the dependency; use the explicit full path to pip. You don't need to change the environment in this case:
import os
from subprocess import check_call
check_call([os.path.join(bin_path, 'pip.exe')] +
           'install -r dev_requirements.txt'.split())
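A platform-agnostic version of that full-path idea (`bin_path` is whatever directory holds the virtualenv's executables):

```python
import os
import sys

def venv_pip(bin_path):
    """Return the explicit path to a virtualenv's pip executable,
    avoiding any PATH lookup by CreateProcess() or the shell."""
    exe = "pip.exe" if sys.platform == "win32" else "pip"
    return os.path.join(bin_path, exe)

# e.g. check_call([venv_pip(bin_path), "install", "-r", "dev_requirements.txt"])
print(venv_pip("/tmp/venv/bin"))
```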

Activate virtualenv via os.system()

I'm writing a Python-based shell script to boilerplate a Django app with virtualenv, pip, and fabric. Should be straightforward enough, but it appears that I'm not able to activate and run commands in the virtualenv through the shell script.
os.system('virtualenv %s --no-site-packages' % project_name)
os.system('source %s/bin/activate' % project_name)
os.system('easy_install pip')
When running, this errors out:
$ startproject+ -s false sample
New python executable in sample/bin/python
Installing setuptools............done.
/testing
Searching for pip
Best match: pip 0.4
Processing pip-0.4-py2.6.egg
pip 0.4 is already the active version in easy-install.pth
Installing pip script to /usr/local/bin
error: /usr/local/bin/pip: Permission denied
Obviously the source line isn't being run, but why? Is it a concurrency/threading issue, or something deeper with virtualenv?
Thanks!
Each call to os.system runs the command in a new subshell, which starts with the same properties as the original Python process and is discarded when the call returns.
Try putting the commands into one string separated by semicolons (or &&).
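The subshell behavior is easy to demonstrate: each os.system() call below gets its own shell, so only the chained form sees the variable (a POSIX shell is assumed; the variable name is arbitrary):

```python
import os

# This sets MY_VENV_DEMO in a shell that exits immediately afterwards...
os.system("MY_VENV_DEMO=bar")
# ...so a second call runs in a fresh shell where the variable is unset:
separate = os.system('test -n "$MY_VENV_DEMO"')        # non-zero exit status

# Chaining the commands into one string keeps them in the same shell:
combined = os.system('MY_VENV_DEMO=bar; test -n "$MY_VENV_DEMO"')  # exit status 0
print(separate, combined)
```

The same applies to source activate: it only helps if the commands that should run inside the virtualenv share its shell.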
Just don't use source activate at all. It does nothing but alter your shell PATH to put the virtualenv's bin directory first. I presume your script knows the directory of the virtualenv it has just created; all you have to do is call <virtualenv_dir>/bin/easy_install by full path, or <virtualenv_dir>/bin/python to run any other Python script within the virtualenv.
Each os.system call creates a new process. You'll need to ensure that the activate and the easy_install are run in the same os.system or subprocess call.
You could also install virtualenvwrapper, and use the postmkvirtualenv hook. I use it to automatically bring in fresh copies of pip and IPython into virtualenvs I create (as I don't want it using my system IPython). I also use it to copy pythonw into the virtualenv, otherwise wx-based stuff won't work. Looks like this:
easy_install pip
pip install -I ipython
cd ~/bin
python install_pythonw.py ${VIRTUAL_ENV}
