I have some limited experience with Python and Django in Windows, and now I am trying to understand how to deploy my code to an Ubuntu 16.04 LTS VPS. Having read various tutorials and a lot of answers on SE, I managed to proceed pretty far (well, for me), but now I am stuck.
Manually (via Putty) I can do the following:
# check that Python 3.5 is installed
python3 --version
# install pip
sudo -kS apt-get -y install python3-pip
# upgrade pip to newest version
pip3 install --upgrade pip
# check result
pip3 --version
# install venv
sudo -kS pip3 install virtualenv virtualenvwrapper
# create venv
virtualenv ~/Env/firstsite
# make sure venv is created
ls -l ~/Env/firstsite/bin/python # /home/droplet/Env/firstsite/bin/python3.5 -> python3
# switch on venv
source ~/Env/firstsite/bin/activate # (firstsite) droplet@hostname:~$
# check that python3 is taken from venv
which python3 # /home/droplet/Env/firstsite/bin/python3
So the virtual environment is properly created and switched on. I could proceed installing Django.
However, when I try to do exactly the same in an automated regime using Paramiko (I execute commands with paramiko.SSHClient().exec_command(cmd, input_string, get_pty=False)), everything goes exactly the same way until the last command:
exec_command('which python3')
returns /usr/bin/python3. So I assume source activate doesn't work via Paramiko's SSH.
Why?
How can I cope with it?
Can I check that the venv is enabled in some more direct (and reliable) way?
@Pablo Navarro's answer to How to source virtualenv activate in a Bash script helped me with this same issue (activating environments in a Paramiko SSH session).
In the exec_command, give the path to the python executable within the environment, e.g.:
stdin, stdout, stderr = ssh.exec_command('/path/to/env/bin/python script.py')
In my case (using miniconda and a env called pyODBC):
stdin, stdout, stderr = ssh.exec_command('~/miniconda2/envs/pyODBC/bin/python run_script.py')
Running the command ~/miniconda2/envs/pyODBC/bin/python -m pip list printed the list of modules in this env, as confirmation.
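The same idea can be sketched locally with only the stdlib (no SSH needed; the paths here are illustrative): create a venv and call its interpreter by absolute path. Asking that interpreter for sys.prefix is also a more direct check than which python3 that you really are inside the environment:

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    env_dir = Path(tmp) / "firstsite"
    # with_pip=False keeps creation fast; pip isn't needed for this check
    venv.EnvBuilder(with_pip=False).create(env_dir)
    # POSIX layout is bin/python; on Windows it would be Scripts/python.exe
    py = env_dir / ("Scripts/python.exe" if sys.platform == "win32" else "bin/python")
    # Calling the env's interpreter by absolute path needs no `activate` at all
    out = subprocess.run(
        [str(py), "-c", "import sys; print(sys.prefix)"],
        capture_output=True, text=True,
    )
    prefix = out.stdout.strip()
    print(prefix)  # ends with .../firstsite, not /usr
```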
We can also activate the virtualenv and execute commands inside it, all within a single exec_command call.
Example:
import paramiko
hostname = 'host'
port = 22
username = 'root'
password = 'root'
s = paramiko.SSHClient()
s.load_system_host_keys()
s.set_missing_host_key_policy(paramiko.AutoAddPolicy())
s.connect(hostname, port, username, password)
command = 'source /root/Envs/env/bin/activate;python3 --version;qark;echo hello'
(stdin, stdout, stderr) = s.exec_command(command)
for line in stdout.readlines():
    print(line)
for line in stderr.readlines():
    print(line)
s.close()
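Why does chaining with ; inside one exec_command work, when a separate exec_command('source ...') does not? Each exec_command runs in a fresh shell, so shell state never survives between calls. The effect can be demonstrated locally with subprocess (a sketch, assuming bash is available):

```python
import subprocess

# Each subprocess.run() starts a fresh shell, just as each paramiko
# exec_command() starts a fresh remote shell.
first = subprocess.run(
    ["bash", "-c", "export MARKER=venv-on; echo $MARKER"],
    capture_output=True, text=True,
)
second = subprocess.run(
    ["bash", "-c", "echo $MARKER"],  # new shell: MARKER is gone
    capture_output=True, text=True,
)
print(repr(first.stdout.strip()))   # 'venv-on'
print(repr(second.stdout.strip()))  # '' -- the export did not survive
```

source .../activate behaves the same way: it only mutates the shell that ran it, which is why everything has to go into one command line.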
If you are using anaconda and creating your virtual environments that way, I found a workaround. Taken from [this github page][1], I send the following command to my remote PC through Paramiko:
f'source ~/anaconda3/etc/profile.d/conda.sh && conda activate {my_env} && {command}'
I also wish you could just activate a venv and have all following commands run inside it, but this workaround is nice since the only thing I have to change is the venv name. Since everything is on one line, it executes perfectly and I don't need to reactivate anything. If you wrap it in a small Python function, it becomes easy to use and read. Something like this:
def venv_wrapper(command, ssh, venv=None):
    if venv:
        conda_location = 'source ~/anaconda3/etc/profile.d/conda.sh'
        activate_env = f'conda activate {venv}'
        command = f'{conda_location} && {activate_env} && {command}'
    ssh.exec_command(command, get_pty=True)
I just send all of my commands through this code (which is a little more developed in my own toolkit) whether or not I'm using a venv. It works pretty nicely so far.
[1]: https://github.com/conda/conda/issues/7980
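Since the workaround is just string assembly, the command construction can be unit-tested locally before any SSH is involved. A minimal sketch (the conda.sh path is the assumption from the answer above; build_conda_command is a hypothetical helper name):

```python
def build_conda_command(command, venv=None,
                        conda_sh="~/anaconda3/etc/profile.d/conda.sh"):
    """Chain env activation and the payload with && so that a failure
    at any step aborts the whole line."""
    if venv is None:
        return command
    return f"source {conda_sh} && conda activate {venv} && {command}"

cmd = build_conda_command("python --version", venv="my_env")
print(cmd)
# source ~/anaconda3/etc/profile.d/conda.sh && conda activate my_env && python --version
```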
Related
Currently I am trying to, in a Python script:
create a conda venv in a temp dir, with a different Python version from the one installed on my system
install some packages into this temp conda venv
execute another Python script using this new venv
kill the process (which is automatic, since it is under with ... as ..:)
import subprocess
from tempfile import TemporaryDirectory
with TemporaryDirectory() as tmpdir:
    subprocess.call(
        f"""
        conda create -p {tmpdir}/temp_venv python=3.8 <<< y;
        conda activate {tmpdir}/temp_venv && pip install <some_package>==XXX;
        {tmpdir}/temp_venv/bin/python /path/to/python/script/test.py
        """,
        shell=True)
The point is that when I try this approach, I get the following error
**CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
You may need to close and restart your shell after running 'conda init'.**
I have already tried running conda init bash but the error persists.
I have also tried to use the Venv package for that but unfortunately it does not let me create a venv with a python version that is not installed in the system.
So, the problem is that conda expects your shell to be initialized normally (an interactive shell). But when you use subprocess, you get a non-login, non-interactive shell. One hack is to manually call the shell startup script. For example, on my MacBook Pro:
subprocess.run(
    f"""
    conda create -y -p {tmpdir}/temp_venv python=3.8;
    conda init
    source ~/.bash_profile
    conda activate {tmpdir}/temp_venv && pip install <some_package>==XXX;
    {tmpdir}/temp_venv/bin/python /path/to/python/script/test.py
    """,
    shell=True)
Of course, this is going to be a bit platform dependent. For example, on Ubuntu, you are going to want to use:
source ~/.bashrc
instead.
A more portable solution would be to get subprocess.run to use an interactive shell, which would automatically call those startup scripts according to your OS's conventions (which conda handles setting up correctly).
So, this is definitely a hack, but it should work.
BTW, if you are using conda, you might as well use:
conda create -y -p {tmpdir}/temp_venv python=3.8 <some_package>==XXX
instead of a separate:
pip install <some_package>==XXX;
A less hacky alternative is to use conda run, which will run a script in the conda environment. So something like:
subprocess.run(
    f"""
    conda create -y -p {tmpdir}/temp_venv python=3.8;
    conda run -p {tmpdir}/temp_venv --no-capture-output pip install <some_package>==XXX;
    conda run -p {tmpdir}/temp_venv --no-capture-output python /path/to/python/script/test.py
    """,
    shell=True)
I hesitate to recommend conda run because, at least a few years ago, it was considered "broken" for various subtle reasons, although in simple cases it works. I think it is still considered an "experimental feature", so use it with that caveat in mind, but it should be more portable.
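One extra pitfall with shell=True is quoting; conda run composes naturally as an argv list instead. A sketch (conda itself is not invoked here, and the helper name conda_run_argv is hypothetical):

```python
def conda_run_argv(env_prefix, *payload):
    # `conda run -p <prefix> --no-capture-output <cmd...>` as an argv list,
    # which sidesteps shell=True quoting issues entirely.
    return ["conda", "run", "-p", env_prefix, "--no-capture-output", *payload]

argv = conda_run_argv("/tmp/temp_venv", "python", "/path/to/python/script/test.py")
print(argv)
# Then, assuming conda is on PATH:
# subprocess.run(argv, check=True)
```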
It is pretty straightforward to activate a virtualenv from Windows PowerShell, with the ./venv/Scripts/activate command or with an absolute path such as C:/temp/venv/Scripts/activate.
But when I execute the same command from a Python script that runs commands in PowerShell, the virtualenv doesn't activate, and I can't run pip install something commands inside it. That means I can't add packages or even upgrade pip inside the virtualenv (surely because it isn't activated correctly).
Note
I'm confident about the implementation of the code, because it works fine for other commands. The only problem might be with the C:/temp/venv/Scripts/activate command sent to PowerShell. I'm looking for some command like source in Linux to activate that virtualenv.
Here is my code:
installer.py script: runs different commands inside powershell with subprocess, and returns the result.
# installer.py
import subprocess

class Installer:
    def run(self, command):
        # Some code here
        proc = subprocess.Popen(
            ['powershell.exe', command],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        # Some code here
install.py script: sends commands to the Installer class
# install.py
from installer import Installer
installer = Installer()
installer.run('C:/temp/venv/Scripts/activate')
SOLUTION
It turned out I didn't need to activate the virtualenv. I could simply run pip install commands with the following command sent to subprocess:
installer.run('C:/temp/venv/Scripts/python.exe -m pip install somepackage')
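This no-activation pattern can be sketched cross-platform with the stdlib; the only Windows-specific detail is the Scripts/ directory (bin/ on POSIX). The venv is created in a temp dir purely for illustration:

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    venv.create(tmp, with_pip=True)  # stands in for C:/temp/venv
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    exe = "python.exe" if sys.platform == "win32" else "python"
    py = Path(tmp) / bindir / exe
    # `<venv python> -m pip ...` works without ever activating the env
    result = subprocess.run([str(py), "-m", "pip", "--version"],
                            capture_output=True, text=True)
    print(result.stdout.strip())
```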
I create a virtual environment and run a PySpark script. If I do these steps on macOS, everything works fine. However, if I run them on Linux (Ubuntu 16), then the incorrect version of Python is picked up. Of course, I previously did export PYSPARK_PYTHON=python3 on Linux, but the issue persists. Below I explain all the steps:
1. Edit the profile: vim ~/.profile
2. Add this line to the file: export PYSPARK_PYTHON=python3
3. Execute the command: source ~/.profile
Then I do:
pip3 install --upgrade pip
pip3 install virtualenv
wget https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
tar -xvzf spark-2.4.0-bin-hadoop2.7.tgz && rm spark-2.4.0-bin-hadoop2.7.tgz
virtualenv test-ve
source test-ve/bin/activate && pip install -r requirements.txt
If I execute python --version inside the virtual environment, I see Python 3.5.2.
However when I run Spark code with this command: sudo /usr/local/spark-2.4.0-bin-hadoop2.7/bin/spark-submit mySpark.py, I get Using Python version 2.7... for these lines of code:
print("Using Python version %s (%s, %s)" % (
platform.python_version(),
platform.python_build()[0],
platform.python_build()[1]))
PYSPARK_PYTHON sets the interpreter used to execute Python on the worker nodes. There's a separate environment variable called PYSPARK_DRIVER_PYTHON that sets it for the driver node (i.e. the node on which your script initially runs). So you need to set PYSPARK_DRIVER_PYTHON=python3 too.
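If you launch spark-submit from Python rather than a shell, both variables can be passed explicitly through the env argument (a sketch; the spark-submit call is left commented out since it assumes Spark is installed):

```python
import os

# Copy the current environment and pin the interpreter for both
# the driver and the workers
env = dict(os.environ,
           PYSPARK_PYTHON="python3",
           PYSPARK_DRIVER_PYTHON="python3")
print(env["PYSPARK_PYTHON"], env["PYSPARK_DRIVER_PYTHON"])
# import subprocess
# subprocess.run(["spark-submit", "mySpark.py"], env=env, check=True)
```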
Edit
As phd points out, you may be running into trouble with your environment because you're using sudo to call the PySpark submit. One thing to try is sudo -E instead of plain sudo; the -E option preserves your environment (though it isn't perfect).
If that fails, you can try setting the spark.pyspark.driver.python and spark.pyspark.python options directly. For example, you can pass the desired values into your call to spark-submit:
sudo /usr/local/spark-2.4.0-bin-hadoop2.7/bin/spark-submit --conf spark.pyspark.driver.python=python3 --conf spark.pyspark.python=python3 mySpark.py
There are a bunch of different ways to set these options (see this doc for full details). If one doesn't work or is inconvenient for you, try another.
My Anaconda (4.5.4) works fine as long as I use it from a Linux terminal (bash shell). However, running conda commands in a bash script does not work at all.
The script test.sh contains these lines:
#!/bin/bash
conda --version
conda activate env
Now, running bash test.sh results in the error
test.sh: line 2: conda: command not found
test.sh: line 3: conda: command not found
As recommended for Anaconda versions > 4.4, my .bashrc does not contain
export PATH="/opt/anaconda/bin:$PATH",
but
. /opt/anaconda/etc/profile.d/conda.sh
Thank you.
I solved the problem thanks to @darthbith's comment.
Since conda is a bash function, and bash functions cannot be propagated to independent shells (e.g. one opened by executing a bash script), one has to add the line
source /opt/anaconda/etc/profile.d/conda.sh
to the bash script before calling conda commands. Otherwise bash will not know about conda.
If @randomwalker's method doesn't work for you, which it won't any time your script is run in a more basic shell such as sh, then you have two options.
Add this to your script: eval $(conda shell.bash hook)
Call your script with: bash -i <scriptname> so that it runs in your interactive environment.
Let's say you are trying to access the server as user miky@server. First, when you log in as your user, find the conda path with which conda; you will probably get a path such as /home/miky/anaconda3/bin/conda.
Then issue your conda commands as follows (in my example I use conda to install a MySQL plugin): ssh miky@server -t "/home/miky/anaconda3/bin/conda install -y -c anaconda mysql-connector-python". That's all.
Do sudo ln -s /home/<user>/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh and try again. This should activate conda for all users permanently.
I would like to install Anaconda on a remote server.
The server is running Ubuntu 12.04.
I only have access to this server via SSH.
How can I install Anaconda via the command line?
Something along the lines of:
wget https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh
to get the installer for 64-bit Linux, followed by:
bash Anaconda3-2020.07-Linux-x86_64.sh
You can get the latest release from here
Please take a look at the Anaconda repo archive page and select an appropriate version that you'd like to install.
After that, just do:
# replace this `Anaconda3-version.num-Linux-x86_64.sh` with your choice
~$ wget -c https://repo.continuum.io/archive/Anaconda3-version.num-Linux-x86_64.sh
~$ bash Anaconda3-version.num-Linux-x86_64.sh
Concrete Example:
As of this writing, Anaconda3-2020.07 is the latest version. So,
~$ wget -c https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh
~$ bash Anaconda3-2020.07-Linux-x86_64.sh
P.S. Based on comments, this should also work in CentOS systems.
You can do as Prashant said, or you can use a bash script to automate the installation; just copy and paste it, depending on the version of Python you want.
If you are trying to do it entirely from the command line, use a bash script:
python 2 anaconda install bash script:
# Go to home directory
cd ~
# You can change what anaconda version you want at
# https://repo.continuum.io/archive/
wget https://repo.continuum.io/archive/Anaconda2-4.2.0-Linux-x86_64.sh
bash Anaconda2-4.2.0-Linux-x86_64.sh -b -p ~/anaconda
rm Anaconda2-4.2.0-Linux-x86_64.sh
echo 'export PATH="$HOME/anaconda/bin:$PATH"' >> ~/.bashrc
# Reload the shell configuration
source ~/.bashrc
conda update conda
python 3 anaconda install bash script
# Go to home directory
cd ~
# You can change what anaconda version you want at
# https://repo.continuum.io/archive/
wget https://repo.continuum.io/archive/Anaconda3-4.2.0-Linux-x86_64.sh
bash Anaconda3-4.2.0-Linux-x86_64.sh -b -p ~/anaconda
rm Anaconda3-4.2.0-Linux-x86_64.sh
echo 'export PATH="$HOME/anaconda/bin:$PATH"' >> ~/.bashrc
# Reload the shell configuration
source ~/.bashrc
conda update conda
Source: https://medium.com/@GalarnykMichael/install-python-on-ubuntu-anaconda-65623042cb5a
Download Anaconda for Linux, copy it to your Ubuntu system with WinSCP, then
$ sudo bash Anaconda2-4.3.0-Linux-x86_64.sh
After this, log out of your SSH session and log in again; you will get the base environment.
1 - Go to Anaconda Repository, find the installation for your OS and copy the address
2 - wget {paste}. Ex: wget https://repo.continuum.io/archive/Anaconda3-5.2.0-Linux-x86_64.sh
3 - Execute with: bash. Ex: bash Anaconda3-5.2.0-Linux-x86_64.sh
Run:
$ bash Anaconda3-5.2.0-Linux-x86_64.sh
Video tutorial:
https://youtu.be/JP60kTsVJ8E
Just download the Anaconda installer and execute it, as it is a shell script. Follow these steps:
1. In the terminal, type wget https://repo.continuum.io/archive/Anaconda-2.3.0-Linux-x86_64.sh
2. The file will be downloaded to the current directory. Execute it with bash ./Anaconda-2.3.0-Linux-x86_64.sh
3. Restart the terminal. This is very important for the Python version provided by Anaconda to become the default for that user.
Note: try using environments for different Python versions. Changing the default Python version for root might break some functionality, such as yum.