I was trying to run a Flask app as a systemd service. I tried to write a script like this:
#!/bin/bash
cd /path/to/app
source venv/bin/activate
python start.py
and just point ExecStart at this script in the .service file. But this doesn't quite work: starting the service fails with
python: command not found
I actually ran into quite a few issues, but eventually resolved them with this service file:
[Service]
WorkingDirectory=/path/to/app
ExecStart=/path/to/app/venv/bin/python start.py
Without WorkingDirectory, relative file paths do not seem to work; the static files can't even be found.
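For reference, a fuller unit file along these lines might look something like the following; the description, user, and [Install] target here are assumptions rather than part of the original setup:
[Unit]
Description=My Flask app
After=network.target

[Service]
User=myappuser
WorkingDirectory=/path/to/app
ExecStart=/path/to/app/venv/bin/python start.py
Restart=on-failure

[Install]
WantedBy=multi-user.target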
So my question is: why doesn't the script at the beginning work? The cd took effect, but the venv activation didn't?
Related
I am having an issue on an Apache NearlyFreeSpeech server (I have been following the NFS guide at this link: https://blog.nearlyfreespeech.net/2014/11/17/how-to-django-on-nearlyfreespeech-net/). I have created a run-django.sh file to run the application, which also activates a virtualenv (I have named the folder 'venv'). These are the contents of my run-django.sh file:
#!/bin/sh
. venv/bin/activate
exec python3 manage.py runserver
On the current step of the guide I am attempting to run the run-django.sh as follows:
[questionanswer /home/protected]$ ls
question_answer run-django.sh venv
[questionanswer /home/protected]$ cd question_answer/
[questionanswer /home/protected/question_answer]$ ../run-django.sh
.: cannot open bin/activate: No such file or directory
Why is this not finding 'venv/bin/activate'?
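Note that the . builtin resolves a relative path like venv/bin/activate against the current working directory, not against the script's location, so running the script from inside question_answer/ looks for question_answer/venv/bin/activate. As a sketch (not from the original thread), one way to make the activation independent of the caller's working directory is to resolve the path relative to the script itself:
#!/bin/sh
# activate the venv relative to this script's location, not the caller's working directory
. "$(dirname "$0")/venv/bin/activate"
exec python3 manage.py runserver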
This is my Fabric script that runs on the Jenkins server.
sudo('/home/myjenkins/killit.sh',pty=False)
sudo('/home/myjenkins/makedir.sh',pty=False)
sudo('/home/myjenkins/runit.sh',pty=False)
This kills the old server, creates a virtualenv, installs the requirements and restarts the server.
The problem is with the script that starts the server, runit.sh:
nohup /home/myjenkins/devserver/dev/bin/python /home/myjenkins/devserver/workspace/manage.py runserver --noreload 0:80 >> jenkins.out &
When Jenkins starts the server and I navigate to the homepage, it gives me a 404 Page Not Found. It says /static/index.html was not found, but the file exists. When I run 'sudo bash runit.sh' myself and access the homepage, it works fine. Here is the script that creates the virtualenv and installs the requirements:
mkdir -p /home/myjenkins/devserver
cp -rf /home/myjenkins/workspace /home/jenkins/devserver/
cp -f /home/myjenkins/dev_settings.py /home/myjenkins/devserver/workspace/mywebsite/settings.py
cd /home/myjenkins/devserver
virtualenv -p python3 dev
cd /home/myjenkins/devserver/workspace
../dev/bin/pip install -r requirements.txt
Please ask me for more details if you need them.
EDITED 9/2/18
When I start the script from the folder containing manage.py, the server is able to serve the files. But Jenkins was starting the script from the home folder, and if I also start the script from the home folder, the server is not able to find the files. Look at my comment for more details. It would be great if someone could explain why this happens even though I've specified the full path in the script:
nohup /home/myjenkins/devserver/dev/bin/python /home/myjenkins/devserver/workspace/manage.py runserver --noreload 0:80 >> jenkins.out &
Okay, I figured out the whole deal. My Django server was picking up the output of the npm build from the wrong folder.
In the settings.py file, the variable STATICFILES_DIRS was set as:
STATICFILES_DIRS = ('frontend/dist',)
instead of:
STATICFILES_DIRS = (os.path.join(BASE_DIR,'frontend/dist'),)
Thus, when Jenkins was running the script, it was doing so from the home folder, which made Django's staticfiles finders look at /home/myjenkins/frontend/dist instead of the relative '../frontend/dist'.
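For illustration, the corrected settings might look roughly like this, assuming the usual BASE_DIR that django-admin startproject puts at the top of settings.py:
import os

# BASE_DIR resolves to the project root regardless of the current working directory
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATICFILES_DIRS = (os.path.join(BASE_DIR, 'frontend/dist'),)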
I have a python script that queries a database. I run it from the terminal with python3 myscript.py
I've added a cron task for it in my crontab file
*/30 9-17 * * 1-5 python3 /path/to/my/python/script\ directory\ space/myscript.py
The script imports a function from the same directory that parses database login info from database.ini, also in that directory. The database.ini is:
[postgresql]
host=my-db-host-1-link.11.thedatabase.com
database=dbname
user=username
password=password
port=10898
But currently cron writes the following to the output file in my mail folder:
Section postgresql not found in the database.ini file
The section is clearly present in the database.ini file, so what am I missing here?
Instead of running "python3 myscript.py" in the directory where it is present, try running it from some other directory (like your home directory). Most likely you will see the same issue.
Note that cron's current working directory differs between systems, so the safest method is to explicitly switch to the directory where your script is and run the command there:
cd /path/to/my/python/script\ directory\ space/ && python3 myscript.py
Try this:
import os
...
and change
filename = 'database.ini'
to
filename = os.path.dirname(__file__) + '/database.ini'
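A minimal sketch of what the config-reading helper might then look like, assuming it uses the standard configparser module and the postgresql section named in the error message:
import os
from configparser import ConfigParser

def load_db_config(filename=None, section='postgresql'):
    # resolve database.ini next to this file so cron's working directory doesn't matter
    if filename is None:
        filename = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'database.ini')
    parser = ConfigParser()
    parser.read(filename)
    if not parser.has_section(section):
        raise Exception('Section {0} not found in the {1} file'.format(section, filename))
    return dict(parser.items(section))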
So, once again, I've made a nice Python program which makes my life that much easier and saves a lot of time. Of course, it involves a virtualenv, made with the mkvirtualenv function of virtualenvwrapper. The project has a requirements.txt file with a few required libraries (requests too :D), and the program won't run without these libraries.
I am trying to add an executable shell script, bin/run-app, which would be on my PATH (as a symlink, actually). Inside this script, I need to switch to the virtualenv before I can run the program. So I put this in it:
#!/bin/bash
# cd into the project directory
workon "$(cat .venv)"
python main.py
A file named .venv contains the virtualenv's name. But when I run this script, I get a 'workon: command not found' error.
Of course, I have virtualenvwrapper.sh sourced in my bashrc, but it doesn't seem to be available in this shell script.
So, how can I access those virtualenvwrapper functions here? Or am I doing this the wrong way? How do you launch your python tools, each of which has its own virtualenv!?
Just source the virtualenvwrapper.sh script in your script to import the virtualenvwrapper's functions. You should then be able to use the workon function in your script.
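Applied to the run-app script from the question, that would look something like this; the virtualenvwrapper.sh path is an assumption and depends on how it was installed:
#!/bin/bash
# cd into the project directory
source /usr/local/bin/virtualenvwrapper.sh   # adjust to wherever virtualenvwrapper.sh lives
workon "$(cat .venv)"
python main.py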
Or, maybe better, you could create a shell script (you could name it venv-run.sh, for example) to run any Python script in a given virtualenv, and place it in /usr/bin, /usr/local/bin, or any other directory on your PATH.
Such a script could look like this:
#!/bin/bash
# if virtualenvwrapper.sh is in your PATH (i.e. installed with pip)
source "$(which virtualenvwrapper.sh)"
#source /path/to/virtualenvwrapper.sh # if it's not in your PATH
workon "$1"
python "$2"
deactivate
It could then be used simply like: venv-run.sh my_virtualenv /path/to/script.py
I couldn't find a way to trigger virtualenvwrapper's commands from a shell script, but this trick can help: assuming your environment's name is myenv, put the following lines at the beginning of the script:
ENV=myenv
source $WORKON_HOME/$ENV/bin/activate
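A complete wrapper in that style could look something like this; main.py comes from the question above, and the ~/.virtualenvs fallback for WORKON_HOME is an assumption:
#!/bin/bash
# activate the virtualenv directly through its activate script
ENV=myenv
source "${WORKON_HOME:-$HOME/.virtualenvs}/$ENV/bin/activate"
python main.py
deactivate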
This is a super old thread, and I had a similar issue. I started digging for a simpler solution out of curiosity.
gnome-terminal --working-directory='/home/exact/path/here' --tab --title="API" -- bash -ci "workon aaapi && python manage.py runserver 8001; exec bash;"
The --working-directory option forces the tab to open there by default under the hood, and -ci makes bash behave like an interactive shell, which gets around the issues with virtualenvwrapper not functioning as expected.
You can run as many of these in sequence. It will open tabs, give them an alias, and run the script you want.
Personally, I dropped an alias into my bashrc that runs this when I type startdev in my terminal.
I like this because it's easy, simple to replicate, flexible, and doesn't require any fiddling with variables and whatnot.
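As a sketch, such an alias might look like the following; the startdev name, path, tab title, and virtualenv name are taken from the command above, and everything else is an assumption:
# in ~/.bashrc
alias startdev='gnome-terminal --working-directory="/home/exact/path/here" --tab --title="API" -- bash -ci "workon aaapi && python manage.py runserver 8001; exec bash"'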
It's a known issue. As a workaround, you can turn the content of the script into a function and place it in either ~/.bashrc or ~/.profile:
function run-app() {
    workon "$(cat .venv)"
    python main.py
}
If your Python script requires a particular virtualenv, then put/install it in that virtualenv's bin directory. If you need access to the script outside of the environment, you can make a symlink.
main.py in the virtualenv's bin:
#!/path/to/virtualenv/bin/python
import yourmodule
if __name__ == "__main__":
    yourmodule.main()
Symlink in your PATH:
pymain -> /path/to/virtualenv/bin/main.py
In bin/run-app:
#!/bin/sh
# cd into the project directory
pymain arg1 arg2 ...
Apparently, I was doing this the wrong way. Instead of saving the virtualenv's name in the .venv file, I should have been putting the virtualenv's directory path in it:
(cdvirtualenv && pwd) > .venv
and in bin/run-app, I put:
source "$(cat .venv)/bin/activate"
python main.py
And yay!
Add these lines to your .bashrc or .bash_profile:
export WORKON_HOME=~/Envs
source /usr/local/bin/virtualenvwrapper.sh
then reopen your terminal and try again.
You can also call the virtualenv's python executable directly. First find the path to the executable:
$ workon myenv
$ which python
/path/to/virtualenv/myenv/bin/python
Then call from your shell script:
#!/bin/bash
/path/to/virtualenv/myenv/bin/python myscript.py
When I was developing and testing my project, I used to use virtualenvwrapper to manage the environment and run it:
workon myproject
python myproject.py
Of course, once I was in the right virtualenv, I was using the right version of Python and the other corresponding libraries for my project.
Now I want to use Supervisord to manage the same project, as it is ready for deployment. The question is: what is the proper way to tell Supervisord to activate the right virtualenv before executing the script? Do I need to write a separate bash script that does this, and call that script in the command field of the Supervisord config file?
One way to use your virtualenv from the command line is to use the python executable located inside of your virtualenv.
For me, the virtualenvs live in the .virtualenvs directory, for example:
/home/ubuntu/.virtualenvs/yourenv/bin/python
There is no need for workon.
For a supervisord.conf managing a Tornado app, I use:
command=/home/ubuntu/.virtualenvs/myapp/bin/python /usr/share/nginx/www/myapp/application.py --port=%(process_num)s
Add your virtualenv/bin path to your supervisord.conf's environment:
[program:myproj-uwsgi]
process_name=myproj-uwsgi
command=/home/myuser/.virtualenvs/myproj/bin/uwsgi
    --chdir /home/myuser/projects/myproj
    -w myproj:app
environment=PATH="/home/myuser/.virtualenvs/myproj/bin:%(ENV_PATH)s"
user=myuser
group=myuser
killasgroup=true
startsecs=5
stopwaitsecs=10
First, run
$ workon myproject
$ dirname `which python`
/home/username/.virtualenvs/myproject/bin
Add the following
environment=PATH="/home/username/.virtualenvs/myproject/bin"
to the relevant [program:blabla] section of your supervisord.conf.