Deploy Flask app on Apache Server using mod_wsgi and WinSCP - python

I want to deploy my Flask application on an Apache server. I have an account on the server, and have been told that "The server can be used to run scripts and web apps written in Python (using django and mod_wsgi)".
I am on Windows, and to transfer files I have to use an FTP client - so I am using WinSCP.
Installing mod_wsgi is not as straightforward as I expected, and I cannot find any clear documentation online.
Since the server can already run Python scripts using mod_wsgi, does that mean I just have to create a .wsgi file, or do I still need to install mod_wsgi myself?
I don't know how to go about this.

First, check whether mod_wsgi is actually enabled on the server; then check how your virtual host is configured in Apache. That is where you will find the name you have to give the .wsgi file.
If you have shell access to the server you can do that by using the following commands:
Check mod_wsgi:
sudo apache2ctl -t -D DUMP_MODULES | grep wsgi
Check what name the .wsgi file should have:
sudo grep WSGIScriptAlias /etc/apache2/sites-enabled/yoursite.conf
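For context, the file that WSGIScriptAlias points at is just a Python module exposing a WSGI callable named application. A minimal sketch (the file name and path are hypothetical; with Flask you would normally import your app object instead of defining a stand-in):

```python
# yoursite.wsgi -- the file WSGIScriptAlias points at (name/path hypothetical).
# mod_wsgi looks for a module-level callable named "application".
import sys

# Make the application package importable; adjust the path for your deployment.
sys.path.insert(0, "/var/www/yoursite")

# With Flask you would normally write:
#   from myapp import app as application
# The bare-WSGI stand-in below keeps this sketch self-contained.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from mod_wsgi"]
```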

Related

Deploying a python Flask application with Jenkins and executing it

I am trying to auto-deploy a Python Flask application using Jenkins and then run it with a shell command on a Raspberry Pi server.
Here is some background info.
Before using Jenkins, my deployment and execution process was manual, as described below:
FTP to the directory where my Python scripts and Python venv are located
Replace Flask application scripts using FTP
Activate the Python 3.5 virtual environment from the terminal on the Raspberry Pi ("./venv/bin/activate")
Run myFlaskApp.py by executing "python myFlaskApp.py" in terminal
Now I have integrated Jenkins into the deployment/execution process, as described below:
A code change is pushed to GitHub
Jenkins automatically pulls from GitHub
Jenkins deploys the files to the specified directories by executing shell commands
Jenkins then activates the virtual environment and runs myFlaskApp.py via a .sh script
Now the problem I am having is with step 4: because a Flask app has to stay alive, my Jenkins build never "finishes successfully"; it stays in a loading state while the Flask app runs in the shell that Jenkins is using.
Now my question:
What is the correct approach for starting myFlaskApp.py with Jenkins after deploying the files, without the build process being "locked down" by it?
I have read up on Docker, subshells, and the Linux utility screen. Will any of these tools be useful in my situation, and which approach should I take?
The simple and robust solution (in my opinion) is to use Supervisor, which is available in Debian as the supervisor package. It lets you run a script like your app as a daemon; it can spawn multiple processes, watch whether the app crashes, and restart it if it does.
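As a sketch, a Supervisor program entry for the app might look like this (the file name and paths are hypothetical; note it points directly at the venv's Python, so no activation step is needed):

```ini
; /etc/supervisor/conf.d/myflaskapp.conf (hypothetical path and name)
[program:myflaskapp]
command=/home/pi/myproject/venv/bin/python /home/pi/myproject/myFlaskApp.py
directory=/home/pi/myproject
autostart=true
autorestart=true
stderr_logfile=/var/log/myflaskapp.err.log
stdout_logfile=/var/log/myflaskapp.out.log
```

After adding the file, sudo supervisorctl reread followed by sudo supervisorctl update picks it up; the Jenkins job can then simply call sudo supervisorctl restart myflaskapp and exit cleanly.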
A note about virtualenv: you don't need to activate a venv to use it. Just invoke the venv's Python executable (your_venv/bin/python) instead of the default one. For example:
$ ./venv/bin/python myFlaskApp.py
You need to create the following files for deployment via Jenkins.
The code can be found at: https://github.com/ishwar6/django_ci_cd
This will work for both Flask and Django.
initial-setup.sh - The first file to look at when setting up this project. It installs the packages required to make the project work, such as Nginx, Jenkins, and Python. Refer to the YouTube video to see how and when it is used.
Jenkinsfile - Defines the stages of the pipeline. The stages in this project's pipeline are Setup Python Virtual Environment, Setup gunicorn service, and Setup Nginx. Each stage does just two things: it makes a script executable and then runs it. The script carries out the commands described by the stage.
envsetup.sh - Sets up the Python virtual environment, installs the Python packages, and creates the log files that Nginx will use.
gunicorn.sh - Runs some Django management commands, such as migrations and static-file collection, and sets up the gunicorn service that will run the gunicorn server in the background.
nginx.sh - Sets up Nginx with a configuration file that points it at the gunicorn service running our application, which lets Nginx serve the app. I followed a DigitalOcean article to set up this file. You can go through the video once to replicate the sites-available and sites-enabled scenario.
app.conf - An Nginx server configuration file used to set up Nginx as a proxy server to gunicorn. For this configuration to work, change the value of server_name to the IP address or domain name of your server.
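As a rough sketch of the last file above, app.conf could look like the following (the server_name and socket path are hypothetical; this assumes the gunicorn service listens on a Unix socket):

```nginx
server {
    listen 80;
    # replace with your server's IP address or domain name
    server_name example.com;

    location / {
        include proxy_params;
        # forward requests to the gunicorn service's socket
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
```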

Virtualhost on SSL - Can't run Django app in virtualenv after upgrading from httpd 2.2 to 2.4

I need help in understanding if I have correctly upgraded my website's older config from httpd-2.2 to httpd-2.4. It was hosted on a local server that we have since retired and the entire /etc/httpd and /var/www directories were copied to an EC2 instance with MySQL installed on the instance itself.
The Django-1.5 app (yes, it needs to be upgraded) runs on the mod_wsgi module in daemon mode. I have upgraded the Apache modules, and running python manage.py runserver after activating my virtualenv gives no errors. But when I navigate to my EC2 public IP address over https, the page keeps loading and eventually times out. At the moment httpd.conf redirects any port 80 requests to the website's domain, but I'd like to test the deployment on EC2 using the bare IP address first, before we update our DNS records.
Another thing I did was add the public IP address of my EC2 machine to the ALLOWED_HOSTS list, along with both localhost and 127.0.0.1.
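For reference, the settings.py entry described above would look something like this (the EC2 address shown is a placeholder, not the asker's actual IP):

```python
# settings.py (fragment) -- hosts Django will accept requests for.
# 203.0.113.10 is a placeholder; use your EC2 instance's public IP.
ALLOWED_HOSTS = ["localhost", "127.0.0.1", "203.0.113.10"]
```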
A possible SELinux problem was tried but didn't work.
Running httpd after runserver gave the following output:
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
I tried SELinux in both permissive and enforcing modes and nothing changed.
Please check my config files below and let me know if something doesn't seem right for httpd-2.4:
EDIT: Apache runs fine. I put an example HTML file in /var/www/html and it rendered when I went to the home page at my IP address. It is the Django app that doesn't seem to run, even when I do python manage.py runserver
/etc/conf/httpd.conf pastebin link
/etc/conf.d/vhosts.conf pastebin link
/etc/conf.d/ssl.conf pastebin link
Django settings.py pastebin link
Django default config file pastebin link
manage.py pastebin link
wsgi.py pastebin link

Domain not serving wsgi file, but the IP does

I'm trying to deploy a Flask application on my droplet, which is running Ubuntu, but every time I change my virtual host file to the domain, it just serves the index of /var/www/html and not the WSGI app I specified in the virtual host file. However, if I use my droplet's IP for "ServerName", it works fine.
Any ideas?
Thanks
I had the same problem. I'm not sure what causes it, but if it's the same one I had, you should be able to fix it by disabling the default virtualhost configuration:
a2dissite 000-default
service apache2 restart
This should leave just the .conf file needed for your Flask application.
You also mention a droplet, so you might be following the DigitalOcean Flask tutorial. If so, don't forget to add the .conf extension to the configuration file in /etc/apache2/sites-available
In the server terminal, enter:
sudo nano /etc/apache2/sites-available/FlaskApp.conf
Then replace the raw IP with the domain name.
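For context, a minimal FlaskApp.conf might look like the following (the domain and paths are hypothetical; ServerName is the line being swapped between the raw IP and the domain):

```apache
<VirtualHost *:80>
    # ServerName is the value to change from the droplet's raw IP to the domain
    ServerName example.com
    WSGIScriptAlias / /var/www/FlaskApp/flaskapp.wsgi
    <Directory /var/www/FlaskApp/>
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```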

Flask can't see sqlite db when launched via uWSGI Emperor

When I run my Flask app via the uWSGI Emperor, it 502s with a SQLite error about not being able to see my tables. I can go in with the sqlite3 command and verify the data is there. When I run the site via
uwsgi --ini site_conf.ini
it works just fine, but not via the Emperor.
Check that you are not using relative paths when referring to the SQLite db. When run by the Emperor, the cwd changes to the vassals directory. Alternatively, use the chdir option in your vassal config to move to a specific directory.
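As a sketch, the vassal ini could pin the working directory like this (the paths and module name are hypothetical):

```ini
; /etc/uwsgi/vassals/site_conf.ini (hypothetical path)
[uwsgi]
; make relative paths like "app.db" resolve against the project directory
chdir = /srv/mysite
module = myapp:app
```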

Deploy Bottle Application on Nginx

I have an Ubuntu 12.04 server that currently runs a Ruby on Rails application on a Passenger/Nginx install. I decided to play around with some Python and wrote a small application using Bottle. I would like to deploy this application to my server.
I followed this guide on setting up my server to run Python applications. When I run sudo service uwsgi restart I get the following error message:
Restarting app server(s) uwsgi
[uWSGI] getting INI configuration from /usr/share/uwsgi/conf/default.ini
[uWSGI] parsing config file /etc/uwsgi/apps-enabled/example.net.xml
open("./python_plugin.so"): No such file or directory [core/utils.c line 4700]
!!! UNABLE to load uWSGI plugin: ./python_plugin.so: cannot open shared object file: No such file or directory !!!
Sat Dec 8 18:29:14 2012 - [WARNING] option "app" is deprecated: use the more advanced "mount" option
I really don't know a ton about Python. I have installed the packages I need via easy_install, which are:
pymongo
beautifulsoup
bottle
My question is: how do I deploy this simple application to my server?
Thank you
I found out that Passenger will run WSGI apps. I followed the instructions in this post, http://kbeezie.com/using-python-nginx-passenger/, and had no trouble getting it working.
It was actually pretty easy in the end.
Here is my adaptor in case anybody else has trouble:
https://github.com/nick-desteffen/astronomy-pics/blob/master/passenger_wsgi.py
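In case the link goes away: Passenger looks for a passenger_wsgi.py in the application root that exposes a WSGI callable named application. A minimal sketch (the Bottle import is shown only in a comment so the example stands alone; in a real deployment you would export your Bottle app instead of the stand-in):

```python
# passenger_wsgi.py -- Passenger imports this module and serves "application".
import os
import sys

# Make sure the application directory is importable.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# With Bottle you would normally write:
#   import bottle
#   import myapp  # hypothetical module that defines your routes
#   application = bottle.default_app()
# A bare-WSGI stand-in keeps this sketch self-contained:
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Bottle app would be served here"]
```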
