Ubuntu 12.04, nginx 1.2.0, uwsgi 1.0.3.
I start uwsgi with the following command:
uwsgi -s 127.0.0.1:9010 -M -t 30 -A 4 -p 4 -d /var/log/uwsgi.log
On each request, nginx replies with a 502 and uwsgi writes the following line to its log:
-- unavailable modifier requested: 0 --
Original answer
For Python 2 on Ubuntu 11.10 (using upstart), install the Python plugin for uWSGI with apt-get install uwsgi-plugin-python. If you're using an ini file to configure your uWSGI app, add plugins = python to the [uwsgi] section; that should solve this problem.
Edit: Updated for Python 3 and Ubuntu 17.10
For Python 3 on Ubuntu 17.10 (using systemd), install the Python 3 plugin for uWSGI with apt-get install uwsgi-plugin-python3. If you're using an ini file to configure your uWSGI app, add plugins = python3 to the [uwsgi] section; that should solve this problem.
For more information on getting started with Python/uWSGI apps, including how to configure them using an ini file, take a look at this handy guide.
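For illustration, a minimal [uwsgi] section with the plugin enabled might look like this (the socket matches the question above; the module value is a placeholder, not from the original answer):
[uwsgi]
plugins = python3
socket = 127.0.0.1:9010
module = myapp:application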
Solved by installing the uwsgi-plugin-python3 package and adding the --plugin python3 option to the uwsgi start command.
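Combined with the start command from the question, that would look roughly like:
uwsgi --plugin python3 -s 127.0.0.1:9010 -M -t 30 -A 4 -p 4 -d /var/log/uwsgi.log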
I'm starting uwsgi from upstart on Ubuntu. I solved the problem by running apt-get install uwsgi-plugin-python and then adding plugins = python to my application.ini in /etc/uwsgi/applications-available.
From http://uwsgi-docs.readthedocs.org/en/latest/ThingsToKnow.html: "To route requests to a specific plugin, the webserver needs to pass a magic number known as a modifier to the uWSGI instances. By default this number is set to 0, which is mapped to Python."
I'm using 9 for a bash script and it's working. The numbers and their meanings are on this page: http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html
In my nginx configuration:
location ~ \.cgi$ {
    include uwsgi_params;
    uwsgi_modifier1 9;
    uwsgi_pass 127.0.0.1:3031;
}
Modify your ini file by adding a plugins line:
[uwsgi]
plugins = python3
I'm using Ubuntu 18.04 with Python 3. Below is the exact config I used to get it working.
You must have the Python 3 uWSGI plugin installed:
apt install uwsgi-plugin-python3
Your Nginx site configuration should point to your uWSGI socket. Make sure the port matches the configuration in the later steps.
location / {
    uwsgi_pass 127.0.0.1:9090;
    include uwsgi_params;
}
Reload the Nginx config to reflect the changes you just made:
systemctl reload nginx
You can use command-line arguments or an ini file for configuration. I created uwsgi.ini. Make sure the socket address matches your nginx config.
[uwsgi]
socket = 127.0.0.1:9090
chdir = /var/www
processes = 4
threads = 2
plugins = python3
wsgi-file = /var/www/app.py
My app.py just has a basic example:
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b"Hello World!"]
Now start the uWSGI server from the command line:
uwsgi uwsgi.ini
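With uWSGI running and nginx reloaded, a quick sanity check from the server itself (assuming the server block for this site listens on port 80) is:
curl -i http://127.0.0.1/
You should see the Hello World! response come back through nginx and uWSGI.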
Related
I have deployed a Flask application with uWSGI and nginx.
The following is the .ini file for uwsgi:
[uwsgi]
;module = name of file which contains the application object in this case wsgi.py
LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib
chdir=/home/ansible/apps/payment_handler
module = wsgi:application
;tell uWSGI (the service) to start in master mode and spawn 5 worker *processes* to serve requests
master = true
processes = 5
;a socket is much faster than a port, and since we will be using nginx to expose the application this is better
socket = 0.0.0.0:8001
vacuum = true
die-on-term = true
When I run this from the command line like so
uwsgi --ini payment_app.ini
It works!
However, I would like to run the application as a service. The following is the service file:
[Unit]
Description=uWSGI instance to serve service app
After=network.target
[Service]
User=root
WorkingDirectory=/home/ansible/apps/payment_handler
Environment="PATH=/home/ansible/apps/payment_handler/env/bin"
ExecStart=/home/ansible/apps/payment_handler/env/bin/uwsgi --ini payment_app.ini
[Install]
WantedBy=multi-user.target
However, it does not work because it cannot find the libraries for cx_Oracle.
I have it set in my .bashrc file:
export LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib
However, since the service file does not use this to load its environment variables, it seems not to find it.
Error log
Jun 17 09:58:06 mau-app-036 uwsgi: cx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/odpi/doc/installation.html#linux for help
I have tried setting it in the .ini file (as seen above)
LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib
I have also tried setting it in my init.py file using the os module
os.environ['LD_LIBRARY_PATH'] = '/usr/lib/oracle/18.3/client64/lib'
Both to no avail. Any help would be great, thanks. (CentOS 7, by the way.)
Problems like this are why the Instant Client installation instructions recommend running:
sudo sh -c "echo /usr/lib/oracle/18.3/client64/lib > \
/etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig
This saves you from having to work out how and where to set LD_LIBRARY_PATH.
Note that the 19.3 Instant Client RPM packages automatically run this for you. Some background is in the Instant Client 19c Linux x64 release announcement blog.
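As an alternative that keeps the setting scoped to the service rather than system-wide, systemd units can export the variable themselves; the unit above already uses Environment= for PATH, so a line like this (a sketch, not from the original answer) should also work:
[Service]
Environment="LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib"
Remember to run systemctl daemon-reload and restart the service after editing the unit.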
I started developing a new site using Django. For realistic testing I wanted to run it on a Synology DS212J NAS.
Following the official Synology guides I installed ipkg and with it the mod_wsgi package.
As the next step, following the standard tutorial, I made a virtualenv and installed Django in it, opened a new project, and adjusted the settings according to: https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-apache-and-mod_wsgi-on-ubuntu-16-04
I'm able to reach the "Hello World" site from Django by using manage.py.
As suggested, I want to replace manage.py with the Apache server on the NAS. So I think I should edit the Apache config files to, e.g., define a virtual host.
However, I can't locate the files for this; it seems they were moved in DSM 6 (which I use) compared to other guides.
Where do I need to enter the values from the tutorial on the Synology?
As I'm quite new to the topic: do I need to explicitly load the mod_wsgi module for Apache, and if so, where?
Is it a good idea to use the basic (embedded) mode of mod_wsgi instead of the daemon mode? I'm not sure which Django modules will be used later in development.
Thanks for the support!
Activate the Python 3 package and Web Station.
In Web Station > General Settings > main HTTP server, enable nginx.
In Control Panel > Network > DSM Settings, enable a custom domain: "test"
(which will allow us to access the NAS by entering test.local and simplify the task later).
Enable the SSH connection in Control Panel > Terminal & SNMP.
We use Synology's DDNS service to have external access, in our case "test.synology.me".
In Control Panel > Security > Certificate, we generate our SSL certificate with Let's Encrypt.
Connect to the NAS over SSH.
Take root rights: sudo -i
Install virtualenv: easy_install virtualenv
Set up the virtual environment: virtualenv -p python3 flasktest
Install Flask and gunicorn:
pip install flask gunicorn
We create our web application (file: init.py).
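The original file contents aren't shown; a minimal sketch of such a Flask app might look like this (the app callable name must match the app:app target passed to gunicorn below):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # simple response to confirm the app is reachable through gunicorn/nginx
    return 'Hello from the NAS'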
We launch our web application with gunicorn:
gunicorn --certfile /usr/syno/etc/certificate/system/default/cert.pem --keyfile /usr/syno/etc/certificate/system/default/privkey.pem -b 127.0.0.1:5000 app:app
In /etc/nginx/sites-enabled we create a server configuration file; we will use nginx as a proxy. In our case the file will be flasktest.conf.
flasktest.conf file:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    gzip on;
    server_name test.synology.me;
    location / {
        proxy_pass https://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    error_log /volume1/projects/flasktest/logs/error.log;
    access_log /volume1/projects/flasktest/logs/access.log;
}
Open Control Panel > External Access > Router Configuration > Create > Built-in application, enable the checkbox for Web Station, and apply.
We check our server file; for that, we run the command nginx -t.
We restart nginx: synoservicecfg --restart nginx
You now have access to your Python web application from outside over HTTPS: https://test.synology.me
A little more information...
Finally, to keep your application reachable across reboots or crashes, you can create a script that restarts gunicorn. Note also that if you browse to the NAS by its local IP you will not see your Python web app: since we did not modify the main configuration file /etc/nginx/nginx.conf, the Web Station's default index.html page is what will be displayed.
example:
cd /volume1/projects/flasktest
source bin/activate
gunicorn --certfile /usr/syno/etc/certificate/system/default/cert.pem --keyfile /usr/syno/etc/certificate/system/default/privkey.pem -b 127.0.0.1:5000 app:app </dev/null 2>&1 &
This method also works with other Python frameworks.
I have an Ansible-provisioned VM based on this one: https://github.com/jcalazan/ansible-django-stack, but for some reason trying to start Gunicorn gives the following error:
Can't connect to /path/to/my/gunicorn.sock
and in nginx log file:
connect() to unix:/path/to/my/gunicorn.sock failed (2: No such file or directory) while connecting to upstream
And actually the socket file is missing in the specified directory. I have checked the permissions of the directory and they are fine.
Here is my gunicorn_start script:
NAME="{{ application_name }}"
DJANGODIR={{ application_path }}
SOCKFILE={{ virtualenv_path }}/run/gunicorn.sock
USER={{ gunicorn_user }}
GROUP={{ gunicorn_group }}
NUM_WORKERS={{ gunicorn_num_workers }}
# Set this to 0 for unlimited requests. During development, you might want to
# set this to 1 to automatically restart the process on each request (i.e. your
# code will be reloaded on every request).
MAX_REQUESTS={{ gunicorn_max_requests }}
echo "Starting $NAME as `whoami`"
# Activate the virtual environment.
cd $DJANGODIR
. ../../bin/activate
# Set additional environment variables.
. ../../bin/postactivate
# Create the run directory if it doesn't exist.
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Programs meant to be run under supervisor should not daemonize themselves
# (do not use --daemon).
exec gunicorn \
--name $NAME \
--workers $NUM_WORKERS \
--max-requests $MAX_REQUESTS \
--user $USER --group $GROUP \
--log-level debug \
--bind unix:$SOCKFILE \
{{ application_name }}.wsgi
Can anyone suggest what else could cause the missing socket file?
Thanks
Well, since I don't have enough rep to comment, I'll mention here that the missing socket by itself doesn't point to anything specific, but I can tell you a bit about how I started in your shoes and got things working.
The long and short of it is that gunicorn hit a problem when run by upstart and either never got up and running or shut down. Here are some steps that may help you get more info to track down your issue:
In my case, when this happened, gunicorn never got around to doing any error logging, so I had to look elsewhere. Try ps auxf | grep gunicorn to see if you have any workers going. I didn't.
Looking in the syslog for complaints from upstart, grep init: /var/log/syslog, showed me that my gunicorn service had been stopped because it was respawning too fast, though I doubt that'll be your problem since you don't have a respawn in your conf. Regardless, you might find something there.
After seeing gunicorn was failing to run or log errors, I decided to try running it from the command line. Go to the directory where your manage.py lives and run the expanded version of your upstart command against your gunicorn instance, something like this (replace all of the vars with the appropriate literals instead of the placeholders I use):
/path/to/your/virtualenv/bin/gunicorn --name myapp --workers 4 --max-requests 10 --user appuser --group webusers --log-level debug --error-logfile /somewhere/I/can/find/error.log --bind unix:/tmp/myapp.socket myapp.wsgi
If you're lucky, you may get a python traceback or find something in your gunicorn error log after running the command manually. Some things that can go wrong:
django errors (maybe problems loading your settings module?). Make sure your wsgi.py is referencing the appropriate settings module on the server (see the sketch after this list).
whitespace issues in your upstart script. I had a tab hiding among spaces that munged things up.
user/permission issues. Finally, I was able to run gunicorn as root on the command line but not as a non-root user via the upstart config.
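For reference, a standard Django wsgi.py pins the settings module explicitly, so comparing it against what your upstart script exports is a quick way to spot a settings mismatch ("myproject" below is a placeholder project name):
# myproject/wsgi.py -- "myproject" is a placeholder
import os

from django.core.wsgi import get_wsgi_application

# this must agree with the DJANGO_SETTINGS_MODULE your upstart/gunicorn environment expects
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

application = get_wsgi_application()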
Hope that helps. It's been a couple of long days tracking this stuff down.
I encountered the same problem after following Michal Karzynski's great guide 'Setting up Django with Nginx, Gunicorn, virtualenv, supervisor and PostgreSQL'.
And this is how I solved it.
I had this variable in the bash script used to start gunicorn via Supervisor (myapp/bin/gunicorn_start):
SOCKFILE={{ myapp absolute path }}/run/gunicorn.sock
Which, when you run the bash script for the first time, creates a 'run' folder and a sock file using root privileges. So I sudo-deleted the run folder, then recreated it without sudo privileges, and voilà! Now if you rerun Gunicorn or Supervisor you won't get the annoying missing sock file error message anymore.
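A minimal sketch of those two steps, run from the app's base directory (whatever {{ myapp absolute path }} expands to):
sudo rm -rf run   # remove the folder that was created as root
mkdir run         # recreate it as your normal (non-root) user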
TL;DR
Sudo delete run folder.
Recreate it without sudo privileges.
Run Gunicorn again.
????
Profit
The error can also arise when you haven't pip-installed a requirement. In my case, looking at the gunicorn error logs, I found that a module was missing. This usually happens when you forget to pip install new requirements.
Well, I worked on this issue for more than a week and finally was able to figure it out.
Please follow the guides from DigitalOcean, but note that they do not pinpoint some important issues, which include:
no live upstreams while connecting to upstream
*4 connect() to unix:/myproject.sock failed (13: Permission denied) while connecting to upstream
gunicorn OSError: [Errno 1] Operation not permitted
*1 connect() to unix:/tmp/myproject.sock failed (2: No such file or directory)
etc.
These issues are basically permission issues for the connection between Nginx and Gunicorn.
To make things simple, I recommend giving the same nginx permissions to every file/project/python program you create.
To solve all the issues, follow this approach:
First:
Log in to the system as the root user.
Create the /home/nginx directory.
After doing this, follow the website as written until "Create an Upstart Script".
Run chown -R nginx:nginx /home/nginx
For the upstart script, make the following change to the last line:
exec gunicorn --workers 3 --bind unix:myproject.sock -u nginx -g nginx wsgi
DON'T ADD the -m permission flag, as it messes up the socket. According to the Gunicorn documentation, when -m is left at its default, Python will figure out the best permissions.
Start the upstart script
Now go to the /etc/nginx/nginx.conf file.
Go to the server block and append:
location / {
    include proxy_params;
    proxy_pass http://unix:/home/nginx/myproject.sock;
}
Do not follow the DigitalOcean article from here on.
Now restart the nginx server and you are good to go.
I had the same problem and found out that I had set DJANGO_SETTINGS_MODULE to the production settings in the gunicorn script, while the wsgi settings were using dev.
I pointed the DJANGO_SETTINGS_MODULE to dev and everything worked.
I just started to learn Falcon (http://falcon.readthedocs.org/en/latest/user/quickstart.html),
but it needs a web server running, and the docs suggest using uwsgi or gunicorn.
They have mentioned how to use it with gunicorn:
$ pip install gunicorn #install
$ gunicorn things:app #and run app through gunicorn.
But I want to run this sample app with uwsgi, and I have no clue how to.
I have installed it with pip install uwsgi, and also gevent as suggested here: http://falcon.readthedocs.org/en/latest/user/install.html
But what now? Somebody guide me.
You'll probably find your answer on the uWSGI documentation site, specifically try this page:
http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html
I've never used Falcon or uWSGI, but it looks like you can probably get away with:
uwsgi --wsgi-file things.py --callable app
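If you also want uWSGI itself to answer plain HTTP while testing (instead of sitting behind another web server), adding an --http socket should work; port 8000 here is just an example:
uwsgi --http :8000 --wsgi-file things.py --callable app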
You can use a uwsgi.ini file to store the configuration and run it easily. It's a good way to set up uwsgi as a service. Configuration with a virtualenv:
[uwsgi]
http = :8000
chdir = /home/user/www/uwsgi-ini
virtualenv = /home/user/www/uwsgi-ini/venv/
wsgi-file = main.py
callable = app
processes = 4
threads = 2
stats = 127.0.0.1:9191
and run in app folder:
uwsgi uwsgi.ini
command with virtualenv
uwsgi --http :8000 --wsgi-file main.py --callable app -H $(pwd)/venv/
Here's a data flow:
http <--> nginx <--> uWSGI <--> python webapp
I guess there's an HTTP-to-uwsgi translation happening in nginx, and uwsgi-to-HTTP in uWSGI.
What if I want to directly call uWSGI to test an API in a webapp?
Actually, I'm using Pyramid. I just configure [uwsgi] in the .ini and run uWSGI. But I want to test whether uWSGI serves the webapp correctly, and the uWSGI socket is not directly reachable over HTTP.
Try using uwsgi_curl
$ pip install uwsgi-tools
$ uwsgi_curl 10.0.0.1:3030 /path
or, if you need to make more requests, try uwsgi_proxy from the same package:
$ uwsgi_proxy 10.0.0.1:3030
Proxying remote uWSGI server 10.0.0.1:3030 "" to local HTTP server 127.0.0.1:3030...
so you can browse it locally at http://127.0.0.1:3030/.
If your application allows only a certain Host header, you can specify the host name as well:
$ uwsgi_curl 10.0.0.1:3030 host.name/path
$ uwsgi_proxy 10.0.0.1:3030 -n host.name
If the application has static files, you can redirect such requests to your front server using the -s argument. You can also specify a different local port if needed.
From your question I'm assuming you want to directly run your WSGI-compliant app with uWSGI and open an HTTP socket. You can do so by configuring your uwsgi.ini (or whatever the filename is) with:
http=127.0.0.1:8080
uWSGI will now open an HTTP socket that listens on port 8080 for incoming connections from localhost (see the documentation: http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html).
Alternatively, you can start your process directly from the command line with the http parameter:
$ uwsgi --http=127.0.0.1:8080 --module=yourapp:wsgi_entry_point
If you use unix sockets to configure uwsgi, nginx is able to communicate with that socket via the uwsgi protocol (http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html).
Keep in mind that if you usually serve static content (css, javascript, images) through nginx, you will need to set that up too if you run uwsgi directly. But if you only want to test a REST API, this should work out for you.
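For quick tests you can also let uWSGI serve the static files itself with its --static-map option (the /static mount point and directory below are placeholders):
uwsgi --http=127.0.0.1:8080 --module=yourapp:wsgi_entry_point --static-map /static=/path/to/static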
First, consider those questions:
On which port is uWSGI running?
Is uWSGI running on your or on a remote machine?
If it's running on a remote machine, is the port accessible from your computer? (iptables rules might forbid external access)
If you made sure you have access, you can just call http://hostname:port/path/to/uWSGI for direct API access.
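Assuming uWSGI is exposing an HTTP socket on that port, a plain curl is enough for a quick check (hostname, port and path are placeholders, as in the URL above):
curl -i http://hostname:port/path/to/resource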
I know this is an old question, but I just needed this and found that this docker + nginx solution works best for me:
cat > /tmp/nginx.conf << EOF
events {}
http {
    server {
        listen 8000;
        location / {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:3031;
        }
    }
}
EOF
docker run -it --network=host --rm --name uswgi-proxy -v /tmp/nginx.conf:/etc/nginx/nginx.conf:ro nginx
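Because the container shares the host's network, you can then hit the proxy locally; 127.0.0.1:3031 above is assumed to be where your uWSGI socket is already listening:
curl -i http://127.0.0.1:8000/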