Nginx Tornado File Upload - python

I am trying to upload a file via nginx_upload_module 2.2.0. I have nginx 1.0.4 set up as a reverse proxy with a Tornado server at the backend.
Below is my nginx.conf:
#user nobody;
worker_processes 1;

#error_log logs/error.log notice;
#error_log logs/error.log info;

pid /var/log/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    index index.html;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] $request '
    #                '"$status" $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    gzip on;

    upstream frontends {
        server 127.0.0.1:8888;
    }

    server {
        listen 80;
        server_name localhost;
        #charset koi8-r;

        # Allow file uploads max 50M for example
        client_max_body_size 50M;

        #access_log logs/host.access.log main;
        error_log /var/log/error.log info;

        # POST URL
        location /upload {
            # Pass altered request body to this location
            upload_pass @after_upload;

            # Store files to this directory
            upload_store /tmp;

            # Allow uploaded files to be read only by user
            upload_store_access user:rw;

            # Set specified fields in request body
            upload_set_form_field $upload_field_name.name "$upload_file_name";
            upload_set_form_field $upload_field_name.content_type "$upload_content_type";
            upload_set_form_field $upload_field_name.path "$upload_tmp_path";

            # Inform backend about hash and size of a file
            upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5";
            upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";

            #upload_pass_form_field "some_hidden_field_i_care_about";

            upload_cleanup 400 404 499 500-505;
        }

        location / {
            root /opt/local/html;
        }

        location @after_upload {
            proxy_pass http://127.0.0.1:8888;
        }
    }
}
I have already tested the setup, and nginx does forward requests to Tornado. But when I try to upload a file, it gives me a 400: Bad Request HTTP status code, with the Tornado log stating that upfile.path is missing from the request. And when I look in the directory where nginx should supposedly have stored the uploaded file, it isn't there; hence the 400 error.
Can anyone point out why nginx is not storing the file in the specified directory /tmp?
Tornado Log:
WARNING:root:400 POST /upload (127.0.0.1): Missing argument upfile_path
WARNING:root:400 POST /upload (127.0.0.1) 2.31ms
Nginx Error Log:
127.0.0.1 - - [14/Jul/2011:13:14:31 +0530] "POST /upload HTTP/1.1" 400 73 "http://127.0.0.1/" "Mozilla/5.0 (X11; Linux i686; rv:6.0) Gecko/20100101 Firefox/6.0"
More verbose Error Log with Info option:
2011/07/14 16:17:00 [info] 7369#0: *1 started uploading file "statoverride" to "/tmp/0000000001" (field "upfile", content type "application/octet-stream"), client: 127.0.0.1, server: localhost, request: "POST /upload HTTP/1.1", host: "127.0.0.1", referrer: "http://127.0.0.1/"
2011/07/14 16:17:00 [info] 7369#0: *1 finished uploading file "statoverride" to "/tmp/0000000001", client: 127.0.0.1, server: localhost, request: "POST /upload HTTP/1.1", host: "127.0.0.1", referrer: "http://127.0.0.1/"
2011/07/14 16:17:00 [info] 7369#0: *1 finished cleanup of file "/tmp/0000000001" after http status 400 while closing request, client: 127.0.0.1, server: 0.0.0.0:80
More verbose Error Log with Debug option:
http://pastebin.com/4NVCdmrj
Edit 1:
So, one point we can infer from the above error log is that the file is being uploaded by nginx to /tmp but is subsequently getting cleaned up. I don't know why; I need help here.

I have just written a web application with Tornado and nginx-upload-module, and it works.
According to the Tornado log you've provided, I guess you can try changing your code from
self.get_argument('upfile_path')
to
self.get_argument('upload_tmp_path')
nginx did store the file, but the line "upload_cleanup 400 404 499 500-505;" tells it to clean up the file when your application responds with one of the specified HTTP status codes.

Related

How to deploy Python Celery worker on EC2 (current error 111: Connection refused)?

Technologies: Python, Django, AWS, RabbitMQ on AWS, Celery
I currently have my company's website deployed on an EC2 instance. Everything works well; my current task is to run a Celery worker, but every time I attempt to do so I get the 111: Connection refused error.
Celery runs and RabbitMQ is running, but my assumption is that my setup may still not be correct, even though I have everything within the correct VPCs and security groups.
FILES:
settings.py
CELERY_BROKER_URL = 'amqps://<username>:<password>@<awspath>.mq.us-west-2.amazonaws.com:5671'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_TASK_DEFAULT_QUEUE = env("CELERY_TASK_DEFAULT_QUEUE", default="default")
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": env("AWS_REGION", default="us-west-2"),
}
CELERY_RESULT_BACKEND = None
Procfile
celery: celery -A bsw_site worker -l INFO
__init__.py
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ('celery_app',)
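The __init__.py above imports app from a celery.py module that isn't shown in the question. A minimal sketch of what that module usually looks like, assuming the Django project package is bsw_site as in the Procfile, is:

import os
from celery import Celery

# Tell Celery where the Django settings live (package name assumed from the Procfile).
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bsw_site.settings")

app = Celery("bsw_site")
# Pick up the CELERY_* values shown in settings.py (the CELERY namespace strips the prefix).
app.config_from_object("django.conf:settings", namespace="CELERY")
# Discover tasks.py modules in all installed apps.
app.autodiscover_tasks()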
error log
2022/10/20 20:57:13 [error] 4189#4189: *35 connect() failed (111: Connection refused) while connecting to upstream, client: 10.176.11.163, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8000/favicon.ico", host: "<company_website_link>", referrer: "<company_website_link>"
nginx_conf.conf
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 200000;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    fastcgi_read_timeout 7200;
    proxy_read_timeout 7200;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;
        access_log /var/log/nginx/access.log main;

        client_header_timeout 60;
        client_body_timeout 60;
        client_max_body_size 50M;
        keepalive_timeout 60;

        gzip off;
        gzip_comp_level 4;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        # Include the Elastic Beanstalk generated locations
        include conf.d/elasticbeanstalk/*.conf;
    }
}
Thanks in advance for any help; I can provide more information if needed.

How to deploy django with nginx and uwsgi

I have a problem deploying my Django server with uwsgi and nginx.
The command dev_ralph runserver 0.0.0.0:8000 starts the development server and works fine.
But my goal now is to deploy a production Django server, and as I said, I have some problems with that.
My project root directory is: /home/ralphadmin/uwsgi/capentory-ralph/ralph
Here is the nginx virtual host configuration:
/etc/nginx/sites-available/ralph.conf and /etc/nginx/sites-enabled/ralph.conf:
# mysite_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    # server unix:///home/ralphadmin/uwgsi/capentory-ralph/ralph/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name 10.70.7.1; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /var/media; # your Django project's media files - amend as required
    }

    location /static {
        root /home/ralphadmin/uwsgi/capentory-ralph/ralph/src/ralph/static/; # your Django project's static files
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /home/ralphadmin/uwsgi/capentory-ralph/ralph/uwsgi_params; # the uwsgi_params file you installed
    }
}
As you can see, my static files are located in /home/ralphadmin/uwsgi/capentory-ralph/ralph/src/ralph/static/
And if I try to connect to the server, the default nginx welcome page shows up in my browser.
I would really appreciate some helpful comments.
----------- Update 1
I investigated my nginx logs and found this:
2020/02/11 15:18:44 [error] 19097#19097: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.70.7.254, server: 10.70.7.1, request: "GET / HTTP/1.1", upstream: "uwsgi://10.70.7.1:8630", host: "10.70.7.1"
2020/02/11 15:31:55 [error] 19492#19492: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.70.7.254, server: 10.70.7.1, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://10.70.7.1:8630", host: "10.70.7.1", referrer: "http://10.70.7.1/"
... uwsgi is running but still, it doesn't work at all :(

Internal Server Error with Nginx and uWSGI

I'm trying to host an app using Nginx on Linode.com but I'm stuck early on uWSGI config.
I've used "Getting Started" guide and "WSGI using uWSGI and nginx on Ubuntu 12.04 (Precise Pangolin)" guide and I've succesfully deployed Nginx (got Nginx welcome message in browser).
Although above tutorial is for Ubuntu 12.04 I've used 14.04.
The problem starts when I got to uWSGI configuration and 'Hello World' Python app. Going to location / in browser returns Failed to load resource: the server responded with a status of 500 (Internal Server Error) and nothing gets logged in server error.log. location /static works though and serves files without a hitch.
I've tried many things and looked extensively for fix on Google and Stackoverflow but nothing, and I'm kind of frustrated right now.
Thank you for any help.
Here are my config files (I've hidden my domain and ip):
/etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu
XX.XX.XX.XXX mars
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
/etc/nginx/sites-enabled/example.com
server {
    listen 80;
    server_name $hostname;
    access_log /srv/www/example.com/logs/access.log;
    error_log /srv/www/example.com/logs/error.log;

    location / {
        #uwsgi_pass 127.0.0.1:9001;
        uwsgi_pass unix:///run/uwsgi/app/example.com/example.com.socket;
        include uwsgi_params;
        uwsgi_param UWSGI_SCHEME $scheme;
        uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
    }

    location /static {
        root /srv/www/example.com/public_html/;
        index index.html index.htm;
    }
}
/etc/uwsgi/apps-enabled/example.com.xml
<uwsgi>
    <plugin>python</plugin>
    <socket>/run/uwsgi/app/example.com/example.com.socket</socket>
    <pythonpath>/srv/www/example.com/application/</pythonpath>
    <app mountpoint="/">
        <script>wsgi_configuration_module</script>
    </app>
    <master/>
    <processes>4</processes>
    <harakiri>60</harakiri>
    <reload-mercy>8</reload-mercy>
    <cpu-affinity>1</cpu-affinity>
    <stats>/tmp/stats.socket</stats>
    <max-requests>2000</max-requests>
    <limit-as>512</limit-as>
    <reload-on-as>256</reload-on-as>
    <reload-on-rss>192</reload-on-rss>
    <no-orphans/>
    <vacuum/>
</uwsgi>
/srv/www/example.com/application/wsgi_configuration_module.py
import os
import sys

sys.path.append('/srv/www/example.com/application')

os.environ['PYTHON_EGG_CACHE'] = '/srv/www/example.com/.python-egg'

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return 'Hello world!'
last access log
XX.XX.XX.XXX - - [05/Jul/2015:10:03:37 -0400] "GET / HTTP/1.1" 500 32 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36"
XX.XX.XX.XXX - - [05/Jul/2015:10:03:38 -0400] "GET /favicon.ico HTTP/1.1" 500 32 "http://example.com/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36"
This is the only error log I got, and only once while trying to fix this:
2015/07/05 08:49:06 [crit] 25301#0: *17 connect() to unix:///run/uwsgi/app/example.com/example.com.socket failed (2: No such file or directory) while connecting to upstream, client: XX.XX.XX.XXX, server: mars, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:///run/uwsgi/app/example.com/example.com.socket:", host: "example.com"
2015/07/05 08:49:07 [crit] 25301#0: *17 connect() to unix:///run/uwsgi/app/example.com/example.com.socket failed (2: No such file or directory) while connecting to upstream, client: XX.XX.XX.XXX, server: mars, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///run/uwsgi/app/example.com/example.com.socket:", host: "example.com", referrer: "http://example.com/"
I do not understand what /etc/nginx/sites-enabled/dev.host.in is; how is it used, and why *.in?
I think you should try this.
step 1.
create a project.ini file
# django_project.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /home/username/django_project
# Django's wsgi file
module = blog.wsgi
# the virtualenv (full path)
home = /home/username/Env/project
# process-related settings
master = true
pidfile = /tmp/proj_uwsgi.pid
# maximum number of worker processes
processes = 5
# the socket
socket = :8001
# ... with appropriate permissions - may be needed
# chmod-socket = 664
# clear environment on exit
vacuum = true
# background the process
daemonize = /home/username/django_project/error_uwsgi.log
step 2.
create mysite.conf in nginx, vim /etc/nginx/conf.d/mysite.conf
upstream django {
    #server unix:///home/username/django_project/djproj.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name localhost mysite.com www.mysite.com; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/username/django_project/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /home/username/django_project/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass 127.0.0.1:8001; #project;
        include /home/username/django_project/uwsgi_params; # the uwsgi_params file you installed
    }
}
step 3.
ln -s /etc/nginx/uwsgi_params /home/username/django_project/
step 4.
uwsgi --ini django_project.ini
uwsgi --stop /tmp/proj_uwsgi.pid
uwsgi --reload /tmp/proj_uwsgi.pid
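As a sanity check after step 4 (this is not part of the original steps), it can help to confirm that uwsgi is actually listening on the port nginx proxies to, and to watch the log written by the daemonize option above; the paths and port are the ones assumed in this example:

ss -lntp | grep 8001                                    # something should be listening on 127.0.0.1:8001
tail -f /home/username/django_project/error_uwsgi.log   # uwsgi output from the daemonize setting above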

Django + uwsgi + nginx + CentOS 7 : connection refused on 8001 port

I get an HTTP 502 error when I try to go to http://domain.com:8000
nginx.conf
upstream django {
    # connect to this socket
    # server unix:///tmp/uwsgi.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket
}

server {
    # the port your site will be served on
    listen 8000;
    # the domain name it will serve for
    server_name domain.com; # substitute your machine's IP address or FQDN
    #root /home/mysite;
    charset utf-8;

    # Max upload size
    client_max_body_size 75M; # adjust to taste

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /home/mysite/uwsgi_params; # or the uwsgi_params you installed manually
    }
}
error message in /var/log/nginx/error.log:
2015/04/09 12:28:07 [error] 23235#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 118.131.206.235, server: domain.com, request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:8001", host: "domain.com:8000"
2015/04/09 12:28:08 [error] 23235#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 118.131.206.235, server: domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://127.0.0.1:8001", host: "domain.com:8000"
I've tried everything but couldn't find any clue as to why it gives me the HTTP 502 error.
SELinux may prevent this type of connection by default.
Check the log file /var/log/audit/audit.log to be sure about it.
Or use the following command to temporarily put SELinux in permissive mode:
setenforce 0
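If the audit log confirms that SELinux is blocking the connection, a persistent alternative to switching enforcement off (this goes beyond the original answer, but it is the usual fix when nginx is refused while connecting to an upstream port) is to allow web server processes to open outbound network connections:

# look for recent denials involving nginx
grep denied /var/log/audit/audit.log | grep nginx
# allow httpd/nginx to connect to network backends, persisting across reboots
setsebool -P httpd_can_network_connect 1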

Django HttpResponse Internal Server Error when logged out

I have a jQuery request for JSON data in my Django app (which is started in a virtualenv with uwsgi and nginx):
$.getJSON( url, function( data ) {
    var obl = "/raj/?raj=" + data.id;
    $.getJSON( obl, function( raj_data ) {
        ...
    } );
} );
and the corresponding view:
def rcoato(request):
    response_data = SprRegion.objects.values('id').get(Q(id=request.GET['region']))
    response_data = json.dumps(response_data)
    return HttpResponse(response_data, content_type='application/javascript')
It works fine and JSON data is returned, but only while I am logged in via SSH.
I start my application with:
source virtualenv/bin/activate
uwsgi --xml /home/rino/sites/centre/uwsgi.xml &
When I log out (with setopt no_hup and setopt no_checkjobs), my app works partially: HTML pages are rendered and static files are served, but requests to /raj/?raj=... raise a 500 Internal Server Error.
My nginx.conf:
server {
    listen 8081;
    server_name localhost;
    access_log /var/log/nginx/nginx_centre_access.log;
    error_log /var/log/nginx/nginx_centre_error.log;

    location /static {
        autoindex on;
        alias /home/rino/sites/centre/centre/static/;
    }

    location / {
        uwsgi_pass 127.0.0.1:3031;
        include /home/rino/sites/centre/uwsgi_params;
    }
}
uwsgi config:
<uwsgi>
    <socket>127.0.0.1:3031</socket>
    <processes>5</processes>
    <pythonpath>/home/rino/sites/centre</pythonpath>
    <chdir>/home/rino/sites/centre/centre</chdir>
    <wsgi-file>/home/rino/sites/centre/centre/wsgi.py</wsgi-file>
    <pidfile>/tmp/centre-master.pid</pidfile>
    <plugin>python3</plugin>
    <max-requests>5000</max-requests>
    <harakiri>40</harakiri>
    <master>true</master>
    <threads>2</threads>
</uwsgi>
Output of cat nginx_centre_error.log | tail after logging out and making the request described above:
2014/10/03 07:34:46 [error] 20657#0: *296 connect() failed (111: Connection refused) while connecting to upstream, client: 176.100.173.177, server: localhost, request: "GET /settler/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:3031", host: "myhost.com:8081", referrer: "http://myhost.com/settlersmain/"
2014/10/03 07:56:55 [error] 20657#0: *335 connect() failed (111: Connection refused) while connecting to upstream, client: 176.100.173.177, server: localhost, request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:3031", host: "myhost.com:8081"
2014/10/03 08:23:33 [error] 20657#0: *367 open()
Thanks for any help!
UPD: I replaced localhost in the server_name line of nginx.conf with the server's IP address, but the issue is still present.
According to your log you have a (111: Connection refused) error, which means the uwsgi process is killed after you close the SSH connection.
You can try these instructions to use nohup.
uWSGI has an option to daemonize. This is a better way; uWSGI will handle detaching from the console itself.
But I suggest you use something like supervisord to run uWSGI. Or you can use your system's init.d scripts or Upstart.
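For example, with the XML config in the question, daemonizing is a one-line addition (the log path here is only an illustration, not from the question):

<!-- inside <uwsgi> ... </uwsgi>: detach from the console and log to a file -->
<daemonize>/home/rino/sites/centre/uwsgi.log</daemonize>

If you go the supervisord route instead, leave uWSGI in the foreground (no daemonize) and let supervisord manage it. A hypothetical program entry, with the command and paths assumed from the question, might look roughly like:

; /etc/supervisor/conf.d/centre.conf
[program:centre_uwsgi]
command=/home/rino/sites/centre/virtualenv/bin/uwsgi --xml /home/rino/sites/centre/uwsgi.xml
directory=/home/rino/sites/centre
autostart=true
autorestart=true
stopsignal=QUIT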
