I'm trying to set up a private Git server over HTTP(S) using nginx on Windows, but without any success yet.
I want it to be like GitHub, but local and with very simple functionality. For example, when you go to localhost/path/to/your/repo.git, the site will only display the source tree, nothing else; you have to run all Git commands from the Git console on your machine.
I have seen many posts about doing something like this, for example:
how to serve git through http via nginx with user password
https://gist.github.com/massar/9399764
...
Actually, I need to know: do I really need nginx? I mean, can I make it work with just Python server code?
I'm really confused about what I have to do and what I'm missing, because it's my first time dealing with something like this and I don't know nginx very well.
Can anyone help me get it done?
NOTE: I don't want to use other tools like GitLab; I want to code it from scratch.
EDIT
I read about git-http-backend in the Git docs. I think I should now configure nginx to work with git-http-backend.exe (because I'm on Windows).
Here is my nginx conf file:
location / {
    root "C:/Users/One/Desktop/work";
    fastcgi_param SCRIPT_FILENAME "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe";
    fastcgi_param PATH_INFO $uri;
    fastcgi_param GIT_EXPORT_ALL "";
    fastcgi_param GIT_PROJECT_ROOT "C:/Users/One/Desktop/work";
    include fastcgi_params;
}
I have a bare repository, project.git, in C:/Users/One/Desktop/work, but Git always returns "not found" when I try to clone it:
git clone http://localhost/project.git test
Cloning into 'test'...
fatal: repository 'http://localhost/project.git/' not found
And this is nginx log file
127.0.0.1 - - [24/Jun/2015:14:04:10 +0300] "GET /project.git/info/refs?service=git-upload-pack HTTP/1.1" 404 168 "-" "git/1.9.5.msysgit.0"
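For context, git-http-backend is a plain CGI program, while nginx only speaks FastCGI, so a FastCGI wrapper (such as fcgiwrap on Unix) has to sit between them; the location block above also never forwards the request anywhere with fastcgi_pass, which is one plausible cause of the 404. A hedged sketch with Unix-style paths (on Windows you would need some equivalent of fcgiwrap; also note the export variable is spelled GIT_HTTP_EXPORT_ALL, not GIT_EXPORT_ALL):

```nginx
# Sketch only: assumes fcgiwrap is installed and listening on its socket,
# and that bare repositories live under /srv/git.
location ~ /git(/.*) {
    client_max_body_size 0;                 # pushes can be large
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
    fastcgi_param GIT_PROJECT_ROOT /srv/git;
    fastcgi_param GIT_HTTP_EXPORT_ALL "";
    fastcgi_param PATH_INFO $1;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;  # the piece missing above
}
```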
I started developing a new site using Django. For realistic testing I wanted to run it on a Synology DS212J NAS.
Following the official Synology guides, I installed ipkg and, with it, the mod_wsgi package.
As the next step, following the standard tutorial, I made a virtualenv and installed Django in it, opened a new project, and adjusted the settings according to: https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-apache-and-mod_wsgi-on-ubuntu-16-04
I'm able to reach the "Hello World" site from Django by using manage.py.
As suggested, I want to replace manage.py with the Apache server on the NAS. So I think I should go and edit the Apache config files, e.g. to define a virtual host...
However, I can't locate the files for it; it seems they were moved in DSM 6 (which I use) compared to other guides.
Where on the Synology do I need to enter the values from the tutorial?
As I'm quite new to the topic: do I need to explicitly load the mod_wsgi module for Apache, and if so, where?
Is it a good idea to use the basic mode of WSGI instead of daemon mode? I'm not sure which Django modules will be used later in development...
Thanks for the support!
Activate the Python 3 package and Web Station.
In Web Station > General Settings > Main server, enable nginx.
In Control Panel > Network > DSM Settings, enable a custom domain: "test"
(which will allow us to access the NAS by entering test.local and will simplify the task later).
Enable the SSH connection in Control Panel > Terminal & SNMP.
We use Synology's DDNS service to get external access, in our case test.synology.me.
In Control Panel > Security > Certificate, we generate our SSL certificate with Let's Encrypt.
Connect to the NAS over SSH.
Become root: sudo -i
Install virtualenv: easy_install virtualenv
We set up our virtual environment: virtualenv -p python3 flasktest
Flask and gunicorn are installed:
pip install flask gunicorn
We create our web application, file: init.py
We launch our web application with gunicorn:
gunicorn --certfile /usr/syno/etc/certificate/system/default/cert.pem --keyfile /usr/syno/etc/certificate/system/default/privkey.pem -b 127.0.0.1:5000 app:app
In /etc/nginx/sites-enabled we create a server configuration file; we will use nginx as a proxy. In our case the file will be flasktest.conf.
flasktest.conf file:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    gzip on;
    server_name test.synology.me;

    location / {
        proxy_pass https://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_log /volume1/projects/flasktest/logs/error.log;
    access_log /volume1/projects/flasktest/logs/acess.log;
}
Open Control Panel > External Access > Router Configuration > Create > Built-in application, enable the check box for Web Station, and apply.
We check our server file by entering the command: nginx -t
We restart nginx: synoservicecfg --restart nginx
You now have access to your Python web applications from outside over HTTPS: https://test.synology.me
A little more information...
To keep your application accessible permanently (through reboots, crashes, and so on), you can create a script that restarts gunicorn. Note also that Web Station still takes over locally: if you enter the NAS IP locally, you will not see your Python web app, because we did not modify the main configuration file /etc/nginx/nginx.conf, so the default index.html page of Web Station is displayed instead.
example:
cd /volume1/projects/flasktest
source bin/activate
gunicorn --certfile /usr/syno/etc/certificate/system/default/cert.pem --keyfile /usr/syno/etc/certificate/system/default/privkey.pem -b 127.0.0.1:5000 app:app </dev/null 2>&1 &
This method works with other Python frameworks as well.
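The steps above never show the application file itself. A minimal sketch of what gunicorn loads as app:app (the module and variable names here are assumptions based on the gunicorn command; the route and message are placeholders):

```python
# app.py -- minimal stand-in for the Flask application the answer assumes.
# gunicorn loads this as `app:app` (module `app`, variable `app`).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Any response will do; this just proves the proxy chain works.
    return "Hello from the NAS!"
```

With this file in place, the gunicorn command above serves it on 127.0.0.1:5000 and nginx proxies external HTTPS traffic to it.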
I'm using the following tutorial to upload a Django web app to a DigitalOcean server. Everything seems fine while entering the following commands:
pip install --upgrade django
service gunicorn restart
According to the tutorial, I should now be able to see my web page (without the Bootstrap theme/fonts) after refreshing the host IP in my browser. Instead I get a 502 Bad Gateway error.
I've looked up the nginx error log in /var/log/nginx/error.log and it says the following:
2017/01/20 08:18:23 [error] 9342#0: *38 recv() failed (104:
Connection reset by peer) while reading response header from
upstream, client: 92.111.75.86, server: _, request: "GET / HTTP/1.1",
upstream: "http://127.0.0.1:9000/", host: "104.236.68.12"
Question: how do I fix this 502 Bad Gateway so that my site works properly? I've already tried adding ALLOWED_HOSTS = ['104.236.68.12'] to settings.py, and I've also tried creating a droplet with Ubuntu 16.04.
You should add the IP address of your DigitalOcean droplet to the ALLOWED_HOSTS variable in your Django settings. Based on your nginx log, I would set:
ALLOWED_HOSTS = ['104.236.68.12']
P.S.: Consider adopting Docker for the deployment of your Django app.
I'm so sorry, guys. This solved the issue:
dragging my Django app in FileZilla to
home/django/django_project
instead of:
home/django/django_project/django_project
Basically, I wasn't precise enough when reading the tutorial. So sorry!
Good day:
In settings.py, set:
ALLOWED_HOSTS = ['*']
This will make it accept requests for any host or IP.
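Note that '*' disables Django's host checking entirely, which is risky in production. As a rough illustration of how the matching behaves (a simplified re-implementation, not Django's actual code; see django.http.request.validate_host for the real rule):

```python
# Simplified sketch of ALLOWED_HOSTS matching (hypothetical helper, not
# Django's implementation). "*" matches anything; a leading dot matches
# a domain and all of its subdomains.
def host_allowed(host, allowed_hosts):
    for pattern in allowed_hosts:
        if pattern == "*" or pattern.lower() == host.lower():
            return True
        # ".example.com" matches "example.com" and any subdomain of it.
        if pattern.startswith(".") and (
            host == pattern[1:] or host.endswith(pattern)
        ):
            return True
    return False

print(host_allowed("104.236.68.12", ["*"]))                 # True
print(host_allowed("evil.example.com", ["104.236.68.12"]))  # False
```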
I was trying to set up an NGINX reverse proxy in front of Python's SimpleHTTPServer. My web.conf file is in /etc/nginx/conf.d, and its contents are as follows.
server {
    server_name localhost;

    location / {
        proxy_pass http://192.168.1.3:8000/;
    }
}
My NGINX is up and running, and I reloaded it after saving the web.conf file. On the other side, I'm also running Python's SimpleHTTPServer in the directory home/user/projects/.
But when I open the browser and visit localhost, it shows me the NGINX welcome page and not the index.html file inside the directory where SimpleHTTPServer is running.
Three things:
1. You forgot to specify the listening port in your configuration file; just add:
listen 80;
2. The default configuration is still active. Check whether there is a symlink called default in:
/etc/nginx/sites-enabled/
and delete it.
3. Preferably, put your configuration file in:
/etc/nginx/sites-available/
and then make a symlink to it in sites-enabled instead of putting it in conf.d. That way, you can simply delete the symlink to deactivate the site instead of removing the configuration.
See the nginx configuration documentation for more details.
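Putting the fixes together, a sketch of the complete site file (assuming SimpleHTTPServer was started on port 8000, e.g. with python -m SimpleHTTPServer 8000 on Python 2 or python -m http.server 8000 on Python 3):

```nginx
# /etc/nginx/sites-available/web.conf (symlinked into sites-enabled)
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://192.168.1.3:8000/;
        proxy_set_header Host $host;
    }
}
```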
I have a VM provisioned with Ansible, based on this one: https://github.com/jcalazan/ansible-django-stack, but for some reason trying to start Gunicorn gives the following error:
Can't connect to /path/to/my/gunicorn.sock
and in nginx log file:
connect() to unix:/path/to/my/gunicorn.sock failed (2: No such file or directory) while connecting to upstream
And indeed the socket file is missing from the specified directory. I have checked the permissions of the directory and they are fine.
Here is my gunicorn_start script:
NAME="{{ application_name }}"
DJANGODIR={{ application_path }}
SOCKFILE={{ virtualenv_path }}/run/gunicorn.sock
USER={{ gunicorn_user }}
GROUP={{ gunicorn_group }}
NUM_WORKERS={{ gunicorn_num_workers }}
# Set this to 0 for unlimited requests. During development, you might want to
# set this to 1 to automatically restart the process on each request (i.e. your
# code will be reloaded on every request).
MAX_REQUESTS={{ gunicorn_max_requests }}
echo "Starting $NAME as `whoami`"
# Activate the virtual environment.
cd $DJANGODIR
. ../../bin/activate
# Set additional environment variables.
. ../../bin/postactivate
# Create the run directory if it doesn't exist.
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Programs meant to be run under supervisor should not daemonize themselves
# (do not use --daemon).
exec gunicorn \
--name $NAME \
--workers $NUM_WORKERS \
--max-requests $MAX_REQUESTS \
--user $USER --group $GROUP \
--log-level debug \
--bind unix:$SOCKFILE \
{{ application_name }}.wsgi
Can anyone suggest what else could cause the missing socket file?
Thanks
Well, since I don't have enough rep to comment, I'll mention here that the missing socket by itself doesn't narrow things down much, but I can tell you a bit about how I started in your shoes and got things working.
The long and short of it is that gunicorn hit a problem when run by upstart and either never got up and running or shut down. Here are some steps that may help you gather more information to track down your issue:
In my case, when this happened, gunicorn never got around to doing any error logging, so I had to look elsewhere. Try ps auxf | grep gunicorn to see whether you have any workers running. I didn't.
Looking in the syslog for complaints from upstart, grep init: /var/log/syslog, showed me that my gunicorn service had been stopped because it was respawning too fast, though I doubt that will be your problem since you don't have a respawn stanza in your conf. Regardless, you might find something there.
After seeing that gunicorn was failing to run or log errors, I decided to try running it from the command line. Go to the directory where your manage.py lives and run the expanded version of your upstart command against your gunicorn instance, something like this (replace all of the variables with the appropriate literals instead of the placeholders I use):
/path/to/your/virtualenv/bin/gunicorn --name myapp --workers 4 --max-requests 10 --user appuser --group webusers --log-level debug --error-logfile /somewhere/I/can/find/error.log --bind unix:/tmp/myapp.socket myapp.wsgi
If you're lucky, you may get a Python traceback or find something in your gunicorn error log after running the command manually. Some things that can go wrong:
Django errors (maybe problems loading your settings module?). Make sure your wsgi.py references the appropriate settings module on the server.
Whitespace issues in your upstart script. I had a tab hiding among spaces that munged things up.
User/permission issues. Finally, I was able to run gunicorn as root on the command line, but not as a non-root user via the upstart config.
Hope that helps. It's been a couple of long days tracking this stuff down.
I encountered the same problem after following Michal Karzynski's great guide 'Setting up Django with Nginx, Gunicorn, virtualenv, supervisor and PostgreSQL'.
And this is how I solved it.
I had this variable in the bash script used to start gunicorn via Supervisor (myapp/bin/gunicorn_start):
SOCKFILE={{ myapp absolute path }}/run/gunicorn.sock
When you run the bash script for the first time, this creates a 'run' folder and a sock file with root privileges. So I deleted the run folder with sudo, then recreated it without sudo privileges, and voilà! If you now rerun Gunicorn or Supervisor, you won't get the annoying missing-sock-file error message anymore.
TL;DR
Sudo delete run folder.
Recreate it without sudo privileges.
Run Gunicorn again.
????
Profit
The error can also arise when you haven't pip-installed a requirement. In my case, looking at the gunicorn error logs, I found that a module was missing. This usually happens when you forget to pip install new requirements.
Well, I worked on this issue for more than a week and finally was able to figure it out.
Please follow the guides from DigitalOcean, but note that they do not pinpoint some important issues, including:
no live upstreams while connecting to upstream
*4 connect() to unix:/myproject.sock failed (13: Permission denied) while connecting to upstream
gunicorn OSError: [Errno 1] Operation not permitted
*1 connect() to unix:/tmp/myproject.sock failed (2: No such file or directory)
etc.
These issues are basically permission problems in the connection between nginx and Gunicorn.
To keep things simple, I recommend giving the same nginx permissions to every file/project/Python program you create.
To solve all the issues, follow this approach:
First:
Log in to the system as the root user.
Create the /home/nginx directory.
After doing this, follow the guide up to 'Create an Upstart Script'.
Run chown -R nginx:nginx /home/nginx
For the upstart script, make the following change in the last line:
exec gunicorn --workers 3 --bind unix:myproject.sock -u nginx -g nginx wsgi
DON'T ADD the -m option, as it messes up the socket. According to the Gunicorn documentation, when -m is left at its default, Python will figure out the best permissions.
Start the upstart script.
Now go to the /etc/nginx/nginx.conf file.
Go to the server block and append:
location / {
    include proxy_params;
    proxy_pass http://unix:/home/nginx/myproject.sock;
}
Do not follow the DigitalOcean article from here on.
Now restart nginx server and you are good to go.
I had the same problem and found out that I had set DJANGO_SETTINGS_MODULE to the production settings in the gunicorn script, while the WSGI settings were using dev.
I pointed DJANGO_SETTINGS_MODULE to dev and everything worked.
Here's a data flow:
http <--> nginx <--> uWSGI <--> python webapp
I guess nginx translates HTTP to the uwsgi protocol, and uWSGI translates it back for the web app.
What if I want to call uWSGI directly to test an API in the web app?
Actually, I'm using Pyramid. I just configure [uwsgi] in the .ini file and run uWSGI. But I want to test whether uWSGI serves the web app's functionality normally, and the uWSGI socket is not directly reachable over HTTP.
Try using uwsgi_curl:
$ pip install uwsgi-tools
$ uwsgi_curl 10.0.0.1:3030 /path
or, if you need to make more requests, try uwsgi_proxy from the same package:
$ uwsgi_proxy 10.0.0.1:3030
Proxying remote uWSGI server 10.0.0.1:3030 "" to local HTTP server 127.0.0.1:3030...
so you can browse it locally at http://127.0.0.1:3030/.
If your application only allows certain Host headers, you can specify the host name as well:
$ uwsgi_curl 10.0.0.1:3030 host.name/path
$ uwsgi_proxy 10.0.0.1:3030 -n host.name
If the application has static files, you can redirect those requests to your front server using the -s argument. You can also specify a different local port if needed.
From your question I assume you want to run your WSGI-compliant app directly with uWSGI and open an HTTP socket. You can do so by configuring your uwsgi.ini (or whatever the file is named) with:
http=127.0.0.1:8080
uWSGI will now open an HTTP socket that listens on port 8080 for incoming connections from localhost (see the documentation: http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html).
Alternatively, you can start your process directly from the command line with the http parameter:
$ uwsgi --http=127.0.0.1:8080 --module=yourapp:wsgi_entry_point
If you use Unix sockets to configure uWSGI, nginx can communicate with that socket via the uwsgi protocol (http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html).
Keep in mind that if you usually serve static content (CSS, JavaScript, images) through nginx, you will need to set that up too when you run uWSGI directly. But if you only want to test a REST API, this should work out for you.
First, consider these questions:
On which port is uWSGI running?
Is uWSGI running on your machine or on a remote one?
If it's running on a remote machine, is the port accessible from your computer? (iptables rules might forbid external access.)
Once you have made sure you have access, you can simply call http://hostname:port/path/to/uWSGI for direct API access.
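That check is easy to script with the standard library. A sketch (the throwaway local server below stands in for a uWSGI instance started with an --http socket, since the real hostname and port depend on your deployment):

```python
# Sketch: verify an HTTP endpoint is reachable and returns 200.
# The throwaway server stands in for uWSGI started with --http.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), PingHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Against a real deployment, replace the URL with http://hostname:port/path.
with urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status, body = resp.status, resp.read()
server.shutdown()
```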
I know this is an old question, but I just needed this and found that this Docker + nginx solution works best for me:
cat > /tmp/nginx.conf << EOF
events {}
http {
server {
listen 8000;
location / {
include uwsgi_params;
uwsgi_pass 127.0.0.1:3031;
}
}
}
EOF
docker run -it --network=host --rm --name uswgi-proxy -v /tmp/nginx.conf:/etc/nginx/nginx.conf:ro nginx