How to run two Django apps on the same dev machine - python

I'm trying to run two separate Django apps that need to communicate with each other over a RESTful API. In real life they would be on two separate machines, but during development I'm running the two instances on different ports.
One app is running on 127.0.0.1:8000 and the other on 127.0.0.1:9000.
I've tried running both on localhost or 0.0.0.0 and every other combination, but I keep getting errors such as
407 Client Error: Proxy Authorization Required
or
500 Server Error: INKApi Error
which, as far as I could find, is an Apache error, or a 403 Forbidden.
What is the correct way to test two apps on the same machine?

There's no single TRUE way, but one approach is to put a reverse proxy such as nginx in front of both apps (this works with Apache as well, using the appropriate syntax; what you should care about is the general idea). You could define two fake domains like domain1.dev and domain2.dev and build the appropriate entries:
server {
    listen 80;
    server_name domain1.dev;

    location / {
        proxy_pass http://127.0.0.1:8000/;
    }
}

server {
    listen 80;
    server_name domain2.dev;

    location / {
        proxy_pass http://127.0.0.1:9000/;
    }
}
Note, however, that this setup is incomplete and you should understand how nginx works; the idea here is conceptual and you can also do this with Apache. Also, the domains must be defined correctly in /etc/hosts (or the Windows equivalent) for this to work.
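For example, a minimal /etc/hosts entry for the two made-up domains above would be:

127.0.0.1   domain1.dev
127.0.0.1   domain2.dev

With that in place, browsing to http://domain1.dev hits nginx on port 80, which proxies to the Django dev server on port 8000. Depending on your settings, you may also need to add the fake domains to Django's ALLOWED_HOSTS (and pass the original Host header along with proxy_set_header Host $host) so Django accepts the requests.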

Related

How to develop a Flask website in development and production mode

This is not a purely technical issue but more of a methodological question.
I've seen Q&A about configuration for DEBUG and PRODUCTION environments, but my question concerns a different issue.
When I started working on the project on my local machine I edited the hosts file to redirect www.example.com (I used the same URL for my live website) to 127.0.0.1 as I used to and it's working great.
Now that www.example.com is live, I want to know what the right configuration is to keep developing the website.
The only idea I came up with is to use www.example.org in my hosts file (so I won't lose actual access to www.example.com) and, in the code, use IF DEBUG to redirect traffic to example.org instead of example.com, but I feel there are better options.
I also would love some tips about the right way of working with git to post local updates to the live server.
When I want to access the website I'm running locally, I just open http://127.0.0.1:5000 in my browser.
If you've hardwired the domain "www.example.com" into your Flask logic somewhere, e.g. when passing a redirect link to an OAuth service, I would consider removing that hardcoded logic. Instead, use an environment variable that you set differently on production and development, or read the current domain from the request with request.url_root or request.headers['Host'].
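For instance, a rough sketch of building an OAuth callback from the incoming request instead of a hardcoded domain (the route and the provider URL are made up for illustration):

from flask import Flask, redirect, request

app = Flask(__name__)

@app.route('/login')
def login():
    # Derive the callback from whatever host served this request,
    # so the same code works on example.local and www.example.com.
    callback = request.url_root + 'oauth/callback'
    return redirect('https://provider.example/authorize?redirect_uri=' + callback)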
You can also make use of Flask's SERVER_NAME and PREFERRED_URL_SCHEME built-in configuration values:
class Config(object):
    # blah blah
    pass

class DevelopmentConfig(Config):
    SERVER_NAME = "example.local"
    PREFERRED_URL_SCHEME = 'http'

class ProductionConfig(Config):
    SERVER_NAME = "example.com"
    PREFERRED_URL_SCHEME = 'https'
On your development machine, map example.local to 127.0.0.1 in your hosts file.
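A minimal sketch of switching between these configs with an environment variable (the module name config and the variable name APP_CONFIG are assumptions for the example, not something Flask mandates):

import os
from flask import Flask

app = Flask(__name__)

# Pick the config class from the environment; default to development settings.
config_name = os.environ.get('APP_CONFIG', 'DevelopmentConfig')
app.config.from_object('config.' + config_name)

On the production server you would export APP_CONFIG=ProductionConfig, while your local machine falls back to DevelopmentConfig and example.local.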

How to run Node and Django servers at the same time

I am making a video-chat application that uses WebSockets (the ws package from npm), and I have written the WebSocket-related code in a server.js file like this:
var webSocketServ = require('ws').Server;

var wss = new webSocketServ({
    port: 8000
});

var users = {};
var otherUser;

wss.on('connection', function (conn) {
    console.log("User connected");

    conn.on('message', function (message) {
        var data;
        ...
and so on. For managing URLs I used Django.
My problem is that when I run python manage.py runserver, server.js is not running and the application does not connect to the WebSocket server; if I run node server.js instead, the application connects, but then I can't handle URLs as in my Django code.
So I opened two terminal instances and ran Node in one and Python in the other, but I know that's not the correct way and it won't be efficient when hosting.
Is there any way to run both servers at the same time?
If you need the two servers to appear to run on the same port, you'll need to set up one to proxy certain requests to the other. (I'd recommend having the Node.js server proxy everything non-websocket to Django using e.g. https://github.com/http-party/node-http-proxy).
If you don't need the two servers to run on the same port, just change them to use different ports, in which case you'll access the two contents on different URLs. You can change that port: 8000 in the Node app, or in Django's runserver, do e.g. runserver 127.0.0.1:8010 (or 0.0.0.0:8010 to expose the dev server beyond your own machine).
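If you go the single-port route, a rough sketch with node-http-proxy could look like this (the choice of port 8010 for Django and the exact wiring are assumptions for illustration, not a definitive setup):

// server.js -- handle WebSockets here, proxy everything else to Django.
var http = require('http');
var httpProxy = require('http-proxy');
var WebSocketServer = require('ws').Server;

// Django dev server assumed to be running on 127.0.0.1:8010.
var proxy = httpProxy.createProxyServer({ target: 'http://127.0.0.1:8010' });

// Plain HTTP requests are forwarded to Django.
var server = http.createServer(function (req, res) {
    proxy.web(req, res);
});

// WebSocket connections are handled locally by ws on the same port.
var wss = new WebSocketServer({ server: server });
wss.on('connection', function (conn) {
    console.log('User connected');
});

server.listen(8000);

Start Django with python manage.py runserver 127.0.0.1:8010, start node server.js, and the browser only ever talks to port 8000.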

How to fix a "413 Request Entity Too Large" error for a Django app on DigitalOcean

I recently deployed a Django app to DigitalOcean using gunicorn and nginx. The app is working, but when I try to upload files it throws an error:
413 request entity too large
I've tried some suggestions on Stack Overflow which say to add client_max_body_size to /etc/nginx/nginx.conf. When I did this, the error
client_max_body_size directive not allowed in /etc/nginx/nginx.conf
was thrown. I really do not know what to do.
Check where client_max_body_size is declared: the "directive not allowed" error usually means it was placed in the wrong context. It is only valid inside the http, server, or location blocks, not at the top level of nginx.conf. (upload_max_filesize and post_max_size are PHP settings in php.ini and do not apply to a Django/gunicorn stack.)
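A minimal sketch, assuming your site's configuration lives in a server block under /etc/nginx/sites-available/ (the 20M limit is just an example):

server {
    ...
    # Allow request bodies (file uploads) up to 20 MB.
    client_max_body_size 20M;
    ...
}

Reload nginx afterwards, e.g. with sudo nginx -s reload, for the change to take effect.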

Cannot get Bokeh server to run on a deployed server

I cannot get a Bokeh plot to work on a deployed server because of cross-domain issues. I have asked this question in a few forums and am not really getting anywhere.
Whether I am running an applet or trying to embed a single plot, I always get the error:
XMLHttpRequest cannot load http://127.0.0.1:5006/bokeh/objinfo/0257493b-cce5-450d-8036-2bc57233b1dc/bd1791f4-4d28-4faa-8c9d-a6fe5a1721c1. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://my_ip_address' is therefore not allowed access. The response had HTTP status code 500.
Here I am trying to fetch a plot script from within a Flask view:
@perf.route('/_fetch_heatmap', methods=['POST'])
@login_required
def fetch_sd_heatmap():
    document = Document()
    session = Session(root_url='http://127.0.0.1:5006', configdir=current_app.config['BASE_DIRECTORY'])
    session.use_doc('sd_viz')
    session.load_document(document)
    ...
    plots = VBox(hm_duration, hm_frequency)
    document.add(plots)
    push(session, document)
    script = autoload_server(plots, session)
    return jsonify({'script': script})
The script is returned to an Ajax call in my JavaScript and then appended to the corresponding <div>.
This runs fine on my development machine.
Below is my nginx configuration for production
server {
    listen my_ip default_server;
    charset utf-8;
    client_max_body_size 30M;

    location ~ ^/(app_config.py|.git) {
        deny all;
        return 404;
    }

    location / {
        index index.html index.htm;
        root /home/myuser/app_directory;
        try_files $uri @app;
    }

    location /static {
        alias /home/myuser/app_directory/webapp/static;
    }

    location @app {
        include uwsgi_params;
        uwsgi_pass unix:/home/myuser/app_directory/uwsgi.sock;
        uwsgi_connect_timeout 18000;
        ...
    }
}
Has anyone successfully made a Flask application with embedded Bokeh plots from the Bokeh server that runs in a production environment?
Hi, just to update this discussion: as of the new Bokeh server in 0.11 there is much more extensive documentation about deployments:
http://docs.bokeh.org/en/0.11.1/docs/user_guide/server.html
including information about running behind reverse proxies, using load balancers and process managers, and automating with tools like Salt. The new server is much more robust, scalable, and easier to use. You can see a gallery of live Bokeh server examples that have been deployed in "production" continuously since January 2016 here:
http://demo.bokeh.org
For reference, the fully automated deployment is available for study on GitHub:
https://github.com/bokeh/demo.bokeh.org
Additionally, a fairly sophisticated example of embedding a session-specific Bokeh server app is demonstrated in the "Happiness" example here:
https://github.com/bokeh/bokeh/tree/master/examples/embed/server_session
Lastly, I should say that the upcoming 0.12 release will be able to set a custom Jinja template for Bokeh apps, meaning that things like single-page apps that are built heavily around Bokeh documents can be served directly from the Bokeh server, without needing to embed in another web server (if that is desired).
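As a rough illustration of the reverse-proxy setup those docs describe, an nginx location block that forwards both HTTP and WebSocket traffic to a Bokeh server on its default port might look like this (the /bokeh/ path prefix is an assumption for the example):

location /bokeh/ {
    proxy_pass http://127.0.0.1:5006;
    # The Bokeh server needs WebSocket upgrades to be passed through.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}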

Web.py routing when nginx is added as an upstream

I've been writing a client-side app and a server-side app as two separate apps, and I want the client to use the server. The client is written in JavaScript and the server in Python, using web.py as the engine that delivers to the client. The client and server must be on the same web domain.
The server part has a route defined as:
'/data/(.*)', 'applicationserver.routes.Data.Data'
This works fine running it locally using http://buildserver/data/transform
I'm setting it up as a site in nginx like this:
upstream app {
    server 127.0.0.1:8081;
}
and adding it to the web application like this:
location /server {
    ...
    proxy_pass http://app;
}
The new path to the route would include the /server prefix, but for obvious reasons this will not work, as the server app is listening for /data and not /server/data.
I tried changing the route in Python to (.*)/data/(.*), which sort of works except that it throws the error:
<type 'exceptions.TypeError'> at /data/transform
GET() takes exactly 2 arguments (3 given)
I figured out what was going on just before posting, but I hope I can help someone else by posting this anyway.
web.py passes each matched group to GET, therefore using (.*)/data/(.*) sent both the start and the end of the path to GET, which is why it failed.
Setting the route to .*/data/(.*) gave me what I was after and only sent the part after /data/ to the GET function.
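To illustrate (a minimal sketch; the class name Data and the handler body are made up for the example):

import web

# The leading .* is not a capture group, so GET receives exactly one
# extra argument: whatever follows /data/.
urls = (
    r'.*/data/(.*)', 'Data',
)

class Data:
    def GET(self, name):
        # For a request to /server/data/transform, name == 'transform'
        return "requested: " + name

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()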
