How to run Node and Django servers at the same time - python

I am making a video chat application that uses the ws WebSocket package from npm, and I have written the WebSocket-related code in a server.js file like this:
var webSocketServ = require('ws').Server;
var wss = new webSocketServ({
    port: 8000
});
var users = {};
var otherUser;

wss.on('connection', function (conn) {
    console.log("User connected");
    conn.on('message', function (message) {
        var data;
        // ... and so on
For managing the URLs I used Django.
My problem is that when I run python manage.py runserver, server.js is not running and the application cannot connect to the WebSocket server; and if I run node server.js, the application connects to the server but I cannot manage the URLs as in my Django code.
So I opened two terminal instances and ran Node in one and Python in the other, but I know that's not the correct way and it won't be efficient when hosting.
Is there any way to run both servers at the same time?

If you need the two servers to appear to run on the same port, you'll need to set up one to proxy certain requests to the other. (I'd recommend having the Node.js server proxy everything non-websocket to Django using e.g. https://github.com/http-party/node-http-proxy).
If you don't need the two servers to run on the same port, just change them to use different ports, in which case you'll access the two kinds of content on different URLs. You can change that port: 8000 in the Node app, or change Django's port with e.g. runserver 127.0.0.1:8010 (or 0.0.0.0:8010 to expose the dev server beyond your own machine).

Related

WLST edit mode issue for managed instance

When I executed the edit() command while connected to a managed instance, I ended up with the following error. What do I have to do to get out of this problem?
wls:/offline> connect('Admin60000','sun1rise','t3://my-comm-app-serv:60001')
Connecting to t3://my-comm-app-serv:60001 with userid Admin60000 ...
Successfully connected to managed Server "MiCommApp" that belongs to domain "MiBeaDir".
Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.
wls:/MiBeaDir/serverConfig>cd('/Servers/MiCommApp/SSL/MiCommApp')
wls:/MiBeaDir/serverConfig/Servers/MiCommApp/SSL/MiCommApp> edit()
Edit MBeanServer is not enabled on a Managed Server.
60001 is the port of a managed instance, one of the managed instances that run under the admin server. The admin server runs on port 60000.
That is because, for managed servers, WLST functionality is limited to browsing the configuration bean hierarchy. Read the excerpt below from the official WebLogic documentation.
To edit configuration beans, you must be connected to an
Administration Server, and you must navigate to the edit tree and
start an edit session, as described in edit and startEdit,
respectively.
If you connect to a Managed Server, WLST
functionality is limited to browsing the configuration bean hierarchy.
While you cannot use WLST to change the values of MBeans on Managed
Servers, it is possible to use the Management APIs to do so. BEA
Systems recommends that you change only the values of configuration
MBeans on the Administration Server. Changing the values of MBeans on
Managed Servers can lead to an inconsistent domain configuration.
So, basically, you need to connect to your Admin Server (currently you are connecting to your managed server, as per the log you have provided: Successfully connected to managed Server "MiCommApp" that belongs to domain "MiBeaDir".) and then edit the configuration using the edit() and startEdit() WLST commands.
By the way, I connect to my server using the following command:
If HTTPS - connect(url='t3s://abc.xyz.com:37001',adminServerName='AdminServer')
If HTTP - connect(url='t3://abc.xyz.com:37001',adminServerName='AdminServer')
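Putting it together, a minimal sketch of an edit session against the admin server could look like the following (the attribute change is only a placeholder; adjust the paths, credentials and values to your domain):
connect('Admin60000', 'sun1rise', 't3://my-comm-app-serv:60000')  # admin port 60000, not the managed 60001
edit()        # switch to the edit tree (only available on the Administration Server)
startEdit()   # start an edit session
cd('/Servers/MiCommApp/SSL/MiCommApp')
cmo.setEnabled(true)   # placeholder change; replace with whatever you actually need to edit
save()        # save the pending changes
activate()    # activate them in the domain
disconnect()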

How to run two Django apps on the same dev machine

I'm trying to run two separate Django apps that need to communicate with each other using a RESTful API. In real life there would be two separate machines, but during development I'm running the two instances on different ports. Trying to, anyway ...
One app is running on 127.0.0.1:8000, and the other on 127.0.0.1:9000.
I've tried running both on localhost or 0.0.0.0 and all other combinations but I keep getting these weird errors.
407 Client Error: Proxy Authorization Required
or
500 Server Error: INKApi Error
which, as far as I could find, is an Apache error, or 403 Forbidden.
What is the correct way to test two apps on the same machine?
There's no TRUE one. But you could try using nginx (although this works with Apache as well, using the appropriate syntax - what you should care about is the general idea). You could define two fake domains like domain1.dev and domain2.dev, and build appropriate entries:
server {
    listen 80;
    server_name domain1.dev;

    location / {
        proxy_pass http://127.0.0.1:8000/;
    }
}

server {
    listen 80;
    server_name domain2.dev;

    location / {
        proxy_pass http://127.0.0.1:9000/;
    }
}
Note, however, that this setup is incomplete and you should understand how nginx works; the idea here is conceptual, and you can also do this with Apache. Also, the domains must be correctly defined in /etc/hosts (or the Windows equivalent) for this to work.
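Once the fake domains resolve and nginx is proxying, one app can reach the other's API through its domain name. A minimal sketch using the requests library (the /api/items/ endpoint is just a made-up placeholder):
import requests

# The app behind domain1.dev calls the REST API of the app behind domain2.dev;
# nginx forwards domain2.dev:80 to the dev server on 127.0.0.1:9000.
response = requests.get("http://domain2.dev/api/items/", timeout=5)
response.raise_for_status()
print(response.json())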

How to run Python scripts on a web server (e.g. localhost)

In my development of Android and Java applications I have been using PHP scripts to interact with an online MySQL database, but now I want to migrate to Python.
How does one run Python scripts on a web server? In my experience with PHP, I have been saving my files under the /var/www folder in a Linux environment, and then I just call the file later with the URL of that path. Where do I save my Python scripts?
You can use Flask to run webapps.
The simple Flask app below will help you get started.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/sampleurl', methods=['GET'])
def samplefunction():
    # access your DB and get your results here
    data = {"data": "Processed Data"}
    return jsonify(data)

if __name__ == '__main__':
    port = 8000  # the custom port you want
    app.run(host='0.0.0.0', port=port)
Now when you hit http://your.systems.ip:8000/sampleurl you will get a JSON response for your mobile app to use.
From within the function you can do DB reads, file reads, etc.
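For example, a minimal sketch of a DB read inside such a handler, assuming a local SQLite file app.db with a users table (both are placeholders for whatever database you actually use; app and jsonify are the ones defined above):
import sqlite3

@app.route('/users', methods=['GET'])
def list_users():
    # Open the (hypothetical) local database and fetch a few rows.
    conn = sqlite3.connect('app.db')
    try:
        rows = conn.execute('SELECT id, name FROM users').fetchall()
    finally:
        conn.close()
    return jsonify([{"id": r[0], "name": r[1]} for r in rows])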
You can also add parameters like this:
from flask import request  # also needed for this example

@app.route('/sampleurl', methods=['GET'])
def samplefunction():
    required_params = ['name', 'age']
    missing_params = [key for key in required_params if key not in request.args.keys()]
    if len(missing_params) == 0:
        data = {
            "name": request.args['name'],
            "age": request.args['age']
        }
        return jsonify(data)
    else:
        resp = {
            "status": "failure",
            "error": "missing parameters",
            "message": "Provide %s in request" % (missing_params)
        }
        return jsonify(resp)
To run this, save the Flask app in a file, e.g. myapp.py.
Then from the terminal run python myapp.py.
It will start the server on port 8000 (or on whatever port you specified).
Flask's built-in server is not recommended for production use. Once you are happy with the app, you might want to look into an Nginx + Gunicorn + Flask setup.
For detailed instructions on Flask you can look at this answer. It is about setting up a web server on a Raspberry Pi, but it should work on any Linux distro.
Hope that helps.
Use a web application framework like CherryPy, Django, Webapp2 or one of the many others. For a production setup, you will need to configure the web server to make them work.
Or write CGI programs with Python.
On Apache the simplest way would be to write the Python as CGI. Here is an example:
First, create an .htaccess file for the web folder that is serving your Python:
AddHandler cgi-script .py
Options +ExecCGI
Then write Python that imports the CGI libraries and outputs the headers as well as the content:
#!/usr/bin/python
import cgi
import cgitb
cgitb.enable()
# HEADERS
print "Content-Type:text/html; charset=UTF-8"
print # blank line required at end of headers
# CONTENT
print "<html><body>"
print "Content"
print "</body></html>"
Make sure the file is owned by Apache (chown apache. filename) and has the execute bit set (chmod +x filename).
There are many significant benefits to actually using a web framework (mentioned in other answers) over this method, but in a localhost web server environment set up for other purposes where you just want to run one or two python scripts, this works well.
Notice I didn't actually utilize the imported cgi library in this script, but hopefully that will direct you to the proper resources.
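If you do want to use the cgi module, reading a query-string parameter looks roughly like the sketch below (the name parameter is just an illustration):
#!/usr/bin/python
import cgi
import cgitb
cgitb.enable()

form = cgi.FieldStorage()              # parses the query string / POST body
name = form.getvalue("name", "world")  # hypothetical parameter, with a default

print "Content-Type:text/html; charset=UTF-8"
print # blank line required at end of headers
print "<html><body>"
print "Hello, %s" % cgi.escape(name)
print "</body></html>"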
Most web development in Python happens using a web framework. This is different from simply having scripts on the server, because the framework has a lot more functionality, such as handling URL routing, HTML templating, ORM, user session management, CSRF protection, and a lot of other features. This makes it easier to develop web sites, especially since it promotes component reuse in an OOP fashion.
The most popular Python web framework is Django. It's a fully-featured, tightly-coupled framework, with lots of documentation available. If you prefer something more modular and easier to customize, I personally recommend Flask. There are also lots of other choices available.
With that being said, if all you really want is to run a single, simple Python script on a server, you can check this question for a simple example using Apache + CGI.

Web.py routing when nginx is added as an upstream

I've been writing a client-side app and a server-side app as two separate apps, and I want the client to use the server. The client is written in JavaScript and the server is written in Python, using web.py as the engine that delivers to the client. The client and server must be on the same web domain.
The server part has a route defined as:
'/data/(.*)', 'applicationserver.routes.Data.Data'
This works fine running it locally using http://buildserver/data/transform
I'm setting it up as a site in nginx like this:
upstream app {
    server 127.0.0.1:8081;
}
and adding it to the web application like this:
location /server {
    ...
    proxy_pass http://app;
}
The new path to the route would be http://buildserver/server/data/transform, but for obvious reasons this will not work, as the server app is listening for /data and not /server/data.
I have tried to change the route in python to (.*)/data/(.*) which sort of works except that it throws the error:
<type 'exceptions.TypeError'> at /data/transform
GET() takes exactly 2 arguments (3 given)
I figured out what was going on just before posting but I hope I can help someone else by posting this anyway.
web.py sends the matched groups to GET, and therefore using (.*)/data/(.*) sent both the start and the end of the path to GET, which is why it failed.
Setting the route to .*/data/(.*) gave me what I was after and only sent the part after /data/ to the GET function.
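To illustrate, a minimal web.py sketch of the difference (the class name and return value are placeholders):
import web

# One capture group: GET receives a single extra argument.
urls = ('.*/data/(.*)', 'Data')

class Data:
    def GET(self, tail):
        # With '.*/data/(.*)', tail is everything after /data/,
        # e.g. 'transform' for /server/data/transform.
        return "requested: " + tail

# With '(.*)/data/(.*)' web.py would call GET(self, prefix, tail) instead,
# which breaks a GET defined with only two parameters.

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()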

Simplest way to switch the Linux user for the web server (Django) without sudo?

Aim: to create a user-friendly web interface to a Linux program, without any of the terrible ssh (console) stuff.
I have chosen Python + Django + Apache.
Problem: the user should log in through the browser as a Linux user, and then all of that user's requests should be served on behalf of this Linux user.
Right now the server is run as root, and when a user logs in through the browser, the server (as root) can switch to the required user using the Django user name:
import os
import pwd

uid = pwd.getpwnam(userName)[2]
os.setuid(uid)
and it can execute all the Django stuff on behalf of the appropriate user.
The problem is that the server must be run as root!
How could I run the server normally, with the usual Apache user rights, while still letting someone log in as a Linux user through the browser (i.e. just get the user name and password from the HTTP POST request and log in as the appropriate user using Python)?
Update: I need to map any web user to a specific Linux user, giving him his home directory and letting him execute a specific Linux program only as this specific user! I guess something like this is implemented in Webmin?
Possible solution: I could execute su userName, but it doesn't work without a terminal:
import subprocess

p = subprocess.Popen(["su", "test"], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
suOUT = p.communicate(input="test")[0]
print suOUT
I just got:
su: must be run from a terminal
I'm not sure what the "standard" approaches are for dealing with this problem. However, here is a simple technique for environments with a small number of users that doesn't involve sudo, nor changing the UID inside the web server (which is likely to be very problematic for concurrent access by multiple users).
Launch a daemon process for each user who has access to this application. This process should itself serve web requests for that user over FastCGI (substitute the protocol of your choice). Your web server should have some user-to-port-number mapping. Then, redirect your gateway's requests to the proper FastCGI process based on the logon used by the Django user.
Example (using internal redirects by NGINX, assuming setup with FastCGI):
User foo logs on to Django web application
User requests page /.../
Django application receives request for /.../ by user foo
Django application returns the custom HTTP header X-Accel-Redirect to indicate an internal redirect to /delegate/foo/.../.
NGINX finds the location /delegate/foo/ associated with a FastCGI handler on port 9000.
The FastCGI handler is running as user foo and grants access to stuff in foo's home directory.
You can substitute the web server and communication protocol for combinations of your choice. I used FastCGI here because it allows writing both the gateway and the handler as Django applications. I chose NGINX because of the internal-redirect feature, which prevents impersonation by direct use of /delegate/foo/.../ URLs by users other than foo.
Update
Example:
Assuming you have the flup module, you can start a FastCGI server directly using Django. To start a Django application over FastCGI under a specific user account, you can use:
sudo -u $user python /absolute/path/to/manage.py runfcgi host=127.0.0.1 port=$port
Substitute $user for the user name and $port for a unique port for that user (no two users can share the same port).
Assuming an NGINX configuration, you can set this up like:
location /user/$user {
    internal;
    fastcgi_pass 127.0.0.1:$port;
    # additional FastCGI configuration...
}
Make sure to add one such directive for each $user and $port combination above.
Then, from your front-end Django application, you can check permissions and stuff using:
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required
def central_dispatch_view(request):
    response = HttpResponse()
    response['X-Accel-Redirect'] = '/user/' + request.user.username
    return response
Disclaimer: This is totally untested, and almost a year after the original answer, I'm not sure this is possible, mainly because the documentation on XSendFile in NGINX specifies that this should work with static files. I haven't inquired any further to know if you can actually perform an internal NGINX redirect from a FastCGI application.
Alternate solution:
A better approach might not involve internal redirects at all, but instead use a FastCGI authorizer. Basically, a FastCGI authorizer is a program that your web server runs before serving a request. That way you can bypass the shady internal-redirect thing and just have a FastCGI authorizer that checks whether the request accessing /user/foo/ actually comes from a Django user logged in as foo. This authorizer program won't be able to run as a Django application (since this is not an HTTP request/response cycle), but you can write it using flup and access your Django settings.
You can include the wsgi user in the sudoers file, and limit the commands and arguments it can run. Why can't you use sudo?
for example:
Cmnd_Alias TRUSTED_CMDS = /bin/su johndoe /some/command, \
                          /bin/su janedoe /some/command

my_wsgi_user ALL = NOPASSWD: TRUSTED_CMDS
From a security perspective, you should assume the users have shell access - I think it's OK for a corporate intranet but not for a public site.
From Python/Django you will be able to call ['sudo', '/bin/su', 'johndoe', '/some/command'], as sketched below.
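A minimal sketch of that call with subprocess (it assumes the NOPASSWD sudoers rule above is in place; /some/command is a placeholder):
import subprocess

# Run /some/command as johndoe via sudo + su; this only works without a
# password prompt because of the NOPASSWD rule for the WSGI user above.
output = subprocess.check_output(['sudo', '/bin/su', 'johndoe', '/some/command'])
print(output)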
Another solution, if you really can't use sudo (with NOPASSWD), is to connect via ssh using the user's credentials (user, password) with paramiko, as in the sketch below.
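A minimal sketch of the paramiko approach (the host, credentials, and command are placeholders; in practice they would come from the web login):
import paramiko

# Open an SSH session to the local machine as the target Linux user
# and run a command on their behalf.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('localhost', username='johndoe', password='secret')
stdin, stdout, stderr = client.exec_command('/some/command')
print(stdout.read())
client.close()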
