Deploying a Python dynamic CGI application to Bluemix?

I'm trying to deploy a Python-based scientific application on IBM's Bluemix platform. While I can launch the CGI service to host the web pages, the Python application behind it doesn't currently run. The application currently runs on an Apache server on Rackspace, but I'm trying to launch a newer version for testing. I tried Heroku, with the same problem: the web pages are served, but not the Python application. The gist of the Heroku answers seems to be that Heroku can't serve CGI applications, and there was a suggestion that the Cloud Foundry platform would be able to do so. The application runs fine locally, so I'm trying to find the right tweak to deploy to Bluemix (or Heroku).
We have the requirements file, and my initial thought is that it's the Procfile that needs tweaking. Currently it looks like:
web: python -m CGIHTTPServer $PORT
I've also tried launching the application via a worker process type:
worker: python weblogo.py
worker: python setup.py
even trying to launch the internal files:
worker: python /weblogolib/_cgi.py
worker: python /weblogolib/__init__.py
Yet none of these methods got the application behind the web pages running. Is there another method that we're unaware of?
The application is designed to be served locally with the command:
python ./weblogo --serve
Does this matter when deploying to a cloud platform?
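For what it's worth, here's a rough sketch of the kind of wrapper I imagine might be needed: a small Python 2 script that binds the stdlib CGI handler to the platform-assigned $PORT, so the Procfile could become web: python run_cgi.py. The file name run_cgi.py and the cgi-bin layout are just guesses on my part, not our actual layout.
# run_cgi.py - hypothetical wrapper around the stdlib CGI server (Python 2)
import os
import BaseHTTPServer
import CGIHTTPServer
# Cloud Foundry / Heroku tell the process which port to bind via the PORT env var
port = int(os.environ.get("PORT", 8080))
handler = CGIHTTPServer.CGIHTTPRequestHandler
handler.cgi_directories = ["/cgi-bin"]  # guessed location of the CGI scripts
# bind on all interfaces so the platform router can reach the process
BaseHTTPServer.HTTPServer(("0.0.0.0", port), handler).serve_forever()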
Rewriting the application in Flask or Django isn't really an option right now. Any guidance toward getting the application launched would be much appreciated! Thank you in advance!

Related

Kubernetes OpenShift for Python

I am new to OpenShift. We are trying to deploy a Python module in a pod which is accessed by other Python code running in different pods. When I deploy it, the pod starts and then immediately crashes with the status "CrashLoopBackOff". This Python code is an independent module which does not have a valid entrypoint. How do I deploy this type of Python module in OpenShift? I'd appreciate any solutions.
You don't. You deploy something that can run as a process and, as such, has some way to communicate with the outside world (i.e. listen for requests, connect to a message broker, send requests, read/write to a database, etc.). You do not package and deploy libraries that are inoperable on their own to the cluster.
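To make that concrete, here is a minimal sketch of giving a library-only module a runnable entrypoint, so the pod has a long-running process instead of crash-looping. The module name mymodule, its do_work function, and the HTTP interface are placeholders for illustration, not anything from your setup.
# serve_module.py - hypothetical entrypoint wrapping the library
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import mymodule  # hypothetical: the library the other pods need to call

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # call into the library and return the result over HTTP
        result = mymodule.do_work()  # hypothetical library function
        body = json.dumps({"result": result}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # listen on all interfaces so the Service can route traffic to the pod
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()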

How to host a Django 1.10 web application on WHM via cPanel?

I want to host my Django 1.10 web application on WHM (VPS). For that I have installed Django and the other necessary tools on the VPS over SSH, and I have also uploaded my Django application code through cPanel to the public_html directory.
When I run python manage.py runserver <ip_address:8000> from the SSH terminal, I am able to access the application, but when I close the SSH terminal it terminates all the running processes, so the application cannot be accessed after that.
So, is there any way I can access the Django application without running the python manage.py script myself?
Any help would be highly appreciated.
Thank you.
Django comes with a built-in development server, but it's not meant to be used when deploying to a remote VPS.
There are some great resources to help you along, but what you're likely going to want to do is deploy your app on a stack using Gunicorn or uWSGI as the application server and a web server like Nginx or Apache 2 as a reverse proxy.
I personally use Nginx and uWSGI.
There are some great resources explaining how to deploy a Django server. Have a look at this one from DigitalOcean; it was a great help when I needed to set up a production server correctly: https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-16-04
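As a very rough sketch of the application-server half of that stack (mysite is a placeholder project name, and uWSGI has to be pip-installed), the kind of command Nginx ends up proxying to looks something like:
uwsgi --http :8000 --module mysite.wsgi
In practice you run it under a process manager such as systemd so it keeps running after you close the SSH session, which is the part that solves your original problem.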
best of luck with it!

How do I run a Python program in Rails on Heroku?

Rails runs on Cloud9 without any problem.
What I want to do is as follows:
run Rails on Heroku
run a Python program via a rake task in Rails (specifically, Python using boto to fetch some objects from AWS S3)
The current situation is as follows:
Case 1: deployed Rails to Heroku without any changes.
The deploy succeeds and there is no problem running the Rails app in a web browser, but the Rails server shows an error in the log (via heroku logs --tail):
"No module named boto"
Case 2: deployed Rails to Heroku with a file named requirements.txt at the root.
Heroku didn't detect it as a Ruby/Rails app, so the Rails server didn't run.
The log (via heroku logs --tail) shows:
heroku[router]: at=error code=H14 desc="No web processes running"
Case 3: deployed Rails first, the same as Case 1. Then added Python to the buildpacks in the Heroku settings, then added requirements.txt, and finally deployed again. It deploys, but the Rails server shows the same "No module named boto" error in the log (via heroku logs --tail) as in Case 1.
If I could run a command like pip it would be easy, but that isn't possible.
Is there any idea how to solve the above?
Instead of trying to install your custom boto on Heroku, just place your custom boto folder in your project's directory (at the same level as your project's apps). After that, you can import it with a normal import statement; the Python documentation on modules explains how imports resolve.
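As a rough sketch of what that looks like on the Python side (the file name, bucket, and key below are placeholders, not from your question), with the copied boto folder sitting next to the script so the plain import picks it up:
# fetch_s3.py - hypothetical script invoked from the rake task
import boto

conn = boto.connect_s3()                    # credentials come from the usual AWS env vars
bucket = conn.get_bucket("my-bucket")       # placeholder bucket name
key = bucket.get_key("some/object.txt")     # placeholder object key
key.get_contents_to_filename("object.txt")  # download it to the local filesystem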
That said, the ideal way to do this is to use the AWS SDK for Ruby instead of using the Python SDK for AWS and then trying to drive it from Rails. All the functionality available in boto is available in the SDK for Ruby as well.
Check
http://docs.aws.amazon.com/sdk-for-ruby/v2/developer-guide/

Deploying a Django app to AWS EC2 using Nginx and Gunicorn

I am a newbie to web development. I developed a small web app using Python and Django, and things are working fine with the local development server. Now I want to deploy my app to an AWS EC2 instance with Nginx and Gunicorn. Every tutorial I found on the internet explains things using a Linux platform. Unfortunately, I am using Windows.
I have never used Git or Linux; will I be able to do things from the Windows machine itself?
Use PuTTY and follow this.
After connecting to your Linux instance, every command will be run on Linux.
It does not matter which platform you are working from; just make sure you create a Linux instance on AWS and follow those tutorials on deploying with Gunicorn and Nginx.

How to deploy a Flask web app to a production server from PyCharm

I am new to Flask web development with Python. I developed a simple Flask app with PyCharm. Now I want to deploy it on my college server. I remember that when I developed PHP web apps I used to copy them to the www folder in /var and run them from there.
Can someone explain how I can deploy a Python web app on a Linux production server?
First, you need to find out from the sysadmin what your server provides. There are dozens of ways to do this, and all of them have different answers.
Given that you have used PHP on the system before, I'll assume you are using Apache httpd.
First, you would need to have a WSGI provider installed (e.g. mod_wsgi), and then wire your application into it.
See the mod_wsgi page and the Flask docs for more information on how to do this precisely.
https://code.google.com/p/modwsgi/
http://flask.pocoo.org/docs/0.10/deploying/
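As a rough idea of the wiring with mod_wsgi (the path and the module name myapp are placeholders), the .wsgi file you point Apache at is only a few lines:
# myapp.wsgi - hypothetical mod_wsgi entry point
import sys

# make the application package importable (placeholder path)
sys.path.insert(0, "/var/www/myapp")

# mod_wsgi looks for a module-level callable named "application"
from myapp import app as application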
Another option is to have Python bring its own web server and optionally put a proxy in front of it for redirection / load balancing.
