I am deploying my Python Flask app behind Nginx using Gunicorn on Ubuntu, with MongoDB as the database. I followed the steps from https://medium.com/faun/deploy-flask-app-with-nginx-using-gunicorn-7fda4f50066a and everything works for GET requests, but when I use a POST request to insert into the database it throws an Internal Server Error ("either the server is overloaded or there is an error in the application"). The code worked fine in development mode; this problem only appeared when I moved it to production.
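For reference, here is a minimal sketch of the kind of POST route being described; the route, collection name, and connection string are placeholders, not the asker's actual code (which is not shown):

from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
# Hypothetical connection string and database name.
client = MongoClient("mongodb://localhost:27017/")
db = client["mydatabase"]

@app.route("/items", methods=["POST"])
def create_item():
    # Expect a JSON body; return a 400 instead of crashing on anything else.
    payload = request.get_json(silent=True)
    if payload is None:
        return jsonify({"error": "expected a JSON body"}), 400
    result = db.items.insert_one(payload)
    return jsonify({"inserted_id": str(result.inserted_id)}), 201

In a setup like this, an unhandled exception in the POST handler (for example, MongoDB not reachable from the production host) surfaces as exactly this generic 500 page, and the traceback normally appears in the Gunicorn error log.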
Related
I'm trying out Compute Engine. I've successfully set up a VM and created a Flask app as an API that accepts POST requests so I can send emails to clients.
I keep getting a connection refused error whenever I try to make an HTTP POST request to the instance IP. I've tried using both external and internal IPs and setting up a new firewall rule in the VPC network rules to allow all ports, and still nothing. What could be the problem?
This is the address shown to me by my Flask app,
and this is my client-side code for the request.
I also want to note that the code works perfectly on my local machine.
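For illustration only, here is a minimal sketch of the server side of such a setup; the route, port, and payload handling are hypothetical. One detail worth checking in this situation is the bind address, since an app listening only on 127.0.0.1 (the Flask default) refuses connections arriving on the instance's external IP even when the firewall rules are open:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/send-email", methods=["POST"])
def send_email():
    data = request.get_json(silent=True) or {}
    # ... hand the data off to whatever actually sends the email ...
    return jsonify({"status": "queued"}), 202

if __name__ == "__main__":
    # Listen on all interfaces, not just localhost, so the VM's external IP
    # is reachable (the firewall must also allow this port).
    app.run(host="0.0.0.0", port=5000)

The client would then POST to http://EXTERNAL_IP:5000/send-email, where EXTERNAL_IP stands in for the instance's address.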
As of Flask 2.2, the development server always shows this warning and it is not possible to disable it. The development server is not intended for use in production: it is not designed to be particularly efficient, stable, or secure. Use a production WSGI server instead; see the deployment docs from Flask for more information.
That warning applies to the development server, not Flask itself. The Flask framework is appropriate for any type of application and deployment.
You can also try running the commands below to start the app with flask run:
$ export FLASK_APP=hello.py
$ export FLASK_ENV=development
$ flask run
Alternatively, you can do the following:
$ export FLASK_APP=hello.py
$ python -m flask run
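For production, per the warning above, the same app can be served with a WSGI server such as Gunicorn instead of the development server. A rough sketch, assuming the Flask object inside hello.py is named app:

$ pip install gunicorn
$ gunicorn -w 4 -b 127.0.0.1:8000 hello:app

Nginx would then proxy requests to that bind address.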
Refer to this SO link for more information.
I've recently tested and deployed a Flask app as the minusthemiddleman website. All of the functions were tested and worked in the sandbox with multiple testing methods before deployment to Nginx and uWSGI in production. For some reason, page 2 of the search function causes a resource problem in the app as deployed. I've had a chance to monitor Redis directly using $ redis-cli monitor and to check my pip list to make sure everything is installed correctly in production.
The specific error is:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
Does anybody have ideas as to why this is malfunctioning? Thanks.
I have deployed an NLP project on a Heroku server. When I test the web app by entering the fields, I get an INTERNAL SERVER ERROR:
The server encountered an internal error and was unable to complete your request.
Either the server is overloaded or there is an error in the application.
The app runs successfully on localhost; I don't know what goes wrong after deployment.
Can anyone help?
Here is the screenshot of the build.
Here is the code of the app.py file.
This is the code of spam_classifier.py.
These are the requirements I put in requirements.txt.
It seems like your build was deployed successfully, but you are getting the internal server error when you consume the API, so you need to check the application log instead of the build log. To see it, go to More >> Logs in the Heroku dashboard. For every request you make, you will be able to see the logs and hence any runtime errors.
A few other suggestions:
You don't need spam_classifier.py on the deployed environment's file system, since you are loading the trained model from the pickle file.
Make sure that you have pushed the pickle files along with the code. I suggest using Amazon S3 to store and load those kinds of files (a rough sketch follows below).
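As a sketch of that last suggestion (the bucket, object keys, and file names below are hypothetical, and the boto3 credentials setup is assumed to be in place):

import pickle
import boto3

BUCKET = "my-model-bucket"  # hypothetical bucket name

def load_pickled(key, local_path):
    # Download the pickled object from S3, then unpickle it.
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, key, local_path)
    with open(local_path, "rb") as f:
        return pickle.load(f)

# Load once at startup so every request reuses the same objects instead of
# re-downloading and re-unpickling them.
vectorizer = load_pickled("vectorizer.pkl", "/tmp/vectorizer.pkl")
classifier = load_pickled("classifier.pkl", "/tmp/classifier.pkl")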
If you check the application logs in Heroku, you will find the error "Error R14 (Memory quota exceeded)", which is why you are getting the Internal Server Error after deploying the project on Heroku.
I am facing an issue in my Django app.
It works fine on my local Django WSGI-based server, but the same request times out behind Nginx.
What could the issue be?
Is there anything to be done about increasing the number of Nginx processes?
On my server (I am using AWS), the Nginx response took 30000 ms and came back without data,
while my local setup responds in 12000 ms with the data.
Any help?
My Django app is on AWS, and I am using Nginx, Gunicorn, and Supervisor for the deployment configuration.
I'd recommend against fiddling with the Nginx and Gunicorn config.
Instead, try reducing the amount of data you're trying to fetch in a single API response. If your data is in the form of a list (which it looks like from the picture), I'd recommend paginating your response; Django has an excellent pagination module which can be used.
https://docs.djangoproject.com/en/2.0/topics/pagination/
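A minimal sketch of that approach with a plain Django view (the model, fields, and page size are hypothetical); Django REST framework also has its own pagination classes that achieve the same thing for DRF views:

from django.core.paginator import Paginator
from django.http import JsonResponse

from myapp.models import Item  # hypothetical model

def item_list(request):
    page_number = request.GET.get("page", 1)
    # 50 rows per page instead of the whole table in one response.
    paginator = Paginator(Item.objects.order_by("id").values("id", "name"), 50)
    page = paginator.get_page(page_number)
    return JsonResponse({
        "count": paginator.count,
        "num_pages": paginator.num_pages,
        "results": list(page.object_list),
    })

The client then asks for ?page=2, ?page=3, and so on, so each response stays small enough to return well within Nginx's proxy timeout.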
I am using Django REST framework.
PATCH on the API endpoint (users/user_id) works on the local Django server on my machine, but on the Nginx development server it shows:
{"detail":"Method \"METHOD_OTHER\" not allowed."}
Do we need to change some settings in Nginx?
OK, I tried accessing the same code from a different network and it worked.
It was probably a firewall issue with that particular Wi-Fi network.