Pyramid: restart the app in an exception view - python

The command that I've been using is:
pserve development.ini --reload
and every time I hit an error like SQLAlchemy's IntegrityError or something else, I have to kill pserve and type the command again to restart the app.
Is there a way to restart the app in an exception view, like this?
@view_config(context=Exception)
def error_view(exc, request):
    # restart waitress or apache...
    return Response("Sorry, there was an error. Wait a few seconds, we will fix it soon.")

Restarting your server is not a sensible response to an IntegrityError. This is something that is expected to happen, and you need to handle it. Restarting the server makes no sense in any context other than development.
If you run into exceptions in development, fix the code and save the file; the --reload flag will automatically restart the server for you.
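For example, a minimal sketch of handling it with an exception view instead (the view name, status and message here are illustrative, not from your code):
from pyramid.response import Response
from pyramid.view import view_config
from sqlalchemy.exc import IntegrityError

@view_config(context=IntegrityError)
def integrity_error_view(exc, request):
    # handle the expected failure and report it; no server restart needed
    return Response("That record conflicts with an existing one.", status=409)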

If you have to restart the application after an exception (presumably because nothing works afterwards otherwise), it suggests your requests are trying to re-use the same failed transaction; in other words, your application is not configured properly.
You should be using a session configured with ZopeTransactionExtension, as Pyramid's scaffolds generate.
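For reference, a minimal sketch of that session setup, along the lines of what the alchemy scaffold of that era generates:
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

# the extension ties each session to the transaction manager, so a failed
# transaction is rolled back instead of leaking into the next request
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))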
If you show us some code, we may be able to pinpoint the exact cause of the problem.


Is it possible to run a FastAPI app from the command line?

We can run any Python script by doing:
python main.py
Is it possible to do the same if the script is a FastAPI application?
Something like:
python main.py GET /login.html
to call a GET method that returns a login.html page.
If not, how could I start a FastAPI application without using Uvicorn or another web server?
I would like to run the script only when necessary.
Thanks
FastAPI is designed to let you BUILD APIs that are queried by an HTTP client, not to query those APIs yourself directly; however, technically I believe you could.
When you start the script, you could start the FastAPI app in another process running in the background, then send a request to it.
import subprocess
import threading
import time

import requests

url = "http://localhost:8000/some_path"  # uvicorn serves on port 8000 by default

# launch uvicorn in a background thread; check_output captures its output
thread = threading.Thread(target=lambda: subprocess.check_output(["uvicorn", "main:app"]))
thread.start()

time.sleep(2)  # crude wait for the server to come up before querying it
response = requests.get(url)
# do something with the response...
thread.join()
Obviously this snippet has MUCH room for improvement; for example, the thread will never actually end unless something goes wrong. This is just a minimal example.
This method has the clear drawback of starting the API each time you want to run the command. A better approach would be to emulate applications such as Docker: start a local server daemon once, then ping it from the command-line app.
This means the API runs in the background for much longer, but these APIs are typically fairly light, and you shouldn't notice any hit to your computer's performance. It also provides the benefit of multiple users being able to run the command at the same time.
If you used the first method, you may run into situations where user A sends a GET request, starting up the server and taking hold of the configured host/port combo. When user B tries to run the same command just after, they will find themselves unable to start the server and perform the request.
This will also allow you to eventually move the API to an external server with minimal effort down the line. All you would need to do is change the base URL of the requests.
TL;DR: Run the FastAPI application as a daemon, and query the local server from the command-line program instead.
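A minimal sketch of the client side under that design (the host, port and argument handling are assumptions, not from your code):
import sys

import requests

BASE_URL = "http://localhost:8000"  # assumed address of the long-running daemon

def main():
    # e.g. `python client.py /login.html` sends GET /login.html to the daemon
    path = sys.argv[1] if len(sys.argv) > 1 else "/"
    response = requests.get(BASE_URL + path)
    print(response.status_code)
    print(response.text)

if __name__ == "__main__":
    main()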

Django: how to debug a frozen save operation on a queryset object

I have the following code in a Django project (within the create method of a Django REST Framework serializer):
def create(self, validated_data):
    <...>
    log.info("\n\n\n")
    log.info(f"django model: {self.Meta.model}")
    log.info("CREATING CASE NOW .....")
    case = self.Meta.model(**kwargs)
    log.info(f"Case to be saved: {case}")
    case.save()
    log.info(f"Case object Created: {case}")
When I POST to the endpoint, it just freezes up completely on .save(). Here's example output:
2020-06-15 02:47:46,008 - serializers - INFO ===> django model: <class 'citator.models.InternalCase'>
2020-06-15 02:47:46,008 - serializers - INFO ===> django model: <class 'citator.models.InternalCase'>
2020-06-15 02:47:46,009 - serializers - INFO ===> CREATING CASE NOW .....
2020-06-15 02:47:46,009 - serializers - INFO ===> CREATING CASE NOW .....
2020-06-15 02:47:46,010 - serializers - INFO ===> Case to be saved: seychelles8698
2020-06-15 02:47:46,010 - serializers - INFO ===> Case to be saved: seychelles8698
No error is thrown and the connection isn't broken. How can I debug this? Is there a way to get logging from the save method?
The error is likely unrelated to the Django REST Framework serializers, as the code that hangs simply creates a new model instance and saves it. You did not specify how kwargs is defined, but the most likely candidate is that the save gets stuck talking to the database.
To debug the code, you should step through it. There are a number of options, depending on your preferences.
Visual Studio Code
Install the debugpy package.
Run python3 -m debugpy --listen localhost:12345 --pid <pid_of_django_process>
Run the "Python: Remote Attach" command.
CLI
Before the line case.save() do
import pdb; pdb.set_trace()
This assumes you are running the Django server interactively and not e.g. through gunicorn. You will get a debug console right before the save line. When the console appears, type 'c' and press Enter to continue execution. Then press Ctrl+C when the process appears stuck, and type 'bt' to find out what is going on in the process.
Native code
If the stack trace points to native code, you can switch over to gdb. Make sure to exit any Python debugger or restart the process without a debugger, then run
gdb -p <pid_of_django>
when the process appears stuck. Then type 'bt' and press Enter to get a native traceback of what is going on. This should help you identify e.g. database clients acting up.
It is very probable that Django is waiting for a response from the database server, and that this is a configuration problem, not a problem in the Python code where it froze. It is better to check and exclude this possibility before debugging anything. For example, it is possible that a table is locked, or that an updated row is locked by another frozen process, while the database's lock-wait timeout is long and Django's timeout for the database response is very long or infinite.
This is confirmed if a similar save operation takes an abnormally long time in another database client, preferably in your favorite database manager.
Waiting on a socket response is excluded if you see CPU activity from the locked Python process.
It may be easier to explore if you can reproduce the problem on the command line via python manage.py shell or python manage.py runserver --nothreading --noreload. Then you can press Ctrl+C, and maybe after some time Ctrl+C again. If you are lucky, you kill the process and see a KeyboardInterrupt with a traceback, which helps you identify whether the process was waiting for something other than a database server socket response.
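For example, a quick reproduction in python manage.py shell (the model path is taken from the question's logs; the field values are made up):
# inside `python manage.py shell`
from citator.models import InternalCase

case = InternalCase(name="test")  # hypothetical field; use the kwargs from create()
case.save()  # if this hangs here too, the problem is the database, not DRF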
Another possible cause in Django could be custom callback code connected to a pre_save or post_save signal.
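Such a receiver runs synchronously inside save(), so a blocking one looks exactly like a frozen save. A sketch (the handler below is made up):
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save)
def audit(sender, instance, **kwargs):
    # anything slow in here (a network call, a lock, ...) blocks model.save()
    ...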
Instead of plain python manage.py ... you can run python -m pdb manage.py ... and optionally set a breakpoint, or simply press 'c' then Enter (continue). The process will run and will not be killed after an exception, but will stay in pdb (the native Python debugger).

Internal Server Error despite try/except clause and without error log

I have a simple app running on Windows Server 2012 using IIS. It is used to run an R script underneath (it's a very complicated script, and rewriting it in Python is not an option). This is the code:
try:
    output = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8')
except:
    return jsonify({'output': 'Error!', 'returncode': '0', 'error': 'Unknown Error'})
else:
    error = str(output.stderr)
    return jsonify({'output': str(output.stdout), 'returncode': str(output.returncode), 'error': error})
It is run via AJAX and works fine most of the time, but sometimes it results in an "Internal Server Error".
Now the interesting part. The above error is not caught by the except clause, it's not logged in the Flask error log, and the underlying R script does everything it's meant to do. So in short, everything works as it should, but it throws an Internal Server Error for no apparent reason. This is very annoying, as the user gets an error despite everything working fine.
I already tried not using try/except at all, and also "except Exception as err", but neither logs any errors.
This is how the error log is set up. It works for other parts of the application without any issues.
app = Flask(__name__, static_url_path="", static_folder="static")
errHandler = logging.FileHandler('errors.log')
errHandler.setLevel(logging.ERROR)
app.logger.addHandler(errHandler)
Any ideas how can I catch this error so I can try to debug it?
UPDATE
I've noticed that the Internal Server Error is returned after around 1.5 minutes. When I changed the R script to a simple 10-second wait, it worked flawlessly, so it seems to be a timeout issue.
I have set the timeout to 180s on both the subprocess and the AJAX call, but it didn't help. Is there any other place I should look?
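For reference, this is roughly how a timeout is passed to subprocess.run (a sketch; the exact call isn't shown above). A blown timeout raises subprocess.TimeoutExpired, which at least is catchable:
try:
    output = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, encoding='utf-8', timeout=180)
except subprocess.TimeoutExpired:
    # the R script exceeded 180s; this failure mode can be reported to the user
    pass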
UPDATE 2
I've taken AJAX out of the equation and used a standard hyperlink to the page with the subprocess. It still gives an Internal Server Error after 1.5 minutes. I've also changed the R script to wait 2 minutes, and the script itself finishes without any issues (30s after I get the error).
I was looking in completely the wrong place. The issue was caused by the FastCGI Activity Timeout setting in IIS Manager. The timeout was only 70s, while the subprocess took much longer to finish. Increasing the Activity Timeout resolved the issue.

A curious case of nginx, uwsgi and Python

We have a Python MVC web application built using werkzeug, jinja2 and MongoEngine.
In production we have 4 nginx servers set up behind an nginx load balancer. All 4 servers share a common Mongo server, a Redis server and a Sphinx server. We are using uwsgi between nginx and the application.
Now to the curious case.
Once we deploy new code, we do a touch xyz.wsgi. For a few hours everything looks fine, but after that we randomly get the error:
'module' object is not callable
I have seen this error before, in other Python development scenarios. But what confuses me this time is the totally random behavior.
For example: example.com/multimedia?keywords=sdf&s=title&c=21830.
If we refresh, the error is gone. Try another value for any parameter, like keywords=xzy, and there it is again. Refresh, and it's gone.
That 'multimedia' module is something we added just recently, so we can assume it's the root cause. But why does the error occur randomly?
My assumption is that it might have something to do with nginx caching or the existence of pyc/pyo files. Could an illicit global variable be the cause?
Could your expert hands help me out?
The error probably occurs randomly because it's a runtime error in your code. That is, it doesn't fire until a user visits your site under exactly the right conditions to follow the code path that results in this error.
It's unlikely to be an nginx caching issue. If nginx were caching the response, it would probably return the same result over and over rather than change on reload.
However, you can test this by removing nginx and testing directly against werkzeug. Run the requests against it and see if you observe the same behavior. There's no use in debugging nginx unless you can prove that the underlying systems work the way you expect.
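A minimal sketch of serving the app directly with werkzeug's development server (the import path of the WSGI app is an assumption):
from werkzeug.serving import run_simple

from myapp import application  # hypothetical import path of your WSGI app

# serve the app directly, bypassing nginx and uwsgi entirely
run_simple("localhost", 8000, application)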
It's also probably worth the 30 seconds it takes to search for module() in your code, since that's the most direct interpretation of the error message.
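For illustration, calling a module instead of a class inside it reproduces the message exactly:
import datetime

# raises TypeError: 'module' object is not callable;
# should be datetime.datetime(2020, 1, 1)
d = datetime(2020, 1, 1)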

500 Error without anything in the apache logs

I am currently developing an application based on Flask. It runs fine when spawning the server manually using app.run(). I've now tried to run it through mod_wsgi. Strangely, I get a 500 error, and nothing in the logs. I've investigated a bit, and here are my findings.
Inserting a line like print >>sys.stderr, "hello" works as expected. The message shows up in the error log.
When calling a method without using a template it works just fine. No 500 Error.
Using a simple template works fine too.
BUT as soon as I trigger a database access inside the template (for example looping over a query) I get the error.
My gut tells me that it's SQLAlchemy that emits an error, and that maybe some logging config causes the log to be discarded at some point in the application.
Additionally, for testing, I am using SQLite. This, as far as I can recall, can only be accessed from one thread. So if mod_wsgi spawns more threads, it may break the app.
I am a bit at a loss, because it only breaks when running behind mod_wsgi, which also seems to swallow my errors. What can I do to make the errors bubble up into the Apache error_log?
For reference, the code can be seen on this github permalink.
Turns out I was not completely wrong. The exception was indeed thrown by SQLAlchemy, and as it's streamed to stdout by default, mod_wsgi silently ignored it (as far as I can tell).
To answer my main question: How to see the errors produced by the WSGI app?
It's actually very simple: redirect your logs to stderr. The only thing you need to do is add the following to your WSGI script:
import logging, sys
logging.basicConfig(stream=sys.stderr)
Now, this is the most mundane logging config, and as I haven't put anything in place yet for my application, it will do. But I guess once the application matures you will have a more sophisticated logging config anyway, so this won't bite you.
But for quick and dirty debugging, this will do just fine.
I had a similar problem: occasional "Internal Server Error" without logs. When you use mod_wsgi you should remove app.run(), because it always starts a local WSGI server, which we do not want when deploying the application to mod_wsgi. See the docs. I do not know if this is your case, but I hope it helps.
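Alternatively, a common pattern is to guard it so the development server only starts when the file is run directly (a sketch assuming the usual app object):
if __name__ == "__main__":
    # mod_wsgi imports the module, so this guard keeps app.run()
    # from starting a second server in production
    app.run()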
If you put this into your config.py, it will help dramatically in propagating errors up to the Apache error log:
PROPAGATE_EXCEPTIONS = True
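Equivalently, assuming the Flask app object shown earlier, you can set it on the config directly:
app.config['PROPAGATE_EXCEPTIONS'] = True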
