I have been working on an app for a while, but lately my Python SDK has decided to log things up to a certain point and then display nothing anymore.
The only way to change its mind is to stop and restart the app, and then it blocks at the same point again (which makes it hard to debug).
The last thing that I see is
GET /_ah/api/static/proxy.html?jsh=SOMERANDOMID with a 200 response.
The app itself works fine: I can make calls to Cloud Endpoints, get my results, etc. Only the logging display is gone.
For some reason, when I upload my app everything works fine and the logs work fine.
Is there anything I can do, like clearing some cache somewhere? I don't know.
Thanks
So I figured it out. It was due to some WSGI middleware I had set up to manage sessions.
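For reference, here is a minimal sketch of a well-behaved pass-through WSGI middleware (the class name is hypothetical). Per PEP 3333, a middleware should forward start_response and return the wrapped app's iterable; a middleware that intercepts or buffers these can interfere with the dev server's log handling:

class SessionMiddleware(object):  # hypothetical name
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Set up the session here, then delegate to the wrapped app.
        # Always forward start_response and return the iterable as-is.
        return self.app(environ, start_response)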
I have been looking for a solution for my app that does not seem to be discussed directly anywhere. My goal is to publish an app and have it automatically reach out to a server I am working with. This just needs to be a simple POST. I have everything working fine and am currently solving this with a cron job, but that is not quite sufficient: I would like the job to execute automatically once the app has been published, not after a minute (or whatever interval it is set to).
In concept I am trying to have my app register itself with my server, and to do this I'd like it to run once on publish and never be run again.
Is there a solution to this problem? I have looked at Task Queues and am unsure whether they are what I am looking for.
Any help will be greatly appreciated.
Thank you.
Personally, this makes more sense to me as a responsibility of your deploy process, rather than of the app itself. If you have your own deploy script, add the POST request there (after a successful deploy). If you use Google's command-line tools, you could wrap them in a script. If you use a third-party tool for something like continuous integration, it probably has deploy hooks you could use for this purpose.
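For instance, a minimal sketch of such a wrapper in Python (the SDK command, registration URL and payload here are all assumptions):

import subprocess
import urllib2

# Deploy first; only register with the server if the deploy succeeded.
result = subprocess.call(['appcfg.py', 'update', '.'])
if result == 0:
    # Hypothetical endpoint; replace with your server's registration URL.
    urllib2.urlopen('https://example.com/register', data='app=myapp')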
The main question will be how to ensure it only runs once for a particular version.
Here is an outline of how you might approach it (a sketch follows the steps below).
You create a HasRun model, which you use to store each deployed version of the app; the existence of an entity indicates that the one-time code has been run for that version.
Then make sure you increment your version whenever you deploy new code.
In your warmup handler or appengine_config.py, grab the deployed version,
then in a transaction try to fetch the HasRun entity by key (version number).
If you get the entity, don't run the one-time code.
If you cannot find it, create it and run the one-time code, either in a task (make sure the process is idempotent, as tasks can be retried) or in the warmup/front-facing request.
You will probably want to wrap all of that in a memcache CAS operation to provide a lock of some sort, to prevent another instance from trying to do the same thing.
Alternatively, if you want to use the task queue, consider naming the task after the version number; you can only submit a task with a particular name once.
It still needs to be idempotent (again, it could be retried), but there will only ever be one task scheduled for that version, at least for a few weeks.
Or a combination/variation of all of the above.
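A minimal sketch of the transactional variant using NDB (do_one_time_setup and the task URL are hypothetical):

import os
from google.appengine.api import taskqueue
from google.appengine.ext import ndb

class HasRun(ndb.Model):
    pass  # keyed by version; existence means the code has already run

@ndb.transactional
def run_once():
    # CURRENT_VERSION_ID looks like 'version.minor'; keep the major part.
    version = os.environ['CURRENT_VERSION_ID'].split('.')[0]
    if HasRun.get_by_id(version) is None:
        HasRun(id=version).put()
        do_one_time_setup()  # hypothetical; must be idempotent

And the named-task variant, where duplicate names are rejected, so scheduling is naturally deduplicated:

def schedule_once(version):
    try:
        taskqueue.add(name='register-' + version, url='/tasks/register')
    except (taskqueue.TaskAlreadyExistsError,
            taskqueue.TombstonedTaskError):
        pass  # already scheduled (or recently run) for this version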
Ever since I read
An untested application is broken
in the Flask documentation about testing here,
I have been working down my list of things to write tests for in some of my applications.
I currently have a Flask web app, and when I write a new route I just run a requests.get('https://api.github.com/user', auth=('user', 'pass')), post, put, etc. against the live route to test it.
Is this a decent alternative? Or should I do my testing the way Flask's documentation describes, and if so, why?
Fundamentally it's the same concept: you are running functional tests, just as they do. However, your approach has a prerequisite: a live application running somewhere (if I understood you correctly). They create a fake application (a mock, essentially) so you can test without anything being live, e.g. when you want to run tests in a CI environment.
In my opinion their way is a better alternative than a live system. Your current approach consumes more resources on your local machine, since you are required to run the whole system to test anything (i.e. at least a DB and the application itself). With their approach you don't: the fake instance does not need real data, and thus no connection to a DB or any other external dependency.
I suggest you switch to their style of testing; in the end you will like it.
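A minimal sketch using Flask's built-in test client (the myapp module and the route are assumptions):

import unittest
from myapp import app  # assuming your Flask app object lives in myapp.py

class RouteTestCase(unittest.TestCase):
    def setUp(self):
        app.config['TESTING'] = True
        self.client = app.test_client()  # fake client, no live server

    def test_index(self):
        response = self.client.get('/')
        self.assertEqual(response.status_code, 200)

if __name__ == '__main__':
    unittest.main()

No server, database, or network is needed; the test client drives the WSGI app directly.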
We have a Python MVC web application built using Werkzeug, Jinja2 and MongoEngine.
In production we have four nginx servers set up behind an nginx load balancer. All four servers share a common Mongo server, a Redis server and a Sphinx server. We are using uWSGI between nginx and the application.
Now to the curious case.
Once we deploy new code, we do a touch xyz.wsgi. For a few hours everything looks fine,
but after that we randomly get the error:
'module' object is not callable
I have seen this error before, in other Python development scenarios. But what confuses me this time is the totally random behavior.
For example: example.com/multimedia?keywords=sdf&s=title&c=21830.
If we refresh, the error is gone. Try another value for any parameter, like keywords=xzy, and there it is again. Refresh, and it's gone.
That multimedia module is something we added just recently, so we can assume it's the root cause. But why does the error occur randomly?
My assumption is that it might have something to do with nginx caching or the existence of .pyc/.pyo files. Could an illicit global variable be the cause?
Could your expert hands help me out?
The error probably occurs randomly because it's a runtime error in your code. That is, it doesn't get fired until a user visits your site with the right conditions to follow the code path that results in this error.
It's unlikely to be an nginx caching issue. If nginx were caching the response, it would probably return the same result over and over rather than change on reload.
However, you can test this by removing nginx and testing directly against Werkzeug. Run the requests against it and see if you observe the same behavior. There is no use in debugging nginx unless you can prove that the underlying systems work the way you expect.
It's also probably worth the 30 seconds to search for module() in your code, since that's the most direct interpretation of that error message.
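For illustration, the usual way this error arises is calling a module where you meant to call the class or function inside it (this assumes the callable shares the module's name, which is common):

import multimedia            # binds the module object to the name
multimedia()                 # TypeError: 'module' object is not callable

from multimedia import multimedia   # binds the class/function instead
multimedia()                        # works

The apparent randomness would then come down to which binding happens to be live in the particular worker process that serves the request.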
Ok, here is my confusion/problem:
I develop on localhost, where you can raise exceptions and easily see the logs on the command line.
Then I deploy the code to the test, stage and production servers, and that is where the problem begins: it is not easy to see logs or debug errors and exceptions. For normal errors I guess django-debug-toolbar could be enabled, but I also get some silent exceptions which don't crash anything, yet the process ends in failure because of them. For example, I have a payment integration, and a few days ago payments were failing on return (callback) to our site. Nothing was crashing; we just got a "payment process failed" message, while the payment gateway vendor was working fine. I had to look through failure instances that could lead to this problem and figured out that one DB save operation was not saving because the variable was not there.
Now my question: is Sentry (https://github.com/getsentry/sentry) an answer to this? Or is there any other option?
Please do ask if any further clarification is needed for my requirement.
Sentry is an option, but honestly it is too limited (I tried it a month or so ago); it's intended to track exceptions, but in the real world we should track important information and events too.
If you haven't set up application logging yet, I suggest you do so, following this example.
In my app I defined several loggers for different purposes. The Python logging configuration via dictionary (the one used by Django) is very powerful, and you have full control over how things get logged: for example, you can write logs to a file or a database, send an email, call a third-party API, or whatever. If your app is running in a load-balanced environment (so you have several machines running your app), you can use a service like Loggly to aggregate the logs coming from your instances in a single place (and since it uses RSYSLOG, it aggregates not only your Django app logs but also all the logs of the underlying OS).
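A minimal sketch of such a dictionary config in settings.py (the handler name, logger name and log path are assumptions):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': '/var/log/myapp/app.log',  # hypothetical path
        },
    },
    'loggers': {
        'payments': {  # a dedicated logger, e.g. for the payment flow
            'handlers': ['file'],
            'level': 'INFO',
        },
    },
}

Then in your code, logging.getLogger('payments').error('callback failed: %s', details) leaves a trace of the silent failure that you can find later.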
I also suggest you use New Relic, which keeps track of a lot of stuff automatically: queries executed and their timings, template loading time, errors, and a lot of other useful statistics.
I am currently developing an application based on Flask. It runs fine when spawning the server manually using app.run(). I've now tried to run it through mod_wsgi. Strangely, I get a 500 error and nothing in the logs. I've investigated a bit and here are my findings.
Inserting a line like print >>sys.stderr, "hello" works as expected. The message shows up in the error log.
When calling a method without using a template it works just fine. No 500 Error.
Using a simple template works fine too.
BUT as soon as I trigger a database access inside the template (for example looping over a query) I get the error.
My gut tells me that it's SQLAlchemy which emits an error, and maybe some logging config causes the log to be discarded at some point in the application.
Additionally, for testing, I am using SQLite. This, as far as I can recall, can only be accessed from one thread. So if mod_wsgi spawns more threads, it may break the app.
I am a bit at a loss, because it only breaks running behind mod_wsgi, which also seems to swallow my errors. What can I do to make the errors bubble up into the apache error_log?
For reference, the code can be seen on this github permalink.
It turns out I was not completely wrong. The exception was indeed thrown by SQLAlchemy, and as it's streamed to stdout by default, mod_wsgi silently ignored it (as far as I can tell).
To answer my main question: How to see the errors produced by the WSGI app?
It's actually very simple: redirect your logs to stderr. The only thing you need to do is add the following to your WSGI script:
import logging, sys
logging.basicConfig(stream=sys.stderr)
Now, this is the most basic logging config. As I haven't put anything in place for my application yet, this will do. But I guess once the application matures you will have a more sophisticated logging config anyway, so this won't bite you.
But for quick and dirty debugging, this will do just fine.
I had a similar problem: occasional "Internal Server Error" without logs. When you use mod_wsgi you should remove app.run(), because it always starts a local WSGI server, which we do not want if we deploy the application to mod_wsgi. See the docs. I do not know if this is your case, but I hope this can help.
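The usual pattern is to guard the call so it only runs when the module is executed directly, not when mod_wsgi imports it (a minimal sketch):

from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # Only start the built-in dev server when run directly;
    # under mod_wsgi this block never executes.
    app.run()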
If you put this into your config.py, it will help dramatically in propagating errors up to the Apache error log:
PROPAGATE_EXCEPTIONS = True
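For completeness, a sketch of how that setting gets picked up, assuming your app loads config.py as a config object:

from flask import Flask

app = Flask(__name__)
app.config.from_object('config')  # reads PROPAGATE_EXCEPTIONS = True

With PROPAGATE_EXCEPTIONS set, Flask re-raises exceptions instead of swallowing them, so they reach mod_wsgi and end up in the Apache error log.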