Print statements on server give IOError: failed to write data - python

I am running Pylons on my local machine with paster, and on a Debian server using WSGI. I want to add some print statements to debug a problem; I am not a Pylons or Python expert.
On my local machine this works fine: print statements go to the terminal. On the server, the statements don't go to the log files; instead, the log file says "IOError: failed to write data" whenever a print statement runs.
Until I can fix this, I can't debug anything on the server.
Could someone advise how to get printing running on the server? Thanks!

It's wrong for a WSGI application to use sys.stdout or sys.stderr. If you want to write debug output to the server's error log, use environ['wsgi.errors'].write().
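As a minimal sketch of what that looks like (the application and the message are illustrative, not from the question):

def application(environ, start_response):
    # wsgi.errors is the server's error stream (e.g. the Apache error log
    # under mod_wsgi), so anything written here lands in the server log
    environ['wsgi.errors'].write('debug: handling %s\n' % environ.get('PATH_INFO'))
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']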

Don't use print statements; use the logging module. We can't help you further without knowing the setup of the server.
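For example, a minimal logging sketch (the logger name and level are just an example; in Pylons the handlers and levels are normally configured in the .ini file):

import logging

log = logging.getLogger(__name__)

def some_controller_action():
    # goes wherever the server's logging configuration routes it,
    # instead of relying on stdout being writable
    log.debug('reached some_controller_action')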

Can not remote debug if there is eventlet.monkey_patch() in code?

I am trying to do remote debugging with PyCharm + pydevd on Python code.
The code I am trying to remote-debug is below:
#!/usr/bin/python
import eventlet
eventlet.monkey_patch()

def main():
    import pydevd
    pydevd.settrace('10.84.101.215', port=11111, stdoutToServer=True, stderrToServer=True)
    print "Done"

if __name__ == '__main__':
    main()
Please note that if I comment out the line
eventlet.monkey_patch()
the remote debugging works. If I change the line to
eventlet.monkey_patch(os=False, thread=False)
the remote debugging also works.
But I cannot do that, because it would break other logic. (I am trying to remote-debug OpenStack Neutron; the code above is just a sample to illustrate my question.)
I have also tried some things I found by googling this issue; I will list them here although they did not fix my problem.
1. In PyCharm, change this setting:
Settings -> Build, Execution, Deployment -> Python Debugger -> Gevent compatible (checked)
2. In PyCharm, edit the file
C:\Program Files (x86)\JetBrains\PyCharm 2016.1.4\helpers\pydev\_pydevd_bundle\pydevd_constants.py
and replace SUPPORT_GEVENT = False with SUPPORT_GEVENT = True.
I know this is a PyCharm or pydevd issue. I have already posted it in the PyCharm community but have not received a reply yet, so I figured I would try here. Please give some advice if you know about this.
Can't help with pydevd, but there's an interactive interpreter backdoor in Eventlet, which lets you connect and execute arbitrary code to analyse the state of the system.
import eventlet
from eventlet import backdoor  # needed for backdoor_server

eventlet.monkey_patch()
# add one line to start the backdoor server
eventlet.spawn(backdoor.backdoor_server, eventlet.listen(('localhost', 3000)))
Connect with your favourite telnet client.
http://eventlet.net/doc/modules/backdoor.html
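For example, assuming the backdoor is listening on port 3000 as in the snippet above:
$ telnet localhost 3000
This drops you into a Python REPL running inside the live process, where you can inspect its state interactively.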
Also, import ipdb; ipdb.set_trace() has always worked wonders for me.

Flask Server running wrong program

I am learning Flask.
I was able to run the Hello World tutorial as shown here
Then I tried to build the Flaskr program following the tutorial http://flask.pocoo.org/docs/tutorial/introduction/
I ran into an issue with the Flaskr program accessing the database, specifically:
sqlite3.OperationalError: unable to open database file
So I took a break and went back to seeing if I could still run my "Hello World" program.
Now when I go to the URL 127.0.0.1:5000/, instead of seeing "hello world" I still see my database error from the Flaskr program.
It seems like I need to reset the server instance or something? Please help!
Kill the Python task in the task manager and then run your server again.
If you're testing or working on multiple projects at the same time, run each one in a dedicated virtual environment and serve each at a different port, because by default Flask serves at 127.0.0.1:5000.
Use something like this below:
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8001)
You can use a different port in each project and run all of them without any problem.
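As a minimal sketch, a complete app served on its own port might look like this (the route and port here are illustrative):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'hello world'

if __name__ == '__main__':
    # pick a distinct port per project so the dev servers don't collide
    app.run(host='0.0.0.0', port=8001)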
Happy coding,
J.

How to debug Django app running on Heroku using a remote pdb connection?

To debug a bug I'm seeing on Heroku but not on my local machine, I'm trying to do step-through debugging.
The typical import pdb; pdb.set_trace() approach doesn't work with Heroku since you don't have access to a console connected to your app, but apparently you can use rpdb, a "remote" version of pdb.
So I've installed rpdb, added import rpdb; rpdb.set_trace() at the appropriate spot. When I make a request that hits the rpdb line, the app hangs as expected and I see the following in my heroku log:
pdb is running on 3d0c9fdd-c18a-4cc2-8466-da6671a72cbc:4444
Ok, so how to connect to the pdb that is running? I've tried heroku run nc 3d0c9fdd-c18a-4cc2-8466-da6671a72cbc 4444 to try to connect to the named host from within heroku's system, but that just immediately exits with status 1 and no error message.
So my specific question is: how do I now connect to this remote pdb?
The general related question is: is this even the right way for this sort of interactive debugging of an app running on Heroku? Is there a better way?
NOTE RE CELERY: I've now also tried a similar approach with Celery, to no avail. The default host that Celery's rdb (a remote pdb wrapper) uses is localhost, which you can't get to when the app is running on Heroku. I've tried setting the CELERY_RDB_HOST environment variable to the domain of the website hosted on Heroku, but that gives a "Cannot assign requested address" error. So it's the same basic issue: how do I connect to the remote pdb instance that's running on Heroku?
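(For reference, a minimal sketch of the Celery rdb approach described above; the task itself is illustrative, and the defaults are as documented by Celery:)

from celery import Celery
from celery.contrib import rdb

app = Celery('tasks')

@app.task
def add(x, y):
    # listens on CELERY_RDB_HOST:CELERY_RDB_PORT (defaults: localhost, base port 6900)
    rdb.set_trace()
    return x + y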
In answer to your second question, I do it differently depending on the type of error (browser-side, backend, or view). For backend and view testing (unittests), will something like this work for you?
$ heroku run --app=your-app "python manage.py shell --settings=settings.production"
Then debug-away within ipython:
>>> %run -d script_to_run_unittests.py
Even if you aren't running a Django app, you can run the debugger as a command-line option to IPython so that any Python error drops you into the debugger:
$ heroku run --app=your-app "ipython --pdb"
Front-end testing is a whole different ballgame, where you should look into tools like Selenium. I think there's also a "salad" test suite module that makes front-end tests easier to write. Writing a test that breaks is the first step in debugging (or so I'm told ;).
If the bug looks simple, you can always do the old "print and run" with something like
import logging
logger = logging.getLogger(__file__)
logger.warn('here be bugs')
and review your log files with getsentry.com or an equivalent monitoring tool or just:
heroku logs --tail

Python HTTPServer Exceptions cause it to fail

I've tried many different ways of writing a simple HTTP server for Python (using IronPython in a hosted environment), and whenever there is an error (e.g. a run-time error), the Python environment just hangs rather than shutting down and reporting the error.
Update:
There is a comment on an answer to another SO question suggesting that calling self.server.shutdown() in a request handler also causes a Python web server to hang on Windows.
So possibly run-time exceptions lead to the same problem.
You should probably do something along the lines of:
import sys

try:
    httpd.serve_forever()
except RuntimeError:
    httpd.shutdown()
    sys.exit()
By the way, this is for Python 3.
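Putting it together, a minimal complete sketch (the handler, address, and port are illustrative):

import sys
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok')

httpd = HTTPServer(('127.0.0.1', 8000), Handler)
try:
    httpd.serve_forever()
except RuntimeError:
    # shut the server down and exit instead of hanging
    httpd.shutdown()
    sys.exit()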

Why would Django fcgi just die? How can I find out?

I'm running Django on Linux using fcgi and Lighttpd. Every now and again (about once a day) the server just dies. I'm using the latest stable release of Django, Python and Lighttpd.
The only thing I can think of is that my program opens a lot of files and executes a lot of external processes, but I'm fairly sure that side of things is watertight.
Looking at the error and access logs, there's nothing exceptional happening (i.e. load isn't above normal). On those occasions where I have had exceptions from Python, these have shown up in the error.log, but when this crash happens I get nothing.
Is there any way of finding out why the process died? Short of putting logging statements on every single line? Obviously I can't reproduce this so I don't know exactly where to look.
Edit
It's the django process that's dying. I'm running the server with manage.py runfcgi daemonize=true method=threaded host=127.0.0.1 port=12345
You could edit manage.py to redirect stderr to a file, assuming runfcgi doesn't do that itself:
import sys
if sys.argv[1] == "runfcgi":
    sys.stderr = open("/path/to/my/django-error.log", "a")
Is this on your own server? (Do you own the box?) I've had this problem on shared hosting, where the host was simply killing long-running processes. Do you know if your fcgi process is receiving a SIGTERM?
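If you want to check for SIGTERM, a minimal sketch is to install a signal handler early in manage.py that logs before the process dies (the log path is illustrative):

import signal, sys

def log_sigterm(signum, frame):
    # record that we were killed, then exit
    with open('/path/to/my/django-error.log', 'a') as f:
        f.write('received SIGTERM, exiting\n')
    sys.exit(1)

signal.signal(signal.SIGTERM, log_sigterm)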
I have had the same problems. Not only do the processes die without warning or reason, they also leak like crazy, with threads left stuck without a master process. We solved this by having a cron job run every 5 minutes that checks whether the port is up and, if not, restarts the server.
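A minimal sketch of such a watchdog, run from cron (the host, port, and restart command are illustrative):

import socket, subprocess

def port_is_up(host, port):
    s = socket.socket()
    s.settimeout(2)
    try:
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

if not port_is_up('127.0.0.1', 12345):
    subprocess.call(['/path/to/restart-fcgi.sh'])  # hypothetical restart script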
By the way, we've now given up on fcgi and are slowly migrating over to uwsgi.
