Really odd mod_python problem

This one is hard to explain!
I am writing a Python application to be run through mod_python. The returned output differs from request to request, even though the logic is 'fixed'.
I have two classes, ClassA and ClassB, such that:
class ClassA:
    def page(self, req):
        req.write("In classA page")
        objB = ClassB()
        objB.methodB(req)
        req.write("End of page")

class ClassB:
    def methodB(self, req):
        req.write("In methodB")
        return None
This is a heavily snipped version of what I have, but the stuff I have snipped doesn't change the control flow. There is only one place where methodB() is called: from __init__() in ClassA.
You would expect the following output:
In classA __init__
In methodB
End of __init__
However, seemingly at random, I get either the correct output above or:
In classA __init__
In methodB
End of __init__
In methodB
The stack trace shows that methodB is being called the second time from __init__. methodB should only be called once. If it were called a second time, you would expect the other logic in __init__ to be executed twice too, but nothing before or after methodB runs again, and there is no recursion.
I wouldn't usually resort to using SO for my debugging, but I have been scratching my head for a while on this.
Python version: 2.5.2 (r252:60911)
Thanks in advance.
Edit
Some clues that the problem might be elsewhere... With the code cut down to the snippet above, the weird output shows up only about once in every 250 hits, which is odd.
The more output there is before "In methodB" is printed, the more often it is incorrectly printed again afterwards ... on average, not in direct proportion. It even happens in Lynx.
I'm going back to the drawing board.
:(
In response to answer
It seems mod_python and Apache are having marital problems. After a restart, things are fine for a few requests; then it all goes increasingly pear-shaped. Issuing
/etc/rc.d/init.d/httpd stop
takes a weirdly long amount of time. Also, RAM is getting eaten up with requests. I am not that familiar with Apache's internals, but it feels (thanks to Nadia) as if threads are staying alive and randomly butting in on requests. Which is plain bonkers.
Moving to mod_wsgi, as S.Lott and Nadia suggested.
Thanks again!!

I've seen similar behaviour with mod_python before. Usually it is because Apache is running multiple threads and one of them is still running an older version of the code. When you refresh the page, chances are the thread with the older code is serving it. I usually fix this by stopping Apache and then starting it again:
sudo /etc/init.d/apache stop
sudo /etc/init.d/apache restart
Restart on its own doesn't always work, and sometimes even stop/start doesn't! That might sound strange, but my last resort in those rare cases where nothing works is to add a raise Exception() statement on the first line of the handler, refresh the page, restart Apache and then refresh the page again. That works every time. There must be a better solution, but that's what worked for me. mod_python can drive one crazy for sure!
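For illustration, here is a hypothetical mod_python handler showing where that temporary raise would go (the handler body and messages are made up, not taken from the question):

from mod_python import apache

def handler(req):
    # Temporary first line: forces an error so Apache/mod_python reloads the module;
    # remove it once the stale interpreter has been flushed.
    raise Exception('force module reload')
    req.content_type = 'text/plain'
    req.write('Hello from a freshly loaded handler')
    return apache.OK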
I hope this might help.

I don't really know, but constructors aren't supposed to return anything, so remove the return None. Even if they could return something, None is automatically returned when a function doesn't return anything itself.
And I think you need a self argument in methodB.
EDIT: Could you show more code? This is working fine.

Related

PyQt5 QPropertyAnimation finished() how to connect

In essence, I'm trying to close a window after the animation completes.
In all the documentation and examples I've looked at, they are either in:
C++
vague "method definitions"
Old style slots and connectors
How do I access the finished() that 'supposedly' gets called when the animation finishes?
self.anim = QtCore.QPropertyAnimation(window, b"windowOpacity")
self.anim.setStartValue(1)
self.anim.setEndValue(0)
self.anim.setDuration(3000)
#self.anim.finished.connect() does not exist
#QtCore.QObject.connect(stuff) is deprecated
#self.anim.finished(window.destroy) destroys window immediately
In all the examples I am reading, they use the first commented-out method, but the IDE complains about 'finished' not having a 'connect()' method.
Every time, guys...
EVERY. TIME.
I look for the answer for hours, and immediately after posting for help, I find it.
Commented-out method #1 is correct; however, you can't do a specific action in the connect() call like window.destroy or something.
The correct way:
self.anim.finished.connect(self.someMethod)

def someMethod(self):
    window.destroy()
What was throwing me off was that the IDE was not offering a code-completion suggestion for finished.connect() (the same with button.clicked.connect(), actually).
This is what I get for relying too much on an IDE, I suppose. Hope this helps someone in the future.
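For future readers, here is a minimal self-contained sketch of the fade-out-then-close pattern discussed above (the class and method names are illustrative, not from the original post):

import sys
from PyQt5 import QtCore, QtWidgets

class FadingWindow(QtWidgets.QWidget):
    def __init__(self):
        super(FadingWindow, self).__init__()
        # Animate this widget's windowOpacity from fully opaque to transparent.
        self.anim = QtCore.QPropertyAnimation(self, b"windowOpacity")
        self.anim.setStartValue(1.0)
        self.anim.setEndValue(0.0)
        self.anim.setDuration(3000)
        # New-style signal/slot connection: close the window when the animation ends.
        self.anim.finished.connect(self.close)

    def start_fade(self):
        self.anim.start()

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    win = FadingWindow()
    win.show()
    win.start_fade()
    sys.exit(app.exec_())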

python gevent: why does the Event kwarg of monkey's patch_all and patch_thread default to False (and what does it do)?

I can't find info about this in the docs, and I only thought to try setting it to True after looking at the source. It seems that setting it to True makes the patch_thread function (which is called by patch_all) eventually execute this code:
if Event:
    from gevent.event import Event
    patch_item(threading, 'Event', Event)
This apparently patches the threading module's Event class with gevent's.
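For reference, a short sketch of what enabling this looks like from user code, assuming the gevent 1.1 patch_all() signature where Event defaults to False:

from gevent import monkey
monkey.patch_all(Event=True)  # also swaps threading.Event for gevent.event.Event

import threading
ev = threading.Event()  # now a gevent Event, cooperative with greenlets
ev.set()
print ev.is_set()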
This seems to have fixed the problem I had been having up to this point with DummyThreads living in memory forever (which eventually eats up all the system memory).
As I've been banging my head against this problem for a day now, I am super excited to just declare setting Event=True on the patching method the solution. However, I feel there's probably a reason it defaults to False.
Anyone know what that reason might be?
Using python 2.7
gevent 1.1 (freshly minted!)
Cheers

Django: call a method only once when Django starts up

I want to initialize some variables (from the database) when Django starts.
I am able to get the data from the database, but the problem is how I should call the initialization method, and it should be called only once.
I tried looking on other pages but couldn't find an answer to this.
The code currently looks something like this:
def get_latest_dbx(request, ....):
    # get the data from the database

def get_latest_x(request):
    get_latest_dbx(request, x, ...)

def startup(request):
    get_latest_x(request)
Some people suggest (Execute code when Django starts ONCE only?) calling that initialization in the top-level urls.py (which looks unusual, since urls.py is supposed to handle URL patterns). There is another workaround: writing a middleware, as in Where to put Django startup code?
But I believe most people are waiting for the ticket to be resolved.
UPDATE:
Since the OP has updated the question, it seems the middleware way may be better, because he actually needs a request object in startup. All startup code could be put in a custom middleware's process_request method, where the request object is available as the first argument. After the startup code executes, a flag can be set to avoid rerunning it later (raising the MiddlewareNotUsed exception only works in __init__, which doesn't receive a request argument). A sketch of this approach follows the next paragraph.
BTW, the OP's requirement looks a bit weird. On one hand, he needs to initialize some variables when Django starts; on the other hand, he needs the request object in the initialization. But when Django starts, there may be no incoming request at all, and even if there is one, it doesn't make much sense. I guess what he actually needs may be to do some initialization for each session or user.
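A minimal sketch of that middleware approach, assuming the old-style MIDDLEWARE_CLASSES API that was current when this was asked; startup() is the OP's function from the question and the class name is made up:

class StartupMiddleware(object):
    initialized = False  # process-level flag so the work runs only once

    def process_request(self, request):
        if not StartupMiddleware.initialized:
            StartupMiddleware.initialized = True
            startup(request)  # the OP's startup(), which needs the request object
        return None  # fall through to normal request handling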
There are some cheats for this. The general approach is to put the initialization code in some special place, so that when the server starts up it runs those files and executes the code as a side effect.
Have you ever tried putting print 'haha' in settings.py? :)
Note: be aware that settings.py is executed twice during start-up.

pydev breakpoints not working

I am working on a project using Python 2.7.2, SQLAlchemy 0.7, unittest, Eclipse 3.7.2 and PyDev 2.4. I am setting breakpoints in Python files (unit test files), but they are completely ignored (at some point before, they worked). By now I have upgraded all related software (see above), started new projects, played around with settings, hypnotized my screen, but nothing works.
The only idea I got from some post is that it has something to do with changing some .py file names to lower case.
Does anyone have any ideas?
Added: I even installed the Aptana version of Eclipse and copied the .py files into it => same result; breakpoints are still ignored.
Still no progress: I have replaced some code that might be seen as unusual with a more straightforward solution.
Some more info: it probably has something to do with the unittest module:
breakpoints in my files defining test suites work,
breakpoints in the standard unittest files themselves work,
breakpoints in my test methods in classes derived from unittest.TestCase do not work,
breakpoints in my code being tested in the test cases do not work.
At some point before, I could define working breakpoints in test methods or the code being tested;
some things I changed after that are: started using test suites, changed some filenames to lowercase, ...
This problem also occurs when my code runs without exceptions or test failures.
What I have already tried:
remove .pyc files
define new project and copy only .py files to it
rebooted several times in between
upgraded to eclipse 3.7.2
installed latest pydev on eclipse 3.7.2
switch to aptana (and back)
removed code that 'manually' added classes to my module
fiddled with some configurations
What I can still do:
start a new project with my code, remove/change code until breakpoints work, and sort of black-box figure out whether this has something to do with some part of my code
Does anyone have any idea what might cause these problems or how they might be solved?
Is there any other place I could look for a solution?
Do PyDev developers look at the questions on Stack Overflow?
Is there an older version of PyDev that I might try?
I have been working with PyDev/Eclipse for a long time and it works well for me, but without debugging I'd be forced to switch IDEs.
In answer to Fabio's questions below:
The Python version is 2.7.2.
sys.gettrace() gives None (but I have no idea what in my code could influence that).
This is the output of the debugger after changing the suggested parameters:
pydev debugger: starting
('Executing file ', 'D:\\.eclipse\\org.eclipse.platform_3.7.0_248562372\\plugins\\org.python.pydev.debug_2.4.0.2012020116\\pysrc\\runfiles.py')
('arguments:', "['D:\\\\.eclipse\\\\org.eclipse.platform_3.7.0_248562372\\\\plugins\\\\org.python.pydev.debug_2.4.0.2012020116\\\\pysrc\\\\runfiles.py', 'D:\\\\Documents\\\\Code\\\\Eclipse\\\\workspace\\\\sqladata\\\\src\\\\unit_test.py', '--port', '49856', '--verbosity', '0']")
('Connecting to ', '127.0.0.1', ':', '49857')
('Connected.',)
('received command ', '501\t1\t1.1')
sending cmd: CMD_VERSION 501 1 1.1
sending cmd: CMD_THREAD_CREATE 103 2 <xml><thread name="pydevd.reader" id="-1"/></xml>
sending cmd: CMD_THREAD_CREATE 103 4 <xml><thread name="pydevd.writer" id="-1"/></xml>
('received command ', '111\t3\tD:\\Documents\\Code\\Eclipse\\workspace\\sqladata\\src\\testData.py\t85\t**FUNC**testAdjacency\tNone')
Added breakpoint:d:\documents\code\eclipse\workspace\sqladata\src\testdata.py - line:85 - func_name:testAdjacency
('received command ', '122\t5\t;;')
Exceptions to hook : []
('received command ', '124\t7\t')
('received command ', '101\t9\t')
Finding files... done.
Importing test modules ... testAtomic (testTypes.TypeTest) ... ok
testCyclic (testTypes.TypeTest) ...
The rest is output of the unit test.
Continuing from Fabio's answer part 2:
I have added the code at the start of the program, and the debugger stops working at the last line of the following method in sqlalchemy\orm\attributes.py (it is a descriptor, but how or whether it interferes with the debugging is beyond my current knowledge):
class InstrumentedAttribute(QueryableAttribute):
    """Class bound instrumented attribute which adds descriptor methods."""

    def __set__(self, instance, value):
        self.impl.set(instance_state(instance),
                      instance_dict(instance), value, None)

    def __delete__(self, instance):
        self.impl.delete(instance_state(instance), instance_dict(instance))

    def __get__(self, instance, owner):
        if instance is None:
            return self
        dict_ = instance_dict(instance)
        if self._supports_population and self.key in dict_:
            return dict_[self.key]
        else:
            return self.impl.get(instance_state(instance), dict_)  # <= last line of debugging
From there the debugger steps into the __getattr__ method of one of my own classes, derived from a declarative_base() class of sqlalchemy.
Probably solved (though not understood):
The problem seemed to be that the __getattr__ mentioned above created something similar to infinite recursion; however, the program/unittest/sqlalchemy recovered without reporting any error. I do not understand the sqlalchemy code sufficiently to understand why the __getattr__ method was called.
I changed the __getattr__ method to call super for the attribute name on which the recursion occurred (most likely not my final solution), and the breakpoint problem seems gone.
If I can formulate the problem in a concise manner, I will probably try to get some more info on the Google sqlalchemy newsgroup, or at least check my solution for robustness.
Thank you Fabio for your support; the trace_func() function pinpointed the problem for me.
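For illustration, a contrived sketch (not the OP's actual SQLAlchemy model) of the failure mode described above: an unguarded __getattr__ that recurses until the recursion limit is hit, which can silently disable the trace function, versus a guarded version that raises AttributeError instead:

class Broken(object):
    def __getattr__(self, name):
        # 'cache' is never set, so this lookup re-enters __getattr__ forever
        # and eventually raises "maximum recursion depth exceeded".
        return self.cache[name]

class Fixed(object):
    def __getattr__(self, name):
        if name == 'cache':
            # Raise cleanly instead of recursing; getattr(obj, 'cache', default)
            # callers still work because they catch AttributeError.
            raise AttributeError(name)
        return getattr(self, 'cache', {}).get(name)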
Seems really strange... I need some more info to better diagnose the issue:
Open \plugins\org.python.pydev.debug\pysrc\pydevd_constants.py and change
DEBUG_TRACE_LEVEL = 3
DEBUG_TRACE_BREAKPOINTS = 3
run your use-case with the problem and add the output to your question...
Also, it could be that for some reason the debugging facility is reset in some library you use or in your code, so do the following: in the same place where you'd put the breakpoint, add:
import sys
print 'current trace function', sys.gettrace()
(note: when running in the debugger, it'd be expected that the trace function is something like: <bound method PyDB.trace_dispatch of <__main__.PyDB instance at 0x01D44878>>)
Also, please post which Python version you're using.
Answer part 2:
The fact that sys.gettrace() returns None is probably the real issue... I know some external libraries which mess with it (i.e.: DecoratorTools -- read: http://pydev.blogspot.com/2007/06/why-cant-pydev-debugger-work-with.html) and have even seen Python bugs and compiled extensions break it...
Still, the most common reason it breaks is probably because Python will silently disable the tracing (and thus the debugger) when a recursion throws a stack overflow error (i.e.: RuntimeError: maximum recursion depth exceeded).
You can probably put a breakpoint in the very beginning of your program and step in the debugger until it stops working.
Or, maybe simpler, is the following: add the code below to the very beginning of your program and see how far it goes with the printing... The last thing printed is the code just before it broke (so you could put a breakpoint at the last line printed, knowing it should be the last line where it'd work). Note that if it's a large program, printing may take a long time; it may even be faster to print to a file instead of a console (such as cmd, bash or Eclipse) and open that file later (just redirect the print in the example to a file).
import sys

def trace_func(frame, event, arg):
    print 'Context: ', frame.f_code.co_name, '\tFile:', frame.f_code.co_filename, '\tLine:', frame.f_lineno, '\tEvent:', event
    return trace_func

sys.settrace(trace_func)
If you still can't figure it out, please post more information on the obtained results...
Note: a workaround until you find the actual place is to use:
import pydevd;pydevd.settrace()
at the place where you'd put the breakpoint -- that way you'd have a breakpoint in code which should definitely work, as it'll force setting the tracing facility at that point (it's very similar to remote debugging: http://pydev.org/manual_adv_remote_debugger.html, except that, as the debugger was already connected, you don't really have to start the remote debugger; just do the settrace to emulate a breakpoint).
Coming late into the conversation, but just in case it helps: I just ran into a similar problem and found that the debugger is very particular w.r.t. what lines it considers "executable" and available to break on.
If you are using line continuations or multi-line expressions (e.g. inside a list), put the breakpoint on the last line of the statement.
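A small illustration of that tip (compute() is just a placeholder):

def compute(x):
    return x * x

values = [compute(1),
          compute(2),
          compute(3)]  # <- set the breakpoint on this last line of the statement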
I hope it helps.
Try removing the corresponding .pyc (compiled) file and then running again.
Also, I have sometimes realized I was running more than one instance of a program, which confused pydev.
I've definitely seen this before too. Quite a few times.
Ran into a similar situation running a Django app in Eclipse/PyDev. What was happening was that the code being run was the version installed in my virtualenv, not my source code. I removed my project from my virtualenv's site-packages, restarted the Django app in the Eclipse/PyDev debugger, and everything was fine.
I had similar-sounding symptoms. It turned out that my module import sequence was rexec'ing my entry-point Python module because a binary (non-Python) library had to be dynamically loaded, i.e. LD_LIBRARY_PATH was dynamically reset. I don't know why this causes the debugger to ignore subsequent breakpoints. Perhaps the rexec call is not specifying debug=true; it should specify debug=true/false based on the calling context state?
Try setting a breakpoint at your first import statement, being cognizant of whether you are then s(tep)'ing into or n(ext)'ing over the imports. When I would "next" over the third-party import that required the dynamic lib loading, the debug interpreter would just continue past all breakpoints.

Can I put break points on background threads in Python?

I'm using the PyDev for Eclipse plugin, and I'm trying to set a break point in some code that gets run in a background thread. The break point never gets hit even though the code is executing. Here's a small example:
import thread

def go(count):
    print 'count is %d.' % count  # set break point here

print 'calling from main thread:'
go(13)
print 'calling from bg thread:'
thread.start_new_thread(go, (23,))
raw_input('press enter to quit.')
The break point in that example gets hit when it's called on the main thread, but not when it's called from a background thread. Is there anything I can do, or is that a limitation of the PyDev debugger?
Update
Thanks for the workarounds. I submitted a PyDev feature request, and it has been completed. It should be released with version 1.6.0. Thanks, PyDev team!
The problem is that there's no API in the thread module to know when a thread starts.
What you can do in your example is set the debugger trace function yourself (as Alex pointed out), as in the code below (if you're not in the remote debugger, pydevd.connected = True is currently required -- I'll change pydev so that this is not needed anymore). You may want to add a try..except ImportError around the pydevd import (which will fail if you're not running in the debugger).
def go(count):
    import pydevd
    pydevd.connected = True
    pydevd.settrace(suspend=False)
    print 'count is %d.' % count  # set break point here
Now, on second thought, I think pydev can replace the start_new_thread function in the thread module with its own function, which will set up the debugger and then call the original function (I just did that and it seems to be working, so if you use the nightly build that will be available in a few hours, which will become the future 1.6.0, it should work without doing anything special).
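A rough sketch of that idea, wrapping thread.start_new_thread so every new thread hooks itself into the debugger before running its target (the wrapper names are illustrative, not PyDev's actual implementation):

import thread
import pydevd

_original_start_new_thread = thread.start_new_thread

def _debug_start_new_thread(function, args, kwargs={}):
    def wrapper(*a, **kw):
        pydevd.connected = True
        pydevd.settrace(suspend=False)  # register this thread with the debugger
        return function(*a, **kw)
    return _original_start_new_thread(wrapper, args, kwargs)

thread.start_new_thread = _debug_start_new_thread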
The underlying issue is with sys.settrace, the low-level Python function used to perform all tracing and debugging -- as the docs say,
The function is thread-specific; for a debugger to support multiple threads, it must be registered using settrace() for each thread being debugged.
I believe that when you set a breakpoint in PyDev, the resulting settrace call is always happening on the main thread (I have not looked at PyDev recently so they may have added some way to work around that, but I don't recall any from the time when I did look).
A workaround you might implement yourself is, in your main thread after the breakpoint has been set, to use sys.gettrace to get PyDev's trace function, save it in a global variable, and make sure all threads of interest call sys.settrace with that global variable as the argument -- a tad cumbersome (more so for threads that already exist at the time the breakpoint is set!), but I can't think of any simpler alternative.
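A rough sketch of that workaround, assuming the breakpoint has already been set so sys.gettrace() returns PyDev's trace function on the main thread (the function names here are made up):

import sys
import threading

pydev_trace = None

def capture_trace():
    # Call on the main thread once breakpoints are active.
    global pydev_trace
    pydev_trace = sys.gettrace()

def worker():
    # First thing in each background thread: re-install the saved tracer.
    if pydev_trace is not None:
        sys.settrace(pydev_trace)
    print 'count is %d.' % 23  # breakpoints here should now be honored

capture_trace()
t = threading.Thread(target=worker)
t.start()
t.join()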
On this question, I found a way to start the command-line debugger:
import pdb; pdb.set_trace()
It's not as easy to use as the Eclipse debugger, but it's better than nothing.
For me this worked, following one of Fabio's posts, after setting the trace with setTrace("000.000.000.000")  # where the 0's are the IP of the computer running Eclipse/PyDev:
threading.settrace(pydevd.GetGlobalDebugger().trace_dispatch)
