Django LAMP Apache mod_wsgi Python changes not taking effect [duplicate] - python

I have an extensive program running on my server. The line with the error looks like the following:
result[0].update(dictionary)
result[0] looks like ("label", {key: value, ...}), so I got an error saying that a tuple does not have update.
When I changed it to result[0][1].update(dictionary), I got the same error!
I then added print "test" above that line to see what happened. I got the same error, but the traceback now pointed at the print statement. This tells me that the code the server is running is the original, not the edited version. I have saved my code and tried restarting the server. I do not understand why or how this is happening. What could be causing this, and how can I make the server recognize the newer version?
error message
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/background_task/tasks.py", line 160, in run_task
tasks.run_task(task.task_name, args, kwargs)
File "/usr/local/lib/python2.7/dist-packages/background_task/tasks.py", line 45, in run_task
task.task_function(*args, **kwargs)
File "/.../proj/tasks.py", line 10, in automap_predict
automap_obj.predict()
File "/.../proj/models.py", line 317, in predict
prediction = predictions[1]
File "/.../proj/models.py", line 143, in predict
#this is a recursive call
File "/.../proj/models.py", line 143, in predict
#this is a recursive call
File "/.../proj/models.py", line 127, in predict
#result[0].update(dictionary) this happens at the base case of the recursion
AttributeError: 'tuple' object has no attribute 'update'
Notice that I am getting this error on a line that is commented out, demonstrating that this is not the code that is really running.
view
def view(request, param):
    run_background_task(param)
    return redirect("project.view.reload")
background_task
@background(schedule=0)
def run_background_task(param):
    obj = MyModel.objects.get(field=param)
    obj.predict()
predict function
This is where the result gets created. I am not permitted to show this code, but I am sure the actual code is irrelevant: it was working, I changed it to make a quick update, and I got an error. I then fixed the error but proceeded to get the same error. I even went back to the old version that was working and still got the same error. Therefore this error has nothing to do with the contents of this function.
Let me know if I can be of any more help.

If you are running the development server provided with Django, namely ./manage.py runserver, you should not experience this problem, since it automatically reloads when a .py file is saved. But if you are running WSGI, say with an Apache server, this is a common issue with a very simple solution. Basically, whenever you change a .py file, the server keeps using the previously loaded code. The solution is either to restart Apache with sudo service apache2 restart (which is overkill) or simply to run touch wsgi.py, wherever your wsgi.py is located within the project, which tells the server to reload your code. Of course you can also delete the .pyc files, but that is tedious.
Edit:
I just noticed that you are using the @background decorator, which makes me believe you are using django-background-task. If that's the case, you will have to make sure this app picks up your code changes too, which may not happen automatically with the touch wsgi.py approach. I don't have experience with this particular app, but with celery, for example, a much more sophisticated task-scheduling app, you would also have to restart the worker for code changes to take effect, since the worker stores a pickled version of the code.
Now, after checking out the django-background-task app (which seems to be the one you are using): in theory you shouldn't have to do anything special, but just to make sure, you can clear out any scheduled tasks and reschedule them. I haven't tried it, but you should be able to do this through ./manage.py shell:
>>> from background_task.tasks import tasks
>>> tasks._tasks = {}
And you will probably also have to clear out the database, for the DBTaskRunner:
>>> from background_task.models import Task
>>> Task.objects.all().delete()

Maybe this is what you are looking for?

import pprint

result = (
    ("item1", {'key1': 'val1'}),
    ("item2", {'key2': 'val2'}),
)
my_dict = {
    'key1': 'updated val',
    'new key': 'new value',
}

pprint.pprint(result[0])
# Result: ('item1', {'key1': 'val1'})

key, dct = result[0]
dct.update(my_dict)

pprint.pprint(result)
# Result: (('item1', {'key1': 'updated val', 'new key': 'new value'}),
#          ('item2', {'key2': 'val2'}))

The issue may be related to compiled Python files (.pyc) in your project folder on the server. These files are generated automatically as the Python code is interpreted. Sometimes they aren't recompiled even though new code is present, and the old code keeps running.
You can install django-extensions via pip; it comes with a handy manage.py command that clears those files:
python manage.py clean_pyc
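If you'd rather not install anything, a minimal sketch of the same cleanup (run it from the project root; it just walks the tree and deletes compiled files):

import os

# delete stale compiled files so Python regenerates them from the new source
for root, dirs, files in os.walk('.'):
    for name in files:
        if name.endswith('.pyc'):
            os.remove(os.path.join(root, name))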
If that doesn't work, then you need to restart the WSGI server that's running the code, since the old Python code is still in memory.

If the code is not updating, the solution will depend on how you are serving your application. If you are using ./manage.py runserver, you should not be having problems. Likewise, if you are serving with apache + mod_python, restarting the server should reload the code.
But I assume you are doing neither, since you still have this problem. If you're using uWSGI or gunicorn, they run in separate processes from the web server, so you need to restart them separately. uWSGI can also be set up to reload the app every time its settings file changes, so normally I just call touch myapp.ini to reload the code.


I had a similar problem: gunicorn wasn't picking up the updated code.
First try:
sudo systemctl restart gunicorn
If that doesn't solve it:
sudo systemctl start gunicorn
sudo systemctl enable gunicorn
sudo systemctl status gunicorn   (check for faults)
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
That should then update the server.

It seems your result is not what you think it is. I used the same format you described, and as you can see, it works. You should check that the function is actually returning the proper data.
>>> a = (('label', {'k':'3'}),)
>>> print (a[0])
('label', {'k': '3'})
>>> a[0][1].update({'k':'6'})
>>> print (a[0])
('label', {'k': '6'})
>>>
FYI, as you can see here, tuples are immutable sequence types, so they do not support updating, which is why they do not define an update method.
If you want to update result[0], which is itself a tuple (as is clear from the error message shown), you have to create a new tuple containing the updated entities.
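A minimal sketch of that rebuild, using the (label, dict) shape from the question (names are illustrative):

result = (("label", {"key": "value"}),)
label, data = result[0]
new_data = dict(data)                        # copy the inner dict
new_data.update({"key": "new value"})
result = ((label, new_data),) + result[1:]   # rebuild the outer tuple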

Related

Use static object in LibreOffice python script?

I've got a LibreOffice python script that uses serial IO. On my systems, opening a serial port is a very slow process (around 1 second), so I'd like to keep the serial port open, and just send stuff as required.
But LibreOffice python apparently reloads the python framework every time a call is made, unlike most python implementations, where the process is persistent and top-level code in a module runs only once, when the module is first imported.
Is there a way in LibreOffice python to persist objects between calls?
SerialObject = None

def return_global():
    return str(SerialObject)  # always returns "None"

def init_serial_object():
    SerialObject = True
It looks like there is a simple bug: add a global statement and then it works.
However, it may be that the real problem is your setup. Here is a working example. Put the code in a file called inctest.py under $LIBREOFFICE_USER_DIR/Scripts/python/pythonpath/incmod/.
def init_serial_object():
    global SerialObject
    SerialObject = True

def moduleVersion():
    return "2.0"  # change this to verify that the most recently updated code is running
The code to call it should be located in the user profile directory (that is, location=user).
from incmod import inctest

inctest.init_serial_object()
msgbox("{}: {}".format(
    inctest.moduleVersion(), inctest.return_global()))
Run the script by going to Tools > Macros > Run Macro and find it under My Macros.
Be aware that inctest.py will not always get reloaded. There are two ways to reload it: restart LibreOffice, or force python to reload it by doing del sys.modules[mod].
The moduleVersion() function is not necessary, but it helps you see when the module is getting reloaded: make changes to that line and then see whether the output changes.
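A minimal sketch of that forced reload (the module name is taken from the example above):

import sys

# drop the cached module so the next import picks up the edited code
if 'incmod.inctest' in sys.modules:
    del sys.modules['incmod.inctest']
from incmod import inctest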

Can we edit the endpoints.py and add new functions to it in airflow?

I have new functions and I want to call them through api/experimental/route_name
I have the below function and want to integrate it with airflow so that the details show up whenever I call api/experimental/api_userDetail.
I have included the function in the file endpoints.py. For some reason it is not showing up on the server and I get a 404 error. It would be great to get a working method for this.
def returnFromVariable(key_name):
    return Variable.get(key_name, deserialize_json=True)

@api_experimental.route('/api_userDetail', methods=['GET'])
def userDetailApi():
    get_element_config_name = "user_credential"
    return jsonify(returnFromVariable(get_element_config_name))
The output is the data returned.
Actually, it worked. I had two versions of airflow running, one on python2 and another on python3, so I had to make the changes in the python3 version, and that worked for me. The code I wrote above works. To add new functionality, simply go to www/api/experimental, add the API to the file, and restart the webserver; it will work.
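Once the webserver is restarted, a quick way to exercise the new route (the host and port are assumptions about a default airflow setup):

import requests

# hypothetical: default airflow webserver address; adjust host/port as needed
resp = requests.get('http://localhost:8080/api/experimental/api_userDetail')
print(resp.json())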

when using Watchman's watch-make I want to access the name of the changed files

I am writing a watchman command with watchman-make and I'm at a loss when trying to access exactly what was changed in the directory. I want to run my upload.py script and inside the script I would like to access filenames of newly created files in /var/spool/cups-pdf/ANONYMOUS .
so far I have
$ watchman-make -p '/var/spool/cups-pdf/ANONYMOUS' --run 'python /home/pi/upload.py'
I'd like to add another argument to python upload.py so I have the exact filepath of the newly created file, and can send the new file over to my database in upload.py.
I've been looking at the docs of watchman and the closest thing I can think to use is a trigger object. Please help!
Solution with watchman-wait:
Assuming project layout like this:
/posts/_SUBDIR_WITH_POST_NAME_/index.md
/Scripts/convert.sh
And the shell script like this:
#!/bin/bash
# File: convert.sh
SrcDirPath=$(cd "$(dirname "$0")/../"; pwd)
cd "$SrcDirPath"
echo "Converting: $SrcDirPath/$1"
Then we can launch watchman-wait like this:
watchman-wait . --max-events 0 -p 'posts/**/*.md' | while read line; do ./Scripts/convert.sh $line; done
When we change the file /posts/_SUBDIR_WITH_POST_NAME_/index.md, the output will look like this:
...
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
...
watchman-make is intended to be used together with tools that will perform a follow-up query of their own to discover what they want to do as a next step. For example, running the make tool will cause make to stat the various deps to bring things up to date.
That means that your upload.py script needs to know how to do this for itself if you want to use it with watchman.
You have a couple of options, depending on how sophisticated you want things to be:
Use pywatchman to issue an ad-hoc query
If you want to be able to run upload.py whenever you want and have it figure out the right thing (just like make would do), then you can have it ask watchman directly. You can have upload.py use pywatchman (the python watchman client) to do this. pywatchman gets installed if the watchman configure script thinks you have a working python installation; you can also pip install pywatchman. Once you have it available and on your PYTHONPATH:
import os
import pywatchman

client = pywatchman.client()
client.query('watch-project', os.getcwd())
result = client.query('query', os.getcwd(), {
    "since": "n:pi_upload",
    "fields": ["name"]})
print(result["files"])
This snippet uses the since generator with a named cursor to discover the list of files that changed since the last query was issued using that same named cursor. Watchman will remember the associated clock value for you, so you don't need to complicate your script with state tracking. We're using the name pi_upload for the cursor; the name needs to be unique among the watchman clients that might use named cursors, so naming it after your tool is a good idea to avoid potential conflict.
This is probably the most direct way to extract the information you need without requiring that you make more invasive changes to your upload script.
Use pywatchman to initiate a long running subscription
This approach will transform your upload.py script so that it knows how to directly subscribe to watchman, so instead of using watchman-make you'd just directly run upload.py and it would keep running and performing the uploads. This is a bit more invasive and is a bit too much code to try and paste in here. If you're interested in this approach then I'd suggest that you take the code behind watchman-wait as a starting point. You can find it here:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait
The key piece of this that you might want to modify is this line:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait#L169
which is where it receives the list of files.
Why not triggers?
You could use triggers for this, but we're steering folks away from triggers because they are hard to manage. A trigger will run in the background and have its output go to the watchman log file. It can be difficult to tell if it is running, or to stop it running.
Speaking of unix, what about watchman-wait?
We also have a command that emits the list of changed files as they change: watchman-wait. You could potentially stream its output in your upload.py. This has some similarities with the subscription approach, but without directly using the pywatchman client. The interface is closer to the unix model: it effectively feeds your script the list of changed files on stdin.
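A minimal sketch of that streaming idea inside upload.py (the watched path comes from the question; the upload step is a placeholder):

import subprocess

# stream the names of changed files from watchman-wait (flags as shown earlier)
proc = subprocess.Popen(
    ['watchman-wait', '/var/spool/cups-pdf/ANONYMOUS', '--max-events', '0'],
    stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, b''):
    changed_path = line.decode().strip()
    # hypothetical: replace with the actual database upload
    print('uploading %s' % changed_path)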

modifying apache ports.conf via fabric script

I am automating the deploy of a site that requires me to add a listen port to ports.conf. Right now it is OK for me to just replace the existing one, but as new sites get added I would like to be able to just modify the file. I have seen examples of creating a backup of a file and writing out the modified file in python. This seems to get me most of the way there and, python-wise, I'm sure I can figure out the rest (making sure the change hasn't already been made, etc.). However, I'm not sure about doing this in fabric. How would I go about executing the block of python code remotely?
If you need to add a line to a configuration file (and do nothing if it's already there), you can use the append function in fabric.contrib.files.
Example:
from fabric.contrib.files import append
append('/etc/apache2/ports.conf', 'Listen 1234', use_sudo=True)
See http://docs.fabfile.org/en/1.7/api/contrib/files.html#fabric.contrib.files.append
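To run this as part of a deploy, a minimal sketch of a fabfile task wrapping the same call (the task name and port are illustrative):

# fabfile.py
from fabric.api import task
from fabric.contrib.files import append

@task
def add_listen_port(port='1234'):
    # append() does nothing if the line is already in the file
    append('/etc/apache2/ports.conf', 'Listen %s' % port, use_sudo=True)

Then invoke it with something like fab -H yourserver add_listen_port:port=8080.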

pydev breakpoints not working

I am working on a project using python 2.7.2, sqlalchemy 0.7, unittest, eclipse 3.7.2 and pydev 2.4. I am setting breakpoints in python files (unit test files), but they are completely ignored (they worked at some point before). By now I have upgraded all related software (see above), started new projects, played around with settings, hypnotized my screen, but nothing works.
The only idea I got from some post is that it has something to do with changing some .py file names to lower case.
Does anyone have any ideas?
Added: I even installed the aptana version of eclipse and copied the .py files to it => same result; breakpoints are still ignored.
Still no progress: I have changed some code that might be seen as unusual and replaced it with a more straightforward solution.
Some more info: it probably has something to do with the unittest module:
breakpoints in my files defining test suites work,
breakpoints in the standard unittest files themselves work
breakpoints in my tests methods in classes derived from unittest.TestCase do not work
breakpoints in my code being tested in the test cases do not work
at some point before, I could define working breakpoints in test methods and in the code being tested
some things I changed after that are: started using test suites, changed some filenames to lowercase, ...
this problem also occurs if my code runs without exceptions or test failures.
what I already tried is:
remove .pyc files
define new project and copy only .py files to it
rebooted several times in between
upgraded to eclipse 3.7.2
installed latest pydev on eclipse 3.7.2
switch to aptana (and back)
removed code that 'manually' added classes to my module
fiddled with some configurations
what I can still do is:
start a new project with my code and keep removing/changing code until breakpoints work, to figure out, black-box style, whether this has something to do with some part of my code
Does anyone have any idea what might cause these problems or how they might be solved?
Is there any other place I could look for a solution?
Do pydev developers look into the questions on stackoverflow?
Is there an older version of pydev that I might try?
I have been working with pydev/eclipse for a long time and it works well for me, but without debugging I'd be forced to switch IDEs.
In answer to Fabio's questions below:
The python version is 2.7.2,
The sys.gettrace gives None (but I have no idea what in my code could influence that)
This is the output of the debugger after changing the suggested parameters:
pydev debugger:
starting
('Executing file ', 'D:\\.eclipse\\org.eclipse.platform_3.7.0_248562372\\plugins\\org.python.pydev.debug_2.4.0.2012020116\\pysrc\\runfiles.py')
('arguments:', "['D:\\\\.eclipse\\\\org.eclipse.platform_3.7.0_248562372\\\\plugins\\\\org.python.pydev.debug_2.4.0.2012020116\\\\pysrc\\\\runfiles.py', 'D:\\\\Documents\\\\Code\\\\Eclipse\\\\workspace\\\\sqladata\\\\src\\\\unit_test.py', '--port', '49856', '--verbosity', '0']")
('Connecting to ', '127.0.0.1', ':', '49857')
('Connected.',)
('received command ', '501\t1\t1.1')
sending cmd: CMD_VERSION 501 1 1.1
sending cmd: CMD_THREAD_CREATE 103 2 <xml><thread name="pydevd.reader" id="-1"/></xml>
sending cmd: CMD_THREAD_CREATE 103 4 <xml><thread name="pydevd.writer" id="-1"/></xml>
('received command ', '111\t3\tD:\\Documents\\Code\\Eclipse\\workspace\\sqladata\\src\\testData.py\t85\t**FUNC**testAdjacency\tNone')
Added breakpoint:d:\documents\code\eclipse\workspace\sqladata\src\testdata.py - line:85 - func_name:testAdjacency
('received command ', '122\t5\t;;')
Exceptions to hook : []
('received command ', '124\t7\t')
('received command ', '101\t9\t')
Finding files... done.
Importing test modules ... testAtomic (testTypes.TypeTest) ... ok
testCyclic (testTypes.TypeTest) ...
The rest is output of the unit test.
Continuing from Fabio's answer part 2:
I have added the code at the start of the program, and the debugger stops working at the last line of the following method in sqlalchemy\orm\attributes.py (it is a descriptor, but how or whether it interferes with the debugging is beyond my current knowledge):
class InstrumentedAttribute(QueryableAttribute):
    """Class bound instrumented attribute which adds descriptor methods."""

    def __set__(self, instance, value):
        self.impl.set(instance_state(instance),
                      instance_dict(instance), value, None)

    def __delete__(self, instance):
        self.impl.delete(instance_state(instance), instance_dict(instance))

    def __get__(self, instance, owner):
        if instance is None:
            return self
        dict_ = instance_dict(instance)
        if self._supports_population and self.key in dict_:
            return dict_[self.key]
        else:
            return self.impl.get(instance_state(instance), dict_)  # <= last line of debugging
From there the debugger steps into the __getattr__ method of one of my own classes, derived from a declarative_base() class of sqlalchemy.
Probably solved (though not understood):
The problem seemed to be that the __getattr__ mentioned above created something similar to infinite recursion; however, the program/unittest/sqlalchemy recovered without reporting any error. I do not understand the sqlalchemy code sufficiently to see why the __getattr__ method was called at all.
I changed the __getattr__ method to call super for the attribute name on which the recursion occurred (most likely not my final solution), and the breakpoint problem seems gone.
If I can formulate the problem in a concise manner, I will probably try to get some more info on the google sqlalchemy newsgroup, or at least check my solution for robustness.
Thank you Fabio for your support; the trace_func() function pinpointed the problem for me.
Seems really strange... I need some more info to better diagnose the issue:
Open \plugins\org.python.pydev.debug\pysrc\pydevd_constants.py and change
DEBUG_TRACE_LEVEL = 3
DEBUG_TRACE_BREAKPOINTS = 3
run your use-case with the problem and add the output to your question...
Also, it could be that for some reason the debugging facility is reset in some library you use or in your code, so, do the following: in the same place that you'd put the breakpoint do:
import sys
print 'current trace function', sys.gettrace()
(note: when running in the debugger, it'd be expected that the trace function is something as: <bound method PyDB.trace_dispatch of <__main__.PyDB instance at 0x01D44878>> )
Also, please post which Python version you're using.
Answer part 2:
The fact that sys.gettrace() returns None is probably the real issue... I know some external libraries which mess with it (i.e.:DecoratorTools -- read: http://pydev.blogspot.com/2007/06/why-cant-pydev-debugger-work-with.html) and have even seen Python bugs and compiled extensions break it...
Still, the most common reason it breaks is probably because Python will silently disable the tracing (and thus the debugger) when a recursion throws a stack overflow error (i.e.: RuntimeError: maximum recursion depth exceeded).
You can probably put a breakpoint in the very beginning of your program and step in the debugger until it stops working.
Or, maybe simpler, do the following: add the code below to the very beginning of your program and see how far the printing goes. The last thing printed is the code just before the tracing broke (so you could put a breakpoint at the last line printed, knowing it should be the last line where it'd work). Note that if it's a large program, printing may take a long time; it may even be faster to print to a file instead of a console (such as cmd, bash or eclipse) and open that file later (just redirect the print in the example to a file).
import sys

def trace_func(frame, event, arg):
    print 'Context: ', frame.f_code.co_name, '\tFile:', frame.f_code.co_filename, '\tLine:', frame.f_lineno, '\tEvent:', event
    return trace_func

sys.settrace(trace_func)
If you still can't figure it out, please post more information on the obtained results...
Note: a workaround until you don't find the actual place is using:
import pydevd;pydevd.settrace()
on the place where you'd put the breakpoint -- that way you'd have a breakpoint in code which should definitely work, as it'll force setting the tracing facility at that point (it's very similar to the remote debugging: http://pydev.org/manual_adv_remote_debugger.html except that as the debugger was already previously connected, you don't really have to start the remote debugger, just do the settrace to emulate a breakpoint)
Coming late into the conversation, but just in case it helps: I just ran into a similar problem and found that the debugger is very particular w.r.t. what lines it considers "executable" and available to break on.
If you are using line continuations, or multi-line expressions (e.g. inside a list), put the breakpoint in the last line of the statement.
I hope it helps.
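For instance (illustrative only), given a multi-line expression like the one below, set the breakpoint on the statement's last line:

totals = [
    1 + 2,
    3 + 4,
]  # <- per the note above, break on this closing line, not on the first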
Try removing the corresponding .pyc file (compiled) and then running.
Also I have sometimes realized I was running more than one instance of a program.. which confused pydev.
I've definitely seen this before too. Quite a few times.
Ran into a similar situation running a django app in Eclipse/pydev. What was happening was that the code being run was the version installed in my virtualenv, not my source code. I removed my project from my virtualenv's site-packages, restarted the django app in the eclipse/pydev debugger, and everything was fine.
I had similar-sounding symptoms. It turned out that my module import sequence was rexec'ing my entry-point python module because a binary (non-Python) library had to be dynamically loaded, i.e., the LD_LIBRARY_PATH was dynamically reset. I don't know why this causes the debugger to ignore subsequent breakpoints. Perhaps the rexec call is not specifying debug=true; it should specify debug=true/false based on the calling context state?
Try setting a breakpoint at your first import statement being cognizant of whether you are then s(tep)'ing into or n(ext)'ing over the imports. When I would "next" over the 3rdparty import that required the dynamic lib loading, the debug interpreter would just continue past all breakpoints.
