Here's the problem:
In one of Django views.py I have the following code:
from kml_generator import KML_generator

# @login_required(login_url='/dev/login')
def search(request):
    if request.POST:
        result, SF = Validate(request, Activities)
        val = result.values('id')
        KML_generator(result1=val, user=request.user)
It basically imports the kml_generator module and calls the KML_generator class from there. This class generates a .kml file, which is then shown on OpenLayers. It works as it should, but I want to change it.
And now:
Why, when I change code inside the kml_generator module, does it not affect the behaviour? I have tried everything; I even put errors in there, and it still works like a charm...
So here's the question:
How do I change it? Does Django have some kind of 'build' or 'compile' step? Do I need to invoke it for my changes to take effect?
PS. It's all running on Apache using wsgi.py.
PS2. OK, this is embarrassing, but an outside company developed a nice dynamic Django website for us, and now I do not understand why it does not behave the way I thought it would.
You need to restart the Apache server for Django to pick up changes.
Python loads source files just once, when a module is first imported; the compiled bytecode is then kept in memory. At import time, Python also caches the bytecode in a .pyc file next to the original source file. You can verify that a fresh import has taken place by comparing the timestamps of the .py file and the corresponding .pyc file.
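You can make that comparison from a Python shell, for example (a minimal sketch; kml_generator.py stands in for whichever module you are editing):

    import os

    # If the .py is newer than its .pyc, the cached bytecode is stale and the
    # next fresh import (e.g. after a server restart) will recompile it.
    src = 'kml_generator.py'
    cached = 'kml_generator.pyc'  # on Python 3, look under __pycache__/ instead
    print(os.path.getmtime(src) > os.path.getmtime(cached))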
A graceful restart should suffice; run apache2ctl graceful as root on your server.
In future, you may want to get yourself a development setup; running the same code (from a VCS, of course), but using the built-in Django development server:
python manage.py runserver
The Django development server does its best to reload code when you change it. This is a development feature only (watching files for changes costs performance).
Last but not least, try to avoid altering third-party libraries. Use subclassing or monkeypatching instead; perhaps the upstream author would be willing to implement new features for you, or to accept patches. That way you don't have to maintain those changes yourself across versions either.
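For example, instead of editing kml_generator in place, you could subclass it in your own code (a hypothetical sketch: the generate() method is an assumption, since the real API of KML_generator isn't shown here):

    from kml_generator import KML_generator

    class MyKMLGenerator(KML_generator):
        # Local customisation lives here; the third-party module stays
        # untouched, so upgrading it won't silently discard your changes.
        def generate(self):
            # hypothetical hook: adjust inputs or outputs around the original
            return super(MyKMLGenerator, self).generate()

Your view would then instantiate MyKMLGenerator instead of KML_generator.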
In Django culture, I have encountered the concept of app reuse but not snippet reuse. Here is an example of what I mean by snippet reuse: I have a function getDateTimeObjFromString(sDateTime); you pass it a date-time string and it returns a Python datetime object.
Back in the late 1980s or early 1990s, I was exposed to the idea of snippet reuse at a FoxPro developers conference. If you write code for a specific problem and find it is useful elsewhere in your project, move it to a project library. If you find that code is useful for other projects, move it to a generic library that can be accessed by all projects.
(At the FoxPro DevCon, they did not call it snippet reuse. I coined that term to make clear that I am referring to reuse of chunks of code smaller than an entire app. The FoxPro DevCon was long ago, and I do not remember exactly what they called it.)
I read the most recent "Two Scoops of Django", and it does mention reusing snippets within a single project, but I did not find any mention of snippet reuse across multiple projects.
I wrote and used getDateTimeObjFromString() long before I tackled my Django app. It lives in packages I keep under /home/Common/pyPacks. On my computers I set PYTHONPATH=/home/Common/pyPacks, so every project can access the code there. The code for getDateTimeObjFromString() is under a Time subdirectory, in a file named Convert.py. So, to use the code in any project:
from Time.Convert import getDateTimeObjFromString
My Django app downloads data from an API, and that data includes timestamps. It would be nice if the API sent Python datetime objects, but what you get are strings; hence the utility of getDateTimeObjFromString().
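For illustration, the function could be as small as this (a sketch; the actual implementation and the '%Y-%m-%d %H:%M:%S' timestamp format are assumptions):

    from datetime import datetime

    def getDateTimeObjFromString(sDateTime):
        # parse an API timestamp string into a datetime object
        return datetime.strptime(sDateTime, '%Y-%m-%d %H:%M:%S')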
This is just one example; there are many little functions under /home/Common/pyPacks that I find convenient to access in my Django project.
Yes, /home/Common/pyPacks is under version control on GitHub, and yes, I deploy it on any particular machine via git pull.
When working on my Django project from a development computer, PYTHONPATH works, and I can import the packages. But when I tried running my Django app on a server via wsgi.py, PYTHONPATH was ignored. I can set PYTHONPATH at both the OS and the Apache2 level, but Python ignores it, and the functions cannot be imported.
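For completeness: sys.path can also be extended directly in wsgi.py, since mod_wsgi does not inherit the shell's PYTHONPATH (a minimal sketch; mod_wsgi also provides the WSGIPythonPath directive and the python-path option of WSGIDaemonProcess for the same purpose):

    # wsgi.py
    import sys

    # make the shared library importable under mod_wsgi,
    # where the shell's PYTHONPATH is not inherited
    if '/home/Common/pyPacks' not in sys.path:
        sys.path.insert(0, '/home/Common/pyPacks')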
I do not want to bother with making my personal generic library an official Python package on PyPI.
Does the django community expect me to copy and paste?
I arrived at a workaround: make /home/Common/pyPacks a pseudo site-package by putting a "pyPks" symlink to /home/Common/pyPacks in the virtual environment's site-packages directory, adding "pyPks" to INSTALLED_APPS, and then changing all the import statements as follows:
original:
from Time.Convert import getDateTimeObjFromString
workaround:
from pyPks.Time.Convert import getDateTimeObjFromString
I also had to update all my generic library files to handle both absolute imports via PYTHONPATH and relative imports.
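Inside the library files themselves, imports of sibling modules can be guarded like this (a sketch; the exact dotted path depends on where the importing file sits inside the package):

    try:
        # absolute import, when /home/Common/pyPacks is on PYTHONPATH
        from Time.Convert import getDateTimeObjFromString
    except ImportError:
        # relative import, when running inside the pyPks pseudo-package
        from .Time.Convert import getDateTimeObjFromString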
How to fix “Attempted relative import in non-package” even with __init__.py
Is there a better way to reuse snippets in a django project?
When I develop a Python web app (Flask/uWSGI) and run it on my local machine, *.pyc files are generated by the interpreter. My understanding is that these compiled files can make things load faster, but not necessarily run faster.
When I deploy this same app to production, it runs under a user account that has no write permissions on the local file system. There are no *.pyc files committed to source control, and no effort is made to generate them during the deploy. Even if Python wanted to write a .pyc file at runtime, it would not be able to.
Recently I started wondering if this has any tangible effect on the performance of the app, either in terms of the very first pageview after the process starts, or consistently throughout its entire lifetime.
Should I throw a python -m compileall in as part of my deploy scripts?
Sure, you can go ahead and precompile to .pyc files, as it won't hurt anything.
Will it affect the first or nth pageload? Assuming Flask/WSGI runs as a persistent process, not at all. By the time the first page has been requested, all of the Python modules will have already been loaded into memory (as bytecode). Thus, server startup time will be the only thing affected by not having the files pre-compiled.
However, if for some reason a new Python process is invoked for each page request, then yes, there would (probably) be a noticeable difference in performance and it would be better to pre-compile.
As Klaus said in the comments above, the only other time a pageload might be affected is if a function happens to import a module that hasn't already been imported. That module will then have to be parsed, compiled to bytecode, and loaded into memory before execution can continue.
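That deferred-import case looks like this (a contrived sketch; csv stands in for any module not imported at startup):

    import os  # imported at startup: parsed/compiled before the first request

    def rarely_used_view():
        # Deferred import: the first call to this function pays the cost of
        # finding, parsing and compiling the csv module, unless a cached .pyc
        # already exists; later calls hit sys.modules and are cheap.
        import csv
        return csv.__name__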
Would it be possible to create a Python module that lazily downloads and installs submodules as needed? I've worked with "subclassed" modules that mimic real modules, but I've never tried to do so with downloads involved. Is there a guaranteed directory that I can download source code and data to, which the module would then be able to use on subsequent runs?
To make this more concrete, here is the ideal behavior:
User runs pip install magic_module and the lightweight magic_module is installed to their system.
User runs the code import magic_module.alpha
The code goes to a predetermined URL, is told that there is an "alpha" subpackage, and is then given the URLs of the alpha.py and alpha.csv files.
The system downloads these files to somewhere that it knows about, and then loads the alpha module.
On subsequent runs, the user is able to take advantage of the downloaded files to skip the server trip.
At some point down the road, the user could run import magic_module.alpha; magic_module.alpha._upgrade() from the command line to clear the cache and get the latest version.
Is this possible? Is this reasonable? What kinds of problems will I run into with permissions?
Doable, certainly. The core feature will probably be import hooks. The relevant module would be importlib in Python 3.
Extending the import mechanism is needed when you want to load modules that are stored in a non-standard way. Examples include [...] modules that are loaded from a database over a network.
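To give a flavour of what that looks like, here is a minimal sketch for Python 3 (the URL scheme, the cache directory, the flat one-file-per-module layout, and the absence of error handling and download verification are all assumptions and simplifications):

    import importlib.abc
    import importlib.util
    import os
    import sys
    import urllib.request

    class RemoteFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
        """Fetch submodules of one package from a remote URL, caching locally."""

        def __init__(self, package, base_url, cache_dir):
            self.package = package
            self.base_url = base_url
            self.cache_dir = cache_dir

        def find_spec(self, fullname, path=None, target=None):
            # only handle submodules of our package, e.g. magic_module.alpha
            if not fullname.startswith(self.package + '.'):
                return None
            return importlib.util.spec_from_loader(fullname, self)

        def create_module(self, spec):
            return None  # default module creation is fine

        def exec_module(self, module):
            source = self._fetch(module.__name__)
            exec(compile(source, module.__name__, 'exec'), module.__dict__)

        def _fetch(self, fullname):
            # download once; subsequent runs read from the local cache
            filename = fullname.split('.')[-1] + '.py'
            cached = os.path.join(self.cache_dir, filename)
            if not os.path.exists(cached):
                os.makedirs(self.cache_dir, exist_ok=True)
                with urllib.request.urlopen(self.base_url + '/' + filename) as resp:
                    with open(cached, 'wb') as f:
                        f.write(resp.read())
            with open(cached) as f:
                return f.read()

    sys.meta_path.insert(0, RemoteFinder(
        'magic_module', 'https://example.com/modules', '/tmp/magic_cache'))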
Convenient, probably not. The import machinery is one of the parts of python that has seen several changes over releases. It's undergoing a full refactoring right now, with most of the existing things being deprecated.
Reasonable, well it's up to you. Here are some caveats I can think of:
Tricky to get right, especially if you have to support several Python versions.
What about error handling? Should applications be prepared for the import to fail under normal circumstances? Should they degrade gracefully, or just crash and spew a traceback?
Security? You're basically downloading code from someplace; how do you ensure the connection is not being hijacked?
How about versioning? If you update some of the remote modules, how can you make the application download the correct version?
Dependencies? Pushing of security updates? Permissions management?
Summing it up, you'll have to solve most of the problems of a package manager, along with securing downloads and managing permissions. All of those issues are tricky to begin with, and easy to get wrong with dire consequences.
So with all that in mind, it really comes down to how many resources you deem worth investing in this, and what value it adds over regular use of readily available tools such as pip.
(the permission question cannot really be answered until you come up with a design for your package manager)
Whenever I change the Python source files in my Django project, the .pyc files become out of date. Of course, that's because I need to recompile them in order to test them through my local Apache web server. I would like to get around this manual process by employing some automatic means of compiling them on save, on build through Eclipse, or something like that. What's the best and proper way to do this?
You shouldn't ever need to 'compile' your .pyc files manually. This is always done automatically at runtime by the Python interpreter.
In rare instances, such as when you delete an entire .py module, you may need to manually delete the corresponding .pyc. But there's no need to do any other manual compiling.
What makes you think you need to do this?
I have a Python/Django application that runs on Google App Engine.
My views.py file has some imports...
from commands.userCommands import RegisterUserCommand
from commands.accountCommands import CreateNewAccountCommand, RenameAccountCommand
These imports work fine in my development environment (local machine). But when I upload to Google App Engine, views.py fails with a "Could not import views. Error was: No module named userCommands" error.
Any idea why I can't import my commands.userCommands module?
My file structure looks as follows...
- app.yaml
- urls.py
- views.py
- etc...
- commands/__init__.py
- commands/userCommands.py
Note: I did try to append my application name to the module name/path. No luck.
Note: I did do an update with the --noisy argument, and it does appear to upload my commands folder successfully.
You could be running into a clash with Python's own commands module (which doesn't have submodules like yours). Naming your own modules and packages so that they shadow ones in the standard library (just like naming your variables so that they shadow builtin names, like list or file) is always a perilous undertaking: even though it "should" work, there's always potential for confusion.
Could you try renaming that commands package (and its uses) to something unambiguous and free from danger, such as mycommands, and see if that makes the problem disappear? If so, you can then open a ticket on GAE's tracker (because it would show a minor but undeniable bug in GAE's runtime), but meanwhile your problem is solved!-) If the problem stays, ah well, at least we've eliminated one likely cause and can keep digging...
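A quick way to check which commands module wins at import time (a diagnostic sketch; on the Python 2 runtime GAE used, the standard library's own commands module is the likely culprit):

    import commands

    # If this prints a path inside the standard library rather than your
    # project directory, your package is being shadowed.
    print(commands.__file__)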
The __init__.py files are required to make Python treat the directories as containing packages, so you need a
commands/__init__.py
file in your directory structure. See http://docs.python.org/tutorial/modules.html.