I'm currently running a Django project on my school's webserver with FCGI. I followed the various guides that recommend installing a local virtual Python environment, and it worked out great. The only issue I had was that "touching" my .fcgi file to reload source files wasn't enough; instead I had to kill the Python process via SSH, because mod_fcgid is used.
However, the admin didn't think it was a great idea that I ran my own local Python. He thought it better if I just told him what modules to install as root, which was a pretty nice service really.
But doing this, I can no longer kill the Python process since it runs under root (though, immoral as I am, I've definitely tried). The admin's recommendation was that I should try to make the FCGI script reload itself by checking a timestamp. I've tried to find documentation on how to do this, but found very little, and since I'm an absolute beginner I have no idea what would work.
Does anyone have experience running Python/Django under mod_fcgid, or tips on where to find related guides/documentation?
Here's what I would do:
## top of my .fcgi script
import os, sys, time, threading

original_modules = sys.modules.copy()

## watcher to run in a separate thread
def watch_and_reload():
    old_mtime = os.path.getmtime("mymodule.py")
    while True:
        time.sleep(10)
        new_mtime = os.path.getmtime("mymodule.py")
        if new_mtime > old_mtime:
            old_mtime = new_mtime
            # forget everything imported since startup, then re-import
            sys.modules.clear()
            sys.modules.update(original_modules)
            import mymodule
            mymodule.dofcgi()

watcher = threading.Thread(target=watch_and_reload)
watcher.daemon = True  # don't keep the process alive just for the watcher
watcher.start()
Granted, this isn't drop-in perfect (you might have to mess with the threading), but it should give you a general idea of how to "reload" a module completely.
What's the best Python library to automate an external program?
We have accounting software into which we need to upload many files for a particular reason. To do this, we search for the relevant menu item, right-click it and then import. This is time-consuming, so it would be useful to automate it. I've looked at pyautogui, but it needs the screen to be active, and the user can't do anything else on their machine while it is running.
Of course, ideally I would like to use the accounting software's API (if there is one), or find out whether the upload runs a stored procedure at the back end that takes the file path as a parameter, and then call that procedure myself.
In the absence of those, does anyone know if there is a way to automate this in Python without the limitations above, or if not in Python, what other language would be good for this?
Many thanks
I haven't done such a thing myself, so I don't have a proof of concept, but you have at least two options, more or less complicated:
Install VirtualBox/VMware and create a virtual operating system (Windows or Linux) in it, then run your pyautogui script inside that virtual operating system. The virtual OS lives in a window-like container, so you can minimize the window with the running script and keep using your own system as a normal user. (Long tutorial: https://www.virtualbox.org/manual/ch01.html)
If you are a Linux user, you may be familiar with virtual screens. Many people use a virtual screen when automating web testing with Selenium (How do I run Selenium in Xvfb?). So probably, and I do say probably, there is a chance to run pyautogui on a virtual screen: How to attach pyautogui to the virtual display? - that is a similar, but unanswered, question.
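For what it's worth, here is a minimal sketch of the virtual-screen idea on Linux, assuming the pyvirtualdisplay package (a wrapper around Xvfb) is installed. Whether pyautogui can actually drive your accounting software this way is untested, and the coordinates and text below are purely illustrative:

from pyvirtualdisplay import Display

# start a hidden X display backed by Xvfb
display = Display(visible=0, size=(1280, 800))
display.start()

# import pyautogui only after the display is running, so it attaches to
# the virtual screen instead of your real desktop
import pyautogui

pyautogui.click(100, 200)        # illustrative click on the hidden screen
pyautogui.typewrite("example")   # illustrative keyboard input

display.stop()

If this works at all for your software, the automation would run without stealing the user's real screen.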
I want to use the PyCharm debugger with aiohttp_devtools, but I have no idea how to run the command:
adev runserver --no-livereload
in that window (the PyCharm Run/Debug Configurations dialog).
Add a new file to run the CLI and reference that in the debug setup:
adev.py:
from aiohttp_devtools.cli import cli

if __name__ == '__main__':
    cli()
Then reference that script when setting up debugging:
With that everything worked fine for me.
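If you prefer not to type the arguments into the run configuration every time, you could also hard-code them in the script; cli is a click group, so passing it an argument list should work (this is just a variant of the same idea):

from aiohttp_devtools.cli import cli

if __name__ == '__main__':
    # equivalent to running: adev runserver --no-livereload
    cli(['runserver', '--no-livereload'])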
Context:
I've fixed the issue referenced by Andrew above (#99) and will create a new release, v0.5.0; however, that doesn't actually fix the problem here. I'll explain how to get debugging working below.
I do use PyCharm, but I steer clear of debugging and similar features: partly because (as in this case) they're generally not worth the effort, and partly so that when someone comes up with a decent open-source or paid IDE for Python I can get away from PyCharm's awful bugginess and awful customer service without much hassle.
The thing to remember when wrestling with PyCharm is that it was built by Java developers in a hurry, not Python developers, so it often deviates significantly from the Pythonic way of doing things.
For example, in this case the developer clearly hadn't heard of python -m ("run module as a script") or of virtualenv's env/bin extension to $PATH.
Thank you for raising the question.
Right now there is no easy way to do it.
The only solution is creating your own custom startup script which instantiates the dev server, like from aiohttp_devtools import cli; cli().
But I've created an issue for your needs: https://github.com/aio-libs/aiohttp-devtools/issues/99
I've managed to set up a debug configuration with 'Module name' instead of 'Script path' and the external adev.py script:
Also, you may need to set the proper 'Working directory' and PYTHONPATH (mark directories as source roots for that).
Would it be possible to create a Python module that lazily downloads and installs submodules as needed? I've worked with "subclassed" modules that mimic real modules, but I've never tried to do so with downloads involved. Is there a guaranteed directory that I can download source code and data to, which the module would then be able to use on subsequent runs?
To make this more concrete, here is the ideal behavior:
User runs pip install magic_module and the lightweight magic_module is installed to their system.
User runs the code import magic_module.alpha
The code goes to a predetermined URL, is told that there is an "alpha" subpackage, and is then given the URLs of the alpha.py and alpha.csv files.
The system downloads these files to somewhere that it knows about, and then loads the alpha module.
On subsequent runs, the user is able to take advantage of the downloaded files to skip the server trip.
At some point down the road, the user could run import magic_module.alpha; magic_module.alpha._upgrade() from the command line to clear the cache and get the latest version.
Is this possible? Is this reasonable? What kinds of problems will I run into with permissions?
Doable, certainly. The core feature will probably be import hooks. The relevant module would be importlib in Python 3.
Extending the import mechanism is needed when you want to load modules that are stored in a non-standard way. Examples include [...] modules that are loaded from a database over a network.
Convenient, probably not. The import machinery is one of the parts of Python that has seen several changes across releases. It's undergoing a full refactoring right now, with most of the existing pieces being deprecated.
Reasonable, well it's up to you. Here are some caveats I can think of:
Tricky to get right, especially if you have to support several python versions.
What about error handling? Should applications be prepared for imports to fail under normal circumstances? Should they degrade gracefully, or just crash and spew a traceback?
Security? Basically you're downloading code from somewhere; how do you ensure the connection is not being hijacked?
How about versioning? If you update some of the remote modules, how can you make the application download the correct version?
Dependencies? Pushing of security updates? Permissions management?
Summing it up, you'll have to solve most of the problems of a package manager, along with securing the downloads and handling permissions, of course. All of those issues are tricky to begin with and easy to get wrong, with dire consequences.
So with all that in mind, it really comes down to how many resources you deem worth investing in this, and what value it adds over regular use of readily available tools such as pip.
(the permission question cannot really be answered until you come up with a design for your package manager)
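To make the import-hook suggestion more concrete, here is a rough sketch using importlib's meta path machinery (Python 3). The base URL, the cache directory and the magic_module name are all assumptions for illustration, and none of the caveats above (security, versioning, error handling) are addressed:

import importlib.abc
import importlib.util
import os
import sys
import urllib.request

BASE_URL = "https://example.com/magic_module"             # assumed server
CACHE_DIR = os.path.expanduser("~/.cache/magic_module")   # assumed cache directory

class MagicFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, fullname, path, target=None):
        # only handle submodules such as "magic_module.alpha"
        if not fullname.startswith("magic_module."):
            return None
        return importlib.util.spec_from_loader(fullname, self)

    def _fetch_source(self, fullname):
        name = fullname.split(".")[-1]
        cached = os.path.join(CACHE_DIR, name + ".py")
        if not os.path.exists(cached):                    # server trip only on the first run
            os.makedirs(CACHE_DIR, exist_ok=True)
            with urllib.request.urlopen("%s/%s.py" % (BASE_URL, name)) as resp:
                data = resp.read()
            with open(cached, "wb") as f:
                f.write(data)
        with open(cached, "r", encoding="utf-8") as f:
            return f.read()

    def create_module(self, spec):
        return None                                       # use the default module object

    def exec_module(self, module):
        source = self._fetch_source(module.__name__)
        exec(compile(source, module.__name__, "exec"), module.__dict__)

sys.meta_path.append(MagicFinder())

With something like this registered from magic_module/__init__.py, import magic_module.alpha would download alpha.py on the first run and reuse the cached copy afterwards; clearing CACHE_DIR is then the crude equivalent of the _upgrade() idea.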
Here's the problem:
In one of my Django views.py files I have the following code:
from kml_generator import KML_generator

#login_required(login_url='/dev/login')
def search(request):
    if request.POST:
        result, SF = Validate(request, Activities)
        val = result.values('id')
        KML_generator(result1=val, user=request.user)
It basically imports the module kml_generator and calls the class KML_generator from it. This class generates a .kml file which is then shown on OpenLayers. It works as it should, but I want to change it.
And now:
Why, when I change code inside the module kml_generator, does it not affect the behaviour? I tried everything; I even put errors in there and it still works like a charm...
So here's the question:
How do I change it? Does Django have some kind of 'build' or 'compile' step? Do I need to invoke something for the changes to take effect?
PS. It's all running on Apache using wsgi.py
PS2. OK, that's pathetic of me, but we have an outside company which developed a nice dynamic Django website for us, and now I do not know why it doesn't work the way I thought it would.
You need to restart the Apache server for Django to pick up changes.
Python loads source files just once, when a module is first imported; the compiled bytecode is then kept in memory. At import time, Python also caches the bytecode on disk, in a .pyc file next to the original source file (or under __pycache__ in Python 3). You can verify that a new import has taken place by comparing timestamps on the .py file and its corresponding .pyc file.
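A rough way to see this from a Python shell on the server (using importlib.util.cache_from_source, so Python 3; under Python 2 the .pyc simply sits next to the .py):

import os
import importlib.util
import kml_generator

source = kml_generator.__file__                        # the .py source file
bytecode = importlib.util.cache_from_source(source)    # the cached .pyc
print(os.path.getmtime(source))    # changes as soon as you edit the file
print(os.path.getmtime(bytecode))  # only changes after a fresh import, i.e. after a restart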
A graceful restart should suffice; run apache2ctl graceful as root on your server.
In future, you may want to set up a development environment, running the same code (from a VCS, of course) but using the built-in Django development server:
python manage.py runserver
The Django development server does its best to reload code when you change it. This is a development feature only (watching files for changes costs performance).
Last but not least, try to avoid altering third-party libraries. Use subclassing or monkey-patching instead, and perhaps the upstream author would be willing to implement new features for you or accept patches. That way you don't have to maintain those changes yourself across versions either.
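For example, instead of editing kml_generator itself, a thin subclass in your own code keeps the customisation separate; the override below is purely hypothetical, since KML_generator's internals aren't shown in the question:

from kml_generator import KML_generator

class CustomKMLGenerator(KML_generator):
    """Project-specific tweaks, kept out of the vendor module."""

    def __init__(self, *args, **kwargs):
        super(CustomKMLGenerator, self).__init__(*args, **kwargs)
        # ...apply your adjustments here instead of editing kml_generator...

The view would then call CustomKMLGenerator(result1=val, user=request.user) instead.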
So I wrote a Python script which does some simple stuff. It was originally going to run on a Unix server, but due to crappy network security settings which TPTB refuse to change, we need to run it on a Windows server instead. However, the administrators of said Windows server refuse to do anything helpful, like install Python.
What are my options for running a Python script on Windows without Python?
Consideration 1:
Something like Py2Exe - I found this after a quick Google search and it seems promising. From what I can tell, it will generate a bunch of files, but we can just xcopy that directory to our Windows machine and it will be completely isolated, without any external dependencies. Does anyone have any insight into how well this works? Obviously it depends on my Python script, but fortunately this script is quite simple and only uses built-in Python libraries such as urllib2 and urlparse.
Consideration 2:
We can assume the Windows server has at least some version of the .NET Framework installed, which brings IronPython to mind. I've never used it before, but I've always wanted to. From what I can tell, it compiles Python code into CLS-compliant IL that runs natively under the .NET runtime. However, does this require additional .NET libraries to be installed on the server? Can I just bundle those DLLs with my program? Or does it require me to rewrite my Python script to call into .NET Framework-specific classes instead of using things like urllib2 or urlparse?
Thanks!
PS - The ironic part: I actually barely know Python and I'm a .NET expert, but I wrote the script in Python because I was told it would run on a Unix server. Had I known we'd end up running this on a Windows server, I'd have written the thing in C# to begin with in about 1/10th of the time. Fail.
Will they let you copy executables onto the server at all? If so, you should be able to do a non-admin installation of Python, or use Portable Python, which can just be copied into a folder without any installation at all.
There's nothing wrong with Py2exe, but it does mean you have to build the script into a fresh executable each time you update it. Also, Py2exe has a slightly longer startup time than a plain Python interpreter because it has to extract the Python DLLs into a temporary folder each time it runs; that only matters, of course, if you run your script a lot.
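If you do go the Py2exe route, the build script is short; something along these lines should do it (myscript.py is a placeholder for your actual script, matching the Python 2 era implied by urllib2/urlparse):

from distutils.core import setup
import py2exe  # importing this registers the "py2exe" command with distutils

setup(console=['myscript.py'])

Running python setup.py py2exe then produces a dist folder that you can xcopy to the Windows server.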