Well, I want to use web2py because it's pretty nice.
I just need to change the working directory to the directory where all my modules/libraries/apps are, so I can use them. I want to be able to import my real program when I use the web2py interface/applications. I need to do this instead of putting all my apps and stuff inside the web2py folder; I'm trying to give my program a web frontend without moving the program into the web2py folder. Sorry if this is hard to understand.
In any multi-threaded Python program (and not only Python), you should not call os.chdir, and you should not change sys.path, while more than one thread is running. It is not safe because it affects the other threads. Moreover, you should not call sys.path.append() in a loop, because the list grows without bound.
All web frameworks are multi-threaded, and requests are executed in a loop. Some web frameworks do not allow you to install/uninstall applications without restarting the web server, and therefore, if os.chdir/sys.path.append are only executed at startup, there is no problem.
In web2py we want to be able to install/uninstall applications without restarting the web server. We want apps to be very dynamic (for example, defining models based on information provided with the HTTP request). We want each app to have its own models folder, and we want complete separation between apps, so that if two apps need two different versions of the same module, they do not conflict with each other. So we provide APIs for this (request.folder, local_import).
You can still use the normal os.chdir and sys.path.append, but you should do it outside threads (this is not a web2py-specific issue). You can use import anywhere you like, as you would in any other Python program.
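For example, a minimal sketch of the safe pattern; the path and module name here are illustrative, not from your setup:

import sys

EXTERNAL_LIBS = '/home/me/myprogram'  # assumed location of your modules

# extend sys.path once at process startup, before any worker threads exist;
# the guard keeps repeated executions from growing the list
if EXTERNAL_LIBS not in sys.path:
    sys.path.append(EXTERNAL_LIBS)

# import myprogram  # hypothetical module, now importable from that directory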
I strongly suggest moving this discussion to the web2py mailing list.
os.chdir lets you change the process's working directory, but for your purposes (enabling imports of a bunch of modules etc. which are constrained to live in some strange place) it seems better to add the needed directories to sys.path instead.
I had to do this very thing. I have a few modules that I wanted to use from my controllers. If you want to use code that resides in the modules directory from a controller, you can use:
# use this in your controller code;
# reload=True ensures the module is reloaded
# whenever it changes
impname = local_import('module_in_modules', reload=True)
Jay
I want to find a way to expose the Python API (and possibly the underlying ObjC structure) of a macOS app called Glyphs (https://glyphsapp.com) to the outside. The app's IDE sucks and I'd love to be able to script the app from within VSCode, for instance.
Where do I start looking? I thought IPython/Jupyter might be a good place to start, but I can't find anything on how to start a kernel from within code, only from the command line.
In order for this setup to work, I would have to start whatever serves as the server in this case from within the app's own Python runtime, such as from within a module.
Any leads welcome, including creative ones.
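One possible lead, sketched under assumptions: IPython can start a kernel from inside running code via IPython.embed_kernel(), which you could call from a module the app loads, and then attach from outside with jupyter console --existing (or VSCode's Jupyter support). Whether Glyphs' embedded interpreter can import ipykernel is something you'd have to verify:

from IPython import embed_kernel

def start_kernel():
    # Blocks while serving the kernel, so call it from a context that may
    # block (or from a dedicated thread). Connect externally with:
    #   jupyter console --existing
    embed_kernel(local_ns=globals())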
I am trying to restructure a large project so that I can have one core set of code reused between a web frontend application written in Flask and an automated back end that uses Luigi tasks.
So I have two clients that I'd like to share code between, in order to access a database and perform some automated tasks from either the web or Luigi.
What is a good way to keep those three things organized and structured so that I can easily import the core modules into either project?
I've had some issues getting Luigi to recognize modules that are parallel to it.
Technically, the only requirement for reusing the code is that the Python files of the shared modules (the "core set of code") are found via the sys.path of the executing interpreter, e.g. by adding the appropriate directory to the PYTHONPATH environment variable. This enables you to use them as modules. However, you have to assess the changes to your project layout depending on your version control scheme and how you deploy the web application and clients. Usually, the following rules of thumb apply.
Manage the shared code modules as a separate project, especially if the lifecycles of the depending clients and web applications are not tied together.
Treat the shared core modules as a library or framework. They should not rely on implementation details of the web application or the clients. Usually, you need to provide some entry points and configuration hooks.
Change your deployment so that the shared core modules are pulled in as a dependency of the corresponding client/application. There is a multitude of possibilities depending on your setup and use case. For example, you could build pip-installable packages using setuptools.
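A minimal sketch of that last option; the package name, layout and dependency below are assumptions, not taken from the question:

# setup.py for the shared "core" project (hypothetical names)
from setuptools import find_packages, setup

setup(
    name='myproject-core',            # illustrative package name
    version='0.1.0',
    packages=find_packages(),         # picks up the shared packages
    install_requires=['SQLAlchemy'],  # whatever the shared DB code needs
)

During development, each client can then install it editably into its own environment with pip install -e ../core, so changes to the core are picked up without re-packaging.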
Sorry for the trivial question:
I manage multiple projects; each one is deployed to, or interacts with, one or more servers.
I maintain my main fabric.py in my home directory. Now, since projects come and go every day, is there a way to define per-project settings with Fabric (or another system), but not in the fabric.py? I'm aware of roledefs in Fabric, but I really want to separate the Fabric definitions from the server access data.
e.g.
Main fabfile
/home/fradeve/fabric.py
def deploy():
    # some commands
    pass
Project's dir
/home/fradeve/projects
|
|--> project1/.fabricrc
|--> project2/.fabricrc
\--> project3/.fabricrc
When launching a deploy for a project, it should read that project's own .fabricrc, pick up the server settings, and deploy accordingly.
The problem is that I usually work in the main project dir (project1), and starting Fabric with the fabfile outside the project dir (in home) gives me an error.
Any hints on how to manage this?
I ended up writing my own script to handle this.
Starting from the marvelous fabfile by Alexey Bezhan, I've added Vagrant support and a function that creates all the needed files for a new project (.vimrc, Vagrantfile, .gitignore, etc.).
Each project's server(s) are defined in a per-project server_config.yaml, with easily customizable options. All functions are documented in the script; I hope this will be useful for someone. My enhanced version of the script is here.
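The gist of the approach, as a hedged sketch (Fabric 1.x style; the YAML keys shown are my illustration, not the script's actual schema):

import os

import yaml
from fabric.api import env, run

def load_project_config(path='server_config.yaml'):
    # read the per-project config from the current working directory
    with open(os.path.join(os.getcwd(), path)) as fh:
        cfg = yaml.safe_load(fh)
    env.hosts = cfg['hosts']           # e.g. ['deploy@example.com']
    env.key_filename = cfg.get('key')  # optional SSH key path

def deploy():
    load_project_config()
    run('git pull')  # illustrative deploy step

Run from inside a project directory, fab deploy then targets that project's servers while the fabfile itself stays shared.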
I am used to the way PHP handles applications. For example,
c:\xampp\htdocs\app1
c:\xampp\htdocs\app2
can be accessed as
http://localhost/app1/page.php
http://localhost/app2/page.php
Things to notice:
a directory placed inside the www-root directory maps directly to the URL
when a file/directory is added/removed/changed, the worker processes seamlessly reflect that change (new files are hot-deployed).
I am on the lookout for a mature Python web framework. It's for a web API that will be deployed for multiple clients, and each copy will diverge in its customizations. Our workflow has frequent interaction/revision cycles between us and our clients, hence the "drag and drop" deployment is a must.
Which Python framework enables this? I prefer a lightweight solution (one which doesn't impose MVC, ORMs, etc.).
No mature Python framework that I'm aware of allows you to map URLs directly to Python modules, and frankly, for good reason. You can do this with CGI, but it's definitely not the recommended way to deploy Python apps. Setting that requirement aside, Flask and Bottle are both lightweight micro web frameworks with similar approaches; both can reload automatically when changes are detected (though this is only wise during development).
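For illustration, a tiny sketch with Bottle (Flask's debug mode behaves similarly); the route and port here are made up:

from bottle import route, run

@route('/app1/page')
def page():
    return 'hello from app1'

# reloader=True restarts the process when a source file changes;
# intended for development only
run(host='localhost', port=8080, reloader=True)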
There is no web framework in Python that I know of that lets you do that out of the box, but if you need it, it's not too hard to add with a bit of convention over configuration.
Simply pick your Python web framework of choice and write a wrapper around the main application that walks a directory (or set of directories) and auto-registers routes from the modules inside them. Have your modules do the same thing in their __init__.py files for the other files located with them. Then set up your WSGI code to reload automatically when the WSGI script is updated, and your deployment during development becomes a two-step process: add the file, then touch dev_app.wsgi. You could even add a real deployment option to this wrapper that walks a dev environment set up like this and generates hard-coded URL-to-function mappings for deployment.
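A rough sketch of such a wrapper, assuming a PHP-like layout of apps/<app>/<page>.py where each page module exposes a page() view function (all names here are illustrative, not any framework's convention):

import importlib.util
import pathlib

from flask import Flask

app = Flask(__name__)

def register_apps(base_dir='apps'):
    # map /app1/page to apps/app1/page.py, mirroring the PHP layout
    for path in pathlib.Path(base_dir).glob('*/*.py'):
        app_name, page_name = path.parent.name, path.stem
        spec = importlib.util.spec_from_file_location(
            f'{app_name}.{page_name}', path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        view = getattr(module, 'page', None)  # each module exposes page()
        if view is not None:
            app.add_url_rule(f'/{app_name}/{page_name}',
                             f'{app_name}.{page_name}', view)

register_apps()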
However, all of this work isn't really necessary. Python is not PHP, and the way you develop in one doesn't necessarily translate well to the other. If the client wants variable routes, use dynamic routes and give them (or yourself) an admin interface to control the mapping of content to URLs. Use flat files, SQLite, a NoSQL datastore, or the ether to store these mappings and the content. Use a template engine like Jinja2, Mako, Cheetah or Genshi to maintain your general layout. Wrap this all up in an object-oriented structure to make extending it easy (or use a functional paradigm if that comes more naturally to you). Or drop the whole dynamic-in-production portion and generate flat HTML files a la Jekyll.
CherryPy is a mature web framework that redeploys automatically when changes are detected. The file-structure-to-URL mapping isn't there, but it is a lightweight framework that doesn't impose an ORM, MVC, or even a templating engine.
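For example, a minimal CherryPy app (the Root class is illustrative); its autoreload plugin, enabled by default outside the production environment, restarts the process when a source file changes:

import cherrypy

class Root:
    @cherrypy.expose
    def index(self):
        return 'hello'

cherrypy.quickstart(Root())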
If you are used to PHP, you might want to take a look at the Apache modules mod_python or mod_wsgi (and WSGI in general if you do web development -- which is the Pythonic way).
With plain CGI, the Python interpreter is started for every request (which is roughly how PHP behaves); mod_python and mod_wsgi instead embed a persistent interpreter in Apache, but both can be set up to pick up changed code. Starting fresh for each request slows things down, but you'll always get results based on your newest code. Depending on your expected traffic numbers, this might or might not be okay for you.
BUT: if you decide to write your own framework, you most probably do not want to write a system that supports "hot-deploying". Even though the reload() function is built-in, it takes more than just that, and it will get you into a world of pain.
We currently run a small shared hosting service for a couple of hundred small PHP sites on our servers. We'd like to offer Python support too, but from our initial research at least, a server restart seems to be required after each source code change.
Is this really the case? If so, we're just not going to be able to offer Python hosting support. Giving our clients the ability to upload files is easy, but we can't have them restart the (shared) server process!
PHP is easy -- you upload a new version of a file, the new version is run.
I've a lot of respect for the Python language and community, so find it hard to believe that it really requires such a crazy process to update a site's code. Please tell me I'm wrong! :-)
Python is a compiled language; the compiled byte code is cached by the Python process for later use, to improve performance. PHP, by default, is interpreted. It's a tradeoff between usability and speed.
If you're using a standard WSGI module, such as Apache's mod_wsgi, then you don't have to restart the server -- just touch the .wsgi file and the code will be reloaded. If you're using some weird server which doesn't support WSGI, you're sort of on your own usability-wise.
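For reference, a .wsgi script is just a Python file defining application; a minimal sketch (mod_wsgi in daemon mode reloads the app when this file's timestamp changes, e.g. via touch app.wsgi):

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from WSGI\n']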
It depends on how you deploy the Python application. If it is a pure-Python CGI script, no restarts are necessary (though this is not advised at all, because it will be super slow). If you are using mod_wsgi in Apache, there are valid ways of reloading the source. mod_python apparently has some support for module reloading, with accompanying issues.
There are ways other than Apache to host a Python application, including the CherryPy server, Paste Server, Zope, Twisted, and Tornado.
However, unless you have a specific reason not to use it (and since you are presumably coming from an Apache/PHP shop), I would highly recommend mod_wsgi on Apache. I know that Django recommends mod_wsgi on Apache, and most of the other major Python frameworks will work on mod_wsgi.
Is this really the case?
It depends. Code reloading is highly specific to the hosting solution. Most servers provide some way to automatically reload the WSGI script itself, but there's no standardisation; indeed, the question of how a WSGI application object is connected to a web server at all differs widely across hosting environments. (You can just about make a single script file that works as deployment glue for CGI, mod_wsgi, Passenger and ISAPI_WSGI, but it's not wholly trivial.)
What Python really struggles with, though, is module reloading, which is problematic for WSGI applications because any non-trivial webapp will encapsulate its functionality in modules and packages rather than simple standalone scripts. It turns out reloading modules is quite tricky, because if you reload() them one by one, they can easily end up holding references to stale versions of each other. Ideally the way forward would be to restart the whole Python interpreter whenever any file is updated, but in practice some C extensions don't take kindly to this, so it isn't generally done.
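A small self-contained demo of that stale-reference problem (the file name is illustrative):

import importlib
import pathlib
import sys

sys.path.insert(0, '.')  # make the current directory importable
pathlib.Path('mymod.py').write_text('class Widget:\n    pass\n')
import mymod

w = mymod.Widget()
importlib.reload(mymod)             # re-executes mymod.py, making a new class
print(isinstance(w, mymod.Widget))  # False: w still belongs to the old Widget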
There are workarounds that reload a group of modules at once, which can reliably update an application when one of its modules is touched. I use a deployment module that does this (which I haven't got around to publishing, but I can chuck you a copy if you're interested) and it works great for my own webapps. But you do need a little discipline to make sure you don't accidentally leave references to your old modules' objects in other modules you aren't reloading; if you're talking about loads of sites written by third parties whose code may be leaky, this might not be ideal.
In that case you might want to look at something like running mod_wsgi in daemon mode with a separate application group for each party and process-level reloading enabled, and touching the WSGI script file whenever you've updated any of the modules.
You're right to complain; this (and many other WSGI deployment issues) could do with some standardisation help.