We have a few Zope & Plone projects in our company, and until today I was the only developer, making all changes through the ZMI or ZopeEdit. Our company is growing, so I need to start cooperating with other developers who can help me develop features and fix bugs in the projects. This means it is no longer possible to use the ZMI: every developer needs to make and test their own changes without affecting the others' work, and merge their changes into the production environment via git.
I need to move development to git, which means I need to start tracking all of the portals' files and settings in git.
I think I need to move the whole projects from the ZODB/ZMI (including templates, scripts, SQL methods, and properties such as portal_properties or portal_javascripts) to the filesystem and run git on that filesystem. In the next step, every developer can install their own clean Plone instance, pull the source code and settings from git, create their own branch, make changes, test, commit, push, get a code review, and so on.
My question is: is there any way to do this and start a well-known, rapid development process using git? Does the ZODB support something like a "live migration" of content/settings to and from the filesystem? Is there any way to tell Zope to load a folder of content/settings from the filesystem instead of only from the ZODB?
I know there is something called eggs, but is it possible to move all the types of files mentioned above into a separate egg?
Thank you for your help.
The way your company has been working until now is the old way of Plone development, but it is a deprecated and discouraged way to work.
Nowadays the ZMI can still be used for quick and dirty fixes, but such changes stored in the database should be removed (and moved to real code) as soon as possible. Filesystem-based development was already possible with Plone 2.0!
More importantly, every new Plone release tends to reduce the ZMI's powers (for example, up to Plone 2.1 you could do a lot of things from the ZMI; starting with Plone 2.5, some UI elements could no longer be modified TTW).
So the answer to your question is "yes": Plone can (and should) read code from the filesystem, and this code can be stored in a VCS (git, svn, ...).
All of this information can be found in the Plone Developer Manual.
Creating a new package for a modern Plone? Use mr.bob.
Automatically integrating a VCS into your buildout? Use mr.developer.
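As a rough sketch, a buildout that checks a policy package out from git via mr.developer could look something like the following (the package name, git URL, and recipe options are placeholders, not something taken from your project):

    [buildout]
    extensions = mr.developer
    parts = instance
    sources = sources
    auto-checkout = *

    [sources]
    my.policy = git https://example.org/my.policy.git

    [instance]
    recipe = plone.recipe.zope2instance
    user = admin:admin
    eggs =
        Plone
        my.policy

With a setup like this, mr.developer clones the package during buildout and the checkout becomes a normal git working copy you can branch, commit, and push from.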
If you are starting today from a project you have been developing through the ZMI, you will probably have to move the code from the ZMI to the filesystem.
This can be done manually; it's simpler since you are already using Zope External Editor.
There is also a very old add-on (Plone Skin Dump) for flushing skin content to the filesystem, but I fear it won't work on recent Plone versions, and it did not support some things like SQL methods (if you are using them).
You can have a look at http://docs.plone.org/develop/
There you can find out how to create a package (egg). Its source code can be added to git, and you can check out your git repository during buildout using mr.developer: https://pypi.python.org/pypi/mr.developer/
You can use mr.bob to create a Plone add-on. For MySQL you can use MySQL-python. Add these packages to your buildout in the eggs section.
Then you can write your own MySQL statements in the add-on. I know reimplementation is expensive, but it gives you more control in the future.
I have several small Python libraries that I wrote with stuff that I find myself wanting over and over again. I think most programmers have something similar. I want to use these libraries from a variety of different machines so I've started keeping this stuff in my DropBox. However, I'd like to be able to use my code on machines on which I can't install DropBox or other cloud storage applications, even in portable form. I can just download the files every time one of them changes (DropBox can provide me a URL for each file in my Public folder), which is only a moderate nuisance. But--and I admit this is a longshot--is there a solution out there that will let me tell Python to load a library from my DropBox via http?
BTW, I'd like to add the whole remote folder to my sys.path, but getting a URL for a folder is complicated, so I'm going to try to walk before I run by starting with individual files.
Yes, it's possible. I think you want the combination of two previous questions:
How to download a file in python over HTTP
How to dynamically load a library in python
So your task basically breaks down into writing a little bit of glue code: download the URL via the first bullet, write it to a local file, and then import that file using the second bullet.
So that's how you'd do that.
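As a minimal sketch of that glue code (the URL, module name, and temporary path below are placeholders, and this assumes Python 3's importlib; on Python 2 you would use urllib2 and imp instead):

    import importlib.util
    import urllib.request

    def import_from_url(url, module_name, local_path):
        # Download the file (the first question above) ...
        urllib.request.urlretrieve(url, local_path)
        # ... then load it as a module from the local copy (the second question).
        spec = importlib.util.spec_from_file_location(module_name, local_path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    # Hypothetical usage:
    # mylib = import_from_url("https://dl.dropboxusercontent.com/.../mylib.py",
    #                         "mylib", "/tmp/mylib.py")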
BUT - please keep in mind that dynamically downloading and executing code has many potential security pitfalls. Will you be doing this over a secure connection? Who else has the ability to manipulate that URL? There are a bunch of security issues inherent in downloading and executing code on the fly. I would ask you to consider going about your solution in a different way, but I'm giving you the answer you're asking for.
As a simple security check, you can establish a known-good hash for your file, and then refuse to import any file other than one that's on the list of known-good hashes. This makes it a pain to update your modules, but gives you a little bit of extra safety.
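A sketch of that check, assuming SHA-256 digests and placeholder values (download the file first, verify it, and only then import it):

    import hashlib

    # Hypothetical allow-list of known-good digests, maintained by hand.
    KNOWN_GOOD_HASHES = {
        "replace-with-a-real-sha256-digest",
    }

    def is_trusted(local_path):
        with open(local_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in KNOWN_GOOD_HASHES

    # Check the downloaded copy with is_trusted() before importing it.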
Don't use Dropbox as a revision control system
Pick a real solution like Git
Set up access to the Git repository on one of your servers
Clone the repository to your worker machines and checkout master
Create a develop branch where you put every change you make
Test the changes and when you consider any of them stable, merge it to master
On your worker machines, set up a cron job that periodically pulls from the master branch of the repository (and possibly restarts some Python processes, since importing the same module again won't make the interpreter pick up changes; imported modules are cached) - a minimal sketch of such an update script follows this list
Enjoy your automatically updated workers :)
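A minimal sketch of the update script the cron job could run (the repository path, branch, and restart command are assumptions for illustration only):

    #!/usr/bin/env python
    import subprocess

    REPO_DIR = "/srv/myproject"                             # hypothetical checkout location
    RESTART = ["systemctl", "restart", "myproject-worker"]  # hypothetical worker service

    def update_worker():
        # Fetch and fast-forward the local master branch to match the remote.
        subprocess.check_call(["git", "-C", REPO_DIR, "fetch", "origin"])
        subprocess.check_call(["git", "-C", REPO_DIR, "merge", "--ff-only", "origin/master"])
        # Restart so the interpreter re-imports the code (cached modules otherwise stay stale).
        subprocess.check_call(RESTART)

    if __name__ == "__main__":
        update_worker()

Schedule it from cron (for example every 15 minutes) and the workers stay in sync with master without manual intervention.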
Don't feel ashamed - it happens that even experienced software developers end up with an XY problem
As a fledgling Django developer, I was wondering if it was customary, or indeed possible, to create a site with Django and then transfer the complete file structure to a different machine where it would "go live".
Thanks,
~Caitlin
You could use Git, Mercurial, or another version control system to put the site structure on a central server. After that you could deploy the site to multiple servers, for example with Fabric. For the deployment process you should also consider using virtualenv to isolate the project from the global Python packages and requirements.
Of course that's possible, and in fact it's the only way to "go live". You don't want to develop on your live server, do you? And this is true for any platform, not just Django.
If I understood your question correctly, you need a system to push your development code to the live site.
Use a version control system: git, svn, mercurial etc.
Identify environment specific code like setting/config files etc. and have separate instances of them for each environment.
Create a testing/staging/pre-production environment which has live (or live-like) data, and deploy your code there before pushing it to live.
To avoid downtime during the deployment process, a symbolic link is usually created that points to the current code folder. When a new release is to be pushed, a new folder is created with the new code; once all the other dependencies are in place (such as settings and database changes), the symlink is pointed to the new folder.
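A minimal sketch of that symlink switch, assuming a hypothetical layout with one folder per release and a 'current' link that the web server points at:

    import os

    RELEASES_DIR = "/srv/myapp/releases"   # hypothetical: one sub-folder per release
    CURRENT_LINK = "/srv/myapp/current"    # the path the web server actually serves

    def activate_release(release_name):
        target = os.path.join(RELEASES_DIR, release_name)
        tmp_link = CURRENT_LINK + ".tmp"
        # Create the new link beside the old one, then rename it into place;
        # the rename is atomic on the same filesystem, so there is no moment
        # where 'current' points nowhere.
        if os.path.lexists(tmp_link):
            os.remove(tmp_link)
        os.symlink(target, tmp_link)
        os.replace(tmp_link, CURRENT_LINK)

    # activate_release("release-42")   # run after code, settings and DB changes are ready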
I'm a freelance editor and tutor as well as a fiction writer and artist looking to transition to the latter on a full-time basis. Naturally, part of that transition involves constructing a website; a dynamic site to which new content in various forms can be added with ease. Now, I've always intended to learn how to program, and I simply haven't the money to hire someone else to do it. So, having had a good experience with my brief dabblings in Python, I decided I'd go with Django for building my site.
I set up a Fedora Virtualbox for a development environment (as I didn't want to jump through hoops to make Windows work) and went to town on some Django tutorials. Everything went swimmingly until life intervened and I didn't touch the project for three weeks. I'm in a position to return to it now, but I've realized two things in the process. First, I'm having to do a fair bit of retracing of my steps just to find where certain files are, and second, I don't know how I'd go about deploying the site after I'm done building it. My intention is to get the cheapest Linode and host off that until some theoretical point in the future where I required more.
I suspect that re: the file organization issue, that's just something I'll become more familiar with over time, though if there are any tricks I should be aware of to simplify the structure of my overall Django development space, I'm eager to know them. However, what about deployment? How viable is it to, with sufficient knowledge, automate the process of pushing the whole file structure of a site with Git? And how can I do that in such a way that it doesn't tamper with the settings of my development environment?
As a Django developer, I can assure you that it grows on you, and the development environment becomes easier to understand.
You should remember that settings.py is probably where most of your attention will go at the start; the good part is that it's a one-time effort. After you have it up and running, you'll only touch settings.py to add new modules or change some configuration, and even that is unlikely.
I believe there are hosts that integrate with git, so that should not be a problem: you will probably just git clone your project's URL onto the host (and don't forget to enable/configure WSGI).
To keep settings.py out of the mess, tell git to stop tracking the file (while keeping it on disk) with git rm --cached settings.py; then, when you stage files for a commit, use git add -u so that only already-tracked files are included.
I'm not sure if I was clear enough (probably not), but I hope I could help you in some way.
I have just started to learn Python and web2py. Because web2py development happens through a web interface, I am wondering how web2py can work with svn. If a team wants to build a website, how do they work together? How do they manage iterations of the source code?
Yes, it works fine with svn, hg, whatever source control you need to use.
Sometimes people think that you have to code through web2py's admin interface, but that really is not the case. Once you realize it can be edited with any of your regular tools, you will see that you don't have to treat it any differently when it comes to source control either.
If you use the source version of web2py, you'll have a single folder on disk that contains an entire web2py application server (that in turn contains your 'application' folders). Just check that whole folder into source control.
Now, on the machine that is running web2py, you can make changes either with web2py's web interface, or by just editing the python files directly with another editor (I use WingIDE for example). You'll have the normal svn update/modify/commit cycle at this point.
If multiple people are editing code using web2py's admin interface, all of their changes will be made on the machine running web2py... just periodically do a commit from that system and you are all set.
Using the admin interface to modify the source code is convenient, but for bigger changes, each member of your team should have their own local copy of the svn branch. They make changes to their local files and commit them. Then, from the server running web2py, just do an 'svn up' to get the modifications from the rest of the team.
I have a directory of Python programs, classes and packages that I currently distribute to 5 servers. It seems I'm continually going to be adding more servers, and right now I'm just doing a basic rsync from my local box to the servers.
What would a better approach be for distributing code across n servers?
thanks
I use Mercurial with Fabric to deploy all the source code. Fabric is written in Python, so it'll be easy for you to get started. Updating the production service is as simple as fab production deploy, which ends up doing something like this:
Shut down all the services and put up an "Upgrade in Progress" page.
Update the source code directory.
Run all migrations.
Start up all services.
It's pretty awesome seeing this all happen automatically.
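A rough fabfile sketch of those four steps, in the Fabric 1.x style that matches fab production deploy (the host names, paths, and service/migration commands are placeholders, not the actual setup):

    from fabric.api import cd, env, run, task

    @task
    def production():
        # Select the production hosts; 'fab production deploy' runs both tasks in order.
        env.hosts = ["app1.example.com", "app2.example.com"]

    @task
    def deploy():
        with cd("/srv/myproject"):
            run("service myproject stop")     # 1. stop services, show the upgrade page
            run("hg pull && hg update")       # 2. update the source code directory
            run("python manage.py migrate")   # 3. run migrations (project-specific command)
            run("service myproject start")    # 4. start the services again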
First, make sure to keep all code under revision control (if you're not already doing that), so that you can check out new versions of the code from a repository instead of having to copy it to the servers from your workstation.
With revision control in place you can use a tool such as Capistrano to automatically check out the code on each server without having to log in to each machine and do a manual checkout.
With such a setup, deploying a new version to all servers can be as simple as running
$ cap deploy
from your local machine.
While I also use version control to do this, another approach you might consider is to package up the source using whatever package management your host systems use (for example RPMs or dpkgs), and set up the systems to use a custom repository. Then an "apt-get upgrade" or "yum update" will update the software on the systems, and you could use something like "mussh" to run the stop/update/start commands on all the machines.
Ideally, you'd push it to a "testing" repository first and have your staging systems install it; once that testing has been signed off, you could move it to the production repository.
It's very similar to the recommendations of using fabric or version control in general, just another alternative which may suit some people better.
The downside to using packages is that you're probably using version control anyway, and you do have to manage version numbers of these packages. I do this using revision tags within my version control, so I could just as easily do an "svn update" or similar on the destination systems.
In either case, you may need to consider the migration from one version to the next. If a user loads a page that contains references to other elements, and you do the update and those elements go away, what do you do? You may want to handle this either in your deployment scripting or in your code: first push out a version with the new page that still keeps the old referenced elements, deploy that, and then remove the referenced elements in a later deployment.
In this way users won't see broken elements within the page.