Separate folders for development and production? - python

Say I’m working on a complex Python program for keeping track of shipment data along with Git for version control. On my local machine, is it best to create two different folder structures, one being the develop folder and the other being the production folder?
So I would push my final (develop) changes to the master branch, then pull those changes into the production folder?

I like the idea of keeping separate folders in order to prevent mistakes, but I think it totally depends on the standards/conventions of your group/environment. If you communicate this approach to them and they approve of it, then I would recommend it. But version control already lets you access remote versions from anywhere, and by working in a single folder location you save space on your machine.

Related

How do I edit code on multiple computers using github? Is there a way to have multiple paths in one project?

So essentially I have a project on GitHub that I edit on both my personal computer and my school laptop. The problem is that every time I commit an update I have to change the path. Is there any way to have two simultaneous Python paths, for example in the JSON file? Maybe some sort of gitignore file or something (I'm not that good with GitHub or Git).
By the way, the project is in Python 3.
There is an app called GitHub Desktop that lets you keep a local copy of the repository you're working on; when you're done, you can commit your changes and push them, which submits them to the cloud. You need to do this on every computer you want to work on your project from. Also, you can watch this to see how to set it up!
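Picking up the gitignore idea from the question: one possible approach (a sketch, not the only way) is to keep the machine-specific path in a small JSON file that each computer has its own untracked copy of. The file name and key below are made up for illustration:

    # Keep per-machine paths in local_config.json (hypothetical name) and list
    # that file in .gitignore, so commits never touch it on either computer.
    import json
    from pathlib import Path

    CONFIG_FILE = Path(__file__).parent / "local_config.json"  # assumed name

    def load_data_dir() -> Path:
        """Read the per-machine data path, falling back to a relative default."""
        if CONFIG_FILE.exists():
            with CONFIG_FILE.open() as f:
                return Path(json.load(f)["data_dir"])
        # Fallback: a path relative to this script works on any machine.
        return Path(__file__).parent / "data"

    print(load_data_dir())

Each machine writes its own local_config.json once, and the committed code never needs to change when you switch computers.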

Externalized configurations in python microservices

The Current State:
I have a non-negligible number of microservices written in Python.
Each such microservice has its own YAML configuration file that is located in the git repo. We use dynaconf to read the configuration.
The Problem:
At first it was fine: the configurations were relatively small and easy to maintain. Time went by and the configurations grew larger. It became annoying to change them, and it is bad that they are scattered across different git repos, i.e. not centralized.
I want to use "Externalized Configuration" so that all the configurations are maintained in a single repo and each microservice reads its portion on startup. I have heard about Spring Boot, but it seems to be way too much, and apart from that, the related pip libraries seem to still be in beta, new and unreliable...
Is there another recommendation for this particular use case? Or should I proceed with Spring Boot?
You can use Microconfig.IO to manage your configuration with powerful templating and distribution. It's language agnostic as long as your configs are YAML or properties format.
I think you could create a default config file for each service and use the external storage feature of dynaconf.
One possible solution would be to create a simple system to manage these variables with Redis (a rough sketch follows below).
And the Dynaconf CLI allows you to make changes on the fly.
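Here is a rough sketch of that "default file per service plus central Redis overrides" idea, written with plain redis-py and PyYAML rather than dynaconf's own Redis loader; the host name and key scheme are assumptions:

    # Load per-service defaults from a local YAML file, then apply centralized
    # overrides stored in Redis under a per-service hash.
    import redis
    import yaml

    def load_settings(service_name: str) -> dict:
        # 1. Per-service defaults kept in the service's own repo.
        with open("settings.yaml") as f:
            settings = yaml.safe_load(f) or {}

        # 2. Centralized overrides from a shared Redis instance.
        r = redis.Redis(host="config-redis", port=6379, decode_responses=True)
        overrides = r.hgetall(f"config:{service_name}")  # hypothetical key scheme
        settings.update(overrides)
        return settings

    settings = load_settings("shipping-service")

The defaults stay versioned next to the code, while the central store only holds the values that actually differ per environment.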

How to deploy a Django/Tornado based web app that's built with platter?

This question is mostly about the technical details + some best practices of how to efficiently deploy a python web app that's built using platter.
Taking Django for instance, I have a project that's already built into a tarball distribution. This includes all wheels of all deps + the package of the app itself.
My repo directory also contains some other files that need to be distributed with the deployed code, such as: manage.py, a fabfile package with fabric utils, and some configuration files (for supervisor, nginx, etc).
So my questions are:
How can I wrap these extra files into the distribution that contains the project?
If I simply use git to clone/pull the project on the server I have these files, but then I have a duplicate of the source code, both in the project and zipped in the tarball. How can I avoid that? By committing the tarball into a separate repo?
Perhaps the duplication is not so bad, and I'll end up with multiple tarballs in my dist/ directory with only one symlinked as the current one from which I deploy?
Same goes for a Tornado based app.
My first rule of deployment is "whatever works". Every production environment has different requirements. But to give opinions on your questions:
Not everything should be in your Python project. Perhaps there is a way to do it, but I think it's using the wrong hammer.
You can create a separate Git repo that handles configuration and asset files for your production deployment (this does not even need to be managed by Git if you don't care about old, irrelevant configuration files). This does not have to be a Python project, just the files for the production deployment. You may optionally put a Python script or two in here (or just a README.txt, fab files, or a Buildout config) to automate tasks such as unpacking your platter or copying config files around.
It's tempting (and possible) to put production config things in your main Git repo. This is even suggested by apps that create boilerplate files for development and production configuration. This doesn't mean it's the best way to do things though.
My rule is that the main Git repo is "development only". It's cloned by developers who are setting up and working in development environments. It conflates things far too much for a single Python project to try to be both an application and a place to manage a production system, IMHO.
Production is managed separately. Sometimes by people different from the developers or at least the developer is wearing a different hat when thinking about a production deployment. This way you can also have a small, clean repo that tracks just changes to your production system.
Playing with symlinks within a single deployment that represents different builds is an extra layer of confusion. And the impetus to do so comes from trying to do everything from a single Python project.
Deploy your Python application to something like /var/myapp/build-2015-10-29/. Then create a symlink at /var/myapp/current/ that points to this location. This way you can create a full deployment at /var/myapp/build-2015-11-05/, tweak the config to start on a separate port, bring the app up and ensure everything works, then just switch the symlink from the old build to the new build with minimal downtime.
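A minimal sketch of that symlink switch, assuming the /var/myapp/ layout above; os.replace() keeps the final swap atomic on POSIX so the "current" link never goes missing:

    import os

    def activate_build(build_dir: str, current_link: str = "/var/myapp/current") -> None:
        tmp_link = current_link + ".tmp"
        if os.path.lexists(tmp_link):
            os.remove(tmp_link)
        os.symlink(build_dir, tmp_link)     # create the new link under a temp name
        os.replace(tmp_link, current_link)  # atomically point "current" at the new build

    activate_build("/var/myapp/build-2015-11-05")

Rolling back is then just a matter of pointing the link at the previous build directory.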

Loading Python libraries via http

I have several small Python libraries that I wrote with stuff that I find myself wanting over and over again. I think most programmers have something similar. I want to use these libraries from a variety of different machines so I've started keeping this stuff in my DropBox. However, I'd like to be able to use my code on machines on which I can't install DropBox or other cloud storage applications, even in portable form. I can just download the files every time one of them changes (DropBox can provide me a URL for each file in my Public folder), which is only a moderate nuisance. But--and I admit this is a longshot--is there a solution out there that will let me tell Python to load a library from my DropBox via http?
BTW, I'd like to add the whole remote folder to my sys.path, but getting a URL for a folder is complicated, so I'm going to try to walk before I run by starting with individual files.
Yes, it's possible. I think you want the combination of two previous questions:
How to download a file in python over HTTP
How to dynamically load a library in python
So your task basically breaks down into writing a little bit of glue code: download the URL via the first bullet, write it to a local file, and then import that file using the second bullet.
So that's how you'd do that.
BUT - please keep in mind that dynamically downloading and executing code has many potential security pitfalls. Will you be doing this over a secure connection? Who else has the ability to manipulate that URL? There are a bunch of security issues inherent in downloading and executing code on the fly. I would ask you to consider going about your solution in a different way, but I'm giving you the answer you're asking for.
As a simple security check, you can establish a known-good hash for your file, and then refuse to import any file other than one that's on the list of known-good hashes. This makes it a pain to update your modules, but gives you a little bit of extra safety.
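Putting those pieces together, here is a minimal sketch of the download/verify/import glue code; the URL, hash, and temp path are placeholders:

    # Fetch the module over HTTP(S), verify it against a known-good SHA-256
    # hash, write it to disk, and import it from that file.
    import hashlib
    import importlib.util
    import urllib.request

    URL = "https://example.com/public/mylib.py"   # placeholder URL
    KNOWN_GOOD_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def fetch_and_import(url: str, expected_hash: str, name: str = "mylib"):
        data = urllib.request.urlopen(url).read()
        digest = hashlib.sha256(data).hexdigest()
        if digest != expected_hash:
            raise RuntimeError(f"Refusing to import {url}: hash {digest} is not known-good")

        path = f"/tmp/{name}.py"                  # placeholder location
        with open(path, "wb") as f:
            f.write(data)

        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    mylib = fetch_and_import(URL, KNOWN_GOOD_SHA256)

Updating a module then means recomputing its hash and committing the new value alongside the code that imports it.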
Don't use DropBox as a revision control system.
Pick a real solution like Git.
Set up access to the Git repository on one of your servers.
Clone the repository to your worker machines and check out master.
Create a develop branch where you put every change you make.
Test the changes and, when you consider any of them stable, merge them to master.
On your worker machines set up a cron job which periodically pulls from the master branch of the repository (and possibly restarts some Python processes, since importing the same module again won't make the Python interpreter aware of changes because imported modules are cached; see the reload sketch below).
Enjoy your automatically updated workers :)
Don't feel ashamed - it happens that even experienced software developers come up with an XY problem.
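To illustrate the caching caveat from the cron-job step above: re-importing after a pull is not enough, the module has to be reloaded (or the process restarted). A minimal sketch, with a made-up module name:

    import importlib

    import mylib  # hypothetical module; the first import caches it

    def refresh_after_pull():
        # A plain "import mylib" here would hit the cache and keep the old code;
        # reload() re-executes the module's source so the pulled changes apply.
        importlib.reload(mylib)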

Creating and transferring a site with Django

As a fledgling Django developer, I was wondering if it was customary, or indeed possible, to create a site with Django then transfer the complete file structure to a different machine where it would "go live".
Thanks,
~Caitlin
You could use Git or Mercurial, or another version control system, to put the site structure on a central server. After that you could deploy the site to multiple servers, for example with Fabric. For the deployment process you should also consider using virtualenv to isolate the project from global Python packages and requirements.
Of course that's possible, and in fact it's the only way to "go live". You don't want to develop on your live server, do you? And it's true for any platform, not just Django.
If I understood your question correctly, you need a system to push your development code to live.
Use a version control system: Git, SVN, Mercurial, etc.
Identify environment-specific code, like settings/config files, and keep a separate instance of them for each environment (see the settings sketch after this list).
Create a testing/staging/pre-production environment which has live data or live-like data, and deploy your code there before pushing it to live.
To avoid downtime during the deployment process, a symbolic link is usually created that points to the existing code folder. When a new release is to be pushed, a new folder is created with the new code; once all other dependencies are in place (like settings and database changes), the symlink is pointed to the new folder.
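For the environment-specific settings step, one common Django pattern is a shared base settings module plus a thin override module per environment, selected with DJANGO_SETTINGS_MODULE. The project name and layout below are assumptions for illustration:

    # myproject/settings/production.py  (hypothetical layout: base.py holds the
    # shared settings, and each environment gets a thin override module)
    from .base import *  # noqa: F401,F403 - start from the shared defaults

    DEBUG = False
    ALLOWED_HOSTS = ["www.example.com"]  # placeholder domain

    # The live server then selects this module via an environment variable:
    #   export DJANGO_SETTINGS_MODULE=myproject.settings.production

The development machine keeps its own override module, so nothing environment-specific has to change when the code is transferred to the live server.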
