I commit every time I make some changes that I think might work: I don't do extensive testing before a commit. Also, my commits will soon be automatically pushed to a remote repository. (I'm the only developer, and I have to add features or rewrite parts of the code many times a day.)
I'd like to set up a remote computer to run regression tests automatically whenever I commit anything; and then email me back the differences report.
What's the easiest way to set this up?
All my code is in Python 3. My own system is Windows 7, ActiveState Python, TortoiseHG, and Wing IDE. I can set up the remote computer as either Linux or Windows. The application is all command-line, with text input and output.
Use a continuous integration server such as Buildbot or Jenkins, configure it to monitor the repository, and have it run the tests on each change. Buildbot is written in Python, so you should feel right at home with it.
If you feel it's wasteful to make Buildbot or Jenkins poll the repository (even though hg pull uses very few resources when there are no new changesets), then you can configure a changegroup hook in the repository to trigger a build in the CI server.
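For illustration, here is a minimal sketch of such a hook as an in-process Mercurial hook written in Python (this assumes a Mercurial built on Python 3 and a Jenkins job named "regression-tests" with "Trigger builds remotely" enabled; the URL and token are placeholders):

# ci_hook.py -- illustrative sketch only
import urllib.request

def trigger_ci(ui, repo, hooktype, node=None, source=None, **kwargs):
    """Ask the CI server to start a build after new changesets arrive."""
    url = "http://ci.example.com:8080/job/regression-tests/build?token=SECRET"
    ui.status(b"notifying CI server...\n")
    urllib.request.urlopen(url, timeout=10)
    return False  # a falsy return value means the hook did not fail

Enable it in the served repository's .hg/hgrc:

[hooks]
changegroup.notify_ci = python:/path/to/ci_hook.py:trigger_ci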
I would recommend setting up Buildbot. You can have it watch a remote repository (Mercurial is supported) and automatically kick off a build when the repository changes. In your case, a build would just be running your test suite.
Its waterfall display allows you to see which builds failed and when, in relation to commits from the repository. It can even notify you, with the offending commit, when something breaks.
Jenkins is another option, supporting most of the same features. There are even cloud hosting options, such as ShiningPanda, which can host it for you and offers free licensing for open-source projects.
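To make the Buildbot suggestion concrete, here is a minimal sketch of the relevant fragments of a master.cfg, written against the Buildbot 2.x API (the reporter configuration changed in later releases); the repository URL, worker name, addresses, and test command are placeholders:

# master.cfg (fragment) -- illustrative sketch only
from buildbot.plugins import changes, schedulers, steps, util, worker, reporters

c = BuildmasterConfig = {}

c['workers'] = [worker.Worker('worker1', 'secret')]

# Poll the Mercurial repository for new changesets.
c['change_source'] = [changes.HgPoller(
    repourl='https://hg.example.com/myproject',
    branch='default',
    pollInterval=60)]

# Kick off the "regression" builder whenever a change arrives.
c['schedulers'] = [schedulers.SingleBranchScheduler(
    name='on-commit',
    change_filter=util.ChangeFilter(branch='default'),
    builderNames=['regression'])]

# The build is just: check out the code and run the test suite.
factory = util.BuildFactory([
    steps.Mercurial(repourl='https://hg.example.com/myproject',
                    mode='incremental'),
    steps.ShellCommand(command=['python', '-m', 'pytest', '-q']),
])
c['builders'] = [util.BuilderConfig(name='regression',
                                    workernames=['worker1'],
                                    factory=factory)]

# Mail the results back to the developer.
c['services'] = [reporters.MailNotifier(
    fromaddr='buildbot@example.com',
    extraRecipients=['me@example.com'],
    sendToInterestedUsers=False)]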
I'm currently using the PyCharm IDE to learn Python. I am not sure how to sync my files to GitHub automatically. To be precise, I want my code to sync to my GitHub repo automatically as I type. I want the file to live on GitHub while I edit it through my IDE.
Is there any solution for this to happen?
Regards,
Kausik
That is not how git (or GitHub) works. Version control systems are designed to capture milestones in your project. I think you're confusing git with cloud file-management services (e.g., Dropbox or Google Drive). If you need something that syncs your files with every "save" you make to a file, then services like Dropbox are what you're looking for.
However, version control systems (e.g., git) are much better suited for code management if you adjust your workflow to follow how they were intended to be used. In PyCharm, after each milestone (e.g., a bug fix or a new feature implementation) you would do the following:
Stage changed files by checking them.
Commit the changes by adding a commit message.
Push changes to the remote repository.
All of these can be found in the Commit window in PyCharm (View >> Tool Windows >> Commit), and all three steps can be done in one click. The command-line equivalent is sketched below.
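For reference, the command-line equivalent of those three steps is simply (remote and branch names are assumptions):

git add .
git commit -m "Describe the milestone"
git push origin main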
One last thing: if your goal is to collaborate with someone else in real time, PyCharm has a newer feature called "Code With Me" (Tools >> Code With Me ...). I don't know whether it is available for free, but the idea is that you invite collaborators and change the code base together in real time, and eventually push the changes to the remote repository.
For the last six months I have been working on a Python GUI application that I will use at work. Specifically, my GUI will run on a couple of supercomputer clusters that I use for work.
However, I mostly develop the software on my personal computer, where I do not have direct access to the commands my GUI calls, since the GUI uses subprocess to run commands that are only available on the computing cluster.
So, in order to efficiently develop the program, I often have to copy the directory containing all files related to the GUI, to the cluster. Then I test my current version there, locate all my bugs, fix them by editing the files on the cluster, and finally copy back all files to my computer, overwriting the old version.
This just seems like a bad way of doing it, but I have to be able to test my software in the environment it is made for in order to find my bugs.
Surely this is a common problem in software development... What do actual programmers do (as opposed to hobby programmers such as myself)?
Edit:
Examples of commands that are only available on the computing cluster, that I make heavy use of, are squeue, sacct, and scontrol (SLURM related commands).
Edit2:
I should mention that I tried using SSH connections with Python, but it slowed the commands down significantly, since an SSH connection had to be established for each command I wanted to run. Unless I can set up a lasting SSH session (i.e., log in once when my program starts), I don't think the SSH approach will work.
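(For what it's worth, the per-command cost comes from re-establishing the connection; a single long-lived connection can be reused for many commands. A minimal sketch using the third-party paramiko library; the host name, user name, and job id are placeholders:)

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("cluster.example.com", username="myuser")  # one handshake at startup

def run(cmd):
    """Run a command over the already-open connection and return its output."""
    _stdin, stdout, _stderr = client.exec_command(cmd)
    return stdout.read().decode()

print(run("squeue -u myuser"))  # each call reuses the same connection
print(run("sacct -j 12345"))
client.close()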
Explore the concepts that make Vagrant a popular choice for developers; its documentation describes it as follows:
Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the "works on my machine" excuse a relic of the past.
Your use case is covered by a couple of Vagrant boxes that create a SLURM cluster for development purposes. A good starting point might be:
Example slurm cluster on your laptop (multiple VMs via vagrant)
If you understand and can set up your development environment with tools like Vagrant, you might next explore which options modern code editors or integrated development environments (IDEs) offer for remote development. Remote development covers some other use cases that might fit into your developer toolbox as well.
A "good enough", free and open source code editor for Python development is Visual Studio Code. According to the docs it has powerful features for remote development.
Visual Studio Code Remote Development allows you to use a container, remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment.
Read the docs: VS Code Remote Development
So my team and I have bought into Docker - it is fantastic for deployment and testing. My real question is how to set up a great developer experience, specifically around writing Python apps, but this question could be generalized to nodejs, Java, etc.
The problem: When writing a Python app, I really like having decent linting/autocomplete functionality, there are some really good editors out there (Atom, VSCode, PyCharm) that provide these, but most really want a Python install on the local disk. The real advantage of Docker is that all of the core language and any project libraries can all be in the container, so reproducing all of that on the host machine just for developing is a pain.
I know that PyCharm pro does support Docker and docker-compose, but I found it quite sluggish and a lot of the test running capabilities were busted. On top of that, I really would like something that I can commit to version control so that the team can share dev setup and people don't have to repeat all of the steps for their own system.
A few ideas I had were:
Install an editor (like Atom) in a sidecar Docker container and use X11 forwarding
Use a browser based editor such as https://c9.io/ in a container - this seems most promising
Install some agent in a dev container that could handle autocomplete/linting, etc. and connect to it from a locally running editor - I think this would be the best solution, but I also think that right now it actually doesn't exist.
Has anyone had luck setting up a more productive development environment besides just mounting volumes and editing text?
You should use an 'advanced' IDE like IntelliJ/PyCharm and configure a remote Python SDK (interpreter) using SSH access to your app's Docker container: preinstall an OpenSSH server and a preconfigured authorized_keys file in the container, and use a shared SSH key to authenticate against it.
You can share this SDK information in your project file with all the devs, so they will have this setup out of the box.
1) This ensures your IDE knows about all the Python libraries/symbols available and installed in your Docker container at runtime. It also enables you to debug remotely properly at the same time.
2) This ensures you have a full IDE at hand, including a lot of important additional features such as the inspector, 3-way diff, and search in path. Hardly any of the browser-based IDEs can catch up with PyCharm on this point, IMHO.
Of course, as already mentioned in the comments, you need to share (i.e., mount) your code into the container. On Linux, you simply use host volume mounts from your local src folder into the container.
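For example (image name and container path are placeholders):

docker run -v "$(pwd)/src:/usr/src/app" my-app-image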
On OS X, you will run into performance issues when using host mounts. You might use something like http://docker-sync.io (I am biased; there are also a lot of other similar tools).
I know this is an old question, but as I stumbled across it while trying to see what other editors might offer in this space, I would like to point out Visual Studio Code's notion of a Dev Container, which seems to provide the best level of integration I've seen for this so far. I'm hoping to see this turn into an industry trend myself.
You could use x11docker.
x11docker allows running graphical desktop applications (and entire desktops) in Docker Linux containers.
Docker allows running applications in an isolated container environment. Containers need far fewer resources than virtual machines for similar tasks.
Docker does not provide a display server that would allow running applications with a graphical user interface.
x11docker fills the gap. It runs an X display server on the host system and provides it to Docker containers.
Additionally, x11docker does some security setup to enhance container isolation and to avoid X security leaks. This provides a sandbox environment that protects the host system fairly well from possibly malicious or buggy software.
https://github.com/mviereck/x11docker
https://github.com/mviereck/x11docker/wiki (extensive knowledge base)
https://dev.to/brickpop/my-dream-come-true-launching-gui-docker-sessions-with-dx11-in-seconds-1a53
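For example, the project's documentation shows invocations along the lines of (image name taken from their examples):

x11docker --desktop x11docker/xfce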
We want to use continuous deployment.
We have:
all sources (python) in a local RhodeCode (git) server.
Jenkins for automated testing
SSH connections to the production systems (linux).
a tool which can update servers in one command.
Now something like this should be implemented:
run tests with Jenkins
if there is a failure: stop and mail the developers
if all tests are OK: deploy
We have been in the business long enough to write some scripts to do this.
My questions:
How do you update the version numbers? You could increment them, or you could use a timestamp ... (a small sketch of both options follows after this list).
Since we already use Jenkins, I think we do it in a script called by Jenkins. Any reason to do it with a different (better) tool?
My fear: Jenkins becomes a central server for things which are not related to testing (deploy). I think other tools like SaltStack or Ansible should be used for this. Up to now we use Fabric (simple layer above ssh). Maybe we should switch to a central management system before starting with continuous deployment.
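A small sketch of both options in Python (the VCS-derived variant assumes you tag releases in git):

import datetime
import subprocess

# Option 1: timestamp-based version, unique per deployment.
version = datetime.datetime.utcnow().strftime("%Y.%m.%d.%H%M")

# Option 2: derive the version from the repository (nearest tag plus commit id).
version = subprocess.check_output(
    ["git", "describe", "--tags", "--always"]).decode().strip()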
Since we already use Jenkins, I think we do it in a script called by Jenkins. Any reason to do it with a different (better) tool?
To answer your question: no, there aren't any big reasons not to go with Jenkins for deployment.
Pros:
You already know Jenkins (and you probably know some of the quirks)
You don't need to introduce yet another technology
You said that you want to write scripts called by Jenkins, so you can switch easily to a different system later.
Cons:
There might be better tools out there for deployment
It does not integrate as well with change-control tools.
Additional Considerations:
Do not use the same server for prod deployment and continuous build/integration. These are two different tasks performed by two different roles. Therefore two different permission schemes might be employed.
Use permissions wisely. I use two different permission schemes for my deploy and CI servers. We have three Jenkins servers right now:
CI and deploy to uncontrolled environments (developers can play with these environments)
Deploy to controlled environments (QA environments and upwards)
Deploy to prod (yes, that is the only purpose in life of this server), with the most restrictive permission scheme
Sandbox; actually, there is a fourth server for Jenkins admins to play with.
Store your deployable artifacts outside of Jenkins (and you do if I read your question correctly).
So, depending on your existing infrastructure and procedures, you decide on the tooling. Jenkins won't lock you in as long as you keep as much of the logic as possible in scripts that are merely executed by Jenkins.
I have a directory of Python programs, classes, and packages that I currently distribute to 5 servers. It seems I'm continually going to be adding more servers, and right now I'm just doing a basic rsync from my local box to the servers.
What would a better approach be for distributing code across n servers?
thanks
I use Mercurial with Fabric to deploy all the source code. Fabric is written in Python, so it'll be easy for you to get started. Updating the production service is as simple as fab production deploy, which ends up doing something like this:
Shut down all the services and put an "Upgrade in Progress" page.
Update the source code directory.
Run all migrations.
Start up all services.
It's pretty awesome seeing this all happen automatically.
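A minimal fabfile sketch of that flow (Fabric 1.x-style API; host names, paths, and the service/migration commands are placeholders):

# fabfile.py -- illustrative sketch only
from fabric.api import cd, env, run, sudo

env.roledefs = {"production": ["app1.example.com", "app2.example.com"]}

def production():
    """Select the production hosts, so you can run: fab production deploy"""
    env.hosts = env.roledefs["production"]

def deploy():
    sudo("service myapp stop")           # 1. shut down services, show the upgrade page
    with cd("/srv/myapp"):
        run("hg pull -u")                # 2. update the source code directory
        run("python manage.py migrate")  # 3. run migrations (command is app-specific)
    sudo("service myapp start")          # 4. start services again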
First, make sure to keep all code under revision control (if you're not already doing that), so that you can check out new versions of the code from a repository instead of having to copy it to the servers from your workstation.
With revision control in place you can use a tool such as Capistrano to automatically check out the code on each server without having to log in to each machine and do a manual checkout.
With such a setup, deploying a new version to all servers can be as simple as running
$ cap deploy
from your local machine.
While I also use version control for this, another approach you might consider is to package up the source using whatever package management your host systems use (for example RPMs or dpkgs), and set up the systems to use a custom repository. Then an "apt-get upgrade" or "yum update" will update the software on the systems. You could then use something like "mussh" to run the stop/update/start commands on all the machines.
Ideally, you'd push it to a "testing" repository first, have your staging systems install it, and once the testing of that was signed off on you could move it to the production repository.
It's very similar to the recommendations of using fabric or version control in general, just another alternative which may suit some people better.
The downside to using packages is that you're probably using version control anyway, and you do have to manage version numbers of these packages. I do this using revision tags within my version control, so I could just as easily do an "svn update" or similar on the destination systems.
In either case, you may need to consider the migration from one version to the next. What do you do if a user loads a page that contains references to other elements, you do the update, and those elements go away? You may wish to handle this either within your deployment scripting or within your code: first push out a version with the new page that still keeps the old referenced elements, deploy that, and then remove the referenced elements and deploy again later.
In this way users won't see broken elements within the page.