Pulling and integrating remote changes with pygit2 - Python

I have the following problem: I'm writing a script that searches a folder for repositories, looks up their remotes on the net, pulls all new data into each repository, and notifies me about new changes. The main idea is clear. I'm using Python 2.7 on Windows 7 x64, with pygit2 to access the Git features. On the command line, a simple git pull origin does the job, but the Git API is more complicated and I don't see the way. Okay, I got this far:
import pygit2

# Open the repository and fetch from its first remote (usually "origin").
orepository = pygit2.Repository("path/to/repository/.git")
oremote = orepository.remotes[0]
result = oremote.fetch()
This code retrieves the new objects and downloads them into the repository, but it doesn't update the master branch or check the new data out. Inspecting the repository with TortoiseGit, I see that nothing was checked out; even the new log messages don't appear when showing the log. I would still need to run git pull to refresh the repository and the working copy. Now my question: what do I need to do to achieve all that with pygit2? I mean, I download the changes by fetching them, but what do I need to do then? I want to update the master branch and the working copy too...
Thank you in advance for helping me with my problem.
Best Regards.

Remote.fetch() does not update the files in the workdir because that's very far from its job. If you want to update the current branch and check out those files, you also need to perform those steps yourself, via Repository.create_reference() or Reference.target= depending on what data you have at the time, and then e.g. Repository.checkout_head() if you did decide to update.
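For example, here is a minimal fast-forward sketch. It assumes the remote is named origin, the checked-out branch is master, origin/master is a descendant of the local branch (so a plain fast-forward is safe), and a pygit2 version that provides Reference.set_target() and Repository.checkout_head(); note the forced checkout discards local changes:

import pygit2

repo = pygit2.Repository("path/to/repository/.git")
repo.remotes[0].fetch()

# Where the remote-tracking branch points after the fetch.
remote_id = repo.lookup_reference("refs/remotes/origin/master").target

# Fast-forward the local branch to that commit; HEAD points at
# refs/heads/master, so it now resolves to the new commit as well.
local_ref = repo.lookup_reference("refs/heads/master")
local_ref.set_target(remote_id)

# Update the working directory to match the new HEAD.
repo.checkout_head(strategy=pygit2.GIT_CHECKOUT_FORCE)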
git-pull is a script that performs many different steps depending on the configuration and the flags passed. When you're writing a tool to simulate it over multiple repositories, you need to figure out what it is that you want to do, rather than hoping everything is set up just so that git-pull won't surprise you.

Related

YAPF Formatting on Commits

I am currently working on tooling for Python developers who use the Spyder IDE. We want a consistent format, but they are not willing to change to IDEs that have plugins to do this automatically.
I have been testing the YAPF library, and am looking to find a way that anytime that a commit or push happens to GitLab, it automatically formats it in this way.
Do I need some kind of workflow? Is this considered similar to CI pipelines? I am unsure how to tackle this.
Any feedback is helpful, and greatly appreciated.
You don't want to do that: you don't want your CI environment to modify commits, because either (a) you'll end up duplicating every commit with a second "fix the formatting" commit, or (b) you'll make the history of your GitLab repository diverge from the history of the person who submitted the change (if you replace the commit).
The typical solution here is to update your CI to reject pull-/merge-requests that don't meet your formatting standards. At the same time, provide developers with a tool they can run locally that will apply the correct formatting to files before they submit a change request.
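As a sketch of that reject step, a CI job can run yapf in diff mode and fail when any file would change. This assumes yapf is installed in the CI image; the script name check_format.py is just an example:

# check_format.py: fail the pipeline when yapf would change anything.
import subprocess
import sys

result = subprocess.run(
    ["yapf", "--diff", "--recursive", "."],
    capture_output=True,
    text=True,
)

if result.stdout.strip():
    print(result.stdout)
    print("Formatting check failed; run 'yapf --in-place --recursive .' locally.")
    sys.exit(1)
print("Formatting check passed.")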
If your developers are committing directly to the repository, rather than operating through merge requests, your options are far more limited (at that point, you have a social problem, not a technical problem).

Self update a python script posted on github

I have made a bot that tracks the earnings of a mining rig, but the script runs at a friend's house, so whenever I update it I have to send him the new version. My idea is to upload the program to GitHub, have the script periodically check whether the repo that contains it has been modified, and if so, have it update itself.
I have tried pulling the whole repo, launching the new version, and deleting the old files, but I don't think that's a good idea (if I have to deal with big files, for example).
I also found a PyPI package called selfupdate (https://pypi.org/project/selfupdate/), but there is not much documentation and I couldn't figure out how to make it work.
How can I make the script update itself by pulling the newer version from GitHub?
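One way to implement the periodic check described above is to ask the GitHub API for the latest commit SHA and compare it with one recorded locally; in this sketch the repository path user/mining-bot and the state file are hypothetical, and Python 3 is assumed:

import json
import urllib.request

API_URL = "https://api.github.com/repos/user/mining-bot/commits/HEAD"
VERSION_FILE = "last_commit.txt"

def latest_remote_sha():
    # The GitHub API returns the newest commit on the default branch.
    with urllib.request.urlopen(API_URL) as resp:
        return json.load(resp)["sha"]

def update_available():
    try:
        with open(VERSION_FILE) as f:
            local_sha = f.read().strip()
    except IOError:
        return True  # no record yet, treat as out of date
    return latest_remote_sha() != local_sha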

How to implement an autoupdate on exe files [duplicate]

I wrote 2-3 plugins for pyLoad.
Sometimes they change, and I let users know on the forum that there's a new version.
To avoid that, I'd like to give my scripts an auto-update function.
https://github.com/Gutz-Pilz/pyLoad-stuff/blob/master/FileBot.py
Is something like that easy to set up?
Or can someone point me in a direction?
Thanks in advance!
It is possible, with some caveats. But it can easily become very complicated. Before you know it, your auto-update "feature" will be bigger than the original code!
First you need a URL that always contains the latest version. Since you are using GitHub, raw.githubusercontent.com might do very well.
Have your code download the latest version from that URL (e.g. using requests), and compare the version with that in the current code. For this purpose I would recommend a simple integer version number, so you don't need any complicated parsing logic.
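A minimal sketch of that check, assuming requests is available, a simple integer __version__ near the top of the file, and the raw URL of the FileBot.py linked above (the __version__ regex convention is an assumption):

import re
import requests

LATEST_URL = ("https://raw.githubusercontent.com/Gutz-Pilz/"
              "pyLoad-stuff/master/FileBot.py")
__version__ = 3  # bump this integer on every release

def remote_version():
    # Fetch the latest source and read its __version__ line.
    source = requests.get(LATEST_URL, timeout=10).text
    match = re.search(r"^__version__\s*=\s*(\d+)", source, re.M)
    return int(match.group(1)) if match else None

latest = remote_version()
if latest is not None and latest > __version__:
    print("A newer version (%d) is available." % latest)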
However, you might want to consider only running that check once per day, or once per week. If you do it every time your file is run, the server might get hammered! So now you have to save a file with the date when the check was last done, and read that to see if it is time to run the check again. This file will need to be saved in a location that you can access on every platform your code is liable to run on. That in itself can be a challenge.
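A sketch of such a throttle, assuming a state file in the user's home directory is acceptable on your platforms:

import os
import time

STATE_FILE = os.path.expanduser("~/.filebot_update_check")
ONE_DAY = 24 * 60 * 60

def should_check():
    # Time of the last check, taken from the state file's mtime.
    try:
        last = os.path.getmtime(STATE_FILE)
    except OSError:
        last = 0
    if time.time() - last < ONE_DAY:
        return False
    with open(STATE_FILE, "w"):
        pass  # touch the file to record this check
    return True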
If it is just a single python file, which is installed as the user that is running it, updating is relatively easy. But if the original was installed as root in the global Python directory and your script is running as a nonprivileged user it will be difficult. Especially if it is running as a plugin and cannot ask the user for (temporary) root credentials to install the file.
And what are you going to do if a newer version has more dependencies outside the standard library?
Last but not least, as a sysadmin I don't really like auto-updating software. Especially for critical system infrastructure, I like to be able to estimate the consequences before an update.

Automated deployment and update of Python-based Software with pip

We have a relatively large number of machines in a cluster that run certain Python-based software for computation in an academic research environment. To keep the code base up to date, we use a build server that makes the current code base available in a directory each time we update a dedicated deployment tag on our Mercurial server. Each machine in the cluster runs a daily rsync script that simply synchronises with the deployment directory on the build server (if there is anything to sync) and restarts the process if the code base was updated.
I find this approach a bit dated and slightly overkill, and would like to optimise it in the following ways:
Get rid of the build server, as all it actually does is clone the latest code base that has a certain tag attached; it doesn't compile or run any additional checks (such as tests) on the code base at all. This would also reduce some pain for us, as it would be one less server to maintain and worry about.
Instead of having the build server, I would like to pull straight from our Mercurial server, which already hosts the code. This would remove the need to duplicate the code base each time we update the deployment tag.
I have read a bit about how to install and deploy Python-based software with pip (e.g., How to point pip at a Mercurial branch?). It seems to be the right choice, as it supports installing packages straight from a code repository. However, I ran into a few problems that I need help with. My requirements are as follows:
Use Mercurial as a source.
Automated background process to update and install into a custom directory on the file system.
Only pull and update from the repository if there is a new version available.
The following command seems to almost do what I need:
pip install -e hg+https://authkey:anypw@mymercurialserver.hostname.com/Code/package@deployment#egg=package --upgrade --src ~/proj
It pulls the package from the Mercurial server, picks the code base with the tag "deployment" and installs it into proj inside the user's home directory.
The problem, however, is that regardless of whether an update is available, pip always uninstalls package and reinstalls it. This makes it difficult to decide whether the process needs to be restarted if nothing actually changed. In addition, pip always gets stuck with the message hg clone in ./proj/yarely exists with URL... and asks me: What to do? (s)witch, (i)gnore, (w)ipe, (b)ackup. This is not ideal, as (1) it would be an automated process without a user prompt, and (2) it should only pull the repository if there was an update in the first place, to reduce network traffic and not overload our Mercurial server. I believe a pull instead of a clone, when a local copy of the repository already exists, would be more appropriate and would potentially solve the problem.
I wasn't able to find an elegant and nice solution to this problem. Does anyone have a pointer or suggestion how this could be achieved?
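Lacking a pip-native answer, one pragmatic sketch is to drive Mercurial directly and only pull when the server reports incoming changesets (hg incoming exits 0 when there is something to pull, 1 when there is not). The paths and URL below are placeholders; pinning to the deployment tag can be added with -r deployment:

import os
import subprocess

REPO_URL = "https://mymercurialserver.hostname.com/Code/package"
LOCAL = os.path.expanduser("~/proj/package")

def update_if_needed():
    # Probe the server without pulling anything.
    probe = subprocess.call(["hg", "incoming", "-q", "-R", LOCAL, REPO_URL])
    if probe != 0:
        return False  # nothing new; leave the running process alone
    # Pull and update the working copy to the new tip.
    subprocess.check_call(["hg", "pull", "-u", "-R", LOCAL, REPO_URL])
    return True  # caller should restart the process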

Contributing to a repository on GitHub on a new branch

Say someone owns a repository with only a master branch, hosting code that is compatible with Python 2.7.x. I would like to contribute my own changes on a new branch new_branch, to offer a variant of the repository that is compatible with Python 3.
I followed the steps here:
I forked the repository on GitHub on my account
I cloned my fork on my local machine
I created a new branch new_branch locally
I made the relevant changes
I committed and pushed the changes to my own fork on GitHub
In the browser, I went to the GitHub page of the official repository and opened a pull request
The above worked, but it created a pull request from "my_account:new_branch" to "official_account:master". This is not what I want, since Python 2.7.x and Python 3 are incompatible with each other. What I would like is to create a PR against a new branch on the official repository (e.g. with the same name, new_branch). How can I do that? Is this possible at all?
You really don't want to do things this way. But first I'll explain how to do it, then I'll come back to explain why not to.
Using Pull Requests at GitHub has a pretty good overview, in particular the section "Changing the branch range and destination repository." It's easiest if you use a topic branch, and have the upstream owner create a topic branch of the same name; then you just pull down the menu where it says "base: master" and the choice will be right there, and he can just click the "merge" button and have no surprises.
So, why don't you want to do things this way?
First, it doesn't fit the GitHub model. Topic branches that live forever in parallel with the master branch and have multiple forks make things harder to maintain and visualize.
Second, you need both a git URL and an https URL for your code. You need people to be able to share links, pip install from the top of the tree, just clone the repo instead of cloning and then checking out a different branch, etc. All of this means your code has to be on the master branch.
Third, if you want people to be able to install your 3.x version off PyPI, find docs at readthedocs, etc., you need a single project with a single source tree. Most such sites have a single latest version, not a latest version for each Python version, and definitely not multiple variations of the same version. (You could instead completely fork the project and create a separate foo3 project. But it's much easier for people to be able to pip install foo than to have them try that, fail, come to SO and ask why it doesn't work, and get told they probably have Python 3 and need to pip install foo3 instead.)
How do you merge two versions into a single package? The porting docs should have the most up-to-date advice, but briefly: If it's at all possible to create a single codebase that runs on both versions, that's ideal; if not, and if you can't make things work by running 2to3 or 3to2 at install time, create a parallel directory for the 3.x code (e.g., a foo3 alongside foo) and pick the appropriate directory at install time. (You can always start with that and gradually work toward a unified codebase.)
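As a sketch of that install-time selection, setup.py can map the package to a different source directory depending on the interpreter (the foo/foo3 layout follows the example above):

import sys
from setuptools import setup

# Use the 3.x sources from foo3/ when installing under Python 3,
# and the default foo/ directory under Python 2.
package_dir = {"foo": "foo3"} if sys.version_info[0] >= 3 else {}

setup(
    name="foo",
    version="1.0",
    packages=["foo"],
    package_dir=package_dir,
)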
