I currently have a v1 API, which consists of a collection of scripts and is consumed by other developers, and I have updated existing scripts and created new ones for v2. Before migrating to v2 I want to make sure I have a solid versioning strategy to go ahead with.
Currently, a bash script is called before using the API; you can supply a version number to it, or by default it gives you the most recent version. Originally, I intended to have a separate subfolder for each version, but for scripts that do not change between revisions, or that only have content added to them, the git history would not be preserved correctly: the original file would still reside in the v1 subdirectory and would never be 'git mv'ed. This is obviously not the best way, but I can't think of a better one currently.
Any recommendations would be helpful, but one restriction is that we cannot use a git submodule with different branches. There are no other restrictions (e.g. the bash file used for setup can be deleted) as long as the scripts remain accessible. Thanks!
EDIT: We also have scripts above the "API" directory, in the same repo, that call into the API (we are consumers of our own API). Changes to these files need to be visible when using any version of the API, not only in the latest version (this is relevant to using tags in the repo).
I think you want to use tags in your git repository. For each version of your API, run git tag vn, and you won't need to maintain earlier versions of your files: you can access all files at a given version simply by running git checkout vn.
If you use a remote repository, you need to pass the --tags flag to send the tags to it, i.e. git push --tags.
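As a rough sketch of that tag-based workflow driven from Python (the version labels and repo path are placeholders for whatever your setup script resolves):

    import subprocess

    def git(*args, cwd="."):
        # Run a git command and fail loudly if it errors.
        subprocess.run(["git", *args], cwd=cwd, check=True)

    # Publisher side: mark the current state of the scripts as v2.
    git("tag", "v2")
    git("push", "--tags")

    # Consumer side: pin the working tree to a specific API version.
    git("checkout", "v1")

The setup bash script could do the same checkout directly; the point is that every version lives in one history, so unchanged files keep their history.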
The goal is to fetch a zipball of a given repo snapshot from its URI. It should be flexible enough to handle releases, tags, or any plain ref (partial hash, HEAD, etc.). I want to get the zip file as a bytes object, and the solution should handle modern GitHub auth, including web flows and organizations with SSO.
I looked at the raw GitHub v3 APIs, and some calls are promising but each handles only a portion of this. PyGitHub seems to handle a few things well, but it does not seem to have higher-level APIs, and auth and multipart could become complicated.
Perhaps doing a shallow git clone without any GitHub dependencies is a better solution?
My goal is to have 100% of the files for that snapshot completely unmodified (not even line endings changed) so I can archive them or build them with a different library that does not understand git.
I tried this with the gh CLI and it seems it would work, but I don't want a bunch of installs to manage beyond what pipenv can do.
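For reference, the v3 REST API's zipball endpoint covers at least the ref-flexibility part of this. A minimal sketch with requests and a personal access token (the owner, repo, and ref values are placeholders; the interactive web/SSO flows mentioned above are a separate problem):

    import requests

    def fetch_zipball(owner, repo, ref, token):
        # GET /repos/{owner}/{repo}/zipball/{ref} answers with a redirect
        # to the archive; requests follows it and returns the raw bytes.
        url = f"https://api.github.com/repos/{owner}/{repo}/zipball/{ref}"
        resp = requests.get(url, headers={"Authorization": f"token {token}"})
        resp.raise_for_status()
        return resp.content  # the snapshot as a bytes object

    # ref can be a tag, a branch, or a (partial) commit hash.
    # data = fetch_zipball("octocat", "Hello-World", "master", token="...")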
OK, this may seem like a stupid question, and I don't have a lot of experience using git, as I have always depended on GitHub Desktop for my work. My question is: can a GitHub remote repo be committed and pushed to via only the GitHub API, without installing git on your system? I have used PyGitHub, git-python, and even Python subprocess calls to create a remote repo, initialize and add to the local repo, and perform commits and pushes to the remote repo, and I know that these approaches require the git client to be installed on the system.
So I was just wondering whether the standalone GitHub API can be called through Python requests to do the same, the requirement being that I don't have to install Git on my local system. Any help on the matter would be really enlightening.
Can a GitHub remote repo be committed and pushed to via only the GitHub API? Can it be done without installing git on your system?
Kinda: you can interact with the raw objects via the API, but I'm not sure you can behave as if git were on your machine and push/pull from a local working copy the way you would if you had git installed locally.
My experience is that it requires some understanding of the low-level fundamentals of Git (blobs, trees, commits, refs, and their relations): the v3 API exposes a git data / git database endpoint which provides transparent access to the low-level git structures. There are some limitations (e.g. you can't interact with a brand-new, empty repository this way; it has to contain at least one commit somehow, and high-level operations like "cherry-pick" or "rebase" are unavailable and have to be hand-rolled if you want them), but aside from that it works fine.
To understand the low-level objects, I would recommend the "Git Internals" chapter of the Git book, sections 10.2 (Git Objects) and 10.3 (Git References). There are also "from the ground up" tutorials out there which explain these structures in a more hands-on way by building partial git clients.
So I was just wondering whether the standalone GitHub API can be called through Python requests to do the same, the requirement being that I don't have to install Git on my local system.
See above: kinda. You can most certainly interact with a GitHub repository via the API in almost every way possible, but rebuilding a full git client out of the API might be difficult.
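To make the raw-objects route concrete, here is a hedged sketch of creating a commit purely over HTTPS with requests. The repo, branch, token, file name, and message are all placeholders, but the endpoints are the v3 git database ones described above (blobs, trees, commits, refs):

    import requests

    API = "https://api.github.com/repos/OWNER/REPO"   # placeholder repo
    HEADERS = {"Authorization": "token YOUR_TOKEN"}   # placeholder token

    def gh(method, path, **kwargs):
        resp = requests.request(method, API + path, headers=HEADERS, **kwargs)
        resp.raise_for_status()
        return resp.json()

    # 1. Find the current tip of the branch and the tree it points at
    #    (this is why the repo cannot be completely empty).
    parent_sha = gh("GET", "/git/ref/heads/main")["object"]["sha"]
    parent_tree = gh("GET", f"/git/commits/{parent_sha}")["tree"]["sha"]

    # 2. Upload the new file content as a blob object.
    blob = gh("POST", "/git/blobs",
              json={"content": "hello from the API", "encoding": "utf-8"})

    # 3. Build a new tree on top of the parent commit's tree.
    tree = gh("POST", "/git/trees", json={
        "base_tree": parent_tree,
        "tree": [{"path": "hello.txt", "mode": "100644",
                  "type": "blob", "sha": blob["sha"]}],
    })

    # 4. Create the commit object and move the branch ref onto it.
    commit = gh("POST", "/git/commits", json={
        "message": "commit made with no local git at all",
        "tree": tree["sha"], "parents": [parent_sha],
    })
    gh("PATCH", "/git/refs/heads/main", json={"sha": commit["sha"]})

Note that at no point does git run locally: every step is an HTTP call that manipulates a blob, tree, commit, or ref on GitHub's side.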
PyGithub will enable you to deal with whatever lives on GitHub.com. To manage your local git repository (committing, pushing, stashing, and so on), you might need a Python module that deals with Git itself, not GitHub. One example is GitPython:
https://github.com/gitpython-developers/GitPython
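A minimal local-side sketch with GitPython (the path, file, and remote name are assumptions):

    from git import Repo

    repo = Repo("/path/to/local/clone")    # an existing local clone
    repo.index.add(["some_file.py"])       # stage a change
    repo.index.commit("commit made locally via GitPython")
    repo.remote("origin").push()

    # Caveat for the question above: GitPython shells out to the git
    # binary, so this route still requires git installed on the system.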
Here is the scenario I am dealing with:
I want to (and have) set up a CircleCI build for my project, with unit tests etc.
In this project I use another one of my libraries, which needs to be installed on the build container in CircleCI, otherwise my tests fail.
I need to find a way to either:
pull the git repository of the external library and install it,
or download it as a zip,
or do it some other way?
Happy to add more explanation if needed.
From the section Using Resources External to Your Repository:
CircleCI supports git submodule, and has advanced SSH key management to let you access multiple repositories from a single test suite. From your project's Project Settings > Checkout SSH keys page, you can add a "user key" with one click, allowing you to access code from multiple repositories in your test suite. Git submodules can be easily set up in your circle.yml file (see example 1).
CircleCI’s VMs are connected to the internet. You can download dependencies directly while setting up your project, using curl or wget.
(Or just use git clone without submodules.)
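Since pip understands VCS URLs, the dependency step can be as small as this sketch (the repository URL and tag are made up for illustration; in practice this would be one command in the circle.yml dependency section):

    import subprocess

    # pip clones the repo over HTTPS and installs it into the build
    # container's environment in one call. URL and tag are placeholders.
    subprocess.run(
        ["pip", "install",
         "git+https://github.com/your-org/your-library.git@v1.0"],
        check=True,
    )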
I'm deploying an app to AWS Elastic Beanstalk using the API:
https://elasticbeanstalk.us-east-1.amazon.com/?ApplicationName=SampleApp
&SourceBundle.S3Bucket=amazonaws.com
&SourceBundle.S3Key=sample.war
...
My impression from reading around a bit is that Java deployments use .war, that .zips are supported (docs), and that one can use .git (but only with PHP, or using eb? doc).
Can I use the API to create an application version from a .git for a Python app? Or are zips the only type supported?
(Alternatively, can I git push to AWS without using the commandline tools?)
There are two ways to deploy to AWS Elastic Beanstalk:
The API backend, where the deployment is basically a .zip file referenced from S3. When deploying, the instance will unpack it and run some custom scripts (which you can override from your AMI, or via Custom Configuration Files, which are the recommended way). Note that in order to create and deploy a new version in an AWS Elastic Beanstalk environment, you need three calls: upload to S3, CreateApplicationVersion, and UpdateEnvironment (sketched in boto3 after this list).
The git endpoint, which works like this:
You install the AWS Elastic Beanstalk DevTools, and run a setup script on your git repo
When run, the setup script patches your .git/config in order to support git aws.push and in particular git aws.remote (which is not documented)
git aws.push simply takes your keys, builds a custom URL (git aws.remote), and does a git push -f master
Once AWS receives this (the URL is basically <api>/<app>/<commitid>(/<envname>)), it creates the .zip file on S3 (from the commit contents), then the application version on <app> for <commitid>, and, if <envname> is present, it also issues an UpdateEnvironment call. Your AWS IDs are hashed and embedded into the URL just like in all AWS calls, but sent as username/password auth tokens.
(full reference docs)
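In today's boto3 terms, the first (API) path boils down to the three calls mentioned above. A sketch, with the bucket, key, application, environment, and version label all placeholders:

    import boto3

    bucket, key = "my-deploy-bucket", "sample-v42.zip"   # placeholders

    # 1. Upload the source bundle to S3.
    boto3.client("s3").upload_file("sample.zip", bucket, key)

    # 2. Register it as a new application version.
    eb = boto3.client("elasticbeanstalk")
    eb.create_application_version(
        ApplicationName="SampleApp",
        VersionLabel="v42",
        SourceBundle={"S3Bucket": bucket, "S3Key": key},
    )

    # 3. Point the environment at the new version.
    eb.update_environment(EnvironmentName="SampleApp-env",
                          VersionLabel="v42")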
I ported that to a Maven plugin a few months ago, and this file shows how it is done in plain Java. It actually involves a fair amount of code (since it builds a custom git repo using jgit, calculates the hashes, and pushes into it).
I'm strongly considering backporting it as an Ant task, or perhaps simply making it work without a pom.xml file present, so users only need Maven for the deployment itself.
Historically, only the first method was supported, while the second has grown in importance. Since the second is actually far easier (with beanstalk-maven-plugin you have to call three different goals, while a simple git push does all three), we're supporting git-based deployments, and have even published an archetype for it (you can see a sample project here; note the README.md in particular).
(By the way, if you're using .war files, my Elastic Beanstalk plugin supports both ways, and we actually favor git, since it allows us to do incremental deployments.)
So, you still want to implement it yourself?
There are three files I suggest you read:
FastDeployMojo.java is the main façade
RequestSigner does the real magic
This is a test case for RequestSigner
Want to do it in:
Python? I'd look at Dulwich.
C#? The PowerShell version is based on it.
Ruby? The Linux version is based on it.
Java? Use mine; it uses jgit.
I am currently using GitHub to develop a Python application and am looking to deploy it on EC2.
Is there a good way to automatically handle the messiness this entails (setting up SSH key pairs on the EC2 instance for GitHub, pulling from the GitHub repository every time a commit is pushed to the master branch, etc.) without a bunch of custom scripts? Alternatively, is there an open-source project that addresses this?
I wrote a simple Python script to do this once, and posted about it on my blog.
You set up mappings of your repositories and branches to local folders which already contain a checkout of that repo and branch. Then you enable GitHub's post-receive hooks to hit the script, which automatically triggers a git pull in the appropriate folder.
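The core of such a script is small. A minimal sketch of the idea using only the standard library (not the actual script from the blog post; the repo-to-folder mapping and port are placeholders, and it assumes the hook is configured to deliver JSON):

    import json
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Map (repository, branch) to a local folder that already contains
    # a checkout of that repo and branch. Placeholder values.
    DEPLOYMENTS = {("myorg/myapp", "master"): "/var/www/myapp"}

    class HookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            payload = json.loads(body)
            repo = payload["repository"]["full_name"]
            branch = payload["ref"].rsplit("/", 1)[-1]  # "refs/heads/master"
            folder = DEPLOYMENTS.get((repo, branch))
            if folder:
                # A mapped repo/branch was pushed: update its checkout.
                subprocess.run(["git", "pull"], cwd=folder, check=True)
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8080), HookHandler).serve_forever()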