I am trying to disable and enable branch protections for a GitHub project in a Python script using the GitHub API (Version 2.11). More specifically I want to remove all push restrictions from a branch and later enable them with specific teams.
Replacing/adding existing team restrictions works via
PUT/POST /repos/:owner/:repo/branches/:branch/protection/restrictions/teams
And removing push restrictions also works like a charm using
DELETE /repos/:owner/:repo/branches/:branch/protection/restrictions
But if I remove the push restrictions, I have found no way to enable them again in order to add specific teams. If I try to add or replace teams, the message says 'Push restrictions not enabled'.
So how can I enable the checkbox 'Restrict who can push to this branch' in a script, so that teams can be added?
The API documentation just presents me the options "Get restrictions of protected branch" and "Remove restrictions of protected branch".
What I tried so far:
Just removing all teams without removing the restrictions does not work, because then nobody is able to push.
Sending PUT/POST to /repos/:owner/:repo/branches/:branch/protection/restrictions gives a 404.
Right now I have no other option than clicking the checkbox manually; after that, adding and replacing teams works via the API.
Check the Update branch protection section of the GitHub REST API:
PUT /repos/:owner/:repo/branches/:branch/protection
Using bash & curl:
ownerWithRepo="MyOrg/my-repo"
branch="master"

curl -X PUT \
  -H 'Accept: application/vnd.github.luke-cage-preview+json' \
  -H 'Authorization: Token YourToken' \
  -d '{
    "restrictions": {
      "users": [
        "bertrandmartel"
      ],
      "teams": [
        "my-team"
      ]
    },
    "required_status_checks": null,
    "enforce_admins": null,
    "required_pull_request_reviews": null
  }' "https://api.github.com/repos/$ownerWithRepo/branches/$branch/protection"
Note that setting null for one of those fields will disable (uncheck) that feature.
In Python:
import requests

repo = 'MyOrg/my-repo'
branch = 'master'
access_token = 'YourToken'

r = requests.put(
    'https://api.github.com/repos/{0}/branches/{1}/protection'.format(repo, branch),
    headers={
        'Accept': 'application/vnd.github.luke-cage-preview+json',
        'Authorization': 'Token {0}'.format(access_token)
    },
    json={
        "restrictions": {
            "users": [
                "bertrandmartel"
            ],
            "teams": [
                "my-team"
            ]
        },
        "required_status_checks": None,
        "enforce_admins": None,
        "required_pull_request_reviews": None
    }
)
print(r.status_code)
print(r.json())
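For completeness, the same PUT endpoint can also uncheck the box again from a script: per the note above, setting a field to null disables it. A sketch, reusing the repo, branch and access_token variables from the example above:

r = requests.put(
    'https://api.github.com/repos/{0}/branches/{1}/protection'.format(repo, branch),
    headers={
        'Accept': 'application/vnd.github.luke-cage-preview+json',
        'Authorization': 'Token {0}'.format(access_token)
    },
    json={
        "restrictions": None,           # null unchecks "Restrict who can push to this branch"
        "required_status_checks": None,
        "enforce_admins": None,
        "required_pull_request_reviews": None
    }
)
print(r.status_code)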
I am trying to add a team to the protected branch:
"teams": [
  "my-team"
]
Can you tell me whether my-team is the slug or the name, and whether it should include the owner of the team as well?
Thanks in advance for helping out.
I'm trying to automate some post-release processes for creating Pull Requests (release branch -> master) after a release. While I'm able to automate the PR creation, and it links the commits, it does not link the work items.
Note: I'm using Python for this, using the (officially supported) azure-devops module from PyPi (https://pypi.org/project/azure-devops/). It is a thin wrapper around the REST API, and I've read the REST API documentation itself to see if there are any other options (haven't found any so far). Here's my code:
def create_pull_request(self, repo_id, source_branch, target_branch, title, description):
    pull_request = {
        "title": title,
        "description": description,
        "sourceRefName": "refs/heads/" + source_branch,
        "targetRefName": "refs/heads/" + target_branch,
    }
    response = self._git_client.create_pull_request(pull_request, repository_id=repo_id)
Here's my function for connecting to the git client:
def __init__(self, personal_access_token=None, organization_url=None):
    # Create a connection to the org
    credentials = BasicAuthentication('', personal_access_token)
    self._connection = azure.devops.connection.Connection(base_url=organization_url,
                                                          creds=credentials)
    # Get a client (the "core" client provides access to projects, teams, etc.)
    self._git_client = self._connection.clients.get_git_client()
I've also tried pulling the individual commits to see if I can find the work items associated with them and attach them to the pull request after creation, but those responses come back with the work items empty.
Is there some other way I can accomplish this?
A couple of ideas to get your work items linked:
First, your pull_request should probably include:
pull_request = {
    "title": title,
    "description": description,
    "sourceRefName": "refs/heads/" + source_branch,
    "targetRefName": "refs/heads/" + target_branch,
    "workItemRefs": [],  # collection of work item refs goes here
}
workItemRefs is documented in the REST API docs at https://learn.microsoft.com/en-us/rest/api/azure/devops/git/pull%20requests/create?view=azure-devops-rest-6.0. What you need to put in there is probably a matter of some experimentation.
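If you want to experiment, the docs define workItemRefs as a collection of ResourceRef objects (id and url fields), so a hedged sketch, with hypothetical work item IDs, might look like:

pull_request = {
    "title": title,
    "description": description,
    "sourceRefName": "refs/heads/" + source_branch,
    "targetRefName": "refs/heads/" + target_branch,
    # ResourceRef objects; the IDs below are hypothetical
    "workItemRefs": [{"id": "123"}, {"id": "456"}],
}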
Second, as an alternative, your commits could be created in the form:
git commit -m "My commit message #workitemid1 #workitemid2, etc."
Then, when the PR is created, it will automatically link those work items in the commits it finds in the PR.
We cannot add work item links via the Pull Request Create API or the Pull Request Update API. I found a feature request; you can follow that ticket to get the latest news.
As a workaround, we can link a work item to a pull request via the Work Items - Update REST API.
Steps:
Get the pull request's artifactId field via the API below:
GET https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/pullrequests/{pullRequestId}?api-version=6.0
Link work item to Pull Request:
Request URL:
PATCH https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/{work item ID}?api-version=5.1
Request Body:
[
  {
    "op": "add",
    "path": "/relations/-",
    "value": {
      "rel": "ArtifactLink",
      "url": "{This url is artifactId}",
      "attributes": {
        "name": "pull request"
      }
    }
  }
]
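A hedged Python sketch of both steps using requests (the organization, project, IDs and token below are placeholders; note that the work item PATCH requires the application/json-patch+json content type):

import requests

org, project = "MyOrg", "MyProject"                           # placeholders
repo_id, pull_request_id, work_item_id = "repo-guid", 1, 42   # placeholders
auth = ("", "YourPAT")                                        # PAT via basic auth

# Step 1: read the pull request and grab its artifactId field
pr = requests.get(
    "https://dev.azure.com/{0}/{1}/_apis/git/repositories/{2}/pullrequests/{3}?api-version=6.0"
    .format(org, project, repo_id, pull_request_id),
    auth=auth
).json()
artifact_id = pr["artifactId"]

# Step 2: add an ArtifactLink relation pointing at the pull request
patch = [{
    "op": "add",
    "path": "/relations/-",
    "value": {
        "rel": "ArtifactLink",
        "url": artifact_id,
        "attributes": {"name": "pull request"}
    }
}]
r = requests.patch(
    "https://dev.azure.com/{0}/{1}/_apis/wit/workitems/{2}?api-version=5.1"
    .format(org, project, work_item_id),
    json=patch,
    auth=auth,
    headers={"Content-Type": "application/json-patch+json"}
)
print(r.status_code)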
Update 1
"Is there somewhere that I could find the work items associated with the commits?"
We can list the work items associated with a commit via the commit ID.
Request URL:
POST https://dev.azure.com/{Org name}/_apis/wit/artifactUriQuery?api-version=5.0-preview
Request Body:
{
  "artifactUris": [
    "vstfs:///Git/Commit/{Project ID}%2F{Repo ID}%2F{Commit ID}"
  ]
}
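A hedged requests sketch of that query (the IDs are placeholders; %2F is the URL-encoded slash separating project, repo and commit):

import requests

org = "MyOrg"                                                         # placeholder
project_id, repo_id, commit_id = "proj-guid", "repo-guid", "abc123"   # placeholders

artifact_uri = "vstfs:///Git/Commit/{0}%2F{1}%2F{2}".format(project_id, repo_id, commit_id)
r = requests.post(
    "https://dev.azure.com/{0}/_apis/wit/artifactUriQuery?api-version=5.0-preview".format(org),
    json={"artifactUris": [artifact_uri]},
    auth=("", "YourPAT")
)
print(r.json())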
I am currently tinkering with the IBM Watson Natural Language Understanding API.
The official tutorial page shows the basic curl command to use the API as follows:
curl -X POST -u "apikey:{apikey}" \
  --header "Content-Type: application/json" \
  --data '{
    "url": "http://newsroom.ibm.com/Guerbet-and-IBM-Watson-Health-Announce-Strategic-Partnership-for-Artificial-Intelligence-in-Medical-Imaging-Liver",
    "features": {
      "sentiment": {},
      "categories": {},
      "concepts": {},
      "entities": {},
      "keywords": {}
    }
  }' \
  "{url}/v1/analyze?version=2019-07-12"
The command basically analyzes the page provided in the url key of the data parameter and returns an analysis.
I want to do the same using Python's requests library; however, I am new to it. As far as I've gathered from the web, the following format should correspond to the same request:
headers = {'Content-Type': 'application/json'}
features = {"sentiment": {}, "categories": {}, "concepts": {}, "entities": {}, "keywords": {}}
myData = {
    "url": "http://newsroom.ibm.com/Guerbet-and-IBM-Watson-Health-Announce-Strategic-Partnership-for-Artificial-Intelligence-in-Medical-Imaging-Liver",
    "features": features
}
d = requests.post(
    auth=('apikey', '7LNEjCMvP6ZcNShjAkjPob7QSCfIHeZMQkn4Ho3dQgte'),
    headers=headers,
    data=myData,
    url='https://gateway-lon.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2019-07-12'
)
However, the server responds with a "400", which I believe is caused by an error in my format.
I have tested editing my apikey, which resulted in Error code 401 "Unauthorized", as expected. So, I know that I can access the server and get authenticated with my key.
I have tested removing the "headers" parameter, which resulted in 415 "Unsupported Media Type", so the request body has to be JSON, I guess.
I am not sure what I'm doing wrong and would appreciate any kind of help. Thanks.
Take a look at the Python SDK examples for analyze and the Text analytics features in the API reference. They might help.
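Beyond the SDK, the 400 is most likely caused by data=myData: when requests gets a dict via data it form-encodes the body, so it never arrives as JSON even though the Content-Type header says otherwise. A sketch of the corrected call (the API key is a placeholder), using the json parameter so requests serializes the body itself:

import requests

myData = {
    "url": "http://newsroom.ibm.com/Guerbet-and-IBM-Watson-Health-Announce-Strategic-Partnership-for-Artificial-Intelligence-in-Medical-Imaging-Liver",
    "features": {"sentiment": {}, "categories": {}, "concepts": {}, "entities": {}, "keywords": {}}
}
d = requests.post(
    'https://gateway-lon.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2019-07-12',
    auth=('apikey', 'YourApiKey'),  # placeholder API key
    json=myData                     # serialized to JSON; Content-Type set automatically
)
print(d.status_code)
print(d.json())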
I'm building a shell application that allows my teammates to start new projects by running a few commands. It should be able to create a new project and a new repository inside that project.
Although I'm specifying the project key/uuid when creating a new repository, it doesn't work. What I'm expecting is a success message with the details for the new repository. Most of the time, this is what I get:
{"type": "error", "error": {"message": "string indices must be integers", "id": "ef4c2b1b49c74c7fbd557679a5dd0e58"}}
or the repository goes to the first project created for that team (which is the default behaviour when no project key/uuid is specified, according to Bitbucket's API documentation).
So I'm guessing there's something in between my request & their code receiving it? Because it looks like they're not even getting the request data.
# Setup Request Body
rb = {
    "scm": "git",
    "project": {
        "key": "PROJECT_KEY_OR_UUID"
    }
}

# Setup URL
url = "https://api.bitbucket.org/2.0/repositories/TEAM_NAME/REPOSITORY_NAME"

# Request
r = requests.post(url, data=rb)
In the code from the API docs you'll notice that the Content-Type header is "application/json".
$ curl -X POST -H "Content-Type: application/json" -d '{
    "scm": "git",
    "project": {
      "key": "MARS"
    }
  }' https://api.bitbucket.org/2.0/repositories/teamsinspace/hablanding
In your code you're passing your data in the data parameter, which produces an "application/x-www-form-urlencoded" Content-Type header and URL-encodes your POST data.
Instead, you should use the json parameter.
rb = {
    "scm": "git",
    "project": {
        "key": "PROJECT_KEY_OR_UUID"
    }
}

url = "https://api.bitbucket.org/2.0/repositories/TEAM_NAME/REPOSITORY_NAME"

r = requests.post(url, json=rb)
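The json parameter is shorthand for serializing the body and setting the header yourself; the equivalent explicit form would be:

import json

r = requests.post(url, data=json.dumps(rb),
                  headers={"Content-Type": "application/json"})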
I am creating an imposter process using Mountebank and want to record the requests and responses. To create an HTTP imposter I used the following curl command, as described in their documentation:
curl -i -X POST -H 'Content-Type: application/json' http://127.0.0.1:2525/imposters --data '{
  "port": 6568,
  "protocol": "http",
  "name": "proxyAlways",
  "stubs": [
    {
      "responses": [
        {
          "proxy": {
            "to": "http://localhost:8000",
            "mode": "proxyAlways",
            "predicateGenerators": [
              {
                "matches": {
                  "method": true,
                  "path": true,
                  "query": true
                }
              }
            ]
          }
        }
      ]
    }
  ]
}'
I have another server running at http://localhost:8000, which receives all the requests coming to port 6568.
Output of my server now:
mb
info: [mb:2525] mountebank v1.6.0-beta.1102 now taking orders - point your browser to http://localhost:2525 for help
info: [mb:2525] POST /imposters
info: [http:6568 proxyAlways] Open for business...
info: [http:6568 proxyAlways] ::ffff:127.0.0.1:55488 => GET /
I want to record all the requests and responses going through, but I am unable to do so right now. When I enter curl -i -X GET -H 'Content-Type: application/json' http://127.0.0.1:6568/ it gives me a response, but how do I store it?
Also, can anyone explain the meaning of "save off the response in a new stub in front of the proxy response" (from this Mountebank documentation)?
How to store proxy results
The short answer is that mountebank already is storing it. You can verify that by looking at the output of curl http://localhost:2525/imposters/6568. The real question is how do you replay the stored response?
The common usage scenario with mountebank proxies is that you record the proxy responses on one running instance of mb, save off the results, and then start the next instance of mb with those saved responses. The way you would do that is to have the system under test talk to the service you're trying to stub out via the mountebank proxy under whatever conditions you need, and then save off the responses (and their request predicates) by sending an HTTP GET or DELETE to http://localhost:2525/imposters/6568?removeProxies=true&replayable=true. You feed the JSON body of that response into the next mb instance, either through the REST API, or by saving it on disk and starting mountebank with a command like mb --configfile savedProxyResults.json. At that point, mountebank is providing the exact same responses to the requests without connecting to the downstream service.
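A sketch of that save step in Python (the file name is arbitrary; mountebank is then restarted separately against the saved file):

import requests

# Save the recorded stubs, stripping the proxies so only the recorded
# responses (and their generated predicates) remain, in replayable form
saved = requests.get("http://localhost:2525/imposters/6568",
                     params={"removeProxies": "true", "replayable": "true"})
with open("savedProxyResults.json", "w") as f:
    f.write(saved.text)

# Then restart mountebank with: mb --configfile savedProxyResults.json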
Proxies create new stubs
Your last question revolves around understanding how the proxyAlways mode works. The default proxyOnce mode means that the first time a mountebank proxy sees a request that uniquely satisfies a predicate, it queries the downstream service and saves the response. The next time it sees a request that satisfies the exact same predicates, it avoids the downstream call and simply returns the saved result. It only proxies downstream once for the same request. The proxyAlways mode, on the other hand, always sends the request downstream and saves a list of responses for the same request.
To make this clear, in the example you copied we care about the method, path, and query fields on the request, so if we see two requests with exactly the same combination of those three fields, we need to know whether we should send the saved response back or continue to proxy. Imagine we first sent:
GET /test?q=elephants
The method is GET, the path is /test, and the query is q=elephants. Since this is the first request, we send it to the downstream server, which returns a body of:
No results
That will be true regardless of which proxy mode you set mountebank to, since it has to query downstream at least once. Now suppose, while we're thinking about it, the downstream service added an elephant, and then our system under test makes the same call:
GET /test?q=elephants
If we're in proxyOnce mode, the fact that the elephant was added to the real service simply won't matter, we'll continue to return our saved response:
No results
You'd see the same behavior if you shut the mountebank process down and restarted it as described above. In the config file you saved, you'd see something like this (simplifying a bit):
"stubs": [
  {
    "predicates": [{
      "deepEquals": {
        "method": "GET",
        "path": "/test",
        "query": { "q": "elephants" }
      }
    }],
    "responses": [
      {
        "is": {
          "body": "No results"
        }
      }
    ]
  }
]
There's only the one stub. If, on the other hand, we use proxyAlways, then the second call to GET /test?q=elephants would yield the new elephant:
1. Jumbo reporting for duty!
This is important, because if we shut down the mountebank process and restart it, now our tests can rely on the fact that we'll cycle through both responses:
"stubs": [
  {
    "predicates": [{
      "deepEquals": {
        "method": "GET",
        "path": "/test",
        "query": { "q": "elephants" }
      }
    }],
    "responses": [
      {
        "is": {
          "body": "No results"
        }
      },
      {
        "is": {
          "body": "1. Jumbo reporting for duty!"
        }
      }
    ]
  }
]
I am planning to build a plug-in for the Sphinx documentation system which shows the names and GitHub profile links of the people who have contributed to a documentation page.
GitHub has this feature internally.
Is it possible to get the GitHub profile links of a file's contributors through the GitHub API? Note that committer emails are not enough; one must be able to map them to a GitHub user profile link. Also note that I don't want all repository contributors, just the contributors to an individual file.
If this is not possible, what kind of alternative methods (private API, scraping) could you suggest to extract this information from GitHub?
First, you can show the commits for a given file:
https://api.github.com/repos/:owner/:repo/commits?path=PATH_TO_FILE
For instance:
https://api.github.com/repos/git/git/commits?path=README
Second, that JSON response does, in the author section, contain a URL field named 'html_url' pointing to the GitHub profile:
"author": {
  "login": "gitster",
  "id": 54884,
  "avatar_url": "https://0.gravatar.com/avatar/750680c9dcc7d0be3ca83464a0da49d8?d=https%3A%2F%2Fidenticons.github.com%2Ff8e73a1fe6b3a5565851969c2cb234a7.png",
  "gravatar_id": "750680c9dcc7d0be3ca83464a0da49d8",
  "url": "https://api.github.com/users/gitster",
  "html_url": "https://github.com/gitster", <==========
  "followers_url": "https://api.github.com/users/gitster/followers",
  "following_url": "https://api.github.com/users/gitster/following{/other_user}",
  "gists_url": "https://api.github.com/users/gitster/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/gitster/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/gitster/subscriptions",
  "organizations_url": "https://api.github.com/users/gitster/orgs",
  "repos_url": "https://api.github.com/users/gitster/repos",
  "events_url": "https://api.github.com/users/gitster/events{/privacy}",
  "received_events_url": "https://api.github.com/users/gitster/received_events",
  "type": "User"
},
So you shouldn't need to scrape any web page here.
Here is a very crude jsfiddle to illustrate that, based on this JavaScript extract:
var url = "https://api.github.com/repos/git/git/commits?path=" + filename;
$.getJSON(url, function(data) {
    var twitterList = $("<ul />");
    $.each(data, function(index, item) {
        // commits whose email maps to no GitHub account have a null author
        if (item.author) {
            $("<li />", {
                "text": item.author.html_url
            }).appendTo(twitterList);
        }
    });
    twitterList.appendTo("body");  // attach the list to the page
});
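Since the original goal is a Sphinx plug-in written in Python, here is a hedged requests sketch of the same idea (unauthenticated and unpaginated, so it is rate-limited and only covers the most recent commits):

import requests

def file_contributor_links(owner, repo, path):
    """Return the GitHub profile links of a single file's contributors."""
    commits = requests.get(
        "https://api.github.com/repos/{0}/{1}/commits".format(owner, repo),
        params={"path": path}
    ).json()
    links = set()
    for commit in commits:
        # "author" is null when the commit email maps to no GitHub account
        if commit.get("author"):
            links.add(commit["author"]["html_url"])
    return links

print(file_contributor_links("git", "git", "README"))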
Using the GraphQL API v4, you can use:
{
  repository(owner: "torvalds", name: "linux") {
    object(expression: "master") {
      ... on Commit {
        history(first: 100, path: "MAINTAINERS") {
          nodes {
            author {
              email
              name
              user {
                email
                name
                avatarUrl
                login
                url
              }
            }
          }
        }
      }
    }
  }
}
Try it in the explorer
Using curl & jq to get a list of the first 100 contributors of this file without duplicates:
TOKEN=<YOUR_TOKEN>
OWNER=torvalds
REPO=linux
BRANCH=master
FILEPATH=MAINTAINERS
curl -s -H "Authorization: token $TOKEN" \
-H "Content-Type:application/json" \
-d '{
"query": "{repository(owner: \"'"$OWNER"'\", name: \"'"$REPO"'\") {object(expression: \"'"$BRANCH"'\") { ... on Commit { history(first: 100, path: \"'"$FILEPATH"'\") { nodes { author { email name user { email name avatarUrl login url}}}}}}}}"
}' https://api.github.com/graphql | \
jq '[.data.repository.object.history.nodes[].author| {name,email}]|unique'
Why do you need to use the GitHub API for that? You can just clone the repository and use git log:
git log --format=format:%an ver1..ver2 -- path/to/file | sort | uniq
Unless you need to interact with the GitHub API directly, you can get the list of contributors by cloning the repo, changing into the cloned directory, and reading the list from the git log using the shortlog command:
import os
import subprocess

# clone the repository and move into the working copy
os.system("git clone https://github.com/poise/python.git")
os.chdir("python")

# list contributors with their commit counts
output = subprocess.check_output("git shortlog -s -n HEAD", shell=True, text=True)
print(output)
input("press enter to continue")
There is one more way to list contributors in case one wants to use the GitHub API: we can use the pytgithub3 wrapper to interact with the GitHub API and get the list of contributors via list_contributors, as follows:
from pytgithub3.services.repo import Repo

r = Repo()
result = r.list_contributors(user='userid/author', repo='repo name')
for page in result:
    for contributor in page:
        print(contributor)