How do you retrieve the revision number for the most recent commit of a GitHub project?
The APIv3 docs are a little vague and only provide a partial URL that doesn't seem to work.
For example, does /repos/:user/:repo/commits correspond to https://www.github.com/repos/:user/:repo/commits, or to something else like https://www.github.com/api/v2/json/repos/:user/:repo/commits? Neither works for any combination of user and repo.
In Git, you can only ask for the current commit of a given branch. Here's an example:
$ wget -q -O - https://api.github.com/repos/smarnach/pyexiftool/git/refs/heads/master
{
  "ref": "refs/heads/master",
  "url": "https://api.github.com/repos/smarnach/pyexiftool/git/refs/heads/master",
  "object": {
    "type": "commit",
    "url": "https://api.github.com/repos/smarnach/pyexiftool/git/commits/7be4b9bb680521369f2ae3310b1f6de5d14d1f8b",
    "sha": "7be4b9bb680521369f2ae3310b1f6de5d14d1f8b"
  }
}
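If you prefer doing this from code rather than wget, here is a minimal Python sketch (using the requests library) that pulls just the SHA out of the same endpoint:
import requests

# Ask the Git refs endpoint for the head of a specific branch
url = "https://api.github.com/repos/smarnach/pyexiftool/git/refs/heads/master"
ref = requests.get(url).json()

# The SHA of the most recent commit on that branch
print(ref["object"]["sha"])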
I encountered multiple issues with publishing / deploying functions when using Python Azure Functions running on Linux and the Premium Plan. Below are options for what can be done when publishing fails, or when it succeeds but the function (on Azure) does not reflect what should have been published / deployed.
The following options may also work for non-Linux / non-Python / non-Premium-Plan Function Apps.
Wait a few minutes after publishing so that the Function App reflects the update
Restart the Function App
Make sure that the following AppSettings are set under "Configuration" (please adjust to your current context)
[
  {
    "name": "AzureWebJobsStorage",
    "value": "<KeyVault reference to storage account connection string>",
    "slotSetting": false
  },
  {
    "name": "ENABLE_ORYX_BUILD",
    "value": "true",
    "slotSetting": false
  },
  {
    "name": "FUNCTIONS_EXTENSION_VERSION",
    "value": "~3",
    "slotSetting": false
  },
  {
    "name": "FUNCTIONS_WORKER_RUNTIME",
    "value": "python",
    "slotSetting": false
  },
  {
    "name": "SCM_DO_BUILD_DURING_DEPLOYMENT",
    "value": "true",
    "slotSetting": false
  },
  {
    "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
    "value": "<storage account connection string>",
    "slotSetting": false
  },
  {
    "name": "WEBSITE_CONTENTSHARE",
    "value": "<func app name>",
    "slotSetting": false
  }
]
When using Azure DevOps Pipelines, use the standard Azure Function task (https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureFunctionAppV1/README.md) to publish the function and to set the AppSettings.
This task also works for Python, even though it does not explicitly offer that option under "Runtime Stack" (just leave it empty).
Make sure to publish the correct files (if you publish via ZipDeploy, the zip folder should contain host.json at its root)
You can check whether the correct files have been published by inspecting the wwwroot folder via the Azure Portal -> Function App -> Development Tools -> SSH:
cd /home/site/wwwroot
dir
Check the deployment logs
Either via the link displayed in the output during the deployment (it should look like "https://func-app-name.net/api/deployments/someid/log")
Or via Development Tools -> Advanced Tools
If the steps so far did not help, it can help to SSH to the host via the portal (Development Tools -> SSH) and to delete:
# The deployments folder (and then republish)
cd /home/site
rm -r deployments
# The wwwroot folder (and then republish)
cd /home/site
rm -r wwwroot
Delete the Function App resource and redeploy it
I am trying to link a manifest.json file to the website I built in order to convert it into a PWA. I have used HTML/CSS and Python Flask for the backend.
I am not sure whether the issue is the path or something else. The service worker is being detected and works absolutely fine.
But under Application -> Manifest I am getting this error: Manifest is not valid JSON. Line: 1, column: 1, Unexpected token
manifest.json file
{
  "name": "Flask-PWA",
  "short_name": "Flask-PWA",
  "description": "A progressive webapp template built with Flask",
  "theme_color": "transparent",
  "background_color": "transparent",
  "display": "standalone",
  "orientation": "portrait",
  "scope": "/",
  "start_url": "../templates/login_user.html",
  "icons": [
    {
      "src": "images/icons/icon-72x72.png",
      "sizes": "72x72",
      "type": "image/png"
    },
    {
      "src": "images/icons/icon-96x96.png",
      "sizes": "96x96",
      "type": "image/png"
    },
    {
      "src": "images/icons/icon-128x128.png",
      "sizes": "128x128",
      "type": "image/png"
    },
    {
      "src": "images/icons/icon-144x144.png",
      "sizes": "144x144",
      "type": "image/png"
    },
    {
      "src": "images/icons/icon-152x152.png",
      "sizes": "152x152",
      "type": "image/png"
    },
    {
      "src": "images/icons/icon-192x192.png",
      "sizes": "192x192",
      "type": "image/png"
    },
    {
      "src": "images/icons/icon-384x384.png",
      "sizes": "384x384",
      "type": "image/png"
    },
    {
      "src": "images/icons/icon-512x512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}
This is the file structure for the manifest
Change the content type
Check the Network tab in the developer console and look at the manifest.json request. If the content type of the response is text/html, then you might need to change the Content-Type of the response to application/json in your Flask route, as sketched below.
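A minimal sketch of that, assuming manifest.json sits in the app's static/ folder (adjust the folder name to your project):
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route('/manifest.json')
def manifest():
    # Serve the file with an explicit JSON content type instead of text/html
    return send_from_directory('static', 'manifest.json', mimetype='application/json')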
Use a Python object
If changing the content type doesn't work, you can write your entire manifest as a Python object and jsonify it before returning it to the browser:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/manifest.json')
def manifest():
    # manifest_python_object is the manifest defined as a Python dict
    return jsonify(manifest_python_object)
The selected answer is no longer working as of today. I assume the OP was using ngrok, same as me, based on the files from the repo he cloned (which is the same one I used). Depending on the caching strategy of the service worker, some people might not have this issue at all.
If nothing is cached there is no problem, because my theory is that caching is causing the issue: ngrok shows a default warning page the first time you browse a URL, and our PWA caches everything (or the site loads the manifest only once, or something like that).
These are images, but my SO rep does not allow me to embed them.
Image: The page that is cached into manifest
Image: The error on manifest
Image: Clicking on the manifest file is showing the error page instead of the original manifest linked
Based on this, my theory is that the manifest is getting cached at the time ngrok shows this default page. (Or I'm wrong and something entirely different is going on, but this solution still solves the issue, so read on.)
If we follow ngrok's solution and skip that page by passing the ngrok-skip-browser-warning header with our initial request, the page works and the PWA is installable. I used this Chrome extension to do it non-programmatically.
Image: The extension I used to add that header
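For a quick check outside the browser that the header really skips the warning page, here is a small Python sketch using requests (the ngrok URL below is a placeholder):
import requests

# Placeholder ngrok URL; the header tells ngrok to skip its browser warning page
r = requests.get(
    "https://example.ngrok.io/manifest.json",
    headers={"ngrok-skip-browser-warning": "true"},
)
print(r.headers.get("Content-Type"))  # should be JSON, not text/html
print(r.text[:200])                   # should be the manifest, not the warning page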
Also note that I skipped using ngrok as a Flask extension and instead ran it directly after starting the Flask server via:
./ngrok http http://127.0.0.1:5000/
Also, this issue appears in a lot of other Stack Overflow questions involving ngrok, and this solution applies to those as well, because it has nothing to do with Flask / Python.
Here are the images of it working when the headers are passed.
Image: PWA is active
Image: No errors in the manifest
Image: This actually is the original manifest file linked in html
Is there a way to dynamically get the version tag from my __init__.py file and append it to the dockerrun.aws.json image name? For example:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "dockerkey",
    "Key": "mydockercfg"
  },
  "Image": {
    "Name": "comp/app:{{version}}",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}
This way, when I do eb deploy, it will build the correct version. At the moment I have to keep modifying the JSON file with each deploy.
I also stumbled upon that last year, and AWS support stated there's no such feature at hand. I ended up writing a script that receives the Docker tag as a parameter and composes the dockerrun.aws.json file on the fly with the correct tag name.
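A minimal sketch of such a script, assuming the version lives as __version__ in myapp/__init__.py and that a Dockerrun.aws.template.json with a {{version}} placeholder sits next to it (both file names are just examples):
import json
import re

# Pull __version__ out of the package's __init__.py (path is an example)
with open("myapp/__init__.py") as f:
    version = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', f.read()).group(1)

# Load the template and substitute the version into the image name
with open("Dockerrun.aws.template.json") as f:
    dockerrun = json.load(f)

dockerrun["Image"]["Name"] = "comp/app:" + version

with open("Dockerrun.aws.json", "w") as f:
    json.dump(dockerrun, f, indent=2)

print("Wrote Dockerrun.aws.json for comp/app:" + version)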
I've written a bash script which runs
eb deploy
Before it executes that command, it changes a symlink depending on whether I'm running production or staging. For example:
ln -sf ../ebs/Dockerrun.aws.$deploy_type.json ../ebs/Dockerrun.aws.json
I am actually trying to get:
the users (stargazers) for a repo on GitHub, using the GitHub API
the emails of all those stargazers
For the first point, I tried the command below:
curl -i https://api.github.com/repos/my_username/my_repo_name/stargazers
Result
I got the results as JSON, something like the following:
[
  {
    "login": "gazer_name",
    "id": 17xxx,
    "avatar_url": .........,
    "gravatar_id": "....",
    .......
    .......
    "type": "User"
  },
  {
    "login": "hello_man",
    "id": 18xxx,
    "avatar_url": .........,
    "gravatar_id": "....",
    .......
    .......
    "type": "User"
  }
  ......
  ......
]
Now, according to the second point, I want to get the emails of all those users.
How can we do that with the GitHub API? Do we need to be logged in in order to get the emails?
I ask because we can get the user email by using GET /user/emails as described here.
That means something like curl -u "my_user_name" https://api.github.com/user/emails works only after logging in.
And how do we use GET /user/emails exactly? When I entered it into the terminal as is, the result was nothing, but when I used authentication with curl as above it displayed emails, which indicates that login is required in order to get user emails.
So how can we get the emails of the stargazers?
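From what I can tell, the API only exposes the public email on a user's profile (GET /users/:username), which is often empty, while GET /user/emails only returns the authenticated user's own addresses. So the closest I can get is something like this sketch (the repo name and token are placeholders):
import requests

OWNER, REPO, TOKEN = "my_username", "my_repo_name", "<personal access token>"
session = requests.Session()
session.headers["Authorization"] = "token " + TOKEN

page = 1
while True:
    # Page through the repo's stargazers, 100 at a time
    stargazers = session.get(
        "https://api.github.com/repos/{}/{}/stargazers".format(OWNER, REPO),
        params={"per_page": 100, "page": page},
    ).json()
    if not stargazers:
        break
    for gazer in stargazers:
        # The profile email is None unless the user has made it public
        user = session.get("https://api.github.com/users/" + gazer["login"]).json()
        print(gazer["login"], user.get("email"))
    page += 1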
I am planning to build a plug-in for the Sphinx documentation system which shows the names and GitHub profile links of the people who have contributed to a documentation page.
GitHub has this feature internally.
Is it possible to get the GitHub profile links of a file's contributors through the GitHub API? Note that committer emails are not enough; one must be able to map them to a GitHub user profile link. Also note that I don't want all repository contributors, just the contributors to an individual file.
If this is not possible, what kind of alternative methods (private API, scraping) could you suggest to extract this information from GitHub?
First, you can show the commits for a given file:
https://api.github.com/repos/:owner/:repo/commits?path=PATH_TO_FILE
For instance:
https://api.github.com/repos/git/git/commits?path=README
Second, that JSON response does contain, in the author section, a field named 'html_url' pointing to the GitHub profile:
"author": {
"login": "gitster",
"id": 54884,
"avatar_url": "https://0.gravatar.com/avatar/750680c9dcc7d0be3ca83464a0da49d8?d=https%3A%2F%2Fidenticons.github.com%2Ff8e73a1fe6b3a5565851969c2cb234a7.png",
"gravatar_id": "750680c9dcc7d0be3ca83464a0da49d8",
"url": "https://api.github.com/users/gitster",
"html_url": "https://github.com/gitster", <==========
"followers_url": "https://api.github.com/users/gitster/followers",
"following_url": "https://api.github.com/users/gitster/following{/other_user}",
"gists_url": "https://api.github.com/users/gitster/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gitster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gitster/subscriptions",
"organizations_url": "https://api.github.com/users/gitster/orgs",
"repos_url": "https://api.github.com/users/gitster/repos",
"events_url": "https://api.github.com/users/gitster/events{/privacy}",
"received_events_url": "https://api.github.com/users/gitster/received_events",
"type": "User"
},
So you shouldn't need to scrape any web page here.
Here is a very crude jsfiddle to illustrate that, based on the JavaScript extract:
var url = "https://api.github.com/repos/git/git/commits?path=" + filename;
$.getJSON(url, function(data) {
    var twitterList = $("<ul />");
    $.each(data, function(index, item) {
        // Commits whose author could not be mapped to a GitHub account have no author object
        if (item.author) {
            $("<li />", {
                "text": item.author.html_url
            }).appendTo(twitterList);
        }
    });
    twitterList.appendTo("body");
});
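The same idea as a small Python sketch (unauthenticated, so subject to the API rate limit; the repo and path are just the example used above):
import requests

# List the commits that touch a given file, then collect the authors' profile links
commits = requests.get(
    "https://api.github.com/repos/git/git/commits",
    params={"path": "README"},
).json()

# Skip commits whose author could not be mapped to a GitHub account (author is null)
profiles = {c["author"]["html_url"] for c in commits if c.get("author")}
for profile in sorted(profiles):
    print(profile)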
Using the GraphQL API v4, you can use:
{
  repository(owner: "torvalds", name: "linux") {
    object(expression: "master") {
      ... on Commit {
        history(first: 100, path: "MAINTAINERS") {
          nodes {
            author {
              email
              name
              user {
                email
                name
                avatarUrl
                login
                url
              }
            }
          }
        }
      }
    }
  }
}
Try it in the explorer
Using curl & jq to have a list of the first 100 contributors of this file without duplicates :
TOKEN=<YOUR_TOKEN>
OWNER=torvalds
REPO=linux
BRANCH=master
FILEPATH=MAINTAINERS
curl -s -H "Authorization: token $TOKEN" \
-H "Content-Type:application/json" \
-d '{
"query": "{repository(owner: \"'"$OWNER"'\", name: \"'"$REPO"'\") {object(expression: \"'"$BRANCH"'\") { ... on Commit { history(first: 100, path: \"'"$FILEPATH"'\") { nodes { author { email name user { email name avatarUrl login url}}}}}}}}"
}' https://api.github.com/graphql | \
jq '[.data.repository.object.history.nodes[].author| {name,email}]|unique'
Why do you need to use the GitHub API for that? You can just clone the repository and use git log:
git log --format=format:%an ver1..ver2 -- path/to/file | sort | uniq
Unless you need to interact with the GitHub API directly, you can get the list of contributors by cloning the repo, changing into the cloned directory, and reading the list from the Git log with the shortlog command:
import os
import subprocess

# Clone the repository and switch into the working copy
subprocess.check_call(["git", "clone", "https://github.com/poise/python.git"])
os.chdir("python")

# git shortlog -s -n lists contributors with their commit counts
output = subprocess.check_output(["git", "shortlog", "-s", "-n", "HEAD"])
print(output.decode())
There is one more way to list contributors, in case one wants to use the GitHub API: we can use the pytgithub3 wrapper to interact with the GitHub API and get the list of contributors via list_contributors, as follows:
from pytgithub3.services.repo import Repo

r = Repo()
# list_contributors returns paginated results
pages = r.list_contributors(user='userid/author', repo='repo name')
for page in pages:
    for result in page:
        print(result)