GitHub Actions behaves differently compared to a local environment - Python

I have created a GitHub Actions pipeline to lint the most recently committed code. The test script works in my local environment, but not on the GitHub server. What peculiarity am I missing?
Here is my code for linting:
#!/usr/bin/env bash
LATEST_COMMIT=$(git rev-parse HEAD);
echo "Analyzing code changes under the commit hash: $LATEST_COMMIT";
FILES_UNDER_THE_LATEST_COMMIT=$(git diff-tree --no-commit-id --name-only -r $LATEST_COMMIT);
echo "Files under the commit:";
echo $FILES_UNDER_THE_LATEST_COMMIT;
MATCHES=$(echo $FILES_UNDER_THE_LATEST_COMMIT | grep '.py');
echo "Files under the commit with Python extension: $MATCHES";
echo "Starting linting...";
if echo $MATCHES | grep -q '.py';
then
    echo $MATCHES | xargs pylint --rcfile .pylintrc;
else
    echo "Nothing to lint";
fi
Here is my GitHub Actions config:
name: Pylint
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        if: "!contains(github.event.head_commit.message, 'NO_LINT')"
        run: |
          python -m pip install --upgrade pip
          pip install pylint psycopg2 snowflake-connector-python pymssql
          chmod +x .github/workflows/run_linting.sh
      - name: Analysing all Python scripts in the project with Pylint
        if: "contains(github.event.head_commit.message, 'CHECK_ALL')"
        run: pylint --rcfile .pylintrc lib processes tests
      - name: Analysing the latest committed changes with Pylint
        if: "!contains(github.event.head_commit.message, 'NO_LINT')"
        run: .github/workflows/run_linting.sh
What I get in GitHub versus what I get on my computer: (screenshots omitted; the GitHub run finds nothing to lint, while the local run picks up the changed files).

Here's your problem in a nutshell:
steps:
  - uses: actions/checkout@v3
By default, checkout@v2 and checkout@v3 make a shallow (depth-1), single-branch clone. Such a clone has exactly one commit in it: the most recent one.
As a consequence, this:
git diff-tree --no-commit-id --name-only -r $LATEST_COMMIT
produces no output at all. There's no parent commit available to compare against. (I'd argue that this is a bit of a bug in Git: git diff-tree should notice that the parent is missing due to the .git/shallow grafts file. However, git diff-tree traditionally produces an empty diff for a root commit, and without special handling, the shallow clone makes git diff-tree think this is a root commit. Oddly, the user-oriented git diff would treat every file as added: still not what you want, but it would at least have produced output.)
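You can reproduce this locally (a sketch; the repository URL is a placeholder):
git clone --depth 1 https://example.com/some/repo.git shallow
cd shallow
git rev-list --count HEAD                         # prints 1: only one commit is present
git diff-tree --no-commit-id --name-only -r HEAD  # prints nothing: no parent to diff against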
To fix this, force the depth to be at least 2. Using fetch-depth: 0 forces a full (non-shallow) clone, but the point of using a shallow, single-branch clone is to speed up the action by omitting unnecessary commits. As only the first two "layers" of commits are required here, fetch-depth: 2 provides the correct number.
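A minimal sketch of the adjusted checkout step (fetch-depth is the standard input of actions/checkout):
steps:
  - uses: actions/checkout@v3
    with:
      fetch-depth: 2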
Side note: your bash code terminates every command with a semicolon. This works fine, but is unnecessary (it reminds me of doing too much C or C++ programming¹). Also, you can just run git diff-tree <options> HEAD: there's no need for a separate git rev-parse step (though you might still want it for the echo). A trimmed version of the script is sketched after the footnote.
¹As I switch from C to C++ to Go to Python to shell and so on, I either put in too many or too few parentheses and semicolons, leading to C compiler errors from:
if x == 0 {
because my brain is in Go mode. 😀 When I switch back to hacking on the Go code, I put in too many parentheses, but gofmt removes them and there's no compile error.
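Putting the side note together, the lint script might be trimmed to something like this (a sketch; the anchored pattern '\.py$' is a tightening of the original grep '.py', which would also match names that merely contain "py"):
#!/usr/bin/env bash
echo "Analyzing code changes under the commit hash: $(git rev-parse HEAD)"
MATCHES=$(git diff-tree --no-commit-id --name-only -r HEAD | grep '\.py$')
if [ -n "$MATCHES" ]
then
    echo "$MATCHES" | xargs pylint --rcfile .pylintrc
else
    echo "Nothing to lint"
fi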

Related

GitHub Action to execute a Python script that creates a file, then commit and push this file

My repo contains a main.py that generates an HTML map and saves results in a CSV. I want the action to:
execute the Python script (-> this seems to be OK)
add, commit, and push the generated file to the main branch, so that it is available in the page associated with the repo.
name: refresh map
on:
  schedule:
    - cron: "30 11 * * *" # runs at 11:30 UTC every day
jobs:
  getdataandrefreshmap:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v3 # checkout the repository content to the GitHub runner
      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.8 # install the python version needed
      - name: Install dependencies
        run: |
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: execute py script
        uses: actions/checkout@v3
        run: |
          python main.py
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add .
          git commit -m "crongenerated"
          git push
The GitHub Action does not pass when I include the second uses: actions/checkout@v3 and the git commands.
Thanks in advance for your help
If you want to run a script, you don't need an additional checkout step for that. There is a difference between steps that use an action (uses) and steps that execute shell commands directly (run). You can read more about it here.
In your configuration file, you mix the two in the last step. You don't need an additional checkout step because the repo from the first step is still checked out. So you can just use the following workflow:
name: refresh map
on:
  schedule:
    - cron: "30 11 * * *" # runs at 11:30 UTC every day
jobs:
  getdataandrefreshmap:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v3 # checkout the repository content to the GitHub runner
      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.8 # install the python version needed
      - name: Install dependencies
        run: |
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: execute py script
        run: |
          python main.py
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add .
          git commit -m "crongenerated"
          git push
I tested it with a dummy repo and everything worked.

Self-hosted runner, Windows, environment variables, saving paths

I'm trying to calculate paths for a pip install from an internal devpi server. I'm running a self-hosted runner on a Windows Server virtual machine. I'm trying to install the latest pip package to the tool directory by calculating the path as follows:
- name: pip install xmlcli
  env:
    MYTOOLS: ${{ runner.tool_cache }}\mytools
  run: |
    echo "${{ runner.tool_cache }}"
    $env:MYTOOLS
    pip install --no-cache-dir --upgrade mytools.xmlcli --target=$env:MYTOOLS -i ${{ secrets.PIP_INDEX_URL }}
    echo "XMLCLI={$env:MYTOOLS}\mytools\xmlcli" >> $GITHUB_ENV
- name: test xmlcli
  run: echo "${{ env.XMLCLI }}"
As you can see, I've had some noob issues trying to output the env variable on Windows. I came to the conclusion that under Windows the run commands are executed via PowerShell, hence the $env:MYTOOLS usage.
The problem is that the echo "XMLCLI=..." back to GITHUB_ENV doesn't seem to be working properly: the test xmlcli step returns an empty string.
I'm pretty sure I tried several different iterations of the echo command, but I haven't been successful.
Is there a video/doc/something that clearly lays out the usage of "path arithmetic" within the GitHub Actions environment?
You need to append to $env:GITHUB_ENV, or you can set the shell used by your run step.
When using shell pwsh, then you can use:
"{environment_variable_name}={value}" >> $env:GITHUB_ENV
When using shell powershell
"{environment_variable_name}={value}" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
But in your case, if you're more familiar with bash, you can force the run step to always use bash:
- run: |
    # your stuff here
  shell: bash
See:
run: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
Workflow commands for actions: https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions?tool=bash
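For illustration, a minimal two-step sketch (step and variable names are made up) that writes a variable with pwsh in one step and reads it in the next:
- name: set variable
  shell: pwsh
  run: |
    "MY_VAR=hello" >> $env:GITHUB_ENV
- name: read variable
  shell: pwsh
  run: |
    Write-Host "MY_VAR is: $env:MY_VAR"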
@jessehouwing gave me enough information to test a solution to my particular problem. Assuming Windows runners always run on PowerShell, the answer is:
- name: pip install xmlcli
  env:
    MYTOOLS: ${{ runner.tool_cache }}\mytools
  run: |
    echo "${{ runner.tool_cache }}"
    $env:MYTOOLS
    pip install --no-cache-dir --upgrade mytools.xmlcli --target=$env:MYTOOLS -i ${{ secrets.PIP_INDEX_URL }}
    "XMLCLI=$env:MYTOOLS\mytools\xmlcli" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
- name: test xmlcli
  run: echo "${{ env.XMLCLI }}"
Incidentally, I was using
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-environment-variable
as a reference; it didn't seem to give the PowerShell equivalent.

Python/Git/Azure Repos - Read remote file without Pulling/Downloading [duplicate]

How do I checkout just one file from a git repo?
Originally (2012), I mentioned git archive (see Jared Forsyth's answer and Robert Knight's answer); since git 1.7.9.5 (March 2012), there is Paul Brannan's answer:
git archive --format=tar --remote=origin HEAD:path/to/directory -- filename | tar -O -xf -
But: in 2013, that was no longer possible for remote https://github.com URLs.
See the old page "Can I archive a repository?"
The current (2018) page "About archiving content and data on GitHub" recommends using third-party services like GHTorrent or GH Archive.
So you can also deal with local copies/clones:
You could alternatively do the following if you have a local copy of the bare repository as mentioned in this answer,
git --no-pager --git-dir /path/to/bar/repo.git show branch:path/to/file >file
Or you must first clone the repo, meaning you get the full history:
in the .git repo
in the working tree.
But then you can do a sparse checkout (if you are using Git 1.7+):
enable the sparse checkout option (git config core.sparsecheckout true)
add what you want to see to the .git/info/sparse-checkout file
re-read the working tree to only display what you need
To re-read the working tree:
$ git read-tree -m -u HEAD
That way, you end up with a working tree including precisely what you want (even if it is only one file)
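Put together, the sparse checkout steps above look like this inside an existing clone (a sketch; path/to/file is a placeholder):
git config core.sparsecheckout true
echo "path/to/file" >> .git/info/sparse-checkout
git read-tree -m -u HEAD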
Richard Gomes points (in the comments) to "How do I clone, fetch or sparse checkout a single directory or a list of directories from git repository?"
It is a bash function which avoids downloading the history, retrieves a single branch, and retrieves a list of files or directories you need.
With Git 2.40 (Q1 2023), the logic to see if we are using the "cone" mode by checking the sparsity patterns has been tightened to avoid mistaking a pattern that names a single file as specifying a cone.
See commit 5842710 (03 Jan 2023) by William Sprent (williams-unity).
(Merged by Junio C Hamano -- gitster -- in commit ab85a7d, 16 Jan 2023)
dir: check for single file cone patterns
Signed-off-by: William Sprent
Acked-by: Victoria Dye
The sparse checkout documentation states that the cone mode pattern set is limited to patterns that either recursively include directories or patterns that match all files in a directory.
In the sparse checkout file, the former manifest in the form:
/A/B/C/
while the latter become a pair of patterns either in the form:
/A/B/
!/A/B/*/
or in the special case of matching the toplevel files:
/*
!/*/
The 'add_pattern_to_hashsets()' function contains checks which serve to disable cone-mode when non-cone patterns are encountered.
However, these do not catch when the pattern list attempts to match a single file or directory, e.g. a pattern in the form:
/A/B/C
This causes sparse-checkout to exhibit unexpected behaviour when such a pattern is in the sparse-checkout file and cone mode is enabled.
Concretely, with the pattern like the above, sparse-checkout, in non-cone mode, will only include the directory or file located at '/A/B/C'.
However, with cone mode enabled, sparse-checkout will instead just manifest the toplevel files but not any file located at '/A/B/C'.
Relatedly, issues occur when supplying the same kind of filter when partial cloning with '--filter=sparse:oid=<oid>'.
'upload-pack' will correctly just include the objects that match the non-cone pattern matching.
Which means that checking out the newly cloned repo with the same filter, but with cone mode enabled, fails due to missing objects.
To fix these issues, add a cone mode pattern check that asserts that every pattern is either a directory match or the pattern '/*'.
Add a test to verify the new pattern check and modify another to reflect that non-directory patterns are caught earlier.
First clone the repo with the -n option, which suppresses the default checkout of all files, and the --depth 1 option, which means it only gets the most recent revision of each file:
git clone -n git://path/to/the_repo.git --depth 1
Then check out just the file you want like so:
cd the_repo
git checkout HEAD name_of_file
If you already have a copy of the git repo, you can always check out a version of a file by using git log to find the hash-id (for example 3cdc61015724f9965575ba954c8cd4232c8b42e4) and then simply typing:
git checkout hash-id path-to-file
Here is an actual example:
git checkout 3cdc61015724f9965575ba954c8cd4232c8b42e4 /var/www/css/page.css
Normally it's not possible to download just one file from git without downloading the whole repository, as suggested in the first answer.
That's because Git doesn't store files the way you might think (as CVS/SVN do); it generates them based on the entire history of the project.
But there are some workarounds for specific cases. Examples below with placeholders for user, project, branch, filename.
GitHub
wget https://raw.githubusercontent.com/user/project/branch/filename
GitLab
wget https://gitlab.com/user/project/raw/branch/filename
GitWeb
If you're using Git on the Server - GitWeb, then you may try, for example (change it to the right path):
wget "http://example.com/gitweb/?p=example;a=blob_plain;f=README.txt;hb=HEAD"
GitWeb at drupalcode.org
Example:
wget "http://drupalcode.org/project/ads.git/blob_plain/refs/heads/master:/README.md"
googlesource.com
There is an undocumented feature that allows you to download base64-encoded versions of raw files:
curl "https://chromium.googlesource.com/chromium/src/net/+/master/http/transport_security_state_static.json?format=TEXT" | base64 --decode
In other cases, check whether your Git repository uses a web interface.
If it doesn't, you may consider pushing your code to an external service such as GitHub or Bitbucket and using it as a mirror.
If you don't have wget installed, try curl -O <url> instead.
Minimal Guide
git checkout -- <filename>
Ref: https://git-scm.com/docs/git-checkout
Dup: Undo working copy modifications of one file in Git?
git checkout branch_or_version -- path/file
example: git checkout HEAD -- main.c
Now we can! As this is the first result on Google, I thought I'd update it to the latest standing. With the advent of git 1.7.9.5, we have the git archive command, which allows you to retrieve a single file from a remote host.
git archive --remote=git://git.foo.com/project.git HEAD:path/in/repo filename | tar -x
See answer in full here https://stackoverflow.com/a/5324532/290784
Here is the complete solution for pulling and pushing only a particular file inside a git repository:
First you need to clone the git repository with the special --no-checkout hint:
git clone --no-checkout <git url>
The next step is to get rid of unstaged files in the index with the command:
git reset
Now you can start pulling the files you want to change with the command:
git checkout origin/master <path to file>
Now the repository folder contains files that you may start editing right away. After editing, you need to execute the plain and familiar sequence of commands:
git add <path to file>
git commit -m <message text>
git push
Working in Git 1.7.2.2.
For example, you have a remote some_remote with branches branch1 and branch32.
To check out a specific file, you call these commands:
git checkout remote/branch path/to/file
as an example it will be something like this
git checkout some_remote/branch32 conf/en/myscript.conf
git checkout some_remote/branch1 conf/fr/load.wav
This checkout command will copy the whole file structure conf/en and conf/fr into the current directory where you call these commands (of course I assume you ran git init at some point before)
Very simple:
git checkout from-branch-name -- path/to/the/file/you/want
This will not checkout the from-branch-name branch. You will stay on whatever branch you are on, and only that single file will be checked out from the specified branch.
Here's the relevant part of the manpage for git-checkout
git checkout [-p|--patch] [<tree-ish>] [--] <pathspec>...
When <paths> or --patch are given, git checkout does not switch
branches. It updates the named paths in the working tree from the
index file or from a named <tree-ish> (most often a commit). In
this case, the -b and --track options are meaningless and giving
either of them results in an error. The <tree-ish> argument can be
used to specify a specific tree-ish (i.e. commit, tag or tree) to
update the index for the given paths before updating the working
tree.
Hat tip to Ariejan de Vroom who taught me this from this blog post.
git clone --filter from Git 2.19
This option will actually skip fetching most unneeded objects from the server:
git clone --depth 1 --no-checkout --filter=blob:none \
"file://$(pwd)/server_repo" local_repo
cd local_repo
git checkout master -- mydir/myfile
The server should be configured with:
git config --local uploadpack.allowfilter 1
git config --local uploadpack.allowanysha1inwant 1
There is no server support as of v2.19.0, but it can already be locally tested.
TODO: --filter=blob:none skips all blobs, but still fetches all tree objects. On a normal repo, though, the trees should be tiny compared to the files themselves, so this is already good enough. Asked at: https://www.spinics.net/lists/git/msg342006.html Devs replied that a --filter=tree:0 is in the works to do that.
Remember that --depth 1 already implies --single-branch, see also: How do I clone a single branch in Git?
file://$(path) is required to overcome git clone protocol shenanigans: How to shallow clone a local git repository with a relative path?
The format of --filter is documented on man git-rev-list.
An extension was made to the Git remote protocol to support this feature.
Docs on Git tree:
https://github.com/git/git/blob/v2.19.0/Documentation/technical/partial-clone.txt
https://github.com/git/git/blob/v2.19.0/Documentation/rev-list-options.txt#L720
https://github.com/git/git/blob/v2.19.0/t/t5616-partial-clone.sh
Test it out
#!/usr/bin/env bash
set -eu
list-objects() (
    git rev-list --all --objects
    echo "master commit SHA: $(git log -1 --format="%H")"
    echo "mybranch commit SHA: $(git log -1 --format="%H")"
    git ls-tree master
    git ls-tree mybranch | grep mybranch
    git ls-tree master~ | grep root
)
# Reproducibility.
export GIT_COMMITTER_NAME='a'
export GIT_COMMITTER_EMAIL='a'
export GIT_AUTHOR_NAME='a'
export GIT_AUTHOR_EMAIL='a'
export GIT_COMMITTER_DATE='2000-01-01T00:00:00+0000'
export GIT_AUTHOR_DATE='2000-01-01T00:00:00+0000'
rm -rf server_repo local_repo
mkdir server_repo
cd server_repo
# Create repo.
git init --quiet
git config --local uploadpack.allowfilter 1
git config --local uploadpack.allowanysha1inwant 1
# First commit.
# Directories present in all branches.
mkdir d1 d2
printf 'd1/a' > ./d1/a
printf 'd1/b' > ./d1/b
printf 'd2/a' > ./d2/a
printf 'd2/b' > ./d2/b
# Present only in root.
mkdir 'root'
printf 'root' > ./root/root
git add .
git commit -m 'root' --quiet
# Second commit only on master.
git rm --quiet -r ./root
mkdir 'master'
printf 'master' > ./master/master
git add .
git commit -m 'master commit' --quiet
# Second commit only on mybranch.
git checkout -b mybranch --quiet master~
git rm --quiet -r ./root
mkdir 'mybranch'
printf 'mybranch' > ./mybranch/mybranch
git add .
git commit -m 'mybranch commit' --quiet
echo "# List and identify all objects"
list-objects
echo
# Restore master.
git checkout --quiet master
cd ..
# Clone. Don't checkout for now, only .git/ dir.
git clone --depth 1 --quiet --no-checkout --filter=blob:none "file://$(pwd)/server_repo" local_repo
cd local_repo
# List missing objects from master.
echo "# Missing objects after --no-checkout"
git rev-list --all --quiet --objects --missing=print
echo
echo "# Git checkout fails without internet"
mv ../server_repo ../server_repo.off
! git checkout master
echo
echo "# Git checkout fetches the missing file from internet"
mv ../server_repo.off ../server_repo
git checkout master -- d1/a
echo
echo "# Missing objects after checking out d1/a"
git rev-list --all --quiet --objects --missing=print
GitHub upstream.
Output in Git v2.19.0:
# List and identify all objects
c6fcdfaf2b1462f809aecdad83a186eeec00f9c1
fc5e97944480982cfc180a6d6634699921ee63ec
7251a83be9a03161acde7b71a8fda9be19f47128
62d67bce3c672fe2b9065f372726a11e57bade7e
b64bf435a3e54c5208a1b70b7bcb0fc627463a75 d1
308150e8fddde043f3dbbb8573abb6af1df96e63 d1/a
f70a17f51b7b30fec48a32e4f19ac15e261fd1a4 d1/b
84de03c312dc741d0f2a66df7b2f168d823e122a d2
0975df9b39e23c15f63db194df7f45c76528bccb d2/a
41484c13520fcbb6e7243a26fdb1fc9405c08520 d2/b
7d5230379e4652f1b1da7ed1e78e0b8253e03ba3 master
8b25206ff90e9432f6f1a8600f87a7bd695a24af master/master
ef29f15c9a7c5417944cc09711b6a9ee51b01d89
19f7a4ca4a038aff89d803f017f76d2b66063043 mybranch
1b671b190e293aa091239b8b5e8c149411d00523 mybranch/mybranch
c3760bb1a0ece87cdbaf9a563c77a45e30a4e30e
a0234da53ec608b54813b4271fbf00ba5318b99f root
93ca1422a8da0a9effc465eccbcb17e23015542d root/root
master commit SHA: fc5e97944480982cfc180a6d6634699921ee63ec
mybranch commit SHA: fc5e97944480982cfc180a6d6634699921ee63ec
040000 tree b64bf435a3e54c5208a1b70b7bcb0fc627463a75 d1
040000 tree 84de03c312dc741d0f2a66df7b2f168d823e122a d2
040000 tree 7d5230379e4652f1b1da7ed1e78e0b8253e03ba3 master
040000 tree 19f7a4ca4a038aff89d803f017f76d2b66063043 mybranch
040000 tree a0234da53ec608b54813b4271fbf00ba5318b99f root
# Missing objects after --no-checkout
?f70a17f51b7b30fec48a32e4f19ac15e261fd1a4
?8b25206ff90e9432f6f1a8600f87a7bd695a24af
?41484c13520fcbb6e7243a26fdb1fc9405c08520
?0975df9b39e23c15f63db194df7f45c76528bccb
?308150e8fddde043f3dbbb8573abb6af1df96e63
# Git checkout fails without internet
fatal: '/home/ciro/bak/git/test-git-web-interface/other-test-repos/partial-clone.tmp/server_repo' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
# Git checkout fetches the missing directory from internet
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (1/1), 45 bytes | 45.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (1/1), 45 bytes | 45.00 KiB/s, done.
# Missing objects after checking out d1
?f70a17f51b7b30fec48a32e4f19ac15e261fd1a4
?8b25206ff90e9432f6f1a8600f87a7bd695a24af
?41484c13520fcbb6e7243a26fdb1fc9405c08520
?0975df9b39e23c15f63db194df7f45c76528bccb
Conclusions: all blobs except d1/a are missing. E.g. f70a17f51b7b30fec48a32e4f19ac15e261fd1a4, which is d1/b, is not there after checking out d1/.
Note that root/root and mybranch/mybranch are also missing, but --depth 1 hides that from the list of missing files. If you remove --depth 1, then they show on the list of missing files.
Two variants on what's already been given:
git archive --format=tar --remote=git://git.foo.com/project.git HEAD:path/to/directory filename | tar -O -xf -
and:
git archive --format=zip --remote=git://git.foo.com/project.git HEAD:path/to/directory filename | funzip
These write the file to standard output.
You can do it by running:
git archive --format=tar --remote=origin HEAD | tar xf -
git archive --format=tar --remote=origin HEAD <file> | tar xf -
Say the file name is 123.txt, this works for me:
git checkout --theirs 123.txt
If the file is inside a directory A, make sure to specify it correctly:
git checkout --theirs "A/123.txt"
In git you do not 'check out' files before you update them; it seems like this is what you are after.
Many systems like ClearCase, CVS and so on require you to 'check out' a file before you can make changes to it. Git does not require this. You clone a repository and then make changes in your local copy of the repository.
Once you have updated files, you can do:
git status
To see what files have been modified. You add the ones you want to commit to the index first with (the index is like a list of files to be checked in):
git add .
or
git add blah.c
Then git status will show you which files were modified and which are in the index, ready to be committed or checked in.
To commit files to your copy of repository do:
git commit -a -m "commit message here"
See git website for links to manuals and guides.
If you need a specific file from a specific branch from a remote Git repository the command is:
git archive --remote=git://git.example.com/project.git refs/heads/mybranch path/to/myfile |tar xf -
The rest can be derived from @VonC's answer:
If you need a specific file from the master branch it is:
git archive --remote=git://git.example.com/project.git HEAD path/to/myfile |tar xf -
If you need a specific file from a tag it is:
git archive --remote=git://git.example.com/project.git mytag path/to/myfile |tar xf -
This works for me, using git with some shell commands:
git clone --no-checkout --depth 1 git.example.com/project.git && cd project && git show HEAD:path/to/file_you_need > ../file_you_need && cd .. && rm -rf project
Another solution, similar to the one using --filter=blob:none, is to use --filter=tree:0 (you can read an explanation of the differences here).
This method is usually faster than the blob one because it doesn't download the tree structure, but it has a drawback. Given that you are delaying the retrieval of the trees, you pay a penalty when you enter the repo directory (depending on the size and structure of your repo, it may be many times slower than a simple shallow clone).
If that's the case for you, you can avoid it by not entering the repo:
git clone -n --filter=tree:0 <repo_url> tgt_dir
git -C tgt_dir checkout <branch> -- <filename>
cat tgt_dir/<filename> # or move it to another place and delete tgt_dir ;)
Take into consideration that if you have to check out multiple files, the tree population will also impact your performance, so I recommend this for a single file, and only if the repo is large enough to be worth all these steps.
It sounds like you're trying to carry over an idea from centralized version control, which git by nature is not - it's distributed. If you want to work with a git repository, you clone it. You then have all of the contents of the work tree, and all of the history (well, at least everything leading up to the tip of the current branch), not just a single file or a snapshot from a single commit.
git clone /path/to/repo
git clone git://url/of/repo
git clone http://url/of/repo
I am adding this answer as an alternative to doing a formal checkout or some similar local operation. Assuming that you have access to the web interface of your Git provider, you might be able to directly view any file at a given desired commit. For example, on GitHub you may use something like:
https://github.com/hubotio/hubot/blob/ed25584f/src/adapter.coffee
Here ed25584f is the first 8 characters from the SHA-1 hash of the commit of interest, followed by the path to the source file.
Similarly, on Bitbucket we can try:
https://bitbucket.org/cofarrell/stash-browse-code-plugin/src/06befe08
In this case, we place the commit hash at the end of the source URL.
In CodeCommit (the git offering of Amazon AWS) you can do this:
aws codecommit \
get-file --repository-name myrepo \
--commit-specifier master \
--file-path path/myfile \
--output text \
--query fileContent |
base64 --decode > myfile
I don't see what worked for me listed here, so I'll include it in case anybody is in my situation.
My situation, I have a remote repository of maybe 10,000 files and I need to build an RPM file for my Linux system. The build of the RPM includes a git clone of everything. All I need is one file to start the RPM build. I can clone the entire source tree which does what I need but it takes an extra two minutes to download all those files when all I need is one. I tried to use the git archive option discussed and I got “fatal: Operation not supported by protocol.” It seems I have to get some sort of archive option enabled on the server and my server is maintained by bureaucratic thugs that seem to enjoy making it difficult to get things done.
What I finally did was I went into the web interface for bitbucket and viewed the one file I needed. I did a right click on the link to download a raw copy of the file and selected “copy shortcut” from the resulting popup. I could not just download the raw file because I needed to automate things and I don’t have a browser interface on my Linux server.
For the sake of discussion, that resulted in the URL:
https://ourArchive.ourCompany.com/projects/ThisProject/repos/data/raw/foo/bar.spec?at=refs%2Fheads%2FTheBranchOfInterest
I could not directly download this file from the bitbucket repository because I needed to sign in first. After a little digging, I found this worked:
On Linux:
echo "myUser:myPass123"| base64
bXlVc2VyOm15UGFzczEyMwo=
curl -H 'Authorization: Basic bXlVc2VyOm15UGFzczEyMwo=' 'https://ourArchive.ourCompany.com/projects/ThisProject/repos/data/raw/foo/bar.spec?at=refs%2Fheads%2FTheBranchOfInterest' > bar.spec
This combination allowed me to download the one file I needed to build everything else.
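One caveat, not from the original answer: echo appends a newline, so the encoded string above actually contains "myUser:myPass123\n". If a server rejects the credentials for that reason, printf omits the trailing newline:
printf '%s' 'myUser:myPass123' | base64
bXlVc2VyOm15UGFzczEyMw==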
If you have a locally changed file (the one messing with git pull), just do:
git checkout origin/master filename
git checkout - switch branches or restore working tree files (here we switch nothing, we just overwrite the file),
origin/master - your current branch, or you can use a specific revision number, for example cd0fa799c582e94e59e5b21e872f5ffe2ad0154b,
filename - the path from the project main directory (where the .git directory lives).
So if you have the structure:
.git
public/index.html
public/css/style.css
vendors
composer.lock
and want to reload index.html, just use public/index.html.
Yes, you can do this with the following command, which downloads one specific file (note the capital -O, which names the output file):
wget -O <DesiredFileName> <GitFilePath>\?token\=<personalGitToken>
Example:
wget -O javascript-test-automation.md https://github.com/akashgupta03/awesome-test-automation/blob/master/javascript-test-automation.md\?token\=<githubPersonalToken>
git checkout <other-branch> -- <single-file> works for me on git 2.37.1.
However, the file is (git-magically) staged for commit, and I cannot see the git diff properly.
I then run git restore --staged db/structure.sql to unstage it.
That way I DO have the file in the exact version I want, and I can see the difference with other versions of that file.
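Put together, the sequence sketched above (db/structure.sql stands in for any file):
git checkout <other-branch> -- db/structure.sql
git restore --staged db/structure.sql
git diff db/structure.sql   # now shows the change against your current branch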
If you have edited a local version of a file and wish to revert to the original version maintained on the central server, this can be easily achieved using Git Extensions.
Initially the file will be marked for commit, since it has been modified
Select (double click) the file in the file tree menu
The revision tree for the single file is listed.
Select the top/HEAD of the tree, right-click, and choose "Save as"
Save the file to overwrite the modified local version of the file
The file now has the correct version and will no longer be marked for commit!
Easy!
If you only need to download the file, there's no need to check it out with Git.
GitHub Mate is a much easier way to do so: it's a Chrome extension that lets you click the file icon to download it. It's also open source.

How can I recover the commit message when the git commit-msg hook fails?

I'm using one of git's hooks, commit-msg, to validate a commit message for certain format and contents.
However, whenever a commit message fails the hook, I sometimes lose a paragraph or more of text from my message.
I've played around with saving it off somewhere, but I'm not sure how to restore it to the user when they attempt to fix the failed commit message; only the last good commit message shows up.
Has anyone else dealt with this before? How did you solve it?
Info: I am using Python scripts for my validation.
The commit message is stored in .git/COMMIT_EDITMSG. After a "failed" committing attempt, you could run:
git commit --edit --file=.git/COMMIT_EDITMSG
or shorter, e.g.:
git commit -eF .git/COMMIT_EDITMSG
which will load the bad commit message in your $EDITOR (or the editor you set up in your Git configuration), so that you can try to fix the commit message. You could also set up an alias for the above, with:
git config --global alias.fix-commit 'commit --edit --file=.git/COMMIT_EDITMSG'
and then use git fix-commit instead.
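For context (an illustration, not part of the answer): a commit-msg hook receives the path of the message file, normally .git/COMMIT_EDITMSG, as its first argument, and a non-zero exit rejects the commit while leaving that file in place. A minimal Python sketch of such a hook, with a made-up validation rule:
#!/usr/bin/env python3
# .git/hooks/commit-msg (sketch): the 50-character subject rule is a hypothetical example.
import sys

def main() -> int:
    msg_path = sys.argv[1]  # git passes the path of the commit message file
    with open(msg_path, encoding="utf-8") as f:
        subject = f.readline().rstrip("\n")
    if len(subject) > 50:
        print("commit-msg: subject line exceeds 50 characters", file=sys.stderr)
        return 1  # rejecting the commit leaves the message in the file for recovery
    return 0

if __name__ == "__main__":
    sys.exit(main())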
Background
As stated, when running git commit, git starts your editor pointing to
the $GIT_DIR/COMMIT_EDITMSG file. Unless the commit-msg hook in question
moves/deletes/damages the file, the message should still be there.
I suppose that reusing the message is not the default behavior because it might
interfere with the prepare-commit-msg hook. Ideally, there would be a toggle
available to enable reusing by default, in order to avoid data loss. The
next-best thing would be to override a git sub-command with a git alias,
but unfortunately it is currently not possible and that is
unlikely to change. So we are left with creating a custom alias for it.
I went with an alias similar to the one in the accepted answer:
git config alias.recommit \
'!git commit -F "$(git rev-parse --git-dir)/COMMIT_EDITMSG" --edit'
Then, when running git recommit, the rejected commit message's content should
appear in the editor.
Addition
Note that both aliases would fail for the first commit in the repository, since the
COMMIT_EDITMSG file would not have been created yet. To make it also work in
that case, it looks a bit more convoluted:
git config alias.recommit \
'!test -f "$(git rev-parse --git-dir)/COMMIT_EDITMSG" &&
git commit -F "$(git rev-parse --git-dir)/COMMIT_EDITMSG" --edit ||
git commit'
Which can be shortened to:
git config alias.recommit \
'!cm="$(git rev-parse --git-dir)/COMMIT_EDITMSG" &&
test -f "$cm" && git commit -F "$cm" --edit || git commit'
Either way, considering the added safety, for interactive usage you could even
use one of the aforementioned aliases by default instead of git commit.
You could also make a wrapper for git itself and divert the calls based
on the arguments (i.e.: on the sub-command), though that would require ensuring
that all subsequent calls to git refer to the original binary, lest they
result in infinite recursion:
git () {
    cm="$(git rev-parse --git-dir)/COMMIT_EDITMSG"
    case "$1" in
    commit)
        shift
        test -f "$cm" && command git commit -F "$cm" --edit "$@" ||
            command git commit "$@"
        ;;
    *)
        command git "$@";;
    esac
}
Note that if the above is added to your rc file (e.g.: ~/.bashrc), then every
call to git present in it will refer to the wrapper, unless you prepend them
with command as well.
Novelty
Finally, I just learned that aliasing to a wrapper file with a different
name is an option:
PATH="$HOME/bin:$PATH"
export PATH
alias git='my-git'
So the wrapper (e.g.: ~/bin/my-git) can be much simpler:
#!/bin/sh
cm="$(git rev-parse --git-dir)/COMMIT_EDITMSG"
case "$1" in
commit)
    shift
    test -f "$cm" && git commit -F "$cm" --edit "$@" ||
        git commit "$@"
    ;;
*)
    git "$@";;
esac
This also avoids interference, as aliases are not expanded when used in external scripts.

Is there a Python/Django equivalent to Rails bundler-audit?

I'm fairly new to Django so apologies in advance if this is obvious.
In Rails projects, I use a gem called bundler-audit to check that the patch levels of the gems I'm installing don't include security vulnerabilities. Normally, I incorporate running bundler-audit into my CI pipeline so that any time I deploy, I get a warning (and a failing build) if a gem has a security vulnerability.
Is there a similar system for checking vulnerabilities in Python packages?
After writing out this question, I searched around some more and found Safety, which was exactly what I was looking for.
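For a quick local run before wiring it into CI (a sketch; assumes a requirements.txt in the current directory):
pip install safety
safety check -r requirements.txt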
In case anyone else is setting up CircleCI for a Django project and wants to check their packages for vulnerabilities, here is the configuration I used in my .circleci/config.yml:
version: 2
jobs:
  build:
    # build and run tests
  safety_check:
    docker:
      - image: circleci/python:3.6.1
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv env3
            . env3/bin/activate
            pip install safety
            # specify requirements.txt
            safety check -r requirements.txt
  merge_master:
    # merge passing code into master
workflows:
  version: 2
  test_and_merge:
    jobs:
      - build:
          filters:
            branches:
              ignore: master
      - safety_check:
          filters:
            branches:
              ignore: master
      - merge_master:
          filters:
            branches:
              only: develop
          requires:
            - build
            # code is only merged if safety check passes
            - safety_check
To check that this works, run pip install insecure-package && pip freeze > requirements.txt, then push and watch Circle fail.
