gh-pages will not refresh after CircleCI build - python

I have a Python package with documentation built by Sphinx, and a gh-pages branch set up for GitHub Pages. I have a script that, when run locally, publishes my changes without issue. At the end of a CircleCI test cycle the script runs without errors, but the changes do not show up on the actual page. The changes are, however, shown in index.html on the gh-pages branch.
Recap
Make changes locally.
./deploy_docs.sh
Verify the changes: they appear.
Make changes, push to github, have CircleCI run test
CircleCI runs ./deploy_docs.sh
index.html on the gh-pages branch reflects the changes. The website does not serve the new index.html.
My deploy script:
#!/bin/sh
set -e
git checkout gh-pages
git pull --unshallow origin gh-pages
git reset --hard origin/$CIRCLE_BRANCH
sphinx-apidoc -o docs/source dslib
sphinx-build docs/source .
git add -A
git commit -m "Deploy to GitHub pages $CIRCLE_SHA1 [ci skip]"
git push -f origin gh-pages
Why is the GitHub Pages site not updating after the push to gh-pages from CircleCI?

Related

how to get git remote push branch name dynamically in script/Travis?

I have a shell script; whenever a user makes changes to a file in git, Travis starts running. In the Travis YAML file I have specified the shell script's path, and the shell script compares my current working branch with the remote push branch.
I am using the below command to get the change file and running Pylint on the changed file.
all_changed_files=$(git diff-tree --name-only -r --no-commit-id --line-prefix=$(git rev-parse --show-toplevel)/ $TRAVIS_BRANCH origin/main | grep '\.py'$)
In the command above, origin/main is fixed and works fine. I run into problems when I try to push changes from $TRAVIS_BRANCH (the current local branch) to a new branch, say origin/ABC (git push -u origin ABC).
So is there any way to get the name of the branch a user is pushing to, so that I can use it in my script file? (I have checked the Travis CI documentation, but with no luck.)

Python/Git/Azure Repos - Read remote file without Pulling/Downloading [duplicate]

How do I checkout just one file from a git repo?
Originally (2012), I mentioned git archive (see Jared Forsyth's answer and Robert Knight's answer) with, since git 1.7.9.5 (March 2012), Paul Brannan's answer:
git archive --format=tar --remote=origin HEAD:path/to/directory -- filename | tar -O -xf -
But: in 2013, that was no longer possible for remote https://github.com URLs.
See the old page "Can I archive a repository?"
The current (2018) page "About archiving content and data on GitHub" recommends using third-party services like GHTorrent or GH Archive.
So you can also work with a local copy or clone:
You could alternatively do the following if you have a local copy of the bare repository, as mentioned in this answer:
git --no-pager --git-dir /path/to/bare/repo.git show branch:path/to/file >file
Or you can clone the repo first, which means you get the full history:
in the .git repo
in the working tree.
But then you can do a sparse checkout (if you are using Git 1.7+):
enable the sparse checkout option (git config core.sparsecheckout true)
add what you want to see to the .git/info/sparse-checkout file
re-read the working tree to only display what you need
To re-read the working tree:
$ git read-tree -m -u HEAD
That way, you end up with a working tree including precisely what you want (even if it is only one file)
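As a rough illustration of the steps above, a small Python helper could write the sparse-checkout file; the repo path and pattern list here are made-up examples, and the git commands still have to be run in the repo afterwards.

```python
from pathlib import Path

def write_sparse_checkout(repo_dir, patterns):
    """Write .git/info/sparse-checkout and return its content.

    Afterwards, run in the repo:
        git config core.sparsecheckout true
        git read-tree -m -u HEAD
    so the working tree only shows the listed paths.
    """
    info_dir = Path(repo_dir) / ".git" / "info"
    info_dir.mkdir(parents=True, exist_ok=True)
    content = "\n".join(patterns) + "\n"
    (info_dir / "sparse-checkout").write_text(content)
    return content
```

This only prepares the pattern file; the read-tree step is what actually trims the working tree.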
Richard Gomes points (in the comments) to "How do I clone, fetch or sparse checkout a single directory or a list of directories from git repository?"
A bash function which avoids downloading the history, retrieves a single branch, and retrieves a list of the files or directories you need.
With Git 2.40 (Q1 2023), the logic to see if we are using the "cone" mode by checking the sparsity patterns has been tightened to avoid mistaking a pattern that names a single file as specifying a cone.
See commit 5842710 (03 Jan 2023) by William Sprent (williams-unity).
(Merged by Junio C Hamano -- gitster -- in commit ab85a7d, 16 Jan 2023)
dir: check for single file cone patterns
Signed-off-by: William Sprent
Acked-by: Victoria Dye
The sparse checkout documentation states that the cone mode pattern set is limited to patterns that either recursively include directories or patterns that match all files in a directory.
In the sparse checkout file, the former manifests in the form:
/A/B/C/
while the latter becomes a pair of patterns, either in the form:
/A/B/
!/A/B/*/
or in the special case of matching the toplevel files:
/*
!/*/
The 'add_pattern_to_hashsets()' function contains checks which serve to disable cone-mode when non-cone patterns are encountered.
However, these do not catch when the pattern list attempts to match a single file or directory, e.g. a pattern in the form:
/A/B/C
This causes sparse-checkout to exhibit unexpected behaviour when such a pattern is in the sparse-checkout file and cone mode is enabled.
Concretely, with the pattern like the above, sparse-checkout, in non-cone mode, will only include the directory or file located at '/A/B/C'.
However, with cone mode enabled, sparse-checkout will instead just manifest the toplevel files but not any file located at '/A/B/C'.
Relatedly, issues occur when supplying the same kind of filter when partial cloning with '--filter=sparse:oid=<oid>'.
'upload-pack' will correctly just include the objects that match the non-cone pattern matching.
Which means that checking out the newly cloned repo with the same filter, but with cone mode enabled, fails due to missing objects.
To fix these issues, add a cone mode pattern check that asserts that every pattern is either a directory match or the pattern '/*'.
Add a test to verify the new pattern check and modify another to reflect that non-directory patterns are caught earlier.
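The rule the commit describes can be paraphrased in a few lines of Python; this is only an illustrative re-implementation of the pattern shapes listed above, not Git's actual check.

```python
def is_cone_compatible(pattern):
    """Return True if a sparse-checkout pattern fits cone mode.

    Cone mode only allows recursive directory matches ('/A/B/C/'),
    negated directory-content matches ('!/A/B/*/' or '!/*/'), and
    the toplevel-files pattern '/*'. A bare path like '/A/B/C'
    (no trailing slash) names a single file and must be rejected.
    """
    negated = pattern.startswith("!")
    body = pattern[1:] if negated else pattern
    if body == "/*":
        return not negated          # '/*' matches toplevel files
    if negated and body.endswith("/*/"):
        return True                 # '!/A/B/*/' and '!/*/'
    return not negated and body.endswith("/")  # '/A/B/C/'
```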
First clone the repo with the -n option, which suppresses the default checkout of all files, and the --depth 1 option, which means it only gets the most recent revision of each file
git clone -n git://path/to/the_repo.git --depth 1
Then check out just the file you want like so:
cd the_repo
git checkout HEAD name_of_file
If you already have a copy of the git repo, you can always checkout a version of a file using a git log to find out the hash-id (for example 3cdc61015724f9965575ba954c8cd4232c8b42e4) and then you simply type:
git checkout hash-id path-to-file
Here is an actual example:
git checkout 3cdc61015724f9965575ba954c8cd4232c8b42e4 /var/www/css/page.css
Normally it's not possible to download just one file from git without downloading the whole repository as suggested in the first answer.
That's because Git doesn't store files the way you might think (the way CVS/SVN do); it generates them based on the entire history of the project.
But there are some workarounds for specific cases. Examples below with placeholders for user, project, branch, filename.
GitHub
wget https://raw.githubusercontent.com/user/project/branch/filename
GitLab
wget https://gitlab.com/user/project/raw/branch/filename
GitWeb
If you're using Git on the Server - GitWeb, then you may try, for example (change it to the right path):
wget "http://example.com/gitweb/?p=example;a=blob_plain;f=README.txt;hb=HEAD"
GitWeb at drupalcode.org
Example:
wget "http://drupalcode.org/project/ads.git/blob_plain/refs/heads/master:/README.md"
googlesource.com
There is an undocumented feature that allows you to download base64-encoded versions of raw files:
curl "https://chromium.googlesource.com/chromium/src/net/+/master/http/transport_security_state_static.json?format=TEXT" | base64 --decode
In other cases check if your Git repository is using any web interfaces.
If it's not using any web interface, you may consider pushing your code to an external service such as GitHub or Bitbucket and using it as a mirror.
If you don't have wget installed, try curl -O <url> instead.
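The host-specific URL patterns above can be collected into one small helper; only the GitHub and GitLab shapes from the examples are covered, and the names are placeholders.

```python
def raw_url(host, user, project, branch, filename):
    """Build the raw-file download URL for a hosted repository,
    following the URL shapes shown above (GitHub and GitLab only).
    """
    templates = {
        "github": "https://raw.githubusercontent.com/{u}/{p}/{b}/{f}",
        "gitlab": "https://gitlab.com/{u}/{p}/raw/{b}/{f}",
    }
    return templates[host].format(u=user, p=project, b=branch, f=filename)
```

For private repositories these URLs still require authentication, so pair them with a token or credentials as shown in the other answers.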
Minimal Guide
git checkout -- <filename>
Ref: https://git-scm.com/docs/git-checkout
Dup: Undo working copy modifications of one file in Git?
git checkout branch_or_version -- path/file
example: git checkout HEAD -- main.c
Now we can! As this is the first result on Google, I thought I'd update it to the current state of affairs. With the advent of git 1.7.9.5, we have the git archive command, which allows you to retrieve a single file from a remote host.
git archive --remote=git://git.foo.com/project.git HEAD:path/in/repo filename | tar -x
See answer in full here https://stackoverflow.com/a/5324532/290784
Here is the complete solution for pulling and pushing only a particular file inside git repository:
First you need to clone the git repository with the special flag --no-checkout:
git clone --no-checkout <git url>
The next step is to clear the files staged in the index with the command:
git reset
Now you can start pulling the files you want to change with the command:
git checkout origin/master <path to file>
Now the repository folder contains files that you can start editing right away. After editing, you execute the plain and familiar sequence of commands:
git add <path to file>
git commit -m <message text>
git push
Working in Git 1.7.2.2.
For example, you have a remote some_remote with branches branch1 and branch32.
To check out a specific file, you call these commands:
git checkout remote/branch path/to/file
as an example it will be something like this
git checkout some_remote/branch32 conf/en/myscript.conf
git checkout some_remote/branch1 conf/fr/load.wav
This checkout command will copy the whole file structure conf/en and conf/fr into the current directory where you call these commands (of course I assume you ran git init at some point before)
Very simple:
git checkout from-branch-name -- path/to/the/file/you/want
This will not checkout the from-branch-name branch. You will stay on whatever branch you are on, and only that single file will be checked out from the specified branch.
Here's the relevant part of the manpage for git-checkout
git checkout [-p|--patch] [<tree-ish>] [--] <pathspec>...
When <paths> or --patch are given, git checkout does not switch
branches. It updates the named paths in the working tree from the
index file or from a named <tree-ish> (most often a commit). In
this case, the -b and --track options are meaningless and giving
either of them results in an error. The <tree-ish> argument can be
used to specify a specific tree-ish (i.e. commit, tag or tree) to
update the index for the given paths before updating the working
tree.
Hat tip to Ariejan de Vroom who taught me this from this blog post.
git clone --filter from Git 2.19
This option will actually skip fetching most unneeded objects from the server:
git clone --depth 1 --no-checkout --filter=blob:none \
"file://$(pwd)/server_repo" local_repo
cd local_repo
git checkout master -- mydir/myfile
The server should be configured with:
git config --local uploadpack.allowfilter 1
git config --local uploadpack.allowanysha1inwant 1
There is no server support as of v2.19.0, but it can already be tested locally.
TODO: --filter=blob:none skips all blobs, but still fetches all tree objects. On a normal repo, though, trees should be tiny compared to the files themselves, so this is already good enough. Asked at: https://www.spinics.net/lists/git/msg342006.html The devs replied that a --filter=tree:0 is in the works to do that.
Remember that --depth 1 already implies --single-branch, see also: How do I clone a single branch in Git?
file://$(path) is required to overcome git clone protocol shenanigans: How to shallow clone a local git repository with a relative path?
The format of --filter is documented on man git-rev-list.
An extension was made to the Git remote protocol to support this feature.
Docs on Git tree:
https://github.com/git/git/blob/v2.19.0/Documentation/technical/partial-clone.txt
https://github.com/git/git/blob/v2.19.0/Documentation/rev-list-options.txt#L720
https://github.com/git/git/blob/v2.19.0/t/t5616-partial-clone.sh
Test it out
#!/usr/bin/env bash
set -eu
list-objects() (
git rev-list --all --objects
echo "master commit SHA: $(git log -1 --format="%H")"
echo "mybranch commit SHA: $(git log -1 --format="%H")"
git ls-tree master
git ls-tree mybranch | grep mybranch
git ls-tree master~ | grep root
)
# Reproducibility.
export GIT_COMMITTER_NAME='a'
export GIT_COMMITTER_EMAIL='a'
export GIT_AUTHOR_NAME='a'
export GIT_AUTHOR_EMAIL='a'
export GIT_COMMITTER_DATE='2000-01-01T00:00:00+0000'
export GIT_AUTHOR_DATE='2000-01-01T00:00:00+0000'
rm -rf server_repo local_repo
mkdir server_repo
cd server_repo
# Create repo.
git init --quiet
git config --local uploadpack.allowfilter 1
git config --local uploadpack.allowanysha1inwant 1
# First commit.
# Directories present in all branches.
mkdir d1 d2
printf 'd1/a' > ./d1/a
printf 'd1/b' > ./d1/b
printf 'd2/a' > ./d2/a
printf 'd2/b' > ./d2/b
# Present only in root.
mkdir 'root'
printf 'root' > ./root/root
git add .
git commit -m 'root' --quiet
# Second commit only on master.
git rm --quiet -r ./root
mkdir 'master'
printf 'master' > ./master/master
git add .
git commit -m 'master commit' --quiet
# Second commit only on mybranch.
git checkout -b mybranch --quiet master~
git rm --quiet -r ./root
mkdir 'mybranch'
printf 'mybranch' > ./mybranch/mybranch
git add .
git commit -m 'mybranch commit' --quiet
echo "# List and identify all objects"
list-objects
echo
# Restore master.
git checkout --quiet master
cd ..
# Clone. Don't checkout for now, only .git/ dir.
git clone --depth 1 --quiet --no-checkout --filter=blob:none "file://$(pwd)/server_repo" local_repo
cd local_repo
# List missing objects from master.
echo "# Missing objects after --no-checkout"
git rev-list --all --quiet --objects --missing=print
echo
echo "# Git checkout fails without internet"
mv ../server_repo ../server_repo.off
! git checkout master
echo
echo "# Git checkout fetches the missing file from internet"
mv ../server_repo.off ../server_repo
git checkout master -- d1/a
echo
echo "# Missing objects after checking out d1/a"
git rev-list --all --quiet --objects --missing=print
GitHub upstream.
Output in Git v2.19.0:
# List and identify all objects
c6fcdfaf2b1462f809aecdad83a186eeec00f9c1
fc5e97944480982cfc180a6d6634699921ee63ec
7251a83be9a03161acde7b71a8fda9be19f47128
62d67bce3c672fe2b9065f372726a11e57bade7e
b64bf435a3e54c5208a1b70b7bcb0fc627463a75 d1
308150e8fddde043f3dbbb8573abb6af1df96e63 d1/a
f70a17f51b7b30fec48a32e4f19ac15e261fd1a4 d1/b
84de03c312dc741d0f2a66df7b2f168d823e122a d2
0975df9b39e23c15f63db194df7f45c76528bccb d2/a
41484c13520fcbb6e7243a26fdb1fc9405c08520 d2/b
7d5230379e4652f1b1da7ed1e78e0b8253e03ba3 master
8b25206ff90e9432f6f1a8600f87a7bd695a24af master/master
ef29f15c9a7c5417944cc09711b6a9ee51b01d89
19f7a4ca4a038aff89d803f017f76d2b66063043 mybranch
1b671b190e293aa091239b8b5e8c149411d00523 mybranch/mybranch
c3760bb1a0ece87cdbaf9a563c77a45e30a4e30e
a0234da53ec608b54813b4271fbf00ba5318b99f root
93ca1422a8da0a9effc465eccbcb17e23015542d root/root
master commit SHA: fc5e97944480982cfc180a6d6634699921ee63ec
mybranch commit SHA: fc5e97944480982cfc180a6d6634699921ee63ec
040000 tree b64bf435a3e54c5208a1b70b7bcb0fc627463a75 d1
040000 tree 84de03c312dc741d0f2a66df7b2f168d823e122a d2
040000 tree 7d5230379e4652f1b1da7ed1e78e0b8253e03ba3 master
040000 tree 19f7a4ca4a038aff89d803f017f76d2b66063043 mybranch
040000 tree a0234da53ec608b54813b4271fbf00ba5318b99f root
# Missing objects after --no-checkout
?f70a17f51b7b30fec48a32e4f19ac15e261fd1a4
?8b25206ff90e9432f6f1a8600f87a7bd695a24af
?41484c13520fcbb6e7243a26fdb1fc9405c08520
?0975df9b39e23c15f63db194df7f45c76528bccb
?308150e8fddde043f3dbbb8573abb6af1df96e63
# Git checkout fails without internet
fatal: '/home/ciro/bak/git/test-git-web-interface/other-test-repos/partial-clone.tmp/server_repo' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
# Git checkout fetches the missing directory from internet
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (1/1), 45 bytes | 45.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (1/1), 45 bytes | 45.00 KiB/s, done.
# Missing objects after checking out d1
?f70a17f51b7b30fec48a32e4f19ac15e261fd1a4
?8b25206ff90e9432f6f1a8600f87a7bd695a24af
?41484c13520fcbb6e7243a26fdb1fc9405c08520
?0975df9b39e23c15f63db194df7f45c76528bccb
Conclusions: all blobs except d1/a are missing. E.g. f70a17f51b7b30fec48a32e4f19ac15e261fd1a4, which is d1/b, is not there after checking out d1/.
Note that root/root and mybranch/mybranch are also missing, but --depth 1 hides that from the list of missing files. If you remove --depth 1, then they show on the list of missing files.
Two variants on what's already been given:
git archive --format=tar --remote=git://git.foo.com/project.git HEAD:path/to/directory filename | tar -O -xf -
and:
git archive --format=zip --remote=git://git.foo.com/project.git HEAD:path/to/directory filename | funzip
These write the file to standard output.
You can do it by
git archive --format=tar --remote=origin HEAD | tar xf -
git archive --format=tar --remote=origin HEAD <file> | tar xf -
Say the file name is 123.txt, this works for me:
git checkout --theirs 123.txt
If the file is inside a directory A, make sure to specify it correctly:
git checkout --theirs "A/123.txt"
In git you do not 'check out' files before you update them; it seems like this is what you are after.
Many systems like ClearCase, CVS and so on require you to 'check out' a file before you can make changes to it. Git does not require this. You clone a repository and then make changes in your local copy of the repository.
Once you updated files you can do:
git status
To see what files have been modified. You first add the ones you want to commit to the index (the index is like a list of what will be checked in):
git add .
or
git add blah.c
Then git status will show you which files are modified and which are in the index, ready to be committed.
To commit files to your copy of the repository, do:
git commit -a -m "commit message here"
See the git website for links to manuals and guides.
If you need a specific file from a specific branch from a remote Git repository the command is:
git archive --remote=git://git.example.com/project.git refs/heads/mybranch path/to/myfile |tar xf -
The rest can be derived from VonC's answer:
If you need a specific file from the master branch it is:
git archive --remote=git://git.example.com/project.git HEAD path/to/myfile |tar xf -
If you need a specific file from a tag it is:
git archive --remote=git://git.example.com/project.git mytag path/to/myfile |tar xf -
This works for me, using git together with a few shell commands:
git clone --no-checkout --depth 1 git.example.com/project.git && cd project && git show HEAD:path/to/file_you_need > ../file_you_need && cd .. && rm -rf project
Another solution, similar to the one using --filter=blob:none is to use --filter=tree:0 (you can read an explanation about the differences here).
This method is usually faster than the blob one because it doesn't download the tree structure, but it has a drawback: since you delay retrieval of the trees, you pay a penalty when you first enter the repo directory (depending on the size and structure of your repo, it may be many times larger than for a simple shallow clone).
If that's a problem for you, you can avoid it by not entering the repo:
git clone -n --filter=tree:0 <repo_url> tgt_dir
git -C tgt_dir checkout <branch> -- <filename>
cat tgt_dir/<filename> # or move it to another place and delete tgt_dir ;)
Take into consideration that if you have to check out multiple files, the tree population will also impact your performance, so I recommend this for a single file, and only if the repo is large enough to make all these actions worth it.
It sounds like you're trying to carry over an idea from centralized version control, which git by nature is not: it's distributed. If you want to work with a git repository, you clone it. You then have all of the contents of the work tree and all of the history (well, at least everything leading up to the tip of the current branch), not just a single file or a snapshot from a single commit.
git clone /path/to/repo
git clone git://url/of/repo
git clone http://url/of/repo
I am adding this answer as an alternative to doing a formal checkout or some similar local operation. Assuming that you have access to the web interface of your Git provider, you might be able to directly view any file at a given desired commit. For example, on GitHub you may use something like:
https://github.com/hubotio/hubot/blob/ed25584f/src/adapter.coffee
Here ed25584f is the first 8 characters of the SHA-1 hash of the commit of interest, followed by the path to the source file.
Similarly, on Bitbucket we can try:
https://bitbucket.org/cofarrell/stash-browse-code-plugin/src/06befe08
In this case, we place the commit hash at the end of the source URL.
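The two URL shapes can be sketched as small formatters; the inputs mirror the examples above and are otherwise arbitrary.

```python
def github_blob_url(user, repo, commit, path):
    # GitHub: the commit (or short SHA) sits between /blob/ and the path
    return f"https://github.com/{user}/{repo}/blob/{commit}/{path}"

def bitbucket_src_url(user, repo, commit):
    # Bitbucket: the commit hash goes at the end of the src URL
    return f"https://bitbucket.org/{user}/{repo}/src/{commit}"
```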
In CodeCommit (the Git offering of Amazon AWS) you can do this:
aws codecommit \
get-file --repository-name myrepo \
--commit-specifier master \
--file-path path/myfile \
--output text \
--query fileContent |
base64 --decode > myfile
I don't see what worked for me listed here, so I will include it in case anybody is in my situation.
My situation, I have a remote repository of maybe 10,000 files and I need to build an RPM file for my Linux system. The build of the RPM includes a git clone of everything. All I need is one file to start the RPM build. I can clone the entire source tree which does what I need but it takes an extra two minutes to download all those files when all I need is one. I tried to use the git archive option discussed and I got “fatal: Operation not supported by protocol.” It seems I have to get some sort of archive option enabled on the server and my server is maintained by bureaucratic thugs that seem to enjoy making it difficult to get things done.
What I finally did was go into the Bitbucket web interface and view the one file I needed. I right-clicked the link to download a raw copy of the file and selected "Copy shortcut" from the resulting popup. I could not just download the raw file because I needed to automate things, and I don't have a browser interface on my Linux server.
For the sake of discussion, that resulted in the URL:
https://ourArchive.ourCompany.com/projects/ThisProject/repos/data/raw/foo/bar.spec?at=refs%2Fheads%2FTheBranchOfInterest
I could not directly download this file from the bitbucket repository because I needed to sign in first. After a little digging, I found this worked:
On Linux:
echo "myUser:myPass123"| base64
bXlVc2VyOm15UGFzczEyMwo=
curl -H 'Authorization: Basic bXlVc2VyOm15UGFzczEyMwo=' 'https://ourArchive.ourCompany.com/projects/ThisProject/repos/data/raw/foo/bar.spec?at=refs%2Fheads%2FTheBranchOfInterest' > bar.spec
This combination allowed me to download the one file I needed to build everything else.
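The same Authorization header can be built in Python with the standard library. One caveat worth knowing: echo without -n appends a newline before base64 encodes, while the version below encodes exactly user:pass. The credentials are of course placeholders.

```python
import base64

def basic_auth_header(user, password):
    """Build an HTTP Basic Authorization header value,
    with no trailing newline inside the encoded credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"
```

The returned value can be passed to curl with -H 'Authorization: ...' or set as a header in any HTTP client.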
If you have a locally changed file (the one that's messing with git pull), just do:
git checkout origin/master filename
git checkout: switch branches or restore working tree files (here we are switching nothing, just overwriting the file);
origin/master: your current branch, or you can use a specific revision number, for example cd0fa799c582e94e59e5b21e872f5ffe2ad0154b;
filename: the path from the project's main directory (the one where the .git directory lives).
So if you have the structure:
.git
public/index.html
public/css/style.css
vendors
composer.lock
and want to reload index.html, just use public/index.html.
Yes, you can: the following command downloads one specific file:
wget -O <DesiredFileName> <GitFilePath>\?token\=<personalGitToken>
Example:
wget -O javascript-test-automation.md https://github.com/akashgupta03/awesome-test-automation/blob/master/javascript-test-automation.md\?token\=<githubPersonalToken>
git checkout <other-branch> -- <single-file> works for me on git.2.37.1.
However, the file is (git-magically) staged for commit, and I cannot see the git diff properly.
I then run git restore --staged db/structure.sql to unstage it.
That way I DO have the file in the exact version that I want and I can see the difference with other versions of that file.
If you have edited a local version of a file and wish to revert to the original version maintained on the central server, this can be easily achieved using Git Extensions.
Initially the file will be marked for commit, since it has been modified.
Select (double-click) the file in the file tree menu.
The revision tree for the single file is listed.
Select the top/HEAD of the tree, right-click and choose "Save as".
Save the file to overwrite the modified local version.
The file now has the correct version and will no longer be marked for commit!
Easy!
If you only need to download the file, there is no need to check it out with Git.
GitHub Mate makes this much easier: it's a Chrome extension that lets you click the file icon to download it. It's also open source.

Git add and commit with Python script running on GitHub Actions

I want to add and commit a file with a Python script, in a repo running on the GitHub Actions server.
I am committing the file by running the following in my main.py file:
import os
FILE = './file.txt'
os.system(f'git add {FILE}')
os.system("git commit -m 'update file'")
On GitHub actions, I am running Python script as:
- name: Commit file
run: python commit_file.py
But there aren't any script commits on the repo's master branch after GitHub Actions runs.
On my local machine, the os.system("git commit -m 'update file'") command runs normally, and I can verify that the commit exists with the git log command. On GitHub Actions, the following error occurs:
Run Commit file
python commit_file.py
Author identity unknown
*** Please tell me who you are.
Run
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
to set your account's default identity.
Omit --global to set the identity only in this repository.
fatal: empty ident name (for <runner@fv-az112-319.bzi55m0t1ufu323ye4cn5ktith.xx.internal.cloudapp.net>) not allowed
EDIT:
I've added two more lines which will configure git credentials:
os.system('git config --global user.email "you@example.com"')
os.system('git config --global user.name "GitHub Actions"')
and now, the warnings are gone, but there is still no commit on the repo.
It's as the error states: you need to configure the user email and name using git config when you want to commit a new file to a repo using a GitHub Action.
Note: those values can be fake, hardcoded, or taken from a variable.
You just need to add those two command lines before the git add and git commit commands in your Python script.
Moreover, you will need to perform a git push at the end of the script for the file to appear on the repository branch at the end of the workflow run.
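Putting the answer together, the whole sequence might look like the sketch below. subprocess.run with check=True raises on failure where os.system silently returns a status code; the email, name and default message here are placeholders.

```python
import subprocess

def git_commands(path, message, email, name):
    """The git invocations needed to commit and push one file
    from a CI job, in order."""
    return [
        ["git", "config", "user.email", email],
        ["git", "config", "user.name", name],
        ["git", "add", path],
        ["git", "commit", "-m", message],
        ["git", "push"],
    ]

def commit_and_push(path, message="update file",
                    email="actions@example.com", name="GitHub Actions"):
    for cmd in git_commands(path, message, email, name):
        subprocess.run(cmd, check=True)  # stop on the first failure
```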

Best way to push the project folder into GitHub

I have a Python project and I am trying to write a script to push the project folder to git.
What are the steps that should be taken?
I have tried
https://www.freecodecamp.org/news/automate-project-github-setup-mac/
but can't seem to get it working.
Firstly, you need to create a repo on the Github UI.
After creating the repo, Github will give you a URL to that repo.
Assuming you have SSH keys set up, you'll then be able to use that URL to push your commits like so:
On a project directory:
git init # initialize a repo; create a .git folder with git internal data
git add file1.py file2.sh file3.js README.md # choose which files you'd like to upload
git commit -m "Add project files" # create a commit
git remote add origin "github repo url"
git push # upload commit to Github
Automating this "first push" for several projects is relatively easy.
Just be careful with what files you git add. Git is not designed to version binaries like images, PDFs, etc.
I use this code, assuming you have SSH keys set up.
import os
from datetime import datetime
current_time = datetime.now().strftime("%H:%M:%S")
os.system("git init")
os.system("git remote rm origin")
os.system("git remote add origin git@github.com:user/repository.git")
os.system('git config --global user.email "email@example.com"')
os.system('git config --global user.name "username"')
os.system("git rm -r --cached .")
os.system("git add .")
git_commit_with_time = f'git commit -m "update:{current_time}"'
os.system(git_commit_with_time)
os.system("git push -f --set-upstream origin master")
You can also customize it to your needs, for example by using something other than current_time in the commit message.
I have been using this for many months and found it the best way; since I automated the task, I have not had any problems to date.
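As a small refinement, the timestamped commit message from the script above can be isolated into a pure helper, which makes it easy to test; the 'update:' prefix matches the script.

```python
from datetime import datetime

def commit_message(now=None):
    """Build the 'update:<HH:MM:SS>' commit message used above."""
    now = now or datetime.now()
    return f'update:{now.strftime("%H:%M:%S")}'
```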

Incrementally add function to Azure Function Host without overwriting already deployed functions

I'm working with multiple teams that develop & test Azure Functions independently from each other but want to deploy all functions to a centralized Azure Function host, like so:
The publishing methods I know of overwrite the existing content on the host, which is not what we want; we strive for an incremental update (similar to this question, with the only difference that we use Python on a Linux-based host instead of C#).
My question is: What is the easiest way to do this (assuming that hosts.json and function settings are the same for both projects)?
If team A runs
curl -X POST -u <user> --data-binary @"./func1-2.zip" https://<funcname>.scm.azurewebsites.net/api/zipdeploy
in their release pipeline and afterwards team B runs
curl -X POST -u <user> --data-binary @"./func3-4.zip" https://<funcname>.scm.azurewebsites.net/api/zipdeploy
func1 and func2 from team A are gone. Using PUT on the https://<funcname>.scm.azurewebsites.net/api/zip/ endpoint as indicated here didn't seem to publish the functions at all. When using FTP, I don't see any files in site/wwwroot/ at all, even after publishing functions.
You need to use continuous deployment:
First, create a repo on GitHub, then configure the Deployment Center:
Then use git to upload your local function app to Github:
echo "# xxxxxx" >> README.md
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin <git-url>
git push -u origin main
update:
git init
git add .
git commit -m "something"
git branch -M main
git remote add origin <git-url>
git push -u origin main
Refer to this official doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-continuous-deployment
