How do I checkout just one file from a git repo?
Originally (2012), I mentioned git archive (see Jared Forsyth's answer and Robert Knight's answer), available since git 1.7.9.5 (March 2012), as in Paul Brannan's answer:
git archive --format=tar --remote=origin HEAD:path/to/directory -- filename | tar -O -xf -
But in 2013, that was no longer possible for remote https://github.com URLs.
See the old page "Can I archive a repository?"
The current (2018) page "About archiving content and data on GitHub" recommends using third-party services like GHTorrent or GH Archive.
So you can instead work with a local copy or clone:
You could alternatively do the following if you have a local copy of the bare repository as mentioned in this answer,
git --no-pager --git-dir /path/to/bare/repo.git show branch:path/to/file >file
Or you can clone the repo first, which means you get the full history:
in the .git repo
in the working tree.
But then you can do a sparse checkout (if you are using Git 1.7+):
enable the sparse checkout option (git config core.sparsecheckout true)
add what you want to see to the .git/info/sparse-checkout file
re-read the working tree to only display what you need
To re-read the working tree:
$ git read-tree -m -u HEAD
That way, you end up with a working tree including precisely what you want (even if it is only one file)
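Putting those steps together, a minimal sketch (assuming the file you want is path/to/file):
git config core.sparsecheckout true
echo "path/to/file" >> .git/info/sparse-checkout
git read-tree -m -u HEAD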
Richard Gomes points (in the comments) to "How do I clone, fetch or sparse checkout a single directory or a list of directories from git repository?"
It provides a bash function that avoids downloading the history, retrieves a single branch, and fetches only the files or directories you need.
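A minimal sketch of such a function (the name, arguments, and defaults are my own, not taken from the linked answer; it uses the same core.sparseCheckout mechanism described above):
# git_sparse_clone <repo-url> <local-dir> <branch> <path>...
git_sparse_clone() (
  rurl="$1" localdir="$2" branch="$3"; shift 3
  # Shallow, single-branch clone without populating the working tree.
  git clone --depth 1 --single-branch --branch "$branch" --no-checkout "$rurl" "$localdir"
  cd "$localdir" || exit 1
  git config core.sparseCheckout true
  for path in "$@"; do
    echo "$path" >> .git/info/sparse-checkout   # one file or directory per line
  done
  # Checkout now materializes only the listed paths.
  git checkout "$branch"
)
For example: git_sparse_clone https://github.com/user/project.git project master src/ README.md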
With Git 2.40 (Q1 2023), the logic to see if we are using the "cone" mode by checking the sparsity patterns has been tightened to avoid mistaking a pattern that names a single file as specifying a cone.
See commit 5842710 (03 Jan 2023) by William Sprent (williams-unity).
(Merged by Junio C Hamano -- gitster -- in commit ab85a7d, 16 Jan 2023)
dir: check for single file cone patterns
Signed-off-by: William Sprent
Acked-by: Victoria Dye
The sparse checkout documentation states that the cone mode pattern set is limited to patterns that either recursively include directories or patterns that match all files in a directory.
In the sparse checkout file, the former manifests in the form:
/A/B/C/
while the latter become a pair of patterns either in the form:
/A/B/
!/A/B/*/
or in the special case of matching the toplevel files:
/*
!/*/
The 'add_pattern_to_hashsets()' function contains checks which serve to disable cone-mode when non-cone patterns are encountered.
However, these do not catch when the pattern list attempts to match a single file or directory, e.g. a pattern in the form:
/A/B/C
This causes sparse-checkout to exhibit unexpected behaviour when such a pattern is in the sparse-checkout file and cone mode is enabled.
Concretely, with a pattern like the above, sparse-checkout, in non-cone mode, will only include the directory or file located at '/A/B/C'.
However, with cone mode enabled, sparse-checkout will instead just manifest the toplevel files but not any file located at '/A/B/C'.
Relatedly, issues occur when supplying the same kind of filter when partial cloning with '--filter=sparse:oid=<oid>'.
'upload-pack' will correctly just include the objects that match the non-cone pattern matching.
Which means that checking out the newly cloned repo with the same filter, but with cone mode enabled, fails due to missing objects.
To fix these issues, add a cone mode pattern check that asserts that every pattern is either a directory match or the pattern '/*'.
Add a test to verify the new pattern check and modify another to reflect that non-directory patterns are caught earlier.
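For context, in cone mode directories are meant to be selected with the git sparse-checkout command rather than with hand-written single-file patterns; a minimal sketch, reusing the A/B/C directory from the example above:
git sparse-checkout init --cone
git sparse-checkout set A/B/C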
First clone the repo with the -n option, which suppresses the default checkout of all files, and the --depth 1 option, which means it only gets the most recent revision of each file:
git clone -n git://path/to/the_repo.git --depth 1
Then check out just the file you want like so:
cd the_repo
git checkout HEAD name_of_file
If you already have a copy of the git repo, you can always check out a version of a file by using git log to find the commit hash (for example 3cdc61015724f9965575ba954c8cd4232c8b42e4) and then simply typing:
git checkout hash-id path-to-file
Here is an actual example:
git checkout 3cdc61015724f9965575ba954c8cd4232c8b42e4 /var/www/css/page.css
Normally it's not possible to download just one file from git without downloading the whole repository, as suggested in the first answer.
That's because Git doesn't store files individually the way CVS/SVN do; it generates them from the entire history of the project.
But there are some workarounds for specific cases. The examples below use placeholders for user, project, branch, and filename.
GitHub
wget https://raw.githubusercontent.com/user/project/branch/filename
GitLab
wget https://gitlab.com/user/project/raw/branch/filename
GitWeb
If you're using Git on the Server - GitWeb, then you may try, for example (adjust the path to match your setup):
wget "http://example.com/gitweb/?p=example;a=blob_plain;f=README.txt;hb=HEAD"
GitWeb at drupalcode.org
Example:
wget "http://drupalcode.org/project/ads.git/blob_plain/refs/heads/master:/README.md"
googlesource.com
There is an undocumented feature that allows you to download base64-encoded versions of raw files:
curl "https://chromium.googlesource.com/chromium/src/net/+/master/http/transport_security_state_static.json?format=TEXT" | base64 --decode
In other cases check if your Git repository is using any web interfaces.
If it's not using any web interface, you may consider pushing your code to external services such as GitHub, Bitbucket, etc. and using one as a mirror.
If you don't have wget installed, try curl -O <url> instead.
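For example, the GitHub case above with curl (same placeholders):
curl -O https://raw.githubusercontent.com/user/project/branch/filename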
Minimal Guide
git checkout -- <filename>
Ref: https://git-scm.com/docs/git-checkout
Dup: Undo working copy modifications of one file in Git?
git checkout branch_or_version -- path/file
example: git checkout HEAD -- main.c
Now we can! As this is the first result on Google, I thought I'd update it to the current state of affairs. With the advent of git 1.7.9.5, we have the git archive command, which allows you to retrieve a single file from a remote host.
git archive --remote=git://git.foo.com/project.git HEAD:path/in/repo filename | tar -x
See the full answer here: https://stackoverflow.com/a/5324532/290784
Here is the complete solution for pulling and pushing only a particular file inside a git repository:
First you need to clone the git repository with the --no-checkout option:
git clone --no-checkout <git url>
The next step is to reset the index with:
git reset
Now you can start checking out the files you want to change with:
git checkout origin/master <path to file>
Now the repository folder contains the files, and you may start editing right away. After editing, you execute the plain and familiar sequence of commands:
git add <path to file>
git commit -m <message text>
git push
Working in Git 1.7.2.2.
For example, you have a remote some_remote with branches branch1 and branch32.
To check out a specific file, you call these commands:
git checkout remote/branch path/to/file
As an example, it will be something like this:
git checkout some_remote/branch32 conf/en/myscript.conf
git checkout some_remote/branch1 conf/fr/load.wav
These checkout commands will copy the whole file structure conf/en and conf/fr into the current directory where you call them (of course, this assumes you ran git init at some point before).
Very simple:
git checkout from-branch-name -- path/to/the/file/you/want
This will not checkout the from-branch-name branch. You will stay on whatever branch you are on, and only that single file will be checked out from the specified branch.
Here's the relevant part of the man page for git-checkout:
git checkout [-p|--patch] [<tree-ish>] [--] <pathspec>...
When <paths> or --patch are given, git checkout does not switch
branches. It updates the named paths in the working tree from the
index file or from a named <tree-ish> (most often a commit). In
this case, the -b and --track options are meaningless and giving
either of them results in an error. The <tree-ish> argument can be
used to specify a specific tree-ish (i.e. commit, tag or tree) to
update the index for the given paths before updating the working
tree.
Hat tip to Ariejan de Vroom who taught me this from this blog post.
git clone --filter from Git 2.19
This option will actually skip fetching most unneeded objects from the server:
git clone --depth 1 --no-checkout --filter=blob:none \
"file://$(pwd)/server_repo" local_repo
cd local_repo
git checkout master -- mydir/myfile
The server should be configured with:
git config --local uploadpack.allowfilter 1
git config --local uploadpack.allowanysha1inwant 1
There is no server support as of v2.19.0, but it can already be tested locally.
TODO: --filter=blob:none skips all blobs, but still fetches all tree objects. On a normal repo, these should be tiny compared to the files themselves, so this is already good enough. Asked at: https://www.spinics.net/lists/git/msg342006.html. The devs replied that a --filter=tree:0 is in the works to do that.
Remember that --depth 1 already implies --single-branch, see also: How do I clone a single branch in Git?
file://$(path) is required to overcome git clone protocol shenanigans: How to shallow clone a local git repository with a relative path?
The format of --filter is documented on man git-rev-list.
An extension was made to the Git remote protocol to support this feature.
Docs on Git tree:
https://github.com/git/git/blob/v2.19.0/Documentation/technical/partial-clone.txt
https://github.com/git/git/blob/v2.19.0/Documentation/rev-list-options.txt#L720
https://github.com/git/git/blob/v2.19.0/t/t5616-partial-clone.sh
Test it out
#!/usr/bin/env bash
set -eu
list-objects() (
git rev-list --all --objects
echo "master commit SHA: $(git log -1 --format="%H")"
echo "mybranch commit SHA: $(git log -1 --format="%H")"
git ls-tree master
git ls-tree mybranch | grep mybranch
git ls-tree master~ | grep root
)
# Reproducibility.
export GIT_COMMITTER_NAME='a'
export GIT_COMMITTER_EMAIL='a'
export GIT_AUTHOR_NAME='a'
export GIT_AUTHOR_EMAIL='a'
export GIT_COMMITTER_DATE='2000-01-01T00:00:00+0000'
export GIT_AUTHOR_DATE='2000-01-01T00:00:00+0000'
rm -rf server_repo local_repo
mkdir server_repo
cd server_repo
# Create repo.
git init --quiet
git config --local uploadpack.allowfilter 1
git config --local uploadpack.allowanysha1inwant 1
# First commit.
# Directories present in all branches.
mkdir d1 d2
printf 'd1/a' > ./d1/a
printf 'd1/b' > ./d1/b
printf 'd2/a' > ./d2/a
printf 'd2/b' > ./d2/b
# Present only in root.
mkdir 'root'
printf 'root' > ./root/root
git add .
git commit -m 'root' --quiet
# Second commit only on master.
git rm --quiet -r ./root
mkdir 'master'
printf 'master' > ./master/master
git add .
git commit -m 'master commit' --quiet
# Second commit only on mybranch.
git checkout -b mybranch --quiet master~
git rm --quiet -r ./root
mkdir 'mybranch'
printf 'mybranch' > ./mybranch/mybranch
git add .
git commit -m 'mybranch commit' --quiet
echo "# List and identify all objects"
list-objects
echo
# Restore master.
git checkout --quiet master
cd ..
# Clone. Don't checkout for now, only .git/ dir.
git clone --depth 1 --quiet --no-checkout --filter=blob:none "file://$(pwd)/server_repo" local_repo
cd local_repo
# List missing objects from master.
echo "# Missing objects after --no-checkout"
git rev-list --all --quiet --objects --missing=print
echo
echo "# Git checkout fails without internet"
mv ../server_repo ../server_repo.off
! git checkout master
echo
echo "# Git checkout fetches the missing file from internet"
mv ../server_repo.off ../server_repo
git checkout master -- d1/a
echo
echo "# Missing objects after checking out d1/a"
git rev-list --all --quiet --objects --missing=print
GitHub upstream.
Output in Git v2.19.0:
# List and identify all objects
c6fcdfaf2b1462f809aecdad83a186eeec00f9c1
fc5e97944480982cfc180a6d6634699921ee63ec
7251a83be9a03161acde7b71a8fda9be19f47128
62d67bce3c672fe2b9065f372726a11e57bade7e
b64bf435a3e54c5208a1b70b7bcb0fc627463a75 d1
308150e8fddde043f3dbbb8573abb6af1df96e63 d1/a
f70a17f51b7b30fec48a32e4f19ac15e261fd1a4 d1/b
84de03c312dc741d0f2a66df7b2f168d823e122a d2
0975df9b39e23c15f63db194df7f45c76528bccb d2/a
41484c13520fcbb6e7243a26fdb1fc9405c08520 d2/b
7d5230379e4652f1b1da7ed1e78e0b8253e03ba3 master
8b25206ff90e9432f6f1a8600f87a7bd695a24af master/master
ef29f15c9a7c5417944cc09711b6a9ee51b01d89
19f7a4ca4a038aff89d803f017f76d2b66063043 mybranch
1b671b190e293aa091239b8b5e8c149411d00523 mybranch/mybranch
c3760bb1a0ece87cdbaf9a563c77a45e30a4e30e
a0234da53ec608b54813b4271fbf00ba5318b99f root
93ca1422a8da0a9effc465eccbcb17e23015542d root/root
master commit SHA: fc5e97944480982cfc180a6d6634699921ee63ec
mybranch commit SHA: fc5e97944480982cfc180a6d6634699921ee63ec
040000 tree b64bf435a3e54c5208a1b70b7bcb0fc627463a75 d1
040000 tree 84de03c312dc741d0f2a66df7b2f168d823e122a d2
040000 tree 7d5230379e4652f1b1da7ed1e78e0b8253e03ba3 master
040000 tree 19f7a4ca4a038aff89d803f017f76d2b66063043 mybranch
040000 tree a0234da53ec608b54813b4271fbf00ba5318b99f root
# Missing objects after --no-checkout
?f70a17f51b7b30fec48a32e4f19ac15e261fd1a4
?8b25206ff90e9432f6f1a8600f87a7bd695a24af
?41484c13520fcbb6e7243a26fdb1fc9405c08520
?0975df9b39e23c15f63db194df7f45c76528bccb
?308150e8fddde043f3dbbb8573abb6af1df96e63
# Git checkout fails without internet
fatal: '/home/ciro/bak/git/test-git-web-interface/other-test-repos/partial-clone.tmp/server_repo' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
# Git checkout fetches the missing directory from internet
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (1/1), 45 bytes | 45.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (1/1), 45 bytes | 45.00 KiB/s, done.
# Missing objects after checking out d1
?f70a17f51b7b30fec48a32e4f19ac15e261fd1a4
?8b25206ff90e9432f6f1a8600f87a7bd695a24af
?41484c13520fcbb6e7243a26fdb1fc9405c08520
?0975df9b39e23c15f63db194df7f45c76528bccb
Conclusions: all blobs except d1/a are missing. E.g. f70a17f51b7b30fec48a32e4f19ac15e261fd1a4, which is d1/b, is not there after checking out d1/.
Note that root/root and mybranch/mybranch are also missing, but --depth 1 hides that from the list of missing files. If you remove --depth 1, then they show on the list of missing files.
Two variants on what's already been given:
git archive --format=tar --remote=git://git.foo.com/project.git HEAD:path/to/directory filename | tar -O -xf -
and:
git archive --format=zip --remote=git://git.foo.com/project.git HEAD:path/to/directory filename | funzip
These write the file to standard output.
You can do it with the following (the first form extracts the whole tree, the second just one file):
git archive --format=tar --remote=origin HEAD | tar xf -
git archive --format=tar --remote=origin HEAD <file> | tar xf -
Say the file name is 123.txt; this works for me (note that --theirs applies when resolving merge conflicts):
git checkout --theirs 123.txt
If the file is inside a directory A, make sure to specify it correctly:
git checkout --theirs "A/123.txt"
In git you do not 'check out' files before you update them; it seems like this is what you are after.
Many systems like ClearCase, CVS and so on require you to 'check out' a file before you can make changes to it. Git does not require this. You clone a repository and then make changes in your local copy of the repository.
Once you updated files you can do:
git status
to see what files have been modified. You first add the ones you want to commit to the index (the index is like a list of what will be checked in):
git add .
or
git add blah.c
Then git status will show you which files were modified and which are in the index, ready to be committed or checked in.
To commit files to your copy of the repository, do:
git commit -a -m "commit message here"
See the git website for links to manuals and guides.
If you need a specific file from a specific branch from a remote Git repository the command is:
git archive --remote=git://git.example.com/project.git refs/heads/mybranch path/to/myfile |tar xf -
The rest can be derived from VonC's answer:
If you need a specific file from the master branch it is:
git archive --remote=git://git.example.com/project.git HEAD path/to/myfile |tar xf -
If you need a specific file from a tag it is:
git archive --remote=git://git.example.com/project.git mytag path/to/myfile |tar xf -
This works for me, using git with some shell commands:
git clone --no-checkout --depth 1 https://git.example.com/project.git && cd project && git show HEAD:path/to/file_you_need > ../file_you_need && cd .. && rm -rf project
Another solution, similar to the one using --filter=blob:none is to use --filter=tree:0 (you can read an explanation about the differences here).
This method is usually faster than the blob one because it doesn't download the tree structure, but it has a drawback. Since you are delaying the retrieval of the tree, you will pay a penalty the first time you enter the repo directory (depending on the size and structure of your repo, it may take many times longer than a simple shallow clone).
If that's a problem for you, you can avoid it by not entering the repo:
git clone -n --filter=tree:0 <repo_url> tgt_dir
git -C tgt_dir checkout <branch> -- <filename>
cat tgt_dir/<filename> # or move it to another place and delete tgt_dir ;)
Take into consideration that if you have to check out multiple files, the tree population will also impact your performance, so I recommend this for a single file, and only if the repo is large enough to make all these steps worthwhile.
It sounds like you're trying to carry over an idea from centralized version control, which git by nature is not: it's distributed. If you want to work with a git repository, you clone it. You then have all of the contents of the work tree, and all of the history (well, at least everything leading up to the tip of the current branch), not just a single file or a snapshot from a single commit.
git clone /path/to/repo
git clone git://url/of/repo
git clone http://url/of/repo
I am adding this answer as an alternative to doing a formal checkout or some similar local operation. Assuming that you have access to the web interface of your Git provider, you might be able to directly view any file at a given desired commit. For example, on GitHub you may use something like:
https://github.com/hubotio/hubot/blob/ed25584f/src/adapter.coffee
Here ed25584f is the first 8 characters from the SHA-1 hash of the commit of interest, followed by the path to the source file.
Similarly, on Bitbucket we can try:
https://bitbucket.org/cofarrell/stash-browse-code-plugin/src/06befe08
In this case, we place the commit hash at the end of the source URL.
In CodeCommit (AWS's Git-based service) you can do this:
aws codecommit \
get-file --repository-name myrepo \
--commit-specifier master \
--file-path path/myfile \
--output text \
--query fileContent |
base64 --decode > myfile
I don't see what worked for me listed here, so I'll include it in case anybody is in my situation.
My situation: I have a remote repository of maybe 10,000 files and I need to build an RPM file for my Linux system. The RPM build includes a git clone of everything. All I need is one file to start the RPM build. I can clone the entire source tree, which does what I need, but it takes an extra two minutes to download all those files when all I need is one. I tried to use the git archive option discussed above and got "fatal: Operation not supported by protocol." It seems I have to get some sort of archive option enabled on the server, and my server is maintained by bureaucratic thugs that seem to enjoy making it difficult to get things done.
What I finally did was go into the web interface for Bitbucket and view the one file I needed. I right-clicked the link to download a raw copy of the file and selected "copy shortcut" from the resulting popup. I could not just download the raw file because I needed to automate things, and I don't have a browser interface on my Linux server.
For the sake of discussion, that resulted in the URL:
https://ourArchive.ourCompany.com/projects/ThisProject/repos/data/raw/foo/bar.spec?at=refs%2Fheads%2FTheBranchOfInterest
I could not directly download this file from the Bitbucket repository because I needed to sign in first. After a little digging, I found this worked:
On Linux:
echo "myUser:myPass123"| base64
bXlVc2VyOm15UGFzczEyMwo=
curl -H 'Authorization: Basic bXlVc2VyOm15UGFzczEyMwo=' 'https://ourArchive.ourCompany.com/projects/ThisProject/repos/data/raw/foo/bar.spec?at=refs%2Fheads%2FTheBranchOfInterest' > bar.spec
This combination allowed me to download the one file I needed to build everything else.
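Note that curl can also build the Basic Authorization header itself with the -u option, which avoids the manual base64 step entirely:
curl -u 'myUser:myPass123' 'https://ourArchive.ourCompany.com/projects/ThisProject/repos/data/raw/foo/bar.spec?at=refs%2Fheads%2FTheBranchOfInterest' > bar.spec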
If you have a locally changed file (one that's interfering with git pull), just do:
git checkout origin/master filename
git checkout: switches branches or restores working tree files (here we switch nothing; we just overwrite the file)
origin/master: your current branch, or you can use a specific revision number instead, for example cd0fa799c582e94e59e5b21e872f5ffe2ad0154b
filename: the file's path from the project's main directory (where the .git directory lives)
so if you have the structure:
.git
public/index.html
public/css/style.css
vendors
composer.lock
and you want to reload index.html, just use public/index.html.
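That is:
git checkout origin/master public/index.html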
Yes, you can do this with the following command, which downloads one specific file:
wget -O <DesiredFileName> <GitFilePath>\?token\=<personalGitToken>
Example:
wget -O javascript-test-automation.md https://github.com/akashgupta03/awesome-test-automation/blob/master/javascript-test-automation.md\?token\=<githubPersonalToken>
git checkout <other-branch> -- <single-file> works for me on git.2.37.1.
However, the file is (git-magically) staged for commit, and I cannot see the git diff properly.
I then run git restore --staged db/structure.sql to unstage it.
That way I DO have the file in the exact version that I want and I can see the difference with other versions of that file.
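Put together, a sketch of that sequence (db/structure.sql stands in for your file, other-branch for the source branch):
git checkout other-branch -- db/structure.sql   # grab the file; git stages it automatically
git restore --staged db/structure.sql           # unstage it
git diff db/structure.sql                       # now the diff against HEAD is visible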
If you have edited a local version of a file and wish to revert to the original version maintained on the central server, this can be easily achieved using Git Extensions.
Initially the file will be marked for commit, since it has been modified
Select (double click) the file in the file tree menu
The revision tree for the single file is listed.
Select the top/HEAD of the tree, right-click, and choose Save as
Save the file to overwrite the modified local version of the file
The file now has the correct version and will no longer be marked for commit!
Easy!
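For what it's worth, the command-line equivalent of these GUI steps is checking the file out from HEAD:
git checkout HEAD -- path/to/the/file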
If you only need to download the file, no need to check out with Git.
GitHub Mate makes this much easier: it's an open-source Chrome extension that lets you click the file icon to download it.
I have written a pre-push hook in Python which only partially prevents pushes to the master branch; i.e., when on a feature branch, the command git push origin master still pushes the files.
When HEAD is on the master branch, the push is prevented, but when HEAD is on the feature1 branch, a push to master is not prevented.
My code so far:
#!/usr/bin/env python
import sys
from subprocess import check_output

# Name of the branch HEAD currently points to.
branch = check_output(['git', 'symbolic-ref', '--short', 'HEAD']).strip()
print('branch-name:', branch.decode('utf-8'))  # prints the current branch, e.g. feature

if branch.decode('utf-8') != 'master':
    print('into if clause')
    print('push to remote successful')
    sys.exit(0)
else:
    print('into else clause')
    print('you are not allowed to push to the master branch')
    sys.exit(1)
I want to modify the code so that the following commands are not allowed, irrespective of the current branch: git push --force origin master; git push --delete origin master; git push origin master; git co master; git push --force origin. Thanks in advance.
If you are using the free plan on a private repo on GitHub, you may not be able to use the protected branches feature, so you need to block any push/commit locally.
Please keep in mind that this can easily be bypassed with the --no-verify flag.
I recommend doing it with husky instead of Python, since I think it's way easier.
This is what I did to make it work locally and distributed to all repo's members.
First of all, you need to install husky to control the pre-commit and pre-push hooks.
Then I made a pre-push bash script and committed it inside the repository, calling it from the husky pre-push hook with the husky parameter.
This is my husky configuration inside package.json (you can use a separate config file if you want):
"husky": {
"hooks": {
"pre-commit": "./commands/pre-commit",
"pre-push": "./commands/pre-push $HUSKY_GIT_STDIN"
}
},
As you can see, I have two scripts: one for pre-push and one for pre-commit.
And this is my commands/pre-push bash script:
#!/bin/bash
echo -e "===\n>> Talenavi Pre-push Hook: Checking branch name / Mengecek nama branch..."
BRANCH=`git rev-parse --abbrev-ref HEAD`
PROTECTED_BRANCHES="^(master|develop)"
if [[ $1 != *"$BRANCH"* ]]
then
echo -e "\n🚫 You must use (git push origin $BRANCH) / Anda harus menggunakan (git push origin $BRANCH).\n" && exit 1
fi
if [[ "$BRANCH" =~ $PROTECTED_BRANCHES ]]
then
echo -e "\n🚫 Cannot push to remote $BRANCH branch, please create your own branch and use PR."
echo -e "🚫 Tidak bisa push ke remote branch $BRANCH, silahkan buat branch kamu sendiri dan gunakan pull request.\n" && exit 1
fi
echo -e ">> Finish checking branch name / Selesai mengecek nama branch.\n==="
exit 0
The script basically does two things:
It will block anybody who tries to push to a certain branch (in my case I don't want anybody, including myself, to push directly to the master and develop branches). They need to work in their own branch and then create a pull request.
It will block anybody who tries to push to a branch that is different from their current active branch, for example when you are on branch fix/someissue but mistakenly type git push origin master.
For more detailed instructions you can follow from this article:
https://github.com/talenavi/husky-precommit-prepush-githooks
Finally I migrated my development env from runserver to gunicorn/nginx.
It'd be convenient to replicate the autoreload feature of runserver to gunicorn, so the server automatically restarts when source changes. Otherwise I have to restart the server manually with kill -HUP.
Any way to avoid the manual restart?
While this is an old question, note that ever since version 19.0 Gunicorn has had the --reload option.
So now no third party tools are needed.
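A minimal sketch (the WSGI module path is an assumption; substitute your project's):
gunicorn --reload myproject.wsgi:application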
One option would be to use the --max-requests to limit each spawned process to serving only one request by adding --max-requests 1 to the startup options. Every newly spawned process should see your code changes and in a development environment the extra startup time per request should be negligible.
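For example (again, the module path is an assumption):
gunicorn --max-requests 1 myproject.wsgi:application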
Bryan Helmig came up with this and I modified it to use run_gunicorn instead of launching gunicorn directly, to make it possible to just cut and paste these 3 commands into a shell in your django project root folder (with your virtualenv activated):
pip install watchdog -U
watchmedo shell-command --patterns="*.py;*.html;*.css;*.js" --recursive --command='echo "${watch_src_path}" && kill -HUP `cat gunicorn.pid`' . &
python manage.py run_gunicorn 127.0.0.1:80 --pid=gunicorn.pid
I use git push to deploy to production, with git hooks set up to run a script. The advantage of this approach is that you can also do your migration and package installation at the same time. https://mikeeverhart.net/2013/01/using-git-to-deploy-code/
mkdir -p /home/git/project_name.git
cd /home/git/project_name.git
git init --bare
Then create a script /home/git/project_name.git/hooks/post-receive:
#!/bin/bash
GIT_WORK_TREE=/path/to/project git checkout -f
source /path/to/virtualenv/bin/activate
pip install -r /path/to/project/requirements.txt
python /path/to/project/manage.py migrate
sudo supervisorctl restart project_name
Make sure to chmod u+x post-receive, and add the user to sudoers, allowing it to run sudo supervisorctl without a password (a sketch of the sudoers line follows). https://www.cyberciti.biz/faq/linux-unix-running-sudo-command-without-a-password/
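A sudoers entry along these lines should work (the supervisorctl path is an assumption; check yours with which supervisorctl):
user_name ALL=(ALL) NOPASSWD: /usr/bin/supervisorctl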
From my local/development server, I set up a git remote that allows me to push to the production server:
git remote add production ssh://user_name@production-server/home/git/project_name.git
# initial push
git push production +master:refs/heads/master
# subsequent push
git push production master
As a bonus, you will get to see all the prompts as the script is running. So you will see if there is any issue with the migration/package installation/supervisor restart.
I have a Dockerfile that deploys Django code to a container:
FROM ubuntu:latest
MAINTAINER { myname }
#RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sou$
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y tar git curl dialog wget net-tools nano buil$
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python python-dev python-distribute python-p$
RUN mkdir /opt/app
WORKDIR /opt/app
#Pull Code
RUN git clone git@bitbucket.org:{user}/{repo}
RUN pip install -r website/requirements.txt
#EXPOSE = ["8000"]
CMD python website/manage.py runserver 0.0.0.0:8000
And then I build my code as docker build -t dockerhubaccount/demo:v1 ., and this pulls my code from Bitbucket to the container. I run it as docker run -p 8000:8080 -td felixcheruiyot/demo:v1 and things appear to work fine.
Now I want to update the code, i.e. since I used git clone ..., I have this confusion:
How can I update my code when I have new commits, so that when the Docker container is built it ships with the new code? (Note: when I run build, it does not fetch the new code because of the cache.)
What is the best workflow for this kind of approach?
There are a couple of approaches you can use.
1. You can use docker build --no-cache to avoid using the cache of the Git clone.
2. Have the startup command call git pull. So instead of running python manage.py, you'd have something like CMD cd /repo && git pull && python manage.py, or use a start script if things are more complex.
I tend to prefer 2. You can also run a cron job to update the code in your container, but that's a little more work and goes somewhat against the Docker philosophy.
I would recommend checking out the code on your host and using COPY to put it into the image. That way it will be updated whenever you make a change. Also, during development you can bind-mount the source directory over the code directory in the container, meaning any changes are reflected immediately in the container.
A docker command for git repositories that checks for the last update would be very useful though!
Another solution.
The docker build command uses the cache as long as an instruction string is exactly the same as that of a cached image. So, if you write:
RUN echo '2014122400' >/dev/null && git pull ...
on the next update, you change it as follows:
RUN echo '2014122501' >/dev/null && git pull ...
This prevents Docker from using the cache.
I would like to offer another possible solution. I need to warn however that it's definitely not the "docker way" of doing things and relies on the existence of volumes (which could be a potential blocker in tools like Docker Swarm and Kubernetes)
The basic principle that we will be taking advantage of is the fact that the contents of container directories that are used as Docker Volumes, are actually stored in the file system of the host. Check out this part of the documentation.
In your case you would make /opt/app a Docker Volume. You don't need to map the Volume explicitly to a location on the host's file system since, as I will describe below, the mapping can be obtained dynamically.
So for starters leave your Dockerfile exactly as it is and switch your container creation command to something like:
docker run -p 8000:8080 -v /opt/app --name some-name -td felixcheruiyot/demo:v1
The command docker inspect -f '{{index .Volumes "/opt/app"}}' some-name will print the full file system path on the host where your code is stored (this is where I picked up the inspect trick).
Armed with that knowledge all you have to do is replace that code and your all set.
So a very simple deploy script would be something like:
code_path=$(docker inspect -f '{{index .Volumes "/opt/app"}}' some-name)
rm -rfv $code_path/*
cd $code_path
git clone git@bitbucket.org:{user}/{repo}
The benefits you get with an approach like this are:
There are no potentially costly cacheless image rebuilds
There is no need to move application-specific run information into the run command. The Dockerfile is the only source needed for instrumenting the application
UPDATE
You can achieve the same results I have mentioned above using docker cp (starting with Docker 1.8). This way the container need not have volumes, and you can replace code in the container as you would on the host file system.
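A sketch of that docker cp variant (the container name and paths follow the earlier examples):
git clone git@bitbucket.org:{user}/{repo} fresh_code
docker cp fresh_code/. some-name:/opt/app   # copy the directory contents into the running container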
Of course as I mentioned in the beginning of the answer, this is not the "docker way" of doing things, which advocates containers being immutable and reproducible.
If you use GitHub, you can use the GitHub API to avoid caching specific RUN commands.
You need to have jq installed to parse JSON: apt-get install -y jq
Example:
docker build --build-arg SHA=$(curl -s 'https://api.github.com/repos/Tencent/mars/commits' | jq -r '.[0].sha') -t imageName .
In Dockerfile (ARG command should be right before RUN):
ARG SHA=LATEST
RUN SHA=${SHA} \
git clone https://github.com/Tencent/mars.git
Or if you don't want to install jq:
SHA=$(curl -s 'https://api.github.com/repos/Tencent/mars/commits' | grep sha | head -1)
If a repository has new commits, git clone will be executed.