Count lines of code in a Django Project - python

Is there an easy way to count the lines of code you have written for your django project?
Edit: The shell stuff is cool, but how about on Windows?

Yep:
shell]$ find /my/source -name "*.py" -type f -exec cat {} + | wc -l
Job's a good 'un.

You might want to look at CLOC -- it's not Django-specific, but it supports Python. It can show you line counts for actual code, comments, blank lines, etc.
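If cloc isn't installed, you can get a rough approximation of its breakdown with standard tools. This sketch (using `/my/source` as a placeholder project root, like the find answer above) counts blank lines and comment-only lines separately; unlike cloc it won't catch docstrings or trailing comments:

```shell
# Rough cloc-style breakdown for .py files using only find/grep/wc.
SRC=/my/source                     # placeholder: point at your project root
pyfiles() { find "$SRC" -name '*.py' -type f -exec cat {} + ; }
total=$(pyfiles | wc -l)
blank=$(pyfiles | grep -c '^[[:space:]]*$' || true)    # grep -c exits 1 on no match
comment=$(pyfiles | grep -c '^[[:space:]]*#' || true)  # comment-only lines
echo "code: $((total - blank - comment)) blank: $blank comment: $comment"
```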

Starting with Aiden's answer, and with a bit of help in a question of my own, I ended up with this god-awful mess:
# find the combined LOC of files
# usage: loc Documents/fourU py html
function loc {
#find $1 -name $2 -type f -exec cat {} + | wc -l
namelist=''
let i=2
while [ $i -le $# ]; do
namelist="$namelist -name \"*.${!i}\""   # ${!i}: the value of the i-th positional parameter
if [ $i != $# ]; then
namelist="$namelist -or "
fi
let i=i+1
done
#echo $namelist
#echo "find $1 $namelist" | sh
#echo "find $1 $namelist" | sh | xargs cat
echo "find $1 $namelist" | sh | xargs cat | wc -l
}
which allows you to specify any number of extensions you want to match. As far as I can tell, it outputs the right answer, but... I thought this would be a one-liner, else I wouldn't have started in bash, and it just kinda grew from there.
I'm sure that those more knowledgable than I can improve upon this, so I'm going to put it in community wiki.
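In that spirit, here's a tidier sketch of the same idea: build find's -name clauses in a bash array instead of a string piped to sh, which sidesteps all the quoting trouble (assumes bash; untested elsewhere):

```shell
# loc DIR EXT [EXT...] -- count combined lines of all files under DIR
# whose extension is one of EXT...
loc() {
    local dir=$1; shift
    local clauses=() ext
    for ext in "$@"; do
        clauses+=(-o -name "*.$ext")   # prepend -o before every clause...
    done
    clauses=("${clauses[@]:1}")        # ...then drop the leading -o
    find "$dir" \( "${clauses[@]}" \) -type f -exec cat {} + | wc -l
}
```

Usage matches the function above, e.g. `loc Documents/fourU py html`.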

Check out the wc command on unix.

Get the wc command on Windows using GnuWin32 (http://gnuwin32.sourceforge.net/packages/coreutils.htm)
wc *.py

Related

print all prefixes of a string - translate from python to bash

I have a shell script that needs to take an .so file and get all its prefixes, where a prefix is the part of the name up to and including the ".so" part, plus each successive "."-separated part after it.
Example: for 'example.so.1' we'll have the following prefixes: 'example.so', 'example.so.1'
I have Python 3 code that does it, and I want to get a bash equivalent.
Bash wrapper for the Python source:
#!/bin/bash
dst='/tmp'
for src in 'example.so' 'example.so.1' 'example.so.1.2' 'example.so.1.2.3'; do
python3 -c "
import os, sys, itertools as it, re;
so_path = os.path.abspath(sys.argv[1]);
dst = sys.argv[2];
so = os.path.basename(so_path);
so_name = so.split('.')[0];
regex = r'\.\w+';
for suffix in it.accumulate(re.findall(regex, so)):
dst_so = os.path.join(dst, so_name + suffix)
print('src: {}. dst: {}'.format(so_path, dst_so))
" "${src}" "${dst}";
done
This is my tryout in bash using awk (it's not complete and only prints the source; I keep tweaking it, but can't get it to do exactly what I want):
#!/bin/bash
dst='/tmp'
delimiter='.'
for src in 'example.so' 'example.so.1' 'example.so.1.2' 'example.so.1.2.3'; do
for number_of_delimiters in `seq $(echo ${src} | grep ${delimiter} | wc -l)`; do
echo ${src} :: ${src} | awk -F. '{print $number_of_delimiters}';
done
done
What would be the best way to achieve this? (I'm guessing awk, though I did try a bit of cut, sed, etc.)
The bash code must run on a clean Ubuntu 18 with no extra installs.
Looks like you should be able to just shave the name down in a loop.
given f=example.so.1.2.3, try
$: while [[ "$f" =~ [.]so[.] ]]; do echo "$f"; f=${f%.*}; done; echo "$f"
example.so.1.2.3
example.so.1.2
example.so.1
example.so
If you want the smaller ones first, pass it through a sort.
$: { while [[ "$f" =~ [.]so[.] ]]
> do echo "$f"
> f=${f%.*}
> done
> echo "$f"
> } | sort
example.so
example.so.1
example.so.1.2
example.so.1.2.3
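If you want the shortest prefix first without sorting, you can also grow the name instead of shaving it, using only parameter expansion. A sketch that assumes the name always contains ".so":

```shell
f=example.so.1.2.3
p=${f%%.so*}.so              # shortest prefix: up to and including ".so"
rest=${f#"$p"}               # remainder, e.g. ".1.2.3"
echo "$p"
while [ -n "$rest" ]; do
    next=${rest#.}           # drop the leading dot
    seg=${next%%.*}          # next dot-separated component
    p=$p.$seg
    rest=${rest#."$seg"}
    echo "$p"
done
```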

grouping and dividing files containing numbers into separate folders

I want to move the files in groups of 30, in sequence starting from image_1, image_2, ..., from the current folder to new folders.
the file name pattern is like below
image_1.png
image_2.png
.
.
.
image_XXX.png
I want to move image_[1-30].png to folder fold30,
and image_[31-60].png to fold60, and so on.
I have the following code to do this, and it works; I wanted to know if there is a shortcut,
or smaller code that I can write for the same task.
#!/bin/bash
counter=0
folvalue=30
totalFiles=$(ls -1 image_*.png | sort -V | wc -l)
foldernames=fold$folvalue
for file in $(ls -1 image_*.png | sort -V )
do
((counter++))
mkdir -p $foldernames
mv $file ./$foldernames/
if [[ "$counter" -eq "$folvalue" ]];
then
let folvalue=folvalue+30
foldernames="fold${folvalue}"
echo $foldernames
fi
done
The above code moves image_1, image_2, ... image_30 into folder
fold30
and image_31, ... image_60 into folder
fold60
I really recommend using sed all the time. It's hard on the eyes, but once you get used to it you can do all these jarring tasks in no time.
What it does is simple. Running sed -e "s/regex/substitution/" file goes through each line, replacing patterns matching regex with substitution.
With it you can just transform your input into commands and pipe it to bash.
If you want to know more there's good documentation here. (also not easy on the eyes though)
Anyway here's the code:
while FILE_GROUP=$(find . -maxdepth 1 -name "image_*.png" | sort -V | head -30) && [ -n "$FILE_GROUP" ]
do
FOLDER="${YOUR_PREFIX}$(sed -e "s/^.*image_//" -e "s/\.png//" <(echo "$FILE_GROUP" | tail -1))"
mkdir -p "$FOLDER"
sed -e "s/\.\///" -e "s|.*|mv & $FOLDER|" <(echo "$FILE_GROUP") | bash
done
And here's what it should do:
- while loop grabs the first 30 files.
- take the number out of the last of those files and name the directory
- mkdir FOLDER
- go through each line and turn $FILE into mv $FILE $FOLDER then execute those lines (pipe to bash)
note: replace $YOUR_PREFIX with your folder prefix
EDIT: surprisingly, the code did not work out of the box (who would have thought...), but I've done some fixing and testing and it should work now.
The simplest way to do that is with rename, a.k.a. Perl rename. It will:
let you run any amount of code of arbitrary complexity to figure out a new name,
let you do a dry run telling you what it would do without doing anything,
warn you if any files would be overwritten,
automatically create intermediate directory hierarchies.
So the command you want is:
rename -n -p -e '(my $num = $_) =~ s/\D//g; $_ = ($num+29)-(($num-1)%30) . "/" . $_' *png
Sample Output
'image_1.png' would be renamed to '30/image_1.png'
'image_10.png' would be renamed to '30/image_10.png'
'image_100.png' would be renamed to '120/image_100.png'
'image_101.png' would be renamed to '120/image_101.png'
'image_102.png' would be renamed to '120/image_102.png'
'image_103.png' would be renamed to '120/image_103.png'
'image_104.png' would be renamed to '120/image_104.png'
...
...
If that looks correct, you can run it again without the -n switch to do it for real.
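A plain-bash middle ground between the counter loop and rename: derive the destination from the number already embedded in each filename, so no counter needs to be tracked (this assumes the image_N.png naming from the question):

```shell
shopt -s nullglob                             # no-op loop if nothing matches
for f in image_*.png; do
    n=${f//[!0-9]/}                           # digits in the name, e.g. 17
    bucket=$(( ( (n - 1) / 30 + 1 ) * 30 ))   # 1-30 -> 30, 31-60 -> 60, ...
    mkdir -p "fold$bucket"
    mv "$f" "fold$bucket/"
done
```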

How to call a python script from bash with arguments?

I am going round and round in circles here and there is probably a very simple answer.
I have a bash script that loops around video files in a thumbdrive and if it finds the .mp4 extension, it will upload to youtube using Youtube's python example - which works from the command line.
I'm having trouble passing the arguments to the python script call in the bash script:
cd /home/pi/Documents
echo $PWD
for _file in /media/pi/*/*.mp4; do
clean=$(echo $_file|wc -m)
if [ $clean -gt 15 ]; then
did=$(grep "$_file" /home/pi/Documents/uploaded_videos.txt | wc -l)
if [ $did = 0 ]; then
#Edit this to your liking
#echo "now uploading $_file"
args="--file=\"${_file}\" –-title=\"${_file}\" –-description=\"show 2018\" -–keywords=\"2018,show, Winter,show 2018,\" -–category=\"28\" -–privacyStatus=\"private\""
python yt_up.py "$args"
echo $_file uploaded to youtube via google developer api | logger
echo $_file >> /home/pi/Documents/uploaded_videos.txt
fi
fi
done
But the arguments are not recognised by the yt_up.py script. I've tried various combinations of quotes but I just can't get it to work.
I can pass one argument to the script:
python yt_up.py --file="$_file" works,
but adding additional arguments were not recognised.
Lots of antipatterns and misconceptions in your current code!
cd /home/pi/Documents
For best practice you should test whether this succeeded. Probably not a problem currently, but it doesn't hurt to do so.
echo $PWD
Missing quotes here, but that's not fatal.
for _file in /media/pi/*/*.mp4; do
clean=$(echo $_file|wc -m)
That's not how to count the number of characters in a string. You should use clean=${#_file} instead.
if [ $clean -gt 15 ]; then
Here I guess you want to know whether the glob matched anything. That's not how to proceed. You either want to use the shopt -s nullglob option of Bash, or use if [ -e "$_file" ]; then to check whether the glob matched an actual file.
did=$(grep "$_file" /home/pi/Documents/uploaded_videos.txt | wc -l)
if [ $did = 0 ]; then
That's not the way to check that a file doesn't contain a string. Use if ! grep -q "$_file" /home/pi/Documents/uploaded_videos.txt; then instead.
#Edit this to your liking
#echo "now uploading $_file"
args="--file=\"${_file}\" –-title=\"${_file}\" –-description=\"show 2018\" -–keywords=\"2018,show, Winter,show 2018,\" -–category=\"28\" -–privacyStatus=\"private\""
Here you have misconceptions about how the shell reads a command. Quotes that come from a variable expansion are not treated as quoting operators, so your embedded quotes are passed along literally and the string is split on every space. Typically you want to use an array; that's what Bash arrays are for! Also note that you have some weird hyphens (– instead of -) in here.
python yt_up.py "$args"
echo $_file uploaded to youtube via google developer api | logger
echo $_file >> /home/pi/Documents/uploaded_videos.txt
fi
fi
done
Here's a possibility to fix your mistakes (and hopefully make it work):
#!/bin/bash
# We define a variable with the path to the uploaded_videos.txt file
uploaded_videos=/home/pi/Documents/uploaded_videos.txt
# Will expand non-matching globs to nothing:
shopt -s nullglob
# cd into the directory, and exit if fails
cd /home/pi/Documents || exit
# Use the builtin pwd instead of echo "$PWD"
pwd
for file in /media/pi/*/*.mp4; do
if ! grep -qFx "$file" "$uploaded_videos"; then
# Define an array to contain the arguments to pass
args=(
--file="$file"
--title="$file"
--description="show 2018"
--keywords="2018,show, Winter,show 2018,"
--category="28"
--privacyStatus="private"
)
# call your script with the fields of the array as arguments
python yt_up.py "${args[@]}"
echo "$file uploaded to youtube via google developer api" | logger
echo "$file" >> "$uploaded_videos"
fi
done
You could improve the final step by explicitly checking whether yt_up.py succeeded:
if python yt_up.py "${args[@]}"; then
echo "$file uploaded to youtube via google developer api" | logger
echo "$file" >> "$uploaded_videos"
else
echo "Something wrong happened!"
fi
I think you just have to put your arguments on the same line as the python command, like this:
python yt_up.py --file="${_file}" --title="${_file}" --description="show 2018" --keywords="2018,show, Winter,show 2018," --category="28" --privacyStatus="private"
It works
Your args variable reaches Python as one large sys.argv[1], which may be why you're having trouble.
Rewrite the shell script like this:
cd /home/pi/Documents
echo $PWD
for _file in /media/pi/*/*.mp4; do
clean=$(echo $_file|wc -m)
if [ $clean -gt 15 ]; then
did=$(grep "$_file" /home/pi/Documents/uploaded_videos.txt | wc -l)
if [ $did = 0 ]; then
#Edit this to your liking
#echo "now uploading $_file"
args=( --file="${_file}" --title="${_file}" --description="show 2018" --keywords="2018,show, Winter,show 2018," --category="28" --privacyStatus="private" )
python yt_up.py "${args[@]}"
echo $_file uploaded to youtube via google developer api | logger
echo $_file >> /home/pi/Documents/uploaded_videos.txt
fi
fi
done
Now args is an array, and each of its elements will be read by Python as a separate sys.argv[i].
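You can watch the difference directly with a throwaway script that reports how many arguments it received (argdump.py is a made-up name for illustration):

```shell
printf 'import sys; print(len(sys.argv) - 1)\n' > /tmp/argdump.py
args="--file=a.mp4 --title=demo"
python3 /tmp/argdump.py "$args"       # prints 1: the whole string is one argument
argv=(--file=a.mp4 --title=demo)
python3 /tmp/argdump.py "${argv[@]}"  # prints 2: each element is its own argument
```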

traversing daily dump directories

I have 6 months of data to go through, looking like this
0101
0102
.
.
0131
0201
0202
.
.
all the way to
0630
I want to go through each directory and execute an awk file on the contents, or do it in a weekly manner (each 7 directories will make one week of data).
Is there an easy way to do this in awk or python?
Many thanks
You can use find to walk your tree and xargs to apply your awk script:
find . -type f | xargs awk -f awkfile
EDIT: awk syntax corrected thanks to input from @nya. I Am Not An AWK Expert.
Why not use plain bash? You can try this:
find . -type f -exec awk -f 'your_awk_script.awk' {} \;
find traverses the directory tree, and the -exec option makes it execute the given command (in this case awk -f your_awk_script.awk) on each file ({} is the placeholder for the file name).
To run this tiny script every seven days, look into cron.
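For the weekly variant, here's a sketch that batches the MMDD directory names seven at a time; the awk invocation is left as a comment since the awk script itself isn't shown in the question:

```shell
shopt -s nullglob                     # empty array instead of a literal glob on no match
dirs=( [0-9][0-9][0-9][0-9]/ )        # 0101/ 0102/ ... expand in sorted order
week=1
for (( i = 0; i < ${#dirs[@]}; i += 7 )); do
    batch=( "${dirs[@]:i:7}" )        # this week's (up to) 7 directories
    echo "week $week: ${batch[*]}"
    # find "${batch[@]}" -type f -exec awk -f awkfile {} +
    week=$(( week + 1 ))
done
```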

Unable to replace the word in a given folder's contents by Sed/Python/Perl

I have a project where I have folders, subfolders, and files. I need to replace the word Masi with the word Bond in each file.
I ran the following sed script (called replace) unsuccessfully:
s/Masi/Bond/
in Zsh by
sed -f PATH/replace PATH2/project/**
It outputs all the files, including the ones which do not contain Masi.
Sed is not necessarily the best tool for the task.
I am interested in Python and Perl.
How would you do the replacement in Sed/Perl/Python, such that only the file contents are changed?
To replace the word in all files found in the current directory and subdirectories
perl -p -i -e 's/Masi/Bond/g' $(grep -rl Masi *)
The above won't work if you have spaces in filenames. Safer to do:
find . -type f -exec perl -p -i -e 's/Masi/Bond/g' {} \;
or on a Mac, where filenames often contain spaces
find . -type f -print0 | xargs -0 perl -p -i -e 's/Masi/Bond/g'
Explanations
-p wraps the code in a loop that prints each line after the script has run
-i means edit the files in place (no backup files are made unless you give -i an extension)
-e allows you to run Perl code on the command line
Renaming a folder full of files:
use warnings;
use strict;
use File::Find::Rule;
my @list = File::Find::Rule->new()->name(qr/Masi/)->file->in('./');
for( @list ){
my $old = $_;
my $new = $_;
$new =~ s/Masi/Bond/g;
rename $old , $new ;
}
Replacing Strings in Files
use warnings;
use strict;
use File::Find::Rule;
use File::Slurp;
use File::Copy;
my @list = File::Find::Rule->new()->name("*.something")->file->grep(qr/Masi/)->in('./');
for( @list ){
my $c = read_file( $_ );
if ( $c =~ s/Masi/Bond/g ){
File::Copy::copy($_, "$_.bak"); # backup.
write_file( $_ , $c );
}
}
strict (core) - Perl pragma to restrict unsafe constructs
warnings (core) - Perl pragma to control optional warnings
File::Find::Rule - Alternative interface to File::Find
File::Find (core) - Traverse a directory tree.
File::Slurp - Efficient Reading/Writing of Complete Files
File::Copy (core) - Copy files or filehandles
Why not just pass the -i option (man sed) to sed and be done with it? If it doesn't find Masi in a file, the file will just be rewritten with no modification. Or am I missing something?
If you don't want to replace the files' contents inline (which is what the -i will do) you can do exactly as you are now, but throw a grep & xargs in front of it:
grep -rl Masi PATH/project/* | xargs sed -f PATH/replace
Lots of options, but do not write an entire perl script for this (I'll give the one-liner a pass ;)). find, grep, sed, xargs, etc. will always be more flexible, IMHO.
In response to comment:
grep -rl Masi PATH/project/* | xargs sed -n -e '/Masi/ p'
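Before pointing either pipeline at the real project, you can rehearse it on a scratch tree (GNU sed syntax shown; BSD/macOS sed wants -i '' instead of -i):

```shell
mkdir -p /tmp/proj/sub
printf 'Masi rules\n' > /tmp/proj/a.txt
printf 'untouched\n' > /tmp/proj/sub/b.txt
grep -rl Masi /tmp/proj | xargs sed -i 's/Masi/Bond/g'
cat /tmp/proj/a.txt        # now "Bond rules"
cat /tmp/proj/sub/b.txt    # still "untouched": grep never fed it to sed
```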
A solution tested on Windows
Requires CPAN module File::Slurp. Will work with standard Unix shell wildcards. Like ./replace.pl PATH/replace.txt PATH2/replace*
#!/usr/bin/perl
use strict;
use warnings;
use File::Glob ':glob';
use File::Slurp;
foreach my $dir (@ARGV) {
my @filelist = bsd_glob($dir);
foreach my $file (@filelist) {
next if -d $file;
my $c=read_file($file);
if ($c=~s/Masi/Bond/g) {
print "replaced in $file\n";
write_file($file,$c);
} else {
print "no match in $file\n";
}
}
}
import glob
import os
# Change the glob for different filename matching
for filename in glob.glob("*"):
dst=filename.replace("Masi","Bond")
os.rename(filename, dst)
