Python one-liner to print every file in the current directory

How can I make the following one-liner print every file through Python?
python -c "import sys;print '>>',sys.argv[1:]" | dir *.*
Specifically, I would like to know how to pipe into a python -c.
DOS or Cygwin responses accepted.

python -c "import os; print os.listdir('.')"
If you want to apply some formatting like you have in your question,
python -c "import os; print '\n'.join(['>>%s' % x for x in os.listdir('.')])"
If you want to use a pipe, use xargs:
ls | xargs python -c "import sys; print '>>', sys.argv[1:]"
or backticks:
python -c "import sys; print '>>', sys.argv[1:]" `ls`

You can read data piped into a Python script by reading sys.stdin. For example:
ls -al | python -c "import sys; print sys.stdin.readlines()"
It is not entirely clear what you want to do (maybe I am stupid); the confusion comes from your example, which pipes data out of a Python script rather than into it.

If you want to print all files:
find . -type f
If you want to print only the current directory's files:
find . -maxdepth 1 -type f
If you want to include the ">>" before each line:
find . -maxdepth 1 -type f | xargs -L 1 echo ">>"
If you don't want the space that echo inserts between ">>" and the path:
find . -maxdepth 1 -type f | xargs -L 1 printf ">>%s\n"
This is all using cygwin, of course.

ls | python -c "import sys; print sys.stdin.read()"
Just read stdin as normal for pipes.

“Specifically would like to know how to pipe into a python -c”
You had the pipe the wrong way round: if you wanted to feed the output of ‘dir’ into Python, ‘dir’ would have to be on the left, e.g.:
dir "*.*" | python -c "import sys;[sys.stdout.write('>>%s\n' % line) for line in sys.stdin]"
(The hack with the list comprehension is because you aren't allowed a block-introducing ‘for’ statement on one line after a semicolon.)
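On Python 3, where print is a function, the list comprehension can call it directly and no write() hack is needed; a minimal sketch:
dir "*.*" | python3 -c "import sys; [print('>>' + line, end='') for line in sys.stdin]"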
Clearly the Python-native solution (‘os.listdir’) is much better in practice.

“Specifically would like to know how to pipe into a python -c”
See cobbal's answer.
Piping through a program is transparent from the program's point of view; all the program knows is that it's getting input from the standard input stream.
Generally speaking, a shell command of the form
A | B
redirects the output of A to be the input of B,
so if A writes "asdf" to its standard output, then B gets "asdf" on its standard input.
The standard input stream in Python is sys.stdin.
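For example, this minimal sketch pipes text into a small Python filter that upper-cases whatever arrives on its standard input:
echo "hello" | python -c "import sys; sys.stdout.write(sys.stdin.read().upper())"
which prints HELLO.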

Related

Python subprocess() reading file in bash

I have a shell command for a file as given below:
filename="/4_illumina/gt_seq/gt_seq_proccessor/200804_MN01111_0025_A000H35TCJ/fastq_files/raw_data/200804_MN01111_0025_A000H35TCJ.demultiplex.log"
assembled_reads=$(cat $filename | grep -i " Assembled reads ...................:" | grep -v "Assembled reads file...............:")
Now I am trying to run this within a python environment using subprocess as:
task = subprocess.Popen("cat $filename | grep -i " Assembled reads ...................:" | grep -v "Assembled reads file...............:"", shell=True, stdout=subprocess.PIPE)
p_stdout = task.stdout.read()
print (p_stdout)
This is not working because I am not able to pass the filename variable from Python to the shell, and probably there is a syntax error in the way I have written the grep command.
Any suggestions ?
This code seems to solve your problem with no external tools required.
filename="/4_illumina/gt_seq/gt_seq_proccessor/200804_MN01111_0025_A000H35TCJ/fastq_files/raw_data/200804_MN01111_0025_A000H35TCJ.demultiplex.log"
for line in open(filename):
    if "Assembled reads" in line and "Assembled reads file" not in line:
        print(line.rstrip())
I would consider doing all the reading and searching in Python, and maybe rethink what you want to achieve. However:
In a shell:
$ export filename=/tmp/x-output.GOtV
In Python (note the access to $filename and the mixed quotes in the command; I also use a simpler grep pattern to keep things short):
import os
import subprocess
tmp = subprocess.Popen(f"cat {os.environ['filename']} | grep -i 'x'", shell=True, stdout=subprocess.PIPE)
data = tmp.stdout.read()
print(data)
Though it works, the solution is ... not what I consider clean code.
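For comparison, here is a sketch that keeps the external grep but drops shell=True entirely by passing an argument list, so the filename needs no quoting at all (subprocess.run requires Python 3.5+; the second grep is replaced by a plain Python test):
import subprocess
filename = "/4_illumina/gt_seq/gt_seq_proccessor/200804_MN01111_0025_A000H35TCJ/fastq_files/raw_data/200804_MN01111_0025_A000H35TCJ.demultiplex.log"
# grep -i "Assembled reads" <filename>, no shell involved
task = subprocess.run(["grep", "-i", "Assembled reads", filename], stdout=subprocess.PIPE)
for line in task.stdout.decode().splitlines():
    if "Assembled reads file" not in line:  # drop the unwanted match in Python
        print(line)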

Running Python scripts for different input directories through the bash terminal

I am trying to automate my task through the terminal using bash. I have a python script that takes two parameters (paths of input and output) and then the script runs and saves the output in a text file.
All the input directories follow a pattern that starts with "g-", whereas the output directory remains static.
So, I want to write a script that could run on its own so that I don't have to manually run it on hundreds of directories.
$ python3 program.py ../g-changing-directory/ ~/static-directory/ > ~/static-directory/final/results.txt
You can do it like this:
find .. -maxdepth 1 -type d -name "g-*" | xargs -n1 -P1 -I{} python3 program.py {} ~/static-directory/ >> ~/static-directory/final/results.txt
find .. will look in the parent directory; -maxdepth 1 will look only at the top level and not descend into subdirectories; -type d only takes directories; -name "g-*" takes objects starting with g- (use -iname "g-*" if you want objects starting with g- or G-).
We pipe it to xargs which will apply the input from stdin to the command specified. -n1 tells it to start a process per input word, -P1 tells it to only run one process at a time, -I{} tells it to replace {} with the input in the command.
Then we specify the command to run for each input, where {} is replaced by xargs: python3 program.py {} ~/static-directory/ >> ~/static-directory/final/results.txt. Have a look at the >>: it appends to the file if it exists, while > would overwrite it.
With -P4 you could start four processes in parallel. But you do not want to do that here, as all processes write into one file and multi-processing can garble the output. If every process wrote into its own file, you could parallelize safely.
Refer to man find and man xargs for further details.
There are many other ways to do this, as well. E.g. for loops like this:
for F in $(ls .. | grep -oP "g-.*"); do
    python3 program.py "../$F" ~/static-directory/ >> ~/static-directory/final/results.txt
done
There are many ways to do this; here's what I would write:
find .. -type d -name "g-*" -exec python3 program.py {} ~/static-directory/ \; > ~/static-directory/final/results.txt
You haven't mentioned whether you want nested directories to be included; if the answer is no, then you have to add the -maxdepth parameter as in @toydarian's answer.
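If you would rather drive the loop from Python itself, here is a minimal sketch; it assumes program.py's command-line interface is exactly as shown in the question:
import subprocess
from pathlib import Path

static = Path.home() / "static-directory"
with open(static / "final" / "results.txt", "w") as sink:
    # one run per g-* directory in the parent directory, outputs written in order
    for d in sorted(Path("..").glob("g-*")):
        if d.is_dir():
            subprocess.run(["python3", "program.py", str(d), str(static)], stdout=sink)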

Why do we lose stdout from a terminal after running subprocess.check_output(xyz, shell=True)?

I have this bash line:
$ printf ' Number of xml files: %s\n' `find . -name '*.xml' | wc -l`
Number of xml files: 4
$
When I run it from Python in this way, the Python interpreter stops and my terminal no longer shows stdout:
$ ls
input aa bb
$ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
>>>
>>> import subprocess
>>> cmd = "printf 'xml files: %s\n' `find . -name '*.xml' | wc -l`"
>>> subprocess.check_output(['/bin/bash', cmd], shell=True)
$ ls  # stdout is not seen any more; I have to kill this terminal
$
Obviously the question here is not how to make this bash work from Python:
>>> import subprocess
>>> cmd = "printf 'xml files: %s\n' `find . -name '*.xml' | wc -l`"
>>> out = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE)
>>> print(str(out.stdout, 'utf8'))
xml files: 4
>>>
The two following questions, No output from subprocess.check_output() and Why is terminal blank after running python executable?, do not answer the question.
The short version is that check_output is buffering all the output to return. When you run ls, its standard output is going to check_output's buffer, not the terminal. When you exit the shell you are currently in, you'll get all the output at once as a single Python string.
This leads to the question: why are you getting a subshell in the first place, instead of executing the contents of cmd? First, you are using bash wrong; its argument is a file to run, not an arbitrary command line. A more correct version of what you are doing would be
cmd = "printf 'xml files: %s\n' `find . -name '*.xml' | wc -l`"
subprocess.check_output(['/bin/bash', '-c', cmd])
or, if you want subprocess to run a shell for you, instead of explicitly executing it,
subprocess.check_output(cmd, shell=True)
Combining the list argument with shell=True is almost never what you want.
Second, given your original code, shell=True with a list means the whole list is appended to ['/bin/sh', '-c'], so only the first element is treated as the command string and the rest become extra shell arguments. That is, you effectively execute something like
sh -c /bin/bash "printf 'xml files: %s\n' `find . -name '*.xml' | wc -l`"
sh runs /bin/bash, and your command string only becomes that shell's $0, which for the purposes of this question is effectively ignored. So you are in an interactive shell whose standard output is buffered instead of displayed, as described in the first part of this answer.
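You can watch the extra arguments being swallowed directly; in this minimal sketch (POSIX shells only) the second list element becomes the shell's $0:
import subprocess
# actually runs: /bin/sh -c 'echo $0' 'ignored'
out = subprocess.run(["echo $0", "ignored"], shell=True, stdout=subprocess.PIPE)
print(out.stdout)  # b'ignored\n'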

How can I pass the results of Python's help('modules') output to pbcopy in one line?

Just as the title asks, how can this be done? I stupidly tried the following, but I will share the stupidity here so you can get an idea of what I want to happen:
myself$ python help('modules') | pbcopy
Is this a good idea:
fout = open('output.txt', 'w')
fout.write(help('modules'))
On my Ubuntu, and hopefully on your boxen too (as it is a standard Python feature), there is the handy pydoc command, so you can simply type
pydoc modules | pbcopy
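If the pydoc script isn't on your PATH, the same tool is reachable through the interpreter:
python -m pydoc modules | pbcopy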
Use pydoc to look up documentation and print it.
Example:
$ python -c 'import pydoc; print pydoc.getdoc(id)'
id(object) -> integer
Return the identity of an object. This is guaranteed to be unique among
simultaneously existing objects. (Hint: it's the object's memory address.)
I don't know what pbcopy is, but I guess this would do the trick:
python -c 'import urllib; help(urllib)' | pbcopy
At least this definitely works:
python -c 'import urllib; help(urllib)' > file
From man python:
-c command
Specify the command to execute (see next section). This terminates the option list (following options are passed as arguments to the command).
Update:
In order to copy this to clipboard you can add this to ~/.bashrc:
pc() { python -c "import $1; help($1);" | xclip -i -selection clipboard; }
then just call pc logging or pc my_module
Or you can pipe it to pbcopy or whatever works for you.

Using the bash echo command to feed input into a Python one-liner

I am trying to input a string using echo into a Python one-liner, then perform a Caesar cipher on the string.
One of the examples my instructor gave me was this.
~ $ echo "Hello Holly." | python -c "import sys; [print(line) for line in sys.stdin]"
The output is supposed to be: Hello Holly.
However, when I type the command in, I get:
File "<string>", line 1
import sys; [print(line) for line in sys.stdin]
^
SyntaxError: invalid syntax
I would appreciate it if someone could point out the error to me. I am using Python 2.6 on CentOS 6.
Thanks.
In Python 2 print is a statement, not a function. Try this instead:
echo "Hello Holly." | python -c "import sys; print [line for line in sys.stdin]"
Alternatively, you could use the following to just print plain text (thanks #mgilson):
echo "Hello Holly." | python -c "import sys; print ' '.join([line for line in sys.stdin])"
It looks like you are using Python 2 while your instructor is using Python 3.
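If the command has to run unchanged under both versions, a __future__ import turns print into a function on Python 2.6+ as well; a minimal sketch:
echo "Hello Holly." | python -c "from __future__ import print_function; import sys; [print(line, end='') for line in sys.stdin]"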
