Using the cat command in Python for printing

On Linux, I can send a file to the printer using the following command:
cat file.txt > /dev/usb/lp0
From what I understand, this redirects the contents of file.txt to the printer device. I tried using the following command:
>>> os.system('cat file.txt > /dev/usb/lp0')
I thought this command would achieve the same thing, but it gave me a "Permission Denied" error. On the command line, I would run the following command before redirecting:
sudo chown root:lpadmin /dev/usb/lp0
Is there a better way to do this?

While there's no reason your code shouldn't work, this probably isn't the way you want to do this. If you just want to run shell commands, bash is much better than Python. On the other hand, if you want to use Python, there are better ways to copy files than shell redirection.
The simplest way to copy one file to another is to use shutil:
shutil.copyfile('file.txt', '/dev/usb/lp0')
(Of course if you have permissions problems that prevent redirect from working, you'll have the same permissions problems with copying.)
You want a program that reads input from the keyboard, and when it gets a certain input, it prints a certain file. That's easy:
import shutil

while True:
    line = raw_input()  # or just input() if you're on Python 3.x
    if line == 'certain input':
        shutil.copyfile('file.txt', '/dev/usb/lp0')
Obviously a real program will be a bit more complex—it'll do different things with different commands, and maybe take arguments that tell it which file to print, and so on. If you want to go that way, the cmd module is a great help.
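As a rough illustration of that approach, a minimal interactive loop built on cmd might look like the sketch below; the command name print and the default file are placeholders, not from the original question.
import cmd
import shutil

class PrintShell(cmd.Cmd):
    """Tiny command loop: type `print somefile.txt` to send it to the printer."""
    prompt = '(printer) '

    def do_print(self, arg):
        """print [FILE] -- copy FILE (default file.txt) to the printer device."""
        shutil.copyfile(arg or 'file.txt', '/dev/usb/lp0')

    def do_quit(self, arg):
        """quit -- leave the loop."""
        return True

if __name__ == '__main__':
    PrintShell().cmdloop()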

Remember, in UNIX everything is a file, even devices.
So you can just use basic file methods (or anything else, e.g. shutil.copyfile): http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files
In your case the code might look like this (just one way to do it):
# Read file.txt
with open('file.txt', 'r') as content_file:
    content = content_file.read()

# Write the contents to the printer device
with open('/dev/usb/lp0', 'w') as target_device:
    target_device.write(content)
P.S. Please don't use a system() call (or similar) to solve this.
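If the file might be large, shutil.copyfileobj copies in chunks instead of reading everything into memory first; a minimal sketch along the same lines:
import shutil

# Copy in fixed-size chunks rather than loading the whole file into memory.
with open('file.txt', 'rb') as src, open('/dev/usb/lp0', 'wb') as dst:
    shutil.copyfileobj(src, dst)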

Under Windows there is no cat command; you should use type instead.
(If you want to run the cat command under Windows, see: https://stackoverflow.com/a/71998867/2723298)
import os
os.system('type a.txt > copy.txt')
Or, if your OS is Linux and cat didn't work anyway, here are other methods to copy a file.
With grep:
import os
os.system('grep "" a.txt > b.txt')
(The empty pattern "" is important: it matches every line.)
Copy a file with sed:
os.system('sed "" a.txt > sed.txt')
Copy a file with awk (note the single quotes around the awk program, so the shell does not expand $0 before awk sees it):
os.system("awk '{print $0}' a.txt > awk.txt")
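As a side note, if you do shell out like this on Linux, subprocess gives you error checking that os.system lacks; a rough equivalent of the cat redirection, using the same example file names:
import subprocess

# Equivalent of `cat a.txt > copy.txt`; check=True raises if cat fails.
with open('copy.txt', 'w') as out:
    subprocess.run(['cat', 'a.txt'], stdout=out, check=True)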

Related

Linux bash executables behave differently depending on whether I run them from the command line or with os.system('')

I've been attempting to execute a certain CLI from within python and store the output for later use within the same script. I suspect this question has a simple answer, but if one wishes to go through the entire pipeline, here is the tool in question.
wget http://rna.urmc.rochester.edu/Releases/current/RNAstructureForLinux.tgz
Run tar xvf as usual, go inside the resulting directory, and run 'make all'; the executables I use in the bash script are within the 'exe' directory.
I attempted to execute the commands with os.system(), but with little luck. The CLI I am using, however, seems to be running. The function I have set up to execute the os.system() commands contains the following block.
txt = open('home/spectre/tools/RNAstructure/exe/RNAStructure_nucleic_acid.txt',"w")
txt.write('AAGGCTGTCCAGGCGCAATGTGGTGGCTGCTTCTCTGGGGAGTCCTCCAGGCTTGCCCAACCCGGGGCTCCGTCCTCTTGGCCCAAGAGCTACCCCAGCAGCTGACATCCCCCGGGTACCCAGAGCCGTATGGCAAAGGCCAAGAGAGCAGCACGGACATCAAGGCTCCAGAGGGCTTTGCTGTGAGGCTCGTCTTCCAGGACTTCGACCTGGAGCCGTCCCAGGACTGTGCAGGGGACTCTGTCACAGTGAGCTGGGGATGGGGGGGGTCCCGCCAGGACTGTGGCCAGGGAGATTCCCGGGGTTGTGGGAAGTGGCGGTGCCCTGAATCCCCCATCTGGAGGAGGGATGAAT')
os.system(' cd ~/tools/RNAstructure/exe ; ./python_RNA_structure.sh')
nucleotides, structure, MFE = RNAStructure_from_file('home/spectre/tools/RNAstructure/exe/RNAStructure_bracket_output.txt')
The executable *.sh file contains this.
#!/bin/bash
cd ~/tools/RNAstructure/exe
./Fold RNAStructure_nucleic_acid.txt RNAStructure_nucleic_acid_output.txt
./ct2dot RNAStructure_nucleic_acid_output.txt -1 RNAStructure_bracket_output.txt
If I execute the bash script from the command line the output should look a little like this
Initializing nucleic acids...
Using auto-detected DATAPATH: "../data_tables" (set DATAPATH to avoid this warning).
done.
98% [==================================================] \ done.
Writing output ct file...done.
Single strand folding complete.
Converting CT file...
Using auto-detected DATAPATH: "../data_tables" (set DATAPATH to avoid this warning).
CT file conversion complete.
If I execute the bash script from the Python file, the output looks like this:
Initializing nucleic acids...
Using auto-detected DATAPATH: "../data_tables" (set DATAPATH to avoid this warning).
Error reading sequence. The file did not contain any nucleotides.
Single strand folding complete with errors.
Converting CT file...
Using auto-detected DATAPATH: "../data_tables" (set DATAPATH to avoid this warning).
CT file conversion complete.
It looks an awful lot like my CLI can find the files it needs when run from the terminal, but not otherwise. I haven't experimented with parameters like absolute paths; I understood that by using os.system() I could execute a bash script, but it is not clear to me why this changes how that script behaves.
What I've done to resolve the problem:
reopening the file seems to resolve the problem, but I am still working out why.
The problem seems to resolve when I reopen the file within the python script like so:
txt = open('home/spectre/tools/RNAstructure/exe/RNAStructure_nucleic_acid.txt',"w")
txt.write('AAGGCTGTCCAGGCGCAATGTGGTGGCTGCTTCTCTGGGGAGTCCTCCAGGCTTGCCCAACCCGGGGCTCCGTCCTCTTGGCCCAAGAGCTACCCCAGCAGCTGACATCCCCCGGGTACCCAGAGCCGTATGGCAAAGGCCAAGAGAGCAGCACGGACATCAAGGCTCCAGAGGGCTTTGCTGTGAGGCTCGTCTTCCAGGACTTCGACCTGGAGCCGTCCCAGGACTGTGCAGGGGACTCTGTCACAGTGAGCTGGGGATGGGGGGGGTCCCGCCAGGACTGTGGCCAGGGAGATTCCCGGGGTTGTGGGAAGTGGCGGTGCCCTGAATCCCCCATCTGGAGGAGGGATGAAT')
txt = open('home/spectre/tools/RNAstructure/exe/RNAStructure_nucleic_acid.txt')
os.system(' cd ~/tools/RNAstructure/exe ; ./python_RNA_structure.sh')
nucleotides, structure, MFE = RNAStructure_from_file('home/spectre/tools/RNAstructure/exe/RNAStructure_bracket_output.txt')
I am not sure why this resolves the problem, I found this solution serendipitously. I'll update the answer when I figure out why, unless someone wants to beat me to it. It's magic to me for now.
It seems that after opening the file, RNAStructure_nucleic_acid.txt, and assigning it to the txt variable for writing, I need to reopen it after writing is complete. Otherwise the file is blank when I try printing its contents within the program, but after the program finishes executing, the file contains the correct text.
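A plausible explanation (not confirmed in the original post) is output buffering: the text written to txt sits in Python's internal buffer until the file object is flushed or closed, so the shell started by os.system sees an empty file on disk. Rebinding txt to a new open() happens to drop the old file object, which CPython then garbage-collects and closes, flushing the data. Closing the file explicitly, or writing inside a with block, would make that deterministic; a sketch using the same path as the question:
import os

# Writing inside a `with` block guarantees the data is flushed and the file
# is closed before os.system launches the external script.
with open('home/spectre/tools/RNAstructure/exe/RNAStructure_nucleic_acid.txt', 'w') as txt:
    txt.write(sequence)  # `sequence` stands in for the long nucleotide string above

os.system('cd ~/tools/RNAstructure/exe ; ./python_RNA_structure.sh')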

Add to ~/.zshrc file with Python

I'm trying to write a CLI that will take a path the user enters on the command line, then add this path to the correct config file depending on their shell, in this case zsh. I have tried using:
import subprocess
import click

shell = str(subprocess.check_output("echo $SHELL", shell=True))
click.echo("Enter the path you would like to add:")
path = input()
if 'zsh' in shell:
    with open(".zshrc", 'w') as zsh:
        zsh.write(f'export PATH="$PATH:{path}"')
This throws no errors but doesn't seem to add anything to the actual ~/.zshrc file.
Is there a better way to append to the file without manually opening the file and typing it in?
New to this so sorry if it's a stupid question...
Open the file in append mode. Your code also assumes that the current working directory is the user's home directory, which is not a good assumption to make.
from pathlib import Path
import os

if 'zsh' in os.environ.get("SHELL", ""):
    with open(Path.home() / ".zshrc", 'a') as f:
        f.write(f'export PATH={path}:$PATH')
with (Path.home() / ".zshrc").open("a") as f: would work as well.
Note that .zprofile would be the preferred location for updating an environment variable like PATH, rather than .zshrc.
Solved! Just want to put the answer here if anyone comes across the same problem.
Instead of trying to open the file with
with open(".zshrc", 'w') as zsh:
zsh.write(f'export PATH="$PATH:{path}"')
you can just do
subprocess.call(f"echo 'export PATH=$PATH:{path}' >> ~/.zshrc", shell=True)
If anybody has a way of removing from ~/.zshrc with python that would be very helpful...
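For the follow-up about removing an entry again, one approach (a sketch with an illustrative helper name; it assumes the line was written exactly as above) is to read ~/.zshrc, filter out the matching line, and write the file back:
from pathlib import Path

def remove_path_entry(path_to_remove):
    """Rewrite ~/.zshrc without the export line previously added for path_to_remove."""
    zshrc = Path.home() / ".zshrc"
    unwanted = f'export PATH=$PATH:{path_to_remove}'
    lines = zshrc.read_text().splitlines()
    kept = [line for line in lines if line.strip() != unwanted]
    zshrc.write_text("\n".join(kept) + "\n")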

Converting a file from .sam to .bam using python subprocess

I would like to start out by saying any help is greatly appreciated. I'm new to Python and scripting in general. I am trying to use a program called samtools view to convert a file from .sam to .bam. I need to be able to do what this BASH command is doing in Python:
samtools view -bS aln.sam > aln.bam
I understand that BASH operators like | > < are handled with subprocess's stdin, stdout and stderr in Python. I have tried a few different methods and still can't get my BASH command converted correctly. I have tried:
cmd = subprocess.call(["samtools view","-bS"], stdin=open(aln.sam,'r'), stdout=open(aln.bam,'w'), shell=True)
and
from subprocess import Popen

with open(SAMPLE + "." + TARGET + ".sam", 'wb', 0) as input_file:
    with open(SAMPLE + "." + TARGET + ".bam", 'wb', 0) as output_file:
        cmd = Popen([Dir + "samtools-1.1/samtools view", '-bS'],
                    stdin=(input_file), stdout=(output_file), shell=True)
in Python and am still not getting samtools to convert a .sam to a .bam file. What am I doing wrong?
Abukamel is right, but in case you (or others) are wondering about your specific examples....
You're not too far off with your first attempt, just a few minor items:
Filenames should be in quotes
samtools reads from a named input file, not from stdin
You don't need "shell=True" since you're not using shell tricks like redirection
So you can do:
import subprocess

subprocess.call(["samtools", "view", "-bS", "aln.sam"],
                stdout=open('aln.bam', 'w'))
Your second example has more or less the same issues, so would need to be changed to something like:
from subprocess import Popen
from subprocess import Popen

with open('aln.bam', 'wb', 0) as output_file:
    cmd = Popen(["samtools", "view", '-bS', 'aln.sam'],
                stdout=output_file)
You can pass execution to the shell with the kwarg shell=True:
subprocess.call('samtools view -bS aln.sam > aln.bam', shell=True)
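On Python 3.5+ the same thing is usually written with subprocess.run, which also lets you ask for an exception on failure; a minimal sketch, still leaving the redirection to the shell:
import subprocess

# shell=True lets the shell handle the `>` redirection; check=True raises on error.
subprocess.run('samtools view -bS aln.sam > aln.bam', shell=True, check=True)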

os.system: saving shell variables with multiple commands in one method

I am having a problem using my command/commands with one instance of os.system.
Unfortunately I have to use os.system, as I have no control over this; I just send the string to the os.system method. I know I should really use the subprocess module for my case, but that isn't an option.
So here is what I am trying to do.
I have a string like below:
cmd = "export BASE_PATH=`pwd`; export fileList=`python OutputString.py`; ./myscript --files ${fileList}; cp outputfile $BASE_PATH/.;"
This command then gets sent to the os.system module like so
os.system(cmd)
Unfortunately, when I consult my log file I get something that looks like this:
os.system(r"""export BASE_PATH=/tmp/bla/bla; export fileList=; ./myscript --files ; cp outputfile /.;""")
As you can see, BASE_PATH seems to be set, but when I use it in cp outputfile $BASE_PATH/. I get an empty string.
fileList also comes out empty, even though fileList=`python OutputString.py` should put a file list into that variable.
My thoughts:
Are these problems due to a new process being started for each command, so that I lose the BASE_PATH variable by the next command?
I am also not sure why fileList is empty.
Is there a solution to my problem using os.system and my command string?
Please note that I have to use os.system; this is out of my control.
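For what it's worth, os.system() hands the whole string to a single shell (/bin/sh -c), so a variable exported at the start of the string should still be visible to the later ;-separated commands; a quick way to check this on your machine:
import os

# One os.system() call == one shell process, so BASE_PATH survives the ';'.
os.system('export BASE_PATH=`pwd`; echo "BASE_PATH is $BASE_PATH"')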

Redirecting CGI error output from STDERR to a file (python AND perl)

I'm moving a website to Hostmonster and asked where the server log is located so I can automatically scan it for CGI errors. I was told, "We're sorry, but we do not have cgi errors go to any files that you have access to."
For organizational reasons I'm stuck with Hostmonster and this awful policy, so as a workaround I thought maybe I'd modify the CGI scripts to redirect STDERR to a custom log file.
I have a lot of scripts (269) so I need an easy way in both Python and Perl to redirect STDERR to a custom log file.
Something that accounts for file locking either explicitly or implicitly would be great, since a shared CGI error log file could theoretically be written to by more than one script at once if more than one script fails at the same time.
(I want to use a shared error log so I can email its contents to myself nightly and then archive or delete it.)
I know I may have to modify each file (grrr), that's why I'm looking for something elegant that will only be a few lines of code. Thanks.
For Perl, just close and re-open STDERR to point to a file of your choice.
close STDERR;
open STDERR, '>>', '/path/to/your/log.txt'
or die "Couldn't redirect STDERR: $!";
warn "this will go to log.txt";
Alternatively, you could look into a filehandle multiplexer like File::Tee.
Python: cgitb. At the top of your script, before other imports:
import cgitb
cgitb.enable(False, '/home/me/www/myapp/logs/errors')
(‘errors’ being a directory the web server user has write-access to.)
In Perl try CGI::Carp
BEGIN {
    use CGI::Carp qw(carpout);
    use diagnostics;
    open(LOG, ">errors.txt");
    carpout(LOG);
    close(LOG);
}
use CGI::Carp qw(fatalsToBrowser);
The solution I finally went with was similar to the following, near the top of all my scripts:
Perl:
open(STDERR,">>","/path/to/my/cgi-error.log")
or die "Could not redirect STDERR: $OS_ERROR";
Python:
sys.stderr = open("/path/to/my/cgi-error.log", "a")
Apparently in Perl you don't need to close the STDERR handle before reopening it.
Normally I would close it anyway as a best practice, but as I said in the question, I have 269 scripts and I'm trying to minimize the changes. (Plus it seems more Perlish to just re-open the open filehandle, as awful as that sounds.)
In case anyone else has something similar in the future, here's what I'm going to do for updating all my scripts at once:
Perl:
find . -type f -name "*.pl" -exec perl -pi.bak -e 's%/usr/bin/perl%/usr/bin/perl\nopen(STDERR,">>","/path/to/my/cgi-error.log")\n or die "Could not redirect STDERR: \$OS_ERROR";%' {} \;
Python:
find . -type f -name "*.py" -exec perl -pi.bak -e 's%^(import os, sys.*)%$1\nsys.stderr = open("/path/to/my/cgi-error.log", "a")%' {} \;
The reason I'm posting these commands is that it took me quite a lot of syntactical massaging to get those commands to work (e.g., changing Couldn't to Could not, changing #!/usr/bin/perl to just /usr/bin/perl so the shell wouldn't interpret ! as a history character, using $OS_ERROR instead of $!, etc.)
Thanks to everyone who commented. Since no one answered for both Perl and Python I couldn't really "accept" any of the given answers, but I did give votes to the ones which led me in the right direction. Thanks again!
python:
import sys
sys.stderr = open('file_path_with_write_permission/filename', 'a')
Python has the sys.stderr object that you might want to look into.
>>> help(sys.__stderr__.read)
Help on built-in function read:
read(...)
read([size]) -> read at most size bytes, returned as a string.
If the size argument is negative or omitted, read until EOF is reached.
Notice that when in non-blocking mode, less data than what was requested
may be returned, even if no size parameter was given.
You can store the output of this in a string and write that string to a file.
Hope this helps
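None of the answers here touch the file-locking concern from the question. One sketch of an explicit approach in Python (assuming a POSIX host with fcntl available, untested on Hostmonster) is to wrap the shared log so each write takes an exclusive lock:
import fcntl
import sys

class LockedLog:
    """File-like wrapper that takes an exclusive lock around each write."""

    def __init__(self, path):
        self._f = open(path, 'a')

    def write(self, text):
        fcntl.flock(self._f, fcntl.LOCK_EX)
        try:
            self._f.write(text)
            self._f.flush()
        finally:
            fcntl.flock(self._f, fcntl.LOCK_UN)

    def flush(self):
        self._f.flush()

sys.stderr = LockedLog('/path/to/my/cgi-error.log')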
In my Perl CGI programs, I usually have
BEGIN {
    open(STDERR, '>>', 'stderr.log');
}
right after the shebang line and "use strict; use warnings;". If you want, you may append $0 to the file name. But this will not solve the multiple-programs problem, as several copies of one program may run simultaneously. I usually just keep several output files, one for each program group.
