I can use the ping command and save the output using the following line:
command = os.system('ping 127.0.0.1 > new.txt')
However, each time the script is run the text file is overwritten, so I only have the last ping saved. I have looked into logging but cannot find a way to save the output of the ping requests to a text file without overwriting it.
I have tried:
logging.debug(command = os.system('ping 127.0.0.1'))
But this throws an error: debug() takes at least 1 argument (0 given)
Any help would be appreciated, thanks!
You could get the result of subprocess.check_output and write it to a file:
import subprocess
result = subprocess.check_output(['ping', '127.0.0.1'])
with open("new.txt", "a") as myfile:
    myfile.write(result)
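One caveat worth noting: on Python 3, check_output returns bytes, so decode before writing in text mode. A minimal sketch (with a portable stand-in command in place of ping, so it runs without network access):

```python
import subprocess
import sys

# A portable stand-in for ['ping', '127.0.0.1'] so this sketch runs anywhere;
# swap in your real command.
cmd = [sys.executable, '-c', "print('reply from 127.0.0.1')"]

result = subprocess.check_output(cmd)   # bytes on Python 3
with open('new.txt', 'a') as myfile:    # 'a' appends instead of overwriting
    myfile.write(result.decode())       # decode bytes to str for text mode
```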
If you insist on using os.system, then simply use >> redirection:
command = os.system('ping 127.0.0.1 >> new.txt')
This would append new data to new.txt instead of overwriting it.
Another solution is to use the subprocess module and manage the file handle manually. This has the advantage of skipping the shell (it's faster and, in some cases, safer):
import subprocess
out = open('new.txt', 'a')
subprocess.call(['ping', '127.0.0.1'], stdout=out)
out.close()
Notice that you can do other things with stdout as well, for example capture it into a string.
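For instance, to capture stdout into a string instead of a file (again a sketch; a portable command stands in for ping):

```python
import subprocess
import sys

cmd = [sys.executable, '-c', "print('64 bytes from 127.0.0.1')"]  # stand-in for ping

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, _ = proc.communicate()   # out collects everything the command printed
text = out.decode()           # bytes -> str
```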
I'm trying to write a script which runs an OS command (on Linux) and saves its output in a text file. But when I run this code, the output of the command is not saved in the text file.
#!/usr/bin/python
import sys
import os
target = raw_input('Enter the website : ')
ping_it = os.system('ping ' + target)
string_it = str(ping_it)
with open("Output.txt", "w+") as fo:
    fo.write(string_it)
    fo.close()
After running the script, when I check the txt file the only thing I get is the number 2 in Output.txt.
Welcome to Stack Overflow.
The main issue here is that os.system is not designed to produce the output from the command - it simply runs it, and the process sends its output to whatever it inherits from its parent (your program).
To capture output it's easiest to use the subprocess module, which allows you to capture the process's outputs.
Here's a fairly simple program that will get you started:
import subprocess
target = 'google.com'
ping_it = subprocess.Popen(['ping', target],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
out, err = ping_it.communicate()
with open("Output.txt", "w+") as fo:
    fo.write(out.decode())
If you want to read output as it is produced rather than waiting for the subprocess to terminate you can use a single subprocess.PIPE channel and read from that, which is conveniently expressed in forms like this:
from subprocess import Popen, PIPE

with Popen(["ping", "google.com"], stdout=PIPE) as proc:
    print(proc.stdout.read())
In this example I chose to give the command as a list of arguments rather than as a simple string. This avoids having to join arguments into a string when they are already in list form.
Note that when interacting with subprocesses in this way it's possible for the subprocess to get in a blocked state because either stdout or stderr has filled up its output buffer space. If your program then tries to read from the other channel that will create a deadlock, where each process is waiting for the other to do something. To avoid this you can make stderr a temporary file, then verify after subprocess completion that the file contains nothing of significance (and, ideally, remove it).
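A sketch of that stderr-to-temporary-file approach (the command here is a portable stand-in that writes to both channels):

```python
import os
import subprocess
import sys
import tempfile

cmd = [sys.executable, '-c',
       "import sys; print('payload'); print('noise', file=sys.stderr)"]

# stderr goes to a temp file, so it can never fill a pipe buffer and block.
err_file = tempfile.NamedTemporaryFile(mode='w+', delete=False)
with subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=err_file) as proc:
    out = proc.stdout.read()    # only one pipe to drain: no deadlock risk

err_file.close()
with open(err_file.name) as f:
    err_text = f.read()         # inspect once the subprocess has finished
os.remove(err_file.name)        # discard it if nothing significant is there
```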
From docs you can use os.popen to assign output of any command to a variable.
import os
target = raw_input('Enter the website : ')
output = os.popen('ping ' + target).read() # Saving the output
with open('output.txt', 'w+') as f:
    f.write(output)
Exactly what are you trying to save in the file? You saved the return value of os.system, which is nothing more than the exit status of the execution. That is exactly what the documentation tells you the call returns.
If you want the output of the ping command, you need to use something that captures it, not os.system's return value. The simple way is to add UNIX redirection:
os.system('ping ' + target + ' > Output.txt 2>&1')
If you feel a need to pass the results through Python, use a separate process and receive the command results; see here.
You can also spawn a separate process and examine the results as they are produced, line by line. You don't seem to need that, but just in case, see my own question here.
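For completeness, a sketch of reading a subprocess's output line by line as it is produced (Python 3.7+ for text=True; the command is a portable stand-in):

```python
import subprocess
import sys

cmd = [sys.executable, '-c', "print('line 1'); print('line 2')"]

lines = []
with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:        # yields each line as the child emits it
        lines.append(line.rstrip())
```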
I'm trying to execute a bash script and want to redirect the output from the shell to a file.
This is what I have so far:
baby = subprocess.Popen('some command', stdout = subprocess.PIPE, shell = True)
print (baby.stdout.readlines()) #i wanna see it in console
with open("log.txt", "a") as fobj:   # wanna create a file log.txt and save the output there as a string
    fobj.write(stdout)
but I get this error: NameError: name 'stdout' is not defined
I've looked at these questions: Run subprocess and print output to logging, subprocess.Popen() doesn't redirect output to file, and Can you make a python subprocess output stdout and stderr as usual, but also capture the output as a string? - but to no avail; they were all too complex for me and I got lost in all the code.
Can't I just redirect the stdout to a normal txt file?
And is there a way I can build a function to time the execution of that script and put the timing in that same log.txt file? (I use time ./myscript.py to take the time, but I don't know how to redirect that into the txt file either.)
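On the timing part of the question: you can measure the run from inside Python instead of using the shell's time, and append both the output and the elapsed time to log.txt. A minimal sketch (subprocess.run needs Python 3.5+; a portable command stands in for your script):

```python
import subprocess
import sys
import time

cmd = [sys.executable, '-c', "print('script output')"]   # stand-in for your script

start = time.monotonic()
out = subprocess.run(cmd, stdout=subprocess.PIPE).stdout.decode()
elapsed = time.monotonic() - start

with open('log.txt', 'a') as fobj:   # 'a' appends across runs
    fobj.write(out)
    fobj.write('elapsed: %.3f seconds\n' % elapsed)
```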
You should adjust this example to whatever suits your case.
First create input.txt file
Then
import sys
with open(sys.argv[1], 'r') as input, open(sys.argv[2], 'w') as output:
    for line in input:
        output.write(line.strip() + '\n')
You run it from the command line (the script is s4.py; you can rename it):
python s4.py input.txt output.txt
Output
obama
is
my boss
You can guess, but the output is the same as the input.
Having some issues calling awk from within Python. Normally, I'd do the following to call the command in awk from the command line.
Open up command line, in admin mode or not.
Change directory to where awk.exe lives, namely cd C:\R\GnuWin32\bin
Call awk -F "," "{ print > (\"split-\" $10 \".csv\") }" large.csv
My command is used to split up the large.csv file based on the 10th column into a number of files named split-[COL VAL HERE].csv. I have no issues running this command. I tried to run the same code in Python using subprocess.call() but I'm having some issues. I run the following code:
def split_ByInputColumn():
subprocess.call(['C:/R/GnuWin32/bin/awk.exe', '-F', '\",\"',
'\"{ print > (\\"split-\\" $10 \\".csv\\") }\"', 'large.csv'],
cwd = 'C:/R/GnuWin32/bin/')
and clearly, something is running when I execute the function (CPU usage, etc) but when I go to check C:/R/GnuWin32/bin/ there are no split files in the directory. Any idea on what's going wrong?
As I stated in my previous answer (which was downvoted), you are overprotecting the arguments, which makes awk's argument parsing fail.
Since there was no comment, I assumed there was a typo, but the command worked... So I suppose the downvote was because I should have strongly suggested a full-fledged Python solution, which is the best thing to do here (as stated in my previous answer).
Writing the equivalent in python is not trivial as we have to emulate the way awk opens files and appends to them afterwards. But it is more integrated, pythonic and handles quoting properly if quoting occurs in the input file.
I took the time to code & test it:
import csv
import glob
import os

def split_ByInputColumn():
    # get rid of the old data from previous runs
    for f in glob.glob("split-*.csv"):
        os.remove(f)
    open_files = dict()
    with open('large.csv') as f:
        cr = csv.reader(f, delimiter=',')
        for r in cr:
            tenth_column = r[9]
            filename = "split-{}.csv".format(tenth_column)
            if filename not in open_files:
                handle = open(filename, "wb")
                open_files[filename] = (handle, csv.writer(handle, delimiter=','))
            open_files[filename][1].writerow(r)
    for f, _ in open_files.values():
        f.close()

split_ByInputColumn()
in detail:
read the big file as csv (advantage: quoting is handled properly)
compute the destination filename
if filename not in dictionary, open it and create csv.writer object
write the row using the corresponding writer
in the end, close file handles
Aside: My old solution, using awk properly:
import subprocess

def split_ByInputColumn():
    subprocess.call(['awk.exe', '-F', ',',
                     '{ print > ("split-" $10 ".csv") }', 'large.csv'],
                    cwd='some_directory')
Someone else posted an answer (and then subsequently deleted it), but the issue was that I was over-protecting my arguments. The following code works:
def split_ByInputColumn():
    subprocess.call(['C:/R/GnuWin32/bin/awk.exe', '-F', ',',
                     '{ print > ("split-" $10 ".csv") }', 'large.csv'],
                    cwd='C:/R/GnuWin32/bin/')
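One way to see why the plain list works: subprocess.list2cmdline (the helper subprocess itself uses on Windows, though it is not a documented public API) shows the command line an argument list is turned into:

```python
import subprocess

args = ['awk', '-F', ',', '{ print > ("split-" $10 ".csv") }', 'large.csv']
print(subprocess.list2cmdline(args))
# The awk program text gets one layer of quoting automatically; the extra
# backslash-escaped quotes in the failing call reached awk literally.
```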
I would like to start out by saying any help is greatly appreciated. I'm new to Python and scripting in general. I am trying to use a program called samtools view to convert a file from .sam to a .bam I need to be able do what this BASH command is doing in Python:
samtools view -bS aln.sam > aln.bam
I understand that BASH operators like | > < correspond to subprocess's stdin, stdout and stderr in Python. I have tried a few different methods and still can't get my BASH command converted correctly. I have tried:
cmd = subprocess.call(["samtools view","-bS"], stdin=open(aln.sam,'r'), stdout=open(aln.bam,'w'), shell=True)
and
from subprocess import Popen
with open(SAMPLE + "." + TARGET + ".sam", 'wb', 0) as input_file:
    with open(SAMPLE + "." + TARGET + ".bam", 'wb', 0) as output_file:
        cmd = Popen([Dir + "samtools-1.1/samtools view", '-bS'],
                    stdin=input_file, stdout=output_file, shell=True)
in Python and am still not getting samtools to convert a .sam to a .bam file. What am I doing wrong?
Abukamel is right, but in case you (or others) are wondering about your specific examples....
You're not too far off with your first attempt, just a few minor items:
Filenames should be in quotes
samtools reads from a named input file, not from stdin
You don't need "shell=True" since you're not using shell tricks like redirection
So you can do:
import subprocess
subprocess.call(["samtools", "view", "-bS", "aln.sam"],
                stdout=open('aln.bam', 'wb'))
Your second example has more or less the same issues, so would need to be changed to something like:
from subprocess import Popen
with open('aln.bam', 'wb', 0) as output_file:
    cmd = Popen(["samtools", "view", '-bS', 'aln.sam'],
                stdout=output_file)
You can pass execution to the shell with the kwarg shell=True:
subprocess.call('samtools view -bS aln.sam > aln.bam', shell=True)
I am writing a Python script that reads a line/string, calls Unix grep to search a query file for lines that contain the string, and then prints the results.
from subprocess import call
for line in infilelines:
    output = call(["grep", line, "path/to/query/file"])
    print output
    print line
When I look at my results printed to the screen, I will get a list of matching strings from the query file, but I will also get "1" and "0" integers as output, and line is never printed to the screen. I expect to get the lines from the query file that match my string, followed by the string that I used in my search.
call returns the process return code.
If using Python 2.7, use check_output.
from subprocess import check_output
output = check_output(["grep", line, "path/to/query/file"])
If using anything before that, use communicate.
import subprocess
process = subprocess.Popen(["grep", line, "path/to/query/file"], stdout=subprocess.PIPE)
output = process.communicate()[0]
This will open a pipe for stdout that you can read with communicate. If you want stderr too, you need to add "stderr=subprocess.PIPE" too.
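For example, to capture both channels at once (communicate drains both pipes together, avoiding deadlock; a portable command stands in for grep):

```python
import subprocess
import sys

cmd = [sys.executable, '-c',
       "import sys; print('match'); print('warning', file=sys.stderr)"]

process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, errors = process.communicate()   # bytes from stdout and stderr
```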
This will return the full output. If you want to parse it into separate lines, use split.
output.split('\n')
I believe Python takes care of line-ending conversions for you, but since you're using grep I'm going to assume you're on Unix where the line-ending is \n anyway.
http://docs.python.org/library/subprocess.html#subprocess.check_output
The following code works on Python 2 (the commands module was removed in Python 3):
from commands import getoutput
output = getoutput('grep %s path/to/query/file' % line)
output_list = output.splitlines()
Why would you want to call an external grep when Python itself can do it? It adds extra overhead, and your code then depends on grep being installed. This is how you do a simple grep in Python with the in operator:
query = open("/path/to/query/file").readlines()
query = [i.rstrip() for i in query]
f = open("file")
for line in f:
    if line.rstrip() in query:
        print line.rstrip()
f.close()