Calling grep from Python and redirecting its output to a file - python

I want to execute this command in Python
grep keyMessage logFile.log > keyMessageFile.log
This is what I have done so far:
from subprocess import call
keyMessage = 'keyMessage'
call(["grep", keyMessage, "logFile.log"])
but I don't know how to add the > keyMessageFile.log part
By the way, the reason I use grep is that it's much faster than reading the file, comparing strings, and writing the results in Python.
Update:
Here is the slower Python code I wrote:
keyMessage = 'keyMessage'
with open('logFile.log') as f:
    for line in f:
        with open(keyMessage + '.txt', 'a') as newFile:
            if keyMessage not in line:
                continue
            else:
                newFile.write(line)

The simplest way to do this (reasonably safely too) is:
from subprocess import check_call
from shlex import quote
check_call('grep %s logFile.log > keyMessageFile.log' % quote(keyMessage), shell=True)
However, unless you really need the regex-matching capabilities of grep, and you end up reading keyMessageFile.log in your program anyway, I don't think the following would be unreasonably slow:
def read_matching_lines(filename, key):
    with open(filename) as fp:
        for line in fp:
            if key in line:
                yield line

for matching_line in read_matching_lines('logFile.log', keyMessage):
    print(matching_line)

subprocess.call has a stdout parameter. Pass a file opened for writing to it.
with open("keyMessageFile.log", "w") as o:
keyMessage= 'keyMessage'
call(["grep", keyMessage, "logFile.log"], stdout=o)
subprocess.call is the old API; you should use subprocess.run instead.
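For reference, a rough equivalent using subprocess.run (assuming Python 3.5+ and the same file names as above):
from subprocess import run

keyMessage = 'keyMessage'
with open("keyMessageFile.log", "w") as o:
    # run() waits for grep to finish and returns a CompletedProcess;
    # grep exits 0 on a match and 1 when nothing matches.
    result = run(["grep", keyMessage, "logFile.log"], stdout=o)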

For me this works:
import os
os.system('grep %s logFile.log > keyMessageFile.log' % 'looking string')

Related

Unable to read file with python

I'm trying to read the content of a file with Python 3.8.5, but the output is empty; I don't understand what I'm doing wrong.
Here is the code:
import subprocess
import os
filename = "ls.out"
ls_command = "ls -la"
file = open(filename, "w")
subprocess.Popen(ls_command, stdout=file, shell=True)
file.close()
# So far, all is ok. The file "ls.out" is correctly created and filled with the output of the "ls -la" command.
file = open(filename, "r")
for line in file:
    print(line)
file.close()
The output of this script is empty; it doesn't print anything. I'm not able to see the content of ls.out.
What is not correct here?
Popen creates a new process and launches it but returns immediately. So the end result is that you've forked your code and have both processes running at once. Your Python code is executing faster than ls can start and finish. Thus, you need to wait for the child process to finish by adding a call to wait():
import subprocess
import os
filename = "ls.out"
ls_command = "ls -la"
file = open(filename, "w")
proc = subprocess.Popen(ls_command, stdout=file, shell=True)
proc.wait()
file.close()
file = open(filename, "r")
for line in file:
    print(line)
file.close()
Popen merely starts the subprocess. Chances are the file is not yet populated when you open it.
If you want to wait for the Popen object to finish, you have to call its wait method, etc.; but a much better and simpler solution is to use subprocess.check_call() or one of the other higher-level wrappers.
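For example, a rough sketch of the check_call() route, using the same command and file name as above:
import subprocess

with open("ls.out", "w") as file:
    # check_call() blocks until the command finishes and raises
    # CalledProcessError if it exits with a non-zero status.
    subprocess.check_call("ls -la", stdout=file, shell=True)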
If the command prints to standard output, why don't you read it directly?
import subprocess
import shlex
result = subprocess.run(
    shlex.split(ls_command),  # avoid shell=True
    check=True, text=True, capture_output=True)
line = result.stdout

python readlines not working during incron

I'm trying to call a python script through incron:
/data/alucard-ops/drop IN_CLOSE_WRITE /data/alucard-ops/util/test.py $@/$#
but I cant seem to read from the file passed. Here is the script:
#!/usr/bin/env /usr/bin/python3
import os,sys
logfile = '/data/alucard-ops/log/'
log = open(logfile + 'test.log', 'a')
log.write(sys.argv[1] + "\n")
log.write(str(os.path.exists(sys.argv[1])) + "\n")
datafile = open(sys.argv[1], 'r')
log.write('Open\n')
data = datafile.readlines()
log.write("read\n")
datafile.close()
The output generated by the script:
/data/alucard-ops/drop/nsco-20180219.csv
True
Open
It seems to stop at the readlines() call. I don't see any errors in the syslog.
Update: It seems that I can use a subprocess to cat the file, and it retrieves the contents. But when I decode it with data.decode('utf-8'), I'm back to nothing in the variable.
I ended up using watchdog instead.
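In case it helps anyone, here is a minimal watchdog sketch; the watched directory and the handler logic are illustrative assumptions, not taken from the original setup:
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class DropHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Fires when a new file appears in the watched directory; the writer
        # may still be writing, so real code should wait for it to finish.
        if not event.is_directory:
            with open(event.src_path) as datafile:
                print(len(datafile.readlines()), "lines in", event.src_path)

observer = Observer()
observer.schedule(DropHandler(), "/data/alucard-ops/drop", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()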

How to avoid a memory errors in Python when removing empty lines in a large text file? [duplicate]

For example, we have a file like this, with blank lines in it:
first line

second line

third line
And as the result we have to get:
first line
second line
third line
Use ONLY python
The with statement is excellent for automatically opening and closing files.
with open('myfile') as file:
    lines = [line for line in file if not line.isspace()]
with open('myfile', 'w') as file:
    file.writelines(lines)
import fileinput
for line in fileinput.FileInput("file", inplace=1):
    if line.rstrip():
        print line,
import sys
with open("file.txt") as f:
    for line in f:
        if not line.isspace():
            sys.stdout.write(line)
Another way is:
with open("file.txt") as f:
    print "".join(line for line in f if not line.isspace())
with open(fname, 'r+') as fd:
    lines = fd.readlines()
    fd.seek(0)
    fd.writelines(line for line in lines if line.strip())
    fd.truncate()
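Note that this approach (like any readlines()-based one) pulls the whole file into memory, which is what the question is trying to avoid for very large files. A streaming sketch (assuming Python 3; the tempfile usage is my own, not from any answer here) keeps only one line in memory at a time:
import os
import tempfile

# Stream line by line and write non-blank lines to a temporary file,
# then atomically swap it in place of the original.
with open("file.txt") as src, tempfile.NamedTemporaryFile("w", dir=".", delete=False) as tmp:
    for line in src:
        if not line.isspace():
            tmp.write(line)
os.replace(tmp.name, "file.txt")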
I know you asked about Python, but your comment about Win and Linux indicates that you're after cross-platform-ness, and Perl is at least as cross-platform as Python. You can do this easily with one line of Perl on the command line, no scripts necessary: perl -ne 'print if /\S/' foo.txt
(I love Python and prefer it to Perl 99% of the time, but sometimes I really wish I could do command-line scripts with it as you can with the -e switch to Perl!)
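(For what it's worth, Python's -c switch can get reasonably close on the command line; a rough equivalent of the Perl one-liner above, reading from standard input:)
python -c "import sys; sys.stdout.writelines(line for line in sys.stdin if not line.isspace())" < foo.txt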
That said, the following Python script should work. If you expect to do this often or on big files, it could be optimized by compiling the regular expression.
#!/usr/bin/python
import re
file = open('foo.txt', 'r')
for line in file.readlines():
    if re.search(r'\S', line): print line,
file.close()
There are lots of ways to do this, that's just one :)
>>> s = """first line
... second line
...
... third line
... """
>>> print '\n'.join([i for i in s.split('\n') if len(i) > 0])
first line
second line
third line
>>>
You can use the approach below to delete all blank lines:
with open("new_file","r") as f:
for i in f.readlines():
if not i.strip():
continue
if i:
print i,
We can also write the output to a file like this:
with open("new_file","r") as f, open("outfile.txt","w") as outfile:
for i in f.readlines():
if not i.strip():
continue
if i:
outfile.write(i)
Have you tried something like the program below?
for line in open(filename):
    if len(line) > 1 or line != '\n':
        print(line, end='')
Explanation: On Linux (or any platform with a shell and sed available), the solution below may work, since the os module can invoke the shell and sed handles the regex.
Solution:
import os
os.system("sed -i \'/^$/d\' file.txt")

How to execute a python script and write output to txt file?

I'm executing a .py file, which spits out a given string. This command works fine:
execfile ('file.py')
But I want the output (in addition to it being shown in the shell) written into a text file.
I tried this, but it's not working :(
execfile ('file.py') > ('output.txt')
All I get is this:
tugsjs6555
False
I guess "False" is referring to the output file not being successfully written :(
Thanks for your help
What you're doing is comparing the return value of execfile('file.py') against the string 'output.txt'.
You can do what you want with subprocess:
#!/usr/bin/env python
import subprocess
with open("output.txt", "w+") as output:
subprocess.call(["python", "./script.py"], stdout=output);
This will also work, since it redirects standard output to the file output.txt before executing "file.py":
import sys
orig = sys.stdout
with open("output.txt", "wb") as f:
    sys.stdout = f
    try:
        execfile("file.py", {})
    finally:
        sys.stdout = orig
Alternatively, execute the script in a subprocess:
import subprocess
with open("output.txt", "wb") as f:
subprocess.check_call(["python", "file.py"], stdout=f)
If you want to write to a directory, assuming you wish to hardcode the directory path:
import sys
import os.path
orig = sys.stdout
with open(os.path.join("dir", "output.txt"), "wb") as f:
    sys.stdout = f
    try:
        execfile("file.py", {})
    finally:
        sys.stdout = orig
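On Python 3, where execfile() no longer exists, a rough equivalent uses exec() together with contextlib.redirect_stdout (just a sketch, assuming file.py sits in the current directory):
import contextlib

with open("output.txt", "w") as f, contextlib.redirect_stdout(f):
    exec(open("file.py").read())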
If you are running the file on Windows command prompt:
python filename.py >> textfile.txt
The output will be redirected to textfile.txt in the same folder where filename.py is stored.
This is useful when the results shown in cmd are long and you want the entire output without it being truncated.
The simplest way to run a script and get the output to a text file is by typing the below in the terminal:
PCname:~/Path/WorkFolderName$ python scriptname.py > output.txt
*The shell will create (or overwrite) output.txt in the work folder automatically, so you do not need to create it beforehand.
Use this instead:
text_file = open('output.txt', 'w')
text_file.write('my string i want to put in file')
text_file.close()
Put it into your main file and go ahead and run it. Replace the string in the 2nd line with your string or a variable containing the string you want to output. If you have further questions post below.
file_open = open("test1.txt", "r")
file_output = open("output.txt", "w")
for line in file_open:
    # each line already ends with '\n', so suppress print's extra newline
    print(line, end="", file=file_output)
file_open.close()
file_output.close()
Using some hints from Remolten in the posts above and some other links, I have written the following:
from os import listdir
from os.path import isfile, join
folderpath = "/Users/nupadhy/Downloads"
filenames = [A for A in listdir(folderpath) if isfile(join(folderpath,A))]
newlistfiles = ("\n".join(filenames))
OuttxtFile = open('listallfiles.txt', 'w')
OuttxtFile.write(newlistfiles)
OuttxtFile.close()
The code above lists all files in my Downloads folder. It saves the output to listallfiles.txt. If the file is not there it will be created, and it is overwritten each time you run this code. The only thing you need to be mindful of is that it will create the output file in the folder where your .py script is saved. See how you go, hope it helps.
You could also do this by opening cmd in the folder where the Python script is saved, and then running name.py > filename.txt
It worked for me on Windows 10.

Read file and copy to standard output.

I'm trying to write a python program that will read input and copy it to standard output (with no alterations). I've been told that it needs to operate as a Python version of the Unix cat function. If a file cannot be opened, an error message needs to be printed, and then the program needs to continue processing any additional files. I am a complete beginner, and have tried my best to scrape something together with my limited knowledge. Here is what I have so far:
from sys import argv, stdout, stdin, stderr
if len(argv) == 1:
    try:
        stdout.write(raw_input(' ') + '\n')
    except:
        stderr.write('sorry' + '\n')
        quit()
else:
    for filename in argv[1:]:
        try:
            filehandle = open(filename)
        except IOError:
            stderr.write('Sorry, could not open ' + filename + '\n')
            continue
        f = filehandle.read()
        stdout.write(f)
I am not quite sure where to go from here... does anyone have any advice? Am I on the right track, even a little bit? Please and thank you!
This function will copy the specified file to the console line by line (in case you later on decide to give it the ability to use the -n command line option of cat)
def catfile(fn):
    with open(fn) as f:
        for line in f:
            print line,
It can be called with the filename once you have established the file exists.
