I have two simple scripts, and I am trying to pass some information (a date given as input to the Python script) to the bash script. Here's the Python one:
#!/usr/local/bin/python
import os
import sys
import subprocess
year = "2012"
month = "5"
month_name = "may"
file = open('date.tmp','w')
file.write(year + "\n")
file.write(month + "\n")
file.write(month_name + "\n")
file.close
subprocess.call("/home/lukasz/bashdate.sh")
And here's the bash one:
#!/bin/bash
cat /home/lukasz/date.tmp | \
while read CMD; do
echo -e $CMD
done
rm /home/lukasz/date.tmp
The Python script runs without issues. It calls the bash script, but the while loop seems not to run at all. I know the bash script does run overall, because the rm command gets executed and the date.tmp file is removed. However, if I comment out the subprocess call in Python and then run the bash script manually, it works fine, displaying each line.
A brief explanation of what I am trying to accomplish: I have a Python script that exports a very large DB to CSV (almost 300 tables and a few gigabytes of data) and then calls this bash script to zip the CSVs into one file and move it to another location. I need to pass the month and year supplied to the Python script on to the bash script.
I believe you need file.close() instead of file.close. With the latter you're not actually closing the file, since you never call the method. Because the file hasn't been closed, it may not have been flushed, so its contents may still be sitting in a buffer rather than written to disk when the bash script reads it.
As a side note, these things are taken care of automatically if you use a context manager:
with open('foofile', 'w') as fout:
    fout.write("this data")
    fout.write("that data")

# Sleep well tonight knowing that Python guarantees your file is closed properly
do_more_stuff(blah, foo, bar, baz, qux)
Instead of writing a temp file, send the values of year, month, and month_name to the bash script as parameters. That is, in the Python code remove all the lines that involve file, and replace
subprocess.call("/home/lukasz/bashdate.sh")
with
subprocess.call(['/home/lukasz/bashdate.sh', year, month, month_name])
and in the bash script, replace the cat ... rm lines with, for example,
y=$1; m=$2; mn=$3
which puts the year, month, and month name into the shell variables y, m, and mn.
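Put together, a minimal sketch of the revised bash script (the echo line is just illustrative; the zip/move logic would go where these variables are used):
#!/bin/bash
# $1, $2 and $3 arrive from subprocess.call in the Python script
y=$1; m=$2; mn=$3
echo "year=$y month=$m month_name=$mn"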
Maybe try adding shell=True to the call:
subprocess.call("/home/lukasz/bashdate.sh", shell=True)
I have several thousand lines of code that all ultimately result in a few strings being printed using print() calls. Is there a way, at the bottom of my code, to export everything that has been printed to a text file?
This will help.
python main.py > output.txt
The operator > redirects the output of main.py from stdout to the regular file output.txt.
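If you also want error output in the file, redirect stderr there too (a standard shell idiom, independent of this particular script):
python main.py > output.txt 2>&1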
You can do this by reopening sys.stdout or with redirection in your shell. Here are both ways:
Reopen sys.stdout:
At the beginning of your script, add:
import sys
sys.stdout = open('logfile', 'w')
# ... rest of your program ...
and everything printed to standard output (including print() calls) will be written to logfile. This method will always work.
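As a variant of this method, if you only want to redirect part of the program and then get the console back, contextlib.redirect_stdout (in the standard library since Python 3.4) restores stdout automatically:
import contextlib

with open('logfile', 'w') as f, contextlib.redirect_stdout(f):
    print('this line goes to logfile')
print('this line goes to the console again')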
Redirection in your shell:
This is my preferred method if you're running the script from the command line every time. In some niche cases it won't work (e.g., when the script is being run by some other program), but it covers 90% of cases. You can simply run your original script like this:
python myFile.py > logfile
and everything will be written to logfile. If for some reason this doesn't work for you, use method #1.
I have a simple shell script script.sh:
echo "ubuntu:$1" | sudo chpasswd
I need to open the script, read it, insert the argument, and save it as a string like so: 'echo "ubuntu:arg_passed_when_opening" | sudo chpasswd' using Python.
All the options suggested here actually execute the script, which is not what I want.
Any suggestions?
You would do this the same way you read any text file, and you can use sys.argv to get the argument passed when running the Python script.
Ex:
import sys
with open('script.sh', 'r') as sfile:
    modified_file_contents = sfile.read().replace('$1', sys.argv[1])
With this method, modified_file_contents is a string containing the text of the file, but with the specified variable replaced with the argument passed to the python script when it was run.
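For example, assuming the snippet is saved as read_script.py (the file name is hypothetical), running
python read_script.py secret123
leaves 'echo "ubuntu:secret123" | sudo chpasswd' in modified_file_contents without ever executing script.sh.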
I am running a Python script which checks for modifications of files in a folder. I want that output printed to a file. The problem is that the output is dynamic: the cmd window stays open, and whenever a file is modified a line about it appears right away in the cmd window. All the solutions I found cover the case where I just run a command once and it finishes.
I tried:
python script.py > d:\output.txt
but the output.txt file stays empty.
For example, after I run python script.py and touch the two files, the command prompt shows a line for each modification. I want to capture that output.
Solution: in the Python script, add one more argument to the logging.basicConfig function: filename='d:\test.log'.
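A minimal sketch of that change (the level and format values are illustrative):
import logging

# With filename set, log records go to the file instead of the console.
logging.basicConfig(filename=r'd:\test.log', level=logging.INFO,
                    format='%(asctime)s %(message)s')
logging.info('modification detected: %s', 'somefile.txt')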
The issue is output buffering. If you wait long enough, you'll eventually see data show up in the file in "blocks". There are a few ways around it, for example:
Run python with the -u (unbuffered) flag
Add a sys.stdout.flush() after all print statements (which can be simplified by replacing stdout with a custom class to do it for you; see the linked question for more)
Add the flush=True option to print() calls if your version of Python supports it (3.3+)
If appropriate, use the logging module instead of print statements.
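A minimal sketch of the flush-based options (Python 3; the message text is illustrative):
import sys

print("file modified", flush=True)  # flush=True forces the write out immediately

print("file modified")
sys.stdout.flush()                  # or flush manually after printing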
python test.py > test.txt
This works for me in the Windows cmd prompt.
As I see it, the simplest approach would be to handle the writing to output.txt inside your script. When it is time to print a piece of information (in your example, touching two files prints two lines), you can open the file, write the specific line, and close it when done; output.txt is then always up to date.
Get the file path for output.txt as a command-line argument, for example:
python script.py --o 'd:\output.txt'
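A minimal sketch combining both ideas (the log_line helper is hypothetical):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--o', dest='outpath', default='output.txt')
args = parser.parse_args()

def log_line(line):
    # Open in append mode and close immediately, so the file on disk
    # is current after every modification event.
    with open(args.outpath, 'a') as f:
        f.write(line + '\n')

log_line('file1.txt was modified')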
I am converting some Python scripts I wrote in a Windows environment to run in Unix (Red Hat 5.4), and I'm having trouble converting the lines that deal with filepaths. In Windows, I usually read in all .txt files within a directory using something like:
pathtotxt = "C:\\Text Data\\EJC\\Philosophical Transactions 1665-1678\\*\\*.txt"
for file in glob.glob(pathtotxt):
It seems one can use the glob.glob() method in Unix as well, so I'm trying to implement this method to find all text files within a directory entitled "source" using the following code:
#!/usr/bin/env python
import commands
import sys
import glob
import os
testout = open('testoutput.txt', 'w')
numbers = [1,2,3]
for number in numbers:
    testout.write(str(number + 1) + "\r\n")
testout.close
sourceout = open('sourceoutput.txt', 'w')
pathtosource = "/afs/crc.nd.edu/user/d/dduhaime/data/hill/source/*.txt"
for file in glob.glob(pathtosource):
    with open(file, 'r') as openfile:
        readfile = openfile.read()
        souceout.write (str(readfile))
sourceout.close
When I run this code, the testoutput.txt file comes out as expected, but the sourceoutput.txt file is empty. I thought the problem might be solved if I changed the line
pathtosource = "/afs/crc.nd.edu/user/d/dduhaime/data/hill/source/*.txt"
to
pathtosource = "/source/*.txt"
and then ran the code from the /hill directory, but that didn't resolve the problem. Does anyone know how I might read in the text files in the source directory? I would be grateful for any insights.
EDIT: In case it is relevant, the /afs/ directory tree referenced above is located on a remote server that I'm ssh-ing into via PuTTY. I'm also using a test.job file to qsub the Python script above. (This is all in preparation for submitting jobs on the SGE cluster system.) The test.job script looks like this:
#!/bin/csh
#$ -M dduhaime#nd.edu
#$ -m abe
#$ -r y
#$ -o tmp.out
#$ -e tmp.err
module load python/2.7.3
echo "Start - `date`"
python tmp.py
echo "Finish - `date`"
Got it! I had misspelled the output command. I wrote
souceout.write (str(readfile))
instead of
sourceout.write (str(readfile))
What a dunce. I also added a newline bit to the line:
sourceout.write (str(readfile) + "\r\n")
and it works fine. I think it's time for a new IDE!
You haven't really closed the files. The method testout.close() is never called, because you forgot the parentheses. The same goes for sourceout.close():
testout.close
...
sourceout.close
Has to be:
testout.close()
...
sourceout.close()
When the program finishes, all files are closed automatically, so this only matters if you reopen the file while the program is still running.
Even better (the pythonic version) would be to use the with statement. Instead of this:
testout = open('testoutput.txt', 'w')
numbers = [1,2,3]
for number in numbers:
    testout.write(str(number + 1) + "\r\n")
testout.close()
you would write this:
with open('testoutput.txt', 'w') as testout:
    numbers = [1,2,3]
    for number in numbers:
        testout.write(str(number + 1) + "\r\n")
In this case the file will be automatically closed even when an error occurs.
I am trying to daemonize a Python script that currently runs in the foreground. However, I still need to be able to see its output, which it currently dumps to stdout.
So I am using the following piece of code which generates a unique file name in /tmp and then it assigns sys.stdout to this new file. All subsequent calls to 'print' are then redirected to this log file.
import sys
import uuid
outfile = open('/tmp/outfile-' + str(uuid.uuid4()), 'w')
outfile.write("Log file for daemon script...\n")
sys.stdout=outfile
# Rest of script uses print statements to dump information into the /tmp file
.
.
.
The problem I am facing is that when I tail -f the file created in /tmp, I don't see any output. However, once I kill the daemon process, the output becomes visible in the /tmp logfile, because Python flushes the file data on exit.
I want to monitor the /tmp log file in real time, so it would be great if the output could somehow be made visible as it is produced.
One solution I tried was using unbuffered I/O, but that didn't help either.
Try harder to use unbuffered I/O. The problem is almost certainly that your output is buffered.
Opening the file like this should work:
outfile = open(name, 'w', 0)
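Note that the third positional argument here is Python 2's buffering parameter; in Python 3, buffering=0 is only allowed in binary mode, so the closest text-mode equivalent is line buffering:
outfile = open(name, 'w', buffering=1)  # Python 3: flush after every newline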