I have a text file of 20 lines, each either a 0 or a 2. Right now I have a script rewriting all 20 lines of the file based on ping results. A separate program reads those 20 lines, but it generates errors when there are not 20 lines present (i.e. while the text file is being written). How can I edit each individual text line without rewriting the whole document?
ping ip
if ping == 0:
    f = open("status", 'ab')
    f.write("0\n")
    f.close
That's one of the conditions for how it writes. I do wipe the document before this executes.
If I understand the use of "constantly" in the title correctly, you're trying to pass real-time data here... Programs should not communicate in real time through files; that's neither stable nor fast. If that's not the case, you may want to rewrite the whole file each time, opening it with w (write) instead of a (append):
if ping == 0:
    with open("status", 'w') as f:
        # write all 20 lines in one go ('lines' is your list of "0\n" / "2\n" strings)
        f.writelines(lines)
Read more about modes.
Note: to actually close a file you must call the method, f.close(), not just reference it as f.close. If you use with as a context manager like I suggest, the file is closed automatically once the context is over (when indentation returns to the with level).
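Since every line in your file is the same fixed width (a single 0 or 2 plus a newline), you can also overwrite one line in place with seek instead of rewriting the whole file, which answers your original question directly. A minimal sketch, assuming the file already holds 20 two-byte lines and assuming a hypothetical line_index variable for the line you want to change:

# Overwrite line `line_index` in place; each line is exactly 2 bytes ("0\n").
# Binary mode avoids newline translation, so the byte offsets stay fixed.
with open("status", "rb+") as f:
    f.seek(line_index * 2)   # jump to the start of that line
    f.write(b"2")            # replace the single status character

Note this only works because every line has the same length; variable-length lines would still require rewriting the file.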
Related
My primary goal is to write to one file (e.g. file.txt) from many parallel flows; each flow should start at a defined offset in the file.
Example:
script 1 - writes 10 chars from position 0
script 2 - writes 10 chars from position 10
script 3 - writes 10 chars from position 20
I didn't even get to the parallelism because I got stuck on writing to different offsets of a file.
I have created a simple script to check my idea:
file = open("sample_file.txt", "w")
file.seek(100)
file.write("new line")
file.close()
OK, so the file was created, the offset was moved to 100, and the string 'new line' was added. Success.
But then I wanted to open the same file and add something at offset 10:
file = open("sample_file.txt", "w")
file.seek(100)
file.write("new line")
file.close()
file = open("sample_file.txt", "a")
file.seek(10)
file.write("second line")
file.close()
And the string 'second line' is added, but at the end of the file.
I'm sure it is possible to add chars somewhere in the middle of a file.
Can anyone help with this simple one?
Or maybe someone has an idea how to do it in parallel?
Pawel
As this post suggests, opening a file in 'a' mode will:
Open for writing. The file is created if it does not exist. The
stream is positioned at the end of the file. Subsequent writes
to the file will always end up at the then current end of file,
irrespective of any intervening fseek(3) or similar.
On the other hand, the mode 'r+' will let you:
Open for reading and writing. The stream is positioned at the
beginning of the file.
And though not mentioned explicitly, this will let you seek within the file and write at different positions.
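A minimal sketch of that idea, assuming sample_file.txt already exists and is long enough that offset 10 falls inside it:

# r+ opens for reading and writing without truncating, so seek works as expected
with open("sample_file.txt", "r+") as f:
    f.seek(10)               # move to byte offset 10
    f.write("second line")   # overwrites the bytes at that position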
Anyway, if you are going to do this in parallel, you will have to control access to the shared resource: you don't want two processes writing to the file at the same time. Regarding that issue, see this SO question.
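A hedged sketch of how the parallel part might look, using a multiprocessing.Lock so only one worker touches the file at a time (the file name, offsets, and worker count here are just this example's assumptions, not part of your setup):

import multiprocessing

FILENAME = "sample_file.txt"

def write_at(lock, offset, text):
    # One writer at a time; r+ lets us seek without truncating.
    with lock:
        with open(FILENAME, "r+") as f:
            f.seek(offset)
            f.write(text)

if __name__ == "__main__":
    # Pre-size the file so every offset already exists.
    with open(FILENAME, "w") as f:
        f.write(" " * 30)
    lock = multiprocessing.Lock()
    workers = [
        multiprocessing.Process(target=write_at, args=(lock, i * 10, str(i) * 10))
        for i in range(3)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()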
I have a simple program that processes some lines in a text file (adds some text to them) and then saves them to another file. I would like to know if you can remove each line after it is processed in the loop. Here is an example of how my program works:
datafile = open("data.txt", "a+")
donefile = open("done.txt", "a+")
for i in datafile:
    # my program goes in here
    donefile.write(processeddata)
# end of loop
datafile.close()
donefile.close()
As you can see, it just processes some lines from a file (separated by newlines). Is there a way to remove each line at the end of the loop so that when the program is closed it can continue where it left off?
Just so that I get the question right: you'd like to remove the line from datafile once you've processed it and stored it in donefile?
There is no need to do this, and it's also pretty risky to write to a file that is your source of reads.
Instead, why not delete the datafile after you exit the loop (i.e. after you close your files)?
A file iterator is lazy: when you do for i in datafile, it loads one line into memory at a time, so you are only ever working with that one line. Memory constraints shouldn't be a concern.
Lastly, to access files, please consider using the with statement. It takes care of closing the file handle even when exceptions occur, and makes your program more robust.
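A minimal sketch of that structure, assuming the processing produces one output line per input line; the resume logic skips however many lines are already in done.txt, and the "[processed]" suffix is just a placeholder for your real work:

import os

# Count lines already processed on a previous run (if any).
done_count = 0
if os.path.exists("done.txt"):
    with open("done.txt") as donefile:
        done_count = sum(1 for _ in donefile)

with open("data.txt") as datafile, open("done.txt", "a") as donefile:
    for lineno, line in enumerate(datafile):
        if lineno < done_count:
            continue  # already handled before the interruption
        processeddata = line.rstrip("\n") + " [processed]\n"  # placeholder
        donefile.write(processeddata)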
I am attempting to output a new txt file, but it comes up blank. I am doing this:
my_file = open("something.txt", "w")
# and then
my_file.write("hello")
Right after this line the interpreter just prints 5, and then no text shows up in the file.
What am I doing wrong?
The write isn't flushed until you close the file. If I open an interpreter and then enter:
my_file = open('something.txt', 'w')
my_file.write('hello')
and then open the file in a text program, there is no text.
If I then issue:
my_file.close()
Voila! Text!
If you just want to flush once and keep writing, you can do that too:
my_file.flush()
my_file.write('\nhello again') # file still says 'hello'
my_file.flush() # now it says 'hello again' on the next line
By the way, if you read the beautiful, wonderful documentation for file.write, which is only two lines long, you'll have your answer (emphasis mine):
Write a string to the file. There is no return value. Due to buffering, the string may not actually show up in the file until the flush() or close() method is called.
If you don't want to worry about closing the file, use with:
with open("something.txt", "w") as f:
    f.write('hello')
Then Python will take care of closing the file for you automatically.
As Two-Bit Alchemist pointed out, the file has to be closed. The Python file writer uses a buffer (BufferedIOBase, I think), meaning it collects a certain number of bytes before writing them to disk in bulk. This is done to save overhead when many write operations are performed on a single file.
Also: When working with files, try using a with-environment to make sure your file is closed after you are done writing/reading:
with open("somefile.txt", "w") as myfile:
myfile.write("42")
# when you reach this point, i.e. leave the with-environment,
# the file is closed automatically.
Your code is also OK. Just add a statement to close the file, and it will work correctly:
my_file = open("fin.txt","w")
#and then
my_file.write("hello")
my_file.close()
Noob question here. I'm scheduling a cron job to run a Python script every 2 hours, but I want the script to stop running after 48 hours, which is not a feature of cron. To work around this, I'm recording the number of executions at the end of the script in a text file, one tally mark x per run, and reading the text file at the beginning of the script so that it only runs if the count is less than n.
However, my script seems to always run regardless of the conditions. Here's an example of what I've tried:
with open("curl-output.txt", "a+") as myfile:
data = myfile.read()
finalrun = "xxxxx"
if data != finalrun:
[CURL CODE]
with open("curl-output.txt", "a") as text_file:
text_file.write("x")
text_file.close()
I think I'm missing something simple here. Please advise if there is a better way of achieving this. Thanks in advance.
The problem with your original code is that you're opening the file in a+ mode, which seems to set the seek position to the end of the file (try print(data) right after you read the file). If you use r instead, it works. (I'm not sure that's how it's supposed to be. This answer states it should write at the end, but read from the beginning. The documentation isn't terribly clear).
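If you do want to keep a+ mode (for instance, so the file is created on the first run), you can seek back to the start before reading; a short sketch:

with open("curl-output.txt", "a+") as myfile:
    myfile.seek(0)       # a+ positions the stream at the end; rewind first
    data = myfile.read()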
Some suggestions: Instead of comparing against the "xxxxx" string, you could just check the length of the data (if len(data) < 5). Or alternatively, as was suggested, use pickle to store a number, which might look like this:
import pickle

try:
    with open("curl-output.txt", "rb") as myfile:
        num = pickle.load(myfile)
except FileNotFoundError:
    num = 0

if num < 5:
    do_curl_stuff()
    num += 1
    with open("curl-output.txt", "wb") as myfile:
        pickle.dump(num, myfile)
Two more things concerning your original code: You're making the first with block bigger than it needs to be. Once you've read the string into data, you don't need the file object anymore, so you can remove one level of indentation from everything except data = myfile.read().
Also, you don't need to close text_file manually. with will do that for you (that's the point).
This sounds more like a job for scheduling with the at command.
See http://www.ibm.com/developerworks/library/l-job-scheduling/ for different job scheduling mechanisms.
The first bug that is immediately obvious to me is that you are appending to the file even if data == finalrun. So when data == finalrun, you don't run curl but you do append another 'x' to the file. On the next run, data will be not equal to finalrun again so it will continue to execute the curl code.
The solution is of course to nest the code that appends to the file under the if statement.
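A sketch of that fix, reading with r so the read starts at the beginning and only appending an x when the curl code actually ran; the os.path.exists guard covers the very first run, when the file doesn't exist yet:

import os

data = ""
if os.path.exists("curl-output.txt"):
    with open("curl-output.txt") as myfile:
        data = myfile.read()

if data != "xxxxx":
    # [CURL CODE] goes here
    with open("curl-output.txt", "a") as text_file:
        text_file.write("x")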
There is probably a trailing end-of-line \n character, which means your file contains something like xx\n and not simply xx. That is probably why your condition does not work :)
EDIT
If, at the Python command line, you type
open('filename.txt', 'r').read() # where filename is the name of your file
you will be able to see whether there is a \n or not.
Try using this condition in the if clause instead:
if data.count('x') == 24:
The data string may contain extraneous characters like newlines. Check repr(data) to see whether it is actually 24 x's.
I am trying to create a program which takes an input file, counts the number of words in each row, and writes that count as a string to another output file. I managed to develop this code:
in_file = "our_input.txt"
out_file = "output.txt"
f=open(in_file)
g=open(out_file,"w")
for line in f:
if line == "\n":
g.write("0\n")
else:
g.write(str(line.count(" ")+1)+"\n")
Now, this works well, but the problem is that it only works for a certain number of lines. If my input file has 8000 lines, it will display only the first 6800 or so; if it has 6000, the output is cut short in the same way (all numbers are rounded).
I tried creating another program which splits each line into a list and then counts its length, but the problem remains just the same.
Any idea what could cause this?
You need to close each file after you're done with it. The safest way to do this is by using the with statement:
with open(in_file) as f, open(out_file, "w") as g:
    for line in f:
        if line == "\n":
            g.write("0\n")
        else:
            g.write(str(line.count(" ") + 1) + "\n")
When reaching the end of a with block, all files you opened in the with line will be closed.
The reason for the behavior you see is that for performance reasons, reading and writing to/from files is buffered. Because of the way hard drives are constructed, data is read/written in blocks rather than in individual bytes - so even if you attempt to read/write a single byte, you have to read/write an entire block. Therefore, most programming languages' built-in file IO functions actually read (at least) one block at a time into memory and feed you data from that in-memory block until it needs to read another block. Similarly, writing is performed by actually writing into a memory block first, and only writing the block to disk when it is full. If you don't close the file writer, whatever is in the last in-memory block won't be written.