This code works fine for me; it appends data at the end of the file.
def writeFile(dataFile, nameFile):
    fob = open(nameFile, 'a+')
    fob.write("%s\n" % dataFile)
    fob.close()
But the problem is that when I close the program and run it again later, I find that all the previous data has been lost: writing starts from the beginning again and there is no old data in the file.
During a run, though, it appends each line at the end of the file perfectly.
I can't understand the problem. Can someone please help?
NB: I am using Ubuntu 10.04 with Python 2.6.
There is nothing wrong with the code you posted here... I tend to agree with the other comments that this file is probably being overwritten elsewhere in your code.
The only suggestion I can think of to test this explicitly (if your use case can tolerate it) is to throw in an exit() statement at the end of the function, then open the file externally (e.g. in gedit) and see if the last change took.
As an alternative to the exit, you could run the program in the terminal and include a call to pdb at the end of this function, which would interrupt the program without killing it:
import pdb; pdb.set_trace()
You will then have to hit c to continue the program each time this runs.
If that checks out, do a search for other places this file might be getting opened.
I know how to execute a Python file from within another Python file: exec(open('file.py').read()). But the file I want to run contains a while(True): loop. What I want to do is repeatedly launch that file from a loop, while letting each launched copy keep running in the background.
The Opening Code:
loopCount = 1
maxCount = 100
while loopCount <= maxCount:
    exec(open('whileTrue.py').read())
    loopCount += 1
I would have thought this would work, but instead of starting the file, letting it run in the background, and starting it again, and so on, the code runs the file once and does not start the next copy until the previous one has finished.
Any help with this is greatly appreciated.
Firstly, the exec(...) command is unsafe and really should be avoided. An alternative method would be to import your other Python file as if it were a module.
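For example, assuming the loop in your other file is wrapped in a function (here hypothetically called run; the name and layout are illustrative, not from the question), the exec call can be replaced by a plain import and a function call:

```python
# In practice this function would live in whileTrue.py and you would
# write `import whileTrue` and call `whileTrue.run(...)`.
def run(iterations=3):
    # stand-in for the real loop body; bounded so it terminates
    results = []
    for i in range(iterations):
        results.append(i)
    return results

# Import once, then call as many times as you like:
result = run(3)
print(result)  # [0, 1, 2]
```

This avoids re-reading and re-compiling the source text on every iteration, which is what exec(open(...).read()) does.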
To be able to run the second file multiple times in parallel, you could use the threading module built into Python.
Say we have two files, FileA.py and FileB.py.
FileA.py:
import threading, FileB

for i in range(5):
    currentThread = threading.Thread(target=FileB.Begin)
    currentThread.start()
FileB.py:
import time

def Begin():
    for i in range(10):
        print("Currently on the {} second...".format(i))
        time.sleep(1)
FileA.py will start 5 different threads, each running the Begin() function from FileB.py. You may replace the contents of the Begin() function, rename it, or do whatever you like, but be sure to update the thread's target in FileA.py.
It should be noted that opening too many threads is a bad idea as you may exceed the amount of memory your computer has. As well as that, you should ensure that your threads eventually end and are not a true 'infinite loop', as you may find you have to force your version of FileA.py to close in order to end the threads.
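One way to make sure the threads do end, sketched here with a hypothetical bounded worker in place of FileB.Begin, is to keep references to the threads and join() each one so the main script only exits after every thread has finished:

```python
import threading

def worker(n, results):
    # hypothetical stand-in for FileB.Begin; does a bounded amount of work
    results.append(n * n)

results = []
threads = []
for i in range(5):
    t = threading.Thread(target=worker, args=(i, results))
    threads.append(t)
    t.start()

for t in threads:
    t.join()  # block here until this thread has finished

print(sorted(results))  # [0, 1, 4, 9, 16]
```

join() gives you a well-defined point after which all the work is done, rather than leaving threads running when the main script exits.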
For more detail on the threading module you can see the python documentation: https://docs.python.org/3/library/threading.html
Or you can find many guides on the topic by searching for Python threading on Google.
I hope this helps.
I was testing some code on IDLE for Python which I haven't used in a while and stumbled on an unusual error.
I was attempting to run this simple code:
for i in range(10):
    print(i)
print('Done')
I recall that the shell works on a line by line basis, so here is what I did first:
>>> for i in range(10):
print(i)
print('Done')
This resulted in an indentation error, shown by the picture below:
I tried another way, thinking that perhaps the next statement needed to be at the start of the line, like this:
>>> for i in range(10):
	print(i)
print('Done')
But this gave a syntax error, oddly:
The way IDLE works here is quite odd to me.
Take Note:
I am actually testing a much more complex program and didn't want to create a small Python file for a short test. After all, isn't IDLE's shell meant for short tests anyway?
Why is multi-line coding causing this issue? Thanks.
Just hit return once or twice after the print(i), until you get the >>> prompt again. Then you can type the print('Done'). What's going on is that Python is waiting for you to tell it that you're done working inside that for, and you do that by hitting return.
(You'll see, though, that the for loop is executed right away.)
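Put together, the session should look something like this (a blank line after the loop body ends the block; the loop runs immediately, and print('Done') is entered separately at the next prompt):

```
>>> for i in range(10):
	print(i)

0
1
...
9
>>> print('Done')
Done
```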
I am trying to write some lists into files. My code writes every list except the last one, and I don't know why. Can someone please take a look at my code and tell me what I am doing wrong?
complete_test = [['apple','ball'], ['test','test1'], ['apple','testing']]
counter = 1
for i in complete_test:
    r = open("testing" + str(counter) + ".txt", 'w')
    for j in i:
        r.write(j + '\n')
    counter = counter + 1
Thank you.
You need to call r.close().
This doesn't happen if you run your code as a Python file, but it's reproducible in the interpreter, and it happens for this reason:
All of the changes to a file are buffered, rather than executed immediately. CPython will close files when there are no longer any valid references to them, such as when the only variable referencing them is overwritten on each iteration of your loop. (And when they are closed, all of the buffered changes are flushed—written out.) On the final iteration, you never close the file because the variable r sticks around. You can verify this because calling exit() in the interpreter closes the file and causes the changes to be written.
This is a motivating example for context managers and with statements, as in inspectorG4dget's answer. They handle the opening and closing of the files for you. Use that code rather than actually calling r.close(), and understand that this is what's going on when you do it.
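You can see the buffering directly with a small experiment (using a temporary file; the "0 before close" behaviour assumes CPython's default block buffering for small writes):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'buffered.txt')
f = open(path, 'w')
f.write('hello\n')                   # goes into the buffer, not (yet) the file
size_before = os.path.getsize(path)  # typically 0: nothing flushed yet
f.close()                            # closing flushes the buffer to disk
size_after = os.path.getsize(path)
print(size_before, size_after)       # 0 6
```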
Here's a much cleaner way of doing the same thing:
complete_test = [['apple','ball'], ['test','test1'], ['apple','testing']]
for i, sub in enumerate(complete_test, 1): # `enumerate` gives the index of each element in the list as well
    with open("testing{}.txt".format(i), 'w') as outfile: # no need to worry about opening and closing, if you use `with`
        outfile.write('\n'.join(sub)) # no need to write in a loop
        outfile.write('\n')
Not a duplicate of this question, as I'm working through the python interface to gdb.
This one is similar but does not have an answer.
I'm extending a gdb.breakpoint in python so that it writes certain registers to file, and then jumps to an address: at 0x4021ee, I want to write stuff to file, then jump to 0x4021f3
However, nothing in command is ever getting executed.
import gdb

class DebugPrintingBreakpoint(gdb.Breakpoint):
    def __init__(self, spec, command):
        super(DebugPrintingBreakpoint, self).__init__(spec, gdb.BP_BREAKPOINT, internal=False)
        self.command = command

    def stop(self):
        with open('tracer', 'a') as f:
            f.write(chr(gdb.parse_and_eval("$rbx") ^ 0x71))
        return False

gdb.execute("start")
DebugPrintingBreakpoint("*0x4021ee", "jump *0x4021f3")
gdb.execute("continue")
If I explicitly add gdb.execute(self.command) to the end of stop(), I get Python Exception <class 'gdb.error'> Cannot execute this command while the selected thread is running.:
Anyone have a working example of command lists with breakpoints in python gdb?
A couple options to try:
Use gdb.post_event from stop() to run the desired command later. I believe you'll need to return True from your function then call continue from your event.
Create a normal breakpoint and listen to events.stop to check if your breakpoint was hit.
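A rough sketch of the first option (untested; this only runs inside gdb's Python, and the addresses are the ones from the question):

```python
import gdb

class JumpingBreakpoint(gdb.Breakpoint):
    def __init__(self, spec, command):
        super(JumpingBreakpoint, self).__init__(spec, gdb.BP_BREAKPOINT, internal=False)
        self.command = command

    def stop(self):
        # Defer the command until gdb has finished handling this stop;
        # running it directly from stop() raises the "thread is running" error.
        gdb.post_event(lambda: gdb.execute(self.command))
        return True  # actually stop, so the posted event gets a chance to run

JumpingBreakpoint("*0x4021ee", "jump *0x4021f3")
```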
The Breakpoint.stop method is called when, in gdb terms, the inferior is still "executing". Partly this is a bookkeeping oddity -- of course the inferior isn't really executing, it is stopped while gdb does a bit of breakpoint-related processing. Internally it is more like gdb hasn't yet decided to report the stop to other interested parties inside gdb. This funny state is what lets stop work so nicely vis a vis next and other execution commands.
Some commands in gdb can't be invoked while the inferior is running, like jump, as you've found.
One thing you could try -- I have never tried this and don't know if it would work -- would be to assign to the PC in your stop method. This might do the right thing; but of course you should know that the documentation warns against doing weird stuff like this.
Failing that I think the only approach is to fall back to using commands to attach the jump to the breakpoint. This has the drawback that it will interfere with next.
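For reference, the commands fallback would look something like this at the gdb console (addresses taken from the question):

```
break *0x4021ee
commands
silent
jump *0x4021f3
end
```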
One final way would be to patch the running code to insert a jump or just a sequence of nops.
In my code, I write a file to my hard disk. After that, I need to import the generated file and then continue processing it.
for i in xrange(10):
    filename = generateFile()
    # takes some time; I wish to freeze the program here
    # and continue once the file is ready in the system
    file = importFile(filename)
    processFile(file)
If I run the code snippet in one go, most likely file=importFile(filename) will complain that the file does not exist, since the generation takes some time.
I used to manually run filename=generateFile() and wait before running file=importFile(filename).
Now that I'm using a for loop, I'm searching for an automatic way.
You could use time.sleep, and I would expect that if you are loading a module this way you would need to reload rather than import after the first import.
However, unless the file is very large why not just generate the string and then eval or exec it?
Note that since your file generation function is not being invoked in a thread, it should block and only return when it thinks it has finished writing. You may be able to improve things by ensuring that the file writer ends with outfile.flush() and then outfile.close(), but on some OSs there may still be a window during which the file is not actually available.
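If you control the writer, here is a sketch of a generator that flushes and fsyncs before returning, so the file is on disk by the time the caller sees the filename (generate_file and its arguments are made up for illustration):

```python
import os
import tempfile

def generate_file(path, data):
    # write, flush, and ask the OS to push the data to disk before returning
    with open(path, 'w') as outfile:
        outfile.write(data)
        outfile.flush()
        os.fsync(outfile.fileno())
    return path

p = generate_file(os.path.join(tempfile.mkdtemp(), 'out.txt'), 'payload\n')
content = open(p).read()
print(content)
```

With this ordering, a subsequent importFile(p) in the same process will never see a missing or partially written file.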
import os, time

for i in xrange(10):
    filename = generateFile()
    # wait until the file actually exists on disk
    while not os.path.exists(filename):
        time.sleep(0.5)
    file = importFile(filename)
    processFile(file)
I think you should test that the file has actually been generated before importing it.