I'm very new to Python (and coding in general, if I'm honest) and decided to learn by dipping into the Twitter API to make a weird Twitterbot that scrambles the words in a tweet and reposts them, _ebooks style.
Anyway, the way I have it currently set up, it pulls the latest tweet and then compares it to a .txt file with the previous tweet. If the tweet and the .txt file match (i.e., not a new tweet), it does nothing. If they don't, it replaces the .txt file with the current tweet, then scrambles and posts it. I feel like there's got to be a better way to do this than what I'm doing. Here's the relevant code:
words = hank[0]['text']
target = open("hank.txt", "r")
if words == "STOP":
    print "Sam says stop :'("
    return
else:
    if words == target.read():
        print "Nothing New."
    else:
        target.close()
        target = open("hank.txt", "w")
        target.write(words)
        target.close()
Obviously, opening as 'r' just to check it against the tweet, closing, and re-opening as 'w' is not very efficient. However, if I open as 'w+' it deletes all the contents of the file when I read it, and if I open it as 'r+', it adds the new tweet either to the beginning or the end of the file (dependent on where I set the pointer, obviously). I am 100% sure I am missing something TOTALLY obvious, but after hours of googling and dredging through Python documentation, I haven't found anything simpler. Any help would be more than welcome haha. :)
with open(filename, "r+") as f:
    data = f.read()  # read the existing data
    # ...any comparison of tweets etc.
    f.seek(0)        # move the pointer back to the start of the file
    f.truncate()     # clear the file from this point on
    f.write("most recent tweet")  # write the new tweet
Note that you need to seek(0) before truncate(): after read() the pointer is at the end of the file, so truncating there would do nothing. There's no need to close the file explicitly; the with statement closes it automatically.
Just read the Python docs on these methods for a clearer picture.
If you're really focused on efficiency, I suggest you use a generator (yield) to compare hank.txt and words line by line, so that less memory is used.
As for the file operation, I don't think there's a better way to overwrite a file. If you're on Linux, maybe 'cat > hank.txt' could be faster, but that's just a guess.
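A rough sketch of that generator idea, comparing the saved tweet against the new one line by line so the whole file never has to sit in memory (hank.txt and the words string are taken from the question; the helper names are mine):

```python
def file_lines(path):
    # Lazily yield stripped lines from a file, one at a time.
    with open(path) as f:
        for line in f:
            yield line.rstrip("\n")

def same_as_saved(words, path="hank.txt"):
    # Compare the new tweet against the saved copy line by line.
    saved = file_lines(path)
    for new_line in words.splitlines():
        try:
            if next(saved) != new_line:
                return False
        except StopIteration:
            return False  # saved copy is shorter than the new tweet
    # Equal only if the saved copy has no extra lines left over.
    return next(saved, None) is None
```

For a single tweet the saving is tiny, of course; this pattern only really pays off on large files.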
I appreciate that this may be an issue with my computer/software, but I want to double check that my code isn't causing the problem before ruling it out.
I have written a fairly simple program. I have a short list of strings read in from a text file, then with a different text file open, I iterate over each word in the second text file, checking if the first two letters of the word are contained in the first list of strings.
Then, if that condition is fulfilled, I use string interpolation to insert that word into a string of HTML code. Finally, I append that string to an existing empty .html. When finished iterating through, I close the html file.
with open("strings.txt", "r") as f:
    strings = f.read().splitlines()

urlfile = open("links.html", "a")

with open("words.txt", "r") as f:
    text = f.read().splitlines()

for word in text:
    if word[:2] in strings:
        html = '<a href="[URL]/{}">'.format(word)
        urlfile.write(html)

urlfile.close()
So far there don't actually seem to be any issues with my code doing what I want - I am generating the right HTML code, and if I print it to the console it does so quickly. It is being appended to the html file.
The problem I have is that something I am doing must be computationally expensive or problematic, because Notepad++ freezes every time I try to check links.html for the results. I have managed to see that it looks correct, but Notepad++ then becomes unusable, and my computer is clearly straining. The only solution I have is to close anything related to the html file.
None of the lists used are long and all the operations should in theory be quite simple, so I feel as though I must be doing something wrong. Am I writing to files in an unsafe way? Am I doing something wildly expensive that I'm just missing? I am using Notepad++ v7.9.5, Python 3, and Anaconda prompt.
EDIT: I am now able to access the html file on my browser and on Notepad++ without issue. I think the source of the problem was some laptop software updating in the background without me noticing. I'll check that first next time!
I am attempting to output a new txt file, but it comes up blank. I am doing this:
my_file = open("something.txt","w")
#and then
my_file.write("hello")
Right after this line the interpreter just says 5, and then no text shows up in the file.
What am I doing wrong?
The write isn't flushed to the file until you close it. If I open an interpreter and then enter:
my_file = open('something.txt', 'w')
my_file.write('hello')
and then open the file in a text program, there is no text.
If I then issue:
my_file.close()
Voila! Text!
If you just want to flush once and keep writing, you can do that too:
my_file.flush()
my_file.write('\nhello again') # file still says 'hello'
my_file.flush() # now it says 'hello again' on the next line
By the way, if you happen to read the beautiful, wonderful documentation for file.write, which is only 2 lines long, you would have your answer (emphasis mine):
Write a string to the file. There is no return value. Due to buffering, the string may not actually show up in the file until the flush() or close() method is called.
If you don't want to care about closing file, use with:
with open("something.txt", "w") as f:
    f.write('hello')
Then python will take care of closing the file for you automatically.
As Two-Bit Alchemist pointed out, the file has to be closed. The python file writer uses a buffer (BufferedIOBase I think), meaning it collects a certain number of bytes before writing them to disk in bulk. This is done to save overhead when a lot of write operations are performed on a single file.
Also: When working with files, try using a with-environment to make sure your file is closed after you are done writing/reading:
with open("somefile.txt", "w") as myfile:
    myfile.write("42")
# when you reach this point, i.e. leave the with-block,
# the file is closed automatically.
Your code is also okay; just add a statement to close the file and it will work correctly:
my_file = open("fin.txt","w")
#and then
my_file.write("hello")
my_file.close()
Noob question here. I'm scheduling a cron job for a Python script for every 2 hours, but I want the script to stop running after 48 hours, which is not a feature of cron. To work around this, I'm recording the number of executions at the end of the script in a text file using a tally mark x and opening the text file at the beginning of the script to only run if the count is less than n.
However, my script seems to always run regardless of the conditions. Here's an example of what I've tried:
with open("curl-output.txt", "a+") as myfile:
    data = myfile.read()
    finalrun = "xxxxx"
    if data != finalrun:
        [CURL CODE]
with open("curl-output.txt", "a") as text_file:
    text_file.write("x")
    text_file.close()
I think I'm missing something simple here. Please advise if there is a better way of achieving this. Thanks in advance.
The problem with your original code is that you're opening the file in a+ mode, which seems to set the seek position to the end of the file (try print(data) right after you read the file). If you use r instead, it works. (I'm not sure that's how it's supposed to be. This answer states it should write at the end, but read from the beginning. The documentation isn't terribly clear).
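You can see that behaviour in a quick experiment (the file name here is just an example):

```python
# Write a small file to test against.
with open("demo.txt", "w") as f:
    f.write("xxx")

# In "a+" mode the stream starts positioned at the end of the file
# (at least on CPython), so read() returns an empty string:
with open("demo.txt", "a+") as f:
    end_read = f.read()

# In "r" mode the position starts at 0, so the content comes back:
with open("demo.txt", "r") as f:
    start_read = f.read()

print(repr(end_read))    # → ''
print(repr(start_read))  # → 'xxx'
```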
Some suggestions: Instead of comparing against the "xxxxx" string, you could just check the length of the data (if len(data) < 5). Or alternatively, as was suggested, use pickle to store a number, which might look like this:
import pickle

try:
    with open("curl-output.txt", "rb") as myfile:
        num = pickle.load(myfile)
except FileNotFoundError:
    num = 0

if num < 5:
    do_curl_stuff()
    num += 1
    with open("curl-output.txt", "wb") as myfile:
        pickle.dump(num, myfile)
Two more things concerning your original code: You're making the first with block bigger than it needs to be. Once you've read the string into data, you don't need the file object anymore, so you can remove one level of indentation from everything except data = myfile.read().
Also, you don't need to close text_file manually. with will do that for you (that's the point).
Sounds more like a job for scheduling with the at command?
See http://www.ibm.com/developerworks/library/l-job-scheduling/ for different job scheduling mechanisms.
The first bug that is immediately obvious to me is that you are appending to the file even if data == finalrun. So when data == finalrun, you don't run curl but you do append another 'x' to the file. On the next run, data will be not equal to finalrun again so it will continue to execute the curl code.
The solution is of course to nest the code that appends to the file under the if statement.
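Putting both fixes together (read in "r" mode, and append the tally mark only inside the if), the corrected logic might look like this; run_curl is a placeholder for the [CURL CODE] from the question:

```python
def run_curl():
    # Placeholder for the actual curl/request code from the question.
    pass

try:
    # "r", not "a+", so read() starts at the beginning of the file.
    with open("curl-output.txt", "r") as myfile:
        data = myfile.read()
except FileNotFoundError:
    data = ""  # first run: no tally file exists yet

finalrun = "xxxxx"
if data != finalrun:
    run_curl()
    # Append the tally mark only when the curl code actually ran:
    with open("curl-output.txt", "a") as text_file:
        text_file.write("x")
```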
There is probably a trailing newline (\n) character, which means your file contains something like xx\n rather than simply xx. That is probably why your condition doesn't work. :)
EDIT
If you type the following at the Python command line:
open('filename.txt', 'r').read()  # where filename.txt is the name of your file
you will be able to see whether there is a \n or not.
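If there does turn out to be a trailing newline, stripping it before the comparison is one simple fix (the file name is taken from the question; the content here is just a demonstration):

```python
# Suppose the file was written with a trailing newline:
with open("curl-output.txt", "w") as f:
    f.write("xxxxx\n")

with open("curl-output.txt") as f:
    raw = f.read()
    data = raw.strip()  # drop the trailing '\n' and any stray whitespace

print(repr(raw))   # → 'xxxxx\n'
print(repr(data))  # → 'xxxxx' — the comparison now works
```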
Try using this condition in your if clause instead:
if data.count('x') == 24:
The data string may contain extraneous characters such as newlines. Check repr(data) to see whether it is actually 24 x's.
The general gist of this question: if there is even a remote possibility that something could go wrong, should I catch the possible error? Specifically:
I have an app that reads and writes the previous history of the program to a .txt file. Upon initialization, the program reads the history file to determine what operations it should and should not do. If no history file yet exists, it creates one. Like so:
global trackList
try:
    # Open history of downloaded MP3s and transfer it to trackList
    with open('trackData.txt', 'r') as f:
        trackList = f.readlines()
except Exception, e:  # if file does not exist, create a blank new one
    with open('trackData.txt', 'w') as f:
        f.write("")
The program then proceeds to download MP3s based on whether or not they were in the txt file. Once it has downloaded an MP3, it adds it to the txt file. Like so:
mp3File = requests.get(linkURL)
with open('trackData.txt', 'a') as f:
    f.write(linkURL + '\n')
Now, it is almost 100% certain that the txt file will still exist from the time it was created in the first function. We're dealing with downloading a few MP3s here -- the program will never run for more than a few minutes. However, there is the remote possibility that the history txt file will have been deleted by the user or otherwise corrupted while the MP3 was being downloaded, in which case the program will crash because there is no error handling.
Would a good programmer wrap that last block of code in a try ... except block that creates the history txt file if it does not exist, or is that just needless paranoia and wasted space? It's trivial to implement, but keep in mind that I have programs like this where there are literally hundreds of opportunities for users to delete/corrupt a previously created txt file in a tiny window of time. My normally flat Python code would turn into a nested try ... except minefield.
A safer solution would be to open the file and keep it open while you are still downloading. The user will then not be able to delete it. After everything is downloaded and logged, close the file. This will also result in better performance.
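A sketch of that approach; download_mp3 and the URL list are placeholders, not code from the question, and the "can't delete an open file" guarantee holds on Windows but not on Unix-like systems:

```python
track_urls = ["http://example.com/a.mp3", "http://example.com/b.mp3"]

def download_mp3(url):
    # Placeholder for the requests.get(...) download from the question.
    pass

# Hold the history file open for the whole run; on Windows this also
# stops the user from deleting it mid-download.
with open("trackData.txt", "a") as history:
    for url in track_urls:
        download_mp3(url)
        history.write(url + "\n")
# The file is closed (and fully flushed) only once, here.
```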
Why are you creating an empty file on application startup? Just do nothing if the file isn't present on startup - open('trackData.txt', 'a') will still create a new file.
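In other words, on startup you could simply treat a missing history file as an empty history (trackData.txt as in the question):

```python
import os

if os.path.exists("trackData.txt"):
    with open("trackData.txt") as f:
        trackList = f.readlines()
else:
    # No history yet; the later open('trackData.txt', 'a')
    # will create the file when the first MP3 is logged.
    trackList = []
```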
I am a Python beginner and my next project is a program in which you enter the details of your program and then select the file (I'm using Tkinter), and then the program will format the details and write them to the start of the file.
I know that you'd have to 'rewrite' it and that a temp file is probably in order. I just want to know simple ways that one could achieve adding text to the beginning of a file.
Thanks.
To add text to the beginning of a file, you can (1) open the file for reading, (2) read the file, (3) open the file for writing and overwrite it with (your text + the original file text).
formatted_text_to_add = 'Sample text'

with open('userfile', 'r') as filename:
    filetext = filename.read()

newfiletext = formatted_text_to_add + '\n' + filetext

with open('userfile', 'w') as filename:
    filename.write(newfiletext)
This requires two I/O operations and I'm tempted to look for a way to do it in one pass. However, prior answers to similar questions suggest that trying to write to the beginning or middle of a file in Python gets complicated quite quickly unless you bite the bullet and overwrite the original file with the new text.
If I understand what you're asking, I believe you're looking for what's called a project skeleton. This link handles it pretty well.
This probably won't solve your exact problem, as you will need to know in advance the exact number of bytes you'll be adding to the beginning of the file.
# Put some text in the file
f = open("tmp.txt", "w")
print >>f, "123456789"
f.close()
# Open the file in read/write mode
f = open("tmp.txt", "r+")
f.seek(0) # reposition the file pointer to the beginning of the file
f.write('abc') # use write to avoid writing new lines
f.close()
When you reposition the file pointer using seek, you can overwrite the bytes that are already stored at that position. You can't, however, "insert" text, pushing existing bytes ahead to make room for new data. When I said you would need to know the exact number of bytes,
I meant you would have to "leave room" for the text at the beginning of the file. Something like:
f = open("tmp.txt", "w")
f.write("\x00\x00\x00456789")  # three NUL bytes reserved at the start
f.close()
# Some time later...
f = open("tmp.txt", "r+")
f.seek(0)
f.write('123')
f.close()
For text files, this can work if you leave a "blank" line of, say, 50 spaces at the beginning of the file. Later, you can go back and overwrite up to 50 bytes (the newline being byte 51)
without overwriting following lines. Of course, you can leave multiple lines at the beginning. The point is that you can't grow or shrink your reserved block of lines to be overwritten. There's nothing special about the newline in a file, other than that it is treated specially by file methods like read and readline for splitting blocks of data into separate strings.
To add one or more lines of text to the beginning of a file, without overwriting what's already present, you'll have to use the "read the old file, write to a new file" solution outlined in other answers.
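That read-old/write-new approach can be sketched with a temporary file and an atomic replace (the file names and helper are examples, not code from the question):

```python
import os
import tempfile

def prepend(path, text):
    # Write the new header plus the old contents to a temp file in the
    # same directory, then atomically swap it into place.
    dir_name = os.path.dirname(os.path.abspath(path))
    with open(path, "r") as old, \
         tempfile.NamedTemporaryFile("w", dir=dir_name, delete=False) as tmp:
        tmp.write(text + "\n")
        for line in old:           # stream line by line; no full read into memory
            tmp.write(line)
    os.replace(tmp.name, path)     # atomic on both POSIX and Windows

# Example usage:
with open("notes.txt", "w") as f:
    f.write("body line\n")
prepend("notes.txt", "HEADER")
```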