I have a Python script that I want to increment a global variable every time it is run. Is this possible?
This is pretty easy to do with an external file. You can wrap it in a function, so you can use multiple files for multiple variables if needed, although in that case you might want to look into some sort of serialization and store everything in the same file. Here's a simple way to do it:
def get_var_value(filename="varstore.dat"):
    with open(filename, "a+") as f:
        f.seek(0)
        val = int(f.read() or 0) + 1
        f.seek(0)
        f.truncate()
        f.write(str(val))
        return val

your_counter = get_var_value()
print("This script has been run {} times.".format(your_counter))
# This script has been run 1 times
# This script has been run 2 times
# etc.
It will store in varstore.dat by default, but you can use get_var_value("different_store.dat") for a different counter file.
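If you do go the serialization route mentioned above, here is a sketch of keeping several counters in a single JSON file. The function name bump_counter and the counters.json filename are inventions for this example, not part of the answer above:

```python
import json
import os

def bump_counter(name, filename="counters.json"):
    # Load any existing counters, or start fresh if the file doesn't exist yet.
    counters = {}
    if os.path.exists(filename):
        with open(filename) as f:
            counters = json.load(f)
    counters[name] = counters.get(name, 0) + 1
    with open(filename, "w") as f:
        json.dump(counters, f)
    return counters[name]

print(bump_counter("runs"))    # 1 on a fresh file
print(bump_counter("errors"))  # an independent counter in the same file
```

Each named counter increments independently while sharing one store file.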
Example:
import os

if not os.path.exists('log.txt'):
    with open('log.txt', 'w') as f:
        f.write('0')

with open('log.txt', 'r') as f:
    st = int(f.read())

st += 1

with open('log.txt', 'w') as f:
    f.write(str(st))
Each time you run your script, the value inside log.txt will increment by one. You can make use of it if you need to.
Yes, you need to store the value in a file and load it back when the program runs again. This is called program state serialization or persistence.
For a code example:
with open("store.txt", 'r') as f:  # open a file in the same folder
    a = f.readlines()  # read from file into variable a

# use the data read
b = int(a[0])  # get the integer on the first line
b = b + 1  # increment

with open("store.txt", 'w') as f:  # open the same file
    f.write(str(b))  # write b back, assuming it has been changed
Note that a will be a list when using readlines(), with one string per line.
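A quick demonstration of that behavior (store_demo.txt is just a scratch file for this example):

```python
# Write a one-line store file, then read it back the way the answer does.
with open("store_demo.txt", "w") as f:
    f.write("41")

with open("store_demo.txt", "r") as f:
    a = f.readlines()

print(repr(a))         # ['41'] -- a list with one string
print(int(a[0]) + 1)   # 42
```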
Related
I am using Python to develop a basic calculator. My goal is to create an output file of results for a series. I've tried looking for the correct command but haven't had any luck. Below is my current code. I believe all I need is to finish the "def saveSeries(x):" function, but I am not sure how. I have already created the input file in the respective folder.
import os
import sys
import math
import numpy
.
.
.
.
def saveSeries(x):

def readSeries():
    global x
    x = []
    input file = open("input.txt", "r")
    for line in inputFile:
        x.append(float(line))
I believe this question is asking about the saveSeries function. I assume the file structure you want is the following:
1
2
3
4
5
Your solution is very close, but you'll want to iterate over your number list and write a number for each line.
def saveSeries(x):
    outfile = open('output.txt', 'w')
    for i in x:
        outfile.write(str(i) + "\n")  # \n will create a new line
    outfile.close()
I also noticed your readSeries() was incorrect (input file is not a valid variable name). Try this instead, and call .readlines():
def readSeries():
    global x
    x = []
    inputFile = open("input.txt", "r")
    for line in inputFile.readlines():
        x.append(float(line))
    inputFile.close()
There are lots of ways you could go about accomplishing your goal, so you have to decide how you want to implement it. Do you want to write/update your file every time your user performs a new calculation? Do you want to have a save button, that would save all of the calculations performed?
One simple example:
# Create the file ('w' creates it if missing, truncates it if present)
myFile = open('myCalculations.txt', 'w')
myFile.write("I should insert a function that returns my desired output and format")
myFile.close()
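If you instead want every calculation written to the file as it happens, here is a sketch of the append-on-each-calculation option. The save_result helper and its output format are inventions for this example:

```python
def save_result(expression, result, filename="myCalculations.txt"):
    # 'a' appends, so earlier results survive; the file is created if missing.
    with open(filename, "a") as f:
        f.write("{} = {}\n".format(expression, result))

save_result("2 + 2", 4)
save_result("10 / 4", 2.5)
# myCalculations.txt now holds one line per calculation.
```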
I'm writing to a file from three functions and I'm trying not to overwrite the file. I want a new file to be generated every time I run the code.
with open("atx.csv", 'w') as output:
    writer = csv.writer(output)
If you want to write to different files each time you execute the script, you need to change the file names, otherwise they will be overwritten.
import os
import csv

filename = "atx"
i = 0
while os.path.exists(f"{filename}{i}.csv"):
    i += 1

with open(f"{filename}{i}.csv", 'w') as output:
    writer = csv.writer(output)
    writer.writerow([1, 2, 3])  # or whatever you want to write in the file, this line is just an example
Here I use os.path.exists() to check if a file is already present on the disk, and increment the counter.
The first time you run the script you get atx0.csv, the second time atx1.csv, and so on.
Replicate this for your three files.
EDIT
Also note that here I'm using formatted string literals, which are available since Python 3.6. If you have an earlier version of Python, use "{}{:d}.csv".format(filename, i) instead of f"{filename}{i}.csv".
EDIT bis after comments
If the same file needs to be manipulated by more than one function during the execution of the script, the easiest approach that comes to mind is to open the writer outside the functions and pass it as an argument.
filename = "atx"
i = 0
while os.path.exists(f"{filename}{i}.csv"):
    i += 1

with open(f"{filename}{i}.csv", 'w') as output:
    writer = csv.writer(output)
    foo(writer, ...)  # put the arguments of your function instead of ...
    bar(writer, ...)
    etc(writer, ...)
This way each time you call one of the functions it writes to the same file, appending the output at the bottom of the file.
Of course there are other ways. You might check for the file name's existence only in the first function you call, and in the others just open the file with the 'a' option to append the output.
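Here is a sketch of that append-based variant. The foo and bar functions are stand-ins for your three functions, and next_free_name is a hypothetical helper, not part of the answer above:

```python
import csv
import os

def next_free_name(base="atx", ext=".csv"):
    # Hypothetical helper: find the first atx<i>.csv that doesn't exist yet.
    i = 0
    while os.path.exists("{}{}{}".format(base, i, ext)):
        i += 1
    return "{}{}{}".format(base, i, ext)

filename = next_free_name()

def foo(name):
    with open(name, "a", newline="") as output:  # 'a' so later calls append
        csv.writer(output).writerow([1, 2, 3])

def bar(name):
    with open(name, "a", newline="") as output:
        csv.writer(output).writerow([4, 5, 6])

foo(filename)
bar(filename)  # both rows end up in the same freshly numbered file
```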
You can do something like this so that each file gets a slightly different name and therefore will not be overwritten:
for v in ['1', '2', '3']:
    with open("atx_{}.csv".format(v), 'w') as output:
        writer = csv.writer(output)
You are using just one filename. When you reuse the same name atx.csv, you will either overwrite it with w or append to it with a.
If you want new files to be created, just check first whether the file is already there.
import os

files = os.listdir()
files = [f for f in files if 'atx' in f]
num = str(len(files)) if len(files) > 0 else ''
filename = "atx{0}.csv".format(num)

with open(filename, 'w') as output:
    writer = csv.writer(output)
Change with open("atx.csv", 'w') to with open("atx.csv", 'a')
https://www.guru99.com/reading-and-writing-files-in-python.html#2
I'm trying to write this code so that once it ends, the previous answers are saved and the code can be run again without the old answers being overwritten. I moved the how, why and scale_of_ten variables into the 'with open' section, and the code worked each time the file was executed, but each time the old answers were overwritten. How would I write the code so that it saves the old answers while taking in new ones?
import csv
import datetime
# imports modules

now = datetime.datetime.now()
# define current time when file is executed

how = str(raw_input("How are you doing?"))
why = str(raw_input("Why do you feel that way?"))
scale_of_ten = int(raw_input("On a scale of 1-10. With 10 being happy and 1 being sad. How happy are you?"))
# creates variables for csv file

x = [now.strftime("%Y-%m-%d %H:%M"), how, why, scale_of_ten]
# creates list for variables to be written in

with open('happy.csv', 'wb') as f:
    wtr = csv.writer(f)
    wtr.writerow(x)
    wtr.writerow(x)
    f.close()
With w mode, open truncates the file. Use a (append) mode.
....
with open('happy.csv', 'ab') as f:
#                       ^
....
BTW, f.close() is not needed if you use the with statement.
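For completeness, a Python 3 sketch of the same fix: Python 3 uses input() instead of raw_input(), and csv files are opened in text mode with newline='' rather than binary 'ab'. The hard-coded answers below stand in for the interactive prompts:

```python
import csv
import datetime

now = datetime.datetime.now()
# Sample answers standing in for the input() prompts.
how, why, scale_of_ten = "fine", "sunny", 8

row = [now.strftime("%Y-%m-%d %H:%M"), how, why, scale_of_ten]

# 'a' appends, so old answers are kept; newline='' is the csv module's
# recommended way to open files in Python 3.
with open("happy.csv", "a", newline="") as f:
    csv.writer(f).writerow(row)
```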
I have a text file that is populated depending on a radio button selected by a user in PHP.
I am reading from the same text file; whenever I find that a new line has been added, the Python script prints it. For now I am just printing it. Eventually I will send out a message on a GSM modem based on the new records in the .txt file. Right now I'm just printing to make sure I can identify a new record.
Here's the code I'm using. It works just fine.
def main():
    flag = 0
    count = 0
    while flag == 0:
        f = open('test.txt', 'r')
        with open('test.txt') as f:
            for i, l in enumerate(f):
                pass
            nol = i + 1
        if count < nol:
            while count <= nol:
                f = open('test.txt', 'r')
                a = f.readlines(count)
                if count == 0:
                    print a[0]
                    count = count + 2
                else:
                    print a[count-1]
                    count = count + 1

if __name__ == '__main__':
    main()
I was wondering if there is a better way to do this.
Also, PHP will keep writing to the file, and Python will keep reading from it. Will this cause a clash, since multiple instances are open?
According to this answer you can use the watchdog package to watch for changes in the file. Alternatively, you can build a custom solution using fcntl on Unix.
The first solution has the advantage of being cross-platform.
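If you'd rather avoid external packages, here is a stdlib sketch of the same idea: remember a byte offset between polls, so only lines appended since the last poll are read. The read_new_lines helper and the test.txt contents are illustrative, not part of the answers above:

```python
import os

def read_new_lines(path, offset):
    # Hypothetical helper: return (new_lines, new_offset). Keep `offset`
    # between polls so only bytes appended after it are read; PHP can keep
    # appending while Python polls.
    if not os.path.exists(path):
        return [], offset
    with open(path, "r") as f:
        f.seek(offset)
        data = f.read()
        return data.splitlines(), f.tell()

# Simulate PHP appending to test.txt between two polls.
with open("test.txt", "w") as f:
    f.write("old record\n")
lines, pos = read_new_lines("test.txt", 0)

with open("test.txt", "a") as f:
    f.write("new record\n")
new, pos = read_new_lines("test.txt", pos)
print(new)  # only the freshly appended line
```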
I have a 22 MB text file containing a list of numbers (1 number per line). I am trying to have Python read each number, process it, and write the result to another file. All of this works, but if I have to stop the program it starts all over from the beginning. I tried to use a MySQL database at first, but it was way too slow; numbers get processed about 4 times faster this way. I would like to be able to delete each line after its number has been processed.
with open('list.txt', 'r') as file:
    for line in file:
        filename = line.rstrip('\n') + ".txt"
        if os.path.isfile(filename):
            print "File", filename, "exists, skipping!"
        else:
            # process number and write file
            # (need code to delete current line here)
As you can see, every time it is restarted it has to search the hard drive for each file name to make sure it gets to the place it left off. With 1.5 million numbers this can take a while. I found an example with truncate, but it did not work.
Are there any commands similar to PHP's array_shift for Python that will work with text files?
I would use a marker file to keep the number of the last line processed instead of rewriting the input file:
start_from = 0
try:
    with open('last_line.txt', 'r') as llf:
        start_from = int(llf.read())
except:
    pass

with open('list.txt', 'r') as file:
    for i, line in enumerate(file):
        if i < start_from:
            continue
        filename = line.rstrip('\n') + ".txt"
        if os.path.isfile(filename):
            print "File", filename, "exists, skipping!"
        else:
            pass  # process number and write file
        with open('last_line.txt', 'w') as outfile:
            outfile.write(str(i))
This code first checks for the file last_line.txt and tries to read a number from it. The number is the line number that was processed during the previous attempt. Then it simply skips the required number of lines.
I use Redis for stuff like that. Install Redis and then pyredis, and you can have a persistent set in memory. Then you can do:
import redis

r = redis.StrictRedis('localhost')

with open('list.txt', 'r') as file:
    for line in file:
        if r.sismember('done', line):
            continue
        else:
            # process number and write file
            r.sadd('done', line)
if you don't want to install Redis you can also use the shelve module, making sure that you open it with the writeback=False option. I really recommend Redis though, it makes things like this so much easier.
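A minimal sketch of the shelve variant follows. The 'done' filename and the sample numbers list are placeholders standing in for lines from list.txt; note that shelve.open defaults to writeback=False, matching the recommendation above:

```python
import shelve

numbers = ["101", "102", "101"]  # stand-in for lines read from list.txt

with shelve.open("done") as db:
    for line in numbers:
        if db.get(line):
            continue  # already processed on an earlier run
        # ... process the number and write its file here ...
        db[line] = True  # persisted to disk, survives restarts
```

Because the shelf lives on disk, rerunning the script skips everything already marked done.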
Reading the data file should not be a bottleneck. The following code read a 36 MB, 697,997-line text file in about 0.2 seconds on my machine:
import time

start = time.clock()
with open('procmail.log', 'r') as f:
    lines = f.readlines()
end = time.clock()

print 'Readlines time:', end - start
It produced the following result:
Readlines time: 0.1953125
Note that this code produces a list of lines in one go.
To know where you've been, just write the number of lines you've processed to a file. Then if you want to try again, read all the lines and skip the ones you've already done:
import os

# Read the data file
with open('list.txt', 'r') as f:
    lines = f.readlines()

skip = 0
try:
    # Did we try earlier? If so, skip what has already been processed
    with open('lineno.txt', 'r') as lf:
        skip = int(lf.read())  # this should only be one number
    del lines[:skip]  # remove already processed lines from the list
except:
    pass

with open('lineno.txt', 'w+') as lf:
    for n, line in enumerate(lines):
        # Do your processing here.
        lf.seek(0)  # go to beginning of lf
        lf.write(str(n + skip) + '\n')  # write the line number
        lf.flush()
        os.fsync(lf.fileno())  # flush and fsync make sure the lf file is written