appending a value in python and writing to a file

I have made a classifier in Python and it works fine. Now I need to output the results into a text file, which I can also do without a problem. The problem is, I need to include the id of the result as well as the result itself. I am new to python and still getting used to the syntax so any help would be greatly appreciated.
printedRow = '{id1},{result1}'.format(id1=id, result1=result)
print("~~~~~RESULT~~~~~")
for k in range(0, len(pans)):
    pans.append(row[0] + ',' + p.classify(row))
    print(pans)
    print(pans, file=outfile)
The id that I need to include with the results of the classifier is being held in the index row[0]. When I run this code, the same id is printed with every result. The results are printing out fine, I just need to be able to match the id with the result.
They are both held in lists and there are about 2000 values in each list.

I am not sure what you are trying to do, because you don't define what row or pans are. Consider writing more Pythonic code: if an object is a collection of any type, iterate over it directly. enumerate is useful for giving each loop iteration an index number.
for n, p in enumerate(pans):
    print("pan %d is %s" % (n, p))
You also appear to be appending to pans inside a loop whose bounds are taken from len(pans), so the list grows while you iterate over it, which is almost certainly not what you intend.
If you want to add to a list, just add it, e.g.:
newpans = pans + [x.classify(row) for x in oldpans]
Rather than building up a structure in memory, then printing it to a file, write directly to a file. This will use up far less memory. If you want speed, let the computer buffer the file for you, e.g.
f = open("output.txt", "w")
for n, p in enumerate(pans):
    f.write("%s,%s\n" % (n, classify(p)))
f.close()
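Given that the ids and the classifier results sit in two parallel lists of roughly 2000 entries each, the most direct fix is probably to pair them with zip and write one "id,result" line per pair. A minimal sketch, assuming the two lists are named ids and results (names I've made up, not from the question):
# Sketch: write one "id,result" line per pair of parallel list entries.
# `ids` and `results` are assumed to be equal-length parallel lists.
with open("output.txt", "w") as outfile:
    for row_id, result in zip(ids, results):
        outfile.write("{},{}\n".format(row_id, result))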

Related

Saving variables in Python as columns without brackets

I have code which returns some variables that I would like to use later on in another program. However, the output doesn't look the way I want it to: the rows are wrapped in brackets [], and I would like to have them removed.
I have found the following question that deals with something similar; however, I am saving the variable, not printing it on the screen:
how to remove characters from printed output in python
This is where the variable is defined; the libraries used are chaospy and numpy.
nodes, weights = cp.generate_quadrature(order, dist, rule="G", sparse=True)
nodes_trans = nodes.transpose()
And this is where the variable is saved
with open('nodes_smolyak_trans.txt', 'w') as ndsT:
    for itemndsT in nodes_trans:
        ndsT.write("%s\n" % itemndsT)
    ndsT.close()
Also, dist, mentioned above, is defined as
dist = cp.J(wX, wY, pX, pY)
and all of its components are defined similarly, e.g.
windX = cp.Uniform(0, 100)
Now, all rows of my output look like this:
[50. 50. 50. 21.13248654]
I would like them to be instead simply
21.13248654
If the number of rows is small, I can remove them by hand, but when the output contains hundreds I waste too much time, so any suggestion for removing the brackets automatically is welcome.
My suspicion is that itemndsT is a list or NumPy array, which is why the square brackets show up when you write it to the file. You'll need to format it into a string yourself before writing it. There are a few ways you can do this, but using the join string method is one of the simplest:
for itemndsT in nodes_trans:
    item_str = " ".join(map(str, itemndsT))
    ndsT.write("%s\n" % item_str)
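Since nodes_trans is a NumPy array, another option is to let numpy.savetxt do the formatting for you; a sketch (the fmt string is my assumption, adjust the precision to taste):
import numpy as np

# numpy.savetxt writes each row of a 2-D array as one plain-text line,
# without brackets; fmt and delimiter control the column formatting.
np.savetxt('nodes_smolyak_trans.txt', nodes_trans, fmt='%.8f', delimiter=' ')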

What is the fastest performance tuple for large data sets in python?

Right now, I'm basically running through an excel sheet.
I have about 20 names and about 50k total values, each of which matches one of those 20 names, so the excel sheet is 50k rows long, with column B showing an arbitrary value and column A showing one of the 20 names.
I'm trying to get a string for each of the names that show all of the values.
Name A: 123,244,123,523,123,5523,12505,142... etc etc.
Name B: 123,244,123,523,123,5523,12505,142... etc etc.
Right now, I build a dictionary while running through the excel sheet: the code checks whether the name is already in the dictionary, and if it is, it does
strA = strA + "," + foundValue
Then it inserts strA back into the dictionary for that particular name. If the name doesn't exist, it creates that dictionary key and then adds that value to it.
Now, this was working well at first, but it's been about 15 or 20 minutes, only 5k values have been added to the dictionary so far, and it seems to get slower the longer it runs.
I wonder if there is a better or faster way to do this. I was thinking of building a new dictionary every 1k values and then combining them all at the end, but that would be 50 dictionaries in total and it sounds complicated (although maybe not, I'm not sure; maybe it could work better that way). The current approach doesn't seem to work.
I DO need the string that shows each value with a comma between each value. That is why I am doing the string thing right now.
There are a number of things that are likely causing your program to run slowly.
String concatenation in python can be extremely inefficient when used with large strings.
Strings in Python are immutable. This fact frequently sneaks up and bites novice Python programmers on the rump. Immutability confers some advantages and disadvantages. In the plus column, strings can be used as keys in dictionaries and individual copies can be shared among multiple variable bindings. (Python automatically shares one- and two-character strings.) In the minus column, you can't say something like, "change all the 'a's to 'b's" in any given string. Instead, you have to create a new string with the desired properties. This continual copying can lead to significant inefficiencies in Python programs.
Considering each string in your example could contain thousands of characters, each time you do a concatenation, python has to copy that giant string into memory to create a new object.
This would be much more efficient:
strings = []
strings.append('string')
strings.append('other_string')
...
','.join(strings)
In your case, instead of each dictionary key storing a massive string, it should store a list, and you would just append each match to the list, and only at the very end would you do a string concatenation using str.join.
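To make the list-then-join idea concrete, here is a minimal sketch; it assumes the sheet has been read into an iterable of (name, value) pairs called rows, which is my naming, not the question's:
from collections import defaultdict

# Collect values per name in lists; join each list into a string once at the end.
values_by_name = defaultdict(list)
for name, value in rows:          # `rows` is assumed: (name, value) pairs
    values_by_name[name].append(str(value))

# One join per name instead of 50k string concatenations.
strings_by_name = {name: ",".join(vals) for name, vals in values_by_name.items()}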
In addition, printing to stdout is also notoriously slow. If you're printing to stdout on each iteration of your massive 50,000 item loop, each iteration is being held up by the unbuffered write to stdout. Consider only printing every nth iteration, or perhaps writing to a file instead (file writes are normally buffered) and then tailing the file from another terminal.
This answer is based on OP's answer to my comment. I asked what he would do with the dict, suggesting that maybe he doesn't need to build it in the first place. #simon replies:
i add it to an excel sheet, so I take the KEY, which is the name, and put it in A1, then I take the VALUE, which is 1345,345,135,346,3451,35.. etc etc, and put that into A2. then I do the rest of my programming with that information...... but i need those values seperated by commas and acessible inside that excel sheet like that!
So it looks like the dict doesn't have to be built after all. Here is an alternative: for each name, create a file, and store those files in a dict:
files = {}
name = 'John'  # let's say
if name not in files:
    files[name] = open(name, 'w')
Then when you loop over the 50k-row excel, you do something like this (pseudo-code):
for row in 50k_rows:
    name, value_string = row.split()  # or whatever
    file = files[name]
    file.write(value_string + ',')  # if it already ends with ',', no need to add one
Since your value_string is already comma separated, your file will be csv-like without any further tweaking on your part (except maybe you want to strip the last trailing comma after you're done). Then when you need the values, say, of John, just value = open('John').read().
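One detail the pseudo-code above leaves out is closing the per-name files once the loop finishes; a small addition of my own would be:
# Close every handle stored in the files dict so buffered writes are flushed to disk.
for handle in files.values():
    handle.close()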
Now I've never worked with 50k-row excels, but I'd be very surprised if this is not quite a bit faster than what you currently have. Having persistent data is also (well, maybe) a plus.
EDIT:
The approach above is aimed at saving memory rather than speed: writing to files is much slower than appending to lists (but probably still faster than recreating many large strings). So if the lists would be huge (which seems likely) and you run into a memory problem (not saying you will), you can try the file approach.
An alternative, similar to lists in performance (at least for the toy test I tried) is to use StringIO:
from io import StringIO  # Python 2: from StringIO import StringIO
string_ios = {'John': StringIO()}  # a dict to store StringIO objects
for value in ['ab', 'cd', 'ef']:
    string_ios['John'].write(value + ',')
print(string_ios['John'].getvalue())
This will output 'ab,cd,ef,'; if the trailing comma bothers you, strip it with .rstrip(',') when you read the value back.
Instead of building a string that looks like a list, use an actual list and make the string representation you want out of it when you are done.
The proper way is to collect the values in lists and join them at the end, but if for some reason you want to keep extending strings, you can still speed the extensions up: CPython can grow a string in place when it holds the only reference to it, so pop the string out of the dict so that there's only one reference and the optimization can kick in.
Demo:
>>> timeit('s = d.pop(k); s = s + "y"; d[k] = s', 'k = "x"; d = {k: ""}')
0.8417842664330237
>>> timeit('s = d[k]; s = s + "y"; d[k] = s', 'k = "x"; d = {k: ""}')
294.2475278390723
It depends on how you have read the excel file, but let's say the lines come in as delimiter-separated (name, value) tuples or something like that:
d = {}
for name, foundValue in line_tuples:
    try:
        d[name].append(foundValue)
    except KeyError:
        d[name] = [foundValue]
d = {k: ",".join(v) for k, v in d.items()}
Alternatively using pandas:
import pandas as pd
df = pd.read_excel("some_excel_file.xlsx")
# astype(str) guards against column B holding numbers rather than strings
d = df.groupby("A")["B"].apply(lambda x: ",".join(x.astype(str))).to_dict()

python script for removing duplicates taking 24 hrs+ to loop through 10^7 records

input t1
P95P,71655,LINC-JP,pathogenic
P95P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
output op
P95P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
myCode
def dup():
    fi = open("op", "a")
    l = []; final = ""
    q = []; dic = {}
    for i in open("t1"):
        k = i.split(",")
        q.append(k[1])
        q.append(k[0])
        if q in l:
            pass
        else:
            final = final + i.strip() + "\n"
            fi.write(str(i.strip()))
            fi.write("\n")
            l.append(q)
        q = []
        #print i.strip()
    fi.close()
    return final.strip()

d = dup()
In the above input, lines 1 and 2 and lines 3 and 4 are duplicates, so in the output those duplicates are removed. My input file has around 10^7 entries.
Why has my code been running for more than 24 hours on a 76 MB input file? It has yet to complete even one pass over the input, although it works fine for small files.
Can anyone please point out the reason for this long runtime, and how I can optimize my program? Thanks.
You're using an O(n²) algorithm, which scales poorly for larger files:
for i in open("t1"):  # linear pass of file takes O(n) time
    ...
    if q in l:  # linear pass of list l takes O(n) time
        ...
    ...
You should consider using a set (i.e. make l a set) or itertools.groupby if duplicates will always be next to each other. These approaches will be O(n).
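Since the itertools.groupby variant isn't shown anywhere here, a minimal sketch follows; it assumes (as in your sample) that duplicate rows are adjacent, and that two rows count as duplicates when their first two comma-separated fields match:
from itertools import groupby

def dedup_key(line):
    # Same key as the original code: (second field, first field).
    k = line.split(",")
    return (k[1], k[0])

with open("t1") as infile, open("op", "a") as outfile:
    for _, group in groupby(infile, key=dedup_key):
        outfile.write(next(group))  # keep the first line of each run of duplicates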
If you have access to a Unix system, uniq is a nice utility made for exactly this problem:
uniq input.txt output.txt
Note that uniq only collapses adjacent duplicate lines, which matches your sample, where duplicates sit next to each other.
See https://www.cs.duke.edu/csl/docs/unix_course/intro-89.html
I know this is a Python question, but sometimes Python is not the tool for the task.
And you can always embed a system call in your python script.
It's not clear why you're building a huge string (final) that holds the same thing the file does, or what dic is for. In terms of performance, you can look up x in y much faster if y is a set than if y is a list. Also, a minor point: shorter variable names don't improve performance, so use good ones instead. I would suggest:
def deduplicate(infile, outfile):
    seen = set()
    #final = []
    with open(outfile, "a") as out, open(infile) as in_:
        for line in in_:
            check = tuple(line.split(",")[:2])
            if check not in seen:
                #final.append(line.strip())
                out.write(line)  # why 'strip' the '\n' then 'write' a new one?
                seen.add(check)
    #return "\n".join(final)
If you really do need final, make it a list until the last moment (see the commented-out lines): gradual string concatenation means creating lots of unnecessary objects.
There are a couple things that you are doing very inefficiently. The largest is that you made l a list, so the line if q in l has to search through everything in the list already in order to check if q matches it. If you make l a set, the membership check can be done using a hash calculation and array lookup, which take the same (small) amount of time no matter how much you add to the set (though it will cause l not to be read in the order that it was written).
Other little speedups that you can do include:
Using a tuple (k[1], k[0]) instead of a list for q.
You are writing to your output file fi on every loop iteration. Your OS will try to batch and buffer the writes, but it may be faster to just do one big write at the end. I am not sure on this point, but try it.
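A sketch that combines both suggestions, using a set of tuples for the seen keys and a single write at the end (the function and variable names are mine, not from the question):
def dedup_batched(in_path="t1", out_path="op"):
    seen = set()   # membership tests against a set are O(1) on average
    kept = []      # collect output lines so the file is written only once
    for line in open(in_path):
        fields = line.split(",")
        key = (fields[1], fields[0])
        if key not in seen:
            seen.add(key)
            kept.append(line)
    with open(out_path, "a") as out:
        out.write("".join(kept))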

python. write to file, cannot understand behavior

I don't understand why I cannot write to a file in my Python program. I have a list of strings, measurements. I just want to write them to a file, but instead of all the strings it writes only one string. I cannot understand why.
This is my piece of code:
fmeasur = open(fmeasur_name, 'w')
line1st = 'rev number, alg time\n'
fmeasur.write(line1st)
for i in xrange(len(measurements)):
    fmeasur.write(measurements[i])
    print measurements[i]
fmeasur.close()
I can see all of these strings printed, but in the file there is only one. What could be the problem?
The only plausible explanation that I have is that you execute the above code multiple times, each time with a single entry in measurements (or at least the last time you execute the code, len(measurements) is 1).
Since you're overwriting the file instead of appending to it, only the last set of measurements would be present in the file, but all of them would appear on the screen.
Edit: Or do you mean that the data is there, but there are no newlines between the measurements? The easiest way to fix that is to use print >>fmeasur, measurements[i] instead of fmeasur.write(...).
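If missing newlines turn out to be the problem, an equivalent fix is to append the newline yourself when writing; a sketch, keeping the question's Python 2 style:
fmeasur = open(fmeasur_name, 'w')
fmeasur.write('rev number, alg time\n')
for m in measurements:
    # one measurement per line; str() guards against non-string values
    fmeasur.write(str(m) + '\n')
fmeasur.close()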

list index out of range error received

Hey guys, beginner here. I have written a program that outputs files to .txt's, and I am using another to read them and use the values. I have used a list to store these values (len(..) gives me 100 for all files). However, whenever I run this:
for w in range(1,20): # i want files file01-file20 excluding file00
    for x in range(100):
        c=c+1 #counter to keep list position on f=0
        exec "f=open('file%02d.txt','r').readlines()"%w #stores data from file00,file01,file02...
        f00=open('file00.txt','r').readlines() #same as ^ but from file00
        for y in range(100):
            xvp=float(f[c].rstrip('\n')) #the error is on this line; the file are stored in vertical order
            pvp=float(f00[y].rstrip('\n')) #maybe even this one
            #and i do stuff with those values...
I get, on line 12,
xvp=float(f[c].rstrip('\n'))
IndexError: list index out of range
note: there are 100 numbers stored on separate lines in the .txt's
please, if there is any way to help you help me, let me know
thanks
You seem to be incrementing c two thousand times (20 times 100 -- actually only 1900 times, since range(1,20) will not reach the value 20, as you seem to desire in a comment) -- so of course you're going out of range if you use it to index a list of 100! The whole code is rather a mess and I suggest refactoring it radically, to avoid exec and do things the Python way. Assuming Python 2.6 or better (in 2.5, you need a from __future__ import with_statement at the start of your module):
f00 = open('file00.txt').readlines()
for w in range(1, 21):
    with open('file%02d.txt' % w) as f:
        for line in f:
            xvp = float(line)
            for line00 in f00:
                rvp = float(line00)
                do_stuff(xvp, rvp)
I don't know if this is the logic you want -- coupling every line of file00.txt with each line from the 20 other files -- but at least this makes it clear which lines are coupled up with which;-). If what you want is to only couple the first line of file00.txt with the first line from each of the others, then second line with second lines, etc, then add import itertools at the start of your module and change the contents of the with into:
for line00, line in itertools.izip(f00, f):
    rvp = float(line00)
    xvp = float(line)
    do_stuff(xvp, rvp)
and so forth.
Note that I'm reading all of file00.txt in memory once and for all (into the f00 list of lines) because apparently you need to loop on those contents more than once, but that's not needed for the other files.
An obvious optimization is to convert file00.txt's lines to floats only once, replacing the f00 = statement with
with open('file00.txt') as f:
    rvps = [float(line) for line in f]
then use rvps directly instead of repeating the conversion every time on the strings in f00 -- for example, in the second version (the one using itertools.izip):
for rvp, line in itertools.izip(rvps, f):
    xvp = float(line)
    do_stuff(xvp, rvp)
Edit: I see I've done a number of tiny enhancements while hardly realizing I was doing so, maybe I'd better explain them;-). No need to pass 'r' when opening a file for reading (can't hurt, but it's quite idiomatic to omit it). No need to strip trailing (or for that matter leading) whitespace from a string before calling float on it -- float happily skips all such leading and trailing whitespace itself. I did fix what apparently was another bug (you'd never deal with file20.txt) by fixing the applicable range to range(1, 21).
The with open(...) as f: statements do the opening, bind name f to the open file object, and, as soon as the block of statements they control is finished, guarantee that the file is properly closed -- it should almost invariably be used in preference to a stand-alone open, because ensuring all files are closed ASAP is really very good practice (the with statement has many other excellent use cases, but this is the single most frequent one, and the only one that happens to be necessary for this functionality).
Looping directly on an open file object f (provided the file is opened in text mode, as is the default and applies throughout here), for line in f:, provides one after the other the lines of f (without ever needing to keep them all in memory at once) and is an extremely popular and good Pythonic idiom.
The construct rvps = [float(line) for line in f], which I use in my recommended optimization, is known as a "list comprehension" and it's a nicely speedy and compact alternative to a loop that builds a new list.
itertools.izip, given a number of iterables, provides a single iterable whose items are tuples made by the items of the other iterables "walked in lockstep". The built-in zip is similar, but (in Python 2) it builds a list in memory, which itertools.izip avoids, so it's good practice to learn to use the itertools version to avoid wasting memory (not really important for small files like the ones you have, but good habits are best learned and "just applied" rather than having to reflect on them every single time -- just as one doesn't start every morning pondering whether one should brush one's teeth, but just goes and does so as a matter of good habit;-).
I'm sure there's more, but this is what comes to mind off-hand - feel free to ask if I can be of further assistance!
there are 100 numbers stored on separate lines in the .txt's
but in
for w in range(1,20): # i want files file01-file20 excluding file00
    for x in range(100):
        c=c+1 #counter to keep list position on f=0
you are incrementing c 20*100 = 2000 times.
Maybe you need c = 0 inside the "w" loop, or just use x instead of c?
Based on how you describe your files, you are indexing into them incorrectly by using c, which is incremented on every iteration of the inner loop and will reach values of up to 2000. Using x seems to be the logical choice.
#restructured for efficiency
file = open('file00.txt','r')
f00 = file.readlines() #no need to reopen the file for every iteration
file.close() #always close the file when done with
for w in range(1,20):
    file = open('file%02d.txt'%w,'r')
    f = file.readlines() #only open once per iteration
    file.close()
    for x in range(100):
        xvp = float(f[x].rstrip('\n'))
        for y in range(100):
            pvp = float(f00[y].rstrip('\n'))
            #do stuff
