I have a .dat file with no delimiters that I am trying to read into an array. Each new line represents one person, and the variables in each line are defined by a fixed number of characters (no delimiters within the line). For example, the first variable "year" is the first four characters and the second variable "age" is the next two characters, e.g.:
201219\n
201220\n
201256\n
Here is what I am doing right now:
data_file = 'filename.dat'
file = open(data_file, 'r')
year = []
age = []
for line in file:
    year.append(line[0:4])
    age.append(line[4:])
This works fine for a small number of lines and variables, but when I try loading the full data file (500 MB, with 10 million lines and 20 variables) I get a MemoryError. Is there a more efficient way to load this type of data into arrays?
First off, you're probably better off with a list of class instances than a bunch of parallel lists, from a software engineering standpoint. If you try this, you probably should look into __slots__ to decrease the memory overhead.
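For example, a minimal sketch of such a record class, using the field names and fixed widths from the question:

class Person(object):
    # __slots__ avoids a per-instance __dict__, which saves a lot of memory
    # when you have millions of instances
    __slots__ = ('year', 'age')

    def __init__(self, year, age):
        self.year = year
        self.age = age

people = []
with open('filename.dat') as f:
    for line in f:
        people.append(Person(int(line[0:4]), int(line[4:6])))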
You could also try pypy - it has some memory optimizations for homogeneous lists.
I'd probably use gdbm or bsddb rather than sqlite, if you want an on-disk solution. gdbm and bsddb look like dicts, except you index them by string keys and the values are strings too. So your class (the one I mentioned above) would have a __str__ and/or __repr__ method to convert it to a string (you could use pickle) for storage in the table, and your constructor (or a classmethod) would reverse the process.
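As a rough sketch of that idea, using Python 3's dbm module (which wraps gdbm/ndbm where available) and pickle for the string conversion; the key scheme and field values here are just placeholders:

import dbm
import pickle

db = dbm.open('people.db', 'c')          # dict-like interface, backed by a file on disk
db['0'] = pickle.dumps((2012, 19))       # keys and values are stored as bytes
year, age = pickle.loads(db['0'])        # reverse the conversion on the way back out
db.close()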
If you ever get to such large data that a gdbm or bsddb is too slow, you could try just writing to a flat file - that'll not be as nice for jumping around obviously, but it eliminates a lot of seek()'ing which can be very advantageous sometimes.
HTH
The problem here doesn't appear to be reading the file so much as fitting it into memory. When you're talking about 200 million of anything in memory, you're going to have some issues.
Try storing the data as a list of raw strings (i.e. spend CPU re-parsing on access to save memory), or, if you can, just don't store it at all.
Another option to try is dumping it into a sqlite database. If you use an in-memory db you might end out with the same issue, but maybe not.
If you go for the string style, do something like this:
def get_age(person):
    return int(person[4:])

people = file.readlines()  # Wait a while....
for person in people:
    print(get_age(person) * 2)  # Or something else
Here's an example of getting mean income for a particular age in a particular year:
def get_mean_income_by_age_and_year(people, target_age, target_year):
    count = 0
    total = 0.0
    for person in people:
        income, age, year = get_income(person), get_age(person), get_year(person)
        if age == target_age and year == target_year:
            total += income
            count += 1
    if count:
        return total / count
    else:
        return 0.0
Really, though, this basically does what storing it in a sqlite database would do for you. If there are only a couple of very specific things you want to do, then going this way is probably reasonable. But it sounds like there are probably several things you want to be doing with this info - if so a sqlite database is probably what you want.
A more efficient data structure for lots of uniform numeric data is array.array. Depending on how much memory you have, using arrays may be enough to fit everything.
import array
year = array.array('i') # int
age = array.array('i') # int
income = array.array('f') # float
with open('data.txt', 'r') as f:
    for line in f:
        year.append(int(line[0:4]))
        age.append(int(line[4:6]))
        income.append(float(line[6:12]))
I am relatively new to Python. However, my needs generally only involve simple string manipulation of rigidly formatted data files. I have a specific situation that I have scoured the web trying to solve and have come up blank.
This is the situation. I have a simple list of two-part entries, formatted like this:
name = ['PAUL;25', 'MARY;60', 'PAUL;40', 'NEIL;50', 'MARY;55', 'HELEN;25', ...]
And, I need to keep only one instance of any repeated name (ignoring the number to the right of the ' ; '), keeping only the entry with the highest number, along with that highest value still attached. So the answer would look like this:
ans = ['MARY;60', 'PAUL;40', 'HELEN;25', 'NEIL;50', ...]
The order of the elements in the list is irrelevant, but the format of the ans list entries must remain the same.
I can probably figure out a way to brute force it. I have looked at 2D lists, sets, tuples, etc. But, I can't seem to find the answer. The name list has about a million entries, so I need something that is efficient. I am sure it will be painfully easy for some of you.
Thanks for any input you can provide.
Cheers.
alkemyst
Probably the best data structure for this would be a dictionary, with the entries split up (and converted to integer) and later re-joined.
Something like this:
max_score = {}
for n in name:
    person, score_str = n.split(';')
    score = int(score_str)
    if person not in max_score or max_score[person] < score:
        max_score[person] = score

ans = [
    '%s;%s' % (person, score)
    for person, score in max_score.items()
]
This is a fairly common structure for many functions and programs: first convert the input to an internal representation (in this case, split and convert to integer), then do the logic or calculation (in this case, uniqueness and maximum), then convert to the required output representation (in this case, string separated with ;).
In terms of efficiency, this code looks at each input item once, then at each output item once; there's unlikely to be any approach that can do better than that (certainly not formally, and likely not in practice). All of the per-item operations are constant-time and fast. It accumulates the intermediate answer in memory (in max_score), but again that is unavoidable; if memory is an issue, the input and output could be changed to iterators/generators, but the whole intermediate answer has to be accumulated in max_score before any items can be output.
This is my first post, please be gentle. I'm attempting to sort some
files into ascending and descending order. Once I have sorted a file, I am storing it in a list which is assigned to a variable. The user is then to choose a file and search for an item. I get an error message....
TypeError: unorderable types: int() < list()
.....whenever I try to search for an item using the variable of my sorted list; the error occurs on line 27 of my code. From research, I know that an int and a list cannot be compared, but I can't for the life of me think how else to search a large (600-item) list for an item.
At the moment I'm just playing around with binary search to get used to it.
Any suggestions would be appreciated.
year = []
with open("Year_1.txt") as file:
for line in file:
line = line.strip()
year.append(line)
def selectionSort(alist):
for fillslot in range(len(alist)-1,0,-1):
positionOfMax=0
for location in range(1,fillslot+1):
if alist[location]>alist[positionOfMax]:
positionOfMax = location
temp = alist[fillslot]
alist[fillslot] = alist[positionOfMax]
alist[positionOfMax] = temp
def binarySearch(alist, item):
first = 0
last = len(alist)-1
found = False
while first<=last and not found:
midpoint = (first + last)//2
if alist[midpoint] == item:
found = True
else:
if item < alist[midpoint]:
last = midpoint-1
else:
first = midpoint+1
return found
selectionSort(year)
testlist = []
testlist.append(year)
print(binarySearch(testlist, 2014))
Year_1.txt file consists of 600 items, all years in the format of 2016.
They are listed in descending order and start at 2017, down to 2013. Hope that makes sense.
Is there some reason you're not using the Python bisect module?
Something like:
import bisect
sorted_year = list()
for each in year:
    bisect.insort(sorted_year, each)
... is sufficient to create the sorted list. Then you can search it using functions such as those in the documentation.
(Actually, you could just use year.sort() to sort the list in place ... bisect.insort() might be marginally more efficient for building the list from the input stream in lieu of your call to year.append() ... but my point about using the bisect module remains.)
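For the search step, a minimal sketch along the lines of the recipes in the bisect documentation (note the item is a string, matching what was read from the file):

import bisect

def contains(sorted_list, item):
    # bisect_left gives the insertion point; check whether the item is actually there
    i = bisect.bisect_left(sorted_list, item)
    return i < len(sorted_list) and sorted_list[i] == item

print(contains(sorted_year, "2014"))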
Also note that 600 items is trivial for modern computing platforms. Even 6,000 won't take more than a few milliseconds. On my laptop, sorting 600,000 random integers takes about 180 ms, and similarly sized strings still take under 200 ms.
So you're probably not gaining anything by sorting this list in this application at that data scale.
On the other hand, Python also includes a number of modules in its standard library for managing structured data and data files. For example, you could use Python's sqlite3 module.
With it you'd use standard SQL DDL (data definition language) to describe your data structure (schema), SQL DML (data manipulation language: INSERT, UPDATE, and DELETE statements) to manage the contents, and SQL queries to fetch data from it. Your data can be returned sorted on any column, with any mixture of ascending and descending order across any number of columns, using the standard SQL ORDER BY clause, and you can add indexes to your schema so that queries and traversals (table scans) are efficient in any order on any key(s) you choose.
Because Python includes SQLite in its standard library, and because SQLite provides SQL semantics over simple local files, there's almost no downside to using it for structured data. You don't have to install and maintain additional software or servers, handle network connections to a remote database server, or any of that.
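For illustration, a rough sketch of what that can look like with sqlite3; the table and column names here are invented, and year is assumed to be the list of strings read from Year_1.txt:

import sqlite3

conn = sqlite3.connect('years.db')  # just a local file; no server to run
conn.execute('CREATE TABLE IF NOT EXISTS records (year INTEGER)')
conn.executemany('INSERT INTO records (year) VALUES (?)',
                 [(int(y),) for y in year])
conn.commit()

# sorted retrieval via standard SQL, ascending or descending as needed
for (y,) in conn.execute('SELECT year FROM records ORDER BY year DESC'):
    print(y)

# an indexed lookup instead of a hand-rolled binary search
found = conn.execute('SELECT 1 FROM records WHERE year = ?',
                     (2014,)).fetchone() is not None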
I'm going to walk through some steps before getting to the answer.
You need to post a [mcve]. Instead of telling us to read from "Year_1.txt", which we don't have, you need to put the list itself in the code. Do you NEED 600 entries to get the error in your code? No. This is sufficient:
year = ["2001", "2002", "2003"]
If you really need 600 entries, then provide them. Either post the actual data, or
year = [str(x) for x in range(2017-600, 2017)]
The code you post needs to be Cut, Paste, Boom - reproduces the error on my computer just like that.
selectionSort is completely irrelevant to the question, so delete it from the question entirely. In fact, since you say the input was already sorted, I'm not sure what selectionSort is actually supposed to do in your code, either. :)
Next, you do testlist.append(year), which appends the entire year list as a single element of testlist. USE YOUR DEBUGGER before you ask here. Simply looking at the value in your variable would have made the problem obvious.
How to append list to second list (concatenate lists)
Fixing that means you now have a list of things to search. Before you were searching a list to see if 2014 matched the one thing in there, which was a complete list of all the years.
Now we get into binarySearch. If you look at the variables, you see you are comparing the integer 2014 with some string, maybe "1716", and the answer to that is useless, if it even lets you do that (I have python 2.7 so I am not sure exactly what you get there). But the point is you can't find the integer 2014 in a list of strings, so it will always return False.
If you don't have a debugger, then you can place strategic print statements like
print ("debug info: binarySearch comparing ", item, alist[midpoint])
Now here, what VBB said in comments worked for me, after I fixed the other problems. If you are searching for something that isn't even in the list, and expecting True, that's wrong. Searching for "2014" returns True, if you provide the correct list to search. Alternatively, you could force it to string and then search for it. You could force all the years to int during the input phase. But the int 2014 is not the same as the string "2014".
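For completeness, here is a sketch of that last option, converting to int during the input phase so the types match (it reuses binarySearch from the question and passes the list itself rather than a list containing it):

year = []
with open("Year_1.txt") as file:
    for line in file:
        year.append(int(line.strip()))   # store ints, not strings

year.sort()                              # binarySearch expects ascending order
print(binarySearch(year, 2014))          # search the list itself, not a list containing it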
Right now, I'm basically running through an excel sheet.
I have about 20 names and 50k total values, each of which matches one of those 20 names, so the excel sheet is 50k rows long, with column B showing some value and column A showing one of the 20 names.
I'm trying to get a string for each of the names that shows all of its values:
Name A: 123,244,123,523,123,5523,12505,142... etc etc.
Name B: 123,244,123,523,123,5523,12505,142... etc etc.
Right now, I created a dictionary, and the code runs through the excel sheet and checks if the name is already in the dictionary; if it is, then it does a
strA = strA + "," + foundValue
Then it inserts strA back into the dictionary for that particular name. If the name doesn't exist, it creates that dictionary key and then adds that value to it.
Now, this was working well at first... but it's been about 15 or 20 minutes and it has only added 5k values to the dictionary so far, and it seems to get slower the longer it runs.
I wonder if there is a better or faster way to do this. I was thinking of building a new dictionary every 1k values and then combining them all at the end... but that would be 50 dictionaries total and it sounds complicated... although maybe not... I'm not sure, maybe it could work better that way; the current approach doesn't seem to be working.
I DO need the string that shows each value with a comma between each value. That is why I am doing the string thing right now.
There are a number of things that are likely causing your program to run slowly.
String concatenation in Python can be extremely inefficient when used with large strings.
Strings in Python are immutable. This fact frequently sneaks up and bites novice Python programmers on the rump. Immutability confers some advantages and disadvantages. In the plus column, strings can be used as keys in dictionaries and individual copies can be shared among multiple variable bindings. (Python automatically shares one- and two-character strings.) In the minus column, you can't say something like, "change all the 'a's to 'b's" in any given string. Instead, you have to create a new string with the desired properties. This continual copying can lead to significant inefficiencies in Python programs.
Considering that each string in your example could contain thousands of characters, every time you do a concatenation Python has to copy that giant string into a new object in memory.
This would be much more efficient:
strings = []
strings.append('string')
strings.append('other_string')
...
','.join(strings)
In your case, instead of each dictionary key storing a massive string, it should store a list, and you would just append each match to the list, and only at the very end would you do a string concatenation using str.join.
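A rough sketch of that structure; the row iteration below is a placeholder for however you actually read the 50k rows:

from collections import defaultdict

values_by_name = defaultdict(list)
for name, value in rows:                  # placeholder: (name, value) pairs from the sheet
    values_by_name[name].append(value)    # appending to a list is cheap

# join once per name, only at the very end
result = {name: ','.join(values) for name, values in values_by_name.items()}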
In addition, printing to stdout is also notoriously slow. If you're printing to stdout on each iteration of your massive 50,000 item loop, each iteration is being held up by the unbuffered write to stdout. Consider only printing every nth iteration, or perhaps writing to a file instead (file writes are normally buffered) and then tailing the file from another terminal.
This answer is based on the OP's answer to my comment. I asked what he would do with the dict, suggesting that maybe he doesn't need to build it in the first place. @simon replies:
I add it to an excel sheet, so I take the KEY, which is the name, and put it in A1, then I take the VALUE, which is 1345,345,135,346,3451,35.. etc etc, and put that into A2. Then I do the rest of my programming with that information... but I need those values separated by commas and accessible inside that excel sheet like that!
So it looks like the dict doesn't have to be built after all. Here is an alternative: for each name, create a file, and store those files in a dict:
files = {}
name = 'John' # let's say
if name not in files:
    files[name] = open(name, 'w')
Then when you loop over the 50k-row excel, you do something like this (pseudo-code):
for row in 50k_rows:
    name, value_string = row.split()  # or whatever
    file = files[name]
    file.write(value_string + ',')  # if it already ends with ',', no need to add one
Since your value_string is already comma separated, your file will be csv-like without any further tweaking on your part (except maybe you want to strip the last trailing comma after you're done). Then when you need the values, say, of John, just value = open('John').read().
Now I've never worked with 50k-row excels, but I'd be very surprised if this is not quite a bit faster than what you currently have. Having persistent data is also (well, maybe) a plus.
EDIT:
The file approach above trades speed for memory: writing to files is much slower than appending to lists (but probably still faster than recreating many large strings). But if the lists get huge (which seems likely) and you run into a memory problem (not saying you will), the file approach is worth trying.
An alternative, similar to lists in performance (at least for the toy test I tried) is to use StringIO:
from io import StringIO  # Python 2: from StringIO import StringIO
string_ios = {'John': StringIO()} # a dict to store StringIO objects
for value in ['ab', 'cd', 'ef']:
    string_ios['John'].write(value + ',')
print(string_ios['John'].getvalue())
This will output 'ab,cd,ef,'
Instead of building a string that looks like a list, use an actual list and make the string representation you want out of it when you are done.
The proper way is to collect in lists and join at the end, but if for some reason you want to use strings, you could speed up the string extensions. Pop the string out of the dict so that there's only one reference to it and thus the optimization can kick in.
Demo:
>>> from timeit import timeit
>>> timeit('s = d.pop(k); s = s + "y"; d[k] = s', 'k = "x"; d = {k: ""}')
0.8417842664330237
>>> timeit('s = d[k]; s = s + "y"; d[k] = s', 'k = "x"; d = {k: ""}')
294.2475278390723
It depends on how you have read the excel file, but let's say that lines are read as delimiter-separated tuples or something similar:
d = {}
for name, foundValue in line_tuples:
    try:
        d[name].append(foundValue)
    except KeyError:
        d[name] = [foundValue]
d = {k: ",".join(v) for k, v in d.items()}
Alternatively using pandas:
import pandas as pd
df = pd.read_excel("some_excel_file.xlsx")
d = df.groupby("A")["B"].apply(lambda x: ",".join(x)).to_dict()
This question has been asked many times. After spending some time reading the answers, I did some quick profiling to try out the various methods mentioned previously...
I have a 600 MB file with 6 million lines of strings (Category paths from DMOZ project).
The entry on each line is unique.
I want to load the file once & keep searching for matches in the data
The methods I tried are listed below, along with the time taken to load the file, the search time for a negative match, and the memory usage in the task manager.
1) set :
(i) data = set(f.read().splitlines())
(ii) result = search_str in data
Load time ~ 10s, Search time ~ 0.0s, Memory usage ~ 1.2GB
2) list :
(i) data = f.read().splitlines()
(ii) result = search_str in data
Load time ~ 6s, Search time ~ 0.36s, Memory usage ~ 1.2GB
3) mmap :
(i) data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
(ii) result = data.find(search_str)
Load time ~ 0s, Search time ~ 5.4s, Memory usage ~ NA
4) Hash lookup (using code from @alienhard below):
Load time ~ 65s, Search time ~ 0.0s, Memory usage ~ 250MB
5) File search (using code from @EOL below):
with open('input.txt') as f:
    print(search_str in f)  # search_str ends with '\n' (or '\r\n'), as in the file
Load time ~ 0s, Search time ~ 3.2s, Memory usage ~ NA
6) sqlite (with primary index on url):
Load time ~ 0s, Search time ~ 0.0s, Memory usage ~ NA
For my use case, it seems like going with the set is the best option as long as I have sufficient memory available. I was hoping to get some comments on these questions:
Is there a better alternative, e.g. sqlite?
Ways to improve the search time using mmap. I have a 64-bit setup.
[edit] e.g. bloom filters
As the file size grows to a couple of GB, is there any way to keep using 'set', e.g. by splitting it into batches?
[edit 1] P.S. I need to search frequently, add/remove values and cannot use a hash table alone because I need to retrieve the modified values later.
Any comments/suggestions are welcome !
[edit 2] Update with results from methods suggested in answers
[edit 3] Update with sqlite results
Solution: Based on all the profiling & feedback, I think I'll go with sqlite. The second alternative would be method 4. One downside of sqlite is that the database size is more than double that of the original csv file of urls, due to the primary index on url.
Variant 1 is great if you need to launch many sequential searches. Since set is internally a hash table, it's rather good at search. It takes time to build, though, and only works well if your data fit into RAM.
Variant 3 is good for very big files, because you have plenty of address space to map them and the OS caches enough data. You do a full scan; it can become rather slow once your data stops fitting into RAM.
SQLite is definitely a nice idea if you need several searches in a row and you can't fit the data into RAM. Load your strings into a table, build an index, and SQLite builds a nice b-tree for you. The tree can fit into RAM even if the data doesn't (it's a bit like what @alienhard proposed), and even if it doesn't, the amount of I/O needed is dramatically lower. Of course, you need to create a disk-based SQLite database. I doubt that memory-based SQLite will beat Variant 1 significantly.
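For reference, a rough sketch of that setup with the standard sqlite3 module; the table and column names are invented:

import sqlite3

conn = sqlite3.connect('paths.db')
# the PRIMARY KEY gives you the b-tree index mentioned above
conn.execute('CREATE TABLE IF NOT EXISTS paths (path TEXT PRIMARY KEY)')
with open('input.txt') as f:
    conn.executemany('INSERT OR IGNORE INTO paths VALUES (?)',
                     ((line.rstrip('\n'),) for line in f))
conn.commit()

def exists(path):
    row = conn.execute('SELECT 1 FROM paths WHERE path = ?', (path,)).fetchone()
    return row is not None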
Custom hash table search with externalized strings
To get fast access time and a lower memory consumption you could do the following:
for each line compute a string hash and add it to a hash table, e.g., index[hash] = position (do not store the string). If there is a collision, store all file positions for that key in a list.
to look up a string, compute its hash and look it up in the table. If the key is found, read the string at position from the file to verify you really have a match. If there are multiple positions check each one until you find a match or none.
Edit 1: replaced line_number by position (as pointed out by a commenter, one obviously needs the actual position and not line numbers)
Edit 2: provide code for an implementation with a custom hash table, which shows that this approach is more memory efficient than the other approaches mentioned:
from collections import namedtuple

Node = namedtuple('Node', ['pos', 'next'])

def build_table(f, size):
    table = [None] * size
    while True:
        pos = f.tell()
        line = f.readline()
        if not line:
            break
        i = hash(line) % size
        if table[i] is None:
            table[i] = pos
        else:
            table[i] = Node(pos, table[i])
    return table

def search(string, table, f):
    i = hash(string) % len(table)
    entry = table[i]
    while entry is not None:
        pos = entry.pos if isinstance(entry, Node) else entry
        f.seek(pos)
        if f.readline() == string:
            return True
        entry = entry.next if isinstance(entry, Node) else None
    return False

SIZE = 2**24
with open('data.txt', 'r') as f:
    table = build_table(f, SIZE)
    print(search('Some test string\n', table, f))
The hash of a line is only used to index into the table (if we used a normal dict, the hashes would also be stored as keys). The file position of the line is stored at the given index. Collisions are resolved with chaining, i.e., we create a linked list. However, the first entry is never wrapped in a node (this optimization makes the code a bit more complicated but it saves quite some space).
For a file with 6 million lines I chose a hash table size of 2^24. With my test data I got 933132 collisions. (A hash table of half the size was comparable in memory consumption, but resulted in more collisions. Since more collisions means more file access for searches, I would rather use a large table.)
Hash table: 128MB (sys.getsizeof([None]*(2**24)))
Nodes: 64MB (sys.getsizeof(Node(None, None)) * 933132)
Pos ints: 138MB (6000000 * 24)
-----------------
TOTAL: 330MB (real memory usage of python process was ~350MB)
You could also try
with open('input.txt') as f:
    # search_str is matched against each line in turn; returns on the first match:
    print(search_str in f)
with search_str ending with the proper newline sequence ('\n' or '\r\n'). This should use little memory, as the file is read progressively. It should also be quite fast, since only part of the file is read.
I would guess many of the paths start out the same on DMOZ.
You should use a trie data structure and store the individual characters on nodes.
Tries have O(m) lookup time (where m is the key length) and also save a lot of space when storing large dictionaries or tree-like data.
You could also store path parts on the nodes to reduce the node count; this is called a Patricia trie. That makes the lookup slower by the average string-comparison time, though. See the SO question Trie (Prefix Tree) in Python for more about implementations.
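To make the idea concrete, here is a minimal dict-of-dicts trie sketch (a real implementation would use a more compact node representation, which is where the memory savings come from):

_END = object()  # sentinel marking "a stored key ends here"

def trie_insert(root, key):
    node = root
    for ch in key:
        node = node.setdefault(ch, {})
    node[_END] = True

def trie_contains(root, key):
    node = root
    for ch in key:
        if ch not in node:
            return False
        node = node[ch]
    return _END in node

root = {}
trie_insert(root, 'Top/Arts/Music')
print(trie_contains(root, 'Top/Arts/Music'))  # True
print(trie_contains(root, 'Top/Arts'))        # False (only a prefix)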
There are a couple of trie implementations on Python Package Index, but they are not very good. I have written one in Ruby and in Common Lisp, which is especially well suited for this task – if you ask nicely, I could maybe publish it as open source... :-)
What about a text-indexing solution?
I would use Lucene in the Java world, but there is a Python engine called Whoosh:
https://bitbucket.org/mchaput/whoosh/wiki/Home
Without building an index file, your searching will be too slow, and that is not such a simple task. So it's better to use already-developed software. The best option would be the Sphinx search engine.
I have a file of names and addresses as follows (example line)
OSCAR ,CANNONS ,8 ,STIEGLITZ CIRCUIT
And I want to read it into a dictionary of name and value. Here self.field_list is a list of the name, length and start point of the fixed fields in the file. What ways are there to speed up this method? (python 2.6)
def line_to_dictionary(self, file_line, rec_num):
    file_line = file_line.lower()  # Make it all lowercase

    return_rec = {}  # Return record as a dictionary
    for (field_start, field_length, field_name) in self.field_list:
        field_data = file_line[field_start:field_start+field_length]
        if self.strip_fields == True:  # Strip off white spaces first
            field_data = field_data.strip()
        if field_data != '':  # Only add non-empty fields to dictionary
            return_rec[field_name] = field_data

    # Set hidden fields
    return_rec['_rec_num_'] = rec_num
    return_rec['_dataset_name_'] = self.name
    return return_rec
struct.unpack(), combined with 's' format specifiers with lengths, will tear the string apart faster than slicing.
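A sketch of that approach, with field widths invented for illustration (in practice you would build the format string from self.field_list, and each line is assumed to be at least as wide as the format); note that struct works on bytes, so open the file in binary mode on Python 3:

import struct

# e.g. three fixed-width fields of 10, 10 and 20 characters
record = struct.Struct('10s10s20s')

def line_to_fields(line):
    # unpack consumes exactly record.size bytes and splits them in one call
    return [field.strip() for field in record.unpack(line[:record.size])]

with open('data.dat', 'rb') as f:
    for line in f:
        print(line_to_fields(line))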
Edit: Just saw your remark below about commas. The approach below is fast when it comes to file reading, but it is delimiter-based, and would fail in your case. It's useful in other cases, though.
If you want to read the file really fast, you can use a dedicated module, such as the almost-standard NumPy:
import numpy
data = numpy.loadtxt('file_name.txt', dtype=[('name', 'S10'), ('address', 'S8')], delimiter=',')  # the dtype must be adapted to your column names and sizes
loadtxt() also allows you to process fields on the fly (with the converters argument). Numpy also allows you to give names to columns (see the doc), so that you can do:
data['name'][42] # Name # 42
The structure obtained is like an Excel array; it is quite memory efficient, compared to a dictionary.
If you really need to use a dictionary, you can use a dedicated loop over the data array read quickly by Numpy, in a way similar to what you have done.
If you want to get some speed up, you can also store field_start+field_length directly in self.field_list, instead of storing field_length.
Also, if field_data != '' can more simply be written as if field_data (if this gives any speed up, it is marginal, though).
I would say that your method is quite fast, compared to what standard Python can do (i.e., without using non-standard, dedicated modules).
If your lines include commas like the example, you can use line.split(',') instead of several slices. This may prove to be faster.
You'll want to use the csv module.
It handles not only CSV, but any CSV-like format, which yours seems to be.
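For example, a minimal sketch with csv.reader; the column names are guesses based on the sample line:

import csv

with open('addresses.txt') as f:
    for row in csv.reader(f):
        # row is a list of the comma-separated fields, e.g.
        # ['OSCAR ', 'CANNONS ', '8 ', 'STIEGLITZ CIRCUIT']
        first_name, surname, number, street = (field.strip() for field in row)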