I have a large file (1.6 gigs) with millions of rows that has columns delimited with:
[||]
I have tried to use the csv module, but it says I can only use a single character as a delimiter. So here is what I have:
fileHandle = open('test.txt', 'r', encoding="UTF-16")
thelist = []
for line in fileHandle:
    fields = line.split('[||]')
    therow = {
        'dea_reg_nbr': fields[0],
        'bus_actvty_cd': fields[1],
        'drug_schd': fields[3],
        #50 more columns like this
    }
    thelist.append(therow)
fileHandle.close()
#now I have thelist which is what I want
And boom, now I have a list of dictionaries and it works. I want a list because I care about the order, and a dictionary because that's what's expected downstream. This just feels like I should be taking advantage of something more efficient. I don't think this scales well with over a million rows and so much data. So, my question is as follows:
What would be the more efficient way of taking a multi-character delimited text file (UTF-16 encoded) and creating a list of dictionaries?
Any thoughts would be appreciated!
One way to make it scale better is to use a generator instead of loading all million rows into memory at once. This may or may not be possible depending on your use-case; it will work best if you only need to make one pass over the full data set. Multiple passes will require you to either store all the data in memory in some form or another or to read it from the file multiple times.
Anyway, here's an example of how you could use a generator for this problem:
def file_records():
    with open('test.txt', 'r', encoding='UTF-16') as fileHandle:
        for line in fileHandle:
            fields = line.split('[||]')
            therow = {
                'dea_reg_nbr': fields[0],
                'bus_actvty_cd': fields[1],
                'drug_schd': fields[3],
                #50 more columns like this
            }
            yield therow

for record in file_records():
    # do work on one record
The function file_records is a generator function because of the yield keyword. When this function is called, it returns an iterator that you can iterate over exactly like a list. The records will be returned in order, and each one will be a dictionary.
If you're unfamiliar with generators, this is a good place to start reading about them.
The thing that makes this scale so well is that you will only ever have one therow in memory at a time. Essentially what's happening is that at the beginning of every iteration of the loop, the file_records function is reading the next line of the file and returning the computed record. It will wait until the next row is needed before doing the work, and the previous record won't linger in memory unless it's needed (such as if it's referenced in whatever data structure you build in # do work on one record).
Note also that I moved the open call to a with statement. This will ensure that the file gets closed and all related resources are freed once the iteration is done or an exception is raised. This is much simpler than trying to catch all those cases yourself and calling fileHandle.close().
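If typing out all 50-odd keys by hand gets unwieldy, one possible variation (just a sketch; the COLUMNS tuple is a placeholder you would fill in with your real column names, in field order, adding dummy names for any fields you want to skip such as fields[2]) is to keep the names in one place and zip them with the split fields:

# Placeholder column names -- fill in the real 50+ names, in the same order as the fields.
COLUMNS = ('dea_reg_nbr', 'bus_actvty_cd', 'drug_schd')  # ...

def file_records():
    with open('test.txt', 'r', encoding='UTF-16') as fileHandle:
        for line in fileHandle:
            # rstrip trims the trailing newline off the last field
            fields = line.rstrip('\n').split('[||]')
            # zip pairs each name with its field; dict() builds the row in one go
            yield dict(zip(COLUMNS, fields))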
I am reading from a file x that contains individual data items. These items are separated from each other by a blank line. I want to calculate tf_idf_vectorizer() for each individual group of data, so I need to remove all members of tweets whenever the code finds a new line (\n). I get an error on the bold line in my code.
def load_text():
    file=open('x.txt', 'r')
    tweets = []
    all_matrix = []
    for line in file:
        if line in ['\n', '\r\n']:
            all_matrix.append(tf_idf_vectorizer(tweets))
            **for i in tweets: tweets.remove(i)**
        else:
            tweets.append(line)
    file.close()
    return all_matrix
You can make tweets an empty list again with a simple assignment.
tweets = []
If you actually need to empty out the list in-place, the way you do it is either:
del tweets[:]
… or …
tweets[:] = []
In general, you can delete or replace any subslice of a list in this way; [:] is just the subslice that means "the whole list".
However, since nobody else has a reference to tweets, there's really no reason to empty out the list; just create a new empty list, and bind tweets to that, and let the old list become garbage to be cleaned up:
tweets = []
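To see why the distinction matters, here's a tiny sketch (the names are just illustrative): rebinding only affects the one name, while clearing in place shows through every reference:

tweets = ['a', 'b']
alias = tweets        # a second reference to the same list object
tweets = []           # rebinds the name; alias still sees ['a', 'b']
print(alias)          # ['a', 'b']
tweets = alias
del tweets[:]         # empties the list object in place
print(alias)          # [] -- the change is visible through every reference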
Anyway, there are two big problems with this:
for i in tweets: tweets.remove(i)
First, when you want to remove a specific element, you should never use remove. That has to search the list to find a matching element—which is wasteful (since you already know which one you wanted), and also incorrect if you have any duplicates (there could be multiple matches for the same element). Instead, use the index. For example, del tweets[index]. You can use the enumerate function to get the indices. The same thing is true for lots of other list, string, etc. functions—don't use index, find, etc. with a value when you could get the index directly.
Second, if you remove the first element, everything else shifts up by one. So, first you remove element #0. Then, when you remove element #1, that's not the original element #1, but the original #2, which has shifted up one space. And besides skipping every other element, once you're half-way through, you're trying to remove elements past the (new) end of the list. In general, avoid mutating a list while iterating over it; if you must mutate it, it's only safe to do so from the right, not the left (and it's still tricky to get right).
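You can see the skipping for yourself with a tiny experiment:

tweets = ['a', 'b', 'c', 'd']
for i in tweets:
    tweets.remove(i)
print(tweets)   # ['b', 'd'] -- every other element survives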
The right way to remove elements one by one from the left is:
while tweets:
    del tweets[0]
However, this will be pretty slow, because you keep having to re-adjust the list after each removal. So it's still better to go from the right:
while tweets:
    del tweets[-1]
But again, there's no need to go one by one when you can just do the whole thing at once, or not even do it, as explained above.
You should never try to remove items from a list while iterating over that list. If you want a fresh, empty list, just create one.
tweets = []
Otherwise you may not actually remove all the elements of the list, as I suspect you noticed.
You could also re-work the code to be:
from itertools import groupby

def load_tweet(filename):
    with open(filename) as fin:
        tweet_blocks = (g for k, g in groupby(fin, lambda line: bool(line.strip())) if k)
        return [tf_idf_vectorizer(list(tweets)) for tweets in tweet_blocks]
This groups the file into runs of non-blank lines and blank lines. Where the lines aren't blank, we build a list from them to pass to the vectorizer inside a list comprehension. This means we don't have references to lists hanging around, nor are we appending to lists one item at a time.
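If you want to see what groupby is doing here, a minimal sketch on a few in-memory lines (the tweet text is made up):

from itertools import groupby

lines = ['tweet one\n', 'tweet two\n', '\n', 'tweet three\n', '\n', '\n', 'tweet four\n']
# The key is True for non-blank lines and False for blank ones; groupby yields runs of each,
# and keeping only the k == True groups gives the blocks of tweets.
blocks = [list(g) for k, g in groupby(lines, lambda line: bool(line.strip())) if k]
# blocks == [['tweet one\n', 'tweet two\n'], ['tweet three\n'], ['tweet four\n']]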
I have a problem while I'm doing my assignment with python.
I'm new to python so I am a complete beginner.
Question: How can I merge two files below?
s555555,7
s333333,10
s666666,9
s111111,10
s999999,9
and
s111111,,,,,
s222222,,,,,
s333333,,,,,
s444444,,,,,
s555555,,,,,
s666666,,,,,
s777777,,,,,
After merging, it should look something like:
s111111,10,,,,
s222222,,,,,
s333333,10,,,,
s444444,,,,,
s555555,7,,,,
s666666,9,,,,
s777777,,,,,
s999999,9,,,,
Thanks for reading, and any help would be appreciated!!!
Here are the steps you can follow for one approach to the problem. In this I'll be using FileA, FileB and Result as the various filenames.
One way to approach the problem is to give each position in a line (each field between the ,s) a number to reference it by. Then, as you read the lines from FileA, you know that the value after the first , needs to go into the matching line from FileB to build the result that you will write out to Result.
Open FileA. Ideally you should use the with statement because it will automatically close the file when it's done. Or you can use the normal open() call, but make sure you close the file after you are done.
Loop through each line of FileA and add it to a list. (Hint: you should use split()). Why a list? It makes it easier to refer to items by index as that's our plan.
Repeat steps 1 and 2 for FileB, but store it in a different list variable.
Now the next part is to loop through the list of lines from FileA, match them with the list from FileB, to create a new line that you will write to the Result file. You can do this many ways, but a simple way is:
First create an empty list that will store your results (final_lines = [])
Loop through the list that has the lines for FileA in a for loop.
You should also keep in mind that not every line from FileA will have a corresponding line in FileB. For every first "bit" in FileA's list, find the corresponding entry in FileB's list, and then get the next item using its index. If you are keen, you will have realized that the first item is always 0 and the next one is always 1, so why not simply hard-code the values? If you look at the assignment, there are multiple ,s, so it could be that at some point you have a fourth or fifth "column" that needs to be added. Teachers love to check for this stuff.
Use append() to add the items in the right order to final_lines.
Now that you have the list of lines ready, the last part is simple:
Open a new file (use with or open)
Loop through final_lines
Write each line out to the file (make sure you don't forget the end of line character).
Close the file.
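Putting the steps above together, a minimal sketch might look something like this (FileA.txt, FileB.txt and Result.txt are placeholder names, and it assumes FileA holds the id,grade lines while FileB holds the id,,,,, lines):

# Read both files into lists of fields (steps 1-3).
with open('FileA.txt') as file_a:
    grades = [line.strip().split(',') for line in file_a if line.strip()]
with open('FileB.txt') as file_b:
    blanks = [line.strip().split(',') for line in file_b if line.strip()]

# Build a lookup of id -> grade from FileA so each FileB line can be matched (step 4).
grade_by_id = {fields[0]: fields[1] for fields in grades}

final_lines = []
for fields in blanks:
    if fields[0] in grade_by_id:
        fields[1] = grade_by_id[fields[0]]
    final_lines.append(','.join(fields))

# Ids that appear only in FileA still need a row in the result.
ids_in_b = {fields[0] for fields in blanks}
for fields in grades:
    if fields[0] not in ids_in_b:
        final_lines.append(fields[0] + ',' + fields[1] + ',,,,')

final_lines.sort()   # the sample output is in id order

# Write the merged lines out (last steps).
with open('Result.txt', 'w') as result:
    for line in final_lines:
        result.write(line + '\n')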
If you have any specific questions - please ask.
Not relating to Python, but on Linux:
sort -k1 c1.csv > sorted1
sort -k1 c2.csv > sorted2
join -t , -11 -21 -a 1 -a 2 sorted1 sorted2
Result:
s111111,10,,,,,
s222222,,,,,
s333333,10,,,,,
s444444,,,,,
s555555,7,,,,,
s666666,9,,,,,
s777777,,,,,
s999999,9
Make a dict using the first element as a primary key, and then merge the rows?
Something like this:
import csv

f1 = csv.reader(open('file1.csv', 'rb'))
f2 = csv.reader(open('file2.csv', 'rb'))
mydict = {}

for row in f1:
    mydict[row[0]] = row[1:]

for row in f2:
    # extend() mutates the list in place and returns None, so don't reassign its result;
    # setdefault also handles keys that appear only in file2
    mydict.setdefault(row[0], []).extend(row[1:])

fout = csv.writer(open('out.txt', 'w'))
for k, v in mydict.items():
    fout.writerow([k] + v)
Hey guys, beginner here. I have written a program that outputs data to .txt files and am using another to read them and use the values. I have used a list to store these values (len(..) gives me 100 for all files). However, whenever I run this:
for w in range(1,20): # i want files file01-file20 excluding file00
    for x in range(100):
        c=c+1 #counter to keep list position on f=0
        exec "f=open('file%02d.txt','r').readlines()"%w #stores data from file00,file01,file02...
        f00=open('file00.txt','r').readlines() #same as ^ but from file00
        for y in range(100):
            xvp=float(f[c].rstrip('\n')) #the error is on this line; the file are stored in vertical order
            pvp=float(f00[y].rstrip('\n')) #maybe even this one
            #and i do stuff with those values...
On line 12, I get:
xvp=float(f[c].rstrip('\n'))
IndexError: list index out of range
note: there are 100 numbers stored on separate lines in the .txt's
please, if there is any way to help you help me, let me know
thanks
You seem to be incrementing c two thousand times (20 times 100 -- actually only 1900 times, since range(1,20) will not reach the value 20, as you seem to desire in a comment) -- so of course you're going out of range if you use it to index a list of 100! The whole code is rather a mess and I suggest refactoring it radically, to avoid exec and do things the Python way. Assuming Python 2.6 or better (in 2.5, you need a from __future__ import with_statement at the start of your module):
f00 = open('file00.txt').readlines()

for w in range(1, 21):
    with open('file%02d.txt' % w) as f:
        for line in f:
            xvp = float(line)
            for line00 in f00:
                rvp = float(line00)
                do_stuff(xvp, rvp)
I don't know if this is the logic you want -- coupling every line of file00.txt with each line from the 20 other files -- but at least this makes it clear which lines are coupled up with which;-). If what you want is to only couple the first line of file00.txt with the first line from each of the others, then second line with second lines, etc, then add import itertools at the start of your module and change the contents of the with into:
for line00, line in itertools.izip(f00, f):
    rvp = float(line00)
    xvp = float(line)
    do_stuff(xvp, rvp)
and so forth.
Note that I'm reading all of file00.txt in memory once and for all (into the f00 list of lines) because apparently you need to loop on those contents more than once, but that's not needed for the other files.
An obvious optimization is to convert file00.txt's lines to floats only once, replacing the f00 = statement with
with open('file00.txt') as f:
    rvps = [float(line) for line in f]
then use rvps directly instead of repeating the conversion every time on the strings in f00 -- for example, in the second version (the one using itertools.izip):
for rvp, line in itertools.izip(rvps, f):
    xvp = float(line)
    do_stuff(xvp, rvp)
Edit: I see I've done a number of tiny enhancements while hardly realizing I was doing so, maybe I'd better explain them;-). No need to pass 'r' when opening a file for reading (can't hurt, but it's quite idiomatic to omit it). No need to strip trailing (or for that matter leading) whitespace from a string before calling float on it -- float happily skips all such leading and trailing whitespace itself. I did fix what apparently was another bug (you'd never deal with file20.txt) by fixing the applicable range to range(1, 21).
The with open(...) as f: statements do the opening, bind name f to the open file object, and, as soon as the block of statements they control is finished, guarantee that the file is properly closed -- it should almost invariably be used in preference to a stand-alone open, because ensuring all files are closed ASAP is really very good practice (the with statement has many other excellent use cases, but this is the single most frequent one, and the only one that happens to be necessary for this functionality).
Looping directly on an open file object f (provided the file is opened in text mode, as is the default and applies throughout here), for line in f:, provides one after the other the lines of f (without ever needing to keep them all in memory at once) and is an extremely popular and good Pythonic idiom.
The construct rvps = [float(line) for line in f], which I use in my recommended optimization, is known as a "list comprehension" and it's a nicely speedy and compact alternative to a loop that builds a new list.
itertools.izip, given a number of iterables, provides a single iterable whose items are tuples made by the items of the other iterables "walked in lockstep". The built-in zip is similar, but (in Python 2) it builds a list in memory, which itertools.izip avoids, so it's good practice to learn to use the itertools version to avoid wasting memory (not really important for small files like the ones you have, but good habits are best learned and "just applied" rather than having to reflect on them every single time -- just as one doesn't start every morning pondering whether one should brush one's teeth, but just goes and does so as a matter of good habit;-).
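A tiny sketch of the difference (Python 2; in Python 3 the built-in zip is already lazy and izip is gone):

from itertools import izip

a = [1, 2, 3]
b = ['x', 'y', 'z']
pairs_list = zip(a, b)    # Python 2: builds the whole list [(1, 'x'), (2, 'y'), (3, 'z')] in memory
pairs_iter = izip(a, b)   # returns an iterator; pairs are produced one at a time
for left, right in pairs_iter:
    pass                  # each pair is consumed here without the whole list ever existing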
I'm sure there's more, but this is what comes to mind off-hand - feel free to ask if I can be of further assistance!
there are 100 numbers stored on
separate lines in the .txt's
but in
for w in range(1,20): # i want files file01-file20 excluding file00
    for x in range(100):
        c=c+1 #counter to keep list position on f=0
you're incrementing c 19*100 = 1900 times, far past the 100 valid indices.
Maybe you need to reset c = 0 inside the "w" loop, or just use x instead of c?
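In other words, something along these lines (a sketch of just the indexing fix, keeping the rest of the original structure):

f00 = open('file00.txt', 'r').readlines()
for w in range(1, 20):            # range(1, 21) if file20.txt should be included
    f = open('file%02d.txt' % w, 'r').readlines()
    for x in range(100):
        xvp = float(f[x].rstrip('\n'))   # index with x, which stays within 0..99
        for y in range(100):
            pvp = float(f00[y].rstrip('\n'))
            # ...do stuff with xvp and pvp...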
Based on how you describe your files, you are indexing into them incorrectly: you are using c, which is incremented on every iteration of the inner loop and reaches values of nearly 2000, well past the end of a 100-line list. Using x seems to be the logical choice.
#restructured for efficiency
file = open('file00.txt','r')
f00 = file.readlines() #no need to reopen the file for every iteration
file.close() #always close the file when done with

for w in range(1,20):
    file = open('file%02d.txt'%w,'r')
    f = file.readlines() #only open once per iteration
    file.close()
    for x in range(100):
        xvp = float(f[x].rstrip('\n'))
        for y in range(100):
            pvp = float(f00[y].rstrip('\n'))
            #do stuff
So let's say I'm using Python's ftplib to retrieve a list of log files from an FTP server. How would I parse that list of files to get just the file names (the last column) inside a list? See the link above for example output.
Using retrlines() probably isn't the best idea there, since it just prints to the console and so you'd have to do tricky things to even get at that output. A likely better bet would be to use the nlst() method, which returns exactly what you want: a list of the file names.
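A minimal sketch (the host name is just a placeholder):

from ftplib import FTP

ftp = FTP('ftp.example.com')   # hypothetical server
ftp.login()                    # anonymous login
names = ftp.nlst()             # just the file names in the current directory
print(names)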
The best answer:
You may want to use ftp.nlst() instead of ftp.retrlines(). It will give you exactly what you want.
If you can't, read the following:
Generators for sysadmin processes
In his now-famous presentation, Generator Tricks for Systems Programmers: An Introduction, David M. Beazley gives a lot of recipes for answering this kind of data problem with quick and reusable code.
E.g.:
# empty list that will receive all the log entries
log = []

# we pass a callback function to bypass the print_line that retrlines would otherwise call;
# we do that only because we cannot use something better than retrlines
ftp.retrlines('LIST', callback=log.append)

# we use rsplit because it's more efficient in our case if we have a big listing
files = (line.rsplit(None, 1)[1] for line in log)

# get your file list
files_list = list(files)
Why don't we generate the list immediately?
Well, it's because doing it this way offers you a lot of flexibility: you can apply any intermediate generator to filter files before turning it into files_list. It's just like a pipe: add a line and you add a processing step, without overhead (since these are generators). And if you can get rid of retrlines, it works even better, because then you never store the full list even once.
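For example (a sketch with made-up LIST lines; the .log filter is just an illustration), filtering becomes one extra generator dropped into the pipe:

# made-up LIST output, just for illustration
log = [
    "-rw-r--r--   1 ftp-usr  pdmaint     10834 Mar 20 10:11 access.log",
    "-rw-r--r--   1 ftp-usr  pdmaint       512 Mar 20 10:12 notes.txt",
]
files = (line.rsplit(None, 1)[1] for line in log)
# one extra stage in the pipe: keep only the .log files
log_files = (name for name in files if name.endswith('.log'))
files_list = list(log_files)   # ['access.log']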
EDIT: well, I read the comment on the other answer, and it says that this won't work if there is a space in the file name.
Cool, this illustrates why this method is handy. If you want to change something in the process, you just change a line. Replace:
files = (line.rsplit(None, 1)[1] for line in log)
with:
# split the line, take all the items from field 8 onward, then join them back with spaces
files = (' '.join(line.split()[8:]) for line in log)
OK, this may not be obvious here, but for huge batch-processing scripts, it's nice :-)
And a slightly less-optimal method, by the way, if you're stuck using retrlines() for some reason, is to pass a function as the second argument to retrlines(); it'll be called for each item in the list. So something like this (assuming you have an FTP object named 'ftp') would work as well:
filenames = []
ftp.retrlines('LIST', lambda line: filenames.append(line.split()[-1]))
The list 'filenames' will then be a list of the file names.
Is there any reason why ftplib.FTP.nlst() won't work for you? I just checked and it returns only names of the files in a given directory.
Since every filename in the output starts at the same column, all you have to do is get the position of the dot on the first line:
drwxrwsr-x 5 ftp-usr pdmaint 1536 Mar 20 09:48 .
Then slice the filename out of the other lines using the position of that dot as the starting index.
Since the dot is the last character on the line, you can use the length of the line minus 1 as the index. So the final code is something like this:
lines = []
ftp.retrlines('LIST', lines.append)  # collect each listing line (retrlines itself returns only the response code)
filename_index = len(lines[0]) - 1   # position of the trailing "." on the first line
files = []
for line in lines:
    files.append(line[filename_index:])
If the FTP server supports the MLSD command, then please see section “single directory case” from that answer.
Use an instance (say ftpd) of the FTPDirectory class, call its .getdata method with a connected ftplib.FTP instance in the correct folder, and then you can:
directory_filenames= [ftpfile.name for ftpfile in ftpd.files]
I believe it should work for you.
file_name_list = [' '.join(each_file_detail.split()).split()[-1] for each_file_detail in file_list_from_log]
NOTES -
Here I am making an assumption that you want the data in the program (as a list), not on the console.
each_file_detail is each line that is being produced by the program.
' '.join(each_file_detail.split())
This replaces multiple spaces with a single space.
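For instance, applied to a couple of made-up LIST lines (note that, like the other split-based approaches above, this breaks on file names containing spaces):

file_list_from_log = [
    "drwxrwsr-x   5 ftp-usr  pdmaint      1536 Mar 20 09:48 logs",
    "-rw-r--r--   1 ftp-usr  pdmaint     10834 Mar 20 10:11 access.log",
]
file_name_list = [' '.join(each_file_detail.split()).split()[-1] for each_file_detail in file_list_from_log]
# file_name_list == ['logs', 'access.log']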