Counting Occurrences of Zip Codes in Big Data Set w/Python

I'm a Python newbie looking to count the 100 most frequently occurring zip codes in several .csv files (6+). There are 3 million+ zip codes in the data set, and I'm looking for a way to pull out only the top 100. Here is a sample of code, inspired by another post, although I'm trying to count across several .csv files. Thanks in advance!
import csv
import collections

ZIP = collections.Counter()
with open('zipcodefile1.csv', 'zipcodefile2.csv', 'zipcodefile3.csv') as input_file:
    for row in csv.reader(input_file, delimiter=';'):
        ZIP[row[1]] += 1
print ZIP.most_common(100)

I'd suggest using Python's generators here, as they will be nice and efficient. First, suppose we have two files:
zc1.txt:
something;00001
another;00002
test;00003
and zc2.txt:
foo;00001
bar;00001
quuz;00003
Now let's write a function that takes several filenames and iterates through the lines in all of the files, returning only the zip codes:
import csv

def iter_zipcodes(paths):
    for path in paths:
        with open(path) as fh:
            for row in csv.reader(fh, delimiter=';'):
                yield row[1]
Note that we write yield row[1]. This signals that the function is a generator, and it returns its values lazily.
We can test it out as follows:
>>> list(iter_zipcodes(['zc1.txt', 'zc2.txt']))
['00001', '00002', '00003', '00001', '00001', '00003']
So we see that the generator simply spits out the zip codes in each file, in order. Now let's count them:
>>> import collections
>>> zipcodes = iter_zipcodes(['zc1.txt', 'zc2.txt'])
>>> counts = collections.Counter(zipcodes)
>>> counts
Counter({'00001': 3, '00003': 2, '00002': 1})
Looks like it worked. This approach is efficient because it reads only one line at a time; when one file is completely read, it moves on to the next.
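Putting it together for the original question, a minimal sketch (reusing the iter_zipcodes generator defined above, and assuming every file uses the same ';'-delimited layout with the zip code in the second column) could look like:

import collections

# the OP's files; substitute your own paths
paths = ['zipcodefile1.csv', 'zipcodefile2.csv', 'zipcodefile3.csv']
counts = collections.Counter(iter_zipcodes(paths))
print(counts.most_common(100))  # the 100 most common zip codes with their counts

The Counter only ever holds one entry per distinct zip code, so millions of rows are fine as long as the number of distinct codes stays manageable.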

Related

Best way of writing and reading multiple list of lists or arrays of arrays with different dimensions to and from file

I have some parameters and functions that I need to write to a file, but the functions and x-values have different lengths (i.e. domains and codomains) when compared to each other.
My current formatting, assuming e.g. two parameters A and B and two functions f1 and f2, is [A,B,x,f1,f2], where x, f1, f2 are themselves lists or numpy arrays.
So my imagined data.txt could look like this:
[0, 0, [0,1,2,3], [1,2,3,4], [4,5,6,7]]
[0, 1, [0,1,2,3,4,5,6], [1,2,3,4,5,6,7], [4,5,6,7,8,9,10]]
[1, 10, [2,3,4,5,6], [1,2,3,4,5], [4,5,6,7,8]]
Then I could read in line by line, getting the parameters A and B and plot the functions f1 and f2, given the right x-values.
To write to a file I use the following code, which almost gives me what I described above.
import numpy as np

OutSummary = [A, B, x, f1, f2]
Out = str(OutSummary).replace('\n', '')
f = open(filename, 'a')
np.savetxt(f, [Out], fmt='%s')
f.close()
Currently, this produces entries like [0, 1, array([ 8. , 8.29229229, 8.58458458, ....
The issue is now that reading in does not work, due to the different lengths of the x-values and function arrays, i.e.
PrevEntris = np.genfromtxt(filename,dtype=str)
(with dtype=str, or dtype=None, or no dtype given) gives me e.g.
ValueError: Some errors were detected ! Line #7 (got 555 columns instead of 1105),
i.e. the x-values contained in the 7th line only had 555 entries, while the previous 6 had 1105.
I see that this is not good code, and I am saving arrays as strings, but I did not manage to find a better way. I'd be really interested to know if there is some advanced numpy way of handling this, or maybe even a SQL-database kind of approach rather than one .txt file. I spent the last few hours trying to make it work using json, but was not successful yet (first-time user).
Thanks
You can use the built-in json module, since Python lists and JSON have the same syntax. Here is some example code if you want to store each of the lists in a bigger list:
import json

f = open("data.txt")
lines = f.read().splitlines()
data = [json.loads(line) for line in lines]
f.close()  # remember to close your files
Edit: I realized I should have used a list comprehension instead, so I changed my response. It still works the same way, but is neater.
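For the writing side (not shown above), a rough sketch of producing that one-JSON-list-per-line format could look like this; append_record is a hypothetical helper, and it assumes numpy arrays are converted with .tolist() so json can serialize them:

import json
import numpy as np

def append_record(filename, A, B, x, f1, f2):
    # numpy arrays are not JSON-serializable, so convert them to plain lists first
    record = [A, B, np.asarray(x).tolist(), np.asarray(f1).tolist(), np.asarray(f2).tolist()]
    with open(filename, "a") as f:
        f.write(json.dumps(record) + "\n")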
The answer from Icestrike411 works very well for me, especially for my requested formatting style. In the meantime I also solved it another way, again using json, by slightly altering the format.
One data block could look like
new_data = {
    "A": 1,
    "B": 0.05,
    "X": [0, 1, 2, 3, 4]
}
and then I append it to the outfile with the following function, running append('some.txt', new_data):
import json

def append(filename, new_entry):
    try:
        with open(filename, "r") as filea:
            content = json.load(filea)
    except:
        content = []
        #print("Likely empty file.")
    content.append(new_entry)
    out = json.dumps(content).replace("},", "},\n")  # new line for each entry in file
    with open(filename, "w") as fileb:
        fileb.write(out)
and then reading it with
with open(filename, "r") as file:
    contentr = json.load(file)

My CSV files are not being assigned to the correct Key in a dictionary

import csv

def read_prices(tikrList):
    # read each file and get the price list dictionary
    def getPriceDict():
        priceDict = {}
        TLL = len(tikrList)
        for x in range(0, TLL):
            with open(tikrList[x] + '.csv', 'r') as csvFile:
                csvReader = csv.reader(csvFile)
                for column in csvReader:
                    priceDict[column[0]] = float(column[1])
        return priceDict

    # populate the final dictionary with the price dictionary from the previous function
    def popDict():
        combDict = {}
        TLL = len(tikrList)
        for x in range(0, TLL):
            for y in tikrList:
                combDict[y] = getPriceDict()
        return combDict

    return popDict()

print(read_prices(['GOOG','XOM','FB']))
The problem is that when I return the final dictionary, the keys for GOOG, XOM, and FB all hold the values from the FB dictionary only.
As you can see with this output:
{'GOOG': {'2015-12-31': 104.660004, '2015-12-30': 106.220001},
 'XOM': {'2015-12-31': 104.660004, '2015-12-30': 106.220001},
 'FB': {'2015-12-31': 104.660004, '2015-12-30': 106.220001}}
I have 3 different CSV files, but all three keys end up with the data read from the FB file.
I want to apologize ahead of time if my code is not easy to read or doesn't make sense. I think there is an issue with storing the values and returning priceDict in the getPriceDict function, but I can't seem to figure it out.
Any help is appreciated, thank you!
Since this is classwork I won't provide a solution but I'll point a few things out.
You have defined three functions - two are defined inside the third. While structuring functions like that can make sense for some problems/solutions, I don't see any benefit in your solution; it just makes things more complicated.
The two inner functions don't have any parameters, you might want to refactor them so that when they are called you pass them the information they need. One advantage of a function is to encapsulate an idea/process into a self-contained code block that doesn't rely on resources external to itself. This makes it easy to test so you know that the function works and you can concentrate on other parts of the code.
This piece of your code doesn't make much sense - it never uses x from the outer loop:
...
for x in range(0, TLL):
    for y in tikrList:
        combDict[y] = getPriceDict()
When you iterate over a list, the iteration stops after the last item and yields the items themselves - there is no need to iterate over index numbers to access the items, so don't do for i in range(len(thelist)): print(thelist[i]).
>>> tikrList = ['GOOG','XOM','FB']
>>> for name in tikrList:
...     print(name)
...
GOOG
XOM
FB
>>>
When you read through a tutorial or the documentation, don't just look at the examples - read and understand the text.

Problem parsing data from a firewall log and finding "worm"

I am struggling to see what is wrong with my code. I am new to Python.
import os

uniqueWorms = set()
logLineList = []

with open("redhat.txt", 'r') as logFile:
    for eachLine in logFile:
        logLineList.append(eachLine.split())

for eachColumn in logLineList:
    if 'worm' in eachColumn.lower():
        uniqueWorms.append()
        print (uniqueWorms)
eachLine.split() returns a list of words. When you append this to logLineList, it becomes a 2-dimensional list of lists.
Then when you iterate over it, eachColumn is a list, not a single column.
If you want logLineList to be a list of words, use
logLineList += eachLine.split()
instead of
logLineList.append(eachLine.split())
Finally, uniqueWorms.append() should be uniqueWorms.add(eachColumn), since uniqueWorms is a set. And print(uniqueWorms) should be outside the loop, so you just see the final result rather than a printout every time a worm is added.
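Putting those fixes together, a corrected sketch of the script (keeping the original variable names) could look like this:

uniqueWorms = set()
logLineList = []

with open("redhat.txt", 'r') as logFile:
    for eachLine in logFile:
        logLineList += eachLine.split()   # flat list of words

for eachColumn in logLineList:
    if 'worm' in eachColumn.lower():
        uniqueWorms.add(eachColumn)       # sets use add(), not append()

print(uniqueWorms)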

Count and flag duplicates in a column in a csv

This type of question has been asked many times, so apologies; I have searched hard for an answer but have not found anything close enough to my needs (and I am not sufficiently advanced, being a total newbie, to customize an existing answer). Thanks in advance for any help.
Here's my query:
I have 30 or so csv files and each contains between 500 and 15,000 rows.
Within each of them (in the 1st column) - are rows of alphabetical IDs (some contain underscores and some also have numbers).
I don't care about the unique IDs - but I would like to identify the duplicate IDs and the number of times they appear in all the different csv files.
Ideally I'd like the output for each duped ID to appear in a new csv file and be listed in 2 columns ("ID", "times_seen")
It may be that I need to compile just 1 csv with all the IDs for your code to run properly - so please let me know if I need to do that
I am using python 2.7 (a crawling script that I run needs this version, apparently).
Thanks again
It seems the easiest way to achieve what you want would be to use dictionaries.
import csv
import os

# Assuming all your csv files are in a single directory, we will iterate over the
# files in this directory, selecting only those ending with .csv.
# To list the files in the directory we will use the walk function in the
# os module. os.walk(path_to_dir) returns a generator (a lazy iterator);
# this generator yields tuples of the form
# (root_directory, list_of_directories, list_of_files).

# So: declare the generator
file_generator = os.walk("/path/to/csv/dir")
# get the first values; as we won't recurse into subdirectories, we
# only need this one
root_dir, list_of_dir, list_of_files = file_generator.next()

# Now, we only keep the files ending with .csv. Let me break that down
csv_list = []
for f in list_of_files:
    if f.endswith(".csv"):
        csv_list.append(f)
# That's what was contained in the one-liner
# csv_list = [f for f in os.walk("/path/to/csv/dir").next()[2] if f.endswith(".csv")]

# The dictionary (key-value map) that will contain the id counts.
ref_count = {}

# We loop on all the csv filenames...
for csv_file in csv_list:
    # open the file in read mode
    with open(csv_file, "r") as _:
        # build a csv reader around the file
        csv_reader = csv.reader(_)
        # loop on all the lines of the file, transformed into lists by the
        # csv reader
        for row in csv_reader:
            # If we haven't encountered this id yet, create
            # the corresponding entry in the dictionary.
            if not row[0] in ref_count:
                ref_count[row[0]] = 0
            # increment the number of occurrences associated with
            # this id
            ref_count[row[0]] += 1

# now write to the csv output
with open("youroutput.csv", "w") as _:
    writer = csv.writer(_)
    for k, v in ref_count.iteritems():
        # as requested we only keep duplicates
        if v > 1:
            # use the writer to write the list to the file;
            # the delimiters will be added by it.
            writer.writerow([k, v])
You may need to tweak the csv reader and writer options a little to fit your needs, but this should do the trick. You'll find the documentation here: https://docs.python.org/2/library/csv.html. I haven't tested it though. Correcting the little mistakes that may have occurred is left as a practice exercise :).
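One practical caveat worth adding as a hedged note: os.walk yields bare filenames, so if the script is not run from inside the csv directory, open(csv_file) will not find the files. A small tweak is to join each name with the root directory when building csv_list in the script above:

# join the directory root with each filename so open() can find the files
csv_list = [os.path.join(root_dir, f) for f in list_of_files if f.endswith(".csv")]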
That's rather easy to achieve. It would look something like:
import os

# Set to whatever separator you have; '\t' for TAB
delimiter = ','

# Dictionary to keep count of ids
ids = {}

# Iterate over files in the current dir
for in_file in os.listdir(os.curdir):
    # Check whether it is a csv file (dummy way but it shall work for you)
    if in_file.endswith('.csv'):
        with open(in_file, 'r') as ifile:
            for line in ifile:
                my_id = line.strip().split(delimiter)[0]
                # If the id does not exist in the dict, set its count to 0
                if my_id not in ids:
                    ids[my_id] = 0
                # Increment the count
                ids[my_id] += 1

# save ids and counts to a file
with open('ids_counts.csv', 'w') as ofile:
    for key, val in ids.iteritems():
        # write the counts to the file using the same column delimiter
        ofile.write('{}{}{}\n'.format(key, delimiter, val))
Check out the pandas package. You can read and write csv files quite easily with it.
http://pandas.pydata.org/pandas-docs/stable/10min.html#csv
Then, once you have the csv content as a DataFrame, you can convert it with the as_matrix function.
Use the answers to this question to get the duplicates as a list.
Find and list duplicates in a list?
I hope this helps
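For illustration, a rough pandas sketch of the whole task (hypothetical file pattern; it assumes the IDs sit in the first column and the files have no header rows):

import glob
import pandas as pd

# Read the first column of every csv in the current directory into one Series
frames = [pd.read_csv(f, header=None, usecols=[0]) for f in glob.glob('*.csv')]
ids = pd.concat(frames)[0]

# Count occurrences and keep only IDs seen more than once
counts = ids.value_counts()
dupes = counts[counts > 1].reset_index()

# Write the duplicates out with the requested "ID", "times_seen" columns
dupes.columns = ['ID', 'times_seen']
dupes.to_csv('duplicates.csv', index=False)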
As you are a newbie, I'll try to give some directions instead of posting an answer, mainly because this is not a "code this for me" platform.
Python has a library called csv that allows you to read data from CSV files (boom, surprised?). Start by reading one file (preferably an example file that you create with just 10 or so rows, then increase the number of rows or use a for loop to iterate over different files). The examples at the bottom of the page I linked will help you print this info.
As you will see, the output you get from this library is a list with all the elements of each row. Your next step should be extracting just the ID that you are interested in.
The next logical step is counting the number of appearances. There is also a class in the standard library called Counter. It has a method called update that you can use as follows:
from collections import Counter
c = Counter()
c.update(['safddsfasdf'])
c # Counter({'safddsfasdf': 1})
c['safddsfasdf'] # 1
c.update(['safddsfasdf'])
c # Counter({'safddsfasdf': 2})
c['safddsfasdf'] # 2
c.update(['fdf'])
c # Counter({'safddsfasdf': 2, 'fdf': 1})
c['fdf'] # 1
So basically you will have to pass it a list of the elements you want to count (you could have more than one ID in the list, for example reading 10 IDs before inserting them, for improved efficiency, but remember not to build a list of thousands of elements if you care about memory usage).
If you try this and get into some trouble come back and we will help further.
Edit
Spoiler alert: I decided to give a full answer to the problem, so please skip it if you want to find your own solution and learn Python in the process.
# The csv module will help us read from and write to the files
from csv import reader, writer
# The collections module has a useful type called Counter that fulfills our needs
from collections import Counter

# Getting the names/paths of the files is not this question's goal,
# so I'll just have them in a list
files = [
    "file_1.csv",
    "file_2.csv",
]

# The output file name/path will also be stored in a variable
output = "output.csv"

# We create the item that is gonna count for us
appearances = Counter()

# Now we will loop over each file
for file in files:
    # We open the file in reading mode and get a handle
    with open(file, "r") as file_h:
        # We create a csv parser from the handle
        file_reader = reader(file_h)
        # Here you may need to do something if your first row is a header
        # We loop over all the rows
        for row in file_reader:
            # We insert the id into the counter;
            # row[:1] will get explained afterwards, it is the first column of the row in list form
            appearances.update(row[:1])

# Now we will open/create the output file and get a handle
with open(output, "w") as file_h:
    # We create a csv parser for the handle, this time to write
    file_writer = writer(file_h)
    # If you want to insert a header into the output file this is the place

    # We loop through our Counter object to write the entries.
    # Here we have different options: if you want them sorted
    # by number of appearances, Counter.most_common() is your friend;
    # if you don't care about the order you can use the Counter object
    # as if it were a normal dict.

    # Option 1: ordered
    for id_and_times in appearances.most_common():
        # id_and_times is a tuple with the id and the times it appears,
        # so we check the second element (indices start at 0)
        if id_and_times[1] == 1:
            # As they are ordered, we can stop the loop when we reach
            # the first 1 to finish as early as possible.
            break
        # As we have ended the loop if an id appears only once,
        # only duplicate IDs will reach this point
        file_writer.writerow(id_and_times)

    # Option 2: unordered
    for id_and_times in appearances.iteritems():
        # This time we cannot stop the loop as they are unordered,
        # so we must check them all
        if id_and_times[1] > 1:
            file_writer.writerow(id_and_times)
I offered 2 options: writing them ordered (based on the Counter.most_common() docs) and unordered (based on the normal dict method dict.iteritems()). Choose one. From a speed point of view I'm not sure which would be faster, as the first needs to order the Counter but stops looping at the first non-duplicated element, while the second doesn't need to order the elements but must loop over every ID. The speed will probably depend on your data.
About the row[:1] thingy:
row is a list.
You can get a subset of a list by giving the initial and final positions.
In this case the initial position is omitted, so it defaults to the start.
The final position is 1, so just the first element gets selected.
So the output is another list with just the first element.
row[:1] == [row[0]] - they have the same output; getting a sublist containing only the first element is the same as constructing a new list with only the first element.
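A quick interactive illustration with a made-up row:

>>> row = ['ABC_123', 'foo', 'bar']
>>> row[0]
'ABC_123'
>>> row[:1]
['ABC_123']
>>> [row[0]]
['ABC_123']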

Python: Maintaining original order when zipping two lists into a dictionary

I am reading a CSV file and combining rows into dictionaries, with the first row containing the keys and the subsequent rows containing the values.
I want my dictionary keys to be in the same order as the original csv file, but the dict(zip()) call seems to order them randomly. I tried OrderedDict and that didn't work.
If there is a better way to produce my dictionaries I'm open to suggestions, but I would really like to know how I can do this while keeping my existing code, just because I am very new to Python (and programming in general) and I would like to be able to understand my own code at this point.
import csv  # imports the csv module

with open("csvfile.csv", "r") as file_var:
    reader = csv.reader(file_var)
    my_list = []
    for row in reader:
        if (len(row) != 0):
            my_list = my_list + [row]

for i in range(1, len(my_list)):
    user = dict(zip(my_list[0], my_list[i]))
    print "----------------------"
    print user['first_name'], user['last_name']
    for key in user:
        print key, user[key]
Dictionaries have an arbitrary order. You should use an OrderedDict instead.
from collections import OrderedDict
user = OrderedDict(zip(my_list[0], my_list[i]))
etc.
I note you say it didn't work, but I see no reason why it wouldn't. In what way did it fail?
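For reference, a quick demonstration with made-up keys and values, showing that OrderedDict preserves insertion order (Python 2.7 syntax to match the question):

>>> from collections import OrderedDict
>>> keys = ['first_name', 'last_name', 'email']
>>> values = ['Ada', 'Lovelace', 'ada@example.com']
>>> user = OrderedDict(zip(keys, values))
>>> for key in user:
...     print key, user[key]
...
first_name Ada
last_name Lovelace
email ada@example.com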
