Comparing two csv files and getting difference - python

I have two CSV files I need to compare and then spit out the differences:
CSV FORMAT:
Name Produce Number
Adam Apple 5
Tom Orange 4
Adam Orange 11
I need to compare the two CSV files and tell whether there is a difference between Adam's apples on sheet 1 and sheet 2, and do that for all names and produce numbers. Both CSV files will be formatted the same.
Any pointers will be greatly appreciated

I have used csvdiff:
$ pip install csvdiff
$ csvdiff --style=compact col1 a.csv b.csv
Link to the package on PyPI.
I found this link useful.

If your CSV files aren't so large they'll bring your machine to its knees if you load them into memory, then you could try something like:
import csv

with open('file1.csv', newline='') as f1, open('file2.csv', newline='') as f2:
    csv1 = list(csv.DictReader(f1))
    csv2 = list(csv.DictReader(f2))

# dict rows aren't hashable, so convert each row to a tuple of items first
set1 = {tuple(sorted(row.items())) for row in csv1}
set2 = {tuple(sorted(row.items())) for row in csv2}

print(set1 - set2)  # in 1, not in 2
print(set2 - set1)  # in 2, not in 1
print(set1 & set2)  # in both
For large files, you could load them into a SQLite3 database and use SQL queries to do the same, or sort by relevant keys and then do a match-merge.

One of the best utilities for comparing two different files is diff.
See Python implementation here: Comparing two .txt files using difflib in Python

import csv

def load_csv_to_dict(fname, get_key, get_data):
    with open(fname, newline='') as inf:
        incsv = csv.reader(inf)
        next(incsv)  # skip header
        return {get_key(row): get_data(row) for row in incsv}

def main():
    key = lambda r: tuple(r[0:2])   # (name, produce) identifies a row
    data = lambda r: int(r[2])      # the number is the value to compare
    f1 = load_csv_to_dict('file1.csv', key, data)
    f2 = load_csv_to_dict('file2.csv', key, data)
    f1keys = set(f1)
    f2keys = set(f2)
    print("Keys in file1 but not file2:")
    print(", ".join(str(a) + ":" + str(b) for a, b in (f1keys - f2keys)))
    print("Keys in file2 but not file1:")
    print(", ".join(str(a) + ":" + str(b) for a, b in (f2keys - f1keys)))
    print("Differing values:")
    for k in (f1keys & f2keys):
        a, b = f1[k], f2[k]
        if a != b:
            print("{}:{} {} <> {}".format(k[0], k[1], a, b))

if __name__ == "__main__":
    main()

If you want to use Python's csv module along with a generator function, you can use nested looping and compare large .csv files. The example below compares each row using a cursory comparison:
import csv

def csv_lazy_get(csvfile):
    with open(csvfile, newline='') as f:
        for row in csv.reader(f):
            yield row

def csv_cmp_lazy(csvfile1, csvfile2):
    gen_2 = csv_lazy_get(csvfile2)
    for row_1 in csv_lazy_get(csvfile1):
        row_2 = next(gen_2, None)  # None once file 2 runs out of rows
        print("row_1:", row_1)
        print("row_2:", row_2)
        if row_2 == row_1:
            print("row_1 is equal to row_2.")
        else:
            print("row_1 is not equal to row_2.")
    gen_2.close()

Here's a start that does not use difflib. It is really just a point to build from, because maybe Adam and apples appear twice on the sheet; can you ensure that is not the case? Should the apples be summed, or is that an error?
import csv

sheet1 = {}
with open('sheet.csv', newline='') as fsock:
    for name, produce, amount in csv.reader(fsock):
        sheet1[(name, produce)] = int(amount)  # always an integer?
# repeat the above for the second sheet, then compare
You get the idea?

Related

Compare 2 large CSVs using python - output the differences

I am writing a program to compare all files and directories between two filepaths (basically the files' metadata, content, and internal directories should match).
File content comparison is done row by row. Dimensions of the CSVs may or may not be the same, but the approaches below generally manage scenarios where the dimensions differ.
The problem is that processing time is too slow.
Some context:
The two files are identified to be different using filecmp
This particular problematic csv is ~11k columns and 800 rows.
My program will not know the data types within
the CSV beforehand, so defining the dtype for pandas is
not an option
Difflib does an excellent job if the CSV file is small, but not for this particular use case
I've looked at all the related questions on SO and tried these approaches, but the processing time was terrible. Approach 3 gives weird results
Approach 1 (Pandas) - terribly slow, and I keep getting this error:
UserWarning: You are merging on int and float columns where the float values are not equal to their int representation.
import pandas as pd
import numpy as np
df1 = pd.read_csv(f1)
df2 = pd.read_csv(f2)
diff = df1.merge(df2, how='outer', indicator='exists').query("exists!='both'")
print(diff)
Approach 2 (Difflib) - Terrible wait for this huge csv
import difflib

def CompareUsingDiffLib(file1_lines, file2_lines):
    h = difflib.HtmlDiff()  # h was not defined in the original; assuming an HtmlDiff instance
    html = h.make_file(file1_lines, file2_lines, context=True, numlines=0)
    htmlfilepath = filePath + "\\htmlFiles"  # filePath is defined elsewhere
    with open(htmlfilepath, 'w') as fh:
        fh.write(html)

with open(file1) as f, open(file2) as z:  # file1 and file2 are the csv paths
    file1_lines = f.readlines()
    file2_lines = z.readlines()
CompareUsingDiffLib(file1_lines, file2_lines)
Approach 3 (Pure python) - Incorrect results
with open(f1) as f, open(f2) as z:
    file1 = f.readlines()
    file2 = z.readlines()

# check row number of diff in file 1
for line in file1:
    if line not in file2:
        print(file1.index(line))
# it shows that all the rows from row number 278 to the last row
# are not in file 2, which is incorrect
# I checked using difflib, and using Excel as well
# no idea why the results are like that

# running the code below shows the same result as the first block
for line in file2:
    if line not in file1:
        print(file2.index(line))
Approach 4 (csv-diff) - Terrible wait
from csv_diff import load_csv, compare
diff = compare(
    load_csv(open("one.csv")),
    load_csv(open("two.csv"))
)
Can anybody please help with either:
An approach with less processing time
Debugging Approach 3
Comparing the files with readlines() and just testing for membership ("this in that?") does not equal diff'ing the lines.
with open(f1) as f, open(f2) as z:
    file1 = f.readlines()
    file2 = z.readlines()

for line in file1:
    if line not in file2:
        print(file1.index(line))
Consider these two CSVs:
file1.csv file2.csv
----------- -----------
a,b,c,d a,b,c,d
1,2,3,4 1,2,3,4
A,B,C,D i,ii,iii,iv
i,ii,iii,iv A,B,C,D
That script will produce nothing (and give the false impression there's no diff) because every line in file 1 is in file 2, even though the files differ line-for-line. (I cannot say why you think you were getting false positives, though, without seeing the files.)
I recommend using the CSV module and iterating the files row by row, and then even column by column:
import csv

path1 = "file1.csv"
path2 = "file2.csv"

with open(path1) as f1, open(path2) as f2:
    reader1 = csv.reader(f1)
    reader2 = csv.reader(f2)

    for i, row1 in enumerate(reader1):
        try:
            row2 = next(reader2)
        except StopIteration:
            print(f"Row {i+1}, f1 has this extra row compared to f2")
            continue

        if row1 == row2:
            continue

        if len(row1) != len(row2):
            print(f"Row {i+1} of f1 has {len(row1)} cols, f2 has {len(row2)} cols")
            continue

        for j, cell1 in enumerate(row1):
            cell2 = row2[j]
            if cell1 != cell2:
                print(f'Row {i+1}, Col {j+1} of f1 is "{cell1}", f2 is "{cell2}"')

    for row2 in reader2:
        i += 1
        print(f"Row {i+1}, f2 has this extra row compared to f1")
This uses an iterator over file1 to drive an iterator over file2, accounts for any difference in row counts between the two files by catching the StopIteration raised when file1 has more rows than file2, and, at the very bottom, prints any rows left to read in file2 (reader2) as extras.
When I run that against these files:
file1 file2
----------- ----------
a,b,c,d a,b,c
1,2,3,4 1,2,3,4
A,B,C,D A,B,C,Z
i,ii,iii,iv i,ii,iii,iv
x,xo,xox,xoxo
I get:
Row 1 of f1 has 4 cols, f2 has 3 cols
Row 3, Col 4 of f1 is "D", f2 is "Z"
Row 5, f2 has this extra row compared to f1
If I swap path1 and path2, I get this:
Row 1 of f1 has 3 cols, f2 has 4 cols
Row 3, Col 4 of f1 is "Z", f2 is "D"
Row 5, f1 has this extra row compared to f2
And it does this fast. I mocked up two 800 x 11_000 CSVs with very, very small differences between rows (if any) and it processed all diffs in under a second of user time (not counting printing).
You can use filecmp to compare two files byte by byte (see the docs). Implementation:
>>> import filecmp
>>> filecmp.cmp('somepath/file1.csv', 'otherpath/file1.csv')
True
>>> filecmp.cmp('somepath/file1.csv', 'otherpath/file2.csv')
True
Note: the file name doesn't matter.
speed comparison against hashing: https://stackoverflow.com/a/1072576/16239119

How can I append columns from csv files to one file? [duplicate]

This question already has answers here:
How to add a new column to a CSV file?
(11 answers)
Closed 4 years ago.
I'm writing a script in Python. I have a bunch of CSV files that each contain 1 column. Here is what the files might look like:
FirstFile.csv
First
a
b
c
SecondFile.csv
Second
a2
b2
c2
I want some resultant file (let's call it result.csv) to be created that looks like:
First Second
a a2
b b2
c c2
How can I combine all the CSVs in a directory in Python, appending all the columns, so I have a result.csv that looks like this (but, of course, with many more columns)?
You can try using Pandas.
import pandas as pd
result = pd.concat([ pd.read_csv(f) for f in filenames ],axis=1)
result.to_csv("result.csv",index=False)
Create a list of your file names (e.g. filenames)
Import Pandas
Use the concat function with a list comprehension
You can use the csv module:
Create 10 files:
filenames = []
for i in range(10):
    filenames.append(f"file_{i}.txt")
    with open(filenames[-1], "w") as f:
        f.write(f"Header{i}\n")
        for row in range(5):
            f.write(f"text_{i}_{row}\n")
Read in all files:
data = []
for f in filenames:  # filled when creating the files; you can use os.walk to fill yours
    with open(f) as r:
        data.append([x.strip() for x in r])

# data is a list of columns; we need a list of rows, so we transpose the data:
transpose = zip(*data)

# write the joined file
import csv
with open("joined.txt", "w", newline="") as j:
    w = csv.writer(j)
    w.writerows(transpose)
Check if it is ok:
with open("joined.txt") as j:
    print(j.read())
Output:
Header0,Header1,Header2,Header3,Header4,Header5,Header6,Header7,Header8,Header9
text_0_0,text_1_0,text_2_0,text_3_0,text_4_0,text_5_0,text_6_0,text_7_0,text_8_0,text_9_0
text_0_1,text_1_1,text_2_1,text_3_1,text_4_1,text_5_1,text_6_1,text_7_1,text_8_1,text_9_1
text_0_2,text_1_2,text_2_2,text_3_2,text_4_2,text_5_2,text_6_2,text_7_2,text_8_2,text_9_2
text_0_3,text_1_3,text_2_3,text_3_3,text_4_3,text_5_3,text_6_3,text_7_3,text_8_3,text_9_3
text_0_4,text_1_4,text_2_4,text_3_4,text_4_4,text_5_4,text_6_4,text_7_4,text_8_4,text_9_4
data looks like this:
[['Header0', 'text_0_0', 'text_0_1', 'text_0_2', 'text_0_3', 'text_0_4'],  # one file's data
['Header1', 'text_1_0', 'text_1_1', 'text_1_2', 'text_1_3', 'text_1_4'],
['Header2', 'text_2_0', 'text_2_1', 'text_2_2', 'text_2_3', 'text_2_4'],
['Header3', 'text_3_0', 'text_3_1', 'text_3_2', 'text_3_3', 'text_3_4'],
['Header4', 'text_4_0', 'text_4_1', 'text_4_2', 'text_4_3', 'text_4_4'],
['Header5', 'text_5_0', 'text_5_1', 'text_5_2', 'text_5_3', 'text_5_4'],
['Header6', 'text_6_0', 'text_6_1', 'text_6_2', 'text_6_3', 'text_6_4'],
['Header7', 'text_7_0', 'text_7_1', 'text_7_2', 'text_7_3', 'text_7_4'],
['Header8', 'text_8_0', 'text_8_1', 'text_8_2', 'text_8_3', 'text_8_4'],
['Header9', 'text_9_0', 'text_9_1', 'text_9_2', 'text_9_3', 'text_9_4']]
Transposed it looks like:
[('Header0', 'Header1', 'Header2', 'Header3', 'Header4', 'Header5', 'Header6', 'Header7', 'Header8', 'Header9'),
('text_0_0', 'text_1_0', 'text_2_0', 'text_3_0', 'text_4_0', 'text_5_0', 'text_6_0', 'text_7_0', 'text_8_0', 'text_9_0'),
('text_0_1', 'text_1_1', 'text_2_1', 'text_3_1', 'text_4_1', 'text_5_1', 'text_6_1', 'text_7_1', 'text_8_1', 'text_9_1'),
('text_0_2', 'text_1_2', 'text_2_2', 'text_3_2', 'text_4_2', 'text_5_2', 'text_6_2', 'text_7_2', 'text_8_2', 'text_9_2'),
('text_0_3', 'text_1_3', 'text_2_3', 'text_3_3', 'text_4_3', 'text_5_3', 'text_6_3', 'text_7_3', 'text_8_3', 'text_9_3'),
('text_0_4', 'text_1_4', 'text_2_4', 'text_3_4', 'text_4_4', 'text_5_4', 'text_6_4', 'text_7_4', 'text_8_4', 'text_9_4')]
I'm sure there are more pythonic ways, but this will work (so long as all files have an identical number of lines).
input_files = ['FirstFile.csv', 'SecondFile.csv']
csv_separator = '\t'

data = []
for file in input_files:
    partial_data = []
    with open(file, 'r') as f:
        for line in f:
            partial_data.append(line.strip('\n'))
    data.append(partial_data)

with open('output.csv', 'w') as output:
    for item in range(len(data[0])):
        line = []
        for part in range(len(data)):
            line.append(data[part][item])
        output.write(csv_separator.join(line) + '\n')
If you're looking for a pure Python solution, it's probably best to use csv.DictReader and csv.DictWriter so you have more control over how the data is formatted. Also, everything is 'generated' on the fly, so it will be more memory efficient with very large files.
import csv

with open('csv1.csv') as csv1, open('csv2.csv') as csv2:
    r1 = csv.DictReader(csv1)
    r2 = csv.DictReader(csv2)
    with open('csv3.csv', 'w') as csv3:
        writer = csv.DictWriter(csv3,
                                fieldnames=["First", "Second"],
                                lineterminator='\n')
        writer.writeheader()
        writer.writerows({**x, **y} for x, y in zip(r1, r2))

Loop within loop when comparing csv files in Python

I have two csv files. I am trying to look up a value from the first column of one file (file 1) in the first column of the other file (file 2). If they match, then print the row from file 2.
Pseudo code:
read file1.csv
read file2.csv
loop through file1
    compare each row with each row of file 2 in turn
    if file1[0] == file2[0]:
        print row of file 2
file1:
45,John
46,Fred
47,Bill
File2:
46,Roger
48,Pete
49,Bob
I want it to print :
46 Roger
EDIT - these are examples, the actual file is much bigger (5,000 rows, 7 columns)
I have the following:
import csv
with open('csvfile1.csv', 'rt') as csvfile1, open('csvfile2.csv', 'rt') as csvfile2:
    csv1reader = csv.reader(csvfile1)
    csv2reader = csv.reader(csvfile2)
    for rowcsv1 in csv1reader:
        for rowcsv2 in csv2reader:
            if rowcsv1[0] == rowcsv2[0]:
                print(rowcsv1)
However I am getting no output.
I am aware there are other ways of doing it (with dict, pandas) but I am keen to know why my approach is not working.
EDIT: I now see that it is only iterating through the first row of file 1 and then closing, but I am unclear how to stop it closing (I also understand that this is not the best way to do it).
You open csv2reader = csv.reader(csvfile2), then iterate through all of it against the first row of csv1reader - it has now reached end of file and will not produce any more data.
So for the second through last rows of csv1reader you are comparing against an exhausted reader, i.e. no comparison takes place.
In any case, this is a very inefficient method; unless you are working on very large files, it would be much better to do
import csv

# load the second file as a lookup table
data = {}
with open("csv2file.csv") as inf2:
    for row in csv.reader(inf2):
        data[row[0]] = row

# now process the first file against it
with open("csv1file.csv") as inf1:
    for row in csv.reader(inf1):
        if row[0] in data:
            print(data[row[0]])
See Hugh Bothwell's answer for why your code isn't working. For a fast way of doing what you stated you want to do in your question, try this:
import csv

with open('csvfile1.csv', 'rt') as csvfile1, open('csvfile2.csv', 'rt') as csvfile2:
    csv1 = list(csv.reader(csvfile1))
    csv2 = list(csv.reader(csvfile2))

duplicates = {a[0] for a in csv1} & {a[0] for a in csv2}
for row in csv2:
    if row[0] in duplicates:
        print(row)
It gets the duplicate numbers from the two CSV files, then loops through the second CSV file, printing the row if the number at index 0 is in the first CSV file. This is a much faster algorithm than what you were attempting to do.
If order matters, as Hugh Bothwell mentioned in Will Da Silva's answer, you could do:
import csv
from collections import OrderedDict

with open('csvfile1.csv', 'rt') as csvfile1, open('csvfile2.csv', 'rt') as csvfile2:
    csv1 = list(csv.reader(csvfile1))
    csv2 = list(csv.reader(csvfile2))

d = {row[0]: row for row in csv2}
keys = OrderedDict.fromkeys([a[0] for a in csv1]).keys()
duplicate_keys = [k for k in keys if k in d]
for k in duplicate_keys:
    print(d[k])
I'm pretty sure there's a better way to do this, but try out this solution, it should work.
counter = 0
import csv

with open('csvfile1.csv', 'rt') as csvfile1, open('csvfile2.csv', 'rt') as csvfile2:
    csv1reader = csv.reader(csvfile1)
    csv2reader = csv.reader(csvfile2)
    for rowcsv1 in csv1reader:
        for rowcsv2 in csv2reader:
            if rowcsv1[counter] == rowcsv2[counter]:
                print(rowcsv1)
        counter += 1  # increment it outside of the if statement

How do I search through a very large csv file?

I have 2 csv files (well, one of them is .tab), both of them with 2 columns of numbers. My job is to go through each row of the first file, and see if it matches any of the rows in the second file. If it does, I print a blank line to my output csv file. Otherwise, I print 'R,R' to the output csv file. My current algorithm does the following:
Scan each row of the second file (two integers each), go to the position of those two integers in a 2D array (so if the integers are 2 and 3, I'll go to position [2,3]) and assign a value of 1.
Go through each row of the first file, check if the position of the two integers of each row has a value of 1 in the array, and then print the according output to a third csv file.
Unfortunately the csv files are very large, so I instantly get "MemoryError:" when running this. What is an alternative for scanning through large csv files?
I am using Jupyter Notebook. My code:
import csv
import numpy

def SNP():
    thelines = numpy.ndarray((6639, 524525))
    with open("SL05_AO_RO.tab") as tsv:
        for line in csv.reader(tsv, dialect="excel-tab"):
            tempint = int(line[0])
            tempint2 = int(line[1])
            thelines[tempint, tempint2] = 1
    return thelines

def common_sites():
    temparray = SNP()
    print('Checkpoint.')
    with open('output_SL05.csv', 'w', newline='') as fp:
        with open("covbreadth_common_sites.csv") as tsv:
            for line in csv.reader(tsv, dialect="excel-tab"):
                tempint = int(line[0])
                tempint2 = int(line[1])
                if temparray[tempint, tempint2] == 1:
                    a = csv.writer(fp, delimiter=',')
                    data = [['', '']]
                    a.writerows(data)
                else:
                    a = csv.writer(fp, delimiter=',')
                    data = [['R', 'R']]
                    a.writerows(data)
    print('Done.')
    return

common_sites()
Files:
https://drive.google.com/file/d/0B5v-nJeoVouHUjlJelZtV01KWFU/view?usp=sharing and https://drive.google.com/file/d/0B5v-nJeoVouHSDI4a2hQWEh3S3c/view?usp=sharing
Your dataset really isn't that big, but it is relatively sparse. You aren't using a sparse structure to store the data, which is causing the problem.
Just use a set of tuples to store the seen data; the lookup on that set is O(1), e.g.:
In [1]: import csv
   ...: with open("SL05_AO_RO.tab") as tsv:
   ...:     seen = set(map(tuple, csv.reader(tsv, dialect="excel-tab")))
   ...: with open("covbreadth_common_sites.csv") as tsv:
   ...:     common = [line for line in csv.reader(tsv, dialect="excel-tab")
   ...:               if tuple(line) in seen]
   ...: common[:10]
Out[1]:
[['1049', '7280'], ['1073', '39198'], ['1073', '39218'], ['1073', '39224'], ['1073', '39233'],
 ['1098', '661'], ['1098', '841'], ['1103', '15100'], ['1103', '15107'], ['1103', '28210']]
# timing: 10 loops, best of 3: 150 ms per loop

In [2]: len(common), len(seen)
Out[2]: (190, 138205)
I have 2 csv files (well, one of them is .tab), both of them with 2 columns of numbers. My job is to go through each row of the first file, and see if it matches any of the rows in the second file. If it does, I print a blank line to my output csv file. Otherwise, I print 'R,R' to the output csv file.
import numpy as np

f1 = np.loadtxt('SL05_AO_RO.tab')
f2 = np.loadtxt('covbreadth_common_sites.csv')

# sort rows lexicographically by (first, second) column;
# note: f1.sort(axis=0) would sort each column independently and scramble the rows
f1 = f1[np.lexsort((f1[:, 1], f1[:, 0]))]
f2 = f2[np.lexsort((f2[:, 1], f2[:, 0]))]

i, j = 0, 0
while i < f1.shape[0]:
    while j < f2.shape[0] and f1[i][0] > f2[j][0]:
        j += 1
    while j < f2.shape[0] and f1[i][0] == f2[j][0] and f1[i][1] > f2[j][1]:
        j += 1
    if j < f2.shape[0] and np.array_equal(f1[i], f2[j]):
        print()
    else:
        print('R,R')
    i += 1
Load data to ndarray to optimize memory usage
Sort data
Find matches in sorted arrays
Total complexity is O(n*log(n) + m*log(m)), where n and m are sizes of input files.
Using a set() will not reduce memory usage per unique entry, so I do not recommend it with large datasets.
Since a CSV is just a DB dump, import it into any SQL DB and then run queries on it. This is a very efficient way.

Merge 2 csv file with one unique column but different header [duplicate]

This question already has answers here:
Merging two CSV files using Python
(2 answers)
Closed 7 years ago.
I want to merge two CSV files using some scripting language (like a bash script or Python).
1st.csv (this data is from mysql query)
member_id,name,email,desc
03141,ej,ej#domain.com,cool
00002,jes,jes#domain.com,good
00002,charmie,charm#domain.com,sweet
2nd.csv (from mongodb query)
id,address,create_date
00002,someCity,20150825
00003,newCity,20140102
11111,,20150808
The examples are not the actual data, though I know that some of the member_id values from MySQL and the id values from MongoDB are the same.
(And I wish my output to be something like this:)
desiredoutput.csv
meber_id,name,email,desc,address,create_date
03141,ej,ej#domain.com,cool,,
00002,jes,jes#domain.com,good,someCity,20150825
00002,charmie,charm#domain.com,sweet,
11111,,,,20150808
Help will be much appreciated. Thanks in advance.
#########################################################################
#!/usr/bin/python
import csv
import itertools as IT

filenames = ['1st.csv', '2nd.csv']
handles = [open(filename, 'rb') for filename in filenames]
readers = [csv.reader(f, delimiter=',') for f in handles]

with open('desiredoutput.csv', 'wb') as h:
    writer = csv.writer(h, delimiter=',', lineterminator='\n')
    for rows in IT.izip_longest(*readers, fillvalue=[''] * 2):
        combined_row = []
        for row in rows:
            row = row[:1]  # column where I know there is identical data
            if len(row) == 1:
                combined_row.extend(row)
            else:
                combined_row.extend([''] * 1)
        writer.writerow(combined_row)

for f in handles:
    f.close()
#########################################################################
I just read and tried (manipulating) this code from this site too.
Since you haven't posted an attempt, I'll give you a general answer (using Python) to get you started.
Create a dict, d
Iterate over all the rows of the first file, convert each row into a list and store it in d using meber_id as the key and the list as the value.
Iterate over all the rows of the second file, convert each row into a list leaving out the id column and update the list under d[id] with the new list if d[id] exists, otherwise store the new list under d[id].
Finally, iterate over the values in d and print them out comma separated to a file.
Edit
In your attempt, you are trying to use izip_longest to iterate over the rows of both files at the same time. But this would work only if there were an equal number of rows in both files and they were in the same order.
Anyhow, here is one way of doing it.
Note: This is using the Python 3.4+ csv module. For 2.7 it might look a little different.
import csv

d = {}
with open("file1.csv", newline="") as f:
    for row in csv.reader(f):
        # pad each 4-column row from file 1 with empty address and create_date columns
        d.setdefault(row[0], []).append(row + [""] * 2)

with open("file2.csv", newline="") as f:
    for row in csv.reader(f):
        # if the id was never seen in file 1, start a padded row for it
        rows = d.setdefault(row[0], [[row[0], "", "", "", "", ""]])
        for old_row in rows:
            old_row[4:] = row[1:]

with open("out.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for rows in d.values():
        writer.writerows(rows)
Here is a suggestion using pandas that I got from this answer and the pandas doc about merging.
import pandas as pd
first = pd.read_csv('1st.csv')
second = pd.read_csv('2nd.csv')
merged = pd.concat([first, second], axis=1)
This will output:
  meber_id     name             email   desc     id   address  create_date
0     3141       ej     ej#domain.com   cool      2  someCity     20150825
1        2      jes    jes#domain.com   good      3   newCity     20140102
2        2  charmie  charm#domain.com  sweet  11111       NaN     20150808
