I need to compare the values of the items in two different dictionaries.
Let's say that dictionary RawData has items that represent phone numbers and number names.
RawData, for example, has items like: {'name': 'Customer Service', 'number': '123987546'} {'name': 'Switchboard', 'number': '48621364'}
Now, I got dictionary FilteredData, which already contains some items from RawData: {'name': 'IT-support', 'number': '32136994'} {'name': 'Company Customer Service', 'number': '123987546'}
As you can see, Customer Service and Company Customer Service both have the same values, but different keys. In my project, there might be hundreds of similar duplicates, and we only want unique numbers to end up in FilteredData.
FilteredData is what we will be using later in the code, and RawData will be discarded.
Their names (keys) can be close duplicates, but not their numbers (values).
There are two ways to do this.
A. Remove the duplicate items in RawData, before appending them into FilteredData.
B. Append them into FilteredData, and go through the numbers (values) there, removing the duplicates. Can I use a set here to do that? It would work on a list, obviously.
I'm not looking for the most time-efficient solution. I'd like the simplest and easiest-to-learn one, for if and when someone takes over my job someday. In my project it's mandatory for the next person working on the code to get a quick grip of it.
I've already looked at sets, and tried to tackle the problem by nesting two for loops, but something tells me there's got to be an easier way.
Of course I might have missed the obvious solution here.
Thanks in advance!
I hope I understand your problem here:
data = [{'name': 'Customer Service', 'number': '123987546'}, {'name': 'Switchboard', 'number': '48621364'}]
newdata = [{'name': 'IT-support', 'number': '32136994'}, {'name': 'Company Customer Service', 'number': '123987546'}]
def main():
    # collect the numbers already present in data
    numbers = set()
    for entry in data:
        numbers.add(entry['number'])
    # append only the newdata entries whose number has not been seen yet
    for entry in newdata:
        if entry['number'] not in numbers:
            data.append(entry)
    print data

main()
Output:
[{'name': 'Customer Service', 'number': '123987546'},
{'name': 'Switchboard', 'number': '48621364'},
{'name': 'IT-support', 'number': '32136994'}]
What you can do is take dict.values(), create a set of those to remove duplicates, and then go through the old dictionary, find the first key with each value, and add it to a new one. Keep the set around: when you get the next entry, try adding its number to the set and check whether the set is longer than it was before the add. If it is, the number is unique and you can add that entry to the result.
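A minimal sketch of that length-check idea, assuming RawData and FilteredData are the lists of dicts from the question (variable names here are illustrative):

seen = set()
unique_items = []
# FilteredData goes first so its entries win when a number appears in both lists
for item in FilteredData + RawData:
    size_before = len(seen)
    seen.add(item['number'])
    if len(seen) > size_before:  # the number had not been seen before
        unique_items.append(item)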
If you're willing to change how FilteredData is currently structured, you can just use a dict and use the number as your key:
RawData = [
{'name': 'Customer Service', 'number': '123987546'},
{'name': 'Switchboard', 'number': '48621364'}
]
# Change how FilteredData is structured
FilteredDataMap = {
'32136994':
{'name': 'IT-support', 'number': '32136994'},
'123987546':
{'name': 'Company Customer Service', 'number': '123987546'}
}
for item in RawData:
    number = item.get('number')
    if number not in FilteredDataMap:
        FilteredDataMap[number] = item
# If you need the list of items
FilteredData = list(FilteredDataMap.values())
You can just pull the actual list from the Map using .values()
I take it the numbers are unique. In that case, another solution would be to take advantage of the uniqueness of dictionary keys. This means converting each list of dictionaries to a dictionary of number:name pairs. Then you simply need to update RawData with FilteredData.
RawData = [
{'name': 'Customer Service', 'number': '123987546'},
{'name': 'Switchboard', 'number': '48621364'}
]
FilteredData = [
{'name': 'IT-support', 'number': '32136994'},
{'name': 'Company Customer Service', 'number': '123987546'}
]
def convert_list(input_list):
    return {item['number']: item['name'] for item in input_list}

def unconvert_dict(input_dict):
    return [{'name': val, 'number': key} for key, val in input_dict.items()]

NewRawData = convert_list(RawData)
NewFilteredData = convert_list(FilteredData)
NewRawData.update(NewFilteredData)  # update() works in place and returns None
DesiredResultConverted = NewRawData
DesiredResult = unconvert_dict(DesiredResultConverted)
In this example, the variables will have the following values:
NewRawData = {'123987546':'Customer Service', '48621364': 'Switchboard'}
NewFilteredData = {'32136994': 'IT-support', '123987546': 'Company Customer Service'}
When you update NewRawData with NewFilteredData, Company Customer Service will overwrite Customer Service as the value associated with the key 123987546. So,
DesiredResultConverted = {'123987546':'Company Customer Service', '48621364': 'Switchboard', '32136994': 'IT-support'}
Then, if you still prefer the original format, you can "unconvert" back.
I have a csv containing ~45,000 rows, which equates to seven days' worth of data. It has been sorted by datetime, with the oldest record first.
This is a sample row once the csv has been passed into the csv module's DictReader:
{'end': '423', 'g': '2', 'endid': '17131', 'slat': '40.7', 'endname': 'Horchata', 'cid': '1', 'startname': 'Sriracha', 'startid': '521', 'slon': '-73.9', 'usertype': 'Sub', 'stoptime': '2015-02-01 00:14:00+00', 'elong': '-73.9', 'starttime': '2015-02-01 00:00:00+00', 'elat': '40.7', 'dur': '801', 'meppy': '', 'birth_year': '1978'}
...and another:
{'end': '418', 'g': '1', 'endid': '17108', 'slat': '40.7', 'endname': 'Guacamole', 'cid': '1', 'startname': 'Cerveza', 'startid': '519', 'slon': '-73.9', 'usertype': 'Sub', 'stoptime': '2015-02-01 00:14:00+00', 'elong': '-73.9', 'starttime': '2015-02-02 00:00:00+00', 'elat': '40.7', 'dur': '980', 'meppy': '', 'birth_year': '1983'}
I recently wrote the code below. It runs through the csv (after it's been passed to DictReader). The code yields the first row of each new day, i.e. whenever the day changes, based on starttime:
import dateutil.parser

dayList = []

def first_ride(reader):
    for row in reader:
        starttime = dateutil.parser.parse(row['starttime'])
        if starttime.day not in dayList:
            dayList.append(starttime.day)
            yield row
My goal now is to produce a single list containing the value associated with birth_year from each of the seven records, i.e.:
[1992, 1967, 1988, 1977, 1989, 1953, 1949]
The catch is that I want to understand how to do it using Python's HOFs to the maximum extent possible (i.e. map/reduce, and likely filter), without the generator (currently used in my code), and without global variables. To eliminate the global variable, my guess is that each starttime's day will have to be compared to the one before, but not using the list, as I currently have it set up. As a final FYI, I run Python 2.7.
I majorly appreciate any expertise donated.
You can just reduce dayList (here, the list of first-day rows collected from your generator) into a list of birth_years:
reduce(lambda r, d: r + [d['birth_year']], dayList, [])
Or you can use a comprehension (preferred):
[d['birth_year'] for d in dayList]
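If you also want to drop the generator and the global list, one rough way (a sketch, not the only one) is to fold both steps into a single reduce over the reader. This assumes Python 2.7 and dateutil, as in your code, and that one week of data means day numbers don't repeat:

import dateutil.parser

def pick_first_rides(acc, row):
    # acc carries a (seen_days, birth_years) pair through the rows
    seen_days, years = acc
    day = dateutil.parser.parse(row['starttime']).day
    if day not in seen_days:
        return (seen_days | {day}, years + [int(row['birth_year'])])
    return acc

seen_days, birth_years = reduce(pick_first_rides, reader, (frozenset(), []))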
I have a CSV with two columns, column one is the team dedicated to a particular building in our project.
The second column is the actual building number.
What I am looking for is a dictionary with the first column as the key and a list of the buildings that belong to that team as the value.
I have tried various forms of csv.reader and csv.DictReader along with different for loops to rewrite the data to another dictionary, but I cannot get the structure I want.
CSV:
team,bldg,
3,204,
3,250,
3,1437,
2,1440,
1,1450,
The structure of the dictionary would be as follows:
dict["1"] = ["1450"]
dict["2"] = ["1440"]
dict["3"] = ["204", "250", "1437"]
This works:
import csv

result = {}
with open('/tmp/test.csv', 'r') as f:
    red = csv.DictReader(f)
    for d in red:
        result.setdefault(d['team'], []).append(d['bldg'])

# result == {'1': ['1450'], '3': ['204', '250', '1437'], '2': ['1440']}
The useful collections.defaultdict in the standard library makes short work of this task:
import csv
import collections as co

dd = co.defaultdict(list)
with open('/path/to/your.csv', 'rb') as fin:
    dr = csv.DictReader(fin)
    for line in dr:
        dd[line['team']].append(line['bldg'])

# defaultdict(<type 'list'>, {'1': ['1450'], '3': ['204', '250', '1437'], '2': ['1440']})
http://docs.python.org/2/library/collections.html#collections.defaultdict
The first argument provides the initial value for the default_factory
attribute; it defaults to None.
My scenario is as follows: I have a table of data (a handful of fields, fewer than a hundred rows) that I use extensively in my program. I also need this data to be persistent, so I save it as a CSV and load it on start-up. I choose not to use a database because every option (even SQLite) is overkill for my humble requirement (also, I would like to be able to edit the values offline in a simple way, and nothing is simpler than Notepad).
Assume my data looks as follows (in the file it's comma separated without titles, this is just an illustration):
Row | Name | Year | Priority
------------------------------------
1 | Cat | 1998 | 1
2 | Fish | 1998 | 2
3 | Dog | 1999 | 1
4 | Aardvark | 2000 | 1
5 | Wallaby | 2000 | 1
6 | Zebra | 2001 | 3
Notes:
Row may be a "real" value written to the file or just an auto-generated value that represents the row number. Either way it exists in memory.
Names are unique.
Things I do with the data:
Look-up a row based on either ID (iteration) or name (direct access).
Display the table in different orders based on multiple fields: I need to sort it e.g. by Priority and then Year, or Year and then Priority, etc.
I need to count instances based on sets of parameters, e.g. how many rows have their year between 1997 and 2002, or how many rows are in 1998 and priority > 2, etc.
I know this "cries" for SQL...
I'm trying to figure out what's the best choice for data structure. Following are several choices I see:
List of row lists:
a = []
a.append( [1, "Cat", 1998, 1] )
a.append( [2, "Fish", 1998, 2] )
a.append( [3, "Dog", 1999, 1] )
...
List of column lists (there will obviously be an API for add_row etc):
a = []
a.append( [1, 2, 3, 4, 5, 6] )
a.append( ["Cat", "Fish", "Dog", "Aardvark", "Wallaby", "Zebra"] )
a.append( [1998, 1998, 1999, 2000, 2000, 2001] )
a.append( [1, 2, 1, 1, 1, 3] )
Dictionary of column lists (constants can be created to replace the string keys):
a = {}
a['ID'] = [1, 2, 3, 4, 5, 6]
a['Name'] = ["Cat", "Fish", "Dog", "Aardvark", "Wallaby", "Zebra"]
a['Year'] = [1998, 1998, 1999, 2000, 2000, 2001]
a['Priority'] = [1, 2, 1, 1, 1, 3]
Dictionary with keys being tuples of (Row, Field):
# Create constants to avoid string searching
NAME=1
YEAR=2
PRIORITY=3
a={}
a[(1, NAME)] = "Cat"
a[(1, YEAR)] = 1998
a[(1, PRIORITY)] = 1
a[(2, NAME)] = "Fish"
a[(2, YEAR)] = 1998
a[(2, PRIORITY)] = 2
...
And I'm sure there are other ways... However each way has disadvantages when it comes to my requirements (complex ordering and counting).
What's the recommended approach?
EDIT:
To clarify, performance is not a major issue for me. Because the table is so small, I believe almost every operation will be in the range of milliseconds, which is not a concern for my application.
Having a "table" in memory that needs lookups, sorting, and arbitrary aggregation really does call out for SQL. You said you tried SQLite, but did you realize that SQLite can use an in-memory-only database?
connection = sqlite3.connect(':memory:')
Then you can create/drop/query/update tables in memory with all the functionality of SQLite and no files left over when you're done. And as of Python 2.5, sqlite3 is in the standard library, so it's not really "overkill" IMO.
Here is a sample of how one might create and populate the database:
import csv
import sqlite3
db = sqlite3.connect(':memory:')
def init_db(cur):
    cur.execute('''CREATE TABLE foo (
        Row INTEGER,
        Name TEXT,
        Year INTEGER,
        Priority INTEGER)''')

def populate_db(cur, csv_fp):
    rdr = csv.reader(csv_fp)
    cur.executemany('''
        INSERT INTO foo (Row, Name, Year, Priority)
        VALUES (?,?,?,?)''', rdr)

cur = db.cursor()
init_db(cur)
populate_db(cur, open('my_csv_input_file.csv'))
db.commit()
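Not part of the original snippet, but to sketch how the sorting and counting requirements from the question might then look, here are a few illustrative queries against the foo table created above:

# sort by Priority, then Year
cur.execute('SELECT * FROM foo ORDER BY Priority, Year')
print cur.fetchall()

# how many rows have 1997 <= Year <= 2002?
cur.execute('SELECT COUNT(*) FROM foo WHERE Year BETWEEN 1997 AND 2002')
print cur.fetchone()[0]

# how many rows are in 1998 with Priority > 2?
cur.execute('SELECT COUNT(*) FROM foo WHERE Year = 1998 AND Priority > 2')
print cur.fetchone()[0]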
If you'd really prefer not to use SQL, you should probably use a list of dictionaries:
lod = []  # "list of dicts"

def populate_lod(lod, csv_fp):
    rdr = csv.DictReader(csv_fp, ['Row', 'Name', 'Year', 'Priority'])
    lod.extend(rdr)

def query_lod(lod, filter=None, sort_keys=None):
    # optionally filter, then optionally sort, the list of dicts
    if filter is not None:
        lod = (r for r in lod if filter(r))
    if sort_keys is not None:
        lod = sorted(lod, key=lambda r: [r[k] for k in sort_keys])
    else:
        lod = list(lod)
    return lod

def lookup_lod(lod, **kw):
    # return the first row matching every keyword criterion, or None
    for row in lod:
        for k, v in kw.iteritems():
            if row[k] != str(v):
                break
        else:
            return row
    return None
Testing then yields:
>>> lod = []
>>> populate_lod(lod, csv_fp)
>>>
>>> pprint(lookup_lod(lod, Row=1))
{'Name': 'Cat', 'Priority': '1', 'Row': '1', 'Year': '1998'}
>>> pprint(lookup_lod(lod, Name='Aardvark'))
{'Name': 'Aardvark', 'Priority': '1', 'Row': '4', 'Year': '2000'}
>>> pprint(query_lod(lod, sort_keys=('Priority', 'Year')))
[{'Name': 'Cat', 'Priority': '1', 'Row': '1', 'Year': '1998'},
{'Name': 'Dog', 'Priority': '1', 'Row': '3', 'Year': '1999'},
{'Name': 'Aardvark', 'Priority': '1', 'Row': '4', 'Year': '2000'},
{'Name': 'Wallaby', 'Priority': '1', 'Row': '5', 'Year': '2000'},
{'Name': 'Fish', 'Priority': '2', 'Row': '2', 'Year': '1998'},
{'Name': 'Zebra', 'Priority': '3', 'Row': '6', 'Year': '2001'}]
>>> pprint(query_lod(lod, sort_keys=('Year', 'Priority')))
[{'Name': 'Cat', 'Priority': '1', 'Row': '1', 'Year': '1998'},
{'Name': 'Fish', 'Priority': '2', 'Row': '2', 'Year': '1998'},
{'Name': 'Dog', 'Priority': '1', 'Row': '3', 'Year': '1999'},
{'Name': 'Aardvark', 'Priority': '1', 'Row': '4', 'Year': '2000'},
{'Name': 'Wallaby', 'Priority': '1', 'Row': '5', 'Year': '2000'},
{'Name': 'Zebra', 'Priority': '3', 'Row': '6', 'Year': '2001'}]
>>> print len(query_lod(lod, lambda r:1997 <= int(r['Year']) <= 2002))
6
>>> print len(query_lod(lod, lambda r:int(r['Year'])==1998 and int(r['Priority']) > 2))
0
Personally I like the SQLite version better since it preserves your types better (without extra conversion code in Python) and easily grows to accommodate future requirements. But then again, I'm quite comfortable with SQL, so YMMV.
A very old question I know but...
A pandas DataFrame seems to be the ideal option here.
http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.html
From the blurb
Two-dimensional size-mutable, potentially heterogeneous tabular data
structure with labeled axes (rows and columns). Arithmetic operations
align on both row and column labels. Can be thought of as a dict-like
container for Series objects. The primary pandas data structure
http://pandas.pydata.org/
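A rough sketch of what that could look like for the table in the question (the file name is made up, the CSV is assumed to have no header row, and in very old pandas versions sort_values was called sort):

import pandas as pd

df = pd.read_csv('table.csv', names=['Row', 'Name', 'Year', 'Priority'])

# sort by Priority, then Year
print(df.sort_values(['Priority', 'Year']))

# count rows with 1997 <= Year <= 2002
print(len(df[(df.Year >= 1997) & (df.Year <= 2002)]))

# count rows with Year == 1998 and Priority > 2
print(len(df[(df.Year == 1998) & (df.Priority > 2)]))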
I personally would use the list of row lists. Because the data for each row is always in the same order, you can easily sort by any of the columns by simply accessing that element in each of the lists. You can also easily count based on a particular column in each list, and make searches as well. It's basically as close as it gets to a 2-d array.
Really the only disadvantage here is that you have to know in what order the data is in, and if you change that ordering, you'll have to change your search/sorting routines to match.
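For example, sorting and counting against the list of row lists (the variable a from option 1 in the question) might look like this sketch, assuming column positions 0 = Row, 1 = Name, 2 = Year, 3 = Priority:

from operator import itemgetter

# sort by Priority, then Year
by_priority_then_year = sorted(a, key=itemgetter(3, 2))

# count rows with Year == 1998 and Priority > 2
count = sum(1 for row in a if row[2] == 1998 and row[3] > 2)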
Another thing you can do is have a list of dictionaries.
rows = []
rows.append({"ID":"1", "name":"Cat", "year":"1998", "priority":"1"})
This would avoid needing to know the order of the parameters, so you can look through each "year" field in the list.
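For instance (a sketch; note the values in this example are stored as strings, so you may want int() conversions for numeric comparisons):

# every year, without knowing column positions
years = [row["year"] for row in rows]

# sort by priority, then year
by_priority_then_year = sorted(rows, key=lambda r: (r["priority"], r["year"]))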
Have a Table class whose rows attribute is a list of dicts or, better, row objects.
In Table, do not add rows directly; instead have a method which updates a few lookup maps, e.g. for name.
If you are not adding rows in order, or the IDs are not consecutive, you can have an idMap too.
e.g.
class Table(object):
    def __init__(self):
        self.rows = []     # list of row objects; we assume rows are added in id order
        self.nameMap = {}  # for faster direct lookup of a row by name

    def addRow(self, row):
        self.rows.append(row)
        self.nameMap[row['name']] = row

    def getRow(self, name):
        return self.nameMap[name]

table = Table()
table.addRow({'ID': 1, 'name': 'a'})
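The idMap mentioned above is not in that snippet; a rough sketch of how it might be added (illustrative only):

class TableWithIdMap(Table):
    def __init__(self):
        Table.__init__(self)
        self.idMap = {}  # direct lookup of a row by its ID

    def addRow(self, row):
        Table.addRow(self, row)
        self.idMap[row['ID']] = row

    def getRowById(self, rowId):
        return self.idMap[rowId]

table = TableWithIdMap()
table.addRow({'ID': 1, 'name': 'a'})
row = table.getRowById(1)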
First, given that you have a complex data retrieval scenario, are you sure even SQLite is overkill?
You'll end up having an ad hoc, informally-specified, bug-ridden, slow implementation of half of SQLite, paraphrasing Greenspun's Tenth Rule.
That said, you are very right in saying that choosing a single data structure will impact one or more of searching, sorting or counting, so if performance is paramount and your data is constant, you could consider having more than one structure for different purposes.
Above all, measure what operations will be more common and decide which structure will end up costing less.
I personally wrote a lib for pretty much this quite recently; it is called BD_XML, and its most fundamental reason for existing is to serve as a way to send data back and forth between XML files and SQL databases. It is written in Spanish (if that matters in a programming language), but it is very simple.
from BD_XML import Tabla
It defines an object called Tabla (Table); it can be created with a name for identification and a pre-created connection object from a PEP 249-compatible database interface.
Table = Tabla('Animals')
Then you need to add columns with the agregar_columna (add_column) method, which can take various keyword arguments:
campo (field): the name of the field
tipo (type): the type of data stored; this can be things like 'varchar' and 'double', or the name of a Python object if you aren't interested in exporting to a database later.
defecto (default): sets a default value for the column if there is none when you add a row
There are three other arguments, but they are only there for database things and are not actually functional.
For example:
Table.agregar_columna(campo='Name', tipo='str')
Table.agregar_columna(campo='Year', tipo='date')
# Declaring it date, time, datetime or timestamp is important for being able to store it as a time object and not only as a number, but you can always make it an int if you don't care about dates
Table.agregar_columna(campo='Priority', tipo='int')
Then you add the rows with the += operator (or + if you want to create a copy with an extra row):
from datetime import date

Table += ('Cat', date(1998, 1, 1), 1)
Table += {'Year': date(1998, 1, 1), 'Priority': 2, 'Name': 'Fish'}
#…
# The condition for adding is that the item is a container accessible by either the column name or the position of the column in the table
Then you can generate XML and write it to a file with exportar_XML (export_XML) and escribir_XML (write_XML):
import os

file = os.path.abspath(os.path.join(os.path.dirname(__file__), 'Animals.xml'))
Table.exportar_xml()
Table.escribir_xml(file)
And then import it back with importar_XML (import_XML), giving the file name and an indication that you are using a file and not a string literal:
Table.importar_xml(file, tipo='archivo')
#archivo means file
Advanced
These are ways you can use a Tabla object in a SQL-like manner.
#UPDATE <Table> SET Name = CONCAT(Name,' ',Priority), Priority = NULL WHERE id = 2
for row in Table:
    if row['id'] == 2:
        row['Name'] += ' ' + row['Priority']
        row['Priority'] = None
print(Table)
#DELETE FROM <Table> WHERE MOD(id,2) = 0 LIMIT 1
n = 0
nmax = 1
for row in Table:
    if row['id'] % 2 == 0:
        del Table[row]
        n += 1
        if n >= nmax:
            break
print(Table)
These examples assume a column named 'id', but for your example it can be replaced with row.pos:
if row.pos == 2:
The file can be downloaded from:
https://bitbucket.org/WolfangT/librerias