I have a list of tuples in which I am trying to group similar items together.
E.g.
[('/Desktop/material_design_segment/arc_01.texture', 'freshnel_intensity_3.0022.jpg'),
('/Desktop/material_design_segment/arc_01.texture', 'freshnel_intensity_4.0009.jpg'),
('/Desktop/material_design_segment/arc_08.texture', 'freshnel_intensity_8.0020.jpg'),
('/Desktop/material_design_segment/arc_05.texture', 'freshnel_intensity_5.0009.jpg'),
('/Desktop/material_design_filters/custom/phase_03.texture', 'rounded_viscosity.0002.jpg'),
('/Desktop/material_design_filters/custom/phase_03.texture', 'freshnel_intensity_9.0019.jpg')]
My result should look like this:
'/Desktop/material_design_segment/arc_01.texture':
'freshnel_intensity_3.0022.jpg',
'freshnel_intensity_4.0009.jpg',
'/Desktop/material_design_segment/arc_08.texture':
'freshnel_intensity_8.0020.jpg'
'/Desktop/material_design_segment/arc_05.texture':
'freshnel_intensity_5.0009.jpg'
'/Desktop/material_design_filters/custom/phase_03.texture':
'rounded_viscosity.0002.jpg',
'freshnel_intensity_9.0019.jpg'
However, when I try the following code, it only keeps one item per key.
from collections import defaultdict
from pprint import pprint

groups = defaultdict(str)
for date, value in aaa:
    groups[date] = value
pprint(groups)
This is the output:
{'/Desktop/material_design_segment/arc_01.texture': 'freshnel_intensity_4.0009.jpg',
 '/Desktop/material_design_filters/custom/phase_03.texture': 'freshnel_intensity_9.0019.jpg',
 '/Desktop/material_design_segment/arc_08.texture': 'freshnel_intensity_8.0020.jpg',
 '/Desktop/material_design_segment/arc_05.texture': 'freshnel_intensity_5.0009.jpg'}
Where am I going wrong?
You're assigning value to groups[date], which overwrites the previous value. You need to append it to a list.
groups = defaultdict(list)
for date, value in aaa:
    groups[date].append(value)
You should append the values into a list as follows (based on your code):
groups = defaultdict(list)
for date, value in aaa:
    groups[date].append(value)
print(groups)
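For reference, with the sample list from the question, pprint(dict(groups)) would then show each path mapped to a list of its files, e.g.:

{'/Desktop/material_design_segment/arc_01.texture': ['freshnel_intensity_3.0022.jpg',
                                                     'freshnel_intensity_4.0009.jpg'],
 ...}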
I have two variables whose data I am trying to manipulate. The first is a list with two items:
row = [['Toyyota', 'Cammry', '3000'], ['Foord', 'Muustang', '6000']]
And the second is a dictionary with the submissions:
submission = {
    'extracted1_1': 'Toyota', 'extracted1_2': 'Camry', 'extracted1_3': '1000',
    'extracted2_1': 'Ford', 'extracted2_2': 'Mustang', 'extracted2_3': '5000',
    'reportDate': '2022-06-01T08:30', 'reportOwner': 'John Smith'}
extracted1_1 would match up with the first value in the first item from row. extracted1_2 would be the 2nd value in the 1st item, and extracted2_1 would be the 1st value in the 2nd item and so on. I'm trying to update row with the corresponding submission and having a hard time getting it to work properly.
Here's what I have currently:
iter_bit = iter(submission.values())
for bit in row:
    i = 0
    for bits in bit:
        bit[i] = next(iter_bit)
        i += 1
While this somewhat works, I'm looking for a more efficient way to do this by looping through submission rather than row. Is there an easier or more efficient way to overwrite the corresponding values in row by looping through submission?
Iterate through submission and check whether the key is in the format extractedX_Y. If it is, use X and Y as indexes into row and assign the value there.
import re

regex = re.compile(r'^extracted(\d+)_(\d+)$')
for key, value in submission.items():
    m = regex.search(key)
    if m:
        x = int(m.group(1))
        y = int(m.group(2))
        row[x-1][y-1] = value
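With the sample row and submission from the question, row then ends up as:

[['Toyota', 'Camry', '1000'], ['Ford', 'Mustang', '5000']]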
It seems you are trying to convert the portion of the keys after "extracted" to indices into row. To do this, first slice out the portion you don't need (i.e. "extracted"), then split what remains on _. Then convert each of these strings to an integer and subtract 1, because in Python indices are zero-based.
for key, value in submission.items():
    # e.g. key = 'extracted1_1', value = 'Toyota'
    if not key.startswith("extracted"):
        continue
    indices = [int(i) - 1 for i in key[9:].split("_")]
    # e.g. indices = [0, 0]
    # Set the value
    row[indices[0]][indices[1]] = value
Now you have your modified row:
[['Toyota', 'Camry', '1000'], ['Ford', 'Mustang', '5000']]
No clue if it's faster, but it's a 2-liner hahaha:
for n, val in zip(range(len(row) * 3), submission.values()):
    row[n//3][n%3] = val
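Note that this only works because the six extractedX_Y keys happen to come first in submission (assuming Python 3.7+ insertion-ordered dicts): the zip stops after len(row) * 3 values, so reportDate and reportOwner are never consumed. With a different key order it would silently write the wrong values.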
That said, I would probably do something safer in a work environment, like parsing the key for its index.
In the CSV I'm reading from, there are multiple rows for each ID:
ID,timestamp,name,text
444,2022-03-01T11:05:00.000Z,Amrita Patel,Hello
444,2022-03-01T11:06:00.000Z,Amrita Patel,Nice to meet you
555,2022-03-01T12:05:00.000Z,Zach Do,Good afternoon
555,2022-03-01T11:06:00.000Z,Zach Do,I like oranges
555,2022-03-01T11:07:00.000Z,Zach Do,definitely
I need to extract each such that I will have one file per ID, with the timestamp, name, and text in that file. For example, for ID 444, it will have 2 timestamps and 2 different texts in it, along with the name.
I'm able to get the text designated to the proper ID, using this code:
from collections import defaultdict

d = {}
l = []
list_of_lists = []
for k in csv_file:
    l.append([k['ID'], k['text']])
list_of_lists.append(l)

for key, val in list_of_lists[0]:
    d.setdefault(key, []).append(val)
The problem is that this isn't enough; I need to add the other values to the one ID key. If I try:
l.append([k['ID'],[k['text'],k['name']]])
I get
ValueError: too many values to unpack
Just use a list for the value instead:
{key: [value1, value2], ...}
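For example, a minimal sketch based on the question's own variables (assuming csv_file is the csv.DictReader over the file shown above):

d = {}
for k in csv_file:
    # group every row's fields under its ID
    d.setdefault(k['ID'], []).append([k['timestamp'], k['name'], k['text']])
# d now maps each ID to a list of [timestamp, name, text] entries,
# which can then be written out to one file per ID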
I'm importing a CSV into a dictionary, where there are a number of houses labelled (i.e. 1A, 1B, ...).
Rows are labelled with items such as 'coffee', etc. The table contains data indicating how much of each item each household needs.
Excel screenshot
What I am trying to do is check the values of the key-value pairs in the dictionary for anything that isn't blank (containing either 1 or 2), then take the key-value pair and the 'PRODUCT NUMBER' (from the CSV) and append those to a new list.
I want to create a shopping list that will contain what item I need, with what quantity, to which household.
The column containing 'week' is not important for this.
I import the CSV into python as a dictionary like this:
import csv
import pprint
from typing import List, Dict

input_file_1 = csv.DictReader(open("DATA CWK SHOPPING DATA WEEK 1 FILE B.xlsb.csv"))
table: List[Dict[str, int]] = []  # list
for row in input_file_1:
    string_row: Dict[str, int] = {}  # dictionary
    for column in row:
        string_row[column] = row[column]
    table.append(string_row)
I found on GeeksforGeeks how to access a pair by its value. However, when I try this on my dictionary, it only seems to be able to search the last row.
# creating a new dictionary
my_dict ={"java":100, "python":112, "c":11}
# list out keys and values separately
key_list = list(my_dict.keys())
val_list = list(my_dict.values())
# print key with val 100
position = val_list.index(100)
print(key_list[position])
I also tried to do a for in range loop, but that didn't seem to work either:
for row in table:
    if row["PRODUCT NUMBER"] == '1' and row["Week"] == '1':
        for i in range(8):
            if string_row.values() != ' ':
                print(row[i])
If I am unclear anywhere, please let me know and I will clear it up!
Here is a loop I made that should do what you want.
values = list(table.values())
keys = list(table.keys())
new_table = {}
index = -1
for i in range(values.count("")):
    index = values.index("", index + 1)
    new_table[keys[index]] = values[index]
If you want to remove those values from the original dict, you can just add table.pop(keys[index]) into the loop.
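For example, run against a single made-up row dictionary (note the loop expects table to be a single dict rather than the list of dicts built in the question):

table = {'PRODUCT NUMBER': '1', 'Week': '1', '1A': '2', '1B': ''}
# after the loop, new_table == {'1B': ''}, i.e. only the blank cells,
# which can then be popped from table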
I have a CSV file with column names (in the first row) and values (the rest of the rows). I want to create variables to store these values for every row in a loop. I started off by creating a dictionary from the CSV file, and got a list of records as key-value pairs. Now I want to create variables to store the value extracted from the key of each item, within a loop over every record. I am not sure if I am setting this up correctly.
Here is the dictionary I have.
my_dict = [{'value id': 'value1', 'name': 'name1', 'info': 'info1'},
           {'value id': 'value2', 'name': 'name2', 'info': 'info2'},
           {'value id': 'value3', 'name': 'name3', 'info': 'info3'}]
for i in len(my_dict):
    item[value id] = value1
    item[name] = name1
    item[info] = info1
The value id and name will be unique and are identifiers for the list. Ultimately, I want to create an item object, i.e. item[info] = info1, so I can add other code to modify item[info].
Try this:
my_dict = [{'value': 'value1', 'name': 'name1', 'info': 'info1'},
           {'value': 'value2', 'name': 'name2', 'info': 'info2'},
           {'value': 'value3', 'name': 'name3', 'info': 'info3'}]

for obj in my_dict:
    value = obj['value']
    name = obj['name']
    info = obj['info']
To expand on #aws_apprentice's point, you can capture the data by creating some additional variables:
my_dict = [{'value': 'value1', 'name': 'name1', 'info': 'info1'},
           {'value': 'value2', 'name': 'name2', 'info': 'info2'},
           {'value': 'value3', 'name': 'name3', 'info': 'info3'}]

values = []
names = []
info = []
for obj in my_dict:
    values.append(obj['value'])
    names.append(obj['name'])
    info.append(obj['info'])
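After the loop, values == ['value1', 'value2', 'value3'], names == ['name1', 'name2', 'name3'] and info == ['info1', 'info2', 'info3'], so each record's fields line up by index.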
I have a list of dictionaries read in from csv.DictReader that represents the rows of a CSV file:
rows = [{"id":"123","date":"1/1/18","foo":"bar"},
{"id":"123","date":"2/2/18", "foo":"baz"}]
I would like to create a new dictionary where only unique IDs are stored, keeping only the row entry with the most recent date. Based on the above example, it would keep the row with date 2/2/18.
I was thinking of doing something like this, but I'm having trouble translating the pseudocode in the else statement into actual Python.
I can figure out how to check which of two dates is more recent, but I'm having the most trouble figuring out how to check the new list for the dictionary that contains the same id, and then retrieve the date from that row.
Note: Unfortunately, due to resource constraints on our platform I am unable to use pandas for this project.
new_data = []
for row in rows:
    if row['id'] not in new_data:
        new_data.append(row)
    else:
        check the element in new_data with the same id as row['id']
        if that element's date value is less recent:
            replace it with the current row
        else:
            continue to next row in rows
You'll need a function to convert your date (as string) to a date (as date).
import datetime

def to_date(date_str):
    d1, m1, y1 = [int(s) for s in date_str.split('/')]
    return datetime.date(y1, m1, d1)
I assumed your date format is d/m/yy. Consider using datetime.strptime to parse your dates, as illustrated by Alex Hall's answer.
Then, the idea is to loop over your rows and store them in a new structure (here, a dict whose keys are the IDs). If a key already exists, compare its date with the current row, and take the right one. Following your pseudo-code, this leads to:
rows = [{"id":"123","date":"1/1/18","foo":"bar"},
{"id":"123","date":"2/2/18", "foo":"baz"}]
new_data = dict()
for row in rows:
existing = new_data.get(row['id'], None)
if existing is None or to_date(existing['date']) < to_date(row['date']):
new_data[row['id']] = row
If you want your new_data variable to be a list, use new_data = list(new_data.values()).
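With the two sample rows, new_data then holds only the 2/2/18 entry:

{'123': {'id': '123', 'date': '2/2/18', 'foo': 'baz'}}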
import datetime

rows = [{"id": "123", "date": "1/1/18", "foo": "bar"},
        {"id": "123", "date": "2/2/18", "foo": "baz"}]

def parse_date(d):
    return datetime.datetime.strptime(d, "%d/%m/%y").date()

tmp_dict = {}
for row in rows:
    if row['id'] not in tmp_dict:
        tmp_dict[row['id']] = row
    else:
        if parse_date(row['date']) > parse_date(tmp_dict[row['id']]['date']):
            tmp_dict[row['id']] = row

print(tmp_dict.values())
Output:
dict_values([{'id': '123', 'date': '2/2/18', 'foo': 'baz'}])
Note: you can merge the two ifs into if row['id'] not in tmp_dict or parse_date(row['date']) > parse_date(tmp_dict[row['id']]['date']) for cleaner and shorter code.
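A sketch of that merged version (with the same fixes as above):

for row in rows:
    if row['id'] not in tmp_dict or parse_date(row['date']) > parse_date(tmp_dict[row['id']]['date']):
        tmp_dict[row['id']] = row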
Firstly, work with proper date objects, not strings. Here is how to parse them:
from datetime import datetime, date

rows = [{"id": "123", "date": "1/1/18", "foo": "bar"},
        {"id": "123", "date": "2/2/18", "foo": "baz"}]

for row in rows:
    row['date'] = datetime.strptime(row['date'], '%d/%m/%y').date()
(check if the format is correct)
Then for the actual task:
new_data = {}
for row in rows:
    # keep whichever row has the later date for each id
    new_data[row['id']] = max(new_data.get(row['id'], row),
                              row,
                              key=lambda r: r['date'])

print(new_data.values())
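With the sample rows this prints something like dict_values([{'id': '123', 'date': datetime.date(2018, 2, 2), 'foo': 'baz'}]).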
Alternatively:
Here are some generic utility functions that work well here which I use in many places:
from collections import defaultdict

def group_by_key_func(iterable, key_func):
    """
    Create a dictionary from an iterable such that the keys are the result of evaluating a key function on elements
    of the iterable and the values are lists of elements all of which correspond to the key.
    """
    result = defaultdict(list)
    for item in iterable:
        result[key_func(item)].append(item)
    return result

def group_by_key(iterable, key):
    return group_by_key_func(iterable, lambda x: x[key])
Then the solution can be written as:
by_id = group_by_key(rows, 'id')
for id_num, group in list(by_id.items()):
    by_id[id_num] = max(group, key=lambda r: r['date'])

print(by_id.values())
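With the sample rows (whose dates were converted to date objects above), this also prints something like dict_values([{'id': '123', 'date': datetime.date(2018, 2, 2), 'foo': 'baz'}]).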
This is less efficient than the first solution because it creates lists along the way that are discarded, but I use the general principles in many places and I thought of it first, so here it is.
If you like to utilize classes as much as I do, then you could make your own class to do this:
from datetime import date

rows = [
    {"id": "123", "date": "1/1/18", "foo": "bar"},
    {"id": "123", "date": "2/2/18", "foo": "baz"},
    {"id": "456", "date": "3/3/18", "foo": "bar"},
    {"id": "456", "date": "1/1/18", "foo": "bar"}
]

class unique(dict):
    def __setitem__(self, key, value):
        # Add key if missing or replace key if date is newer
        if key not in self or self[key]["date"] < value["date"]:
            dict.__setitem__(self, key, value)

data = unique()  # Initialize new class based on dict
for row in rows:
    d, m, y = map(int, row["date"].split('/'))  # Split date into parts
    row["date"] = date(y, m, d)  # Replace date value
    data[row["id"]] = row  # Set new data. Will overwrite same ids with more recent

print(data.values())
Outputs:
dict_values([
    {'id': '123', 'date': datetime.date(18, 2, 2), 'foo': 'baz'},
    {'id': '456', 'date': datetime.date(18, 3, 3), 'foo': 'bar'}
])
Keep in mind that data is a dict subclass that essentially overrides the __setitem__ method and uses IDs as keys. And the dates are date objects, so they can be compared easily.