Iterating through dataframe and updating based on dictionary conditions - python

I have the following xlsx file that I need to work on:
I want to iterate through the dataframe and, if the column ITEM CODE contains a dictionary key, check on the same row whether the TG column contains that key's value[0] (first position in the tuple); if it does, I want to insert the value[1] (second position in the tuple) into another column named SKU.
Dataframe: #df3 = df2.append(df1)
catp = {"2755": (('24','002'),('25','003'),('26','003'),('27','004'),('28','005'),('29','006'),('30','007'),('31','008'),
                 ('32','009'),('32','010'),('33','011'),('34','012'),('35','013'),('36','014')),
        "2513": (('38','002'),('40','003'),('42','004'),('44','005'),('46','006'),('48','007'),('50','008'),('52','009'),
                 ('54','010'))}
for i, row in df3.iterrows():
    if catp.key() in df3['ITEM CODE'][i] and catp.value()[0] in df3['TG'][i]:
        codmarime = catp.value()[1]
        df3['SKU'][i] = '20'+df3['ITEM CODE'][i]+[i]+codmarime
    else:
        df3['SKU'][i] = '20'+df3['ITEM CODE'][i]+'???'
If 2755 and 24 are found: SKU = '202755638002'
If 2513 and 44 are found: SKU = '202513123005'
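(Reading the two examples, the target format appears to be '20' + the full ITEM CODE + the three-digit size code looked up via TG; the '638' and '123' in the middle would then just be the trailing digits of the respective ITEM CODEs, not a separate field.)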
Output xlsx

As you failed to provide text data to create at least a fragment of your DataFrame,
I copied 3 rows from your picture, creating my test DataFrame:
import pandas as pd

df3 = pd.DataFrame(data=[
    ['1513452', 'AVRO D2', '685', 'BLACK/BLACK/ANTRACITE', '24', 929.95, '8052644627565'],
    ['2513452', 'AVRO D2', '685', 'BLACK/BLACK/ANTRACITE', '21', 929.95, '8052644627565'],
    ['2755126', 'AMELIA', 'Y17', 'DARK-DENIM', '24', 179.95, '8052644627565']],
    columns=['ITEM CODE', 'ITEM', 'COLOR', 'COLOR CODE', 'TG', 'PRICE', 'EAN'])
Details:
The first row does not contain any of catp's keys in the ITEM CODE column.
The second row: ITEM CODE contains one of your codes (2513), but no tuple saved under the 2513 key has a first element == '21' for the TG column.
The third row: ITEM CODE contains one of your codes (2755), TG == '24', and among the tuples saved under 2755 there is one whose first element == '24'.
Then we have to define a couple of auxiliary functions:
def findContainedCodeAndVal(dct, str):
    for eachKey in dct.keys():
        if str.find(eachKey) >= 0:
            return (eachKey, dct[eachKey])
    else:
        return (None, None)
This function attempts to find in dct a key contained in str.
It returns a 2-tuple containing the key found and associated value from dct.
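For example, a quick check with the catp dictionary and the test ITEM CODEs above:

findContainedCodeAndVal(catp, '2755126')  # -> ('2755', catp['2755'])
findContainedCodeAndVal(catp, '1513452')  # -> (None, None), no key is contained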
def find2ndElem(tuples, str):
    for tpl in tuples:
        if tpl[0] == str:
            return tpl[1]
    else:
        return ''
This function checks each tuple from tuples for whether its first element
== str; if a match is found it returns that tuple's second element, otherwise an empty string.
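Again, a quick check against the catp data:

find2ndElem(catp['2755'], '24')  # -> '002'
find2ndElem(catp['2513'], '21')  # -> '' (no tuple under 2513 starts with '21')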
And the last function to define is the one to be applied to each row
of your DataFrame. It returns the value to be saved in the SKU column:
def fn(row):
    ind = row.name  # Read row index
    iCode = row['ITEM CODE']
    k, val = findContainedCodeAndVal(catp, iCode)
    codmarime = ''
    if k:
        tg = row.TG
        codmarime = find2ndElem(val, tg)
    if codmarime == '':
        codmarime = '???'
    return f'20/{iCode}/{ind}/{codmarime}'
Note that it uses your catp dictionary.
For demonstration purposes, I introduced additional slashes into the
returned value, separating adjacent parts. In the target version, remove them.
And the last thing to do is to compute the SKU column of your DataFrame,
applying the fn function to each row of df3 and saving the result under
the SKU column:
df3['SKU'] = df3.apply(fn, axis=1)
When you print the DataFrame (containing my test data), the SKU column will
contain:
20/1513452/0/???
20/2513452/1/???
20/2755126/2/002

I am unable to understand the question completely, but here I am just correcting the errors I see in your code:
if catp.key() in df3['ITEM CODE'][i] and catp.value()[0] in df3['TG'][i]:
This is incorrect: dictionaries have no .key() or .value() methods (only .keys() and .values()), and the values in catp are tuples of tuples, so indexing [0] would give a whole ('TG', code) pair, not a TG value.
I am taking a different approach that should work, if I understand the end goal:
for key, pairs in catp.items():
    for tg, codmarime in pairs:
        xdf = df3.loc[df3['ITEM CODE'].astype(str).str.contains(key)
                      & df3['TG'].astype(str).str.contains(tg)]
        for i, row in xdf.iterrows():
            df3.at[i, 'SKU'] = '20' + row['ITEM CODE'] + codmarime
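Note that df3.at[i, 'SKU'] writes into the frame directly; chained indexing like df3['SKU'][i] = ... may operate on a copy and triggers pandas' SettingWithCopyWarning.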

Related

Python Replace values in list with dict

I have 2 variables whose data I am trying to manipulate. The first is a list that has 2 items:
row = [['Toyyota', 'Cammry', '3000'], ['Foord', 'Muustang', '6000']]
And a dictionary that has submissions
submission = {
    'extracted1_1': 'Toyota', 'extracted1_2': 'Camry', 'extracted1_3': '1000',
    'extracted2_1': 'Ford', 'extracted2_2': 'Mustang', 'extracted2_3': '5000',
    'reportDate': '2022-06-01T08:30', 'reportOwner': 'John Smith'}
extracted1_1 would match up with the first value in the first item from row. extracted1_2 would be the 2nd value in the 1st item, and extracted2_1 would be the 1st value in the 2nd item and so on. I'm trying to update row with the corresponding submission and having a hard time getting it to work properly.
Here's what I have currently:
iter_bit = iter(submission.values())
for bit in row:
    i = 0
    for bits in bit:
        bit[i] = next(iter_bit)
        i += 1
While this somewhat works, I'm looking for a more efficient way to do this by looping through the submission rather than the row. Is there an easier or more efficient way, by looping through the submission, to overwrite the corresponding value in row?
Iterate through submission and check if the key is in the format extractedX_Y. If it is, use X and Y as the indexes into row and assign the value there.
import re

regex = re.compile(r'^extracted(\d+)_(\d+)$')
for key, value in submission.items():
    m = regex.search(key)
    if m:
        x = int(m.group(1))
        y = int(m.group(2))
        row[x-1][y-1] = value
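With the sample data above, row ends up as [['Toyota', 'Camry', '1000'], ['Ford', 'Mustang', '5000']]; the reportDate and reportOwner keys are skipped because they don't match the pattern.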
It seems you are trying to convert the portion of the keys after "extracted" into indices into row. To do this, first slice off the portion you don't need (i.e. "extracted"), then split what remains on "_". Then convert each of these strings to an integer and subtract 1, because in Python indices are zero-based.
for key, value in submission.items():
    # e.g. key = 'extracted1_1', value = 'Toyota'
    if not key.startswith("extracted"):
        continue
    indices = [int(i) - 1 for i in key[9:].split("_")]
    # e.g. indices = [0, 0]
    # Set the value
    row[indices[0]][indices[1]] = value
Now you have your modified row:
[['Toyota', 'Camry', '1000'], ['Ford', 'Mustang', '5000']]
No clue if it's faster, but it's a 2-liner hahaha
for n, val in zip(range(len(row) * 3), submission.values()):
    row[n//3][n%3] = val
That said, I would probably do something safer in a work environment, like parsing the key for its index.
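(A caveat on that 2-liner: it relies on the six extractedX_Y entries coming first in submission, in row-major order, and on dicts preserving insertion order, which is guaranteed since Python 3.7.)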

if isinstance(x, list) appending new values from the list overwrites the previous one

I'm absolutely new to Python, trying to automate some of my stuff.
I'm currently trying to find a way to map a dictionary onto a table, based on columns from the table itself.
import pandas

cursor.execute(query)
columns = [col[0] for col in cursor.description]
rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
results = []
for row in rows:
    arg1 = row['arg1']
    arg2 = row['arg2']
    temp = row
    temp['concat'] = (str(arg1) + str(arg2))
    concat = row['concat']

    def map_sth(x):
        return {
            'arg1arg2': ["abcd", "efgh"],
            'arg1arg2': "xyz",
        }[str(x)]

    mapped = map_sth(concat)
    if isinstance(mapped, list):
        for mapping in mapped:
            temp['new_column'] = mapping
            results.append(temp)
    else:
        temp['new_column'] = mapped
        results.append(temp)

df = pandas.DataFrame(results)
df.to_csv("file.csv", index=False)
I debugged the code and it works fine for results with only one item in map_sth
if isinstance(mapped, list):
    for mapping in mapped:
        temp['new_column'] = mapping
        results.append(temp)
else:
    temp['new_column'] = mapped
    results.append(temp)
Once it gets into the isinstance branch, it gives me the correct value on the first loop iteration, but once it enters the second iteration it overwrites both appended rows with the second phrase from map_sth.
Any help would be much appreciated as i'm currently stuck :/
Thanks!
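For what it's worth, the symptom described is the classic shared-reference pitfall: every results.append(temp) appends a reference to the same dict object, so a later mutation shows up in every entry already appended. A minimal sketch (not the poster's code) of the behaviour, with a shallow copy as the likely fix:

temp = {'a': 1}
results = []
for mapping in ["abcd", "efgh"]:
    temp['new_column'] = mapping
    results.append(temp)              # appends a reference, not a snapshot
print(results)
# [{'a': 1, 'new_column': 'efgh'}, {'a': 1, 'new_column': 'efgh'}]

results = []
for mapping in ["abcd", "efgh"]:
    entry = dict(temp)                # shallow copy: each entry owns its dict
    entry['new_column'] = mapping
    results.append(entry)
print(results)
# [{'a': 1, 'new_column': 'abcd'}, {'a': 1, 'new_column': 'efgh'}]

In the question's loop, copying the row before mutating it (for example entry = dict(temp) inside the for mapping loop, and appending entry instead of temp) should give one distinct result row per mapping.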

Trying to Access keys in Dict from their values

I'm importing a CSV to a dictionary, where there are a number of houses labelled (i.e. 1A, 1B, ...).
Rows are labelled with some item such as 'coffee' etc. The table's data indicates how much of each item each household needs.
Excel screenshot
What I am trying to do is check the values of the key-value pairs in the dictionary for anything that isn't blank (containing either 1 or 2), and then take that key-value pair and the 'PRODUCT NUMBER' (from the csv) and append those to a new list.
I want to create a shopping list that will contain what item I need, with what quantity, to which household.
The column containing 'week' is not important for this.
I import the CSV into python as a dictionary like this:
import csv
import pprint
from typing import List, Dict
input_file_1 = csv.DictReader(open("DATA CWK SHOPPING DATA WEEK 1 FILE B.xlsb.csv"))
table: List[Dict[str, int]] = [] #list
for row in input_file_1:
    string_row: Dict[str, int] = {}  # dictionary
    for column in row:
        string_row[column] = row[column]
    table.append(string_row)
I found on GeeksforGeeks how to access a pair by its value. However, when I try this on my dictionary, it only seems to be able to search the last row.
# creating a new dictionary
my_dict = {"java": 100, "python": 112, "c": 11}
# list out keys and values separately
key_list = list(my_dict.keys())
val_list = list(my_dict.values())
# print key with val 100
position = val_list.index(100)
print(key_list[position])
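(Note that val_list.index(100) returns only the first position where the value occurs, so a value-to-key lookup like this cannot report multiple matches; with several households needing the same quantity, that is likely why it appears to find only one row.)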
I also tried to do a for in range loop, but that didn't seem to work either:
for row in table:
    if row["PRODUCT NUMBER"] == '1' and row["Week"] == '1':
        for i in range(8):
            if string_row.values() != ' ':
                print(row[i])
If I am unclear anywhere, please let me know and I will clear it up!
Here is a loop I made that should do what you want.
values = list(table.values())
keys = list(table.keys())
new_table = {}
index = -1
for i in range(values.count("")):
    index = values.index("", index + 1)
    new_table[keys[index]] = values[index]
If you want to remove those values from the original dict, you can just add d.pop(keys[index]) inside the loop.
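For the broader goal (a shopping list of house, item and quantity), note that table in this question is a list of row dicts, not a single dict. A rough sketch under that assumption; the 'PRODUCT NUMBER' and 'Week' column names are taken from the question, and the remaining columns are assumed to be the house labels:

shopping_list = []
for row in table:
    product = row["PRODUCT NUMBER"]
    for house, qty in row.items():
        if house in ("PRODUCT NUMBER", "Week"):
            continue  # skip the non-house columns
        if str(qty).strip() in ("1", "2"):  # keep only non-blank quantities
            shopping_list.append((house, product, int(qty)))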

Fill pandas dataframe with a for loop

I have 4 dataframes for 4 newspapers (newspaper1, newspaper2, newspaper3, newspaper4),
each of which has a single column for author name.
Now I'd like to merge these 4 dataframes into one with 5 columns: author, plus newspaper1, newspaper2, newspaper3 and newspaper4, which contain a 1/0 value (1 if the author writes for that newspaper).
import pandas as pd

listOfMedia = [newspaper1, newspaper2, newspaper3, newspaper4]
merged = pd.DataFrame(columns=['author', 'newspaper1', 'newspaper2', 'newspaper3', 'newspaper4'])
While this loop does what I intended (fills the merged df's author column with the names):
for item in listOfMedia:
    merged.author = item.author
I can't figure out how to fill the newspaper columns with the 1/0 values...
for item in listOfMedia:
    if item == newspaper1:
        merged['newspaper1'] = '1'
    elif item == newspaper2:
        merged['newspaper2'] = '1'
    elif item == newspaper3:
        merged['newspaper3'] = '1'
    else:
        merged['newspaper4'] = '1'
I keep getting this error:
During handling of the above exception, another exception occurred:
TypeError: attrib() got an unexpected keyword argument 'convert'
Tried to google that error, but it didn't help me identify the problem.
What am I missing here? I also think there must be a smarter way to fill the newspaper/author matrix, but I can't seem to figure out even this simple way. I am using Jupyter Notebook.
Actually you are setting all rows to 1 so use:
for col in merged.columns:
    merged[col].values[:] = 1
I've taken a guess at what I think your dataframes look like.
newspaper1 = pd.DataFrame({'author': ['author1', 'author2', 'author3']})
newspaper2 = pd.DataFrame({'author': ['author1', 'author2', 'author4']})
newspaper3 = pd.DataFrame({'author': ['author1', 'author2', 'author5']})
newspaper4 = pd.DataFrame({'author': ['author1', 'author2', 'author6']})
Firstly we will copy the dataframes so we don't affect the originals:
newspaper1_temp = newspaper1.copy()
newspaper2_temp = newspaper2.copy()
newspaper3_temp = newspaper3.copy()
newspaper4_temp = newspaper4.copy()
Next we replace the index of each dataframe with the author name:
newspaper1_temp.index = newspaper1['author']
newspaper2_temp.index = newspaper2['author']
newspaper3_temp.index = newspaper3['author']
newspaper4_temp.index = newspaper4['author']
Then we concatenate these dataframes (matching them together by the index we set):
merged = pd.concat([newspaper1_temp, newspaper2_temp, newspaper3_temp, newspaper4_temp], axis=1)
merged.columns = ['newspaper1', 'newspaper2', 'newspaper3', 'newspaper4']
And finally we replace NaN's with 0 and then non-zero entries (they will still have the author names in them) as 1:
merged = merged.fillna(0)
merged[merged != 0] = 1
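With the guessed dataframes above, printing merged should give something like:

        newspaper1 newspaper2 newspaper3 newspaper4
author
author1          1          1          1          1
author2          1          1          1          1
author3          1          0          0          0
author4          0          1          0          0
author5          0          0          1          0
author6          0          0          0          1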

create a filtered list of dictionaries based on existing list of dictionaries

I have a list of dictionaries read in from csv DictReader that represent rows of a csv file:
rows = [{"id": "123", "date": "1/1/18", "foo": "bar"},
        {"id": "123", "date": "2/2/18", "foo": "baz"}]
I would like to create a new dictionary, where only unique ID's are stored. But I would like to only keep the row entry with the most recent date. Based on the above example, it would keep the row with date 2/2/18.
I was thinking of doing something like this, but I'm having trouble translating the pseudocode in the else branch into actual Python.
I can figure out how to check which of two dates is more recent, but I'm having the most trouble figuring out how to find the dictionary in the new list with the same id and retrieve its date.
Note: Unfortunately, due to resource constraints on our platform I am unable to use pandas for this project.
new_data = []
for row in rows:
    if row['id'] not in new_data:
        new_data.append(row)
    else:
        # check the element in new_data with the same id as row['id']
        # if that element's date value is less recent:
        #     replace it with the current row
        # else:
        #     continue to next row in rows
You'll need a function to convert your date (as string) to a date (as date).
import datetime
def to_date(date_str):
    d1, m1, y1 = [int(s) for s in date_str.split('/')]
    return datetime.date(y1, m1, d1)
I assumed your date format is d/m/yy. Consider using datetime.strptime to parse your dates, as illustrated by Alex Hall's answer.
Then, the idea is to loop over your rows and store them in a new structure (here, a dict whose keys are the IDs). If a key already exists, compare its date with the current row, and take the right one. Following your pseudo-code, this leads to:
rows = [{"id": "123", "date": "1/1/18", "foo": "bar"},
        {"id": "123", "date": "2/2/18", "foo": "baz"}]

new_data = dict()
for row in rows:
    existing = new_data.get(row['id'], None)
    if existing is None or to_date(existing['date']) < to_date(row['date']):
        new_data[row['id']] = row
If you want your new_data variable to be a list, use new_data = list(new_data.values()).
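With the sample rows, list(new_data.values()) is [{'id': '123', 'date': '2/2/18', 'foo': 'baz'}]; only the most recent entry per id survives.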
import datetime

rows = [{"id": "123", "date": "1/1/18", "foo": "bar"},
        {"id": "123", "date": "2/2/18", "foo": "baz"}]

def parse_date(d):
    return datetime.datetime.strptime(d, "%d/%m/%y").date()

tmp_dict = {}
for row in rows:
    if row['id'] not in tmp_dict:
        tmp_dict[row['id']] = row
    else:
        if parse_date(row['date']) > parse_date(tmp_dict[row['id']]['date']):
            tmp_dict[row['id']] = row
print(tmp_dict.values())
Output:
dict_values([{'id': '123', 'date': '2/2/18', 'foo': 'baz'}])
Note: you can merge the two ifs into if row['id'] not in tmp_dict or parse_date(row['date']) > parse_date(tmp_dict[row['id']]['date']) for cleaner and shorter code.
Firstly, work with proper date objects, not strings. Here is how to parse them:
from datetime import datetime, date
rows = [{"id": "123", "date": "1/1/18", "foo": "bar"},
        {"id": "123", "date": "2/2/18", "foo": "baz"}]

for row in rows:
    row['date'] = datetime.strptime(row['date'], '%d/%m/%y').date()
(check if the format is correct)
Then for the actual task:
new_data = {}
for row in rows:
    new_data[row['id']] = max(new_data.get(row['id'], row),
                              row, key=lambda r: r['date'])
print(new_data.values())
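With the parsed sample rows this prints dict_values([{'id': '123', 'date': datetime.date(2018, 2, 2), 'foo': 'baz'}]).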
Alternatively:
Here are some generic utility functions that work well here which I use in many places:
from collections import defaultdict

def group_by_key_func(iterable, key_func):
    """
    Create a dictionary from an iterable such that the keys are the result of evaluating a key function on elements
    of the iterable and the values are lists of elements all of which correspond to the key.
    """
    result = defaultdict(list)
    for item in iterable:
        result[key_func(item)].append(item)
    return result

def group_by_key(iterable, key):
    return group_by_key_func(iterable, lambda x: x[key])
Then the solution can be written as:
by_id = group_by_key(rows, 'id')
for id_num, group in list(by_id.items()):
    by_id[id_num] = max(group, key=lambda r: r['date'])
print(by_id.values())
This is less efficient than the first solution because it creates lists along the way that are discarded, but I use the general principles in many places and I thought of it first, so here it is.
If you like to utilize classes as much as I do, then you could make your own class to do this:
from datetime import date

rows = [
    {"id": "123", "date": "1/1/18", "foo": "bar"},
    {"id": "123", "date": "2/2/18", "foo": "baz"},
    {"id": "456", "date": "3/3/18", "foo": "bar"},
    {"id": "456", "date": "1/1/18", "foo": "bar"}
]
class unique(dict):
    def __setitem__(self, key, value):
        # Add key if missing or replace key if date is newer
        if key not in self or self[key]["date"] < value["date"]:
            dict.__setitem__(self, key, value)

data = unique()  # Initialize new class based on dict
for row in rows:
    d, m, y = map(int, row["date"].split('/'))  # Split date into parts
    row["date"] = date(y, m, d)  # Replace date value
    data[row["id"]] = row  # Set new data. Will overwrite same ids with more recent
print(data.values())
Outputs:
[
    {'date': datetime.date(18, 2, 2), 'foo': 'baz', 'id': '123'},
    {'date': datetime.date(18, 3, 3), 'foo': 'bar', 'id': '456'}
]
Keep in mind that data is a dict subclass that overrides the __setitem__ method and uses IDs as keys. The dates are date objects, so they can be compared easily (though note that splitting '1/1/18' yields year 18, not 2018; parse with strptime and %y if you need real years).
