Using apply to add multiple columns in pandas - python

I'm trying to run a function (row_extract) over a column in my dataframe; it returns three values that I then want to add to three new columns.
I've tried running it like this
all_data["substance", "extracted name", "name confidence"] = all_data["name"].apply(row_extract)
but I get one column with all three values. I'm going to iterate over the rows, but that doesn't seem like a very efficient system - any thoughts?
This is my current solution, but it takes an age.
for index, row in all_data.iterrows():
    all_data.at[index, "substance"], all_data.at[index, "extracted name"], all_data.at[index, "name confidence"] = row_extract(row["name"])

Check what the type of your function's output is, or what the datatypes are. It seems like it's a single string.
You can use the split method on a string to separate the values.
https://docs.python.org/2/library/string.html
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html
Alternatively, adjust your function to return more than one value.
E.g.
def myfunc():
    ...
    ...
    return x, y, z
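Once the function returns a tuple, one way to expand it into three columns at once is to turn the Series of tuples into a DataFrame and assign it to a list of column names. A minimal sketch, where row_extract is a stand-in for the asker's function (assumed to return a 3-tuple):

```python
import pandas as pd

# Hypothetical stand-in for the asker's row_extract; assumed to return a 3-tuple.
def row_extract(name):
    return name.upper(), name.lower(), 0.9

all_data = pd.DataFrame({"name": ["Aspirin", "Ibuprofen"]})

# apply produces a Series of tuples; tolist() turns it into a list of tuples,
# which pd.DataFrame expands into three columns that we assign in one step.
all_data[["substance", "extracted name", "name confidence"]] = pd.DataFrame(
    all_data["name"].apply(row_extract).tolist(), index=all_data.index
)
print(all_data)
```

This avoids iterrows entirely, so it should be noticeably faster than the row-by-row .at assignment above.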

Related

Cannot match two values in two different csvs

I am parsing through two separate CSV files with the goal of finding matching customerIDs and dates in order to manipulate a balance.
In my for loop, at some point there should be a match, as I intentionally put duplicate IDs and dates in my CSV. However, when parsing and attempting to match the data, the matches aren't working properly even though the values are the same.
main.py:
transactions = pd.read_csv(INPUT_PATH, delimiter=',')
accounts = pd.DataFrame(
    columns=['customerID', 'MM/YYYY', 'minBalance', 'maxBalance', 'endingBalance'])
for index, row in transactions.iterrows():
    customer_id = row['customerID']
    date = formatter.convert_date(row['date'])
    minBalance = 0
    maxBalance = 0
    endingBalance = 0
    dict = {
        "customerID": customer_id,
        "MM/YYYY": date,
        "minBalance": minBalance,
        "maxBalance": maxBalance,
        "endingBalance": endingBalance
    }
    print(customer_id in accounts['customerID'] and date in accounts['MM/YYYY'])
    # Returns False
    if (accounts['customerID'].equals(customer_id)) and (accounts['MM/YYYY'].equals(date)):
        # This section never runs
        print("hello")
    else:
        print("world")
    accounts.loc[index] = dict
accounts.to_csv(OUTPUT_PATH, index=False)
Transactions CSV:
customerID,date,amount
1,12/21/2022,500
1,12/21/2022,-300
1,12/22/2022,100
1,01/01/2023,250
1,01/01/2022,300
1,01/01/2022,-500
2,12/21/2022,-200
2,12/21/2022,700
2,12/22/2022,200
2,01/01/2023,300
2,01/01/2023,400
2,01/01/2023,-700
Accounts CSV
customerID,MM/YYYY,minBalance,maxBalance,endingBalance
1,12/2022,0,0,0
1,12/2022,0,0,0
1,12/2022,0,0,0
1,01/2023,0,0,0
1,01/2022,0,0,0
1,01/2022,0,0,0
2,12/2022,0,0,0
2,12/2022,0,0,0
2,12/2022,0,0,0
2,01/2023,0,0,0
2,01/2023,0,0,0
2,01/2023,0,0,0
Expected Accounts CSV
customerID,MM/YYYY,minBalance,maxBalance,endingBalance
1,12/2022,0,0,0
1,01/2023,0,0,0
1,01/2022,0,0,0
2,12/2022,0,0,0
2,01/2023,0,0,0
Where does the problem come from
Your problem comes from the comparisons you're doing with pandas Series. To put it simply, when you do:
customer_id in accounts['customerID']
you're checking whether customer_id is an index label of the Series accounts['customerID'], whereas you want to check its values.
And in your if statement, you're using the pd.Series.equals method. Here is what the documentation says the method does:
This function allows two Series or DataFrames to be compared against each other to see if they have the same shape and elements. NaNs in the same location are considered equal.
So equals compares whole DataFrames or Series against each other, which is different from what you're trying to do.
One of many solutions
There are multiple ways to achieve what you're trying to do; the easiest is simply to get the values from the Series before doing the comparison:
customer_id in accounts['customerID'].values
Note that accounts['customerID'].values returns a NumPy array of the values of your Series.
So your comparison should be something like this:
print(customer_id in accounts['customerID'].values and date in accounts['MM/YYYY'].values)
And use the same thing in your if statement:
if (customer_id in accounts['customerID'].values and date in accounts['MM/YYYY'].values):
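The index-vs-values distinction is easy to demonstrate in isolation. A small sketch with a toy Series:

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=["a", "b", "c"])

# `in` on a Series checks membership in the *index*, not the values:
print(10 in s)         # False: 10 is not an index label
print("a" in s)        # True: "a" is an index label
print(10 in s.values)  # True: membership in the underlying array
```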
Alternative solutions
You can also use the pandas.Series.isin function: given a list-like input, it returns a boolean Series showing whether each element in the Series matches the input. You then just need to check whether that boolean Series contains at least one True value (e.g. with .any()).
Documentation of isin : https://pandas.pydata.org/docs/reference/api/pandas.Series.isin.html
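A sketch of the isin approach on a toy accounts frame. Note that combining the two masks with & checks both values on the same row, which is stronger than two separate membership tests (and likely what the deduplication logic actually needs):

```python
import pandas as pd

accounts = pd.DataFrame({
    "customerID": [1, 1, 2],
    "MM/YYYY": ["12/2022", "01/2023", "12/2022"],
})

# isin returns a boolean Series per column; & combines them row-wise,
# so mask is True only where a row matches both values at once.
mask = accounts["customerID"].isin([2]) & accounts["MM/YYYY"].isin(["12/2022"])
print(mask.any())  # True: at least one row has customerID 2 in 12/2022
```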
It is not clear from the information provided what the formatter.convert_date function does, but from the example CSVs you added it seems like it should do something like:
def convert_date(mmddyy):
    mm, dd, yy = mmddyy.split('/')
    return mm + '/' + yy
In addition, make sure that the data types are equal (both date fields are strings, and likewise for the customer ID).

How to delete specific columns in large measurement data which do not contain some values?

I have a large measurement data set which contains 350 columns after filtering (for example A0 to A49, B0 to B49, F0 to F49) with some random numbers.
Now I want to look into columns B0 to B49 and check whether they have values in a range (say, between 20 and 30). If not, I want to delete those columns from the measurement data.
How can I do this in Python with pandas?
I would also like to know some faster methods for this filtering.
sample data:https://docs.google.com/spreadsheets/d/17Xjc81jkjS-64B4FGZ06SzYDRnc6J27m/edit?usp=sharing&ouid=106137353367530025738&rtpof=true&sd=true
(In pandas) You can apply a function to every element of a DataFrame using the applymap function. You can also apply aggregating functions to reduce a whole column to a single value. Put those two things together and you have what you want.
For instance, you want to know whether a given set of columns (the "B" ones) has values in some range (say, 20 to 30). So you want to check the values at the element level, but collect the column names as output.
You can do that with the following code. Execute the pieces separately/progressively to understand what they are doing.
>>> b_cols_of_interest_indx = df.filter(regex='^B').applymap(lambda x:20<x<30).any()
>>> b_cols_of_interest_indx[b_cols_of_interest_indx]
B19 True
B21 True
dtype: bool
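The question asks to delete the B columns that do *not* have values in the range, so the boolean index above still needs to be turned into a drop. A minimal self-contained sketch on toy data (column names and the 20–30 bounds are placeholders for the asker's real ones):

```python
import pandas as pd

df = pd.DataFrame({
    "A0": [1, 2], "B0": [25, 40], "B1": [5, 10], "F0": [50, 60],
})

# For each "B" column, check whether any value lies strictly between 20 and 30.
b_cols = df.filter(regex="^B")
keep = b_cols.apply(lambda col: col.between(20, 30, inclusive="neither").any())

# Drop the B columns where no value fell in the range.
df = df.drop(columns=keep[~keep].index)
print(df.columns.tolist())  # ['A0', 'B0', 'F0']
```

Using a per-column apply with Series.between avoids the element-wise applymap pass, which tends to be faster on wide frames.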

Running multiple columns through a function

Is there any way I can run a function like this
crypto['Price'] = crypto['Ticker'].transform(lambda item: cg.get_price(ids=item, vs_currencies='usd'))
using the function
cg.get_coin_market_chart_range_by_id(id='bitcoin', vs_currency='usd', from_timestamp='1635505200', to_timestamp='1635548400')
with three columns for the values id, from_timestamp, and to_timestamp,
the columns being
crypto['Ticker'], crypto['Dateroundts'], crypto['Dateround+1ts']?
I basically want to make a new column with the function above, using the three columns as variables, and don't know how.
You can use apply with axis=1 to apply a function across each row:
crypto['Price'] = crypto.apply(lambda x: cg.get_coin_market_chart_range_by_id(id=x.Ticker, vs_currency='usd', from_timestamp=x.Dateroundts, to_timestamp=x['Dateround+1ts']), axis=1)
You can either use dot notation or index x like a dict (with square brackets). Though if the column name is not a valid identifier (like Dateround+1ts), you can only use the dict-style access.
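A runnable sketch of the same pattern, with a hypothetical get_chart_range standing in for the CoinGecko call (so there is no network dependency):

```python
import pandas as pd

# Hypothetical stand-in for cg.get_coin_market_chart_range_by_id,
# just to show how three columns feed one call per row.
def get_chart_range(id, vs_currency, from_timestamp, to_timestamp):
    return f"{id}:{from_timestamp}-{to_timestamp}"

crypto = pd.DataFrame({
    "Ticker": ["bitcoin", "ethereum"],
    "Dateroundts": [1635505200, 1635505200],
    "Dateround+1ts": [1635548400, 1635548400],
})

crypto["Price"] = crypto.apply(
    lambda x: get_chart_range(
        id=x["Ticker"],                   # dot notation x.Ticker also works here
        vs_currency="usd",
        from_timestamp=x["Dateroundts"],
        to_timestamp=x["Dateround+1ts"],  # brackets required: "+" makes this an invalid identifier
    ),
    axis=1,
)
print(crypto["Price"].tolist())
```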

Iterate over multiple dataframes and perform maths functions save output

I have several dataframes on which I am performing the same functions - extracting the mean, geomean, median, etc. for a particular column (PurchasePrice), organised by groups within another column (GORegion). At the moment I am performing this for each dataframe separately, as I cannot work out how to do this in a for loop and save a separate data series for each function performed on each dataframe.
i.e. I perform median like this:
regmedian15 = pd.Series(nw15.groupby(["GORegion"])['PurchasePrice'].median(), name = "regmedian_nw15")
I want to do this for a list of dataframes [nw15, nw16, nw17], extracting the same variable outputs for each of them.
I have tried things like :
listofnwdfs = [nw15, nw16, nw17]
for df in listofcmldfs:
    df+'regmedian' = pd.Series(df.groupby(["GORegion"])['PurchasePrice'].median(), name = df+'regmedian')
but it says "can't assign to operator"
I think the main point is I can't work out how to create separate output variable names using the names of the dataframes I am inputting into the for loop. I just want a for loop function that produces my median output as a series for each dataframe in the list separately, and I can then do this for means and so on.
Many thanks for your help!
First, df+'regmedian' = ... is not valid Python syntax. You are trying to assign a value to an expression of the form A + B, which is why Python complains that you are trying to re-define the meaning of +.
Also, df+'regmedian' itself seems strange. You are trying to add a DataFrame and a string.
One way to keep track of different statistics for different dataframes is by using dicts. For example, you can replace
listofnwdfs = [nw15, nw16, nw17]
with
dict_of_nwd_frames = {15: nw15, 16: nw16, 17: nw17}
Say you want to store 'regmedian' data for each frame. You can do this with dicts as well.
data = dict()
for key, df in dict_of_nwd_frames.items():
    data[(key, 'regmedian')] = pd.Series(df.groupby(["GORegion"])['PurchasePrice'].median(), name = str(key) + 'regmedian')
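A self-contained version of the dict approach, with two toy frames standing in for nw15/nw16 (the column names match the question; the numbers are made up):

```python
import pandas as pd

# Toy frames standing in for the asker's nw15/nw16.
nw15 = pd.DataFrame({"GORegion": ["N", "N", "S"], "PurchasePrice": [100, 200, 300]})
nw16 = pd.DataFrame({"GORegion": ["N", "S", "S"], "PurchasePrice": [150, 250, 350]})

dict_of_nwd_frames = {15: nw15, 16: nw16}

# One entry per (year, statistic) pair; easy to extend with means, geomeans, etc.
data = {}
for key, df in dict_of_nwd_frames.items():
    data[(key, "regmedian")] = pd.Series(
        df.groupby("GORegion")["PurchasePrice"].median(),
        name=str(key) + "regmedian",
    )

print(data[(15, "regmedian")]["N"])  # 150.0
```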

Why does my code work (filtering a dataframe with a function)?

First of all, I have created a function with lat and lon inputs in order to filter ships not entering a particular zone.
check_devaiation_notInZone(LAT, LON)
It takes two inputs and returns True if the ship does not enter a particular zone.
Secondly, I have data on many ships, with Lat under one header and Lon under another, in CSV format. So I need to pass data from the two columns into the function and create another column to store its output.
After looking at Pandas: How to use apply function to multiple columns, I found the solution df1['deviation'] = df1.apply(lambda row: check_devaiation_notInZone(row['Latitude'], row['Longitude']), axis = 1)
But I have no idea why it works. Can anyone explain what happens inside apply()?
A lambda function is just like a normal function but it has no name and can be used only in the place where it is defined.
lambda row: check_devaiation_notInZone(row['Latitude'], row['Longitude'])
is the same as:
def anyname(row):
    return check_devaiation_notInZone(row['Latitude'], row['Longitude'])
So in the apply you just call another function, check_devaiation_notInZone, with the parameters row['Latitude'] and row['Longitude'].
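The equivalence is easy to verify: the lambda and the named function give identical results under apply. A sketch with a toy zone check standing in for the asker's function (the misspelled name is kept from the question):

```python
import pandas as pd

# Toy stand-in for the asker's zone check; returns True when the
# point is *outside* the box lat 10..20, lon 100..110.
def check_devaiation_notInZone(lat, lon):
    return not (10 <= lat <= 20 and 100 <= lon <= 110)

df1 = pd.DataFrame({"Latitude": [15.0, 50.0], "Longitude": [105.0, 5.0]})

# apply(..., axis=1) passes each row (a Series labelled by column names)
# to the function; the lambda just forwards two of those labels.
via_lambda = df1.apply(
    lambda row: check_devaiation_notInZone(row["Latitude"], row["Longitude"]), axis=1
)

def anyname(row):
    return check_devaiation_notInZone(row["Latitude"], row["Longitude"])

via_named = df1.apply(anyname, axis=1)
print(via_lambda.equals(via_named))  # True
```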
