I have the following dataframe containing floats as input, and I would like to count how many values fall in the ranges 0-90 and 90-180. The desired output dataframe was obtained with the FREQUENCY() function in Excel.
[Input dataframe]
[Desired output]
I'd like to do the same thing in Python but haven't found a solution. Do you have any suggestions?
I can also provide source files if needed.
Here's one way: divide the columns by 90, truncate to int, then use groupby and count:
import pandas as pd

data = [
    [87.084, 5.293],
    [55.695, 0.985],
    [157.504, 2.995],
    [97.701, 179.593],
    [97.67, 170.386],
    [118.713, 177.53],
    [99.972, 176.665],
    [124.849, 1.633],
    [72.787, 179.459],
]
df = pd.DataFrame(data, columns=['Var1', 'Var2'])

# Map each value to its bin index: 0 for [0, 90), 1 for [90, 180)
df = (df / 90).astype(int)

df1 = pd.DataFrame([["0-90"], ["90-180"]])
df1['Var1'] = df.groupby('Var1').count()
df1['Var2'] = df.groupby('Var2').count()
print(df1)
Output:
0 Var1 Var2
0 0-90 3 4
1 90-180 6 5
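For reference, the same table can be produced more directly with pd.cut, which bins the raw floats and labels the bins for you. This is a sketch reusing the data list defined above:
# Bin the original float values directly; right edges are inclusive by
# default, so 90.0 would land in the "0-90" bucket.
df_raw = pd.DataFrame(data, columns=['Var1', 'Var2'])
bins = [0, 90, 180]
labels = ['0-90', '90-180']

counts = df_raw.apply(
    lambda col: pd.cut(col, bins=bins, labels=labels).value_counts().sort_index()
)
print(counts)
#         Var1  Var2
# 0-90       3     4
# 90-180     6     5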
I have two dataframes, df1 and df2, each with a column containing a product code and a column containing a product price. I want to compute the difference between the prices in the two dataframes and store the result of the function I created in a new dataframe "df3" containing the product code and the final price. Here is my attempt:
Function to calculate the difference in the way I want:
def range_calc(z, y):
    final_price = pd.DataFrame(columns = ["Market_price"])
    res = z-y
    abs_res = abs(res)
    if abs_res == 0:
        return (z)
    if z>y:
        final_price = (abs_res / z ) * 100
    else:
        final_price = (abs_res / y ) * 100
    return(final_price)
The for loop I created to check the two dataframes and call the function:
Last_df = pd.DataFrame(columns = ["Product_number", "Market_Price"])
for i in df1["product_ID"]:
    for x in df2["product_code"]:
        if i == x:
            Last_df["Product_number"] = i
            Last_df["Market_Price"] = range_calc(df1["full_price"], df2["tot_price"])
The problem is that I am getting this error every time:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
Why you got the error message "The truth value of a Series is ambiguous"
You got the error message "The truth value of a Series is ambiguous" because you passed a pandas.Series into an if-clause:
import pandas as pd

nums = pd.Series([1.11, 2.22, 3.33])

if nums == 0:
    print("nums == zero (nums is equal to zero)")
else:
    print("nums != zero (nums is not equal to zero)")
# AN EXCEPTION IS RAISED!
The Error Message is something like the following:
ValueError: The truth value of a Series is ambiguous. Use a.empty,
a.bool(), a.item(), a.any() or a.all().
Somehow, a Series got inside your if-clause. I know how it happened, but it will take me a moment to explain:
Suppose that you want the value in row 3, column 4 of a pandas dataframe. If you attempt to extract a single value out of a pandas table at a specific row and column, that value is sometimes a Series object, not a number.
Consider the following example:
# DATA:
# Name Age Location
# 0 Nik 31 Toronto
# 1 Kate 30 London
# 2 Evan 40 Kingston
# 3 Kyra 33 Hamilton
To create the dataframe above, we can write:
df = pd.DataFrame.from_dict({
'Name': ['Nik', 'Kate', 'Evan', 'Kyra'],
'Age': [31, 30, 40, 33],
'Location': ['Toronto', 'London', 'Kingston', 'Hamilton']
})
Now, let us try to get a specific row of data:
evans_row = df.loc[df['Name'] == 'Evan']
and we try to get a specific value out of that row of data:
evans_age = evans_row['Age']
You might think that evans_age is the integer 40, but you would be wrong.
Let us see what evans_age really is:
print(80*"*", "EVAN\'s AGE", type(Evans_age), sep="\n")
print(Evans_age)
We have:
EVAN's AGE
<class 'pandas.core.series.Series'>
2 40
Name: Age, dtype: int64
Evan's age is not a number: evans_age is an instance of pandas.Series. After extracting a single cell out of a pandas dataframe, you can write .tolist()[0] to pull the number out of that cell.
evans_real_age = evans_age.tolist()[0]
print(80*"*", "EVAN\'s REAL AGE", type(evans_real_age), sep="\n")
print(evans_real_age)
EVAN's REAL AGE
<class 'numpy.int64'>
40
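For what it's worth, the same scalar can also be extracted with .item() or .iloc[0], which the error message itself hints at:
print(evans_age.item())   # 40 -- raises ValueError if the Series has != 1 element
print(evans_age.iloc[0])  # 40 -- first element by position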
The exception in your original code was probably thrown by if abs_res == 0. If abs_res is a pandas.Series, then abs_res == 0 returns another Series; there is no single truth value for an entire list of comparisons. Normally, people pass exactly one value to an if-clause.
if 912:
    print("912 is True")
else:
    print("912 is False")
When an if-statement receives more than one value, the python interpreter does not know what to do.
For example, what should the following do?
import pandas as pd

data = pd.Series([1, 565, 120, 12, 901])

if data:
    print("data is true")
else:
    print("data is false")
You should only input one value into an if-condition. Instead, you entered a pandas.Series object as input to the if-clause.
In your case, the pandas.Series only had one number in it. However, in general, pandas.Series contain many values.
The authors of the python pandas library assume that a series contains many numbers, even if it only has one.
The computer thought that you tried to put many different numbers inside of one single if-clause.
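If you genuinely want to test a whole Series in an if-clause, you must first reduce it to a single truth value, for example with .any() or .all() (the methods the error message suggests):
import pandas as pd

data = pd.Series([1, 565, 120, 12, 901])

if (data > 100).any():   # True if at least one element is > 100
    print("some values are large")

if (data > 0).all():     # True only if every element is > 0
    print("all values are positive")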
The difference between a "function definition" and a "function call"
Your original question was,
"I want to make a function if the common key is found"
Your use of the phrase "make a function" is incorrect. You probably meant, "I want to call a function if a common key is found."
The following are all examples of function "calls":
import pandas as pd
import numpy as np

# (assume foo, funky_function, do_stuff, and dates are defined elsewhere)
z = foo(1, 91)
result = funky_function(811, 22, "green eggs and ham")
output = do_stuff()
data = np.random.randn(6, 4)
df = pd.DataFrame(data, index=dates, columns=list("ABCD"))
Suppose that you have two containers (two dictionaries, say). If you truly want to "make" a function when a common key is found, then you would write code like the following:
dict1 = {'age': 26, 'phone': "303-873-9811"}
dict2 = {'name': "Bob", 'phone': "303-873-9811"}

def foo(dict1, dict2):
    union = set(dict2.keys()).intersection(set(dict1.keys()))
    # if there is a shared key...
    if len(union) > 0:
        # make (create) a new function
        def bar(*args, **kwargs):
            pass
        return bar

new_function = foo(dict1, dict2)
print(new_function)
In python, you "make" a function (define a function) with the def keyword. Using a function's name without def, as in the examples above, is a function call.
I think that your question should be re-titled. You could write, "How do I call a function if two pandas dataframes have a common key?" A second good question would be something like, "What went wrong if we see the error message ValueError: The truth value of a Series is ambiguous?"
Your question was worded strangely, but I think I can answer it.
Generating Test Data
Your question did not include test data. If you ask a question on Stack Overflow again, please provide a small example of some test data.
The following is an example of data we can use:
product_ID full_price
0 prod_id 1-1-1-1 11.11
1 prod_id 2-2-2-2 22.22
2 prod_id 3-3-3-3 33.33
3 prod_id 4-4-4-4 44.44
4 prod_id 5-5-5-5 55.55
5 prod_id 6-6-6-6 66.66
6 prod_id 7-7-7-7 77.77
------------------------------------------------------------
product_code tot_price
0 prod_id 3-3-3-3 34.08
1 prod_id 4-4-4-4 45.19
2 prod_id 5-5-5-5 56.30
3 prod_id 6-6-6-6 67.41
4 prod_id 7-7-7-7 78.52
5 prod_id 8-8-8-8 89.63
6 prod_id 9-9-9-9 100.74
Products 1 and 2 are unique to data-frame 1
Products 8 and 9 are unique to data-frame 2
Both data-frames contain data for products 3 through 7.
The prices are slightly different between data-frames.
The test data above is generated by the following code:
import pandas as pd
from copy import copy
raw_data = [
[
"prod_id {}-{}-{}-{}".format(k, k, k, k),
int("{}{}{}{}".format(k, k, k, k))/100
] for k in range(1, 10)
]
df1 = pd.DataFrame(data=copy(raw_data[:-2]), columns=["product_ID", "full_price"])
df2 = pd.DataFrame(data=copy(raw_data[2:]), columns=["product_code", "tot_price"])
# make df2's prices differ slightly from df1's
for rowid in range(0, len(df2.index)):
    df2.at[rowid, "tot_price"] += 0.75
print(df1)
print(60*"-")
print(df2)
Add some error checking
It is considered best practice to make sure that your function inputs are in the correct format.
You wrote a function named range_calc(z, y). I recommend making sure that z and y are numbers, and not something else (such as pandas Series objects).
import inspect
import io

def range_calc(z, y):
    for arg in (z, y):
        try:
            float(arg)
        except (TypeError, ValueError):
            # float() raises ValueError for a bad string and TypeError for a Series
            function_name = inspect.stack()[0][3]
            with io.StringIO() as string_stream:
                print(
                    "Error: In " + function_name + "(). Inputs should be like decimal numbers.",
                    "Instead, we have: " + str(type(arg)) + " '" + repr(str(arg))[1:-1] + "'",
                    file=string_stream,
                    sep="\n"
                )
                err_msg = string_stream.getvalue()
            raise ValueError(err_msg)
    # DO STUFF
    return
Now we get error messages:
import pandas as pd

data = pd.Series([1, 565, 120, 12, 901])
range_calc("I am supposed to be an integer", data)
# ValueError: Error: In range_calc(). Inputs should be like decimal numbers.
# Instead, we have: <class 'str'> 'I am supposed to be an integer'
Code which Accomplishes what you Wanted.
The following is some rather ugly code which computes what you wanted:
# You can continue to use your original `range_calc()` function unmodified.
# Use the test data I provided earlier in this answer.
def foo(df1, df2):
    last_df = pd.DataFrame(columns=["Product_number", "Market_Price"])
    df1_ids = set(df1["product_ID"].tolist())
    df2_ids = set(df2["product_code"].tolist())
    pids = df1_ids.intersection(df2_ids)  # common product ids
    for pid in pids:
        row1 = df1.loc[df1["product_ID"] == pid]
        row2 = df2.loc[df2["product_code"] == pid]
        price1 = row1["full_price"].tolist()[0]
        price2 = row2["tot_price"].tolist()[0]
        price3 = range_calc(price1, price2)
        row3 = pd.DataFrame([[pid, price3]], columns=["Product_number", "Market_Price"])
        last_df = pd.concat([last_df, row3])
    return last_df
# ---------------------------------------
last_df = foo(df1, df2)
The result is:
Product_number Market_Price
0 prod_id 6-6-6-6 1.112595
0 prod_id 7-7-7-7 0.955171
0 prod_id 4-4-4-4 1.659659
0 prod_id 5-5-5-5 1.332149
0 prod_id 3-3-3-3 2.200704
Note that one of the many reasons my solution is ugly is the following line of code:
last_df = pd.concat([last_df, row3])
If last_df is large (thousands of rows), then the code will run very slowly.
This is because instead of inserting a new row of data, we:
copy the original dataframe
append a new row of data to the copy.
delete/destroy the original data-frame.
It is really silly to copy 10,000 rows of data only to add one new value, and then delete the old 10,000 rows.
However, my solution has fewer bugs than your original code, relatively speaking.
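A sketch of the usual fix, collecting plain rows in a Python list and building the DataFrame once at the end, using the same column names as above:
def foo_fast(df1, df2):
    pids = set(df1["product_ID"]) & set(df2["product_code"])
    rows = []
    for pid in pids:
        price1 = df1.loc[df1["product_ID"] == pid, "full_price"].iloc[0]
        price2 = df2.loc[df2["product_code"] == pid, "tot_price"].iloc[0]
        rows.append([pid, range_calc(price1, price2)])
    # One DataFrame construction instead of one pd.concat() per row
    return pd.DataFrame(rows, columns=["Product_number", "Market_Price"])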
Sometimes when you check a condition on a Series or DataFrame, the output is itself a Series (for example, a Series of booleans). In that case you must reduce it with any, all, item, etc.; print your condition to see the Series for yourself.
I must also tell you that your code is very slow: the nested loops make it O(n**2). You can instead compute df3 by joining df1 and df2 on the product code, then use vectorized operations (or apply) for a fast calculation.
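To make that concrete, here is a sketch of the join-then-vectorize approach, assuming the column names from the question (product_ID/full_price in df1, product_code/tot_price in df2):
import pandas as pd

# Inner join keeps only products present in both frames
df3 = df1.merge(df2, left_on="product_ID", right_on="product_code")

# Vectorized version of range_calc: abs difference as a percentage of the
# larger price (the original function returns z itself when prices are equal).
hi = df3[["full_price", "tot_price"]].max(axis=1)
df3["Market_Price"] = (df3["full_price"] - df3["tot_price"]).abs() / hi * 100
equal = df3["full_price"] == df3["tot_price"]
df3.loc[equal, "Market_Price"] = df3.loc[equal, "full_price"]

df3 = df3[["product_ID", "Market_Price"]].rename(
    columns={"product_ID": "Product_number"}
)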
I have a dataframe as shown below:
>>> import pandas as pd
>>> df = pd.DataFrame(data = [['app;',1,2,3],['app; web;',4,5,6],['web;',7,8,9],['',1,4,5]],columns = ['a','b','c','d'])
>>> df
a b c d
0 app; 1 2 3
1 app; web; 4 5 6
2 web; 7 8 9
3 1 4 5
I have an input array that looks like this: ["app","web"]
For each of these values I want to check against a specific column of a dataframe and return a decision as shown below:
>>> df.a.str.contains("app")
0 True
1 True
2 False
3 False
Since str.contains only allows me to look for a single value, I was wondering if there's some other direct way to do the same for a list of values, something like:
df.a.str.contains(["app","web"]) # Returns TypeError: unhashable type: 'list'
My end goal is not an exact match (df.a.isin(["app", "web"])), but rather a 'contains' check that returns True if those characters are present anywhere in that cell of the data frame.
Note: I can of course use the apply method and write my own function for the same logic, such as:
elementsToLookFor = ["app","web"]
df[header] = df.apply(lambda element: all([a in element for a in elementsToLookFor]))
But I am more interested in the optimal approach, so I would prefer a native pandas function, or failing that, the most optimized custom solution.
This should work too:
l = ["app","web"]
df['a'].str.findall('|'.join(l)).map(lambda x: len(set(x)) == len(l))
also this should work as well:
pd.concat([df['a'].str.contains(i) for i in l],axis=1).all(axis = 1)
So many solutions; which one is the most efficient? The str.contains-based answers are generally fastest, though str.findall is also very fast on smaller dataframes:
values = ['app', 'web']
pattern = ''.join(f'(?=.*{value})' for value in values)
def replace_dummies_all(df):
return df.a.str.replace(' ', '').str.get_dummies(';')[values].all(1)
def findall_map(df):
return df.a.str.findall('|'.join(values)).map(lambda x: len(set(x)) == len(values))
def lower_contains(df):
return df.a.astype(str).str.lower().str.contains(pattern)
def contains_concat_all(df):
return pd.concat([df.a.str.contains(l) for l in values], axis=1).all(1)
def contains(df):
return df.a.str.contains(pattern)
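For reference, a minimal timing harness over those candidates might look like this (df is the small example frame from the question; results will vary with frame size):
import timeit

candidates = [replace_dummies_all, findall_map, lower_contains,
              contains_concat_all, contains]
for func in candidates:
    elapsed = timeit.timeit(lambda: func(df), number=1000)
    print(f"{func.__name__:>22}: {elapsed:.3f}s")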
Try with str.get_dummies
df.a.str.replace(' ','').str.get_dummies(';')[['web','app']].all(1)
0 False
1 True
2 False
3 False
dtype: bool
Update: a single regex also works; each lookahead (?=.*word) asserts that the word appears somewhere in the string:
df['a'].str.contains(r'^(?=.*web)(?=.*app)')
Update 2 (to ensure that case doesn't matter and that the column dtype is str, without which the logic may fail):
elementList = ['app', 'web']
valueString = ''
for eachValue in elementList:
    valueString += f'(?=.*{eachValue})'

df[header] = df[header].astype(str).str.lower()  # ensure case insensitivity and a string dtype
result = df[header].str.contains(valueString)
I have a dataframe of around 0.2 million records. When I pass this dataframe as input to a model, it throws this error:
Cast string to float is not supported.
Is there any way I can check which particular value in the data frame is causing this error?
I've tried running this command and checking if any value is a string in the column.
False in map((lambda x: type(x) == str), trainDF['Embeddings'])
Output:
True
In pandas, when we convert a mixed-type column like this, we do:
df['col'] = pd.to_numeric(df['col'], errors='coerce')
This returns NaN for the items that cannot be converted to float; you can then drop them with dropna or fill in a default value with fillna.
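This also answers the "which value is causing it" part directly: coerce a copy of the column and look at the rows that became NaN (a sketch using the column name from the question):
import pandas as pd

as_num = pd.to_numeric(trainDF['Embeddings'], errors='coerce')
# Rows that were not NaN before but failed the numeric conversion
bad_rows = trainDF[as_num.isna() & trainDF['Embeddings'].notna()]
print(bad_rows)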
You can loop over trainDF's rows and collect the indices that raise errors using try/except:
>>> import pandas as pd
>>> trainDF = pd.DataFrame({'Embeddings':['100', '23.2', '44a', '453.2']})
>>> trainDF
Embeddings
0 100
1 23.2
2 44a
3 453.2
>>> error_indices = []
>>> for idx, row in trainDF.iterrows():
...     try:
...         trainDF.loc[idx, 'Embeddings'] = float(row['Embeddings'])
...     except ValueError:
...         error_indices.append(idx)
...
>>> trainDF
Embeddings
0 100.0
1 23.2
2 44a
3 453.2
>>> trainDF.loc[error_indices]
Embeddings
2 44a
I have a large .csv file that has 11,000,000 rows and 3 columns: id, magh, mixid2.
What I have to do is select the rows with the same id and then check whether those rows all share the same mixid2; if they do, I remove the rows, and if they don't, I initialize a class with the information from the selected rows.
This is my code:
obs = obs.set_index('id')
obs = obs.sort_index()

# dropping elements with only one mixid2 and filling S
ID = obs.index.unique()
S = []
good_bye_list = []
for i in tqdm(ID):
    app = obs.loc[i]
    if len(np.unique([app['mixid2'],])) != 1:
        # fill the class list
        S.append(star(app['magh'].values, app['mixid2'].values, z_in))
    else:
        # drop
        good_bye_list.append(i)
obs = obs.drop(good_bye_list)
The .csv file is very large so it takes 40 min to compute everything.
How can I improve the speed?
Thank you for the help.
This is the .csv file:
id,mixid2,magh
3447001203296326,557,14.25
3447001203296326,573,14.25
3447001203296326,525,14.25
3447001203296326,541,14.25
3447001203296330,540,15.33199977874756
3447001203296330,573,15.33199977874756
3447001203296333,172,17.476999282836914
3447001203296333,140,17.476999282836914
3447001203296333,188,17.476999282836914
3447001203296333,156,17.476999282836914
3447001203296334,566,15.626999855041506
3447001203296334,534,15.626999855041506
3447001203296334,550,15.626999855041506
3447001203296338,623,14.800999641418455
3447001203296338,639,14.800999641418455
3447001203296338,607,14.800999641418455
3447001203296344,521,12.8149995803833
3447001203296344,537,12.8149995803833
3447001203296344,553,12.8149995803833
3447001203296345,620,12.809000015258787
3447001203296345,543,12.809000015258787
3447001203296345,636,12.809000015258787
3447001203296347,558,12.315999984741213
3447001203296347,542,12.315999984741213
3447001203296347,526,12.315999984741213
3447001203296352,615,12.11299991607666
3447001203296352,631,12.11299991607666
3447001203296352,599,12.11299991607666
3447001203296360,540,16.926000595092773
3447001203296360,556,16.926000595092773
3447001203296360,572,16.926000595092773
3447001203296360,524,16.926000595092773
3447001203296367,490,15.80799961090088
3447001203296367,474,15.80799961090088
3447001203296367,458,15.80799961090088
3447001203296369,639,15.175000190734865
3447001203296369,591,15.175000190734865
3447001203296369,623,15.175000190734865
3447001203296369,607,15.175000190734865
3447001203296371,460,14.975000381469727
3447001203296373,582,14.532999992370605
3447001203296373,614,14.532999992370605
3447001203296373,598,14.532999992370605
3447001203296374,184,14.659000396728516
3447001203296374,203,14.659000396728516
3447001203296374,152,14.659000396728516
3447001203296374,136,14.659000396728516
3447001203296374,168,14.659000396728516
3447001203296375,592,14.723999977111815
3447001203296375,608,14.723999977111815
3447001203296375,624,14.723999977111815
3447001203296375,92,14.723999977111815
3447001203296375,76,14.723999977111815
3447001203296375,108,14.723999977111815
3447001203296375,576,14.723999977111815
3447001203296376,132,14.0649995803833
3447001203296376,164,14.0649995803833
3447001203296376,180,14.0649995803833
3447001203296376,148,14.0649995803833
3447001203296377,168,13.810999870300293
3447001203296377,152,13.810999870300293
3447001203296377,136,13.810999870300293
3447001203296377,184,13.810999870300293
3447001203296378,171,13.161999702453613
3447001203296378,187,13.161999702453613
3447001203296378,155,13.161999702453613
3447001203296378,139,13.161999702453613
3447001203296380,565,13.017999649047852
3447001203296380,517,13.017999649047852
3447001203296380,549,13.017999649047852
3447001203296380,533,13.017999649047852
3447001203296383,621,13.079999923706055
3447001203296383,589,13.079999923706055
3447001203296383,605,13.079999923706055
3447001203296384,541,12.732000350952148
3447001203296384,557,12.732000350952148
3447001203296384,525,12.732000350952148
3447001203296385,462,12.784000396728516
3447001203296386,626,12.663999557495115
3447001203296386,610,12.663999557495115
3447001203296386,577,12.663999557495115
3447001203296389,207,12.416000366210938
3447001203296389,255,12.416000366210938
3447001203296389,223,12.416000366210938
3447001203296389,239,12.416000366210938
3447001203296390,607,12.20199966430664
3447001203296390,591,12.20199966430664
3447001203296397,582,16.635000228881836
3447001203296397,598,16.635000228881836
3447001203296397,614,16.635000228881836
3447001203296399,630,17.229999542236328
3447001203296404,598,15.970000267028807
3447001203296404,631,15.970000267028807
3447001203296404,582,15.970000267028807
3447001203296408,540,16.08799934387207
3447001203296408,556,16.08799934387207
3447001203296408,524,16.08799934387207
3447001203296408,572,16.08799934387207
3447001203296409,632,15.84000015258789
3447001203296409,616,15.84000015258789
Hello and welcome to StackOverflow.
In pandas the rule of thumb is that raw loops are almost always slower than the dedicated functions. To apply a function to a sub-DataFrame of rows that fulfill certain criteria, you can use groupby.
In your case the function is a bit ... unpythonic, as the instantiation of S is a side effect, and deleting rows you are currently iterating over is dangerous; in a dictionary, for example, you should never do this. That said, you can create a function like this:
In [37]: def my_func(df):
...: if df['mixid2'].nunique() == 1:
...: return None
...: else:
...: S.append(df['mixid2'])
...: return df
and apply it to your DataFrame via
S = []
obs.groupby('id').apply(my_func)
This iterates over all sub-dataframes with the same id and drops a group if there is exactly one unique value in mixid2; otherwise it appends the values to the list S.
The resulting DataFrame is 3 rows shorter
Out[38]:
id mixid2 magh
id
3447001203296326 0 3447001203296326 557 14.250000
1 3447001203296326 573 14.250000
... ... ... ...
3447001203296409 98 3447001203296409 632 15.840000
99 3447001203296409 616 15.840000
[97 rows x 3 columns]
and S contains 28 elements, which you could pass into the star constructor just as you did.
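If you only need the filtered frame (and want to build S separately), groupby().filter() expresses the same drop in one line:
# Keep only the id groups whose mixid2 values are not all identical
obs_filtered = obs.groupby('id').filter(lambda g: g['mixid2'].nunique() > 1)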
I guess you want to group by mixid2 and exclude all the elements where mixid2 appears only once, using set_index. To get the original shape, we use reset_index after the filtering.
df = obs.set_index('mixid2').loc[~obs.groupby('mixid2').count().id.eq(1)].reset_index()
df.shape
(44, 3)
I'm not entirely sure if I understood you correctly. But what you can do is first remove the duplicates in your dataframe and then use the groupby function to get all the remaining data points with the same id:
# drop all duplicates based on id and mixid2
df.drop_duplicates(["id", "mixid2"], inplace=True)

# then iterate over all groups:
for index, grp in df.groupby(["id"]):
    pass  # do stuff here with the grp
Normally it is a good idea to rely on pandas internal functions, since they are mostly optimised quite well.
new_df = obs.groupby(['id', 'mixid2'], as_index=False).agg('count')
new_df = new_df[new_df['magh'] > 1]
then pass new_df to your function.