How to do multiple queries? - python

I want to do multiple queries. Here is my data frame:
import pandas as pd

data = {'Name': ['Penny','Ben','Benny','Mark','Ben1','Ben2','Ben3'],
        'Eng': [5,1,4,3,1,2,3],
        'Math': [1,5,3,2,2,2,3],
        'Physics': [2,5,3,1,1,2,3],
        'Sports': [4,5,2,3,1,2,3],
        'Total': [12,16,12,9,5,8,12],
        'Group': ['A','A','A','A','A','B','B']}
df1 = pd.DataFrame(data, columns=['Name','Eng','Math','Physics','Sports','Total','Group'])
df1
I have 3 queries:
Group A or B
Math > Eng
Name starts with 'B'
I tried to do them one by one:
df1[df1.Name.str.startswith('B')]
df1.query('Math > Eng')
df1[df1.Group == 'A'] #I cannot run the code with df1[df1.Group == 'A' or 'B']
Then, I tried to merge those queries
df1.query("'Math > Eng' & 'df1[df1.Name.str.startswith('B')]' & 'df1[df1.Group == 'A']")
TokenError: ('EOF in multi-line statement', (2, 0))
I also tried to pass str.startswith() into df.query()
df1.query("df1.Name.str.startswith('B')")
UndefinedVariableError: name 'df1' is not defined
I have tried lots of ways but none of them works. How can I put those queries together?

The long way to solve this – and the one with the most transparency, so best for beginners – is to create a boolean column for each filter, then combine those columns into one final filter:
df1['filter_1'] = df1['Group'].isin(['A','B'])
df1['filter_2'] = df1['Math'] > df1['Eng']
df1['filter_3'] = df1['Name'].str.startswith('B')
# If all are true
df1['filter_final'] = df1[['filter_1', 'filter_2', 'filter_3']].all(axis=1)
You can certainly combine these steps into one:
mask = ((df1['Group'].isin(['A', 'B'])) &
        (df1['Math'] > df1['Eng']) &
        (df1['Name'].str.startswith('B')))
df1['filter_final'] = mask
Lastly, selecting rows which satisfy your filter is done as follows:
df_filtered = df1[df1['filter_final']]
This selects the rows of df1 where filter_final is True.
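With the sample data above, this keeps Ben and Ben1, the only rows that pass all three filters:
print(df_filtered[['Name', 'Eng', 'Math', 'Group']])
#    Name  Eng  Math Group
# 1   Ben    1     5     A
# 4  Ben1    1     2     A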

Firstly, the answer is:
df1.query("Math > Eng & Name.str.startswith('B') & Group=='A'")
Additional comments
In query, column names are referenced directly, without the data frame's name in front of them.
Use df1[df1.Group.isin(['A', 'B'])] or df1.query("Group in ['A', 'B']") instead of df1[df1.Group == 'A' or 'B'].
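For completeness, all three conditions can be combined into a single query (a sketch; depending on your pandas version, the .str accessor inside query may require engine='python'):
result = df1.query("Group in ['A', 'B'] and Math > Eng and Name.str.startswith('B')",
                   engine='python')
print(result)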

Related

Selection in dataframe based on multiple conditions

I am developing a dashboard using Dash.
The user can select different parameters and a dataframe is updated (6 parameters).
The idea was to do:
filtering = []
if len(filter1) > 0:
    filtering.append("df['col1'].isin(filter1)")
if len(filter2) > 0:
    filtering.append("df['col2'].isin(filter2)")
condition = ' & '.join(filtering)
df.loc[condition]
But I get a KeyError, which I understand, since condition is a string.
Any advice on how I can do this? What is the best practice?
NB: I have a working solution with if conditions, but I would like to optimize this part and avoid copying the dataframe (>10 million rows).
dff = df.copy()
if len(filter1) > 0:
    dff = dff.loc[dff.col1.isin(filter1)]
if len(filter2) > 0:
    dff = dff.loc[dff.col2.isin(filter2)]
You can use eval:
filtering = []
if len(filter1) > 0:
    filtering.append("df['col1'].isin(filter1)")
if len(filter2) > 0:
    filtering.append("df['col2'].isin(filter2)")
condition = ' & '.join(filtering)
df.loc[eval(condition)]
You can merge the masks using the & operator and apply the merged mask only once:
from functools import reduce

filters = []
if len(filter1) > 0:
    filters.append(df.col1.isin(filter1))
if len(filter2) > 0:
    filters.append(df.col2.isin(filter2))
if len(filters) > 0:
    final_filter = reduce(lambda a, b: a & b, filters)
    df = df.loc[final_filter]
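An equivalent sketch using numpy (assuming numpy is available): np.logical_and.reduce combines any number of boolean masks in a single call.
import numpy as np

if filters:
    # reduce the list of masks element-wise and apply the result once
    df = df.loc[np.logical_and.reduce(filters)]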

How to filter this dataframe?

I have a large dataframe (sample). I was filtering the data according to this code:
A = [f"A{i}" for i in range(50)]
B = [f"B{i}" for i in range(50)]
C = [f"C{i}" for i in range(50)]
for i in A:
    cond_A = (df[i] >= -0.0423) & (df[i] <= 3)
    filt_df = df[cond_A]
for i in B:
    cond_B = (filt_df[i] >= 15) & (filt_df[i] <= 20)
    filt_df2 = filt_df[cond_B]
for i in C:
    cond_C = (filt_df2[i] >= 15) & (filt_df2[i] <= 20)
    filt_df3 = filt_df2[cond_B]
When I print filt_df3, I get only an empty dataframe - why?
How can I improve the code, perhaps with some more advanced techniques?
I am also not sure the code above works as outlined in the edit below; how can I change it so that it does?
Edit:
I want to remove rows based on columns (A0 - A49) with cond_A.
Then filter the resulting dataframe based on columns (B0 - B49) with cond_B.
Then filter that result based on columns (C0 - C49) with cond_C.
Thank you very much in advance.
It seems to me that there is an issue with your code when you use iteration to do the filtering. For example, filt_df is overwritten in every iteration of the first loop, so when the loop ends, filt_df only contains the data filtered with the condition from the last iteration. Is this what you intend to do?
If you want to do the filtering efficiently, you can try pandas.DataFrame.query (see the documentation here). For example, to keep all rows whose columns B0 to B49 contain values between 0 and 200 inclusive, you can use the Python code below (assuming you have imported the raw data into the variable df).
condition_list = [f'B{i} >= 0 & B{i} <= 200' for i in range(50)]
filter_str = ' & '.join(condition_list)
subset_df = df.query(filter_str)
print(subset_df)
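The same idea extends to all three column groups at once; a sketch using the ranges from the question (with this many columns, engine='python' is the safer choice for query):
condition_list = (
    [f'A{i} >= -0.0423 & A{i} <= 3' for i in range(50)]
    + [f'B{i} >= 15 & B{i} <= 20' for i in range(50)]
    + [f'C{i} >= 15 & C{i} <= 20' for i in range(50)]
)
filt_df3 = df.query(' & '.join(condition_list), engine='python')
print(filt_df3)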
Since column A1 contains only -0.057, which is outside [-0.0423, 3], everything gets filtered out.
Also, the filter is not carried over between loops, since filt_df, filt_df2 and filt_df3 are each reset on every iteration.
This should work:
import pandas as pd

A = [f"A{i}" for i in range(50)]
B = [f"B{i}" for i in range(50)]
C = [f"C{i}" for i in range(50)]

# Compute each mask on the current, already-filtered frame so the
# boolean index always aligns, and carry the result into the next step.
filt_df = df.copy()
for i in A:
    cond_A = (filt_df[i] >= -0.0423) & (filt_df[i] <= 3)
    filt_df = filt_df[cond_A]

filt_df2 = filt_df.copy()
for i in B:
    cond_B = (filt_df2[i] >= 15) & (filt_df2[i] <= 20)
    filt_df2 = filt_df2[cond_B]

filt_df3 = filt_df2.copy()
for i in C:
    cond_C = (filt_df3[i] >= 15) & (filt_df3[i] <= 20)
    filt_df3 = filt_df3[cond_C]

print(filt_df3)
Of course, you will find many more filter tools in the pandas library that can be applied to multiple columns, for example this answer:
https://stackoverflow.com/a/39820329/6139079
You can filter on all columns together with DataFrame.all to test whether every value in a row matches:
A = [f"A{i}" for i in range(50)]
cond_A = ((df[A] >= -0.0423) & (df[A]<=3)).all(axis=1)
B = [f"B{i}" for i in range(50)]
cond_B = ((df[B]>= 15) & (df[B]<=20)).all(axis=1)
C = [f"C{i}" for i in range(50)]
cond_C = ((df[C]>= 15) & (df[C]<=20)).all(axis=1)
And lastly, chain all the masks with & for bitwise AND:
filt_df = df[cond_A & cond_B & cond_C]
If you get an empty DataFrame, it seems that no row satisfies all the conditions.
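To see which mask is emptying the result, you can count the rows each condition keeps (a small diagnostic sketch):
print(cond_A.sum(), cond_B.sum(), cond_C.sum())  # rows kept per condition
print((cond_A & cond_B & cond_C).sum())          # rows kept by all three combined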

Dropping rows at specific minutes

I am trying to drop rows at specific minutes (05, 10, 20).
I have a datetime as the index:
df5['Year'] = df5.index.year
df5['Month'] = df5.index.month
df5['Day']= df5.index.day
df5['Day_of_Week']= df5.index.day_name()
df5['hour']= df5.index.strftime('%H')
df5['Min']= df5.index.strftime('%M')
df5
Then I run the code below:
def clean(df5):
    for i in range(len(df5)):
        hour = pd.Timestamp(df5.index[i]).hour
        minute = pd.Timestamp(df5.index[i]).minute
        if df5 = df5[(df5.index.minute == 5) | (df5.index.minute == 10) | (df5.index.minute == 20)]
            df.drop(axis=1, index=i, inplace=True)
It returns an invalid syntax error.
Looping is not necessary here, and it is not recommended.
Use DatetimeIndex.minute with Index.isin, invert the mask with ~, and filter with boolean indexing:
df5 = df5[~df5.index.minute.isin([5, 10, 20])]
To reuse the column df5['Min'] instead, use string values:
df5 = df5[~df5['Min'].isin(['05', '10', '20'])]
All together:
def clean(df5):
    return df5[~df5.index.minute.isin([5, 10, 20])]
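For example, a quick check with a made-up minute-frequency index (hypothetical sample data):
idx = pd.date_range('2021-01-01 00:00', periods=30, freq='min')
df5 = pd.DataFrame({'val': range(30)}, index=idx)
print(clean(df5))  # the rows at minutes 5, 10 and 20 are gone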
You can just do it using boolean indexing, assuming that the index is already parsed as datetime.
df5 = df5[~((df5.index.minute == 5) | (df5.index.minute == 10) | (df5.index.minute == 20))]
Or equivalently, negating each comparison (by De Morgan's laws the negated conditions must then be combined with & rather than |):
df5 = df5[(df5.index.minute != 5) & (df5.index.minute != 10) & (df5.index.minute != 20)]
Generally speaking, the right syntax for a logical OR inside an if statement is the following:
today = 'Saturday'
if today == 'Sunday' or today == 'Saturday':
    print('Today is off. Rest at home')
In your case, the conditions are element-wise comparisons on a pandas index, so they must be combined with | rather than or, for example:
mask = (df5.index.minute == 5) | (df5.index.minute == 10) | (df5.index.minute == 20)
FINAL NOTE:
You made some mistakes with == and =.
In Python (and many other programming languages), a single equals sign = is used to assign a value to a variable, whereas a double equals sign == is used to check whether two expressions give the same value.
= is an assignment operator
== is an equality operator
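For example:
x = 5          # assignment: the name x now refers to the value 5
print(x == 5)  # equality check: prints True
print(x == 6)  # equality check: prints False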

Compare entire rows for equality if some condition is satisfied

Let's say I have the following data of a match in a CSV file:
name,match1,match2,match3
Alice,2,4,3
Bob,2,3,4
Charlie,1,0,4
I'm writing a Python program. Somewhere in my program I have the scores collected for a match stored in a list, say x = [1,0,4]. I have found where in the data these scores exist using pandas, and I can print "found" or "not found". However, I want my code to print out the name these scores correspond to. In this case the program should output "Charlie", since Charlie has exactly these values [1,0,4]. How can I do that?
I will have a large set of data so I must be able to tell which name corresponds to the numbers I pass to the program.
Yes, here's how to compare entire rows in a dataframe:
df[(df == x).all(axis=1)].index # where x is the pd.Series we're comparing to
Also, it makes life easiest if you directly set name as the index column when you read in the CSV.
import pandas as pd
from io import StringIO
df = """\
name,match1,match2,match3
Alice,2,4,3
Bob,2,3,4
Charlie,1,0,4"""
df = pd.read_csv(StringIO(df), index_col='name')
x = pd.Series({'match1':1, 'match2':0, 'match3':4})
Now you can see that doing df == x, or equivalently df.eq(x), is not quite what you want, because it does an element-wise comparison and returns a row of True/False values. So you need to aggregate those rows with .all(axis=1), which finds the rows where all comparison results were True...
df.eq(x).all(axis=1)
df[ (df == x).all(axis=1) ]
# match1 match2 match3
# name
# Charlie 1 0 4
...and then finally since you only want the name of such rows:
df[ (df == x).all(axis=1) ].index
# Index(['Charlie'], dtype='object', name='name')
df[ (df == x).all(axis=1) ].index.tolist()
# ['Charlie']
which is what you wanted. (I only added the spaces inside the expression for clarity).
You need to use DataFrame.loc, which would work like this:
print(df.loc[(df.match1 == 1) & (df.match2 == 0) & (df.match3 == 4), 'name'])
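Assuming name was kept as a regular column (i.e. the CSV was read without index_col='name'), this prints something like:
# 2    Charlie
# Name: name, dtype: object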
Maybe try something like this:
import pandas as pd
import numpy as np
# Makes sample data
match1 = np.array([2,2,1])
match2 = np.array([4,4,0])
match3 = np.array([3,3,4])
name = np.array(['Alice','Bob','Charlie'])
df = pd.DataFrame({'name': name, 'match1': match1, 'match2': match2, 'match3': match3})
df
# example of the list you want to get the data from
x=[1,0,4]
#x=[2,4,3]
# should return the name Charlie as well as the index (based on the values in the list x)
df['name'].loc[(df['match1'] == x[0]) & (df['match2'] == x[1]) & (df['match3'] ==x[2])]
# Makes a new dataframe out of the above
mydf = pd.DataFrame(df['name'].loc[(df['match1'] == x[0]) & (df['match2'] == x[1]) & (df['match3'] ==x[2])])
# Loop that prints the name(s) based on the index of mydf
# (if more than one name matches, it prints all of them;
# if only one matches, it prints just that one)
for i in range(len(mydf)):
    print(mydf['name'].iloc[i])
You can use this (here data is your DataFrame; change the name to match your own), assuming the values in [1,0,4] are integers:
data = data[(data['match1'] == 1) & (data['match2'] == 0) & (data['match3'] == 4)].index
print(data[0])
If the columns are of object (string) type, use this instead:
data = data[(data['match1'] == "1") & (data['match2'] == "0") & (data['match3'] == "4")].index
print(data[0])

Python pandas extract variables from dataframe

What is the best way to convert DataFrame columns into variables? I have a condition for bet placement, and I use head(n=1):
back_bf_lay_bq = bb[(bb['bf_back_bq_lay_lose_net'] > 0) & (bb['bq_lay_price'] < 5) & (bb['bq_lay_price'] != 0) & (bb['bf_back_liquid'] > bb['bf_back_stake']) & (bb['bq_lay_liquid'] > bb['bq_lay_horse_win'])].head(n=1)
I would like to convert the columns into variables and pass them to an API for bet placement, so I convert back_bf_lay_bq to a dictionary and extract the values:
#Bets placements
dd = pd.DataFrame.to_dict(back_bf_lay_bq, orient='list')
#Betdaq bet placement
bq_selection_id = dd['bq_selection_id'][0]
bq_lay_stake = dd['bq_lay_stake'][0]
bq_lay_price = dd['bq_lay_price'][0]
bet_type = 2
reset_count = dd['bq_count_reset'][0]
withdrawal_sequence = dd['bq_withdrawal_sequence'][0]
kill_type = 2
betdaq_request = betdaq_api.PlaceOrdersNoReceipt(bq_selection_id,bq_lay_stake,bq_lay_price,bet_type,reset_count,withdrawal_sequence,kill_type)
I do not think this is the most efficient way, and it raises an error from time to time:
bq_selection_id = dd['bq_selection_id'][0]
IndexError: list index out of range
So can you suggest a better way to get values from a DataFrame and pass them to an API?
IIUC you could use iloc to get your first row, slice your dataframe to your column subset, and unpack that into your variables. Something like this:
bq_selection_id, bq_lay_stake, bq_lay_price, withdrawal_sequence = back_bf_lay_bq[['bq_selection_id', 'bq_lay_stake', 'bq_lay_price', 'withdrawal_sequence']].iloc[0]
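If the condition can match no rows at all (the likely cause of the IndexError above), a hedged sketch is to guard on an empty result before unpacking:
if not back_bf_lay_bq.empty:
    row = back_bf_lay_bq.iloc[0]  # first (and only) selected row as a Series
    betdaq_request = betdaq_api.PlaceOrdersNoReceipt(
        row['bq_selection_id'],
        row['bq_lay_stake'],
        row['bq_lay_price'],
        2,  # bet_type
        row['bq_count_reset'],
        row['bq_withdrawal_sequence'],
        2,  # kill_type
    )
else:
    print('No row satisfies the bet placement condition; skipping.')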
