How can I drop rows with certain values from a dataframe? - python

I'm taking two different datasets and merging them into a single dataframe, but then I need to look at one of the columns of the result ('Presunto Responsable') and remove the rows that have the value 'Desconocido' in it.
This is my code so far:
#%% Get data
def getData(path_A, path_B):
    victims = pd.read_excel(path_A)
    dfv = pd.DataFrame(data=victims)
    cases = pd.read_excel(path_B)
    dfc = pd.DataFrame(data=cases)
    return dfv, dfc
#%% merge dataframes
def mergeData(data_A, data_B):
    data = pd.DataFrame()
    # merge dataframes, avoiding duplicated columns
    cols_to_use = data_B.columns.difference(data_A.columns)
    data = pd.merge(data_A, data_B[cols_to_use], left_index=True, right_index=True, how='outer')
    cols_at_end = ['Presunto Responsable']
    # move 'Presunto Responsable' to the end of the dataframe
    data = data[[c for c in data if c not in cols_at_end]
                + [c for c in cols_at_end if c in data]]
    return data
#%% Drop 'Desconocido' values in 'Presunto Responsable'
def dropData(data):
    indexNames = data[data['Presunto Responsable'] == 'Desconocido'].index
    for c in indexNames:
        data.drop(indexNames, inplace=True)
    return data
The resulting dataframe still has the rows with 'Desconocido' values in them. What am I doing wrong?

You can just say:
data = data[data['Presunto Responsable'] != 'Desconocido']
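As for why your loop misbehaves: the first pass of the for loop already drops every label in indexNames, so the loop is redundant, and any later pass tries to drop labels that no longer exist, which raises a KeyError. If you prefer drop over the boolean mask, a single call (no loop) is enough:
indexNames = data[data['Presunto Responsable'] == 'Desconocido'].index
data = data.drop(indexNames)  # one call removes all matched rows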
Also, by the way, pd.read_excel() already returns a DataFrame; you don't need to pass the result into pd.DataFrame().
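With that change, getData reduces to the following (same behaviour, just without the redundant constructor calls):
def getData(path_A, path_B):
    # read_excel already returns a DataFrame
    dfv = pd.read_excel(path_A)
    dfc = pd.read_excel(path_B)
    return dfv, dfc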

Related

How to avoid the need for creating variables dynamically?

I am looking into creating a big pandas dataframe from several individual frames. The data is organized in MF4 files, and the number of source files varies for each cycle. The goal is to have this process automated.
Creation of Dataframes:
df = (MDF('File1.mf4')).to_dataframe(channels)
df1 = (MDF('File2.mf4')).to_dataframe(channels)
df2 = (MDF('File3.mf4')).to_dataframe(channels)
These Dataframes are then merged:
df = pd.concat([df, df1, df2], axis=0)
How can I do this without dynamically creating variables for df, df1 etc.? Or is there no other way?
I have all the file paths in an array of the form:
Filepath = ['File1.mf4', 'File2.mf4', 'File3.mf4']
Now I am thinking of looping through it and dynamically creating the dataframes df, df1, ... df1000. Any advice here?
Edit: here is the full code:
df = (MDF('File1.mf4')).to_dataframe(channels)
df1 = (MDF('File2.mf4')).to_dataframe(channels)
df2 = (MDF('File3.mf4')).to_dataframe(channels)
#The Data has some offset:
x = df.index.max()
df1.index += x
x = df1.index.max()
df2.index += x
#With correct index now the data can be merged
df = pd.concat([df, df1, df2], axis=0)
The way I'm interpreting your question, you already have a predefined list of files you want. So just:
l = []
for f in [ list ... of ... files ]:
    df = load_file(f)  # however you load it
    l.append(df)
big_df = pd.concat(l)
del l, df, f  # if you want to clean it up
You therefore don't need to manually specify variable names for your data sub-sections. If you also want to do checks or column renaming between the various files, you can put that into the for-loop (or, if you simplify to a list comprehension, into the load_file function body).
Try this:
df_list = [(MDF(file)).to_dataframe(channels) for file in Filepath]
df = pd.concat(df_list)
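If you also need the index offset from your edit, a variant of the same loop can shift each frame past the previous maximum before collecting it (a sketch; it assumes a numeric index, as in your code):
df_list = []
offset = 0
for file in Filepath:
    frame = MDF(file).to_dataframe(channels)
    frame.index += offset  # reproduce the offset correction from the edit
    offset = frame.index.max()
    df_list.append(frame)
df = pd.concat(df_list)
And if the original index values don't matter at all, pd.concat(df_list, ignore_index=True) replaces them with a fresh RangeIndex.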

rsuffix for merging data in pandas

I have multiple dataframes with the same columns but different values that look like this:
[image: Product 1 Dataframe]
Here's the code that generated them
import pandas as pd
d1 = {"Year":[2018,2019,2020],"Quantity": [10,20,30], "Price": [100,200,300]}
df_product1 = pd.DataFrame(data=d1)
d2 = {"Year":[2018,2019,2020],"Quantity": [20,20,50], "Price": [120,110,380]}
df_product2 = pd.DataFrame(data=d2)
d3 = {"Year":[2018,2019,2020],"Quantity": [40,20,70], "Price": [1000,140,380]}
df_product3 = pd.DataFrame(data=d3)
I merge two dataframes and specify the suffixes like so:
df_total = df_product1.merge(df_product2,on="Year", suffixes = ("_Product1","_Product2"))
And I get
[image: First Merged Dataframe]
However, when I merge another dataframe to the result above using:
df_total = df_total.merge(df_product3,on="Year", suffixes = ("_Product","_Product3"))
I get
[image: Final Merged Dataframe]
Where there is no suffix for the third product.
I would like the last two columns of the dataframe to be Quantity_Product3, Price_Product3 instead of just Quantity and Price.
Let me know if it is possible or if I need to approach the problem from a completely different angle.
Why you don't get the result you want
It's explained in the docs: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html
suffixes : list-like, default is ("_x", "_y")
A length-2 sequence where each element is optionally a string indicating the suffix to add to overlapping column names in left and right respectively. Pass a value of None instead of a string to indicate that the column name from left or right should be left as-is, with no suffix. At least one of the values must not be None.
Suffixes are added to overlapping column names.
See this example - suffixes are added to column b, because both dataframes have a column b, but not to columns a and c, as they are unique and not in common between the two dataframes.
import numpy as np
import pandas as pd

df1 = pd.DataFrame(columns=['a', 'b'], data=np.random.rand(10, 2))
df2 = pd.DataFrame(columns=['b', 'c'], data=np.random.rand(10, 2), index=np.arange(5, 15))
# equivalent to an inner join on the indices
out = pd.merge(df1, df2, how='inner', left_index=True, right_index=True)
# out has columns a, b_x, b_y, c: only the shared name b gets suffixes
A crude solution
Why don't you just rename the columns manually? Not elegant, but effective.
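For instance, a minimal sketch that renames the third product's overlapping columns before the merge (the new names just follow the pattern from your example):
df_product3_renamed = df_product3.rename(
    columns={"Quantity": "Quantity_Product3", "Price": "Price_Product3"})
df_total = df_total.merge(df_product3_renamed, on="Year")
An equivalent shortcut is df_product3.set_index("Year").add_suffix("_Product3").reset_index(), which suffixes every non-key column in one go.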
A possible alternative
The table you are trying to build looks like a pivot. I would look into normalising all your dataframes, concatenating them, then running a pivot on the result.
Depending on your case, this may well be more convoluted and could well be overkill. I mention it because I want to bring your attention to the concepts of pivoting/unpivoting (stacking/unstacking/normalising) data.
The code below takes a df which looks similar to yours and normalises it. For simpler cases you can use pandas.melt(). I don't have the exact data of your example but this should be a good starting point.
import numpy as np
import pandas as pd

def random_dates(start, end, n, unit='D', seed=None):
    ndays = (end - start).days + 1
    return start + pd.to_timedelta(
        np.random.randint(0, ndays, n), unit=unit)
df = pd.DataFrame()
mysize = 20
df['key'] = np.arange(0, mysize)
df['A_value'] = np.random.randint(0, 10000, mysize)
df['A_date'] = random_dates(pd.to_datetime('2010-01-01'), pd.to_datetime('2019-01-01'), mysize)
df['B_value'] = np.random.randint(-5000, 5000, mysize)
df['B_date'] = random_dates(pd.to_datetime('2005-01-01'), pd.to_datetime('2015-01-01'), mysize)
df['C_value'] = np.random.randint(-10000, 10000, mysize)
df['C_date'] = random_dates(pd.to_datetime('2000-01-01'), pd.to_datetime('2019-01-01'), mysize)
df2 = df.set_index('key', drop=True, verify_integrity=True)
df2 = df2.stack().reset_index()
df2.columns = ['key', 'rawsource', 'rawvalue']
df2['source'] = df2['rawsource'].apply(lambda x: x[0:1])
df2['metric'] = df2['rawsource'].apply(lambda x: x[2:])
df2 = df2.drop(['rawsource'], axis=1)
df_piv = df2.pivot_table(index=['key', 'source'], columns='metric',
                         values='rawvalue', aggfunc='first').reset_index().rename_axis(None, axis=1)
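Applied to the product frames from this question, the normalise-then-pivot idea can be sketched with concat keys (the key labels are my own choice; assumes a reasonably recent pandas):
long = pd.concat([df_product1, df_product2, df_product3],
                 keys=["Product1", "Product2", "Product3"])
long = long.reset_index(level=0).rename(columns={"level_0": "product"})
wide = long.pivot(index="Year", columns="product", values=["Quantity", "Price"])
# flatten the (metric, product) column MultiIndex into Quantity_Product1, ...
wide.columns = [f"{metric}_{product}" for metric, product in wide.columns]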

how to iterate over multiple dataframes and add values to new dataframe in python

I have 4 data frames:
df1 = pd.read_csv('values1.csv')
df2 = pd.read_csv('values2.csv')
df3 = pd.read_csv('values3.csv')
df4 = pd.read_csv('values4.csv')
Each of them has a structure as follows (screenshot not reproduced), with a 'category' column and a 'values' column.
I want to create a new dataframe that has aggregated values for each category across all the dataframes. The new dataframe's values should be calculated using the formula:
Total['values'][0] = df1['values'][0] / (df1['values'][0] + df2['values'][0] + df3['values'][0] + df4['values'][0])
It should generate values like this for all the rows.
Can someone please help me out?
First join all the DataFrames with concat and aggregate the sum per category into a Series; then set category as the index of df1's values and divide using Series.div:
s = pd.concat([df1, df2, df3, df4]).groupby('category')['values'].sum()
out = df1.set_index('category')['values'].div(s).reset_index(name='total')
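As a quick sanity check, a toy example (made-up numbers; column names match the question):
import pandas as pd

df1 = pd.DataFrame({'category': ['a', 'b'], 'values': [1, 2]})
df2 = pd.DataFrame({'category': ['a', 'b'], 'values': [3, 2]})
df3 = pd.DataFrame({'category': ['a', 'b'], 'values': [4, 4]})
df4 = pd.DataFrame({'category': ['a', 'b'], 'values': [2, 2]})
s = pd.concat([df1, df2, df3, df4]).groupby('category')['values'].sum()
out = df1.set_index('category')['values'].div(s).reset_index(name='total')
# category 'a': 1 / (1+3+4+2) = 0.1; category 'b': 2 / (2+2+4+2) = 0.2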
EDIT: to divide the combined sum of df1 and df2 by the total across all four frames:
s = pd.concat([df1, df2, df3, df4]).groupby('category')['values'].sum()
s1 = pd.concat([df1, df2]).groupby('category')['values'].sum()
out = s1.div(s).reset_index(name='new')

How to merge columns interspersing the data?

I'm new to Python and pandas. I'm working to create a pandas MultiIndex with two independent variables, flow and head, and I have 27 different design points. The data is currently organized in a single dataframe with columns for each variable and rows for each design point.
Here's how I created the MultiIndex:
flow = df.loc[0, ["Mass_Flow_Rate", "Mass_Flow_Rate.1", "Mass_Flow_Rate.2"]]
dp = df.loc[:, "Design Point"]
index = pd.MultiIndex.from_product([dp, flow], names=['DP', 'Flows'])
I then created three columns of data:
df0 = df.loc[:,"Head2D"]
df1 = df.loc[:,"Head2D.1"]
df2 = df.loc[:,"Head2D.1"]
I then want to merge these into a single column of data so that I can use this command:
pc = pd.DataFrame(data, index=index)
Using the three columns with the same row indexes (0-27), I want to merge them into a single column with the data interspersed. If I call the columns col1, col2 and col3, and denote the row index in parentheses (so col1(0) is column 1, index 0), I want the data to look like:
col1(0)
col2(0)
col3(0)
col1(1)
col2(1)
col3(1)
col1(2)...
It is a bit confusing, but what I understood is that you are trying to do this:
flow = df.loc[0, ["Mass_Flow_Rate", "Mass_Flow_Rate.1", "Mass_Flow_Rate.2"]]
dp = df.loc[:, "Design Point"]
index = pd.MultiIndex.from_product([dp, flow], names=['DP', 'Flows'])
df0 = df.loc[:, "Head2D"]
df1 = df.loc[:, "Head2D.1"]
df2 = df.loc[:, "Head2D.1"]
data = pd.concat([df0, df1, df2])
pc = pd.DataFrame(data=data, index=index)
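Note that pd.concat([df0, df1, df2]) stacks the three columns one after the other (all of col1, then all of col2, ...), while the from_product index expects them interleaved per design point. If you need the interleaved order, a small numpy sketch (assuming the three columns share the same row index):
import numpy as np

# column_stack gives one row per design point; ravel walks row by row,
# yielding col1(0), col2(0), col3(0), col1(1), ...
interleaved = np.column_stack([df0, df1, df2]).ravel()
pc = pd.DataFrame(interleaved, index=index)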

Pandas : Data Frame Pruning

I have a data frame as given below:
data = [['1','tom',1,0],['1','tom',0,1],['2','lynda',0,1],['2','lynda',0,1]]
df = pd.DataFrame(data, columns = ['ID','NAME', 'A','B'])
df.head()
I want to transform the dataframe to look like the below (screenshot not reproduced), wherein the logical OR is taken over columns A and B. ID and NAME always have the same pair of values no matter how many times they appear, but columns A and B can vary (00, 10, 11, 01).
So at the end I want ID, NAME, A, B.
You can always sum and compare to 0.
data = [['1','tom',1,0],['1','tom',0,1],['2','lynda',0,1],['2','lynda',0,1]]
df = pd.DataFrame(data, columns=['ID', 'NAME', 'A', 'B'])
g_df = (df.groupby(['ID', 'NAME']).sum() > 0).astype(float)
g_df.reset_index()
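Equivalently, since A and B only hold 0/1, a groupby max acts as a logical OR directly and keeps the integer dtype (a minor variant, not from the answer above):
g_df = df.groupby(['ID', 'NAME'], as_index=False)[['A', 'B']].max()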
