df1 is from an Excel file with columns as below:

Currency       Net Original  Net USD  COGS
USD            1.5           1.2       2.1
USD            1.3           2.1       1.2
USD            1.1           2.3      -1.1
Peso Mexicano  1.6           2.2       2.1
Step 1: Derive a conversion rate column 'Conv' where 'Currency' is 'Peso Mexicano'.
# Filter the "Peso Mexicano" rows into a separate data frame (df2)
df2 = df1[df1['Currency'] == "Peso Mexicano"]
Step 2:
# Use a formula to get the "Conversion Rate" from df2
df2['Conv'] = df2['Net USD'] / df2['Net Original']
# Output 1.37
# Multiply the filtered 'Conv' with the 'COGS' column to get the desired result
df1['Inv'] = (df2['Conv'] * df1['COGS']) * -1
display(df1)
However, the result shows NaN in the 'Inv' column wherever the currency is 'USD'.
Expected output:
Currency       Net Original  Net USD  COGS   Inv
USD            1.5           1.2       2.1    1.87
USD            1.3           2.1       1.2    0.64
USD            1.1           2.3      -1.1   -2.50
Peso Mexicano  1.6           2.2       2.1    1.87
You needed to aggregate your conv computation into a single scalar, even if there is only one value (I took the mean here); otherwise the division returns a Series aligned on df2's index, and multiplying it by df1['COGS'] produces NaN for every row not in df2.
Here is working code:
df2 = df1[df1['Currency'] == "Peso Mexicano"]
conv = (df2['Net USD'] / df2['Net Original']).mean()
df1['Inv'] = conv * df1['COGS'] - 1
output:
Currency Net Original Net USD COGS Inv
0 USD 1.5 1.2 2.1 1.8875
1 USD 1.3 2.1 1.2 0.6500
2 USD 1.1 2.3 -1.1 -2.5125
3 Peso Mexicano 1.6 2.2 2.1 1.8875
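The fix can be verified end to end with a small sketch (df1 recreated from the question's data):

```python
import pandas as pd

# Rebuild df1 from the question
df1 = pd.DataFrame({
    'Currency': ['USD', 'USD', 'USD', 'Peso Mexicano'],
    'Net Original': [1.5, 1.3, 1.1, 1.6],
    'Net USD': [1.2, 2.1, 2.3, 2.2],
    'COGS': [2.1, 1.2, -1.1, 2.1],
})

# Aggregate the ratio to a scalar so it broadcasts over every row,
# instead of aligning on df2's index and producing NaN elsewhere
df2 = df1[df1['Currency'] == 'Peso Mexicano']
conv = (df2['Net USD'] / df2['Net Original']).mean()

df1['Inv'] = conv * df1['COGS'] - 1
print(df1)
```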
Background:
I'm currently developing some data profiling in SQL Server. This consists of calculating aggregate statistics on the values in targeted columns.
I'm using SQL for most of the heavy lifting, but calling Python for some of the statistics that SQL is poor at calculating. I'm leveraging the pandas package through SQL Server Machine Learning Services.
However, I'm currently developing this script in Visual Studio; the SQL portion is relevant only as background.
Problem:
My issue is that when I call one of the Python statistics functions, it produces the output as a series with the labels seemingly not part of the data. I cannot access the labels at all. I need the values of these labels, and I need to normalize the data and insert a column with static values describing which calculation was performed on that row.
Constraints:
I will need to normalize each statistic so I can union the datasets and pass the values back to SQL for further processing. All output needs to accept dynamic schemas, so no hardcoding labels etc.
Attempted solutions:
I've tried explicitly coercing the output to a dataframe. This just results in a single data column labeled "0".
I've also tried adding static values to the columns. This just adds the target column name as one of the inaccessible labels, and the intended static value as part of the series.
I've searched many times for a solution, and couldn't find anything relevant to the problem.
Code and results below. Using the iris dataset as an example.
###########################
## AGG STATS TEST SCRIPT
##
###########################
#LOAD MODULES
import pandas as pds
#GET SAMPLE DATASET
iris = pds.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
#CENTRAL TENDENCY
mode1 = iris.mode()
stat_mode = pds.melt(mode1)
stat_median = iris.median()
stat_median['STAT_NAME'] = 'STAT_MEDIAN' #Try to add a column with the value 'STAT_MEDIAN'
#AGGREGATE STATS
stat_describe = iris.describe()
#PRINT RESULTS
print(iris)
print(stat_median)
print(stat_describe)
###########################
## OUTPUT
##
###########################
>>> #PRINT RESULTS
... print(iris) #ORIGINAL DATASET
...
sepal_length sepal_width petal_length petal_width species
0 5.1 3.5 1.4 0.2 setosa
1 4.9 3.0 1.4 0.2 setosa
2 4.7 3.2 1.3 0.2 setosa
3 4.6 3.1 1.5 0.2 setosa
4 5.0 3.6 1.4 0.2 setosa
.. ... ... ... ... ...
145 6.7 3.0 5.2 2.3 virginica
146 6.3 2.5 5.0 1.9 virginica
147 6.5 3.0 5.2 2.0 virginica
148 6.2 3.4 5.4 2.3 virginica
149 5.9 3.0 5.1 1.8 virginica
[150 rows x 5 columns]
>>> print(stat_median) #YOU CAN SEE THAT IT INSERTED COLUMN INTO ROW LABELS, VALUE INTO RESULTS SERIES
sepal_length 5.8
sepal_width 3
petal_length 4.35
petal_width 1.3
STAT_NAME STAT_MEDIAN
dtype: object
>>> print(stat_describe) #BASIC DESCRIPTIVE STATS, NEED TO LABEL THE STATISTIC NAMES TO UNPIVOT THIS
sepal_length sepal_width petal_length petal_width
count 150.000000 150.000000 150.000000 150.000000
mean 5.843333 3.057333 3.758000 1.199333
std 0.828066 0.435866 1.765298 0.762238
min 4.300000 2.000000 1.000000 0.100000
25% 5.100000 2.800000 1.600000 0.300000
50% 5.800000 3.000000 4.350000 1.300000
75% 6.400000 3.300000 5.100000 1.800000
max 7.900000 4.400000 6.900000 2.500000
>>>
Any assistance is greatly appreciated. Thank you!
I figured it out. There's a function called reset_index that will convert the index to a column, and create a new numerical index.
stat_median = pds.DataFrame(stat_median)
stat_median.reset_index(inplace=True)
stat_median = stat_median.rename(columns={'index' : 'fieldname', 0: 'value'})
stat_median['stat_name'] = 'median'
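Building on reset_index, a small helper (the function and column names here are my own, not pandas built-ins) normalizes any per-column statistic without hardcoding labels, and describe() can be unpivoted the same way since it keeps statistic names in its index:

```python
import pandas as pd

def normalize_stat(series, stat_name):
    """Turn a labeled stats Series into long form:
    one row per source column, plus a static stat_name column."""
    out = series.rename_axis('fieldname').reset_index(name='value')
    out['stat_name'] = stat_name
    return out

iris = pd.DataFrame({'sepal_length': [5.1, 4.9, 4.7],
                     'sepal_width': [3.5, 3.0, 3.2]})

stat_median = normalize_stat(iris.median(), 'median')

# describe() stores the statistic names in its index, so the same
# reset_index trick exposes them before unpivoting with melt
stat_describe = (iris.describe()
                     .rename_axis('stat_name')
                     .reset_index()
                     .melt(id_vars='stat_name',
                           var_name='fieldname', value_name='value'))

print(stat_median)
print(stat_describe)
```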
I have a data frame and I am looking to get, for each row, the max value and the header of the column where that max is located, returned as a new dataframe. In reality my data frame has over 50 columns and over 30,000 rows:
df1:
ID Tis RNA DNA Prot Node Exv
AB 1.4 2.3 0.0 0.3 2.4 4.4
NJ 2.2 3.4 2.1 0.0 0.0 0.2
KL 0.0 0.0 0.0 0.0 0.0 0.0
JC 5.2 4.4 2.1 5.4 3.4 2.3
So the ideal output looks like this:
df2:
ID
AB Exv 4.4
NJ RNA 3.4
KL N/A N/A
JC Prot 5.4
I have tried the following without any success:
df2 = df1.max(axis=1)
df2.index = df1.idxmax(axis=1)
also tried:
s = pd.Series(df1.columns[np.argmax(df1.values, axis=1)])
final = pd.DataFrame(df1.lookup(s.index, s), s)
I have looked at other posts but still can't seem to solve this.
Any help would be great
If 'ID' is the index, use DataFrame.agg and then replace all-zero rows with missing values:
df = df1.agg(['idxmax','max'], axis=1).mask(lambda x: x['max'].eq(0))
print (df)
idxmax max
AB Exv 4.4
NJ RNA 3.4
KL NaN NaN
JC Prot 5.4
If 'ID' is a column:
df = df1.set_index('ID').agg(['idxmax','max'], axis=1).mask(lambda x: x['max'].eq(0))
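Recreating the question's frame, the one-liner behaves as shown:

```python
import pandas as pd

df1 = pd.DataFrame({'ID': ['AB', 'NJ', 'KL', 'JC'],
                    'Tis': [1.4, 2.2, 0.0, 5.2],
                    'RNA': [2.3, 3.4, 0.0, 4.4],
                    'DNA': [0.0, 2.1, 0.0, 2.1],
                    'Prot': [0.3, 0.0, 0.0, 5.4],
                    'Node': [2.4, 0.0, 0.0, 3.4],
                    'Exv': [4.4, 0.2, 0.0, 2.3]})

# agg computes both statistics per row; mask blanks out rows whose max is 0
df2 = (df1.set_index('ID')
          .agg(['idxmax', 'max'], axis=1)
          .mask(lambda x: x['max'].eq(0)))
print(df2)
```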
Is it possible to add "range" (i.e. max - min) to the pandas describe function in Python? I would like to get output like this:
sepal_length sepal_width
count 150 150
mean 5.843333 3.054
std 0.828066 0.433594
min 4.3 2
25% 5.1 2.8
50% 5.8 3
75% 6.4 3.3
max 7.9 4.4
Range 3.6 2.4
I think the simplest approach is to append a row subtracting min from max, wrapped in a function:
def describe_new(df):
    df1 = df.describe()
    df1.loc["range"] = df1.loc['max'] - df1.loc['min']
    return df1
print (describe_new(df))
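As a quick check on a toy frame (the values here are my own):

```python
import pandas as pd

def describe_new(df):
    # describe() output is indexed by statistic name,
    # so a new row can be appended with .loc
    df1 = df.describe()
    df1.loc['range'] = df1.loc['max'] - df1.loc['min']
    return df1

df = pd.DataFrame({'sepal_length': [4.3, 5.8, 7.9],
                   'sepal_width': [2.0, 3.0, 4.4]})
stats = describe_new(df)
print(stats)
```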
I have a data frame (df) in pandas with four columns, and I want a new column representing the mean of those four: df['mean'] = df.mean(1)
1 2 3 4 mean
NaN NaN NaN NaN NaN
5.9 5.4 2.4 3.2 4.225
0.6 0.7 0.7 0.7 0.675
2.5 1.6 1.5 1.2 1.700
0.4 0.4 0.4 0.4 0.400
So far so good. But when I save the results to a csv file this is what I found:
5.9,5.4,2.4,3.2,4.2250000000000005
0.6,0.7,0.7,0.7,0.6749999999999999
2.5,1.6,1.5,1.2,1.7
0.4,0.4,0.4,0.4,0.4
I guess I can force the format of the mean column, but any idea why this is happening?
I am using winpython with python 3.3.2 and pandas 0.11.0
You could use the float_format parameter:
import pandas as pd
import io
content = '''\
1 2 3 4 mean
NaN NaN NaN NaN NaN
5.9 5.4 2.4 3.2 4.225
0.6 0.7 0.7 0.7 0.675
2.5 1.6 1.5 1.2 1.700
0.4 0.4 0.4 0.4 0.400'''
df = pd.read_table(io.StringIO(content), sep=r'\s+')
df.to_csv('/tmp/test.csv', float_format='%g', index=False)
yields
1,2,3,4,mean
,,,,
5.9,5.4,2.4,3.2,4.225
0.6,0.7,0.7,0.7,0.675
2.5,1.6,1.5,1.2,1.7
0.4,0.4,0.4,0.4,0.4
The answers seem correct. Floating point numbers cannot be perfectly represented on our systems. There are bound to be some differences. Read The Floating Point Guide.
>>> a = 5.9+5.4+2.4+3.2
>>> a / 4
4.2250000000000005
As you said, you could always format the results if you want only a fixed number of digits after the decimal point.
>>> "{:.3f}".format(a/4)
'4.225'
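If you would rather repair the stored values than the CSV formatting, rounding the derived column before writing works too (a sketch with made-up column names):

```python
import pandas as pd

df = pd.DataFrame({'a': [5.9, 0.6], 'b': [5.4, 0.7],
                   'c': [2.4, 0.7], 'd': [3.2, 0.7]})

# Round away the binary-representation noise before serializing
df['mean'] = df.mean(axis=1).round(3)
csv = df.to_csv(index=False)
print(csv)
```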