ANOVA Analysis in Python (Urgent) - python

I hope I can be as clear as possible.
I have an Excel file with 400 subjects for a study, and for each one of them I have their age, their sex, and 40 more columns of biological variables.
E.g.: CODE0001; (age) 20; M/F; Biovalue1; Biovalue2 ... Biovalue40.
My goal is to analyze these data with a one-way ANOVA, because I think it's the best option I have. I'm trying to do it (even using this guide https://www.marsja.se/four-ways-to-conduct-one-way-anovas-using-python/ ), but there's always a problem with the code.
So: how can I set up my data so that I can use the code from that website, for example?
I've already run Dataset.mean() and Dataset.std() on all the data, but I can't use a value like "Mean Age", because it seems Jupyter reads it as a string rather than a number.
I'm in a deep state of confusion, so any kind of help will be much appreciated!
Thank you in advance.

I'm sorry, but I didn't understand. I'm relatively new to Python, so maybe I couldn't explain myself properly.
I need to do an ANOVA analysis:
1) First I did this:
AnalisiISAD.mean()
2) Then I made a list from that:
MeanList = [......]
3) Then I proceeded with the ANOVA script:
AnalisiI.boxplot('MeanList', by='AgeT0', figsize=(12, 8))
ctrl = Analisi['MeanList'][Analisi == 'ctrl']
grps = pd.unique(Analisi.group.values)
d_data = {grp: Analisi['MeanList'][Analisi.group == grp] for grp in grps}
k = len(pd.unique(Analisi.group))
N = len(Analisi.values)
n = Analisi.groupby('AgeT0').size()[0]
but this error occurs: KeyError: 'Column not found: MeanList'
Does this mean I have to create a new column in the Excel file? How do I do that?

When using df.mean() or df.std(), try converting the data to a pd.Series first, then run it.
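To make that concrete, here is a minimal sketch of one way to get from a wide Excel sheet to a one-way ANOVA with scipy; the filename and the column names ("AgeT0", "Sex", "Biovalue1") are assumptions based on the question, not the asker's actual data:

import pandas as pd
from scipy import stats

# Assumed filename and column names, for illustration only
df = pd.read_excel("study_data.xlsx")

# Columns that pandas read as strings must be coerced to numbers first
df["AgeT0"] = pd.to_numeric(df["AgeT0"], errors="coerce")

# One-way ANOVA on a single biological variable, grouped by sex
groups = [grp["Biovalue1"].dropna() for _, grp in df.groupby("Sex")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")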

Python: Use the "i" counter in while loop as digit for expressions

This seems like it should be very simple, but I'm not sure of the proper syntax in Python. To streamline my code I want a while loop (or a for loop, if that's better) to cycle through 9 datasets, using the counter to pick the correct file.
I would like to use the "i" variable within the while loop so that, for each sequentially named file, I can get the average of 2 arrays, the max-min of that delta, and the max-min of another array.
Below is example code of what I am trying to do, but avg(i) and temp(i) in the loop are not valid syntax. Thank you very much for any help; I will continue to look for solutions, but I'm unsure how best to phrase this to search for them.
temp1 = pd.read_excel("/content/113VW.xlsx")
temp2 = pd.read_excel("/content/113W6.xlsx")
..-> temp9
i=1
while i<=9
avg(i) =np.mean(np.array([temp(i)['CC_H='],temp(i)['CC_V=']]),axis=0)
Delta(i)=(np.max(avg(i)))-(np.min(avg(i)))
deltaT(i)=(np.max(temp(i)['temperature='])-np.min(temp(i)['temperature=']))
i+= 1
E.g., the slow method would be repeating this code for each file:
avg1 =np.mean(np.array([temp1['CC_H='],temp1['CC_V=']]),axis=0)
Delta1=(np.max(avg1))-(np.min(avg1))
deltaT1=(np.max(temp1['temperature='])-np.min(temp1['temperature=']))
avg2 =np.mean(np.array([temp2['CC_H='],temp2['CC_V=']]),axis=0)
Delta2=(np.max(avg2))-(np.min(avg2))
deltaT2=(np.max(temp2['temperature='])-np.min(temp2['temperature=']))
......
Think of things in terms of lists.

import numpy as np
import pandas as pd

temps = []
for name in ('113VW', '113W6', ...):  # list all nine file stems here
    temps.append(pd.read_excel(f"/content/{name}.xlsx"))

avg = []
Delta = []
deltaT = []
for data in temps:
    avg.append(np.mean(np.array([data['CC_H='], data['CC_V=']]), axis=0))
    Delta.append(np.max(avg[-1]) - np.min(avg[-1]))
    deltaT.append(np.max(data['temperature=']) - np.min(data['temperature=']))

You could just do your computations inside the first loop, if you don't need the dataframes after that point.
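For reference, that combined single-loop version might look like this (same assumed column names and file stems as above):

import numpy as np
import pandas as pd

avg, Delta, deltaT = [], [], []
for name in ('113VW', '113W6'):  # extend with the remaining file stems
    data = pd.read_excel(f"/content/{name}.xlsx")
    a = np.mean(np.array([data['CC_H='], data['CC_V=']]), axis=0)
    avg.append(a)
    Delta.append(np.max(a) - np.min(a))
    deltaT.append(np.max(data['temperature=']) - np.min(data['temperature=']))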
The way that I would tackle this problem would be to create a list of filenames, and then iterate through them to do the necessary calculations, as per the following:

import numpy as np
import pandas as pd

# Place the files to read into this list
files_to_read = ["/content/113VW.xlsx", "/content/113W6.xlsx"]

results = []
for i, filename in enumerate(files_to_read):
    temp = pd.read_excel(filename)
    avg_val = np.mean(np.array([temp['CC_H='], temp['CC_V=']]), axis=0)
    Delta = np.max(avg_val) - np.min(avg_val)
    deltaT = np.max(temp['temperature=']) - np.min(temp['temperature='])
    results.append({"avg": avg_val, "Delta": Delta, "deltaT": deltaT})

# Create a dataframe to show the results
df = pd.DataFrame(results)
print(df)

I have included enumerate to grab the index (i) in case you want to access it or include it in the results. For example, you could change the results.append line to something like this:
results.append({"index": i, "Filename": filename, "avg": avg_val, "Delta": Delta, "deltaT": deltaT})
Not sure if I understood the question correctly, but if you want to read the files inside a loop using an index (the i variable), you can create a list to hold the contents of the Excel files instead of using 9 different variables.
Something like:

files = []
files.append(pd.read_excel("/content/113VW.xlsx"))
files.append(pd.read_excel("/content/113W6.xlsx"))
...

Then use the index variable to iterate over the list (note that list indices start at 0, not 1):

i = 0
avgs = []
while i < len(files):
    avgs.append(np.mean(np.array([files[i]['CC_H='], files[i]['CC_V=']]), axis=0))
    ...
    i += 1

P.S.: I am not a Pandas/NumPy expert, so you may have to adapt the code to your needs.

python - Is it possible to concatenate a string in an equation?

I have variables for all of the months, and it would be better to change the month once at the start of the script than everywhere in it. Don't focus on the computations, only on understanding the Python.
Is it possible?
My variable is uqFEB (and I have uqJAN, uqMAR, ...).
uqFEB = xarray.DataArray: lat: 721, lon: 1440
month = 'FEB'
What I'd like is to write something like this:
x = uq+{'month'}+*2
or
x = uq'month'*2
and have Python substitute the name and understand:
x = uqFEB*2
How can I write this? Is it possible?
Thank you so much!
As mentioned in my comment (and others) you want a dictionary. Any time you are thinking of dynamically referencing a variable in your code, think "Dictionary" instead:
month_dict={"Jan":1, "Feb":2, "Mar":3, "Apr":4, "May":5, "Jun":6, "Jul":7, "Aug":8, "Sept":9, "Oct":10, "Nov":11, "Dec":12}
x=month_dict["Feb"]**2
print(x)
4
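Applied to the original uqFEB example, the same idea would look something like the sketch below; the dummy DataArrays only stand in for the real uqJAN/uqFEB/... variables from the question:

import numpy as np
import xarray as xr

# Dummy stand-ins for the real monthly DataArrays
uqJAN = xr.DataArray(np.random.rand(721, 1440), dims=("lat", "lon"))
uqFEB = xr.DataArray(np.random.rand(721, 1440), dims=("lat", "lon"))

# Key the arrays by month name instead of baking the month into the variable name
uq = {"JAN": uqJAN, "FEB": uqFEB}

month = "FEB"
x = uq[month] * 2  # behaves exactly like uqFEB * 2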

Finding Min and Max Values in List Pattern in Python

I have a CSV file whose data follow a pattern of a mean value with a [min-max] range, e.g. 135 [113-166].
I am importing it from the CSV file. The input data contain some whitespace, which I handle with the pattern below. For output, I want to write a function that takes this file as input and prints the lowest and highest blood pressure. It should also return the average of all the mean values. I am not allowed to use pandas.
I wrote the code block below.
import re

bloods = open(bloodfilename).read().split("\n")
blood_pressure = bloods[4].split(",")[1]
pattern = r"\s*(\d+)\s*\[\s*(\d+)-(\d+)\s*\]"
re.findall(pattern, blood_pressure)

# now extract mean, min and max information from the blood_pressure of each patient
# and write a new file called blood_pressure_modified.csv
pattern = r"\s*(\d+)\s*\[\s*(\d+)-(\d+)\s*\]"
outputfilename = "blood_pressure_modified.csv"
# create a writeable file
outputfile = open(outputfilename, "w")
for blood in bloods:
    patient_id, blood_pressure = blood.strip().split(",")
    mean = re.findall(pattern, blood_pressure)[0]
    blood_pressure_modified = re.sub(pattern, "", blood_pressure)
    print(patient_id, blood_pressure_modified, mean, sep=",", file=outputfile)
outputfile.close()
The output should look like this:
[expected output shown as an image in the original post]
This is a very simple kind of answer to this. No regex, pandas or anything.
Let me know if this works. I can try making it handle any case it doesn't.

bloods = open("bloodfilename.csv").read().split("\n")
means = []
# Rather than keeping two lists of mins and maxs, we could keep just one list
# and take min and max from it later; both are kept here for clarity.
mins = []
maxs = []
for val in bloods[1:]:  # assuming the first line is the CSV header
    if not val.strip():  # skip blank lines (e.g. a trailing newline)
        continue
    mean, ranges = val.split(',')[1].split('[')
    means.append(int(mean.strip()))
    first, second = ranges.split(']')[0].split('-')
    mins.append(int(first.strip()))
    maxs.append(int(second.strip()))
print(f'the lowest and the highest blood pressure are: {min(mins)} {max(maxs)} respectively\naverage of mean values is {sum(means)/len(means)}')

You could also create small helper functions for the parsing steps; that's usually a better way to structure code. I wrote this in a bit of a hurry, so don't mind the style.
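For instance, a small parsing helper might look like this (a hypothetical function name, following the same split logic as above):

def parse_blood_pressure(field):
    """Parse a '135[113-166]'-style field into (mean, low, high) integers."""
    mean, ranges = field.split('[')
    low, high = ranges.split(']')[0].split('-')
    return int(mean.strip()), int(low.strip()), int(high.strip())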
Maybe this could help with your question.
Suppose you have a CSV file like this, and want to extract only the min and max values,
SN Number
1 135[113-166]
2 140[110-155]
3 132[108-180]
4 40[130-178]
5 133[118-160]
Then,
import pandas as pd

df = pd.read_csv("REPLACE_WITH_YOUR_FILE_NAME.csv")
results = df['YOUR_NUMBER_COLUMN'].apply(lambda x: x.split("[")[1].strip("]").split("-"))
with open("results.csv", mode="w") as f:
    f.write("MIN," + "MAX")
    f.write("\n")
    for i in results:
        f.write(str(i[0]) + "," + str(i[1]))
        f.write("\n")
# the with-block closes the file automatically, so no explicit f.close() is needed

After you run the snippet without any errors, there should be a file named results.csv in your current working directory. Open it up and you will have the results.

Dynamic creation of fieldname in pandas groupby-aggregate

I have many aggregations of the type below in my code:

period = 'ag'
index = ['PA']
lvl = 'pa'
wm = lambda x: np.average(x, weights=dfdom.loc[x.index, 'pop'])
dfpa = dfdom[dfdom['stratum_kWh'] != 8].groupby(index).agg(
    pa_mean_ea_ag_kwh=('mean_ea_' + period + '_kwh', wm),
    pa_pop=('dom_pop', 'sum'))

It's straightforward to build the right-hand side of the aggregation equation. I want to also dynamically build the left-hand side, so that 'dom', 'ea', 'ag' and 'kw/kwh/thm' can all be created as variable inputs and used depending on which process I'm executing. This would significantly reduce the amount of code that needs to be written, and updates would be easier to manage; otherwise I need to write separate, otherwise identical code for each combination of the above.
Can I use eval to do this? I'd appreciate guidance on how to do it. Thanks.
Adding code written after feedback from Vaidøtas I.:

index = ['PA']
lvl = 'pa'
fname = lvl + "_pop"
b = f'dfdom.groupby({index}).agg({lvl}_pop = ("dom_pop", "sum"))'
dfpab = exec(b)

The output of the above is a NoneType object. If I lift the text in variable b and run the code directly, as shown below, I get a dataframe:

dfpab = dfdom.groupby(['PA']).agg(pa_pop=("dom_pop", "sum"))

(I've simplified my original example to better connect with the second code block.)
Use exec(); eval() is something different.
For example:

exec(f"variable_name{added_namepart} = variable_value{added_valuepart}")

Python nest list performance choice

I am trying to understand whether there is an advantage in space/time/programming to storing data from a signal processing system as a nested list in either:
1) data[channel][sample]
2) data[sample][channel]
I can code the processing for both, though I personally find (1) easier to write and index than (2).
However, (2) is the more common way my local group programs and stores the data (either in Excel/CSV or from the data-gathering systems). While it is easy to transpose:
dataA = list(map(list, zip(*dataB)))
I was wondering if there are any storage or performance, or even module-compatibility, issues with (1) over (2)?
With (1) I can loop like this:

for R in dataA:
    for C in R:
        process_channel(C)
matplotlib.pyplot.loglog(dataA[0], dataA[i])

where dataA[0] is time or frequency and i is some other channel to plot.
With (2):

for R in dataB:
    for C in R:
        process_sample(C)
matplotlib.pyplot.loglog([j[0] for j in dataB], [k[i] for k in dataB])
This looks worse in programming style. Maybe I am missing a list method that would make this easier? I have also developed code that uses dicts, but this really breaks down with general use, so I am less inclined to continue using them. The dict storage looks like:

dataC = [{'f': 0.1, 'chnl1': 100.0}, {'f': 0.2, 'chnl1': 110.0}]

or some such. It seems that option (2) is the better-integrated choice. However, I am trying to understand how best to code with option (2) when you wish to process over channels and then samples. Just transpose the matrix first, do the work in option (1) space, and transpose the results back:
import numpy

def smoothing(d, step):
    td = numpy.transpose(d)  # to channel-major order
    nd = []
    for row in td:
        col = []
        for i in range(0, len(row) - step, step):
            col.append(sum(row[i:i + step]) / step)
        nd.append(col)
    return numpy.transpose(nd)  # back to sample-major order

dataA = smoothing(dataA, smooth_factor)
While this construction works, transposing back and forth all the time looks, um, inefficient.
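One way to sidestep the transpose question entirely, offered here only as a sketch, is to hold the data in a single 2-D NumPy array, where either orientation becomes just a choice of axis and transposing is a free view rather than a copy:

import numpy as np

# rows = channels, columns = samples, i.e. layout (1) from the question
data = np.array([[0.1, 0.2, 0.3, 0.4],          # channel 0: time or frequency
                 [100.0, 110.0, 105.0, 98.0]])  # channel 1: some measurement

per_channel_mean = data.mean(axis=1)  # one value per channel
per_sample_mean = data.mean(axis=0)   # one value per sample
transposed = data.T                   # O(1) view, no data copied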
