How can I multiply a numpy array with pandas series? - python

I have a numpy array of shape (50,):
array([1.01255569e+00, 1.04166667e+00, 1.07158165e+00, 1.10229277e+00,
1.13430127e+00, 1.16387337e+00, 1.20365912e+00, 1.24007937e+00,
1.27877238e+00, 1.31856540e+00, 1.35281385e+00, 1.40291807e+00,
1.45180023e+00, 1.49700599e+00, 1.55183116e+00, 1.60051216e+00,
1.66002656e+00, 1.73370319e+00, 1.80115274e+00, 1.87687688e+00,
1.95312500e+00, 2.04750205e+00, 2.14961307e+00, 2.23613596e+00,
2.34082397e+00, 2.48015873e+00, 2.61780105e+00, 2.75027503e+00,
2.91715286e+00, 3.07881773e+00, 3.31564987e+00, 3.57142857e+00,
3.81679389e+00, 4.17362270e+00, 4.51263538e+00, 4.95049505e+00,
5.59284116e+00, 6.17283951e+00, 7.02247191e+00, 8.03858521e+00,
9.72762646e+00, 1.17370892e+01, 1.47928994e+01, 2.10084034e+01,
3.12500000e+01, 4.90196078e+01, 9.25925926e+01, 2.08333333e+02,
5.00000000e+02, 1.25000000e+03])
And I have a pandas dataframe of length 50 as well.
x
0 9.999740e-01
1 9.981870e-01
2 9.804506e-01
3 9.187764e-01
4 8.031568e-01
5 6.544660e-01
6 5.032716e-01
7 3.707446e-01
8 2.650768e-01
9 1.857835e-01
10 1.285488e-01
11 8.824506e-02
12 6.030141e-02
13 4.111080e-02
14 2.800453e-02
15 1.907999e-02
16 1.301045e-02
17 8.882996e-03
18 6.074386e-03
19 4.161024e-03
20 2.855636e-03
21 1.963543e-03
22 1.352791e-03
23 9.338596e-04
24 6.459459e-04
25 4.476854e-04
26 3.108912e-04
27 2.163201e-04
28 1.508106e-04
29 1.053430e-04
30 7.372442e-05
31 5.169401e-05
32 3.631486e-05
33 2.555852e-05
34 1.802129e-05
35 1.272995e-05
36 9.008454e-06
37 6.386289e-06
38 4.535381e-06
39 3.226546e-06
40 2.299394e-06
41 1.641469e-06
42 1.173785e-06
43 8.407618e-07
44 6.032249e-07
45 4.335110e-07
46 3.120531e-07
47 2.249870e-07
48 1.624726e-07
49 1.175140e-07
And I want to multiply each element of the numpy array with the corresponding cell of the pandas column.
Example:
1.01255569e+00*9.999740e-01
1.04166667e+00*9.981870e-01
Desired output: a numpy array of the same size.

You can just use the .values property of the 'x' series in your Pandas dataframe:
df['x'].values * arr
where df is your dataframe and arr is your array.
The above expression will return the result as a NumPy array. If you want a Pandas Series instead, you can omit the use of .values:
df['x'] * arr

Or use np.multiply to multiply n with p['x'].values:
print(np.multiply(n,p['x'].values))
Or pd.Series.multiply:
print(np.array(p['x'].multiply(n)))
Or pd.Series.mul:
print(np.array(p['x'].mul(n)))
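All of the above compute the same element-wise product. A minimal runnable sketch, using made-up 3-element stand-ins for the 50-element array and dataframe, to confirm the shapes and result:

```python
import numpy as np
import pandas as pd

# Made-up 3-element stand-ins for the 50-element array and dataframe
arr = np.array([1.0, 2.0, 4.0])
df = pd.DataFrame({'x': [0.5, 0.25, 0.125]})

result = df['x'].values * arr  # element-wise product, returned as a NumPy array
print(result)  # [0.5 0.5 0.5]
```

Because both operands have the same length, NumPy broadcasting pairs element i with element i, which is exactly the cell-by-cell multiplication asked for.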

Related

Sort rows of curve shaped data in python

I have a dataset that consists of 5 rows that are formed like a curve. I want to separate the inner row from the others, or if possible each row, and store them in separate arrays. Is there any way to do this, e.g. by somehow flattening the curved data and sorting it afterwards based on the x and y values?
I would like to assign each row, from left to right, numbers from 0 to the maximum of the row. Right now the labels for each dot are not useful to me, and I can't change the labels.
Here are the first 50 data points of my data set:
x y
0 -6.4165 0.3716
1 -4.0227 2.63
2 -7.206 3.0652
3 -3.2584 -0.0392
4 -0.7565 2.1039
5 -0.0498 -0.5159
6 2.363 1.5329
7 -10.7253 3.4654
8 -8.0621 5.9083
9 -4.6328 5.3028
10 -1.4237 4.8455
11 1.8047 4.2297
12 4.8147 3.6074
13 -5.3504 8.1889
14 -1.7743 7.6165
15 1.1783 6.9698
16 4.3471 6.2411
17 7.4067 5.5988
18 -2.6037 10.4623
19 0.8613 9.7628
20 3.8054 9.0202
21 7.023 8.1962
22 9.9776 7.5563
23 0.1733 12.6547
24 3.7137 11.9097
25 6.4672 10.9363
26 9.6489 10.1246
27 12.5674 9.3369
28 3.2124 14.7492
29 6.4983 13.7562
30 9.2606 12.7241
31 12.4003 11.878
32 15.3578 11.0027
33 6.3128 16.7014
34 9.7676 15.6557
35 12.2103 14.4967
36 15.3182 13.5166
37 18.2495 12.5836
38 9.3947 18.5506
39 12.496 17.2993
40 15.3987 16.2716
41 18.2212 15.1871
42 21.1241 14.0893
43 12.3548 20.2538
44 15.3682 18.9439
45 18.357 17.8862
46 21.0834 16.6258
47 23.9992 15.4145
48 15.3776 21.9402
49 18.3568 20.5803
50 21.1733 19.3041
It seems that your curves have a pattern, so you could select the curve of interest using slicing. I had to offset the selection slightly to get the five curves because the first 8 points are not in the same order as the rest of the data, so the initial 8 data points are discarded. They could be added back in afterwards if required.
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({ 'x': [-6.4165, -4.0227, -7.206, -3.2584, -0.7565, -0.0498, 2.363, -10.7253, -8.0621, -4.6328, -1.4237, 1.8047, 4.8147, -5.3504, -1.7743, 1.1783, 4.3471, 7.4067, -2.6037, 0.8613, 3.8054, 7.023, 9.9776, 0.1733, 3.7137, 6.4672, 9.6489, 12.5674, 3.2124, 6.4983, 9.2606, 12.4003, 15.3578, 6.3128, 9.7676, 12.2103, 15.3182, 18.2495, 9.3947, 12.496, 15.3987, 18.2212, 21.1241, 12.3548, 15.3682, 18.357, 21.0834, 23.9992, 15.3776, 18.3568, 21.1733],
'y': [0.3716, 2.63, 3.0652, -0.0392, 2.1039, -0.5159, 1.5329, 3.4654, 5.9083, 5.3028, 4.8455, 4.2297, 3.6074, 8.1889, 7.6165, 6.9698, 6.2411, 5.5988, 10.4623, 9.7628, 9.0202, 8.1962, 7.5563, 12.6547, 11.9097, 10.9363, 10.1246, 9.3369, 14.7492, 13.7562, 12.7241, 11.878, 11.0027, 16.7014, 15.6557, 14.4967, 13.5166, 12.5836, 18.5506, 17.2993, 16.2716, 15.1871, 14.0893, 20.2538, 18.9439, 17.8862, 16.6258, 15.4145, 21.9402, 20.5803, 19.3041]})
# Generate the 5 dataframes
df_list = [df.iloc[i+8::5, :] for i in range(5)]
# Generate the plot
fig = plt.figure()
for frame in df_list:
    plt.scatter(frame['x'], frame['y'])
plt.show()
# Print the data of the innermost curve
print(df_list[4])
OUTPUT:
The 5th dataframe df_list[4] contains the data of the innermost plot.
x y
12 4.8147 3.6074
17 7.4067 5.5988
22 9.9776 7.5563
27 12.5674 9.3369
32 15.3578 11.0027
37 18.2495 12.5836
42 21.1241 14.0893
47 23.9992 15.4145
You can then add the missing data like this:
# Retrieve the two missing points of the inner curve
inner_curve = pd.concat([df_list[4], df[5:7]]).sort_index(ascending=True)
print(inner_curve)
# Plot the inner curve only
fig2 = plt.figure()
plt.scatter(inner_curve['x'], inner_curve['y'], color = '#9467BD')
plt.show()
OUTPUT: inner curve
x y
5 -0.0498 -0.5159
6 2.3630 1.5329
12 4.8147 3.6074
17 7.4067 5.5988
22 9.9776 7.5563
27 12.5674 9.3369
32 15.3578 11.0027
37 18.2495 12.5836
42 21.1241 14.0893
47 23.9992 15.4145
Complete Inner Curve

I need help comparing data within a table in python

I have the following table
a1b1  a1Eb1  a1b2  a1Eb2  a2b1  a2Eb1  a2b2  a2Eb2  a3b1  a3Eb1  a3b2  a3Eb2
2     20     8     54     3     56     3     67     2     78     7     75
8     30     6     67     6     35     4     56     3     85     6     74
5     54     4     64     7     23     6     48     4     67     4     82
6     65     7     53     8     27     7     35     5     25     3     64
4     34     2     52     4     28     8     27     6     94     2     29
I want to compare the following data:
a1b1 vs a1b2;
then generate arrays containing

a1b1  a1b2  minor a1b1
2     8     20

a1b2  a1b1  minor a1b2
6     8     30

and so on for each row of the table
and for each of the following comparisons
a2b1 vs a2b2;
a3b1 vs a3b2;
I have tried to do it with pandas in Python:
import pandas as pd
import numpy as np
df = pd.DataFrame({'a1b1':[2,8,5,6,4],
'a1Eb1':[20,30,54,65,34],
'a1b2':[8,6,4,7,2],
'a1Eb2':[54,67,64,53,52],
'a2b1':[3,6,7,8,4],
'a2Eb1':[56,35,23,27,28],
'a2b2':[3,4,6,7,8],
'a2Eb2':[67,56,48,35,27],
'a3b1':[2,3,4,5,6],
'a3Eb1':[78,85,67,25,94],
'a3b2':[7,6,4,3,2],
'a3Eb2':[75,74,82,64,29],
})
but I don't know how to go on.
Output expected
For the first line, a1b1 < a1b2, so print the following:
df1 = pd.DataFrame({'a1b1': [2],
                    'a1b2': [8],
                    'a1Eb1': [20]})
This can be a DataFrame, a list or any other data structure.
If you want to display only specific columns of your dataframe, you can use the following syntax with [[ and ]] after the name of the dataframe (df), and in between you just add the names of the columns you want to see. It can be 2, 3 or even all of the columns of the dataframe, as long as you separate their names with commas and put them between quotes.
df[['a1b1','a1b2']] # to display two columns
df[['a2b1','a2b2']]
df[['a3b1','a3b2']]
To display 3 columns, it could for example be:
df[['a3b1','a3b2','a3Eb1']]
and so on.
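Column selection alone does not produce the per-row comparison described in the question. Below is a sketch of one possible approach, which assumes the intended logic is: for each row, treat the smaller of the two b-columns as the "minor" and keep its matching E column (the question's second example is ambiguous about which E column is kept, so this is an interpretation, not a definitive answer):

```python
import pandas as pd

# The a1 columns of the question's dataframe (same numbers)
df = pd.DataFrame({'a1b1': [2, 8, 5, 6, 4],
                   'a1Eb1': [20, 30, 54, 65, 34],
                   'a1b2': [8, 6, 4, 7, 2],
                   'a1Eb2': [54, 67, 64, 53, 52]})

rows = []
for _, row in df.iterrows():
    if row['a1b1'] < row['a1b2']:
        # a1b1 is the minor value: keep its E column
        rows.append({'a1b1': row['a1b1'], 'a1b2': row['a1b2'],
                     'minor_E': row['a1Eb1']})
    else:
        # a1b2 is the minor value: keep its E column
        rows.append({'a1b1': row['a1b1'], 'a1b2': row['a1b2'],
                     'minor_E': row['a1Eb2']})

result = pd.DataFrame(rows)
print(result)
```

The same loop can then be repeated for the a2b1/a2b2 and a3b1/a3b2 column pairs.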

Plot histogram using two columns (values, counts) in python dataframe

I have a dataframe having multiple columns in pairs: if one column is values then the adjacent column is the corresponding counts. I want to plot a histogram using values as x variable and counts as the frequency.
For example, I have the following columns:
Age Counts
60 1204
45 700
21 400
. .
. .
34 56
10 150
I want my code to bin the Age values in ten-year intervals between the maximum and minimum values, get the cumulative frequencies for each interval from the Counts column, and then plot a histogram. Is there a way to do this using matplotlib?
I have tried the following but in vain:
patient_dets.plot(x='PatientAge', y='PatientAgecounts', kind='hist')
(patient_dets is the dataframe with 'PatientAge' and 'PatientAgecounts' as columns)
I think you need Series.plot.bar:
patient_dets.set_index('PatientAge')['PatientAgecounts'].plot.bar()
If you need bins, one possible solution is pd.cut:
#helper df with min and max ages
df1 = pd.DataFrame({'G':['14 yo and younger','15-19','20-24','25-29','30-34',
'35-39','40-44','45-49','50-54','55-59','60-64','65+'],
'Min':[0, 15,20,25,30,35,40,45,50,55,60,65],
'Max':[14,19,24,29,34,39,44,49,54,59,64,120]})
print (df1)
G Max Min
0 14 yo and younger 14 0
1 15-19 19 15
2 20-24 24 20
3 25-29 29 25
4 30-34 34 30
5 35-39 39 35
6 40-44 44 40
7 45-49 49 45
8 50-54 54 50
9 55-59 59 55
10 60-64 64 60
11 65+ 120 65
cutoff = np.hstack([np.array(df1.Min[0]), df1.Max.values])
labels = df1.G.values
patient_dets['Groups'] = pd.cut(patient_dets.PatientAge, bins=cutoff, labels=labels, right=True, include_lowest=True)
print (patient_dets)
PatientAge PatientAgecounts Groups
0 60 1204 60-64
1 45 700 45-49
2 21 400 20-24
3 34 56 30-34
4 10 150 14 yo and younger
patient_dets.groupby(['PatientAge','Groups'])['PatientAgecounts'].sum().plot.bar()
You can use pd.cut() to bin your data, and then plot with plot(kind='bar'):
import numpy as np
nBins = 10
my_bins = np.linspace(patient_dets.Age.min(), patient_dets.Age.max(), nBins)
patient_dets.groupby(pd.cut(patient_dets.Age, bins=my_bins)).sum()['Counts'].plot(kind='bar')
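Since the counts are already aggregated, another option (a sketch of my own, not from the answers above) is to pass them as histogram weights, so each Age value contributes Counts times to its bin; here np.histogram does the binning and plt.bar draws the result:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless-safe backend; drop this line for interactive use
import matplotlib.pyplot as plt

# Made-up stand-in for the patient_dets dataframe from the question
patient_dets = pd.DataFrame({'Age': [60, 45, 21, 34, 10],
                             'Counts': [1204, 700, 400, 56, 150]})

# Ten-year bin edges spanning the data: 10, 20, ..., 70
bins = np.arange(10, 71, 10)

# Sum the pre-aggregated Counts falling into each Age interval
freq, edges = np.histogram(patient_dets['Age'], bins=bins,
                           weights=patient_dets['Counts'])
print(freq)  # summed Counts per ten-year bin

# Draw the result as a histogram-style bar chart
plt.bar(edges[:-1], freq, width=np.diff(edges), align='edge', edgecolor='k')
plt.xlabel('Age')
plt.ylabel('Frequency')
plt.show()
```

plt.hist(patient_dets['Age'], bins=bins, weights=patient_dets['Counts']) would give the same picture in one call; the np.histogram version just makes the per-bin totals available for inspection.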

Pandas sort() ignoring negative sign

I want to sort a pandas df but I'm having problems with the negative values.
import pandas as pd
df = pd.read_csv('File.txt', sep='\t', header=None)
#Suppress scientific notation (finally)
pd.set_option('display.float_format', lambda x: '%.8f' % x)
print(df)
print(df.dtypes)
print(df.shape)
b = df.sort(axis=0, ascending=True)
print(b)
This gives me the ascending order but completely disregards the sign.
SPATA1 -0.00000005
HMBOX1 0.00000005
SLC38A11 -0.00000005
RP11-571M6.17 0.00000004
GNRH1 -0.00000004
PCDHB8 -0.00000004
CXCL1 0.00000004
RP11-48B3.3 -0.00000004
RNFT2 -0.00000004
GRIK3 -0.00000004
ZNF483 0.00000004
RP11-627G18.1 0.00000003
Any ideas what I'm doing wrong?
Thanks
Loading your file with:
df = pd.read_csv('File.txt', sep='\t', header=None)
Since DataFrame.sort(...) is deprecated (and removed in newer pandas versions), you can use sort_values:
b = df.sort_values(by=[1], axis=0, ascending=True)
where [1] is your column of values. For me this returns:
0 1
0 ACTA1 -0.582570
1 MT-CO1 -0.543877
2 CKM -0.338265
3 MT-ND1 -0.306239
5 MT-CYB -0.128241
6 PDK4 -0.119309
8 GAPDH -0.090912
9 MYH1 -0.087777
12 RP5-940J5.9 -0.074280
13 MYH2 -0.072261
16 MT-ND2 -0.052551
18 MYL1 -0.049142
19 DES -0.048289
20 ALDOA -0.047661
22 ENO3 -0.046251
23 MT-CO2 -0.043684
26 RP11-799N11.1 -0.034972
28 TNNT3 -0.032226
29 MYBPC2 -0.030861
32 TNNI2 -0.026707
33 KLHL41 -0.026669
34 SOD2 -0.026166
35 GLUL -0.026122
42 TRIM63 -0.022971
47 FLNC -0.018180
48 ATP2A1 -0.017752
49 PYGM -0.016934
55 hsa-mir-6723 -0.015859
56 MT1A -0.015110
57 LDHA -0.014955
.. ... ...
60 RP1-178F15.4 0.013383
58 HSPB1 0.014894
54 UBB 0.015874
53 MIR1282 0.016318
52 ALDH2 0.016441
51 FTL 0.016543
50 RP11-317J10.2 0.016799
46 RP11-290D2.6 0.018803
45 RRAD 0.019449
44 MYF6 0.019954
43 STAC3 0.021931
41 RP11-138I1.4 0.023031
40 MYBPC1 0.024407
39 PDLIM3 0.025442
38 ANKRD1 0.025458
37 FTH1 0.025526
36 MT-RNR2 0.025887
31 HSPB6 0.027680
30 RP11-451G4.2 0.029969
27 AC002398.12 0.033219
25 MT-RNR1 0.040741
24 TNNC1 0.042251
21 TNNT1 0.047177
17 MT-ND3 0.051963
15 MTND1P23 0.059405
14 MB 0.063896
11 MYL2 0.076358
10 MT-ND5 0.076479
7 CA3 0.100221
4 MT-ND6 0.140729
[18152 rows x 2 columns]
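For reference, a minimal runnable sketch (with a made-up four-row subset of the data) showing that sort_values orders by the signed value rather than the magnitude:

```python
import pandas as pd

# Made-up four-row subset of the question's data, with no header:
# the columns are therefore the integers 0 and 1, as after read_csv(header=None)
df = pd.DataFrame([['SPATA1', -0.00000005],
                   ['HMBOX1', 0.00000005],
                   ['GNRH1', -0.00000004],
                   ['CXCL1', 0.00000004]])

# Sort by the numeric column; negative values come first, as expected
b = df.sort_values(by=[1], axis=0, ascending=True)
print(b[0].tolist())  # ['SPATA1', 'GNRH1', 'CXCL1', 'HMBOX1']
```

If sorting by absolute value were actually wanted, newer pandas versions accept key=abs in sort_values.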

Healpy map2alm and alm2map inconsistency?

I'm just starting to work with Healpy and have noticed that if I use a map to get alm's and then use those alm's to generate a new map, I do not get the map I started with. Here's what I'm looking at:
import numpy as np
import healpy as hp
nside = 2 # healpix nside parameter
m = np.arange(hp.nside2npix(nside)) # create a map to test
alm = hp.map2alm(m) # compute alm's
new_map = hp.alm2map(alm, nside) # create new map from computed alm's
# Let's look at two maps
print(m)
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47] # as expected
print(new_map)
[-23.30522233 -22.54434515 -21.50906755 -20.09203749 -19.48841773
-18.66392484 -16.99593867 -16.789984 -15.14587061 -14.57960049
-13.4403252 -13.35992138 -10.51368725 -10.49793946 -10.1262039
-8.6340571 -7.41789272 -6.87712224 -5.75765487 -3.75121764
-4.35825512 -1.6221964 -1.03902923 -0.41478954 0.52480646
2.34629955 2.1511705 2.40325268 5.39576497 5.38390848
5.78324832 7.24779083 8.4915595 9.0047257 10.15179735
12.1306303 12.62672772 13.4512206 15.11920678 15.32516145
16.96927483 17.53554496 18.67482024 18.75522407 20.42078855
21.18166574 22.21694334 23.6339734 ] # not what I was expecting
As you can see, new_map doesn't match the input map, m. I imagine there's some subtlety to these functions that I'm missing. Any idea?
I get a different result:
print(new_map)
[ 0.15859344, 0.91947062, 1.95474822, 3.37177828,
4.01808325, 4.84257613, 6.51056231, 6.71651698,
8.36063036, 8.92690049, 10.06617577, 10.1465796 ,
12.98620654, 13.00668621, 13.3736899 , 14.87056857,
16.08200108, 16.62750343, 17.74223892, 19.75340803,
19.13441288, 21.8704716 , 22.45363877, 23.07787846,
24.01747446, 25.83896755, 25.6438385 , 25.89592068,
28.89565876, 28.88853415, 29.28314212, 30.7524165 ,
31.9914533 , 32.50935137, 33.65169114, 35.63525597,
36.13322869, 36.95772158, 38.62570775, 38.83166242,
40.47577581, 41.04204594, 42.18132122, 42.26172504,
43.88460433, 44.64548151, 45.68075911, 47.09778917]
Older versions of healpy automatically removed a constant offset from the map before the transformation; it is better to update healpy to the latest version.
The residual difference comes from the fact that the pixelization introduces an error, and this error is larger at low nside.
