I am trying to write code that rounds the values in the start list down to the nearest 0.25. How would I do that? For example, the first value in the start list is 3.645, and that would be rounded down to 3.5, since .0, .25, .5, and .75 are the only valid increments of 0.25. Since the last value, 2, is already a multiple of 0.25, it stays as 2.
start = [3.645, 33.54656, 1.77, 1.99, 2]
Expected output:
[3.5, 33.5, 1.75, 1.75, 2]
This being Python, there's probably some fancy package that will let you do this, but [int(4*x)/4 for x in start] is one way to do it. You can also do [x - x % .25 for x in start] (the modulus operator does work with a non-integer modulus).
However, I notice your last number is just 2, rather than 2.0. If you don't want integers cast as floats, you'll have to do something more complicated, such as [int(x) if int(x) == x else int(4*x)/4 for x in start]. And if you want numbers cast to int when they're integers after the rounding, it will have to be even more complicated, such as [x - x % .25 if x % 1 >= .25 else int(x) for x in start] (note the >=, so that values already on a non-integer quarter, like 2.25, are preserved).
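As a quick sanity check, both one-liners applied to the start list from the question produce the expected output:

```python
start = [3.645, 33.54656, 1.77, 1.99, 2]

# truncate to the nearest lower multiple of 0.25, two equivalent ways
rounded_a = [int(4 * x) / 4 for x in start]
rounded_b = [x - x % .25 for x in start]

print(rounded_a)  # [3.5, 33.5, 1.75, 1.75, 2.0]
print(rounded_b)  # [3.5, 33.5, 1.75, 1.75, 2.0]
```

(Note that in both versions the final 2 comes back as the float 2.0, which is what the follow-up variants in the answer address.)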
I'd like to create a function that takes a (sorted) list as its argument and outputs a list containing each element's corresponding percentile.
For example, fn([1,2,3,4,17]) returns [0.0, 0.25, 0.50, 0.75, 1.00].
Can anyone please either:
Help me correct my code below? OR
Offer a better alternative than my code for mapping values in a list to their corresponding percentiles?
My current code:
def median(mylist):
    length = len(mylist)
    if not length % 2:
        return (mylist[length // 2] + mylist[length // 2 - 1]) / 2.0
    return mylist[length // 2]
###############################################################################
# PERCENTILE FUNCTION
###############################################################################
def percentile(x):
    """
    Find the corresponding percentile of each value relative to a list of values,
    where x is the list of values.
    Input list should already be sorted!
    """
    # sort the input list
    # list_sorted = x.sort()
    # count the number of elements in the list
    list_elementCount = len(x)
    # obtain set of values from list
    listFromSetFromList = list(set(x))
    # count the number of unique elements in the list
    list_uniqueElementCount = len(set(x))
    # define extreme quantiles
    percentileZero = min(x)
    percentileHundred = max(x)
    # define median quantile
    mdn = median(x)
    # create empty list to hold percentiles
    x_percentile = [0.00] * list_elementCount
    # initialize unique count
    uCount = 0
    for i in range(list_elementCount):
        if x[i] == percentileZero:
            x_percentile[i] = 0.00
        elif x[i] == percentileHundred:
            x_percentile[i] = 1.00
        elif x[i] == mdn:
            x_percentile[i] = 0.50
        else:
            subList_elementCount = 0
            for j in range(i):
                if x[j] < x[i]:
                    subList_elementCount = subList_elementCount + 1
            x_percentile[i] = float(subList_elementCount / list_elementCount)
            # x_percentile[i] = float(len(x[x > listFromSetFromList[uCount]]) / list_elementCount)
        if i == 0:
            continue
        else:
            if x[i] == x[i-1]:
                continue
            else:
                uCount = uCount + 1
    return x_percentile
Currently, if I submit percentile([1,2,3,4,17]), the list [0.0, 0.0, 0.5, 0.0, 1.0] is returned.
I think your example input/output does not correspond to typical ways of calculating percentile. If you calculate the percentile as "proportion of data points strictly less than this value", then the top value should be 0.8 (since 4 of 5 values are less than the largest one). If you calculate it as "percent of data points less than or equal to this value", then the bottom value should be 0.2 (since 1 of 5 values equals the smallest one). Thus the percentiles would be [0, 0.2, 0.4, 0.6, 0.8] or [0.2, 0.4, 0.6, 0.8, 1]. Your definition seems to be "the number of data points strictly less than this value, considered as a proportion of the number of data points not equal to this value", but in my experience this is not a common definition (see for instance wikipedia).
With the typical percentile definitions, the percentile of a data point is equal to its rank divided by the number of data points. (See for instance this question on Stats SE asking how to do the same thing in R.) Differences in how to compute the percentile amount to differences in how to compute the rank (for instance, how to rank tied values). The scipy.stats.percentileofscore function provides four ways of computing percentiles:
>>> from scipy import stats
>>> x = [1, 1, 2, 2, 17]
>>> [stats.percentileofscore(x, a, 'rank') for a in x]
[30.0, 30.0, 70.0, 70.0, 100.0]
>>> [stats.percentileofscore(x, a, 'weak') for a in x]
[40.0, 40.0, 80.0, 80.0, 100.0]
>>> [stats.percentileofscore(x, a, 'strict') for a in x]
[0.0, 0.0, 40.0, 40.0, 80.0]
>>> [stats.percentileofscore(x, a, 'mean') for a in x]
[20.0, 20.0, 60.0, 60.0, 90.0]
(I used a dataset containing ties to illustrate what happens in such cases.)
The "rank" method assigns tied groups a rank equal to the average of the ranks they would cover (i.e., a three-way tie for 2nd place gets a rank of 3 because it "takes up" ranks 2, 3 and 4). The "weak" method assigns a percentile based on the proportion of data points less than or equal to a given point; "strict" is the same but counts proportion of points strictly less than the given point. The "mean" method is the average of the latter two.
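As a sanity check on the "rank" definition, the average-rank computation can be done by hand for the tied dataset above (this is just an illustration, not how scipy implements it):

```python
x = [1, 1, 2, 2, 17]
n = len(x)

percentiles = []
for v in x:
    less = sum(1 for a in x if a < v)
    equal = sum(1 for a in x if a == v)
    # the tied group for v occupies 1-based ranks less+1 .. less+equal;
    # its average rank is the midpoint of that range
    avg_rank = less + (equal + 1) / 2.0
    percentiles.append(avg_rank * 100.0 / n)

print(percentiles)  # [30.0, 30.0, 70.0, 70.0, 100.0]
```

This matches the percentileofscore(x, a, 'rank') output shown above.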
As Kevin H. Lin noted, calling percentileofscore in a loop is inefficient since it has to recompute the ranks on every pass. However, these percentile calculations can be easily replicated using different ranking methods provided by scipy.stats.rankdata, letting you calculate all the percentiles at once:
>>> from scipy import stats
>>> stats.rankdata(x, "average")/len(x)
array([ 0.3, 0.3, 0.7, 0.7, 1. ])
>>> stats.rankdata(x, 'max')/len(x)
array([ 0.4, 0.4, 0.8, 0.8, 1. ])
>>> (stats.rankdata(x, 'min')-1)/len(x)
array([ 0. , 0. , 0.4, 0.4, 0.8])
In the last case the ranks are adjusted down by one to make them start from 0 instead of 1. (I've omitted "mean", but it could easily be obtained by averaging the results of the latter two methods.)
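For completeness, the omitted "mean" variant can be sketched by averaging the other two rank-based arrays, which reproduces the percentileofscore(..., 'mean') numbers shown earlier:

```python
import numpy as np
from scipy import stats

x = [1, 1, 2, 2, 17]
weak = stats.rankdata(x, 'max') / len(x)           # proportion <= each point
strict = (stats.rankdata(x, 'min') - 1) / len(x)   # proportion < each point
mean_pct = (weak + strict) / 2

print(mean_pct)  # [0.2 0.2 0.6 0.6 0.9]
```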
I did some timings. With small data such as that in your example, using rankdata is somewhat slower than Kevin H. Lin's solution (presumably due to the overhead scipy incurs in converting things to numpy arrays under the hood) but faster than calling percentileofscore in a loop as in reptilicus's answer:
In [11]: %timeit [stats.percentileofscore(x, i) for i in x]
1000 loops, best of 3: 414 µs per loop
In [12]: %timeit list_to_percentiles(x)
100000 loops, best of 3: 11.1 µs per loop
In [13]: %timeit stats.rankdata(x, "average")/len(x)
10000 loops, best of 3: 39.3 µs per loop
With a large dataset, however, the performance advantage of numpy takes effect and using rankdata is 10 times faster than Kevin's list_to_percentiles:
In [18]: x = np.random.randint(0, 10000, 1000)
In [19]: %timeit [stats.percentileofscore(x, i) for i in x]
1 loops, best of 3: 437 ms per loop
In [20]: %timeit list_to_percentiles(x)
100 loops, best of 3: 1.08 ms per loop
In [21]: %timeit stats.rankdata(x, "average")/len(x)
10000 loops, best of 3: 102 µs per loop
This advantage will only become more pronounced on larger and larger datasets.
I think you want scipy.stats.percentileofscore.
Example:
from scipy.stats import percentileofscore

percentileofscore([1, 2, 3, 4], 3)
75.0
percentiles = [percentileofscore(data, i) for i in data]
In terms of complexity, I think reptilicus's answer is not optimal. It takes O(n^2) time.
Here is a solution that takes O(n log n) time.
def list_to_percentiles(numbers):
    pairs = list(zip(numbers, range(len(numbers))))
    pairs.sort(key=lambda p: p[0])
    result = [0 for i in range(len(numbers))]
    for rank in range(len(numbers)):
        original_index = pairs[rank][1]
        result[original_index] = rank * 100.0 / (len(numbers) - 1)
    return result
I'm not sure, but I think this is the optimal time complexity you can get. The rough reason I think it's optimal is because the information of all of the percentiles is essentially equivalent to the information of the sorted list, and you can't get better than O(n log n) for sorting.
EDIT: Depending on your definition of "percentile" this may not always give the correct result. See BrenBarn's answer for more explanation and for a better solution which makes use of scipy/numpy.
Pure numpy version of Kevin's solution
As Kevin said, the optimal solution works in O(n log(n)) time. Here is a fast numpy version of his code, which takes almost the same time as stats.rankdata:
percentiles = numpy.argsort(numpy.argsort(array)) * 100. / (len(array) - 1)
P.S. This is one of my favourite tricks in numpy.
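As a quick check, on tie-free data the double-argsort matches the list_to_percentiles output above; be aware that on tied values it behaves like an "ordinal" ranking (ties broken by input order) rather than rankdata's "average":

```python
import numpy as np

array = np.array([1, 2, 3, 4, 17])
# argsort of argsort yields each element's 0-based rank in sorted order
percentiles = np.argsort(np.argsort(array)) * 100. / (len(array) - 1)

print(percentiles)  # [  0.  25.  50.  75. 100.]
```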
This might look oversimplified, but what about this:
def percentile(x):
    pc = float(1) / (len(x) - 1)
    return ["%.2f" % (n * pc) for n, i in enumerate(x)]
EDIT:
def percentile(x):
    unique = sorted(set(x))
    mapping = {}
    pc = float(1) / (len(unique) - 1)
    for n, i in enumerate(unique):
        mapping[i] = "%.2f" % (n * pc)
    return [mapping.get(el) for el in x]
I tried scipy's percentileofscore, but it turned out to be very slow for one of my tasks, so I simply implemented it this way. It can be modified if a weak ranking is needed.
import numpy as np

def assign_pct(X):
    mp = {}
    X_tmp = np.sort(X)
    pct = []
    cnt = 0
    # assign each unique value its 0-based rank in sorted order
    for v in X_tmp:
        if v in mp:
            continue
        else:
            mp[v] = cnt
            cnt += 1
    # normalize by the number of unique values minus one
    for v in X:
        pct.append(mp[v] / (cnt - 1))
    return pct
Calling the function
assign_pct([23,4,1,43,1,6])
Output of function
[0.75, 0.25, 0.0, 1.0, 0.0, 0.5]
If I understand you correctly, all you want to do is determine the percentile this element represents in the array: how much of the array comes before that element. So [1, 2, 3, 4, 5]
should become [0.0, 0.25, 0.5, 0.75, 1.0].
I believe this code will be enough:
def percentileListEdited(List):
    uniqueList = sorted(set(List))
    increase = 1.0 / (len(uniqueList) - 1)
    newList = {}
    for index, value in enumerate(uniqueList):
        newList[value] = increase * index
    return [newList[val] for val in List]
For me the best solution is to use QuantileTransformer in sklearn.preprocessing.
import numpy as np
from sklearn.preprocessing import QuantileTransformer

fn = lambda input_list: QuantileTransformer(n_quantiles=100).fit_transform(np.array(input_list).reshape([-1, 1])).ravel().tolist()
input_raw = [1, 2, 3, 4, 17]
output_perc = fn(input_raw)
print("Input=", input_raw)
print("Output=", np.round(output_perc, 2))
Here is the output
Input= [1, 2, 3, 4, 17]
Output= [ 0. 0.25 0.5 0.75 1. ]
Note: this function has two salient features:
input raw data is NOT necessarily sorted.
input raw data is NOT necessarily single column.
This version also allows passing the exact percentile values used for ranking:
def what_pctl_number_of(x, a, pctls=np.arange(1, 101)):
    return np.argmax(np.sign(np.append(np.percentile(x, pctls), np.inf) - a))
So it's possible to find out which percentile bucket a given value falls into for the provided percentiles:
_x = np.random.randn(100, 1)
what_pctl_number_of(_x, 1.6, [25, 50, 75, 100])
Output:
3
so it falls into the 75 to 100 range
For a pure Python function that calculates the percentile score of a given item relative to the population distribution (a list of scores), I pulled this from the scipy source code and removed all references to numpy:
def percentileofscore(a, score, kind='rank'):
    n = len(a)
    if n == 0:
        return 100.0
    left = len([item for item in a if item < score])
    right = len([item for item in a if item <= score])
    if kind == 'rank':
        pct = (right + left + (1 if right > left else 0)) * 50.0 / n
        return pct
    elif kind == 'strict':
        return left / n * 100
    elif kind == 'weak':
        return right / n * 100
    elif kind == 'mean':
        pct = (left + right) / n * 50
        return pct
    else:
        raise ValueError("kind can only be 'rank', 'strict', 'weak' or 'mean'")
source: https://github.com/scipy/scipy/blob/v1.2.1/scipy/stats/stats.py#L1744-L1835
Given that calculating percentiles is trickier than one would think, but far less involved than pulling in the full scipy/numpy/scikit stack, this is the best option for lightweight deployment. The original scipy code does some additional filtering of values, but otherwise the math is the same. The optional parameter controls how values that fall between two other values are handled.
For this use case, one can call this function for each item in a list using the map() function.
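As a usage sketch, mapping the default 'rank' behaviour over a list might look like this (the function here is a condensed, rank-only copy of the one above, inlined so the snippet stands alone):

```python
def percentileofscore(a, score):
    # condensed 'rank'-only variant of the function above
    n = len(a)
    if n == 0:
        return 100.0
    left = len([item for item in a if item < score])
    right = len([item for item in a if item <= score])
    return (right + left + (1 if right > left else 0)) * 50.0 / n

data = [1, 2, 3, 4]
percentiles = list(map(lambda s: percentileofscore(data, s), data))
print(percentiles)  # [25.0, 50.0, 75.0, 100.0]
```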
There is a python code as following:
import sys
import fileinput, string

K = 3
f = raw_input("please input the initial " + str(K) + " lamba: ").split()
Z = []
sumoflamba = 0.0
for m in f:
    j = m.find("/")
    if j != -1:
        e = float(m[:j]) / float(m[j+1:])
    else:
        e = float(m)
    sumoflamba += e
    if e == 0:
        print "the initial lamba cannot be zero!"
        sys.exit()
    Z.append(e)
print sumoflamba
if sumoflamba != 1:
    print "initial lamba must be summed to 1!"
    sys.exit()
When I run it with 0.7, 0.2, 0.1, it prints the warning and exits. However, when I run it with 0.1, 0.2, 0.7, it works fine; 0.3, 0.3, 0.4 works fine too. I do not have a clue... Can someone explain this, please?
The print sumoflamba statement gives 1.0 in all these cases.
Pretty much what the link Lattyware provided explains. In a nutshell, you can't expect equality comparisons to work in floating point without being explicit about the precision. If you were to either round off the value or cast it to an integer, you would get predictable results:
>>> f1 = 0.7 + 0.2 + 0.1
>>> f2 = 0.1 + 0.2 + 0.7
>>> f1 == f2
False
>>> round(f1,2) == round(f2,2)
True
Floats are imprecise. The more you operate with them, the more imprecision they accumulate. Some numbers can be represented exactly, but most cannot. Comparing them for equality will almost always be a mistake.
It is bad practice to check floating-point numbers for equality. The best you can do here is check that your number is within the desired range. For details of how floating point works, see http://en.wikipedia.org/wiki/Floating_point#Internal_representation
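A tolerance-based check along these lines can be sketched with math.isclose (available since Python 3.5); in older versions, comparing abs(sumoflamba - 1) against a small epsilon works the same way:

```python
import math

sumoflamba = 0.7 + 0.2 + 0.1
print(sumoflamba == 1)                # False: accumulated rounding error
print(math.isclose(sumoflamba, 1.0))  # True: compares within a tolerance
```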