I was looking for a Python library function which computes multinomial coefficients.
I could not find any such function in any of the standard libraries.
For binomial coefficients (of which multinomial coefficients are a generalization) there is scipy.special.binom and also scipy.misc.comb. Also, numpy.random.multinomial draws samples from a multinomial distribution, and sympy.ntheory.multinomial.multinomial_coefficients returns a dictionary related to multinomial coefficients.
However, I could not find a multinomial coefficients function proper, which given a,b,...,z returns (a+b+...+z)!/(a! b! ... z!). Did I miss it? Is there a good reason there is none available?
I would be happy to contribute an efficient implementation to SciPy, say. (I would have to figure out how to contribute, as I have never done this.)
For background, they do come up when expanding (a+b+...+z)^n. Also, they count the ways of depositing a+b+...+z distinct objects into distinct bins such that the first bin contains a objects, etc. I need them occasionally for a Project Euler problem.
BTW, other languages do offer this function: Mathematica, MATLAB, Maple.
To partially answer my own question, here is my simple and fairly efficient implementation of the multinomial function:
def multinomial(lst):
    res, i = 1, 1
    for a in lst:
        for j in range(1, a + 1):
            res *= i
            res //= j
            i += 1
    return res
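A quick sanity check (my own example, not from the original post):

>>> multinomial([2, 3, 4])  # 9! / (2! * 3! * 4!)
1260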
It seems from the comments so far that no efficient implementation of the function exists in any of the standard libraries.
Update (January 2020). As Don Hatch has pointed out in the comments, this can be further improved by looking for the largest argument (especially for the case that it dominates all others):
def multinomial(lst):
    res, i = 1, sum(lst)
    i0 = lst.index(max(lst))
    for a in lst[:i0] + lst[i0+1:]:
        for j in range(1, a + 1):
            res *= i
            res //= j
            i -= 1
    return res
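A quick check (my own) that the refactoring preserves the result:

>>> multinomial([2, 3, 4])
1260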
No, there is not a built-in multinomial library or function in Python.
Anyway, this time math can help you. In fact, a simple method for calculating the multinomial while keeping an eye on performance is to rewrite it using the characterization of the multinomial coefficient as a product of binomial coefficients:

$$\binom{k_1 + k_2 + \cdots + k_m}{k_1, k_2, \ldots, k_m} = \prod_{i=1}^{m} \binom{k_1 + k_2 + \cdots + k_i}{k_i}$$

where of course

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$
Thanks to scipy.special.binom and the magic of recursion you can solve the problem like this:
from scipy.special import binom

def multinomial(params):
    if len(params) == 1:
        return 1
    return binom(sum(params), params[-1]) * multinomial(params[:-1])
where params = [n1, n2, ..., nk].
Note: Splitting the multinomial into a product of binomials also helps prevent overflow in general.
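A quick usage check (my own example); note that the result is a float, which becomes relevant in the next answer:

>>> multinomial([1, 2, 3])
60.0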
You wrote "sympy.ntheory.multinomial.multinomial_coefficients returns a dictionary related to multinomial coefficients", but it is not clear from that comment if you know how to extract the specific coefficients from that dictionary. Using the notation from the wikipedia link, the SymPy function gives you all the multinomial coefficients for the given m and n. If you only want a specific coefficient, just pull it out of the dictionary:
In [39]: from sympy import ntheory
In [40]: def sympy_multinomial(params):
    ...:     m = len(params)
    ...:     n = sum(params)
    ...:     return ntheory.multinomial_coefficients(m, n)[tuple(params)]
    ...:
In [41]: sympy_multinomial([1, 2, 3])
Out[41]: 60
In [42]: sympy_multinomial([10, 20, 30])
Out[42]: 3553261127084984957001360
Busy Beaver gave an answer written in terms of scipy.special.binom. A potential problem with that implementation is that binom(n, k) returns a floating point value. If the coefficient is large enough, it will not be exact, so it would probably not help you with a Project Euler problem. Instead of binom, you can use scipy.special.comb, with the argument exact=True. This is Busy Beaver's function, modified to use comb:
In [46]: from scipy.special import comb
In [47]: def scipy_multinomial(params):
    ...:     if len(params) == 1:
    ...:         return 1
    ...:     coeff = (comb(sum(params), params[-1], exact=True) *
    ...:              scipy_multinomial(params[:-1]))
    ...:     return coeff
    ...:
In [48]: scipy_multinomial([1, 2, 3])
Out[48]: 60
In [49]: scipy_multinomial([10, 20, 30])
Out[49]: 3553261127084984957001360
Here are two approaches, one using factorials, one using Stirling's approximation.
Using factorials
You can define a function to return multinomial coefficients in a single line using vectorised code (instead of for-loops) as follows:
from scipy.special import factorial

def multinomial_coeff(c):
    return factorial(c.sum()) / factorial(c).prod()
(Where c is an np.ndarray containing the number of counts for each different object). Usage example:
>>> import numpy as np
>>> coeffs = np.array([2, 3, 4])
>>> multinomial_coeff(coeffs)
1260.0
In some cases this might be slower because you will be computing certain factorial expressions multiple times, in other cases this might be faster because I believe that numpy naturally parallelises vectorised code. Also this reduces the required number of lines in your program and is arguably more readable. If someone has the time to run speed tests on these different options then I'd be interested to see the results.
Using Stirling's approximation
In fact the logarithm of the multinomial coefficient is much faster to compute (based on Stirling's approximation) and allows computation of much larger coefficients:
from scipy.special import gammaln

def log_multinomial_coeff(c):
    return gammaln(c.sum() + 1) - gammaln(c + 1).sum()
Usage example:
>>> import numpy as np
>>> coeffs = np.array([2, 3, 4])
>>> np.exp(log_multinomial_coeff(coeffs))
1259.999999999999
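As a cross-check (my own, not from the original answer), exponentiating the log coefficient for the earlier example reproduces the exact value 3553261127084984957001360 up to float precision:

>>> np.exp(log_multinomial_coeff(np.array([10, 20, 30])))  # ≈ 3.553261127085e+24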
Your own answer (the accepted one) is quite good, and is especially simple. However, it does have one significant inefficiency: your outer loop for a in lst is executed one more time than is necessary. In the first pass through that loop, the values of i and j are always identical, so the multiplications and divisions do nothing. In your example multinomial([123, 134, 145]), there are 123 unneeded multiplications and divisions, adding time to the code.
I suggest finding the maximum value in the parameters and removing it, so those unneeded operations are not done. That adds complexity to the code but reduces the execution time, especially for short lists of large numbers. My code below executes multcoeff(123, 134, 145) in 111 microseconds, while your code takes 141 microseconds. That is not a large improvement, but it could matter. So here is my code. This also takes individual values as parameters rather than a list, so that is another difference from your code.
def multcoeff(*args):
    """Return the multinomial coefficient
    (n1 + n2 + ...)! / n1! / n2! / ..."""
    if not args:  # no parameters
        return 1
    # Find and store the index of the largest parameter so we can skip
    # it (for efficiency)
    skipndx = args.index(max(args))
    newargs = args[:skipndx] + args[skipndx + 1:]
    result = 1
    num = args[skipndx] + 1  # a factor in the numerator
    for n in newargs:
        for den in range(1, n + 1):  # a factor in the denominator
            result = result * num // den
            num += 1
    return result
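A quick usage check (my own example):

>>> multcoeff(2, 3, 4)  # 9! / (2! * 3! * 4!)
1260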
Starting with Python 3.8,
since the standard library now includes the math.comb function (binomial coefficient)
and since the multinomial coefficient can be computed as a product of binomial coefficients
we can implement it without external libraries:
import math
def multinomial(*params):
    return math.prod(math.comb(sum(params[:i]), x) for i, x in enumerate(params, 1))
multinomial(10, 20, 30) # 3553261127084984957001360
Related
I am new to Python and need your kind help.
I have three matrices, in particular:
Matrix M (class of the matrix: scipy.sparse.csc.csc_matrix), dimensions: N x C;
Matrix G (class of the matrix: numpy.ndarray), dimensions: C x T;
Matrix L (class of the matrix: numpy.ndarray), dimensions: T x N.
Where: N = 10000, C = 1000, T = 20.
I would like to calculate this score:

$$\text{score} = \sum_{i=1}^{N} \sum_{c=1}^{C} M_{ic} \left( \sum_{t=1}^{T} L_{ti} \, G_{ct} \right)$$
I tried using two for loops, one for the i index and one for c. Furthermore, I used a dot product to obtain the last sum in the equation. But my implementation takes too much time to produce the result.
This is what I implemented:
score = 0.0
for i in range(N):
    for c in range(C):
        Mic = M[i, c]
        score += np.outer(Mic, (np.dot(L[:, i], G[c, :])))
Is there a way to avoid the two for loops?
Thank you in advance!
Best
Try this: score = np.einsum("ic,ti,ct->", M, L, G)
EDIT1
By the way, in your case score = np.sum(np.diag(M @ G @ L)) (in Python 3.5+ you can use the @ operator for the matmul function) is faster than einsum, especially in the form np.trace((L @ M) @ G), due to more efficient use of memory (maybe @hpaulj meant this in his comment). But einsum is easier to use for complex tensor products; to encode with einsum I used your math expression directly, without thinking about optimization.
Generally, using for loops with numpy results in a dramatic slowdown in computation speed (think "vectorize your computations" when working with numpy).
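As a sanity check (my own, with made-up sizes and dense arrays; a sparse M would need M.toarray() before calling einsum), the loop, einsum, and trace versions agree:

import numpy as np

N, C, T = 50, 10, 4
M = np.random.rand(N, C)
G = np.random.rand(C, T)
L = np.random.rand(T, N)

# Loop version, as in the question
score_loop = 0.0
for i in range(N):
    for c in range(C):
        score_loop += M[i, c] * np.dot(L[:, i], G[c, :])

# Vectorised versions from this answer
score_einsum = np.einsum("ic,ti,ct->", M, L, G)
score_trace = np.trace((L @ M) @ G)

print(np.allclose(score_loop, score_einsum), np.allclose(score_loop, score_trace))
# True True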
I'm migrating some legacy code from R to Python and I'm having trouble matching the quantile results with numpy percentile.
Given the following list of numbers:
a1 = [
5.75,6.13333333333333,7.13636363636364,9,10.1,4.80952380952381,8.82926829268293,4.7906976744186,3.83333333333333,6,6.1,
8.88235294117647,30,5.7,3.98507462686567,6.83333333333333,8.39805825242718,4.78260869565217,7.26356589147287,5.67857142857143,
3.58333333333333,6.69230769230769,14.3333333333333,14.3333333333333,5.125,5.16216216216216,5.36363636363636,10.7142857142857,
4.90909090909091,7.5,8,6,6.93939393939394,10.4,6,6.8,5.33333333333333,10.3076923076923,4.5625,5.4,6.44,3.36363636363636,
11.1666666666667,4.5,7.35714285714286,10.6363636363636,9.26746031746032,3.83333333333333,5.75,9.14285714285714,8.27272727272727,
5,5.92307692307692,5.23076923076923,4.09375,6.25,4.63888888888889,6.07142857142857,5,5.42222222222222,3.93892045454545,4.8,
8.71428571428571,6.25925925925926,4.12,5.30769230769231,4.26086956521739,5.22222222222222,4.64285714285714,5,3.64705882352941,
5.33333333333333,3.65217391304348,3.54166666666667,10.0952380952381,3.38235294117647,8.67123287671233,2.66666666666667,3.5,4.875,
4.5,6.2,5.45454545454545,4.89189189189189,4.71428571428571,1,5.33333333333333,6.09090909090909,4.36756756756757,6,5.17197452229299,
4.48717948717949,5.01219512195122,4.83098591549296,5.25,8.52,5.47692307692308,5.45454545454545,8.6578947368421,8.35714285714286,3.25,
8.5,4,5.95652173913043,7.05882352941176,7.5,8.6,8.49122807017544,5.14285714285714,4,13.3294117647059,9.55172413793103,5.57446808510638,
4.5,8,4.11764705882353,3.9,5.14285714285714,6,4.66666666666667,6,3.75,4.93333333333333,4.5,5.21666666666667,6.53125,6,7,7.28333333333333,
7.34615384615385,7.15277777777778,8.07936507936508,11.609756097561
]
Using quantile in R such that
quantile(a1, probs=.05, type=2)
Gives a result of 3.541667
Trying all of the interpolation methods in numpy to find the same result:
{x:np.percentile(a1,q=5, interpolation=x) for x in ['linear','lower','higher','nearest','midpoint']}
Yields
{'linear': 3.566666666666666,
'lower': 3.54166666666667,
'higher': 3.58333333333333,
'nearest': 3.58333333333333,
'midpoint': 3.5625}
As we can see, the lower interpolation method returns the same result as R's quantile type 2.
However, with a different quantile in R we get different results:
quantile(a1, probs=.95, type=2)
Gives a result of 10.71429
And with numpy:
{x:np.percentile(a1,q=95, interpolation=x) for x in ['linear','lower','higher','nearest','midpoint']}
Yields
{'linear': 10.667532467532439,
'lower': 10.6363636363636,
'higher': 10.7142857142857,
'nearest': 10.6363636363636,
'midpoint': 10.67532467532465}
In this case, the higher interpolation method returns the same result.
I'm hoping that someone familiar enough w/the R quantile types can help me reproduce the same quantile logic in numpy.
You can implement this yourself. With type=2 it's a rather simple calculation. You either take the next highest order statistic, or, at a discontinuity (i.e. 100 values and you want p=0.06, which falls exactly on the 6th value), you take the average of that order statistic and the next greatest order statistic.
import numpy as np
def R_type2(arr, p):
    """
    arr : array-like
    p : float between [0, 1]
    """
    # m=0 for Q_2(p) in R
    x = np.sort(arr)
    n = len(x)
    aleph = n * p
    k = np.floor(np.array(aleph).clip(1, n - 1)).astype(int)
    gamma = {False: 1, True: 0.5}.get(aleph == k)  # Discontinuity or not
    # Deal with the case where it should be the smallest value
    if aleph < 1:
        return x[k - 1]  # x[0]
    else:
        return (1. - gamma) * x[k - 1] + gamma * x[k]
R_type2(a1, 0.05)
#3.54166666666667
R_type2(a1, 0.95)
#10.7142857142857
A word of caution. k will be an integer while n*p is a float. In general it's a very bad idea to do aleph==k because this leads to problems with floating point inaccuracies. For instance with 100 numbers p=0.07 is NOT considered a discontinuity because 0.07 cannot be represented precisely. However, because R seems to implement a pure equality check I left it like the above for consistency.
I personally would favor changing from the equality check {False: 1, True: 0.5}.get(aleph == k) to {False: 1, True: 0.5}.get(np.isclose(aleph, k)) so that floating point issues don't become a problem.
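A short demonstration of the issue (my own example): with 100 values and p = 0.07, n*p is not exactly 7 in floating point, so the pure equality check misses the discontinuity while np.isclose catches it:

import numpy as np

n, p = 100, 0.07
aleph = n * p
print(aleph)                 # 7.000000000000001
print(aleph == 7)            # False
print(np.isclose(aleph, 7))  # True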
Dealing with memory issues, I was wondering if there is some library in Python that allows one to define a matrix M based on some matrix operations, i.e. in a simple case:
M = A.dot(B)
... or perhaps with some threshold t:
M = (A.dot(B.T) / C) > t
But it should not actually compute M; it should compute the elements only when needed, i.e. when I ask for M[i,j] or M[i,:] or M[i:j,:] it computes only those values.
I guess tensorflow could be an answer, but I am not sure about that.
Thank you for your help.
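Not an authoritative answer, but here is a minimal sketch of the on-demand behaviour described above, assuming dense NumPy arrays and only the M = A.dot(B) case (the LazyDot class is purely illustrative, not from any library):

import numpy as np

class LazyDot:
    """Sketch: expose entries of A @ B without materialising the product."""
    def __init__(self, A, B):
        self.A, self.B = A, B

    def __getitem__(self, key):
        i, j = key
        # Multiply only the rows/columns needed for the requested entries.
        return self.A[i, :] @ self.B[:, j]

A = np.random.rand(1000, 50)
B = np.random.rand(50, 2000)
M = LazyDot(A, B)
print(M[3, 7])        # a single entry, computed on demand
print(M[3, :].shape)  # one full row: (2000,)

For a library-based approach, dask.array builds such expressions lazily and computes only what is requested when you slice the result and call .compute().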
I am very new to programming, but have decided to learn Python. I am writing a program that can check if a number is a prime. Mathematically, this is done by checking whether every coefficient of (x-1)^p - (x^p - 1) is divisible by p (capable of being divided with no remainder); if so, p is a prime.
However, I have run into trouble. This is my code so far:
from sympy import *
x=symbols('x')
p=11
f=(pow(x - 1, p)) - (pow(x, p) - 1) # (x-1)^p -(x^p-1)
f1=expand(f)
>>> -11*x**10 + 55*x**9 - 165*x**8 + 330*x**7 - 462*x**6 + 462*x**5 - 330*x**4 + 165*x**3 - 55*x**2 + 11*x
f2= f1/p
>>> -x**10 + 5*x**9 - 15*x**8 + 30*x**7 - 42*x**6 + 42*x**5 - 30*x**4 + 15*x**3 - 5*x**2 + x
To tell if the number p is a prime, I need to check whether the coefficients of the polynomial are divisible by p. So I have to check whether the coefficients of f2 are whole numbers or fractions.
This is the method I would like the program to check: https://www.youtube.com/watch?v=HvMSRWTE2mI
I have tried converting to int, but it still shows fractions like 1/2 and 3/7. I want it to show only whole numbers.
How do I make it so?
What the method effectively does is expand the polynomial and drop the first (x^p) and last (x^0) coefficients. Then you have to iterate through the rest and check for divisibility. Since a polynomial expansion of power p produces p+1 terms (from 0 to p), we want to collect p-1 terms (from 1 to p-1). This is all summed up in the following code.
from sympy.abc import x

def is_prime_sympy(p):
    poly = pow((x - 1), p).expand()
    return not any(poly.coeff(x, i) % p for i in range(1, p))
This works, but the higher the number you input, e.g. 1013, the longer you'll notice it takes. Sympy is slow because internally it stores all expressions as some classes and all multiplications and additions take a long time. We can simply generate the coefficients using Pascal's triangle. For the polynomial (x - 1)^p, the coefficients are supposed to change sign, but we don't care about that. We just want the raw numbers. Credits to Copperfield for pointing out you only need half of the coefficients because of symmetry.
import math

def combination(n, r):
    return math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

def pascals_triangle(row):
    # Only generate half of the coefficients, because of symmetry;
    # row // 2 + 1 also includes the middle coefficient of even rows.
    return (combination(row, term) for term in range(1, row // 2 + 1))

def is_prime_math(p):
    return not any(c % p for c in pascals_triangle(p))
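Quick usage check (my own examples):

>>> is_prime_math(11)
True
>>> is_prime_math(9)
False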
We can time both methods now to see which one is faster.
import time

def benchmark(p):
    t0 = time.time()
    is_prime_math(p)
    t1 = time.time()
    is_prime_sympy(p)
    t2 = time.time()
    print('Math: %.3f, Sympy: %.3f' % (t1 - t0, t2 - t1))
And some tests.
>>> benchmark(512)
Math: 0.001, Sympy: 0.241
>>> benchmark(2003)
Math: 3.852, Sympy: 41.695
We know that 512 is not a prime. The very second term we have to check for divisibility fails the test, so most of the time is actually spent generating the coefficients. Python lazily computes them, while sympy must expand the whole polynomial out before we can start collecting them. This shows us that a generator approach is preferable.
2003 is prime, and here we notice sympy performs about 10 times slower. In fact, all of the time is spent generating the coefficients, as iterating over 2000 elements for a modulo operation takes no time. So if there are any further optimisations, that's where one should focus.
numpy.poly1d()
Numpy has a class that can manipulate polynomial coefficients, and it's exactly what we want. It even works relatively fast for powers up to 50k. However, in its original implementation it's useless to us. That is because the coefficients are stored as signed int32, which means they very quickly overflow and our modulo operations are thrown off. In fact, it fails even for 37.
But it is fast, though, right? Maybe if we could hack it to accept infinite precision integers... Maybe it's possible, maybe it isn't. But even if it is, we have to consider that the reason it is so fast may be exactly because it uses a fixed precision type under the hood.
For the sake of curiosity, this is what the implementation would look like if it were usable.
import numpy as np

def is_prime_numpy(p):
    poly = pow(np.poly1d([1, -1]), p)
    return not any(c % p for c in poly.coeffs[1:-1])
And for the curious ones, the source code is located in ...\numpy\lib\polynomial.py.
I am not sure if I understood what you mean, but for checking whether a number is an integer or a float you can use isinstance:

>>> isinstance(1/2, float)
True
>>> isinstance(2, float)
False
I'm trying to make a calculator for something, but the formulas use a sigma. I have no idea how to do a sigma in Python; is there an operator for it?
I'll put a link here to a page that has the formulas on it, for illustration: http://fromthedepths.gamepedia.com/User:Evil4Zerggin/Advanced_cannon
A sigma (∑) is a Summation operator. It evaluates a certain expression many times, with slightly different variables, and returns the sum of all those expressions.
For example, in the ballistic coefficient formula

$$BC = \frac{\sum_i 2^{-i} \, bc_i}{\sum_i 2^{-i}}$$

the Python implementation would look something like this:
# Just guessing some values. You have to search for the actual values in the wiki.
ballistic_coefficients = [0.3, 0.5, 0.1, 0.9, 0.1]
total_numerator = 0
total_denominator = 0
for i, coefficient in enumerate(ballistic_coefficients):
    total_numerator += 2**(-i) * coefficient
    total_denominator += 2**(-i)
print('Total:', total_numerator / total_denominator)
You may want to look at the enumerate function, and beware precision problems.
The easiest way to do this is to create a sigma function that returns the summation. You don't need to use a library; you just need to understand the logic.
def sigma(first, last, const):
    # first : the first value of n (the index of summation)
    # last  : the last value of n
    # const : the constant multiplied by n in each term
    total = 0
    for i in range(first, last + 1):
        total += const * i
    return total
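For example (my own check), summing const * n for n from 1 to 5 with const = 2:

>>> sigma(1, 5, 2)
30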
An efficient way to do this in Python is to use reduce().
To solve
3
Σ i
i=1
You can use the following:
from functools import reduce
result = reduce(lambda a, x: a + x, [0]+list(range(1,3+1)))
print(result)
reduce() will take arguments of a callable and an iterable, and return one value as specified by the callable. The accumulator a is set to the first value (0), and then to the running sum after that. The current value in the iterable is set to x and added to the accumulator. The final accumulator is returned.
The formula to the right of the sigma is represented by the lambda. The sequence we are summing is represented by the iterable. You can change these however you need.
For example, if I wanted to solve:
Σ π*i^2
i
For the sequence I = [2, 3, 5], I could do the following:
reduce(lambda a, x: a + 3.14*x*x, [0]+[2,3,5])
You can see the following two code lines produce the same result:
>>> reduce(lambda a, x: a + 3.14*x*x, [0]+[2,3,5])
119.32
>>> (3.14*2*2) + (3.14*3*3) + (3.14*5*5)
119.32
I've looked at all the answers that different programmers and coders have given to your query, but I was unable to understand any of them, maybe because I am a high school student. According to me, using a LIST will definitely reduce some pain of coding, so here is what I think is the simplest way to form a sigma function.
# Creating a sigma function
a = int(input("enter a number for sigma "))
mylst = []
for i in range(1, a + 1):
    mylst.append(i)
b = sum(mylst)
print(mylst)
print(b)
Capital sigma (Σ) applies the expression after it to all members of a range and then sums the results.
In Python, sum will take the sum of a range, and you can write the expression as a comprehension:
For example
Speed Coefficient
A factor in muzzle velocity is the speed coefficient, which is a weighted average of the speed modifiers si of the (non-casing) parts, where each component i starting at the head has 3/4 the weight of the previous:

$$c = \frac{\sum_i 0.75^i \, s_i}{\sum_i 0.75^i}$$
The head will thus always determine at least 25% of the speed
coefficient.
For example, suppose the shell has a Composite Head (speed modifier
1.6), a Solid Warhead Body (speed modifier 1.3), and a Supercavitation
Base (speed modifier 0.9). Then we have
s0 = 1.6, s1 = 1.3, s2 = 0.9

$$c = \frac{1.6 + 0.75 \cdot 1.3 + 0.75^2 \cdot 0.9}{1 + 0.75 + 0.75^2} = \frac{3.08125}{2.3125} \approx 1.3324$$
From the example we can see that i starts from 0, not the usual 1, so we can do:
def speed_coefficient(parts):
    return (
        sum(0.75 ** i * si for i, si in enumerate(parts))
        /
        sum(0.75 ** i for i, si in enumerate(parts))
    )
>>> speed_coefficient([1.6, 1.3, 0.9])
1.3324324324324326
import numpy as np

def sigma(s, e):
    # Sum of the integers from s to e inclusive.
    x = np.arange(s, e + 1)
    return np.sum(x)