I was asked to do Newton polynomial interpolation and I was able to write the main code.
https://en.wikipedia.org/wiki/Newton_polynomial
But there is still one small thing that I have not been able to get around for a couple of days. After some reading I found a way to do it using SymPy, but I am not allowed to use anything other than basic numpy.
Now my problem is that I am trying to expand something like this
p(x) = j(x-q)(x-w)(x-e) + k(x-w)(x-e) + l(x-e) + d
to get this: p(x) = ax³ + bx² + cx + d, so I am looking for the polynomial coefficients a, b, c, d.
for example:
p(x) = 5 - 7(x+1) + 9(x+1)(x) - 7(x+1)(x)(x-1) = -7x³ + 9x² + 9x - 2
Of course I am looking for the general case, not only for polynomials of degree three.
Any tip would be much appreciated; I have been really stuck on this for a couple of days.
Sorry for the sloppy notation, but it seems Stack Overflow doesn't accept LaTeX and I am not able to post a picture because I don't have the required reputation. (If there is another way to post it properly, please tell me and I'll post it again.)
Thanks in advance :)
First, I'll rewrite the equation as
c3(x-r3)(x-r2)(x-r1)+c2(x-r2)(x-r1)+c1(x-r1)+c0
Next, note that this is equivalent to:
((c3(x-r3)+c2)(x-r2)+c1)(x-r1)+c0
You can multiply it out if you want to check.
So in general, you can do:
poly = np.poly1d([c[n]])
for i in range(n, 0, -1):
    poly = poly * np.poly1d([1, -r[i]]) + np.poly1d([c[i-1]])
You can probably replace np.poly1d([c[n]]) with just c[n] and np.poly1d([c[i-1]]) with just c[i-1], if you're willing to trust the coercion to work properly.
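For the concrete example from the question, a minimal sketch (the c/r layout below is my own choice: c[0..n] are the coefficients and r[1..n] the centres, with r[0] unused):

import numpy as np

# p(x) = 5 - 7(x+1) + 9(x+1)(x) - 7(x+1)(x)(x-1)
c = [5, -7, 9, -7]        # c0, c1, c2, c3
r = [None, -1, 0, 1]      # r[0] unused; r1 = -1, r2 = 0, r3 = 1
n = 3

poly = np.poly1d([c[n]])
for i in range(n, 0, -1):
    poly = poly * np.poly1d([1, -r[i]]) + np.poly1d([c[i-1]])

print(poly.coeffs)        # expected: [-7  9  9 -2]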
One way of doing it is to represent a polynomial as an array a[0]..a[n], where a[i] is the coefficient that multiplies x^i. The function will be something like p(x) = a[0] + a[1]*x + a[2]*(x**2) + ...
Now to add two polynomials in this representation you just need to pad the shorter one with zeros and add the values at matching indices.
If you want to multiply a polynomial by k*(x**z) you need to multiply every value by k and insert z zeros in front (a[0:0] = [0.] * z).
Using these two operations you can resolve the equation and get the coefficients you want.
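As a sketch of those two operations in that low-to-high representation (function names are my own):

def poly_add(a, b):
    # pad the shorter coefficient list with zeros and add index-wise
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_scale_shift(a, k, z):
    # multiply the polynomial by k*(x**z): scale every coefficient by k
    # and prepend z zeros, which shifts every term up by z degrees
    return [0.0] * z + [k * x for x in a]

print(poly_scale_shift([2, 3], 5, 2))   # (2 + 3x)*5x**2 -> [0.0, 0.0, 10, 15]
print(poly_add([1, 1], [0, 0, 2]))      # (1 + x) + 2x**2 -> [1, 1, 2.0]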
Multiplying two polynomials x(x-1) is the same as convolving their coefficients:
# x => [1, 0]
# (x-1) => [1, -1]
numpy.convolve([1, 0], [1, -1]) # [1, -1, 0] => x^2 - x + 0
This means you can solve the problem using
import numpy
def mult(a, b):
    """
    Polynomial multiplication is simply a convolution
    """
    return numpy.convolve(a, b)

def add(a, b):
    """
    Addition is a bit complex as a and b may have different lengths.
    Simply prepend zeros to the shorter one
    """
    if len(a) < len(b):
        a = numpy.insert(a, 0, [0] * (len(b) - len(a)))
    if len(b) < len(a):
        b = numpy.insert(b, 0, [0] * (len(a) - len(b)))
    return a + b
# p(x) = 5 - 7(x+1) + 9(x+1)(x) - 7(x+1)(x)(x-1) = -7x³ + 9x² + 9x - 2
add(
    add(
        numpy.array([5]),
        mult([-7], [1, 1]),
    ),
    add(
        mult([9], mult([1, 1], [1, 0])),
        mult([-7], mult([1, 1], mult([1, 0], [1, -1])))
    )
)
yields
array([-7, 9, 9, -2]) # => -7x^3 + 9x^2 + 9x - 2
Using numpy, we have access to the poly1d object, which can be built directly from roots via r=True. With that, j(x-q)(x-w)(x-e)+k(x-w)(x-e)+l(x-e)+d is equivalent to:
In [ ]: import numpy as np
   ...: j, q, w, e, k, l, d = range(1, 8)
   ...: poly1 = j * np.poly1d([q, w, e], r=True)  # j*(x-q)*(x-w)*(x-e)
   ...: poly2 = k * np.poly1d([w, e], r=True)     # k*(x-w)*(x-e)
   ...: poly3 = l * np.poly1d([e], r=True)        # l*(x-e)
   ...: poly = poly1 + poly2 + poly3 + d
   ...: print(poly)
   3     2
1 x - 4 x - 3 x + 19
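Applied to the concrete example from the question (reading the roots and coefficients off p(x) = 5 - 7(x+1) + 9(x+1)(x) - 7(x+1)(x)(x-1)), this would give, as a sketch:
In [ ]: poly = (-7 * np.poly1d([-1, 0, 1], r=True)   # -7*(x+1)*(x)*(x-1)
   ...:         + 9 * np.poly1d([-1, 0], r=True)     #  9*(x+1)*(x)
   ...:         - 7 * np.poly1d([-1], r=True)        # -7*(x+1)
   ...:         + 5)
   ...: print(poly)
    3     2
-7 x + 9 x + 9 x - 2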
I have limited experience with Python and NumPy, and I have been searching for a long time on the net with no luck. Please help or give me some ideas on how to achieve this.
A = [3, -1, 4]
B = array([[1,1,1], [1,-1,1], [1,1,-1]])
The closest one in B is [1, -1, 1].
The sign (positive or negative) of each entry should weigh more than plain closeness of (A, B): find the closest one in B whose entries all have the same sign (positive or negative) as A.
B1 = array([[1,1,1], [1,-1,1], [1,1,-1], [3,1,4]])
The result is still [1, -1, 1].
Thanks in advance.
One possible way:
A = np.array([3,-1, 4])
B = np.array([[1,1,1],[1,-1,1],[1,1,-1]])
# element-wise absolute differences
np.abs(B - A)
# sum of absolute values of distances (smallest is closest)
np.sum(np.abs(B - A), axis=1)
# index of smallest (in this case index 1)
np.argmin(np.sum(np.abs(B - A), axis=1))
# all in one line (take row 1 from B)
result = B[np.argmin(np.sum(np.abs(B - A), axis=1))]
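The question also asks that the signs match A; a vectorized sketch of that variant (my own addition, returning None when no row has matching signs) could be:

import numpy as np

A = np.array([3, -1, 4])
B = np.array([[1, 1, 1], [1, -1, 1], [1, 1, -1]])

dist = np.sum(np.abs(B - A), axis=1)                    # L1 distance of every row to A
same_sign = np.all(np.sign(B) == np.sign(A), axis=1)    # rows whose signs all match A

dist = np.where(same_sign, dist, np.inf)                # rule out sign-mismatched rows
result = B[np.argmin(dist)] if np.isfinite(dist).any() else None
print(result)   # [ 1 -1  1]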
Try this,
import numpy as np

A = np.array([3, -1, 4])
B = np.array([[1,1,1], [1,-1,1], [1,1,-1]])

x = np.inf
y = None   # stays None if no row has matching signs
for val in B:
    if x > (np.absolute(A - val)).sum() and (np.sign(A) == np.sign(val)).all():
        x = (np.absolute(A - val)).sum()
        y = val
print x
print y
I have a sequence definition
x_{n+1} = f(x_n, x_{n-1})
where x_n is x evaluated at time t_n. Any value in the sequence is defined by some function of the previous two values (and the time step, but for now that's constant). I would like to generate the first N values in this sequence, given x0 and x1.
What's the most Pythonic way to do this?
My current approach is just to loop. I create a numpy.ones array of the correct size, then loop through it by index. If the index is 0 or 1 then I change the value from 1 to x0 / x1 respectively. For greater indices I look up the previous values in the array and apply the function.
But I feel like this doesn't make use of the numpy array methods, so I wonder if there's a better approach?
Code
In my code I have a createSequence function, which takes in a definition of x_{n+1} as well as the boundary conditions and timestep, and outputs a sequence following those rules. NB, I'm very new to Python so any general advice would also be appreciated!
import numpy as np
def x_next(x_current, x_previous, dt):
    """Function to return the next value in the sequence
    x_current and x_previous are the values at tn and tn-1 respectively
    dt is the time step
    """
    return (x_current - x_previous)/dt  # as an example

def createSequence(x_next, x0, x1, start, stop, dt):
    """ Function to create sequence based on x_next function, and boundary conditions"""
    num = (stop-start)/dt
    x_array = np.ones(int(num))
    x_array[0] = x0
    x_array[1] = x1
    for index in range(len(x_array)):
        if index == 0:
            x_array[index] = x0
        elif index == 1:
            x_array[index] = x1
        else:
            x_current = x_array[index - 1]
            x_previous = x_array[index - 2]
            x_array[index] = x_next(x_current, x_previous, dt)
    return x_array
print(createSequence(x_next=x_next,x0=0.1,x1=0.2,start=0,stop=20,dt=0.1))
I would recommend using a generator because it allows you to generate sequences of arbitrary length without wasting memory, and one might argue it is "pythonic".
In the following, I will use the Fibonacci sequence as an example because it takes a similar form to your problem.
def fibonacci(a=0, b=1, length=None):
    # Generate a finite or infinite sequence
    num = 0
    while length is None or num < length:
        # Evaluate the next Fibonacci number
        c = a + b
        yield c
        # Advance to the next item in the sequence
        a, b = c, a
        num += 1
Note that a corresponds to your x_n, b corresponds to x_{n-1}, and c corresponds to x_{n+1}. And a simple example:
>>> list(fibonacci(length=10))
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
If you want to get the sequence into a numpy array
>>> np.fromiter(fibonacci(length=10), int)
array([ 1, 1, 2, 3, 5, 8, 13, 21, 34, 55])
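Adapting the same pattern to the recurrence in the question might look like the following sketch (it reuses the x_next signature and sample parameters from the question; the generator name is my own, and it assumes length >= 2):

import numpy as np

def x_next(x_current, x_previous, dt):
    return (x_current - x_previous) / dt   # as in the question

def sequence(x_next, x0, x1, dt, length):
    # yields x0, x1 and then length-2 further terms of x_{n+1} = f(x_n, x_{n-1})
    yield x0
    yield x1
    x_previous, x_current = x0, x1
    for _ in range(length - 2):
        x_previous, x_current = x_current, x_next(x_current, x_previous, dt)
        yield x_current

print(np.fromiter(sequence(x_next, 0.1, 0.2, 0.1, length=10), float))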
I think you want the initial collection of terms. However, if it should happen that you, or anyone reading this question, might want individual terms then the sympy library comes in handy. Everything here up to the horizontal line is from Solve a recurrence relation.
>>> from sympy import *
>>> var('n', integer=True)
n
>>> y = Function('y')
>>> f = y(n) - 2*y(n-1) - 5*y(n-2)
>>> r = rsolve(f, y(n), [1, 4])
Once you have r you can either evaluate it for various values of n within the sympy facilities ...
>>> N(r.subs(n,1))
4.00000000000000
>>> N(r.subs(n,2))
13.0000000000000
>>> N(r.subs(n,10))
265333.000000000
Or you could 'lift' the code in r and re-use it for your own routines.
>>> r
(1/2 + sqrt(6)/4)*(1 + sqrt(6))**n + (-sqrt(6) + 1)**n*(-sqrt(6)/4 + 1/2)
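For example, one possible way to 'lift' it is sympy.lambdify (rounding here because the closed form goes through floating-point square roots):
>>> from sympy import lambdify
>>> g = lambdify(n, r)
>>> [int(round(g(i))) for i in range(6)]
[1, 4, 13, 46, 157, 544]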
I have implemented a cyclic iteration function in two ways:
def Spin1(n, N):  # n - current state, N - highest state
    value = n + 1
    case1 = (value > N)
    case2 = (value <= N)
    return case1 * 0 + case2 * value

def Spin2(n, N):
    value = n + 1
    if value > N:
        return 0
    else:
        return value
These functions are identical regarding the returned results. However the second function is not broadcasting-capable for a numpy array. So to test the first function I run this:
import numpy
AR1 = numpy.zeros((3, 4), dtype = numpy.uint32)
AR1[1,2] = 5
print AR1
print Spin1(AR1,5)
Magically it works, and that is so sweet. So I see exactly what I want:
[[0 0 0 0]
[0 0 5 0]
[0 0 0 0]]
[[1 1 1 1]
[1 1 0 1]
[1 1 1 1]]
Now with the second function print Spin2(AR1,5) it fails with this error:
if value > N
ValueError: The truth value of an array with more than one element is ambiguous.
Use a.any() or a.all()
And it's clear why, since an if statement over a whole array makes no sense. So for now I have just used the first variant. But when I look at those functions I have a strong feeling that the first function performs many more mathematical operations, so I don't lose hope that I can do something about optimising it.
Questions:
1. Is it possible to optimise the function Spin1 to do fewer operations, or how do I use the function Spin2 in broadcasting mode (possibly without making my code too ugly)? Extra question: what would be the fastest way to do that manipulation with an array?
2. Is there some standard Python function which does the same calculation (not necessarily broadcasting-capable), and how is it correctly called - "cyclic increment" probably?
There is a numpy function for this: np.where:
In [590]: AR1
Out[590]:
array([[0, 0, 0, 0],
[0, 0, 5, 0],
[0, 0, 0, 0]], dtype=uint32)
In [591]: np.where(AR1 >= 5, 0, 1)
Out[591]:
array([[1, 1, 1, 1],
[1, 1, 0, 1],
[1, 1, 1, 1]])
So, you could define:
def Spin1(n, N):
    value = n + 1
    return np.where(value > N, 0, value)
NumPy also provides a way to turn normal Python functions into ufuncs:
def Spin2(n, N):
    value = n + 1
    if value > N:
        return 0
    else:
        return value

Spin2 = np.vectorize(Spin2)
So that you can now call Spin2 on arrays:
In [595]: Spin2(AR1, 5)
Out[595]:
array([[1, 1, 1, 1],
[1, 1, 0, 1],
[1, 1, 1, 1]])
However, np.vectorize mainly provides syntactic sugar. There is still a Python function call being made for each array element, which makes np.vectorize'd ufuncs no faster than equivalent code using Python for-loops.
Your Spin1 follows a well established pattern in array oriented languages (e.g. APL, MATLAB) for 'vectorizing' a function like Spin2. You create one or more booleans (or 0/1 arrays) to represent the various states the array elements can take, and then construct the output by multiplication and summation.
For example, to avoid divide-by-zero problems, I have used:
1/(x+(x==0))
A variation on this is to use a boolean index array to select array elements that should be changed. In this case, you want to return value, but with selected elements 'rolled over'.
def Spin3(n, N):  # n - current state, N - highest state
    value = n + 1
    value[value > N] = 0
    return value
In this case, the indexing approach is simpler, and seems to fit the program logic better. It may be faster, but I can't guarantee that. It's good to keep both approaches in mind.
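If you want to see which variant is faster for your array sizes, a quick timing sketch (assuming the Spin1/Spin3 definitions above are in scope; the array size is arbitrary) might be:

import numpy as np
import timeit

AR1 = np.zeros((300, 400), dtype=np.uint32)   # arbitrary test size
AR1[1, 2] = 5

for fn in (Spin1, Spin3):   # Spin2 would need np.vectorize first
    t = timeit.timeit(lambda: fn(AR1, 5), number=100)
    print("%s: %.4f s" % (fn.__name__, t))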
I put here some feedback as an answer, just so as not to mess up the question. I have done timing tests on the various functions and it turns out that assigning by a boolean mask is in this case the fastest variant (hpaulj's answer). np.where was 1.4 times slower and np.vectorize(Spin2) was 15 times slower. Now just out of curiosity I wanted to test this with loops, so I made up this algorithm for testing:
rows, cols = 300, 400   # example size; not specified in the original
AR1 = numpy.zeros((rows, cols), dtype=numpy.uint32)
d = 0
while d <= 100:
    Buf = numpy.zeros_like(AR1)
    r = 0
    c = 0
    while r < rows:
        while c < cols:
            temp = AR1[r, c] + 1
            if temp > 5:
                Buf[r, c] = 0
            else:
                Buf[r, c] = temp
            c += 1
        r += 1
        c = 0
    AR1 = Buf
    d += 1
I am not sure, but it seems to be a very straightforward implementation of all the above-mentioned functions. But it is so slow, almost 300 times slower. I have read similar questions on SO, but I still don't get it: WHY is it so slow, and what exactly is causing this slowdown? Here I have intentionally used a buffer to avoid read-write operations on the same elements, and I do no memory clean-up. What could be simpler? I am confused. I don't want to open a new question, since this has been asked a few times already, so perhaps someone can add comments or good links clarifying this?
I'm totally stuck and have no idea how to go about solving this. Let's say I have an array
arr = [1, 4, 5, 10]
and a number
n = 8
I need the shortest sequence of values from within arr which sums to n. So for example the following sequences of values from arr sum to n:
c1 = 5, 1, 1, 1
c2 = 4, 4
c3 = 1, 1, 1, 1, 1, 1, 1, 1
So in the above case, our answer is c2 because it's the shortest sequence from arr that equals the sum.
I'm not sure what's the simplest way of finding a solution to the above. Any ideas or help will be really appreciated.
Thanks!
Edited:
Fixed the array
The array will only have positive values.
I'm not sure how the subset-sum problem fixes this, probably due to my own ignorance. Does the subset-sum algorithm always give the shortest sequence that equals the sum? For example, would it identify c2 as the answer in the above scenario?
As has been pointed out before, this is the minimum coin change problem, typically solved with dynamic programming. Here's a Python implementation with time complexity O(nC) and space complexity O(C), where n is the number of coins and C is the required amount of money:
def min_change(V, C):
    table, solution = min_change_table(V, C)
    num_coins, coins = table[-1], []
    if num_coins == float('inf'):
        return []
    while C > 0:
        coins.append(V[solution[C]])
        C -= V[solution[C]]
    return coins

def min_change_table(V, C):
    m, n = C+1, len(V)
    table, solution = [0] * m, [0] * m
    for i in xrange(1, m):
        minNum, minIdx = float('inf'), -1
        for j in xrange(n):
            if V[j] <= i and 1 + table[i - V[j]] < minNum:
                minNum = 1 + table[i - V[j]]
                minIdx = j
        table[i] = minNum
        solution[i] = minIdx
    return (table, solution)
In the above functions V is the list of possible coins and C the required amount of money. Now when you call the min_change function the output is as expected:
min_change([1,4,5,10], 8)
> [4, 4]
For the benefit of people who find this question in future -
As Oscar Lopez and Priyank Bhatnagar have pointed out, this is the coin change (change-giving, change-making) problem.
In general, the dynamic programming solution they have proposed is the optimal solution - both in terms of (provably!) always producing the required sum using the fewest items, and in terms of execution speed. If your basis numbers are arbitrary, then use the dynamic programming solution.
If your basis numbers are "nice", however, a simpler greedy algorithm will do.
For example, the Australian currency system uses denominations of $100, $50, $20, $10, $5, $2, $1, $0.50, $0.20, $0.10, $0.05. Optimal change can be given for any amount by repeatedly giving the largest unit of change possible until the remaining amount is zero (or less than five cents.)
Here's an instructive implementation of the greedy algorithm, illustrating the concept.
def greedy_give_change(denominations, amount):
    # Sort from largest to smallest
    denominations = sorted(denominations, reverse=True)
    # notes/coins given so far
    change_given = list()
    for d in denominations:
        while amount >= d:
            change_given.append(d)
            amount -= d
    return change_given
australian_coins = [100, 50, 20, 10, 5, 2, 1, 0.50, 0.20, 0.10, 0.05]
change = greedy_give_change(australian_coins, 313.37)
print (change) # [100, 100, 100, 10, 2, 1, 0.2, 0.1, 0.05]
print (sum(change)) # 313.35
For the specific example in the original post (denominations = [1, 4, 5, 10] and amount = 8) the greedy solution is not optimal - it will give [5, 1, 1, 1]. But the greedy solution is much faster and simpler than the dynamic programming solution, so if you can use it, you should!
This problem is known as the minimum coin change problem.
You can solve it by using dynamic programming.
Here is the pseudo code:

Set MinCoin[i] equal to Infinity for all i
MinCoin[0] = 0
For i = 1 to N                // the target number N
    For j = 0 to M - 1        // the M denominations given
        // Number i is broken into i - Value[j], for which we already know the answer,
        // and we update if it gives us a smaller value than previously known.
        If (Value[j] <= i and MinCoin[i - Value[j]] + 1 < MinCoin[i])
            MinCoin[i] = MinCoin[i - Value[j]] + 1
Output MinCoin[N]
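A direct translation of that pseudocode into Python might look like this sketch (names follow the pseudocode):

def min_coin_count(values, N):
    INF = float('inf')
    min_coins = [0] + [INF] * N          # MinCoin[0] = 0, the rest Infinity
    for i in range(1, N + 1):
        for v in values:                 # the M denominations
            if v <= i and min_coins[i - v] + 1 < min_coins[i]:
                min_coins[i] = min_coins[i - v] + 1
    return min_coins[N]

print(min_coin_count([1, 4, 5, 10], 8))  # 2  (i.e. 4 + 4)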
This is a variant of the subset-sum problem. In your problem, you can pick an item several times. You can still use a similar idea to solve it with the dynamic programming technique. The basic idea is to design a function F(k, j), such that F(k, j) = 1 means that there is a sequence from arr whose sum is j and whose length is k.
Formally, the base case is that F(1, j) = 1 if there exists an i such that arr[i] = j. For the inductive case, F(k, j) = 1 if there exists an i such that arr[i] = m and F(k-1, j-m) = 1.
The smallest k with F(k, n) = 1 is the length of the shortest sequence you want.
By using the dynamic programming technique, you can compute the function F without using recursion.
By tracking additional information for every F(k, j), you can also reconstruct the shortest sequence.
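A small sketch of that table-filling idea (using F[0][0] = True as an equivalent base case, and assuming positive integer values in arr):

def shortest_length(arr, n):
    # F[k][j] is True if some length-k sequence of values from arr sums to j
    F = [[False] * (n + 1) for _ in range(n + 1)]
    F[0][0] = True
    for k in range(1, n + 1):
        for j in range(1, n + 1):
            F[k][j] = any(F[k - 1][j - m] for m in arr if m <= j)
        if F[k][n]:
            return k        # smallest k with F(k, n) = 1
    return None             # n cannot be formed from arr

print(shortest_length([1, 4, 5, 10], 8))   # 2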
What you're trying to solve is a variant of the coin change problem. Here you're looking for smallest amount of change, or the minimum amount of coins that sum up to a given amount.
Consider a simple case where your array is
c = [1, 2, 3]
you write 5 as a combination of elements from C and want to know what is the shortest such combination. Here C is the set of coin values and 5 is the amount for which you want to get change.
Let's write down all possible combinations:
1 + 1 + 1 + 1 + 1
1 + 1 + 1 + 2
1 + 2 + 2
1 + 1 + 3
2 + 3
Note that two combinations are the same up to re-ordering, so for instance 2 + 3 = 3 + 2.
Here there is an awesome result that's not obvious at first sight but it's very easy to prove. If you have any sequence of coins/values that is a sequence of minimum length that sums up to a given amount, no matter how you split this sequence the two parts will also be sequences of minimum length for the respective amounts.
For instance if c[3] + c[1] + c[2] + c[7] + c[2] + c[3] add up to S and we know that 6 is the minimal length of any sequence of elements from c that add up to S then if you split
|
S = c[3] + c[1] + c[2] + c[7] | + c[2] + c[3]
|
you have that 4 is the minimal length for sequences that add up to c[3] + c[1] + c[2] + c[7] and 2 the minimal length for sequences that add up to c[2] + c[3].
|
S = c[3] + c[1] + c[2] + c[7] | + c[2] + c[3]
|
= S_left + S_right
How to prove this? By contradiction, assume that the length of S_left is not optimal, that is there's a shorter sequence that adds up to S_left. But then we could write S as a sum of this shorter sequence and S_right, thus contradicting the fact that the length of S is minimal. □
Since this is true no matter how you split the sequence, you can use this result to build a recursive algorithm that follows the principles of dynamic programming paradigm (solving smaller problems while possibly skipping computations that won't be used, memoization or keeping track of computed values, and finally combining the results).
Because of this property of maintaining optimality for subproblems, the coins problem is also said to "exhibit optimal substructure".
OK, so in the small example above this is how we would go about solving the problem with a dynamic programming approach: assume we want to find the shortest sequence of elements from c = [1, 2, 3] for writing the sum 5. We solve the subproblems obtained by subtracting one coin: 5 - 1, 5 - 2, and 5 - 3, we take the smallest solution of these subproblems and add 1 (the missing coin).
So we can write something like
shortest_seq_length([1, 2, 3], 5) =
    min( shortest_seq_length([1, 2, 3], 5-1),
         shortest_seq_length([1, 2, 3], 5-2),
         shortest_seq_length([1, 2, 3], 5-3)
    ) + 1
It is convenient to write the algorithm bottom-up, starting from smaller values of the sums that can be saved and used to form bigger sums. We just solve the problem for all possible values starting from 1 and going up to the desired sum.
Here's the code in Python:
def shortest_seq_length(c, S):
    res = {0: 0}  # res contains computed results res[i] = shortest_seq_length(c, i)
    for i in range(1, S+1):
        res[i] = min([res[i-x] for x in c if x <= i]) + 1
    return res[S]
Now this works except for the cases when we cannot fill the memoization structure for all values of i. This is the case when we don't have the value 1 in c, so for instance we cannot form the sum 1 if c = [2, 3], and with the above function we get
shortest_seq_length([2, 3], 5)
# ValueError: min() arg is an empty sequence
So to take care of this issue one could for instance use a try/except:
def shortest_seq_length(c, S):
    res = {0: 0}  # res contains results for each sum res[i] = shortest_seq_length(c, i)
    for i in range(1, S+1):
        try:
            res[i] = min([res[i-x] for x in c if x <= i and res[i-x] is not None]) + 1
        except ValueError:
            res[i] = None  # takes care of the error when [res[i-x] for x in c if x <= i] is empty
    return res[S]
Or without try/except:
def shortest_seq_length(c, S):
    res = {0: 0}  # res[i] = shortest_seq_length(c, i)
    for i in range(1, S+1):
        prev = [res[i-x] for x in c if x <= i and res[i-x] is not None]
        if len(prev) > 0:
            res[i] = min(prev) + 1
        else:
            res[i] = None  # takes care of the case when [res[i-x] for x in c if x <= i] is empty
    return res[S]
Try it out:
print(shortest_seq_length([2, 3], 5))
# 2
print(shortest_seq_length([1, 5, 10, 25], 37))
# 4
print(shortest_seq_length([1, 5, 10], 30))
# 3
print(shortest_seq_length([1, 5, 10], 25))
# 3
print(shortest_seq_length([1, 5, 10], 29))
# 7
print(shortest_seq_length([5, 10], 9))
# None
To show not only the length but also the combinations of coins of minimal length:
from collections import defaultdict
def shortest_seq_length(coins, sum):
    combos = defaultdict(list)
    combos[0] = [[]]
    for i in range(1, sum+1):
        for x in coins:
            if x <= i and combos[i-x] is not None:
                for p in combos[i-x]:
                    comb = sorted(p + [x])
                    if comb not in combos[i]:
                        combos[i].append(comb)
        if len(combos[i]) > 0:
            m = min(map(len, combos[i]))
            combos[i] = [combo for combo in combos[i] if len(combo) == m]
        else:
            combos[i] = None
    return combos[sum]
total = 9
coin_sizes = [10, 8, 5, 4, 1]
shortest_seq_length(coin_sizes, total)
# [[1, 8], [4, 5]]
To show all sequences, remove the minimum computation:
from collections import defaultdict
def all_seq_length(coins, sum):
    combos = defaultdict(list)
    combos[0] = [[]]
    for i in range(1, sum+1):
        for x in coins:
            if x <= i and combos[i-x] is not None:
                for p in combos[i-x]:
                    comb = sorted(p + [x])
                    if comb not in combos[i]:
                        combos[i].append(comb)
        if len(combos[i]) == 0:
            combos[i] = None
    return combos[sum]
total = 9
coin_sizes = [10, 5, 4, 8, 1]
all_seq_length(coin_sizes, total)
# [[4, 5],
# [1, 1, 1, 1, 5],
# [1, 4, 4],
# [1, 1, 1, 1, 1, 4],
# [1, 8],
# [1, 1, 1, 1, 1, 1, 1, 1, 1]]
One small improvement to the algorithm is to skip the step of computing the minimum when the sum is equal to one of the values/coins, but this can be done better if we write a loop to compute the minimum. This however doesn't improve the overall complexity that's O(mS) where m = len(c).
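As a sketch, that explicit-loop version with the early exit could look like this (same interface as shortest_seq_length above):

def shortest_seq_length(c, S):
    res = {0: 0}
    for i in range(1, S+1):
        best = None
        for x in c:
            if x == i:          # i is itself a coin: length 1, nothing can beat that
                best = 1
                break
            if x < i and res[i-x] is not None:
                cand = res[i-x] + 1
                if best is None or cand < best:
                    best = cand
        res[i] = best           # None if the sum i cannot be formed at all
    return res[S]

print(shortest_seq_length([1, 5, 10, 25], 37))   # 4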
Just for the sake of exercise I'm trying to find a way to express Pascal's Triangle with Python list comprehension, and do that in iterative way. I'm representing Pascal's Triangle in Python as:
tri = [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1], ...]
As I'm doing it iteratively, I need to somehow access previously calculated lines of the triangle, and I'm trying to do this without declaring a local variable.
So far I have this:
tri = [lines.append(
           ([1] + [lines[i][j]+lines[i][j-1] for j in xrange(1, i+1)] + [1]) if i > 0
           else [1, 1])
       or lines[i] for i, lines in enumerate([[[1]]]*height)]
Any ideas?
EDIT: As pointed out by @brc, this is really a bad example of how and/or when to use list comprehensions.
You can simply use the definition of the binomial coefficients:
from math import factorial
tri = [[factorial(n) // (factorial(k) * factorial(n - k)) for k in range(n+1)]
       for n in range(height)]
Obeying all the slightly insane restrictions you imposed upon yourself, the only simplifications I can think of are incorporated here:
[d.setdefault(j, [sum(d[len(d)-1][max(i, 0):i + 2]) for i in range(-1, j)])
for j, d in enumerate([{0: [1]}] * 5)]
At least this is shorter than your version and gets rid of all the conditional expressions. Of course it's still insane.
As explicitly iterative approaches are wanted, I would use an iterator.
def bincoeff(num=None):
    from math import factorial
    if num is None:
        it = iter(lambda: True, False)  # waiting for Godot
    else:
        it = xrange(num)
    for n, _ in enumerate(it):
        yield [factorial(n) // (factorial(k) * factorial(n - k)) for k in range(n+1)]
With this generator, you can
build a list:
bc = list(bincoeff(100))
get all up to a certain maximum:
for bc in bincoeff():
    if len(bc) > 100: break
    print bc
...