I'm trying to minimize a function f of ~80 variables stored in an array. The function is defined by two nested loops: the outer one indexes array by i, while the inner loop is performed array[i] times and adds the result of a computation to a running total. The computation depends on some conditions x and y and changes slightly every time it's performed, which is why I need the loop structure. Here is a minimal working example in Python:
def f(array):
    total = 0
    x = 0
    y = 0
    for i in range(len(array)):
        for j in range(array[i]):
            result = 2*x + y
            total = total + result
            x = x + 1
        x = 0
        y = y + 1
    return total
So, for instance, print(f([2, 1])) prints 3, since [(2*0) + 0] + [(2*1) + 0] + [(2*0) + 1] = 0 + 2 + 1 = 3.
I want to find the entries of array that minimize the value of f. However, when I tell (e.g.) Mathematica to minimize f([x1, x2, ..., x80]) and spit out the minimizer array, the program complains because it can't perform the loops defining f an indeterminate number of times.
In light of this, my question is the following:
How do I minimize a multivariate function whose parameters describe the number of times a given loop is to be iterated?
I had originally tried to implement this in Mathematica, but found that I could not define f by the procedure above. The best I could do was tell Mathematica to perform the loops above and then define f[array_] := total after total had been computed. When I ran my code, Mathematica naturally claimed that it could not evaluate f, throwing an error even before it executed my command NMinimize[{f[array], array ∈ Integers}, array]. The fact that Mathematica tries to evaluate f before it is called in NMinimize indicates that I don't quite understand how functions work in Mathematica. Any help in untangling this situation would be greatly appreciated!
As written, your function has an analytical minimum, so there is no need for numerical optimization. Unfortunately, StackOverflow won't let me typeset the mathematics (ask on Math.StackExchange and I can provide a derivation), but given an array A = [a0 a1 ... an] where each ai is a positive integer, and an array Y = [0 1 ... n], the function you posted reduces to the matrix product A * (A - 1 + Y)', where ' denotes a matrix transpose and * denotes matrix multiplication. The function is therefore minimized by minimizing each ai individually. So, if this is part of a larger optimization, your task should be to find the minimum of each element of A, subject to whatever constraints the elements themselves carry.
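A quick NumPy check of this reduction on the example from the question (a sketch; f_closed is my own name for the closed form):

import numpy as np

def f_closed(A):
    A = np.asarray(A)
    Y = np.arange(len(A))      # Y = [0, 1, ..., n]
    return A @ (A - 1 + Y)     # A * (A - 1 + Y)'

print(f_closed([2, 1]))        # 3, matching f([2, 1]) above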
So I was reviewing some slides my teacher gave us and we are given the following Python code:
a = 5
b = 6
c = 10
for i in range(n):
    for j in range(n):
        x = i * j
        y = j * j
        z = i * j
for k in range(n):
    w = a*k + 45
    v = b*b
d = 33
For the first part (the variable declarations) the time complexity is constant, so O(1), or, for the purposes of writing the whole thing as an equation at the end, 3. The same goes for the last part, which counts as 1.
Now, for the second and third parts is where my question comes in. The second part apparently has a 3n^2 + 2 time complexity and the third one a 2n + 1. I know that the 3n^2 and 2n come from the number of variables inside the loops (because they get iterated that many times, and in the nested one that makes it n*n).
But I just don't know where the + 2 and + 1 come from.
I've tried looking up why a for loop in Python counts as n + 1, but not a single site so far describes it that way. I think that's because all of them give the general time complexity, which of course I get is O(n), but part of my assignment is to give the specific count as well, and that's where the constants come in.
My guess is that the n comes from the range(n) part rather than from the for i in itself, and thus that declaration of the for is essentially like any other variable declarations (constant) but I'm really not sure and would like to understand why.
(If you don't feel like giving out a full explanation I'd be fine with just any link to some site/video that does so).
Thank you :)
Formula for a for loop: x*n + 1, where:
x - the number of operations performed in each iteration
n - the number of iterations
+1 - creating the range object
So in your case the formula is 1 + n(3n + 1) = 1 + 3n^2 + n:
creating the main loop's range object + n iterations * (creating 1 inner range object + 3 operations * n iterations)
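A small sketch (my own) that encodes this cost model, one unit per assignment and one per range() creation, and checks it against the closed form:

def count_ops(n):
    ops = 1                # creating the outer range object
    for i in range(n):
        ops += 1           # creating the inner range object
        for j in range(n):
            ops += 3       # the x, y, z assignments
    return ops

for n in (1, 5, 10):
    print(count_ops(n) == 3*n*n + n + 1)   # True, True, True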
The time complexity depends on the programming language you are computing it for.
Source: http://math.uni.wroc.pl/~jagiella/p2python/skrypt_html/wyklad2-1.html
Let w = (w_1, w_2, ..., w_n) be an array, where n is large.
Without using loops, I want to define the function
sum from i = 1 to i = n, log(1 + exp(w_i))
Is there a vector operation that handles this in Numpy? I was thinking of
np.dot(np.ones((n,)), np.log(1 + np.exp(w)))
but I don't know if that works.
You can use np.sum(...) to sum all elements of the array.
While np.log(1 + np.exp(w)) should work fine, there's also np.log1p(...), which computes the natural log of one plus its argument with better precision when the argument is very small.
Putting it all together:
result = np.sum(np.log1p(np.exp(w)))
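One caveat worth adding: for large positive w, np.exp(w) overflows to inf before the log is taken. np.logaddexp(0, w) evaluates log(exp(0) + exp(w)) = log(1 + exp(w)) stably at both extremes:

import numpy as np

w = np.array([-50.0, 0.0, 50.0, 800.0])
print(np.log1p(np.exp(w)))      # the last entry overflows to inf (with a warning)
print(np.logaddexp(0, w))       # [~1.9e-22, 0.693..., 50.0, 800.0]

result = np.sum(np.logaddexp(0, w))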
I am using scipy.stats.binom to work with the binomial distribution. Given n and p, the probability function is
Given n and p, the probability mass function is P(X = k) = C(n, k) * p^k * (1 - p)^(n - k). A sum over k ranging from 0 to n should (and indeed does) give 1. Fixing a point x_0, the probabilities for k < x_0 and for k >= x_0 ought to add up to 1, so the two methods below should give the same result. However, the code yields two different answers when x_0 is close to n.
from scipy.stats import binom

n = 9
p = 0.006985
b = binom(n=n, p=p)
x_0 = 8

# Method 1
cprob = 0
for k in range(x_0, n+1):
    cprob += b.pmf(k)
print('cumulative prob with method 1:', cprob)

# Method 2
cprob = 1
for k in range(0, x_0):
    cprob -= b.pmf(k)
print('cumulative prob with method 2:', cprob)
I expect the outputs from both methods to agree. For x_0 < 7 they do, but for x_0 >= 8, as above, I get
>> cumulative prob with method 1: 5.0683768775504006e-17
>> cumulative prob with method 2: 1.635963929799698e-16
The precision error in the two methods propagates through my code (later) and gives vastly different answers. Any help is appreciated.
Roundoff errors on the order of the machine epsilon are expected and inevitable. That these propagate and later blow up means that your problem is poorly conditioned. You'd need to rethink the algorithm or the implementation, depending on where the bad conditioning comes from.
In your specific example you can get by using either np.sum (which tries to be careful with roundoff), or even math.fsum from the standard library.
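For example, a sketch of the math.fsum route on the numbers from the question (fsum removes the summation roundoff, though the cancellation inherent in starting from 1 remains):

import math
from scipy.stats import binom

b = binom(n=9, p=0.006985)
x_0 = 8

# method 1: exactly rounded sum of the upper tail
m1 = math.fsum(b.pmf(k) for k in range(x_0, 9 + 1))

# method 2: 1 minus the lower tail, summed with fsum
m2 = math.fsum([1.0] + [-b.pmf(k) for k in range(0, x_0)])

print(m1, m2)

scipy also exposes the upper tail directly as b.sf(x_0 - 1), which sidesteps the subtraction from 1 altogether.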
I'm trying to make a calculator for something, but the formulas use a sigma. I have no idea how to do a sigma in Python. Is there an operator for it?
I'll put a link here to a page that has the formulas on it, for illustration: http://fromthedepths.gamepedia.com/User:Evil4Zerggin/Advanced_cannon
A sigma (∑) is a Summation operator. It evaluates a certain expression many times, with slightly different variables, and returns the sum of all those expressions.
For example, take the ballistic coefficient formula from that page, a weighted average in which part i contributes its coefficient c_i with weight 2^(-i): BC = sum_i 2^(-i)*c_i / sum_i 2^(-i).
The Python implementation would look something like this:
# Just guessing some values. You have to search the actual values in the wiki.
ballistic_coefficients = [0.3, 0.5, 0.1, 0.9, 0.1]
total_numerator = 0
total_denominator = 0
for i, coefficient in enumerate(ballistic_coefficients):
    total_numerator += 2**(-i) * coefficient
    total_denominator += 2**(-i)
print('Total:', total_numerator / total_denominator)
You may want to look at the enumerate function, and beware precision problems.
The easiest way to do this is to create a sigma function that returns the summation. It's easy to understand: you don't need to use a library, you just need to understand the logic.
def sigma(first, last, const):
    # first : the first value of n (the index of summation)
    # last  : the last value of n
    # const : the constant each n is multiplied by
    total = 0
    for i in range(first, last + 1):
        total += const * i
    return total
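For example, summing const*n for n from 1 to 5:

print(sigma(1, 5, 2))   # 2*1 + 2*2 + 2*3 + 2*4 + 2*5 = 30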
An efficient way to do this in Python is to use reduce().
To solve
3
Σ i
i=1
You can use the following:
from functools import reduce
result = reduce(lambda a, x: a + x, [0]+list(range(1,3+1)))
print(result)
reduce() will take arguments of a callable and an iterable, and return one value as specified by the callable. The accumulator is a and is set to the first value (0), and then the current sum following that. The current value in the iterable is set to x and added to the accumulator. The final accumulator is returned.
The formula to the right of the sigma is represented by the lambda. The sequence we are summing is represented by the iterable. You can change these however you need.
For example, if I wanted to solve:
Σ π*i^2
i
For a sequence I = [2, 3, 5], I could do the following:
reduce(lambda a, x: a + 3.14*x*x, [0]+[2,3,5])
You can see the following two code lines produce the same result:
>>> reduce(lambda a, x: a + 3.14*x*x, [0]+[2,3,5])
119.32
>>> (3.14*2*2) + (3.14*3*3) + (3.14*5*5)
119.32
I've looked at all the answers that different programmers and coders have given to your query, but I was unable to understand any of them, maybe because I am a high school student. In my opinion, using a LIST will definitely reduce some of the pain of coding, so here is what I think is the simplest way to form a sigma function.
# creating a sigma function
a = int(input("enter a number for sigma "))
mylst = []
for i in range(1, a + 1):
    mylst.append(i)
b = sum(mylst)
print(mylst)
print(b)
Capital sigma (Σ) applies the expression after it to all members of a range and then sums the results.
In Python, sum will take the sum of a range, and you can write the expression as a comprehension:
For example
Speed Coefficient
A factor in muzzle velocity is the speed coefficient, which is a weighted average of the speed modifiers s_i of the (non-casing) parts, where each component i starting at the head carries 3/4 the weight of the previous one: c = sum_i 0.75^i * s_i / sum_i 0.75^i. The head will thus always determine at least 25% of the speed coefficient.
For example, suppose the shell has a Composite Head (speed modifier
1.6), a Solid Warhead Body (speed modifier 1.3), and a Supercavitation
Base (speed modifier 0.9). Then we have
s0=1.6
s1=1.3
s2=0.9
From the example we can see that i starts from 0, not the usual 1, and so we can write:
def speed_coefficient(parts):
    return (
        sum(0.75 ** i * si for i, si in enumerate(parts))
        / sum(0.75 ** i for i, _ in enumerate(parts))
    )
>>> speed_coefficient([1.6, 1.3, 0.9])
1.3324324324324326
import numpy as np

def sigma(s, e):
    # inclusive sum of the integers from s to e
    return np.sum(np.arange(s, e + 1))
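For example, sigma(1, 5) gives 1 + 2 + 3 + 4 + 5:

print(sigma(1, 5))   # 15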
The method I've used to try to solve this works, but I don't think it's very efficient, because as soon as I enter a number that is too large it stops working.
def fib_even(n):
    fib_even = []
    a, b = 0, 1
    for i in range(0, n):
        c = a + b
        if c % 2 == 0:
            fib_even.append(c)
        a, b = b, a + b
    return fib_even

def sum_fib_even(n):
    fib_evens = fib_even(n)
    s = 0
    for i in fib_evens:
        s = s + i
    return s

n = 4000000
answer = sum_fib_even(n)
print(answer)
This for example doesn't work for 4000000 but will work for 400. Is there a more efficient way of doing this?
It is not necessary to compute all the Fibonacci numbers.
Note: in what follows I use the more standard initial values F[0]=0, F[1]=1 for the Fibonacci sequence. Project Euler #2 starts its sequence with F[2]=1, F[3]=2, F[4]=3, .... For this problem the result is the same for either choice.
Summation of all Fibonacci numbers (as a warm-up)
The recursion equation
F[n+1] = F[n] + F[n-1]
can also be read as
F[n-1] = F[n+1] - F[n]
or
F[n] = F[n+2] - F[n+1]
Summing this up for n from 1 to N (remember F[0]=0, F[1]=1) gives on the left the sum of Fibonacci numbers, and on the right a telescoping sum where all of the inner terms cancel
sum(n=1 to N) F[n] = (F[3]-F[2]) + (F[4]-F[3]) + (F[5]-F[4])
+ ... + (F[N+2]-F[N+1])
= F[N+2] - F[2]
So for the sum using the number N=4,000,000 from the question, one just has to compute
F[4,000,002] - 1
with one of the superfast methods for the computation of single Fibonacci numbers. Either halving-and-squaring, equivalent to exponentiation of the iteration matrix, or the exponential formula based on the golden ratio (computed in the necessary precision).
Since about every 20 Fibonacci numbers you gain 4 additional digits, the final result will consist of about 800000 digits. Better use a data type that can contain all of them.
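A minimal sketch (my own, not part of the original answer) of the halving-and-squaring approach, here in its fast-doubling form, which needs only O(log n) big-integer multiplications; Python's arbitrary-precision integers hold all the digits:

def fib(n):
    # returns (F[n], F[n+1]) via the fast-doubling identities
    # F[2k] = F[k] * (2*F[k+1] - F[k])  and  F[2k+1] = F[k]^2 + F[k+1]^2
    if n == 0:
        return (0, 1)
    a, b = fib(n // 2)          # a = F[k], b = F[k+1] with k = n // 2
    c = a * (2 * b - a)         # F[2k]
    d = a * a + b * b           # F[2k+1]
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

# sum of the first 4,000,000 Fibonacci numbers, via the telescoping identity above;
# the result has about 800,000 digits
total = fib(4_000_002)[0] - 1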
Summation of the even Fibonacci numbers
Just inspecting the first 10 or 20 Fibonacci numbers reveals that all even members have an index of 3*k. Check by subtracting two successive recursions to get
F[n+3]=2*F[n+2]-F[n]
so F[n+3] always has the same parity as F[n]. Investing more computation one finds a recursion for members three indices apart as
F[n+3] = 4*F[n] + F[n-3]
Setting
S = sum(k=1 to K) F[3*k]
and summing the recursion over n=3*k gives
F[3*K+3]+S-F[3] = 4*S + (-F[3*K]+S+F[0])
or
4*S = (F[3*K+3]+F[3*K]) - (F[3]+F[0]) = 2*F[3*K+2] - 2*F[2]
So the desired sum has the formula
S = (F[3*K+2]-1)/2
A quick calculation with the golden ratio formula reveals what N should be so that F[N] is just below the boundary, and thus what K = N div 3 should be:
N = Floor( log( sqrt(5)*Max )/log( 0.5*(1+sqrt(5)) ) )
Reduction of the Euler problem to a simple formula
In the original problem, one finds that N=33 and thus the sum is
S = (F[35]-1)/2;
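Using the fast-doubling fib sketched above, a quick check of that value:

K = 33 // 3                        # N = 33 for Max = 4,000,000
S = (fib(3*K + 2)[0] - 1) // 2     # fib returns (F[n], F[n+1])
print(S)                           # 4613732, the known answer to Project Euler #2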
Reduction of the problem in the question and consequences
Taking the misrepresented problem in the question, N=4,000,000, so K=1,333,333 and the sum is
(F[4,000,001]-1)/2
which has about 836,000 digits. And yes, big-integer types can handle such numbers, it just takes time to compute with them.
If printed in a format of 60 lines of 80 digits each, this number would fill about 175 sheets of paper, just to give an idea of what the output would look like.
It should not be necessary to store all the intermediate Fibonacci numbers; the storage itself may be causing a performance problem.
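For completeness, a list-free sketch (my own) that uses the recursion F[n+3] = 4*F[n] + F[n-3] derived above to visit only the even members, keeping a running sum:

def sum_even_fib_below(limit):
    prev, curr = 0, 2           # F[0] = 0 and F[3] = 2 are consecutive even members
    total = 0
    while curr <= limit:
        total += curr
        prev, curr = curr, 4 * curr + prev
    return total

print(sum_even_fib_below(4_000_000))   # 4613732, the answer to the original Euler problem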