I'm trying to build a linear optimization model for a production unit. I have a binary decision variable X(i)(j), where i is the hour of day j. The constraint I need to introduce is a limit on downtime (the minimum time period the production unit needs to stay turned off between two starts).
For example:
Hours: 1 2 3 4 5 6 7 8 9 10 11 12
On/off: 0 1 0 1 1 0 1 1 1 0 0 1
I cannot run in hour 4 or hour 7, because the gap between hours 2 and 4 / hours 5 and 7 is only one hour. I can run in hour 12 since I have a two-hour gap after hour 9. How do I enforce this constraint in linear programming / optimization?
I think you are asking for a way to model: "at least two consecutive periods of down time". A simple formulation is to forbid the pattern:
t t+1 t+2
1 0 1
This can be written as a linear inequality:
x(t) - x(t+1) + x(t+2) <= 1
One way to convince yourself this is correct is to just enumerate the patterns:
x(t) x(t+1) x(t+2) LHS
0 0 0 0
0 0 1 1
0 1 0 -1
0 1 1 0
1 0 0 1
1 0 1 2 <--- to be excluded
1 1 0 0
1 1 1 1
With x(t) - x(t+1) + x(t+2) <= 1 we exactly exclude the pattern 101 but allow all others.
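A quick brute-force check of that table (purely illustrative):

    from itertools import product

    # Enumerate all 0/1 assignments of x(t), x(t+1), x(t+2)
    for x in product((0, 1), repeat=3):
        lhs = x[0] - x[1] + x[2]
        print(x, lhs, "excluded" if lhs > 1 else "allowed")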
Similarly, "at least two consecutive periods of up time" can be handled by excluding the pattern
t t+1 t+2
0 1 0
or
-x(t) + x(t+1) - x(t+2) <= 0
Note: one way to derive the second constraint from the first is to observe that forbidding the pattern 010 is the same as defining y(t) = 1 - x(t) and excluding 101 in terms of y(t). In other words:
(1-x(t)) - (1-x(t+1)) + (1-x(t+2)) <= 1
This is identical to
-x(t) + x(t+1) - x(t+2) <= 0
In the comments it is argued that this method does not work. That is based on a substantial misunderstanding of the method. The pattern 100 (i.e. x(1)=1, x(2)=0, x(3)=0) is not allowed because of
-x(0) + x(1) - x(2) <= 0
where x(0) is the status before the start of the planning period. This is historic data. If x(0)=0, we have x(1) - x(2) <= 0, disallowing 10 at the start of the horizon. I.e. this method is correct (if not, a lot of my models would fail).
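To make this concrete, here is a minimal sketch of how these constraints could be generated with PuLP over a single 12-hour index (PuLP, the placeholder objective, and the simplified indexing are illustrative choices, not part of the question):

    import pulp

    H = 12                                   # planning horizon in hours
    model = pulp.LpProblem("unit_commitment", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", range(1, H + 1), cat="Binary")
    x0 = 0                                   # on/off status just before the horizon (historic data)

    model += pulp.lpSum(x.values())          # placeholder objective

    # Forbid the pattern 1 0 1: at least two consecutive hours of downtime
    for t in range(1, H - 1):
        model += x[t] - x[t + 1] + x[t + 2] <= 1

    # Forbid the pattern 0 1 0: at least two consecutive hours of uptime,
    # including the boundary with the historic status x0
    model += -x0 + x[1] - x[2] <= 0
    for t in range(1, H - 1):
        model += -x[t] + x[t + 1] - x[t + 2] <= 0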
Related
I have a large dataframe with a price column that stays at the same value as time increases, then changes, and then stays at that new value for a while before going up or down. I want to write a function that looks at the price column and creates a new column called next movement that indicates whether the next movement of the price column will be up or down.
For example, if the price column looked like [1,1,1,2,2,2,4,4,4,3,3,3,4,4,4,2,1], then the next movement column should be [1,1,1,1,1,1,0,0,0,1,1,1,0,0,0,0,-1], with 1 representing the next movement being up, 0 representing the next movement being down, and -1 representing unknown.
def make_next_movement_column(DataFrame, column):
    DataFrame["next movement"] = -1
    for i in range(DataFrame.shape[0]):
        for j in range(i + 1, DataFrame.shape[0]):
            if DataFrame[column][j] > DataFrame[column][i]:
                DataFrame["next movement"][i:j] = 1
                break
            if DataFrame[column][j] < DataFrame[column][i]:
                DataFrame["next movement"][i:j] = 0
                break
        i = j - 1
    return DataFrame
I wrote this function and it does work, but the problem is that it is horribly inefficient. I was wondering if there was a more efficient way to write this function.
This answer doesn't seem to work, because the diff method only compares each value with the next one, but I want to find the next movement no matter how far away it is.
Annotated code
import numpy as np

# Calculate the diff between consecutive rows (current value minus next value)
s = df['column'].diff(-1)
# Treat zero diffs as missing and backfill, so every row in a run of equal
# prices picks up the first non-zero change that follows it
s = s.mask(s == 0).bfill()
# 1 where that change is upward (negative diff), 0 where it is downward,
# -1 where no further change exists
df['next_movement'] = np.select([s <= -1, s >= 1], [1, 0], -1)
Result
column next_movement
0 1 1
1 1 1
2 1 1
3 2 1
4 2 1
5 2 1
6 4 0
7 4 0
8 4 0
9 3 1
10 3 1
11 3 1
12 4 0
13 4 0
14 4 0
15 2 0
16 1 -1
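For reference, a self-contained way to reproduce the result above from the list in the question (the column name 'column' is just the placeholder used in the answer):

    import pandas as pd
    import numpy as np

    df = pd.DataFrame({'column': [1, 1, 1, 2, 2, 2, 4, 4, 4, 3, 3, 3, 4, 4, 4, 2, 1]})

    s = df['column'].diff(-1)        # current value minus next value
    s = s.mask(s == 0).bfill()       # propagate the next non-zero change backwards
    df['next_movement'] = np.select([s <= -1, s >= 1], [1, 0], -1)
    print(df)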
I have a 3-dimensional numpy array with binary (0 and 1) values representing a voxelized space. A value of 1 means that the voxel is occupied, 0 means it is empty. For simplicity I will describe the problem with 2D data. An example of such a pocket (an empty region enclosed by occupied voxels) could look like this:
1 1 1 1 1 1 1
1 1 0 0 0 1 1
1 0 0 0 0 0 1
1 0 0 1 0 0 1
1 0 0 1 1 1 1
1 1 1 1 1 1 1
I also have a dataset of fragments which are smaller than the pocket. Think of them as tetris pieces if you'd like, just in 3D. Similar to the game, the fragments can be rotated. Some examples:
0 1 1 1 1 1 0 1 1 0
1 1 0 0 0 1 0 1 1 0
I am looking to fill in the pocket with the fragments so the remaining empty space (0s) is as small as possible.
So far, I was thinking that I could decompose the pocket into smaller rectangular pockets, calculate the dimensions of these rectangular areas and of the fragments, and then match them based on these dimensions. Or maybe I could rotate the fragments so the values of 1 are closer to the "wall", and focus on boxes closer to the border. Next, I could look up the rectangular areas again and work towards filling in the core/inside of the pocket. To optimize the outcome, I could wrap these steps in a Monte Carlo tree search algorithm.
Obviously I don't expect a complete answer, but if you have any better ideas on how to approach this, I would be happy to hear it. Any references to similar space search algorithms/papers would also be appreciated.
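Not an answer to the packing strategy itself, but whatever search you choose (greedy, MCTS, ...) will need a cheap test of whether a fragment fits at a given position. A minimal 2D numpy sketch of that building block, with made-up names, might look like this:

    import numpy as np

    def fits(pocket, fragment, r, c):
        # True if the fragment, placed with its top-left corner at (r, c),
        # stays inside the grid and does not overlap any occupied voxel
        h, w = fragment.shape
        if r + h > pocket.shape[0] or c + w > pocket.shape[1]:
            return False
        return not np.any(pocket[r:r + h, c:c + w] & fragment)

    def place(pocket, fragment, r, c):
        # Return a copy of the pocket with the fragment marked as occupied
        new = pocket.copy()
        new[r:r + fragment.shape[0], c:c + fragment.shape[1]] |= fragment
        return new

    # In 2D the four rotations can be generated with np.rot90:
    # rotations = [np.rot90(fragment, k) for k in range(4)]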
I'm very new to learning Python. Though I understand the basics of looping, I am unable to understand how the output below is arrived at.
In particular, how do the three nested for loops combine to give this output? I find it impossible to work out the result on paper without referring to the IDE.
Code:
n = 4
a = 3
z = 2
for i in range(n):
    for j in range(a):
        for p in range(z):
            print(i, j, p)
Output is:
0 0 0
0 0 1
0 1 0
0 1 1
0 2 0
0 2 1
1 0 0
1 0 1
1 1 0
1 1 1
1 2 0
1 2 1
2 0 0
2 0 1
2 1 0
2 1 1
2 2 0
2 2 1
3 0 0
3 0 1
3 1 0
3 1 1
3 2 0
3 2 1
The first loop iterates four times.
The second loop iterates three times. However, since it is nested inside the first loop, its body actually runs twelve times (4 * 3).
The third loop iterates two times. However, since it is nested inside the first and second loops, its body (the print) actually runs twenty-four times (4 * 3 * 2).
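The same sequence of triples can be produced with itertools.product, which makes the "odometer" behaviour explicit: the rightmost index changes fastest and rolls over into the index to its left.

    from itertools import product

    # Equivalent to the three nested loops above: 4 * 3 * 2 = 24 lines
    for i, j, p in product(range(4), range(3), range(2)):
        print(i, j, p)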
I have a 2D numpy array of zeros and ones, and I want to find a set of rows such that every column has at least one 1. For example:
PROBLEM STATEMENT: Find the minimal set of rows that together give a 1 in every column.
INPUT1:
A B C D E
t1 0 0 0 1 1
t2 0 1 1 0 1
t3 0 1 1 0 1
t4 1 0 1 0 1
t5 1 0 1 0 1
t6 1 1 1 1 0
Here, there are multiple answers like (t6, t1), (t6, t2), (t6, t3), (t6, t4), (t6, t5).
INPUT2:
A B C D E
t1 0 0 0 1 1
t2 0 1 1 0 1
t3 0 1 1 0 1
t4 1 0 1 0 1
t5 1 0 1 0 1
t6 1 1 1 1 1
Answer: t6
I don't want to use a brute-force method, as my original matrix is very big. Is there a smart way to do this?
Naive solution, worst-case O(2^n)
This iterates over all possible choices of rows, starting with as few rows as possible, so typical cases finish in low-polynomial time.
from itertools import combinations
import numpy as np

def minimum_rows(arr):
    out_list = []
    rows = arr.shape[0]
    for x in range(1, rows + 1):
        for combo in combinations(range(rows), x):
            # do the selected rows together cover every column with a 1?
            if np.logical_or.reduce(arr[list(combo)]).all():
                out_list.append(combo)
        if out_list:
            # all covers of the current (minimal) size have been collected
            return out_list
I wrote this entirely on my phone without much testing, so it may or may not work. It employs no tricks, but is fairly fast. Note that it will be slower when the ratio of columns to rows is larger, or when the probability of a given element being True is smaller, as that makes it less likely for a small number of rows to meet the required condition, causing x to increase, which in turn increases the number of combinations iterated through.
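Since this problem is exactly set cover, which is NP-hard, an exact search on a very large matrix may be impractical. A common compromise is the greedy approximation: repeatedly pick the row that covers the most still-uncovered columns. It is not guaranteed to return the true minimum, but it runs in polynomial time; a rough sketch:

    import numpy as np

    def greedy_cover(arr):
        # Greedy set-cover heuristic: returns a list of row indices whose
        # union has a 1 in every column that can be covered at all
        uncovered = np.ones(arr.shape[1], dtype=bool)
        chosen = []
        while uncovered.any():
            gains = (arr.astype(bool) & uncovered).sum(axis=1)  # new columns per row
            best = int(gains.argmax())
            if gains[best] == 0:        # remaining columns cannot be covered
                break
            chosen.append(best)
            uncovered &= ~arr[best].astype(bool)
        return chosen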
I'm trying to write some Python code to determine the number of possible permutations of a matrix where neighbouring elements can only be adjacent integers. I also wish to know how many times each total set of numbers appears (by that I mean, matrices containing the same count of each integer, even if the elements are arranged differently).
Forgive me if I'm not being clear, or if my terminology isn't ideal! Consider a 5 x 5 zero matrix. This is an acceptable permutation, as all of the elements are adjacent to an identical number.
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
25 x 0, 0 x 1, 0 x 2
The elements within the matrix can be changed to 1 or 2. Changing any of the elements to 1 would also be an acceptable permutation, as the 1 would be surrounded by an adjacent integer, 0. For example, changing the central [2,2] element of the matrix:
0 0 0 0 0
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
24 x 0, 1 x 1, 0 x 2
However, changing the [2,2] element in the centre to a 2 would mean that all of the elements surrounding it would have to switch to 1, as 2 is not adjacent to 0.
0 0 0 0 0
0 1 1 1 0
0 1 2 1 0
0 1 1 1 0
0 0 0 0 0
16 x 0, 8 x 1, 1 x 2
I want to know how many permutations are possible from that zeroed 5x5 matrix by changing the elements to 1 and 2, whilst keeping neighbouring elements as adjacent integers. In other words, any permutations where 0 and 2 are adjacent are not allowed.
I also wish to know how many matrices contain a certain number of each integer. For example, both of the below matrices would be 24 x 0, 1 x 1, 0 x 2. Over every permutation, I'd like to know how many correspond to this frequency of integers.
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
Again, sorry if I'm not being clear or my nomenclature is poor! Thanks for your time - I'd really appreciate some help with this, and any words or guidance would be kindly received.
Thanks,
Sam
First, what you're calling a permutation isn't.
Secondly, your problem is that a naive brute-force solution would look at 3^25 = 847,288,609,443 possible combinations. (Somewhat fewer are valid, but probably still in the hundreds of billions.)
The right way to solve this is dynamic programming. For your basic problem, you need to calculate, for each row index i from 0 to 4 and for each possible row you could have there, how many valid partial matrices (rows 0 through i) end in that row.
Add up all of the possible answers in the last row, and you'll have your answer.
For the more detailed count, you additionally need to split the tally, row by row, by the cumulative count of each value so far. But otherwise it is the same.
The straightforward version should require tens of thousands of operations. The detailed version might require millions. But this is massively better than the hundreds of billions that the naive recursive version takes.
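A minimal sketch of the basic row-by-row dynamic program described above (counting only; the variable names are illustrative):

    from itertools import product

    N = 5
    VALUES = (0, 1, 2)

    # All rows whose horizontally adjacent entries differ by at most 1
    valid_rows = [row for row in product(VALUES, repeat=N)
                  if all(abs(a - b) <= 1 for a, b in zip(row, row[1:]))]

    def compatible(prev, row):
        # Two stacked rows are allowed if every vertical pair differs by at most 1
        return all(abs(a - b) <= 1 for a, b in zip(prev, row))

    # counts[row] = number of valid partial matrices ending in `row`
    counts = {row: 1 for row in valid_rows}
    for _ in range(N - 1):
        counts = {row: sum(c for prev, c in counts.items() if compatible(prev, row))
                  for row in valid_rows}

    print(sum(counts.values()))   # total number of valid 5 x 5 matrices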
Just look for some simpler rules:
1s can be distributed arbitrarily in the array, since the matrix so far only consists of 0s. 2s can likewise be distributed arbitrarily, since only neighbouring elements must be either 1 or 2.
Thus there are f(x) = n! / x! possibilities to distribute 1s and 2s over the matrix.
So the total number of possible permutations is 2 * sum(x = 1 .. n*n) f(x).
Calculating the number of possible permutations with a fixed number of 1s can easily be solved by simply calculating f(x).
The number of matrices with a fixed number of 2s and 1s is a bit trickier. Here you can only rely on the fact that all mirrored versions of the matrix yield the same number of 1s and 2s and are valid. Apart from using that fact, you can only brute-force search for correct solutions.