I have a 3-dimensional numpy array with binary (0 and 1) values representing a voxelized space. A value of 1 means that the voxel is occupied; 0 means it is empty. For simplicity I will describe the problem with 2D data. An example of such a pocket could look like this:
1 1 1 1 1 1 1
1 1 0 0 0 1 1
1 0 0 0 0 0 1
1 0 0 1 0 0 1
1 0 0 1 1 1 1
1 1 1 1 1 1 1
I also have a dataset of fragments which are smaller than the pocket. Think of them as Tetris pieces if you'd like, just in 3D. Similar to the game, the fragments can be rotated. Some examples (shown side by side):
0 1 1 1 1 1 0 1 1 0
1 1 0 0 0 1 0 1 1 0
I am looking to fill in the pocket with the fragments so the remaining empty space (0s) is as small as possible.
So far, I was thinking that I could decompose the pocket into smaller rectangular pockets, calculate the dimensions of these rectangular areas and of the fragments, and then match them based on these dimensions. Alternatively, I could rotate the fragments so the values of 1 sit closer to the "wall", focus on boxes closer to the border first, then look up the rectangular areas again and work towards filling in the core/inside of the pocket. To optimize the outcome, I could wrap these steps in a Monte Carlo Tree Search, as sketched below.
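Here is a minimal sketch of the placement primitive any of these strategies would need, assuming numpy arrays where 1 is occupied and 0 is empty (the helper names fits and place are purely illustrative):
import numpy as np

def fits(pocket, fragment, pos):
    """Check whether fragment can be stamped at pos without leaving
    the pocket bounds or overlapping occupied voxels."""
    if any(p < 0 for p in pos):
        return False
    slices = tuple(slice(p, p + s) for p, s in zip(pos, fragment.shape))
    region = pocket[slices]
    if region.shape != fragment.shape:  # fragment sticks out of bounds
        return False
    return not np.any(region & fragment)  # fragment 1s must land on empty voxels

def place(pocket, fragment, pos):
    """Return a copy of the pocket with the fragment stamped in."""
    new = pocket.copy()
    slices = tuple(slice(p, p + s) for p, s in zip(pos, fragment.shape))
    new[slices] |= fragment
    return new
Rotations could then be enumerated with np.rot90 over the relevant axis pairs, and the search would score states by the remaining number of 0s.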
Obviously I don't expect a complete answer, but if you have any better ideas on how to approach this, I would be happy to hear it. Any references to similar space search algorithms/papers would also be appreciated.
I am very new to Python and am attempting to create a board game of sorts, for which I need a 2D array of dimensions 4 x 10.
I have messed around for a while trying to create the array, but am stumped on the best way to do it so that in future I will be able to populate the array how I wish.
The main problem lies in presentation: by default, Python prints a list as [x, y, z].
I want to create something more visually appealing, for example:
-------------------
| x | y | z |
-------------------
Sorry if this seems a stupid question; I am just clueless on how to go about this, and any help would be much appreciated.
So you have a couple of ways of doing this. If you're not using any libraries, you can make it a list of lists.
a = [
    [0, 3, 0, ...],  # first row
    [1, 4, 0, ...],  # second row
    [2, 5, 0, ...],  # third row
    ...,
]
a[0][0] = 0
a[1][0] = 1
# a[row][col] = value
Another way of doing this is by using the numpy library and making a 2D array; a minimal example is below. This way you can do a bunch of fast and easy calculations for frames, fading transitions, etc.
https://numpy.org/doc/stable/reference/generated/numpy.array.html
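A sketch of that approach (the variable name board is just illustrative):
import numpy as np

board = np.zeros((4, 10), dtype=int)  # 4 x 10 board of zeros
board[0, 0] = 3                       # board[row, col] = value
print(board)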
Just for fun ... here's something I quickly knocked together to help get you started.
Now, I'm intentionally going to leave this without further explanation, and let you do a bit of research on your own. :-)
Setup:
import numpy as np
# Header.
h = np.arange(10).astype(str)
# Border.
b = ['-']*19
# Values.
m = np.full([4, 10], '0')
Print the board:
print(' '.join(h))
print(''.join(b))
for i in m:
    print(' '.join(i))
print(''.join(b))
print(' '.join(h))
Output:
0 1 2 3 4 5 6 7 8 9
-------------------
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
-------------------
0 1 2 3 4 5 6 7 8 9
I have a nested structure containing three lists, which are being filled by a for-loop; the fill is controlled by if-conditions.
After the first iteration it could look like the following example:
a = [[1,2,0,0,0,0],[0,0,4,5,0,0],[0,0,0,0,6,7]]
which, by condition, are not overlapping. After the second iteration, new values are appended to the corresponding nested lists. In order to keep the lists the same length, I append zeros in each run.
As soon as I set a condition so that two lists overlap, I get a gap the "size" of the desired overlap after the third iteration, even though the values should append directly to the corresponding list. Additionally, if I set several overlaps (e.g. one per iteration) they add up, so e.g. for three overlaps each of size two I get a gap of six.
Below you see what I mean:
w/o overlap w overlap (returns) w overlap (should return)
1 0 0 1 0 0 1 0 0
1 0 0 1 0 0 1 0 0
0 1 0 0 1 0 0 1 0
0 1 0 0 1 1 0 1 1
0 1 0 0 1 1 0 1 1
0 0 1 0 0 1 0 0 1
0 0 1 0 0 0 1 0 0
0 0 1 0 0 0 1 0 0
1 0 0 1 0 0
1 0 0 1 0 0
I have created a Pyfiddle here with the code I am using (it is the shortest I could create). It should give you the idea of what I am trying to achieve and what I have done so far.
Further, I have used this post to wrap my head around it, but it does not seem to apply to my problem.
EDIT: I think I have narrowed down the problem. It seems that, due to the overlap, the relevant list is being "pulled up" by the size of the overlap without the sizes of the remaining lists being adjusted by the offset, so the difference is filled with zeros.
EDIT2:
My idea is to add the overlap/offset before the list is filled, depending on the size of its predecessor. Since the start index depends on the size of the predecessor, it could be calculated using the difference of the predecessor size and the gap.
Basically, in the parent for-loop for i in range(len(data_j)) I would add:
overlap = len(data_j[i-1]['axis']) - offset
Unfortunately, another problem occurred during the process, which you can find here: Connect string value to a corresponding variable name.
I have solved it using the steps from the other post regarding this issue (see here: Connect string value to a corresponding variable name).
I have created another fiddle with the solution so you can compare it with the original fiddle to see what I did.
New Fiddle
Basically, I add the offset by summing up the size of the current predecessor list and the offset value (which can be negative as well, to create an overlap). This sum is assigned to n_offset. Then another problem occurred with .append.
As soon as all lists are filled and you need to append more values to one of these lists, the gap occurs again. This is caused by the for-loop appending the zeros: its range is n_offset, and since it takes the size of the predecessor list, it just adds an amount of zeros the size of the first filling of the same list. That's why you have to subtract the length of the list from n_offset, as in the sketch below.
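A condensed version of that padding step (the helper name pad_to_offset is purely illustrative):
def pad_to_offset(target, predecessor, offset):
    """Pad target with zeros so its next appended value lands at
    n_offset = len(predecessor) + offset (negative offset = overlap)."""
    n_offset = len(predecessor) + offset
    # Subtract the current length so repeated appends to an
    # already-filled list do not reintroduce the gap.
    target.extend([0] * max(0, n_offset - len(target)))

# e.g. start the third list with an overlap of two:
# pad_to_offset(a[2], a[1], -2)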
I have used convolution2d to generate some statistics on conditions of local patterns. To be complete: I'm working with images, and the value 0.5 is my "gray-screen"; unfortunately, I cannot use masks before this step (dependence on some other packages). I want to add new objects to my image, but each should overlap at least 75% with non-gray-screen pixels. Let's assume the new object is square: I mask the image into gray-screen versus the rest, then do a 2D convolution with an n-by-n matrix filled with 1s, so I get the number of gray-screen pixels in each patch. This all works, so I have a matrix of suitable places to put my new object. How do I efficiently pick a random one from this matrix?
Here is a small example with a 5x5 image and a 2x2 convolution matrix, where I want a random coordinate in my last matrix at a position holding a 1 (because there is at most one 0.5 in that patch); a code sketch of these steps follows the matrices.
Image:
1 0.5 0.5 0 1
0.5 0.5 0 1 1
0.5 0.5 1 1 0.5
0.5 1 0 0 1
1 1 0 0 1
Convolution matrix:
1 1
1 1
Convoluted image:
3 3 1 0
4 2 0 1
3 1 0 1
1 0 0 0
Conditioned on <= 1:
0 0 1 1
0 0 1 1
0 1 1 1
1 1 1 1
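For reference, here is a sketch of how I get from the image to the conditioned matrix (assuming scipy.signal.convolve2d; the variable names are just illustrative):
import numpy as np
from scipy.signal import convolve2d

image = np.array([[1.0, 0.5, 0.5, 0.0, 1.0],
                  [0.5, 0.5, 0.0, 1.0, 1.0],
                  [0.5, 0.5, 1.0, 1.0, 0.5],
                  [0.5, 1.0, 0.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0, 0.0, 1.0]])

mask = (image == 0.5).astype(int)                          # 1 where gray-screen
kernel = np.ones((2, 2), dtype=int)                        # the 2x2 convolution matrix
convoluted_image = convolve2d(mask, kernel, mode='valid')  # gray count per patch
suitable = (convoluted_image <= 1).astype(int)             # conditioned on <= 1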
How do I get a uniformly distributed coordinate of the 1s efficiently?
np.where and np.random.randint should do the trick:
import numpy as np

# grab the indices of the suitable positions
x, y = np.where(convoluted_image <= 1)
# choose one index uniformly at random
i = np.random.randint(len(x))
random_pos = [x[i], y[i]]
I'm trying to write Python code to determine the number of possible permutations of a matrix where neighbouring elements can only be adjacent integers. I also wish to know how many times each total set of numbers appears (by that I mean the same counts of each integer across matrices, regardless of arrangement).
Forgive me if I'm not being clear, or if my terminology isn't ideal! Consider a 5 x 5 zero matrix. This is an acceptable permutation, as all of the elements are adjacent to an identical number.
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
25 x 0, 0 x 1, 0 x 2
The elements within the matrix can be changed to 1 or 2. Changing any one of the elements to 1 would also give an acceptable permutation, as the 1 would be surrounded by an adjacent integer, 0. For example, changing the central [2,2] element of the matrix:
0 0 0 0 0
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
24 x 0, 1 x 1, 0 x 2
However, changing the [2,2] element in the centre to a 2 would mean that all of the elements surrounding it would have to switch to 1, as 2 is not adjacent to 0.
0 0 0 0 0
0 1 1 1 0
0 1 2 1 0
0 1 1 1 0
0 0 0 0 0
16 x 0, 8 x 1, 1 x 2
I want to know how many permutations are possible from that zeroed 5x5 matrix by changing the elements to 1 and 2, whilst keeping neighbouring elements as adjacent integers. In other words, any permutations where 0 and 2 are adjacent are not allowed.
I also wish to know how many matrices contain a certain number of each integer. For example, both of the matrices below (shown side by side) would be 24 x 0, 1 x 1, 0 x 2. Over every permutation, I'd like to know how many correspond to this frequency of integers.
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
Again, sorry if I'm not being clear or my nomenclature is poor! Thanks for your time - I'd really appreciate some help with this, and any words or guidance would be kindly received.
Thanks,
Sam
First, what you're calling a permutation isn't.
Secondly, your problem is that a naive brute-force solution would look at 3^25 = 847,288,609,443 possible combinations. (Somewhat fewer are actually valid, but probably still in the hundreds of billions.)
The right way to solve this is called dynamic programming. What you need to do for your basic problem is calculate, for each row index i from 0 to 4 and for each possible row that could appear there, how many valid matrices end in that row.
Add up all of the counts for the last row, and you'll have your answer.
For the more detailed count, you additionally need to split the count per row by the cumulative tally of each value so far. But otherwise it is the same.
The straightforward version should require tens of thousands of operations. The detailed version might require millions. But this will be massively better than the hundreds of billions that the naive recursive version takes. A sketch of the basic version follows.
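Assuming the rule means that horizontally and vertically adjacent cells may differ by at most 1:
from itertools import product

n = 5
VALUES = (0, 1, 2)

def smooth(seq_a, seq_b):
    # paired cells may differ by at most 1
    return all(abs(a - b) <= 1 for a, b in zip(seq_a, seq_b))

# all rows that satisfy the horizontal constraint internally
rows = [r for r in product(VALUES, repeat=n) if smooth(r, r[1:])]

# counts[row] = number of valid partial matrices ending in that row
counts = {r: 1 for r in rows}
for _ in range(n - 1):
    counts = {r2: sum(c for r1, c in counts.items() if smooth(r1, r2))
              for r2 in rows}

print(sum(counts.values()))  # total number of valid 5x5 matrices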
Just look for some simpler rules:
1s can be distributed arbitrarily in the array as long as the matrix otherwise consists only of 0s. 2s can likewise be distributed arbitrarily as long as the rest consists of 1s, since then all neighbouring elements are either 1 or 2.
Thus there are f(x) = C(n², x) = (n²)! / (x!(n² − x)!) possibilities to distribute x such values over the n x n matrix.
So the total number of these permutations is 2 * sum(x = 1 .. n²) f(x).
Calculating the number of possible permutations with a fixed number of 1s can easily be solved by simply calculating f(x), as the quick check below shows.
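Using the binomial form of f above (this also reproduces the single-1 example from the question):
from math import comb

n = 5

def f(x):
    # choose x cells for the 1s out of the n*n cells; the rest stay 0
    return comb(n * n, x)

print(f(1))  # 25, one matrix per possible position of the single 1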
The number of matrices with a fixed number of both 2s and 1s is a bit more tricky. Here you can only rely on the fact that all mirrored versions of a matrix yield the same numbers of 1s and 2s and are equally valid. Apart from using that fact, you can only brute-force search for correct solutions.
I'm trying to write a function that will check for undirected percolation in a numpy array. In this case, undirected percolation occurs when there is some kind of path that the liquid can follow (the liquid can travel up, down, and sideways, but not diagonally). Below is an example of an array that could be given to us.
1 0 1 1 0
1 0 0 0 1
1 0 1 0 0
1 1 1 0 0
1 0 1 0 1
The result of percolation in this scenario is below.
1 0 1 1 0
1 0 0 0 0
1 0 1 0 0
1 1 1 0 0
1 0 1 0 0
In the scenario above, the liquid can follow a path, and everything currently marked with a 1 would refill except for the 1s in positions [1,4] and [4,4].
The function I'm trying to write starts at the top of the array and checks to see if it's a 1. If it's a 1, it writes it to a new array. What I want it to do next is check the positions above, below, left, and right of the 1 that has just been assigned.
What I currently have is below.
def flow_from(sites, full, i, j):
    n = len(sites)
    if j >= 0 and j < n and i >= 0 and i < n:  # check that the position is in array bounds
        if sites[i, j] == 0:
            full[i, j] = 0
        else:
            full[i, j] = 1
            flow_from(sites, full, i, j + 1)
            flow_from(sites, full, i, j - 1)
            flow_from(sites, full, i + 1, j)
            flow_from(sites, full, i - 1, j)
In this case, sites is the original matrix (the first one shown above), and full is the matrix being filled with its flow result (the second matrix shown). i and j are used to iterate through.
Whenever I run this, I get an error that says "RuntimeError: maximum recursion depth exceeded in comparison". I looked into this and I don't think I need to raise the recursion limit, but I have a feeling there's something blatantly obvious in my code that I just can't see. Any pointers?
Forget about your code block: this is a known problem with a known solution in the scipy library. Adapting the code from this answer, and assuming your data is in an array named A:
import numpy as np
from scipy.ndimage import measurements

# identify the clusters and their sizes
lw, num = measurements.label(A)
area = measurements.sum(A, lw, index=np.arange(lw.max() + 1))
print(A)
print(lw)
print(area)
This gives:
[[1 0 1 1 0]
[1 0 0 0 1]
[1 0 1 0 0]
[1 1 1 0 0]
[1 0 1 0 1]]
[[1 0 2 2 0]
[1 0 0 0 3]
[1 0 1 0 0]
[1 1 1 0 0]
[1 0 1 0 4]]
[ 0. 9. 2. 1. 1.]
That is, it has labeled all the "clusters" for you and identified their sizes! From here you can see that the clusters labeled 3 and 4 have size 1, which is what you want to filter away. This is a much more powerful approach, because now you can filter for any size, as in the sketch below.
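One possible follow-up that keeps only the clusters of size 2 or more (the threshold is just an example):
keep = area >= 2          # one boolean flag per cluster label
keep[0] = False           # label 0 is the empty background
filtered = np.where(keep[lw], A, 0)
print(filtered)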