input_list = [(2282.405, -89.9415, 266.1414), (2276.534, -89.9526, 266.9091), (2276.534, -83.9573, 266.9091), (2282.405, -83.9464, 266.1414), (2288.276, -77.9407, 265.3738), (2294.148, -77.9301, 264.6062), (2294.148, -83.9247, 264.6062), (2288.276, -83.9356, 265.3738), (2282.405, -71.9563, 266.1414), (2288.276, -71.9459, 265.3738), (2282.405, -77.9514, 266.1414), (2288.276, -89.9304, 265.3738), (2276.534, -77.962, 266.9091), (2294.148, -71.9355, 264.6062), (2276.534, -71.9667, 266.9091), (2294.148, -89.9193, 264.6062)]
The requirement is to make 9 lists, each containing the coordinates of the 4 points that form a closed loop.
I tried some approaches based on finding the distances and then creating sets, but the issue is with the points that lie in the middle: 4 combinations come up for them.
The requirement is to get exactly 9 lists using Python, as shown in the image (each list will contain the coordinates of the 4 corners of one grid square).
The sequence should always be anticlockwise.
Requiring a "counterclockwise" answer in a 3D setting is tricky. A sheet has 2 sides, but I imagine the actual requirement is to always loop in the same order, so that if an edge is counted once as "a to b", it will be counted as "b to a" the next time.
Also, your example looks planar and "grid-like". If that is really the assignment, I'd suggest using numpy's eigenvector functions to reduce your coordinate system to 2D, then a transformation matrix to align your points to (0,0), (0,1), etc.
To find this transformation matrix, take one point at random, then find the closest one along the x dimension and the closest one along the y dimension.
I'd say your assignment is more of a math assignment.
Edit: I've given it a little more thought. I really think that the simplest answer is to make the most of the following facts:
You can easily find a corner point, and a corner square, in your input.
A 3x3 grid of unit squares can be transformed into your input, using the corner square as the reference for the first unit square; you just have to find the correct matrix for the matrix multiplication.
You can write the loops for the unit square by hand, use a matrix addition to translate them to any square in the grid, and project them all using the previous matrix.
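As a rough sketch of the projection idea (my own illustration, not a definitive implementation; it assumes the points really do form a regular 4x4 grid lying on a plane, with input_list being the data from the question):

import numpy as np

pts = np.array(input_list)                    # the 16 points from the question
centered = pts - pts.mean(axis=0)
# The two dominant principal axes (eigenvectors of the covariance,
# obtained here via SVD) span the plane of the grid.
_, _, vt = np.linalg.svd(centered)
uv = centered.dot(vt[:2].T)                   # 2D coordinates in that plane

# Snap each in-plane axis to indices 0..3 (assumes a regular 4x4 grid,
# so sorting along each axis yields 4 groups of 4 points).
idx = np.empty((len(pts), 2), dtype=int)
for axis in (0, 1):
    order = np.argsort(uv[:, axis])
    idx[order, axis] = np.repeat(np.arange(4), 4)

grid = {(i, j): tuple(pts[k]) for k, (i, j) in enumerate(idx)}

# One list of 4 corner points per unit square, always in the same winding
# order (reverse it if your orientation convention needs the other way).
squares = [[grid[(i, j)], grid[(i + 1, j)], grid[(i + 1, j + 1)], grid[(i, j + 1)]]
           for i in range(3) for j in range(3)]
print(len(squares))                           # 9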
Given a two-dimensional array T of size NxN, filled with various natural numbers (they do not have to be sorted in any way, as in the example below), my task is to write a program that transforms the array so that all elements lying above the main diagonal are larger than each element lying on the diagonal, and all elements lying below the main diagonal are smaller than each element on the diagonal.
For example:
T looks like this:
[2, 3, 5]
[7, 11, 13]
[17, 19, 23]
and one of the possible solutions is:
[13, 19, 23]
[3, 7, 17]
[5, 2, 11]
I have no clue how to do this. Would anyone have an idea what algorithm should be used here?
Let's say the matrix is NxN.
Put all N² values inside an array.
Sort the array with whatever method you prefer (ascending order).
In your final array, the first (N²-N)/2 values go below the diagonal, the next N values go on the diagonal, and the last (N²-N)/2 values go above the diagonal.
The following Python code should do the job (with mat[i][j] meaning row i, column j, the cells with i > j lie below the diagonal):

def rearrange(mat):
    N = len(mat)
    # Flatten the matrix and sort the values in ascending order.
    vec = sorted(mat[i][j] for i in range(N) for j in range(N))
    # The first (N*N-N)/2 values go below the diagonal, the next N
    # values go on the diagonal, and the rest go above the diagonal.
    p_below = 0
    p_diag = (N * N - N) // 2
    p_above = (N * N + N) // 2
    for i in range(N):
        for j in range(N):
            if i > j:                      # below the diagonal: smallest values
                mat[i][j] = vec[p_below]
                p_below += 1
            elif i < j:                    # above the diagonal: largest values
                mat[i][j] = vec[p_above]
                p_above += 1
            else:                          # on the diagonal: middle values
                mat[i][j] = vec[p_diag]
                p_diag += 1
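Trying it on the matrix from the question:

mat = [[2, 3, 5], [7, 11, 13], [17, 19, 23]]
rearrange(mat)
print(mat)   # [[7, 17, 19], [2, 11, 23], [3, 5, 13]], one valid arrangement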
The code can be heavily optimized by sorting the matrix directly, using a (quite complex) custom sort operator, so that it is sorted "in place". Technically, you'd define a bijection between the matrix indices and a partitioned set of indices representing the "below diagonal", "diagonal" and "above diagonal" positions.
But I'm unsure it can be considered an algorithm in itself, because it will be highly dependent on the language used AND on how you store your matrix internally (and on how iterators/indices are used). I could write one in C++, but I lack the knowledge to give you such an operator in Python.
Obviously, if you can't use a standard sorting function (because it only works on a plain array), you can write your own with the tricky comparison built into the algorithm.
For such small matrices, even a bubble sort would work properly, but implementing at least a quicksort would obviously be better.
Some points about optimizing:
First, we have the trivial bijection from a matrix coordinate [x][y] to a linear index i: i = x + y*N. The inverse is obviously y = floor(i/N) and x = i mod N. With it, you can traverse the matrix as a vector.
This is already what I do in the first part when initializing vec, BTW.
With matrix coordinates, it's easy:
Diagonal is all cells where x=y.
The "below" partition is everywhere x<y.
The "above" partition is everywhere x>y.
Look at the coordinates (x,y) in the 3x3 matrix below; it's quite evident once you see it.
0,0 1,0 2,0
0,1 1,1 2,1
0,2 1,2 2,2
We already know that the ordered vector will be composed of three parts: first the "below" partition, then the "diagonal" partition, then the "above" partition.
The next bijection is far trickier, since it requires either a piecewise linear function OR a look-up table. The first needs no additional memory but uses more CPU; the second uses as much memory as the matrix itself but needs less CPU.
As always, optimizing for speed often costs memory. If memory is scarce because you work with huge matrices, you'll prefer the function.
To keep things short, I'll explain only the "below" partition. In the vector, the first (N-1) elements are the ones belonging to the first column. Then come (N-2) elements for the 2nd column, (N-3) for the third, down to a single element for the (N-1)th column. You see the scheme: the number of elements plus the (zero-based) column index is always (N-1).
I won't write the full function, because it's quite complex and, honestly, it wouldn't help much with understanding. Simply know that converting from matrix indices to the vector index is "quite easy".
The opposite direction is trickier and more CPU-intensive, and it SHOULD use an (N-1)-element vector storing where each column starts within the vector, to GREATLY speed up the process. Conveniently, this vector can also be used (from end to beginning) for the "above" partition, so it won't burn too much memory.
Now you can sort your "vector" normally, simply by chaining the two bijections together with the vector index so that you address a matrix cell instead. As long as the sorting algorithm is stable (which is usually the case), it will work and will sort your matrix "in place", at the expense of a lot of arithmetic to "route" the linear indices to matrix indices.
Please note that, although we speak about bijections, we need ONLY the "vector to matrix" formulas. The "matrix to vector" direction is important (it MUST be a bijection!), but you won't actually use it, since you'll sort the (virtual) vector directly from 0 to N²-1.
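To make the "vector to matrix" direction concrete, here is a rough sketch (using naive linear scans where a real implementation would use the column-start look-up table described above), together with a naive selection sort driven entirely through it:

def vec_index_to_cell(k, N):
    # Map a virtual vector index k (0 <= k < N*N) to a (row, col) cell.
    n_below = (N * N - N) // 2
    if k < n_below:                   # "below" partition, column by column
        c = 0
        while k >= N - 1 - c:         # column c holds N-1-c cells
            k -= N - 1 - c
            c += 1
        return (c + 1 + k, c)         # rows c+1 .. N-1 of column c
    k -= n_below
    if k < N:                         # "diagonal" partition
        return (k, k)
    k -= N                            # "above" partition, row by row
    r = 0
    while k >= N - 1 - r:             # row r holds N-1-r cells
        k -= N - 1 - r
        r += 1
    return (r, r + 1 + k)             # columns r+1 .. N-1 of row r

def sort_matrix_in_place(mat):
    # Selection-sort the virtual vector, touching only matrix cells.
    N = len(mat)
    cells = [vec_index_to_cell(k, N) for k in range(N * N)]
    for a in range(N * N - 1):
        m = min(range(a, N * N), key=lambda k: mat[cells[k][0]][cells[k][1]])
        (r1, c1), (r2, c2) = cells[a], cells[m]
        mat[r1][c1], mat[r2][c2] = mat[r2][c2], mat[r1][c1]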
I'm wondering if there is a way to implement Conway's game of life without resorting to for loops, if statements and other control structures typical of programming.
It should be pretty easy to vectorize for loops, but how would you convert the checks on the neighborhood to a matrix operation?
The base logic is something like this:
def neighbors(cell, distance=1):
"""Return the neighbors of cell."""
x, y = cell
r = xrange(0 - distance, 1 + distance)
return ((x + i, y + j) # new cell offset from center
for i in r for j in r # iterate over range in 2d
if not i == j == 0) # exclude the center cell
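For reference, on the center of a 3x3 block this yields the usual 8 neighbors (same Python 2 style as above):

print list(neighbors((1, 1)))
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), (2, 2)]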
I hope this is not considered off-topic by the mods; I'm genuinely curious and am just starting out with CAs.
Cheers
The answer to your question is "yes, it is possible" (particularly the board updates from board n to board n+1).
I describe the process in detail here. The main technique for generating the neighborhood around a central cell involves using "strides" (the way numpy and other array-computation systems know how to walk across rows and columns of elements that are really stored in a flat 1D block of memory) in a custom fashion. I describe that process here.
One last comment: since the Game of Life iterates from state n to state n+1, while you could literally remove all imperative looping, it doesn't really make sense to take out that top-level control loop. So, keep one loop: for round in range(num_rounds): board.update(), where board.update doesn't use loops (except for some side calculations; again, you could remove those too, but it would make the program longer and less elegant).
To give you a concrete example (and to be more compatible with StackOverflow's answer requirements), here's some selective cutting and pasting from my posts to generate the central neighborhoods of a simple 4x4 board (apologies, this is Python 2 code, so you'll have to modify the prints a bit):
import numpy as np

board = np.arange(16).reshape((4,4))
print board
print board.shape
We want to pick out the four "complete" neighborhoods, centered around 5, 6, 9, and 10. Let's look at the neighborhood for 5. What is the shape of the result? 3x3. What are the strides? Well, walking across a row is still just walking one element at a time, and getting to the next row is still 4 elements at a time. These are the same as the strides of the original. The difference is that we don't take "everything"; we just take a selection. Let's see if that actually works:
from numpy.lib.stride_tricks import as_strided
neighbors = as_strided(board, shape=(3,3), strides=board.strides)
print neighbors
OK, nice. Now, if we want all four neighborhoods, what is the output shape? We have several 3x3 results. How many? In this case, 2x2 of them (one for each "center" cell). This gives a shape of (2,2,3,3): the neighborhoods are the inner dimensions, and the organization of the neighborhoods is the outer dimensions.
So, our strides (in terms of elements) end up being (4,1) within one neighborhood and (4,1) for progressing from neighborhood to neighborhood. The total stride (element-wise) is (4,1,4,1). But the component strides (our outer two dimensions) are the same as the strides of the board. This means that our neighborhood strides are board.strides + board.strides.
print board.strides + board.strides
neighborhoods = as_strided(board,
shape=(2,2,3,3),
strides=board.strides+board.strides)
print neighborhoods[0,0]
print neighborhoods[-1, -1]
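Putting the pieces together, a complete no-loop update could look like the following sketch (my own assembly, not a quote from the posts above; it pads the board with a dead border so every cell gets a complete 3x3 window):

import numpy as np
from numpy.lib.stride_tricks import as_strided

def life_step(board):
    # One Game of Life generation with no explicit Python loops.
    padded = np.pad(board, 1, mode='constant')      # dead cells around the rim
    n, m = board.shape
    windows = as_strided(padded, shape=(n, m, 3, 3),
                         strides=padded.strides + padded.strides)
    live = windows.sum(axis=(2, 3)) - board         # 8-neighbor live counts
    return (((board == 1) & ((live == 2) | (live == 3)))
            | ((board == 0) & (live == 3))).astype(board.dtype)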
I am having a small issue understanding indexing in Numpy arrays. I think a simplified example is best to get an idea of what I am trying to do.
So first I create an array of zeros of the size I want to fill:
x = range(0,10,2)
y = range(0,10,2)
a = zeros(len(x),len(y))
so that will give me a 5x5 array of zeros. Now, I want to fill the array with a rather complicated function that I can't get to work with grids. My problem is that I'd like to iterate like this:
for i in xrange(0,10,2):
for j in xrange(0,10,2):
.........
"do function and fill the array corresponding to (i,j)"
However, right now what I would like to be a[2,10] is the function of 2 and 10, but instead the value for the function of 2 and 10 ends up at a[1,4] or wherever.
Again, maybe this is elementary; I've gone over the docs and find myself at a loss.
EDIT:
In the end I vectorized as much as possible and wrote the simulation loops that I could not vectorize in Cython. I also used joblib to parallelize the operation. I stored the results in a list because an array was not filling correctly when running in parallel; I then used itertools to split the list into individual results and pandas to organize them.
Thank you for all the help
Some tips for you to get things done while keeping good performance:
- avoid Python `for` loops
- create a function that can deal with vectorized inputs
Example:
def f(xs, ys):
    return xs**2 + ys**2 + xs*ys
where you can pass xs and ys as arrays and the operation will be done element-wise:
import numpy as np

xs = np.random.random((100,200))
ys = np.random.random((100,200))
f(xs,ys)
You should read more about numpy broadcasting to get a better understanding of how operations on arrays work; this will help you design a function that handles the arrays properly.
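As a small illustration of broadcasting (using the step-2 ranges from the question; the formula is only a placeholder for your own function):

import numpy as np

xs = np.arange(0, 10, 2)[:, None]   # shape (5, 1), a column
ys = np.arange(0, 10, 2)[None, :]   # shape (1, 5), a row
a = xs**2 + ys**2 + xs*ys           # broadcasts to the full (5, 5) grid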
First, you're missing some parentheses in the call to zeros; the first argument should be a tuple:
a = zeros((len(x),len(y)))
Then, the corresponding indices for your table are i/2 and j/2:
for i in xrange(0,10,2):
for j in xrange(0,10,2):
# do function and fill the array corresponding to (i,j)
a[i/2, j/2] = 1
But I second Saullo Castro: you should try to vectorize your computations, for instance as sketched below.
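A minimal sketch of that (np.fromfunction hands your function whole index arrays, so the coordinate mapping happens without any explicit loop; the formula is only a placeholder for your own function):

import numpy as np

# i and j are whole (5, 5) index arrays; map them back to coordinates 2*i, 2*j.
a = np.fromfunction(lambda i, j: (2*i)**2 + (2*j)**2 + (2*i)*(2*j), (5, 5), dtype=int)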
This may be more of an 'approach' or conceptual question.
Basically, I have a multi-dimensional Python list like so:
my_list = [[0,1,1,1,0,1], [1,1,1,0,0,1], [1,1,0,0,0,1], [1,1,1,1,1,1]]
What I have to do is iterate through the array and compare each element with those directly surrounding it, as though the list were laid out as a matrix.
For instance, given the first element of the first row, my_list[0][0], I need to know the value of my_list[0][1], my_list[1][0] and my_list[1][1]. The values of the 'surrounding' elements will determine how the current element should be operated on. Of course, for an element in the heart of the array, 8 comparisons will be necessary.
Now, I know I could simply iterate through the array and compare against the indexed values, as above. I was curious whether there is a more efficient way that limits the amount of iteration required. Should I iterate through the array as-is, or compare only the values to either side and then transpose the array and run it again? That, however, would ignore the diagonal values. And should I store the results of the element lookups, so I don't keep determining the value of the same element multiple times?
I suspect this may have a fundamental approach in Computer Science, and I am eager to get feedback on the best approach using Python as opposed to looking for a specific answer to my problem.
You may get faster, and possibly even simpler, code by using numpy, or other alternatives (see below for details). But from a theoretical point of view, in terms of algorithmic complexity, the best you can get is O(N*M), and you can do that with your design (if I understand it correctly). For example:
def neighbors(matrix, row, col):
for i in row-1, row, row+1:
if i < 0 or i == len(matrix): continue
for j in col-1, col, col+1:
if j < 0 or j == len(matrix[i]): continue
if i == row and j == col: continue
yield matrix[i][j]
matrix = [[0,1,1,1,0,1], [1,1,1,0,0,1], [1,1,0,0,0,1], [1,1,1,1,1,1]]
for i, row in enumerate(matrix):
for j, cell in enumerate(row):
for neighbor in neighbors(matrix, i, j):
do_stuff(cell, neighbor)
This takes N * M * 8 steps (actually, a bit fewer than that, because many cells have fewer than 8 neighbors). And algorithmically, there's no way to do better than O(N * M). So, you're done.
(In some cases, you can make things simpler, with no significant change either way in performance, by thinking in terms of iterator transformations. For example, you can easily create a grouper over adjacent triplets from a list a by properly zipping a, a[1:], and a[2:], as in the snippet below, and you can extend this to adjacent 2-dimensional nonets. But I think in this case it would just make your code more complicated than writing an explicit neighbors iterator and explicit for loops over the matrix.)
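For one row, the adjacent-triplet grouper looks like this (a tiny illustration, not part of the approach recommended above):

a = [0, 1, 1, 1, 0, 1]
triplets = list(zip(a, a[1:], a[2:]))
# [(0, 1, 1), (1, 1, 1), (1, 1, 0), (1, 0, 1)]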
However, practically, you can get a whole lot faster, in various ways. For example:
Using numpy, you may get an order of magnitude or so speedup. Iterating a tight loop and doing simple arithmetic is one of the things Python is particularly slow at, and numpy can do it in C (or Fortran) instead.
Using your favorite GPGPU library, you can explicitly vectorize your operations.
Using multiprocessing, you can break the matrix up into pieces and perform multiple pieces in parallel on separate cores (or even separate machines).
Of course for a single 4x6 matrix, none of these are worth doing… except possibly for numpy, which may make your code simpler as well as faster, as long as you can express your operations naturally in matrix/broadcast terms.
In fact, even if you can't easily express things that way, just using numpy to store the matrix may make things a little simpler (and save some memory, if that matters). For example, numpy can let you access a single column from a matrix naturally, while in pure Python, you need to write something like [row[col] for row in matrix].
So, how would you tackle this with numpy?
First, you should read over numpy.matrix and ufunc (or, better, some higher-level tutorial, but I don't have one to recommend) before going too much further.
Anyway, it depends on what you're doing with each set of neighbors, but there are three basic ideas.
First, if you can convert your operation into simple matrix math, that's always easiest.
If not, you can create 8 "neighbor matrices" just by shifting the matrix in each direction, then perform simple operations against each neighbor. For some cases, it may be easier to start with an N+2 x N+2 matrix with suitable "empty" values (usually 0 or nan) in the outer rim. Alternatively, you can shift the matrix over and fill in empty values. Or, for some operations, you don't need an identical-sized matrix, so you can just crop the matrix to create a neighbor. It really depends on what operations you want to do.
For example, taking your input as a fixed 4x6 board for the Game of Life:
def neighbors(matrix):
for i in -1, 0, 1:
for j in -1, 0, 1:
if i == 0 and j == 0: continue
yield np.roll(np.roll(matrix, i, 0), j, 1)
matrix = np.matrix([[0,0,0,0,0,0,0,0],
[0,0,1,1,1,0,1,0],
[0,1,1,1,0,0,1,0],
[0,1,1,0,0,0,1,0],
[0,1,1,1,1,1,1,0],
[0,0,0,0,0,0,0,0]])
while True:
livecount = sum(neighbors(matrix))
matrix = (matrix & (livecount==2)) | (livecount==3)
(Note that this isn't the best way to solve this problem, but I think it's relatively easy to understand, and likely to illuminate whatever your actual problem is.)