CFD boundary conditions - python

I am trying to understand a piece of code from a paper about fluid simulations for games. I am looking at the way the boundary conditions are solved. Since I have no knowledge of C++, I am having additional difficulties.
From what I understand:
IX(i, j) represents a 2d grid cell situated at i in the x-direction, j in the y-direction
u[IX(i, j)] is then the velocity in the (i, j) cell
The following macro is used for IX(i, j):
#define IX(i, j) ((i) + (N + 2) * (j))

I won’t attempt to wade through the whole paper, but I can explain the
syntax and make some educated guesses about what’s going on.
#define IX(i,j) ((i)+(N+2)*(j))
This looks to me like they’re transforming two-dimensional coordinates
i,j into a one-dimensional array index. j is the row number and i
is the column number, which jibes with your description, and the total
number of columns is N+2.
0          1          2          ...  (N+2)-1
(N+2)+0    (N+2)+1    (N+2)+2    ...  2(N+2)-1
...
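A direct Python equivalent of the macro, if it helps to see it as a function (here N is just a parameter):

def IX(i, j, N):
    # column i, row j, flattened into a 1-D index; each row holds N + 2 cells
    return i + (N + 2) * j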
Then we have this:
x[IX(0,i)] = b==1 ? -x[IX(1,i)] : x[IX(1,i)]
In C, a ? b : c means "if a, then b, else c". It's an expression whose
value is either b or c, depending on whether a is true or not.
It's called the ternary (conditional) operator.
Python has its own ternary operator, with the operands in a different
order:
b if a else c
So x[IX(0,i)] = b==1 ? -x[IX(1,i)] : x[IX(1,i)] is equivalent to
saying:
if (b == 1)
    x[IX(0,i)] = -x[IX(1,i)];
else
    x[IX(0,i)] = x[IX(1,i)];
So, in row i, the new value at column 0 is the value at column 1,
possibly negated. Looking at page 10, this seems to have something to do
with the boundaries. This is at the left edge, so we’re setting it to
the value at one cell inward, or the negation of that, depending on b.
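In Python, the same left-edge update might look like the sketch below. This is a minimal sketch under assumptions: x is a flat list of length (N+2)*(N+2), the loop runs over the interior rows 1..N, and b == 1 means "negate at this wall". The names are mine, not the paper's:

N = 4  # interior grid size; hypothetical value for the example

def IX(i, j):
    # same flattening as the C macro
    return i + (N + 2) * j

def set_left_boundary(x, b):
    # for each interior row i, copy the value one cell inward (column 1)
    # into the boundary cell (column 0), negating it when b == 1
    for i in range(1, N + 1):
        x[IX(0, i)] = -x[IX(1, i)] if b == 1 else x[IX(1, i)]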
Hope this clears things up somewhat.

Related

generate random directed fully-accessible adjacent probability matrix

Given V nodes and E connections as parameters, how do I generate a random directed fully-connected adjacency probability matrix, where all the connection weights fanning out of a node sum to 1?
The idea is, after I pick a random starting node, to do a random walk according to the probabilities, thus generating similar random-structured sequences.
Although I prefer an adjacency matrix, a graph is OK too.
Of course the fan-out connections can be one or many.
Loops are OK, just not self-loops.
I can do the walk using np.random.choice(nodes, p=prob).
Now that Jerome mentions it, it seems I was mistaken: I don't want fully-connected, but a closed loop where there are no islands of sub-graphs, i.e. all nodes are accessible via others.
Sorry, I don't know what this type of graph is called?
here is my complex solution ;(
def gen_adjmx(self):
    passx = 1
    c = 0  # connections so far
    # until enough conns are generated
    while c < self.nconns:
        # loop the rows
        for sym in range(self.nsyms):
            if c >= self.nconns: break
            if passx == 1:  # guarantees at least one connection
                self.adj[sym, randint(self.nsyms)] = randint(100)
            else:
                if randint(2) == 1:  # maybe a conn?
                    col = randint(self.nsyms)
                    # already exists
                    if self.adj[sym, col] > 0: continue
                    self.adj[sym, col] = randint(100)
            c += 1
        passx += 1
    self.adj /= self.adj.sum(axis=0)
You can simply create a random matrix and normalize the rows so that the sum is 1:
v = np.random.rand(n, n)
v /= v.sum(axis=1, keepdims=True)
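(The keepdims=True matters: without it, broadcasting divides along the wrong axis.) A quick check that each row now sums to 1:

>>> import numpy as np
>>> v = np.random.rand(3, 3)
>>> v /= v.sum(axis=1, keepdims=True)
>>> v.sum(axis=1)
array([1., 1., 1.])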
You mentioned that you want a graph which doesn't have any islands. I guess what you mean is that the adjacency matrix should be irreducible, i.e. the associated graph doesn't have any disconnected components.
One way to generate a random graph with the required property is to generate a random graph and then see if it has the property; throw it out and try again if it doesn't, otherwise keep it.
Here's a sketch of a solution with that in mind.
(1) generate a matrix n_vertices by n_vertices, which contains n_edges elements which are 1, and the rest are 0. This is a random adjacency matrix.
(2) test the adjacency matrix to see if it's irreducible. If so, keep it, otherwise go back to step 1.
I'm sure you can implement that in Python. I tried a proof of concept in Maxima (https://maxima.sourceforge.io), since it's convenient in some ways. There are probably ways to go about it which directly construct an irreducible matrix.
I implemented the irreducibility test for a matrix A as whether sum(A^^k, k, 0, n) has any 0 elements, according to: https://math.stackexchange.com/a/1703650 That test becomes more and more expensive as the number of vertices grows, and a lower ratio of edges to vertices increases the probability that you'll have to repeat steps 1 and 2. Whether that's tolerable for you depends on the typical number of vertices and edges you're working with.
random_irreducible (n_vertices, n_edges) :=
  block ([A, n: 1],
         while not irreducible (A: random_adjacency (n_vertices, n_edges))
             do n: n + 1,
         [A, n]);

random_adjacency (n_vertices, n_edges) :=
  block ([list_01, list_01_permuted, get_element],
         list_01: append (makelist (1, n_edges), makelist (0, n_vertices^2 - n_edges)),
         list_01_permuted: random_permutation (list_01),
         get_element: lambda ([i, j], list_01_permuted[1 + (i - 1) + (j - 1)*n_vertices]),
         genmatrix (get_element, n_vertices, n_vertices));

irreducible (A) :=
  is (member (0, flatten (args (sum (A^^k, k, 0, length(A))))) = false);
A couple of things: one is that I left out the part about normalizing the edge weights so they sum to 1; you'll have to put that in to get a transition matrix and not just an adjacency matrix. The other is that I didn't prevent elements on the diagonal, i.e., you can stay on a vertex instead of always going to another one. If that's important, you'll have to deal with that too.
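Since the question is in Python, here is a rough NumPy translation of the same generate-and-test idea. It's a sketch under assumptions, not the Maxima code ported line by line: the function names are mine, self-loops are excluded, and row normalization is included so the result is a transition matrix. The irreducibility test follows the same sum-of-powers criterion as above:

import numpy as np

def random_adjacency(n_vertices, n_edges):
    # scatter exactly n_edges ones over the off-diagonal positions (no self-loops)
    positions = [(i, j) for i in range(n_vertices)
                 for j in range(n_vertices) if i != j]
    chosen = np.random.choice(len(positions), size=n_edges, replace=False)
    A = np.zeros((n_vertices, n_vertices), dtype=np.int64)
    for k in chosen:
        A[positions[k]] = 1
    return A

def irreducible(A):
    # irreducible iff sum(A^k, k = 0..n) has no zero entries
    n = len(A)
    total = np.eye(n, dtype=np.int64)
    power = np.eye(n, dtype=np.int64)
    for _ in range(n):
        power = (power @ A > 0).astype(np.int64)  # keep entries 0/1 to avoid overflow
        total += power
    return bool((total > 0).all())

def random_transition_matrix(n_vertices, n_edges):
    # generate-and-test: retry until irreducible, then weight edges and normalize rows
    while True:
        A = random_adjacency(n_vertices, n_edges)
        if irreducible(A):
            W = A * np.random.rand(n_vertices, n_vertices)
            return W / W.sum(axis=1, keepdims=True)

# the random walk from the question, using the resulting matrix
P = random_transition_matrix(5, 12)
node = np.random.randint(5)
walk = [node]
for _ in range(10):
    node = np.random.choice(5, p=P[node])
    walk.append(node)
print(walk)

Note that irreducibility guarantees every vertex has at least one outgoing edge, so the row sums are never zero.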

Python - Streamlining sudoku solver code

I am writing a script to efficiently solve a sudoku puzzle, but there's one part of my code that I think is extremely ugly and want to streamline.
def square(cell):
    rows = 'ABCDEFGHI'
    cols = '123456789'
    cell_row = cell[0][0]
    cell_col = cell[0][1]
    if cell_row in rows[0:3]:
        x = 'A'
    if cell_row in rows[3:6]:
        x = 'B'
    if cell_row in rows[6:9]:
        x = 'C'
    if cell_col in cols[0:3]:
        y = 'a'
    if cell_col in cols[3:6]:
        y = 'b'
    if cell_col in cols[6:9]:
        y = 'c'
    return (['Aa','Ab','Ac','Ba','Bb','Bc','Ca','Cb','Cc'].index(x+y)) + 1
Given that a sudoku board is composed of nine 3x3 squares, the purpose of this function is to take the coordinates of a cell on the board and return the number of the 3x3 square to which the cell belongs (where the square in the top left is number 1, and the bottom right is number 9). The input cell is in the form ['A5', 6], where A indicates the row, 5 the column, and 6 the value of the cell.
The code that I have works but there's got to be a much more efficient or presentable way of doing it. I would be grateful for any suggestions.
Personally, I don't think magic numbers like '65' and '97' make the solution more presentable! How about:
def square(cell):
    rows = 'ABCDEFGHI'
    cell_row = rows.index(cell[0][0])
    cell_col = int(cell[0][1]) - 1
    return 3 * (cell_row // 3) + cell_col // 3 + 1
I was able to make a greatly simplified version of your formula. I started by assigning both the row and column a 0-based index. Then I used integer division to only get the information about what 3-block the square is in. Since moving down a 3-block of rows increases the index by 3 while moving to the right only increases it by 1, I multiply the row index by 3 after the division. Here's the finished function:
def square(cell):
    coords = (ord(cell[0][0]) - 65, int(cell[0][1]) - 1)
    return 3 * (coords[0] // 3) + coords[1] // 3 + 1
Edit: Fixed offset by 1 - even though I would rather start at 0 as you'll probably want to use the returned value as an index for another (sub-)array.
And as I cannot comment on other answers yet, just my 2 cents here:
cdlane's answer is slightly slower than the one presented here. If you get rid of the .lower() (I assume you don't care about fail-safes at this point) and use Brien's answer, you gain another slight performance boost. I don't know how often you'll evaluate square(), but maybe it's worth ditching readability for performance ;)
I think the attached snippet should do the trick.
def square(cell):
    # http://www.asciitable.com/
    # https://docs.python.org/3/library/functions.html#ord
    row = ord(cell[0][0].lower()) - 97
    column = int(cell[0][1]) - 1
    return 3 * (row // 3) + column // 3 + 1
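For reference, a quick sanity check using the input format from the question (all three versions above agree on these):

>>> square(['A5', 6])   # row A, column 5 -> middle square of the top band
2
>>> square(['I9', 3])   # bottom-right corner
9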

Optimize finding pairs of arrays which can be compared

Definition: Array A = (a_1, a_2, ..., a_n) is >= array B = (b_1, b_2, ..., b_n) if they are equal-sized and a_i >= b_i for every i from 1 to n.
For example:
[1,2,3] >= [1,2,0]
[1,2,0] not comparable with [1,0,2]
[1,0,2] >= [1,0,0]
I have a list which consists of a big number of such arrays (approx. 10000, but can be bigger). Arrays' elements are positive integers. I need to remove all arrays from this list that are bigger than at least one of other arrays. In other words: if there exists such B that A >= B then remove A.
Here is my current O(n^2) approach, which is extremely slow. I simply compare every array with all other arrays and remove it if it's bigger. Are there any ways to speed it up?
import numpy as np
import time
import random

def filter_minimal(lst):
    n = len(lst)
    to_delete = set()
    for i in xrange(n - 1):
        if i in to_delete:
            continue
        for j in xrange(i + 1, n):
            if j in to_delete:
                continue
            if all(lst[i] >= lst[j]):
                to_delete.add(i)
                break
            elif all(lst[i] <= lst[j]):
                to_delete.add(j)
    return [lst[i] for i in xrange(len(lst)) if i not in to_delete]

def test(number_of_arrays, size):
    x = map(np.array, [[random.randrange(0, 10) for _ in xrange(size)]
                       for i in xrange(number_of_arrays)])
    return filter_minimal(x)

a = time.time()
result = test(400, 10)
print time.time() - a
print len(result)
P.S. I've noticed that using numpy.all instead of builtin python all slows the program dramatically. What can be the reason?
Might not be exactly what you are asking for, but this should get you started.
import numpy as np
import time
import random

def compare(x, y):
    # Reshape x to a higher-dimensional array
    compare_array = x.reshape(-1, 1, x.shape[-1])
    # You can now compare every x with every y element-wise simultaneously
    mask = (y >= compare_array)
    # Create a mask that first ensures that all elements of y are greater than x
    # and then ensures that this is the case at least once.
    mask = np.any(np.all(mask, axis=-1), axis=-1)
    # Apply this mask to x
    return x[mask]

def test(number_of_arrays, size, maxval):
    # Create arrays of shape (number_of_arrays, size) with maximum value maxval.
    x = np.random.randint(maxval, size=(number_of_arrays, size))
    y = np.random.randint(maxval, size=(number_of_arrays, size))
    return compare(x, y)

print test(50, 10, 20)
First of all we need to carefully check the objective. Is it true that we delete any array that is > ANY of the other arrays, even the deleted ones? For example, if A > B and C > A and B=C, then do we need to delete only A or both A and C? If we only need to delete INCOMPATIBLE arrays, then it is a much harder problem. This is a very difficult problem because different partitions of the set of arrays may be compatible, so you have the problem of finding the largest valid partition.
Assuming the easy problem, a better way to define the problem is that you want to KEEP all arrays which have at least one element < the corresponding element in ALL the other arrays. (In the hard problem, it is the corresponding element in the other KEPT arrays. We will not consider this.)
Stage 1
To solve this problem what you do is arrange the arrays in columns and then sort each row while maintaining the key to the array and the mapping of each array-row to position (POSITION lists). For example, you might end up with a result in stage 1 like this:
row 1: B C D A E
row 2: C A E B D
row 3: E D B C A
Meaning that for the first element (row 1) array B has a value >= C, C >= D, etc.
Now, sort and iterate the last column of this matrix ({E D A} in the example). For each item, check if the element is less than the previous element in its row. For example, in row 1, you would check if E < A. If this is true you return immediately and keep the result. For example, if E_row1 < A_row1 then you can keep array E. Only if the values in the row are equal do you need to do a stage 2 test (see below).
In the example shown you would keep E, D, A (as long as they passed the test above).
Stage 2
This leaves B and C. Sort the POSITION list for each. For example, this will tell you that the row with B's minimum position is row 2. Now do a direct comparison between B and every array below it in the minimum row, here row 2. Here there is only one such array, D. Do a direct comparison between B and D. This shows that B < D in row 3, therefore B is compatible with D. If the item is compatible with every array below its minimum position, keep it. We keep B.
Now we do the same thing for C. In C's case we need only do one direct comparison, with A. C dominates A, so we do not keep C.
Note that in addition to testing items that did not appear in the last column we need to test items that had equality in Stage 1. For example, imagine D=A=E in row 1. In this case we would have to do direct comparisons for every equality involving the array in the last column. So, in this case we direct compare E to A and E to D. This shows that E dominates D, so E is not kept.
The final result is we keep A, B, and D. C and E are discarded.
The overall performance of this algorithm is n^2 * log n for Stage 1, plus between n (lower bound) and n * log n (upper bound) for Stage 2. So the maximum running time is n^2 * log n + n * log n and the minimum running time is n^2 * log n + n. Note that the running time of your algorithm is cubic, n^3, since you compare every pair of arrays (n * n pairs) and each comparison costs n element comparisons: n * n * n.
In general, this will be much faster than the brute force approach. Most of the time will be spent sorting the original matrix, a more or less unavoidable task. Note that you could potentially improve my algorithm by using priority queues instead of sorting, but the resulting algorithm would be much more complicated.
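For comparison, a much simpler middle ground (my own sketch, not the staged algorithm above) exploits the fact that elementwise A >= B implies sum(A) >= sum(B). After sorting by total sum, any B that a given A could dominate appears before A, so one pass that checks each array only against the kept set suffices:

import numpy as np

def filter_minimal_sorted(arrays):
    # elementwise A >= B implies sum(A) >= sum(B), so after sorting by
    # total sum, every array that A could dominate appears before A
    kept = []
    for a in sorted(arrays, key=lambda v: v.sum()):
        # drop a iff it dominates some already-kept minimal array
        if not any((a >= b).all() for b in kept):
            kept.append(a)
    return kept

arrays = [np.array(v) for v in ([1, 2, 3], [1, 2, 0], [1, 0, 2], [1, 0, 0])]
print(filter_minimal_sorted(arrays))  # -> [array([1, 0, 0])]

This is still O(n^2) in the worst case, but the inner loop only runs over the kept (minimal) arrays, typically a much smaller set. Correctness: if a dominates some removed B, that B dominated an earlier kept C, and >= is transitive, so a also dominates C and is still caught.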

Map Color Algorithm in Python

I have a 2D array in Python (version 3.2) that looks like this:
...AAA....
...AAABB..
..BBBBBCC.
.....CCCC.
.DDD..CC..
.DDD......
It represents a kind of map with areas painted different colors. The above example shows four distinct regions, A, B, C, and D.
Here's an example of indexing the array:
map[1][5] == 'A' would return True.
I'm trying to write a function that takes in an array like this, and a row/col index, and returns the number of adjoining spaces that are of the same "color". So using the example above, here are some return values (the arguments are the array, row, and column number, respectively):
6 <-- countArea(map, 5, 2)
8 <-- countArea(map, 2, 8)
I'd like to implement this as a recursive function, but I can't figure it out. Here's what I have so far:
def countArea(map, row, col):
    key = map[row][col]
    if (map[row-1][col] == key):
        return 1 + countArea(map, row-1, col)
    elif (map[row+1][col] == key):
        return 1 + countArea(map, row+1, col)
    elif (map[row][col+1] == key):
        return 1 + countArea(map, row, col+1)
    elif (map[row][col-1] == key):
        return 1 + countArea(map, row, col-1)
    else:
        return 1
I know I'm missing something basic here. I'm basically saying "here is the current character, now look in each direction to see if it has the same character."
My question is, what am I missing in this recursive definition?
Thanks for your help.
My question is, what am I missing in this recursive definition?
Once a grid square has been counted, it must not be counted again (this includes counting by recursive invocations of countArea()!)
Your current algorithm goes as far north as it can, and then keeps taking one step to the south followed by one step to the north. This two-step sequence repeats until you run out of stack space.
If you like, you could read up on algorithms for this problem in Wikipedia.
In your code, the algorithm would look one field to the left of a given input field, and the recursive call would then call the function on the initial field again (which you obviously don't want, since it leads to infinite recursion).
Approach 1
A method to overcome this problem while still using recursion would be to specify a direction where the recursion should look for more fields of the same type. For example, the call to the field directly north of (above) the initial one could look recursively farther to the north or east (right), the one to the east go south (below) and east, and so on.
By intelligently choosing the first step you can ensure that there is no overlap in the scanned regions. However, it needs some adaptations to specify the directions the recursive call should scan. BUT: note that this algorithm would not work if the area is overhanging, i.e. if not every field northeast of the starting point can be reached by just moving right and up.
There exist more algorithms like this that are also capable of solving the mentioned problem. Have a look at flood fill on Wikipedia.
Approach 2
You can also save the already visited fields in some way and directly return from the recursive call if the field was already visited.
The following implementation should work:
def countArea(map, row, col, key=None, seen=None):
    if key is None:
        key = map[row][col]
    if seen is None:
        seen = set()
    seen.add((row, col))  # mark this location as visited
    n = 1
    for dy, dx in [(0, 1), (1, 0), (-1, 0), (0, -1)]:
        r, c = row + dy, col + dx
        if r < 0 or r >= len(map) or c < 0 or c >= len(map[0]):  # check boundaries
            continue
        # only increment and recurse if key matches and we haven't already visited
        if map[r][c] == key and (r, c) not in seen:
            n += countArea(map, r, c, key, seen)
    return n
Example:
>>> print '\n'.join(''.join(row) for row in map)
...AAA....
...AAABB..
..BBBBBCC.
.....CCCC.
.DDD..CC..
.DDD......
>>> countArea(map, 5, 2)
6
>>> countArea(map, 2, 8)
8
Note that this assumes that areas with the same key that are only touching at a diagonal should be considered separate, for example for the following map countArea(map, 0, 0) and countArea(map, 1, 1) would both return 1:
A.
.A
As a side note, you should not use map as a variable name, as it shadows the built-in map() function.
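One practical caveat worth adding (my note, not part of the original answer): CPython's default recursion limit is about 1000 frames, so a large region can overflow the call stack. An equivalent iterative flood fill with an explicit stack avoids that:

def count_area_iterative(grid, row, col):
    # same flood fill as above, but with an explicit stack instead of recursion
    key = grid[row][col]
    seen = {(row, col)}
    stack = [(row, col)]
    count = 0
    while stack:
        r, c = stack.pop()
        count += 1
        for dr, dc in ((0, 1), (1, 0), (-1, 0), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == key and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append((nr, nc))
    return count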

memoization in python, off by one errors

I'm currently taking an algorithms class. I'm testing a lot of them out in Python, including dynamic programming. Here is a bottom-up implementation of rod cutting.
It doesn't work because of an off-by-one error. Is there a global setting in Python where I can change the default array index to be 1 instead of 0? Or can someone please provide me with a better strategy for overcoming the off-by-one errors, which I encounter a million times? It's super annoying.
def bottom_up_memo_cut_rod(p, n):
    r = [0 for i in range(n)]
    r[0] = 0
    for j in range(n):
        q = -1
        for i in range(j):
            q = max(q, p[i] + r[j-i])
        r[j] = q
    return r[n]

bottom_up_memo_cut_rod([1,5,8,9], 4)
The answer should be 10 in this case; cutting 4 into (2, 2) yields the max price of 10.
There are a couple of things in Python that may help you. The built-in enumerate is a great one.
for idx, val_at_idx in enumerate(aList):
    # idx is the 0-indexed position, val_at_idx is the actual value.
You can also use list slicing with enumerate to shift indices if absolutely necessary:
for idxOffBy1, val_at_wrong_idx in enumerate(aList[1:]):
    # idx here will be 0, but the value will be from position 1 in the original list.
Realistically though, you don't want to try to change the interpreter so that lists start at index 1. You want to adjust your algorithm to work with the language.
In Python, you can often avoid working with the indices altogether. That algorithm can be written like this:
def bottom_up_memo_cut_rod(p, n):
    r = [0]
    for dummy in p:
        r.append(max(a + b for a, b in zip(reversed(r), p)))
    return r[-1]

print bottom_up_memo_cut_rod([1,5,8,9], 4)
# 10
In your case, off-by-one is a result of r[n] where len(r)==n. You either write r[n-1], or, more preferably, r[-1], which means "the last element of r", the same way r[-2] will mean "second last" etc.
Unrelated, but useful: [ 0 for i in range(n) ] can be written as [0] * n
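For completeness, here is one way to repair the original function's indices directly: a sketch following the usual CLRS bottom-up formulation, sizing r as n + 1 so that r[n] is a valid index, and remembering that the price list p is 0-indexed:

def bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)  # r[j] = best revenue for a rod of length j; r[0] = 0
    for j in range(1, n + 1):
        q = -1
        for i in range(1, j + 1):            # i = length of the first piece cut off
            q = max(q, p[i - 1] + r[j - i])  # p[i - 1] is the price of length i
        r[j] = q
    return r[n]

print(bottom_up_cut_rod([1, 5, 8, 9], 4))  # 10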
