I do not know if there is already a standard name for this concept, so do not hesitate to point me in the right direction. I have been looking on the internet, but the "sync" and "mirror" concepts are different from what I am looking for:
I want to find a way to synchronise positions and areas between arrays. See the following diagram:
Here we can see 3 different arrays stored in 3 different variables (x, y, z). Each array has 9 positions. I want to create them in such a way that if the value in x2 is changed, it is also changed in z4. The same applies for x7:8<->y8:7:-1.
From the basics, I think this is not possible, as arrays only store pointers, and those pointers point to the Python ints/floats/etc. It might be possible using custom objects, but memory-wise that is probably unwise, as every position would require an individual object.
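The closest built-in mechanism I have found is NumPy views: slices of the same buffer share memory, so a change made through one view is visible through the others. A minimal sketch (this only covers mappings expressible as strided slices, not arbitrary position pairs):

    import numpy as np

    buf = np.arange(9)     # shared storage
    x = buf                # x is a view of buf
    y = buf[::-1]          # reversed view: y[i] aliases buf[8 - i]

    x[2] = 99              # write through one view...
    print(y[6])            # 99 -- the same memory seen through the other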
I have two 2D point clouds, and they are matched to one another. That means I already know that point X in the source point cloud is represented by point Y in the target point cloud. My goal is to create a transformation matrix with which I will not only be able to match the source to the target point cloud, but also calculate the corresponding positions on the target PC of additional points that are added to the source PC later on.
I have already tried a simple self-written rigid transformation (involving stretching/rotation/translation of one PC), but as the transformation between the two PCs involves some kind of irregular stretching, this solution is not accurate enough for me. For reference, see 1 for two example PCs; 2 shows my rigid-transformation attempt. I figure that a non-rigid transformation is necessary.
I have therefore already looked into different Python libraries, for example the various elastix toolkits, or other projects like probreg (https://github.com/neka-nat/probreg).
The problem is that all of these either take images as input rather than point coordinates, and/or (correct me if I'm wrong) only perform the registration and do not produce a transformation matrix with which I could later transform additional points.
Do you know any useful Python libs, and/or can you point me in the right direction for creating such a matrix?
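To make the goal concrete, this is roughly the interface I am after. A sketch using scipy.interpolate.RBFInterpolator with a thin-plate-spline kernel as one possible non-rigid mapping (I have not verified it on my data, and note that such an irregular warp is a fitted function rather than a single matrix):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Matched point pairs, (N, 2) arrays -- made-up example data
    source_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    target_pts = np.array([[0.1, 0.0], [1.2, 0.1], [0.0, 0.9], [1.1, 1.2], [0.6, 0.5]])

    # Fit a smooth non-rigid warp from source to target coordinates.
    warp = RBFInterpolator(source_pts, target_pts, kernel='thin_plate_spline')

    # Transform additional points added to the source PC later on.
    print(warp(np.array([[0.25, 0.75]])))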
Thanks for your help
I am modeling electrical current through various structures with the help of FiPy. To do so, I solve Laplace's equation for the electrical potential. Then, I use Ohm's law to derive the field and with the help of the conductivity, I obtain the current density.
FiPy stores the potential as a cell-centered variable and its gradient as a face-centered variable which makes sense to me. I have two questions concerning face-centered variables:
If I have a two- or three-dimensional problem, FiPy computes the gradient in all directions (ddx, ddy, ddz). The gradient is a FaceVariable, which is always defined on the face between two cell centers. For a structured (quadrilateral) grid, only one of the derivatives should be non-zero, since for any face, the positions of the two cell centers involved should only differ in one coordinate. In my simulations, however, it frequently happens that more than one of the derivatives (ddx, ddy, ddz) is non-zero, even on a structured grid.
The manual gives the following explanation for the FaceGrad-Method:
Return gradient(phi) as a rank-1 FaceVariable using differencing for the normal direction (second-order gradient).
I do not see how this differs from my understanding pointed out above.
What makes it even more problematic: whenever "too many" derivatives are included, current does not seem to be conserved, even in the simplest structures I model...
Is there a clever way to access the data stored in the face-centered variable? Let's assume I would want to compute the electrical current going through my modeled structure.
As of right now, I save the data stored in the FaceVariable to a TSV file. This yields a table with (x, y, z) positions and (ddx, ddy, ddz) values. I then read the file back and load the data into arrays to use it in Python. This seems counter-intuitive and really inconvenient. It would be much better to be able to access the FaceVariable along certain planes or at certain points.
The documentation does not make it clear, but .faceGrad includes tangential components which account for more than just the neighboring cell center values.
Please see this Jupyter notebook for explicit expressions for the different types of gradients that FiPy can calculate (yes, this stuff should go into the documentation: #560).
The value is accessible with myFaceVar.value and the coordinates with myFaceVar.mesh.faceCenters. FiPy is designed around unstructured meshes and so taking arbitrary slices is not trivial. CellVariable objects support interpolation by calling myCellVar((xs, ys, zs)), but FaceVariable objects do not. See this discussion.
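For illustration, a small self-contained sketch of pulling the raw arrays out of a FaceVariable (the mesh size, the phi = x potential, and the x == 2 plane are made-up example choices):

    import numpy as np
    from fipy import Grid2D, CellVariable

    mesh = Grid2D(nx=4, ny=4, dx=1.0, dy=1.0)
    phi = CellVariable(mesh=mesh)
    phi.setValue(mesh.cellCenters[0])       # phi = x, so grad(phi) should be (1, 0)

    grad = phi.faceGrad                     # rank-1 FaceVariable, shape (2, nFaces)
    values = grad.value                     # plain numpy array of the components
    fx, fy = np.asarray(mesh.faceCenters)   # face-center coordinates as numpy arrays

    # Example: select the faces lying on the plane x == 2 and sum the
    # x-component of the gradient across them.
    on_plane = np.isclose(fx, 2.0)
    print(values[0][on_plane].sum())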
I'm working on a project that requires a 2D map with a list entry for every possible x and y coordinate on that map. Seeing as the map dimensions are constant, which is faster for creation, searching, and changing values?
Let's say that I have a 2x2 grid with a total of 4 positions, each storing 2 bits (0, 1, 2 or 3). Would having [0b00, 0b00, 0b00, 0b01] represent the grid be better than [[0b00, 0b00], [0b00, 0b01]] in terms of efficiency and readability?
I assumed that the first method would be quicker at creation and at iterating over all the values, but that the second would be faster for finding the value of a certain position (so listName[1][0] is easier to work out than listName[2]).
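For concreteness, the two layouts I'm comparing look like this (a sketch, assuming row-major order so that position (x, y) lives at index y * width + x in the flat version):

    WIDTH, HEIGHT = 2, 2

    # Nested: one inner list per row, indexed as grid[y][x]
    nested = [[0b00] * WIDTH for _ in range(HEIGHT)]
    nested[1][0] = 0b01

    # Flat: a single list, indexed with arithmetic
    flat = [0b00] * (WIDTH * HEIGHT)
    flat[1 * WIDTH + 0] = 0b01   # same cell as nested[1][0]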
To clarify, I want to know which is more memory-efficient and CPU-efficient for the 3 listed uses and (if it isn't too much trouble) why. Further, the actual lists I'm using are 4096x4096 (using a total of 17Mb in raw data).
Note: I DO already plan on splitting the 4096x4096 grid into sectors that will be part of a nested list; I'm just asking whether x and y should be on the same nesting level.
Thanks.
I have two arrays: array A with ~1M lines and array B with ~400K lines. Each contains, among other things, the coordinates of a point. For each point in array A, I need to find how many points in array B are within a certain distance of it. How do I avoid naively comparing everything to everything? Based on its speed at the start, running naively would take 10+ days on my machine. That requires nested loops, and the arrays are too large to construct a distance matrix (400G entries!).
I thought the way forward would be to check only a limited set of B coordinates against each A coordinate. However, I haven't found an easy way of doing that. That is, what's the easiest/quickest way to make a selection that doesn't require checking all the values in B (which is exactly the task I'm trying to avoid)?
EDIT: I should've mentioned these aren't 2D (or nD) Cartesian coordinates, but points on a spherical surface (lat/long), and the distance is great-circle distance.
I cannot give a full answer right now, but here are some hints to get you started. It will be much more efficient to organise the points in B in a k-d tree. You can use the class scipy.spatial.KDTree to do this easily, and its query_ball_point() method to request the points within a given distance.
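A rough sketch of that approach for the lat/long case. The conversion to 3D unit vectors (so a great-circle angle becomes a chord length), the cKDTree variant, the 6371 km Earth radius, and the placeholder data are all my assumptions:

    import numpy as np
    from scipy.spatial import cKDTree   # C implementation of KDTree

    def to_unit_vectors(lat_deg, lon_deg):
        # Map lat/long (degrees) onto 3D unit vectors on the sphere.
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        return np.column_stack((np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)))

    # Placeholder data: substitute the real ~1M and ~400K [lat, lon] arrays.
    a_latlon = np.array([[52.5, 13.4], [48.9, 2.4]])
    b_latlon = np.array([[52.4, 13.5], [40.7, -74.0]])
    radius_km = 50.0

    a_xyz = to_unit_vectors(a_latlon[:, 0], a_latlon[:, 1])
    b_xyz = to_unit_vectors(b_latlon[:, 0], b_latlon[:, 1])
    tree = cKDTree(b_xyz)

    # A great-circle angle theta corresponds to a chord of 2*sin(theta/2)
    # on the unit sphere; assuming the search radius is given in km:
    theta = radius_km / 6371.0
    chord = 2.0 * np.sin(theta / 2.0)

    # Number of B points within the distance of each A point.
    counts = np.array([len(nbrs) for nbrs in tree.query_ball_point(a_xyz, chord)])
    print(counts)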
Here is one possible implementation of the cross-match between lists of points on the sphere using a k-d tree.
http://code.google.com/p/astrolibpy/source/browse/my_utils/match_lists.py
Another way is to use the healpy module and its get_neighbors method.
I've got a reasonable (though rusty) background in algorithms and mathematics, and modest proficiency in Python and C. I can sort of see how to do this, but it's non-trivial and gets more complicated every time I prototype it. I come before the collective for its wisdom, hoping for an elegant solution I'm not seeing. I think there's some sort of network or graph variant that might be apropos, but it's not clicking. And it's not a homework assignment :-).
I have three sets of data: A, B, and C. Each element in A is a string, each element in B is an int, and each element in C is a collection of metadata (dates, times, descriptions, etc.). There will potentially be thousands, if not millions, of elements in each set (though not soon).
Every A will map to zero or more items in B. Conversely, each element in B will map to zero or more items in A. Every item in A and B will have an associated C (possibly empty), which might be shared with other A's and/or B's.
Given an A, I need to report on all B's that it maps to. I further then need to report all A's that those B's map to, as well as all C's associated with what was found. I also need to be able to do the converse (given a B, report associated A's, B's and C's).
I understand there are some fairly pathological possibilities in here, and I'll need to detect loops (depth detection should work fine), etc.
Thoughts?
What first comes to mind for me would be a graph or directed graph.
Each element in the data sets could be a node, and the edges would represent the mappings. You could write your specific implementation to provide helper methods for the things you know you're going to need to do, like getting all B's that a given A element maps to.
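A minimal sketch of what that could look like (all names here are mine, not an existing library API): two adjacency dicts for the A<->B edges, plus a map attaching the C metadata to nodes:

    from collections import defaultdict

    class MappingGraph:
        def __init__(self):
            self.a_to_b = defaultdict(set)   # A-node -> set of B-nodes
            self.b_to_a = defaultdict(set)   # reverse edges for the converse queries
            self.meta = defaultdict(set)     # node -> associated C metadata

        def link(self, a, b):
            self.a_to_b[a].add(b)
            self.b_to_a[b].add(a)

        def bs_for(self, a):
            # All B's that a given A maps to.
            return self.a_to_b[a]

        def related_as(self, a):
            # All A's reachable via the B's that `a` maps to.
            return {a2 for b in self.a_to_b[a] for a2 in self.b_to_a[b]}

    g = MappingGraph()
    g.link("foo", 1)
    g.link("bar", 1)
    print(g.bs_for("foo"))      # {1}
    print(g.related_as("foo"))  # {'foo', 'bar'}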
UPDATE: I didn't notice you had already tagged the question graph-algorithm, which I assume means you had already thought of a graph data structure.