I am currently coding numerical simulations of a lattice for a physics project.
I am used to manipulating square subsections of lattices where a variable or degree of freedom sits at each site of the lattice by using a 2d array.
Now, I would like to move to more general subsections, specifically circular ones, as in the picture below:
Only the red area would be dynamical (evolve in time), so the sites outside of it would not need to be stored in memory.
I am wondering if you know of an efficient and somewhat natural container for holding such objects.
My only idea so far is to store the whole square plus a flat boolean array telling me, for each site, whether it is inside or outside, but that could become a huge waste of computational time later.
PS: I will code this in Python and/or C++.
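One option that avoids both the full square array and a per-site boolean test is to store the disc row by row: each row keeps only its contiguous run of interior sites plus the column offset where that run starts. Lookups stay O(1) and memory scales with the area of the disc. This is a minimal sketch under assumed conventions (radius `R` in lattice units, disc centered at the origin); the names `site`/`set_site` are hypothetical helpers, not from the original post.

```python
import numpy as np

R = 50  # hypothetical disc radius, in lattice units

# One entry per row i in [-R, R]: (first interior column, contiguous values).
rows = []
for i in range(-R, R + 1):
    half = int(np.floor(np.sqrt(R * R - i * i)))  # half-width of row i
    rows.append((-half, np.zeros(2 * half + 1)))

def site(i, j):
    """Value at lattice site (i, j); assumes (i, j) lies inside the disc."""
    first, vals = rows[i + R]
    return vals[j - first]

def set_site(i, j, v):
    """Assign a value to interior site (i, j)."""
    first, vals = rows[i + R]
    vals[j - first] = v

set_site(0, 0, 1.5)
```

Total storage is roughly pi*R^2 values instead of (2R+1)^2, and iterating over the dynamical sites is just a loop over the stored runs, with no inside/outside branch in the inner loop.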
I'm trying to draw a pattern in a 100x100 window using John Zelle’s graphics module, and I am sure that the nested for loop is not the most efficient way to do it.
Any thoughts on how I can optimize the nested for loop? Here is the code:
def Penultimatedigitdesign(x, y, win, color):
    for Y in range(y, y + 100, 40):
        for X in range(x + 20, x + 100, 40):
            drawFourcircleInSqureTF(win, X, Y, "white", color)
        for X in range(x, x + 100, 40):
            drawFourcircleInSqureTF(win, X, Y, color, "white")
    for Y in range(y + 20, y + 100, 40):
        for X in range(x + 20, x + 100, 40):
            drawFourcircleInSqureTF(win, X, Y, color, "white")
        for X in range(x, x + 100, 40):
            drawFourcircleInSqureTF(win, X, Y, "white", color)
In general, the strategy for navigating multi-dimensional space efficiently is to vectorize it. In other words, structure your starting state as a matrix, apply a function to the whole matrix at once, and let optimized hardware produce the output. In Python, NumPy is the key library that does this (on the CPU; GPUs take the same idea further).
In graphics programming, people write "shaders," which are basically functions that operate on a single pixel. These can be applied to a matrix (i.e., all the pixels on the screen), so that the GPU, which is optimized for this type of work, can carry it out in parallel.
As for how to implement this specifically for your use case and whatever graphics backend you are targeting, I can't say; there aren't enough details. If this is just a learning exercise, iterating over each pixel and solving the problem procedurally is probably fine.
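To make the vectorization idea concrete: here is a hypothetical NumPy sketch that builds a 100x100 circle-in-checkerboard pattern as whole-array operations instead of nested per-pixel loops. It is not tied to Zelle's module (which draws one object at a time); the pattern itself is an assumed stand-in, not the asker's exact design.

```python
import numpy as np

# Per-pixel coordinate grids, computed once for the whole 100x100 window.
ys, xs = np.mgrid[0:100, 0:100]

cell = 20                                 # assumed cell size of the pattern
cx = (xs // cell) * cell + cell / 2       # center of each pixel's cell
cy = (ys // cell) * cell + cell / 2

# Boolean masks computed for every pixel at once -- no Python-level loop.
inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= (cell / 2) ** 2
checker = ((xs // cell) + (ys // cell)) % 2 == 0

pattern = inside ^ checker                # boolean image, shape (100, 100)
```

The same few lines replace four nested loops; with a backend that can blit an array (e.g. an image buffer), the whole pattern is drawn in one call.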
I am in the process of converting a codebase from Matlab to Python and noticed that generateMesh gets called on some polygons before carrying out a finite element analysis.
What I need to get as an output is a list of all the elements and nodes, with their respective coordinates. I don't need any GUI, just the output nodes and elements information.
The best solution I came across is something like this done with gmsh. I know gmsh is a pretty big library, and I am afraid it might be a little too much for my needs. Is there any other package you'd suggest?
Triangular meshes are fine for the moment, but I would like the package to support tetrahedral meshes as well in case it's needed in the future.
Thank you
Edit: I forgot to mention that, for now, I am only dealing with 2D geometries (hence the triangular elements); tetrahedra would only matter if I move to 3D later.
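As a lighter-weight possibility than gmsh, SciPy's `Delaunay` can produce exactly the "nodes and elements" output for simple cases. A caveat to state up front: it is a plain Delaunay triangulation of a point set, not the boundary-conforming, quality meshing that gmsh or MATLAB's generateMesh perform, so it is only adequate for convex domains you can seed with points yourself. A minimal sketch on the unit square (the point grid and its resolution are assumptions for illustration):

```python
import numpy as np
from scipy.spatial import Delaunay

# Seed the unit square with a 5x5 grid of points (assumed resolution).
n = 5
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
nodes = np.column_stack([xs.ravel(), ys.ravel()])   # (n*n, 2) coordinates

tri = Delaunay(nodes)
elements = tri.simplices    # (n_elements, 3): node indices of each triangle

# nodes[i] holds the coordinates of node i; `elements` is the connectivity
# table, i.e. the list of elements with their node indices.
```

The same call on a (N, 3) point array returns tetrahedra (4 indices per simplex), which covers the future 3D case. If you need non-convex boundaries or element-size control, a constrained mesher such as `triangle` (via the `meshpy` or `triangle` Python packages) or gmsh itself is the safer route.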
I am modeling electrical current through various structures with the help of FiPy. To do so, I solve Laplace's equation for the electrical potential. Then, I use Ohm's law to derive the field and with the help of the conductivity, I obtain the current density.
FiPy stores the potential as a cell-centered variable and its gradient as a face-centered variable which makes sense to me. I have two questions concerning face-centered variables:
If I have a two- or three-dimensional problem, FiPy computes the gradient in all directions (ddx, ddy, ddz). The gradient is a FaceVariable, which is always defined on the face between two cell centers. For a structured (quadrilateral) grid, only one of the derivatives should be nonzero, since for any face the positions of the two cell centers involved should differ in only one coordinate. In my simulations, however, it frequently occurs that more than one of the derivatives (ddx, ddy, ddz) is nonzero, even for a structured grid.
The manual gives the following explanation for the FaceGrad-Method:
Return gradient(phi) as a rank-1 FaceVariable using differencing for the normal direction(second-order gradient).
I do not see how this differs from my understanding pointed out above.
What makes it even more problematic: whenever "too many" derivatives are included, current does not seem to be conserved, even in the simplest structures I model...
Is there a clever way to access the data stored in the face-centered variable? Let's assume I would want to compute the electrical current going through my modeled structure.
As of right now, I save the data stored in the FaceVariable as a TSV file. This yields a table with (x, y, z) positions and (ddx, ddy, ddz) values. I then read the file back and load the data into arrays to use in Python. This seems counter-intuitive and really inconvenient. It would be much better to be able to access the FaceVariable along certain planes or at certain points.
The documentation does not make it clear, but .faceGrad includes tangential components which account for more than just the neighboring cell center values.
Please see this Jupyter notebook for explicit expressions for the different types of gradients that FiPy can calculate (yes, this stuff should go into the documentation: #560).
The value is accessible with myFaceVar.value and the coordinates with myFaceVar.mesh.faceCenters. FiPy is designed around unstructured meshes and so taking arbitrary slices is not trivial. CellVariable objects support interpolation by calling myCellVar((xs, ys, zs)), but FaceVariable objects do not. See this discussion.
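To avoid the TSV round trip, the arrays can be masked directly in NumPy. This is a sketch under assumed shapes: `faceCenters` and `values` below stand in for `mesh.faceCenters.value` and `myFaceVar.value`, both of which are arrays of shape `(dim, numberOfFaces)`; the random data and the uniform face area are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces = 12
# Stand-ins for mesh.faceCenters.value and myFaceVar.value, shape (3, nFaces).
faceCenters = rng.uniform(0.0, 1.0, size=(3, n_faces))
values = rng.normal(size=(3, n_faces))       # e.g. (ddx, ddy, ddz) per face

# Select all faces lying on the plane x = x0 (on a structured grid the
# face centers share exact coordinates, so isclose is enough).
x0 = faceCenters[0, 0]
on_plane = np.isclose(faceCenters[0], x0)

# Current through that plane ~ sum of the normal (x) component over the
# selected faces, weighted by face area (uniform here for simplicity).
face_area = 1.0
current = face_area * values[0, on_plane].sum()
```

With the real objects you would substitute `myFaceVar.value` and `myFaceVar.mesh.faceCenters.value` for the stand-ins; on a structured `Grid3D` the mask picks out exactly one layer of faces, which is the cross-section you integrate over.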
I would like to represent a bunch of particles (~100k grains), for which I have the position (including rotation) and a discretized level set function (i.e., the signed distance of every voxel from the surface). Due to the large sample, I'm searching for efficient solutions to visualize it.
I first went for VTK, using its Python interface, but I'm not really sure it's the best (and simplest) way to do it since, as far as I know, there is no direct implementation for getting an isosurface from a 3D data set. In the beginning I was thinking of using marching cubes, but then I would still have to apply a threshold or interpolate in order to find the voxels that are on the surface and label them before they can be used by the marching cubes step.
Now I found Mayavi, which has a Python function
mlab.pipeline.iso_surface()
However, I did not find much documentation on it and was wondering how it behaves in terms of performance.
Does someone have experience with these kinds of tools? Which would be the best solution, in terms of efficiency first and simplicity second? I do not know the VTK library, but if there is a huge difference in performance I can dig into it, even without the Python interface.
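One point worth noting: since you already have a signed distance field, marching cubes needs no thresholding or labeling step at all; extracting the level set at `level=0.0` is exactly what the algorithm does. A minimal sketch with scikit-image's implementation, using a hypothetical spherical grain as the input volume (the grid size and radius are assumptions):

```python
import numpy as np
from skimage import measure

# Build a signed distance field for a hypothetical spherical grain.
n = 32
zs, ys, xs = np.mgrid[0:n, 0:n, 0:n]
center, radius = (n - 1) / 2, n / 4
sdf = np.sqrt((xs - center) ** 2 + (ys - center) ** 2
              + (zs - center) ** 2) - radius       # negative inside

# level=0.0 extracts the surface directly from the signed distances --
# no manual voxel selection or labeling is required beforehand.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
```

The resulting `verts`/`faces` arrays can be handed to any renderer (Mayavi's `mlab.triangular_mesh`, VTK's `vtkPolyData`, or a file export). For ~100k grains, the main performance lever is batching: merge the per-grain meshes into one vertex/face buffer before rendering rather than creating 100k separate actors.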
Lowest/Highest Combined Surface(s)
I'm looking for a methodology (and/or preferably a software approach) to create what I'm calling the Lowest (or highest) combined surface for a set of polygons.
Given a number of "surfaces" (3D polygons): if our input was these two polygons that partially overlap and definitely intersect, my Lowest Combined output would be these three polygons.
We've gone through a variety of approaches, and the best solution we could come up with involved applying a point grid to each polygon and performing calculations to return the lowest set of points at each grid location. The problem is that the original geometry is lost in this approach, which doesn't give us a working solution.
Background
I'm looking at a variety of "surfaces" that can be represented by 3D faces (CAD-speak) or polygons, usually distributed in a shapefile (.shp). When two surfaces interact, I'm interested in taking either the lowest combined or highest combined surface. I'm able to do this in CAD by manually tracing out new polygons for the interaction zones, but once I get beyond a handful of surfaces this becomes too labor-intensive.
The current Approach
My current approach, which falls somewhere in the terrible category, is to generate a point cloud from each surface on a 1 m grid and then do a grid-cell-based comparison of the points.
I do this by using AutoCAD Civil 3D's surface generation tools to create a TIN from each polygon surface. This is then exported to a 1 m DEM file, which I believe is a gridded output format.
Next, each DEM file is brought into Global Mapper, where I generate a single point at the center of each elevation grid cell. This data is then exported to a .csv file in which each point carries a variety of attributes, such as the name of the surface it came from and its altitude.
Once I have a set of CSV files, I run them through a Python script that exports the lowest point (and its associated attributes) at each grid cell. I do everything in UTM because the UTM grid is based on meters, which makes everything easier.
Lastly, we bring the point file back into Global Mapper, coloring each point by the surface it started from.
There are a variety of issues with this approach: sometimes things don't line up perfectly, and there is a variety of cleanup I have to do. Also, the edges end up jagged, as is to be expected when nice straight lines are converted into a point cloud.
Alternatively, we came up with a similar approach in ArcGIS using the Surface Comparison tool, but it had similar limitations to the ones we ran into with my approach.
What I'm looking for is a way to do this automatically with a variable number of inputs. I'm willing to use just about any tool to get this done, as it seems like it shouldn't be too difficult a process.
Software?
When I look at this problem from a programmer's point of view, it looks rather straightforward, but I'm at a total loss as to how to proceed. I'm assuming Stack Overflow is the correct Stack Exchange site for this question; if it belongs somewhere else, I'm happy to move it.
I wasn't sure whether something like Mathematica (with which I have zero experience) could handle this situation, or whether there is some fancy 3D math library in Python that could chop polygons up by how they interact and then give me the lowest of the co-located polygons.
In any case, I'm willing to try anything out, so if you have an idea of what tools and/or libraries I can use to do this, please share! I have to assume that there is SOMETHING out there that can handle this type of 3D geometric processing.
Thanks
EDIT
Because the commenters seem confused: I am not asking for code. I am asking for methodologies, libraries, supporting tools, or even software packages that can perform these operations. I plan to write the software myself; I am just hoping I don't need to pull out my trig books and implement all these operations by hand. I have to assume somebody out there has dealt with something similar before.
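One methodology that preserves the original geometry (unlike the point-grid approach) is polygon set operations: split the footprints into overlap pieces with intersection/difference, then, for each piece, keep the surface that is lowest there. Shapely handles the 2D splitting. This is a simplified sketch under strong assumptions: each surface is a footprint polygon plus an elevation function `z(x, y)` (constant planes here), and "lowest" is decided at each piece's centroid, which is only valid when the surfaces do not cross within a piece; the data is entirely made up for illustration.

```python
from shapely.geometry import Polygon

# Two hypothetical overlapping surfaces: (footprint, elevation function).
surfaces = [
    (Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]), lambda x, y: 10.0),
    (Polygon([(2, 2), (6, 2), (6, 6), (2, 6)]), lambda x, y: 5.0),
]

def lowest_combined(surfaces):
    """Return (piece, surface_index) pairs covering the union, where each
    piece belongs to the surface that is lowest over that piece."""
    pieces = []
    for i, (poly, z) in enumerate(surfaces):
        remaining = poly
        for j, (other, oz) in enumerate(surfaces):
            if i == j:
                continue
            overlap = remaining.intersection(other)
            if overlap.is_empty:
                continue
            # Compare elevations at the overlap's centroid (simplification:
            # assumes neither surface crosses the other inside the overlap).
            c = overlap.centroid
            if oz(c.x, c.y) < z(c.x, c.y):
                remaining = remaining.difference(other)
        if not remaining.is_empty:
            pieces.append((remaining, i))
    return pieces

pieces = lowest_combined(surfaces)
```

The output polygons keep their straight edges, so the jagged-boundary problem of the gridded workflow disappears. For crossing surfaces you would first cut the overlap along the intersection line of the two planes (computable from the TIN faces) before assigning pieces; libraries like Shapely/GEOS, CGAL, or PostGIS all support the underlying boolean operations.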