I am trying to automate the partitioning of a model in ABAQUS using a Python script. So far I have a feeling that I am going down a rabbit hole with no solution, and that even if I manage to do it, the algorithm will be very inefficient and slower than manual partitioning.
I want the script to:
join Interesting Points on each face with lines that are perpendicular to the edges.
be applicable to any model.
create partitions that can be deleted/edited later on.
My question is: is automatic partitioning possible? If so, what kind of algorithm should I use?
In the meantime, I have written some initial code (below) to get a feel for the problem, using the Partition by shortest path function:
(note that I am looping through vertices and not Interesting Points because I haven’t found a way to access them.)
The problems I have are:
New faces will be created as I partition the faces through the range function. My alternative is to select all the faces.
New interesting points are created as I partition. I could make a shallow copy of the initial interesting points, extract their coordinates, and then use these coordinates to do the partitioning. Before partitioning I would need to convert the coordinates back to a dictionary object.
I cannot seem to access the interesting points from the commands.
from abaqus import *
from abaqusConstants import *

#Define Functions
def Create_cube(myPart, myString):
    s = mdb.models[myString].ConstrainedSketch(name='__profile__', sheetSize=200.0)
    g, v, d, c = s.geometry, s.vertices, s.dimensions, s.constraints
    s.setPrimaryObject(option=STANDALONE)
    s.rectangle(point1=(10.0, 10.0), point2=(-10.0, -10.0))
    p = mdb.models[myString].Part(name=myPart, dimensionality=THREE_D, type=DEFORMABLE_BODY)
    p = mdb.models[myString].parts[myPart]
    p.BaseSolidExtrude(sketch=s, depth=20.0)
    s.unsetPrimaryObject()
    p = mdb.models[myString].parts[myPart]
    session.viewports['Viewport: 1'].setValues(displayedObject=p)
    del mdb.models[myString].sketches['__profile__']

def subtractTheMatrix(matrix1, matrix2):
    matrix = [0, 0, 0]
    for i in range(0, 3):
        matrix[i] = matrix1[i] - matrix2[i]
        if matrix[i] == 0.0:
            matrix[i] = int(matrix[i])
    return matrix
#Define Variables
myString='Buckling_Analysis'
myRadius= 25.0
myThickness= 2.5
myLength=1526.0
myModel= mdb.Model(name=myString)
myPart='Square'
myOffset=0.0
set_name='foobar'
#-------------------------------------------------------------------MODELLING-----------------------------------------------------------------
#Function1: Create Part
Create_cube(myPart,myString)
#Function2: Extract Coordinates from vertices (using string manipulation)
#Input: vertices in vertex form
#Output: coordinates of vertices in the form [[x,y,z],[x1,y1,z1],[x2,y2,z2]] (name: v1_coordinates)
p = mdb.models[myString].parts[myPart]
v1=p.vertices
v1_coordinates=[]
for x in range(len(v1)):
    dictionary_object = v1[x]
    dictionary_object_str = str(dictionary_object)
    location_pointon = dictionary_object_str.find("""pointOn""")
    location_coord = location_pointon + 12
    coordinates_x_string = dictionary_object_str[location_coord:-5]
    coordinates_x_list = coordinates_x_string.split(',')  #convert string to list of strings
    for lo in range(3):
        coordinates_x_list[lo] = float(coordinates_x_list[lo])  #change string list to float list
    v1_coordinates.append(coordinates_x_list)  #append function. adds float list to existing list
print("""these are all the coordinates for the vertices""", v1_coordinates)
#Function3: Partitioning loop through List of Coordinates
#Input: List of Coordinates
#Output: Partitioned faces of model (Can only be seen in ABAQUS viewport.)
f = p.faces
v1 = p.vertices
#try and except to ignore when vertex is not in plane
final_number_of_faces = 24
for i in range(0, final_number_of_faces, 2):
    print("this is for face:")
    for j in range(len(v1_coordinates)):
        fixed_vertex_coord = v1_coordinates[j]
        fixed_vertex_dict = v1.getClosest(coordinates=((fixed_vertex_coord[0], fixed_vertex_coord[1], fixed_vertex_coord[2]),))
        fixed_vertex_dict_str = str(fixed_vertex_dict[0])
        location_1 = fixed_vertex_dict_str.find("""],""")
        fixed_vertex_index = int(fixed_vertex_dict_str[location_1-1:location_1])
        for k in range(len(v1_coordinates)):
            try:
                if subtractTheMatrix(v1_coordinates[j], v1_coordinates[k]) == [0, 0, 0]:
                    continue
                else:
                    moving_vertex_coord = v1_coordinates[k]
                    moving_vertex_dict = v1.getClosest(coordinates=((moving_vertex_coord[0], moving_vertex_coord[1], moving_vertex_coord[2]),))
                    moving_vertex_dict_str = str(moving_vertex_dict[0])
                    location_2 = moving_vertex_dict_str.find("""],""")
                    moving_vertex_index = int(moving_vertex_dict_str[location_2-1:location_2])
                    p.PartitionFaceByShortestPath(point1=v1[fixed_vertex_index], point2=v1[moving_vertex_index], faces=f[i])
            except:
                print("face error")
                continue
Short answer
"Is it possible to automate partitioning in ABAQUS?" -- Yes
"How" -- It depends. For your example you probably will be perfectly fine with the PartitionEdgeByDatumPlane() method.
Long answer
Generally speaking, you cannot create a method that will be applicable to any model. You can automate/generalize partitioning for similar geometries and when partitioning is performed using similar logic.
Depending on your problem you have several methods to perform a partition, for example:
For face: ByShortestPath, BySketch, ByDatumPlane, etc.;
For cell: ByDatumPlane, ByExtrudeEdge, BySweepEdge, etc.
Depending on your initial geometry and the required result, you may need to use a different one of those, and your approach (the logic of your script) would evolve accordingly.
The Abaqus scripting interface is not well suited to checking intersections, geometrical dependencies, etc., so yes, if your task requires a complicated mix of several partitioning methods applied to a complex geometry, then it may call for some slow approaches (e.g. looping through all vertices).
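For instance, here is a rough sketch of the datum-plane approach for the cube from your script (the plane offset is just an example value, and PartitionCellByDatumPlane works the same way if you want to cut the whole cell instead of the faces):

p = mdb.models['Buckling_Analysis'].parts['Square']
#datum plane parallel to the XY principal plane, halfway along the extrusion depth
dp = p.DatumPlaneByPrincipalPlane(principalPlane=XYPLANE, offset=10.0)
#cut every face of the part with that plane; the partition is a feature
#that can be suppressed or deleted later from the feature tree
p.PartitionFaceByDatumPlane(datumPlane=p.datums[dp.id], faces=p.faces)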
Some additional comments:
there is no need to re-assign the variable p with p = mdb.models[myString].parts[myPart]: mdb.models[myString].Part(..) already returns the part object;
do you really need the setPrimaryObject/unsetPrimaryObject methods? When automating you generally don't need viewport methods (the same goes for session.viewports['Viewport: 1'].setValues(displayedObject=p));
please use attributes of Abaqus objects (as discussed in your previous question);
don't forget that you can loop through sequences: use for v_i in v1 instead of for x in range(len(v1)) when you don't need the index explicitly; use enumerate() when you need both the object and its index.
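For example, with direct iteration and the pointOn attribute, the whole string-manipulation block of your Function2 reduces to something like this (a sketch, assuming v1 = p.vertices as in your code; pointOn holds a tuple containing one (x, y, z) tuple):

v1_coordinates = []
for v_i in v1:
    v1_coordinates.append(list(v_i.pointOn[0]))  #no string parsing needed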
Related
I have 11 million GPS coordinates to analyse, and efficiency is my major problem. The problem is the following:
I want to keep only 1 GPS coordinate (call it a node) per 50-meter radius around it. So the code is pretty simple: I have a set G, and for every node in G I check whether the one I want to add is too close to any other one. If it's too close (<50 meters) I don't add it. Otherwise I do add it.
The problem is that the set G grows pretty fast, and towards the end, checking whether I can add one node to the set means running a for loop over millions of elements...
Here is a simplified code for the Node class:
from geopy import distance

class Node:  #a point on the map
    def __init__(self, lat, long):  #lat and long in degrees
        self.lat = lat
        self.long = long

    def distanceTo(self, otherNode):
        return distance.distance((self.lat, self.long), (otherNode.lat, otherNode.long)).km

    def equivalent(self, otherNode):
        return self.distanceTo(otherNode) < 0.05  #50 meters away
Here is the 'add' process:
currentNode = Node(lat, long)
alreadyIn = False
for n in G:  #G is the set of Nodes
    if n.equivalent(currentNode):
        alreadyIn = True
        break
if alreadyIn == False:
    G.add(currentNode)
This is not a problem of node clustering because I am not trying to detect any pattern in the dataset. I am just trying to group nodes inside a 50 meter radius.
I think the best would be a data structure that, given coordinates, returns True or False depending on whether a similar node is already in the set. However, I can't figure out which one to use, since I don't divide the environment into squares but into circles. (Yes, a Node A can be equivalent to B and C without B and C being equivalent, but I don't really mind...)
Thank you for your help !
Using an object oriented approach is usually slower for calculations like this (though more readable).
You could transform your latitude,longitude to cartesian x,y,z and create numpy arrays from your nodes and use scipy's very fast cKDTree. It provides several methods for operations like this, in your case query_ball_point might be the correct one.
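A rough sketch of that approach (the cartesian conversion and the Earth-radius constant below are my assumptions; the 0.05 km radius is the threshold from your question):

import numpy as np
from scipy.spatial import cKDTree

EARTH_RADIUS_KM = 6371.0  #assumed mean Earth radius

def to_cartesian(lat_deg, long_deg):
    #convert arrays of latitudes/longitudes (degrees) to x, y, z in km
    lat_r, long_r = np.radians(lat_deg), np.radians(long_deg)
    return EARTH_RADIUS_KM * np.column_stack((np.cos(lat_r) * np.cos(long_r),
                                              np.cos(lat_r) * np.sin(long_r),
                                              np.sin(lat_r)))

lats = np.array([48.8566, 48.85665, 40.7128])   #example readings
longs = np.array([2.3522, 2.35225, -74.0060])
points = to_cartesian(lats, longs)

tree = cKDTree(points)
#indices of all points within ~50 m of the first point
#(at this scale the straight-line chord is a good approximation of the arc)
neighbours = tree.query_ball_point(points[0], r=0.05)

query_ball_point also accepts an array of points, so all nodes can be queried in one call.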
I have 2 dictionaries. Both have key value pairs of an index and a world space location.
Something like:
{
    "vertices":
    {
        1: "(0.004700, 130.417480, -13.546420)",
        2: "(0.1, 152.4, 13.521)",
        3: "(58.21, 998.412, -78.0051)"
    }
}
Dictionary 1 will always have about 20 - 100 entries, dictionary 2 will always have around 10,000 entries.
For every point in dictionary 1, I want to find the point in dictionary 2 that's closest to it. What is the fastest way of doing that? My current approach is, for each entry in dictionary 1, to loop through all entries in dictionary 2 and keep the one that's closest.
Some untested pseudo code:
import math

def get_distance(start_point, end_point):
    #assumes the points are (x, y, z) tuples rather than strings
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(start_point, end_point)))

def get_closest_point(start_point):
    shortest_distance = 2000000
    closest_point = None
    for index, end_point in dict_2.iteritems():
        distance = get_distance(start_point, end_point)
        if distance < shortest_distance:
            shortest_distance = distance
            closest_point = end_point
    return closest_point

for index, start_point in dict_1.iteritems():
    closest_point = get_closest_point(start_point)
I think something like this will work. The "problem" is that if I have 100 entries in dictionary 1, it will be 100 x 10,000 = 1,000,000 iterations. That just doesn't seem very fast or elegant to me.
Is there a better way of doing this in Maya/Python?
EDIT:
Just want to comment that I've used a closestPointOnMesh node before, which works just fine and is a lot easier if the points you're checking against are actually part of a mesh. You could do something like this:
selected_object = pm.PyNode(pm.selected()[0])
cpom = pm.createNode("closestPointOnMesh", name="cpom")
for vertex, position in dict_1.iteritems():
    selected_object.worldMesh >> cpom.inMesh
    cpom.inPosition.set(dict_1.get(vertex))
    print "closest vertex is %s " % cpom.closestVertexIndex.get()
Instant reply from the node and all is dandy. However, if the list of points you're checking against is not part of a mesh you can't use this. Would it actually be possible/quicker to:
Construct a mesh out of the points in dictionary 2
Use mesh with closestPointOnMesh node
Delete mesh
You definitely need an acceleration structure for non-trivial amounts of points. A KD tree or an octree is what you want -- KD trees are more performant on search but slower to build and can be harder to code. Also since Octrees are spatial rather than binary they may make it easier to do trivial tests.
You can get a python octree here: http://code.activestate.com/recipes/498121-python-octree-implementation/
If you're doing a lot of distance checks you'll definitely want to use the Maya API vector classes to do the actual math compares -- that will be much, much faster than the equivalent pure Python. You can get these from pymel.datatypes if you don't know the API well, although using the newer API2 versions is pretty painless.
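For illustration, a tiny sketch of the API2 vector maths (maya.api.OpenMaya, only available inside Maya; the coordinates are just the example values from the question):

import maya.api.OpenMaya as om

a = om.MVector(0.004700, 130.417480, -13.546420)
b = om.MVector(0.1, 152.4, 13.521)
#subtraction and length() run in compiled code, much cheaper than hand-rolled Python math
dist = (a - b).length()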
You need what is called a KD Tree. Build a KD Tree with points in your second dictionary and query for the closest point to each point in first dictionary.
I am not familiar with Maya, but if you can use scipy, you can use this.
PS: There seems to be an implementation in C++ here.
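If scipy is an option, a minimal sketch could look like this (it assumes the dictionary values have already been parsed into (x, y, z) tuples):

from scipy.spatial import cKDTree

indices = list(dict_2.keys())
tree = cKDTree([dict_2[i] for i in indices])   #build once over the ~10,000 points

closest = {}
for key, point in dict_1.items():
    dist, pos = tree.query(point)              #nearest neighbour in dict_2
    closest[key] = (indices[pos], dist)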
I have a large (>200,000) list of objects (of type RegionProperties, produced by skimage.measure.regionprops). The attributes of each object can be accessed with [] or with the . operator. For example:
my_list = skimage.measure.regionprops(...)
my_list[0].area
gets the area.
I want to filter this list to extract elements which have area > 300 to then act on them. I have tried:
# list comprehension
selection = [x for x in my_list if x.area > 300]
for foo in selection:
    ...

# filter (with predefined function rather than lambda, for speed)
def my_condition(x):
    return x.area > 300
selection = filter(my_condition, my_list)
for foo in selection:
    ...

# generator
def filter_by_area(x):
    for el in x:
        if el.area > 300:
            yield el
for foo in filter_by_area(my_list):
    ...
I find that generator ~ filter > comprehension in terms of speed, but only marginally (4.15s, 4.16s, 4.3s). I have to repeat such a filter thousands of times, resulting in hours of CPU time just filtering a list. This simple operation is currently the bottleneck of the whole image analysis process.
Is there another way of doing this? Possibly involving C, or some peculiarity of RegionProperties objects?
Or maybe a completely different algorithm? I thought about eroding the image to make small particles disappear and only keep large ones, but the measurements have to be done on the non-eroded image, and finding the correspondence between the two takes a long time too.
Thank you very much in advance for any pointer!
As suggested by Mr. F, I tried isolating the filtering part by doing some dumb operation in the loop:
selection = [x for x in my_list if x.area > 300]
for foo in selection:
    a = 1 + 1
this resulted in exactly the same times as before, even though I was extracting a few properties of the particles in the loop before. This pushed me to look more into how the area property of particles, on which I am doing the filtering, is extracted.
It turns out that skimage.measure.regionprops just prepares the data to compute the properties; it does not compute them up front. Extracting one property (such as area) triggers the computation of all the properties needed to get to the extracted property. It turns out that the area is computed as the first moment of the particle image, which in turn triggers the computation of all the moments, which triggers other computations, etc. So just doing x.area is not just extracting a pre-computed value but actually computing plenty of stuff.
There is a simpler way to compute the area. For the record, I do it this way:
numpy.sum(x._label_image[x._slice] == x.label)
So my problem is actually very specific to scikit-image RegionProperties objects. By using the formula above to compute the area, instead of using x.area, I get the filtering time down from 4.3s to ~1s.
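Put together, the faster filter looks roughly like this (note that _label_image and _slice are private attributes of RegionProperties, so this relies on scikit-image internals that may change between versions):

import numpy as np

def fast_area(x):
    #pixel count of the region, computed directly from the label image
    return np.sum(x._label_image[x._slice] == x.label)

selection = [x for x in my_list if fast_area(x) > 300]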
Thanks to Mr. F for the comment, which prompted me to explore the scikit-image code and solve my performance problem (the whole image processing routine went from several days to several hours!).
PS: by the way, with this code, it seems list comprehension gets a (very small) edge over the other two methods. And it's clearer so that's perfect!
I have a dictionary which has coordinates as keys. They are by default in 3 dimensions, like dictionary[(x,y,z)]=values, but may be in any dimension, so the code can't be hard coded for 3.
I need to find if there are other values within a certain radius of a new coordinate, and I ideally need to do it without having to import any plugins such as numpy.
My initial thought was to split the input into a cube and check that no points match, but obviously that is limited to integer coordinates and would slow down rapidly (a radius of 5 would require 729x the processing); with my initial code already taking at least a minute for relatively small values, I can't really afford this.
I heard finding the nearest neighbor may be the best way, and ideally, cutting down the keys used to a range of +- a certain amount would be good, but I don't know how you'd do that when there's more than one point being used. Here's how I'd do it with my current knowledge:
dimensions = 3
minimumDistance = 0.9

#example dictionary + input
dictionary = {}
dictionary[(0,0,0)] = []
dictionary[(0,0,1)] = []
keyToAdd = [0,1,1]

closestMatch = 2**1000
tooClose = False

for key in dictionary:
    #calculate distance to new point (the keys are already tuples, no string parsing needed)
    distanceToPoint = sum((key[i] - keyToAdd[i])**2 for i in range(dimensions)) ** 0.5
    #if you want the overall closest match
    if distanceToPoint < closestMatch:
        closestMatch = distanceToPoint
    #if you want to just check it's not within that radius
    if distanceToPoint < minimumDistance:
        tooClose = True
        break
However, performing calculations this way may still run very slowly (it has to do this for millions of values). I've searched the problem, but most people seem to have simpler sets of data to work with. If anyone can offer any tips I'd be grateful.
You say you need to determine IF there are any keys within a given radius of a particular point. Thus, you only need to scan the keys, computing the distance of each to the point until you find one within the specified radius. (And if you do comparisons to the square of the radius, you can avoid the square roots needed for the actual distance.)
One optimization would be to sort the keys based on their "Manhattan distance" from the point (that is, add the component offsets), since the Euclidean distance will never be less than this. This would avoid some of the more expensive calculations (though I don't think you need any trigonometry).
If, as you suggest later in the question, you need to handle multiple points, you can obviously process each individually, or you could find the center of those points and sort based on that.
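A minimal sketch of that early-exit scan, assuming the keys of the dictionary are numeric tuples and keyToAdd is the new point:

def too_close(dictionary, keyToAdd, radius):
    radius_squared = radius * radius
    for key in dictionary:
        #compare squared distances so no square root is needed
        if sum((a - b) ** 2 for a, b in zip(key, keyToAdd)) < radius_squared:
            return True   #stop at the first key within the radius
    return False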
I have a list of lists in the form of
[[x1,.....,x8], [x1,.......,x8], ..............., [x1,.....,x8]]. The number of lists in that list can go up to a million. Each list has 4 GPS co-ordinates which give the four corner points of a rectangle (it is assumed that each segment is in the form of a rectangle).
Problem: Given a new point, I need to determine which segment the point falls on and create a new one if it falls in none of them. I am not uploading the data into MySQL as of now; it comes in as a simple text file. I find the co-ordinates for any given car from the text file.
What I tried: I am thinking of using R-trees to find all points which are near the given point (near == 200 meters maximum). But even with R-trees, there seem to be too many options: R, R*, Hilbert.
Q1. Which one should be opted for ?
Q2. Is there a better option than R-trees? Can something be done to search faster within the list?
Thanks a lot.
[ {a1:[........]},{a2:[.......]},{a3:[.........]},.... ,{a20:[.....]}] .
Isn't the problem "find whether a given point falls within a certain rectangle in 2D space"?
That could be separated dimensionally, couldn't it? Give each rectangle an ID, then separate into lists of one-dimensional ranges ((id, x0, x1), (id, y0, y1)) and find all the ranges in both dimensions the point falls in. (I'm fairly sure there are very efficient algorithms for this. Heck, you could even leverage, say, sqlite already.) Then just intersect the ID sets you get and you should find all rectangles the point falls in, if any. (Of course you can exit early if either of the single dimensional queries returns no result.)
Not sure if this'd be faster or smarter than R-trees or other spatial indexes though. Hope this helps anyway.
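For what it's worth, a rough sketch of the idea in plain Python (the range lists and the rectangle below are made-up examples):

def ids_containing(ranges, value):
    #ids of all (id, lo, hi) ranges that contain value
    return set(rid for rid, lo, hi in ranges if lo <= value <= hi)

def rectangles_containing(point, x_ranges, y_ranges):
    ids_x = ids_containing(x_ranges, point[0])
    if not ids_x:                     #exit early if one dimension already rules everything out
        return set()
    return ids_x & ids_containing(y_ranges, point[1])

#example: rectangle 0 spans x in [0, 2] and y in [0, 1]
x_ranges = [(0, 0.0, 2.0)]
y_ranges = [(0, 0.0, 1.0)]
print rectangles_containing((1.0, 0.5), x_ranges, y_ranges)   #-> set([0])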
import random as ra

# my_data will hold tuples of gps readings
# under the key of (row,col), knowing that the size of
# the row and col is 10, it will give an overall grid coverage.
# Another dict could translate row/col coordinates into some
# more useful region names
my_data = {}

def get_region(x, y, region_size=10):
    """Build a tuple of row/col based on
    the values provided and region square dimension.
    It's for demonstration only and it uses rather naive calculation as
    coordinate / grid cell size"""
    row = int(x / region_size)
    col = int(y / region_size)
    return (row, col)

#make some examples and build my_data
for loop in range(10000):
    #simulate some readings
    x = ra.choice(range(100))
    y = ra.choice(range(100))
    my_coord = get_region(x, y)
    if my_data.get(my_coord):
        my_data[my_coord].append((x, y))
    else:
        my_data[my_coord] = [(x, y),]

print my_data