Abaqus Scripting: Error on code to create a set of vertices from several instances - python

Here is the error I am getting when trying to define a set of vertices through a Python script:
mdb.models['Model-8'].rootAssembly.Set(name='Set-3', vertices=
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'1'].vertices.findAt((75.0, 125.0, 0))+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'2'].vertices.findAt((75, 125, 35), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'3'].vertices.findAt((75, 125, 30), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'4'].vertices.findAt((75, 125, 25), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'5'].vertices.findAt((75, 125, 20), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'6'].vertices.findAt((75, 125, 15), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'7'].vertices.findAt((75, 125, 10), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'8'].vertices.findAt((75, 125, 5), ))
Above is the code I am trying to use to define the set. I double-checked that all the coordinates correspond to where those vertices are located for each of the instances. The code above was adapted from the journal file shown below; all I changed was .getSequenceFromMask(mask=('[#1 ]', ), ) to .findAt((X, Y, Z), ), where X, Y and Z are the coordinates of the vertex of interest. The error makes it look like Abaqus doesn't accept "+" when defining a set of vertices, yet that is exactly what the journal file I adapted my code from spat out.
mdb.models['Model-8'].rootAssembly.Set(name='Set-2', vertices=
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'1'].vertices.getSequenceFromMask(mask=('[#1 ]', ), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'2'].vertices.getSequenceFromMask(mask=('[#1 ]', ), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'3'].vertices.getSequenceFromMask(mask=('[#1 ]', ), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'4'].vertices.getSequenceFromMask(mask=('[#1 ]', ), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'5'].vertices.getSequenceFromMask(mask=('[#1 ]', ), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'6'].vertices.getSequenceFromMask(mask=('[#1 ]', ), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'7'].vertices.getSequenceFromMask(mask=('[#1 ]', ), )+\
mdb.models['Model-8'].rootAssembly.instances[myModule+'_'+myRigidBeamName+'-'+'8'].vertices.getSequenceFromMask(mask=('[#1 ]', ), ))

Found a way to work around this. It seems like Abaqus only likes the getSequenceFromMask() method when it comes to defining a set. Selecting vertices with the findAt() method works fine in other cases, but not when defining a set. So the workaround is to convert the coordinates of the vertices of interest into a series of "clicks" (i.e. just like how the set would be defined in the Abaqus GUI) at the exact locations of the vertices. Below is the snippet that does the job. There are 8 vertices grouped into 1 set: v selects the vertex, and vi (where i = 1, 2, 3, ...) "clicks" the vertex selected in v to add it to the set.
myInstances = mdb.models['Model-8'].rootAssembly.instances
v=myInstances[myModule+'_'+myRigidBeamName+'-'+'1'].vertices.getClosest(coordinates=(((75, 125, 0)),))
v1=myInstances[myModule+'_'+myRigidBeamName+'-'+'1'].vertices.findAt((((v[0][1])),))
v=myInstances[myModule+'_'+myRigidBeamName+'-'+'2'].vertices.getClosest(coordinates=(((75, 125, 35)),))
v2=myInstances[myModule+'_'+myRigidBeamName+'-'+'2'].vertices.findAt((((v[0][1])),))
v=myInstances[myModule+'_'+myRigidBeamName+'-'+'3'].vertices.getClosest(coordinates=(((75, 125, 30)),))
v3=myInstances[myModule+'_'+myRigidBeamName+'-'+'3'].vertices.findAt((((v[0][1])),))
v=myInstances[myModule+'_'+myRigidBeamName+'-'+'4'].vertices.getClosest(coordinates=(((75, 125, 25)),))
v4=myInstances[myModule+'_'+myRigidBeamName+'-'+'4'].vertices.findAt((((v[0][1])),))
v=myInstances[myModule+'_'+myRigidBeamName+'-'+'5'].vertices.getClosest(coordinates=(((75, 125, 20)),))
v5=myInstances[myModule+'_'+myRigidBeamName+'-'+'5'].vertices.findAt((((v[0][1])),))
v=myInstances[myModule+'_'+myRigidBeamName+'-'+'6'].vertices.getClosest(coordinates=(((75, 125, 15)),))
v6=myInstances[myModule+'_'+myRigidBeamName+'-'+'6'].vertices.findAt((((v[0][1])),))
v=myInstances[myModule+'_'+myRigidBeamName+'-'+'7'].vertices.getClosest(coordinates=(((75, 125, 10)),))
v7=myInstances[myModule+'_'+myRigidBeamName+'-'+'7'].vertices.findAt((((v[0][1])),))
v=myInstances[myModule+'_'+myRigidBeamName+'-'+'8'].vertices.getClosest(coordinates=(((75, 125, 5)),))
v8=myInstances[myModule+'_'+myRigidBeamName+'-'+'8'].vertices.findAt((((v[0][1])),))
mdb.models['Model-8'].rootAssembly.Set(name='Set-4', vertices=v1+v2+v3+v4+v5+v6+v7+v8)
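For reference, here is a more compact sketch of the same workaround that loops over the instance suffixes and vertex coordinates instead of repeating the two lines eight times (it assumes the same model name, instance naming scheme, myModule and myRigidBeamName as above; note that findAt is given a tuple of coordinate tuples here, so it should return a sequence of vertices that supports + concatenation):
myInstances = mdb.models['Model-8'].rootAssembly.instances
vertexCoords = {1: (75, 125, 0), 2: (75, 125, 35), 3: (75, 125, 30), 4: (75, 125, 25),
                5: (75, 125, 20), 6: (75, 125, 15), 7: (75, 125, 10), 8: (75, 125, 5)}
allVertices = None
for i, xyz in vertexCoords.items():
    inst = myInstances[myModule + '_' + myRigidBeamName + '-' + str(i)]
    # getClosest returns, for each input point, the nearest vertex and its exact coordinates
    closest = inst.vertices.getClosest(coordinates=(xyz,))
    # pass a tuple of coordinate tuples so findAt returns a sequence, not a single Vertex
    verts = inst.vertices.findAt((closest[0][1],))
    allVertices = verts if allVertices is None else allVertices + verts
mdb.models['Model-8'].rootAssembly.Set(name='Set-4', vertices=allVertices)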

Related

Calculating angles of body skeleton in video using OpenPose

Disclaimer: This question is regarding OpenPose, but the key here is actually to figure out how to use the output (the coordinates stored in the JSON) and not how to use OpenPose, so please consider reading it to the end.
I have a video of a person from the side on a bike (a profile of him sitting, so we see his right side). I use OpenPose to extract the coordinates of the skeleton. OpenPose provides the coordinates in a JSON file that looks like this (see the docs for an explanation):
{
"version": 1.3,
"people": [
{
"person_id": [
-1
],
"pose_keypoints_2d": [
594.071,
214.017,
0.917187,
523.639,
216.025,
0.797579,
519.661,
212.063,
0.856948,
539.251,
294.394,
0.873084,
619.546,
304.215,
0.897219,
531.424,
221.854,
0.694434,
550.986,
310.036,
0.787151,
625.477,
339.436,
0.845077,
423.656,
319.878,
0.660646,
404.111,
321.807,
0.650697,
484.434,
437.41,
0.85125,
404.13,
556.854,
0.791542,
443.261,
319.801,
0.601241,
541.241,
370.793,
0.921286,
502.02,
494.141,
0.799306,
592.138,
198.429,
0.943879,
0,
0,
0,
562.742,
182.698,
0.914112,
0,
0,
0,
537.25,
504.024,
0.530087,
535.323,
500.073,
0.526998,
486.351,
500.042,
0.615485,
449.168,
594.093,
0.700363,
431.482,
594.156,
0.693443,
386.46,
560.803,
0.803862
],
"face_keypoints_2d": [],
"hand_left_keypoints_2d": [],
"hand_right_keypoints_2d": [],
"pose_keypoints_3d": [],
"face_keypoints_3d": [],
"hand_left_keypoints_3d": [],
"hand_right_keypoints_3d": []
}
]
}
From what I understand, each JSON is a frame of the video.
My goal is to detect the angles of specific coordinates like right knee, right arm, etc. For example:
openpose_angles = [(9, 10, 11, "right_knee"),
                   (2, 3, 4, "right_arm")]
This is based on the following OpenPose skeleton dummy:
What I did is to calculate the angle between three coordinates (using Python):
temp_df = json.load(open(os.path.join(jsons_dir, file)))
listPoints = list(zip(*[iter(temp_df['people'][person_number]['pose_keypoints_2d'])] * 3))
count = 0
lmList2 = {}
for x, y, c in listPoints:
    lmList2[count] = (x, y, c)
    count += 1
p1 = angle_cords[0]
p2 = angle_cords[1]
p3 = angle_cords[2]
x1, y1, c1 = lmList2[p1]
x2, y2, c2 = lmList2[p2]
x3, y3, c3 = lmList2[p3]
# Calculate the angle
angle = math.degrees(math.atan2(y3 - y2, x3 - x2) -
                     math.atan2(y1 - y2, x1 - x2))
if angle < 0:
    angle += 360
I saw this method on some blog (I forgot which one), but it was related to OpenCV instead of OpenPose (not sure if that makes a difference), and I get angles that do not make sense. We showed it to our teacher and he suggested we use vectors to calculate the angles instead of math.atan2, but we got confused about how to implement this.
To summarize, here is the question: what is the best way to calculate the angles? How can I calculate them using vectors?
Your teacher is right. I suspect the problem is that 3 points can make up 3 different angles depending on the order. Just consider the angles in a triangle. Also you seem to ignore the 3rd coordinate.
Reconstruct the Skeleton
In your picture you indicate that the edges/bones of the skeleton are
edges = {(0, 1), (0, 15), (0, 16), (1, 2), (1, 5), (1, 8), (2, 3), (3, 4), (5, 6), (6, 7), (8, 9), (8, 12), (9, 10), (10, 11), (11, 22), (11, 24), (12, 13), (13, 14), (14, 19), (14, 21), (15, 17), (16, 18), (19, 20), (22, 23)}
I get the points from your json file with
points = np.array(pose['people'][0]['pose_keypoints_2d']).reshape(-1, 3)
Now I plot that, ignoring the 3rd component, to get an idea of what I am working with. Notice that this does not change the proportions much, since the 3rd component is really small compared to the others.
One definitely recognizes an upside-down man. I notice that there seems to be some kind of artifact, but I suspect this is just an error in recognition that would be better in another frame.
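For reference, a minimal plotting sketch that could produce such a picture (matplotlib is an assumption here; the original answer only shows the resulting image):
import matplotlib.pyplot as plt

for a, b in edges:
    # skip keypoints that OpenPose reported as all zeros (not detected)
    if points[a, 2] == 0 or points[b, 2] == 0:
        continue
    plt.plot(points[[a, b], 0], points[[a, b], 1], 'o-')
# image y coordinates grow downwards, which is why the figure appears upside down
plt.gca().set_aspect('equal')
plt.show()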
Calculate the Angle
Recall that the dot product divided by the product of the norms gives the cosine of the angle, i.e. cos(theta) = (v1 . v2) / (|v1| |v2|); see the Wikipedia article on the dot product. So now I can get the angle of two joined edges like this.
def get_angle(edge1, edge2):
    assert tuple(sorted(edge1)) in edges
    assert tuple(sorted(edge2)) in edges
    edge1 = set(edge1)
    edge2 = set(edge2)
    mid_point = edge1.intersection(edge2).pop()
    a = (edge1 - edge2).pop()
    b = (edge2 - edge1).pop()
    v1 = points[mid_point] - points[a]
    v2 = points[mid_point] - points[b]
    angle = math.degrees(np.arccos(np.dot(v1, v2)
                                   / (np.linalg.norm(v1) * np.linalg.norm(v2))))
    return angle
For example if you wanted the elbow angles you could do
get_angle((3, 4), (2, 3))
get_angle((5, 6), (6, 7))
giving you
110.35748420197164
124.04586139643376
Which to me makes sense when looking at my picture of the skeleton. It's a bit more than a right angle.
What if I had to calculate the angle between two vectors that do not share one point?
In that case you have to be more careful, because the vectors' orientation matters. First, here is the code:
def get_oriented_angle(edge1, edge2):
    assert tuple(sorted(edge1)) in edges
    assert tuple(sorted(edge2)) in edges
    v1 = points[edge1[0]] - points[edge1[1]]
    v2 = points[edge2[0]] - points[edge2[1]]
    angle = math.degrees(np.arccos(np.dot(v1, v2)
                                   / (np.linalg.norm(v1) * np.linalg.norm(v2))))
    return angle
As you can see the code is much easier because I don't order the points for you. But it is dangerous since there are two angles between two vectors (if you don't consider their orientation). Make sure both vectors point in the direction of the points you're considering the angle at (both in the opposite direction works too).
Here is the same example as above
get_oriented_angle((3, 4), (2, 3)) -> 69.64251579802836
As you can see this does not agree with get_angle((3, 4), (2, 3))! If you want the same result you have to put the 3 first (or last) in both cases.
If you do
get_oriented_angle((3, 4), (3, 2)) -> 110.35748420197164
It is the same angle as above.

How to call a function with data passed in later

I am working on a school project where I calculate different predictors for some data, and I created a list of predictor functions so I can use them in a for loop.
predictor_day = (
[(f"Median {x}", create_median_predictor(x)) for x in (10, 30, 60, 80, 120)]
+ [(f"Average {x}", create_average_predictor(x)) for x in (10, 30, 60, 80, 120)]
+ [
(f"Weighted average {x}", create_weighted_average_predictor(x))
for x in (10, 30, 60, 80, 120)
]
)
This is what one of the predictor functions looks like:
def create_median_predictor(window_size):
    def median_predictor(train_data):
        return median(train_data[-window_size:])
    return median_predictor
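For context, the other factories referenced above (create_average_predictor, create_weighted_average_predictor) are not shown in the question; they presumably follow the same closure pattern, e.g. something like this hypothetical sketch:
def create_average_predictor(window_size):
    def average_predictor(train_data):
        window = train_data[-window_size:]
        return sum(window) / len(window)
    return average_predictor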
Now I also wanted to create a predictor which takes all the data and returns the median of it. This is what it looks like:
def all_data_median_predictor(train_data):
    return median(train_data)
and this is where I am calling it:
for predictor in predictor_day:
    prediction = predictor(train_data)
but I can't seem to figure out a way to add this one to my predictor_day variable, as it is always missing the train_data parameter. Is there any way I can add it to this variable?
Based on the other lists, I assume that the type of predictor_day is List[Tuple[str, Callable]].
predictor_day = (
[(f"Median {x}", create_median_predictor(x)) for x in (10, 30, 60, 80, 120)]
+ [(f"Average {x}", create_average_predictor(x)) for x in (10, 30, 60, 80, 120)]
+ [
(f"Weighted average {x}", create_weighted_average_predictor(x))
for x in (10, 30, 60, 80, 120)
]
+ [("all data median predictor", all_data_median_predictor)] # the change is in this line
)
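With that entry added, the loop can unpack the name together with the callable (a sketch; train_data is assumed to be defined as in the question):
for name, predictor in predictor_day:
    prediction = predictor(train_data)
    print(name, prediction)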

How to check if a set of coordinates matches a tetris piece in Python

I’m working with tetris pieces.
The pieces are defined with coordinates, where each piece has an origin block (0,0).
So an L piece could be defined as [(0,0), (0,1), (0,2), (1,2)] as well as [(0,-1), (0,0), (0,1), (1,1)] depending on where you place the origin block.
I want to check whether a set of coordinates A, e.g. [(50,50), (50,51), (50,52), (51,52)], matches the shape of a given tetris piece B.
I'm currently using numpy to subtract one of the A values from every value in A to get relative coordinates, then compare with B. The ordering of A will always be in increasing order, but is not guaranteed to match the ordering of B. B is stored in a list with other tetris pieces, and throughout the program its origin block will remain the same. The method below seems inefficient and doesn't account for rotations / reflections of B.
def isAinB(A, B):  # A and B are numpy arrays
    for i in range(len(A)):
        matchCoords = A - A[i]
        setM = set([tuple(x) for x in matchCoords])
        setB = set([tuple(x) for x in B])
        if setM == setB:  # sets are used here because the ordering of M and B is not guaranteed to match
            return True
    return False
Is there an efficient method / function to implement this? (Accounting for rotations and reflections as well, if possible.)
This is one way to approach it. The idea is to first build the set of all variations of a piece in some canonical coordinates (you can do this once per piece kind and reuse it), then put the given piece into the same canonical coordinates and compare.
# Rotates a piece by 90 degrees
def rotate_coords(coords):
    return [(y, -x) for x, y in coords]

# Returns a canonical coordinates representation of a piece as a frozen set
def canonical_coords(coords):
    x_min = min(x for x, _ in coords)
    y_min = min(y for _, y in coords)
    return frozenset((y - y_min, x - x_min) for x, y in coords)

# Makes all possible variations of a piece (optionally including reflections)
# as a set of canonical representations
def make_piece_variations(piece, reflections=True):
    variations = {canonical_coords(piece)}
    for i in range(3):
        piece = rotate_coords(piece)
        variations.add(canonical_coords(piece))
    if reflections:
        piece_reflected = [(y, x) for x, y in piece]
        variations.update(make_piece_variations(piece_reflected, False))
    return variations

# Checks if a given piece is in a set of variations
def matches_piece(piece, variations):
    return canonical_coords(piece) in variations
These are some tests:
# L-shaped piece
l_piece = [(0, 0), (0, 1), (0, 2), (1, 2)]
l_piece_variations = make_piece_variations(l_piece, reflections=True)
# Same orientation
print(matches_piece([(50, 50), (50, 51), (50, 52), (51, 52)], l_piece_variations))
# True
# Rotated
print(matches_piece([(50, 50), (51, 50), (52, 50), (52, 49)], l_piece_variations))
# True
# Reflected and rotated
print(matches_piece([(50, 50), (49, 50), (48, 50), (48, 49)], l_piece_variations))
# True
# Rotated and different order of coordinates
print(matches_piece([(50, 48), (50, 50), (49, 48), (50, 49)], l_piece_variations))
# True
# Different piece
print(matches_piece([(50, 50), (50, 51), (50, 52), (50, 53)], l_piece_variations))
# False
This is not a particularly smart algorithm, but it works with minimal constraints.
EDIT: Since in your case you say that the first block and the relative order will always be the same, you can redefine the canonical coordinates as follows to make it just a bit more optimal (although the performance difference will probably be negligible and its use will be more restricted):
def canonical_coords(coords):
    return tuple((y - coords[0][1], x - coords[0][0]) for x, y in coords[1:])
The first coordinate will always be (0, 0), so you can skip that and use it as reference point for the rest, and instead of a frozenset you can use a tuple for the sequence of coordinates.
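For example, rebuilding the variations with this version, the same-orientation test from above still matches (a quick check under the stated order and origin assumptions):
l_piece_variations = make_piece_variations(l_piece, reflections=True)
print(matches_piece([(50, 50), (50, 51), (50, 52), (51, 52)], l_piece_variations))
# True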

Python set a column in list of list 2D matrix

So given two lists
y_new = ( 165, 152, 145, 174)
pos_2D = ( (2,3), (32,52), (73,11), (43,97) )
I would like to do something like
pos_2D_new = setCol(2, y_new, pos_2D)
where column 2 is the Y coordinate.
pos_2D_new = ( (2,165), (32,152), (73,145), (43,174) )
How can I set a 1D tuple as a column of a 2D tuple in Python?
You can use a generator expression with zip:
pos_2D_new = tuple((x, y) for (x, _), y in zip(pos_2D, y_new))
With your sample input, pos_2D_new would become:
((2, 165), (32, 152), (73, 145), (43, 174))
You can do this with:
pos_2D_new = [ (x, y2) for (x, _), y2 in zip(pos_2D, y_new) ]
or if you want a tuple:
pos_2D_new = tuple((x, y2) for (x, __), y2 in zip(pos_2D, y_new))
We thus iterate concurrently over pos_2D and y_new, and each time construct a new tuple (x, y2).
The above is of course not very generic; we can make it more generic and allow specifying which item to replace, like:
def replace_coord(d, old_pos, new_coord):
    return tuple(x[:d] + (y,) + x[d+1:] for x, y in zip(old_pos, new_coord))
So for the x-coordinate you can use replace_coord(0, old_pos, new_x_coord) whereas for the y-coordinate it is replace_coord(1, old_pos, new_y_coord). This also works for coordinates in three or more dimensions.
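For example, with the sample data from the question:
y_new = (165, 152, 145, 174)
pos_2D = ((2, 3), (32, 52), (73, 11), (43, 97))
print(replace_coord(1, pos_2D, y_new))
# ((2, 165), (32, 152), (73, 145), (43, 174))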
Adapted to the setCol signature from the question, this would give
def setCol(idx, coords_1d, coords_nd):
    # recalling that indexing starts from 0
    idx -= 1
    return [
        c_nd[:idx] + (c_1d,) + c_nd[idx+1:]
        for (c_1d, c_nd) in zip(coords_1d, coords_nd)
    ]
and
>>> setCol(2, y_new, pos_2D)
[(2, 165), (32, 152), (73, 145), (43, 174)]

tkinter polygons

I'm relatively new to tkinter, and I'm making a game which uses only squares. The book I'm copying from only shows triangles. Here is the code:
# The tkinter launcher (Already complete)
from tkinter import *
HEIGHT = 500
WIDTH = 800
window = Tk()
window.title ('VOID')
c = Canvas (window, width=WIDTH, height=HEIGHT, bg='black')
c.pack()
# Block maker (Issue)
ship_id = c.create_polygon (5, 5, 5, 25, 30, 15, fill='red')
I don't get any errors; it's just the string of numbers (5, 5, 5, 25, 30, 15) that I don't understand, as I'm trying to make a square.
Abstract of Canvas.create_polygon definition:
As displayed, a polygon has two parts: its outline and its interior. Its geometry is specified as a series of vertices [(x0, y0), (x1, y1), … (xn, yn)] (...)
id = C.create_polygon(x0, y0, x1, y1, ..., option, ...)
So you need to pass the coordinates of the square in this specified order.
For example:
myCanvas.create_polygon(5, 5, 5, 10, 10, 10, 10, 5)
can be read as
myCanvas.create_polygon(5,5, 5,10, 10,10, 10,5)
will create a square whose vertices are (5, 5), (5, 10), (10, 10) and (10, 5).
Here's some info on the create_polygon function (official docs).
According to the nmt.edu page, the format of the function call is
id = C.create_polygon(x0, y0, x1, y1, ..., option, ...)
This means that the ship_id = c.create_polygon (5, 5, 5, 25, 30, 15, fill='red') call creates a polygon with the following vertices: (5,5), (5,25), (30, 15) and fills the polygon with red.
If you want to create a square, you'd have to do the following:
ship_id = c.create_polygon (5, 5, 5, 25, 25, 25, 25, 5, fill='red')
which creates a rectangle with vertices (5,5), (5,25), (25,25), (25,5).
If you wanted a more reproducible way to make ships, you could do something like
def ship(x, y):
    return [x-5, y-5, x+5, y-5, x+5, y+5, x-5, y+5]

ship_id = c.create_polygon(*ship(100, 500), fill='red')
The above creates a sort of factory for ships (the ship function) in which you specify the x and y of the center of the ship, and it returns a list of the vertices that can be passed to create_polygon.
You could even take this a step further and specify the ship size with a tweaked version of the function:
def ship(x, y, w, h):
    return [x-w/2, y-h/2, x+w/2, y-h/2, x+w/2, y+h/2, x-w/2, y+h/2]

ship_id = c.create_polygon(*ship(100, 500, 8, 8), fill='red')
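As a usage sketch (assuming the canvas c and window from the question's snippet), the size-aware helper can draw a row of blocks; mainloop() is only added here to keep the window open when run as a script:
for i in range(4):
    c.create_polygon(*ship(100 + 40 * i, 250, 20, 20), fill='red')
window.mainloop()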
