How to check if a set of coordinates matches a tetris piece in Python

I’m working with tetris pieces.
The pieces are defined with coordinates, where each piece has an origin block at (0, 0).
So an L piece could be defined as [(0,0), (0,1), (0,2), (1,2)] as well as [(0,-1), (0,0), (0,1), (1,1)] depending on where you place the origin block.
I want to check whether a set of coordinates A e.g. [(50,50), (50,51), (50,52), (51,52)] matches the shape of a given tetris piece B.
I'm currently using numpy to subtract one of the A values from every value in A to get relative coordinates, then comparing with B. The ordering of A will always be increasing, but it is not guaranteed to match the ordering of B. B is stored in a list with other tetris pieces, and throughout the program its origin block will remain the same. The method below seems inefficient and doesn't account for rotations / reflections of B.
def isAinB(A, B):  # A and B are numpy arrays
    for i in range(len(A)):
        matchCoords = A - A[i]
        setM = set([tuple(x) for x in matchCoords])
        setB = set([tuple(x) for x in B])
        if setM == setB:  # sets are used here because the ordering of M and B is not guaranteed to match
            return True
    return False
Is there an efficient method / function to implement this (accounting for rotations and reflections as well, if possible)?

This is one way to approach it. The idea is to first build the set of all variations of a piece in some canonical coordinates (you can do this once per piece kind and reuse it), then put the given piece into the same canonical coordinates and compare.
# Rotates a piece by 90 degrees
def rotate_coords(coords):
    return [(y, -x) for x, y in coords]

# Returns a canonical coordinates representation of a piece as a frozen set
def canonical_coords(coords):
    x_min = min(x for x, _ in coords)
    y_min = min(y for _, y in coords)
    return frozenset((y - y_min, x - x_min) for x, y in coords)

# Makes all possible variations of a piece (optionally including reflections)
# as a set of canonical representations
def make_piece_variations(piece, reflections=True):
    variations = {canonical_coords(piece)}
    for i in range(3):
        piece = rotate_coords(piece)
        variations.add(canonical_coords(piece))
    if reflections:
        piece_reflected = [(y, x) for x, y in piece]
        variations.update(make_piece_variations(piece_reflected, False))
    return variations

# Checks if a given piece is in a set of variations
def matches_piece(piece, variations):
    return canonical_coords(piece) in variations
These are some tests:
# L-shaped piece
l_piece = [(0, 0), (0, 1), (0, 2), (1, 2)]
l_piece_variations = make_piece_variations(l_piece, reflections=True)
# Same orientation
print(matches_piece([(50, 50), (50, 51), (50, 52), (51, 52)], l_piece_variations))
# True
# Rotated
print(matches_piece([(50, 50), (51, 50), (52, 50), (52, 49)], l_piece_variations))
# True
# Reflected and rotated
print(matches_piece([(50, 50), (49, 50), (48, 50), (48, 49)], l_piece_variations))
# True
# Rotated and different order of coordinates
print(matches_piece([(50, 48), (50, 50), (49, 48), (50, 49)], l_piece_variations))
# True
# Different piece
print(matches_piece([(50, 50), (50, 51), (50, 52), (50, 53)], l_piece_variations))
# False
This is not a particularly smart algorithm, but it works with minimal constraints.
EDIT: Since in your case you say that the first block and the relative order will always be the same, you can redefine the canonical coordinates as follows to make it slightly more efficient (although the performance difference will probably be negligible and its use will be more restricted):
def canonical_coords(coords):
    return tuple((y - coords[0][1], x - coords[0][0]) for x, y in coords[1:])
The first coordinate will always map to (0, 0), so you can skip it and use it as the reference point for the rest, and instead of a frozenset you can use a tuple for the sequence of coordinates.
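A quick usage sketch with this variant, reusing make_piece_variations and matches_piece from above (they work unchanged, since everything goes through canonical_coords):
# rebuild the variations with the tuple-based canonical_coords
l_piece = [(0, 0), (0, 1), (0, 2), (1, 2)]
l_piece_variations = make_piece_variations(l_piece, reflections=True)
# same piece translated, with the same block order relative to the origin block
print(matches_piece([(50, 50), (50, 51), (50, 52), (51, 52)], l_piece_variations))
# True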

Related

Calculating angles of body skeleton in video using OpenPose

Disclaimer: This question is about OpenPose, but the key here is actually to figure out how to use the output (the coordinates stored in the JSON), not how to use OpenPose, so please consider reading it to the end.
I have a video of a person on a bike, filmed from the side (a profile view of him sitting, so we see his right side). I use OpenPose to extract the coordinates of the skeleton. OpenPose provides the coordinates in a JSON file that looks like this (see the docs for an explanation):
{
    "version": 1.3,
    "people": [
        {
            "person_id": [-1],
            "pose_keypoints_2d": [
                594.071, 214.017, 0.917187,
                523.639, 216.025, 0.797579,
                519.661, 212.063, 0.856948,
                539.251, 294.394, 0.873084,
                619.546, 304.215, 0.897219,
                531.424, 221.854, 0.694434,
                550.986, 310.036, 0.787151,
                625.477, 339.436, 0.845077,
                423.656, 319.878, 0.660646,
                404.111, 321.807, 0.650697,
                484.434, 437.41, 0.85125,
                404.13, 556.854, 0.791542,
                443.261, 319.801, 0.601241,
                541.241, 370.793, 0.921286,
                502.02, 494.141, 0.799306,
                592.138, 198.429, 0.943879,
                0, 0, 0,
                562.742, 182.698, 0.914112,
                0, 0, 0,
                537.25, 504.024, 0.530087,
                535.323, 500.073, 0.526998,
                486.351, 500.042, 0.615485,
                449.168, 594.093, 0.700363,
                431.482, 594.156, 0.693443,
                386.46, 560.803, 0.803862
            ],
            "face_keypoints_2d": [],
            "hand_left_keypoints_2d": [],
            "hand_right_keypoints_2d": [],
            "pose_keypoints_3d": [],
            "face_keypoints_3d": [],
            "hand_left_keypoints_3d": [],
            "hand_right_keypoints_3d": []
        }
    ]
}
From what I understand, each JSON file corresponds to one frame of the video.
My goal is to calculate the angles at specific joints like the right knee, right arm, etc. For example:
openpose_angles = [(9, 10, 11, "right_knee"),
                   (2, 3, 4, "right_arm")]
This is based on the following OpenPose skeleton dummy:
What I did was calculate the angle between three coordinates (using Python):
temp_df = json.load(open(os.path.join(jsons_dir, file)))
listPoints = list(zip(*[iter(temp_df['people'][person_number]['pose_keypoints_2d'])] * 3))

count = 0
lmList2 = {}
for x, y, c in listPoints:
    lmList2[count] = (x, y, c)
    count += 1

p1 = angle_cords[0]
p2 = angle_cords[1]
p3 = angle_cords[2]

x1, y1, c1 = lmList2[p1]
x2, y2, c2 = lmList2[p2]
x3, y3, c3 = lmList2[p3]

# Calculate the angle
angle = math.degrees(math.atan2(y3 - y2, x3 - x2) -
                     math.atan2(y1 - y2, x1 - x2))
if angle < 0:
    angle += 360
I saw this method on some blog (I forget which), but it was about OpenCV rather than OpenPose (not sure if that makes a difference), and I get angles that do not make sense. We showed it to our teacher and he suggested we use vectors to calculate the angles instead of math.atan2, but we got confused about how to implement this.
To summarize, here is the question: what is the best way to calculate the angles, and how do I calculate them using vectors?
Your teacher is right. I suspect the problem is that 3 points can make up 3 different angles depending on the order. Just consider the angles in a triangle. Also you seem to ignore the 3rd coordinate.
Reconstruct the Skeleton
In your picture you indicate that the edges/bones of the skeleton are
edges = {(0, 1), (0, 15), (0, 16), (1, 2), (1, 5), (1, 8), (2, 3), (3, 4), (5, 6), (6, 7), (8, 9), (8, 12), (9, 10), (10, 11), (11, 22), (11, 24), (12, 13), (13, 14), (14, 19), (14, 21), (15, 17), (16, 18), (19, 20), (22, 23)}
I get the points from your json file with
np.array(pose['people'][0]['pose_keypoints_2d']).reshape(-1,3)
Now I plot that, ignoring the 3rd component, to get an idea of what I am working with. Notice that this does not change the proportions much since the 3rd component is really small compared to the others.
One definitely recognizes an upside-down man. I notice that there seems to be some kind of artifact, but I suspect this is just an error in recognition and would be better in another frame.
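For reference, a minimal sketch of how such a plot could be produced from the points array and the edges set above (matplotlib assumed; note that keypoints OpenPose could not detect are reported as (0, 0, 0) and will show up at the origin):
import matplotlib.pyplot as plt

# draw each edge/bone as a line segment, using only the x/y columns
for a, b in edges:
    plt.plot([points[a, 0], points[b, 0]], [points[a, 1], points[b, 1]], 'b-')
plt.scatter(points[:, 0], points[:, 1], s=10, c='r')
# image coordinates have y growing downwards, which is why the figure
# looks upside down unless the y-axis is inverted
plt.show()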
Calculate the Angle
Recall that the dot product divided by the product of the norms gives the cosine of the angle between two vectors: cos(theta) = (v1 · v2) / (|v1| |v2|). See the Wikipedia article on the dot product. So now I can get the angle between two joined edges like this.
def get_angle(edge1, edge2):
    assert tuple(sorted(edge1)) in edges
    assert tuple(sorted(edge2)) in edges
    edge1 = set(edge1)
    edge2 = set(edge2)
    mid_point = edge1.intersection(edge2).pop()
    a = (edge1 - edge2).pop()
    b = (edge2 - edge1).pop()
    v1 = points[mid_point] - points[a]
    v2 = points[mid_point] - points[b]
    angle = math.degrees(np.arccos(np.dot(v1, v2)
                                   / (np.linalg.norm(v1) * np.linalg.norm(v2))))
    return angle
For example if you wanted the elbow angles you could do
get_angle((3, 4), (2, 3))
get_angle((5, 6), (6, 7))
giving you
110.35748420197164
124.04586139643376
Which to me makes sense when looking at my picture of the skeleton. It's a bit more than a right angle.
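Similarly, for the (9, 10, 11, "right_knee") triple from the question, the knee angle would come from the two bones that meet at keypoint 10 (a sketch, assuming the same points and edges as above):
get_angle((9, 10), (10, 11))   # right knee
get_angle((12, 13), (13, 14))  # left knee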
What if I had to calculate the angle between two vectors that do not share one point?
In that case you have to be more careful, because the orientation of the vectors matters. First, here is the code:
def get_oriented_angle(edge1, edge2):
    assert tuple(sorted(edge1)) in edges
    assert tuple(sorted(edge2)) in edges
    v1 = points[edge1[0]] - points[edge1[1]]
    v2 = points[edge2[0]] - points[edge2[1]]
    angle = math.degrees(np.arccos(np.dot(v1, v2)
                                   / (np.linalg.norm(v1) * np.linalg.norm(v2))))
    return angle
As you can see, the code is simpler because I don't order the points for you. But it is more dangerous, since there are two angles between two vectors if you don't consider their orientation. Make sure both vectors point towards the point at which you're measuring the angle (both pointing away from it works too).
Here is the same example as above
get_oriented_angle((3, 4), (2, 3)) -> 69.64251579802836
As you can see this does not agree with get_angle((3, 4), (2, 3))! If you want the same result you have to put the 3 first (or last) in both cases.
If you do
get_oriented_angle((3, 4), (3, 2)) -> 110.35748420197164
It is the same angle as above.

Extract those points which are at least 3 degrees far from each other

I have 9 points (longitudes, latitudes in degrees) on the surface of the Earth, as follows.
XY = [(100, 10), (100, 11), (100, 13), (101, 10), (101, 11), (101, 13), (103, 10), (103, 11), (103, 13)]
print (len(XY))
# 9
I wanted to extract those points which are at least 3 degrees away from each other.
I tried it as follows.
results = []
for point in XY:
    x1, y1 = point
    for result in results:
        x2, y2 = result
        distance = math.hypot(x2 - x1, y2 - y1)
        if distance >= 3:
            results.append(point)
print(results)
But the output is empty.
edit 2
from sklearn.metrics.pairwise import haversine_distances
from math import radians

results = []
for point in XY:
    x1, y1 = [radians(_) for _ in point]
    for result in results:
        distance = haversine_distances((x1, y1), (x2, y2))
        print(distance)
        if distance >= 3:
            results.append(point)
print(results)
Still the result is empty.
edit 3
results = []
for point in XY:
    x1, y1 = point
    for point in XY:
        x2, y2 = point
        distance = math.hypot(x2 - x1, y2 - y1)
        print(distance)
        if distance >= 3:
            results.append(point)
print(results)
print(len(results))
# 32 # unexpected len
Important: You've said you want to "Extract those points which are at least 3 degrees far from each other", but then you've used the Euclidean distance with math.hypot(). As mentioned by @martineau, this should use the Haversine angular distance.
Since your points are "(longitudes, latitudes in degrees)", they first need to be converted to radians. And the pairs should be flipped so that latitude comes first, as required by the haversine_distances() function. That can be done with:
XY_r = [(math.radians(lat), math.radians(lon)) for lon, lat in XY]
Here's the kicker: none of the combination-making or looping is necessary. If haversine_distances() is passed a list of points, it will calculate the distances between all of them and return the result as an array of arrays. These can then be converted back to degrees and checked; or convert 3 degrees to radians and check against the haversine distances.
import math
import numpy as np
from sklearn.metrics.pairwise import haversine_distances
XY = [(100, 10), (100, 11), (100, 13), (101, 10), (101, 11), (101, 13), (103, 10), (103, 11), (103, 13)]
# convert to radians and flip so that latitude is first
XY_r = [(math.radians(lat), math.radians(lon)) for lon, lat in XY]
distances = haversine_distances(XY_r) # distances array-of-arrays in RADIANS
dist_criteria = distances >= math.radians(3) # at least 3 degrees (in radians) away
results = [point for point, result in zip(XY, dist_criteria) if np.any(result)]
print(results)
print(len(results))
print('<3 away from all:', set(XY) - set(results))
Output:
[(100, 10), (100, 11), (100, 13), (101, 10), (101, 13), (103, 10), (103, 11), (103, 13)]
8
<3 away from all: {(101, 11)}
Regarding the previous edits and your original code:
Your first two attempts are giving empty results because of this:
results = []
for point in XY:
    ...
    for result in results:
results is initialised as an empty list, so the for result in results loop exits immediately. Nothing inside the loop ever executes.
The 3rd attempt is getting you 32 results because of repetitions. You've got:
for point in XY:
    ...
    for point in XY:
so some of the pairs you compare will be the same point, and the same points get appended repeatedly.
To avoid duplicates in the loops:
Add a check for it and go to the next iteration:
if (x1, y1) == (x2, y2):
    continue
Btw, you're mangling the point variable because it's reused in both loops. It doesn't cause a problem but makes your code harder to debug. Either make them point1 and point2, or even better, instead of for point in XY: x1, y1 = point, you can directly do for x1, y1 in XY - that's called tuple unpacking.
for x1, y1 in XY:
    for x2, y2 in XY:
        if (x1, y1) == (x2, y2):
            continue
        ...
You also need to change results to be a set instead of a list, so that the same point is not re-added when it's more than 3 away from several other points. Sets don't allow duplicates, so points don't get repeated in results.
Use itertools.combinations() to get unique pairs of points without repetitions. This allows you to skip the duplicate check (unless XY actually has duplicate points) and brings the previous block down to one for-loop:
import itertools
import math

results = set()  # unique results
for (x1, y1), (x2, y2) in itertools.combinations(XY, r=2):
    distance = math.hypot(x2 - x1, y2 - y1)  # WRONG! see above
    if distance >= 3:
        # add both points
        results.update({(x1, y1), (x2, y2)})
print(results)
print(len(results))
print('<3 away from all:', set(XY) - results)
The (wrong) output:
{(103, 11), (100, 13), (101, 13), (100, 10), (103, 10), (101, 10), (103, 13), (100, 11)}
8
<3 away from all: {(101, 11)}
(The result is the same but merely by coincidence of the input data.)
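If you did want to keep the pairwise loop, here is a sketch of the same combinations approach with the Haversine criterion instead of math.hypot() (haversine_distances() expects (latitude, longitude) pairs in radians and returns angular distances in radians):
import itertools
import math
from sklearn.metrics.pairwise import haversine_distances

results = set()
for (lon1, lat1), (lon2, lat2) in itertools.combinations(XY, r=2):
    p1 = (math.radians(lat1), math.radians(lon1))  # latitude first, in radians
    p2 = (math.radians(lat2), math.radians(lon2))
    dist = haversine_distances([p1, p2])[0, 1]     # angular distance in radians
    if dist >= math.radians(3):
        results.update({(lon1, lat1), (lon2, lat2)})
print(results)
print('<3 away from all:', set(XY) - results)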

Estimate missing points in a list of points

I'm generating a list of (x, y) coordinates by detecting a ball's flight in a video. The problem I have is that for a few frames in the middle of the video the ball can't be detected; for these frames the list appends (-1, -1).
Is there a way to estimate the true (x,y) coordinates of the ball for these points?
Eg tracked points list being:
pointList = [(60, 40), (55, 42), (53, 43), (-1, -1), (-1, -1), (-1, -1), (35, 55), (30, 60)]
Then return an estimate of what the three missing (-1, -1) coordinates would be, taking into account the surrounding points (preserving the curve).
If it's a ball then theoretically it should follow a parabolic path; you could try to fit a curve ignoring the (-1, -1) entries and then replace the missing values.
Something like...
import numpy as np
pointList = [(60, 40), (55, 42), (53, 43), (-1, -1), (-1, -1), (-1, -1), (35, 55), (30, 60)]
x, y = list(zip(*[(x, y) for (x, y) in pointList if x>0]))
fit = np.polyfit(x, y, 2)
polynome = np.poly1d(fit)
# call your polynome for missing data, e.g.
missing = (55 - i*(55-35)/4 for i in range(3))
print([(m, polynome(m)) for m in missing])
giving ...
[(55.0, 41.971982486554325), (50.0, 44.426515896714186), (45.0, 47.44514924300471)]
You could use scipy's spline functions to interpolate the missing values:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import splprep, splev
pointList = [(60, 40), (55, 42), (53, 43),
(-1, -1), (-1, -1), (-1, -1),
(35, 55), (30, 60)]
# Remove the missing values
pointList = np.array(pointList)
pointList = pointList[pointList[:, 0] != -1, :]
def spline(x, n, k=2):
    tck = splprep(x.T, s=0, k=k)[0]
    u = np.linspace(0.0, 1.0, n)
    return np.column_stack(splev(x=u, tck=tck))
# Interpolate the points with a quadratic spline at 100 points
pointList_interpolated = spline(pointList, n=100, k=2)
plt.plot(*pointList.T, c='r', ls='', marker='o', zorder=10)
plt.plot(*pointList_interpolated.T, c='b')
If the camera is not moving, only the ball is, and you ignore the wind, then the trajectory is parabolic. See: https://en.wikipedia.org/wiki/Trajectory#Uniform_gravity,_neither_drag_nor_wind
In this case, fit a quadratic function to the points you know and you will get the missing ones. Also set the error of the boundary points next to the unknown area (points (53, 43) and (35, 55)) to 0 or close to 0 (no error, i.e. a big weight in the fit), so that your interpolation passes through these points.
There are some libraries for polynomial fit. E.g. numpy.polyfit:
https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.polynomial.polynomial.polyfit.html
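A small sketch of that idea using numpy.polyfit, giving a large weight to the two boundary points next to the gap so that the fitted parabola passes (almost) through them (the weights and the guessed x positions inside the gap are arbitrary choices for illustration):
import numpy as np

pointList = [(60, 40), (55, 42), (53, 43), (-1, -1), (-1, -1), (-1, -1), (35, 55), (30, 60)]
x, y = zip(*[(px, py) for px, py in pointList if px > 0])

# heavy weight on (53, 43) and (35, 55), the points bordering the unknown area
weights = [1, 1, 100, 100, 1]
parabola = np.poly1d(np.polyfit(x, y, 2, w=weights))

# guess evenly spaced x positions inside the gap and fill in the y values
missing_x = np.linspace(53, 35, 5)[1:-1]
print([(mx, parabola(mx)) for mx in missing_x])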

Recursively trying to find the maximum w/o loops

So I'm given a tuple of ordered pairs in this format:
(x, y), where x represents the physical weight of the object and y represents the cost/value of the object.
((5, 20), (10, 70), (40, 200), (20, 80), (10, 100))
Objects may only be used once, but there may be multiples of those objects in the original tuple of ordered pairs.
z is the max weight that can be shipped. It's an integer. z could be 50 or something like that.
Goal: Find the maximum value possible that you can send given the limit Z.
The difficulty is that we can ONLY use recursion and we cannot use loops nor can we use python built-in functions.
I've tried to work out the max value in a list of integers, which I did separately to try to get some sort of idea. I have also tried giving the objects a 'mass' and doing value/weight, but that didn't work very well either.
def maximum_val(objects: ((int, int),), max_weight: int) -> int:
    if max_weight == 0:
        return 0
    else:
        return objects[0][1] + maximum_val(objects[1:], max_weight - objects[0][0])
((5, 20), (10, 70), (40, 200), (20, 80), (10, 100))
Example: Given the tuple above and the limit Z=40, the best possible value that could be obtained is 250 -> (10, 70), (10, 100), (20, 80)
This is known as the knapsack problem, and you are looking for a recursive variant.
At every step, check what is best. Include the first object or skip the first object:
objects = ((5, 20), (10, 70), (40, 200), (20, 80), (10, 100))

def recursive_knapsack(objects, limit):
    if not objects:
        return 0
    if objects[0][0] > limit:
        # first object can't fit
        return recursive_knapsack(objects[1:], limit)
    include = objects[0][1] + recursive_knapsack(objects[1:], limit - objects[0][0])
    exclude = recursive_knapsack(objects[1:], limit)
    if include < exclude:
        return exclude
    else:
        return include
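For the example from the question, with the limit Z=40, this returns the 250 stated there:
print(recursive_knapsack(objects, 40))
# 250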

The difference between two sets of tuples

I'm trying to write a function that takes a tuple (representing an integer coordinate in the plane) and returns all adjacent coordinates not including the original coordinate.
def get_adj_coord(coord):
    '''
    coord: tuple of int
    should return set of tuples representing coordinates adjacent
    to the original coord (not including the original)
    '''
    x, y = coord
    range1 = range(x-1, x+2)
    range2 = range(y-1, y+2)
    coords = {(x, y) for x in range1 for y in range2} - set(coord)
    return coords
The issue is that the return value of this function always includes the original coordinate:
In [9]: get_adj_coord((0,0))
Out[9]: {(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 0), (0, 1), (1, -1), (1, 0), (1, 1)}
I'm probably missing something fundamental about sets and/or tuples, but the above function is definitely not returning what I expect. I also tried using:
coords = {(x,y) for x in range1 for y in range2}.remove(coord)
But then the function returns nothing. Can anyone point out what I'm very clearly missing here?
That's because you're not subtracting the right set object. Your current approach uses set(coord), and set((0, 0)) evaluates to {0}, because it casts the tuple into a set of its elements. (The .remove() attempt returns nothing because set.remove() mutates the set in place and returns None.)
However, what you want is a tuple in a set:
coords = {(x,y) for x in range1 for y in range2} - {coord}
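A quick check of the fixed version (sorted only to make the printed order deterministic):
print(sorted(get_adj_coord((0, 0))))
# [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]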
