Change viewport angle in Blender using Python

I'm trying to find out if there is a way to change the viewport angle in Blender using Python.
I would like a result like you would get from pressing 1, 3, or 7 on the numpad.
Thank you for any help.

First of all, note that you can have multiple 3D views open at once, and each can have its own viewport angle, perspective/ortho settings etc. So your script will have to look for all the 3D views that might be present (which might be none) and decide which one(s) it’s going to affect.
Start with the bpy.data object, which has a window_managers attribute. This collection always seems to have just one element. However, there might be one or more open windows. Each window has a screen, which is divided into one or more areas. So you need to search through all the areas for one with a space type of "VIEW_3D". And then hunt through the spaces of this area for the one(s) with type "VIEW_3D". Such a space will be of subclass SpaceView3D. This will have a region_3d attribute of type RegionView3D. And finally, this object in turn has an attribute called view_matrix, which takes a value of type Matrix that you can get or set.
Got all that? :)
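A minimal sketch of that traversal (the attribute names are as described above; treat it as a starting point rather than a definitive implementation):

import bpy

def find_view3d_regions():
    """Collect the RegionView3D of every open 3D View."""
    regions = []
    for window_manager in bpy.data.window_managers:   # normally just one
        for window in window_manager.windows:
            for area in window.screen.areas:
                if area.type == 'VIEW_3D':
                    for space in area.spaces:
                        if space.type == 'VIEW_3D':          # a SpaceView3D
                            regions.append(space.region_3d)  # a RegionView3D
    return regions

for region_3d in find_view3d_regions():
    print(region_3d.view_matrix)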

Once you've located the right 'view', you can modify:
view.spaces[0].region_3d.view_matrix
view.spaces[0].region_3d.view_rotation
Note that the region_3d.view_location is the 'look_at' target, not the location of the camera; you have to modify the view_matrix directly if you want to move the position of the camera (as far as I know), but you can subtly adjust the rotation using view_rotation quite easily. You'll probably need to read this to generate a valid quaternion though: http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
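As an illustration, something like a numpad-1 front view amounts to an orthographic view with a 90° rotation about the X axis (the default top view corresponds to the identity quaternion). Treat the exact values as an assumption to verify in your Blender version:

from math import radians
from mathutils import Quaternion

region_3d = view.spaces[0].region_3d          # 'view' located as described above
region_3d.view_perspective = 'ORTHO'          # the numpad views are orthographic
region_3d.view_rotation = Quaternion((1.0, 0.0, 0.0), radians(90))  # roughly a front view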
Perhaps something like this may be useful:
class Utils(object):

    def __init__(self, context):
        self.context = context

    @property
    def views(self):
        """ Returns the set of 3D views.
        """
        rtn = []
        for a in self.context.window.screen.areas:
            if a.type == 'VIEW_3D':
                rtn.append(a)
        return rtn

    def camera(self, view):
        """ Return position, rotation data about a given view for the first space attached to it """
        look_at = view.spaces[0].region_3d.view_location
        matrix = view.spaces[0].region_3d.view_matrix
        camera_pos = self.camera_position(matrix)
        rotation = view.spaces[0].region_3d.view_rotation
        return look_at, camera_pos, rotation

    def camera_position(self, matrix):
        """ From 4x4 matrix, calculate camera location """
        # translation column of the view matrix
        t = (matrix[0][3], matrix[1][3], matrix[2][3])
        # rotation part of the view matrix
        r = (
            (matrix[0][0], matrix[0][1], matrix[0][2]),
            (matrix[1][0], matrix[1][1], matrix[1][2]),
            (matrix[2][0], matrix[2][1], matrix[2][2])
        )
        # negated transpose of the rotation, i.e. its inverse, negated
        rp = (
            (-r[0][0], -r[1][0], -r[2][0]),
            (-r[0][1], -r[1][1], -r[2][1]),
            (-r[0][2], -r[1][2], -r[2][2])
        )
        # camera position = -R^T * t
        output = (
            rp[0][0] * t[0] + rp[0][1] * t[1] + rp[0][2] * t[2],
            rp[1][0] * t[0] + rp[1][1] * t[1] + rp[1][2] * t[2],
            rp[2][0] * t[0] + rp[2][1] * t[1] + rp[2][2] * t[2],
        )
        return output
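A possible way to use it from a script running inside Blender (assuming bpy.context has a window with a screen):

import bpy

utils = Utils(bpy.context)
for view in utils.views:
    look_at, camera_pos, rotation = utils.camera(view)
    print(look_at, camera_pos, rotation)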

Related

CadQuery: Selecting an edge by index (Filleting specific edges)

I come from the engineering CAD world and I'm creating some designs in CadQuery. What I want to do is this (pseudocode):
edges = part.edges()
edges[n].fillet(r)
Or ideally have the ability to do something like this (though I can't find any methods for edge properties). Pseudocode:
edges = part.edges()
for edge in edges:
    if edge.length() > x:
        edge.fillet(a)
    else:
        edge.fillet(b)
This would be very useful when a design contains non-orthogonal faces. I understand that I can select edges with selectors, but I find them unnecessarily complicated, and they seem to work best with orthogonal faces. FreeCAD lets you treat edges as a list.
I believe there might be a method to select the closest edge to a point, but I can't seem to track it down.
If someone can provide guidance that would be great -- thank you!
Bonus question: Is there a way to return coordinates of geometry as a list or vector? e.g.:
origin = cq.workplane.center().val
>> [x,y,z]
(or something like the above)
Take a look at this code; I hope it will be helpful.
import cadquery as cq
plane1 = cq.Workplane()
block = plane1.rect(10,12).extrude(10)
edges = block.edges("|Z")
filleted_block = edges.all()[0].fillet(0.5)
show(filleted_block)
For posterity: to select multiple edges, e.g. for chamfering, you can use newObject() on a Workplane. The argument is a list of edges (they have to be cq.occ_impl.shapes.Edge instances, NOT cq.Workplane instances).
import cadquery as cq
model = cq.Workplane().box(10, 10, 5)
edges = model.edges()
# edges.all() returns workplanes; we have to get the underlying geometry
selected = list(map(lambda x: x.objects[0], edges.all()))
model_with_chamfer = model.newObject(selected).chamfer(1)
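The same pattern also answers the original question of filleting a single edge picked by index (the index 3 here is just a hypothetical example):

edge_geom = model.edges().all()[3].objects[0]      # the underlying Edge, not a Workplane
filleted = model.newObject([edge_geom]).fillet(1.0)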
To get edge length you can do something like this:
edge = model.edges().all()[0]  # This selects one 'random' edge
length = edge.objects[0].Length()
edge.Length() doesn't work, since edge is a Workplane instance, not a geometry instance.
To get edges of a certain length, you can build a list of (edge geometry, length) pairs and filter it using Python's built-in filter(). Here is a snippet of my implementation for chamfering the short edges on the topmost face:
from statistics import mean   # assumption: mean() comes from the statistics module

top_edges = model.edges(">Z and #Z")

def get_length(edge):
    try:
        return edge.vals()[0].Length()
    except Exception:
        return 0.0

# Inside edges are shorter - filter only those
edge_len_list = list(map(
    lambda x: (x.objects[0], get_length(x)),
    top_edges.all()))
avg = mean([a for _, a in edge_len_list])
selected = filter(lambda x: x[1] < avg, edge_len_list)
selected = [e for e, _ in selected]
vertical_edges = model.edges("|Z").all()
selected.extend(vertical_edges)
model = model.newObject(selected)
model = model.chamfer(chamfer_size)

When should I use classes and the self keyword in Python?

I've been trying to write a Python program to calculate a point's location based on its distances from 4 anchors. I decided to calculate it as the intersection points of 4 circles.
I have a question regarding not the algorithm but rather the use of classes in such program. I don't really have much experience with OOP. Is it really necessary to use classes here or does it at least improve a program in any way?
Here's my code:
import math

class Program():
    def __init__(self, anchor_1, anchor_2, anchor_3, anchor_4, data):
        self.anchor_1 = anchor_1
        self.anchor_2 = anchor_2
        self.anchor_3 = anchor_3
        self.anchor_4 = anchor_4

    def intersection(self, P1, P2, dist1, dist2):
        PX = abs(P1[0]-P2[0])
        PY = abs(P1[1]-P2[1])
        d = math.sqrt(PX*PX+PY*PY)
        if d < dist1 + dist2 and d > (abs(dist1-dist2)):
            ex = (P2[0]-P1[0])/d
            ey = (P2[1]-P1[1])/d
            x = (dist1*dist1 - dist2*dist2 + d*d) / (2*d)
            y = math.sqrt(dist1*dist1 - x*x)
            P3 = ((P1[0] + x * ex - y * ey), (P1[1] + x*ey + y*ex))
            P4 = ((P1[0] + x * ex + y * ey), (P1[1] + x*ey - y*ex))
            return (P3, P4)
        elif d == dist1 + dist2:
            ex = (P2[0]-P1[0])/d
            ey = (P2[1]-P1[1])/d
            x = (dist1*dist1 - dist2*dist2 + d*d) / (2*d)
            y = math.sqrt(dist1*dist1 - x*x)
            P3 = ((P1[0] + x * ex + y * ey), (P1[1] + x*ey + y*ex))
            return (P3, None)
        else:
            return (None, None)

    def calc_point(self, my_list):
        if len(my_list) != 5:
            print("Wrong data")
        else:
            tag_id = my_list[0]
            self.dist_1 = my_list[1]
            self.dist_2 = my_list[2]
            self.dist_3 = my_list[3]
            self.dist_4 = my_list[4]
            (self.X1, self.X2) = self.intersection(self.anchor_1, self.anchor_2, self.dist_1, self.dist_2)
            (self.X3, self.X4) = self.intersection(self.anchor_1, self.anchor_3, self.dist_1, self.dist_3)
            (self.X5, self.X6) = self.intersection(self.anchor_1, self.anchor_4, self.dist_1, self.dist_4)

with open('distances.txt') as f:
    dist_to_anchor = f.readlines()
dist_to_anchor = [x.strip() for x in dist_to_anchor]
dist_to_anchor = [x.split() for x in dist_to_anchor]
for row in dist_to_anchor:
    for k in range(0, 5):
        row[k] = float(row[k])

anchor_1 = (1, 1)
anchor_2 = (-1, 1)
anchor_3 = (-1, -1)
anchor_4 = (1, -1)

My_program = Program(anchor_1, anchor_2, anchor_3, anchor_4, dist_to_anchor)
My_program.calc_point(dist_to_anchor[0])
print(My_program.X1)
print(My_program.X2)
print(My_program.X3)
print(My_program.X4)
print(My_program.X5)
print(My_program.X6)
Also, I don't quite understand where I should use the self keyword and where it is needless.
Is it really necessary to use classes here or does it at least improve a program in any way?
Classes are never necessary, but they are often very useful for organizing code.
In your case, you've taken procedural code and just wrapped it in a class. It's still basically a bunch of function calls. You'd be better off either writing it as procedures or writing proper classes.
Let's look at how you'd do some geometry in a procedural style vs an object oriented style.
Procedural programming is all about writing functions (procedures) which take some data, process it, and return some data.
def area_circle(radius):
    return math.pi * radius * radius

print(area_circle(5))
You have the radius of a circle and you get the area.
Object oriented programming is about asking data to do things.
class Circle():
    def __init__(self, radius=0):
        self.radius = radius

    def area(self):
        return math.pi * self.radius * self.radius

circle = Circle(radius=5)
print(circle.area())
You have a circle and you ask it for its area.
That seems a lot of extra code for a very subtle distinction. Why bother?
What happens if you need to calculate other shapes? Here's a Square in OO.
class Square():
    def __init__(self, side=0):
        self.side = side

    def area(self):
        return self.side * self.side

square = Square(side=5)
print(square.area())
And now procedural.
def area_square(side):
    return side * side

print(area_square(5))
So what? What happens when you want to calculate the area of a shape? Procedurally, everywhere that wants to deal with shapes has to know what sort of shape it's dealing with, what procedure to call on it, and where to get that procedure from. This logic might be scattered all over the code. To avoid this, you could write a wrapper function and make sure it's imported as needed.
from circle import area_circle
from square import area_square

def area(type, shape_data):
    if type == 'circle':
        return area_circle(shape_data)
    elif type == 'square':
        return area_square(shape_data)
    else:
        raise Exception("Unrecognized type")

print(area('circle', 5))
print(area('square', 5))
In OO you get that for free.
print(shape.area())
Whether shape is a Circle or a Square, shape.area() will work. You, the person using the shape, don't need to know anything about how it works. If you want to do more with your shapes, perhaps calculate the perimeter, add a perimeter method to your shape classes and now it's available wherever you have a shape.
As more shapes get added the procedural code gets more and more complex everywhere it needs to use shapes. The OO code remains exactly the same, instead you write more classes.
And that's the point of OO: hiding the details of how the work is done behind an interface. It doesn't matter to your code how it works so long as the result is the same.
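For instance, a small sketch using the Circle and Square classes defined above:

shapes = [Circle(radius=5), Square(side=5)]
for shape in shapes:
    # Each shape knows how to compute its own area; the caller never checks its type.
    print(shape.area())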
Classes and OOP are, IMHO, always a good choice. By using them you can better organize and reuse your code; you can create new classes that derive from an existing class to extend its functionality (inheritance) or to change its behavior when you need to (polymorphism), as well as encapsulate the internals of your code so it becomes safer (there is no real encapsulation in Python, though).
In your specific case, for example, you are building a calculator that uses a particular technique to calculate an intersection. If somebody else using your class wants to modify that behavior, they can override the function (this is polymorphism in action):
class PointCalculator:
    def intersection(self, P1, P2, dist1, dist2):
        # Your initial implementation
        ...

class FasterPointCalculator(PointCalculator):
    def __init__(self):
        super().__init__()

    def intersection(self, P1, P2, dist1, dist2):
        # New implementation
        ...
Or, you might extend the class in the future:
class BetterPointCalculator(PointCalculator):
    def __init__(self):
        super().__init__()

    def distance(self, P1, P2):
        # New function
        ...
You may need to initialize your class with some required data that you don't want users of the class to modify directly; you can signal that kind of encapsulation by prefixing your variable names with an underscore:
class PointCalculator:
    def __init__(self, p1, p2):
        self._p1 = p1
        self._p2 = p2

    def do_something(self):
        # Do something with your data
        return self._p1 + self._p2
As you have probably noticed, self is passed automatically when you call a method; it contains a reference to the current object (the instance of the class), so you can access anything declared on it, like the variables _p1 and _p2 in the example above.
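A tiny illustration of what "passed automatically" means (Greeter is just a throwaway example class):

class Greeter:
    def greet(self, name):
        return "Hello, " + name

g = Greeter()
# These two calls are equivalent; Python supplies 'self' for you in the first form.
print(g.greet("world"))
print(Greeter.greet(g, "world"))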
You can also create static methods, which don't receive self at all. Do this for methods that perform general calculations or any operation that doesn't need a specific instance; your intersection method could be a good candidate, e.g.:
class PointCalculator:
    @staticmethod
    def intersection(P1, P2, dist1, dist2):
        # Return the result
        ...
Now you don't need an instance of PointCalculator; you can simply call PointCalculator.intersection(1, 2, 3, 4).
Another advantage of using classes can be memory management: Python frees objects when they go out of scope, so in a long script where all the data lives at the top level, nothing is released until the script terminates, whereas data held by shorter-lived objects can be reclaimed earlier.
Having said that, for small utility scripts that perform very specific tasks (installing an application, configuring some service, running some OS administration task, etc.), a plain script is totally fine, and that is one of the reasons Python is so popular.

Get an object's position in another object's coordinate system

Is there a way in MEL or Python in Maya to get one object's position in the coordinate system of another object? I have a camera in a scene that may be rotated in any direction and am trying to measure the distance in its local Z axis to the vertices of various objects in the scene. This obviously needs to be fast, since it will likely be run thousands of times across the scene.
In Maxscript the command would be something like
" in coordsys $camera "
but I have yet to find something like this in Maya. If there's no direct command to do this, does anyone have a way to calculate it using matrix math?
There is no one-liner similar to the MXS idiom -- and no easy way to do it in MEL. However, in Python you can do this fairly easily.
First you need to get the matrix for the coordinate system you want as an MMatrix, which is part of the OpenMaya api. Then get the position you want to check as an MPoint, which is another api class. Here's the cheap way to get them (there are faster methods but they're much wordier):
from maya.api.OpenMaya import MVector, MMatrix, MPoint
import maya.cmds as cmds

def world_matrix(obj):
    """
    convenience method to get the world matrix of <obj> as a matrix object
    """
    return MMatrix(cmds.xform(obj, q=True, matrix=True, ws=True))

def world_pos(obj):
    """
    convenience method to get the world position of <obj> as an MPoint
    """
    return MPoint(cmds.xform(obj, q=True, t=True, ws=True))
Once you have the matrix and the point, the relative position is simply point times the inverse of the matrix:
relative_position = world_pos('pSphere1') * world_matrix('pCube1').inverse()
print(relative_position)
# (0.756766, -0.0498943, 3.38499, 1)
The result will be an MPoint, which has 4 numbers (x, y, z and w); the 4th will always be 1 so you can just ignore it, although the math needs it to account for scales and shears.
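Since the original question asks for the distance along the camera's local Z axis: a Maya camera looks down its negative Z, so (as a sketch, assuming the camera transform is named camera1) the depth of a point in front of the camera would be:

rel = world_pos('pSphere1') * world_matrix('camera1').inverse()
depth_along_camera_z = -rel.z   # positive for points in front of the camera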
Use this MEL script to calculate the distance from camera1 to nurbsSphere1 primitive:
vector $p1 = `getAttr camera1.translate`;
vector $p2 = `getAttr nurbsSphere1.translate`;
vector $result = $p1 - $p2;
print (mag($result))
The printed result should look like this:
# MEL 40.1965
Or use this Python script to calculate the distance from camera1 to nurbsSphere1 primitive:
import maya.cmds as cmds
import math
distance = math.sqrt(pow(cmds.getAttr("nurbsSphere1.tx") - cmds.getAttr("camera1.tx"), 2) +
                     pow(cmds.getAttr("nurbsSphere1.ty") - cmds.getAttr("camera1.ty"), 2) +
                     pow(cmds.getAttr("nurbsSphere1.tz") - cmds.getAttr("camera1.tz"), 2))
print(distance)
The printed result should look like this:
# Python 40.1964998512

Python: my object claims not to have access to a method

I'm writing some Python code and have a class as follows
class GO:
    ## irrelevant code

    def getCenter(self):
        xList = []
        yList = []
        # Put all the x and y coordinates from every GE
        # into separate lists
        for ge in self.GEList:
            for point in ge.pointList:
                xList.append(point[0])
                yList.append(point[1])
        # Return the point whose x and y values are halfway between
        # the left- and right-most points, and the top- and
        # bottom-most points.
        centerX = min(xList) + (max(xList) - min(xList)) / 2
        centerY = min(yList) + (max(yList) - min(yList)) / 2
        return (centerX, centerY)

    ### more irrelevant code

    def scale(self, factor):
        matrix = [[factor, 0, 0], [0, factor, 0], [0, 0, 1]]
        for ge in self.GEList:
            fpt = []
            (Cx, Cy) = ge.getCenter()
            for pt in ge.pointList:
                newpt = [pt[0] - Cx, pt[1] - Cy, 1]  ### OR USE TRANSLATE
                spt = matrixPointMultiply(matrix, newpt)
                finalpt = [spt[0] + Cx, spt[1] + Cy, 1]
                fpt.append(finalpt)
            ge.pointList = fpt
        return
Whenever I run it it says: AttributeError: circle instance has no attribute 'getCenter'.
How do I get the object to correctly call the method on itself?
This is kind of a noobish question and I am learning, so detailed advice would be helpful.
Have you checked your indenting to make sure it's all consistent? That's a classic Python beginner problem. You need to use consistent whitespace (either tabs or spaces, most people prefer spaces) and the right amount of whitespace.
For example, this may look OK, but it won't do what you expect:
class Dummy(object):
    def foo(self):
        print("foo!")
        def bar(self):        # indented one level too deep: bar is a local function
            print("bar!")     # inside foo, not a method of Dummy

d = Dummy()
d.bar()
This will return:
AttributeError: 'Dummy' object has no attribute 'bar'
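For contrast, with consistent indentation bar becomes a real method of Dummy and the call works:

class Dummy(object):
    def foo(self):
        print("foo!")

    def bar(self):
        print("bar!")

d = Dummy()
d.bar()  # prints "bar!"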
If that's not it, try to pare your code down to the minimum, and post that and how you're calling it. As it stands, the general form looks OK to me, unless I'm missing something.

Matrix View in Function Doesn't Have Side Effects

Edit: I've found what the problem boils down to:
If you run this code:
from numpy import ones

A = ones((10,4))
view = A[:,1]
view.fill(7)
A
or
A = ones((10,4))
view = A[:,1:3]
view.fill(7)
A
You'll see that the selected columns of A change.
If you run this:
A = ones((10,4))
view = A[:,(1,2)]
view.fill(7)
A
There are no side effects on A. Is this behavior intentional, or is it a bug?
I have a function that calculates the amount I have to rotate certain columns of x,y points in a matrix. The function only takes one input - a matrix mat:
def rotate(mat):
In the function, I create views to make working with each section easier:
rot_mat = mat[:,(col,col+1)]
Then, I calculate a rotation angle and apply it back on the view that I had created before:
rot_mat[row,0] = cos(rot)*x - sin(rot)*y
rot_mat[row,1] = sin(rot)*x + cos(rot)*y
If I perform this in the main body of my program, the changes to my rot_mat view propagate to the original matrix mat. When I turned it into a function, the views stopped having side effects on the original matrix. What's the reasoning for this, and is there any way to get around it? I should also note that mat isn't being changed within the function itself either; at the end, I just return mat, but no changes have been made to it.
Full code for function:
def rotate(mat):
    # Get a reference shape
    ref_sh = 2 * random.choice(range(len(filelist)))
    print('Reference shape is')
    print(ref_sh / 2)
    # Create a copy of the reference point matrix
    ref_mat = mat.take([ref_sh, ref_sh+1], axis=1)
    # Calculate rotation for each set of points
    for col in range(len(filelist)):
        col = col * 2  # Account for the two point columns
        rot_mat = mat[:, (col, col+1)]
        # Numerator = sum of wi*yi - zi*xi
        numer = inner(ref_mat[:,0], rot_mat[:,1]) - inner(ref_mat[:,1], rot_mat[:,0])
        # Denominator = sum of wi*xi + zi*yi
        denom = inner(ref_mat[:,0], rot_mat[:,0]) + inner(ref_mat[:,1], rot_mat[:,1])
        rot = arctan(numer/denom)
        # Rotate the points in rot_mat. As it's a view of mat, the effects are
        # propagated.
        for row in range(num_points):
            x = rot_mat[row,0]
            y = rot_mat[row,1]
            rot_mat[row,0] = cos(rot)*x - sin(rot)*y
            rot_mat[row,1] = sin(rot)*x + cos(rot)*y
    return mat
When you do view = A[:,(1,2)] you are using advanced indexing (Numpy manual: Advanced Indexing), which means that the array returns a copy, not a view. It's advanced because your indexing object is a tuple "containing at least one sequence" (the sequence being the tuple (1,2)). The total explicit selection object obj in your case would equal (slice(None), (1,2)), i.e. A[(slice(None), (1,2))] returns the same thing as A[:,(1,2)].
As larsmans suggests above, it seems that __getitem__ and __setitem__ behave differently for advanced indexing: a fancy-indexed read returns a copy, while a fancy-indexed assignment writes into the original array. This makes sense, because assigning values to a copy would have no use (the copy would not be stored).
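A short sketch of the difference (standard NumPy behavior; the commented values are what the prints produce):

import numpy as np

A = np.ones((10, 4))

A[:, 1:3].fill(7)        # basic slice -> view; writes through to A
print(A[0])              # [1. 7. 7. 1.]

A[:, (1, 2)].fill(3)     # fancy index -> copy; A is unchanged by this line
print(A[0])              # still [1. 7. 7. 1.]

A[:, (1, 2)] = 5         # but fancy-indexed assignment goes through __setitem__
print(A[0])              # [1. 5. 5. 1.]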
