I'm trying to make a Python program to find derivatives and integrals as well as show the steps used. So far I have found that there is an integral_steps function which returns the steps used for integration, but I have not found an equivalent for differentiation.
Does anyone know if there is an equivalent?
If there isn't, do you have any ideas on how to find the steps needed to find a derivative?
Method 1 (manual)
Looking at the code, the Derivative class is where the top-level logic lives, but that is only the entry point. From there, the computation requires taking derivatives of the different nodes inside the expression tree.
The logic for each specific node type lives in the _eval_derivative method of that node's class.
This means you could add code to those _eval_derivative methods to trace the entire process and record all the steps.
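To make the idea concrete, here is a minimal sketch of my own (not part of SymPy's API): it monkey-patches _eval_derivative on a few core node classes so every intermediate derivative is printed as it is computed.
from sympy import symbols, sin
from sympy.core.add import Add
from sympy.core.mul import Mul
from sympy.core.power import Pow

def traced(method):
    # wrap an _eval_derivative method so it prints its input and its result
    def wrapper(self, s):
        result = method(self, s)
        print("d/d%s [ %s ]  =  %s" % (s, self, result))
        return result
    return wrapper

for cls in (Add, Mul, Pow):
    cls._eval_derivative = traced(cls._eval_derivative)

x = symbols('x')
print((1 / (x * sin(x)**2)).diff(x))
Method 2 below gets the same information without modifying SymPy at all.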
Method 2 (using a tracer)
Python has multiple tracing packages. python-hunter, written by ionelmc, is quite good and fits this use case well.
Among many other features, it allows installing a callback that runs when a function starts executing and another that runs when the function returns its value, which is exactly what we need.
Here's an example that shows how to use it (I ran and tested this on Python 3.7.3, SymPy 1.7 and hunter 3.3.1):
import hunter
import sys
from hunter import Q, When, Stop
hunter.trace(
    Q(module_contains="sympy", function='_eval_derivative', kind_in=["call", "return"], action=hunter.CallPrinter(repr_func=str))
)
from sympy import *
x = symbols('x')
f = 1/(x * sin(x)**2)
f.diff(x)
This lets us pick which data structures we want to inspect and how we want to print them, and it allows us to see the intermediate steps of the differentiation process:
[...]7/site-packages/sympy/core/power.py:1267 call => _eval_derivative(self=sin(x)**(-2), s=x)
[...]7/site-packages/sympy/core/power.py:1267 call => _eval_derivative(self=<sympy.core.power.Pow object at 0x7f5925337150>, s=<sympy.core.symbol.Symbol object at 0x7f5925b6a2b0>)
[...]ite-packages/sympy/core/function.py:598 call => _eval_derivative(self=sin(x), s=x)
[...]ite-packages/sympy/core/function.py:598 call => _eval_derivative(self=<sympy.functions.elementary.trigonometric.sin object at 0x7f592589ee08>, s=<sympy.core.symbol.Symbol object at 0x7f5925b6a2b0>)
[...]ite-packages/sympy/core/function.py:612 return <= _eval_derivative: cos(x)
[...]ite-packages/sympy/core/function.py:612 return <= _eval_derivative: <sympy.functions.elementary.trigonometric.cos object at 0x7f592525fef8>
[...]7/site-packages/sympy/core/power.py:1271 return <= _eval_derivative: -2*cos(x)/sin(x)**3
[...]7/site-packages/sympy/core/power.py:1271 return <= _eval_derivative: <sympy.core.mul.Mul object at 0x7f5925259b48>
[...]7/site-packages/sympy/core/power.py:1267 call => _eval_derivative(self=1/x, s=x)
[...]7/site-packages/sympy/core/power.py:1267 call => _eval_derivative(self=<sympy.core.power.Pow object at 0x7f5925337200>, s=<sympy.core.symbol.Symbol object at 0x7f5925b6a2b0>)
[...]7/site-packages/sympy/core/power.py:1271 return <= _eval_derivative: -1/x**2
[...]7/site-packages/sympy/core/power.py:1271 return <= _eval_derivative: <sympy.core.mul.Mul object at 0x7f5925259f10>
If you also want to cover the diff function, you can alter the code above to use function_in=['_eval_derivative', 'diff']. That way you can look not only at the partial results, but also at the call to the diff function and its return value.
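Concretely, only the filter passed to hunter.trace changes (this is the same filter that Method 3 below uses):
hunter.trace(
    Q(module_contains="sympy", function_in=['_eval_derivative', 'diff'], kind_in=["call", "return"], action=hunter.CallPrinter(repr_func=str))
)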
Method 3 (using a tracer, building a call graph and visualizing it)
Using graphviz, latex and a tracer (again, python-hunter) you can actually see the call graph more clearly. It does take a bit of time to render all the formulas for each intermediary step because pdflatex is being used (I'm sure there are faster LaTeX renderers though).
Each node's value is in the following format:
function_name
argument => return_value
There seem to be a few diff nodes whose argument is equal to their return value, which I'm not sure how to explain at the moment.
The diagram could probably be more useful if it mentioned somehow where each rule was applied (I can't think of an easy way to do that).
Here's the code for this too:
import hunter
import sys
from hunter import Q, When, Stop, Action
from hunter.actions import ColorStreamAction
formula_ltx = r'''
\documentclass[border=2pt,varwidth]{letter}
\usepackage{amsmath}
\pagenumbering{gobble}
\begin{document}
\[ \texttt{TITLE} \]
\[ FORMULA \]
\end{document}
'''
# ==============
# == Tracing ===
# ==============
from sympy.printing.latex import LatexPrinter, print_latex, latex
global call_tree_root
# a node object to hold an observed function call
# with its argument, its return value and its function name
class Node(object):
    def __init__(self, arg=None, retval=None, func_name=None):
        self.arg = arg
        self.retval = retval
        self.arg_ascii = ""
        self.retval_ascii = ""
        self.func_name = func_name
        self.uid = 0
        self.children = []
# this is a hunter action where we build a call graph and populate it
# so we can later render it
#
# CGBAction (Call Graph Builder Action)
class CGBAction(ColorStreamAction):
    def __init__(self, *args, **kwargs):
        super(ColorStreamAction, self).__init__(*args, **kwargs)
        # a custom call stack
        self.tstack = []
        global call_tree_root
        call_tree_root = Node(arg="", func_name="root")
        self.node_idx = 1
        self.tstack.append(call_tree_root)

    def __call__(self, event):
        if event.kind in ['return', 'call']:
            if event.kind == 'return':
                print(str(event.arg))
                if len(self.tstack) > 0:
                    top = self.tstack.pop()
                    top.retval = latex(event.arg)
                    top.retval_ascii = str(event.arg)
            elif event.kind == 'call':
                print(str(event.locals.get('self')))
                new = Node()
                new.uid = self.node_idx
                new.arg = latex(event.locals.get('self'))
                new.arg_ascii = str(event.locals.get('self'))
                top = self.tstack[-1]
                self.tstack.append(new)
                top.children.append(new)
                new.func_name = event.module + ":" + event.function
                self.node_idx += 1
hunter.trace(
    Q(module_contains="sympy", function_in=['_eval_derivative', 'diff'], kind_in=["call", "return"], action=CGBAction)
)
from sympy import *
x = symbols('x')
f = 1 / (x * sin(x)**2)
#f = 1 / (x * 3)
#f = sin(exp(cos(x)*asin(x)))
f.diff(x)
# ============================
# == Call graph rendering ====
# ============================
import os
import re
OUT_DIR="formulas"
if not os.path.exists(OUT_DIR):
    os.mkdir(OUT_DIR)
def write_formula(prefix, uid, formula, title):
    TEX = uid + prefix + ".tex"
    PDF = uid + prefix + ".pdf"
    PNG = uid + prefix + ".png"
    TEX_PATH = OUT_DIR + "/" + TEX
    with open(TEX_PATH, "w") as f:
        ll = formula_ltx
        ll = ll.replace("FORMULA", formula)
        ll = ll.replace("TITLE", title)
        f.write(ll)
    # compile the formula with pdflatex and convert the PDF to PNG
    CMD = """
    cd formulas ;
    pdflatex {TEX} ;
    convert -trim -density 300 {PDF} -quality 90 -colorspace RGB {PNG} ;
    """.format(TEX=TEX, PDF=PDF, PNG=PNG)
    os.system(CMD)
buf_nodes = ""
buf_edges = ""
def dfs_tree(x):
    global buf_nodes, buf_edges
    arg = ("" if x.arg is None else x.arg)
    rv = ("" if x.retval is None else x.retval)
    arg = arg.replace("\r", "")
    rv = rv.replace("\r", "")
    formula = arg + "\\Rightarrow " + rv
    print(x.func_name + " -> " + x.arg_ascii + " -> " + x.retval_ascii)
    x.func_name = x.func_name.replace("_", "\\_")
    write_formula("", str(x.uid), formula, x.func_name)
    buf_nodes += """
    {0} [image="{0}.png" label=""];
    """.format(x.uid)
    for y in x.children:
        buf_edges += "{0} -> {1};\n".format(x.uid, y.uid)
        dfs_tree(y)
dfs_tree(call_tree_root)
g = open(OUT_DIR + "/graph.dot", "w")
g.write("digraph g{")
g.write(buf_nodes)
g.write(buf_edges)
g.write("}\n")
g.close()
os.system("""cd formulas ; dot -Tpng graph.dot > graph.png ;""")
Mapping SymPy logic to differentiation rules
I think one remaining step is to map the intermediary nodes from SymPy to differentiation rules. Here are some of the ones I was able to map:
Product rule maps to sympy.core.mul.Mul._eval_derivative
Chain rule maps to sympy.core.function.Function._eval_derivative
Sum rule maps to sympy.core.add.Add._eval_derivative
Derivative of a summation maps to sympy.concrete.summations.Sum._eval_derivative
n-th Derivative of a product via the General Leibniz rule maps to sympy.core.mul.Mul._eval_derivative_n_times
Generalized power rule maps to sympy.core.power.Pow._eval_derivative
I haven't seen a Fraction class in sympy.core, so the quotient rule is probably handled indirectly through the product rule and the generalized power rule with an exponent of -1.
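A quick way to check that guess (my own sketch, not part of the original investigation): srepr shows that SymPy stores a quotient as a Mul containing a Pow with exponent -1, so differentiating a fraction does indeed go through the Mul and Pow rules.
from sympy import symbols, sin, srepr

x = symbols('x')
# prints something like: Mul(Pow(Symbol('x'), Integer(-1)), sin(Symbol('x')))
print(srepr(sin(x) / x))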
Running
In order to get this to run you'll need:
sudo apt-get install graphviz imagemagick texlive texlive-latex-base
And the file /etc/ImageMagick-6/policy.xml will have to be updated with the following line to allow conversion from PDF to PNG:
<policy domain="coder" rights="read|write" pattern="PDF" />
There's another call graph library called jonga, but it's a bit generic and doesn't let you completely filter out unwanted calls.
Related
I'm trying to find the root of a function that takes 4 known entities (numpy arrays) and a scalar variable value. I'm trying to use SciPy's fsolve tool to find the root, but I'm unable to get its syntax right.
The function is:
def foo(T_, t_, x1_, x2_, y):
    P = np.zeros(len(t_), dtype=object)
    s = np.zeros(len(t_))
    for i in range(len(t_)):
        P[i] = model1.Pr(T_, t_[i], x1_[0][i], x2_[0][i])
        agg_disc_fact = sum(P[i]) * y * dt
        stub = 1
        final_term = P[i][-1]
        s[i] = agg_disc_fact - stub + final_term
    return s[0]
The variable in the above function is y. The syntax I'm using to find the root is:
result = fsolve(func=foo, x0=0, args=(T,ts,x1,x2))
I'm also open to using other methods/tools of finding the root if they are more efficient or have more intuitive syntax.
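For what it's worth, here is a hedged sketch of how the call could be arranged (not part of the original question): fsolve passes the unknown as the first positional argument of the callable, while foo above expects y last, so one option is to wrap it. T, ts, x1, x2, model1 and dt are assumed to be defined as in the question.
from scipy.optimize import fsolve

# wrap foo so that the unknown y becomes the first argument, as fsolve expects
result = fsolve(lambda y: foo(T, ts, x1, x2, y), x0=0.0)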
Is there any way to access the cost function on a per-iteration basis with scipy.minimize without using the callback and re-executing the cost function?
options.disp seems to be intended to do this, but only causes the optimizer to print the termination message.
It would be fine to print it to stdout and use contextlib.redirect_stdout with io.StringIO to gather it and parse through the data after, but I can't find a way to efficiently access the cost function on each iteration.
The method least_squares does that with the parameter verbose=2. However, it is not a general-purpose minimizer; its purpose is to minimize the sum of squares of the given functions. Example:
least_squares(lambda x: [x[0]*x[1]-6, x[0]+x[1]-5], [0, 0], verbose=2)
For other methods, like minimize, there is no such option. Instead of using a callback and re-evaluating the cost function, you may want to add some logging to the function itself. For example, here fun appends the computed values to the global variable cost_values:
def fun(x):
    c = x[0]**2 - 2*x[0] + x[1]**4
    cost_values.append(c)
    return c

cost_values = []
minimize(fun, [3, 2])
print(cost_values)
In this example there are 4 similar function values for each iteration step, as the minimization algorithm looks around, computing the approximate Jacobian and/or Hessian. So, print(cost_values[::4]) would be the way to get one value of the cost function per step.
But it's not always 4 values per step (it depends on the dimension and the method used), so it's better to use a callback function to log the cost after each step. The current cost should be stored in a global variable so it does not have to be recomputed.
def fun(x):
    global current_cost
    current_cost = x[0]**2 - 2*x[0] + x[1]**4
    return current_cost

def log_cost(x):
    cost_values.append(current_cost)

cost_values = []
minimize(fun, [3, 2], callback=log_cost)
print(cost_values)
This prints
[3.5058199763814986, -0.2358850818406083, -0.56104822688320077, -0.88774448831043995, -0.96018358963745964, -0.98750765702936738, -0.99588975368993771, -0.99867208501468863, -0.99956795994852465, -0.99985981414137615, -0.99995446605426996, -0.99998521591611178, -0.99999519917089297, -0.99999844105574265, -0.99999949379700426, -0.99999983560485239, -0.99999994662329761, -0.99999998266175671]
I figured out a sort of hack using stdlib features; it uses a "deep" redirect of sys.stdout. Note that this does not work with Jupyter, since IPython hijacks sys.stdout, which removes the .fileno attribute.
It may be possible to patch Jupyter using a tempfile.SpooledTemporaryFile in this way, removing this issue, but I don't know.
Because this uses OS-level file descriptors, I believe it is also not thread-safe.
import os
import sys
import tempfile

class forcefully_redirect_stdout(object):
    ''' Forces stdout to be redirected, for both python code and C/C++/Fortran
        or other linked libraries. Useful for scraping values from e.g. the
        disp option for scipy.optimize.minimize.
    '''
    def __init__(self, to=None):
        ''' Creates a new forcefully_redirect_stdout context manager.

        Args:
            to (`None` or `str`): what to redirect to. If type(to) is None,
                internally uses a tempfile.SpooledTemporaryFile and returns a UTF-8
                string containing the captured output. If type(to) is str, opens a
                file at that path and pipes output into it, erasing prior contents.

        Returns:
            `str` if type(to) is None, else returns `None`.
        '''
        # initialize where we will redirect to and a file descriptor for python
        # stdout -- sys.stdout is used by python, while os.fd(1) is used by
        # C/C++/Fortran/etc
        self.to = to
        self.fd = sys.stdout.fileno()
        if self.to is None:
            self.to = tempfile.SpooledTemporaryFile(mode='w+b')
        else:
            self.to = open(to, 'w+b')

        self.old_stdout = os.fdopen(os.dup(self.fd), 'w')
        self.captured = ''

    def __enter__(self):
        self._redirect_stdout(to=self.to)
        return self

    def __exit__(self, *args):
        self._redirect_stdout(to=self.old_stdout)
        self.to.seek(0)
        self.captured = self.to.read().decode('utf-8')
        self.to.close()

    def _redirect_stdout(self, to):
        sys.stdout.close()                    # implicit flush()
        os.dup2(to.fileno(), self.fd)         # fd writes to 'to' file
        sys.stdout = os.fdopen(self.fd, 'w')  # Python writes to fd

if __name__ == '__main__':
    import re
    from scipy.optimize import minimize

    def foo(x):
        return 1/(x+0.001)**2 + x

    with forcefully_redirect_stdout() as txt:
        result = minimize(foo, [100], method='L-BFGS-B', options={'disp': True})

    print('this appears before `disp` output')
    print('here''s the output from disp:')
    print(txt.captured)

    lines_with_cost_function_values = \
        re.findall(r'At iterate\s*\d\s*f=\s*-*?\d*.\d*D[+-]\d*', txt.captured)
    fortran_values = [s.split()[-1] for s in lines_with_cost_function_values]
    # fortran uses "D" to denote double and "raw" exp notation,
    # fortran value 3.0000000D+02 is equivalent to
    # python value 3.0000000E+02 with double precision
    python_vals = [float(s.replace('D', 'E')) for s in fortran_values]
    print(python_vals)
I'm trying to cross-compare two outputs labeled "S" in compareDNA (calculating Hamming distance). However, I cannot figure out how to pass an integer from one def to another. I've tried returning the variable, but I am unable to use it (in a different def) after returning it.
I'm attempting to see which output of compareDNA(Udnalin, Mdnalin) and compareDNA(Udnalin, Hdnalin) is higher, to determine which has the greater Hamming distance.
How does one pass an integer from one def to another?
import sys

def main():
    var()

def var():
    Mdna = open("mouseDNA.txt", "r")
    Mdnalin = Mdna.readline()
    print(Mdnalin)
    Mdna.close

    Hdna = open("humanDNA.txt", "r")
    Hdnalin = Hdna.readline()
    print(Hdnalin)
    Hdna.close

    Udna = open("unknownDNA.txt", "r")
    Udnalin = Udna.readline()
    print(Udnalin)
    Udna.close

    S = 0
    S1 = 0
    S2 = 0

    print("Udnalin + Mdnalin")
    compareDNA(Udnalin, Mdnalin)
    S1 = S

    print("Udnalin + Hdnalin")
    compareDNA(Udnalin, Hdnalin)

def compareDNA(i, j):
    diffs = 0
    length = len(i)
    for x in range(length):
        if i[x] != j[x]:
            diffs += 1
    S = length - diffs / length
    S = round(S, 2)
    return S
    # print("Mouse")
    # print("Human")
    # print("RATMA- *cough* undetermined")

main()
You probably want to assign the value returned by each call to compareDNA to a separate variable in your var function. Then you can do whatever you want with those values (what exactly you want to do is not clear from your question). Try something like this:
S1 = compareDNA(Udnalin, Mdnalin) # bind the return value from this call to S1
S2 = compareDNA(Udnalin, Hdnalin) # and this one to S2
# do something with S1 and S2 here!
If what you want to do is especially simple (e.g. comparing them to see which is larger), you could even use the return values directly in an expression, such as the condition of an if statement:
if compareDNA(Udnalin, Mdnalin) > compareDNA(Udnalin, Hdnalin):
    print("Unknown DNA is closer to a Mouse")
else:
    print("Unknown DNA is closer to a Human")
There's one further thing I'd like to point out, which is unrelated to the core of your question: you should use with statements to handle closing your files, rather than trying to close them manually. Your current code doesn't actually close the files correctly (you're missing the parentheses after .close in each case, which are needed to make it a function call).
If you use a with statement instead, the files will be closed automatically at the end of the block (even if there is an exception):
with open("mouseDNA.txt", "r") as Mdna:
Mdnalin = Mdna.readline()
print(Mdnalin)
with open("humanDNA.txt", "r") as Hdna:
Hdnalin = Hdna.readline()
print(Hdnalin)
with open("unknownDNA.txt", "r") as Udna:
Udnalin = Udna.readline()
print(Udnalin)
Minimal Question:
def smooth(indicator, aggregation, tick):
    storage.ZZZ = []
    storage.ZZZZ = []
is the pertinent part of my definition. When I call that definition I'm using:
MA_now_smooth = smooth(MA, IN, I)[-1]
where MA is an input array and IN and I are constants; the full definition is given below, but it ultimately returns the last value appended to storage.ZZZZ. What I want is to create custom storage objects named according to the "indicator" input, so that the persistent variables don't overlap when calling this definition for myriad array inputs.
i.e.
smooth(MA, IN, I)[-1]
should create:
storage.ZZZ_MA
storage.ZZZZ_MA
but
smooth(MA2, IN, I)[-1]
should create:
storage.ZZZ_MA2
storage.ZZZZ_MA2
In Depth Question:
I'm creating a Simple Moving Average smoothing definition for TA-lib indicators at tradewave.net; TA-lib is a library of black-box functions that produce "financial technical analysis" array outputs for things like moving average, exponential moving average, stochastic, etc. My definition is a secondary simple smoothing of these TA-lib functions.
I'm having to do this because when "aggregating" candles counting backwards from current, I'm getting "wiggly" outputs; you can read more about that here if you need background: https://discuss.tradewave.net/t/aggregating-candles-some-thoughts
My definition code works well to create a list of smoothed values when smoothing a single indicator, "MA", which is a TA-lib array:
import talib

def smooth(indicator, aggregation, tick):
    import math
    A = int(math.ceil(aggregation / tick))
    if info.tick == 0:
        storage.ZZZ = []
        storage.ZZZZ = []
    storage.ZZZ.append(indicator[-1])
    storage.ZZZ = storage.ZZZ[-A:]
    ZZZ = sum(storage.ZZZ) / len(storage.ZZZ)
    storage.ZZZZ.append(ZZZ)
    storage.ZZZZ = storage.ZZZZ[-250:]
    return storage.ZZZZ

def tick():
    I = info.interval
    period = 10
    IN = 3600
    instrument = pairs.btc_usd
    C = data(interval=IN)[instrument].warmup_period('close')
    MA = talib.MA(C, timeperiod=period, matype=0)
    MA_now = MA[-1]
    MA_now_smooth = smooth(MA, IN, I)[-1]
    plot('MA', MA_now)
    plot('MA_smooth', MA_now_smooth)
However, when I attempt to smooth more than one indicator with the same definition, it fails because the persistent variables in the definition are the same for both MA and MA2. This does not work:
import talib

def smooth(indicator, aggregation, tick):
    import math
    A = int(math.ceil(aggregation / tick))
    if info.tick == 0:
        storage.ZZZ = []
        storage.ZZZZ = []
    storage.ZZZ.append(indicator[-1])
    storage.ZZZ = storage.ZZZ[-A:]
    ZZZ = sum(storage.ZZZ) / len(storage.ZZZ)
    storage.ZZZZ.append(ZZZ)
    storage.ZZZZ = storage.ZZZZ[-250:]
    return storage.ZZZZ

def tick():
    I = info.interval
    period = 10
    IN = 3600
    instrument = pairs.btc_usd
    C = data(interval=IN)[instrument].warmup_period('close')
    MA = talib.MA(C, timeperiod=period, matype=0)
    MA2 = talib.MA(C, timeperiod=2*period, matype=0)
    MA_now = MA[-1]
    MA2_now = MA2[-1]
    MA_now_smooth = smooth(MA, IN, I)[-1]
    MA2_now_smooth = smooth(MA2, IN, I)[-1]
    plot('MA', MA_now)
    plot('MA2', MA2_now)
    plot('MA_smooth', MA_now_smooth)
    plot('MA2_smooth', MA2_now_smooth)
What I would like to do... and don't understand how to do:
I'd like the definition to create a new persistent storage object for each new input, and I'd like the names of my objects to reflect the name of the "indicator" input, i.e.:
storage.ZZZ_MA
storage.ZZZZ_MA
ZZZ_MA
for the "MA" smoothing and
storage.ZZZ_MA2
storage.ZZZZ_MA2
ZZZ_MA2
for "MA2" smoothing
I would like to be able to reuse this definition with many different array inputs for "indicator" and for each instance use the name of the indicator array appended to the persistent object names used in the definition. For example:
storage.ZZZ_MA3
storage.ZZZ_MA4
etc.
In the instances below, info.interval is my tick size of 15 minutes (900 sec) and my aggregation was 1 hour (3600 sec).
With the single output of "MA" and correct smoothing
With dual outputs of "MA" and "MA2" I'm getting incorrect smoothing
In the second image I'm looking for two "smooth" lines, one in the middle of the wiggly red plot and the other in the middle of the wiggly blue plot. Instead I'm getting two identical wiggly lines (purple and orange) that split the difference. I understand why, but I don't know how to fix it.
1) please show me how
2) please tell me what I'm looking to do is "called" and point me to some tags/posts where I can learn more.
Thanks for your help!
LP
Make storage a dict, and use string keys rather than trying to create and access dynamic variables?
Well I've arrived at an interim solution.
While I like this solution, as it's doing everything I need, I would like to eliminate the redundant "label" input. Is there any way for me to reference the name of my input parameter/argument "indicator" instead of its object, so that I could return to my original 3 input parameters rather than 4?
I tried this:
def smooth(indicator, aggregation, tick):
    import math
    A = int(math.ceil(aggregation / tick))
    ZZZ = 'ZZZ_%s' % dict([(t.__name__, t) for t in indicator])
    ZZZZ = 'ZZZZ_%s' % dict([(t.__name__, t) for t in indicator])
    if info.tick == 0:
        storage[ZZZ] = []
        storage[ZZZZ] = []
    storage[ZZZ].append(indicator[-1])
    storage[ZZZ] = storage[ZZZ][-A:]
    ZZZZZ = sum(storage[ZZZ]) / len(storage[ZZZ])
    storage[ZZZZ].append(ZZZZZ)
    storage[ZZZZ] = storage[ZZZZ][-250:]
    return storage[ZZZZ]
but I get:
File "", line 259, in File "", line 31, in tick File "", line 6, in smooth AttributeError: 'numpy.float64' object has no attribute 'name'
Here is my current 4-argument definition smoothing 4 different TA-lib moving averages. The same definition can be used with many other aggregated TA-lib indicators. It should work with ANY aggregate/tick size ratio, including 1:1.
import talib

def smooth(indicator, aggregation, tick, label):
    import math
    A = int(math.ceil(aggregation / tick))
    ZZZ = 'ZZZ_%s' % label
    ZZZZ = 'ZZZZ_%s' % label
    if info.tick == 0:
        storage[ZZZ] = []
        storage[ZZZZ] = []
    storage[ZZZ].append(indicator[-1])
    storage[ZZZ] = storage[ZZZ][-A:]
    ZZZZZ = sum(storage[ZZZ]) / len(storage[ZZZ])
    storage[ZZZZ].append(ZZZZZ)
    storage[ZZZZ] = storage[ZZZZ][-250:]
    return storage[ZZZZ]

def tick():
    I = info.interval
    period = 10
    IN = 3600
    instrument = pairs.btc_usd
    C = data(interval=IN)[instrument].warmup_period('close')
    MA1 = talib.MA(C, timeperiod=period, matype=0)
    MA2 = talib.MA(C, timeperiod=2*period, matype=0)
    MA3 = talib.MA(C, timeperiod=3*period, matype=0)
    MA4 = talib.MA(C, timeperiod=4*period, matype=0)
    MA1_now = MA1[-1]
    MA2_now = MA2[-1]
    MA3_now = MA3[-1]
    MA4_now = MA4[-1]
    MA1_now_smooth = smooth(MA1, IN, I, 'MA1')[-1]
    MA2_now_smooth = smooth(MA2, IN, I, 'MA2')[-1]
    MA3_now_smooth = smooth(MA3, IN, I, 'MA3')[-1]
    MA4_now_smooth = smooth(MA4, IN, I, 'MA4')[-1]
    plot('MA1', MA1_now)
    plot('MA2', MA2_now)
    plot('MA3', MA3_now)
    plot('MA4', MA4_now)
    plot('MA1_smooth', MA1_now_smooth)
    plot('MA2_smooth', MA2_now_smooth)
    plot('MA3_smooth', MA3_now_smooth)
    plot('MA4_smooth', MA4_now_smooth)
h/t james for collaboration
I'm trying to find out if there is a way to change the viewport angle in Blender using Python.
I would like a result like you would get from pressing 1, 3, or 7 on the number pad.
Thank you for any help.
First of all, note that you can have multiple 3D views open at once, and each can have its own viewport angle, perspective/ortho settings etc. So your script will have to look for all the 3D views that might be present (which might be none) and decide which one(s) it’s going to affect.
Start with the bpy.data object, which has a window_managers attribute. This collection always seems to have just one element. However, there might be one or more open windows. Each window has a screen, which is divided into one or more areas. So you need to search through all the areas for one with a space type of "VIEW_3D". And then hunt through the spaces of this area for the one(s) with type "VIEW_3D". Such a space will be of subclass SpaceView3D. This will have a region_3d attribute of type RegionView3D. And finally, this object in turn has an attribute called view_matrix, which takes a value of type Matrix that you can get or set.
Got all that? :)
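As a rough illustration, the traversal described above looks something like this (a sketch of mine; it assumes it runs inside Blender, where the bpy module is available):
import bpy

for wm in bpy.data.window_managers:
    for window in wm.windows:
        for area in window.screen.areas:
            if area.type == 'VIEW_3D':
                for space in area.spaces:
                    if space.type == 'VIEW_3D':
                        region_3d = space.region_3d      # RegionView3D
                        print(region_3d.view_matrix)     # gettable and settable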
Once you've located the right 'view', you can modify:
view.spaces[0].region_3d.view_matrix
view.spaces[0].region_3d.view_rotation
Note that the region_3d.view_location is the 'look_at' target, not the location of the camera; you have to modify the view_matrix directly if you want to move the position of the camera (as far as I know), but you can subtly adjust the rotation using view_rotation quite easily. You'll probably need to read this to generate a valid quaternion though: http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
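As a small hedged example of my own (not from the answer above): assigning the identity quaternion to view_rotation should give a top-down view, similar to pressing 7 on the number pad.
import bpy
from mathutils import Quaternion

for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        # identity quaternion; this should correspond to the top view
        area.spaces[0].region_3d.view_rotation = Quaternion((1.0, 0.0, 0.0, 0.0))
        break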
Perhaps something like this may be useful:
class Utils(object):
    def __init__(self, context):
        self.context = context

    @property
    def views(self):
        """ Returns the set of 3D views. """
        rtn = []
        for a in self.context.window.screen.areas:
            if a.type == 'VIEW_3D':
                rtn.append(a)
        return rtn

    def camera(self, view):
        """ Return position, rotation data about a given view for the first space attached to it """
        look_at = view.spaces[0].region_3d.view_location
        matrix = view.spaces[0].region_3d.view_matrix
        camera_pos = self.camera_position(matrix)
        rotation = view.spaces[0].region_3d.view_rotation
        return look_at, camera_pos, rotation

    def camera_position(self, matrix):
        """ From 4x4 matrix, calculate camera location """
        t = (matrix[0][3], matrix[1][3], matrix[2][3])
        r = (
            (matrix[0][0], matrix[0][1], matrix[0][2]),
            (matrix[1][0], matrix[1][1], matrix[1][2]),
            (matrix[2][0], matrix[2][1], matrix[2][2])
        )
        rp = (
            (-r[0][0], -r[1][0], -r[2][0]),
            (-r[0][1], -r[1][1], -r[2][1]),
            (-r[0][2], -r[1][2], -r[2][2])
        )
        output = (
            rp[0][0] * t[0] + rp[0][1] * t[1] + rp[0][2] * t[2],
            rp[1][0] * t[0] + rp[1][1] * t[1] + rp[1][2] * t[2],
            rp[2][0] * t[0] + rp[2][1] * t[1] + rp[2][2] * t[2],
        )
        return output