Get data from Octave script execution using oct2py (Python 3)

I'm trying to execute some MATLAB scripts (not function definitions) from Python 3 using the oct2py module.
Those scripts (a large number of them) contain a very long definition for reading specific ASCII files (contained in the same directory).
I do not know how to get the data read by those MATLAB (Octave) scripts into Python.
Here is what I am doing:
from oct2py import octave
import numpy as np
import os
import pprint
hom_dir = '/path_to/files&scripts_dir/'
os.chdir(hom_dir)
octave.addpath('/path_to/files&scripts_dir/')
out = octave.matlab_file  # (matlab_file.m)
output:
Out[237]: <function oct2py.core.Oct2Py._make_octave_command.<locals>.octave_command>
pprint.pprint(out)
<function Oct2Py._make_octave_command.<locals>.octave_command at 0x7f2069d669d8>
No error is returned, but I do not know how to get the data (which were read in an Octave session). The examples that I have found for executing .m files with oct2py were about files that define functions, but that is not my case.

Assuming your script places its results in the (virtual) Octave workspace, you can simply access the workspace afterwards.
Example:
%% In file myscript.m
a = 1
b = 2
Python code:
>>> octave.run('myscript.m')
>>> vars = octave.who(); vars
[u'A__', u'a', u'b']
>>> octave.a()
1.0
>>> octave.b()
2.0
Some notes / caveats:
I ran into problems when I tried running the script directly, as Octave complained I was trying to run it as a function; you can bypass this using the run command.
Your Octave current directory may not be the same as your Python current directory (this depends on how the Octave engine is started). For me, Python started in my home directory but Octave started in my desktop directory. I had to check manually and change to the correct directory, i.e.:
octave.pwd()
octave.cd('/path/to/my/homedir')
Those weird-looking variables A__ (B__, etc.) in the workspace reflect the most recent arguments you passed to functions via the oct2py engine (though for some reason they can't be accessed like normal variables), e.g.:
>>> octave.plus(1,2)
3.0
>>> print(octave.who())
[u'A__', u'B__', u'a', u'b']
>>> octave.eval('A__')
A__ = 1
>>> octave.eval('B__')
B__ = 2
As you may have noticed above, the usual ans variable is not kept in the workspace. Do not rely on any script actions that reference ans; in the context of oct2py, ans seems to always evaluate to None.
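The run-then-fetch workflow above can be mimicked in plain Python with runpy, which may make the model clearer: executing a script produces a namespace, and the caller reads results out of that namespace afterwards. This is only an analogy (the script name and variables are invented for illustration); oct2py does the same round-trip against Octave's workspace rather than a Python dict.

```python
import os
import runpy
import tempfile

# Plain-Python analogy of the Octave workspace round-trip: running a script
# populates a namespace, and you read variables back out of it afterwards,
# just as octave.run('myscript.m') followed by octave.a() does via oct2py.
with tempfile.TemporaryDirectory() as d:
    script = os.path.join(d, "myscript.py")
    with open(script, "w") as f:
        f.write("a = 1\nb = 2\n")
    workspace = runpy.run_path(script)  # dict of the finished script's globals

print(workspace["a"], workspace["b"])  # 1 2
```

The key point either way: the script itself returns nothing; the results live in the workspace, and fetching them is a separate step.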

Related

Problems with redirected stdin and stdout when using py in MATLAB to run Python code

I have some Python code that runs perfectly when run within Python. When I run the code from within MATLAB using py, I appear to have problems with the redirection of stdin and stdout.
The problem can be illustrated with the Python code in a file ztest.py:
import os

def ztest():
    os.system('./jonesgrid > /dev/null < zxcvbn-split-08.txt')
When run within python, all is well:
>>> import ztest
>>> ztest.ztest()
>>>
When this is run from matlab with the command
>> py.ztest.ztest()
I get the following output:
At line 283 of file jonesgrid.for (unit = 5, file = 'fort.5')
Fortran runtime error: End of file
Error termination. Backtrace:
#0 0x2aaaaacead1a
<other lines cut>
The file fort.5 has been created, together with fort.6. Normally these two files are associated with standard input and output respectively and are not created during a run. I have also tried using subprocess.run() and get the same problem.
I'm not sure whether this should be posted in a Python forum or a MATLAB one, but I'm guessing the problem lies in the way MATLAB interfaces with Python. Other parts of my code that use os.system() without redirection work fine.
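One thing worth trying (a sketch, not a confirmed fix for the MATLAB case): avoid shell-level redirection entirely and hand subprocess.run explicit file handles, so the child's stdin/stdout no longer depend on whatever standard streams MATLAB's embedded Python inherited. Here sys.executable stands in for the ./jonesgrid binary so the sketch is runnable anywhere; the file names are invented.

```python
import os
import subprocess
import sys
import tempfile

# Instead of "cmd > out < in" (shell redirection, which relies on the parent
# process's standard streams), pass explicit file objects to subprocess.run.
with tempfile.TemporaryDirectory() as d:
    in_path = os.path.join(d, "input.txt")
    out_path = os.path.join(d, "output.txt")
    with open(in_path, "w") as f:
        f.write("hello\n")
    # sys.executable -c ... stands in for ./jonesgrid reading its stdin
    with open(in_path) as stdin_f, open(out_path, "w") as stdout_f:
        subprocess.run([sys.executable, "-c", "print(input().upper())"],
                       stdin=stdin_f, stdout=stdout_f, check=True)
    with open(out_path) as f:
        result = f.read()

print(result)  # HELLO
```

Because the redirection happens in Python rather than in a shell spawned by os.system(), the child process gets well-defined file descriptors regardless of how the embedding host set up its own streams.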

What does it mean to "initialize the Julia runtime" when exporting compiled .dll or .so files for use in other languages?

I'm trying to compile a usable .dll file from Julia to be used in Python, as I've already written a large GUI in Python and need some fast optimization work done. Normally I would just use PyJulia or some "live" call; however, this program needs to be compiled for distribution within my research team, so whatever solution I end up with must run on its own (without Julia or Python actually installed).
Right now I'm able to create .dll files via PackageCompiler.jl, something I learned from previous posts on Stack Overflow; however, when trying to run these files in Python via the following code:
Julia mock package
module JuliaFunctions
# Pkg.add("BlackBoxOptim")

Base.@ccallable function my_main_function(x::Cfloat, y::Cfloat)::Cfloat
    z = 0
    for i in 1:x
        z += i ^ y
    end
    return z
end

# function julia_main()
#     print("Hello from a compiled executable!")
# end

export my_main_function

end # module
Julia script to use PackageCompiler
# using PackageCompiler
using Pkg
# Pkg.develop(path="JuliaFunctions") # This is how you add a local package
# include("JuliaFunctions/src/JuliaFunctions.jl") # this is how you add a local module
using PackageCompiler
# Pkg.add(path="JuliaFunctions")
@time create_sysimage(:JuliaFunctions, sysimage_path="JuliaFunctions.dll")
Trying to use the resulting .dll in CTypes in Python
import os
import ctypes
from ctypes.util import find_library
from ctypes import *

path = os.path.dirname(os.path.realpath(__file__)) + '\\JuliaFunctions.dll'
# _lib = cdll.LoadLibrary(ctypes.util.find_library(path)) # same error
# hllDll = ctypes.WinDLL(path, winmode=0) # same error
with os.add_dll_directory(os.path.dirname(os.path.realpath(__file__))):
    _lib = ctypes.CDLL(path, winmode=0)
I get
OSError: [WinError 127] The specified procedure could not be found
With my current understanding, this means that ctypes found the DLL and imported it, but then didn't find... something. I've yet to fully grasp how this behaves.
I've verified the function my_main_function is exported in the .dll file via Nirsoft's DLL Export Viewer. Users from previous similar issues have noted that this sysimage is already callable and should work, but they always add at the end something along the lines of "Note that you will also in general need to initialize the Julia runtime."
What does this mean? Is this even something that can be done independently from the Julia installation? The dev docs in PackageCompiler mention this, however they just mention that julia_main is automatically included in the .dll file and gets called as a sort of launch point. This function is also being exported correctly into the .dll file the above code creates. Below is an image of the Nirsoft export viewer output for reference.
Edit 1
Inexplicably, I've rebuilt this .dll on another machine and made progress. Now, the dll is imported correctly. I'm not sure yet why this worked on a fresh Julia install + Python venv, but I'm going to reinstall them on the other one and update this if anything changes. For anyone encountering this, also note you need to specify the expected output, whatever it may be. In my case this is done by adding (after the import):
_lib.testmethod1.restype = c_double # switched from Cfloat earlier, a lot has changed.
_lib.testmethod1.argtypes = [c_double, c_double] # (defined by ctypes)
The current error is now OSError: exception: access violation writing 0x0000000000000024 when trying to actually use the function, which is specific to Python. Any help on this would also be appreciated.
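The restype/argtypes declarations in the edit above (and the earlier Cfloat vs c_double switch) matter because ctypes defaults to treating every return value as an int. That class of mismatch is easy to reproduce against any C library; a minimal sketch using libm's pow (assuming a Unix-like system; the fallback library name is a glibc-specific assumption):

```python
import ctypes
import ctypes.util

# Load the C math library. The hard-coded "libm.so.6" fallback is an
# assumption for glibc-based Linux; on Windows you'd load a different DLL.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declaring the exported signature exactly is what makes the call safe:
# without restype, ctypes assumes an int return and the double comes back
# garbled. This is the same step applied to the Julia .dll above.
libm.pow.restype = ctypes.c_double
libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]

print(libm.pow(2.0, 10.0))  # 1024.0
```

A wrong restype doesn't raise an error; it silently misinterprets the returned bits, which is why declaring both restype and argtypes up front is worth the habit.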

Some functions from a library work in the Python shell but not in a script

I'm having trouble getting certain functions from a library called art (https://github.com/sepandhaghighi/art) to run in a script, though they work fine in a shell. The script/commands entered sequentially look like this:
from art import *

randart()     # fails in a script, but succeeds in the shell
tart("test")  # different function, same library; succeeds in both shell and script

import sys
print(sys.version)
The Python version is 3.7.5 for both the shell and the script. The first function does not throw an error when run in a script, but gives no output. Its desired output is random ASCII art from a collection. I feel like I'm missing something really simple. Any ideas? The documentation on GitHub reports "Some environments don't support all 1-Line arts", but the shell and script use the same Python version on the same machine. Are there other parts of the environment that could be the cause?
You need to print randart() when writing a script. Make a habit of using print() whenever you want output. The shell echoes the value of an expression by default, whereas in a script you must tell Python explicitly what to do with a value or function result.
So use this:
from art import *
print(randart())
In the shell the value is implicitly printed; in a script you must print it explicitly:
print(randart())

Reload import in IPython

I'm trying to access some Fortran subroutines using F2PY, but I've run into the following problem during consecutive calls from IPython. Take this minimal Fortran code (I hope I didn't code anything stupid; my Fortran is a bit rusty):
! test.f90
module mod
    integer i
contains
    subroutine foo
        i = i + 1
        print *, i
    end subroutine foo
end module mod
If I compile this using F2PY (f2py3.5 -c -m test test.f90), import it in Python and call it twice:
# run.py
import test
test.mod.foo()
test.mod.foo()
The resulting output is:
$ python run.py
1
2
So on every call of foo(), i is incremented, which is supposed to happen. But between different runs of run.py (whether from the command line or the IPython interpreter), everything should be "reset", i.e. the printed counter should start from 1 on every run. This happens when calling run.py from the command line, but if I run the script multiple times from IPython, i keeps increasing:
In [1]: run run.py
1
2
In [2]: run run.py
3
4
I know that there are lots of posts showing how to reload imports (using autoreload in IPython, importlib.reload(), ...), but none of them seem to work for this example. Is there a way to force a clean reload/import?
Some side notes: (1) The Fortran code that I'm trying to access is quite large, old and messy, so I'd prefer not to change anything in there; (2) I could easily do test.mod.i = something in between calls, but the real Fortran code is too complex for such solutions; (3) I'd really prefer a solution which I can put in the Python code over e.g. settings (autoreload, ..) which I have to manually put in the IPython interpreter (forget it once and ...)
If you can slightly change your Fortran code, you may be able to reset the counter without re-importing (probably faster, too).
The change introduces i in a common block so it can be reset from outside. The modified Fortran code looks like this:
! test.f90
module mod
    common /set1/ i
contains
    subroutine foo
        common /set1/ i
        i = i + 1
        print *, i
    end subroutine foo
end module mod
Reset the variable i from Python as below:
import test
test.mod.foo()
test.mod.foo()
test.set1.i = 0 #reset here
test.mod.foo()
This should produce the following result:
$ python run.py
1
2
1
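If touching the Fortran is off the table (side note 1), another option that lives entirely on the Python side (side note 3) is to run run.py in a fresh interpreter each time: an extension module's state cannot outlive its process, so every run starts clean. A sketch with a pure-Python stand-in for the f2py-built module (the counter module below is invented for illustration):

```python
import os
import subprocess
import sys
import tempfile

# counter.py stands in for the f2py-built 'test' module: module-level state
# that increments on every call, like the Fortran counter i.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "counter.py"), "w") as f:
        f.write("i = 0\n"
                "def foo():\n"
                "    global i\n"
                "    i += 1\n"
                "    print(i)\n")
    with open(os.path.join(d, "run.py"), "w") as f:
        f.write("import counter\ncounter.foo()\ncounter.foo()\n")
    # Each subprocess performs a brand-new import, so the counter resets.
    outs = [subprocess.run([sys.executable, "run.py"], cwd=d,
                           capture_output=True, text=True).stdout
            for _ in range(2)]

print(outs)  # ['1\n2\n', '1\n2\n'] -- state does not leak between runs
```

From IPython, `!python run.py` (or a subprocess call like the above) gives the same process-level isolation that `run run.py` lacks, at the cost of a fresh interpreter start on every run.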

Line Profiling inner function with Cython

I've had pretty good success using this answer to profile my Cython code, but it doesn't seem to work properly with nested functions. In this notebook you can see that the profile doesn't appear when the line profiler is used on a nested function. Is there a way to get this to work?
tl;dr:
This seems to be an issue with Cython. There's a hackish way that does the trick but isn't reliable; you could use it for one-off cases until this issue has been fixed.*
Change the line_profiler source:
I can't be 100% sure about this, but it is working. What you need to do is download the source for line_profiler and fiddle around in python_trace_callback. After the code object is obtained from the current frame of execution (code = <object>py_frame.f_code), add the following:
if what == PyTrace_LINE or what == PyTrace_RETURN:
    code = <object>py_frame.f_code
    # Add an entry for a code object with a different address if and only if it
    # doesn't already exist *but* the name of the function is in the code_map
    if code not in self.code_map and code.co_name in {co.co_name for co in self.code_map}:
        for co in self.code_map:
            # make the condition as strict as necessary
            cond = co.co_name == code.co_name and co.co_code == code.co_code
            if cond:
                del self.code_map[co]
                self.code_map[code] = {}
This replaces the code object in self.code_map with the currently executing one that matches its name and co_code contents. co_code is b'' for Cython, so in essence it matches Cython functions by that name. This is where the check could be made more robust by matching more attributes of the code object (for example, the filename).
You can then proceed to build it with python setup.py build_ext and install it with sudo python setup.py install. I'm currently building it with python setup.py build_ext --inplace in order to work with it locally; I'd suggest you do the same. If you do build it with --inplace, make sure you navigate to the folder containing the source for line_profiler before importing it.
So, in the folder containing the built shared library for line_profiler I set up a cyclosure.pyx file containing your functions:
def outer_func(int n):
    def inner_func(int c):
        cdef int i
        for i in range(n):
            c += i
        return c
    return inner_func
And an equivalent setup_cyclosure.py script in order to build it:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
from Cython.Compiler.Options import directive_defaults
directive_defaults['binding'] = True
directive_defaults['linetrace'] = True
extensions = [Extension("cyclosure", ["cyclosure.pyx"], define_macros=[('CYTHON_TRACE', '1')])]
setup(name = 'Testing', ext_modules = cythonize(extensions))
As previously, the build was performed with python setup_cyclosure.py build_ext --inplace.
Launching your interpreter from the current folder and issuing the following yields the wanted results:
>>> import line_profiler
>>> from cyclosure import outer_func
>>> f = outer_func(5)
>>> prof = line_profiler.LineProfiler(f)
>>> prof.runcall(f, 5)
15
>>> prof.print_stats()
Timer unit: 1e-06 s

Total time: 1.2e-05 s
File: cyclosure.pyx
Function: inner_func at line 2

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     2                                           def inner_func(int c):
     3                                               cdef int i
     4         1            5      5.0     41.7      for i in range(n):
     5         5            6      1.2     50.0          c += i
     6         1            1      1.0      8.3      return c
Issue with IPython %%cython:
Trying to run this from IPython results in an unfortunate situation. While executing, the code object doesn't store the path to the file where it was defined; it simply stores the filename. Since I simply drop the code object into the self.code_map dictionary, and since code objects have read-only attributes, we lose the file path information when using it from IPython (because IPython stores the files generated by %%cython in a temporary directory).
Because of that, you do get the profiling statistics for your code, but the Line Contents column comes up empty. One might be able to forcefully copy the filenames between the two code objects in question, but that's another issue altogether.
*The issue:
The issue here is that, for some reason, when dealing with nested and/or enclosed functions, there's an abnormality in the address of the code object between when it is created and when it is being interpreted in one of Python's frames. The issue you were facing was caused by the following condition not being satisfied:
if code in self.code_map:
Which was odd. Creating your function in IPython and adding it to the LineProfiler did indeed add it to the self.code_map dictionary:
prof = line_profiler.LineProfiler(f)
prof.code_map
Out[16]: {<code object inner_func at 0x7f5c65418f60, file "/home/jim/.cache/ipython/cython/_cython_magic_1b89b9cdda195f485ebb96a104617e9c.pyx", line 2>: {}}
When the time came to actually test the previous condition, though, and the current code object was snatched from the current execution frame with code = <object>py_frame.f_code, the address of the code object was different:
# this was obtained with a basic print(code) in _line_profiler.pyx
code object inner_func at 0x7f7a54e26150
indicating it was re-created. This only happens with Cython, and only when a function is defined inside another function. Either that, or something I am completely missing.
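The `code in self.code_map` test relies on code-object equality rather than identity. In plain CPython, two separately created code objects with identical contents compare equal (and hash equal), so a dict lookup still finds them even at different addresses; the Cython-recreated object must therefore differ in content, not just address, which is why the name-matching hack above is needed. A quick check of the CPython behavior:

```python
# Two separate compilations of the same source yield distinct code objects
# (different addresses) whose contents are identical.
c1 = compile("x + 1", "<snippet>", "eval")
c2 = compile("x + 1", "<snippet>", "eval")

print(c1 is c2)  # False -- different objects, different addresses
print(c1 == c2)  # True  -- contents compare equal

# Because equality and hashing are content-based, a dict keyed by c1
# still finds c2:
code_map = {c1: {}}
print(c2 in code_map)  # True
```

So a mere address change would not have broken the lookup; some compared attribute of the recreated object must actually differ.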
