I am new to Cython (I'm only using it for a little homework right now).
I used the following code in a Jupyter notebook to get a general idea of how it works:
%load_ext Cython

%%cython
def cfunc(int n):
    cdef int a = 0
    for i in range(n):
        a += i
    return a
print(cfunc(10))
However, it only prints the result 45 once. When I run the cell again, it no longer shows 45.
Is there a problem with the code? How can I make the cell print 45 every time, just like normal Python code would? Thanks.
When running the %%cython magic, a lot happens under the hood. One can see parts of it by calling the magic in verbose mode, i.e. %%cython --verbose:
A file called _cython_magic_b599dcf313706e8c6031a4a7058da2a2.pyx is generated, where b599dcf313706e8c6031a4a7058da2a2 is the sha1 hash of the %%cython cell. The hash is needed, for example, to be able to reload a %%cython cell (see this SO post).
This file is cythonized and built into a C extension called _cython_magic_b599dcf313706e8c6031a4a7058da2a2.
This extension gets imported - this is the moment your code prints 45 - and everything from this module is added to the global namespace.
When you execute the cell again, none of the above happens: thanks to the sha1 hash, the machinery can see that this cell was already built and loaded, so there is nothing to be done. Only when the content of the cell (and thus its hash) changes will the cache be bypassed and the three steps above executed.
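For illustration, the module name can be thought of as being derived roughly like this (a sketch; the real magic also mixes compiler options and version information into the hashed key, and cell_source is a hypothetical variable holding the cell's text):

import hashlib

cell_source = "def cfunc(int n): ..."  # hypothetical cell contents
module_name = "_cython_magic_" + hashlib.sha1(cell_source.encode("utf-8")).hexdigest()
print(module_name)  # e.g. _cython_magic_b599dcf313706e8c6031a4a7058da2a2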
To enforce that the steps above are performed, one has to pass the --force (or -f) option to the %%cython magic cell, i.e.:
%%cython --force
...
# 45 is printed
However, because building the extension anew is quite time-consuming, one would probably prefer the following:
%%cython
def cfunc(int n):
    cdef int a = 0
    for i in range(n):
        a += i
    return a

# put the code of __main__ into a function
def cython_main():
    print(cfunc(10))

# execute the old main
cython_main()
and now call cython_main() in a new cell, so it gets re-evaluated the same way normal Python code would.
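For example, in a fresh cell:

cython_main()  # prints 45 on every run, without rebuilding the extension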
I am new to Cython. I've written a super simple test program to assess the benefits of Cython. Yet, my pure Python version is a lot faster. Am I doing something wrong?
test.py:
import timeit
imp = '''
import pyximport; pyximport.install()
from hello_cy import hello_c
from hello_py import hello_p
'''
code_py = '''
hello_p()
'''
code_cy = '''
hello_c()
'''
print(timeit.timeit(stmt=code_py, setup=imp))
print(timeit.timeit(stmt=code_cy, setup=imp))
hello_py.py:
def hello_p():
    print('Hello World')
hello_cy.pyx:
from libc.stdio cimport printf
cpdef void hello_c():
    cdef char * hello_world = 'hello from C world'
    printf(hello_world)
hello_py timeit takes 14.697s
hello_cy timeit takes 98s
Am I missing something? How can I make my calls to cpdef functions run faster?
Thank you very much!
I strongly suspect a problem in your configuration.
I have (partially) reproduced your tests on Windows 10, Python 3.10.0, Cython 0.29.26, MSVC 2022, and got quite different results: in my tests the Cython code is slightly faster. I made two changes:
In hello_cy.pyx, to make the two versions behave more alike, I added the newline:
...
printf("%s\n", hello_world)
In the main script I split the calls to the functions apart from the display of the times:
...
pyp = timeit.timeit(stmt=code_py, setup=imp)
pyc = timeit.timeit(stmt=code_cy, setup=imp)
print(pyp)
print(pyc)
When I run the script I get (after pages of hello...):
...
hello from C world
hello from C world
19.135732599999756
14.712803700007498
Which looks more like what one could expect...
Anyway, we do not really know what is actually being tested here: as much as possible, IO should be kept out of a benchmark, because it depends on a lot of things outside of the programs themselves.
It just isn't a meaningful test - the two functions aren't the same:
Python print prints a string followed by a newline. I think it then flushes the output.
printf scans the string for formatting characters (e.g. %s) and then prints it without an extra newline. By default printf is line-buffered (i.e. it flushes after each newline). Since you never print a newline or flush, it may be slowed down by managing an increasingly huge buffer; see the sketch below for a closer equivalent.
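For a somewhat fairer comparison, the C side could mimic print more closely by adding the newline and an explicit flush (a sketch; hello_c_fair is an illustrative name, not from the original code):

from libc.stdio cimport printf, fflush, stdout

cpdef void hello_c_fair():
    # newline plus explicit flush, to mimic Python's print more closely
    printf("hello from C world\n")
    fflush(stdout)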
In summary, don't be misled by fairly meaningless microbenchmarks, especially for terminal IO, which is rarely an actual limiting factor.
It takes too long because pyximport compiles the Cython code on the fly (so you are also measuring the compilation from Cython to C and the compilation of the C code into a native library). You should measure calls to already-compiled code - see https://cython.readthedocs.io/en/latest/src/quickstart/build.html#building-a-cython-module-using-setuptools
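For reference, a minimal setup.py along the lines of the linked documentation might look like this (a sketch; the file name matches the question):

# setup.py
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("hello_cy.pyx"))

Build it once with python setup.py build_ext --inplace; afterwards import hello_cy loads the prebuilt extension, and timeit measures only the calls themselves.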
I'm trying to access some Fortran subroutines using F2PY, but I've run into the following problem during consecutive calls from IPython. Take this minimal Fortran code (I hope I didn't code anything stupid; my Fortran is a bit rusty...):
! test.f90
module mod
    integer i
contains
    subroutine foo
        i = i + 1
        print *, i
    end subroutine foo
end module mod
If I compile this using F2PY (f2py3.5 -c -m test test.f90), import it in Python and call it twice:
# run.py
import test
test.mod.foo()
test.mod.foo()
The resulting output is:
$ python run.py
1
2
So on every call of foo(), i is incremented, which is supposed to happen. But between different runs of run.py (either from the command line or the IPython interpreter), everything should be "reset", i.e. the printed counter should start from 1 on every run. This is what happens when calling run.py from the command line, but if I run the script multiple times from IPython, i keeps increasing:
In [1]: run run.py
1
2
In [2]: run run.py
3
4
I know that there are lots of posts showing how to reload imports (using autoreload in IPython, importlib.reload(), ...), but none of them seem to work for this example. Is there a way to force a clean reload/import?
Some side notes: (1) the Fortran code that I'm trying to access is quite large, old and messy, so I'd prefer not to change anything in there; (2) I could easily do test.mod.i = something in between calls, but the real Fortran code is too complex for such solutions; (3) I'd really prefer a solution that I can put in the Python code over settings (autoreload, ...) that I would have to set manually in the IPython interpreter (forget it once and ...).
If you can slightly change your Fortran code, you may be able to reset the counter without re-importing (probably faster, too).
The change is to declare i in a common block and reset it from outside. Your changed Fortran code will look like this:
! test.f90
module mod
    common /set1/ i
contains
    subroutine foo
        common /set1/ i
        i = i + 1
        print *, i
    end subroutine foo
end module mod
Reset the variable i from Python as below:
import test
test.mod.foo()
test.mod.foo()
test.set1.i = 0  # reset here
test.mod.foo()
This should produce the following result:
python run.py
1
2
1
I've had pretty good success using this answer to profile my Cython code, but it doesn't seem to work properly with nested functions. In this notebook you can see that the profile doesn't appear when the line profiler is used on a nested function. Is there a way to get this to work?
tl;dr:
This seems to be an issue with Cython. There's a hackish way that does the trick but isn't reliable; you could use it for one-off cases until this issue has been fixed.*
Change the line_profiler source:
I can't be 100% sure about this, but it is working. What you need to do is download the source for line_profiler and fiddle around in python_trace_callback. After the code object is obtained from the current frame of execution (code = <object>py_frame.f_code), add the following:
if what == PyTrace_LINE or what == PyTrace_RETURN:
    code = <object>py_frame.f_code
    # Add an entry for a code object with a different address if and only if it
    # doesn't already exist **but** the name of the function is in the code_map
    if code not in self.code_map and code.co_name in {co.co_name for co in self.code_map}:
        # iterate over a copy, because the dict is mutated inside the loop
        for co in list(self.code_map):
            # make the condition as strict as necessary
            cond = co.co_name == code.co_name and co.co_code == code.co_code
            if cond:
                del self.code_map[co]
                self.code_map[code] = {}
This will replace the code object in self.code_map with the currently executing one that matches its name and co_code contents. co.co_code is b'' for Cython functions, so in essence it matches Cython functions with that name. This is where the condition could be made more robust by matching more attributes of the code object (for example, the filename).
You can then proceed to build it with python setup.py build_ext and install it with sudo python setup.py install. I'm currently building it with python setup.py build_ext --inplace in order to work with it locally; I'd suggest you do the same. If you do build it with --inplace, make sure you navigate to the folder containing the source for line_profiler before importing it.
So, in the folder containing the built shared library for line_profiler I set up a cyclosure.pyx file containing your functions:
def outer_func(int n):
    def inner_func(int c):
        cdef int i
        for i in range(n):
            c += i
        return c
    return inner_func
And an equivalent setup_cyclosure.py script in order to build it:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
from Cython.Compiler.Options import directive_defaults
directive_defaults['binding'] = True
directive_defaults['linetrace'] = True
extensions = [Extension("cyclosure", ["cyclosure.pyx"], define_macros=[('CYTHON_TRACE', '1')])]
setup(name = 'Testing', ext_modules = cythonize(extensions))
As previously, the build was performed with python setup_cyclosure.py build_ext --inplace.
Launching your interpreter from the current folder and issuing the following yields the desired results:
>>> import line_profiler
>>> from cyclosure import outer_func
>>> f = outer_func(5)
>>> prof = line_profiler.LineProfiler(f)
>>> prof.runcall(f, 5)
15
>>> prof.print_stats()
Timer unit: 1e-06 s
Total time: 1.2e-05 s
File: cyclosure.pyx
Function: inner_func at line 2
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     2                                           def inner_func(int c):
     3                                               cdef int i
     4         1            5      5.0     41.7      for i in range(n):
     5         5            6      1.2     50.0          c += i
     6         1            1      1.0      8.3      return c
Issue with IPython %%cython:
Trying to run this from IPython results in an unfortunate situation. While executing, the code object doesn't store the full path to the file where it was defined, only the filename. Since I simply drop the code object into the self.code_map dictionary, and since code objects have read-only attributes, we lose the file path information when using it from IPython (because IPython stores the files generated from %%cython in a temporary directory).
Because of that, you do get the profiling statistics for your code, but the Line Contents column stays empty. One might be able to forcefully copy the filenames between the two code objects in question, but that's another issue altogether.
*The Issue:
The issue here is that, for some reason, when dealing with nested and/or enclosed functions, there's an abnormality with the address of the code object between the moment it is created and the moment it is executed in one of Python's frames. The issue you were facing was caused by the following condition not being satisfied:
if code in self.code_map:
Which was odd. Creating your function in IPython and adding it to the LineProfiler did indeed add it to the self.code_map dictionary:
prof = line_profiler.LineProfiler(f)
prof.code_map
Out[16]: {<code object inner_func at 0x7f5c65418f60, file "/home/jim/.cache/ipython/cython/_cython_magic_1b89b9cdda195f485ebb96a104617e9c.pyx", line 2>: {}}
When the time came to actually test the previous condition, though, and the current code object was fetched from the current execution frame with code = <object>py_frame.f_code, the address of the code object was different:
# this was obtained with a basic print(code) in _line_profiler.pyx
code object inner_func at 0x7f7a54e26150
indicating it was re-created. This only happens with Cython and only when a function is defined inside another function. Either that, or something I am completely missing.
As my notebook gets longer, I want to extract some code into a separate file so the notebook is easier to read.
For example, this is a cell/function that I want to extract from the notebook
def R_square_of(MSE, kde_result):
    # R square measure:
    # https://en.wikipedia.org/wiki/Coefficient_of_determination
    y_mean = np.mean(kde_result)
    SS_tot = np.power(kde_result - y_mean, 2)
    SS_tot_avg = np.average(SS_tot)
    SS_res_avg = MSE
    R_square = 1 - SS_res_avg / SS_tot_avg
    return R_square
How can I do it effectively?
My thoughts:
It's pretty easy to create a my_helper.py, put the code above there, and then from my_helper import *.
The problem is that I may use other packages in the method (in this case np, i.e. numpy), so I would need to re-import numpy in my_helper.py. Can it reuse the environment created in the IPython notebook, so that no re-importing is needed?
Also, if I change the code in my_helper.py, I need to restart the kernel to load the change (otherwise: NameError: global name 'numpy' is not defined), which makes it difficult to iterate on that file.
Instead of importing your other file, you could run it with the %run magic command:
In [1]: %run -i my_helper.py
-i: run the file in IPython’s namespace instead of an empty one. This is useful if you are experimenting with code written in a text editor which depends on variables defined interactively.
I'd still take the opportunity to recommend writing the file as a proper Python module and importing it. That way you develop a codebase that is usable outside of the notebook environment, and you can write tests for it or publish it somewhere.
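For example, a self-contained my_helper.py carries its own imports, so nothing depends on the notebook's namespace (a sketch reusing the function from the question):

# my_helper.py
import numpy as np

def R_square_of(MSE, kde_result):
    # R square measure:
    # https://en.wikipedia.org/wiki/Coefficient_of_determination
    y_mean = np.mean(kde_result)
    SS_tot_avg = np.average(np.power(kde_result - y_mean, 2))
    return 1 - MSE / SS_tot_avg

In the notebook, import it with import my_helper; after editing the file, importlib.reload(my_helper) picks up the changes without restarting the kernel.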
I'm starting out in Python. I have four functions that are working OK. What I want to do is save them so that I can call them whenever I want in Python.
Here's the code of my four functions:
import numpy as ui

def simulate_prizedoor(nsim):
    sims = ui.random.choice(3, nsim)
    return sims

def simulate_guess(nsim):
    guesses = ui.random.choice(3, nsim)
    return guesses

def goat_door(prizedoors, guesses):
    result = ui.random.randint(0, 3, prizedoors.size)
    while True:
        bad = (result == prizedoors) | (result == guesses)
        if not bad.any():
            return result
        result[bad] = ui.random.randint(0, 3, bad.sum())

def switch_guesses(guesses, goatdoors):
    result = ui.random.randint(0, 3, guesses.size)
    while True:
        bad = (result == guesses) | (result == goatdoors)
        if not bad.any():
            return result
        result[bad] = ui.random.randint(0, 3, bad.sum())
What you want to do is take your Python file and use it as a module or library.
There's no way to make those four functions automatically available, no matter what, 100% of the time, but you can do something very close.
For example, at the top of your file, you imported numpy. numpy is a module or library which has been set up so it's available any time you run Python, as long as you import it.
You want to do the same thing -- save those 4 functions into a file, and import them whenever you want them.
For example, if you copy and paste those four functions into a file named foobar.py, then you can simply do from foobar import *. However, this will only work if you're running Python in the same folder where you saved your code.
If you want to make your module available system-wide, you have to save it somewhere on the PYTHONPATH. Usually, saving it to C:\Python27\Lib\site-packages will work (assuming you're running Windows).
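Alternatively, you can extend the search path at runtime; the folder name below is just an example:

import sys
sys.path.append(r"C:\my_python_utils")  # hypothetical folder containing foobar.py
from foobar import *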
If you decide to put them anywhere in your project folder, don't forget to create a blank __init__.py file so Python can see them. A fuller explanation is provided here: http://docs.python.org/2/tutorial/modules.html
Save them in a file - this makes them a module.
If you put them in a file called mymod.py, you can load them in Python as follows:
from mymod import *
simulate_prizedoor(23)
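Once imported, the four functions chain together naturally. For example, a short illustrative driver (the win-rate check is my addition, not from the original code):

from mymod import *

nsim = 10000
prizedoors = simulate_prizedoor(nsim)
guesses = simulate_guess(nsim)
goats = goat_door(prizedoors, guesses)
switched = switch_guesses(guesses, goats)
print((switched == prizedoors).mean())  # ~2/3: switching wins about two thirds of the time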
Quick solution, without having to explicitly create a file - relies on IPython and its storemagic
IPython 4.0.1 -- An enhanced Interactive Python.
In [1]: def func(a):
   ...:     print a
   ...:

In [2]: func = _i  # gets the previous input

In [3]: store func  # store (magic) the input
                    # (auto-magic enabled, otherwise '%store' is needed)
Stored 'func' (unicode)

In [4]: exit

IPython 4.0.1 -- An enhanced Interactive Python.

In [1]: store -r func  # retrieve the stored string

In [2]: exec func  # execute the string as Python code

In [3]: func(10)
10
Once you have stored all your functions once, you can restore them all with store -r and then run exec func once for each function in each new session.
(I came across this question while looking for the most convenient way to 'quick-save' functions from an interactive Python session - adding my current best solution for future readers.)