I am a beginner in Cython, and I am following the official Cython documentation. There is a section called "Calling C functions" which shows how to use a C function from a .py file (following the 'Pure Python' style described on that page). When I copied the code from the 'Pure Python' section and ran it in my code editor, VS Code, it showed me this error:
ModuleNotFoundError: No module named 'cython.cimports'; 'cython' is not a package
The code as officially published on the Cython documentation page is here:
from cython.cimports.libc.stdlib import atoi

@cython.cfunc
def parse_charptr_to_py_int(s: cython.p_char):
    assert s is not cython.NULL, "byte string value is NULL"
    return atoi(s)  # note: atoi() has no error detection!
Now, I can't understand why this problem is occurring when I am following the official documentation of Cython.
Note: I have already installed Cython using pip install cython.
The code I wrote in my editor is as follows:
import cython
from cython.cimports.libc.stdlib import atoi

@cython.cfunc
def parse_charptr_to_py_int(s: cython.p_char):
    assert s is not cython.NULL, "byte string value is NULL"
    return atoi(s)  # note: atoi() has no error detection!

if __name__ == '__main__':
    result = parse_charptr_to_py_int("Rahul")
    print(result)
So, if you know why this error is occurring and what kind of mistake I have made, please let me know. It will help me a lot in understanding the concepts of Cython more clearly and how to implement it. If you solve this I will be very grateful to you!! Thank you!!
There are a couple of possible issues:
The pure Python mode cimport is only available on the Cython 3 alpha build. You've probably installed the older Cython 0.29.x. If you don't want to use the Cython 3 alpha then you can switch the documentation back to the earlier version: https://cython.readthedocs.io/en/stable/index.html
You need to compile the code with Cython for it to work (pure Python mode doesn't change that; it just avoids non-Python syntax). See https://cython.readthedocs.io/en/latest/src/quickstart/build.html. Note that the documentation for cimports says:
Note that this does not mean that C libraries become available to Python code. It only means that you can tell Cython what cimports you want to use, without requiring special syntax.
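For completeness, the build step for a pure-Python-mode file is the normal cythonize one. A minimal sketch, assuming you are on a Cython 3 alpha and have saved the snippet above as parse_charptr.py (the filename is just an illustrative choice):

# setup.py - minimal build sketch; "parse_charptr.py" is a placeholder name
from setuptools import setup
from Cython.Build import cythonize

setup(
    ext_modules=cythonize("parse_charptr.py", compiler_directives={"language_level": "3"}),
)

Run python setup.py build_ext --inplace and import the resulting extension module from a separate script; as the quoted note says, the C atoi only becomes available once the module is actually compiled.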
Related
I've written some code in Python using numpy, pandas, scikit-learn. Is it possible to call this Python code from a Julia Program?
I think there are three different ways to call Python code from Julia; in order from the lowest level to the highest, they are:
Use the foreign function call interface, as suggested by @madbird. However, you will almost surely not want to do this, as a package that exploits it, PyCall.jl, already exists;
Use the above-mentioned PyCall.jl package to call any Python code and/or wrap a Python library (well... "most"... "any" is a dangerous word). Details of this below;
As wrapping a Python library using PyCall is very easy, the most important Python libraries have already been wrapped in Julia, like Pandas in Pandas.jl, Scikit-learn in ScikitLearn.jl, etc. If you need them, just use the corresponding Julia package.
How to use PyCall.jl to call Python code and libraries.
Note: The following is an excerpt from my "Julia Quick Syntax Reference" book (Apress, 2019)
Julia ⇄ Python
The "standard" way to call Python code in Julia is to use the PyCall package.
Some of its nice features are: (a) it can automatically download and install a local copy of Python, private to Julia, in order to avoid messing with the version dependencies of our "main" Python installation and to provide a consistent environment on Linux, Windows and MacOS; (b) it provides automatic conversion between Julia and Python types; (c) it is very simple to use.
Concerning the first point, PyCall by default installs the "private" Python environment on Windows and MacOS, while it will use the system default Python environment on Linux.
We can override such behaviour with (from the Julia prompt) ENV["PYTHON"]="blank or /path/to/python"; using Pkg; Pkg.build("PyCall"), where, if the environment variable is empty, PyCall will install the "private" version of Python.
Given the vast amount of Python libraries, it is no wonder that PyCall is one of the most common Julia packages.
Embed Python code in a Julia program
Embedding Python code in a Julia program is similar to what we saw with C++, except that we don't need (for the most part) to worry about converting data. We both define and call the Python functions with py"...", and in the function call we can use our Julia data directly:
using PyCall
py"""
def sumMyArgs (i, j):
    return i+j
def getNElement (n):
    a = [0,1,2,3,4,5,6,7,8,9]
    return a[n]
"""
a = py"sumMyArgs"(3,4) # 7
b = py"sumMyArgs"([3,4],[5,6]) # [8,10]
typeof(b) # Array{Int64,1}
c = py"sumMyArgs"([3,4],5) # [8,9]
d = py"getNElement"(1) # 1
Note that we don't need to convert even for complex data like arrays, and the results are converted back to Julia types.
Type conversion is automatic for numeric, boolean, string, IO stream, date/period, and function types, along with tuples, arrays/lists, and dictionaries of these types. Other types are instead converted to the generic PyObject type.
Note from the last line of the previous example that PyCall doesn't attempt index conversion (Python arrays are 0-based while Julia ones are 1-based): calling the python getNElement() function with "1" as argument will retrieve what in Python is the element "1" of the array.
Use Python libraries
Using a Python library is straightforward as well, as shown in the below example that uses the ezodf module to create an OpenDocument spreadsheet (a wrapper of ezodf for ODS documents, which internally uses PyCall, already exists: OdsIO).
Before attempting to replicate the following code, please be sure that the ezodf module is available to the Python environment you are using in Julia. If this is an independent environment, just follow the Python way to install packages (e.g. with pip). If you are using the "private" Conda environment, you can use the Conda.jl package and type using Conda; Conda.add_channel("conda-forge"); Conda.add("ezodf").
const ez = pyimport("ezodf") # Equiv. of Python `import ezodf as ez`
destDoc = ez.newdoc(doctype="ods", filename="anOdsSheet.ods")
sheet = ez.Sheet("Sheet1", size=(10, 10))
destDoc.sheets.append(sheet)
dcell1 = get(sheet,(2,3)) # Equiv. of Python `dcell1 = sheet[(2,3)]`. This is cell "D3" !
dcell1.set_value("Hello")
get(sheet,"A9").set_value(10.5) # Equiv. of Python `sheet['A9'].set_value(10.5)`
destDoc.backup = false
destDoc.save()
The usage in Julia of the module follows the Python API with few syntax differences.
The module is imported and assigned to a shorter alias, ez.
We can then directly call its functions with the usual Python syntax module.function().
The doc object returned by newdoc is a generic PyObject type. We can then access its attributes and methods with myPyObject.attribute and myPyObject.method() respectively.
In the cases where we can't directly access some indexed values, like sheet[(2,3)] (where the index is a tuple), we can invoke the get(object, key) function instead.
Finally, note again that index conversion is not automatically implemented: when asking for get(sheet,(2,3)) these are interpreted as Python-based indexes, and cell D3 of the spreadsheet is returned, not B2.
I guess the foreign function call interface is what you're looking for. Here is the example for Julia. More info in the PyCall.jl repository.
I'm writing a private online Python interpreter for VK, which would closely simulate the IDLE console. Only me and some people on a whitelist would be able to use this feature, so there is no unsafe code which could harm my server. But I have a little problem. For example, I send the string with the code def foo():, and I don't want to get a SyntaxError but rather continue defining the function line by line, without writing long strings with \n. exec() and eval() don't suit me in that case. What should I use to get the desired effect? Sorry if this is a duplicate; I still didn't get it from similar questions.
The Python standard library provides the code and codeop modules to help you with this. The code module just straight-up simulates the standard interactive interpreter:
import code
code.interact()
It also provides a few facilities for more detailed control and customization of how it works.
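For example, code.InteractiveConsole lets you supply your own namespace and banner. A minimal sketch (the pre-loaded names are purely illustrative, and note that this does not sandbox anything):

import code

# Names pre-loaded into the console's namespace. This is NOT a sandbox:
# builtins and imports remain reachable from user code.
session_namespace = {"greeting": "hello from the private console"}

console = code.InteractiveConsole(locals=session_namespace)
console.interact(banner="Private console - type Python code, Ctrl-D to exit")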
If you want to build things up from more basic components, the codeop module provides a command compiler that remembers __future__ statements and recognizes incomplete commands:
import codeop

compiler = codeop.CommandCompiler()
try:
    codeobject = compiler(some_source_string)
    # codeobject is an exec-utable code object if some_source_string was a
    # complete command, or None if the command is incomplete.
except (SyntaxError, OverflowError, ValueError):
    # If some_source_string is invalid, we end up here.
    # OverflowError and ValueError can occur in some cases involving invalid literals.
    pass  # report the error however you like
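Putting that together, a rough sketch of a line-by-line loop in the spirit of what you describe (read one line at a time, only execute once the command is complete; the prompts and the shared namespace are illustrative choices):

import codeop
import traceback

compiler = codeop.CommandCompiler()
namespace = {}   # globals shared between commands, like one console session
buffer = []      # lines of the command currently being typed

while True:
    buffer.append(input("... " if buffer else ">>> "))
    source = "\n".join(buffer)
    try:
        codeobject = compiler(source)
    except (SyntaxError, OverflowError, ValueError):
        traceback.print_exc()
        buffer = []
        continue
    if codeobject is None:
        # Incomplete command (e.g. "def foo():"); keep collecting lines.
        # As in the real console, a blank line finishes an indented block.
        continue
    try:
        exec(codeobject, namespace)
    except Exception:
        traceback.print_exc()
    buffer = []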
It boils down to reading input, then
exec <code> in globals,locals
in an infinite loop.
See e.g. IPython.frontend.terminal.console.interactiveshell.TerminalInteractiveShell.mainloop().
Continuation detection is done in inputsplitter.push_accepts_more() by trying ast.parse().
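Stripped of IPython's machinery, that idea is roughly the following (a deliberately simplified sketch, not IPython's actual implementation):

import ast

def push_accepts_more(source):
    # Return True if the caller should keep reading lines.
    # Simplified: a real implementation (see IPython's inputsplitter)
    # inspects the SyntaxError to tell "incomplete" apart from "broken".
    try:
        ast.parse(source)
    except SyntaxError:
        # e.g. "def foo():" with no body ends up here
        return True
    return False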
Actually, IPython already has an interactive web console called Jupyter Notebook, so your best bet should be to reuse it.
I export several native C++ classes to Python using SIP. I don't use the resulting maplib_sip.pyd module directly, but rather wrap it in a Python package pymaplib:
# pymaplib/__init__.py
# Make all of maplib_sip available in pymaplib.
from maplib_sip import *
...
def parse_coordinate(coord_str):
    ...
    # LatLon is a class imported from maplib_sip.
    return LatLon(lat_float, lon_float)
Pylint doesn't recognize that LatLon comes from maplib_sip:
error pymaplib parse_coordinate 40 15 Undefined variable 'LatLon'
Unfortunately, the same happens for all the classes from maplib_sip, as well as for most of the code from wxPython (Phoenix) that I use. This effectively makes Pylint worthless for me, as the amount of spurious errors dwarfs the real problems.
additional-builtins doesn't work that well for my problem:
# Neither of these removes the error:
additional-builtins=maplib_sip.LatLon
additional-builtins=pymaplib.LatLon
# This does remove the error in pymaplib:
additional-builtins=LatLon
# But users of pymaplib still get an error:
# Module 'pymaplib' has no 'LatLon' member
How do I deal with this? Can I somehow tell pylint that maplib_sip.LatLon actually exists? Even better, can it somehow figure that out itself via introspection (which works in IPython, for example)?
I'd rather not have to disable the undefined variable checks, since that's one of the huge benefits of pylint for me.
Program versions:
Pylint 1.2.1,
astroid 1.1.1, common 0.61.0,
Python 3.3.3 [32 bit] on Windows7
You may want to try the new --ignored-modules option, though I'm not sure it will work in your case. Besides, you may want to stop using import * (which would probably be a good idea, as pylint has probably already told you ;).
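If you want to try that route, the option can be passed on the command line or set in your pylintrc, for example (the module names are of course specific to your project):

pylint --ignored-modules=maplib_sip pymaplib

or, in pylintrc:

[TYPECHECK]
ignored-modules=maplib_sip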
Rather, use a short import name, e.g. import maplib_sip as mls, then the prefixed name, e.g. mls.LatLon, where desired.
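Applied to the package from the question, that would look roughly like this (a sketch only; parse_coordinate is abbreviated as in the question):

# pymaplib/__init__.py
import maplib_sip as mls

def parse_coordinate(coord_str):
    ...
    # Fully qualified, so pylint can see where LatLon comes from.
    return mls.LatLon(lat_float, lon_float)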
Notice though that the original problem is worth an issue on the pylint tracker (https://bitbucket.org/logilab/pylint/issues), so some investigation can be done to grasp why it doesn't pick up the members of your SIP-exported module.
I have a pure C module for Python and I'd like to be able to invoke it using the python -m modulename approach. This works fine with modules implemented in Python and one obvious workaround is to add an extra file for that purpose. However I really want to keep things to my one single distributed binary and not add a second file just for this workaround.
I don't care how hacky the solution is.
If you do try to use a C module with -m then you get an error message No code object available for <modulename>.
The -m implementation is in runpy._run_module_as_main. Its essence is:
mod_name, loader, code, fname = _get_module_details(mod_name)
<...>
exec code in run_globals
A compiled module has no "code object" associated with it, so the first statement fails with ImportError("No code object available for <module>"). You need to extend runpy - specifically, _get_module_details - to make it work for a compiled module. I suggest returning a code object constructed from the aforementioned "import mod; mod.main()":
(python 2.6.1)
  code = loader.get_code(mod_name)
  if code is None:
+     if loader.etc[2]==imp.C_EXTENSION:
+         code=compile("import %(mod)s; %(mod)s.main()"%{'mod':mod_name},"<extension loader wrapper>","exec")
+     else:
+         raise ImportError("No code object available for %s" % mod_name)
-     raise ImportError("No code object available for %s" % mod_name)
  filename = _get_filename(loader, mod_name)
(Update: fixed an error in format string)
Now...
C:\Documents and Settings\Пользователь>python -m pythoncom
C:\Documents and Settings\Пользователь>
This still won't work for builtin modules. Again, you'll need to invent some notion of "main code unit" for them.
Update:
I've looked through the internals called from _get_module_details and can say with confidence that they don't even attempt to retrieve a code object from a module of any type other than imp.PY_SOURCE, imp.PY_COMPILED or imp.PKG_DIRECTORY. So you have to patch this machinery one way or another for -m to work. Python fails before retrieving anything from your module (it doesn't even check if the DLL is a valid module), so you can't do anything by building it in a special way.
Does your requirement of a single distributed binary allow for the use of an egg? If so, you could package your module with a __main__.py containing your calling code, plus the usual __init__.py...
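Such a __main__.py can be tiny. A sketch, where modulename stands in for your compiled extension and main() for whatever entry point it exposes (both names are placeholders):

# mypackage/__main__.py - run with "python -m mypackage"
import modulename

if __name__ == "__main__":
    modulename.main()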
If you're really adamant, maybe you could extend pkgutil.ImpLoader.get_code to return something for C modules (e.g., maybe a special __code__ function). To do that, I think you're going to have to actually change it in the Python source. Even then, pkgutil uses exec to execute the code block, so it would have to be Python code anyway.
TL;DR: I think you're euchred. While Python modules have code at the global level that runs at import time, C modules don't; they're mostly just a dict namespace. Thus, running a C module doesn't really make sense from a conceptual standpoint. You need some real Python code to direct the action.
I think that you need to start by making a separate file in Python and getting the -m option to work. Then, turn that Python file into a code object and incorporate it into your binary in such a way that it continues to work.
Look up setuptools in PyPi, download the .egg and take a look at the file. You will see that the first few bytes contain a Python script and these are followed by a .ZIP file bytestream. Something similar may work for you.
There's a brand new thing that may solve your problem easily. I've just learnt about it and it looks pretty decent to me: http://code.google.com/p/pts-mini-gpl/wiki/StaticPython
It's not uncommon for an intro programming class to write a Lisp metacircular evaluator. Has there been any attempt at doing this for Python?
Yes, I know that Lisp's structure and syntax lends itself nicely to a metacircular evaluator, etc etc. Python will most likely be more difficult. I am just curious as to whether such an attempt has been made.
For those who don't know what a meta-circular evaluator is, it is an interpreter which is written in the language to be interpreted. For example: a Lisp interpreter written in Lisp, or in our case, a Python interpreter written in Python. For more information, read this chapter from SICP.
As JBernardo said, PyPy is one. However, PyPy's Python interpreter, the meta-circular evaluator that is, is implemented in a statically typed subset of Python called RPython.
You'll be pleased to know that, as of the 1.5 release, PyPy is fully compliant with the official Python 2.7 specification. Even more so: PyPy nearly always beats Python in performance benchmarks.
For more information see PyPy docs and PyPy extra docs.
I think I wrote one here:
"""
Metacircular Python interpreter with macro feature.
By Cees Timmerman, 14aug13.
"""
import re
re_macros = re.compile("^#define (\S+) ([^\r\n]+)", re.MULTILINE)
def meta_python_exec(code):
# Optional meta feature.
macros = re_macros.findall(code)
code = re_macros.sub("", code)
for m in macros:
code = code.replace(m[0], m[1])
# Run the code.
exec(code)
if __name__ == "__main__":
#code = open("metacircular_overflow.py", "r").read() # Causes a stack overflow in Python 3.2.3, but simply raises "RuntimeError: maximum recursion depth exceeded while calling a Python object" in Python 2.7.3.
code = "#define 1 2\r\nprint(1 + 1)"
meta_python_exec(code)