You know, we can bind almost any C library to Python or Perl. A good example is PyQt, which is a Python binding for Qt.
My question is: Can I do the reverse of the above? I mean: suppose I have a library in Python or Perl, and I want to convert it to a C library...can it be done?
However, you could also think of this as converting a web program into a shared library or a set of functions.
My Goal: I want to improve a set of security features.
Yes, you can. The term of art is "embedding," as in "embedded Python" or "embed a Python interpreter." Python has a document about it here: https://docs.python.org/2/extending/embedding.html - the general idea is you must use (at least a small part of) the Python C API to launch Python within a C or C++ application.
Once you realize you can embed a scripting language in C, and that scripting languages can invoke C, you then realize you can also embed one scripting language in another, using C as the bridge between them. For example, RubyLuaBridge: https://bitbucket.org/neomantra/rubyluabridge
A lot of commercial applications embed a scripting language interpreter within a C or C++ host program. A good, well-documented example is Adobe Lightroom, which is roughly half C++ and half Lua. You can read about that from the horse's mouth starting on page xi here: http://www.lua.org/gems/front.pdf
Yes, for python at least: Convert Python program to C/C++ code?.
And for Perl: http://perldoc.perl.org/5.8.9/perlcompile.html.
[EDIT] Per the comment, I'll expand. First, the question "Can I translate Python to C?" has already been answered on SO. See the link.
Second, Perl (just like Python) is an interpreted language that first compiles source into intermediate code, and it has the capability to take that intermediate code and translate it into full-blown C for native executables. This is done using the 'B' module and its companion modules, such as B::C. There's also a standalone program, 'perlcc', for doing just this. [/EDIT]
Is it feasible to script in a Lisp, as opposed to Ruby/Python/Perl/(insert accepted scripting language)? By this I mean do things like file processing (open a text file, count the number of words, return the nth line), string processing (reverse, split, slice, remove punctuation), prototyping/quick computations, and other things you would normally use Python, etc. for. How productive would doing such tasks in a Lisp be, as opposed to Ruby/Python/Perl/scripting language of choice?
I ask because I want to learn a Lisp but also use it to do something instead of only learning it for the sake of it. I looked around, but couldn't find much information about scripting in a Lisp. If it is feasible, what would be a good implementation?
Thank you!
Today, using "LISP" as if it were obvious which language you mean is misleading, since it hasn't been a single language since the 70s, and probably earlier. "LISP" really only indicates fully parenthesized Polish prefix notation, in the same way that Pascal, Ruby, Python, and Perl are just variations of ALGOL.
Scheme is a standard and Common LISP is a standard. Both are general purpose, though Common LISP is batteries-included while Scheme is a minimalistic language. They are quite different in style, so comparing them would be like comparing Java with Python.
Embedded LISPs
There are lots of uses of Scheme and specialized Lisp dialects as embedded languages. Emacs is the most widely used editor in the Unix world, and its Lisp, Elisp, is the most used Lisp language because of this. The image-processing application GIMP has a Scheme base with extensions for image processing.
Stand-alone scripts
It's possible in many Common LISP implementations to use the standard #!-notation to make a script executable and run it as an application. E.g. I use CLISP and have scripts with #!/usr/bin/clisp -C as the first line. I also use Scheme the same way, and in the very fast incremental compiler Ikarus you use #!/usr/bin/ikarus --r6rs-script. Clojure has all the power of the Java libraries, you can use your own classes from it, and it too can be made into a script with #!/usr/bin/env java -cp /path/to/clojure-1.2.0.jar clojure.main
More permanent applications
In Common LISP you can dump an image. It will be a Common Lisp binary with your code already compiled in. Many Scheme implementations compile to native code, and Clojure can compile to Java bytecode (though that's not the most common way to use it). Still, I have seen Ikarus sometimes interpret faster than a compiled executable from Racket, Chicken, or Gambit, so I often do my programming in DrRacket and run it in Ikarus when working in Scheme.
Try both Common LISP and Scheme, as both of them are good enough for the tasks you specified in your question. There are many free books on the subject, and some are worth their price as well. You may also try Racket, a Scheme derivative with lots of libraries for everyday tasks, though it doesn't conform to any standard.
About productivity
I imagine you are referring to how quickly you can write a certain task in a Lisp dialect. I imagine it depends on how used you are to the syntax; it takes a while to get used to it after only knowing ALGOL dialects. It also takes a different approach, since you need to think in a more functional manner, especially for Scheme. I imagine that when you are as good in Scheme as in your favorite ALGOL dialect, productivity will be similar. E.g. some ALGOL dialects are faster to prototype in than others, and that is true for Lisp dialects as well.
When I first started to poke around Lisp I used it to write shell scripts... I'm somewhat OCD about order and uniformity and I really liked Lisp languages because they have saner syntax (fewer syntax rules, no random decisions related to particular syntax elements).
If you are looking into Common Lisp, then SBCL, which is available prepackaged on most Linux distros, can be used right away for CGI scripting. SBCL also has its own means to process command line arguments, access pipes, processes and so on. If you aren't after portability between different Lisps, then I'd say you are good to go. Just to give you examples of where I used this kind of scripting: a girl in our office compiled and maintained a list of words which I had to further process in our application. This list was available as a Google Docs spreadsheet. My script would download the words table and parse it into the format I needed. I also had scripts that helped me with file manipulation and preprocessing before the project was compiled (the project wasn't in Lisp).
Finally, SBCL has its own means of being used with FastCGI (http://kdr2.com/project/sb-fastcgi.html), but of course there are several full-blown HTTP servers which you could either use as is or put behind a proxy. Hunchentoot has historically been the most popular one, but there are others too, like cl-http; here are some more links: http://www.cliki.net/web .
#!/usr/bin/env sbcl --script
would be the shebang comment to use.
Furthermore, I use Common Lisp for my classes, simply to do my homework, which is, I imagine, what you were after when you said "open a text file, count the number of words, return the nth line". Here's an example I used for the class on introduction to logic: https://github.com/wvxvw/coursera-logic/blob/master/formula.lisp .
Since there was a discussion on the matter, here's something else to be considered.
Typically, scripting languages have a twofold nature: they are written in a low-level language while exposing a high-level API to the programmer. This can be a blessing or a curse. On one hand, languages like Python, Ruby, and JavaScript have highly optimised libraries for dealing with common tasks, while Common Lisp, similarly to C++ or Java, typically implements everything in Lisp itself. Thus, for example, strings are a lot less sophisticated in CL than they are in JavaScript: in CL they are simple arrays of characters, while in JS they are a special kind of tree known as a rope.
Typically, a programmer writing in a scripting language is neither required nor expected to produce high-performance code; the language compensates with highly optimized base library code. Unfortunately, once the programmer actually wants to squeeze as much performance as possible out of the scripting language, it turns out they just don't have the tools to do it, because they can't get to the bottom of the implementation.
On the other hand, dealing with the lower level of details means the programmer will be less productive overall, and optimizing the code to get on par with industry-standard implementations requires more skill.
Common Lisp generally falls into the second category, but I'd argue it is still good for one-liners and casual programming because of its extensive library and highly developed macro system, which helps reduce the verbosity usually associated with low-level languages.
I understand that you have asked two questions:
Is it possible to run scheme scripts from a command line
How effective is that?
I can answer your first question, but not your second: how feasible it is depends heavily on your Scheme skills and on how much code you want to (re)write in Scheme.
So I'll just answer your first question :)
Have a look at this SO question: Running Scheme from the command line.
If you have installed, for example, DrRacket (which is a good IDE for many Scheme dialects) as your Scheme interpreter, you may use the shebang line #! /usr/bin/env mzscheme in your Scheme script.
This test script (test.scm)
#!/usr/bin/env mzscheme
#lang scheme
(print (+ 40 2))
can be made executable (with chmod +x test.scm) and executed (with ./test.scm).
I'd say that Lisp/Scheme could be used to write small scripts or big applications, but they are not yet ready for wide use.
The big difference between Python/Ruby and Scheme is that Python has a huge library of modules centralized in one place. Ruby is quite similar to Python with RubyGems.
Scheme, on the other hand, has a smaller collection of modules scattered across the internet, and their quality doesn't always compare to the popular modules in Python and Ruby.
One could say that they are aiming at different goals, but I'd say Scheme just got old and people started to forget about it and how it can be used as a tool instead of just a school subject.
About Lisp, I can't really say. But from your description, it's possible to write the scripts you'd like to write; if you need something specific, though, it may not be there and you'll have to write it yourself.
All I can say is: jump in, and become someone who gives a future to this language. Don't be scared. This language has a bright future and you'll learn a lot from it.
WRT your tasks, what about using Emacs, which comes with an interactive Python shell? That way you have the convenience of editing alongside running scripts.
Let me say first that I'm NOT searching for automagical solutions here. I want to translate code from Python to Smalltalk because I've noticed some very simple statements can be translated automatically. Examples:
Assigning a variable to a value
Python
i = 1
Smalltalk
i := 1.
Creating a new instance of a class
Python
instance = module.ClassName()
Smalltalk
instance := ClassName new.
A for loop
Python
for a in [0,1,2]:
    print (str(a)+str(a))
Smalltalk
#(0 1 2) do: [ :a | Transcript show: a printString , a printString; cr ]
and so on (while loops, conditionals, etc). My idea is to have a tool which translates all these extremely "simple" cases, and then I can complete it or teach a rule system by hand.
Do you know any programming translation tool or library which can help me?
In case you haven't heard of any tool: what technique/pattern would you use to implement such a translation? Can you provide a link to an example?
Thanks
You need to parse the Python code, walk the abstract syntax tree generated by the parser, and output your Smalltalk. There's a nice article about Python ASTs by Eli Bendersky and a slightly older one here. Python makes this relatively straightforward, as the Python standard library exposes a lot of the internal tooling of the interpreter and the documentation is reasonably comprehensive.
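To make the idea concrete, here is a minimal sketch (assuming Python 3.8+, where literals parse as ast.Constant; the to_smalltalk helper is just for illustration). It handles only the trivial assignment case from the question; everything else is deliberately left unhandled and would become further rules in such a system:

import ast

def to_smalltalk(source):
    """Translate a tiny subset of Python (name = literal) to Smalltalk."""
    lines = []
    for node in ast.parse(source).body:
        if (isinstance(node, ast.Assign)
                and isinstance(node.targets[0], ast.Name)
                and isinstance(node.value, ast.Constant)):
            # e.g. "i = 1"  ->  "i := 1."
            lines.append("%s := %s." % (node.targets[0].id, node.value.value))
        else:
            raise NotImplementedError(ast.dump(node))  # everything else: add more rules here
    return "\n".join(lines)

print(to_smalltalk("i = 1"))  # prints: i := 1.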
I am not aware of any such tool, and in the general case it might be complicated and/or inefficient. So your route would depend on your more precise need: porting an existing Python library, just using it from Smalltalk, or producing nice clean Smalltalk code that does the same thing as the Python code.
Routes I would consider:
leaving the Python library as is and calling it from Smalltalk through a C interface
implementing a Python parser in PetitParser and then either:
implementing a Smalltalk generator, perhaps assisted by a human through a user interface, or
implementing a Python interpreter in Smalltalk
Note that the generator variant might face some difficult issues in the general case; for instance, Smalltalk has a fixed number of instance variables per class, while in Python you can attach them as you go. You could work around that, but the resulting Smalltalk code might not be pretty.
As for implementing Python inside Smalltalk, take a look at the Helvetia presentation by Lukas Renggli; it is on the subject of including other languages inside the Smalltalk IDE.
Take a look at ply, which is a Python implementation of lex and yacc. I've used it mostly for translating some other language into Python bytecode by building a Python AST with it, but the opposite direction should also be possible.
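For a feel of what ply looks like, here is a minimal lexer sketch (it assumes the third-party ply package is installed; the token set is just an illustration and would have to grow considerably to cover real Python):

import ply.lex as lex

# Token names and their regular expressions.
tokens = ('NAME', 'NUMBER', 'ASSIGN')

t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
t_ASSIGN = r'='
t_ignore = ' \t'  # skip spaces and tabs

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    print("Illegal character %r" % t.value[0])
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input("i = 1")
for tok in lexer:
    print(tok.type, tok.value)  # NAME i / ASSIGN = / NUMBER 1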
Well, given some C code, is there a way I can use other languages like Python to execute that C code? What I am trying to say is: there are so many modules built in one language that also offer access from different languages. Is there any way to do that?
Of course, it's called "extending" in the Python world. The official documentation is here. A short excerpt:
This document describes how to write modules in C or C++ to extend the Python interpreter with new modules. Those modules can define new functions but also new object types and their methods. The document also describes how to embed the Python interpreter in another application, for use as an extension language. Finally, it shows how to compile and link extension modules so that they can be loaded dynamically (at run time) into the interpreter, if the underlying operating system supports this feature.
An even easier way for Python would be using the ctypes standard package to run code in DLLs.
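A minimal sketch of that approach (this one assumes Windows, where kernel32.dll is always present; on other platforms you would load a .so or .dylib instead):

import ctypes

# Load a system DLL and call one of its functions directly, no wrapper code needed.
kernel32 = ctypes.WinDLL("kernel32")
print(kernel32.GetTickCount())  # milliseconds since the system was started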
Many ways. Generically, this is often called a Foreign Function Interface. That Wikipedia page says the following about Python:
The major dynamic languages, such as Python, Perl, Tcl, and Ruby, all provide easy access to native code written in C/C++ (or any other language obeying C/C++ calling conventions). Python additionally provides the ctypes module, which can load C functions from shared libraries/DLLs on-the-fly and translate simple data types automatically between Python and C semantics. For example:
import ctypes
libc = ctypes.CDLL('/lib/libc.so.6')  # under Linux/Unix
t = libc.time(None)                   # equivalent C code: t = time(NULL)
print t
A popular choice that supports many languages is SWIG.
I do understand that this topic has been covered in some way on Stack Overflow, but I'm still not able to figure out the exact answer: can I treat IronPython as a Pythonic replacement for C#?
I use CPython every day, I love the Zen :) but my current task is a Windows-only application with a complex GUI and some other features which I would like to implement using .NET.
IronPython is NOT equivalent to "other languages that run on .NET", as the language has support for substantially fewer CLR runtime features.
IronPython classes are not "real" .NET classes, and DLR APIs need to be used when calling IronPython code from traditional CLR-based languages; this means that if you want genuinely easy interoperability, you're stuck writing glue to "hide" the DLR.
Boo is a much more complete Pythonically-inspired language targeting the CLR. Its (dynamically inferrable) static typing (which can be replaced with duck typing on a variable-by-variable basis) also allows libraries written in Boo to be natively used from C# and other CLR-based languages, without needing to make any allowances for the language in use.
That depends on what it is about C# that you need, and which needs replacing.
If the reason you use C# is that you need a reasonably high performance statically typed language then no, IronPython is likely not going to be a replacement.
If the reason you use it is simply "I need something that runs on .NET and can access .NET libraries", then yes, any language that runs on .NET can be used to replace it.
If you use C# because you're working with a team of programmers who only know C-like languages, C# might also be difficult to replace with IronPython.
It depends on what characteristics about C# it is that you care about, and need to find replacements for.
One thing to consider - and I have no idea how IronPython behaves in this respect - is Common Language Specification (CLS) compliance for assemblies. CLS compliance guarantees that any .NET language can access a compiled DLL of your code. That means, e.g., that in C# you cannot have any public or protected method or parameter that is an unsigned integer. I have no idea how easy it is to achieve CLS-compliant code in IronPython; an interesting blog entry that I found dates from 2008.
The question "Can I treat IronPython as a Pythonic replacement for C#?" has been answered pretty well by jalf. If the question were "Is IronPython a Pythonic .NET Language?" though, then the answer would absolutely be yes. The principles of Zen - esp. least surprise - absolutely apply to IronPython's integration with the CLR as well.
I think if you were to do that, it would be easier in .NET 4.0. You can use the newly released IronPython 2.6.1 for .NET 4.0. It is already easy to use C# from IronPython, and as you can see here, you can easily do it the other way around (using IronPython from C#) with .NET 4.0.
I think it's possible, but I agree Boo is a safer way to go.
"Complex" UI usually entails not "writing" it but building it within Visual Studio with point and click. All the callbacks and eventing code is inserted by itself. There is almost nothing like that on python side. I'd say go for C# straight out.
There is one nagging thing though. If you are true Pythonista, the static typing will get to you very very quickly and you will want to start throwing heavy objects at random people.
If that point comes think about building out the UI with C# and embedding IronPython as a scripting engine for implementing your business logic. That could be a tolerable middle ground.
In Python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). How do the two compare in performance?
I have a lot of experience using SWIG. SWIG claims that it is a rapid solution for wrapping things. But in real life...
Cons:
SWIG is developed to be general, for everyone and for 20+ languages. Generally, this leads to drawbacks:
- it needs configuration (SWIG .i templates), which is sometimes tricky,
- some special cases are not handled well (see Python properties further down),
- performance suffers for some languages.
Python cons:
1) Code style inconsistency. C++ and Python have very different code styles (that is obvious, certainly), and SWIG's ability to make the target code more Pythonic is very limited. As an example, it is painful to create properties from getters and setters. See this Q&A.
2) Lack of a broad community. SWIG has some good documentation, but if you hit something that is not in the documentation, there is no information at all; neither blogs nor googling helps. So you have to dig heavily through SWIG-generated code in such cases... That is terrible, I must say...
Pros:
In simple cases, it is really rapid, easy and straightforward.
Once you have produced the SWIG interface files, you can wrap the same C++ code for ANY of the other 20+ languages (!!!).
One big concern about SWIG is performance. Since version 2.04 SWIG includes the '-builtin' flag, which makes SWIG even faster than other automated ways of wrapping; at least some benchmarks show this.
When to USE SWIG?
So I concluded for myself that there are two cases when SWIG is good to use:
1) If one needs to rapidly wrap just several functions from some C++ library for end use.
2) If one needs to wrap C++ code for several languages, or if there could come a time when one needs to distribute the code for several languages. Using SWIG is reliable in this case.
Live experience
Update:
A year and a half has passed since we converted our library using SWIG.
First, we made a Python version. There were several moments when we ran into trouble with SWIG - that is true. But by now we have expanded our library to Java and .NET, so we have 3 languages with 1 SWIG, and I can say that SWIG rocks in terms of saving a LOT of time.
Update 2:
We have now been using SWIG for this library for two years. SWIG is integrated into our build system. Recently we had a major API change in the C++ library, and SWIG worked perfectly. The only thing we needed to do was add several %rename directives to the .i files so that our CppCamelStyleFunctions() now looks_more_pythonish in Python. At first I was concerned about problems that could arise, but nothing went wrong. It was amazing: just several edits and everything was distributed in 3 languages. Now I am confident that using SWIG was a good solution in our case.
Update 3:
We have been using SWIG for our library for 3+ years now. Major change: the Python part was totally rewritten in pure Python. The reason is that Python is now used for the majority of applications of our library. Even if the pure Python version works more slowly than the C++ wrapping, it is more convenient for users to work with pure Python and not struggle with native libraries.
SWIG is still used for .NET and Java versions.
The main question here is: "Would we use SWIG for Python if we were starting the project from the beginning?" We would! SWIG allowed us to rapidly distribute our product to many languages, and it worked well for a period of time that gave us the opportunity to better understand our users' requirements.
SWIG generates (rather ugly) C or C++ code. It is straightforward to use for simple functions (things that can be translated directly) and reasonably easy to use for more complex functions (such as functions with output parameters that need an extra translation step to represent in Python.) For more powerful interfacing you often need to write bits of C as part of the interface file. For anything but simple use you will need to know about CPython and how it represents objects -- not hard, but something to keep in mind.
ctypes allows you to directly access C functions, structures and other data, and load arbitrary shared libraries. You do not need to write any C for this, but you do need to understand how C works. It is, you could argue, the flip side of SWIG: it doesn't generate code and it doesn't require a compiler at runtime, but for anything but simple use it does require that you understand how things like C datatypes, casting, memory management and alignment work. You also need to manually or automatically translate C structs, unions and arrays into the equivalent ctypes datastructure, including the right memory layout.
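For instance, mirroring a C struct by hand looks roughly like this (a minimal sketch; the struct and its fields are made up for illustration):

import ctypes

# Hypothetical C struct:  struct point { int x; int y; };
class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int),
                ("y", ctypes.c_int)]

p = Point(3, 4)
print(p.x, p.y)  # fields follow the C memory layout declared in _fields_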
It is likely that in pure execution, SWIG is faster than ctypes -- because the management around the actual work is done in C at compiletime rather than in Python at runtime. However, unless you interface a lot of different C functions but each only a few times, it's unlikely the overhead will be really noticeable.
In development time, ctypes has a much lower startup cost: you don't have to learn about interface files, you don't have to generate .c files and compile them, you don't have to check out and silence warnings. You can just jump in and start using a single C function with minimal effort, then expand it to more. And you get to test and try things out directly in the Python interpreter. Wrapping lots of code is somewhat tedious, although there are attempts to make that simpler (like ctypes-configure.)
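As an illustration of how small that startup cost is, something like the following can be typed straight into the interpreter (assuming a Unix-like system where the C math library is found under the name "m"; you still have to declare the C types yourself):

import ctypes, ctypes.util

# Locate and load the C math library, then describe sqrt's C signature.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # 1.4142135623730951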
SWIG, on the other hand, can be used to generate wrappers for multiple languages (barring language-specific details that need filling in, like the custom C code I mentioned above.) When wrapping lots and lots of code that SWIG can handle with little help, the code generation can also be a lot simpler to set up than the ctypes equivalents.
ctypes is very cool and much easier than SWIG, but it has the drawback that poorly or malevolently written Python code can actually crash the Python process. You should also consider Boost.Python. IMHO it's actually easier than SWIG while giving you more control over the final Python interface, and if you are using C++ anyway, you don't add any other languages to your mix.
In my experience, ctypes does have a big disadvantage: when something goes wrong (and it invariably will for any complex interfaces), it's a hell to debug.
The problem is that a big part of your stack is obscured by ctypes/FFI magic, and there is no easy way to determine how you got to a particular point or why parameter values are what they are.
You can also use Pyrex, which can act as glue between high-level Python code and low-level C code. lxml is written in Pyrex, for instance.
ctypes is great, but does not handle C++ classes. I've also found ctypes is about 10% slower than a direct C binding, but that will highly depend on what you are calling.
If you are going to go with ctypes, definitely check out the Pyglet and PyOpenGL projects, which have massive examples of ctypes bindings.
I'm going to be contrarian and suggest that, if you can, you should write your extension library using the standard Python API. It's really well-integrated from both a C and Python perspective... if you have any experience with the Perl API, you will find it a very pleasant surprise.
Ctypes is nice too, but as others have said, it doesn't do C++.
How big is the library you're trying to wrap? How quickly does the codebase change? Any other maintenance issues? These will all probably affect the choice of the best way to write the Python bindings.
Just wanted to add a few more considerations that I didn't see mentioned yet.
[EDIT: Ooops, didn't see Mike Steder's answer]
If you want to try using a non-CPython implementation (like PyPy, IronPython or Jython), then ctypes is about the only way to go. PyPy doesn't allow writing C extensions, so that rules out Pyrex/Cython and Boost.Python. For the same reason, ctypes is the only mechanism that will work for IronPython and (eventually, once they get it all working) Jython.
As someone else mentioned, no compilation is required. This means that if a new version of the .dll or .so comes out, you can just drop it in and load that new version. As long as none of the interfaces changed, it's a drop-in replacement.
Something to keep in mind is that SWIG targets only the CPython implementation. Since ctypes is also supported by the PyPy and IronPython implementations it may be worth writing your modules with ctypes for compatibility with the wider Python ecosystem.
I have found SWIG to be a little bloated in its approach (in general, not just for Python) and difficult to implement without having to cross the sore point of writing Python code with an explicit mindset of being SWIG-friendly, rather than writing clean, well-written Python code. It is, IMHO, a much more straightforward process to write C bindings to C++ (if using C++) and then use ctypes to interface to any C layer.
If the library you are interfacing to has a C interface as part of the library, another advantage of ctypes is that you don't have to compile a separate Python binding library to access third-party libraries. This is particularly nice in formulating a pure-Python solution that avoids cross-platform compilation issues (for those third-party libs offered on disparate platforms). Having to embed compiled code into a package you wish to deploy on something like PyPI in a cross-platform friendly way is a pain; one of my most irritating points about Python packages using SWIG or underlying explicit C code is their general unavailability across platforms. So consider this if you are working with cross-platform available third-party libraries and developing a Python solution around them.
As a real-world example, consider PyGTK. This (I believe) uses SWIG to generate C code to interface to the GTK C calls. I used this for the briefest time only to find it a real pain to set up and use, with quirky, odd errors if you didn't do things in the correct order on setup and just in general. It was such a frustrating experience, and when I looked at the interface definitions provided by GTK on the web, I realized what a simple exercise it would be to write a translator from those interfaces to a Python ctypes interface. A project called PyGGI was born, and in ONE day I was able to rewrite PyGTK as a much more functional and useful product that matches cleanly to the GTK C object-oriented interfaces. And it required no compilation of C code, making it cross-platform friendly. (I was actually after interfacing to webkitgtk, which isn't so cross-platform.) I can also easily deploy PyGGI to any platform supporting GTK.