I'm writing a set of Python modules that are essentially utility modules for other code that lives in dynamic libraries. To make packaging and use easier, what I'd like to do is bundle them inside the library (on Windows as a string resource; on Linux I'm not sure yet, probably by exporting a function that returns the string). Now I'm wondering if there is a way to import a Python module from a string containing its source code. So essentially the equivalent of
mymod = "def func():\n return 1"
import(mymod)
Any imports in the imported module itself should also work. Ideally I'm thinking of some way of providing a callback function that is passed the name of a module and returns a string with its contents, so that my modules can be loaded recursively. As a fallback I can also live with the situation where, upon importing the main module, I do the same thing in its init in Python (i.e. dynamically load any dependencies manually; of course I don't like doing things manually, hence why this is a fallback :) )
Oh and I'd like this to work in Python 2.7 and 3, if that makes a difference...
If you are looking for a way to execute code from a string (which is not recommended; you might be better off going through the whole process of writing a setup.py to make your library installable), you can use exec (a statement in Python 2, a built-in function in Python 3) as follows
exec(mymod)
This will parse the string as normal Python source and execute it, leaving you with its side effects (such as defining functions and variables). This works in both Python 2.7 and 3.x. See the Python 2 and Python 3 documentation for exec for more details.
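To get an actual module object rather than just names dumped into the current namespace, you can exec the string into a fresh module's namespace and register it yourself. A minimal sketch that works on both 2.7 and 3.x (import_from_string is a name I made up):
import sys
import types

def import_from_string(name, source):
    # Create an empty module object and register it in sys.modules
    # *before* executing, so that circular imports can resolve.
    mod = types.ModuleType(name)
    sys.modules[name] = mod
    exec(source, mod.__dict__)
    return mod

mymod = import_from_string("mymod", "def func():\n    return 1")
print(mymod.func())  # -> 1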
Alternatively, in Python 2.7 only, execfile does the same thing as exec but for a text file
execfile("path/to/my/mod")
The Python 2 documentation for execfile explains what it does and doesn't do.
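For the callback-driven, recursive loading the question asks about, the supported route is a PEP 302 import hook: put a finder/loader object on sys.meta_path, and the normal import statement will consult your callback for every module. Here's a hedged sketch (StringSourceImporter and get_source are my own names; the find_module/load_module protocol shown works on 2.7 and on Python 3 up to 3.11, after which find_spec is required):
import sys
import types

class StringSourceImporter(object):
    # Meta-path hook that resolves imports through a source callback.
    # get_source(name) should return the module's source as a string,
    # or None if the module isn't one of ours.
    def __init__(self, get_source):
        self.get_source = get_source

    def find_module(self, fullname, path=None):  # PEP 302 finder
        return self if self.get_source(fullname) is not None else None

    def load_module(self, fullname):  # PEP 302 loader
        if fullname in sys.modules:  # respect re-imports
            return sys.modules[fullname]
        mod = types.ModuleType(fullname)
        mod.__loader__ = self
        sys.modules[fullname] = mod
        exec(self.get_source(fullname), mod.__dict__)
        return mod

# Example: back the callback with a dict; in the asker's case it would
# call into the dynamic library to fetch the embedded source instead.
sources = {"mymod": "def func():\n    return 1"}
sys.meta_path.insert(0, StringSourceImporter(sources.get))

import mymod
print(mymod.func())  # -> 1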
I have just started learning Python. I was testing different features of Python in the interactive console in Ubuntu's terminal.
Even though I didn't import any modules, how can I still use functions like append(), insert(), print(), etc.?
I used to program in C, where I must include the 'stdio.h' header file just to 'print' a string, so I was expecting the same behaviour in Python.
Is there any standard Module which is by default imported in every Python program's 'interpretation'?
You can check which modules are loaded in a Python session by running
import sys
sys.modules.keys()
Functions like print are part of the __builtin__ module (called builtins in Python 3), which is loaded by default. (Note that append() and insert() aren't global functions at all; they are methods on list objects.)
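For example (the module is called __builtin__ on Python 2 and builtins on Python 3):
import sys
print(sorted(sys.modules.keys()))  # everything imported so far

import __builtin__  # on Python 3: import builtins
print(dir(__builtin__))  # names you can use without any import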
Specs: Python 2.7
I'm working on a project that has several modules, and I want to activate some features from the __future__ module in all of them. I would like to import all the features I need in one module, then import that single module into every other one, and have those features be active in all of them, or something to that effect.
I tried:
[A.py]
from __future__ import division
[B.py]
import A
print(1/2)
Running B.py, the division was still integer division. I then tried:
[A.py]
print(1/2)
[B.py]
from __future__ import division
import A
Running B.py gave the same result. With both previous examples I also tried replacing 'import A' with 'from A import *', with the same results.
I searched Google for a while, and found the best description of how the __future__ module works, obviously enough, in the Python documentation. There I could only find the assurance that the features would be active in the module they were imported into, without any mention of how to do it globally.
So I'd like to know if there is a way of doing this, either the way I described, or creating some sort of runtime configuration file, or through some other means.
There's no way to do this in-language; you really can't make __future__ imports global in this sense. (Well, you could probably replace the normal import statements with something complicated built around imp; see the Future statement documentation and scroll down to "Code compiled by…". But anything like this is almost certainly a bad idea.)
The reason is that from __future__ import division isn't really a normal import. Or, rather, it's more than a normal import. You actually do get a name called division that you can inspect, but just having that value has no effect—so passing it to other modules doesn't affect those modules. On top of the normal import, Python has special magic that detects __future__ imports at the top of a module, or in the interactive interpreter, and changes the way your code is compiled. See future for the "real import" part, and Future statements for the "magic" part, if you want all the details.
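To see both halves concretely: the imported name is an ordinary inspectable object, and its compiler_flag attribute lets you reproduce the compiler magic by hand for dynamically compiled code. A small sketch:
import __future__

print(__future__.division)  # a _Feature record, nothing magic by itself

# Passing its compiler_flag to the compile() built-in enables the
# feature for that compilation, which is the "magic" made explicit:
code = compile("print(1 / 2)", "<string>", "exec",
               __future__.division.compiler_flag)
exec(code)  # prints 0.5, even on Python 2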
And there's no configuration file that lets you do this. But there is a command-line parameter:
python -Qnew main.py
This has the same effect as doing a from __future__ import division everywhere.
You can add this to your #! lines, or alias pyfuturediv='python -Qnew' (or even alias python='python -Qnew') in your shell, or whatever, which may be as good as a configuration file for your purposes.
But really, if you want to make sure module B gets new-style division, you probably should have the __future__ declaration in B in the first place.
Or, of course, you could just write for Python 3.0+ instead of 2.3-2.7. (Note that some of the core devs were against having command-line arguments, because "the right way to get feature X globally is to use a version of Python >= feature X's MandatoryRelease".) Or use // whenever you mean floor division, so the behaviour of / stops mattering.
Another possibility is to use six, a module designed to let you write code that's almost Python 3.3 and have it work properly in 2.4-2.7 (and 3.0-3.2). For example, you don't get a print function, but you do get a print_ function that works exactly the same. You don't get Unicode literals, but you get u() fake literals—which, together with a UTF-8 encoding declaration in the source, is almost good enough. And it provides a whole lot of stuff that you can't get from __future__ as well—StringIO and BytesIO, exec as a function, the next function, etc.
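A hedged sketch of what that looks like in practice (print_, u, StringIO and exec_ are real six helpers):
import six
from six import print_, u, StringIO

print_("runs unchanged on Python 2 and 3")  # print-function substitute
s = u("caf\u00e9")  # "fake" unicode literal; the escape is decoded on 2.x
buf = StringIO()  # io.StringIO on 3.x, StringIO.StringIO on 2.x
buf.write(s)
six.exec_("x = 1 + 1", globals())  # exec as a function on both
print_(x, buf.getvalue())  # -> 2 café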
If the problem is that you have 1000 source files, and it's a pain to edit them all, you could use sed, or use 3to2 with just the option that fixes division, or…
Another approach would be using isort. isort has a -a command line flag to add imports to files that you specify. Simply running isort without arguments will run it recursively on all python files in the current working directory and all subdirectories.
If, like me, you have a virtual environment inside that folder, and are using git (or have an equivalent way of listing only your files) and don't want to run it on all files inside that virtual environment, you can use something like:
git ls-tree -r HEAD --name-only | grep "\.py$" | xargs isort -a -y "from __future__ import division"
I wrote a Python module which contains methods to be used by other modules. In some situations Python can't see those methods when they're called from other modules with methods_module.method() syntax. In what situations is this scenario possible?
It's a bit unclear what your question is, so it's rather hard to answer. However, if you have imported using the following syntax
from methods_module import method
then this does not expose methods_module to your program. Therefore methods_module.method() would not work.
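For illustration, with the hypothetical names from the question:
from methods_module import method

method()  # works: the name 'method' was bound directly
methods_module.method()  # NameError: name 'methods_module' is not defined

# Importing the module itself binds the module name, so the
# attribute-style call works too:
import methods_module
methods_module.method()  # works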
I'm trying to write a software plug-in that embeds Python. On Windows the plug-in is technically a DLL (this may be relevant). The Python Windows FAQ says:
1. Do not build Python into your .exe file directly. On Windows, Python must be a DLL to handle importing modules that are themselves DLLs. (This is the first key undocumented fact.) Instead, link to pythonNN.dll; it is typically installed in C:\Windows\System. NN is the Python version, a number such as “23” for Python 2.3.
My question is why exactly Python must be a DLL? If, as in my case, the host application is not an .exe, but also a DLL, could I build Python into it? Or, perhaps, this note means that third-party C extensions rely on pythonN.N.dll to be present and other DLL won't do? Assuming that I'd really want to have a single DLL, what should I do?
I see there's the dynload_win.c file, which appears to be the module to import C extensions on Windows and, as far as I can see, it scans the extension file to find which pythonX.X.dll it imports; but I'm not experienced with Windows and I don't quite understand all the code there.
You need to link to pythonXY.dll as a DLL, instead of linking the relevant code directly into your executable, because otherwise the Python runtime can't load other DLLs (the extension modules it relies on.) If you make your own DLL you could theoretically link all the Python code in that DLL directly, since it doesn't end up in the executable but still in a DLL. You'll have to take care to do the linking correctly, however, as pretty much none of the standard tools (like distutils) will do this for you.
However, regardless of how you embed Python, you can't make do with just the DLL, nor can you make do with just any DLL. The ABI changes between Python versions, so if you compiled your code against Python 2.6, you need python26.dll; you can't use python25.dll or python27.dll. And Python isn't just a DLL; it also needs its standard library, which includes extension modules (which are DLLs themselves, although they have the .pyd extension). The code in dynload_win.c you ran into is for loading those DLLs, and is not related to loading pythonXY.dll.
In short, in order to embed Python in your plugin, you need to either ship Python with the plugin, or require that the right Python version is already installed.
(Sorry, I did a stupid thing: I first wrote the question and then registered, and now I cannot alter it or comment on the replies, because Stack Overflow's engine doesn't think I'm the author. I cannot even properly thank those who replied :( So this is actually an update to the question and comments.)
Thanks for all the advice, it's very valuable. As far as I understand with some effort I can link Python statically into a custom DLL, provided that I compile other dynamically loaded extensions myself and link them against the same DLL. (I know I need to ship the standard library too; my plan was to append a zipped archive to the DLL file. As far as I understand, I will even be able to import pure Python modules from it.)
I also found an interesting place in dynload_win.c. (I understand it loads dynamic extensions that use the Python C API, e.g. _ctypes.) As far as I can see, it not only looks for the init_ctypes symbol (or whatever matches the extension's name), but also scans the .pyd file's import table looking for (regex) python\d+\. and then compares the found name with the known pythonNN. string to make sure the extension was compiled for this version of Python. If the import table doesn't have such a symbol, or it refers to another version, it raises an error.
For me it means that:
If I link an extension against pythonNN.dll and try to load it from my custom DLL that includes a statically linked Python, it will pass the check, but (here I'm not sure) will it fail because there's no pythonNN.dll to load, i.e. even before getting to the check, or will it happily load the symbols?
And if I link it against my custom DLL, it will find symbols, but won't pass the check :)
I guess I could rewrite this piece to suit my needs... Are there any other such places, I wonder.
Python needs to be a DLL (with a standard name) so that your application and the plugins can use the same instance of Python.
Plugin DLLs are already going to expect to be loading (and using Python from) a python26.dll (or whichever version); if your Python is statically embedded in your exe, then two different instances of the Python library would be managing the same data structures.
If the Python libraries used no static variables at all, and the compile settings were exactly the same, this would not be a problem. However, it's generally far safer to simply ensure that only one instance of the Python interpreter is being used.
On *nix, all shared objects in a process, including the executable, contribute their exported names into a common pool; any of the shared objects can then pull any of the names from the pool and use them as they like. This allows e.g. cStringIO.so to pull the relevant Python library functions from the main executable when the Python library is statically-linked.
On Windows, each shared object has its own independent pool of names: a DLL's import table records exactly which other DLLs it takes each function from. Since a DLL cannot easily import names back from whatever executable happens to host it, the Python functions are separated out into their own DLL that every extension can link against.
Basically, I am using the _winreg module from Python v2.6, but the Python installation I have to use is v2.5. When I try to use:
_winreg.ExpandEnvironmentStrings
it complains about this module not having that attribute. I have successfully transferred other modules like comtypes from the site-packages folder.
But the problem is I don't know which files to copy/replace. Is there a way to do this? Also, is site-packages the main place for 3rd-party modules?
It's a compiled C extension, not pure Python, so you generally can't simply copy the DLL/so file across from one installation to another: the Python binary interface changes on 0.1 version number updates (but not on 0.0.1 updates). In any case, _winreg seems to be statically built into Python.exe in the current official Windows builds rather than being dropped into the ‘DLLs’ folder.
_winreg.ExpandEnvironmentStrings is not available pre-2.6, but you could usefully fall back to os.path.expandvars, which does more or less the same thing. (It also supports $VAR variables, which under Windows you might not want, but this may not be a practical problem.) You're right: %-syntax for expandvars under Windows was only introduced in 2.6, how useless. Looks like you'll need the below.
If the worst comes to the worst it's fairly simple to write by hand:
import re, os

def expandEnvironmentStrings(s):
    # Substitute each %NAME% with its value from os.environ,
    # leaving the token unchanged if the variable is unset.
    r = re.compile('%([^%]+)%')
    return r.sub(lambda m: os.environ.get(m.group(1), m.group(0)), s)
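For example (FOO is just an illustrative variable name):
import os
os.environ["FOO"] = "bar"
print(expandEnvironmentStrings("value is %FOO%; %MISSING% is left alone"))
# -> value is bar; %MISSING% is left alone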
Though either way there is always Python 2.x's inability to read Unicode envvars to worry about.