Compiling a tkinter game - python

I understand the general process of how to compile programs using py2exe, Portable Python, and other tools, as well as some of the issues that can cause problems (matplotlib, etc.). However, I'm curious how this works when a game uses pickle. Would the game still be able to save and load states once it is compiled, or would it no longer have this option?
Also, if anyone doesn't mind, I'm a bit confused about how compiling a program actually works, i.e. the process the compiler goes through to turn your program into an executable. A general explanation of this process would be awesome.

Basically, python interprets the lines of code with the language parser... and then compiles the parsed lines to byte code. This byte code is "compiled python".
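You can look at that byte code directly with the standard dis module (a small sketch in Python 3 syntax; the exact opcode names vary between interpreter versions):

```python
import dis

def add(a, b):
    return a + b

# disassemble the function's compiled byte code; prints instructions
# such as LOAD_FAST followed by a binary-add opcode
dis.dis(add)
```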
Let's build a bit of code:
# file: foo.py
class Bar(object):
    x = 1
    def __init__(self, y):
        self.y = y
Now we import it.
>>> import foo
>>> foo
<module 'foo' from 'foo.py'>
>>> reload(foo)
<module 'foo' from 'foo.pyc'>
What you'll notice is that the first time we import foo, it says it was imported from foo.py. That's because python had to byte compile the code into a module object. Doing so, however, leaves a .pyc file in your directory... that's a compiled python file. Python prefers to use compiled code, as a time-saver as opposed to compiling the code again... so when you reload the module, python picks the compiled code to import. Basically, when you are "installing" python modules, you are just moving compiled code into somewhere python can import it (on your PYTHONPATH).
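That byte-compilation step can also be triggered by hand with the standard py_compile module, which writes the compiled file without importing the module (a sketch in Python 3 syntax; on Python 3 the compiled file lands in a __pycache__ directory rather than next to the source):

```python
import os
import py_compile
import tempfile

# write a tiny module, then byte-compile it explicitly
src = os.path.join(tempfile.mkdtemp(), "foo.py")
with open(src, "w") as f:
    f.write("x = 1\n")

pyc = py_compile.compile(src)  # returns the path of the compiled file
print(pyc)                     # e.g. .../__pycache__/foo.cpython-311.pyc
```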
>>> import numpy
>>> numpy
<module 'numpy' from '/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/__init__.pyc'>
The site-packages directory is the default place that compiled 3rd party code gets installed. Indeed a module is just a python object representation of a file. Meaning, a module instance is a compiled file. Once you "compile" the file in to a module, it's no longer going to care what's in the file... python only needs the compiled byte code after that.
>>> import types
>>> types.ModuleType.mro()
[<type 'module'>, <type 'object'>]
>>> foo.__class__.mro()
[<type 'module'>, <type 'object'>]
>>> i = object()
>>> object
<type 'object'>
>>> i
<object object at 0x1056f60b0>
Here we see (using types) that foo is an instance of a ModuleType... so basically a compiled file. mro shows modules inherit from a python object, which is the primary object in python. (Yes, it's object-oriented).
Here i is an instance of an object, just as foo is an instance of a ModuleType. Python works with instances of compiled objects, not the underlying code... just like (almost?) every other language. So, when you work with a class that you built in the foo module, you are working with the byte compiled instance of the class. You can dynamically modify the class instance, by adding methods on-the-fly... and it doesn't change the underlying file foo.py... but it does alter the byte-compiled instance of the module foo that's held in memory.
>>> zap = foo.Bar(2)
>>> zap.x, zap.y
(1, 2)
>>> foo.Bar
<class 'foo.Bar'>
>>> foo.Bar.mro()
[<class 'foo.Bar'>, <type 'object'>]
>>>
>>> def wow(self):
... return self.x + self.y
...
>>> wow(zap)
3
>>> foo.Bar.wow = wow
>>> foo.Bar.wow(zap)
3
>>> zap.wow()
3
Again, the file foo.py would be unchanged... however, I added wow to the class Bar, so it's usable as if wow were in the code in the first place. So working with "compiled" python is not static at all... it just means that you are working with code that has been byte compiled to save some time when you are importing it the first time. Note that since the module foo is an instance, you can also edit it in memory (not just objects that already live in its contents).
>>> foo.square = lambda x:x**2
>>>
>>> from foo import square
>>> square(3)
9
Here I added square to foo -- not to foo.py, but to the byte-compiled copy of foo that lives in memory.
So can you pickle and unpickle objects in compiled code? Absolutely. You are probably doing that already if you've used pickle.
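A minimal save/load round-trip of the kind the question asks about (a sketch in Python 3 syntax; the game state and the file name savegame.pkl are made up). It behaves the same whether the code runs from source, from .pyc files, or from a frozen executable, as long as any pickled classes are importable at load time:

```python
import os
import pickle
import tempfile

# made-up game state; plain data types pickle identically regardless of
# whether the program runs from source or from a compiled/frozen form
state = {"level": 3, "score": 1200, "inventory": ["sword", "potion"]}

save_path = os.path.join(tempfile.mkdtemp(), "savegame.pkl")

with open(save_path, "wb") as f:
    pickle.dump(state, f)

with open(save_path, "rb") as f:
    loaded = pickle.load(f)

print(loaded == state)  # True
```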
P.S. If you are talking about building C++ extensions to python, and compiling the code to shared libraries... it's still fundamentally no different.
If you are looking for some nitty-gritty details on byte compiling, check out my question and answer here: How is a python function's name reference found inside it's declaration?.

Related

Python modules usage

I was writing some code in Python and got stuck on a doubt. It seems irrelevant, but I can't get past it. The thing is, when I import a module and use it as below:
import math
print math.sqrt(9)
Here I see math (the module) as a class which has a method sqrt(). If that is the case, then how can I directly use the class without creating an object of it? I am basically unable to understand the abstraction between a class and an object here.
Modules are more like objects than like classes. You don't "instantiate" a module; there's only one instance of each module, and you can access it using the import statement.
Specifically, modules are objects of type 'module':
>>> import math
>>> type(math)
<type 'module'>
Each module is going to have a different set of variables and methods.
Modules are instantiated by Python, whenever they are first imported. Modules that have been instantiated are stored in sys.modules:
>>> import sys
>>> 'math' in sys.modules
False
>>> import math
>>> 'math' in sys.modules
True
>>> sys.modules['math'] is math
True
AFAIK all Python modules (like math and a million more) are instantiated when they are first imported. How many times are they instantiated, you ask? Just once! All modules are singletons.
Just saying that isn't enough, so let's dive deeper.
Create a Python module (a module is basically any file ending with the ".py" extension), say "p.py", containing some code as follows:
In p.py
print "Instantiating p.py module. Please wait..."
# your good pythonic optimized functions, classes goes here
print "Instantiating of p.py module is complete."
and in q.py try importing it
import p
and when you run q.py you will see..
Instantiating p.py module. Please wait...
Instantiating of p.py module is complete.
Now, have you created an instance of it? No! But you still have it up and running, ready to be used.
In your case math is not a class. When you import math the whole module math is imported. You can see it like the inclusion of a library (the concept of it).
If you only need a single name from the module, you can do something like this:
from math import sqrt
print sqrt(9)
This way only the name sqrt is bound in your namespace. (Note that the math module is still loaded and cached in sys.modules behind the scenes; you just don't bring its other names into your namespace.)
Here I see math (the module) as a class which has a method sqrt(). If that is the case, then how can I directly use the class without creating an object of it? I am basically unable to understand the abstraction between a class and an object here.
When you import a module, the module object is created. Just like when you use open('file.txt') a file object will be created.
You can use a class without creating an object from it by referencing the class name:
class A:
    value = 2 + 2
A.value
class A is an object of class type--the built-in class used to create classes. Everything in Python is an object.
When you call the class, A(), that's how you create an object (an instance). Some objects are created by statements instead: import creates a module object, def creates a function object, class creates a class object (which in turn creates other objects), and so on...
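A quick interpreter check of that last point (a sketch in Python 3 syntax):

```python
import types

class A:
    value = 2 + 2   # runs when the class statement executes

def f():
    return A.value

# the class attribute is usable without ever creating an instance
print(A.value)  # 4

# and each of these "things" is itself an object of some built-in type
print(isinstance(A, type))                  # True: class creates a class object
print(isinstance(f, types.FunctionType))    # True: def creates a function object
print(isinstance(types, types.ModuleType))  # True: import creates a module object
```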

Access module 'sys' without using import machinery

Sandboxing Python code is notoriously difficult due to the power of the reflection facilities built into the language. At a minimum one has to take away the import mechanism and most of the built-in functions and global variables, and even then there are holes ({}.__class__.__base__.__subclasses__(), for instance).
In both Python 2 and 3, the 'sys' module is built into the interpreter and preloaded before user code begins to execute (even in -S mode). If you can get a handle to the sys module, then you have access to the global list of loaded modules (sys.modules) which enables you to do all sorts of naughty things.
So, the question: Starting from an empty module, without using the import machinery at all (no import statement, no __import__, no imp library, etc), and also without using anything normally found in __builtins__ unless you can get a handle to it some other way, is it possible to acquire a reference to either sys or sys.modules? (Each points to the other.) I am interested in both 2.x and 3.x answers.
__builtins__ can usually be recovered, giving you a path back to __import__ and thus to any module.
For Python 3 this comment from eryksun works, for example:
>>> f = [t for t in ().__class__.__base__.__subclasses__()
... if t.__name__ == 'Sized'][0].__len__
>>> f.__globals__['__builtins__']['__import__']('sys')
<module 'sys' (built-in)>
In Python 2, you just look for a different object:
>>> f = [t for t in ().__class__.__base__.__subclasses__()
... if t.__name__ == 'catch_warnings'][0].__exit__.__func__
>>> f.__globals__['__builtins__']['__import__']('sys')
<module 'sys' (built-in)>
Either method looks for subclasses of a built-in type you can create with literal syntax (here a tuple), then referencing a function object on that subclass. Function objects have a __globals__ dictionary reference, which will give you the __builtins__ object back.
Note that you can't just say no __import__ because it is part of __builtins__ anyway.
However, many of those __globals__ objects are bound to have sys present already. Searching for a sys module on Python 3, for example, gives me access to one in a flash:
>>> next(getattr(c, f).__globals__['sys']
... for c in ().__class__.__base__.__subclasses__()
... for f in dir(c)
... if isinstance(getattr(c, f, None), type(lambda: None)) and
... 'sys' in getattr(c, f).__globals__)
<module 'sys' (built-in)>
The Python 2 version only needs to unwrap the unbound methods found on classes to get the same result:
>>> next(getattr(c, f).__func__.__globals__['sys']
... for c in ().__class__.__base__.__subclasses__()
... for f in dir(c)
... if isinstance(getattr(c, f, None), type((lambda: 0).__get__(0))) and
... 'sys' in getattr(c, f).__func__.__globals__)
<module 'sys' (built-in)>

Getting a C GObject pointer from a Python gobject

I'm working with pywebkitgtk, which is a codegen'd binding, so there are a ton of GObject subclasses. The binding isn't complete, and I use ctypes to do a bunch of stuff in addition.
But now I need to use an object I've got, in Python, as an argument to a ctypes library call. Clearly, that won't work, and passing the memory address of the Python object isn't really a winner, either. How can I get a memory reference to the GObject backing the Python object?
Here's an example of something that doesn't work, but might give you an idea what I'm talking about.
>>> import ctypes
>>> libwebkit = ctypes.CDLL('libwebkit-1.0.so')
>>> import webkit
>>> webview = webkit.WebView()
>>> libwebkit.webkit_web_view_get_zoom_level(webview) #yes, I know the binding exposes this
ArgumentError: argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1
Again, this is just an example to illustrate the point- I want memory refs for gobjects to use with ctypes.

How to generate a module object from a code object in Python

Given that I have the code object for a module, how do I get the corresponding module object?
It looks like moduleNames = {}; exec code in moduleNames does something very close to what I want. It collects the globals declared by the module into a dictionary. But if I want the actual module object, how do I get it?
EDIT:
It looks like you can roll your own module object. The module type isn't conveniently documented, but you can do something like this:
import sys
module = sys.__class__
del sys
foo = module('foo', 'Doc string')
foo.__file__ = 'foo.pyc'
exec code in foo.__dict__
As a comment already indicates, in today's Python the preferred way to instantiate types that don't have built-in names is to call the type obtained via the types module from the standard library:
>>> import types
>>> m = types.ModuleType('m', 'The m module')
Note that this does not automatically insert the new module in sys.modules:
>>> import sys
>>> sys.modules['m']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'm'
That's a task you must perform by hand:
>>> sys.modules['m'] = m
>>> sys.modules['m']
<module 'm' (built-in)>
This can be important, since a module's code object normally executes after the module's added to sys.modules -- for example, it's perfectly correct for such code to refer to sys.modules[__name__], and that would fail (KeyError) if you forgot this step. After this step, and setting m.__file__ as you already have in your edit,
>>> code = compile("a=23", "m.py", "exec")
>>> exec code in m.__dict__
>>> m.a
23
(or the Python 3 equivalent where exec is a function, if Python 3 is what you're using, of course;-) is correct (of course, you'll normally have obtained the code object by subtler means than compiling a string, but that's not material to your question;-).
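For reference, the Python 3 spelling of the same steps, with exec as a function, looks like this:

```python
import sys
import types

m = types.ModuleType('m', 'The m module')
sys.modules['m'] = m      # register the module before running its code
m.__file__ = 'm.py'

code = compile("a = 23", "m.py", "exec")
exec(code, m.__dict__)

print(m.a)                    # 23
print(sys.modules['m'] is m)  # True
```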
In older versions of Python you would have used the new module instead of the types module to make a new module object at the start, but new is deprecated since Python 2.6 and removed in Python 3.

How to re import an updated package while in Python Interpreter? [duplicate]

This question already has answers here:
How do I unload (reload) a Python module?
(22 answers)
Closed 5 years ago.
I often test my module in the Python interpreter, and when I see an error, I quickly update the .py file. But how do I make the change take effect in the interpreter? So far I have been exiting and re-entering the interpreter, because re-importing the file again is not working for me.
Update for Python3: (quoted from the already-answered answer, since the last edit/comment here suggested a deprecated method)
In Python 3, reload was moved to the imp module. In 3.4, imp was deprecated in favor of importlib, and reload was added to the latter. When targeting 3 or later, either reference the appropriate module when calling reload or import it.
Takeaway:
Python3 >= 3.4: importlib.reload(packagename)
Python3 < 3.4: imp.reload(packagename)
Python2: continue below
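One way to make code portable across all three cases is a small import shim (a sketch; after it runs, the name reload refers to a working function on any of these versions):

```python
try:
    from importlib import reload  # Python 3.4+
except ImportError:
    try:
        from imp import reload    # Python 3.0 - 3.3
    except ImportError:
        pass                      # Python 2: reload is already a builtin

# now reload(mymodule) can be called the same way everywhere
```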
Use the reload builtin function:
https://docs.python.org/2/library/functions.html#reload
When reload(module) is executed:
Python modules’ code is recompiled and the module-level code reexecuted, defining a new set of objects which are bound to names in the module’s dictionary. The init function of extension modules is not called a second time.
As with all other objects in Python the old objects are only reclaimed after their reference counts drop to zero.
The names in the module namespace are updated to point to any new or changed objects.
Other references to the old objects (such as names external to the module) are not rebound to refer to the new objects and must be updated in each namespace where they occur if that is desired.
Example:
# Make a simple function that prints "version 1"
shell1$ echo 'def x(): print "version 1"' > mymodule.py
# Run the module
shell2$ python
>>> import mymodule
>>> mymodule.x()
version 1
# Change mymodule to print "version 2" (without exiting the python REPL)
shell2$ echo 'def x(): print "version 2"' > mymodule.py
# Back in that same python session
>>> reload(mymodule)
<module 'mymodule' from 'mymodule.pyc'>
>>> mymodule.x()
version 2
The reload() and imp.reload() approaches in the answers above are deprecated: reload() is no longer a builtin function in Python 3, and imp.reload() is marked deprecated (see help(imp)). It's better to use importlib.reload() instead.
So far I have been exiting and re-entering the interpreter because re-importing the file again is not working for me.
Yes, just saying import again gives you the existing copy of the module from sys.modules.
You can say reload(module) to update sys.modules and get a new copy of that single module, but if any other modules have a reference to the original module or any object from the original module, they will keep their old references and Very Confusing Things will happen.
So if you've got a module a, which depends on module b, and b changes, you have to ‘reload b’ followed by ‘reload a’. If you've got two modules which depend on each other, which is extremely common when those modules are part of the same package, you can't reload them both: if you reload p.a it'll get a reference to the old p.b, and vice versa. The only way to do it is to unload them both at once by deleting their items from sys.modules, before importing them again. This is icky and has some practical pitfalls to do with sys.modules entries being None as a failed-relative-import marker.
And if you've got a module which passes references to its objects to system modules — for example it registers a codec, or adds a warnings handler — you're stuck; you can't reload the system module without confusing the rest of the Python environment.
In summary: for all but the simplest case of one self-contained module being loaded by one standalone script, reload() is very tricky to get right; if, as you imply, you are using a ‘package’, you will probably be better off continuing to cycle the interpreter.
In Python 3, the behaviour changes.
>>> import my_stuff
... do something with my_stuff, then later:
>>> import imp
>>> imp.reload(my_stuff)
and you get a brand new, reloaded my_stuff.
No matter how many times you import a module, you'll get the same copy of the module from sys.modules - which was loaded at first import mymodule
I am answering this late, as each of the previous answers has a bit of the answer, so I am attempting to sum it all up in a single answer.
Using built-in function:
For Python 2.x - Use the built-in reload(mymodule) function.
For Python 3.x - Use the imp.reload(mymodule).
For Python 3.4+ - imp has been deprecated in favor of importlib, i.e. importlib.reload(mymodule)
A few caveats:
It is generally not very useful to reload built-in or dynamically loaded modules. Reloading sys, __main__, builtins and other key modules is not recommended.
In many cases extension modules are not designed to be initialized more than once, and may fail in arbitrary ways when reloaded. If a module imports objects from another module using from ... import ..., calling reload() for the other module does not redefine the objects imported from it; one way around this is to re-execute the from statement, another is to use import and qualified names (module.name) instead.
If a module instantiates instances of a class, reloading the module that defines the class does not affect the method definitions of the instances; they continue to use the old class definition. The same is true for derived classes.
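The from ... import ... caveat is easy to reproduce. In this sketch (the module name demo_mod and the temp-directory setup are made up for the example), the directly imported name keeps pointing at the old function object even after the reload:

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True     # no .pyc cache, so reload re-reads the source
tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)

path = os.path.join(tmp, "demo_mod.py")
with open(path, "w") as fh:
    fh.write("def f():\n    return 'version 1'\n")

import demo_mod
from demo_mod import f             # a direct reference to the function object

with open(path, "w") as fh:
    fh.write("def f():\n    return 'version 2'\n")
importlib.invalidate_caches()
importlib.reload(demo_mod)

print(demo_mod.f())  # 'version 2': the qualified name sees the new object
print(f())           # 'version 1': the from-imported name still sees the old one
```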
External packages:
reimport - Reimport currently supports Python 2.4 through 2.7.
xreload - This works by executing the module in a scratch namespace, and then patching classes, methods and functions in place. This avoids the need to patch instances. New objects are copied into the target namespace.
livecoding - Code reloading allows a running application to change its behaviour in response to changes in the Python scripts it uses. When the library detects a Python script has been modified, it reloads that script and replaces the objects it had previously made available for use with newly reloaded versions. As a tool, it allows a programmer to avoid interruption to their workflow and a corresponding loss of focus. It enables them to remain in a state of flow. Where previously they might have needed to restart the application in order to put changed code into effect, those changes can be applied immediately.
Short answer:
try using reimport: a full featured reload for Python.
Longer answer:
It looks like this question was asked/answered prior to the release of reimport, which bills itself as a "full featured reload for Python":
This module intends to be a full featured replacement for Python's reload function. It is targeted towards making a reload that works for Python plugins and extensions used by longer running applications.
Reimport currently supports Python 2.4 through 2.6.
By its very nature, this is not a completely solvable problem. The goal of this module is to make the most common sorts of updates work well. It also allows individual modules and package to assist in the process. A more detailed description of what happens is on the overview page.
Note: Although the reimport explicitly supports Python 2.4 through 2.6, I've been trying it on 2.7 and it seems to work just fine.
Basically, reload as in allyourcode's answer. But it won't change the underlying code of already-instantiated objects or already-referenced functions. Extending from his answer:
#Make a simple function that prints "version 1"
shell1$ echo 'def x(): print "version 1"' > mymodule.py
# Run the module
shell2$ python
>>> import mymodule
>>> mymodule.x()
version 1
>>> x = mymodule.x
>>> x()
version 1
>>> x is mymodule.x
True
# Change mymodule to print "version 2" (without exiting the python REPL)
shell2$ echo 'def x(): print "version 2"' > mymodule.py
# Back in that same python session
>>> reload(mymodule)
<module 'mymodule' from 'mymodule.pyc'>
>>> mymodule.x()
version 2
>>> x()
version 1
>>> x is mymodule.x
False
Not sure if this does everything you'd expect, but you can do it just like that:
>>> del mymodule
>>> import mymodule
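Note, though, that deleting the local name by itself does not evict the module from the cache; the next import statement just re-binds the same cached module object without re-running its code:

```python
import sys
import math

cached = sys.modules['math']
del math               # removes only the name in this namespace
import math            # re-binds from sys.modules; the module is not re-executed
print(math is cached)  # True
```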
import sys
del sys.modules['module_name']
See here for a good explanation of how your dependent modules won't be reloaded and the effects that can have:
http://pyunit.sourceforge.net/notes/reloading.html
The way pyunit solved it was to track dependent modules by overriding __import__ then to delete each of them from sys.modules and re-import. They probably could've just reload'ed them, though.
dragonfly's answer worked for me (python 3.4.3).
import sys
del sys.modules['module_name']
Here is a lower-level solution:
exec(open("MyClass.py").read(), globals())
