When reading Python documentation and various mailing lists, I always come across what looks a little like dogma: global variables should be avoided at all costs, they are poor design... OK, why not? But there are some real-life situations where I do not know how to avoid such a pattern.
Say that I have a GUI from which several files can be loaded from the main menu.
The file objects corresponding to the loaded files may be used throughout the GUI (e.g. an image viewer that displays an image and on which various actions can be performed via different dialogs/plugins).
Is there something really wrong with building the following design:
Menu.py --> the file will be loaded from here
Main.py --> the loaded file objects can be used here
Dialog1.py --> or here
Dialog2.py --> or there
Dialog3.py --> or there
...
Globals.py
where Globals.py will store a dictionary whose keys are the names of the loaded files and whose values are the corresponding file objects. From there, the various parts of the code that need those data would access them via weak references.
Sorry if my question looks (or is) stupid, but do you see any elegant or global-free alternatives? One way would be to encapsulate the loaded-data dictionary in the main application class of Main.py, treating it as the central access point of the GUI. However, that would also bring some complications, as this class would have to be easily accessible from all the dialogs that need the data, even though they are not necessarily direct children of it.
Thanks a lot for your help.
Global variables should be avoided because they inhibit code reuse. Multiple widgets/applications can nicely live within the same main loop. This allows you to abstract what you now think of as a single GUI into a library that creates such GUI on request, so that (for instance) a single launcher can launch multiple top-level GUIs sharing the same process.
If you use global variables, this is impossible because multiple GUI instances will trample each other's state.
The alternative to global variables is to associate the needed attributes with a top-level widget, and to create sub-widgets that point to the same top-level widgets. Then, for example, a menu action will use its top-level widget to reach the currently opened file in order to operate on it.
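A minimal, framework-agnostic sketch of that idea (the class and attribute names here are illustrative, not from any real toolkit):

```python
class MainWindow:
    """Stands in for the top-level widget; it owns the shared state."""
    def __init__(self):
        self.open_files = {}  # filename -> file object

class Dialog:
    """Any sub-widget keeps a reference back to its top-level window."""
    def __init__(self, main_window):
        self.main = main_window

    def current_file(self, name):
        # Reach the shared state through the top-level widget, not a global.
        return self.main.open_files[name]
```

Two MainWindow instances then carry two independent open_files dictionaries, so two GUIs can coexist in one process.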
I would manage global data by encapsulating the data in one or more classes and implementing the Borg pattern for these classes.
See Why is the Borg pattern better than the Singleton pattern in Python
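A minimal sketch of a Borg class for the loaded-files dictionary (the class name is made up for illustration):

```python
class FileRegistry:
    """Borg pattern: every instance shares the same __dict__, hence the
    same state, without all instances being the same object."""
    _shared_state = {}

    def __init__(self):
        self.__dict__ = self._shared_state
        # Initialize the shared dictionary only on first construction.
        self.__dict__.setdefault("files", {})  # filename -> file object
```

Any part of the GUI can then construct its own FileRegistry and see the same loaded files.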
I have an application that dynamically generates a lot of Python modules with class factories, to eliminate a lot of redundant boilerplate that makes the code hard to debug across similar implementations. It works well, except that the dynamic generation of the classes across the modules (hundreds of them) takes more time to load than simply importing from a file. So I would like to find a way to save the modules to a file after generation (unless reset) and then load from those files, to cut down on bootstrap time for the platform.
Does anyone know how I can save/export auto-generated Python modules to a file for re-import later? I already know that pickling or exporting as a JSON object won't work, because the classes make use of thread locks and other dynamic state, and because classes must be defined before they can be pickled. I need to save the actual class definitions, not instances. The classes are defined with the type() function.
If you have ideas or knowledge of how to do this, I would really appreciate your input.
You’re basically asking how to write a compiler whose input is a module object and whose output is a .pyc file. (One plausible strategy is of course to generate a .py and then byte-compile that in the usual fashion; the following could even be adapted to do so.) It’s fairly easy to do this for simple cases: the .pyc format is very simple (but note the comments there), and the marshal module does all of the heavy lifting for it. One point of warning that might be obvious: if you’ve already evaluated, say, os.getcwd() when you generate the code, that’s not at all the same as evaluating it when loading it in a new process.
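The "generate a .py and byte-compile it" strategy can be sketched with the standard library alone; the module name and source below are made up for illustration:

```python
import importlib.util
import os
import py_compile
import tempfile

# Hypothetical stand-in for one generated module's source; a real generator
# would emit the class definitions it currently builds with type().
source = "class Widget:\n    kind = 'generated'\n"

src_path = os.path.join(tempfile.mkdtemp(), "generated_mod.py")
with open(src_path, "w") as f:
    f.write(source)

# Byte-compile once up front; later runs can load the cached .pyc directly
# and skip both generation and parsing.
pyc_path = py_compile.compile(src_path, cfile=src_path + "c")

# Import straight from the .pyc (no .py needed at load time).
spec = importlib.util.spec_from_file_location("generated_mod", pyc_path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
```

The caveat from above still applies: anything evaluated at generation time is frozen into the emitted source, not re-evaluated at load time.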
The “only” other task is constructing the code objects for the module and each class: this requires concatenating a large number of boring values from the dis module, and will fail if any object encountered is non-trivial. These might be global/static variables/constants or default argument values: if you can alter your generator to produce modules directly, you can probably wrap all of these (along with anything else you want to defer) in function calls by compiling something like
my_global = (lambda: open(os.devnull, 'w'))()
so that you actually emit the function and then a call to it. If you can’t so alter it, you’ll have to have rules to recognize values that need to be constructed in this fashion so that you can replace them with such calls.
Another detail that may be important is closures: if your generator uses local functions/classes, you’ll need to create the cell objects, perhaps via “fake” closures of your own:
def cell(x): return (lambda: x).__closure__[0]
After recently starting a role which essentially incorporates professional software development in Python, I have noticed in the code that I am working with that people tend to read in files as objects instead of variables, and I cannot understand why.
For example, I work with a limited amount of raster files. In the parts of the code that deal with these raster files, there might be a class defined explicitly for reading in the rasters. The class takes a file location as an input and will then use the Python package rasterio to open and read the file and access its other characteristics, storing these as attributes. For further example, we may have something like the following:
class test(object):
    def __init__(self, fileLocation):
        self.fileRead = open(fileLocation)
        self.fileSplit = self.fileRead.read().split()
My instinct would have been to just read the file straight in as a variable and access its properties when I needed them, avoiding the expenditure of extra effort.
I know the idea of classes is to allow for streamlined data handling when dealing with quantities of similar types of data (i.e. student information for students in a school), but this class might be instantiated only once in each run of the parent program. So to me, it seems a bit overkill to go to the effort of creating a class just to hold the information obtained through rasterio, when it would probably be much cleaner just to access the file data as you want it through explicit calls with rasterio.
Now, this sort of way of structuring code appears to be quite prevalent, and I just can't seem to understand why this would be a good way of doing things. As such, I wondered if there is some hidden benefit that I am missing which someone could explain to me? Or whether I should disregard it and carry on in the manner outlined.
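One concrete benefit such a wrapper class can offer is doing the open/parse work once and letting every caller reuse the results. A rough sketch of that idea, using a plain text file as a stand-in for the rasterio-backed class (the class name and file contents are invented for illustration):

```python
import os
import tempfile

class TextRaster:
    """Stand-in for the rasterio-backed class: opens the file once and
    exposes the parsed results as attributes that every caller reuses."""
    def __init__(self, path):
        with open(path) as f:
            self._text = f.read()
        self.words = self._text.split()        # parsed once, not per call site
        self.n_lines = self._text.count("\n")  # likewise cached up front

# Tiny demonstration file.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("10 20 30\n40 50 60\n")

raster = TextRaster(path)
```

Compared with scattering open()/read()/split() calls wherever the data is needed, the class gives one place to change if the file format or library API changes.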
Studying Tkinter and I've only found tutorials on Tkinter without OOP, but looking at the Python.org documentation it looks like it's all in OOP. What's the benefit of using classes? It seems like more work and the syntax looks night and day from what I've learned so far.
This is going to be a really generic answer, and most answers to this will be opinionated anyway. Speaking of which, this answer will likely be downvoted and closed because of that.
Anyways... Let's say you have a big GUI with a bunch of complicated logic. Sure, you could write one huge file with hundreds, if not thousands, of lines, proxy a bunch of stuff through different functions, and make it work. But the logic is messy.
What if you could compartmentalize different sections of the GUI and all the logic surrounding them, then take those components and aggregate them into the sum which makes up the GUI?
This is exactly what you can use classes for in Tkinter. More generally, this is essentially what you use classes for: abstracting things into reusable objects (instances) which provide a useful utility.
Example:
An app I built ages ago with Tkinter, when I first learned it, was a file-moving program. It let you select the source/destination directory and had logging capabilities, search functions, monitoring of processes for when downloads complete, regex renaming options, unzipping of archives, etcetera. Basically, everything I could think of for moving files.
So, what I did was split the app up like this (at a high level):
1) Have a main which is the aggregate of the components forming the main GUI
Aggregates were essentially a sidebar, buttons/labels for selecting various options split into their own sections as needed, and a scrolled text area for operation logging + search.
So, the main components were split like this:
2) A sidebar which had the following components
Section which contained the options for monitoring processes
Section which contained options for custom regular expressions or premade ones for renaming files
Section for various flags such as unpacking
3) A logging / text area section with search functionality built in, plus the options to dump (save) log files or view them.
That's a high-level description of the "big" components, which were composed of smaller components that were their own classes. So, by using classes I was able to wrap the complicated logic up into small, self-contained pieces.
Granted, you can do the same thing with functions, but you have "pieces" of a GUI which you can consider objects (classes) which fit together. So, it just makes for cleaner code / logic.
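A skeletal sketch of that structure (the component names mirror the description above but the code itself is invented; real components would hold the actual logic):

```python
import tkinter as tk

class Sidebar(tk.Frame):
    """One compartmentalized component: the options panel."""
    def __init__(self, master):
        super().__init__(master)
        tk.Label(self, text="Monitoring").pack()
        tk.Label(self, text="Renaming").pack()

class LogArea(tk.Frame):
    """Another component: the logging/search text area."""
    def __init__(self, master):
        super().__init__(master)
        self.text = tk.Text(self, height=10)
        self.text.pack(fill="both", expand=True)

class App(tk.Tk):
    """The aggregate: composes the self-contained components into the GUI."""
    def __init__(self):
        super().__init__()
        Sidebar(self).pack(side="left", fill="y")
        LogArea(self).pack(side="right", fill="both", expand=True)

# App().mainloop() would launch the window; it is not called here so the
# sketch can be imported without a display.
```

Each component can be developed and tested on its own, and App only wires them together.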
Like what pythonista just said...
OOP makes your GUI code more organized, and if you need to create new windows, e.g. Toplevel(), you will find it extremely useful because you won't need to write all that code again and again. Plus, if you have to use variables that are inside another function, you will not need to declare them as global. OOP with Tkinter is the best approach.
I'm teaching myself Python (3.x) and I'm trying to understand the use case for classes. I'm starting to understand what they actually do, but I'm struggling to understand why you would use a class as opposed to creating a module with functions.
For example, how does:
class cls1:
    def func1(self, arguments...):
        #do some stuff

obj1 = cls1()
obj2 = cls1()

obj1.func1(arg1, arg2...)
obj2.func1(arg1, arg2...)
Differ from:
#module.py contents
def func1(arguments...):
    #do some stuff

import module

x = module.func1(arg1, arg2...)
y = module.func1(arg1, arg2...)
This is probably very simple but I just can't get my head around it.
So far, I've had quite a bit of success writing python programs, but they have all been pretty procedural, and only importing basic module functions. Classes are my next biggest hurdle.
You use a class if you need multiple instances of it and you want those instances not to interfere with each other.
A module behaves like a singleton class, so you can have only one instance of it.
EDIT: for example if you have a module called example.py:
x = 0

def incr():
    global x
    x = x + 1

def getX():
    return x
if you import this module twice:
>>> import example as ex1
>>> import example as ex2
>>> ex1.incr()
>>> ex1.getX()
1
>>> ex2.getX()
1
This is because the module is imported only once, so ex1 and ex2 point to the same object.
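By contrast, a class version of the same counter gives each instance its own independent x:

```python
class Counter:
    """Unlike the example module, each Counter instance has its own x."""
    def __init__(self):
        self.x = 0

    def incr(self):
        self.x += 1

    def get_x(self):
        return self.x

c1 = Counter()
c2 = Counter()
c1.incr()  # only c1's state changes; c2 is untouched
```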
As long as you're only using pure functions (functions that work only on their arguments, always return the same result for the same argument set, don't depend on any global/shared state and don't change anything - neither their arguments nor any global/shared state - IOW functions that don't have any side effects), classes are indeed of rather limited use. But that's functional programming, and while Python can technically be used in a functional style, it's possibly not the best choice here.
As soon as you have to share state between functions, and especially if some of these functions are supposed to change this shared state, you do have a use for OO concepts. There are mainly two ways to share state between functions: passing the state from function to function, or using globals.
The second solution - global state - is known to be troublesome, first because it makes understanding the program flow (hence debugging) harder, but also because it prevents your code from being reentrant, which is a definitive no-no for quite a lot of now-common use cases (multithreaded execution, most server-side web application code, etc). It actually makes your code practically unusable for anything except short, simple one-shot scripts.
The first solution most often implies using half-informal complex data structures (dicts with a given set of keys, often holding other dicts, lists, lists of dicts, sets, etc), correctly initialising them and passing them from function to function - and of course having a set of functions that work on a given data structure. IOW you are actually defining new complex data types (a data structure and a set of operations on that data structure), using only the lowest-level tools the language provides.
Classes are actually a way to define such a data type at a higher level, grouping together the data and the operations. They also offer a lot more, especially polymorphism, which makes for more generic, extensible code and easier unit testing.
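A small sketch of the polymorphism point: generic code can work on any of these classes without knowing the concrete type.

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

# Generic code: works on any mix of Shape subclasses, present or future.
def total_area(shapes):
    return sum(s.area() for s in shapes)
```

Adding a Triangle class later requires no change to total_area, which is what makes this style extensible.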
Consider that you have a file or a database of products, where each product has a product id, price, availability, discount, published-on-web status, and more. And you have a second file with thousands of products containing new prices, availability and discounts. You want to update the values and keep track of how many products will change, along with other stats. You can do it with procedural or functional programming, but you will find yourself inventing tricks to make it work, and most likely you will get lost among many different lists and sets.
On the other hand, with object-oriented programming you can create a class Product with instance variables for the product id, the old price, the old availability, the old discount and the old published status, plus instance variables for the new values (new price, new availability, new discount, new published status). Then all you have to do is read the first file/database and create a new instance of Product for every product. Then you can read the second file and find the new values for your product objects. In the end, every product of the first file/database will be an object carrying both the old values and the new values. It is easier this way to track the changes, make statistics and update your database.
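A rough sketch of such a Product class (the attribute and method names are illustrative):

```python
class Product:
    """Holds old and new values side by side so changes are easy to track."""
    def __init__(self, product_id, price, availability, discount):
        self.product_id = product_id
        self.price = price
        self.availability = availability
        self.discount = discount
        # New values are unknown until the second file is read.
        self.new_price = None
        self.new_availability = None
        self.new_discount = None

    def update(self, price, availability, discount):
        self.new_price = price
        self.new_availability = availability
        self.new_discount = discount

    def changed(self):
        # A product counts as changed once a different new price is known.
        return self.new_price is not None and self.new_price != self.price
```

Counting the changes then becomes a one-liner over the product objects, e.g. `sum(p.changed() for p in products)`.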
One more example: if you use tkinter, you can create a class for a top-level window, and every time you want to display an information window or an about window (with custom background color and dimensions) you can simply create a new instance of this class.
For simple things classes are not needed. But for more complex things classes sometimes can make the solution easier.
I think the best answer is that it depends on what your intended object is supposed to be/do. But in general, there are some differences between a class and an imported module which give each of them different features. The most important is that classes are designed to produce objects, which means they have many options for acting like objects that modules don't have - for example special methods like __getattr__, __setattr__, __iter__, etc., and the ability to create many instances and even control the way they are created. For modules, on the other hand, the documentation describes their use case perfectly:
If you quit from the Python interpreter and enter it again, the
definitions you have made (functions and variables) are lost.
Therefore, if you want to write a somewhat longer program, you are
better off using a text editor to prepare the input for the
interpreter and running it with that file as input instead. This is
known as creating a script. As your program gets longer, you may want
to split it into several files for easier maintenance. You may also
want to use a handy function that you’ve written in several programs
without copying its definition into each program.
To support this, Python has a way to put definitions in a file and use
them in a script or in an interactive instance of the interpreter.
Such a file is called a module; definitions from a module can be
imported into other modules or into the main module (the collection of
variables that you have access to in a script executed at the top
level and in calculator mode).
I have a bunch of objects from the same class in Python.
I've decided to put each object in a different file since it's
easier to manage them (If I plan to add more objects or edit them individually)
However, I'm not sure how to run through all of them; they are in another package.
So if I look at NetBeans I have TopLevel... and there's also a package named Shapes.
In Shapes I have Ball.py, Circle.py and Triangle.py (inside each file is a call to a constructor with the details of the specific shape), and they are all from the class GraphicalShape,
which is defined in GraphicalShape.py in the TopLevel package.
Now, I also have in my TopLevel package a file named newpythonproject.py, which would start the process of calling each shape and doing things with it. How do I run through all of the shapes?
Also: is this a good way to do it?
P.S. never mind the uppercase/lowercase stuff...
Just to clarify, I added a picture of the Project Tree
http://i47.tinypic.com/2i1nomw.png
It seems that you're misunderstanding the Python jargon. The Python term "object" means an actual run-time instance of a class. As far as I can tell, you have "sub-classes" of the Shape class called ball, circle and triangle. Note that a sub-class is also a class. You are keeping the code for each such sub-class in a separate file, which is fine.
I think you're getting mixed up because you're focusing on the file layout of your project far too early. With Python it is often easier to start with just one file, writing everything you need in that file (functions, classes, etc.). Just get things working first. Later, when you've got working code and you just want to split a part of it into another file for organizational reasons, it will be much more obvious (to you!) how this should be done.
In Python, every class does not have to be defined in its own separate file. You can do this if you like, but it is not compulsory.
It's not clear what you mean when you say "run through them all".
If you mean "import them for use", then you should:
Make sure the parent folder of Shapes is on the PYTHONPATH environment variable; then use
from Shapes import Ball.
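One simple way to "run through all of the shapes", assuming each module defines a subclass of GraphicalShape: after importing the modules, the subclasses are discoverable from the base class. Everything is shown in one file here for brevity; in the real project Ball, Circle and Triangle would live in their own modules under Shapes and be imported first.

```python
class GraphicalShape:
    def describe(self):
        # Each instance reports its own concrete class name.
        return type(self).__name__

# In the real project these would live in Shapes/Ball.py, Shapes/Circle.py,
# Shapes/Triangle.py and be imported with `from Shapes import Ball`, etc.
class Ball(GraphicalShape): pass
class Circle(GraphicalShape): pass
class Triangle(GraphicalShape): pass

# "Run through all of them": once imported, every subclass is reachable.
shapes = [cls() for cls in GraphicalShape.__subclasses__()]
names = sorted(s.describe() for s in shapes)
```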