I am attempting to use a module named "global_data" to save global state information, without success. The code is getting large already, so I'll try to post only the bare essentials.
from view import cube_control
from ioserver.ioserver import IOServer
from manager import config_manager, global_data

if __name__ == "__main__":
    # sets up initial data
    config_manager.init_manager()
    # modifies data
    io = IOServer()
    # verify global data modified from IOServer.__init__
    global_data.test()  # success
    # start pyqt GUI
    cube_control.start_view()
So far so good. However, the last line, cube_control.start_view(), enters this code:
# inside cube_control.py
def start_view():
    # verify global data modified from IOServer.__init__
    global_data.test()  # fail ?!?!
    app = QApplication(sys.argv)
    w = MainWindow()
    sys.exit(app.exec_())
Running global_data.test() in this case fails. Printing the entire global state reveals that it has somehow reverted to the data set up by config_manager.init_manager().
How is this possible?
While Qt is running, a scheduler calls the test every 10 seconds and also reports failure.
However, once the Qt GUI is stopped (by clicking "x") and I run the test from the console, it succeeds again.
Inside the global_data module I've attempted to store the data in a dict inside both a simple Python object and a ZODB in-memory database:
# inside global_data
import ZODB

state = {
    "units": {}
}

db = ZODB.DB(None)  # creates an in-memory db

def test(identity="no-id"):
    con = db.open()
    r = con.root()
    print("test online: ", r["units"]["local-test"]["online"], identity)
    con.close()
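The code that populates the db isn't shown above; for reference, a hypothetical writer for the same in-memory database (the helper name set_online and the key "local-test" are purely illustrative) might look like this:

import transaction

def set_online(unit_id, online=True):
    # Hypothetical helper in the style of test() above: write to the in-memory
    # ZODB so that test() can later read units[unit_id]["online"].
    con = db.open()
    root = con.root()
    units = root.get("units", {})
    units.setdefault(unit_id, {})["online"] = online
    root["units"] = units  # reassign so the change is registered on the root
    transaction.commit()
    con.close()

IOServer.__init__ would then call something like set_online("local-test", True).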
Both approaches have exactly the same problem; above, the test is shown only for the db.
The reason I attempted to use a db is that I understand threads can end up with a completely new global dictionary. However, the first two tests run in the same thread. The cyclic one runs in its own thread and could potentially create such a problem...?
File organization
If it helps, my program is organized with the following structure:
There is also a "view" folder with some qt5 GUI files.
The IOServer attempts to connect to a number of OPC-UA servers using the opcua module. No threads are manually started there, although I suppose the opcua module starts some to stay connected.
global_data id()
I also attempted to print(id(global_data)) together with the tests and found that the ID is the same in IOServer AND the top-level code, but changes inside cube_control.py#start_view. Shouldn't these always refer to the same module?
I'm still not sure what exactly happened, but apparently this was solved by removing the __init__.py file inside the folder named "manager". Now all imports of the module named "global_data" point to the same ID.
How using an __init__.py file caused a second instance of the same module remains a mystery.
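For anyone hitting the same symptom, a minimal diagnostic sketch (my own addition, not part of the project code) is to check whether the same file shows up in sys.modules under two names:

# Diagnostic sketch: list every loaded module whose source file is global_data.py.
import sys

for name, mod in list(sys.modules.items()):
    path = getattr(mod, "__file__", None) or ""
    if path.endswith("global_data.py"):
        print(name, id(mod))

# Two entries with different ids (e.g. "global_data" and "manager.global_data")
# mean the same file was imported twice; each copy has its own module-level state.

If two names appear, the usual cause is that the package directory itself is also on sys.path, so the same file is importable both as a top-level module and as a sub-module of the package.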
Related
I have a file, Foo.py, which holds the code below. When I run the file from the command line using python Foo.py, everything works. However, if I use the interactive Python interpreter:
python
>>> import Foo
>>> Foo.main()
>>> Foo.main()
>>> Foo.main()
The first call works fine, the second brings up a flood of warnings, the first of which is:
(python:5389): Gtk-CRITICAL **: IA__gtk_container_add: assertion 'GTK_IS_CONTAINER (container)' failed
And the last causes a segmentation fault. What's the problem with my code?
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
import os
from PyQt4 import Qt
from PyQt4 import QtGui, QtCore

class Foo(QtGui.QWidget):
    def __init__(self, parent=None):
        super(Foo, self).__init__()
        self.setUI()
        self.showMaximized()

    def setUI(self):
        self.setGeometry(100, 100, 1150, 650)
        self.grid = QtGui.QGridLayout()
        self.setLayout(self.grid)
        # For convenience, I set up different UI "concepts" in their own functions
        self.setInterfaceLine()
        self.setMainText()

    def setMainText(self):
        # the main box, where information is displayed
        self.main_label = QtGui.QLabel('Details')
        self.main_text = QtGui.QLabel()
        self.main_text.setAlignment(QtCore.Qt.AlignTop)
        # Reading the welcome message from file
        self.main_text.setText('')
        self.main_text.setWordWrap(True)  # To handle long sentences
        self.grid.addWidget(self.main_text, 1, 1, 25, 8)

    def setInterfaceLine(self):
        # Create the interface section
        self.msg_label = QtGui.QLabel('Now Reading:', self)
        self.msg_line = QtGui.QLabel('commands', self)  # the user message label
        self.input_line = QtGui.QLineEdit('', self)  # The command line
        self.input_line.returnPressed.connect(self.executeCommand)
        self.grid.addWidget(self.input_line, 26, 1, 1, 10)
        self.grid.addWidget(self.msg_label, 25, 1, 1, 1)
        self.grid.addWidget(self.msg_line, 25, 2, 1, 7)

    def executeCommand(self):
        fullcommand = self.input_line.text()  # Get the command
        args = fullcommand.split(' ')
        if fullcommand == 'exit':
            self.exit()

    def exit(self):
        # Exit the program; for now, no confirmation
        QtGui.QApplication.quit()

def main():
    app = QtGui.QApplication(sys.argv)
    foo = Foo(sys.argv)
    app.exit(app.exec_())

if __name__ == '__main__':
    main()
I'm able to reproduce in Python 3 but not Python 2.
It's something about garbage collection and multiple QApplications. Qt doesn't expect multiple QApplications to be used in the same process, and even though it looks like you're creating a new one every time, the old one lives on somewhere in the interpreter. On the first run of your main() method, you need to create a QApplication and prevent it from being garbage collected by storing it somewhere that won't be collected when main() returns, such as a module global or an attribute of a class or instance in the global scope.
Then, on subsequent runs, you should access the existing QApplication instead of making a new one. If you have multiple modules that might need a QApplication but you don't want them to have to coordinate, you can access an existing instance with QApplication.instance(), and then only instantiate one if none exists.
So, changing your main() method to the following works:
def main():
    global app
    app = QtGui.QApplication.instance()
    if app is None:
        app = QtGui.QApplication(sys.argv)
    foo = Foo(sys.argv)
    app.exit(app.exec_())
It's odd that you have to keep a reference to ensure the QApplication is not garbage collected. Since there is only supposed to be one per process, I would have expected it to live forever even without a reference to it, which is what seems to happen in Python 2. So ideally the global app line would not be needed above; it's there only to prevent this garbage-collection business.
We're stuck in the middle regarding how immortal this QApplication object is: it's too long-lived for you to be able to use a new one each time, but not long-lived enough on its own for you to reuse it each run without holding a reference to prevent its garbage collection. This might be a bug in PyQt, which probably ought to do the reference holding for us.
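If several modules might need the application object, one way to centralize the reuse-or-create logic described above is a small helper module; this is just a sketch of that advice, not an official PyQt pattern:

# qt_app.py (sketch): the module-level reference keeps the QApplication from
# being garbage collected, and instance() avoids ever creating a second one.
import sys
from PyQt4 import QtGui

_app = None

def get_app():
    global _app
    _app = QtGui.QApplication.instance()  # reuse the existing instance, if any
    if _app is None:
        _app = QtGui.QApplication(sys.argv)
    return _app

main() would then call get_app() instead of constructing QApplication directly.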
There must be only one QApplication instance per process. GUI frameworks are not prepared for multi-application mode.
I can't actually reproduce the problem, which at least shows there's nothing fundamentally wrong with the code.
The problem is probably caused by some garbage-collection issues when the main function returns (the order of deletion can be unpredictable).
Try putting del foo after the event-loop exits.
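A minimal sketch of that suggestion, applied to the main() shown earlier (splitting up app.exit(app.exec_()) so the del can run once the event loop has returned):

def main():
    global app
    app = QtGui.QApplication.instance()
    if app is None:
        app = QtGui.QApplication(sys.argv)
    foo = Foo(sys.argv)
    app.exec_()  # the event loop runs here
    # Delete the widget at a predictable point instead of leaving it to
    # whatever order the garbage collector picks after main() returns.
    del foo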
Hello generous SO'ers,
This is a somewhat complicated question, but hopefully relevant to the more general use of global objects from a child-module.
I am using some commercial software that provides a Python library for interfacing with their application through TCP. (I don't think I can post the code for their library.)
I am having an issue with calling an object from a child module, which I think is more generally related to global variables or some such. Basically, the object's state is as expected when the child module is in the same directory as all the other modules (including the module that creates the object).
But when I move the offending child module into a subfolder, it can still access the object but the state appears to have been altered, and the object's connection to the commercial app doesn't work anymore.
Following some advice from this question on global vars, I have organized my module's files as so:
scriptfile.py
pyFIMM/
    __init__.py              # imports all the other files
    __globals.py             # creates the connection object used in most other modules
    __pyfimm.py              # main module functions, such as pyFIMM.connect()
    __Waveguide.py           # there are many of these files with various classes and functions
    (...other files...)
    PhotonDesignLib/
        __init__.py          # imports all files in this folder
        pdPythonLib.py       # the commercial library
    proprietary/
        __init__.py          # imports all files in this folder
        University.py        # <-- The offending child module with issues
pyFIMM/__init__.py imports the sub-files like so:
from __globals import *     # import global vars & create FimmWave connection object `fimm`
from __pyfimm import *      # import the main module
from __Waveguide import *
(...import the other files...)
from proprietary import *   # imports the subfolder containing `University.py`
The __init__.py's in the subfolders "PhotonDesignLib" & "proprietary" both cause all files in those subfolders to be imported, so, for example, scriptfile.py would access my proprietary files like so: import pyFIMM.proprietary.University. This is accomplished via this hint, coded as follows in proprietary/__init__.py:
import os, glob
__all__ = [ os.path.basename(f)[:-3] for f in glob.glob(os.path.dirname(__file__)+"/*.py")]
(Numerous coders from a few different institutions will have their own proprietary code, so we can share the base code but keep our proprietary files/functions to ourselves this way, without having to change any base code/import statements. I now realize that, for the more static PhotonDesignLib folder, this is overkill.)
The file __globals.py creates the object I need to use to communicate with their commercial app, with this code (this is all the code in this file):
import PhotonDesignLib.pdPythonLib as pd  # the commercial lib/object

global fimm
fimm = pd.pdApp()  # <-- this is the offending global object
All of my sub-modules contain a from __globals import * statement and are able to access the object fimm without specifically declaring it as a global var, without any issue.
So I run scriptfile.py, which has an import statement like from pyFIMM import *.
Most importantly, scriptfile.py initiates the TCP connection to the application via fimm.connect() right at the top, before issuing any commands that require the communication, and all the other modules call fimm.Exec(<commands for app>) in various routines. This has been working swimmingly well: the fimm object has so far been accessible to all modules and keeps its connection state without issue.
The issue I am running into is that the file proprietary/University.py can only successfully use the fimm object when it's placed in the pyFIMM root-level directory (i.e. the same folder as __globals.py etc.). But when University.py is imported from within the proprietary sub-folder, it gives me an "application not initialized" error when I use the fimm object, as if the object had been overwritten or re-initialized or something. The object still exists; it just isn't maintaining its connection state when called by this sub-module. (I've checked that it's not re-initialized in another module.)
If, after the script fails in proprietary/University.py, I use the console to send a command, e.g. pyFimm.fimm.Exec(<command to app>), it communicates just fine!
I set proprietary/University.py to print a dir(fimm) as a test right at the beginning, which works fine and looks like the fimm object exists as expected, but a subsequent call in the same file to fimm.Exec() indicates that the object's state is not correct, returning the "application not initialized" error.
This almost looks like there are two fimm objects - one that the main python console (and pyFIMM modules) see, which works great, and another that proprietary/University.py sees which doesn't know that we called fimm.connect() already. Again, if I put University.py in the main module folder "pyFIMM" it works fine - the fimm.Exec() calls operate as expected!
FYI, proprietary/University.py imports the __globals.py file like so:
import sys, os, inspect

ScriptDir = inspect.currentframe().f_code.co_filename  # get path to this module file
(ParentDir, tail) = os.path.split(ScriptDir)  # strip off the file name
(ParentDir, tail) = os.path.split(ParentDir)  # strip off the containing folder, leaving the pyFIMM dir
sys.path.append(ParentDir)  # add ParentDir to the python search path

from __globals import *  # import global vars & FimmWave connection object
global fimm  # This line makes no difference, was just trying it.
(FYI, somewhere on SO it was stated that inspect was better than __file__, hence the code above.)
Why do you think having the sub-module in a sub-folder causes the object to lose its state?
I suspect the issue is either the way I instruct University.py to import __globals.py or the "import all files in this folder" method I used in proprietary/__init__.py. But I have little idea how to fix it!
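One quick diagnostic, mirroring the id() check from the global_data question above (my suggestion, not code from pyFIMM), is to look for __globals being loaded twice under different names:

# Drop this into proprietary/University.py after its imports, and run the same
# printout from the console after fimm.connect(), then compare the ids.
import sys

for name in sorted(sys.modules):
    if name.endswith("__globals") and sys.modules[name] is not None:
        mod = sys.modules[name]
        print("%s: module id %s, fimm id %s" % (name, id(mod), id(getattr(mod, "fimm", None))))

Seeing both "pyFIMM.__globals" and a bare "__globals" with different ids would mean University.py imported its own copy of the module, and is therefore using a second, never-connected fimm object.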
Thank you for looking at this question, and thanks in advance for your insightful comments.
I have a Python web service based on Webware for Python. In this app, there are multiple "contexts", essentially extensions of the base app with specializations. There are five of these. The one added today is throwing an error, but the others added days ago do not.
import MyModule as mv

class Main():
    def __init__(self):
        self.vendorcode = mv.id
The code above raises the error at "self.vendorcode = mv.id" saying the module does not have the attribute "id". The file "MyModule.py" that is imported contains the following:
# config parameters
id = 'TEST'
code = 'test_c'
name = 'test_n'
There is nothing else in the file.
Why would this setup work in four other subdirectories, aka "contexts", but not in this last one? To put this another way, what should I be looking for? I have confirmed file permissions and file character set type, which appear to be fine.
UPDATE: I changed "id" to "ident" but the error was still raised. Using print verifies that the module file is the correct one. Using the Python 2.4.3 interpreter, I see that the file is loaded. In the sample below, "TRANS" is the expected return value.
>>> import MoluVendor as mv
>>> print mv.ident
TRANS
>>>
So I wrote a quick test and saved it as a file in the directory with MyModule, as follows:
import MyModule as mv

class MainTest:
    def __init__(self):
        self.vendorcode = mv.ident

    def showme(self):
        print self.vendorcode

if __name__ == '__main__':
    mt = MainTest()
    mt.showme()
That reports correctly too. So the import is working in the simple case.
What bothers me is that there are four other sets of files, including a "MyModule.py" in each, that work fine. I compared the code and cannot find any differences. All of these file sets are invoked by one app running as a daemon in Apache. For that reason, having only one not work is perplexing.
I went through the files involved and checked each for gremlins. That is, I made certain the files were ASCII, with no Unicode or embedded characters, and saved them using an acceptable code page spec. I reloaded the app and everything now works.
So, in the end, the problem was between the chair and the keyboard. When I modified the app files for the new "context", I apparently neglected to save the files in the proper format.
I'm using a modified version of juno (http://github.com/breily/juno/) in Google App Engine. The problem I'm having is I have code like this:
import juno
import pprint

@get('/')
def home(web):
    pprint.pprint("test")

def main():
    run()

if __name__ == '__main__':
    main()
The first time I start the app up in the dev environment it works fine. The second time and every time after that it can't find pprint. I get this error:
AttributeError: 'NoneType' object has no attribute 'pprint'
If I put the import inside the function, it works every time:
@get('/')
def home(web):
    import pprint
    pprint.pprint("test")
So it seems like it is caching the function, but for some reason the imports are not being included when it uses that cache. I tried removing the main() function at the bottom to see if that would prevent this script from being cached, but I get the same problem.
Earlier tonight this code was working fine, I'm not sure what could have changed to cause this. Any insight is appreciated.
I would leave it that way. I saw a slideshare that Google put out about App Engine optimization that said you can get better performance by keeping imports inside of the methods, so they are not imported unless necessary.
Is it possible you are reassigning the name pprint somewhere? The only two ways I know of for a module-level name (like what you get from the import statement) to become None is if you either assign it yourself pprint = None or upon interpreter shutdown, when Python's cleanup assigns all module-level names to None as it shuts things down.
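For illustration, rebinding the module-level name is enough to reproduce that exact error; this is a standalone sketch, unrelated to juno itself:

import pprint

def home():
    pprint.pprint("test")

home()         # works: prints 'test'
pprint = None  # some code rebinds the module-level name...
home()         # AttributeError: 'NoneType' object has no attribute 'pprint'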
I'm sorry for the verbal description.
I have a wxPython app in a file called applicationwindow.py that resides in a package called garlicsimwx. When I launch the app by launching that file directly, it all works well. However, I have created a file rundemo.py, in a folder which contains the garlicsimwx package, which runs the app as well. When I use rundemo.py, the app launches; however, when the main wx.Frame imports a sub-package of garlicsimwx, namely simulations.life, for some reason a new instance of my application is created (i.e., a second identical window pops up).
I have tried stepping through the commands one by one, and although the bug happens only after importing the sub-package, the import statement doesn't directly cause it. Only when control returns to PyApp.MainLoop does the second window open.
How do I stop this?
I think you have code in one of your modules that looks like this:
import wx

class MyFrame(wx.Frame):
    def __init__(...):
        ...

frame = MyFrame(...)
The frame will be created when this module is first imported. To prevent that, use the common Python idiom:
import wx

class MyFrame(wx.Frame):
    def __init__(...):
        ...

if __name__ == '__main__':
    frame = MyFrame(...)
Did I guess correctly?
You could create a global boolean variable like g_window_was_drawn and check it in the function that does the work of creating a window. The value would be False at the start of the program and would change to True when the window is first created. The function that creates the window would check whether g_window_was_drawn is already True, and if it is, it would throw an exception. Then you will have a nice stack trace telling you who is responsible for calling this function.
I hope that helps you find it. I'm sorry for the verbal solution ;)
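To make that concrete, here is a minimal sketch of the suggested guard; the function and frame construction are hypothetical stand-ins for the real code:

import wx

g_window_was_drawn = False  # False until the first window has been created

def create_main_window():
    global g_window_was_drawn
    if g_window_was_drawn:
        # The traceback from this exception shows who tried to create
        # a second window.
        raise RuntimeError("Main window was already created")
    g_window_was_drawn = True
    return wx.Frame(None, title="Main window")  # stand-in for the real frame class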
Got it: There was no
if __name__ == '__main__':
in my rundemo file. It was actually a multiprocessing issue: The new window was opened in a separate process.
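For reference, a sketch of what the guarded rundemo.py would look like; the entry-point name is hypothetical, since the file's contents weren't posted:

# rundemo.py (sketch)
from garlicsimwx import applicationwindow

if __name__ == '__main__':
    # Only the original process launches the GUI; child processes started by
    # multiprocessing re-import this file but skip this block.
    applicationwindow.main()  # hypothetical entry point that starts the wx app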