How to integrate APScheduler and imp? - python

I have built a plugin-based application where "plugins" (Python modules) can be loaded with imp and then scheduled for later execution by APScheduler. I was able to integrate the two successfully, but I want to add persistence in case of crashes or application restarts, so I changed the default in-memory job store to SQLAlchemyJobStore. It works quite well the first time you run the program: tasks are loaded, scheduled, saved to the database, and executed at the right time.
Problem is when I try to load the application again I get this traceback:
ERROR:apscheduler.jobstores.default:Unable to restore job "d3e0f0068df54d15986e9b7b6757f665" -- removing it
Traceback (most recent call last):
File "/home/jesus/.local/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 126, in _get_jobs
jobs.append(self._reconstitute_job(row.job_state))
File "/home/jesus/.local/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 114, in _reconstitute_job
job.__setstate__(job_state)
File "/home/jesus/.local/lib/python2.7/site-packages/apscheduler/job.py", line 228, in __setstate__
self.func = ref_to_obj(self.func_ref)
File "/home/jesus/.local/lib/python2.7/site-packages/apscheduler/util.py", line 257, in ref_to_obj
raise LookupError('Error resolving reference %s: could not import module' % ref)
LookupError: Error resolving reference __init__:run: could not import module
So it is obvious that there is a problem when attempting to import the function again
Here is my scheduler initialization:
executors = {'default': ThreadPoolExecutor(5)}
jobstores = {'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')}
self.scheduler = BackgroundScheduler(executors=executors, jobstores=jobstores)
I have a "tests" dictionary containing the "plugins" that should be loaded and some parameters; "load_plugin" uses imp to load a plugin by its name.
for test, parameters in tests.items():
    if test in pluggins:
        module = load_plugin(pluggins[test])
        self.jobs[test] = self.scheduler.add_job(module.run, "interval",
                                                 seconds=parameters["interval"], name=test)
Any idea about how can I handle reconstituting jobs?

Something in the automatic detection of the module name is going wrong. It's hard to say exactly what, but the alternative is to pass the job target as a textual reference string (e.g. "package.module:function") instead of a callable. If you can do that, you avoid this problem.
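To see why the textual form survives a restart, here is a simplified sketch of how a "module:attribute" reference can be resolved with importlib. This is an illustration of the idea, not APScheduler's actual code:

```python
import importlib

def ref_to_obj(ref):
    """Resolve a textual reference like "package.module:attr" to the
    object it names, roughly the way APScheduler does internally."""
    module_name, sep, attr_path = ref.partition(':')
    if not sep:
        raise ValueError('invalid reference: %r' % ref)
    module = importlib.import_module(module_name)
    obj = module
    # Walk dotted attributes after the colon, e.g. "Class.method".
    for name in attr_path.split('.'):
        obj = getattr(obj, name)
    return obj

# A reference like "os.path:join" resolves to the real function:
print(ref_to_obj('os.path:join'))
```

APScheduler also accepts such a string directly as the func argument of add_job, so a plugin job could be scheduled as e.g. scheduler.add_job('myplugins.plugin_a:run', ...) (module path assumed here), provided that module is importable when the application restarts; jobs loaded with imp from arbitrary paths are not, which is why the reference fails to resolve.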

Related

Python can't import WMI under special circumstance

I've created a standalone exe Windows service written in Python and built with pyInstaller. When I try to import wmi, an exception is thrown.
What's really baffling is that I can do it without a problem if running the code in a foreground exe, or a foreground python script, or a python script running as a background service via pythonservice.exe!
Why does it fail under this special circumstance of running as a service exe?
import wmi
Produces this error for me:
com_error: (-2147221020, 'Invalid syntax', None, None)
Here's the traceback:
Traceback (most recent call last):
File "<string>", line 43, in onRequest
File "C:\XXX\XXX\XXX.pyz", line 98, in XXX
File "C:\XXX\XXX\XXX.pyz", line 31, in XXX
File "C:\XXX\XXX\XXX.pyz", line 24, in XXX
File "C:\XXX\XXX\XXX.pyz", line 34, in XXX
File "C:\Program Files (x86)\PyInstaller-2.1\PyInstaller\loader\pyi_importers.py", line 270, in load_module
File "C:\XXX\XXX\out00-PYZ.pyz\wmi", line 157, in <module>
File "C:\XXX\XXX\out00-PYZ.pyz\win32com.client", line 72, in GetObject
File "C:\XXX\XXX\out00-PYZ.pyz\win32com.client", line 87, in Moniker
wmi.py line 157 has a global call to GetObject:
obj = GetObject ("winmgmts:")
win32com\client\__init__.py contains GetObject(), which ends up calling Moniker():
def GetObject(Pathname = None, Class = None, clsctx = None):
    """
    Mimic VB's GetObject() function.

    ob = GetObject(Class = "ProgID") or GetObject(Class = clsid) will
    connect to an already running instance of the COM object.

    ob = GetObject(r"c:\blah\blah\foo.xls") (aka the COM moniker syntax)
    will return a ready to use Python wrapping of the required COM object.

    Note: You must specifiy one or the other of these arguments. I know
    this isn't pretty, but it is what VB does. Blech. If you don't
    I'll throw ValueError at you. :)

    This will most likely throw pythoncom.com_error if anything fails.
    """
    if clsctx is None:
        clsctx = pythoncom.CLSCTX_ALL

    if (Pathname is None and Class is None) or \
       (Pathname is not None and Class is not None):
        raise ValueError("You must specify a value for Pathname or Class, but not both.")

    if Class is not None:
        return GetActiveObject(Class, clsctx)
    else:
        return Moniker(Pathname, clsctx)
The first line in Moniker(), i.e. MkParseDisplayName() is where the exception is encountered:
def Moniker(Pathname, clsctx = pythoncom.CLSCTX_ALL):
    """
    Python friendly version of GetObject's moniker functionality.
    """
    moniker, i, bindCtx = pythoncom.MkParseDisplayName(Pathname)
    dispatch = moniker.BindToObject(bindCtx, None, pythoncom.IID_IDispatch)
    return __WrapDispatch(dispatch, Pathname, clsctx=clsctx)
Note: I tried using
pythoncom.CoInitialize()
which apparently solves this import problem within a thread, but that didn't work...
I also faced the same issue and finally figured it out: import pythoncom and call pythoncom.CoInitialize() before importing wmi:
import pythoncom
pythoncom.CoInitialize()
import wmi
I tried solving this in countless ways. In the end, I threw in the towel and had to find a different means of achieving the same goals I had with wmi.
Apparently that invalid syntax error is thrown when trying to create an object with an invalid "moniker name", which can simply mean the service, application, etc. doesn't exist on the system. Under this circumstance "winmgmts" just can't be found at all it seems! And yes, I tried numerous variations on that moniker with additional specs, and I tried running the service under a different user account, etc.
Honestly I didn't dig in order to understand why this occurs.
Anyway, the below imports solved my problem - which was occurring only when ran from a Flask instance:
import os
import pythoncom
pythoncom.CoInitialize()
from win32com.client import GetObject
import wmi
The error "com_error: (-2147221020, 'Invalid syntax', None, None)" is exactly what popped up in my case, so I came here after a long time of searching the web, and voilà:
Under this circumstance "winmgmts" just can't be found at all it
seems!
This was the correct hint for me, because I had just a typo: I used "winmgmt:" without the trailing 's'. So "invalid syntax" refers to the first method parameter, not the Python code itself. o_0 Unfortunately I can't find any reference for which objects we can get with win32com.client.GetObject(), so if anybody has a hint about which parameters are allowed / should work, please post it here. :-)

Yapsy throws TypeError on init, missing arguments on init

I've been working with Yapsy (v. 1.10.423) lately, and I've run across an issue with (I think) the package, which is latest from PyPi.
The trace I'm getting is below.
Traceback (most recent call last):
File "./clayrd.py", line 256, in <module>
run()
File "./clayrd.py", line 202, in run
loadPlugins()
File "./clayrd.py", line 121, in loadPlugins
_pluginMgr.collectPlugins()
File "/usr/local/lib/python2.7/dist-packages/yapsy/PluginManager.py", line 531, in collectPlugins
self.loadPlugins()
File "/usr/local/lib/python2.7/dist-packages/yapsy/PluginManager.py", line 513, in loadPlugins
plugin_info.plugin_object = element()
TypeError: __init__() takes exactly 3 arguments (1 given)
The method in question that begins that trace is below
def loadPlugins():
    """
    Load up all of our plugins
    """
    # Set plugin dir and horde them
    _pluginMgr = PluginManager()  # Defined at start of script
    _pDir = os.path.join(_config['run_dir'], _pluginDir)
    _logger.info("Worker is loading plugins from {}".format(_pDir))
    _pluginMgr.setPluginPlaces([_pDir])
    _pluginMgr.collectPlugins()  # This is line 121

    # Attempt plugin activation
    for plugin in _pluginMgr.getAllPlugins():
        _logger.info("Worker attempting to activate plugin {}".format(plugin.name))
        _loaded = _pluginMgr.activatePluginByName(plugin.name)
        if _loaded == False:
            _logger.warn("Failed to load plugin {}".format(plugin.name))
            continue
        else:
            _logger.info("Plugin {} loaded successfully. Loading dependencies...".format(plugin.name))
My question is simply: is this truly a bug with Yapsy, or am I missing something else?
The element that is being 'called' at the bottom of the stack is actually the plugin class that Yapsy is trying to instantiate, so element() actually calls the plugin class's __init__ method.
Getting back to the exception message, this indicates that your plugin class has a constructor that requires more arguments than just self, whereas Yapsy expects the plugin class to take no explicit arguments at construction time.
As a consequence, you should check the definition of the plugin class of the plugin that is being loaded, because that is very likely where the problem is.
If the class's __init__ only takes self, then you may have a look at the troubleshooting documentation for Yapsy, which describes possibly related caveats.
If none of this helps, you can submit a small code sample of a plugin file that causes the problem.
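To make the failure mode concrete, here is a minimal, self-contained sketch (the plugin classes are made up; this is not Yapsy code) of the difference between a constructor that can be called with no arguments, as element() does, and one that triggers this TypeError:

```python
class GoodPlugin(object):
    """Instantiable with no arguments, which is what element() needs."""
    def __init__(self):
        self.name = "good"

class BadPlugin(object):
    """Requires two extra constructor arguments, so a bare call fails."""
    def __init__(self, config, logger):
        self.config = config
        self.logger = logger

GoodPlugin()           # fine
try:
    BadPlugin()        # equivalent to what element() does -> TypeError
except TypeError as exc:
    print("TypeError:", exc)
```

Under Python 2 the message for BadPlugin() is exactly the "__init__() takes exactly 3 arguments (1 given)" seen in the traceback: self plus two parameters expected, only self supplied.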

errors with gae-sessions and nose

I'm running into a few problems with adding gae-sessions to a relatively mature GAE app. I followed the readme carefully and also looked at the demo.
First, just adding the gaesessions directory to my app causes the following error when running tests with nose and nose-gae:
Exception ImportError: 'No module named threading' in <bound method local.__del__ of <_threading_local.local object at 0x103e10628>> ignored
All the tests run fine, so it's not a big problem, but it suggests that something isn't right.
Next, if I add the following two lines of code:
from gaesessions import get_current_session
session = get_current_session()
And run my tests, then I get the following error:
Traceback (most recent call last):
File "/Users/.../unit_tests.py", line 1421, in testParseFBRequest
data = tasks.parse_fb_request(sr)
File "/Users/.../tasks.py", line 220, in parse_fb_request
session = get_current_session()
File "/Users/.../gaesessions/__init__.py", line 36, in get_current_session
return _tls.current_session
File "/Library/.../python2.7/_threading_local.py", line 193, in __getattribute__
return object.__getattribute__(self, name)
AttributeError: 'local' object has no attribute 'current_session'
This error does not happen on the dev server.
Any suggestions on fixing the above would be greatly appreciated.
I ran into the same problem. The problem seems to be that the GAE testbed behaves differently from the development server. I don't know the specifics, but I ended up solving it by adding:
def setUp(self):
    testbed.Testbed().activate()
    # after activating the testbed:
    from gaesessions import Session, set_current_session
    set_current_session(Session())

Import web2py's DAL to be used with Google Cloud SQL on App Engine

I want to build an app on App Engine which uses Cloud SQL as backend database instead of App engine's own datastore facility (which doesn't support common SQL operations such as JOIN).
Cloud SQL has a DB-API and hence I was looking for a lightweight Data Abstraction Layer (DAL) to help easily manipulate the cloud databases. A little research revealed that web2py has a pretty neat DAL which is compatible with Cloud SQL.
Since I don't actually need the whole full-stack web2py framework, I copied the dal.py file out from the /gluon folder into a simple testing app's main directory and included this line in my app:
from dal import DAL, Field
db = DAL('google:sql://myproject:myinstance/mydatabase')
However, this generated an error after I deployed the app and tried to run it.
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 701, in __call__
handler.get(*groups)
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/helloworld2.py", line 13, in get
db=DAL('google:sql://serangoon213home:rainman001/guestbook')
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 5969, in __init__
raise RuntimeError, "Failure to connect, tried %d times:\n%s" % (attempts, tb)
RuntimeError: Failure to connect, tried 5 times:
Traceback (most recent call last):
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 5956, in __init__
self._adapter = ADAPTERS[self._dbname](*args)
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 3310, in __init__
self.folder = folder or '$HOME/'+thread.folder.split('/applications/',1)[1]
File "/base/python_runtime/python_dist/lib/python2.5/_threading_local.py", line 199, in __getattribute__
return object.__getattribute__(self, name)
AttributeError: 'local' object has no attribute 'folder'
It looks like it was due to an error with the 'folder' attribute, which is assigned by the statement:
self.folder = folder or '$HOME/'+thread.folder.split('/applications/',1)[1]
Does anyone know what this attribute does and how can I resolve this problem?
folder is a parameter in the DAL constructor. It points to the folder where you store databases (for sqlite). Thus, I don't think that's the problem in your case; I would check the connection string again.
From the web2py docs:
The DAL can be used from any Python program simply by doing this:
from gluon import DAL, Field
db = DAL('sqlite://storage.sqlite', folder='path/to/app/databases')
i.e. import the DAL, Field, connect and specify the folder which contains the .table files (the app/databases folder).
To access the data and its attributes we still have to define all the tables we are going to access with db.define_tables(...).
If we just need access to the data but not to the web2py table attributes, we get away without re-defining the tables but simply asking web2py to read the necessary info from the metadata in the .table files:
from gluon import DAL, Field
db = DAL('sqlite://storage.sqlite', folder='path/to/app/databases',
         auto_import=True)
This allows us to access any db.table without need to re-define it.

Is it possible to serialize tasklet code (not just exec state) using SPickle without doing a RPC?

I'm trying to use Stackless Python (2.7.2) with sPickle to send a test method over Celery for execution on a different machine. I would like the test method (code) to be included with the pickle rather than being required to exist on the executing machine's Python path.
I've been referencing the following presentation:
https://ep2012.europython.eu/conference/talks/advanced-pickling-with-stackless-python-and-spickle
I'm trying to use the technique shown on the checkpointing slide (slide 11). The RPC example doesn't seem right given that we are using Celery:
Client code:
from stackless import run, schedule, tasklet
from sPickle import SPickleTools

def test_method():
    print "hello from test method"

tasks = []
test_tasklet = tasklet(test_method)()
tasks.append(test_tasklet)

pt = SPickleTools(serializeableModules=['__test_method__'])
pickled_task = pt.dumps(tasks)
Server code:
pt = sPickle.SPickleTools()
unpickledTasks = pt.loads(pickled_task)
Results in:
[2012-03-09 14:24:59,104: ERROR/MainProcess] Task
celery_tasks.test_exec_method[8f462bd6-7952-4aa1-9adc-d84ee4a51ea6] raised exception:
AttributeError("'module'
object has no attribute 'test_method'",)
Traceback (most recent call last):
File "c:\Python27\lib\site-packages\celery\execute\trace.py", line 153, in trace_task
R = retval = task(*args, **kwargs)
File "c:\Python27\celery_tasks.py", line 16, in test_exec_method
unpickledTasks = pt.loads(pickled_task)
File "c:\Python27\lib\site-packages\sPickle\_sPickle.py", line 946, in loads
return unpickler.load()
AttributeError: 'module' object has no attribute 'test_method'
Any suggestions on what I am doing incorrect or if this is even possible?
Alternative suggestions for doing dynamic module loading in a celeryd would also be good (as an alternative to using sPickle). I have experimented with doing:
py_mod = imp.load_source(module_name, 'some script path')
sys.modules.setdefault(module_name, py_mod)
but the dynamically loaded module does not seem to persist through different calls to celeryd, i.e. different remote calls.
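As a sketch of that sys.modules registration idea, here is a self-contained example using importlib (the modern replacement for imp.load_source); the module name "demo_plugin" and its file are throwaway examples created just for the demonstration:

```python
import importlib.util
import os
import sys
import tempfile

# Write a throwaway plugin file so the example is self-contained.
src_dir = tempfile.mkdtemp()
path = os.path.join(src_dir, "demo_plugin.py")
with open(path, "w") as f:
    f.write("def run():\n    return 'hello from plugin'\n")

# Load the source file and register the module under its name so that
# later `import demo_plugin` statements (e.g. while unpickling) resolve
# to the same module object instead of raising ImportError.
spec = importlib.util.spec_from_file_location("demo_plugin", path)
module = importlib.util.module_from_spec(spec)
sys.modules["demo_plugin"] = module   # register BEFORE executing
spec.loader.exec_module(module)

import demo_plugin  # found via sys.modules, no ImportError
print(demo_plugin.run())
```

Note that this registration only lasts for the lifetime of the current process; a separate celeryd worker process will not see it, which matches the non-persistence described above. Each worker would have to perform the load-and-register step itself before unpickling.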
You must define test_method within its own module. Currently sPickle detects whether test_method is defined in a module that can be imported. An alternative way is to set the __module__ attribute of the function to None.
def test_method():
    pass

test_method.__module__ = None
