When I shut down the Zope server it shows an AttributeError - python

I am using Plone 4.3.3 to build my Plone site, but when I shut down the server it shows the following error.
Traceback (most recent call last):
File "/Plone/zinstance/parts/instance/bin/interpreter", line 298, in <module>
exec(compile(__file__f.read(), __file__, "exec"))
File "/Plone/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 76, in <module>
run()
File "/Plone/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 26, in run
starter.run()
File "/Plone/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 108, in run
self.shutdown()
File "/Plone/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 113, in shutdown
db.close()
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/DB.py", line 624, in close
@self._connectionMap
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/DB.py", line 506, in _connectionMap
self.pool.map(f)
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/DB.py", line 206, in map
self.all.map(f)
File "/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/weakset.py", line 58, in map
f(elt)
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/DB.py", line 628, in _
c._release_resources()
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/Connection.py", line 1075, in _release_resources
c._storage.release()
AttributeError: 'NoneType' object has no attribute 'release'

There is an issue with the Zope2 shutdown sequence, which tries to close the database connection (and, in turn, its storage). This late-running sequence has a cosmetic side effect for users of RelStorage. It is annoying, but not fundamentally a problem, and it should not cause any data-integrity issues.
Users of FileStorage or ZEO should not see this.
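The failing call is the unguarded c._storage.release() at the bottom of the traceback: by the time the shutdown hook runs, RelStorage has apparently already dropped the storage, so _storage is None. A minimal sketch of the defensive guard (the idea only, not the verbatim patch; see the referenced commit below):
# Sketch: release a connection's storage only if it is still attached.
def release_storage_safely(connection):
    storage = getattr(connection, '_storage', None)
    if storage is not None:  # already None for RelStorage at shutdown time
        storage.release()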
References:
https://github.com/zopefoundation/Zope/commit/5032027470091957a6c0028da04c0fc0a1ed646b
https://mail.zope.org/pipermail/zodb-dev/2013-August/015119.html

Related

Getting a weird error when running python in cmd, then it continues normally

When I run python on cmd I get this weird error:
Failed calling sys.__interactivehook__
Traceback (most recent call last):
File "C:\Users\Khaled\AppData\Local\Programs\Python\Python310\lib\site.py", line 446, in register_readline
import readline
File "C:\Users\Khaled\AppData\Local\Programs\Python\Python310\lib\site-packages\readline.py", line 34, in <module>
rl = Readline()
File "C:\Users\Khaled\AppData\Local\Programs\Python\Python310\lib\site-packages\pyreadline\rlmain.py", line 422, in __init__
BaseReadline.__init__(self)
File "C:\Users\Khaled\AppData\Local\Programs\Python\Python310\lib\site-packages\pyreadline\rlmain.py", line 62, in __init__
mode.init_editing_mode(None)
File "C:\Users\Khaled\AppData\Local\Programs\Python\Python310\lib\site-packages\pyreadline\modes\emacs.py", line 633, in init_editing_mode
self._bind_key('space', self.self_insert)
File "C:\Users\Khaled\AppData\Local\Programs\Python\Python310\lib\site-packages\pyreadline\modes\basemode.py", line 162, in _bind_key
if not callable(func):
File "C:\Users\Khaled\AppData\Local\Programs\Python\Python310\lib\site-packages\pyreadline\py3k_compat.py", line 8, in callable
return isinstance(x, collections.Callable)
AttributeError: module 'collections' has no attribute 'Callable'
But then I can continue using Python normally:
>>>
I have not found any effects of this problem; Python runs normally. But I do not want the error to appear, and I would like to know what it actually does.
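The last frame is the giveaway: pyreadline's py3k_compat.py calls isinstance(x, collections.Callable), and the collections.Callable alias was removed in Python 3.10 (it now lives only in collections.abc), which is why the readline hook fails while the interpreter itself keeps working. A minimal illustration of the change:
import collections
import collections.abc

# On Python 3.10+ only the collections.abc name still exists:
print(isinstance(print, collections.abc.Callable))  # True
# isinstance(print, collections.Callable)  # AttributeError: module 'collections'
#                                          # has no attribute 'Callable'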

Multiprocessing; How to debug: _pickle.PicklingError: Could not pickle object as excessively deep recursion required

I have a simulation which I can run using Python code, and I want to create multiple instances of it using a SubprocVecEnv from stable-baselines3. This uses subprocessing to run the simulations on different cores, and it was working before I made a number of changes to my code. However, now I receive the error below and do not know how to debug it, because I don't understand which part of my code is causing it. Is there a way to find out which object/method is causing the recursion depth to be exceeded? I also do not remember writing a recursive method anywhere in my code. Researching the error message was not successful.
/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/site-packages/gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
Traceback (most recent call last):
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 639, in reducer_override
if sys.version_info[:2] < (3, 7) and _is_parametrized_type_hint(obj): # noqa # pragma: no branch
RecursionError: maximum recursion depth exceeded in comparison
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/philipp/Code/ba_pw/train.py", line 84, in <module>
venv = utils.make_venv(env_class, network, params, remote_ports, monitor_log_dir)
File "/home/philipp/Code/ba_pw/sumo_rl/utils/utils.py", line 170, in make_venv
return vec_env.SubprocVecEnv(env_fs)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/site-packages/stable_baselines3/common/vec_env/subproc_vec_env.py", line 106, in __init__
process.start()
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/multiprocessing/context.py", line 291, in _Popen
return Popen(process_obj)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/multiprocessing/popen_forkserver.py", line 35, in __init__
super().__init__(process_obj)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/multiprocessing/popen_forkserver.py", line 47, in _launch
reduction.dump(process_obj, buf)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/site-packages/stable_baselines3/common/vec_env/base_vec_env.py", line 372, in __getstate__
return cloudpickle.dumps(self.var)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/home/philipp/anaconda3/envs/sumo_rl/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 570, in dump
raise pickle.PicklingError(msg) from e
_pickle.PicklingError: Could not pickle object as excessively deep recursion required.
I finally figured out a solution using the answer to this question:
It looks like the object which I want to pickle simply has too many layers. I called:
import sys
sys.setrecursionlimit(3000)
and now it works.
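To answer the "how to debug" part: one way to see what is recursing (a sketch under assumptions, not from the original answer) is to reproduce the failure outside multiprocessing by cloudpickling one of the env factories directly; the frames that repeat in the printed traceback show which part of the object graph is nested too deeply. Here env_fs stands for the factory list the question passes to SubprocVecEnv:
import sys
import traceback

import cloudpickle

sys.setrecursionlimit(200)  # fail fast so the repeating frames stay readable
try:
    cloudpickle.dumps(env_fs[0])  # env_fs: the list built in make_venv()
except Exception:  # cloudpickle wraps the RecursionError in a PicklingError
    traceback.print_exc()  # the repeated frames point at the too-deep structure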

KeyError: 'browser' with Splinter and Behaving automated testing

I followed the instructions here: http://shon.github.io/2014/06/19/ui_testing_and_bdd.html about setting up Splinter with Behaving to run automated tests. I'm able to run a test successfully, but at the end of the test it throws an error saying:
KeyError: 'browser'
and it won't continue testing any additional feature files. I'm pretty new to Python and need some help troubleshooting this.
Exception KeyError: 'browser'
Traceback (most recent call last):
File "/usr/local/bin/behave", line 11, in <module> sys.exit(main())
File "/Library/Python/2.7/site-packages/behave/__main__.py", line 109, in main
failed = runner.run()
File "/Library/Python/2.7/site-packages/behave/runner.py", line 672, in run
return self.run_with_paths()
File "/Library/Python/2.7/site-packages/behave/runner.py", line 693, in run_with_paths
return self.run_model()
File "/Library/Python/2.7/site-packages/behave/runner.py", line 483, in run_model
failed = feature.run(self)
File "/Library/Python/2.7/site-packages/behave/model.py", line 523, in run
failed = scenario.run(runner)
File "/Library/Python/2.7/site-packages/behave/model.py", line 867, in run
runner.run_hook('before_scenario', runner.context, self)
File "/Library/Python/2.7/site-packages/behave/runner.py", line 405, in run_hook
self.hooks[name](context, *args)
File "features/environment.py", line 48, in before_scenario
context.browser = default_browser
File "/Library/Python/2.7/site-packages/behave/runner.py", line 223, in __setattr__
record = self._record[attr]
KeyError: 'browser'
I found the issue. It is related to the feature file structure. The feature file was missing:
Background:
    Given a browser
This also required changes to the environment.py file, based on the info here: https://github.com/ggozad/behaving
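For reference, a minimal feature file with that block in place might look like this (the scenario itself is illustrative; "Given a browser" and the visit step come from behaving's web steps):
Feature: Smoke test
  Background:
    Given a browser

  Scenario: Open the home page
    When I visit "http://localhost:8080/"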

Scrapy: Unhandled Error

My scraper runs fine for about an hour. After a while I start seeing these errors:
2014-01-16 21:26:06+0100 [-] Unhandled Error
Traceback (most recent call last):
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/crawler.py", line 93, in start
self.start_reactor()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/crawler.py", line 130, in start_reactor
reactor.run(installSignalHandlers=False) # blocking call
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 1192, in run
self.mainLoop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 1201, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 824, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/utils/reactor.py", line 41, in __call__
return self._func(*self._a, **self._kw)
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/engine.py", line 106, in _next_request
if not self._next_request_from_scheduler(spider):
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/engine.py", line 132, in _next_request_from_scheduler
request = slot.scheduler.next_request()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/scheduler.py", line 64, in next_request
request = self._dqpop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/scheduler.py", line 94, in _dqpop
d = self.dqs.pop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/queuelib/pqueue.py", line 43, in pop
m = q.pop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/squeue.py", line 18, in pop
s = super(SerializableQueue, self).pop()
File "/home/scraper/.fakeroot/lib/python2.7/site-packages/queuelib/queue.py", line 157, in pop
self.f.seek(-size-self.SIZE_SIZE, os.SEEK_END)
exceptions.IOError: [Errno 22] Invalid argument
What could possibly be causing this? My version is 0.20.2. Once I get this error, Scrapy stops doing anything. Even if I stop it and run it again (using a JOBDIR directory), it still gives me these errors. I have to delete the job directory and start over to get rid of them.
Try this (the whole sequence is sketched in code after the list):
Ensure that you're running the latest Scrapy version (current: 0.24)
Search inside the resumed folder and back up the file requests.seen
After backing it up, remove the Scrapy job folder
Start the crawl again, resuming with the JOBDIR= option
Stop the crawl
Replace the newly created requests.seen with the previously backed-up one
Start the crawl again
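Spelled out with illustrative paths (the JOBDIR name crawls/myjob and the backup location are assumptions, not from the original answer), the backup/restore dance looks roughly like this:
import os
import shutil

jobdir = 'crawls/myjob'  # illustrative JOBDIR; use your own

# Steps 2-3: back up the request-fingerprint file, then remove the job folder
shutil.copy(os.path.join(jobdir, 'requests.seen'), '/tmp/requests.seen.bak')
shutil.rmtree(jobdir)

# Steps 4-5: run e.g. `scrapy crawl myspider -s JOBDIR=crawls/myjob` once and
# stop it, so Scrapy recreates a fresh job folder

# Step 6: restore the backed-up fingerprints over the newly created file
shutil.copy('/tmp/requests.seen.bak', os.path.join(jobdir, 'requests.seen'))

# Step 7: start the crawl again with the same JOBDIR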

DeleteResource from Google Docs with Python

I am trying to delete a spreadsheet in Google Docs with this function:
def f_DeleteResource(xls_name):
    """Delete a resource"""
    client = Auth()
    for e1 in client.GetResources().entry:
        e2 = client.GetResource(e1)
        if xls_name == e2.title.text:
            client.DeleteResource(e2.resource_id.text, True)
I obtain different errors depending on what I pass as the first parameter of client.DeleteResource(p1, p2):
client.DeleteResource(e2.resource_id.text,True):
Traceback (most recent call last):
File "C:\xmp\D6GDocsDeleteUpload.py", line 164, in <module>
main()
File "C:\xmp\D6GDocsDeleteUpload.py", line 157, in main
f_DeleteResource(sys.argv[2])
File "C:\xmp\D6GDocsDeleteUpload.py", line 144, in f_DeleteResource
client.DeleteResource(e2.resource_id.text,True)
File "C:\Python27\lib\site-packages\gdata\docs\client.py", line 540, in delete_resource
uri = entry.GetEditLink().href
AttributeError: 'str' object has no attribute 'GetEditLink'
client.DeleteResource(e2,True):
Traceback (most recent call last):
File "C:\xmp\D6GDocsDeleteUpload.py", line 164, in <module>
main()
File "C:\xmp\D6GDocsDeleteUpload.py", line 157, in main
f_DeleteResource(sys.argv[2])
File "C:\xmp\D6GDocsDeleteUpload.py", line 144, in f_DeleteResource
client.DeleteResource(e2,True)
File "C:\Python27\lib\site-packages\gdata\docs\client.py", line 543, in delete_resource
return super(DocsClient, self).delete(uri, **kwargs)
File "C:\Python27\lib\site-packages\gdata\client.py", line 748, in delete
**kwargs)
File "C:\Python27\lib\site-packages\gdata\docs\client.py", line 66, in request
return super(DocsClient, self).request(method=method, uri=uri, **kwargs)
File "C:\Python27\lib\site-packages\gdata\client.py", line 319, in request
RequestError)
gdata.client.RequestError: Server responded with: 403, <errors xmlns='http://schemas.google.com/g/2005'><error><domain>GData</domain><code>matchHeaderRequired</code><location type='header'>If-Match|If-None-Match</location><internalReason>If-Match or If-None-Match header or entry etag attribute required</internalReason></error></errors>
Can anyone help me?
It seems to be a bug in the Google API Python library. I checked gdata-2.0.16 and noticed that the DeleteResource() function uses only the URL of the resource (gdata/docs/client.py, lines 540-543) but later checks for hasattr(entry_or_uri, 'etag') (gdata/client.py, lines 737-741), and of course a string value (the uri) doesn't have an etag attribute.
You may work around it using the force keyword argument:
import gdata.docs.data
import gdata.docs.client

client = gdata.docs.client.DocsClient()
client.ClientLogin('xxxxxx@gmail.com', 'xxxxxx', 'XxX')
for doc in client.GetAllResources():
    if doc.title.text == 'qpqpqpqpqpqp':
        client.DeleteResource(doc, force=True)
        break
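For what it's worth, force=True appears to work because the client then sends a wildcard If-Match: * header instead of trying to read the missing etag attribute, which satisfies the matchHeaderRequired condition in the 403 response above.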
If you want, you may report the error to the library maintainers (if it isn't already reported).
This issue has been fixed in this revision:
http://code.google.com/p/gdata-python-client/source/detail?r=f98fff494fb89fca12deede00c3567dd589e5f97
If you sync your client to the repository, you should be able to delete a resource without having to specify force=True.
