awswrangler: Can't start a new thread when trying to read table - python

I am trying to access a table in an AWS bucket. When I try to read it with the awswrangler.s3.read_parquet function, I get an error saying the file cannot be read because a new thread cannot be started. I am usually able to access the file after waiting 30+ minutes, but that doesn't tell me how to solve the problem. Here are more details about the command:
aws_df = wr.s3.read_parquet(path=self._filepath, **self._load_args)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_read_parquet.py", line 721, in read_parquet
read_func=_read_parquet, paths=paths, version_ids=versions, use_threads=use_threads, kwargs=args
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_read.py", line 145, in _read_dfs_from_multiple_paths
return list(df for df in executor.map(partial_read_func, paths, versions))
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_read.py", line 145, in <genexpr>
return list(df for df in executor.map(partial_read_func, paths, versions))
File "/home/ec2-user/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 586, in result_iterator
yield fs.pop().result()
File "/home/ec2-user/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/home/ec2-user/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/ec2-user/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_read_parquet.py", line 495, in _read_parquet
version_id=version_id,
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_read_parquet.py", line 440, in _read_parquet_file
source=f, read_dictionary=categories
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_read_parquet.py", line 40, in _pyarrow_parquet_file_wrapper
return pyarrow.parquet.ParquetFile(source=source, read_dictionary=read_dictionary)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyarrow/parquet.py", line 201, in __init__
read_dictionary=read_dictionary, metadata=metadata)
File "pyarrow/_parquet.pyx", line 1021, in pyarrow._parquet.ParquetReader.open
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_fs.py", line 569, in read
self._fetch(self._loc, self._loc + length)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_fs.py", line 376, in _fetch
self._cache = self._fetch_range_proxy(self._start, self._end)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/awswrangler/s3/_fs.py", line 359, in _fetch_range_proxy
itertools.repeat(self._version_id),
File "/home/ec2-user/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 575, in map
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "/home/ec2-user/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 575, in <listcomp>
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "/home/ec2-user/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 160, in submit
self._adjust_thread_count()
File "/home/ec2-user/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 181, in _adjust_thread_count
t.start()
File "/home/ec2-user/anaconda3/lib/python3.7/threading.py", line 847, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread

I was able to work around the problem by adding a sleep between my commands:
import time
time.sleep(10)
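
If the root cause is the process running out of threads, a slightly more robust variant of the same workaround is to retry the read after a pause and, as a last resort, force single-threaded reads. This is only a sketch: read_parquet_with_retry is a hypothetical helper name, while use_threads is a real read_parquet argument (it appears in the traceback above).

import time
import awswrangler as wr

def read_parquet_with_retry(path, load_args, retries=3, pause=10):
    # Retry a few times with a pause, in case the thread shortage is transient.
    for _ in range(retries):
        try:
            return wr.s3.read_parquet(path=path, **load_args)
        except RuntimeError:
            time.sleep(pause)
    # Last resort: disable awswrangler's internal thread pool so it does not
    # try to spawn additional reader threads.
    return wr.s3.read_parquet(path=path, **dict(load_args, use_threads=False))

# aws_df = read_parquet_with_retry(self._filepath, self._load_args)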

Related

Issue TypeError: argument must be a string or number

There is only one categorical column that I want to encode. It works fine in my notebook, but when the notebook is uploaded to the AIcrowd platform it raises this error. There are three categorical features in total: one is the target, one is the column of row IDs, and after excluding those from training I am left with a single feature.
df[['intersection_pos_rel_centre']]
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
df[['intersection_pos_rel_centre']]=le.fit_transform(df[['intersection_pos_rel_centre']])
df[['intersection_pos_rel_centre']]
My error is
Selecting runtime language: python
[NbConvertApp] Converting notebook predict.ipynb to notebook
[NbConvertApp] Executing notebook with kernel: python
Traceback (most recent call last):
File "/opt/conda/bin/jupyter-nbconvert", line 11, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.8/site-packages/jupyter_core/application.py", line 254, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/traitlets/config/application.py", line 845, in launch_instance
app.start()
File "/opt/conda/lib/python3.8/site-packages/nbconvert/nbconvertapp.py", line 350, in start
self.convert_notebooks()
File "/opt/conda/lib/python3.8/site-packages/nbconvert/nbconvertapp.py", line 524, in convert_notebooks
self.convert_single_notebook(notebook_filename)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/nbconvertapp.py", line 489, in convert_single_notebook
output, resources = self.export_single_notebook(notebook_filename, resources, input_buffer=input_buffer)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/nbconvertapp.py", line 418, in export_single_notebook
output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 181, in from_filename
return self.from_file(f, resources=resources, **kw)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 199, in from_file
return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/exporters/notebook.py", line 32, in from_notebook_node
nb_copy, resources = super().from_notebook_node(nb, resources, **kw)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 143, in from_notebook_node
nb_copy, resources = self._preprocess(nb_copy, resources)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 318, in _preprocess
nbc, resc = preprocessor(nbc, resc)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/preprocessors/base.py", line 47, in __call__
return self.preprocess(nb, resources)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/preprocessors/execute.py", line 79, in preprocess
self.execute()
File "/opt/conda/lib/python3.8/site-packages/nbclient/util.py", line 74, in wrapped
return just_run(coro(*args, **kwargs))
File "/opt/conda/lib/python3.8/site-packages/nbclient/util.py", line 53, in just_run
return loop.run_until_complete(coro)
File "/opt/conda/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/opt/conda/lib/python3.8/site-packages/nbclient/client.py", line 553, in async_execute
await self.async_execute_cell(
File "/opt/conda/lib/python3.8/site-packages/nbconvert/preprocessors/execute.py", line 123, in async_execute_cell
cell, resources = self.preprocess_cell(cell, self.resources, cell_index)
File "/opt/conda/lib/python3.8/site-packages/nbconvert/preprocessors/execute.py", line 146, in preprocess_cell
cell = run_sync(NotebookClient.async_execute_cell)(self, cell, index, store_history=self.store_history)
File "/opt/conda/lib/python3.8/site-packages/nbclient/util.py", line 74, in wrapped
return just_run(coro(*args, **kwargs))
File "/opt/conda/lib/python3.8/site-packages/nbclient/util.py", line 53, in just_run
return loop.run_until_complete(coro)
File "/opt/conda/lib/python3.8/site-packages/nest_asyncio.py", line 98, in run_until_complete
return f.result()
File "/opt/conda/lib/python3.8/asyncio/futures.py", line 178, in result
raise self._exception
File "/opt/conda/lib/python3.8/asyncio/tasks.py", line 280, in __step
result = coro.send(None)
File "/opt/conda/lib/python3.8/site-packages/nbclient/client.py", line 852, in async_execute_cell
self._check_raise_for_error(cell, exec_reply)
File "/opt/conda/lib/python3.8/site-packages/nbclient/client.py", line 760, in _check_raise_for_error
raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)
nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
------------------
df[['intersection_pos_rel_centre']]
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
df[['intersection_pos_rel_centre']]=le.fit_transform(df[['intersection_pos_rel_centre']])
df[['intersection_pos_rel_centre']]
------------------
TypeError: argument must be a string or number
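
No answer is attached above, but a common cause of this TypeError with LabelEncoder is a column that mixes NaN floats with strings (which can easily differ between the notebook data and the platform data); passing a one-column DataFrame instead of a 1-D Series is another frequent trip-up. A minimal sketch of both fixes, assuming the same df and column name; replacing NaN with the string 'missing' is just an illustration:

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# Encode a 1-D Series (single brackets) rather than a one-column DataFrame,
# and replace NaN first so the column is not a mix of floats and strings.
col = df['intersection_pos_rel_centre'].fillna('missing')
df['intersection_pos_rel_centre'] = le.fit_transform(col)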

IndexError in random.shuffle when class is executed concurrently

I have created a custom environment for reinforcement learning with tf-agents (not needed to answer this question). It works fine if I instantiate one thread by setting num_parallel_environments to 1, but it throws infrequent and seemingly random errors, such as an IndexError inside random.shuffle(), when I increase num_parallel_environments to 50. Here's the code:
Inside train.py:
tf_env = tf_py_environment.TFPyEnvironment(
    batched_py_environment.BatchedPyEnvironment(
        [environment.CardGameEnv()] * num_parallel_environments))
Inside my environment, this is run in threads:
self.cardStack = getFullDeck()
random.shuffle(self.cardStack)
This is a normal function, imported in every thread class:
def getFullDeck():
    deck = []
    for rank in Ranks:
        for suit in Suits:
            deck.append(Card(rank, suit))
    return deck
And here's one of the possible errors:
Traceback (most recent call last):
File "e:\Users\tmp\.vscode\extensions\ms-python.python-2019.1.0\pythonFiles\ptvsd_launcher.py", line 45, in <module>
main(ptvsdArgs)
File "e:\Users\tmp\.vscode\extensions\ms-python.python-2019.1.0\pythonFiles\lib\python\ptvsd\__main__.py", line 348, in main
run()
File "e:\Users\tmp\.vscode\extensions\ms-python.python-2019.1.0\pythonFiles\lib\python\ptvsd\__main__.py", line 253, in run_file
runpy.run_path(target, run_name='__main__')
File "C:\Python37\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Python37\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "e:\Users\tmp\Documents\Programming\Neural Nets\Poker_AI\train_v2.py", line 320, in <module>
app.run(main)
File "C:\Python37\lib\site-packages\absl\app.py", line 300, in run
_run_main(main, args)
File "C:\Python37\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "e:\Users\tmp\Documents\Programming\Neural Nets\Poker_AI\train_v2.py", line 315, in main
num_eval_episodes=FLAGS.num_eval_episodes)
File "E:\Users\tmp\AppData\Roaming\Python\Python37\site-packages\gin\config.py", line 1032, in wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "E:\Users\tmp\AppData\Roaming\Python\Python37\site-packages\gin\utils.py", line 49, in augment_exception_message_and_reraise
six.raise_from(proxy.with_traceback(exception.__traceback__), None)
File "<string>", line 3, in raise_from
File "E:\Users\tmp\AppData\Roaming\Python\Python37\site-packages\gin\config.py", line 1009, in wrapper
return fn(*new_args, **new_kwargs)
File "e:\Users\tmp\Documents\Programming\Neural Nets\Poker_AI\train_v2.py", line 251, in train_eval
collect_driver.run()
File "C:\Python37\lib\site-packages\tf_agents\drivers\dynamic_episode_driver.py", line 149, in run
maximum_iterations=maximum_iterations)
File "C:\Python37\lib\site-packages\tf_agents\utils\common.py", line 111, in with_check_resource_vars
return fn(*fn_args, **fn_kwargs)
File "C:\Python37\lib\site-packages\tf_agents\drivers\dynamic_episode_driver.py", line 180, in _run
name='driver_loop'
File "C:\Python37\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2457, in while_loop_v2
return_same_structure=True)
File "C:\Python37\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2689, in while_loop
loop_vars = body(*loop_vars)
File "C:\Python37\lib\site-packages\tf_agents\drivers\dynamic_episode_driver.py", line 103, in loop_body
next_time_step = self.env.step(action_step.action)
File "C:\Python37\lib\site-packages\tf_agents\environments\tf_environment.py", line 232, in step
return self._step(action)
File "C:\Python37\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 232, in graph_wrapper
return func(*args, **kwargs)
File "C:\Python37\lib\site-packages\tf_agents\environments\tf_py_environment.py", line 218, in _step
_step_py, flat_actions, self._time_step_dtypes, name='step_py_func')
File "C:\Python37\lib\site-packages\tensorflow\python\ops\script_ops.py", line 488, in numpy_function
return py_func_common(func, inp, Tout, stateful=True, name=name)
File "C:\Python37\lib\site-packages\tensorflow\python\ops\script_ops.py", line 452, in py_func_common
result = func(*[x.numpy() for x in inp])
File "C:\Python37\lib\site-packages\tf_agents\environments\tf_py_environment.py", line 203, in _step_py
self._time_step = self._env.step(packed)
File "C:\Python37\lib\site-packages\tf_agents\environments\py_environment.py", line 174, in step
self._current_time_step = self._step(action)
File "C:\Python37\lib\site-packages\tf_agents\environments\batched_py_environment.py", line 140, in _step
zip(self._envs, unstacked_actions))
File "C:\Python37\lib\multiprocessing\pool.py", line 268, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "C:\Python37\lib\multiprocessing\pool.py", line 657, in get
raise self._value
File "C:\Python37\lib\multiprocessing\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "C:\Python37\lib\multiprocessing\pool.py", line 44, in mapstar
return list(map(*args))
File "C:\Python37\lib\site-packages\tf_agents\environments\batched_py_environment.py", line 139, in <lambda>
lambda env_action: env_action[0].step(env_action[1]),
File "C:\Python37\lib\site-packages\tf_agents\environments\py_environment.py", line 174, in step
self._current_time_step = self._step(action)
File "e:\Users\tmp\Documents\Programming\Neural Nets\Poker_AI\environment.py", line 116, in _step
canRoundContinue = self._table.runUntilChoice(action)
File "e:\Users\tmp\Documents\Programming\Neural Nets\Poker_AI\table.py", line 326, in runUntilChoice
random.shuffle(self.cardStack)
File "C:\Python37\lib\random.py", line 278, in shuffle
x[i], x[j] = x[j], x[i]
IndexError: list index out of range
In call to configurable 'train_eval' (<function train_eval at 0x000002722713A158>)
I suspect this error occurs because the threads are changing the array simultaneously, but I do not see why this would be the case: everything happens inside a class instance, and the list returned by getFullDeck() is recreated every time the function is called, so there should be no way multiple threads hold a reference to the same list, right?
tf_env = tf_py_environment.TFPyEnvironment(
    batched_py_environment.BatchedPyEnvironment(
        [environment.CardGameEnv()] * num_parallel_environments))
You are reusing the same environment instance for each of the parallel workers rather than creating a new one for each: [environment.CardGameEnv()] * num_parallel_environments builds a list containing num_parallel_environments references to a single CardGameEnv object, so every thread ends up shuffling the same cardStack. You might want to try something like
tf_env = tf_py_environment.TFPyEnvironment(
    batched_py_environment.BatchedPyEnvironment(
        [environment.CardGameEnv() for _ in range(num_parallel_environments)]))
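
A plain-Python illustration of why the original list construction shares state between workers (no tf-agents needed; FakeEnv and cardStack are just stand-ins for the real environment):

class FakeEnv:
    def __init__(self):
        self.cardStack = list(range(52))  # stand-in for a fresh deck

shared = [FakeEnv()] * 3                     # three references to ONE object
print(shared[0] is shared[1])                # True: every worker shuffles the same list
independent = [FakeEnv() for _ in range(3)]  # three separate objects
print(independent[0] is independent[1])      # False: each worker gets its own deck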

sqlalchemy.exc.ResourceClosedError: This transaction is closed

I am facing the following error in a Pyramid application while using SQLAlchemy with the Zope transaction manager.
This is how I am creating a scoped session:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import MetaData
from sqlalchemy.orm import (
    scoped_session,
    sessionmaker,
)
from zope.sqlalchemy import ZopeTransactionExtension
metadata = MetaData(naming_convention=NAMING_CONVENTION)
Base = declarative_base(metadata=metadata)
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension(keep_session=True)))
I am syncing my session creation and removal with my requests in the following way:
@view_config(route_name='social_get_individual_assets', renderer='json', effective_principals=2, permission='edit')
def social_get_individual_assets(requestJson):
    session = DBSession()
    try:
        body = requestJson.request.json_body
        search_term = body['text']
        horizontal = body['horizontal']
        vertical = body['vertical']
        log.info("social get individual assets request: %s", str(search_term).encode(encoding='utf_8'))
        json_data = get_individual_assets(session, search_term, horizontal, vertical)
        log.info("social get assets response: %s", str(search_term).encode(encoding='utf_8'))
        transaction.commit()
        DBSession.remove()
        return json_data
    except Exception as e:
        session.rollback()
        log.exception(e)
        raise e
For some reason, I keep running into this error all the time:
2017-12-06 13:32:07,965 ERROR [invideoapp.views.default:465] An operation previously failed, with traceback:
File "/usr/lib64/python3.5/threading.py", line 882, in _bootstrap
self._bootstrap_inner()
File "/usr/lib64/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.5/site-packages/waitress/task.py", line 78, in handler_thread
task.service()
File "/usr/local/lib/python3.5/site-packages/waitress/channel.py", line 338, in service
task.service()
File "/usr/local/lib/python3.5/site-packages/waitress/task.py", line 169, in service
self.execute()
File "/usr/local/lib/python3.5/site-packages/waitress/task.py", line 399, in execute
app_iter = self.channel.server.application(env, start_response)
File "/usr/local/lib/python3.5/site-packages/pyramid/router.py", line 270, in __call__
response = self.execution_policy(environ, self)
File "/usr/local/lib/python3.5/site-packages/pyramid_retry/__init__.py", line 114, in retry_policy
response = router.invoke_request(request)
File "/usr/local/lib/python3.5/site-packages/pyramid/router.py", line 249, in invoke_request
response = handle_request(request)
File "/usr/local/lib/python3.5/site-packages/pyramid_tm/__init__.py", line 136, in tm_tween
response = handler(request)
File "/usr/local/lib/python3.5/site-packages/pyramid/tweens.py", line 39, in excview_tween
response = handler(request)
File "/usr/local/lib/python3.5/site-packages/pyramid/router.py", line 156, in handle_request
view_name
File "/usr/local/lib/python3.5/site-packages/pyramid/view.py", line 642, in _call_view
response = view_callable(context, request)
File "/usr/local/lib/python3.5/site-packages/pyramid/viewderivers.py", line 439, in rendered_view
result = view(context, request)
File "/usr/local/lib/python3.5/site-packages/pyramid/viewderivers.py", line 148, in _requestonly_view
response = view(request)
File "/home/ttv/invideoapp/invideoapp/views/default.py", line 148, in upload_image
transaction.commit()
File "/usr/local/lib/python3.5/site-packages/transaction/_manager.py", line 131, in commit
return self.get().commit()
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 308, in commit
t, v, tb = self._saveAndGetCommitishError()
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 301, in commit
self._commitResources()
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 446, in _commitResources
reraise(t, v, tb)
File "/usr/local/lib/python3.5/site-packages/transaction/_compat.py", line 54, in reraise
raise value
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 423, in _commitResources
rm.tpc_vote(self)
File "/usr/local/lib/python3.5/site-packages/zope/sqlalchemy/datamanager.py", line 109, in tpc_vote
self.tx.commit()
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/session.py", line 459, in commit
self._assert_active(prepared_ok=True)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/session.py", line 285, in _assert_active
raise sa_exc.ResourceClosedError(closed_msg)
sqlalchemy.exc.ResourceClosedError: This transaction is closed
Traceback (most recent call last):
File "/home/ttv/invideoapp/invideoapp/views/default.py", line 451, in update_master_json
user = getUser(session,user_id)
File "/home/ttv/invideoapp/invideoapp/invideomodules/auth_processing.py", line 12, in getUser
raise e
File "/home/ttv/invideoapp/invideoapp/invideomodules/auth_processing.py", line 8, in getUser
query = session.query(User).filter(User.user_id == userid).first()
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/query.py", line 2755, in first
ret = list(self[0:1])
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/query.py", line 2547, in __getitem__
return list(res)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/query.py", line 2855, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/query.py", line 2876, in _execute_and_instances
close_with_result=True)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/query.py", line 2885, in _get_bind_args
**kw
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/query.py", line 2867, in _connection_from_session
conn = self.session.connection(**kw)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/session.py", line 966, in connection
execution_options=execution_options)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/session.py", line 971, in _connection_for_bind
engine, execution_options)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/session.py", line 417, in _connection_for_bind
self.session.dispatch.after_begin(self.session, self, conn)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/event/attr.py", line 256, in __call__
fn(*args, **kw)
File "/usr/local/lib/python3.5/site-packages/zope/sqlalchemy/datamanager.py", line 237, in after_begin
join_transaction(session, self.initial_state, self.transaction_manager, self.keep_session)
File "/usr/local/lib/python3.5/site-packages/zope/sqlalchemy/datamanager.py", line 211, in join_transaction
DataManager(session, initial_state, transaction_manager, keep_session=keep_session)
File "/usr/local/lib/python3.5/site-packages/zope/sqlalchemy/datamanager.py", line 73, in __init__
transaction_manager.get().join(self)
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 179, in join
self._prior_operation_failed() # doesn't return
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 173, in _prior_operation_failed
self._failure_traceback.getvalue())
transaction.interfaces.TransactionFailedError: An operation previously failed, with traceback:
File "/usr/lib64/python3.5/threading.py", line 882, in _bootstrap
self._bootstrap_inner()
File "/usr/lib64/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.5/site-packages/waitress/task.py", line 78, in handler_thread
task.service()
File "/usr/local/lib/python3.5/site-packages/waitress/channel.py", line 338, in service
task.service()
File "/usr/local/lib/python3.5/site-packages/waitress/task.py", line 169, in service
self.execute()
File "/usr/local/lib/python3.5/site-packages/waitress/task.py", line 399, in execute
app_iter = self.channel.server.application(env, start_response)
File "/usr/local/lib/python3.5/site-packages/pyramid/router.py", line 270, in __call__
response = self.execution_policy(environ, self)
File "/usr/local/lib/python3.5/site-packages/pyramid_retry/__init__.py", line 114, in retry_policy
response = router.invoke_request(request)
File "/usr/local/lib/python3.5/site-packages/pyramid/router.py", line 249, in invoke_request
response = handle_request(request)
File "/usr/local/lib/python3.5/site-packages/pyramid_tm/__init__.py", line 136, in tm_tween
response = handler(request)
File "/usr/local/lib/python3.5/site-packages/pyramid/tweens.py", line 39, in excview_tween
response = handler(request)
File "/usr/local/lib/python3.5/site-packages/pyramid/router.py", line 156, in handle_request
view_name
File "/usr/local/lib/python3.5/site-packages/pyramid/view.py", line 642, in _call_view
response = view_callable(context, request)
File "/usr/local/lib/python3.5/site-packages/pyramid/viewderivers.py", line 439, in rendered_view
result = view(context, request)
File "/usr/local/lib/python3.5/site-packages/pyramid/viewderivers.py", line 148, in _requestonly_view
response = view(request)
File "/home/ttv/invideoapp/invideoapp/views/default.py", line 148, in upload_image
transaction.commit()
File "/usr/local/lib/python3.5/site-packages/transaction/_manager.py", line 131, in commit
return self.get().commit()
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 308, in commit
t, v, tb = self._saveAndGetCommitishError()
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 301, in commit
self._commitResources()
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 446, in _commitResources
reraise(t, v, tb)
File "/usr/local/lib/python3.5/site-packages/transaction/_compat.py", line 54, in reraise
raise value
File "/usr/local/lib/python3.5/site-packages/transaction/_transaction.py", line 423, in _commitResources
rm.tpc_vote(self)
File "/usr/local/lib/python3.5/site-packages/zope/sqlalchemy/datamanager.py", line 109, in tpc_vote
self.tx.commit()
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/session.py", line 459, in commit
self._assert_active(prepared_ok=True)
File "/usr/local/lib64/python3.5/site-packages/sqlalchemy/orm/session.py", line 285, in _assert_active
raise sa_exc.ResourceClosedError(closed_msg)
sqlalchemy.exc.ResourceClosedError: This transaction is closed
Once I get this error, none of my database operations work; the only way out is to restart the entire application. Any idea why this is happening?
I had a similar problem, and I found using DBSession.begin_nested() solved the issue. So something like:
@view_config(route_name='social_get_individual_assets', renderer='json', effective_principals=2, permission='edit')
def social_get_individual_assets(requestJson):
    session = DBSession()
    try:
        DBSession.begin_nested()
        body = requestJson.request.json_body
        search_term = body['text']
        horizontal = body['horizontal']
        vertical = body['vertical']
        log.info("social get individual assets request: %s", str(search_term).encode(encoding='utf_8'))
        json_data = get_individual_assets(session, search_term, horizontal, vertical)
        log.info("social get assets response: %s", str(search_term).encode(encoding='utf_8'))
        return json_data
    except Exception as e:
        DBSession.rollback()
        log.exception(e)
        raise e
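
A further point worth checking: the traceback shows the pyramid_tm tween wrapping the request while the view also calls transaction.commit() itself, and once one of those commits fails the transaction stays in the "previously failed" state for later requests. Below is a minimal sketch of letting pyramid_tm own the commit instead, assuming pyramid_tm is configured and reusing the DBSession, get_individual_assets and log objects defined above:

from pyramid.view import view_config

# pyramid_tm commits the transaction when the view returns normally and aborts
# it when the view raises, so the view no longer needs transaction.commit(),
# DBSession.remove(), or a manual rollback.
@view_config(route_name='social_get_individual_assets', renderer='json', effective_principals=2, permission='edit')
def social_get_individual_assets(requestJson):
    session = DBSession()
    body = requestJson.request.json_body
    return get_individual_assets(session, body['text'], body['horizontal'], body['vertical'])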

Async query throws AssertionError first time (AppEngine, NDB)

When I use fetch_async() on a query it crashes with AssertionError the first time it is run. If I run it again immediately, it's fine.
E.g. with this model:
class User(ndb.Model):
    user = ndb.UserProperty()
    name = ndb.StringProperty()
    penname = ndb.StringProperty()
    first_login = ndb.DateTimeProperty(auto_now_add=True)
    contact = ndb.BooleanProperty()
The following works straight away, returning an empty list:
users = User.query(User.penname == "asdf").fetch()
But this crashes:
future = User.query(User.penname == "asdf").fetch_async()
users = future.get_result()
With:
Traceback (most recent call last):
File "/opt/google-appengine-python/google/appengine/ext/admin/__init__.py", line 320, in post
exec(compiled_code, globals())
File "<string>", line 6, in <module>
File "/opt/google-appengine-python/google/appengine/ext/ndb/tasklets.py", line 320, in get_result
self.check_success()
File "/opt/google-appengine-python/google/appengine/ext/ndb/tasklets.py", line 357, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/opt/google-appengine-python/google/appengine/ext/ndb/query.py", line 887, in _run_to_list
batch = yield rpc
File "/opt/google-appengine-python/google/appengine/ext/ndb/tasklets.py", line 435, in _on_rpc_completion
result = rpc.get_result()
File "/opt/google-appengine-python/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/opt/google-appengine-python/google/appengine/datastore/datastore_query.py", line 2386, in __query_result_hook
self._batch_shared.conn.check_rpc_success(rpc)
File "/opt/google-appengine-python/google/appengine/datastore/datastore_rpc.py", line 1191, in check_rpc_success
rpc.check_success()
File "/opt/google-appengine-python/google/appengine/api/apiproxy_stub_map.py", line 558, in check_success
self.__rpc.CheckSuccess()
File "/opt/google-appengine-python/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "/opt/google-appengine-python/google/appengine/api/datastore_file_stub.py", line 568, in MakeSyncCall
response)
File "/opt/google-appengine-python/google/appengine/api/apiproxy_stub.py", line 87, in MakeSyncCall
method(request, response)
File "/opt/google-appengine-python/google/appengine/datastore/datastore_stub_util.py", line 2367, in UpdateIndexesWrapper
self._UpdateIndexes()
File "/opt/google-appengine-python/google/appengine/datastore/datastore_stub_util.py", line 2656, in _UpdateIndexes
self._index_yaml_updater.UpdateIndexYaml()
File "/opt/google-appengine-python/google/appengine/datastore/datastore_stub_index.py", line 244, in UpdateIndexYaml
all_indexes, manual_indexes)
File "/opt/google-appengine-python/google/appengine/datastore/datastore_stub_index.py", line 85, in GenerateIndexFromHistory
required, kind, ancestor, props, num_eq_filters = datastore_index.CompositeIndexForQuery(query)
File "/opt/google-appengine-python/google/appengine/datastore/datastore_index.py", line 424, in CompositeIndexForQuery
assert filter.property(0).name() == ineq_property
AssertionError
But if I run it again immediately, for example by doing:
future = User.query(User.penname == "asdf").fetch_async()
try:
    users = future.get_result()
except:
    users = future.get_result()
It works.
This has been cropping up all over the place and I'm struggling to pin down the root cause.
The solution was to upgrade to the 1.7.1 SDK.

Python-Reportlab error: ValueError: format not resolved

When I use python-reportlab to create a PDF document, it sometimes throws the exception ValueError: format not resolved talk.google.com. I wonder why this happens and how to solve it. The full error stack is below:
File "/usr/lib64/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/site-packages/tweets2pdf/tweets2pdf.py", line 42,
in generate_thread
tpdoc.dump()
File "/usr/lib/python2.7/site-packages/tweets2pdf/pdfgen.py", line 609, in
dump
self.pdfdoc.build(self.elements, onFirstPage = self.on_first_page,
onLaterPages = self.on_later_pages)
File "/usr/lib64/python2.7/site-
packages/reportlab/platypus/doctemplate.py", line 1117, in build
BaseDocTemplate.build(self,flowables, canvasmaker=canvasmaker)
File "/usr/lib64/python2.7/site-
packages/reportlab/platypus/doctemplate.py", line 906, in build
self._endBuild()
File "/usr/lib64/python2.7/site-
packages/reportlab/platypus/doctemplate.py", line 848, in _endBuild
if getattr(self,'_doSave',1): self.canv.save()
File "/usr/lib64/python2.7/site-packages/reportlab/pdfgen/canvas.py", line
1123, in save
self._doc.SaveToFile(self._filename, self)
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 235, in SaveToFile
f.write(self.GetPDFData(canvas))
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 257, in GetPDFData
return self.format()
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 417, in format
IOf = IO.format(self)
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 869, in format
fcontent = format(self.content, document, toplevel=1) # yes this is at
top level
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 102, in format
f = element.format(document)
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 1635, in format
return D.format(document)
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 667, in format
L = [(format(PDFName(k),document)+" "+format(dict[k],document)) for k in
keys]
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 102, in format
f = element.format(document)
File "/usr/lib64/python2.7/site-packages/reportlab/pdfbase/pdfdoc.py",
line 1764, in format
if f is None: raise ValueError, "format not resolved %s" % self.name
ValueError: format not resolved talk.google.com
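
No answer is attached above. In reportlab, this ValueError is typically raised at save time when a named destination is referenced (for example by an internal link or outline entry) but never actually defined with bookmarkPage, and the unresolved name is reported in the message. The sketch below is only an assumed reproduction of that situation, using "talk.google.com" as the destination name taken from the error:

from reportlab.pdfgen import canvas

c = canvas.Canvas("demo.pdf")
c.drawString(72, 720, "jump to talk.google.com")
# Internal link pointing at a named destination that is never created:
c.linkAbsolute("jump", "talk.google.com", (72, 710, 250, 730))
# Without a matching c.bookmarkPage("talk.google.com") somewhere in the
# document, c.save() fails with: ValueError: format not resolved talk.google.com
c.save()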
