Python: 'int' object is not iterable in Pool.map

For some reason,
result = pool.map(_delete_load_log, list(logs_to_delete))
is now giving me an 'int' object is not iterable error.
As the output pasted below shows, logs_to_delete is clearly a list of ints (I added list() to see if it changed anything; it didn't). This worked earlier, and I can't track down what changed to break it. Any ideas?
Mapping function:
def _delete_load_log(load_log_id):
    logging.debug('** STARTING API CALL FOR: ' + get_function_name())
    input_args = "/15/" + str(load_log_id)
    logging.debug('url: ' + podium_url + '\nusername: ' + podium_username)
    podium = podiumapi.Podium('https://xx/podium', 'podium', 'podium')
    #podium = podiumapi.Podium(podium_url, podium_username, podium_password)
    data = None
    response_code = 0
    try:
        api_url = PodiumApiUrl(podium_url, input_args)
        (response_code, data) = podium._podium_curl_setopt_put(api_url)
        if not data[0]['hasErrors']:
            return data[0]['id']
        elif data[0]['hasErrors']:
            raise Exception("Errors detected on delete")
        else:
            raise Exception('Unmanaged exception while retrieving entity load status.')
    except Exception as err:
        raise Exception(str(err))
File "c:\Repos\Tools\podium-dataload\scripts\podiumlogdelete.py", line 69, in _delete_source_load_logs_gte_epoch
deleted_load_ids = _delete_logs_in_parallel(load_logs_to_delete)
File "c:\Repos\Tools\podium-dataload\scripts\podiumlogdelete.py", line 85, in _delete_logs_in_parallel
result = pool.map(_delete_load_log,logs_to_delete)
File "C:\Python27\lib\multiprocessing\pool.py", line 253, in map
return self.map_async(func, iterable, chunksize).get()
File "C:\Python27\lib\multiprocessing\pool.py", line 572, in get
raise self._value
Exception: 'int' object is not iterable
Output of the list:
[154840, 154841, 154842, 154843, 154844, 154845, 154846, 154847, 154848, 154849, 154850, 154851, 154852, 154853, 154854, 154855, 154856, 154857, 154858, 154859, 154860, 154861, 154862, 154863, 154864, 154865, 154866, 154867, 154868, 154869, 154870, 154871, 154872, 154873, 154874, 154875, 154876, 154877, 154878, 154879, 154880, 154881, 154882, 154883, 154884, 154885, 154886, 154887, 154888, 154889, 154890, 154891, 154892, 154893, 154894, 154895, 154896, 154897, 154898, 154899, 154900, 154901, 154902, 154903, 154904, 154905, 154906, 154907, 154908, 154909, 154910, 154911, 154912, 154913, 154914, 154915, 154916, 154917, 154918, 154919, 154920, 154921, 154922, 154923, 154924, 154925, 154926, 154927, 154928, 154929, 154930, 154931, 154932, 154933, 154934, 154935, 154936, 154937, 154938, 154939]

Your problem is clear from the traceback. It is not caused by the iterable you pass to Pool.map(); if it were, the exception would be raised from this line inside the library:
iterable = list(iterable)
Here, instead, the exception is raised from
  File "C:\Python27\lib\multiprocessing\pool.py", line 253, in map
    return self.map_async(func, iterable, chunksize).get()
  File "C:\Python27\lib\multiprocessing\pool.py", line 572, in get
    raise self._value
which is where Pool.map() re-raises an exception that one of your worker processes raised inside _delete_load_log(). See https://github.com/python/cpython/blob/2.7/Lib/multiprocessing/pool.py
In other words, Exception: 'int' object is not iterable does not come from the standard library at all; it comes from your own function, _delete_load_log().
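One likely culprit (an assumption from reading the code, not something the truncated traceback proves) is the tuple unpacking (response_code, data) = podium._podium_curl_setopt_put(api_url): if that call returns a bare integer status code instead of a two-element tuple when the request fails, unpacking the int raises exactly this TypeError, and the blanket except then re-wraps it as a plain Exception. A minimal sketch of that failure mode, with a hypothetical stand-in for the Podium call:

# Minimal sketch (hypothetical stand-in, not the real podiumapi code).
# Under Python 2.7 (as in the traceback), unpacking an int raises
# "TypeError: 'int' object is not iterable"; Python 3 words it slightly
# differently. Pool.map re-raises the worker's exception in the parent.
from multiprocessing import Pool

def _podium_curl_setopt_put(api_url):
    # assumption: the real call might return a bare status code on failure
    return 500

def _delete_load_log(load_log_id):
    try:
        (response_code, data) = _podium_curl_setopt_put("/15/" + str(load_log_id))
        return data[0]['id']
    except Exception as err:
        # same blanket re-wrap as in the question; it hides the real traceback
        raise Exception(str(err))

if __name__ == '__main__':
    pool = Pool(2)
    print(pool.map(_delete_load_log, [154840, 154841]))
    # -> Exception: 'int' object is not iterable

Logging traceback.format_exc() inside the worker's except block, instead of re-raising str(err), would show exactly which line of _delete_load_log() is failing.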

In all honesty, also take a close look at this part of your function:
    #podium = podiumapi.Podium(podium_url, podium_username, podium_password)
    data = None
    response_code = 0
    try:
response_code starts out as 0 and data as None; if the call inside the try block fails before they are reassigned, nothing in the traceback marks the input as invalid, and you end up having to patch around those placeholder values afterwards.

Related

Error when create new payment method in Odoo 15

I want to create a new payment method, but it gives me this error in Odoo v15.
  File "/cityvape/cityvape-server/addons/account/models/account_payment_method.py", line 28, in create
    if information.get('mode') == 'multi':

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/cityvape/cityvape-server/odoo/http.py", line 643, in _handle_exception
    return super(JsonRequest, self)._handle_exception(exception)
  File "/cityvape/cityvape-server/odoo/http.py", line 301, in _handle_exception
    raise exception.with_traceback(None) from new_cause
AttributeError: 'NoneType' object has no attribute 'get'
This is the code:
@api.model_create_multi
def create(self, vals_list):
    payment_methods = super().create(vals_list)
    methods_info = self._get_payment_method_information()
    for method in payment_methods:
        information = methods_info.get(method.code)
        if information.get('mode') == 'multi':
            method_domain = method._get_payment_method_domain()
            journals = self.env['account.journal'].search(method_domain)
            self.env['account.payment.method.line'].create([{
                'name': method.name,
                'payment_method_id': method.id,
                'journal_id': journal.id,
            } for journal in journals])
    return payment_methods
I installed a third-party module, but it gave me the same error.
The line information = methods_info.get(method.code) is the source of the error: it returns None because method.code is not a key in methods_info, and the next line then calls .get('mode') on that None value.
Use logger.info (or a plain print) to trace the values of method.code and methods_info and check whether the correct value is being passed.
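A defensive variant (a sketch, not the stock Odoo implementation, assuming the usual from odoo import api and logging setup) is to log and skip any method whose code is missing from methods_info, so a third-party method that never registered itself in _get_payment_method_information() no longer crashes create():

# Sketch only: same structure as the create() above, with a guard added
# before information.get('mode') is called.
import logging
_logger = logging.getLogger(__name__)

@api.model_create_multi
def create(self, vals_list):
    payment_methods = super().create(vals_list)
    methods_info = self._get_payment_method_information()
    for method in payment_methods:
        information = methods_info.get(method.code)
        if information is None:
            # this method's code is not registered; log it instead of crashing
            _logger.warning("No payment method information for code %r", method.code)
            continue
        if information.get('mode') == 'multi':
            method_domain = method._get_payment_method_domain()
            journals = self.env['account.journal'].search(method_domain)
            self.env['account.payment.method.line'].create([{
                'name': method.name,
                'payment_method_id': method.id,
                'journal_id': journal.id,
            } for journal in journals])
    return payment_methods

The underlying fix is usually for the custom module to override _get_payment_method_information() and add an entry for its own method code.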

object of type set is not JSON serializable

I'm trying to pass the list below in an AJAX response, but I'm getting the error below.
remove_duplicates = [{**l[0], 'record_id': set([x['record_id'] for x in l])} if len(l:=list(v)) > 1 else l[0] for _, v in groupby(sorted(list_of_dict, key=func), key=func)]
I'm using a set to remove the duplicate record IDs:
list_of_dicts = [{'f_note':'text','record_id':{4691}},{'f_note':'sample','record_id':{4692}}]
return jsonify(list_of_dicts )
object of type set is not JSON serializable
Please suggest a better option.
When you create your list of dicts, the inner value {4691} is actually a set, not a dict.
https://realpython.com/python-sets/
Try this instead:
list_of_dicts = [{'f_note':'text','record_id': 4691},{'f_note':'sample','record_id': 4692}]
return jsonify(list_of_dicts)
Here is the error you encountered, in isolation:
import json
test = {5565}
json.dumps(test)
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Users\p\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "C:\Users\p\AppData\Local\Programs\Python\Python39\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Users\p\AppData\Local\Programs\Python\Python39\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "C:\Users\p\AppData\Local\Programs\Python\Python39\lib\json\encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type set is not JSON serializable
To keep the set functionality (de-duplication) you can convert it back to a list:
remove_duplicates = [{**l[0], 'record_id': list(set([x['record_id'] for x in l]))} if len(l:=list(v)) > 1 else l[0] for _, v in groupby(sorted(list_of_dict, key=func), key=func)]
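If you'd rather keep sets in the data structure, another option (a sketch using the standard json module; Flask's jsonify uses its own JSON provider, so there you would either convert to lists first as above or customize the provider) is to supply a default hook that turns sets into lists at serialization time:

import json

def encode_sets(obj):
    # called only for objects json doesn't know how to serialize
    if isinstance(obj, set):
        return sorted(obj)  # sorted for deterministic output
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

list_of_dicts = [{'f_note': 'text', 'record_id': {4691, 4692}},
                 {'f_note': 'sample', 'record_id': {4692}}]

print(json.dumps(list_of_dicts, default=encode_sets))
# [{"f_note": "text", "record_id": [4691, 4692]}, {"f_note": "sample", "record_id": [4692]}]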

Python concurrent.futures - TypeError: zip argument #1 must support iteration

I want to process MongoDB documents in batches of 1000 using multiprocessing. However, the code snippet below raises TypeError: zip argument #1 must support iteration.
Code:
import os
import traceback
import concurrent.futures

def documents_processing(skip):
    conn = get_connection()
    db = conn["db_name"]
    print("Process::{} -- db.Transactions.find(no_cursor_timeout=True).skip({}).limit(10000)".format(os.getpid(), skip))
    documents = db.Transactions.find(no_cursor_timeout=True).skip(skip).limit(10000)
    # Do some processing in mongodb

max_workers = 20

def skip_list():
    for i in range(0, 100000, 10000):
        yield [j for j in range(i, i + 10000, 1000)]

def main_f():
    try:
        with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
            executor.map(documents_processing, skip_list)
    except Exception:
        print("exception:", traceback.format_exc())

main_f()
Error traceback:
(rpc_venv) [user#localhost ver2_mt]$ python main_mongo_v3.py
exception: Traceback (most recent call last):
File "main_mongo_v3.py", line 113, in main_f
executor.map(documents_processing, skip_list)
File "/usr/lib64/python3.6/concurrent/futures/process.py", line 496, in map
timeout=timeout)
File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 575, in map
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 575, in <listcomp>
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "/usr/lib64/python3.6/concurrent/futures/process.py", line 137, in _get_chunks
it = zip(*iterables)
TypeError: zip argument #1 must support iteration
How to fix this error? Thanks.
Call skip_list() so that the generator it returns is passed to map().
Currently you're passing the function object itself as the second argument, not an iterable:
executor.map(documents_processing, skip_list())
Since you're retrieving 10k documents in each process starting at n, you can declare skip_list as:
def skip_list():
    for i in range(0, 100000, 10000):
        yield i
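Putting the two points together, a minimal runnable sketch (with the MongoDB work stubbed out) looks like this:

import concurrent.futures

def documents_processing(skip):
    # stand-in for the MongoDB batch processing in the question
    print("processing 10,000 documents starting at offset", skip)

def skip_list():
    for i in range(0, 100000, 10000):
        yield i

def main_f():
    with concurrent.futures.ProcessPoolExecutor(max_workers=20) as executor:
        # note the parentheses: skip_list() is called, so map() receives an
        # iterable of offsets (0, 10000, 20000, ...) rather than a function
        results = executor.map(documents_processing, skip_list())
        list(results)  # drain the iterator so any worker exception surfaces here

if __name__ == '__main__':
    main_f()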

Having a difficult time resolving "TypeError: 'list' object is not callable" issue

Error:
Traceback (most recent call last):
  File "/root/PycharmProjects/Capstone2/main", line 207, in <module>
    for paramIndex in range(0, 4):
TypeError: 'list' object is not callable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/PycharmProjects/Capstone2/main", line 249, in <module>
    print('stream ending')
  File "/usr/lib/python3/dist-packages/picamera/camera.py", line 758, in __exit__
    self.close()
  File "/usr/lib/python3/dist-packages/picamera/camera.py", line 737, in close
    self.stop_recording(splitter_port=port)
  File "/usr/lib/python3/dist-packages/picamera/camera.py", line 1198, in stop_recording
    encoder.close()
  File "/usr/lib/python3/dist-packages/picamera/encoders.py", line 431, in close
    self.stop()
  File "/usr/lib/python3/dist-packages/picamera/encoders.py", line 815, in stop
    super(PiVideoEncoder, self).stop()
  File "/usr/lib/python3/dist-packages/picamera/encoders.py", line 419, in stop
    self._close_output()
  File "/usr/lib/python3/dist-packages/picamera/encoders.py", line 349, in _close_output
    mo.close_stream(output, opened)
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 371, in close_stream
    stream.flush()
ValueError: flush of closed file
Relevant Code:
angle = []
distance = []
speed = []
current = []
timestamp = []
parameterList = []
parameterList.extend((angle, distance, speed, current, timestamp))

for paramIndex in range(0, 4):  # LINE 207
    # Select Range
    range = getRange(index, paramIndex + 5)
    cell_list = sheet.range(range[0], range[1], range[2], range[3])
    cellIndex = 0
    for cell in cell_list:
        try:
            cell.value = parameterList[paramIndex][cellIndex]
        except:
            print("PI: " + str(paramIndex))
            print("CI: " + str(cellIndex))
            print("PL LEN: " + str(len(parameterList)))
            print("P LEN: " + str(len(parameterList[paramIndex])))
My Thoughts:
The error makes me think that paramIndex is a list and not an integer, but the code executes fine for the first four iterations. This makes me think there is something wrong with my last list (timestamp). The only thing I can imagine being wrong with it is some sort of index-out-of-bounds issue, but:
The except block is never hit
The largest value cellIndex reaches is 30 (expected)
The length of parameterList is 5 (expected)
The length of timestamp is 31 (expected)
I am stumped. If anyone can offer some help that would be greatly appreciated.
paramIndex is fine, but you need to avoid giving variables the same name as built-in functions. range() is a standard Python function, and here you create a variable called range; after that, any later attempt to use the built-in range() actually tries to call your list, which is what produces the 'object is not callable' error.
If you insist on keeping the name range for your variable, Python 2 offered xrange() as an alternative for the loop, but your traceback shows Python 3, where xrange() no longer exists; there the clean fix is simply to rename the variable.
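A sketch of the fix, reusing the question's getRange/sheet/index names (only the variable is renamed; the rest of the loop is unchanged):

# renamed from `range` so the built-in range() stays callable on later passes
for paramIndex in range(0, 4):  # LINE 207
    # Select Range
    cell_range = getRange(index, paramIndex + 5)
    cell_list = sheet.range(cell_range[0], cell_range[1],
                            cell_range[2], cell_range[3])
    ...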

Ways to handle exceptions in Dask distributed

I'm having a lot of success using Dask and Distributed to develop data analysis pipelines. One thing that I'm still looking forward to improving, however, is the way I handle exceptions.
Right now, if I write the following
import dask.bag

def my_function(value):
    return 1 / value

results = (dask.bag
    .from_sequence(range(-10, 10))
    .map(my_function))

print(results.compute())
... then on running the program I get a long, long list of tracebacks (one per worker, I'm guessing). The most relevant segment being
distributed.utils - ERROR - division by zero
Traceback (most recent call last):
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/distributed/utils.py", line 193, in f
result[0] = yield gen.maybe_future(func(*args, **kwargs))
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/tornado/gen.py", line 1015, in run
value = future.result()
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/tornado/concurrent.py", line 237, in result
raise_exc_info(self._exc_info)
File "<string>", line 3, in raise_exc_info
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/tornado/gen.py", line 1021, in run
yielded = self.gen.throw(*exc_info)
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/distributed/client.py", line 1473, in _get
result = yield self._gather(packed)
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/tornado/gen.py", line 1015, in run
value = future.result()
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/tornado/concurrent.py", line 237, in result
raise_exc_info(self._exc_info)
File "<string>", line 3, in raise_exc_info
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/tornado/gen.py", line 1021, in run
yielded = self.gen.throw(*exc_info)
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/distributed/client.py", line 923, in _gather
st.traceback)
File "/Users/ajmazurie/test/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/mnt/lustrefs/work/aurelien.mazurie/test_dask/.env/pyenv-3.6.0-default/lib/python3.6/site-packages/dask/bag/core.py", line 1411, in reify
File "test.py", line 9, in my_function
return 1 / value
ZeroDivisionError: division by zero
Here, of course, a visual inspection will tell me that the error was dividing a number by zero. What I'm wondering is whether there is a better way to track these errors. For example, I can't seem to catch the exception itself:
import dask.bag
import distributed

try:
    dask_scheduler = "127.0.0.1:8786"
    dask_client = distributed.Client(dask_scheduler)

    def my_function(value):
        return 1 / value

    results = (dask.bag
        .from_sequence(range(-10, 10))
        .map(my_function))

    #dask_client.persist(results)
    print(results.compute())

except Exception as e:
    print("error: %s" % e)
EDIT: Note that in my example I'm using distributed, not just dask. There is a dask-scheduler listening on port 8786 with four dask-worker processes registered to it.
This code will produce the exact same output as above, meaning that I'm not actually catching the exception with my try/except block.
Now, since we're talking about distributed tasks across a cluster, it is obviously non-trivial to propagate exceptions back to me. Is there any guideline for doing so? Right now my solution is to have functions return both a result and an optional error message, then process the results and error messages separately:
def my_function(value):
    try:
        return {"result": 1 / value, "error": None}
    except ZeroDivisionError:
        return {"result": None, "error": "boom!"}

results = (dask.bag
    .from_sequence(range(-10, 10))
    .map(my_function))

dask_client.persist(results)

errors = (results
    .pluck("error")
    .filter(lambda x: x is not None)
    .compute())
print(errors)

results = (results
    .pluck("result")
    .filter(lambda x: x is not None)
    .compute())
print(results)
This works, but I'm wondering if I'm sandblasting the soup cracker here. EDIT: Another option would be to use something like a Maybe monad, but once again I'd like to know if I'm overthinking it.
Dask automatically packages up exceptions that occurred remotely and reraises them locally. Here is what I get when I run your example
In [1]: from dask.distributed import Client
In [2]: client = Client('localhost:8786')
In [3]: import dask.bag
In [4]: try:
...: def my_function (value):
...: return 1 / value
...:
...: results = (dask.bag
...: .from_sequence(range(-10, 10))
...: .map(my_function))
...:
...: print(results.compute())
...:
...: except Exception as e:
...: import pdb; pdb.set_trace()
...: print("error: %s" % e)
...:
distributed.utils - ERROR - division by zero
> <ipython-input-4-17aa5fbfb732>(13)<module>()
-> print("error: %s" % e)
(Pdb) pp e
ZeroDivisionError('division by zero',)
You could wrap your function like so:
def exception_handler(orig_func):
    def wrapper(*args, **kwargs):
        try:
            return orig_func(*args, **kwargs)
        except:
            import sys
            sys.exit(1)
    return wrapper
You could apply it as a decorator, or do:
wrapped = exception_handler(my_function)
dask_client.map(wrapped, range(100))
This seems to automatically rebalance tasks if a worker fails. But I don't know how to remove the failed worker from the pool.
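Another option (a sketch against the distributed futures API, assuming the same scheduler at 127.0.0.1:8786 as in the question) is to skip bags entirely, submit the work with Client.map, and inspect each future, so one failing element doesn't abort the whole computation:

from dask.distributed import Client, wait

def my_function(value):
    return 1 / value

client = Client("127.0.0.1:8786")
futures = client.map(my_function, range(-10, 10))
wait(futures)  # block until every task has either finished or errored

errors = [f.exception() for f in futures if f.status == "error"]
results = [f.result() for f in futures if f.status == "finished"]
print(errors)   # [ZeroDivisionError('division by zero')]
print(results)  # the 19 successful results

This keeps the "collect errors and results separately" idea from the question, but lets the scheduler carry the real exception objects instead of hand-rolled error strings.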
