I have had multiple problems trying to use Parallel Python (pp). I am running Python 2.6 and pp 1.6.0 RC3. Using the following test code:
import pp

nodes = ('mosura02', 'mosura03', 'mosura04', 'mosura05', 'mosura06',
         'mosura09', 'mosura10', 'mosura11', 'mosura12')

def pptester():
    js = pp.Server(ppservers=nodes)
    tmp = []
    for i in range(200):
        tmp.append(js.submit(ppworktest, (), (), ('os',)))
    return tmp

def ppworktest():
    return os.system("uname -a")
gives me the following result:
In [10]: Exception in thread run_local:
Traceback (most recent call last):
File "/usr/lib64/python2.6/threading.py", line 525, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.6/threading.py", line 477, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/wkerzend/python_coala/lib/python2.6/site-packages/pp.py", line 751, in _run_local
job.finalize(sresult)
UnboundLocalError: local variable 'sresult' referenced before assignment
(the same run_local traceback is printed three more times)
Any help is greatly appreciated.
I can't tell much from your code as posted, but I can tell you your exact problem: you're trying to modify a global variable named "sresult" from inside a function, but you did not add this line to the beginning of your function:
global sresult
If you don't declare a variable global and then assign to it inside a function, Python treats it as local to that function, so when you try to read or modify it Python complains that the local variable hasn't been bound yet (that is, hasn't been given a value).
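For example (a minimal, standalone sketch of that rule; the names here are illustrative and not taken from pp):
counter = 0

def broken():
    # Assigning to 'counter' below makes Python treat it as a local name
    # for the whole function, so this read raises UnboundLocalError.
    counter = counter + 1
    return counter

def fixed():
    global counter  # reads and writes now refer to the module-level name
    counter = counter + 1
    return counter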
It's a bug in the pp library. Fix it, or wait for it to be fixed.
Related
Recently I implemented lock.py in my Python project to lock threads, using memcached. However, I encountered an error that I am having trouble solving. It says that the memcache module has no Client attribute. I am not using Client directly; I believe it is just being called inside the imported modules. I have installed both dogpile.cache==1.1.5 and python-memcached==1.59 using pip. Could someone please help me?
Here is the full error message:
Exception in thread Thread-1 (mock_process_repo):
Traceback (most recent call last):
File "/usr/lib64/python3.10/threading.py", line 1009, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.10/threading.py", line 946, in run
self._target(*self._args, **self._kwargs)
File "/home/jdonic/code/prograde/prograde/lock.py", line 164, in mock_process_repo
with RepoSyncLock("repository-name"):
File "/home/jdonic/code/prograde/prograde/lock.py", line 122, in __enter__
while not lock_cache_region.backend.client.add(
File "/home/jdonic/.local/lib/python3.10/site-packages/dogpile/cache/backends/memcached.py", line 181, in client
return self._clients.memcached
File "/home/jdonic/.local/lib/python3.10/site-packages/dogpile/util/langhelpers.py", line 78, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/home/jdonic/.local/lib/python3.10/site-packages/dogpile/cache/backends/memcached.py", line 169, in _clients
return ClientPool()
File "/home/jdonic/.local/lib/python3.10/site-packages/dogpile/cache/backends/memcached.py", line 167, in __init__
self.memcached = backend._create_client()
File "/home/jdonic/.local/lib/python3.10/site-packages/dogpile/cache/backends/memcached.py", line 313, in _create_client
return memcache.Client(self.url)
AttributeError: module 'memcache' has no attribute 'Client'
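One thing I can check (just a diagnostic sketch, not part of the project code) is which memcache module is actually being imported, in case a local memcache.py or another package is shadowing python-memcached:
import memcache

# python-memcached provides memcache.Client; print where the module was
# loaded from and whether the attribute is actually there.
print(memcache.__file__)
print(hasattr(memcache, "Client"))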
I am running some code that uses Python's concurrent.futures module, and I am having trouble getting it to run. I am on Windows 10 and using Python 3.8. I am trying to run a simulation, which runs fine on my friend's Mac. I tried setting max_workers=1, as in
with cf.ProcessPoolExecutor(max_workers=1) as executor:
as was suggested in another thread, but it also failed. I am kind of out of ideas.
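For reference, here is a simplified sketch of the kind of call I am making with a single worker (runTest stands in for the real simulation function and is defined at module level):
import concurrent.futures as cf

def runTest(n):
    # stand-in for the real simulation step
    return n * n

if __name__ == "__main__":
    # guard needed on Windows, where worker processes are started with spawn
    with cf.ProcessPoolExecutor(max_workers=1) as executor:
        results = list(executor.map(runTest, range(4)))
    print(results)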
The stack trace looks like this:
Error Description: BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
Process SpawnProcess-1:
Traceback (most recent call last):
File "C:\Users\Lotfi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\Lotfi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Lotfi\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\process.py", line 233, in _process_worker
call_item = call_queue.get(block=True)
File "C:\Users\Lotfi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\queues.py", line 116, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'runTest' on <module '__main__' (built-in)>
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "<stdin>", line 11, in doit2
File "C:\Users\Lotfi\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\process.py", line 484, in _chain_from_iterable_of_lists
for element in iterable:
File "C:\Users\Lotfi\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\_base.py", line 611, in result_iterator
yield fs.pop().result()
File "C:\Users\Lotfi\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\_base.py", line 439, in result
return self.__get_result()
File "C:\Users\Lotfi\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\_base.py", line 388, in __get_result
raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
I have python-socketio==4.3.1 installed, and I can connect to the socket.io server correctly.
Whenever I receive a message, though, I get an exception. The data is a list on the connect event and a dictionary on message events.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.7/site-packages/socketio/client.py", line 581, in _handle_eio_message
self._handle_event(pkt.namespace, pkt.id, pkt.data)
File "/usr/local/lib/python3.7/site-packages/socketio/client.py", line 470, in _handle_event
r = self._trigger_event(data[0], namespace, *data[1:])
File "/usr/local/lib/python3.7/site-packages/socketio/client.py", line 514, in _trigger_event
if namespace in self.handlers and event in self.handlers[namespace]:
TypeError: unhashable type: 'list'
Exception in thread Thread-6:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.7/site-packages/socketio/client.py", line 581, in _handle_eio_message
self._handle_event(pkt.namespace, pkt.id, pkt.data)
File "/usr/local/lib/python3.7/site-packages/socketio/client.py", line 470, in _handle_event
r = self._trigger_event(data[0], namespace, *data[1:])
File "/usr/local/lib/python3.7/site-packages/socketio/client.py", line 514, in _trigger_event
if namespace in self.handlers and event in self.handlers[namespace]:
TypeError: unhashable type: 'dict'
I am able to log the messages correctly with a node client. Any ideas?
Code is just this right now:
import socketio

io = socketio.Client()

@io.event
def connect():
    print('connected')

@io.event
def message(data):
    print(data)

url = '...'
io.connect(url)
Turns out the error was in the server implementation.
I am running LdaMulticore from the Python gensim library, and the script cannot seem to create more than one thread. Here is the error:
Traceback (most recent call last):
File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 97, in worker
initializer(*initargs)
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamulticore.py", line 333, in worker_e_step
worker_lda.do_estep(chunk) # TODO: auto-tune alpha?
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamodel.py", line 725, in do_estep
gamma, sstats = self.inference(chunk, collect_sstats=True)
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamodel.py", line 655, in inference
ids = [int(idx) for idx, _ in doc]
TypeError: 'int' object is not iterable
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 325, in _handle_workers
pool._maintain_pool()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 229, in _maintain_pool
self._repopulate_pool()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
w.start()
File "/usr/lib64/python2.7/multiprocessing/process.py", line 130, in start
self._popen = Popen(self)
File "/usr/lib64/python2.7/multiprocessing/forking.py", line 121, in __init__
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
I'm creating my LDA model like this:
ldamodel = LdaMulticore(corpus, num_topics=50, id2word=dictionary, workers=3)
I have actually asked another question about this script, so the full script can be found here:
Gensim LDA Multicore Python script runs much too slow
If it's relevant, I'm running this on a CentOS server. Let me know if I should include any other information.
Any help is appreciated!
OSError: [Errno 12] Cannot allocate memory sounds like you are running out of RAM.
Check your available free memory and swap.
You can try to reduce the number of workers with the workers parameter, or the number of documents used in each training chunk with the chunksize parameter.
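For example (a sketch reusing the corpus and dictionary from the question; the exact numbers are just guesses to be tuned against the memory actually available):
from gensim.models import LdaMulticore

ldamodel = LdaMulticore(
    corpus,
    num_topics=50,
    id2word=dictionary,
    workers=1,      # fewer worker processes means fewer copies of the model state
    chunksize=500,  # fewer documents held in memory per training chunk
)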
I am getting this error while training with TensorFlow. Would you mind helping me fix it? Thank you a lot!
Exception in thread Thread-16923:
Traceback (most recent call last):
File "/home/aryan/miniconda2/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/home/aryan/miniconda2/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/aryan/miniconda2/lib/python2.7/site-packages/tflearn
/data_flow.py", line 240, in wait_for_threads
self.coord.join(self.threads)
File "/home/aryan/miniconda2/lib/python2.7/site-packages/tensorflow/python
/training/coordinator.py", line 397, in join
" ".join(stragglers))
"RuntimeError: Coordinator stopped with threads still running: Thread-16922"