Python Threading problems

I'm trying to understand threading (I'm new to it) to improve my code. For now, I have a class in a .py file with some functions.
In my main script, I initialize an object of this class in each program I have. With threads, I would like to create all these objects in one program and call each object's function from a thread.
def inicializa():
    clientList = list()
    thread_list = list()
    config = configparser.ConfigParser()
    config.read("accounts.ini")
    for section in config.sections():  # selects the account section to use
        email = config.get(section, 'email')
        password = config.get(section, 'password')
        the_hash = config.get(section, 'hash')
        authdata = config.get(section, 'authdata')
        authdata = eval(authdata)
        client = MyClient(email, password, the_hash, authdata)
        clientList.append(client)
    for client in clientList:
        t = threading.Thread(target=client.getBla())  # this function is inside my class; it works OK outside the thread when I call client.getBla()
        thread_list.append(t)
    for thread in thread_list:
        thread.start()
    return clientList
The error I get when I try to use the thread to start the function client.getBla is:
Exception in thread Thread-1:
TypeError: 'int' object is not callable
My function doesn't take any arguments, so I don't know what's going on, because calling client.getBla() outside of the threads works fine.
Thank you all.

t = threading.Thread(target=client.getBla())
What this line does is evaluate client.getBla() (which returns an int) and pass the result as the target argument to the Thread. The target argument takes a callable, so you should do this instead:
t = threading.Thread(target=client.getBla)
This way you pass the function itself, not its return value.
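To see the difference, here is a minimal sketch (with a hypothetical Worker class standing in for MyClient):

import threading

class Worker:
    def getBla(self):   # hypothetical stand-in for MyClient.getBla
        return 42       # returns an int, like the original

w = Worker()

# Wrong: getBla() runs immediately in the main thread, and its int result
# becomes the target. The thread then tries to call 42(), which raises
# "TypeError: 'int' object is not callable".
# t = threading.Thread(target=w.getBla())

# Right: the bound method itself is the target; the thread calls it later.
t = threading.Thread(target=w.getBla)
t.start()
t.join()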

Related

Python: Threads are not running in parallel

I'm trying to create a networking project using UDP connections. The server I'm creating has to multithread in order to receive multiple commands from multiple clients. However, when I try to multithread the server, only one thread runs. Here is the code:
def action_assigner():
    print('Hello Assign')
    while True:
        if work_queue.qsize() != 0:
            data, client_address, request_number = work_queue.get()
            do_actions(data, client_address, request_number)

def task_putter():
    request_number = 0
    print('Hello Task')
    while True:
        data_received = server_socket.recvfrom(1024)
        request_number += 1
        taskRunner(data_received, request_number)
try:
    thread_task = threading.Thread(target=task_putter())
    action_thread = threading.Thread(target=action_assigner())

    action_thread.start()
    thread_task.start()

    action_thread.join()
    thread_task.join()
except Exception as e:
    server_socket.close()
When running the code, I only get Hello Task as the result, meaning that action_thread never started. Can someone explain how to fix this?
The problem here is that you are calling the functions that should be the "body" of each thread when creating the Threads themselves.
Upon executing the line thread_task = threading.Thread(target=task_putter()), Python first resolves the expression inside the parentheses: it calls the function task_putter, which never returns. None of the subsequent lines in your program ever runs.
What we do when creating threads, and in other calls that take callable objects as arguments, is pass the function itself, not call it (which would run the function and evaluate to its return value).
Just change both lines creating the threads to not put the calling parentheses on the target= argument and you will get past this point:
...
try:
    thread_task = threading.Thread(target=task_putter)
    action_thread = threading.Thread(target=action_assigner)
    ...
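As a side note, the qsize() polling loop in action_assigner can be avoided entirely: queue.Queue.get() blocks until an item is available. A minimal sketch of that consumer pattern (with a stand-in do_actions, since the real one lives elsewhere in the question's server code):

import queue

work_queue = queue.Queue()

def do_actions(data, client_address, request_number):
    # stand-in for the question's do_actions
    print(request_number, client_address, data)

def action_assigner():
    print('Hello Assign')
    while True:
        # get() blocks until an item arrives, so no qsize() check is needed
        data, client_address, request_number = work_queue.get()
        do_actions(data, client_address, request_number)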

Python Threading - Make threads start without waiting for previous thread to finish

I want all of my threads to start at the same time, but in my code, each one waits for the previous thread to finish its work before starting. I want all of the threads to start in parallel.
My Code:
class Main(object):
    start = True
    config = True
    givenName = True

    def obscure(self, i):
        i = i
        while self.start:
            Config.userInfo(i)
            break
        while self.config:
            Config.open()
            break
        while self.givenName:
            Browser.openSession()
            break

Main = Main()

while __name__ == '__main__':
    Config.userInfo()
    Config.open()
    for i in range(len(Config.names)):
        Task = Thread(target=Main.obscure(i))
        Task.start()
    break
This line is the main problem:
Task = Thread(target=Main.obscure(i))
target is passed the result of calling Main.obscure(i), not the function to run in the thread. You are currently running the function in the main thread and then passing, essentially, target=None.
You want:
Task = Thread(target=Main.obscure, args=(i,))
Then, Thread will arrange to call Main.obscure with the listed arguments inside the thread.
Also, Main = Main() overwrites the class Main declaration with an instance of Main...but you'll never be able to make another instance since you've lost the reference to the class. Use another name, such as main = Main().
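Putting both fixes together, the launch loop could look like this (a sketch; Config and the obscure method come from the question's code):

from threading import Thread

main = Main()  # renamed instance, so the class Main stays reachable

threads = []
for i in range(len(Config.names)):              # Config is from the question
    t = Thread(target=main.obscure, args=(i,))  # pass the method and its argument separately
    t.start()                                   # threads now start without waiting
    threads.append(t)

for t in threads:
    t.join()  # wait for all of them to finish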

Python namedtuple as argument to apply_async(..) callback

I'm writing a short program where I want to call a function asynchronously so that it doesn't block the caller. To do this, I'm using Pool from Python's multiprocessing module.
In the function being called asynchronously I want to return a namedtuple to fit the logic of the rest of my program, but I'm finding that a namedtuple does not seem to be a supported type to pass from the spawned process to the callback (probably because it cannot be pickled). Here is a minimal repro of the problem.
from multiprocessing import Pool
from collections import namedtuple

logEntry = namedtuple("LogEntry", ['logLev', 'msg'])

def doSomething(x):
    # Do actual work here
    logCode = 1
    statusStr = "Message Here"
    return logEntry(logLev=logCode, msg=statusStr)

def callbackFunc(result):
    print(result.logLev)
    print(result.msg)

def userAsyncCall():
    pool = Pool()
    pool.apply_async(doSomething, [1,2], callback=callbackFunc)

if __name__ == "__main__":
    userAsyncCall()  # Nothing is printed

    # If this is uncommented, the logLev and status are printed as expected:
    # y = logEntry(logLev=2, msg="Hello World")
    # callbackFunc(y)
Does anyone know if there is a way to pass a namedtuple return value from the async process to the callback? Is there a better/more pythonic approach for what I'm doing?
The problem is that the case differs between the typename you passed to namedtuple() ("LogEntry") and the variable name you bound the result to (logEntry). That is, there's a mismatch between the named tuple's class name and the name it is assigned to in the module. You need the two to match:
LogEntry = namedtuple("LogEntry", ['logLev', 'msg'])
And update the return statement in doSomething() correspondingly.
Full code:
from multiprocessing import Pool
from collections import namedtuple

LogEntry = namedtuple("LogEntry", ['logLev', 'msg'])

def doSomething(x):
    # Do actual work here
    logCode = 1
    statusStr = "Message Here"
    return LogEntry(logLev=logCode, msg=statusStr)

def callbackFunc(result):
    print(result.logLev)
    print(result.msg)

def userAsyncCall():
    pool = Pool()
    return pool.apply_async(doSomething, [1], callback=callbackFunc)

if __name__ == "__main__":
    c = userAsyncCall()
    # To see whether there was an exception, you can attempt to get() the AsyncResult object:
    # print(c.get())
(To see the class definition, add verbose=True to namedtuple().)
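As a usage note, calling get() on the AsyncResult blocks until the worker finishes and re-raises any exception that occurred in it, which is the easiest way to surface silent failures. A self-contained sketch using the corrected definition:

from multiprocessing import Pool
from collections import namedtuple

LogEntry = namedtuple("LogEntry", ['logLev', 'msg'])

def doSomething(x):
    return LogEntry(logLev=1, msg="Message Here")

if __name__ == "__main__":
    pool = Pool()
    res = pool.apply_async(doSomething, [1])
    print(res.get(timeout=5))  # blocks; re-raises the worker's exception, if any
    pool.close()
    pool.join()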
The reason nothing is printed is that apply_async failed silently. (By the way, I think this behavior just confuses people.) You can pass error_callback to handle the error.
def errorCallback(exception):
    print(exception)

def userAsyncCall():
    pool = Pool()
    pool.apply_async(doSomething, [1], callback=callbackFunc, error_callback=errorCallback)
    # You passed the wrong arguments: doSomething() takes 1 positional argument,
    # so I replaced [1,2] with [1].

if __name__ == "__main__":
    userAsyncCall()
    import time
    time.sleep(3)  # You need this, otherwise you will never see the output.
When you run this, the output is:
Error sending result: 'LogEntry(logLev=1, msg='Message Here')'. Reason: 'PicklingError("Can't pickle <class '__mp_main__.LogEntry'>: attribute lookup LogEntry on __mp_main__ failed",)'
PicklingError! You're right that the namedtuple, as defined, cannot be passed from the spawned process to the callback.
Maybe it's not the most acceptable way, but you can send a dict as the result instead of a namedtuple.
As Dag Høidahl pointed out, though, the namedtuple can be passed once the names match. The following line works:
LogEntry = namedtuple("LogEntry", ['logLev', 'msg'])
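The underlying reason is that pickle stores the class by its typename and module, then looks it up again by attribute access; with mismatched names that lookup fails. A minimal sketch of the failure, independent of multiprocessing:

import pickle
from collections import namedtuple

logEntry = namedtuple("LogEntry", ['logLev', 'msg'])  # mismatched names

entry = logEntry(logLev=1, msg="hi")
# pickle records the class as "LogEntry", but the module only defines "logEntry",
# so this raises PicklingError: attribute lookup LogEntry on __main__ failed.
pickle.dumps(entry)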

What is wrong with this Python Multiprocessing Code?

I am trying to write some multiprocessing code for my project. I have created a snippet of the things that I want to do. However, it's not working as I expect. Can you please let me know what is wrong with it?
from multiprocessing import Process, Pipe
import time

class A:
    def __init__(self, rpipe, spipe):
        print "In the function fun()"

    def run(self):
        print "in run method"
        time.sleep(5)
        message = rpipe.recv()
        message = str(message).swapcase()
        spipe.send(message)

workers = []
my_pipe_1 = Pipe(False)
my_pipe_2 = Pipe(False)
proc_handle = Process(target=A, args=(my_pipe_1[0], my_pipe_2[1],))
workers.append(proc_handle)
proc_handle.run()

my_pipe_1[1].send("hello")
message = my_pipe_2[0].recv()
print message

print "Back in the main function now"
The traceback displayed when I press Ctrl-C:
^CTraceback (most recent call last):
File "sim.py", line 22, in <module>
message = my_pipe_2[0].recv()
KeyboardInterrupt
When I run the above code, the main process does not continue after calling proc_handle.run(). Why is this?
You've misunderstood how to use Process. You're creating a Process object and passing it a class as target, but target is meant to be a callable (usually a function) that Process.run then executes. So in your case it just instantiates A inside Process.run, and that's it.
You should instead make your A class a Process subclass, and just instantiate it directly:
#!/usr/bin/python
from multiprocessing import Process, Pipe
import time

class A(Process):
    def __init__(self, rpipe, spipe):
        print "In the function fun()"
        super(A, self).__init__()
        self.rpipe = rpipe
        self.spipe = spipe

    def run(self):
        print "in run method"
        time.sleep(5)
        message = self.rpipe.recv()
        message = str(message).swapcase()
        self.spipe.send(message)

if __name__ == "__main__":
    workers = []
    my_pipe_1 = Pipe(False)
    my_pipe_2 = Pipe(False)
    proc_handle = A(my_pipe_1[0], my_pipe_2[1])
    workers.append(proc_handle)
    proc_handle.start()

    my_pipe_1[1].send("hello")
    message = my_pipe_2[0].recv()
    print message
    print "Back in the main function now"
mgilson was right, though. You should call start(), not run(), to make A.run execute in a child process.
With these changes, the program works fine for me:
dan#dantop:~> ./mult.py
In the function fun()
in run method
HELLO
Back in the main function now
Taking a stab at this one, I think it's because you're calling proc_handle.run() instead of proc_handle.start().
The former is the activity that the process is going to perform; the latter actually arranges for run to be called in a separate process. In other words, you're never forking the process, so there is no other process for my_pipe_1[1] to communicate with, and it hangs.
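For completeness, the non-subclassing alternative is to pass an ordinary function as target; start() then spawns the child process, whose default run() calls that function. A minimal Python 3 sketch (the function and variable names here are mine):

from multiprocessing import Process, Pipe

def worker(inbox, outbox):
    # runs in the child process after start()
    message = inbox.recv()
    outbox.send(str(message).swapcase())

if __name__ == "__main__":
    recv_1, send_1 = Pipe(False)  # Pipe(False) returns (receive end, send end)
    recv_2, send_2 = Pipe(False)
    p = Process(target=worker, args=(recv_1, send_2))
    p.start()                     # spawn the child; run() calls worker there
    send_1.send("hello")
    print(recv_2.recv())          # prints HELLO
    p.join()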

Add function not working for set in Python

I am trying to add references to a function to a set (in the exposed_setCallback method).
The output is shown at the end. Somehow, it is not adding the reference on the second attempt. The links to the source files are:
http://pastebin.com/BNde5Cgr
http://pastebin.com/aCi6yMT9
Below is the code:
import rpyc

test = ['hi']
myReferences = set()

class MyService(rpyc.Service):

    def on_connect(self):
        """Think of this as a constructor of the class, but with
        a new name so as not to 'overload' the parent's init"""
        self.fn = None

    def exposed_setCallback(self, fn):
        # i = 0
        self.fn = fn  # Saves the remote function for calling later
        print self.fn
        myReferences.add(self.fn)
        # abc.append(i)
        # i += 1
        print myReferences
        for x in myReferences:
            print x
        # print abc

if __name__ == "__main__":
    # lists are pass by reference, so the same 'test'
    # will be available to all threads
    # While not required, think about locking!
    from rpyc.utils.server import ThreadedServer
    t = ThreadedServer(MyService, port=18888)
    t.start()
Output:
<function myprint at 0x01FFD370>
set([<function myprint at 0x01FFD370>])
<function myprint at 0x01FFD370>
<function myprint at 0x022DD370>
set([<function myprint at 0x022DD370>,
Please help
I think the issue is that you have a ThreadedServer, which is of course going to be multithreaded.
However, Python sets are not thread-safe (it is not safe for multiple threads to modify one at the same time), so you need to hold a lock whenever you access the set. You use the lock with a Python context manager (the with statement), which handles acquiring and releasing the lock for you; the lock can only be held by one thread at a time, thus preventing simultaneous access to your set. See the modified code below:
import rpyc
import threading

test = ['hi']
myReferences = set()
myReferencesLock = threading.Lock()

class MyService(rpyc.Service):

    def on_connect(self):
        """Think of this as a constructor of the class, but with
        a new name so as not to 'overload' the parent's init"""
        self.fn = None

    def exposed_setCallback(self, fn):
        # i = 0
        self.fn = fn  # Saves the remote function for calling later
        print self.fn
        with myReferencesLock:
            myReferences.add(self.fn)
        # abc.append(i)
        # i += 1
        with myReferencesLock:
            print myReferences
            for x in myReferences:
                print x
        # print abc

if __name__ == "__main__":
    # lists are pass by reference, so the same 'test'
    # will be available to all threads
    # While not required, think about locking!
    from rpyc.utils.server import ThreadedServer
    t = ThreadedServer(MyService, port=18888)
    t.start()
Welcome to the world of threaded programming. Make sure you protect data shared between threads with locks!
If you want to modify a global variable, you can use the global statement at the top of your function. (Strictly speaking, myReferences.add(...) mutates the set in place and works without global, which is only required when you rebind the name, but the declaration makes the intent explicit.)

def exposed_setCallback(self, fn):
    global myReferences
    # i = 0
    self.fn = fn  # Saves the remote function for calling later
    print self.fn
    myReferences.add(self.fn)
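A quick illustration of the distinction, independent of rpyc (the names here are mine):

items = set()

def mutate():
    items.add(1)   # in-place mutation: no global needed, the name is only read

def rebind():
    global items   # rebinding: global is required to assign to the module-level name
    items = {2}

mutate()
rebind()
print(items)  # {2}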
