I am trying to write a program in which I alternate between two threads, thread1 and thread2. The tricky part is that the thread that begins execution first must be thread1.
This is the code I have so far:
class Client:
    #member variables
    def sendFile(self, cv1, lock1):
        sent = 0
        while (i < self.size):
            message = self.data[i:1024+i]
            cv1.acquire()
            BadNet.transmit(self.clientSocket, message, self.serverIP, self.serverPort)
            cv1.notify()
            cv1.release()
            i = i + 1024
            sent += 1
            lock1.wait()
        print "File sent successfully !"
        self.clientSocket.close()

    def receiveAck(self, cv1, lock2):
        i = 0
        while (1):
            lock1.clear()
            cv1.acquire()
            cv1.wait()
            print "\nentered ack !\n"
            self.ack, serverAddress = self.clientSocket.recvfrom(self.buf)
            cv1.release()
            lock1.set()

if __name__ == "__main__":
    lock1 = Event()
    cv1 = Condition()
    cv2 = Condition()
    client = Client()
    client.readFile()
    thread1 = Thread(target=client.sendFile, args=[cv1, lock1])
    thread2 = Thread(target=client.receiveAck, args=[cv1, lock1])
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
The problem I am currently facing is that initially the program does alternate between the two threads (confirmed by the output on the console), but after an arbitrary number of iterations (usually between 20 and 80) the program just hangs and no further iterations are performed.
There are at least two problems with your synchronization.
First, you're using cv1 wrong. Your receive thread has to loop around its cv, checking the condition and calling wait each time. Otherwise, you're just using a cv as a broken event + lock combination. You don't have such a loop. More importantly, you don't even have a condition to wait for.
Second, you're using lock1 wrong. Your receive thread sets the event and then immediately clears it. But there's no guarantee that the send thread has gotten to the wait yet. (The race from the previous problem makes this more of a problem, but it's still a problem even if you fix that.) On a multi-core machine, it will usually get there in time, but "usually" is even worse than never in threaded programming. So, eventually the send thread will get to the wait after the receive thread has already done the clear, and therefore it will wait forever. The receive thread, meanwhile, will be waiting to be notified by the send thread, which will never happen. So you're deadlocked.
For future reference, adding print statements before and after every blocking operation, especially sync operations, would make this a lot easier to debug: you would see that the receive thread's last message was "receive waiting on cv1", while the send thread's last message was "send waiting on lock1", and it would be obvious where the deadlock was.
Anyway, I'm not sure what it would even mean to "fix" a cv with no condition, or an event that you're trying to use as a cv, so instead I'll show how to write something sensible with two cvs. In this case, we might as well just use a flag that we flip back and forth as the condition for both cvs.
While I'm at it, I'll fix a couple of other problems that made your code not even testable (e.g., i is never initialized), include the debugging prints described above, and fill in what I had to in order to make this a complete example, but otherwise I'll try to leave your structure and irrelevant problems (like Client being an old-style class) intact.
class Client:
    def __init__(self):
        self.clientSocket = socket(AF_INET, SOCK_DGRAM)
        self.serverIP = '127.0.0.1'
        self.serverPort = 11111
        self.buf = 4
        self.waitack = False

    def readFile(self):
        self.data = ', '.join(map(str, range(100000)))
        self.size = len(self.data)

    #member variables
    def sendFile(self, cv1, lock1):
        i = 0
        sent = 0
        while (i < self.size):
            message = self.data[i:1024+i]
            print "s cv1 acquire"
            with cv1:
                print "s sendto"
                self.clientSocket.sendto(message, (self.serverIP, self.serverPort))
                self.waitack = True
                print "s cv1 notify"
                cv1.notify()
            i = i + 1024
            sent += 1
            print "s cv2 acquire"
            with cv2:
                print "s cv2 wait"
                while self.waitack:
                    cv2.wait()
        print "File sent successfully !"
        self.clientSocket.close()

    def receiveAck(self, cv1, lock2):
        i = 0
        while (1):
            print "r cv1 acquire"
            with cv1:
                while not self.waitack:
                    print "r cv1 wait"
                    cv1.wait()
                print "r recvfrom"
                self.ack, serverAddress = self.clientSocket.recvfrom(self.buf)
                i += 1
                print self.ack, i
            print "r cv2 acquire"
            with cv2:
                self.waitack = False
                print "r cv2 notify"
                cv2.notify()
And here's a test server for it:
from itertools import *
from socket import *
s = socket(AF_INET, SOCK_DGRAM)
s.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
s.bind(('127.0.0.1', 11111))
for i in count():
    data, addr = s.recvfrom(1024)
    print(i)
    s.sendto('ack\n', addr)
Start the server, start the client, the server will count up to 672, the client will count up to 673 (since your code counts 1-based) with 673 balanced pairs of messages and a "File sent successfully !" at the end. (Of course the client will then hang forever because receiveAck has no way to finish, and the server because I wrote it as an infinite loop.)
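Incidentally, if you also wanted receiveAck to be able to finish, one option (my own addition, not part of the original fix; it assumes a self.done = False flag added in __init__) is to have sendFile announce that it is done under cv1, and have the receive thread check that flag before waiting:

# at the end of sendFile, after its while loop:
with cv1:
    self.done = True  # hypothetical flag so the receiver can exit
    cv1.notify()

# in receiveAck, the inner wait loop becomes:
with cv1:
    while not self.waitack:
        if self.done:
            return  # the sender is finished; no more acks are coming
        cv1.wait()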
Related
I'm having trouble with a thread hanging when I call join() on it. What I am trying to do is use the Go-back N protocol for sending/receiving packets over a network, and I created a separate thread for handling the ACK's that come back from the server.
I have a single thread running this method, which checks for incoming packets and retrieves the ACK number, then stores that number in a variable set up in __init__ called self.lastAck. Simplified version of the method:
#Anything not explicitly defined here is a global variable
def ack_check(self):
    ack_num = 0
    pktHdrData = '!BBBBHHLLQQLL'
    # Listening for ack number from server and store it in self.lastAck.
    while True:
        # variable also inside the __init__ method
        if (self.finish == 1):
            break
        data, address = sock.recvfrom(4096)
        clientAck = struct.unpack(pktHdrData, data)
        ackNumRecv = clientAck[9]
        self.lastAck = ackNumRecv
A simplified version of the function that creates the thread and handles the sending of the client packets:
def send(self, buffer):
    # Assume packet header and all relevant data is set up correctly here
    # ...
    t1 = threading.Thread(target=self.ack_check, args=())
    t1.setDaemon = True
    t1.start()
    # All of this works perfectly and breaks as expected
    while True:
        # Packets/data get sent here and break when self.lastAck reaches
        # a specific number. Assume this works properly and breaks.
        break
    self.finish = 1
    print("About to hang here")
    t1.join()
    return bytessent
I end up hanging right after printing "About to hang here" and I can't figure out why. I can get it to work if I break out of the while True loop in the else section, but then I end up closing the thread before I receive all the ACK numbers from the receiver. So instead of the full 32 ACKs I'll end up with anywhere from 1 ACK to the full 32.
I think the problem lies in the ack_check method: it doesn't break out of the loop even though it should once I set self.finish = 1; it just ends up hanging every time.
Additionally, nothing outside of these two methods touches self.finish or self.lastAck. I know about deadlocking, but I can't see how that would be possible in this situation.
Sidenote: I realize the Go-Back N protocol is not properly implemented at all here, but this was the first step I took in creating it.
As per the comments, the recvfrom call in ack_check left the thread hanging. Fixed code:
def ack_check(self):
    ack_num = 0
    pktHdrData = '!BBBBHHLLQQLL'
    # Listening for ack number from server and store it in self.lastAck.
    sock.settimeout(0.2)  # note: settimeout(), so recvfrom cannot block forever
    while True:
        # variable also inside the __init__ method
        if (self.finish == 1):
            break
        try:
            data, address = sock.recvfrom(4096)
        except socket.timeout:
            break
        clientAck = struct.unpack(pktHdrData, data)
        ackNumRecv = clientAck[9]
        self.lastAck = ackNumRecv
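A small variant (my own tweak, not from the comments): treat a timeout as "no data yet" rather than a reason to quit, so the thread keeps listening until self.finish is actually set:

def ack_check(self):
    pktHdrData = '!BBBBHHLLQQLL'
    sock.settimeout(0.2)  # recvfrom now blocks for at most 0.2 seconds
    while self.finish != 1:
        try:
            data, address = sock.recvfrom(4096)
        except socket.timeout:
            continue  # nothing arrived; loop back and re-check self.finish
        clientAck = struct.unpack(pktHdrData, data)
        self.lastAck = clientAck[9]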
I use multiprocessing.connection.Listener for communication between processes, and it works like a charm for me. Now I would really love my mainloop to do something else between commands from the client. Unfortunately listener.accept() blocks execution until a connection from the client process is established.
Is there a simple way of managing a non-blocking check for multiprocessing.connection? A timeout? Or should I use a dedicated thread?
# Simplified code:
from multiprocessing.connection import Listener

def mainloop():
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    while True:
        conn = listener.accept() # <--- This blocks!
        msg = conn.recv()
        print('got message: %r' % msg)
        conn.close()
One solution that I found (although it might not be the most "elegant") is using conn.poll (documentation). poll returns True if the Listener has new data and, most importantly, is non-blocking if no argument is passed to it. I'm not 100% sure that this is the best way to do this, but I've had success with only running listener.accept() once and then using the following syntax to repeatedly get input (if there is any available):
from multiprocessing.connection import Listener

def mainloop():
    running = True
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    conn = listener.accept()
    msg = ""
    while running:
        while conn.poll():
            msg = conn.recv()
            print(f"got message: {msg}")
            if msg == "EXIT":
                running = False
        # Other code can go here
        print(f"I can run too! Last msg received was {msg}")
    conn.close()
The 'while' in the conditional statement can be replaced with 'if' if you only want to get a maximum of one message at a time. Use with caution, as it seems sort of 'hacky', and I haven't found references to using conn.poll for this purpose elsewhere.
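If the empty polling loop bothers you, conn.poll also accepts a timeout in seconds, so the check can block briefly instead of spinning (a small variant of the loop above; msg is initialized to "" as before):

while running:
    if conn.poll(0.1):  # wait up to 100 ms for data instead of returning immediately
        msg = conn.recv()
        print(f"got message: {msg}")
        if msg == "EXIT":
            running = False
    # Other code can go here
    print(f"I can run too! Last msg received was {msg}")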
You can run the blocking function in a thread:
conn = await loop.run_in_executor(None, listener.accept)
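Fleshed out, that might look like this (a minimal sketch, assuming Python 3.7+ and the same address/authkey as the question; conn.recv is run in the executor too, since it also blocks):

import asyncio
from multiprocessing.connection import Listener

async def mainloop():
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    loop = asyncio.get_running_loop()
    while True:
        # run the blocking accept in the default thread pool executor
        conn = await loop.run_in_executor(None, listener.accept)
        msg = await loop.run_in_executor(None, conn.recv)
        print('got message: %r' % msg)
        conn.close()  # other coroutines can run during each await above

asyncio.run(mainloop())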
I've not used the Listener object myself- for this task I normally use multiprocessing.Queue; doco at the following link:
https://docs.python.org/2/library/queue.html#Queue.Queue
That object can be used to send and receive any pickle-able object between Python processes with a nice API; I think you'll be most interested in:
in process A:
    q.put('some message')
in process B:
    q.get_nowait()  # will raise Queue.Empty if nothing is available; handle that to move on with your execution
The only limitation with this is you'll need to have control of both Process objects at some point in order to be able to allocate the queue to them- something like this:
import time
from Queue import Empty
from multiprocessing import Queue, Process

def receiver(q):
    while 1:
        try:
            message = q.get_nowait()
            print 'receiver got', message
        except Empty:
            print 'nothing to receive, sleeping'
            time.sleep(1)

def sender(q):
    while 1:
        message = 'some message'
        q.put('some message')
        print 'sender sent', message
        time.sleep(1)

some_queue = Queue()

process_a = Process(
    target=receiver,
    args=(some_queue,)
)
process_b = Process(
    target=sender,
    args=(some_queue,)
)

process_a.start()
process_b.start()

print 'ctrl + c to exit'
try:
    while 1:
        time.sleep(1)
except KeyboardInterrupt:
    pass

process_a.terminate()
process_b.terminate()

process_a.join()
process_b.join()
Queues are nice because you can actually have as many consumers and as many producers for that exact same Queue object as you like (handy for distributing tasks).
I should point out that just calling .terminate() on a Process is bad form- you should use your shiny new messaging system to pass a shutdown message or something of that nature.
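For example, a sketch of that (SHUTDOWN is a sentinel value of my own choosing; everything else assumes the example above):

SHUTDOWN = 'shutdown'

def receiver(q):
    while 1:
        try:
            message = q.get_nowait()
            if message == SHUTDOWN:
                break  # clean exit instead of terminate()
            print 'receiver got', message
        except Empty:
            print 'nothing to receive, sleeping'
            time.sleep(1)

# then, in the parent, instead of process_a.terminate():
some_queue.put(SHUTDOWN)
process_a.join()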
The multiprocessing module comes with a nice feature called Pipe(). It is a nice way to share resources between two processes (I've never tried more than two). With the dawn of Python 3.8 came the shared memory functionality in the multiprocessing module, but I have not really tested that, so I cannot vouch for it.
You would use the Pipe function something like this:
from multiprocessing import Pipe, Process

.....

def sending(conn):
    message = 'some message'
    #perform some code
    conn.send(message)
    conn.close()

receiver, sender = Pipe()
p = Process(target=sending, args=(sender,))
p.start()
print receiver.recv()   # prints "some message"
p.join()
With this you should be able to have separate processes running independently until you reach the point where you need the input from one of them. If the other process hasn't delivered its data yet, you can put your process to sleep, or use a while loop to keep checking until the other process finishes its task and sends it over:
while not receiver.recv():
    time.sleep(5)
This should keep it looping until the other process is done running and sends the result. In my experience this is also about 2-3 times faster than a Queue. Although a Queue is also a good option, personally I do not use it.
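Note that recv() itself blocks until something arrives, so the loop above won't actually spin. If you really want a non-blocking check, Connection objects also have a poll method (the same idea as in the Listener answer above; receiver is the parent end from the example):

while not receiver.poll():
    time.sleep(5)         # nothing there yet; do other work or sleep
result = receiver.recv()  # now guaranteed not to block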
I have a few computers on a network and I'm trying to coordinate work between them by broadcasting instructions and receiving replies from individual workers. When I use zmq to assign a single socket to each program it works fine, but when I try to assign another, none of them work. For example, the master program runs on one machine. With the code as such it works fine as a publisher, but when I uncomment the commented lines neither socket works. I've seen example code extremely similar to this so I believe it should work, but I must be missing something.
Here's some example code, first with the master program and then the worker program. The idea is to control the worker programs from the master based on input from the workers to the master.
import zmq
import time
import sys

def master():
    word = sys.argv[1]
    numWord = sys.argv[2]
    port1 = int(sys.argv[3])
    port2 = int(sys.argv[4])
    context = zmq.Context()
    publisher = context.socket(zmq.PUB)
    publisher.bind("tcp://*:%s" % port1)
    #receiver = context.socket(zmq.REP)
    #receiver.bind("tcp://*:%s" % port2)
    for i in range(int(numWord)):
        print str(i)+": "+word
        print "Publishing 1"
        publisher.send("READY_FOR_NEXT_WORD")
        print "Publishing 2"
        publisher.send(word)
        #print "Published. Waiting for REQ"
        #word = receiver.recv()
        #receiver.send("Master IRO")
        time.sleep(1)
        print "Received: "+word
    publisher.send("EXIT_NOW")

master()
Ditto for the workers:
import zmq
import random
import zipfile
import sys

def worker(workerID, fileFirst, fileLast):
    print "Worker "+ str(workerID) + " started"
    port1 = int(sys.argv[4])
    port2 = int(sys.argv[5])
    # Socket to talk to server
    context = zmq.Context()
    #pusher = context.socket(zmq.REQ)
    #pusher.connect("tcp://10.122.102.45:%s" % port2)
    receiver = context.socket(zmq.SUB)
    receiver.connect("tcp://10.122.102.45:%s" % port1)
    receiver.setsockopt(zmq.SUBSCRIBE, '')
    found = False
    done = False
    while True:
        print "Ready to receive"
        word = receiver.recv()
        print "Received order: "+word
        #pusher.send("Worker #"+str(workerID)+" IRO "+ word)
        #pusher.recv()
        #print "Confirmed receipt"

worker(sys.argv[1], sys.argv[2], sys.argv[3])
Well, PUB-SUB patterns are not meant to be reliable, especially during initialization (while the connection is being established).
Your "master" publishes the first two messages in that loop and then waits for a request from the "worker". Now, if those messages get lost (something that can happen with the first messages sent over a PUB-SUB pair), the "worker" will be stuck waiting for a publication from the "master". So, basically, both are stuck waiting for an incoming message.
Apart from that, notice that you are publishing 2 messages from the "master" node while only processing 1 in the "worker". Your "worker" won't be able to catch up with your "master" and, therefore, messages will be dropped or you'll get a crash.
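One common mitigation is the "synchronized publisher" pattern from the ZeroMQ guide: don't start publishing until each subscriber has checked in over a separate REQ-REP pair. A sketch, reusing the otherwise-unused port2 from the question (your commented-out REP/REQ sockets are almost this already):

# master side, after binding the PUB socket and before the publishing loop:
syncservice = context.socket(zmq.REP)
syncservice.bind("tcp://*:%s" % port2)
syncservice.recv()        # block until the worker says it is subscribed
syncservice.send("GO")

# worker side, right after setting up the SUB socket:
syncclient = context.socket(zmq.REQ)
syncclient.connect("tcp://10.122.102.45:%s" % port2)
syncclient.send("READY")  # tell the master our subscription is in place
syncclient.recv()         # wait for the go-ahead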
I am trying to run two TCP clients from the same code using multithreading. The issue is that the second thread never runs, and main() never reaches the last "it's here!" print. I have the following code:
def main():
    t = Thread(None, connect(), None,)
    t2 = Thread(None, connect2(), None,)
    t.start()
    t2.start()
    print "it's here!"

def connect_mktData():
    # create Internet TCP socket
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # connect to server
    s.connect(('IP', PORT))
    while(1):
        print 'data1'
        k = 'enter a letter1:'
        s.send(k)        # send k to server
        v = s.recv(1024) # receive v from server (up to 1024 bytes)
        print v
        time.sleep(1)
    s.close() # close socket

def connect_mktData2():
    # create Internet TCP socket
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # connect to server
    s.connect(('IP', PORT))
    while(1):
        print 'data2'
        # get letter
        k = raw_input('enter a letter2:')
        s.send(k)        # send k to server
        v = s.recv(1024) # receive v from server (up to 1024 bytes)
        print v
        time.sleep(1)
    s.close() # close socket

main()
I get the following output:
data1
enter a letter1:
data1
enter a letter1:
data1
enter a letter1:
data1
enter a letter1:
data1
Even though both functions are mostly identical, ultimately I will have two different connections doing two different things 'simultaneously' and alternating with each other. Shouldn't both threads run independently? Thanks for the help!
It looks like your issue is that this:
t = Thread(None,connect(),None,)
t2 = Thread(None,connect2(),None,)
Should be this:
t = Thread(None,connect,None,)
t2 = Thread(None,connect2,None,)
You want to pass the function objects connect and connect2 to the Thread object. When you use connect() instead of connect, you end up calling connect in the main thread, and then pass its return value to the Thread object, which isn't what you want.
Also, it is much more readable to create the Thread objects like this:
t = Thread(target=connect)
t2 = Thread(target=connect2)
Use the target kwarg, so you don't have to include the None for the group.
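Putting it together (note that in the posted code the functions are actually named connect_mktData and connect_mktData2, so target has to match whatever names you really define):

def main():
    t = Thread(target=connect_mktData)
    t2 = Thread(target=connect_mktData2)
    t.start()
    t2.start()
    print "it's here!"
    t.join()   # optional: wait for both threads before main returns
    t2.join()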
Also note that while this will make both functions run concurrently, they will only truly be running at the same time while they're doing blocking I/O operations (meaning inside send, recv, or raw_input). Because of Python's Global Interpreter Lock (GIL), only one thread at a time can be doing CPU-bound work. So your threads will end up doing a mixture of true concurrency (during I/O) and cooperative multitasking (during CPU-bound operations).
Yes, yes I know I could just use nmap but I want to try this out myself.
I'm trying to write a threaded script to find open ports on a target IP address. This is what I have right now:
import socket, Queue
from threading import Thread

print "Target to scan: "
targetIP = raw_input("> ")
print "Number of threads: "
threads = int(raw_input("> "))

q = Queue.Queue()

# Fill queue with port numbers
for port in range(1, 1025):
    q.put(port)

def scan(targetIP, port):
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(4)
        result = s.connect_ex((targetIP, port))
        if result == 0:
            print 'Port {0} is open'.format(port)
        s.close
    except:
        pass
    q.task_done()

while q.full:
    for i in range(threads):
        port = q.get()
        t = Thread(target=scan, args=(targetIP, port))
        t.daemon = True
        t.start()
However I have a few issues:
1) When I run this as is, it will iterate through the port queue but then just hang, never breaking from the while loop even though the queue empties.
2) If I add print lines to scan to see what's happening (a "Scanning port X" line at the beginning and a line printing result at the end), stdout gets flooded with the "Scanning port" lines for all ports in the queue, and THEN the result lines get printed. In other words, it looks like the script is not waiting for result to get a value and just keeps iterating as if it had one.
What am I doing wrong here?
Your actual question has already been answered by a few people, so here's an alternative solution with multiprocessing.Pool instead of threading:
import socket
from multiprocessing import Pool

def scan(arg):
    target_ip, port = arg
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        sock.connect((target_ip, port))
        sock.close()
        return port, True
    except (socket.timeout, socket.error):
        return port, False

if __name__ == '__main__':
    target_ip = raw_input('Target IP: ')
    num_procs = int(raw_input('Number of processes: '))
    ports = range(1, 1025)
    pool = Pool(processes=num_procs)
    for port, status in pool.imap_unordered(scan, [(target_ip, port) for port in ports]):
        print port, 'is', 'open' if status else 'closed'
You have several problems here, the first is:
while q.full:
Presumably you meant to call the function:
while q.full():
But you have an infinite queue (you created it with no maxsize), so it's never full; so if you make that change, it won't call scan() at all.
Assuming you fix this in some other way (e.g., using q.empty()), what happens if range(threads) does not evenly divide the items in the queue? For instance, suppose you use 3 threads and put port numbers 1, 2, 3, and 4 into q. You'll call q.get() three times (getting 1, 2, and 3) in the first trip through the outer while, and then call it three times again in the second trip—but it only has one more value in it, 4, so the call to q.get() after that will wait for someone to execute a q.put(), and you will get stuck.
You need to rewrite the logic, in other words.
Edit: same problem with s.close vs s.close(). Others addressed the whole pool-of-threads aspect. @Blender's version, using multiprocessing, is a lot simpler, since multiprocessing takes care of that for you.
There are a few issues with your code. First, the while loop spins on q.full, which is a bound method object, not a call; a method object is always truthy, so the loop never ends. But actually there's no need to loop in your main thread at all.
I would add sentinel values to the end of the queue, one per worker thread. When the worker thread gets a sentinel, it quits its processing loop. This way you don't have to daemonize the Threads.
So your code should be like this (a sketch follows the list):
put ports into queue
put sentinels into queue
start the desired number of threads, have them take ports from the queue and process them, put the results in another queue
wait for the threads to terminate, calling t.join() on the workers
use the results
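A sketch of that approach (the sentinel value, the results queue, and the scan_worker name are my own choices, not from the question):

import socket, Queue
from threading import Thread

SENTINEL = None  # one per worker; tells that worker to quit

def scan_worker(q, results, targetIP):
    while True:
        port = q.get()
        if port is SENTINEL:
            break
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(4)
        if s.connect_ex((targetIP, port)) == 0:
            results.put(port)
        s.close()

targetIP = raw_input("Target to scan: > ")
threads = int(raw_input("Number of threads: > "))

q = Queue.Queue()
results = Queue.Queue()
for port in range(1, 1025):
    q.put(port)
for _ in range(threads):
    q.put(SENTINEL)

workers = [Thread(target=scan_worker, args=(q, results, targetIP))
           for _ in range(threads)]
for t in workers:
    t.start()
for t in workers:
    t.join()  # no daemon threads or main-thread busy loop needed

while not results.empty():
    print 'Port {0} is open'.format(results.get())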
Well, you have to know that just looping over range(threads) and starting a thread each time doesn't cap the number of running threads at that count. It loops 4 times and creates 4 threads, then immediately loops again and creates another 4 without checking whether the first 4 have finished their task, which is why you get that flood of messages when you put prints in the scan function.
You would have to wait for the children to finish at the end of each pass through the while body.
Keeping the Thread objects in a list and calling t.join() on each of them does the trick; note that there is no threading.wait() function.
Try this:
import socket
import threading
from queue import Queue

print_lock = threading.Lock()

target = 'pythonprogramming.net'

def portscan(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((target, port))  # connect returns None, so close the socket itself
        with print_lock:
            print('port', port, 'is open!')
        s.close()
    except:
        pass

def threader():
    while True:
        worker = q.get()
        portscan(worker)
        q.task_done()

q = Queue()

for x in range(30):
    t = threading.Thread(target=threader)
    t.daemon = True
    t.start()

for worker in range(1, 10000):
    q.put(worker)

q.join()