Console buffers when I clear during loops - python

Code:
import os, ctypes, threading

cool = 0
started_t = True
threads = 10
lock = threading.Lock()

def output():
    global cool
    cool += 1
    lock.acquire()
    os.system("cls")
    print("Current Counter:", cool)
    lock.release()

while started_t:
    if threading.active_count() <= threads:
        try:
            threading.Thread(target=output).start()
        except:
            pass
The code is just an example; what I am actually trying to do is: a specific value gets changed throughout my code, and threads constantly clear the console (cls) and print the new value. Even with threading.Lock() it still doesn't print properly.
The main problem:
It prints too fast. I heard this has something to do with "buffers" and that there are ways to work around it, but I googled and couldn't find anything. Sometimes more than one line is printed at once, etc. The more threads, the worse it gets :(
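For what it's worth, part of the garbling in the example above comes from the counter being incremented outside the lock, so the clear/print pair in one thread can run against a counter another thread just changed. A minimal sketch putting both the update and the print under the same lock (names cool and output kept from the question; the os.system("cls") is left out so the sketch runs on any platform):

```python
import threading

cool = 0
lock = threading.Lock()

def output():
    global cool
    # Increment and print under one lock, so each thread's line
    # shows the value it just produced and lines never interleave.
    with lock:
        cool += 1
        print("Current Counter:", cool)

workers = [threading.Thread(target=output) for _ in range(10)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

Because the increment and the print are serialized together, the printed values come out strictly in order 1 through 10.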


Python: Event-Handler for background task complete

I have a _global_variable = Big_Giant_Class(). Big_Giant_Class takes a long time to run, but it also has constantly refreshing 'live data' behind it, so I always want as new an instance of it as possible. It's not IO-bound, just a load of CPU computations.
Further, my program has a number of functions that reference that global instance of Big_Giant_Class.
I'm trying to figure out a way to create Big_Giant_Class in an endless loop (so I always have the latest and greatest!) without it blocking all the other functions that reference _global_variable.
Conceptually, I kind of figure the code would look like:
import asyncio
import time

class Big_Giant_Class():
    def __init__(self, val, sleep_me = False):
        self.val = val
        if sleep_me:
            time.sleep(10)

    def print_val(self):
        print(self.val)

async def run_loop():
    while True:
        new_instance_value = await asyncio.run(Big_Giant_Class(val = 1))  # <-- takes a while
        # somehow assign new_instance_value to _global_variable when its done!

def do_stuff_that_cant_be_blocked():
    global _global_variable
    return _global_variable.print_val()

_global_variable = Big_Giant_Class(val = 0)

if __name__ == "__main__":
    asyncio.run(run_loop())  # <-- maybe I have to do this somewhere?
    for i in range(20):
        do_stuff_that_cant_be_blocked()
        time.sleep(1)
Conceptual Out:
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
1
1
1
1
1
1
1
The kicker is, I have a number of functions [i.e., do_stuff_that_cant_be_blocked] that can't be blocked.
I simply want them to use the last _global_variable value (which gets periodically updated by some non-blocking... thing?). That's why I figure I can't await the results, because that would block the other functions?
Is it possible to do something like that? I've done very little asyncio, so apologies if this is basic. I'm open to any other packages that might be able to do this (although I don't think Trio works, because I have required packages that are incompatible with it).
Thanks for any help in advance!
So you have two CPU-bound "loops" in your program. Python has a quirky threading model: because of the Global Interpreter Lock, CPython cannot actually do two computations at once. Threading and async only let Python fake doing two things at a time.
Threading lets you "do" two things because Python switches between the threads and makes progress on each, but it never runs both at the same instant.
Async lets you "do" two things if you can await the operation: while Python awaits, it can jump over and do something else. But awaiting a CPU-bound operation will not let it jump and do other work.
The easiest solution is to use a thread. There will be stretches where each loop is blocked because work is being done on the other, but the work will be split roughly 50/50 between the threads.
from threading import Thread

_global_variable = some_initial_value

def update_global():
    global _global_variable
    while True:
        _global_variable = get_new_global_instance()
        call_some_event()

def main():
    background_thread = Thread(target=update_global, daemon=True)
    background_thread.start()

    while True:
        do_important_work()
The harder, but truly parallel, version would use a Process instead of a Thread, but it would also need shared state, a queue, or something similar to pass the fresh instance back.
https://docs.python.org/3/library/multiprocessing.html#sharing-state-between-processes

Python Lock always re-acquired by the same thread

I got this as an interview problem a few days ago. I don't really know parallel programming, and the obvious solution I've tried isn't working.
The question is: write two functions, one printing "foo", one printing "bar", that will be run on separate threads. How to ensure output is always:
foo
bar
foo
bar
...
Here's what I've tried:
from threading import Lock, Thread

class ThreadPrinting:
    def __init__(self):
        self.lock = Lock()
        self.count = 10

    def foo(self):
        for _ in range(self.count):
            with self.lock:
                print("foo")

    def bar(self):
        for _ in range(self.count):
            with self.lock:
                print("bar")

if __name__ == "__main__":
    tp = ThreadPrinting()
    t1 = Thread(target=tp.foo)
    t2 = Thread(target=tp.bar)
    t1.start()
    t2.start()
But this just produces 10 "foo"s and then 10 "bar"s. Seemingly the same thread manages to loop around and re-acquire the lock before the other. What might be the solution here? Thank you.
this just produces 10 "foo"s and then 10 "bar"s. Seemingly the same thread manages to loop around and re-acquire the lock before the other.
No surprise there. The problem with using a threading.Lock object (a.k.a., a "mutex") in this way is that, like the (default) mutexes in most programming systems, it makes no attempt to be fair.
The very next thing that either of your two threads does after it releases the lock is immediately try to acquire it again. Meanwhile, the other thread is sleeping (a.k.a. "blocked"), waiting for its turn to acquire the lock.
The goal of most operating systems, when there is heavy demand for CPU time, is to maximize the amount of useful work that the CPU(s) can do. The best way to do that is to award the lock to the thread that already is running on some CPU instead of wasting time waking up some other thread that is sleeping.
That strategy works well in programs that use locks the way locks were meant to be used—that is to say, programs where the threads spend most of their time unlocked, and only briefly grab a lock, every so often, in order to examine or update some (group of) shared variables.
In order to make your threads take turns printing their messages, you are going to need to find some way to let the threads explicitly say to each other, "It's your turn now."
See my comments on your question for a hint about how you might do that.
@Solomon Slow provided a great explanation and pointed me in the right direction. I initially wanted some kind of a "lock with value" that can be acquired only conditionally. But this doesn't really exist, and busy-waiting in a cycle of "acquire lock - check variable - loop around" is not great. Instead I solved this with a pair of threading.Condition objects that the threads use to talk to each other. I'm sure there's a simpler solution, but here's mine:
from threading import Thread, Condition

class ThreadPrinting:
    def __init__(self):
        self.fooCondition = Condition()
        self.barCondition = Condition()
        self.count = 10

    def foo(self):
        for _ in range(self.count):
            with self.fooCondition:
                self.fooCondition.wait()
            print("foo")
            with self.barCondition:
                self.barCondition.notify()

    def bar(self):
        with self.fooCondition:
            self.fooCondition.notify() # Bootstrap the cycle
            # (note: if foo has not reached wait() yet, this notify is lost)
        for _ in range(self.count):
            with self.barCondition:
                self.barCondition.wait()
            print("bar")
            with self.fooCondition:
                self.fooCondition.notify()

if __name__ == "__main__":
    tp = ThreadPrinting()
    t1 = Thread(target=tp.foo)
    t2 = Thread(target=tp.bar)
    t1.start()
    t2.start()
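For what it's worth, the "simpler solution" the poster suspects exists can be had with a pair of semaphores. Unlike a bare Condition, a semaphore remembers a release that happens before the matching acquire, so there is no lost-wakeup timing concern and no bootstrap step; a sketch:

```python
from threading import Thread, Semaphore

class ThreadPrinting:
    def __init__(self):
        self.foo_turn = Semaphore(1)   # foo may go first
        self.bar_turn = Semaphore(0)   # bar must wait for foo
        self.count = 10

    def foo(self):
        for _ in range(self.count):
            self.foo_turn.acquire()    # wait for my turn
            print("foo")
            self.bar_turn.release()    # hand the turn to bar

    def bar(self):
        for _ in range(self.count):
            self.bar_turn.acquire()
            print("bar")
            self.foo_turn.release()

if __name__ == "__main__":
    tp = ThreadPrinting()
    t1 = Thread(target=tp.foo)
    t2 = Thread(target=tp.bar)
    t1.start()
    t2.start()
    t1.join()
    t2.join()
```

The output alternates foo/bar deterministically, regardless of how the scheduler interleaves the two threads.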
The way I did it was to have the first thread print 'foo', then sleep 1 second before the second thread starts printing 'bar'. Both functions sleep for 2 seconds between prints, so they always alternate, printing one word per second.
from threading import Thread
import time

def foo():
    num = 0
    while num < 10:
        print("foo")
        num = num + 1
        time.sleep(2)

def bar():
    num = 0
    while num < 10:
        print("bar")
        num = num + 1
        time.sleep(2)

t1 = Thread(target=foo)
t2 = Thread(target=bar)
t1.start()
time.sleep(1)
t2.start()
I tried this for 100 of each 'foo' and 'bar' and it still alternated.

Python3: detect keypresses asynchronously and communicate them to the main thread

Disclaimer: the import statements are inside the functions; I know this is uncommon. I am presenting my whole program function by function, explaining my issue and my thinking as I go. In reality I am doing something different; I made this minimal example just for this Stack Overflow question. There are duplicate questions, but I did not find good answers in them, since they only say "use multithreading" (e.g. this answer). This question is specifically about how to use multithreading.
The Story: I am running a program in Python. Let's say that it is a while loop going on to infinity. It just runs happily. For example,
def job(threadname, q):
    from time import sleep
    c = 0
    while True:
        sleep(0.1) #slow the loop down
        c += 1
        print(c)
What I want to be able to do is that it asynchronously detects a keypress on stdin and then interrupts execution, so that I can do whatever I want within the function it is interrupted in (or if I am running it with python3 -i program.py, to switch over to the REPL with all my modules loaded in, remember this is a minimal example in which I do not want to highlight such concerns too much).
My idea was: I have one function that asynchronously gets the keypress, sends it via a queue to the other thread and it works. So I extended the job function as such:
def job(threadname, q):
    from time import sleep
    c = 0
    while True:
        sleep(0.1) #slow the loop down
        c += 1
        print(c)
        ch = q.get() #extension for multithreading
        handle_keypress(ch) #extension for handling keypresses
The code for handle_keypress(ch) is:
def handle_keypress(key):
    if (key == "q"):
        print("Quit thread")
        exit(0)
    elif (key == "s"):
        print("would you like to change the step size? This has not been implemented yet.")
    else:
        print("you pressed another key, how nice! Unfortunately, there are not anymore options available yet.")
In other words, not that interesting other than to showcase that I want to be able to do this.
At first the issue seemed to be in the job() function. The culprit is q.get(), which hangs. However, it hangs because my input thread blocks instead of running asynchronously, and I have no clue how to make it non-blocking.
This is the function of my input thread:
def get_input(threadname, q):
    #get one character, this code is adapted from https://stackoverflow.com/questions/510357/python-read-a-single-character-from-the-user
    import sys, tty, termios
    while True:
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(sys.stdin.fileno())
            ch = sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
        q.put(ch)
It is obvious to me that sys.stdin.read(1) blocks, but I don't know how to make it non-blocking. As things stand, I also cannot think of a way to handle the poison-pill situation, which is why q.get() in the job() function blocks.
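As an aside on the poison-pill idea mentioned above: the usual pattern is to push a sentinel object through the same queue, so the consumer's blocking get() is guaranteed to wake up for shutdown. A minimal sketch with hypothetical producer/consumer names:

```python
import queue
import threading

SENTINEL = object()   # poison pill: tells the consumer to stop

def producer(q):
    for ch in "abc":  # stand-in for keypresses read from stdin
        q.put(ch)
    q.put(SENTINEL)   # signal "no more input"

def consumer(q, seen):
    while True:
        ch = q.get()  # blocking is fine: the sentinel guarantees wake-up
        if ch is SENTINEL:
            break
        seen.append(ch)

q = queue.Queue()
seen = []
t1 = threading.Thread(target=producer, args=(q,))
t2 = threading.Thread(target=consumer, args=(q, seen))
t1.start(); t2.start()
t1.join(); t2.join()
print(seen)   # ['a', 'b', 'c']
```

This lets the consumer keep its simple blocking get() while still shutting down cleanly, instead of polling with block=False.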
I run the program by calling the following function:
def run_program():
    from threading import Thread
    from queue import Queue
    queue = Queue()
    thread1 = Thread( target=get_input, args=("Thread-1", queue) )
    thread2 = Thread( target=job, args=("Thread-2", queue) )
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
My Questions: is this how you would design a program to deal with asynchronous keypresses? If so, how do I make the get_input() function unblocked?
Thanks to Sav I found a way to answer this question. In my opinion his comment is the answer, so if he rewrites his comment as an answer, I'll accept it. For now, here is the part of the code I changed to get a non-blocking implementation working:
def job(threadname, q):
    from queue import Empty
    from time import sleep
    c = 0
    while True:
        sleep(0.1) #slow the loop down
        c += 1
        print(c)
        #Below is the changed part
        ch = None
        try:
            ch = q.get(block=False)
        except Empty:
            pass
        if ch is not None:
            handle_keypress(ch)

Need of the printer_lock when the print is inside an outer lock

I was viewing the talk given by Raymond Hettinger on concurrency in Python (https://www.youtube.com/watch?v=9zinZmE3Ogk) and I came across one of his code snippets demonstrating the use of locks in Python. Below is a sample of the code:
import threading
from threading import Thread

counter = 0
counter_lock = threading.Lock()
printer_lock = threading.Lock()

def worker():
    global counter
    with counter_lock:
        counter += 1
        with printer_lock:
            print(f"The count is {counter}")
            print("----------------")

with printer_lock:
    print("Starting up ...")

worker_threads = []
for i in range(10):
    t = Thread(target=worker)
    worker_threads.append(t)
    t.start()

for t in worker_threads:
    t.join()
My question is with this code block :
with printer_lock:
print(f"The count is {counter}")
print("----------------")
Why do we need the printer lock when only one thread at a time is executing these lines (due to the outer counter_lock)?
Can anyone throw some more light on this, please?
Thanks in advance.
In this particular example, I believe it won't make a difference if you remove the printer lock. That's because the only printing each thread does is printing the counter, inside the counter_lock. However, if your thread were doing other printing outside the counter_lock, you would want to take the printer_lock each time you print, to prevent output from multiple threads getting interleaved.
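A sketch of the situation that answer describes: if the printing is moved outside the counter_lock (a common refactor, to hold the counter lock as briefly as possible), the printer_lock becomes what keeps each thread's two-line report together:

```python
import threading

counter = 0
counter_lock = threading.Lock()
printer_lock = threading.Lock()

def worker():
    global counter
    with counter_lock:       # hold the data lock only for the update
        counter += 1
        n = counter
    # printing now happens outside counter_lock, so without the
    # printer_lock the two lines below could interleave across threads
    with printer_lock:
        print(f"The count is {n}")
        print("----------------")

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Here the printer_lock is doing real work: it guarantees that the count line and its separator always appear as an uninterrupted pair.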

My understanding of threading

I have not programmed in over a year now, so sorry if this is a stupid question.
I looked up lots of examples of threading on this site, but I seem to be getting a different outcome from other people.
From my understanding of threading, this simple code should be printing YO and LO together, something like
YO
LO
YO
LO
but instead I just get
YO
YO
YO
...
from threading import Thread
import time

def printYo():
    while(3>1):
        print ("YO")
        time.sleep(1)

def printLo():
    while(3>1):
        print ("LO")
        time.sleep(1)

t2 = Thread(target=printLo())
t = Thread(target=printYo())
t2.start()
t.start()
You are calling the function instead of just passing it as the target for your thread:
t2 = Thread(target=printLo)
t = Thread(target=printYo)
There are two problems. First of all, you should pass the thread function itself to the Thread constructor (i.e. Thread(target=printLo)); when the thread is started, it will call that function in a separate thread.
Second, you may (probably) want to keep the main thread running, which can be done with an idle loop. If you don't, you will not be able to stop the program with ^C, and you still have to handle the termination of the process.
The complete code will consequently be:
import os
from threading import Thread
import time

def printYo():
    while(3>1):
        print ("YO")
        time.sleep(1)

def printLo():
    while(3>1):
        print ("LO")
        time.sleep(1)

t2 = Thread(target=printLo)
t = Thread(target=printYo)
t2.start()
t.start()

try:
    while True:
        time.sleep(1)
except:
    os._exit(0)
A minor note: the prints will appear on separate lines. A more important note: there is no guarantee of the order in which the YOs and LOs will be printed - at first they are likely to alternate, but eventually one thread may get to print twice before the other gets a turn.
