Python keyboard library ignoring time.sleep() - python

The question started off with an Anki add-on that is being written in Python. In the middle of tuning my function (I thought the functionality was flawed because things weren't registering, so I added timeouts, but it turned out to be something else), I noticed that everything from the keyboard library seems to ignore the time.sleep() calls, wait the total time, and then burst everything out at once.
def someFunction(self):
    keyboard.send("ctrl+a")
    time.sleep(1)
    keyboard.send("ctrl+c")
    time.sleep(3)
    rawText = pyperclip.paste()
    newText = string_manipulation(rawText)
    keyboard.write(newText)
This is the code from my project. In practice it behaves as if it were equivalent to the following:
time.sleep(4) #1+3=4
keyboard.send("ctrl+a")
keyboard.send("ctrl+c")
keyboard.write(newText)
I thought it might be because I bundled the library myself, so I used Notepad++ as a plain editor together with cmd to recreate the problem. To make it easier to observe, I made the time differences between the sleeps very obvious.
def example():
    time.sleep(3)
    keyboard.send("a")
    time.sleep(1)
    keyboard.send("b")
    time.sleep(10)
    keyboard.send("c")
When running the script in cmd and staying in cmd, it waits for the total sleep time and then bursts out "abc" all at once.
But if I quickly switch to a text editor after executing the script in cmd, then in the text editor the time.sleep() calls are respected normally.
system: windows
python version: 3.6.4
keyboard library version: 0.13.4 (latest install, on 10.06.2019)
So my questions are:
What causes Python to treat time.sleep() in this chunked fashion?
If it is the keyboard library itself, are there ways around it?
(The documentation mentions that the library can sometimes simply not work at all in certain applications.)
If there is no way around it, are there alternative libraries?
(Preferably an option that isn't pyautogui; I've tried hard to bundle it into my project, but its imports keep looping back on themselves, breaking everything.)
P.S. For the Python experts and PyQt add-on experts out there: I know this is far from the optimal way to achieve this goal. I am still learning on my own and very new to programming, so if you have any advice on other means of accomplishing it, I would love to hear your ideas! :)

I'm new to Python myself so I can't give you a pythonic answer, but in C/C++ and other languages I've used, what Sleep() does is tell the system, "Hand off the rest of my processing time slice with the CPU back to the system for another thread/process to use, and don't give me any time for however many seconds I specified."
So:
time.sleep(3)
keyboard.send("a")
time.sleep(1)
keyboard.send("b")
time.sleep(10)
keyboard.send("c")
This code first relinquishes processing immediately, for about three seconds; eventually the system comes back to your thread and keyboard.send("a") is called. That probably ends up tossing the "a" onto a queue of characters to be sent to the keyboard, but then you immediately tell your process to time.sleep(1), which interrupts the flow of your code and gives up approximately one second to the other threads/processes; then you send "b" to the queue and relinquish about ten more seconds to the other threads/processes.
When you finally come back to the keyboard.send("c") it's likely that you have "a" and "b" still in the queue because you never gave the under-the-hood processing a chance to do anything. If this is the main thread, you could be stopping all kinds of messages from being processed through the internal message queue, and now since you're not calling sleep anymore, you get "a", "b" and "c" sent to the keyboard out of the queue, seemingly all at once.
That's my best guess based on my old knowledge of other languages and how operating systems treat events being "sent" and message queues and those sorts of things.
Good luck! I could be completely wrong about how the Python engine works, but ultimately this comes down to system-level behaviour, and in Windows there is a message queue into which events like these are posted to be processed.
Perhaps you can spin off another thread where the sends and sleeps happen, so that the main thread, where the system message processing usually lives, can keep ticking along and getting your characters to the keyboard. That way you're not forcing the main thread, which has lots of work to do, to give up its CPU time.
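As a minimal sketch of that idea (untested against the asker's Anki add-on; string_manipulation() is the asker's own helper, so a trivial stand-in is used here):

import threading
import time

import keyboard   # third-party "keyboard" library
import pyperclip


def copy_transform_paste():
    # Runs in a worker thread, so the main/GUI thread stays free
    # to keep pumping its message queue.
    keyboard.send("ctrl+a")
    time.sleep(1)
    keyboard.send("ctrl+c")
    time.sleep(3)
    raw_text = pyperclip.paste()
    new_text = raw_text.upper()   # stand-in for string_manipulation(raw_text)
    keyboard.write(new_text)


worker = threading.Thread(target=copy_transform_paste, daemon=True)
worker.start()

Whether this actually removes the burst behaviour depends on what is blocking the message pump inside the add-on, so treat it as something to try rather than a guaranteed fix.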

Related

Python telnet read issue

I'm trying to read from a Telnet server that sends no ending line or special character to tell the Python Telnet client that the read is finished. This data is then sent to a tkinter text entry widget, which I want to constantly update with new data sent from the Telnet server. The problem I'm having is that I can't find a way to read from the Telnet server without blocking the loop. Thanks.
def Telnet_Client(self):
    HOST = self.TelnetHostIP
    tn = telnetlib.Telnet(HOST)
    tn.write("s")
    tnrecv = tn.read_until(">", timeout=1)
    self.R.insert(tk.END, tnrecv)
    tn.close()
I have used read_some(), but I don't get all the data, and read_until(">", timeout=1) blocks the code because it never gets an ending line or command to stop reading.
The traditional solution to this problem is to spawn a background thread to talk to the socket. That background thread can block on the read, and it won't affect any other threads. However, there is a problem with this: tkinter is not thread-safe, and attempting to update your Entry widget from a background thread will fail. (Depending on your platform, it may crash, block the program, or, worst of all, work intermittently and cause a slew of mysterious bugs.)
There are workarounds you can search for, but none of them are great.
The basic idea is to have the background thread send messages to the main thread, e.g. by posting them on a queue.Queue, which the main thread can check with get(block=False). The catch is that checking every time through the event loop may be too often while you're moving the mouse yet not often enough while you're idle, and if you instead ask tkinter to fire your check every N seconds, that can keep a laptop from going to sleep. Also, getting this right isn't exactly hard, but it's not trivial.
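A rough sketch of that thread-plus-queue pattern (the host address and the 200 ms polling interval are made up, and a bare tk.Text stands in for the asker's widget):

import queue
import telnetlib
import threading
import tkinter as tk

msg_queue = queue.Queue()

def telnet_loop(host):
    # Background thread: blocking reads are fine here.
    tn = telnetlib.Telnet(host)
    tn.write(b"s")
    while True:
        data = tn.read_some()          # blocks until some data arrives
        if not data:                   # empty bytes means the connection closed
            break
        msg_queue.put(data)

root = tk.Tk()
text = tk.Text(root)
text.pack()

def poll_queue():
    # Main (tkinter) thread: drain whatever the worker has posted so far.
    try:
        while True:
            data = msg_queue.get(block=False)
            text.insert(tk.END, data.decode("ascii", "replace"))
    except queue.Empty:
        pass
    root.after(200, poll_queue)        # check again in 200 ms

threading.Thread(target=telnet_loop, args=("192.0.2.1",), daemon=True).start()
root.after(200, poll_queue)
root.mainloop()

The root.after() reschedule is the "fire your check every N seconds" part; how aggressively to poll is exactly the trade-off described above.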
There used to be a nice library that wrapped this all up as well as possible, called mtTkinter, but it was abandoned long ago. I ported it to Python 3 a few years back, but ended up not using it, so that version is effectively abandoned too. It might just work, but I'm not making any promises.
The advantage of this solution is that it's very easy: import mttkinter as tkinter, add a threading.Thread(target=telnet_loop), a couple more minor changes, and you're done… if it works.
The more modern solution is to use asyncio (or a predecessor like Twisted or a competitor like Curio).
You can drive the asyncio loop from the Tkinter event loop, and it's a lot cleaner than any of the threading workarounds. And there are ready-made libraries to do it for you. (I don't know the current state of things, but I used the original asyncio-tkinter a few years back.)
The only problem is that you can't use telnetlib, because it wasn't designed for asyncio. But there are almost certainly more modern Telnet libraries out there that were. (From a quick search, I found telnetlib3, which looks promising, but I don't know nearly enough to recommend it.)
Of course this solution requires rewriting most of your networking code—but you don't have very much of it, and it's not working, so that doesn't seem like too much of a tragedy. Your tkinter code, meanwhile, should only require a one-line change.
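To give a flavour of the asyncio side only (deliberately using plain asyncio.open_connection rather than any particular Telnet library, with a made-up host and a fixed ten-second demo; wiring on_data into tkinter would still need one of the loop-integration approaches mentioned above):

import asyncio

async def read_telnet(host, port, on_data):
    # Connect and keep reading; on_data is called with each chunk of bytes.
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b"s")
    await writer.drain()
    while True:
        data = await reader.read(1024)   # yields to the event loop while waiting
        if not data:                     # connection closed by the server
            break
        on_data(data)
    writer.close()
    await writer.wait_closed()

async def main():
    # Demo driver: print whatever arrives for ten seconds, then stop.
    task = asyncio.create_task(read_telnet("192.0.2.1", 23, print))
    await asyncio.sleep(10)
    task.cancel()
    try:
        await task
    except (asyncio.CancelledError, OSError):
        pass

asyncio.run(main())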

Isolating code with a Python thread

I'm writing a program in which I want to evaluate a piece of code asynchronously. I want it to be isolated from the main thread so that it can raise an error, enter an infinite loop, or just about anything else without disrupting the main program. I was hoping to use threading.Thread, but this has a major problem; I can't figure out how to stop it. I have tried Thread._stop(), but that frequently doesn't work. I end up with a thread that I can't control hogging both interpreter time and CPU power. The code in the thread doesn't open any files or do anything else that would cause problems if I hard-killed it.
Python's multiprocessing.Process.terminate() does this really well; unfortunately, initiating a process on Windows takes nearly a second, which is long enough to cause annoying delays in my GUI.
Does anyone know either a: how to kill a Python thread (I don't think I care how dirty the exit is), or b: how to speed up starting a process?
A third possibility would be a third-party library that provides an alternative method for asynchronous execution, but I've never heard of any such thing.
In my case, the best way to do this seems to be to maintain a running worker process, and send the code to it on an as-needed basis. If the process acts up, I kill it and then start a new one immediately to avoid any delay the next time.
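A minimal sketch of that keep-a-worker-alive pattern, assuming the tasks can be expressed as code strings and that exec() in a throwaway process is acceptable for this use case:

import multiprocessing as mp
import time

def worker_main(task_queue):
    # Long-lived worker: pull code strings off the queue and run them.
    while True:
        code = task_queue.get()
        try:
            exec(code, {})                      # fresh globals for each task
        except Exception as exc:
            print("task failed:", exc)

class Worker:
    def __init__(self):
        self._spawn()

    def _spawn(self):
        self.queue = mp.Queue()
        self.proc = mp.Process(target=worker_main, args=(self.queue,), daemon=True)
        self.proc.start()                       # paid once, so later tasks start with no delay

    def run(self, code):
        self.queue.put(code)

    def restart(self):
        # Hard-kill a misbehaving worker and immediately start a fresh one.
        self.proc.terminate()
        self.proc.join()
        self._spawn()

if __name__ == "__main__":
    w = Worker()
    w.run("print('hello from the worker')")
    time.sleep(1)                               # give the worker a moment to run it
    w.run("while True: pass")                   # a runaway task...
    time.sleep(1)
    w.restart()                                 # ...hard-killed; a fresh worker is ready at once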

How to identify the cause in Python of code that is not interruptible with a Ctrl+C

I am using requests to pull some files. I have noticed that the program seems to hang after some large number of iterations that varies from 5K to 20K. I can tell it is hanging because the folder where the results are stored has not changed in several hours. I have been trying to interrupt the process (I am using IDLE) by hitting CTRL + C to no avail. I would like to interrupt instead of killing the process because restart is easier. I have finally had to kill the process. I restart and it runs fine again until I have the same symptoms. I would like to figure out how to diagnose the problem but since I am having to kill everything I have no idea where to start.
Is there an alternate way to view what is going on or to more robustly interrupt the process?
I have been assuming that if I can interrupt without killing I can look at globals and or do some other mucking around to figure out where my code is hanging.
In case it's not too late: I've just faced the same problems and have some tips.
First thing: in Python, most waiting APIs are not interruptible (e.g. Thread.join(), Lock.acquire(), ...).
Have a look at these pages for more information:
http://snakesthatbite.blogspot.fr/2010/09/cpython-threading-interrupting.html
http://docs.python.org/2/library/thread.html
So if a thread is waiting on such a call, it cannot be stopped.
There is another thing to know: if a normal (non-daemon) thread is running (or hung), the main program will stay alive indefinitely until all threads are stopped or the process is killed.
To avoid that, you can make the thread a daemon thread: set Thread.daemon = True before calling Thread.start().
Second thing: to find where your program is hung, you can launch it with a debugger, but I prefer logging, because logs are still there when it's too late to debug.
Try logging before and after each waiting call to see how long your threads have been hung. To get high-quality logs, use Python's logging module configured with a file handler, an HTML handler, or, even better, a syslog handler.
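A small sketch combining both tips (the log file name and the download loop are made up for illustration):

import logging
import threading
import time

logging.basicConfig(
    filename="worker.log",                 # FileHandler; swap in a SysLogHandler if you have one
    level=logging.INFO,
    format="%(asctime)s %(threadName)s %(message)s",
)

def download_loop():
    # Hypothetical stand-in for the requests-based download loop.
    for i in range(5):
        logging.info("starting request %d", i)
        time.sleep(1)                      # imagine requests.get(...) here
        logging.info("finished request %d", i)

# daemon=True means the program can still exit even if this thread
# ends up stuck in an uninterruptible wait.
t = threading.Thread(target=download_loop, daemon=True)
t.start()

try:
    while t.is_alive():
        t.join(timeout=0.5)                # short timeouts keep CTRL+C responsive
except KeyboardInterrupt:
    logging.info("interrupted by user; check worker.log to see where it was")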

Python: Continuously and cancelably repeat execution with fixed interval

What is the best way to continuously repeat the execution of a given function at a fixed interval while being able to terminate the executor (thread or process) immediately?
Basically I know two approaches:
use multiprocessing and a function with an infinite loop and time.sleep() at the end; processing is terminated with process.terminate() in any state.
use threading and constantly recreate timers at the end of the thread function; processing is terminated by timer.cancel() while sleeping.
(Both "in any state" and "while sleeping" are fine, even though the latter may not be immediate.) The problem is that I have to use both multiprocessing and threading: the latter appears not to work on ARM (some fuzzy interaction of the Python interpreter and vim; outside of vim everything is fine; I was using the second approach there, have not tried threading plus a loop, and no code from that is currently left), and the former spawns way too many processes, which I would prefer not to see unless really required. This leads to the problem of having to code two different approaches, while threading with a loop is just a few more imports plus drop-in replacements of all the multiprocessing stuff wrapped in if/else (except that there is no thread.terminate()). Is there some better way to do the job?
Currently used code is here (currently with cycle for both jobs), but I do not think it will be much useful to answer the question.
Update: The reason I am using this solution is a set of functions that display file status (and some other things, like the branch) of version control systems in the vim statusline. These statuses must be updated, but updating them immediately cannot be done without hooks, and I have no idea how to set hooks temporarily and remove them on vim quit without possibly spoiling the user's configuration. Thus the standard solution is a cache that expires after N seconds. But when the cache expires I need to do an expensive shell call, and the delay is noticeable, the more so the heavier the IO load is. What I am implementing now is updating the values for viewed buffers every N seconds in a separate process, so the delays bother that process and not me. Threads are likely to work too, because the GIL does not affect calls to external programs.
I'm not clear on why a single long-lived thread that loops infinitely over the tasks wouldn't work for you? Or why you end up with many processes in the multiprocess option?
My immediate reaction would have been a single thread with a queue to feed it things to do. But I may be misunderstanding the problem.
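For what it's worth, one way to make a single long-lived thread repeat at a fixed interval and still be cancellable immediately, even mid-sleep, is to wait on a threading.Event instead of calling time.sleep(). A sketch with a made-up task function (note that this only cancels between runs of the task, not in the middle of one):

import threading
import time

class RepeatingTask:
    def __init__(self, interval, func):
        self.interval = interval
        self.func = func
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Event.wait() doubles as an interruptible sleep: it returns True
        # as soon as stop() sets the event, otherwise False after the timeout.
        while not self._stop.wait(self.interval):
            self.func()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()                   # takes effect immediately, even mid-wait
        self._thread.join()

def refresh_status():
    print("refreshing VCS status...")      # stand-in for the expensive shell call

task = RepeatingTask(2.0, refresh_status)
task.start()
time.sleep(7)                              # pretend the editor session runs for a while
task.stop()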
I do not know how to do this simply and/or cleanly in Python, but I was wondering whether you couldn't take advantage of an existing system scheduler, e.g. crontab on *nix systems.
There is a Python API for it, and it might satisfy your needs.

Apparent time-travelling via python's multiprocessing module: surely I've done something wrong

I use python for video-game-like experiments in cognitive science. I'm testing out a device that detects eye movements via EOG, and this device talks to the computer via USB. To ensure that data is being continuously read from the USB while the experiment does other things (like changing the display, etc), I thought I'd use the multiprocessing module (with a multicore computer of course), put the USB reading work in a separate worker process, and use a queue to tell that worker when events of interest occur in the experiment. However, I've encountered some strange behaviour such that even when there is 1 second between the enqueuing of 2 different messages to the worker, when I look at the worker's output at the end, it seems to have received the second almost immediately after the first. Surely I've coded something awry, but I can't see what, so I'd very much appreciate help anyone can provide.
I've attempted to strip down my code to a minimal example demonstrating this behaviour. If you go to this gist:
https://gist.github.com/914070
you will find "multiprocessing_timetravel.py", which codes the example, and "analysis.R", which analyzes the "temp.txt" file that results from running "multiprocessing_timetravel.py". "analysis.R" is written in R and requires you to have the plyr library installed, but I've also included an example of the analysis output in the "analysis_results.txt" file at the gist.
Despite working with multiprocessing, your queue still uses synchronization objects (two locks and a semaphore), and the put method spawns another thread (based on the 2.7 source). So GIL contention (and other fun stuff) may come into play, as suggested by BlueRaja. You can try playing with sys.setcheckinterval() and see if decreasing it also decreases the observed discrepancy, although you don't want to run normally in that condition.
Note that, if your USB reading code drops the GIL (e.g. ctypes code, or a Python extension module designed to drop the GIL), you do get true multithreading, and a threaded approach might be more productive than using multiprocessing.
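As a rough illustration of that threaded shape, where read_eog_sample() is a hypothetical stand-in for whatever ctypes or extension call actually blocks on the USB device while releasing the GIL:

import queue
import threading
import time

def read_eog_sample():
    # Hypothetical: a ctypes/extension call that blocks on the device
    # and releases the GIL while it waits.
    time.sleep(0.01)
    return b"\x00\x01"

samples = queue.Queue()

def usb_reader(stop_event):
    # Keeps draining the device continuously, independent of what the
    # experiment's main loop is doing.
    while not stop_event.is_set():
        samples.put((time.time(), read_eog_sample()))

stop = threading.Event()
threading.Thread(target=usb_reader, args=(stop,), daemon=True).start()

# Main experiment loop: change the display, enqueue event markers, etc.
time.sleep(1.0)
stop.set()
print("collected", samples.qsize(), "samples")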
Ah, I solved it and it turned out to be much simpler than I expected. There were 5 events per "trial" and the final event triggered a write of data to the HD. If this final write takes a long time, the worker may not grab the next trial's first event until the second event has already been put into the queue. When this happens, the first event lasts (to the worker's eyes) for only one of its loops before it encounters the second event. I'll have to either figure out a faster way to write out the data or leave the data in memory until a break in the experiment permits a long write.
