Python telnet read issue

I'm trying to read from a Telnet server that sends no ending line or special character to tell the Python Telnet client that the read is finished. This data is then sent to a Tkinter text widget, which I want to update constantly with new data from the Telnet server. The problem I'm having is that I can't find a way to read from the Telnet server without blocking the loop. Thanks.
import telnetlib
import tkinter as tk

def Telnet_Client(self):
    HOST = self.TelnetHostIP
    tn = telnetlib.Telnet(HOST)
    tn.write(b"s")                         # telnetlib expects bytes on Python 3
    tnrecv = tn.read_until(b">", timeout=1)
    self.R.insert(tk.END, tnrecv)
    tn.close()
I have used read_some(), but I don't get all the data, and read_until(">", timeout=1) blocks the code because the server never sends an ending line or command to stop reading.

The traditional solution to this problem is to spawn a background thread to talk to the socket. That background thread can block on the read, and it won't affect any other threads. However, there is a problem with this: tkinter is not thread-safe, and attempting to update your Entry widget from a background thread will fail. (Depending on your platform, it may crash, block the program, or, worst of all, work intermittently and cause a slew of mysterious bugs.)
There are workarounds you can search for, but none of them are great.
The basic idea is to have the background thread send messages to the main thread—e.g., by posting them on a queue.Queue, which the main thread can check (with a get(block=False)). But checking each time through the event loop may be too often while you're moving the mouse, but not often enough while you're idle—and if you ask tkinter to fire your check every N seconds, that can keep a laptop from going to sleep. Also, getting this right isn't exactly hard, but it's not trivial.
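A minimal sketch of that queue-based hand-off, with the blocking Telnet reads replaced by a stub so it stays self-contained (the widget and root names are placeholders for your actual Tkinter objects):

```python
import queue
import threading
import time

msgs = queue.Queue()

def telnet_loop():
    # Background thread: the blocking tn.read_until()/read_some() calls
    # would live here; a stub stands in so the sketch is self-contained.
    for chunk in ("hello ", "world"):
        time.sleep(0.05)
        msgs.put(chunk)

def poll_queue(widget, root):
    # Runs on the Tkinter main thread (rescheduled via root.after):
    # drain everything currently queued without ever blocking.
    try:
        while True:
            widget.insert("end", msgs.get(block=False))
    except queue.Empty:
        pass
    root.after(100, poll_queue, widget, root)  # check again in 100 ms

threading.Thread(target=telnet_loop, daemon=True).start()
```

In the real program you would call poll_queue(self.R, root) once after starting the thread, and Tkinter keeps rescheduling it for you.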
There used to be a nice library that wrapped this all up as well as possible, called mtTkinter, but it was abandoned long ago. I ported it to Python 3 a few years back, but ended up not using it, so that version is effectively abandoned too. It might just work, but I'm not making any promises.
The advantage of this solution is that it's very easy: import mttkinter as tkinter, add a threading.Thread(target=telnet_loop), a couple more minor changes, and you're done… if it works.
The more modern solution is to use asyncio (or a predecessor like Twisted or a competitor like Curio).
You can drive the asyncio loop from the Tkinter event loop, and it's a lot cleaner than any of the threading workarounds. And there are ready-made libraries to do it for you. (I don't know the current state of things, but I used the original asyncio-tkinter a few years back.)
The only problem is that you can't use telnetlib, because it wasn't designed for asyncio. But there are almost certainly more modern Telnet libraries out there that were. (From a quick search, I found telnetlib3, which looks promising, but I don't know nearly enough to recommend it.)
Of course this solution requires rewriting most of your networking code—but you don't have very much of it, and it's not working, so that doesn't seem like too much of a tragedy. Your tkinter code, meanwhile, should only require a one-line change.
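As a rough sketch of what the networking side could look like, here is a version using the standard library's asyncio.open_connection, which gives you a plain TCP stream with none of telnetlib's option negotiation (telnetlib3 exposes a similar coroutine-based open_connection, but this sketch assumes nothing beyond the standard library):

```python
import asyncio

async def read_telnet(host, port, on_data):
    # Connect, send the initial command, then deliver data as it arrives;
    # read() returns whatever bytes are available instead of waiting
    # for a sentinel that the server may never send.
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b"s")
    await writer.drain()
    while True:
        chunk = await reader.read(1024)
        if not chunk:          # connection closed by the server
            break
        on_data(chunk.decode("ascii", "replace"))
    writer.close()
    await writer.wait_closed()
```

Here on_data would be a callback that schedules the Tkinter insert; driving the asyncio loop from Tkinter's loop is exactly what the asyncio-tkinter-style libraries handle for you.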

Related

Python keyboard library ignoring time.sleep()

The question started off with an Anki add-on being written in Python. In the middle of tuning my function (I thought the functionality was flawed because things weren't registering, so I added timeouts, but it turned out to be something else), I noticed that everything from the keyboard library seems to ignore the time.sleep() calls: it waits the total time, then bursts out everything at once.
import time
import keyboard
import pyperclip

def someFunction(self):
    keyboard.send("ctrl+a")
    time.sleep(1)
    keyboard.send("ctrl+c")
    time.sleep(3)
    rawText = pyperclip.paste()
    newText = string_manipulation(rawText)  # user-defined helper
    keyboard.write(newText)
This is the code from my project, which behaves as if it were equivalent to the following:
time.sleep(4)  # 1 + 3 = 4
keyboard.send("ctrl+a")
keyboard.send("ctrl+c")
keyboard.write(newText)
I thought it might be because I bundled the library myself, so I used Notepad++, a plain editor, and cmd to recreate the problem. To make it easier to observe, I made the time differences between the sleeps very obvious.
def example():
    time.sleep(3)
    keyboard.send("a")
    time.sleep(1)
    keyboard.send("b")
    time.sleep(10)
    keyboard.send("c")
So when running the script in cmd and staying in cmd, it waits for 11 seconds and then has an outburst of "abc".
But if I quickly switch to a text editor after executing the script, the text editor sees the time.sleep() delays applied normally.
system: windows
python version: 3.6.4
keyboard library version: 0.13.4 (latest install, on 10.06.2019)
So my questions follow:
What causes Python to treat time.sleep() in this chunky fashion?
If it is the keyboard library itself, are there ways around it?
(The documentation mentions that the library can sometimes plainly not work at all in other applications.)
If there is no way around it, are there alternative libraries?
(Any option that isn't pyautogui; I've tried hard to bundle it into my project, but its imports loop back on themselves every time, breaking everything.)
P.S. For the Python experts and PyQt add-on experts out there: I know this is far from an optimal way to achieve this goal. I'm still learning on my own and very new to programming, so if you have any advice on other means of accomplishing it, I would love to hear your ideas! :)
I'm new to Python myself, so I can't give you a Pythonic answer, but in C/C++ and other languages I've used, what Sleep() does is tell the system, "Hand the rest of my processing time slice with the CPU back to the system for another thread/process to use, and don't give me any time for however many seconds I specified."
So:
time.sleep(3)
keyboard.send("a")
time.sleep(1)
keyboard.send("b")
time.sleep(10)
keyboard.send("c")
This code first relinquishes processing immediately, for about three seconds; eventually control comes back to your thread and keyboard.send("a") is called. That probably tosses the "a" onto a queue of characters to be sent to the keyboard, but then you immediately tell your process to time.sleep(1), which interrupts the flow of your code and gives up approximately one second to the other threads/processes; then you send "b" to the queue and relinquish about ten more seconds.
When you finally come back to the keyboard.send("c") it's likely that you have "a" and "b" still in the queue because you never gave the under-the-hood processing a chance to do anything. If this is the main thread, you could be stopping all kinds of messages from being processed through the internal message queue, and now since you're not calling sleep anymore, you get "a", "b" and "c" sent to the keyboard out of the queue, seemingly all at once.
That's my best guess based on my old knowledge of other languages and how operating systems treat events being "sent" and message queues and those sorts of things.
Good luck! I could be completely wrong as to how the Python engine works, but ultimately this has to get down to the system level stuff and in Windows, there is a message queue that posts events like this into the queue to be processed.
Perhaps you can spin off another thread where the sends and sleeps happen, so that the main thread, where the system message processing usually lives, can keep ticking along and getting your characters to the keyboard. That way you're not forcing the main thread, which has lots of work to do, to give up its CPU time.
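A sketch of that idea, with keyboard.send() replaced by a stub (appending to a list) so it runs anywhere; in the real add-on the worker would call the actual keyboard functions:

```python
import threading
import time

sent = []

def send(key):
    # Stand-in for keyboard.send(); just records what would be typed.
    sent.append(key)

def typing_worker():
    # The sleeps happen on this worker thread, so the main thread stays
    # free to keep pumping the system message queue in the meantime.
    send("a")
    time.sleep(0.1)
    send("b")
    time.sleep(0.1)
    send("c")

worker = threading.Thread(target=typing_worker)
worker.start()
# ... the main thread keeps processing events here ...
worker.join()
```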

Calling Python code from Twisted

First of all, I should say that this might more a design question rather than about code itself.
I have a network with one server and multiple clients (written in Twisted because I need its asynchronous, non-blocking features); each server-client pair just sends and receives messages.
However, at some point I want one client to run a Python file when it receives a certain message. That client should keep listening and talking to the server; also, I should be able to stop that file if needed, so my first thought is to start a thread for that Python file and forget about it.
In the end it should go like this: the server sends a message to ClientA; ClientA's dataReceived function interprets the message and decides to run that Python file (which I don't know how long will take, and which may contain blocking calls); when that Python file finishes running, it should send the result to ClientB.
So, questions are:
Would it be starting a thread a good idea for that python file in ClientA?
As I want to send the result of that python file to ClientB, can I have another reactor loop inside that python file?
In any case, I would highly appreciate any kind of advice, as both Python and Twisted are not my specialty and all these ideas may not be the best ones.
Thanks!
At first reading, I thought you were implying Twisted isn't Python. If you are thinking that, keep in mind the following:
Twisted is a Python framework, i.e., it is Python. Specifically, it's about getting the most out of a single process/thread/core by allowing the programmer to hand-tune the scheduling/order of operations in their own code (which is nearly the opposite of the typical use of threads).
While you can interact with threads in Twisted, it's quite tricky to do without ruining Twisted's efficiency. (For a longer description of threads vs. events, see this SO answer: https://stackoverflow.com/a/23876498/3334178 )
If you really want to spawn your new Python code away from your Twisted code (i.e., get this work running on a different core), then I would look at spawning it off as a process; see Glyph's answer in this SO: https://stackoverflow.com/a/5720492/3334178 for good libraries to get that done.
Processes give the proper separation to allow your twisted apps to run without meaningful slowdown and you should find all your starting/stoping/pausing/killing needs will be fulfilled.
Answering your questions specifically:
Would it be starting a thread a good idea for that python file in ClientA?
I would say "no," it's generally not a good idea, and in your specific case you should look at using processes instead.
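In Twisted itself, reactor.spawnProcess with a ProcessProtocol gives you asynchronous notification of the child's output and exit. As a standard-library-only sketch of the start/read/stop control a separate process buys you (the inline -c script is a stand-in for "that python file"):

```python
import subprocess
import sys

# The -c one-liner stands in for the Python file ClientA should run.
child = subprocess.Popen(
    [sys.executable, "-c", "print('result'); import time; time.sleep(60)"],
    stdout=subprocess.PIPE,
)
# The parent (your Twisted client) keeps its own event loop running
# while the child computes on another core.
first_line = child.stdout.readline().decode().strip()
child.terminate()   # "stop that file if needed"
child.wait()
```

Note that readline() here blocks; that is exactly the part Twisted's spawnProcess replaces with a callback, which is why it is the better fit inside a reactor.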
Can I have another reactor loop inside that python file?
Strictly speaking, no, you can't have multiple reactors. But what Twisted can do is concurrently manage hundreds or thousands of separate tasks in the same reactor, and that will do what you need: run all your different async tasks in one reactor; that is what Twisted is built for.
BTW, I always recommend the following tutorial for Twisted: the krondo Twisted Introduction, http://krondo.com/?page_id=1327 . It's long, but if you get through it, this kind of work will become very clear.
All the best!

Freezing of threads with no recognizable pattern

I have a Python application that uses wxPython and some additional threads. One thread uses PIL.Image.open. Under certain circumstances the application freezes, leaving an incomplete GUI. I found out that it hangs at PIL.Image.open. When I put debug prints in the PIL module, I can see that sometimes it hangs here, sometimes there, which I can't understand; the locations seem totally unrelated.
Is there anything a thread can do in Python that causes other threads to stop at otherwise unproblematic lines like import string? Or could wxPython exert such influence?
Long running tasks will freeze a GUI, like wxPython or Tkinter. Putting the long running process into a thread usually takes care of the issue though. I am guessing that you are doing something in your thread that communicates with wxPython in a non-thread-safe manner. If you are not using wx.CallAfter, wx.CallLater or wx.PostEvent to communicate with wxPython from the thread, then that is the issue. You have to use one of those methods.
Otherwise we'll need a small runnable example to diagnose the issue.

Multi-Threading and Asynchronous sockets in python

I'm quite new to Python threading/network programming, but I have an assignment involving both of the above.
One of the requirements of the assignment is that I spawn a new thread for each new request, but I need to both send to and receive from the browser at the same time.
I'm currently using Python's asyncore library to catch each request, but as I said, I need to spawn a thread per request, and I was wondering whether using both threads and asynchronous I/O is overkill, or the correct way to do it.
Any advice would be appreciated.
Thanks
EDIT:
I'm writing a proxy server, and I'm not sure whether my client connection is persistent. My client is my browser (Firefox, for simplicity).
It seems to reconnect for each request. My problem is that if I open one tab with http://www.google.com and another with http://www.stackoverflow.com, I only get one request at a time from each tab, instead of multiple requests from Google and from SO.
I answered a question that sounds amazingly similar to yours, where someone had a homework assignment to create a client-server setup with each connection handled in a new thread: https://stackoverflow.com/a/9522339/496445
The general idea is that you have a main server loop constantly looking for a new connection to come in. When it does, you hand it off to a thread which will then do its own monitoring for new communication.
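A minimal version of that loop using only the standard socket and threading modules (echoing data back stands in for the real proxy logic):

```python
import socket
import threading

def handle_client(conn):
    # One thread per client: a blocking recv() here stalls only this
    # client, not the accept loop or the other connections.
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)  # echo; a proxy would forward this instead

def accept_loop(server_sock):
    while True:
        try:
            conn, _addr = server_sock.accept()  # block until a client arrives
        except OSError:
            break  # listening socket was closed: shut down
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server_sock.listen()
threading.Thread(target=accept_loop, args=(server_sock,), daemon=True).start()
```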
An extra bit about asyncore vs threading
From the asyncore docs:
There are only two ways to have a program on a single processor do
“more than one thing at a time.” Multi-threaded programming is the
simplest and most popular way to do it, but there is another very
different technique, that lets you have nearly all the advantages of
multi-threading, without actually using multiple threads. It’s really
only practical if your program is largely I/O bound. If your program
is processor bound, then pre-emptive scheduled threads are probably
what you really need. Network servers are rarely processor bound,
however.
As this quote suggests, using asyncore and threading should be for the most part mutually exclusive options. My link above is an example of the threading approach, where the server loop (either in a separate thread or the main one) does a blocking call to accept a new client. And when it gets one, it spawns a thread which will then continue to handle the communication, and the server goes back into a blocking call again.
In the pattern of using asyncore, you would instead use its async loop which will in turn call your own registered callbacks for various activity that occurs. There is no threading here, but rather a polling of all the open file handles for activity. You get the sense of doing things all concurrently, but under the hood it is scheduling everything serially.

Apparent time-travelling via python's multiprocessing module: surely I've done something wrong

I use python for video-game-like experiments in cognitive science. I'm testing out a device that detects eye movements via EOG, and this device talks to the computer via USB. To ensure that data is being continuously read from the USB while the experiment does other things (like changing the display, etc), I thought I'd use the multiprocessing module (with a multicore computer of course), put the USB reading work in a separate worker process, and use a queue to tell that worker when events of interest occur in the experiment. However, I've encountered some strange behaviour such that even when there is 1 second between the enqueuing of 2 different messages to the worker, when I look at the worker's output at the end, it seems to have received the second almost immediately after the first. Surely I've coded something awry, but I can't see what, so I'd very much appreciate help anyone can provide.
I've attempted to strip down my code to a minimal example demonstrating this behaviour. If you go to this gist:
https://gist.github.com/914070
you will find "multiprocessing_timetravel.py", which codes the example, and "analysis.R", which analyzes the "temp.txt" file that results from running "multiprocessing_timetravel.py". "analysis.R" is written in R and requires you have the plyr library installed, but I've also included example of the analysis output in the "analysis_results.txt" file at the gist.
Despite working with multiprocessing, your queue still uses synchronization objects (two locks and a semaphore), and the put method spawns another thread (based on the 2.7 source). So GIL contention (and other fun stuff) may come into play, as suggested by BlueRaja. You can try playing with sys.setcheckinterval() and see whether decreasing it also decreases the observed discrepancy, although you wouldn't want to run normally in that condition.
Note that, if your USB reading code drops the GIL (e.g. ctypes code, or a Python extension module designed to drop the GIL), you do get true multithreading, and a threaded approach might be more productive than using multiprocessing.
Ah, I solved it, and it turned out to be much simpler than I expected. There were 5 events per "trial", and the final event triggered a write of data to disk. If this final write takes a long time, the worker may not grab the next trial's first event until after the second event has already been put into the queue. When this happens, the first event lasts (in the worker's eyes) for only one of its loops before it encounters the second event. I'll have to either figure out a faster way to write out the data or leave the data in memory until a break in the experiment permits a long write.
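The effect is easy to reproduce with a plain queue.Queue and a thread (timings scaled down; the slow end-of-trial write is simulated with a sleep):

```python
import queue
import threading
import time

events = queue.Queue()
gaps = []  # seconds between the worker receiving successive events

def worker():
    time.sleep(0.3)  # simulates the slow write to disk after a trial
    last = time.monotonic()
    for _ in range(2):
        events.get()                  # both events are already queued by now
        now = time.monotonic()
        gaps.append(now - last)
        last = now

t = threading.Thread(target=worker)
t.start()
events.put("event 1")
time.sleep(0.1)   # the "1 second" between enqueues, scaled down
events.put("event 2")
t.join()
# gaps[1] is near zero: the worker sees event 2 right after event 1,
# even though they were enqueued 0.1 s apart.
```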
