Python function to print a dot to console while HTTP call executes

I am somewhat new to Python. I have looked around but cannot find an answer that fits exactly what I am looking for.
I have a function that makes an HTTP call using the requests package. I'd like to print a '.' to the screen (or any char), say every 10 seconds, while the HTTP request executes, and stop printing when it finishes. So something like:
def make_call():
    rsp = requests.post(url, data=file)
    # while requests.post is executing, print('.')
Of course the above code is just pseudo code but hopefully illustrates what I am hoping to accomplish.

Every function call from the requests module is blocking, so your program waits until the function returns a value. The simplest solution is to use the built-in threading library, as was already suggested. This module gives you a form of code "parallelism"*. In your example you need one thread for the request, which is blocked until the request finishes, and another for the printing.
If you want to learn more about more advanced solutions see this answer https://stackoverflow.com/a/14246030/17726897
Here's how you can achieve your desired functionality using the threading module
import threading
import requests
from time import sleep

def print_function(stop_event):
    # print a dot every 10 seconds until the event is set
    while not stop_event.is_set():
        print(".")
        sleep(10)

should_stop = threading.Event()
thread = threading.Thread(target=print_function, args=[should_stop])
thread.start()  # start the printing before the request
rsp = requests.post(url, data=file)  # the request blocks the main thread, but not the printing thread
should_stop.set()  # request finished, signal the printing thread to stop
thread.join()  # wait for the thread to stop
# handle response
* parallelism is in quotes because of the Global Interpreter Lock (GIL): code statements from different threads aren't actually executed at the same time.

I don't really get what you're looking for, but if you want two things processed at the same time you can use the threading module.
Example:
import threading
import requests
from time import sleep

def make_request():
    # repeatedly POST in the background
    while True:
        req = requests.post(url, data=file)
        sleep(10)

make_request_thread = threading.Thread(target=make_request)
make_request_thread.start()

while True:
    print("I used multi threading to do two tasks at the same time")
    sleep(10)
Or you can use the very simple schedule module to schedule your tasks in an easy way.
docs: https://schedule.readthedocs.io/en/stable/#when-not-to-use-schedule
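For illustration, a minimal sketch using schedule; note that schedule runs jobs in the calling thread, so the blocking request still needs its own thread (request_in_background is a hypothetical wrapper, and url and file come from the question):

import threading
import time
import schedule
import requests

def request_in_background():
    # hypothetical wrapper: run the blocking POST in a worker thread
    rsp = requests.post(url, data=file)

def print_dot():
    print(".")

worker = threading.Thread(target=request_in_background)
worker.start()

schedule.every(10).seconds.do(print_dot)  # schedule a dot every 10 seconds
while worker.is_alive():
    schedule.run_pending()  # run any job that is due
    time.sleep(1)
# request finished; dots stop printing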

import threading
import requests
from time import sleep

### Function prints and stops when the answer comes ###
def Print_Every_10_seconds(stop_event):
    while not stop_event.is_set():
        print(".")
        sleep(10)

### Separate flow of execution ###
Stop = threading.Event()
thread = threading.Thread(target=Print_Every_10_seconds, args=[Stop])

### Before the request, the thread starts printing ###
thread.start()

### Blocking of the main thread (the print thread continues) ###
Block_thread_1 = requests.post(url, data=file)

### Thread stops ###
Stop.set()
thread.join()

The code below also solves the problem asked. It will print "POST Data.." and an additional trailing '.' every second until the HTTP POST returns.
import concurrent.futures as fp
import logging
import requests
from time import sleep

logging.basicConfig(level=logging.INFO)  # needed so logging.info is actually emitted

with fp.ThreadPoolExecutor(max_workers=1) as executor:
    post = executor.submit(requests.post, url, data=fileobj, timeout=20)
    logging.StreamHandler.terminator = ''  # suppress the newline after the log message
    logging.info("POST Data..")
    while post.running():
        print('.', end='', flush=True)
        sleep(1)
    print('')
    logging.StreamHandler.terminator = '\n'
    http_response = post.result()

Related

How to send a CTRL-C signal to individual threads in Python?

I am trying to figure out how to properly send a CTRL-C signal on Windows using Python. Earlier I was messing around with youtube-dl and embedded it into a PyQt QThread to do the processing, and created a stop button to stop the thread, but when trying to download a livestream I was unable to get FFMPEG to stop even after closing the application; I'd have to manually kill the process, which breaks the video every time.
I knew I'd have to send it a CTRL-C signal somehow and ended up using this.
os.kill(signal.CTRL_C_EVENT, 0)
I was actually able to get it to work, but if you try to download more than one video and stop one of the threads with the above signal, it kills all the downloads.
Is there any way to send the signal to just one thread without affecting the others?
Here is an example of some regular Python code with 2 separate threads, where the CTRL-C signal is fired in thread_2 after 10 seconds, which ends up killing thread_1.
import os
import signal
import threading
import time
import youtube_dl

def thread_1():
    print("thread_1 running")
    url = 'https://www.cbsnews.com/common/video/cbsn_header_prod.m3u8'
    path = 'C:\\Users\\Richard\\Desktop\\'
    ydl_opts = {
        'format': 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best',
        'outtmpl': '{0}%(title)s-%(id)s.%(ext)s'.format(path),
        'nopart': True,
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        try:
            ydl.download([url])
        except KeyboardInterrupt:
            print('stopped')

def thread_2():
    print("thread_2 running")
    time.sleep(10)
    os.kill(signal.CTRL_C_EVENT, 0)

def launch_thread(target, message, args=[], kwargs={}):
    def thread_msg(*args, **kwargs):
        target(*args, **kwargs)
        print(message)
    thread = threading.Thread(target=thread_msg, args=args, kwargs=kwargs)
    thread.start()
    return thread

if __name__ == '__main__':
    thread1 = launch_thread(thread_1, "finished thread_1")
    thread2 = launch_thread(thread_2, "finished thread_2")
Does anyone have any suggestions or ideas? Thanks.
It is not possible to send signals to another thread, so you need to do something else.
You could possibly raise an exception in another thread, using this hack (for which I won't copy the source here because it comes with an MIT license):
http://tomerfiliba.com/recipes/Thread2/
With that, you could send a KeyboardInterrupt exception to the other thread, which is what happens with Ctrl-C anyway.
While it seems like this would do what you want, it would still break the video which is currently downloading.
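The core of that recipe is CPython's PyThreadState_SetAsyncExc API, exposed through ctypes. A minimal sketch of the idea (not the recipe's full, more careful code):

import ctypes
import threading

def async_raise(thread, exc_type):
    # ask CPython to raise exc_type inside the given thread; the exception
    # is only delivered when that thread next executes Python bytecode,
    # so a thread blocked inside C code (e.g. FFMPEG) won't see it immediately
    tid = ctypes.c_long(thread.ident)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exc_type))
    if res > 1:
        # we affected more than one thread state: undo and fail loudly
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
        raise SystemError("PyThreadState_SetAsyncExc failed")

# usage: async_raise(worker_thread, KeyboardInterrupt)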
On the other hand, since you seem to only be interested in killing all threads when the main thread exits, that can be done in a much simpler way:
Configure all threads as daemons, e.g.:
thread = threading.Thread(target=thread_msg, args=args, kwargs=kwargs)
thread.daemon = True
thread.start()
These threads will exit when the main thread exits, without any additional intervention needed from you.
Is there any way to send the signal to just one thread without affecting the others?
I am not a Python expert, but if I was trying to solve your problem, after reading about signal handling in Python3, I would start planning to use multiple processes instead of using multiple threads within a single process.
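To illustrate that approach, a minimal sketch using multiprocessing, where each download runs in its own process and can be stopped individually (download_video is a hypothetical stand-in for the youtube-dl call):

import multiprocessing
import time

def download_video(url):
    # hypothetical stand-in for the youtube_dl download call
    while True:
        print("downloading", url)
        time.sleep(1)

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=download_video, args=('url1',))
    p2 = multiprocessing.Process(target=download_video, args=('url2',))
    p1.start()
    p2.start()
    time.sleep(5)
    p1.terminate()  # stops only this download; p2 keeps running
    p1.join()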
You can use signal.pthread_kill (note that it is only available on Unix, while the question targets Windows):

from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()

sleep(5)
pthread_kill(thread.ident, SIGTSTP)
Result:
0
1
2
3
4

[14]+ Stopped

Python how can I do a multithreading/asynchronous HTTP server with twisted

I wrote a server following this tutorial:
https://twistedmatrix.com/documents/14.0.0/web/howto/web-in-60/asynchronous-deferred.html
But it seems to be good only for delaying processing, not for actually processing 2 or more requests concurrently. My full code is:
from twisted.internet.task import deferLater
from twisted.web.resource import Resource
from twisted.web.server import Site, NOT_DONE_YET
from twisted.internet import reactor, threads
from time import sleep

class DelayedResource(Resource):
    def _delayedRender(self, request):
        print 'Sorry to keep you waiting.'
        request.write("<html><body>Sorry to keep you waiting.</body></html>")
        request.finish()

    def make_delay(self, request):
        print 'Sleeping'
        sleep(5)
        return request

    def render_GET(self, request):
        d = threads.deferToThread(self.make_delay, request)
        d.addCallback(self._delayedRender)
        return NOT_DONE_YET

def main():
    root = Resource()
    root.putChild("social", DelayedResource())
    factory = Site(root)
    reactor.listenTCP(8880, factory)
    print 'started httpserver...'
    reactor.run()

if __name__ == '__main__':
    main()
But when I pass 2 requests, the console output is like:
Sleeping
Sorry to keep you waiting.
Sleeping
Sorry to keep you waiting.
But if it were concurrent, it should be like:
Sleeping
Sleeping
Sorry to keep you waiting.
Sorry to keep you waiting.
So the question is: how do I make Twisted process the next request without waiting for the previous response to finish?
Also, make_delay in real life is a large function with heavy logic. Basically I spawn a lot of threads, make requests to other URLs, and collect the results into the response, so it can take some time and is not easily ported.
Twisted processes everything in one event loop. If something blocks the execution, it also blocks Twisted. So you have to prevent blocking calls.
In your case you have time.sleep(5). It is blocking. You already found the better way to do it in Twisted: deferLater(). It returns a Deferred that continues execution after the given time and releases the event loop so other things can be done in the meantime. In general, all things that return a Deferred are good.
If you have to do heavy work that for some reason cannot be deferred, you should use deferToThread() to execute this work in a thread. See https://twistedmatrix.com/documents/15.5.0/core/howto/threading.html for details.
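For illustration, a minimal sketch of the deferLater() approach from the linked tutorial, which removes the blocking sleep entirely:

from twisted.internet.task import deferLater
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET

class DelayedResource(Resource):
    def _delayedRender(self, request):
        request.write("<html><body>Sorry to keep you waiting.</body></html>")
        request.finish()

    def render_GET(self, request):
        # schedule _delayedRender to run 5 seconds from now,
        # without ever blocking the reactor
        d = deferLater(reactor, 5, lambda: request)
        d.addCallback(self._delayedRender)
        return NOT_DONE_YET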
You can use greenlets in your code (like threads).
You need to install geventreactor: https://gist.github.com/yann2192/3394661
And use reactor.deferToGreenlet().
Also, in your long-calculation code you need to call gevent.sleep() to switch context to another greenlet:

msecs = 5 * 1000
timeout = 100
for _ in xrange(0, msecs, timeout):
    sleep(timeout / 1000.0)  # sleep in 100 ms chunks
    gevent.sleep()  # yield so other greenlets can run

python sockets stop recv from hanging?

I am trying to create a two player game in pygame using sockets. The thing is, when I try to receive data on this line:
message = self.conn.recv(1024)
Python hangs until it gets some data. The problem is that this pauses the game loop when the client is not sending anything through the socket, and causes a black screen. How can I stop recv from doing this?
Thanks in advance
Use nonblocking mode. (See socket.setblocking.)
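For example, a minimal Python 3 sketch, assuming self.conn is the connected socket from the question:

self.conn.setblocking(False)  # recv now raises instead of waiting
try:
    message = self.conn.recv(1024)
except BlockingIOError:
    message = None  # no data available this frame; keep the game loop running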
Or check whether data is available before calling recv.
For example, using select.select with a timeout of 0 so the check itself doesn't block:
r, _, _ = select.select([self.conn], [], [], 0)
if r:
    # ready to receive
    message = self.conn.recv(1024)
You can use the signal module to stop a hanging recv thread.
In the recv thread:
try:
    data = sock.recv(1024)
except KeyboardInterrupt:
    pass
In the interrupting thread:
signal.pthread_kill(your_recving_thread.ident, signal.SIGINT)
I know that this is an old post, but since I worked on a similar project lately, I wanted to add something that hasn't already been stated yet for anybody having the same issue.
You can use threading to create a new thread, which will receive data. After this, run your game loop normally in your main thread, and check for received data in each iteration. Received data should be placed inside a queue by the data receiver thread and read from that queue by the main thread.
# other imports
import queue
import threading

class MainGame:
    def __init__(self):
        # any other setup code here
        self.data_queue = queue.Queue()
        data_receiver = threading.Thread(target=self.data_receiver)
        data_receiver.start()
        self.gameLoop()

    def gameLoop(self):
        while True:
            data = None  # avoid a NameError when the queue is empty
            try:
                data = self.data_queue.get_nowait()
            except queue.Empty:
                pass
            self.gameIteration(data)

    def data_receiver(self):
        # Assuming self.sock exists
        while True:  # keep receiving for the lifetime of the game
            data = self.sock.recv(1024).decode("utf-8")
            # edit the data in any way necessary here
            self.data_queue.put(data)

    def gameIteration(self, data):
        # Assume this method handles updating, drawing, etc.
        pass
Note that this code is in Python 3.

Why is infinite loop needed when using threading and a queue in Python

I'm trying to understand how to use threading and I came across this nice example at http://www.ibm.com/developerworks/aix/library/au-threadingpython/
#!/usr/bin/env python
import Queue
import threading
import urllib2
import time

hosts = ["http://yahoo.com", "http://google.com", "http://amazon.com",
         "http://ibm.com", "http://apple.com"]

queue = Queue.Queue()

class ThreadUrl(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            # grabs host from queue
            host = self.queue.get()
            # grabs urls of hosts and prints first 1024 bytes of page
            url = urllib2.urlopen(host)
            print url.read(1024)
            # signals to queue job is done
            self.queue.task_done()

start = time.time()

def main():
    # spawn a pool of threads, and pass them queue instance
    for i in range(5):
        t = ThreadUrl(queue)
        t.setDaemon(True)
        t.start()
    # populate queue with data
    for host in hosts:
        queue.put(host)
    # wait on the queue until everything has been processed
    queue.join()

main()
print "Elapsed Time: %s" % (time.time() - start)
The part I don't understand is why the run method has an infinite loop:
def run(self):
    while True:
        ... etc ...
Just for laughs I ran the program without the loop and it looks like it runs fine!
So can someone explain why this loop is needed?
Also how is the loop exited as there is no break statement?
Do you want the thread to perform more than one job? If not, you don't need the loop. If so, you need something that will make it do that, and a loop is a common solution. Your sample data contains five jobs, and the program starts five threads, so no thread needs to do more than one job here. Try adding one more URL to your workload, though, and see what changes.
The loop is required because without it each worker thread terminates as soon as it completes its first task. What you want is for the worker to take another task when it finishes one.
In the code above, you create 5 worker threads, which just happens to be sufficient to cover the 5 URLs you are working with. If you had more than 5 URLs, you would find only the first 5 were processed.
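As for how the loop is exited: in the example it never is; the daemon threads are simply killed when the main program ends after queue.join() returns. If you wanted the workers to shut down cleanly instead, a common pattern is a sentinel value, sketched here in Python 3:

import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()
        if item is None:  # sentinel: no more work, exit the loop
            q.task_done()
            break
        print("processing", item)
        q.task_done()

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()

for host in ["http://yahoo.com", "http://google.com"]:
    q.put(host)
for _ in threads:
    q.put(None)  # one sentinel per worker

q.join()
for t in threads:
    t.join()  # all workers have exited their loops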

Need some assistance with Python threading/queue

import threading
import Queue
import urllib2
import time

class ThreadURL(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            host = self.queue.get()
            sock = urllib2.urlopen(host)
            data = sock.read()
            self.queue.task_done()

hosts = ['http://www.google.com', 'http://www.yahoo.com', 'http://www.facebook.com', 'http://stackoverflow.com']

start = time.time()

def main():
    queue = Queue.Queue()
    for i in range(len(hosts)):
        t = ThreadURL(queue)
        t.start()
    for host in hosts:
        queue.put(host)
    queue.join()

if __name__ == '__main__':
    main()
    print 'Elapsed time: {0}'.format(time.time() - start)
I've been trying to get my head around how to perform Threading and after a few tutorials, I've come up with the above.
What it's supposed to do is:
Initialise the queue
Create my Thread pool and then queue up the list of hosts
My ThreadURL class should then begin work once a host is in the queue and read the website data
The program should finish
What I want to know first off is, am I doing this correctly? Is this the best way to handle threads?
Secondly, my program fails to exit. It prints out the Elapsed time line and then hangs there; I have to kill my terminal for it to go away. I'm assuming this is due to my incorrect use of queue.join()?
Your code looks fine and is quite clean.
The reason your application still "hangs" is that the worker threads are still running, waiting for the main application to put something in the queue, even though your main thread is finished.
The simplest way to fix this is to mark the threads as daemons, by doing t.daemon = True before your call to start. This way, the threads will not block the program stopping.
Looks fine. Yann is right about the daemon suggestion; that will fix your hang. My only question is: why use the queue at all? You're not doing any cross-thread communication, so it seems like you could just pass the host info as an arg to ThreadURL's __init__() and drop the queue.
Nothing wrong with it, just wondering.
One thing: in the thread's run function, inside the while True loop, if an exception happens, task_done() may never be called even though get() has already been called, so queue.join() may never return.
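A minimal sketch of guarding against that, using try/finally on the question's run method so task_done() is always paired with get():

def run(self):
    while True:
        host = self.queue.get()
        try:
            sock = urllib2.urlopen(host)
            data = sock.read()
        finally:
            # always mark the task done, even if urlopen raised,
            # so queue.join() can still return
            self.queue.task_done()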
