I have to curl a website and display a message if the header status is not 200. The logic works fine, but I'm having trouble calling the method only once.
The threading.Timer is supposed to call the method ONCE every 20 seconds, but apparently it calls it multiple times. Could someone please tell me how I can make it call the method once every 20 seconds?
import requests
import threading
def check_status(url):
    while True:
        status = requests.get(url)
        if status.status_code != 200:
            print('faulty')

def main():
    threading.Timer(2.0, check_status, args=('https://www.google.com',)).start()

if __name__ == "__main__":
    main()
Just create a new timer after you finish the old one.
import requests
import threading
def check_status(url):
    status = requests.get(url)
    if status.status_code != 200:
        print('faulty')
    threading.Timer(2.0, check_status, args=('https://www.google.com',)).start()

def main():
    threading.Timer(2.0, check_status, args=('https://www.google.com',)).start()

if __name__ == "__main__":
    main()
Every 20 seconds, you are creating a thread that enters an infinite loop which checks the HTTP status. Even if you did not use threading.Timer, you would still get multiple prints. Remove the while loop and it will work as you expect.
Update
My mistake, looking at the documentation: https://docs.python.org/2/library/threading.html#timer-objects
Timer will execute the function once after the delay has passed, and then it will exit. What you need to do is have time.sleep inside the while loop, and call the function just once inside your main.
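A minimal sketch of that approach, assuming the 20-second interval mentioned in the question:
import time
import requests

def check_status(url):
    # poll forever, sleeping between checks instead of re-scheduling a Timer
    while True:
        response = requests.get(url)
        if response.status_code != 200:
            print('faulty')
        time.sleep(20)

def main():
    check_status('https://www.google.com')

if __name__ == "__main__":
    main()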
Related
I have been trying for the past day to come up with a fix to my current problem.
I have a python script which is supposed to count up using threads and perform requests based on each thread.
Each thread goes through a function called doit(), which has a while True loop. The loop only breaks when it meets certain criteria, and when it breaks, the following thread breaks as well.
What I want to achieve is that once one of these threads/workers gets status code 200 from its request, all workers/threads should stop. My problem is that they won't stop even though the criteria are met.
Here is my code:
import threading
import requests
import sys
import urllib.parse
import concurrent.futures
import simplejson
from requests.auth import HTTPDigestAuth
from requests.packages import urllib3
from concurrent.futures import ThreadPoolExecutor
def doit(PINStart):
    PIN = PINStart
    while True:
        req1 = requests.post(url, data=json.dumps(data), headers=headers1, verify=False)
        if str(req1.status_code) == "200":
            print(str(PINs))
            c0 = req1.content
            j0 = simplejson.loads(c0)
            AuthUID = j0['UserId']
            print(UnAuthUID)
            AuthReqUser()
            # Kill all threads/workers if any of the threads get here.
            break
        elif(PIN > (PINStart + 99)):
            break
        else:
            PIN += 1

def main():
    threads = 100
    threads = int(threads)
    Calcu = 10000/threads
    NList = [0]
    for i in range(1, threads):
        ListAdd = i*Calcu
        if ListAdd == 10000:
            NList.append(int(ListAdd))
        else:
            NList.append(int(ListAdd)+1)
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor:
        tGen = {executor.submit(doit, PinS): PinS for PinS in NList}
        for NLister in concurrent.futures.as_completed(tGen):
            PinS = tGen[NLister]

if __name__ == "__main__":
    main()
I understand why this is happening. Since I only break the while True loop in one of the threads, the other 99 (I run the code with 100 threads by default) don't break until they finish their count (either running through the loop 100 times or getting status code 200).
What I originally did was define a global variable at the top of the code and change the loop condition to while Counter < 10000, so all workers keep looping until Counter is greater than 10000, incrementing the global variable inside the loop. This way, when a worker gets status code 200, I set Counter (my global variable) to, for example, 15000 (something above 10000), so all the other workers stop running their loops.
This did not work. When I add that to the code, all threads instantly stop; they don't even run through the loop once.
Here is an example code of this solution:
import threading
import requests
import sys
import urllib.parse
import concurrent.futures
import simplejson
from requests.auth import HTTPDigestAuth
from requests.packages import urllib3
from concurrent.futures import ThreadPoolExecutor
global Counter

def doit(PINStart):
    PIN = PINStart
    while Counter < 10000:
        req1 = requests.post(url, data=json.dumps(data), headers=headers1, verify=False)
        if str(req1.status_code) == "200":
            print(str(PINs))
            c0 = req1.content
            j0 = simplejson.loads(c0)
            AuthUID = j0['UserId']
            print(UnAuthUID)
            AuthReqUser()
            # Kill all threads/workers if any of the threads get here.
            Counter = 15000
            break
        elif(PIN > (PINStart + 99)):
            Counter = Counter+1
            break
        else:
            Counter = Counter+1
            PIN += 1

def main():
    threads = 100
    threads = int(threads)
    Calcu = 10000/threads
    NList = [0]
    for i in range(1, threads):
        ListAdd = i*Calcu
        if ListAdd == 10000:
            NList.append(int(ListAdd))
        else:
            NList.append(int(ListAdd)+1)
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor:
        tGen = {executor.submit(doit, PinS): PinS for PinS in NList}
        for NLister in concurrent.futures.as_completed(tGen):
            PinS = tGen[NLister]

if __name__ == "__main__":
    main()
Any idea on how I can kill all workers once I get status code 200 from one of the requests I am sending out?
The problem is that you're not using a global variable.
To use a global variable in a function, you have to put the global statement in that function, not at the top level. Because you didn't, the Counter inside doit is a local variable. Any variable that you assign to anywhere in a function is local, unless you have a global (or nonlocal) declaration.
And the first time you use that local Counter is right at the top of the while loop, before you've assigned anything to it. So, it's going to raise an UnboundLocalError immediately.
This exception will be propagated back to the main thread as the result of the future. Which you would have seen, except that you never actually evaluate your futures. You just do this:
tGen = {executor.submit(doit, PinS): PinS for PinS in NList}
for NLister in concurrent.futures.as_completed(tGen):
    PinS = tGen[NLister]
So, you get the PinS corresponding to the function you ran, but you don't look at the result or exception; you just ignore it. Hence you don't see that you're getting back 100 exceptions, any of which would have told you what was actually wrong. This is equivalent to having a bare except: pass in non-threaded code. Even if you don't want to check the result of your futures in "production" for some reason, you definitely should do it when debugging a problem.
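A minimal sketch of what that check might look like (the print is only illustrative; the point is that calling result() re-raises any exception the worker hit):
for NLister in concurrent.futures.as_completed(tGen):
    PinS = tGen[NLister]
    try:
        result = NLister.result()  # re-raises whatever exception doit raised
    except Exception as exc:
        print('worker for', PinS, 'failed with:', exc)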
Anyway, just put the global in the right place, and your bug is fixed.
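A sketch of the fix (only the relevant lines; note the counter also needs an initial module-level value):
Counter = 0  # define the shared counter at module level

def doit(PINStart):
    global Counter  # assignments inside the function now touch the module-level Counter
    PIN = PINStart
    while Counter < 10000:
        ...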
However, you do have at least two other problems.
First, sharing globals between threads without synchronizing them is not safe. In CPython, thanks to the GIL, you never get a segfault because of it, and you often get away with it completely, but you often don't. You can miss counts because two threads tried to do Counter = Counter + 1 at the same time, so they both incremented it from 42 to 43. And you can get a stale value in the while Counter < 10000: check and go through the loop an extra time.
Second, you don't check the Counter until you've finished downloading and processing a complete request. This could take seconds, maybe even minutes depending on your timeout settings. And add that to the fact that you might go through the loop an extra time before knowing it's time to quit…
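A hedged sketch of one way to address both issues, using a threading.Event as the shared stop flag, a Lock around the counter, and a request timeout (url, data, and headers1 are assumed to be defined as in the question; the 10-second timeout is an arbitrary choice):
import threading
import requests

stop_event = threading.Event()     # set once any worker sees a 200
counter_lock = threading.Lock()    # protects the shared Counter
Counter = 0

def doit(PINStart):
    global Counter
    PIN = PINStart
    while not stop_event.is_set():
        # url, data and headers1 are assumed to exist as in the question
        req1 = requests.post(url, data=data, headers=headers1, verify=False, timeout=10)
        if req1.status_code == 200:
            stop_event.set()       # every other worker exits at its next loop check
            break
        with counter_lock:         # avoid lost increments from concurrent += 1
            Counter += 1
        if PIN > (PINStart + 99):
            break
        PIN += 1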
I have got an XMLRPC server, and a client runs some functions on the server and gets the returned value. If the function executes quickly then everything is fine, but I have got a function that reads from a file and returns some value to the user. Reading takes about a minute (there is some complicated stuff), and when one client runs this function on the server, the server is not able to respond to other users until the function is done.
I would like to create a new thread that will read this file and return the value to the user. Is this possible somehow?
Are there any good solutions/patterns to avoid blocking the server when one client runs some long function?
Yes, it is possible, this way:
import threading

# starting the thread
def start_thread(self):
    threading.Thread(target=self.new_thread, args=()).start()

# the thread where you run your logic
def new_thread(self, *args):
    # call the function you want to retrieve data from
    value_returned = self.retrieved_data_func()

# the function that returns
def retrieved_data_func(self):
    arg0 = 0
    return arg0
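Not part of this answer, but since the question is specifically about an XML-RPC server: a common alternative, sketched here under the assumption of Python 3's xmlrpc.server, is to let every request run in its own thread via socketserver.ThreadingMixIn, so one slow call does not block other clients:
from xmlrpc.server import SimpleXMLRPCServer
from socketserver import ThreadingMixIn

class ThreadedXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
    """Each incoming request is handled in its own thread."""
    pass

def slow_read():
    # placeholder for the minute-long file read from the question
    return "some value"

server = ThreadedXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(slow_read)
server.serve_forever()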
Yes, using the threading module you can spawn new threads. See the documentation. An example would be this:
import threading
import time
def main():
    print("main: 1")
    thread = threading.Thread(target=threaded_function)
    thread.start()
    time.sleep(1)
    print("main: 3")
    time.sleep(6)
    print("main: 5")

def threaded_function():
    print("thread: 2")
    time.sleep(4)
    print("thread: 4")
main()
This code uses time.sleep to simulate that an action takes a certain amount of time. The output should look like this:
main: 1
thread: 2
main: 3
thread: 4
main: 5
I'm new to python and tornado. I was trying some stuff with coroutines.
from threading import Timer  # assumed; the question does not show where Timer comes from
from tornado import gen

def doStuff(callback):
    def task():
        callback("One Second Later")
    Timer(1, task).start()

@gen.coroutine
def routine1():
    ans = yield gen.Task(doStuff)
    raise gen.Return(ans)

if __name__ == "__main__":
    print routine1()
I'm trying to get the result of the doStuff() function, which I expect to be "One Second Later", but it's not working. Any help would be appreciated. Thank you.
What's probably happening is, you haven't started the IOLoop, nor are you waiting for your coroutine to complete before your script exits. You'll probably notice your script runs in a couple milliseconds, rather than pausing for a second as it ought. Do this:
if __name__ == "__main__":
    from tornado.ioloop import IOLoop
    print IOLoop.instance().run_sync(routine1)
I have written a piece of code for scraping in Python. I have a list of URLs which need to be scraped, but after a while the script gets lost while reading web pages in the loop. So I need to set a fixed time after which the script should come out of the loop and start reading the next web page.
Below is the sample code.
def main():
    if <some condition>:
        list_of_links = ['http://link1.com', 'http://link2.com', 'http://link3.com']
        for link in list_of_links:
            process(link)

def process(link):
    <some code to read web page>
    return page_read
The script gets lost inside the process() method, which is called inside the for loop again and again. I want the for loop to skip to the next link if the process() method is taking more than a minute to read the web page.
The script probably gets lost because the remote server does not respond at all, or is too slow to respond.
You may set a timeout on the socket to avoid this behavior of the process function, at the very beginning of the main function:
import socket

def main():
    socket.setdefaulttimeout(3.0)
    # process urls
    if ......
The above code fragment means that, if no response arrives after waiting for 3 seconds, the process() call is interrupted and a timeout exception is raised. So
try:
    process()
except:
    pass
will work.
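If you prefer to skip just the slow link instead of swallowing every error, a narrower sketch (assuming the for loop over list_of_links from the question and the socket timeout set above) could catch only the timeout:
import socket

for link in list_of_links:
    try:
        page = process(link)
    except socket.timeout:
        # this link took longer than the socket timeout; move on to the next one
        continue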
You probably can use a timer. It depends on the code inside your process function.
If your main and process functions are methods of a class, then :
from threading import Timer

class MyClass:
    def __init__(self):
        self.stop_thread = False

    def main(self):
        if <some condition>:
            list_of_links = ['http://link1.com', 'http://link2.com', 'http://link3.com']
            for link in list_of_links:
                self.process(link)

    def set_stop(self):
        self.stop_thread = True

    def process(self, link):
        t = Timer(60.0, self.set_stop)
        t.start()
        # I don't know your code here
        # If you use some kind of loop it could be:
        while True:
            # Do something..
            if self.stop_thread:
                break
        # Or:
        if self.stop_thread:
            return
I've been struggling with an issue for some time now.
I'm building a little script which uses a main loop. This is a process that needs some attention from the users. The user responds to the steps and then some magic happens with the use of some functions.
Besides this, I want to spawn another process which monitors the computer system for some specific events, like pressing specific keys. If these events occur, it will launch the same functions as when the user enters the right values.
So I need to make two processes:
-The main loop (which allows user interaction)
-The background "event scanner", which searches for specific events and then reacts to them.
I try this by launching a main loop and a daemon multiprocessing process. The problem is that when I launch the background process it starts, but after that it does not launch the main loop.
I simplified everything a little to make it more clear:
import multiprocessing, sys, time
def main_loop():
    while 1:
        input = input('What kind of food do you like?')
        print(input)

def test():
    while 1:
        time.sleep(1)
        print('this should run in the background')

if __name__ == '__main__':
    try:
        print('hello!')
        mProcess = multiprocessing.Process(target=test())
        mProcess.daemon = True
        mProcess.start()
        # after starting, the main loop does not start, while it prints out the test loop fine
        main_loop()
    except:
        sys.exit(0)
You should do
mProcess = multiprocessing.Process(target=test)
instead of
mProcess = multiprocessing.Process(target=test())
Your code actually calls test in the parent process, and that call never returns.
You can use lock synchronization to have better control over your program's flow. Curiously, the input function raises an EOFError, but I'm sure you can find a workaround.
import multiprocessing, sys, time
def main_loop(l):
    time.sleep(4)
    l.acquire()
    # raises an EOFError, I don't know why
    #_input = input('What kind of food do you like?')
    print(" raw input at 4 sec ")
    l.release()
    return

def test(l):
    i = 0
    while i < 8:
        time.sleep(1)
        l.acquire()
        print('this should run in the background : ', i+1, 'sec')
        l.release()
        i += 1
    return

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    #try:
    print('hello!')
    mProcess = multiprocessing.Process(target=test, args=(lock,))
    mProcess.start()
    inputProcess = multiprocessing.Process(target=main_loop, args=(lock,))
    inputProcess.start()
    #except:
    #    sys.exit(0)