Python multi-threading: use data from one thread in another thread

I'm new to Python threading, and what I'm trying to do is:
1. A thread with a while loop that executes a GET request to an API every N seconds to refresh the data.
2. A second thread with a while loop that uses that data (IP addresses) to ping targets every N seconds.
So I'm looking for a way to start the first thread, start the second one only after the first API call has completed, and then share the data with the second thread so it can run its logic.

As per your requirements, here is simple boilerplate code you might want to try:
import time
import threading

available = False

def thread1():
    global available
    while True:
        # TODO: call the API
        # --------------
        available = True  # set available to True after the API call
        time.sleep(5)     # perform API calls every 5 seconds

def thread2():
    while True:
        # TODO: perform ping
        # --------------
        # perform ping requests every 5 seconds
        time.sleep(5)

if __name__ == "__main__":
    t1 = threading.Thread(target=thread1, name="thread1")
    t2 = threading.Thread(target=thread2, name="thread2")
    t1.start()
    while not available:  # wait for the first API call to finish
        time.sleep(0.1)
    t2.start()
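A variation on the boilerplate above that avoids polling a global flag: a threading.Event signals the first successful fetch, and a lock guards the shared list. This is only a sketch; fetch_targets is a hypothetical stand-in for the real GET request.

```python
import threading
import time

data_ready = threading.Event()  # set once the first API result is in
data_lock = threading.Lock()    # guards the shared target list
stop = threading.Event()        # lets both loops shut down cleanly
targets = []                    # IP addresses fetched by the refresher

def fetch_targets():
    # hypothetical stand-in for the real GET request
    return ["192.0.2.1", "192.0.2.2"]

def refresher():
    while not stop.is_set():
        fresh = fetch_targets()
        with data_lock:
            targets[:] = fresh
        data_ready.set()  # unblocks the pinger after the first fetch
        stop.wait(5)      # refresh every 5 seconds, interruptible

def pinger():
    data_ready.wait()  # start working only after the first API call
    while not stop.is_set():
        with data_lock:
            snapshot = list(targets)
        for ip in snapshot:
            pass  # TODO: ping ip
        stop.wait(5)

t1 = threading.Thread(target=refresher)
t2 = threading.Thread(target=pinger)
t1.start()
t2.start()
time.sleep(0.5)  # let both loops run once for the demo
stop.set()
t1.join()
t2.join()
```

Using stop.wait(5) instead of time.sleep(5) means both loops wake immediately when asked to shut down instead of finishing a full sleep first.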


Unable to sleep execution within an api subscription callback

I am making an API subscription to fetch real-time live data from one of the API providers. However, I only want to pull the data every few seconds (e.g. 5 seconds). I am using the code snippet below, but I am unable to implement the sleep or delay effectively. Can you please help me understand why the API is not adhering to the 5-second wait?
import time
from datetime import datetime

api_ABC_connection = apiConnect(api_key="<apikey>")
api_ABC_connection.ws_connect()

abc_list = []

# Callback to receive ticks.
def on_ticks(ticks):
    print('###################')
    print(datetime.now())
    time.sleep(5)
    fetch_time_dict = {}
    fetch_time_dict['fetch_time'] = datetime.now()
    abc_list.append(fetch_time_dict)
    # print("Ticks: {}".format(ticks))
    abc_list.append(ticks)
    print(datetime.now())
    return abc_list

# Assign the callbacks.
api_ABC_connection.on_ticks = on_ticks

# Subscribe to stock feeds.
api_ABC_connection.subscribe_feeds(<feeds parameters>)
Assuming api_ABC_connection calls the callback function asynchronously, you can try adding a lock. Try this; it may work:
import time
import multiprocessing
from datetime import datetime

lock = multiprocessing.Lock()

def on_ticks(ticks):
    print('###################')
    print(datetime.now())
    lock.acquire()
    time.sleep(5)
    lock.release()
    fetch_time_dict = {}
    fetch_time_dict['fetch_time'] = datetime.now()
    abc_list.append(fetch_time_dict)
    # print("Ticks: {}".format(ticks))
    abc_list.append(ticks)
    print(datetime.now())
    return abc_list
You may want to move the acquire() and release() calls (or, more idiomatically, a with lock: block) somewhere else; it depends on what behavior you actually expect.

Python - If nothing happens for 1 minute, proceed code

I am writing a script which sends a serial message over a websocket to a device. When I want to start the device I write:
def start(ws):
    """
    Function to send the start command
    """
    print("start")
    command = dict()
    command["commandId"] = 601
    command["id"] = 54321
    command["params"] = {}
    send_command(ws, command)
Every 5 hours or so the device restarts; during the restart my start request does not run, and my code stops completely.
My question is: is there a way to tell Python, "If nothing has happened for 1 minute, try again"?
It's not clear exactly what ws is or how you set it up; but you want to add a timeout to the socket.
https://websockets.readthedocs.io/en/stable/api.html#websockets.client.connect has a timeout keyword; refer to the documentation for details about what it does.
If this is not the websocket library you are using, please update your question with details.
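If the library in use offers no timeout option, a generic fallback is to run the blocking call in a worker thread and stop waiting after a deadline. This is a sketch, not specific to any websocket library:

```python
import threading

def call_with_timeout(fn, timeout, *args):
    """Run fn(*args) in a worker thread; stop waiting after `timeout` seconds.

    Caveat: if fn never returns, the daemon thread keeps running in the
    background -- this only stops the *caller* from blocking forever."""
    result = {}

    def worker():
        result["value"] = fn(*args)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout)
    if "value" in result:
        return result["value"]
    raise TimeoutError(f"no response within {timeout} seconds")
```

For the question's case, that would look roughly like call_with_timeout(lambda: send_command(ws, command), 60), retrying when TimeoutError is raised.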
You can use sleep from the time module:
import time
time.sleep(60) # waits for 1 minute
Also, consider multithreading if the sleep must not block other work:
import threading
import time

def print_hello():
    for i in range(4):
        time.sleep(0.5)
        print("Hello")

def print_hi():
    for i in range(4):
        time.sleep(0.7)
        print("Hi")

t1 = threading.Thread(target=print_hello)
t2 = threading.Thread(target=print_hi)
t1.start()
t2.start()
The above program has two threads; time.sleep(0.5) and time.sleep(0.7) suspend their execution for 0.5 seconds and 0.7 seconds respectively.

Python: hold continuous method calls until a configured delay time

My class receives work from different threads/interfaces, and it has to process that work with a configured delay.
def getJob(job):
    work = self._getNextWorkToRun(job)
    if work is None:
        return {}
    # proceed to do work
Jobs are sent to this class by a different package. I want to call _getNextWorkToRun() only once every five minutes, but jobs arrive every second or more often, so I have to wait five minutes before calling _getNextWorkToRun() again with a new job. Every job has a reference (JOB1, JOB2, etc.), and all jobs have to be completed with the 5-minute delay.
What is the best way to achieve this?
Below is an example using threads: jobs can be added to the job queue at any time from any other function, and get_job() runs continuously, monitoring the queue and processing jobs at a fixed interval until it gets a stop flag.
from threading import Thread
from queue import Queue
import time
from random import random

jobs = Queue()  # queue safely shared between threads to pass jobs
run_flag = True

def job_feeder():
    for i in range(10):
        # add a job to the jobs queue; a job could be anything, here just a string
        jobs.put(f'job-{i}')
        print(f'adding job-{i}')
        time.sleep(random())  # simulate jobs being added at random times
    print('job_feeder() finished')

def get_job():
    while run_flag:
        if jobs.qsize():  # check whether there are any jobs in the queue first
            job = jobs.get()  # get the job
            print(f'executing {job}')
            time.sleep(3)
    print('get_job finished')

t1 = Thread(target=job_feeder)
t2 = Thread(target=get_job)
t1.start()
t2.start()

# we can make the get_job() thread quit at any time by clearing run_flag
time.sleep(20)
run_flag = False

# wait for the threads to quit
t1.join()
t2.join()
print('all clear')
output:
adding job-0
executing job-0
adding job-1
adding job-2
adding job-3
adding job-4
adding job-5
adding job-6
adding job-7
executing job-1
adding job-8
adding job-9
job_feeder() finished
executing job-2
executing job-3
executing job-4
executing job-5
executing job-6
get_job finished
all clear
Note: get_job() processed only six jobs because we sent the quit signal after 20 seconds.
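A variation on the same pattern: Queue.get(timeout=...) lets the worker block on the queue instead of polling qsize(), so the loop doesn't spin while the queue is empty. A minimal sketch:

```python
from queue import Queue, Empty
from threading import Thread

jobs = Queue()
done = []

def worker():
    while True:
        try:
            job = jobs.get(timeout=0.2)  # block on the queue instead of spinning
        except Empty:
            break  # nothing arrived within the timeout: treat as drained, exit
        done.append(job)

for i in range(3):
    jobs.put(f"job-{i}")

t = Thread(target=worker)
t.start()
t.join()
```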

Returning value from thread in python without blocking main thread

I have an XML-RPC server, and clients run functions on it and receive the returned values. If a function executes quickly, everything is fine, but I have a function that reads from a file and returns some value to the user. The read takes about a minute (there is some complicated processing), and while one client runs this function the server cannot respond to other users until it is done.
I would like to create a new thread that reads the file and returns the value to the user. Is that possible somehow?
Are there any good solutions/patterns to avoid blocking the server while one client runs a long function?
Yes, it is possible, this way:
import threading

# starting the thread
def start_thread(self):
    threading.Thread(target=self.new_thread, args=()).start()

# the thread in which you run your logic
def new_thread(self, *args):
    # call the function you want to retrieve data from
    value_returned = self.retrieved_data_func()

# the function that returns
def retrieved_data_func(self):
    arg0 = 0
    return arg0
Yes, using the threading module you can spawn new threads. See the documentation. An example would be this:
import threading
import time

def main():
    print("main: 1")
    thread = threading.Thread(target=threaded_function)
    thread.start()
    time.sleep(1)
    print("main: 3")
    time.sleep(6)
    print("main: 5")

def threaded_function():
    print("thread: 2")
    time.sleep(4)
    print("thread: 4")

main()
This code uses time.sleep to simulate that an action takes a certain amount of time. The output should look like this:
main: 1
thread: 2
main: 3
thread: 4
main: 5
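For the original question (getting a value back from a thread without blocking the server), the standard library's concurrent.futures wraps this pattern up: submit() returns immediately with a Future, and result() blocks only at the point you actually need the value. A sketch, with slow_read standing in for the minute-long file read:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_read():
    time.sleep(0.2)  # stands in for the minute-long file read
    return "file contents"

executor = ThreadPoolExecutor(max_workers=4)
future = executor.submit(slow_read)  # returns immediately with a Future
# ... the main thread stays free to serve other clients here ...
result = future.result()  # blocks only when the value is needed
executor.shutdown()
```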

Python parallel threads

Here is code that downloads files and does something with them.
But it waits until Thread1 has finished before starting Thread2. How can I make them run together?
Please give an example with commentary. Thanks.
import threading
import urllib.request

def testForThread1():
    print('[Thread1]::Started')
    resp = urllib.request.urlopen('http://192.168.85.16/SOME_FILE')
    data = resp.read()
    # Do something with it
    return 'ok'

def testForThread2():
    print('[Thread2]::Started')
    resp = urllib.request.urlopen('http://192.168.85.10/SOME_FILE')
    data = resp.read()
    # Do something with it
    return 'ok'

if __name__ == "__main__":
    t1 = threading.Thread(name="Hello1", target=testForThread1())
    t1.start()
    t2 = threading.Thread(name="Hello2", target=testForThread2())
    t2.start()
    print(threading.enumerate())
    t1.join()
    t2.join()
    exit(0)
You are executing the thread's target function at thread-creation time:
if __name__ == "__main__":
    t1 = threading.Thread(name="Hello1", target=testForThread1())  # <<-- here
    t1.start()
This is equivalent to:
if __name__ == "__main__":
    result = testForThread1()  # == 'ok'; this is the blocking execution
    t1 = threading.Thread(name="Hello1", target=result)
    t1.start()
It's Thread.start()'s job to run that function in the new thread. As you can see, the previous form executed the blocking function in the main thread, preventing any parallelism: the first function had to finish before the line creating the second thread was even reached.
The proper way to set up the thread in a non-blocking fashion is:
if __name__ == "__main__":
    t1 = threading.Thread(name="Hello1", target=testForThread1)  # tell the thread what the target function is
    # note: no call parentheses after "testForThread1"
    t1.start()  # tell the thread to execute the target function
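The difference is easy to see with a toy target (work here is a made-up function):

```python
import threading
import time

order = []

def work(tag):
    time.sleep(0.05)
    order.append(tag)

# wrong: work("a") runs immediately, in the main thread, and
# Thread() receives its return value (None) as the target
t_wrong = threading.Thread(target=work("a"))

# right: pass the callable itself; it runs only after .start()
t_right = threading.Thread(target=work, args=("b",))
t_right.start()
t_right.join()
```

After this runs, order is ["a", "b"]: "a" was appended before t_wrong even existed, while "b" was appended by the worker thread.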
Threads in CPython cannot run CPU-bound code in parallel because of the GIL, so for CPU-bound work like the example below the total time with threads would be roughly the sum of the individual run times. (Downloads are I/O-bound, so threads do overlap there.) For CPU-bound work, multiprocessing is the better option:
import multiprocessing

def test_function():
    for i in range(199999998):
        pass

t1 = multiprocessing.Process(target=test_function)
t2 = multiprocessing.Process(target=test_function)
t1.start()
t2.start()
You can check the timing with the following command:
time python3 filename.py
You will get output like this:
real 0m6.183s
user 0m12.277s
sys  0m0.009s
The user time is the total CPU time consumed by the Python process. Each call to test_function takes about 6.14 seconds on its own, but with multiprocessing both finish in about 6.18 seconds of real (wall-clock) time, because they run in parallel on separate cores; that is also why user time (about 12.3 s) is roughly twice the real time.
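Once the Thread objects are created correctly (passing the function, not its result), the I/O-bound downloads do overlap in threads despite the GIL. A sketch reusing the question's URLs, which are unreachable outside that network, hence the error handling:

```python
import threading
import urllib.request

# URLs from the question; unreachable outside that LAN
urls = [
    "http://192.168.85.16/SOME_FILE",
    "http://192.168.85.10/SOME_FILE",
]
results = {}

def download(url):
    try:
        with urllib.request.urlopen(url, timeout=1) as resp:
            results[url] = resp.read()  # downloads overlap across threads
    except OSError as exc:  # covers URLError and timeouts
        results[url] = exc

threads = [threading.Thread(target=download, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Both requests are in flight at the same time, so total wall-clock time is close to the slowest single download rather than the sum of both.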
