I am making an API subscription to fetch real-time live data from one of the API providers.
However, I only want to pull the data every few seconds (e.g. 5 seconds).
I am using the code snippet below, but I am unable to implement the sleep or delay effectively.
Can you please help me understand why the API is not adhering to the 5-second wait?
import time
from datetime import datetime

api_ABC_connection = apiConnect(api_key="<apikey>")
api_ABC_connection.ws_connect()

abc_list = []

# Callback to receive ticks.
def on_ticks(ticks):
    print('###################')
    print(datetime.now())
    time.sleep(5)
    fetch_time_dict = {}
    fetch_time_dict['fetch_time'] = datetime.now()
    abc_list.append(fetch_time_dict)
    #print("Ticks: {}".format(ticks))
    abc_list.append(ticks)
    print(datetime.now())
    return abc_list

# Assign the callbacks.
api_ABC_connection.on_ticks = on_ticks

# Subscribe to stock feeds.
api_ABC_connection.subscribe_feeds(<feeds parameters>)
Assuming api_ABC_connection calls the callback function asynchronously, you can try adding a lock. Try this; it may work:
import multiprocessing

lock = multiprocessing.Lock()

def on_ticks(ticks):
    print('###################')
    print(datetime.now())
    lock.acquire()
    time.sleep(5)
    lock.release()
    fetch_time_dict = {}
    fetch_time_dict['fetch_time'] = datetime.now()
    abc_list.append(fetch_time_dict)
    #print("Ticks: {}".format(ticks))
    abc_list.append(ticks)
    print(datetime.now())
    return abc_list
You may want to move the acquire() and release() calls elsewhere; it depends on what behavior you actually expect.
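Alternatively, if the goal is to record data every 5 seconds rather than to slow the feed itself down, there is no need to block the callback at all; a timestamp check can simply drop ticks that arrive too soon. A minimal sketch (assuming on_ticks is invoked once per incoming tick):

from datetime import datetime, timedelta

THROTTLE = timedelta(seconds=5)
last_fetch = datetime.min

def on_ticks(ticks):
    global last_fetch
    now = datetime.now()
    # Ignore ticks that arrive less than 5 seconds after the last recorded one.
    if now - last_fetch < THROTTLE:
        return
    last_fetch = now
    abc_list.append({'fetch_time': now})
    abc_list.append(ticks)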
Related
Is there any simple way to make a thread fire a function every X seconds, to display some data?
import threading

def send_data():
    data = "Message from client"
    socket.sendall(data.encode())

write_thread = threading.Thread(target=send_data)
write_thread.start()
You could try the ischedule module - it provides very straightforward syntax for scheduling any given function.
Here's an example straight from the GitHub page:
from ischedule import run_loop, schedule

@schedule(interval=0.1)
def task():
    print("Performing a task")

run_loop(return_after=1)
The return_after param in run_loop() is an optional timeout.
Also, in case you're unfamiliar, the @ syntax is a Python decorator.
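Applied to the send_data function from the question (assuming socket is an already-connected socket object), the same pattern might look like:

from ischedule import run_loop, schedule

@schedule(interval=5)
def send_data():
    # Send the message every 5 seconds; "socket" is assumed to be
    # an already-connected socket object from the question's context.
    data = "Message from client"
    socket.sendall(data.encode())

run_loop()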
A simple way would be this:
import time

while True:
    task()
    time.sleep(1)
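One caveat: this interval drifts by however long task() takes. If a steadier period matters, subtract the elapsed time from the sleep:

import time

INTERVAL = 1.0
while True:
    start = time.monotonic()
    task()
    # Sleep only for the remainder of the interval, so the period stays
    # close to INTERVAL even when task() itself takes time.
    time.sleep(max(0.0, INTERVAL - (time.monotonic() - start)))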
I am writing a script which sends a serial message over a websocket to a device. When I want to start the device, I write:
def start(ws):
    """
    Function to send the start command
    """
    print("start")
    command = dict()
    command["commandId"] = 601
    command["id"] = 54321
    command["params"] = {}
    send_command(ws, command)
Every 5 hours or so the device restarts; during the restart, my start request does not run and my code stops completely.
My question is: is there a way to tell Python, "If nothing has happened for 1 minute, try again"?
It's not clear exactly what ws is or how you set it up, but you want to add a timeout to the socket.
https://websockets.readthedocs.io/en/stable/api.html#websockets.client.connect has a timeout keyword; refer to the documentation for details about what it does.
If this is not the websocket library you are using, please update your question with details.
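If you are using the asyncio-based websockets library, one common pattern for "if nothing arrives for a minute, try again" is to wrap the receive call in asyncio.wait_for. A minimal sketch (the URI and the recovery action are assumptions):

import asyncio
import websockets

async def listen(uri):
    async with websockets.connect(uri) as ws:
        while True:
            try:
                # Give up waiting after 60 seconds of silence.
                message = await asyncio.wait_for(ws.recv(), timeout=60)
            except asyncio.TimeoutError:
                print("No data for 1 minute, trying again...")
                continue
            print(message)

asyncio.run(listen("ws://device.local:8080"))  # hypothetical device address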
You can use sleep from the time module:
import time
time.sleep(60) # waits for 1 minute
Also, consider multithreading with sleep:
import threading
import time

def print_hello():
    for i in range(4):
        time.sleep(0.5)
        print("Hello")

def print_hi():
    for i in range(4):
        time.sleep(0.7)
        print("Hi")

t1 = threading.Thread(target=print_hello)
t2 = threading.Thread(target=print_hi)

t1.start()
t2.start()
The above program has two threads. time.sleep(0.5) and time.sleep(0.7) suspend execution of these two threads for 0.5 seconds and 0.7 seconds respectively.
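Tying this back to the question, a sketch of "if the start command fails, wait a minute and try again" (the retry count and the broad except are assumptions, since the exact failure mode wasn't specified):

import time

def start_with_retry(ws, retries=5):
    for attempt in range(retries):
        try:
            start(ws)  # the start() function from the question
            return
        except Exception:
            print("start failed, retrying in 60 seconds...")
            time.sleep(60)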
I'm new to Python threading and what I'm trying to do is:
One thread with a while loop that executes a GET request to an API every N seconds to refresh the data.
A second thread with a while loop that uses that data (IP addresses) to ping targets every N seconds.
So I was looking for a way to start the first thread, start the second only after the first API call, and share the data with the second thread so it can execute its logic.
Can anyone help me, please? Thanks.
As per your requirements, here is a simple boilerplate you might want to try:
import time
import threading

available = False

def thread1():
    global available
    while True:
        # TODO: call API
        # --------------
        available = True  # set available to True after the API call
        time.sleep(5)     # perform API calls every 5 seconds

def thread2():
    while True:
        # TODO: perform ping
        # --------------
        # perform ping requests every 5 seconds
        time.sleep(5)

if __name__ == "__main__":
    t1 = threading.Thread(target=thread1, name="thread1")
    t2 = threading.Thread(target=thread2, name="thread2")
    t1.start()
    # Wait until the first API call has completed before starting thread2.
    while not available:
        time.sleep(0.1)
    t2.start()
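A tidier alternative to polling a global flag is threading.Event, which lets the main flow block until the first API call completes. A sketch along the same lines (the API call and ping bodies are still TODOs):

import threading
import time

first_fetch_done = threading.Event()

def fetch_loop():
    while True:
        # TODO: call the API and store the results somewhere shared
        first_fetch_done.set()  # unblock anything waiting on the first fetch
        time.sleep(5)

def ping_loop():
    first_fetch_done.wait()  # blocks until the first API call has finished
    while True:
        # TODO: ping the fetched IP addresses
        time.sleep(5)

threading.Thread(target=fetch_loop, name="fetcher").start()
threading.Thread(target=ping_loop, name="pinger").start()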
I have written a program that I am using to benchmark a MongoDB database under multithreaded bulk-write conditions.
The problem is that the program hangs and does not finish executing.
I am quite sure the problem is due to writing 530838 records to the database while using 10 threads that bulk-write 50 records at a time. That leaves a remainder of 38 records; the run method always fetches 50 records from the queue, so the process hangs once 530800 records have been written and never writes the final 38, because the following code never finishes executing:
for object in range(50):
    objects.append(self.queue.get())
I would like the program to write 50 records at a time until fewer than 50 remain, at which point it should write the remaining records in the queue and then exit the thread when no records remain.
Thanks in advance :)
import threading
import Queue
import json
from pymongo import MongoClient, InsertOne
import datetime

#Set the number of threads
n_thread = 10

#Create the queue
queue = Queue.Queue()

#Connect to the database
client = MongoClient("mongodb://mydatabase.com")
db = client.threads

class ThreadClass(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        #Assign thread working with queue
        self.queue = queue

    def run(self):
        while True:
            objects = []
            #Get next 50 objects from queue
            for object in range(50):
                objects.append(self.queue.get())
            #Insert the queued objects into the database
            db.threads.insert_many(objects)
            #signals to queue job is done
            self.queue.task_done()

#Create the threads
threads = []
for i in range(n_thread):
    t = ThreadClass(queue)
    t.setDaemon(True)
    #Start thread
    t.start()

#Start timer
starttime = datetime.datetime.now()

#Read json object by object
content = json.load(open("data.txt", "r"))
for jsonobj in content:
    #Put object into queue
    queue.put(jsonobj)

#wait on the queue until everything has been processed
queue.join()
for t in threads:
    t.join()

#Print the total execution time
endtime = datetime.datetime.now()
duration = endtime - starttime
print(divmod(duration.days * 86400 + duration.seconds, 60))
From the docs on Queue.get, you can see that the default settings are block=True and timeout=None, so a get on an empty queue blocks until an item becomes available.
You could use get_nowait or get(False) to ensure you're not blocking. If you want the blocking to be conditional on whether the queue has 50 items, whether it is empty, or other conditions, you can use Queue.empty and Queue.qsize, but note that they do not provide race-condition-proof guarantees of non-blocking behavior... they would merely be heuristics for whether to use block=False with get.
Something like this:
def run(self):
    while True:
        objects = []
        #Get the next (up to) 50 objects from the queue; block only when
        #the queue appears to have a full batch available
        block = self.queue.qsize() >= 50
        for i in range(50):
            try:
                item = self.queue.get(block=block)
            except Queue.Empty:
                break
            objects.append(item)
        if objects:
            #Insert the queued objects into the database
            db.threads.insert_many(objects)
        #Signal task_done once per item taken from the queue,
        #so that queue.join() can eventually return
        for i in range(len(objects)):
            self.queue.task_done()
Another approach would be to set timeout and use a try ... except block to catch any Empty exceptions that are raised. This has the advantage that you can decide how long to wait, rather than heuristically guessing when to immediately return, but they are similar.
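For example, with a 1-second grace period (the value is arbitrary):

def run(self):
    while True:
        objects = []
        for i in range(50):
            try:
                # Wait up to 1 second for the next item before giving up.
                objects.append(self.queue.get(timeout=1))
            except Queue.Empty:
                break
        if objects:
            db.threads.insert_many(objects)
        for i in range(len(objects)):
            self.queue.task_done()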
Also note that I changed your loop variable from object to i; you should avoid having a loop variable shadow the built-in object class.
I have an XML-RPC server, and the client runs some functions on the server and gets the returned values. If the function executes quickly then everything is fine, but I have a function that reads from a file and returns some value to the user. The read takes about a minute (there is some complicated stuff), and when one client runs this function on the server, the server is unable to respond to other users until the function is done.
I would like to create a new thread that will read this file and return the value to the user. Is that possible somehow?
Are there any good solutions/patterns to avoid blocking the server while one client runs a long function?
Yes, it is possible, this way:
import threading

class DataFetcher:
    #starting the thread
    def start_thread(self):
        threading.Thread(target=self.new_thread, args=()).start()

    #the thread in which your logic runs
    def new_thread(self, *args):
        #call the function you want to retrieve data from
        value_returned = self.retrieved_data_func()

    #the function that returns the data
    def retrieved_data_func(self):
        arg0 = 0
        return arg0
Yes, using the threading module you can spawn new threads. See the documentation. An example would be this:
import threading
import time

def main():
    print("main: 1")
    thread = threading.Thread(target=threaded_function)
    thread.start()
    time.sleep(1)
    print("main: 3")
    time.sleep(6)
    print("main: 5")

def threaded_function():
    print("thread: 2")
    time.sleep(4)
    print("thread: 4")

main()
This code uses time.sleep to simulate that an action takes a certain amount of time. The output should look like this:
main: 1
thread: 2
main: 3
thread: 4
main: 5
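For the XML-RPC question specifically, another well-known pattern is to make the server handle each request in its own thread via socketserver.ThreadingMixIn, so one slow call no longer blocks other clients. A sketch (Python 3 module names; the port and the slow function are placeholders):

from socketserver import ThreadingMixIn
from xmlrpc.server import SimpleXMLRPCServer

class ThreadedXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
    """XML-RPC server that serves each request in a separate thread."""
    pass

def slow_read():
    # placeholder for the minute-long file read
    return "done"

server = ThreadedXMLRPCServer(("localhost", 8000))
server.register_function(slow_read)
server.serve_forever()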