How to execute a program at specific times - Python

I have a program that I need to execute at certain intervals. For instance, I may want it to execute every five minutes. I have several coordinators that communicate with several end node devices; the code below runs on the coordinators. I need it so that if the interval is set to 5, it runs and records information at, for example, 9:05, 9:10, 9:15, 9:20, 9:25, and so on. The code I have so far is as follows:
if __name__ == '__main__':
    while True:
        try:
            r = json.read(rqst_command())
            interval = r.get('intvl')
            collect_time = r.get('c_time')
            command = r.get('cmd')
            send_stats(cmd_nodes(command, collect_time))
            time.sleep(interval)
        except Exception, e:
            print e
            print "**Top Level Exception"
            pass
The problem is that if I set the interval to 5 minutes, it does not record exactly every 5 minutes. The execution time slowly drifts later and later, so the code above may record at, for example, 9:05:09, 9:10:19, 9:15:29, 9:20:41, 9:25:50. How long the program takes to run depends on how quickly the nodes communicate back.
Does anybody have any ideas on how I can change my code so that the program executes exactly every 5 minutes?
EDIT/UPDATE
I think I have figured out a way to handle my problem. I grab the current datetime and then check whether it is on the 5-minute mark. If it is, I record the datetime and send it to the send_stats function; that way the datetime will always be exactly what I want it to be. If it is not on the 5-minute mark, I sleep for a while and then check again. I have the code mostly completed. However, I am getting the following error when I run the program: 'builtin_function_or_method' object has no attribute 'year'.
What am I doing incorrectly?
Here is my new code:
import os
import json
import datetime
from datetime import datetime
import urllib2
from urllib import urlencode
from socket import *
import time
import zigbee
import select

if __name__ == '__main__':
    while True:
        try:
            r = json.read(rqst_command())
            interval = r.get('intvl')
            collect_time = r.get('c_time')
            command = r.get('cmd')
            tempTest = True
            while tempTest == True:
                start_time = datetime.now
                compare_time = datetime(start_time.year, start_time.month, start_time.day, start_time.hour, 0, 0)
                difff = int((start_time - compare_time).total_seconds() / 60)
                if((difff % interval) == 0):
                    c_t = datetime(start_time.year, start_time.month, start_time.day, start_time.hour, start_time.minute, 0)
                    send_stats(cmd_nodes(command, collect_time), c_t)
                    tempTest = False
                else:
                    time.sleep(30)
        except Exception, e:
            print e
            print "**Top Level Exception"
            pass

The code below is what ended up solving my problem. Now, no matter what the interval is, it always uses a datetime starting at the beginning of each hour and incrementing by the interval. So if the interval is 5, the datetime shows up as, for instance, 9:00:00, 9:05:00, 9:10:00, 9:15:00, 9:20:00, 9:25:00, and so on. If the interval is 3, the datetime shows up as, for instance, 5:00:00, 5:03:00, 5:06:00, 5:09:00, 5:12:00, 5:15:00, and so on. The coordinator gets the data from the end nodes and then sends the data to a remote server along with the datetime.
if __name__ == '__main__':
    last_minute = -1
    while True:
        try:
            r = json.read(rqst_command())
            print r
            command = r.get('cmd')
            interval = r.get('intvl')
            collect_time = r.get('c_time')
            tempTest = True
            while tempTest == True:
                start_time = datetime.now()
                s_minute = start_time.minute
                if(((s_minute % interval) == 0) and (s_minute != last_minute)):
                    tempTest = False
                    c_t = datetime(start_time.year, start_time.month, start_time.day, start_time.hour, start_time.minute, 0)
                    last_minute = c_t.minute
                    send_stats(cmd_nodes(command, collect_time), c_t)
                    time.sleep(1)
                else:
                    time.sleep(1)
        except Exception, e:
            print e
            print "**Top Level Exception"
            pass

As others have pointed out, there are already ready-to-use job/task schedulers like cron, and you can simply use them. But you could also implement your own simple solution in Python, which is fine; you just have to do it the right way. The fundamental problem in your approach is that you sleep for a fixed interval between actions and never check the system time. With that method, the duration of the action becomes the error of your time measurement, and this error adds up with each action. You need a time reference that is free of this error, and that is the system time.
Implementation example:
Suppose, for instance, that one-second precision is good enough for you. Then check the system time every second within a loop; this you can safely realize with time.sleep(1). If the system time is, for example, 5 minutes later than last_action_execution_time (which you have stored somewhere), store the current time as last_action_execution_time and execute the action. As long as the action reliably takes less than 5 minutes, the next execution will happen at last_action_execution_time + 5 min with only a very small error. Most importantly, this error does not grow with the number of executions during runtime.
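A minimal sketch of that idea (run_action() here is only a stand-in for the send_stats(cmd_nodes(...)) call from the question):
import time

def run_action():
    # placeholder for the real work, e.g. send_stats(cmd_nodes(command, collect_time))
    print("collecting")

INTERVAL = 5 * 60                         # desired spacing in seconds
last_action_execution_time = time.time()

while True:
    now = time.time()
    if now - last_action_execution_time >= INTERVAL:
        last_action_execution_time = now  # the reference is the system time, not the sleep
        run_action()
    time.sleep(1)                         # one-second check resolution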
For a rock-solid Python-based solution you should also look at http://docs.python.org/library/sched.html.
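For illustration, a short sketch with the standard-library sched module (the collect() body and the 5-minute interval are placeholders):
import sched
import time

INTERVAL = 5 * 60  # seconds
scheduler = sched.scheduler(time.time, time.sleep)

def collect(planned):
    print("collecting")  # placeholder for the real work
    # schedule the next run relative to the planned time, so errors do not accumulate
    scheduler.enterabs(planned + INTERVAL, 1, collect, (planned + INTERVAL,))

first = time.time() + INTERVAL
scheduler.enterabs(first, 1, collect, (first,))
scheduler.run()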

how about:
while True:
    try:
        start = time.time()  # save the beginning time before execution starts
        r = json.read(rqst_command())
        interval = r.get('intvl')
        collect_time = r.get('c_time')
        command = r.get('cmd')
        send_stats(cmd_nodes(command, collect_time))
        # busy-wait until the full interval has ended; interval is assumed to be in minutes
        while time.time() <= start + interval * 60:
            pass
    except Exception, e:
        print e
        print "**Top Level Exception"
        pass

If you use Linux, you probably want to set up a cron job which runs your script at the desired interval.

There are two ways to do this.
The first and best would be to use your OS's task scheduler (Task Scheduler on Windows, cron on Linux). The developers of these tools probably anticipated more issues than you can imagine, and code you don't have to write yourself saves time and probably bugs.
Otherwise, you need to take into account the execution time of your script. The simplest way to do that is, instead of sleeping for your interval (which, as you saw, slowly slides forward), to compute the next time you should execute based on when you last woke up, and after execution sleep only for the interval between now and then.
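A rough sketch of that bookkeeping (do_work() is a placeholder for the real job):
import time

def do_work():
    print("working")  # stand-in for the real job

INTERVAL = 5 * 60                   # seconds
next_run = time.time()              # run the first iteration right away

while True:
    do_work()
    next_run += INTERVAL            # advance the planned schedule
    delay = next_run - time.time()  # whatever time the job took is subtracted here
    if delay > 0:                   # if the job overran the interval, start again immediately
        time.sleep(delay)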

I'm assuming you want to do this in Python and not rely on any other systems.
You just need to account for when your process starts and when it ends and set your interval accordingly. The code will look something like this.
if __name__ == '__main__':
    while True:
        try:
            r = json.read(rqst_command())
            interval = r.get('intvl')
            collect_time = r.get('c_time')
            command = r.get('cmd')
            start_time = datetime.now()
            send_stats(cmd_nodes(command, collect_time))
            end_time = datetime.now()
            # sleep for whatever part of the interval (in seconds) the work did not use
            sleepy_time = interval - (end_time - start_time).total_seconds()
            if sleepy_time > 0:
                time.sleep(sleepy_time)
        except Exception, e:
            print e
            print "**Top Level Exception"
            pass

Related

Schedule an iterative function every x seconds without drifting

Complete newbie here, so bear with me. I've got a number of devices that report status updates to a single location, and as more sites have been added, drift with time.sleep(x) is becoming more noticeable; with as many sites as are connected now, it has completely doubled the sleep time between iterations.
import time
...

def client_list():
    sites = pandas.read_csv('sites')
    return sites['Site']

def logs(site):
    time.sleep(x)
    if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
        stamp = time.strftime('%Y-%m-%d,%H:%M:%S')
        log = open(f"{site}/log", 'a')
        log.write(f",{stamp},{site},hit\n")
        log.close()
        os.remove(f"{site}/target/hit")
    else:
        stamp = time.strftime('%Y-%m-%d,%H:%M:%S')
        log = open(f"{site}/log", 'a')
        log.write(f",{stamp},{site},miss\n")
        log.close()
...

if __name__ == '__main__':
    while True:
        try:
            client_list()
            with concurrent.futures.ThreadPoolExecutor() as executor:
                executor.map(logs, client_list())
...
I did try adding calculations for drift with this:
from datetime import datetime, timedelta

def logs(site):
    first_called = datetime.now()
    num_calls = 1
    drift = timedelta()
    time_period = timedelta(seconds=5)
    while 1:
        time.sleep(n - drift.microseconds / 1000000.0)
        current_time = datetime.now()
        num_calls += 1
        difference = current_time - first_called
        drift = difference - time_period * num_calls
        if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
            ...
It ends up with duplicate entries in the log, and the process still drifts.
Is there a better way to schedule the function to run every x seconds and account for the drift in start times?
Create a variable equal to the desired system time at the next interval. Increment that variable by 5 seconds each time through the loop. Calculate the sleep time so that the sleep will end at the desired time. The timings will not be perfect because sleep intervals are not super precise, but errors will not accumulate. Your logs function will look something like this:
def logs(site):
    next_time = time.time() + 5.0
    while 1:
        time.sleep(max(next_time - time.time(), 0))
        next_time += 5.0
        if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
            # do something that takes a while
So I managed to find another route that doesn't drift; the other method still drifted over time. By capturing the current time and seeing whether it is divisible by x (5 in the example below), I was able to keep the time from deviating.
def timer(t1, t2):
    return True if t1 % t2 == 0 else False

def logs(site):
    while 1:
        try:
            if timer(round(time.time(), 0), 5.0):
                if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
                    # do something that takes a while
                time.sleep(1)  # this kept it from running again immediately if the process was shorter than 1 second
            ...

How to sleep python script for xx minutes after every hour execution?

I am trying to make a Python script that runs in a loop, iterating through a text file, working for periods of one hour with a 30-minute pause between each hour-long loop.
After some searching I found this piece of code :
import datetime
import time

delta_hour = 0
while True:
    now_hour = datetime.datetime.now().hour
    if delta_hour != now_hour:
        # run your code
        delta_hour = now_hour
    time.sleep(1800)  # 1800 seconds sleep
    # add some way to exit the infinite loop
This code has a few issues though:
It does not consider one-hour periods from when the script starts running
It does not seem to work continuously for periods over one hour
Considering what I am trying to achieve (running the script for 1 hour before each 30-minute pause), what is the best approach to this? Cron is not an option here.
For clarification :
1hour run -- 30min pause -- repeat
Thanks
Here is some very simple code I have written for teaching purposes, which should be clear:
import time
from datetime import datetime

class control_process():
    def __init__(self, working_period, sleeping_period):
        self.working_period = working_period    # working period in minutes
        self.sleeping_period = sleeping_period  # sleeping period in minutes
        self.reset()

    def reset(self):
        self.start_time = datetime.utcnow()  # set starting point

    def manage(self):
        m = (datetime.utcnow() - self.start_time).seconds / 60  # how long since starting point
        if m >= self.working_period:               # if exceeded the working period
            time.sleep(self.sleeping_period * 60)  # time to sleep in seconds
            self.reset()                           # then reset the starting point
            return                                 # go on and continue working

cp = control_process(60, 30)  # work for 60 minutes and sleep for 30 minutes
while True:  # your code loop
    cp.manage()
    '''
    your code
    '''
in which the control_process object - I called it cp - calls cp.manage() inside your executing loop.
You reset the time via cp.reset() before going into the loop, or whenever you want.
Based on Comments
The simplicity I mean is that you add this class to your general library so you can use it whenever you want: instantiate cp, then use one or two controlling functions, cp.manage(), which controls the working cycles, and cp.reset() if you want to use it in another location of the code. I believe that using a function is better than a long condition statement.
Using only the standard library, you could do something like having the script call itself using subprocess. By checking whether conditions are met, the process could do a task and then call itself again. Extending the logic with a kill pill would make it stop (I leave that up to you).
import argparse, time
from subprocess import call

DELAY = 60 * 30      # 30 minutes, in seconds
WORK_TIME = 60 * 60  # 60 minutes, in seconds

parser = argparse.ArgumentParser()
parser.add_argument("-s",
                    help="interval start time",
                    type=float,
                    default=time.time())
parser.add_argument("-t",
                    help="interval stop time",
                    type=float,
                    default=time.time() + WORK_TIME)

def do_task():
    # implement task
    print("working..")
    return

if __name__ == "__main__":
    args = parser.parse_args()
    start = args.s
    stop = args.t
    # work
    if start < time.time() < stop:
        do_task()
    # shift target
    else:
        start = time.time() + DELAY
        stop = start + WORK_TIME
    call(f"python test.py -t {stop} -s {start}".split())
The simplest solution I could come up with was the following piece of code, which I added inside my main thread :
from time import time, sleep

start_time = int(time())
... # main thread code
# main thread code end
if int(time() - start_time) >= 60 * 60:
    print("pausing time")
    sleep(30 * 60)
    start_time = int(time())
From the moment the script starts, this will pause every hour for 30 minutes and resume afterwards.
Simple yet effective!
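For reference, a self-contained sketch of the same pattern; do_iteration() and the two period constants are placeholders:
from time import time, sleep

def do_iteration():
    sleep(1)  # placeholder for one pass over the text file

WORK_PERIOD = 60 * 60   # run for one hour
PAUSE_PERIOD = 30 * 60  # then pause for 30 minutes

start_time = int(time())
while True:
    do_iteration()
    if int(time()) - start_time >= WORK_PERIOD:
        print("pausing time")
        sleep(PAUSE_PERIOD)
        start_time = int(time())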

Generate some "random" start times for scripts to run based on a period of time in python

I'm trying to generate some random seeded times to tell my main script when to fire each of the other scripts.
I want to set a time frame of:
START_TIME = "02:00"
END_TIME = "03:00"
When it reaches the start time, it needs to look at how many scripts we have to run:
script1.do_proc()
script2.alter()
script3.noneex()
In this case there are 3 to run, so it needs to generate 3 randomized times to start those scripts, with a minimum separation of 5 minutes between each script, and the times must fall within the window set by START_TIME and END_TIME.
But it also needs to know that script1.main is ALWAYS the first script to fire; the other scripts can be shuffled around (random).
So we could potentially have script1 running at 01:43, then script3 running at 01:55, and then script2 might run at 02:59.
We could also potentially have script1 running at 01:35, then script3 running at 01:45, and then script2 might also run at 01:45, which is also fine.
My script so far can be found below:
import random
import pytz
from time import sleep
from datetime import datetime

import script1
import script2
import script3

START_TIME = "01:21"
END_TIME = "03:00"

while 1:
    try:
        # Set current time & dates for GMT, London
        CURRENT_GMTTIME = datetime.now(pytz.timezone('Europe/London')).strftime("%H%M")
        CURRENT_GMTDAY = datetime.now(pytz.timezone('Europe/London')).strftime("%d%m%Y")
        sleep(5)

        # Grab old day for comparisons
        try:
            with open("DATECHECK.txt", 'rb') as DATECHECK:
                OLD_DAY = DATECHECK.read()
        except IOError:
            with open("DATECHECK.txt", 'wb') as DATECHECK:
                DATECHECK.write("0")
            OLD_DAY = 0

        # Check for new day, if it's a new day do more
        if int(CURRENT_GMTDAY) != int(OLD_DAY):
            print "New Day"

            # Check that we are in the correct period of time to start running
            if int(CURRENT_GMTTIME) <= int(START_TIME.replace(":", "")) and int(CURRENT_GMTTIME) >= int(END_TIME.replace(":", "")):
                print "Correct time, starting"

                # Unsure how to seed the start times for the scripts below
                script1.do_proc()
                script2.alter()
                script3.noneex()
                # Unsure how to seed the start times for above

                # Save the current day to prevent it from running again today.
                with open("DATECHECK.txt", 'wb') as DATECHECK:
                    DATECHECK.write(CURRENT_GMTDAY)
                print "Completed"
            else:
                pass
        else:
            pass
    except Exception:
        print "Error..."
        sleep(60)
EDIT 31/03/2016
Let's say I add the following
SCRIPTS = ["script1.test()", "script2.test()", "script3.test()"]
MAIN_SCRIPT = "script1.test()"
TIME_DIFFERENCE = datetime.strptime(END_TIME, "%H:%M") - datetime.strptime(START_TIME, "%H:%M")
TIME_DIFFERENCE = TIME_DIFFERENCE.seconds
We now have the number of scripts to run.
We have the list of the scripts to run.
We have the name of the main script, the one to run first.
We have the time in seconds showing how much total time we have to run all the scripts within.
Surely there is a way to just plug in some sort of loop to make it do it all:
for i in range(len(SCRIPTS)), which is 3 times
Generate 3 seeds, making sure the minimum time is 300 and that all 3 seeds together do not exceed TIME_DIFFERENCE
Create the start time based on RUN_TIME = START_TIME and then RUN_TIME = RUN_TIME + SEED[i]
The first loop would check that MAIN_SCRIPT exists within SCRIPTS; if it does, it would run that script first and delete it from SCRIPTS, and then on the next loops, since it no longer exists in SCRIPTS, it would switch to randomly calling one of the other scripts.
Seeding the times
The following appears to work, there might be an easier way of doing this though.
CALCULATE_SEEDS = 0
NEW_SEED = 0
SEEDS_SUCESSS = False
SEEDS = []

while SEEDS_SUCESSS == False:
    # Generate a new seed number
    NEW_SEED = random.randrange(0, TIME_DIFFERENCE)

    # Make sure the seed is above the minimum number
    if NEW_SEED > 300:
        SEEDS.append(NEW_SEED)

    # Make sure we have the same amount of seeds as scripts before continuing.
    if len(SEEDS) == len(SCRIPTS):
        # Calculate all of the seeds together
        for SEED in SEEDS:
            CALCULATE_SEEDS += SEED

        # Make sure the calculated seeds added together is smaller than the total time difference
        if CALCULATE_SEEDS >= TIME_DIFFERENCE:
            # Reset and try again if it's not below the number
            SEEDS = []
        else:
            # Exit while loop if we have a correct amount of seeds with minimum times.
            SEEDS_SUCESSS = True
Use datetime.timedelta to compute time differences. This code assumes all three processes run on the same day
from datetime import datetime, timedelta
from random import randint

YR, MO, DY = 2016, 3, 30
START_TIME = datetime(YR, MO, DY, 1, 21, 0)  # "01:21"
END_TIME = datetime(YR, MO, DY, 3, 0, 0)     # "03:00"

duration_all = (END_TIME - START_TIME).seconds
d1 = (duration_all - 600) // 3
#
rnd1 = randint(0, d1)
rnd2 = rnd1 + 300 + randint(0, d1)
rnd3 = rnd2 + 300 + randint(0, d1)
#
time1 = START_TIME + timedelta(seconds=rnd1)
time2 = START_TIME + timedelta(seconds=rnd2)
time3 = START_TIME + timedelta(seconds=rnd3)
#
print (time1)
print (time2)
print (time3)
Values of rnd1, rnd2 and rnd3 are at least 5 minutes (300 seconds) apart.
The value of rnd3 cannot be greater than the total time interval (3 * d1 + 600), so all three times fall inside the interval.
NB: You did not specify how much time each script runs; that is why I did not use time.sleep. A possible option would be threading.Timer (see the Python documentation).
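A minimal sketch of the threading.Timer option; the jobs and offsets here are made-up placeholders, and rnd1, rnd2 and rnd3 would come from the computation above:
import threading

def job1():
    print("script1")  # stand-in for script1.do_proc()

def job2():
    print("script2")  # stand-in for script2.alter()

def job3():
    print("script3")  # stand-in for script3.noneex()

rnd1, rnd2, rnd3 = 10, 320, 650  # example offsets in seconds

for delay, job in ((rnd1, job1), (rnd2, job2), (rnd3, job3)):
    threading.Timer(delay, job).start()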
Assume you store all the method.func() calls in an array and, as you described, the subsequent scripts must start at least 5 minutes after script1. They can be executed randomly, so we can launch multiple processes and let each one sleep for a period before it automatically starts. (Timing is in seconds.)
from multiprocessing import Process
import os
import random
import time

# store all scripts you want to execute here
eval_scripts = ["script1.test()", "script2.test()", "script3.test()"]

# run job on different processes. non-blocking
def run_job(eval_string, time_sleep):
    # print out script + time to test
    print eval_string + " " + str(time_sleep)
    time.sleep(time_sleep)  # wait to be executed
    # time to start
    eval(eval_string)

def do_my_jobs():
    start_time = []
    # assume the duration between start_time and end_time is 60 mins; leave some time for
    # other jobs after the first job (5-10 mins). This is just to be careful in case
    # random.randrange returns the largest number.
    # adjust this according to the duration between start_time and end_time since
    # calculating (end_time - start_time) is trivial.
    proc1_start_time = random.randrange(60*60 - 10*60)
    start_time.append(proc1_start_time)
    # randomize timing for other procs != first script
    for i in range(len(eval_scripts) - 1):
        # randomize time from (proc1_start_time + 5 mins) to (end_time - start_time)
        start_time.append(random.randint(proc1_start_time + 5*60, 60*60))
    procs = []
    for i in range(len(eval_scripts)):
        p_t = Process(target=run_job, args=(eval_scripts[i], start_time[i]))
        p_t.start()
        procs.append(p_t)
    # join only after all processes have been started, so the launches are not serialized
    for p_t in procs:
        p_t.join()
Now all you need to do is to call do_my_jobs() only ONCE at START_TIME every day.

Set a timer for running a process, pause the timer under certain conditions

I've got this program:
import multiprocessing
import time

def timer(sleepTime):
    time.sleep(sleepTime)
    fooProcess.terminate()
    fooProcess.join()  # line said to "cleanup", not sure if it is required, refer to goo.gl/Qes6KX

def foo():
    i = 0
    while 1:
        print i
        time.sleep(1)
        i += 1
        if i == 4:
            # pause timerProcess for X seconds

fooProcess = multiprocessing.Process(target=foo, name="Foo", args=())
timer()
fooProcess.start()
And as you can see in the comment, under certain conditions (in this example, i has to be 4) the timer has to stop for a certain time X, while foo() keeps working.
Now, how do I implement this?
N.B.: this code is just an example; the point is that I want to pause a process under certain conditions for a certain amount of time.
I think you're going about this wrong for game design. Games always (no exceptions come to mind) use a primary event loop controlled in software.
Each time through the loop you check the time and fire off all the necessary events based on how much time has elapsed. At the end of the loop you sleep only as long as necessary before you get to the next timer, event, refresh, AI check, or other state change.
This gives you the best performance regarding lag, consistency, predictability, and other timing features that matter in games.
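A bare-bones sketch of such a loop; the two timers and their actions are invented for illustration:
import time

def tick():
    print("tick")      # stand-in for a per-second game event

def ai_check():
    print("ai check")  # stand-in for a slower periodic event

timers = [
    {"period": 1.0, "next": time.time() + 1.0, "action": tick},
    {"period": 5.0, "next": time.time() + 5.0, "action": ai_check},
]

while True:
    now = time.time()
    for t in timers:
        if now >= t["next"]:          # fire everything whose deadline has passed
            t["action"]()
            t["next"] += t["period"]  # reschedule relative to the planned time
    # sleep only until the nearest upcoming deadline
    next_deadline = min(t["next"] for t in timers)
    time.sleep(max(0, next_deadline - time.time()))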
roughly:
1. Get the current timestamp at start time (time.time(), I presume).
2. Sleep with Event.wait(timeout=...).
3. Wake up either on the Event or on timeout.
4. If woken by the Event: get a timestamp, subtract the initial one from it, and subtract the result from the remaining timer; wait until foo() stops; then repeat Event.wait(timeout=[the result from step 4]).
5. If woken by timeout: exit.
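A rough sketch of those steps with threading.Event; the two event names are made up, with the worker setting pause_request when it hits the pause condition and pause_done once its pause is over:
import threading
import time

pause_request = threading.Event()  # set by the worker when the timer should pause
pause_done = threading.Event()     # set by the worker when the timer may continue

def timer(sleep_time):
    remaining = sleep_time
    while remaining > 0:
        started = time.time()
        hit = pause_request.wait(timeout=remaining)  # True if the event fired, False on timeout
        if not hit:
            break                                    # plain timeout: the timer ran out
        remaining -= time.time() - started           # keep only the unused part of the timer
        pause_request.clear()
        pause_done.wait()                            # block while the worker is paused
        pause_done.clear()
    print("timer expired")                           # terminate/join the worker process here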
Here is an example of how I understand what your program should do:
import threading, time, datetime

ACTIVE = True

def main():
    while ACTIVE:
        print "im working"
        time.sleep(.3)

def run(thread, timeout):
    global ACTIVE
    thread.start()
    time.sleep(timeout)
    ACTIVE = False
    thread.join()

proc = threading.Thread(target=main)
print datetime.datetime.now()
run(proc, 2)  # run for 2 seconds
print datetime.datetime.now()
In main() it does a periodic task, here printing something. In the run() method you can say how long main should do the task.
This code produces the following output:
2014-05-25 17:10:54.390000
im working
im working
im working
im working
im working
im working
im working
2014-05-25 17:10:56.495000
Please correct me if I've understood you wrong.
I would use multiprocessing.Pipe for signaling, combined with select for timing:
#!/usr/bin/env python
import multiprocessing
import select
import time

def timer(sleeptime, pipe):
    start = time.time()
    while time.time() < start + sleeptime:
        n = select.select([pipe], [], [], 1)  # sleep in 1s intervals
        for conn in n[0]:
            val = conn.recv()
            print 'got', val
            start += float(val)

def foo(pipe):
    i = 0
    while True:
        print i
        i += 1
        time.sleep(1)
        if i % 7 == 0:
            pipe.send(5)

if __name__ == '__main__':
    mainpipe, foopipe = multiprocessing.Pipe()
    fooProcess = multiprocessing.Process(target=foo, name="Foo", args=(foopipe,))
    fooProcess.start()
    timer(10, mainpipe)
    fooProcess.terminate()
    # since we terminated, mainpipe and foopipe are corrupt
    del mainpipe, foopipe
    # ...
    print 'Done'
I'm assuming that you want some condition in the foo process to extend the timer. In the sample I have set up, every time foo hits a multiple of 7 it extends the timer by 5 seconds while the timer initially counts down 10 seconds. At the end of the timer we terminate the process - foo won't finish nicely at all, and the pipes will get corrupted, but you can be certain that it'll die. Otherwise you can send a signal back along mainpipe that foo can listen for and exit nicely while you join.

How do I ensure that a Python while-loop takes a particular amount of time to run?

I'm reading serial data with a while loop. However, I have no control over the sample rate.
The code itself seems to take 0.2s to run, so I know I won't be able to go any faster than that. But I would like to be able to control precisely how much more slowly I sample.
I feel like I could do it using 'sleep', but the problem is that at different points the loop itself may take longer to run (depending on precisely what is being transmitted over serial), so the code would have to make up the balance.
For example, let's say I want to sample every 1s, and the loop takes anywhere from 0.2s to 0.3s to run. My code needs to be smart enough to sleep for 0.8s (if the loop takes 0.2s) or 0.7s (if the loop takes 0.3s).
import serial
import csv
import time

# open serial stream
while True:
    # read and print a line
    sample_value = ser.readline()
    sample_time = time.time() - zero
    sample_line = str(sample_time) + ',' + str(sample_value)
    outfile.write(sample_line)
    print 'time: ', sample_time, ', value: ', sample_value
Just measure the time running your code takes every iteration of the loop, and sleep accordingly:
import time

while True:
    now = time.time()            # get the time
    do_something()               # do your stuff
    elapsed = time.time() - now  # how long was it running?
    time.sleep(1. - elapsed)     # sleep accordingly so the full iteration takes 1 second
Of course it's not 100% perfect (it may be off by a millisecond or so from time to time), but I guess it's good enough.
Another nice approach is using twisted's LoopingCall:
from twisted.internet import task
from twisted.internet import reactor

def do_something():
    pass  # do your work here

task.LoopingCall(do_something).start(1.0)
reactor.run()
A rather elegant method, if you're working on UNIX, is to use the signal library.
The code:
import signal

def _handle_timeout(signum, frame):
    print "timeout hit"  # Do nothing here

def second(count):
    signal.signal(signal.SIGALRM, _handle_timeout)
    signal.alarm(1)
    try:
        count += 1  # put your function here
        signal.pause()
    finally:
        signal.alarm(0)
        return count

if __name__ == '__main__':
    count = 0
    count = second(count)
    count = second(count)
    count = second(count)
    count = second(count)
    count = second(count)
    print count
And the timing:
georgesl@cleese:~/Bureau$ time python timer.py
5
real 0m5.081s
user 0m0.068s
sys 0m0.004s
Two caveats though : it only works on *nix, and it is not multithread-safe.
At the beginning of the loop check if the appropriate amount of time has passed. If it has not, sleep.
# Set up initial conditions for sample_time outside the loop
sample_period = ???
next_min_time = 0

while True:
    sample_time = time.time() - zero
    if sample_time < next_min_time:
        time.sleep(next_min_time - sample_time)
        continue

    # read and print a line
    sample_value = ser.readline()
    sample_line = str(sample_time) + ',' + str(sample_value)
    outfile.write(sample_line)
    print 'time: {}, value: {}'.format(sample_time, sample_value)

    next_min_time = sample_time + sample_period
