Event Synchronization Using NTP - python

I would like to synchronize two events between two (or more) wired, networked Linux machines. Can I use NTP to do this?
NTP seems to be mostly focused on synchronizing to a time server, whereas I need two machines to be synchronized to each other. There is a subtle difference there. For example, if one machine is half as many hops from the time server as a second machine, I might get better synchronization by synchronizing the two machines to each other directly instead of synchronizing both to the time server.
A slightly different question: if I were to use NTP, what would be the best way to schedule events? A cron job or an at script? Could I get better (sub-second) synchronization if I were to use a library like this one?
Finally, does anyone know of any time synchronization software packages that are suited to synchronizing two (or more) machines to each other, not necessarily to a time server?
Thanks for any help.

You might try designating one machine as the master and the remaining machines as slaves. When the synchronized events should occur, the master triggers the slaves to commence.
The synchronization would be limited only by the latency (ping) between the machines, and you wouldn't need to worry about keeping the system clocks consistent.
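A minimal sketch of that master-side trigger, assuming each slave runs a small listener that blocks on recv() until the "go" message arrives; the addresses, port, and message are hypothetical placeholders:

    # Master side: send a "go" trigger to every slave at the event moment.
    import socket

    SLAVES = [("10.0.0.2", 9000), ("10.0.0.3", 9000)]  # hypothetical slave hosts

    def trigger_all():
        for addr in SLAVES:
            with socket.create_connection(addr, timeout=1.0) as conn:
                conn.sendall(b"GO\n")  # each slave blocks on recv() until this arrives

    trigger_all()

Sequential TCP connects add a little per-slave skew; a single UDP broadcast would trigger all slaves at once if that skew matters.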

What is the ping latency variation between your hosts? What kind of start time discrepancy between coordinated processes is acceptable for you? How are you going to start the processes? Cron is very imprecise, and process startup time needs to be accounted for, too.
If ping times to different hosts vary significantly, I'd do something like this.
Use a reliable public NTP server to synchronize the clocks on all coordinated hosts a few minutes before the event. (With frequent events, 3-4 synchronizations a day should be plenty.)
Using a low-precision scheduler like cron, 2-3 minutes ahead of time, start a simple wrapper shell script that sleeps until, say, 15 seconds before the event. The wrapper script then starts the target app with higher-than-normal priority.
The app loads (disk access and dynamic linking are slow), reads whatever data files it needs, makes all time-consuming calculations, etc. Then it waits until the start moment with sub-second precision using usleep() and ftime() or gettimeofday(), placed right before the fireLasersAtTheMoon() or whatever happens to be your target action.
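In Python, that final sub-second wait might look like this minimal sketch; target_epoch is a hypothetical Unix timestamp all hosts have agreed on:

    # Sleep coarsely, then shave the remainder in short steps so the wake-up
    # error stays in the sub-millisecond range on a typical Linux box.
    import time

    def wait_until(target_epoch):
        while True:
            remaining = target_epoch - time.time()
            if remaining <= 0:
                return
            time.sleep(remaining / 2 if remaining > 0.001 else 0.0002)

    wait_until(1735689600.0)  # ...then fireLasersAtTheMoon()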
Obviously, it makes little sense to synchronize so precisely actions that are naturally imprecise, like network communication. If your network has predictable latency, you can just measure it using ping round-trip times and have a master process on one host send a start command via ssh to the other host(s), taking the latency into account.

Related

Which is better to send notifications, a python loop/timer, or as a cron job?

I've built a notification system with a Raspberry Pi which checks a database every two minutes, and if any new entries are found, it sends out emails. I have it working two ways:
A Python script starts at boot and runs forever. It has a timer built into the loop. Every two minutes, the DB is checked and emails are sent.
A Python script is set to check the DB and send emails. A cron job is set to run this script every two minutes.
Which would be the better choice, and why?
Your first option, even if it uses a sleep, implements a kind of busy-waiting strategy (https://en.wikipedia.org/wiki/Busy_waiting). This strategy uses more CPU/memory than your second option (the cron approach), because your process footprint stays in memory even while it is doing nothing. With the cron approach, by contrast, your process only exists while it is doing useful work.
Imagine implementing this kind of approach for many programs running on your machine: a lot of memory would be consumed by processes in waiting states, and it would also have an impact (memory/CPU usage) on your OS's scheduling, since it would have more processes in its queue to manage.
Therefore, I would absolutely recommend the cron/scheduling approach. Your cron daemon will be running in the background whether or not you add the entry to the crontab, so why not add it?
Last but not least, imagine your busy-waiting process gets killed for any reason: with the first option you would need to restart it manually, and you might lose a couple of monitoring entries.
Hope it helps you.
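For reference, the crontab entry for the second option could be as simple as this; the script path and log file are hypothetical:

    # m h dom mon dow command -- run the check-and-email script every 2 minutes
    */2 * * * * /usr/bin/python3 /home/pi/check_db_and_notify.py >> /home/pi/notify.log 2>&1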

How to do weighted fair task queues for CPU intensive tasks (in Python)?

Problem
We run several calculations on geographical data from user input (called a "system"). Sometimes one system needs 10 locations to do calculations for, sometimes 1000+. One location takes approximately 1 second to calculate, hopefully we can speed this up in the future. We currently do this by using a multiprocessing Pool (from billiard) from within a Celery worker. This works in that it utilises all cores 100%, but there are two problems:
There are lingering connections (pipes, probably to the child procs) that cause the worker to hang when reaching the max open file limit (investigated, but haven't found a solution after more than a day of work)
We can't spread the calculations over multiple machines.
To solve these problems, I could run each calculation as a separate Celery task. However, we also want to schedule these calculations "fairly" for our users, so that:
Users working on small systems (say <50 locations) don't have to wait until a large system (>1000 locations) is finished. The larger the system, the less the increased waiting time matters to the user (they are doing something else anyway, and can get a notification). So this would be something akin to weighted fair queueing.
I have not been able to find a distributed task runner that implements this possibility of prioritisation. Did I miss one? I looked at Celery, RQ, Huey, MRQ, Pulsar Queue and some more, as well as into data processing pipelines like Luigi and Pinball, but none seem to easily enable this.
Most of these suggest creating priority by adding more workers for higher-priority queues. However, that wouldn't work, as the workers would start fighting for CPU time. (RQ does it differently: it empties the entire first queue passed in before moving on to the next.)
Proposed architecture
What I imagine would work is running a multiprocessing program, with a process per CPU, that fetches, in a WFQ fashion, from multiple Redis lists, each being a certain queue.
Would this be the right approach? Of course there is quite some work to be done on making the queue configuration dynamic (for example, also storing it in Redis and reloading it every couple of processed tasks), and on adding event monitoring to get insight.
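A minimal sketch of that per-process fetch loop, using weighted round-robin over Redis lists as an approximation of WFQ; the queue names and weights here are hypothetical:

    # One worker process: drain queues in proportion to their weights,
    # blocking only when every queue is empty.
    import redis

    QUEUES = {"tasks:small": 4, "tasks:large": 1}  # 4 small tasks per large one
    r = redis.Redis()

    def next_task():
        # One weighted round-robin pass; fall through to a blocking pop
        # when every queue turned up empty.
        for queue, weight in QUEUES.items():
            for _ in range(weight):
                task = r.lpop(queue)
                if task is not None:
                    return task
        _, task = r.blpop(list(QUEUES))
        return task

True WFQ would track per-queue virtual finish times weighted by system size, but since each location takes roughly the same time (about one second), weighted round-robin behaves similarly.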
Additional thoughts:
Each task needs around 3MB of data, coming from Postgres, which is the same for each location in the system (or at least for every couple of hundred locations). With the current approach, this resides in shared memory, and each process can access it quickly. I'll probably have to set up a local Redis instance on each machine to cache this data, so that not every process fetches it over and over again.
I keep hitting up on ZeroMQ, and it has a lot of enticing possibilities, but besides maybe the monitoring, it doesn't seem to be a good fit. Or am I wrong?
What would make more sense: running each worker as a separate program, and managing it with something like supervisor, or starting a single program, that forks a child for each CPU (no CPU count config necessary), and maybe also monitors its children for stuck processes?
We already run both RabbitMQ and Redis, so I could also use RMQ for the queues. It seems to me the only thing gained by using RMQ is the possibility of not losing tasks on worker crash by using acknowledgements, at the cost of using a more difficult library/complicated protocol.
Any other advice?

How do I use multiprocessing/multithreading to make my Python script quicker?

I am fairly new to Python and programming in general. I have written a script to go through a long list (~7000) of URLs and check their status to find any broken links. Predictably, this takes a few hours to request each URL one by one. I have heard that multiprocessing (or multithreading?) can be used to speed things up. What is the best approach to this? How many processes/threads should I run in one go? Do I have to create batches of URLs to check concurrently?
The answer to the question depends on whether the process spends most of its time processing data or waiting for the network. If it is the former, then you need to use multiprocessing, and spawn about as many processes as you have physical cores on the system. Also make sure you choose an appropriate algorithm for the task. Finally, if all else fails, coding parts of the program in C can be a viable solution as well.
If your program is slow because it spends a lot of time waiting for individual server responses, you can parallelize network access using threads or an asynchronous IO framework. In this case you can use many more threads than you have physical processor cores because most of the time your cores will be sleeping waiting for something interesting to happen. You will need to measure the results on your machine to find out the best number of threads that works for you.
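A minimal standard-library sketch of the threaded approach; the worker count of 20 and the example URLs are just placeholders to tune by measurement, as noted above:

    # Check many URLs concurrently with a thread pool; threads suit this
    # workload because each one mostly sleeps waiting on the network.
    import concurrent.futures
    import urllib.request

    def check(url):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return url, resp.status
        except Exception as exc:
            return url, exc  # broken or unreachable link

    urls = ["https://example.com/a", "https://example.com/b"]  # ~7000 in practice

    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        for url, result in pool.map(check, urls):
            print(url, result)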
Whatever you do, please make sure that your program is not hammering the remote servers with a large number of concurrent or repeated requests.

Async spawning of processes: design question - Celery or Twisted

All: I'm seeking input/guidance/and design ideas. My goal is to find a lean but reliable way to take XML payload from an HTTP POST (no problems with this part), parse it, and spawn a relatively long-lived process asynchronously.
The spawned process is CPU intensive and will last for roughly three minutes. I don't expect much load at first, but there's a definite possibility that I will need to scale this out horizontally across servers as traffic hopefully increases.
I really like the Celery/Django stack for this use: it's very intuitive and has all of the built-in framework to accomplish exactly what I need. I started down that path with zeal, but I soon found my little 512MB RAM cloud server had only 100MB of free memory, and I started sensing that I was headed for trouble once I went live with all of my processes running full-tilt. Also, it's got several moving parts: RabbitMQ, MySQL, celeryd, lighttpd and the Django container.
I can absolutely increase the size of my server, but I'm hoping to keep my costs down to a minimum at this early phase of this project.
As an alternative, I'm considering using Twisted for the process management, as well as Perspective Broker for the remote systems, should they be needed. But for me at least, while Twisted is brilliant, I feel like I'm signing up for a lot going down that path: writing protocols, callback management, keeping track of job states, etc. The benefits here are pretty obvious - excellent performance, far fewer moving parts, and a smaller memory footprint (note: I need to verify the memory part). I'm heavily skewed toward Python for this - it's much more enjoyable for me than the alternatives :)
I'd greatly appreciate any perspective on this. I'm concerned about starting things off on the wrong track, and redoing this later with production traffic will be painful.
-Matt
On my system, RabbitMQ running with pretty reasonable defaults is using about 2MB of RAM. Celeryd uses a bit more, but not an excessive amount.
In my opinion, the overhead of RabbitMQ and celery are pretty much negligible compared to the rest of the stack. If you're processing jobs that are going to take several minutes to complete, those jobs are what will overwhelm your 512MB server as soon as your traffic increases, not RabbitMQ. Starting off with RabbitMQ and Celery will at least set you up nicely to scale those jobs out horizontally though, so you're definitely on the right track there.
Sure, you could write your own job control in Twisted, but I don't see it gaining you much. Twisted has pretty good performance, but I wouldn't expect it to outperform RabbitMQ by enough to justify the time and potential for introducing bugs and architectural limitations. Mostly, it just seems like the wrong spot to worry about optimizing. Take the time that you would've spent re-writing RabbitMQ and work on reducing those three minute jobs by 20% or something. Or just spend an extra $20/month and double your capacity.
I'll answer this question as though I was the one doing the project and hopefully that might give you some insight.
I'm working on a project that will require the use of a queue, a web server for the public facing web application and several job clients.
The idea is to have the web server continuously running (no need for a very powerful machine here). However, the work is handled by these job clients, which are more powerful machines that can be started and stopped at will. The job queue will also reside on the same machine as the web application. When a job gets inserted into the queue, a process that starts the job clients will kick into action and spin up the first client. Using a load balancer that can start new servers as the load increases, I don't have to bother about managing the number of servers running to process jobs in the queue. If there are no jobs in the queue after a while, all job clients can be terminated.
I will suggest using a setup similar to this. You don't want job execution to affect the performance of your web application.
I'll add, quite late, another possibility: using Redis.
Currently I use Redis with Twisted: I distribute work to workers. They perform the work and return the result asynchronously.
The "list" type is very useful:
http://www.redis.io/commands/rpoplpush
So you can use the reliable queue pattern to send work, and have a process that blocks/waits until it has new work to do (a new message arriving in the queue).
You can use several workers on the same queue.
Redis has a low memory footprint, but be careful about the number of pending messages, as that will increase the memory Redis uses.
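A sketch of that reliable queue pattern with redis-py; the queue names and the handle() function are hypothetical:

    # Worker: atomically move each job onto a backup list while processing,
    # so jobs survive a worker crash and can be re-queued by a recovery pass.
    import redis

    r = redis.Redis()

    def work_loop():
        while True:
            job = r.brpoplpush("work", "processing", timeout=0)  # block for next job
            handle(job)                       # hypothetical: do the actual work
            r.lrem("processing", 1, job)      # acknowledge: drop the backup copy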

Python script performance as a background process

I'm in the process of writing a Python script to act as "glue" between an application and some external devices. The script itself is quite straightforward and has three distinct steps:
Request data (from a socket connection, via UDP)
Receive response (from a socket connection, via UDP)
Process response and make data available to 3rd party application
However, this will be done repetitively, and for several (+/-200) different devices. So once it has reached device #200, it would start requesting data from device #001 again. My main concern here is not to bog down the processor whilst executing the script.
UPDATE:
I am using three threads to do the above, one thread for each of the above steps. The request/response is asynchronous, as each response contains everything I need to be able to process it (including the sender's details).
Is there any way to allow the script to run in the background and consume as few system resources as possible while doing its thing? This will be running on a Windows 2003 machine.
Any advice would be appreciated.
If you are using blocking I/O to your devices, then the script won't consume any processor while waiting for the data. How much processor you use depends on what sorts of computation you are doing with the data.
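For example, a blocking UDP request/response loop sleeps inside recvfrom() and uses essentially no CPU; the addresses, payload, and the process() handler below are hypothetical:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)  # guard against devices that never answer

    DEVICES = [("192.168.0.10", 5000), ("192.168.0.11", 5000)]  # ~200 in practice

    for device in DEVICES:
        sock.sendto(b"REQUEST", device)
        try:
            data, sender = sock.recvfrom(4096)  # blocks without burning CPU
            process(data, sender)               # hypothetical handler
        except socket.timeout:
            pass  # no answer this round; move on and retry next cycle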
Twisted -- the best async framework for Python -- would allow you to perform these tasks with minimal hogging of system resources, especially (though not exclusively) if you want to process several devices "at once" rather than just round-robin among the several hundred. Pure round-robin might result in too long a cycle time, especially if there's a risk that some device answers very late, or occasionally fails to answer at all and results in a timeout. As a rule of thumb, I'd suggest having at least half a dozen devices "in play" at any given time to avoid this excessive-delay risk.

Categories

Resources