How to pass multiprocessing flags in Python - python

I have a somewhat large Python program. Running it spawns multiple Processes (multiprocessing.Process) that communicate over various Events and Queues. I also have a growing number of command line flags (handled with argparse) that change various data paths or execution of the Processes.
Currently I put all the flags in a list and pass the list to each Process when I create it. Not every Process uses every flag, but this approach means I only have to update the affected Processes when I add or remove a flag. However, it gets complicated because I have to remember where in each list each flag sits and what the different default values are. I've considered making a namedtuple to handle these flags, or just passing the ArgumentParser.
Is there some established paradigm or Pythonic way to handle this sort of situation?

I don't think there is an established paradigm. However, I can suggest a pattern that has worked for me in several cases, not only in Python but in Ruby as well.
Don't pass the raw command-line flags around. Parse them once and put the results into attributes of a configuration object (you can use a namedtuple, but it's not always the best fit; I prefer a small class of my own).
Then, instead of passing all your flags to new processes individually, have each process wait for a configuration object on a queue, and send that object as the first item to every started process so it can hold it as its configuration. Such an object can be shared in other ways too, but that depends on your scenario.
Another option is to pickle the object and pass it as a file to be loaded by every started process, or to encode the pickle (base64, for example) and pass that as a single command-line argument to the new process.
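For illustration, a minimal sketch of the queue-based handoff, assuming a simple Config class and a worker function (both names are just placeholders):

```python
import argparse
from multiprocessing import Process, Queue


class Config:
    """Plain configuration object built once from parsed arguments."""
    def __init__(self, args):
        self.data_path = args.data_path
        self.verbose = args.verbose


def worker(config_queue):
    # First thing every process does: receive its configuration.
    config = config_queue.get()
    if config.verbose:
        print("worker using", config.data_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--data-path", default="/tmp/data")
    parser.add_argument("--verbose", action="store_true")
    config = Config(parser.parse_args())

    config_queue = Queue()
    p = Process(target=worker, args=(config_queue,))
    p.start()
    config_queue.put(config)   # the configuration is the first item sent
    p.join()
```

Adding a flag then means touching only the parser and the Config class, not every Process call site.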
It's hard to prescribe a single best pattern for your case without knowing exactly how your code is shaped, how much is shared, and so on.

Related

Using one dictionary vs. many to store program configurations

I am writing a Python program with many (approximately 30-40) parameters, all of which have default values and all of which should be adjustable at run time by the user. The way I set it up, these parameters are grouped into 4 dictionaries, corresponding to 4 different modules of the program. However, I have encountered a few cases of a single parameter being required by more than one of these modules, leading me to consider unifying the dictionaries into one big config dictionary, or perhaps even one config object, passed to each module.
My questions are
Would this have any effect on run time? I suspect not, but want to be sure.
Is this considered good practice? Is there some other solution to the problem I have described?
Probably no effect on runtime. Larger dictionaries can take slightly longer to look up in, but in your case we are talking about 40 items; that's nothing.
We use a single settings module in which we initialize globals by calling a method that reads the configuration from the environment, a file, or a Python file (as globals). The method that reads the config can be given the desired type and a default value. Others use YAML or TOML to represent configuration and, I'm guessing, then store it in a globally accessible object. If your settings can change at runtime, you have to protect this object in terms of thread safety (if you have threads, of course).
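For example, a minimal settings module might look like this (reading only from environment variables here; all names are placeholders):

```python
# settings.py -- read once at import time, then used as module-level globals
import os


def _get(name, default, cast=str):
    """Read a setting from the environment with a type and a default value."""
    raw = os.environ.get(name)
    return cast(raw) if raw is not None else default


DATA_DIR = _get("MYAPP_DATA_DIR", "/var/lib/myapp")
NUM_WORKERS = _get("MYAPP_NUM_WORKERS", 4, int)
DEBUG = _get("MYAPP_DEBUG", False, lambda v: v.lower() in ("1", "true"))
```

Every module then just does `import settings` and reads `settings.NUM_WORKERS`, so shared parameters live in exactly one place.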

Proper way to use multiple python processes in GUI application accessing underlying data store?

I'm strictly a Python script writer; I've only ever written one-off scripts, mainly for string manipulation and the like. However, I consider myself proficient enough to be able to handle (with much searching) most of the implementation details of what I want to do (most of which is already done in various scripts).
My current project would involve a UI (let's assume PyQt; I haven't decided, but I probably wouldn't go with tkinter) which displays data. I haven't done a UI before, as my scripts so far have all been command-line.
I'd like there to be a separate process which handles the updating of said data. The data store would be a bunch of XML files (unfortunately this is a requirement of the project [1]). Due to the potentially unbounded number of XML files, I think a separate process would prevent my UI from locking up. In my language of choice (C++ with Qt) I'd just use threading, but having read about the GIL it seems I should use processes instead.
My current idea is for one process which reads the XML files and potentially encodes them in some convenient format for my UI process. This process would probably also monitor the data store for any possible file additions/deletions/modifications. Finally, in the encoding process, I probably also want to maintain an index of search terms to increase responsiveness. I expect fairly heavy computational load in this process, which is why I intend to split it off. A full scan of my current data store (not yet doing all the processing I would want) takes about half a second, and I plan to grow it.
The UI process accepts user input (for example, a search term) and displays the necessary results. There will also be a slight amount of processing, but nothing taxing. The user may also choose to save the record she's currently viewing, but I'm undecided whether the actual file change should be done by the UI process or it should be handed off to the background process.
In conclusion:-
What's the best way to share what I presume will be a large-ish python object between my processes? Is it queues, pipes, writing/reading to a separate database object, or something else?
I'm operating on the assumption that the UI process needs the ENTIRE data store. In practice, it possibly only needs a summary (think client-server architecture between UI process and data store process), but this would of course involve more overhead coding/maintenance wise. Is this considered good practice for an application which will always only run on one device?
Additional information:-
[1] - Requirement for XML files is because they are easily shared between devices via file-sync services such as dropbox etc. in a reasonably atomic manner. Since this project requires record-based synchronization, including allowing simultaneous edits (post-merging is possible) in different machines, I'd rather let the third party file-sync service handle it than write my own buggy synchronization tool. Also, and most crucially, there are already users of this data store using it in its current XML form, so it would be extremely difficult to change it.
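For context, the kind of queue-based split I have in mind would look roughly like this (a sketch only; the names and the summary payload are placeholders):

```python
from multiprocessing import Process, Queue


def data_process(results):
    """Background process: scan the XML store and push encoded results."""
    # Stand-in for the real XML scanning / indexing work.
    summary = {"records": 42, "index_terms": ["foo", "bar"]}
    results.put(summary)


if __name__ == "__main__":
    results = Queue()
    worker = Process(target=data_process, args=(results,), daemon=True)
    worker.start()

    # In a real PyQt app this get() would live in a timer or helper thread
    # so the event loop is never blocked.
    print(results.get())
    worker.join(timeout=1)
```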

Persistent in-memory Python object for nginx/uwsgi server

I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question):
I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB.
Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit).
Assuming even 3000 requests per second (this isn't a live server so I don't have real numbers, but 3000 per second is a reasonable assumption, and it could be more), this means roughly 96 MB per second of transfer from Redis or Riak, in ADDITION to the already not-insignificant other calls being made by the application logic. Also, Python is parsing the JSON of these 8 KB objects 3000 times every second.
All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second.
But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken.
Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above!
So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python would work in terms of starting new processes and memory allocation, would help greatly.
P.S:
Have gone through some of the documentation for nginx, uwsgi etc but haven't fully understood the ramifications per my use-case yet. Hope to make some progress on that going forward now
If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above, in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
What you're suggesting isn't directly feasible. Since new processes can be spun up and down outside of your control, there's no way to keep native Python data in memory.
However, there are a few ways around this.
Often, one level of key-value storage is all you need. And sometimes, having fixed-size buffers for values (which you can use directly as str/bytes/bytearray objects; anything else you need to pack in with struct or otherwise serialize) is all you need. In that case, uWSGI's built-in caching framework will take care of everything you need.
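Roughly, using that cache from application code might look like this (assuming a cache is configured in the uWSGI ini file and the uwsgi module's cache_get/cache_set functions are available; the key scheme and the Riak loader are placeholders):

```python
import json

import uwsgi  # only importable when running under uWSGI


def get_chunk(chunk_id):
    """Fetch one 8 KB chunk, refreshing the shared cache on a miss."""
    key = "chunk:%d" % chunk_id
    raw = uwsgi.cache_get(key)
    if raw is None:
        raw = load_chunk_from_riak(chunk_id)   # hypothetical loader returning JSON bytes
        uwsgi.cache_set(key, raw, 300)         # keep for ~300 seconds
    return json.loads(raw)
```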
If you need more precise control, you can look at how the cache is implemented on top of SharedArea and do something custom. However, I wouldn't recommend that. It basically gives you the same kind of API you get with a file, and the only real advantages over just using a file are that the server manages the file's lifetime; it works in all uWSGI-supported languages, even those that don't allow files; and it makes it easier to migrate your custom cache to a distributed (multi-computer) cache later if you need to. I don't think any of those are relevant to you.
Another way to get flat key-value storage, but without the fixed-size buffers, is with Python's stdlib anydbm (dbm in Python 3). The key-value lookup is as Pythonic as it gets: it looks just like a dict, except that it's backed by an on-disk BDB (or similar) database, cached in memory as appropriate, instead of being stored in an in-memory hash table.
If you need to handle a few other simple types—anything that's blazingly fast to un/pickle, like ints—you may want to consider shelve.
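For instance, a minimal shelve sketch (the filename and keys are placeholders):

```python
import shelve

# Persistent dict-like store backed by a DBM file on disk.
with shelve.open("/tmp/global_data.shelf") as store:
    store["chunk:0"] = {"users": 123, "flags": [1, 2, 3]}   # pickled on write
    print(store.get("chunk:0"))                             # unpickled on read
```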
If your structure is rigid enough, you can use a key-value database for the top level, but access the values through a ctypes.Structure, or de/serialize them with struct. But usually, if you can do that, you can also eliminate the top level, at which point your whole thing is just one big Structure or Array.
At that point, you can just use a plain file for storage—either mmap it (for ctypes), or just open and read it (for struct).
Or use multiprocessing's Shared ctypes Objects to access your Structure directly out of a shared memory area.
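A minimal sketch of that shared-ctypes option (the record layout here is purely illustrative):

```python
import ctypes
from multiprocessing import Array, Process, Value


def reader(counter, scores):
    # Both objects live in shared memory, so reads need no copies or pickling.
    print(counter.value, list(scores))


if __name__ == "__main__":
    counter = Value(ctypes.c_int, 0)                    # a single shared int
    scores = Array(ctypes.c_double, [0.5, 1.5, 2.5])    # a shared fixed-size array
    counter.value = 42
    p = Process(target=reader, args=(counter, scores))
    p.start()
    p.join()
```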
Meanwhile, if you don't actually need all of the cache data all the time, just bits and pieces every once in a while, that's exactly what databases are for. Again, anydbm, etc. may be all you need, but if you've got complex structure, draw up an ER diagram, turn it into a set of tables, and use something like MySQL.
"python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken."
You are mistaken.
The whole point of using uWSGI over, say, the CGI mechanism is to persist data across requests and save the overhead of initialization for each call. You must set processes = 1 in your .ini file; otherwise, depending on how uWSGI is configured, it might launch more than one worker process on your behalf. Log the env and look for 'wsgi.multiprocess': False and 'wsgi.multithread': True; then all uwsgi.core threads for the single worker should show the same data.
You can also see how many worker processes, and how many "core" threads under each, you have by using the built-in stats server.
That's why uwsgi provides lock and unlock functions, for manipulating data stores from multiple threads.
You can easily test this by adding a /status route to your app that just dumps a JSON representation of your global data object, and viewing it every so often after actions that update the store.
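For example, a bare-bones WSGI sketch of that test (the global object and the route are placeholders):

```python
import json

# Module-level global: lives as long as the worker process does.
GLOBAL_DATA = {"requests_seen": 0}


def application(environ, start_response):
    GLOBAL_DATA["requests_seen"] += 1
    if environ.get("PATH_INFO") == "/status":
        body = json.dumps(GLOBAL_DATA).encode()
    else:
        body = b"ok"
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
```

If /status keeps counting up across requests, the worker really is persisting the in-memory object.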
You said nothing about writing this data back. Is it static? In that case the solution is very simple, and I have no clue what is up with all the "it's not feasible" responses.
uWSGI workers are always-running applications, so data absolutely gets persisted between requests. All you need to do is store it in a global variable; that's it. And remember it's per-worker, and workers do restart from time to time, so you need proper loading/invalidation strategies.
If the data is updated very rarely (rarely enough to restart the server when it does), you can save even more. Just create the objects during app construction. This way they will be created exactly once, and then all the workers will fork off the master and reuse the same data. Of course, it's copy-on-write, so if you update it you will lose the memory benefits (the same thing will happen if Python decides to compact its memory during a GC run, so it's not super predictable).
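A sketch of that preloading approach (assuming workers fork after the module is imported; the loader and file path are placeholders):

```python
# app.py -- imported once in the uWSGI master before workers are forked.
import json


def _load_static_data():
    # Hypothetical one-time load of the ~30 chunks from disk (or Riak).
    with open("/srv/myapp/static_chunks.json") as f:
        return json.load(f)


STATIC_DATA = _load_static_data()   # shared with workers via fork copy-on-write


def application(environ, start_response):
    chunk = STATIC_DATA["chunk_0"]   # plain in-memory dict access, no Redis hop
    start_response("200 OK", [("Content-Type", "application/json")])
    return [json.dumps(chunk).encode()]
```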
I have never actually tried it myself, but could you possibly use uWSGI's SharedArea to accomplish what you're after?

How to convert Python threading code to multiprocessing code?

I need to convert a threading application to a multiprocessing application for multiple reasons (GIL, memory leaks). Fortunately the threads are quite isolated and only communicate via Queue.Queues. This primitive is also available in multiprocessing so everything looks fine. Now before I enter this minefield I'd like to get some advice on the upcoming problems:
How to ensure that my objects can be transfered via the Queue? Do I need to provide some __setstate__?
Can I rely on put returning instantly (like with threading Queues)?
General hints/tips?
Anything worthwhile to read apart from the Python documentation?
Answer to part 1:
Everything that has to pass through a multiprocessing.Queue (or Pipe, or whatever) has to be picklable. This includes basic types such as tuples, lists and dicts. Classes are also supported if they are defined at the top level of a module and are not too complicated (check the pickle documentation for details). Trying to pass lambdas around will fail, however.
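A quick sketch of the picklability rule in practice (the Job class is just a placeholder):

```python
from multiprocessing import Process, Queue


class Job:
    """Top-level class: instances are picklable and can cross a Queue."""
    def __init__(self, name):
        self.name = name


def worker(q):
    print("got", q.get().name)


if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    q.put(Job("resize-images"))      # fine: picklable
    # q.put(lambda x: x + 1)         # would fail to pickle
    p.join()
```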
Answer to part 2:
A put consists of two parts: it acquires a semaphore to modify the queue, and it optionally starts a feeder thread. So as long as no other Process tries to put to the same Queue at the same time (for instance because there is only one Process writing to it), it should be fast. For me it turned out to be fast enough for all practical purposes.
Partial answer to part 3:
The plain multiprocessing Queue lacks a task_done method, so it cannot be used as a drop-in replacement directly. (The JoinableQueue subclass provides it; see the sketch after these notes.)
The old processing.queue.Queue lacks a qsize method, and the newer multiprocessing version's qsize is inaccurate (just keep this in mind).
Since file descriptors are normally inherited on fork, care needs to be taken about closing them in the right processes.
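A minimal sketch of the JoinableQueue substitute mentioned in the first note (the payload is illustrative):

```python
from multiprocessing import JoinableQueue, Process


def worker(q):
    while True:
        item = q.get()
        print("processing", item)
        q.task_done()        # available because this is a JoinableQueue


if __name__ == "__main__":
    q = JoinableQueue()
    Process(target=worker, args=(q,), daemon=True).start()
    for item in ("a", "b", "c"):
        q.put(item)
    q.join()                 # blocks until every item has been task_done()-ed
```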

Lightweight crash recovery for Python

What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
You may assume that the keys and values in the dictionary are easily convertible to strings, i.e. using either str or the pickle module.
I want this to be completely cross platform - well at least as cross platform as Python is
I don't want to simply write out each value to a file and load it in, as my program might crash while I am writing the file.
UPDATE: This is intended to be a lightweight module so a DBMS is out of the question.
UPDATE: Alex is correct in that I don't actually need to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
UPDATE: Added a highly limited solution using standard input below.
There's no good way to guard against "your program crashing while writing a checkpoint to a file", but why should you worry so much about that?! What ELSE is your program doing at that time BESIDES "saving checkpoint to a file", that could easily cause it to crash?!
It's hard to beat pickle (or cPickle) for portability of serialization in Python, but that's just about "turning your keys and values to strings". For saving key-value pairs (once stringified), few approaches are safer than just appending to a file (don't pickle to files if your crashes are far, far more frequent than normal, as you suggest they are).
If your environment is incredibly crash-prone for whatever reason (very cheap HW?-), just make sure you close the file (and flush/fsync if the OS is also crash-prone;-), then reopen it for append. This way, the worst that can happen is that the very latest append will be incomplete (due to a crash in the middle of things) -- then you just catch the exception raised by unpickling that incomplete record and redo only the things that weren't saved (because they weren't completed due to a crash, OR because they were completed but not fully saved due to a crash; it comes to much the same thing in the end).
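For instance, a minimal sketch of this append-and-recover scheme (the checkpoint filename is a placeholder):

```python
import pickle

CHECKPOINT = "results.pkl"


def save_result(key, value):
    # Append one pickled (key, value) record; reopening per write means a
    # crash can only damage the very last record.
    with open(CHECKPOINT, "ab") as f:
        pickle.dump((key, value), f)
        f.flush()


def load_results():
    """Rebuild the cache on restart, ignoring a truncated final record."""
    cache = {}
    try:
        with open(CHECKPOINT, "rb") as f:
            while True:
                key, value = pickle.load(f)
                cache[key] = value
    except (FileNotFoundError, EOFError, pickle.UnpicklingError):
        pass   # end of file, missing file, or incomplete last record
    return cache
```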
If you have the option of checkpointing to a database engine (instead of just doing so to files), consider it seriously! The DB engine will keep transaction logs and ensure ACID properties, making your application-side programming much easier IF you can count on that!-)
The pickle module supports serializing objects to a file (and loading from file):
http://docs.python.org/library/pickle.html
One possibility would be to create a number of smaller files ... each representing a subset of the state that you're trying to preserve and each with a checksum or tag indicating that it's complete as the last line/datum of the file (just before the file is closed).
If the checksum/tag is good, then the rest of the data can be considered valid ... though the program would then have to find all of these files, open and read all of them, and use the metadata you've provided (in their headers or their names?) to determine which ones constitute the most recent cohesive state representation (or checkpoint) from which to continue processing.
Without knowing more about the nature of the data that you're working with it's impossible to be more specific.
You can use files, of course, or you could use a DBMS system just about as easily. Any decent DBMS (PostgreSQL, MySQL if you're using the proper storage back-ends) can give you ACID guarantees and transactional support. So the data you read back should always be consistent with the constraints that you put in your schema and/or with the transactions (BEGIN, COMMIT, ROLLBACK) that you processed.
A possible advantage of posting your serialized data to a DBMS is that you can host the DBMS on a separate system (which is unlikely to suffer the same instabilities as your test host at the same times).
Pickle/cPickle have problems.
I use the JSON module to serialize objects out. I like it because not only does it work on any OS, but it will work fine in other programming languages, too; many other languages and platforms have readily-accessible JSON deserialization support, which makes it easy to use the same objects in different programs.
Solution with severe restrictions
If I don't worry about it crashing while writing out and I only want to allow manual termination, I can use standard input to control this. Unfortunately, this can only terminate the program when a control point is reached. This could be solved by creating a new thread to read standard input (sketched after the downsides below). This thread could use a global lock to check whether the main thread is inside a critical section (writing to a file) and terminate the program only if it is not.
Downsides:
This is reasonably complex
It adds an extra thread
It stops me using standard input for anything else
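A rough sketch of that watcher thread (names are placeholders):

```python
import os
import sys
import threading

write_lock = threading.Lock()    # held by the main thread while it writes a checkpoint


def stdin_watcher():
    """Terminate on any line of input, but never in the middle of a write."""
    sys.stdin.readline()
    with write_lock:             # wait until no checkpoint write is in progress
        os._exit(0)              # hard-exit the whole process at a safe point


threading.Thread(target=stdin_watcher, daemon=True).start()

# Main thread, around each checkpoint write:
# with write_lock:
#     save_result(key, value)
```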
