Python: Pre-loading memory
I have a python program where I need to load and de-serialize a 1GB pickle file. It takes a good 20 seconds and I would like to have a mechanism whereby the content of the pickle is readily available for use. I've looked at shared_memory but all the examples of its use seem to involve numpy and my project doesn't use numpy. What is the easiest and cleanest way to achieve this using shared_memory or otherwise?
This is how I'm loading the data now (on every run):
def load_pickle(pickle_name):
    return pickle.load(open(DATA_ROOT + pickle_name, 'rb'))
I would like to be able to edit the simulation code in between runs without having to reload the pickle. I've been messing around with importlib.reload, but it really doesn't seem to work well for a large Python program with many files:
def main():
    data_manager.load_data()
    run_simulation()
    while True:
        try:
            importlib.reload(simulation)
            run_simulation()
        except:
            print(traceback.format_exc())
        print('Press enter to re-run main.py, CTRL-C to exit')
        sys.stdin.readline()
This could be an XY problem, the source of which is the assumption that you must use pickles at all; they're just awful to deal with due to how they manage dependencies, and are fundamentally a poor choice for any long-term data storage because of it.
The source financial data is almost-certainly in some tabular form to begin with, so it may be possible to request it in a friendlier format
A simple middleware to deserialize and reserialize the pickles in the meantime will smooth the transition
input -> load pickle -> write -> output
Converting your workflow to use Parquet or Feather, which are designed to be efficient to read and write, will almost certainly make a considerable difference to your load speed; a sketch of such a conversion is below.
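A minimal sketch of that conversion, assuming the pickled object is (or can be turned into) a pandas DataFrame; the file paths are placeholders, and to_parquet needs pyarrow or fastparquet installed:

import pandas as pd
import pickle

def convert_pickle_to_parquet(pickle_path, parquet_path):
    # One-off migration: load the legacy pickle, write it back out as Parquet
    with open(pickle_path, 'rb') as f:
        df = pickle.load(f)  # assumed to already be a DataFrame (or convertible with pd.DataFrame)
    df.to_parquet(parquet_path)

def load_data(parquet_path):
    # Subsequent runs read the columnar file directly, which is typically much faster
    return pd.read_parquet(parquet_path)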
Further relevant links
Answer to How to reversibly store and load a Pandas dataframe to/from disk
What are the pros and cons of parquet format compared to other formats?
You may also be able to achieve this with hickle, which internally uses an HDF5 format, ideally making it significantly faster than pickle while still behaving like it
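If you try hickle, its interface is intended to mirror pickle's dump/load; a rough sketch (the file name is a placeholder, and the exact options may vary between hickle versions):

import hickle as hkl

# Save once; the data ends up in an HDF5 container instead of a pickle stream
hkl.dump(data, 'data.hkl', mode='w')

# Load on subsequent runs, analogous to pickle.load
data = hkl.load('data.hkl')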
An alternative to storing the unpickled data in memory would be to store the pickle in a ramdisk, so long as most of the time overhead comes from disk reads. Example code (to run in a terminal) is below.
sudo mkdir /mnt/pickle
sudo mount -o size=1536M -t tmpfs none /mnt/pickle
cp path/to/pickle.pkl /mnt/pickle/pickle.pkl
Then you can access the pickle at /mnt/pickle/pickle.pkl. Note that you can change the file names and extensions to whatever you want. If disk read is not the biggest bottleneck, you might not see a speed increase. If you run out of memory, you can try turning down the size of the ramdisk (I set it at 1536 MB, or 1.5 GB)
You can use a ShareableList:
So you will have one Python program running which loads the file and keeps it in memory, and another Python program which reads it from memory. Whatever your data is, you can load it into a dictionary, dump it as JSON, and then reload the JSON.
So
Program1
import pickle
import json
from multiprocessing.managers import SharedMemoryManager
YOUR_DATA=pickle.load(open(DATA_ROOT + pickle_name, 'rb'))
data_dict={'DATA':YOUR_DATA}
data_dict_json=json.dumps(data_dict)
smm = SharedMemoryManager()
smm.start()
sl = smm.ShareableList(['alpha','beta',data_dict_json])
print (sl)
#smm.shutdown() commenting shutdown now but you will need to do it eventually
The output will look like this
#OUTPUT
>>>ShareableList(['alpha', 'beta', "your data in json format"], name='psm_12abcd')
Now in Program2:
from multiprocessing import shared_memory
load_from_mem=shared_memory.ShareableList(name='psm_12abcd')
load_from_mem[1]
#OUTPUT
'beta'
load_from_mem[2]
#OUTPUT
'{"DATA": ...}'   # your data as a JSON string
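To get the original dictionary back in Program2, parse the JSON string stored at that index (the index and the 'DATA' key follow the layout used in Program1 above):

import json

data_dict = json.loads(load_from_mem[2])
YOUR_DATA = data_dict['DATA']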
You can read more here:
https://docs.python.org/3/library/multiprocessing.shared_memory.html
Adding another assumption-challenging answer: it could be where you're reading your files from that makes a big difference.
1 GB is not a great amount of data on today's systems; at 20 seconds to load, that's only 50 MB/s, which is a fraction of what even the slowest disks provide.
You may find you actually have a slow disk or some type of network share as your real bottleneck, and that changing to a faster storage medium or compressing the data (perhaps with gzip) makes a great difference to reading and writing.
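A sketch of the compression idea, assuming your data compresses reasonably well; the file path is a placeholder:

import gzip
import pickle

def save_compressed(obj, path):
    # Trades some CPU for much less disk I/O
    with gzip.open(path, 'wb') as f:
        pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_compressed(path):
    with gzip.open(path, 'rb') as f:
        return pickle.load(f)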
Here are my assumptions while writing this answer:
Your financial data is produced after complex operations and you want the result to persist in memory
The code that consumes it must be able to access that data fast
You wish to use shared memory
Here is the code (self-explanatory, I believe)
Data structure (class_defs.py)
'''
Nested class definitions to simulate complex data
'''
class A:
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def get_attr(self):
        return self.name, self.value

    def set_attr(self, n, v):
        self.name = n
        self.value = v


class B(A):
    def __init__(self, name, value, status):
        super(B, self).__init__(name, value)
        self.status = status

    def set_attr(self, n, v, s):
        A.set_attr(self, n, v)
        self.status = s

    def get_attr(self):
        print('\nName : {}\nValue : {}\nStatus : {}'.format(self.name, self.value, self.status))
Producer.py
from multiprocessing import shared_memory as sm
import time
import pickle as pkl
import pickletools as ptool
import sys
from class_defs import B

def main():
    # Data Creation/Processing
    obj1 = B('Sam Reagon', '2703', 'Active')
    # print(sys.getsizeof(obj1))
    obj1.set_attr('Ronald Reagon', '1023', 'INACTIVE')
    obj1.get_attr()

    ###### real deal #########

    # Create pickle string
    byte_str = pkl.dumps(obj=obj1, protocol=pkl.HIGHEST_PROTOCOL, buffer_callback=None)

    # compress the pickle
    # byte_str_opt = ptool.optimize(byte_str)
    byte_str_opt = bytearray(byte_str)

    # place data on shared memory buffer
    shm_a = sm.SharedMemory(name='datashare', create=True, size=len(byte_str_opt))  # sys.getsizeof(obj1)
    buffer = shm_a.buf
    buffer[:] = byte_str_opt[:]

    # print(shm_a.name)  # the string to access the shared memory
    # print(len(shm_a.buf[:]))

    # Just an infinite loop to keep the producer running, like a server
    # a better approach would be to explore use of shared memory manager
    while True:
        time.sleep(60)

if __name__ == '__main__':
    main()
Consumer.py
from multiprocessing import shared_memory as sm
import pickle as pkl
from class_defs import B  # we need this so that while unpickling, the object structure is understood

def main():
    shm_b = sm.SharedMemory(name='datashare')
    byte_str = bytes(shm_b.buf[:])  # convert the shared_memory buffer to a bytes array
    obj = pkl.loads(byte_str)  # un-pickle the bytes array (note: loads takes the data positionally)
    print(obj.name, obj.value, obj.status)  # get the values of the object attributes

if __name__ == '__main__':
    main()
If you let SharedMemory generate its own name (by omitting name='datashare'), Producer.py will print a string identifier (say, wnsm_86cd09d4) for the shared memory block, which you would then pass to Consumer.py. With the fixed name used above, just run Producer.py in one terminal and Consumer.py in another terminal on the same machine.
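One detail the code above leaves out is cleanup. When you are finished, the block should be closed by every process and unlinked by its creator, otherwise the shared memory segment can linger; a short sketch using the same names as above:

# In Consumer.py, once the object has been unpickled:
shm_b.close()

# In Producer.py, when shutting down (e.g. on KeyboardInterrupt):
shm_a.close()
shm_a.unlink()  # destroys the underlying 'datashare' block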
I hope this is what you wanted!
You can take advantage of multiprocessing to run the simulations inside of subprocesses, and leverage the copy-on-write benefits of forking to unpickle/process the data only once at the start:
import multiprocessing
import pickle

# Need to use forking to get copy-on-write benefits!
mp = multiprocessing.get_context('fork')

# Load data once, in the parent process
data = pickle.load(open(DATA_ROOT + pickle_name, 'rb'))

def _run_simulation(_):
    # Wrapper for `run_simulation` that takes one argument. The function passed
    # into `multiprocessing.Pool.map` must take one argument.
    run_simulation()

with mp.Pool() as pool:
    pool.map(_run_simulation, range(num_simulations))
If you want to parameterize each simulation run, you can do so like so:
import multiprocessing
import pickle

# Need to use forking to get copy-on-write benefits!
mp = multiprocessing.get_context('fork')

# Load data once, in the parent process
data = pickle.load(open(DATA_ROOT + pickle_name, 'rb'))

with mp.Pool() as pool:
    simulations = ('arg for simulation run', 'arg for another simulation run')
    pool.map(run_simulation, simulations)
This way the run_simulation function will be passed the values from the simulations tuple, which allows each simulation to run with different parameters, or even just to assign each run an ID number or name for logging/saving purposes.
This whole approach relies on fork being available. For more information about using fork with Python's built-in multiprocessing library, see the docs about contexts and start methods. You may also want to consider using the forkserver multiprocessing context (by using mp = multiprocessing.get_context('forkserver')) for the reasons described in the docs.
If you don't want to run your simulations in parallel, this approach can be adapted for that. The key thing is that in order to only have to process the data once, you must call run_simulation within the process that processed the data, or one of its child processes.
If, for instance, you wanted to edit what run_simulation does, and then run it again at your command, you could do it with code resembling this:
main.py:
import multiprocessing
from multiprocessing.connection import Connection
import pickle

from data import load_data

# Load/process data in the parent process
load_data()
# Now child processes can access the data nearly instantaneously

# Need to use forking to get copy-on-write benefits!
mp = multiprocessing.get_context('fork')  # Consider using 'forkserver' instead

# This is only ever run in child processes
def load_and_run_simulation(result_pipe: Connection) -> None:
    # Import `run_simulation` here to allow it to change between runs
    from simulation import run_simulation
    # Ensure that simulation has not been imported in the parent process, as if
    # so, it will be available in the child process just like the data!
    try:
        run_simulation()
    except Exception as ex:
        # Send the exception to the parent process
        result_pipe.send(ex)
    else:
        # Send this because the parent is waiting for a response
        result_pipe.send(None)

def run_simulation_in_child_process() -> None:
    result_pipe_output, result_pipe_input = mp.Pipe(duplex=False)
    proc = mp.Process(
        target=load_and_run_simulation,
        args=(result_pipe_input,)
    )
    print('Starting simulation')
    proc.start()
    try:
        # The `recv` below will wait until the child process sends something, or
        # will raise `EOFError` if the child process crashes suddenly without
        # sending an exception (e.g. if a segfault occurs)
        result = result_pipe_output.recv()
        if isinstance(result, Exception):
            raise result  # raise exceptions from the child process
        proc.join()
    except KeyboardInterrupt:
        print("Caught 'KeyboardInterrupt'; terminating simulation")
        proc.terminate()
    print('Simulation finished')

if __name__ == '__main__':
    while True:
        choice = input('\n'.join((
            'What would you like to do?',
            '1) Run simulation',
            '2) Exit\n',
        )))
        if choice.strip() == '1':
            run_simulation_in_child_process()
        elif choice.strip() == '2':
            exit()
        else:
            print(f'Invalid option: {choice!r}')
data.py:
import pickle
from functools import lru_cache

# <obtain 'DATA_ROOT' and 'pickle_name' here>

@lru_cache
def load_data():
    with open(DATA_ROOT + pickle_name, 'rb') as f:
        return pickle.load(f)
simulation.py:
from data import load_data

# This call will complete almost instantaneously if `main.py` has been run
data = load_data()

def run_simulation():
    # Run the simulation using the data, which will already be loaded if this
    # is run from `main.py`.
    # Anything printed here will appear in the output of the parent process.
    # Exceptions raised here will be caught/handled by the parent process.
    ...
The three files detailed above should all be within the same directory, alongside an __init__.py file that can be empty. The main.py file can be renamed to whatever you'd like, and is the primary entry-point for this program. You can run simulation.py directly, but that will result in a long time spent loading/processing the data, which was the problem you ran into initially. While main.py is running, the file simulation.py can be edited, as it is reloaded every time you run the simulation from main.py.
For macOS users: forking on macOS can be a bit buggy, which is why Python defaults to using the spawn method for multiprocessing on macOS, but still supports fork and forkserver for it. If you're running into crashes or multiprocessing-related issues, try adding OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES to your environment. See https://stackoverflow.com/a/52230415/5946921 for more details.
As I understood:
something needs to be loaded
it needs to be loaded often, because the file with the code which uses this something is edited often
you don't want to wait until it is loaded every time
Maybe such a solution will be okay for you.
You can write a loader script in the following way (tested on Python 3.8):
import importlib.util, traceback, sys, gc

# Example data
import pickle
something = pickle.loads(pickle.dumps([123]))

if __name__ == '__main__':
    try:
        mod_path = sys.argv[1]
    except IndexError:
        print('Usage: python3', sys.argv[0], 'PATH_TO_SCRIPT')
        exit(1)

    modules_before = list(sys.modules.keys())
    argv = sys.argv[1:]

    while True:
        MOD_NAME = '__main__'
        spec = importlib.util.spec_from_file_location(MOD_NAME, mod_path)
        mod = importlib.util.module_from_spec(spec)

        # Change to needed global name in the target module
        mod.something = something

        sys.modules[MOD_NAME] = mod
        sys.argv = argv

        try:
            spec.loader.exec_module(mod)
        except:
            traceback.print_exc()

        del mod, spec

        modules_after = list(sys.modules.keys())
        for k in modules_after:
            if k not in modules_before:
                del sys.modules[k]

        gc.collect()

        print('Press enter to re-run, CTRL-C to exit')
        sys.stdin.readline()
Example of module:
# Change 1 to some different number when first script is running and press enter
something[0] += 1
print(something)
It should work, and it should reduce the reload time of the pickle to close to zero 🌝
UPD
Added the possibility to accept the script name with command-line arguments
This is not an exact answer to the question, as the Q looks like pickle and SHM are required, but others went off that path, so I am going to share a trick of mine. It might help you. There are some fine solutions here using pickle and SHM anyway; regarding those I can offer only more of the same. Same pasta with slight sauce modifications.
Two tricks I employ when dealing with your situation are as follows.
The first is to use sqlite3 instead of pickle. You can even easily develop a module as a drop-in replacement using sqlite. The nice thing is that data will be inserted and selected using native Python types, and you can define your own types with converter and adapter functions that use the serialization method of your choice to store complex objects. That can be pickle or JSON or whatever.
What I do is to define a class with data passed in through *args and/or **kwargs of a constructor. It represents whatever object model I need; then I pick up rows from "select * from table;" of my database and let Python unwrap the data during the new object's initialization. Loading a big amount of data with datatype conversions, even custom ones, is surprisingly fast. sqlite will manage buffering and IO for you and do it faster than pickle. The trick is to construct your object to be filled and initialized as fast as possible. I either subclass dict() or use __slots__ to speed things up.
sqlite3 comes with Python, so that's a bonus too.
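A minimal sketch of the adapter/converter idea, not a full drop-in module; the table layout and the choice of JSON as the serializer are mine:

import json
import sqlite3

# Store dicts transparently as JSON text and get dicts back on SELECT
sqlite3.register_adapter(dict, lambda d: json.dumps(d))
sqlite3.register_converter('JSONDICT', lambda b: json.loads(b.decode('utf-8')))

con = sqlite3.connect('data.db', detect_types=sqlite3.PARSE_DECLTYPES)
con.execute('CREATE TABLE IF NOT EXISTS records (name TEXT, payload JSONDICT)')
con.execute('INSERT INTO records VALUES (?, ?)', ('row1', {'price': 1.23, 'qty': 10}))
con.commit()

for name, payload in con.execute('SELECT * FROM records'):
    print(name, payload['price'])  # payload comes back as a dict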
The other method of mine is to use a ZIP file and the struct module.
You construct a ZIP file with multiple files within. E.g. for a pronunciation dictionary with more than 400000 words that I want as a dict() object, I use one file, say lengths.dat, in which I define the length of a key and the length of a value for each pair in binary format. Then I have one file of words and one file of pronunciations, all one after the other.
When I load from the file, I read the lengths and use them to construct a dict() of words with their pronunciations from the two other files. Indexing bytes() is fast, so creating such a dictionary is very fast. You can even have it compressed if disk space is a concern, but some speed loss is introduced then. A sketch of this layout is below.
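A rough sketch of that layout for a plain str-to-str dictionary; the member names (lengths.dat, keys.dat, values.dat) and the packing format are my own choices:

import struct
import zipfile

def save(d, path):
    lengths, keys, values = bytearray(), bytearray(), bytearray()
    for k, v in d.items():
        kb, vb = k.encode('utf-8'), v.encode('utf-8')
        lengths += struct.pack('<II', len(kb), len(vb))  # two unsigned ints per pair
        keys += kb
        values += vb
    with zipfile.ZipFile(path, 'w', zipfile.ZIP_STORED) as z:  # ZIP_DEFLATED trades speed for space
        z.writestr('lengths.dat', bytes(lengths))
        z.writestr('keys.dat', bytes(keys))
        z.writestr('values.dat', bytes(values))

def load(path):
    with zipfile.ZipFile(path) as z:
        lengths, keys, values = z.read('lengths.dat'), z.read('keys.dat'), z.read('values.dat')
    d, ki, vi = {}, 0, 0
    for off in range(0, len(lengths), 8):  # 8 bytes of lengths per pair
        klen, vlen = struct.unpack_from('<II', lengths, off)
        d[keys[ki:ki + klen].decode('utf-8')] = values[vi:vi + vlen].decode('utf-8')
        ki += klen
        vi += vlen
    return d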
Both methods will take less space on disk than the pickle would.
The second method will require you to read into RAM all the data you need; then you will be constructing the objects, which will take almost double the RAM that the data took, and then you can discard the raw data, of course. Altogether it shouldn't require more than the pickle takes. As for RAM, the OS will manage almost anything using virtual memory/swap if needed.
Oh, yeah, there is a third trick I use. When I have a ZIP file constructed as mentioned above, or anything else which requires additional deserialization while constructing an object, and the number of such objects is great, then I introduce lazy loading. I.e. let's say we have a big file with serialized objects in it. You make the program load all the data and distribute it per object, which you keep in a list() or dict().
You write your classes in such a way that when an object is first asked for its data, it unpacks its raw data, deserializes it and so on, removes the raw data from RAM, and then returns the result. So you will not be losing loading time until you actually need the data in question, which is much less noticeable to a user than the 20 secs it takes for a process to start. A sketch of such a wrapper follows.
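A minimal sketch of such a lazily deserializing wrapper; pickle stands in for whatever deserialization the raw data actually needs, and raw_blobs is a hypothetical mapping of already-read raw bytes:

import pickle

class LazyRecord:
    __slots__ = ('_raw', '_obj')

    def __init__(self, raw_bytes):
        self._raw = raw_bytes  # cheap to hold: just the serialized blob
        self._obj = None

    def get(self):
        # Deserialize on first access, then drop the raw data
        if self._obj is None:
            self._obj = pickle.loads(self._raw)
            self._raw = None
        return self._obj

# Build the container quickly; pay the deserialization cost only when a record is actually used
records = {key: LazyRecord(blob) for key, blob in raw_blobs.items()}
value = records['some_key'].get()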
I implemented the python-preloaded script, which can help you here. It will store the CPython state at an early stage after some modules are loaded, and then when you need it, you can restore from this state and load your normal Python script. Storing currently means that it will stay in memory, and restoring means that it does a fork on it, which is very fast. But these are implementation details of python-preloaded and should not matter to you.
So, to make it work for your use case:
Make a new module, data_preloaded.py or so, and in there, just this code:
preloaded_data = load_pickle(...)
Now run py-preloaded-bundle-fork-server.py data_preloaded -o python-data-preloaded.bin. This will create python-data-preloaded.bin, which can be used as a replacement for python.
I assume you have started python your_script.py before. So now run ./python-data-preloaded.bin your_script.py. Or also just python-data-preloaded.bin (no args). The first time, this will still be slow, i.e. take about 20 seconds. But now it is in memory.
Now run ./python-data-preloaded.bin your_script.py again. Now it should be extremely fast, i.e. a few milliseconds. And you can start it again and again and it will always be fast, until you restart your computer.
Related
'unlink()' does not work in Python's shared_memory on Windows
I am using Python 3.8's new shared_memory module and fail to free the shared memory without terminating the processes using it. After creating and using a block shm of shared memory, I close it via shm.close() in all processes and finally free it via shm.unlink in the main process. However, the resource monitor shows me that the memory is not freed up until the program is terminated. This is a serious problem for me, because my program needs to run for a long time. The problem can be reproduced on Windows/Python 3.8 with the following program:

from multiprocessing import shared_memory, Pool
from itertools import repeat
from time import sleep

def fun(dummy, name):
    # access shared memory
    shm = shared_memory.SharedMemory(name=name)
    # do work
    sleep(1)
    # release shared memory
    shm.close()
    return dummy

def meta_fun(pool):
    # create shared array
    arr = shared_memory.SharedMemory(create=True, size=500000000)
    # compute result
    result = sum(pool.starmap(fun, zip(range(10), repeat(arr.name))))
    # release and free memory
    arr.close()
    arr.unlink()
    return result

if __name__ == '__main__':
    # use one Pool for many method calls to save the time for repeatedly
    # creating processes
    with Pool() as pool:
        for i in range(100):
            print(meta_fun(pool))

Caution: when executing this script, you may quickly fill your entire memory! Watch the "virtual memory" panel in the resource monitor.
After doing some research, I found out that (1) the unlink() function does nothing on Windows:

def unlink(self):
    """Requests that the underlying shared memory block be destroyed.

    In order to ensure proper cleanup of resources, unlink should be
    called once (and only once) across all processes which have access
    to the shared memory block."""
    if _USE_POSIX and self._name:
        from .resource_tracker import unregister
        _posixshmem.shm_unlink(self._name)
        unregister(self._name, "shared_memory")

and (2) Windows seems to free up shared memory once the processes that created/used it have stopped (see the comments here and here). This may be the cause for Python not handling this explicitly.
In response I have built an ugly workaround via saving and reusing the same shared memory block repeatedly without ever unlinking it. Obviously, this is not a satisfactory solution, especially if the sizes of the needed memory blocks change dynamically.
Is there a way I can manually free up the shared memory on Windows?
This is a bug in the multiprocessing module, which was reported as Issue 40882. There is an open pull request that fixes it, PR 20684, though apparently it's been slow to merge.
The bug is as follows: in SharedMemory.__init__, we have an invocation of the MapViewOfFile API without a corresponding UnmapViewOfFile, and the mmap object does not take ownership of it either (it maps the block again on its own).
In the meantime, you can monkey-patch the shared_memory module so that the missing UnmapViewOfFile call is added after mmap is constructed. You will probably have to rely on ctypes, as the _winapi module does not export UnmapViewOfFile, despite exporting MapViewOfFile (!). Something like this (not tested):

import ctypes, ctypes.wintypes
import multiprocessing, multiprocessing.shared_memory

UnmapViewOfFile = ctypes.windll.kernel32.UnmapViewOfFile
UnmapViewOfFile.argtypes = (ctypes.wintypes.LPCVOID,)
UnmapViewOfFile.restype = ctypes.wintypes.BOOL

def _SharedMemory_init(self, name=None, create=False, size=0):
    ...  # copy from SharedMemory.__init__ in the original module
    try:
        p_buf = _winapi.MapViewOfFile(h_map, _winapi.FILE_MAP_READ, 0, 0, 0)
    finally:
        _winapi.CloseHandle(h_map)
    try:
        size = _winapi.VirtualQuerySize(p_buf)
        self._mmap = mmap.mmap(-1, size, tagname=name)
    finally:
        UnmapViewOfFile(p_buf)
    ...  # copy from SharedMemory.__init__ in the original module

multiprocessing.shared_memory.SharedMemory.__init__ = _SharedMemory_init

Put the above code into a module and remember to load it before using anything from the multiprocessing module. Alternatively, you can directly edit the shared_memory.py file in the multiprocessing module's directory to contain the required UnmapViewOfFile call.
This is not the cleanest solution, but it's meant to be temporary anyway (famous last words); the long-term solution is to have this fixed upstream (as is apparently underway).
What is the safest method to save files generated by different processes with multiprocessing in Python?
I am totally new to using the multiprocessing package. I have built an agent-based model and would like to run a large number of simulations with different parameters in parallel. My model takes an xml file, extracts some parameters and runs a simulation, then generates two pandas dataframes and saves them as pickle files. I'm trying to use the multiprocessing.Process() class, but the two dataframes are not saved correctly; rather, for some simulations I get a single dataframe and for others no dataframe. Am I using the right class for this type of work? What is the safest method to write my simulation results to disk using the multiprocessing module? I should add that if I launch the simulations sequentially with a simple loop I get the right outputs. Thanks for the support. I add an example of code that is not reproducible because I don't have the possibility to share the model, which is composed of many modules and xml files.

import time
import multiprocessing
from model import ProtonOC
import random
import os
import numpy as np
import sys
sys.setrecursionlimit(100000)

def load(dir):
    result = list()
    names = list()
    for filename in os.scandir(dir):
        if filename.path.endswith('.xml'):
            result.append(filename.path)
            names.append(filename.name[:-4])
    return result, names

def run(xml, name):
    model = MYMODEL()
    model.override_xml(xml)
    model.run()
    new_dir = os.path.join("C:\\", name)
    os.mkdir(new_dir)
    model.datacollector.get_agent_vars_dataframe().to_pickle(os.path.join(new_dir, "agents" + ".pkl"))
    model.datacollector.get_model_vars_dataframe().to_pickle(os.path.join(new_dir, "model" + ".pkl"))

if __name__ == '__main__':
    paths, names = load("C:\\")  # The folder that contains xml files
    processes = []
    for path, name in zip(paths, names):
        p = multiprocessing.Process(target=run, args=(path, name))
        processes.append(p)
        p.start()
    for process in processes:
        process.join()
I can elaborate on my comment, but alas, looking at your code and not knowing anything about your model, I do not see an obvious cause for the problems you mentioned.
I mentioned in my comment that I would use either a thread pool or processor pool according to whether your processing is I/O bound or CPU bound, in order to better control the number of threads/processes you create. And while threads have less overhead to create, the Python interpreter would be executed within the same process and there is thus no parallelism when executing Python bytecode, due to the Global Interpreter Lock (GIL) having to first be obtained. It is for that reason that processor pools are generally recommended for CPU-intensive jobs. However, when execution is occurring in runtime libraries implemented in the C language, as is often the case with numpy and pandas, the Python interpreter releases the GIL and you can still have a high degree of parallelism with threads. But I don't know the nature of the processing being done by the ProtonOC class instance. Some of it is clearly I/O related. So for now I will recommend that you initially try a thread pool, for which I have arbitrarily set a maximum size of 20 (a number I pulled out of thin air). The issue here is that you are doing concurrent operations to your disk and I don't know whether too many threads will slow down disk operations (do you have a solid-state drive where arm movement is not an issue?).
If you run the following code example with MAX_CONCURRENCY set to 1, presumably it should work. Of course, that is not your end goal. But it demonstrates how easily you can set the concurrency.

import time
from concurrent.futures import ThreadPoolExecutor as Executor
from model import ProtonOC
import random
import os
import numpy as np
import sys
sys.setrecursionlimit(100000)

def load(dir):
    result = list()
    names = list()
    for filename in os.scandir(dir):
        if filename.path.endswith('.xml'):
            result.append(filename.path)
            names.append(filename.name[:-4])
    return result, names

def run(xml, name):
    model = ProtonOC()
    model.override_xml(xml)
    model.run()
    new_dir = os.path.join("C:\\", name)
    os.mkdir(new_dir)
    model.datacollector.get_agent_vars_dataframe().to_pickle(os.path.join(new_dir, "agents.pkl"))
    model.datacollector.get_model_vars_dataframe().to_pickle(os.path.join(new_dir, "model.pkl"))

if __name__ == '__main__':
    paths, names = load("C:\\")  # The folder that contains xml files
    MAX_CONCURRENCY = 20  # never more than this
    N_WORKERS = min(len(paths), MAX_CONCURRENCY)
    with Executor(max_workers=N_WORKERS) as executor:
        executor.map(run, paths, names)

To use a process pool, change:

from concurrent.futures import ThreadPoolExecutor as Executor

to:

from concurrent.futures import ProcessPoolExecutor as Executor

You may then wish to change MAX_CONCURRENCY. But because the jobs still involve a lot of I/O and give up the processor when doing this I/O, you might benefit from this value being greater than the number of CPUs you have.
Update
An alternative to using the map method of the ThreadPoolExecutor class is to use submit. This gives you an opportunity to handle any exception on an individual job-submission basis:

if __name__ == '__main__':
    paths, names = load("C:\\")  # The folder that contains xml files
    MAX_CONCURRENCY = 20  # never more than this
    N_WORKERS = min(len(paths), MAX_CONCURRENCY)
    with Executor(max_workers=N_WORKERS) as executor:
        futures = [executor.submit(run, path, name) for path, name in zip(paths, names)]
        for future in futures:
            try:
                result = future.result()  # return value from run, which is None
            except Exception as e:  # any exception run might have thrown
                print(e)  # handle this as you see fit

You should be aware that this submits jobs one by one, whereas map, when used with the ProcessPoolExecutor, allows you to specify a chunksize parameter. When you have a pool size of N and M jobs to submit, where M is much greater than N, it is more efficient to place chunksize jobs at a time on the work queue for each process in the pool rather than one at a time, to reduce the number of shared memory transfers required. But as long as you are using a thread pool, this is not relevant.
How to share variables in multiprocessing [duplicate]
The following does not work.

one.py

import shared
shared.value = 'Hello'
raw_input('A cheap way to keep process alive..')

two.py

import shared
print shared.value

run on two command lines as:

>>python one.py
>>python two.py

(the second one gets an attribute error, rightly so). Is there a way to accomplish this, that is, share a variable between two scripts?
Hope it's OK to jot down my notes about this issue here. First of all, I appreciate the example in the OP a lot, because that is where I started as well - although it made me think shared is some built-in Python module, until I found a complete example at [Tutor] Global Variables between Modules ??. However, when I looked for "sharing variables between scripts" (or processes) - besides the case when a Python script needs to use variables defined in other Python source files (but not necessarily running processes) - I mostly stumbled upon two other use cases: A script forks itself into multiple child processes, which then run in parallel (possibly on multiple processors) on the same PC A script spawns multiple other child processes, which then run in parallel (possibly on multiple processors) on the same PC As such, most hits regarding "shared variables" and "interprocess communication" (IPC) discuss cases like these two; however, in both of these cases one can observe a "parent", to which the "children" usually have a reference. What I am interested in, however, is running multiple invocations of the same script, ran independently, and sharing data between those (as in Python: how to share an object instance across multiple invocations of a script), in a singleton/single instance mode. That kind of problem is not really addressed by the above two cases - instead, it essentially reduces to the example in OP (sharing variables across two scripts). Now, when dealing with this problem in Perl, there is IPC::Shareable; which "allows you to tie a variable to shared memory", using "an integer number or 4 character string[1] that serves as a common identifier for data across process space". Thus, there are no temporary files, nor networking setups - which I find great for my use case; so I was looking for the same in Python. However, as accepted answer by #Drewfer notes: "You're not going to be able to do what you want without storing the information somewhere external to the two instances of the interpreter"; or in other words: either you have to use a networking/socket setup - or you have to use temporary files (ergo, no shared RAM for "totally separate python sessions"). Now, even with these considerations, it is kinda difficult to find working examples (except for pickle) - also in the docs for mmap and multiprocessing. I have managed to find some other examples - which also describe some pitfalls that the docs do not mention: Usage of mmap: working code in two different scripts at Sharing Python data between processes using mmap | schmichael's blog Demonstrates how both scripts change the shared value Note that here a temporary file is created as storage for saved data - mmap is just a special interface for accessing this temporary file Usage of multiprocessing: working code at: Python multiprocessing RemoteManager under a multiprocessing.Process - working example of SyncManager (via manager.start()) with shared Queue; server(s) writes, clients read (shared data) Comparison of the multiprocessing module and pyro? 
- working example of BaseManager (via server.serve_forever()) with shared custom class; server writes, client reads and writes How to synchronize a python dict with multiprocessing - this answer has a great explanation of multiprocessing pitfalls, and is a working example of SyncManager (via manager.start()) with shared dict; server does nothing, client reads and writes Thanks to these examples, I came up with an example, which essentially does the same as the mmap example, with approaches from the "synchronize a python dict" example - using BaseManager (via manager.start() through file path address) with shared list; both server and client read and write (pasted below). Note that: multiprocessing managers can be started either via manager.start() or server.serve_forever() serve_forever() locks - start() doesn't There is auto-logging facility in multiprocessing: it seems to work fine with start()ed processes - but seems to ignore the ones that serve_forever() The address specification in multiprocessing can be IP (socket) or temporary file (possibly a pipe?) path; in multiprocessing docs: Most examples use multiprocessing.Manager() - this is just a function (not class instantiation) which returns a SyncManager, which is a special subclass of BaseManager; and uses start() - but not for IPC between independently ran scripts; here a file path is used Few other examples serve_forever() approach for IPC between independently ran scripts; here IP/socket address is used If an address is not specified, then an temp file path is used automatically (see 16.6.2.12. Logging for an example of how to see this) In addition to all the pitfalls in the "synchronize a python dict" post, there are additional ones in case of a list. That post notes: All manipulations of the dict must be done with methods and not dict assignments (syncdict["blast"] = 2 will fail miserably because of the way multiprocessing shares custom objects) The workaround to dict['key'] getting and setting, is the use of the dict public methods get and update. The problem is that there are no such public methods as alternative for list[index]; thus, for a shared list, in addition we have to register __getitem__ and __setitem__ methods (which are private for list) as exposed, which means we also have to re-register all the public methods for list as well :/ Well, I think those were the most critical things; these are the two scripts - they can just be ran in separate terminals (server first); note developed on Linux with Python 2.7: a.py (server): import multiprocessing import multiprocessing.managers import logging logger = multiprocessing.log_to_stderr() logger.setLevel(logging.INFO) class MyListManager(multiprocessing.managers.BaseManager): pass syncarr = [] def get_arr(): return syncarr def main(): # print dir([]) # cannot do `exposed = dir([])`!! manually: MyListManager.register("syncarr", get_arr, exposed=['__getitem__', '__setitem__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']) manager = MyListManager(address=('/tmp/mypipe'), authkey='') manager.start() # we don't use the same name as `syncarr` here (although we could); # just to see that `syncarr_tmp` is actually <AutoProxy[syncarr] object> # so we also have to expose `__str__` method in order to print its list values! 
syncarr_tmp = manager.syncarr() print("syncarr (master):", syncarr, "syncarr_tmp:", syncarr_tmp) print("syncarr initial:", syncarr_tmp.__str__()) syncarr_tmp.append(140) syncarr_tmp.append("hello") print("syncarr set:", str(syncarr_tmp)) raw_input('Now run b.py and press ENTER') print print 'Changing [0]' syncarr_tmp.__setitem__(0, 250) print 'Changing [1]' syncarr_tmp.__setitem__(1, "foo") new_i = raw_input('Enter a new int value for [0]: ') syncarr_tmp.__setitem__(0, int(new_i)) raw_input("Press any key (NOT Ctrl-C!) to kill server (but kill client first)".center(50, "-")) manager.shutdown() if __name__ == '__main__': main() b.py (client) import time import multiprocessing import multiprocessing.managers import logging logger = multiprocessing.log_to_stderr() logger.setLevel(logging.INFO) class MyListManager(multiprocessing.managers.BaseManager): pass MyListManager.register("syncarr") def main(): manager = MyListManager(address=('/tmp/mypipe'), authkey='') manager.connect() syncarr = manager.syncarr() print "arr = %s" % (dir(syncarr)) # note here we need not bother with __str__ # syncarr can be printed as a list without a problem: print "List at start:", syncarr print "Changing from client" syncarr.append(30) print "List now:", syncarr o0 = None o1 = None while 1: new_0 = syncarr.__getitem__(0) # syncarr[0] new_1 = syncarr.__getitem__(1) # syncarr[1] if o0 != new_0 or o1 != new_1: print 'o0: %s => %s' % (str(o0), str(new_0)) print 'o1: %s => %s' % (str(o1), str(new_1)) print "List is:", syncarr print 'Press Ctrl-C to exit' o0 = new_0 o1 = new_1 time.sleep(1) if __name__ == '__main__': main() As a final remark, on Linux /tmp/mypipe is created - but is 0 bytes, and has attributes srwxr-xr-x (for a socket); I guess this makes me happy, as I neither have to worry about network ports, nor about temporary files as such :) Other related questions: Python: Possible to share in-memory data between 2 separate processes (very good explanation) Efficient Python to Python IPC Python: Sending a variable to another script
You're not going to be able to do what you want without storing the information somewhere external to the two instances of the interpreter.
If it's just simple variables you want, you can easily dump a python dict to a file with the pickle module in script one and then re-load it in script two. Example:

one.py

import pickle

shared = {"Foo": "Bar", "Parrot": "Dead"}
fp = open("shared.pkl", "w")
pickle.dump(shared, fp)

two.py

import pickle

fp = open("shared.pkl")
shared = pickle.load(fp)
print shared["Foo"]
sudo apt-get install memcached python-memcache

one.py

import memcache
shared = memcache.Client(['127.0.0.1:11211'], debug=0)
shared.set('Value', 'Hello')

two.py

import memcache
shared = memcache.Client(['127.0.0.1:11211'], debug=0)
print shared.get('Value')
What you're trying to do here (store a shared state in a Python module over separate python interpreters) won't work. A value in a module can be updated by one module and then read by another module, but this must be within the same Python interpreter. What you seem to be doing here is actually a sort of interprocess communication; this could be accomplished via socket communication between the two processes, but it is significantly less trivial than what you are expecting to have work here.
You can use a relatively simple mmap file. You can use shared.py to store the common constants. The following code will work across different python interpreters / scripts / processes.

shared.py:

MMAP_SIZE = 16*1024
MMAP_NAME = 'Global\\SHARED_MMAP_NAME'

* The "Global" prefix is Windows syntax for global names

one.py:

import sys
import mmap
from shared import MMAP_SIZE, MMAP_NAME

def write_to_mmap():
    map_file = mmap.mmap(-1, MMAP_SIZE, tagname=MMAP_NAME, access=mmap.ACCESS_WRITE)
    map_file.seek(0)
    map_file.write('hello\n')
    ret = map_file.flush() != 0
    if sys.platform.startswith('win'):
        assert(ret != 0)
    else:
        assert(ret == 0)

two.py:

import mmap
from shared import MMAP_SIZE, MMAP_NAME

def read_from_mmap():
    map_file = mmap.mmap(-1, MMAP_SIZE, tagname=MMAP_NAME, access=mmap.ACCESS_READ)
    map_file.seek(0)
    data = map_file.readline().rstrip('\n')
    map_file.close()
    print data

* This code was written for Windows; Linux might need little adjustments

More info at https://docs.python.org/2/library/mmap.html
Share a dynamic variable by Redis:

script_one.py

from redis import Redis
from time import sleep

cli = Redis('localhost')
shared_var = 1

while True:
    cli.set('share_place', shared_var)
    shared_var += 1
    sleep(1)

Run script_one in a terminal (a process):

$ python script_one.py

script_two.py

from redis import Redis
from time import sleep

cli = Redis('localhost')

while True:
    print(int(cli.get('share_place')))
    sleep(1)

Run script_two in another terminal (another process):

$ python script_two.py

Out:

1
2
3
4
5
...

Dependencies:

$ pip install redis
$ apt-get install redis-server
I'd advise that you use the multiprocessing module. You can't run two scripts from the commandline, but you can have two separate processes easily speak to each other. From the doc's examples:

from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()
You need to store the variable in some sort of persistent file. There are several modules to do this, depending on your exact need. The pickle and cPickle module can save and load most python objects to file. The shelve module can store python objects in a dictionary-like structure (using pickle behind the scenes). The dbm/bsddb/dbhash/gdm modules can store string variables in a dictionary-like structure. The sqlite3 module can store data in a lightweight SQL database. The biggest problem with most of these are that they are not synchronised across different processes - if one process reads a value while another is writing to the datastore then you may get incorrect data or data corruption. To get round this you will need to write your own file locking mechanism or use a full-blown database.
If you want to read and modify shared data between 2 scripts which run separately, a good solution would be to take advantage of the python multiprocessing module and use a Pipe() or a Queue() (see differences here). This way you get to sync the scripts and avoid problems regarding concurrency and global variables (like what happens if both scripts want to modify a variable at the same time).
The best part about using pipes/queues is that you can pass python objects through them.
There are also methods to avoid waiting for data if it hasn't been passed yet (queue.empty() and pipeConn.poll()).
See an example using Queue() below:

# main.py
from multiprocessing import Process, Queue
from stage1 import Stage1
from stage2 import Stage2

s1 = Stage1()
s2 = Stage2()

# S1 to S2 communication
queueS1 = Queue()  # s1.stage1() writes to queueS1

# S2 to S1 communication
queueS2 = Queue()  # s2.stage2() writes to queueS2

# start s2 as another process
s2 = Process(target=s2.stage2, args=(queueS1, queueS2))
s2.daemon = True
s2.start()     # Launch the stage2 process

s1.stage1(queueS1, queueS2)  # start sending stuff from s1 to s2
s2.join()  # wait till s2 daemon finishes

# stage1.py
import time
import random

class Stage1:

    def stage1(self, queueS1, queueS2):
        print("stage1")
        lala = []
        lis = [1, 2, 3, 4, 5]
        for i in range(len(lis)):
            # to avoid unnecessary waiting
            if not queueS2.empty():
                msg = queueS2.get()    # get msg from s2
                print("! ! ! stage1 RECEIVED from s2:", msg)
                lala = [6, 7, 8]  # now that a msg was received, further msgs will be different
            time.sleep(1)  # work
            random.shuffle(lis)
            queueS1.put(lis + lala)
        queueS1.put('s1 is DONE')

# stage2.py
import time

class Stage2:

    def stage2(self, queueS1, queueS2):
        print("stage2")
        while True:
            msg = queueS1.get()    # wait till there is a msg from s1
            print("- - - stage2 RECEIVED from s1:", msg)
            if msg == 's1 is DONE':
                break  # ends loop
            time.sleep(1)  # work
            queueS2.put("update lists")

EDIT: just found that you can use queue.get(False) to avoid blocking when receiving data. This way there's no need to check first whether the queue is empty. This is not possible if you use pipes.
Use text files or environment variables. Since the two run separately, you can't really do what you are trying to do.
In your example, the first script runs to completion, and then the second script runs. That means you need some sort of persistent state. Other answers have suggested using text files or Python's pickle module. Personally I am lazy, and I wouldn't use a text file when I could use pickle; why should I write a parser to parse my own text file format? Instead of pickle you could also use the json module to store it as JSON. This might be preferable if you want to share the data to non-Python programs, as JSON is a simple and common standard. If your Python doesn't have json, get simplejson. If your needs go beyond pickle or json -- say you actually want to have two Python programs executing at the same time and updating the persistent state variables in real time -- I suggest you use the SQLite database. Use an ORM to abstract the database away, and it's super easy. For SQLite and Python, I recommend Autumn ORM.
This method seems straightforward to me:

class SharedClass:
    def __init__(self):
        self.data = {}

    def set_data(self, name, value):
        self.data[name] = value

    def get_data(self, name):
        try:
            return self.data[name]
        except:
            return "none"

    def reset_data(self):
        self.data = {}

sharedClass = SharedClass()

PS: you can set the data with a parameter name and a value for it, and access the value with the get_data method; below is an example:

To set the data

example 1: sharedClass.set_data("name", "Jon Snow")
example 2: sharedClass.set_data("email", "jon@got.com")

To get the data

sharedClass.get_data("email")

To reset the entire state simply use

sharedClass.reset_data()

It's kind of like accessing data from a json object (a dict in this case).
Hope this helps....
You could use the basic from and import functions in python to import the variable into two.py. For example: from filename import variable That should import the variable from the file. (Of course you should replace filename with one.py, and replace variable with the variable you want to share to two.py.)
You can also solve this problem by making the variable global.

first.py:

class Temp:
    def __init__(self):
        self.first = None

global var1
var1 = Temp()
var1.first = 1
print(var1.first)

second.py:

import first as One
print(One.var1.first)
Keeping Python Variables between Script Calls
I have a python script that needs to load a large file from disk into a variable. This takes a while. The script will be called many times from another application (still unknown), with different options, and the stdout will be used. Is there any possibility to avoid reading the large file for each single call of the script? I guess I could have one large script running in the background that holds the variable. But then, how can I call the script with different options and read the stdout from another application?
Make it a (web) microservice: formalize all the different CLI arguments as HTTP endpoints and send requests to it from the main application.
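A bare-bones sketch of that idea using only the standard library; the port, the query parameters and the load_big_file() helper are placeholders for whatever the real script does:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

DATA = load_big_file()  # hypothetical: read the large file once, at server startup

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)  # the old CLI options become query parameters
        result = {'options': params, 'size': len(DATA)}  # placeholder "work" using the preloaded DATA
        body = json.dumps(result).encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('127.0.0.1', 8000), Handler).serve_forever()

The calling application then requests http://127.0.0.1:8000/?option=value instead of re-running the script, and the file is only ever read once.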
(I misunderstood the original question, but the first answer I wrote has a different solution, which might be useful to someone fitting that scenario, so I am keeping that one as is and proposing a second solution.)
For a single machine, OS-provided pipes are the best solution for what you are looking for. Essentially you will create a forever-running process in python which reads from a pipe, processes the commands entering the pipe, and then prints to stdout.
Reference: http://kblin.blogspot.com/2012/05/playing-with-posix-pipes-in-python.html
From the above-mentioned source:
Workload
In order to simulate my workload, I came up with the following simple script called pipetest.py that takes an output file name and then writes some text into that file.

#!/usr/bin/env python
import sys

def main():
    pipename = sys.argv[1]
    with open(pipename, 'w') as p:
        p.write("Ceci n'est pas une pipe!\n")

if __name__ == "__main__":
    main()

The Code
In my test, this "file" will be a FIFO created by my wrapper code. The implementation of the wrapper code is as follows; I will go over the code in detail further down this post:

#!/usr/bin/env python
import tempfile
import os
from os import path
import shutil
import subprocess

class TemporaryPipe(object):
    def __init__(self, pipename="pipe"):
        self.pipename = pipename
        self.tempdir = None

    def __enter__(self):
        self.tempdir = tempfile.mkdtemp()
        pipe_path = path.join(self.tempdir, self.pipename)
        os.mkfifo(pipe_path)
        return pipe_path

    def __exit__(self, type, value, traceback):
        if self.tempdir is not None:
            shutil.rmtree(self.tempdir)

def call_helper():
    with TemporaryPipe() as p:
        script = "./pipetest.py"
        subprocess.Popen(script + " " + p, shell=True)
        with open(p, 'r') as r:
            text = r.read()
        return text.strip()

def main():
    call_helper()

if __name__ == "__main__":
    main()
Since you already can read the data into a variable, then you might consider memory mapping the file using mmap. This is safe if multiple processes are only reading it; to support a writer would require a locking protocol.
Assuming you are not familiar with memory mapped objects, I'll wager you use them every day: this is how the operating system loads and maintains executable files. Essentially your file becomes part of the paging system, although it does not have to be in any special format.
When you read a file into memory it is unlikely it is all loaded into RAM; it will be paged out when "real" RAM becomes over-subscribed. Often this paging is a considerable overhead. A memory mapped file is just your data "ready paged". There is no overhead in reading into memory (virtual memory, that is); it is there as soon as you map it.
When you try to access the data a page fault occurs and a subset (page) is loaded into RAM, all done by the operating system; the programmer is unaware of this.
While a file remains mapped it is connected to the paging system. Another process mapping the same file will access the same object, provided changes have not been made (see MAP_SHARED).
It needs a daemon to keep the memory mapped object current in the kernel, but other than creating the object linked to the physical file, it does not need to do anything else; it can sleep or wait on a shutdown signal.
Other processes open the file (use os.open()) and map the object.
See the examples in the documentation, here and also Giving access to shared memory after child processes have already started
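A minimal sketch of mapping the file read-only; note that if the file is a pickle, pickle.loads still has to deserialize the bytes, so this mainly pays off when several processes read the same file and the page cache does the heavy lifting (the file name is a placeholder):

import mmap
import pickle

with open('big_file.pkl', 'rb') as f:  # hypothetical file name
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)  # map the whole file read-only
    data = pickle.loads(mm)  # pickle.loads accepts any bytes-like object, including an mmap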
You can store the processed values in a file, and then read the values from that file in another script.

>>> import pickle as p
>>> mystr = "foobar"
>>> p.dump(mystr, open('/tmp/t.txt', 'wb'))
>>> mystr2 = p.load(open('/tmp/t.txt', 'rb'))
>>> mystr2
'foobar'
Shared memory between python processes
I'm trying to figure out a way to share memory between python processes. Basically there are objects that exist that multiple python processes need to be able to READ (only read) and use (no mutation). Right now this is implemented using redis + strings + cPickle, but cPickle takes up precious CPU time, so I'd like to not have to use that. Most of the python shared memory implementations I've seen on the internet seem to require files and pickles, which is basically what I'm doing already and exactly what I'm trying to avoid.
What I'm wondering is if there'd be a way to write, basically, an in-memory python object database/server and a corresponding C module to interface with the database?
Basically the C module would ask the server for an address to write an object to, the server would respond with an address, then the module would write the object and notify the server that an object with a given key was written at the specified location. Then when any of the processes wanted to retrieve an object with a given key, they would just ask the db for the memory location for the given key, the server would respond with the location, and the module would know how to load that space in memory and transfer the python object back to the python process.
Is that wholly unreasonable or just really damn hard to implement? Am I chasing after something that's impossible? Any suggestions would be welcome. Thank you internet.
From Python 3.8 and onwards you can use multiprocessing.shared_memory.SharedMemory
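A small sketch of two independent interpreters sharing a block this way; the block name 'demo_block' is arbitrary:

# writer.py
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=16, name='demo_block')
shm.buf[:5] = b'hello'
input('Data written; press Enter to clean up...')
shm.close()
shm.unlink()  # free the block once every reader is done

# reader.py (run in a separate terminal while writer.py is waiting)
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(name='demo_block')
print(bytes(shm.buf[:5]))  # b'hello'
shm.close()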
Not unreasonable. IPC can be done with a memory mapped file. Python has functionality built in: http://docs.python.org/library/mmap.html Just mmap the file in both processes and hey-presto you have a shared file. Of course you'll have to poll it in both processes to see what changes. And you'll have to co-operate writes between both. And decide what format you want to put your data in. But it's a common solution to your problem.
If you don't want pickling, multiprocessing.sharedctypes might fit. It's a bit low-level, though; you get single values or arrays of specified types. Another way to distribute data to child processes (one way) is multiprocessing.Pipe. That can handle Python objects, and it's implemented in C, so I cannot tell you whether it uses pickling or not.
Python does NOT support shared memory between independent processes. You can implement your own in C language, or use SharedArray if you are working with libsvm, numpy.ndarray, scipy.sparse.

pip install SharedArray

def test():
    def generateArray():
        print('generating')
        from time import sleep
        sleep(3)
        return np.ones(1000)

    a = Sarr('test/1', generateArray)

    # use same memory as a, also works in a new process
    b = Sarr('test/1', generateArray)
    c = Sarr('test/1', generateArray)

import re
import SharedArray
import numpy as np

class Sarr(np.ndarray):
    def __new__(self, name, getData):
        if not callable(getData) and getData is None:
            return None
        self.orig_name = name
        shm_name = 'shm://' + re.sub(r'[./]', '_', name)
        try:
            shm = SharedArray.attach(shm_name)
            print('[done] reuse shared memory:', name)
            return shm
        except Exception as err:
            self._unlink(shm_name)

        data = getData() if callable(getData) else getData
        shm = SharedArray.create(shm_name, data.size)
        shm[:] = data[:]
        print('[done] loaded data to shared memory:', name)
        return shm

    def _unlink(name):
        try:
            SharedArray.delete(name[len('shm://'):])
            print('deleted shared memory:', name)
        except:
            pass

if __name__ == '__main__':
    test()