Mmap-like behavior in pure Python 3

I would like to use the re module with streams, but not necessarily file streams, at minimal development cost.
For file streams, there's the mmap module, which is able to impersonate a string and as such can be used freely with re.
Now I wonder how mmap manages to craft an object that re can reuse. If I just pass in whatever, re protects itself against overly incompatible objects with TypeError: expected string or bytes-like object. So I thought I'd create a class that derives from str or bytes and override a few methods such as __getitem__ (which intuitively fits Python's duck-typing philosophy), making them interact with my original stream. However, this doesn't seem to work at all: my overrides are completely ignored.
Is it possible to create such a "lazy" string in pure Python, without C extensions? If so, how?
A bit of background, to rule out alternative solutions:
Can't use mmap (the stream contents are not a file)
Can't dump the whole thing to the HDD (too slow)
Can't load the whole thing to the memory (too large)
Can seek, know the size and compute the content at runtime
Example code that demonstrates bytes resistance to modification:
import re

class FancyWrapper(bytes):
    def __init__(self, base_str):
        pass  # super() isn't called, and yet the code below finds abc, aaa and bbb

print(re.findall(b'[abc]{3}', FancyWrapper(b'abc aaa bbb def')))

Well, I found out that it's not possible, not currently.
Python's re module internally scans through a plain C buffer, which requires the object it receives to satisfy these properties:
Their representation must reside in the system memory,
Their representation must be linear, i.e. it cannot contain gaps of any sort,
Their representation must contain the content we're searching in as a whole.
So even if we managed to make re work with something other than bytes or str, we'd have to use mmap-like behavior, i.e. present our content provider as a linear region in system memory.
But the mmap mechanism works only for files, and even that is fairly limited; for example, one can't mmap a large file if one wants to write to it, as per this answer.
Even the regex module, which contains many super duper additions such as (?r), doesn't accommodate content sources other than string and bytes.
For completeness: does this mean we're screwed and can't scan through large dynamic content with re? Not necessarily. There's a way to do it, if we permit a limit on max match size. The solution is inspired by cfi's comment, and extends it to binary files.
Let n = max match size.
Start a search at position x.
While there's content:
    Navigate to position x
    Read 2*n bytes into the scan buffer
    Find the first match within the scan buffer
    If a match was found:
        Let x = x + match_pos + match_size
        Notify about match_pos and match_size
    If no match was found:
        Let x = x + n
What does using a buffer twice as big as the max match size accomplish? Imagine the user searches for A{3} and the max match size is set to 3. If we read just max match size bytes into the scan buffer and the data at the current x contained AABBBA:
This iteration would look at AAB. No match.
The next iteration would move the pointer to x+3.
Now the scan buffer would look like this: BBA. Still no match.
This is obviously bad, and the simple solution is to read twice as many bytes as we jump over, to ensure the anomaly near the scan buffer's tail is resolved.
Note that the short-circuiting on the first match within the scan buffer is supposed to protect against other anomalies such as buffer underscans. It could probably be tweaked to minimize reads for scan buffers that contain multiple matches, but I wanted to avoid further complicating things.
This probably isn't the most performant algorithm possible, but it's good enough for my use case, so I'm leaving it here.
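A minimal Python sketch of the loop above, assuming a seekable binary stream and a bytes pattern (the function and parameter names are illustrative, not part of the original answer):

import re

def scan_stream(stream, pattern, max_match_size):
    """Yield (position, match) pairs using the windowed scan described above."""
    rx = re.compile(pattern)
    n = max_match_size
    x = 0
    while True:
        stream.seek(x)
        buf = stream.read(2 * n)        # scan buffer is twice the max match size
        if not buf:
            break                       # no more content
        m = rx.search(buf)
        if m:
            yield x + m.start(), m.group()
            x += m.start() + len(m.group())
        else:
            x += n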

Related

Python Multiprocessing & ctype arrays

I'm trying to do some work on a file. The file has various data in it, and I'm pulling it in as raw strings and then working on the strings.
I'm trying to make the process multithreaded, so I can work on several chunks at once, but of course the files are quite large, several gigabytes, so memory is an issue.
The processes don't need to modify the input data, so they don't need their own copies. However, I don't know how to make an array of strings as a ctype in Python 2.7.
Currently I have:
import multiprocessing, ctypes
from multiprocessing.sharedctypes import Value, Array

with open('test.txt', 'r') as fin:
    rawdata = Array('c', fin.readlines(), lock=False)
But this doesn't work as I'd hoped: it treats the whole thing as one massive char buffer array and fails because it wants a single string object. I need to be able to pull out the original lines and work with them with existing Python code that examines the contents of the lines and does some operations, which vary from substring matching to pulling out integer and float values from the strings for mathematical operations. Is there any sensible way I can achieve this that I'm missing? Perhaps I'm using the wrong item (Array) to push the data into a shared C format?
Do you want your strings to end up as Python strings, or as c-style strings a.k.a. null-terminated character arrays? If you're working with python string processing, then simply reading the file into a non-ctypes python string and using that everywhere is the way to go -- python doesn't copy strings by default, since they're immutable anyway. If you want to use c-style strings, then you will want to allocate a character buffer using ctypes, and use fin.readinto(buffer).
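If the c-style buffer route is what you want, a rough sketch might look like this (untested; the file name is illustrative, and for cross-process sharing you'd allocate via multiprocessing.sharedctypes instead):

import ctypes
import os

filename = 'test.txt'                     # illustrative
size = os.path.getsize(filename)
buf = ctypes.create_string_buffer(size)   # writable c-style character buffer
with open(filename, 'rb') as fin:
    fin.readinto(buf)                     # fill the buffer in place, no extra copy
# buf.raw now holds the file contents as bytes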

change specific indexes in string to same value python

Goal
Read in a massive binary file (approx. 1.3 GB), change certain bits, and then write it back to a separate file (the original file cannot be modified).
Method
When I read in the binary file, it gets stored in a massive hex-encoded string, which is immutable since I am using Python.
My algorithm loops through the entire file and stores in a list all the indexes of the string that need to be modified. The catch is that all the indexes in the string need to be modified to the same value. I cannot do this in place due to the immutability. I cannot convert it into a list of chars because that blows up my memory constraints and takes a hell of a lot of time. The viable thing to do is to store the result in a separate string, but due to immutability I have to create a ton of string objects and keep concatenating to them.
I used some ideas from https://waymoot.org/home/python_string/, but they don't give me good performance. Any ideas? The goal is to copy an existing super-long string exactly into another, except for certain placeholders determined by the values in the index list.
So, to be honest, you shouldn't be reading your file into a string. You shouldn't especially be writing anything but the bytes you actually change.
That is just a waste of resources, since you only seem to be reading linearly through the file, noting down the places that need to be modified.
On all OSes with some level of mmap support (that is, Unixes such as Linux, OS X and the BSDs, as well as other OSes like Windows), you can use Python's mmap module to open the file in read/write mode, scan through it and edit it in place, without ever loading it to RAM completely and then writing it back out. A stupid example, replacing every byte with value 12 by something position-dependent:
Note: this code is mine, and not MIT-licensed. It's for text-enhancement purposes and thus covered by CC-by-SA. Thanks SE for making this stupid statement necessary.
import mmap

with open("infilename", "rb") as in_f:
    in_view = mmap.mmap(in_f.fileno(), 0, access=mmap.ACCESS_READ)  # length 0: map the complete file
    length = in_view.size()
    with open("outfilename", "wb+") as out_f:
        out_f.truncate(length)  # the output file must have the right size before mapping
        out_view = mmap.mmap(out_f.fileno(), length)
        for i in range(length):
            if in_view[i] == 12:
                out_view[i] = in_view[i] + i % 10
            else:
                out_view[i] = in_view[i]
What about slicing the string, modifying each slice, and writing it back to disk before moving on to the next slice? Too intensive for the disk?

Writing and reading headers with struct

I have a file header which I am reading and planning on writing; it contains information about the contents: version information and other string values.
Writing to the file is not too difficult, it seems pretty straightforward:
outfile.write(struct.pack('<s', "myapp-0.0.1"))
However, when I try reading back the header from the file in another method:
header_version = struct.unpack('<s', infile.read(struct.calcsize('s')))
I have the following error thrown:
struct.error: unpack requires a string argument of length 2
How do I fix this error and what exactly is failing?
Writing to the file is not too difficult, it seems pretty straightforward:
Not quite as straightforward as you think. Try looking at what's in the file, or just printing out what you're writing:
>>> struct.pack('<s', 'myapp-0.0.1')
'm'
As the docs explain:
For the 's' format character, the count is interpreted as the size of the string, not a repeat count like for the other format characters; for example, '10s' means a single 10-byte string, while '10c' means 10 characters. If a count is not given, it defaults to 1.
So, how do you deal with this?
Don't use struct if it's not what you want. The main reason to use struct is to interact with C code that dumps C struct objects directly to/from a buffer/file/socket/whatever, or a binary format spec written in a similar style (e.g. IP headers). It's not meant for general serialization of Python data. As Jon Clements points out in a comment, if all you want to store is a string, just write the string as-is. If you want to store something more complex, consider the json module; if you want something even more flexible and powerful, use pickle.
Use fixed-length strings. If part of your file format spec is that the name must always be 255 characters or less, just write '<255s'. Shorter strings will be padded, longer strings will be truncated (you might want to throw in a check for that to raise an exception instead of silently truncating).
Use some in-band or out-of-band means of passing along the length. The most common is a length prefix. (You may be able to use the 'p' or 'P' formats to help, but it really depends on the C layout/binary format you're trying to match; often you have to do something ugly like struct.pack('<h{}s'.format(len(name)), len(name), name).)
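For example, a minimal sketch of the length-prefix approach (Python 3 flavor; the 2-byte '<H' prefix and the helper names are illustrative choices, not part of any particular spec):

import struct

def write_lp_string(f, s):
    data = s.encode('utf-8')
    f.write(struct.pack('<H', len(data)))        # 2-byte little-endian length prefix
    f.write(data)

def read_lp_string(f):
    (length,) = struct.unpack('<H', f.read(2))   # read the prefix...
    return f.read(length).decode('utf-8')        # ...then exactly that many bytes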
As for why your code is failing, there are multiple reasons. First, read(11) isn't guaranteed to read 11 characters. If there's only 1 character in the file, that's all you'll get. Second, you're not actually calling read(11), you're calling read(1), because struct.calcsize('s') returns 1 (for reasons which should be obvious from the above). Third, either your code isn't exactly what you've shown above, or infile's file pointer isn't at the right place, because that code as written will successfully read in the string 'm' and unpack it as 'm'. (I'm assuming Python 2.x here; 3.x will have more problems, but you wouldn't have even gotten that far.)
For your specific use case ("file header… which contains information about the contents; version information, and other string values"), I'd just write the strings with newline terminators. (If the strings can have embedded newlines, you could backslash-escape them into \n, use C-style or RFC822-style continuations, quote them, etc.)
This has a number of advantages. For one thing, it makes the format trivially human-readable (and human-editable/-debuggable). And, while that sometimes comes with a space tradeoff, a single-character terminator is at least as efficient as, and possibly more so than, a length-prefix format would be. And, last but certainly not least, it means the code is dead-simple for both generating and parsing headers.
In a later comment you clarify that you also want to write ints, but that doesn't change anything. An 'i' int value will take 4 bytes, but most apps write a lot of small numbers, which only take 1-2 bytes (+1 for a terminator/separator) if you write them as strings. And if you're not writing small numbers, a Python int can easily be too large to fit in a C int, in which case struct will silently overflow and just write the low 32 bits.
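A tiny sketch of the newline-terminated header idea (using an in-memory buffer so it runs standalone; the field values are made up):

import io

buf = io.BytesIO()
buf.write(b"myapp-0.0.1\n")   # version string, newline-terminated
buf.write(b"42\n")            # a small int, written as text

buf.seek(0)
version = buf.readline().rstrip(b"\n").decode("ascii")
count = int(buf.readline())   # int() tolerates the trailing newline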

Pythonic and efficient way of defining multiple regexes for use over many iterations

I am presently writing a Python script to process some 10,000 or so input documents. Based on the script's progress output I notice that the first 400+ documents get processed really fast and then the script slows down although the input documents all are approximately the same size.
I am assuming this may have to do with the fact that most of the document processing is done with regexes that I do not save as regex objects once they have been compiled. Instead, I recompile the regexes whenever I need them.
Since my script has about 10 different functions all of which use about 10 - 20 different regex patterns I am wondering what would be a more efficient way in Python to avoid re-compiling the regex patterns over and over again (in Perl I could simply include a modifier //o).
My assumption is that if I store the regex objects in the individual functions using
pattern = re.compile()
the resulting regex object will not be retained until the next invocation of the function for the next iteration (each function is called but once per document).
Creating a global list of pre-compiled regexes seems an unattractive option since I would need to store the list of regexes in a different location in my code than where they are actually used.
Any advice here on how to handle this neatly and efficiently?
The re module caches compiled regex patterns. The cache is cleared once it reaches a size of re._MAXCACHE, which by default is 100. Since you have 10 functions with 10-20 regexes each (i.e. 100-200 regexes), your observed slow-down is consistent with the cache being cleared.
If you are okay with changing private variables, a quick and dirty fix to your program might be to set re._MAXCACHE to a higher value:
import re
re._MAXCACHE = 1000
Last time I looked, re.compile maintained a rather small cache, and when it filled up, just emptied it. DIY with no limit:
import re

class MyRECache(object):
    def __init__(self):
        self.cache = {}

    def compile(self, regex_string):
        if regex_string not in self.cache:
            self.cache[regex_string] = re.compile(regex_string)
        return self.cache[regex_string]
Compiled regular expressions are automatically cached by re.compile, re.search and re.match, but the maximum cache size is 100 in Python 2.7, so you're overflowing the cache.
Creating a global list of pre-compiled regexes seems an unattractive option since I would need to store the list of regexes in a different location in my code than where they are actually used.
You can define them near the place where they are used: just before the functions that use them. If you reuse the same RE in a different place, then it would have been a good idea to define it globally anyway to avoid having to modify it in multiple places.
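For instance, a pattern compiled at module level right above the only function that uses it (the pattern and names here are just an illustration):

import re

_WORD_RE = re.compile(r'\w+')   # compiled once at import time, next to its only user

def count_words(text):
    return len(_WORD_RE.findall(text))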
In the spirit of "simple is better" I'd use a little helper function like this:
import re

def rc(pattern, flags=0):
    key = pattern, flags
    if key not in rc.cache:
        rc.cache[key] = re.compile(pattern, flags)
    return rc.cache[key]

rc.cache = {}
Usage:
rc('[a-z]').sub...
rc('[a-z]').findall <- no compilation here
I also recommend trying regex. Among its many other advantages over the stock re, its MAXCACHE is 500 by default, and the cache won't get dropped completely on overflow.

Python regex parse stream

Is there any way to use regex match on a stream in python?
like
reg = re.compile(r'\w+')
reg.match(StringIO.StringIO('aa aaa aa'))
And I don't want to do this by getting the value of the whole string. I want to know if there's any way to match a regex on a stream (on the fly).
I had the same problem. The first thought was to implement a LazyString class, which acts like a string but only reads as much data from the stream as currently needed (I did this by reimplementing __getitem__ and __iter__ to fetch and buffer characters up to the highest position accessed...).
This didn't work out (I got a "TypeError: expected string or buffer" from re.match), so I looked a bit into the implementation of the re module in the standard library.
Unfortunately using regexes on a stream seems not possible. The core of the module is implemented in C and this implementation expects the whole input to be in memory at once (I guess mainly because of performance reasons). There seems to be no easy way to fix this.
I also had a look at PLY (Python Lex-Yacc), but its lexer uses re internally, so this wouldn't solve the issue.
A possibility could be to use ANTLR which supports a Python backend. It constructs the lexer using pure python code and seems to be able to operate on input streams. Since for me the problem is not that important (I do not expect my input to be extensively large...), I will probably not investigate that further, but it might be worth a look.
In the specific case of a file, if you can memory-map the file with mmap and if you're working with bytestrings instead of Unicode, you can feed a memory-mapped file to re as if it were a bytestring and it'll just work. This is limited by your address space, not your RAM, so a 64-bit machine with 8 GB of RAM can memory-map a 32 GB file just fine.
If you can do this, it's a really nice option. If you can't, you have to turn to messier options.
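A minimal sketch of the mmap route, assuming a bytes pattern (the file name and pattern are illustrative):

import mmap
import re

with open("big.log", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # re accepts the memory-mapped file as a bytes-like object
    for m in re.finditer(rb"ERROR \d+", mm):
        print(m.start(), m.group())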
The 3rd-party regex module (not re) offers partial match support, which can be used to build streaming support... but it's messy and has plenty of caveats. Things like lookbehinds and ^ won't work, zero-width matches would be tricky to get right, and I don't know if it'd interact correctly with other advanced features regex offers and re doesn't. Still, it seems to be the closest thing to a complete solution available.
If you pass partial=True to regex.match, regex.fullmatch, regex.search, or regex.finditer, then in addition to reporting complete matches, regex will also report things that could be a match if the data was extended:
In [10]: regex.search(r'1234', '12', partial=True)
Out[10]: <regex.Match object; span=(0, 2), match='12', partial=True>
It'll report a partial match instead of a complete match if more data could change the match result, so for example, regex.search(r'[\s\S]*', anything, partial=True) will always be a partial match.
With this, you can keep a sliding window of data to match, extending it when you hit the end of the window and discarding consumed data from the beginning. Unfortunately, anything that would get confused by data disappearing from the start of the string won't work, so lookbehinds, ^, \b, and \B are out. Zero-width matches would also need careful handling. Here's a proof of concept that uses a sliding window over a file or file-like object:
import regex

def findall_over_file_with_caveats(pattern, file):
    # Caveats:
    # - doesn't support ^ or backreferences, and might not play well with
    #   advanced features I'm not aware of that regex provides and re doesn't.
    # - Doesn't do the careful handling that zero-width matches would need,
    #   so consider behavior undefined in case of zero-width matches.
    # - I have not bothered to implement findall's behavior of returning groups
    #   when the pattern has groups.
    # Unlike findall, produces an iterator instead of a list.

    # bytes window for bytes pattern, unicode window for unicode pattern
    # We assume the file provides data of the same type.
    window = pattern[:0]
    chunksize = 8192
    sentinel = object()
    last_chunk = False

    while not last_chunk:
        chunk = file.read(chunksize)
        if not chunk:
            last_chunk = True
        window += chunk

        match = sentinel
        for match in regex.finditer(pattern, window, partial=not last_chunk):
            if not match.partial:
                yield match.group()

        if match is sentinel or not match.partial:
            # No partial match at the end (maybe even no matches at all).
            # Discard the window. We don't need that data.
            # The only cases I can find where we do this are if the pattern
            # uses unsupported features or if we're on the last chunk, but
            # there might be some important case I haven't thought of.
            window = window[:0]
        else:
            # Partial match at the end.
            # Discard all data not involved in the match.
            window = window[match.start():]
            if match.start() == 0:
                # Our chunks are too small. Make them bigger.
                chunksize *= 2
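Hypothetical usage, scanning a large binary file for a bytes pattern (the file name is illustrative):

with open("big.bin", "rb") as f:
    for hit in findall_over_file_with_caveats(rb"\d+", f):
        print(hit)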
This seems to be an old problem. As I have posted to a similar question, you may want to subclass the Matcher class of my solution streamsearch-py and perform regex matching in the buffer. Check out kmp_example.py for a template. If it turns out that classic Knuth-Morris-Pratt matching is all you need, then your problem would be solved right now with this little open source library :-)
The answers here are now outdated. The modern Python re package supports bytes-like objects, which expose an API you can implement yourself to get streaming behaviour.
Yes - using the getvalue method:
import cStringIO
import re
data = cStringIO.StringIO("some text")
regex = re.compile(r"\w+")
regex.match(data.getvalue())
