Finding all float literals in Python code

I am trying to find all occurrences of a literal float value in Python code. Can I do that in Komodo (or in any other way)?
In other words, I want to find every line where something like 0.0 or 1.5 or 1e5 is used, assuming it is interpreted by Python as a float literal (so no comments, for example).
I'm using Komodo 6.0 with Python 3.1.
If possible, a way to find string and integer literals would be nice to have as well.

Our SD Source Code Search Engine (SCSE) can easily do this.
SCSE is a tool for searching large source code bases, much faster than grep, by indexing the elements of the source code languages of interest. Queries can then be posed, which use the index to enable fast location of search hits. Queries and hits are displayed in a GUI, and a click on a hit will show the block of source code containing the hit.
The SCSE knows the lexical structure of each language it has indexed with the same precision as that language's compiler. (It uses front ends from a family of accurate programming language processors; this family is pretty large and happens to include the OP's target language of Python/Perl/Java/...). Thus it knows exactly where identifiers, comments, and literals (integer, float, character or string) are, and exactly their content.
SCSE queries are composed of commands representing sequences of language elements of interest. The query
'for' ... I '=' N=103
finds a for keyword near ("...") an arbitrary identifier (I) which is initialized ("=") with the numeric value ("N") of 103. Because SCSE understands the language structure, it ignores language whitespace between the tokens; e.g., it can find this regardless of intervening blanks, newlines or comments.
The query tokens I, N, F, S, C represent I(dentifier), N(atural number), F(loat), S(tring) and C(omment) respectively. The OP's original question, of finding all the floats, is thus the nearly trivial query
F
Similarly for finding all string literals ("S") and integer literals ("N"). If you wanted to find just copies of values near Pi, you'd add lower and upper bound constraints:
F>3.14<3.16
(It is pretty funny to run this on large Fortran codes; you see all kinds of bad approximations of Pi).
SCSE won't find a float in a comment or a string, because it intimately knows the difference. Writing a grep-style expression that handles all the strange combinations of whitespace, surrounding quotes and comment delimiters is obviously a lot more painful. Grep ain't the way to do this.

You could do that by selecting what you need with regular expressions.
This command (run it on a terminal) should do the trick:
sed -r "s/^([^#]*)#.*$/\1/g" YOUR_FILE | grep -P "[^'\"\w]-?[1-9]\d*[.e]\d*[^'\"\w]"
You'll probably need to tweak it to get a better result.
sed cuts out comments, while grep selects only lines containing float values (or rather a small subset of them; the expression I gave is not perfect)...
Hope it helps.
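For completeness, since the target is Python source: Python's own tokenize module from the standard library does this exactly, skipping comments and strings by construction. A minimal sketch (Python 3; the float test is my own heuristic, not part of either answer above):

import token
import tokenize

def find_float_literals(path):
    # tokenize classifies comments and strings separately, so only
    # genuine NUMBER tokens ever reach the test below
    with open(path, 'rb') as f:
        for tok in tokenize.tokenize(f.readline):
            if tok.type == token.NUMBER:
                text = tok.string.lower()
                # a float literal has a decimal point or an exponent;
                # rule out hex literals, whose digits may include 'e'
                if not text.startswith('0x') and ('.' in text or 'e' in text):
                    print(tok.start[0], tok.string)

Dropping the float test reports integer literals as well, and checking for tokenize.STRING instead covers string literals.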

Related

Python opposite of eval()

Is it possible to do the opposite of eval() in python which treats inputs as literal strings? For example f(Node(2)) -> 'Node(2)' or if there's some kind of higher-order operator that stops interpreting the input like lisp's quote() operator.
The answer to "does Python have quote?" is no, Python does not have quote. There's no homoiconicity, and no native infrastructure for representing the language's own source code.
Now for what the opposite of eval is: you seem to be missing part of the picture here. Python's eval is not the opposite of quote. There are three forms in which values can be represented: as unparsed strings, as expressions, and as values (of any type).
quote and LISP-style "eval" convert between expressions and values.
Parsing and pretty printing form another pair between expressions and strings.
Python's actual eval goes directly from strings to values, and repr goes back (in the sense of printing a parsable value[1]).
Since SO does not support drawing diagrams, picture a triangle whose corners are strings, expressions and values: quote and LISP-eval connect expressions and values, parsing and pretty printing connect strings and expressions, and Python's eval and repr cut straight across between strings and values.
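To make the direct strings-to-values leg concrete (a quick illustration, not part of the original answer):

>>> x = [1, 2.5, 'hi']
>>> s = repr(x)     # value -> parsable string
>>> s
"[1, 2.5, 'hi']"
>>> eval(s) == x    # string -> value again
True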
So if you are looking for the opposite of eval, it's either trivially repr (which might not really help you), or it would be the composition of quote and pretty-printing, if there were quoting in Python.
This composition through string functions is a bit cumbersome, which is why LISPs let the reader deal with parsing once and for all, and from that point on only go between expressions and values. I don't know of any language that actually has such a "quote + print" function (C#'s nameof comes close, but is restricted to names).
Some caveats about the diagram:
It does not commute: you have to deal with pure values or disregard side effects, for one, and most importantly, quote is not really a function in some sense, of course; it's a special form. That's why repr(x) is not the same as prettyprint(quote(x)), in general.
It does not depict pairs of isomorphisms: multiple expressions can eval to the same value, etc.
[1] That's not always the case in reality, but that's what it's supposedly there for.

How is PLY's parsetab.py formatted?

I'm working on a project to convert MATLAB code to Python, and have been somewhat successful after building off others' work. The tool uses PLY (an implementation of the lex and yacc parsing tools for Python) to parse the MATLAB input. Unfortunately, it is a requirement that my code is written in Python 3, not Python 2. The tool runs without issue in Python 2, but I get a strange error in Python 3 (assuming A is an array):
log_idx = A <= 16;
^
SyntaxError: Unexpected "=" (parser)
The MATLAB code I am trying to convert is:
idx = A <= 16;
which should convert to almost the same thing in Python 3:
idx = A <= 16
The only real difference between the Python 3 code and the Python 2 code is the PLY-generated parsetab.py file, which has substantial differences in the following variables:
_tabversion
_lr_signature
_lr_action_items
_lr_goto_items
I'm having trouble understanding the purpose of these variables and why they could be different when the only difference was the Python version used to generate the parsetab.py file.
I tried searching for documentation on this, but was unsuccessful. I originally suspected it could be a difference in the way strings are formatted between Python 2 and Python 3, but that didn't turn anything up either. Is there anyone familiar with PLY that could give some insight into how these variables are generated, or why the Python version is creating this difference?
Edit: I'm not sure if this would be useful to anyone because the file is very long and cryptic, but below are the first parts of the _lr_action_items and _lr_goto_items definitions in each version:
Python 2:
_lr_action_items = {'DOTDIV':([6,9,14,20,22,24,32,34,36,42,46,47,52,54,56,57,60,71,72,73,74,75 ...
_lr_goto_items = {'lambda_args':([45,80,238,],[99,161,263,]),'unwind':([1,8,28,77,87,160,168,177 ...
Python 3:
_lr_action_items = {'END_STMT':([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,26,27,39,41,48,50 ...
_lr_goto_items = {'top':([0,],[1,]),'stmt':([1,44,46,134,137,207,212,214,215,244,245,250 ...
I'm going to go out on a limb here, because you have provided practically no indication of what code you are actually using. So I'm just going to assume that you copied the lexer.py file from the github repository you linked to in your question.
There's an important clue in this error message:
log_idx = A <= 16;
^
SyntaxError: Unexpected "=" (parser)
Evidently, <= is not being scanned as a single token; otherwise, the parser would not see an = token at that point in the input. This can only mean that the scanner is returning two tokens, < and =, and if that's the case, it is most certainly a syntax error, as you would expect from
log_idx = A < = 16;
To figure out why the lexer would do this, it's important to understand how the Ply (default) lexer works. It gathers up all the lexer patterns from variables whose names start with t_, which must be either functions or variables whose values are strings. It then sorts them as follows:
function docstrings, in order by line number in the source file.
string values, in reverse order by length.
See Specification of Tokens in the Ply manual.
That usually does the right thing, but not always. The intention of sorting in reverse order by length is that a pattern which is a prefix of a longer pattern will come after it. So if you had the patterns '<' and '<=', '<=' would be tried first, and in the case where the input had <=, the < pattern would never be tried. That's important, since if '<' is tried first, '<=' will never be recognised.
However, this simple heuristic does not always work. The fact that a regular expression is shorter does not necessarily mean that its match will be shorter. So if you expect "maximal munch" semantics, you sometimes have to be careful about your patterns. (Or you can supply them as docstrings, because then you have complete control over the order.)
And whoever created that lexer.py file was not careful about their patterns, because it includes (among other issues):
t_LE = r"<="
t_LT = r"\<"
Note that since these are raw strings, the backslash is retained in the second string, so both patterns are of length 2:
>>> len(r"\<")
2
>>> len(r"<=")
2
Since the two patterns have the same length, their relative order in the sort is unspecified. And it is quite possible that the two versions of Python produce different sort orders, either because of differences in the implementation of sort or because of differences in the order which the dictionary of variables is iterated, or some combination of the above.
< has no special significance in a Python regular expression, so there is no need to backslash-escape it in the definition of t_LT. (Clearly, since it is not backslash-escaped in t_LE.) So the simplest solution would be to make the sort order unambiguous by removing the backslash:
t_LE = r"<="
t_LT = r"<"
Now, t_LE is longer and will definitely be tried first.
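Alternatively, a sketch of the docstring approach mentioned above: Ply always tries function-defined tokens (in source order) before string-defined ones, so defining LE as a function removes the ambiguity regardless of pattern lengths.

def t_LE(t):
    r'<='
    return t

t_LT = r'<'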
That's not the only instance of this problem in the lexer file, so you might want to revise it carefully.
Note: You could also fix the problem by adding an unnecessary backslash to the t_LE pattern; there is an argument for taking the attitude, "When in doubt, escape." However, it is useful to know which characters need to be escaped in a Python regex, and the Python documentation for the re package contains a complete list. Also, consider using long raw strings for patterns which include quotes, since neither " nor ' need to be backslash escaped in a Python regex.

position-independent comparison of Hangul characters

I am writing a python3 program that has to handle text in various writing systems, including Hangul (Korean) and I have problems with the comparison of the same character in different positions.
For those unfamiliar with Hangul (not that I know much about it, either), this script has the almost unique feature of combining the letters of a syllable into square blocks. For example 'ㅎ' is pronounced [h] and 'ㅏ' is pronounced [a], the syllable 'hah' is written '핳' (in case your system can't render Hangul: the first h is displayed in the top-left corner, the a is in the top-right corner and the second h is under them in the middle). Unicode handles this by having two different entries for each consonant, depending on whether it appears in the onset or the coda of a syllable. For example, the previous syllable is encoded as '\u1112\u1161\u11c2'.
My code needs to compare two chars, considering them as equal if they only differ for their positions. This is not the case with simple comparison, even applying Unicode normalizations. Is there a way to do it?
You will need to use a tailored version of the Unicode Collation Algorithm (UCA) that assigns equal weights to identical syllables. The UCA technical report describes the general problem for sorting Hangul.
Luckily, the ICU library has a set of collation rules that does exactly this: ko-u-co-search – Korean (General-Purpose Search), which you can try out on their demo page. To use this in Python, you will either need to use a library like PyICU, or one that implements the UCA and supports the ICU rule file format (or lets you write your own rules).
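A rough, untested sketch of what that could look like with PyICU (my own example, assuming the ICU locale keyword syntax ko@collation=search selects the search tailoring):

from icu import Collator, Locale

# request the Korean "General-Purpose Search" collation tailoring
collator = Collator.createInstance(Locale('ko@collation=search'))
collator.setStrength(Collator.PRIMARY)

# compare() returns 0 when the strings are considered equal,
# here for initial vs. final HIEUH
print(collator.compare('\u1112', '\u11c2') == 0)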
I'm the developer for Python jamo (the Hangul letters are called jamo). An easy way to do this would be to cast all jamo code points to their respective Hangul compatibility jamo (HCJ) code points. HCJ is the display form of jamo characters, so initial and final forms of consonants are the same code point.
For example:
>>> import jamo
>>> initial, vowel, final = jamo.j2hcj('\u1112\u1161\u11c2')
>>> initial == final
True
The way this is done internally is with a lookup table copied from the Unicode specifications.
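If pulling in a dependency is not an option, a rough standard-library approximation of the same lookup idea (my own sketch, not from the jamo package) relies on the fact that initial (CHOSEONG) and final (JONGSEONG) jamo share the trailing letter name in their Unicode names:

import unicodedata

def letter_name(ch):
    # e.g. 'HANGUL CHOSEONG HIEUH' and 'HANGUL JONGSEONG HIEUH'
    # both end in 'HIEUH'
    return unicodedata.name(ch).split()[-1]

print(letter_name('\u1112') == letter_name('\u11c2'))  # True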

What character-shifting / pseudo-encryption algorithm is used here?

This is a cry for help from all you cryptologists out there.
Scenario: I have a Windows application (likely built with VC++ or VB and subsequently moved to .Net) that saves some passwords in an XML file. Given a password A0123456789abcDEFGH, the resulting "encrypted" value is 04077040940409304092040910409004089040880408704086040850404504044040430407404073040720407104070
Looking at the string, I've figured out that this is just character shifting: '04' delimits actual character values, which are decimal; if I then subtract these values from 142, I get back the original ASCII code. In Jython (2.2), my decryption routine looks like this (EDITED thanks to suggestions in comments):
blocks = [ pwd[i:i+5] for i in range(0, len(pwd), 5) ]
# now a block looks like '04093'
decrypted = [ chr( 142 - int(block[3:].lstrip('0')) ) for block in blocks ]
This is fine for ASCII values (128 in total) and a handful of accented letters, but 8-bit charsets have another 128 characters; limiting accepted values to 142 doesn't make sense from a decimal perspective.
EDIT: I've gone rummaging through our systems and found three non-ASCII chars:
è 03910
Ø 03926
Õ 03929
From these values, it looks like actually subtracting the 4-number block from 4142 (leaving only '0' as separator) gives me the correct character.
So my question is:
is anybody familiar with this sort of obfuscation scheme in the Windows world? Could this be the product of a standard library function? I'm not very familiar with Win32 and .Net development, to be honest, so I might be missing something very simple.
If it's not a library function, can you think of a better method to de-obfuscate these values without resorting to the magic 142 number, i.e. a scheme that can actually be applied on non-ASCII characters without special-casing them? I'm crap at bit shifting and all that, so again I might be missing something obvious to the trained eye.
is anybody familiar with this sort of obfuscation scheme in the Windows world?
Once you understand it correctly, it's just a trivial rotation cipher like ROT13.
Why would anyone use this?
Well, in general, this is very common. Let's say you have some data that you need to obfuscate. But the decryption algorithm and key have to be embedded in software that the viewers have. There's no point using something fancy like AES, because someone can always just dig the algorithm and key out of your code instead of cracking AES. An encryption scheme that's even marginally harder to crack than finding the hidden key is just as good as a perfect encryption scheme—that is, good enough to deter casual viewers, and useless against serious attackers. (Often you aren't even really worried about stopping attacks, but about proving after the fact that your attacker must have acted in bad faith for contractual/legal reasons.) So, you use either a simple rotation cipher, or a simple xor cipher—it's fast, it's hard to get wrong and easy to debug, and if worst comes to worst you can even decrypt it manually to recover corrupted data.
As for the particulars:
If you want to handle non-ASCII characters, you pretty much have to use Unicode. If you used some fixed 8-bit charset, or the local system's OEM charset, you wouldn't be able to handle passwords from other machines.
A Python script would almost certainly handle Unicode characters, because in Python you either deal in bytes in a str, or Unicode characters in a unicode. But a Windows C or .NET app would be much more likely to use UTF-16, because Windows native APIs deal in UTF-16-LE code points in a WCHAR * (aka a string of 16-bit words).
So, why 4142? Well, it really doesn't matter what the key is. I'm guessing some programmer suggested 42. His manager then said "That doesn't sound very secure." He sighed and said, "I already explained why no key is going to be any more secure than… you know what, forget it, what about 4142?" The manager said, "Ooh, that sounds like a really secure number!" So that's why 4142.
If it's not a library function, can you think of a better method to de-obfuscate these values without resorting to the magic 142 number.
You do need to resort to the magic 4142, but you can make this a lot simpler: each block of 5 characters is the decimal representation of a UTF-16 code unit, subtracted from 4142, using C unsigned-short wraparound rules.
This would be trivial to implement in native Windows C, but it's slightly harder in Python. The best transformation function I can come up with is:
import struct

def decrypt_block(block):
    return struct.pack('>H', (4142 - int(block, 10)) % 65536)

def decrypt(pwd):
    blocks = [pwd[i:i+5] for i in range(0, len(pwd), 5)]
    return ''.join(map(decrypt_block, blocks)).decode('utf-16-be')
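As a sanity check, feeding in the sample value from the question:

>>> decrypt('04077040940409304092040910409004089040880408704086040850404504044040430407404073040720407104070')
u'A0123456789abcDEFGH'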
This would be a lot more trivial in C or C#, which is probably what they implemented things in, so let me explain what I'm doing.
You already know how to transform the string into a sequence of 5-character blocks.
My int(block, 10) is doing the same thing as your int(block.lstrip('0')), making sure that a '0' prefix doesn't make Python treat it as an octal numeral instead of decimal, but more explicitly. I don't think this is actually necessary in Jython 2.2 (it definitely isn't in more modern Python/Jython), but I left it just in case.
Next, in C, you'd just do unsigned short x = 4142U - y;, which would automatically underflow appropriately. Python doesn't have unsigned short values, just signed int, so we have to do the underflow manually. (Because Python uses floored division and remainder, the sign is always the same as the divisor—this wouldn't be true in C, at least not C99 and most platforms' C89.)
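A quick check of that wraparound arithmetic in Python:

>>> (4142 - 0x41) % 65536    # 'A' (U+0041) encodes to block '04077'
4077
>>> (4142 - 0xd800) % 65536  # a high surrogate, see the example below
14382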
Then, in C, we'd just cast the unsigned short to a 16-bit "wide character"; Python doesn't have any way to do that, so we have to use struct.pack. (Note that I'm converting it to big-endian, because I think that makes this easier to debug; in C you'd convert to native-endian, and since this is Windows, that would be little-endian.)
So, now we've got a sequence of 2-character UTF-16-BE code points. I just join them into one big string, then decode it as UTF-16-BE.
If you really want to test that I've got this right, you'll need to find characters that aren't just non-ASCII, but non-Western. In particular, you need:
A character that's > U+4142 but < U+10000. Most CJK ideographs, like U+7000 (瀀), fit the bill. This should appear as '41006', because that's 4142-0x7000 rolled over as an unsigned short.
A character that's >= U+10000. This includes uncommon CJK characters, specialized mathematical characters, characters from ancient scripts, etc. For example, the Old Italic character U+10300 (𐌀) encodes to the surrogate pair (0xd800, 0xdf00); 4142-0xd800=14382, and 4142-0xdf00=12590, so you'd get '1438212590'.
The first will be hard to find—even most Chinese- and Japanese-native programmers I've dealt with use ASCII passwords. And the second, even more so; nobody but a historical linguistics professor is likely to even think of using archaic scripts in their passwords. By Murphy's Law, if you write the correct code, it will never be used, but if you don't, it's guaranteed to show up as soon as you ship your code.

Writing and reading headers with struct

I have a file header which I am reading and planning on writing which contains information about the contents; version information, and other string values.
Writing to the file is not too difficult, it seems pretty straightforward:
outfile.write(struct.pack('<s', "myapp-0.0.1"))
However, when I try reading back the header from the file in another method:
header_version = struct.unpack('<s', infile.read(struct.calcsize('s')))
I have the following error thrown:
struct.error: unpack requires a string argument of length 2
How do I fix this error and what exactly is failing?
Writing to the file is not too difficult, it seems pretty straightforward:
Not quite as straightforward as you think. Try looking at what's in the file, or just printing out what you're writing:
>>> struct.pack('<s', 'myapp-0.0.1')
'm'
As the docs explain:
For the 's' format character, the count is interpreted as the size of the string, not a repeat count like for the other format characters; for example, '10s' means a single 10-byte string, while '10c' means 10 characters. If a count is not given, it defaults to 1.
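For example (Python 2, showing both the default count of 1 and explicit counts, with truncation and NUL padding):

>>> struct.pack('s', 'myapp-0.0.1')    # count defaults to 1
'm'
>>> struct.pack('10s', 'myapp-0.0.1')  # longer strings are truncated
'myapp-0.0.'
>>> struct.pack('12s', 'myapp-0.0.1')  # shorter ones are NUL-padded
'myapp-0.0.1\x00'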
So, how do you deal with this?
Don't use struct if it's not what you want. The main reason to use struct is to interact with C code that dumps C struct objects directly to/from a buffer/file/socket/whatever, or a binary format spec written in a similar style (e.g. IP headers). It's not meant for general serialization of Python data. As Jon Clements points out in a comment, if all you want to store is a string, just write the string as-is. If you want to store something more complex, consider the json module; if you want something even more flexible and powerful, use pickle.
Use fixed-length strings. If part of your file format spec is that the name must always be 255 characters or less, just write '<255s'. Shorter strings will be padded, longer strings will be truncated (you might want to throw in a check for that to raise an exception instead of silently truncating).
Use some in-band or out-of-band means of passing along the length. The most common is a length prefix, as in the sketch just below. (You may be able to use the 'p' or 'P' formats to help, but it really depends on the C layout/binary format you're trying to match; often you have to do something ugly like struct.pack('<h{}s'.format(len(name)), len(name), name).)
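A minimal sketch of the length-prefix approach (Python 2, where strings are bytes; the helper names are mine):

import struct

def write_string(outfile, s):
    # 2-byte little-endian length prefix, then the raw bytes
    outfile.write(struct.pack('<H', len(s)))
    outfile.write(s)

def read_string(infile):
    (length,) = struct.unpack('<H', infile.read(2))
    return infile.read(length)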
As for why your code is failing, there are multiple reasons. First, read(11) isn't guaranteed to read 11 characters. If there's only 1 character in the file, that's all you'll get. Second, you're not actually calling read(11), you're calling read(1), because struct.calcsize('s') returns 1 (for reasons which should be obvious from the above). Third, either your code isn't exactly what you've shown above, or infile's file pointer isn't at the right place, because that code as written will successfully read in the string 'm' and unpack it as 'm'. (I'm assuming Python 2.x here; 3.x will have more problems, but you wouldn't have even gotten that far.)
For your specific use case ("file header… which contains information about the contents; version information, and other string values"), I'd just write the strings with newline terminators. (If the strings can have embedded newlines, you could backslash-escape them into \n, use C-style or RFC822-style continuations, quote them, etc.)
This has a number of advantages. For one thing, it makes the format trivially human-readable (and human-editable/-debuggable). And, while sometimes that comes with a space tradeoff, a single-character terminator is at least as efficient, possibly more so, than a length-prefix format would be. And, last but certainly not least, it means the code is dead-simple for both generating and parsing headers.
In a later comment you clarify that you also want to write ints, but that doesn't change anything. An 'i' int value will take 4 bytes, but most apps write a lot of small numbers, which only take 1-2 bytes (+1 for a terminator/separator) if you write them as strings. And if you're not writing small numbers, a Python int can easily be too large to fit in a C int—in which case struct will silently overflow and just write the low 32 bits.
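A tiny sketch of the newline-terminated header idea (Python 2; the field names are illustrative only):

def write_header(outfile, version, build):
    outfile.write(version + '\n')      # e.g. 'myapp-0.0.1'
    outfile.write(str(build) + '\n')   # small ints cost only a few bytes as text

def read_header(infile):
    version = infile.readline().rstrip('\n')
    build = int(infile.readline())
    return version, build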
