Parsing, securing python expression before passing it to eval() - python

I want to take input from the user, something like foo() > 90 and boo() == 9 or do() > 100, and use eval on the server side to evaluate this expression.
For security I want to restrict the user to a limited set of functions and operators, by checking the expression (against some data structure) before I pass it to the eval function.
PS: Input comes from a web page
Thanks

Basically the only way to do this is to parse it yourself. You walk the parse tree and verify that each part is on a whitelist of perfectly benign and safe operations, making the entire expression safe by construction. Ned Batchelder's answer is actually a (simple) form of this. You could pass it to eval() after that, although, what would be the point? You could just compute the value of each subexpression as part of verification (this is especially a good idea because it makes your parser resistant to changes in Python syntax and so on). This whitelist must be extremely tiny, and there are a lot of things you might think are okay but aren't (e.g. the general call operator; the getattr function). You have to be very careful.
A blacklist is absolutely out of the question (such as the suggestion to "reject suspicious entries"). Reject anything that is not obviously good. If you don't, it will be trivial to work around your filter and give an expression that does something bad, barring the unlikely possibility that your code is better than any other blacklisting filter for Python ever created.
There have been attempts at restricting Python execution, one is the infamous and now-disabled (because it didn't work) rexec module (and company), and another is PyPy's sandbox. This second option doesn't do exactly what you asked for, but it's certainly worth looking into. It's probably what I would use-- it just means that it won't be as easy as eval(safematize(user_input)).
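For what it's worth, here is a minimal sketch of the parse-and-whitelist idea using the standard ast module (Python 3.8+ node names). The allowed function names are just the ones from the question, and the node whitelist is an illustration, not a vetted safe set:

# Sketch only: whitelisted node types and function names are illustrative
# assumptions, not a guaranteed-safe sandbox.
import ast

ALLOWED_FUNCS = {"foo", "boo", "do"}
ALLOWED_NODES = (ast.Expression, ast.BoolOp, ast.And, ast.Or,
                 ast.Compare, ast.Gt, ast.GtE, ast.Lt, ast.LtE,
                 ast.Eq, ast.NotEq, ast.Call, ast.Name, ast.Load,
                 ast.Constant)

def check_expression(text):
    tree = ast.parse(text, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            raise ValueError("disallowed syntax: %s" % type(node).__name__)
        if isinstance(node, ast.Call):
            # only bare, whitelisted names may be called, with no arguments
            if (not isinstance(node.func, ast.Name)
                    or node.func.id not in ALLOWED_FUNCS
                    or node.args or node.keywords):
                raise ValueError("disallowed call")
        elif isinstance(node, ast.Name) and node.id not in ALLOWED_FUNCS:
            raise ValueError("disallowed name: %s" % node.id)
    return text

# Only after the check passes would the expression go anywhere near eval(),
# with the whitelisted functions supplied explicitly, e.g.:
#   eval(check_expression(user_input), {"__builtins__": None},
#        {"foo": foo, "boo": boo, "do": do})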

The more secure way is to do everything on the back end. Users just key in the necessary parameters. For example, you can prompt them to key in numeric values for foo(), boo() and do(). Then, on the back end, pass these values to the appropriate functions to do the calculations.

Perhaps the simplest check would be to look at all the words in the expression, and check them against a whitelist. Reject the expression if any of the words isn't on the whitelist.
import re

expr = "foo() > 90 and boo() == 9 or do() > 100"
whitelist = "and or foo boo do".split()
# \w* rather than \w+ so that single-character identifiers are checked too
for word in re.findall(r"[a-zA-Z_]\w*", expr):
    if word not in whitelist:
        raise Exception("Warning! Warning!")
This works because you have a limited domain that you need users to be able to express themselves in, and also because I don't think there's a way to cause damage with eval without using identifiers.
You'll have to be careful that your whitelist doesn't inadvertently include possibly malicious Python identifiers, though.

You need to lock down the input format, or it will be a gaping security hole. Either implement a full-blown parser, as lpthnc suggests, with a reasonable set of operations (but no more), or at least use a regular expression (or several regex patterns in a matching hierarchy and/or loop) to pick out recognized patterns and reject anything suspicious as "not allowed".

Related

Regex to split tags from non-tag string [duplicate]

Not a day passes on SO without a question about parsing (X)HTML or XML with regular expressions being asked.
While it's relatively easy to come up with examples demonstrating the non-viability of regexes for this task, or with a collection of expressions to represent the concept, I could still not find on SO a formal explanation, in layman's terms, of why this is not possible.
The only formal explanations I could find so far on this site are probably extremely accurate, but also quite cryptic to the self-taught programmer:
the flaw here is that HTML is a Chomsky Type 2 grammar (context free grammar) and RegEx is a Chomsky Type 3 grammar (regular expression)
or:
Regular expressions can only match regular languages but HTML is a context-free language.
or:
A finite automaton (which is the data structure underlying a regular expression) does not have memory apart from the state it's in, and if you have arbitrarily deep nesting, you need an arbitrarily large automaton, which collides with the notion of a finite automaton.
or:
The Pumping lemma for regular languages is the reason why you can't do that.
[To be fair: the majority of the above explanations link to Wikipedia pages, but these are not much easier to understand than the answers themselves.]
So my question is: could somebody please provide a translation in layman's terms of the formal explanations given above of why it is not possible to use regex for parsing (X)HTML/XML?
EDIT: After reading the first answer I thought that I should clarify: I am looking for a "translation" that also briefly explains the concepts it tries to translate: at the end of an answer, the reader should have a rough idea - for example - of what "regular language" and "context-free grammar" mean...
Concentrate on this one:
A finite automaton (which is the data structure underlying a regular expression) does not have memory apart from the state it's in, and if you have arbitrarily deep nesting, you need an arbitrarily large automaton, which collides with the notion of a finite automaton.
The definition of regular expressions is equivalent to the fact that a test of whether a string matches the pattern can be performed by a finite automaton (one different automaton for each pattern). A finite automaton has no memory - no stack, no heap, no infinite tape to scribble on. All it has is a finite number of internal states, each of which can read a unit of input from the string being tested, and use that to decide which state to move to next. As special cases, it has two termination states: "yes, that matched", and "no, that didn't match".
HTML, on the other hand, has structures that can nest arbitrarily deep. To determine whether a file is valid HTML or not, you need to check that all the closing tags match a previous opening tag. To understand it, you need to know which element is being closed. Without any means to "remember" what opening tags you've seen, no chance.
Note however that most "regex" libraries actually permit more than just the strict definition of regular expressions. If they can match back-references, then they've gone beyond a regular language. So the reason why you shouldn't use a regex library on HTML is a little more complex than the simple fact that HTML is not regular.
The fact that HTML doesn't represent a regular language is a red herring. Regular expressions and regular languages sound similar, but are not the same thing - they share the same origin, but there's a notable distance between the academic "regular languages" and the current matching power of engines. In fact, almost all modern regular expression engines support non-regular features - a simple example is (.*)\1, which uses backreferencing to match a repeated sequence of characters - for example 123123, or bonbon. Matching of recursive/balanced structures makes these even more fun.
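You can see that non-regular power in a couple of lines with Python's own re module (a toy demo, nothing more):

import re

# (.*)\1 matches a string consisting of some sequence followed by an exact
# copy of itself - the classic "copy language", which no finite automaton
# can recognize.
print(bool(re.fullmatch(r"(.*)\1", "bonbon")))    # True
print(bool(re.fullmatch(r"(.*)\1", "bonbons")))   # False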
Wikipedia puts this nicely, in a quote by Larry Wall:
'Regular expressions' [...] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).
"Regular expression can only match regular languages", as you can see, is nothing more than a commonly stated fallacy.
So, why not then?
A good reason not to match HTML with regular expressions is that "just because you can doesn't mean you should". While it may be possible, there are simply better tools for the job. Consider:
Valid HTML is harder/more complex than you may think.
There are many types of "valid" HTML - what is valid in HTML, for example, isn't valid in XHTML.
Much of the free-form HTML found on the internet is not valid anyway. HTML libraries do a good job of dealing with these as well, and were tested for many of these common cases.
Very often it is impossible to match a part of the data without parsing it as a whole. For example, you might be looking for all titles, and end up matching inside a comment or a string literal. <h1>.*?</h1> may be a bold attempt at finding the main title, but it might find:
<!-- <h1>not the title!</h1> -->
Or even:
<script>
var s = "Certainly <h1>not the title!</h1>";
</script>
The last point is the most important:
Using a dedicated HTML parser is better than any regex you can come up with. Very often, XPath allows a better expressive way of finding the data you need, and using an HTML parser is much easier than most people realize.
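As a small illustration of that point (this assumes the third-party lxml package and mirrors the comment example above): an HTML parser only exposes real element nodes, so the commented-out <h1> never shows up.

from lxml import html

doc = html.fromstring(
    '<html><body><!-- <h1>not the title!</h1> -->'
    '<h1>The real title</h1></body></html>'
)
# XPath runs over the parsed tree, so the <h1> inside the comment is ignored.
print(doc.xpath('//h1/text()'))   # ['The real title']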
A good summary of the subject, and an important comment on when mixing Regex and HTML may be appropriate, can be found in Jeff Atwood's blog: Parsing Html The Cthulhu Way.
When is it better to use a regular expression to parse HTML?
In most cases, it is better to use XPath on the DOM structure a library can give you. Still, against popular opinion, there are a few cases when I would strongly recommend using a regex and not a parser library:
Given a few of these conditions:
When you need a one-time update of your HTML files, and you know the structure is consistent.
When you have a very small snippet of HTML.
When you aren't dealing with an HTML file, but a similar templating engine (it can be very hard to find a parser in that case).
When you want to change parts of the HTML, but not all of it - a parser, to my knowledge, cannot answer this request: it will parse the whole document, and save a whole document, changing parts you never wanted to change.
Because HTML can have unlimited nesting of <tags><inside><tags and="<things><that><look></like></tags>"></inside></each></other> and regex can't really cope with that because it can't track a history of what it's descended into and come out of.
A simple construct that illustrates the difficulty:
<body><div id="foo">Hi there! <div id="bar">Bye!</div></div></body>
99.9% of generalized regex-based extraction routines will be unable to correctly give me everything inside the div with the ID foo, because they can't tell the closing tag for that div from the closing tag for the bar div. That is because they have no way of saying "okay, I've now descended into the second of two divs, so the next div close I see brings me back out one, and the one after that is the close tag for the first". Programmers typically respond by devising special-case regexes for the specific situation, which then break as soon as more tags are introduced inside foo and have to be unsnarled at tremendous cost in time and frustration. This is why people get mad about the whole thing.
A regular language is a language that can be matched by a finite state machine.
(Understanding Finite State machines, Push-down machines, and Turing machines is basically the curriculum of a fourth year college CS Course.)
Consider the following machine, which recognizes the string "hi".
(Start) --Read h-->(A)--Read i-->(Succeed)
   \                \
    \                --read any other value-->(Fail)
     --read any other value-->(Fail)
This is a simple machine to recognize a regular language; each expression in parentheses is a state, and each arrow is a transition. Building a machine like this will allow you to test any input string against a regular language - hence, a regular expression.
HTML requires you to know more than just what state you are in -- it requires a history of what you have seen before, to match tag nesting. You can accomplish this if you add a stack to the machine, but then it is no longer "regular". This is called a Push-down machine, and recognizes a grammar.
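A tiny Python sketch of that extra stack, with a deliberately simplified tag syntax (no attributes, purely for illustration):

import re

def tags_balanced(text):
    # hypothetical, simplified syntax: only <name> and </name>
    stack = []
    for closing, name in re.findall(r"<(/?)(\w+)>", text):
        if not closing:
            stack.append(name)          # push on open
        elif not stack or stack.pop() != name:
            return False                # close without matching open
    return not stack                    # everything opened was closed

print(tags_balanced("<a><b></b></a>"))  # True
print(tags_balanced("<a><b></a></b>"))  # False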
A regular expression is a machine with a finite (and typically rather small) number of discrete states.
To parse XML, C, or any other language with arbitrary nesting of language elements, you need to remember how deep you are. That is, you must be able to count braces/brackets/tags.
You cannot count with finite memory. There may be more brace levels than you have states! You might be able to parse a subset of your language that restricts the number of nesting levels, but it would be very tedious.
A grammar is a formal definition of where words can go. For example, adjectives precede nouns in English grammar, but follow nouns en la gramática española (in Spanish grammar).
Context-free means that the grammar works universally in all contexts. Context-sensitive means there are additional rules in certain contexts.
In C#, for example, using means something different in using System; at the top of files, than using (var sw = new StringWriter (...)). A more relevant example is the following code within code:
void Start ()
{
    string myCode = @"
    void Start()
    {
        Console.WriteLine (""x"");
    }
    ";
}
There's another practical reason for not using regular expressions to parse XML and HTML that has nothing to do with the computer science theory at all: your regular expression will either be hideously complicated, or it will be wrong.
For example, it's all very well writing a regular expression to match
<price>10.65</price>
But if your code is to be correct, then:
It must allow whitespace after the element name in both start and end tag
If the document is in a namespace, then it should allow any namespace prefix to be used
It should probably allow and ignore any unknown attributes appearing in the start tag (depending on the semantics of the particular vocabulary)
It may need to allow whitespace before and after the decimal value (again, depending on the detailed rules of the particular XML vocabulary).
It should not match something that looks like an element, but is actually in a comment or CDATA section (this becomes especially important if there is a possibility of malicious data trying to fool your parser).
It may need to provide diagnostics if the input is invalid.
Of course some of this depends on the quality standards you are applying. We see a lot of problems on StackOverflow with people having to generate XML in a particular way (for example, with no whitespace in the tags) because it is being read by an application that requires it to be written in a particular way. If your code has any kind of longevity then it's important that it should be able to process incoming XML written in any way that the XML standard permits, and not just the one sample input document that you are testing your code on.
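For contrast, a rough sketch of the parser route using the standard library (the sample document here is invented): ElementTree already deals with the whitespace, attribute and comment concerns above, so the price lookup stays a one-liner.

import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<order>'
    '  <!-- <price>9.99</price> -->'
    '  <price currency="USD"> 10.65 </price>'
    '</order>'
)
# Comments are dropped by the parser, and attributes/whitespace are handled,
# so extracting the value is one call.
print(doc.findtext("price").strip())   # 10.65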
So others have gone and given brief definitions for most of these things, but I don't really think they cover WHY normal regex's are what they are.
There are some great resources on what a finite state machine is, but in short, a seminal paper in computer science proved that the basic grammar of regexes (the standard ones, used by grep, not the extended ones, like PCRE) can always be mapped to a finite-state machine, meaning a 'machine' where you are always in a box and have a limited number of ways to move to the next box. In short, you can always tell what the next 'thing' you need to do is just by looking at the current character. (And yes, even when it comes to things like 'match at least 4, but no more than 5 times', you can still create a machine like this.) (I should note that the machine I describe here is technically only a subtype of finite-state machines, but it can implement any other subtype, so...)
This is great because you can always very efficiently evaluate such a machine, even for large inputs. Studying these sorts of questions (how does my algorithm behave when the number of things I feed it gets big) is called studying the computational complexity of the technique. If you're familiar with how a lot of calculus deals with how functions behave as they approach infinity, well, that's pretty much it.
So what's so great about a standard regular expression? Well, any given regex can match a string of length N in no more than O(N) time (meaning that doubling the length of your input doubles the time it takes; it says nothing about the speed for a given input). (Of course, some are faster: the regex * could match in O(1), meaning constant, time.) The reason is simple: remember, because the system has only a few paths from each state, you never 'go back', and you only need to check each character once. That means even if I pass you a 100 gigabyte file, you'll still be able to crunch through it pretty quickly: which is great!
Now, it's pretty clear why you can't use such a machine to parse arbitrary XML: you can have infinite tags-in-tags, and to parse correctly you would need an infinite number of states. But, if you allow recursive replaces, a PCRE is Turing complete: so it could totally parse HTML! Even if you don't, a PCRE can parse any context-free grammar, including XML. So the answer is "yeah, you can". Now, it might take exponential time (you can't use our neat finite-state machine, so you need a big fancy parser that can rewind, which means that a crafted expression will take centuries on a big file), but still. Possible.
But let's talk real quick about why that's an awful idea. First of all, while you'll see a ton of people saying "omg, regexes are so powerful", the reality is... they aren't. What they are is simple. The language is dead simple: you only need to know a few meta-characters and their meanings, and you can understand (eventually) anything written in it. However, the issue is that those meta-characters are all you have. See, they can do a lot, but they're meant to express fairly simple things concisely, not to try and describe a complicated process.
And XML sure is complicated. It's pretty easy to find examples in some of the other answers: you can't match stuff inside comment fields, etc. Representing all of that in a programming language takes work: and that's with the benefits of variables and functions! PCREs, for all their features, can't come close to that. Any hand-made implementation will be buggy: scanning blobs of meta-characters to check matching parentheses is hard, and it's not like you can comment your code. It'd be easier to define a meta-language and compile that down to a regex: and at that point, you might as well just take the language you wrote your meta-compiler with and write an XML parser. It'd be easier for you, faster to run, and just better overall.
For more neat info on this, check out this site. It does a great job of explaining all this stuff in layman's terms.
Don't parse XML/HTML with regex; use a proper XML/HTML parser and a powerful XPath query.
Theory:
According to compiler theory, XML/HTML can't be parsed using regexes based on a finite state machine. Due to the hierarchical construction of XML/HTML you need to use a pushdown automaton and manipulate a LALR grammar using a tool like YACC.
realLife©®™ everyday tools in a shell:
You can use one of the following:
xmllint, often installed by default with libxml2; XPath 1 (check my wrapper to get newline-delimited output)
xmlstarlet, can edit, select, transform... Not installed by default; XPath 1
xpath, installed via Perl's XML::XPath module; XPath 1
xidel, XPath 3
saxon-lint, my own project, a wrapper over @Michael Kay's Saxon-HE Java library; XPath 3
or you can use high-level languages and proper libraries; I think of:
python's lxml (from lxml import etree)
perl's XML::LibXML, XML::XPath, XML::Twig::XPath, HTML::TreeBuilder::XPath
ruby nokogiri, check this example
php DOMXpath, check this example
Check: Using regular expressions with HTML tags
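For instance, the lxml/XPath route from the list above might look like this (lxml is a third-party package; the document and element names are made up):

from lxml import etree

root = etree.fromstring(
    "<books>"
    "  <book lang='en'><title>Dune</title></book>"
    "  <book lang='fr'><title>Vendredi</title></book>"
    "</books>"
)
# XPath predicate: titles of English-language books only
print(root.xpath("//book[@lang='en']/title/text()"))   # ['Dune']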
In a purely theoretical sense, it is impossible for regular expressions to parse XML. They are defined in a way that allows them no memory of any previous state, thus preventing the correct matching of an arbitrary tag, and they cannot penetrate to an arbitrary depth of nesting, since the nesting would need to be built into the regular expression.
Modern regex parsers, however, are built for their utility to the developer, rather than their adherence to a precise definition. As such, we have things like back-references and recursion that make use of knowledge of previous states. Using these, it is remarkably simple to create a regex that can explore, validate, or parse XML.
Consider for example,
(?:
    <!\-\-[\S\s]*?\-\->
    |
    <([\w\-\.]+)[^>]*?
    (?:
        \/>
        |
        >
        (?:
            [^<]
            |
            (?R)
        )*
        <\/\1>
    )
)
This will find the next properly formed XML tag or comment, and it will only find it if its entire contents are properly formed. (This expression has been tested using Notepad++, which uses Boost C++'s regex library, which closely approximates PCRE.)
Here's how it works:
The first chunk matches a comment. It's necessary for this to come first so that it will deal with any commented-out code that otherwise might cause hang ups.
If that doesn't match, it will look for the beginning of a tag. Note that it uses parentheses to capture the name.
This tag will either end in a />, thus completing the tag, or it will end with a >, in which case it will continue by examining the tag's contents.
It will continue parsing until it reaches a <, at which point it will recurse back to the beginning of the expression, allowing it to deal with either a comment or a new tag.
It will continue through the loop until it arrives at either the end of the text or at a < that it cannot parse. Failing to match will, of course, cause it to start the process over. Otherwise, the < is presumably the beginning of the closing tag for this iteration. Using the back-reference inside a closing tag <\/\1>, it will match the opening tag for the current iteration (depth). There's only one capturing group, so this match is a simple matter. This makes it independent of the names of the tags used, although you could modify the capturing group to capture only specific tags, if you need to.
At this point it will either kick out of the current recursion, up to the next level or end with a match.
This example solves problems dealing with whitespace or identifying relevant content through the use of character groups that merely negate < or >, or in the case of the comments, by using [\S\s], which will match anything, including carriage returns and new lines, even in single-line mode, continuing until it reaches a -->. Hence, it simply treats everything as valid until it reaches something meaningful.
For most purposes, a regex like this isn't particularly useful. It will validate that XML is properly formed, but that's all it will really do, and it doesn't account for properties (although this would be an easy addition). It's only this simple because it leaves out real world issues like this, as well as definitions of tag names. Fitting it for real use would make it much more of a beast. In general, a true XML parser would be far superior. This one is probably best suited for teaching how recursion works.
Long story short: use an XML parser for real work, and use this if you want to play around with regexes.

Check if something is a list

What is the easiest way to check if something is a list?
A method doSomething has the parameters a and b. In the method, it will loop through the list a and do something. I'd like a way to make sure a is a list, before looping through - thus avoiding an error or the unfortunate circumstance of passing in a string then getting back a letter from each loop.
This question must have been asked before - however my googles failed me. Cheers.
To enable more usecases, but still treat strings as scalars, don't check for a being a list, check that it isn't a string:
if not isinstance(a, basestring):
    ...
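(On Python 3, where basestring no longer exists, the equivalent check would be roughly the following, also excluding bytes:)

if not isinstance(a, (str, bytes)):
    ...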
Typechecking hurts the generality, simplicity, and maintainability of your code. It is seldom used in good, idiomatic Python programs.
There are two main reasons people want to typecheck:
To issue errors if the caller provides the wrong type.
This is not worth your time. If the user provides an incompatible type for the operation you are performing, an error will already be raised when the incompatibility is hit. It is worrisome that this might not happen immediately, but it typically doesn't take long at all, and it results in code that is more robust, simple, efficient, and easier to write.
Oftentimes people insist on this in the hope that they can catch all the dumb things a user can do. If a user is willing to do arbitrarily dumb things, there is nothing you can do to stop him. Typechecking mainly has the potential of keeping out a user who comes in with his own types that are drop-in replacements for the ones expected, or a user who recognizes that your function should actually be polymorphic and provides something different that can accept the same operations.
If I had a big system where lots of things made by lots of people should fit together right, I would use a system like zope.interface to make testing that everything fits together right.
To do different things based on the types of the arguments received.
This makes your code worse because your API is inconsistent. A function or method should do one thing, not fundamentally different things. This ends up being a feature not usually worth supporting.
One common scenario is to have an argument that can either be a foo or a list of foos. A cleaner solution is simply to accept a list of foos. Your code is simpler and more consistent. If it's an important, common use case only to have one foo, you can consider having another convenience method/function that calls the one that accepts a list of foos and lose nothing. Providing the first API would not only have been more complicated and less consistent, but it would break when the types were not the exact values expected; in Python we distinguish between objects based on their capabilities, not their actual types. It's almost always better to accept an arbitrary iterable or a sequence instead of a list and anything that works like a foo instead of requiring a foo in particular.
As you can tell, I do not think either reason is compelling enough to typecheck under normal circumstances.
I'd like a way to make sure a is a list, before looping through
Document the function.
Usually it's considered bad style to type-check in Python, but try
if isinstance(a, list):
    ...
(I think you may also check if a.__iter__ exists.)
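A rough sketch of that __iter__ idea in duck-typing style (the helper name is made up; on Python 3, substitute str/bytes for basestring):

def iter_non_string(a):
    # treat strings as scalars, accept any other iterable
    if isinstance(a, basestring):
        raise TypeError("expected a non-string iterable")
    try:
        return iter(a)
    except TypeError:
        raise TypeError("expected an iterable, got %s" % type(a).__name__)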

Safety of Python 'eval' For List Deserialization

Are there any security exploits that could occur in this scenario:
eval(repr(unsanitized_user_input), {"__builtins__": None}, {"True":True, "False":False})
where unsanitized_user_input is a str object. The string is user-generated and could be nasty. Assuming our web framework hasn't failed us, it's a real honest-to-god str instance from the Python builtins.
If this is dangerous, can we do anything to the input to make it safe?
We definitely don't want to execute anything contained in the string.
See also:
Funny blog post about eval safety
Previous Question
Blog: Fast deserialization in Python
The larger context which is (I believe) not essential to the question is that we have thousands of these:
repr([unsanitized_user_input_1,
      unsanitized_user_input_2,
      unsanitized_user_input_3,
      unsanitized_user_input_4,
      ...])
in some cases nested:
repr([[unsanitized_user_input_1,
       unsanitized_user_input_2],
      [unsanitized_user_input_3,
       unsanitized_user_input_4],
      ...])
which are themselves converted to strings with repr(), put in persistent storage, and eventually read back into memory with eval.
Eval deserialized the strings from persistent storage much faster than pickle and simplejson. The interpreter is Python 2.5 so json and ast aren't available. No C modules are allowed and cPickle is not allowed.
It is indeed dangerous and the safest alternative is ast.literal_eval (see the ast module in the standard library). You can of course build and alter an ast to provide e.g. evaluation of variables and the like before you eval the resulting AST (when it's down to literals).
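A quick sketch of the ast.literal_eval route (the sample strings are only for illustration):

import ast

# Only literal structures (strings, numbers, tuples, lists, dicts, booleans,
# None) are accepted; anything that would require executing code is rejected.
print(ast.literal_eval("[['a', 'b'], ['c', 'd']]"))   # [['a', 'b'], ['c', 'd']]

try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError as exc:
    print("rejected: %s" % exc)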
The possible exploit of eval starts with any object it can get its hands on (say True here) and goes via .__class__ to its type object, etc., up to object, then gets its subclasses... basically it can get to ANY object type and wreak havoc. I can be more specific but I'd rather not do it in a public forum (the exploit is well known, but considering how many people still ignore it, revealing it to wannabe script kiddies could make things worse... just avoid eval on unsanitized user input and live happily ever after!-).
If you can prove beyond doubt that unsanitized_user_input is a str instance from the Python built-ins with nothing tampered with, then this is always safe. In fact, it'll be safe even without all those extra arguments, since eval(repr(astr)) == astr for all such string objects. You put in a string, you get back a string. All you did was escape and unescape it.
This all leads me to think that eval(repr(x)) isn't what you want--no code will ever be executed unless someone gives you an unsanitized_user_input object that looks like a string but isn't, but that's a different question--unless you're trying to copy a string instance in the slowest way possible :D.
With everything as you describe, it is technically safe to eval repr'd strings; however, I'd avoid doing it anyway, as it's asking for trouble:
There could be some weird corner case where your assumption that only repr'd strings are stored breaks down (e.g. a bug or a different pathway into the storage that doesn't repr), which instantly becomes a code injection exploit where it might otherwise be unexploitable.
Even if everything is OK now, assumptions might change at some point, and unsanitised data may get stored in that field by someone unaware of the eval code.
Your code may get reused (or worse, copy+pasted) into a situation you didn't consider.
As Alex Martelli pointed out, in python2.6 and higher, there is ast.literal_eval which will safely handle both strings and other simple datatypes like tuples. This is probably the safest and most complete solution.
Another possibility however is to use the string-escape codec. This is much faster than eval (about 10 times according to timeit), available in earlier versions than literal_eval, and should do what you want:
>>> s = 'he\nllo\' wo"rld\0\x03\r\n\tabc'
>>> repr(s)[1:-1].decode('string-escape') == s
True
(The [1:-1] is to strip the outer quotes repr adds.)
Generally, you should never allow anyone to post code.
So called "paid professional programmers" have a hard-enough time writing code that actually works.
Accepting code from the anonymous public -- without benefit of formal QA -- is the worst of all possible scenarios.
Professional programmers -- without good, solid formal QA -- will make a hash of almost any web site. Indeed, I'm reverse engineering some unbelievably bad code from paid professionals.
The idea of allowing a non-professional -- unencumbered by QA -- to post code is truly terrifying.
repr([unsanitized_user_input_1,
      unsanitized_user_input_2,
      ...

... unsanitized_user_input is a str object
You shouldn't have to serialise strings to store them in a database.
If these are all strings, as you mentioned - why can't you just store the strings in a db.StringListProperty?
The nested entries might be a bit more complicated, but why is this the case? When you have to resort to eval to get data from the database, you're probably doing something wrong.
Couldn't you store each unsanitized_user_input_x as its own db.StringProperty row, and group them by a reference field?
Either of those may not be applicable, since I've no idea what you're trying to achieve, but my point is - can you not structure the data in a way where you don't have to rely on eval (and also don't have to rely on it not being a security issue)?

Security of Python's eval() on untrusted strings?

If I am evaluating a Python string using eval(), and have a class like:
class Foo(object):
    a = 3
    def bar(self, x): return x + self.a
What are the security risks if I do not trust the string? In particular:
Is eval(string, {"f": Foo()}, {}) unsafe? That is, can you reach os or sys or something unsafe from a Foo instance?
Is eval(string, {}, {}) unsafe? That is, can I reach os or sys entirely from builtins like len and list?
Is there a way to make builtins not present at all in the eval context?
There are some unsafe strings like "[0] * 100000000" I don't care about, because at worst they slow/stop the program. I am primarily concerned about protecting user data external to the program.
Obviously, eval(string) without custom dictionaries is unsafe in most cases.
eval() will allow malicious data to compromise your entire system, kill your cat, eat your dog and make love to your wife.
There was recently a thread about how to do this kind of thing safely on the python-dev list, and the conclusions were:
It's really hard to do this properly.
It requires patches to the python interpreter to block many classes of attacks.
Don't do it unless you really want to.
Start here to read about the challenge: http://tav.espians.com/a-challenge-to-break-python-security.html
What situation do you want to use eval() in? Are you wanting a user to be able to execute arbitrary expressions? Or are you wanting to transfer data in some way? Perhaps it's possible to lock down the input in some way.
You cannot secure eval with a blacklist approach like this. See Eval really is dangerous for examples of input that will segfault the CPython interpreter, give access to any class you like, and so on.
You can get to os using builtin functions: __import__('os').
For python 2.6+, the ast module may help; in particular ast.literal_eval, although it depends on exactly what you want to eval.
Note that even if you pass empty dictionaries to eval(), it's still possible to segfault (C)Python with some syntax tricks. For example, try this on your interpreter: eval("()"*8**5)
You are probably better off turning the question around:
What sort of expressions are you wanting to eval?
Can you ensure that only strings matching some narrowly defined syntax are eval()d?
Then consider if that is safe.
For example, if you are wanting to let the user enter an algebraic expression for evaluation, consider limiting them to one letter variable names, numbers, and a specific set of operators and functions. Don't eval() strings containing anything else.
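A rough sketch of that kind of lockdown; the character whitelist, function set, and variable binding below are illustrative assumptions, not a vetted sandbox:

import math
import re

ALLOWED_CHARS = re.compile(r"^[\sa-z0-9+\-*/().,]*$")
SAFE_NAMES = {"sin": math.sin, "cos": math.cos, "pi": math.pi, "x": 2.0}

def safe_eval(expr):
    if not ALLOWED_CHARS.match(expr):
        raise ValueError("expression contains disallowed characters")
    # every multi-letter name must be on the whitelist; no underscores or
    # quotes are allowed at all, which rules out __class__-style tricks
    for name in re.findall(r"[a-z]+", expr):
        if len(name) > 1 and name not in SAFE_NAMES:
            raise ValueError("unknown name: %s" % name)
    return eval(expr, {"__builtins__": None}, SAFE_NAMES)

print(safe_eval("sin(pi / 2) + x * 3"))   # 7.0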
There is a very good article on the un-safety of eval() in Mark Pilgrim's Dive into Python tutorial.
Quoted from this article:
In the end, it is possible to safely evaluate untrusted Python expressions, for some definition of “safe” that turns out not to be terribly useful in real life. It’s fine if you’re just playing around, and it’s fine if you only ever pass it trusted input. But anything else is just asking for trouble.

Partial evaluation for parsing

I'm working on a macro system for Python (as discussed here) and one of the things I've been considering are units of measure. Although units of measure could be implemented without macros or via static macros (e.g. defining all your units ahead of time), I'm toying around with the idea of allowing syntax to be extended dynamically at runtime.
To do this, I'm considering using a sort of partial evaluation on the code at compile-time. If parsing fails for a given expression, due to a macro for its syntax not being available, the compiler halts evaluation of the function/block and generates the code it already has with a stub where the unknown expression is. When this stub is hit at runtime, the function is recompiled against the current macro set. If this compilation fails, a parse error would be thrown because execution can't continue. If the compilation succeeds, the new function replaces the old one and execution continues.
The biggest issue I see is that you can't find parse errors until the affected code is run. However, this wouldn't affect many cases, e.g. group operators like [], {}, (), and `` still need to be paired (requirement of my tokenizer/list parser), and top-level syntax like classes and functions wouldn't be affected since their "runtime" is really load time, where the syntax is evaluated and their objects are generated.
Aside from the implementation difficulty and the problem I described above, what problems are there with this idea?
Here are a few possible problems:
You may find it difficult to provide the user with helpful error messages in case of a problem. This seems likely, as any compilation-time syntax error could be just a syntax extension.
Performance hit.
I was trying to find some discussion of the pluses, minuses, and/or implementation of dynamic parsing in Perl 6, but I couldn't find anything appropriate. However, you may find this quote from Niklaus Wirth (designer of Pascal and other languages) interesting:
The phantasies of computer scientists in the 1960s knew no bounds. Spurred by the success of automatic syntax analysis and parser generation, some proposed the idea of the flexible, or at least extensible language. The notion was that a program would be preceded by syntactic rules which would then guide the general parser while parsing the subsequent program. A step further: The syntax rules would not only precede the program, but they could be interspersed anywhere throughout the text. For example, if someone wished to use a particularly fancy private form of for statement, he could do so elegantly, even specifying different variants for the same concept in different sections of the same program. The concept that languages serve to communicate between humans had been completely blended out, as apparently everyone could now define his own language on the fly. The high hopes, however, were soon damped by the difficulties encountered when trying to specify what these private constructions should mean. As a consequence, the intriguing idea of extensible languages faded away rather quickly.
Edit: Here's Perl 6's Synopsis 6: Subroutines, unfortunately in markup form because I couldn't find an updated, formatted version; search within for "macro". Unfortunately, it's not too interesting, but you may find some things relevant, like Perl 6's one-pass parsing rule, or its syntax for abstract syntax trees. The approach Perl 6 takes is that a macro is a function that executes immediately after its arguments are parsed and returns either an AST or a string; Perl 6 continues parsing as if the source actually contained the return value. There is mention of generation of error messages, but they make it seem like if macros return ASTs, you can do alright.
Pushing this one step further, you could do "lazy" parsing and always only parse enough to evaluate the next statement. Like some kind of just-in-time parser. Then syntax errors could become normal runtime errors that just raise a normal Exception that could be handled by surrounding code:
def fun():
    not implemented yet

try:
    fun()
except:
    pass
That would be an interesting effect, but if it's useful or desirable is a different question. Generally it's good to know about errors even if you don't call the code at the moment.
Macros would not be evaluated until control reaches them, and naturally the parser would already know all previous definitions. Also, the macro definition could maybe even use variables and data that the program has calculated so far (like adding some syntax for all elements in a previously calculated list). But it is probably a bad idea to start writing self-modifying programs for things that could usually just as well be done directly in the language. This could get confusing...
In any case you should make sure to parse code only once, and if it is executed a second time use the already parsed expression, so that it doesn't lead to performance problems.
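On that last point, a rough sketch of "parse once, reuse the compiled form" in plain Python (no macro machinery; the helper name is made up):

_code_cache = {}

def run_snippet(source, namespace):
    # compile each distinct snippet only once; later executions reuse the
    # cached code object instead of re-parsing the text
    code = _code_cache.get(source)
    if code is None:
        code = _code_cache[source] = compile(source, "<snippet>", "exec")
    exec(code, namespace)

ns = {}
run_snippet("answer = 6 * 7", ns)
run_snippet("answer = 6 * 7", ns)   # second call skips parsing
print(ns["answer"])                 # 42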
Here are some ideas from my master's thesis, which may or may not be helpful.
The thesis was about robust parsing of natural language.
The main idea: given a context-free grammar for a language, try to parse a given
text (or, in your case, a python program). If parsing failed, you will have a partially generated parse tree. Use the tree structure to suggest new grammar rules that will better cover the parsed text.
I could send you my thesis, but unless you read Hebrew this will probably not be useful.
In a nutshell:
I used a bottom-up chart parser. This type of parser generates edges for productions from the grammar. Each edge is marked with the part of the tree that was consumed. Each edge gets a score according to how close it was to full coverage, for example:
S -> NP . VP
Has a score of one half (We succeeded in covering the NP but not the VP).
The highest-scored edges suggest a new rule (such as X->NP).
In general, a chart parser is less efficient than a common LALR or LL parser (the types usually used for programming languages) - O(n^3) instead of O(n) complexity, but then again you are trying something more complicated than just parsing an existing language.
If you can do something with the idea, I can send you further details.
I believe looking at natural language parsers may give you some other ideas.
Another thing I've considered is making this the default behavior across the board, but allow languages (meaning a set of macros to parse a given language) to throw a parse error at compile-time. Python 2.5 in my system, for example, would do this.
Instead of the stub idea, simply recompile functions that couldn't be handled completely at compile-time when they're executed. This will also make self-modifying code easier, as you can modify the code and recompile it at runtime.
You'll probably need to delimit the bits of input text with unknown syntax, so that the rest of the syntax tree can be resolved, apart from some character sequences nodes which will be expanded later. Depending on your top level syntax, that may be fine.
You may find that the parsing algorithm and the lexer and the interface between them all need updating, which might rule out most compiler creation tools.
(The more usual approach is to use string constants for this purpose, which can be parsed to a little interpreter at run time).
I don't think your approach would work very well. Let's take a simple example written in pseudo-code:
define some syntax M1 with definition D1
if _whatever_:
define M1 to do D2
else:
define M1 to do D3
code that uses M1
So there is one example where, if you allow syntax redefinition at runtime, you have a problem (since by your approach the code that uses M1 would be compiled by definition D1). Note that verifying if syntax redefinition occurs is undecidable. An over-approximation could be computed by some kind of typing system or some other kind of static analysis, but Python is not well known for this :D.
Another thing that bothers me is that your solution does not 'feel' right. I find it evil to store source code you can't parse just because you may be able to parse it at runtime.
Another example that jumps to mind is this:
...function definition fun1 that calls fun2...
define M1 (at runtime)
use M1
...function definition for fun2
Technically, when you use M1, you cannot parse it, so you need to keep the rest of the program (including the function definition of fun2) in source code. When you run the entire program, you'll hit a call to fun2 that you cannot make, even though it's defined.
