When I tried to submit code containing the eval() function, all of the web compilers I tried (pythontutor.com, programmr.com, the coursera.org autotester, etc.) returned a NameError. Why is this function not implemented on web compilers?
An eval function can certainly be abused and is a security risk.
Code execution can be limited on multiple levels. The most powerful tool for limiting what code can do is AST inspection and AST modification: Python code is tokenized, parsed into an abstract syntax tree, and finally that tree is validated and modified.
eval and exec can be abused to sneak code past AST inspection. Because the feature is not needed for online examples, it's simply omitted.
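To make that concrete, here is a minimal sketch of AST-level validation. The banned-name list and error messages are my own illustration, not the actual checks any of these sites use:

import ast

BANNED_CALLS = {"eval", "exec", "compile", "__import__"}

def validate(source):
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("imports are not allowed")
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            raise ValueError("call to %r is not allowed" % node.func.id)
    return compile(tree, "<user code>", "exec")

exec(validate("print('hello')"))   # passes
# validate("eval('1+1')")          # raises ValueError

And note, per the answer below, how easily a dynamically constructed name can slip past a check like this.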
I'll take a stab at this:
EDIT
I don't like admitting I'm wrong but in this case I think I have to.
I'll point out that it's still true that you're executing user code either way, so you need to actually secure the interpreter. I still believe that "there is no point whatsoever trying to blacklist, or even whitelist, allowed actions if those actions exist within the interpreter", and thus that blocking exec and eval adds only minor benefit.
Here's the key point:
The maker of Python Tutor executes code handed to him from the outside. Despite his blocking many things, including exec, allowing foreign code to execute meant there was an exploit for me to find. So far I've only imported a blocked module, but reading the source suggests this also allows unlimited IO.
Further edit: I've contacted the owner of Python Tutor, and he says he's aware of this; protections supposedly exist at a lower level, too. So it's not an exploit, but only because there are protections at "a lower level". That's how you're meant to guard against attacks: not with blacklists.
Further, further edit: I've brought down Python Tutor, although the site seems to be up again (not functional, just up), because I was checking whether I could read or write anything with this exploit of mine (at the request of Philip Guo, the owner). It seems I can at least break the site :/. Sorry, 'twas an accident. It does prove that running untrusted code is dangerous, though, which is what I've been saying all along.
However, the question asks about intention, not about whether blocking exec actually makes a difference.
Reading http://pgbovine.net/projects/pubs/guo-sigcse-preprint_2012-11-13.pdf, namely:
Since the backend is running untrusted Python code from the web, it implements sandboxing to prevent the execution of dangerous constructs such as eval, exec, file I/O and most module imports (except for a customisable whitelist of modules such as math).
I won't admit that I'm wrong about how you should treat exec and eval, but I was wrong about what the site's owner thought you should do (although admittedly he realises that you also have to sandbox).
What I previously wrote:
Firstly, it is almost certainly nothing to do with safety. Security by obscurity never works, and it's a silly thing to try. You cannot blacklist bad functions by looking for calls to them in a dynamic language. An easy example of obscuring
import os
while True: os.fork()
would be
import os
while True:
    os.__dict__["fork"]()
so there's no protection. I'll say it again: there is no point whatsoever trying to blacklist, or even whitelist, allowed actions if those actions exist within the interpreter. Anything eval can reach can be reached without eval, so blocking eval gives you no security.
I reckon the real reason is that these aren't typical Python compilers and interpreters. Much of what the code does is pushed onto JavaScript or some other awkward platform.
If you ever inspect code deeply, you'll realise that at compile-to-bytecode time some things get hard-coded, such as name bindings. When running something like eval you cannot just run the code inline; you have to compile it and dynamically extract things from a scope. Supporting such dynamism in a target language is a challenging task, and the writers probably just didn't think it was worth doing.
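A tiny demonstration of the dynamism in question: eval compiles its argument at run time and resolves names against whatever scope dictionaries it happens to be handed.

x = 41
code = compile("x + 1", "<string>", "eval")
print(eval(code))             # 42: x resolved in the enclosing scope at run time
print(eval(code, {"x": 99}))  # 100: same compiled object, a different scope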
Supporting this is especially hard if the code is compiled in the cloud and executed locally: eval would have to take a round-trip to the server in order to work! That is not an advisable course of action.
That's my guess, although I don't know how this stuff is actually implemented.
Related
I'm developing a web game in pure Python, and want some simple scripting available to allow for more dynamic game content. Game content can be added live by privileged users.
It would be nice if the scripting language could be Python. However, it can't run with access to the environment the game runs on since a malicious user could wreak havoc which would be bad. Is it possible to run sandboxed Python in pure Python?
Update: In fact, since true Python support would be way overkill, a simple scripting language with Pythonic syntax would be perfect.
If there aren't any Pythonic script interpreters, are there any other open source script interpreters written in pure Python that I could use? The requirements are support for variables, basic conditionals and function calls (not definitions).
This is really non-trivial.
There are two ways to sandbox Python. One is to create a restricted environment (i.e., very few globals, etc.) and exec your code inside this environment. This is what Messa is suggesting. It's nice, but there are lots of ways to break out of the sandbox and create trouble. There was a thread about this on python-dev a year or so ago in which people broke out using everything from catching exceptions and poking at internal state to bytecode manipulation. This is the way to go if you want a complete language.
The other way is to parse the code and then use the ast module to kick out constructs you don't want (e.g. import statements, certain function calls, etc.) and then compile the rest. This is the way to go if you want to use Python as a config language, etc.
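A rough sketch of this approach, assuming "kick out" means stripping offending statements rather than rejecting the whole script. Note that calls nested inside expressions (e.g. x = f()) would survive this naive pass and need separate handling:

import ast

class StripUnsafe(ast.NodeTransformer):
    def visit_Import(self, node):
        return None                      # drop `import ...` statements
    def visit_ImportFrom(self, node):
        return None                      # drop `from ... import ...` statements
    def visit_Expr(self, node):
        if isinstance(node.value, ast.Call):
            return None                  # drop bare call statements
        return self.generic_visit(node)

tree = StripUnsafe().visit(ast.parse("import os\nos.system('rm -rf /')\nx = 1 + 2"))
ast.fix_missing_locations(tree)
ns = {}
exec(compile(tree, "<config>", "exec"), ns)
print(ns["x"])                           # 3; the import and the call are gone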
Another way (which might not work for you since you're using GAE), is the PyPy sandbox. While I haven't used it myself, word on the intertubes is that it's the only real sandboxed Python out there.
Based on your description of the requirements (support for variables, basic conditionals, and function calls, but not definitions), you might want to evaluate approach 2 and kick everything else out of the code. It's a little tricky but doable.
Roughly ten years after the original question, Python 3.8.0 comes with auditing. Can it help? Let's limit the discussion to hard-drive writing for simplicity - and see:
from sys import addaudithook

def block_mischief(event, arg):
    if 'WRITE_LOCK' in globals() and ((event == 'open' and arg[1] != 'r')
            or event.split('.')[0] in ['subprocess', 'os', 'shutil', 'winreg']):
        raise IOError('file write forbidden')

addaudithook(block_mischief)
So far exec could easily write to disk:
exec("open('/tmp/FILE','w').write('pwned by l33t h4xx0rz')", dict(locals()))
But we can forbid it at will, so that no wicked user can access the disk from code supplied to exec(). Python modules like numpy or pickle ultimately go through Python's file access, so they are banned from disk writes too. External program calls are explicitly disabled as well.
WRITE_LOCK = True
exec("open('/tmp/FILE','w').write('pwned by l33t h4xx0rz')", dict(locals()))
exec("open('/tmp/FILE','a').write('pwned by l33t h4xx0rz')", dict(locals()))
exec("numpy.savetxt('/tmp/FILE', numpy.eye(3))", dict(locals()))
exec("import subprocess; subprocess.call('echo PWNED >> /tmp/FILE', shell=True)", dict(locals()))
An attempt to remove the lock from within exec() seems futile, since the auditing hook uses a different copy of locals that is not accessible to the code run by exec. Please prove me wrong.
exec("print('muhehehe'); del WRITE_LOCK; open('/tmp/FILE','w')", dict(locals()))
...
OSError: file write forbidden
Of course, the top-level code can enable file I/O again:
del WRITE_LOCK
exec("open('/tmp/FILE','w')", dict(locals()))
Sandboxing within CPython has proven extremely hard and many previous attempts have failed. This approach is also not entirely secure, e.g. for public web access:
Hypothetical compiled modules that use direct OS calls cannot be audited by CPython, so whitelisting only safe pure-Python modules is recommended.
There is definitely still the possibility of crashing or overloading the CPython interpreter.
Maybe some loopholes for writing files to the hard drive remain, too. But I could not use any of the usual sandbox-evasion tricks to write a single byte. We can say the "attack surface" of the Python ecosystem reduces to a rather narrow list of events to be (dis)allowed: https://docs.python.org/3/library/audit_events.html
I would be thankful to anybody pointing out flaws in this approach.
EDIT: So this is not safe either! I am very thankful to @Emu for his clever hack using exception catching and introspection:
#!/usr/bin/python3.8
from sys import addaudithook

def block_mischief(event, arg):
    if 'WRITE_LOCK' in globals() and ((event == 'open' and arg[1] != 'r')
            or event.split('.')[0] in ['subprocess', 'os', 'shutil', 'winreg']):
        raise IOError('file write forbidden')

addaudithook(block_mischief)
WRITE_LOCK = True

exec("""
import sys

def r(a, b):
    try:
        raise Exception()
    except:
        del sys.exc_info()[2].tb_frame.f_back.f_globals['WRITE_LOCK']

w = type('evil', (object,), {'__ne__': r})()
sys.audit('open', None, w)
open('/tmp/FILE','w').write('pwned by l33t h4xx0rz')""", dict(locals()))
I guess that auditing plus subprocessing is the way to go, but do not use it on production machines:
https://bitbucket.org/fdominec/experimental_sandbox_in_cpython38/src/master/sandbox_experiment.py
AFAIK it is possible to run code in a completely isolated environment:
exec(somePythonCode, {'__builtins__': {}}, {})
But in such an environment you can do almost nothing :) (you cannot even import a module, and a malicious user can still run infinite recursion or exhaust memory). Probably you would want to add some modules to serve as the interface to your game engine.
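A minimal sketch of that idea: an empty __builtins__ plus a tiny whitelisted game API. The spawn_item function is a hypothetical stand-in for your engine interface:

def spawn_item(name):
    print("spawned", name)               # imagine a real engine call here

safe_globals = {
    "__builtins__": {},                  # no open, no __import__, no eval...
    "spawn_item": spawn_item,            # the only capability handed in
}

exec("spawn_item('healing potion')", safe_globals, {})
# exec("import os", safe_globals, {})    # fails: __import__ is missing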
I'm not sure why nobody has mentioned this, but Zope 2 has a thing called Python Script, which is exactly that: restricted Python executed in a sandbox, without any access to the filesystem, with access to other Zope objects controlled by the Zope security machinery, and with imports limited to a safe subset.
Zope in general is pretty safe, so I would imagine there are no known or obvious ways to break out of the sandbox.
I'm not sure how exactly Python Scripts are implemented, but the feature has been around since about the year 2000.
And here's the magic behind Python Scripts, with detailed documentation: http://pypi.python.org/pypi/RestrictedPython - it even looks like it doesn't have any dependencies on Zope, so it can be used standalone.
Note that this is not for safely running arbitrary Python code (most random scripts will fail on their first import or file access), but rather for using Python for limited scripting within a Python application.
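A minimal standalone sketch, assuming the compile_restricted/safe_globals API that recent RestrictedPython releases document:

from RestrictedPython import compile_restricted, safe_globals

byte_code = compile_restricted(
    "result = 2 + 2",
    filename="<user script>",
    mode="exec",
)
ns = {}
exec(byte_code, dict(safe_globals), ns)
print(ns["result"])  # 4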
This answer is from my comment to a question closed as a duplicate of this one: Python from Python: restricting functionality?
I would look into a two-server approach. The first server is the privileged web server where your code lives. The second server is a very tightly controlled server that only provides a web service or RPC service and runs the untrusted code. You provide your content creator with your custom interface. For example, if you allowed the end user to create items, you would have a lookup that called the server with the code to execute and the set of parameters.
Here's an abstract example for a healing potion.
{function_id='healing potion', action='use', target='self', inventory_id='1234'}
The response might be something like
{hp='+5' action={destroy_inventory_item, inventory_id='1234'}}
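A hedged sketch of the privileged server's side of that exchange; the endpoint URL, payload schema, and timeout are all hypothetical:

import json
from urllib import request

def run_untrusted(function_id, **params):
    payload = json.dumps(dict(function_id=function_id, **params)).encode()
    req = request.Request(
        "http://sandbox.internal:8000/execute",      # hypothetical sandbox service
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=5) as resp:    # a hard timeout caps runaway scripts
        return json.loads(resp.read())

# run_untrusted('healing potion', action='use', target='self', inventory_id='1234')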
Hmm. This is a thought experiment; I don't know of it being done:
You could use the compiler package to parse the script. You can then walk this tree, prefixing all identifiers - variables, method names, etc. (also hasattr/getattr/setattr invocations and so on) - with a unique preamble so that they cannot possibly refer to your variables. You could also ensure that the compiler package itself was not invoked, and perhaps blacklist other things such as opening files. You then emit the Python code for this and compiler.compile it.
The docs note that the compiler package is not in Python 3.0, but do not mention what the 3.0 alternative is.
In general, this is parallel to how forum software and the like try to whitelist 'safe' JavaScript or HTML, etc. And they historically have a bad record of stomping all the escapes. But you might have more luck with Python :)
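The compiler package is gone in Python 3, but a rough modern equivalent of the prefixing idea can be sketched with ast. This only renames plain Name nodes, so attributes, imports, and getattr tricks would still need separate handling:

import ast

class PrefixNames(ast.NodeTransformer):
    def visit_Name(self, node):
        node.id = "user_" + node.id      # the unique preamble
        return node

tree = PrefixNames().visit(ast.parse("x = 1\ny = x + 1"))
ast.fix_missing_locations(tree)
ns = {}
exec(compile(tree, "<script>", "exec"), ns)
print(ns["user_y"])                      # 2; the script's names can't collide with yours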
I think your best bet is going to be a combination of the replies thus far.
You'll want to parse and sanitise the input - removing any import statements, for example.
You can then use Messa's exec example (or something similar) to allow code execution against only the builtins of your choosing - most likely some sort of API defined by yourself that gives the programmer access to the functionality you deem relevant.
When I try to view the built-in function all() in PyCharm, I just see "pass" in the function body. How can I view the actual implementation, so that I can know what exactly the built-in function is doing?
def all(*args, **kwargs): # real signature unknown
    """
    Return True if bool(x) is True for all values x in the iterable.

    If the iterable is empty, return True.
    """
    pass
Assuming you’re using the usual CPython interpreter, all is a builtin function object, which just has a pointer to a compiled function statically linked into the interpreter (or libpython). Showing you the x86_64 machine code at that address probably wouldn’t be very useful to the vast majority of people.
Try running your code in PyPy instead of CPython. Many things that are builtins in CPython are plain old Python code in PyPy.1 Of course that isn’t always an option (e.g., PyPy doesn’t support 3.7 features yet, there are a few third-party extension modules that are still way too slow to use, it’s harder to build yourself if you’re on some uncommon platform…), so let’s go back to CPython.
The actual C source for that function isn’t too hard to find online. It’s in bltinmodule.c. But, unlike the source code to the Python modules in your standard library, you probably don’t have these files around. Even if you do have them, the only way to connect the binary to the source is through debugging output emitted when you compiled CPython from that source, which you probably didn’t do. But if you’re thinking that sounds like a great idea—it is. Build CPython yourself (you may want a Py_DEBUG build), and then you can just run it in your C source debugger/IDE and it can handle all the clumsy bits.
But if that sounds more scary than helpful, even though you can read basic C code and would like to find it…
How did I know where to find that code on GitHub? Well, I know where the repo is; I know the basic organization of the source into Python, Objects, Modules, etc.; I know how module names usually map to C source file names; I know that builtins is special in a few ways…
That’s all pretty simple stuff. Couldn’t you just program all that knowledge into a script, which you could then build a PyCharm plugin out of?
You can do the first 50% or so in a quick evening hack, and such things litter the shores of GitHub. But actually doing it right requires handling a ton of special cases, parsing some ugly C code, etc. And for anyone capable of writing such a thing, it’s easier to just lldb Python than to write it.
1. Also, even the things that are builtins are written in a mix of Python and a Python subset called RPython, which you might find easier to understand than C—then again, it’s often even harder to find that source, and the multiple levels that all look like Python can be hard to keep straight.
I want to allow users to make their own Python "mods" for my game, by placing their scripts in a special folder which the game "scans" for Python modules and imports.
What would be the simplest way to prevent "dangerous" scripts from being imported? I don't want people complaining to me that they used someone's mod and it erased their hard drive.
Things I would like to limit are accessing/modifying/creating any files outside of their folder and connecting to the internet/downloading/sending data. If you can think of anything else, let me know.
So how can this be done?
RestrictedPython seems to be able to restrict functionality for code in a clean way and is compatible with Python up to 2.7.
http://pypi.python.org/pypi/RestrictedPython/
e.g.
By supplying a different __builtins__ dictionary, we can rule out unsafe operations, such as opening files [...]
The obvious way to do it is to load the module as a string and exec it. This has just as many security risks, but might be easier to block by using custom globals and locals. Have a look at this question - it gives some really good guidance on this. As pointed out in Delnan's comments, this isn't completely secure though.
You could also try this. I haven't used it, but it seems to provide a safe environment for unsafe scripts.
There are some serious shortcomings for sandboxed python execution. aquavitae's answer links to some good discussion on the matter, especially this blog post. Read that first.
There is a kernel of secure execution within CPython. The fundamental idea is to replace the __builtins__ global (note: not the __builtin__ module), which tells Python to turn on some security features: making a handful of attributes on certain objects inaccessible, and removing most of the implementation objects from the interpreter when evaluating that bit of code.
You'll then need to write an actual implementation, in such a way that the protected modules are not leaked into the sandbox. A fairly well-tested "file" replacement is provided in the linked blog. A look at that might give you an idea of how involved and complex this problem is.
So now that you understand that this is a challenge in Python, you should take a look at languages with sandboxed execution as a core feature, such as Lua, which is very popular in games.
Giving them Python execution and trying to limit what they do is asking for trouble. See this SO question for discussion and a pointer to a good article. (You would presumably disable "eval", but it wouldn't make much difference in practice.)
My suggestion: turn the question around. Your goal is to provide your users with scripting facilities so they can enhance the game. Find or define an interpreter for a suitable scripting language that has the features you need, and use it to execute their scripts. For example, you could support data persistence in a simple key-value store model without giving them file-creation access. Or give them a command to create files, but ensure it only accepts a path-less filename, as in the sketch below. The essential thing is to ensure that there is NO way for them to execute Python commands directly.
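The path-less filename command could look like this; the directory and function name are illustrative, not prescribed:

import os

SANDBOX_DIR = "moddata"                  # hypothetical per-user data folder

def create_mod_file(filename, contents):
    # Reject anything that isn't a bare file name.
    if os.path.basename(filename) != filename or filename in ("", ".", ".."):
        raise ValueError("plain file names only, no paths")
    os.makedirs(SANDBOX_DIR, exist_ok=True)
    with open(os.path.join(SANDBOX_DIR, filename), "w") as f:
        f.write(contents)

create_mod_file("notes.txt", "hello")    # ok
# create_mod_file("../evil.txt", "x")    # raises ValueError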
I'm making a wxPython app that I will compile with one of the various freezing utilities out there to create an executable for multiple platforms.
The program will be a map editor for a tile-based game engine.
In this app I want to provide a scripting system so that advanced users can modify the behavior of the program, such as modifying project data, exporting the project to a different format, etc.
I want the system to work like so:
The user will place the Python script they wish to run into a styled textbox and then press a button to execute the script.
I'm good with this so far; that's all really simple stuff.
Obtain the script from the textbox as a string, compile it to a code object with the built-in function compile(), then execute it:
script = textbox.text  # store the string from the textbox
code = compile(script, "script", "exec")  # make the code object
eval(code, globals())
The thing is, I want to make sure that this feature can't cause any errors or bugs:
Say there is an import statement in the script. Will this cause any problems, taking into account that the code has been compiled with something like py2exe or py2app?
How do I make sure that the user can't break a critical part of the program, like modifying part of the GUI, while still allowing them to modify the project data (the data is held in global properties in its own module)? I think this would mean modifying the globals dict that is passed to the eval function.
How do I make sure that this eval can't cause the program to hang due to a long or infinite loop?
How do I make sure that an error raised inside the user's code can't crash the whole app?
Basically, how do I avoid all those problems that can arise when allowing the user to run their own code?
EDIT: Concerning the answers given
I don't feel like any of the answers so far have really answered my questions.
Yes, they have been answered in part, but not completely. I'm well aware that it is impossible to completely stop unsafe code. People are just too clever for one man (or even a team) to think of all the ways to get around a security system and prevent them.
In fact, I don't really care if they do. I'm more worried about someone unintentionally breaking something they didn't know about. If someone really wanted to, they could tear the app to shreds with the scripting functionality, but I couldn't care less. It would be their instance, and all the problems they create will be gone when they restart the app, unless they have messed with files on the HD.
I want to prevent the problems that arise when the user does something stupid:
things like IOErrors, SyntaxErrors, infinite loops, etc.
Now, the part about scope has been answered. I understand how to define what functions and globals can be accessed from the eval function.
But is there a way to make sure that the execution of their code can be stopped if it is taking too long?
A green-thread system, perhaps? (Green because it would be evil to make users worry about thread safety.)
Also, if a user uses an import statement to load a module, even one from the standard library that isn't used in the rest of the app, could this cause problems with the app being frozen by py2exe, py2app, or Freeze? What if they import a module from outside the standard library? Would it be enough for the module to be present in the same directory as the frozen executable?
I would like to get these answers without creating a new question, but I will if I must.
Easy answer: don't.
You can forbid certain keywords (import) and operations, and accesses to certain data structures, but ultimately you're giving your power users quite a bit of power. Since this is for a rich client that runs on the user's machine, a malicious user can crash or even trash the whole app if they really feel like it. But it's their instance to crash. Document it well and tell people what not to touch.
That said, I've done this sort of thing for web apps that execute user input and yes, call eval like this:
eval(code, {"__builtins__":None}, {safe_functions})
where safe_functions is a dictionary containing {"name": func} type pairs of functions you want your users to be able to access. If there's some essential data structure that you're positive your users will never want to poke at, just pop it out of globals before passing them in.
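For instance (the helper names here are made up for illustration):

def heal(amount):
    return {"hp": "+%d" % amount}

safe_functions = {"heal": heal, "min": min, "max": max}

result = eval("heal(min(5, 10))", {"__builtins__": None}, safe_functions)
print(result)   # {'hp': '+5'}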
Incidentally, Guido addressed this issue on his blog a while ago. I'll see if I can find it.
Edit: found.
Short Answer: No
Is using eval in Python a bad practice?
Other related posts:
Safety of Python 'eval' For List Deserialization
It is not easy to create a safety net. There are too many details, and clever hacks abound:
Python: make eval safe
On your design goals:
It seems you are trying to build an extensible system by letting users modify a lot of behavior and logic.
The easiest option is to ask them to write a script which you can evaluate (eval) during the program run.
However, a good design describes and scopes the flexibility, and provides scripting mechanisms through various design schemes, ranging from configuration to plugins to full scripting capabilities. Scripting APIs, if well defined, can provide more meaningful extensibility. It is safer, too.
I'd suggest providing some kind of plug-in API and allowing users to provide plug-ins in the form of text files. You can then import them as modules into their own namespace, catching syntax errors in the process, and call the various functions defined in the plug-in module, again checking for errors. You can provide an API module that defines the functions/classes from your program that the plug-in module has access to. That gives you the freedom to make changes to your application's architecture without breaking plug-ins, since you can just adapt the API module to expose the functionality in the same way.
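A hedged sketch of that loading scheme; the folder name and the on_load hook are illustrative choices, not requirements:

import importlib.util
import pathlib
import traceback

def load_plugins(folder="plugins"):
    plugins = []
    for path in pathlib.Path(folder).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        try:
            spec.loader.exec_module(module)   # syntax/runtime errors surface here
        except Exception:
            traceback.print_exc()             # report the bad plug-in and skip it
            continue
        plugins.append(module)
    return plugins

for plugin in load_plugins():
    if hasattr(plugin, "on_load"):            # an optional, documented hook
        plugin.on_load()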
If you have the option to switch to Tkinter, you can use the bundled Tcl interpreter to process your script. For that matter, you can probably do that with a wxPython app if you don't start the Tk event loop; just use the Tcl interpreter without creating any windows.
Since the Tcl interpreter is a separate thing, it should be nearly impossible to crash the Python interpreter if you are careful about what commands you expose to Tcl. Plus, Tcl makes creating DSLs very easy.
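A minimal sketch of driving the bundled Tcl interpreter from Python without creating any windows; the log command is an illustrative example of exposing a single capability:

import tkinter

tcl = tkinter.Tcl()                  # a bare Tcl interpreter, no GUI
tcl.eval("set greeting {hello from tcl}")
print(tcl.eval("set greeting"))      # hello from tcl

# Expose exactly one Python function to scripts, nothing more:
tcl.createcommand("log", lambda msg: print("script says:", msg))
tcl.eval("log {it works}")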
Python - the only scripting language with a built-in scripting engine :-).
I'm responsible for developing a large Python/Windows/Excel application used by a financial institution which has offices all round the world. Recently the regulations in one country have changed, and as a result we have been told that we need to create a "locked-down" version of our distribution.
After some frustrating conversations with my foreign counterpart, it seems that they are concerned that somebody might misuse the python interpreter on their computer to generate non-standard applications which might be used to circumvent security.
My initial suggestion was just to take away execute-rights on python.exe and pythonw.exe: Our application works as an Excel plugin which only uses the Python DLL. Those exe files are never actually used.
My counterpart remained concerned that somebody could make calls against the Python DLL: a hacker could exploit the "exec" function, for example, from another programming language or virtual machine capable of calling functions in Windows DLLs, such as VBA.
Is there something we can do to prevent the DLL we want installed from being abused? At this point I ran out of ideas. I need to find a way to ensure that Python will only run our authorized programs.
Of course, there is an element of absurdity to this question: since the computers all have Excel and Word, they all have VBA, which is a well-known scripting language somewhat equivalent in capability to Python.
It obviously does not make sense to worry about Python when Excel's VBA is wide open. However, this is corporate politics, and it's my team proposing to use Python, so we need to prove that our stuff can be made reasonably safe.
"reasonably safe" defined arbitrarily as "safer than Excel and VBA".
You can't win that fight. Because the fight is over the wrong thing. Anyone can use any EXE or DLL.
You need to define "locked down" differently -- in a way that you can succeed.
You need to define "locked down" as "cannot change the .PY files" or "we can audit all changes to the .PY files". If you shift the playing field to something you can control, you stand a small chance of winning.
Get the regulations -- don't trust anyone else to interpret them for you.
Be absolutely sure what the regulations require -- don't listen to someone else's interpretation.
Sadly, Python is not very safe in this way, and this is quite well known. Its dynamic nature combined with the very thin layer over standard OS functionality makes it hard to lock things down. Usually the best option is to ensure the user running the app has limited rights, but I expect that is not practical. Another is to run it in a virtual machine where potential damage is at least limited to that environment.
Failing that, someone with knowledge of Python's C API could build you a bespoke .dll that explicitly limits the scope of that particular Python interpreter, vetting imports and file writes etc., but that would require quite a thorough inspection of what functionality you need and don't need and some equally thorough testing after the fact.
One idea is to run IronPython inside an AppDomain on .NET. See David W.'s comment here.
Also, there is a module you can import to restrict the interpreter from writing files. It's called safelite.
You can use that, and point out that even the inventor of Python was unable to break the security of that module! (More than twice.)
I know that writing files is not the only security you're worried about. But what you are being asked to do is stupid, so hopefully the person asking you will see "safe Python" and be satisfied.
Follow the geordi way.
Each suspect program is run in its own jailed account. Monitor system calls from the process (I know how to do this in Windows, but it is beyond the scope of this question) and kill the process if needed.
I agree that you should examine the regulations to see what they actually require. If you really do need a "locked down" Python DLL, here's a way that might do it. It will probably take a while and won't be easy to get right. BTW, since this is theoretical, I'm waving my hands over work that varies from massive to trivial :-).
The idea is to modify Python.DLL so that it only imports .py modules that have been signed by your private key. It verifies this using the public key and a code signature stashed in a variable you add to each trusted .py via a code-signing tool.
Thus you have:
Build a private build from the Python source.
In your build, change the import machinery to check for a code signature on every module when it is imported.
Create a code-signing tool that signs all the modules you use and assert are safe (aka trusted code): something like __code_signature__ = "ABD03402340" (but cryptographically secure) in each module's __init__.py file.
Create a private/public key pair for signing the code and guard the private key.
You probably don't want to sign any of the modules with exec() or eval() type capabilities.
.NET's code signing model might be helpful here if you can use IronPython.
I'm not sure it's a great idea to pursue, but in the end you would be able to assert that your build of Python.DLL only runs the code that you have signed with your private key.
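A rough sketch of the verification step, using HMAC in place of true public-key signatures just to keep the example short; a real deployment would use asymmetric keys exactly as described above:

import hashlib
import hmac

SECRET_KEY = b"guard-this-key"       # stands in for the guarded private key

def sign_module(source):
    return hmac.new(SECRET_KEY, source, hashlib.sha256).hexdigest()

def verify_module(source, signature):
    return hmac.compare_digest(sign_module(source), signature)

src = b"print('trusted module')"
sig = sign_module(src)               # the code-signing tool does this at build time
assert verify_module(src, sig)       # the patched import check does this at run time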