In my Python file, I have made a GUI widget that takes some inputs from the user. I have also imported a Python module that takes some input using raw_input(). I have to use this module as it is; I have no right to change it. When I run my file, it asks me for the inputs (due to the raw_input() calls in the imported module). I want to supply the GUI widget inputs in their place.
How can I pass the user input (taken from the widget) to the raw_input() calls of the imported module?
First, if importing it directly into your script isn't actually a requirement (and it's hard to imagine why it would be), you can just run the module (or a simple script wrapped around it) as a separate process, using subprocess or pexpect.
Let's make this concrete. Say you want to use this silly module foo.py:
def bar():
    x = raw_input("Gimme a string")
    y = raw_input("Gimme another")
    return 'Got two strings: {}, {}'.format(x, y)
First, write a trivial foo_wrapper.py:
import foo
print(foo.bar())
Now, instead of calling foo.bar() directly in your real script, run foo_wrapper as a child process.
I'm going to assume that you already have the input you want to send it in a string, because that keeps the irrelevant parts of the answer simple. (If you wanted to use some GUI code for that, there's really no way I could show you how without first knowing which GUI library you're using.)
So:
import subprocess
import sys

foo_input = 'String 1\nString 2\n'
# Popen only became a context manager in Python 3.2, so on Python 2
# call communicate() directly. (On Python 3 you'd also need to pass
# bytes here, or use universal_newlines=True.)
p = subprocess.Popen([sys.executable, 'foo_wrapper.py'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
foo_output, _ = p.communicate(foo_input)
Of course in real life you'll want to use an appropriate path for foo_wrapper.py instead of assuming that it's in the current working directory, but this should be enough to illustrate the idea.
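If you'd rather answer the child's prompts one at a time instead of all up front, a rough sketch with pexpect (the third-party package mentioned above) might look like this, reusing the same foo_wrapper.py:
import sys
import pexpect  # third-party: pip install pexpect

child = pexpect.spawn('{} foo_wrapper.py'.format(sys.executable))
child.expect('Gimme a string')
child.sendline('String 1')
child.expect('Gimme another')
child.sendline('String 2')
child.expect(pexpect.EOF)
print(child.before)  # whatever foo_wrapper printed after the last prompt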
Meanwhile, if "I have no right to change it" just means "I don't (and shouldn't) have checkin rights to the foo project's github site or the relevant subtree on our company's P4 server" or whatever, there's a really easy answer: Fork it, and change the fork.
Even if it's got a weak copyleft license like LGPL: fork it, change the fork, publish your fork under the same license as the original, then use your fork.
If you're depending on the foo package being installed on every target system, and can't depend on your replacement foo being installed instead, that's a bit more of a problem. But if the function or method that actually calls raw_input is just a small fraction of the actual code in foo, you can fix that by monkeypatching foo at runtime.
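For instance, with the foo module from above, a sketch of replacing the whole function might look like this (get_strings_from_gui is a hypothetical stand-in for however your widget supplies the values):
import foo

def patched_bar():
    # Same contract as foo.bar, but without any raw_input() calls.
    x, y = get_strings_from_gui()  # hypothetical helper, not part of foo
    return 'Got two strings: {}, {}'.format(x, y)

foo.bar = patched_bar  # later calls to foo.bar() use our version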
And that leads to the last-ditch possibility: You can always monkeypatch raw_input itself.
Again, I'm going to assume that you already have the input you need to give it to make things simpler.
So, first you write a replacement function:
foo_input = ['String 1', 'String 2']

def fake_raw_input(prompt):
    # pop(0) hands the inputs back in order; raw_input() returns the
    # line without its trailing newline, so none are included here.
    return foo_input.pop(0)
Now, there are two ways you can patch this in. Usually, you want to do this:
import foo
foo.raw_input = fake_raw_input
This means any code in foo that calls raw_input will see the function you crammed into its module globals instead of the normal builtin. Unless it does something really funky (like looking up the builtin directly and copying it to a local variable or something), this is the answer.
If you need to handle one of those really funky edge cases, and you don't mind doing something questionable, you can do this:
import __builtin__
__builtin__.raw_input = fake_raw_input
You must do this before the first import foo anywhere in your program. Also, it's not clear whether this is intentionally guaranteed to work, accidentally guaranteed to work (and should be fixed in the future), or not guaranteed to work. But it does work (at least for CPython 2.5-2.7, which is what you're probably using).
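(For reference, the Python 3 spelling of that last trick, since raw_input became input and __builtin__ became builtins, would be:)
import builtins
builtins.input = fake_raw_input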
Related
I need to run a .tcl file via the command line, invoked from a Python script. However, a single line in that .tcl file needs to change based on input from the user. For example:
info = input("Prompt for the user: ")
Now I need the string contained in info to replace one of the lines in .tcl file.
Rewriting the script is one of the trickier options to pick. It makes things harder to audit and it is tremendously easy to make a mess of. It's not recommended at all unless you take special steps, such as factoring out the bit you set into its own file:
File that you edit, e.g., settings.tcl (simple enough that it is pretty trivial to write and you can rewrite the whole lot each time without making a mess of it)
set value "123"
Use of that file:
set value 0
if {[file readable settings.tcl]} {
    source settings.tcl
}
puts "value is $value"
More sophisticated versions of that are possible with safe interpreters and language profiling… but they're only really needed when the settings and the code are in different trust domains.
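On the Python side, generating that settings file can be a couple of lines. This sketch assumes the value contains no Tcl metacharacters (like $, [, \, or ") that would need escaping; real code should escape or brace-quote:
info = input("Prompt for the user: ")
with open("settings.tcl", "w") as f:
    f.write('set value "{}"\n'.format(info))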
That said, there are other approaches that are usually easier. If you are invoking the Tcl script by running a subprocess, the easiest ways to pass an arbitrary parameter are to use one of the following (a Python sketch follows the list):
A command line argument. These can be read on the Tcl side from the $argv global, which holds a list of all arguments after the script name. (The lindex and lassign commands tend to be useful here, e.g., set value [lindex $argv 0].)
An environment variable. These can be read on the Tcl side from the env global array, e.g., set value $env(MyVarName)
On standard input. A line can be read from that on the Tcl side using set line [gets stdin].
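To make those three options concrete from the Python side, here is a minimal sketch; tclsh being on your PATH and a script named script.tcl are assumptions:
import os
import subprocess

info = "123"  # value gathered from the user

# 1. Command-line argument (Tcl: set value [lindex $argv 0])
subprocess.run(["tclsh", "script.tcl", info])

# 2. Environment variable (Tcl: set value $env(MyVarName))
subprocess.run(["tclsh", "script.tcl"], env=dict(os.environ, MyVarName=info))

# 3. Standard input (Tcl: set line [gets stdin])
subprocess.run(["tclsh", "script.tcl"], input=info + "\n", text=True)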
In more complex cases, you'd pass values in their own files, or by writing them into something like an SQLite database, or… well, there's lots of options.
If on the other hand the Tcl interpreter is in the same process, pass the values by setting the variables in it before asking for the script to run. (Tcl has almost no true globals — environment variables are a special exception, and only because the OS forces it upon us — so everything is specific to the interpreter context.)
Specifically, if you've got a Tcl instance object from tkinter (Tk is a subclass of that) then you can do:
import tkinter
interp = tkinter.Tcl()
interp.call("set", "value", 123)
interp.eval("source program.tcl")
# Or interp.call("source", "program.tcl")
That has the advantage of doing all the quoting for you.
By default, IPython uses ipdb as the debugger for the %pdb and %debug magics.
However, I much prefer pdb++... Is there a way of changing the debugger called by these magics? (I am aware I can simply use pdb.xpm() on exception with pdb++, but I'd like it to work with the IPython magic commands so that I don't have to wrap the code each time...)
So at least for limited circumstances and not in a way I'd necessarily recommend, the answer here is yes. I can't promise the below will work outside the confines of what I did, but it might give you enough insight to play around with it yourself. Caution is warranted because it involves changing undocumented attributes of the ipython shell class at runtime. TLDR: I hunted down how ipython calls the debugger when the %pdb magic is on or when you call the %debug magic, and I updated it to use the debugger I wanted. Skip the next two paragraphs if you just want the approach that worked for me and don't care about the hunt.
Long version: when you run ipython it starts an instance of TerminalInteractiveShell, which has a debugger_cls attribute telling you the debugger that ipython will launch. Unfortunately, at the level of TerminalInteractiveShell, debugger_cls is actually a property of the class, and has no setter that lets you modify it. Rather, it either gets set to Pdb (actually a more featureful ipython Pdb than the traditional pdb) or TerminalPdb (even more features).
If you dig deeper, however, you find that debugger_cls gets passed up to InteractiveShell to initialize how tracebacks are handled. There it seems to disappear into the initialization of InteractiveShell's InteractiveTB property, but actually just ends up as the debugger_cls attribute of that (InteractiveTB) class (by setting the inherited attribute from TBTools). Finally, this debugger_cls attribute only gets used to set the pdb attribute (more or less by doing TBToolsInstance.pdb = TBToolsInstance.debugger_cls()) in one of several places. In any case, it turns out that these attributes can be changed! And if you change them correctly they will percolate to the shell itself! Importantly, this relies on the fact that ipython makes use of the Traitlets package to create a Singleton object for the shell, and this allows you to gain access to that object from within the terminal itself. More on that below.
Below I show the code you can run in the ipython shell to achieve your desired result. As an example, I'm replacing the default debugger (TerminalPdb) with a modified version I created that deals more nicely with certain list comprehensions (LcTerminalPdb). The process (which you can run in the ipython shell) is as follows.
# import the TerminalInteractiveShell class
from IPython.terminal.interactiveshell import TerminalInteractiveShell
# grab the specific instance of the already initialized ipython shell
# (instance() is a classmethod; calling it on the class returns the
# existing singleton rather than constructing a new shell)
shl = TerminalInteractiveShell.instance()
# grab the InteractiveTB attribute (which is a class)
tbClass = shl.InteractiveTB
# load the debugger class you want to use; I assume it's accessible on your path
from LcTerminalPdb import LcTerminalPdb
# change tbClass's debugger_cls to the debugger you want (for consistency
# with the next line)
tbClass.debugger_cls = LcTerminalPdb
# more importantly, set the pdb attribute to an instance of the class
tbClass.pdb = tbClass.debugger_cls()
# The above line is necessary if you already have the terminal running
# (and have entered pdb at least once); otherwise, ipython will run it on
# its own
That's it! Note that because you call the instance() method of TerminalInteractiveShell, you are grabbing the object for the currently running shell, which is why the modifications will affect the shell itself and so all following debugs. For a bonus, you can add these lines of code to your ipython_config.py file, so the debugger you want (LcTerminalPdb here) is always loaded with ipython:
c.InteractiveShellApp.exec_lines = [
'%pdb on',
'from LcTerminalPdb import LcTerminalPdb',
'from IPython.terminal.interactiveshell import TerminalInteractiveShell',
'shl = TerminalInteractiveShell.instance().InteractiveTB',
'shl.debugger_cls = LcTerminalPdb',
]
Note that above I don't need to write the extra shl.pdb = shl.debugger_cls() line, as ipython will take care of it the first time a debug point is entered. But feel free to add it, to be sure.
NOTES:
I have only tested this using LcTerminalPdb, and only briefly, but it seems to work appropriately
I suspect as long as other pdb debuggers have the same API as pdb (i.e. if they can be used by the PYTHONBREAKPOINT environment variable) then it should work
It's really unclear to me whether changing such deep attributes will have unexpected effects, so not sure how much I recommend this approach
I have inherited a python script which appears to have multiple distinct entry points. For example:
if __name__ == '__main__1':
    ... Do stuff for option 1
if __name__ == '__main__2':
    ... Do stuff for option 2
etc.
Google has turned up a few other examples of this syntax (e.g. here) but I'm still no wiser on how to use it.
So the question is: How can I call a specific entry point in a python script that has multiple numbered __main__ sections?
Update:
I found another example of it here, where the syntax appears to be related to a specific tool.
https://github.com/brython-dev/brython/issues/163
The standard docs mention only __main__ as a reserved module name. Looking at your sample, I notice that every __main__N block seems separate: it does its own imports and performs some enclosed functionality. My suspicion is that the developer wanted to quickly swap functionalities and didn't bother to use command-line arguments for that, opting instead to rename, say, '__main__2' to '__main__' as needed.
This is by no means proven, though - any chance of contacting the one who wrote this in the first place?
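Purely as a suggestion (not something the original developer necessarily intended): if you want to exercise one of those blocks without renaming anything, the standard runpy module lets you control the __name__ a script runs under:
import runpy

# Runs script.py with __name__ set to '__main__1', so only the
# "if __name__ == '__main__1':" block fires.
runpy.run_path('script.py', run_name='__main__1')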
I'm in the process of learning how a large (356-file), convoluted Python program is set up. Besides manually reading through and parsing the code, are there any good methods for following program flow?
There are two methods which I think would be useful:
Something similar to Bash's "set -x"
Something that displays which file outputs each line of output
Are there any methods to do the above, or any other ways that you have found useful?
I don't know if this is actually a good idea, but since I actually wrote a hook to display the file and line before each line of output to stdout, I might as well give it to you…
import inspect, sys

class WrapStdout(object):
    _stdout = sys.stdout
    def write(self, buf):
        # Look one frame up the stack to find whoever is printing.
        frame = sys._getframe(1)
        try:
            f = inspect.getsourcefile(frame)
        except TypeError:
            f = 'unknown'
        l = frame.f_lineno
        self._stdout.write('{}:{}:{}'.format(f, l, buf))
    def flush(self):
        self._stdout.flush()

sys.stdout = WrapStdout()
Just save that as a module, and after you import it, every chunk of stdout will be prefixed with file and line number.
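For example, assuming you saved it as wrapstdout.py (the name is arbitrary):
import wrapstdout  # installs the hook as an import side effect

print('hello')  # prints something like: /path/to/script.py:3:hello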
Of course this will get pretty ugly if:
Anyone tries to print partial lines (using sys.stdout.write directly, a trailing comma with print in 2.x, or end='' in 3.x).
You mix Unicode and non-Unicode in 2.x.
Any of the source files have long pathnames.
etc.
But all the tricky deep-Python-magic bits are there; you can build on top of it pretty easily.
It could be very tedious, but using a debugger to trace the flow of execution, instruction by instruction, could probably help you to some extent.
import pdb
pdb.set_trace()
You could look for a cross reference program. There is an old program called pyxr that does this. The aim of cross reference is to let you know how classes refer to each other. Some of the IDE's also do this sort of thing.
I'd recommend running the program inside an IDE like pydev or pycharm. Being able to stop the program and inspect its state can be very helpful.
I'm looking at several cases where it would be far, far, far easier to accept nearly-raw code. So,
What's the worst you can do with an expression if you can't use lambda, and how?
What's the worst you can do with executed code if you can't use import, and how?
("can't use X" means the string is scanned for X.)
Also, B is unnecessary if someone can think of an expr such that, given d = {key: value, ...}:
expr.format(key) == d[key]
without changing the way the format looks.
The worst you can do with an expression is on the order of
__import__('os').system('rm -rf /')
if the server process is running as root. Otherwise, you can fill up memory and crash the process with
2**2**1024
or bring the server to a grinding halt by executing a shell fork bomb:
__import__('os').system(':(){ :|:& };:')
or execute a temporary (but destructive enough) fork bomb in Python itself:
[__import__('os').fork() for i in xrange(2**64) for x in range(i)]
Scanning for __import__ won't help, since there's an infinite number of ways to get to it, including
eval(''.join(['__', 'im', 'po', 'rt', '__']))
getattr(__builtins__, '__imp' + 'ort__')
getattr(globals()['__built' 'ins__'], '__imp' + 'ort__')
Note that the eval and exec functions can also be used to create any of the above in an indirect way. If you want safe expression evaluation on a server, use ast.literal_eval.
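For example, ast.literal_eval accepts only Python literal syntax and rejects anything involving names or calls:
import ast

print(ast.literal_eval("{'a': [1, 2.5, 'x']}"))  # fine: plain literals only
ast.literal_eval("__import__('os')")  # raises ValueError: not a literal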
Arbitrary Python code?
Opening, reading, writing, creating files on the partition. Including filling up all the disk space.
Infinite loops that put load on the CPU.
Allocating all the memory.
Doing things that are in pure Python modules without importing them by copy/pasting their code into the expression (messing with built in Python internals and probably finding a way to access files, execute them or import modules).
...
No amount of whitelisting or blacklisting is going to keep people from getting to dangerous parts of Python. You mention running in a sandbox where "open" is not defined, for example. But I can do this to get it:
real_open = getattr(os, "open")
and if you say I won't have os, then I can do:
real_open = getattr(sys.modules['os'], "open")
or
real_open = random.__builtins__['open']
etc, etc, etc. Everything is connected, and the real power is in there somewhere. Bad guys will find it.