I have a Python script but I don't want to change it.
I want to use another script to modify the original one and run it with all the "print" and "time.sleep" statements commented out (not run).
I searched around and found a method using the AST, but I really have no idea how to use it.
Thank you very much!
You might be able to manipulate the AST to achieve that, but it would probably be easier to monkeypatch whatever objects it uses prior to running. In your specific example, to incapacitate print and time.sleep, you could do this:
import sys
import time

def insomniac(duration):
    pass  # don't sleep

_original_sleep = time.sleep
time.sleep = insomniac

def dont_write(stuff):
    pass  # don't write

_original_write = sys.stdout.write
sys.stdout.write = dont_write
To get the functionality back, you can set the relevant functions back to the stored originals. If you want to be truer to your original intention such that calls to these functions from the script in question are nullified but calls from other modules still work, you can inspect the stack to see what module the caller is in and selectively call the original or ignore the call.
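For illustration, here is a minimal sketch of that selective approach, assuming the script you want to silence was imported as a module named target (a hypothetical name); it inspects the caller's module before deciding whether to forward the call:

import inspect
import time

_original_sleep = time.sleep

def selective_sleep(duration):
    # Look one frame up to find the module that called us.
    caller_frame = inspect.stack()[1][0]
    module = inspect.getmodule(caller_frame)
    if module is not None and module.__name__ == 'target':  # hypothetical module name
        return  # swallow calls from the script in question
    _original_sleep(duration)  # everyone else sleeps normally

time.sleep = selective_sleep
# ... run the script ...
time.sleep = _original_sleep  # restore the original afterwards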
I have two functions:
def f1(p1=raw_input("enter data")):
...do something
def f2(p2=raw_input("enter data")):
...do something else
p1 and p2 are the same data, so I want to avoid asking for the input twice. Is there a way I can pass the argument supplied to f1 to f2 without asking for it again? Ideally I would be able to use something like you would in a class, like f1.p1. Is this possible?
EDIT: To add some clarity, I looked into using the ** operator to unpack arguments, and I'm aware that using the main body of the program to access the arguments is cleaner. However, the former does not match what I'm trying to do, which is to gain a better understanding of what is accessible in a function. I also looked at using the inspect module and locals(), but these are for inspecting arguments from within the function, not from outside.
Yes, depending on your needs. The best would be to ask in the main program, and simply pass that value to each function as you call it. Another possibility is to have one function call the other.
# Main program
user_input = raw_input("enter data")
f1(user_input)
f2(user_input)
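The other possibility mentioned above, having one function call the other, might look like this minimal sketch (the bodies are placeholders, and the None default avoids prompting at definition time):

def f2(p2):
    # ...do something else with p2...
    print p2

def f1(p1=None):
    if p1 is None:
        p1 = raw_input("enter data")  # ask only once
    # ...do something with p1...
    f2(p1)  # hand the same value to f2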
Ideally I would be able to use something like you would in a class.
Like f1.p1. Is this possible?
That's an advanced technique, and generally dangerous practice. Yes, you can go into the call stack, get the function object, and grab the local variable -- but the function has to be active for this to have any semantic use.
That's not the case you presented. In your code, you have f1 and f2 independently called. Once you return from f1, the value of p1 is popped off the stack and lost.
If you have f1 call f2, then it's possible for f2 to reach back to its parent and access information. Don't go there. :-)
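Purely to show why the function has to be active, here is a sketch (again: don't do this in real code) where f2, called from inside f1, reaches back into its caller's frame:

import inspect

def f2():
    # Peek at the caller's local variables; this only works
    # while f1's frame is still on the stack.
    p1 = inspect.currentframe().f_back.f_locals.get('p1')
    print p1

def f1():
    p1 = raw_input("enter data")
    f2()  # f1 is still active, so f2 can see p1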
Assume I have a list my_list, a variable var, and a block of code that modifies the list using the variable
my_list = ['foo']
var = 'bar'
my_list.append(var)
In the actual task I have a lot of variables like var and a lot of commands like append which modify the list. I want to relegate those commands to another module. In the case at hand I would like to have two modules: modify.py which contains the modifying commands
my_list.append(var)
and main.py which defines the list and the variable and somehow uses the code from the modify.py
my_list = ['foo']
var = 'bar'
import_and_run modify
The goal is to make the main file more readable. Modifying commands in my case can be nicely grouped and would really be good as separate modules. However, I am only aware of the practice when one imports a function from a module, not a block of code. I do not want to make the whole modify.py module a function because
1) I don't want to pass all the arguments needed. Rather, I want modify.py to have direct access to main.py's namespace.
2) The code in modify.py is not really a function; it runs only once. Also, I don't want the whole module to be the body of a function, that just doesn't feel right.
How do I achieve that? Or is the whole approach wrong?
If your goal is to make the code more readable, I'd suggest taking these steps.
Decompose your problem into a series of separate actions.
Give these actions names.
Define a function main in your module that calls functions named after the actions:
def main():
    do_step1()
    do_step2()
    # etc.
    return
Separate your existing code into the functions that you're calling in main().
As @flaschbier suggested, collect related, common parameters into dictionaries to make passing them around easier to manage (see the sketch after this list).
Consider repeating these steps on your new functions, decomposing them into sub-functions.
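A minimal sketch of what that decomposition might look like, with a hypothetical params dictionary shared between the steps:

def do_step1(params):
    # e.g. load the input named in params
    print "loading", params['source']

def do_step2(params):
    # e.g. write out the result
    print "writing to", params['target']

def main():
    # Collect common parameters once, pass them to each step.
    params = {'source': 'input.txt', 'target': 'output.txt'}  # illustrative values
    do_step1(params)
    do_step2(params)
    return

if __name__ == '__main__':
    main()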
Done well, you should be left with a file that's easier to look at, because the function definitions and their indented bodies break up the flow of text.
The code should be easier to reason about because you only need to understand one function at a time, instead of the entire script.
Generally you want to keep all the code related to a particular task in a single module, unless there's more than say 500 lines. But before moving code into separate modules see if you can reduce the total lines of code by factoring repeated code into functions, or making your code more succinct: for example see if for loops can be replaced by list comprehensions.
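For example, a loop that builds a list can often be collapsed into a comprehension:

# Loop version
squares = []
for n in range(10):
    squares.append(n * n)

# Equivalent, more succinct comprehension
squares = [n * n for n in range(10)]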
Consider using code linting tools to help you make the code well-formatted.
So in summary: don't go against the grain of Python by hiding code in another
module and going down the import_and_run route. Instead use good code organisation and Python's inherent good visual structure to make your code readable.
By the way, it seems like you haven't yet grasped the concept of Python modules.
Well, modules in Python are simply .py files. Every function, class, or even variable in a .py file can be imported into another program.
Consider a (perhaps crazy) example like this crazy.py:
class crazyCl:
    # crazy stuff
    pass

def crazyFn():
    # some other crazy stuff
    pass

crazyVar = 'Please do not try this at home'
Now, to import any of these, into another program, say goCrazy.py in the same folder, simply do this
import crazy # see ma, no .py
if __name__ == '__main__':
print crazy.crazyVar # Please do not try this at home
This is a simple introduction to Python modules. There are many other features, like packages, that are worth trying out. As an introduction, this should do. Hope this gives you some idea.
I've been developing a sudoku solver in Python and the following question came up while trying to improve performance:
Does Python remember the result of a calculation if the same calculation has to be performed multiple times throughout the code? For example, compare the following two bits of code:
if get_single(foo, bar) is not None:
    position = get_single(foo, bar)

single = get_single(foo, bar)
if single is not None:
    position = single
Are these two pieces of code equal in performance, or does the second piece perform faster because the calculation is only performed once?
No, Python does not remember function calls or other calculations automatically. In general, it would be very bad if it did—imagine if every call to, say, random.randrange(6) returned the same value as the first call.
However, it's not hard to explicitly make it remember calls for specific functions where it's useful. This is usually called "memoization".
See the lru_cache decorator in the docs for a nice example built into the stdlib.* All you have to do to make it remember every call to get_single(foo, bar) is change the definition of get_single like this:
import functools

@functools.lru_cache(maxsize=None)
def get_single(foo, bar):
    ...  # etc.
Or, if get_single is someone else's code that you're importing and can't touch, you can just wrap it:
get_single = functools.lru_cache(maxsize=None)(othermod.get_single)
… and then call your wrapper instead of the module's version.
* Note that lru_cache was added in Python 3.2. If you're using 2.7 (or, for some reason, 3.0-3.1), you can install the backport from PyPI, or find any of dozens of other memoizing caches on PyPI or ActiveState—or even, noticing that the functools docs link to the source, like many other stdlib modules meant to also serve as example code, copy the source to your own project. Although, IIRC, the 3.2 code needs a small change to work with 2.7 because it relies on nonlocal to hide its internals.
That being said, even if you know get_single is memoized, it's still not very good style to call it twice. If you only need to do this once, just write the three lines of code. If you need to do it repeatedly, write a wrapper function that wraps up those three lines of code; calling that function will be shorter than even the two-line version.
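A sketch of such a wrapper, assuming you want some default back when get_single finds nothing:

def get_position(foo, bar, default=None):
    # One call, one test; callers get the value or the default.
    single = get_single(foo, bar)
    return single if single is not None else default

Then position = get_position(foo, bar) replaces the three-line pattern everywhere.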
Suppose I have a method which can return a value, or can just be quickly called to see if I even did get the return value I expected.
from pprint import pprint
from my_module import get_data
def quicktest():
#pseudocode here to illustrate what I want
if isUsedForAssignment:
return get_data()
else:
pprint(get_data())
The idea here being I'm checking this returned data to ensure the structure is correct; however if I don't care about that, I'd rather assign the value. This way I just go into my Python interpreter and type:
import thismodule as thism
thism.quicktest()
…as opposed to some way of doing it where I'm continually importing pprint just to see my data structure correctly.
This is maybe a slightly pedantic example, but it prompted the question in me as to whether or not a method can tell if it's being used to assign a value or just to be called straight-up.
Technically you could inspect the parent frame's bytecode or source code. But this is not only incredibly fragile, hacky, and complicated, it's also a surefire sign that you're doing something wrong™. Just don't do that. Write the method to always simply return the value, and do the printing at the call site. Alternatively, if the printing is nontrivial, write a separate method to do the printing.
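A minimal sketch of that recommendation, keeping the printing in its own helper (names are illustrative):

from pprint import pprint
from my_module import get_data

def quicktest():
    # Always just return the value.
    return get_data()

def show():
    # Printing lives here, not in quicktest.
    pprint(quicktest())

Then thism.quicktest() hands you the value for assignment, and thism.show() pretty-prints it when that's all you want.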
I've been thinking about this far too long and haven't gotten any idea, maybe some of you can help.
I have a folder of python scripts, all of which have the same surrounding body (literally, I generated it from a shell script), but have one chunk that's different than all of them. In other words:
Top piece of code (always the same)
Middle piece of code (changes from file to file)
Bottom piece of code (always the same)
And I realized today that this is a bad idea: for example, if I want to change something in the top or bottom sections, I need to write a shell script to do it. (Not that that's hard, it just seems like very bad practice, code-wise.)
So what I want to do, is have one outer python script that is like this:
Top piece of code
Dynamic function that calls the middle piece of code (based on a parameter)
Bottom piece of code
And then every other python file in the folder can simply be the middle piece of code. However, a normal module wouldn't work here (unless I'm mistaken), because I would get the code I need to execute from the argument, which would be a string, and thus I wouldn't know which function to run until runtime.
So I thought up two more solutions:
I could write up a bunch of if statements, one to run each script based on a certain parameter. I rejected this, as it's even worse than the previous design.
I could use:
os.system("python scriptName.py")
which would run the script, but calling python to call python doesn't seem very elegant to me.
So does anyone have any other ideas? Thank you.
If you know the name of the function as a string and the name of module as a string, then you can do
mod = __import__(module_name)
fn = getattr(mod, fn_name)
fn()
Another possible solution is to have each of your repetitive files import the functionality from the main file
from topAndBottom import top, bottom
top()
# do middle stuff
bottom()
In addition to the several answers already posted, consider the Template Method design pattern: make an abstract class such as
class Base(object):
def top(self): ...
def bottom(self): ...
def middle(self): raise NotImplementedError
def doit(self):
self.top()
self.middle()
self.bottom()
Every pluggable module then makes a class which inherits from this Base and must override middle with the relevant code.
Perhaps not warranted for this simple case (you do still have to import the right module in order to instantiate its class and call doit on it), but still worth keeping in mind, together with its many Pythonic variations (which I have amply explained in many tech talks now available on YouTube), for cases where the number or complexity of "pluggable pieces" keeps growing. Template Method (despite its horrid name;-) is a solid, well-proven and highly scalable pattern; it is sometimes a tad too rigid, but that's exactly what I address in those tech talks, and that problem doesn't apply to this specific use case.
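For instance, a pluggable module (hypothetically named plugin_one.py, with Base living in base_module.py) and a small driver might look like:

# plugin_one.py
from base_module import Base

class Middle(Base):
    def middle(self):
        print "doing the variable part"

# driver.py
import sys

mod = __import__(sys.argv[1])  # e.g. "plugin_one"
mod.Middle().doit()            # runs top, this module's middle, then bottom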
However, normal module wouldn't work here (unless I'm mistaken), because I would get the code I need to execute from the arguement, which would be a string, and thus I wouldn't know which function to run until runtime.
It will work just fine - use the __import__ builtin or, if you have a very complex layout, the imp module to import your script. And then you can get the function via module.__dict__[funcname], for example.
Importing a module (as explained in other answers) is definitely the cleaner way to do this, but if for some reason that doesn't work, as long as you're not doing anything too weird you can use exec. It basically runs the content of another file as if it were included in the current file at the point where exec is called. It's the closest thing Python has to a source statement of the kind included in many shells. As a bare minimum, something like this should work:
exec(open(filename).read())
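One caveat worth sketching: by default exec runs in the caller's namespace, and you can pass globals() explicitly if you want the executed chunk to see and mutate this script's variables (the filename here is hypothetical):

# Execute the middle chunk as if its code were pasted right here.
exec(open('middle_chunk.py').read(), globals())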
How about this?
def do_thing_one():
    pass

def do_thing_two():
    pass

dispatch = {
    "one": do_thing_one,
    "two": do_thing_two,
}
# do something to get your string from the command line (optparse, argv, whatever)
# and put it in variable "mystring"
# do top thing
f = dispatch[mystring]
f()
# do bottom thing
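Filling in the command-line part with sys.argv, a minimal sketch might be:

import sys

# do top thing
mystring = sys.argv[1] if len(sys.argv) > 1 else "one"  # default is illustrative
try:
    dispatch[mystring]()
except KeyError:
    sys.exit("unknown task: %s" % mystring)
# do bottom thing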