While browsing some code, I came across this line:
if False: #shedskin
I understand that Shedskin is a kind of Python -> C++ compiler, but I can't understand that line.
Shouldn't if False: never execute? What's going on here?
For context:
This is the whole block:
if False: # shedskin
    AStar(SQ_MapHandler([1], 1, 1)).findPath(SQ_Location(1,1), SQ_Location(1,1))
More context is on Google Code (scroll down all the way).
It won't execute, because it isn't supposed to. The if False: is there to intentionally prevent the next line from executing, because that code's only purpose is seemingly to help Shed Skin infer type information about the argument to the AStar() function.
You can see another example of this in httplib:
# Useless stuff to help type info
if False:
    conn._set_tunnel("example.com")
It will never get executed. It's one way to temporarily disable part of the code.
Theoretically, it could get executed, at least in Python 2, where True and False were ordinary names that could be rebound:
True, False = False, True
if False: print 'foo'
(In Python 3 they are keywords, so both the reassignment and the print statement are syntax errors.) But typically this construct is just used to temporarily disable a code path.
You are correct in assuming that this will never evaluate to true. This is sometimes done when the programmer has a lot of debugging code but does not want to remove the debugging code in a release, so they just put if False: above it all.
not enough reputation to comment yet apparently, but tim stone's answer is correct. suppose we have a function like this:
def blah(a,b):
return a+b
now in order to perform type inference, there has to be at least one call to blah, or it becomes impossible to know the types of the arguments at compile-time.
for a stand-alone program, this is not a problem, since everything that has to be compiled for it to run is called indirectly from somewhere.
for an extension module, calls can come from the 'outside', so sometimes we have to add a 'fake' call to a function for type inference to become possible. hence the 'if False'.
in the shedskin example set there are a few programs that are compiled as extension modules, in order to be combined with for example pygame or multiprocessing.
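A minimal sketch of the idea, using the blah function from above (the fake call and argument values are illustrative, not taken from the Shed Skin examples):

```python
# Hypothetical module to be compiled as a Shed Skin extension.
def blah(a, b):
    return a + b

if False:  # shedskin: fake call, never executed, so the compiler
    blah(1, 2)  # can infer that a and b are ints
```

At runtime the fake call costs nothing; it exists purely so the compiler sees at least one call site with concrete argument types.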
I'm using the flake8-pytest-style plugin and it flags a certain test as violating PT012. This is about having too much logic in the raises() statement.
The code in question is this:
def test_bad_python_version(capsys) -> None:
    import platform
    from quendor.__main__ import main
    with pytest.raises(SystemExit) as pytest_wrapped_e, mock.patch.object(
        platform,
        "python_version",
    ) as v_info:
        v_info.return_value = "3.5"
        main()
        terminal_text = capsys.readouterr()
        expect(terminal_text.err).to(contain("Quendor requires Python 3.7"))
    expect(pytest_wrapped_e.type).to(equal(SystemExit))
    expect(pytest_wrapped_e.value.code).to(equal(1))
Basically this is testing the following code:
def main() -> int:
    if platform.python_version() < "3.7":
        sys.stderr.write("\nQuendor requires Python 3.7 or later.\n")
        sys.stderr.write(f"Your current version is {platform.python_version()}\n\n")
        sys.exit(1)
What I do is just pass in a version of Python that is less than the required and make sure the error appears as expected. The test itself works perfectly fine. (I realize it can be questionable as to whether this should be a unit test at all since it's really testing more of an aspect of Python than my own code.)
Clearly the lint check is suggesting that my test is a little messy and I can certainly understand that. But it's not clear from the above referenced page what I'm supposed to do about it.
I do realize I could just disable the quality check for this particular test but I'm trying to craft as good of Python code as I can, particularly around tests. And I'm at a loss as to how to refactor this code to meet the criteria.
I know I can create some other test helper function and then have that function called from the raises block. But that strikes me as being less clear overall since now you have to look in two places in order to see what the test is doing.
the lint error is a very good one! in fact, in your case, because the lint error is not followed, you have two lines of unreachable code (!) (the two capsys-related lines), because main() always raises
the lint is suggesting that you have only one line in a raises() block -- the naive refactor from your existing code is:
with mock.patch.object(
    platform,
    "python_version",
    return_value="3.5",
):
    with pytest.raises(SystemExit) as pytest_wrapped_e:
        main()

terminal_text = capsys.readouterr()
expect(terminal_text.err).to(contain("Quendor requires Python 3.7"))
expect(pytest_wrapped_e.type).to(equal(SystemExit))
expect(pytest_wrapped_e.value.code).to(equal(1))
an aside: you should never use platform.python_version() for version comparisons, as it produces incorrect results for python 3.10 -- more on that and a linter for it here
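the failure mode is easy to demonstrate: platform.python_version() returns a string, and string comparison is lexicographic, so "3.10" sorts before "3.7". comparing sys.version_info tuples avoids the problem:

```python
import sys

# Lexicographic string comparison gets version ordering wrong:
print("3.10" < "3.7")    # True -- compares "1" < "7" character by character

# Tuple comparison on sys.version_info is correct:
print((3, 10) < (3, 7))  # False
print(sys.version_info >= (3, 7))
```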
I am migrating Python 2 code to Python 3 and have learned there are some minor differences in call parameter handling that are stricter in Python 3. This same code used to determine whether a feature was enabled via the conditional boolean floatclock_enabled. After adding several log print lines to the code, I confirmed that the first line of code below was always resolving to True, even when I logged the value before and after the test. The printed results were both False, yet the flow ran as if it were True.
if ss.floatclock_enabled:
It looks like a bug to me and the way I solved it was the following line.
if str(ss.floatclock_enabled) == 'True':
Any suggestions where this anomaly might be caused? My other _enabled variables work normally, so far.
It's probably not a bug in Python. Check https://docs.python.org/3/library/logging.html#logrecord-attributes for the thread and threadName attributes; include them in your logs to be sure the messages come from the same thread. Also print floatclock_enabled via repr(): if it holds the string value "False", it would be printed as False but evaluate as boolean True.
Thank you very much for the quick responses. After further analysis, I found in my long list of configparser load items I had used config.get instead of config.getboolean. Since my initial load used the boolean get, I assumed it wouldn't matter later until I scrutinized my code. Again, I appreciate the quick response and suggestions. I will be scrutinizing more before the next time I post!
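For anyone hitting the same thing: config.get() returns the raw string, and any non-empty string is truthy, which is exactly the behaviour described above. A small sketch (the section and option names are illustrative):

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("[settings]\nfloatclock_enabled = False\n")

# get() returns the raw string; a non-empty string is always truthy:
raw = cfg.get("settings", "floatclock_enabled")
print(repr(raw), bool(raw))   # 'False' True

# getboolean() parses the string into a real bool:
flag = cfg.getboolean("settings", "floatclock_enabled")
print(repr(flag))             # False
```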
I'm returning 0 all over the place in a python script but would prefer something more semantic, something more readable. I don't like that magic number. Is there an idea in python similar to how in C you can return EXIT_SUCCESS instead of just 0?
I was unable to find it here:
https://docs.python.org/3.5/library/errno.html
return is not how you set your script's exit code in Python. If you want to exit with an exit code of 0, just let your script complete normally. The exit code will automatically be set to 0. If you want to exit with a different exit code, sys.exit is the tool to use.
If you're using return values of 0 or 1 within your code to indicate whether functions succeeded or failed, this is a bad idea. You should raise an appropriate exception if something goes wrong.
Since you discuss return here, it seems like you may be programming Python like C. Most Python functions should ideally be written to raise an exception if they fail, and the calling code can determine how to handle the exceptional conditions. For validation functions it's probably best to return True or False - not as literals, usually, but as the result of some expression like s.isdigit().
When talking about the return value of a process to its environment, you cannot use return, because a module isn't a function, so a return statement at the top level would be flagged as a syntax error. Instead you should use sys.exit.
Python might seem a little minimalist in this respect, but a call to sys.exit with no arguments defaults to success (i.e. return code zero). So the easiest way to simplify your program might be to stop coding it with an argument where you don't want to indicate failure!
As the documentation reminds us, integer arguments are passed back, and string arguments result in a return code of 1 and the string is printed to stderr.
The language doesn't, as far as I am aware, contain any such constants (while it does have features specific to some environments; if it provided exit codes, their values might need to be implementation- or platform-specific, and the language developers prefer to avoid this where possible).
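(On Unix, the os module does expose environment-specific constants such as os.EX_OK, but they are not portable.) A quick sketch of the behaviour described above, checked through subprocesses:

```python
import subprocess
import sys

# Falling off the end of a script exits with 0:
ok = subprocess.run([sys.executable, "-c", "pass"])

# An integer argument to sys.exit() becomes the exit code:
bad = subprocess.run([sys.executable, "-c", "import sys; sys.exit(2)"])

# A string argument is printed to stderr and the exit code is 1:
msg = subprocess.run([sys.executable, "-c", "import sys; sys.exit('boom')"])

print(ok.returncode, bad.returncode, msg.returncode)  # 0 2 1
```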
I am trying to debug current code and I am new in python,
Need to understand below code
Why they use this code and Whats use of this code in python
def _init_():
    if(true):
Before figuring out what if(True) does, think of if(False) first. if(False) is actually a commonly used idiom which has the same effect as commenting out multiple lines - all the code that's in its indentation block will not be executed since the condition is always evaluated as false. Later, if you want those lines of code below if(False) to be executed again, you can simply change False to True - that's how if(True) comes in. Itself doesn't do anything, it is its opposite if(False) that is useful.
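A tiny demonstration of the idiom (the list is just there to make the effect observable):

```python
ran = []

if False:
    # everything in this block is effectively commented out
    ran.append("disabled block")

if True:
    # flip False back to True and the block runs again
    ran.append("enabled block")

print(ran)  # ['enabled block']
```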
Is there a neat way to inject failures in a Python script? I'd like to avoid sprinkling the source code with stuff like:
failure_ABC = True
failure_XYZ = True

def inject_failure_ABC():
    raise Exception('ha! a fake error')

def inject_failure_XYZ():
    # delete some critical file
    pass

# some real code
if failure_ABC:
    inject_failure_ABC()
# some more real code
if failure_XYZ:
    inject_failure_XYZ()
# even more real code
Edit:
I have the following idea: insert "failure points" as specially-crafted comments. Then write a simple parser that will be called before the Python interpreter and will produce the actual instrumented Python script with the real failure code. E.g.:
#!/usr/bin/parser_script_producing_actual_code_and_calls python
# some real code
# FAIL_123
if foo():
    # FAIL_ABC
    execute_some_real_code()
else:
    # FAIL_XYZ
    execute_some_other_real_code()
Anything starting with FAIL_ is considered as a failure point by the script, and depending on a configuration file the failure is enabled/disabled. What do you think?
You could use mocking libraries, for example unittest.mock, there also exist many third party ones as well. You can then mock some object used by your code such that it throws your exception or behaves in whatever way you want it to.
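A sketch of that approach with unittest.mock (the production code here is made up to have something to patch):

```python
from unittest import mock

# Hypothetical production code; names are illustrative.
def save(path, data):
    with open(path, "w") as f:
        f.write(data)

def job():
    try:
        save("/tmp/out.txt", "payload")
        return "ok"
    except OSError:
        return "failed"

# Inject a failure by patching the built-in open() to raise:
with mock.patch("builtins.open", side_effect=OSError("disk full")):
    print(job())  # failed
```

The production code stays untouched; the failure is injected from the outside only for the duration of the with block.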
When testing error handling, the best approach is to isolate the code that can throw errors in a new method which you can override in a test:
class ToTest:
    def foo(...):
        try:
            self.bar() # We want to test the error handling in foo()
        except:
            ....

    def bar(self):
        ... production code ...
In your test case, you can extend ToTest and override bar() with code that throws the exceptions that you want to test.
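A runnable sketch of that pattern (the method bodies are made up to stand in for the production code):

```python
class ToTest:
    def foo(self):
        try:
            return self.bar()  # the call whose error handling we test
        except ValueError:
            return "handled"

    def bar(self):
        return "real result"

class FailingToTest(ToTest):
    def bar(self):
        # Override the seam to throw the exception under test.
        raise ValueError("injected")

print(ToTest().foo())         # real result
print(FailingToTest().foo())  # handled
```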
EDIT You should really consider splitting large methods into smaller ones. It will make the code easier to test, to understand and to maintain. Have a look at Test Driven Development for some ideas how to change your development process.
Regarding your idea to use "Failure Comments". This looks like a good solution. There is one small problem: You will have to write your own Python parser because Python doesn't keep comments when it produces bytecode.
So you can either spend a couple of weeks to write this or a couple of weeks to make your code easier to test.
There is one difference, though: If you don't go all the way, the parser will be useless. Also, the time spent won't have improved one bit of your code. Most of the effort will go into the parser and tools. So after all that time, you will still have to improve the code, add failure comments and write the tests.
With refactoring the code, you can stop whenever you want but the time spent so far will be meaningful and not wasted. Your code will start to get better with the first change you make and it will keep improving.
Writing a complex tool takes time, and it will have its own bugs which you need to fix or work around. None of this will improve your situation in the short term, and you have no guarantee that it will improve it in the long term.
If you only want to stop your code at some point and drop out with a traceback, you can use:
assert 1 == 0
But this only works if you do not run Python with -O.
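The -O caveat is easy to check by running the same one-liner in two subprocesses:

```python
import subprocess
import sys

# The assert fires under normal execution...
normal = subprocess.run([sys.executable, "-c", "assert 1 == 0"])

# ...but -O strips assert statements, so the injected failure vanishes:
optimized = subprocess.run([sys.executable, "-O", "-c", "assert 1 == 0"])

print(normal.returncode, optimized.returncode)  # 1 0
```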
Edit
Actually, my first answer was too quick; I didn't really understand what you want to do, sorry.
Maybe your code already becomes more readable if you do parameterization through parameters, not through variable/function suffixes. Something like:
failure = {"ABC": False, "XYZ": False}
# Do something, maybe set failure

def inject_failure(failure):
    if not any(failure.values()):
        return
    if failure["ABC"]:
        raise Exception('ha! a fake error')
    elif failure["XYZ"]:
        # delete some critical file
        pass

inject_failure(failure)