How to detect assignment to a mis-cased instance variable in Python

I just ran into a somewhat hard-to-detect issue in Python. We're using the xml module (xml.etree.ElementTree.Element), where elements have a "text" instance variable. But someone used "Text" (note the uppercase "T"). The generated XML, of course, did not contain the expected string.
This didn't throw any errors or anything, other than the program just not working properly, and I think I understand why.
My question is, is there a good way to detect this problem? I feel like if there were I'd probably find a few similar mistakes if I were able to scan all of our files.
I've tried pylint and pyflakes but neither tool seems to detect this.
I remember in some other language there was an option you could set to require explicit declaration before usage (it's funny, I forget which language that was now, Ada maybe?), and I know declaration isn't very Pythonic, but it would seem adding "names" to an instance of a class shouldn't be very standard usage either.
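One way to make this kind of typo loud for classes you control is __slots__, which forbids assigning attribute names that aren't declared; a minimal sketch (the class names here are invented, and this doesn't help for Element itself, since that class isn't yours to change):
class SloppyNode:
    def __init__(self):
        self.text = None

n = SloppyNode()
n.Text = "hello"  # typo: silently creates a brand-new attribute, just like the Element case above

class StrictNode:
    __slots__ = ("text", "tail")  # only these attribute names may ever be assigned

    def __init__(self):
        self.text = None
        self.tail = None

s = StrictNode()
s.Text = "hello"  # AttributeError: 'StrictNode' object has no attribute 'Text'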

Related

How to get the expected parameters of another function in python?

I would like to know if it is possible, given a function (as an object or as a string), to get its parameters, the default value of each parameter (if one is defined) and, if possible, the type of each parameter (probably using the type of the default value, if defined), in Python 3.5.
Why would you want that?!
Long story short, I am generating an XML file containing details of the different functions in my project. Since the generator has to be future-proof in case someone modifies, adds, or deletes a function, the next generated file must be updated accordingly. I successfully retrieved the functions I wanted, either as objects or as strings of the code calling them.
I have two solutions (well, more the beginnings of solutions) to solve this problem, using inspect and jedi.
Inspect
Using inspect.signature(function), I can retrieve the names and default values of all the parameters. The main issue I see here would be analyzing a function like this:
def fct(a=None):
    # Whatever the function does...
    pass
Analyzing the type of the default value will lead to misunderstandings. Is there a way to fix that?
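For reference, a minimal sketch of what inspect.signature exposes (the function here is just an example); note that a None default indeed tells you nothing useful about the intended type:
import inspect

def fct(a=None, b=3, *, flag=False):
    pass

sig = inspect.signature(fct)
for name, param in sig.parameters.items():
    has_default = param.default is not inspect.Parameter.empty
    # Guessing the type from the default only works when a meaningful default exists.
    guessed = type(param.default).__name__ if has_default and param.default is not None else "unknown"
    print(name, param.default if has_default else "<required>", guessed)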
Jedi
Jedi is an extremely powerful tool, maybe even too powerful! Getting the function as a one-line code string and analyzing it through Jedi gives an extraordinary amount of information that, to be completely honest, I am lost in. Plus, I might get bad autocompletion (for example: instead of getting the parameters for print, I might get autocompleted to println).
If someone has used one of these tools for this purpose, or even better, if you know a better, more "pythonic" way of doing this, I would be really grateful!

How do you read and quickly edit python code

I typically work with C++ but of late have to program a lot in Python. Coming from a C++ background, I am finding dynamic typing very inconvenient when I have to modify an existing codebase. I know I am missing something very basic, and hence I'm turning to the Stack Overflow community to understand best practices.
Imagine there is a class with a number of methods and I need to edit an existing method. Now, in C++, I could explicitly see the datatype of every parameter, check out the .h files of the corresponding class if need be, and quickly understand what's happening. In Python, on the other hand, all I see are some variable names. I am not sure if a parameter is a list or a dictionary or maybe some custom data structure with its own getters and setters. To figure this out, I need to look at some existing usages of the function or run the code with breakpoints and see what kind of data structure I am getting. I find either method very time consuming. Is there a faster way to resolve this problem? How should I quickly determine the datatype of a particular variable?
The general impression is that code is easier to read/write in Python, but I am not finding it very quick to read Python code because of the lack of types. What am I missing here?
I feel your pain, too! I frequently switch between Python and C++, so paradigm shifting does give me paranoia.
However, I've been readjusting my code with:
Type Annotations
It doesn't improve runtime performance, but it provides a sense of comfort when reading through tens of thousands of lines of code. Also, you can run your Python programs through this tool to further verify your type annotations:
mypy
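For example, a small sketch of annotated code (the names are invented); running mypy over the file flags type mismatches without executing the program:
from typing import Dict, List

def average_scores(scores: Dict[str, List[float]]) -> Dict[str, float]:
    """Return the mean score per name."""
    return {name: sum(values) / len(values) for name, values in scores.items()}

averages = average_scores({"alice": [1.0, 2.0], "bob": [3.0]})

# mypy would reject this call, since a list is not a Dict[str, List[float]]:
# average_scores([1.0, 2.0, 3.0])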
These are the things I follow:
Clearly document in the docstring what the inputs are and what is returned.
Use a debug (or flag) variable, which is set to False by default, and keep an if block as follows.
if debug:
    print(type(variable))
That way, you can be sure to see what the type of the variable is.
In Python, you can see the data type of any variable by using
type(variable_name)
It will show you the data type of that variable, such as int, bool, str, etc.

Simple tokenizer for C++ in Python

Struggling to find a Python library or script to tokenize C++ (find specific tokens like function definition names, variable names, keywords, etc.).
I have managed to find keywords, whitespace, etc. using something like this, but I found it quite a challenge for function/class definition names, etc. I was hoping to use a pre-existing script; I explored Pygments with no success. Its lexer seems amazing for what I want, but I have no idea how to utilize it in Python and also get positions for each found token.
For example I am looking at doing something like that:
int fac(int n)
{
    return (n > 1) ? n * fac(n - 1) : 1;
}
from the source code above I would like to get:
function_name: 'fac' at position (x, y)
variable_name: 'n' at position (x, y+8)
EDITED:
Any suggestions will be appreciated, since I am in the dark here regarding tokenization and parsing of C++.
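As an aside on the positions part: Pygments can hand back every token together with its character offset via get_tokens_unprocessed, from which line/column can be computed; a minimal sketch, assuming Pygments is installed (exactly how it classifies fac and n depends on the lexer's heuristics):
from pygments.lexers import CppLexer

code = """int fac(int n)
{
    return (n > 1) ? n * fac(n - 1) : 1;
}
"""

lexer = CppLexer()
for offset, token_type, text in lexer.get_tokens_unprocessed(code):
    if text.isspace():
        continue
    line = code.count("\n", 0, offset) + 1
    column = offset - (code.rfind("\n", 0, offset) + 1)
    print(token_type, repr(text), "at position", (line, column))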
Eli Bendersky is a smart guy, and sometimes active here on SO. He's got a blog post on this issue which I'll refer you directly to: Parsing C++ in Python with Clang.
Because things disappear, here's the takeaway:
Eli Bendersky wrote a C language (not C++) parser in Python, called pycparser. People keep asking him if he's going to add support for C++. He is not. He recommends instead that people use the Python bindings for libclang to get access to "a C API that the Clang team vows to keep relatively stable, allowing the user to examine parsed code at the level of an abstract syntax tree (AST)".
You can find the bindings separately on PyPI here. Note though that you'll have to have clang installed, so you may just want to point your PYTHONPATH directly at the install location.
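For illustration, a rough sketch of walking the AST with those bindings (the file name and compiler flag are made up; clang.cindex.Config.set_library_file can be used if libclang isn't found automatically):
import clang.cindex

index = clang.cindex.Index.create()
tu = index.parse("example.cpp", args=["-std=c++11"])

def walk(cursor):
    for child in cursor.get_children():
        if child.kind == clang.cindex.CursorKind.FUNCTION_DECL:
            print("function_name: %r at position (%d, %d)"
                  % (child.spelling, child.location.line, child.location.column))
        elif child.kind == clang.cindex.CursorKind.PARM_DECL:
            print("variable_name: %r at position (%d, %d)"
                  % (child.spelling, child.location.line, child.location.column))
        walk(child)

walk(tu.cursor)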
You're struggling to find a python library to do what you want because what you want is impossible to do, fundamentally.
I have managed to find keywords, whitespaces etc. using something like this but I found it quite a challenge for function/class definition names etc
You mean like this:
foo = 3
def foo():pass
What is foo? All a tokenizer should/can tell you is that foo is an identifier. Its context tells you whether it's a variable or a function declaration. You need a parser to handle context-free grammars; mathematically, the space of context-free grammars is too large for a standard lexer (which only handles regular languages) to tackle.
Try a parser: here's one in python
Normally I'd try and provide you links here to distinguish between the topics, but this is too broad to provide a single good link to. If you're interested, start with any standard compiler text. Elsewhere on SE, we see this question pop up as a theoretical question and, in some form, as a famous question about html.
Once you realize that tokenizers are (usually) built (largely) on regular expressions, it becomes more obvious why your task is not going to end happily.
Now that you know the terminology, I think you'll find this SO article useful, which recommends gcc-ml. I don't know how up-to-date it is, but it's the type of program you're looking for.

Python modding - prevent dangerous scripts from being imported?

I want to allow users to make their own Python "mods" for my game, by placing their scripts in a special folder which the game "scans" for Python modules and imports.
What would be the simplest way to prevent "dangerous" scripts from being imported? I don't want people complaining to me that they used someone's mod and it erased their hard drive.
Things I would like to limit are accessing/modifying/creating any files outside of their folder and connecting to the internet/downloading/sending data. If you can think of anything else, let me know.
So how can this be done?
RestrictedPython seems to be able to restrict functionality for code in a clean way and is compatible with Python up to 2.7.
http://pypi.python.org/pypi/RestrictedPython/
e.g.
By supplying a different __builtins__ dictionary, we can rule out unsafe operations, such as opening files [...]
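A minimal sketch of that approach (this assumes the RestrictedPython package; the mod source is a placeholder, and this alone is not a complete sandbox):
from RestrictedPython import compile_restricted, safe_builtins

mod_source = """
result = 2 + 2
"""

byte_code = compile_restricted(mod_source, filename="<mod>", mode="exec")
restricted_globals = {"__builtins__": safe_builtins}
exec(byte_code, restricted_globals)
print(restricted_globals["result"])  # 4 -- but open(), __import__, etc. are not available to the mod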
The obvious way to do it is to load the module as a string and exec it. This has just as many security risks, but might be easier to block by using custom globals and locals. Have a look at this question - it gives some really good guidance on this. As pointed out in Delnan's comments, this isn't completely secure though.
You could also try this. I haven't used it, but it seems to provide a safe environment for unsafe scripts.
There are some serious shortcomings for sandboxed python execution. aquavitae's answer links to some good discussion on the matter, especially this blog post. Read that first.
There is a kernel of secure execution within CPython. The fundamental idea is to replace the __builtins__ global (note: not the __builtin__ module), which informs Python to turn on some security features: making a handful of attributes on certain objects inaccessible, and removing most of the implementation objects from the interpreter when evaluating that bit of code.
You'll then need to write an actual implementation, in such a way that the protected modules are not leaked into the sandbox. A fairly well-tested "file" replacement is provided in the linked blog. Taking a look at that might give you an idea of how involved and complex this problem is.
Now that you have understood that this is a challenge in Python, you should take a look at languages with sandboxed execution as a core feature, such as Lua, which is very popular in games.
Giving them Python execution and trying to limit what they do is asking for trouble. See this SO question for discussion and a pointer to a good article. (You would presumably disable "eval", but it wouldn't make much difference in practice.)
My suggestion: Turn the question around. Your goal is to provide them with scripting facilities so they can enhance the game. Find or define an interpreter for a suitable scripting language that has the features you need, and use it to execute their scripts. For example, you could support data persistence in a simple keystore model, without giving them file creation access. Or give them a command to create files but ensure it only accepts a path-less filename. The essential thing is to ensure that there is NO way for them to execute python commands directly.
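For the file-creation idea above, a sketch of the kind of narrow helper you could expose to mods instead of raw open() (the directory and function names are invented):
import os

MOD_DATA_DIR = os.path.abspath("mod_data")  # the only place mod scripts may write

def save_mod_file(filename, data):
    """Write data inside MOD_DATA_DIR, rejecting anything that looks like a path."""
    if filename != os.path.basename(filename) or filename in ("", ".", ".."):
        raise ValueError("plain, path-less file names only: %r" % filename)
    if not os.path.isdir(MOD_DATA_DIR):
        os.makedirs(MOD_DATA_DIR)
    with open(os.path.join(MOD_DATA_DIR, filename), "w") as f:
        f.write(data)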

Partial evaluation for parsing

I'm working on a macro system for Python (as discussed here) and one of the things I've been considering is units of measure. Although units of measure could be implemented without macros or via static macros (e.g. defining all your units ahead of time), I'm toying around with the idea of allowing syntax to be extended dynamically at runtime.
To do this, I'm considering using a sort of partial evaluation on the code at compile-time. If parsing fails for a given expression, due to a macro for its syntax not being available, the compiler halts evaluation of the function/block and generates the code it already has with a stub where the unknown expression is. When this stub is hit at runtime, the function is recompiled against the current macro set. If this compilation fails, a parse error would be thrown because execution can't continue. If the compilation succeeds, the new function replaces the old one and execution continues.
The biggest issue I see is that you can't find parse errors until the affected code is run. However, this wouldn't affect many cases, e.g. group operators like [], {}, (), and `` still need to be paired (requirement of my tokenizer/list parser), and top-level syntax like classes and functions wouldn't be affected since their "runtime" is really load time, where the syntax is evaluated and their objects are generated.
Aside from the implementation difficulty and the problem I described above, what problems are there with this idea?
Here are a few possible problems:
You may find it difficult to provide the user with helpful error messages in case of a problem. This seems likely, as any compilation-time syntax error could be just a syntax extension.
Performance hit.
I was trying to find some discussion of the pluses, minuses, and/or implementation of dynamic parsing in Perl 6, but I couldn't find anything appropriate. However, you may find this quote from Niklaus Wirth (designer of Pascal and other languages) interesting:
The phantasies of computer scientists in the 1960s knew no bounds. Spurned by the success of automatic syntax analysis and parser generation, some proposed the idea of the flexible, or at least extensible language. The notion was that a program would be preceded by syntactic rules which would then guide the general parser while parsing the subsequent program. A step further: The syntax rules would not only precede the program, but they could be interspersed anywhere throughout the text. For example, if someone wished to use a particularly fancy private form of for statement, he could do so elegantly, even specifying different variants for the same concept in different sections of the same program. The concept that languages serve to communicate between humans had been completely blended out, as apparently everyone could now define his own language on the fly. The high hopes, however, were soon damped by the difficulties encountered when trying to specify what these private constructions should mean. As a consequence, the intriguing idea of extensible languages faded away rather quickly.
Edit: Here's Perl 6's Synopsis 6: Subroutines, unfortunately in markup form because I couldn't find an updated, formatted version; search within for "macro". Unfortunately, it's not too interesting, but you may find some things relevant, like Perl 6's one-pass parsing rule, or its syntax for abstract syntax trees. The approach Perl 6 takes is that a macro is a function that executes immediately after its arguments are parsed and returns either an AST or a string; Perl 6 continues parsing as if the source actually contained the return value. There is mention of generation of error messages, but they make it seem like if macros return ASTs, you can do alright.
Pushing this one step further, you could do "lazy" parsing and always only parse enough to evaluate the next statement. Like some kind of just-in-time parser. Then syntax errors could become normal runtime errors that just raise a normal Exception that could be handled by surrounding code:
def fun():
    not implemented yet

try:
    fun()
except:
    pass
That would be an interesting effect, but whether it's useful or desirable is a different question. Generally it's good to know about errors even if you don't call the affected code at the moment.
Macros would not be evaluated until control reaches them, and naturally the parser would already know all previous definitions. Also, the macro definition could maybe even use variables and data that the program has calculated so far (like adding some syntax for all elements in a previously calculated list). But it is probably a bad idea to start writing self-modifying programs for things that could usually just as well be done directly in the language. This could get confusing...
In any case you should make sure to parse code only once, and if it is executed a second time use the already parsed expression, so that it doesn't lead to performance problems.
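In plain Python terms that amounts to caching the compiled form keyed by its source, roughly like this (the names are made up):
_code_cache = {}

def run_snippet(source, namespace):
    """Compile each snippet only the first time; later runs reuse the cached code object."""
    code = _code_cache.get(source)
    if code is None:
        code = compile(source, "<snippet>", "exec")
        _code_cache[source] = code
    exec(code, namespace)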
Here are some ideas from my master's thesis, which may or may not be helpful.
The thesis was about robust parsing of natural language.
The main idea: given a context-free grammar for a language, try to parse a given text (or, in your case, a Python program). If parsing fails, you will have a partially generated parse tree. Use the tree structure to suggest new grammar rules that will better cover the parsed text.
I could send you my thesis, but unless you read Hebrew this will probably not be useful.
In a nutshell:
I used a bottom-up chart parser. This type of parser generates edges for productions from the grammar. Each edge is marked with the part of the tree that was consumed. Each edge gets a score according to how close it was to full coverage, for example:
S -> NP . VP
has a score of one half (we succeeded in covering the NP, but not the VP).
The highest-scored edges suggest a new rule (such as X->NP).
In general, a chart parser is less efficient than a common LALR or LL parser (the types usually used for programming languages) - O(n^3) instead of O(n) complexity, but then again you are trying something more complicated than just parsing an existing language.
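A toy sketch of that scoring idea, with rules represented as simple tuples (the representation is invented for illustration):
# An edge like "S -> NP . VP" is a rule plus a dot position marking how much has been covered.
rule = ("S", ("NP", "VP"))  # left-hand side, right-hand side
dot = 1                     # NP has been covered, VP has not

def coverage_score(rule, dot):
    """Fraction of the right-hand side already covered by this edge."""
    _lhs, rhs = rule
    return dot / float(len(rhs))

print(coverage_score(rule, dot))  # 0.5 -- the "score of one half" from the example above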
If you can do something with the idea, I can send you further details.
I believe looking at natural language parsers may give you some other ideas.
Another thing I've considered is making this the default behavior across the board, but allowing languages (meaning a set of macros to parse a given language) to throw a parse error at compile-time. Python 2.5 in my system, for example, would do this.
Instead of the stub idea, simply recompile functions that couldn't be handled completely at compile-time when they're executed. This will also make self-modifying code easier, as you can modify the code and recompile it at runtime.
You'll probably need to delimit the bits of input text with unknown syntax, so that the rest of the syntax tree can be resolved, apart from some character-sequence nodes which will be expanded later. Depending on your top-level syntax, that may be fine.
You may find that the parsing algorithm and the lexer and the interface between them all need updating, which might rule out most compiler creation tools.
(The more usual approach is to use string constants for this purpose, which can be passed to a little interpreter at run time.)
I don't think your approach would work very well. Let's take a simple example written in pseudo-code:
define some syntax M1 with definition D1
if _whatever_:
    define M1 to do D2
else:
    define M1 to do D3
code that uses M1
So there is one example where, if you allow syntax redefinition at runtime, you have a problem (since by your approach the code that uses M1 would be compiled against definition D1). Note that verifying whether syntax redefinition occurs is undecidable. An over-approximation could be computed by some kind of type system or some other kind of static analysis, but Python is not well known for this :D.
Another thing that bothers me is that your solution does not 'feel' right. I find it evil to store source code you can't parse just because you may be able to parse it at runtime.
Another example that jumps to mind is this:
...function definition fun1 that calls fun2...
define M1 (at runtime)
use M1
...function definition for fun2
Technically, when you reach "use M1", you cannot parse it, so you have to keep the rest of the program (including the function definition of fun2) as unparsed source code. When you run the program, fun1 will hit a call to fun2 that cannot be resolved, even though fun2 is defined later in the source.
