I would like to know whether it is possible, given a function (as an object, or as a string), to get its parameters, the default value of each parameter (if defined) and, if possible, the type of each parameter (probably using the type of the default value, if one is defined) in Python 3.5.
Why would you want that?!
Long story short, I am generating an XML file containing details of the different functions in my project. Since the generator has to be future-proof in case someone modifies, adds, or deletes a function, the next generated file must reflect those changes. I successfully retrieved the functions I wanted, either as objects or as strings of the code calling them.
I have two solutions (well, more the beginnings of solutions) to solve this problem, using inspect and jedi.
Inspect
Using inspect.signature(function), I can retrieve the names and default values of all the parameters. The main issue I see here would be analyzing this function:
def fct(a=None):
# Whatever the function does...
Analyzing the type of the default value will lead to misunderstandings. Is there a way to fix that?
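For reference, here is a minimal sketch of what inspect gives me so far; falling back to the annotation when the default is None is just one idea I have not settled on:

import inspect

def fct(a=None, b=3, c: str = "x"):
    pass

for name, param in inspect.signature(fct).parameters.items():
    default = param.default
    # Guess the type from the default value; fall back to the
    # annotation (if there is one) when the default is None.
    if default is not None and default is not inspect.Parameter.empty:
        guessed = type(default)
    elif param.annotation is not inspect.Parameter.empty:
        guessed = param.annotation
    else:
        guessed = None
    print(name, default, guessed)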
Jedi
Jedi is an extremely powerful tool, maybe even too powerful! Getting the function as a one-line code string and analyzing it through Jedi gives an extraordinary amount of information that, to be completely honest, I am lost in. Plus, I might get bad autocompletion (for example: instead of getting the parameters for print, I might get autocompleted to println).
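Here is roughly how far I got; this assumes the pre-0.17 jedi API that was current around Python 3.5 (newer releases replaced Script(source, line, column) and call_signatures() with Script(code) and get_signatures(line, column)):

import jedi

# Put the cursor just after the opening parenthesis of the call.
source = "print("
script = jedi.Script(source, 1, len(source))

# call_signatures() describes the call under the cursor, which should
# avoid the autocompletion problem (no jumping to println).
for signature in script.call_signatures():
    print(signature.name, [p.name for p in signature.params])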
If someone has used one of these tools for this purpose, or even better, if you know a better, more "Pythonic" way of doing this, I would be really grateful!
Related
I want to change the signature of a function in a Python module which has been used many times in the current project.
If you use the refactoring feature of IDEA (with the Python plug-in), renaming a function takes effect across the whole project; but changing the signature doesn't, and I don't see a scope option for this refactoring.
Searching Google for relevant information did not turn up anything useful.
Does IDEA really have no way to intelligently change the signature of a function across the whole project?
If the answer is no, are there any other solutions?
Note: of course, regular expressions can also be used. That option is already known.
You can't set the scope the way you can when renaming; it's not quite clear what the exact scope is, but it's definitely not limited to the current file.
If the change only takes effect in the current file, then the reference resolution was probably not established correctly. You should run Find Usages (Ctrl+left mouse button is the default shortcut) and check the results; if you find any errors, you can use File | Invalidate Caches to clear the cache and restart the IDE.
Thanks LazyOne.
I typically work with C++ but of late have to program a lot in Python. Coming from a C++ background, I am finding dynamic typing very inconvenient when I have to modify an existing codebase. I know I am missing something very basic, and hence I am turning to the Stack Overflow community to understand best practices.
Imagine there is a class with a number of methods and I need to edit an existing method. Now, in C++, I could explicitly see the datatype of every parameter, check out the .h files of the corresponding class if need be, and quickly understand what's happening. In Python, on the other hand, all I see are some variable names. I am not sure if it is a list or a dictionary or maybe some custom data structure with its getters and setters. To figure this out, I need to look at some existing usages of this function or run the code with breakpoints and see what kind of data structure I am getting. I find either method to be very time consuming. Is there a faster way to resolve this problem? How can I quickly determine the datatype of a particular variable?
The general impression is that code is easier to read/write in Python, but I am not finding it very quick to read Python code because of the lack of types. What am I missing here?
I feel your pain, too! I frequently switch between Python and C++, so paradigm shifting does give me paranoia.
However, I've been readjusting my code with:
Type Annotations
It doesn't improve runtime performance, but it provides a sense of comfort when reading through tens of thousands of lines of code. Also, you can run your Python programs with this to further verify your type annotations:
mypy
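For instance, a minimal sketch (mypy is a separate install, pip install mypy; the function here is made up for illustration):

def scale(values: list, factor: float) -> list:
    """Return a new list with every element multiplied by factor."""
    return [v * factor for v in values]

print(scale([1, 2, 3], 2.0))  # [2.0, 4.0, 6.0]

# Uncommenting the next line makes `mypy this_file.py` complain before
# the program ever runs, because a str is not a list:
# scale("not a list", 2.0)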
These are the things I follow:
Comment clearly in the docstring what the inputs are and what is returned.
Use a debug (or flag) variable, which is set to False by default, and keep an if block as follows.
debug = False  # set to True while debugging

if debug:
    print(type(variable))
That way, you can be sure to see the type of the variable.
In Python, you can see the data type of any variable by using
type(variable_name)
It will show you the data type of that variable, such as int, bool, str, etc.
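For example, in an interactive session:

>>> type(42)
<class 'int'>
>>> type([1, 2, 3])
<class 'list'>
>>> type({"a": 1})
<class 'dict'>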
I just want to check whether a particular function is called by another function or not. If it is, I have to store it in one category, and the functions that do not call that particular function will be stored in a different category.
I have 3 .py files with classes and functions in them. I need to check each and every function. E.g., let's say there is a function trial(). If a function calls trial(), then that function is in the "example" category; otherwise it is in the "non-example" category.
I have no idea what you are asking, but even if it is technically possible, the one and only answer is: don't do that.
If your design is such that method A needs to know whether it was called from method B or C, then your design is most likely ... broken. Having such dependencies within your code will quickly turn the whole thing unmaintainable. Simply because you will very soon be constantly asking yourself: "that path seems fine, but what will happen over here?"
One way out of that: create different methods, so that B can call something else than C does; but of course, you should still extract the common parts into one method.
Long story short: my non-answer is: take your current design and have some other people review it. You should step back from whatever you are doing right now and find a way to do it differently! You know, most of the time, when you start thinking about strange/awkward ways to solve a problem within your current code, the real problem is your current code.
EDIT: given your comments ... The follow-up questions would be: what is the purpose of this classification? How often will it need to take place? You know, will it happen only once (then manual counting might be an option), or after each and every change? For the "manual" approach: IDEs such as PyCharm are pretty good at analyzing Python source code; you can do simple things like "find usages in workspace", so the IDE lists all the methods that invoke some method A. But of course, that works only for one level.
The other option I see: write some test code that imports all your methods, and then see how far the inspect module helps you. Probably you could really iterate through a complete method body and simply search for matching method names, as sketched below.
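A rough sketch of that idea using the ast module rather than plain string matching (trial is the name from the question; the three file names are placeholders, and calls like self.trial() would additionally need ast.Attribute handling):

import ast

def classify(path, target="trial"):
    # Split the functions in one .py file into those that call
    # `target` (example) and those that don't (non-example).
    with open(path) as f:
        tree = ast.parse(f.read())
    example, non_example = [], []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            called = {
                inner.func.id
                for inner in ast.walk(node)
                if isinstance(inner, ast.Call)
                and isinstance(inner.func, ast.Name)
            }
            (example if target in called else non_example).append(node.name)
    return example, non_example

for path in ["first.py", "second.py", "third.py"]:  # your three files
    example, non_example = classify(path)
    print(path, "example:", example, "non-example:", non_example)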
Does anyone know how pydev determines what to use for code completion? I'm trying to define a set of classes specifically to enable code completion. I've tried using __new__ to set __dict__ and also __slots__, but neither seems to get listed in pydev autocomplete.
I've got a set of enums I want to list in autocomplete, but I'd like to set them in a generator, not hardcode them all for each class.
So rather than
class TypeA(object):
    ValOk = 1
    ValSomethingSpecificToThisClassWentWrong = 4

    def __call__(self):
        return 42
I'd like to do something like
def TYPE_GEN(name, val, enums={}):
    def call(self):
        return val
    dct = {"__call__": call}
    # Listing the enum names in __slots__ as well would raise
    # "ValueError: ... conflicts with class variable", so the enums
    # are stored as plain class attributes only.
    for k, v in enums.items():
        dct[k] = v
    return type(name, (), dct)

TypeA = TYPE_GEN("TypeA", 42, {"ValOk": 1, "ValSomethingSpecificToThisClassWentWrong": 4})
What can I do to help the processing out?
edit:
The comments seem to be questioning what I am doing. Again, a big part of what I'm after is code completion. I'm using a Python binding to a protocol to talk to various microcontrollers. Each parameter I can change (there are hundreds) has a name conceptually, but over the protocol I need to use its ID, which is effectively random. Many of the parameters accept values that are conceptually named but are again represented by integers. Thus the enum.
I'm trying to autogenerate a Python module for the library, so the group can specify what they want to change using the names instead of the error-prone numbers. The __call__ method will return the id of the parameter; the enums are the allowable values for the parameter.
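Concretely, with the TypeA generated above:

print(TypeA.ValOk)  # 1  -- an allowable value, by name
param = TypeA()
print(param())      # 42 -- the parameter ID, via __call__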
Yes, I can generate the verbose version of each class. One line for each type seemed clearer to me, since the point is autocomplete, not viewing these classes.
OK, as pointed out, your code is too dynamic for this... PyDev will only analyze your own code statically (i.e.: code that lives inside your project).
Still, there are some alternatives there:
Option 1:
You can force PyDev to analyze code that's in your library (i.e.: in site-packages) dynamically, in which case it could get that information dynamically through a shell.
To do that, you'd have to create a module in site-packages and in your interpreter configuration you'd need to add it to the 'forced builtins'. See: http://pydev.org/manual_101_interpreter.html for details on that.
Option 2:
Another option would be putting it into your predefined completions (but in this case it also needs to be in the interpreter configuration, not in your code -- and you'd have to make the completions explicit there anyways). See the link above for how to do this too.
Option 3:
Generate the actual code. I believe that Cog (http://nedbatchelder.com/code/cog/) is the best alternative for this as you can write python code to output the contents of the file and you can later change the code/rerun cog to update what's needed (if you want proper completions without having to put your code as it was a library in PyDev, I believe that'd be the best alternative -- and you'd be able to grasp better what you have as your structure would be explicit there).
Note that cog also works if you're in other languages such as Java/C++, etc. So, it's something I'd recommend adding to your tool set regardless of this particular issue.
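As a rough illustration of the cog approach (the TYPES dictionary is made up for this sketch; running cog -r on the file writes the generated classes between the markers):

# types.py -- process with: cog -r types.py
# [[[cog
# import cog
# TYPES = {"TypeA": (42, {"ValOk": 1,
#                         "ValSomethingSpecificToThisClassWentWrong": 4})}
# for name, (val, enums) in sorted(TYPES.items()):
#     cog.outl("class %s(object):" % name)
#     for enum_name, enum_val in sorted(enums.items()):
#         cog.outl("    %s = %s" % (enum_name, enum_val))
#     cog.outl("    def __call__(self):")
#     cog.outl("        return %s" % val)
# ]]]
# [[[end]]]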
Fully general code completion for Python isn't actually possible in an "offline" editor (as opposed to in an interactive Python shell).
The reason is that Python is too dynamic; basically anything can change at any time. If I type TypeA.Val and ask for completions, the system has to know what object TypeA is bound to, what its class is, and what the attributes of both are. All 3 of those facts can change (and do; TypeA starts undefined and is only bound to an object at some specific point during program execution).
So the system would have to know: at what point in the program run do you want the completions from? And even if there were some unambiguous way of specifying that, there's no general way to know what the state of everything in the program is at that point without actually running it to that point, which you probably don't want your editor to do!
So what pydev does instead is guess, when it's pretty obvious. If you have a class block in a module foo defining class Bar, then it's a safe bet that the name Bar imported from foo is going to refer to that class. And so you know something about what names are accessible under Bar., or on an object created by obj = Bar(). Sure, the program could be rebinding foo.Bar (or altering its set of attributes) at runtime, or could be run in an environment where import foo is hitting some other file. But that sort of thing happens rarely, and the completions are useful in the common case.
What that means, though, is that you basically lose completions whenever you use "too much" of Python's dynamic language flexibility. Defining a class by calling a function is one of those cases. The system isn't prepared to guess that TypeA has names ValOk and ValSomethingSpecificToThisClassWentWrong; after all, there are presumably lots of other objects that result from calls to TYPE_GEN, but they all have different names.
So if your main goal is to have completions, I think you'll have to make it easy for pydev and write these classes out in full. Of course, you could use similar code to generate the Python files (textually) if you wanted. It looks like there's actually more "syntactic overhead" in defining these with dictionaries than as a class, though; you're writing "a": b, per item rather than a = b. Unless you can generate these more systematically or parse existing definition files or something, I think I'd find the static class definition easier to read and write than the dictionary driving TYPE_GEN.
The simpler your code, the more likely completion is to work. Would it be reasonable to have this as a separate tool that generates Python code files containing the class definitions like you have above? This would essentially be the best of both worlds. You could even put the name/value pairs in a JSON or INI file or what have you, eliminating the clutter of the methods call among the name/value pairs. The only downside is needing to run the tool to regenerate the code files when the codes change, but at least that's an automated, simple process.
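A minimal sketch of such a generator tool (the JSON layout and both file names here are hypothetical):

import json

TEMPLATE = """\
class {name}(object):
{enums}
    def __call__(self):
        return {val}

"""

def generate(json_path, out_path):
    # Expects e.g. {"TypeA": {"id": 42, "enums": {"ValOk": 1}}}
    with open(json_path) as f:
        spec = json.load(f)
    with open(out_path, "w") as out:
        for name, info in sorted(spec.items()):
            enums = "\n".join("    %s = %s" % (k, v)
                              for k, v in sorted(info["enums"].items()))
            out.write(TEMPLATE.format(name=name, enums=enums, val=info["id"]))

generate("params.json", "generated_types.py")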
Personally, I would just go with making things more verbose and writing out the classes manually, but that's just my opinion.
On a side note, I don't see much benefit in making the classes callable vs. just having an id class variable. Both require knowing what to type: TypeA() vs TypeA.id. If you want to prevent instantiation, I think throwing an exception in __init__ would be a bit more clear about your intentions.
I'm a long time Python developer and I really love the dynamic nature of the language, but I wonder if Python would benefit from optional static typing.
Would it be beneficial to be able to apply static typing to the API of a library, and what would the disadvantages of this be?
I quickly sketched up a decorator implementing runtime-static type checking on pastebin and it works like this:
# A TypeError will be thrown if the argument "string" is not a "str" and if
# the returned value is not an "int"
@typed(int, string=str)
def getStringLength(string):
    return len(string)
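The pastebin code isn't reproduced here, but a minimal sketch of such a decorator could look like this (an illustration of the idea, not the original implementation):

import functools
import inspect

def typed(return_type, **arg_types):
    # Check argument and return types at call time; raise TypeError on mismatch.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = inspect.signature(func).bind(*args, **kwargs)
            for name, value in bound.arguments.items():
                expected = arg_types.get(name)
                if expected is not None and not isinstance(value, expected):
                    raise TypeError("%s: %r is not a %s"
                                    % (name, value, expected.__name__))
            result = func(*args, **kwargs)
            if not isinstance(result, return_type):
                raise TypeError("return value %r is not a %s"
                                % (result, return_type.__name__))
            return result
        return wrapper
    return decorator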
Would it be practical to use a decorator like this on the API functions of a library? From my point of view, type checking is not needed in the internal workings of a domain-specific module of a library, but at the connection points between the library and its client, a simple version of design by contract through type checking could be useful. Especially as a form of enforced documentation which clearly states to the client of the library what it expects and returns.
Like this example, where addObjectToQueue() and isObjectProcessed() are exposed for use by the client and processTheQueueAndDoAdvancedStuff() is an internal library function. I think type checking could be useful on the outward-facing functions but would only bloat and restrict the dynamic nature and usefulness of Python if used on the internal functions.
# some_library_module.py
import random

@typed(int, name=str)
def addObjectToQueue(name):
    return random.randint(0, 100)  # Some object id

def processTheQueueAndDoAdvancedStuff(arg_of_library_specific_type):
    pass  # Function body here

@typed(bool, object_id=int)
def isObjectProcessed(object_id):
    return True
What would the disadvantages of using this technique be?
What would the disadvantages of my naive implementation on pastebin be?
I don't want answers discussing the conversion of Python to a statically typed language, but thoughts about API design-specific pros/cons. (please move this to programmers.stackexchange.com if you consider it not a question)
Personally, I don't find this idea attractive for Python. This is all just my opinion, of course, but for context I'll tell you that Python and Haskell are probably my two favourite programming languages - I like languages at both extreme ends of the static vs dynamic typing spectrum.
I see the main benefits of static typing as follows:
Increased likelihood that your code is correct once the compiler has accepted it; if I know I've threaded my values through all the operations I invoked in such a way that the result type of one always matches the input type of another, and the final result type is the one I wanted, it increases the probability that I've selected the correct operations. This point is of deeply arguable value, since it only really matters if you're not testing very much, which would be bad. But it is true that, when programming in Haskell, when I sit back and say "there, done!" I am actually done a lot of the time, whereas that's almost never true of my Python code.
The compiler automatically points out most of the places that need changing when I make an incompatible change to a data structure or interface (most of the time). Again, tests are still needed to actually be sure you've caught all the implications, but most of the time the compiler's nagging is actually sufficient, in my experience, which deeply simplifies such refactoring; you can go straight from implementing the core of the refactoring to testing that the program still works okay, because the actual work of making all the flow-on changes is almost mechanical.
Efficient implementation. The compiler gets to use all the knowledge it has about types to do optimisation.
Your suggested system doesn't really provide any of these benefits.
Having written a program making use of your library, I still don't know if it contains any type-incorrect uses of your functions until I do extensive testing with full code coverage to see if any execution path contains a bad call.
When I refactor something, I need to go through many rounds of "run the full test suite, look for an exception, find where it came from, fix the code" to get anything at all like a static-typing compiler's problem detection.
Python will still be behaving as if those variables could be anything at any time.
And to get even that much, you've sacrificed the flexibility of Python duck-typing; it's not enough that I provide a sufficiently "list-like" object, I have to actually provide a list.
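For example, with the hypothetical typed decorator sketched above:

@typed(int, items=list)
def count(items):
    return len(items)

count([1, 2, 3])  # fine
count((1, 2, 3))  # TypeError: a tuple is perfectly "list-like",
                  # but it is not a list, so the check rejects it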
To me, this sort of static typing is the worst of both worlds. The main dynamic typing argument is "you have to test your code anyway, so you may as well use those tests to catch type errors and free yourself from having to work around the type system when it doesn't help you". That may or may not be a good argument with respect to a really good static type system, but it absolutely is a compelling argument with respect to a weak partial static type system that only detects type errors at runtime. I don't think nicer error messages (which is all it really buys you most of the time; a type error not caught at the interface is almost certainly going to throw an exception deeper in the call stack) is worth the loss of flexibility.