While looking through the Keras framework, I noticed an interesting style used there:
my_method(some_parameter)(variable_to_call_method_on_apparently)
Could you please provide any insights into what exactly is going on there, and where I can find more information about this?
P.S. I tried to Google this, but it was no use, since I did not know what to type into the search bar.
EDIT: I am confused by the syntax, so it is not Keras-specific. I mentioned Keras to try to make my question clearer.
This is not Keras-specific. The first call, f(x), returns a value that is itself callable (that is, a function, or an object with a __call__ method) instead of a plain value. You then call the returned object with the second set of parenthesized arguments.
You might also look up currying and partial application for related concepts.
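As a minimal sketch of the pattern (with hypothetical names, not taken from Keras), a function can return an inner function, which the second pair of parentheses then invokes:

```python
def multiplier(factor):
    """Return a function that multiplies its argument by `factor`."""
    def multiply(x):
        return factor * x
    return multiply

# The first call returns a callable; the second pair of parentheses invokes it.
double = multiplier(2)
result = multiplier(2)(21)  # equivalent to double(21), i.e. 42
```

Because the inner function closes over `factor`, each call to `multiplier` produces a differently configured callable, which is exactly why this style is handy for building up layers or pipelines.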
I would like to know if it is possible, given a function (as an instance, or a string), to get its parameters, the default value defined for each parameter and, if possible, the type of each parameter (probably using the type of the default value, if defined) in Python 3.5.
Why would you want that?!
Long story short, I am generating an XML file containing details of different functions in my project. Since the generator has to be future-proof in case someone modifies, adds, or deletes a function, the next generated file must be updated. I successfully retrieved the functions I wanted, either as an instance or as a string of the code calling them.
I have two solutions (well, more the beginnings of solutions) to solve this problem, using inspect and jedi.
Inspect
Using inspect.signature(function), I can retrieve the names and default values of all the parameters. The main issue I see here would be analyzing a function like this:
def fct(a=None):
# Whatever the function does...
Analyzing the type of the default value will lead to misunderstandings. Is there a way to fix that?
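One way around the None-default problem is to rely on type annotations when they are present: inspect.signature exposes them separately from the default value. A small sketch (the function here is hypothetical):

```python
import inspect

def fct(a: int = None, b="hello"):
    """Sample function: `a` has a None default but an int annotation."""
    return a

sig = inspect.signature(fct)
for name, param in sig.parameters.items():
    # param.default is the default value (inspect.Parameter.empty if absent);
    # param.annotation carries the declared type, which avoids guessing
    # the type from a None default.
    print(name, param.default, param.annotation)
```

For unannotated parameters the annotation is inspect.Parameter.empty, so the generator can fall back to the type of the default value only in that case.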
Jedi
Jedi is an extremely powerful tool, maybe even too powerful! Getting the function as a one-line code string and analyzing it through Jedi gives an extraordinary amount of information that, to be completely honest, I am lost in. Plus, I might get bad autocompletion (for example, instead of getting the parameters for print, I might get autocompleted to println).
If someone has used one of these tools for this purpose, or even better, if you know a better, more "pythonic" way of doing this, I would be really grateful!
I typically work with C++ but lately have had to program a lot in Python. Coming from a C++ background, I find dynamic typing very inconvenient when I have to modify an existing codebase. I know I am missing something very basic, so I am turning to the Stack Overflow community to understand best practices.
Imagine there is a class with a number of methods and I need to edit an existing method. In C++, I can explicitly see the datatype of every parameter and, if need be, check the .h files of the corresponding class to quickly understand what's happening. In Python, on the other hand, all I see are some variable names. I am not sure if a given parameter is a list, a dictionary, or maybe some custom data structure with its own getters and setters. To figure this out, I need to look at existing usages of the function, or run the code with breakpoints and see what kind of data structure I am getting. I find either method very time-consuming. Is there a faster way to resolve this problem? How can I quickly determine the datatype of a particular variable?
The general impression is that code is easier to read and write in Python, but I am not finding Python code quick to read because of the lack of types. What am I missing here?
I feel your pain, too! I frequently switch between Python and C++, so the paradigm shift does give me paranoia.
However, I've been readjusting my code with:
Type Annotations
They don't improve runtime performance, but they provide a sense of comfort when reading through tens of thousands of lines of code. You can also run your Python programs through this tool to further verify your type annotations:
mypy
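A small sketch of what annotated code looks like (the function and data here are hypothetical):

```python
from typing import Dict, List

def average_prices(prices: Dict[str, List[float]]) -> Dict[str, float]:
    """Return the mean price per ticker.

    The annotations document exactly what shape of data is expected,
    which a reader coming from C++ can check at a glance.
    """
    return {ticker: sum(values) / len(values)
            for ticker, values in prices.items()}

result = average_prices({"AAPL": [10.0, 20.0], "MSFT": [30.0]})
```

Running mypy over this file would flag a call such as `average_prices([1, 2, 3])` before the program is ever executed, giving back much of the compile-time checking you are used to in C++.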
These are the things I follow:
Clearly document in the docstring what the inputs are and what is returned.
Use a debug variable (or flag), set to False by default, and keep an if block as follows:
if debug:
    print(type(variable))
That way, you can be sure to see what the type of the variable is.
In Python, you can see the data type of any variable by using
type(variable_name)
It will show you the data type of that variable, such as int, bool, str, etc.
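For example:

```python
x = 42
print(type(x))          # <class 'int'>
print(type("hello"))    # <class 'str'>
print(type([1, 2, 3]))  # <class 'list'>

# For type checks inside code, isinstance is usually preferred,
# since it also accepts subclasses of the given type:
print(isinstance(x, int))  # True
```

This is handy at a breakpoint or in the REPL when you are unsure what a function handed back to you.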
I am currently in the process of implementing a Scala API for TensorFlow, and I am using scala.meta to automatically generate the Op creation methods. I am calling TF_GetAllOpList from the C API to obtain an OpList protocol buffer, which I then parse to create the methods.
Each Op can have a set of attributes, and each attribute has a type and, optionally, a default value. I am confused as to what the default value for a func attribute should look like, and also how to set that attribute value through the C API when creating the Op (i.e., there is no TF_SetAttrFunc function). Could someone provide some clarification on that? A reference to what is currently done when generating the Python API (and to the relevant part of the code that generates it) would also be greatly appreciated.
I am also a bit confused about the placeholder attribute type and could use some help with its meaning and use too, but I can create a separate question for that if necessary.
Thank you!
What is the optimal way to get documentation about a specific function in Python? I am trying to download stock price data so that I can view it in my Spyder IDE.
The function I am interested in is:
ystockquote.get_historical_prices
How do I know how many inputs the function takes, the types of inputs it accepts, and the date format for example?
Just finding documentation
I suspect this question was super-downvoted because the obvious answer is to look at the documentation. It depends where your function came from, but googling is typically a good way to find it (I found the class here in a few seconds of googling).
It is also trivial to just check the source code.
To import a function, you already need to know where its source file is; open that file. In Python, docstrings are what generate the documentation, and they can be found in triple quotes beneath the function declaration. The arguments can be inferred from the function signature, but because Python is dynamically typed, any type "requirements" are just suggestions. Good documenters will state the expected types, too.
While "how do I google for documentation" is not a suitable question, the question of how to dynamically infer documentation is more reasonable. The answer is:
The built-in help function
The __doc__ attribute, accessible on any Python object as a string
Inspection, via the inspect module
The question is even more reasonable if you are working with Python extensions, such as those from external packages. I don't know if the package you specifically asked about has any of those, but they can be tricky to work with if the authors haven't defined docstrings in the module. The problem is that in these cases the typing can be rigidly enforced, yet there is no great way to get the type requirements, as inspection will fail. If you can get at the source code, though (perhaps by googling), that is where the documentation would be provided.
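A minimal sketch of the three introspection routes listed above, using a hypothetical stand-in for ystockquote.get_historical_prices (the real function may or may not carry a docstring):

```python
import inspect

def get_historical_prices(symbol, start_date, end_date):
    """Fetch daily prices for `symbol` between two 'YYYY-MM-DD' dates."""
    ...

# 1. help() renders the signature and docstring interactively.
help(get_historical_prices)

# 2. __doc__ is the raw docstring as a plain string.
print(get_historical_prices.__doc__)

# 3. inspect recovers the parameter names programmatically.
print(inspect.signature(get_historical_prices))
```

In Spyder you can also place the cursor on a function name and press Ctrl+I to show its documentation in the Help pane, which runs this same machinery for you.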
I am trying to set the grouping settings of the hog.detectMultiScale method from the OpenCV2 library (version 2.4.9).
What happens is that the group_threshold and groupThreshold parameters are both not recognized in the Python binding:
TypeError: 'group_threshold' is an invalid keyword argument for this function
and
TypeError: 'groupThreshold' is an invalid keyword argument for this function
How can I fix this? Is there a way to set this parameter?
Neither group_threshold nor groupThreshold exists in the Python wrapper of hog.detectMultiScale. Unfortunately, there is no documentation to prove it (typical of the OpenCV docs), but there is a related doc for the GPU version of the HOG descriptor here - http://docs.opencv.org/2.4.9/modules/gpu/doc/object_detection.html#gpu-hogdescriptor-detectmultiscale
However, there seems to be an inconsistency with the Python wrapper. If you type help(cv2.HOGDescriptor().detectMultiScale) in the Python REPL, this is what you get:
detectMultiScale(...)
    detectMultiScale(img[, hitThreshold[, winStride[, padding[, scale[, finalThreshold[, useMeanshiftGrouping]]]]]]) -> foundLocations, foundWeights
If you compare the docs with the Python wrapper, you can clearly see that some input parameters are missing and that the two expose different parameters.
As such, it doesn't look like you can vary this parameter :(. Sorry if this isn't what you wanted to hear! However, this Stack Overflow post may prove insightful if you want to get it working relatively well:
HOGDescriptor with videos to recognize objects
Good luck!