PyCharm does not show all attributes and methods of an object - Python

When I use Python with OpenCV, I get the image object from the imread method, but when I try to use that object, PyCharm does not show any of its attributes or methods.
When I check with IPython or the dir() method, I can see them.

This happens when PyCharm can't guess the type of the object a method returns, imread() in this case. Some methods return different types of object depending on the input. You'd have to look into the OpenCV source code to see why the return type isn't clear; static analysis of the code only detects the obvious cases.
IPython has already executed the method, so it's clear what type was returned.
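That runtime knowledge is easy to reproduce yourself with type() and dir(). A minimal sketch using only the standard library (a StringIO object stands in for the imread() result, since any object works the same way):

```python
import io

# stand-in for an object whose type static analysis can't infer
obj = io.StringIO("hello")

print(type(obj))  # the concrete class, known only at runtime
# list the public attributes/methods, much like IPython's completion does
print([name for name in dir(obj) if not name.startswith('_')][:5])
```

This is exactly what IPython does under the hood: the object already exists, so introspection is trivial.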
One solution, if you know the type returned, is to use type comments like this:
import cv2
import numpy
# I've checked with IPython that the returned object is a `numpy.ndarray` instance
img = cv2.imread('/home/me/Pictures/image.jpg') # type: numpy.ndarray
Then, when you type img., PyCharm will show the completion list.
The feature is described in PEP 484.
The PEP says the syntax was introduced in Python 3.5, though PyCharm may handle this simple case on older Python versions as well; I haven't checked.
The PEP also describes features of the typing module, which is not available in older Python versions, so most of those features won't work there. I'm not sure whether PyCharm actually uses the typing module to parse type comments or handles them natively.
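On Python 3.6+ the same hint can also be written as a variable annotation (PEP 526) instead of a comment. A sketch using a hypothetical read_image() from the standard library as a stand-in for cv2.imread, since the exact cv2 behavior is beside the point:

```python
import array

def read_image(path):
    """Stand-in for cv2.imread: no return annotation, so an IDE can't infer the type."""
    return array.array('B', [0, 0, 0])

# PEP 526 variable annotation: the IDE now knows img is an array.array
img: array.array = read_image('image.jpg')
img.append(255)  # completion for array.array methods works from here on
```

Both spellings carry the same information; the annotation form just lives in the syntax rather than in a comment.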

Related

How to use type hints in python 3.6?

I noticed that Python 3.5 and Python 3.6 added a lot of features for static type checking, so I tried the following code (in Python 3.6, stable version).
from typing import List
a: List[str] = []
a.append('a')
a.append(1)
print(a)
What surprised me was that Python gave no error or warning, even though 1 was appended to a list that should only contain strings. PyCharm detected the type error and gave me a warning, but the warning was easy to miss and did not appear in the output console, so I was afraid I might sometimes overlook it. I would like the following behavior:
If it's obvious that I used the wrong type, as shown above, raise a warning or error.
If the checker can't reliably determine whether the type I used is right or wrong, ignore it.
Is that possible? Maybe mypy could do it, but I'd prefer to use Python-3.6-style type annotations (like a: List[str]) instead of the comment style (like # type: List[str]) used in mypy. And I'm curious whether there's a switch in native Python 3.6 to achieve the two points above.
Type hints are entirely meant to be ignored by the Python runtime, and are checked only by 3rd party tools like mypy and Pycharm's integrated checker. There are also a variety of lesser known 3rd party tools that do typechecking at either compile time or runtime using type annotations, but most people use mypy or Pycharm's integrated checker AFAIK.
In fact, I doubt that typechecking will ever be integrated into Python proper in the foreseeable future -- see the 'non-goals' section of PEP 484 (which introduced type annotations) and PEP 526 (which introduced variable annotations), as well as Guido's comments here.
I'd personally be happy with type checking being more strongly integrated with Python, but it doesn't seem the Python community at large is ready or willing for such a change.
The latest version of mypy should understand both the Python 3.6 variable annotation syntax and the comment-style syntax. In fact, variable annotations were Guido's idea in the first place (he is currently part of the mypy team); support for type annotations in mypy and in Python was developed pretty much simultaneously.
Is that possible? Maybe mypy could do it, but I'd prefer to use Python-3.6-style type checking (like a: List[str]) instead of the comment-style (like # type: List[str]) used in mypy. And I'm curious if there's a switch in native python 3.6 to achieve the two points I said above.
There's no way native Python will do this for you; you can use mypy to get type checking (and PyCharm's built-in checker should do it too). In addition, mypy doesn't restrict you to type comments (# type: List[str]); variable annotations as in Python 3.6 (a: List[str]) work equally well.
Because the release is fresh, you currently need to install typed_ast and run mypy with --fast-parser and --python-version 3.6, as documented in mypy's docs. This will probably change soon, but for now those flags are needed to get it running smoothly.
Update: --fast-parser and --python-version 3.6 are no longer needed.
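The two spellings are interchangeable to mypy, so a file can even mix them. A quick sketch:

```python
from typing import List

a: List[str] = []           # Python 3.6 variable annotation (PEP 526)
b = []  # type: List[str]   # comment style (PEP 484), also valid on older Pythons
a.append('a')
b.append('b')
# mypy treats both identically: appending an int to either would be flagged
```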
After you do that, mypy detects the incompatibility of the second operation on your a: List[str] just fine. Let's say your file is called tp_check.py with statements:
from typing import List
a: List[str] = []
a.append('a')
a.append(1)
print(a)
Running mypy with the aforementioned arguments (you must first pip install -U typed_ast):
python -m mypy --fast-parser --python-version 3.6 tp_check.py
catches the error:
tp_check.py:5: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "str"
As noted in many other answers on type hinting with Python, mypy and PyCharm's type checkers are the ones performing the validation, not Python itself. Python doesn't currently use this information; it only stores it as metadata and ignores it during execution.
Type annotations in Python are not meant to be type-enforcing. Enforcing static types at runtime would mean changes so fundamental that it would not even make sense to keep calling the resulting language "Python".
Notice that the dynamic nature of Python does allow one to build an external tool, in pure Python code, to perform runtime type checking. It would make the program run (very) slowly, but it may be suitable for certain categories of tests.
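As an illustration of such a pure-Python runtime checker, here is a minimal sketch of a decorator (the name enforce_hints is made up) that validates annotated arguments with isinstance() on every call. It only handles plain classes, not generics like List[str]:

```python
import functools
import inspect
import typing

def enforce_hints(func):
    """Check annotated arguments against their type hints at call time."""
    hints = typing.get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # only plain classes are checked; generic aliases are skipped
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name!r} expected {expected.__name__}, "
                                f"got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@enforce_hints
def repeat(text: str, times: int) -> str:
    return text * times

repeat('ab', 2)       # fine
# repeat('ab', 'x')   # raises TypeError at runtime
```

The per-call signature binding and isinstance checks are exactly where the slowdown mentioned above comes from.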
To be sure, one of the fundamentals of the Python language is that everything is an object and that you can attempt any operation on an object at runtime. If the object does not have an interface that conforms to the attempted operation, it fails - at runtime.
Languages that are statically typed by nature work differently: operations have to be declared valid for the objects involved before the program ever runs. At the compile step, the compiler creates the spaces and slots for the appropriate objects all over the place and, on non-conforming typing, breaks the compilation.
Python's type hinting allows any number of tools to do exactly that: break and warn at a step prior to actually running the application (but independent of the compilation itself). The nature of the language can't be changed to require objects to comply at runtime, and verifying the typing and breaking at the compile step itself would be artificial.
One can expect, though, that future versions of Python may incorporate compile-time type checking in the Python runtime itself, most likely through an optional command-line switch. (I don't think it will ever be the default - at least not to break the build; maybe it could be made the default for emitting warnings.)
So, Python does not require static type checking at runtime because it would cease being Python. But at least one language makes use of both dynamic objects and static typing: the Cython language, which in practice works as a Python superset. One should expect Cython to adopt the new type-hinting syntax as actual type declarations before long. (Currently it uses a different syntax for its optional statically typed variables.)
The pydantic package has a validate_arguments decorator that checks type hints at runtime. You can add this decorator to all functions or methods where you want type hints enforced.
I wrote some code to help automate this for an entire module, so that I could enable runtime checks for my test suite to help with debugging, but then have them off for code that uses the library so there's no performance impact.
import sys
import inspect
import types

from pydantic import validate_arguments


class ConfigAllowArbitraryTypes:
    """allows for custom classes to be used in type hints"""

    arbitrary_types_allowed = True


def add_runtime_type_checks(module):
    """adds runtime typing checks to the given module/class"""
    if isinstance(module, str):
        module = sys.modules[module]
    for attr in module.__dict__:
        obj = getattr(module, attr)
        if isinstance(obj, types.FunctionType):
            setattr(module, attr, validate_arguments(obj, config=ConfigAllowArbitraryTypes))
        elif inspect.isclass(obj):
            # recursively add decorator to class methods
            add_runtime_type_checks(obj)
In my test suite I then add decorators by calling add_runtime_type_checks with the name of the module.
import mymodule

def setup_module(module):
    """executes once"""
    add_runtime_type_checks('mymodule')

def test_behavior():
    ...
Note that pydantic may do some unexpected conversions when type checking: if the desired type is an int and you pass the function 0.2, it will silently cast it to 0 rather than failing. In principle, you could do almost the same thing with the typen library's enforce_type_hints decorator, but that does not do recursive checks (so you can't use types like list[int], only list).

How to find the real type of an object in Python

How to find the real return type of a method in Python?
For example, I want to enable type hinting in PyCharm so that autocompletion works, but it is surprisingly not easy to find the exact object type, even in the debugger and even after looking at the method bodies.
I expected type(someObject) to return something meaningful, but for most objects it returns <type 'instance'>, which is hardly of any use.
For example, how can I find out what is the type of the response object returned by a call to urllib2.urlopen(url) so that I can mark the local variable using a Pycharm type hint to make autocompletion work?
I realize that everything is dynamic in Python and that programming to an interface is not a widely used Python paradigm, but I would at least like to know how far we can go in terms of type inference, specifically for comfortable IDE support and 'lazy' programming with autocompletion.
The real type of an instance of an old-style class is instance, whether that is of any use or not. If you want to know the class it is an instance of, you can use obj.__class__. But old-style classes are not types.
And there is no such thing as the "return type of a method" in Python.
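On Python 3, where every class is new-style, type() and __class__ always agree, which is easy to check:

```python
class Foo:
    pass

obj = Foo()
print(type(obj))         # the concrete class, e.g. <class '__main__.Foo'>
print(obj.__class__)     # the same class object
assert type(obj) is obj.__class__  # holds for all new-style classes
```

The <type 'instance'> result in the question is specific to Python 2 old-style classes (those not inheriting from object); inheriting from object there makes type() behave as above.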

Creating Extended Python Type Object within Embedded Interpreter C/C++

I'm trying to define a new Python type object using the documentation found here: https://docs.python.org/3/extending/newtypes.html. At the moment I am just following the basics section, defining a new type with the same names as used in the documentation. I then embed the Python interpreter in a simple application by calling PyImport_AppendInittab("noddy", &PyInit_noddy); followed by Py_Initialize();, and then run a simple Python script using PyRun_SimpleString(script);, where "script" is the following:
import noddy
mytest = noddy.Noddy()
which the documentation gives as an example of creating an object of the new extended type. The problem I am having is that this produces an error:
TypeError: cannot create 'noddy.Noddy' instances
What am I doing wrong here? I appreciate that I have not provided any source code, but I have simply copied the example from the documentation. I understand what each part is doing, but I cannot find the problem. The module called noddy is created and the Noddy object is added, so why can I not create a noddy.Noddy() object as stated in the documentation?

win32com COM method call returning name of object's type instead of object

My company has an internal C# library that I'm trying to call from CPython 2.7 using win32com. We have an object which I'll call Type1, which is in a C# namespace that I'll call Company.System.SubSystem1. One particular method of this object (let's call it GetCurrentType2Object) returns an object of type Company.System.SubSystem2.Type2. The code I have is as follows:
import win32com.client
type1_object = win32com.client.Dispatch("Company.System.SubSystem1.Type1")
type2_object = type1_object.GetCurrentType2Object()
The problem is that type2_object does not get assigned a Type2 object; it gets the string "Company.System.SubSystem2.Type2".
There are other methods on the same object that return void, integers, or enum values, and those all appear to succeed. There is no other method that returns a class type for me to try; this is the only one.
I've tried using the makepy.py script on the library in question before running my code, and it has its own problems before even getting this far; the generated file seems to contain a small subset of the actual interface, which does not include many of the methods I need. The Microsoft COM object viewer shows the same subset, so I can't blame this on the script, and I gave up trying to figure it out.
I'm aware of IronPython, and I may very well end up just using it instead, since that would be a much less roundabout way of working with .NET code, but I would really like to understand what's happening here before I make any decisions, and just to satisfy my own curiosity.
It appears that win32com.client.dynamic is deciding that the method I'm trying to call is actually a property, so the method doesn't get invoked correctly. I'm not certain, because I don't really understand how that decision is made, but I was able to work around the problem by forcing a method to be generated instead of a property. You can do that by calling win32com's _FlagAsMethod. For the example in the question, that looks like:
type1_object._FlagAsMethod("GetCurrentType2Object")
Call that before trying to call GetCurrentType2Object and everything works fine.

unable to adjust grouping settings in OpenCV's hog.detectMultiScale (python)

I am trying to set the grouping settings of the hog.detectMultiScale method from the OpenCV library (version 2.4.9).
The problem is that the group_threshold and groupThreshold parameters are both unrecognized in the Python binding:
TypeError: 'group_threshold' is an invalid keyword argument for this function
and
TypeError: 'groupThreshold' is an invalid keyword argument for this function
How can I fix this? Is there a way to set this parameter?
Neither group_threshold nor groupThreshold exists in the Python wrapper of hog.detectMultiScale. Unfortunately, there is no documentation to prove it (typical of the OpenCV docs), but there is a related doc for the GPU version of the HOG descriptor here - http://docs.opencv.org/2.4.9/modules/gpu/doc/object_detection.html#gpu-hogdescriptor-detectmultiscale
However, there seems to be an inconsistency in the Python wrapper. If you type help(cv2.HOGDescriptor().detectMultiScale) in the Python REPL, this is what you get:
detectMultiScale(...)
detectMultiScale(img[, hitThreshold[, winStride[, padding[, scale[,
finalThreshold[, useMeanshiftGrouping]]]]]]) -> foundLocations, foundWeights
If you compare the docs with the Python wrapper, you can clearly see that some input parameters are missing, and that the parameters differ between the two.
As such, it doesn't look like you can vary this parameter :(. Sorry if this isn't what you wanted to hear! However, this StackOverflow post may prove insightful if you want to get detection working relatively well:
HOGDescriptor with videos to recognize objects
Good luck!
