VSCode IntelliSense thinks a Python 'function()' class exists

VSCode / IntelliSense is completing a Python class called function() that does not appear to exist.
For example, this appears to be valid code:
def foo(value):
    return function(value)

foo(0)
But function is not defined in this scope, so running this raises a NameError:
Traceback (most recent call last):
  File "/home/hayesall/wip.py", line 4, in <module>
    foo(0)
  File "/home/hayesall/wip.py", line 2, in foo
    return function(value)
NameError: name 'function' is not defined
I expected IntelliSense to warn me about function being undefined. function() does not appear to have a docstring, and I cannot find anything about it in the wider Python/CPython/VSCode documentation. (Side note: pylint recognizes "Undefined variable 'function'").
What is function()? Or: is there an explanation for why IntelliSense is matching this?
Screenshots:
Writing the word function provides an autocomplete:
function is not defined in this scope, but IntelliSense seems to think that it is:
Some version info:
Debian
code 1.75.1 (x86)
Pylance v2023.2.30
Python 3.9.15 (CPython, GCC 11.2.0)

The function class is the type of function objects (functions are objects in Python). Consider that
def return_one():
    return 1

print(type(return_one))
gives the output
<class 'function'>
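The same type is exposed in the standard library as types.FunctionType, which is one way to see why a completion engine can know about a class literally named function even though the bare name is never bound. A small sketch:

```python
import types

def return_one():
    return 1

# The class named 'function' is the type of every plain Python function.
# It is reachable via types.FunctionType, but the bare name 'function'
# is not bound in builtins, so calling function(...) raises NameError.
print(type(return_one) is types.FunctionType)  # True
print(type(return_one).__name__)               # function
```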


Problems about python ctypes module

I'm just messing around, trying to learn about Python's ctypes module from the official documentation at https://docs.python.org/3.8/library/ctypes.html
Everything works just fine until this part:
"ValueError is raised when you call an stdcall function with the cdecl calling convention, or vice versa:"
>>> cdll.kernel32.GetModuleHandleA(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Procedure probably called with not enough arguments (4 bytes missing)
>>>
>>> windll.msvcrt.printf(b"spam")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Procedure probably called with too many arguments (4 bytes in excess)
>>>
That is quoted from the official documentation, while what I actually get is:
>>> cdll.kernel32.GetModuleHandleA(None)
1374486528
>>> windll.msvcrt.printf(b"spam")
4
According to the MS documentation, these function calls seem to work just fine.
What's more, I also tried messing with the number of arguments so as to raise a ValueError, but this is what I got:
>>> cdll.kernel32.GetModuleHandleA(None,0,0)
1374486528
>>> windll.kernel32.GetModuleHandleA(0,0,0)
1374486528
>>> windll.kernel32.GetModuleHandleA()
0
>>> cdll.kernel32.GetModuleHandleA()
0
The last two calls do seem to return NULL as if there were an error, but no ValueError is raised.
The only error I got is OSError, just as the documentation example shows.
Can anyone explain this? I created a virtual environment using conda and tested this code on both Python 3.6.12 and Python 3.8.5.
By the way, regarding the documentation's statement "ValueError is raised when you call an stdcall function with the cdecl calling convention, or vice versa": what exactly does "call an stdcall function with the cdecl calling convention" mean? Is it just passing a different number of arguments than the function requires?
__stdcall and __cdecl make no difference on 64-bit compilers. There is only one calling convention, the annotations are ignored, and both WinDLL and CDLL work. 32-bit code is where it matters, and the correct one must be used.
You should still use the appropriate WinDLL or CDLL in scripts if you want the script to work correctly on both 32-bit and 64-bit Python.
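A hedged sketch of what that means in practice. This makes no real DLL calls; it only shows the pointer-size check one might use to reason about which loader matters:

```python
import struct

# Pointer size distinguishes 32-bit from 64-bit Python. On 64-bit
# Windows there is a single calling convention (Microsoft x64), so
# WinDLL and CDLL behave identically; on 32-bit, picking the wrong
# one misbalances the stack, which is what the ValueError described
# in the ctypes docs is detecting.
bits = struct.calcsize("P") * 8
if bits == 64:
    print("64-bit: stdcall/cdecl distinction is ignored")
else:
    print("32-bit: use WinDLL for stdcall APIs, CDLL for cdecl APIs")
```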

Perl and Python difference in predeclaring functions

Perl
test();
sub test {
    print 'here';
}
Output
here
Python
test()

def test():
    print('here')
    return
Output
Traceback (most recent call last):
  File "pythontest", line 2, in <module>
    test()
NameError: name 'test' is not defined
I understand that in Python we need to define functions before calling them, hence the above code doesn't work in Python.
I thought it was the same with Perl, but it works!
Could someone explain why it works in the case of Perl?
Perl uses a multi-phase compilation model. Subroutines are defined in an early phase before the actual run time, so no forward declarations are necessary.
In contrast, Python executes function definitions at runtime. The variable which holds a function must be assigned (implicitly by the def) before it can be called as a function.
If we translate these runtime semantics back to Perl, the code would look like:
# at runtime:
$test->();
my $test = \&test;
# at compile time:
sub test { print 'here' }
Note that the $test variable is accessed before it was declared and assigned.
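The Python side of this can be seen directly: a name used inside a def only needs to exist by the time the call actually happens, not when the calling function is defined. A minimal sketch:

```python
def demo():
    # 'helper' is looked up when demo() runs, not when demo is defined,
    # so it is fine that helper is defined further down the file.
    helper()

def helper():
    print('here')

demo()  # works: by the time demo() runs, 'helper' is bound
```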

Altering traceback of a non-callable module

I'm a minor contributor to a package where people are meant to do this (Foo.Bar.Bar is a class):
>>> from Foo.Bar import Bar
>>> s = Bar('a')
Sometimes people do this by mistake (Foo.Bar is a module):
>>> from Foo import Bar
>>> s = Bar('a')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable
This might seem simple, but users still fail to debug it, and I would like to make it easier. I can't change the names of Foo or Bar, but I would like to produce a more informative traceback like:
TypeError("'module' object is not callable, perhaps you meant to call 'Bar.Bar()'")
I read the Callable modules Q&A, and I know that I can't add a __call__ method to a module (and I don't want to wrap the whole module in a class just for this). Anyway, I don't want the module to be callable, I just want a custom traceback. Is there a clean solution for Python 3.x and 2.7+?
Add this to the top of Bar.py (based on this question):
import sys

this_module = sys.modules[__name__]

class MyModule(sys.modules[__name__].__class__):
    def __call__(self, *a, **k):  # makes the module callable
        raise TypeError("'module' object is not callable, perhaps you meant to call 'Bar.Bar()'")
    def __getattribute__(self, name):
        return this_module.__getattribute__(name)

sys.modules[__name__] = MyModule(__name__)

# the rest of the file
class Bar:
    pass
Note: tested with Python 3.6 and Python 2.7.
What you want is to change the error message when it is displayed to the user. One way to do that is to define your own excepthook.
Your function could:
search for the calling frame in the traceback object (which contains information about the TypeError exception and the function that raised it),
search for the Bar object in the local variables,
alter the error message if the object is a module instead of a class or function.
In Foo/__init__.py you can install your excepthook:
import inspect
import sys
def _install_foo_excepthook():
_sys_excepthook = sys.excepthook
def _foo_excepthook(exc_type, exc_value, exc_traceback):
if exc_type is TypeError:
# -- find the last frame (source of the exception)
tb_frame = exc_traceback
while tb_frame.tb_next is not None:
tb_frame = tb_frame.tb_next
# -- search 'Bar' in the local variable
f_locals = tb_frame.tb_frame.f_locals
if 'Bar' in f_locals:
obj = f_locals['Bar']
if inspect.ismodule(obj):
# -- change the error message
exc_value.args = ("'module' object is not callable, perhaps you meant to call 'Foo.Bar.Bar()'",)
_sys_excepthook(exc_type, exc_value, exc_traceback)
sys.excepthook = _foo_excepthook
_install_foo_excepthook()
Of course, you may need to make this heuristic more robust…
With the following demo:
# coding: utf-8
from Foo import Bar
s = Bar('a')
You get:
Traceback (most recent call last):
  File "/path/to/demo_bad.py", line 5, in <module>
    s = Bar('a')
TypeError: 'module' object is not callable, perhaps you meant to call 'Foo.Bar.Bar()'
There are a lot of ways you could get a different error message, but they all have weird caveats and side effects.
Replacing the module's __class__ with a types.ModuleType subclass is probably the cleanest option, but it only works on Python 3.5+.
Besides the 3.5+ limitation, the primary weird side effects I've thought of for this option are that the module will be reported callable by the callable function, and that reloading the module will replace its class again unless you're careful to avoid such double-replacement.
Replacing the module object with a different object works on pre-3.5 Python versions, but it's very tricky to get completely right.
Submodules, reloading, global variables, any module functionality besides the custom error message... all of those are likely to break if you miss some subtle aspect of the implementation. Also, the module will be reported callable by callable, just like with the __class__ replacement.
Trying to modify the exception message after the exception is raised, for example in sys.excepthook, is possible, but there isn't a good way to tell that any particular TypeError came from trying to call your module as a function.
Probably the best you could do would be to check for a TypeError with a 'module' object is not callable message in a namespace where it looks plausible that your module would have been called - for example, if the Bar name is bound to the Foo.Bar module in either the frame's locals or globals - but that's still going to have plenty of false negatives and false positives. Also, sys.excepthook replacement isn't compatible with IPython, and whatever mechanism you use would probably conflict with something.
Right now, the problems you have are easy to understand and easy to explain. The problems you would have with any attempt to change the error message are likely to be much harder to understand and harder to explain. It's probably not a worthwhile tradeoff.
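For completeness, here is a minimal sketch of the __class__-replacement approach mentioned above (Python 3.5+ only; the error message text mirroring the question is illustrative, and in real use this would sit at the top of the Bar.py module):

```python
import sys
import types

class _NotCallableModule(types.ModuleType):
    # Runs when someone calls the module object itself,
    # e.g. Bar('a') after `from Foo import Bar`.
    def __call__(self, *args, **kwargs):
        raise TypeError(
            "'module' object is not callable, "
            "perhaps you meant to call 'Bar.Bar()'")

# Swapping the module's __class__ works on Python 3.5+ only.
this_module = sys.modules[__name__]
this_module.__class__ = _NotCallableModule

try:
    this_module()  # simulate the user's mistake
except TypeError as exc:
    print(exc)
```

Attribute access on the module is untouched here; only calling it changes behavior. As noted above, callable(this_module) now reports True, which is one of the caveats.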

Python compiler: why are some wrong things overlooked?

I wrote a Python routine with a mistake in it: false instead of False. However, the mistake was not discovered at compilation; the program had to run all the way to that line before reporting it.
Why is that? What about the Python interpreter/compiler makes it work this way?
Do you have a reference?
Due to Python's dynamic nature, it is impossible to detect undefined names at compile time. Only the syntax is checked; if the syntax is fine, the compiler generates the bytecode, and Python starts to execute the code.
In the given example, you will get a reference to a global name false. Only when the bytecode interpreter tries to actually access this global name, you will get an error.
To illustrate, here is an example (Python 2 syntax, since str.decode only exists there). Do you think the following code executes fine?
globals()["snyfr".decode("rot13")] = 17
x = false
It actually does, since the first line dynamically creates a variable named false.
You can think of this as the interpreter being 'lazy' about when to look up names: it does so as late as possible, because other bits of the program can fiddle around with its dictionary of known variables.
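The snippet above is Python 2; a Python 3 equivalent of the same trick uses the codecs module:

```python
import codecs

# Dynamically inject a global named 'false' ("snyfr" is rot13 for "false").
globals()[codecs.decode("snyfr", "rot13")] = 17
x = false  # no NameError: the name exists by the time it is looked up
print(x)   # 17
```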
Consider the program
>>> def foo():
... return false
...
>>> def bar():
... global false
... false = False
...
>>> foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in foo
NameError: global name 'false' is not defined
>>> bar()
>>> foo()
False
Notice that the first call to foo raised a NameError, because at the time that foo ran Python didn't know what false was. But bar then modified the global scope and inserted false as another name for False.
This sort of namespace-mucking allows for tremendous flexibility in how one writes programs. Of course, it also removes a lot of things that a more restrictive language could check for you.
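You can observe the two phases directly: compile() accepts the undefined name without complaint, and the NameError only appears once the bytecode actually executes. A small sketch:

```python
# Compilation only checks syntax; 'false' is just a name to be looked
# up later, so this succeeds without error.
code = compile("x = false", "<demo>", "exec")

try:
    exec(code, {})   # the name lookup happens here, at run time
except NameError as exc:
    print(exc)       # name 'false' is not defined
```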

calling execfile() in custom namespace executes code in '__builtin__' namespace

When I call execfile without passing the globals or locals arguments, it creates objects in the current namespace; but if I call execfile and specify a dict for globals (and/or locals), it creates objects in the __builtin__ namespace.
Take the following example:
# exec.py
def myfunc():
    print 'myfunc created in %s namespace' % __name__
exec.py is execfile'd from main.py as follows.
# main.py
print 'execfile in global namespace:'
execfile('exec.py')
myfunc()
print
print 'execfile in custom namespace:'
d = {}
execfile('exec.py', d)
d['myfunc']()
When I run main.py from the command line, I get the following output.
execfile in global namespace:
myfunc created in __main__ namespace
execfile in custom namespace:
myfunc created in __builtin__ namespace
Why is it being run in __builtin__ namespace in the second case?
Furthermore, if I then try to run myfunc from __builtins__, I get an AttributeError. (This is what I would hope happens, but then why is __name__ set to __builtin__?)
>>> __builtins__.myfunc()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: 'module' object has no attribute 'myfunc'
Can anyone explain this behaviour?
Thanks
First off, __name__ is not a namespace; it's a reference to the name of the module it belongs to, i.e. somemod.py -> somemod.__name__ == 'somemod'.
The exception to this is when you run a module as an executable from the command line; then __name__ is '__main__'.
In your example there is a lucky coincidence that the module being run as main is also named main.
execfile executes the contents of the module WITHOUT importing it as a module. As such, __name__ doesn't get set, because it's not a module; it's just an executed sequence of code.
The execfile function is similar to the exec statement. If you look at the documentation for exec you'll see the following paragraph that explains the behavior.
As a side effect, an implementation may insert additional keys into the dictionaries given besides those corresponding to variable names set by the executed code. For example, the current implementation may add a reference to the dictionary of the built-in module __builtin__ under the key __builtins__ (!).
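A Python 3 illustration of the same behavior (exec replaces execfile there; the 'custom' name is just an example value). With no __name__ key in the dict, the lookup falls through to the builtins module's namespace, whose own __name__ is 'builtins' in Python 3 and '__builtin__' in Python 2, which is exactly the question's output:

```python
src = """
def myfunc():
    print('myfunc created in %s namespace' % __name__)
"""

# No __name__ in the dict: lookup falls back to the builtins module.
d = {}
exec(src, d)
d['myfunc']()  # prints: myfunc created in builtins namespace

# Supplying __name__ explicitly avoids the surprise.
d2 = {'__name__': 'custom'}
exec(src, d2)
d2['myfunc']()  # prints: myfunc created in custom namespace
```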
Edit: I now see that my answer applies to one possible interpretation of the question title. My answer does not apply to the actual question asked.
As an aside, I prefer using __import__() over execfile:
module = __import__(module_name)
value = module.__dict__[function_name](arguments)
This also works well when adding to the PYTHONPATH, so that modules in other directories can be imported:
sys.path.insert(position, directory)
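In modern Python the same pattern is usually written with importlib; a sketch, where math and sqrt are stand-ins for module_name and function_name:

```python
import importlib
import sys

# Make the current directory importable, like adding to PYTHONPATH.
sys.path.insert(0, ".")

module = importlib.import_module("math")   # instead of __import__(module_name)
value = getattr(module, "sqrt")(16.0)      # instead of module.__dict__[function_name](arguments)
print(value)  # 4.0
```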
