iter() not working with datetime.now() - python

A simple snippet in Python 3.6.1:
import datetime
j = iter(datetime.datetime.now, None)
next(j)
raises:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
instead of printing out the classic now() behavior with each next().
I've seen similar code working in Python 3.3, am I missing something or has something changed in version 3.6.1?

This is definitely a bug introduced in Python 3.6.0b1. The iter() implementation recently switched to using _PyObject_FastCall() (an optimisation, see issue 27128), and it must be this call that is breaking here.
The same issue arises with other C-level class methods backed by Argument Clinic parsing:
>>> from asyncio import Task
>>> Task.all_tasks()
set()
>>> next(iter(Task.all_tasks, None))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
If you need a work-around, wrap the callable in a functools.partial() object:
from functools import partial
j = iter(partial(datetime.datetime.now), None)
I filed issue 30524 -- iter(classmethod, sentinel) broken for Argument Clinic class methods? with the Python project. The fix for this has landed and is part of 3.6.2rc1.
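A plain lambda should work as a stopgap for the same reason, since the iterator then calls an ordinary Python function rather than the C method directly:
import datetime

# The lambda is a regular Python function, so iter()'s broken C
# fast-call path for METH_FASTCALL methods is never exercised.
j = iter(lambda: datetime.datetime.now(), None)
next(j)  # datetime.datetime(...)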

I assume you're using CPython and not another Python implementation. And I can reproduce the issue with CPython 3.6.1 (I don't have PyPy, Jython, IronPython, ... so I can't check these).
The offender in this case is the replacement of PyObject_Call with _PyObject_CallNoArg in the C implementation of the callable_iterator.__next__ method (your object is a callable_iterator).
PyObject_Call does return a new datetime.datetime instance, while _PyObject_CallNoArg returns NULL (which is roughly the C-level equivalent of raising an exception in Python).
Digging a bit through the CPython source code:
The _PyObject_CallNoArg is just a macro for _PyObject_FastCall which in turn is a macro for _PyObject_FastCallDict.
This _PyObject_FastCallDict function checks the type of the function (C-function or Python function or something else) and delegates to _PyCFunction_FastCallDict in this case because datetime.now is a C function.
Since datetime.datetime.now has the METH_FASTCALL flag it ends up in the fourth case but there _PyStack_UnpackDict returns NULL and the function is never even called.
I'll stop there and let the Python devs figure out what's wrong in there. @Martijn Pieters already filed a bug report and they will fix it (I just hope they fix it soonish).
So it's a bug introduced in 3.6, and until it's fixed you need to make sure the callable isn't a C function with the METH_FASTCALL flag. As a workaround you can wrap it. Apart from the possibilities @Martijn Pieters mentioned, there is also a simple:
def now():
    return datetime.datetime.now()

j = iter(now, None)
next(j)  # datetime.datetime(2017, 5, 31, 14, 23, 1, 95999)

Related

Problems with the Python ctypes module

I'm just messing around to learn about Python's ctypes module, following the official documentation at https://docs.python.org/3.8/library/ctypes.html
Everything works just fine until:
ValueError is raised when you call an stdcall function with the cdecl calling convention, or vice versa:
>>> cdll.kernel32.GetModuleHandleA(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Procedure probably called with not enough arguments (4 bytes missing)
>>>
>>> windll.msvcrt.printf(b"spam")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Procedure probably called with too many arguments (4 bytes in excess)
>>>
That is quoted from the official documentation, while what I get is:
>>> cdll.kernel32.GetModuleHandleA(None)
1374486528
>>> windll.msvcrt.printf(b"spam")
4
According to the MS documentation, these function calls seem to work just fine.
What's more, I also tried to mess around with the argument count so as to raise a ValueError, but this is what I get:
>>> cdll.kernel32.GetModuleHandleA(None,0,0)
1374486528
>>> windll.kernel32.GetModuleHandleA(0,0,0)
1374486528
>>> windll.kernel32.GetModuleHandleA()
0
>>> cdll.kernel32.GetModuleHandleA()
0
It seems the last two calls do return NULL, as there was an error, but there is no ValueError exception.
The only error I got is OSError, just as the documentation example shows.
Can anyone explain this? I created a virtual environment using conda and tested this code on both Python 3.6.12 and Python 3.8.5.
And by the way, regarding the documentation's "ValueError is raised when you call an stdcall function with the cdecl calling convention, or vice versa": I wonder what exactly "call an stdcall function with the cdecl calling convention" means? Maybe just passing a different number of arguments than the function requires?
__stdcall and __cdecl make no difference to 64-bit compilers. There is only one calling convention, the annotations are ignored, and both WinDLL and CDLL work. 32-bit code is where it matters, and there the correct one must be used.
You should still use the appropriate WinDLL or CDLL in scripts if you want the script to work correctly on both 32-bit and 64-bit Python.
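A minimal sketch of that advice (Windows-only, using only the documented loaders):
import ctypes

# kernel32 exports stdcall functions, so use WinDLL; the C runtime
# (msvcrt) uses cdecl, so use CDLL. On 64-bit Python both loaders
# behave identically, but this keeps the script correct on 32-bit.
kernel32 = ctypes.WinDLL("kernel32")
msvcrt = ctypes.CDLL("msvcrt")

print(kernel32.GetModuleHandleA(None))  # module base address, as an int
print(msvcrt.printf(b"spam\n"))         # number of characters printed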

How to clone a function in Cython? I'm getting SystemError: unknown opcode

In CPython this code would work:
import inspect
from types import FunctionType

def f(a, b):  # line 5
    print(a, b)

f_clone = FunctionType(
    f.__code__,
    f.__globals__,
    closure=f.__closure__,
    name=f.__name__
)
f_clone.__annotations__ = {'a': int, 'b': int}
f_clone.__defaults__ = (1, 2)

print(inspect.signature(f_clone))  # (a: int = 1, b: int = 2)
print(inspect.signature(f))        # (a, b)
f_clone()  # 1 2
f(1, 2)    # 1 2
try:
    f()
except TypeError as e:
    print(e)  # f() missing 2 required positional arguments: 'a' and 'b'
However in Cython, when calling f_clone, I get:
XXX lineno: 5, opcode: 0
Traceback (most recent call last):
...
File "test.py", line 5, in f # the line of f's definition
SystemError: unknown opcode
I need this to create a copy of a class's __init__ method on each class creation and modify its signature, while keeping the original __init__ signature untouched.
Edit:
Changes made to signature of copied object must not affect runtime calls and needed only for inspection purposes.
I am relatively convinced this is never going to work well. If I were you I'd modify your code to fail elegantly for unclonable functions (maybe by just using the original __init__ and not replacing it, since this seems to be a purely cosmetic approach to generate prettier docstrings). After that you could submit an issue to the Cython issue tracker - however the maintainers of Cython know that full-introspection compatibility with Python is very challenging, so may not be hugely interested.
One of the main reasons I think you should just handle the error rather than find a workaround is that Cython is not the only method to accelerate Python. For example Numba can generate classes containing JIT accelerated code, or people can write their own functions in C (either as a C-API function, or perhaps wrapped with Ctypes or CFFI). These are all situations where your rather fragile introspection approach is likely to break. Handling the error fixes it for all of these; while you're likely to need an individual workaround for each one, plus all the methods I haven't thought of, plus any that are developed in the future.
Some details about Cython functions: at the moment Cython has a compilation option called binding that can generate functions in two different modes:
With binding=False functions have the type builtin_function_or_method, which has minimal introspection capabilities, and so no __code__, __globals__, __closure__ (or most other) attributes.
With binding=True functions have the type cython_function_or_method. This has improved introspection capabilities, so it does provide most of the expected attributes. However, some of them are nonsense defaults - specifically __code__. The __code__ attribute is expected to contain Python bytecode, but Cython doesn't use Python bytecode (since it compiles to C), so it just provides a dummy attribute.
It looks like Cython defaults to binding=True when compiling a .py file and when compiling a regular (non-cdef) class, giving the behaviour you report. However, when compiling a .pyx file it currently defaults to binding=False. It's possible you may also want to handle the binding=False case in some circumstances too.
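For illustration, here is a minimal sketch of pinning the mode explicitly rather than relying on those defaults (both spellings are standard Cython directives):
# cython: binding=True
# ^ module-level compiler directive; it must appear at the top of the
#   .pyx/.py file, before any code.

# Alternatively per function, in pure-Python mode:
import cython

@cython.binding(True)
def f(a, b):
    print(a, b)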
Having established that trying to create a regular Python function object with the __code__ attribute of a cython_function_or_method isn't going to work, let's look at a few other options:
>>> print(f)
<cyfunction f at 0x7f08a1c63550>
>>> type(f)()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot create 'cython_function_or_method' instances
So you can't create your own cython_function_or_method and populate it from Python - the type does not have a user callable constructor.
copy.copy appears to work, but doesn't actually create a new instance:
>>> import copy
>>> copy.copy(f)
<cyfunction f at 0x7f08a1c63550>
Note, however, that this has exactly the same address - it isn't a copy:
>>> copy.copy(f) is f
True
At which point I'm out of ideas.
What I don't quite get is why you don't use functools.wraps?
import functools

@functools.wraps(f)
def wrapper(*args, **kwargs):
    return f(*args, **kwargs)
This updates wrapper with most of the relevant introspection attributes from f, works for both types of Cython function (to an extent - the binding=False case doesn't provide much useful information), and should work for most other types of function too.
It's possible I'm missing something, but it seems a whole lot less fragile than your scheme of copying code objects.
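Given the edit that the new signature is needed for inspection only, a workable sketch is to combine functools.wraps with an explicit __signature__, which inspect.signature() prefers when present (clone_with_signature is a hypothetical name, and this assumes binding=True so wraps can copy the metadata):
import functools
import inspect

def f(a, b):
    print(a, b)

def clone_with_signature(func, annotations, defaults):
    # Hypothetical helper: wrap func and attach a purely cosmetic
    # signature; calls still pass straight through to func.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.__signature__ = inspect.Signature([
        inspect.Parameter(name, inspect.Parameter.POSITIONAL_OR_KEYWORD,
                          default=default,
                          annotation=annotations.get(name, inspect.Parameter.empty))
        for name, default in defaults.items()
    ])
    return wrapper

f_clone = clone_with_signature(f, {'a': int, 'b': int}, {'a': 1, 'b': 2})
print(inspect.signature(f_clone))  # (a: int = 1, b: int = 2)
print(inspect.signature(f))        # (a, b)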

import filters - TypeError: type() doesn't support MRO entry resolution

Python 3.7.1, filters 1.3.2
>>> import filters
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "E:\anaconda3\lib\site-packages\filters\__init__.py", line 27, in <modu
from filters.extensions import FilterExtensionRegistry
File "E:\anaconda3\lib\site-packages\filters\extensions.py", line 11, in <mo
from class_registry import EntryPointClassRegistry
File "E:\anaconda3\lib\site-packages\class_registry\__init__.py", line 5, in
from .registry import *
File "E:\anaconda3\lib\site-packages\class_registry\registry.py", line 33, i
class BaseRegistry(with_metaclass(ABCMeta, Mapping)):
File "E:\anaconda3\lib\site-packages\six.py", line 827, in __new__
return meta(name, bases, d)
File "E:\anaconda3\lib\abc.py", line 126, in __new__
cls = super().__new__(mcls, name, bases, namespace, **kwargs)
TypeError: type() doesn't support MRO entry resolution; use types.new_class()
Looks like it is related to this and got fixed. However, I have the latest Python 3.7 and filters package. Any ideas?
Maintainer of filters and class-registry here. I apologise that it took me so long to find this!
The issue is caused by a couple of lines in the class-registry package:
class BaseRegistry(with_metaclass(ABCMeta, Mapping)):
    ...

class MutableRegistry(with_metaclass(ABCMeta, BaseRegistry)):
    ...
The error occurs because with_metaclass() creates a dynamic type internally, which conflicts with generic types like Mapping and MutableMapping.
This issue was discussed on https://bugs.python.org/issue33188 and it appears that the outcome was, "works as intended":
This is not a bug but an explicit design decision. Generic classes are a static typing concept and therefore are not supposed to work freely with dynamic class creation. During discussion of PEP 560 it was decided that there should be at least one way to dynamically create generic classes; types.new_class was chosen for this, see https://www.python.org/dev/peps/pep-0560/#dynamic-class-creation-and-types-resolve-bases
Also the exception message is quite clear about this. Unfortunately, PEPs 560 and 557 were discussed in parallel, so not every possible interaction was thought out. But is it critical for dataclasses to call type? I believe there should be no other differences with types.new_class. I would say the latter is even better than type because it correctly treats __prepare__ on the metaclass IIRC. So I would propose to switch from type() to types.new_class() for dynamic creation of dataclasses.
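For illustration, a minimal sketch of that sanctioned route, shaped after the class-registry declaration (my example, not from the bug report):
import types
from abc import ABCMeta
from typing import Mapping

# types.new_class() resolves generic bases through PEP 560's
# __mro_entries__ hook, where a bare type(name, bases, ns) call
# raises the TypeError shown above.
BaseRegistry = types.new_class(
    'BaseRegistry', (Mapping,), {'metaclass': ABCMeta}
)
print(BaseRegistry.__mro__[1])  # <class 'collections.abc.Mapping'>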
There are two possible ways to resolve this issue:
Use add_metaclass() instead of with_metaclass().
Drop support for Python 2 and replace with_metaclass() with Python-3-style base/metaclass declarations.
Both solutions are represented in pull requests submitted by stj.
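For example, the second option would turn the declarations above into plain Python-3 class statements (a sketch of the shape of the fix, not the exact patch):
from abc import ABCMeta
from typing import Mapping

# The metaclass is declared directly instead of going through
# six.with_metaclass(), so no dynamic type() call is involved.
class BaseRegistry(Mapping, metaclass=ABCMeta):
    ...

class MutableRegistry(BaseRegistry, metaclass=ABCMeta):
    ...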
I no longer work at EFL Global [now LenddoEFL], so I don't have direct access to that repo any more. It may be some time before a new version is released; in the meantime, I have forked the project and released a new version that explicitly supports Python 3.7 (and drops support for Python 2):
https://pypi.org/project/phx-filters/

How to prevent overwriting a Python built-in function by accident?

I know that it is a bad idea to name a variable that is the same name as a Python built-in function. But say if a person doesn't know all the "taboo" variable names to avoid (e.g. list, set, etc.), is there a way to make Python at least to stop you (e.g. via error messages) from corrupting built-in functions?
For example, input line 4 below allows me to overwrite / corrupt the built-in function set() without stopping me or producing errors. (This mistake goes unnoticed until input line 6 below, when set() is called.) Ideally I would like Python to stop me at line 4 (instead of waiting until line 6).
Note: following executions are performed in Python 2.7 (iPython) console. (Anaconda Spyder IDE).
In [1]: myset = set([1,2])
In [2]: print(myset)
set([1, 2])
In [3]: myset
Out[3]: {1, 2}
In [4]: set = set([3,4])
In [5]: print(set)
set([3, 4])
In [6]: set
Out[6]: {3, 4}
In [7]: myset2 = set([5,6])
Traceback (most recent call last):
File "<ipython-input-7-6f49577a7a45>", line 1, in <module>
myset2 = set([5,6])
TypeError: 'set' object is not callable
Background: I was following the tutorial at this HackerRank Python Set Challenge. The tutorial involves creating a variable called set (which has the same name as the Python built-in function). I tried out the tutorial line-by-line exactly and got the "set object is not callable" error. The above test is driven by this exercise. (Update: I contacted HackerRank support and they have confirmed they may have made a mistake creating a variable with a built-in name.)
As others have said, in Python the philosophy is to allow users to "misuse" things rather than trying to imagine and prevent misuses, so nothing like this is built in. But, by being so open to being messed around with, Python lets you implement something like what you're talking about, in a limited way*. You can replace certain variable namespace dictionaries with objects that will prevent your favorite variables from being overwritten. (Of course, if this breaks any of your code in unexpected ways, you get to keep both pieces.)
For this, you need to use something like eval(), exec, execfile(), or code.interact(), or override __import__(). These allow you to provide objects that act like dictionaries, which will be used for storing variables. We can create a "safer" replacement dictionary by subclassing dict:
class SafeGlobals(dict):
    def __setitem__(self, name, value):
        if hasattr(__builtins__, name) or name == '__builtins__':
            raise SyntaxError('nope')
        return super(SafeGlobals, self).__setitem__(name, value)

my_globals = SafeGlobals(__builtins__=__builtins__)
With my_globals set as the current namespace, setting a variable like this:
x = 3
Will translate to the following:
my_globals['x'] = 3
The following code will execute a Python file, using our safer dictionary for the top-level namespace:
execfile('safetyfirst.py', SafeGlobals(__builtins__=__builtins__))
An example with code.interact():
>>> code.interact(local=SafeGlobals(__builtins__=__builtins__))
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> x = 2
>>> x
2
>>> dict(y=5)
{'y': 5}
>>> dict = "hi"
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "<stdin>", line 4, in __setitem__
SyntaxError: nope
*Unfortunately, this approach is very limited. It will only prevent overriding built-ins in the top-level namespace. You're free to override built-ins in other namespaces:
>>> def f():
...     set = 1
...     return set
...
>>> f()
1
This is an interesting idea; unfortunately, Python is not very restrictive and does not offer an out-of-the-box solution for such intentions. Shadowing identifiers from outer scopes, including the built-ins, is part of Python's philosophy; it is intended and, in fact, often used. If you disabled this feature somehow, I guess a lot of library code would break at once.
Nevertheless you could create a check function which tests if anything in the current stack has been overridden. For this you would step through all the nested frames you are in and check if their locals also exist in their parent. This is very introspective work and probably not what you want to do but I think it could be done. With such a tool you could use the trace facility of Python to check after each executed line whether the state is still clean; that's the same functionality a debugger uses for step-by-step debugging, so this is again probably not what you want.
It could be done, but it would be like nailing glasses to a wall to make sure you never forget where they are.
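A rough sketch of such a check, simplified to compare each frame's names against the built-in namespace (check_shadowing is a hypothetical helper):
import sys
import __builtin__  # the module is named "builtins" on Python 3

def check_shadowing():
    # Hypothetical helper: walk every frame on the current call stack
    # and collect local/global names that shadow a built-in.
    shadowed = set()
    frame = sys._getframe(1)
    while frame is not None:
        for name in list(frame.f_locals) + list(frame.f_globals):
            if not name.startswith('__') and hasattr(__builtin__, name):
                shadowed.add(name)
        frame = frame.f_back
    return shadowed

def f():
    set = 1  # shadows the built-in, but only inside f
    return check_shadowing()

print(f())  # set(['set'])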
A more practical approach:
For builtins like the ones you mentioned, you can always access them explicitly via __builtins__.set etc. For imported things, import the module and refer to them by their module name (e.g. sys.exit() instead of exit()). And normally one knows when one is going to use an identifier, so just do not override it; e.g. do not create a variable named set if you are going to create a set object.
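For instance (Python 2 spelling; the module is called builtins in Python 3):
import __builtin__

set = set([3, 4])                 # the accidental shadowing from the question
myset2 = __builtin__.set([5, 6])  # still reaches the real built-in
print(myset2)                     # set([5, 6])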

urllib.request.Request - unexpected keyword argument 'method'

Attempting to use the method argument as seen here yields the following error.
Python 3.2.3 (default, Sep 25 2013, 18:22:43)
>>> import urllib.request as r
>>> r.Request('http://example.com', method='POST')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() got an unexpected keyword argument 'method'
>>>
No matter what/where I search, I can't seem to find a solution to my problem.
You're looking at the docs for Python 3.3 but running Python 3.2. In Python 3.2 the Request initializer doesn't have a method argument: http://docs.python.org/3.2/library/urllib.request.html#urllib.request.Request
FWIW depending on what kind of request you make (for example if the request includes a body) urllib will automatically use the appropriate method (i.e. POST). If you need to make a more specialized type of request such as HEAD you need to dig a little deeper. There are other answers on SO that help with that.
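For instance, one common approach on older Pythons is to override Request.get_method() (HeadRequest is a name made up for this sketch):
import urllib.request

class HeadRequest(urllib.request.Request):
    # get_method() picks the HTTP verb (GET without data, POST with data),
    # so overriding it forces a HEAD request even on Python 3.2.
    def get_method(self):
        return 'HEAD'

resp = urllib.request.urlopen(HeadRequest('http://example.com'))
print(resp.status, resp.reason)  # e.g. 200 OK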
