Python C interface PyArg_ParseTuple failing

I have a Python module written in C with a number of functions exposed. One of them has a Python definition of:
def SetPowerSupply(voltage, current, supply):
where voltage = float, current = float, and supply = int. On the C side, I have this:
float voltage, current;
int supply;
if (!PyArg_ParseTuple(args, "ffi", &voltage, &current, &supply))
{
    // Failed to parse
    // ...
}
One of my scripters has a script wherein this function fails to parse the arguments, complaining that an integer is expected. As far as I can tell, an integer is in fact being passed in, since if I do this in the error branch:
PyObject *num = PyNumber_Float(PyTuple_GetItem(args, 0));
voltage = PyFloat_AsDouble(num);
Py_XDECREF(num);
num = PyNumber_Float(PyTuple_GetItem(args, 1));
current = PyFloat_AsDouble(num);
Py_XDECREF(num);
num = PyNumber_Int(PyTuple_GetItem(args, 2));
supply = PyLong_AsLong(num);
Py_XDECREF(num);
... then everything works as expected. Other scripts running through this module do not exhibit this behaviour, and I can see no differences. Both of them call the function the same way:
SetPowerSupply(37.5, 0.5, 1)
SetPowerSupply(0, 0, 1)
In the offending script I can use the manual conversion above as a workaround, but I'd like to understand why PyArg_ParseTuple fails.
Any ideas???
Thank you.
Edit:
The problem was caused by another function that had been called a few calls before this one. It was:
if (!PyArg_ParseTuple(args, "s|siss", &board, &component, &pin, &colorStr, &msg))
{
    // Parsing the pin as an int failed, try as a string
    if (!PyArg_ParseTuple(args, "s|ssss", &board, &component, &sPin, &colorStr, &msg))
    {
        // ...
The purpose of this was basically to overload the third argument to accept either a string or a numerical value. When someone fed it a string, the Python error from the failed parse was never cleared. The updated code resolving the issue follows:
if (!PyArg_ParseTuple(args, "s|siss", &board, &component, &pin, &colorStr, &msg))
{
    PyErr_Clear();
    // Parsing the pin as an int failed, try as a string
    if (!PyArg_ParseTuple(args, "s|ssss", &board, &component, &sPin, &colorStr, &msg))
    {
        // ...
Many thanks to Ignacio for the clue.

One of your other functions is failing to return None when appropriate, and you're catching this error message by accident.

TypeError: (intermediate value)(...) is not a function

Everything works fine when I write the JS logic in a closure as a single JS file, like this:
(function(win){
    //main logic here
    win.expose1 = ....
    win.expose2 = ....
})(window)
but when I insert a logging helper function before that closure in the same JS file,
window.Glog = function(msg){
    console.log(msg)
}
// this was added before the main closure.
(function(win){
    //the former closure that contains the main javascript logic;
})(window)
it complains that there is a TypeError:
Uncaught TypeError: (intermediate value)(...) is not a function
What did I do wrong?
The error is a result of the missing semicolon on the third line:
window.Glog = function(msg) {
console.log(msg);
}; // <--- Add this semicolon
(function(win) {
// ...
})(window);
The ECMAScript specification has specific rules for automatic semicolon insertion, however in this case a semicolon isn't automatically inserted because the parenthesised expression that begins on the next line can be interpreted as an argument list for a function call.
This means that without that semicolon, the anonymous window.Glog function was being invoked with a function as the msg parameter, followed by (window) which was subsequently attempting to invoke whatever was returned.
This is how the code was being interpreted:
window.Glog = function(msg) {
console.log(msg);
}(function(win) {
// ...
})(window);
To make semicolon rules simple
Every line that begins with (, [, `, or an arithmetic operator must start with a semicolon if you want it interpreted as its own statement; otherwise it may accidentally combine with the previous line. All other line breaks get implicit semicolons.
That's it. Done.
Note that /, +, - are the only valid operators you would want to do this for anyway. You would never want a line to begin with '*', since it's a binary operator that could never make sense at the beginning of a line.
You should put the semicolons at the start of the line when doing this. You should not try to "fix" the issue by adding a semicolon to the previous line, or any reordering or moving of the code will cause the issue to potentially manifest again. Many of the other answers (including top answers) make this suggestion, but it's not a good practice.
Why do those particular characters need initial semicolons?
Consider the following:
func()
;[0].concat(myarr).forEach(func)
;(myarr).forEach(func)
;`hello`.forEach(func)
;/hello/.exec(str)
;+0
;-0
By following the rules given, you prevent the above code from being reinterpreted as
func()[0].concat(myarr).forEach(func)(myarr).forEach(func)`hello`.forEach(func)/hello/.exec(str)+0-0
Additional Notes
To spell out what would happen otherwise: the brackets will index, the parentheses will be treated as an argument list, the backtick will turn into a tagged template, the regex will turn into division, and explicitly +/- signed numbers will turn into plus/minus operators.
Of course, you can avoid all of this by simply ending every statement with a semicolon, but don't assume that doing so lets you write code as if semicolons were mandatory, C-style. When you don't end a line with a semicolon, JavaScript may still insert one on your behalf against your wishes. So keep in mind statements like
return // Implicit semicolon, will return undefined.
(1+2);
i // Implicit semicolon on this line
++; // But, if you really intended "i++;"
// and you actually wrote it like this,
// you need help.
The above applies to return, continue, break, ++, and --. Any linter will catch the former case as dead code, and the latter as a ++/-- syntax error.
Finally, if you want file concatenation to work, make sure each file ends with a semicolon. If you're using a bundler program (recommended), it should do this automatically.
Error Case:
var userListQuery = {
    userId: {
        $in: result
    },
    "isCameraAdded": true
}
( cameraInfo.findtext != "" ) ? searchQuery : userListQuery;
Output:
TypeError: (intermediate value)(intermediate value) is not a function
Fix: add a semicolon (;) to separate the expressions:
var userListQuery = {
    userId: {
        $in: result
    },
    "isCameraAdded": true
}; // <--- without this semicolon, the error is produced
( cameraInfo.findtext != "" ) ? searchQuery : userListQuery;
For me it was much simpler, but it took me a while to figure out. We basically had this in our .jslib:
some_array.forEach(item => {
    do_stuff(item);
});
It turns out Unity (emscripten?) just doesn't like that syntax. We replaced it with a good old for loop and it stopped complaining right away.
I really hate that it doesn't show the line it's complaining about, but anyway: fool me twice, shame on me.
I noticed the same issue with a base class whose methods I had defined using arrow functions, when inheriting from it and overriding one of them:
class C {
    x = () => 1;
};
class CC extends C {
    x = (foo) => super.x() + foo;
};
let add = new CC;
console.log(add.x(4));
This is solved by defining the parent class's method without arrow functions:
class C {
    x() {
        return 1;
    };
};
class CC extends C {
    x = foo => super.x() + foo;
};
let add = new CC;
console.log(add.x(4));
Error Case:
var handler = function(parameters) {
    console.log(parameters);
}
(function() { // IIFE
    // some code
})();
Output: TypeError: (intermediate value)(intermediate value) is not a function
How to fix it: add a semicolon (;) to separate the expressions.
Fixed:
var handler = function(parameters) {
    console.log(parameters);
}; // <--- add this semicolon (without it, the error occurs)
(function() { // IIFE
    // some code
})();
Why does this error occur?
Reason: the automatic semicolon insertion rules specified in the ECMAScript standard.
I faced the same issue in this situation:
let brand, capacity, color;
let car = {
    brand: 'benz',
    capacity: 80,
    color: 'yellow',
}
({ color, capacity, brand } = car);
And with just a ; at the end of the car declaration, the error disappeared:
let car = {
    brand: 'benz',
    capacity: 80,
    color: 'yellow',
}; // <-------------- here a semicolon is needed
In other words, a semicolon is required before ({ color, capacity, brand } = car);.
I faced this issue when I created an ES2015 class where a property name was the same as a method name, e.g.:
class Test {
    constructor () {
        this.test = 'test'
    }
    test (test) {
        this.test = test
    }
}
let t = new Test()
t.test('new Test')
Please note this implementation was in NodeJS 6.10.
As a workaround (if you do not want to use the boring 'setTest' method name), you could use a prefix for your 'private' properties (like _test).
Open your Developer Tools in jsfiddle.
My case: (Angular, PrimeNG)
My error: the same "(intermediate value)(...) is not a function" TypeError (screenshot omitted).
My versions:
"@angular/animations": "^12.2.0",
"@angular/cdk": "^12.2.0",
"@angular/common": "^12.2.0",
"@angular/compiler": "^12.2.0",
"@angular/core": "^12.2.0",
"@angular/forms": "^12.2.0",
"@angular/platform-browser": "^12.2.0",
"@angular/platform-browser-dynamic": "^12.2.0",
"@angular/router": "^12.2.0",
"primeng": "^13.0.0-rc.2",
"quill": "^1.3.7"
My solution:
In node_modules/primeng/fesm2015/primeng-editor.mjs, update the import for Quill (the original answer shows this in an image).
I had the same error in React, and it took me ages to figure out the problem. The cause was not wrapping my app in the context provider.
Go to your index.jsx (or main.jsx in ViteJS) and check that you have the Context provider wrapped around your app.
If coming from Ionic Angular, update to the latest version:
ng update @ionic/angular

How does one deal with various errors in statically typed languages (or when typing in general)

For context, my primary language is Python, and I'm just beginning to use annotations. This is in preparation for learning C++ (and because, intuitively, it feels better).
I have something like this:
from models import UserLocation
from typing import Optional
import cluster_module
import db

def get_user_location(user_id: int, data: list) -> Optional[UserLocation]:
    loc = UserLocation.query.filter_by(user_id=user_id).one_or_none()
    if loc:
        return loc
    try:
        clusters = cluster_module.cluster(data)
    except ValueError:
        # cluster() raises if there is not enough data to cluster
        return None
    if list(clusters.keys()) == [-1]:
        # With enough data to cluster, the cluster with index -1 holds all
        # data that didn't fit into any cluster; it's possible for NO data
        # to fit into a cluster.
        return None
    loc = UserLocation(user_id=user_id, location=clusters[0].center)
    db.session.add(loc)
    db.session.commit()
    return loc
So, I use typing.Optional to ensure that I can return None in case there's an error (if I understand correctly, the static-typing-language equivalent of this would be to return a null pointer of the appropriate type). Though, how does one distinguish between the two errors? What I'd like to do, for example, is return -1 if there's not enough data to cluster and -2 if there's data, but none of them fit into a cluster (or some similar thing). In Python, this is easy enough (because it isn't statically typed). Even with mypy, I can say something like typing.Union[UserLocation, int].
But, how does one do this in, say, C++ or Java? Would a Java programmer need to do something like set the function to return int, and return the ID of UserLocation instead of the object itself (then, whatever code uses the get_user_location function would itself do the lookup)? Is there runtime benefit to doing this, or is it just restructuring the code to fit the fact that a language is statically typed?
I believe I understand most of the obvious benefits of static typing w.r.t. code readability, compile-time, and efficiency at runtime—but I'm not sure what to make of this particular issue.
In a nutshell: How does one deal with functions (which return a non-basic type) indicating they ran into different errors in statically typed languages?
The direct C++ equivalent to the Python solution would be std::variant<T, U>, where T is the expected return value and U the error code type. You can then check which of the types the variant contains and go from there. For example:
#include <cstdlib>
#include <iostream>
#include <string>
#include <variant>

using t_error_code = int;

// Might return either `std::string` OR `t_error_code`
std::variant<std::string, t_error_code> foo()
{
    // This would cause a `t_error_code` to be returned
    //return 10;

    // This causes an `std::string` to be returned
    return "Hello, World!";
}

int main()
{
    auto result = foo();
    // Similar to the Python `if isinstance(result, t_error_code)`
    if (std::holds_alternative<t_error_code>(result))
    {
        const auto error_code = std::get<t_error_code>(result);
        std::cout << "error " << error_code << std::endl;
        return EXIT_FAILURE;
    }
    std::cout << std::get<std::string>(result) << std::endl;
}
However this isn't often seen in practice. If a function is expected to fail, then a single failed return value like a nullptr or end iterator suffices. Such failures are expected and aren't errors. If failure is unexpected, exceptions are preferred which also eliminates the problem you describe here. It's unusual to both expect failure and care about the details of why the failure occurred.
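Translating that back to the question's Python: a Union return is the moral equivalent of std::variant, and isinstance plays the role of std::holds_alternative. Below is a minimal sketch of the tagged-union style, with the clustering logic stubbed out and all names (mean_location, LocationError) hypothetical:

```python
from enum import Enum
from typing import List, Union

class LocationError(Enum):
    NOT_ENOUGH_DATA = -1  # too little data to cluster
    NO_CLUSTER = -2       # data exists, but none of it fit into a cluster

def mean_location(points: List[float]) -> Union[float, LocationError]:
    if len(points) < 2:
        return LocationError.NOT_ENOUGH_DATA
    clustered = [p for p in points if p >= 0]  # stand-in for real clustering
    if not clustered:
        return LocationError.NO_CLUSTER
    return sum(clustered) / len(clustered)

result = mean_location([1.0, 2.0, 3.0])
if isinstance(result, LocationError):  # like std::holds_alternative
    print("failed:", result)
else:
    print("location:", result)         # prints "location: 2.0"
```

mypy narrows the Union on the isinstance check in much the same way that std::get is only safely reachable after std::holds_alternative.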

Arbitrary builtin error on UUID call

Sorry, this was not a good question [edited, revised, summarized and diagnosed].
I have a Python C API module that works with UUIDs. I will omit error checking, but it is done for all Python and internal functions. [edit: OK, sorry about that, my bad... see the diagnosis at the bottom]
// Get the raw data through get_bytes method
bytes_uuid = PyObject_CallMethod(pyuuid, "get_bytes", NULL);
uuid.setBytes(PyString_AsString(bytes_uuid));
Py_DECREF(bytes_uuid);
This generally works as expected. To create UUIDs I use:
// Call constructor
PyObject *UUIDkwargs = Py_BuildValue ("{s:s#}", "bytes", uuid.getBytes(), 16);
PyObject *emptyArgs = PyTuple_New(0);
ret = PyObject_Call(uuidClass, emptyArgs, UUIDkwargs);
Py_DECREF(UUIDkwargs);
Py_DECREF(emptyArgs);
return ret;
(lots of things omitted for readability).
It worked in most functions but not in a certain one, where it failed in a chr() call inside the uuid module itself.
DIAGNOSIS: I performed a call to PyObject_IsInstance and checked for 0 but not for -1. The error was there, but uncaught, and the first subsequent call to a builtin failed. Well, not quite the first call: a chr() call with a non-constant argument.
Because there was a lot of C code in between, I didn't expect that to be the problem.

NULL return when calling a Python function from a C extension

I am calling a function in a Python module from a C extension using the method suggested in the 2.7 tutorial. Here is my code in the C extension:
result = PyObject_CallObject(pFailureFormatFn, failureObj);
if (result)
{
    // We got a good result from the function
    if (PyArg_ParseTuple(result, "OO", &resultDict, &resultString))
    {
        // Decode success
        < snip >
        Py_DECREF(resultDict);
        Py_DECREF(resultString);
    }
    Py_DECREF(result);
}
else
{
    // Bad result from the function
    logEvent("The call to pFailureFormatFn returned a NULL result");
    PyErr_Clear();
}
I find that this call returns good results for a number of invocations and then starts returning NULL for all invocations until the program exits. The Python function being called (call it funcA) calls another function that calls itself recursively (call that one funcB). I have determined that the problem occurs in funcB.
My question is how can I find out what the problem in funcB is? I have used gdb to run the program and nothing bad happens in there. That is, the NULL result is returned and execution continues. I want to figure out what is causing the NULL return (can I see the Python stack trace?).
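Two low-effort ways to surface the traceback: on the C side, call PyErr_Print() (instead of only PyErr_Clear()) when the result is NULL, which writes the Python stack trace to stderr; or wrap the Python entry point so it logs the traceback before re-raising. A sketch of the wrapper approach, with funcA/funcB as hypothetical stand-ins for the real functions:

```python
import functools
import traceback

def log_exceptions(func):
    """Print the Python traceback before propagating, so a C caller
    that only sees a NULL return still leaves a trace behind."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except BaseException:
            print(traceback.format_exc())
            raise
    return wrapper

@log_exceptions
def funcA(depth):
    return funcB(depth)

def funcB(n):
    # recursive helper; raises once it is fed bad input
    if n < 0:
        raise ValueError("negative depth")
    return n if n == 0 else funcB(n - 1)

print(funcA(3))  # prints 0
```

A failing call such as funcA(-1) prints the full traceback and then re-raises, which the C side sees as the usual NULL return.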

Assignment into Python 3.x Buffers with itemsize > 1

I am trying to expose a buffer of image pixel information (32 bit RGBA) through the Python 3.x buffer interface. After quite a bit of playing around, I was able to get this working like so:
int Image_get_buffer(PyObject* self, Py_buffer* view, int flags)
{
    int img_len;
    void* img_bytes;
    // Do my image fetch magic
    get_image_pixel_data(self, &img_bytes, &img_len);
    // Let python fill my buffer
    return PyBuffer_FillInfo(view, self, img_bytes, img_len, 0, flags);
}
And in Python I can play with it like so:
mv = memoryview(image)
print(mv[0])                   # prints b'\x00'
mv[0] = b'\xFF'                # set the first pixel's red component to full
mv[0:4] = b'\xFF\xFF\xFF\xFF'  # set the first pixel to white
And that works splendidly. However, it would be great if I could work with the full pixel value (int, 4 byte) instead of individual bytes, so I modified the buffer fetch like so:
int Image_get_buffer(PyObject* self, Py_buffer* view, int flags)
{
    int img_len;
    void* img_bytes;
    // Do my image fetch magic
    get_image_pixel_data(self, &img_bytes, &img_len);
    // Fill my buffer manually (derived from the PyBuffer_FillInfo source)
    Py_INCREF(self);
    view->readonly = 0;
    view->obj = self;
    view->buf = img_bytes;
    view->itemsize = 4;
    view->ndim = 1;
    view->len = img_len;
    view->suboffsets = NULL;
    view->format = NULL;
    if ((flags & PyBUF_FORMAT) == PyBUF_FORMAT)
        view->format = "I";
    view->shape = NULL;
    if ((flags & PyBUF_ND) == PyBUF_ND)
    {
        Py_ssize_t shape[] = { (int)(img_len/4) };
        view->shape = shape;
    }
    view->strides = NULL;
    if ((flags & PyBUF_STRIDED) == PyBUF_STRIDED)
    {
        Py_ssize_t strides[] = { 4 };
        view->strides = strides;
    }
    return 0;
}
This actually returns the data and I can read it correctly, but any attempt to assign a value into it now fails!
mv = memoryview(image)
print(mv[0]) # prints b'\x00\x00\x00\x00'
mv[0] = 0xFFFFFFFF # ERROR (1)
mv[0] = b'\xFF\xFF\xFF\xFF' # ERROR! (2)
mv[0] = mv[0] # ERROR?!? (3)
In case 1 the error informs me that 'int' does not support the buffer interface, which is a shame and a bit confusing (I did specify that the buffer format was "I", after all), but I can deal with that. In cases 2 and 3, though, things get really weird: both cases give me a TypeError reading mismatching item sizes for "my.Image" and "bytes" (where my.Image is, obviously, my image type).
This is very confusing to me, since the data I'm passing in is obviously the same size as what I get out of that element. It seems as though buffers simply stop allowing assignment if the itemsize is greater than 1. Of course, the documentation for this interface is really sparse, and perusing the Python code doesn't give any usage examples, so I'm fairly stuck. Am I missing some snippet of documentation that states "buffers become essentially useless when itemsize > 1", am I doing something wrong that I can't see, or is this a bug in Python? (Testing against 3.1.1.)
Thanks for any insight you can give on this (admittedly advanced) issue!
I found this in the Python source (Objects/memoryobject.c, in the function memory_ass_sub):
/* XXX should we allow assignment of different item sizes
   as long as the byte length is the same?
   (e.g. assign 2 shorts to a 4-byte slice) */
if (srcview.itemsize != view->itemsize) {
    PyErr_Format(PyExc_TypeError,
        "mismatching item sizes for \"%.200s\" and \"%.200s\"",
        view->obj->ob_type->tp_name, srcview.obj->ob_type->tp_name);
    goto _error;
}
That's the source of the latter two errors. It seems that even mv[0]'s itemsize doesn't end up equal to the view's own.
Update
Here's what I think is going on. When you try to assign something in mv, it calls memory_ass_sub in Objects/memoryobject.c, but that function takes only a PyObject as input. This object is then changed into a buffer inside using the PyObject_GetBuffer function even though in the case of mv[0] it is already a buffer (and the buffer you want!). My guess is that this function takes the object and makes it into a simple buffer of itemsize=1 regardless of whether it is already a buffer or not. That is why you get the mismatching item sizes even for
mv[0] = mv[0]
The problem with the first assignment,
mv[0] = 0xFFFFFFFF
stems (I think) from checking whether the int can be used as a buffer, which it currently isn't set up for, as far as I understand.
In other words, the buffer system currently can't handle item sizes bigger than 1. It doesn't look like it is far off, but it would take a bit more work on your end. If you do get it working, you should probably submit the changes back to the main Python distribution.
Another Update
The error from your first attempt at assigning mv[0] stems from the int failing PyObject_CheckBuffer when it is called on it. Apparently the system only handles copies from objects that support the buffer interface. This seems like it should be changed too.
Conclusion
Currently the Python buffer system can't handle items with itemsize > 1 as you guessed. Also, it can't handle assignments to a buffer from non-bufferable objects such as ints.
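As a footnote for later readers: newer CPython versions (3.3+) added memoryview.cast(), which reinterprets a byte-oriented buffer with a larger itemsize, so whole-pixel reads and writes work without any custom getbuffer changes. A quick sketch using a plain bytearray in place of the image object:

```python
buf = bytearray(8)              # two 32-bit RGBA pixels, all zero
mv = memoryview(buf).cast('I')  # view the same memory as 4-byte unsigned ints
print(mv.itemsize)              # prints 4
mv[0] = 0xFFFFFFFF              # set the whole first pixel in one assignment
print(bytes(buf[:4]))           # prints b'\xff\xff\xff\xff'
```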
