How can I access a Rust Iterator from Python using PyO3?

I'm quite new with Rust, and my first 'serious' project has involved writing a Python wrapper for a small Rust library using PyO3. This has mostly been quite painless, but I'm struggling to work out how to expose lazy iterators over Rust Vecs to Python code.
So far, I have been collecting the values produced by the iterator and returning a list, which obviously isn't the best solution. Here's some code which illustrates my problem:
use pyo3::prelude::*;

// The Rust Iterator, from the library I'm wrapping.
pub struct RustIterator<'a> {
    position: usize,
    view: &'a Vec<isize>,
}

impl<'a> Iterator for RustIterator<'a> {
    type Item = &'a isize;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.view.get(self.position);
        if let Some(_) = result { self.position += 1 };
        result
    }
}

// The Rust struct, from the library I'm wrapping.
struct RustStruct {
    v: Vec<isize>,
}

impl RustStruct {
    fn iter(&self) -> RustIterator {
        RustIterator { position: 0, view: &self.v }
    }
}

// The Python wrapper class, which exposes the
// functions of RustStruct in a Python-friendly way.
#[pyclass]
struct PyClass {
    rust_struct: RustStruct,
}

#[pymethods]
impl PyClass {
    #[new]
    fn new(v: Vec<isize>) -> Self {
        let rust_struct = RustStruct { v };
        Self { rust_struct }
    }

    // This is what I'm doing so far, which works
    // but doesn't iterate lazily.
    fn iter(&self) -> Vec<isize> {
        let mut output_v = Vec::new();
        for item in self.rust_struct.iter() {
            output_v.push(*item);
        }
        output_v
    }
}
I've tried to wrap the RustIterator class with a Python wrapper, but I can't use PyO3's #[pyclass] proc. macro with lifetime parameters. I looked into pyo3::types::PyIterator but this looks like a way to access a Python iterator from Rust rather than the other way around.
How can I access a lazy iterator over RustStruct.v in Python? It's safe to assume that the type contained in the Vec always derives Copy and Clone, and answers which require some code on the Python end are okay (but less ideal).

You can make your RustIterator a pyclass and then implement the relevant protocol trait (PyIterProtocol) on it, delegating to the Rust iterator itself.
Not tested, but something like:
#[pyclass]
pub struct RustIterator<'a> {
    position: usize,
    view: &'a Vec<isize>,
}

impl<'a> Iterator for RustIterator<'a> {
    type Item = &'a isize;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.view.get(self.position);
        if let Some(_) = result { self.position += 1 };
        result
    }
}

#[pyproto]
impl PyIterProtocol for RustIterator<'_> {
    fn __next__(mut slf: PyRefMut<Self>) -> IterNextOutput<isize, &'static str> {
        match slf.next() {
            Some(value) => IterNextOutput::Yield(*value),
            None => IterNextOutput::Return("Ended"),
        }
    }
}
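As the question notes, though, #[pyclass] won't accept the lifetime on RustIterator. Since the element type is Copy, one way to sidestep that is to give the Python-facing iterator its own owned copy of the data. Below is a minimal sketch against the pre-0.16 #[pyproto] API used above (newer PyO3 versions deprecate #[pyproto] and move these dunders into #[pymethods]); PyVecIterator is an illustrative name, not part of the original code:

use pyo3::prelude::*;
use pyo3::class::iter::{IterNextOutput, PyIterProtocol};

// Owns a copy of the data, so no lifetime parameter is needed on the pyclass.
#[pyclass]
struct PyVecIterator {
    items: Vec<isize>,
    position: usize,
}

#[pyproto]
impl PyIterProtocol for PyVecIterator {
    fn __iter__(slf: PyRef<Self>) -> PyRef<Self> {
        slf
    }

    fn __next__(mut slf: PyRefMut<Self>) -> IterNextOutput<isize, &'static str> {
        match slf.items.get(slf.position).copied() {
            Some(value) => {
                slf.position += 1;
                IterNextOutput::Yield(value)
            }
            None => IterNextOutput::Return("Ended"),
        }
    }
}

#[pyproto]
impl PyIterProtocol for PyClass {
    // Called by Python's iter(); clones the Vec once, then yields lazily.
    fn __iter__(slf: PyRef<Self>) -> PyResult<Py<PyVecIterator>> {
        let iter = PyVecIterator {
            items: slf.rust_struct.v.clone(),
            position: 0,
        };
        Py::new(slf.py(), iter)
    }
}

From Python, for x in PyClass([1, 2, 3]) would then pull values one at a time through __next__; only the one-time clone of the Vec is eager.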

Related

Implementing pymemcache from python to rust

I am migrating a project from Python to Rust, more specifically right now a file which uses the pymemcache library to call the set and get methods.
In order to set and get the cache a class was created, something like this:
class Example:
    def __init__(self):
        # Create connection to memcached server

    def set_cache(self, ...):
        # Set cache

    def get_cache(self, ...):
        # Get cache

Instance = Example()
I know OOP concepts like classes are implemented differently in Rust compared to Python.
How do you guys think I should approach this? Build a struct and implement methods using the memcache crate, or build a trait and call it as needed?
Edit [7/11/2022]:
After reading the comments, this is the first model I have for the implementation of the main memcache methods:
extern crate memcache;
use memcache::{Client, MemcacheError};

pub struct CacheManager {
    client: memcache::Client,
}

impl CacheManager {
    pub fn __ini__() -> Self {
        CacheManager {
            client: memcache::Client::connect("memcache://localhost:11211").unwrap(),
        }
    }

    pub fn set_cache(&self, key: &str, val: &str, expire: u32) -> bool {
        self.client.set(key, val, expire).unwrap();
        self.client.replace(" ", "", 100000000).unwrap();
        true
    }

    pub fn get_cache(&self, key: &str) -> Result<Option<String>, MemcacheError> {
        self.client.replace(" ", "", 100000000).unwrap();
        self.client.get(key)
    }

    pub fn get_or_set(self, key: &str, val: &str) -> bool {
        // If status cache already exists and value is the same then do nothing, else set it
        let cache_val = self.get_cache(key);
        match cache_val.unwrap() {
            Some(i) => {
                if (i.parse::<i32>().unwrap() - val.parse::<i32>().unwrap()) < 1 {
                    true
                } else {
                    false
                }
            }
            None => {
                self.set_cache(key, val, 86400);
                false
            }
        }
    }
}
What do you guys think?
I will build some testing mod to try it out now.
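No answer appears here, but regarding the struct-vs-trait question above, a hedged sketch of the trait route (assuming only the memcache crate's Client::set/get signatures already used in the snippet): keep the concrete CacheManager, and put a small trait in front of it so callers and test doubles depend on the abstraction rather than on memcached directly. The Cache trait and its method names are illustrative, not from the original post.

use memcache::MemcacheError;

// Hypothetical abstraction over the concrete CacheManager above.
pub trait Cache {
    fn set(&self, key: &str, val: &str, expire: u32) -> Result<(), MemcacheError>;
    fn get(&self, key: &str) -> Result<Option<String>, MemcacheError>;
}

impl Cache for CacheManager {
    fn set(&self, key: &str, val: &str, expire: u32) -> Result<(), MemcacheError> {
        // Delegate straight to the memcache client owned by CacheManager.
        self.client.set(key, val, expire)
    }

    fn get(&self, key: &str) -> Result<Option<String>, MemcacheError> {
        self.client.get(key)
    }
}

A test double could then implement Cache over a plain HashMap, so the rest of the code can be exercised without a running memcached.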

How to implement rust-cpython as_object() and into_object()?

I am trying to implement the trait PythonObject for a custom type.
impl PythonObject for PyTerm {
    #[inline]
    fn as_object(&self) -> &PyObject {
        // ???
    }

    #[inline]
    fn into_object(self) -> PyObject {
        // ???
    }

    #[inline]
    unsafe fn unchecked_downcast_from(o: PyObject) -> PyTerm {
        std::mem::transmute(o)
    }

    #[inline]
    unsafe fn unchecked_downcast_borrow_from(o: &PyObject) -> &PyTerm {
        std::mem::transmute(o)
    }
}
PyTerm is defined as a custom struct:
struct PyTerm {
    // not important fields
}
You may be interested in cpython::pyobject_newtype! (sources).
It looks like the macro is the blessed way to define PythonObjects.
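A hedged sketch of how that might look for PyTerm, assuming (per the linked sources) that pyobject_newtype! expects the type to be a plain newtype over PyObject; the macro also has arms that take a type-check function and type object for wrapping existing Python types, so check the sources for the exact invocation:

#[macro_use]
extern crate cpython;

use cpython::PyObject;

// The generated as_object/into_object/unchecked_downcast_* impls rely on
// PyTerm having the same layout as PyObject, so it has to be a plain
// newtype; any extra fields would need to live elsewhere.
pub struct PyTerm(PyObject);

pyobject_newtype!(PyTerm);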

Using pyo3 pyclass on a rust struct that contains types from a different crate

I have a rust struct that uses the pyo3 pyclass macro to allow using in python. This works fine for simple structs but if I have a struct that contains a type from a different library it becomes more difficult.
Example:
use geo::Point;

#[pyclass]
#[derive(Clone, Copy)]
pub struct CellState {
    pub id: u32,
    pub position: Point<f64>,
    pub population: u32,
}
Above we use the Point type from the rust geo library. The compiler provides the following error:
the trait `pyo3::PyClass` is not implemented for `geo::Point<f64>`
If I then try to implement PyClass on the Point:
impl PyClass for Point<f64> {}
It gives me the following compiler error:
impl doesn't use only types from inside the current crate
Any ideas on a clean and simple way to resolve this?
Until a better answer comes up, my solution was to nest the struct inside a wrapper pyclass, but I then had to manually write the getters and setters.
use geo::point;

#[pyclass]
#[derive(Clone)]
pub struct CellStatePy {
    pub inner: CellState,
}

#[pymethods]
impl CellStatePy {
    #[new]
    pub fn new(id: u32, pos: (f64, f64), population: u32) -> Self {
        CellStatePy {
            inner: CellState {
                id,
                position: point!(x: pos.0, y: pos.1),
                population,
            },
        }
    }
}
Then implement PyObjectProtocol:
#[pyproto]
impl PyObjectProtocol for CellStatePy {
    fn __str__(&self) -> PyResult<&'static str> {
        Ok("CellStatePy")
    }

    fn __repr__(&self) -> PyResult<String> {
        Ok(format!("CellStateObj id: {:?}", self.inner.id))
    }

    fn __getattr__(&self, name: &str) -> PyResult<String> {
        let out: String = match name {
            "id" => self.inner.id.to_string(),
            "position" => format!("{},{}", self.inner.position.x(), self.inner.position.y()),
            "population" => format!("{}", self.inner.population),
            &_ => "INVALID FIELD".to_owned(),
        };
        Ok(out)
    }
}
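For what it's worth, a hedged alternative to routing attribute access through __getattr__ is per-field #[getter] methods that convert the nested geo types into Python-friendly values. The sketch below assumes the getters are added to the #[pymethods] block shown earlier (or that PyO3's multiple-pymethods feature is enabled for a second block):

#[pymethods]
impl CellStatePy {
    // Each getter converts from the wrapped CellState, so geo::Point never
    // has to implement PyClass itself.
    #[getter]
    fn id(&self) -> u32 {
        self.inner.id
    }

    #[getter]
    fn position(&self) -> (f64, f64) {
        (self.inner.position.x(), self.inner.position.y())
    }

    #[getter]
    fn population(&self) -> u32 {
        self.inner.population
    }
}

On the Python side, cell.position then comes back as an (x, y) tuple of floats rather than a formatted string.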

Making python generator via c++20 coroutines

Let's say I have this python code:
def double_inputs():
    while True:
        x = yield
        yield x * 2

gen = double_inputs()
next(gen)
print(gen.send(1))
It prints "2", just as expected.
I can make a generator in c++20 like that:
#include <coroutine>
#include <iostream>
#include <string>

template <class T>
struct generator {
    struct promise_type;
    using coro_handle = std::coroutine_handle<promise_type>;

    struct promise_type {
        T current_value;

        auto get_return_object() { return generator{coro_handle::from_promise(*this)}; }
        auto initial_suspend() { return std::suspend_always{}; }
        auto final_suspend() noexcept { return std::suspend_always{}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }

        auto yield_value(T value) {
            current_value = value;
            return std::suspend_always{};
        }
    };

    bool next() { return coro ? (coro.resume(), !coro.done()) : false; }
    T value() { return coro.promise().current_value; }

    generator(generator const &rhs) = delete;
    generator(generator &&rhs)
        : coro(rhs.coro)
    {
        rhs.coro = nullptr;
    }
    ~generator() {
        if (coro)
            coro.destroy();
    }

private:
    generator(coro_handle h) : coro(h) {}
    coro_handle coro;
};

generator<char> hello() {
    // TODO: send string here via co_await, but HOW???
    std::string word = "hello world";
    for (auto &ch : word) {
        co_yield ch;
    }
}

int main(int, char **) {
    for (auto i = hello(); i.next();) {
        std::cout << i.value() << ' ';
    }
}
This generator just produces a string letter by letter, but the string is hardcoded in it. In python, it is possible not only to yield something FROM the generator but to yield something TO it too. I believe it could be done via co_await in C++.
I need it to work like this:
generator<char> hello() {
    std::string word = co_await producer; // Wait string from producer somehow
    for (auto &ch : word) {
        co_yield ch;
    }
}

int main(int, char**) {
    auto gen = hello();          // make consumer
    producer("hello world");     // produce string
    for (; gen.next();) {
        std::cout << gen.value() << ' '; // consume string letter by letter
    }
}
How can I achieve that? How to make this "producer" using c++20 coroutines?
You have essentially two problems to overcome if you want to do this.
The first is that C++ is a statically typed language. This means that the types of everything involved need to be known at compile time. This is why your generator type needs to be a template, so that the user can specify what type it shepherds from the coroutine to the caller.
So if you want to have this bi-directional interface, then something on your hello function must specify both the output type and the input type.
The simplest way to go about this is to just create an object and pass a non-const reference to that object to the generator. Each time it does a co_yield, the caller can modify the referenced object and then ask for a new value. The coroutine can read from the reference and see the given data.
However, if you insist on using the future type for the coroutine as both output and input, then you need to both solve the first problem (by making your generator template take OutputType and InputType) as well as this second problem.
See, your goal is to get a value to the coroutine. The problem is that the source of that value (the function calling your coroutine) has a future object. But the coroutine cannot access the future object. Nor can it access the promise object that the future references.
Or at least, it can't do so easily.
There are two ways to go about this, with different use cases. The first manipulates the coroutine machinery to backdoor a way into the promise. The second manipulates a property of co_yield to do basically the same thing.
Transform
The promise object for a coroutine is usually hidden and inaccessible from the coroutine. It is accessible to the future object, which the promise creates and which acts as an interface to the promised data. But it is also accessible during certain parts of the co_await machinery.
Specifically, when you perform a co_await on any expression in a coroutine, the machinery looks at your promise type to see if it has a function called await_transform. If so, it will call that promise object's await_transform on every expression you co_await on (at least, in a co_await that you directly write, not implicit awaits, such as the one created by co_yield).
As such, we need to do two things: create an overload of await_transform on the promise type, and create a type whose sole purpose is to allow us to call that await_transform function.
So that would look something like this:
struct generator_input {};
...
//Within the promise type:
auto await_transform(generator_input);
One quick note. The downside of using await_transform like this is that, by specifying even one overload of this function for our promise, we impact every co_await in any coroutine that uses this type. For a generator coroutine, that's not very important, since there's not much reason to co_await unless you're doing a hack like this. But if you were creating a more general mechanism that could distinctly await on arbitrary awaitables as part of its generation, you'd have a problem.
OK, so we have this await_transform function; what does this function need to do? It needs to return an awaitable object, since co_await is going to await on it. But the purpose of this awaitable object is to deliver a reference to the input type. Fortunately, the mechanism co_await uses to convert the awaitable into a value is provided by the awaitable's await_resume method. So ours can just return an InputType&:
// Within the `generator<OutputType, InputType>`:
struct passthru_value
{
    InputType &ret_;

    bool await_ready() { return true; }
    void await_suspend(coro_handle) {}
    InputType &await_resume() { return ret_; }
};

// Within the promise type:
auto await_transform(generator_input)
{
    return passthru_value{input_value}; // Where `input_value` is the `InputType` object stored by the promise.
}
This gives the coroutine access to the value, by invoking co_await generator_input{};. Note that this returns a reference to the object.
The generator type can easily be modified to allow the ability to modify an InputType object stored in the promise. Simply add a pair of send functions for overwriting the input value:
void send(const InputType &input)
{
    coro.promise().input_value = input;
}

void send(InputType &&input)
{
    coro.promise().input_value = std::move(input);
}
This represents an asymmetric transport mechanism. The coroutine retrieves a value at a place and time of its own choosing. As such, it is under no real obligation to respond instantly to any changes. This is good in some respects, as it allows a coroutine to insulate itself from deleterious changes. If you're using a range-based for loop over a container, that container cannot be directly modified (in most ways) by the outside world or else your program will exhibit UB. So if the coroutine is fragile in that way, it can copy the data from the user and thus prevent the user from modifying it.
All in all, the needed code isn't that large. Here's a run-able example of your code with these modifications:
#include <coroutine>
#include <exception>
#include <string>
#include <iostream>

struct generator_input {};

template <typename OutputType, typename InputType>
struct generator {
    struct promise_type;
    using coro_handle = std::coroutine_handle<promise_type>;

    struct passthru_value
    {
        InputType &ret_;

        bool await_ready() { return true; }
        void await_suspend(coro_handle) {}
        InputType &await_resume() { return ret_; }
    };

    struct promise_type {
        OutputType current_value;
        InputType input_value;

        auto get_return_object() { return generator{coro_handle::from_promise(*this)}; }
        auto initial_suspend() { return std::suspend_always{}; }
        auto final_suspend() noexcept { return std::suspend_always{}; }
        void unhandled_exception() { std::terminate(); }

        auto yield_value(OutputType value) {
            current_value = value;
            return std::suspend_always{};
        }

        void return_void() {}

        auto await_transform(generator_input)
        {
            return passthru_value{input_value};
        }
    };

    bool next() { return coro ? (coro.resume(), !coro.done()) : false; }
    OutputType value() { return coro.promise().current_value; }

    void send(const InputType &input)
    {
        coro.promise().input_value = input;
    }
    void send(InputType &&input)
    {
        coro.promise().input_value = std::move(input);
    }

    generator(generator const &rhs) = delete;
    generator(generator &&rhs)
        : coro(rhs.coro)
    {
        rhs.coro = nullptr;
    }
    ~generator() {
        if (coro)
            coro.destroy();
    }

private:
    generator(coro_handle h) : coro(h) {}
    coro_handle coro;
};

generator<char, std::string> hello() {
    auto word = co_await generator_input{};
    for (auto &ch : word) {
        co_yield ch;
    }
}

int main(int, char**)
{
    auto test = hello();
    test.send("hello world");
    while (test.next())
    {
        std::cout << test.value() << ' ';
    }
}
Be more yielding
An alternative to using an explicit co_await is to exploit a property of co_yield. Namely, co_yield is an expression and therefore it has a value. Specifically, it is (mostly) equivalent to co_await p.yield_value(e), where p is the promise object (ohh!) and e is what we're yielding.
Fortunately, we already have a yield_value function; it returns std::suspend_always. But it could also return an object that always suspends, but also which co_await can unpack into an InputType&:
struct yield_thru
{
    InputType &ret_;

    bool await_ready() { return false; }
    void await_suspend(coro_handle) {}
    InputType &await_resume() { return ret_; }
};

...

// In the promise:
auto yield_value(OutputType value) {
    current_value = value;
    return yield_thru{input_value};
}
This is a symmetric transport mechanism; for every value you yield, you receive a value (which may be the same one as before). Unlike the explicit co_await method, you can't receive a value before you start to generate them. This could be useful for certain interfaces.
And of course, you could combine them as you see fit.

Memory leak when trying to free array of CString

The following (MCVE if compiled as a cdylib called libffitest, requires libc as a dependency) demonstrates the problem:
use libc::{c_char, c_void, size_t};
use std::ffi::CString;
use std::mem;
use std::slice;

#[repr(C)]
#[derive(Clone)]
pub struct Array {
    pub data: *const c_void,
    pub len: size_t,
}

#[no_mangle]
pub unsafe extern "C" fn bar() -> Array {
    let v = vec![
        CString::new("Hi There").unwrap().into_raw(),
        CString::new("Hi There").unwrap().into_raw(),
    ];
    v.into()
}

#[no_mangle]
pub extern "C" fn drop_word_array(arr: Array) {
    if arr.data.is_null() {
        return;
    }
    // Convert incoming data to Vec so we own it
    let mut f: Vec<c_char> = arr.into();
    // Deallocate the underlying c_char data by reconstituting it as a CString
    let _: Vec<CString> = unsafe { f.iter_mut().map(|slice| CString::from_raw(slice)).collect() };
}

// Transmute to array for FFI
impl From<Vec<*mut c_char>> for Array {
    fn from(sl: Vec<*mut c_char>) -> Self {
        let array = Array {
            data: sl.as_ptr() as *const c_void,
            len: sl.len() as size_t,
        };
        mem::forget(sl);
        array
    }
}

// Reconstitute from FFI
impl From<Array> for Vec<c_char> {
    fn from(arr: Array) -> Self {
        unsafe { slice::from_raw_parts_mut(arr.data as *mut c_char, arr.len).to_vec() }
    }
}
I thought that by reconstituting the incoming Array as a slice, taking ownership of it as a Vec, then reconstituting the elements as CString, I was freeing any allocated memory, but I'm clearly doing something wrong. Executing this Python script tells me that it's trying to free a pointer that was not allocated:
python(85068,0x10ea015c0) malloc: *** error for object 0x7ffdaa512ca1: pointer being freed was not allocated
import sys
import ctypes
from ctypes import c_void_p, Structure, c_size_t, cast, POINTER, c_char_p

class _FFIArray(Structure):
    """
    Convert sequence of structs to C-compatible void array
    """
    _fields_ = [("data", c_void_p),
                ("len", c_size_t)]

def _arr_to_wordlist(res, _func, _args):
    ls = cast(res.data, POINTER(c_char_p * res.len))[0][:]
    print(ls)
    _drop_wordarray(res)

prefix = {"win32": ""}.get(sys.platform, "lib")
extension = {"darwin": ".dylib", "win32": ".dll"}.get(sys.platform, ".so")
lib = ctypes.cdll.LoadLibrary(prefix + "ffitest" + extension)

lib.bar.argtypes = ()
lib.bar.restype = _FFIArray
lib.bar.errcheck = _arr_to_wordlist
_drop_wordarray = lib.drop_word_array

if __name__ == "__main__":
    lib.bar()
Well, that was a fun one to go through.
Your biggest problem is the following conversion:
impl From<Array> for Vec<c_char> {
    fn from(arr: Array) -> Self {
        unsafe { slice::from_raw_parts_mut(arr.data as *mut c_char, arr.len).to_vec() }
    }
}
You start with what comes out of the FFI boundary as an array of strings (i.e. *mut *mut c_char). For some reason, you decide that all of a sudden, it is a Vec<c_char> and not a Vec<*const c_char> as you would expect for the CString conversion. That's UB #1 - and the cause of your use-after-free.
The unnecessarily convoluted conversions made things even muddier due to the constant juggling between types. If your FFI boundary is Vec<CString>, why do you split the return into two separate calls? That's literally calling for disaster, as it happened.
Consider the following:
impl From<Array> for Vec<CString> {
    fn from(arr: Array) -> Self {
        unsafe {
            slice::from_raw_parts(
                arr.data as *mut *mut c_char,
                arr.len,
            )
            .into_iter()
            .map(|r| CString::from_raw(*r))
            .collect()
        }
    }
}
This gives you a one-step FFI boundary conversion (without the necessity for the second unsafe block in your method), clean types and no leaks.
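For completeness, here is a hedged sketch of what the matching drop_word_array could look like in the same cdylib once the one-step conversion exists. Note that the buffer of pointers that was mem::forgotten on the way out also needs reclaiming; rebuilding it with Vec::from_raw_parts is fine here because vec![...] in bar allocates exactly len elements, but if that ever changes, the capacity should be carried across the FFI boundary as well:

#[no_mangle]
pub extern "C" fn drop_word_array(arr: Array) {
    if arr.data.is_null() {
        return;
    }
    unsafe {
        // Rebuild the Vec of raw pointers so its backing buffer is freed,
        // then reclaim each CString so the string allocations are freed too.
        let ptrs = Vec::from_raw_parts(arr.data as *mut *mut c_char, arr.len, arr.len);
        for p in ptrs {
            drop(CString::from_raw(p));
        }
    }
}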
