How to send a rumble effect to a device using python evdev - python

I would like to send a rumble effect to a device using python-evdev.
This should be achievable with the upload_effect() function, which requires a buffer object as input.
This is what capabilities() reveals:
('EV_FF', 21L): [
(['FF_EFFECT_MIN', 'FF_RUMBLE'], 80L),
('FF_PERIODIC', 81L),
(['FF_SQUARE', 'FF_WAVEFORM_MIN'], 88L),
('FF_TRIANGLE', 89L),
('FF_SINE', 90L),
('FF_GAIN', 96L),
],
How do I create that buffer?

Python-evdev 1.1.0 supports force-feedback effect uploads. Here's an example from the documentation:
import evdev
from evdev import ecodes, InputDevice, ff

# Find the first EV_FF capable event device (that we have permissions
# to use).
for name in evdev.list_devices():
    dev = InputDevice(name)
    if ecodes.EV_FF in dev.capabilities():
        break
rumble = ff.Rumble(strong_magnitude=0x0000, weak_magnitude=0xffff)
effect_type = ff.EffectType(ff_rumble_effect=rumble)
duration_ms = 1000
effect = ff.Effect(
    ecodes.FF_RUMBLE, -1, 0,
    ff.Trigger(0, 0),
    ff.Replay(duration_ms, 0),
    effect_type
)
repeat_count = 1
effect_id = dev.upload_effect(effect)
dev.write(ecodes.EV_FF, effect_id, repeat_count)
dev.erase_effect(effect_id)

Related

Fill a visio element with color using python

I'm using this code to draw myself a server in Visio:
import win32com.client as w32
visio = w32.Dispatch("visio.Application")
visio.Visible = 1
doc = visio.Documents.Add("Detailed Network Diagram.vst")
page = doc.Pages.Item(1)
page.name = "My drawing"
stn2 = visio.Documents("Servers.vss")
server = stn2.Masters("Server")
serv = page.Drop(server, 0, 0)
for ssh in serv.shapes:
    ssh.Cells('Fillforegnd').FormulaForceU = 'RGB(255,0,0)'
My problem is that when I try to fill the object with a color (instead of the regular server color), it doesn't work.
Nothing I tried worked. I'm using Python 3.8.
Please try this code:
import win32com.client as w32
visio = w32.Dispatch("visio.Application")
visio.Visible = 1
doc = visio.activedocument
page = doc.pages(1)
page.name = "Mydrawing"
stn2 = visio.Documents(2)
server = stn2.Masters(2)
serv = page.Drop(server, 0, 0)
# iterate over all sub-shapes in the Serv shape
for ssh in serv.shapes:
    ssh.Cells('Fillforegnd').FormulaForceU = 'RGB(255,255,0)'
If you don't need to iterate over all sub-shapes, you can change only some of them:
# iterate the 2nd, 3rd and 4th sub-shapes in the Serv shape
for i in range(2, 5):
    ssh = serv.shapes(i)
    # if you need a solid color for the sub-shapes, uncomment the next line
    # ssh.Cells('FillPattern').FormulaForceU = '1'
    ssh.Cells('Fillforegnd').FormulaU = 'RGB(255,255,0)'
The code in my JupyterLab notebook changes only 3 sub-shapes, which I selected and deleted to demonstrate the difference…
PS The user's problem was not in the code but in the Visio sub-shapes, which did not want to inherit the color of the main shape, because those sub-shapes had formulas that used functions like THEMEGUARD in their cells.
I modified the shape from the built-in set of elements and the problem was solved…
PPS Solved! To remove the dependency on those sub-shapes, you need to change their FillStyle to 'Normal'. Just add a new line of code: ssh.FillStyle = 'Normal'.
Look at code ↓
import win32com.client as w32
visio = w32.Dispatch("visio.Application")
visio.Visible = 1
# create document based on Detailed Network Diagram template (use full path)
doc = visio.Documents.Add(r"C:\Program Files\Microsoft Office\root\Office16\visio content\1033\dtlnet_m.vstx")
# use one of docked stencils
stn2 = visio.Documents("PERIPH_M.vssx")
# define 'Server' master-shape
server = stn2.Masters("Server")
# define page
page = doc.Pages.Item(1)
# rename page
page.name = "My drawing"
# drop master-shape on page, define 'Server' instance
serv = page.Drop(server, 0, 0)
# iterate sub-shapes (side edges)
for i in range(2, 6):
    # take one of the side edges of the 'Server' instance
    ssh = serv.shapes(i)
    # change its Fill Style to 'Normal'
    ssh.FillStyle = 'Normal'
    # fix the FillForegnd cell of the side edge
    ssh.Cells('Fillforegnd').FormulaForceU = 'Guard(Sheet.' + str(serv.id) + '!FillForegnd)'
    # fix the FillBkgnd cell of the side edge
    ssh.Cells('FillBkgnd').FormulaForceU = 'Guard(Sheet.' + str(serv.id) + '!FillBkgnd)'
    # instead of the formula 'Guard(x)', write 'Guard(1)'
    ssh.Cells('FillPattern').FormulaForceU = 'Guard(1)'
# fill the main shape of the 'Server' instance
serv.Cells("FillForegnd").FormulaForceU = '5'
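For what it's worth, the Guard formulas above are just strings built around the parent shape's ID. A quick sketch of what they end up as, using a made-up shape ID of 3 (the real code reads serv.id from the dropped shape):

```python
# Illustrative only: shows the formula strings the loop above builds,
# using a made-up shape ID (3) instead of a live Visio object's serv.id.
serv_id = 3
fill_foregnd = 'Guard(Sheet.' + str(serv_id) + '!FillForegnd)'
fill_bkgnd = 'Guard(Sheet.' + str(serv_id) + '!FillBkgnd)'
print(fill_foregnd)  # → Guard(Sheet.3!FillForegnd)
print(fill_bkgnd)    # → Guard(Sheet.3!FillBkgnd)
```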

python3: how to change the attribute .P0 by passing an argument to the call

I am trying to pass a choice of pins to the Raspberry Pi when creating channels, so that I only change the .P(value) part when calling the method. As it stands, if I call the class from another class I have to import all the libraries again. The code is below.
import busio
import digitalio
import board
import adafruit_mcp3xxx.mcp3008 as MCP
from adafruit_mcp3xxx.analog_in import AnalogIn
def createChannel(self, channelNumber):
    # create the spi bus
    spi = busio.SPI(clock=board.SCK, MISO=board.MISO, MOSI=board.MOSI)
    # create the cs (chip select)
    cs = digitalio.DigitalInOut(board.D22)
    # create the mcp object
    mcp = MCP.MCP3008(spi, cs)
    self.channelNumber = channelNumber
    chan = AnalogIn(mcp, self.channelNumber)
    rawValue = chan.voltage
    return rawValue
Then I call it like
sensor = createChannel()
rawValue = sensor.createChannel(MCP.P0)
So when I create another class to use the retrieved sensor data and I call the function, I need to import again all the libraries that work with the MCP. I want to call it like this:
sensor = createChannel()
rawValue = sensor.createChannel(P0)
But I cannot find a way to change just the last part ('MCP.P0') by passing an argument to the call.
So when I create the other class, I have to do the following and import all the libraries again:
def sensorOne(self):
    # create the spi bus
    spi = busio.SPI(clock=board.SCK, MISO=board.MISO, MOSI=board.MOSI)
    # create the cs (chip select)
    cs = digitalio.DigitalInOut(board.D22)
    # create the mcp object
    mcp = MCP.MCP3008(spi, cs)
    # get date and time
    outTime = str(datetime.now())
    # instance of the createChannel class: if the tds sensor is connected to
    # pin 0 you do as below; I will also do ph on pin 2, but it is commented
    # out since I am not sure anything is connected there yet
    sensor = createChannel()
    # get data from the sensor on pin 1
    outData = sensor.createChannel(MCP.P1)
    return outTime, outData
If the spacing is not a hundred percent correct, please excuse it; I cannot see, as I am blind. The code works; I just need to be able to change the .P0 to, for instance, .P1 by passing an argument to the call.
Thank you
I think you can define constants P0 to P7 in the module that defines createChannel, and then other files can import those constants from that module instead of getting them from MCP directly. Also, you can simply specify a channel with an integer from 0 to 7.
I found some online documentation for adafruit_mcp3xxx.mcp3008. I think it means that channel names like MCP.P0 and MCP.P1 are really just integer values like 0 and 1, respectively.
The ADC chips’ input pins (AKA “channels”) are aliased in this library
as integer variables whose names start with “P” (eg MCP3008.P0 is
channel 0 on the MCP3008 chip). Each module that contains a driver
class for a particular ADC chip has these aliases predefined
accordingly. This is done for code readability and prevention of
erroneous SPI commands.
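To illustrate that point without the hardware library (the names below are stand-ins, not the real adafruit API), the aliases behave like plain integers:

```python
# Stand-in aliases mirroring what the documentation says about MCP.P0..MCP.P7:
# they are just the integers 0..7.
P0, P1, P2, P3, P4, P5, P6, P7 = range(8)

def read_channel(channel):
    # Placeholder for the real AnalogIn(mcp, channel).voltage call; it just
    # echoes the channel number to show P1 and the bare integer 1 are equal.
    return channel

print(read_channel(P1) == read_channel(1))  # → True
```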
You can make channel names available to users of your createChannel method by defining constants P0, P1 and so forth in the module that defines createChannel.
## File createChannel.py
import adafruit_mcp3xxx.mcp3008 as MCP

# Alias the channel constants for convenience.
P0 = MCP.P0
P1 = MCP.P1
# etc.
P7 = MCP.P7

class createChannel():
    def createChannel(self, channelNumber):
        # ... Do stuff with channelNumber.
        return channelNumber  # Or whatever you need to return.
In another file that wants to use createChannel you can import the channel constants as well as the method. Alternatively, I think you can just access a channel by specifying an integer from 0 to 7.
## Another file.
from createChannel import createChannel, P0, P1, P7
sensor = createChannel()
# Access a single pin.
rawValue = sensor.createChannel(P0)
# Access each pin.
rawBus = [sensor.createChannel(pin) for pin in range(8)]

Pygame read MIDI input

I referenced the Pygame MIDI documentation and this code to try to get MIDI input to work.
The MIDI interface (an Avid Eleven Rack) receives MIDI data from my MIDI controller just fine in my audio software (Pro Tools). Using Pygame, however, I cannot seem to read any information at all.
Source Code
import pygame
from pygame.locals import *
from pygame import midi
class MidiInput():
    def __init__(self):
        # variables
        self.elevenRackInID = 2
        # init methods
        pygame.init()
        pygame.midi.init()
        self.midiInput = pygame.midi.Input(self.elevenRackInID, 100)

    def run(self):
        # print(pygame.midi.Input(3, 100))
        # for i in range(10):
        #     print(pygame.midi.get_device_info(i), i)
        self.read = self.midiInput.read(100)
        # self.convert = pygame.midi.midis2events(self.read, self.elevenRackInID)
        print(self.read)

test = MidiInput()
while True:
    test.run()
The only thing printed to the console are empty square brackets:
[]
Additional Info
I just checked again: the input ID is the right one and it is in fact an input.
"self.midiInput.poll()" returns False. So according to the Pygame documentation there is no data coming in.
You can see the data, poll and device info below:
data: [] || poll: False || device info: (b'MMSystem', b'Eleven Rack', 1, 0, 1)
A list of all my MIDI devices according to Pygame (with indexes):
(b'MMSystem', b'Microsoft MIDI Mapper', 0, 1, 0) 0
(b'MMSystem', b'External', 1, 0, 0) 1
(b'MMSystem', b'Eleven Rack', 1, 0, 1) 2
(b'MMSystem', b'Maschine Mikro MK2 In', 1, 0, 0) 3
(b'MMSystem', b'Microsoft GS Wavetable Synth', 0, 1, 0) 4
(b'MMSystem', b'External', 0, 1, 0) 5
(b'MMSystem', b'Eleven Rack', 0, 1, 0) 6
(b'MMSystem', b'Maschine Mikro MK2 Out', 0, 1, 0) 7
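Incidentally, those tuples follow the pattern (interface, name, is_input, is_output, is_opened), so a small helper can filter the list down to the input devices. A sketch (the device list is hard-coded here; in real code it would come from pygame.midi.get_device_info()):

```python
# Hypothetical helper: filter pygame.midi-style device-info tuples
# (interface, name, is_input, is_output, is_opened) down to the inputs.
def list_inputs(devices):
    return [(idx, info[1]) for idx, info in enumerate(devices) if info[2] == 1]

# Device list copied from the question; normally you would build it with
# [pygame.midi.get_device_info(i) for i in range(pygame.midi.get_count())].
devices = [
    (b'MMSystem', b'Microsoft MIDI Mapper', 0, 1, 0),
    (b'MMSystem', b'External', 1, 0, 0),
    (b'MMSystem', b'Eleven Rack', 1, 0, 1),
    (b'MMSystem', b'Maschine Mikro MK2 In', 1, 0, 0),
    (b'MMSystem', b'Microsoft GS Wavetable Synth', 0, 1, 0),
]
print(list_inputs(devices))
```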
Any help or suggestions are greatly appreciated!
I got an answer in another forum. It turns out that there is an example file which shows how to get this to work.
So if someone else stumbles over this problem, here is the useful part of the example code:
import sys
import os

import pygame as pg
import pygame.midi


def print_device_info():
    pygame.midi.init()
    _print_device_info()
    pygame.midi.quit()


def _print_device_info():
    for i in range(pygame.midi.get_count()):
        r = pygame.midi.get_device_info(i)
        (interf, name, input, output, opened) = r
        in_out = ""
        if input:
            in_out = "(input)"
        if output:
            in_out = "(output)"
        print(
            "%2i: interface :%s:, name :%s:, opened :%s: %s"
            % (i, interf, name, opened, in_out)
        )


def input_main(device_id=None):
    pg.init()
    pg.fastevent.init()
    event_get = pg.fastevent.get
    event_post = pg.fastevent.post
    pygame.midi.init()
    _print_device_info()

    if device_id is None:
        input_id = pygame.midi.get_default_input_id()
    else:
        input_id = device_id
    print("using input_id :%s:" % input_id)

    i = pygame.midi.Input(input_id)
    pg.display.set_mode((1, 1))

    going = True
    while going:
        events = event_get()
        for e in events:
            if e.type in [pg.QUIT]:
                going = False
            if e.type in [pg.KEYDOWN]:
                going = False
            if e.type in [pygame.midi.MIDIIN]:
                print(e)
        if i.poll():
            midi_events = i.read(10)
            # convert them into pygame events.
            midi_evs = pygame.midi.midis2events(midi_events, i.device_id)
            for m_e in midi_evs:
                event_post(m_e)
    del i
    pygame.midi.quit()
You can find the file yourself in this directory:
C:\Users\myUser\AppData\Roaming\Python\Python37\site-packages\pygame\examples\midi.py
Replace 'myUser' with your Windows username. Also, 'Python37' can vary depending on the version of Python you have installed.
I don't believe the code posted below by Leonhard W is usable: neither the pygame.midi poll() method nor the pygame.midi read() method is blocking. The result is that CPU consumption goes through the roof (~50%).
Of course in practice the code to read the MIDI events would be run in a separate thread, though this won't help with CPU consumption.
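One common mitigation (my suggestion, not from the original posts) is to sleep briefly whenever poll() reports no data, trading a few milliseconds of latency for a mostly idle CPU. Sketched here with stand-in callables instead of the real pygame.midi methods:

```python
import time

def poll_events(poll, read, should_stop, idle_sleep=0.005):
    # Generic non-blocking poll loop: `poll` and `read` stand in for
    # pygame.midi.Input.poll()/read(); sleeping while idle caps CPU usage.
    collected = []
    while not should_stop():
        if poll():
            collected.extend(read())
        else:
            time.sleep(idle_sleep)  # yield the CPU while there is no data
    return collected

# Tiny demonstration with fake queued data instead of a real MIDI device.
pending = [[1, 2], [3]]
events = poll_events(
    poll=lambda: bool(pending),
    read=lambda: pending.pop(0),
    should_stop=lambda: not pending,
)
print(events)  # → [1, 2, 3]
```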
In response to another very useful comment elsewhere, I've taken a look at the Mido library (https://mido.readthedocs.io/en/latest/index.html#). It provides blocking read methods, and with just a few lines of code I can look for messages from a MIDI controller keyboard and pass them on to a MIDI synth.
import mido
names = mido.get_input_names()
print(names)
out_port = mido.open_output()
with mido.open_input(names[0]) as inport:
    for msg in inport:
        out_port.send(msg)
The only downside is that I'm getting a significant delay (perhaps 1/4s) between hitting the key and hearing the note. Oh well, onwards and upwards.

Pepper Live 2-way Audio Streaming Error

I'm trying to establish real-time audio communication between Pepper's tablet and my PC, using GStreamer. The audio from Pepper's mic to the PC is working, but there seems to be no audio from my PC to Pepper's tablet. What am I doing wrong?
PC side:
audio_pipeline = Gst.Pipeline('audio_pipeline')
audio_udpsrc = Gst.ElementFactory.make('udpsrc', None)
audio_udpsrc.set_property('port', args.audio)
audio_caps = Gst.caps_from_string('application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96')
audio_filter = Gst.ElementFactory.make('capsfilter', None)
audio_filter.set_property('caps',audio_caps)
audio_depay = Gst.ElementFactory.make('rtpL16depay', None)
audio_convert = Gst.ElementFactory.make('audioconvert', None)
audio_sink = Gst.ElementFactory.make('alsasink', None)
audio_sink.set_property('sync',False)
audio_pipeline.add(audio_udpsrc,audio_filter,audio_depay,audio_convert,audio_sink)
audio_udpsrc.link(audio_filter)
audio_filter.link(audio_depay)
audio_depay.link(audio_convert)
audio_convert.link(audio_sink)
Robot side (Choregraphe):
audio_src = gst.element_factory_make('autoaudiosrc')
audio_convert = gst.element_factory_make('audioconvert')
audio_caps = gst.caps_from_string('audio/x-raw-int,channels=1,depth=16,width=16,rate=44100')
audio_filter = gst.element_factory_make('capsfilter')
audio_filter.set_property('caps',audio_caps)
# audio_enc = gst.element_factory_make('mad')
audio_pay = gst.element_factory_make('rtpL16pay')
audio_udp = gst.element_factory_make('udpsink')
audio_udp.set_property('host',user_ip)
audio_udp.set_property('port',int(user_audio_port))
self.audio_pipeline.add(audio_src,audio_convert,audio_filter,audio_pay,audio_udp)
gst.element_link_many(audio_src,audio_convert,audio_filter,audio_pay,audio_udp)
or
Robot's side (Python SDK):
GObject.threads_init()
Gst.init(None)
audio_pipeline = Gst.Pipeline('audio_pipeline')
audio_src = Gst.ElementFactory.make('autoaudiosrc')
audio_convert = Gst.ElementFactory.make('audioconvert')
audio_caps = Gst.ElementFactory.make('audio/x-raw-int,channels=2,depth=16,width=16,rate=44100')
audio_filter = Gst.ElementFactory.make('capsfilter')
audio_filter.set_property('caps',audio_caps)
audio_pay = Gst.ElementFactory.make('rtpL16pay')
audio_udp = Gst.ElementFactory.make('udpsink')
audio_udp.set_property('host',user_ip)
audio_udp.set_property('port',int(user_audio_port))
audio_pipeline.add(audio_src,audio_convert,audio_filter,audio_pay,audio_udp)
audio_src.link(audio_convert)
audio_convert.link(audio_filter)
audio_filter.link(audio_pay)
audio_pay.link(audio_udp)
audio_pipeline.set_state(Gst.State.PLAYING)
Computer's mic to Pepper:
audio_port = 80
s_audio_pipeline = Gst.Pipeline('s_audio_pipeline')
s_audio_src = Gst.ElementFactory.make('autoaudiosrc')
s_audio_convert = Gst.ElementFactory.make('audioconvert')
s_audio_caps = Gst.ElementFactory.make('audio/x-raw-int,channels=2,depth=16,width=16,rate=44100')
s_audio_filter = Gst.ElementFactory.make('capsfilter')
s_audio_filter.set_property('caps',audio_caps)
s_audio_pay = Gst.ElementFactory.make('rtpL16pay')
s_audio_udp = Gst.ElementFactory.make('udpsink')
s_audio_udp.set_property('host',ip)
s_audio_udp.set_property('port',int(audio_port))
s_audio_pipeline.add(s_audio_src,s_audio_convert,s_audio_filter,s_audio_pay,s_audio_udp)
s_audio_src.link(s_audio_convert)
s_audio_convert.link(s_audio_filter)
s_audio_filter.link(s_audio_pay)
s_audio_pay.link(s_audio_udp)
Pepper receiving:
audio = 80
r_audio_pipeline = Gst.Pipeline('r_audio_pipeline')
#defining audio pipeline attributes
r_audio_udpsrc = Gst.ElementFactory.make('udpsrc', None)
r_audio_udpsrc.set_property('port', audio)
r_audio_caps = Gst.caps_from_string('application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)2, format=(string)S16LE, channel-positions=(int)1, payload=(int)96')
r_audio_filter = Gst.ElementFactory.make('capsfilter', None)
r_audio_filter.set_property('caps',r_audio_caps)
r_audio_depay = Gst.ElementFactory.make('rtpL16depay', None)
r_audio_convert = Gst.ElementFactory.make('audioconvert', None)
r_audio_sink = Gst.ElementFactory.make('alsasink', None)
r_audio_sink.set_property('sync',False)
#linking the various attributes
r_audio_pipeline.add(r_audio_udpsrc,r_audio_filter,r_audio_depay,r_audio_convert,r_audio_sink)
r_audio_udpsrc.link(r_audio_filter)
r_audio_filter.link(r_audio_depay)
r_audio_depay.link(r_audio_convert)
r_audio_convert.link(r_audio_sink)
r_audio_pipeline.set_state(Gst.State.PLAYING)
I think there might be a problem with Pepper's receiving port number… I tried different port numbers (including 9559), but nothing seemed to work. Is the source ID wrong?
Is it possible to run the 2-way stream in the same pipeline?
I took a look at other libraries like ffmpeg and PyAudio, but I couldn't find any method for live streaming.
Make sure you run the Python script on the robot.
Also, did you run the GMainLoop?
Choregraphe behaviors are run in NAOqi, and NAOqi already runs a GMainLoop in the background. Maybe this is what is missing in your stand-alone script.
Finally, none of your snippets show code meant to take the PC's audio to the network, or from the network to Pepper's speakers.

"No video mode large enough" error in pygame when using a 1.8" TFT via a RPi

First off, I just want to mention that I'm not actually sure this question is asked on the correct Stack Exchange site; since there are a lot of different topics involved, I might need to ask it somewhere else. I do believe it boils down to a problem somewhere between Python (pygame) and SDL.
What I've done:
I've hooked up a Sainsmart 1.8" TFT to my Raspberry Pi running a MINIBIAN image (a small-footprint version of Raspbian). I've got the TFT to work, since I can send console output to it and display images using fbi (which writes directly to the frame buffer).
Installed and loaded the fbtft drivers (Linux framebuffer drivers for small TFT LCD display modules).
From dmesg:
[ 12.377397] graphics fb1: fb_st7735r frame buffer, 128x160, 40 KiB video memory, 4 KiB DMA buffer memory, fps=20, spi0.0 at 32 MHz
The problem:
What I now want is to display a clock on my TFT using pygame. The code I've got is borrowed from http://gerfficient.com/2014/02/12/connecting-1-8-tft-lcd-to-raspberry-pi/:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import time
import pygame
time_stamp_prev=0
os.environ["SDL_FBDEV"] = "/dev/fb1"
os.environ['SDL_VIDEODRIVER']="fbcon"
def displaytext(text, size, line, color, clearscreen):
    if clearscreen:
        screen.fill((0, 0, 0))
    font = pygame.font.Font(None, size)
    text = font.render(text, 0, color)
    rotated = pygame.transform.rotate(text, 90)
    textpos = rotated.get_rect()
    textpos.centery = 80
    if line == 1:
        textpos.centerx = 99
        screen.blit(rotated, textpos)
    elif line == 2:
        textpos.centerx = 61
        screen.blit(rotated, textpos)
    elif line == 3:
        textpos.centerx = 25
        screen.blit(rotated, textpos)
def main():
    global screen
    pygame.init()
    pygame.mouse.set_visible(0)
    size = width, height = 128, 160
    screen = pygame.display.set_mode(size)
    while True:
        displaytext(time.strftime("%d.%m.%Y", time.gmtime()), 40, 1, (255, 255, 255), True)
        displaytext(time.strftime("%H:%M:%S", time.gmtime()), 40, 2, (255, 255, 255), False)
        displaytext("gerfficient.com", 20, 3, (100, 100, 255), False)
        pygame.display.flip()
        time.sleep(1)

if __name__ == '__main__':
    main()
The error i get is this:
Traceback (most recent call last):
File "test.py", line 19, in <module>
pygame.display.set_mode()
pygame.error: No video mode large enough for 128x160
When I display more info via pygame and run print pygame.display.Info(), I get:
<VideoInfo(hw = 1, wm = 0,video_mem = 40
blit_hw = 0, blit_hw_CC = 0, blit_hw_A = 0,
blit_sw = 0, blit_sw_CC = 0, blit_sw_A = 0,
bitsize = 16, bytesize = 2,
masks = (63488, 2016, 31, 0),
shifts = (11, 5, 0, 0),
losses = (3, 2, 3, 8),
current_w = 128, current_h = 160
>
Output of fbset -i -fb /dev/fb1:
mode "128x160"
geometry 128 160 128 160 16
timings 0 0 0 0 0 0 0
nonstd 1
rgba 5/11,6/5,5/0,0/0
endmode
Frame buffer device information:
Name : fb_st7735r
Address : 0
Size : 40960
Type : PACKED PIXELS
Visual : TRUECOLOR
XPanStep : 0
YPanStep : 0
YWrapStep : 0
LineLength : 256
Accelerator : No
When checking the two SDL environment variables, they're both set correctly.
It seems that when I use /dev/fb1 (the TFT) as the frame buffer device, pygame/SDL picks that information up but still can't use it via pygame.
Just to mention, I installed the python-pygame package from the Raspbian repositories.
The versions of pygame and SDL I'm using:
ii python-pygame 1.9.1release+dfsg-8
ii libsdl-image1.2:armhf 1.2.12-2
ii libsdl-mixer1.2:armhf 1.2.12-3
ii libsdl-ttf2.0-0:armhf 2.0.11-2
ii libsdl1.2-dev 1.2.15-5
ii libsdl1.2debian:armhf 1.2.15-5
Change
size = width,height = 128,160
to
size = width,height = 160,128
The problem solved itself: what I did was restart the RPi, and since then the TFT screen has worked flawlessly with the pygame module.
I think the problem lies somewhere between Python and the fbtft kernel module I've been using for this project, or maybe in the way I installed and initialized the kernel module. This is only speculation on my part.
I didn't mention the fact that I was using a third-party driver package, which was stupid of me; there's a small hint that I use these drivers in the dmesg output, but nothing more.
I will update the question with this information.
It is most likely that you were missing the mode entry in /etc/fb.modes.
I had this issue with a 128x128 display.
You can get the mode via
fbset -i
and then simply put its output into /etc/fb.modes.
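For reference, based on the fbset output shown in the question, the /etc/fb.modes entry would look roughly like this (values taken from that output; run fbset -i -fb /dev/fb1 to get the right values for your own display):

```
mode "128x160"
    geometry 128 160 128 160 16
    timings 0 0 0 0 0 0 0
    nonstd 1
    rgba 5/11,6/5,5/0,0/0
endmode
```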
Hope that helps...
