Multithread communication in Python 3.6 freezes - python

I'm trying to develop a multithreaded function in Python 3.6, and sometimes my code freezes. From my tests I think the problem comes from os.write() or os.read(), but I don't know why.
Here is my code (I don't think partialTransform() causes the freeze, but I include it so the code can be understood):
def naiveTransform(netData, **kwargs):
    # parallelisable part
    def partialTransform(debut, fin):
        for i in range(debut, fin):
            j = 0
            # computation of all the distances:
            while j < nbrPoint:
                distance[j] = euclidianDistance(netData[i], netData[j])
                j += 1
            # construction of the graph:
            j = 0
            del distance[i]
            while j < k:
                nearest = min(distance, key=distance.get)
                del distance[nearest]  # if k > 1 we don't want to always get the same point
                graph.append([i, nearest])
                j += 1
        return graph

    k = kwargs.get('k', 1)  # default value, to be defined
    nbrCore = kwargs.get('Core', 1)
    nbrPoint = len(netData)
    nbrPointCore = nbrPoint//nbrCore
    distance = dict()
    graph = []
    # pipes
    r = [-1]*nbrCore
    w = [-1]*nbrCore
    pid = [-1]*nbrCore
    for i in range(nbrCore):
        r[i], w[i] = os.pipe()
        try:
            pid[i] = os.fork()
        except OSError:
            exit("Could not create a child process\n")
        if pid[i] == 0:
            if i < nbrCore-1:
                g = partialTransform(i*nbrPointCore, (i+1)*nbrPointCore)
            else:
                g = partialTransform(i*nbrPointCore, nbrPoint)  # to be sure that no point is forgotten
            print("write in " + str(i))
            import sys
            print(sys.getsizeof(g))
            os.write(w[i], pickle.dumps(g))
            print("exit")
            exit()
    for i in range(nbrCore):
        print("waiting " + str(i))
        finished = os.waitpid(pid[i], 0)
        print("received")
        graph += pickle.loads(os.read(r[i], 250000000))
    return graph
When the argument k is greater than or equal to 5, the code freezes after the
print(sys.getsizeof(g))
For my example case the size is 33928 when k = 4 and 43040 when k = 5, so I don't think the size itself is the problem.
The number of cores used doesn't seem to have any influence on the freeze.
I'm still a beginner in Python, so it may be something obvious, but I didn't find any similar problem on the internet. Do you have any idea what could cause these freezes?

Pipes have limited-size buffers, and the child will block writing to the pipe until the parent reads it. But the parent is waiting for the child to exit, so you hang. You can avoid the buffer limit by writing the object to a temporary file instead. The data will be in the operating system's file cache when the parent reads it, so it will still be fast.
There is a trick in all this. The parent needs to convince libc to re-examine the file after the child writes it, or the read will just be satisfied by its 0-length internal cache. You can do that with a seek.
import tempfile

def naiveTransform(netData, **kwargs):
    # *** code removed for example ***
    # files
    tmp = [tempfile.TemporaryFile() for _ in range(nbrCore)]
    pid = [-1]*nbrCore
    for i in range(nbrCore):
        try:
            pid[i] = os.fork()
        except OSError:
            exit("Could not create a child process\n")
        if pid[i] == 0:
            if i < nbrCore-1:
                g = partialTransform(i*nbrPointCore, (i+1)*nbrPointCore)
            else:
                g = partialTransform(i*nbrPointCore, nbrPoint)  # to be sure that no point is forgotten
            print("write in " + str(i))
            import sys
            print(sys.getsizeof(g))
            pickle.dump(g, tmp[i])
            tmp[i].close()
            print("exit")
            exit()
    for i in range(nbrCore):
        print("waiting " + str(i))
        finished = os.waitpid(pid[i], 0)
        print("received")
        # seek to get updated file content
        tmp[i].seek(0, 2)
        tmp[i].seek(0)
        graph += pickle.load(tmp[i])
    return graph
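A different way to break the same deadlock, keeping the pipes: have the parent drain each pipe before calling waitpid(), so a child blocked on a full pipe buffer can finish its write and exit. A minimal sketch under the question's fork/pipe model (run_children and its arguments are my names, not from the original code):

```python
import os
import pickle

def run_children(tasks, worker):
    """Fork one child per task; read each pipe BEFORE waiting on the child."""
    results = []
    pipes, pids = [], []
    for task in tasks:
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:  # child: compute, pickle into the pipe, exit
            os.close(r)
            with os.fdopen(w, 'wb') as wf:
                pickle.dump(worker(task), wf)
            os._exit(0)
        os.close(w)  # parent must close its write end, or EOF never arrives
        pipes.append(r)
        pids.append(pid)
    for r, pid in zip(pipes, pids):
        with os.fdopen(r, 'rb') as rf:
            results.append(pickle.load(rf))  # read first: unblocks the child
        os.waitpid(pid, 0)                   # now reap the child
    return results
```

Reading before reaping means the pipe buffer size no longer matters: the child drains as the parent consumes.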

Related

How to use a Python block to remove white noise?

I want to design a Python block to remove the white noise from my received signal,
as shown in the red circle.
I first tested on Spyder whether my code could remove the noise normally,
and I got the result as pictured.
When I finished the test I wanted to transfer the code to the Python block, but it couldn't execute normally and caused a crash.
Below are my test program from Spyder and my Python block:
i = 0
j = 0
t = 0
data = []
buf = []
wav = numpy.fromfile(open('C:/Users/user/Desktop/datas'), dtype=numpy.uint8)
for i in range(int(len(wav)/10)):
    for j in range(10):
        buf.append(wav[(i*10)+j])
        if (buf[j] <= 180) and (buf[j] >= 90):
            t = t+1
    if t < 6:
        data = numpy.append(data, buf)
    # else:
    #     data = numpy.append(data, numpy.zeros(10))
    t = 0
    j = 0
    buf.clear()
"""
Embedded Python Blocks:
Each time this file is saved, GRC will instantiate the first class it finds
to get ports and parameters of your block. The arguments to __init__ will
be the parameters. All of them are required to have default values!
"""
import numpy as np
from gnuradio import gr
i = 0
j = 0
t = 0
data=[]
buf=[]
class blk(gr.sync_block): # other base classes are basic_block, decim_block, interp_block
"""Embedded Python Block example - a simple multiply const"""
def __init__(self): # only default arguments here
"""arguments to this function show up as parameters in GRC"""
gr.sync_block.__init__(
self,
name='noise out', # will show up in GRC
in_sig=[np.float32],
out_sig=[np.float32]
)
# if an attribute with the same name as a parameter is found,
# a callback is registered (properties work, too).
def work(self, input_items, output_items):
"""example: multiply with constant"""
np.frombuffer(input_items, dtype=np.uint8)
for i in range(int((len(input_items[0]))/10)):
for j in range(10):
buf.append(input_items[0][(i*10)+j])
if (buf[j]<=180)and(buf[j]>=90):
t = t+1
if t < 6:
data = numpy.append(data,buf)
else:
data = numpy.append(data,numpy.zeros(10))
t= 0
j = 0
buf.clear()
for i in range(len(output_items[0])):
output_items[0][i]=data[i]
return len(output_items[0])
What should I do to modify it if I want it to run normally?
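As a side note, the chunk-gating rule the test script implements (keep a 10-sample chunk only when fewer than 6 samples fall in the 90-180 band) can be isolated into a plain function, which makes it easy to test outside GRC. A sketch of that logic only (gate_chunks is my name; this is not GNU Radio API):

```python
import numpy as np

def gate_chunks(samples, chunk=10, lo=90, hi=180, min_in_band=6):
    """Keep each `chunk`-sample block only if fewer than `min_in_band`
    samples fall inside [lo, hi]; mirrors the loop in the question."""
    kept = []
    for i in range(len(samples) // chunk):
        buf = samples[i*chunk:(i+1)*chunk]
        in_band = np.count_nonzero((buf >= lo) & (buf <= hi))
        if in_band < min_in_band:
            kept.append(buf)
    return np.concatenate(kept) if kept else np.array([], dtype=samples.dtype)
```

Testing this function on a known array first would separate the filtering logic from the GRC buffer handling, which is where the crash more likely originates.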

How to set up internet networking for controlling a robot?

I built a robot and set up local networking by having it act as an access point. I then connect to the access point from a laptop and send commands over UDP using Python.
I would like to do the same thing but over the internet, and I'm really lost on how to get started. I've seen some IoT services, but they don't seem well suited to such an application. I know enough networking basics to get a local network running, but I have no idea how to get started on sending data over the internet.
Here is how I send data over my local network (I would love to be able to do the same over the internet). I understand the requirements are a little different, as I can't have a static IP address. If you could even point me to resources, I would appreciate it.
Note: the non-networking code is irrelevant to my question, but I included it for the sake of completeness.
import pygame
import socket
import time
import math

# UDP_IP = "192.168.4.1"  # if using ESP32
UDP_IP = "10.42.0.1"  # if using jetson nano AP
UDP_PORT = 8080

l1 = 127/2
l2 = 127/2
r1 = 127/2
r2 = 127/2
intake = 127/2
elevator = 127/2
base_line = 127/2
joy_deadzone = 0.2

'''
Joystick setup
'''
pygame.joystick.init()
pygame.init()
joysticks = [pygame.joystick.Joystick(x) for x in range(pygame.joystick.get_count())]
counter = 0
for joy in joysticks:
    joy_index = counter
    counter += 1
    joysticks[joy_index].init()
    print("The number of joysticks: " + str(pygame.joystick.get_count()))
    print("The name of joystick: " + joysticks[joy_index].get_name())
    print("The number of axis: " + str(joysticks[joy_index].get_numaxes()))
    hats = joysticks[joy_index].get_numhats()
    print("Number of hats: {}".format(hats))
'''
end of joystick setup
'''
time.sleep(1)
keepRunning = True

def numToStringF(num):
    # Make each pwm value 3 characters
    num = int(num)
    ans = str(num)
    if len(ans) == 1:
        return "00" + ans
    elif len(ans) == 2:
        return "0" + ans
    else:
        return ans

axisX = 0
axisY = 0
X = 25
up_button_last = False
down_button_last = False
left_button_last = False
right_button_last = False

while keepRunning:
    # for event in pygame.event.get():
    # time.sleep(1)
    if abs(round(joysticks[joy_index].get_axis(3), 1)) > joy_deadzone:
        axisY = (round(joysticks[joy_index].get_axis(3), 1)) * -base_line
    else:
        axisY = 0
    if abs(round(joysticks[joy_index].get_axis(0), 1)) > joy_deadzone:
        axisX = round(joysticks[joy_index].get_axis(0), 1) * base_line
    else:
        axisX = 0
    if abs(round(joysticks[joy_index].get_axis(2), 1)) > joy_deadzone:
        axisS = round(joysticks[joy_index].get_axis(2), 1) * -base_line
    else:
        axisS = 0
    print('Calculated:')
    print("X: " + str(axisX))
    print("S: " + str(axisS))
    print("Y: " + str(axisY) + "\n\n")
    print('-------')
    print('joystick axis:')
    print(round(joysticks[joy_index].get_axis(3), 1))
    print(round(joysticks[joy_index].get_axis(1), 1))
    print('-------')
    l1 = int(base_line+axisY+axisX-axisS)
    l2 = int(base_line+-axisY-axisX-axisS)
    r1 = int(base_line+axisY-axisX+axisS)
    r2 = int(base_line+-axisY+axisX+axisS)
    if l1 > 127:
        l1 = 127
    elif l1 < 0:
        l1 = 0
    if l2 > 127:
        l2 = 127
    elif l2 < 0:
        l2 = 0
    if r1 > 127:
        r1 = 127
    elif r1 < 0:
        r1 = 0
    if r2 > 127:
        r2 = 127
    elif r2 < 0:
        r2 = 0
    intake = int(base_line+joysticks[joy_index].get_button(14)*30)
    elevator = int(base_line+joysticks[joy_index].get_button(14)*30)
    print('the motors in sequence (l1,l2,r1,r2) are:')
    print((l1, l2, r1, r2))
    pygame.time.delay(50)
    pygame.event.pump()
    # time.sleep(0.1)
    MESSAGE = "d" + numToStringF(l1) + numToStringF(l2) + numToStringF(r1) + numToStringF(r2) + numToStringF(intake) + numToStringF(elevator)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
    sock.sendto(MESSAGE.encode("utf-8"), (UDP_IP, UDP_PORT))
    print('the message sent is')
    print(MESSAGE)
    print('-----')
I ended up using Firebase, which has a real-time database and seems suitable for data streaming. I still have to do some testing to make sure it's good enough for robot control.
As for interfacing Firebase with Python, you can use this library:
https://github.com/thisbejim/Pyrebase
This doesn't seem to be a topic with many resources on the internet. I hope more roboticists discuss it and post tutorials and resources if they do IoT-like robotics.

Python for Robotics: How to generate an app for localization of a Pepper robot

I have been trying to generate an application using the pythonapp template from the GitHub project robot-jumpstarter (https://github.com/aldebaran/robot-jumpstarter) to do localization of Pepper. My basic idea is to combine the LandmarkDetector module with the generated app "Lokalisierung" ("localization" in German).
You can read the whole code of "LandmarkDetector.py", "main.py" and "MainLandmarkDetection.py" here:
"LandmarkDetector.py":
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

"""Example: Demonstrates a way to localize the robot with
ALLandMarkDetection"""

import qi
import time
import sys
import argparse
import math
import almath


class LandmarkDetector(object):
    """
    We first instantiate a proxy to the ALLandMarkDetection module.
    Note that this module should be loaded on the robot's naoqi.
    The module outputs its results in ALMemory in a variable
    called "LandmarkDetected".
    We then read this ALMemory value and check whether we get
    interesting things.
    After that we get the position of the landmark relative to the robot.
    """

    def __init__(self, app):
        """
        Initialisation of qi framework and event detection.
        """
        super(LandmarkDetector, self).__init__()
        app.start()
        session = app.session
        # Get the service ALMemory.
        self.memory = session.service("ALMemory")
        # Connect the event callback.
        # Get the services ALMotion & ALRobotPosture.
        self.motion_service = session.service("ALMotion")
        self.posture_service = session.service("ALRobotPosture")
        self.subscriber = self.memory.subscriber("LandmarkDetected")
        print "self.subscriber = self.memory.subscriber(LandmarkDetected)"
        self.subscriber.signal.connect(self.on_landmark_detected)
        print "self.subscriber.signal.connect(self.on_landmark_detected)"
        # Get the services ALTextToSpeech, ALLandMarkDetection and ALMotion.
        self.tts = session.service("ALTextToSpeech")
        self.landmark_detection = session.service("ALLandMarkDetection")
        # print "self.landmark_detection" is repr(self.landmark_detection)
        self.motion_service = session.service("ALMotion")
        self.landmark_detection.subscribe("LandmarkDetector", 500, 0.0)
        print "self.landmark_detection.subscribe(LandmarkDetector, 500, 0.0 )"
        self.got_landmark = False
        # Set here the size of the landmark in meters.
        self.landmarkTheoreticalSize = 0.06  # in meters; 0.05 or 0.06?
        # Set here the current camera ("CameraTop" or "CameraBottom").
        self.currentCamera = "CameraTop"

    def on_landmark_detected(self, markData):
        """
        Callback for event LandmarkDetected.
        """
        while markData == []:  # empty value when the landmark disappears
            self.got_landmark = False
            self.motion_service.moveTo(0, 0, 0.1 * math.pi)
        if not self.got_landmark:  # only speak the first time a landmark appears
            self.got_landmark = True
            # stop.motion_service.moveTo
            print "Ich sehe eine Landmarke! "  # "I see a landmark!"
            self.tts.say("Ich sehe eine Landmarke! ")
            # Retrieve landmark center position in radians.
            wzCamera = markData[1][0][0][1]
            wyCamera = markData[1][0][0][2]
            # Retrieve landmark angular size in radians.
            angularSize = markData[1][0][0][3]
            # Compute distance to landmark.
            distanceFromCameraToLandmark = self.landmarkTheoreticalSize / (2 * math.tan(angularSize / 2))
            # Get current camera position in NAO space.
            transform = self.motion_service.getTransform(self.currentCamera, 2, True)
            transformList = almath.vectorFloat(transform)
            robotToCamera = almath.Transform(transformList)
            # Compute the rotation to point towards the landmark.
            cameraToLandmarkRotationTransform = almath.Transform_from3DRotation(0, wyCamera, wzCamera)
            # Compute the translation to reach the landmark.
            cameraToLandmarkTranslationTransform = almath.Transform(distanceFromCameraToLandmark, 0, 0)
            # Combine all transformations to get the landmark position in NAO space.
            robotToLandmark = robotToCamera * cameraToLandmarkRotationTransform * cameraToLandmarkTranslationTransform
            # robotTurnAroundAngle = almath.rotationFromAngleDirection(0, 1, 1, 1)
            # print "robotTurnAroundAngle = ", robotTurnAroundAngle
            print "x " + str(robotToLandmark.r1_c4) + " (in meters)"
            print "y " + str(robotToLandmark.r2_c4) + " (in meters)"
            print "z " + str(robotToLandmark.r3_c4) + " (in meters)"

    def run(self):
        """
        Loop on, wait for events until manual interruption.
        """
        # Wake up robot
        self.motion_service.wakeUp()
        # Send robot to Pose Init
        self.posture_service.goToPosture("StandInit", 0.5)
        # Example showing how to get a simplified robot position in world.
        useSensorValues = False
        result = self.motion_service.getRobotPosition(useSensorValues)
        print "Robot Position", result
        # Example showing how to use this information to know the robot's displacement.
        useSensorValues = False
        # initRobotPosition = almath.Pose2D(self.motion_service.getRobotPosition(useSensorValues))
        # Make the robot move
        for i in range(1, 12, 1):
            self.motion_service.moveTo(0, 0, 0.1 * math.pi)
            print "self.motion_service.moveTo(0, 0, (0.1)*math.pi)"
        print "Starting LandmarkDetector"
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            print "Interrupted by user, stopping LandmarkDetector"
            self.landmark_detection.unsubscribe("LandmarkDetector")
            # stop
            sys.exit(0)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="10.0.0.10",
                        help="Robot IP address. On robot or Local Naoqi: use '10.0.0.10'.")
    parser.add_argument("--port", type=int, default=9559,
                        help="Naoqi port number")
    args = parser.parse_args()
    try:
        # Initialize qi framework.
        connection_url = "tcp://" + args.ip + ":" + str(args.port)
        app = qi.Application(["LandmarkDetector", "--qi-url=" + connection_url])
    except RuntimeError:
        print ("Can't connect to Naoqi at ip \"" + args.ip + "\" on port " + str(args.port) + ".\n"
               "Please check your script arguments. Run with -h option for help.")
        sys.exit(1)
    landmark_detector = LandmarkDetector(app)
    landmark_detector.run()
"main.py":
""" A sample showing how to make a Python script as an app. """
version = "0.0.8"
copyright = "Copyright 2015, Aldebaran Robotics"
author = 'YOURNAME'
email = 'YOUREMAIL#aldebaran.com'
import stk.runner
import stk.events
import stk.services
import stk.logging
class Activity(object):
"A sample standalone app, that demonstrates simple Python usage"
APP_ID = "com.aldebaran.lokalisierung"
def __init__(self, qiapp):
self.qiapp = qiapp
self.events = stk.events.EventHelper(qiapp.session)
self.s = stk.services.ServiceCache(qiapp.session)
self.logger = stk.logging.get_logger(qiapp.session, self.APP_ID)
def on_touched(self, *args):
"Callback for tablet touched."
if args:
self.events.disconnect("ALTabletService.onTouchDown")
self.logger.info("Tablet touched: " + str(args))
self.s.ALTextToSpeech.say("Yay!")
self.stop()
def on_start(self):
"Ask to be touched, waits, and exits."
# Two ways of waiting for events
# 1) block until it's called
self.s.ALTextToSpeech.say("Touch my forehead.")
self.logger.warning("Listening for touch...")
while not self.events.wait_for("FrontTactilTouched"):
pass
# 2) explicitly connect a callback
if self.s.ALTabletService:
self.events.connect("ALTabletService.onTouchDown", self.on_touched)
self.s.ALTextToSpeech.say("okay, now touch my tablet.")
# (this allows to simltaneously speak and watch an event)
else:
self.s.ALTextToSpeech.say("touch my tablet ... oh. " + \
"I don't haave one.")
self.stop()
def stop(self):
"Standard way of stopping the application."
self.qiapp.stop()
def on_stop(self):
"Cleanup"
self.logger.info("Application finished.")
self.events.clear()
if __name__ == "__main__":
stk.runner.run_activity(Activity)
"MainLandmarkDetection.py":
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

"""A sample showing how to make a Python script as an app to localize
the robot with ALLandMarkDetection"""

version = "0.0.8"
copyright = "Copyright 2015, Aldebaran Robotics"
author = 'YOURNAME'
email = 'YOUREMAIL@aldebaran.com'

import stk.runner
import stk.events
import stk.services
import stk.logging
import time
import sys
import math
import almath


class Activity(object):
    "A sample standalone app, that demonstrates simple Python usage"
    APP_ID = "com.aldebaran.lokalisierung"

    def __init__(self, qiapp):
        self.qiapp = qiapp
        self.events = stk.events.EventHelper(qiapp.session)
        self.s = stk.services.ServiceCache(qiapp.session)
        self.logger = stk.logging.get_logger(qiapp.session, self.APP_ID)
        self.qiapp.start()
        session = qiapp.session
        # Get the service ALMemory.
        self.memory = session.service("ALMemory")
        # Connect the event callback.
        # Get the services ALMotion & ALRobotPosture.
        self.motion_service = session.service("ALMotion")
        self.posture_service = session.service("ALRobotPosture")
        self.subscriber = self.memory.subscriber("LandmarkDetected")
        print "self.subscriber = self.memory.subscriber(LandmarkDetected)"
        self.subscriber.signal.connect(self.on_landmark_detected)
        print "self.subscriber.signal.connect(self.on_landmark_detected)"
        # Get the services ALTextToSpeech, ALLandMarkDetection and ALMotion.
        self.tts = session.service("ALTextToSpeech")
        self.landmark_detection = session.service("ALLandMarkDetection")
        # print "self.landmark_detection" is repr(self.landmark_detection)
        self.motion_service = session.service("ALMotion")
        self.landmark_detection.subscribe("Activity", 500, 0.0)
        print "self.landmark_detection.subscribe(Activity, 500, 0.0 )"
        self.got_landmark = False
        # Set here the size of the landmark in meters.
        self.landmarkTheoreticalSize = 0.06  # in meters; 0.05 or 0.06?
        # Set here the current camera ("CameraTop" or "CameraBottom").
        self.currentCamera = "CameraTop"

    def on_landmark_detected(self, markData):
        """
        Callback for event LandmarkDetected.
        """
        while markData == []:  # empty value when the landmark disappears
            self.got_landmark = False
            # self.motion_service.moveTo(0, 0, 0.1 * math.pi)
        if not self.got_landmark:  # only speak the first time a landmark appears
            self.got_landmark = True
            # stop.motion_service.moveTo
            print "Ich sehe eine Landmarke! "  # "I see a landmark!"
            # self.tts.say("Ich sehe eine Landmarke! ")
            # Retrieve landmark center position in radians.
            wzCamera = markData[1][0][0][1]
            wyCamera = markData[1][0][0][2]
            # Retrieve landmark angular size in radians.
            angularSize = markData[1][0][0][3]
            # Compute distance to landmark.
            distanceFromCameraToLandmark = self.landmarkTheoreticalSize / (2 * math.tan(angularSize / 2))
            # Get current camera position in NAO space.
            transform = self.motion_service.getTransform(self.currentCamera, 2, True)
            transformList = almath.vectorFloat(transform)
            robotToCamera = almath.Transform(transformList)
            # Compute the rotation to point towards the landmark.
            cameraToLandmarkRotationTransform = almath.Transform_from3DRotation(0, wyCamera, wzCamera)
            # Compute the translation to reach the landmark.
            cameraToLandmarkTranslationTransform = almath.Transform(distanceFromCameraToLandmark, 0, 0)
            # Combine all transformations to get the landmark position in NAO space.
            robotToLandmark = robotToCamera * cameraToLandmarkRotationTransform * cameraToLandmarkTranslationTransform
            # robotTurnAroundAngle = almath.rotationFromAngleDirection(0, 1, 1, 1)
            # print "robotTurnAroundAngle = ", robotTurnAroundAngle
            print "x " + str(robotToLandmark.r1_c4) + " (in meters)"
            print "y " + str(robotToLandmark.r2_c4) + " (in meters)"
            print "z " + str(robotToLandmark.r3_c4) + " (in meters)"

    def run(self):
        """
        Loop on, wait for events until manual interruption.
        """
        # Wake up robot
        self.motion_service.wakeUp()
        # Send robot to Pose Init
        self.posture_service.goToPosture("StandInit", 0.5)
        # Example showing how to get a simplified robot position in world.
        useSensorValues = False
        result = self.motion_service.getRobotPosition(useSensorValues)
        print "Robot Position", result
        # Example showing how to use this information to know the robot's displacement.
        useSensorValues = False
        # initRobotPosition = almath.Pose2D(self.motion_service.getRobotPosition(useSensorValues))
        # Make the robot move
        for i in range(1, 20, 1):
            self.motion_service.moveTo(0, 0, 0.1 * math.pi)
            print "self.motion_service.moveTo(0, 0, (0.1)*math.pi)"
        print "Starting Activity"
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            print "Interrupted by user, stopping Activity"
            self.landmark_detection.unsubscribe("Activity")
            # stop
            sys.exit(0)

    def stop(self):
        "Standard way of stopping the application."
        self.qiapp.stop()

    def on_stop(self):
        "Cleanup"
        self.logger.info("Application finished.")
        self.events.clear()


if __name__ == "__main__":
    stk.runner.run_activity(Activity)
    landmark_detector = Activity()
    landmark_detector.run()
This is how it worked in cmd.exe:
I also have a question about the parameter of "landmark_detector = Activity()" in line 157, almost at the end of the program, because of the error shown in the image: which value should I pass? After reading the answers to the similar Stack Overflow question Python: TypeError: __init__() takes exactly 2 arguments (1 given), I am still confused. I think it should be a value which is given to "qiapp", but what value?
Best regards,
Frederik
Actually, when you call
stk.runner.run_activity(Activity)
... it will instantiate that activity object for you, with the right parameters; you don't need landmark_detector = Activity() etc. in MainLandmarkDetection.py.
And if this object has a method called on_start, that method will be called once everything is ready, so you may only need to rename your run() method to on_start().
Note also that instead of calling sys.exit(), you can just call self.stop() or self.qiapp.stop(), which is a bit cleaner in terms of letting the cleanup code in on_stop be called, if you need to unsubscribe from things etc.
See here for some documentation on stk.runner.
Note also that instead of doing
self.motion_service = session.service("ALMotion")
(...)
transform = self.motion_service.getTransform(self.currentCamera, 2, True)
you can directly do
transform = self.s.ALMotion.getTransform(self.currentCamera, 2, True)
... which (in my opinion) makes it easier to see exactly what is being done (and reduces the number of variables).

Segmentation fault: PyAudio recording in non-blocking mode

I've been trying to record audio using PyAudio until silence is met in the input stream, but a segmentation fault happens while running it. I don't think anything is wrong with the pyaudio/portaudio installed on my Raspberry Pi, because PyAudio runs the examples in its docs without any issue. I tried to debug it with pdb and gdb; these are the results:
Recording: Setting up
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x7652a298 in ?? ()
   from /usr/lib/python2.7/dist-packages/_portaudio.arm-linux-gnueabihf.so
(gdb) backtrace
#0  0x7652a298 in ?? ()
   from /usr/lib/python2.7/dist-packages/_portaudio.arm-linux-gnueabihf.so
#1  0x764f47b0 in Pa_GetDeviceInfo ()
   from /usr/lib/arm-linux-gnueabihf/libportaudio.so.2
#2  0x7effe2c4 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)
The PyAudio callback function:
def _callback(self, in_data, frame_count, time_info, status):  # pylint: disable=unused-argument
    debug = logging.getLogger('alexapi').getEffectiveLevel() == logging.DEBUG
    if not in_data:
        self._queue.put(False)
        return None, pyaudio.paAbort
    do_VAD = True
    if self._callback_data['force_record'] and not self._callback_data['force_record'][1]:
        do_VAD = False
    # do not count first 10 frames when doing VAD
    if do_VAD and (self._callback_data['frames'] < self._callback_data['throwaway_frames']):
        self._callback_data['frames'] += 1
    # now do VAD
    elif (self._callback_data['force_record'] and self._callback_data['force_record'][0]()) \
            or (do_VAD and (self._callback_data['thresholdSilenceMet'] is False)
                and ((time.time() - self._callback_data['start']) < self.MAX_RECORDING_LENGTH)):
        if do_VAD:
            if int(len(in_data) / 2) == self.VAD_PERIOD:
                isSpeech = self._vad.is_speech(in_data, self.VAD_SAMPLERATE)
                if not isSpeech:
                    self._callback_data['silenceRun'] += 1
                else:
                    self._callback_data['silenceRun'] = 0
                    self._callback_data['numSilenceRuns'] += 1
            # only count silence runs after the first one
            # (allow user to speak for total of max recording length if they haven't said anything yet)
            if (self._callback_data['numSilenceRuns'] != 0) \
                    and ((self._callback_data['silenceRun'] * self.VAD_FRAME_MS) > self.VAD_SILENCE_TIMEOUT):
                self._callback_data['thresholdSilenceMet'] = True
    else:
        self._queue.put(False)
        return None, pyaudio.paComplete
    self._queue.put(in_data)
    if debug:
        self._callback_data['audio'] += in_data
    return None, pyaudio.paContinue
These are actually adaptations of code that I found somewhere on the internet. I double-checked my device index and sample rate; there is nothing wrong with them.
Can someone help me sort it out?
The complete code is here.
pdb result
> /usr/lib/python2.7/dist-packages/pyaudio.py(438)__init__()
-> arguments['stream_callback'] = stream_callback
(Pdb) step
> /usr/lib/python2.7/dist-packages/pyaudio.py(441)__init__()
-> self._stream = pa.open(**arguments)
(Pdb) step
Segmentation fault
root#raspberrypi:/home/pi/Desktop# python -m pdb rp3test.py
I don't know, maybe it's just a bug in PyAudio and every library that uses PyAudio, such as python-sounddevice, since I tried it with the sounddevice library too. I finally made it work with this code:
def silence_listener(throwaway_frames, filename="recording.wav"):
    # Reenable reading microphone raw data
    inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL, alsa_card)
    inp.setchannels(1)
    inp.setrate(VAD_SAMPLERATE)
    inp.setformat(alsaaudio.PCM_FORMAT_S16_LE)
    inp.setperiodsize(VAD_PERIOD)
    audio = ""

    # Buffer as long as we haven't heard enough silence or the total size is within max size
    thresholdSilenceMet = False
    frames = 0
    numSilenceRuns = 0
    silenceRun = 0
    start = time.time()

    # do not count first 10 frames when doing VAD
    while frames < throwaway_frames:  # VAD_THROWAWAY_FRAMES
        l, data = inp.read()
        frames = frames + 1
        if l:
            audio += data
            isSpeech = vad.is_speech(data, VAD_SAMPLERATE)

    # now do VAD
    while (thresholdSilenceMet == False) and ((time.time() - start) < MAX_RECORDING_LENGTH):
        l, data = inp.read()
        if l:
            audio += data
            if l == VAD_PERIOD:
                isSpeech = vad.is_speech(data, VAD_SAMPLERATE)
                if isSpeech == False:
                    silenceRun = silenceRun + 1
                    # print "0"
                else:
                    silenceRun = 0
                    numSilenceRuns = numSilenceRuns + 1
                    # print "1"
        # only count silence runs after the first one
        # (allow user to speak for total of max recording length if they haven't said anything yet)
        if (numSilenceRuns != 0) and ((silenceRun * VAD_FRAME_MS) > VAD_SILENCE_TIMEOUT):
            thresholdSilenceMet = True
    if debug:
        print("End recording")
    rf = open(filename, 'w')
    rf.write(audio)
    rf.close()
    inp.close()
    return
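The silence-detection rule used above (both in the callback and in this listener) reduces to counting a run of consecutive non-speech frames once speech has been heard at least once; a minimal pure-Python sketch of just that rule (names and the 20 ms / 300 ms defaults are illustrative, not from any library):

```python
def silence_met(frame_is_speech, frame_ms=20, silence_timeout_ms=300):
    """Return True once a run of consecutive non-speech frames, occurring
    after at least one speech frame, exceeds silence_timeout_ms."""
    silence_run = 0
    num_silence_runs = 0  # incremented when speech resets the run, as above
    for is_speech in frame_is_speech:
        if is_speech:
            silence_run = 0
            num_silence_runs += 1
        else:
            silence_run += 1
        if num_silence_runs != 0 and silence_run * frame_ms > silence_timeout_ms:
            return True
    return False
```

Isolating the rule like this lets you unit-test the endpointing logic with synthetic True/False sequences, independent of the audio backend that is segfaulting.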

QThread locking up GUI (PySide)

I am trying to run a process in a separate thread, but it is freezing my GUI and I can't understand why.
I am initialising the thread in the __init__ function of my class:
self.cipher = Cipher()
self.cipher_thread = QThread()
self.cipher.moveToThread(self.cipher_thread)
self.cipher_thread.started.connect(lambda: self.cipher.encrypt(self.plaintext_file_path,
                                                               self.ciphertext_file_path,
                                                               self.init_vector,
                                                               self.key))
self.cipher_thread.start()
The encrypt method of the cipher class is:
def encrypt(self):
    # check that both the key and the initialisation vector are 16 bytes long
    if len(self.k) == self.key_byte_length and len(self.init_vector) == self.byte_length:
        if not self.used:
            self.used = True
            # get the padding bytes and store in a list
            self.padding_bytes = self.__getPaddingBytes()
            # generate sub keys
            # initial subkey is first four words from key
            init_subkey_words = []
            for i in range(0, self.key_byte_length - 3, 4):
                init_subkey_words.append(self.k[i:i+4])
            self.__genSubKeys(init_subkey_words)
            # read file and append the padding to it
            with open(self.plaintext_file_path, 'rb') as f:
                self.plaintext_data = bytearray(f.read())
            self.plaintext_data += self.padding_bytes
            # set total size
            self.total_size_bytes = len(self.plaintext_data)
            # insert the initialisation vector as the first 16 bytes in the ciphertext data
            self.ciphertext_data = self.init_vector
            '''
            begin encryption
            --------------------------------------------------------------------------------------------------------
            '''
            self.start_time = datetime.datetime.now()
            # loop through the file 16 bytes at a time
            for i in range(0, int(len(self.plaintext_data)), self.byte_length):  # i increases by 16 each loop
                # if self.block_time is not None:
                #     print('block time is', datetime.datetime.now()-self.block_time)
                self.block_time = datetime.datetime.now()
                # set the 16 byte state - bytearray object
                state = copy.deepcopy(self.plaintext_data[i:i+self.byte_length])
                # xor the state with the initialisation vector and first subkey
                for j in range(self.byte_length):
                    state[j] ^= self.init_vector[j]
                    state[j] ^= self.sub_keys[0][j]
                # round start
                # --------------------------------------------------------------------------------------------------
                for j in range(self.num_rounds):
                    self.current_round += 1  # increment current round counter
                    '''
                    arrange the data into a 4x4 matrix
                    [[1, 5, 9, 13],
                     [2, 6, 10, 14],
                     [3, 7, 11, 15],
                     [4, 8, 12, 16]]
                    '''
                    state_matrix = np.array(state)
                    state_matrix.resize(4, 4)
                    state_matrix.swapaxes(0, 1)
                    # byte substitution
                    # ----------------------------------------------------------------------------------------------
                    for row in state_matrix:
                        for byte in row:
                            byte = self.__sBoxSubstitution(byte)
                    # shift row - row k shifts left k places
                    # ----------------------------------------------------------------------------------------------
                    state_matrix = state_matrix.tolist()
                    for row in range(1, 4):
                        for l in range(0, row):
                            state_matrix[row].append(state_matrix[row].pop(0))
                    state_matrix = np.array(state_matrix)
                    # mix column - not included in last round
                    # ----------------------------------------------------------------------------------------------
                    if self.current_round is not self.num_rounds:
                        # swap axes of state matrix
                        state_matrix.swapaxes(0, 1)
                        # create temporary holder for the computed values
                        mixed_col_bytes = [[], [], [], []]
                        for k in range(4):
                            for l in range(4):
                                mixed_col_bytes[k].append(
                                    self.__GFMult(self.MIX_COL_MATRIX[l][0], state_matrix[k][0]) ^
                                    self.__GFMult(self.MIX_COL_MATRIX[l][1], state_matrix[k][1]) ^
                                    self.__GFMult(self.MIX_COL_MATRIX[l][2], state_matrix[k][2]) ^
                                    self.__GFMult(self.MIX_COL_MATRIX[l][3], state_matrix[k][3]))
                        # restore state matrix from temporary holder and swap axes back
                        state_matrix = np.array(copy.deepcopy(mixed_col_bytes))
                        state_matrix.swapaxes(0, 1)
                    # restore single bytearray state
                    state_matrix = state_matrix.flatten()
                    state_matrix = state_matrix.tolist()
                    state = bytearray(state_matrix)
                    # key addition
                    # ----------------------------------------------------------------------------------------------
                    for k in range(self.byte_length):
                        state[k] ^= self.sub_keys[self.current_round][k]
                self.ciphertext_data += state  # append state to ciphertext data
                self.init_vector = self.ciphertext_data[-16:]  # update the initialisation vector
                self.current_round = 0  # reset current round number
                self.completed_size_bytes += self.byte_length
                self.percent_done = (self.completed_size_bytes / self.total_size_bytes) * 100
                self.updateProgressSig.emit(int(self.percent_done))
            # finish encryption
            self.__saveEncryptedData()
            print('total encryption time:', datetime.datetime.now() - self.start_time)
            # finish
            self.finish(self.ciphertext_file_path)
    # either the key or the initialisation vector is not the correct length
    else:
        print('either the key length or the initialisation vector length is wrong')
        print('---')
        print('key length:', len(self.k))
        print('iv length:', len(self.init_vector))
The issue you are experiencing is that the function you connect to the started signal is not run in the thread; it runs in the context where it was created, which appears to be your UI thread.
Normally you would create a custom class that inherits from QThread, and put any code you want executed in the thread in that class's run() method. Like so:
class MyTask(QThread):
    def __init__(self):
        QThread.__init__(self)

    def run(self):
        print("Code to run in the thread goes here.")
If that seems like overkill you could just set the value of self.cipher_thread.run to your own function. Here's an example:
import time

from PySide.QtCore import QThread
from PySide import QtGui

app = QtGui.QApplication("")

def main():
    task = SomeTask()
    thread = QThread()

    # Just some variables to pass into the task
    a, b, c = (1, 2, 3)
    thread.run = lambda: task.runTask(a, b, c)

    print("Starting thread")
    thread.start()

    # Doing this so the application does not exit while we wait for the thread to complete.
    while not thread.isFinished():
        time.sleep(1)

    print("Thread complete")

class SomeTask():
    def runTask(self, a, b, c):
        print(a, b, c)
        print("runTask Started")
        time.sleep(5)
        print("runTask Complete")

if __name__ == "__main__":
    main()
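The thread-affinity point above can be shown without Qt at all, using the standard library's threading module. This is a minimal sketch (the task function and results dict are illustrative, not part of the code above): calling a function directly runs it in the caller's thread, while installing it as a thread's target runs it in the new thread, much like putting work in QThread.run().

```python
import threading

results = {}

def task(label):
    # record which thread actually executes the body
    results[label] = threading.current_thread().name

# calling the function directly runs it in the caller's thread,
# much like a slot being invoked in the thread where it was created
task("direct")

# running it as the thread's target executes it in the new thread,
# analogous to putting the work in QThread.run()
worker = threading.Thread(name="worker", target=task, args=("in thread",))
worker.start()
worker.join()

print(results["direct"])     # MainThread
print(results["in thread"])  # worker
```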
As Ekhumoro suggested, I was running into issues with the GIL. Using the multiprocessing module has worked for me.
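Following that suggestion, a process pool is the usual way to get CPU-bound work past the GIL. Below is a minimal sketch with a stand-in XOR transform (hypothetical, not the AES code above); note that CBC-style chaining makes each block depend on the previous ciphertext, so per-block parallelism only fits modes without that dependency.

```python
from multiprocessing import Pool

def transform_block(block):
    # stand-in for a CPU-bound per-block transformation
    return bytes(b ^ 0x5A for b in block)

def encrypt_blocks(blocks, processes=4):
    # each worker is a separate process with its own interpreter,
    # so the GIL no longer serialises the CPU-bound work
    with Pool(processes=processes) as pool:
        return pool.map(transform_block, blocks)

if __name__ == "__main__":
    blocks = [bytes([i] * 16) for i in range(8)]
    print(len(encrypt_blocks(blocks)))  # 8
```

Pool.map preserves input order, so the ciphertext blocks come back in the same sequence they were submitted.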
