This error can be reproduced with any Adafruit device; this example is for a GPS.
I have tested several Adafruit products and they are all great quality. However, they all seem to present the same problem when used with the multiprocessing module: the script does not run and throws a Segmentation fault (core dumped). The script runs with threading but not with multiprocessing.
This does not work:
import time
import board
import adafruit_bno055
import threading
import multiprocessing

fpsFilt = 0
timeStamp = 0

i2c = board.I2C()
sensor = adafruit_bno055.BNO055_I2C(i2c)

def test():
    while True:
        print("Quaternion: {}".format(sensor.quaternion))

Gps = multiprocessing.Process(target=test)
Gps.start()
But this works:
import time
import board
import adafruit_bno055
import threading
import multiprocessing

fpsFilt = 0
timeStamp = 0

i2c = board.I2C()
sensor = adafruit_bno055.BNO055_I2C(i2c)

def test():
    while True:
        print("Quaternion: {}".format(sensor.quaternion))

Gps = threading.Thread(target=test)
Gps.start()
Is there any way to use an Adafruit product with multiprocessing? Thanks.
Try this program. I have eliminated all the global variables, initialized the device entirely in the secondary Process, and protected the program's entry point with a test for __main__. These are all standard practices when writing this type of program.
Otherwise it is the same code as your program.
import time
import board
import adafruit_bno055
import threading
import multiprocessing

def test():
    i2c = board.I2C()
    sensor = adafruit_bno055.BNO055_I2C(i2c)
    while True:
        print("Quaternion: {}".format(sensor.quaternion))

def main():
    Gps = multiprocessing.Process(target=test)
    Gps.start()

if __name__ == "__main__":
    main()
    while True:
        time.sleep(1.0)
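For what it's worth, the crash is consistent with the child process inheriting the parent's already-open I2C/board handles when multiprocessing forks, which is why initializing the sensor inside the child (as above) helps. If you want to be explicit about it, you can also force the "spawn" start method so the child starts from a fresh interpreter. A minimal sketch of that idea (untested on hardware, same BNO055 setup assumed):

import multiprocessing

def test():
    # Import and initialize the hardware only inside the child process.
    import board
    import adafruit_bno055
    i2c = board.I2C()
    sensor = adafruit_bno055.BNO055_I2C(i2c)
    while True:
        print("Quaternion: {}".format(sensor.quaternion))

if __name__ == "__main__":
    # "spawn" starts a fresh interpreter instead of forking the parent,
    # so no open hardware handles are inherited.
    ctx = multiprocessing.get_context("spawn")
    gps = ctx.Process(target=test)
    gps.start()
    gps.join()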
Related
I programmed a gateway to an OPC UA server with python-opcua.
The gateway subscribes to some values on the OPC UA server. That works well and fast.
Now I want to call a script that writes to the OPC UA server.
In principle, it works too. But because I have to import the whole gateway (and all the OPC UA stuff), it is very slow...
My question: Is it possible to trigger a function in my class instance without importing everything?
To start e.g. the function setBool(), I have to import Gateway...
#!/usr/bin/env python3.5 -u
# -*- coding: utf-8 -*-

import time
import sys
import logging
from logging.handlers import RotatingFileHandler
from threading import Thread

from opcua import Client
from opcua import ua

from subscribeOpcua import SubscribeOpcua
from cmdHandling import CmdHandling
from keepConnected import KeepConnected

class Gateway(object):
    def __init__(self):
        OPCUA_IP = '1.25.222.222'
        OPCUA_PORT = '4840'
        OPCUA_URL = "opc.tcp://{}:{}".format(OPCUA_IP, str(OPCUA_PORT))
        addr = "OPCUA-URL:{}.".format(OPCUA_URL)

        # Setting up opcua-handler
        self.client = Client(OPCUA_URL)
        self.opcuaHandlers = [SubscribeOpcua()]

        # Connect to opcua
        self.connecter = KeepConnected(self.client, self.opcuaHandlers)
        self.connecter.start()

    def setBool(self, client):
        """Set a boolean variable on the opcua-server."""
        path = ["0:Objects","2:DeviceSet"...]
        root = client.get_root_node()
        cmd2opcua = root.get_child(path)
        cmd2opcua.set_value(True)

if __name__ == "__main__":
    """Open connecter when gateway is opened directly."""
    connect = Gateway()
The only way to prevent code from running when importing a module is to put it inside a function:
def import_first_part():
    global re
    global defaultdict
    print('import this first part')
    # The import happens locally, because `import re`
    # is effectively `re = __import__('re')`.
    import re
    from collections import defaultdict

def import_second_part():
    print('import pandas')
    # This check is really unnecessary: if we import pandas a second time,
    # Python just retrieves the already-loaded module object.
    # The module's code is executed only on the first import
    # in the life of the application.
    if 'pandas' in globals():
        return
    global pandas
    import pandas

def use_regex():
    import_first_part()
    # do something here

if __name__ == '__main__':
    use_regex()
    re.search('x', 'xb')  # works fine
I check whether 'pandas' is already in the global scope before importing it again, but this is really not necessary: when you import a module a second time it is simply retrieved from the module cache, with no heavy work done again.
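A quick way to see the module cache in action (a self-contained sketch using a stdlib module instead of pandas):

import sys
import time

t0 = time.time()
import json                      # first import: the module's code actually runs
print("first import:", time.time() - t0)

t0 = time.time()
import json                      # second import: just a lookup in sys.modules
print("second import:", time.time() - t0)

print('json' in sys.modules)     # True: the loaded module object is cached here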
In a project, I would like to separate the visualization and the calculation into two different modules. The goal is to transfer the variables of the calculation module to a main script, in order to visualize them with the visualization script.
Following this post,
Using global variables between files?,
I am now able to use a config script to transfer a variable between two scripts. But unfortunately, this does not work when using threading. The output of main.py is always "get: 1".
Does anyone have an idea?
main.py:
from threading import Thread
from time import sleep
import viz
import change
add_Thread = Thread(target=change.add)
add_Thread.start()
viz.py:
import config
from time import sleep
while True:
    config.init()
    print("get:", config.x)
    sleep(1)
config.py:
x = 1
def init():
    global x
change.py:
import config
def add():
    while True:
        config.x += 1
        config.init()
OK, I found the answer myself. The problem was in main.py: one has to put the "import viz" after starting the thread:
from threading import Thread
from time import sleep
import change
add_Thread = Thread(target=change.add)
add_Thread.start()
import viz
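Relying on import order like this is fragile, though. An alternative sketch of the same idea (not tested against your project layout) is to keep the loop out of viz.py's module level and expose it as a function that main.py calls once the thread is running:

# viz.py
import config
from time import sleep

def run():
    while True:
        print("get:", config.x)
        sleep(1)

# main.py
from threading import Thread
import change
import viz

add_Thread = Thread(target=change.add)
add_Thread.start()
viz.run()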
I want to detect a change on a GPIO input of a Raspberry Pi and set a handler using Python's signal module. I am new to the signal module and I can't understand how to use it. This is the code I am using now:
import RPi.GPIO as GPIO
import time
from datetime import datetime
import picamera

i = 0
j = 0

camera = picamera.PiCamera()
camera.resolution = (640, 480)

# handle the button event
def buttonEventHandler(pin):
    global j
    j += 1
    #camera.close()
    print "handling button event"
    print("pressed", str(datetime.now()))
    time.sleep(4)
    camera.capture('clicked%02d.jpg' % j)
    #camera.close()

def main():
    GPIO.setmode(GPIO.BCM)
    GPIO.setwarnings(False)
    GPIO.setup(2, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.add_event_detect(2, GPIO.FALLING)
    GPIO.add_event_callback(2, buttonEventHandler)
    # RPIO.add_interrupt_callback(2,buttonEventHandler,falling,RPIO.PUD_UP,False,None)
    while True:
        global i
        print "Hello world! {0}".format(i)
        i = i + 1
        time.sleep(5)
        # if(GPIO.input(2)==GPIO.LOW):
        #     GPIO.cleanup()

if __name__ == "__main__":
    main()
I just changed the code in a different manner, though you are free to implement the same thing using the signal module. You can start a new thread and poll or register a callback event there, using the following code, and put your functional logic in its run() method.
import threading
import RPi.GPIO as GPIO
import time
from datetime import datetime
import picamera

i = 0
j = 0

camera = picamera.PiCamera()
camera.resolution = (640, 480)
PIN = 2

class GPIOThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        while True:
            if GPIO.input(PIN) == False:  # adjust this test to your pin's active state, i.e. HIGH/LOW
                global j
                j += 1
                #camera.close()
                print "handling button event"
                print("pressed", str(datetime.now()))
                time.sleep(4)
                camera.capture('clicked%02d.jpg' % j)

def main():
    GPIO.setmode(GPIO.BCM)
    GPIO.setwarnings(False)
    GPIO.setup(PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.add_event_detect(PIN, GPIO.FALLING)

    gpio_thread = GPIOThread()
    gpio_thread.start()

    while True:
        global i
        print "Hello world! {0}".format(i)
        i = i + 1
        time.sleep(5)

if __name__ == "__main__":
    main()
The above code keeps polling the pin; when the PIN input reads the level tested for (low as written, since the pin is pulled up and the button pulls it low), the condition inside the run method's while loop is satisfied and a picture is captured.
So, in order to start the above thread, do this:
gpio_thread = GPIOThread()
gpio_thread.start()
This calls the thread constructor (__init__), initializes any variables in the constructor, and then executes the run method.
You can also call the join() method to wait until the thread completes its execution:
gpio_thread.join()
This always works for me, so Cheers!!
How can I get returned values from a method in another instance of multiprocessing.Process?
I have two files:
file hwmgr.py:
import multiprocessing as mp
from setproctitle import setproctitle
import smbus
import myLoggingModule as log
class HWManager(mp.Process):
    def __init__(self, cmd_q, res_q):
        mp.Process.__init__(self)
        self.i2c_lock = mp.Lock()
        self.commandQueue = cmd_q
        self.responseQueue = res_q

    def run(self):
        setproctitle('hwmgr')
        while True:
            cmd, args = self.commandQueue.get()
            if cmd is None: self.terminate()
            method = self.__getattribute__(cmd)
            result = method(**args)
            if result is not None:
                self.responseQueue.put(result)

    def get_voltage(self):
        with self.i2c_lock:
            # ...do i2c stuff to get a voltage with the smbus module
            return voltage
file main.py:
import multiprocessing as mp
import hwmgr
cmd_q = mp.Queue()
res_q = mp.Queue()
hwm = hwmgr.HWManager(cmd_q, res_q)
hwm.start()
cmd_q.put(('get_voltage', {}))
battery = res_q.get()
print battery
While this solution works, the complexity of the HWManager process is likely to grow in the future, and other processes are spawned from main.py (the code is simplified) which use the same mechanism. There is obviously a chance that the wrong process will get the wrong return data from its res_q.get() call.
What would be a more robust way of doing this?
(I'm trying to avoid having one return mp.Queue for each other process, as this would require reworking the HWManager class each time to accommodate the additional Queues.)
OK - WIP code is as follows:
hwmgr.py:
import multiprocessing as mp
from multiprocessing.connection import Listener
from setproctitle import setproctitle
import smbus
class HWManager(mp.Process):
    def __init__(self):
        mp.Process.__init__(self)
        self.i2c_lock = mp.Lock()

    def run(self):
        setproctitle('hwmgr')
        self.listener = Listener('/tmp/hw_sock', 'AF_UNIX')
        with self.i2c_lock:
            pass  # Set up I2C bus to take ADC readings
        while True:
            conn = self.listener.accept()
            cmd, args = conn.recv()
            if cmd is None: self.terminate()
            method = self.__getattribute__(cmd)
            result = method(**args)
            conn.send(result)

    def get_voltage(self):
        with self.i2c_lock:
            voltage = 12.0  # Actually, do i2c stuff to get a voltage with the smbus module
            return voltage
file client.py
import multiprocessing as mp
from multiprocessing.connection import Client
from setproctitle import setproctitle
from time import sleep
class HWClient(mp.Process):
    def __init__(self):
        mp.Process.__init__(self)
        self.client = Client('/tmp/hw_sock', 'AF_UNIX')

    def run(self):
        setproctitle('client')
        while True:
            self.client.send(('get_voltage', {}))
            battery = self.client.recv()
            print battery
            sleep(5)
main.py:
import hwmgr
import client
cl = client.HWClient()  # Putting these two lines here gives one error (conn refused)...
cl.start()

hwm = hwmgr.HWManager()
hwm.start()

# cl = client.HWClient()  # ...or putting them here gives the other (in use)
# cl.start()
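One likely cause of the "conn refused" case (a guess from the code shown, assuming Python 3) is a race: HWClient opens its Client connection in __init__, in the parent process, before HWManager's Listener exists. A sketch of client.py that instead connects inside run(), opening one connection per request to match HWManager's accept-per-iteration loop, and retrying until the listener is up:

import multiprocessing as mp
from multiprocessing.connection import Client
from setproctitle import setproctitle
from time import sleep

class HWClient(mp.Process):
    def run(self):
        setproctitle('client')
        while True:
            # One connection per request, because HWManager's run() loop
            # accepts a fresh connection on every iteration.
            try:
                conn = Client('/tmp/hw_sock', 'AF_UNIX')
            except OSError:     # listener not up yet; retry shortly
                sleep(0.5)
                continue
            conn.send(('get_voltage', {}))
            print(conn.recv())
            conn.close()
            sleep(5)

If the listener side is what reports "in use", removing a stale /tmp/hw_sock left over from a previous run before binding may also help.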
This sounds like it calls for a standard client-server architecture. You can use UNIX domain sockets (or named pipes on Windows). The multiprocessing module makes it super easy to pass python objects between processes. Sample structure of server code:
from multiprocessing.connection import Listener
from threading import Thread
from Queue import Queue  # 'from queue import Queue' on Python 3

listener = Listener('somefile', 'AF_UNIX')
queue = Queue()

def worker():
    while True:
        conn, cmd = queue.get()
        result = execute_cmd(cmd)
        conn.send(result)
        queue.task_done()

for i in range(num_worker_threads):
    t = Thread(target=worker)
    t.daemon = True
    t.start()

while True:
    conn = listener.accept()
    cmd = conn.recv()
    queue.put((conn, cmd))  # Processing happens in the worker threads, which write the result back to conn
The client side would look like:
from multiprocessing.connection import Client
client = Client('somefile', 'AF_UNIX')
client.send(cmd)
result = client.recv()
The above code uses threads for workers, but you could just as easily have processes for workers using the multiprocessing module. See the docs for details.
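For example, a minimal process-based sketch (assuming a Unix fork start method, so the accepted connection is simply inherited by the child, and that execute_cmd is defined as above) handles each accepted connection in its own worker process:

from multiprocessing import Process
from multiprocessing.connection import Listener

def handle(conn):
    # Receive one command, execute it, and reply on the same connection.
    cmd = conn.recv()
    conn.send(execute_cmd(cmd))
    conn.close()

listener = Listener('somefile', 'AF_UNIX')
while True:
    conn = listener.accept()
    p = Process(target=handle, args=(conn,))
    p.daemon = True
    p.start()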
Okay, time for another question/post...
So currently I am trying to develop a simple Python program that has a WebKit webpage view and a serial port interface. Not that it should matter, but this is also running on a Raspberry Pi.
The following code works fine, but it freezes the system as soon as I uncomment the serial port line that you can see commented out.
The day has been long and this one for some reason has my brain fried. Python is not my strongest point, but mind you this is just a quick test script for now... Yes, I have used Google and other resources...
#!/usr/bin/env python
import sys
import serial
import threading
import time
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *

sURL = ""
sURL2 = ""
objSerial = serial.Serial(0)

def SerialLooper():
    global objSerial
    if objSerial.isOpen() == True:
        print("is_responding")
        #objSerial.write("is_responding")
    time.sleep(10)
    SerialLooper()

class TestCLASS(object):
    def __init__(self):
        global sURL
        global sURL2
        global objSerial
        objSerial = serial.Serial(0)
        sURL = "http://localhost/tester"
        app = QApplication(sys.argv)
        webMain = QWebView()
        webMain.loadFinished.connect(self.load_finished)
        webMain.load(QUrl(sURL))
        webMain.show()
        thread = threading.Thread(target=SerialLooper)
        thread.start()
        sys.exit(app.exec_())

    def load_finished(self, boolNoErrors):
        global sURL
        print("Url - " + sURL)
        #something here
        #something else here

newObjClass = TestCLASS()
EDIT
Further to this, it appears it is not the multithreading but the serial.write().
It has been a while since I used serial, but IIRC it is not thread-safe (on Windows, at least). You are opening the port in the main thread and performing a write in another thread; that is bad practice anyway. You might also consider writing a simple single-threaded program to see if the serial port is actually working (see the sketch after the loop example below).
PS: Your program structure could use some work. You only need one of the global statements (global objSerial); the rest do nothing. It would be better to get rid of that one, too.
And the recursive call to SerialLooper() will eventually fail when the recursion depth is exceeded; why not just use a while loop...
def SerialLooper():
    while objSerial.isOpen():  # Drop the == True
        # print something
        # write to the port
        # sleep or do whatever
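And for the single-threaded check mentioned above, something like this (the device path and baud rate are assumptions; adjust them for your hardware):

import serial

port = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)  # or serial.Serial(0) as in your script
print(port.isOpen())
port.write(b"is_responding\n")
print(port.readline())
port.close()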