Update variable from imported module with Python when using Threading

In a project, I would like to separate the visualization and the calculation into two different modules. The goal is to transfer the variables of the calculation module to a main script, in order to visualize them with the visualization script.
Following this post,
Using global variables between files?,
I am now able to use a config script to transfer a variable between two scripts. But unfortunately, this does not work when using threading. The output of main.py is always "get: 1".
Does anyone have an idea?
main.py:
from threading import Thread
from time import sleep
import viz
import change
add_Thread = Thread(target=change.add)
add_Thread.start()
viz.py:
import config
from time import sleep
while True:
    config.init()
    print("get:", config.x)
    sleep(1)
config.py:
x = 1
def init():
    global x
change.py:
import config
def add():
    while True:
        config.x += 1
        config.init()

OK, found the answer myself. The problem was in "main.py": one has to put the "import viz" after starting the thread:
from threading import Thread
from time import sleep
import change
add_Thread = Thread(target=change.add)
add_Thread.start()
import viz
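This works because importing viz executes its module-level while-loop, so nothing placed after import viz ever runs; starting the thread first lets change.add() begin incrementing before the loop blocks the main thread. A cleaner structure, sketched below under my own naming (the run() function and the daemon flag are not from the original code, and the no-op config.init() call is dropped), keeps viz importable without side effects:
# viz.py -- the loop lives in a function, so importing viz no longer blocks
import config
from time import sleep

def run():
    while True:
        print("get:", config.x)
        sleep(1)

# main.py -- start the worker thread first, then enter the display loop
from threading import Thread
import change
import viz

add_Thread = Thread(target=change.add, daemon=True)
add_Thread.start()
viz.run()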

Related

Adafruit Doesn't work with Multiprocessing

This error can be reproduced with any Adafruit device; this example is for GPS.
I have tested several Adafruit products, and they are all great quality. However, they all seem to present the same problem when used with the multiprocessing module: the script does not run and throws a Segmentation fault (core dumped). The script runs with threading but not with multiprocessing.
This does not work:
import time
import board
import adafruit_bno055
import threading
import multiprocessing
fpsFilt = 0
timeStamp = 0
i2c = board.I2C()
sensor = adafruit_bno055.BNO055_I2C(i2c)
def test():
    while True:
        print("Quaternion: {}".format(sensor.quaternion))
Gps = multiprocessing.Process(target=test)
Gps.start()
But this works:
import time
import board
import adafruit_bno055
import threading
import multiprocessing
fpsFilt = 0
timeStamp = 0
i2c = board.I2C()
sensor = adafruit_bno055.BNO055_I2C(i2c)
def test():
    while True:
        print("Quaternion: {}".format(sensor.quaternion))
Gps = threading.Thread(target=test)
Gps.start()
Is there any way to use an Adafruit product with multiprocessing? Thanks.
Try this program. I have eliminated all the global variables, initialized the device entirely in the secondary Process, and protected the program's entry point with a test for __main__. These are all standard practices when writing this type of program.
Otherwise it is the same code as your program.
import time
import board
import adafruit_bno055
import threading
import multiprocessing
def test():
    i2c = board.I2C()
    sensor = adafruit_bno055.BNO055_I2C(i2c)
    while True:
        print("Quaternion: {}".format(sensor.quaternion))

def main():
    Gps = multiprocessing.Process(target=test)
    Gps.start()

if __name__ == "__main__":
    main()
    while True:
        time.sleep(1.0)
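A plausible explanation, offered as an assumption rather than a confirmed diagnosis: multiprocessing starts the child by forking the parent, and an I2C device handle created before the fork may not survive it in a usable state, which would be consistent with the segmentation fault. A thread shares the parent's handle directly, which is why the threading version runs. Creating the handle inside the child process, as above, means nothing hardware-related ever crosses the process boundary.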

Sharing variables across threads

I already tried to solve my issue via existing posts; however, I did not manage to. Thanks for your help in advance.
The objective is to share a variable across threads. Please find the code below. It keeps printing '1' for the account variable, although I want it to print '2'. Any suggestions why?
main.py:
account = 1
import threading
import cfg
import time
if __name__ == "__main__":
    thread_cfg = threading.Thread(target=cfg.global_cfg(), args=())
    thread_cfg.start()
    time.sleep(5)
    print(account)
cfg.py:
def global_cfg():
    global account
    account = 2
    return()
Globals are not shared across files. (Note also that target=cfg.global_cfg() calls the function immediately and passes its return value to the thread; you want to pass the function itself, target=cfg.global_cfg.)
If we disregard locks and other synchronization primitives, just put account inside cfg.py:
account = 1
def global_cfg():
    global account
    account = 2
    return
And inside main.py:
import threading
import time
import cfg
if __name__ == "__main__":
    thread_cfg = threading.Thread(target=cfg.global_cfg, args=())
    print(cfg.account)
    thread_cfg.start()
    time.sleep(5)
    print(cfg.account)
Running it:
> py main.py
1
2
In more advanced cases, you should use Locks, Queues and other structures, but that's out of scope.
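As a minimal sketch of that direction (the names counter and counter_lock are my own, not from the question), a threading.Lock guards a shared value so concurrent updates do not interleave:
import threading

counter = 0
counter_lock = threading.Lock()

def increment():
    global counter
    with counter_lock:  # only one thread may update counter at a time
        counter += 1

threads = [threading.Thread(target=increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # prints 10 regardless of thread scheduling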

AttributeError: module 'apscheduler.schedulers.asyncio' has no attribute 'get_event_loop'

I am trying to implement a clock process to use in a Heroku dyno. I am using Python 3.6. The clock process will run every 3 hours. This is the code:
import os
import sys
import requests
from apscheduler.schedulers import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.interval import IntervalTrigger
from webdriverdownloader import GeckoDriverDownloader
from scraper.common import main
def get_driver():
    return True

def notify(str):
    return True
if __name__ == '__main__':
    scheduler = AsyncIOScheduler()
    get_driver()
    scheduler.add_job(main, trigger=IntervalTrigger(hours=3))
    scheduler.start()

    # Execution will block here until Ctrl+C (Ctrl+Break on Windows) is pressed.
    try:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(asyncio.wait())
    except (KeyboardInterrupt, SystemExit):
        pass
At first I tried with
asyncio.get_event_loop().run_forever()
However, I read that this is not supported in Python 3.6, so I changed this statement to run_until_complete.
If I run this example, the code prints out:
AttributeError: module 'apscheduler.schedulers.asyncio' has no attribute 'get_event_loop'
Does anyone know why this error occurs? Any help would be much appreciated. Thanks in advance!
You're not importing the asyncio module from the standard library; you're importing the asyncio scheduler module in the apscheduler library.
There are only two things you can import from that namespace:
run_in_event_loop
AsyncIOScheduler
If you need to use the low-level asyncio API just import asyncio directly from the standard library.
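As a minimal sketch of the fix, assuming APScheduler 3.x and using main_job as a stand-in for the original scraper.common.main, import asyncio from the standard library and let its event loop run forever:
import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.interval import IntervalTrigger

def main_job():
    # stand-in for scraper.common.main
    print("running scheduled job")

if __name__ == '__main__':
    scheduler = AsyncIOScheduler()
    scheduler.add_job(main_job, trigger=IntervalTrigger(hours=3))
    scheduler.start()
    try:
        # this asyncio is the standard-library module, so
        # get_event_loop() and run_forever() are available
        asyncio.get_event_loop().run_forever()
    except (KeyboardInterrupt, SystemExit):
        pass
Contrary to the worry in the question, asyncio.get_event_loop().run_forever() does work on Python 3.6.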

How to trigger a function without importing everything?

I programmed a gateway to an opcua-server with python-opcua.
The gateway subscribes to some values on the opcua-server. That is working well and fast.
Now I want to call a script that writes to the opcua-server.
In principle, that works too. But because I have to import the whole gateway (and all the opcua stuff), it is very slow...
My question: is it possible to trigger a function in my class instance without importing everything?
To start e.g. the function setBool(), I have to import Gateway...
#!/usr/bin/env python3.5 -u
# -*- coding: utf-8 -*-
import time
import sys
import logging
from logging.handlers import RotatingFileHandler
from threading import Thread
from opcua import Client
from opcua import ua
from subscribeOpcua import SubscribeOpcua
from cmdHandling import CmdHandling
from keepConnected import KeepConnected
class Gateway(object):
    def __init__(self):
        OPCUA_IP = '1.25.222.222'
        OPCUA_PORT = '4840'
        OPCUA_URL = "opc.tcp://{}:{}".format(OPCUA_IP, str(OPCUA_PORT))
        addr = "OPCUA-URL:{}.".format(OPCUA_URL)
        # Setting up opcua-handler
        self.client = Client(OPCUA_URL)
        self.opcuaHandlers = [SubscribeOpcua()]
        # Connect to opcua
        self.connecter = KeepConnected(self.client, self.opcuaHandlers)
        self.connecter.start()

    def setBool(self, client):
        """Set a boolean variable on the opcua-server."""
        path = ["0:Objects","2:DeviceSet"...]
        root = client.get_root_node()
        cmd2opcua = root.get_child(path)
        cmd2opcua.set_value(True)

if __name__ == "__main__":
    """Open connecter when gateway is opened directly."""
    connect = Gateway()
The only way to prevent code from running when a module is imported is to put it inside a function:
def import_first_part():
    global re
    global defaultdict
    print('import this first part')
    # the import happens locally, because when you do `import re`
    # it is actually re = __import__('re')
    import re
    from collections import defaultdict

def import_second_part():
    print('import pandas')
    # really an unnecessary check: if we import pandas a second
    # time, Python just retrieves the existing module object; the
    # module's code is executed only on the first import in the
    # life of the application.
    if 'pandas' in globals():
        return
    global pandas
    import pandas

def use_regex():
    import_first_part()
    # do something here

if __name__ == '__main__':
    use_regex()
    re.search('x', 'xb')  # works fine
I checked that 'pandas' is in the global scope before reimporting it, but this is really not necessary: when you import a module a second time it is just retrieved from the module cache, with no heavy computation happening again.
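Applied to the gateway case, a minimal sketch under my own assumptions (the gateway module name and the way the client is reached are guesses, not from the original code) defers the heavy import until the write is actually requested:
def set_bool_on_server():
    # deferred import: the gateway module, and all the opcua machinery
    # it pulls in, is loaded only when this function is first called
    from gateway import Gateway  # hypothetical module name

    gw = Gateway()
    gw.setBool(gw.client)
Importing the module that contains set_bool_on_server stays cheap; the import cost is paid once, on the first call.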

How to run multiple test case from python nose test

I am a newbie in the process of learning Python and currently working on an automation project.
I have N test cases which need to be run; from reading material, people suggest I use nosetest.
What is the way to run multiple test cases using nosetest?
And is this the correct approach:
import threading
import time
import logging
import GLOBAL
import os
from EPP import EPP
import Queue
import unittest
global EPP_Queue
from test1 import test1
from test2 import test2
logging.basicConfig(level=logging.DEBUG,
format='(%(threadName)-10s) %(message)s',
)
class all_test(threading.Thread, unittest.TestCase):
    def cleanup():
        if os.path.exists("/dev/epp_dev"):
            os.unlink("/dev/epp_dev")
        print "starts here"
        server_ip = '192.168.10.15'
        EppQueue = Queue.Queue(1)
        EPP = threading.Thread(name='EPP', target=EPP,
                               args=('192.168.10.125', 54321, '/dev/ttyS17',
                                     EppQueue,))
        EPP.setDaemon(True)
        EPP.start()
        time.sleep(5)
        suite1 = unittest.TestLoader().loadTestsFromTestCase(test1)
        suite2 = unittest.TestLoader().loadTestsFromTestCase(test2)
        return unittest.TestSuite([suite1, suite2])
        print "final"
        raw_input("keyy")

def main():
    unittest.main()

if __name__ == '__main__':
    main()
Read
http://ivory.idyll.org/articles/nose-intro.html.
Download the package
http://darcs.idyll.org/~t/projects/nose-demo.tar.gz
Follow the instructions provided in the first link.
nose, when run from the command line as 'nosetests' (or a versioned variant like 'nosetests-2.6'), will recursively hunt for tests in the directory you execute it in.
So if you have a directory holding N tests, just execute it in that directory; they will all be executed.
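For illustration, a minimal sketch of what nose picks up automatically (the file and function names are my own, chosen to match nose's default test* pattern):
# test_math.py -- nose collects files, classes and functions whose
# names match its test pattern; no suite wiring is required
def test_addition():
    assert 1 + 1 == 2

def test_subtraction():
    assert 5 - 3 == 2
Running nosetests in the directory containing this file discovers and executes both functions.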
