Python: Importing a variable inside of an infinite loop

I have two modules, a host and a scanner. Both loop indefinitely to communicate with the serial ports. I want to import the variable "bestchannel" from scanner into host, but importing it causes the while loop inside scanner to run first and loop forever. I want each module to run separately but still be able to send each other data in real time. Is this possible (outside of scanning RAM)?
Sample Code:
Host Loop----------------------------------------------
while True:
    ser.write( assemble("20","FF","FF","64","B") )
    sData = ser.read(100)
    if len(sData)>0:
        for i in range(0, len(sData)-17):
            if sData[i]==chr(1) and sData[i+1]==chr(20) and sData[i+2]==chr(int("A1", 16)):
                height = (ord(sData[i+16])*256+ord(sData[i+17]))/100
                print "Sensor ", ord(sData[i+12]), " is returning height ", height, "mm. The minnoisechan:", minchannel
Scanner Loop----------------------------------------------
while True:
    ser.write( scan("FF", "FF", str(scanlength)) )  # Channel Mask, Length
    time.sleep(scanlength+2.0)
    sData = ser.read(100)
    if len(sData)>0:
        for i in range(0, len(sData)-16):
            if sData[i]==chr(1) and sData[i+1]==chr(23) and sData[i+2]==chr(int("C5", 16)):
                for j in range(0, 16):
                    chan[j] = sData[i+5+j]
                    print "channel: ", j+11, "=", ord(chan[j])
                    if ord(chan[j])<minvalue:
                        minvalue=ord(chan[j])
                        minchannel=j+11
                count+=1
                print "count", count, "minvalue:", minvalue, "minchannel:", minchannel
                minvalue=999
I want minchannel from scanner to be accessible to host.

If you haven't explored implementing your code with threads yet, I'd suggest them as the way to get two loops running at the same time. So something like this:
import threading
import Queue

def host(dataQueue):
    """
    Host code goes here.
    """
    # Check dataQueue for incoming data among other things...
    pass

def scanner(dataQueue):
    """
    Scanner code goes here.
    """
    # Put data into dataQueue among other things...
    pass

if __name__ == '__main__':
    dataQ = Queue.Queue()
    hostThread = threading.Thread(target=host, name="Host", args=(dataQ,))
    scannerThread = threading.Thread(target=scanner, name="Scanner", args=(dataQ,))
    hostThread.start()
    scannerThread.start()
At the very least this will get you started running your two loops together. You'll still need to figure out the thread-management aspects of this.
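To make the hand-off concrete, here is a minimal runnable sketch (the serial I/O is replaced with placeholder values, so the changing minchannel below is fake) of scanner publishing its latest minchannel through the queue while host polls for it without blocking:
import threading
import Queue
import time

def scanner(dataQueue):
    # Stand-in for the real scan loop: publish each new minchannel
    minchannel = 11
    while True:
        dataQueue.put(minchannel)   # hand the latest value to the host
        minchannel += 1             # fake a changing channel for the demo
        time.sleep(1.0)

def host(dataQueue):
    # Stand-in for the real host loop: poll for updates without blocking
    minchannel = None
    while True:
        try:
            minchannel = dataQueue.get_nowait()  # newest value, if any
        except Queue.Empty:
            pass                                 # no update; keep the last value
        print "host sees minchannel:", minchannel
        time.sleep(0.5)

if __name__ == '__main__':
    dataQ = Queue.Queue()
    threading.Thread(target=scanner, name="Scanner", args=(dataQ,)).start()
    threading.Thread(target=host, name="Host", args=(dataQ,)).start()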

Related

Examining file on different node (different IP address), same network - possible?

I have a small group of Raspberry Pis, all on the same local network (192.168.1.2xx). All are running Python 3.7.3, one (R Pi CM3) on Raspbian Buster, the other (R Pi 4B 8gig) on Raspberry Pi OS 64.
I have a file on one device (the Pi 4B), located at /tmp/speech.wav, that is generated on the fly, real-time:
192.168.1.201 - /tmp/speech.wav
I have a script that works well on that device, that tells me the play duration time of the .wav file in seconds:
import wave
import contextlib

def getPlayTime():
    fname = '/tmp/speech.wav'
    with contextlib.closing(wave.open(fname, 'r')) as f:
        frames = f.getnframes()
        rate = f.getframerate()
        duration = round(frames / float(rate), 2)
        return duration
However, the code that needs to operate on that duration information is running on another node, at 192.168.1.210. I cannot simply move the various files all to the same node, as there is a LOT going on; things are where they are for a reason.
So what I need to know is how to alter my approach such that I can change the script reference to something like this pseudocode:
fname = '/tmp/speech.wav # 192.168.1.201'
Is such a thing possible? Searching the web, it seems I am up against millions of people looking for how to obtain IP addresses, fix multiple-IP-address issues, fix duplicate-IP-address issues... but I can't yet seem to find how to simply examine a file on a different IP address as I have described here. I have no network security restrictions, so any setting is up for consideration. Help would be much appreciated.
There are lots of possibilities, and it probably comes down to how often you need to check the duration, from how many clients, and how often the file changes and whether you have other information that you want to share between the nodes.
Here are some options:
set up an SMB (Samba) server on the Pi that has the WAV file and let the other nodes mount the filesystem and access the file as if it were local
set up an NFS server on the Pi that has the WAV file and let the other nodes mount the filesystem and access the file as if it were local
let other nodes use ssh to log in and extract the duration, or scp to retrieve the file - see paramiko in Python (a sketch follows this list)
set up Redis on one node and throw the WAV file in there so anyone can get it - this is potentially attractive if you have lots of lists, arrays, strings, integers, hashes, queues or sets that you want to share between Raspberry Pis very fast
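For the ssh option, a minimal paramiko sketch might look like this (the address, username, and key-based login are assumptions; it runs a small Python one-liner on the Pi that owns the file and reads the printed duration back):
#!/usr/bin/env python3
import paramiko

host, user = '192.168.1.201', 'pi'   # assumed address/user of the Pi with the WAV file

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=user)     # assumes ssh keys are already set up

# Compute the duration remotely and read the printed result back
cmd = ("python3 -c \"import wave; "
       "f = wave.open('/tmp/speech.wav', 'r'); "
       "print(round(f.getnframes() / float(f.getframerate()), 2))\"")
stdin, stdout, stderr = ssh.exec_command(cmd)
print(float(stdout.read().decode()))
ssh.close()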
Here is a very simple example of writing a sound track into Redis from one node (say Redis is on 192.168.1.200) and reading it back from any other. Of course, you may just want the writing node to write the duration in there rather than the whole track, which would be more efficient. Or you may want to store loads of other shared data or settings.
This is the writer:
#!/usr/bin/env python3
import redis
from pathlib import Path
host='192.168.1.200'
# Connect to Redis
r = redis.Redis(host)
# Load some music, or otherwise create it
music = Path('song.wav').read_bytes()
# Put music into Redis where others can see it
r.set("music",music)
And this is the reader:
#!/usr/bin/env python3
import redis
from pathlib import Path
host='192.168.1.200'
# Connect to Redis
r = redis.Redis(host)
# Retrieve music track from Redis
music = r.get("music")
print(f'{len(music)} bytes read from Redis')
Then, during testing, you may want to manually push a track into Redis from the terminal:
redis-cli -x -h 192.168.1.200 set music < OtherTrack.wav
Or manually retrieve the track from Redis to a file:
redis-cli -h 192.168.1.200 get music > RetrievedFromRedis.wav
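As noted above, if all the other node really needs is the duration, it is more efficient to store just that number rather than the whole track. A small sketch (the key name speech_duration is an assumption):
#!/usr/bin/env python3
import redis

r = redis.Redis('192.168.1.200')

# Writer node: store the computed duration in seconds
r.set("speech_duration", 3.46)

# Any reader node: fetch it back and convert from bytes
duration = float(r.get("speech_duration").decode())
print(duration)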
OK, this is what I finally settled on, and it works great. Using ZeroMQ for message passing, I have the function that gets the play time of the wav, and another that gathers data about the speech about to be spoken; all of that is sent to the motor core prior to sending the speech. The motor core handles the timing issues to sync the jaw to the speech. So I'm not actually putting the code that generates the wav (and returns its playback length) onto the node that ultimately makes use of it, but it turns out that message passing is fast enough that there is plenty of time to receive, process, and implement the motion control to match the speech perfectly. Posting this here in case it's helpful for folks working on similar issues in the future.
import time
import zmq
import os
import re
import wave
import contextlib

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")                 # Listens for speech to output
print("Connecting to Motor Control")
jawCmd = context.socket(zmq.PUB)
jawCmd.connect("tcp://192.168.1.210:5554")  # Sends to MotorFunctions for Jaw Movement

def getPlayTime():                          # Checks to see if current file duration has changed
    fname = '/tmp/speech.wav'               # and if yes, sends new duration
    with contextlib.closing(wave.open(fname, 'r')) as f:
        frames = f.getnframes()
        rate = f.getframerate()
        duration = round(frames / float(rate), 3)
        speakTime = str(duration)
    return speakTime

def set_voice(V, T):
    T2 = '"' + T + '"'
    audioFile = "/tmp/speech.wav"           # /tmp set as tmpfs, or RAMDISK, to reduce SD card write ops
    if V == "A":
        voice = "Allison"
    elif V == "B":
        voice = "Belle"
    elif V == "C":
        voice = "Callie"
    elif V == "D":
        voice = "Dallas"
    elif V == "V":
        voice = "David"
    else:
        voice = "Belle"
    os.system("swift -n " + voice + " -o " + audioFile + " " + T2)  # Record audio
    tailTrim = .5                                 # Calculate jaw timing
    speakTime = eval(getPlayTime())               # Start by getting play length
    speakTime = round((speakTime - tailTrim), 2)  # Chop .5 s for trailing silence
    wordList = T.split()
    jawString = []
    for index in range(len(wordList)):
        wordLen = len(wordList[index])
        jawString.append(wordLen)
    jawString = str(jawString)
    speakTime = str(speakTime)
    jawString = speakTime + "|" + jawString  # 3.456|[4, 2, 7, 4, 2, 9, 3, 4, 3, 6] - will split on "|"
    jawCmd.send_string(jawString)            # Send jaw operating sequence
    os.system("aplay " + audioFile)          # Play audio

pronunciationDict = {'teh': 'the', 'process': 'prawcess', 'Maeve': 'Mayve',
                     'Mariposa': 'May-reeposah', 'Lila': 'Lala', 'Trump': 'Ass hole'}

def adjustResponse(response):  # Adjusts spellings in output string to create better speech output.
    for key, value in pronunciationDict.items():
        if key in response or key.lower() in response:
            response = re.sub(key, value, response, flags=re.I)
    return response

V = "B"  # Note: V is not defined in the original post; "B" (Belle) assumed here
SpeakText = "Speech center connected and online."
set_voice(V, SpeakText)  # Cepstral voices: A = Allison; B = Belle; C = Callie; D = Dallas; V = David
while True:
    SpeakText = socket.recv().decode('utf-8')  # .decode gets rid of the b' in front of the string
    SpeakTextX = adjustResponse(SpeakText)     # Run the string through the pronunciation dictionary
    print("SpeakText = ", SpeakTextX)
    set_voice(V, SpeakTextX)
    print("Received request: %s" % SpeakTextX)
    socket.send_string(str(SpeakTextX))        # Send data back to source for confirmation
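The motor-core side isn't shown in the post; purely to illustrate the "split on |" protocol described above, a hypothetical subscriber could look like this (the bind address and the pacing logic are assumptions):
#!/usr/bin/env python3
import ast
import zmq

context = zmq.Context()
jawCmd = context.socket(zmq.SUB)
jawCmd.bind("tcp://*:5554")                  # the speech node connects here
jawCmd.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to every message

while True:
    msg = jawCmd.recv_string()               # e.g. "3.46|[4, 2, 7, 4]"
    speakTime, wordLens = msg.split("|", 1)
    speakTime = float(speakTime)
    wordLens = ast.literal_eval(wordLens)    # back to a list of ints
    print(speakTime, wordLens)
    # ...drive the jaw for speakTime seconds, paced by wordLens...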

How do I make code save local variables between its processes in Python, like an IPython notebook does?

I think my title isn't clear, so... I made this code, which fetches top Dota TV games as an array of match_ids and prints them at the end. STEAM_LGN and STEAM_PSW are the Steam login/password combination.
from steam.client import SteamClient
from dota2.client import Dota2Client

client = SteamClient()
dota = Dota2Client(client)

@client.on('logged_on')
def start_dota():
    dota.launch()

match_ids = []

@dota.on('top_source_tv_games')
def response(result):
    for match in result.game_list:  # games
        match_ids.append(match.match_id)

def request_matches():
    dota.request_top_source_tv_games()

client.cli_login(username=STEAM_LGN, password=STEAM_PSW)
request_matches()
dota.on('top_source_tv_games', response)
print(match_ids)
The thing I'm having a problem with:
When using the Anaconda IPython Notebook, running the cell for the first time returns
[]
but when I do it a second time, it returns real data, for example
[5769568657, 5769554974, 5769555609, 5769572298, 5769543230, 5769561446, 5769562113, 5769552763, 5769550735, 5769563870]
So whenever I'm playing in my IPython notebook sandbox, I hit Shift+Enter twice and get the data.
But now I need to transfer this code to a bigger project. So, for example, let's say I save that code to a dota2info.py file and have this code in another file referencing dota2info.py:
import subprocess
import time

### some code ###

while True:
    subprocess.call(['python', './dota2info.py'])
    time.sleep(10)
And when running the project code, this always prints [] like it did on the first Shift+Enter cell run in the Anaconda IPython notebook.
[]
[]
[]
...
So my question is: what should I do in this situation? How can I solve this problem of the ValvePython/dota2 code caching some important data in local variables unknown to me in the IPython notebook?
Ideally I want the code to immediately give me real data without these [] results.
Hard to tell why it happens, but as a possible workaround I'd try wrapping the code in the cell you're re-running in a function that retries until it gets nonempty results.
For example, assuming everything was in the rerun cell except the imports, this might be dota2info.py:
from steam.client import SteamClient
from dota2.client import Dota2Client
import time

def get_results():
    client = SteamClient()
    dota = Dota2Client(client)

    @client.on('logged_on')
    def start_dota():
        dota.launch()

    match_ids = []

    @dota.on('top_source_tv_games')
    def response(result):
        for match in result.game_list:  # games
            match_ids.append(match.match_id)

    def request_matches():
        dota.request_top_source_tv_games()

    client.cli_login(username=STEAM_LGN, password=STEAM_PSW)
    request_matches()
    dota.on('top_source_tv_games', response)
    return match_ids

if __name__ == "__main__":
    results = get_results()
    max_retries = 10
    retries = 0
    while not results and retries < max_retries:
        time.sleep(3)  # wait a number of seconds before retrying
        results = get_results()
        retries += 1
    print(results)
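For what it's worth, the empty first result is presumably because request_top_source_tv_games() is asynchronous: get_results() returns match_ids before the 'top_source_tv_games' event has fired, and the second notebook run only "works" because the client kept running between cells. Assuming ValvePython's gevent-based wait_event API behaves as I expect (an assumption, not something confirmed in the post), a sketch that blocks until the reply arrives might look like this:
from steam.client import SteamClient
from dota2.client import Dota2Client

def get_results(login, password, timeout=20):
    client = SteamClient()
    dota = Dota2Client(client)

    @client.on('logged_on')
    def start_dota():
        dota.launch()

    client.cli_login(username=login, password=password)
    # A robust version would also wait for dota's 'ready' event before requesting
    dota.request_top_source_tv_games()
    result = dota.wait_event('top_source_tv_games', timeout=timeout)
    if result is None:  # wait_event returns the handler args, or None on timeout
        return []
    return [match.match_id for match in result[0].game_list]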

Why did the thread not work with the sniff() function in Python Scapy?

I have some code, and I want to run the functions cek_paket() and delete_paket() at the same time. My friend suggested I use threads. In short, I'm using this to sniff ARP packets and save the information (source, destination address, and the number of packets that arrived) into a list of dictionaries in Python. sniff() is the Scapy function I use to sniff packets. When a packet arrives, it is passed to cek_paket() to save the information into the jumlah_reply list. And I use delete_paket() to delete the first item (jumlah_reply[0]) every 3 seconds. I have no problem with the cek_paket() function. The problem is: why does only the sniffing() function work, while the delete_paket() function does not?
from scapy.all import *
import thread
import time

jumlah_reply = []

def cek_paket(pkt):
    if pkt[ARP].op == 2:
        destinasi = str(pkt[ARP].pdst)
        source = str(pkt[ARP].psrc)
        dikirim = {'src': source, 'dst': destinasi}
        if len(jumlah_reply) == 0:
            dikirim['count'] = 1
            jumlah_reply.append(dikirim)
            found = True
        else:
            found = False
            for itung in jumlah_reply:
                if itung['src'] == dikirim['src'] and itung['dst'] == dikirim['dst']:
                    itung['count'] += 1
                    found = True
                    break
            if not found:
                jumlah_reply.append(dikirim)
                dikirim['count'] = 1
        print("reply")
        print(jumlah_reply)
        print("--------------------------------")

def delete_paket():
    if len(jumlah_reply) > 0:
        del jumlah_reply[0]
        print("*********************")
        print(jumlah_reply)
        print("**********************")
    time.sleep(3)

def sniffing():
    sniff(prn=cek_paket, filter="arp", store=0)

try:
    thread.start_new_thread(sniffing())
    thread.start_new_thread(delete_paket())
except:
    print("error")

while 1:
    pass
I was expecting that when an ARP reply packet arrives, its information is added into the list, and 3 seconds after that the first item is deleted. In the actual output, the packet information does get added into the list, but having run the code for a minute, there is no action from the delete_paket() function. Why does this happen? Does the sniff() function from Scapy lock the whole process?
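For what it's worth, a likely culprit (the post doesn't confirm it) is that thread.start_new_thread is handed the result of calling sniffing() and delete_paket() rather than the function objects themselves: sniff() then runs forever in the main thread before any thread is started, and delete_paket() executes only once. A minimal corrected sketch, reusing the question's definitions, would be:
import thread
import time

def delete_paket():
    while True:  # repeat forever instead of running once
        if len(jumlah_reply) > 0:
            del jumlah_reply[0]
            print(jumlah_reply)
        time.sleep(3)

# Pass the function object plus an (empty) argument tuple; do not call it here
thread.start_new_thread(sniffing, ())
thread.start_new_thread(delete_paket, ())

while 1:
    time.sleep(1)  # keep the main thread alive without spinning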

Threading memory profiling

So I hope this isn't a duplicate; however, I either haven't been able to find an adequate solution or I'm just not 100% sure what I'm looking for. I've written a program to thread lots of requests. I create a thread to:
Fetch responses from a number of APIs such as this: share.yandex.ru/gpp.xml?url=MY_URL as well as scraping blogs
Parse the responses of all requests from the example above / JSON / using python-goose to extract articles
Return the parsed results back to the primary thread and insert them into a database.
It's all been going well until it needs to pull back larger amounts of data, which I haven't tested before. The primary problem is that it takes me over my memory limit on a shared Linux server (512 MB), triggering a kill. That limit should be enough, as it's only a few thousand requests, although I could be wrong. I'm clearing all large data variables/objects within the main thread, but that doesn't seem to help either.
I ran memory_profiler on the primary function that creates the threads, with a thread class that looks like this:
class URLThread(Thread):
    def __init__(self, request):
        super(URLThread, self).__init__()
        self.url = request['request']
        self.post_id = request['post_id']
        self.domain_id = request['domain_id']
        self.post_data = request['post_params']
        self.type = request['type']
        self.code = ""
        self.result = ""
        self.final_results = ""
        self.error = ""
        self.encoding = ""

    def run(self):
        try:
            self.request = get_page(self.url, self.type)
            self.code = self.request['code']
            self.result = self.request['result']
            self.final_results = response_handler(dict(result=self.result, type=self.type, orig_url=self.url))
            self.encoding = chardet.detect(self.result)
            self.error = self.request['error']
        except Exception as e:
            exc_type, exc_obj, exc_tb = sys.exc_info()
            fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
            errors.append((exc_type, fname, exc_tb.tb_lineno, e, 'NOW()'))
            pass

@profile
def multi_get(uris, timeout=2.0):
    def alive_count(lst):
        alive = map(lambda x: 1 if x.isAlive() else 0, lst)
        return reduce(lambda a, b: a + b, alive)
    threads = [URLThread(uri) for uri in uris]
    for thread in threads:
        thread.start()
    while alive_count(threads) > 0 and timeout > 0.0:
        timeout = timeout - UPDATE_INTERVAL
        sleep(UPDATE_INTERVAL)
    return [{"request": x.url,
             "code": str(x.code),
             "result": x.result,
             "post_id": str(x.post_id),
             "domain_id": str(x.domain_id),
             "final_results": x.final_results,
             "error": str(x.error),
             "encoding": str(x.encoding),
             "type": x.type}
            for x in threads]
And the results look like this on the first batch of requests I pump through it (FYI it's a link, as the output text isn't readable in here; I can't paste an HTML table or embed an image until I get 2 more points):
http://tinypic.com/r/28c147d/8
And it doesn't seem to drop any of the memory in subsequent passes (I'm batching 100 requests/threads through at a time). By this I mean that once a batch of threads is complete, they seem to stay in memory, and every time it runs another one, memory is added, as below:
http://tinypic.com/r/nzkeoz/8
Am I doing something really stupid here?
Python will generally free the memory taken up by an object when there are no references to that object left. Your multi_get function returns a list that contains references to every thread that you have created, so it's unlikely that Python would free that memory. But we would need to see what the code that is calling multi_get is doing in order to be sure.
To start freeing the memory you will need to stop returning references to the threads from this function. Or, if you want to continue to do that, at least delete them somewhere with del x.
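For example, a hedged sketch of a calling pattern that lets each batch be collected before the next one starts (batches and store_in_database are assumed names, not from the post):
import gc

for batch in batches:           # assumed: an iterable of 100-request lists
    results = multi_get(batch)
    store_in_database(results)  # assumed helper: persist what you need...
    del results                 # ...then drop the only reference to the payloads
    gc.collect()                # optional: force a collection between batches
With no surviving references to the result dicts (and thus to each thread's .result payload), Python is free to reclaim the memory between batches.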

Using pyUSB to read data from ELM327 OBDII to USB device

I am having problems using the pyUSB library to read data from an ELM327 OBDII to USB device. I know that I need to write a command to the device on the write endpoint and read the received data back on the read endpoint. It doesn't seem to want to work for me though.
I wrote my own class obdusb for this:
import sys
from time import sleep  # note: these two imports are needed but were missing from the post
import usb.core

class obdusb:

    def __init__(self, _vend, _prod):
        '''Handle to USB device'''
        self.idVendor = _vend
        self.idProduct = _prod
        self._dev = usb.core.find(idVendor=_vend, idProduct=_prod)
        return None

    def GetDevice(self):
        '''Must be called after constructor'''
        return self._dev

    def SetupEndpoint(self):
        '''Must be called after constructor'''
        try:
            self._dev.set_configuration()
        except usb.core.USBError as e:
            sys.exit("Could not set configuration")
        self._endpointWrite = self._dev[0][(0,0)][1]
        self._endpointRead = self._dev[0][(0,0)][0]
        # Resetting device and setting vehicle protocol (Auto)
        # 20ms is required as a delay between each written command
        # ATZ resets device
        self._dev.write(self._endpointWrite.bEndpointAddress, 'ATZ', 0)
        sleep(0.02)
        # ATSP 0 should set vehicle protocol automatically
        self._dev.write(self._endpointWrite.bEndpointAddress, 'ATSP 0', 0)
        sleep(0.02)
        return self._endpointRead

    def GetData(self, strCommand):
        data = []
        self._dev.write(self._endpointWrite.bEndpointAddress, strCommand, 0)  # typo fixed: was _endpintWrite
        sleep(0.02)
        data = self._dev.read(self._endpointRead.bEndpointAddress, self._endpointRead.wMaxPacketSize)
        return data
So I then use this class and call the GetData method using this code:
import obdusb
#Setting up library,device and endpoint
lib = obdusb.obdusb(0x0403,0x6001)
myDev = lib.GetDevice()
endp = lib.SetupEndpoint()
#Testing GetData function with random OBD command
#0902 is VIN number of vehicle being requested
dataArr = lib.GetData('0902')
PrintResults(dataArr)
raw_input("Press any key")
def PrintResults(arr):
size = len(arr)
print "Data currently in buffer:"
for i in range(0,size):
print "[" + str(i) + "]: " + str(make[i])
This only ever prints the numbers 1 and 60, from elements [0] and [1] of the array. No other data has been returned from the command. This is the case whether the device is connected to a car or not. I don't know what these 2 pieces of information are. I am expecting it to return a string of hexadecimal numbers. Does anyone know what I am doing wrong here?
If you don't use ATST or ATAT, you have to expect a timeout of 200 ms at the start, between every write/read combination.
Are you sending a '\r' after each command? It looks like you don't, so the chip is waiting forever for a carriage return.
And a hint: test with 010D or 010C or something similar; with 09xx it might be difficult to know what to expect.
UPDATE:
You can do that both ways, as long as you separate each command with a carriage return.
http://elmelectronics.com/ELM327/AT_Commands.pdf
http://elmelectronics.com/DSheets/ELM327DS.pdf (Expanded list.)
That command list was quite useful to me.
ATAT can be used to adjust the timeout.
When you send 010D, the ELM chip will normally wait 200 ms to collect all possible reactions. Sometimes you can get more returns, so it waits the full 200 ms.
What you also can do, and it's a mystery why only scantools tend to implement this:
'010D1\r'
The 1 after the command specifies that the ELM should report back as soon as it has 1 reply from the bus. So it reduces the delay quite efficiently, at the cost of not being able to get more values for '010D' (which is speed!).
Sorry for my English; I hope this sends you in the right direction.
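Building on the carriage-return point above, a hedged tweak to the question's GetData method (same class and endpoints as posted; the 0.2 s pause is an assumption based on the ELM's default ~200 ms response window) might be:
def GetData(self, strCommand):
    # ELM327 commands must end with a carriage return, otherwise the
    # chip keeps waiting for the rest of the command.
    self._dev.write(self._endpointWrite.bEndpointAddress, strCommand + '\r', 0)
    sleep(0.2)  # allow the ELM its default ~200 ms window to answer
    return self._dev.read(self._endpointRead.bEndpointAddress,
                          self._endpointRead.wMaxPacketSize)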
