Examining file on different node (different IP address), same network - possible? - python

I have a small group of Raspberry Pis, all on the same local network (192.168.1.2xx). All are running Python 3.7.3; one (a Pi CM3) is on Raspbian Buster, the other (a Pi 4B 8 GB) is on 64-bit Raspberry Pi OS.
I have a file on one device (the Pi 4B), located at /tmp/speech.wav, that is generated on the fly, real-time:
192.168.1.201 - /tmp/speech.wav
I have a script that works well on that device and tells me the play duration of the .wav file in seconds:
import wave
import contextlib

def getPlayTime():
    fname = '/tmp/speech.wav'
    with contextlib.closing(wave.open(fname, 'r')) as f:
        frames = f.getnframes()
        rate = f.getframerate()
        duration = round(frames / float(rate), 2)
    return duration
However - the process that needs to operate on that duration information is running on another node, at 192.168.1.210. I cannot simply move the various files all to the same node; there is a LOT going on, and things are where they are for a reason.
So what I need to know is how to alter my approach such that I can change the script reference to something like this pseudocode:
fname = '/tmp/speech.wav # 192.168.1.201'
Is such a thing possible? Searching the web, it seems I am up against millions of people looking for how to obtain IP addresses, fix multiple-IP-address issues, or fix duplicate-IP-address issues, but I can't yet find how to simply examine a file at a different IP address as I have described here. I have no network security restrictions, so any setting is up for consideration. Help would be much appreciated.

There are lots of possibilities, and it probably comes down to how often you need to check the duration, from how many clients, and how often the file changes and whether you have other information that you want to share between the nodes.
Here are some options:
set up an SMB (Samba) server on the Pi that has the WAV file and let the other nodes mount the filesystem and access the file as if it was local
set up an NFS server on the Pi that has the WAV file and let the other nodes mount the filesystem and access the file as if it was local
let other nodes use ssh to log in and extract the duration, or scp to retrieve the file - see paramiko in Python (a sketch follows this list)
set up Redis on one node and throw the WAV file in there so anyone can get it - this is potentially attractive if you have lots of lists, arrays, strings, integers, hashes, queues or sets that you want to share between Raspberry Pis very fast; an example follows below.
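To make the ssh option concrete, here is a minimal sketch using paramiko's SFTP support to read the remote WAV with no server setup beyond SSH. The username, password and AutoAddPolicy are illustrative assumptions; since the file is small and lives in /tmp, it simply pulls the whole thing into memory:
import contextlib
import io
import wave
import paramiko

# Connect to the Pi that generates the WAV (credentials are placeholders)
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.201', username='pi', password='raspberry')

# Read the remote file and parse it locally
sftp = ssh.open_sftp()
with sftp.open('/tmp/speech.wav', 'rb') as remote:
    data = remote.read()
with contextlib.closing(wave.open(io.BytesIO(data), 'rb')) as f:
    duration = round(f.getnframes() / float(f.getframerate()), 2)
print(duration)

sftp.close()
ssh.close()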
Here is a very simple example of writing a sound track into Redis from one node (say Redis is on 192.168.1.200) and reading it back from any other. Of course, you may just want the writing node to write the duration in there rather than the whole track - which would be more efficient. Or you may want to store loads of other shared data or settings.
This is the writer:
#!/usr/bin/env python3
import redis
from pathlib import Path
host='192.168.1.200'
# Connect to Redis
r = redis.Redis(host)
# Load some music, or otherwise create it
music = Path('song.wav').read_bytes()
# Put music into Redis where others can see it
r.set("music",music)
And this is the reader:
#!/usr/bin/env python3
import redis
from pathlib import Path
host='192.168.1.200'
# Connect to Redis
r = redis.Redis(host)
# Retrieve music track from Redis
music = r.get("music")
print(f'{len(music)} bytes read from Redis')
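As noted above, if all the other node needs is the duration, the writer can store just that one number instead of the whole track; a sketch, reusing getPlayTime() from the question:
# Writer: store only the play duration
r.set("duration", getPlayTime())

# Reader: Redis returns bytes, so convert back to a float
duration = float(r.get("duration"))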
Then, during testing, you may want to manually push a track into Redis from the Terminal:
redis-cli -x -h 192.168.1.200 set music < OtherTrack.wav
Or manually retrieve the track from Redis to a file:
redis-cli -h 192.168.1.200 get music > RetrievedFromRedis.wav

OK, this is what I finally settled on - and it works great. Using ZeroMQ for message passing: one function gets the play time of the WAV, another gathers data about the speech about to be spoken, and all of that is sent to the motor core before the speech itself. The motor core handles the timing issues to sync the jaw to the speech. So I'm not actually putting the code that generates the WAV (and returns its playback length) on the node that ultimately makes use of it, but message passing turns out to be fast enough that there is plenty of time to receive, process and implement the motion control to match the speech perfectly. Posting this here in case it's helpful for folks working on similar issues in the future.
import time
import zmq
import os
import re
import wave
import contextlib

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")                    # Listens for speech to output

print("Connecting to Motor Control")
jawCmd = context.socket(zmq.PUB)
jawCmd.connect("tcp://192.168.1.210:5554")     # Sends to MotorFunctions for jaw movement

def getPlayTime():                             # Checks the current file duration
    fname = '/tmp/speech.wav'
    with contextlib.closing(wave.open(fname, 'r')) as f:
        frames = f.getnframes()
        rate = f.getframerate()
        duration = round(frames / float(rate), 3)
    return str(duration)

def set_voice(V, T):
    T2 = '"' + T + '"'
    audioFile = "/tmp/speech.wav"              # /tmp mounted as tmpfs (RAM disk) to reduce SD card write ops
    if V == "A":
        voice = "Allison"
    elif V == "B":
        voice = "Belle"
    elif V == "C":
        voice = "Callie"
    elif V == "D":
        voice = "Dallas"
    elif V == "V":
        voice = "David"
    else:
        voice = "Belle"
    os.system("swift -n " + voice + " -o " + audioFile + " " + T2)  # Record audio
    tailTrim = .5                              # Calculate jaw timing
    speakTime = float(getPlayTime())           # Start by getting play length
    speakTime = round(speakTime - tailTrim, 2) # Chop .5 s for trailing silence
    wordList = T.split()
    jawString = [len(word) for word in wordList]
    jawString = str(speakTime) + "|" + str(jawString)  # 3.456|[4, 2, 7, 4, 2, 9, 3, 4, 3, 6] - will split on "|"
    jawCmd.send_string(jawString)              # Send jaw operating sequence
    os.system("aplay " + audioFile)            # Play audio

pronunciationDict = {'teh': 'the', 'process': 'prawcess', 'Maeve': 'Mayve',
                     'Mariposa': 'May-reeposah', 'Lila': 'Lala', 'Trump': 'Ass hole'}

def adjustResponse(response):                  # Adjusts spellings in output string to create better speech output
    for key, value in pronunciationDict.items():
        if key in response or key.lower() in response:
            response = re.sub(key, value, response, flags=re.I)
    return response

V = "B"                                        # Default voice (V was undefined in the original listing)
SpeakText = "Speech center connected and online."
set_voice(V, SpeakText)                        # Cepstral voices: A = Allison; B = Belle; C = Callie; D = Dallas; V = David

while True:
    SpeakText = socket.recv().decode('utf-8')  # .decode() strips the b'' bytes prefix
    SpeakTextX = adjustResponse(SpeakText)     # Run the string through the pronunciation dictionary
    print("SpeakText = ", SpeakTextX)
    set_voice(V, SpeakTextX)
    print("Received request: %s" % SpeakTextX)
    socket.send_string(str(SpeakTextX))        # Send data back to source for confirmation
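For reference, here is a hypothetical sketch of the receiving (motor core) side of that jaw protocol - not part of the original post. It binds the SUB socket on port 5554, which matches the PUB connect above, and splits the message on "|":
import ast
import zmq

context = zmq.Context()
jawCmd = context.socket(zmq.SUB)
jawCmd.bind("tcp://*:5554")                  # The speech node connects here
jawCmd.setsockopt_string(zmq.SUBSCRIBE, "")  # Subscribe to everything

msg = jawCmd.recv_string()                   # e.g. "3.46|[4, 2, 7]"
speakTime, wordLens = msg.split("|")
speakTime = float(speakTime)                 # Total play time in seconds
wordLens = ast.literal_eval(wordLens)        # Word lengths drive the jaw movement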

Related

VLC Python Binding only playing the first mp4 file in the folder

I was trying to play multiple small mp4 files from a folder, using this example as my reference:
https://github.com/oaubert/python-vlc/blob/master/generated/3.0/examples/play_buffer.py
But when I ran the .py file against that folder, it read all of the files but closed after playing the first one. Can someone help me with this?
I know I could just use a for loop to play all the files in the folder, but I don't want any stutter when transitioning between those files (in simple words, I want a seamless transition between them). So any help would be appreciated.
i don't want any stutter between transitioning those files
What you are looking for is called "gapless playback". Sadly, this is not currently supported by LibVLC.
Regardless, to play multiple individual files, you need to either feed them one after the other to the mediaplayer (by subscribing to the EndReached event, for example) or use a MediaList object.
You have picked a complicated and arguably inappropriate code example for your scenario.
Try something like this (without ctypes):
import vlc
import glob
import time

base_folder = './'

# vlc State values: 0 NothingSpecial, 1 Opening, 2 Buffering, 3 Playing, 4 Paused, 5 Stopped, 6 Ended, 7 Error
playing = set([1, 2, 3, 4])

def add_media(inst, media_list, playlist):
    for song in playlist:
        print('Loading: - {0}'.format(song))
        media = inst.media_new(song)
        media_list.add_media(media)

playlist = glob.glob(base_folder + "/" + "*.mp3")
playlist = sorted(playlist)

media_player = vlc.MediaListPlayer()
inst = vlc.Instance('--no-xlib --quiet')
media_list = vlc.MediaList()

add_media(inst, media_list, playlist)
media_player.set_media_list(media_list)
media_player.play()
time.sleep(0.1)

current = ""
idx = 1
player = media_player.get_media_player()
while True:
    state = player.get_state()
    if state == vlc.State.Ended or state == vlc.State.Error:
        if idx == len(playlist) + 1:
            break
    title = player.get_media().get_mrl()
    if title != current:
        print("\nPlaying - {0}\t{1} of {2}".format(str(title), idx, len(playlist)))
        current = title
        idx += 1
    time.sleep(0.1)
print("\nPlaylist Finished")
Here, we use a MediaListPlayer() rather than a media.player_new_from_media() or a media_player_new() because we are playing a set of media, not a single instance.
You can improve on this by controlling it via events using the event_manager() and attaching appropriate events for the player to listen for.
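For instance, a minimal sketch of the event-driven approach (the event and method names are from python-vlc; the callback body and file name are placeholders):
import vlc

def on_end_reached(event):
    # Fires when the current item finishes; queue the next item here
    print("End of media reached")

player = vlc.MediaPlayer("first.mp3")
events = player.event_manager()
events.event_attach(vlc.EventType.MediaPlayerEndReached, on_end_reached)
player.play()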

Downloading Multiple torrent files with Libtorrent in Python

I'm trying to write a torrent application that can take in a list of magnet links and download them all together. I've been trying to read and understand the documentation at libtorrent, but I haven't been able to tell whether what I try works or not. I've managed to apply a SOCKS5 proxy to a libtorrent session and download a single magnet link using this code:
import libtorrent as lt
import time
import os

ses = lt.session()

r = lt.proxy_settings()
r.hostname = "proxy_info"
r.username = "proxy_info"
r.password = "proxy_info"
r.port = 1080
r.type = lt.proxy_type_t.socks5_pw
ses.set_peer_proxy(r)
ses.set_web_seed_proxy(r)
ses.set_proxy(r)

t = ses.settings()
t.force_proxy = True
t.proxy_peer_connections = True
t.anonymous_mode = True
ses.set_settings(t)
print(ses.get_settings())

ses.peer_proxy()
ses.web_seed_proxy()
ses.set_settings(t)

magnet_link = "magnet"
params = {
    "save_path": os.getcwd() + r"\torrents",
    "storage_mode": lt.storage_mode_t.storage_mode_sparse,
    "url": magnet_link
}
handle = lt.add_magnet_uri(ses, magnet_link, params)
ses.start_dht()

print('downloading metadata...')
while not handle.has_metadata():
    time.sleep(1)
print('got metadata, starting torrent download...')

while handle.status().state != lt.torrent_status.seeding:
    s = handle.status()
    state_str = ['queued', 'checking', 'downloading metadata', 'downloading',
                 'finished', 'seeding', 'allocating']
    print('%.2f%% complete (down: %.1f kB/s up: %.1f kB/s peers: %d) %s' %
          (s.progress * 100, s.download_rate / 1000, s.upload_rate / 1000,
           s.num_peers, state_str[s.state]))
    time.sleep(5)
This is great and all for running on its own with a single link. What I want to do is something like this:
def torrent_download(magnetic_link_list):
    for mag in range(len(magnetic_link_list)):
        handle = lt.add_magnet_uri(ses, magnetic_link_list[mag], params)
        # Then download all the files
        # Once all files complete, stop the torrents so they don't seed
    return torrent_name_list
I'm not sure if this is even on the right track or not, but some pointers would be helpful.
UPDATE: This is what I now have and it works fine in my case
def magnet2torrent(magnet_link):
    global LIBTORRENT_SESSION, TORRENT_HANDLES
    if LIBTORRENT_SESSION is None and TORRENT_HANDLES is None:
        TORRENT_HANDLES = []
        settings = lt.default_settings()
        settings['proxy_hostname'] = CONFIG_DATA["PROXY"]["HOST"]
        settings['proxy_username'] = CONFIG_DATA["PROXY"]["USERNAME"]
        settings['proxy_password'] = CONFIG_DATA["PROXY"]["PASSWORD"]
        settings['proxy_port'] = CONFIG_DATA["PROXY"]["PORT"]
        settings['proxy_type'] = CONFIG_DATA["PROXY"]["TYPE"]
        settings['force_proxy'] = True
        settings['anonymous_mode'] = True
        LIBTORRENT_SESSION = lt.session(settings)
    params = {
        "save_path": os.getcwd() + r"/torrents",
        "storage_mode": lt.storage_mode_t.storage_mode_sparse,
        "url": magnet_link
    }
    TORRENT_HANDLES.append(LIBTORRENT_SESSION.add_torrent(params))

def check_torrents():
    global TORRENT_HANDLES
    for torrent in range(len(TORRENT_HANDLES)):
        print(TORRENT_HANDLES[torrent].status().is_seeding)
It's called "magnet links" (not magnetic).
In new versions of libtorrent, the way you add a magnet link is:
params = lt.parse_magnet_uri(uri)
handle = ses.add_torrent(params)
That also gives you an opportunity to tweak the add_torrent_params object, to set the save directory for instance.
If you're adding a lot of magnet links (or regular torrent files for that matter) and want to do it quickly, a faster way is to use:
ses.add_torrent_async(params)
That function will return immediately and the torrent_handle object can be picked up later in the add_torrent_alert.
As for downloading multiple magnet links in parallel, your pseudo code for adding them is correct. You just want to make sure you either save off all the torrent_handle objects you get back or query all torrent handles once you're done adding them (using ses.get_torrents()). In your pseudo code you seem to overwrite the last torrent handle every time you add a new one.
The condition you expressed for exiting was that all torrents were complete. The simplest way of doing that is simply to poll them all with handle.status().is_seeding. i.e. loop over your list of torrent handles and ask that. Keep in mind that the call to status() requires a round-trip to the libtorrent network thread, which isn't super fast.
The faster way of doing this is to keep track of all torrents that aren't seeding yet, and "strike them off your list" as you get torrent_finished_alerts for torrents. (you get alerts by calling ses.pop_alerts()).
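A rough sketch of that alert-driven loop, assuming ses is your session and handles is the list of torrent_handle objects you saved while adding:
import time
import libtorrent as lt

# Track torrents still downloading, keyed by info-hash string
remaining = set(str(h.info_hash()) for h in handles)
while remaining:
    for alert in ses.pop_alerts():
        if isinstance(alert, lt.torrent_finished_alert):
            remaining.discard(str(alert.handle.info_hash()))
    time.sleep(0.5)
print("all torrents finished")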
Another suggestion I would make is to set up your settings_pack object first, then create the session. It's more efficient and a bit cleaner. Especially with regards to opening listen sockets and then immediately closing and re-opening them when you change settings.
i.e.
p = lt.settings_pack()
p['proxy_hostname'] = '...'
p['proxy_username'] = '...'
p['proxy_password'] = '...'
p['proxy_port'] = 1080
p['proxy_type'] = lt.proxy_type_t.socks5_pw
p['proxy_peer_connections'] = True
ses = lt.session(p)

Is there a way to send DICOM data to specific directories on remote PACS server?

I am getting the hang of communication between an SCU and SCP for DICOM servers and images. I am using a ClearCanvas PACS server and have access to the web GUI. Using the following code, I am able to send DICOM data from the SCU (my computer) to the SCP (remote server):
import sys
import argparse
from netdicom import AE
from netdicom.SOPclass import StorageSOPClass, VerificationSOPClass
from dicom.UID import ExplicitVRLittleEndian, ImplicitVRLittleEndian, \
    ExplicitVRBigEndian
from dicom import read_file

# parse command line
parser = argparse.ArgumentParser(description='storage SCU example')
parser.add_argument('remotehost')
parser.add_argument('remoteport', type=int)
parser.add_argument('file', nargs='+')
parser.add_argument('-aet', help='calling AE title', default='PYNETDICOM')
parser.add_argument('-aec', help='called AE title', default='REMOTESCU')
parser.add_argument('-implicit', action='store_true',
                    help='negotiate implicit transfer syntax only',
                    default=False)
parser.add_argument('-explicit', action='store_true',
                    help='negotiate explicit transfer syntax only',
                    default=False)
args = parser.parse_args()

if args.implicit:
    ts = [ImplicitVRLittleEndian]
elif args.explicit:
    ts = [ExplicitVRLittleEndian]
else:
    ts = [
        ExplicitVRLittleEndian,
        ImplicitVRLittleEndian,
        ExplicitVRBigEndian
    ]

# callback
def OnAssociateResponse(association):
    print "Association response received"

# create application entity
MyAE = AE(args.aet, 0, [StorageSOPClass, VerificationSOPClass], [], ts)
MyAE.OnAssociateResponse = OnAssociateResponse

# remote application entity
RemoteAE = dict(Address=args.remotehost, Port=args.remoteport, AET=args.aec)

# create association with remote AE
print "Request association"
assoc = MyAE.RequestAssociation(RemoteAE)
if not assoc:
    print "Could not establish association"
    sys.exit(1)

# perform a DICOM ECHO, just to make sure the remote AE is listening
print "DICOM Echo ... ",
st = assoc.VerificationSOPClass.SCU(1)
print 'done with status "%s"' % st

# send each dataset
for ii in args.file:
    print
    print ii
    d = read_file(ii)
    print "DICOM StoreSCU ... ",
    try:
        st = assoc.SCU(d, 1)
        print 'done with status "%s"' % st
    except:
        print "problem", d.SOPClassUID
        raise

print "Release association"
assoc.Release(0)

# done
MyAE.Quit()
My question is, is there a way to send the objects to different directories on the server / make directories remotely on the server and send the data to different directories?
Short answer - No
A longer answer - the DICOM standard does not care in any way, shape or form how the files are stored on the PACS server; that's left up to the implementation. The PACS server could use a robotic carving tool to engrave them on stone tablets and read them back with OCR, as long as it could still receive and send them according to the standard. Therefore there is no way to influence such details through the DICOM interface.
From the standard's viewpoint, everything is already neatly organised hierarchically at the scale of Patient - Study - Series - Instance, where each level has its own unique IDs. These IDs are stored in every DICOM file in the relevant DICOM tags, and most PACS servers keep track of them in a database to facilitate quick lookups. The actual location of the DICOM file on disk is also stored in the database, but that is an internal implementation detail and is not exposed via the DICOM interface.
And frankly, I don't see the use case for this requirement anyway: the organisation is already there, and you can query by these attributes using the DICOM interface.
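To illustrate querying by those attributes, here is a minimal C-FIND sketch using the modern pynetdicom package (a different, newer API than the legacy netdicom used in the question); the PACS address and PatientID are placeholders:
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

ae = AE(ae_title='PYNETDICOM')
ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

# Query: all studies for one patient
ds = Dataset()
ds.QueryRetrieveLevel = 'STUDY'
ds.PatientID = '12345'
ds.StudyInstanceUID = ''

assoc = ae.associate('192.168.1.50', 104)
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            ds, PatientRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01):
            print(identifier.StudyInstanceUID)
    assoc.release()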
Generally no. You cannot tell the SCP where, on its file system, to store data from your SCU.

Synchronizing files with AJA Ki Pro using Python

We have an AJA Ki Pro recorder at another location and I need to create an automated system that pulls the recorded files over to my editing studio. So far I have successfully been able to pull recordings using a Python script run via an AppleScript through Automator, which I can then trigger from iCal. Basically my script involves setting the "MediaState" parameter on my recorder to "Data" (value=1) so I can pull files, comparing the files on the recorder to my local files (it only downloads what I don't already have locally), and then setting the "MediaState" property back to "Rec" (value=0) so the recorder is ready to go again.
Here are the two problems I have been unable to resolve so far. Bear with me, I have about two days' worth of experience with Python :) It seems that I have somehow created a loop where it constantly says "Looking for new clips" and "No new clips found". Ideally I would like the script to terminate if no new clips are found. I would also like to set this up so that when it finishes a download through cURL, it automatically sets "MediaState" back to value=0 and ends the script. Here is my code so far:
# This script polls the unit and downloads any new clips it hasn't already downloaded to the current directory
# Arguments: hostname or IP address of Ki Pro unit
import urllib, sys, string, os, posix, time

def is_download_allowed(address):
    f = urllib.urlopen("http://" + address + "/config?action=get&paramid=eParamID_MediaState")
    response = f.read()
    if (response.find('"value":"1"') > -1):
        return True
    f = urllib.urlopen("http://" + address + "/config?action=set&paramid=eParamID_MediaState&value=1")

def download_clip(clip):
    url = "http://" + address + "/media/" + clip
    print url
    posix.system("curl --output " + clip + " " + url)

def download_clips(response):
    values = response.split(":")
    i = 0
    for word in values:
        i += 1
        if (word.find('clipname') > -1):
            clip = values[i].split(',')[0].translate(string.maketrans("", ""), '[]{} \,\"\" ')
            if not os.path.exists(clip):
                print "Downloading clip: " + clip
                download_clip(clip)
            else:
                f = urllib.urlopen("http://" + address + "/config?action=set&paramid=eParamID_MediaState&value=0")
                print "No new clips found"

address = sys.argv[1]
while 1:
    if (is_download_allowed(address)):
        print "Looking for new clips"
        f = urllib.urlopen("http://" + address + "/clips")
        response = f.read()
        download_clips(response)
If the download_clips function is already looping through the clip names, why do you need that infinite while 1 loop? I don't think it is necessary. Just remove it and dedent the block.
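In other words, after removing the loop the tail of the script might look like this (a sketch of the suggested change; the rest of the script stays as-is, and iCal/Automator handles the scheduling):
address = sys.argv[1]
if is_download_allowed(address):
    print "Looking for new clips"
    f = urllib.urlopen("http://" + address + "/clips")
    response = f.read()
    download_clips(response)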

Using pyUSB to read data from ELM327 OBDII to USB device

I am having problems using the pyUSB library to read data from an ELM327 OBDII to USB device. I know that I need to write a command to the device on the write endpoint and read the received data back on the read endpoint. It doesn't seem to want to work for me though.
I wrote my own class obdusb for this:
import sys
import usb.core
from time import sleep

class obdusb:
    def __init__(self, _vend, _prod):
        '''Handle to USB device'''
        self.idVendor = _vend
        self.idProduct = _prod
        self._dev = usb.core.find(idVendor=_vend, idProduct=_prod)
        return None

    def GetDevice(self):
        '''Must be called after constructor'''
        return self._dev

    def SetupEndpoint(self):
        '''Must be called after constructor'''
        try:
            self._dev.set_configuration()
        except usb.core.USBError as e:
            sys.exit("Could not set configuration")
        self._endpointWrite = self._dev[0][(0, 0)][1]
        self._endpointRead = self._dev[0][(0, 0)][0]
        # Resetting device and setting vehicle protocol (Auto)
        # 20 ms is required as a delay between each written command
        # ATZ resets device
        self._dev.write(self._endpointWrite.bEndpointAddress, 'ATZ', 0)
        sleep(0.002)
        # ATSP 0 should set vehicle protocol automatically
        self._dev.write(self._endpointWrite.bEndpointAddress, 'ATSP 0', 0)
        sleep(0.02)
        return self._endpointRead

    def GetData(self, strCommand):
        data = []
        self._dev.write(self._endpointWrite.bEndpointAddress, strCommand, 0)
        sleep(0.002)
        data = self._dev.read(self._endpointRead.bEndpointAddress, self._endpointRead.wMaxPacketSize)
        return data
So I then use this class and call the GetData method using this code:
import obdusb
#Setting up library,device and endpoint
lib = obdusb.obdusb(0x0403,0x6001)
myDev = lib.GetDevice()
endp = lib.SetupEndpoint()
#Testing GetData function with random OBD command
#0902 is VIN number of vehicle being requested
dataArr = lib.GetData('0902')
PrintResults(dataArr)
raw_input("Press any key")
def PrintResults(arr):
size = len(arr)
print "Data currently in buffer:"
for i in range(0,size):
print "[" + str(i) + "]: " + str(make[i])
This only ever prints the numbers 1 and 60 from the [0] and [1] elements of the array. No other data has been returned from the command, whether the device is connected to a car or not. I don't know what these two pieces of information are; I am expecting a string of hexadecimal numbers. Does anyone know what I am doing wrong here?
If you don't use ATST or ATAT, you have to expect a timeout of 200 ms at the start, between every write/read combination.
Are you sending a '\r' after each command? It looks like you don't, so it's forever waiting for a Carriage Return.
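For example, through the asker's wrapper the terminated command would look like this (a sketch; 010D is the vehicle-speed PID):
dataArr = lib.GetData('010D\r')  # '\r' tells the ELM327 the command is complete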
And a hint: test with 010D or 010C or something similar. With 09xx it might be difficult to know what to expect.
UPDATE:
You can do that both ways, as long as you separate each command with a carriage return.
http://elmelectronics.com/ELM327/AT_Commands.pdf
http://elmelectronics.com/DSheets/ELM327DS.pdf (Expanded list).
That command list was quite useful to me.
ATAT can be used to adjust the timeout.
When you send 010D, the ELM chip will normally wait 200 ms to collect all possible responses. Sometimes you can get more returns, so it waits the full 200 ms.
What you also can do, and it's a mystery why only scan tools tend to implement this:
'010D1\r'
The 1 after the command specifies that the ELM should report back as soon as it has 1 reply from the bus. So it reduces the delay quite effectively, at the cost of not being able to get more values back for the PID '010D' (which is speed!).
Sorry for my English; I hope this sends you in the right direction.
