I'm trying to use a computer connected to an Arduino (which is itself connected to some 5V voltmeters) to "fake" an old school stereo VU meter. My goal is to have the computer that is playing the audio file analyze the signal and send the amplitude information to the Arduino via a serial connection to be displayed on the voltmeters.
I'm using MPD to render and send the audio to a USB DAC (ODAC). MPD is also outputting to a FIFO, which I read from using a Python script. I read from the FIFO in 4096 byte chunks, then use the audioop library to split that chunk/sample into a left and right channel and compute the maximum amplitude of each channel.
Here's the problem - I'm getting swamped with data. I'm guessing my math is wrong or that I don't understand how a FIFO works (or maybe both). MPD is outputting everything in 44100:16:2 format - I thought that meant that it would be writing out 44,100 4-byte samples per second. So if I'm grabbing 4096 byte chunks, I should expect about 43 chunks per second. But I'm getting far more than that (over 100) and the number of chunks I get per second doesn't change if I up my chunk size. For example, if I double my chunk size to 8192, I still get roughly the same number of chunks per second. So clearly I'm doing something wrong, but I don't know what it is. Anyone have any thoughts?
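For reference, the arithmetic I'm basing that expectation on:

```python
sample_rate = 44100      # frames per second, per the 44100:16:2 format
bytes_per_frame = 2 * 2  # 2 bytes per 16-bit sample, 2 channels
chunk_size = 4096

bytes_per_second = sample_rate * bytes_per_frame
print(bytes_per_second)               # 176400
print(bytes_per_second / chunk_size)  # ~43 chunks per second
```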
Here is the relevant portion of my mpd.conf file:
audio_output {
    type   "fifo"
    name   "my_fifo"
    path   "/tmp/mpd.fifo"
    format "44100:16:2"
}
And here is the Python script:
import os
import audioop
import time
import errno
import math

# Open the FIFO that MPD has created for us
# This represents the sample (44100:16:2) that MPD is currently "playing"
fifo = os.open('/tmp/mpd.fifo', os.O_RDONLY)

while True:
    try:
        rawStream = os.read(fifo, 4096)
    except OSError as err:
        if err.errno == errno.EAGAIN or err.errno == errno.EWOULDBLOCK:
            rawStream = None
        else:
            raise
    if rawStream:
        leftChannel = audioop.tomono(rawStream, 2, 1, 0)
        rightChannel = audioop.tomono(rawStream, 2, 0, 1)
        stereoPeak = audioop.max(rawStream, 2)
        leftPeak = audioop.max(leftChannel, 2)
        rightPeak = audioop.max(rightChannel, 2)
        leftDB = 20 * math.log10(leftPeak) - 74
        rightDB = 20 * math.log10(rightPeak) - 74
        print(rightPeak, leftPeak, rightDB, leftDB)
Answering my own question. It turns out that, regardless of how many bytes I asked for, os.read() was returning 2048 bytes. The second parameter to os.read() is the maximum number of bytes it will read; there's no guarantee that many bytes will actually be returned. I had assumed that, by leaving out the O_NONBLOCK flag when opening the FIFO, the os.read() call would wait until it hit end of file or had the requested number of bytes. That's not the case. To get around this, my code now checks the length of the byte string returned by os.read() and, if it is shorter than my specified chunk size, reads further chunks and concatenates them until the chunk size matches my target before moving on to processing the data.
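A minimal sketch of that accumulate-and-concatenate loop (the helper name read_exact is mine):

```python
import os

def read_exact(fd, chunk_size):
    """Call os.read() repeatedly, concatenating the results,
    until chunk_size bytes have been collected (or EOF)."""
    chunks = []
    remaining = chunk_size
    while remaining > 0:
        data = os.read(fd, remaining)
        if not data:  # writer closed the FIFO / end of file
            break
        chunks.append(data)
        remaining -= len(data)
    return b''.join(chunks)
```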
In Node.JS, I spawn a child Python process and pipe to it. I want to send a UInt8Array through stdin. To let the Python side know how many bytes of buffer data to read, I first send the size of the buffer. But the Python side doesn't stop reading the actual data after the specified size, so the process never terminates. I've checked that it takes bufferSize properly and converts it into an integer. If I remove size = int(input()) and python.stdin.write(bufferSize.toString() + "\n") and hardcode the size of the buffer instead, it works correctly. I can't figure out why the read doesn't stop after the specified number of bytes.
// Node.JS
const python_command = command.serializeBinary()
const python = spawn('test/production_tests/py_test_scripts/protocolbuffer/venv/bin/python', ['test/production_tests/py_test_scripts/protocolbuffer/command_handler.py']);
const bufferSize = python_command.byteLength
python.stdin.write(bufferSize.toString() + "\n")
python.stdin.write(python_command)
# Python
size = int(input())
data = sys.stdin.buffer.read(size)
In a nutshell, the problem seems to arise from using a plain input() first and then sys.stdin.buffer.read. I guess the first read conflicts with the second and prevents it from working normally.
There are two potential problems here. The first is that the pipe between node.js and the python script is block buffered. You won't see any data on the python side until either a block's worth of data is filled (system dependent) or the pipe is closed. The second is that there is a decoder between input and the byte stream coming in on stdin. This decoder is free to read ahead in the stream as it wishes. Reading sys.stdin.buffer may miss whatever happens to be buffered in the decoder.
You can solve the second problem by doing all of your reads from the buffer as shown below. The first problem needs to be solved on the node.js side - likely by closing its subprocess stdin. You may be better off just writing the size as a binary number, say uint64.
import struct
import sys

# read size - assuming it's coming in as an ASCII stream
size_buf = []
while True:
    c = sys.stdin.buffer.read(1)
    if c == b"\n":
        size = int(b"".join(size_buf))
        break
    size_buf.append(c)

fmt = "B"  # read unsigned char
fmtsize = struct.calcsize(fmt)
buf = [struct.unpack(fmt, sys.stdin.buffer.read(fmtsize))[0] for _ in range(size)]
print(buf)
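As a sketch of the binary-size alternative suggested above (assuming an 8-byte little-endian uint64 length prefix; the helper name read_sized_message is mine):

```python
import io
import struct

def read_sized_message(stream):
    """Read an 8-byte little-endian length prefix, then that many bytes.
    Note: on a real pipe, read(n) may return fewer than n bytes, in which
    case you would still loop until the full count arrives."""
    size = struct.unpack('<Q', stream.read(8))[0]
    return stream.read(size)

# Example with an in-memory stream standing in for sys.stdin.buffer:
msg = b'hello'
framed = struct.pack('<Q', len(msg)) + msg
print(read_sized_message(io.BytesIO(framed)))  # b'hello'
```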
I've got an issue where I am trying to measure the time interval between a serial write and serial read in Python. I'm using Python 3.8 and a USB->RS485 adaptor on Windows. Essentially, the code is as follows:
def write_packet(self, packet, flush=True):
    if flush:
        self.clear_buffers()
    self.ser.write(packet)
    self.tx_time = time.time_ns()
and this is immediately followed by:
def read_packet(self):
    first_byte = True
    while True:
        byte = self.ser.read(1)
        # Check if the read timed out:
        if byte == b'':
            **(notify of timeout)**
        if byte == b'\x00':
            **(end of packet, decode and break)**
        else:
            if first_byte:
                self.rx_time = time.time_ns()
                first_byte = False
As you can probably see, I'm trying to capture the time between just after transmitting, and receiving the first byte. After that I do something like this to get the time in ms:
time_diff_ms = (self.rx_time - self.tx_time)/1000000
My issue is the timings time_diff_ms seem to be way off. The scope image below of the RS485 signals shows it should read times of around ~1ms yet the script reads values of 6ms, 11ms, etc., almost random values.
https://i.stack.imgur.com/eJFAJ.jpg
I've also tried running the script on Linux, but not much difference. I'm also working with fairly high baud rates of 921600.
I'm trying to send files (images and text) over sockets in Python. I don't want to create a new connection every time, because the code writes lots of files (>100) in a short amount of time and I don't want that many connections piling up while they wait to close. So before each chunk of the file is sent, I send the length of the chunk first. When I run it, I get a ValueError on length = int(s.recv(4)), showing a string from the file and saying that it cannot be converted to an int. Here is the part of my code that sends and receives one file:
Sending:
#Connect s and open file f
s.setblocking(1)
buf = 4096
while True:
    msg = f.read(buf)
    length = str(len(msg))
    if len(length) < 4: length = "0"*(4-len(length)) + length
    s.sendall(length)
    if length == "0000": break
    s.sendall(msg)
    if len(msg) != buf: break
Receiving:
#Connect s and open file f
while True:
    length = int(s.recv(4))
    if length == 0: break
    f.write(s.recv(length))
    if length < buf: break
Running on Windows 8.
If you are sending large files, the data may arrive split into smaller pieces: TCP is a byte stream, and recv() returns as soon as some data is available, not necessarily the full amount you asked for. That can cause you to accidentally read part of the file as the length of the next chunk while you still haven't received the entire earlier file. You should make sure with a while loop that you got the entire length of data; if not, keep requesting the rest with s.recv(length - len(what_i_got_so_far)).
For example, when I send myself a few-MB picture through the LAN, the data arrives in pieces of around 25 KB, so I have to call recv many times even though I only used one send.
I hope this helps.
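That length-counting receive loop might be sketched like this (the helper name recv_exact is mine):

```python
import socket

def recv_exact(sock, length):
    """Keep calling recv() until exactly `length` bytes have arrived."""
    chunks = []
    received = 0
    while received < length:
        chunk = sock.recv(length - received)
        if not chunk:
            raise EOFError('connection closed with %d bytes still expected'
                           % (length - received))
        chunks.append(chunk)
        received += len(chunk)
    return b''.join(chunks)

# Quick demonstration with a local socket pair:
a, b = socket.socketpair()
a.sendall(b'abcdef')
print(recv_exact(b, 6))  # b'abcdef'
a.close()
b.close()
```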
See edits below.
I have two programs that communicate through sockets. I'm trying to send a block of data from one to the other. This has been working with some test data, but is failing with others.
s.sendall('%16d' % len(data))
s.sendall(data)
print(len(data))
sends to
size = int(s.recv(16))
recvd = ''
while size > len(recvd):
    data = s.recv(1024)
    if not data:
        break
    recvd += data
print(size, len(recvd))
At one end:
s = socket.socket()
s.connect((server_ip, port))
and the other:
c = socket.socket()
c.bind(('', port))
c.listen(1)
s,a = c.accept()
In my latest test, I sent a 7973903 byte block and the receiver reports receiving 7973930 bytes.
Why is the data block received off by 27 bytes?
Any other issues?
Python 2.7 or 2.5.4 if that matters.
EDIT: Aha - I'm probably reading past the end of the send buffer. If remaining bytes is less than 1024, I should only read the number of remaining bytes. Is there a standard technique for this sort of data transfer? I have the feeling I'm reinventing the wheel.
EDIT2: I'm screwing up by reading the next file in the series. I'm sending file1 and the last block is 997 bytes. Then I send file2, so the recv(1024) at the end of file1 reads the first 27 bytes of file2.
I'll start another question on how to do this better.
Thanks everyone. Asking and reading comments helped me focus.
First, the line
size = int(s.recv(16))
might read less than 16 bytes — it is unlikely, I will grant, but possible depending on how the network buffers align. The recv() call argument is a maximum value, a limit on how much data you are willing to receive. But you might only receive one byte. The operating system will generally give you control back once at least one byte has arrived, maybe (depending on the OS and on how busy the CPU is) after waiting another few milliseconds in case a second packet arrives with some further data, so that it only has to wake you up once instead of twice.
So you would want to say instead (to do the simplest possible loop; other variants are possible):
data = ''
while len(data) < 16:
    more = s.recv(16 - len(data))
    if not more:
        raise EOFError()
    data += more
This is indeed a wheel nearly everyone re-invents because it is so often needed. And your own code needs it a second time: your while loop needs its recv() to count down, asking for smaller and smaller limits until finally it has received exactly the number of bytes that were promised, and no more.
I have a Python script that reads a file (typically from optical media) marking the unreadable sectors, to allow a re-attempt to read said unreadable sectors on a different optical reader.
I discovered that my script does not work with block devices (e.g. /dev/sr0), in order to create a copy of the contained ISO9660/UDF filesystem, because os.stat().st_size is zero. The algorithm currently needs to know the filesize in advance; I can change that, but the issue (of knowing the block device size) remains, and it's not answered here, so I open this question.
I am aware of the following two related SO questions:
Determine the size of a block device (/proc/partitions, ioctl through ctypes)
how to check file size in python? (about non-special files)
Therefore, I'm asking: in Python, how can I get the file size of a block device file?
The "cleanest" (i.e. not dependent on external volumes and most reusable) Python solution I've reached is to open the device file and seek to the end, returning the file offset:
import os

def get_file_size(filename):
    "Get the file size by seeking at end"
    fd = os.open(filename, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)
Linux-specific ioctl-based solution:
import fcntl
import struct

device_path = '/dev/sr0'
req = 0x80081272  # BLKGETSIZE64: result is the size in bytes as an unsigned 64-bit integer (uint64)
fmt = 'Q'  # unsigned 64-bit
buf = b'\0' * struct.calcsize(fmt)
with open(device_path, 'rb') as dev:
    buf = fcntl.ioctl(dev.fileno(), req, buf)
size = struct.unpack(fmt, buf)[0]
print(device_path, 'is about', size // (1024 ** 2), 'megabytes')
Other unixes will have different values for req, buf, fmt of course.
In Linux, there is /sys/block/${dev}/size that can be read even without sudo. To get the size of /dev/sdb simply do:
print( 512 * int(open('/sys/block/sdb/size','r').read()) )
See also https://unix.stackexchange.com/a/52219/384116
Another possible solution is
def blockdev_size(path):
    """Return device size in bytes."""
    with open(path, 'rb') as f:
        return f.seek(0, 2) or f.tell()
The or f.tell() part is there for Python 2 portability's sake: file.seek() returns None in Python 2.
Magic constant 2 may be substituted with io.SEEK_END.
Trying to adapt from the other answer:
import fcntl
import struct

c = 0x00001260  # BLKGETSIZE, see man ioctl_list: size in 512-byte sectors, returned via a long pointer
with open('/dev/sr0', 'rb') as f:
    buf = fcntl.ioctl(f, c, struct.pack('L', 0))
s = struct.unpack('L', buf)[0]
print(s)
I don't have a suitable computer at hand to test this. I'd be curious to know if it works :)