Rebuilding my wave file with struct - python

My goal is to read a wave file and edit its data by adding a random number in the range -1 to 1 to each sample, in the hope of creating some distortion, and then saving it as an edited wave file. I read and edit the wave file like so:
import struct
import random

fileIn = open('input.wav', 'rb')   # placeholder path; setup implied by the question
stHeaderFields = {}                # parsed header fields are collected here

riffTag = fileIn.read(4)
if riffTag != 'RIFF':
    print 'not a valid RIFF file'
    exit(1)

riffLength = struct.unpack('<L', fileIn.read(4))[0]
riffType = fileIn.read(4)
if riffType != 'WAVE':
    print 'not a WAV file'
    exit(1)

# now read children
while fileIn.tell() < 8 + riffLength:
    tag = fileIn.read(4)
    length = struct.unpack('<L', fileIn.read(4))[0]
    if tag == 'fmt ':  # format element
        fmtData = fileIn.read(length)
        fmt, numChannels, sampleRate, byteRate, blockAlign, bitsPerSample = struct.unpack('<HHLLHH', fmtData)
        stHeaderFields['AudioFormat'] = fmt
        stHeaderFields['NumChannels'] = numChannels
        stHeaderFields['SampleRate'] = sampleRate
        stHeaderFields['ByteRate'] = byteRate
        stHeaderFields['BlockAlign'] = blockAlign
        stHeaderFields['BitsPerSample'] = bitsPerSample
    elif tag == 'data':  # data element
        rawData = fileIn.read(length)
    else:  # some other element, just skip it
        fileIn.seek(length, 1)

numChannels = stHeaderFields['NumChannels']

# some sanity checks
assert(stHeaderFields['BitsPerSample'] == 16)
assert(numChannels * stHeaderFields['BitsPerSample'] == blockAlign * 8)

samples = []
edited_samples = []
for offset in range(0, len(rawData), blockAlign):
    samples.append(struct.unpack('<h', rawData[offset:offset+blockAlign]))
for sample in samples:
    edited_samples.append(sample[0] + random.randint(-1, 1))
After I've done this I try to save the data as a new edited wave file by doing the following:
foo = []
for sample in edited_samples:
    foo.append(struct.pack('<h', int(sample)))

with open(fileIn.name + ' edited.wav', 'w') as file_out:
    file_out.write('RIFF')
    file_out.write(struct.pack('<L', riffLength))
    file_out.write('WAVE')
    file_out.write(ur'fmt\u0020')
    file_out.write(struct.pack('<H', fmt))
    file_out.write(struct.pack('<H', numChannels))
    file_out.write(struct.pack('<L', sampleRate))
    file_out.write(struct.pack('<L', byteRate))
    file_out.write(struct.pack('<H', blockAlign))
    file_out.write(struct.pack('<H', bitsPerSample))
    file_out.write('data')
    for item in foo:
        file_out.write(item)
While it doesn't give me any errors, I can't play the new wave file in a media player. When I try to open my new wave file I get a crash on the line fmt, numChannels, sampleRate, byteRate, blockAlign, bitsPerSample = struct.unpack('<HHLLHH', fmtData) with the error: unpack requires a string argument of length 16. I imagine I'm building the wave file wrong. How do I build it correctly?

Unless you're intent on writing the support for .wav files yourself for some other reason (getting experience with dealing with binary files, etc.), don't do this. Python comes with the wave module that handles all of the file format issues and lets you just work with the data.
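For illustration, here is a minimal sketch of the same distortion pass using the wave module. It mirrors the 16-bit assumption from the question; the file name is a placeholder and the clamping step is an extra precaution so the repack never overflows, so treat this as one plausible approach rather than the only one:
import wave
import struct
import random

fileIn = wave.open('input.wav', 'rb')       # placeholder path
params = fileIn.getparams()                 # (nchannels, sampwidth, framerate, nframes, comptype, compname)
frames = fileIn.readframes(params[3])
fileIn.close()

# unpack all 16-bit samples at once, add the +/-1 noise, repack
count = len(frames) // 2
samples = struct.unpack('<%dh' % count, frames)
edited = [s + random.randint(-1, 1) for s in samples]
# clamp to the signed 16-bit range so struct.pack never overflows
edited = [max(-32768, min(32767, s)) for s in edited]

fileOut = wave.open('input edited.wav', 'wb')
fileOut.setparams(params)                   # wave writes the RIFF/fmt/data headers for you
fileOut.writeframes(struct.pack('<%dh' % count, *edited))
fileOut.close()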

Related

Python - Accelerometer reading and writing to CSV file at 1 kHz rate

I am trying to use an MPU-6000 accelerometer and a Raspberry Pi Zero W to log vibration data in a windshield. I'm fairly new to Python so please bear with me.
I've written a Python 2 script that configures the MPU-6000 to communicate over I2C, with the clock configured to 400 kHz.
The MPU-6000 raises an interrupt when new data is available in the accelerometer registers; the data is read, converted to two's complement and then written to a CSV file together with a timestamp. The output rate of the accelerometer is configured to be 1 kHz.
I'm finding that when sampling all three sensor axes the script isn't able to write all data points to the CSV file. Instead of 1000 data points per axis per second I get approximately 650 data points per axis per second.
I've tried writing only one axis, which was successful at 1000 data points per second. I know that the MPU-6000 has a FIFO register available, which I could probably burst-read to get 1000 samples/s without any problem. The problem would be obtaining a timestamp for each sample, so I haven't tried to implement reading from the FIFO register yet.
I will most likely do most of the post-processing in Matlab, so the most important thing the Python script should do is write sensor data in any form to a CSV file at the determined rate, with a timestamp.
Is there any way to further improve my Python script, so I can sample all three axes and write to a CSV file at a 1 kHz rate?
Parts of my script are shown below:
#!/usr/bin/python
import smbus
import math
import csv
import time
import sys
import datetime

# Register addresses
power_mgmt_1 = 0x6b
power_mgmt_2 = 0x6c
samlerate_divider = 0x19
accel_config = 0x1C
INT_Enable = 0x38

def read_byte(reg):
    return bus.read_byte_data(address, reg)

def read_word(reg):
    h = bus.read_byte_data(address, reg)
    l = bus.read_byte_data(address, reg+1)
    value = (h << 8) + l
    return value

def read_word_2c(reg):
    val = read_word(reg)
    if (val >= 0x8000):
        return -((65535 - val) + 1)
    else:
        return val

csvwriter = None

def csv_open():
    global csvwriter  # keep the writer visible to csv_write
    csvfile = open('accel-data.csv', 'a')
    csvwriter = csv.writer(csvfile)

def csv_write(timedelta, accelerometerx, accelerometery, accelerometerz):
    global csvwriter
    csvwriter.writerow([timedelta, accelerometerx, accelerometery,
                        accelerometerz])

# I2C configs
bus = smbus.SMBus(1)
address = 0x69

# Power management configurations
bus.write_byte_data(address, power_mgmt_1, 0)
bus.write_byte_data(address, power_mgmt_2, 0x00)

# Configure sample-rate divider
bus.write_byte_data(address, 0x19, 0x07)

# Configure data ready interrupt:
bus.write_byte_data(address, INT_Enable, 0x01)

# Opening csv file and getting ready for writing
csv_open()
csv_write('Time', 'X_Axis', 'Y_Axis', 'Z_Axis')

print
print "Accelerometer"
print "---------------------"
print "Printing accelerometer data: "

#starttime = datetime.datetime.now()

while True:
    data_interrupt_read = bus.read_byte_data(address, 0x3A)
    if data_interrupt_read == 1:
        meas_time = datetime.datetime.now()
        # delta_time = meas_time - starttime
        accelerometer_xout = read_word_2c(0x3b)
        accelerometer_yout = read_word_2c(0x3d)
        accelerometer_zout = read_word_2c(0x3f)
        # accelerometer_xout = read_word(0x3b)
        # accelerometer_yout = read_word(0x3d)
        # accelerometer_zout = read_word(0x3f)
        # accelerometer_xout_scaled = accelerometer_xout / 16384.0
        # accelerometer_yout_scaled = accelerometer_yout / 16384.0
        # accelerometer_zout_scaled = accelerometer_zout / 16384.0
        # csv_write(meas_time, accelerometer_xout_scaled,
        #           accelerometer_yout_scaled, accelerometer_zout_scaled)
        csv_write(meas_time, accelerometer_xout, accelerometer_yout,
                  accelerometer_zout)
        continue
If the data you are trying to write is continuous, then the best approach is to minimise the amount of processing needed to write it and to also minimise the amount of data being written. To do this, a good approach would be to write the raw data into a binary formatted file. Each data word would then only require 2 bytes to be written. The datetime object can be converted into a timestamp which would need 4 bytes. So you would use a format such as:
[4 byte timestamp][2 byte x][2 byte y][2 byte z]
Python's struct library can be used to convert multiple variables into a single binary string which can be written to a file. The data appears to be signed; if that is the case, you could write each word as-is and then use the library's built-in support for signed values to read it back in later.
For example, the following could be used to write the raw data to a binary file:
#!/usr/bin/python
import smbus
import math
import csv
import time
import sys
import datetime
import struct

# Register addresses
power_mgmt_1 = 0x6b
power_mgmt_2 = 0x6c
samlerate_divider = 0x19
accel_config = 0x1C
INT_Enable = 0x38

def read_byte(reg):
    return bus.read_byte_data(address, reg)

def read_word(reg):
    h = bus.read_byte_data(address, reg)
    l = bus.read_byte_data(address, reg+1)
    value = (h << 8) + l
    return value

# I2C configs
bus = smbus.SMBus(1)
address = 0x69

# Power management configurations
bus.write_byte_data(address, power_mgmt_1, 0)
bus.write_byte_data(address, power_mgmt_2, 0x00)

# Configure sample-rate divider
bus.write_byte_data(address, 0x19, 0x07)

# Configure data ready interrupt:
bus.write_byte_data(address, INT_Enable, 0x01)

print
print "Accelerometer"
print "---------------------"
print "Printing accelerometer data: "

#starttime = datetime.datetime.now()

bin_format = 'L3H'

with open('accel-data.bin', 'ab') as f_output:
    while True:
        #data_interrupt_read = bus.read_byte_data(address, 0x3A)
        data_interrupt_read = 1
        if data_interrupt_read == 1:
            meas_time = datetime.datetime.now()
            timestamp = time.mktime(meas_time.timetuple())
            accelerometer_xout = read_word(0x3b)
            accelerometer_yout = read_word(0x3d)
            accelerometer_zout = read_word(0x3f)
            f_output.write(struct.pack(bin_format, timestamp, accelerometer_xout, accelerometer_yout, accelerometer_zout))
Later on, you could then convert the binary file to a CSV file using:
from datetime import datetime
import csv
import struct

bin_format = 'L3h'    # Read data back as signed words
entry_size = struct.calcsize(bin_format)

with open('accel-data.bin', 'rb') as f_input, open('accel-data.csv', 'wb') as f_output:
    csv_output = csv.writer(f_output)
    csv_output.writerow(['Time', 'X_Axis', 'Y_Axis', 'Z_Axis'])

    while True:
        bin_entry = f_input.read(entry_size)
        if len(bin_entry) < entry_size:
            break
        entry = list(struct.unpack(bin_format, bin_entry))
        entry[0] = datetime.fromtimestamp(entry[0]).strftime('%Y-%m-%d %H:%M:%S')
        csv_output.writerow(entry)
If your data collection is not continuous, you could make use of threads. One thread would read your data into a special queue and another thread would read items off the queue and write them to disk.
If the collection is continuous, this approach will still fail if writing the data is slower than reading it.
Take a look at the special Format characters used to tell struct how to pack and unpack the binary data.
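As a rough sketch of the threaded approach mentioned above (the queue size and thread layout are illustrative, not part of the original script): one thread samples into a Queue while another drains it to disk.
import struct
import threading
import Queue  # 'queue' on Python 3

sample_queue = Queue.Queue(maxsize=10000)   # illustrative size
bin_format = 'L3H'

def writer_thread():
    # Drain samples to disk so the sampling loop never blocks on file I/O.
    with open('accel-data.bin', 'ab') as f_output:
        while True:
            item = sample_queue.get()
            if item is None:        # sentinel: stop writing
                break
            f_output.write(struct.pack(bin_format, *item))
            sample_queue.task_done()

t = threading.Thread(target=writer_thread)
t.daemon = True
t.start()

# In the sampling loop, replace the direct file write with:
#     sample_queue.put((timestamp, x, y, z))
# and push None once sampling is finished.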

Tuple index out of range when working with an image in Python

I have a class Header:
class Header:
    MAX_FORMAT_LENGTH = 8
    magicnum = "hide"  # str
    size = 0
    fformat = "txt"
I have a function which encodes a byte in a pixel of an image:
def encode_in_pixel(byte, pixel):
    """Encodes a byte in the two least significant bits of each channel.
    A 4-channel pixel is needed, which should be a tuple of 4 values from 0 to
    255.
    """
    r = (byte&3)
    g = (byte&12)>>2
    b = (byte&48)>>4
    a = (byte&192)>>6
    color = (r+(pixel[0]&252),
             g+(pixel[1]&252),
             b+(pixel[2]&252),
             a+(pixel[3]&252))  # Here is my problem
    return color
After running it with the correct command-line arguments, I get an error which says:
a = pixel[3]&3
IndexError: tuple index out of range
I tried different kinds of images as input, like png, jpg, jpeg, etc., because I wanted to rule out the 4-channel pixel requirement in case the image was the problem. But no, I get the same error.
I would like to show how my encoding process looks. Besides encode_in_pixel(), I have a function which calls encode_in_pixel:
def encode(image, data, filename, encryption=False, password=""):
    im = Image.open(image)
    px = im.load()

    # Create a header
    header = Header()
    header.size = len(data)
    header.fformat = "" if (len(filename.split(os.extsep)) < 2)\
        else filename.split(os.extsep)[1]

    # Add the header to the file data
    headerdata = struct.pack("4s" +
                             "I" +
                             str(Header.MAX_FORMAT_LENGTH) + "s",
                             header.magicnum, header.size, header.fformat)
    filebytes = headerdata + data

    # Optional encryption step
    if encrypt:
        if password:
            filebytes = encrypt(filebytes, password,
                                padding=im.width*im.height - len(filebytes))
        else:
            print "Password is empty, encryption skipped"

    # Ensure the image is large enough to hide the data
    if len(filebytes) > im.width*im.height:
        print "Image too small to encode the file. \
You can store 1 byte per pixel."
        exit()

    for i in range(len(filebytes)):
        coords = (i % im.width, i / im.width)
        byte = ord(filebytes[i])
        px[coords[0], coords[1]] = encode_in_pixel(byte, px[coords[0], coords[1]])
        # here I'm trying to call the function where I have the error

    im.save("output.png", "PNG")
I may be passing the arguments incorrectly, but if so, how should I pass them?
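For what it's worth, the docstring above requires a 4-channel (RGBA) pixel, while JPEGs load as 3-channel RGB, so pixel[3] does not exist. A hedged sketch of forcing 4 channels with Pillow (the convert call is an assumption about the intended fix, not part of the original code):
from PIL import Image

im = Image.open("cover.jpg")   # hypothetical input file
im = im.convert("RGBA")        # guarantees each pixel is a 4-tuple, so pixel[3] exists
px = im.load()
print px[0, 0]                 # e.g. (r, g, b, a)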

How to extract image file details in Python (Windows)?

I'm writing a program that has to process multiple images. Many of them have different resolutions (dpi). Is there a way to retrieve the information from file properties? I tried PIL.ExifTags, PIL.IptcImagePlugin, other EXIF extractors, but everything returns None.
If EXIF tools can't get the dpi from a JPEG, the JPEG may not have EXIF data at all; it may instead carry JFIF (APP0) metadata. The dpi can be read from the JFIF header.
def get_resolution(filename):
    with open(filename, "rb") as f:
        data = f.read()
    if data[0:2] != b"\xff\xd8":
        raise ValueError("Not JPEG.")
    if data[2:4] != b"\xff\xe0":
        return None
    else:
        if data[13] == b"\x00":
            unit = "no unit"
        elif data[13] == b"\x01":
            unit = "dpi"
        elif data[13] == b"\x02":
            unit = "dpcm"
        else:
            raise ValueError("Bad JFIF")
        x = 256 * ord(data[14]) + ord(data[15])
        y = 256 * ord(data[16]) + ord(data[17])
        return {"unit": unit, "resolution": (x, y)}

Read the properties of HDF file in Python

I have a problem reading an HDF file in pandas. As of now, I don't know the keys of the file.
How do I read the file [data.hdf] in such a case? Also, my file is .hdf, not .h5. Does that make a difference in terms of data fetching?
I see that you need a 'group identifier in the store'
pandas.io.pytables.read_hdf(path_or_buf, key, **kwargs)
I was able to get the metadata from pytables
File(filename=data.hdf, title='', mode='a', root_uep='/', filters=Filters(complevel=0, shuffle=False, fletcher32=False, least_significant_digit=None))
/ (RootGroup) ''
/UID (EArray(317,)) ''
atom := StringAtom(itemsize=36, shape=(), dflt='')
maindim := 0
flavor := 'numpy'
byteorder := 'irrelevant'
chunkshape := (100,)
/X Y (EArray(8319, 2, 317)) ''
atom := Float32Atom(shape=(), dflt=0.0)
maindim := 0
flavor := 'numpy'
byteorder := 'little'
chunkshape := (1000, 2, 100)
How do I make it readable via pandas?
First, the extension (.hdf or .h5) doesn't make any difference.
Second, I'm not sure about pandas, but I read the HDF5 keys like:
import h5py
h5f = h5py.File("test.h5", "r")
h5f.keys()
or
h5f.values()
Docs are here. However, you will not be able to read the format you show directly with pandas; you need to use PyTables to read it in. pandas can read the PyTables Table format directly, even without the metadata that pandas normally uses.
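A hedged sketch of reading the two arrays shown in the metadata dump with PyTables and handing one of them to pandas (it assumes PyTables 3's open_file API; the node names and shapes come from the dump above, and the DataFrame layout is just one possibility):
import tables
import pandas as pd

with tables.open_file("data.hdf", mode="r") as f:
    uid = f.get_node("/UID").read()      # (317,) array of 36-character strings
    xy = f.get_node("/X Y").read()       # (8319, 2, 317) float32 array

# e.g. build a frame of the first coordinate track for every UID
df = pd.DataFrame(xy[:, 0, :], columns=uid)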
pyhdf is an alternative option for HDF files in Python.
You can read the file and see its keys with:
import pyhdf.SD
hdf = pyhdf.SD.SD('file.hdf')
hdf.datasets()
I hope it will help you!
Good luck!
You can use this simple function to see the variable names of any HDF file (it only works for variables stored as scientific datasets (SD)):
from pyhdf.SD import *

def HDFvars(File):
    """
    Extract variable names for an hdf file
    """
    # hdfFile = SD.SD(File, mode=1)
    hdfFile = SD(File, mode=1)
    dsets = hdfFile.datasets()
    k = []
    for key in dsets.keys():
        k.append(key)
    k.sort()
    hdfFile.end()  # close the file
    return k
If the variables aren't scientific datasets, you can try with pyhdf.V, using the following program that shows the contents of the vgroups contained inside any HDF file.
from pyhdf.HDF import *
from pyhdf.V import *
from pyhdf.VS import *
from pyhdf.SD import *

def describevg(refnum):
    # Describe the vgroup with the given refnum.

    # Open vgroup in read mode.
    vg = v.attach(refnum)
    print "----------------"
    print "name:", vg._name, "class:", vg._class, "tag,ref:",
    print vg._tag, vg._refnum

    # Show the number of members of each main object type.
    print "members: ", vg._nmembers,
    print "datasets:", vg.nrefs(HC.DFTAG_NDG),
    print "vdatas:  ", vg.nrefs(HC.DFTAG_VH),
    print "vgroups: ", vg.nrefs(HC.DFTAG_VG)

    # Read the contents of the vgroup.
    members = vg.tagrefs()

    # Display info about each member.
    index = -1
    for tag, ref in members:
        index += 1
        print "member index", index
        # Vdata tag
        if tag == HC.DFTAG_VH:
            vd = vs.attach(ref)
            nrecs, intmode, fields, size, name = vd.inquire()
            print "  vdata:", name, "tag,ref:", tag, ref
            print "    fields:", fields
            print "    nrecs:", nrecs
            vd.detach()
        # SDS tag
        elif tag == HC.DFTAG_NDG:
            sds = sd.select(sd.reftoindex(ref))
            name, rank, dims, type, nattrs = sds.info()
            print "  dataset:", name, "tag,ref:", tag, ref
            print "    dims:", dims
            print "    type:", type
            sds.endaccess()
        # VS tag
        elif tag == HC.DFTAG_VG:
            vg0 = v.attach(ref)
            print "  vgroup:", vg0._name, "tag,ref:", tag, ref
            vg0.detach()
        # Unhandled tag
        else:
            print "unhandled tag,ref", tag, ref

    # Close vgroup
    vg.detach()

# Open HDF file in readonly mode.
filename = 'yourfile.hdf'
hdf = HDF(filename)

# Initialize the SD, V and VS interfaces on the file.
sd = SD(filename)
vs = hdf.vstart()
v = hdf.vgstart()

# Scan all vgroups in the file.
ref = -1
while 1:
    try:
        ref = v.getid(ref)
        print ref
    except HDF4Error, msg:  # no more vgroup
        break
    describevg(ref)

Speed up reading wav in python

Evening,
I am working on a project that requires me to read in multichannel wav files in 32-bit float.
When I read a specific file (1 minute long, 6 channels, 48k fs) into Matlab and measure it with tic/toc, it parses the file in 2.456482 seconds.
Matlab Code for file reading speed measurement
tic
wavread('C:/data/testData/6ch.wav');
toc
When I do it in python (mind you, I'm pretty unfamiliar with python) it takes 18.1655315617 seconds!
It seems to me like the way I am doing it is inefficient (I did get it down to 18 from 28 but it's still too much...)
I stripped the code to what is relevant to this subject:
Python Code for file reading speed measurement
import wave32
import struct
import time
import numpy as np
def getWavData(inFile):
    wavFile = wave32.open(inFile, 'r')
    wavParams = wavFile.getparams()
    nChannels = wavParams[0]
    byteDepth = wavParams[1]
    nFrames = wavParams[3]
    wavData = np.empty([nFrames, nChannels], np.float32)
    frames = wavFile.readframes(nFrames)
    for i in range(nFrames):
        for j in range(nChannels):
            start = (i * nChannels + j) * byteDepth
            stop = start + byteDepth
            wavData[i][j] = struct.unpack('<f', frames[start:stop])[0]
    return wavData

inFile = 'C:/data/testData/6ch.wav'
start = time.clock()
data2 = getWavData(inFile)
elapsed = time.clock()
elapsedNew = elapsed - start
print str(elapsedNew)
Please note that wav32 is a small hack I had to perform on wave.py to enable 32-bit float reading.
"""Stuff to parse WAVE files.
Usage.
Reading WAVE files:
f = wave.open(file, 'r')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods read(), seek(), and close().
When the setpos() and rewind() methods are not used, the seek()
method is not necessary.
This returns an instance of a class with the following public methods:
getnchannels() -- returns number of audio channels (1 for
mono, 2 for stereo)
getsampwidth() -- returns sample width in bytes
getframerate() -- returns sampling frequency
getnframes() -- returns number of audio frames
getcomptype() -- returns compression type ('NONE' for linear samples)
getcompname() -- returns human-readable version of
compression type ('not compressed' linear samples)
getparams() -- returns a tuple consisting of all of the
above in the above order
getmarkers() -- returns None (for compatibility with the
aifc module)
getmark(id) -- raises an error since the mark does not
exist (for compatibility with the aifc module)
readframes(n) -- returns at most n frames of audio
rewind() -- rewind to the beginning of the audio stream
setpos(pos) -- seek to the specified position
tell() -- return the current position
close() -- close the instance (make it unusable)
The position returned by tell() and the position given to setpos()
are compatible and have nothing to do with the actual position in the
file.
The close() method is called automatically when the class instance
is destroyed.
Writing WAVE files:
f = wave.open(file, 'w')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods write(), tell(), seek(), and
close().
This returns an instance of a class with the following public methods:
setnchannels(n) -- set the number of channels
setsampwidth(n) -- set the sample width
setframerate(n) -- set the frame rate
setnframes(n) -- set the number of frames
setcomptype(type, name)
-- set the compression type and the
human-readable compression type
setparams(tuple)
-- set all parameters at once
tell() -- return current position in output file
writeframesraw(data)
-- write audio frames without pathing up the
file header
writeframes(data)
-- write audio frames and patch up the file header
close() -- patch up the file header and close the
output file
You should set the parameters before the first writeframesraw or
writeframes. The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters, perhaps possibly the
compression type, and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes('') or
close() to patch up the sizes in the header.
The close() method is called automatically when the class instance
is destroyed.
"""
import __builtin__
__all__ = ["open", "openfp", "Error"]
class Error(Exception):
pass
WAVE_FORMAT_PCM = 0x0001
WAVE_FORMAT_IEEE_FLOAT = 0x0003
_array_fmts = None, 'b', 'h', None, 'l'
# Determine endian-ness
import struct
if struct.pack("h", 1) == "\000\001":
big_endian = 1
else:
big_endian = 0
from chunk import Chunk
class Wave_read:
"""Variables used in this class:
These variables are available to the user though appropriate
methods of this class:
_file -- the open file with methods read(), close(), and seek()
set through the __init__() method
_nchannels -- the number of audio channels
available through the getnchannels() method
_nframes -- the number of audio frames
available through the getnframes() method
_sampwidth -- the number of bytes per audio sample
available through the getsampwidth() method
_framerate -- the sampling frequency
available through the getframerate() method
_comptype -- the AIFF-C compression type ('NONE' if AIFF)
available through the getcomptype() method
_compname -- the human-readable AIFF-C compression type
available through the getcomptype() method
_soundpos -- the position in the audio stream
available through the tell() method, set through the
setpos() method
These variables are used internally only:
_fmt_chunk_read -- 1 iff the FMT chunk has been read
_data_seek_needed -- 1 iff positioned correctly in audio
file for readframes()
_data_chunk -- instantiation of a chunk class for the DATA chunk
_framesize -- size of one frame in the file
"""
def initfp(self, file):
self._convert = None
self._soundpos = 0
self._file = Chunk(file, bigendian = 0)
if self._file.getname() != 'RIFF':
raise Error, 'file does not start with RIFF id'
if self._file.read(4) != 'WAVE':
raise Error, 'not a WAVE file'
self._fmt_chunk_read = 0
self._data_chunk = None
while 1:
self._data_seek_needed = 1
try:
chunk = Chunk(self._file, bigendian = 0)
except EOFError:
break
chunkname = chunk.getname()
if chunkname == 'fmt ':
self._read_fmt_chunk(chunk)
self._fmt_chunk_read = 1
elif chunkname == 'data':
if not self._fmt_chunk_read:
raise Error, 'data chunk before fmt chunk'
self._data_chunk = chunk
self._nframes = chunk.chunksize // self._framesize
self._data_seek_needed = 0
break
chunk.skip()
if not self._fmt_chunk_read or not self._data_chunk:
raise Error, 'fmt chunk and/or data chunk missing'
def __init__(self, f):
self._i_opened_the_file = None
if isinstance(f, basestring):
f = __builtin__.open(f, 'rb')
self._i_opened_the_file = f
# else, assume it is an open file object already
try:
self.initfp(f)
except:
if self._i_opened_the_file:
f.close()
raise
def __del__(self):
self.close()
#
# User visible methods.
#
def getfp(self):
return self._file
def rewind(self):
self._data_seek_needed = 1
self._soundpos = 0
def close(self):
if self._i_opened_the_file:
self._i_opened_the_file.close()
self._i_opened_the_file = None
self._file = None
def tell(self):
return self._soundpos
def getnchannels(self):
return self._nchannels
def getnframes(self):
return self._nframes
def getsampwidth(self):
return self._sampwidth
def getframerate(self):
return self._framerate
def getcomptype(self):
return self._comptype
def getcompname(self):
return self._compname
def getparams(self):
return self.getnchannels(), self.getsampwidth(), \
self.getframerate(), self.getnframes(), \
self.getcomptype(), self.getcompname()
def getmarkers(self):
return None
def getmark(self, id):
raise Error, 'no marks'
def setpos(self, pos):
if pos < 0 or pos > self._nframes:
raise Error, 'position not in range'
self._soundpos = pos
self._data_seek_needed = 1
def readframes(self, nframes):
if self._data_seek_needed:
self._data_chunk.seek(0, 0)
pos = self._soundpos * self._framesize
if pos:
self._data_chunk.seek(pos, 0)
self._data_seek_needed = 0
if nframes == 0:
return ''
if self._sampwidth > 1 and big_endian:
# unfortunately the fromfile() method does not take
# something that only looks like a file object, so
# we have to reach into the innards of the chunk object
import array
chunk = self._data_chunk
data = array.array(_array_fmts[self._sampwidth])
nitems = nframes * self._nchannels
if nitems * self._sampwidth > chunk.chunksize - chunk.size_read:
nitems = (chunk.chunksize - chunk.size_read) / self._sampwidth
data.fromfile(chunk.file.file, nitems)
# "tell" data chunk how much was read
chunk.size_read = chunk.size_read + nitems * self._sampwidth
# do the same for the outermost chunk
chunk = chunk.file
chunk.size_read = chunk.size_read + nitems * self._sampwidth
data.byteswap()
data = data.tostring()
else:
data = self._data_chunk.read(nframes * self._framesize)
if self._convert and data:
data = self._convert(data)
self._soundpos = self._soundpos + len(data) // (self._nchannels * self._sampwidth)
return data
#
# Internal methods.
#
def _read_fmt_chunk(self, chunk):
wFormatTag, self._nchannels, self._framerate, dwAvgBytesPerSec, wBlockAlign = struct.unpack('<hhllh', chunk.read(14))
if wFormatTag == WAVE_FORMAT_PCM or wFormatTag==WAVE_FORMAT_IEEE_FLOAT:
sampwidth = struct.unpack('<h', chunk.read(2))[0]
self._sampwidth = (sampwidth + 7) // 8
else:
#sampwidth = struct.unpack('<h', chunk.read(2))[0]
#self._sampwidth = (sampwidth + 7) // 8
raise Error, 'unknown format: %r' % (wFormatTag,)
self._framesize = self._nchannels * self._sampwidth
self._comptype = 'NONE'
self._compname = 'not compressed'
class Wave_write:
"""Variables used in this class:
These variables are user settable through appropriate methods
of this class:
_file -- the open file with methods write(), close(), tell(), seek()
set through the __init__() method
_comptype -- the AIFF-C compression type ('NONE' in AIFF)
set through the setcomptype() or setparams() method
_compname -- the human-readable AIFF-C compression type
set through the setcomptype() or setparams() method
_nchannels -- the number of audio channels
set through the setnchannels() or setparams() method
_sampwidth -- the number of bytes per audio sample
set through the setsampwidth() or setparams() method
_framerate -- the sampling frequency
set through the setframerate() or setparams() method
_nframes -- the number of audio frames written to the header
set through the setnframes() or setparams() method
These variables are used internally only:
_datalength -- the size of the audio samples written to the header
_nframeswritten -- the number of frames actually written
_datawritten -- the size of the audio samples actually written
"""
def __init__(self, f):
self._i_opened_the_file = None
if isinstance(f, basestring):
f = __builtin__.open(f, 'wb')
self._i_opened_the_file = f
try:
self.initfp(f)
except:
if self._i_opened_the_file:
f.close()
raise
def initfp(self, file):
self._file = file
self._convert = None
self._nchannels = 0
self._sampwidth = 0
self._framerate = 0
self._nframes = 0
self._nframeswritten = 0
self._datawritten = 0
self._datalength = 0
self._headerwritten = False
def __del__(self):
self.close()
#
# User visible methods.
#
def setnchannels(self, nchannels):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
if nchannels < 1:
raise Error, 'bad # of channels'
self._nchannels = nchannels
def getnchannels(self):
if not self._nchannels:
raise Error, 'number of channels not set'
return self._nchannels
def setsampwidth(self, sampwidth):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
if sampwidth < 1 or sampwidth > 4:
raise Error, 'bad sample width'
self._sampwidth = sampwidth
def getsampwidth(self):
if not self._sampwidth:
raise Error, 'sample width not set'
return self._sampwidth
def setframerate(self, framerate):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
if framerate <= 0:
raise Error, 'bad frame rate'
self._framerate = framerate
def getframerate(self):
if not self._framerate:
raise Error, 'frame rate not set'
return self._framerate
def setnframes(self, nframes):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
self._nframes = nframes
def getnframes(self):
return self._nframeswritten
def setcomptype(self, comptype, compname):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
if comptype not in ('NONE',):
raise Error, 'unsupported compression type'
self._comptype = comptype
self._compname = compname
def getcomptype(self):
return self._comptype
def getcompname(self):
return self._compname
def setparams(self, params):
nchannels, sampwidth, framerate, nframes, comptype, compname = params
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
self.setnchannels(nchannels)
self.setsampwidth(sampwidth)
self.setframerate(framerate)
self.setnframes(nframes)
self.setcomptype(comptype, compname)
def getparams(self):
if not self._nchannels or not self._sampwidth or not self._framerate:
raise Error, 'not all parameters set'
return self._nchannels, self._sampwidth, self._framerate, \
self._nframes, self._comptype, self._compname
def setmark(self, id, pos, name):
raise Error, 'setmark() not supported'
def getmark(self, id):
raise Error, 'no marks'
def getmarkers(self):
return None
def tell(self):
return self._nframeswritten
def writeframesraw(self, data):
self._ensure_header_written(len(data))
nframes = len(data) // (self._sampwidth * self._nchannels)
if self._convert:
data = self._convert(data)
if self._sampwidth > 1 and big_endian:
import array
data = array.array(_array_fmts[self._sampwidth], data)
data.byteswap()
data.tofile(self._file)
self._datawritten = self._datawritten + len(data) * self._sampwidth
else:
self._file.write(data)
self._datawritten = self._datawritten + len(data)
self._nframeswritten = self._nframeswritten + nframes
def writeframes(self, data):
self.writeframesraw(data)
if self._datalength != self._datawritten:
self._patchheader()
def close(self):
if self._file:
self._ensure_header_written(0)
if self._datalength != self._datawritten:
self._patchheader()
self._file.flush()
self._file = None
if self._i_opened_the_file:
self._i_opened_the_file.close()
self._i_opened_the_file = None
#
# Internal methods.
#
def _ensure_header_written(self, datasize):
if not self._headerwritten:
if not self._nchannels:
raise Error, '# channels not specified'
if not self._sampwidth:
raise Error, 'sample width not specified'
if not self._framerate:
raise Error, 'sampling rate not specified'
self._write_header(datasize)
def _write_header(self, initlength):
assert not self._headerwritten
self._file.write('RIFF')
if not self._nframes:
self._nframes = initlength / (self._nchannels * self._sampwidth)
self._datalength = self._nframes * self._nchannels * self._sampwidth
self._form_length_pos = self._file.tell()
self._file.write(struct.pack('<l4s4slhhllhh4s',
36 + self._datalength, 'WAVE', 'fmt ', 16,
WAVE_FORMAT_PCM, self._nchannels, self._framerate,
self._nchannels * self._framerate * self._sampwidth,
self._nchannels * self._sampwidth,
self._sampwidth * 8, 'data'))
self._data_length_pos = self._file.tell()
self._file.write(struct.pack('<l', self._datalength))
self._headerwritten = True
def _patchheader(self):
assert self._headerwritten
if self._datawritten == self._datalength:
return
curpos = self._file.tell()
self._file.seek(self._form_length_pos, 0)
self._file.write(struct.pack('<l', 36 + self._datawritten))
self._file.seek(self._data_length_pos, 0)
self._file.write(struct.pack('<l', self._datawritten))
self._file.seek(curpos, 0)
self._datalength = self._datawritten
def open(f, mode=None):
if mode is None:
if hasattr(f, 'mode'):
mode = f.mode
else:
mode = 'rb'
if mode in ('r', 'rb'):
return Wave_read(f)
elif mode in ('w', 'wb'):
return Wave_write(f)
else:
raise Error, "mode must be 'r', 'rb', 'w', or 'wb'"
openfp = open # B/W compatibility
Sorry for the long code BTW :)
So my question is: is the wave.py module inherently slow (any alternatives to fix this?) or am I doing something inefficient?
I suppose I could just read the wav header with a custom function and read the file in a different way, but it seems like this is going to be a LOT of work, especially since I don't know a lot about 1) Python and 2) file handling.
Kind regards,
K.
Edit: I tried unutbu's suggestion but that does not work as scipy does not accept >16 bit.
When I try to parse the wav file through the scipy wavreader I get this message:
C:\Users\King Broos\AppData\Local\Enthought\Canopy32\System\lib\site-packages\scipy\io\wavfile.py:31: WavFileWarning: Unfamiliar format bytes
warnings.warn("Unfamiliar format bytes", WavFileWarning)
C:\Users\King Broos\AppData\Local\Enthought\Canopy32\System\lib\site-packages\scipy\io\wavfile.py:121: WavFileWarning: chunk not understood
warnings.warn("chunk not understood", WavFileWarning)
Looking into the code of wavfile.py this is the line where it throws the exception:
if (comp != 1 or size > 16):
    warnings.warn("Unfamiliar format bytes", WavFileWarning)
I really need either 24 or 32 bit so I guess scipy not an option?
If you can install or have scipy, then use wavfile.read:
import scipy.io.wavfile as wavfile
sample_rate, x = wavfile.read(filename)
You might also want to study the source code, here.
Note that scipy.io.wavfile does not use Python's wave module. I'm not sure if it reads your IEEE_FLOAT format or not, but it does not do the same check as wave.py:
if wFormatTag == WAVE_FORMAT_PCM or wFormatTag == WAVE_FORMAT_IEEE_FLOAT:
    sampwidth = struct.unpack('<h', chunk.read(2))[0]
    self._sampwidth = (sampwidth + 7) // 8
else:
    #sampwidth = struct.unpack('<h', chunk.read(2))[0]
    #self._sampwidth = (sampwidth + 7) // 8
    raise Error, 'unknown format: %r' % (wFormatTag,)
so perhaps it will work out-of-the-box.
By the way, instead of making your own module, wave32.py which is almost exactly the same as wave.py from the standard library, you could use monkey-patching:
import wave
import struct

WAVE_FORMAT_IEEE_FLOAT = 0x0003

def _read_fmt_chunk(self, chunk):
    wFormatTag, self._nchannels, self._framerate, dwAvgBytesPerSec, wBlockAlign = struct.unpack('<hhllh', chunk.read(14))
    if wFormatTag == wave.WAVE_FORMAT_PCM or wFormatTag == WAVE_FORMAT_IEEE_FLOAT:
        sampwidth = struct.unpack('<h', chunk.read(2))[0]
        self._sampwidth = (sampwidth + 7) // 8
    else:
        # reuse the PCM constant and Error exception from the stdlib wave module
        raise wave.Error, 'unknown format: %r' % (wFormatTag,)
    self._framesize = self._nchannels * self._sampwidth
    self._comptype = 'NONE'
    self._compname = 'not compressed'

wave.Wave_read._read_fmt_chunk = _read_fmt_chunk
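With the patch in place, a hedged sketch of reading the whole file and decoding it in one numpy call instead of a per-sample struct.unpack loop (the file path and channel layout are the ones from the question; treat it as an illustration, not the only way):
import wave
import numpy as np

w = wave.open('C:/data/testData/6ch.wav', 'r')
nChannels, byteDepth, fs, nFrames = w.getparams()[:4]
frames = w.readframes(nFrames)
w.close()

# interpret the raw little-endian 32-bit float bytes in bulk
wavData = np.frombuffer(frames, dtype='<f4').reshape(nFrames, nChannels)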
You can also use numpy directly:
import numpy as np
fs = np.fromfile(filename, dtype=np.int32, count=1, offset=24)[0] # Hz
byte_length = np.fromfile(filename, dtype=np.int32, count=1, offset=40)[0]
to read individual pieces of metadata manually. I recommend using a hex editor and a wave format reference to verify the locations of the metadata fields and the offset to the start of the data chunk (it might not be 40 or 44 bytes in).
To read 32-bit WAVE_FORMAT_IEEE_FLOAT:
data = np.fromfile(filename, dtype=np.float32, count=byte_length // 4, offset=44)
To read 24-bit WAVE_FORMAT_PCM:
# prepend zero-byte to each sample (since there's no np.int24)
# then flatten, convert normally and byte-shift to correct for extra byte
data = np.zeros([byte_length // 3, 4], dtype=np.int8)
data[:, 1:] = np.fromfile(filename, dtype=np.int8, count=byte_length, offset=44).reshape(-1, 3)
data = np.right_shift(data.reshape(-1).view(dtype=np.int32), 8)
data = data / 2 ** 23 # if you want to normalize
This depends on the wave file and machine, but it seems to be ~120 times faster than a loop for a 4.4 MB 24-bit .wav file, and the gains are likely bigger for bigger files (until swap is required; I think ~5 memory copies are performed, including normalization).
This assumes:
No extra chunks at the start of the file, else offset= parameters are wrong
Single channel - reshape the array and/or change the byte order for multi-channel, with something like .reshape(num_channels, -1, order='F') (see the sketch below)
Little-endian I think
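As referenced in the single-channel assumption above, a hedged sketch of the multi-channel float case for the 6-channel 32-bit file from the question (the 44-byte offset assumes no extra chunks before the data chunk):
import numpy as np

filename = 'C:/data/testData/6ch.wav'
num_channels = 6

byte_length = np.fromfile(filename, dtype=np.int32, count=1, offset=40)[0]
raw = np.fromfile(filename, dtype='<f4', count=byte_length // 4, offset=44)

# samples are interleaved frame by frame: this gives a (channels, frames) array
data = raw.reshape(num_channels, -1, order='F')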
