I have two bytes objects.
One comes from using the wave module to read a "chunk" of data:
def get_wave_from_file(filename):
    import wave
    original_wave = wave.open(filename, 'rb')
    return original_wave
The other is built from MIDI information using a synthesizer module (fluidsynth):
def create_wave_from_midi_info(sound_font_path, notes):
    import fluidsynth
    import numpy as np
    s = []
    fl = fluidsynth.Synth()
    sfid = fl.sfload(sound_font_path)  # Loads a soundfont
    fl.program_select(track=0, soundfontid=sfid, banknum=0, presetnum=0)  # Selects the soundfont
    for n in notes:
        fl.noteon(0, n['midi_num'], n['velocity'])
        s = np.append(s, fl.get_samples(int(44100 * n['duration'])))  # Gives the note the correct duration, based on a sample rate of 44.1 kHz
        fl.noteoff(0, n['midi_num'])
    fl.delete()
    samps = fluidsynth.raw_audio_string(s)
    return samps
The two files are of different length.
I want to combine the two waves, so that both are heard simultaneously.
Specifically, I would like to do this "one chunk at a time".
Here is my setup:
def get_a_chunk_from_each(wave_object, bytes_from_midi, chunk_size=1024, starting_sample=0):
    from_wav_data = wave_object.readframes(chunk_size)
    from_midi_data = bytes_from_midi[starting_sample:starting_sample + chunk_size]
    return from_wav_data, from_midi_data
Info about the return values from get_a_chunk_from_each():
>>> type(from_wav_data), type(from_midi_data)
(<class 'bytes'>, <class 'bytes'>)
>>> len(from_wav_data), len(from_midi_data)
(4096, 1024)
Firstly, I'm confused as to why the lengths are different (the chunk generated by wave_object.readframes(1024) is exactly 4 times longer than the one generated by manually slicing bytes_from_midi[0:1024]). This may be part of the reason I have been unsuccessful.
Secondly, I want to create the function which combines the two chunks. The following "pseudocode" illustrates what I want to happen:
def combine_chunks(chunk1, chunk2):
    mixed = chunk1 + chunk2
    # OR, probably more like:
    mixed = (chunk1 + chunk2) / 2
    # to prevent clipping?
    return mixed
It turns out there is a very, very simple solution.
I simply used the audioop library:
https://docs.python.org/3/library/audioop.html
and used its add() function ("width" is the sample width in bytes; since this is 16-bit audio, that's 16 / 8 = 2 bytes):
audioop.add(chunk1, chunk2, width=2)
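For reference, a minimal sketch of how the chunk-by-chunk mixing loop could look (my own illustration, not the original code; mix_streams and its parameters are made up, and it assumes both sources are 16-bit PCM with the same channel count). It also explains the length difference above: wave.readframes(n) returns n frames, i.e. n * channels * sampwidth bytes, which is why 1024 frames came back as 4096 bytes. Note that audioop was removed from the standard library in Python 3.13.

import audioop

def mix_streams(wave_object, bytes_from_midi, chunk_frames=1024):
    width = wave_object.getsampwidth()      # 2 bytes for 16-bit audio
    channels = wave_object.getnchannels()
    chunk_bytes = chunk_frames * width * channels
    pos = 0
    mixed_chunks = []
    while True:
        wav_chunk = wave_object.readframes(chunk_frames)
        midi_chunk = bytes_from_midi[pos:pos + chunk_bytes]
        pos += chunk_bytes
        if not wav_chunk and not midi_chunk:
            break
        # audioop.add() needs fragments of equal length, so pad the
        # shorter one with silence (zero bytes).
        size = max(len(wav_chunk), len(midi_chunk))
        wav_chunk = wav_chunk.ljust(size, b'\x00')
        midi_chunk = midi_chunk.ljust(size, b'\x00')
        mixed_chunks.append(audioop.add(wav_chunk, midi_chunk, width))
    return b''.join(mixed_chunks)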
I am working on a Python package for converting baking recipes. Ideally, a recipe is simply stored as a CSV file read in by the package. Given that a recipe can use imperial or metric units of measurement, I am trying to internally convert any set of measurement units to metric for simplicity.
The main problem I am trying to solve is a lightweight way to store a lot of conversions and ratios, given the variety of names a measurement unit can go by.
For example, if a recipe has "tsp", I would want to classify it in the teaspoon family, which would consist of ['tsp', 'tsps', 'teaspoon', 'teaspoons'], and have them all use the TSP_TO_METRIC conversion ratio.
Initially, I started with a list of lists, but I feel like there may be a more elegant way to store and access these items. I was thinking of a dictionary or some sort of JSON file to read in, but I'm unsure where the line is between needing an external file versus a long file of constants. I will continue to expand the conversions as different ingredients are added, so I am also looking for an easy way to scale.
Here is an example of the data conversions I am attempting to store. Then I use a series of if-else checks coupled with any(unit in sublist for sublist in VOLUME_NAMES) to check the lists of lists.
TSP_TO_METRIC = 5
TBSP_TO_METRIC = 15
OZ_TO_METRIC = 28.35
CUP_TO_METRIC = 8 * OZ_TO_METRIC
PINT_TO_METRIC = 2 * CUP_TO_METRIC
QUART_TO_METRIC = 4 * CUP_TO_METRIC
GALLON_TO_METRIC = 16 * CUP_TO_METRIC
LB_TO_METRIC = 16 * OZ_TO_METRIC
STICK_TO_METRIC = 8 * TBSP_TO_METRIC
TSP_NAMES = ['TSP', 'TSPS', 'TEASPOON', 'TEASPOONS']
TBSP_NAMES = ['TBSP', 'TBSPS', 'TABLESPOON', 'TABLESPOONS']
CUP_NAMES = ['CUP', 'CUPS']
LB_NAMES = ['LB', 'LBS', 'POUND', 'POUNDS']
OZ_NAMES = ['OZ', 'OUNCE', 'OUNCES']
BUTTER_NAMES = ['STICK', 'STICKS']
EGG_NAMES = ['CT', 'COUNT']
GALLON_NAMES = ['GAL', 'GALLON', 'GALLONS']
VOLUME_NAMES = [TSP_NAMES, TBSP_NAMES, CUP_NAMES, GALLON_NAMES]
WEIGHT_NAMES = [LB_NAMES, OZ_NAMES]
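A single dictionary built from those same lists keeps everything in code (no external file needed yet) and turns unit lookup into one access. A minimal sketch along those lines (the names _FAMILIES, UNIT_TO_METRIC and to_metric are illustrative, not existing package code):

TSP_TO_METRIC = 5
TBSP_TO_METRIC = 15
OZ_TO_METRIC = 28.35
CUP_TO_METRIC = 8 * OZ_TO_METRIC

# One (aliases, factor) pair per unit family; extend this list as new
# ingredients and units are added.
_FAMILIES = [
    (['TSP', 'TSPS', 'TEASPOON', 'TEASPOONS'], TSP_TO_METRIC),
    (['TBSP', 'TBSPS', 'TABLESPOON', 'TABLESPOONS'], TBSP_TO_METRIC),
    (['OZ', 'OUNCE', 'OUNCES'], OZ_TO_METRIC),
    (['CUP', 'CUPS'], CUP_TO_METRIC),
]

# Flatten into a single alias -> factor lookup.
UNIT_TO_METRIC = {alias: factor
                  for aliases, factor in _FAMILIES
                  for alias in aliases}

def to_metric(amount, unit):
    """Convert an amount in the given unit to its metric equivalent."""
    try:
        return amount * UNIT_TO_METRIC[unit.strip().upper()]
    except KeyError:
        raise ValueError("Unknown unit: {!r}".format(unit))

print(to_metric(2, 'tbsp'))  # 30
print(to_metric(1, 'cups'))  # about 226.8

This keeps the if-else chains out of the conversion path; whether the _FAMILIES table eventually lives in a JSON/CSV file or stays as constants is then just a question of where that one list is loaded from.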
I am attempting to read a binary file using Python. Someone else has read in the data with R using the following code:
x <- readBin(webpage, numeric(), n=6e8, size = 4, endian = "little")
myPoints <- data.frame("tmax" = x[1:(length(x)/4)],
"nmax" = x[(length(x)/4 + 1):(2*(length(x)/4))],
"tmin" = x[(2*length(x)/4 + 1):(3*(length(x)/4))],
"nmin" = x[(3*length(x)/4 + 1):(length(x))])
With Python, I am trying the following code:
import struct

with open('file', 'rb') as f:
    val = f.read(16)
    while val != b'':
        print(struct.unpack('4f', val))
        val = f.read(16)
I am getting slightly different results. For example, the first row in R returns four columns as -999.9, 0, -999.0, 0, whereas Python returns -999.0 for all four columns.
I know that they are slicing by the length of the file with some of the [] code, but I do not know how exactly to do this in Python, nor do I understand quite why they do this. Basically, I want to recreate what R is doing in Python.
I can provide more of either code base if needed. I did not want to overwhelm with code that was not necessary.
Deducing from the R code, the binary file first contains a certain number of tmax's, then the same number of nmax's, tmin's and nmin's. What the code does is read the entire file, which is then chopped up into the 4 parts (tmax's, nmax's, etc.) using slicing.
To do the same in Python:
import struct
# Read entire file into memory first. This is done so we can count
# number of bytes before parsing the bytes. It is not a very memory
# efficient way, but it's the easiest. The R-code as posted wastes even
# more memory: it always takes 6e8 * 4 bytes (~ 2.2Gb) of memory no
# matter how small the file may be.
#
data = open('data.bin','rb').read()
# Calculate number of points in the file. This is
# file-size / 16, because there are 4 numeric()'s per
# point, and they are 4 bytes each.
#
num = int(len(data) / 16)
# Now we know how much there are, we take all tmax numbers first, then
# all nmax's, tmin's and lastly all nmin's.
# First generate a format string, because it depends on the number of
# points in the file. It will look like: "<fffff" (the '<' forces
# little-endian, matching the R code's endian = "little").
#
format_string = '<' + 'f' * num
# Then, for cleaner code, calculate chunk size of the bytes we need to
# slice off each time.
#
n = num * 4 # 4-byte floats
# Note that python has different interpretation of slicing indices
# than R, so no "+1" is needed here as it is in the R code.
#
tmax = struct.unpack(format_string, data[:n])
nmax = struct.unpack(format_string, data[n:2*n])
tmin = struct.unpack(format_string, data[2*n:3*n])
nmin = struct.unpack(format_string, data[3*n:])
print("tmax", tmax)
print("nmax", nmax)
print("tmin", tmin)
print("nmin", nmin)
If the goal is to have this data structured as a list of points(?) like (tmax,nmax,tmin,nmin), then append this to the code:
print()
print("Points:")
# Combine ("zip") all 4 lists into a list of (tmax,nmax,tmin,nmin) points.
# Python has a function to do this at once: zip()
#
for i, point in enumerate(zip(tmax, nmax, tmin, nmin)):
    print(i, ":", point)
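As an aside, if numpy is available, the same split can be written more compactly (a sketch under the assumption that the whole file fits in memory and holds exactly four equal blocks of little-endian 4-byte floats):

import numpy as np

# '<f4' = little-endian 4-byte float, matching readBin(..., size = 4, endian = "little").
x = np.fromfile('data.bin', dtype='<f4')
num = x.size // 4
tmax, nmax, tmin, nmin = x.reshape(4, num)

# Each row of `points` is then one (tmax, nmax, tmin, nmin) point.
points = np.column_stack((tmax, nmax, tmin, nmin))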
Here's a less memory-hungry way to do the same. It is possibly a bit faster too (though that is difficult for me to check).
My computer did not have sufficient memory to run the first program with those huge files. This one does, but I still needed to create a list of only the tmax's first (the first 1/4 of the file), then print it, and then delete the list in order to have enough memory for the nmax's, tmin's and nmin's.
But this one too says the nmin's inside the 2018 file are all -999.0. If that doesn't make sense, could you check what the R code makes of it? I suspect that it is just what's in the file. The other possibility is, of course, that I got it all wrong (which I doubt). However, I tried the 2017 file too, and that one does not have this problem: all of tmax, nmax, tmin and nmin have around 37% -999.0's.
Anyway, here's the second code:
import os
import struct

# load_data()
#   data_store : object to append() data items (floats) to
#   num        : number of floats to read and store
#   datafile   : opened binary file object to read float data from
#
def load_data(data_store, num, datafile):
    for i in range(num):
        data = datafile.read(4)              # process one float (= 4 bytes) at a time
        item = struct.unpack("<f", data)[0]  # '<' means little-endian
        data_store.append(item)

# save_list() saves a list of floats as strings to a file
#
def save_list(filename, datalist):
    output = open(filename, "wt")
    for item in datalist:
        output.write(str(item) + '\n')
    output.close()
#### MAIN ####
datafile = open('data.bin','rb')
# Get file size so we can calculate number of points without reading
# the (large) file entirely into memory.
#
file_info = os.stat(datafile.fileno())
# Calculate number of points, i.e. number of each tmax's, nmax's,
# tmin's, nmin's. A point is 4 floats of 4 bytes each, hence number
# of points = file-size / (4*4)
#
num = int(file_info.st_size / 16)
tmax_list = list()
load_data(tmax_list, num, datafile)
save_list("tmax.txt", tmax_list)
del tmax_list # huge list, save memory
nmax_list = list()
load_data(nmax_list, num, datafile)
save_list("nmax.txt", nmax_list)
del nmax_list # huge list, save memory
tmin_list = list()
load_data(tmin_list, num, datafile)
save_list("tmin.txt", tmin_list)
del tmin_list # huge list, save memory
nmin_list = list()
load_data(nmin_list, num, datafile)
save_list("nmin.txt", nmin_list)
del nmin_list # huge list, save memory
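If the one-float-per-read() calls turn out to be slow, a chunked variant of load_data could trade a little memory for speed. This is only a sketch (Python 3.4+ for struct.iter_unpack, and not tested against the original files):

import struct

def load_data_chunked(data_store, num, datafile, chunk_floats=65536):
    # Read many little-endian 4-byte floats per call instead of one at a time.
    remaining = num
    while remaining > 0:
        count = min(chunk_floats, remaining)
        buf = datafile.read(4 * count)
        data_store.extend(v[0] for v in struct.iter_unpack('<f', buf))
        remaining -= count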
I need to process over 10 million spectroscopic data sets. The data is structured like this: there are around 1000 .fits files (.fits is a data storage format), each containing around 600-1000 spectra, with around 4500 elements in each spectrum (so each file yields a roughly 1000 * 4500 matrix). That means each spectrum is going to be read around 10 times (or each file around 10,000 times) if I loop over the 10 million entries. Although the same spectrum is read around 10 times, the reads are not duplicates, because each time I extract a different segment of the same spectrum. With the help of @Paul Panzer, I already avoid reading the same file multiple times.
I have a catalog file which contains all the information I need, like the coordinates x, y, the radius r, the strength s, etc. The catalog also contains the information to target which file I am going to read (identified by n1, n2) and which spectra in that file I am going to use (identified by n3).
The code I have now is:
import numpy as np
from itertools import izip
import itertools
import fitsio

x = []
y = []
r = []
s = []
n1 = []
n2 = []
n3 = []
with open('spectra_ID.dat') as file_ID, open('catalog.txt') as file_c:
    for line1, line2 in izip(file_ID, file_c):
        parts1 = line1.split()
        parts2 = line2.split()
        n1.append(int(parts1[0]))
        n2.append(int(parts1[1]))
        n3.append(int(parts1[2]))
        x.append(float(parts2[0]))
        y.append(float(parts2[1]))
        r.append(float(parts2[2]))
        s.append(float(parts2[3]))

# convert the index columns to arrays so that the fancy indexing (n1[idx]) below works
n1, n2, n3 = np.array(n1), np.array(n2), np.array(n3)

def data_analysis(n_galaxies):
    n_num = 0
    data = np.zeros((n_galaxies), dtype=[('spec', 'f4', (200)), ('x', 'f8'), ('y', 'f8'), ('r', 'f8'), ('s', 'f8')])
    idx = np.lexsort((n3, n2, n1))
    for kk, gg in itertools.groupby(zip(idx, n1[idx], n2[idx]), lambda x: x[1:]):
        filename = "../../data/" + str(kk[0]) + "/spPlate-" + str(kk[0]) + "-" + str(kk[1]) + ".fits"
        fits_spectra = fitsio.FITS(filename)
        fluxx = fits_spectra[0].read()
        n_element = fluxx.shape[1]
        hdu = fits_spectra[0].read_header()
        wave_start = hdu['CRVAL1']
        logwave = wave_start + 0.0001 * np.arange(n_element)
        wavegrid = np.power(10, logwave)
        for ss, plate1, mjd1 in gg:
            if n_num % 1000000 == 0:
                print n_num
            n3new = n3[ss] - 1
            flux = fluxx[n3new]
            ### following is my data reduction of individual spectra, I will skip it here
            ### After all my analysis, I have the data stored as below:
            data['spec'][n_num] = flux_intplt
            data['x'][n_num] = x[ss]
            data['y'][n_num] = y[ss]
            data['r'][n_num] = r[ss]
            data['s'][n_num] = s[ss]
            n_num += 1
    print n_num
    data_output = fitsio.FITS('./analyzedDATA/data_ALL.fits', 'rw')
    data_output.write(data)
I kind of understand that multiprocessing needs to remove one loop and pass the index to the function. However, there are two loops in my function and they are highly correlated, so I do not know how to approach this. Since the most time-consuming part of this code is reading files from disk, the multiprocessing needs to take full advantage of the cores to read multiple files at a time. Could anyone shed some light on this for me?
Get rid of global vars; you can't use global vars with processes.
Merge your multiple global vars into one container class or dict, assigning different segments of the same spectra to one data set.
Move your global with open(... into a def ...
Separate data_output into its own def ...
Try first, without multiprocessing, this concept:
for line1, line2 in izip(file_ID, file_c):
    data_set = create data set from (line1, line2)
    result = data_analysis(data_set)
    data_output.write(result)
Consider using 2 processes: one for file reading and one for file writing.
Use multiprocessing.Pool(processes=n) for data_analysis.
Communicate between processes using multiprocessing.Manager().Queue(); a rough sketch of this layout follows below.
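A minimal sketch of that layout (illustrative only: read_and_reduce, the task list and the text output are placeholders, not the original data_analysis or FITS writing):

import multiprocessing as mp

def read_and_reduce(task):
    # Placeholder: open one .fits file, extract the requested spectra
    # segments for that file, and return the reduced rows.
    filename, rows = task
    return [(filename, row) for row in rows]   # stand-in for the real reduction

def writer(queue, out_path):
    # A single process owns the output file; workers never write directly.
    with open(out_path, 'w') as out:
        while True:
            item = queue.get()
            if item is None:          # sentinel: all work is done
                break
            out.write(repr(item) + '\n')

if __name__ == '__main__':
    tasks = [('spPlate-1-1.fits', [1, 2, 3]),   # one task per file
             ('spPlate-1-2.fits', [4, 5])]

    manager = mp.Manager()
    queue = manager.Queue()
    writer_proc = mp.Process(target=writer, args=(queue, 'data_ALL.txt'))
    writer_proc.start()

    with mp.Pool(processes=4) as pool:
        for result in pool.imap_unordered(read_and_reduce, tasks):
            queue.put(result)

    queue.put(None)   # tell the writer to finish
    writer_proc.join()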
I am currently working on processing .wav files with Python, using PyAudio for streaming the audio and the wave library for loading the file data.
I plan to later include processing of the individual stereo channels, with regard to the amplitude of the signal and the panning of the stereo signal, but for now I'm just trying to separate the two channels of the wave file and stitch them back together, hopefully ending up with data that is identical to the input data.
The getRawSample method works perfectly fine, and I can stream audio through that function.
The problem is my getSample method. Somewhere along the line, where I'm separating the two channels of audio and joining them back together, the audio gets distorted. I have even commented out the part where I do amplitude and panning adjustment, so in theory it's data in -> data out.
Below is an example of my code:
import struct
import threading
import wave

import numpy as np


class Sample(threading.Thread):
    def __init__(self, filepath, chunk):
        super(Sample, self).__init__()
        self.CHUNK = chunk
        self.filepath = filepath
        self.wave = wave.open(self.filepath, 'rb')
        self.amp = 0.5  # varies from 0 to 1
        self.pan = 0    # varies from -pi to pi
        self.WIDTH = self.wave.getsampwidth()
        self.CHANNELS = self.wave.getnchannels()
        self.RATE = self.wave.getframerate()
        self.MAXFRAMEFEEDS = self.wave.getnframes() / self.CHUNK  # maximum even number of chunks
        self.unpstr = '<{0}h'.format(self.CHUNK * self.WIDTH)  # format for unpacking the sample byte string
        self.pckstr = '<{0}h'.format(self.CHUNK * self.WIDTH)  # format for packing the sample byte string
        self.framePos = 0  # keeps track of how many chunks of data have been fed

    # panning and amplitude adjustment of input sample data
    def panAmp(self, data, panVal, ampVal):  # when panning, using constant power panning
        [left, right] = self.getChannels(data)
        #left = np.multiply(0.5, left)   # (np.sqrt(2)/2)*(np.cos(panVal) + np.sin(panVal))
        #right = np.multiply(0.5, right) # (np.sqrt(2)/2)*(np.cos(panVal) - np.sin(panVal))
        outputList = self.combineChannels(left, right)
        dataResult = struct.pack(self.pckstr, *outputList)
        return dataResult

    def getChannels(self, data):
        dataPrepare = list(struct.unpack(self.unpstr, data))
        left = dataPrepare[0::self.CHANNELS]
        right = dataPrepare[1::self.CHANNELS]
        return [left, right]

    def combineChannels(self, left, right):
        stereoData = left
        for i in range(0, self.CHUNK / self.WIDTH):
            index = i * 2 + 1
            stereoData = np.insert(stereoData, index, right[i * self.WIDTH:(i + 1) * self.WIDTH])
        return stereoData

    def getSample(self, panVal, ampVal):
        data = self.wave.readframes(self.CHUNK)
        self.framePos += 1
        if self.framePos > self.MAXFRAMEFEEDS:  # if no more audio samples to process
            self.wave.rewind()
            data = self.wave.readframes(self.CHUNK)
            self.framePos = 1
        return self.panAmp(data, panVal, ampVal)

    def getRawSample(self):  # for debugging, bypasses pan and amp functions
        data = self.wave.readframes(self.CHUNK)
        self.framePos += 1
        if self.framePos > self.MAXFRAMEFEEDS:  # if no more audio samples to process
            self.wave.rewind()
            data = self.wave.readframes(self.CHUNK)
            self.framePos = 1
        return data
I suspect that the error is in the way I stitch together the left and right channels, but I'm not sure.
I load the project with 16-bit, 44.1 kHz .wav files.
Below is a link to an audio file so that you can hear the resulting audio output.
The first part is running two files (both two channel) through the getSample method, while the next part is running those same files, through the getRawSample method.
https://dl.dropboxusercontent.com/u/24215404/pythonaudiosample.wav
Based on the audio, as said earlier, it seems like the stereo file gets distorted. Looking at the waveform of the above file, it seems as though the right and left channels are exactly the same after going through the getSample method.
If needed, I can also post my code including the main function.
Hopefully my question isn't too vague, but I am grateful for any help or input!
As so often happens, I slept on it and woke up the next day with a solution.
The problem was in the combineChannels function.
Following is the working code:
def combineChannels(self, left, right):
    stereoData = left
    for i in range(0, self.CHUNK):
        index = i * 2 + 1
        stereoData = np.insert(stereoData, index, right[i:(i + 1)])
    return stereoData
The changes are:
For loop bounds: as I have 1024 items (the same as my chunk size) in the lists left and right, I of course need to iterate through every one of them.
index: the index definition remains the same.
stereoData: Again, here I remember that I'm working with lists, each element containing a frame of audio. The code in the question assumed that my list was stored as a byte string, but that is of course not the case. And as you can see, the resulting code is much simpler.
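For what it's worth, the interleaving can also be done in one step with numpy, which avoids the repeated np.insert calls. A small sketch (my own variant, assuming left and right are equal-length sequences of samples):

import numpy as np

def combine_channels(left, right):
    # Stack the channels as columns, then flatten row by row:
    # L0, R0, L1, R1, ... which is the interleaved frame order of a stereo stream.
    return np.column_stack((left, right)).ravel()

print(combine_channels([1, 2, 3, 4], [10, 20, 30, 40]))
# [ 1 10  2 20  3 30  4 40]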
How do I get the actual file size on disk in Python? (The actual size it takes up on the hard drive.)
UNIX only:
import os
from collections import namedtuple

_ntuple_diskusage = namedtuple('usage', 'total used free')

def disk_usage(path):
    """Return disk usage statistics about the given path.

    Returned value is a named tuple with attributes 'total', 'used' and
    'free', which are the amount of total, used and free space, in bytes.
    """
    st = os.statvfs(path)
    free = st.f_bavail * st.f_frsize
    total = st.f_blocks * st.f_frsize
    used = (st.f_blocks - st.f_bfree) * st.f_frsize
    return _ntuple_diskusage(total, used, free)
Usage:
>>> disk_usage('/')
usage(total=21378641920, used=7650934784, free=12641718272)
>>>
Edit 1 - also for Windows: https://code.activestate.com/recipes/577972-disk-usage/?in=user-4178764
Edit 2 - this is also available in Python 3.3+: https://docs.python.org/3/library/shutil.html#shutil.disk_usage
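For example, shutil.disk_usage returns the same three totals as a named tuple:

import shutil

usage = shutil.disk_usage('/')
print(usage.total, usage.used, usage.free)  # sizes in bytes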
Here is the correct way to get a file's size on disk, on platforms where st_blocks is set:
import os

def size_on_disk(path):
    st = os.stat(path)
    return st.st_blocks * 512
Other answers that indicate to multiply by os.stat(path).st_blksize or os.statvfs(path).f_bsize are simply incorrect.
The Python documentation for os.stat_result.st_blocks very clearly states:
st_blocks
Number of 512-byte blocks allocated for file. This may be smaller than st_size/512 when the file has holes.
Furthermore, the stat(2) man page says the same thing:
blkcnt_t st_blocks; /* Number of 512B blocks allocated */
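To see what st_blocks captures that st_size does not, a small sparse-file experiment can help (Unix only, and assuming the filesystem supports holes; the file name is just an example):

import os

# Create a 1 MiB sparse file: truncate() extends the file without
# allocating data blocks on most Unix filesystems.
with open('sparse.bin', 'wb') as f:
    f.truncate(1024 * 1024)

st = os.stat('sparse.bin')
print('logical size:', st.st_size)           # 1048576
print('size on disk:', st.st_blocks * 512)   # typically 0 for a fully sparse file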
Update 2021-03-26: Previously, my answer rounded the logical size of the file up to an integer multiple of the block size. This approach only works if the file is stored in a continuous sequence of blocks on disk (or if all the blocks are full except for one). Since this is a special case (though common for small files), I have updated my answer to make it more generally correct. However, note that unfortunately the statvfs method and the st_blocks value may not be available on some systems (e.g., Windows 10).
Call os.stat(filename).st_blocks to get the number of blocks in the file.
Call os.statvfs(filename).f_bsize to get the filesystem block size.
Then compute the correct size on disk, as follows:
num_blocks = os.stat(filename).st_blocks
block_size = os.statvfs(filename).f_bsize
sizeOnDisk = num_blocks*block_size
st = os.stat(…)
du = st.st_blocks * st.st_blksize
Practically 12 years and no answer on how to do this in Windows...
Here's how to find the 'Size on disk' in Windows via ctypes:
import ctypes

def GetSizeOnDisk(path):
    '''https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getcompressedfilesizew'''
    filesizehigh = ctypes.c_ulonglong(0)  # receives the high-order part of the size, for files > 4 GB
    return ctypes.windll.kernel32.GetCompressedFileSizeW(ctypes.c_wchar_p(path), ctypes.pointer(filesizehigh))

'''
>>> os.stat(somecompressedorofflinefile).st_size
943141
>>> GetSizeOnDisk(somecompressedorofflinefile)
671744
>>>
'''
I'm not certain if this is size on disk, or the logical size:
import os
filename = "/home/tzhx/stuff.wev"
size = os.path.getsize(filename)
If it's not the droid you're looking for, you can round it up by dividing by the cluster size (as a float), then using ceil, then multiplying.
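A sketch of that rounding (here os.statvfs().f_frsize stands in for the cluster size, which makes it Unix-only; as noted in another answer, this only matches the real on-disk size for files stored without holes or compression):

import math
import os

filename = "/home/tzhx/stuff.wev"
logical_size = os.path.getsize(filename)

# Fragment ("cluster") size of the filesystem the file lives on.
cluster_size = os.statvfs(filename).f_frsize

# Round the logical size up to a whole number of clusters.
size_on_disk = int(math.ceil(logical_size / float(cluster_size))) * cluster_size
print(size_on_disk)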
To get the disk usage for a given file/folder, you can do the following:
import os

def disk_usage(path):
    """Return cumulative number of bytes for a given path."""
    # get total usage of current path
    total = os.path.getsize(path)
    # if path is a dir, collect children
    if os.path.isdir(path):
        for file_name in os.listdir(path):
            child = os.path.join(path, file_name)
            # recursively get byte use for the children
            total += disk_usage(child)
    return total
The function recursively collects byte usage for files nested within a given path, and returns the cumulative use for the entire path.
You could also add a print("{}: {}".format(path, total)) in there if you want the information for each path to be printed.
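A quick usage sketch (the paths are just examples):

print(disk_usage('/home/tzhx'))            # cumulative bytes under the directory
print(disk_usage('/home/tzhx/stuff.wev'))  # a single file's logical size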