Problem Finding Differences Between Objects - Python

I'm a self-taught Python student in the early stages of learning. I've decided to write code that does some reverse engineering on Adobe Premiere's PRPROJ files. These files are gzipped XML, and I've managed to parse them, extract most of the attributes I wanted, and store them in objects. I've since realized that what I'm trying to do amounts to an open-source API for reading PRPROJ files. I might learn something along the way and still have a lot to learn, so thanks for your patience.
Now, while trying to find the inner timecodes of audio clips in timelines, I couldn't find the right criteria to distinguish between them in the XML. Premiere seems to know the difference between them, but I can't find it.
My first hypothesis was that the difference was about the audio having an embedded timecode or not. My second was file formats (WAV, AIFF, MP3, etc.), and now I'm drawing a total blank.
Audio clips in timelines are the result of different XML objects combined in this way:
[Image: Object structure in XML]
So I've made a Premiere project containing different kinds of audio clips and tried different code to actually retrieve the in and out points for each audio selection.
[Image: Representation of timeline and clips' in and out points]
I've managed to successfully retrieve 3 of the 4 in/out point pairs, by adding the XML objects' inPoint + MediaInPoint and outPoint + MediaOutPoint, each divided by the corresponding MediaFrameRate attribute (for some reason still unknown to me, some kinds use the clip's MediaFrameRate attributes, while others use the ProjectSettings' frame rate).
**Sample 1 (.mov's audio coming from a video file)**
<MediaInPoint>0</MediaInPoint>
<MediaOutPoint>23843635200000</MediaOutPoint>
<MediaFrameRate>4233600000</MediaFrameRate>
And it was successfully calculated this way:
cframerate = int(clip_loggings.find('MediaFrameRate').text)
cinpoint = int(clip_loggings.find('MediaInPoint').text)
cinpoint = round(cinpoint / cframerate)  # convert ticks to frames
clip_timecode_in = round(clip_timecode_in / cframerate) + cinpoint
clip_timecode_out = round((clip_timecode_out - 1) / cframerate) + cinpoint - 1
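As a sanity check on the Sample 1 arithmetic (assuming Premiere's documented tick rate of 254016000000 ticks per second): the MediaFrameRate of 4233600000 ticks works out to exactly 60 fps, and MediaOutPoint divided by MediaFrameRate to exactly 5632 frames:

```python
TICKS_PER_SECOND = 254016000000  # Premiere's tick rate (from Adobe's scripting docs)

media_outpoint = 23843635200000  # Sample 1 <MediaOutPoint>
cframerate = 4233600000          # Sample 1 <MediaFrameRate>, in ticks per frame

print(media_outpoint // cframerate)    # 5632 frames
print(TICKS_PER_SECOND // cframerate)  # 60 fps
```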
**Sample 2 (MP3 file; this particular file sourced its timecode from the project's settings)**
<MediaInPoint>0</MediaInPoint>
<MediaOutPoint>52835328000000</MediaOutPoint>
<MediaFrameRate>5760000</MediaFrameRate>
VideoSettings' <FrameRate>8475667200</FrameRate>
proj_fr_ref = root.find('ProjectSettings/VideoSettings').get('ObjectRef')
cinpoint = int(clip_loggings.find('MediaInPoint').text)
cinpoint = round(cinpoint / cframerate)
for proj_frs in root.findall('VideoSettings'):
    if proj_frs.get('ObjectID') == proj_fr_ref:
        if clip_speed > 0:
            cframerate = int(proj_frs.find('FrameRate').text)
clip_timecode_in = round(clip_timecode_in / cframerate) + cinpoint  # seems to be linked to the project's VideoSettings FrameRate. Why???
clip_timecode_out = round((clip_timecode_out - 1) / cframerate) + cinpoint - 1
**Sample 3 (WAV file; '2000' is a factor I found empirically)**
<MediaInPoint>472035821292000</MediaInPoint>
<MediaOutPoint>8715173208084000</MediaOutPoint>
<MediaFrameRate>5292000</MediaFrameRate>
<TimecodeFormat>200</TimecodeFormat>
cframerate = int(clip_loggings.find('MediaFrameRate').text)
cinpoint = int(clip_loggings.find('MediaInPoint').text)
cinpoint = round(cinpoint / cframerate / 2000)
clip_timecode_in = round(clip_timecode_in / cframerate / 2000) + cinpoint
clip_timecode_out = round((clip_timecode_out - 1) / cframerate / 2000) + cinpoint - 1
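A possible origin for the empirical 2000 (only a guess, assuming this WAV is 48 kHz in a 24 fps timeline): with Premiere's tick rate of 254016000000 ticks per second, Sample 3's MediaFrameRate of 5292000 ticks is exactly one 48 kHz audio sample, and 48000 / 24 = 2000 samples per video frame:

```python
TICKS_PER_SECOND = 254016000000  # Premiere's tick rate

media_framerate = 5292000  # Sample 3 <MediaFrameRate>, ticks per audio sample
sample_rate = TICKS_PER_SECOND // media_framerate
print(sample_rate)        # 48000 Hz
print(sample_rate // 24)  # 2000 samples per video frame at 24 fps
```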
**Sample 4 (criteria pending)**
<CaptureMode>1</CaptureMode>
<ClipName>Sample 4.wav</ClipName>
<MediaInPoint>0</MediaInPoint>
<MediaOutPoint>186823687680000</MediaOutPoint>
<MediaFrameRate>5292000</MediaFrameRate>
<TimecodeFormat>200</TimecodeFormat>
The last one, the second WAV (Sample 4), I couldn't make work, which made me realize my criteria were wrong. What could it be?
Please help me! All the needed information is uploaded to Google Drive here: https://drive.google.com/drive/folders/1zbK42WFh4SN-8-ppo7QMSkTsXlZEv9MB

Related

Problem getting old series data (Python for Finance)

I've converted this formula (ZLEMA moving average), but I have many issues with "Data(Lag Days Ago)": it seems it can't go back to find the result. Here's the function, but unfortunately it doesn't produce the desired result.
def fzlema(source, period):
    zxLag = period / 2 if (period / 2) == np.round(period / 2) else (period - 1) / 2
    zxLag = int(zxLag)
    zxEMAData = source + (source - source.iloc[zxLag])  # probably the error is in this line
    zlema = zxEMAData.ewm(span=period, adjust=False).mean()
    zlema = np.round(zlema, 2)
    return zlema

zlema = fzlema(dataframe['close'], 50)
To be clear, the script runs without errors, but the result doesn't match what TradingView calculates. I tried using iloc[..] and tail(..), but neither returns exact results.
I can use the libraries pandas and numpy.
Any ideas?
SOLVED:
Simply use source.shift(zxLag) instead of source.iloc[zxLag].
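Putting the fix together, a corrected version of the function (the price series below is made-up sample data for illustration):

```python
import numpy as np
import pandas as pd

def fzlema(source, period):
    # Zero-lag EMA: de-lag the input by adding the difference between the
    # current value and the value `zxLag` bars ago, then take a plain EMA.
    zxLag = period / 2 if (period / 2) == np.round(period / 2) else (period - 1) / 2
    zxLag = int(zxLag)
    zxEMAData = source + (source - source.shift(zxLag))  # shift(), not iloc[]
    return np.round(zxEMAData.ewm(span=period, adjust=False).mean(), 2)

prices = pd.Series([10.0, 10.5, 11.0, 10.8, 11.2, 11.5, 11.3, 11.8])
zlema = fzlema(prices, 4)
```

The first zxLag values come out as NaN, since shift() has no earlier data to look back to there.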

What would be a recommended data structure for a set of baking conversions?

I am working on a Python package for converting baking recipes. Ideally, the recipe is simply stored as a CSV file read in by the package. Given that a recipe can use imperial or metric units of measurement, I am trying to internally convert any set of measurement units to metric for simplicity.
The main question I am trying to solve is a lightweight way to store a lot of conversions and ratios, given the variety of names a measurement unit can have.
For example, if a recipe has "tsp", I would want to classify it in the teaspoon family, which would consist of ['tsp', 'tsps', 'teaspoon', 'teaspoons'], and have them all use the TSP_TO_METRIC conversion ratio.
Initially I started with a list of lists, but I feel there may be a more elegant way to store and access these items. I was thinking of a dictionary or some sort of JSON file to read in, but I'm unsure where the line is between needing an external file versus a long file of constants. I will continue to expand the conversions as different ingredients are added, so I am also looking for an easy way to scale.
Here is an example of the conversion data I am attempting to store. Then I use a series of if/else checks coupled with any(unit in sublist for sublist in VOLUME_NAMES): to search the lists of lists.
TSP_TO_METRIC = 5
TBSP_TO_METRIC = 15
OZ_TO_METRIC = 28.35
CUP_TO_METRIC = 8 * OZ_TO_METRIC
PINT_TO_METRIC = 2 * CUP_TO_METRIC
QUART_TO_METRIC = 4 * CUP_TO_METRIC
GALLON_TO_METRIC = 16 * CUP_TO_METRIC
LB_TO_METRIC = 16 * OZ_TO_METRIC
STICK_TO_METRIC = 8 * TBSP_TO_METRIC
TSP_NAMES = ['TSP', 'TSPS', 'TEASPOON', 'TEASPOONS']
TBSP_NAMES = ['TBSP', 'TBSPS', 'TABLESPOON', 'TABLESPOONS']
CUP_NAMES = ['CUP', 'CUPS']
LB_NAMES = ['LB', 'LBS', 'POUND', 'POUNDS']
OZ_NAMES = ['OZ', 'OUNCE', 'OUNCES']
BUTTER_NAMES = ['STICK', 'STICKS']
EGG_NAMES = ['CT', 'COUNT']
GALLON_NAMES = ['GAL', 'GALLON', 'GALLONS']
VOLUME_NAMES = [TSP_NAMES, TBSP_NAMES, CUP_NAMES, GALLON_NAMES]
WEIGHT_NAMES = [LB_NAMES, OZ_NAMES]
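One possible restructuring (a sketch, not the only option): flatten the name families into a single dictionary that maps every alias to a (canonical name, metric factor) pair, so unit resolution becomes one lookup instead of nested list scans. The factors below reuse the constants above; to_metric is a hypothetical helper:

```python
# Each family is declared once; the flat lookup table is derived from it,
# so adding a new unit means adding a single entry here.
UNIT_FAMILIES = {
    ('tsp', 'tsps', 'teaspoon', 'teaspoons'): ('teaspoon', 5),
    ('tbsp', 'tbsps', 'tablespoon', 'tablespoons'): ('tablespoon', 15),
    ('oz', 'ounce', 'ounces'): ('ounce', 28.35),
    ('cup', 'cups'): ('cup', 8 * 28.35),
    ('lb', 'lbs', 'pound', 'pounds'): ('pound', 16 * 28.35),
}

# alias -> (canonical_name, factor), built once at import time
UNIT_LOOKUP = {
    alias: (canonical, factor)
    for aliases, (canonical, factor) in UNIT_FAMILIES.items()
    for alias in aliases
}

def to_metric(amount, unit):
    canonical, factor = UNIT_LOOKUP[unit.lower().strip()]
    return amount * factor, canonical

print(to_metric(2, 'Tsps'))  # (10, 'teaspoon')
```

Since the table is plain data, it would also be easy to move into a JSON or CSV file later if it grows large, without changing the lookup code.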

Different result between OpenCV convertTo in C++ and manual conversion in Python

I'm trying to port code from C++ to Python, where at some point a frame is extracted from a .oni recording (OpenNI2), scaled to 8 bits, and saved as a JPG.
I use the OpenCV function convertTo in C++, which is not available in Python, so following the documentation I'm trying to do the same operation manually, but something is wrong.
This is the C++:
cv::Mat depthImage8;
double maxVal = 650.0;
double minVal = 520.0;
depthImage.convertTo(depthImage8, CV_8UC1, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
cv::imwrite(dst_folder + "/" + std::to_string(DepthFrameIndex) + "_8bit.jpg", depthImage8);
which produces:
This is the Python version:
depth_scale_factor = 255.0 / (650.0-520.0)
depth_scale_beta_factor = -520.0*255.0/(650.0-520.0)
depth_uint8 = (depth_array*depth_scale_factor+depth_scale_beta_factor).astype('uint8')
which produces:
This code runs, but the generated images are different, while the original 16UC1 frames are identical (already checked; they match pixel by pixel), so something must be wrong in the conversion.
Thanks to the comments I came up with the solution. As stated by users michelson and Dan Masek, OpenCV performs a saturate_cast operation, while NumPy doesn't. So in order to get the same result, the Python version must be:
depth_uint8 = depth_array*depth_scale_factor+depth_scale_beta_factor
depth_uint8[depth_uint8>255] = 255
depth_uint8[depth_uint8<0] = 0
depth_uint8 = depth_uint8.astype('uint8')
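The same clamping can be written with np.clip in one step, mirroring OpenCV's saturate_cast<uchar> (the depth values below are hypothetical; note also that convertTo rounds to the nearest integer while astype truncates, which can still cause off-by-one differences on non-integer values):

```python
import numpy as np

depth_array = np.array([[500.0, 520.0, 585.0, 650.0, 700.0]])  # hypothetical depths
scale = 255.0 / (650.0 - 520.0)
beta = -520.0 * 255.0 / (650.0 - 520.0)

# clamp to [0, 255] before the cast, like saturate_cast<uchar> does
depth_uint8 = np.clip(depth_array * scale + beta, 0, 255).astype('uint8')
print(depth_uint8)  # values below 520 clamp to 0, above 650 to 255
```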

Python - Mix two audio chunks

I have two Byte objects.
One comes from using the Wave module to read a "chunk" of data:
def get_wave_from_file(filename):
    import wave
    original_wave = wave.open(filename, 'rb')
    return original_wave
The other uses MIDI information and a synthesizer module (fluidsynth):
def create_wave_from_midi_info(sound_font_path, notes):
    import fluidsynth
    s = []
    fl = fluidsynth.Synth()
    sfid = fl.sfload(sound_font_path)  # loads a soundfont
    fl.program_select(track=0, soundfontid=sfid, banknum=0, presetnum=0)  # selects the soundfont
    for n in notes:
        fl.noteon(0, n['midi_num'], n['velocity'])
        s = np.append(s, fl.get_samples(int(44100 * n['duration'])))  # gives the note the correct duration, based on a sample rate of 44.1 kHz
        fl.noteoff(0, n['midi_num'])
    fl.delete()
    samps = fluidsynth.raw_audio_string(s)
    return samps
The two files are of different length.
I want to combine the two waves, so that both are heard simultaneously.
Specifically, I would like to do this "one chunk at a time".
Here is my setup:
def get_a_chunk_from_each(wave_object, bytes_from_midi, chunk_size=1024, starting_sample=0):
    from_wav_data = wave_object.readframes(chunk_size)
    from_midi_data = bytes_from_midi[starting_sample:starting_sample + chunk_size]
    return from_wav_data, from_midi_data
Info about the return values from get_a_chunk_from_each():
type(from_wav_data), type(from_midi_data)
len(from_wav_data), len(from_midi_data)
4096 1024
Firstly, I'm confused as to why the lengths are different: the chunk from wave_object.readframes(1024) is exactly 4 times longer than the one from manually slicing bytes_from_midi[0:1024]. This may be part of the reason I have been unsuccessful.
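For what it's worth, the factor of four is consistent with the wave module's accounting: readframes(n) counts frames, not bytes, and a 16-bit stereo frame is sampwidth × nchannels = 2 × 2 = 4 bytes, whereas slicing bytes_from_midi counts raw bytes. A minimal sketch (building a throwaway in-memory WAV so it's self-contained):

```python
import io
import wave

# write 1024 frames of 16-bit stereo silence to an in-memory WAV
buf = io.BytesIO()
w = wave.open(buf, 'wb')
w.setnchannels(2)   # stereo
w.setsampwidth(2)   # 16-bit
w.setframerate(44100)
w.writeframes(b'\x00' * 4096)  # 1024 frames * 4 bytes
w.close()

r = wave.open(io.BytesIO(buf.getvalue()), 'rb')
data = r.readframes(1024)
print(len(data))  # 4096 bytes for 1024 frames
```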
Secondly, I want to create the function that combines the two chunks. The following pseudocode illustrates what I want to happen:
def combine_chunks(chunk1, chunk2):
    mixed = chunk1 + chunk2
    # OR, probably more like:
    mixed = (chunk1 + chunk2) / 2
    # to prevent clipping?
    return mixed
It turns out there is a very, very simple solution.
I simply used the audioop library:
https://docs.python.org/3/library/audioop.html
and used its "add" function ("width" is the sample width in bytes; since this is 16-bit audio, that's 16 / 8 = 2 bytes):
audioop.add(chunk1, chunk2, width=2)
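Note that audioop.add requires the two fragments to be the same length, and the audioop module was removed from the standard library in Python 3.13. For reference, a minimal pure-struct equivalent of the width=2 case (samplewise sum with int16 saturation; the sample values below are made up):

```python
import struct

def mix_chunks(chunk1, chunk2, width=2):
    """Samplewise sum with int16 saturation -- the same thing
    audioop.add(chunk1, chunk2, 2) does for 16-bit audio."""
    assert width == 2 and len(chunk1) == len(chunk2)
    n = len(chunk1) // width
    s1 = struct.unpack('<%dh' % n, chunk1)
    s2 = struct.unpack('<%dh' % n, chunk2)
    mixed = [max(-32768, min(32767, a + b)) for a, b in zip(s1, s2)]
    return struct.pack('<%dh' % n, *mixed)

a = struct.pack('<4h', 1000, 2000, -3000, 30000)
b = struct.pack('<4h', 500, -500, 1500, 10000)
print(struct.unpack('<4h', mix_chunks(a, b)))  # (1500, 1500, -1500, 32767)
```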

Instructables open source code: Python IndexError: list index out of range

I've seen this error in several other questions but couldn't find the answer.
I'm a complete stranger to Python, but I'm following the instructions from a site and I keep getting this error when I try to run the script:
IndexError: list index out of range
Here's the script:
##//txt to stl conversion - 3d printable record
##//by Amanda Ghassaei
##//Dec 2012
##//http://www.instructables.com/id/3D-Printed-Record/
##
##/*
## * This program is free software; you can redistribute it and/or modify
## * it under the terms of the GNU General Public License as published by
## * the Free Software Foundation; either version 3 of the License, or
## * (at your option) any later version.
##*/

import wave
import math
import struct

bitDepth = 8  # target bitDepth
frate = 44100  # target frame rate
fileName = "bill.wav"  # file to be imported (change this)

# read file and get data
w = wave.open(fileName, 'r')
numframes = w.getnframes()
frame = w.readframes(numframes)  # w.getnframes()
frameInt = map(ord, list(frame))  # turn into array

# separate left and right channels and merge bytes
frameOneChannel = [0]*numframes  # initialize list of one channel of wave
for i in range(numframes):
    frameOneChannel[i] = frameInt[4*i+1]*2**8 + frameInt[4*i]  # separate channels and store one channel in new list
    if frameOneChannel[i] > 2**15:
        frameOneChannel[i] = frameOneChannel[i] - 2**16
    elif frameOneChannel[i] == 2**15:
        frameOneChannel[i] = 0
    else:
        frameOneChannel[i] = frameOneChannel[i]

# convert to string
audioStr = ''
for i in range(numframes):
    audioStr += str(frameOneChannel[i])
    audioStr += ","  # separate elements with comma

fileName = fileName[:-3]  # remove .wav extension
text_file = open(fileName + "txt", "w")
text_file.write("%s" % audioStr)
text_file.close()
Thanks a lot,
Leart
Leart, check these; they may help:
Is your input file in the correct format? As I see it, you need to produce that file beforehand before you can use it in this program. Post that file here as well.
Check that your bit depth and frame rate are correct.
Just for debugging purposes (if the code is correct this may not produce correct results, but it's good for testing): you are accessing frameInt[4*i+1], with index i multiplied by 4 and then adding 1, eventually going beyond the end of frameInt. Add an 'if' to check the size before accessing the array element in frameInt:
if len(frameInt) > (4*i+1):
Add that statement right after the first occurrence of "for i in range(numframes):" and just before "frameOneChannel[i] = frameInt[4*i+1]*2**8+frameInt[4*i]".
*watch tab spaces
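The debugging tip above points at a likely root cause: the script assumes 16-bit stereo input, i.e. 4 bytes per frame, so frameInt must contain 4*numframes entries. With a mono or 8-bit WAV there are fewer, and frameInt[4*i+1] eventually runs off the end. A small sketch of a format check before processing (using an in-memory mono file for illustration; note also that under Python 3 the map(ord, ...) call would additionally need wrapping in list(...)):

```python
import io
import wave

# build a mono 16-bit WAV in memory -- NOT what the script expects
buf = io.BytesIO()
w = wave.open(buf, 'wb')
w.setnchannels(1)   # mono
w.setsampwidth(2)   # 16-bit
w.setframerate(44100)
w.writeframes(b'\x00\x00' * 100)
w.close()

r = wave.open(io.BytesIO(buf.getvalue()), 'rb')
bytes_per_frame = r.getsampwidth() * r.getnchannels()
print(bytes_per_frame)  # 2 here, but the script's 4*i+1 indexing assumes 4
```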
