Remove noise from Load Cell ADC Reading - python

I have connected my load cell to a Raspberry Pi. I use a TI ADS1231 ADC and Python code on the Raspberry Pi to take readings at 80 samples/sec (an 80 Hz sampling rate). As you know, the ADC signal contains spikes and noise, which I want to remove so that I get a stable reading. I have read many articles regarding the Fourier transform and various other articles on removing noise from a signal.
Can anybody guide me on using a DSP method in Python for my application? I don't know much about DSP or how to implement it. Any Python library or example code would help.
Thanks in advance.
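One common, simple approach for a slowly changing signal like a load cell is a median filter to knock out the spikes, followed by a low-pass filter for the remaining noise. Below is a minimal sketch using numpy/scipy; adc_readings stands in for whatever list or array holds your raw samples, and the kernel size and cutoff are example values you would tune:

import numpy as np
from scipy import signal

fs = 80.0                                        # ADS1231 sampling rate in Hz
samples = np.asarray(adc_readings, dtype=float)  # adc_readings: your raw ADC values

# 1) median filter removes isolated spikes without smearing real steps in the signal
despiked = signal.medfilt(samples, kernel_size=5)

# 2) low-pass filter removes the remaining broadband noise; a load cell changes slowly,
#    so a cutoff of a few Hz is usually enough (tune to your application)
cutoff_hz = 2.0
b, a = signal.butter(2, cutoff_hz / (0.5 * fs), btype='low')
smooth = signal.filtfilt(b, a, despiked)         # zero-phase filtering, no time shift

filtfilt works on a recorded block of samples; if you need to filter sample by sample in real time, use signal.lfilter with the same coefficients (it adds a small delay) or a simple exponential moving average.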

Related

BMA490L returns zeros if I read 6 bytes from register 0x12

I have trouble getting this to work. I'm using a BMA490L acceleration sensor with a Raspberry Pi Zero W over I2C at 100 kHz. Basically, I want to get the acceleration readings, which should be in registers 0x12 to 0x17 according to the datasheet.
But when I use this code
import smbus
bus = smbus.SMBus(1)                        # I2C bus 1 on the Pi Zero W
bus.read_i2c_block_data(0x18, 0x12, 6)      # read 6 bytes starting at register 0x12
I just get
[0, 0, 0, 0, 0, 0]
I'm sorry the description is this short, but to be honest I have little to no understanding of what's going on. Is the IC damaged? Or is this problem caused by my lack of knowledge, as I'm quite new to this field.
If you want, you can see the full BMA490L datasheet here.
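One thing worth checking (I can't tell from the description whether this is your issue): the BMA4xx family powers up with the accelerometer disabled, so the data registers stay at zero until you enable it. A minimal sketch is below; the register addresses and values (0x7D PWR_CTRL, 0x40 ACC_CONF) are my reading of the BMA4 family documentation and should be verified against the BMA490L datasheet:

import time
import smbus

bus = smbus.SMBus(1)
ADDR = 0x18                               # sensor address from your example

# enable the accelerometer before reading the data registers
bus.write_byte_data(ADDR, 0x7D, 0x04)     # PWR_CTRL: acc_en = 1 (assumed register/value)
bus.write_byte_data(ADDR, 0x40, 0xA8)     # ACC_CONF: ~100 Hz ODR, normal mode (example value)
time.sleep(0.05)                          # give the sensor time to produce a sample

data = bus.read_i2c_block_data(ADDR, 0x12, 6)
x = int.from_bytes(bytes(data[0:2]), 'little', signed=True)
y = int.from_bytes(bytes(data[2:4]), 'little', signed=True)
z = int.from_bytes(bytes(data[4:6]), 'little', signed=True)
print(x, y, z)

If the readings are still all zero after enabling the accelerometer, reading the chip-ID register and comparing it against the datasheet value is a quick way to rule out a damaged or misaddressed IC.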

Detecting a noise in an audio stream

My goal is to be able to detect a specific noise that comes through the speakers of a PC using Python. That means the following, in pseudo code:
Sound is being played out of the speakers, by applications such as games for example,
ny "audio to detect" sound happens, and I want to detect that, and take an action
The specific sound I want to detect can be found here.
If I break that down, I believe I need two things:
A way to sample the audio that is being streamed to an audio device
I actually have this bit working -- with the code found here: https://gist.github.com/renegadeandy/8424327f471f52a1b656bfb1c4ddf3e8 -- it is based on the sounddevice plot example, which I combine with an audio loopback device. This allows my code to receive a callback with the data that is played to the speakers.
A way to compare each sample with my "audio to detect" sound file.
The detection does not need to be exact - it just needs to be close. For example, there will be lots of other noises happening at the same time, so it's more about being able to detect the footprint of the "audio to detect" within an audio stream containing a variety of sounds.
Having investigated this, I found technologies mentioned in this post on SO and also this interesting article on Chromaprint. The Chromaprint article uses fpcalc to generate fingerprints, but because my "audio to detect" is only around 1-2 seconds long, fpcalc can't generate the fingerprint. I need something that works over shorter timespans.
Can somebody help me with problem #2 as detailed above?
How should I attempt this comparison (ideally with a small example), based on my sampling with sounddevice in the audio_callback function?
Many thanks in advance.
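For problem #2, since you only need a rough match rather than an exact fingerprint, a normalised cross-correlation between your "audio to detect" and each buffered chunk of the stream is a reasonable starting point. Here is a minimal sketch; the use of soundfile to load the template, the 0.5 threshold, and the file name are all assumptions, and you will need to buffer enough callback blocks that each chunk is longer than the 1-2 second template:

import numpy as np
import soundfile as sf
from scipy import signal

# load the "audio to detect" once, mono, at the same sample rate as the stream
template, fs = sf.read('audio_to_detect.wav')
if template.ndim > 1:
    template = template.mean(axis=1)
template = template / (np.linalg.norm(template) + 1e-12)

def matches(chunk, threshold=0.5):
    """Return True if the template appears somewhere in this chunk of stream audio."""
    chunk = np.asarray(chunk, dtype=float)
    if chunk.ndim > 1:
        chunk = chunk.mean(axis=1)
    # cross-correlate the chunk with the template (FFT-based, fast for long chunks)
    corr = signal.fftconvolve(chunk, template[::-1], mode='valid')
    # normalise by the local energy so loud unrelated sounds don't trigger a match
    energy = np.sqrt(signal.fftconvolve(chunk ** 2, np.ones(len(template)), mode='valid')) + 1e-12
    return float(np.max(corr / energy)) >= threshold

Because the match only has to be approximate, you may get better robustness by correlating spectrogram (or chroma) features instead of raw samples, but the structure stays the same: slide the template over the buffer and compare the best score against a threshold.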

Recovering Real PSK

I am trying to transmit and receive a BPSK signal from an Ettus Research N210 to an Ettus Research B200. I run my received signal through gain control, clock sync, and a PLL, then try to demodulate the signal.
Here is my flowchart.
In simulation (passing the signal through a channel block instead of transmitting from one radio to the other), this flowchart works fine. Below are the results of the simulation. As you can see, the receiver sees the rotated constellation and the processing corrects for this. Everything is fine and the packets are successfully decoded.
However, when I transmit and receive from my two real radios, I no longer receive signals that resemble 2-PSK. Instead, the constellation plots of the RX signal just look like blobs.
Here is my flowchart again with the USRP blocks un-commented.
And here are the results of the transmission and receive.
I am very confused by the lack of a constellation pattern in the received signal. Sometimes when I send a packet, the RX constellation takes on a more orderly, oval-looking shape, but it does not look like a line. Unfortunately, I was unable to capture the oval pattern in a screenshot, since it returns to the blob pattern very quickly.
I do not think this is a hardware issue because I have successfully used these radios before for UHF GMSK stuff.
Is there something wrong with my timing recovery / processing?
Thanks, y'all, in advance for any and all help.
Found the issue. I had set my sampling rate lower than the USRP's minimum sampling rate. After a day of frustration, I changed my sampling rate to 320k, adjusted a few things in my processing block, and now everything works and I get a nice-looking constellation.
Here are my updated (working) flowchart and plots.
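For anyone who hits the same problem: the UHD Python API reports the rate the device actually coerced to, which makes a too-low requested rate easy to spot. A small sketch (the address string is just an example):

import uhd

usrp = uhd.usrp.MultiUSRP("addr=192.168.10.2")   # example device address
requested = 250e3
usrp.set_rx_rate(requested)
actual = usrp.get_rx_rate()
if abs(actual - requested) > 1.0:
    print("USRP coerced the sample rate: requested %.0f Hz, got %.0f Hz" % (requested, actual))

The same check can be done inside a GNU Radio flowgraph by comparing the rate you pass to the USRP source block with what its get_samp_rate() method returns at runtime.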

Analyse audio files with Python

I currently have a photodiode connected to my PC and capture its signal with Audacity.
I want to improve on this by using an old RPi 1 as a dedicated test station. As a result, the shutter speed should appear on the console. I would prefer a Python solution for capturing the signal and analysing it.
Can anyone give me some suggestions? I played around with oct2py, but I don't really understand how to calculate the time between the two peaks of the signal.
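Not knowing your exact setup, here is a minimal sketch of the peak-timing part, assuming the photodiode capture ends up as a WAV file (the file name and the find_peaks parameters are examples to tune): find the two peaks with scipy.signal.find_peaks and divide the sample-index difference by the sample rate.

import numpy as np
import soundfile as sf
from scipy.signal import find_peaks

data, fs = sf.read('shutter_capture.wav')        # example file name
if data.ndim > 1:
    data = data.mean(axis=1)

env = np.abs(data)                               # simple envelope of the photodiode signal
# find prominent peaks; height and distance need tuning to your recordings
peaks, _ = find_peaks(env, height=0.5 * env.max(), distance=int(0.001 * fs))

if len(peaks) >= 2:
    dt = (peaks[-1] - peaks[0]) / fs             # time between first and last peak
    print("shutter time: %.5f s (about 1/%.0f s)" % (dt, 1.0 / dt))
else:
    print("could not find two peaks - adjust height/distance")

The same array can come straight from a sounddevice or PyAudio recording on the RPi instead of a file, so no Audacity step is needed on the test station.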
I have no expertise in sound analysis with Python, but since I am interested in this topic, here is what I found from some internet research.
pyAudioAnalysis
You can use pyAudioAnalysis, developed by Theodoros Giannakopoulos.
For your purpose, the function mtFileClassification() from audioSegmentation.py can be a good start. This function
splits an audio signal into successive mid-term segments and extracts mid-term feature statistics from each of these segments, using mtFeatureExtraction() from audioFeatureExtraction.py
classifies each segment using a pre-trained supervised model
merges successive fixed-size segments that share the same class label into larger segments
visualizes statistics regarding the results of the segmentation-classification process.
For instance
from pyAudioAnalysis import audioSegmentation as aS
[flagsInd, classesAll, acc, CM] = aS.mtFileClassification("data/scottish.wav","data/svmSM", "svm", True, 'data/scottish.segments')
Note that the last argument of this function is a .segments file. This is used as ground truth (if available) in order to estimate the overall performance of the classification-segmentation method. If this file does not exist, the performance measure is not calculated. These files are simple comma-separated files of the format: <start time>,<end time>,<label>. For example:
0.01,9.90,speech
9.90,10.70,silence
10.70,23.50,speech
23.50,184.30,music
184.30,185.10,silence
185.10,200.75,speech
...
If I have understood your question correctly, this is at least the kind of output you want to generate, isn't it? I rather think you have to provide that file yourself.
Most of this information is quoted directly from the pyAudioAnalysis wiki, which I suggest you read. Don't hesitate to reach out, as I am really interested in this topic.
Other available libraries for audio analysis:

What's a good way to examine audio with python and split it between high, mid and low pitches for visualizaton?

So, I'm planning to try making a light organ with an Arduino and Python, communicating over serial to control the brightness of several LEDs. The computer will use the microphone or a playing MP3 to generate the data.
I'm not so sure how to handle the audio processing. What's a good option for Python that can take either a playing audio file or microphone data (I'd prefer the microphone), split it into different frequency ranges, and write the intensities to variables? Do I need to worry about overtones if I use the microphone?
If you're not committed to using Python, you should also look at using PureData (PD) to handle the audio analysis. Interfacing PD to the Arduino is already a solved problem, and there are a lot of pre-existing components that make working with audio easy.
Try http://wiki.python.org/moin/Audio for links to various Python audio processing packages.
The audioop package has some basic waveform manipulation functions.
See also:
Detect and record a sound with python
Detect & Record Audio in Python
Portaudio has a Python interface that would let you read data off the microphone.
For the band splitting, you could use something like a band-pass filter feeding into an envelope follower -- one filter+follower for each frequency band of interest.
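If you do stay with Python, a minimal sketch of that filter-plus-envelope-follower idea with scipy looks like the following; it assumes you already have a block of mono microphone samples as a numpy array (e.g. from sounddevice or PyAudio), and the band edges are example values:

import numpy as np
from scipy import signal

fs = 44100                                                             # microphone sample rate
bands = {'low': (20, 250), 'mid': (250, 2000), 'high': (2000, 8000)}   # example bands in Hz

# design one band-pass filter per band once, up front
filters = {name: signal.butter(4, (lo, hi), btype='bandpass', fs=fs, output='sos')
           for name, (lo, hi) in bands.items()}

def band_levels(block):
    """Return one intensity value per band for a block of mono samples."""
    levels = {}
    for name, sos in filters.items():
        banded = signal.sosfilt(sos, block)                  # isolate this frequency range
        levels[name] = float(np.mean(np.abs(banded)))        # crude envelope: rectify + average
    return levels

Each value can then be scaled to 0-255 and sent over serial to set an LED's brightness.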
