I am trying to transmit and receive a BPSK signal from an Ettus Research N210 to an Ettus Research B200. I run my received signal through gain control, clock sync, and a PLL, then try to demodulate the signal.
Here is my flowgraph.
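For reference, the chain I describe above corresponds roughly to blocks like these in GNU Radio's Python API (a hedged sketch; the block choices and every parameter below are illustrative assumptions, not my actual values):

from gnuradio import analog, digital
from gnuradio.filter import firdes

sps = 4       # samples per symbol (assumption)
nfilts = 32   # polyphase filter arms (assumption)
rrc_taps = firdes.root_raised_cosine(nfilts, nfilts, 1.0 / float(sps), 0.35, 11 * sps * nfilts)

agc = analog.agc2_cc(1e-1, 1e-2, 1.0, 1.0)                                   # gain control
clock_sync = digital.pfb_clock_sync_ccf(sps, 6.28 / 100, rrc_taps, nfilts)   # clock sync
pll = digital.costas_loop_cc(6.28 / 100, 2)                                  # PLL; order 2 for BPSK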
In simulation (passing the signal through a channel model block instead of transmitting from one radio to the other), this flowgraph works fine. Below are the results of the simulation. As you can see, the receiver sees the rotated constellation and the processing corrects for it. Everything is fine and the packets are successfully decoded.
However, when I transmit and receive from my two real radios, I no longer receive signals that resemble 2-PSK. Instead, the constellation plots of the RX signal just look like blobs.
Here is my flowgraph again with the USRP blocks un-commented.
And here are the results of the transmission and reception.
I am very confused by the lack of a constellation pattern in the received signal. Sometimes when I send a packet, the RX constellation takes on a more orderly, oval-looking shape, but it does not look like a line. Unfortunately, I was unable to capture the oval pattern in a screenshot since it returns to the blob pattern very quickly.
I do not think this is a hardware issue because I have successfully used these radios before for UHF GMSK stuff.
Is there something wrong with my timing recovery / processing?
Thanks, y'all, in advance for any and all help.
Found the issue. I had set my sampling rate lower than the USRP's minimum supported sampling rate. After a day of frustration, I changed my sampling rate to 320 kS/s, adjusted a few things in my processing blocks, and now things work and I get a nice-looking constellation.
Here are my updated (working) flowgraph and plots.
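For anyone who hits the same thing: UHD coerces an unsupported rate to the nearest valid one (usually with a console warning that is easy to miss), so the quickest sanity check is to read the actual rate back from the source. A minimal sketch (the device address is left empty, and the rate is the one that fixed it for me):

from gnuradio import uhd

samp_rate = 320e3

src = uhd.usrp_source(
    "",  # empty device address picks the first USRP found
    uhd.stream_args(cpu_format="fc32", channels=[0]),
)
src.set_samp_rate(samp_rate)

actual = src.get_samp_rate()
if abs(actual - samp_rate) > 1.0:
    print("UHD coerced %g S/s to %g S/s" % (samp_rate, actual))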
I am trying to work on signal transmission and reception using OFDM. I ran into the following issues:
If a channel model block is used, I am able to receive what was transmitted, but when trying to transmit and receive over a HackRF, the receiver reports missing data packets or bad parser values.
The SNR is always low on both the transmitter and receiver sides.
There is always a shift and duplication of the constellation points after the Repack Bits block.
Does anyone have any suggestions or previous experience with something like this?
Any help will be appreciated. Thanks in advance.
Transmitter flow graph:
Constellation plot:
Transmitted signal FFT:
I am working with a remote source to get data from a 1 MHz bandwidth. Once I have isolated a peak in the FFT plot like this, I send that peak data through a ZMQ Pub Sink to a separate computer, where I decode the bytes into a string using this code. When I run the code, I get a series of numbers separated by periods, but I don't know what the output numbers mean. Could anyone help me out with this? Thank you!!
This is a picture of the plain output from the low-pass filter block, and this is the output when I push the peak detector data.
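One thing to check (an assumption, since the decoding code isn't shown): GNU Radio's ZMQ PUB Sink publishes raw sample bytes, not text, so decoding the message as a string gives meaningless characters. If the stream feeding the sink is float32 (for example, magnitude data out of a Complex to Mag block), the bytes can be reinterpreted on the receiving side like this:

import zmq
import numpy as np

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.connect("tcp://192.168.1.10:5555")  # address and port are placeholders
sock.setsockopt(zmq.SUBSCRIBE, b"")      # subscribe to all messages

msg = sock.recv()
# float32 for a real-valued stream; use np.complex64 if the sink is fed complex samples
samples = np.frombuffer(msg, dtype=np.float32)
print(samples[:10])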
I am trying to do a project, and in part of it the user says a word that gets recorded. The silence around the word is then cut out, and a button plays back the word without the silence. I am using librosa's librosa.effects.trim function to achieve this.
For example:
import sounddevice as sd
import librosa
from playsound import playsound

# beep1, beep2, seconds, and fs are defined elsewhere in the program

def record_audio():
    global myrecording
    global yt
    playsound(beep1)  # beep to signal the start of recording
    # record `seconds` seconds of mono audio at sample rate `fs`
    myrecording = sd.rec(int(seconds * fs), samplerate=fs, channels=1)
    sd.wait()         # block until the recording finishes
    playsound(beep2)  # beep to signal the end of recording
    # trim leading and trailing silence from the recording
    yt, index = librosa.effects.trim(myrecording, top_db=60)
However, when I play the audio back, I can tell that it is not trimming the recording. The variable explorer shows that myrecording and yt are the same length, and I can hear it when I play back what is supposed to be the trimmed clip. I don't get any error messages when this occurs either. Is there any way to get librosa to actually clip the audio? I have tried adjusting top_db, and that did not fix it. Aside from that, I am not quite sure what I could be doing wrong.
For a real answer, you'd have to post a sample recording so that we could inspect what exactly is going on.
In lieu of that, I'd like to refer to this GitHub issue, where one of the main authors of librosa offers advice for a very similar issue.
In essence: You want to lower the top_db threshold and reduce frame_length and hop_length. E.g.:
yt, index = librosa.effects.trim(myrecording, top_db=50, frame_length=256, hop_length=64)
Decreasing hop_length effectively increases the resolution for trimming. Decreasing top_db makes the function less sensitive, i.e., low-level noise is also regarded as silence. Using a computer microphone, you probably do have quite a bit of low-level background noise.
If all this does not help, you might want to consider using SoX, or its Python wrapper pysox. It also has a trim function.
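A minimal pysox sketch (the file names are placeholders, and the default silence thresholds will likely need tuning):

import sox

tfm = sox.Transformer()
tfm.silence(location=1)   # remove silence from the beginning
tfm.silence(location=-1)  # remove silence from the end
tfm.build("recording.wav", "trimmed.wav")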
Update: Look at the waveform of your audio. Does it have a spike somewhere at the beginning, perhaps some crack sound? That will keep librosa from trimming correctly. Perhaps manually throwing away the first second (= fs samples) and then trimming solves the issue:
librosa.effects.trim(myrecording[fs:], top_db=50, frame_length=256, hop_length=64)
My goal is to be able to detect a specific noise that comes through the speakers of a PC using Python. That means the following, in pseudocode:
Sound is being played out of the speakers, by applications such as games for example.
My "audio to detect" sound happens, and I want to detect that and take an action.
The specific sound I want to detect can be found here.
If I break that down, I believe I need two things:
A way to sample the audio that is being streamed to an audio device
I actually have this bit working, with the code found here: https://gist.github.com/renegadeandy/8424327f471f52a1b656bfb1c4ddf3e8. It is based on the sounddevice plot example, which I combine with an audio loopback device. This allows my code to receive a callback with the data that is played to the speakers.
A way to compare each sample with my "audio to detect" sound file.
The detection does not need to be exact; it just needs to be close. For example, there will be lots of other noises happening at the same time, so it's more about being able to detect the footprint of the "audio to detect" within the audio stream of a variety of sounds.
Having investigated this, I found technologies mentioned in this post on SO and also this interesting article on Chromaprint. The Chromaprint article uses fpcalc to generate fingerprints, but because my "audio to detect" is only around 1-2 seconds long, fpcalc can't generate a fingerprint for it. I need something that works across shorter timespans.
Can somebody help me with problem #2 as detailed above?
How should I attempt this comparison (ideally with a little example), based on my sampling using sounddevice in the audio_callback function?
Many thanks in advance.
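One possible starting point for #2, sketched with cross-correlation (not a tested detector; the file name, the threshold, and the assumption that the stream and clip share a sample rate are all mine):

import numpy as np
import soundfile as sf
from scipy.signal import correlate

template, fs = sf.read("audio_to_detect.wav")
if template.ndim > 1:
    template = template.mean(axis=1)   # mix down to mono
template /= np.linalg.norm(template) + 1e-12

THRESHOLD = 0.5  # tune on real captures

def detect(block):
    """Return True if the template's footprint appears in `block`.

    Assumes `block` is mono and at least as long as the template; in the
    sounddevice audio_callback you would buffer blocks until that holds.
    The normalization here is rough (whole-block energy), which is usually
    good enough for a first pass at "close, not exact" matching.
    """
    corr = correlate(block, template, mode="valid")
    score = np.max(np.abs(corr)) / (np.linalg.norm(block) + 1e-12)
    return score > THRESHOLD

Cross-correlation is sensitive to pitch and speed changes, but if the target sound is always the same recording played back verbatim, it should be a reasonable first attempt before reaching for fingerprinting.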
I want to make certain frequencies in a sequence of audio data louder. I have already analyzed the data using FFT and have gotten a value for each audio frequency in the data. I just have no idea how I can use the frequencies to manipulate the sound data itself.
From what I understand so far, data is encoded in such a way that the difference between every two consecutive readings determines the audio amplitude at that time instant. So making the audio louder at that time instant would involve making the difference between the two consecutive readings greater. But how do I know which time instants are involved with which frequency? I don't know when the frequency starts appearing.
(I am using Python, specifically PyAudio for getting the audio data and NumPy/SciPy for the FFT, though this probably shouldn't be relevant.)
You are looking for a graphic equalizer. Some quick Googling turned up rbeq, which seems to be a Rhythmbox plugin written in Python. I haven't looked through the code to see whether the actual EQ part is done in Python or whether it just controls something in the host, but I recommend looking through their source.
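If you want to stay with the FFT data you already have, here is a minimal sketch of the direct route (the band edges and gain are placeholders; a real equalizer would process overlapping windows or use proper filters to avoid clicks at chunk boundaries):

import numpy as np

def boost_band(samples, fs, f_lo, f_hi, gain=2.0):
    """Scale every FFT bin between f_lo and f_hi (in Hz) by `gain`."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[band] *= gain
    return np.fft.irfft(spectrum, n=len(samples))

# e.g. make everything between 300 Hz and 3 kHz twice as loud:
# louder = boost_band(chunk, fs=44100, f_lo=300, f_hi=3000)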