I have been using the PyRPL API Manual to run my Red Pitaya (STEMLab 125-10) from a Jupyter Notebook instead of the GUI. In the oscilloscope code (below, together with a condensed version of the API manual code that precedes it), the line ch1, ch2 = res.result() raises "InvalidStateError: Result is not set." This is the error I would expect when calling result() on a future that is not done yet, but according to the source code for pyrpl.hardware_modules.scope, the future is never done:
This Future object is the one controlling the acquisition in
rolling_mode. It will never be fullfilled (done), since rolling_mode
is always continuous, but the timer/slot mechanism to control the
rolling_mode acquisition is encapsulated in this object.
(See class ContinuousRollingFuture in the link above). So if I can't take the result of a future which is not fulfilled yet, and the future is never fulfilled, and I need the result to get the curve, how can I ever get the curve?
I don't have much experience with the Python library asyncio, so I would be very grateful for suggestions and tips on what I'm not understanding!
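For context, the same exception can be reproduced with a plain asyncio future, independent of PyRPL: result() raises InvalidStateError while the future is pending, and only returns once a result has been set. A minimal sketch:
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()   # a future that nothing has resolved yet
    try:
        fut.result()             # pending future -> InvalidStateError
    except asyncio.InvalidStateError as e:
        print("Got:", e)         # prints "Result is not set."
    fut.set_result((1, 2))       # once a result is set...
    ch1, ch2 = fut.result()      # ...result() returns it immediately

asyncio.run(main())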
#condensed version of PyRPL code before the oscilloscope block
HOSTNAME = "rp-XXXXXX.local" # hostname of the red pitaya
folder = "." # relative folder where files are saved
import sys
import os
from pyrpl import Pyrpl
import numpy as np
import time
from time import sleep
import matplotlib.pyplot as plt
import scipy.fft
import IPython
import ipywidgets as widgets
from scipy.signal import welch
p = Pyrpl(hostname=HOSTNAME, config='test', gui=False)
r = p.rp
s = r.scope
# oscilloscope code
asg = r.asg1
# turn off asg so the scope has a chance to measure its "off-state" as well
asg.output_direct = "off"
# setup scope
s.input1 = 'asg1'
# show pid output on channel2
s.input2 = 'pid0'
# trig at zero volt crossing
s.threshold_ch1 = 0
# positive/negative slope is detected by waiting for input to
# sweep through hysteresis around the trigger threshold in
# the right direction
s.hysteresis_ch1 = 0.01
# trigger on the input signal positive slope
s.trigger_source = 'ch1_positive_edge'
# take data symmetrically around the trigger event
s.trigger_delay = 0
# set decimation factor to 64 -> full scope trace is 8 ns * 2^14 * 64 ≈ 8.4 ms long
s.decimation = 64
# launch a single (asynchronous) curve acquisition; "asynchronous" means
# that the function returns immediately, even though the data acquisition
# is still going on.
res = s.curve_async()
print("Before turning on asg:")
print("Curve ready:", s.curve_ready()) # trigger should still be armed
# turn on asg and leave enough time for the scope to record the data
asg.setup(frequency=1e3, amplitude=0.3, start_phase=90, waveform='halframp', trigger_source='immediately')
sleep(0.010)
# check that the trigger has been disarmed
print("After turning on asg:")
print("Curve ready:", s.curve_ready())
print("Trigger event age [ms]:",8e-9*((
s.current_timestamp&0xFFFFFFFFFFFFFFFF) - s.trigger_timestamp)*1000)
# The function curve_async returns a *future* (or promise) of the curve. To
# access the actual curve, use result()
# this is the line that's giving me trouble:
ch1, ch2 = res.result()
# plot the data
%matplotlib inline
plt.plot(s.times*1e3, ch1, s.times*1e3, ch2)
plt.xlabel("Time [ms]")
plt.ylabel("Voltage")
I'm trying to upload an xarray dataset to GCP using the function ds.to_zarr(store=store), and it works perfectly. However, I would like to show the progress for big datasets. Is there any option to chunk my dataset in a way that lets me use tqdm or something similar to log the upload progress?
Here is the code that I currently have:
import os
import xarray as xr
import numpy as np
import gcsfs
from dask.diagnostics import ProgressBar
if __name__ == '__main__':
    # for testing
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "service-account.json"
    # create xarray
    data_arr = np.random.rand(5000, 100, 100)
    data_xarr = xr.DataArray(data_arr, dims=["x", "y", "z"])
    # define store
    gcp_blob_uri = "gs://gprlib/test.zarr"
    gcs = gcsfs.GCSFileSystem()
    store = gcs.get_mapper(gcp_blob_uri)
    # delayed to_zarr computation -> seems that it does not work
    write_job = data_xarr \
        .to_dataset(name="data") \
        .to_zarr(store, mode="w", compute=False)
    print(write_job)
xarray.Dataset.to_zarr has an optional argument compute which is True by default:
compute (bool, optional) – If True write array data immediately, otherwise return a dask.delayed.Delayed object that can be computed to write array data later. Metadata is always updated eagerly.
Using this, you can track the progress with dask.distributed's own progress function:
from dask.distributed import progress

write_job = ds.to_zarr(store, compute=False)
write_job = write_job.persist()  # requires an active dask.distributed Client
# this will return an interactive (non-blocking) widget if in a notebook
# environment. To force the widget to block, provide notebook=False.
progress(write_job, notebook=False)
[############## ] | 35% Completed | 4.5s
Note that for this to work, the dataset must consist of chunked dask arrays. If the data is in memory, you could use a single chunk per array with ds.chunk().to_zarr.
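For example, a minimal sketch (the chunk size of 500 along "x" is an arbitrary assumption; pick one suited to your data):
# chunk the in-memory dataset so to_zarr returns a dask-backed delayed job
ds_chunked = ds.chunk({"x": 500})
write_job = ds_chunked.to_zarr(store, mode="w", compute=False)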
I'm new to USRP, so I don't quite understand transmission and reception. I have IQ data that needs to be transmitted. Should I use tx_waveform.py to perform the transmission, or something else? What would its output be? What configuration is needed to transmit the data and receive the IQ data on the receiver (OTA or over the wire)? What should the procedure be? I have attached the example code I found; it would serve as the receiver part, but how do I use tx_waveform.py? I just need a simple end-to-end transmission setup.
Thank you
I am using the same X310 USRP for closed-loop transmission by simply connecting an RF cable between its TX and RX ports. In addition, an OctoClock is connected to this USRP.
num_symbols = 10000
r = 0.1*np.random.randn(num_symbols) + 0.1j*np.random.randn(num_symbols)
print(r.size)
r = r.astype(np.complex64) # convert to complex64
r.tofile('test.dat')
import uhd
import numpy as np
usrp = uhd.usrp.MultiUSRP()
num_samps = 10000
center_freq = 1e9
sample_rate = 50e6
gain = 20
usrp.set_rx_rate(sample_rate, 0)
usrp.set_rx_freq(uhd.libpyuhd.types.tune_request(center_freq), 0)
usrp.set_rx_gain(gain, 0)
# Set up the stream and receive buffer
st_args = uhd.usrp.StreamArgs("fc32", "sc16")
st_args.channels = [0]
metadata = uhd.types.RXMetadata()
streamer = usrp.get_rx_stream(st_args)
recv_buffer = np.zeros((1, 1000), dtype=np.complex64) #maximum allowed value is 1996 through streamer.get_max_num_samps()
# Start Stream
stream_cmd = uhd.types.StreamCMD(uhd.types.StreamMode.start_cont)
stream_cmd.stream_now = True
streamer.issue_stream_cmd(stream_cmd)
# Receive Samples
samples = np.fromfile('test.dat', np.complex64)
for i in range(num_samps//1000):
streamer.recv(recv_buffer, metadata)
samples[i*1000:(i+1)*1000] = recv_buffer[0]
# Stop Stream
stream_cmd = uhd.types.StreamCMD(uhd.types.StreamMode.stop_cont)
streamer.issue_stream_cmd(stream_cmd)
print(samples)
Your code looks like it was taken from this source with a few minor modifications: PySDR: A Guide to SDR and DSP using Python
If you look a bit further down on that page, there is an example describing how to transmit data using usrp.send_waveform().
Furthermore, the modification you made to that example is a bit strange:
# Receive Samples
samples = np.fromfile('test.dat', np.complex64) #<-- here you populate samples with data from your file
for i in range(num_samps//1000):
streamer.recv(recv_buffer, metadata)
samples[i*1000:(i+1)*1000] = recv_buffer[0] #<-- now you overwrite with received data
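If the intent was to both transmit the file and capture what comes back, one sketch (the names tx_samples/rx_samples are assumptions) is to keep the file data and the received data in separate arrays:
tx_samples = np.fromfile('test.dat', np.complex64)    # data to transmit on the TX side
rx_samples = np.zeros(num_samps, dtype=np.complex64)  # separate buffer for received data
for i in range(num_samps // 1000):
    streamer.recv(recv_buffer, metadata)
    rx_samples[i*1000:(i+1)*1000] = recv_buffer[0]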
First, confirm that the uhd Python package is installed by running:
python -c "import uhd"
Then confirm that your host PC can find your USRP X310 by typing:
uhd_find_devices --args addr={your X310's IP address}
Then download the latest Python examples [tx_waveforms.py, rx_to_file.py] from UHD's repo.
Then run each with --help to see how to use these two files:
python tx_waveforms.py --help
python rx_to_file.py --help
This will also tell you which packages you are missing and need to pip install yourself.
This page shows how to pass arguments to the program; even though it does not use the exact files you downloaded, the low-level API is similar.
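As an illustration only (a sketch, not verified against your UHD version; flag names can differ between releases, so check --help first), a transmit invocation might look like:
python tx_waveforms.py --args "addr={your X310's IP address}" --freq 1e9 --rate 1e6 --gain 20 --duration 10 --channels 0 --wave-freq 1e4 --wave-ampl 0.3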
With a pandas DataFrame, I'm trying to generate a histogram. It gets generated the first time the function is called. However, when create_histogram is called a second time it gets stuck at h = df.hist(bins=20, column="amount"). By "stuck" I mean that it does not finish executing the statement and execution does not continue to the next line, but at the same time it does not raise any error or break out of the execution. What exactly is the problem here, and how can I fix it?
import matplotlib.pyplot as plt
from io import StringIO  # missing from the original excerpt (under Python 3, PNG output would need BytesIO)
...
...
def create_histogram(self, field):
    df = self.main_df  # this is a DataFrame
    h = df.hist(bins=20, column="amount")
    fileContent = StringIO()
    plt.savefig(fileContent, dpi=None, facecolor='w', edgecolor='w',
                orientation='portrait', papertype=None, format="png",
                transparent=False, bbox_inches=None, pad_inches=0.5,
                frameon=None)
    content = fileContent.getvalue()
    return content
Finally I figured this out myself.
Whenever I executed the function I was getting the following log message, but I was ignoring it due to my lack of awareness:
Backend TkAgg is interactive backend. Turning interactive mode on.
But then I realised that maybe it was running in interactive mode (which was not my intention). So I found out that there is a way to turn it off, given below.
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
And this fixed my issue.
NOTE: matplotlib.use must be called immediately after importing matplotlib, before pyplot is imported, in the sequence given here.
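A minimal sketch of the fixed function under these assumptions (Python 3, BytesIO instead of StringIO since PNG data is binary, and a plain df in place of the original self.main_df):
import matplotlib
matplotlib.use('Agg')             # must come before importing pyplot
import matplotlib.pyplot as plt
from io import BytesIO
import pandas as pd

def create_histogram(df):
    df.hist(bins=20, column="amount")
    buf = BytesIO()               # PNG data is binary, hence BytesIO
    plt.savefig(buf, format="png")
    plt.close('all')              # free the figure so repeated calls stay clean
    return buf.getvalue()

df = pd.DataFrame({"amount": [1, 2, 2, 3, 5, 8, 13, 21]})
png_bytes = create_histogram(df)
png_bytes = create_histogram(df)  # second call no longer hangs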
I have a piece of code which gets called by a different function, carries out some calculations for me, and then plots the output to a file. Since the whole script can take a while to run for larger datasets, and since I may want to analyse multiple datasets at a time, I start it in screen, then disconnect, close my PuTTY session, and check back on it the next day. I am using Ubuntu 14.04. My code looks as follows (I have skipped the calculations):
import shelve
import os, sys, time
import numpy
import timeit
import logging
import csv
import itertools
import graph_tool.all as gt
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
plt.ioff()
#Do some calculations
print 'plotting indeg'
# Let's plot its in-degree distribution
in_hist = gt.vertex_hist(g, "in")
y = in_hist[0]
err = numpy.sqrt(in_hist[0])
err[err >= y] = y[err >= y] - 1e-2
plt.figure(figsize=(6,4))
plt.errorbar(in_hist[1][:-1], in_hist[0], fmt="o", yerr=err,
             label="in")
plt.gca().set_yscale("log")
plt.gca().set_xscale("log")
plt.gca().set_ylim(0.8, 1e5)
plt.gca().set_xlim(0.8, 1e3)
plt.subplots_adjust(left=0.2, bottom=0.2)
plt.xlabel("$k_{in}$")
plt.ylabel("$NP(k_{in})$")
plt.tight_layout()
plt.savefig("in-deg-dist.png")
plt.close()
print 'plotting outdeg'
#Do some more stuff
The script runs perfectly happily until I get to the plotting commands. To try and get to the root of the problem, I am currently running it in PuTTY without screen and with no X11 applications. The output I get is the following:
plotting indeg
PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connection refused
: cannot connect to X server localhost:10.0
I presume this is caused by the code trying to open a window, but I thought that explicitly calling plt.ioff() would disable that. Since it didn't, I followed this thread (Generating matplotlib graphs without a running X server) and specified the backend, but that didn't solve the problem either. Where might I be going wrong?
The calling function calls other functions that also use matplotlib. These are called only after this one, but their dependencies are loaded during the import statements. Since they were imported first, pyplot was already bound to a backend, which disabled the subsequent matplotlib.use('Agg') declaration. Moving that declaration to the top of the main script solved the problem.
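A minimal sketch of the working import order (plotting_helpers is a hypothetical stand-in for the other modules that import pyplot):
# main.py
import matplotlib
matplotlib.use('Agg')          # select the backend before anything imports pyplot

import plotting_helpers        # hypothetical module that runs "import matplotlib.pyplot as plt"

plotting_helpers.make_plots()  # hypothetical entry point; pyplot now uses Agg

# If plotting_helpers were imported before matplotlib.use('Agg'), pyplot would
# already be bound to the default interactive backend and the call would have no effect.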
I'm doing some modeling of a neural sensory system, which has different model components for different stages of the system. Each stage's computations are done in parallel, and then passed to the next stage.
I'm running into multiprocessing.pool.MaybeEncodingError: Error sending result: '[<periphery_configuration.PeripheryOutput object at 0x10d00cf28>]'. Reason: 'error("'i' format requires -2147483648 <= number <= 2147483647",)', which per this I believe means that I am trying to pass a list of numpy arrays so large that multiprocessing runs out of index room.
I have attempted to fix this using joblib, as follows. What am I doing wrong? Per their docs, I should be memoizing to disk automatically; that doesn't appear to be happening.
Here's an abbreviated implementation. The call to has_shareable_memory is some ugly shotgun debugging, and I don't fully understand what it's supposed to do.
The following code works when self.stimulus contains fewer than about 10,000 elements. It fails with the above error for very long self.stimulus on the line results = Parallel... in run. self.solve_one_cochlea is the method that computes one result.
import multiprocessing as mp
from datetime import datetime, timedelta
from joblib import Parallel, delayed
from joblib.pool import has_shareable_memory
from periphery_model import CochleaModel

class Periphery:
    def __init__(self):
        self.cochlear_list = [[CochleaModel(), self.stimulus[i], self.irr_on[i], i, (0, i + 1)]
                              for i in range(len(self.stimulus))]

    def run(self) -> [PeripheryOutput]:
        """Simulate sound propagation up to the auditory nerve for many stimulus levels

        :return: A list of output data, one for each stimulus level
        """
        s1 = datetime.now()
        results = Parallel(n_jobs=mp.cpu_count(), max_nbytes=10e6)(
            delayed(self.solve_one_cochlea, has_shareable_memory)(xx) for xx in self.cochlear_list)
        self.save_model_configuration()
        print("\ncochlear simulation of {} stimulus levels finished in {:0.3f}s".format(
            len(self.stimulus), timedelta.total_seconds(datetime.now() - s1)))
        return results
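On the memoizing point: as far as I understand, Parallel's max_nbytes only controls memmapping of large input arrays passed to workers; caching results to disk is a separate, explicit joblib feature via joblib.Memory, not something that happens automatically. A minimal sketch (the cache directory and function are illustrative assumptions):
from joblib import Memory

memory = Memory("./joblib_cache", verbose=0)  # on-disk cache directory (assumption)

@memory.cache        # results are pickled to disk and reused for identical arguments
def expensive_step(x):
    return x ** 2

print(expensive_step(3))  # computed, then written to the cache
print(expensive_step(3))  # loaded back from disk, not recomputed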