Plotting 3D Data from Gyro Encoder - python

I am collecting data from a gyro encoder via the serial port, and I would like to plot the data in real time. Below is the Python code:
import serial
import struct
import numpy as np
# import csv

ser = serial.Serial("COM5", 115200)
buffer = b''
while 1:
    buffer += ser.read(ser.inWaiting())
    if b'\x7e\x5d' in buffer:
        val = ser.read(18)
        (NA, timestamp, rsvd, gyro_xout, gyro_yout, gyro_zout, level, encoder_angle) = struct.unpack(">HHhhhhlh", val)
        print(NA, timestamp, rsvd, gyro_xout, gyro_yout, gyro_zout, level, encoder_angle)
I want to plot gyro_xout, gyro_yout, and gyro_zout. I am new to Python, and I would greatly appreciate it if someone could help with plotting these three values. Thanks!!
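A minimal sketch of one way the three gyro channels could be plotted live (assuming the same 18-byte frame layout as above; frame synchronisation on b'\x7e\x5d' is left out for brevity):
import struct
from collections import deque

import matplotlib.pyplot as plt
import serial

ser = serial.Serial("COM5", 115200)

# Keep only the most recent samples so the plot stays responsive.
history = 200
xs, ys, zs = deque(maxlen=history), deque(maxlen=history), deque(maxlen=history)

plt.ion()
fig, ax = plt.subplots()
line_x, = ax.plot([], [], label="gyro_xout")
line_y, = ax.plot([], [], label="gyro_yout")
line_z, = ax.plot([], [], label="gyro_zout")
ax.legend()

while True:
    # Assumes every frame is 18 bytes laid out as ">HHhhhhlh", as in the question.
    val = ser.read(18)
    NA, timestamp, rsvd, gx, gy, gz, level, angle = struct.unpack(">HHhhhhlh", val)
    xs.append(gx)
    ys.append(gy)
    zs.append(gz)

    t = range(len(xs))
    line_x.set_data(t, xs)
    line_y.set_data(t, ys)
    line_z.set_data(t, zs)
    ax.relim()
    ax.autoscale_view()
    plt.pause(0.01)  # give matplotlib a moment to redraw between reads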

Related

Plotting, time.sleep, or CSV saving in a while loop induces a lag between sensed and read data from a force sensor

I am quite new to Python. I need it to acquire serial data from a force sensor plugged into a COM port (6). With the code below, I have no problem storing the data in a list and saving it afterwards. I can also print each value without noticing any lag.
However, when I try to add a plot inside my while loop, a rather annoying shift appears between the moment I touch the sensor and the moment the data is written (from a few seconds to tens of seconds). I first thought it was because matplotlib is a memory-consuming library, but even when I add the simple line "time.sleep(0.00001)", which is a very short pause compared to the acquisition rate (60 FPS), I get the same lag. I also tried saving the data to a CSV file and plotting it in a different function using multiprocessing, but even saving the data triggers the lag.
This is problematic, as visualizing my live data is an important part of my experiment.
Could someone please help me with this particular issue?
Thank you so much.
import serial
import struct
import platform
import multiprocessing
import time
import numpy as np
import csv
# from pylab import *
import matplotlib.pyplot as plt

class MeasurementConverter:
    def convertValue(self, bytes):
        pass

class ForceMeasurementConverterKG(MeasurementConverter):
    def __init__(self, F_n, S_n, u_e):
        self.F_n = F_n
        self.S_n = S_n
        self.u_e = u_e

    def convertValue(self, value):
        A = struct.unpack('>H', value)[0]
        # return (A - 0x8000) * (self.F_n / self.S_n) * (self.u_e / 0x8000)
        return self.F_n / self.S_n * ((A - 0x8000) / 0x8000) * self.u_e * 2

class GSV3USB:
    def __init__(self, com_port, baudrate=38400):
        com_path = f'/dev/ttyUSB{com_port}' if platform.system() == 'Linux' else f'COM{com_port}'
        # print(f'Using COM: {com_path}')
        self.sensor = serial.Serial(com_path, baudrate)
        self.converter = ForceMeasurementConverterKG(10, 0.499552, 2)

    def read_value(self):
        self.sensor.read_until(b'\xA5')
        read_val = self.sensor.read(2)
        return self.converter.convertValue(read_val)

# initialization of data
gsv_data = []
temps = []
t_0 = time.time()

def data_gsv():
    dev = GSV3USB(6)
    # fig=plt.figure()
    # ax = fig.add_subplot(111)
    i = 0
    # line1, = ax.plot(temps, gsv_data)
    try:
        while True:
            gsv_data.append(dev.read_value())
            t1 = time.time() - t_0
            temps.append(t1)
            # I can print the data without noticing any lag
            print(dev.read_value())
            # I cannot plot the data without noticing lag
            plt.plot(temps, gsv_data)
            plt.draw()
            plt.axis([temps[i] - 6, temps[i] + 6, -2, 10])
            plt.pause(0.00001)
            plt.clf()
            i = i + 1
            # I cannot pause without noticing lag
            time.sleep(0.0001)
            # I cannot save the data without noticing lag
            with open('student_gsv.csv', 'w') as f:
                write = csv.writer(f)
                write.writerow(gsv_data)
    except KeyboardInterrupt:
        print("Exiting")
        return

if __name__ == "__main__":
    data_gsv()
Your main loop is constructed so that each time you acquire a value with gsv_data.append(dev.read_value()), the plot is redrawn. That probably takes time, and if the frequency at which the measuring device (GSV-3 USB) sends its data is reasonably high - say more than 10 frames per second - you will get these "lags".
I would lower the plot update rate:
append the measured data to an array (as you already do)
use a counter or a time-difference comparison so the plot is updated only about 4 times per second (every 250 ms), an update rate that is perfectly sufficient for the human eye. A sketch of this idea follows.
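A minimal sketch of that throttling idea (the 250 ms threshold is illustrative; GSV3USB is the class defined in the question's code):
import time
import matplotlib.pyplot as plt

gsv_data = []
temps = []
t_0 = time.time()
last_draw = 0.0

plt.ion()
fig, ax = plt.subplots()
line, = ax.plot([], [])

dev = GSV3USB(6)  # class from the question's code
while True:
    # Acquire as fast as the sensor delivers values.
    gsv_data.append(dev.read_value())
    temps.append(time.time() - t_0)

    # Redraw only about 4 times per second instead of once per sample.
    now = time.time()
    if now - last_draw > 0.25:
        line.set_data(temps, gsv_data)
        ax.relim()
        ax.autoscale_view()
        plt.pause(0.001)
        last_draw = now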

How to plot scatterplot using matplotlib from arrays (using strings)? Python

I have been trying to plot a 3D scatterplot from a pandas DataFrame (I have tried converting the data to numpy arrays and to strings before feeding it in). However, the error ValueError: s must be a scalar, or float array-like with the same size as x and y keeps popping up. My data for Patient ID is in the format EMR-001, EMR-002, etc. after anonymizing it. My data for Discharge Date has been converted to a string of numbers like 20200120. My data for Diagnosis Code is a mix of characters like 001 or 10B.
I have also looked at some other examples online but have not been able to identify the problem. Could I ask your advice on anything I missed or code I should change?
I'm using Python 3.9, UTF-8. Thank you in advance!
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#importing csv data via pd
A = pd.read_csv('input.csv') #import file for current master list
Diagnosis_Des = A["Diagnosis Code"]
Discharge_Date = A["Discharge Date"]
Patient_ID = A["Patient ID"]
B = Diagnosis_Des.to_numpy()
#B1 = np.array2string(B)
#print(B.shape)
C = Discharge_Date.to_numpy() #need to change to data format
#C1 = np.array2string(C)
#print(C1)
D = Patient_ID.to_numpy()
#D1 = np.array2string(D)
#print(D.shape)
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
sequence_containing_x_vals = D
sequence_containing_y_vals = B
print(type(sequence_containing_y_vals))
sequence_containing_z_vals = C
print(type(sequence_containing_z_vals))
plt.scatter(sequence_containing_x_vals, sequence_containing_y_vals, sequence_containing_z_vals)
pyplot.show()
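For what it's worth, a sketch of one way around that error: plt.scatter only takes x and y positions (its third positional argument is the marker size s, hence the ValueError), so a 3D scatter needs a 3D axes, and the string columns have to be mapped to numbers first. pandas.factorize is used here as one option; the column names come from the question:
import matplotlib.pyplot as plt
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection on older matplotlib

A = pd.read_csv('input.csv')

# Map each string column to integer codes so matplotlib receives numeric data.
x_codes, x_labels = pd.factorize(A["Patient ID"])
y_codes, y_labels = pd.factorize(A["Diagnosis Code"])
z_codes, z_labels = pd.factorize(A["Discharge Date"])

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x_codes, y_codes, z_codes)

ax.set_xlabel("Patient ID")
ax.set_ylabel("Diagnosis Code")
ax.set_zlabel("Discharge Date")
plt.show()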

How to use CSV in mne.io.read_raw_fif()?

I have a CSV file of EEG signals. I want to use this file with the MNE package,
so I tried this code in Colab:
import numpy as np
import mne
from mne.channels import make_standard_montage
from numpy import genfromtxt
my_data = genfromtxt('dataset.csv', delimiter=',',dtype=None)
# Some information about the channels
ch_names = ['CH 1', 'CH 2', 'CH 3'] # TODO: finish this list
# Sampling rate of the Nautilus machine
sfreq = 5000 # Hz
# Create the info structure needed by MNE
info = mne.create_info(ch_names, sfreq)
# Finally, create the Raw object
raw = mne.io.Raw(my_data, info)
# Plot it!
raw.plot()
This is the error:
From it I understand that my_data is a numpy.ndarray, while the file name should end with
raw.fif, raw.fif.gz, raw_sss.fif, raw_sss.fif.gz, raw_tsss.fif,
raw_tsss.fif.gz, or _meg.fif. If a file-like object is provided,
preloading must be used, as mentioned here.
So my question is: how can I convert the CSV to one of these formats? Is there any suggestion?
Thanks.
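One way to sidestep the FIF conversion entirely (a sketch, assuming the CSV stores one sample per row and one column per channel) is to build the Raw object directly from the array with mne.io.RawArray:
import mne
from numpy import genfromtxt

my_data = genfromtxt('dataset.csv', delimiter=',')

# RawArray expects data shaped (n_channels, n_times), so transpose
# if the CSV stores one sample per row.
data = my_data.T

ch_names = ['CH 1', 'CH 2', 'CH 3']  # must match the number of channels in `data`
sfreq = 5000  # Hz
info = mne.create_info(ch_names, sfreq, ch_types='eeg')

raw = mne.io.RawArray(data, info)
raw.plot()

# If a FIF file is really needed (e.g. for read_raw_fif), it can be saved afterwards:
raw.save('dataset_raw.fif', overwrite=True)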

Convert geopandas dataframe to GEE feature collection using python

Given a geopandas dataframe (e.g. df, which contains a geometry field), is the following the simplest way to convert it into an ee.FeatureCollection?
import ee

features = []
for index, row in df.iterrows():
    g = ee.Geometry.Point([row['geometry'].x, row['geometry'].y])
    # Define feature with a geometry and 'name' field from the dataframe
    feature = ee.Feature(g, {'name': ee.String(row['name'])})
    features.append(feature)
fc = ee.FeatureCollection(features)
If you want to convert a points GeoDataFrame (GeoPandas) to an ee.FeatureCollection, you can use this function:
import ee
import geopandas as gpd
import numpy as np
from functools import reduce
from geopandas import GeoDataFrame
from shapely.geometry import Point, Polygon

def make_points(gdf):
    features = []
    for geom in gdf.geometry:
        x, y = geom.coords.xy
        cords = np.dstack((x, y)).tolist()
        double_list = reduce(lambda a, b: a + b, cords)
        single_list = reduce(lambda a, b: a + b, double_list)
        point = ee.Geometry.Point(single_list)
        feature = ee.Feature(point)
        features.append(feature)
        # print("done")
    ee_object = ee.FeatureCollection(features)
    return ee_object

points_features_collections = make_points(points_gdf)
To write this function, I based it on this reference.
You can build a FeatureCollection from a json object. So if your geometry data file type is GeoJson you can do the following:
# import libraries
import ee
import json
# initialize earth engine client
ee.Initialize()
# load your geometry data (which should be in a GeoJson file)
with open("my_geometry_data.geojson") as f:
geojson = json.load(f)
# construct a FeatureCollection object from the json object
fc = ee.FeatureCollection(geojson)
If your geometry data is in a different format (shapefile, geopackage), you can first save it as GeoJson and then build the FeatureCollection object.
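A short sketch of that conversion step (the file names are placeholders):
import geopandas as gpd

# Read the shapefile / geopackage and re-save it as GeoJSON.
gdf = gpd.read_file("my_geometry_data.shp")
gdf.to_file("my_geometry_data.geojson", driver="GeoJSON")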
Finally, if you don't want to write any conversion code, and want just to convert your Geopandas.GeoDataFrame instantly to ee.FeatureCollection you can use the python package: geemap
geemap has several functions for converting geometry data to a FeatureCollection and vice versa. You can see examples here. In your case, you need the geopandas_to_ee function, so your code would look like this:
# import libraries
import ee
import geemap
import geopandas as gpd
# initialize earth engine client
ee.Initialize()
# load your geometry data using GeoPandas (which can be stored in different formats)
gdf = gpd.read_file("my_geometry_file.geojson")
# convert to FeatureCollection using one line of code
fc = geemap.geopandas_to_ee(gdf)
Note that under the hood, geemap is converting the GeoDataFrame to a json file, and then following the first approach I mentioned above.

How to use numpy in the Programmable Filter in ParaView

Assume I have a ProgrammableFilter in ParaView which gets two inputs: mesh1 with data and mesh2 without.
Furthermore, I know the permutation of the points from mesh1 to mesh2.
Inside the filter, I can access the point values through
data0=inputs[0].GetPointData().GetArray('data')
and obtain a part of the array using
subData=data0[0:6]
for example. But how could I add this subData to the output without a python loop?
To experiment with the code, I created a (not so small) working example:
#!/usr/bin/python
from paraview.simple import *
import numpy as np
import vtk
from vtk.util.numpy_support import numpy_to_vtk
#generate an arbitrary source with data
mesh2=Sphere()
mesh2.Center=[0.0, 0.0, 0.0]
mesh2.EndPhi=360
mesh2.EndTheta=360
mesh2.PhiResolution=100
mesh2.Radius=1.0
mesh2.StartPhi=0.0
mesh2.StartTheta=0.0
mesh2.ThetaResolution=100
mesh2.UpdatePipeline()
#add the data
mesh2Vtk=servermanager.Fetch(mesh2)
nPointsSphere=mesh2Vtk.GetNumberOfPoints()
mesh2Data=paraview.vtk.vtkFloatArray()
mesh2Data.SetNumberOfValues(nPointsSphere)
mesh2Data.SetName("mesh2Data")
#TODO: use numpy here?? do this with a ProgrammableFilter ?
data=np.random.rand(nPointsSphere,1)
for k in range(nPointsSphere):
    mesh2Data.SetValue(k, data[k])
mesh2Vtk.GetPointData().AddArray(mesh2Data)
#send back to paraview server
#from https://public.kitware.com/pipermail/paraview/2011-February/020120.html
t=TrivialProducer()
filter= t.GetClientSideObject()
filter.SetOutput(mesh2Vtk)
t.UpdatePipeline()
w=CreateWriter('Sphere_withData.vtp')
w.UpdatePipeline()
Delete(w)
#create mesh1 without data
mesh1=Line()
mesh1.Point1=[0,0,0]
mesh1.Point2=[0,0,1]
mesh1.Resolution=5
mesh1.UpdatePipeline()
progFilter=ProgrammableFilter(mesh1)
progFilter.Input=[mesh1, t]
progFilter.Script="curT=inputs[1].GetPointData().GetArray('mesh2Data')"\
"\nglobIndices=range(0,6)"\
"\nsubT=curT[globIndices]"\
"\nswap=vtk.vtkFloatArray()"\
"\nswap.SetNumberOfValues(len(globIndices))"\
"\nswap.SetName('T')"\
"\n#TODO: how can i avoid this loop, i.e. write output.GetPointData().AddArray(converToVTK(subT))"\
"\nfor k in range(len(globIndices)):"\
"\n swap.SetValue(k,subT[k])"\
"\noutput.PointData.AddArray(swap)"
progFilter.UpdatePipeline()
w=CreateWriter('Line_withData.vtp')
w.UpdatePipeline()
Delete(w)
I accepted the answer because it looks right. The following two scripts demonstrate the problem:
base script 'run.py':
src1='file1.vtu'
r1=XMLUnstructuredGridReader(FileName=src1)
progFilter=ProgrammableFilter(r1)
progFilter.Input=[r1]
with open('script.py','r') as myFile:
    progFilter.Script=myFile.read()
progFilter.UpdatePipeline()
progData=progFilter.GetPointDataInformation()
print progData.GetArray('T2').GetRange()
and the script for the programmable filter:
import vtk
import vtk.numpy_interface.dataset_adapter as dsa
import numpy as np
globIndices=inputs[0].GetPointData().GetArray('T')
subT=np.ones((globIndices.shape[0],1))
subTVtk=dsa.VTKArray(subT)
output.PointData.append(subTVtk, 'T2')
With this combination, I get the error messages:
File "/usr/lib/python2.7/dist-packages/vtk/numpy_interface/dataset_adapter.py", line 652, in append
self.VTKObject.AddArray(arr)
TypeError: AddArray argument 1: method requires a VTK object
File "run.py", line 15, in
print progData.GetArray('T2').GetRange()
AttributeError: 'NoneType' object has no attribute 'GetRange'
The first error message seems to be the cause of the second one.
Here's a minimal example that creates a VTK data array from a Numpy array. You should be able to adapt it for your purposes.
import numpy as np
import vtk
from vtk.numpy_interface import dataset_adapter as da
np_arr = np.ones(6)
vtk_arr = da.VTKArray(np_arr)
output.PointData.append(vtk_arr, "my data")
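Adapted to the sub-array case from the question, the same idea looks roughly like this (a sketch; the array name and index range are taken from the question, and inputs/output are the variables ParaView injects into the ProgrammableFilter script):
import numpy as np
from vtk.numpy_interface import dataset_adapter as da

# Script body for the ProgrammableFilter:
curT = inputs[1].GetPointData().GetArray('mesh2Data')  # numpy-backed VTKArray
globIndices = range(0, 6)
subT = np.asarray(curT)[globIndices]                   # plain numpy slicing, no explicit loop
output.PointData.append(da.VTKArray(subT), 'T')        # wrap and attach in one call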
