Adding series to a dynamic, already displayed plot in matplotlib - Python

Problem
I have a problem with matplotlib. I have looked at a lot of material online (including an extensive number of Stack Overflow questions) and did not find an answer to my problem. To be clear: I know how to dynamically update a plot in matplotlib. Everything I find online deals with showing the plot AFTER having set the series; here the problem is different. Please see the code below.
I have simulation results being written on the fly to a file (say file.log). I have code that plots the results on the fly as well, using matplotlib. The problem is that, for various reasons, a new variable is sometimes written to file.log after the monitoring plot is already displayed. How can I add a line plot to the existing figure and keep updating it?
Here I give the general structure of the code I have.
The code structure I have
Shell command:
I tail the log file line by line and pipe the output to a Python script that handles each line in a for loop (this part works fine, no problem there).
tail -f -n 1 file.log | monitoring.py
The Python script. It is commented line by line. Sorry for the long code listing, but everything with matplotlib is a bit long and needs context.
import sys
import matplotlib.pyplot as plt

# here I load the previous data (before starting the monitoring)
historic_data = function_to_load_history('path/to/file.log')

# then I create a plot
figure, ax = plt.subplots()

# I keep the lines in a list
lineList = []
for serie in historic_data:
    line, = ax.plot(serie)
    lineList.append(line)

# then I display the plot
plt.ion()
plt.show()

# and I start 'listening' for tail input in the for loop
for line in sys.stdin:
    # I add the new data to the historic data
    historic_data = function_to_append(line, historic_data)
    # I update the plot and redraw, using the already created lines
    for i, serie in enumerate(historic_data):
        # OK, here is the problem: sometimes a new series is discovered in the log file,
        # so the last series have no line associated with them in lineList. I test this:
        # if the line exists in lineList, no problem, I update the existing line
        if i < len(lineList):
            lineList[i].set_data(serie)
        # HERE IS THE PROBLEM: if I encounter a new series, I add it to the plot, but it is not displayed
        else:
            line, = ax.plot(serie)
            lineList.append(line)
    figure.gca().autoscale_view()
    figure.gca().relim()
    plt.draw()
I have no idea how to ask matplotlib to draw the new serie(s). Any ideas?
Thank you =)
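For reference, here is a minimal, self-contained sketch of adding a brand-new line to an already displayed interactive figure (random data stands in for the tailed log file; the relim → autoscale_view → pause sequence shown here is one common way to make the new artist appear, not necessarily the only one):

import matplotlib.pyplot as plt
import numpy as np

plt.ion()                               # interactive mode: show() does not block
figure, ax = plt.subplots()
lineList = [ax.plot(np.random.rand(10))[0]]
plt.show()

for step in range(5):
    new_serie = np.random.rand(10)
    line, = ax.plot(new_serie)          # add a brand-new line to the live figure
    lineList.append(line)
    ax.relim()                          # recompute the data limits from all artists
    ax.autoscale_view()                 # then rescale the view to those limits
    figure.canvas.draw_idle()           # request a redraw
    plt.pause(0.5)                      # give the GUI event loop time to render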

Related

How to save python notebook cell code to file in Colab

TLDR: How can I make a notebook cell save its own python code to a file so that I can reference it later?
I'm doing tons of small experiments where I make adjustments to Python code to change its behaviour, and then run various algorithms to produce results for my research. I want to save the cell code (the actual python code, not the output) into a new uniquely named file every time I run it so that I can easily keep track of which experiments I have already conducted. I found lots of answers on saving the output of a cell, but this is not what I need. Any ideas how to make a notebook cell save its own code to a file in Google Colab?
For example, I'm looking to save a file that contains the entire below snippet in text:
df['signal adjusted'] = df['signal'].pct_change() + df['baseline']
results = run_experiment(df)
All cell code is stored in a list variable named In.
For example, you can print the latest cell with:
print(In[-1])  # show itself
# output: print(In[-1])  # show itself
So you can easily save the content of In[-1] or In[-2] to wherever you want.
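For instance, a short sketch that saves the running cell's own source to a uniquely named file (the timestamp-based file name is just one possible scheme, not something Colab requires):

import time

# hypothetical naming scheme: one file per run, stamped with the current time
fname = f"experiment_{time.strftime('%Y%m%d_%H%M%S')}.py"
with open(fname, "w") as f:
    f.write(In[-1])   # In[-1] is the source of the cell currently being executed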
Posting one potential solution, but still looking for a better and cleaner option.
By defining the entire cell as a string, I can execute it and save it to a file with a separate command:
cell_str = '''
df['signal adjusted'] = df['signal'].pct_change() + df['baseline']
results = run_experiment(df)
'''
exec(cell_str)
with open('cell.txt', 'w') as f:
f.write(cell_str)

Reading .csv file while it is being written

I have a problem similar to the one described in this other question:
Reading from a CSV file while it is being written to
Unfortunately, the solution is not explained there.
I'd like to create a script that dynamically plots some variables from a .csv file. The .csv is updated every time a sensor registers something.
My basic idea was to read the file at a fixed interval and, if the number of rows has increased, update the plot with the new values.
How can I proceed?
I am not that experienced with csv, but take this logic 😊:
import csv

def write_and_read(one_row):
    # append the new row ("a"); change "a" to "w" if you want to overwrite instead
    with open("path/file.csv", "a", newline="") as f:
        csv.writer(f).writerow(one_row)
    # read the whole file back
    with open("path/file.csv", "r", newline="") as f:
        return list(csv.reader(f))

for row in rows:   # rows: your list (or other iterable) of rows to log
    print(write_and_read(row))
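Tying this back to the plotting part of the question, a minimal polling sketch could look like the following (the file name, the column index, and the 1-second interval are assumptions, not taken from the question):

import csv
import matplotlib.pyplot as plt

plt.ion()
fig, ax = plt.subplots()
line, = ax.plot([], [])
rows_seen = 0

while True:
    with open("sensor_data.csv", newline="") as f:
        rows = list(csv.reader(f))
    if len(rows) > rows_seen:               # new rows arrived since the last poll
        rows_seen = len(rows)
        ys = [float(r[0]) for r in rows]    # assume the value of interest is in column 0
        line.set_data(range(len(ys)), ys)
        ax.relim()
        ax.autoscale_view()
        fig.canvas.draw_idle()
    plt.pause(1.0)                          # redraw and poll roughly once per second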

Live graph plot from a CSV file with matplotlib

I followed the tutorial on live graph plotting from a CSV/TXT file, but when I run the Python program no graph is created; instead, the terminal goes into busy mode until I exit with 'Ctrl+Z'.
For some reason, the animate function in matplotlib is not working for me. Instead, I wrote the following code, which is supposed to do the job:
import matplotlib.pyplot as plt

while True:
    pullData = open("data1.csv", "r").read()
    dataArray = pullData.split('\n')
    xar = []
    yar = []
    for eachLine in dataArray:
        if len(eachLine) > 1:
            x, y = eachLine.split(',')
            xar.append(x)
            yar.append(y)
    plt.plot(xar, yar)
    plt.pause(0.05)
plt.show()
But the above code is not reading the data points from the CSV file properly and is generating the wrong graph.
I currently have Python 3.6.5 :: Anaconda, Inc. installed on the system. Could someone help with this, please? Thank you in advance.
You could use the polt Python package, which I developed for this exact purpose of displaying live data.
Supposing you want to display live time series of multiple data columns in a CSV file, you could just pipe the live CSV stream (the header plus the newly appended rows) into polt:
(head -n1 myfile.csv; tail -fn0 myfile.csv) | polt add-source -p csv live
Explanation
(
  head -n1 myfile.csv;   # output the first line of the CSV file (the header)
  tail -fn0 myfile.csv   # output new CSV data continuously
) | polt \               # pipe the data into polt
  add-source -p csv \    # tell polt to interpret the data as CSV
  live                   # do the live plotting
If you do not want to plot time series directly, you can check the polt Animator documentation for further display possibilities.
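As an aside, a pure-matplotlib variant of the loop from the question that parses the values as floats and redraws a single line in place might look like this (it assumes data1.csv has exactly two comma-separated numeric columns):

import matplotlib.pyplot as plt

plt.ion()
fig, ax = plt.subplots()
line, = ax.plot([], [])

while True:
    xar, yar = [], []
    with open("data1.csv") as f:
        for each_line in f:
            if "," in each_line:
                x, y = each_line.strip().split(",")
                xar.append(float(x))    # convert from string to number
                yar.append(float(y))
    line.set_data(xar, yar)             # update one line instead of re-plotting every pass
    ax.relim()
    ax.autoscale_view()
    plt.pause(0.05)                     # redraws the figure and yields to the GUI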

Plotly graph streaming, getting data from local text file creates weird behavior

I used the Plotly streaming API from Python (plot.ly/python/streaming-tutorial) to set up a dashboard with graphs showing data streamed from local log files (.txt).
I followed the tutorial to create a graph of a data stream; reproducing the "Getting started" example worked fine (although I had to change py.iplot() into py.plot()).
I made some small modifications to the working example code to get the Y-axis value from a text file on my local drive. It does manage to plot the value written in my text file on the graph, and it even updates as I modify the value in the file, but it behaves differently from the graph produced by the example code for a streamed Plotly graph. I include both my code for the "Example" graph and my code for the "Data from local text file" graph, plus images of the two behaviors.
The first two images show the Plot and Data produced by the "Example" code, and the last two those of the "Data from local text file" code: http://imgur.com/a/ugo6m
The interesting thing is that in the first case (Example), each updated value of Y is shown on a new line in the Data tab. This is not the case for the data from the local text file: there, the Y value is updated but always takes the place of the first line. Instead of adding a new Y point and keeping the previous one, it constantly overwrites the first value that Y received. I think the problem comes from there.
Here's a link for both codes; they're short, and only the last few lines matter, since those lines are the only difference between the two. I tried different working expressions to read the value from the text file (with open('data.txt', 'r'), for example), but nothing does it. Does anyone know how to make it work properly?
(!!!Careful both codes run an infinite loop!!!)
"Example": http://pastebin.com/6by30ANs
"Data from local text file": see below
Thanks in advance for your time,
PS: I had to put my second code here below as I do not have enough reputation to put more than 2 links.
import plotly.plotly as py
import plotly.tools as tls
import plotly.graph_objs as go
import datetime
import time

tls.set_credentials_file(username='lo.dewaele', stream_ids=['aqwoq4i2or'], api_key='PNASXMZQmLmAVLtthYq2')
stream_ids = tls.get_credentials_file()['stream_ids']
stream_id = stream_ids[0]

stream_1 = dict(token=stream_id, maxpoints=20)
trace1 = go.Scatter(
    x=[],
    y=[],
    mode='lines+markers',
    stream=stream_1
)
data = go.Data([trace1])
layout = go.Layout(title='Time Series')
fig = go.Figure(data=data, layout=layout)
py.plot(fig, filename='stream')

s = py.Stream(stream_id)
s.open()
time.sleep(1)

while True:
    graphdata = open('graphdata.txt', 'r')   # open file in read mode
    y = [graphdata.readline()]                # read the first line of the file (just one integer)
    x = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')
    s.write(dict(x=x, y=y))
    graphdata.close()                         # close the file
    time.sleep(1)
s.close()
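One detail worth noting in the loop above: readline() returns a string (with a trailing newline), and y is additionally wrapped in a list, so the stream receives text rather than a single number. Whether or not that explains the Data-tab behavior, a hedged rewrite of just the loop body that sends one numeric value per write (reusing s, datetime, and time from the setup above) would look like:

while True:
    with open('graphdata.txt', 'r') as graphdata:
        y = float(graphdata.readline().strip())   # parse the single value as a number instead of streaming raw text
    x = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')
    s.write(dict(x=x, y=y))
    time.sleep(1)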

How to produce multiple output files with a single input file using the command 'np.random.normal'?

I have a file with 44,586 lines of data. It is read in using pylab:
data = pl.loadtxt("20100101.txt")
density = data[:,0]
I need to run something like...
densities = np.random.normal(density, 30, 1)
np.savetxt('1.txt', np.vstack((densities.ravel())).T)
...and create a new file named 1.txt which has all 44,586 lines of my data randomised within the parameters I desire. Will my above commands be sufficient to read through and perform what I want on every line of data?
The more complicated part is that I want to run this 1,000 times and produce 1,000 .txt files (1.txt, 2.txt, ..., 1000.txt), each produced by the exact same command.
I get stuck when trying to run loops in scripts, as I am still very inexperienced. I am having trouble even beginning to get this running the way I'd like, and I am also confused about how to save the files under different names. I have used np.savetxt in the past, but I don't know how to make it perform this task.
Thanks for any help!
There are two minor issues here: the first is how to build the file names (which can be solved with Python's string concatenation), and the second relates to np.random.normal: when loc is an array, the size argument must match its shape, or simply be left out so that the output follows the shape of loc.
import numpy as np
import pylab as pl

data = pl.loadtxt("20100101.txt")
density = data[:, 0]

for i in range(1, 1001):
    densities = np.random.normal(loc=density, scale=30)   # one random sample per input line
    np.savetxt(str(i) + '.txt', densities)                 # file name built by string concatenation
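If zero-padded names are preferable so the files sort naturally in a directory listing, one small variation on the loop (purely a naming choice, not something np.savetxt requires) is:

for i in range(1, 1001):
    densities = np.random.normal(loc=density, scale=30)
    np.savetxt(f"{i:04d}.txt", densities)   # e.g. 0001.txt, 0002.txt, ..., 1000.txt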
