My problem seems to be similar to This Thread; however, while I think I am following the advised method, I still get a PicklingError. When I run my process locally, without sending it to an IPython cluster engine, the function works fine.
I am using zipline with IPython's notebook, so I first create a class based on zipline.TradingAlgorithm
Cell [ 1 ]
from IPython.parallel import Client
rc = Client()
lview = rc.load_balanced_view()
Cell [ 2 ]
%%px --local # This ensures that the class and modules exist on each engine
import zipline as zpl
import numpy as np

class Agent(zpl.TradingAlgorithm): # must define initialize and handle_data methods
    def initialize(self):
        self.valueHistory = None
        pass

    def handle_data(self, data):
        for security in data.keys():
            ## Just randomly buy/sell/hold for each security
            coinflip = np.random.random()
            if coinflip < .25:
                self.order(security,100)
            elif coinflip > .75:
                self.order(security,-100)
        pass
Cell [ 3 ]
from zipline.utils.factory import load_from_yahoo
start = '2013-04-01'
end = '2013-06-01'
sidList = ['SPY','GOOG']
data = load_from_yahoo(stocks=sidList,start=start,end=end)
agentList = []
for i in range(3):
    agentList.append(Agent())

def testSystem(agent,data):
    results = agent.run(data) #-- This is how the zipline based class is executed
    #-- next I'm just storing the final value of the test so I can plot later
    agent.valueHistory.append(results['portfolio_value'][len(results['portfolio_value'])-1])
    return agent

for i in range(10):
    tasks = []
    for agent in agentList:
        #agent = testSystem(agent,data) ## On its own, this works!
        #-- To test, uncomment the above line and comment out the next two
        tasks.append(lview.apply_async(testSystem,agent,data))
    agentList = [ar.get() for ar in tasks]

for agent in agentList:
    plot(agent.valueHistory)
Here is the Error produced:
PicklingError Traceback (most recent call last)/Library/Python/2.7/site-packages/IPython/kernel/zmq/serialize.pyc in serialize_object(obj, buffer_threshold, item_threshold)
100 buffers.extend(_extract_buffers(cobj, buffer_threshold))
101
--> 102 buffers.insert(0, pickle.dumps(cobj,-1))
103 return buffers
104
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
If I override the run() method from zipline.TradingAlgorithm with something like:
def run(self, data):
    return 1
then the passing off to the engines works, but obviously the guts of the test are not performed. As run is a method internal to zipline.TradingAlgorithm and I don't know everything that it does, how would I make sure it is passed through?
Trying something like this...
def run(self, data):
    return zpl.TradingAlgorithm.run(self,data)
results in the same PicklingError.
It looks like the zipline TradingAlgorithm object is not pickleable after it has been run:
import pickle
import zipline as zpl

class Agent(zpl.TradingAlgorithm): # must define initialize and handle_data methods
    def handle_data(self, data):
        pass

agent = Agent()
pickle.dumps(agent)[:32] # ok

agent.run(data) # data loaded as in the question
pickle.dumps(agent)[:32] # fails
But this suggests to me that you should be creating the Agents on the engines, and only passing data / results back and forth (ideally, not passing data across at all, or at most once).
Minimizing data transfers might look something like this:
define the class:
%%px
import zipline as zpl
import numpy as np
class Agent(zpl.TradingAlgorithm): # must define initialize and handle_data methods
    def initialize(self):
        self.valueHistory = []

    def handle_data(self, data):
        for security in data.keys():
            ## Just randomly buy/sell/hold for each security
            coinflip = np.random.random()
            if coinflip < .25:
                self.order(security,100)
            elif coinflip > .75:
                self.order(security,-100)
load the data
%%px
from zipline.utils.factory import load_from_yahoo
start = '2013-04-01'
end = '2013-06-01'
sidList = ['SPY','GOOG']
data = load_from_yahoo(stocks=sidList,start=start,end=end)
agent = Agent()
and run the code:
from IPython import parallel

def testSystem(agent, data):
    results = agent.run(data) #-- This is how the zipline based class is executed
    #-- next I'm just storing the final value of the test so I can plot later
    agent.valueHistory.append(results['portfolio_value'][len(results['portfolio_value'])-1])

# create references to the remote agent / data objects
agent_ref = parallel.Reference('agent')
data_ref = parallel.Reference('data')

tasks = []
for i in range(10):
    for j in range(len(rc)):
        tasks.append(lview.apply_async(testSystem, agent_ref, data_ref))

# wait for the tasks to complete
[ t.get() for t in tasks ]
And plot the results, never fetching the agents themselves:
%matplotlib inline
import matplotlib.pyplot as plt

for history in rc[:].apply_async(lambda : agent.valueHistory):
    plt.plot(history)
This is not quite the same as the code you shared - there you had three agents bouncing back and forth across all your engines, whereas this has one agent per engine. I don't know enough about zipline to say whether that's useful to you or not.
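If several agents per engine turn out to be useful, a rough sketch in the same style (keeping a list of agents on each engine and passing a Reference to it; the names agents and testAll are illustrative and not part of the original code) could look like this:

# in a %%px cell, on every engine
%%px
agents = [Agent() for _ in range(3)]  # hypothetical: a few independent agents per engine

# back in a normal cell
def testAll(agents, data):
    # run every local agent against the engine's local copy of the data
    for a in agents:
        results = a.run(data)
        a.valueHistory.append(results['portfolio_value'][len(results['portfolio_value'])-1])

agents_ref = parallel.Reference('agents')
tasks = [lview.apply_async(testAll, agents_ref, data_ref) for i in range(10)]
[t.get() for t in tasks]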
Related
So I've been thinking about this for a couple of days now and I can't figure it out; I've searched around but couldn't find the answer I was looking for, so any help would be greatly appreciated.
Essentially, what I am trying to do is call a method on a group of objects in my main thread from a separate thread, just once after 2 seconds, and then the thread can exit. I'm just using threading as a way of creating a non-blocking 2 second pause (if there are other ways of accomplishing this, please let me know).
I have a pyqtgraph plot that updates from a websocket stream, and the GUI can only be updated from the thread that starts it (the main one).
What happens is: I open a websocket stream, fill up a buffer for about 2 seconds, make a REST request, apply the updates from the buffer to the data from the REST request, and then update the data/plot as new messages come in. The issue is that I can't figure out how to create a non-blocking 2 second pause in the main thread without creating a child thread. If I create a child thread and pass it the object that contains the dictionary I want to update after 2 seconds, I get issues about updating the plot from a different thread. So what I THINK is happening is that when that new thread is spawned, the reference to the object I want to update (or the dictionary containing the update data) now lives in a different thread from the GUI, and that causes issues.
open websocket --> start filling buffer --> wait 2 seconds --> REST request --> apply updates from buffer to REST data --> update data as new websocket updates/messages come in.
Unfortunately the websocket and GUI only start when you run pg.exec(), and you can't break them up to start individually; you create them and then start them together (or at least I have failed to find a way to start them individually; alternatively, I also tried using a separate library to handle websockets, but that requires starting a thread for incoming messages as well).
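To illustrate, the kind of non-blocking pause I'm after would look roughly like this (a sketch using Qt's standard QTimer.singleShot, placed inside main() after refObj is created; I haven't confirmed it plays nicely with the rest of the program below):

# inside main(), after refObj = coin(); QtCore is already imported above
QtCore.QTimer.singleShot(2000, refObj.getOrderBookSnapshot)  # fires once on the main event loop after ~2 s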
Here is the minimal reproducible example; sorry it's pretty long, but I couldn't break it down any further without removing required functionality or losing context:
import json
import importlib
from requests.api import get
import functools
import time
import threading
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
QtWebSockets = importlib.import_module(pg.Qt.QT_LIB + '.QtWebSockets')
class coin():
    def __init__(self):
        self.orderBook = {'bids':{}, 'asks':{}}
        self.SnapShotRecieved = False
        self.last_uID = 0
        self.ordBookBuff = []
        self.pltwgt = pg.PlotWidget()
        self.pltwgt.show()
        self.bidBar = pg.BarGraphItem(x=[0], height=[1], width=1, brush=(25,25,255,125), pen=(0,0,0,0))
        self.askBar = pg.BarGraphItem(x=[1], height=[1], width=1, brush=(255,25,25,125), pen=(0,0,0,0))
        self.pltwgt.addItem(self.bidBar)
        self.pltwgt.addItem(self.askBar)
    def updateOrderBook(self, message):
        for side in ['a','b']:
            bookSide = 'bids' if side == 'b' else 'asks'
            for update in message[side]:
                if float(update[1]) == 0:
                    try:
                        del self.orderBook[bookSide][float(update[0])]
                    except:
                        pass
                else:
                    self.orderBook[bookSide].update({float(update[0]): float(update[1])})
                    while len(self.orderBook[bookSide]) > 1000:
                        del self.orderBook[bookSide][(min(self.orderBook['bids'], key=self.orderBook['bids'].get)) if side == 'b' else (max(self.orderBook['asks'], key=self.orderBook['asks'].get))]
        if self.SnapShotRecieved == True:
            self.bidBar.setOpts(x0=self.orderBook['bids'].keys(), height=self.orderBook['bids'].values(), width=1)
            self.askBar.setOpts(x0=self.orderBook['asks'].keys(), height=self.orderBook['asks'].values(), width=1)
    def getOrderBookSnapshot(self):
        orderBookEncoded = get('https://api.binance.com/api/v3/depth?symbol=BTCUSDT&limit=1000')
        if orderBookEncoded.ok:
            rawOrderBook = orderBookEncoded.json()
            orderBook = {'bids':{}, 'asks':{}}
            for orders in rawOrderBook['bids']:
                orderBook['bids'].update({float(orders[0]): float(orders[1])})
            for orders in rawOrderBook['asks']:
                orderBook['asks'].update({float(orders[0]): float(orders[1])})
            last_uID = rawOrderBook['lastUpdateId']
            while self.ordBookBuff[0]['u'] <= last_uID:
                del self.ordBookBuff[0]
                if len(self.ordBookBuff) == 0:
                    break
            if len(self.ordBookBuff) >= 1:
                for eachUpdate in self.ordBookBuff:
                    self.last_uID = eachUpdate['u']
                    self.updateOrderBook(eachUpdate)
                self.ordBookBuff = []
            self.SnapShotRecieved = True
        else:
            print('Error retrieving order book.') # RESTful request failed
def on_text_message(message, refObj):
    messaged = json.loads(message)
    if refObj.SnapShotRecieved == False:
        refObj.ordBookBuff.append(messaged)
    else:
        refObj.updateOrderBook(messaged)

def delay(myObj):
    time.sleep(2)
    myObj.getOrderBookSnapshot()
def main():
    pg.mkQApp()
    refObj = coin()
    websock = QtWebSockets.QWebSocket()
    websock.connected.connect(lambda : print('connected'))
    websock.disconnected.connect(lambda : print('disconnected'))
    websock.error.connect(lambda e : print('error', e))
    websock.textMessageReceived.connect(functools.partial(on_text_message, refObj=refObj))
    url = QtCore.QUrl("wss://stream.binance.com:9443/ws/btcusdt#depth#1000ms")
    websock.open(url)
    getorderbook = threading.Thread(target=delay, args=(refObj,), daemon=True) #, args = (lambda : websocketThreadExitFlag,)
    getorderbook.start()
    pg.exec()

if __name__ == "__main__":
    main()
I was looking at this StackOverflow thread on using ray.serve to have a saved TF model predict in parallel:
https://stackoverflow.com/a/62459372
I tried something similar with the following:
import ray
from ray import serve; serve.init()
import tensorflow as tf
class A:
    def __init__(self):
        self.model = tf.constant(1.0) # dummy example

    @serve.accept_batch
    def __call__(self, *, input_data=None):
        print(input_data) # test if method is entered
        # do stuff, serve model

if __name__ == '__main__':
    serve.create_backend("tf", A,
        # configure resources
        ray_actor_options={"num_cpus": 2},
        # configure replicas
        config={
            "num_replicas": 2,
            "max_batch_size": 24,
            "batch_wait_timeout": 0.1
        }
    )
    serve.create_endpoint("tf", backend="tf")
    handle = serve.get_handle("tf")

    args = [1,2,3]
    futures = [handle.remote(input_data=i) for i in args]
    result = ray.get(futures)
However, I get the following error:
TypeError: __call__() takes 1 positional argument but 2 positional arguments (and 1 keyword-only argument) were given. There's something wrong with the arguments passed into __call__.
This seems like a simple mistake, how should I change the args array so that the __call__ method is actually entered?
The API was updated for Ray 1.0. Please see the migration guide: https://gist.github.com/simon-mo/6d23dfed729457313137aef6cfbc7b54
For the specific code sample you posted, you can update it to:
import ray
from ray import serve
import tensorflow as tf
class A:
    def __init__(self):
        self.model = tf.constant(1.0) # dummy example

    @serve.accept_batch
    def __call__(self, requests):
        for req in requests:
            print(req.data) # test if method is entered
            # do stuff, serve model

if __name__ == '__main__':
    client = serve.start()
    client.create_backend("tf", A,
        # configure resources
        ray_actor_options={"num_cpus": 2},
        # configure replicas
        config={
            "num_replicas": 2,
            "max_batch_size": 24,
            "batch_wait_timeout": 0.1
        }
    )
    client.create_endpoint("tf", backend="tf")
    handle = client.get_handle("tf")

    args = [1,2,3]
    futures = [handle.remote(i) for i in args]
    result = ray.get(futures)
I am building a website with Flask, and when you click on a button I try to run my machine learning code, which is in a different .py file. But when I click on that button I get this error:
AttributeError: Can't get attribute 'Tokenizer' on <module '__main__' from 'c:filepath'
I've been told it's because the Tokenizer class can't be found when the file is unpickled. But I'm not sure why, because when I run my machine learning code on its own it works fine. It's only when I trigger it through the button in Flask that I get this error. Any help would be much appreciated.
The function I'm trying to run is starter("no"), from a file called Music_Generator_2.py
app.py
@app.route('/generated')
def generated():
    print("start")
    Music_Generator_2.start("no") # from Music_Generator_2
    print("success")
    return render_template('index.html', tested_generator="generated")
The error occurs on the second line of this code
Music_Generator_2.py
model = tf.keras.models.load_model("model_25epochs.h5", custom_objects=SeqSelfAttention.get_custom_objects())
tokenizer = pickle.load(open("tokenizer25.p", "rb"))
#generate from random
max_generate = 200
unique_notes = tokenizer.unique_word
seq_len = 200
generate = generate_from_random(unique_notes, seq_len)
generate = generate_notes(generate, model, unique_notes, max_generate, seq_len)
write_midi_file(generate, tokenizer, "rand test.mid", start=seq_len - 1, fs=7, max_generate=max_generate)
#generate from a note
max_generate = 300
unique_notes = tokenizer.unique_word # same as above
seq_len = 300
generate = generate_from_one_note(tokenizer, "72")
generate = generate_notes(generate, model, unique_notes, max_generate, seq_len)
This is the code that I'm trying to run in my machine learning program
Music_Generator_2.py
Tokenizer class
class Tokenizer:
    def __init__(self):
        self.notes_to_index = {}
        self.index_to_notes = {}
        self.num_word = 0
        self.unique_word = 0
        self.note_freq = {}

    '''transform a list of notes from strings to indexes
    list_array is a list of notes in string format'''
    def transform(self, list_array):
        transformed = []
        for i in list_array:
            transformed.append([self.notes_to_index[note] for note in i])
        return np.array(transformed, dtype=np.int32)

    '''partial fit on the dictionary of the tokenizer
    notes is a list of notes'''
    def partial_fit(self, notes):
        for note in notes:
            note_str = ",".join(str(n) for n in note)
            if note_str in self.note_freq:
                self.note_freq[note_str] += 1
                self.num_word += 1
            else:
                self.note_freq[note_str] = 1
                self.unique_word += 1
                self.num_word += 1
                self.notes_to_index[note_str] = self.unique_word
                self.index_to_notes[self.unique_word] = note_str

    '''add a new note to the dictionary
    note is the new note to be added as a string'''
    def add_new_note(self, note):
        assert note not in self.notes_to_index
        self.unique_word += 1
        self.notes_to_index[note] = self.unique_word
        self.index_to_notes[self.unique_word] = note
Solved: I moved my Tokenizer class into its own .py file and then just imported that file in both app.py and Music_Generator_2.py. I found the solution from here
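A sketch of what that layout looks like (the file name tokenizer.py is just illustrative; the point is that pickle records the class's module path, so the class must be importable under the same path from both scripts):

# tokenizer.py -- contains only the Tokenizer class shown above
class Tokenizer:
    ...

# Music_Generator_2.py
from tokenizer import Tokenizer
import pickle
# note: the pickle file may need to be re-created after the move, so it stores
# tokenizer.Tokenizer rather than __main__.Tokenizer
tokenizer = pickle.load(open("tokenizer25.p", "rb"))

# app.py
from tokenizer import Tokenizer  # same import path, so unpickling resolves the class here too
import Music_Generator_2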
This could be an issue with how you are running Flask. Are you running it inside a virtualenv? If so, make sure the correct pip packages are installed there. I would make sure the environment in which Flask runs is identical to the one where the code works when you run it on its own.
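One quick way to compare the two environments (just standard library calls, nothing specific to this project) is to print the interpreter and import paths from both the standalone run and the Flask route:

import sys
print(sys.executable)  # which Python interpreter is actually running
print(sys.prefix)      # which environment (virtualenv or system) it belongs to
print(sys.path[:3])    # first few entries of the import path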
I'm using multiprocess Python module to parallelise processing with OpenCV algorithms (e.g. ORB detector/descriptor). The multiprocess module works fine in most cases, but when it comes to returning a list of cv2.KeyPoint objects there is a problem - all fields of each key point are set to 0 when returned to the caller process, although inside the worker process all key points are correct (as returned by OpenCV).
Here is a minimal example that can be used to reproduce the error (you will need an image file called lena.png to make it work):
import numpy as np
from cv2 import ORB_create, imread, cvtColor, COLOR_BGR2GRAY
from multiprocess import Pool

feature = ORB_create(nfeatures=4)

def proc(img):
    return feature.detect(img)

def good(feat, frames):
    return map(proc, frames)

def bad(feat, frames):
    # this starts a worker process
    # and then collects result
    # but something is lost on the way
    pool = Pool(4)
    return pool.map(proc, frames)

if __name__ == '__main__':
    # it doesn't matter how many images
    # a list of images is required to make use of
    # pool from multiprocess module
    rgb_images = map(lambda fn: imread(fn), ['lena.png'])
    grey_images = map(lambda img: cvtColor(img, COLOR_BGR2GRAY), rgb_images)

    good_kp = good(feature, grey_images)
    bad_kp = bad(feature, grey_images)

    # this will fail because elements in
    # bad_kp will all contain zeros
    for i in range(len(grey_images)):
        for x, y in zip(good_kp[i], bad_kp[i]):
            # these should be the same
            print('good: pt=%s angle=%s size=%s - bad: pt=%s angle=%s size=%s' % (x.pt, x.angle, x.size, y.pt, y.angle, y.size))
            assert x.pt == y.pt
Platforms: both CentOS 7.6 and Windows 10 x64
Versions:
Python version: 2.7.15
multiprocess: 0.70.6.1
opencv-python-headless: 3.4.5.20 and 4.0.0.21
Is there a way to work around this? Use of the standard multiprocessing module is not an option because of heavy usage of lambdas and callable objects, which "can't be pickled".
After some analysis it turned out that the problem is caused by something about the cv2.KeyPoint class. This is suggested in a related question and the corresponding answer. The problem is that pickle, which is apparently used by dill here, is unable to work with this class.
A simple solution is to avoid sending instances of cv2.KeyPoint between the worker and main process. If this is not convenient, then one should wrap the data of each keypoint in a simple Python structure or dictionary and pass that instead.
An example of such a wrapper could be:
import cv2
class KeyPoint(object):
    def __init__(self, kp):
        # type: (cv2.KeyPoint) -> None
        x, y = kp.pt
        self.pt = float(x), float(y)
        self.angle = float(kp.angle) if kp.angle is not None else None
        self.size = float(kp.size) if kp.size is not None else None
        self.response = float(kp.response) if kp.response is not None else None
        self.class_id = int(kp.class_id) if kp.class_id is not None else None
        self.octave = int(kp.octave) if kp.octave is not None else None

    def to_opencv(self):
        # type: () -> cv2.KeyPoint
        kp = cv2.KeyPoint()
        kp.pt = self.pt
        kp.angle = self.angle
        kp.size = self.size
        kp.response = self.response
        kp.octave = self.octave
        kp.class_id = self.class_id
        return kp
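In the minimal example above, this would mean converting inside the worker and converting back in the caller, along these lines (a sketch; the names follow the question's code):

def proc(img):
    # wrap each cv2.KeyPoint before it crosses the process boundary
    return [KeyPoint(kp) for kp in feature.detect(img)]

def bad(feat, frames):
    pool = Pool(4)
    wrapped = pool.map(proc, frames)
    # convert back to real cv2.KeyPoint objects in the main process
    return [[kp.to_opencv() for kp in kps] for kps in wrapped]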
I'm trying to thread the following code and send data to it (at random intervals), but I can't figure out how. I'm currently saving all the data to a txt file and reading it from there, which isn't working very well. Is it possible to create a function that sends data to a specific thread (like SendDataToThread(data, ThreadNumber))? And how would I go about reading the data sent? I've seen a few solutions using a queue but I was unable to understand them. Here is the script I am temporarily using to plot the graph, which I found here. Sorry if the question seems simple, but I've never before had to mess with threading or matplotlib.
import matplotlib.pyplot as plt
from threading import Thread

plt.ion()

class DynamicUpdate():
    #Suppose we know the x range
    min_x = 0
    max_x = 10

    def on_launch(self):
        #Set up plot
        self.figure, self.ax = plt.subplots()
        self.lines, = self.ax.plot([],[], 'o')
        #Autoscale on unknown axis and known lims on the other
        self.ax.set_autoscaley_on(True)
        self.ax.set_xlim(self.min_x, self.max_x)
        #Other stuff
        self.ax.grid()
        ...

    def on_running(self, xdata, ydata):
        #Update data (with the new _and_ the old points)
        self.lines.set_xdata(xdata)
        self.lines.set_ydata(ydata)
        #Need both of these in order to rescale
        self.ax.relim()
        self.ax.autoscale_view()
        #We need to draw *and* flush
        self.figure.canvas.draw()
        self.figure.canvas.flush_events()

    #Example
    def __call__(self):
        # read/plot data
Here's some example code which shows how to do several of the things that were asked about. This uses multithreading rather than multiprocessing, and shows some examples of using queues, starting/stopping worker threads and updating a matplotlib plot with additional data.
(Part of the code comes from answers to other questions including this one and this one.)
The code shows a possible implementation of an asynchronous worker, to which data can be sent for subsequent processing. The worker uses an internal queue to buffer the data, and an internal thread (loop) that reads data from the queue, does some processing and sends the result for display.
An asynchronous plotter implementation is also shown. Results can be sent to this plotter from multiple workers. (This also uses an internal queue for buffering; this is done to allow the main program thread itself to call the function that updates the plot, which appears to be a requirement with matplotlib.)
NB This was written for Python 2.7 on OSX. Hope some of it may be useful.
import time
import threading
import Queue
import math
import matplotlib.pyplot as plt
class AsynchronousPlotter:
    """
    Updates a matplotlib data plot asynchronously.
    Uses an internal queue to buffer results passed for plotting in x, y pairs.
    NB the output_queued_results() function is intended be called periodically
    from the main program thread, to update the plot with any waiting results.
    """

    def output_queued_results(self):
        """
        Plots any waiting results. Should be called from main program thread.
        Items for display are x, y pairs
        """
        while not self.queue.empty():
            item = self.queue.get()
            x, y = item
            self.add_point(x, y)
            self.queue.task_done()

    def queue_result_for_output(self, x, y):
        """
        Queues an x, y pair for display. Called from worker threads, so intended
        to be thread safe.
        """
        self.lock.acquire(True)
        self.queue.put([x, y])
        self.lock.release()

    def redraw(self):
        self.ax.relim()
        self.ax.autoscale_view()
        self.fig.canvas.draw()
        plt.pause(0.001)

    def add_point(self, x, y):
        self.xdata.append(x)
        self.ydata.append(y)
        self.lines.set_xdata(self.xdata)
        self.lines.set_ydata(self.ydata)
        self.redraw()

    def __init__(self):
        self.xdata=[]
        self.ydata=[]
        self.fig = plt.figure()
        self.ax = self.fig.add_subplot(111)
        self.lines, = self.ax.plot(self.xdata, self.ydata, 'o')
        self.ax.set_autoscalex_on(True)
        self.ax.set_autoscaley_on(True)
        plt.ion()
        plt.show()
        self.lock = threading.Lock()
        self.queue = Queue.Queue()
class AsynchronousWorker:
    """
    Processes data asynchronously.
    Uses an internal queue and internal thread to handle data passed in.
    Does some processing on the data in the internal thread, and then
    sends result to an asynchronous plotter for display
    """

    def queue_data_for_processing(self, raw_data):
        """
        Queues data for processing by the internal thread.
        """
        self.queue.put(raw_data)

    def _worker_loop(self):
        """
        The internal thread loop. Runs until the exit signal is set.
        Processes the supplied raw data into something ready
        for display.
        """
        while True:
            try:
                # check for any data waiting in the queue
                raw_data = self.queue.get(True, 1)
                # process the raw data, and send for display
                # in this trivial example, change circle radius -> area
                x, y = raw_data
                y = y**2 * math.pi
                self.ap.queue_result_for_output(x, y)
                self.queue.task_done()
            except Queue.Empty:
                pass
            finally:
                if self.esig.is_set():
                    return

    def hang_up(self):
        self.esig.set()  # set the exit signal...
        self.loop.join() # ... and wait for thread to exit

    def __init__(self, ident, ap):
        self.ident = ident
        self.ap = ap
        self.esig = threading.Event()
        self.queue = Queue.Queue()
        self.loop = threading.Thread(target=self._worker_loop)
        self.loop.start()
if __name__ == "__main__":
ap = AsynchronousPlotter()
num_workers = 5 # use this many workers
# create some workers. Give each worker some ID and tell it
# where it can find the output plotter
workers = []
for worker_number in range (num_workers):
workers.append(AsynchronousWorker(worker_number, ap))
# supply some data to the workers
for worker_number in range (num_workers):
circle_number = worker_number
circle_radius = worker_number * 4
workers[worker_number].queue_data_for_processing([circle_number, circle_radius])
# wait for workers to finish then tell the plotter to plot the results
# in a longer-running example we would update the plot every few seconds
time.sleep(2)
ap.output_queued_results();
# Wait for user to hit return, and clean up workers
raw_input("Hit Return...")
for worker in workers:
worker.hang_up()
I improved the code a bit: I can send a value to it when it is created, which is good, but with multiprocessing I can't figure out how to make the plot show. When I call the plot without multiprocessing it works, so it might be something simple that I can't see. I'm also trying to study the code you linked to, but it's not very clear to me. Additionally, I'm trying to save the processes to a list so that later I can send data directly to a process while it is running (I think a pipe is what I'd use for this, but I'm not sure).
import matplotlib.pyplot as plt
from multiprocessing import Process

plt.ion()

class DynamicUpdate():
    #Suppose we know the x range
    min_x = 0
    max_x = 10

    def __init__(self, x):
        self.number = x

    def on_launch(self):
        #Set up plot
        self.figure, self.ax = plt.subplots()
        self.lines, = self.ax.plot([],[], 'o')
        #Autoscale on unknown axis and known lims on the other
        self.ax.set_autoscaley_on(True)
        self.ax.set_xlim(self.min_x, self.max_x)
        #Other stuff
        self.ax.grid()
        ...

    def on_running(self, xdata, ydata):
        #Update data (with the new _and_ the old points)
        self.lines.set_xdata(xdata)
        self.lines.set_ydata(ydata)
        #Need both of these in order to rescale
        self.ax.relim()
        self.ax.autoscale_view()
        #We need to draw *and* flush
        self.figure.canvas.draw()
        self.figure.canvas.flush_events()

    #Example
    def __call__(self):
        print(self.number)
        import numpy as np
        import time
        self.on_launch()
        xdata = []
        ydata = []
        for x in np.arange(0,10,0.5):
            xdata.append(x)
            ydata.append(np.exp(-x**2)+10*np.exp(-(x-7)**2))
            self.on_running(xdata, ydata)
            time.sleep(1)
        return xdata, ydata

_processes_ = []
for i in range(0,2):
    _processes_.append(Process(target=DynamicUpdate(i)))
    p = Process(target=_processes_[i])
    p.start()
    # tried adding p.join(), but it didn't change anything
    p.join()
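On the "send data directly to the running process" idea mentioned above, a minimal sketch using multiprocessing.Pipe (the names plot_worker and the sentinel handling are illustrative, not from the original code; it reuses the DynamicUpdate class from the snippet, and whether the interactive plot actually displays from a child process depends on the matplotlib backend):

import time
from multiprocessing import Process, Pipe

def plot_worker(conn):
    # the plot lives entirely in this process; it only reacts to data from the pipe
    plot = DynamicUpdate(0)
    plot.on_launch()
    xdata, ydata = [], []
    while True:
        point = conn.recv()       # blocks until the parent sends something
        if point is None:         # sentinel value: stop plotting
            break
        x, y = point
        xdata.append(x)
        ydata.append(y)
        plot.on_running(xdata, ydata)

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=plot_worker, args=(child_conn,))
    p.start()
    for i in range(10):
        parent_conn.send((i, i**2))   # send data to the already-running process
        time.sleep(0.5)
    parent_conn.send(None)            # tell the worker to finish
    p.join()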