I want to trace a thread by logging every symbol it calls, and I found two methods:
1. The lldb settings list shows this 'target.process.thread' variable:
trace-thread -- If true, this thread will single-step and log execution.
This suggests lldb will log execution, but I can't find where the log goes.
2. The lldb Python SBThread class has an event bit, eBroadcastBitSelectedFrameChanged, which I think should fire a callback whenever the thread's frame changes, but why does SBThread have no broadcaster?
1) This setting was put in mostly to help diagnose problems with lldb's stepping algorithms. Since it causes all execution to go by instruction single step, it's going to make your program execute very slowly, so it hasn't been used for anything other than that purpose (and I haven't used it for that purpose in a good while, so it might have bit-rotted.) The output is supposed to go to the debugger's stdout.
2) eBroadcastBitSelectedFrameChanged is only sent when the user changes the selected frame with command line commands. It's meant to allow a GUI like Xcode that also allows command line interaction to keep the GUI sync'ed with user commands in the console. There isn't a GetBroadcaster for threads, because threads come and go and you generally want to listen to ALL the threads, not just a particular one. To do that, call SBThread.GetBroadcasterClassName and then sign your listener up for events by class name (StartListeningForEventClass).
If you have a need to listen to a particular thread, file an enhancement request to the bug tracker at http://lldb.llvm.org.
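For reference, here is a minimal sketch of the class-based listening route in the lldb Python API; it assumes you already have an SBDebugger called debugger with a live target, and the particular event bits in the mask are just illustrative choices:
import lldb

listener = lldb.SBListener("thread-event-listener")
listener.StartListeningForEventClass(
    debugger,
    lldb.SBThread.GetBroadcasterClassName(),
    lldb.SBThread.eBroadcastBitSelectedFrameChanged | lldb.SBThread.eBroadcastBitStackChanged)

event = lldb.SBEvent()
while listener.WaitForEvent(1, event):              # wait up to 1 second per iteration
    if lldb.SBThread.EventIsThreadEvent(event):
        thread = lldb.SBThread.GetThreadFromEvent(event)
        print("thread event from tid 0x%x" % thread.GetThreadID())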
Related
I've written a program that uses tkinter to create a GUI, and in the GUI I have a button that starts a program that connects to a socket and reads in messages with signal information. I needed this to happen constantly in the background, because I had other functionality that needed to stay accessible on the GUI, and otherwise the GUI would be locked.
So I wrote code that would run that button in a new thread.
# Run everything after connect in a separate thread, so the GUI is not locked
def _start_connect_thread(self, event):
    HOST = self.ip_e.get()
    PORT = int(self.port_e.get())
    global connect_thread
    connect_thread = threading.Thread(target=self.connect, kwargs={'host': HOST, 'port': PORT})
    connect_thread.daemon = True
    connect_thread.start()

# Connect TaskTCS and StreamingDataService to AIMS
def connect(self, host=None, port=None):
    print("Connecting sensor tasking program to AIMS...")
    self.tt = TaskTCS(host, port)
    print("Connecting streaming data program to AIMS...")
    self.sd = StreamingData(host, port)
    # Run Streaming Data Service, which will pull all streaming data from the sensor
    self.sd.run()
With this code, my GUI is free to perform other tasks. Most importantly, I can press a button that plots the data coming in from the sensor. When I press the plot button, a flag is toggled in the sd class, and it uses the information coming from the sensor to plot it with matplotlib. Inside the sd class is a function that is running on a while loop, unpacking information from the sensor and checking if the flag is toggled in order to know when to plot it.
Is this not thread safe?
The reason I ask is this program works perfectly fine on the machine I'm working on. However, when I try to run this with anaconda3 python, I get these errors.
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
QObject::setParent: Cannot set parent, new parent is in a different thread
QObject::setParent: Cannot set parent, new parent is in a different thread
I'm not sure if these errors are from anaconda or from non-thread-safe coding.
When this program was run on a machine that had Python 2.6, it got this error when clicking the connect button:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/threading.py", line 532, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python2.6/threading.py", line 484, in run
    self._target(*self._args, **self._kwargs)
  File "WaterfallGUI.py", line 282, in connect
    HOST = self.ip_e.get()
  File "/usr/lib64/python2.6/lib-tk/Tkinter.py", line 2372, in get
    return self.tk.call(self._w, 'get')
TclError: out of stack space (infinite loop?)
So can a program somehow not have issues with threads on one machine but have them on others?
Note: In an attempt to solve the second error, I moved the .get() calls into _start_connect_thread, before actually starting the thread. Previously, I was calling them inside connect. Since that meant calling tkinter getters from a different thread, could that have been the issue in that case? If so, why wouldn't it cause an error on my machine with Python 2.7? This was the old code:
def _start_connect_thread(self, event):
    global connect_thread
    connect_thread = threading.Thread(target=self.connect)
    connect_thread.daemon = True
    connect_thread.start()

def connect(self):
    HOST = self.ip_e.get()
    PORT = int(self.port_e.get())
    ...
I don't believe I'm calling anything tkinter GUI related outside of the main loop in the rest of my code. I see stuff about queues but I can't tell if I need to implement that in my code.
A program can work on one machine and not on another, but "thread safety" these days means that the program provably does not invoke any "undefined behaviors" of the language that it's written in or the libraries that it uses.
If there's some machine where the program does not "work," and some other machine where it does work, then that pretty much proves that the program is not "thread safe."
The reason I ask is...when I try to run this with anaconda3 python, I get these errors...
Oh. Python.
That's not really a well specified language. It's more like a family of similar languages. You're not necessarily just running the program on a different machine, you're also porting it to what may be a subtly different language.
No, this is not possible. Either code is thread-safe or it isn't. Thread-safety is a property of the algorithms/code, not the target machine. As indicated in the comments, this is far more likely due to an environment setup difference than something about the machine.
That being said, I'm not convinced that this is exactly a thread-safety issue at all. I'm admittedly not terribly familiar with this particular GUI framework, so I could be wrong here, but based on references like this, it seems like you're trying to "directly" update the GUI from another thread, which isn't permitted. (This is actually a very common restriction; WPF, for example, has the exact same rule).
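If it is that direct-update problem, the usual fix is to keep every tkinter (and matplotlib-embedded-in-tkinter) call in the main thread and let the worker thread hand data over through a Queue that the main thread polls with .after(). Here is a minimal sketch, not the poster's actual code, where worker() merely stands in for the socket-reading loop in the sd class:
import queue
import threading
import tkinter as tk

data_q = queue.Queue()

def worker():
    # stands in for the socket-reading loop; it never touches any widget
    for i in range(100):
        data_q.put("sample %d" % i)

def poll_queue():
    try:
        while True:
            item = data_q.get_nowait()
            label.config(text=item)        # GUI update happens in the main thread
    except queue.Empty:
        pass
    root.after(100, poll_queue)            # check again in 100 ms

root = tk.Tk()
label = tk.Label(root, text="waiting")
label.pack()
threading.Thread(target=worker, daemon=True).start()
root.after(100, poll_queue)
root.mainloop()
The worker only puts data on the queue; only poll_queue(), which runs inside the tkinter main loop, updates the GUI.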
I am creating a test automation which uses an application without any interfaces. However, the application calls a batch script when it changes modes, so I am able to catch the mode transitions.
What I want to do is have the batch script give an input to my Python script (I have a state machine running in Python) during runtime, so that I can monitor the state of the application with Python instead of the batch file.
I am using a similar state machine to the one of Karn Saheb:
https://dev.to/karn/building-a-simple-state-machine-in-python
However, instead of changing states statically like:
device.on_event('event')
I want the python script to do something similar to:
while True:
    device.on_event(input())  # where the input is passed from the batch script:
REM state.bat
set CurrentState=%1
"magic code to pass CurrentState to python input()" %CurrentState%
I see that one solution would be to start the Python script from the batch file every time it is called with the "event", and then save the current event to another file when the Python script terminates... But I want to avoid that kind of handling and instead evaluate the events during runtime.
Thank you in advance!
A reasonably portable way of doing this without ugly polling on temporary files is to use a socket: have the main process listen and have the batch file(s) start a small program that connects to the server and writes a message.
There are security considerations here: you can start by listening only to the loopback interface, with further authentication if the local machine should not be trusted.
If you have more than one of these processes, or if you need to handle the child dying before it issues its next report, you’ll have to use threads or something like select to unify the news from different input channels (e.g., waiting on the child to exit vs. waiting on news from the next batch file).
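Here is a minimal sketch of that idea, assuming the device state machine from the question and an arbitrarily chosen port 5555 on the loopback interface:
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5555))    # loopback only, not reachable from other hosts
server.listen(1)

while True:
    conn, _ = server.accept()
    state = conn.recv(1024).decode().strip()
    conn.close()
    device.on_event(state)          # feed the state machine from the question
On the batch side, the "magic code" line in state.bat could simply launch a tiny client, for example: python -c "import socket; s = socket.create_connection(('127.0.0.1', 5555)); s.sendall(b'%CurrentState%'); s.close()".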
I am using ZeroMQ, which is a messaging library (presumably async I/O). If you don't know what it is, you can think of it as similar to the socket library in Python; the sockets used for messaging are usually run within an infinite while loop with a small sleep to keep everything cool.
I have that code written in one file and a GUI based on its output in a separate file, and I want to integrate the two.
The issue I come across is that I cannot possibly put a while True, or a blocking socket.recv(), inside tkinter's .mainloop().
I want to receive on a socket, which is blocking - BUT I can manage that part of the issue: zmq sockets can either be polled (checked periodically to see if there are any pending messages to process) or, equivalently, you can use zmq.DONTWAIT, which does the same thing.
The remaining issue, however, is that I need a while True so that the socket is constantly polled, say every millisecond, to see if we have messages.
How do I put a while True inside the tkinter .mainloop() that allows me to check the state of that socket constantly?
I would visualize something like this :
while True:
    update_gui()    # contains the mainloop and all GUI code
    check_socket()  # listener socket for incoming traffic
    if work:
        # do the work, while the GUI hangs for a bit
I have checked the internet and came across a solution on SO which says that you can use the after method of widgets, but I am not sure how that works. If someone could help me out I would be super grateful!
Code for reference:
zmq.DONTWAIT raises an exception if you do not have any pending messages, which makes us move forward in the loop.
while 1:
    if socket_listen and int(share_state):
        try:
            msg = socket_listen.recv_string(zmq.DONTWAIT)
        except:
            pass
    time.sleep(0.01)
I would like to be able to put this inside the .mainloop(), so that along with the GUI this also gets checked on every iteration.
Additional info: Polling here equates to:
check if we have messages on socket1
if not then proceed normally
else do work.
How do I put a while True inside the tkinter .mainloop() that allows me to check the state of that socket constantly?
Do not design such a part with an explicit while True loop; better to use the tkinter-native tooling: ask .after() to re-submit the call no later than a certain amount of time (letting other things happen concurrently, yet with a reasonable amount of certainty that your requested call will still be activated no later than "after" the specified number of milliseconds).
I love the Tkinter architecture of co-existing event processing.
So if one keeps the Finite-State-Automaton (a game, or a GUI front-end) cleanly crafted on the Tkinter grounds, one can enjoy having ZeroMQ-message data delivered and coordinated "behind" the scene, right by Tkinter-native tools, so no imperative code will be needed whatsoever. Just let the messages get translated into tkinter-monitored variables, if you indeed need a smart-working GUI integration.
aScheduledTaskID = aGuiRelatedOBJECT.after( msecs_to_wait,
aFunc_to_call = None,
*args
)
# -> <_ID_#_>
# ... guarantees a given wait-time + just a one, soloist-call
# after a delay of at least delay_ms milliseconds.
# There is no upper limit to how long it will actually take, but
# your callback-FUN will be called NO SOONER than you requested,
# and it will be called only once.
# aFunc_to_call() may "renew" with .after()
#
# .after_cancel( aScheduledTaskID ) # <- <id> CANCELLED from SCHEDULER
#
# .after_idle() ~ SCHEDULE A TASK TO BE CALLED UPON .mainloop() TURNED-IDLE
#
# aScheduledTaskOnIdleID = aGuiOBJECT.after_idle( aFunc_to_call = None,
# *args
# )
# -> <_ID_#_>
That's cool on using the ready-to-reuse tkinter native-infrastructure scheduler tools in a smart way, isn't it?
Epilogue:
( Blocking calls? Better never use blocking calls at all. Has anyone ever said blocking calls here? :o) )
a while True, or a blocking socket.recv() inside tkinter's .mainloop().
Well, one can put such a loop into a component aligned with the tkinter native-infrastructure scheduler, yet this idea is actually an antipattern and can wreak havoc (not only for tkinter; in general, in any event-loop handler it is a bit risky to expect a "competitive" event-handling loop to somehow tolerate or peacefully co-exist with adjacent intentions; problems will appear, be it from straight blocking, or from one loop being just too dominant in scheduling resources, or from other sorts of a war on time and resources).
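For concreteness, here is a minimal sketch of the poller from the question re-expressed with .after(); the PULL socket type, the tcp://127.0.0.1:5555 endpoint and the 10 ms interval are illustrative assumptions, not taken from the original code:
import tkinter as tk
import zmq

context = zmq.Context()
socket_listen = context.socket(zmq.PULL)           # assumed socket type
socket_listen.connect("tcp://127.0.0.1:5555")      # assumed endpoint

root = tk.Tk()
status = tk.StringVar(value="waiting...")
tk.Label(root, textvariable=status).pack()

def poll_socket():
    try:
        msg = socket_listen.recv_string(zmq.DONTWAIT)  # non-blocking receive
        status.set(msg)                                # update a tkinter-monitored variable
    except zmq.Again:
        pass                                           # no message pending
    root.after(10, poll_socket)                        # re-schedule in ~10 ms

root.after(10, poll_socket)
root.mainloop()
Each pass does one non-blocking check and immediately re-schedules itself, so .mainloop() stays responsive and no explicit while True is needed.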
I need to execute a command on a simple button press event in my Django project (for which I'm using subprocess.Popen() in my views.py).
After I execute this script it may take anywhere from 2 minutes to 5 minutes to complete. So while the script executes I need to disable the HTML button, but I want users to be able to continue using other web pages while the script finishes in the background. Now the real problem is that I want to re-enable the HTML button when the process finishes!
I've been stuck on this for many days. Any help or suggestion is really appreciated.
I think you have to use some "realtime" library for Django. I personally know django-realtime (a simple one) and swampdragon (less simple, but more functional). With both of these libraries you can create a web-socket connection and send messages to clients from the server that way. It may be a command for enabling the HTML button, a JavaScript alert, or whatever you want.
In your case I advise the first option, because you can send a message to the client directly from any view. And swampdragon needs a model to track changes, as far as I know.
Like valentjedi suggested, you should be using swampdragon for real time with django.
You should take the first tutorial here: http://swampdragon.net/tutorial/part-1-here-be-dragons-and-thats-a-good-thing/
Then read this as it holds knowledge required to accomplish what you want:
http://swampdragon.net/tutorial/building-a-real-time-server-monitor-app-with-swampdragon-and-django/
However, there is a difference between your situation and the example given above. In your situation:
Use Celery or any other task queue: since the action you're waiting for takes a long time to finish, you will need to push it to the background. (You can also make these tasks run one after another if you don't want to freeze your system with enormous memory usage.)
Move the part of the code that runs the script into your Celery task; in this case, Popen should be called in your Celery task and not in your view (router in swampdragon).
You then create a channel with the user's unique identifier, and add the relevant swampdragon JavaScript code in your HTML file for the button to subscribe to that user's channel (also consider disabling the feature in your view (router), since front-end code can be tampered with).
The channel's role will be to pull the Celery task state; you then disable or enable the button according to the state of the task.
overview:
Create celery task for your script.
Create a user unique channel that pulls the task state.
Disable or enable the button on the front-end according to the state of the task, and consider displaying a failure message in case the script fails so that the user can restart.
Hope this helps!
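To make the Celery side of the steps above concrete, here is a minimal sketch; the task name, the script path and the result payload are illustrative assumptions, not part of the original answer:
# tasks.py -- a sketch of moving the Popen call out of the Django view
import subprocess
from celery import shared_task

@shared_task(bind=True)
def run_long_script(self):
    # The blocking Popen/communicate happens in the Celery worker, not the view.
    proc = subprocess.Popen(["/usr/local/bin/long_script.sh"],   # assumed script path
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return {"returncode": proc.returncode, "stdout": out.decode()}
The view (or swampdragon router) would then call run_long_script.delay(), keep the returned task id, and publish AsyncResult(task_id).state to the user's channel; the front-end re-enables the button once the state reaches SUCCESS or FAILURE.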
I am working on a django web application.
A function xyz (it updates a variable) needs to be called every 2 minutes.
I want one HTTP request to start the daemon and keep calling xyz (every 2 minutes) until I send another HTTP request to stop it.
Appreciate your ideas.
Thanks
Vishal Rana
There are a number of ways to achieve this. Assuming the correct server resources, I would write a Python script that calls function xyz "outside" of your Django directory (although importing the necessary stuff) and that only runs if /var/run/django-stuff/my-daemon.run exists. Get cron to run this every two minutes.
Then, for your django functions, your start function creates the above mentioned file if it doesn't already exist and the stop function destroys it.
As I say, there are other ways to achieve this. You could have a Python script looping and waiting approx 2 minutes... etc. In either case, you're up against the fact that two Python scripts running in two different invocations of CPython (no idea if this is the case with mod_wsgi) cannot communicate with each other, and as such IPC between Python scripts is not simple, so you need to use some sort of formal IPC (like semaphores, files, etc.) rather than just shared variables (which won't work).
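Here is a minimal sketch of that flag-file approach; the module that holds xyz, the view names and the script path are illustrative assumptions:
# run_xyz.py -- invoked by cron every two minutes
import os
from myapp.jobs import xyz            # hypothetical module holding the real xyz()

FLAG = "/var/run/django-stuff/my-daemon.run"

if os.path.exists(FLAG):
    xyz()

# views.py -- the start/stop views just toggle the flag file
import os
from django.http import HttpResponse

FLAG = "/var/run/django-stuff/my-daemon.run"

def start_daemon(request):
    open(FLAG, "a").close()           # create the flag if it doesn't already exist
    return HttpResponse("started")

def stop_daemon(request):
    if os.path.exists(FLAG):
        os.remove(FLAG)
    return HttpResponse("stopped")
The crontab entry would then be something like */2 * * * * python /path/to/run_xyz.py.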
Probably a little hacky, but you could try this:
Set up a crontab entry that runs a script every two minutes. This script will check for some sort of flag (file existence, contents of a file, etc.) on the disk to decide whether to run a given python module. The problem with this is it could take up to 1:59 to run the function the first time after it is started.
I think if you started a daemon in the view function it would keep the httpd worker process alive, as well as the connection, unless you figure out how to close the connection without terminating the Django view function. This could be very bad if you want to be able to do this in parallel for different users. Also, to kill the function this way, you would have to somehow know which python and/or httpd process to kill later so you don't kill all of them.
The real way to do it would be to code an actual daemon in whatever language and just make a system call to "/etc/init.d/daemon_name start" and "... stop" in the Django views. For this, you need to make sure your web server user has permission to execute the daemon.
If the easy solutions (loop in a script, crontab signaled by a temp file) are too fragile for your intended usage, you could use Twisted facilities for process handling and scheduling and networking. Your Django app (using a Twisted client) would simply communicate via TCP (locally) with the Twisted server.