LLDB: silently continue after python script is done executing - python

I've written a Python script that I am attaching to a watchpoint in LLDB, such as:
def wpCallback(frame, wp, internal_dict):
    ...
and I am attaching the callback with:
watchpoint command add -F commands.wpCallback watchpointID
I would like execution of the program to immediately resume after wpCallback is finished. Currently, execution halts as the watchpoint normally would. Is it possible to silently continue after the function is done? Based on this answer it seems like you can do something like this in GDB:
break foo if x>0
commands
silent
do something...
cont
end

You should be able to call SBProcess.Continue() on your process in your watchpoint callback. That is, if the first argument to your callback is called frame, do:
frame.thread.process.Continue()
That works for breakpoints, but seems to be broken for watchpoints in current TOT lldb. It looks like it disables the watchpoint. That's:
https://llvm.org/bugs/show_bug.cgi?id=28055
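For reference, a minimal sketch of such a callback (assuming it lives in commands.py, as in the question) might look like this:
# commands.py -- sketch of a watchpoint callback that resumes execution
# after doing its work (subject to the watchpoint caveat noted above)
def wpCallback(frame, wp, internal_dict):
    # ... inspect frame / wp here ...
    print("watchpoint %d hit in %s" % (wp.GetID(), frame.GetFunctionName()))
    # Resume the inferior instead of leaving it stopped at the watchpoint
    frame.thread.process.Continue()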

How to run Anki-Qt script without requiring a breakpoint?

If I run the toy script below as shown:
import sys
sys.path.append('/usr/share/anki')
import aqt
app = aqt._run(argv=['/usr/bin/anki', '--profile=test'], exec=False)
# breakpoint()
print(repr(aqt.mw.col))
aqt.mw.cleanupAndExit()
...I get the following output, which is not right:
$ python3 /tmp/ankiq.py
None
If I uncomment the commented statement, however, and re-run the modified script, I get the correct output (eventually):
$ python3 /tmp/ankiq.py
> /tmp/ankiq.py(8)<module>()
-> print(repr(aqt.mw.col))
(Pdb) c
<anki.collection._Collection object at 0x7f32ec1417b8>
I would like to avoid the need for the breakpoint() statement (and for having to hit c whenever I want to run such code).
My guess is that, when the breakpoint() statement is commented out, the print statement happens before aqt.mw has been fully initialized.
(I tried replacing the breakpoint() statement with time.sleep(1), but when I run the script with this modification, it hangs before ever printing any output.)
Q: How can I modify the toy script above so that, by the time the print statement executes, aqt.mw.col has its correct value?
It seems that calling aqt._run(*args, exec=False) returns a QApplication object - but without starting its event-loop. To manually process pending events, you could therefore try calling app.processEvents().
From the comments, it appears the exact solution is as follows:
while aqt.mw.col is None:
    app.processEvents()
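Putting that together, the toy script might look like this (a sketch built only from the code above; untested):
import sys
sys.path.append('/usr/share/anki')
import aqt

app = aqt._run(argv=['/usr/bin/anki', '--profile=test'], exec=False)

# aqt.mw.col is filled in by Qt event handlers, so pump pending events
# until it becomes available instead of breaking into the debugger
while aqt.mw.col is None:
    app.processEvents()

print(repr(aqt.mw.col))
aqt.mw.cleanupAndExit()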

Binding / piping output of run() on/into a function in Python 3 (Linux)

I am trying to use the output of an external program run using the run() function.
This program regularly emits a row of data which I need to use in my script.
I found the subprocess library and used its run()/check_output() functions.
Example:
import subprocess

def usual_process():
    # some code here
    for i in subprocess.check_output(['foo', '$$']):
        some_function(i)
Now assume that foo is already on the PATH and that it outputs a string at semi-random intervals.
I want the program to do its own things, and run some_function(i) every time foo sends a new row to its output.
This boils down to two problems: piping the output into a for loop, and running this as a background subprocess.
Thank you
Update: I have managed to get the foo output into some_function using this:
import os

with os.popen('foo') as foos_output:
    for line in foos_output:
        some_function(line)
According to this, os.popen is to be deprecated, but I have yet to figure out how to pipe internal processes in Python.
Now I just need to figure out how to run this function in the background.
So, I have solved it.
The first step was to start the external script:
from subprocess import Popen, PIPE

proc = Popen('./cisla.sh', stdout=PIPE, bufsize=1)
Next, I started a function that would read it, and passed it the pipe:
from threading import Thread

def foo(proc, **args):
    for i in proc.stdout:
        '''Do all I want to do with each line'''

Thread(target=foo, args=(proc,)).start()
Limitations are:
If you wish to catch the script's errors, you would have to pipe them in as well.
Second, it leaves a zombie if you kill the parent, so don't forget to kill the child in your signal handling.
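A rough sketch of how both limitations might be handled (stderr merged into the same pipe, and the child reaped from a signal handler; './cisla.sh' is the script from above):
import signal
from subprocess import Popen, PIPE, STDOUT

# Capture the child's stderr alongside its stdout so errors are not lost
proc = Popen('./cisla.sh', stdout=PIPE, stderr=STDOUT, bufsize=1)

def cleanup(signum, frame):
    proc.terminate()   # ask the child to exit
    proc.wait()        # reap it so no zombie is left behind
    raise SystemExit(128 + signum)

# Kill the child when the parent is interrupted or terminated
signal.signal(signal.SIGINT, cleanup)
signal.signal(signal.SIGTERM, cleanup)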

get a return value from a Daemon Process in Python

I have written a python daemon process that can be started and stopped using the following commands
/usr/local/bin/daemon.py start
/usr/local/bin/daemon.py stop
I can achieve the same results by calling these commands from a python script
os.system('/usr/local/bin/daemon.py start')
os.system('/usr/local/bin/daemon.py stop')
This works perfectly fine, but now I wish to add functionality to the daemon process such that when I run the command
os.system('/usr/local/bin/daemon.py foo')
the daemon returns a Python object. So, something like:
foobar = os.system('/usr/local/bin/daemon.py foo')
Just to be clear, I have all the logic ready in the daemon to return a Python object; I just can't figure out how to pass this object to the calling Python script. Is there some way?
Don't you mean you want to implement simple serialization and deserialization?
In that case I'd propose looking at pickle (https://docs.python.org/2/library/pickle.html) to transform your data into a generic text format on the daemon side and transform it back into a Python object on the client side.
I think marshalling is what you need: https://docs.python.org/2.7/library/marshal.html & https://docs.python.org/2/library/pickle.html
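A rough sketch of the pickle route, shown for Python 3 (the daemon path and the 'foo' command come from the question; the returned object is a stand-in). Note that os.system only returns the exit status, so the caller captures the daemon's output with subprocess instead:
# Daemon side, inside the handler for the 'foo' command: write the
# pickled object to stdout
import pickle
import sys

result = {'status': 'ok', 'items': [1, 2, 3]}   # stand-in for the real object
sys.stdout.buffer.write(pickle.dumps(result))

# Caller side: capture the daemon's output and unpickle it
import pickle
import subprocess

raw = subprocess.check_output(['/usr/local/bin/daemon.py', 'foo'])
foobar = pickle.loads(raw)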

Preventing write interrupts in python script

I'm writing a parser in Python that outputs a bunch of database rows to standard out. In order for the DB to process them properly, each row needs to be fully printed to the console. I'm trying to prevent interrupts from making the print command stop halfway through printing a line.
I tried the solution that recommended using a signal handler override, but this still doesn't prevent the row from being partially printed when the program is interrupted. (I think the WRITE system call is cancelled to handle the interrupt).
I thought that the problem was solved by issue 10956 but I upgraded to Python 2.7.5 and the problem still happens.
You can see for yourself by running this example:
# Writer
import signal

interrupted = False

def signal_handler(signum, frame):
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, signal_handler)

while True:
    if interrupted:
        break
    print '0123456789'
In a terminal:
$ mkfifo --mode=0666 pipe
$ python writer.py > pipe
In another terminal:
$ cat pipe
Then Ctrl+C the first terminal. Some of the time the second terminal will end with an incomplete sequence of characters.
Is there any way of ensuring that full lines are written?
This seems less like an interrupt problem per se than a buffering issue. If I make a small change to your code, I don't get the partial lines.
# Writer
import sys

while True:
    print '0123456789'
    sys.stdout.flush()
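Combining that with the signal handler from the question gives a writer sketch along these lines (untested):
# writer.py -- flush after every line so the reader never sees a
# partially written row, and exit cleanly on Ctrl+C
import signal
import sys

interrupted = False

def signal_handler(signum, frame):
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, signal_handler)

while not interrupted:
    sys.stdout.write('0123456789\n')
    sys.stdout.flush()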
It sounds like you don't really want to catch a signal but rather block it temporarily. This is supported by some *nix flavours; however, Python explicitly does not support this.
You can write a C wrapper for sigmasks or look for a library. However, if you are looking for a portable solution...

How to check if a shell command is over in Python

Let's say that I have this simple line in python:
os.system("sudo apt-get update")
Of course, apt-get will take some time until it's finished. How can I check in Python whether the command has finished yet?
Edit: this is the code with Popen:
os.environ['packagename'] = entry.get_text()
process = Popen(['dpkg-repack', '$packagename'])
if process.poll() is None:
    print "It still working.."
else:
    print "It finished"
Now the problem is, it never prints "It finished", even when it has really finished.
As the documentation states:
This is implemented by calling the Standard C function system(), and
has the same limitations
The C call to system simply runs the program until it exits. Calling os.system blocks your Python code until the bash command has finished, so you'll know that it is finished when os.system returns. If you'd like to do other stuff while waiting for the call to finish, there are several possibilities. The preferred way is to use the subprocess module.
from subprocess import Popen
...
# Runs the command in another process. Doesn't block
process = Popen(['ls', '-l'])
# Later
# poll() returns the command's return code, or None if it hasn't finished
if process.poll() is None:
    pass  # Still running
else:
    pass  # Has finished
Check the link above for more things you can do with Popen
For a more general approach at running code concurrently, you can run that in another thread or process. Here's example code:
import os
from threading import Thread
...
thread = Thread(group=None, target=lambda: os.system("ls -l"))
# start() runs the target in a new thread; run() would execute it in the
# current thread and block
thread.start()
# Later
if thread.is_alive():
    pass  # Still running
else:
    pass  # Has finished
Another option would be to use the concurrent.futures module.
os.system will actually wait for the command to finish and return the exit status (in an OS-dependent format).
os.system is blocking; it calls the command, waits for its completion, and returns its return code.
So, it'll be finished once os.system returns.
If your code isn't working, I think that could be caused by one of sudo's quirks: it refuses to grant rights in certain environments (I don't know the details, though).
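For the dpkg-repack example from the edit, a hedged sketch: pass the package name directly (Popen without shell=True will not expand '$packagename'), then either block with wait() or poll in a loop:
from subprocess import Popen
import time

packagename = 'some-package'   # in the question this comes from entry.get_text()
process = Popen(['dpkg-repack', packagename])

# Poll until the command finishes; process.wait() would block instead
while process.poll() is None:
    print("It is still working..")
    time.sleep(1)

print("It finished with return code %d" % process.returncode)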
