I am trying to step through a thread. This works when I use debugger.SetAsync(False), but I want to do it asynchronously. Here is a script that reproduces the problem: it steps correctly with debugger.SetAsync(False) but not with True. I added time.sleep calls to give the process time to execute my instructions. I expect frame.pc to show the next instruction.
import time
import sys
lldb_path = "/Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources/Python"
sys.path = sys.path + [lldb_path]
import lldb
import os

exe = "./a.out"
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(True)  # change this to False to make it work
target = debugger.CreateTargetWithFileAndArch(exe, lldb.LLDB_ARCH_DEFAULT)
if target:
    main_bp = target.BreakpointCreateByName("main", target.GetExecutable().GetFilename())
    print main_bp
    launch_info = lldb.SBLaunchInfo(None)
    launch_info.SetExecutableFile(lldb.SBFileSpec(exe), True)
    error = lldb.SBError()
    process = target.Launch(launch_info, error)
    time.sleep(1)
    # Make sure the launch went ok
    if process:
        # Print some simple process info
        state = process.GetState()
        print 'process state'
        print state
        thread = process.GetThreadAtIndex(0)
        frame = thread.GetFrameAtIndex(0)
        print 'stop loc'
        print hex(frame.pc)
        print 'thread stop reason'
        print thread.stop_reason
        print 'stepping'
        thread.StepInstruction(False)
        time.sleep(1)
        print 'process state'
        print process.GetState()
        print 'thread stop reason'
        print thread.stop_reason
        frame = thread.GetFrameAtIndex(0)
        print 'stop loc'
        print hex(frame.pc)  # invalid output?
Version: lldb-340.4.110 (Provided with Xcode)
Python: Python 2.7.10
OS: Mac OS X Yosemite
The "async" version of the lldb APIs uses an event-based system. You can't wait for things to happen using sleeps; instead, use the WaitForEvent APIs lldb provides. An example of how to do this is given at:
https://github.com/llvm/llvm-project/blob/main/lldb/examples/python/process_events.py
There's a bunch of stuff at the beginning of the example that shows how to load the lldb module and does argument parsing. The part you want to look at is the loop:
listener = debugger.GetListener()
# sign up for process state change events
stop_idx = 0
done = False
while not done:
    event = lldb.SBEvent()
    if listener.WaitForEvent(options.event_timeout, event):
and below.
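As a minimal sketch of how such a loop could replace the time.sleep(1) calls in the question's script: the helper name wait_for_stop and the timeout value below are illustrative, while WaitForEvent, SBProcess.GetStateFromEvent, and the eState constants are the lldb APIs the example script uses. The lldb module is passed in as a parameter here only so the sketch stays self-contained.

```python
# Sketch only: block until the process reports a stopped (or exited)
# state instead of sleeping.  Assumes the debugger/target/process were
# set up as in the question's script.
def wait_for_stop(lldb, debugger, event_timeout=5):
    """Return True once the process stops, False on exit or timeout."""
    listener = debugger.GetListener()
    event = lldb.SBEvent()
    while listener.WaitForEvent(event_timeout, event):
        state = lldb.SBProcess.GetStateFromEvent(event)
        if state == lldb.eStateStopped:
            return True   # frames can be inspected now
        if state == lldb.eStateExited:
            return False  # process is gone
    return False          # timed out

# Intended use, replacing each time.sleep(1) in the question's script:
#   process = target.Launch(launch_info, error)
#   wait_for_stop(lldb, debugger)
#   thread.StepInstruction(False)
#   wait_for_stop(lldb, debugger)
```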
Related
I am trying to run a few Alteryx workflows in parallel from Python using subprocess. Here is the script.
import subprocess
import os
import time

workflows = ["C:\\Users\\vevek.seetharaman\\Sample2.yxmd", "C:\\Users\\vevek.seetharaman\\Sample.yxmd"]
processes = []
for file in workflows:
    p = subprocess.Popen(["C:\\Users\\vevek.seetharaman\\AppData\\Local\\Alteryx\\bin\\AlteryxEngineCmd.exe", file], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    processes.append(p)
print(processes)

for i in processes:
    while i.poll() is None:
        # Process hasn't exited yet, let's wait some
        time.sleep(0.5)

res = processes[0].communicate()
print("return code =", processes[0].returncode)
print("stderr =", res[1])
res2 = processes[1].communicate()
print("return code =", processes[1].returncode)
print("stderr =", res2[1])
The code runs to completion, but it does not report the right return code (it is always 0), even though I deliberately introduced an error into one of the workflows to test this. However, when I run the same workflows sequentially, they return the right codes to signal the error. Here is that script:
workflow1 = "C:\\Users\\vevek.seetharaman\\Sample 2.yxmd"
subprocess.run(["C:\\Users\\vevek.seetharaman\\AppData\\Local\\Alteryx\\bin\\AlteryxEngineCmd.exe", workflow1])
Output - return code = 0 # signifying error (this is the workflow where I deliberately created the error)
workflow2 = "C:\\Users\\vevek.seetharaman\\Sample.yxmd"
subprocess.run(["C:\\Users\\vevek.seetharaman\\AppData\\Local\\Alteryx\\bin\\AlteryxEngineCmd.exe", workflow2])
Output - return code = 1 # signifying success
PS: Apologies that I am not able to give you fully working code, as the script has to point to machine-specific paths.
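A runnable sketch of the same parallel pattern, with sys.executable standing in for AlteryxEngineCmd.exe and the workflow paths replaced by placeholder exit codes: calling communicate() per process both drains the pipes (so a chatty workflow cannot block on a full pipe buffer) and waits for exit, after which returncode is reliable, making the poll/sleep loop unnecessary.

```python
import subprocess
import sys

# Placeholder commands standing in for the two workflow invocations;
# one exits 0 and one exits 1 so the return codes differ.
commands = [
    [sys.executable, "-c", "import sys; sys.exit(0)"],
    [sys.executable, "-c", "import sys; sys.exit(1)"],
]

# start everything first, so the commands really run in parallel
processes = [subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
             for cmd in commands]

# then collect results; communicate() waits for each process to finish
results = []
for p in processes:
    out, err = p.communicate()
    results.append((p.returncode, err))

for rc, err in results:
    print("return code =", rc)
```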
I'm trying to write a very simple program which controls a remote machine using pexpect. But the remote system does not react to the sent commands.
Here is source code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import pexpect
import sys
child = pexpect.spawn('telnet 192.168.2.81 24')
res = child.expect('/ # ')
print(res)
res = child.sendline('touch foo')
print(res)
Here is output:
0
10
So, as far as I understand, the commands are executed successfully, but there is no result on the target system, i.e. the foo file is not created.
Could anybody help me?
Add the following line after pexpect.spawn(), or you will see nothing:
# for Python 2
child.logfile_read = sys.stdout
# for Python 3
child.logfile_read = sys.stdout.buffer
You also need the following statements at the end (otherwise the script exits immediately after sendline('touch foo'), so touch foo does not get a chance to run):
child.sendline('exit')
child.expect(pexpect.EOF)
According to the manual:
The logfile_read and logfile_send members can be used to separately log the input from the child and output sent to the child. Sometimes you don’t want to see everything you write to the child. You only want to log what the child sends back. For example:
child = pexpect.spawn('some_command')
child.logfile_read = sys.stdout
Okay, I'm officially out of ideas after running every sample I could find on Google up to the 19th page. I have a "provider" script. The goal of this Python script is to start other services that keep running indefinitely even after the "provider" has stopped. Basically: start the process, forget about it, and continue the script without stopping it...
My problem: python-daemon. I have actions (web-service calls to start/stop/get status of the started services). I create the start commands on the fly and perform variable substitution on the config files as required.
Let's start from this point: I have a command to run (a bash script that executes a java process, a long-running service that will be stopped some time later).
def start(command, working_directory):
    pidfile = os.path.join(working_directory, 'application.pid')
    # I expect the pid of the started application to be here.
    # The file is not created. Nothing is there.
    context = daemon.DaemonContext(working_directory=working_directory,
                                   pidfile=daemon.pidfile.PIDLockFile(pidfile))
    with context:
        psutil.Popen(command)
        # This part never runs. Even a simple print statement at this point
        # never appears. Debugging in PyCharm shows that my script returns
        # with 0 on "with context".
        with open(pidfile, 'r') as pf:
            pid = pf.read()
        return pid
From here on, my caller of this method prepares a JSON object to return to the client, which essentially contains an instance_id (don't mind it) and a pid (that will be used to stop this process in another request).
What happens? After `with context` my application exits with status 0: nothing is returned, no JSON response gets created, no pidfile gets created; only the executed psutil.Popen command runs. How can I achieve what I need? I need an independently running process and need to know its PID in order to stop it later. The executed process must keep running even if the current Python script stops for some reason. I can't get around the shell script, as that application is not mine; I have to use what I have.
Thanks for any tip!
Edit:
I tried simply using Popen from psutil/subprocess, with somewhat more promising results.
def start(self, command):
    import psutil  # or subprocess
    proc = psutil.Popen(command)
    return str(proc.pid)
Now, if I debug the application and wait some undefined time on the return statement, everything works great! The service is running, the pid is there, and I can stop it later. But when I run the provider without debugging, it returns the pid yet the process is not running. It seems Popen has no time to start the service because the whole provider exits faster.
Update:
Using os.fork:
@staticmethod
def __start_process(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.chdir(working_directory)
        proc = psutil.Popen(command)
        with open('application.pid', 'w') as pf:
            pf.write(str(proc.pid))

def start(self):
    ...
    __start_process(command, working_directory)
    with open(os.path.join(working_directory, 'application.pid'), 'r') as pf:
        pid = int(pf.read())
    proc = psutil.Process(pid)
    print("RUNNING" if proc.status() == psutil.STATUS_RUNNING else "...")
After running the above sample, RUNNING is written to the console. But after the main script exits, because I'm not fast enough:
ps auxf | grep
No instances are running...
Checking the pidfile: sure, it's there, it was created:
cat /application.pid
EMPTY, 0 bytes
From the multiple partial tips I got, I finally managed to get it working:
def start(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.setsid()
        os.umask(0)  # I'm not sure about this, not on my notebook at the moment
        os.execv(command[0], command)  # This was strange, as I needed to use the name of the shell script twice: command argv[0] [args]. Upon using ksh as command I got a nice error...
    else:
        with open(os.path.join(working_directory, 'application.pid'), 'w') as pf:
            pf.write(str(pid))
        return pid
That together solved the issue. The started process is not a child process of the running python script and won't stop when the script terminates.
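On Python 3, a similar detachment can be sketched without a manual fork: subprocess.Popen's start_new_session=True flag makes the child call setsid(), so it is not killed with the launcher's session, and Popen gives the pid directly for the pidfile. The function name and pidfile path mirror the question's code but are otherwise placeholders.

```python
import os
import subprocess

def start(command, working_directory):
    """Launch command detached from our session and record its pid."""
    proc = subprocess.Popen(
        command,
        cwd=working_directory,
        start_new_session=True,      # child runs setsid(), detaching it
        stdout=subprocess.DEVNULL,   # don't tie the child to our pipes
        stderr=subprocess.DEVNULL,
    )
    pidfile = os.path.join(working_directory, 'application.pid')
    with open(pidfile, 'w') as pf:
        pf.write(str(proc.pid))      # Popen gives us the pid directly
    return proc.pid
```

Because the child is in its own session, it receives no SIGHUP when the launching script's terminal or session goes away, which is the behavior the update above achieves with fork/setsid/execv.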
Have you tried with os.fork()?
In a nutshell, os.fork() clones the current process; it returns the PID of the child in the parent, and 0 in the child.
You could do something like this:
#!/usr/bin/env python
import os
import subprocess
import sys
import time

command = 'ls'              # YOUR COMMAND
working_directory = '/etc'  # YOUR WORKING DIRECTORY

def child(command, directory):
    print "I'm the child process, will execute '%s' in '%s'" % (command, directory)
    # Change working directory
    os.chdir(directory)
    # Execute command
    cmd = subprocess.Popen(command,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)
    # Retrieve output and error(s), if any
    output = cmd.stdout.read() + cmd.stderr.read()
    print output
    # Exiting
    print 'Child process ending now'
    sys.exit(0)

def main():
    print "I'm the main process"
    pid = os.fork()
    if pid == 0:
        child(command, working_directory)
    else:
        print 'A subprocess was created with PID: %s' % pid
        # Do stuff here ...
        time.sleep(5)
        print 'Main process ending now.'
        sys.exit(0)

if __name__ == '__main__':
    main()
Further info:
Documentation: https://docs.python.org/2/library/os.html#os.fork
Examples: http://www.python-course.eu/forking.php
Another related question: Regarding The os.fork() Function In Python
I was asked to simulate CLI with Python.
This is what I did
def somefunction(a, b):
    # code here

# consider some other functions too

print "--- StackOverFlow Shell ---"

while True:
    user_input = raw_input("#> ")
    splitit = user_input.split(" ")
    if splitit[0] == "add":
        firstNum = splitit[1]
        sNum = splitit[2]
        result = somefunction(firstNum, sNum)
        print result
    # consider some other elif blocks with "sub", "div", etc
    else:
        print "Invalid Command"
I also check the length of the list (splitit here): I allow only 3 tokens, where the first is the operation and the second and third are the arguments the function is applied to; if there are more than 3 tokens, I reject the input.
Though I somehow managed to make it work, is there a better way to achieve the same?
Use the Python cmd module.
Check the examples given on the pages below:
http://docs.python.org/library/cmd.html # Support for line-oriented command interpreters
http://www.doughellmann.com/PyMOTW/cmd # Create line-oriented command processors
prompt can be set to a string to be printed each time the user is asked for a new command.
intro is the “welcome” message printed at the start of the program.
eg:
import cmd

class HelloWorld(cmd.Cmd):
    """Simple command processor example."""
    prompt = 'prompt: '
    intro = "Simple command processor example."
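To connect this to the question, here is a hedged sketch of what the add command could look like on top of cmd.Cmd; the class name Shell and the exact messages are illustrative, while the do_* naming, default(), and cmdloop() are the cmd module's real conventions.

```python
import cmd

class Shell(cmd.Cmd):
    """Sketch of the question's calculator shell using cmd.Cmd."""
    prompt = '#> '
    intro = '--- StackOverFlow Shell ---'

    def do_add(self, line):
        """add A B  ->  print A + B"""
        parts = line.split()
        if len(parts) != 2:
            print('usage: add A B')
            return
        try:
            print(int(parts[0]) + int(parts[1]))
        except ValueError:
            print('arguments must be integers')

    def do_exit(self, line):
        """exit the shell"""
        return True  # returning True stops cmdloop()

    def default(self, line):
        # called for any command without a matching do_* method
        print('Invalid Command')

# Shell().cmdloop() would start the interactive loop.
```

The cmd module handles the prompt, input splitting, help text (from the docstrings), and dispatch to do_* methods, so the manual if/elif chain disappears.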
You should check out the VTE lib:
http://earobinson.wordpress.com/2007/09/10/python-vteterminal-example/
It works really well and you can very easily customize its look. This is how easy it is:
# make terminal
terminal = vte.Terminal()
terminal.connect("child-exited", lambda term: gtk.main_quit())
terminal.fork_command()

# put the terminal in a scrollable window
terminal_window = gtk.ScrolledWindow()
terminal_window.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
terminal_window.add(terminal)
Is it possible to detect whether a Python script was started from the command prompt or by a user double-clicking a .py file in the file explorer on Windows?
If running from the command line, there is an extra environment variable PROMPT defined.
This script will pause if launched from Explorer and not pause if run from the command line:
import os

print 'Hello, world!'
if 'PROMPT' not in os.environ:
    raw_input()
Tested on Windows 7 with Python 2.7
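The same heuristic can be wrapped in a small reusable function (sketch only; the function name is illustrative, and the check remains Windows-specific):

```python
import os

def started_from_explorer(env=None):
    """True when the PROMPT heuristic says the script was double-clicked,
    i.e. no console PROMPT variable is present in the environment."""
    env = os.environ if env is None else env
    return 'PROMPT' not in env

# typical use at the end of a script:
#   if started_from_explorer():
#       raw_input()  # keep the console window open
```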
Here is an example of how to obtain the parent process id and name of the currently running script. As suggested by Tomalak, this can be used to detect whether the script was started from the command prompt or by double-clicking in Explorer.
import win32pdh
import os

def getPIDInfo():
    """
    Return a dictionary keyed by the PID of each running process.
    The values are dictionaries with the following key-value pairs:
        - name: <name of the process>
        - parent_id: <PID of this process' parent>
    """
    # get the names and occurrences of all running process names
    items, instances = win32pdh.EnumObjectItems(None, None, 'Process', win32pdh.PERF_DETAIL_WIZARD)
    instance_dict = {}
    for instance in instances:
        instance_dict[instance] = instance_dict.get(instance, 0) + 1
    # define the info to obtain
    counter_items = ['ID Process', 'Creating Process ID']
    # output dict
    pid_dict = {}
    # loop over each program (multiple instances might be running)
    for instance, max_instances in instance_dict.items():
        for inum in xrange(max_instances):
            # define the counters for the query
            hq = win32pdh.OpenQuery()
            hcs = {}
            for item in counter_items:
                path = win32pdh.MakeCounterPath((None, 'Process', instance, None, inum, item))
                hcs[item] = win32pdh.AddCounter(hq, path)
            win32pdh.CollectQueryData(hq)
            # store the values in a temporary dict
            hc_dict = {}
            for item, hc in hcs.items():
                vtype, val = win32pdh.GetFormattedCounterValue(hc, win32pdh.PDH_FMT_LONG)
                hc_dict[item] = val
                win32pdh.RemoveCounter(hc)
            win32pdh.CloseQuery(hq)
            # obtain the pid and ppid of the current instance
            # and store it in the output dict
            pid, ppid = (hc_dict[item] for item in counter_items)
            pid_dict[pid] = {'name': instance, 'parent_id': ppid}
    return pid_dict

def getParentInfo(pid):
    """
    Return a (PID, name) tuple for the parent of the process with the given pid.
    """
    pid_info = getPIDInfo()
    ppid = pid_info[pid]['parent_id']
    return ppid, pid_info[ppid]['name']

if __name__ == "__main__":
    # Print the current PID and information about the parent process.
    pid = os.getpid()
    ppid, ppname = getParentInfo(pid)
    print "This PID: %s. Parent PID: %s, Parent process name: %s" % (pid, ppid, ppname)
    dummy = raw_input()
When run from the command prompt this outputs:
This PID: 148. Parent PID: 4660, Parent process name: cmd
When started by double clicking in explorer this outputs:
This PID: 1896. Parent PID: 3788, Parent process name: explorer
The script started from the command prompt has a parent process named cmd.exe (or a non-existent process, in case the console has been closed in the meantime).
The doubleclick-started script should have a parent process named explorer.exe.
Good question. One thing you could do is create a shortcut to the script in Windows, and pass arguments (using the shortcut's Target property) that would denote the script was launched by double-clicking (in this case, a shortcut).
I put this little function (pybyebye()) just before the return statement in some of my programs. I have tested it under Windows 10 on my desktop and laptop, and it does what I want: it pauses awaiting user input only when the program was started by double-clicking it in File Explorer. This prevents the temporary command window from vanishing before the user says so. Under Linux it does nothing, and no harm is done; likewise on a Mac.
## PYBYEBYE :
def pybyebye(eprompt="PROMPT", efps="FPS_BROWSER_"):
    "nice exit in Windows according to program launch from: IDLE, command, clix."
    ## first examine environment (os & sys having been imported) :
    ui = None
    platform = sys.platform
    ## print("os =", platform)
    if not platform.lower().startswith("win"):
        return ui  ## only relevant in Windows
    fromidle = False
    launched = "Launched from"
    if sys.executable.endswith("pythonw.exe"):
        fromidle = True  ## launched from within IDLE
    envkeys = sorted(os.environ)
    prompter = eprompt in envkeys
    browser = False
    for ek in envkeys:
        ## print(ek)
        if ek.startswith(efps):
            browser = True
            break
    ## next decide on launch context :
    if fromidle and not prompter:  ## surely IDLE
        ## print(launched, "IDLE")
        pass  ## screen won't disappear
    elif browser and not prompter:  ## run with double click
        ## print(launched, "File Explorer")
        print("Press Enter to finish ....")
        ui = input()
    elif prompter and not fromidle:  ## run from preexisting command window
        ## print(launched, "Command Window")
        pass  ## screen won't disappear
    else:  ## something funny going on, Mac or Linux ??
        print("launch mode undetermined!")
    return ui