I am working with a group of EC2 instances deployed in a subnet.
I usually use the AWS SSM command
aws ssm start-session --region us-east-2 --target i-01234567abcdef --profile profile-one
to connect to one of the instances, and from there run commands such as
ssh mirrora
to hop into a different EC2 instance in the same subnet.
Then I can run ls on mirrora to look at the files and such.
Now I am trying to automate the same thing using Python.
I am new to Python (and this is my first time working with subprocess). I have researched a bit and learned that I can use Popen to open a subprocess, setting the stdin and stdout arguments to subprocess.PIPE.
Here's a sample of the code I am trying to execute
command = "aws ssm start-session --region us-east-2 --target i-0123456abcdef --profile profile-one"
ssh = subprocess.Popen(command.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=0, shell=True, universal_newlines=True)
ssh.stdin.write("ssh mirrora")
ssh.stdin.write("ls")
# output = ssh.stdout.read()
# output , err = ssh.communicate()
# print("output:" + output)
# print("err:" + err)
# output = ssh.stdout.read()
# print("Output of ls is:", output)
print("HERE")
If I use either stdout.read or the communicate method, the program seems to get stuck and nothing works; I can't even Ctrl+C in the terminal.
If I run it without any stdout.read or communicate call, it prints "HERE" almost instantaneously and nothing actually seems to happen on the server side (I tried running a shutdown command to turn off mirrora, but it was still running after the program exited).
What am I doing wrong?
Ideally I want to run commands inside the subprocess that aws ssm opens: execute certain commands on one of the EC2 instances and get the output of one or more commands via stdout.read after the corresponding stdin.write calls.
Any help or links would be appreciated.
Thanks
It seems to me that you should use run-command instead of start-session.
run-command is an SSM feature for executing static commands and scripts, whereas a session is more like a plain ssh connection.
I think it should work like this:
import boto3

script = """
ssh mirrora 'ls'
ssh mirrora 'ls; ps -aux'
"""
target = "i-01234567abcdef"

session = boto3.Session(profile_name="profile-one")
ssm_client = session.client("ssm")
response = ssm_client.send_command(
    InstanceIds=[target],
    DocumentName="AWS-RunShellScript",  # https://eu-central-1.console.aws.amazon.com/systems-manager/documents/AWS-RunShellScript/description?region=eu-central-1
    TimeoutSeconds=20,
    Comment="Example of send command",
    Parameters={
        'commands': [
            script,
        ],
        'workingDirectory': "/home/user1",
    },
)
Here you can find the documentation for the send_command API: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ssm.html#SSM.Client.send_command
If you want to view the logs in the Python script, you will need to extend it. Probably the easiest way is to set an S3 bucket in send_command (it will redirect your stdout and stderr to the provided bucket), then download the saved files from S3 and cat/print them.
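Alternatively, for short outputs you can skip S3 and poll get_command_invocation, which returns the captured stdout and stderr (truncated for long outputs). A minimal sketch, reusing ssm_client, target, and response from above:
import time

command_id = response['Command']['CommandId']

# Right after send_command the invocation may not exist yet, so wait a moment.
time.sleep(1)
while True:
    invocation = ssm_client.get_command_invocation(
        CommandId=command_id,
        InstanceId=target,
    )
    if invocation['Status'] not in ('Pending', 'InProgress', 'Delayed'):
        break
    time.sleep(1)

print("status:", invocation['Status'])
print("stdout:", invocation['StandardOutputContent'])
print("stderr:", invocation['StandardErrorContent'])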
I'm trying to write a Python script that starts a subprocess to run an Azure CLI command once the file is executed.
When I run it locally, I run:
az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area
I get prompted for an input which looks like:
Which service connection do you want to use to communicate with GitHub?
[1] Create new GitHub service connection
[2] <my connection name>
[3] <org name>
Please enter a choice [Default choice(1)]:
I can type in 2 and press enter, and my pipeline is successfully created in Azure DevOps. I would like to run this command with the choice entered dynamically when prompted.
So far I have tried:
import subprocess
cmd = 'az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area'
cmd = cmd.split()
subprocess.run(cmd, shell=True)
This will run in the exact same way as when I try to run it locally.
Trying to follow the answers from here, I have also tried:
p = subprocess.run(cmd, input="1", capture_output=True, text=True, shell=True)
print(p)
Which gives me an error saying raise NoTTYException(error_msg)\nknack.prompting.NoTTYException.
Is there a way I can execute this Python script so that it runs the Azure CLI command and enters 2 when prompted, without any manual intervention?
You are trying to solve the wrong problem. az pipelines create takes a --service-connection parameter. You don't need to respond to the prompt; you can provide the service connection value on the command line and skip the prompt entirely.
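For instance, the earlier command with the prompt short-circuited would look something like this (the final placeholder is the name or ID of your existing GitHub service connection):
az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area --service-connection <service connection id>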
IMHO, Daniel is right: you're not supposed to deal with stdin in your program.
Nevertheless, if you really need to, you should use the pexpect package, which basically opens a process, waits for a given output, and then sends input to the process's stdin.
Here's a basic example:
import pexpect
from pexpect.popen_spawn import PopenSpawn

cmd = 'az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area'
child = PopenSpawn(cmd, timeout=1)
child.expect('.*Please enter a choice.*')
child.sendline('2')
# child.interact()  # Give control of the child to the user.
Have a look at pexpect documentation for more details. MS Windows support is available since v4.0.
Another, more archaic solution would be to use subprocess in the following way, basically emulating what expect does:
import subprocess
from time import sleep

# azure_command is the same command string as above
p = subprocess.Popen(azure_command, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
sleep(.5)
stdout = p.communicate(input=b'2\n')[0]
print(stdout.decode())
Still, the best solution is to use the non-interactive mode that most CLI programs provide.
I am trying to port some simple scripts that I have in Tcl to Python.
Using Tcl/expect, we can see every executed command on the standard output. For example,
spawn ssh admin@$IP
send "ls\r"
would yield by default an output like this:
ssh admin@10.10.10.10
ls
....
In Python, the only way I saw was to decode the child.before or child.after output.
Is there a way Python can output everything it runs to the console or to a file?
This is what I am doing now:
#!/usr/bin/env python3
import pexpect
shellprompt = "] # "
child = pexpect.spawn('ssh admin@XYZ')
child.sendline('ls')
child.expect(shellprompt)
ls_out = child.before.decode()
print(ls_out)
This is run on a Linux machine, connecting via ssh to another Linux machine.
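One way to get expect-style live output is pexpect's logfile attributes. A minimal sketch, assuming pexpect 4.x (the host and prompt are placeholders):
import sys
import pexpect

shellprompt = "] # "
child = pexpect.spawn('ssh admin@XYZ', encoding='utf-8')
# Mirror everything the child prints to the console as it happens;
# commands we send are echoed back by the remote pty, so they show up too.
child.logfile_read = sys.stdout
child.expect(shellprompt)
child.sendline('ls')
child.expect(shellprompt)
child.sendline('exit')
child.expect(pexpect.EOF)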
I'm trying to run a sequence of shell commands in the same environment:
same exported variables, persistent history, etc.
And I want to work with each command's output before running the next command.
After looking over Python's subprocess.run and pexpect.spawn, neither seems to provide both features.
subprocess.run allows me to run one command and then examine the output, but not to keep the environment open for another command.
pexpect.spawn("bash") allows me to run multiple commands in the same environment, but I can't get the output until EOF, when bash itself exits.
Ideally I would like an interface that can do both:
shell = bash.new()
shell.run("export VAR=2")
shell.run("whoami")
print(shell.exit_code, shell.stdout())
# 0, User
shell.run("echo $VAR")
print(shell.stdout())
# 2
shell.run("!!")
print(shell.stdout())
# 2
shell.run("cat file -")
shell.stdin("Foo Bar")
print(shell.stdout())
# Foo Bar
print(shell.stderr())
# cat: file: No such file or directory
shell.close()
Sounds like a case for Popen. You can specify bufsize to disable buffering, if it gets in the way.
Example from the linked page:
from subprocess import Popen, PIPE

# log is any writable binary file object, e.g. log = open("net.log", "wb")
with Popen(["ifconfig"], stdout=PIPE) as proc:
    log.write(proc.stdout.read())
There's also proc.stdin for sending more commands, and proc.stderr.
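Putting those pieces together, here is a minimal sketch of the interface above, assuming a POSIX system with bash on PATH. The sentinel protocol is purely illustrative: it breaks if a command itself prints the sentinel, and stderr is not captured separately.
import subprocess

SENTINEL = "__CMD_DONE__"

# Start one long-lived bash; its environment persists across commands.
shell = subprocess.Popen(
    ["bash"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
    bufsize=1,  # line buffered
)

def run(cmd):
    # Send the command, then a sentinel carrying its exit code, so we
    # know where this command's output ends.
    shell.stdin.write(f"{cmd}\necho {SENTINEL} $?\n")
    shell.stdin.flush()
    out_lines = []
    for line in shell.stdout:
        if line.startswith(SENTINEL):
            return int(line.split()[1]), "".join(out_lines)
        out_lines.append(line)

print(run("export VAR=2"))  # (0, '')
print(run("echo $VAR"))     # (0, '2\n') -- the environment persisted

shell.stdin.close()
shell.wait()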
I am trying to write a Python script which, when executed, will open a Maya file on another computer and create its playblast there. Is this possible? I should also mention that the systems I use are all Windows. Thanks
Yes, it is possible; I do this all the time on several computers. First you need to access the computer (this has been answered elsewhere). Then call Maya from within your shell as follows:
maya -command myblast -file filetoblast.ma
you will need myblast.mel somewhere in your script path
myblast.mel:
global proc myblast(){
    playblast -widthHeight 1920 1080 -percent 100
        -fmt "movie" -v 0 -f (`file -q -sn`+".avi");
    evalDeferred("quit -f");
}
Configure what you need in this file, such as shading options. Please note that invoking the Maya GUI uses up one license, and playblast needs that GUI (you could shave off some seconds by not loading the default GUI).
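If you want to trigger that invocation from Python once you can reach the machine, a minimal sketch (assuming maya is on the PATH and myblast.mel is in the script path; the scene file name is a placeholder):
import subprocess

# Launch Maya, run the myblast MEL proc on the given scene,
# then let the evalDeferred("quit -f") in the script close it.
subprocess.run(
    ["maya", "-command", "myblast", "-file", "filetoblast.ma"],
    check=True,
)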
In order to execute something on a remote computer, you've got to have some sort of service running there.
If it is a Linux machine, you can simply connect via ssh and run the commands. In Python you can do that using paramiko:
import paramiko

ssh = paramiko.SSHClient()
# Accept the host key on first connect (or load known hosts instead).
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='foo', password='bar')
stdin, stdout, stderr = ssh.exec_command("echo hello")
print(stdout.read().decode())
Otherwise, you can use a Python service, but you'll have to run it beforehand.
You can use Celery as previously mentioned, or ZeroMQ, or more simply RPyC:
simply run the rpyc_classic.py script on the target machine, and then you can run Python code on it:
import rpyc

conn = rpyc.classic.connect("my_remote_server")
conn.modules.os.system('echo foo')
Alternatively, you can create a custom RPyC service (see documentation).
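A custom service might look roughly like this minimal sketch (the exposed method and port are illustrative):
import rpyc
from rpyc.utils.server import ThreadedServer

class PlayblastService(rpyc.Service):
    # Methods prefixed with exposed_ are callable by connected clients.
    def exposed_playblast(self, scene_path):
        # Here you would open the scene and run the playblast.
        return "blasted %s" % scene_path

if __name__ == "__main__":
    ThreadedServer(PlayblastService, port=18861).start()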
A final option is using an HTTP server, as previously suggested. This may be the easiest if you don't want to start installing everything. You can use Bottle, which is a simple HTTP framework in Python:
Server-side:
from bottle import route, run

@route('/run_maya')
def index():
    # Do whatever
    return 'kay'

run(host='localhost', port=8080)
Client-side:
import requests
requests.get('http://remote_server/run_maya')
One last option for cheap RPC is to run maya.standalone from a Maya Python interpreter ("mayapy", usually installed next to the maya binary). The standalone runs inside a regular Python script, so it can use any of the remote-procedure tricks in KimiNewt's answer.
You can also create your own mini-server using basic Python. The server could use the Maya command port, or be a WSGI server using the built-in wsgiref module. Here is an example which uses wsgiref running inside a standalone to control Maya remotely via HTTP.
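The wsgiref half of that idea might look roughly like this minimal sketch (assuming it runs under mayapy with maya.standalone initialized; the command dispatch is illustrative):
from wsgiref.simple_server import make_server

# import maya.standalone
# maya.standalone.initialize()  # required before using maya.cmds in mayapy

def app(environ, start_response):
    # A real server would map the request path to Maya commands,
    # e.g. trigger a playblast via maya.cmds or pymel.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

make_server('', 8000, app).serve_forever()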
We've been dealing with the same issue at work. We're using Celery as the task manager and have code like this inside of the Celery task for playblasting on the worker machines. This is done on Windows and uses Python.
import os
import subprocess
import tempfile
import textwrap

MAYA_EXE = r"C:\Program Files\Autodesk\Maya2016\bin\maya.exe"

def function_name():
    # the python code you want to execute in Maya
    pycmd = textwrap.dedent('''
        import time
        import pymel.core as pm
        # Your code here to load your scene and playblast
        # new scene to remove quicktimeShim which sometimes fails to quit
        # with Maya and prevents the subprocess from exiting
        pm.newFile(force=True)
        # wait a second to make sure quicktimeShim is gone
        time.sleep(1)
        pm.evalDeferred("pm.mel.quit('-f')")
    ''')
    # write the code into a temporary file
    temp_dir = tempfile.gettempdir()
    tempscript = tempfile.NamedTemporaryFile(delete=False, dir=temp_dir)
    tempscript.write(pycmd)
    tempscript.close()
    # build a subprocess command
    melcmd = 'python "execfile(\'%s\')";' % tempscript.name.replace('\\', '/')
    cmd = [MAYA_EXE, '-command', melcmd]
    # launch the subprocess
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait()
    # when the process is done, remove the temporary script
    try:
        os.remove(tempscript.name)
    except WindowsError:
        pass
I need to launch a server on a remote machine and retrieve the port number that the server process is listening on. When invoked, the server will listen on a random port and output the port number on stderr.
I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine and acts as a wrapper that invokes the job and then returns the port number. It looks like this:
import re, subprocess

executable = ...  # Name of executable
regex = ...       # Regex to extract the port number from the output

p = subprocess.Popen(executable,
                     bufsize=1,  # line buffered
                     stderr=subprocess.PIPE
                     )
s = p.stderr.readline()
port = re.match(regex, s).groups()[0]
print(port)
If I log in interactively, this script works:
$ ssh remotehost.example.com
Last login: Thu Aug 28 17:31:18 2008 from localhost
$ ./invokejob.py
63409
$ exit
logout
Connection to remotehost.example.com closed.
(Note: successful logout, it did not hang).
However, if I try to invoke it from the command-line, it just hangs:
$ ssh remotehost.example.com invokejob.py
Does anybody know why it hangs in the second case, and what I can do to avoid this?
Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output.
s = p.stderr.readline()
I suspect it's the above line. When you invoke a command directly through ssh, you don't get your full pty (assuming Linux), and thus no stderr to read from.
When you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works.
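One quick way to test that theory is to force pseudo-terminal allocation with ssh's -t flag, or to redirect the wrapper's stderr to stdout on the remote side, e.g.:
ssh -t remotehost.example.com ./invokejob.py
ssh remotehost.example.com './invokejob.py 2>&1'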
What if you do the following:
ssh <remote host> '<your command> ;<your regexp using awk or something>'
For example
ssh <remote host> '<your program>; ps aux | awk "/root/ {print \$2}"'
This will connect to <remote host>, execute <your program>, and then print the PID of every process owned by root or with root in its description.
I have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semi-colon (;).
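To do the same from Python, a minimal sketch along these lines should work (the host is a placeholder, and key-based ssh auth is assumed; 2>&1 folds the wrapper's stderr into the captured output):
import subprocess

# Run a remote command over ssh and capture its combined output locally.
result = subprocess.run(
    ["ssh", "remotehost.example.com", "./invokejob.py 2>&1"],
    capture_output=True,
    text=True,
)
print(result.stdout)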