Unable to fake terminal input with termios.TIOCSTI - python

Most of the code samples I've seen read from stdin without local echo. To do this they modify the "local modes" flag to clear the "echo input characters" setting. I thought I could instead modify the "input modes" flag with TIOCSTI, which is described as "Insert the given byte in the input queue." However, even though I run the script as root, it has no effect: anything I write to the fd goes to the terminal output rather than the terminal input. Basically, I want to do this exact thing, but in pure Python.
"""
termfake.py
Usage: sudo python termfake.py /dev/ttys002
Get the tty device path of a different local termimal by running `tty`
in that terminal.
"""
import sys
import termios
fd = open(sys.argv[1], 'w')
fdno = fd.fileno()
# Returns [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
tatters = termios.tcgetattr(fdno)
print('original', tatters)
tatters[0] = termios.TIOCSTI
print('TIOCSTI', termios.TIOCSTI)
# Set iflag
termios.tcsetattr(fdno, termios.TCSANOW, tatters)
# Verify setting change
with open('/dev/ttys002', 'w') as fd2:
print('modified', termios.tcgetattr(fd2.fileno()))
fd.write('This is test\n')
fd.close()

TIOCSTI is an ioctl (documented in tty_ioctl(4)), not a terminal setting, so you can't use tcsetattr() -- you need to feed each character of the fake input to ioctl() instead. I'd never had to do ioctls from Python before, but the following seems to work for running an ls in a different terminal (specified as the argument, e.g. /dev/pts/13) that's running Bash:
import fcntl
import sys
import termios

with open(sys.argv[1], 'w') as fd:
    for c in "ls\n":
        fcntl.ioctl(fd, termios.TIOCSTI, c)
TIOCSTI requires root privileges (or CAP_SYS_ADMIN to be more specific, but that's usually the same in practice) by the way -- see capabilities(7).

I took the answer from @Ulfalizer and expanded it a bit into a complete and usable app.
import sys
import fcntl
import termios
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('tty', type=argparse.FileType('w'),
                    help='full tty path as given by the tty command')
group = parser.add_mutually_exclusive_group()
group.add_argument('-n', action='store_true',
                   help='prevent sending a trailing newline character')
group.add_argument('--stdin', action='store_true',
                   help='read input from stdin')
group = parser.add_argument_group()
group.add_argument('cmd', nargs='?',
                   help='command to run (required if not using --stdin)')
group.add_argument('args', nargs='*',
                   help='arguments to command')
args = parser.parse_args()

if args.stdin:
    data = sys.stdin.read()
else:
    data = ' '.join([args.cmd] + args.args)

# Feed the fake input to the target tty one character at a time
for c in data:
    fcntl.ioctl(args.tty, termios.TIOCSTI, c)

# Send a trailing newline unless suppressed or already present
if not args.n and not data.endswith('\n'):
    fcntl.ioctl(args.tty, termios.TIOCSTI, '\n')
Here is how you use it:
Terminal #1: do...
$ tty > /tmp/t1
Terminal #2: do...
$ sudo python termfake.py $(cat /tmp/t1) date +%s
Terminal #1: observe...
$ tty > /tmp/t1
$ date +%s
1487276400

Related

getpass behaves differently in the PyCharm IDE and the terminal

password.py is a script where getpass() asks the user for a password and validates it, but I want to automate the whole process, so I used subprocess for it (main.py). I am using Python 3.10.
Problem:
The problem is that when I run main.py in the PyCharm IDE it works normally (it automates the process), but when I run `python3 main.py` in an Ubuntu terminal it asks for input.
I don't know why it behaves differently in the IDE and the terminal.
password.py
import warnings
import getpass
import time

# Suppress warnings
warnings.filterwarnings("ignore", category=getpass.GetPassWarning)

for x in range(10):
    print(f"current index {x}")
    time.sleep(5)
    password = getpass.getpass("Enter your password: ")
    if password != "test":
        print("wrong password")
    else:
        print("correct password")
main.py
import subprocess

# subprocess
proc = subprocess.Popen(["python", "password.py"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

password = "test"
input_data = f"{password}\n"

# read output from the subprocess in real-time
while True:
    if proc.poll() is not None:
        break
    proc.stdin.write(input_data.encode())
    proc.stdin.flush()
    output = proc.stdout.readline().decode().strip()
    if output:
        print(output)
Output in PyCharm: (screenshot omitted)
Output in Ubuntu terminal (20.04): (screenshot omitted)
Judging by the screenshots, your OS is Linux.
In Linux, getpass() first tries to read directly from the process's controlling terminal (/dev/tty); if that fails, it tries stdin using direct terminal I/O; and only if that also fails does it fall back to regular I/O, displaying a warning.
Judging by the warnings in the IDE, the latter is exactly what happens in your first case.
Lib/getpass.py:
def unix_getpass(prompt='Password: ', stream=None):
    <...>
    try:
        # Always try reading and writing directly on the tty first.
        fd = os.open('/dev/tty', os.O_RDWR|os.O_NOCTTY)
        tty = io.FileIO(fd, 'w+')
        <...>
        input = io.TextIOWrapper(tty)
        <...>
    except OSError:
        # If that fails, see if stdin can be controlled.
        <...>
        try:
            fd = sys.stdin.fileno()
        except (AttributeError, ValueError):
            fd = None
            passwd = fallback_getpass(prompt, stream)  # fallback_getpass is what displays the warnings
        input = sys.stdin
        <...>
    if fd is not None:
        try:
            old = termios.tcgetattr(fd)
            <...>
        except termios.error:
            <...>
            passwd = fallback_getpass(prompt, stream)
    <...>
    return passwd
As you can see, getpass() is specifically designed to be interactive and to resist having its input intercepted. So if you need to provide a password automatically, use another way (a sketch follows below):
- store it in a file readable only by you (e.g. SSH does that; you can provide that file as an argument and store other arguments there as well), or
- use the system's keyring,
and only fall back to getpass if the password was not provided that way and/or if you detect that the program is being run interactively (sys.stdin.isatty()).
It is also possible to provide the password on the command line, but in that case you have to overwrite it in your process's stored command line to hide it from snooping, and I couldn't find a way to do that in Python.
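As a rough sketch of that advice (not part of the original answer): the file path, the 'myapp' keyring service name, and the get_password helper below are made-up illustrations, and keyring is a third-party package.
import os
import sys
import getpass

def get_password():
    # 1. A file readable only by you (hypothetical path, chmod 600)
    pw_file = os.path.expanduser('~/.myapp_password')
    if os.path.isfile(pw_file):
        with open(pw_file) as f:
            return f.read().strip()

    # 2. The system keyring, via the third-party 'keyring' package
    try:
        import keyring
        pw = keyring.get_password('myapp', getpass.getuser())
        if pw is not None:
            return pw
    except ImportError:
        pass

    # 3. Only fall back to an interactive prompt when stdin is a terminal
    if sys.stdin.isatty():
        return getpass.getpass('Enter your password: ')
    raise RuntimeError('no password available and not running interactively')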
You can check Secure Password Handling in Python | Martin Heinz | Personal Website & Blog for a more detailed rundown of the above. (Note: it suggests using environment variables and loading them from a .env file, which probably doesn't apply to you; that approach is designed for .NET projects, which, due to the rigid structure of MS Visual Studio's build system, have had to rely on environment variables for any variable values.)

Running a C executable inside a python program

I have written a C program that converts one file format to another. It takes one command-line argument: filestem.
I executed it using: ./executable_file filestem > outputfile
and got my desired output inside outputfile.
Now I want to run that executable from within a Python program.
I am trying:
import subprocess
import sys

filestem = sys.argv[1]
subprocess.run(['/home/dev/executable_file', filestem, 'outputfile'])
But it is unable to create the outputfile. I think something needs to be added to handle the > redirection, but I can't figure out what. Please help.
subprocess.run has an optional stdout argument; you can give it a file handle, so in your case something like
import subprocess
import sys

filestem = sys.argv[1]
with open('outputfile', 'wb') as f:
    subprocess.run(['/home/dev/executable_file', filestem], stdout=f)
should work. I don't have the ability to test it, so please run it and report whether it works as intended.
You have several options:
NOTE - Tested in CentOS 7, using Python 2.7
1. Try pexpect:
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import pexpect
filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
command_output, exitstatus = pexpect.run("/usr/bin/bash -c '{0}'".format(cmd), withexitstatus=True)
if exitstatus == 0:
print(command_output)
else:
print("Houston, we've had a problem.")
2. Run subprocess with shell=True (not recommended):
"""Usage: executable_file argument (ex. stack.py -lh)"""
import sys
import subprocess

filestem = sys.argv[1]

# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
result = subprocess.check_output(cmd, shell=True)  # or subprocess.call(cmd, shell=True)
print(result)
It works, but python.org frowns upon this, due to the chance of a shell injection: see "Security Considerations" in the subprocess documentation.
3. If you must use subprocess, run each command separately, take the STDOUT of the previous command, and pipe it into the STDIN of the next command:
from subprocess import Popen, PIPE

# First command: capture its stdout
p1 = Popen(cmd1, stdout=PIPE)
# Second command: its stdin reads directly from the first command's stdout
p2 = Popen(cmd2, stdin=p1.stdout, stdout=PIPE)
stdout_data, stderr_data = p2.communicate()
etc...
Good luck with your code!

Python: Paging of argparse help text?

For a Python script that uses argparse and has a very long argument list, is it possible to make argparse page what it prints to the terminal when calling the script with the -h option?
I could not find a quick answer, so I wrote a little something:
# hello.py
import argparse
import os
import shlex
import stat
import subprocess as sb
import tempfile


def get_pager():
    """
    Get path to your pager of choice, or less, or more
    """
    pagers = (os.getenv('PAGER'), 'less', 'more',)
    for path in (os.getenv('PATH') or '').split(os.path.pathsep):
        for pager in pagers:
            if pager is None:
                continue
            pager = iter(pager.split(' ', 1))
            prog = os.path.join(path, next(pager))
            args = next(pager, None) or ''
            try:
                md = os.stat(prog).st_mode
                if md & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                    return '{p} {a}'.format(p=prog, a=args)
            except OSError:
                continue


class CustomArgParser(argparse.ArgumentParser):
    """
    A custom ArgumentParser class that prints help messages
    using either your pager, or less or more, if available.
    Otherwise, it does what ArgumentParser would do.
    Use the PAGER environment variable to force it to use your pager
    of choice.
    """
    def print_help(self, file=None):
        text = self.format_help()
        pager = get_pager()
        if pager is None:
            return super().print_help(file)
        fd, fname = tempfile.mkstemp(prefix='simeon_help_', suffix='.txt')
        with open(fd, 'w') as fh:
            super().print_help(fh)
        cmd = shlex.split('{p} {f}'.format(p=pager, f=fname))
        with sb.Popen(cmd) as proc:
            rc = proc.wait()
            if rc != 0:
                super().print_help(file)
        try:
            os.unlink(fname)
        except:
            pass


if __name__ == '__main__':
    parser = CustomArgParser(description='Some little program')
    parser.add_argument('--message', '-m', help='Your message', default='hello world')
    args = parser.parse_args()
    print(args.message)
This snippet does two main things. First, it defines a function to get the absolute path to a pager. If you set the environment variable PAGER, it will try to use it to display the help messages. Second, it defines a custom class that inherits pretty much everything from argparse.ArgumentParser. The only method that gets overridden here is print_help. It implements print_help by defaulting to super().print_help() whenever a valid pager is not found. If a valid pager is found, it writes the help message to a temporary file and then opens a child process that invokes the pager with the path to the temporary file. When the pager returns, the temporary file is deleted. That's pretty much it.
You are more than welcome to update get_pager to add as many pager programs as you see fit.
Call the script:
python3 hello.py --help ## Uses less
PAGER='nano --view' python3 hello.py --help ## Uses nano
PAGER=more python3 hello.py --help ## Uses more

ssh works in terminal but not through Python

I am trying to write data to a disk on a remote machine via ssh using a Python script. However, it gives the error dd: /dev/xbd2d: Device not configured.
import argparse
import os
import time

parser = argparse.ArgumentParser(description='basis')
parser.add_argument("-g", default=1, help="")
args = parser.parse_args()
volume = args.g

instance_ip = '10.1.12.3'
cmd_ssh = 'ssh -tt -i basis.pem root@' + instance_ip + ''' "date | dd of=/dev/xbd2d"'''
os.system(cmd_ssh)
What is quite unusual is that if I use the command:
ssh -tt -i basis.pem root@10.1.12.3 "date | dd of=/dev/xbd2d"
in a terminal, it executes correctly without any problem and writes the data to the disk. I wrote the same script in C++ and it worked fine, but for some reason the Python version gives me dd: /dev/xbd2d: Device not configured.
Checking the quotes around the variable and using \" instead of the triple quotes solved my problem, as recommended by Eran:
import argparse
import os
import time

parser = argparse.ArgumentParser(description='basis')
parser.add_argument("-g", default=1, help="")
args = parser.parse_args()
volume = args.g

instance_ip = "10.1.12.3"
cmd_ssh = "ssh -tt -i basis.pem root@" + str(instance_ip) + " \"date | dd of=/dev/xbd2d\""
os.system(cmd_ssh)
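A further way to avoid the quoting pitfalls entirely (a sketch, not part of the original answer; it assumes the same key file and host) is to pass the command as an argument list to subprocess.run instead of building a shell string for os.system:
import subprocess

instance_ip = "10.1.12.3"

# Each local argument is its own list element, so no local shell quoting is
# needed; the remote command is still a single string for the remote shell.
subprocess.run([
    "ssh", "-tt", "-i", "basis.pem",
    "root@" + instance_ip,
    "date | dd of=/dev/xbd2d",
], check=True)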

Get user input while reading STDIN from a pipe?

I'm writing a command line tool (let's call it interactive_rm) that is supposed to read file paths line by line from STDIN so that it can be used with Unix pipes, as in the following example:
$ find . | interactive_rm
I'm currently reading each path from STDIN like this:
def _parse_stdin():
    for line in sys.stdin:
        yield prepare_line(line)
Now the problem... Before the tool removes a path it should ask the user for confirmation. To do so I would use input() like this:
for path in _parse_stdin():
    print('Do you want to delete this path: [y/n]\n' + path)
    answer = input()
    if answer == 'y':
        delete(path)
But this doesn't work, because STDIN is already occupied by the pipe, so the input() function is skipped or I end up with an "EOFError: EOF when reading a line".
Does anyone know a solution to this?
When stdin is redirected, the program needs to reopen the terminal to be able to read user input from it, e.g.:
from __future__ import print_function
import os

def main():
    tty = os.open("/dev/tty", os.O_RDONLY)
    while True:
        r = os.read(tty, 1024)
        if not r:  # Wait for Ctrl-D.
            break
        print("----", r)

if __name__ == "__main__":
    main()
And run it like the following to test it:
python t.py < /dev/null
Note that the terminal may be unavailable, for example if the command is run through an ssh session without a terminal allocated (ssh -T ... command).
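Applied to the interactive_rm example from the question, the same idea might look like the following sketch (delete is the question's own undefined helper, the line handling is simplified to rstrip, and the /dev/tty handling is the part the answer demonstrates):
import sys

def _parse_stdin():
    # File paths arrive through the pipe on sys.stdin
    for line in sys.stdin:
        yield line.rstrip('\n')

def main():
    # Reopen the controlling terminal for the interactive answers,
    # since sys.stdin is occupied by the pipe.
    with open('/dev/tty') as tty:
        for path in _parse_stdin():
            print('Do you want to delete this path: [y/n]\n' + path)
            answer = tty.readline().strip()
            if answer == 'y':
                delete(path)  # the question's own helper

if __name__ == '__main__':
    main()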
