I'm making a webshell in Python, so that the user can use their favourite shell through a web server. My idea is to create a subprocess.Popen running bash -i and to add two functions to the web app that, respectively, read from the subprocess's stdout and write to its stdin.
I start the shell with:
p = subprocess.Popen(["script", "-f", "-c", "bash -i -l"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Writing works fine, but when I read the standard output I don't see what the user typed:
while select.select([p.stdout], [], [], 0)[0] or select.select([p.stderr], [], [], 0)[0]:
    if select.select([p.stdout], [], [], 0)[0]: data += p.stdout.read(1)
    if select.select([p.stderr], [], [], 0)[0]: data += p.stderr.read(1)
I could force echoing by appending the user's input to the output, but that isn't very elegant: if the user runs a program that disables echoing (such as a password prompt), the input would still be shown in the web page.
So, is there a way, such as an option in the bash parameters, to force it to add the input to the output?
PS: If you're wondering why I'm using script to run bash, it's because running bash alone causes Python to stop itself with
[1]+ Stopped python RPCServer.py
Although I've not figured out WHY it happens, I've found how to prevent it from this question: Main Python Process is stopped using subprocess calls in SocketServer
So, is there a way, such as an option in the bash parameters, to force it to add the input to the output?
Yes: you can use the -v command-line option ("Print shell input lines as they are read.").
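A minimal sketch of what that looks like from Python: with -v, bash echoes every input line it reads (to stderr) before executing it, so the typed commands show up in the captured streams.

```python
import subprocess

# bash's -v flag ("print shell input lines as they are read") makes
# bash echo each input line to stderr before executing it.
p = subprocess.Popen(
    ["bash", "-v"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
out, err = p.communicate("echo hello\n")
print(out.strip())          # the command's result, on stdout
print("echo hello" in err)  # the echoed input line arrives on stderr
```

Note that the echoed input goes to stderr, so a webshell that merges stdout and stderr into one stream will show it inline with the output.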
Related
I have some security concerns regarding Python's os.system.
As I couldn't find an answer, I would like to ask for your help.
So I have stored the username & password for my database as environment variables.
Then I want to start a server with a shell statement:
os.system(f"server --host 0.0.0.0 " f"{db_type}://{db_user}:{db_pw}#host")
I removed some parts of the statement as they are not relevant for this question.
My question is:
Will my variables db_user or db_pw get exposed somewhere? I am concerned that os.system will print the whole statement, with the variables in clear text, to stdout.
If so, is there a way to prevent it?
The code will run on an ec2/aws.
I know there are other ways to start a server but I am interested in this specific scenario.
Yes, the contents will be exposed. Not necessarily on stdout/stderr, but they can be seen elsewhere.
Take the example
password='secret'
os.system(f"echo {password} && sleep 1000")
This starts the command in a new subshell (as per the documentation). While that process runs, it is visible in the running process list. Start top or htop, for example, and search for that process.
That might display a command line like sh -c echo secret && sleep 1000, where you can see the content of the password variable.
This is because the complete string argument to os.system is first evaluated and substituted; the resulting string is then passed to sh to start a new subshell.
Since any Unix user can list the machine's processes, it is never a good idea to pass secrets as command-line arguments. Passing them via environment variables is not much better, as the environment can be inspected via cat /proc/{$pid}/environ.
The best way would be to pass the data via stdin to the subprocess.
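A minimal sketch of the stdin approach, using cat as a stand-in for a server that reads a credential from standard input (whether your actual server supports this depends on its own documentation):

```python
import subprocess

# Pass the secret on stdin instead of argv: the process list shows
# only the command name, never the credential itself.
p = subprocess.Popen(
    ["cat"],              # stand-in for a server that reads a secret from stdin
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = p.communicate("secret\n")  # the secret never appears in argv
print(out.strip())
```

Compared to os.system, subprocess also avoids the intermediate sh -c process, so there is one less place for the string to show up.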
I have a variable list of programs that I want to kick off from a cron job. The solution I have settled on, at least for now, is to write the actual cron job in Python, then run through the list, starting each program with:
outf=open('the_command.log','w')
subprocess.Popen(['nohup','the_command', ...],stdout=outf)
outf.close()
The problem with this is that it creates a nohup.out file - apparently the same one for each process. If I did the same thing from the command line, it might look like:
$ nohup the_command ... > the_command.log 2>&1
This works fine, except I get a message from nohup when I run it:
nohup: ignoring input and redirecting stderr to stdout
I have tried redirecting stderr to /dev/null, but then the_command.log ends up empty. How can I solve this?
I solved this by using a different command, detach, from http://inglorion.net/software/detach/
But I now consider this improper. It would be better to use oneshot services started by your cron job script, or to make your cron entry start a oneshot service.
That way there is no need to detach, as the processes aren't your script's children; rather, they are children of the supervisor. Any init that supports starting a normally-down service and does not restart it when it exits can be used.
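For reference, the original nohup.out problem can also be avoided without nohup at all: subprocess can redirect both streams into the log file and detach the child into its own session. A sketch (sleep 0 stands in for the_command):

```python
import subprocess

# Replicate "nohup the_command > the_command.log 2>&1" without nohup:
# redirect both streams into the log and start the child in its own
# session so it keeps running after the parent exits.
with open("the_command.log", "w") as outf:
    p = subprocess.Popen(
        ["sleep", "0"],             # stand-in for the_command
        stdout=outf,
        stderr=subprocess.STDOUT,   # the 2>&1 part
        stdin=subprocess.DEVNULL,   # don't inherit the terminal's input
        start_new_session=True,     # detach from the controlling session
    )
p.wait()
```

Because stdout is not a terminal, nohup's behavior (and its message) never comes into play, and no nohup.out is created.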
I am trying to execute commands, using communicate(), in the terminal that I spawned.
sitecreate_proc = subprocess.Popen(['gnome-terminal'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
out = sitecreate_proc.communicate("pwd")
print out
the "out" variable is always empty.
Displaying the terminal is necessary.
gnome-terminal is a graphical application, and as such it likely doesn't use the standard streams it inherited from the parent process.
You need to run console applications instead to communicate with them -
either the commands themselves:
>>> subprocess.check_output("pwd")
'/c/Users/Ivan\n'
or an interactive shell command, then send input to it and receive responses as per Interacting with bash from python
If you just need to send the streams' data to the same console Python is using, you can simply write the data out as you receive it - either automatically with tee, or by hand at the appropriate moments.
If, instead, you need to launch an independent terminal emulator window on a desktop and interact with it via IPC, that's another matter entirely - namely, UI automation, and has nothing to do with standard console streams.
The most common way to do that in Linux is D-Bus (there are other options outlined at the previous link). People report, however (as of 2012), that gnome-terminal doesn't support D-Bus and you have to jump through hoops to interact with it. There is an article on controlling konsole via D-Bus, though.
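A minimal sketch of the console-application route: spawn a plain (non-graphical) shell whose standard streams are real pipes, and communicate() works as expected.

```python
import subprocess

# Talk to a plain shell over pipes instead of gnome-terminal: the
# shell's standard streams are the pipes we created, so communicate()
# actually returns its output.
proc = subprocess.Popen(
    ["sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
out, err = proc.communicate("pwd\n")
print(out)   # the shell's working directory, followed by a newline
```

With gnome-terminal, the same communicate() call returns empty output because the terminal emulator draws its text to the display rather than writing it to the inherited pipes.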
As I remember, communicate() returns a tuple:
communicate() returns a tuple (stdoutdata, stderrdata)
so you can't use communicate("pwd") that way. gnome-terminal returns immediately; then try to get the result with sitecreate_proc.communicate()[0] for stdoutdata, or sitecreate_proc.communicate()[1] for stderrdata.
I want to write a Python script that opens an *.exe file (a CMD console application) and
communicates with it by sending input and reading output (for example via stdin, stdout) many times.
I tried it with communicate(), but it closes the pipe after I send the first input (communicate(input='\n')),
so it only works once.
Then I tried it via p.stdout.readline(), but I can only read line by line. When I read a newline, the process
terminates (which is not what I need).
I just want to start a program, read its output and send input to it, then wait until the next output and send
new input to it, and so on....
Is there a good way to do it? Does anybody have an example or a similar problem that is solved?
I need the same code as you; I am actually trying
https://pexpect.readthedocs.io/en/stable/index.html
after having no success with subprocess.
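A minimal pexpect sketch of the repeated send/receive loop described above (POSIX only, and assumes the pexpect package is installed; cat stands in for the console application):

```python
import pexpect

# pexpect runs the child in a pseudo-terminal, so repeated
# send/expect exchanges work where communicate() allows only one.
child = pexpect.spawn("cat", encoding="utf-8", timeout=5)
child.sendline("first")
child.expect("first")     # cat echoes the line back
child.sendline("second")  # the channel is still open for another round
child.expect("second")
child.sendeof()           # close the child's input when finished
```

Each expect() call blocks until the given pattern appears in the output, which gives you exactly the "wait for output, then send new input" rhythm the question asks for.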
I am writing a Python program which runs a virtual terminal. Currently I am launching it like so:
import pexpect, thread

def create_input(child, scrollers=None, textlength=80, height=12):
    while 1:
        newtext = child.readline()
        print newtext

child = pexpect.spawn("bash", timeout=30000)
thread.start_new_thread(create_input, (child,))  # args must be a tuple
This works, and I can send commands to it via child.send(command). However, I only get entire lines as output. This means that if I launch something like Nano or Links, I don't receive any output until the process has completed. I also can't see what I'm typing until I press enter. Is there any way to read the individual characters as bash outputs them?
You would need to change the output of whatever program bash is running to be unbuffered instead of line-buffered. Many programs have a command-line option for unbuffered output.
The expect project has a tool called unbuffer that looks like it can give you all bash output unbuffered. I have personally never used it, but there are other answers here on SO that also recommend it: bash: force exec'd process to have unbuffered stdout
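On the Python side, pexpect can also hand output back one character at a time instead of line by line, which is what the question's readline() loop is missing. A sketch:

```python
import pexpect

# Read the pty output one character at a time with read_nonblocking()
# instead of readline(), so partial lines (prompts, curses redraws)
# arrive immediately rather than waiting for a newline.
child = pexpect.spawn("bash", encoding="utf-8", timeout=30)
child.sendline("echo hi")
buf = ""
try:
    while True:
        buf += child.read_nonblocking(size=1, timeout=1)
except pexpect.TIMEOUT:
    pass   # no output for a second: assume the burst is over
child.sendline("exit")
print("hi" in buf)
```

Because the child runs in a pseudo-terminal, buf also contains the echoed input ("echo hi"), which is why the original poster could see typed characters only after pressing enter with a line-based read.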
The problem lies in something else. If you open an interactive shell, normally a terminal window is opened that runs bash, sh, csh or whatever. Note the word terminal!
In the old days we connected a terminal to a serial port (telnet does the same, but over IP); again, the word terminal.
Even a dumb terminal responds to ESC codes: to report its type, set the cursor position and colors, clear the screen, etc.
So you are starting a subprocess with interactive output, but in this setup there is no way to tell the shell and its subprocesses that they are talking to a terminal, other than with bash startup parameters, if there are any.
I suggest you enable telnetd, but only on localhost (127.0.0.1).
Within your program, create a socket, connect to localhost:telnet, and look up how to emulate a proper terminal. If a program runs in line mode you are fine, but if it goes to full-screen editing, you will need an array of 80x24, 132x24, or whatever size you want, to store its characters and colors. You also need to be able to shift lines up in that array.
I have not looked, but I cannot imagine there is no telnet client example in Python, and a terminal emulator must be there too!
Another great thing is that telnet sessions clean up if the IP connection is lost, eliminating ghost processes.
Martijn