I am trying to encrypt a password in my shell script and need to pass a predefined variable as the response to the prompt that comes up after running one of the shell commands.
I am trying to automate a password reset process and need to assign the new password automatically according to my shell script's logic; for that I need to encrypt (hash) the password.
# the line below prompts for the password, but I want to send my variable as the response
hashed_pwd=`python -c 'import crypt,getpass; print crypt.crypt(getpass.getpass())'`
echo $hashed_pwd
I want hashed_pwd to get its value without me being asked to enter the password interactively.
Where exactly are you stuck with your approach?
If the problem is that you don't know how to pass a shell variable to Python, you can do something like
hashed_pwd=$(python2 -c "import crypt,getpass; print crypt.crypt('$YOURPASS')")
Notice the changes I made:
- called python2 explicitly, since your code is Python 2, to avoid a Python 3 interpreter being picked up instead
- changed the outer ' to ", which allows shell variable substitution (and wrapped $YOURPASS in single quotes so Python receives it as a string literal)
- the change from backticks to the $() notation is mostly cosmetic
By the way, when I tried to test this I noticed that my environments have different versions of the crypt module: in one, crypt.crypt() requires two arguments (password and salt, where salt has to be a string); in the other, the second argument is optional.
If that is also the case on your system, you can pass the salt in from your shell environment as well:
python2 -c "import crypt,getpass; print crypt.crypt('$YOURPASS','$salt')"
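For completeness, a minimal Python 3 sketch of the same idea, assuming the crypt module is still available (it has been removed from the newest standard library releases) and assuming the shell exported the password in a hypothetical NEW_PASS variable:
import crypt
import os

password = os.environ['NEW_PASS']           # hypothetical variable name, exported by the shell
salt = crypt.mksalt(crypt.METHOD_SHA512)    # generate a random SHA-512 salt
print(crypt.crypt(password, salt))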
I am trying to run an ssh command within a Python script using os.system, to add a 0 at the end of a fully matched string on a remote server using ssh and sed.
I have a file called nodelist on the remote server that looks like this:
test-node-1
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
I want to use sed to make the following modification: search for test-node-1 and, when a full match is found, add a 0 at the end. The file must end up looking like this:
test-node-1 0
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
However, when I run the first command,
hostname = 'test-node-1'
function = 'nodelist'
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/{hostname}/s/$/ 0/' ~/{function}.txt\"")
The result ends up like this:
test-node-1 0
test-node-2
...
test-node-11 0
test-node-12 0
test-node-13 0
...
test-node-21
I tried adding a \b to the command like this,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\b{hostname}\b/s/$/ 0/' ~/{function}.txt\"")
The command doesn't work at all.
I have to manually type in the node name instead of using a variable like so,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\btest-node-1\b/s/$/ 0/' ~/{function}.txt\"")
to make my command work.
What's wrong with my command? Why can't I do what I want it to do?
This code has serious security problems; fixing them requires reengineering it from scratch. Let's do that here:
#!/usr/bin/env python3
import os.path
import shlex # note, quote is only here in Python 3.x; in 2.x it was in the pipes module
import subprocess
import sys
# can set these from a loop if you choose, of course
username = "whoever"
serverlocation = "whereever"
hostname = 'test-node-1'
function = 'somename'
desired_cmd = ['sed', '-i',
f'/\\b{hostname}\\b/s/$/ 0/',
f'{function}.txt']
desired_cmd_str = ' '.join(shlex.quote(word) for word in desired_cmd)
print(f"Remote command: {desired_cmd_str}", file=sys.stderr)
# could just pass the below direct to subprocess.run, but let's log what we're doing:
ssh_cmd = ['ssh', '-i', os.path.expanduser('~/.ssh/my-ssh-key'),
f"{username}#{serverlocation}", desired_cmd_str]
ssh_cmd_str = ' '.join(shlex.quote(word) for word in ssh_cmd)
print(f"Local command: {ssh_cmd_str}", file=sys.stderr) # log equivalent shell command
subprocess.run(ssh_cmd) # but locally, run without a shell
If you run this (except for the subprocess.run at the end, which would require a real SSH key, hostname, etc), output looks like:
Remote command: sed -i '/\btest-node-1\b/s/$/ 0/' somename.txt
Local command: ssh -i /home/yourname/.ssh/my-ssh-key whoever@whereever 'sed -i '"'"'/\btest-node-1\b/s/$/ 0/'"'"' somename.txt'
That's correct/desired output; the funny '"'"' idiom is how one safely injects a literal single quote inside a single-quoted string in a POSIX-compliant shell.
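If you want to see where that idiom comes from, a quick check with shlex.quote (just an illustration) reproduces it:
import shlex

# quoting a string that itself contains single quotes produces the '"'"' idiom
print(shlex.quote("sed -i '/\\btest-node-1\\b/s/$/ 0/' somename.txt"))
# prints: 'sed -i '"'"'/\btest-node-1\b/s/$/ 0/'"'"' somename.txt'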
What's different? Lots:
We're generating the commands we want to run as arrays, and letting Python do the work of converting those arrays to strings where necessary. This avoids shell injection attacks, a very common class of security vulnerability.
Because we're generating lists ourselves, we can change how we quote each one: We can use f-strings when it's appropriate to do so, raw strings when it's appropriate, etc.
We aren't passing ~ to the remote server. It's redundant and unnecessary anyway, because ~ is where an SSH session starts by default; and the security precautions we're using (preventing values from being parsed as code by a shell) would stop it from having any effect, since replacing ~ with the value of HOME is done not by sed itself but by the shell that invokes it. Because we aren't invoking any local shell either, we also needed os.path.expanduser to make the ~ in ~/.ssh/my-ssh-key be honored.
Because we aren't using a raw string, we need to double the backslashes in \b to ensure that they're treated as literal rather than syntactic by Python.
Critically, we're never passing data in a context where it could be parsed as code by any shell, either local or remote.
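As a small follow-up, if you also want to know whether the remote sed succeeded, the same ssh_cmd list can be handed to subprocess.run and its result inspected; a sketch:
# sketch: capture output and surface a non-zero exit status from the remote command
result = subprocess.run(ssh_cmd, capture_output=True, text=True)
if result.returncode != 0:
    print(f"remote command failed ({result.returncode}): {result.stderr}", file=sys.stderr)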
I have a program which needs to receive input very fast, and I know what the input has to be, but there is a timer which, I suppose, expects no delay between opening the program and entering the input.
I've tried using a bash script, but it doesn't seem to work, and trying ./program; password also doesn't work (it complains that 'password' is not a command).
My bash script looks like this:
#! /bin/bash
cd ~/Downloads
./program
password
Perhaps it's working, but I'm not receiving any output from the program, which would usually display how long it took to get an input.
Well, first of all, change the invocation to ~/Downloads/program password. Also make sure program is executable (chmod +x if it isn't) and that it actually takes arguments.
Furthermore, to avoid spelling out the path every time, move program to ~/bin/ (create it if it doesn't exist) and add that location to $PATH if it isn't there already.
If the "program" does not expect the password as a command line argument, then you probably want to input it through stdin:
#! /bin/bash
cd ~/Downloads
echo "password" | ./program
or, if there is more input:
./program <<INPUT
password
moreInput
moreInput2
...
moreInputN
INPUT
The first variant uses a simple pipe; the second relies on a here document (heredoc).
In the (improbable) case that the program expects the password as an argument, you have to pass it as follows:
./program password
without line-breaks and semicolons in between.
I say "improbable" because, if such an invocation is used from the shell, the password is saved as clear text in the bash history, which is obviously not very good.
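For what it's worth, if you end up launching the program from Python rather than bash, the pipe variant has a direct equivalent; a sketch, assuming the program really reads the password from stdin:
import os.path
import subprocess

# sketch: equivalent of  echo "password" | ./program
subprocess.run(['./program'],
               input='password\n',                     # sent to the program's stdin
               text=True,
               cwd=os.path.expanduser('~/Downloads'))  # same directory as the cd in the bash script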
So I have a Python script that connects to the client servers and gets some data that I need.
It works in this way: my bash script on the client side needs input like the one below, and it works when called like this:
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to get the user input from my Python script and pass it on to the remotely called bash script, and that's where I run into problems. This is the method I tried, with no luck:
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko; you should tag that or include the info in your question.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I also think your original example is not quite accurate in how it works.
Just out of interest, have you looked at "fabric" (http://www.fabfile.org)? It has lots of very handy functions like "run", which will run a command on a remote server (or lots of remote servers!) and return the response to you.
It also gives you a lot of protection by wrapping popen and paramiko for the SSH login etc., so it can be much more secure than trying to build web services or other things yourself.
You should always be wary of injection attacks. I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that could cause very bad problems for you. It would be better to have 'options' on the command, which are programmed in, drastically limiting the user's input, or at least to put a lot of validation around the input variables. Of course, if this is only for you (or trained people), then it's a little easier.
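A sketch of that idea, assuming the arguments arrive on the script's command line (client is the Paramiko SSHClient from the question, and the numeric whitelist is just an illustration):
import shlex  # shlex.quote is Python 3; on Python 2 the equivalent lives in pipes
import sys

# only allow plain numeric tokens (the remote script takes date and hour fields),
# then shell-quote them anyway before building the remote command line
args = sys.argv[1:]
for a in args:
    if not a.isdigit():
        sys.exit("refusing suspicious argument: %r" % a)
client.exec_command('/apps./tempo.sh ' + ' '.join(shlex.quote(a) for a in args))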
I recommend using paramiko for the ssh connection.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user were typing:
import time

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')

def exec_command(cmd):
    """Gets ssh command(s), executes them, and returns the output"""
    prompt = 'python-ssh:'  # the command line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():  # wait until output is available
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024)   # read until the prompt comes back
    return buff[:-len(prompt)]    # strip the trailing prompt from the output
Example usage: exec_command('pwd')
The result is returned to you over the SSH channel.
Assuming that you are using paramiko you need to send the command as a string. It seems that you want to pass the command line arguments passed to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space-separated string. This argument string is then concatenated with the bash script command and executed remotely.
I want to ask how to pass a variable from a shell script to a Python script and then save it as a local variable. I have a variable in the shell script:
#!/bin/bash
userpem=$(egrep "CN=$1/" index.txt|awk '{print $3}').pem
output='openssl x509 -in $userpem -noout -text'
export output
I read in some posts that I can do that with os.environ(foo), but I have only seen examples like this:
from django.shortcuts import render
import os
import subprocess
from django.http import HttpResponse
def info(request, benutzername):
os.chdir("/var/www/openvpn/examples/easy-rsa/2.0/keys")
subprocess.Popen(["/var/www/openvpn/examples/easy-rsa/2.0/keys/getinfo.sh",benutzername])
output = os.environ['output']
return HttpResponse(output)
You can't do what you're trying to do.
fedorqui's answer shows how to read environment variables from Python, but that won't help you.
First, you're just starting the shell script and immediately assuming it's already done its work, when it may not even have finished launching yet. You might get lucky and have it work sometimes, but not reliably. You need to wait for the Popen object (which also means you need to store it in a variable), or, even more simply, use subprocess.call (which waits until the process finishes) instead of just kicking it off.
And you probably want to check the return value, or just use check_call, so you'll know if it fails.
Meanwhile, even if you fix that, it still won't do you any good: export doesn't export variables to your parent process; it exports them to your children.
If you want to pass a value back to your parent, the simplest way to do that is by writing it to stdout. Then, of course, your Python code will have to read your output. The easiest way to do that is to use check_output instead of check_call.
Finally, I'm pretty sure you wanted to actually run openssl and capture its output, not just set output to the literal string openssl x509 -in $userpem -noout -text. (Especially not with single quotes, which will prevent $userpem from being substituted, meaning you looked it up for nothing.) To do that, you need to use backticks or $(), as you did in the previous line, not quotes.
So:
#!/bin/bash
userpem=$(egrep "CN=$1/" index.txt|awk '{print $3}').pem
output=$(openssl x509 -in "$userpem" -noout -text)
echo "$output"
And:
def info(request, benutzername):
os.chdir("/var/www/openvpn/examples/easy-rsa/2.0/keys")
output = subprocess.check_output(["/var/www/openvpn/examples/easy-rsa/2.0/keys/getinfo.sh",benutzername])
return HttpResponse(output)
As a side note, os.chdir is usually a bad idea in web servers. It sets the current directory for all requests, not just this one. (It's especially bad if you're using a semi-preemptive greenlet server framework, like something based on gevent, because a different request could chdir you somewhere else between your chdir call and your subprocess call.)
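A sketch of the per-call alternative: pass cwd= to the subprocess call instead of changing the whole process's working directory (reusing the imports and paths from the view above):
def info(request, benutzername):
    keys_dir = "/var/www/openvpn/examples/easy-rsa/2.0/keys"
    # scope the working directory to this one call instead of calling os.chdir
    output = subprocess.check_output([keys_dir + "/getinfo.sh", benutzername],
                                     cwd=keys_dir)
    return HttpResponse(output)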
You need to use os.environ['variablename'] to work with an environment variable.
For example here the v variable is created and exported:
$ export v="hello"
Let's create a script a.py:
import os
d=os.environ['v']
print "my environment var v is --> " + d
And calling it:
$ python a.py
my environment var v is --> hello
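If the variable might not be set at all, os.environ['v'] raises a KeyError; a small sketch of the safer lookup:
import os

# fall back to a default instead of raising KeyError when v is unset
d = os.environ.get('v', 'default-value')
print("my environment var v is --> " + d)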
I just learned about the subprocess.check_call() function today. I intend to use it to replace my os.system command.
Originally, I have my command as such:
os.system("mount -t cifs //source/share /mnt/share -o username=user")
The command above will prompt for password and it will mount the drive if the password is correct.
However, if I tried:
cmd_string="mount -t cifs //source/share /mnt/share -o username=user"
subprocess.check_call(cmd_string.split(" "), shell=True, stdin=sys.stdin)
It does not ask for the password. Instead, it just prints the partitions, as if the mount command had been invoked without parameters.
Any idea how I can use check_call and still receive interactive input from user?
You are invoking mount without parameters. If you specify shell=True, subprocess expects the entire command, including arguments, to be passed to the shell as one big string. If you specify shell=False (the default), the command and its arguments are passed as a list of strings, as you have done by using split. By mixing the two forms, you are effectively passing only the string mount as the command to be executed. Either remove shell=True or remove the .split(" "). The first choice is usually better unless there is some reason you need shell parsing to be involved, which is normally something to be avoided.
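A sketch of both corrected forms, using the mount arguments from the question:
import subprocess
import sys

cmd_string = "mount -t cifs //source/share /mnt/share -o username=user"

# preferred: no shell involved, pass the argument list
subprocess.check_call(cmd_string.split(" "), stdin=sys.stdin)

# alternative: keep shell=True, but then pass the whole command as one string
# subprocess.check_call(cmd_string, shell=True, stdin=sys.stdin)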