Fabric 2.x ssh connection using identity file fails to work - python

Trying to connect to a host described in my SSH config using Fabric 2 and an identity file.
from fabric import Connection, task

con = Connection('my_host')

@task
def tt(c):
    con.run('uname -a')
~/.ssh/config:
Host my_host
    HostName 123.144.76.84
    User ubuntu
    IdentityFile ~/.keys/somekey
It fails with
paramiko.ssh_exception.AuthenticationException: Authentication failed.
while $ ssh my_host from the terminal works.
I've also tried fab -i ~/.keys/somekey tt, with the same result.

Fabric accepts a hosts iterable as a parameter in tasks. Per the documentation:
An iterable of host-connection specifiers appropriate for eventually instantiating a Connection. The existence of this argument will trigger automatic parameterization of the task when invoked from the CLI, similar to the behavior of --hosts.
One of the members of which could be:
A string appropriate for being the first positional argument to Connection - see its docs for details, but these are typically shorthand-only convenience strings like hostname.example.com or user@host:port.
As for your example, please try this for fabfile.py:
host_list = ["my_host"]

@task(hosts=host_list)
def tt(c):
    c.run('uname -a')
Alternatively, you can omit the host declaration from the fabfile altogether. If you don't specify the host in fabfile.py, you can simply specify it when invoking the fab CLI utility. If your fabfile.py is this:
@task
def tt(c):
    c.run('uname -a')
You would now run fab -H my_host tt to run the task tt against the my_host alias from your SSH client config.
Hope this helps.

There seems to be something afoot with paramiko. Without digging deeper, I don't know if it's a bug or not. In any case, I had the same issue, and even a plain paramiko call gave me the same error.
Following another SO question, I was able to make it work by disabling rsa-sha2-256 and rsa-sha2-512 as mentioned there.
Luckily, fabric exposes access to the paramiko arguments like so:
con = Connection(
    'my_host',
    connect_kwargs={
        "disabled_algorithms": {"pubkeys": ["rsa-sha2-256", "rsa-sha2-512"]}
    }
)
I find it unfortunate that this is required in the fabfile. If someone else has a better/cleaner solution, feel free to comment.
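One possibly cleaner variant (a hedged sketch, not from the original answer): Fabric 2 also accepts connect_kwargs through a fabric.Config object, so the workaround can live in one place instead of in every Connection call:
from fabric import Config, Connection

# Assumption: Fabric 2.x; connect_kwargs set on the config applies to every
# Connection created with it.
config = Config(overrides={
    "connect_kwargs": {
        "disabled_algorithms": {"pubkeys": ["rsa-sha2-256", "rsa-sha2-512"]}
    }
})
con = Connection('my_host', config=config)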

Same problem here.
You can try adding -d for more detail when Fabric runs:
fab2 -d tt
I found the exception paramiko.ssh_exception.SSHException: Invalid key. I regenerated the key on the server, and the problem was solved.
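If the -d output is not enough, a hedged alternative (not from the original answer) is to turn on paramiko's own debug logging before connecting, so you can see the key exchange and which authentication methods were attempted:
import logging
import paramiko

# Assumption: Fabric uses paramiko as its transport, so paramiko's DEBUG log
# reveals why the identity file was rejected. The log file name is illustrative.
logging.basicConfig(level=logging.DEBUG)
paramiko.util.log_to_file('ssh_debug.log')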

Related

Attempting to run commands on an external server via python using paramiko ssh [duplicate]

I am slowly trying to make a Python script to SSH and then FTP, to do some manual file fetching I have to do all the time. I am using Paramiko and the session seems to connect and prints the directory, but my change-directory command doesn't seem to work; it prints the directory I start in: /01/home/.
import paramiko

hostname = ''
port = 22
username = ''
password = ''

# selecting PROD instance, changing to data directory, checking directory
command = {
    1: 'ORACLE_SID=PROD', 2: 'cd /01/application/dataload', 3: 'pwd'
}

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname, port, username, password)
for key, value in command.items():
    stdin, stdout, stderr = ssh.exec_command(value)
    outlines = stdout.readlines()
    result = ''.join(outlines)
    print(result)
ssh.close()
When you run exec_command multiple times, each command is executed in its own "shell", so the previous commands have no effect on the environment of the following commands.
If you need the previous commands to affect the following commands, just use an appropriate syntax of your server shell. Most *nix shells use a semicolon or a double ampersand (with different semantics) to specify a list of commands. In your case, the double ampersand is more appropriate, as it executes the following commands only if the previous ones succeed:
command = "ORACLE_SID=PROD && cd /01/application/dataload && pwd"
stdin,stdout,stderr = ssh.exec_command(command)
In many cases, you do not even need to use multiple commands.
For example, instead of this sequence, that you might do when using shell interactively:
cd /path
ls
You can do:
ls /path
See also:
How to get each dependent command execution output using Paramiko exec_command
Obligatory warning: Do not use AutoAddPolicy on its own; you lose protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
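For reference, a hedged sketch of the safer setup (reusing the hostname/port/username/password variables from the question): load your known_hosts file so the server's key is actually verified, and reject unknown hosts instead of auto-adding them:
import paramiko

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()  # reads ~/.ssh/known_hosts
ssh.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse servers with unknown keys
ssh.connect(hostname, port, username, password)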
Well, by accidentally trying something, I believe I managed to figure this out. You need to run all the commands at one time and don't need to do them in a loop. For my instance it would be:
import paramiko

hostname = ''
port = 22
username = ''
password = ''

# selecting PROD instance, changing to data directory, checking directory
command = 'ORACLE_SID=PROD;cd /01/application/dataload;pwd'

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname, port, username, password)
stdin, stdout, stderr = ssh.exec_command(command)  # was exec_command(value), a leftover from the loop version
outlines = stdout.readlines()
result = ''.join(outlines)
print(result)
ssh.close()

execute which command over ssh in python script

I'm trying to run the command which solsql over SSH in a Python script.
I think the problem is in the ssh command and not the Python part, but maybe it's both.
I tried
subprocess.check_output("ssh root#IP which solsql",
stderr=subprocess.STDOUT, shell=True)
but I get an error.
I tried to run the command manually:
ssh root@{server_IP} "which solsql"
and I get a different output.
On the server I get the real path (/opt/solidDB/soliddb-6.5/bin/solsql)
but over SSH I get this:
which: no solsql in
(/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
I think what you're looking for is something like paramiko. Here is an example of how to use the library and issue a command to the remote system.
import base64
import paramiko

key = paramiko.RSAKey(data=base64.b64decode(b'AAA...'))
client = paramiko.SSHClient()
client.get_host_keys().add('ssh.example.com', 'ssh-rsa', key)
client.connect('ssh.example.com', username='THE_USER', password='THE_PASSWORD')
stdin, stdout, stderr = client.exec_command('which solsql')
for line in stdout:
    print('... ' + line.strip('\n'))
client.close()
When you run a command over SSH, your shell executes a different set of startup files than when you connect interactively to the server. So the fundamental problem is really that the path where this tool is installed is not in your PATH when you connect via ssh from a script.
A common but crude workaround is to force the shell to read in the file with the PATH definition you want; but of course that basically requires you to know at least where the correct PATH is set, so you might as well just figure out where exactly the tool is installed in the first place anyway.
ssh server '. .bashrc; type -all solsql'
(assuming that the PATH is set up in your .bashrc; and ignoring for the time being the difference between executing stuff as yourself and as root. The dot and space before .bashrc are quite significant. Notice also how we use the POSIX command type rather than the brittle which command, which should have died a natural but horrible death decades ago).
If you have a good idea of where the tool might be installed, perhaps instead do
subprocess.check_output(['ssh', 'root@' + ip, '''
    for path in /opt/solidDB/*/bin /usr/local/bin /usr/bin; do
        test -x "$path/solsql" || continue
        echo "$path"
        exit 0
    done
    exit 1'''])
Notice how we also avoid the (here, useless) shell=True. Perhaps see also Actual meaning of 'shell=True' in subprocess
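For completeness, a hedged sketch of running the earlier . .bashrc; type -all solsql probe from Python in the same shell=True-free style (server_ip is an illustrative placeholder, and this assumes non-interactive key-based login as root works):
import subprocess

server_ip = '198.51.100.7'  # illustrative placeholder
out = subprocess.check_output(
    ['ssh', 'root@' + server_ip, '. .bashrc; type -all solsql'],
    stderr=subprocess.STDOUT,
)
print(out.decode())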
First, you need to debug your error.
Use code like this:
import subprocess

command = "ssh root@IP which solsql"
try:
    result = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    raise RuntimeError("command '{}' returned with error (code {}): {}".format(e.cmd, e.returncode, e.output))
print("Result:", result)
It will output the error message to you, and you'll know what to do: for example, ssh may have asked for a password, or failed to find your key, or something else.

python values to bash line on a remote server

So I have a Python script that connects to the client servers and then gets some data that I need.
It works when my bash script on the client side is called with arguments like the ones below:
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to get the user input from my Python script and pass it to the remotely called bash script, and that's where I run into my problem. Below is the method I tried, with no luck:
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko - you should tag your question or include that info in it.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I also think your original example is not quite accurate in how it works.
Just out of interest, have you looked at "fabric" (http://www.fabfile.org)? It has lots of very handy functions like "run", which will run a command on a remote server (or lots of remote servers!) and return the response.
It also gives you lots of protection by wrapping around popen and paramiko for the SSH login etc., so it can be much more secure than trying to make web services or other things.
You should always be wary of injection attacks. I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that would cause very bad problems for you. It would be better to have 'options' on the command, which are programmed in, limiting the user's input drastically, or at least to put a lot of protection around the input variables, as shown in the sketch below. Of course, if this is only for you (or trained people), then it's a little easier.
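A hedged sketch of that protection (not from the original answer; it reuses the client object from the question): quote each user-supplied argument with shlex.quote before building the remote command line, so input like "; rm -rf /" arrives as a single literal argument:
import shlex
import sys

# Each argument is quoted, so shell metacharacters in user input stay inert.
args = ' '.join(shlex.quote(a) for a in sys.argv[1:])
client.exec_command('/apps./tempo.sh {}'.format(args))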
I recommend using paramiko for the ssh connection.
import paramiko

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user were typing:
import time

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')

def exec_command(cmd):
    """Gets ssh command(s), executes them, and returns the output"""
    prompt = 'python-ssh:'  # the command-line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024).decode()  # was ssh_client.chan.recv(1024); chan is the channel itself
    return buff[:-len(prompt)]  # strip the trailing prompt (the original sliced the wrong end)
Example usage: exec_command('pwd')
And the result would even be returned to you via ssh
Assuming that you are using paramiko, you need to send the command as a string. It seems that you want to pass the command-line arguments given to your Python script as arguments for the remote command, so try this:
import sys

command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:])  # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command-line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space-separated string. This argument string is then concatenated with the bash script command and executed remotely.

LDAP Python query differing results from built-in Linux tool ldapsearch

I'm having issues retrieving all attributes from an LDAP server using a Python ldap script.
First off, here's what I can pull using the Linux command ldapsearch.
If I run ldapsearch -LLL uid=user and do not specify the host that I want to hit, I do not get back all of the attributes I want.
HOWEVER, if I run ldapsearch -LLL -h ldaphost uid=user, I get back ALL of the attributes that are available. I'm chalking this up to that one LDAP server having more of the attributes I need.
Anyway, back to my Python script. Here's the script I'm executing:
import ldap

# Initialize and bind
con = ldap.initialize('ldaphost')
con.simple_bind_s()

# Run a query against the directory
baseDN = "basedn"
searchScope = ldap.SCOPE_SUBTREE
retrieveAttributes = None
searchFilter = "uid=user"

res = con.search_s(baseDN, searchScope, searchFilter, retrieveAttributes)
print(res)
When I run this Python script, the output is the exact same list of attributes I get when I execute ldapsearch -LLL uid=user. It's almost like my Python script is not actually connecting to the desired host and pulling the extra attributes that only that LDAP host will give me.
Is there an additional piece that I need to add to this script to specify an LDAP host? It seems like I am doing that in the second line of the script, but I'm not an LDAP guru and am probably doing something wrong. Any help would be appreciated.
Thanks.

How do I access Meteor's MongoDB from another client, while Meteor is running?

I would like to access Meteor's MongoDB from a Python client, while Meteor is running.
I can't start a mongod because Meteor's database is locked.
How do I access the database from another client?
The meteor command provides a clean way. To get the URL for the running mongod:
meteor mongo -U
which you can parse from Python.
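A hedged sketch of doing that from Python (assumes meteor is on the PATH, the script is run from the project directory, and a reasonably recent pymongo is installed):
import subprocess
import pymongo

# Ask meteor for the URL of its running mongod, then connect to it.
url = subprocess.check_output(['meteor', 'mongo', '-U']).decode().strip()
client = pymongo.MongoClient(url)
print(client.list_database_names())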
Meteor starts the mongod for you on port 3002 when you run the meteor command, and stores the mongo data file in .meteor/local/db
Output from ps aux | grep 'mongod' shows the mongod command that meteor uses:
/usr/local/meteor/mongodb/bin/mongod --bind_ip 127.0.0.1 --smallfiles --port 3002 --dbpath /path/to/your/project/.meteor/local/db
So just connect your mongo client accordingly. In python:
>>> import pymongo
>>> con = pymongo.Connection(host='127.0.0.1', port=3002)
>>> con.database_names()
[u'meteor', u'local']
UPDATE: unfortunately making changes directly in mongo in this way won't reflect live in the app, but the changes will be reflected on a full page (re)load.
Use the Meteor deployment instructions
The command will look like this:
PORT=3000 MONGO_URL=mongodb://localhost:27017/myapp node bundle/main.js
You can also find it from within server side code using:
process.env.MONGO_URL
Even if you don't set this environment variable when running, it gets set to the default. This seems to be how it is found internally (packages/mongo/remote_collection_driver.js).
The URL given by meteor mongo -U seems to reconstruct the default domain/IP and db name, but uses the port stored in the file.
You can put this anywhere in the server folder, and read it from the command line.
console.log('db url: ' + process.env.MONGO_URL);
I set up a webpage to display it, to double-check in the Selenium tests that we are using the test database and not overwriting live data.
And here is a shell script to get the Mongo URI and Mongo database name:
#!/bin/bash -eux

read -s -p "Enter Password: " password

cmd=$(meteor mongo --url myapp.meteor.com << ENDPASS
$password
ENDPASS
)

mongo_uri=$(echo $cmd | cut -f2 -d" ")
mongo_db=$(echo $mongo_uri | cut -d/ -f 4)

#my_client_command_with MONGODB_URI=$mongo_uri MONGO_DB=$mongo_db
Regarding a 10-second delay on updates: tail the MongoDB oplog! There's more information on how to do it here:
http://meteorhacks.com/lets-scale-meteor.html
Make sure you install smart collections and use those (instantiate your collections using Meteor.SmartCollection instead of Meteor.Collection) and you will find the updates are essentially immediate.
