I'm having issues retrieving all attributes from an LDAP server using a Python ldap script.
First off, here's what I can pull using the Linux command ldapsearch.
If I run ldapsearch -LLL uid=user and do not specify the host that I want to hit, I do not get back all of the attributes I want.
HOWEVER, if I run ldapsearch -LLL -h ldaphost uid=user I get back ALL of the attributes that are available. I'm chalking this up to one LDAP server having more of the attributes that I need.
Anyway, back to my Python script. Here's the script I'm executing:
import ldap

# Initialize and bind
con = ldap.initialize('ldaphost')
con.simple_bind_s()

# Run a query against the directory
baseDN = "basedn"
searchScope = ldap.SCOPE_SUBTREE
retrieveAttributes = None  # None requests all available attributes
searchFilter = "uid=user"
res = con.search_s(baseDN, searchScope, searchFilter, retrieveAttributes)
print res
When I run this Python script, the output is the exact same list of attributes that I get when I execute ldapsearch -LLL uid=user. It's almost like my Python script is not actually connecting to the desired host and pulling the extra attributes that only that LDAP host will give me.
Is there an additional piece that I need to add to this script to specify an LDAP host? It seems like I am doing that in the second line of the script, but I'm not an LDAP guru and am probably doing something wrong. Any help would be appreciated.
Thanks.
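For reference, python-ldap's initialize() expects a full LDAP URL such as ldap://host:port rather than a bare hostname, which may be why the script ends up talking to the default server instead of the intended one. A minimal sketch of the same query with an explicit URL, assuming the server listens on the default port 389:

import ldap

# An explicit LDAP URL (not a bare hostname) selects the server
con = ldap.initialize('ldap://ldaphost:389')
con.simple_bind_s()
res = con.search_s("basedn", ldap.SCOPE_SUBTREE, "uid=user", None)
print res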
Trying to connect to the host described in my SSH config using Fabric 2 and an identity file.
from fabric import Connection, task

con = Connection('my_host')

@task
def tt(c):
    con.run('uname -a')
~/.ssh/config:
Host my_host
    HostName 123.144.76.84
    User ubuntu
    IdentityFile ~/.keys/somekey
It fails with
paramiko.ssh_exception.AuthenticationException: Authentication failed.
While $ ssh my_host from the terminal works.
I've tried to do fab -i ~/.keys/somekey tt with the same result.
Fabric accepts a hosts iterable as a parameter on tasks. Per the documentation:
An iterable of host-connection specifiers appropriate for eventually instantiating a Connection. The existence of this argument will trigger automatic parameterization of the task when invoked from the CLI, similar to the behavior of --hosts.
One of the members of which could be:
A string appropriate for being the first positional argument to Connection - see its docs for details, but these are typically shorthand-only convenience strings like hostname.example.com or user@host:port.
As for your example, please try this for fabfile.py:
from fabric import task

host_list = ["my_host"]

@task(hosts=host_list)
def tt(c):
    c.run('uname -a')
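Invoked simply as fab tt, the hosts argument triggers the automatic parameterization described above, so the task runs once for each entry in host_list (here, just my_host).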
Alternatively, you can omit the host declaration from the fabfile altogether. If you don't specify the host in fabfile.py, you can simply specify it when invoking the fab CLI utility. If your fabfile.py is this:
from fabric import task

@task
def tt(c):
    c.run('uname -a')
You would now run fab -H my_host tt to run the task tt against the host alias my_host from your SSH client config.
Hope this helps.
There seems to be something afoot with paramiko. Without digging deeper I don't know if it's a bug or not. In any case, I had the same issue, and even a plain paramiko call got me the same error.
Following another SO question, I was able to make it work by disabling rsa-sha2-256 and rsa-sha2-512, as mentioned there.
Luckily, fabric exposes access to the paramiko arguments like so:
con = Connection(
    'my_host',
    connect_kwargs={
        "disabled_algorithms": {"pubkeys": ["rsa-sha2-256", "rsa-sha2-512"]}
    }
)
I find it unfortunate that this is required in the fabfile. If someone else has a better/cleaner solution, feel free to comment.
Same problem here.
You can try adding -d for more detail when Fabric runs:
fab2 -d tt
I found the exception paramiko.ssh_exception.SSHException: Invalid key, regenerated the key on the server, and the problem was solved.
I am trying to learn how to send a list of lists in Python to an R script which runs statistical methods and gives two or three data frames back to Python.
I stumbled across the pyRserve package. I was able to follow the manual in their documentation and everything works great on the interactive command line (>>>). When I run it as a script, however, it hangs (only "here1" gets printed). I have installed the Rserve package and started its service in RStudio. Below is the code:
import pyRserve
print "here1" #prints this line...
conn = pyRserve.connect(host='localhost', port=6311)
print "here2"
a= conn.eval('3+5')
print a
Can anyone please help?
The docs suggest:
$ python
>>> import pyRserve
>>> conn = pyRserve.connect()
And then go on with:
To connect to a different location host and port can be specified explicitly:
pyRserve.connect(host='localhost', port=6311)
This is not meant to indicate that both lines should be run. The second line should be viewed as a potential modifier for the first. So if you need an alternate address or port, then it should look like:
$ python
>>> import pyRserve
>>> conn = pyRserve.connect(host='localhost', port=6311)
Also note this caveat for Windows users:
Note: On some Windows versions it might be necessary to always provide 'localhost' for connecting to a locally running Rserve instance.
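Since the original goal was to send a list of lists to R and get results back, here is a minimal sketch using pyRserve's namespace access; the variable name data and the R expression are illustrative only, and how cleanly nested lists round-trip depends on pyRserve's type conversion:

import pyRserve

conn = pyRserve.connect(host='localhost', port=6311)
# Assign a Python list of lists into the R session under the name 'data'
conn.r.data = [[1, 2, 3], [4, 5, 6]]
# Evaluate R code against it and pull the result back into Python
result = conn.eval('sapply(data, sum)')
print result
conn.close()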
So I have a Python script that connects to the client servers and then gets some data that I need.
My bash script on the client side needs input like the one below, and it works when called this way:
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to take user input from my Python script and pass it to the remotely called bash script, and that's where the problem is. Below is the method I tried, with no luck getting it to work:
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko; you should tag that or include the info in your question.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I think your original example is not quite accurate in how it works.
Just out of interest, have you looked at Fabric (http://www.fabfile.org)? It has lots of very handy functions like run, which will run a command on a remote server (or lots of remote servers!) and return the response.
It also gives you lots of protection by wrapping around popen and paramiko for the ssh login etc., so it can be much more secure than trying to make web services or other things.
You should always be wary of injection attacks. I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that could cause very bad problems for you. It would be better to have 'options' on the command, which are programmed in, limiting the user's input drastically, or at least to have a lot of protection around the input variables. Of course, if this is only for you (or trained people), then it's a little easier.
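To make that concrete, here is a minimal sketch of one way to sanitize the arguments before sending them, reusing the client object from the question; it assumes Python 3's shlex.quote (on Python 2, pipes.quote plays the same role):

import shlex
import sys

# Quote each user-supplied argument so shell metacharacters such as ';'
# or '&&' reach the remote shell as literal text instead of being executed.
safe_args = ' '.join(shlex.quote(arg) for arg in sys.argv[1:])
client.exec_command('/apps./tempo.sh ' + safe_args)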
I recommend using paramiko for the ssh connection.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user were typing:
import time

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')  # set a known prompt so we can detect completion

def exec_command(cmd):
    """Gets ssh command(s), executes them, and returns the output"""
    prompt = 'python-ssh:'  # the command line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024)
    return buff[:-len(prompt)]  # strip the trailing prompt from the output
Example usage: exec_command('pwd')
And the output is returned to you over the ssh channel.
Assuming that you are using paramiko, you need to send the command as a string. It seems that you want to pass the command line arguments passed to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space-separated string. This argument string is then concatenated with the bash script command and executed remotely.
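For example, if this script is saved as runner.py (a hypothetical name) and invoked as python runner.py 2016 10 01 02 03, the remote command becomes /apps./tempo.sh 2016 10 01 02 03, matching the working invocation from the question.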
I have NGINX, uWSGI, and web2py installed on the server. The web2py application performs only one function: accessing the database and printing the rows of a table.
def fetch():
    import psycopg2
    conn = psycopg2.connect(database="postgres",
                            user="postgres",
                            password="qwerty",
                            host="127.0.0.1")
    cur = conn.cursor()
    cur.execute("SELECT id, name from TEST")
    rows = cur.fetchall()
    conn.close()
    return rows
When the function is called locally, the table contents are returned.
But when I try to call the function from a remote machine, I get an internal error 500.
One more interesting thing: when the function looks like this:
def hello():
    return 'hello'
the string 'hello' is returned. As soon as I start adding an import directive to it, an error page is generated.
Can anyone please suggest the proper application syntax/logic?
My guess is that your MySQL service doesn't allow remote access. Could you check your MySQL configuration?
vim /etc/mysql/my.cnf
Comment out the following lines.
#bind-address = 127.0.0.1
#skip-networking
If there is no skip-networking line in your configuration file, just add it and comment it out.
And then restart the mysql service.
service mysql restart
Forgive the stupid question, but have you checked whether the module is available on your server?
When you say that the error appears in your hello function as soon as you try to import, is it the same import psycopg2 directive?
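A quick way to check, assuming shell access to the server and using the same Python interpreter that web2py runs under (a minimal sketch, not web2py-specific):

try:
    import psycopg2
    print "psycopg2 available:", psycopg2.__version__
except ImportError as exc:
    print "psycopg2 missing:", exc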
Try this:
Assuming that fetch() is defined in controllers/default.py:
open the folder views/default and create a new file called fetch.html
paste this inside:
{{extend 'layout.html'}}
{{=rows}}
fetch.html is a view (or a template, if you prefer).
Modify fetch() to return a dictionary with rows for the view to print:
return dict(rows=rows)
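Putting the pieces together, a minimal sketch of the modified controller in controllers/default.py, reusing the connection settings from the question:

def fetch():
    import psycopg2
    conn = psycopg2.connect(database="postgres", user="postgres",
                            password="qwerty", host="127.0.0.1")
    cur = conn.cursor()
    cur.execute("SELECT id, name from TEST")
    rows = cur.fetchall()
    conn.close()
    # Returning a dict lets web2py render it with views/default/fetch.html
    return dict(rows=rows)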
This is very basic, though; you can find more information about the basic steps in the book: http://www.web2py.com/books/default/chapter/29/03/overview#Postbacks
I am a security analyst (and Python newbie) who constantly needs to find the following 3 pieces of information on end users during incident investigations at work:
1. Their device hostname
2. The IP address associated with the device
3. Their login username
I don't even know how to begin creating a script that would provide this information, but I'm thinking that it would prompt me to input one of the 3 pieces of info mentioned above and then print out the other two. Beyond the prompt part below, I'm stuck.
#!/usr/bin/python
print "Please paste in one of the following pieces of information.."
print "\n"
print "1. Device hostname"
print "2. IP address"
print "3. Username"
print "\n"
user_input = raw_input()  # named to avoid shadowing the built-in str
I've seen a few posts that detail how to pull various bits of info on a system locally, but not remotely. Does anyone know how I'd go about building this type of script in Python?
There are existing command line tools that you can use like: dig, the host command, and nslookup which will all do DNS lookups for hostnames and IP addresses. The request for "username" doesn't seem meaningful. More than one user can log in to a single machine, so I'm not sure how you plan on gathering that piece of information unless you allow for multiple return values.
Also here's a Perl one-liner that will do an IP to name resolution (reverse DNS lookup):
perl -e 'use Socket qw(getnameinfo inet_aton pack_sockaddr_in NI_NUMERICSERV); my $ip = inet_aton($ARGV[0]); my $addr = pack_sockaddr_in(80, $ip); my ($err, $hostname) = getnameinfo($addr, NI_NUMERICSERV); print "$hostname\n"'
The script assumes a single argument on the command line.
Here's a script that does the same thing in the forward DNS direction, printing all IPs:
perl -e 'use Socket qw(getaddrinfo getnameinfo NI_NUMERICSERV NI_NUMERICHOST); my ($err, @res) = getaddrinfo($ARGV[0], "www", {socktype => SOCK_STREAM}); for my $r (@res) { my ($err, $ip) = getnameinfo($r->{addr}, NI_NUMERICHOST | NI_NUMERICSERV); print "$ip\n";}'
It also assumes a single command line argument.
If you want a Python solution, this will get you started using the gevent library:
import gevent.resolver_ares
from gevent.socket import AF_INET, SOCK_STREAM
def resolve(fqdn):
    resolver = gevent.resolver_ares.Resolver()
    results = resolver.getaddrinfo(
        fqdn, 0, family=AF_INET, socktype=SOCK_STREAM)
    return results[0][-1][0]
This only does a forward resolution, but you should be able to modify it based on the Perl code to get a working reverse resolution as well. You can also use the built-in socket.getaddrinfo instead, but if you plan on doing this for a large number of machines, I'd recommend the gevent library.
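For completeness, a minimal sketch of both directions using only the built-in socket module mentioned above (the function names are illustrative):

import socket

def forward_lookup(fqdn):
    """Hostname -> first IPv4 address (forward DNS)."""
    infos = socket.getaddrinfo(fqdn, None, socket.AF_INET, socket.SOCK_STREAM)
    return infos[0][4][0]

def reverse_lookup(ip):
    """IP address -> primary hostname (reverse DNS)."""
    hostname, aliases, addresses = socket.gethostbyaddr(ip)
    return hostname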