Delete DNS 'A' Records from Domain Controller using Python

I have a DC for "example.com" and many DNS records/FQDNs with 'A' records (Windows servers).
Ex: Server1.example.com A 172.3.2.1
I'm using Python and trying to delete the record (server1).
Unfortunately, it's giving me None as the response.
I am using the dnspython library.
def DeleteDNSRecords(serverList):
    try:
        for server in serverList:
            updater = update.Update(server, 'A')
            response = updater.delete(server, 'A')
            print(str(response))
    except Exception as e:
        print(e)
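For reference, a minimal sketch of how a dynamic-update delete is usually sent with dnspython (this is not from the thread; the zone name and the DNS server address are placeholders). Update.delete() only marks the record for removal and itself returns None; the update message still has to be sent to the server with dns.query, which is why the code above prints None.

import dns.update
import dns.query

def delete_a_record(zone, name, dns_server):
    # Build a dynamic update scoped to the zone, not to the record's FQDN
    upd = dns.update.Update(zone)
    # Mark the 'A' record for deletion; this call returns None
    upd.delete(name, 'A')
    # Send the update to the DNS server (the DC) and return its response
    return dns.query.tcp(upd, dns_server)

# Hypothetical usage: delete server1.example.com via the DC at 10.0.0.1
# delete_a_record('example.com', 'server1', '10.0.0.1')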

Related

How to Handle an Expired SSL/TLS Certificate with Python Requests?

What's the correct way to handle an expired certificate with Python Requests?
I want the code to differentiate between a "connection error" and a connection with an "expired TLS certificate".
import requests

def conn(URL):
    try:
        response = requests.get(URL)
    except requests.exceptions.RequestException:
        print(URL, "Cannot connect")
        return False
    print(URL, "connection successful")
    return True

# valid cert
conn("https://www.google.com")
# nonexistent domain
conn("https://unexistent-domain-example.com")
# expired cert
conn("https://expired-rsa-dv.ssl.com")
You can look at the exception details and see if 'CERTIFICATE_VERIFY_FAILED' is there.
import requests

def conn(URL):
    try:
        response = requests.get(URL)
    except requests.exceptions.RequestException as e:
        if 'CERTIFICATE_VERIFY_FAILED' in str(e):
            print('CERTIFICATE_VERIFY_FAILED')
        print(URL, f"Cannot connect: {str(e)}")
        print('--------------------------')
        return False
    print(URL, "connection successful")
    return True

# valid cert
conn("https://www.google.com")
# nonexistent domain
conn("https://unexistent-domain-example.com")
# expired cert
conn("https://expired-rsa-dv.ssl.com")
requests is a perfect tool for making requests, but your task is to check the server certificate's expiration date, which requires using a lower-level API. The algorithm is: retrieve the server certificate, parse it, and check the end date.
To get the certificate from the server there's the function ssl.get_server_certificate(). It returns the certificate in PEM encoding.
There are plenty of ways to parse a PEM-encoded certificate (check this question); I'd stick with the "undocumented" one.
To parse the time from a string you can use ssl.cert_time_to_seconds().
To parse the URL you can use urllib.parse.urlparse(). To get the current timestamp you can use time.time().
Code:
import ssl
from time import time
from urllib.parse import urlparse
from pathlib import Path

def conn(url):
    parsed_url = urlparse(url)
    cert = ssl.get_server_certificate((parsed_url.hostname, parsed_url.port or 443))
    # save cert to a temporary file (a filename is required for _test_decode_cert())
    temp_filename = Path(__file__).parent / "temp.crt"
    with open(temp_filename, "w") as f:
        f.write(cert)
    try:
        parsed_cert = ssl._ssl._test_decode_cert(temp_filename)
    except Exception:
        return
    finally:  # delete the temporary file
        temp_filename.unlink()
    return ssl.cert_time_to_seconds(parsed_cert["notAfter"]) > time()
It'll throw an exception on any connection error; you can handle that with a try..except around the get_server_certificate() call (if needed).
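A quick usage sketch against the URLs from the question (not part of the original answer; the return values assume the current state of those certificates):

# expected: True while google.com's certificate is valid
print(conn("https://www.google.com"))
# expected: False, the certificate is expired
print(conn("https://expired-rsa-dv.ssl.com"))

Note that a plain connection failure (e.g. the nonexistent domain) will raise rather than return False, since connection errors are deliberately left to the caller.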

Does psycopg2.connect inherit the proxy set in this context manager?

I have a Django app below that uses a proxy to connect to an external Postgres database. I had to replace another package with psycopg2, and it works fine locally, but it doesn't work when I move onto our production server, which is a Heroku app using QuotaguardStatic for proxy purposes. I'm not sure what's wrong here.
For some reason, the psycopg2.connect part returns an error with a different IP address. Is it not inheriting the proxy set in the context manager? What would be the right way to fix this?
from apps.proxy.socks import Socks5Proxy
import requests

PROXY_URL = os.environ['QUOTAGUARDSTATIC_URL']

with Socks5Proxy(url=PROXY_URL) as p:
    public_ip = requests.get("http://wtfismyip.com/text").text
    print(public_ip)  # prints the expected IP address
    print('end')
    try:
        connection = psycopg2.connect(user=EXTERNAL_DB_USERNAME,
                                      password=EXTERNAL_DB_PASSWORD,
                                      host=EXTERNAL_DB_HOSTNAME,
                                      port=EXTERNAL_DB_PORT,
                                      database=EXTERNAL_DB_DATABASE,
                                      cursor_factory=RealDictCursor  # to access query results like a dictionary
                                      )  # , ssl_context=True
    except psycopg2.DatabaseError as e:
        logger.error('Unable to connect to Illuminate database')
        raise e
Error is:
psycopg2.OperationalError: FATAL: no pg_hba.conf entry for host "12.345.678.910", user "username", database "databasename", SSL on
Basically, the IP address 12.345.678.910 does not match what was printed at the beginning of the context manager where the proxy is set. Do I need to set the proxy some other way so that the psycopg2 connection uses it?

Python: how to set up python-ldap to ignore referrals?

How can I avoid getting this (undocumented) exception in the following code?
import ldap
import ldap.sasl

connection = ldap.initialize('ldaps://server:636', trace_level=0)
connection.set_option(ldap.OPT_REFERRALS, 0)
connection.protocol_version = 3
sasl_auth = ldap.sasl.external()
connection.sasl_interactive_bind_s('', sasl_auth)

baseDN = 'ou=org.com,ou=xx,dc=xxx,dc=com'
filter = 'objectclass=*'

try:
    result = connection.search_s(baseDN, ldap.SCOPE_SUBTREE, filter)
except ldap.REFERRAL, e:
    print "referral"
except ldap.LDAPError, e:
    print "Ldaperror"
It happens that the baseDN given in the example is a referral. When I run this code I get "referral" as the output.
What I want is for python-ldap to just skip it or ignore it without throwing this strange exception (I cannot find documentation about it).
(This may or may not help:) The problem happened when I was searching a baseDN higher up in the tree. When I was searching 'ou=xx,dc=xxx,dc=com' it started to freeze on my production environment, while on my development environment everything works great. When I looked into it I found that it freezes on referral branches. How can I tell python-ldap to ignore referrals? The code above does not work as I want.
This is a working example, see if it helps.
import ldap

def ldap_initialize(remote, port, user, password, use_ssl=False, timeout=None):
    prefix = 'ldap'
    if use_ssl is True:
        prefix = 'ldaps'
        # ask ldap to ignore certificate errors
        ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
    if timeout:
        ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, timeout)
    ldap.set_option(ldap.OPT_REFERRALS, ldap.OPT_OFF)
    server = prefix + '://' + remote + ':' + '%s' % port
    l = ldap.initialize(server)
    l.simple_bind_s(user, password)
    return l
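A quick usage sketch (the host, credentials, base DN, and timeout below are placeholders, not from the thread):

conn = ldap_initialize('ldap.example.com', 636, 'cn=admin,dc=example,dc=com', 'secret',
                       use_ssl=True, timeout=10)
# referrals are now ignored instead of raising ldap.REFERRAL
results = conn.search_s('dc=example,dc=com', ldap.SCOPE_SUBTREE, 'objectclass=*')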

Python - try statement breaking urllib2.urlopen

I'm writing a program in Python that has to make an HTTP request while being forced onto a direct connection in order to avoid a proxy. Here is the code I use, which successfully manages this:
print "INFO: Testing API..."
proxy = urllib2.ProxyHandler({})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
req = urllib2.urlopen('http://maps.googleapis.com/maps/api/geocode/json?address=blahblah&sensor=true')
returneddata = json.loads(req.read())
I then want to add a try statement around 'req', in order to handle a situation where the user is not connected to the internet, which I have tried like so:
try:
    req = urllib2.urlopen('http://maps.googleapis.com/maps/api/geocode/json?address=blahblah&sensor=true')
except urllib2.URLError:
    print "Unable to connect etc etc"
The trouble is that by doing that, it always throws the exception, even though the address is perfectly accessible and the code works without it.
Any ideas? Cheers.

Python problems with FancyURLopener, 401, and "Connection: close"

I'm new to Python, so forgive me if I am missing something obvious.
I am using urllib.FancyURLopener to retrieve a web document. It works fine when authentication is disabled on the web server, but fails when authentication is enabled.
My guess is that I need to subclass urllib.FancyURLopener to override the get_user_passwd() and/or prompt_user_passwd() methods. So I did:
class my_opener (urllib.FancyURLopener):
    # Redefine
    def get_user_passwd(self, host, realm, clear_cache=0):
        print "get_user_passwd() called; host %s, realm %s" % (host, realm)
        return ('name', 'password')
Then I attempt to open the page:
try:
    opener = my_opener()
    f = opener.open('http://1.2.3.4/whatever.html')
    content = f.read()
    print "Got it: ", content
except IOError:
    print "Failed!"
I expect FancyURLopener to handle the 401, call my get_user_passwd(), and retry the request.
It does not; I get the IOError exception when I call "f = opener.open()".
Wireshark tells me that the request is sent, and that the server is sending a "401 Unauthorized" response with two headers of interest:
WWW-Authenticate: BASIC
Connection: close
The connection is then closed, I catch my exception, and it's all over.
It fails the same way even if I retry "f = opener.open()" after the IOError.
I have verified that my my_opener() class is being used by overriding the http_error_401() method with a simple "print 'Got 401 error'". I have also tried overriding the prompt_user_passwd() method, but it never gets called either.
I see no way to proactively specify the user name and password.
So how do I get urllib to retry the request?
Thanks.
I just tried your code on my webserver (nginx) and it works as expected:

1. GET from the urllib client
2. HTTP/1.1 401 Unauthorized from the server, with the headers
   Connection: close
   WWW-Authenticate: Basic realm="Restricted"
3. The client tries again with an Authorization header
   Authorization: Basic <Base64encoded credentials>
4. The server responds with 200 OK + content
So I guess your code is right (I tried it with Python 2.7.1), and maybe the webserver you are trying to access is not working as expected. Here is the code, tested against the free HTTP basic auth test site browserspy.dk (it seems they are using Apache; the code works as expected):
import urllib

class my_opener (urllib.FancyURLopener):
    # Redefine
    def get_user_passwd(self, host, realm, clear_cache=0):
        print "get_user_passwd() called; host %s, realm %s" % (host, realm)
        return ('test', 'test')

try:
    opener = my_opener()
    f = opener.open('http://browserspy.dk/password-ok.php')
    content = f.read()
    print "Got it: ", content
except IOError:
    print "Failed!"
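For reference, a rough Python 3 equivalent (not part of the original answer) that handles the 401-then-retry dance with urllib.request and HTTPBasicAuthHandler, reusing the same browserspy.dk test URL and 'test'/'test' credentials:

import urllib.request

# Register credentials for the target site; realm=None matches any realm
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'http://browserspy.dk/', 'test', 'test')
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))

try:
    with opener.open('http://browserspy.dk/password-ok.php') as f:
        print("Got it:", f.read())
except OSError:  # URLError and HTTPError are subclasses of OSError
    print("Failed!")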
