I have a remote API I hit with POST requests. I have two scripts that do this job. One works; the other I need to debug.
The one that works is more complex and in Python, using the requests package:
import requests
from requests.auth import HTTPBasicAuth

r = requests.post("http://thing.com/",
                  data={"param1": "foo"},
                  auth=HTTPBasicAuth("user@domain.com", "password"))
This works great. The other script, the CLI one, theoretically does the identical freaking thing and does not work. The CLI script prints out the URL it's trying to use, and I'd love to get the working Python script to tell me what the hell URL it is trying to use so I can compare. How do I make requests tell me the fully formatted URL it's using?
You can add the following to print out debug info about all your requests:
import logging
logging.basicConfig(level=logging.DEBUG)
You'll get output like
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): <URL>
DEBUG:requests.packages.urllib3.connectionpool:"POST /oauth/token HTTP/1.1" 200 None
If you want to print other information, you can print out the objects you're passing in or implement some of the requests event hooks.
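If all you need is the exact URL (plus headers and body) that went out, you can also read them off the PreparedRequest attached to the response. A minimal sketch, reusing the placeholder values from the question:

import requests
from requests.auth import HTTPBasicAuth

r = requests.post("http://thing.com/",
                  data={"param1": "foo"},
                  auth=HTTPBasicAuth("user@domain.com", "password"))

# r.request is the PreparedRequest that was actually sent
print(r.request.url)      # fully formatted URL, including any query string
print(r.request.headers)  # outgoing headers, e.g. the Authorization header
print(r.request.body)     # encoded POST body, e.g. "param1=foo"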
Related
I am trying to automate the trading strategy that I executed manually before. This requires communicating with my broker through an API. I am authorizing through HTTP basic auth. To test I tried to make an API request to get information about funds in my account.
At first, I was getting 401 responses and it turned out that I was using the wrong identification information.
After I fixed this issue, all API requests that I am making are giving me 404 responses.
An example
import requests
from requests.auth import HTTPBasicAuth
response = requests.get("https://api-demo.exante.eu/md/{version}/accounts", auth=HTTPBasicAuth
('name', 'pass'))
print(response)
After this, I tried some code online to check whether or not there are other problems. I tried this
https://gist.github.com/rshrc/127ba2c20df74263d71bc5a5595c8969
and this also gives me 404.
Link to my broker's API documentation:
https://api-live.exante.eu/api-docs/#section/API-versions
Does anyone know where the problem might be? Directions would be helpful. Thanks!
It looks like you're not substituting a value for the {version} placeholder, so the literal string {version} is being sent as part of the path. Assign a version and don't forget to format the URL string, e.g. by putting an f before it to make it an f-string. This should work:
import requests
from requests.auth import HTTPBasicAuth
version = "3.0"
response = requests.get(f"https://api-demo.exante.eu/md/{version}/accounts", auth=HTTPBasicAuth
('name', 'pass'))
print(response)
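As a quick sanity check after running the snippet above, printing the URL that requests actually sent makes this kind of mistake obvious:

# both attributes exist on any requests Response object
print(response.request.url)   # should show /md/3.0/accounts, not /md/{version}/accounts
print(response.status_code)   # 404 while the path is wrong, 200 once it's fixed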
I have several Python scripts that are used from the CLI. Now I have been asked if I could provide a web API to perform some of the work normally done from the CLI. I would like to respond with JSON formatted data. I know little about JSON or API's.
How do I retrieve the query data from the http request?
How do I create the HTTP headers for the response from my script?
Given the following URL, how can I respond with a JSON reply of the name "Steve"?
http://myserver.com/api.py?query=who
I already have a functioning Apache web server running, and can serve up HTML and CGI scripts. It's simply coding the Python script that I need some help with. I would prefer not to use a framework, like Flask.
A simple code example would be appreciated.
The following is the Python code that I've come up with, but I know it's not the correct way to do this, and it doesn't use JSON:
#!/usr/local/bin/python3.7
import cgi # CGI module
# Set page headers and start page
print("Content-type: text/plain")
print("X-Content-Type-Options: nosniff\x0d\x0a\x0d\x0a")
# Form defaults
query_arg = None
# Get form values, if any
form = cgi.FieldStorage()
# Get the rest of the values if the form was submitted
if "query" in form.keys():
query_arg = form["query"].value
if query_arg == "who":
print("Steve", end="", flush=True)
You are trying to build an HTTP request handler with core Python code. In my opinion that's not a good idea: there are multiple security scenarios attached to it, and with cross-server requests it's a bit difficult to handle all the request scenarios yourself. I'd suggest using Flask, which is very lightweight and gives you a pre-built routing mechanism to handle all kinds of requests in very little code. Below is a snippet that generates an HTTP JSON response; hope it helps.
import flask
from flask import jsonify

class Utils:
    def make_response(self):
        # jsonify builds a flask.Response with a JSON body and
        # the Content-Type header set to application/json
        response = jsonify({
            'message': 'success',
        })
        response.status_code = 200
        return response
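That said, the question explicitly prefers not to use a framework, and for something this small plain CGI can return JSON just fine. A minimal sketch, assuming Python 3 and the same Apache CGI setup as in the question, that answers ?query=who with the name "Steve" as JSON:

#!/usr/local/bin/python3.7
import cgi
import json

# JSON should be served as application/json; the blank line ends the headers
print("Content-Type: application/json")
print("X-Content-Type-Options: nosniff")
print()

form = cgi.FieldStorage()
query_arg = form.getvalue("query")  # None if "query" wasn't supplied

if query_arg == "who":
    body = {"name": "Steve"}
else:
    body = {"error": "unknown query"}

print(json.dumps(body))

The only real changes from the script in the question are the Content-Type header and json.dumps for the body.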
I am trying to create a very basic app that will be able to connect to a web server which hosts my college assignments, results, and more, and notify me whenever there's something new on it. Currently I am trying to get the hang of the requests module, but I am not able to log in, as the server uses this kind of authentication and gives me error 401 Unauthorized.
I tried searching for how to authenticate to web servers and tried using sockets, with no luck. Could you please help me find out how to do this?
EDIT: I am using Python 3.4
After inspecting the headers in the response for that URL, I think the server is trying to use NTLM authentication.
Try installing requests-ntlm (e.g. with pip install requests_ntlm) and then doing this:
import requests
from requests_ntlm import HttpNtlmAuth
requests.get('http://moodle.mcast.edu.mt:8085/',
auth=HttpNtlmAuth('domain\\username', 'password'))
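If you want to confirm for yourself which scheme the server expects, the 401 response advertises it in the WWW-Authenticate header. A quick check, reusing the URL from the snippet above:

import requests

r = requests.get('http://moodle.mcast.edu.mt:8085/')  # no credentials on purpose
print(r.status_code)                      # expect 401
print(r.headers.get('WWW-Authenticate'))  # e.g. "NTLM", "Negotiate" or "Basic realm=..."

If it prints NTLM or Negotiate, requests_ntlm is the right tool; if it prints Basic, plain HTTPBasicAuth is enough.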
You need to attach a simple authentication header within the socket request headers.
Example:
import base64
mySocket.send('GET / HTTP/1.1\r\nAuthorization: Basic %s\r\n\r\n' % base64.b64encode('user:pass'))
Python 3.x (note that the base64 result has to be decoded before formatting it into the string, otherwise you end up sending the literal b'...' representation):
import base64

template = 'GET / HTTP/1.1\r\nAuthorization: Basic %s\r\n\r\n'
credentials = base64.b64encode(bytes('user:pass', 'UTF-8')).decode('ascii')
mySocket.send(bytes(template % credentials, 'UTF-8'))
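For completeness, here is a rough end-to-end Python 3 sketch of the same idea with mySocket actually created (example.com is a placeholder host); note that HTTP/1.1 also expects a Host header:

import base64
import socket

host = 'example.com'  # placeholder
credentials = base64.b64encode(b'user:pass').decode('ascii')
request = (
    'GET / HTTP/1.1\r\n'
    'Host: %s\r\n'
    'Authorization: Basic %s\r\n'
    'Connection: close\r\n'
    '\r\n'
) % (host, credentials)

with socket.create_connection((host, 80)) as mySocket:
    mySocket.sendall(request.encode('utf-8'))
    response = b''
    while True:
        chunk = mySocket.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.decode('utf-8', errors='replace'))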
They might not supply a programmatic API with which to authorize requests. If that is the case, you could try using selenium to manually open a browser and fill the details in for you. Selenium has handling for alert boxes too apparently, though I haven't used it myself.
Basically I need a program that, given a URL, downloads a file and saves it. I know this should be easy, but there are a couple of drawbacks here...
First, it is part of a tool I'm building at work; I have everything else besides that. The URL is HTTPS, and it's one of those URLs you would paste into your browser and get a pop-up asking whether you want to open or save the file (.txt).
Second, I'm a beginner at this, so if there's info I'm not providing please ask me. :)
I'm using Python 3.3 by the way.
I tried this:
import urllib.request
response = urllib.request.urlopen('https://websitewithfile.com')
txt = response.read()
print(txt)
And I get:
urllib.error.HTTPError: HTTP Error 401: Authorization Required
Any ideas? Thanks!!
You can do this easily with the requests library.
import requests
response = requests.get('https://websitewithfile.com/text.txt', verify=False, auth=('user', 'pass'))
print(response.text)
To save the file you would write:
with open('filename.txt', 'w') as fout:
    fout.write(response.text)
(I would suggest you always set verify=True in the requests.get() call.)
See the requests documentation for more details.
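One more note, since the goal is saving a downloaded file: response.text is decoded text. For binary files, or just to be safe with larger downloads, writing the raw bytes in binary mode or streaming the download is the more robust pattern. A sketch along those lines, reusing the placeholder URL and credentials from above:

import requests

url = 'https://websitewithfile.com/text.txt'
response = requests.get(url, auth=('user', 'pass'), stream=True)
response.raise_for_status()  # turn 401/404 into an exception instead of bad file contents

with open('filename.txt', 'wb') as fout:
    for chunk in response.iter_content(chunk_size=8192):
        fout.write(chunk)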
Doesn't the browser also ask you to sign in? Then you need to repeat the request with the added authentication like this:
Python urllib2, basic HTTP authentication, and tr.im
Equally good: Python, HTTPS GET with basic authentication
If you don't have the requests module, then the code below works for Python 2.6 or later. Not sure about 3.x.
import urllib
testfile = urllib.URLopener()
testfile.retrieve("https://randomsite.com/file.gz", "/local/path/to/download/file")
You can try this solution: https://github.qualcomm.com/graphics-infra/urllib-siteminder
import siteminder
import getpass

url = 'https://XYZ.dns.com'
r = siteminder.urlopen(url, getpass.getuser(), getpass.getpass(), "dns.com")
# getpass prompts on the terminal: Password: <enter your password>

data = r.read()
# or, to parse HTML tables directly (requires "import pandas as pd"):
# data = pd.read_html(r.read())
This is a newbie problem with python, advice is much appreciated.
no-ip.com provides an easy way to update a computer's changing ip-address, simply open the url
http://user:password@dynupdate.no-ip.com/nic/update?hostname=my.host.name
...both http and https work when entered in firefox. I tried to implement that in a script residing in "/etc/NetworkManager/dispatcher.d" to be used by Network Manager on a recent version of Ubuntu.
What works is the python script:
from urllib import urlopen
urlopen("http://user:password@dynupdate.no-ip.com/nic/update?hostname=my.host.name")
What I want to have is the same with "https", which does not work as easily. Could anyone, please,
(1) show me what the script should look like for https,
(2) give me some keywords, which I can use to learn about this.
(3) perhaps even explain why it does not work any more when the script is changed to using "urllib2":
from urllib2 import urlopen
urlopen("http://user:password@dynupdate.no-ip.com/nic/update?hostname=my.host.name")
Thank you!
The user:password part isn't in the actual URL; it's a shortcut for HTTP authentication. The browser's URL parsing library filters it out and sends it as an Authorization header instead. In urllib2, you want to do the same thing yourself:
import base64
import urllib2

user, password = 'john_smith', '123456'

# build the request without credentials in the URL,
# then attach them as a Basic auth header instead
request = urllib2.Request('https://dynupdate.no-ip.com/nic/update?hostname=my.host.name')
auth = base64.b64encode(user + ':' + password)
request.add_header('Authorization', 'Basic ' + auth)
urllib2.urlopen(request)
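An alternative that avoids hand-building the header is urllib2's own basic-auth handler, which also covers the https case the question asks about. A sketch with the same placeholder credentials:

import urllib2

url = 'https://dynupdate.no-ip.com/nic/update?hostname=my.host.name'
user, password = 'john_smith', '123456'

# register the credentials, then let urllib2 answer the 401 challenge itself
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, user, password)
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))

print(opener.open(url).read())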