Pulling Change Request CSV from ServiceNow with specific columns (Fields) in Python

Currently I have
import requests
import csv

# Set the request parameters
url = 'https://dev.service-now.com/change_request_list.do?CSV&'
user = 'myuser'
pwd = 'mypass'

# Set proper headers (unsure if this is needed)
headers = {"Accept": "application/json"}

# Do the HTTP request
response = requests.get(url, auth=(user, pwd), headers=headers)
response.raise_for_status()

with open('out.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for line in response.iter_lines():
        writer.writerow(line.decode('utf-8').split(','))
This gets the data I want from ServiceNow, however it is missing certain fields. I need the 'Opened' and 'Closed' columns and am unsure how to query for them with the code I have.
Any help would be perfect! I am really new to using requests.

Here is a solution using the REST Table API that lets you control which fields you pull. I also added a sysparm_query to restrict the rows.
import requests
import csv
from urllib.parse import urlencode

url = 'https://dev.service-now.com/api/now/table/change_request'
user = 'myuser'
pwd = 'mypass'
fields = ['number', 'short_description', 'opened_at', 'closed_at']
params = {
    'sysparm_display_value': 'true',
    'sysparm_exclude_reference_link': 'true',
    'sysparm_limit': '5000',
    'sysparm_fields': ','.join(fields),
    'sysparm_query': 'sys_created_on>2020-09-15'
}
headers = {"Accept": "application/json"}
response = requests.get(url + '?' + urlencode(params),
                        auth=(user, pwd), headers=headers)
response.raise_for_status()
rows = response.json()['result']
with open('out.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(fields)
    for row in rows:
        outputrow = []
        for field in fields:
            outputrow.append(row[field])
        writer.writerow(outputrow)
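Note that sysparm_limit caps the response at 5000 rows, so a query matching more records gets truncated. A minimal sketch of paging through the rest with sysparm_offset (a standard Table API parameter), replacing the single GET above:
offset = 0
rows = []
while True:
    params['sysparm_offset'] = str(offset)
    response = requests.get(url + '?' + urlencode(params),
                            auth=(user, pwd), headers=headers)
    response.raise_for_status()
    batch = response.json()['result']
    rows.extend(batch)
    if len(batch) < int(params['sysparm_limit']):
        break  # last page reached
    offset += len(batch)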

At a glance, your code looks correct. I think you just need to update the web service URL you are using to provide the sysparm_default_export_fields=all parameter, i.e.:
dev.service-now.com/change_request_list.do?CSV&sysparm_default_export_fields=all
After that, you should get a response containing every field, including system-created fields like sys_id and created_on. Alternatively, you could create a new view in ServiceNow and provide the sysparm_view=viewName parameter in your URL.
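For example, assuming you created a view named myView (a hypothetical name) containing just the columns you need:
dev.service-now.com/change_request_list.do?CSV&sysparm_view=myView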

Related

Grabbing the octet stream data from a Graph API response

I have been working on some code to download a day's worth of Teams usage data from the Graph API. I can successfully send the token and receive the response. The response apparently contains the URL to download the CSV file in the headers. I can't seem to find the code to grab it, though.
My code at the moment is as follows.
import requests, urllib.parse, json, csv, os

client_id = urllib.parse.quote_plus('XXXX')
client_secret = urllib.parse.quote_plus('XXXX')
tenant = urllib.parse.quote_plus('XXXX')
auth_uri = 'https://login.microsoftonline.com/' + tenant \
    + '/oauth2/v2.0/token'
auth_body = 'grant_type=client_credentials&client_id=' + client_id \
    + '&client_secret=' + client_secret \
    + '&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default'
authorization = requests.post(auth_uri, data=auth_body, headers={'Content-Type': 'application/x-www-form-urlencoded'})
token = json.loads(authorization.content)['access_token']
graph_uri = 'https://graph.microsoft.com/v1.0/reports/getTeamsUserActivityUserDetail(date=2023-01-22)'
response = requests.get(graph_uri, headers={'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token})
print(response.headers)
Is there an easy way to parse the URL from the header and obtain the CSV file?
REF: https://learn.microsoft.com/en-us/graph/api/reportroot-getteamsuseractivityuserdetail?view=graph-rest-beta
response.headers is a case-insensitive dictionary of response headers, so you should be able to get the Location header this way:
locationUrl = response.headers['location']
# retrieve the data from the URL using a GET request
response = requests.get(locationUrl)
# write the response content to a file
with open("data.csv", 'wb') as f:
    f.write(response.content)
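One caveat: requests follows redirects by default, so the 302 response that carries the Location header may already have been consumed by the time you inspect response.headers. A minimal sketch that disables redirects so the header stays visible, reusing graph_uri and token from the question:
# return the 302 response as-is instead of following it
response = requests.get(graph_uri,
                        headers={'Authorization': 'Bearer ' + token},
                        allow_redirects=False)
locationUrl = response.headers['Location']
csv_response = requests.get(locationUrl)  # the download URL is preauthenticated
with open("data.csv", 'wb') as f:
    f.write(csv_response.content)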

X-WWW Put Request List inside of Dict

Attempting to write a PUT request in Python that keeps getting denied; the issue seems to be that the request isn't accepting the list as the value for the dict.
Any suggestions on how I can get this to be accepted?
import requests

key = 'Bearer abc123'
url = 'www.my_url.com'
headers = {'Content-Type': 'application/x-www-form-urlencoded',
           'Accept': 'application/json',
           'Authorization': key}
data = {}
data['first_name'] = 'John'
data['last_name'] = 'Doe'
data['employee_job[roles]'] = [{'name': 'CEO'}]
r = requests.put(url, data=data, headers=headers)
If the server accepts PUT requests, then the problem may be the JSON format.
You could try this: send the data as JSON directly rather than form-encoded.
More information would be needed to pin down the problem; can you provide the API documentation?
import requests

key = 'Bearer abc123'
url = 'www.my_url.com'
headers = {
    'Accept': 'application/json',
    'Authorization': key}
data = {}
data['first_name'] = 'John'
data['last_name'] = 'Doe'
data['employee_job'] = {'roles': [{'name': 'CEO'}]}
r = requests.put(url, json=data, headers=headers)
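For reference, json=data makes requests serialize the dict and set the Content-Type header itself, which is why the form-encoded Content-Type was dropped from headers above. A rough sketch of the equivalent by hand:
import json

# json=data is roughly what this does under the hood
r = requests.put(url,
                 data=json.dumps(data),
                 headers={**headers, 'Content-Type': 'application/json'})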

Can't get full table from requests python

I'm trying to get the whole table from this website: https://br.investing.com/commodities/aluminum-historical-data
But when I send this code:
with requests.Session() as s:
    r = s.post('https://br.investing.com/commodities/aluminum-historical-data',
               headers={"curr_id": "49768", "smlID": "300586", "header": "Alumínio Futuros Dados Históricos",
                        'User-Agent': 'Mozilla/5.0', 'st_date': '01/01/2017', 'end_date': '29/09/2018',
                        'interval_sec': 'Daily', 'sort_col': 'date', 'sort_ord': 'DESC', 'action': 'historical_data'})
    bs2 = BeautifulSoup(r.text, 'lxml')
    tb = bs2.find('table', {"id": "curr_table"})
It only returns a piece of the table, not the whole date range I just filtered.
[Screenshot of the POST request in the browser's Network tab omitted.]
Can anyone help me get the whole table I just filtered?
You made two mistakes in your code.
The first one is the URL.
You need to use the correct URL to request data from investing.com.
Your current URL is 'https://br.investing.com/commodities/aluminum-historical-data'.
However, if you open the inspector and click 'Network', the request URL is https://br.investing.com/instruments/HistoricalDataAjax.
Your second mistake is in s.post(...). As Federico Rubbi noted above, what you assigned to headers must be assigned to data instead.
With those mistakes resolved, only one step remains: add {'X-Requested-With': 'XMLHttpRequest'} to your_headers. Since you have already checked the Network tab in the HTML inspector, you can probably see why {'X-Requested-With': 'XMLHttpRequest'} is needed.
So the entire code should be as follows.
import requests
import bs4 as bs

with requests.Session() as s:
    url = 'https://br.investing.com/instruments/HistoricalDataAjax'  # Making up for the first mistake.
    your_headers = {'User-Agent': 'Mozilla/5.0'}
    s.get(url, headers=your_headers)
    c_list = s.cookies.get_dict().items()
    cookie_list = [key + '=' + value for key, value in c_list]
    cookie = ','.join(cookie_list)
    your_headers = {**{'X-Requested-With': 'XMLHttpRequest'}, **your_headers}
    your_headers['Cookie'] = cookie
    data = {}  # Your data. Making up for the second mistake.
    response = s.post(url, data=data, headers=your_headers)
The problem is that you're passing form data as headers.
You have to send it with the data keyword argument of requests.Session.post:
from bs4 import BeautifulSoup
import requests

with requests.Session() as session:
    url = 'https://br.investing.com/commodities/aluminum-historical-data'
    data = {
        "curr_id": "49768",
        "smlID": "300586",
        "header": "Alumínio Futuros Dados Históricos",
        'st_date': '01/01/2017',
        'end_date': '29/09/2018',
        'interval_sec': 'Daily',
        'sort_col': 'date',
        'sort_ord': 'DESC',
        'action': 'historical_data',
    }
    # 'User-Agent' belongs in the headers, not in the form data
    your_headers = {'User-Agent': 'Mozilla/5.0'}  # your other headers here
    response = session.post(url, data=data, headers=your_headers)
    bs2 = BeautifulSoup(response.text, 'lxml')
    tb = bs2.find('table', {"id": "curr_table"})
I'd also recommend including your headers (especially a User-Agent) in the POST request, because the site may not allow bots; sending them makes the bot harder to detect.
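As a follow-up usage sketch, assuming the table was found (tb is not None), the filtered rows can then be pulled out of the parsed table:
# iterate over the table rows and collect the cell text
for tr in tb.find_all('tr'):
    cells = [td.get_text(strip=True) for td in tr.find_all('td')]
    if cells:
        print(cells)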

python requests post file using multipart form parameters [duplicate]

I'm performing the simple task of uploading a file using the Python requests library. I searched Stack Overflow and no one seemed to have the same problem, namely, that the file is not received by the server:
import requests

url = 'http://nesssi.cacr.caltech.edu/cgi-bin/getmulticonedb_release2.cgi/post'
files = {'files': open('file.txt', 'rb')}
values = {'upload_file': 'file.txt', 'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
r = requests.post(url, files=files, data=values)
I'm filling the value of the 'upload_file' keyword with my filename, because if I leave it blank, it says
Error - You must select a file to upload!
And now I get
File file.txt of size bytes is uploaded successfully!
Query service results: There were 0 lines.
Which comes up only if the file is empty. So I'm stuck as to how to send my file successfully. I know that the file works, because if I go to this website and manually fill in the form, it returns a nice list of matched objects, which is what I'm after. I'd really appreciate any hints.
Some other related threads (which don't answer my problem):
Send file using POST from a Python script
http://docs.python-requests.org/en/latest/user/quickstart/#response-content
Uploading files using requests and send extra data
http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow
If upload_file is meant to be the file, use:
files = {'upload_file': open('file.txt','rb')}
values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
r = requests.post(url, files=files, data=values)
and requests will send a multi-part form POST body with the upload_file field set to the contents of the file.txt file.
The filename will be included in the mime header for the specific field:
>>> import requests
>>> open('file.txt', 'wb') # create an empty demo file
<_io.BufferedWriter name='file.txt'>
>>> files = {'upload_file': open('file.txt', 'rb')}
>>> print(requests.Request('POST', 'http://example.com', files=files).prepare().body.decode('ascii'))
--c226ce13d09842658ffbd31e0563c6bd
Content-Disposition: form-data; name="upload_file"; filename="file.txt"
--c226ce13d09842658ffbd31e0563c6bd--
Note the filename="file.txt" parameter.
You can use a tuple for the files mapping value, with between 2 and 4 elements, if you need more control. The first element is the filename, followed by the contents, an optional content-type header value, and an optional mapping of additional headers:
files = {'upload_file': ('foobar.txt', open('file.txt','rb'), 'text/x-spam')}
This sets an alternative filename and content type, leaving out the optional headers.
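For completeness, a sketch of the full 4-element form; the extra header name here is a hypothetical example:
files = {'upload_file': ('foobar.txt', open('file.txt', 'rb'), 'text/x-spam',
                         {'X-Example-Header': 'value'})}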
If you mean for the whole POST body to be taken from a file (with no other fields specified), then don't use the files parameter; just post the file directly as data. You may then want to set a Content-Type header too, as none will be set otherwise. See Python requests - POST data from a file.
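A minimal sketch of that raw-body variant; the text/plain content type is an assumption, use whatever matches your file:
with open('file.txt', 'rb') as f:
    r = requests.post(url, data=f,
                      headers={'Content-Type': 'text/plain'})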
(2018) The newer releases of the Python requests library have simplified this process; we can use the files parameter to signal that we want to upload a multipart-encoded file:
url = 'http://httpbin.org/post'
files = {'file': open('report.xls', 'rb')}
r = requests.post(url, files=files)
r.text
Client Upload
If you want to upload a single file with the Python requests library, requests supports streaming uploads, which allow you to send large files or streams without reading them into memory.
with open('massive-body', 'rb') as f:
    requests.post('http://some.url/streamed', data=f)
Server Side
On the server side, save the stream to a file without loading it into memory. The following is an example using Flask file uploads.
@app.route("/upload", methods=['POST'])
def upload_file():
    from werkzeug.datastructures import FileStorage
    # 'filename' must be determined and sanitized by your app
    FileStorage(request.stream).save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
    return 'OK', 200
Or use werkzeug's form data parsing, as mentioned in a fix for the issue of large file uploads eating up memory, to avoid using memory inefficiently on large uploads (e.g., a 22 GiB file in ~60 seconds with memory usage constant at about 13 MiB).
@app.route("/upload", methods=['POST'])
def upload_file():
    def custom_stream_factory(total_content_length, filename, content_type, content_length=None):
        import tempfile
        tmpfile = tempfile.NamedTemporaryFile('wb+', prefix='flaskapp', suffix='.nc')
        app.logger.info("start receiving file ... filename => " + str(tmpfile.name))
        return tmpfile
    import werkzeug, flask
    stream, form, files = werkzeug.formparser.parse_form_data(flask.request.environ, stream_factory=custom_stream_factory)
    for fil in files.values():
        app.logger.info(" ".join(["saved form name", fil.name, "submitted as", fil.filename, "to temporary file", fil.stream.name]))
        # Do whatever you need with the stored file at `fil.stream.name`
    return 'OK', 200
You can send any file via a POST API call; you just need to pass files={'any_key': fobj}:
import requests

url = "https://request-url.com"
# Note: do not set a Content-Type header yourself here. With files=,
# requests must generate the multipart/form-data content type (including
# the boundary) itself; overriding it breaks the upload.
with open(filepath, 'rb') as fobj:  # filepath is the path of the file to send
    response = requests.post(url, files={'file': fobj})
print("Status Code", response.status_code)
print("JSON Response ", response.json())
Martijn Pieters' answer is correct; however, I wanted to add a bit of context to data= and also to the other side, in the Flask server, for the case where you are trying to upload files and a JSON payload.
From the request side, this works as Martijn describes:
files = {'upload_file': open('file.txt','rb')}
values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
r = requests.post(url, files=files, data=values)
However, on the Flask side (the receiving webserver on the other side of this POST), I had to use request.form:
@app.route("/sftp-upload", methods=["POST"])
def upload_file():
    if request.method == "POST":
        # the mimetype here isn't application/json
        # see here: https://stackoverflow.com/questions/20001229/how-to-get-posted-json-in-flask
        body = request.form
        print(body)  # <- immutable dict
body = request.get_json() will return nothing, and body = request.get_data() will return a blob containing lots of things like the filename etc.
Here's the bad part: on the client side, changing data={} to json={} results in this server not being able to read the KV pairs! As in, this will result in an empty {} body above:
r = requests.post(url, files=files, json=values)  # No!
This is bad because the server has no control over how the user formats the request, and json= is going to be the habit of requests users.
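A minimal Flask-side sketch of reading both parts, assuming the client sends them with files= and data= as above (the upload destination is a hypothetical path):
from flask import Flask, request

app = Flask(__name__)

@app.route("/sftp-upload", methods=["POST"])
def upload_file():
    values = request.form.to_dict()         # the data= fields
    upload = request.files['upload_file']   # the files= part
    upload.save('/tmp/' + upload.filename)  # hypothetical destination path
    return 'OK', 200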
Upload:
with open('file.txt', 'rb') as f:
    files = {'upload_file': f.read()}
    values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
    r = requests.post(url, files=files, data=values)
Download (Django):
with open('file.txt', 'wb') as f:
    f.write(request.FILES['upload_file'].file.read())
Regarding the answers given so far, there was always something missing that prevented them from working on my side. So let me show you what worked for me:
import os
import requests

API_ENDPOINT = "http://localhost:80"
access_token = "sdfJHKsdfjJKHKJsdfJKHJKysdfJKHsdfJKHs" # TODO: get fresh token here

def upload_engagement_file(filepath):
    url = API_ENDPOINT + "/api/files" # add any URL parameters if needed
    hdr = {"Authorization": "Bearer %s" % access_token}
    with open(filepath, "rb") as fobj:  # the with block closes the file for us
        file_obj = fobj.read()
    file_basename = os.path.basename(filepath)
    file_to_upload = {"file": (str(file_basename), file_obj)}
    finfo = {"fullPath": filepath}
    upload_response = requests.post(url, headers=hdr, files=file_to_upload, data=finfo)
    # print("Status Code ", upload_response.status_code)
    # print("JSON Response ", upload_response.json())
    return upload_response
Note that requests.post(...) needs
a url parameter, containing the full URL of the API endpoint you're calling, built from API_ENDPOINT, assuming we have an http://localhost:80/api/files endpoint to POST a file to
a headers parameter, containing at least the authorization (bearer token)
a files parameter, taking the name of the file plus the entire file content
a data parameter, taking just the path and file name
Installation required (console):
pip install requests
What you get back from the function call is a response object containing a status code and also the full error message in JSON format. The commented print statements at the end of upload_engagement_file show how you can access them.
Note: some useful additional information about the requests library can be found in its documentation.
Some may need to upload via a PUT request, and this is slightly different than posting data. It is important to understand how the server expects the data in order to form a valid request. A frequent source of confusion is sending multipart form data when it isn't accepted. This example uses basic auth and updates an image via a PUT request.
import requests

url = 'https://foobar.com/api/image-1'  # scheme added; requests needs a full URL
basic = requests.auth.HTTPBasicAuth('someuser', 'password123')
# Setting the appropriate header is important and will vary based
# on what you upload
headers = {'Content-Type': 'image/png'}
with open('image-1.png', 'rb') as img_1:
    r = requests.put(url, auth=basic, data=img_1, headers=headers)
While the requests library makes working with HTTP requests a lot easier, some of its magic and convenience obscures how to craft more nuanced requests.
On Ubuntu you can apply this approach: save the file to some (temporary) location, then open it and send it to the API.
import os
import requests
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

# f1 is the uploaded file object and token the auth token from the surrounding view
path = default_storage.save('static/tmp/' + f1.name, ContentFile(f1.read()))
path12 = os.path.join(os.getcwd(), "static/tmp/" + f1.name)
data = {}  # can be anything you want to pass along with the file
header = {"Content-Disposition": "attachment; filename=" + f1.name, "Authorization": "JWT " + token}
with open(path12, 'rb') as file1:
    # assumed intent: the original call passed its arguments positionally and
    # never used the opened file; send the file as multipart plus any extra fields
    res = requests.post(url, files={'file': file1}, data=data, headers=header)

Python append json to json file in a while loop

I'm trying to get all users information from GitHub API using Python Requests library. Here is my code:
import requests
import json

url = 'https://api.github.com/users'
token = "my_token"
headers = {'Authorization': 'token %s' % token}
r = requests.get(url, headers=headers)
users = r.json()
with open('users.json', 'w') as outfile:
    json.dump(users, outfile)
I can dump the first page of users into a JSON file for now. I can also find the 'next' page's URL:
next_url = r.links['next'].get('url')
r2 = requests.get(next_url, headers=headers)
users2 = r2.json()
Since I don't know how many pages there are yet, how can I append the 2nd, 3rd, ... pages to 'users.json' sequentially in a while loop, as fast as possible?
Thanks!
First, you need to open the file in 'a' mode; otherwise subsequent writes will overwrite everything:
import requests
import json

url = 'https://api.github.com/users'
token = "my_token"
headers = {'Authorization': 'token %s' % token}
outfile = open('users.json', 'a')
while True:
    r = requests.get(url, headers=headers)
    users = r.json()
    json.dump(users, outfile)
    # on the last page, GitHub sends no 'next' link, so use .get() to avoid a KeyError
    url = r.links.get('next', {}).get('url')
    if not url:
        break
outfile.close()
Append the data you get from each requests query to a list and move on to the next query.
Once you have all of the data you want, concatenate the list into a file or keep it as an object. You can also use threading to run multiple queries in parallel, but most likely the API enforces rate limiting.
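A minimal sketch of that approach, reusing the url, headers, and imports from the question; one advantage over appending page by page is that users.json stays a single valid JSON array:
all_users = []
while url:
    r = requests.get(url, headers=headers)
    all_users.extend(r.json())  # accumulate each page in memory
    url = r.links.get('next', {}).get('url')  # None when there is no next page

# a single dump at the end keeps users.json valid JSON
with open('users.json', 'w') as outfile:
    json.dump(all_users, outfile)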
