I am attempting to take the printed output of my code and have it appended to a file in Python. Here is an example of the code, test.py:
import http.client
conn = http.client.HTTPSConnection("xxxxxxxxxxxx")
headers = {
    'Content-Type': "xxxxxxxx",
    'Accept': "xxxxxxxxxx",
    'Authorization': "xxxxxxxxxxxx"
}
conn.request("GET", "xxxxxxxxxxxx", headers=headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
This prints a huge amount of text to my console.
My goal is to take that output and send it to an arbitrary file. For example, on my terminal I can run python3 test.py >> file.txt, which sends the output to that text file.
However, is there a way to do something similar to test.py >> file.txt from within the Python code itself?
You could open the file in "a" (i.e., append) mode and then write to it:
with open("file.txt", "a") as f:
f.write(data.decode("utf-8"))
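If you would rather redirect everything that print emits (closer to the shell's >> redirection) than write the decoded data explicitly, one option is contextlib.redirect_stdout. A minimal sketch, assuming the same data variable and the file name file.txt:

import contextlib

# Everything printed inside this block goes to file.txt (append mode),
# mimicking `python3 test.py >> file.txt` from inside the script.
with open("file.txt", "a") as f, contextlib.redirect_stdout(f):
    print(data.decode("utf-8"))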
You can use the built-in open() function to write to a file.
with open("test.txt", "w") as f:
f.write(decoded)
This will take the decoded text and put it into a file named test.txt.
import http.client
conn = http.client.HTTPSConnection("xxxxxxxxxxxx")
headers = {
    'Content-Type': "xxxxxxxx",
    'Accept': "xxxxxxxxxx",
    'Authorization': "xxxxxxxxxxxx"
}
conn.request("GET", "xxxxxxxxxxxx", headers=headers)
res = conn.getresponse()
data = res.read()
decoded = data.decode("utf-8")
print(decoded)
with open("test.txt", "w") as f:
f.write(decoded)
I have a list of domains and a Grafana dashboard template. The script needs to replace the domain name in the template and POST it to Grafana.
I got stuck on the last step, the POST request.
#!/usr/bin/env python3
import re
import sys
import os
import json
import requests
from requests.structures import CaseInsensitiveDict
import CloudFlare
# Curl requests
url = "http://localhost:8080/api/snapshots"
headers = CaseInsensitiveDict()
headers["Content-Type"] = "application/json"
headers["Authorization"] = "Bearer glsa_W2d"
zonefile = "zones.txt"
dashboard_template = "report_001_test.json"
sys.path.insert(0, os.path.abspath('..'))
# Here I prepare a dashboard JSON file for each domain (zone)
with open(zonefile, 'r') as zone_file:
    for zone in zone_file.read().split('\n'):
        with open(dashboard_template, 'r') as templatefile:
            template = templatefile.read()
        with open(f"{zone}.json", 'w+') as writefile:
            writefile.write(template.replace('replace_zone', zone))

# This code creates a snapshot in Grafana from the template and saves the output to a file.
with open('report_001.json', 'rb') as fp:
    data = fp.read()
resp = requests.post(url, headers=headers, data=data)
with open('report.json', 'w') as file_snap:
    file_snap.write(resp.text)
I want to avoid writefile, i.e., writing the template out to files, and instead use the modified template as the request data and save the response into a file.
zones.txt:
domain0.com
domain1.com
domain2.com
report_001_test.json (just an example; it's not a real dashboard):
{
    "dashboard": {
        "list": [
            {
                "current": {
                    "selected": false,
                    "text": "replace_zone",
                    "value": "replace_zone"
                }
            }
        ]
    }
}
UPD
A possible solution might look like this:
with open(zonefile, 'r') as zone_file:
    for zone in zone_file.read().split('\n'):
        with open(dashboard_template, 'r') as templatefile:
            template = templatefile.read()
        data = template.replace('replace_zone', zone)
        resp = requests.post(url, headers=headers, data=data)
        with open('report.json', 'a') as file_snap:
            file_snap.write(resp.text)
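One detail worth guarding against: splitting the file contents on '\n' can yield empty strings (for example from a trailing newline), which would post a template with an empty zone name. A minimal sketch that skips blank lines and reads the template only once, reusing the url, headers, zonefile and dashboard_template defined above:

with open(dashboard_template, 'r') as templatefile:
    template = templatefile.read()

with open(zonefile, 'r') as zone_file:
    zones = [line.strip() for line in zone_file if line.strip()]

for zone in zones:
    data = template.replace('replace_zone', zone)
    resp = requests.post(url, headers=headers, data=data)
    # Append each snapshot response so one file collects all results
    with open('report.json', 'a') as file_snap:
        file_snap.write(resp.text + '\n')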
import requests
import csv
url = "https://paneledgesandbox.//API/v3/surveys/id/import-responses"
with open('Cars.csv', 'r') as file:
    payload = csv.reader(file)
    print(payload)
headers = {
    'X-API-TOKEN': 'zzzz',
    'Content-Type': 'text/csv',
    'Cookie': 'CPSessID=bb5b5cf55ceff2c2b8237810db3ca3a7; XSRF-TOKEN=XSRF_0wG1WMbV3G0ktBb'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
I am getting an error while trying to read a CSV file in a POST call using Python.
I am not sure where exactly the issue is, given that with the with statement the file closes automatically.
In
payload = csv.reader(file)
you are not actually reading your CSV; you are only creating a lazy reader object, which can no longer be consumed once the file is closed at the end of the with block.
You need to read the data instead
payload = "\n".join([", ".join(line) for line in csv.reader(file)])
I'm having problems with a CSV file upload with requests.post method in python 3.
from requests.auth import HTTPBasicAuth
import csv
import requests
user='myuser'
pw='mypass'
advertiserid='10550'
campaignid='12394'
url='http://example.example.com/api/edc/upload/'+advertiserid+'/'+campaignid+'/'+'?encoding=utf-8&fieldsep=%3B&decimalsep=.&date=DD%2FMM%2FYYYY&info=1&process=1'
csv="myfile.csv"
with open(csv, 'r') as f:
    r = requests.post(url, files={csv: f})
print(r)
The output is 'Response [502]'
Any idea of what could be the problem?
Many thanks!
You can refer to the documentation of the Requests library here: post-a-multipart-encoded-file
Change your request line to:
r = requests.post(url, files={'report.csv': f})
Try opening it in binary mode, and with a specific 'text/csv' MIME type:
with open(csv, 'rb') as f:
    r = requests.post(url, files={'file': ('myfile.csv', f, 'text/csv', {'Expires': '0'})})
    print(r.text)
If it still does not work, try without binary mode, but keep the rest.
If it still does not work, print the exact error message. Also, 502 (Bad Gateway) might just mean that you're not targeting the right URL (you're not actually posting to example.com, right?).
csv="myfile.csv"
url='http://example.example.com/api/edc/upload/'+advertiserid+'/'+campaignid+'/'+'?encoding=utf-8&fieldsep=%3B&decimalsep=.&date=DD%2FMM%2FYYYY&info=1&process=1'
files = {'upload_file': open(csv,'rb')}
r = requests.post(url, files=files)
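Since the open(csv, 'rb') call above is never explicitly closed, a with block is a slightly safer variant of the same request:

# Same upload, but the file handle is closed automatically
with open(csv, 'rb') as f:
    files = {'upload_file': f}
    r = requests.post(url, files=files)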
Imagine I have a REST API to import a CSV file (as a multipart-encoded file);
the corresponding Python request should look like the code below.
import requests
hierarchy_file_name = '/Users/herle/ws/LookoutLab/data/monitor/Import_Hierarchy_Testcase_v2.csv'
headers = {
    'x-api-key': REST_API_KEY,        # placeholder for your API key
    'x-api-token': REST_API_TOKEN,    # placeholder for your API token
    'accept': 'application/json'
}
files = {'file': (hierarchy_file_name, open(hierarchy_file_name, 'rb'), 'text/csv')}
url = "https://abcd.com"
response = requests.post(url + '/api/v2/core/workspaces/import/validate',
                         files=files, verify=False, headers=headers)
print("Created")
print(response)
print(response.text)
Note:
Make sure that you don't add 'Content-Type': 'multipart/form-data' in the header
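If you want to confirm that requests generated the multipart Content-Type (including the boundary) for you, one way is to inspect the prepared request after the call; a small sketch using the response object from above:

# The Content-Type added automatically by requests should look like
# "multipart/form-data; boundary=..."
print(response.request.headers['Content-Type'])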
I need to make an API call to upload a file along with a JSON string containing details about the file.
I am trying to use the Python requests library to do this:
import json
import requests

info = {
    'var1': 'this',
    'var2': 'that',
}
data = json.dumps({
    'token': auth_token,
    'info': info,
})
headers = {'Content-type': 'multipart/form-data'}
files = {'document': open('file_name.pdf', 'rb')}
r = requests.post(url, files=files, data=data, headers=headers)
This throws the following error:
raise ValueError("Data must not be a string.")
ValueError: Data must not be a string
If I remove the 'files' from the request, it works.
If I remove the 'data' from the request, it works.
If I do not encode data as JSON it works.
For this reason I think the error is to do with sending JSON data and files in the same request.
Any ideas on how to get this working?
See this thread: How to send JSON as part of multipart POST-request
Do not set the Content-Type header yourself; leave that to requests to generate.
import json
import os
import requests

def send_request():
    payload = {"param_1": "value_1", "param_2": "value_2"}
    files = {
        # The JSON payload goes in as its own part with an explicit content type
        'json': (None, json.dumps(payload), 'application/json'),
        'file': (os.path.basename(file), open(file, 'rb'), 'application/octet-stream')
    }
    r = requests.post(url, files=files)
    print(r.content)
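As written, the snippet assumes url and file are defined elsewhere; a minimal, purely hypothetical setup for calling it could look like:

url = "https://example.com/upload"    # hypothetical endpoint
file = "/tmp/report.pdf"              # hypothetical path of the file to upload

send_request()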
Don't encode the data using json.dumps.
import requests
info = {
    'var1': 'this',
    'var2': 'that',
}
data = {
    'token': auth_token,
    'info': info,
}
headers = {'Content-type': 'multipart/form-data'}
files = {'document': open('file_name.pdf', 'rb')}
r = requests.post(url, files=files, data=data, headers=headers)
Note that this may not necessarily be what you want, as it will become another form-data section.
I don't think you can send both data and files in a multipart-encoded request, so you need to make your data a "file" too:
files = {
    'data': data,
    'document': open('file_name.pdf', 'rb')
}
r = requests.post(url, files=files, headers=headers)
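If you go this route, note that a plain dict can't be used directly as a file value; it has to be serialized first. A sketch using json.dumps (reusing json from the question's imports) that sends the data as its own JSON part:

files = {
    # Serialize the dict so it can travel as a named part with a JSON content type
    'data': ('data.json', json.dumps(data), 'application/json'),
    'document': open('file_name.pdf', 'rb')
}
# No manual Content-Type header: requests generates the multipart boundary itself
r = requests.post(url, files=files)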
I have been using requests==2.22.0.
For me, the code below worked.
import requests

data = {
    'var1': 'this',
    'var2': 'that'
}
r = requests.post(
    "http://api.example.com/v1/api/some/",
    files={'document': open('doocument.pdf', 'rb')},
    data=data,
    headers={"Authorization": "Token jfhgfgsdadhfghfgvgjhN"},  # since I had to authenticate for the same
)
print(r.json())
For the Facebook Messenger API, I changed all the payload dictionary values to strings. Then I could pass the payload as the data parameter.
import requests
ACCESS_TOKEN = ''
url = 'https://graph.facebook.com/v2.6/me/messages'
payload = {
    'access_token': ACCESS_TOKEN,
    'messaging_type': "UPDATE",
    'recipient': '{"id":"1111111111111"}',
    'message': '{"attachment":{"type":"image", "payload":{"is_reusable":true}}}',
}
# 'file' is the path to the image being uploaded
files = {'filedata': (file, open(file, 'rb'), 'image/png')}
r = requests.post(url, files=files, data=payload)
1. Sending request
import json
import requests
cover = 'superneat.jpg'
# '_episodes' is assumed to be defined elsewhere (e.g., a list of episode dicts)
payload = {'title': 'The 100 (2014)', 'episodes': json.dumps(_episodes)}
files = [
    ('json', ('payload.json', json.dumps(payload), 'application/json')),
    ('cover', (cover, open(cover, 'rb')))
]
r = requests.post("https://superneatech.com/store/series", files=files)
print(r.text)
2. Receiving request
You will receive the JSON data as a file, get the content and continue...
Reference: View Here
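On the receiving side, a minimal sketch of what reading that JSON part might look like, assuming a hypothetical Flask endpoint (not part of the original answer); the route path mirrors the URL used above:

from flask import Flask, request
import json

app = Flask(__name__)

@app.route('/store/series', methods=['POST'])
def store_series():
    # The 'json' part arrives as an uploaded file, not as form data
    payload = json.loads(request.files['json'].read())
    cover = request.files['cover']   # the image part, a FileStorage object
    cover.save(cover.filename)       # or process it however you need
    return {'received_title': payload['title']}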
What is more:
files = {
    'document': open('file_name.pdf', 'rb')
}
That will only work if your file is in the same directory as your script.
If you want to attach a file from a different directory, you should do:
files = {
    'document': open(os.path.join(dir_path, 'file_name.pdf'), 'rb')
}
where dir_path is the directory containing your 'file_name.pdf' file.
But what if you'd like to send multiple PDFs?
You can simply write a custom function that returns a list of the files you need (in your case, only those with the .pdf extension). It also includes files in subdirectories (it searches for files recursively):
def prepare_pdfs():
    return sorted([os.path.join(root, filename)
                   for root, dirnames, filenames in os.walk(dir_path)
                   for filename in filenames if filename.endswith('.pdf')])
Then you can call it:
my_data = prepare_pdfs()
And with simple loop:
for file in my_data:
    pdf = open(file, 'rb')
    files = {
        'document': pdf
    }
    r = requests.post(url, files=files, ...)
conn = httplib.HTTPConnection("www.encodable.com/uploaddemo/")
conn.request("POST", path, chunk, headers)
Above is the site "www.encodable.com/uploaddemo/" where I want to upload an image.
I am better versed in PHP, so I am unable to understand the meaning of path and headers here. In the code above, chunk is an object containing my image file.
The following code produces an error, as I was trying to implement this without any knowledge of headers and path.
import httplib

def upload_image_to_url():
    filename = '//home//harshit//Desktop//h1.jpg'
    f = open(filename, "rb")
    chunk = f.read()
    f.close()
    headers = {
        "Content-type": "application/octet-stream",
        "Accept": "text/plain"
    }
    conn = httplib.HTTPConnection("www.encodable.com/uploaddemo/")
    conn.request("POST", "/uploaddemo/files/", chunk)
    response = conn.getresponse()
    remote_file = response.read()
    conn.close()
    print remote_file

upload_image_to_url()
Currently, you aren't using the headers you've declared earlier in the code. You should provide them as the fourth argument to conn.request:
conn.request("POST", "/uploaddemo/files/", chunk, headers)
Also, side note: you can pass open("h1.jpg", "rb") directly into conn.request without reading it fully into chunk first. conn.request accepts file-like objects and it will be more efficient to stream the file a little at a time:
conn.request("POST", "/uploaddemo/files/", open("h1.jpg", "rb"), headers)