Creating a Python script to load API keys from a CSV file

I am currently trying to use a CSV file (input.csv) to load multiple API keys so I can download multiple different documents using the same API link, with the script changing the API key automatically based on input.csv. The input.csv is in the same location as the Python script. I am also trying to get Python to save the output to a specific location. Any help would be massively appreciated.
Here is my current script:
import csv
import sys
import requests

def query_api(business_id, api_key):
    headers = {
        "Authorization": api_key
    }
    r = requests.get('https://api.link.com', headers=headers)
    print(r.text)

# get filename from command line arguments
if len(sys.argv) < 2:
    print "input.csv"
    sys.exit(1)

csv_filename = sys.argv[1]

with open(csv_filename) as csv_file:
    csv_reader = csv.DictReader(csv_file, delimiter=',')
    for row in csv_reader:
        business_id = row['BusinessId']
        api_key = row['ApiKey']
        query_api(business_id, api_key)
I am currently getting the following error when running the script:
line 12
print "input.csv"
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("input.csv")?
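The SyntaxError comes from the Python 2 style print statement. A minimal sketch of a Python 3 version that also writes each response to its own file (the output folder name and the per-business file naming below are assumptions, not part of the original script) might look like this:

import csv
import os
import sys
import requests

OUTPUT_DIR = 'output'  # hypothetical save location; point this at your target folder
os.makedirs(OUTPUT_DIR, exist_ok=True)

def query_api(business_id, api_key):
    headers = {"Authorization": api_key}
    r = requests.get('https://api.link.com', headers=headers)
    # save each document under the output folder, named after the business id
    with open(os.path.join(OUTPUT_DIR, f'{business_id}.csv'), 'w') as f:
        f.write(r.text)

if len(sys.argv) < 2:
    print("usage: python script.py input.csv")  # print() is a function in Python 3
    sys.exit(1)

with open(sys.argv[1]) as csv_file:
    for row in csv.DictReader(csv_file):
        query_api(row['BusinessId'], row['ApiKey'])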

Related

Download Public Github repository using Python function

I have a CSV file with one column. This column consists of around 100 GitHub public repo addresses (for example, NCIP/c3pr-docs).
I want to know if there is any way to download all 100 of these public repos to my computer using Python.
I don't want to use any command on the terminal; I need a function for it.
I use a very simple piece of code to access the user and repo. Here it is:
import csv
import requests

# replace the name with your actual csv file name
file_name = "dataset.csv"
f = open(file_name)
csv_file = csv.reader(f)

second_column = []  # list to store the user/repo addresses from the first column
for line in csv_file:
    if line[1] == "Java":  # second column holds the language
        second_column.append(line[0])
        print(line[0])  # first column holds the user/repo address
So by doing this I read the CSV file and get access to the users and repos.
I need a piece of code to help me download all of these repos.
Try this:
import requests

def download(user_and_repo, branch):
    URL = f"https://github.com/{user_and_repo}/archive/{branch}.tar.gz"
    response = requests.get(URL)
    open(f"{user_and_repo.split('/')[1]}.tar.gz", "wb").write(response.content)

download("AmazingRise/hugo-theme-diary", "main")
Tested under Python 3.9.
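To apply this to the question's CSV, one sketch (reusing the download helper above, and assuming every repo has a main branch, which will not always be true) would be:

import csv

# assumes dataset.csv holds the user/repo address in the first column,
# and that the download() helper from the answer above is already defined
with open("dataset.csv") as f:
    for line in csv.reader(f):
        download(line[0], "main")  # "main" is an assumption; some repos use "master"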

Using Python to download api data

Hi, I have created a piece of code that downloads data from an API endpoint and also loads in the API keys.
I am trying to download the API data into CSV files, each in its own folder, based on input.csv. I tried to achieve this by adding the following section at the end. The problem is that it does not save the file it receives from the API endpoint.
Please assist?
with open('filepath/newfile.csv', 'w+') as f:
    f.write(r.text)
import csv
import sys
import requests

def query_api(business_id, api_key):
    headers = {
        "Authorization": api_key
    }
    r = requests.get('https://api.link.com', headers=headers)
    print(r.text)

# get filename from command line arguments
if len(sys.argv) < 2:
    print("input.csv")
    sys.exit(1)

csv_filename = sys.argv[1]

with open(csv_filename) as csv_file:
    csv_reader = csv.DictReader(csv_file, delimiter=',')
    for row in csv_reader:
        business_id = row['BusinessId']
        api_key = row['ApiKey']
        query_api(business_id, api_key)

with open('filepath/newfile.csv', 'w+') as f:
    f.write(r.text)
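No answer is posted for this one, but note that r only exists inside query_api, so the final write block never sees the response (and would only capture the last one in any case). A minimal sketch of one way around this is to write the file inside the function; the per-business file naming is an assumption, and 'filepath/' is the question's own placeholder folder:

import requests

def query_api(business_id, api_key):
    headers = {"Authorization": api_key}
    r = requests.get('https://api.link.com', headers=headers)
    # write while r is still in scope; one file per business id
    # ('filepath/' is the question's placeholder folder, per-id naming is an assumption)
    with open(f'filepath/{business_id}.csv', 'w') as f:
        f.write(r.text)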

How do I fix my code so that it is automated?

I have the below code that takes my standardized .txt file and converts it into a JSON file perfectly. The only problem is that sometimes I have over 300 files, and doing this manually (i.e., changing the number at the end of the file name and re-running the script) is too much and takes too long. I want to automate this. The files, as you can see, reside in one folder/directory, and I am placing the JSON files in a different folder/directory, essentially keeping the naming convention standardized, except that instead of ending with .txt each file ends with .json; the prefixes or file names stay the same. An example would be: CRAZY_CAT_FINAL1.TXT, CRAZY_CAT_FINAL2.TXT, and so on, all the way to file 300. How can I automate this, keep the file naming convention in place, and read and output the files to different folders/directories? I have tried, but can't seem to get this to iterate. Any help would be greatly appreciated.
import glob
import time
from glob import glob
import pandas as pd
import numpy as np
import csv
import json
csvfile = open(r'C:\Users\...\...\...\Dog\CRAZY_CAT_FINAL1.txt', 'r')
jsonfile = open(r'C:\Users\...\...\...\Rat\CRAZY_CAT_FINAL1.json', 'w')
reader = csv.DictReader(csvfile)
out = json.dumps([row for row in reader])
jsonfile.write(out)
****************************************************************************
I also have this code using the Python library "requests". How do I make this code upload multiple JSON files with a standard naming convention? The files end with a number...
import requests

# function to post to api
def postData(xactData):
    url = 'http link'
    headers = {
        'Content-Type': 'application/json',
        'Content-Length': str(len(xactData)),
        'Request-Timeout': '60000'
    }
    return requests.post(url, headers=headers, data=xactData)

# read data
f = open(r'filepath/file/file.json', 'r')
data = f.read()
print(data)

# post data
result = postData(data)
print(result)
Use f-strings?
for i in range(1, 301):
    # rf-strings avoid unicode-escape errors from the backslashes in Windows paths
    csvfile = open(rf'C:\Users\...\...\...\Dog\CRAZY_CAT_FINAL{i}.txt', 'r')
    jsonfile = open(rf'C:\Users\...\...\...\Rat\CRAZY_CAT_FINAL{i}.json', 'w')
import time
from glob import glob
import csv
import json
import os

INPATH = r'C:\Users\...\...\...\Dog'
OUTPATH = r'C:\Users\...\...\...\Rat'

for csvname in glob(INPATH + r'\*.txt'):
    # keep the same base name, swapping the .txt extension for .json
    jsonname = OUTPATH + '/' + os.path.basename(csvname[:-3] + 'json')
    reader = csv.DictReader(open(csvname, 'r'))
    json.dump(list(reader), open(jsonname, 'w'))
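The answers above cover the conversion step; for the upload part of the question the same glob pattern can drive the POSTs. A rough sketch, reusing the question's postData function and its placeholder paths (the sorted order and per-file status printing are assumptions):

from glob import glob
import requests

OUTPATH = r'C:\Users\...\...\...\Rat'  # placeholder path from the question
URL = 'http link'  # placeholder endpoint from the question

def postData(xactData):
    headers = {
        'Content-Type': 'application/json',
        'Content-Length': str(len(xactData)),
        'Request-Timeout': '60000'
    }
    return requests.post(URL, headers=headers, data=xactData)

# post every generated JSON file, in name order
for jsonname in sorted(glob(OUTPATH + r'\*.json')):
    with open(jsonname, 'r') as f:
        data = f.read()
    result = postData(data)
    print(jsonname, result.status_code)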

Python-3 Trying to iterate through a csv and get http response codes

I am attempting to read a CSV file that contains a long list of URLs. I need to iterate through the list and get the URLs that return a 301, 302, or 404 response. When I try to test the script it exits with code 0, so I know it is error free, but it is not doing what I need it to. I am new to Python and working with files; my experience has been primarily UI automation. Any suggestions would be gladly appreciated. Below is the code.
import csv
import requests
import responses
from urllib.request import urlopen
from bs4 import BeautifulSoup

f = open('redirect.csv', 'r')
contents = []

with open('redirect.csv', 'r') as csvf:  # Open file in read mode
    urls = csv.reader(csvf)
    for url in urls:
        contents.append(url)  # Add each url to list contents

def run():
    resp = urllib.request.urlopen(url)
    print(self.url, resp.getcode())

run()
print(run)
Given you have a CSV similar to the following (the heading is URL)
URL
https://duckduckgo.com
https://bing.com
You can do something like this using the requests library.
import csv
import requests

with open('urls.csv', newline='') as csvfile:
    errors = []
    reader = csv.DictReader(csvfile)
    # Iterate through each line of the csv file
    for row in reader:
        try:
            # allow_redirects=False so 301/302 responses are reported rather than followed
            r = requests.get(row['URL'], allow_redirects=False)
            if r.status_code in [301, 302, 404]:
                # print(f"{r.status_code}: {row['URL']}")
                errors.append([row['URL'], r.status_code])
        except requests.RequestException:
            pass
Uncomment the print statement if you want to see the results in the terminal. The code at the moment appends a list of URL and status code to an errors list. You can print or continue processing this if you prefer.
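If you then want a record rather than just terminal output, one option (the report file name here is an assumption) is to write the collected errors list out with csv.writer:

import csv

# assumes the errors list built by the snippet above
with open('redirect_report.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['URL', 'StatusCode'])
    writer.writerows(errors)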

How to download CSV data from a website using Python

I'm trying to automatically download data from the following website; however, I just get the HTML and no data:
http://tcplus.com/GTN/OperationalCapacity#filter.GasDay=02/02/19&filter.CycleType=1&page=1&sort=LocationName&sort_direction=ascending
import csv
import urllib2

downloaded_data = urllib2.urlopen('http://tcplus.com/GTN/OperationalCapacity#filter.GasDay=02/02/19&filter.CycleType=1&page=1&sort=LocationName&sort_direction=ascending')
csv_data = csv.reader(downloaded_data)

for row in csv_data:
    print row
The code below will only fetch data for the provided gas day, but if you tweak the parameters you can get other reports as well.
import requests

parameters = {'serviceTypeName': 'Ganesha.InfoPost.Service.OperationalCapacity.OperationalCapacityService, Ganesha.InfoPost.Service',
              'filterTypeName': 'Ganesha.InfoPost.ViewModels.GasDayAndCycleTypeFilterViewModel, Ganesha.InfoPost',
              'templateType': 6,
              'exportType': 1,
              'filter.GasDay': '02/02/19',
              'filter.CycleType': 1}

response = requests.post('http://tcplus.com/GTN/Export/Generate', data=parameters)

with open('result.csv', 'w') as f:
    f.write(response.text)
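For example, changing filter.GasDay (the date string format below is assumed to match the one used in the page URL) should produce the report for a different day:

# reuse the parameters dict from the snippet above with a different gas day
parameters['filter.GasDay'] = '03/02/19'  # assumed date format, copied from the URL's style
response = requests.post('http://tcplus.com/GTN/Export/Generate', data=parameters)
with open('result_other_day.csv', 'w') as f:
    f.write(response.text)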
