Sorry for my limited python knowledge.
I was using this code:
import requests
symbols = ["XYZW","XYZW","ABC"]
for s in symbols:
    url = 'https://www.alphavantage.co/query?function=BALANCE_SHEET&symbol={}&apikey=apikey'.format(s)
    r = requests.get(url)
    data = r.json()
I expected the output to be three different dictionaries, but I only got ABC's data.
Am I supposed to loop it? I'm not sure how to. And why did it give me the last one in the list? Does it sort alphabetically?
Use a list to store the URL built on each iteration, and then loop through the URLs, requesting each one and printing the result.
import requests
symbols = ["XYZW","XYZW","ABC"]
urls = []
for s in symbols:
    urls.append('https://www.alphavantage.co/query?function=BALANCE_SHEET&symbol={}&apikey=apikey'.format(s))
for url in urls:
    r = requests.get(url)
    data = r.json()
    print(data)
You overwrite url and data on every iteration of your for loop, so each response replaces the previous one and only the last symbol's data is left when the loop finishes. No, it doesn't sort alphabetically; ABC is simply the last symbol in your list.
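A single loop works too: collect each parsed response as you go instead of overwriting data. A minimal sketch (same endpoint as above, placeholder apikey):
import requests

symbols = ["XYZW", "XYZW", "ABC"]
results = []
for s in symbols:
    url = 'https://www.alphavantage.co/query?function=BALANCE_SHEET&symbol={}&apikey=apikey'.format(s)
    r = requests.get(url)
    results.append(r.json())  # keep every result instead of overwriting data

for data in results:
    print(data)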
I have very basic knowledge of python, so sorry if my question sounds dumb.
I need to query a website for a personal project I am doing, but I need to query it 500 times, changing one specific part of the URL each time, then take the data and upload it to gsheets.
(The () signifies what part of the url I need to change)
'https://www.alphavantage.co/query?function=BALANCE_SHEET&symbol=(symbol)&apikey=apikey'
I thought about using while and format {} to do it, but I was unsure how to change the string each time, short of writing out the variables by hand (which defeats the whole purpose).
I already have a list of the symbols I need to use, but I don't know how to feed them in.
Example of how I get 1 piece of data
import requests
url = 'https://www.alphavantage.co/query?function=BALANCE_SHEET&symbol=MMM&apikey=demo'
r = requests.get(url)
data = r.json()
Example of what I'd like to change it to
import requests
url = 'https://www.alphavantage.co/query?function=BALANCE_SHEET&symbol=AOS&apikey=demo'
r = requests.get(url)
data = r.json()
#then change it to
import requests
url = 'https://www.alphavantage.co/query?function=BALANCE_SHEET&symbol=ABT&apikey=demo'
r = requests.get(url)
data = r.json()
so on and so forth, 500 times.
You might combine .format with a for loop; consider the following simple example:
symbols = ["abc","xyz","123"]
for s in symbols:
    url = 'https://www.example.com?symbol={}'.format(s)
    print(url)
output
https://www.example.com?symbol=abc
https://www.example.com?symbol=xyz
https://www.example.com?symbol=123
You might also elect to use any other way of formatting, e.g. an f-string (requires Python 3.6 or newer), in which case the code would be:
symbols = ["abc","xyz","123"]
for s in symbols:
    url = f'https://www.example.com?symbol={s}'
    print(url)
Alternatively, you might use the optional params argument of the requests.get function, as follows:
import requests
symbols = ["abc","xyz","123"]
for s in symbols:
    r = requests.get('https://www.example.com', params={'symbol': s})
    print(r.url)
output
https://www.example.com/?symbol=abc
https://www.example.com/?symbol=xyz
https://www.example.com/?symbol=123
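Putting the pieces together for your 500-symbol task, a sketch along these lines (placeholder apikey, your own symbol list) would collect one parsed dict per symbol:
import requests

symbols = ["MMM", "AOS", "ABT"]  # your full list of ~500 symbols
all_data = []
for s in symbols:
    r = requests.get('https://www.alphavantage.co/query',
                     params={'function': 'BALANCE_SHEET', 'symbol': s, 'apikey': 'apikey'})
    all_data.append(r.json())  # one dict per symbol, ready to upload to gsheets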
I have a list of API links, and I'm trying to get the data from these API links.
If my list of API links looks like this:
api_links = ['https://api.blahblah.com/john', 'https://api.blahblah.com/sarah', 'https://api.blahblah.com/jane']
How can I get a list of loaded data from these API links? I'm getting an error message when doing this code:
response_API = requests.get([(x) for x in api_links])
Which is preventing me from loading the data here:
data = response_API.text
data_lst = json.loads(data)
Where am I going wrong?
change
response_API = requests.get([(x) for x in api_links])
to
response_API = [requests.get(x) for x in api_links]
response_API will then be a list of Response objects.
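From there, extracting the parsed data from each response is one more comprehension (assuming every link returns JSON):
import requests

api_links = ['https://api.blahblah.com/john', 'https://api.blahblah.com/sarah', 'https://api.blahblah.com/jane']
response_API = [requests.get(x) for x in api_links]
data_lst = [r.json() for r in response_API]  # equivalent to json.loads(r.text) per response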
The function requests.get takes a URL as its first argument, not a list of them.
You want to call this function several times, each with one string, instead of once with a list of strings.
Like this, with a for loop:
for api_link in api_links:
    response_API = requests.get(api_link)
    data = response_API.text
    data_lst = json.loads(data)
    # Process the data further for the current api_link
List comprehensions may not be a good idea here, as the processing to do on each API link is not trivial.
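For example, a sketch of the kind of per-link handling (skipping failures, tolerating non-JSON bodies) that would be awkward to cram into a comprehension:
import json
import requests

data_lst = []
for api_link in api_links:  # api_links as defined above
    response_API = requests.get(api_link)
    if response_API.status_code != 200:
        continue  # skip failed requests
    try:
        data_lst.append(json.loads(response_API.text))
    except json.JSONDecodeError:
        continue  # skip responses that aren't valid JSON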
I'm trying to write a small script that gets the emails (and other things in the future) from an API, but I'm getting "TypeError: list indices must be integers or slices, not str" and I don't know what to do about it. I've been looking at other questions here but I still don't get it. I might be a bit slow when it comes to this.
I've also been watching some tutorials on YouTube and done the same as them, but I still get different errors. I'm running Python 3.5.
Here is my code:
from urllib.request import urlopen
import json, re
# Opens the url for the API
url = 'https://jsonplaceholder.typicode.com/posts/1/comments'
r = urlopen(url)
# This should put the response from the API in a dict
result = r.read().decode('utf-8')
data = json.loads(result)

# This should get all the names from the dict
for name in data['name']: # TypeError here.
    print(name)
I know that I could regex the text and get the result that I want.
Code for that:
from urllib.request import urlopen
import re
url = 'https://jsonplaceholder.typicode.com/posts/1/comments'
r = urlopen(url)
result = r.read().decode('utf-8')
f = re.findall(r'"email": "(\w+\S\w+)', result)
print(f)
But that seems like the wrong way to do this.
Can someone please help me understand what I'm doing wrong here?
data is a list of dicts; that's why you get the TypeError: data['name'] tries to index a list with a string.
The way to go is something like this:
for item in data: # item is {"name": "foo", "email": "foo@mail..."}
    print(item['name'])
    print(item['email'])
@PiAreSquared's comment is correct; just a bit more explanation here:
from urllib.request import urlopen
import json, re
# Opens the url for the API
url = 'https://jsonplaceholder.typicode.com/posts/1/comments'
r = urlopen(url)
# This should put the response from API in a Dict
result = r.read().decode('utf-8')
data = json.loads(result)

# your data is a list of elements
# and each element is a dict object, so you can loop over the data
# to get each dict element, and then access the keys and values as you wish
# see below for some examples
for element in data:
    name = element['name']
    email = element['email']

# if you want to get all names, you should do
names = [element['name'] for element in data]

# same to get all emails
emails = [element['email'] for element in data]
I am trying to create my first Python web scraper to automate one task for work: I need to write all vacancies from this website (only for health) to an Excel file. Using a tutorial, I have come up with the following program.
However, in step 6, I receive an error stating: IndexError: list index out of range.
I have tried using start_page = paging[2].text, as I thought that the first page may be the base page, but it results in the same error.
Here are the steps that I followed:
I checked that the website https://iworkfor.nsw.gov.au allows scraping
Imported the necessary libraries:
import requests
from bs4 import BeautifulSoup
import pandas
Stored the URL as a variable:
base_url = "https://iworkfor.nsw.gov.au/nsw-health-jobs?divisionid=1"
Got the HTML content:
r = requests.get(base_url)
c = r.content
Parsed the HTML:
soup = BeautifulSoup(c,"html.parser")
Extracted the first and last page numbers:
paging = soup.find("div",{"class":"pana jobResultPaging tab-paging-top"}).find_all("a")
start_page = paging[1].text
last_page = paging[len(paging)-2].text
Made an empty list to append all the content to:
web_content_list = []
Made page links from the page numbers, crawled through the pages and extracted the contents from the corresponding tags:
for page_number in range(int(start_page), int(last_page) + 1):
    # To form the url based on page numbers
    url = base_url + "&page=" + str(page_number)
    r = requests.get(url)
    c = r.content
    soup = BeautifulSoup(c, "html.parser")

    # To extract the Title
    vacancies_header = soup.find_all("div", {"class": "box-sec2-left"})
    # To extract the LHD, Job type and Job Reference number
    vacancies_content = soup.find_all("div", {"class": "box-sec2-right"})

    # To process vacancy by vacancy by looping
    for item_header, item_content in zip(vacancies_header, vacancies_content):
        # To store the information to a dictionary
        web_content_dict = {}
        web_content_dict["Title"] = item_header.find("a").text.replace("\r", "").replace("\n", "")
        web_content_dict["Date Posted"] = item_header.find("span").text
        web_content_dict["LHD"] = item_content.find("h5").text
        web_content_dict["Position Type"] = item_content.find("p").text
        web_content_dict["Job Reference Number"] = item_content.find("span", {"class": "box-sec2-reference"}).text

        # To store the dictionary into a list
        web_content_list.append(web_content_dict)
Made a dataframe with the list:
df = pandas.DataFrame(web_content_list)
Wrote the dataframe to a CSV file:
df.to_csv("Output.csv")
Ideally, the program will write the data about all vacancies to a CSV file in a nice table with the columns: Title, Date Posted, LHD, Position Type, Job Reference Number.
The problem is that your initial call to find() returns an empty <div>, and so your subsequent call to find_all returns an empty list:
>>> div = soup.find("div", {"class": "pana jobResultPaging tab-paging-top"})
>>> div
<div class="pana jobResultPaging tab-paging-top">
</div>
>>> div.find_all("a")
[]
Update:
The reason you're unable to parse the contents of the <div> in question (i.e. why it's empty) is that the data retrieved from the server is "paginated" by client-side JavaScript (code running in your browser). Your Python code parses only the HTML returned by the request to iworkfor.nsw.gov.au; the data you're after (which that JavaScript turns into "pages") is requested separately and returned by the server in a format called JSON.
So, the bad news is that the instructions that have been provided to you will not work. You will have to parse the JSON returned by the server and then decode the escaped HTML that it contains.
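As a rough illustration only: once you find the real endpoint in your browser's developer tools (network tab), fetching the JSON directly looks something like this. The URL and parameters below are hypothetical, not the site's documented API:
import requests

# Hypothetical endpoint -- inspect the network tab to find the URL and
# query parameters the page's JavaScript actually requests.
api_url = "https://iworkfor.nsw.gov.au/api/jobs/search"
resp = requests.get(api_url, params={"divisionid": 1, "page": 1})
jobs = resp.json()  # parsed Python objects instead of empty HTML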
I'm trying to scrape a site. When I run the following code without region_id=[any number from 1 to 32] I get a [500], but if I set region_id=1 I only get the first page by default (pagina= is empty in the url); pages go up to 500. Is there a command or parameter for retrieving every page (every possible value of pagina=), avoiding for loops?
import requests
url = "http://www.enciclovida.mx/explora-por-region/especies-por-grupo?utf8=%E2%9C%93&grupo_id=Plantas®ion_id=&parent_id=&pagina=&nombre="
resp = requests.get(url, headers={'User-Agent':'Mozilla/5.0'})
data = resp.json()
Even without a for loop, you are still going to need iteration. You could do it with recursion or map as I've done below, but the iteration is still there. This solution has the advantage that everything is a generator, so only when you ask for a page's json from all_data will url be formatted, the request made, checked and converted to json. I added a filter to make sure you got a valid response before trying to get the json out. It still makes every request sequentially, but you could replace map with a parallel implementation quite easily.
import requests
from itertools import product, starmap
from functools import partial
def is_valid_resp(resp):
    return resp.status_code == requests.codes.ok

def get_json(resp):
    return resp.json()
# There's a .format hiding on the end of this really long url,
# with {} in appropriate places
url = "http://www.enciclovida.mx/explora-por-region/especies-por-grupo?utf8=%E2%9C%93&grupo_id=Plantas®ion_id={}&parent_id=&pagina={}&nombre=".format
regions = range(1, 33)
pages = range(1, 501)
urls = starmap(url, product(regions, pages))
moz_get = partial(requests.get, headers={'User-Agent':'Mozilla/5.0'})
responses = map(moz_get, urls)
valid_responses = filter(is_valid_resp, responses)
all_data = map(get_json, valid_responses)
# all_data is a generator that will give you each page's json.
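For instance, swapping the sequential map for a threaded one using only the standard library (a sketch; tune max_workers to taste):
from concurrent.futures import ThreadPoolExecutor

# Drop-in replacement for the `responses = map(moz_get, urls)` line above.
with ThreadPoolExecutor(max_workers=8) as pool:
    responses = pool.map(moz_get, urls)  # requests run concurrently
    valid_responses = filter(is_valid_resp, responses)
    all_data = list(map(get_json, valid_responses))  # realize before the pool closes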