Use Requests to get weather data from the Visual Crossing API - Python

I am trying to access historical weather data from an API. I obtained the API key from here: https://www.visualcrossing.com/weather/weather-data-services#/timeline
I am trying the following, but I keep getting a 404 error. I am not sure whether the problem is with the API or with my code.
import requests
r = requests.get("https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline/London,UK/2021-01-01/2021-03-28?key=AXSSS")
print(r)
Documentation: https://www.visualcrossing.com/resources/documentation/weather-api/timeline-weather-api/
How can I obtain the data?

I tested the site you linked and created an account and API key to get London weather data; you can use it too.
Code :
import requests
r = requests.get("https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline/London?unitGroup=metric&key=PPKBBJ7637X5SNDUG6HZA23X7")
print(r)
Output :
<Response [200]>
Now you can access the data with the json() method too:
print(r.json())
The output is huge, but your problem comes down to two things:
1. The API key is not correct (I tested it).
2. You would need to buy a premium plan for this query.
To get a range of dates, the URL will look like this:
https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline/London/2021-1-1/2021-1-5?unitGroup=us&key=PPKBBJ7637X5SNDUG6HZA23X7
The date range you gave returns too many rows per request, so you would need a premium plan. Otherwise you will get this error on their own website:
Your plan allows up to 100 rows per request. This query will return (yyy) rows. Please select a smaller date range or fewer locations. Our paid plans offer increased query limits.
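If you want to stay on the free tier, one workaround is to split a long date range into chunks small enough to stay under the 100-row limit and issue one request per chunk. A minimal sketch, assuming the timeline response contains a days list with datetime and temp fields as in the documentation (the key and chunk size here are illustrative):

import requests
from datetime import date, timedelta

BASE = "https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline"
KEY = "YOUR_API_KEY"  # placeholder: replace with your own key

def fetch_range(location, start, end, chunk_days=90):
    """Fetch daily weather in chunks that stay under the free-tier row limit."""
    days = []
    cur = start
    while cur <= end:
        chunk_end = min(cur + timedelta(days=chunk_days - 1), end)
        url = "{0}/{1}/{2}/{3}".format(BASE, location, cur, chunk_end)
        r = requests.get(url, params={"unitGroup": "metric", "key": KEY})
        r.raise_for_status()
        days.extend(r.json().get("days", []))  # one dict per day
        cur = chunk_end + timedelta(days=1)
    return days

data = fetch_range("London,UK", date(2021, 1, 1), date(2021, 3, 28))
print(len(data), "days;", data[0]["datetime"], data[0]["temp"])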
If you don't want to pay for a paid plan, you can use this link to browse the GitHub public-apis list; it collects many free APIs, some usable without any API key:
GitHub public APIs

Related

How can I fetch KPIs from a Power BI report using Python?

I need to fetch KPIs (revenue change, total cost etc.) from a PBI report.
So far I have tried calling a few Power BI REST APIs like this:
import requests

url_groups = 'https://api.powerbi.com/v1.0/myorg/reports'
header = {'Content-Type': 'application/json', 'Authorization': f'Bearer {access_token}'}
api_out = requests.get(url=url_groups, headers=header)
However, only the report name, type, URLs etc. are returned in the output, not the KPIs. Any help would be highly appreciated.
Power BI has many API endpoints (https://learn.microsoft.com/en-us/rest/api/power-bi/). You are using the Reports endpoint, which is why you are getting the output you see.
There are other endpoints you might be interested in checking: the Dashboards endpoint (https://api.powerbi.com/v1.0/myorg/dashboards/{dashboardId}) and the Datasets one (https://api.powerbi.com/v1.0/myorg/datasets/{datasetId}).
The only catch is that, from the documentation, there does not seem to be a single place where the API returns the report's underlying data directly.
It might be necessary to schedule CSV exports from Power BI and then read those in Python.
To get at the data in the report you need to send a DAX query to the report's Dataset using the ExecuteQueries API.
In Power BI Desktop you can use the Performance Analyzer to see the DAX query sent by a visual, instead of writing it from scratch.
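As a rough sketch of what that can look like (the token, dataset ID, and measure name below are placeholders, not values from the question):

import requests

access_token = "YOUR_ACCESS_TOKEN"  # placeholder: a token with dataset read permissions
dataset_id = "YOUR_DATASET_ID"      # placeholder: the report's underlying dataset ID

url = f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {access_token}",
}
# A DAX query, e.g. copied from Performance Analyzer;
# [Revenue Change] is an illustrative measure name.
body = {
    "queries": [{"query": 'EVALUATE ROW("RevenueChange", [Revenue Change])'}],
    "serializerSettings": {"includeNulls": True},
}

resp = requests.post(url, headers=headers, json=body)
resp.raise_for_status()
# The response nests results -> tables -> rows of column/value pairs.
print(resp.json()["results"][0]["tables"][0]["rows"])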

Understanding the Twitter Premium API Sandbox in Python

I already have the Twitter Standard API (I got approved recently and have not used the Twitter API yet), but I need to collect historical tweets.
So I have to upgrade to the Premium API, but should I choose the API sandbox to test my code before paying for and upgrading to the premium full-archive API? I am afraid of losing some tweets and wasting requests.
I am a little confused about some parameters:
results_per_call=100 and max_results=100 - what do they mean?
Can I choose any numbers to get more tweets?
How many requests can I use per day?
I found the Python code below that I plan to use for collecting tweets. Is it correct? I am a beginner in Python.
Where can I find the JSON file on my computer, and how do I convert this file to .csv?
!pip install searchtweets
!pip install pyyaml

import yaml

config = dict(
    search_tweets_api=dict(
        account_type='premium',
        endpoint='https://api.twitter.com/1.1/tweets/search/fullarchive/YOUR_LABEL.json',
        consumer_key='YOUR_CONSUMER_KEY',
        consumer_secret='YOUR_CONSUMER_SECRET'
    )
)
with open('twitter_keys_fullarchive.yaml', 'w') as config_file:
    yaml.dump(config, config_file, default_flow_style=False)

from searchtweets import load_credentials
premium_search_args = load_credentials("twitter_keys_fullarchive.yaml",
                                       yaml_key="search_tweets_api",
                                       env_overwrite=False)
print(premium_search_args)

from searchtweets import gen_rule_payload
query = "(#COVID19 OR #Corona_virus) (pandemic OR corona OR infected OR vaccine)"
rule = gen_rule_payload(query, results_per_call=100, from_date="2020-01-01", to_date="2020-01-30")

from searchtweets import ResultStream
rs = ResultStream(rule_payload=rule,
                  max_results=100,
                  **premium_search_args)
print(rs)

import json
with open('twitter_premium_api_demo.jsonl', 'a', encoding='utf-8') as f:
    n = 0
    for tweet in rs.stream():
        n += 1
        if n % 10 == 0:
            print('{0}: {1}'.format(str(n), tweet['created_at']))
        json.dump(tweet, f)
        f.write('\n')
print('done')
Thank you very much in advance.
I once had the same task of collecting Twitter data under different conditions. After a lot of searching and testing, I ended up creating a completely separate Python Twitter client for my task. This is what I know regarding the API (the documentation is a little confusing).
The Twitter API has 3 tiers for searching and downloading data:
Standard (free version with limitations)
Premium (paid version with some extended features)
Enterprise (paid version with customization options for large-scale operations)
Standard API
Free to use with correct authentication
Only returns data from the past 7 days
Supports standard search operators
You can send a limited number of requests within a given time period (e.g. 180 requests per 15-minute window for user auth and 450 requests per 15-minute window for app auth)
One request returns up to 100 data objects (100 tweets)
Premium API
The Premium API includes 2 versions:
30-day endpoint - provides tweets posted within the last 30 days
Full-archive endpoint - provides tweets starting from 2006
These two versions share the same request format; the only difference is the timeframe you can search.
The premium package returns a maximum of 500 data objects per request; you can still limit the return count according to your use case.
You select the number of requests per month by subscription (for example, 50 or 250 requests per month).
Answering your questions:
results_per_call=100 sets how many tweet objects the API returns per call, and max_results=100 is the total number of objects you want the ResultStream to collect before it stops.
should I choose API sandbox to test my code before paid and upgrade the premium API full archive?
Yes, you can test basic logic and some search queries and inspect the returned objects using the free service. But if you need to search a date range longer than 7 days, or use premium operators, you have to use the premium API.
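On your last question about converting the JSONL file to CSV: each line of that file is one JSON tweet object, so pandas can read it directly. A minimal sketch, assuming the standard created_at, id, and text fields are what you want:

import pandas as pd

# Read the line-delimited JSON file produced by the collection script.
df = pd.read_json('twitter_premium_api_demo.jsonl', lines=True)

# Keep a few columns of interest and write them to CSV.
df[['created_at', 'id', 'text']].to_csv('tweets.csv', index=False)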
Here are some useful links:
https://developer.twitter.com/en/docs/tweets/search/overview
operators
https://developer.twitter.com/en/docs/tweets/search/guides/standard-operators
https://developer.twitter.com/en/docs/tweets/search/guides/premium-operators
API
https://developer.twitter.com/en/docs/tweets/search/api-reference/premium-search
https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets
There is more useful information hidden in the documentation; please add to this if you find anything else.

Google PageSpeed Score Calculation

I am trying to include the Google PageSpeed Insights Score in my application. I came across the api for it and have tried to use it:
https://www.googleapis.com/pagespeedonline/v2/runPagespeed?url=http://wikipedia.org&filter_third_party_resources=true&locale=en_US&screenshot=false&strategy=desktop&key=MyAPIKey
After this I got the output as shown in the gist:
https://gist.github.com/JafferWilson/6f8c5661e11654f301247edca45d23df
But when I use the PageSpeed Insights web application with the same domain, wikipedia.org, I get a different score, which I could not find in the JSON API output: https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwikipedia.org&tab=mobile
I am using Python 2.7 on Windows 10 and have tried this code for accessing the API:
>>> import urllib, json
>>> url = "https://www.googleapis.com/pagespeedonline/v2/runPagespeed?url=http://wikipedia.org&filter_third_party_resources=true&locale=en_US&screenshot=false&strategy=desktop&key=MYAPIKey"
>>> response = urllib.urlopen(url)
>>> data = json.loads(response.read())
>>> print data
But I want the exact score shown on Google's PageSpeed Insights page. Kindly suggest how I can obtain the same score as the Insights page; I could not see it anywhere in the API result.
For desktop vs. mobile: change strategy=desktop to strategy=mobile in the URL.
Discrepancies between the JSON and the website could just be variation across multiple runs, since the site likely doesn't fall squarely within scoring buckets. In practice, the score seems relatively stable within a 1-point range for both desktop and mobile.
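For reference, a small sketch that queries both strategies and pulls out the speed score; this assumes the v2 response exposes it under ruleGroups.SPEED.score, as in the gist linked above:

import requests

API = "https://www.googleapis.com/pagespeedonline/v2/runPagespeed"

for strategy in ("desktop", "mobile"):
    resp = requests.get(API, params={
        "url": "http://wikipedia.org",
        "strategy": strategy,
        "key": "MYAPIKey",  # placeholder: replace with your own key
    })
    resp.raise_for_status()
    data = resp.json()
    # v2 responses carry the speed score under ruleGroups -> SPEED -> score.
    print(strategy, data["ruleGroups"]["SPEED"]["score"])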

Scrape URLs from Google search

I am trying to write code that gets the first 1000 URLs of HTTP pages in a Google search for some word. I used this Python code to get the first 1000 URLs:
import GoogleScraper
import urllib

urls = GoogleScraper.scrape('english teachers', number_pages=2)
for url in urls:
    print(urllib.parse.unquote(url.geturl()))

print('[!] Received %d results by asking %d pages with %d results per page' %
      (len(urls), 2, 100))
But this code returns 0 results.
Is there another way to get a lot of URLs from a Google search conveniently?
I also tried the xgoogle and pygoogle modules, but they can only handle a small number of page requests.
Google has a Custom Search API which allows you to make 100 queries a day for free. Given that each query returns at most 10 results per page, you can just barely fit 1000 results into a day. xgoogle and pygoogle are just wrappers around this API, so I don't think you'll be able to get more results by using them.
If you do need more, consider creating another Google account with another API key which will effectively double your limit. If you're okay with slightly inferior results, you can try out Bing's Search API (they offer 5000 requests a month).
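A minimal sketch of paging through the Custom Search API with requests (the key and the search-engine ID cx are placeholders you get from the Google developer console; a single query is capped at 100 results):

import requests

API = "https://www.googleapis.com/customsearch/v1"
KEY = "YOUR_API_KEY"           # placeholder
CX = "YOUR_SEARCH_ENGINE_ID"   # placeholder: custom search engine ID

urls = []
for start in range(1, 101, 10):  # pages of 10 results; start is 1-based
    resp = requests.get(API, params={
        "key": KEY, "cx": CX,
        "q": "english teachers",
        "start": start,
    })
    resp.raise_for_status()
    items = resp.json().get("items", [])
    urls.extend(item["link"] for item in items)
    if len(items) < 10:  # no more results
        break

print(len(urls), "urls")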

Python Twitter Statistics

I need to get the number of people who have followed a certain account by month, also the number of people who have unfollowed the same account by month, the total number of tweets by month, and the total number of times something the account tweeted has been retweeted by month.
I am using Python to do this, and have installed python-twitter, but as the documentation is rather sparse, I'm having to do a lot of guesswork. I was wondering if anyone could point me in the right direction? I was able to get authenticated using OAuth, so that's not an issue; I just need some help with getting those numbers.
Thank you all.
These types of statistical breakdowns are not generally available via the Twitter API. Depending on your sample date range, you may have luck using Twittercounter.com's API (you can sign up for an API key here).
The API is rate limited to 100 calls per hour, unless you get whitelisted. You can get results for the previous 14 days. An example request is below:
http://api.twittercounter.com?twitter_id=813286&apikey=[api_key]
The results, in JSON, look like this:
{"version":"1.1","username":"BarackObama","url":"http:\/\/www.barackobama.com","avatar":"http:\/\/a1.twimg.com\/profile_images\/784227851\/BarackObama_twitter_photo_normal.jpg","followers_current":7420937,"date_updated":"2011-04-16","follow_days":"563","started_followers":"2264457","growth_since":5156480,"average_growth":"9166","tomorrow":"7430103","next_month":"7695917","followers_yesterday":7414507,"rank":"3","followers_2w_ago":7243541,"growth_since_2w":177396,"average_growth_2w":"12671","tomorrow_2w":"7433608","next_month_2w":"7801067","followersperdate":{"date2011-04-16":7420937,"date2011-04-15":7414507,"date2011-04-14":7400522,"date2011-04-13":7385729,"date2011-04-12":7370229,"date2011-04-11":7366548,"date2011-04-10":7349078,"date2011-04-09":7341737,"date2011-04-08":7325918,"date2011-04-07":7309609,"date2011-04-06":7306325,"date2011-04-05":7283591,"date2011-04-04":7269377,"date2011-04-03":7257596},"last_update":1302981230}
The retweet stats aren't available from Twittercounter, but you might be able to obtain those from Favstar (although they don't have a public API currently).
My problem is I also need to get unfollow statistics, which twittercounter does not supply.
My solution was to access the Twitter REST API directly, using the oauth2 library in Python. I found this very simple compared to some of the other Twitter libraries for Python out there. This example was particularly helpful: http://parand.com/say/index.php/2010/06/13/using-python-oauth2-to-access-oauth-protected-resources/
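Since the API does not report unfollows directly, a common approach is to snapshot the follower ID list periodically and diff consecutive snapshots: IDs that disappear are unfollows. A rough sketch using the requests_oauthlib package against the v1.1 followers/ids endpoint (the credentials are placeholders, and rate limits are ignored here):

import json
import requests
from requests_oauthlib import OAuth1

auth = OAuth1('CONSUMER_KEY', 'CONSUMER_SECRET',   # placeholders
              'ACCESS_TOKEN', 'ACCESS_SECRET')

def follower_ids(screen_name):
    """Collect all follower IDs, following the cursor until exhausted."""
    ids, cursor = [], -1
    while cursor != 0:
        r = requests.get('https://api.twitter.com/1.1/followers/ids.json',
                         params={'screen_name': screen_name, 'cursor': cursor},
                         auth=auth)
        r.raise_for_status()
        page = r.json()
        ids.extend(page['ids'])
        cursor = page['next_cursor']
    return set(ids)

# Compare today's snapshot with the one saved on the previous run.
current = follower_ids('some_account')  # placeholder account
try:
    with open('followers.json') as f:
        previous = set(json.load(f))
    print('new followers:', len(current - previous))
    print('unfollows:', len(previous - current))
except FileNotFoundError:
    print('first run; no previous snapshot')
with open('followers.json', 'w') as f:
    json.dump(list(current), f)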
