Google PageSpeed Score Calculation - Python

I am trying to include the Google PageSpeed Insights score in my application. I came across the API for it and have tried to use it:
https://www.googleapis.com/pagespeedonline/v2/runPagespeed?url=http://wikipedia.org&filter_third_party_resources=true&locale=en_US&screenshot=false&strategy=desktop&key=MyAPIKey
After this I got the output as shown in the gist:
https://gist.github.com/JafferWilson/6f8c5661e11654f301247edca45d23df
But when I use the PageSpeed Insights web application with the same domain, wikipedia.org, I get a different score, and I could not find that score in the JSON from the API: https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwikipedia.org&tab=mobile
I am using Python 2.7 on Windows 10 and have tried this code to access the API:
>>> import urllib
>>> import json
>>> url = "https://www.googleapis.com/pagespeedonline/v2/runPagespeed?url=http://wikipedia.org&filter_third_party_resources=true&locale=en_US&screenshot=false&strategy=desktop&key=MYAPIKey"
>>> response = urllib.urlopen(url)
>>> data = json.loads(response.read())
>>> print data
But I want the exact score shown on Google's PageSpeed Insights page. Kindly suggest how I can get the same score as the Insights page shows; I could not find that score anywhere in the API result.

For desktop/mobile: change strategy=desktop to strategy=mobile in the URL.
Discrepancies between the JSON and the website could simply be variation across runs, since the site likely doesn't fall squarely within scoring buckets. In practice, though, the score seems to stay within a 1-point range for both desktop and mobile.
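To read the score programmatically, here is a minimal sketch for both strategies, assuming the v2 response exposes the overall speed score under ruleGroups.SPEED.score (an assumption about the v2 payload; confirm against the gist output above):
import urllib
import json

BASE = ("https://www.googleapis.com/pagespeedonline/v2/runPagespeed"
        "?url=http://wikipedia.org&filter_third_party_resources=true"
        "&locale=en_US&screenshot=false&key=MYAPIKey&strategy=")

for strategy in ("desktop", "mobile"):
    # One request per strategy; ruleGroups.SPEED.score is assumed to be
    # the number the Insights page displays.
    data = json.loads(urllib.urlopen(BASE + strategy).read())
    print strategy, data["ruleGroups"]["SPEED"]["score"]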


Getting all review requests from Review Board Python Web API

I would like to get information about all review requests from my server. Here is the code I used to try to achieve that:
from rbtools.api.client import RBClient
client = RBClient('http://my-server.net/')
root = client.get_root()
reviews = root.get_review_requests()
The variable reviews contains just 25 review requests (I expected many, many more). What's even stranger, I tried something a bit different:
count = root.get_review_requests(counts_only=True)
Now count.count is equal to 17164. How can I retrieve the rest of my review requests? I checked the official documentation but haven't found anything related to my problem.
According to the documentation (https://www.reviewboard.org/docs/manual/dev/webapi/2.0/resources/review-request-list/#webapi2.0-review-request-list-resource), counts_only is just a Boolean flag that does the following:
If specified, a single count field is returned with the number of results, instead of the results themselves.
But what you could do is also provide it with status, so:
count = root.get_review_requests(counts_only=True, status='all')
should give you the count of all review requests, regardless of status.
Keep in mind that I didn't test this part of the code locally. I referred to their repository's test example -> https://github.com/reviewboard/rbtools/blob/master/rbtools/utils/tests/test_review_request.py#L643 and the documentation linked above.
You have to use pagination (unfortunately I can't provide exact code without being able to reproduce your setup):
The maximum number of results to return in this list. By default, this is 25. There is a hard limit of 200; if you need more than 200 results, you will need to make more than one request, using the “next” pagination link.
It looks like a pagination helper class is also available.
If you want to get up to 200 results per request, you can set max_results:
requests = root.get_review_requests(max_results=200)
Anyway, HERE is a good example of how to iterate over the results.
Also, I don't recommend fetching all 17164 results in one request even if it were possible, because the total response would be huge (say a single result is 10 KB; the total would then be more than 171 MB).
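Putting the pieces together, here is a minimal pagination sketch, assuming the list resource returned by get_review_requests() exposes get_next() (the helper mentioned above) and raises StopIteration when there are no more pages:

from rbtools.api.client import RBClient

client = RBClient('http://my-server.net/')
root = client.get_root()

# Ask for the maximum page size and follow the "next" links until the
# server runs out of pages.
page = root.get_review_requests(status='all', max_results=200)
all_requests = []
while True:
    all_requests.extend(page)
    try:
        page = page.get_next()
    except StopIteration:
        break

print(len(all_requests))  # should approach count.count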

How can I make it so I am able to call the function with a certain stock?

def live_price(stock):
    string = (data.decode("utf-8"))
    conn.request("GET", f"/stock/{stock}/ohlc", headers=headers)
    print(Price)

live_price("QCOM")
I want to be able to type live_price("stockname") and then have the function output the data for that stock. If anyone can help, that would be great. All other variables mentioned are defined elsewhere in the code.
import yfinance

def live_price(stock):
    # Download the ticker's daily history and print the latest open.
    inst = yfinance.download(stock)
    print(inst['Open'][-1])

live_price("QCOM")
When one has a hammer, everything looks like a nail. Or, in different words: the best solution for your problem may actually be Google Sheets, as it has access to Google Finance live data (by far the best possible source for live prices). If you'd later like to do any analysis with Python, you can pull the data from your Google Sheet either locally with your preferred code editor or, even better, with Google Colaboratory.

Twitter API - Obtain user tweets and parse into a table/database

This is a small project I'd like to get started on in the near future. It's still in the planning stage, so this post is more about being steered in the right direction.
Essentially, I'd like to obtain tweets from a user and parse them into a table/database, with the aim of eventually running this program in real time.
My initial plan was to use Beautiful Soup, a Python-specific library; however, I believe the Twitter API is the better approach (advice on this subject would be appreciated).
There are still 3 unknowns:
1) Where do I store the tweets once obtained?
2) How do I parse the tweets?
3) Where do I store the parsed data?
To answer (3), I suppose it depends on what I want to do with the data. I still haven't decided how I'll use the parsed data, but I know I'd like it put into categories, so my thinking is probably a database/table/Excel?
There are a few questions still to answer, and I'd like you guys to steer me in the right direction. My programming language knowledge is limited to just C for now, but as this project means a great deal to me, I'm willing to put in the effort and learn the necessary languages/APIs.
What languages/APIs will I need to gain an understanding of to accomplish this project? From where I stand, it seems to be Twitter API and Python.
EDIT: So I have a basic script going which obtains a user's tweets. It works better than expected. However, I'd like to take it another step: I'd like to obtain the user's tweets only if a tweet contains a hashtag; all other tweets should be ignored. How best to do this?
Here is a snippet of the basic code I have going:
import tweepy
import twitter_credentials
auth = tweepy.OAuthHandler(twitter_credentials.CONSUMER_KEY, twitter_credentials.CONSUMER_SECRET)
auth.set_access_token(twitter_credentials.ACCESS_TOKEN, twitter_credentials.ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
stuff = api.user_timeline(screen_name = 'XXXXXXXXXX', count = 10, include_rts = False)
for status in stuff:
    print(status.text)
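For the EDIT, a minimal sketch that keeps only tweets containing at least one hashtag, assuming tweepy exposes the standard Tweet entities on each status (entities['hashtags'] is a list that is empty when a tweet has no hashtags):

stuff = api.user_timeline(screen_name='XXXXXXXXXX', count=10, include_rts=False)
for status in stuff:
    # An empty hashtag list means the tweet has no hashtags; skip it.
    if status.entities['hashtags']:
        print(status.text)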
Scraping Twitter (or any other social network) with, for example, Beautiful Soup, as you said, is not a good idea for two reasons:
if the source pages change (name attributes, div ids...), you have to keep your code up to date
your script can be banned, because scraping is not "allowed"
To answer your questions :
1) You can store the tweets wherever you want: CSV, MySQL, SQLite, Redis, Neo4j...
2) With the official API, you get JSON. Here is a Tweet object: https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object.html . With tweepy, for example, status.text will give you the text of the tweet.
3) Same as #1. If you don't yet know what you will do with the data, store the full JSON; you will be able to parse it later.
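A small sketch of that idea, assuming tweepy keeps the raw payload on each status as _json (one JSON object per line, so the file can be re-parsed and categorized later):

import json

with open('tweets.jsonl', 'a') as f:
    for status in stuff:
        # Persist the raw Tweet object; parse/categorize it later.
        f.write(json.dumps(status._json) + '\n')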
I suggest tweepy with Python (http://www.tweepy.org/) or twit with Node.js (https://www.npmjs.com/package/twit). And read the official docs: https://developer.twitter.com/en/docs/api-reference-index

Python Amazon Product API not able to get image

I'm using the Python Amazon Product API and I can't seem to get the URL for the image of the product.
Here is my code so far:
for book in amz_api.item_search('Books', Keywords='cookies', ResponseGroup='Large', limit=10):
print book.ItemAttributes.Large
But I get this reply
AttributeError: no such child: {http://webservices.amazon.com/AWSECommerceService/2011-08-01}Large
Any help would be appreciated.
To access the image URLs, you can try changing your code to use one of the following:
print book.SmallImage.URL
print book.MediumImage.URL
print book.LargeImage.URL
The error occurs because there is no "Large" attribute in ItemAttributes; the image URLs are available in a different part of the response.
The Large response group (ResponseGroup='Large') returns a lot of data. According to the docs, it's for demonstration purposes and not intended for production applications. To make your code production-ready, you might want a narrower request, such as the Images response group (ResponseGroup='Images').
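A sketch of that narrower request, reusing the question's amz_api wrapper and the image attributes shown above (the exact attribute names are an assumption about this response group):

for book in amz_api.item_search('Books', Keywords='cookies',
                                ResponseGroup='Images', limit=10):
    # The Images response group carries just the image data.
    print book.LargeImage.URL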
Also, the Python type of the book variable in the above code is:
<type 'lxml.objectify.ObjectifiedElement'>
While debugging, you can look at all the data available in book using something like this:
from lxml import objectify
print(objectify.dump(book))

How to get image search results using the Bing Search API with Python?

I need some sample images for machine learning training. I don't have enough resources right now, so I need to crawl some using a search engine. Google is not free anymore, so I chose Bing.
I have tried pybing. It doesn't seem to work anymore.
I don't know how to get the appid.
from py_bing_search import PyBingImageSearch

bing_image = PyBingImageSearch('Your-Api-Key-Here', "x-box console", image_filters='Size:medium+Color:Monochrome')  # image_filters is optional
first_fifty_result = bing_image.search(limit=50, format='json')  # results 1-50
print(first_fifty_result[0].media_url)
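If py_bing_search has also stopped working, an alternative is to call the Bing Image Search REST API directly with requests. A minimal sketch, assuming a Cognitive Services subscription key and the v7 endpoint (both are assumptions about the current service layout):

import requests

SUBSCRIPTION_KEY = 'Your-Api-Key-Here'
ENDPOINT = 'https://api.bing.microsoft.com/v7.0/images/search'

headers = {'Ocp-Apim-Subscription-Key': SUBSCRIPTION_KEY}
params = {'q': 'x-box console', 'count': 50}

response = requests.get(ENDPOINT, headers=headers, params=params)
response.raise_for_status()
results = response.json()

# Each hit carries its full-size image URL under 'contentUrl'.
print(results['value'][0]['contentUrl'])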
