Twitter Search Query in Python

I am trying to pull tweets matching a given search query, using the following code:
import urllib2 as urllib
import json
response = urllib.urlopen("https://search.twitter.com/search.json?q=microsoft")
pyresponse = json.load(response)
print pyresponse
This was working a few days ago but suddenly stopped. With some help from Google, I learned that this type of URL is no longer supported.
How do I perform this search query now? What URL should I use?

Twitter is deprecating non-authenticated searches. You should look into Tweepy or another Python library that interacts with Twitter: https://github.com/tweepy/tweepy
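For illustration, a minimal sketch of an authenticated search with Tweepy (the credential values are placeholders you obtain by registering an app; the method is api.search() in Tweepy 3.x and was renamed search_tweets() in 4.x):
import tweepy
# Placeholder credentials -- fill in from your Twitter app settings
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_KEY = ''
ACCESS_SECRET = ''
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
# Authenticated search; api.search() in Tweepy 3.x, search_tweets() in 4.x
for tweet in api.search(q='microsoft', count=10):
    print tweet.text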

Related

How to get account information using the Binance API in Python

I am trying to get account information by using the requests library with the code below.
import requests
url = 'https://api.binance.com'
api_get_info = '/sapi/v1/accountSnapshot'
hmac_sha256 = 'api_key'  # signature never actually computed here
get_account_info = requests.get(url + api_get_info)
print(get_account_info.text)
I have no idea what the URL should look like. Could you please give me some idea? :)
Binance has an official GitHub repository with examples of these signatures; you can check it out here: https://github.com/binance/binance-signature-examples/blob/master/python/spot/spot.py
See the function send_signed_request() at the bottom of that file.
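Roughly, the pattern in that file is: build the query string, sign it with HMAC-SHA256 using your secret key, and send the API key in a header. A minimal sketch with placeholder keys, not a drop-in implementation:
import hashlib
import hmac
import time
import requests

API_KEY = 'your_api_key'        # placeholder
API_SECRET = 'your_api_secret'  # placeholder
url = 'https://api.binance.com'
api_get_info = '/sapi/v1/accountSnapshot'
# Signed endpoints require a timestamp; the signature is an HMAC-SHA256
# of the query string, computed with your secret key
query = 'type=SPOT&timestamp={}'.format(int(time.time() * 1000))
signature = hmac.new(API_SECRET.encode(), query.encode(), hashlib.sha256).hexdigest()
get_account_info = requests.get(
    url + api_get_info + '?' + query + '&signature=' + signature,
    headers={'X-MBX-APIKEY': API_KEY},  # the API key travels in this header
)
print(get_account_info.text)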

Searching keywords using facebook graph API in Python

I recently started working on sentiment analysis, for which I need data from Facebook/Instagram posts, comments, and replies. I plan to search a keyword or tag and get data from the results. I looked up the Facebook Graph API and some blogs and found code from 2016 that did exactly that, but it isn't working.
import facebook

token = ""  # my access token (fill in a real token)
graph = facebook.GraphAPI(access_token=token, version="3.1")
events = graph.request('/search?q=Poetry&type=event&limit=10000')
This code finds information on events for a search term such as “Poetry”, limiting the number of events to 10000,
but when I run it I get an error.
Tell me if this is possible and what I should do to fix it.

REST API programming: requests vs urllib2 in Python 2.7 -- the API needs authentication (Error 401)

I am a beginner trying to learn REST API programming with Python 2.7 to get data from the Socialcast API. From my research it looks like either requests or urllib2 would work. I need to authenticate with a username and id for the API. I tried urllib2 and it gave me error 401.
Which one should I use? My goal is to produce .csv files from the data so I can visualize it. Thank you in advance.
This question will yield a somewhat opinion-based response, but I would suggest using requests. I find that requests which require parameters are easier to manage with it. An example for Socialcast using requests would be:
parameters={"email" : emailAddress, "passoword" : password}
r = requests.post(postUrl, parameters)
postUrl is the URL for the POST request, and emailAddress and password are the values you use to log in.
For the CSV, take a look here, which includes a tutorial on going from JSON to CSV.
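As a rough sketch of that JSON-to-CSV step, the standard csv module can flatten a list of records (the endpoint and the "messages" key below are assumptions for illustration, not Socialcast's documented schema):
import csv
import requests

postUrl = "https://demo.socialcast.com/api/authentication"  # placeholder URL
parameters = {"email": "user@example.com", "password": "secret"}  # placeholders
r = requests.post(postUrl, parameters)
# Assume the response contains a list of dicts under a "messages" key
records = r.json().get("messages", [])
if records:
    with open("output.csv", "wb") as f:  # "wb" for the csv module on Python 2.7
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)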

Is python-oauth2 the right tool for my issue?

I'm coding an app which has to use this API, so at a certain point I want to run a search on their database. I'm now struggling with which Python library is the right one for the OAuth2 authentication. I couldn't find one so far where I was sure it offers the necessary functions.
I wonder if this library (python-oauth2) offers what I need. But this isn't a library for the client, is it? It seems to be for the server...
I'd be really grateful if someone could give me advice on what I should work with.
Method 1
You will need the following modules; there is no need to use oauth. You just need to get a token before performing any search through the API:
requests, json, urllib
Here's a short example:
import requests

BASE_URL = "http://scoilnet.com/grants/apikey/"
# username and password are your API credentials
r = requests.post(BASE_URL + "user/token/", data={'username': username, 'password': password})
print r.content
The code above shows how to request a token from the API. Using that token you then make GET and POST requests to the API, which return JSON responses. Each JSON response is parsed into a dictionary, from which you load the data into your program.
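For example, a sketch of what the follow-up request could look like (the 'token' field name, the search/ path, and the Authorization scheme are assumptions; check the API documentation for the real names):
token = r.json()['token']  # assumes the token field is called 'token'
search = requests.get(BASE_URL + "search/",
                      params={'q': 'grants'},
                      headers={'Authorization': 'Token ' + token})
results = search.json()  # the JSON response parsed into a Python dictionary
print results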
Method 2
You can also use urllib, urllib2, or urllib3, as sketched below.
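For completeness, the same token request with urllib2 on Python 2 might look like this (same placeholder endpoint and credentials as above):
import json
import urllib
import urllib2

BASE_URL = "http://scoilnet.com/grants/apikey/"
payload = urllib.urlencode({'username': 'me', 'password': 'secret'})  # placeholders
# urllib2.urlopen() issues a POST when a data argument is supplied
response = urllib2.urlopen(BASE_URL + "user/token/", payload)
print json.loads(response.read())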

Python 2.7 - Tweepy - How to get rate_limit_status()?

I am working on a Twitter app using Python 2.7 and the latest version of the tweepy module. One thing I cannot figure out is how to use the method rate_limit_status().
Here is my code:
import tweepy, time, sys, random, pickle
import pprint
# argfile = str(sys.argv[1])
#enter the corresponding information from your Twitter application:
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_KEY = ''
ACCESS_SECRET = ''
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
public_tweets = api.home_timeline()
user = api.get_user('#MyUserName')
print api.rate_limit_status()
When I print the results of the call it gives me a large dictionary that I cannot decipher. I have looked at the tweepy documentation but can't find any good examples of using rate_limit_status().
What is the next step I should be doing to troubleshoot something like this?
Is there a tool to format these large dictionaries so I can read them and try to decipher how to access the values in the dictionary?
Edit:
It turns out I didn't have a good understanding of what a REST API is and how simple it is! I was expecting something MUCH more complicated in my head.
I actually switched to the twitter Python library instead of Tweepy and then did a lot of research on how to use the Twitter API.
Two youtube videos that REALLY helped me are:
https://www.youtube.com/watch?v=7YcW25PHnAA
and
https://www.youtube.com/watch?v=fhPb6ocUz_k
The Postman Chrome app was awesome and allowed me to easily test and visualize how my calls to the Twitter API worked, and it formatted the resulting JSON so I could read it.
To do quick calculations I also took the JSON from Postman and threw it into this website http://konklone.io/json/ to get a CSV that I could then open in Excel and make sure everything was behaving as expected and that I was getting the right number of results.
After all that, writing the Python code to interact with the Twitter API was easy!
Adding all this in the hope it will help someone else in the future! If it does, please let me know! :)
As per the Tweepy documentation:
Returns the remaining number of API requests available to the requesting user before the API limit is reached for the current hour. Calls to rate_limit_status do not count against the rate limit. If authentication credentials are provided, the rate limit status for the authenticating user is returned. Otherwise, the rate limit status for the requester’s IP address is returned.
In simpler words, it returns a JSON object that tells you the number of requests you have made and the number of requests remaining. The reason it is difficult to read at first sight lies in the fact that it contains the counts for every type of API call you have made, not only the API call you just executed.
For example, if you run the above script you have made a call to api.home_timeline(), and according to Twitter's rules you can only make 15 calls to this method in a given window session. If you unpack the returned JSON object there is a lot of data, but api.home_timeline() only affects the limits of the relevant methods, so you can check its rate limit using:
data = api.rate_limit_status()
print data['resources']['statuses']['/statuses/home_timeline']
print data['resources']['users']['/users/lookup']
You have to do a little research on the returned JSON to extract the relevant data. Since the returned JSON objects are hard to read, you can always run them through a JSON formatter to make them more readable and then analyse them.
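For instance, json.dumps() with an indent will pretty-print whichever slice of the response you care about:
import json

data = api.rate_limit_status()
# Pretty-print just the statuses family so the nesting is readable
print json.dumps(data['resources']['statuses'], indent=2)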
