Tweepy (Python) and OAuth (Twitter) Problem

auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
When I run this line of Tweepy code I run into two problems. First, I receive the error "Tweepy has no attribute OAuthHandler", even though OAuthHandler is exactly what the documentation I found shows, so one would assume it is valid code.
The second problem is that when I registered my app, I did not receive a consumer_token, and I'm not quite sure how to request one either.

First you have to get both the consumer_token and the consumer_secret. When you register the app, Twitter gives you a couple of strings that you then use for authentication: the consumer_token is the Consumer Key string Twitter provides, and the consumer_secret is the Consumer Secret string Twitter provides.
Then, when you call auth = tweepy.OAuthHandler(consumer_token, consumer_secret), both variables must already be set to the strings Twitter gave you, and the call should work.

Make sure you have
import tweepy
before you attempt to call any of its classes. That sounds like the main problem you are having.
You will need 2 sets of keys, the consumer keys, and the access tokens. Both are available on the https://dev.twitter.com/apps page where you have your registered app. The consumer keys are used by the OAuthHandler(), and the access tokens are used by set_access_token(). Please see this example of using OAuthHandler
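Putting the two answers together, a minimal sketch (with placeholder keys; I'm assuming Tweepy 3.x) looks like this:

import tweepy

# Strings from your registered app's page (placeholders here)
consumer_token = 'CONSUMER_KEY'
consumer_secret = 'CONSUMER_SECRET'
access_token = 'ACCESS_TOKEN'
access_token_secret = 'ACCESS_TOKEN_SECRET'

# Consumer keys go to OAuthHandler(), access tokens go to set_access_token()
auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

print(api.me().screen_name)  # quick sanity check that authentication worked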

Related

How to create a twitter bot that replies back as soon as another person tweets?

Yesterday I wrote a twitter bot with Python that takes the most recent tweet from Donald Trump, translates it over in Google Translate 45 times, and tweets back at him the final translation in English. Everything works, except the fact that I now need to add some sort of "listener" at the beginning of the code to automatically detect when he tweets so that the rest of the code can do its magic. I've been looking over the internet for some time now and I can't seem to find any sort of event handler that would allow the script to detect when he tweets. So that's why I've come to you guys. Is there any way to use Tweepy or other Python libraries to actively detect when someone tweets? I'll include my code so you guys can see what I want to happen exactly when he does Tweet. It's annotated so hopefully it's not too complicated to understand. Thanks!
import tweepy
from googletrans import Translator

# Keys for accessing the Twitter API
consumer_key = 'PLACEHOLDER'
consumer_secret = 'PLACEHOLDER'
access_token = 'PLACEHOLDER'
access_token_secret = 'PLACEHOLDER'

# Setting up authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Scrapes my timeline for DT's latest tweet. I want this to be done AUTOMATICALLY when he tweets.
tweet = api.user_timeline(screen_name='realDonaldTrump', count=1, include_rts=False, tweet_mode='extended')

# Translates the text pulled by the previous line using the Google Translate library.
for status in tweet:
    translator = Translator()
    translation = translator.translate(status._json["full_text"], 'mn')
    translation = translator.translate(translation.text, 'ja')
    # There are more translations in the actual code, but to reduce the length and
    # complexity, I've taken those out. They don't matter to the specific question.
    # Include his handle in the message so the tweet is tweeted back at him once the translation is complete.
    message = '@realDonaldTrump ' + translation.text
    # Tweets the message back at him under that specific tweet using its ID.
    send = api.update_status(message, in_reply_to_status_id=status._json["id"])
I just want the code to be able to scrape my timeline for one of DT's tweets in real time. Thanks!
To automate your script, you might need to push it to a production server and create a cron job that runs the script at regular intervals. To run it locally, I use 1. NGROK, which exposes your localhost addresses and ports to the outside world, i.e. it gives you a way to reach your localhost from the internet, and 2. Invictify, which lets you run your scripts on a schedule. Also, the script as it is will need a web service to be triggered: use Flask or FastAPI to create an endpoint that calls the script when triggered (see the sketch below).
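Here is a minimal sketch of such a trigger endpoint (my own illustration, not part of the original answer; Flask is chosen arbitrarily and run_bot is a hypothetical stand-in for the translate-and-reply logic above):

from flask import Flask

app = Flask(__name__)

def run_bot():
    # Hypothetical placeholder: put the translate-and-reply logic from the question here.
    pass

@app.route('/run-bot', methods=['POST'])
def trigger():
    run_bot()
    return 'ok', 200

if __name__ == '__main__':
    app.run(port=5000)

A scheduler (cron, Invictify, etc.) then only needs to POST to /run-bot at whatever interval you choose, and NGROK is what makes that URL reachable from outside your machine.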

How to use google python oauth libraries to implement OpenID Connect?

I am evaluating different options for authentication in a python App Engine flex environment, for apps that run within a G Suite domain.
I am trying to put together the OpenID Connect "Server flow" instructions here with how google-auth-library-python implements the general OAuth2 instructions here.
I can follow things up to step 4, "Exchange code for access token and ID token", which looks like flow.fetch_token, except that it says "the response to this request contains the following fields in a JSON array," and it includes not just the access token but the id token and other things. I did see this patch to the library. Does that mean I could use flow.fetch_token to create an IDTokenCredentials (and how?), and then use that to build an OpenID Connect API client (and where is that API documented)? And what about validating the id token: is there a separate python library to help with that, or is it part of the API library?
It is all very confusing. A great deal would be cleared up with some actual "soup to nuts" example code but I haven't found anything anywhere on the internet, which makes me think (a) perhaps this is not a viable way to do authentication, or (b) it is so recent the python libraries have not caught up? I would however much rather do authentication on the server than in the client with Google Sign-In.
Any suggestions or links to code are much appreciated.
It seems Google's python library contains a module for id token validation, found in the google.oauth2.id_token module. Once validated, it returns the decoded token, which you can use to obtain user information.
from google.oauth2 import id_token
from google.auth.transport import requests

request = requests.Request()
id_info = id_token.verify_oauth2_token(
    token, request, 'my-client-id.example.com')

if id_info['iss'] != 'https://accounts.google.com':
    raise ValueError('Wrong issuer.')

userid = id_info['sub']
Once you obtain user information, you should follow the authentication process described in the Authenticate the user section.
OK, I think I found my answer in the source code now.
google.oauth2.credentials.Credentials exposes id_token:
Depending on the authorization server and the scopes requested, this may be populated when credentials are obtained and updated when refresh is called. This token is a JWT. It can be verified and decoded (as @kavindu-dodanduwa pointed out) using google.oauth2.id_token.verify_oauth2_token.
And several layers down the call stack we can see fetch_token does some minimal validation of the response JSON (checking that an access token was returned, etc.) but basically passes through whatever it gets from the token endpoint, including (i.e. if an OpenID Connect scope is included) the id token as a JWT.
EDIT:
And the final piece of the puzzle is the translation of tokens from the (generic) OAuthSession to (Google-specific) credentials in google_auth_oauthlib.helpers, where the id_token is grabbed, if it exists.
Note that the generic oauthlib library does seem to implement OpenID Connect now, but that looks to be very recent and still in progress (July 2018). Google doesn't seem to use any of it at the moment (this threw me off a bit).
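For completeness, here is a rough sketch of that server flow end to end (my own illustration, not from Google's docs; 'client_secret.json', the redirect URI, and the way the callback code is captured are all assumptions):

from google_auth_oauthlib.flow import Flow
from google.oauth2 import id_token
from google.auth.transport import requests

flow = Flow.from_client_secrets_file(
    'client_secret.json',                      # hypothetical OAuth client config downloaded from the console
    scopes=['openid', 'email', 'profile'],
    redirect_uri='https://example.com/oauth2callback')

auth_url, state = flow.authorization_url(prompt='consent')
# ... redirect the user to auth_url, then handle the callback request ...

auth_code = 'PASTE_CODE_FROM_CALLBACK'         # hypothetical: the ?code=... value from the redirect
flow.fetch_token(code=auth_code)

credentials = flow.credentials                 # google.oauth2.credentials.Credentials, with id_token populated
claims = id_token.verify_oauth2_token(
    credentials.id_token, requests.Request(), credentials.client_id)
print(claims['sub'], claims.get('email'))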

Twitter search API: search by tweet id

I'd like to send a query to the Twitter search API using a tweet id, but it seems you cannot search for a tweet by its id (maybe because you don't need to search for it if you already have the id). For example, imagine we have the tweet https://twitter.com/great_watches/status/643389532105256961 and we want to send 643389532105256961 to the search API to see whether the tweet is available there or not.
I need it because I just want to compare the Twitter search API with the Twitter streaming API.
I have a python script which is listening to the stream for some keywords, and whenever a tweet comes in I would like to search for it on the Twitter search API to see whether it is also available there. Meaningless, huh?
You can't compare the Search API to the Streaming API the way you're doing it, because they retrieve different types of information.
From the Search API docs:
The Twitter Search API is part of Twitter’s v1.1 REST API. It allows
queries against the indices of recent or popular Tweets and behaves
similarly to, but not exactly like the Search feature available in
Twitter mobile or web clients, such as Twitter.com search.
Before getting involved, it’s important to know that the Search API is
focused on relevance and not completeness. This means that some Tweets
and users may be missing from search results. If you want to match for
completeness you should consider using a Streaming API instead.
Here's the scenario, based on the information you've given:
You're streaming the word python and you find a match.
You instantly take that match and look for it on search API
The issue is that by the time you go from the Streaming API (which is real time) to looking for the same tweet on the Search API, more relevant and more popular tweets may have superseded it.
You'll need to redefine the query sent to the Search API so it matches that exact tweet (i.e. include more than just python, as you did with the Streaming API).
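As a rough sketch (my own illustration; api is assumed to be an authenticated tweepy.API instance, set up as in the answer below), you could re-run the streamed keyword through the Search API and then check whether the streamed tweet's id shows up at all:

results = api.search(q='python', count=100)
found = any(t.id == 643389532105256961 for t in results)
print(found)  # False would mean the Search API did not surface that particular tweet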
You can get it using the tweepy API. Get the consumer key/secret and access key/secret from https://apps.twitter.com/, then run the following:
import tweepy

consumer_key = 'XXXX'
consumer_secret = 'XXXX'
access_key = 'XXXX-XXXX'
access_secret = 'XXXX'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)

tweet = api.statuses_lookup(['643389532105256961'])
print(tweet[0].text)  # Prints the message
More info here http://docs.tweepy.org/en/v3.5.0/api.html#API.statuses_lookup

Python 2.7 - Tweepy - How to get rate_limit_status()?

I am working on a twitter App using Python 2.7 and the latest version of the tweepy module. One thing I cannot figure out is how to use the function rate_limit_status()
Here is my code:
import tweepy, time, sys, random, pickle
import pprint
# argfile = str(sys.argv[1])
#enter the corresponding information from your Twitter application:
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_KEY = ''
ACCESS_SECRET = ''
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
public_tweets = api.home_timeline()
user = api.get_user('#MyUserName')
print api.rate_limit_status()
When I print the results of the function it gives me a large dictionary that I cannot decipher. I have looked at the tweepy documentation but can't find any good examples on using rate_limit_status().
What is the next step I should be doing to troubleshoot something like this?
Is there a tool to format these large dictionaries so I can read them and try to decipher how to access the values in the dictionary?
Edit:
It turns out I didn't have a good understanding of what a REST API is and how simple it is! I was expecting something MUCH more complicated in my head.
I actually switched from Tweepy to the Python twitter library and then did a lot of research on how to use the Twitter API.
Two youtube videos that REALLY helped me are:
https://www.youtube.com/watch?v=7YcW25PHnAA
and
https://www.youtube.com/watch?v=fhPb6ocUz_k
The Postman Chrome app was awesome and allowed me to easily test and visualize how my calls to the Twitter API worked and it easily formatted the resulting JSON to so I could read it.
To do quick calculations I also took the JSON from Postman and threw it into this website http://konklone.io/json/ to get a CSV that I could then open in Excel and make sure everything was behaving as expected and that I was getting the right number of results.
After all that, writing the Python code to interact with the Twitter API was easy!
Adding all this here in the hope it will help someone else in the future! If it does, please let me know! :)
As per the Tweepy documentation
Returns the remaining number of API requests available to the
requesting user before the API limit is reached for the current hour.
Calls to rate_limit_status do not count against the rate limit. If
authentication credentials are provided, the rate limit status for the
authenticating user is returned. Otherwise, the rate limit status for
the requester’s IP address is returned.
So in simpler words, it returns a JSON object that tells you the number of requests you have made and the number of requests remaining. The reason it is difficult to read at first sight is that it contains the counts for every type of API call you could make, not only the API call you just executed.
So, for example, if you run the above script you can see that you have made a call to api.home_timeline(). According to Twitter's rules you can only make 15 calls to this method in a given window; if you unpack the returned JSON object you will find a lot of data, but api.home_timeline() only affects the limits of the relevant methods, so after calling it you can check the rate limit using:
data = api.rate_limit_status()
print data['resources']['statuses']['/statuses/home_timeline']
print data['resources']['users']['/users/lookup']
However, you have to do a little research on the returned JSON before you can extract the relevant data from it; since the returned JSON objects are hard to read, you can always run them through a JSON formatter to make them more readable and then analyse them.
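For example, a small sketch of my own (reusing the api object from the question), using the standard-library json module to pretty-print just the slice you care about:

import json

data = api.rate_limit_status()
# Pretty-print only the 'statuses' resource family so the structure is easy to scan
print json.dumps(data['resources']['statuses'], indent=2, sort_keys=True)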

Getting Private Users in Tweepy

I'm using tweepy to follow some people. I try following x's followers, and then y's followers. If there is a mutual follower, I get the following error:
[{u'message': u"You've already requested to follow SOMEUSER.", u'code': 160}]
So I want to be able to tell whether a user is private or not from the user's ID, so I can skip trying to follow them in my code. Is this possible? The API Reference does not list the object properties.
I know this question was asked a year ago, but I came across it because I was wondering the same thing. I can now provide an accurate answer for those who land on this page like I did.
import tweepy
SCREEN_NAME = 'abcdefg'
CONSUMER_KEY = '1234'
CONSUMER_SECRET = '5678'
ACCESS_TOKEN = '4321'
ACCESS_TOKEN_SECRET = '8765'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
u = api.get_user(123456789)
print u.protected
I would suggest that you keep a local database with the users that you already requested to follow so you don't do unnecessary API calls. You are probably asking to follow the same users over and over again.
If the local database is not an option you can see if a user is private with the GET users/show API endpoint passing a user_id or screen_name. The response returns the information you need (check the "protected" field) and much more.
In Tweepy, this endpoint is exposed through the get_user() method.
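Putting both suggestions together, a small sketch (follow_if_public and already_requested are my own hypothetical names; api is an authenticated tweepy.API instance):

def follow_if_public(api, user_id, already_requested):
    # Skip users we've already asked to follow (kept in a local set or database)
    if user_id in already_requested:
        return
    user = api.get_user(user_id)
    # 'protected' is True for private accounts; skip them to avoid error code 160
    if user.protected:
        return
    api.create_friendship(user_id)
    already_requested.add(user_id)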
