Python Twitter Search API

I am wondering why this code does not return the desired text for every JSON object.
# import
import json
from twitter import Twitter, OAuth, TwitterHTTPError, TwitterStream
# Authentication
ACCESS_TOKEN = 'hidden'
ACCESS_SECRET = 'hidden'
CONSUMER_KEY = 'hidden'
CONSUMER_SECRET = 'hidden'
oauth = OAuth(ACCESS_TOKEN, ACCESS_SECRET, CONSUMER_KEY, CONSUMER_SECRET)
# TWITTER SEARCH
twitter = Twitter(auth=oauth)
search_tweets = twitter.search.tweets(q='#google')
json_dump = json.dumps(search_tweets)
for line in json_dump:
    print(json_dump['text'])
It gives me the following error:
Traceback (most recent call last):
File "twitter_streaming.py", line 46, in
print(json_dump['text'])
TypeError: string indices must be integers

This line:
json_dump = json.dumps(search_tweets)
converted the search_tweets variable into a string containing JSON in it. The issue is then with this code:
for line in json_dump:
    print(json_dump['text'])
What this code does is iterate over the characters in json_dump (since json_dump is now a string) and then, for each character, try to look up the ['text'] index on that character, which is obviously not possible, hence the TypeError.
It seems like you need to work with search_tweets as a parsed structure (a dict containing a list of tweet dicts) instead, in order to make it work with your later code. However, without an example of the search_tweets output I can't say exactly what code you need; if you could add the contents of the search_tweets variable, that would be helpful.
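To illustrate the diagnosis above: json.dumps() turns the already-parsed response back into a string, so iterating yields characters. A minimal sketch of the difference, using an invented stand-in dict (real Search API responses keep the tweets under a "statuses" key, but I don't have the asker's actual output):

```python
import json

# Invented stand-in for the dict the search call returns
search_tweets = {"statuses": [{"text": "first tweet"}, {"text": "second tweet"}]}

# What the question's code did: dumps() produces a *string* ...
json_dump = json.dumps(search_tweets)
print(type(json_dump))  # <class 'str'>
# ... so iterating it yields single characters, and char['text'] raises TypeError

# What works: iterate the parsed structure directly
texts = [tweet["text"] for tweet in search_tweets["statuses"]]
print(texts)  # ['first tweet', 'second tweet']
```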


Twitter module has no object 'trends'

import tweepy
import json
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
OAUTH_TOKEN = ''
OAUTH_TOKEN_SECRET = ''
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
twitter_api = tweepy.API(auth)
# I made a dict of different countries and their WOE_ID...
PLACE_WOE_ID = country_id[country]
place_trends = twitter_api.trends.place(_id=PLACE_WOE_ID)
Every time I run my code I get the following error. I checked other posts on Stack Overflow regarding the Twitter API, but I haven't found a solution yet.
Traceback (most recent call last):
File "C:/Users/user/Documents/twipgm.py", line 44, in <module>
place_trends = twitter_api.trends.place(_id=PLACE_WOE_ID)
AttributeError: 'API' object has no attribute 'trends'
There is no method named
place()
in the tweepy documentation.
place_trends = twitter_api.trends_place(..)
should solve your problem; I strongly suggest you check the docs.
Note that Tweepy 4.0 version has renamed this method to .get_place_trends().
You can follow this tutorial to get some recent updates on Tweepy >4.0 use cases.
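Whichever method name your Tweepy version uses, the call returns parsed JSON. Assuming the shape documented for the v1.1 trends/place endpoint (a one-element list whose item carries a "trends" array), extracting the trend names looks roughly like this; the sample data below is invented to stand in for a live call:

```python
# Invented sample of the JSON the trends/place endpoint returns:
# a one-element list whose item holds the "trends" array
place_trends = [{
    "trends": [
        {"name": "#python", "tweet_volume": 120000},
        {"name": "#tweepy", "tweet_volume": None},
    ],
    "locations": [{"name": "Worldwide", "woeid": 1}],
}]

# Pull out just the trend names
names = [t["name"] for t in place_trends[0]["trends"]]
print(names)  # ['#python', '#tweepy']
```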

python code to fetch the tweets with its place, from where it is tagged

I'm very new to Python. I need code that will fetch tweets along with the place from which they were tagged, i.e. geotagged tweets.
I have code that exports tweets to a CSV file based on a given hashtag; I want another column added that specifies where each tweet was sent from.
Please help me with this.
The code is as follows,
import tweepy
import csv
import pandas as pd
####input your credentials here
consumer_key = ''
consumer_secret = ''
access_token = '-'
access_token_secret = ''
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth,wait_on_rate_limit=True)
csvFile = open('mydoc.csv', 'a')
#Use csv Writer
csvWriter = csv.writer(csvFile)
for tweet in tweepy.Cursor(api.search, q="#iPhoneX", count=100,
                           lang="en",
                           since="2017-01-01").items():
    print(tweet.created_at, tweet.text)
    csvWriter.writerow([tweet.created_at, tweet.text.encode('utf-8')])
I'm very much in need of it. Please help!
I think you can do like this:
for tweet in tweepy.Cursor(api.search, q="#iPhoneX", count=100, lang="en",
                           since="2017-01-01").items():
    if tweet.coordinates or tweet.geo:
        print(tweet.id_str, tweet.coordinates, tweet.geo)
You can see the structure of a tweet object (also called status) here. The different fields correspond to:
coordinates: "Represents the geographic location of this Tweet as reported by the user or client application. The inner coordinates array is formatted as geoJSON (longitude first, then latitude)"
place: "When present, indicates that the tweet is associated (but not necessarily originating from) a Place". This happens when the user tagged a place for the tweet.
geo: "Deprecated. Use the coordinates field instead. This deprecated attribute has its coordinates formatted as [lat, long], while all other Tweet geo is formatted as [long, lat]."
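To make the axis-order difference between the two fields concrete, here is a small stand-in tweet dict with both fields populated (the coordinate values are invented for the example):

```python
# Invented sample: the same point expressed in both geo fields
tweet = {
    "coordinates": {"type": "Point", "coordinates": [-122.4, 37.8]},  # [long, lat]
    "geo":         {"type": "Point", "coordinates": [37.8, -122.4]},  # [lat, long] (deprecated)
}

# Unpack in the documented [longitude, latitude] order
lng, lat = tweet["coordinates"]["coordinates"]
print("latitude:", lat, "longitude:", lng)
```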

Python function won't accept string as a parameter

I am trying to pass a string as a parameter to a function in Python (2), but as I try to execute it, I get an error that says:
Error get_all_tweets expected a string or other character buffer object
This is my code:
def get_all_tweets(screen_name):
    consumer_key = 'XXXXXXXXXXXXXXXXXXXXXXXX'
    consumer_secret = 'XXXXXXXXXXXXXXXXXXXXXXXX'
    access_token = 'XXXXXXXXXXXXXXXXXXXXXXXX'
    access_secret = 'XXXXXXXXXXXXXXXXXXXXXXXX'
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    api = tweepy.API(auth)
    alltweets = []
    new_tweets = api.user_timeline(screen_name = screen_name, count = 200)
    alltweets.extend(new_tweets)
    try:
        with open('$screen_name.json', 'a') as f:
            f.write(alltweets)
            return True
    except BaseException as e:
        print ('Error get_all_tweets %s' % str(e))
        return True
get_all_tweets(str("BarackObama"))
I can't understand why I get a complaint about the parameter not being a string, which it clearly is. I am fairly new to Python, but every resource I have come across states that this is the way to pass a string as a parameter.
Is there something I have overlooked? I don't get any other errors.
I am using Python 2.7.12.
Thanks in advance.
The misleading error message stems from you catching BaseException, something you should never, ever do.
The true error is a TypeError: you are trying to write a list to a file:
f.write(alltweets)
This won't work, because the write method of a file object only accepts strings or other character buffer objects as arguments.
The way to write a list to a file is by iterating over it:
for tweet in alltweets:
    f.write(tweet + "\n")
This will probably not work in your case either, because what tweepy returns as a tweet is a Status object, not a simple string. Status objects keep the raw response dict in tweet._json, so encode that with json:
import json
...
for tweet in alltweets:
    f.write(json.dumps(tweet._json) + "\n")
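Writing one JSON document per line like this ("JSON Lines") also makes reading the file back trivial: one json.loads() per line. A self-contained sketch of the round trip, using plain dicts in place of real tweets and an in-memory buffer in place of the file:

```python
import io
import json

# Plain dicts stand in for the tweet data
alltweets = [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}]

# Write one JSON document per line; this style appends cleanly
buf = io.StringIO()
for tweet in alltweets:
    buf.write(json.dumps(tweet) + "\n")

# Reading back is the mirror image: parse each line on its own
buf.seek(0)
restored = [json.loads(line) for line in buf]
print(restored == alltweets)  # True
```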

Filter text with Python before tweeting

Still kind of new to this stuff, so please bear with me:
I have a Java program that scrapes for links; it works like a charm. I also recently learned how to make an auto-tweeting bot in Python, which also works flawlessly.
Now here comes where I am having trouble...
The Java program exports all links found to a text file (no biggie), and my Twitter bot grabs any lines in the text file and tweets them (again, no biggie). BUT what I want to do is filter the Twitter bot to ONLY tweet specific links that contain specific keywords...
Here is my twitter bot
import tweepy, time, sys
argfile = str(sys.argv[1])
CONSUMER_KEY = 'example'
CONSUMER_SECRET = 'example'
ACCESS_KEY = 'example'
ACCESS_SECRET = 'example'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
filename=open(argfile,'r')
f=filename.readlines()
filename.close()
for line in f:
    api.update_status(line)
    time.sleep(60)  # Tweet every 60 seconds
Now I have tried multiple things like
for line in f:
    if: 'robot' in line:
        api.update_status(line)
        time.sleep(60)  # Tweet every 60 seconds
which doesn't seem to be working. I am curious whether I have to use configparser to filter the data?
First of all, you have to use the if statement syntax correctly (no colon directly after if):
if 'word' in line:
    api.update_status(line)
    time.sleep(60)
Secondly, a word of warning about the tempting shortcut if 'word1' or 'word2' or 'word3' in line: it does not do what it looks like. A non-empty string is always truthy, so that condition is always true and every line gets tweeted. To match any of several words, test each one explicitly, for example with any():
if any(word in line for word in ('word1', 'word2', 'word3')):
    api.update_status(line)
    time.sleep(60)
ps:- you can list as many words to filter as you wish
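The filtering logic can be checked without touching the Twitter API at all. A minimal sketch, with made-up lines and keywords standing in for the scraped link file:

```python
# Made-up input lines and filter keywords, just to exercise the filter
lines = [
    "https://example.com/robot-arm",
    "https://example.com/cat-videos",
    "https://example.com/ai-news",
]
keywords = ("robot", "ai")

# Keep only the lines containing at least one keyword
matches = [line for line in lines if any(word in line for word in keywords)]
print(matches)  # the robot-arm and ai-news links
```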

LinkedIn API Python Key Error 2.7

This code is available online; it maps your connections on LinkedIn using the LinkedIn API.
I'm able to connect fine, and everything runs okay until the last step of actually writing the data to a CSV.
Whenever I run the code
import oauth2 as oauth
import urlparse
import simplejson
import codecs
CONSUMER_KEY = "xxx"
CONSUMER_SECRET = "xxx"
OAUTH_TOKEN = "xxx"
OAUTH_TOKEN_SECRET = "xxx"
OUTPUT = "linked.csv"
def linkedin_connections():
    # Use your credentials to build the oauth client
    consumer = oauth.Consumer(key=CONSUMER_KEY, secret=CONSUMER_SECRET)
    token = oauth.Token(key=OAUTH_TOKEN, secret=OAUTH_TOKEN_SECRET)
    client = oauth.Client(consumer, token)
    # Fetch first degree connections
    resp, content = client.request('http://api.linkedin.com/v1/people/~/connections?format=json')
    results = simplejson.loads(content)
    # File that will store the results
    output = codecs.open(OUTPUT, 'w', 'utf-8')
    # Loop thru the 1st degree connections and see how they connect to each other
    for result in results["values"]:
        con = "%s %s" % (result["firstName"].replace(",", " "), result["lastName"].replace(",", " "))
        print >>output, "%s,%s" % ("John Henry", con)
        # This is the trick: use the search API to get related connections
        u = "https://api.linkedin.com/v1/people/%s:(relation-to-viewer:(related-connections))?format=json" % result["id"]
        resp, content = client.request(u)
        rels = simplejson.loads(content)
        try:
            for rel in rels['relationToViewer']['relatedConnections']['values']:
                sec = "%s %s" % (rel["firstName"].replace(",", " "), rel["lastName"].replace(",", " "))
                print >>output, "%s,%s" % (con, sec)
        except:
            pass

if __name__ == '__main__':
    linkedin_connections()
When I run this I get an error message:
Traceback (most recent call last):
File "linkedin-2-query.py", line 51, in <module>
linkedin_connections()
File "linkedin-2-query.py", line 35, in linkedin_connections
for result in results["values"]:
KeyError: 'values'
Any suggestions or help would be greatly appreciated!
I encountered the same issue working through the post Visualizing your LinkedIn graph using Gephi – Part 1.
Python raises a KeyError whenever a dict lookup (a = adict[key]) is made and the key is not in the dictionary. See KeyError - Python Wiki.
After searching a bit and adding some print statements, I realized that my OAuth session had expired, so the OAuth token in my linkedin-2-query.py script was no longer valid.
Since the OAuth token is invalid, the LinkedIn API does not return a dictionary with the key "values" like the script expects. Instead, the API returns the string 'N'. Python tries to find the dict key "values" in the string 'N', fails, and generates the KeyError: 'values'.
So a new, valid OAuth token & secret should get the API to return a dict containing connection data.
I run the linkedin-1-oauth.py script again, and then visit the LinkedIn Application details page to find my new OAuth token. (The screenshot omits the values for my app. You should see alphanumeric values for each Key, Token, & Secret.)
...
I then update my linkedin-2-query.py script with the new OAuth User Token and OAuth User Secret
OAUTH_TOKEN = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # your updated OAuth User Token
OAUTH_TOKEN_SECRET = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # your updated OAuth User Secret
After updating the OAuth token & secret, I immediately run my linkedin-2-query.py script. Hooray, it runs without errors and retrieves my connection data from the API.
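A defensive check on the response makes this failure mode obvious instead of surfacing as a bare KeyError deep in the loop. A sketch (Python 3 here for brevity; the function name and sample response bodies are made up):

```python
import json

def parse_connections(content):
    """Return the 'values' list from an API response body, or raise a
    clear error if the body isn't the expected dict (e.g. an
    expired-token reply such as the bare string 'N')."""
    try:
        results = json.loads(content)
    except ValueError:
        raise RuntimeError("API did not return JSON (token expired?): %r" % content)
    if not isinstance(results, dict) or "values" not in results:
        raise RuntimeError("Unexpected API response: %r" % results)
    return results["values"]

# Made-up example of a well-formed response body
good = '{"values": [{"firstName": "Ada", "lastName": "Lovelace"}]}'
print(parse_connections(good))  # [{'firstName': 'Ada', 'lastName': 'Lovelace'}]
```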
