I am trying to create a little Python script to follow Twitter user IDs from a text file (one per line, in numeric format, e.g. 217275660, 30921943, etc.). I took a look at this answer on Stack Exchange and used its try/except approach to write the code below, but I am getting the error "NameError: name 'TwitterError' is not defined"...
Does anyone know how to clear this issue up and fix the code? I feel like it should be pretty simple, but I haven't used the Twitter API before.
# Script to follow Twitter users from text file containing user IDs (one per line)
# Header stuff I've just thrown in from another script to authenticate
import json
import time
import tweepy
import pprint
from tweepy.parsers import RawParser
from auth import TwitterAuth
from datetime import datetime
auth = tweepy.OAuthHandler(TwitterAuth.consumer_key, TwitterAuth.consumer_secret)
auth.set_access_token(TwitterAuth.access_token, TwitterAuth.access_token_secret)
rawParser = RawParser()
api = tweepy.API(auth_handler = auth, parser = rawParser)
# Follow everyone from list?!
with open('to_follow.txt') as f:
    for line in f:
        try:
            api.CreateFriendship(userID)
        except TwitterError:
            continue
print "Done."
That is probably because tweepy raises errors of type TweepError, so you need to catch TweepError instead of TwitterError:
for line in f:
    try:
        api.CreateFriendship(userID)
    except tweepy.TweepError:
        continue
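For completeness, here is a minimal sketch of the whole loop with the other fixes folded in. Note that CreateFriendship is the method name from the python-twitter library; in tweepy the call is create_friendship, and the ID has to come from the line read from the file. This reuses the api object set up in the question and assumes tweepy 3.x:
with open('to_follow.txt') as f:
    for line in f:
        user_id = line.strip()
        if not user_id:
            continue  # skip blank lines
        try:
            # tweepy's follow call; pass the numeric ID as user_id
            api.create_friendship(user_id=user_id)
        except tweepy.TweepError:
            # skip IDs that fail (already followed, suspended, etc.)
            continue
print "Done."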
I am trying to use machine learning to perform sentiment analysis on data from Twitter. To aggregate the data, I've made a class which will mine and pre-process data. In order to clean and pre-process the data, I'd like to convert each tweet's text to a string. However, when the line of code in the inner for loop of the massMine method is called, I get a WebDriverException: no such session. The relevant bits of code are below; any input is appreciated, thanks.
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import numpy as np
import pandas
import re

class TweetMiner(object):

    def __init__(self):
        self.base_url = u'https://twitter.com/search?q=from%3A'
        self.raw_data = []

    def mineTweets(self, query, tweet_quota):
        '''
        Mine data from a singular twitter account,
        input consists of a twitter handle, and a
        value indicating how much data to mine
        Ex: "#diddy" should be inputted as "diddy"
        '''
        browser = webdriver.Chrome()
        url = self.base_url + query
        browser.get(url)
        time.sleep(1)
        body = browser.find_element_by_tag_name('body')
        for _ in range(tweet_quota):
            body.send_keys(Keys.PAGE_DOWN)
            time.sleep(0.2)
        tweets = browser.find_elements_by_class_name('tweet-text')
        for tweet in tweets:
            print(tweet.text)
        browser.close()
        return tweets

    def massMine(self, inputArray, dataSize):
        '''
        Mine data from an array of twitter
        accounts, input array consists of twitter
        handles and a value indicating how much
        data to mine
        Ex: "#diddy" should be inputted as "diddy"
        '''
        for user in inputArray:
            rtn = ""
            tweets = self.mineTweets(user, dataSize)
            for tweet in tweets:
                rtn += (tweet.text)
        return rtn
EDIT: I don't know what caused this error, but if anyone stumbles across this post with a similar error: I was able to work around it by simply writing each tweet to a text file.
I used to get this error when I had opened too many browser instances and hadn't closed them properly (whether via an automation script or manually). Once the other browser instances were killed one by one, the error went away. I also found that the C:\Users\(yourAccountName)\AppData\Local\Temp directory was completely filled up, which was causing the NoSuchSession error.
The preferred solution is to check whether too many browsers/tabs are open and close them, or to manually remove all the contents of the Temp path above and try again.
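If the stray sessions come from scripts that crash before reaching browser.close(), one way to guarantee cleanup is to release the driver in a finally block. This is a rough sketch, not tied to the questioner's class, and it assumes a local chromedriver is on the PATH; note it also copies the tweet text out of the elements before the browser goes away:
from selenium import webdriver

browser = webdriver.Chrome()
try:
    browser.get('https://twitter.com/search?q=from%3Adiddy')
    # copy the text out while the session is still alive
    texts = [el.text for el in browser.find_elements_by_class_name('tweet-text')]
finally:
    # quit() ends the session and the chromedriver process,
    # so half-open browsers don't pile up in Temp
    browser.quit()

for text in texts:
    print(text)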
I'm pretty new.
I wrote this python script to make an API call from blockr.io to check the balance of multiple bitcoin addresses.
The contents of btcaddy.txt are Bitcoin addresses separated by commas. For this example, let it parse this.
import urllib2
import json
btcaddy = open("btcaddy.txt","r")
urlRequest = urllib2.Request("http://btc.blockr.io/api/v1/address/info/" + btcaddy.read())
data = urllib2.urlopen(urlRequest).read()
json_data = json.loads(data)
balance = float(json_data['data''address'])
print balance
raw_input()
However, it gives me an error. What am I doing wrong? For now, how do I get it to print the balance of the addresses?
You've done multiple things wrong in your code. Here's my fix. I recommend a for loop.
import json
import urllib
addresses = open("btcaddy.txt", "r").read()
base_url = "http://btc.blockr.io/api/v1/address/info/"
request = urllib.urlopen(base_url+addresses)
result = json.loads(request.read())['data']
for balance in result:
    print balance['address'], ":", balance['balance'], "BTC"
You don't need an input at the end, either.
Your question is clear, but your attempt is not.
You said you have a file with more than one entry, so you need to retrieve the lines of this file:
with open("btcaddy.txt","r") as a:
    addresses = a.readlines()
Now you can iterate over the entries and make a request to that URI. The urllib module is enough for this task:
import json
import urllib.request

base_url = "http://btc.blockr.io/api/v1/address/info/%s"
for address in addresses:
    request = urllib.request.urlopen(base_url % address.strip())
    result = json.loads(request.read().decode('utf8'))
    print(result)
HTTP responses arrive as bytes, so you should use decode('utf8') to turn the data into text.
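If you want just the balance rather than the whole response, you can pull it out of the parsed JSON the same way the other answer does; this sketch reuses addresses and base_url from above and assumes blockr.io's response shape, with the fields nested under 'data':
for address in addresses:
    request = urllib.request.urlopen(base_url % address.strip())
    result = json.loads(request.read().decode('utf8'))
    data = result['data']  # one address per request, so 'data' is a single object
    print(data['address'], ":", data['balance'], "BTC")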
My goal is to connect to the YouTube API and download the URLs of videos from specific music producers. I found the script below through this link: https://www.youtube.com/watch?v=_M_wle0Iq9M. In the video the code works beautifully, but when I try it on Python 2.7 it gives me KeyError: 'items'.
I know KeyErrors can occur when there is an incorrect use of a dictionary or when a key doesn't exist.
I have tried going to the Google Developers site for YouTube to make sure that 'items' exists, and it does.
I am also aware that using get() may be helpful for my problem, but I am not sure. Any suggestions for fixing my KeyError with the following code, or for improving the code so I can reach my main goal of downloading the URLs (I have a YouTube API key)?
Here is the code:
#these modules help with HTTP request from Youtube
import urllib
import urllib2
import json
API_KEY = open("/Users/ereyes/Desktop/APIKey.rtf","r")
API_KEY = API_KEY.read()
searchTerm = raw_input('Search for a video:')
searchTerm = urllib.quote_plus(searchTerm)
url = 'https://www.googleapis.com/youtube/v3/search?part=snippet&q='+searchTerm+'&key='+API_KEY
response = urllib.urlopen(url)
videos = json.load(response)
videoMetadata = [] #declaring our list
for video in videos['items']:  # "for loop" cycles through the json response and searches in items
    if video['id']['kind'] == 'youtube#video':  # makes sure the item we are looking at is a video
        videoMetadata.append(video['snippet']['title'] +  # getting the title of the video and putting it into the list
                             "\nhttp://youtube.com/watch?v=" + video['id']['videoId'])
videoMetadata.sort()  # sorts our list alphabetically
print ("\nSearch Results:\n")  # print out the search results
for metadata in videoMetadata:
    print (metadata) + "\n"
raw_input('Press Enter to Exit')
The problem is most likely a combination of using an RTF file instead of a plain text file for the API key, plus some confusion over whether to use urllib or urllib2, since you imported both.
Personally, I would recommend requests, but I think you need to read() the contents of the response to get a string:
response = urllib.urlopen(url).read()
You can check that by printing the response variable.
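Since requests came up, here is a rough sketch of the same search using requests and a plain-text key file; the APIKey.txt path is an assumption, the point being that a .rtf file read with read() drags RTF markup into the key:
import requests

# assumes the key has been saved as plain text, e.g. APIKey.txt
with open("/Users/ereyes/Desktop/APIKey.txt") as key_file:
    api_key = key_file.read().strip()

search_term = raw_input('Search for a video:')
params = {'part': 'snippet', 'q': search_term, 'key': api_key}
response = requests.get('https://www.googleapis.com/youtube/v3/search', params=params)
videos = response.json()

for video in videos.get('items', []):
    if video['id']['kind'] == 'youtube#video':
        print(video['snippet']['title'] +
              "\nhttp://youtube.com/watch?v=" + video['id']['videoId'])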
I am a total newbie when it comes to backend work. I am working on a very simple webpage that needs one element to be updated every couple of minutes or so. I'd like it to make a request to my Firebase database, get a single integer, and change a number on the webpage to that integer.
Right now I am having trouble updating Firebase with a simple Python program.
When I run the script, it adds 6 new random variables with the value I'd like to send to Firebase. Here is what my code looks like so far:
from firebase import firebase
fb = firebase.FirebaseApplication('https://myAssignedDomain.com/', None)
Result = fb.post('test/coffee', {'percentage': 40})
What do I need to do in order to only change one existing value in Firebase rather than create 6 new random variables?
This is how you can update the value of a particular property with the firebase python 1.2 package:
from firebase import firebase
fb = firebase.FirebaseApplication('https://myAssignedDomain.com/', None)
fb.put('test/asdf', "count", 4)  # put(path, property_name, property_value)
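Applied to the node from the question, that would look something like the sketch below (same 'test/coffee' path and 'percentage' key as in the question); put() overwrites the named key instead of generating a new push ID the way post() does:
from firebase import firebase

fb = firebase.FirebaseApplication('https://myAssignedDomain.com/', None)
# overwrite test/coffee/percentage instead of creating a new random key
fb.put('test/coffee', 'percentage', 40)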
This function will update the 'percentage' value in your node. Just make sure that the node is writable so your script can modify it.
import urllib.request
import urllib.error
import json
def update_entry(new_percentage):
    my_data = dict()
    my_data["percentage"] = new_percentage
    json_data = json.dumps(my_data).encode()
    request = urllib.request.Request("https://<YOUR-PROJECT-ID>.firebaseio.com/test/coffee.json",
                                     data=json_data, method="PATCH")
    try:
        loader = urllib.request.urlopen(request)
    except urllib.error.URLError as e:
        message = json.loads(e.read())
        print(message["error"])
    else:
        print(loader.read())
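A quick usage example for the function above, assuming the database rules allow the write:
update_entry(40)  # PATCHes test/coffee so that percentage becomes 40, leaving other keys untouched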
I'm creating a simple transit Twitter bot which posts a tweet to my API, then grabs the result to reply later with an answer about travel times and such. All the magic is on the server side, and this code should work just fine. Here's how:
A user composes a tweet like the one below:
#kollektiven Sundsvall Navet - Ljustadalen
My script removes the #kollektiven from the tweet and sends the rest, Sundsvall Navet - Ljustadalen, to our API. The API should then return JSON to the script, and the script should reply with an answer like this:
#jackbillstrom Sundsvall busstation Navet (2014-01-08 20:45) till Ljustadalen centrum (Sundsvall kn) (2014-01-08 20:59)
But it doesn't. I'm using code from GitHub called spritzbot. I edited extensions/hello.py to look like the code below:
# -*- coding: utf-8 -*-
import json, urllib2, os
os.system("clear")
def process_mention(status, settings):
    print status.user.screen_name, ':', status.text.encode('utf-8')
    urlencode = status.text.lower().replace(" ", "%20")  # URL-encoding
    tweet = urlencode.strip('#kollektiven ')
    try:
        call = "http://xn--datorkraftfrvrlden-xtb17a.se/kollektiven/proxy.php?input=" + tweet  # Endpoint
        endpoint = urllib2.urlopen(call)  # GET request to the API endpoint
        data = json.load(endpoint)  # Load JSON
        answer = data['proxyOutput']  # The answer from the API
        return dict(response=str(answer))  # Posts the answer tweet
    except:
        return dict(response="Error, kontakta #jackbillstrom")  # Error message
What is causing this problem, and why? I made some changes before arriving at this revision, and it worked back then.
You need:
if __name__ == '__main__':
    process_mention(...)
    ...
You're not calling process_mention anywhere, just defining it.
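For a quick local test outside the bot framework, you could call process_mention yourself with a stand-in status object. FakeUser and FakeStatus below are made-up stand-ins for illustration only, not part of spritzbot or tweepy:
class FakeUser(object):
    screen_name = 'jackbillstrom'

class FakeStatus(object):
    user = FakeUser()
    text = u'#kollektiven Sundsvall Navet - Ljustadalen'

if __name__ == '__main__':
    # call the handler directly and print what it would reply with
    print process_mention(FakeStatus(), settings=None)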