Posting a random image from a directory to Twitter - Python

So I'm working on a little mini project running off of a Raspberry Pi. The end goal is to have the Pi run a Python script that posts a random image from a directory to Twitter (a Twitter bot). I have this code so far, and I'm getting an error. I'm assuming the error is caused by it somehow not receiving the correct file type, but I'm not 100% sure. A little help would be fantastic!
import os
import random
from twython import Twython

# removed keys for privacy reasons
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_KEY = ''
ACCESS_SECRET = ''

twitter = Twython(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_KEY, ACCESS_SECRET)

path = '/home/pi/DailyMeme/Memes/downloaded_images'
count = 0
file_list = []
for filename in os.listdir(path):
    count = count + 1
    file_list.append(filename)
rnd = random.randint(0, count)
photo = open('/home/pi/DailyMeme/Memes/downloaded_images/' + file_list[rnd] + '.jpg', 'rb')
response = twitter.upload_media(media=photo)
twitter.update_status(status='check out this random meme', media_ids[response['media_id']])
EDIT: I'm new to Python, so it would be great if you could explain any solutions.
Error message:
  File "random_meme.py", line 20
    twitter.update_status(status='check out this random meme' , media_ids[response['media_id']])
SyntaxError: non-keyword arg after keyword arg

The issue is that once you use a keyword argument in a function call (in this case status), all the following arguments in that call must also be keyword arguments.
Therefore you'll need to either reorder the arguments or add the missing keyword to the second one; for Twython's update_status that keyword is media_ids, which expects a list:
twitter.update_status(status='check out this random meme', media_ids=[response['media_id']])
(This is at least part of the issue, regardless of what you actually want to achieve.)
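For completeness, here is a minimal sketch of the whole script with the syntax error fixed, plus two latent bugs in the original: random.randint(0, count) can index one past the end of the list, and os.listdir() already returns full filenames, so appending '.jpg' would double the extension. The path is taken from the question.
import os
import random
from twython import Twython

# removed keys for privacy reasons
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_KEY = ''
ACCESS_SECRET = ''

twitter = Twython(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_KEY, ACCESS_SECRET)

path = '/home/pi/DailyMeme/Memes/downloaded_images'
# random.choice() picks a valid entry for us, and the filename
# already includes its extension.
filename = random.choice(os.listdir(path))
with open(os.path.join(path, filename), 'rb') as photo:
    response = twitter.upload_media(media=photo)
twitter.update_status(status='check out this random meme',
                      media_ids=[response['media_id']])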

Related

Python flask server to retrieve certain records

I have the following Python code for a Flask server. I am trying to have this part of the code list all my vehicles that match the horsepower that I put in through my browser. I want it to return all the car names that match the horsepower, but what I have doesn't seem to be working; it returns nothing. I know the issue is somewhere in the for statement, but I don't know how to fix it.
This is my first time doing something like this and I've been trying multiple things for hours. I can't figure it out. Could you please help?
from flask import Flask
from flask import request
import os, json

app = Flask(__name__, static_folder='flask')

@app.route('/HORSEPOWER')
def horsepower():
    horsepower = request.args.get('horsepower')
    message = "<h3>HORSEPOWER " + str(horsepower) + "</h3>"
    path = os.getcwd() + "/data/vehicles.json"
    with open(path) as f:
        data = json.load(f)
    for record in data:
        horsepower = int(record["Horsepower"])
        if horsepower == record:
            car = record["Car"]
    return message
The following example should meet your expectations.
from flask import Flask
from flask import request
import os, json

app = Flask(__name__)

@app.route('/horsepower')
def horsepower():
    # The URL parameter is automatically converted to an integer.
    horsepower = request.args.get('horsepower', type=int)
    # Read the file, which is located in the data folder relative to
    # the application root directory.
    path = os.path.join(app.root_path, 'data', 'vehicles.json')
    with open(path) as f:
        data = json.load(f)
    # Build a list of the names of all cars whose horsepower matches
    # the parameter passed.
    cars = [record['Car'] for record in data if horsepower == int(record["Horsepower"])]
    # The result is then output, separated by commas.
    return f'''
        <h3>HORSEPOWER {horsepower}</h3>
        <p>{','.join(cars)}</p>
    '''
There are many different ways of writing the loop. I used a short variant (a list comprehension) in the example. Written out in more detail, it looks like this:
cars = []
for record in data:
    if horsepower == int(record['Horsepower']):
        cars.append(record['Car'])
As a tip: pay attention to places where you overwrite the value of a variable by reusing the same name, as happens to horsepower inside your loop.
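As a quick way to check the route without a browser, here is a sketch using Flask's built-in test client (the horsepower value 130 is just an illustrative guess):
# Issue a request against the app directly; no running server needed.
with app.test_client() as client:
    response = client.get('/horsepower?horsepower=130')
    print(response.data.decode())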

How do I avoid getting a sporadic KeyError: 'data' when using the Reddit API in python?

I have the following Python code that works OK to query Reddit's API and look up the front pages of different subreddits and their rising submissions.
from pprint import pprint
import requests
import json
import datetime
import csv
import time

subredditsToScan = ["Arts", "AskReddit", "askscience", "aww", "books", "creepy", "dataisbeautiful", "DIY", "Documentaries", "EarthPorn", "explainlikeimfive", "food", "funny", "gaming", "gifs", "history", "jokes", "LifeProTips", "movies", "music", "pics", "science", "ShowerThoughts", "space", "sports", "tifu", "todayilearned", "videos", "worldnews"]

ofilePosts = open('posts.csv', 'wb')
writerPosts = csv.writer(ofilePosts, delimiter=',')
ofileUrls = open('urls.csv', 'wb')
writerUrls = csv.writer(ofileUrls, delimiter=',')

for subreddit in subredditsToScan:
    front = requests.get(r'http://www.reddit.com/r/' + subreddit + '/.json')
    rising = requests.get(r'http://www.reddit.com/r/' + subreddit + '/rising/.json')
    front.text
    rising.text
    risingData = rising.json()
    frontData = front.json()
    print(len(risingData['data']['children']))
    print(len(frontData['data']['children']))
    for i in range(0, len(risingData['data']['children'])):
        author = risingData['data']['children'][i]['data']['author']
        score = risingData['data']['children'][i]['data']['score']
        subreddit = risingData['data']['children'][i]['data']['subreddit']
        gilded = risingData['data']['children'][i]['data']['gilded']
        numOfComments = risingData['data']['children'][i]['data']['num_comments']
        linkUrl = risingData['data']['children'][i]['data']['permalink']
        timeCreated = risingData['data']['children'][i]['data']['created_utc']
        writerPosts.writerow([author, score, subreddit, gilded, numOfComments, linkUrl, timeCreated])
        writerUrls.writerow([linkUrl])
    for j in range(0, len(frontData['data']['children'])):
        author = frontData['data']['children'][j]['data']['author'].encode('utf-8').strip()
        score = frontData['data']['children'][j]['data']['score']
        subreddit = frontData['data']['children'][j]['data']['subreddit'].encode('utf-8').strip()
        gilded = frontData['data']['children'][j]['data']['gilded']
        numOfComments = frontData['data']['children'][j]['data']['num_comments']
        linkUrl = frontData['data']['children'][j]['data']['permalink'].encode('utf-8').strip()
        timeCreated = frontData['data']['children'][j]['data']['created_utc']
        writerPosts.writerow([author, score, subreddit, gilded, numOfComments, linkUrl, timeCreated])
        writerUrls.writerow([linkUrl])
It works well and scrapes the data accurately, but it constantly gets interrupted, seemingly at random, and crashes at runtime, saying:
Traceback (most recent call last):
  File "dataGather1.py", line 27, in <module>
    for i in range(0, len(risingData['data']['children'])):
KeyError: 'data'
I have no idea why this error is occurring on and off rather than consistently. I thought maybe I was calling the API too often and it was blocking my access, so I threw a sleep into my code, but that did not help. Any ideas?
When the API response contains no data, there is no 'data' key in the dictionary, so you get a KeyError for some subreddits. You need to use a try/except.
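A minimal sketch of that approach, assuming it sits inside the existing for subreddit loop and you simply want to skip a subreddit whose response is missing the key:
try:
    children = risingData['data']['children']
except KeyError:
    # No 'data' key in this response (rate limiting, error page, ...):
    # skip this subreddit and move on.
    continue
for i in range(0, len(children)):
    author = children[i]['data']['author']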
The JSON you are parsing doesn't contain the 'data' element, hence the error. I think your hunch is correct, though: it is probably rate limiting, or you're asking for hidden/deleted entries.
Reddit is very strict about accessing their API without playing nice. That means you should register your app and send a meaningful User-Agent with your requests, and you should probably use the Python library for this kind of thing: https://praw.readthedocs.io/en/latest/
Without registering, in my experience the direct REST Reddit API is even stricter than the one request per two seconds rule they have (had?).
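For illustration, a sketch of sending a descriptive User-Agent with requests (the header value is just a placeholder to adapt):
import requests

# A descriptive User-Agent identifying your script and account.
headers = {'User-Agent': 'subreddit-scraper/0.1 by your_reddit_username'}
front = requests.get('http://www.reddit.com/r/AskReddit/.json', headers=headers)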
Python raises a KeyError whenever a dict object is indexed (using the form a = adict[key]) and the key is not in the dictionary.
It seems that when you are getting this error, your response simply has no 'data' entry.
You might check for the key before you execute the for loop; if it's missing, the loop simply won't run. Some error checking here might help.
if 'data' in risingData:
    size = len(risingData['data']['children'])
    for i in range(0, size):
        …
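An equivalent guard uses dict.get, which avoids indexing a missing key entirely (the field lookup in the loop body mirrors the original code):
children = risingData.get('data', {}).get('children', [])
for child in children:
    author = child['data']['author']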

"TypeError: list indices must be integers, not str" in JSON

OK, so I am having trouble getting my code to work. My goal is to make a Reddit bot that refers to Steam's appid JSON to link users to the Steam store page when a user says the name of a game.
The bot is almost complete, however, I keep getting "TypeError: list indices must be integers, not str" when the bot runs.
Here is my code:
import praw
import time
import json
import codecs

# Death Zone /// I hope you have coffee, because you won't leave until this is done
with open('API.json', encoding='utf-8-sig') as steam_strings:
    dic = json.loads(steam_strings.read())
    print("Successfully read JSON")
a = dic.get('appid')
n = dic.get('name')
[app['name'] for app in dic['applist']['apps']['app']]
# End Death Zone

app_id = 'CENSORED'
app_secret = 'CENSORED'
app_uri = 'https://127.0.0.1:65010/authorize_callback'
app_ua = 'Stop asking me how to get the Windows flair dummy, I am here for that reason'
app_scopes = 'account creddits edit flair history identity livemanage modconfig modcontributors modflair modlog modothers modposts modself modwiki mysubreddits privatemessages read report save submit subscribe vote wikiedit wikiread'
app_account_code = 'CENSORED'
app_refresh = 'CENSORED'

import praw

def login():
    r = praw.Reddit(app_ua)
    r.set_oauth_app_info(app_id, app_secret, app_uri)
    r.refresh_access_information(app_refresh)
    print("Steam Link Bot! Version Alpha 0.1.2")
    return r

r = login()

words_to_reply = dic['applist']['apps']['app']['name']
# {'applist':1,'apps':2, 'app':3, 'name':4}
cache = []

def run_bot():
    subreddit = r.get_subreddit("eegras")
    comments = subreddit.get_comments(limit=100)
    for comment in comments:
        comment_text = comment.body.lower()
        isMatch = any(string in comment_text for string in words_to_reply)
        if comment.id not in cache and isMatch:
            comment.reply(['applist']['apps']['app']['appid'])
            cache.append(comment.id)
            print("I replied to a comment successfully!")

while True:
    run_bot()
    time.sleep(10)
Any help would be appreciated; I'm kind of a beginner at Python, so take it easy.
This type of error is raised when a list is indexed with a string; unlike dictionaries, lists can only be indexed by integers.
If possible, point out the line where the error occurs. You can also check the type of the data by printing it with type() and verifying that it really is a dictionary. Make sure the JSON really is structured as nested dictionaries, or whether there are lists inside.
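For instance, a sketch of that check, assuming Steam's app list JSON where 'app' is a list of {'appid': ..., 'name': ...} dictionaries:
print(type(dic['applist']['apps']['app']))  # likely <class 'list'>

# A list can't be indexed with a string like ['name'];
# iterate it (or index it by position) instead:
words_to_reply = [app['name'] for app in dic['applist']['apps']['app']]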

Biopython pubmed lookup - "No connection could be made because the target machine actively refused it" error 10061

I'm trying to retrieve the IDs for a specific set of keywords from PubMed using the following standard code:
import os
from Bio import Entrez
from Bio import Medline

# Defining keyword file
keywords_file = "D:\keywords.txt"

# Splitting keyword file into a list
keyword_list = []
keyword_list = open(keywords_file).read().split(',')
#print keyword_list

# Creating folders by keywords and creating a text file of the same keyword in each folder
for item in keyword_list:
    create_file = item + '.txt.'
    path = r"D:\Thesis" + '\\' + item
    #print path
    if not os.path.exists(path):
        os.makedirs(path)
    #print os.getcwd()
    os.chdir(path)
    f = open(item + '.txt', 'a')
    f.close()

# Using biopython to fetch ids of the keyword searches
limit = 10
def fetch_ids(keyword, limit):
    for item in keyword:
        print item
        print "Fetching search for " + item + "\n"
        #os.environ['http_proxy'] = '127.0.0.1:13828'
        Entrez.email = 'A.N.Other@example.com'
        search = Entrez.esearch(db='pubmed', retmax=limit, term='"' + item + '"')
        print term
        result = Entrez.read(search)
        ids = result['IdList']
        #print len(ids)
        return ids

print fetch_ids(keyword_list, limit)
id_res = fetch_ids(keyword_list, limit)
print id_res

def write_ids_in_file(id_res):
    with open(item + '.txt', 'w') as temp_file:
        temp_file.write('\n'.join(ids))
        temp_file.close()

write_ids_in_file(id_res)
In a nutshell, what I'm trying to do is create folders with the same names as the keywords, create a text file within each folder, fetch the IDs from PubMed through the code, and save the IDs in the text files. My program worked fine when I initially tested it; however, after a couple of tries it started throwing the "target machine actively refused connection" error. Some more details that could be useful:
Header information:
Host = 'eutils.ncbi.nlm.nih.gov'
Connection = 'close'
User-Agent = 'Python-urllib/2.7'
URL:
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?term=%22inflammasome%22&retmax=10&db=pubmed&tool=biopython&email=A.N.Other%40example.com
host = '127.0.0.1:13828'
I know that this question has been asked many times, with the response that the port is not listening, but what I want to know is: if this is my issue as well, how do I get the application to work on this specific port? I've already gone to my firewall settings and opened port 13828, but I'm not sure what to do beyond this. If this is not the case, what could be a workaround?
Thanks!
You need search.close() after result = Entrez.read(search). Check the official instructions here: http://biopython.org/DIST/docs/api/Bio.Entrez-module.html
Shutting down a port or TCP connection when too many connections are left open is normal behavior for a public website.
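A minimal sketch of that pattern inside the fetch loop, unchanged from the question's code except for the added close() call:
search = Entrez.esearch(db='pubmed', retmax=limit, term='"' + item + '"')
result = Entrez.read(search)
search.close()  # release the handle so connections don't pile up and get refused
ids = result['IdList']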

I need help using the library.add_album feature of pylast (python last.fm api wrapper)

I am trying to access the library class of pylast, but I must be doing something wrong. I can get most other features to work. The following code takes the standard working example and adds what I believe to be the correct way of adding an album to my last.fm library:
import pylast

# You have to have your own unique two values for API_KEY and API_SECRET
# Obtain yours from http://www.last.fm/api/account for Last.fm
API_KEY = "80a1c765efb52869575821c03d93a30e"  # this is a sample key
API_SECRET = "2ba567f5b0d74c6cc6a8d07ef2cbc2d"

# In order to perform a write operation you need to authenticate yourself
username = "astroid0"
password_hash = pylast.md5("xxx")

network = pylast.LastFMNetwork(api_key=API_KEY, api_secret=API_SECRET,
                               username=username, password_hash=password_hash)

# now you can use that object everywhere
artist = network.get_artist("System of a Down")
artist.shout("<3")
track = network.get_track("Iron Maiden", "The Nomad")
track.love()
track.add_tags(("awesome", "favorite"))

## This is the area causing trouble
library1 = pylast.Library(user="astroid0", network="LastFM")
album1 = network.get_album("The Rolling Stones", "Sticky Fingers")
library1.add_album(album1)
I am new to Python, so I am sorry if this is obvious. I have just been stuck for days now and decided to ask.
It's a bug in pylast.
Line 1957 (from trunk) should be:
params["artist"] = album.get_artist().get_name()
instead of:
params["artist"] = album.get_artist.get_name()
You can report the issue to the author here.
The answer by miles82 shows the bug, and it's been reported to pylast.
Unfortunately there have been no updates in a few years, so I've fixed this in my fork of pylast.
