I have this simple Python code, which fetches the content of a URL and stores the result as a JSON text file named "file", but it keeps returning an empty result.
What am I doing wrong here? It is just a simple piece of code; I am so disappointed.
I have included all the needed imports: import facebook, import requests, and import json.
url ="https://graph.facebook.com/search?limit=5000&type=page&q=%26&access_token=xx&__after_id=139433456868"
content = requests.get(url).json()
file = open("file.txt" , 'w')
file.write(json.dumps(content, indent=1))
file.close()
but it keeps returning an empty result. What am I missing here?
here is the result:
"data": []
any help please?
It's working fine:
import urllib2
accesstoken="CAACEdEose0cBACF6HpTDEuVEwVnjx1sHOJFS3ZBQZBsoWqKKyFl93xwZCxKysqsQMUgOZBLjJoMurSxinn96pgbdfSYbyS9Hh3pULdED8Tv255RgnsYmnlxvR7JZCN7X25zP6fRnRK0ZCNmChfLPejaltwM2JGtPGYBQwnmAL9tQBKBmbZAkGYCEQHAbUf7k1YZD"
urllib2.urlopen("https://graph.facebook.com/search?limit=5000&type=page&q=%26&access_token="+accesstoken+"&__after_id=139433456868").read()
I think you have not requested an access token before making the request.
How do you find the access token?
def getSecretToken(verification_code):
    token_url = ("https://graph.facebook.com/oauth/access_token?" +
                 "client_id=" + app_id +
                 "&redirect_uri=" + my_url +
                 "&client_secret=" + app_secret +
                 "&code=" + verification_code)
    response = requests.get(token_url).content
    params = {}
    result = response.split("&", 1)
    print result
    for p in result:
        (k, v) = p.split("=")
        params[k] = v
    return params['access_token']
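As an aside, the manual splitting above can be done by the standard library; a small sketch with a hypothetical response body of the shape Facebook returns (Python 3 spelling shown; in Python 2 the same function lives in urlparse.parse_qs):

```python
from urllib.parse import parse_qs

# Hypothetical token-exchange response body; the real values come from Facebook.
body = "access_token=ABC123&expires=5183814"

# parse_qs maps each key to a list of values; take the first of each.
params = {k: v[0] for k, v in parse_qs(body).items()}
print(params['access_token'])  # prints ABC123
```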
How do you get that verification code?
verification_code = ""
if "code" in request.query:
    verification_code = request.query["code"]
if not verification_code:
    dialog_url = ("http://www.facebook.com/dialog/oauth?" +
                  "client_id=" + app_id +
                  "&redirect_uri=" + my_url +
                  "&scope=publish_stream")
    return "<script>top.location.href='" + dialog_url + "'</script>"
else:
    access_token = getSecretToken(verification_code)
Python 3.7.2
PyCharm
I'm fairly new to Python and API interaction. I'm trying to loop through the API for Rocket Chat, specifically pulling user email addresses out.
Unlike nearly every example I can find, Rocket Chat doesn't use any kind of construct like "next"; it uses count and offset, which I had actually thought might make this easier.
I have managed to get the first part of this working: looping over the JSON and getting the emails. What I need to do is loop through the API endpoints, which is what I have run into some issues with.
I have looked at this answer, Unable to loop through paged API responses with Python, as it seemed to be pretty close to what I want, but I couldn't get it to work correctly.
The code below is what I have right now; obviously this isn't doing any looping through the API endpoint just yet, it's just looping over the returned JSON.
import os
import csv
import requests
import json

url = "https://rocketchat.internal.net"
login = "/api/v1/login"
rocketchatusers = "/api/v1/users.list"
#offset = "?count=500&offset=0"

class API:
    def userlist(self, userid, token):
        headers = {'X-Auth-Token': token, 'X-User-Id': userid}
        rocketusers = requests.get(url + rocketchatusers, headers=headers, verify=False)
        print('Status Code:' + str(rocketusers.status_code))
        print('Content Type:' + rocketusers.headers['content-type'])
        userlist = json.loads(rocketusers.text)
        x = 0
        y = 0
        emails = open('emails', 'w')
        while y == 0:
            try:
                for i in userlist:
                    print(userlist['users'][x]['emails'][0]['address'], file=emails)
                    # print(userlist['users'][x]['emails'][0]['address'])
                    x += 1
            except KeyError:
                print("This user has no email address", file=emails)
                x += 1
            except IndexError:
                print("End of List")
                emails.close()
                y += 1
What I have tried, and what I would like to do, is something along the lines of a simple for loop. There are realistically probably a lot of ways to do what I'm after; I just don't know them.
Something like this:
import os
import csv
import requests
import json

url = "https://rocketchat.internal.net"
login = "/api/v1/login"
rocketchatusers = "/api/v1/users.list"
offset = "?count=500&offset=" + p
p = 0

class API:
    def userlist(self, userid, token):
        headers = {'X-Auth-Token': token, 'X-User-Id': userid}
        rocketusers = requests.get(url + rocketchatusers + offset, headers=headers, verify=False)
        for r in rocketusers:
            print('Status Code:' + str(rocketusers.status_code))
            print('Content Type:' + rocketusers.headers['content-type'])
            userlist = json.loads(rocketusers.text)
            x = 0
            y = 0
            emails = open('emails', 'w')
            while y == 0:
                try:
                    for i in userlist:
                        print(userlist['users'][x]['emails'][0]['address'], file=emails)
                        # print(userlist['users'][x]['emails'][0]['address'])
                        x += 1
                except KeyError:
                    print("This user has no email address", file=emails)
                    x += 1
                except IndexError:
                    print("End of List")
                    emails.close()
                    y += 1
                    p += 500
Now, obviously this doesn't work, or I'd not be posting, but the why of it not working is the issue.
The error that gets reported is that I can't concatenate an int where a str is expected. OK, fine. When I attempt something like:
str(p = 0)
I get a TypeError. I have tried a lot of other things as well, many of them simply silly, such as p = [], p = {}, and other more radical ideas.
The URL, with all the variables concatenated, would look something like this:
https://rocketchat.internal.net/api/v1/users.list?count=500&offset=0
https://rocketchat.internal.net/api/v1/users.list?count=500&offset=500
https://rocketchat.internal.net/api/v1/users.list?count=500&offset=1000
https://rocketchat.internal.net/api/v1/users.list?count=500&offset=1500
I feel like there is something really simple that I'm missing. I'm reasonably sure that the answer is in the responses to the post I linked above, but I couldn't get it to work.
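For what it's worth, the int-vs-str concatenation problem disappears entirely if the query string is built by the standard library instead of by hand; a small sketch producing the kind of URLs listed above (urlencode stringifies the values itself, and requests.get(url, params=...) does the same thing internally):

```python
from urllib.parse import urlencode

base = "https://rocketchat.internal.net/api/v1/users.list"
for p in range(0, 2000, 500):
    # urlencode converts the int offset to a string automatically.
    print(base + "?" + urlencode({'count': 500, 'offset': p}))
```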
So, after asking around, I found out that I had been on the right path to figuring this issue out; I had just tried it in the wrong place. Here's what I ended up with:
def userlist(self, userid, token):
    p = 0
    while p <= 7500:
        if not os.path.exists('./emails'):
            headers = {'X-Auth-Token': token, 'X-User-Id': userid}
            rocketusers = requests.get(url + rocketchatusers + offset + str(p), headers=headers, verify=False)
            print('Status Code:' + str(rocketusers.status_code))
            print('Content Type:' + rocketusers.headers['content-type'])
            print('Creating the file "emails" to use to compare against list of regulated users.')
            print(url + rocketchatusers + offset + str(p))
            userlist = json.loads(rocketusers.text)
            x = 0
            y = 0
            emails = open('emails', 'a+')
            while y == 0:
                try:
                    for i in userlist:
                        #print(userlist['users'][x]['emails'][0]['address'], file=emails)
                        print(userlist['users'][x]['ldap'], file=emails)
                        print(userlist['users'][x]['username'], file=emails)
                        x += 1
                except KeyError:
                    x += 1
                except IndexError:
                    print("End of List")
                    emails.close()
                    p += 50
                    y += 1
        else:
            headers = {'X-Auth-Token': token, 'X-User-Id': userid}
            rocketusers = requests.get(url + rocketchatusers + offset + str(p), headers=headers, verify=False)
            print('Status Code:' + str(rocketusers.status_code))
            print('Content Type:' + rocketusers.headers['content-type'])
            print('Populating file "emails" - this takes a few moments, please be patient.')
            print(url + rocketchatusers + offset + str(p))
            userlist = json.loads(rocketusers.text)
            x = 0
            z = 0
            emails = open('emails', 'a+')
            while z == 0:
                try:
                    for i in userlist:
                        #print(userlist['users'][x]['emails'][0]['address'], file=emails)
                        print(userlist['users'][x]['ldap'], file=emails)
                        print(userlist['users'][x]['username'], file=emails)
                        x += 1
                except KeyError:
                    x += 1
                except IndexError:
                    print("End of List")
                    emails.close()
                    p += 50
                    z += 1
This is still a work in progress. Unfortunately, this isn't an avenue for collaboration; later I may post this to GitHub so that others can see it.
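For comparison, the count/offset loop above can be condensed into one helper. This is only an illustrative sketch: the /api/v1/users.list path and header names match the script above, but the 'users' and 'total' response fields are assumptions about the endpoint's JSON shape, and the get parameter exists only so the function can be exercised without a live server:

```python
import requests

def fetch_all_users(base_url, headers, count=100, get=requests.get):
    """Page through a count/offset endpoint until every record is seen."""
    users = []
    offset = 0
    while True:
        resp = get(base_url + "/api/v1/users.list",
                   headers=headers,
                   params={'count': count, 'offset': offset})
        data = resp.json()
        batch = data.get('users', [])
        users.extend(batch)
        offset += len(batch)
        # Stop on an empty page or once the reported total is reached.
        if not batch or offset >= data.get('total', 0):
            break
    return users
```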
The script I have exports all users, but I am looking to export only the users who have type = xyz. There are two types of users in the directory, type A and type B, and I only want to export users whose type attribute matches B.
Please help me add a clause/statement to the script so it only pulls users with type "B" and ignores users with any other type.
import requests
import json
import re
import sys
import csv
orgName = ""
apiKey = ""
api_token = "SSWS "+ apiKey
headers = {'Accept':'application/json','Content-Type':'application/json','Authorization':api_token}
def GetPaginatedResponse(url):
    response = requests.request("GET", url, headers=headers)
    returnResponseList = []
    responseJSON = json.dumps(response.json())
    responseList = json.loads(responseJSON)
    returnResponseList = returnResponseList + responseList
    if "errorCode" in responseJSON:
        print "\nYou encountered following Error: \n"
        print responseJSON
        print "\n"
        return "Error"
    else:
        headerLink = response.headers["Link"]
        while str(headerLink).find("rel=\"next\"") > -1:
            linkItems = str(headerLink).split(",")
            nextCursorLink = ""
            for link in linkItems:
                if str(link).find("rel=\"next\"") > -1:
                    nextCursorLink = str(link)
            nextLink = str(nextCursorLink.split(";")[0]).strip()
            nextLink = nextLink[1:]
            nextLink = nextLink[:-1]
            url = nextLink
            response = requests.request("GET", url, headers=headers)
            responseJSON = json.dumps(response.json())
            responseList = json.loads(responseJSON)
            returnResponseList = returnResponseList + responseList
            headerLink = response.headers["Link"]
        returnJSON = json.dumps(returnResponseList)
        return returnResponseList
def DownloadSFUsers():
    url = "https://"+orgName+".com/api/v1/users"
    responseJSON = GetPaginatedResponse(url)
    if responseJSON != "Error":
        userFile = open("Only-Okta_Users.csv", "wb")
        writer = csv.writer(userFile)
        writer.writerow(["login","type"])
        for user in responseJSON:
            login = user[u"profile"][u"login"]
            type = user[u"credentials"][u"provider"][u"type"]
            row = ("+login+","+type).encode('utf-8')
            writer.writerow([login,type])

if __name__ == "__main__":
    DownloadSFUsers()
Wrap your statement that writes a user to the csv file in an if statement that tests for the correct type.
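For example, a minimal sketch of that filter, using a hypothetical in-memory sample shaped like the Okta user objects above (only the if test is the point; in the real script it wraps the existing writer.writerow call):

```python
import csv
import io

# Hypothetical users shaped like the API response in the script above.
users = [
    {"profile": {"login": "alice"}, "credentials": {"provider": {"type": "B"}}},
    {"profile": {"login": "bob"},   "credentials": {"provider": {"type": "A"}}},
]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["login", "type"])
for user in users:
    login = user["profile"]["login"]
    utype = user["credentials"]["provider"]["type"]
    if utype == "B":  # skip every user whose provider type is not "B"
        writer.writerow([login, utype])

print(out.getvalue())
```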
I'm running into an intermittent issue when I run the code below. I'm trying to collect all the page_tokens in the ajax calls made by pressing the "load more" button if it exists. Basically, I'm trying to get all the page tokens from a YouTube Channel.
Sometimes it will retrieve the tokens, and other times it doesn't. My best guess is that I either made a mistake in my find_embedded_page_token function or that I need some sort of delay/sleep inserted somewhere.
Below is the full code:
import requests
import pprint
import urllib.parse
import lxml
def find_XSRF_token(html, key, num_chars=2):
    pos_begin = html.find(key) + len(key) + num_chars
    pos_end = html.find('"', pos_begin)
    return html[pos_begin:pos_end]

def find_page_token(html, key, num_chars=2):
    pos_begin = html.find(key) + len(key) + num_chars
    pos_end = html.find('&', pos_begin)
    return html[pos_begin:pos_end]

def find_embedded_page_token(html, key, num_chars=2):
    pos_begin = html.find(key) + len(key) + num_chars
    pos_end = html.find('&', pos_begin)
    excess_str = html[pos_begin:pos_end]
    sep = '\\'
    rest = excess_str.split(sep, 1)[0]
    return rest
sxeVid = 'https://www.youtube.com/user/sxephil/videos'
ajaxStr = 'https://www.youtube.com/browse_ajax?action_continuation=1&continuation='
s = requests.Session()
r = s.get(sxeVid)
html = r.text
session_token = find_XSRF_token(html, 'XSRF_TOKEN', 4)
page_token = find_page_token(html, ';continuation=', 0)
print(page_token)
s = requests.Session()
r = s.get(ajaxStr+page_token)
ajxHtml = r.text
ajax_page_token = find_embedded_page_token(ajxHtml, ';continuation=', 0)
while page_token:
    ajxBtn = ajxHtml.find('data-uix-load-more-href=')
    if ajxBtn != -1:
        s = requests.Session()
        r = s.get(ajaxStr + ajax_page_token)
        ajxHtml = r.text
        ajax_page_token = find_embedded_page_token(ajxHtml, ';continuation=', 0)
        print(ajax_page_token)
    else:
        break
This is what comes back randomly, which is unexpected: it's pulling not just the token, but also the HTML after the desired cutoff.
4qmFsgJAEhhVQ2xGU1U5X2JVYjRSYzZPWWZUdDVTUHcaJEVnWjJhV1JsYjNNZ0FEZ0JZQUZxQUhvQk1yZ0JBQSUzRCUzRA%253D%253D"><span class="yt-uix-button-content"> <span class="load-more-loading hid">
<span class="yt-spinner">
<span class="yt-spinner-img yt-sprite" title="Loading icon"></span>
The response I'm expecting is this:
4qmFsgJAEhhVQ2xGU1U5X2JVYjRSYzZPWWZUdDVTUHcaJEVnWjJhV1JsYjNNZ0FEZ0JZQUZxQUhvQk1yZ0JBQSUzRCUzRA%253D%253D
4qmFsgJAEhhVQ2xGU1U5X2JVYjRSYzZPWWZUdDVTUHcaJEVnWjJhV1JsYjNNZ0FEZ0JZQUZxQUhvQk5MZ0JBQSUzRCUzRA%253D%253D
4qmFsgJAEhhVQ2xGU1U5X2JVYjRSYzZPWWZUdDVTUHcaJEVnWjJhV1JsYjNNZ0FEZ0JZQUZxQUhvQk5iZ0JBQSUzRCUzRA%253D%253D
4qmFsgJAEhhVQ2xGU1U5X2JVYjRSYzZPWWZUdDVTUHcaJEVnWjJhV1JsYjNNZ0FEZ0JZQUZxQUhvQk5yZ0JBQSUzRCUzRA%253D%253D
4qmFsgJAEhhVQ2xGU1U5X2JVYjRSYzZPWWZUdDVTUHcaJEVnWjJhV1JsYjNNZ0FEZ0JZQUZxQUhvQk43Z0JBQSUzRCUzRA%253D%253D
4qmFsgJAEhhVQ2xGU1U5X2JVYjRSYzZPWWZUdDVTUHcaJEVnWjJhV1JsYjNNZ0FEZ0JZQUZxQUhvQk9MZ0JBQSUzRCUzRA%253D%253D
4qmFsgJAEhhVQ2xGU1U5X2JVYjRSYzZPWWZUdDVTUHcaJEVnWjJhV1JsYjNNZ0FEZ0JZQUZxQUhvQk9iZ0JBQSUzRCUzRA%253D%253D
Any help is greatly appreciated. Also, if my tags are wrong, let me know what tags to +/-.
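One hedged guess at the failure mode: find_embedded_page_token slices up to the first '&', but in the stray output above the token is terminated by a quote character instead, so the slice runs on into the markup. A sketch of a stricter extractor that only accepts characters which can appear in the URL-encoded token (the character class is an assumption based on the sample tokens above):

```python
import re

def extract_token(html, key=';continuation='):
    """Return the run of token characters immediately after `key`, or None.

    Stops at the first character outside the URL-encoded-base64 alphabet,
    so trailing HTML like '"><span ...>' is never included.
    """
    m = re.search(re.escape(key) + r'([A-Za-z0-9_%=-]+)', html)
    return m.group(1) if m else None
```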
I am a beginner with Python. I am the developer of the Easy APIs Project (http://gcdc2013-easyapisproject.appspot.com) and was doing a Python implementation of a weather API using my project. Visit http://gcdc2013-easyapisproject.appspot.com/APIs_Doc.html to see the Weather API. Below is my implementation, but it returns an HTTPError: HTTP Error 400: Bad Request.
import urllib2

def celsius(a):
    responsex = urllib2.urlopen('http://gcdc2013-easyapisproject.appspot.com/unitconversion?q='+a+' in celsius')
    htmlx = responsex.read()
    responsex.close()
    htmlx = html[1:] #remove first {
    htmlx = html[:-1] #remove last }
    htmlx = html.split('}{') #split and put each result into an array
    return str(htmlx[1])
print "Enter a city name:",
q = raw_input() #get word from user
response = urllib2.urlopen('http://gcdc2013-easyapisproject.appspot.com/weather?q='+q)
html = response.read()
response.close()
html = html[1:] #remove first {
html = html[:-1] #remove last }
html = html.split('}{') #split and put each result into an array
print "Today weather is " + html[1]
print "Temperature is " + html[3]
print "Temperature is " + celsius(html[3])
Please help me.
The query string should be quoted using urllib.quote or urllib.quote_plus:
import urllib
import urllib2

def celsius(a):
    responsex = urllib2.urlopen('http://gcdc2013-easyapisproject.appspot.com/unitconversion?q=' + urllib.quote(a + ' in celsius'))
    html = responsex.read()
    responsex.close()
    html = html[1:] #remove first {
    html = html[:-1] #remove last }
    html = html.split('}{') #split and put each result into an array
    return html[0]

print "Enter a city name:",
q = raw_input() #get word from user
response = urllib2.urlopen('http://gcdc2013-easyapisproject.appspot.com/weather?q='+urllib.quote(q))
html = response.read()
print repr(html)
response.close()
html = html[1:] #remove first {
html = html[:-1] #remove last }
html = html.split('}{') #split and put each result into an array
print "Today weather is " + html[1]
print "Temperature is " + html[3]
print "Temperature is " + celsius(html[3].split()[0])
In addition to that, I modified celsius to use html instead of htmlx; the original code mixed the use of html and htmlx.
I have found the answer. The query should be quoted with urllib2.quote(q)
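To see why quoting matters here: a bare space is not legal in a URL query string, which is what triggers the 400. A quick Python 3 illustration (urllib.quote / urllib2 above are the Python 2 spellings; in Python 3 these helpers live in urllib.parse):

```python
from urllib.parse import quote, quote_plus

q = "New York in celsius"  # hypothetical query containing spaces
print(quote(q))       # New%20York%20in%20celsius
print(quote_plus(q))  # New+York+in+celsius
```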
After much experimenting and googling, the following Python code successfully calls Google's Search API, but it only returns 4 results: after reading the Google Search API docs, I thought 'start=' would return additional results, but this does not happen.
Can anyone give pointers? Thanks.
Python code:
#!/usr/bin/python
import urllib
import simplejson
query = urllib.urlencode({'q' : 'site:example.com'})
url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s&start=50' \
% (query)
search_results = urllib.urlopen(url)
json = simplejson.loads(search_results.read())
results = json['responseData']['results']
for i in results:
    print i['title'] + ": " + i['url']
The start option doesn't give you more results; it just moves you forward that many results in the result set. Think of the results as a queue: starting at 50 gives you results 50, 51, 52, and 53.
With this you can get more results by starting at every 4th result:
import urllib
import simplejson
num_queries = 50*4
query = urllib.urlencode({'q' : 'example'})
url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query
for start in range(0, num_queries, 4):
    request_url = '{0}&start={1}'.format(url, start)
    search_results = urllib.urlopen(request_url)
    json = simplejson.loads(search_results.read())
    results = json['responseData']['results']
    for i in results:
        print i['title'] + ": " + i['url']