404 error while doing an api call to Reddit - python

According to their documentation:
This should be enough to get the hottest new reddit submissions:
r = client.get(r'http://www.reddit.com/api/hot/', data=user_pass_dict)
But it doesn't and I get a 404 error. Am I getting the url for data request wrong?
http://www.reddit.com/api/login works though.

Your question specifically asks what you need to do to get the "hottest new" submissions. "Hottest new" doesn't really make sense as there is the "hot" view and a "new" view. The URLs for those two views are http://www.reddit.com/hot and http://www.reddit.com/new respectively.
To make those URLs more code-friendly, you can append .json to the end of the URL (any reddit URL for that matter) to get a json-representation of the data. For instance, to get the list of "hot" submissions make a GET request to http://www.reddit.com/hot.json.
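The .json suffix can be used directly from plain Python. A minimal sketch (the helper function below is just for illustration; note that reddit expects a descriptive User-Agent header on real requests):

```python
def listing_url(view):
    """Build the JSON listing URL for a reddit view such as 'hot' or 'new'."""
    return "http://www.reddit.com/{}.json".format(view)

print(listing_url("hot"))  # http://www.reddit.com/hot.json

# With the requests library you would then fetch it like this:
#   import requests
#   resp = requests.get(listing_url("hot"),
#                       headers={"User-Agent": "my-script/0.1"})
#   titles = [c["data"]["title"] for c in resp.json()["data"]["children"]]
```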
For completeness, in your example you attempt to pass in data=user_pass_dict. That's definitely not going to work the way you expect it to. While logging in is not necessary for what you want to do, if you need more complicated access to reddit's API from Python, I strongly suggest using PRAW. With PRAW you can iterate over the "hot" submissions via:
import praw

r = praw.Reddit('<REPLACE WITH A UNIQUE USER AGENT>')
for submission in r.get_frontpage():
    # do something with the submission
    print(vars(submission))

According to the docs, use /hot rather than /api/hot:
r = client.get(r'http://www.reddit.com/hot/', data=user_pass_dict)

Related

Python Product Tag Update using Shopify API - 400 Error - Product :Required Parameter Missing Or Invalid

Just built my first Shopify store and wanted to use python and API to bulk-update product tags on all our products.
However, I've run into a snag on my PUT call. I keep getting a 400 Response with the message '{"errors":{"product":"Required parameter missing or invalid"}}'.
I've seen other people with similar issues on here but none of their solutions seem to be working for me.
Has anyone else run into this problem that can help me figure it out?
Here are some screenshots of my code, a printout of the payload, and the error in the response.
Code, Payload and Response:
I can successfully get the product info using the API. Originally I was sending back the entire JSON payload that was returned, updated with my product tags.
To narrow potential pain points, I'm keeping it simple now and just including "id" and "tags" in the payload but still get the same error.
We figured it out! It turns out that when we initially created the store, we used a domain store name that was misspelled, so we created a new domain store name with the correct spelling. However, the original domain store name was never deleted (I'm not even sure it can be). Even though the new domain store name forwards to the original one and allows GET requests, for PUT requests we had to use the original, misspelled domain store name; with that, the PUTs work fine.
I figured this out by using fiddler to capture a manual product update via the Shopify website to see what the payload looked like and that's why I noticed the store URL was different than the one we were using.
Hope this helps someone with a similar issue!
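For reference, a minimal sketch of the kind of PUT described above. The store name, product id, token, and API version below are all placeholders; per Shopify's REST Admin API conventions, the product fields are wrapped in a top-level "product" object, and the URL must use the store's original myshopify domain:

```python
import json

STORE = "original-store-name"   # the store's *original* myshopify subdomain
PRODUCT_ID = 1234567890         # hypothetical product id
TOKEN = "shpat_placeholder"     # hypothetical admin API access token

# Versioned REST Admin API path (version string is a placeholder)
url = "https://{}.myshopify.com/admin/api/2023-04/products/{}.json".format(
    STORE, PRODUCT_ID)

# Keep the payload minimal: just "id" and "tags", wrapped in "product"
payload = {"product": {"id": PRODUCT_ID, "tags": "sale, new-arrival"}}

print(json.dumps(payload))

# Sending it (sketch):
#   import requests
#   resp = requests.put(url, json=payload,
#                       headers={"X-Shopify-Access-Token": TOKEN})
```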

Python use API to get data

I am kind of stuck trying to solve the following issue: I am trying to access a web page in order to get some data for a supplier (I need to do it for work) in an automated way, using an API.
The API is at https://wl-api.mf.gov.pl and provides information, stored as JSON, for a supplier, who can be found via their tax ID.
I use the requests package and manage to get a positive response:
import requests

nip = 7393033097
response = requests.get("https://wl-api.mf.gov.pl")
print(response)  # --> <Response [200]>
If I click on the link and scroll until I find the specific part for the tax information, I find the following line
GET /api/search/nip/{nip}
So what I did was add this path to my request, as this is how I understood it, and here is the point where I think I am wrong:
response = requests.get("https://wl-api.mf.gov.pl/search/7393033097/{7393033097}")
However, I cannot access it.
Am I doing something wrong - I do believe yes - and can anyone give me a little help :)
Update: If I check the requirements / documentation I find following information where I need a bit support to implement it
GET /api/search/nip/{nip}
(nip?date)
Single entity search by nip
**Path parameters**
nip (required)
*Path Parameter — Nip*
**Query parameters**
date (required)
*Query Parameter — format: date*
**Return type**
EntityResponse
Example data
Content-Type: application/json
I think this line: https://wl-api.mf.gov.pl/search/7393033097/{7393033097} should instead follow the documented path GET /api/search/nip/{nip}, together with the required date query parameter:
https://wl-api.mf.gov.pl/api/search/nip/7393033097?date=<YYYY-MM-DD>
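Putting the documentation excerpt above together, a minimal sketch of building the full request URL with the standard library (the date value here is just an example):

```python
from urllib.parse import urlencode

BASE = "https://wl-api.mf.gov.pl"
nip = "7393033097"                  # tax ID from the question
params = {"date": "2020-06-02"}     # 'date' is a required query parameter

# The documented path is /api/search/nip/{nip}
url = "{}/api/search/nip/{}?{}".format(BASE, nip, urlencode(params))
print(url)  # https://wl-api.mf.gov.pl/api/search/nip/7393033097?date=2020-06-02

# Fetching it (sketch):
#   import requests
#   data = requests.get(url).json()   # the documented EntityResponse, as JSON
```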

How to achieve this python post and get?

I've been trying for hours using requests and urllib. I'm so lost; Google didn't help me either. Some tips, or even anything, would be useful. Thank you.
Goals: Post country code and phone numbers, then get mobile carrier etc.
Problem: Not printing anything. Variable "name" prints out None.
def do_ccc(self):  # part of a bigger class
    """Phone Number Scan"""
    # prefix = input("Phone Number Prefix: ")
    # number = input("Phone Number: ")
    url = "https://freecarrierlookup.com/index.php"
    from bs4 import BeautifulSoup
    import urllib
    data = {'cc': "COUNTRY CODE",
            'phonenum': "PHONE NUMBER"}  # .encode('ascii')
    data = json.dump(data, sys.stdout)
    page = urllib.request.urlopen(url, data)
    soup = BeautifulSoup(page, 'html.parser')
    name = soup.find('div', attrs={'class': 'col-sm-6 col-md-8'})
    # ^^^ test (should print phone number)
    print(name)
As Zags pointed out, it is not a good idea to use a website in violation of their terms of service, especially when the site offers a cheap API.
But answering your original question:
You are using json.dump (which writes to sys.stdout and returns None) instead of json.dumps, so data ends up empty.
If you look at the page, you will see that the URL for POST requests is different, getcarrier.php instead of index.php.
You would also need to convert your str from json.dumps to bytes and even then the site will reject your calls, since a hidden token is added to each request submitted by the website to prevent automatic scraping.
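To illustrate the encoding point: urllib expects the POST body as URL-encoded bytes, not a JSON string. A sketch with placeholder values (as noted above, the site would still reject the request because of the hidden token):

```python
from urllib.parse import urlencode

data = {"cc": "1", "phonenum": "5551234567"}   # example values

# urllib.request.urlopen(url, data) needs bytes, not the str from json.dumps:
body = urlencode(data).encode("ascii")
print(body)  # b'cc=1&phonenum=5551234567'

# urllib.request.urlopen("https://freecarrierlookup.com/getcarrier.php", body)
# would then send a proper form-encoded POST (still rejected without the token).
```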
The problem is with what your code is trying to do. freecarrierlookup.com is just a website. In order to do what you want, you'd need to do web scraping, which is complicated, unmaintainable, and usually a violation of the site's terms of service.
What you need to do is find an API that provides the data you're looking for. A good API will usually have either sample code or a Python library that you can use to make requests.

How to get thread title using praw?

I'm using praw to write a bot for reddit. I already know how to use the get_comments() function to get all the comments in a subreddit. However, I would like to get the titles of all the posts in a subreddit, and after going through the docs for praw I could not find a function that does so.
I just want to go into a subreddit, and fetch all the titles of the posts, and then store them in an object.
Could someone tell me how I go around achieving this?
import praw

r = praw.Reddit('demo')
subreddit = r.get_subreddit('stackoverflow')
for submission in subreddit.get_hot(limit=10):
    print(submission.title)
This information is available in the PRAW documentation.
This is pretty late, but anyway: you can go through the official reddit API's JSON response format. From there you can see all the attributes that are available for a particular object.
Here's the github link for the reddit API
Edit: You can also use pprint(vars(object_name))

How to get all content posted by a Facebook Group using Graph API

I am very new to the Graph API and trying to write a simple python script that first identifies all pages that a user has liked and all groups that he/she is a part of. To do this, I used the following:
To get the groups he has joined:
API: /{user-id}/groups
Permissions req: user_groups
To get the pages he has liked:
API: /{user-id}/likes
Permissions req: user_likes
and
url = 'https://graph.facebook.com/' + userId + '/likes?access_token=' + accessToken + '&limit=' + str(limit)
Now that I can see the id's of the groups in the JSON output, I want to hit them one by one and fetch all content (posts, comments, photos etc.) posted within that group. Is this possible and if yes, how can I do it? What API calls do I have to make?
That's quite a broad question; before asking here you should have tried searching on SO.
Anyway, I'll tell you broadly how you can do it.
First of all go through the official documentation of Graph API: Graph API Reference.
You'll find every API that can be used to fetch data, for example /group and /page, and you'll get to know what kind of access token and permissions are required for each API call.
Here are some API calls useful to you-
to fetch the group/page's posts- /{group-id/page-id}/posts
to fetch the comments of a post- {post-id}/comments
to fetch the group/page's photos- /{group-id/page-id}/photos
and so on. Once you go through the documentation and test some API calls, things will be much clearer. It's quite easy!
Hope it helps. Good luck!
Here's an example using facepy:
from facepy import GraphAPI
import csv
import json

graph = GraphAPI(APP_TOKEN)  # APP_TOKEN is defined elsewhere
groupIDs = ("[id here]", "[etc]")
outfile_name = "teacher-groups-summary-export-data.csv"
f = csv.writer(open(outfile_name, "w", newline=""))
for gID in groupIDs:
    groupData = graph.get(gID + "/feed", page=True, retry=3, limit=500)
    for data in groupData:
        # DecimalEncoder is a custom json.JSONEncoder subclass defined elsewhere
        json_data = json.dumps(data, indent=4, cls=DecimalEncoder)
        data = json.loads(json_data)
        print("Paging group data...")
        for item in data["data"]:
            ...  # etc., dealing with items
Check the API reference. You should use feed.
You can use /{group-id}/feed to get an array of Post objects of the group. Remember to include a user access token for a member of the group.
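As a sketch of that call with plain Python (the group id and access token below are placeholders):

```python
from urllib.parse import urlencode

GROUP_ID = "1234567890"           # hypothetical group id
ACCESS_TOKEN = "EAAB_placeholder" # user access token for a member of the group

# /{group-id}/feed returns an array of Post objects under "data"
url = "https://graph.facebook.com/{}/feed?{}".format(
    GROUP_ID, urlencode({"access_token": ACCESS_TOKEN}))
print(url)

# With requests:
#   import requests
#   posts = requests.get(url).json()["data"]
```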
