Find a Facebook group ID via the Graph API - Python

I'm writing a script in Python to convert Facebook usernames to user IDs. To get a user ID, I simply request:
https://graph.facebook.com/USERNAME
Is there any way to access this for groups as well?
I have found other questions like this but none give a definitive answer.
Do I need to use OAuth?
I have seen several websites that do what I need, such as:
http://lookup-id.com/
https://www.wallflux.com/facebook_id/
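For context, this is roughly what my script already does for usernames (a sketch with the requests library; newer Graph API versions may also require an access token, and the username below is just a placeholder):

    import requests

    def lookup_id(name):
        # Same endpoint I use for usernames; I am asking whether a group
        # name/slug works here too. Newer Graph API versions may also
        # require an access_token parameter.
        resp = requests.get("https://graph.facebook.com/" + name)
        resp.raise_for_status()
        return resp.json().get("id")

    print(lookup_id("SOME_USERNAME"))  # hypothetical username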

Related

LinkedIn Marketing API Creative names

I am trying to use the Marketing Developer Platform API to pull reports for my campaigns.
I want to be able to break down my reports by campaign and then by creative name.
In the LinkedIn documentation (https://learn.microsoft.com/en-gb/linkedin/marketing/integrations/ads-reporting/ads-reporting#statistics-finder) they give examples of the statistics finder and say that it can pull up to 3 pivots.
This is the example they give:
GET https://api.linkedin.com/v2/adAnalyticsV2?q=statistics&pivots[0]=CAMPAIGN&dateRange.start.day=1&dateRange.start.month=1&dateRange.start.year=2017&timeGranularity=DAILY&campaigns[0]=urn:li:sponsoredCampaign:1234567
I can't seem to get it to work for more than 1 pivot.
Another issue I am facing is that I am not sure how to pull creative names - I can only seem to get creative IDs in my API calls.
I am using the examples from the following page to get the campaign name:
https://learn.microsoft.com/en-gb/linkedin/shared/references/v2/ads/adcampaigns?context=linkedin/marketing/context
Looking at the creative name equivalent:
https://learn.microsoft.com/en-gb/linkedin/shared/references/v2/ads/adcreatives?context=linkedin/marketing/context
I cannot seem to find a name field for creatives here. Am I looking in the wrong place?
The magic sequence to get multiple pivots is:
...q=statistics&pivots[0]=ACCOUNT&pivots[1]=CAMPAIGN&pivots[2]=CREATIVE&...
As for creative names, they do not 'simply' exist. There are different fields (variables/data) for each type of creative, and what you see in the UI depends on the type of campaign/creative displayed. For a simple Text Ad, it would be variables.data.title and variables.data.text. For the rest, you need to use projection to get specific fields from the URNs referenced.
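For what it's worth, a rough sketch of that multi-pivot call from Python with requests (the token, date range, and campaign URN are placeholders; note that requests percent-encodes the pivots[n] brackets, which the API normally accepts):

    import requests

    ACCESS_TOKEN = "YOUR_OAUTH2_TOKEN"  # placeholder

    params = {
        "q": "statistics",
        "pivots[0]": "ACCOUNT",
        "pivots[1]": "CAMPAIGN",
        "pivots[2]": "CREATIVE",
        "dateRange.start.day": 1,
        "dateRange.start.month": 1,
        "dateRange.start.year": 2017,
        "timeGranularity": "DAILY",
        "campaigns[0]": "urn:li:sponsoredCampaign:1234567",  # placeholder URN
    }

    resp = requests.get(
        "https://api.linkedin.com/v2/adAnalyticsV2",
        params=params,
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    resp.raise_for_status()
    # Each element holds the metrics for one pivot combination.
    for element in resp.json().get("elements", []):
        print(element)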

How to get list of categories in Google Books API?

I was searching for an already answered question about this but couldn't find one so please forgive me if I somehow missed it.
I'm using Google Books API and I know I can search a book by specific category.
My question is, how can I get all the available categories from the API?
I looked in the API documentation but couldn't find any mention of this.
The Google Books API does not have an endpoint for returning categories that are not associated with a specific book.
The Google Books API is only there to list books. You can:
search and browse through the list of books that match a given query.
view information about a book, including metadata, availability and price, and links to the preview page.
manage your own bookshelves.
You can see the categories of a given book, but you cannot get a list of all available categories in the whole system.
You may be interested to know this has been on their to-do list since 2012: category list
We have numerous requests for this and we're investigating how we can properly provide the data. One issue is Google does not own all the category information. "New York Times Bestsellers" is one obvious example. We need to first identify what we can publish through the API.
Workaround
I worked around it by implementing my own category list mechanism, so I can pull all the categories that exist in my app's database.
(Unfortunately, the newly announced ScriptDb deprecation means my whole system will go to waste in a couple of months anyway... but that's another story.)
https://support.google.com/books/partner/answer/3237055?hl=en
Scroll down to subject/genres and you will see this link.
https://bisg.org/page/bisacedition
This list is apparently a list of subjects, AKA categories, for North American books. I am making various GET requests with an API testing tool and getting, for the most part, perfect matches between whatever subject I choose from the BISG subjects list and what comes back in the JSON response under the "categories" key (you may have to drop a word from the query string, e.g. "criticism" instead of "literary criticism").
Ex: GET https://www.googleapis.com/books/v1/volumes?q=business+subject:juvenile+fiction
Long story short, I'm pretty sure the BISG link is where Google got all the options for their "categories" key.
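For illustration, a small Python sketch of that kind of request (the subject string is just one example pulled from the BISG list, and some volumes may not return a "categories" key at all):

    import requests

    # Query volumes restricted to a subject taken from the BISG list.
    resp = requests.get(
        "https://www.googleapis.com/books/v1/volumes",
        params={"q": "business subject:juvenile fiction"},
    )
    resp.raise_for_status()
    for item in resp.json().get("items", []):
        info = item.get("volumeInfo", {})
        print(info.get("title"), "->", info.get("categories"))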

Extracting Facebook group data with no admin rights

I need to extract some data from a Facebook group for my research. I tried a few approaches and tools, but nothing worked out the way I wanted. I want to extract things like comments, the number of likes per post, post content, the date of each post, etc. I am looking for an R or Python script that can automate this process. I followed many tutorials, but they seem to be outdated since Facebook changed its API. I understand that Facebook has its own API for data extraction. Please guide me on how to do this in either R or Python.
(Remember - I don't have admin rights to grab the data from the group.)
I found this blog post about Rfacebook helpful. Note that if you don't have admin rights for the group then you're probably out of luck unless you just want to access your own data.
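If you do end up with the required permissions and an access token, a rough Python sketch against the Graph API group feed might look like this (the group ID, token, API version, and available fields are all placeholders and depend on your access level):

    import requests

    ACCESS_TOKEN = "YOUR_USER_ACCESS_TOKEN"  # placeholder
    GROUP_ID = "1234567890"                  # placeholder group ID

    # Request posts with message, creation date, comment and like counts.
    # Which fields are actually returned depends on your permissions.
    resp = requests.get(
        "https://graph.facebook.com/v2.8/{}/feed".format(GROUP_ID),
        params={
            "fields": "message,created_time,comments.summary(true),likes.summary(true)",
            "access_token": ACCESS_TOKEN,
        },
    )
    resp.raise_for_status()
    for post in resp.json().get("data", []):
        print(post.get("created_time"), post.get("message"))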

Using Python to extract LinkedIn information [duplicate]

This question already exists:
Python: visiting random LinkedIn profiles [closed]
Closed 8 years ago.
I'm trying to visit, say, a set of 8,000 LinkedIn profiles that belong to people who have a certain first name (just for example, let's say "Larry"), and then I would like to extract the kinds of jobs each user has held in the past. Is there an efficient way to do this? I would need each Larry to be picked independently of the others; basically, traversing someone's network isn't a good way to do this. Is there a way to completely randomize how the Larrys are picked?
I don't even know where to start. Thanks.
To start:
Trying to crawl the responses LinkedIn gives you in your browser would be almost suicidal.
Check their APIs (particularly the People Search API) and their code samples.
Important disclaimer found in the People's API:
People Search API is a part of our Vetted API Access Program. You must
apply here and get LinkedIn's approval before using this API.
MAYBE with that in mind you'll be able to write a script that queries and parses those APIs. For instance, retrieving users with Larry as the first name: http://api.linkedin.com/v1/people-search?first-name=Larry
Once you are approved by LinkedIn, have retrieved some data from their APIs, and have tried some JSON or XML parsing (whatever the APIs return), you will have something more specific to ask.
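For illustration, once approved, that query might look roughly like this from Python (a sketch only; the OAuth 1.0a credentials are placeholders and the exact auth requirements depend on your app's access):

    import requests
    from requests_oauthlib import OAuth1

    # Placeholder OAuth 1.0a credentials obtained from LinkedIn.
    auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                  "OAUTH_TOKEN", "OAUTH_TOKEN_SECRET")

    resp = requests.get(
        "http://api.linkedin.com/v1/people-search",
        params={"first-name": "Larry", "format": "json"},
        auth=auth,
    )
    resp.raise_for_status()
    print(resp.json())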
If you still want to crawl the HTML returned by LinkedIn when you hit https://www.linkedin.com/pub/dir/?first=Larry&last=&search=Search, take a look at BeautifulSoup.
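A very rough BeautifulSoup sketch for that directory page (the selector is a guess; inspect the real markup first, and expect LinkedIn's terms of service and bot detection to get in the way):

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get(
        "https://www.linkedin.com/pub/dir/",
        params={"first": "Larry", "last": "", "search": "Search"},
        headers={"User-Agent": "Mozilla/5.0"},  # some pages reject the default UA
    )
    soup = BeautifulSoup(resp.text, "html.parser")

    # The href filter below is a guess at profile links; adjust after
    # inspecting the actual page structure.
    for link in soup.select("a"):
        href = link.get("href", "")
        if "/pub/" in href or "/in/" in href:
            print(link.get_text(strip=True), href)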

Crawler for Twitter social graph via python

I am sorry for asking, but I am new to writing crawlers.
I would like to crawl Twitter for users and the follow relationships among them using Python.
Any recommendation for starting points such as tutorials?
Thank you very much in advance.
I'm a big fan of Tweepy myself - https://github.com/tweepy/tweepy
You'll have to refer to the Twitter docs for the API methods that you're going to need. As far as I know, Tweepy wraps all of them, but I recommend looking at Twitter's own docs to find out which ones you need.
To construct a following/follower graph, you're going to need some of these:
GET followers/ids - grab followers (in IDs) for a user
GET friends/ids - grab followings (in IDs) for a user
GET users/lookup - grab up to 100 users, specified by IDs
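A minimal Tweepy sketch that walks one seed user's followers and friends with those endpoints (written against Tweepy 3.x method names; the credentials and seed screen name are placeholders, and rate limits apply):

    import tweepy

    # Placeholder credentials from the Twitter developer portal.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)

    seed = "some_screen_name"  # placeholder seed user
    seed_id = api.get_user(screen_name=seed).id

    follower_ids = api.followers_ids(screen_name=seed)  # GET followers/ids
    friend_ids = api.friends_ids(screen_name=seed)      # GET friends/ids

    # GET users/lookup accepts at most 100 IDs per call.
    for user in api.lookup_users(user_ids=follower_ids[:100]):
        print(user.id, user.screen_name)

    # Directed edges for a follow graph: follower -> seed and seed -> friend.
    edges = [(fid, seed_id) for fid in follower_ids] + \
            [(seed_id, fid) for fid in friend_ids]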
Besides reading the Twitter API docs?
A good starting point would be the great Python twitter library by Mike Verdona, which I personally think is the best one (also an introduction here).
Also see this question on Stack Overflow.
