python-otter documentation

I want to collect old tweets for a specific period. I have found out that Topsy provides the Otter API to do that. Since I am developing in Python, I found the python-otter API http://otterapi.googlecode.com/svn/trunk/. However, there is no documentation and I have no idea how to use it! Does anybody know if there is any documentation at all? And, by the way, is there another way I can find old tweets programmatically?
Thanks

The documentation can be found at http://code.google.com/p/otterapi/wiki/Resources
Why not make GET requests directly using urllib2 and the like?
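For example, here is a minimal sketch of a direct GET request with urllib2 (Python 2). The otter.topsy.com/search.json endpoint and the mintime/maxtime window parameters are my recollection of the Otter resources, so verify the exact names against the wiki linked above:

```python
# Sketch: query Topsy's Otter API directly with urllib2 (Python 2).
# Endpoint and parameter names are assumptions -- check the Otter wiki.
import json
import urllib
import urllib2

params = urllib.urlencode({
    'q': 'python',          # search terms
    'mintime': 1262304000,  # window start, Unix timestamp (assumed name)
    'maxtime': 1264982400,  # window end, Unix timestamp (assumed name)
})
response = urllib2.urlopen('http://otter.topsy.com/search.json?' + params)
data = json.load(response)
# the result list sits inside Otter's JSON envelope (assumed layout)
for item in data['response']['list']:
    print item['title'], item['url']
```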

Related

Is there any code for finding a certain keyword on Twitter?

I was wondering if anyone has any sample code for finding a certain keyword on Twitter that has been recently posted and has a certain number of likes within a certain timeframe,
preferably in Python. Anything related to this would help a lot. Thank you!
I have personally not done this before, but a simple Google search yielded this (a Python wrapper for the Twitter API):
https://python-twitter.readthedocs.io/en/latest/index.html
and a GitHub repo with examples, linked from their getting-started page:
https://github.com/bear/python-twitter/tree/master/examples
There you can find some example code for getting all of a user's tweets and much more.
Iterating through a list of users' tweets might be able to do the job here, but if that doesn't cut it, I recommend searching the docs linked above for what you need.
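If it helps, here is a minimal sketch using python-twitter's GetSearch, filtering by favourite count and age; the credentials are placeholders, and you should double-check the parameter names against the docs above:

```python
# Sketch: search recent tweets with python-twitter and keep the popular ones.
# Credential strings are placeholders.
import time
import twitter

api = twitter.Api(consumer_key='...',
                  consumer_secret='...',
                  access_token_key='...',
                  access_token_secret='...')

cutoff = time.time() - 7 * 24 * 3600  # last seven days
for status in api.GetSearch(term='python', count=100, result_type='recent'):
    if status.favorite_count >= 10 and status.created_at_in_seconds >= cutoff:
        print(status.text)
```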

Using a Python Twitter library, how can I extract the list of users following a particular #hashtag anonymously?

So, I have a problem statement in which I want to extract the list of users who are following a particular #hashtag like #obama, #corona, etc.
The challenge here is that I want to extract this data anonymously, i.e. without providing any account keys.
I tried a library named twint that is capable of doing this, but it's very slow. Can anyone recommend a better alternative for my use case?
There's no such library available that satisfies your use case. Yes, there's the twint library, but as you mentioned it's slow for your use case, so try libraries in other languages and see if something is available there.
You can try to write a script in Python using Selenium, and I think you could get those usernames very fast.
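Something along these lines, assuming the public hashtag page works without logging in; the CSS selector is purely hypothetical, since Twitter's markup changes often, so inspect the page and adjust it:

```python
# Rough Selenium sketch: open a public hashtag page and collect usernames.
# The selector below is hypothetical -- inspect the live page and adjust.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('https://twitter.com/hashtag/corona')
elements = driver.find_elements_by_css_selector('[data-testid="User-Name"]')
usernames = {el.text for el in elements}
driver.quit()
print(usernames)
```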
This GitHub repo which I found might be useful. It does not require authentication to get the Twitter data. Have a look at it: https://github.com/bisguzar/twitter-scraper
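For what it's worth, a quick sketch of how that package is used; it scrapes public pages, so no keys are needed. Note that it returns tweets matching a query rather than followers of a hashtag, so you would collect the tweet authors instead, and the exact dict keys may differ between versions:

```python
# Sketch with the twitter-scraper package (no authentication required).
# Collects the authors of recent #corona tweets; key names may vary by version.
from twitter_scraper import get_tweets

users = set()
for tweet in get_tweets('#corona', pages=5):
    users.add(tweet['username'])
print(users)
```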
I tried this approach last year, but found that my date range fell well outside of the available info provided by Twitter, and had to use the Premium API. If this is not a constraint for you, and since you do not want to code your own scraper, take a look at this option:
TweetScraper: updated in September last year; it also provides MongoDB integration. I haven't tried it, but it seems to work OK. I don't know about the time performance.

How to retrieve a wav file from an API?

I am working with Python 3.4.4.
I tried to use a Merriam-Webster API, and here is an example link:
http://www.dictionaryapi.com/api/v1/references/collegiate/xml/purple?key=bf534d02-bf4e-49bc-b43f-37f68a0bf4fd
There is a wav filename inside the <wav> tag; you will see it after you open the URL.
And I am wondering how I can retrieve that wav file,
because it is kind of just a string to me.
Thank you very much!
Okay, I just sorted it out.
Usually you need to look at the instructions for the API. I looked it up on the official website, and it tells you how you are going to retrieve it. In this case you go to another URL, and voilà.
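To spell that out, here is a sketch of the whole flow on Python 3: fetch the XML, read the <wav> filename, build the audio URL, and save the file. The media.merriam-webster.com/soundc11/<first letter>/<file> pattern is my reading of the API instructions, so confirm it there; the key is a placeholder:

```python
# Sketch: fetch Merriam-Webster XML, extract the <wav> filename, download it.
# The soundc11 URL pattern is taken from the API instructions -- verify there.
import urllib.request
import xml.etree.ElementTree as ET

KEY = 'your-api-key'  # placeholder
word = 'purple'
url = ('http://www.dictionaryapi.com/api/v1/references/collegiate/xml/'
       '{}?key={}'.format(word, KEY))

with urllib.request.urlopen(url) as resp:
    root = ET.fromstring(resp.read())

wav = root.find('.//wav').text  # e.g. 'purple01.wav'
audio_url = ('http://media.merriam-webster.com/soundc11/'
             '{}/{}'.format(wav[0], wav))

with urllib.request.urlopen(audio_url) as resp, open(wav, 'wb') as out:
    out.write(resp.read())
```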

Python Google Web Crawler

I am working on a project that needs to do a search on the internet (e.g. Stack Overflow) and retrieve all relevant results (URL, text, image paths) from the search into an XML file. I am building it with Python. Does anyone have any suggestions as to how I should approach this problem? I don't want to scan the entire web, just the top relevant results (stackoverflow, 10/08/2013, python, as an example).
For Stack Overflow you can use the API directly,
for example:
https://api.stackexchange.com/2.1/questions?fromdate=1381190400&todate=1381276800&order=desc&sort=activity&tagged=python&site=stackoverflow
see https://api.stackexchange.com/docs/questions#fromdate=2013-10-08&todate=2013-10-09&order=desc&sort=activity&tagged=python&filter=default&site=stackoverflow
Note that you can't make more than 30 requests a second; see http://api.stackexchange.com/docs/throttle
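Putting the pieces together, a minimal sketch that pulls one page of questions from that endpoint and writes title/link pairs to an XML file; the Stack Exchange API gzips its responses, hence the decompression step:

```python
# Sketch: fetch questions from the Stack Exchange API and write them to XML.
# Uses only the standard library; the API returns gzip-compressed JSON.
import gzip
import json
import urllib.request
import xml.etree.ElementTree as ET

url = ('https://api.stackexchange.com/2.1/questions'
       '?fromdate=1381190400&todate=1381276800'
       '&order=desc&sort=activity&tagged=python&site=stackoverflow')

with urllib.request.urlopen(url) as resp:
    data = json.loads(gzip.decompress(resp.read()).decode('utf-8'))

root = ET.Element('results')
for q in data['items']:
    item = ET.SubElement(root, 'question')
    ET.SubElement(item, 'title').text = q['title']
    ET.SubElement(item, 'url').text = q['link']

ET.ElementTree(root).write('results.xml', encoding='utf-8')
```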
It sounds like you could use BeautifulSoup. Also check out this thread; it sounds like it's what you need: Creating an XML document with BeautifulSoup (Stack Overflow).
As for downloading and using BeautifulSoup, the site is here
It's pretty simple to use.
Hope this helps.
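In case it saves you a search, here is a tiny sketch of building an XML document with BeautifulSoup along the lines of that thread (requires lxml for the 'xml' feature); the URL and text are placeholders:

```python
# Sketch: build a small XML document with BeautifulSoup's tree-building API.
from bs4 import BeautifulSoup

soup = BeautifulSoup(features='xml')
results = soup.new_tag('results')
soup.append(results)

# placeholder URL and text for one crawled result
result = soup.new_tag('result', url='https://stackoverflow.com/q/12345')
result.string = 'Example result text'
results.append(result)

print(soup.prettify())
```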

Crawler for Twitter social graph via Python

I am sorry for asking, but I am new to writing crawlers.
I would like to crawl the Twitter space for Twitter users and the follow relationships among them, using Python.
Any recommendation for starting points such as tutorials?
Thank you very much in advance.
I'm a big fan of Tweepy myself - https://github.com/tweepy/tweepy
You'll have to refer to the Twitter docs for the API methods that you're going to need. As far as I know, Tweepy wraps all of them, but I recommend looking at Twitter's own docs to find out which ones you need.
To construct a following/follower graph, you're going to need some of these (there's a short Tweepy sketch after the list):
GET followers/ids - grab followers (in IDs) for a user
GET friends/ids - grab followings (in IDs) for a user
GET users/lookup - grab up to 100 users, specified by IDs
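For instance, here is a minimal sketch with a classic (pre-4.0) Tweepy that takes one hop of the follow graph from a seed user; the credential strings are placeholders:

```python
# Sketch: one hop of the Twitter follow graph with pre-4.0 Tweepy.
# Credential strings are placeholders.
import tweepy

auth = tweepy.OAuthHandler('consumer_key', 'consumer_secret')
auth.set_access_token('access_token', 'access_token_secret')
api = tweepy.API(auth, wait_on_rate_limit=True)

seed = 'some_user'
follower_ids = api.followers_ids(screen_name=seed)  # GET followers/ids
friend_ids = api.friends_ids(screen_name=seed)      # GET friends/ids

# GET users/lookup accepts at most 100 IDs per call
for i in range(0, len(follower_ids), 100):
    for user in api.lookup_users(user_ids=follower_ids[i:i + 100]):
        print(user.screen_name)
```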
Besides reading the Twitter API docs?
A good starting point would be the great Python twitter library by Mike Verdone, which I personally think is the best one (there is also an introduction here).
Also see this question on Stack Overflow.
