How to use the PivotalTracker API with Python 2.7 - python

I am trying to use the PivotalTracker API to get all the stories from an epic, and I am very lost on where to begin. I looked at the samples, but they use cURL, not Python. I also stumbled upon the pytracker module, but it is 4 years old and obsolete, as Pivotal Tracker has switched from XML to JSON in that time. I'm not sure where to start, but I appreciate any guidance you have to offer. Thanks!

So I ended up installing the Requests module for Python and made the equivalent HTTP requests through it. Here is how to do it: https://stackoverflow.com/a/25797678/1536101
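For reference, here is a minimal sketch of fetching an epic's stories with requests. The v5 endpoint, the X-TrackerToken header, and the with_label filter reflect my reading of the Tracker docs; the project ID, epic label, and token are placeholders:
import requests

# Placeholders -- substitute your own project ID, epic label, and API token.
API_TOKEN = "your-tracker-api-token"
PROJECT_ID = 123456
EPIC_LABEL = "my-epic-label"

# Tracker's v5 REST API authenticates with an X-TrackerToken header (per my reading of the docs).
url = "https://www.pivotaltracker.com/services/v5/projects/%d/stories" % PROJECT_ID
response = requests.get(url,
                        headers={"X-TrackerToken": API_TOKEN},
                        params={"with_label": EPIC_LABEL})
response.raise_for_status()

for story in response.json():
    print story["id"], story["name"]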

You can use the pytracker wrapper library: https://code.google.com/p/pytracker/
Or you could use urllib directly to call Pivotal Tracker's RESTful API: https://docs.python.org/2/library/urllib.html
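If you go the standard-library route, a rough urllib2 sketch might look like this (same assumed v5 endpoint and header as above; the project ID and token are placeholders):
import json
import urllib2

# Placeholders -- replace with your own project ID and API token.
request = urllib2.Request(
    "https://www.pivotaltracker.com/services/v5/projects/123456/stories",
    headers={"X-TrackerToken": "your-tracker-api-token"})
stories = json.loads(urllib2.urlopen(request).read())
print len(stories)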

Related

How to start with the InstagramAPI in Python?

I want to play with the Instagram API and write some code, for example to get a list of my followers. I am really new to this topic.
What is the best way to do this? Is there a Python library for handling those JSON requests, or should I send them directly to the (new? Graph API, Display API) Instagram API?
I appreciate any advice I can get. Thanks :)
LevPasha's Instagram-API-python, instabot, and many other API wrappers are no longer functional as of Oct 24, 2020, after Facebook deprecated the legacy API in favor of a new, authentication-required API. It now requires registering your app with Facebook to get access to many of the API features (via oEmbed) that were previously available without any authentication.
See https://developers.facebook.com/docs/instagram/oembed/ for more details on the new implementation and how to migrate.
You should still be able to get a list of your followers, etc. via the new oEmbed API and python--it will require registering the app, making a call to the new GET API with your authentication key via the python requests package, and then processing the result.
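As a rough illustration, a call to the oEmbed endpoint with requests might look like the following. The endpoint path, parameter names, and response field reflect my understanding of the linked docs; the access token and post URL are placeholders:
import requests

# Placeholder app access token -- obtained after registering the app with Facebook.
ACCESS_TOKEN = "app-id|client-token"

resp = requests.get(
    "https://graph.facebook.com/v11.0/instagram_oembed",   # verify the version/path against the docs
    params={
        "url": "https://www.instagram.com/p/SOME_POST_ID/",  # placeholder post URL
        "access_token": ACCESS_TOKEN,
    })
resp.raise_for_status()
print(resp.json().get("author_name"))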
There is a library called instabot. This useful library has all the necessary functions/methods to interact with your Instagram account. Read its documentation here.
The pip installation is: pip install instabot
To get started, let's say you simply want to log in to your account.
from instabot import Bot
bot = Bot()
bot.login(username="YOUR USERNAME", password="YOUR PASSWORD")
To get the list of your followers,
my_followers = bot.followers()
If you want to upload a photo or get your posts,
bot.upload_photo(image, caption="blah blah blah")  # the image variable here is a path to that image
all_posts = bot.get_your_medias()  # this returns all the media of your account
# to get the info of each media item, use
for post in all_posts:
    print(bot.get_media_info(post))
There are many other functions/methods available in this library.
It is actually very fun to interact with Instagram using Python. You will have a great time. Enjoy :)
You can use https://github.com/LevPasha/Instagram-API-python to call the Instagram API,
and if you want to call the API directly you can use the requests package.
It also supports GraphQL APIs.
Here you can see an example:
https://gist.github.com/gbaman/b3137e18c739e0cf98539bf4ec4366ad
It seems like in 2022 this is the only actively working and maintained Python solution:
https://github.com/adw0rd/instagrapi
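Here is a minimal sketch of logging in and listing followers with instagrapi. The Client, login, user_id_from_username, and user_followers names reflect my reading of that project's README, so verify against it; the credentials are placeholders:
from instagrapi import Client

cl = Client()
cl.login("YOUR USERNAME", "YOUR PASSWORD")      # placeholder credentials

user_id = cl.user_id_from_username("YOUR USERNAME")
followers = cl.user_followers(user_id)          # assumed to return a dict keyed by user id
print(len(followers))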

Unable to get page source code in python

I'm trying to get the source code of a page by using:
import urllib2
url="http://france.meteofrance.com/france/meteo?PREVISIONS_PORTLET.path=previsionsville/750560"
page =urllib2.urlopen(url)
data=page.read()
print data
I also tried using a User-Agent header.
I did not succeed in getting the source code of the page!
Do you have any ideas about what can be done?
Thanks in advance.
I tried it and the request works, but the content you receive says (in French) that your browser must accept cookies. You could probably get around that with urllib2, but I think the easiest way would be to use the requests lib (if you don't mind having an additional dependency).
To install requests:
pip install requests
And then in your script:
import requests
url = 'http://france.meteofrance.com/france/meteo?PREVISIONS_PORTLET.path=previsionsville/750560'
response = requests.get(url)
print(response.content)
I'm pretty sure the source code of the page will be what you expect then.
The requests library worked for me, as Martin Maillard showed.
Also, in another thread I noticed this note by leoluk here:
Edit: It's 2014 now, and most of the important libraries have been
ported and you should definitely use Python 3 if you can.
python-requests is a very nice high-level library which is easier to
use than urllib2.
So I wrote this get_page procedure:
import requests
def get_page(website_url):
    response = requests.get(website_url)
    return response.content

print get_page('http://example.com')
Cheers!
I tried a lot of things ("urllib", "urllib2", and many others), but one thing worked for me for everything I needed and solved every problem I faced: Mechanize. This library simulates a real browser, so it handles a lot of issues in that area.
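A rough sketch of fetching that page with mechanize (which keeps cookies between requests) might look like this; the set_handle_robots and addheaders calls are standard mechanize usage, and the URL is the one from the question:
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)                      # don't let robots.txt block the fetch
br.addheaders = [("User-agent", "Mozilla/5.0")]  # present a browser-like User-Agent

url = "http://france.meteofrance.com/france/meteo?PREVISIONS_PORTLET.path=previsionsville/750560"
response = br.open(url)                          # cookies set by the site are kept for later requests
print response.read()[:200]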

Python and curl question

I will be transmitting purchase info (like credit card data) to a bank gateway and retrieving the result using Django, and thus Python.
What would be an efficient and secure way of doing this?
I have read the documentation of this gateway for PHP; they seem to use this method:
$xml= Some xml holding data of a purchase.
$curl = `/usr/bin/curl -s -d 'DATA=$xml' "https://url of the virtual bank POS"`;
$data=explode("\n",$curl); //return value is also an xml, seems like they are splitting by each `\n`
and using the $data, they process if the payment is accepted, rejected etc..
I want to achieve this in Python. I have done some searching, and it seems there is a Python curl binding named pycurl, but I have no experience with curl and do not know whether this library is suitable for the task. Please keep in mind that since this transfer requires security, I will be using SSL.
Any suggestion will be appreciated.
Use of the standard library urllib2 module should be enough:
import urllib
import urllib2
request_data = urllib.urlencode({"DATA": xml})
response = urllib2.urlopen("https://url of the virtual bank POS", request_data)
response_data = response.read()
data = response_data.split('\n')
I assume that the xml variable holds the data to be sent.
Citing pycurl.sourceforge.net:
To sum up, PycURL is very fast (esp. for multiple concurrent operations) and very feature complete, but has a somewhat complex interface. If you need something simpler or prefer a pure Python module you might want to check out urllib2 and urlgrabber. There is also a good comparison of the various libraries.
Both pycurl and urllib2 can work with HTTPS, so it's up to you.
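For comparison, a rough pycurl sketch of the same POST (the option names are standard pycurl constants; the gateway URL is a placeholder and xml is assumed to hold the purchase document, as above):
import pycurl
import urllib
from StringIO import StringIO

post_data = urllib.urlencode({"DATA": xml})   # xml is assumed to hold the purchase XML

buf = StringIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://url-of-the-virtual-bank-POS")  # placeholder URL
c.setopt(pycurl.POSTFIELDS, post_data)
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.setopt(pycurl.SSL_VERIFYPEER, 1)            # keep certificate verification on
c.perform()
c.close()

data = buf.getvalue().split("\n")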

Windows live api for python

I have read the documentation about the Windows Live API: http://msdn.microsoft.com/en-us/library/bb463989.aspx
But how can I retrieve contacts from Hotmail with Python?
Is there any example?
Your program will first need to obtain "delegated authentication", for which the Python samples are here.
After that, the interface is REST-like: you only need to HTTP GET the appropriate URI (per the docs, that's '/LiveContacts/contacts' to retrieve all contacts). The REST schema is documented here. You can make an HTTP GET request in Python with such standard library modules as urllib and urllib2, though the lower-level httplib module is also fine.
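As a very rough sketch (the base URL and the Authorization header value below are placeholders that come out of the delegated-authentication step described in the docs; only the '/LiveContacts/contacts' path is taken from them):
import urllib2

BASE_URL = "https://example-live-contacts-endpoint"   # placeholder: the real host comes from the Live Contacts docs
AUTH_HEADER = "value produced by the delegated-authentication step"  # placeholder

request = urllib2.Request(BASE_URL + "/LiveContacts/contacts")
request.add_header("Authorization", AUTH_HEADER)
contacts = urllib2.urlopen(request).read()   # the response is XML per the REST schema
print contacts[:200]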
For those who are searching for the download link for the library
http://download.microsoft.com/download/6/2/a/62adfe67-6fee-487f-9c3e-911ce5d0bc9d/webauth-python-1.2.tar.gz

How to access YQL in Python (Django)?

Hey, I need a simple example for the following task:
Send a query to YQL and receive a response
I am accessing public data from python backend of my Django app.
If I just copy/paste an example from YQL, it says "Please provide valid credentials".
I guess, I need OAuth authorization to do it.
So I got an API key and a shared secret.
Now, what should I do with them?
Should I use the Python oauth library? This one?
http://oauth.googlecode.com/svn/code/python/oauth/
But what does the code look like? How do I pass my secret/API key along with my YQL query?
I guess, many Django programmers would love to know this.
I've just released python-yql, also available on PyPI. It can do public queries, two-legged OAuth (a.k.a. signed requests), and facilitate three-legged OAuth too.
It's brand new, so there may be some bugs while I work on improving the test coverage, but it should hopefully do what you need. See the source for some ideas on how to use it.
Installing to try it is as follows:
sudo easy_install yql
Bug/Feature requests can be filed here: https://bugs.launchpad.net/python-yql
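A short sketch of what usage might look like (the yql.Public and yql.TwoLegged class names and the execute method reflect my reading of python-yql's source; the key and secret are placeholders):
import yql

# Public data needs no credentials.
y = yql.Public()
result = y.execute("select * from search.web where query='python-yql'")
print result.rows[:2]

# Signed (two-legged OAuth) requests take the API key and shared secret.
y2 = yql.TwoLegged("YOUR_API_KEY", "YOUR_SHARED_SECRET")
result = y2.execute("select * from search.web where query='python-yql'")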
If you are only accessing public data, you can just make a direct REST call from Python.
>>> import urllib2
>>> result = urllib2.urlopen("http://query.yahooapis.com/v1/public/yql?q=select%20title%2Cabstract%20from%20search.web%20where%20query%3D%22paul%20tarjan%22&format=json").read()
>>> print result[:100]
{"query":{"count":"10","created":"2009-11-03T04:47:01Z","lang":"en-US","updated":"2009-11-03T04:47:0
And then you can parse the result with simplejson.
>>> import simplejson
>>> data = simplejson.loads(result)
>>> data['query']['results']['result'][0]['title']
u'<b>Paul</b> <b>Tarjan</b> - Silicon Valley, CA | Facebook'
Ok, I sort of resolved the problem.
In the YQL console example for data/html, the following URL was presented as an example:
http://query.yahooapis.com/v1/yql?q=select+*+from+html+where+url%3D%22http%3A%2F%2Ffinance.yahoo.com%2Fq%3Fs%3Dyhoo%22+and%0A++++++xpath%3D%27%2F%2Fdiv%5B%40id%3D%22yfi_headlines%22%5D%2Fdiv%5B2%5D%2Ful%2Fli%2Fa%27
It does not work!
But if you insert "/public" after "v1/", then it magically starts working!
http://query.yahooapis.com/v1/public/yql?q=select+*+from+html+where+url%3D%22http%3A%2F%2Ffinance.yahoo.com%2Fq%3Fs%3Dyhoo%22+and%0A++++++xpath%3D%27%2F%2Fdiv%5B%40id%3D%22yfi_headlines%22%5D%2Fdiv%5B2%5D%2Ful%2Fli%2Fa%27
But the question of how to pass my API key (for v1/yql access) is still open. Any advice?
