I'm currently trying to use a REST API to set user permissions via a Python script. It reads the permissions from one server and has to import them for the same user on another server.
I am using the Python requests module and have read up on how to use PUT with parameters, but I seem to have issues with the correct syntax.
REST API endpoint
The username and permission part is what causes my issue.
I have tried like this:
#!/usr/bin/env python
import requests
payload = (({username}), ({permission}))
set_user_permission_project = requests.put(f'{url}/rest/api/1.0/projects/{row[2]}/permissions/users', auth=(user, pw), params=payload)
And prior to that attempt, I tried it like this:
#!/usr/bin/env python
import requests
set_user_permission_project = requests.put(f'{url}/rest/api/1.0/projects/{row[2]}/permissions/users?{username}&{row[8]}', auth=(user, pw))
I am probably missing something very essential here and just don't see it.
Thanks a lot in advance for your help
Br
After the very useful comments from @estherwn I double-checked the REST API and adapted the call accordingly. It was supposed to be key=value pairs, as suggested.
Hence the answer for me was:
import requests
set_user_permission_project = requests.put(f'{url}/rest/api/1.0/projects/{row[2]}/permissions/users?name={username}&permission={row[8]}', auth=(user, pw))
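A slightly cleaner variant is to let requests build the query string for you by passing a params dict instead of formatting name and permission into the URL by hand. A minimal sketch, assuming the same url, row, username, user and pw variables as in the snippets above:

import requests

# requests encodes this dict into ?name=...&permission=... for us
payload = {'name': username, 'permission': row[8]}
set_user_permission_project = requests.put(
    f'{url}/rest/api/1.0/projects/{row[2]}/permissions/users',
    auth=(user, pw),
    params=payload,
)
print(set_user_permission_project.status_code)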
I hope someone will find this helpful one day.
Thanks once more for your help, @estherwn
I am trying to update an already created JSON file in a gist from a Python program. The problem is, I can't figure out how to do it.
I've found this API, which I'm pretty sure is related to what I'm trying to do.
I once again don't know how to use it properly though.
Also, I found a wrapper for GitHub gists called "simplegists" that looked perfect for what I'm trying to do. However, it seems to be currently broken, and I and others are having problems using it (specifically this problem).
Would anyone be kind enough to help me figure out how I can edit a gist using a GitHub authentication token, in python, or at least give me some kind of reference I can work off of? Thanks!
Quite a few Python wrappers aren't working anymore because GitHub discontinued password authentication to the API on November 13, 2020. The best way to proceed is to use an API token.
So first get a token and select the relevant scope ('gist').
Then you can use a Python PATCH request, in line with the API description, to update your gist with the new JSON file:
import requests
import json

token = 'API_TOKEN'
filename = "YOUR_UPDATED_JSON_FILE.json"
gist_id = "GIST_ID"

# Read the updated file contents.
with open(filename, 'r') as f:
    content = f.read()

headers = {'Authorization': f'token {token}'}
r = requests.patch('https://api.github.com/gists/' + gist_id,
                   data=json.dumps({'files': {filename: {"content": content}}}),
                   headers=headers)
print(r.json())
Note that this example assumes that you haven't enabled two-factor authentication.
In GitHub REST API Version 2022-11-28, you need to change
headers = {'Authorization': f'token {token}'}
to
headers = {'Authorization': f'Bearer {token}'}
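Putting it together for the 2022-11-28 version, an updated request could look like the sketch below. The Accept and X-GitHub-Api-Version headers are the ones GitHub's current documentation recommends, and the token, file name and gist ID are placeholders as before:

import requests

token = 'API_TOKEN'
filename = 'YOUR_UPDATED_JSON_FILE.json'
gist_id = 'GIST_ID'

with open(filename, 'r') as f:
    content = f.read()

headers = {
    'Authorization': f'Bearer {token}',
    'Accept': 'application/vnd.github+json',
    'X-GitHub-Api-Version': '2022-11-28',
}
# requests serializes the dict passed via json= for us
r = requests.patch(f'https://api.github.com/gists/{gist_id}',
                   json={'files': {filename: {'content': content}}},
                   headers=headers)
print(r.status_code, r.json().get('html_url'))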
I have created an app and have the token, the keys, and all that information from stocktwits.com. I want to know how I can integrate code into my Python script so that I am able to post a message to StockTwits via Python.
Thanks for your help.
Regards,
PM
Using Python's requests module, you could make a request like this:
import requests
payload={"access_token":"YOURACCESSTOKEN", "body":"YOURMESSAGE"}
requests.post("https://api.stocktwits.com/api/2/messages/create.json", data=payload)
Be sure to look at StockTwits' auth docs to make sure that you have the right API keys.
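If the call fails, the response body usually tells you why, so it is worth inspecting it; a small self-contained sketch (the placeholder values are assumptions):

import requests

payload = {"access_token": "YOURACCESSTOKEN", "body": "YOURMESSAGE"}
r = requests.post("https://api.stocktwits.com/api/2/messages/create.json", data=payload)

print(r.status_code)   # 200 means the message was created
print(r.json())        # the JSON body contains either the new message or an error description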
I am a beginner so I apologize if my question is very obvious or not worded correctly.
I need to send a request to a URL so data can then be sent back in XLM format. The URL will have a user specific login and password, so I need to incorporate that as well. Also there is a port (port 80) that I need to include in the request. Is requests.get the way to go? I'm not exactly sure where to start. After receiving the XLM data, I need to process it (store it) on my machine - if anyone also wants to take a stab at that (I am also struggling to understand exactly how XLM data is sent over, is it an entire file?). Thanks in advance for the help.
Here is the Python documentation on how to fetch internet resources using the urllib package.
It talks about getting the data, storing it in a file, sending data, and some basic authentication.
https://docs.python.org/3/howto/urllib2.html
Getting the URL would look something like this:
import urllib.request
data = urllib.request.urlopen("http://yoururlhere.co.uk").read()
Note that this is Python 3 only; read() returns the response body as bytes.
The Python 2 version can be found here:
What is the quickest way to HTTP GET in Python?
If you want to parse the data, you may want to use this:
https://docs.python.org/2/library/xml.etree.elementtree.html
I hope this helps! I am not too sure how you would approach the username and password part, but these links can hopefully provide you with information on how to do some of the rest.
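For the username/password part, HTTP basic auth with the requests library is one straightforward option, and the returned XML arrives as the body of the HTTP response (a string/bytes, not a separate file), which can be parsed with xml.etree.ElementTree. A sketch with placeholder URL, port, credentials and file name:

import requests
import xml.etree.ElementTree as ET

# Placeholders: replace with the real host, credentials and element names.
url = 'http://example.com:80/data'          # port 80 can be written explicitly in the URL
response = requests.get(url, auth=('myuser', 'mypassword'))   # HTTP basic auth
response.raise_for_status()                 # raise an error if the server rejected the request

# Save the raw XML to disk...
with open('data.xml', 'wb') as f:
    f.write(response.content)

# ...and/or parse it straight from the response.
root = ET.fromstring(response.content)
for child in root:
    print(child.tag, child.attrib)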
Import the requests library and then call the post method as follows:
import requests

data = {
    "email": "netsparkertest@test.com",
    "password": "abcd12333",
}
r = requests.post('https://www.facebook.com', data=data)
print(r.text)
print(r.status_code)
print(r.content)
print(r.headers)
I'm coding an app which has to use this API. At a certain point I want to do a search on their database. Now I'm struggling with which Python library is the right one to use in order to authenticate via OAuth2. I couldn't find any so far where I was sure it would offer the necessary functions.
I wonder if this library (python-oauth2) offers what I need. But this isn't a library for the client, is it? It seems to be for the server...
I'd be really grateful if someone could give me advice on what I should work with.
Method 1
You will need to use the following modules. There is no need to use OAuth; you just need to get the token before performing any search using the API.
requests, json, urllib
Here's a short example for that:
import requests, json, urllib

BASE_URL = "http://scoilnet.com/grants/apikey/"

# Request a token using your username and password.
r = requests.post(BASE_URL + "user/token/", data={'username': username, 'password': password})
print(r.json())
The above code shows you how to request a token from the API. Using that token, you will be making GET and POST requests to the API, which will give a JSON response. That JSON response is parsed into a dictionary from which you can load your data in your program.
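As a rough sketch of what the follow-up request could look like (the exact endpoint, header format and field names depend on the scoilnet API, so treat everything below as an assumption):

import requests

BASE_URL = "http://scoilnet.com/grants/apikey/"

# Assumed: the token endpoint returns JSON containing a "token" field.
r = requests.post(BASE_URL + "user/token/", data={'username': 'myuser', 'password': 'mypassword'})
token = r.json().get('token')

# Assumed: subsequent calls pass the token in an Authorization header.
headers = {'Authorization': f'Token {token}'}
search = requests.get(BASE_URL + "search/", params={'q': 'maths'}, headers=headers)
print(search.json())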
Method 2
You can also use urllib or urllib2 or urllib3
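For completeness, here is the same token request without any third-party dependency, using only the standard library (Python 3's urllib; again just a sketch with placeholder credentials):

import json
import urllib.parse
import urllib.request

BASE_URL = "http://scoilnet.com/grants/apikey/"

# urlopen performs a POST when a data payload is supplied.
data = urllib.parse.urlencode({'username': 'myuser', 'password': 'mypassword'}).encode()
with urllib.request.urlopen(BASE_URL + "user/token/", data=data) as resp:
    body = json.loads(resp.read().decode())
print(body)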
I'm trying to get the source code of a page by using:
import urllib2
url = "http://france.meteofrance.com/france/meteo?PREVISIONS_PORTLET.path=previsionsville/750560"
page = urllib2.urlopen(url)
data = page.read()
print data
and also by using a User-Agent header.
I did not succeed in getting the source code of the page!
Do you have any ideas what can be done?
Thanks in advance
I tried it and the request works, but the content that you receive says that your browser must accept cookies (in French). You could probably get around that with urllib2, but I think the easiest way would be to use the requests lib (if you don't mind having an additional dependency).
To install requests:
pip install requests
And then in your script:
import requests
url = 'http://france.meteofrance.com/france/meteo?PREVISIONS_PORTLET.path=previsionsville/750560'
response = requests.get(url)
print(response.content)
I'm pretty sure the source code of the page will be what you expect then.
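If the site still complains about cookies, a requests.Session keeps cookies between calls, which is usually enough. A minimal sketch (the User-Agent value is just an example):

import requests

url = 'http://france.meteofrance.com/france/meteo?PREVISIONS_PORTLET.path=previsionsville/750560'

session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'})   # some sites block the default UA

# The first request lets the server set its cookies on the session...
session.get('http://france.meteofrance.com/')
# ...which are then sent back automatically on the next request.
response = session.get(url)
print(response.content)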
The requests library worked for me, as Martin Maillard showed.
Also, in another thread I noticed this note by leoluk:
Edit: It's 2014 now, and most of the important libraries have been
ported and you should definitely use Python 3 if you can.
python-requests is a very nice high-level library which is easier to
use than urllib2.
So I wrote this get_page procedure:
import requests
def get_page (website_url):
response = requests.get(website_url)
return response.content
print get_page('http://example.com')
Cheers!
I tried a lot of things, urllib, urllib2 and many others, but one thing worked for me for everything I needed and solved every problem I faced: Mechanize. This library simulates a real browser, so it handles a lot of issues in that area.
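A minimal sketch with mechanize (install it with pip install mechanize; the URL and User-Agent are placeholders):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)                       # don't abort on robots.txt, like a real browser
br.addheaders = [('User-Agent', 'Mozilla/5.0')]   # present a browser-like User-Agent

response = br.open('http://example.com')
html = response.read()
print(html)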