HP QC REST API using Python

I am trying to connect to HP QC using Python to create defects and attach files, but I am not able to connect. Here is my code:
import requests

domain = 'DEFAULT_773497139'
project = '773497139_DEMO'

url = "https://almalm1250saastrial.saas.hpe.com/qcbin/"
querystring = {"username": "user@gmail.com", "password": "password"}
headers = {
    'cache-control': "no-cache",
    'token': "5d33d0b7-1d04-4989-3349-3005b847ab7f"
}
response = requests.request("POST", url, headers=headers, params=querystring)
# print(response.text)
print(response.headers)

new_header = response.headers
new_url = url + u'rest/domains/' + domain + u'/projects/' + project
new_querystring = {
    "username": "user@gmail.com",
    "password": "password",
    "domain": 'DEFAULT_773497139',
    "project": '773497139_DEMO'
}
print(new_url)
response = requests.request("POST", new_url, headers=new_header, params=new_querystring)
print(response.text)
Login works fine, but when I try any other API call I get this message:
Authentication failed. Browser based integrations - to login append '?login-form-required=y' to the url you tried to access
If I add that parameter, it just takes me back to the login page.

It seems your URLs are not built correctly:
base_url = 'https://server.saas.hpe.com/qcbin/'
base_url + '/qcbin/rest/domains/'
you will get:
.../qcbin/qcbin/...
i.e. qcbin appears twice.
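For instance, a minimal sketch of building the resource URL so that qcbin appears only once (the helper function is just for illustration):
base_url = 'https://almalm1250saastrial.saas.hpe.com/qcbin'  # 'qcbin' once, no trailing slash

def alm_url(*parts):
    # join path segments onto the base without duplicating '/qcbin'
    return '/'.join([base_url] + [p.strip('/') for p in parts])

print(alm_url('rest/domains', 'DEFAULT_773497139', 'projects', '773497139_DEMO'))
# https://almalm1250saastrial.saas.hpe.com/qcbin/rest/domains/DEFAULT_773497139/projects/773497139_DEMO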

The way I do it is based on requests Sessions. First I create a session, then post my credentials to ../authentication-point/alm-authenticate/ (or something like that, you should check it), and then using this session I can GET, POST or do whatever I want.
So:
s = requests.Session()
s.post('../authentication-point/alm-authenticate/', data=credentials)
# now the session object is authenticated and recognized
# you can s.post, s.get or whatever
I think it's a good url, but I can't check it right now :)

The session issue has been solved by the LWSSO cookie (LWSSO_COOKIE_KEY).
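A minimal sketch of that flow, assuming the authenticate endpoint mentioned in the answers here and placeholder credentials; once the POST succeeds, the Session stores the LWSSO cookie and resends it automatically, so there is no need to copy response headers into later requests:
import requests

base = 'https://almalm1250saastrial.saas.hpe.com/qcbin/'
s = requests.Session()

# authenticate once; the LWSSO cookie ends up in the session's cookie jar
s.post(base + 'authentication-point/authenticate', auth=('user@example.com', 'password'))
print(s.cookies.get_dict())  # should now contain LWSSO_COOKIE_KEY

# later calls reuse the cookie automatically (the resource path shown is the usual
# ALM REST layout - verify it against your server's documentation)
r = s.get(base + 'rest/domains/DEFAULT_773497139/projects/773497139_DEMO/defects')
print(r.status_code)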

Just send a Unicode string to your server and use the header for Basic Authorization as specified by the HP REST API:
import base64

login_url = u'https://almalm1250saastrial.saas.hpe.com/qcbin/authentication-point/authenticate'
username, password = user, passwd
logs = base64.b64encode("{0}:{1}".format(username, password).encode()).decode()
header = {'Authorization': "Basic {}".format(logs)}
POSTing with the requests module in Python is quite easy:
requests.post(login_url, headers=header)
That's it... now you are authenticated and you can proceed with the next action :-) To check, you can do a GET on:
login_auth = u'https://almalm1250saastrial.saas.hpe.com/qcbin/rest/is-authenticated'
You should get a 200 status code, which means you are authenticated.
Hope this helps. Have a nice day and let me know if something is still not clear.
P.S.: to send REST messages in Python I am using the requests module. It is really easy! You can create a session if you want to send multiple requests, then work with that session: ALM = requests.session(), then ALM.post(...) and so on :-)
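Putting those pieces together, a sketch of the whole sequence (same endpoints as above; user and passwd are placeholders):
import base64
import requests

base = u'https://almalm1250saastrial.saas.hpe.com/qcbin/'
user, passwd = 'user@example.com', 'password'

logs = base64.b64encode("{0}:{1}".format(user, passwd).encode()).decode()
header = {'Authorization': "Basic {}".format(logs)}

ALM = requests.session()
ALM.post(base + 'authentication-point/authenticate', headers=header)

check = ALM.get(base + 'rest/is-authenticated')
print(check.status_code)  # 200 means you are authenticated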

Related

How can I make post requests with querystrings using sessions in Python?

I'm using the requests module and I'm interested in sending a POST request with a querystring using sessions. How can I do that? I haven't found anything relating requests.Session and querystrings.
With a Session (it returns an HTTP 500 response code):
response = self.session.post(self.url, data = payload, headers = self.headers, params = querystring)
Without a Session (it works fine):
response = requests.request("POST", self.url, json=payload, headers=self.headers, params=querystring)
Maybe you can provide the URL and a little bit more code.
In the session example you pass data=payload.
In the second example, json=payload.
Did you create the session correctly?
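For reference, a Session takes the same keyword arguments as the module-level call, so a sketch of the working variant on a session (url, payload, headers and querystring stand in for the question's own values):
import requests

session = requests.Session()
# params builds the querystring exactly as before; json= (not data=) sends the
# JSON body that worked in the call without a session
response = session.post(url, json=payload, headers=headers, params=querystring)
print(response.status_code, response.url)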

There is no valid form to send the username and password using Python requests

I am trying to scrape a website called quantumonline.com. There are 2 forms there, neither of which has 'acctname' and 'pswrd' as values I can fill in. Here is my code:
import requests

session = requests.session()
data = {
    'acctname': 'myusername',
    'pswrd': 'mypassword'
}
session.post('http://quantumonline.com/login.cfm', data=data)
Afterward, I try to access a secure page on it using the same session, and it tells me to register. I have also tried using
data = {
    'acctname': 'myusername',
    'pswrd': 'mypassword',
    'submit': 'Login'
}
I have no clue why it won't work. Any help is appreciated.
The website requires the requests Session to have visited the homepage before it submits a login request. To everyone who helped me inch closer to the answer: thank you.
Here is the working code:
import requests

session = requests.Session()
data = {
    'acctname': 'username',
    'pswrd': 'password',
    'action': 'login_test.cfm'
}
headers = {'User-Agent': 'Mozilla/5.0'}
session.get('http://quantumonline.com', headers=headers)  # get cookies first
response = session.post('http://quantumonline.com/login_test.cfm', headers=headers, data=data)
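As a rough sanity check, you could fetch a page that requires login with the same session; the page and the marker text below are hypothetical, substitute ones you know:
check = session.get('http://quantumonline.com/somepage.cfm', headers=headers)  # hypothetical protected page
print('Log In' not in check.text)  # True suggests the session is authenticated (marker text is a guess)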

How to use Python requests to log in to a website

I'm trying to log in to and scrape a job site, and send myself a notification whenever certain keywords are found. I think I have correctly traced the XPath for the value of the field "login[iovation]", but I cannot extract the value. Here is what I have done so far to log in:
import requests
from lxml import html

header = {"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}
login_url = 'https://www.upwork.com/ab/account-security/login'
session_requests = requests.session()

# get csrf
result = session_requests.get(login_url)
tree = html.fromstring(result.text)
auth_token = list(set(tree.xpath('//*[@name="login[_token]"]/@value')))
auth_iovat = list(set(tree.xpath('//*[@name="login[iovation]"]/@value')))

# create payload
payload = {
    "login[username]": "myemail@gmail.com",
    "login[password]": "pa$$w0rD",
    "login[_token]": auth_token,
    "login[iovation]": auth_iovat,
    "login[redir]": "/home"
}

# perform login
scrapeurl = 'https://www.upwork.com/ab/find-work/'
result = session_requests.post(login_url, data=payload, headers=dict(referer=login_url))

# test the result
print(result.text)
This is a screenshot of the form data when I log in successfully.
This is because Upwork uses something called iOvation (https://www.iovation.com/) to reduce fraud. iOvation uses a digital fingerprint of your device/browser, which is sent via the login[iovation] parameter.
If you look at the JavaScript loaded on the site, you will find two scripts being loaded from the iesnare.com domain. This domain and many others are owned by iOvation to drop third-party JavaScript that identifies your device/browser.
I think if you copy the string from a successful login and send it over along with all the HTTP headers as-is, including the browser user agent, in your Python code, you should be OK.
Are you sure that result is fetching a 2XX code?
When I run this code, result = session_requests.get(login_url) fetches a 403 status code, which means I am not even reaching login_url itself.
They have an official API now, no need for scraping, just register for API keys.
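Whichever route you take, one fix worth making in the snippet above: tree.xpath() returns lists, so the payload currently sends list values. A hedged sketch that continues the question's code, taking the first match and mirroring the browser headers as the answer suggests (the header values are illustrative; the iovation string would have to come from a captured browser login):
# take the first match from each xpath result (empty string if nothing matched)
token = auth_token[0] if auth_token else ''
iovation = auth_iovat[0] if auth_iovat else ''  # or paste the fingerprint string captured from the browser

payload = {
    "login[username]": "myemail@gmail.com",
    "login[password]": "pa$$w0rD",
    "login[_token]": token,
    "login[iovation]": iovation,
    "login[redir]": "/home",
}
headers = {
    "User-Agent": "Mozilla/5.0",  # mirror the real browser as closely as possible
    "Referer": login_url,
}
result = session_requests.post(login_url, data=payload, headers=headers)
print(result.status_code)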

Fail to log in to a website with requests

First, here is my code:
import requests

payload = {'name': 'loginname',
           'password': 'loginpassword'}

requests.post('http://myurl/auth', data=payload, verify=False)
rl = requests.get('http://myurl/dashboard', verify=False)
print(rl.text)
My problem:
I get status code 200, which means the login was successful.
But when I try to visit the protected page http://myurl/dashboard, the output doesn't match: it shows me the login page again, and I don't get why.
I know there are many questions like this, but I have studied every answer and the docs and I still don't get it.
Any help would be very nice. It drives me crazy. Thanks in advance!
You need to use a Session object to have requests keep track of your cookies, such as the login cookie that would get set after your requests.post login action. Quoting the first example there:
s = requests.Session()
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get('http://httpbin.org/cookies')
print(r.text)
# '{"cookies": {"sessioncookie": "123456789"}}'
So in your case:
import requests

payload = {'name': 'loginname',
           'password': 'loginpassword'}

s = requests.Session()
r1 = s.post('http://myurl/auth', data=payload, verify=False)
if r1.status_code == 200:  # not necessary if you are sure it would login successfully
    r2 = s.get('http://myurl/dashboard', verify=False)
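As a quick follow-up check that continues the snippet above (the marker text is only a guess, adjust it to your login page):
# if the dashboard response still contains the login form, the POST did not authenticate
if 'login' in r2.text.lower():
    print('still being served the login page - check the form field names and any CSRF token')
else:
    print(r2.text)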

Passing a CSRF token into requests.post()

I saw this post - Passing csrftoken with python Requests
I've been working through it trying to make it work for Greenhouse. I'm trying to build a script that will automate profile creation.
I can fetch data using GET and cookies, but I think I'm getting stuck with X-CSRF. I downloaded the Live HTTP Headers plugin for Mozilla to get the CSRF token, but I'm unsure how to pass it in.
So far what I have:
import requests

csrf = 'some_csrf_token'
cookie = 'some_cookie_id'
data = {'person_first_name': 'Morgan'}  # this is submitting my name on the form
url = 'https://app.greenhouse.io/people/new?hiring_plan_id=24047'  # submission form page
headers = {'Cookie': cookie}
r = requests.post(url, data=data, headers=headers)
Any thoughts on how I should construct my requests.post?
If you want requests to handle the cookies for you, you should set up a session.
session = requests.session()

logindata = {'authenticity_token': 'whatevertokenis',
             'user[email]': 'your@loginemail.com',
             'user[password]': 'yourpassword',
             'user[remember_me]': '0'}

login = session.post('https://app.greenhouse.io/users/sign_in', data=logindata)  # this should log you in; I don't have an account there to test
data = {'person_first_name': 'Morgan'}
url = 'https://app.greenhouse.io/people/new?hiring_plan_id=24047'
r = session.post(url, data=data)  # unless you need to set a user agent or referrer address you may not need the header to be added
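Rather than hard-coding 'whatevertokenis', you could scrape the token from the sign-in page first. A sketch, assuming the token is exposed as a hidden input named authenticity_token (typical for Rails apps such as Greenhouse, but verify against the actual form):
import requests
from lxml import html

session = requests.session()

signin_url = 'https://app.greenhouse.io/users/sign_in'
page = session.get(signin_url)
tree = html.fromstring(page.text)
token = tree.xpath('//input[@name="authenticity_token"]/@value')[0]

logindata = {'authenticity_token': token,
             'user[email]': 'your@loginemail.com',
             'user[password]': 'yourpassword',
             'user[remember_me]': '0'}
login = session.post(signin_url, data=logindata)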
