I am following this tutorial to convert a .sldprt file to a .obj file. I wanted to accomplish this conversion with a Python script, and I found a script online that gets as far as uploading the file to the server and beginning the conversion. In step 3 of the tutorial (Verify the job is Complete), when I type the following command into the command line:
curl -X 'GET' -H 'Authorization: Bearer MYTOKEN' -v 'https://developer.api.autodesk.com/modelderivative/v2/designdata/MYURN/manifest'
I get an appropriate response.
However, making the same request from my Python script gives me a different output.
My Python script is as below:
### Verify if translation is complete and get the outURN
url = BASE_URL + 'modelderivative/v2/designdata/' + urn + '/manifest'
headers = {
    'Authorization': 'Bearer ' + ACCESS_TOKEN
}
r = requests.get(url, headers=headers)
content = eval(r.content)
print("==========================================")
print(content)
print("==========================================")
I have no idea what the difference is between the two (the terminal command and the request made from the Python script). Can someone point out what the problem is?
Or better yet, listen for the extraction.finished event, which notifies you when a translation is done.
I believe I had to pause for some time after beginning the conversion, to give the cloud time to convert the .sldprt file to .stl. The solution is to constantly poll the 'status' key in the manifest and proceed only when the status changes from 'pending' to 'success'.
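A minimal polling sketch along those lines (assuming BASE_URL, urn, and ACCESS_TOKEN are defined as in the script above; the 'failed' and 'timeout' values are assumptions about the other statuses the manifest can report):

import time
import requests

def wait_for_manifest(base_url, urn, access_token, delay=5):
    # Poll the manifest endpoint until the translation leaves 'pending'
    url = base_url + 'modelderivative/v2/designdata/' + urn + '/manifest'
    headers = {'Authorization': 'Bearer ' + access_token}
    while True:
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        manifest = r.json()  # r.json() parses the JSON body; safer than eval()
        status = manifest.get('status')
        if status == 'success':
            return manifest
        if status in ('failed', 'timeout'):
            raise RuntimeError('Translation ended with status ' + status)
        time.sleep(delay)  # still 'pending' (or 'inprogress'); wait and retry

# manifest = wait_for_manifest(BASE_URL, urn, ACCESS_TOKEN)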
Hi, I am attempting to upload a file to a server with a POST via requests, but every time I get errors. If I do the same with cURL it goes through, but I am not familiar with cURL or with uploading files in general (mostly GET requests), so I have no clue what it is doing differently. I am on Windows and am running the code below. The upload target is McAfee ePO, so I am not sure whether their API is just super picky or what the difference is, but every Python example I have tried for uploading a file via the requests module has failed for me.
url = "https://server.url.com:1234/remote/repository.checkInPackage.do?&allowUnsignedPackages=True&option=Normal&branch=Evaluation"
user = "domain\user"
password = "mypass"
filepath = "C:\\my\\folder\\with\\afile.zip"
with open(filepath, "rb") as f:
    file_dict = {"file": f}
    response = requests.post(url, auth=(user, password), files=file_dict)
I usually get an error as follows:
'Error 0 :\r\njava.lang.reflect.InvocationTargetException\r\n'
If I use cURL it works, though:
curl.exe -k -s -u "domain\username:mypass" "https://server.url.com:1234/remote/repository.checkInPackage.do?&allowUnsignedPackages=True&option=Normal&branch=Evaluation" -F file=@"C:\my\folder\with\afile.zip"
I can't really see the difference, though, and am wondering what cURL is doing differently on the backend, or what I could be doing wrong when using Python.
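One difference worth ruling out (a guess, since the server's error is opaque): curl -F sends the multipart part with an explicit filename and a guessed content type, while requests sends only what you give it. A sketch that mirrors the cURL invocation more closely, reusing the URL and credentials from the question (the application/zip content type is an assumption):

import requests

url = ("https://server.url.com:1234/remote/repository.checkInPackage.do"
       "?&allowUnsignedPackages=True&option=Normal&branch=Evaluation")
user = "domain\\user"
password = "mypass"
filepath = "C:\\my\\folder\\with\\afile.zip"

with open(filepath, "rb") as f:
    # Pass a (filename, fileobj, content_type) tuple so the part carries
    # the same metadata curl -F would send.
    files = {"file": ("afile.zip", f, "application/zip")}
    # curl's -k flag disables certificate checks; verify=False matches that.
    response = requests.post(url, auth=(user, password), files=files, verify=False)

print(response.status_code, response.text)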
I am writing a script in Python that detects the language of a provided text.
I found the following command line that works in a terminal, but I would like to use it in my script.
Command:
curl -X POST "https://api.cognitive.microsofttranslator.com/detect?api-version=3.0" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'What language is this text written in?'}]"
In the script, elements like the client secret, the text, and so on should be in variables. I would also like to capture the result of the whole command in a variable and then print it to the user.
How can I do this?
I found the command line here.
The command line is essentially sending an HTTP request, so you just need the equivalent Python code. The code below is provided for reference:
import requests
import json

url = 'https://api.cognitive.microsofttranslator.com/detect?api-version=3.0'
body = [{"text": "你好"}]
headers = {
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': 'b12776c*****14f5',
    'Ocp-Apim-Subscription-Region': 'koreacentral',
}

r = requests.post(url, data=json.dumps(body), headers=headers)
result = json.loads(r.text)
a = result[0]["language"]  # language code detected for the first item
print(r.text)
print("Language = " + a)
I am a Splunk developer and need to create a Python script to get data from a website using an API call. I have no idea how to write a Python script.
I have a refresh token, with which we can get another token (an access token).
curl -X POST https://xxxx.com/api/auth/refreshToken -d <refresh token>
The above command returns only the access token, in text format.
curl -X GET https://xxxx.com/api/reporting/v0.1.0/training -g --header "Authorization:Bearer <access token>"| json_pp
Running the above command returns the data in JSON format.
I need to create a Python script for this type of API call.
Thanks in advance.
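For the two curl calls above specifically, a minimal sketch of the token-refresh flow might look like this (the host, paths, and the plain-text token response are taken from the commands above; the refresh token placeholder stays a placeholder):

import requests

BASE = "https://xxxx.com"
refresh_token = "<refresh token>"

# Step 1: exchange the refresh token for an access token (returned as plain text)
resp = requests.post(BASE + "/api/auth/refreshToken", data=refresh_token)
resp.raise_for_status()
access_token = resp.text.strip()

# Step 2: call the reporting endpoint with the access token as a Bearer header
headers = {"Authorization": "Bearer " + access_token}
resp = requests.get(BASE + "/api/reporting/v0.1.0/training", headers=headers)
resp.raise_for_status()
print(resp.json())  # the endpoint returns JSON, as the json_pp pipe suggests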
Say you have a file called rest.py; then:
import requests
from requests.auth import HTTPDigestAuth
import json

# Replace with the correct URL
url = "http://api_url"

# It is good practice not to hardcode credentials, so ask the user to enter them at runtime
myResponse = requests.get(url,
                          auth=HTTPDigestAuth(input("Username: "), input("Password: ")),
                          verify=True)
# print(myResponse.status_code)

# For a successful API call, the response code will be 200 (OK)
if myResponse.ok:
    # Load the response data into a dict variable.
    # json.loads takes only string or binary input, so use .content to fetch the raw body.
    # loads ("load string") converts JSON text into a Python data structure
    # (dict or list, depending on the JSON).
    jData = json.loads(myResponse.content)
    print("The response contains {0} properties".format(len(jData)))
    print("\n")
    for key in jData:
        print(key + " : " + str(jData[key]))
else:
    # If the response code is not OK (200), raise the resulting HTTP error with description
    myResponse.raise_for_status()
I'm trying to convert a curl command, which parses a PDF file via a GROBID server, into Python requests.
Basically, if I run the grobid server as follows,
./gradlew run
I can use the following curl command to get the parsed XML output for an academic paper, example.pdf:
curl -v --form input=@example.pdf localhost:8070/api/processHeaderDocument
However, I don't know how to convert this command into Python. Here is my attempt using requests:
GROBID_URL = 'http://localhost:8070'
url = '%s/processHeaderDocument' % GROBID_URL
pdf = 'example.pdf'
xml = requests.post(url, files=[pdf]).text
I got the answer. Basically, I missed the api segment in the URL, and the files argument should be a dictionary instead of a list:
GROBID_URL = 'http://localhost:8070'
url = '%s/api/processHeaderDocument' % GROBID_URL
pdf = 'example.pdf'
xml = requests.post(url, files={'input': open(pdf, 'rb')}).text  # 'input' matches curl's --form field name
Here is an example bash script from http://ceur-ws.bitplan.com/index.php/Grobid. Note that there is also a ready-to-use Python client available; see https://github.com/kermitt2/grobid_client_python. A rough requests-only equivalent of the bash script is sketched after it.
#!/bin/bash
# WF 2020-08-04
# call grobid service with paper from ceur-ws
v=2644
p=44
vol=Vol-$v
pdf=paper$p.pdf
if [ ! -f $pdf ]
then
  wget http://ceur-ws.org/$vol/$pdf
else
  echo "paper $p from volume $v already downloaded"
fi
curl -v --form input=@./$pdf http://grobid.bitplan.com/api/processFulltextDocument > $p.tei
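For comparison, a rough requests-only Python equivalent of that bash script (the volume and paper numbers and the bitplan host are taken from the script above):

import os
import requests

v, p = 2644, 44
vol = "Vol-%d" % v
pdf = "paper%d.pdf" % p

# Download the paper from ceur-ws.org if we don't already have it
if not os.path.isfile(pdf):
    r = requests.get("http://ceur-ws.org/%s/%s" % (vol, pdf))
    r.raise_for_status()
    with open(pdf, "wb") as f:
        f.write(r.content)
else:
    print("paper %d from volume %d already downloaded" % (p, v))

# Send the PDF to the GROBID service and save the TEI result
with open(pdf, "rb") as f:
    r = requests.post("http://grobid.bitplan.com/api/processFulltextDocument",
                      files={"input": f})
r.raise_for_status()
with open("%d.tei" % p, "w", encoding="utf-8") as out:
    out.write(r.text)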
I need to export a massive number of events from Splunk. Hence, for performance reasons, I resorted to using the REST API directly in my Python code rather than the Splunk SDK itself.
I found the following curl command to export results:
curl -ku username:password "https://splunk_host:port/servicesNS/admin/search/search/jobs/export" -d search="search index%3D_internal | head 3" -d output_mode=json
My attempt at simulating this using Python's HTTP libraries is as follows:
# Assume I have authenticated to Splunk and have a session key
base_url = "http://splunkhost:port"
search_job_urn = '/services/search/jobs/export'
myhttp = httplib2.Http(disable_ssl_certificate_validation=True)
searchjob = myhttp.request(base_url + search_job_urn, 'POST',
                           headers={'Authorization': 'Splunk %s' % sessionKey},
                           body=urllib.urlencode({'search': 'search index=indexname sourcetype=sourcename'}))[1]
print(searchjob)
The last print statement keeps printing all results until done. For large queries I get "Memory Errors". I need to be able to read the results in chunks (say 50,000 events), write them to a file, and reset the buffer for searchjob. How can I accomplish that?
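One way to avoid holding the whole response in memory is to stream it and write as you read. A sketch using requests instead of httplib2, since requests exposes streaming directly (the URL, search string, and sessionKey are taken from the code above; output_mode=json comes from the curl command):

import requests

base_url = "https://splunkhost:port"
search_job_urn = "/services/search/jobs/export"
headers = {"Authorization": "Splunk %s" % sessionKey}
data = {"search": "search index=indexname sourcetype=sourcename",
        "output_mode": "json"}

# stream=True defers downloading the body until we iterate over it,
# so only a small buffer is held in memory at a time.
with requests.post(base_url + search_job_urn, headers=headers, data=data,
                   stream=True, verify=False) as r:
    r.raise_for_status()
    with open("results.json", "w", encoding="utf-8") as out:
        count = 0
        for line in r.iter_lines(decode_unicode=True):  # one event per line
            if line:
                out.write(line + "\n")
                count += 1
                if count % 50000 == 0:
                    out.flush()  # flush to disk every 50,000 events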