!curl commands in Python notebook fail with 500 Internal error - python

I am running the code below in Google Colab and get "The server encountered an internal error or misconfiguration and was unable to complete your request." If I run the command without passing in the variable $data, as shown further down, it runs perfectly fine. It only seems to fail when I loop through the file and pass in variables.
import csv
import json

reader = csv.reader(open('/content/drive/MyDrive/file5.csv'))
for row in reader:
    data = {"snps": row[0], "pop": "YRI", "r2_threshold": "0.9", "maf_threshold": "0.01"}
    data = json.dumps(data)
    data = "'{}'".format(data)
    !curl -k -H "Content-Type: application/json" -X POST -d "$data" 'https://ldlink.nci.nih.gov/LDlinkRest/snpclip?token=e3e559472899'
This works:
!curl -k -H "Content-Type: application/json" -X POST -d '{"snps": "rs3\nrs4", "pop":"YRI", "r2_threshold": "0.1", "maf_threshold": "0.01"}' 'https://ldlink.nci.nih.gov/LDlinkRest/snpclip?token=e3e559472899'

UPDATE: Actually, ipython does allow you to run ! escapes in a loop; the actual error in your code is purely in the incorrect quoting (especially the addition of single quotes around the data value, but there could be more).
Original (partially incorrect) answer below.
The ! escape tells your notebook (Google Colab, Jupyter, or what have you; basically whatever is running ipython as a kernel or similar) to leave Python and run a shell command. Python itself has no support for this; the closest approximation would be something like
import subprocess
...
for row in reader:
    data = {"snps": row[0], "pop": "YRI", "r2_threshold": "0.9", "maf_threshold": "0.01"}
    data = json.dumps(data)
    # This was wrong on so many levels
    # data = "'{}'".format(data)
    subprocess.run(
        ['curl', '-k',
         '-H', "Content-Type: application/json",
         '-X', 'POST', '-d', data,
         'https://ldlink.nci.nih.gov/LDlinkRest/snpclip?token=e3e559472899'],
        text=True, check=True)
though avoiding subprocess and running Python urllib or requests code to perform the POST would be more efficient and elegant, and give you more control over what gets sent and how it gets handled.
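For illustration, a rough sketch of that requests approach, reusing the file path, payload, and endpoint from the question; verify=False mirrors curl's -k and can be dropped if the certificate validates:

import csv
import requests  # assumed to be available in the Colab environment

url = 'https://ldlink.nci.nih.gov/LDlinkRest/snpclip?token=e3e559472899'
with open('/content/drive/MyDrive/file5.csv') as f:
    for row in csv.reader(f):
        payload = {"snps": row[0], "pop": "YRI",
                   "r2_threshold": "0.9", "maf_threshold": "0.01"}
        # json= serializes the dict and sets Content-Type: application/json for us
        resp = requests.post(url, json=payload, verify=False)
        print(resp.status_code, resp.text)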
Properly quoting strings when translating between shell commands and Python requires understanding the shell's quoting behavior. I'll just briefly note that I left double quotes where they were not incorrect in your original command, but otherwise preferred single quotes; and of course, data now refers to a proper Python variable with that name, not a shell variable with the same name.
To reiterate: ipython (which is what your notebook is an interface to) knows how to run both Python code and shell script code via !; but once you ask it to run Python code, ipython hands it over to Python proper, and you are no longer in ipython.

Related

Content limit to update/create GitHub file with API with Python

I am trying to use the GitHub API to update one of my files, but I get an error when updating files of a larger size. First of all, I should mention that https://docs.github.com/en/rest/repos/contents#size-limits says files between 1-100MB must use the raw or object media type, and files greater than 100MB cannot be sent at all. But my file is only 149KB and it still doesn't work.
I use this script to update my files:
import os , subprocess
Server_name_result = open(f"httpx_new.txt", "rb").read()
Server_name_encoded = subprocess.getoutput(f"""echo "$(cat httpx_new.txt)" | base64 -w 0 """)
sha_file = subprocess.getoutput("""curl -s -H "Authorization: Bearer <TOKEN>" https://api.github.com/repos/PrivetUser/PrivetRepo/contents/Servers.txt | jq -r '.sha' """)
os.system(f"""curl -X PUT -H "Accept: application/vnd.github+json" -H "Authorization: Bearer <TOKEN>" https://api.github.com/repos/PrivetUser/PrivetRepo/contents/Servers.txt -d '{{"message":"a new commit message","committer":{{"name":"name","email":"email#gmail.com"}},"content":"{Server_name_encoded}","sha":"{sha_file}"}}'""")
When my new file is 72KB the script works fine, but when the file grows to 149KB the script doesn't work at all and the last command is simply skipped. I believe the problem is the content parameter: because of it the command becomes very long, and it gets skipped. I tried double encoding but it doesn't work. I have tested most of the libraries and snippets for updating file content, but none of them work and this one has this bug.
What is the best way to update file content with Python, and how can I solve this problem so my command executes?
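One way to sidestep the long command line entirely is to do the whole update in Python with requests rather than shelling out to curl, jq, and base64. A rough sketch, reusing the repo, file, and placeholder values from the question (committer name/email are placeholders):

import base64
import requests

token = "<TOKEN>"
url = "https://api.github.com/repos/PrivetUser/PrivetRepo/contents/Servers.txt"
headers = {"Accept": "application/vnd.github+json",
           "Authorization": f"Bearer {token}"}

# base64-encode the new content in Python instead of piping through base64
with open("httpx_new.txt", "rb") as f:
    content = base64.b64encode(f.read()).decode("ascii")

# fetch the current blob sha (what the curl | jq pipeline did)
sha = requests.get(url, headers=headers).json()["sha"]

payload = {"message": "a new commit message",
           "committer": {"name": "name", "email": "email@example.com"},
           "content": content,
           "sha": sha}
resp = requests.put(url, headers=headers, json=payload)
print(resp.status_code, resp.reason)

Because the content travels in the request body rather than on a command line, its size is no longer limited by how long a shell command can be.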

How to get data from web in python using curl?

In bash, I use this script:
myscript.sh
file="/tmp/vipin/kk.txt"
curl -L "myabcurlx=10&id-11.com" > $file
cat $file
./myscript.sh gives me the output below
1,2,33abc
2,54fdd,fddg3
3,fffff,gfr54
When I tried to fetch it using Python, I tried the code below -
mypython.py
command = curl + ' -L ' + 'myabcurlx=10&id-11.com'
output = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE).stdout.read().decode('ascii')
print(output)
python mypython.py throws an error. Can you please point out what is wrong with my code?
Error :
/bin/sh: line 1: &id=11: command not found
Wrong Parameter
command = curl + ' -L ' + 'myabcurlx=10&id-11.com'
Print out what this string is, or just think about it. Assuming that curl is the string 'curl' or '/usr/bin/curl' or something, you get:
curl -L myabcurlx=10&id-11.com
That’s obviously not the same thing you typed at the shell. Most importantly, that last argument is not quoted, and it has a & in the middle of it, which means that what you’re actually asking it to do is to run curl in the background and then run some other program that doesn’t exist, as if you’d done this:
curl -L myabcurlx=10 &
id-11.com
Obviously you could manually include quotes in the string:
command = curl + ' -L ' + '"myabcurlx=10&id-11.com"'
… but that won’t work if the string is, say, a variable rather than a literal in your source—especially if that variable might have quote characters within it.
The shlex module has helpers for quoting things properly.
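For example (a quick sketch using the URL string from the question):

import shlex

url = 'myabcurlx=10&id-11.com'
command = 'curl -L ' + shlex.quote(url)
print(command)  # curl -L 'myabcurlx=10&id-11.com' -- the & is now safely quoted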
But the easiest thing to do is just not try to build a command line in the first place. You aren’t using any shell features here, so why add the extra headaches, performance costs, problems with the shell getting in the way of your output and retcode, and possible security issues for no benefit?
Make the arguments a list rather than a string:
command = [curl, '-L', 'myabcurlx=10&id-11.com']
… and leave off the shell=True
And it just works. No need to get spaces and quotes and escapes right.
Well, it still won’t work, because Popen doesn’t return output, it’s a constructor for a Popen object. But that’s a whole separate problem—which should be easy to solve if you read the docs.
But for this case, an even better solution is to use the Python bindings to libcurl instead of calling the command-line tool. Or, even better, since you’re not using any of the complicated features of curl in the first place, just use requests to make the same request. Either way, you get a response object as a Python object with useful attributes like text and headers and request.headers that you can’t get from a command line tool except by parsing its output as a giant string.
import subprocess

fileName = "/tmp/vipin/kk.txt"
with open(fileName, "w") as f:
    # run curl and send its stdout to the file
    subprocess.run(["curl", "-L", "myabcurlx=10&id-11.com"], stdout=f, check=True)
print(fileName)
recommended approaches:
https://docs.python.org/3.7/library/urllib.request.html#examples
http://docs.python-requests.org/en/master/user/install/
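In that spirit, a minimal requests sketch that mirrors the original bash script; the placeholder URL from the question is kept, so substitute a real http:// address:

import requests

file_name = "/tmp/vipin/kk.txt"
# requests follows redirects by default, like curl -L
resp = requests.get("http://myabcurlx=10&id-11.com")
with open(file_name, "w") as f:
    f.write(resp.text)
print(resp.text)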

Converting cURL in python request

I would like to know how to convert a cURL command into a python request.
Indeed, I am using this cURL command :
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary
'air_quality,host=raspberrypi value=200'
So it writes the value 200 into the database mydb. But when I try to put this command in a Python script, it doesn't work: I get a format error.
I think it is possible to do with Python but I don't know exactly how. First, I have to import this:
import requests
Then the command should be something like this:
requests.post("htp://localhost:8086/write?db=mydb
air_quality,host=raspberrypi value="+str(sensor_value))
My question is: how do I write the previous line correctly as a Python request?
Troubleshooting
@Jack I found the answer, the right command is:
payload='air_quality,host=raspberrypi value=100'
requests.post(url="http://localhost:8086/write?db=mydb", data=payload)
I checked the influxdb database mydb and it is working. Meanwhile, I would like to write the values coming from a sensor; the value is stored in the variable sensor_value. How do I get it in? I tried this:
payload='air_quality,host=raspberrypi value=sensor_value'
And I got this error : {"error":"unable to parse 'air_quality,host=raspberrypi value=sensor_value': invalid boolean"}
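The parse error happens because the literal text sensor_value is sent rather than the variable's value; interpolating the variable into the payload (as the earlier snippet in the question did with str(sensor_value)) should fix it, for example:

import requests

sensor_value = 200  # hypothetical reading; use the real sensor variable here
payload = 'air_quality,host=raspberrypi value={}'.format(sensor_value)
requests.post(url="http://localhost:8086/write?db=mydb", data=payload)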

python subprocess.popen redirect to create a file

I've been searching for how to do this without any success. I've inherited a python script for performing an hourly backup on our database. The original script is too slow, so I'm trying a faster utility. My new command would look like this if typed into a shell:
pg_basebackup -h 127.0.0.1 -F t -X f -c fast -z -D - --username=postgres > db.backup.tgz
The problem is that the original script uses call(cmd) and it fails if the above is the cmd string. I've been looking for how to modify this to use Popen but cannot find any examples where a file-creating redirect like > is used. pg_basebackup as shown will output to stdout. The only way I've succeeded so far is to change -D - to -D some.file.tgz and then move the file to the archive, but I'd rather do this in one step.
Any ideas?
Jay
Maybe like this?
import subprocess

with open("db.backup.tgz", "a") as stdout:
    p = subprocess.Popen(cmd_without_redirector, stdout=stdout, stderr=stdout, shell=True)
    p.wait()
Hmmm... The pg_basebackup executable must be able to attach to that file. If I open the file in the manner you suggest, I don't know the correct Python syntax to attach the process to it. If I try putting either " > " or " >> " in the string passed to call(cmd), Python pukes on it. That's the real problem I'm not finding any guidance on.
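One way that avoids both the shell and the > redirect is to open the file in Python and hand it to the child process as stdout; a sketch built from the command in the question (untested against a real server):

import subprocess

# Open the target file ourselves and pass it as the child's stdout,
# which is exactly what the shell's > db.backup.tgz would have done.
cmd = ["pg_basebackup", "-h", "127.0.0.1", "-F", "t", "-X", "f",
       "-c", "fast", "-z", "-D", "-", "--username=postgres"]
with open("db.backup.tgz", "wb") as out:
    subprocess.run(cmd, stdout=out, check=True)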

subprocess.Popen not escaping command line arguments properly?

I am trying to call the following curl command with python:
curl -k -F file=@something.zip -F "data={\\"title\\":\\"Another App\\"}" -Lu usr:pwd https://build.phonegap.com/api/v0/apps
For it to work, I've found that the json I'm passing in data needs to be escaped with backslashes.
I can call this command with...
os.system('curl -k -F file=@something.zip -F "data={\\"title\\":\\"Another App\\"}" -Lu usr:pwd https://build.phonegap.com/api/v0/apps')
and it works.
However, when I try to use the subprocess module like this...
s = 'curl -k -F file=@something.zip -F "data={\\"title\\":\\"Another App\\"}" -Lu usr:pwd https://build.phonegap.com/api/v0/apps'
push = subprocess.Popen(s.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, errors = push.communicate()
print output
...the curl doesn't work and I get an error from the api I'm using that I'm using invalid parameters, which I've gotten in the past when I've used improperly escaped json.
What is going on here? Why can I call this command with os.system and not subprocess.Popen? So far my hypothesis is that the split is messing up something in the string, but I didn't find anything that looked wrong when I checked the output of s.split().
Perhaps use shell=True:
push = subprocess.Popen(s, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Instead of doing
s.split()
try using shlex from the standard library
import shlex
shlex.split(s)
Shlex allows you to configure the escaping behavior (see the link for details, though the defaults might be sufficient).
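To see the difference on (a shortened form of) the command from the question:

import shlex

s = 'curl -k -F "data={\\"title\\":\\"Another App\\"}" -Lu usr:pwd https://build.phonegap.com/api/v0/apps'
print(s.split())       # the quoted -F argument is broken at the space: ... '"data={\\"title\\":\\"Another', 'App\\"}"' ...
print(shlex.split(s))  # it stays in one piece: ... '-F', 'data={"title":"Another App"}' ...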
Specifically, where you are going wrong is that the command is split at the space inside the quoted argument, producing the tokens
\"Another
and
App\"}"
because .split() splits on whitespace by default; you'll need to change the split behaviour as others have said.
