I need to create an AWS Lambda function to execute a Python program.
I need to incorporate the following shell command in it.
curl https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region=="ap-southeast-1") | .ip_prefix'
Could someone guide me on this?
To simply shell out to curl and jq to get that data,
import subprocess
data = subprocess.check_output("""curl https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region=="ap-southeast-1") | .ip_prefix'""", shell=True)
but you really probably shouldn't do that since e.g. there's no guarantee you have curl and jq in the Lambda execution environment (not to mention the overhead).
Instead, if you have the requests library,
import requests
resp = requests.get("https://ip-ranges.amazonaws.com/ip-ranges.json")
resp.raise_for_status()
prefixes = {
    r["ip_prefix"]
    for r in resp.json()["prefixes"]
    if r["region"] == "ap-southeast-1"
}
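If this needs to run as a Lambda function, a minimal sketch of a handler built on the requests approach could look like the following (assuming requests is packaged with your deployment, since it is not part of the standard Lambda Python runtime):
import requests

def lambda_handler(event, context):
    # fetch the published AWS IP ranges and keep only ap-southeast-1 prefixes
    resp = requests.get("https://ip-ranges.amazonaws.com/ip-ranges.json")
    resp.raise_for_status()
    prefixes = sorted(
        r["ip_prefix"]
        for r in resp.json()["prefixes"]
        if r["region"] == "ap-southeast-1"
    )
    return {"prefixes": prefixes}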
I come from a Unix world and want to do ssh port forwarding ssh -L on a Windows 10 machine.
I have a working solution that uses a Python one-liner to find a free local port.
Now I wanted to ask the PowerShell users how to do this elegantly in pure PowerShell:
$LOCALPORT=$(python -c "import socket; s=socket.socket(); s.bind(('',0)); print(s.getsockname()[1]); s.close()")
To give you some framing, it would run in a script like this:
$USER="USERNAME"
$REMOTEHOST="REMOTEHOST"
$LOCALIP=$(Get-NetAdapter -Name "WiFi" | Get-NetIPAddress).IPv4Address
$LOCALPORT=$(python -c "import socket; s=socket.socket(); s.bind(('',0)); print(s.getsockname()[1]); s.close()")
$REMOTEIP=ssh $USER@$REMOTEHOST "cat `$HOME/var/ip|cut -d`':`' -f1"
$REMOTEPORT=ssh $USER@$REMOTEHOST "cat `$HOME/var/ip|cut -d`':`' -f2"
Start-Job ssh -L $LOCALIP`:$LOCALPORT`:$REMOTEIP`:$REMOTEPORT $USER@$REMOTEHOST -N -v -v -v
sleep 5
You can try this:
$usedPorts = (Get-NetTCPConnection | select -ExpandProperty LocalPort) + (Get-NetUDPEndpoint | select -ExpandProperty LocalPort)
5000..60000 | where { $usedPorts -notcontains $_ } | select -first 1
Get-NetTCPConnection and Get-NetUDPEndpoint both return the current connections/endpoints, and we just need LocalPort.
The next line iterates from 5000 to 60000 and keeps only the port numbers that are not in use; the last part of the pipeline returns the first result.
EDIT
I just found out that you can also use this to get currently used ports:
$usedPorts = (Get-NetTCPConnection).LocalPort + (Get-NetUDPEndpoint).LocalPort
This syntax is called member enumeration and is available since PowerShell v3.
You can read more about it here: https://stackoverflow.com/a/48888108/5805327
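For comparison, if you end up keeping the Python approach, the one-liner from the question can be written out as a small helper (just a sketch; note that the OS only guarantees the port is free while the socket is bound, so another process could still grab it before ssh does):
import socket

def free_local_port():
    # bind to port 0 so the OS picks an unused ephemeral port for us
    s = socket.socket()
    s.bind(("", 0))
    port = s.getsockname()[1]
    s.close()
    return port

print(free_local_port())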
I have the following two shell scripts.
nodes.sh:
#!/bin/bash
NODE_IDs=$(docker node ls --format "{{.ID}}")
for NODE_ID in ${NODE_IDs}
do
docker node inspect $NODE_ID | jq -r '.[] | {node:.ID, ip:.Status.Addr}'
done | jq -s
nodes.sh gives the following output (run as ./nodes.sh or cat ./nodes.sh | bash):
[
{
"node": "b2d9g6i9yp5uj5k25h1ehp26e",
"ip": "192.168.1.123"
},
{
"node": "iy25xmeln0ns7onzg4jaofiwo",
"ip": "192.168.1.125"
}
]
node_detail.sh:
#!/bin/bash
docker node inspect b2d | jq '.[] | {node: .ID, ip: .Status.Addr}'
whereas node_detail.sh gives (./node_detail.sh or cat ./node_detail.sh | bash):
{
"node": "b2d9g6i9yp5uj5k25h1ehp26e",
"ip": "192.168.1.123"
}
Problem: I would like to run both scripts from Python subprocess.
I can run node_detail.sh and get its output with the following code:
>>> import subprocess
>>> proc = subprocess.Popen('./node_detail.sh', stdout=subprocess.PIPE, shell=True)
>>> proc.stdout.read()
'{\n "node": "b2d9g6i9yp5uj5k25h1ehp26e",\n "ip": "192.168.1.123"\n}\n'
I wrote the following code to get the output from nodes.sh:
>>> import subprocess
>>> proc = subprocess.Popen('./nodes.sh', stdout=subprocess.PIPE, shell=True)
Now I am getting the following error:
>>> jq - commandline JSON processor [version 1.5-1-a5b5cbe]
Usage: jq [options] <jq filter> [file...]
jq is a tool for processing JSON inputs, applying the
given filter to its JSON text inputs and producing the
filter's results as JSON on standard output.
The simplest filter is ., which is the identity filter,
copying jq's input to its output unmodified (except for
formatting).
For more advanced filters see the jq(1) manpage ("man jq")
and/or https://stedolan.github.io/jq
Some of the options include:
-c compact instead of pretty-printed output;
-n use `null` as the single input value;
-e set the exit status code based on the output;
-s read (slurp) all inputs into an array; apply filter to it;
-r output raw strings, not JSON texts;
-R read raw strings, not JSON texts;
-C colorize JSON;
-M monochrome (don't colorize JSON);
-S sort keys of objects on output;
--tab use tabs for indentation;
--arg a v set variable $a to value <v>;
--argjson a v set variable $a to JSON value <v>;
--slurpfile a f set variable $a to an array of JSON texts read from <f>;
See the manpage for more options.
Error: writing output failed: Broken pipe
Error: writing output failed: Broken pipe
Why am I getting Error: writing output failed: Broken pipe?
In nodes.sh, rather than invoking jq -s without a filter, invoke it as jq -s . (the trailing . is the identity filter). With no filter argument, jq just prints its usage message and exits, which is the output you are seeing; the broken pipe errors come from the inner jq invocations, whose output has nowhere to go once the final jq has exited.
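Once nodes.sh ends in jq -s ., a minimal sketch for calling it from Python and parsing the result (assuming the script is executable in the current working directory) could be:
import json
import subprocess

proc = subprocess.run("./nodes.sh", stdout=subprocess.PIPE, check=True)
nodes = json.loads(proc.stdout)  # list of {"node": ..., "ip": ...} dicts
for node in nodes:
    print(node["node"], node["ip"])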
I am trying to store the output of a cmd command as a variable in Python.
To achieve this I am using os.system(), but os.system() just runs the process; it doesn't capture the output.
import os
PlatformName = os.system("adb shell getprop | grep -e 'bt.name'")
DeviceName = os.system("adb shell getprop | grep -e '.product.brand'")
DeviceID = os.system("adb shell getprop | grep -e 'serialno'")
Version = os.system("adb shell getprop | grep -e 'version.release'")
print(PlatformName)
print(DeviceName)
print(DeviceID)
print(Version)
Then I tried to use the subprocess module.
import subprocess
import os
PlatformName = subprocess.check_output(["adb shell getprop | grep -e 'bt.name'"])
DeviceName = subprocess.check_output(["adb shell getprop | grep -e '.product.brand'"])
DeviceID = subprocess.check_output(["adb shell getprop | grep -e 'serialno'"])
Version = subprocess.check_output(["adb shell getprop | grep -e 'version.release'"])
print(PlatformName)
print(DeviceName)
print(DeviceID)
print(Version)
I am getting the following error:
FileNotFoundError: [WinError 2] The system cannot find the file specified
How can I store the output of the command as a variable?
The issues here:
passing arguments like this (a string in a list, with spaces) is really not recommended
passing arguments like this needs shell=True to have even a slight chance of working, and shell=True is known for security issues (and other issues as well, like non-portability)
grep is not standard on Windows, and the pattern is a regex, which means you'd probably have to escape the dot ("bt\.name")
when nothing is found, grep returns 1, which would make check_output fail
when something is found, grep returns the match(es) plus a newline that you'd have to strip
I'd rewrite this:
PlatformName = subprocess.check_output(["adb shell getprop | grep -e 'bt.name'"])
as:
output = subprocess.check_output(["adb","shell","getprop"])
platform_name = next((line for line in output.decode().splitlines() if "bt.name" in line),"")
The second line is a "native" version of grep (without regexes). It returns the first line containing "bt.name" in the output, or an empty string if there is no match.
You don't need grep here (the above is not strictly equivalent, as it yields only the first occurrence rather than all of them, but that should be okay in your case), and your clients may not have grep installed on Windows.
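A sketch extending that idea to all four properties from the question (the substrings below just mirror the grep patterns; adjust them to the exact getprop keys on your device):
import subprocess

# run getprop once and split the output into lines
output = subprocess.check_output(["adb", "shell", "getprop"]).decode()
lines = output.splitlines()

def find_prop(fragment):
    # return the first getprop line containing the fragment, or "" if absent
    return next((line for line in lines if fragment in line), "")

platform_name = find_prop("bt.name")
device_name = find_prop(".product.brand")
device_id = find_prop("serialno")
version = find_prop("version.release")
print(platform_name, device_name, device_id, version)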
Hey, I had the same problem as you. subprocess can do what you want, even with shell=False. The trick is the communicate() method.
import subprocess

# cmdCode is the command to run (e.g. ["adb", "shell", "getprop"]) and
# workingDir is the directory to run it in
with subprocess.Popen(cmdCode,
                      stdin=subprocess.PIPE,
                      stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE,
                      cwd=workingDir,
                      bufsize=1,
                      universal_newlines=True) as proc:
    output, errors = proc.communicate()
    # output holds everything the command wrote to stdout
    # errors holds everything the command wrote to stderr
Now you just need a little function to scan the captured output for the information you need: PlatformName, etc.
Currently, I'm trying to convert a curl request to a Python script.
curl $(curl -u username:password -s https://api.example.com/v1.1/reports/11111?fields=download | jq ".report.download" -r) > "C:\sample.zip"
I have tried pycurl, with no success, due to my limited knowledge.
As a workaround, I have found that it is possible to run shell commands through Python.
https://www.raspberrypi.org/forums/viewtopic.php?t=112351
import os
os.system("curl -K.........")
And another solution (based on my search, a more common one) using subprocess:
import subprocess
subprocess.call(['command', 'argument'])
Currently, I'm not sure how to proceed and how to adapt this solution to my situation.
import os
os.system("curl $(curl -u username:password -s https://api.example.com/v1.1/reports/11111?fields=download | jq '.report.download' -r) > 'C:\sample.zip'")
'curl' is not recognized as an internal or external command,
operable program or batch file.
255
P.S. - Update v1
Any suggestions?
import requests
response = requests.get('https://api.example.com/v1.1/reports/11111?fields=download | jq ".report.download" -r', auth=('username', 'password'))
This works without the | jq ".report.download" part, but that is the main part: it is what ultimately gives the link to download the file.
Any way around it?
The error 'curl' is not recognized as an internal or external command means that Python couldn't find the location where curl is installed. If you have already installed curl, try giving the full path to where curl is installed. For example, if curl.exe is located in C:\System32, then try
import os
os.system("C:\System32\curl $(curl -u username:password -s https://api.example.com/v1.1/reports/11111?fields=download | jq '.report.download' -r) > 'C:\sample.zip'")
But that's definitely not a Pythonic way of doing things. I would instead suggest using the requests module.
You need to invoke the requests module twice for this: first to download the JSON content from https://api.example.com/v1.1/reports/11111?fields=download and get the new URL pointed to by report.download, and then invoke requests again to download the data from that new URL.
Something along these lines should get you going:
import requests
url = 'https://api.example.com/v1.1/reports/11111'
response = requests.get(url, params=(('fields', 'download'),),
auth=('username', 'password'))
report_url = response.json()['report']['download']
data = requests.get(report_url).content
with open(r'C:\sample.zip', 'wb') as f:
    f.write(data)
You can use this site to convert the actual curl part of your command to something that works with requests: https://curl.trillworks.com/
From there, just use the .json() method of the request object to do whatever processing you need to be doing.
Finally, you can save it like so:
import json
with open(r'C:\sample.zip', 'w') as f:
    json.dump(data, f)
I need to execute a shell command in Python and store the result in a variable. How can I do this?
I need to execute openssl rsautl -encrypt -inkey key and get the result to a variable.
---edit---
How can I execute
perl -e 'print "hello world"' | openssl rsautl -encrypt -inkey key
in Python and get the output?
You can use subprocess.check_output
from subprocess import check_output
out = check_output(["openssl", "rsautl", "-encrypt", "-inkey", "key"])
The output will be stored in out.
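For the piped version from the edit, you can pass the data that the perl one-liner would have printed as stdin instead, via the input argument (a sketch; adjust the key path as needed):
from subprocess import check_output

# equivalent of: perl -e 'print "hello world"' | openssl rsautl -encrypt -inkey key
out = check_output(
    ["openssl", "rsautl", "-encrypt", "-inkey", "key"],
    input=b"hello world",
)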
A simple way to execute a shell command is os.popen:
import os
cmdOutput1 = os.popen("openssl rsautl -encrypt -inkey key").readlines()
cmdOutput2 = os.popen("perl -e 'print \"hello world\"' | openssl rsautl -encrypt -inkey key").readlines()
All it takes is the command you want to run as a single string. It returns an open file object; calling .readlines() on it converts the output to a list, where each item corresponds to a single line of output from your command.