I'm trying to create a JSON file via the shell, but newlines are not allowed and throw an error:
Invalid control character at: line 5 column 26 (char 87), which points at the \n.
echo '{
"param1": "asdfasf",
"param2": "asffad",
"param3": "asdfsaf",
"param4": "asdfasf\nasfasfas"
}' | python -m json.tool > test.json
Assuming I'd like to preserve the newlines, how can I get this to output a JSON file?
UPDATE:
I'm thinking it has something to do with strict mode for python's json encoder/decoder.
If strict is False (True is the default), then control characters will
be allowed inside strings. Control characters in this context are
those with character codes in the 0-31 range, including '\t' (tab),
'\n', '\r' and '\0'.
https://docs.python.org/2/library/json.html
How can strict mode be set to False from within python -m json.tool?
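As far as I can tell, json.tool exposes no command-line flag for it, but strict=False can be exercised directly with json.loads; a minimal sketch:

```python
import json

# A JSON document containing a literal newline inside a string --
# exactly what json.tool rejects in its default strict mode.
raw = '{"param4": "asdfasf\nasfasfas"}'

# strict=False permits control characters (codes 0-31) inside strings.
data = json.loads(raw, strict=False)
print(data["param4"])
```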
Escaping the \ seems to do the trick:
echo '{
"param1": "asdfasf",
"param2": "asffad",
"param3": "asdfsaf",
"param4": "asdfasf\\nasfasfas"
}' | python -m json.tool > test.json
It creates valid JSON:
import json

with open('/home/test.json', 'rU') as f:
    js = json.load(f)
print(js)
print(js["param4"])
Output:
{'param1': 'asdfasf', 'param3': 'asdfsaf', 'param2': 'asffad', 'param4': 'asdfasf\nasfasfas'}
asdfasf
asfasfas
zsh's echo is replacing \n with an actual newline before the JSON ever reaches Python. You could escape it or use a heredoc instead:
python -m json.tool > test.json << EOF
{
"param1": "asdfasf",
"param2": "asffad",
"param3": "asdfsaf",
"param4": "asdfasf\nasfasfas"
}
EOF
I have the following two shell scripts.
nodes.sh:
#!/bin/bash
NODE_IDs=$(docker node ls --format "{{.ID}}")
for NODE_ID in ${NODE_IDs}
do
docker node inspect $NODE_ID | jq -r '.[] | {node:.ID, ip:.Status.Addr}'
done | jq -s
nodes.sh gives the following output (with ./nodes.sh or cat ./nodes.sh | bash):
[
{
"node": "b2d9g6i9yp5uj5k25h1ehp26e",
"ip": "192.168.1.123"
},
{
"node": "iy25xmeln0ns7onzg4jaofiwo",
"ip": "192.168.1.125"
}
]
node_detail.sh:
#!/bin/bash
docker node inspect b2d | jq '.[] | {node: .ID, ip: .Status.Addr}'
whereas node_detail.sh gives (./node_detail.sh or cat ./node_detail.sh | bash):
{
"node": "b2d9g6i9yp5uj5k25h1ehp26e",
"ip": "192.168.1.123"
}
Problem: I would like to run both scripts from Python's subprocess.
I can run node_detail.sh and get its output with the following code:
>>> import subprocess
>>> proc = subprocess.Popen('./node_detail.sh', stdout=subprocess.PIPE, shell=True)
>>> proc.stdout.read()
'{\n "node": "b2d9g6i9yp5uj5k25h1ehp26e",\n "ip": "192.168.1.123"\n}\n'
I wrote the following code to get the output from nodes.sh:
>>> import subprocess
>>> proc = subprocess.Popen('./nodes.sh', stdout=subprocess.PIPE, shell=True)
Now I am getting following error:
>>> jq - commandline JSON processor [version 1.5-1-a5b5cbe]
Usage: jq [options] <jq filter> [file...]
jq is a tool for processing JSON inputs, applying the
given filter to its JSON text inputs and producing the
filter's results as JSON on standard output.
The simplest filter is ., which is the identity filter,
copying jq's input to its output unmodified (except for
formatting).
For more advanced filters see the jq(1) manpage ("man jq")
and/or https://stedolan.github.io/jq
Some of the options include:
-c compact instead of pretty-printed output;
-n use `null` as the single input value;
-e set the exit status code based on the output;
-s read (slurp) all inputs into an array; apply filter to it;
-r output raw strings, not JSON texts;
-R read raw strings, not JSON texts;
-C colorize JSON;
-M monochrome (don't colorize JSON);
-S sort keys of objects on output;
--tab use tabs for indentation;
--arg a v set variable $a to value <v>;
--argjson a v set variable $a to JSON value <v>;
--slurpfile a f set variable $a to an array of JSON texts read from <f>;
See the manpage for more options.
Error: writing output failed: Broken pipe
Error: writing output failed: Broken pipe
Why am I getting Error: writing output failed: Broken pipe?
In nodes.sh, rather than invoking jq -s without a filter, invoke it as jq -s .. When no filter argument is given, jq apparently falls back to the identity filter only when it detects a terminal; run under subprocess there is no terminal, so it prints its usage text and exits, and the upstream stages then fail with the broken-pipe errors you see.
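When a script misbehaves only under subprocess, capturing stderr alongside stdout makes this kind of failure visible. A sketch, where the echo commands are a stand-in for ./nodes.sh:

```python
import subprocess

# Stand-in for './nodes.sh': writes to both streams and fails,
# mimicking jq printing its usage text and breaking the pipe.
proc = subprocess.Popen("echo out; echo 'usage text' >&2; exit 3",
                        shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(out.decode().strip())   # what the script produced
print(err.decode().strip())   # the diagnostic that was scrolling past
print(proc.returncode)
```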
I have 2 text files that I need to compare line by line.
I'm basically wanting to output either "matching" or "not matching" for each line depending on if it matches.
I've tried reading a few tutorials and using tools like diff and dircmp, but can't seem to find a way to do this. I don't care if it's bash, perl, python, etc. Both files are 243 lines.
Is there a command available in Linux to do this?
Here's an example of what I'm looking for...
File 1
Test
Hello
Example
File 2
Test
What
Example
And I'd want to output this:
matching
not matching
matching
In perl:
#!/usr/bin/perl
use strict;
use File::Slurp;
my @file1 = read_file 'file1', { chomp => 1 };
my @file2 = read_file 'file2', { chomp => 1 };
foreach (@file1) {
    my $line = shift @file2;
    print $_ eq $line ? "matching\n" : "not matching\n";
}
What you are after is an awk script of the following form:
$ awk '(NR==FNR){a[FNR]=$0;next}
!(FNR in a) { print "file2 has more lines than file1"; exit 1 }
{ print (($0 == a[FNR]) ? "matching" : "not matching") }
END { if (NR-FNR > FNR) { print "file1 has more lines than file2"; exit 1 } }' file1 file2
This script works on the basis that both of your files are 243 lines. You will need to sort both files before running the script, i.e. sort file1.txt > file1.sorted.txt, and the same for the other file.
#!/bin/bash
while read file1 <&3 && read file2 <&4; do
    if [[ $file1 == $file2 ]]; then
        echo "matching" >> three.txt
    else
        echo "not matching" >> three.txt
    fi
done 3</path/to/file1.sorted.txt 4</path/to/file2.sorted.txt
The above script reads both files line by line in lockstep, comparing each pair of lines with the if statement. If the two strings are identical it appends "matching" to three.txt, otherwise "not matching".
You will have to sort the data within both files before making the comparison.
I've tested it with the following data:
one.sorted.txt
abc
cba
efg
gfe
xyz
zxy
two.sorted.txt
abc
cbd
efh
gfe
xyz
zmo
three.txt
matching
not matching
not matching
matching
matching
not matching
It's best to use dedicated Linux file-comparison tools such as Meld or Vimdiff; they are pretty straightforward and very convenient.
You can run which meld to check whether it is installed; if not found, install it with:
sudo apt-get install meld
In addition, here is a simple python script to get the results you asked for:
#!/usr/bin/env python3
with open('1.txt') as f1:
    lines1 = [line.rstrip() for line in f1]
with open('2.txt') as f2:
    lines2 = [line.rstrip() for line in f2]
for i in range(min(len(lines1), len(lines2))):
    print("matching" if lines1[i] == lines2[i] else "not matching")
I have to use the below bash command in a Python script; it includes multiple pipes with grep, cut, and tr commands.
grep name | cut -d':' -f2 | tr -d '"'| tr -d ','
I tried to do the same using the subprocess module but didn't succeed.
Can anyone help me run the above command in a Python 3 script?
I have to get the below output from a file file.txt.
Tom
Jack
file.txt contains:
"name": "Tom",
"Age": 10
"name": "Jack",
"Age": 15
Actually I want to know how I can run the below bash command using Python.
cat file.txt | grep name | cut -d':' -f2 | tr -d '"'| tr -d ','
This works without having to use the subprocess library or any other OS-command-related library, only Python.
# Scan file.txt and print the value of every "name" field
with open("./file.txt") as my_file:
    for line in my_file:
        line_array = line.split()
        if line_array and line_array[0] == '"name":':
            print(line_array[1].replace('"', '').replace(',', ''))
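If the goal really is to run the original grep | cut | tr pipeline itself from Python 3, subprocess with shell=True can execute it verbatim. A sketch, feeding the question's sample text on stdin instead of reading file.txt:

```python
import subprocess

text = '"name": "Tom",\n"Age": 10\n"name": "Jack",\n"Age": 15\n'

# shell=True lets the whole pipe run exactly as it would in bash
cmd = "grep name | cut -d':' -f2 | tr -d '\"' | tr -d ','"
proc = subprocess.run(cmd, shell=True, input=text.encode(),
                      stdout=subprocess.PIPE)

# cut keeps the space that followed the colon, hence strip()
names = [line.strip() for line in proc.stdout.decode().splitlines()]
print(names)  # -> ['Tom', 'Jack']
```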
If you are not trying to parse a JSON file, or some other structured file for which a parser would be the better approach, just change your command into:
grep -oP '(?<="name":[[:blank:]]").*(?=",)' file.txt
You do not need any pipe at all.
This will give you the output:
Tom
Jack
Explanations:
-P activate perl regex for lookahead/lookbehind
-o just output the matching string not the whole line
Regex used: (?<="name":[[:blank:]]").*(?=",)
(?<="name":[[:blank:]]") is a positive lookbehind: it requires the match to be preceded by "name": followed by a blank character and an opening double quote. The positive lookahead (?=",) requires the extracted name to be followed by a closing double quote and a comma.
demo: https://regex101.com/r/JvLCkO/1
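The same lookaround idea can be checked with Python's re module (note that re does not support [[:blank:]], so this sketch uses a literal space instead):

```python
import re

text = '"name": "Tom",\n"Age": 10\n"name": "Jack",\n"Age": 15\n'

# (?<=...) positive lookbehind, (?=...) positive lookahead,
# mirroring the grep -oP pattern with a plain space for [[:blank:]]
print(re.findall(r'(?<="name": ").*(?=",)', text))  # -> ['Tom', 'Jack']
```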
I have a cURL command which returns some JSON results.
I have :
{
"all":[
{
"id":"1"
},
{
"id":"2"
},
{
"id":"3"
}
]
}
My goal is to retrieve all ID values into an array in Bash. I know how to retrieve a particular ID knowing its position.
Here is what I tried:
#!/bin/bash
CURL_COMM=$(curl https://DOMAIN/API -H "X-Auth-Token: TOKEN" | python -c "import sys, json; print json.load(sys.stdin)['all'][0]['id']")
echo "$CURL_COMM"
This will output 1 as expected, but I need to retrieve the other IDs without knowing the number of elements. Is that possible?
And is it possible to retrieve values contained in an array, like:
{
"all":[
{
"id":"1",
"actions":[
"power",
"reboot"
]
},
{
"id":"2"
},
{
"id":"3"
}
]
}
Is it possible to retrieve the actions list?
You can use list comprehension:
python -c "import sys, json; print [i['id'] for i in json.load(sys.stdin)['all']]"
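A Python 3 variant that prints one id per line is easier to collect into a bash array; a self-contained sketch over the sample document:

```python
import json

doc = '{"all":[{"id":"1"},{"id":"2"},{"id":"3"}]}'

# One id per line, so bash's word splitting in IDS=($(...)) works
for item in json.loads(doc)['all']:
    print(item['id'])
```

From the shell, pipe the curl output into a python3 -c version of this loop and wrap it in IDS=($(...)) to get a bash array.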
As always, jq makes working with JSON from the command line easy.
First one as a string holding a JSON array:
$ CURL_COMM=$(curl blah | jq -c '[ .all[].id | tonumber ]')
$ echo $CURL_COMM
[1,2,3]
First one as a bash array:
$ CURL_COMM=($(curl blah | jq '.all[].id | tonumber'))
$ echo ${CURL_COMM[1]}
2
Second one:
$ jq -c '.all[0].actions' example.json
["power","reboot"]
I know little of python other than this simple invocation: python -m json.tool {someSourceOfJSON}
Note how the source document below is ordered "id", "z", "a", but the resulting JSON document presents the attributes as "a", "id", "z".
$ echo '{ "id": "hello", "z": "obj", "a": 1 }' | python -m json.tool
{
"a": 1,
"id": "hello",
"z": "obj"
}
How can I make json.tool maintain the order of the attributes from the original JSON document, or can it at all?
The python version is whatever comes with this MacBookPro
$ python --version
Python 2.7.15
I'm not sure if it's possible with python -m json.tool, but it is with a one-liner (which I'm guessing is the actual X/Y root problem). Note that Python 2's json.tool sorts keys; since Python 3.5, json.tool preserves the input order by default and sorting is opt-in via --sort-keys.
echo '{ "id": "hello", "z": "obj", "a": 1 }' | python -c "import json, sys, collections; print(json.dumps(json.loads(sys.stdin.read(), object_pairs_hook=collections.OrderedDict), indent=4))"
Result:
{
"id": "hello",
"z": "obj",
"a": 1
}
This is essentially the following code, but without the intermediate names and with some readability compromises, such as the one-line imports.
import json
import sys
import collections
# Read from stdin / pipe as a str
text = sys.stdin.read()
# Deserialise text to a Python object.
# It's most likely to be a dict, depending on the input
# Use `OrderedDict` type to maintain order of dicts.
my_obj = json.loads(text, object_pairs_hook=collections.OrderedDict)
# Serialise the object back to text
text_indented = json.dumps(my_obj, indent=4)
# Write it out again
print(text_indented)
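A self-contained check of the round-trip (on Python 3.7+ plain dicts already preserve insertion order, so object_pairs_hook mainly matters on older versions):

```python
import json
from collections import OrderedDict

src = '{ "id": "hello", "z": "obj", "a": 1 }'
obj = json.loads(src, object_pairs_hook=OrderedDict)

# Keys come back in document order: id, z, a
print(json.dumps(obj, indent=4))
```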