I have a Python script that prints a JSON string followed by a plain string:
# script.py
import sys

print('{"name": "bob", "height": 4, "weight": 145}')
print('hello')
sys.stdout.flush()
A Node.js app calls the Python script via child_process, but I'm getting an error on the output. How can I process the Python output in Node.js?
// nodejs
const { spawn } = require('child_process');

const proc = spawn('python3', ['./script.py', toSend]);
proc.stdout.on('data', function (data) {
    const message = JSON.parse(data);
    console.log(message);
});
I'm getting a SyntaxError: Unexpected token when running this.
In your python script...
This line
print('{"name": "bob", "height": 4, "weight": 145}')
should be changed
import json
print(json.dumps({"name": "bob", "height": 4, "weight": 145}))
That will make sure the JSON is formatted correctly, so that the JSON string can be parsed by Node (although your current version happens to be fine). However, in this case the real problem is what follows...
You are ending your script with
print('hello')
which means that JSON.parse() may try to parse hello along with the JSON, because you are reading everything the script writes to stdout. hello is not JSON, so JSON.parse() fails. Remove that line as well.
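Putting both fixes together, a minimal sketch of the corrected script (assuming the single object shown above is all you need to send):

# script.py
import json
import sys

# Build the payload as a dict and let json.dumps() serialize it.
payload = {"name": "bob", "height": 4, "weight": 145}

# Write exactly one JSON document to stdout and nothing else, so the
# Node side can safely JSON.parse() everything it reads.
print(json.dumps(payload))
sys.stdout.flush()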
If you have more than one JSON object to send, as you stated in your comments,
You can either combine all the data into a single JSON object
my_object = {"data": "info"....} and json.dumps() that single larger object,
or
obj1 = {}
obj2 = {}
myobjects = [obj1, obj2]
print(json.dumps(myobjects))
and the Node side will receive a list of objects that it can iterate over.
I'm trying to get the value of ["pooled_metrics"]["vmaf"]["harmonic_mean"] from a JSON file I want to parse using Python. This is the current state of my code:
for crf in crf_ranges:
    vmaf_output_crf_list_log = job['config']['output_dir'] + '/' + build_name(stream) + f'/vmaf_{crf}.json'
    # read the vmaf_output_crf_list_log file and get value from ["pooled_metrics"]["vmaf"]["harmonic_mean"]
    with open(vmaf_output_crf_list_log, 'r') as json_vmaf_file:
        # load the json_string["pooled_metrics"] into a python dictionary
        vm = json.loads(json_vmaf_file.read())
    vmaf_values.append((crf, vm["pooled_metrics"]["vmaf"]["harmonic_mean"]))
This will give me back the following error:
AttributeError: 'dict' object has no attribute 'loads'
I always get back the same AttributeError no matter whether I use "load" or "loads".
I validated the contents of the JSON file with various online validators and it is valid, but I am still not able to load the JSON for further parsing operations.
I expect that I can load a file that contains valid JSON data. The content of the file looks like this:
{
    "frames": [
        {
            "frameNum": 0,
            "metrics": {
                "integer_vif_scale2": 0.997330
            }
        }
    ],
    "pooled_metrics": {
        "vmaf": {
            "min": 89.617207,
            "harmonic_mean": 99.868023
        }
    },
    "aggregate_metrics": {
    }
}
Can somebody give me some advice on this behavior? Why does it seem so impossible to load this JSON file?
loads is a function of the json library, as the docs say: https://docs.python.org/3/library/json.html#json.loads. Since you are getting an AttributeError, you have most likely created another variable named "json" somewhere; when you call json.loads, the name json resolves to that variable instead of the module, and the variable has no loads method.
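A minimal sketch of how that shadowing produces exactly this error (the variable below is made up for illustration):

import json

json = {"some": "data"}          # accidentally reuses the module name
# json.loads('{"a": 1}')         # AttributeError: 'dict' object has no attribute 'loads'

del json                         # remove the shadowing variable...
import json                      # ...so the module is reachable again
print(json.loads('{"a": 1}'))    # {'a': 1}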
Python 3.8.5 with Pandas 1.1.3
This code, which converts a Python list of JSON dicts to CSV using pandas, works without issue:
import csv
import pandas as pd
import json
data = [{"results": [{"type": "ID", "value": "1234", "normalized": "1234", "count": 1, "offsets": [{"start": 14, "end": 25}], "id_b": "10"}, {"type": "ID", "value": "5678", "normalized": "5678", "count": 1, "offsets": [{"start": 32, "end": 43}], "id_b": "11"}], "responseHeaders": {"Date": "Tue, 25 May 2021 14:41:28 GMT", "Content-Type": "application/json", "Content-Length": "350", "Connection": "keep-alive", "Server": "openresty", "X-StuffAPI-ProcessedLanguage": "eng", "X-StuffAPI-Request-Id": "abcdef", "Strict-Transport-Security": "max-age=63072000; includeSubDomains; preload", "X-StuffAPI-App-Id": "123456789", "X-StuffAPI-Concurrency": "1"}}]
pd.read_json(json.dumps(data)).to_csv('file.csv')
The value in the data variable above is pasted directly from the response of an API call to one of our services. The problem occurs when I attempt to do everything including the API call in one script. Let's first look at everything in the script that seems to be working fine:
import csv
import pandas as pd
import json
from stuff.api import API, DocumentParameters, StuffAPIException
def run(key, url):
    # Create an API instance
    api = API(user_key=key, service_url=url)
    # submit data from a text file to the API parser
    file1 = open("123.txt", "r")
    text_data = file1.read()
    params = DocumentParameters()
    params["content"] = text_data
    file1.close()
    try:
        return api.data(params)
    except StuffAPIException as exception:
        print(exception)

if __name__ == '__main__':
    result = run('1234', 'https://192.168.0.125:8100/rest/')
    y = json.dumps(result)
    t = type(y)
    print(y)
    print(t)
The above print(y) statement prints exactly the data shown in the data variable in the first code block. The print(t) statement was there to capture the return type and help diagnose the issue; it prints <class 'str'>.
So now we add this right under the print(t) line (exactly as in the first code block):
pd.read_json(json.dumps(result)).to_csv('file.csv')
And I get this error:
ValueError: Mixing dicts with non-Series may lead to ambiguous ordering.
I have seen the many threads about this error, but none of them seem to pertain exactly to what's happening here.
With my limited experience thus far, I am guessing this issue may be due to the return type being a string? I'm not sure, but this troubleshooting step is just the first hurdle: eventually I need to parse the data into separate columns of the CSV file, but for now I just need to get it into the CSV file without errors.
I understand you won't be able to fully reproduce this without access to my server, but hoping that's not needed to figure this out.
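As a sketch of where to look: the pasted data is a list, while api.data(params) may well return a single dict, and pd.read_json treats those two shapes differently. One way to flatten a response shaped like the data variable into CSV columns is pd.json_normalize (available in pandas 1.1.3); the records below are abbreviated, made-up stand-ins:

import pandas as pd

# Stand-in for the parsed API response: a list of dicts, each holding
# a "results" list, like the data variable in the first code block.
data = [{"results": [{"type": "ID", "value": "1234", "count": 1},
                     {"type": "ID", "value": "5678", "count": 1}],
         "responseHeaders": {"Content-Type": "application/json"}}]

# record_path="results" emits one row per entry in each "results" list,
# which sidesteps the dict/Series ambiguity that read_json trips over.
pd.json_normalize(data, record_path="results").to_csv('file.csv')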
I would like to pretty print a JSON file where I can see the array IDs. I'm working on a Cisco Nexus switch with NX-OS that runs Python (2.7.11). Looking at the following code:
from cli import clid  # NX-OS on-box Python API
import json

cmd = 'show interface Eth1/1 counters'
out = json.loads(clid(cmd))
print(json.dumps(out, sort_keys=True, indent=4))
This gives me:
{
    "TABLE_rx_counters": {
        "ROW_rx_counters": [
            {
                "eth_inbytes": "442370508663",
                "eth_inucast": "76618907",
                "interface_rx": "Ethernet1/1"
            },
            {
                "eth_inbcast": "4269",
                "eth_inmcast": "49144",
                "interface_rx": "Ethernet1/1"
            }
        ]
    },
    "TABLE_tx_counters": {
        "ROW_tx_counters": [
            {
                "eth_outbytes": "217868085254",
                "eth_outucast": "66635610",
                "interface_tx": "Ethernet1/1"
            },
            {
                "eth_outbcast": "1137",
                "eth_outmcast": "557815",
                "interface_tx": "Ethernet1/1"
            }
        ]
    }
}
But I need to access the fields like this:
rxuc = int(out['TABLE_rx_counters']['ROW_rx_counters'][0]['eth_inucast'])
rxmc = int(out['TABLE_rx_counters']['ROW_rx_counters'][1]['eth_inmcast'])
rxbc = int(out['TABLE_rx_counters']['ROW_rx_counters'][1]['eth_inbcast'])
txuc = int(out['TABLE_tx_counters']['ROW_tx_counters'][0]['eth_outucast'])
txmc = int(out['TABLE_tx_counters']['ROW_tx_counters'][1]['eth_outmcast'])
txbc = int(out['TABLE_tx_counters']['ROW_tx_counters'][1]['eth_outbcast'])
So I need to know the array ID (in this example the zeros and ones) to access the information for this interface. That seems easy enough with only 2 array entries, but imagine 500. Right now, I always copy the JSON to jsoneditoronline.org, where I can see the IDs.
Is there an easy way to make the IDs visible within Python itself?
What you posted is valid JSON.
The image is from a tool that takes the JSON data and displays it. You can display the data any way you want, but the contents of the file need to be valid JSON.
If you do not need to load the JSON later, you can do with it whatever you like, but json.dumps() will only ever give you JSON.
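To make the indices visible from Python itself, one minimal sketch (compatible with Python 2.7) is to print each row together with its position via enumerate():

import json
from cli import clid  # NX-OS on-box Python API

out = json.loads(clid('show interface Eth1/1 counters'))

# Print every counters row together with its array index, so the IDs
# (0, 1, ...) and the keys each index holds are visible in the output.
for table in ('TABLE_rx_counters', 'TABLE_tx_counters'):
    for i, row in enumerate(out[table][table.replace('TABLE', 'ROW')]):
        print("%s[%d]: %s" % (table, i, sorted(row.keys())))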
I need to add a $ symbol before the { on each line. How can I do it in Python?
Problem: Using Python, I am reading all the API endpoints from a JSON file. Before I pass those API endpoints on, I need to prepend a $ symbol to each opening brace {.
Below is the code that reads the API endpoint names from the JSON file and prints them.
import json

with open("example.json", "r") as reads:  # Reading all the API endpoints from the json file.
    data = json.load(reads)
    print(data['paths'].items())
    for parameters, values in data['paths'].items():
        print(parameters)
Going on from the above code, I need to add a $ symbol before each { prior to printing.
Below is the list I get by reading the JSON file with Python:
/API/{id}/one
/{two}/one/three
/three/four/{five}
Expected is:
/API/${id}/one
/${two}/one/three
/three/four/${five}
You could use .replace().
>>> obj="""
... /API/{id}/one
... /{two}/one/three
... /three/four/{five}
... """
>>> newobj = obj.replace('{','${')
>>> print(newobj)
/API/${id}/one
/${two}/one/three
/three/four/${five}
You can use the re library.
import re

for parameters, values in data['paths'].items():
    print(re.sub('{', '${', parameters))
For more info on re, go through the docs: https://docs.python.org/3/library/re.html. It's a very helpful module.
I am using Python to read a JSON file and convert each record into a class instance. I already have a sample JSON file, but for the Python code, I am trying to use test-driven development (TDD) methods for the first time. Here's what the data in my (sample) JSON file look like:
[{"Name": "Max", "Breed": "Poodle", "Color": "White", "Age": 8},
{"Name": "Jack", "Breed": "Corgi", "Color": "Black", "Age": 4},
{"Name": "Lucy", "Breed": "Labrador Retriever", "Color": "Brown", "Age": 2},
{"Name": "Bear", "Breed": "German Shepherd", "Color": "Brown", "Age": 6}]
I know I want to test for valid entries in each of the arguments for all the instances. For example, I want to check the breed against a tuple of acceptable breeds, and check that age is always given as an integer. Given my total lack of TDD experience, it's not clear to me if the code checking the objects resulting from the JSON import code is itself the test, or if I should be using one set of tests for the JSON import code and a separate set to test the instances generated by the import code.
Those are two separate concerns. Testing the JSON load is completely different from testing the loaded JSON data. Loading the data should not be complex (json.loads). But if you do need to test it, keep the test as minimal and fine-grained as possible. Your tests should not affect each other.
In general, your test cases should each cover a very specific portion of your code; that is, each should test one specific piece of your program's functionality. Because you mention validating the JSON data you load (breed, for instance), this implies that your program should have that validation functionality. For this case, you would have test cases like the ones below.
import doggie

def test_validate_breed():
    # Positive test case -- everything here should pass. (Assuming you give good data)
    # your load_json routine itself could be a test. But generally this either works or
    # it raises a json exception... At any rate, load_json returns a list of dictionaries
    # like those you described above.
    l = load_json()
    for d in l:
        assert doggie.validate_breed(d)
    # Generate an invalid dictionary entry
    d = { "Name": "Some Name", "Breed": "Invalid!", ... }
    assert False == doggie.validate_breed(d)

def test_validate_age():
    l = load_json()
    for d in l:
        assert doggie.validate_age(d)
    # generate an invalid dictionary entry
    d = { "Name": "Some Name", ... , "Age": 1000 }
    assert False == doggie.validate_age(d)
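For completeness, a minimal sketch of what the hypothetical doggie module's validators could look like (the accepted breeds and age bounds below are made-up assumptions):

# doggie.py -- hypothetical validators, for illustration only
ACCEPTED_BREEDS = ("Poodle", "Corgi", "Labrador Retriever", "German Shepherd")

def validate_breed(record):
    # Breed must be one of the accepted breeds (the tuple above is an assumption).
    return record.get("Breed") in ACCEPTED_BREEDS

def validate_age(record):
    # Age must be an integer in a plausible range (bounds are assumptions).
    age = record.get("Age")
    return isinstance(age, int) and 0 <= age <= 30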
The beauty of testing is that it exposes flaws in your design. It is very good at exposing unnecessary coupling.
I recommend you check out nose for unit testing. It makes running tests a cinch and provides nice utility functions that better describe test failures. For instance:
>>> import nose.tools as nt
>>> nt.assert_equal(4, 5)
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\MSFBPY27\lib\unittest\case.py", line 509, in assertEqual
    assertion_func(first, second, msg=msg)
  File "C:\MSFBPY27\lib\unittest\case.py", line 502, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: 4 != 5