I'm unable to parse JSON. This is the JSON snippet returned from my requests.post response:
{'result': {'parent': '', 'reason': '', 'made_sla': 'true', 'backout_plan': '', 'watch_list': '', 'upon_reject': 'cancel', 'sys_updated_on': '2018-08-22 11:16:09', 'type': 'Comprehensive', 'conflict_status': 'Not Run', 'approval_history': '', 'number': 'CHG0030006', 'test_plan': '', 'cab_delegate': '', 'sys_updated_by': 'admin', 'opened_by': {'link': 'https://dev65345.service-now.com/api/now/table/sys_user/6816f79cc0a8016401c5a33be04be441', 'value': '6816f79cc0a8016401c5a33be04be441'}, 'user_input': '', 'requested_by_date': '', 'sys_created_on': '2018-08-22 11:16:09', 'sys_domain': {'link': 'https://dev65345.service-now.com/api/now/table/sys_user_group/global', 'value': 'global'}, 'state': '-5', 'sys_created_by': 'admin', 'knowledge': 'false', 'order': '', 'phase': 'requested', 'closed_at': '', 'cmdb_ci': '', 'delivery_plan': '', 'impact': '3', 'active': 'true', 'review_comments': '', 'work_notes_list': '', 'business_service': '', 'priority': '4', 'sys_domain_path': '/', 'time_worked': '', 'cab_recommendation': '', 'expected_start': '', 'production_system': 'false', 'opened_at': '2018-08-22 11:16:09', 'review_date': '', 'business_duration': '', 'group_list': '', 'requested_by': {'link': 'https://dev6345.service-now.com/api/now/table/sys_user/user1', 'value': 'user1'}, 'work_end': '', 'change_plan': '', 'phase_state': 'open', 'approval_set': '', 'cab_date': '', 'work_notes': '', 'implementation_plan': '', 'end_date': '', 'short_description': '', 'close_code': '', 'correlation_display': '', 'delivery_task': '', 'work_start': '', 'assignment_group': {'link': 'https://dev65345.service-now.com/api/now/table/sys_user_group/testgroup', 'value': 'testgroup'}, 'additional_assignee_list': '', 'outside_maintenance_schedule': 'false', 'description': '', 'on_hold_reason': '', 'calendar_duration': '', 'std_change_producer_version': '', 'close_notes': '', 'sys_class_name': 'change_request', 'closed_by': '', 'follow_up': '', 'sys_id': '436eda82db4023008e357a61399619ee', 'contact_type': '', 'cab_required': 'false', 'urgency': '3', 'scope': '3', 'company': '', 'justification': '', 'reassignment_count': '0', 'review_status': '', 'activity_due': '', 'assigned_to': '', 'start_date': '', 'comments': '', 'approval': 'requested', 'sla_due': '', 'comments_and_work_notes': '', 'due_date': '', 'sys_mod_count': '0', 'on_hold': 'false', 'sys_tags': '', 'conflict_last_run': '', 'escalation': '0', 'upon_approval': 'proceed', 'correlation_id': '', 'location': '', 'risk': '3', 'category': 'Other', 'risk_impact_analysis': ''}}
I searched on the net, and it seems that because the data uses single quotes it won't parse.
So I tried to convert the single quotes into double quotes:
with open('output.json', 'r') as handle:
    handle = open('output.json')
    str = "123"
    str = handle.stringify()  # also tried .str()
    str = str.replace("\'", "\"")
    jsonobj = json.load(json.dumps(handle))
But it shows me there is no attribute stringify or str, because handle is a file object rather than a string. So, can you please help me with the correct way of parsing a JSON object with single quotes from a file?
The code:
import requests
import json
from pprint import pprint
print("hello world")
url="********"
user="****"
password="*****"
headers={"Content-Type":"application/xml","Accept":"application/json"}
#response=requests.get(url,auth=(user,password),headers=headers)
response = requests.post(url, auth=(user, password), headers=headers ,data="******in xml****")
print(response.status_code)
print(response.json())
jsonobj=json.load(json.dumps(response.json()))
pprint(jsonobj)
What you receive back is not JSON, it's a dictionary, one that can be encoded as JSON via json.dumps(result).
JSON is a text format for representing objects (the "ON" stands for "object notation"). You can convert a dictionary (or list or scalar) into a JSON-encoded string, or the other way around.
What response.json() does is take the JSON text of the response and parse it for you (with json.loads), so you don't have to think about JSON at all.
In the code you've shown, you are doing something like this:
response = requests.post('...')
data = response.json()
Here data is already parsed from JSON to a Python dict; that is what the requests json method does. There is no need to parse it again.
If you need the raw JSON text rather than Python data, then don't call the json method; read it directly from the response:
data = response.text
Now data will be a string containing JSON.
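To connect this back to the code in the question: response.json() already gives you the dict, and the single quotes in output.json come from writing the dict's Python repr to the file instead of real JSON. A minimal sketch, reusing the names from the question (xml_body is a placeholder for the elided XML payload):

import json
import requests

response = requests.post(url, auth=(user, password), headers=headers, data=xml_body)  # xml_body: placeholder
data = response.json()                 # already a Python dict, no json.load needed

# To persist real JSON (double quotes), write it with json.dump ...
with open('output.json', 'w') as handle:
    json.dump(data, handle, indent=2)

# ... and read it back later with json.load:
with open('output.json') as handle:
    data_again = json.load(handle)

If a file already contains the single-quoted Python repr rather than JSON, ast.literal_eval(handle.read()) can parse it, but fixing whatever wrote the file is the cleaner path.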
data = obj.generateSession("P78013","Ujhdy#2")
print(data)
The result is printed in the following format:
{'status': True, 'message': 'SUCCESS', 'errorcode': '', 'data':
{'clientcode': 'K98913', 'name': 'HPP', 'email': '',
'mobileno': '', 'exchanges': ['bse_cm', 'cde_fo', 'mcx_fo', 'ncx_fo',
'nse_cm', 'nse_fo'], 'products': ['CNC', 'NRML', 'MARGIN', 'MIS',
'BO', 'CO'], 'lastlogintime': '', 'broker': '', 'jwtToken': 'Bearer
eyJhbGciOiJIUzUxMiJ9.eyJ1c2VybmFtZSI6Iko4ODkxMyIsInJvbGVzIjowLCJ1c2VydHlwZSI6IlVTRVIiLCJpYXQiOjE2NTU3NTAxNDksImV4cCI6MTc0MjE1MDE0OX0.P1Ne0T0lTgScZJ1udMYRaJ32WeNDB-bZIwMg4uSAGC4RDFnYRsdvXGRyIEx7KS1LpQ6ndRIt7UjoyIewCs7HLA',
'refreshToken':
'eyJhbGciOiJIUzUxMiJ9.eyJ0b2tlbiI6IlJFRlJFU0gtVE9LRU4iLCJpYXQiOjE2NTU3NTAxNDl9.9DM1ggWfaervPe3qCpoDywfdb8kJ6okQrqZeR_mjsbGliqM7w0DdRyxTHyB7m-742Sfj9tVsZ4qQrOK0RQ9TmQ'}}
I am trying to extract the 'jwtToken' value as a string, like below:
jwtToken='Bearer eyJhbGciOiJIUzUxMiJ9.eyJ1c2VybmFtZSI.....'
Here is one way to extract it with a regular expression over the string representation of the response:
import re

s = str(data)
re.findall(r"(jwtToken).?:(.*)\'\,", s)[0]
('jwtToken',
" 'Bearer eyJhbGciOiJIUzUxMiJ9.eyJ1c2VybmFtZSI6Iko4ODkxMyIsInJvbGVzIjowLCJ1c2VydHlwZSI6IlVTRVIiLCJpYXQiOjE2NTU3NTAxNDksImV4cCI6MTc0MjE1MDE0OX0.P1Ne0T0lTgScZJ1udMYRaJ32WeNDB-bZIwMg4uSAGC4RDFnYRsdvXGRyIEx7KS1LpQ6ndRIt7UjoyIewCs7HLA")
JWT tokens are just base64-encoded JSON, so the payload can be decoded directly:
import base64

token = "Bearer eyJhbGciOiJIUzUxMiJ9.eyJ1c2VybmFtZSI6Iko4ODkxMyIsInJvbGVzIjowLCJ1c2VydHlwZSI6IlVTRVIiLCJpYXQiOjE2NTU3NTAxNDksImV4cCI6MTc0MjE1MDE0OX0.P1Ne0T0lTgScZJ1udMYRaJ32WeNDB-bZIwMg4uSAGC4RDFnYRsdvXGRyIEx7KS1LpQ6ndRIt7UjoyIewCs7HLA".split(" ")[1]
payload = token.split(".")[1]          # header.payload.signature -> keep only the payload
payload += "=" * (-len(payload) % 4)   # restore the base64 padding that JWTs strip
print(base64.urlsafe_b64decode(payload))
I kind of have two real questions. Both relate to this code:
import urllib.parse
import requests

def query(q):
    base_url = "https://api.duckduckgo.com/?q={}&format=json"
    resp = requests.get(base_url.format(urllib.parse.quote(q)))
    json = resp.json()
    return json
One is this: When I query something like this: "US Presidents", I get back something like this:
{'Abstract': '', 'AbstractSource': '', 'AbstractText': '', 'AbstractURL': '', 'Answer': '', 'AnswerType': '', 'Definition': '', 'DefinitionSource': '', 'DefinitionURL': '', 'Entity': '', 'Heading': '', 'Image': '', 'ImageHeight': '', 'ImageIsLogo': '', 'ImageWidth': '', 'Infobox': '', 'Redirect': '', 'RelatedTopics': [], 'Results': [], 'Type': '', 'meta': {'attribution': None, 'blockgroup': None, 'created_date': '2021-03-24', 'description': 'testing', 'designer': None, 'dev_date': '2021-03-24', 'dev_milestone': 'development', 'developer': [{'name': 'zt', 'type': 'duck.co', 'url': 'https://duck.co/user/zt'}], 'example_query': '', 'id': 'just_another_test', 'is_stackexchange': 0, 'js_callback_name': 'another_test', 'live_date': None, 'maintainer': {'github': ''}, 'name': 'Just Another Test', 'perl_module': 'DDG::Lontail::AnotherTest', 'producer': None, 'production_state': 'offline', 'repo': 'fathead', 'signal_from': 'just_another_test', 'src_domain': 'how about there', 'src_id': None, 'src_name': 'hi there', 'src_options': {'directory': '', 'is_fanon': 0, 'is_mediawiki': 0, 'is_wikipedia': 0, 'language': '', 'min_abstract_length': None, 'skip_abstract': 0, 'skip_abstract_paren': 0, 'skip_icon': 0, 'skip_image_name': 0, 'skip_qr': '', 'src_info': '', 'src_skip': ''}, 'src_url': 'Hello there', 'status': None, 'tab': 'is this source', 'topic': [], 'unsafe': None}}
Basically, everything is empty. Even the Heading key, which I know was sent as "US Presidents" encoded into url form. This issue seems to affect all queries I send with a space in them. Even when I go to this url: "https://api.duckduckgo.com/?q=US%20Presidents&format=json&pretty=1" in a browser, all I get is a bunch of blank json keys.
My next question is this. When I send in something like this: "1+1", the json response's "Answer" key is this:
{'from': 'calculator', 'id': 'calculator', 'name': 'Calculator', 'result': '', 'signal': 'high', 'templates': {'group': 'base', 'options': {'content': 'DDH.calculator.content'}}}
Everything else seems to be correct, but shouldn't 'result' be '2'? The entire rest of the JSON seems to be correct, including all the 'RelatedTopics'.
Any help with this would be greatly appreciated.
Basically, the DuckDuckGo API is not a real search engine; it is more like a dictionary of topics. So try US%20President instead of US%20Presidents and you'll get an answer. For the encoding you can use encoded blanks, but if it's not a fixed term I would prefer the plus sign, which you can get with urllib.parse.quote_plus().
About the calculation you're right, but I see no real use case for calling a calculator API from Python code. It is like using a trampoline to travel to the moon when a rocket is available. Maybe they see it the same way and simply don't offer calculator results in their API.
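A quick sketch of the encoding point (the query string here is just an example):

import urllib.parse

q = "US President"
print(urllib.parse.quote(q))        # US%20President  (space as %20)
print(urllib.parse.quote_plus(q))   # US+President    (space as +, typical for query strings)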
I am trying to convert a list into JSON using Python. The result from a query looks something like this:
Output:
[<Record r=<Relationship id=106 nodes=(<Node id=71 labels=frozenset({'TBox'}) properties={'identifier': '', 'ontology_level': 'lower', 'neo4jImportId': '105', 'html_info': '', 'namespace': 'car', 'admin': '', 'description': 'remove', 'sing': '', 'pl': '', 'title': 'BMProcessor', 'version': 'v6.0'}>, <Node id=59 labels=frozenset({'TBox'}) properties={'identifier': '', 'ontology_level': 'lower', 'neo4jImportId': '93', 'html_info': '', 'namespace': 'car', 'admin': '', 'description': 'A DataProcessor which represents a ML algorithm', 'sing': '', 'pl': '', 'title': 'LearningProcessor', 'version': 'v6.0'}>) type='subclass_of' properties={}> b=<Node id=59 labels=frozenset({'TBox'}) properties={'identifier': '', 'ontology_level': 'lower', 'neo4jImportId': '93', 'html_info': '', 'namespace': 'car', 'admin': '', 'description': 'A DataProcessor which represents a ML algorithm', 'sing': '', 'pl': '', 'title': 'LearningProcessor', 'version': 'v6.0'}> n=<Node id=71 labels=frozenset({'TBox'}) properties={'identifier': '', 'ontology_level': 'lower', 'neo4jImportId': '105', 'html_info': '', 'namespace': 'car', 'admin': '', 'description': 'remove', 'sing': '', 'pl': '', 'title': 'BMProcessor', 'version': 'v6.0'}>>]
Function:
def runQuery(query):
    pprint.pprint(connection.execute(query))
When I call a simple json.dumps() on it, it raises TypeError: Object of type is not JSON serializable.
I want to print this in JSON format. How can I do so?
You can call result.data(), which gives you a dictionary.
You can also iterate over the records in the cursor and convert them one by one, extracting the fields you want, which is what the linked example does: pass the cursor and use the blob['field'] syntax.
def serialize_genre(genre):
    return {
        'id': genre['id'],
        'name': genre['name'],
    }
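A minimal sketch of the first suggestion, assuming connection.execute(query) from the question returns an iterable of neo4j Record objects:

import json

def run_query_as_json(connection, query):
    # connection.execute is the wrapper from the question.
    # Record.data() turns each neo4j Record into a plain dict of its fields,
    # so the whole result becomes a list that json.dumps can serialize.
    records = [record.data() for record in connection.execute(query)]
    # default=str is a safety net for values json doesn't know how to encode
    return json.dumps(records, indent=2, default=str)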
I am trying to do the following when loading an internal data structure into pandas:
df = pd.DataFrame(self.data,
                  nrows=num_rows+500,
                  skiprows=skip_rows,
                  header=header_row,
                  usecols=limit_cols)
However, pd.DataFrame doesn't appear to apply any of those options the way read_csv does; only the data itself is loaded. Is there another method that gives me more control over the data I'm ingesting, or do I need to reshape the data before loading it into pandas?
My input data looks like this:
data = [
['ABC', 'es-419', 'US', 'Movie', 'Full Extract', 'PARIAH', '', '', 'EST', 'Features - EST', 'HD', '2017-05-12 00:00:00', 'Open', 'WSP', '10.5000', '', '', '', '', '10.5240/8847-7152-6775-8B59-ADE0-Y', '10.5240/FFE3-D036-A9A4-9E7A-D833-1', '', '', '', '04065', '', '', '2011', '', '', '', '', '', '', '', '', '', '', '', '113811', '', '', '', '', '', '04065', '', 'Spanish (LAS)', 'US', '10', 'USA NATL SALE', '2017-05-11 00:00:00', 'TIER 3', '21', '', '', 'USA NATL SALE-SPANISH LANGUAGE', 'SPAN'],
['ABC', 'es-419', 'US', 'Movie', 'Full Extract', 'PATCH ADAMS', '', '', 'EST', 'Features - EST', 'HD', '2017-05-12 00:00:00', 'Open', 'WSP', '10.5000', '', '', '', '', '10.5240/DD84-FBF4-8F67-D6F3-47FF-1', '10.5240/B091-00D4-8215-39D8-0F33-8', '', '', '', 'U2254', '', '', '1998', '', '', '', '', '', '', '', '', '', '', '', '113811', '', '', '', '', '', 'U2254', '', 'Spanish (LAS)', 'US', '10', 'USA NATL SALE', '2017-05-11 00:00:00', 'TIER 3', '21', '', '', 'USA NATL SALE-SPANISH LANGUAGE', 'SPAN']
]
So I'm looking to be able to state which rows it should load (or skip) and which columns it should use (usecols). Is that possible with an internal Python data structure?
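pd.DataFrame itself does not accept read_csv-style options, so with an in-memory list the usual approach is to slice the list before constructing the frame. A minimal sketch, where skip_rows, num_rows, limit_cols and column_names are hypothetical stand-ins for the values in your code:

import pandas as pd

skip_rows = 0
num_rows = 2
limit_cols = [0, 1, 2, 5]                                  # positional indices of the columns to keep
column_names = ['network', 'locale', 'country', 'title']   # hypothetical, stands in for header_row

rows = data[skip_rows:skip_rows + num_rows]                # skiprows / nrows equivalent
rows = [[row[i] for i in limit_cols] for row in rows]      # usecols equivalent
df = pd.DataFrame(rows, columns=column_names)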
How can I turn this data into a flat data frame?
I've tried using json_normalize and pivot, but I can't seem to get the format right.
This is my desired output format:
SiteName|SiteId|...|CompressorMeterRefID|TankID|TankNumber...|TankID|TankNumber...|TankID|... DateandTime|...
Please advise
[{'SiteName': 'Reinschmiedt 1-4H (CRP 11)',
'SiteId': 57,
'SiteRefId': 'OK10020',
'Choke': '',
'GasMeter1': 53.25,
'GasMeter1Name': 'Check Meter',
'GasMeter1RefId': '',
'GasMeter2Name': '',
'GasMeter2RefId': '',
'GasMeter3Name': '',
'GasMeter3RefId': '',
'OilMeter1Name': '',
'OilMeter1RefId': '',
'OilMeter2Name': '',
'OilMeter2RefId': '',
'WaterMeter1': 0.0,
'WaterMeter1Name': 'Water Meter',
'WaterMeter1RefId': '',
'WaterMeter2Name': '',
'WaterMeter2RefId': '',
'FlareMeterName': '',
'FlareMeterRefId': '',
'GasLiftMeterName': '',
'GasLiftMeterRefId': '',
'CompressorMeterName': '',
'CompressorMeterRefId': '',
'TankEntries': [{'TankId': 138,
'TankNumber': 2,
'TankLevelDateTime': '2018-07-01T12:00:00.0000000Z',
'TankLevelDateTimeLocal': '2018-07-01T07:00:00.0000000Z',
'TankTopGauge': 35.99,
'TankName': 'Oil Tank 209206',
'TankRefId': 0,
'TankRefId2': '',
'TankRefId3': ''},
{'TankId': 139,
'TankNumber': 3,
'TankLevelDateTime': '2018-07-01T12:00:00.0000000Z',
'TankLevelDateTimeLocal': '2018-07-01T07:00:00.0000000Z',
'TankTopGauge': 109.5,
'TankName': 'Oil Tank 209207',
'TankRefId': 0,
'TankRefId2': '',
'TankRefId3': ''}],
'DateAndTime': '2018-07-01T12:00:00.0000000Z',
'DateAndTimeLocal': '2018-07-01T07:00:00.0000000Z',
'UserName': 'ScadaVisor',
'Notes': ''},
{'SiteName': 'Allen 1-11H (CRP 8)',
.....
.....
.....
In R you can do it like this using the jsonlite package:
result <- as.data.frame(jsonlite::stream_in(textConnection(data)))
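For a Python route with pandas (which the question mentions trying via json_normalize), here is a minimal sketch, assuming the list shown above is bound to a name like records:

import pandas as pd

# records: the list of site dicts shown above.
# One row per tank entry, carrying along a few site-level fields.
flat = pd.json_normalize(
    records,
    record_path="TankEntries",
    meta=["SiteName", "SiteId", "DateAndTime"],
)

# Number the tanks within each site, then unstack them into wide columns
# (TankId_1, TankNumber_1, TankId_2, ...), one row per site reading.
flat["tank_idx"] = flat.groupby("SiteId").cumcount() + 1
wide = (flat.set_index(["SiteName", "SiteId", "DateAndTime", "tank_idx"])
            .unstack("tank_idx"))
wide.columns = [f"{name}_{i}" for name, i in wide.columns]
wide = wide.reset_index()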