Python 3: splitting information and reporting occurrences

Say, for example, I want to count how many times bob visits sears and walmart. How would I do this by creating dictionaries?
Information given:
bob:oct1:sears
bob:oct1:walmart
mary:oct2:walmart
don:oct2:sears
bob:oct4:walmart
mary:oct4:sears

Okay, as this might be homework, I’ll try to give you some hints on how to do this. If this is not homework, please say so, and I’ll restore my original answer and example code.
So first of all, your data set comes with one entry per line. As we want to work with each entry on its own, we have to split the original data into individual lines. We can use str.splitlines (or str.split) for that.
Each entry follows the simple format name:date:location. To get at those segments, we can use str.split again, this time splitting on ':'. We then end up with the separate fields of each entry.
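A minimal sketch of those two splitting steps (the variable names here are just illustrative):
entries = """bob:oct1:sears
bob:oct1:walmart""".splitlines()              # split the raw data into lines
for entry in entries:
    name, date, location = entry.split(':')  # split an entry into its three fields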
To store this, we want to group the data by name first. So we choose a dictionary with the name as the key and the visits as the data. As we don't care about the date, we can drop it. What we do want is to count how often each location occurs for a given name, so for the data we keep another dictionary, with the location as the key and the visit count as the value. We end up with a nested dictionary, looking like this:
{
    'bob': {
        'sears': 1,
        'walmart': 2,
    },
    'mary': {
        ...
    }
}
To get the final answers, we then just look into that dictionary and read off the values.

@poke provided a nice explanation; here's the corresponding code.
It reads input from files given on the command line (or from stdin) and dumps the occurrences in JSON format:
#!/usr/bin/env python
import csv
import fileinput
import json
import sys
from collections import defaultdict

visits = defaultdict(lambda: defaultdict(int))  # name -> location -> count
for name, _, shop in csv.reader(fileinput.input(), delimiter=':'):
    visits[name][shop] += 1
# pretty print
json.dump(visits, sys.stdout, indent=2)
Output (with the sample data above fed in via a file or stdin):
{
  "bob": {
    "sears": 1,
    "walmart": 2
  },
  "don": {
    "sears": 1
  },
  "mary": {
    "sears": 1,
    "walmart": 1
  }
}
This representation makes it easy to find out how many visits a person made, and where.
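For example, with the dictionary built above:
print(visits['bob']['walmart'])
# -> 2
print(sum(visits['bob'].values()))  # bob's total number of visits
# -> 3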
If you always know both name and location then you could use a simpler representation:
visits = defaultdict(int)
for name, _, shop in csv.reader(fileinput.input(), delimiter=':'):
    visits[name, shop] += 1

print(visits['bob', 'walmart'])
# -> 2
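As an aside, collections.Counter can express the same tuple-key counting; a sketch equivalent to the loop above, reusing the csv/fileinput setup from the script:
from collections import Counter

visits = Counter((name, shop)
                 for name, _, shop in csv.reader(fileinput.input(), delimiter=':'))
print(visits['bob', 'walmart'])
# -> 2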

Related

Parse a json/dictionary with same key values

I currently have a list variable that looks like this:
list_of_dicts = [{"Away_Team":"KC", "Home_Team":"NYY"},
                 {"Away_Team":"TB", "Home_Team":"MIA"},
                 {"Away_Team":"TOR", "Home_Team":"BOS"},
                 ]
As you can see, there are multiple keys with the same names, pertaining to the game matchups.
When I try to use:
print(json.dumps(list_of_dicts[0], indent=4, sort_keys=True))
...it only prints out the first matchup due to the same keys:
{
    "Away_Team": "KC",
    "Home_Team": "NYY"
}
How can I convert this list_of_dicts variable into something like the following output so I can use it like a valid dictionary or json object?
{
    "Away_Team_1": "KC", "Home_Team_1": "NYY",
    "Away_Team_2": "TB", "Home_Team_2": "MIA",
    "Away_Team_3": "TOR", "Home_Team_3": "BOS"
}
This output doesn't need to be exactly that if a better solution is available; it's just to give you an idea of how I'd like to be able to parse the data.
The list_of_dicts variable can vary in size: I've shown 3 matchups here, but it could contain 1 or 10, so the solution needs to handle that dynamically.
You can add suffixes to the keys with enumerate:
list_of_dicts2 = [{f"{k}_{i}":v for k,v in d.items()} for i,d in enumerate(list_of_dicts, start=1)]
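Since the suffixed keys are now unique, you can then merge the renamed dicts into the single flat dict asked for (merged is just an illustrative name):
merged = {k: v for d in list_of_dicts2 for k, v in d.items()}
# {'Away_Team_1': 'KC', 'Home_Team_1': 'NYY', 'Away_Team_2': 'TB', ...}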
One option is to use pandas:
pd.DataFrame(list_of_dicts).to_csv('filename.csv', index=False)
gives
Away_Team,Home_Team
KC,NYY
TB,MIA
TOR,BOS
Now the index is implied by the row, and if you load it back in you'll have those indices. Pandas also supports to_json if you are set on using JSON. You can even recover your original list from a DataFrame using .to_dict(orient='records').
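A quick sketch of that round trip, assuming pandas is imported as pd:
import pandas as pd

df = pd.DataFrame(list_of_dicts)
df.to_json(orient='records')
# '[{"Away_Team":"KC","Home_Team":"NYY"},{"Away_Team":"TB","Home_Team":"MIA"},...]'
df.to_dict(orient='records')
# [{'Away_Team': 'KC', 'Home_Team': 'NYY'}, ...]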
Data structure is important. You really don't need a dictionary for this. Simply reduce the data to a list of tuples, with the first slot always the away team and the second the home team.
list_of_dicts = [{"Away_Team":"KC", "Home_Team":"NYY"},
                 {"Away_Team":"TB", "Home_Team":"MIA"},
                 {"Away_Team":"TOR", "Home_Team":"BOS"},
                 ]
l = [tuple(d.values()) for d in list_of_dicts]
output:
[('KC', 'NYY'), ('TB', 'MIA'), ('TOR', 'BOS')]
The problem with your proposed solution is that iterating through dicts whose key names you don't know is cumbersome; this solution makes the data structure easy to decipher, transform, or manipulate.
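For example, the tuples unpack naturally when you consume them:
for away, home in l:
    print(f"{away} at {home}")
# KC at NYY
# TB at MIA
# TOR at BOS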

How to detect and indent json substrings inside longer non-json text?

I have an existing Python application, which logs like:
import logging
import json
logger = logging.getLogger()
some_var = 'abc'
data = {
    1: 2,
    'blah': {
        ['hello']
    }
}
logger.info(f"The value of some_var is {some_var} and data is {json.dumps(data)}")
So the logger.info function is given:
The value of some_var is abc and data is {1: 2,"blah": {["hello"]}}
Currently my logs go to AWS CloudWatch, which does some magic and renders this with indentation like:
The value of some_var is abc and data is {
    1: 2,
    "blah": {
        ["hello"]
    }
}
This makes the logs super clear to read.
Now I want to make some changes to my logging, handling it myself with another python script that wraps around my code and emails out logs when there's a failure.
What I want is some way of taking each log entry (or a stream/list of entries), and applying this indentation.
So I want a function which takes in a string, detects which subset(s) of that string are json, and then inserts newlines and indentation to pretty-print that json.
example input:
Hello, {"a": {"b": "c"}} is some json data, but also {"c": [1,2,3]} is too
example output
Hello,
{
    "a": {
        "b": "c"
    }
}
is some json data, but also
{
    "c": [
        1,
        2,
        3
    ]
}
is too
I have considered splitting up each entry into everything before and after the first {. Leave the left half as is, and pass the right half to json.dumps(json.loads(x), indent=4).
But what if there's stuff after the json object in the log file?
Ok, we can just select everything after the first { and before the last }.
Then pass the middle bit to the JSON library.
But what if there's two JSON objects in this log entry? (Like in the above example.) We'll have to use a stack to figure out whether any { appears after all prior { have been closed with a corresponding }.
But what if there's something like {"a": "\}"}. Hmm, ok we need to handle escaping.
Now I find myself having to write a whole json parser from scratch.
Is there any easy way to do this?
I suppose I could use a regex to replace every instance of json.dumps(x) in my whole repo with json.dumps(x, indent=4). But json.dumps is sometimes used outside logging statements, and it just makes all my logging lines that extra bit longer. Is there a neat elegant solution?
(Bonus points if it can parse and indent the json-like output that str(x) produces in python. That's basically json with single quotes instead of double.)
In order to extract JSON objects from a string, see this answer. The extract_json_objects() function from that answer will handle JSON objects and nested JSON objects, but nothing else. If you have a list in your log outside of a JSON object, it's not going to be picked up.
In your case, modify the function to also return the strings/text around all the JSON objects, so that you can put them all into the log together (or replace the logline):
from json import JSONDecoder

def extract_json_objects(text, decoder=JSONDecoder()):
    pos = 0
    while True:
        match = text.find('{', pos)
        if match == -1:
            yield text[pos:]  # return the remaining text
            break
        yield text[pos:match]  # modification for the non-JSON parts
        try:
            result, index = decoder.raw_decode(text[match:])
            yield result
            pos = match + index
        except ValueError:
            pos = match + 1
Use that function to process your loglines, add them to a list of strings, which you then join together to produce a single string for your output, logger, etc.:
import json  # needed for json.dumps below

def jsonify_logline(line):
    line_parts = []
    for result in extract_json_objects(line):
        if isinstance(result, dict):  # got a JSON obj
            line_parts.append(json.dumps(result, indent=4))
        else:  # got text/non-JSON-obj
            line_parts.append(result)
    # (don't make that a list comprehension, quite un-readable)
    return ''.join(line_parts)
Example:
>>> demo_text = """Hello, {"a": {"b": "c"}} is some json data, but also {"c": [1,2,3]} is too"""
>>> print(jsonify_logline(demo_text))
Hello, {
    "a": {
        "b": "c"
    }
} is some json data, but also {
    "c": [
        1,
        2,
        3
    ]
} is too
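For the bonus point about Python's str(x) output (single quotes), one option worth trying before writing a parser is ast.literal_eval, which safely parses Python literals; a minimal sketch, not integrated into the function above:
import ast
import json

python_repr = str({'a': {'b': 'c'}})  # "{'a': {'b': 'c'}}" is not valid JSON
obj = ast.literal_eval(python_repr)   # but it is a valid Python literal
print(json.dumps(obj, indent=4))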
Other things, not directly related, which would have helped:
Instead of using json.dumps(x) for all your log lines, follow the DRY principle and create a function like logdump(x) which does whatever you want it to do, like json.dumps(x), json.dumps(x, indent=4), or jsonify_logline(x). That way, if you need to change the JSON format of all your logs, you just change that one function; there's no need for a mass "search & replace", which comes with its own issues and edge-cases.
You can even add an optional parameter like pretty=True to decide whether you want the output indented (see the sketch after these suggestions).
You could mass search & replace all your existing loglines to do logger.blah(jsonify_logline(<previous log f-string or text>))
If you are JSON-dumping custom objects/class instances, have their __str__ method always output pretty-printed JSON, and their __repr__ output the non-pretty/compact form.
Then you wouldn't need to modify the logline at all: doing logger.info(f'here is my object {x}') directly invokes obj.__str__.
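A minimal sketch of both suggestions; logdump and the Matchup class are hypothetical names, not from the original code:
import json

def logdump(x, pretty=True):
    # one place to control JSON formatting for every log line
    return json.dumps(x, indent=4 if pretty else None)

class Matchup:
    def __init__(self, away, home):
        self.away, self.home = away, home
    def __str__(self):
        # pretty-printed JSON, picked up by f-strings in log lines
        return json.dumps(vars(self), indent=4)
    def __repr__(self):
        # compact form for debugging
        return json.dumps(vars(self))

print(f'here is my object {Matchup("KC", "NYY")}')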

Extract values from json-file which has no unique markers

A json-file which has unique markers (or, more appropriately, field-names) preceding the values is rather easy to dissect: you can perform a string search on the unique markers/field-names to find the first and last position of the value's characters within the string, and with that info pinpoint and extract the value.
I have performed that kind of extraction with various Lua and Python scripts (also on xml-files).
Now I need to extract values from a json-file which does not have unique markers/field-names, but just multiple occurrences of "value_type" and "value", preceding the name and the value respectively: see below.
{
    "software_version": "NRZ-2017-099",
    "age": "78",
    "sensordatavalues": [
        {"value_type": "SDS_P1", "value": "4.43"},
        {"value_type": "SDS_P2", "value": "3.80"},
        {"value_type": "temperature", "value": "20.10"},
        {"value_type": "humidity", "value": "44.50"},
        {"value_type": "samples", "value": "614292"},
        {"value_type": "min_micro", "value": "233"},
        {"value_type": "max_micro", "value": "25951"},
        {"value_type": "signal", "value": "-66"}
    ]
}
The approach described above does not provide a working solution here.
Question: in this json-file layout, how can I directly extract the specific, individual values (preferably by lua-script)?
[Or might XML-parsing provide an easier solution?]
Here is Python to read the JSON file and make it more convenient:
import json
import pprint

with open("/tmp/foo.json") as j:
    data = json.load(j)

# fold each {"value_type": ..., "value": ...} entry into the top-level dict
for sdv in data.pop('sensordatavalues'):
    data[sdv['value_type']] = sdv['value']

pprint.pprint(data)
The results:
{'SDS_P1': '4.43',
 'SDS_P2': '3.80',
 'age': '78',
 'humidity': '44.50',
 'max_micro': '25951',
 'min_micro': '233',
 'samples': '614292',
 'signal': '-66',
 'software_version': 'NRZ-2017-099',
 'temperature': '20.10'}
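With the reshaped dictionary, individual values can now be read directly, converting from string as needed:
print(data['temperature'])      # '20.10'
print(float(data['humidity']))  # 44.5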
You might want to have a look into filter functions.
E.g., in your example json, to get only the dict that contains the value for samples you could do:
sample_sensordata = list(filter(lambda d: d["value_type"] == "samples", your_json_dict["sensordatavalues"]))
sample_value = sample_sensordata[0]["value"]  # filter() yields matching dicts; take the first one
To make a dictionary like Ned Batchelder suggested, you could also go with a dict comprehension like this:
sensor_data_dict = {d['value_type']: d['value'] for d in your_json_dict['sensordatavalues']}
and then get the value you want just by sensor_data_dict['<ValueTypeYouAreLookingFor>']
A little bit late, but I'm trying Anvil, in which the previous answers didn't work; just for the curious people:
import anvil.http

resp = anvil.http.request("http://<ipaddress>/data.json", json=True)
# print(resp)  # prints the parsed json
tempdict = resp['sensordatavalues'][2].values()   # the "temperature" entry
humiddict = resp['sensordatavalues'][3].values()  # the "humidity" entry
temperature = float(list(tempdict)[1])
humidity = float(list(humiddict)[1])
print(temperature)
print(humidity)

Parse Json file and save specific values [duplicate]

This question already has answers here:
Getting a list of values from a list of dicts
(10 answers)
Closed 5 years ago.
I have this JSON file where the amount of id's sometimes changes (more id's will be added):
{
    "maps": [
        {
            "id": "blabla1",
            "iscategorical": "0"
        },
        {
            "id": "blabla2",
            "iscategorical": "0"
        },
        {
            "id": "blabla3",
            "iscategorical": "0"
        },
        {
            "id": "blabla4",
            "iscategorical": "0"
        }
    ]
}
I have this python code that has to print all the values of ids:
import json
data = json.load(open('data.json'))
variable1 = data["maps"][0]["id"]
print(variable1)
variable2 = data["maps"][1]["id"]
print(variable2)
variable3 = data["maps"][2]["id"]
print(variable3)
variable4 = data["maps"][3]["id"]
print(variable4)
I have to use variables, because I want to show the values in a dropdown menu. Is it possible to save the values of the ids in a more efficient way? And how do you know the maximum number of ids in this json file (in the example, 4)?
You can get the number of ids (which is the number of elements) by checking the length of data['maps']:
number_of_ids = len(data['maps'])
A clean way to get all the id values is storing them in a list.
You can achieve this in a pythonic way like this:
list_of_ids = [m['id'] for m in data['maps']]
Using this approach you don't even need to store the number of elements in the original json, because you iterate through all of them using a foreach approach, essentially.
If the pythonic approach troubles you, you can achieve the same thing with a classic foreach approach doing so:
list_of_ids = []
for m in data['maps']:
    list_of_ids.append(m['id'])
Or you can do with a classic for loop, and here is where you really need the length:
number_of_ids = len(data['maps'])
list_of_ids = []
for i in range(number_of_ids):
    list_of_ids.append(data['maps'][i]['id'])
This last one is the classic way, but I suggest you take the other approaches in order to leverage the advantages Python offers you!
You can find more on this stuff here!
Happy coding!
data['maps'] is a simple list, so you can iterate over it as such:
for m in data['maps']:
    print(m['id'])
To store them in a variable, you'll need to output them to a list. Storing them each in a separate variable is not a good idea, because like you said, you don't have a way to know how many there are.
ids = []
for m in data['maps']:
    ids.append(m['id'])

Create nested JSON from flat csv

Trying to create a 4-deep nested JSON from a csv based upon this example:
Region,Company,Department,Expense,Cost
Gondwanaland,Bobs Bits,Operations,nuts,332
Gondwanaland,Bobs Bits,Operations,bolts,254
Gondwanaland,Maureens Melons,Operations,nuts,123
At each level I would like to sum the costs and include it in the outputted JSON at the relevant level.
The structure of the outputted JSON should look something like this:
{
    "id": "aUniqueIdentifier",
    "name": "usually a node's name",
    "data": [
        {
            "key": "some key",
            "value": "some value"
        },
        {
            "key": "some other key",
            "value": "some other value"
        }
    ],
    "children": [ /* other nodes or empty */ ]
}
(REF: http://blog.thejit.org/2008/04/27/feeding-json-tree-structures-to-the-jit/)
I'm thinking along the lines of a recursive function in Python, but have not had much success with this approach so far... any suggestions for a quick and easy solution are greatly appreciated!
UPDATE:
Gradually giving up on the idea of the summarised costs because I just can't figure it out (I'm not much of a Python coder yet!). Simply being able to generate the formatted JSON would be good enough, and I can plug in the numbers later if I have to.
I have been reading and googling for a solution, and have learnt a lot on the way, but still no success in creating my nested JSON files from the above CSV structure. There must be a simple solution somewhere on the web? Maybe somebody else has had more luck with their search terms?
Here are some hints.
Parse the input to a list of lists with csv.reader:
>>> rows = list(csv.reader(source.splitlines()))
Loop over the list to build up your dictionary and summarize the costs. Depending on the structure you're looking to create, the build-up might look something like this:
>>> summary = {}
>>> for region, company, department, expense, cost in rows[1:]:
...     summary.setdefault((region, company, department), []).append((expense, cost))
Write the result out with json.dump (note that JSON keys must be strings, so the tuple keys need converting first):
>>> json.dump({' / '.join(k): v for k, v in summary.items()}, open('dest.json', 'w'))
Hopefully, the recursive function below will help get you started. It builds a tree from the input. Please be aware of what type you want your leaves to be; here they are the raw "cost" strings. You'll need to elaborate on the function to build up the exact structure you intend:
import csv, itertools, json

def cluster(rows):
    result = []
    # group consecutive rows by their first column, then recurse on the remaining columns
    for key, group in itertools.groupby(rows, key=lambda r: r[0]):
        group_rows = [row[1:] for row in group]
        if len(group_rows[0]) == 2:
            result.append({key: dict(group_rows)})
        else:
            result.append({key: cluster(group_rows)})
    return result

if __name__ == '__main__':
    s = '''\
Gondwanaland,Bobs Bits,Operations,nuts,332
Gondwanaland,Bobs Bits,Operations,bolts,254
Gondwanaland,Maureens Melons,Operations,nuts,123
'''
    rows = list(csv.reader(s.splitlines()))
    r = cluster(rows)
    print(json.dumps(r, indent=4))
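Since the question also asks for summed costs at each level, here is a hedged sketch of one way to bolt that onto the same grouping idea; cluster_with_totals and the 'total' key are my own naming, not part of the original answer:
import itertools

def cluster_with_totals(rows):
    # like cluster(), but every node also carries the summed cost of its subtree
    result = []
    for key, group in itertools.groupby(rows, key=lambda r: r[0]):
        group_rows = [row[1:] for row in group]
        if len(group_rows[0]) == 2:
            leaves = {expense: int(cost) for expense, cost in group_rows}
            result.append({key: leaves, 'total': sum(leaves.values())})
        else:
            children = cluster_with_totals(group_rows)
            result.append({key: children,
                           'total': sum(c['total'] for c in children)})
    return result

# e.g. print(json.dumps(cluster_with_totals(rows), indent=4))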
