Looking up content and exporting results - Python

Please bear with me, Python newbie here.
EDIT: A more generalized question: how can I export something like this:
def lookup(x):
    print(something)

lookup(request)
output = open(output, 'w').write(content_of_request)
Original Post
I have a file with dictionary-structured content (dicx) where I look up stuff based on an input request (request). Now I'd like to export the requested entries into a new file, but I'm having trouble doing this; specifically, I don't know how to store the result of the request somewhere so it can be written out.
Here's the simplified version of my code:
from dicx import list_X

def writefile(x, y, z):
    x = open(y, 'w').write(z)

def lookup(x):
    print(list_X[table_Y]['name_Z1'])
    print(list_X[table_Y]['name_Z2'])

request = raw_input()

if request in list_X:
    lookup(request)
    writefile(output, output, content)
I think it gives you a general idea as to what I'm trying to do, but here's the complete code: http://pastebin.com/HBuihPPF

Ah, now I hopefully understand what you want. You should use return values, just as in other programming languages.
def lookup(x):
    return something

data = lookup(request)
open(output, 'w').write(data)
In Python you can also use tuples and return multiple values. But I would only do that where it makes sense. Take care of separation of concerns and single responsibility of a function/method.
def lookup(x):
    return something, whatever

a, b = lookup(request)
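Applied to your simplified snippet, a minimal sketch could look like the following (the 'name_Z1'/'name_Z2' keys, the direct list_X[request] lookup, and the output filename are assumptions based on your code, not a definitive fix):

from dicx import list_X

def lookup(request):
    # Build and return the text instead of printing it.
    entry = list_X[request]
    return entry['name_Z1'] + '\n' + entry['name_Z2'] + '\n'

request = raw_input()
if request in list_X:
    content = lookup(request)
    with open('output.txt', 'w') as f:
        f.write(content)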

Related

How to pass a variable to JSON in Python?

I am new to working with JSON, so sorry in advance for the stupid question.
I want to write JSON with the variable in the value field. It looks like this:
def print_json(user_name):
    opened_json = open('way/to/json/file')
    tmp = json.load(opened_json)
    res = tmp(['path_to_folder'](user_name))
    print(res)

def main(user_name):
    print_json(user_name)

main('user')
This is the JSON:
{"path_to_folder": "/Users/" + user_name + "/my_folder/"}
Expected output:
/Users/user/my_folder/
Please tell me if a solution exists.
Thanks in advance!
EDIT: My problem is that I can't add the variable to the JSON correctly; the editor marks it red as a syntax error when I try to concatenate.
What you want isn't directly possible in JSON, because it doesn't support "templating".
One solution would be to use a templating language such as Jinja to write a JSON template, then load this file without the json library and fill in the values using Jinja, and finally use json.loads to load a dictionary from your rendered string.
Your json-like file could look something like this:
{"path_to_folder": "/Users/{{ user_name }}/my_folder/"}
Your Python code:
import json
from jinja2 import Environment, FileSystemLoader

env = Environment(
    loader=FileSystemLoader("path/to/template")
)
template = env.get_template("template_filename.json")

def print_json(username):
    return json.loads(
        template.render(user_name=username)
    )

...
In fact, if this is a simple one-time thing, it might even be better to use Python's built-in templating. I would recommend old-style formatting in the case of JSON, because otherwise you'll have to escape a lot of braces:
JSON file:
{"path_to_folder": "/Users/%(user_name)s/my_folder/"}
"Rendering":
with open("path/to/json") as f:
    rendered = json.loads(f.read() % {"user_name": username})
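For completeness, a quick usage sketch of this %-formatting approach wired into the question's print_json (the file path is an assumption and should point at the JSON file above):

import json

def print_json(user_name):
    # The file is assumed to contain: {"path_to_folder": "/Users/%(user_name)s/my_folder/"}
    with open("path/to/json") as f:
        rendered = json.loads(f.read() % {"user_name": user_name})
    print(rendered["path_to_folder"])

print_json("user")  # -> /Users/user/my_folder/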

How to write Python output to file from for loop?

I am using the Instaloader package to scrape some data from Instagram.
Ideally, I am looking to scrape the posts associated with a specific hashtag. I created the code below, and it outputs lines of scraped data, but I am unable to write this output to a .txt, .csv, or .json file.
I tried to first append the loop output to a list, but the list was empty. My efforts to output to a file have also been unsuccessful.
I know I am missing a step here, please provide any input that you have! Thank you.
import instaloader
import json

L = instaloader.Instaloader()

for posts in L.get_hashtag_posts('NewYorkCity'):
    L.download_hashtag('NewYorkCity', max_count=10)
    with open('output.json', 'w') as f:
        json.dump(posts, f)
    break

print('Done')
Looking at your code, it seems the spacing might be a bit off. When you use the open command with the "with" statement it does a try: / finally: on the file object.
In your case you created a variable f representing this file object. I believe if you make the following change you can write your json data to the file.
import instaloader
import json

L = instaloader.Instaloader()

for posts in L.get_hashtag_posts('NewYorkCity'):
    L.download_hashtag('NewYorkCity', max_count=10)
    with open('output.json', 'w') as f:
        f.write(json.dumps(posts))
    break

print('Done')
I am sure you intended this, but if your goal with the break was only to get the first value from the loop, you could make this edit as well:
for posts in L.get_hashtag_posts('NewYorkCity')[:1]:
This will return the first item in the list.
If you would like the first 3, for example, you could do [:3]
See this tutorial on Python Data Structures
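Note that get_hashtag_posts may return a lazy iterator rather than a list, in which case indexing and slicing won't work directly; a hedged alternative is itertools.islice:

from itertools import islice

# Take only the first 3 posts from the (possibly lazy) iterator.
for post in islice(L.get_hashtag_posts('NewYorkCity'), 3):
    print(post)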

How do I load JSON into Couchbase Headless Server in Python?

I am trying to create a Python script that can take a JSON object and insert it into a headless Couchbase server. I have been able to successfully connect to the server and insert some data. I'd like to be able to specify the path of a JSON object and upsert that.
So far I have this:
from couchbase.bucket import Bucket
from couchbase.exceptions import CouchbaseError
import json

cb = Bucket('couchbase://XXX.XXX.XXX?password=XXXX')
print cb.server_nodes

# tempJson = json.loads(open("myData.json","r"))

try:
    result = cb.upsert('healthRec', {'record': 'bob'})
    # result = cb.upsert('healthRec', {'record': tempJson})
except CouchbaseError as e:
    print "Couldn't upsert", e
    raise

print(cb.get('healthRec').value)
I know that the first commented-out line that loads the JSON is incorrect, because json.loads expects a string rather than a file object... Can anyone help?
Thanks!
Figured it out:
with open('myData.json', 'r') as f:
    data = json.load(f)

try:
    result = cb.upsert('healthRec', {'record': data})
except CouchbaseError as e:
    print "Couldn't upsert", e
    raise
I am looking into using cbdocloader, but this was my first step getting this to work. Thanks!
I know that you've found a solution that works for you in this instance but I thought I'd correct the issue that you experienced in your initial code snippet.
json.loads() takes a string as an input and decodes the json string into a dictionary (or whatever custom object you use based on the object_hook), which is why you were seeing the issue as you are passing it a file handle.
There is actually a method json.load() which works as expected, as you have used in your eventual answer.
You would have been able to use it as follows (if you wanted something slightly less verbose than the with statement):
tempJson = json.load(open("myData.json","r"))
As Kirk mentioned, though, if you have a large number of JSON documents to insert, it might be worth taking a look at cbdocloader, as it will handle all of this boilerplate code for you (with appropriate error handling and other functionality).
This readme covers the uses of cbdocloader and how to format your data correctly to allow it to load your documents into Couchbase Server.
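Putting the pieces from this thread together, a hedged end-to-end sketch (reusing the connection string, key, and file name from the question; not the official recommended pattern) might look like this:

import json

from couchbase.bucket import Bucket
from couchbase.exceptions import CouchbaseError

cb = Bucket('couchbase://XXX.XXX.XXX?password=XXXX')

# Load the JSON document from disk, then upsert it under the same key as above.
with open('myData.json', 'r') as f:
    tempJson = json.load(f)

try:
    cb.upsert('healthRec', {'record': tempJson})
except CouchbaseError as e:
    print("Couldn't upsert: %s" % e)
    raise

print(cb.get('healthRec').value)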

Conditional Python for loop based on YAML data

I am new to Python so please bear with me. I am trying to figure out how to loop through a set of values in a YAML file. The file is parsed using PyYAML and then would need to be fed into the loop. Here is some YAML for example:
dohicky.yml
---
#Example file
dohicky:
  "1":
    Stuff:
      - Data
      - Data
    Morestuff:
      - Data
      - Data
  "2":
    Stuff:
      - Data
      - Data
    Morestuff:
      - Data
      - Data
  "n":
    - Etc
First, I am pulling the contents of the YAML out.
import yaml
f = open('dohicky.yml')
dohicky = yaml.safe_load(f)
f.close()
Now I just need either a for loop or a while loop to iterate through each numbered "id" under "dohicky".
for x in xrange(1, 2):
So obviously this would work, but it is statically defined for only 2 elements. I'm not sure how to do something like, for example:
"do while" id = dohicky["dohicky"]["x"] is true. #Not code, just concept!
The other problem I am immediately running into is how to then create an object inside this loop. For example:
id(x) = dohicky(Pass other info from YAML to class) #Not code, just concept!
Unfortunately I am not familiar enough with Python (or PyYAML) yet to understand the syntax. Any help is MUCH appreciated!
* UPDATE *
This is kind of pseudocode, but you should at least understand what I am trying to do.
import yaml

f = open('dohicky.yml')
dohicky = yaml.safe_load(f)
f.close()

for x in dohicky["dohicky"]["x"]:
    test = dohicky["dohicky"]["1"]["Stuff"]
    print test
In this test, I am just printing the output of "Stuff", but in reality, I need to create an object using that data.
You can use the following code to iterate through each numbered "id" under "dohicky":
for dohicky_id in dohicky['dohicky']:
    stuff = dohicky['dohicky'][dohicky_id]['Stuff']
In this case, stuff is a list of Data entries. If you can't, for whatever reason, work with the dictionary directly and you want to convert the entries into objects, have a look at the following question: Convert Python dict to object?
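A minimal sketch of that conversion, using types.SimpleNamespace purely for illustration (it assumes each numbered entry is a mapping with Stuff/Morestuff keys, which the "n" entry above is not, so it is skipped):

from types import SimpleNamespace

entries = {}
for dohicky_id, entry in dohicky['dohicky'].items():
    if not isinstance(entry, dict):
        continue  # e.g. the "n" entry above is a plain list, not a mapping
    entries[dohicky_id] = SimpleNamespace(
        stuff=entry.get('Stuff', []),
        morestuff=entry.get('Morestuff', []),
    )

print(entries['1'].stuff)  # ['Data', 'Data']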

Is there a memory efficient and fast way to load big JSON files?

I have some JSON files of 500 MB each.
If I use the "trivial" json.load() to load its content all at once, it will consume a lot of memory.
Is there a way to read the file partially? If it were a text, line-delimited file, I would be able to iterate over the lines. I am looking for an analogy to that.
There was a duplicate to this question that had a better answer. See https://stackoverflow.com/a/10382359/1623645, which suggests ijson.
Update:
I tried it out, and ijson is to JSON what SAX is to XML. For instance, you can do this:
import ijson

for prefix, the_type, value in ijson.parse(open(json_file_name)):
    print prefix, the_type, value
where prefix is a dot-separated index in the JSON tree (what happens if your key names have dots in them? I guess that would be bad for Javascript, too...), the_type describes a SAX-like event, one of 'null', 'boolean', 'number', 'string', 'map_key', 'start_map', 'end_map', 'start_array', 'end_array', and value is the value of the object or None if the_type is an event like starting/ending a map/array.
The project has some docstrings, but not enough global documentation. I had to dig into ijson/common.py to find what I was looking for.
So the problem is not that each file is too big, but that there are too many of them, and they seem to be adding up in memory. Python's garbage collector should be fine, unless you are keeping around references you don't need. It's hard to tell exactly what's happening without any further information, but some things you can try:
Modularize your code. Do something like:
for json_file in list_of_files:
    process_file(json_file)
If you write process_file() in such a way that it doesn't rely on any global state, and doesn't change any global state, the garbage collector should be able to do its job.
Deal with each file in a separate process. Instead of parsing all the JSON files at once, write a program that parses just one, and pass each one in from a shell script, or from another Python process that calls your script via subprocess.Popen. This is a little less elegant, but if nothing else works, it will ensure that you're not holding on to stale data from one file to the next.
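A hedged sketch of that second suggestion; parse_one.py is a hypothetical script that loads and processes a single JSON file passed on the command line:

import subprocess

for json_file in list_of_files:
    # Each file is handled by its own interpreter, so its memory is
    # released when the child process exits.
    proc = subprocess.Popen(["python", "parse_one.py", json_file])
    proc.wait()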
Hope this helps.
Yes.
You can use jsonstreamer, a SAX-like push parser that I have written, which will allow you to parse arbitrarily sized chunks; you can get it here and check out the README for examples. It's fast because it uses the C yajl library.
It can be done by using ijson. The workings of ijson have been very well explained by Jim Pivarski in the answer above. The code below will read a file and print each JSON object from the list. For example, the file content is as below:
[{"name": "rantidine", "drug": {"type": "tablet", "content_type": "solid"}},
{"name": "nicip", "drug": {"type": "capsule", "content_type": "solid"}}]
You can print every element of the array using the below method
import ijson

def extract_json(filename):
    with open(filename, 'rb') as input_file:
        jsonobj = ijson.items(input_file, 'item')
        jsons = (o for o in jsonobj)
        for j in jsons:
            print(j)
Note: 'item' is the default prefix given by ijson.
If you want to access only specific JSON objects based on a condition, you can do it in the following way:
def extract_tabtype(filename):
    with open(filename, 'rb') as input_file:
        objects = ijson.items(input_file, 'item.drug')
        tabtype = (o for o in objects if o['type'] == 'tablet')
        for prop in tabtype:
            print(prop)
This will print only those entries whose drug type is tablet.
On your mention of running out of memory I must question if you're actually managing memory. Are you using the "del" keyword to remove your old object before trying to read a new one? Python should never silently retain something in memory if you remove it.
Update
See the other answers for advice.
Original answer from 2010, now outdated
Short answer: no.
Properly dividing a json file would take intimate knowledge of the json object graph to get right.
However, if you have this knowledge, then you could implement a file-like object that wraps the json file and spits out proper chunks.
For instance, if you know that your json file is a single array of objects, you could create a generator that wraps the json file and returns chunks of the array.
You would have to do some string content parsing to get the chunking of the json file right.
I don't know what generates your JSON content. If possible, I would consider generating a number of manageable files instead of one huge file.
Another idea is to try loading it into a document-store database like MongoDB.
It deals with large blobs of JSON well, although you might run into the same problem loading the JSON; avoid it by loading the files one at a time.
If this path works for you, then you can interact with the JSON data via their client and potentially not have to hold the entire blob in memory.
http://www.mongodb.org/
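For illustration, a hedged sketch of loading the files one at a time into MongoDB with pymongo (the database and collection names here are made up):

import json
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
collection = client["mydb"]["big_json"]

for path in list_of_files:
    with open(path) as f:
        collection.insert_one(json.load(f))  # one document per file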
"the garbage collector should free the memory"
Correct.
Since it doesn't, something else is wrong. Generally, the problem with infinite memory growth is global variables.
Remove all global variables.
Make all module-level code into smaller functions.
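A minimal sketch of that refactor; nothing lives at module level, so each file's data becomes collectable as soon as process_file returns:

import json
import sys

def process_file(path):
    with open(path) as f:
        data = json.load(f)
    # ... do the actual work with data here ...

def main(paths):
    for path in paths:
        process_file(path)

if __name__ == "__main__":
    main(sys.argv[1:])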
In addition to @codeape's answer:
I would try writing a custom JSON parser to help you figure out the structure of the JSON blob you are dealing with. Print out the key names only, etc. Make a hierarchical tree and decide (yourself) how you can chunk it. This way you can do what @codeape suggests: break the file up into smaller chunks, etc.
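Rather than writing a parser from scratch, a hedged sketch of the "print out the key names only" idea using ijson's streaming events (the file name is made up):

import ijson

# Walk the file as a stream of events and print each distinct key path once.
seen = set()
with open("huge.json", "rb") as f:
    for prefix, event, value in ijson.parse(f):
        if event == "map_key" and (prefix, value) not in seen:
            seen.add((prefix, value))
            print(prefix, "->", value)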
You can convert the JSON file to a CSV file and then process it line by line:
import ijson
import csv

def convert_json(file_path):
    did_write_headers = False
    headers = []
    row = []
    iterable_json = ijson.parse(open(file_path, 'r'))
    with open(file_path + '.csv', 'w') as csv_file:
        csv_writer = csv.writer(csv_file, delimiter=',', quotechar='"',
                                quoting=csv.QUOTE_MINIMAL)
        for prefix, event, value in iterable_json:
            if event == 'end_map':
                if not did_write_headers:
                    csv_writer.writerow(headers)
                    did_write_headers = True
                csv_writer.writerow(row)
                row = []
            if event == 'map_key' and not did_write_headers:
                headers.append(value)
            if event == 'string':
                row.append(value)
Simply using json.load() will take a lot of time. Instead, if the file is line-delimited JSON, you can load the data line by line into a dictionary of key/value pairs, append each of those dictionaries to one overall dictionary, and convert that to a pandas DataFrame, which will help with further analysis.
import json
import pandas as pd

def get_data():
    with open('Your_json_file_name', 'r') as f:
        for line in f:
            yield line

data = get_data()
data_dict = {}

for i, line in enumerate(data):
    each = {}
    # k and v are the key and value pair
    for k, v in json.loads(line).items():
        # print(f'{k}: {v}')
        each[f'{k}'] = f'{v}'
    data_dict[i] = each

Data = pd.DataFrame(data_dict)
# Data holds the dictionary data as a DataFrame (table format), but it is
# transposed, so finally transpose the DataFrame:
Data_1 = Data.T
