Sending simultaneous JSON requests via Python 3.5 - python

So I have this and it's working:
import requests
import csv

my_list = [item1, item2, item3, item4]

def write_results(file, list_to_iterate):
    with open(file, 'w') as f:
        fields = ("column_item", "column_value")
        wr = csv.DictWriter(f, delimiter=",", fieldnames=fields, lineterminator='\n')
        wr.writeheader()
        for each_item in list_to_iterate:
            try:
                r = requests.get('https://www.somewebsite.com/api/something?this=' + each_item).json()
                value = str(r['value'])
            except:
                value = "none"
            wr.writerow({'column_item': each_item, 'column_value': value})

write_results('spreadsheet.csv', my_list)
I'm basically writing the outputs that I fetch from a JSON output page to a CSV. My function works great and operates exactly as intended. The only drawback is that I'm iterating through a huge list, so I have to send a ton of requests. I'm wondering whether I can send my requests in parallel rather than waiting for each response before sending the next one. The order in which I write to the CSV doesn't matter, so it isn't a problem if I receive responses out of order with respect to my list.
I've tried and looked at all these different methods, but I just can't get my head around a solid solution. All the examples I've seen are geared more towards web scraping and use a totally different set of modules (not requests).
Any help is gladly appreciated. This is driving me nuts. Also, if possible, I'd like to use as many native modules as possible.
Note: Python 3.5
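A minimal sketch of one way to do this, assuming the same endpoint and CSV layout as above: concurrent.futures (standard library since Python 3.2, so available in 3.5) runs the requests calls concurrently while a single thread writes the rows. The placeholder items in my_list and the choice of max_workers=10 are arbitrary.

import csv
import requests
from concurrent.futures import ThreadPoolExecutor

my_list = ["item1", "item2", "item3", "item4"]  # placeholder items

def fetch(item):
    # fetch one item; return (item, value), with "none" on any failure
    try:
        r = requests.get('https://www.somewebsite.com/api/something?this=' + item).json()
        return item, str(r['value'])
    except Exception:
        return item, "none"

def write_results(file, list_to_iterate, workers=10):
    with open(file, 'w') as f:
        fields = ("column_item", "column_value")
        wr = csv.DictWriter(f, delimiter=",", fieldnames=fields, lineterminator='\n')
        wr.writeheader()
        # the requests run in parallel; map() hands results back in input order
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for item, value in pool.map(fetch, list_to_iterate):
                wr.writerow({'column_item': item, 'column_value': value})

write_results('spreadsheet.csv', my_list)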

Related

How to write Python output to file from for loop?

I am using the Instaloader package to scrape some data from Instagram.
Ideally, I am looking to scrape the posts associated with a specific hashtag. I created the code below, and it outputs lines of scraped data, but I am unable to write this output to a .txt, .csv, or .json file.
I tried to first append the loop output to a list, but the list was empty. My efforts to output to a file have also been unsuccessful.
I know I am missing a step here; please provide any input that you have! Thank you.
import instaloader
import json

L = instaloader.Instaloader()

for posts in L.get_hashtag_posts('NewYorkCity'):
    L.download_hashtag('NewYorkCity', max_count=10)
    with open('output.json', 'w') as f:
        json.dump(posts, f)
    break
print('Done')
Looking at your code, it seems the indentation might be a bit off. When you use open() with the with statement, it wraps the file object in a try: / finally: for you.
In your case you created a variable f representing this file object. I believe that if you make the following change you can write your JSON data to the file.
import instaloader
import json

L = instaloader.Instaloader()

for posts in L.get_hashtag_posts('NewYorkCity'):
    L.download_hashtag('NewYorkCity', max_count=10)
    with open('output.json', 'w') as f:
        # json.dumps() returns a string that can be passed to f.write();
        # json.dump() writes to the file itself and returns None
        f.write(json.dumps(posts))
    break
print('Done')
I am sure you intended this, but if your goal with the break was to get only the first value returned by the loop, you could make this edit as well:
for posts in L.get_hashtag_posts('NewYorkCity')[0]:
This will return the first item in the list. If you would like the first 3, for example, you could use [:3].
See this tutorial on Python Data Structures.
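One caveat, as an assumption about Instaloader's API rather than a verified fact: if get_hashtag_posts() returns an iterator rather than a list, subscripting it with [0] or [:3] will raise a TypeError. itertools.islice works in either case:

from itertools import islice

# take only the first 3 posts, whether get_hashtag_posts() returns a list or an iterator
for posts in islice(L.get_hashtag_posts('NewYorkCity'), 3):
    ...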

When I run the code, it runs without errors, but the csv file is not created, why?

I found a tutorial and I'm trying to run this script; I have not worked with Python before.
tutorial
I've already checked what is happening through logging.debug, verified whether it is connecting to Google, and tried to create the CSV file with other scripts.
from urllib.parse import urlencode, urlparse, parse_qs
from lxml.html import fromstring
from requests import get
import csv

def scrape_run():
    with open('/Users/Work/Desktop/searches.txt') as searches:
        for search in searches:
            userQuery = search
            raw = get("https://www.google.com/search?q=" + userQuery).text
            page = fromstring(raw)
            links = page.cssselect('.r a')
            csvfile = '/Users/Work/Desktop/data.csv'
            for row in links:
                raw_url = row.get('href')
                title = row.text_content()
                if raw_url.startswith("/url?"):
                    url = parse_qs(urlparse(raw_url).query)['q']
                    csvRow = [userQuery, url[0], title]
                    with open(csvfile, 'a') as data:
                        writer = csv.writer(data)
                        writer.writerow(csvRow)
    print(links)

scrape_run()
The TL;DR of this script is that it performs three basic functions:
Locates and opens your searches.txt file.
Uses those keywords and searches the first page of Google for each result.
Creates a new CSV file and prints the results (keyword, URLs, and page titles).
Solved: Google added a captcha because I sent too many requests. It works when I use mobile internet.
Assuming the links variable is populated and contains data - please verify. If it is empty, test the API call itself; maybe it returns something different than you expected.
Other than that, I think you just need to tweak your file handling a little bit.
https://www.guru99.com/reading-and-writing-files-in-python.html
Here you can find some guidelines regarding file handling in Python.
In my view, you need to make sure you create the file first. Start with a script that is able to just create a file. After that, enhance the script so it can write to and append to the file. From there on, I think you are good to go and can continue with your script.
Other than that, I think you would prefer opening the file only once instead of on each loop iteration; it could mean much faster execution.
Let me know if something is not clear.
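A sketch of that last point, reusing the question's imports and structure: open both files once, before the loops, so every writerow() call reuses the same handle. The newline='' argument is the usual csv-module recommendation on Python 3.

def scrape_run():
    csvfile = '/Users/Work/Desktop/data.csv'
    with open('/Users/Work/Desktop/searches.txt') as searches, \
         open(csvfile, 'a', newline='') as data:
        writer = csv.writer(data)
        for search in searches:
            userQuery = search
            raw = get("https://www.google.com/search?q=" + userQuery).text
            page = fromstring(raw)
            for row in page.cssselect('.r a'):
                raw_url = row.get('href')
                title = row.text_content()
                if raw_url.startswith("/url?"):
                    url = parse_qs(urlparse(raw_url).query)['q']
                    writer.writerow([userQuery, url[0], title])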

Is there a way to quickly get rid of a lot of excess data with regex searches?

I'm trying to pull a few pieces of data for entry into a server. I've gotten the data from a web API, and it includes a lot of information that, to me, is garbage. I need to get rid of a ton of it, but I'm having issues with where to start. The data I need runs up until "abilities", and then starts again at "name":"Contherious". And here's that link. Most of my data processing has been trying to use regex searches, and the only pattern I can think of is that the names I need, unlike the names I don't need, have a space and are followed directly by an ID. I'm just unclear as to how to grab each and every one of these names, and any help would be appreciated.
I've tried
from re import search

DMG_DONE_FILE = "rawDmgDoneData.txt"
out = []
with open(DMG_DONE_FILE, 'r') as f:
    line = f.readline()
    while line:
        regex_id = search('^+"name":"\s"+(\w+)+"id":', line)
        if regex_id:
            out.append(regex_id.group(1))
        line = f.readline()
and I get errors because I generally don't know what I'm doing with regex searches
import sys
import json

# use urllib to fetch from the api
# the example here, for testing, reads from a local file
f = open('file.json', 'r')
data = f.read()
f.close()

entries = json.loads(data)
Now you have a data structure that you can easily address, e.g. entries['entries'][0]['name'].
Alternatively, using jq (https://stedolan.github.io/jq/):
cat file.json | jq '.entries[] | {name: .name, id: .id, type: .type, itemLevel: .itemLevel, icon: .icon, total: .total, activeTime: .activeTime, activeTimeReduced: .activeTimeReduced}'
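If the goal is just the names and ids the question mentions, a short sketch, assuming each record sits under entries['entries'] and has 'name' and 'id' keys as in the example address above:

# collect (name, id) pairs from the parsed structure instead of regexing the raw text
pairs = [(e['name'], e['id']) for e in entries['entries']]
for name, entry_id in pairs:
    print(name, entry_id)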

How to improve performance through Python multithreading

I'm new to Python and multithreading, so please bear with me.
I'm writing a script to process domains in a list through Web of Trust, a service that ranks websites from 1-100 on a scale of "trustworthiness", and write them to a CSV. Unfortunately Web of Trust's servers can take quite a while to respond, and processing 100k domains can take hours.
My attempts at multithreading so far have been disappointing -- attempting to modify the script from this answer gave threading errors, I believe because some threads took too long to resolve.
Here's my unmodified script. Can someone help me multithread it, or point me to a good multithreading resource? Thanks in advance.
import urllib
import re

text = open("top100k", "r")
text = text.read()
text = re.split("\n+", text)

out = open('output.csv', 'w')

for element in text:
    try:
        content = urllib.urlopen("http://api.mywot.com/0.4/public_query2?target=" + element)
        content = content.read()
        content = content[content.index('<application name="0" r="'):content.index('" c')]
        content = element + "," + content[25] + content[26] + "\n"
        out.write(content)
    except:
        pass
A quick scan through the WoT API documentation shows that as well as the public_query2 request that you are using, there is a public_query_json request that lets you get the data in batches of up to 100. I would suggest using that before you start flooding their server with lots of requests in parallel.
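A sketch of the batching side only; I have not verified the exact parameters public_query_json expects, so the request itself is left as a placeholder. Splitting the domain list (the text variable in the question) into groups of 100 is the part that is independent of the API:

def chunks(seq, size=100):
    # yield successive slices of at most `size` items
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

for batch in chunks(text, 100):
    # build one request per batch of up to 100 domains here,
    # in whatever format the public_query_json documentation specifies
    pass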

Is there a memory efficient and fast way to load big JSON files?

I have some JSON files of 500 MB each.
If I use the "trivial" json.load() to load the content of one all at once, it will consume a lot of memory.
Is there a way to read the file partially? If it were a text, line-delimited file, I would be able to iterate over the lines. I am looking for an analogy to that.
There was a duplicate to this question that had a better answer. See https://stackoverflow.com/a/10382359/1623645, which suggests ijson.
Update:
I tried it out, and ijson is to JSON what SAX is to XML. For instance, you can do this:
import ijson

for prefix, the_type, value in ijson.parse(open(json_file_name)):
    print(prefix, the_type, value)
where prefix is a dot-separated index in the JSON tree (what happens if your key names have dots in them? I guess that would be bad for JavaScript, too...), the_type describes a SAX-like event, one of 'null', 'boolean', 'number', 'string', 'map_key', 'start_map', 'end_map', 'start_array', 'end_array', and value is the value of the object, or None if the_type is an event like starting/ending a map/array.
The project has some docstrings, but not enough global documentation. I had to dig into ijson/common.py to find what I was looking for.
So the problem is not that each file is too big, but that there are too many of them, and they seem to be adding up in memory. Python's garbage collector should be fine, unless you are keeping around references you don't need. It's hard to tell exactly what's happening without any further information, but some things you can try:
Modularize your code. Do something like:
for json_file in list_of_files:
    process_file(json_file)
If you write process_file() in such a way that it doesn't rely on any global state, and doesn't change any global state, the garbage collector should be able to do its job.
Deal with each file in a separate process. Instead of parsing all the JSON files at once, write a program that parses just one, and pass each one in from a shell script, or from another Python process that calls your script via subprocess.Popen. This is a little less elegant, but if nothing else works, it will ensure that you're not holding on to stale data from one file to the next.
Hope this helps.
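A minimal sketch of the second option, with parse_one.py standing in as a hypothetical script that loads and processes exactly one JSON file:

import subprocess
import sys

# run a separate interpreter per file, so all of its memory is released when it exits
for json_file in list_of_files:
    subprocess.check_call([sys.executable, "parse_one.py", json_file])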
Yes.
You can use jsonstreamer, a SAX-like push parser that I have written, which will allow you to parse arbitrarily sized chunks; you can get it here and check out the README for examples. It's fast because it uses the C yajl library.
It can be done by using ijson. The workings of ijson have been very well explained by Jim Pivarski in the answer above. The code below will read a file and print each JSON object from the list. For example, the file content is as below:
[{"name": "rantidine", "drug": {"type": "tablet", "content_type": "solid"}},
{"name": "nicip", "drug": {"type": "capsule", "content_type": "solid"}}]
You can print every element of the array using the below method
import ijson

def extract_json(filename):
    with open(filename, 'rb') as input_file:
        jsonobj = ijson.items(input_file, 'item')
        jsons = (o for o in jsonobj)
        for j in jsons:
            print(j)
Note: 'item' is the default prefix given by ijson.
If you want to access only specific JSON objects based on a condition, you can do it in the following way:
def extract_tabtype(filename):
    with open(filename, 'rb') as input_file:
        # the example data above uses the key "drug", so the prefix is 'item.drug'
        objects = ijson.items(input_file, 'item.drug')
        tabtype = (o for o in objects if o['type'] == 'tablet')
        for prop in tabtype:
            print(prop)
This will print only those JSON objects whose type is tablet.
On your mention of running out of memory, I must question whether you're actually managing memory. Are you using the del keyword to remove your old object before trying to read a new one? Python should never silently retain something in memory if you remove it.
Update
See the other answers for advice.
Original answer from 2010, now outdated
Short answer: no.
Properly dividing a json file would take intimate knowledge of the json object graph to get right.
However, if you have this knowledge, then you could implement a file-like object that wraps the json file and spits out proper chunks.
For instance, if you know that your json file is a single array of objects, you could create a generator that wraps the json file and returns chunks of the array.
You would have to do some string content parsing to get the chunking of the json file right.
I don't know what generates your JSON content. If possible, I would consider generating a number of manageable files instead of one huge file.
Another idea is to try load it into a document-store database like MongoDB.
It deals with large blobs of JSON well, although you might run into the same problem loading the JSON; avoid that by loading the files one at a time.
If this path works for you, then you can interact with the JSON data via their client and potentially not have to hold the entire blob in memory.
http://www.mongodb.org/
"the garbage collector should free the memory"
Correct.
Since it doesn't, something else is wrong. Generally, the problem with infinite memory growth is global variables.
Remove all global variables.
Make all module-level code into smaller functions.
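As a sketch of what that refactor looks like in the smallest case (process_file is the per-file function from the answer above, and the file list is a placeholder): names created inside main() go out of scope when it returns, unlike module-level variables.

def main():
    # everything that used to sit at module level lives here instead
    list_of_files = ["a.json", "b.json"]   # placeholder input
    for json_file in list_of_files:
        process_file(json_file)            # no globals kept alive between files

if __name__ == "__main__":
    main()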
In addition to @codeape:
I would try writing a custom JSON parser to help you figure out the structure of the JSON blob you are dealing with. Print out the key names only, etc. Make a hierarchical tree and decide (yourself) how you can chunk it. This way you can do what @codeape suggests - break the file up into smaller chunks, etc.
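A sketch of the "print out the key names only" idea using ijson (mentioned in other answers here) rather than a hand-written parser; file.json is a placeholder path:

import ijson

# walk the file as a stream of events and print each distinct key path once,
# which is usually enough to decide where to chunk the document
seen = set()
with open('file.json', 'rb') as f:
    for prefix, event, value in ijson.parse(f):
        if event == 'map_key' and (prefix, value) not in seen:
            seen.add((prefix, value))
            print(prefix or '<root>', '->', value)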
You can convert the JSON file to a CSV file, parsing it line by line:
import ijson
import csv

def convert_json(file_path):
    did_write_headers = False
    headers = []
    row = []
    iterable_json = ijson.parse(open(file_path, 'r'))
    with open(file_path + '.csv', 'w') as csv_file:
        csv_writer = csv.writer(csv_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        for prefix, event, value in iterable_json:
            if event == 'end_map':
                if not did_write_headers:
                    csv_writer.writerow(headers)
                    did_write_headers = True
                csv_writer.writerow(row)
                row = []
            if event == 'map_key' and not did_write_headers:
                headers.append(value)
            if event == 'string':
                row.append(value)
Simply using json.load() will take a lot of time. Instead, you can load the JSON data line by line (assuming one JSON object per line), read each line's key/value pairs into a dictionary, append that dictionary to a final dictionary, and convert it to a pandas DataFrame, which will help with further analysis:
import json
import pandas as pd

def get_data():
    with open('Your_json_file_name', 'r') as f:
        for line in f:
            yield line

data = get_data()
data_dict = {}

for i, line in enumerate(data):
    each = {}
    # k and v are the key and value pair of one record
    for k, v in json.loads(line).items():
        # print(f'{k}: {v}')
        each[f'{k}'] = f'{v}'
    data_dict[i] = each

Data = pd.DataFrame(data_dict)
# Data holds the dictionary data as a DataFrame (table format), but in
# transposed form, so finally transpose it:
Data_1 = Data.T
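If you prefer to skip the explicit transpose, pandas can build the same row-oriented frame directly:

# equivalent result without Data.T: the dict keys become the row index
Data_1 = pd.DataFrame.from_dict(data_dict, orient='index')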
