In Python, trying to convert a geocoded TSV file into GeoJSON format

I'm trying to convert a geocoded TSV file into GeoJSON format but I'm having trouble with it. Here's the code:
import geojson
import csv

def create_map(datafile):
    geo_map = {"type": "FeatureCollection"}
    item_list = []
    datablock = list(csv.reader(datafile))
    for i, line in enumerate(datablock):
        data = {}
        data['type'] = 'Feature'
        data['id'] = i
        data['properties'] = {'title': line['Movie Title'],
                              'description': line['Amenities'],
                              'date': line['Date']}
        data['name'] = {line['Location']}
        data['geometry'] = {'type': 'Point',
                            'coordinates': (line['Lat'], line['Lng'])}
        item_list.append(data)
    for point in item_list:
        geo_map.setdefault('features', []).append(point)
    with open("thedamngeojson.geojson", 'w') as f:
        f.write(geojson.dumps(geo_map))

create_map('MovieParksGeocode2.tsv')
I'm getting a TypeError: list indices must be integers, not str on the data['properties'] line, but I don't understand: isn't that how I set values for the GeoJSON fields?
The file I'm reading from has values under these keys: Location, Movie Title, Date, Amenities, Lat, Lng
The file is viewable here: https://github.com/yongcho822/Movies-in-the-park/blob/master/MovieParksGeocodeTest.tsv
Thanks guys, much appreciated as always.

You have a couple things going on here that need to get fixed.
1. Your TSV contains newlines inside double-quoted fields. I don't think this is intended, and it will cause some problems.
Location Movie Title Date Amenities Formatted_Address Lat Lng
"
Edgebrook Park, Chicago " A League of Their Own 7-Jun "
Family friendly activities and games. Also: crying is allowed." Edgebrook Park, 6525 North Hiawatha Avenue, Chicago, IL 60646, USA 41.9998876 -87.7627672
"
2. You don't need the geojson module to dump out JSON, which is all GeoJSON is. Just import json instead.
3. You are trying to read a TSV, but you don't pass the delimiter='\t' option that is needed for that.
4. You are trying to read keys off the rows, but you aren't using DictReader, which does that for you. Hence the TypeError about indices you mention above.
Check out my revised code block below. You still need to fix your TSV to be a valid TSV.
import csv
import json

def create_map(datafile):
    geo_map = {"type": "FeatureCollection"}
    item_list = []
    with open(datafile, 'r') as tsvfile:
        reader = csv.DictReader(tsvfile, delimiter='\t')
        for i, line in enumerate(reader):
            print(line)
            data = {}
            data['type'] = 'Feature'
            data['id'] = i
            data['properties'] = {'title': line['Movie Title'],
                                  'description': line['Amenities'],
                                  'date': line['Date']}
            # a plain string, not a set -- a set is not JSON serializable
            data['name'] = line['Location']
            # GeoJSON expects numeric coordinates in [longitude, latitude] order
            data['geometry'] = {'type': 'Point',
                                'coordinates': (float(line['Lng']), float(line['Lat']))}
            item_list.append(data)
    for point in item_list:
        geo_map.setdefault('features', []).append(point)
    with open("thedamngeojson.geojson", 'w') as f:
        f.write(json.dumps(geo_map))

create_map('MovieParksGeocode2.tsv')
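To sanity-check the result, you can load the written file back with json and look at the first feature (a quick check, assuming the script above has already been run against your TSV):
import json

# Load the file written above and inspect the first feature
with open("thedamngeojson.geojson") as f:
    geo = json.load(f)

print(geo["type"])                       # FeatureCollection
print(geo["features"][0]["properties"])  # title/description/date of the first row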

Related

How to read txt file data and convert it into a nested dictionary?

I have this txt file but I'm having trouble converting it into a nested dictionary in Python. The txt file only has the values of the Pokemon but is missing the keys such as 'quantity' or 'fee'. Below is the content of the txt file. (I can change the txt file if needed.)
charmander,3,100,fire
squirtle,2,50,water
bulbasaur,5,25,grass
gyrados,1,1000,water flying
This is my desired dictionary:
pokemon = {
    'charmander': {'quantity': 3, 'fee': 100, 'powers': ['fire']},
    'squirtle': {'quantity': 2, 'fee': 50, 'powers': ['water']},
    'bulbasaur': {'quantity': 5, 'fee': 25, 'powers': ['grass']},
    'gyrados': {'quantity': 1, 'fee': 1000, 'powers': ['water', 'flying']}
}
Read the text file into lines, then split each line on ",". For powers, split that field again on " ". Then package each extracted piece of information into your dict structure, as below.
with open('pokemonInfo.txt') as f:
    data = f.readlines()

pokemon = {}
for r in data:
    fields = r.split(",")
    p_name = fields[0]
    qty = int(fields[1])    # quantity and fee as ints, to match the desired dict
    fee = int(fields[2])
    powers = fields[3]
    pokemon[p_name] = {"quantity": qty, "fee": fee,
                       "powers": [p.strip() for p in powers.split(" ")]}

for record in pokemon.items():
    print(record)
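Since the file is just comma-separated values, the same parse can also be sketched with the csv module (an optional alternative, not required):
import csv

pokemon = {}
with open('pokemonInfo.txt', newline='') as f:
    for name, qty, fee, powers in csv.reader(f):
        pokemon[name] = {'quantity': int(qty),
                         'fee': int(fee),
                         'powers': powers.split()}  # 'water flying' -> ['water', 'flying']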

OSM Overpass missing data in query result

I'm gathering all cities, towns and villages of some countries from OSM using an Overpass query in a Python program.
Everything seems to be correct, but I found a town in Luxembourg that is missing in my result set. It concerns the town Kiischpelt.
import requests
import json

Country = 'LU'

overpass_url = "http://overpass-api.de/api/interpreter"
overpass_query = """
[out:json];
area["ISO3166-1"=""" + Country + """][admin_level=2]->.search;
(node["place"="city"](area.search);
node["place"="town"](area.search);
node["place"="village"](area.search);
way["place"="city"](area.search);
way["place"="town"](area.search);
way["place"="village"](area.search);
rel["place"="city"](area.search);
rel["place"="town"](area.search);
rel["place"="village"](area.search);
);
out center;
"""
response = requests.get(overpass_url,
                        params={'data': overpass_query})
data = response.json()

filename = """C:/Data/GetGeoData/data/""" + Country + 'cities' + '.json'
f = open(filename, 'w', encoding="utf-8")
json.dump(data, f)
f.close()
When searching on the OSM site for Kiischpelt, I get a result of type relation, but it doesn't appear in my result set.
It also doesn't appear when I change the query to rel["place"]; which should return places of all kinds (city, town, village, isolated dwelling, ...).
Any idea what I'm doing wrong?
Many thanks!

How do I create a loop such that I get all the queries into one CSV through Python?

I have created a function that fetches price, rating, etc. after it hits an API:
def is_priced(business_id):
    try:
        priced_ind = get_business(API_KEY, business_id)
        priced_ind1 = priced_ind['price']
    except:
        priced_ind1 = 'None'
    return priced_ind1

priced_ind = is_priced(b_id)
print(priced_ind)
Similarly, for rating:
def is_rated(business_id):
    try:
        rated_ind = get_business(API_KEY, business_id)
        rated_ind1 = rated_ind['rating']
    except:
        rated_ind1 = 'None'
    return rated_ind1
However, I want my function to loop through the business names I have in my CSV file, fetch all this data, and export it to a new CSV file with these two parameters beside the names of the businesses.
The CSV file has the name of each business along with its address, city, state, zip and country.
Eg:
Name address city state zip country
XYZ(The) 5* WE 223899th St. New York NY 19921 US
My output:
Querying https://api.xyz.com/v3/businesses/matches ...
True
Querying https://api.xyz.com/v3/businesses/matches ...
4.0
Querying https://api.xyz.com/v3/businesses/matches ...
$$
Querying https://api.xyz.com/v3/businesses/matches ...
Querying https://api.xyz.com/v3/businesses/matches ...
The real issue is that my output only returns the business id in the CSV, and the rating etc., as you see, is just printed to the console. How do I set up a loop such that it returns the info I want for all the businesses into a single CSV?
The csv module is useful for this sort of thing, e.g.
import csv

with open('f.csv', 'r') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='"')
    with open('tmp.csv', 'w') as output:
        writer = csv.writer(output)
        for row in reader:
            business_id = row[0]
            # get_price_index / get_rate_index stand in for your is_priced / is_rated lookups
            row.append(get_price_index(business_id))
            row.append(get_rate_index(business_id))
            writer.writerow(row)
You can read the business names from the CSV file, iterate over them using a for loop, hit the API and store the results, and write to a new CSV file.
import csv

data = []
with open('businesses.csv') as fp:
    # skip header line
    header = next(fp)
    reader = csv.reader(fp)
    for row in reader:
        b_name = row[0]
        # not sure how you get the business ID:
        b_id = get_business_id(b_name)
        p = is_priced(b_id)
        r = is_rated(b_id)
        data.append((b_name, p, r))

# write out the results
with open('business_data.csv', 'w') as fp:
    writer = csv.writer(fp)
    writer.writerow(['name', 'price', 'rating'])
    for row in data:
        writer.writerow(row)
You can do this easily using pandas:
import pandas as pd

df = pd.read_csv('your_csv.csv', usecols=['business_name'])  # since you only need the name
# your functions will receive a row containing business_name
df['price'] = df.apply(is_priced, axis=1)
df['rating'] = df.apply(is_rated, axis=1)
df.to_csv('result.csv', index=False)
All you have to do in your functions is:
def is_priced(row):
    business_name = row['business_name']
    business_id = ??
    ...
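Putting the pieces together, a minimal end-to-end sketch might look like the following; lookup_business_id is a hypothetical placeholder for however you map a name to an ID, and get_business / API_KEY come from the question's own code:
import pandas as pd

def lookup_business_id(business_name):
    # hypothetical helper: replace with your actual name -> business ID lookup
    ...

def is_priced(row):
    try:
        return get_business(API_KEY, lookup_business_id(row['business_name']))['price']
    except Exception:
        return 'None'

def is_rated(row):
    try:
        return get_business(API_KEY, lookup_business_id(row['business_name']))['rating']
    except Exception:
        return 'None'

df = pd.read_csv('your_csv.csv', usecols=['business_name'])
df['price'] = df.apply(is_priced, axis=1)    # one API call per row
df['rating'] = df.apply(is_rated, axis=1)
df.to_csv('result.csv', index=False)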

How to merge three files with common id in pandas?

I have three files which are users.dat, ratings.dat and movies.dat.
users.dat
1::F::1::10::48067
1::F::1::10::48067
1::F::1::10::48067
1::F::1::10::48067
1::F::1::10::48067
1::F::1::10::48067
1::F::1::10::48067
1::F::1::10::48067
ratings.dat
1::1193::5::978300760
1::661::3::978302109
1::914::3::978301968
1::3408::4::978300275
1::2355::5::978824291
1::1197::3::978302268
1::1287::5::978302039
1::2804::5::978300719
movies.dat
1193::One Flew Over the Cuckoo's Nest (1975)::Drama
661::James and the Giant Peach (1996)::Animation|Children's|Musical
914::My Fair Lady (1964)::Musical|Romance
3408::Erin Brockovich (2000)::Drama
2355::Bug's Life, A (1998)::Animation|Children's|Comedy
1197::Princess Bride, The (1987)::Action|Adventure|Comedy|Romance
1287::Ben-Hur (1959)::Action|Adventure|Drama
2804::Christmas Story, A (1983)::Comedy|Drama
My expected output
1::1193::5::978300760::F::1::10::48067::One Flew Over the Cuckoo's Nest::Drama::1975
1::661::3::978302109::F::1::10::48067::James and the Giant Peach::Animation|Children's|Musical::1996
1::914::3::978301968::F::1::10::48067::My Fair Lady ::Musical|Romance::1964
1::3408::4::978300275::F::1::10::48067::Erin Brockovich ::Drama::2000
1::2355::5::978824291::F::1::10::48067::Bug's Life, A ::Animation|Children's|Comedy::1998
I am trying to merge these files without using pandas. I created three dictionaries. The user id is a common key. Then I tried to merge the three files using the users' keys, but I did not get exactly the merge I want. Any advice and suggestions will be greatly appreciated.
My code
import json

file = open("users.dat", "r", encoding='utf-8')
users = {}
for line in file:
    x = line.split('::')
    user_id = x[0]
    gender = x[1]
    age = x[2]
    occupation = x[3]
    i_zip = x[4]
    users[user_id] = gender, age, occupation, i_zip.strip()

file = open("movies.dat", "r", encoding='latin-1')
movies = {}
for line in file:
    x = line.split('::')
    movie_id = x[0]
    title = x[1]
    genre = x[2]
    movies[movie_id] = title, genre.strip()

file = open("ratings.dat", "r")
ratings = {}
for line in file:
    x = line.split('::')
    a = x[0]
    b = x[1]
    c = x[2]
    d = x[3]
    ratings[a] = b, c, d.strip()

newdict = {}
newdict.update(users)
newdict.update(movies)
newdict.update(ratings)
for i in users.keys():
    addition = users[i] + movies[i] + ratings[i]
    newdict[i] = addition

with open('data.txt', 'w') as outfile:
    json.dump(newdict, outfile)
My output looks like this:
{"1": ["F", "1", "10", "48067", "Toy Story (1995)", "Animation|Children's|Comedy", "1246", "4", "978302091"], "2": ["M", "56", "16", "70072", "Jumanji (1995)", "Adventure|Children's|Fantasy", "1247", "5", "978298652"],
The first mistake in your code (apart from the messed-up indentation) is that you make a dictionary out of ratings with the user ID as the key:
ratings[a]=b,c,d.strip()
For your dataset, the ratings dictionary will end up with the value { '1': ('2804', '5', '978300719') }, so all but one rating would be lost, since you have only one user.
What you want to do instead is treat your ratings data as a list, not a dictionary. The result you are trying to achieve is also an extended version of the ratings, because you will end up with as many rows as you have scores.
Secondly, you don't need the json module, since your desired output is not in JSON format.
Here's code that does the job:
#!/usr/bin/env python3

# Part 1: collect data from the files

users = {}
file = open("users.dat", "r", encoding='utf-8')
for line in file:
    user_id, gender, age, occupation, i_zip = line.rstrip().split('::')
    users[user_id] = (gender, age, occupation, i_zip)

movies = {}
file = open("movies.dat", "r", encoding='latin-1')
for line in file:
    movie_id, title, genre = line.rstrip().split('::')
    # Parse year from title
    title = title.rstrip()
    year = 'N/A'
    if title[-1] == ')' and '(' in title:
        short_title, in_parenthesis = title.rsplit('(', 1)
        in_parenthesis = in_parenthesis.rstrip(')').rstrip()
        if in_parenthesis.isdigit() and len(in_parenthesis) == 4:
            # Text in parentheses has four digits - it must be the year
            title = short_title.rstrip()
            year = in_parenthesis
    movies[movie_id] = (title, genre, year)

ratings = []
file = open("ratings.dat", "r")
for line in file:
    user_id, movie_id, score, dt = line.rstrip().split('::')
    ratings.append((user_id, movie_id, score, dt))

# Part 2: save the output

file = open('output.dat', 'w', encoding='utf-8')
for user_id, movie_id, score, dt in ratings:
    # Get user data from dictionary
    gender, age, occupation, i_zip = users[user_id]
    # Get movie data from dictionary
    title, genre, year = movies[movie_id]
    # Merge data into a single string
    row = '::'.join([user_id, movie_id, score, dt,
                     gender, age, occupation, i_zip,
                     title, genre, year])
    # Write to the file
    file.write(row + '\n')
file.close()
Part 1 is based on your code, with the main differences being that I save the ratings to a list (not a dictionary) and that I added parsing of the years.
Part 2 is where the output is saved.
Contents of the output.dat file after running the script:
1::1193::5::978300760::F::1::10::48067::One Flew Over the Cuckoo's Nest::Drama::1975
1::661::3::978302109::F::1::10::48067::James and the Giant Peach::Animation|Children's|Musical::1996
1::914::3::978301968::F::1::10::48067::My Fair Lady::Musical|Romance::1964
1::3408::4::978300275::F::1::10::48067::Erin Brockovich::Drama::2000
1::2355::5::978824291::F::1::10::48067::Bug's Life, A::Animation|Children's|Comedy::1998
1::1197::3::978302268::F::1::10::48067::Princess Bride, The::Action|Adventure|Comedy|Romance::1987
1::1287::5::978302039::F::1::10::48067::Ben-Hur::Action|Adventure|Drama::1959
1::2804::5::978300719::F::1::10::48067::Christmas Story, A::Comedy|Drama::1983
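For completeness, since the question title mentions pandas, roughly the same join can be sketched with pandas.merge (column names below are assumptions based on the file layouts shown above, and the year extraction from the title is not done here):
import pandas as pd

# Read the '::'-separated files; a multi-character separator needs engine='python'
users = pd.read_csv('users.dat', sep='::', engine='python', header=None,
                    names=['user_id', 'gender', 'age', 'occupation', 'zip'])
movies = pd.read_csv('movies.dat', sep='::', engine='python', header=None,
                     names=['movie_id', 'title', 'genres'], encoding='latin-1')
ratings = pd.read_csv('ratings.dat', sep='::', engine='python', header=None,
                      names=['user_id', 'movie_id', 'score', 'timestamp'])

# Join ratings to users on user_id, then to movies on movie_id
merged = ratings.merge(users, on='user_id').merge(movies, on='movie_id')

# Write '::'-separated rows
with open('output_pandas.dat', 'w', encoding='utf-8') as f:
    for row in merged.astype(str).itertuples(index=False):
        f.write('::'.join(row) + '\n')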

Save two lists in one JSON file

I'm getting data in two lists and I want to save both of them in one single JSON file. Can someone help me?
I'm using Selenium.
def get_name(self):
    name = []
    name = self.find_elements_by_class_name('item-desc')
    price = []
    price = self.find_elements_by_class_name('item-goodPrice')
    for names in name:
        names = (names.text)
        #print names
    for prices in price:
        prices = (prices.text)
        #print price
I would create a dictionary and then use json.dumps.
An example could be:
import json

def get_name(self):
    names = [name.text for name in self.find_elements_by_class_name('item-desc')]
    prices = [price.text for price in self.find_elements_by_class_name('item-goodPrice')]
    with open('output-file-name.json', 'w') as f:
        f.write(json.dumps({'names': names, 'prices': prices}))
EDIT: In the first version of the answer I was only creating the JSON; if you want to create a file as well, you should include what was suggested in @Andersson's comment.
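The written file then holds both lists under one top-level object; a quick way to check, assuming the file name used above:
import json

# Load the file written above and confirm both lists are present
with open('output-file-name.json') as f:
    data = json.load(f)

print(data['names'][:3], data['prices'][:3])  # first few names and prices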
