Python vincent needing URL address

I'm using vincent, a data visualization package. One of the inputs it takes is the path to the data.
(from the documentation)
`geo_data` needs to be passed as a list of dicts with the following format:
{
    name: data name,
    url: path_to_data,
    feature: TopoJSON object set (ex: 'countries')
}
I have a topo.json file on my computer, but when I run the code below, IPython says loading failed.
map = r'C:\Users\chungkim271\Desktop\DC housing\dc.json'
geo_data = [{'name': 'DC',
             'url': map,
             'feature': "collection"}]
vis = vincent.Map(geo_data=geo_data, scale=1000)
vis
Do you know if vincent only takes URL addresses, and if so, what is the quickest way I can get a URL for this file?
Thanks in advance

It seems that you're using it in a Jupyter Notebook. If not, my reply is irrelevant to your case.
AFAIK, vincent needs this TopoJSON file to be available through a web server (so the JavaScript in your browser can download it to build the map). If the TopoJSON file is somewhere under the Jupyter root dir then it's available (and you can provide a relative path to it), otherwise it's not.
To determine the relative path you can use something like this:
import os
relpath = os.path.relpath('abs-path-to-geodata', os.path.abspath(os.path.curdir))
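For example, if the notebook server was started from a directory that contains the file, the question's snippet would look roughly like this. This is a sketch, not part of the original answer; it assumes the Jupyter root is C:\Users\chungkim271 and that the TopoJSON object set inside dc.json really is called 'collection'.
import os
import vincent

# Assumed: the Jupyter server was started from a directory that contains dc.json
# (e.g. C:\Users\chungkim271), so a relative path from the notebook's cwd exists.
abs_path = r'C:\Users\chungkim271\Desktop\DC housing\dc.json'
relpath = os.path.relpath(abs_path, os.path.abspath(os.path.curdir))

geo_data = [{'name': 'DC',
             'url': relpath,
             'feature': 'collection'}]  # TopoJSON object set name inside dc.json
vis = vincent.Map(geo_data=geo_data, scale=1000)
vis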

I know that this post is old, but hopefully this helps someone. I am not sure which map you are looking for, but here is the URL for the world map:
world_topo="https://raw.githubusercontent.com/wrobstory/vincent_map_data/master/world-countries.topo.json"
and here is the one for the USA state map:
state_topo = "https://raw.githubusercontent.com/wrobstory/vincent_map_data/master/us_states.topo.json"
I got this working beautifully, hope this is helpful for someone!
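For context, here is roughly how such a URL plugs into vincent. This sketch is not from the original answer; the 'feature' value must be the TopoJSON object set name inside the file, which for this world file is usually 'world-countries', but check the "objects" key of the file if it differs.
import vincent

world_topo = "https://raw.githubusercontent.com/wrobstory/vincent_map_data/master/world-countries.topo.json"
geo_data = [{'name': 'countries',
             'url': world_topo,
             'feature': 'world-countries'}]  # object set name inside the TopoJSON file
vis = vincent.Map(geo_data=geo_data, scale=200)
vis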

Related

How to access the "Forecasting: Methods and Applications" datasets in Python?

I am using the book Forecasting: Methods and Applications by Makridakis, Wheelwright and Hyndman. I want to do the exercises along the way, but in Python, not R (as suggested in the book).
I do not know how to use R. I know that the datasets are available from an R package, fma. This is the link to the package.
Is there a script, in R or Python, which will allow me to download the datasets as .csv files? That way, I will be able to access them using Python.
One possibility:
## install and load the package:
install.packages('fma')
library('fma')

## list the example data of package fma:
data(package = 'fma')

## export a single dataset as csv:
write.csv(cement, file = 'cement.csv')

## bulk export:
## data names are in the `[,3]`rd column of list member "results"
## of the `data(...)` output
for (data_name in data(package = 'fma')[['results']][,3]) {
  write.csv(get(data_name), file = paste0(data_name, '.csv'))
}
Edit:
As Anirban noted, attaching the package {fma} exposes only a few datasets on the search path. The data can be obtained by cloning or downloading Rob J. Hyndman's source repository (click the green Code button and choose a format). The subfolder 'data' contains each dataset as an .rda file which can be load()ed and converted. (Observe the licence conditions, GPL-3.0, and acknowledge the authors' efforts.)
That said, you could load and convert the data like this:
setwd('path/to/fma-master/data')
for (data_name in dir()) {
  cat(paste0('converting ', data_name, '... '))
  load(data_name)
  object_name <- gsub('\\.rda', '', data_name)
  write.csv(get(object_name),
            file = paste0(object_name, '.csv'),
            row.names = FALSE)  ## write.csv always overwrites; its 'append' argument is ignored
}
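Once the .csv files have been written by either loop above, getting at them from Python is just a pandas read. A small sketch (not part of the original answer; file names are whatever the export produced):
import glob
import os
import pandas as pd

# Collect every exported CSV into a dict keyed by dataset name.
datasets = {}
for path in glob.glob('*.csv'):
    name = os.path.splitext(os.path.basename(path))[0]
    datasets[name] = pd.read_csv(path)

print(sorted(datasets))           # which datasets were exported
print(datasets['cement'].head())  # assuming cement.csv is among them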

Import table from same Notebook

How could one read a markdown table from a text block and import it into a variable in a code block in the same PowerShell notebook?
I attempted to import it with PowerShell:
$b=Import-Csv -Path <> -Delimiter '|'
Couldn't figure out how to point the -Path parameter to the text block in the same notebook. Using a .ipynb file in Azure Data Studio.
I believe the functionality you are looking for is not possible. As a workaround, I would suggest storing the cell's markdown as a variable in Python first and using that variable to populate the rendered markdown cell. Here is an example; I believe it will work in any notebook built on top of IPython:
# Running this cell in your notebook will render the variable as Markdown
from IPython.display import display, Markdown

mymd = "# Some markdown"
display(Markdown(mymd))
Update: If you are worried that representing multi-line markdown is too complicated, you have two good options. First, use triple quotes so the line breaks are read as part of the string:
mymd = """
| First Header | Second Header |
| ------------- | ------------- |
| Content Cell | Content Cell |
| Content Cell | Content Cell |
"""
Option 2: Put your markdown in a file and read it into a string:
with open("somefile.md") as f:
    mymd = f.read()
Either option would benefit from a well documented and carefully followed workflow but would work well for this use case.
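If the goal is still a table-like variable rather than just displaying the markdown, the string can be parsed further. A sketch (not part of the original answer), using pandas on the mymd string from above:
import pandas as pd

# Split the pipe-delimited markdown table into rows, dropping the --- separator row.
rows = []
for line in mymd.strip().splitlines():
    cells = [c.strip() for c in line.strip().strip('|').split('|')]
    if all(set(c) <= set('-: ') for c in cells):  # header/body separator row
        continue
    rows.append(cells)

df = pd.DataFrame(rows[1:], columns=rows[0])
print(df)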
As per the comment on the question, the .ipynb file appears to contain JSON-formatted text.
A quote about JSON from Wikipedia:
JSON is an open-standard file format or data interchange format that
uses human-readable text to transmit data objects consisting of
attribute–value pairs and array data types (or any other serializable
value). It is a very common data format, with a diverse range of
applications, such as serving as replacement for XML in AJAX systems.
JSON is a language-independent data format. It was derived from
JavaScript, but many modern programming languages include code to
generate and parse JSON-format data. The official Internet media type
for JSON is application/json. JSON filenames use the extension .json.
Also PowerShell has its own "cmdlet" commands for managing JSON files: ConvertTo-Json and ConvertFrom-Json.
The ConvertFrom-Json (and ConvertTo-Json) cmdlets don't have a -Path parameter; instead they convert from an -InputObject variable or stream. If the information comes from a file, you can use the Get-Content cmdlet to retrieve your data from the file:
$Data = Get-Content -Path .\YourFile.ipynb | ConvertFrom-Json
If your file is actually not provided through a file system but from a web page on the internet (which I suspect), you need to rely on the Invoke-WebRequest cmdlet, or, if it concerns a web API, on the Invoke-RestMethod cmdlet. For these cmdlets you need to figure out and supply more details, such as the URL you need to address.
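If PowerShell turns out to be awkward for digging through that structure, the same extraction is easy in Python with the standard json module. A sketch, assuming an nbformat-4 notebook where the table sits in the first markdown cell (file name and cell position are assumptions):
import json

with open('YourFile.ipynb') as f:
    nb = json.load(f)

# Collect the source text of every markdown cell.
md_cells = [''.join(cell['source']) for cell in nb['cells']
            if cell['cell_type'] == 'markdown']
table_text = md_cells[0]  # assumed: the table is in the first markdown cell
print(table_text)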

Choropleth map with OpenStreetMap data

My goal is to get a so-called "choropleth map" (I guess) of the zip code areas in Germany. I have found the Python package folium, but it seems to take a .json file as input:
https://github.com/python-visualization/folium
On OpenStreetMap I only see .shp.zip and .osm.pbf files. Inside the .shp.zip archive I find all sorts of file endings which I have never heard of, but no .json file. How do I use the data from OpenStreetMap to feed folium? Am I heading in the wrong direction?
If you want to create a choropleth map you must follow these steps:
First you need a file containing info about the regions of that country. A sample .json file has been supplied with this answer; however, there are actually many file formats commonly used for maps. In your case, you need to convert your OSM shapefile (.shp) into a more modern file type like .geojson. Thankfully, we have ogr2ogr to do that last part:
ogr2ogr -f GeoJSON -t_srs EPSG:4326 -simplify 1000 [name].geojson [name].shp
Advice: You can also extract the administrative borders from these web sites:
* OSM Boundaries Map 4.2
* Mapzen
* Geofabrik
Second, download the data you want to visualize (a .csv file, for example). Obviously, the file must have a column with the ZIP codes of that country.
Once you have these files the rest is straightforward: folium will create the choropleth map automatically.
I wrote a simple example of this about the unemployment rate in the US:
Code:
import folium
import pandas as pd

osm = folium.Map([43, -100], zoom_start=4)
osm.choropleth(
    geo_str=open('US_states.json').read(),
    data=pd.read_csv("US_unemployment.csv"),
    columns=['State', 'Unemployment'],
    key_on='feature.id',
    fill_color='YlGn',
)
Output: (screenshot of the resulting US unemployment choropleth omitted)
I haven't done this myself, but there are various solutions for converting OSM files (.osm or .pbf) to (Geo)JSON, for example osmtogeojson. More tools can be found on the GeoJSON page of the OSM wiki.
I went to https://overpass-turbo.eu/ (which retrieves data from OpenStreetMap via a specific query language, Overpass QL) and hit Run on the following query:
[timeout:900];
area[name="Deutschland"][admin_level=2][boundary=administrative]->.myarea;
rel(area.myarea)["boundary"="postal_code"];
out geom;
You can "export to geojson" but in my case that didn't work because it's too much data which cannot be processed inside the browser. But exporting the "raw data" works. So I did that and then I used "osmtogeojson" to get the right format. After that I was able to feed my openstreetmap data to folium as described in the tutorial of folium.
This answer was posted as an edit to the question Choropleth map with OpenStreetMap data by the OP user3182532 under CC BY-SA 3.0.
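For completeness, here is roughly how the converted file could be fed to folium. This is a sketch, not from any of the answers above; the file name, the postcode property name, and the CSV columns are all assumptions you would adapt to your own export.
import folium
import pandas as pd

m = folium.Map([51.2, 10.4], zoom_start=6)   # roughly centred on Germany
df = pd.read_csv('plz_data.csv')             # assumed columns: plz, value

folium.Choropleth(
    geo_data='postal_codes.geojson',         # assumed output of osmtogeojson
    data=df,
    columns=['plz', 'value'],
    key_on='feature.properties.postal_code', # property name depends on the export
    fill_color='YlGn',
).add_to(m)

m.save('germany_plz.html')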

Import Skin Weight Maps (Python) - (Maya)

As mentioned in this post, I would like to import a skin weight map (a .weightMap file) into a scene without having to open a dialog box. Trying to reverse-engineer the script mentioned in the reply didn't get me anywhere.
When I do it manually through Maya's UI, the script history shows...
ImportSkinWeightMaps;
...as a command. But my searches on this keep leading me to the deformerWeights command.
The thing is, there is no example in the documentation showing how to write the syntax correctly. Writing the flags and the path through trial and error didn't work out, and additional searches keep hinting that I need to use a .xml file for some reason, when all I want to do is import a .weightMap file.
I even ended up looking at weight-importer scripts on highend3d.com in hopes of seeing what proper importing syntax should look like.
All I need is the correct syntax (or command) for something like:
mel.eval("ImportSkinWeightMaps;")
or
cmds.deformerWeights (p = "path to my .weightMap file", im=True, )
or
from pymel.core import *
pymel.core.runtime.ImportSkinWeightMaps ( 'targetOject', 'path to .weightMap file' )
Any help would be greatly appreciated.
Thanks!
Why not use cmds.skinPercent? It is more reliable.
http://tech-artists.org/forum/showthread.php?5490-Faster-way-to-find-number-of-influences-Maya&p=27598#post27598
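A rough sketch of what that approach looks like. All names and weights below are made-up placeholders; the actual values would come from whatever weight file you parse yourself.
import maya.cmds as cmds

skin_cluster = 'skinCluster1'   # assumed skin cluster name
mesh = 'pCube1'                 # assumed mesh name

# vertex index -> list of (influence, weight) pairs, normally read from your own file
weights = {0: [('joint1', 0.8), ('joint2', 0.2)],
           1: [('joint1', 0.5), ('joint2', 0.5)]}

for vtx_id, influence_weights in weights.items():
    cmds.skinPercent(skin_cluster,
                     '%s.vtx[%d]' % (mesh, vtx_id),
                     transformValue=influence_weights)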

working with geojson and vincent on python

I want to import a GeoJSON file into Python so I can map it with the visualization package vincent and merge it with other data in a pandas DataFrame.
To be specific, the GeoJSON file in question is: http://ec2-54-235-58-226.compute-1.amazonaws.com/storage/f/2013-05-12T03%3A50%3A18.251Z/dcneighorhoodboundarieswapo.geojson. It's a map of DC neighborhoods, put together by Justin Grimes.
Right now, I'm just trying to visualize this map in the notebook. Here's my code:
import vincent

map = r'http://ec2-54-235-58-226.compute-1.amazonaws.com/storage/f/2013-05-12T03%3A50%3A18.251Z/dcneighorhoodboundarieswapo.geojson'
geo_data = [{'name': 'countries',
             'url': map,
             'feature': "features"}]
vis = vincent.Map(geo_data=geo_data, scale=5000)
vis
but I keep getting an error message; the local host says: [Vega err] loading failed.
What am I doing wrong here?
I don't yet know much about GIS or Python, so please be specific in your explanation. Thank you in advance.
At the moment you can't use anything but the TopoJSON file format for your maps with vincent (see https://github.com/mbostock/topojson/wiki).
You can convert GeoJSON into TopoJSON using web tools like https://mapshaper.org/ or the command-line utility (https://github.com/mbostock/topojson/wiki/Command-Line-Reference) with a command like this:
topojson -p -o <target-file>.topo.json -- <input-file>.json
(-p tells the utility to keep the properties of the geometries.)
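After the conversion, the code from the question should work once 'url' points at the .topo.json file and 'feature' names the object set inside it. A sketch, with some assumptions: by default the topojson CLI names the object set after the input file, so open the converted file and check the keys under "objects" if unsure, and the file must be reachable by the browser (e.g. under the Jupyter root).
import vincent
vincent.core.initialize_notebook()

geo_data = [{'name': 'DC',
             'url': 'dcneighorhoodboundarieswapo.topo.json',  # converted file, served by the notebook
             'feature': 'dcneighorhoodboundarieswapo'}]       # assumed object set name inside the file

vis = vincent.Map(geo_data=geo_data, scale=50000)  # adjust scale/center to taste
vis.display()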
