This is my first time making a heat map in Python 3 using Pandas and Matplotlib.
I tried to use the gmaps plugin in a Jupyter notebook.
I uploaded a csv file that contains 2 columns (long, lat).
import gmaps
import gmaps.datasets
gmaps.configure(api_key=os.environ["GOOGLE_API_KEY")
locations = gmaps.datasets.load_dataset("my_file.csv")
fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(loactions))
fig
I got the following error:
676 except KeyError:
677 # raise KeyError with the original key value
--> 678 raise KeyError(key) from None
679 return self.decodevalue(value)
680
KeyError: 'GOOGLE_API_KEY'
How can I read my file and resolve this error?
Thank you
There are a few points to correct in your code. Here is a list of what I had to do to get this working in my environment (Jupyter notebook).
1) Make sure you have gmaps installed in your environment. You can achieve this with something like:
pip install gmaps
2) In Jupyter I had an issue where the JavaScript that renders the map wasn't loaded correctly. After installing the package (step 1), you have to stop all running Jupyter instances and run the following command:
jupyter nbextension enable --py gmaps
3) You must have a valid Google API key to replace the GOOGLE_API_KEY placeholder in your code, which, by the way, was missing a closing square bracket. To create your API key, please follow the instructions from this link. Note that this is mandatory.
4) You don't have to import gmaps.datasets if you are working with your own file; that module loads pre-defined datasets. You can read your csv with Pandas, for instance.
The code to perform the whole operation is:
import pandas as pd
import gmaps
gmaps.configure(api_key='YOUR_API_KEY')  # replace YOUR_API_KEY with the key generated in step 3
locations = pd.read_csv('my_file.csv')
fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations))
fig
This produces the following map, though from my perspective I can't judge whether it's correct or not.
EDIT:
Your file has the columns in the order (long, lat), but the API expects (lat, long). Swapping the order produced a result that made more sense to me:
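For example, if your csv columns are named long and lat (the names here are an assumption, adjust them to your file), you can reorder them with Pandas before building the heatmap layer:
import pandas as pd
import gmaps

gmaps.configure(api_key='YOUR_API_KEY')  # key from step 3

# Read the csv and select the columns in (lat, long) order,
# which is the order the heatmap layer expects.
df = pd.read_csv('my_file.csv')
locations = df[['lat', 'long']]

fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations))
fig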
Update: even this single line on its own causes the perpetual running, without any other code added:
from ogb.nodeproppred import PygNodePropPredDataset
Here is my code; I want to download the OGB dataset.
import torch_geometric.transforms as T
from ogb.nodeproppred import PygNodePropPredDataset
dataset_name = 'ogbn-arxiv'
dataset = PygNodePropPredDataset(name=dataset_name,
                                 transform=T.ToSparseTensor())
print('The {} dataset has {} graph'.format(dataset_name, len(dataset)))
# Extract the graph
data = dataset[0]
print(data)
But when I run this code, it just keeps running and outputs nothing.
I think I've already met the requirements shown on the OGB website.
I am using Windows 11 and PyCharm.
If you want to download the OGB dataset, you should uninstall the "outdated" package, as it seems to conflict with ogb. For more details, please read the OGB GitHub issues.
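For example, assuming pip is the package manager in your environment, removing it is just:
pip uninstall outdated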
I need to read OpenAir files in Python.
According to the following vector driver description, GDAL has built-in OpenAir functionality:
https://gdal.org/drivers/vector/openair.html
However, there is no example code for reading such OpenAir files.
So far I have tried to read a sample file using the following lines:
from osgeo import gdal
airspace = gdal.Open('export.txt')
However, it returns the following error:
ERROR 4: `export.txt' not recognized as a supported file format.
I already looked at vectorio; however, no OpenAir functionality has been implemented there.
Why do I get the error above?
In case anyone wants to reproduce the problem: sample OpenAir files can easily be generated using XContest:
https://airspace.xcontest.org/
Since you're dealing with vector data, you need to use ogr instead of gdal (it's normally packaged along with gdal).
So you can do:
from osgeo import ogr
ds = ogr.Open('export.txt')
layer = ds.GetLayer(0)
featureCount = layer.GetFeatureCount()
print(featureCount)
There's plenty of info out there on using ogr, but this cookbook might be helpful.
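Once the layer is open you can also walk through the individual airspace features; here is a minimal sketch (the NAME field is an assumption based on the OpenAir driver documentation, so adjust it to whatever fields your file actually exposes):
from osgeo import ogr

# Open the OpenAir file through the vector (OGR) API and grab the first layer.
ds = ogr.Open('export.txt')
layer = ds.GetLayer(0)

# Iterate over the airspace features and print a field plus the geometry type.
for feature in layer:
    geom = feature.GetGeometryRef()
    print(feature.GetField('NAME'), geom.GetGeometryName())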
I am trying to get familiar with using Python in R.
I made it work using reticulate along the following lines:
library(reticulate)
py_install("pandas")
Then I can always get back to the environment where I installed it by using
use_condaenv("r-reticulate")
How can I use this in a Python chunk of the following form?
```{python}
import pandas as pd
```
pandas can be found in the first version above (using reticulate) but not in the version with the Python chunk. How can I tell it to use the "r-reticulate" environment? Setting it in an R setup chunk of the following form does not work for me:
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
library(reticulate)
use_condaenv("r-reticulate")
```
I had the same issue; it was fixed by setting the required keyword to TRUE:
use_condaenv(condaenv='env-name', required = TRUE)
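With that in place in the R setup chunk, a later Python chunk should then pick up pandas from the "r-reticulate" environment, for example:
```{python}
import pandas as pd
print(pd.__version__)
```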
I am using Python and R code with a Jupyter notebook at the same time. Specifically, I want to use pandas to deal with the data, pass the DataFrame object to the R kernel, and then use ggplot2 to visualize it.
However, as soon as I pass the pandas DataFrame object to the R kernel and use ggplot() to make plots, the Jupyter notebook always gives the following warning:
C:\Study\Anaconda3-5.2.0\lib\site-packages\rpy2-2.9.4-py3.6-win-amd64.egg\rpy2\robjects\pandas2ri.py:191: FutureWarning: from_items is deprecated. Please use DataFrame.from_dict(dict(items), ...) instead. DataFrame.from_dict(OrderedDict(items)) may be used to preserve the key order.
res = PandasDataFrame.from_items(items)
My code is very simple, as shown below:
%load_ext rpy2.ipython
%R library(ggplot2)
# data_train is a pandas DataFrame object
%%R -i data_train
ggplot(data = data_train, aes(x = factor(Survived))) + geom_bar(fill = "#539bf3")
You could do it directly in Python using the Python ggplot library.
Not exactly what you are asking, but in case you overlooked it.
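As a sketch of that approach, here is roughly what the same bar chart could look like with plotnine, a Python port of ggplot2 (used here instead of the original ggplot package; the 'train.csv' path is a placeholder for wherever data_train comes from):
import pandas as pd
from plotnine import ggplot, aes, geom_bar

# Load the same data the question builds data_train from (path is hypothetical).
data_train = pd.read_csv('train.csv')

# Equivalent of ggplot(data_train, aes(x = factor(Survived))) + geom_bar(...)
(
    ggplot(data_train, aes(x='factor(Survived)'))
    + geom_bar(fill='#539bf3')
)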
I'm trying the new library (Chartify) provided by the Spotify team. On running the code below, I'm receiving the following error:
import chartify
import pandas as pd
file = "./data/Social_Network_Ads.csv"
data = pd.read_csv(file, sep = ',')
chart = chartify.Chart(blank_labels=True, y_axis_type='categorical', x_axis_type='linear')
chart.plot.scatter(
    data_frame=data,
    categorical_columns='Gender',
    numeric_column='EstimatedSalary',
    color_column='EstimatedSalary')
chart.style.color_palette.reset_palette_order()
chart.set_title("Scatter Plot w.r.t. Salaries of different Gender")
chart.set_subtitle("Labels for specific observations.")
chart.show()
[9643:9643:1127/175201.738360:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.
The HTML is being created, but on opening it, I get a blank page.
An old question, but in case you are facing similar issues: that message is related to the OS when the underlying tool is executed as root.
As a workaround, this helped me solve the issue on CentOS 7.7 while trying to execute different binaries:
export QTWEBENGINE_DISABLE_SANDBOX=1
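If you would rather set it from Python instead of the shell, the same environment variable can be exported before the chart is rendered; a rough sketch, assuming the variable just needs to be present in the process environment:
import os

# Export the same workaround variable from within Python before calling chart.show().
os.environ['QTWEBENGINE_DISABLE_SANDBOX'] = '1'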