I'm writing an application in Python to add data to Google Fit. I am successful in creating data sources and adding datasets with points for heart rate, cadence, speed, steps, etc. as well as sessions for those datasets.
I am now trying to add location data so that activities in Google Fit show maps of the activity, but I'm not having any luck with that. Something that is unclear to me is that while all of the above data types have a single value per data point, a location sample has four values per data point according to https://developers.google.com/fit/datatypes/location#location_sample.
Do these 4 values in the data point need to be named in any way, or do I just add them as 4 fpVals one after another, in the same order as described in the above reference? I.e., in building my array of points for the dataset's patch operation, do I just add them to the value array like this:
gfit_loc.append(dict(
    dataTypeName='com.google.location.sample',
    # timestamps must be int64 nanoseconds, not floats
    endTimeNanos=int(p.time.timestamp() * 1e9),
    startTimeNanos=int(p.time.timestamp() * 1e9),
    # order per the docs: latitude, longitude, accuracy, altitude
    value=[dict(fpVal=p.latitude),
           dict(fpVal=p.longitude),
           dict(fpVal=10),  # accuracy in meters
           dict(fpVal=p.elevation)]
))
where the dataset is added with:
data = service.users().dataSources().datasets().patch(
    userId='me',
    dataSourceId='raw:com.google.location.sample:718486793782',
    datasetId='%s-%s' % (min_log_ns, max_log_ns),
    body=dict(
        dataSourceId='raw:com.google.location.sample:718486793782',
        maxEndTimeNs=max_log_ns,
        minStartTimeNs=min_log_ns,
        point=gfit_loc
    )
).execute()
So it turns out that I was doing everything correctly, with one small exception: I was setting the activity type to 95, which is defined as Walking (treadmill), for all activities in my prototype. I had not yet gotten around to letting the user specify the activity type, and since 95 is an indoor treadmill activity, Google Fit was simply not showing the location data for the activity in the form of a map.
Once I started using non-treadmill activity types, maps started showing up in my Google Fit activities.
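For reference, a minimal sketch of the session call with an outdoor activity type (the session ID and name are placeholders; it assumes the same authorized service client and the min_log_ns/max_log_ns dataset bounds from above):

session_id = 'session-%s' % min_log_ns  # placeholder ID scheme
service.users().sessions().update(
    userId='me',
    sessionId=session_id,
    body=dict(
        id=session_id,
        name='Morning run',
        # sessions use milliseconds, not nanoseconds
        startTimeMillis=int(min_log_ns // 1e6),
        endTimeMillis=int(max_log_ns // 1e6),
        activityType=8,  # 8 = Running (outdoor); 95 = Walking (treadmill) hides the map
    )
).execute()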
I'm new to the HERE API, and I'm not sure if my needs can be addressed by its service offerings (I'm open to other location-related services/utilities).
I have a list of ordered geo-points describing a transit that already took place in the past, and I would like to infer a plausible transportation mode for it (e.g.: this looks like a train trip!).
Input example:
lon         lat
121.240436  24.952392
121.24043   24.95239
121.240436  24.952392
121.23966   24.952605
121.23964   24.9526
121.23964   24.95227
121.23964   24.95227
121.239683  24.952316
121.23967   24.95232
121.240149  24.951126
121.24016   24.95111
I have thought about providing the list of points as a constraint and receiving an estimated duration for each transportation mode; on my side I could then compare each possible duration with the actual duration and conclude the transportation mode.
I currently understand that if I provide two points (e.g. start and end) I can get a duration estimate and a route, but I need more than that (e.g. if the actual transit is circular, providing only the start and end points will not be meaningful).
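To make the idea concrete, here is a rough sketch of the comparison I have in mind: compute the actual path length from the ordered points with the haversine formula, divide by the actual duration to get an average speed, and check that against typical speeds per mode (the duration value below is made up for illustration):

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# points as (lon, lat) pairs, in travel order (first rows of the table above)
points = [
    (121.240436, 24.952392),
    (121.24043, 24.95239),
    (121.23966, 24.952605),
]
path_m = sum(
    haversine_m(a_lat, a_lon, b_lat, b_lon)
    for (a_lon, a_lat), (b_lon, b_lat) in zip(points, points[1:])
)
duration_s = 120.0  # made-up actual duration; use the real one from the data
speed_kmh = path_m / duration_s * 3.6
print('%0.0f m, %0.1f km/h' % (path_m, speed_kmh))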
Any ideas?
You can make use of the via parameter to define multiple waypoints when calculating the route.
Please refer to the link below for more details.
https://developer.here.com/documentation/routing-api/dev_guide/topics/use-cases/via.html
The example below shows how to use the via parameter in a request.
https://router.hereapi.com/v8/routes?apikey={YOUR_API_KEY}&origin=52.542948,13.37152&destination=52.523734,13.40671&via=52.529061,13.369288&via=52.522272,13.381991&return=polyline,summary,actions,instructions,turnByTurnActions&transportMode=car&departureTime=2022-04-05T20:45:00-07:00
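For reference, a rough Python sketch of the same request (coordinates are the example values from the URL above; it assumes the usual v8 response shape, where each route section carries a summary with a duration in seconds):

import requests

# Sketch: HERE Routing API v8 request with intermediate via waypoints.
params = {
    'apikey': 'YOUR_API_KEY',
    'origin': '52.542948,13.37152',
    'destination': '52.523734,13.40671',
    # a list value is sent as repeated via= parameters
    'via': ['52.529061,13.369288', '52.522272,13.381991'],
    'return': 'polyline,summary',
    'transportMode': 'car',
}
resp = requests.get('https://router.hereapi.com/v8/routes', params=params)
resp.raise_for_status()
for section in resp.json()['routes'][0]['sections']:
    print(section['summary']['duration'], 'seconds')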
I'm trying to consume some GTFS feeds and work with them.
I created a MySQL database and a Python script which downloads GTFS files and imports them into the right tables.
Now I am using the Leaflet map framework to display the stops on the map.
The next step is to display the routes of a bus or tram line on the map.
But the GTFS archive contains no shapes.txt.
Is there a way to display the routes without shapes.txt?
Thanks!
Kali
You will have to generate your own shape using underlying street data or public transit lines. See the detailed post by Anton Dubrau (he is an angel for writing this blog post).
https://medium.com/transit-app/how-we-built-the-worlds-prettiest-auto-generated-transit-maps-12d0c6fa502f
Specifically:
Here's an example. In the diagram below, we have a trip with three stops, and no shape information whatsoever. We extract the set of tracks the trip uses from OSM (grey lines). Our matching algorithm then finds a trajectory (black line) that follows the OSM, while minimizing its length and the errors to the stops (e1, e2, e3).
The only alternative to using shapes.txt would be to use the stops along the route to define shapes. The laziest way would be to pick a single trip for that route, get the set of stops from stop_times.txt, and then get the corresponding stop locations from stops.txt.
If you wanted or needed to, you could get a more complete picture by finding the unique ordered sets of stops among all of the trips on that route, and define a shape for each ordered set in the same way.
Of course, these shapes would only be rough estimates because you don't have any information about the path taken by the vehicles between stops.
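A rough sketch of that lazy approach in Python, using the standard GTFS columns (the trip_id is a placeholder; pick any trip on the route):

import csv

trip_id = 'SOME_TRIP_ID'  # placeholder: any trip_id on the route

# get this trip's stops, in stop_sequence order
with open('stop_times.txt', newline='', encoding='utf-8') as f:
    stop_times = [row for row in csv.DictReader(f) if row['trip_id'] == trip_id]
stop_times.sort(key=lambda row: int(row['stop_sequence']))

# look up each stop's location
with open('stops.txt', newline='', encoding='utf-8') as f:
    stops = {row['stop_id']: row for row in csv.DictReader(f)}

shape = [
    (float(stops[st['stop_id']]['stop_lat']),
     float(stops[st['stop_id']]['stop_lon']))
    for st in stop_times
]
print(shape)  # usable as a rough L.polyline(shape) on the Leaflet side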
I am new to Python. Recently, I got a project which processes a huge amount of health data in XML files.
Here is an example:
In my data, there are about 100 of them, and each of them has a different id, origin, type, and text. I want to store all of them so that I can train on this dataset. The first idea in my mind was to use 2D arrays (one storing id and origin, the other storing text). However, I found there are too many features, and I want to know which features belong to each document.
Could anyone recommend a good way to do this?
For scalability, simplicity, and maintainability, you should normalize the data, design a database schema, and move it into a database (SQLite, Postgres, MySQL, whatever).
This moves complicated data logic out of Python; it's typical Model-View-Controller practice.
Creating a Python dictionary and traversing it is quick and dirty, but it will become a huge technical time sink very soon if you want to make practical sense of the data.
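A minimal sketch of that idea, assuming a hypothetical <doc id=... origin=... type=...>text</doc> layout for the XML (substitute your real element and attribute names):

import sqlite3
import xml.etree.ElementTree as ET

# one row per document keeps every feature tied to the document it came from
conn = sqlite3.connect('health.db')
conn.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id     TEXT PRIMARY KEY,
        origin TEXT,
        type   TEXT,
        text   TEXT
    )
""")

def load_file(path):
    root = ET.parse(path).getroot()
    for doc in root.iter('doc'):  # hypothetical element name
        conn.execute(
            'INSERT OR REPLACE INTO documents VALUES (?, ?, ?, ?)',
            (doc.get('id'), doc.get('origin'), doc.get('type'), doc.text),
        )
    conn.commit()

load_file('example.xml')
# later: SELECT text FROM documents WHERE type = ?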
I am building a couple of web applications to store data using Django. This data would be generated from lab tests and might have up to 100 parameters being logged against time. This would leave me with an NxN matrix of data.
I'm struggling to see how this would fit into a Django model as the number of parameters logged may change each time, and it seems inefficient to create a new model for each dataset.
What would be a good way of storing data like this? Would it be best to store it as a separate file and then just use a model to link a test to a datafile? If so what would be the best format for fast access and being able to quickly render and search through data, generate graphs etc in the application?
In answer to the question below:
It would be useful to search through datasets generated from the same test for trend analysis etc.
As I'm still getting started with this site, I'm using SQLite, but I plan to move to a full SQL database as it grows.
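To make the file-linking idea concrete, here is a rough sketch of the kind of model I have in mind (the model and field names are illustrative only):

from django.db import models

class LabTest(models.Model):
    name = models.CharField(max_length=200)
    run_at = models.DateTimeField()
    # the wide, variable-parameter matrix lives in a file rather than the schema
    datafile = models.FileField(upload_to='lab_tests/')
    # summary values extracted at upload time, searchable for trend analysis
    parameters = models.JSONField(default=dict)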
I sometimes have a very large GTFS zip file, valid for a period of 6 months, but loading such a big data set is not economical on a low-resource EC2 server (for example, 2 GB of memory and 10 GB of disk).
I hope to be able to split this large GTFS file into 3 smaller GTFS zip files, each with 2 months (6 months / 3 files) worth of valid data; of course that means I will need to replace the data every 2 months.
I have found a Python program that achieves the opposite goal, merging, here: https://github.com/google/transitfeed/blob/master/merge.py (this is a very good Python project, btw).
I am very thankful for any pointer.
Best regards,
Dunn.
It's worth noting that entries in stop_times.txt are usually the biggest memory hog when it comes to loading a GTFS feed. Since most systems do not replicate trips+stop_times for the dates when those trips are active, reducing the service calendar probably won't save you much.
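You can check this for your own feed with a quick sketch that lists the uncompressed size of each file inside the zip (standard library only; the filename is a placeholder):

import zipfile

# list files in a GTFS feed by uncompressed size, largest first;
# stop_times.txt will usually dominate
with zipfile.ZipFile('gtfs.zip') as z:
    for info in sorted(z.infolist(), key=lambda i: i.file_size, reverse=True):
        print(f'{info.file_size:>12,}  {info.filename}')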
That said, there are some tools for slicing and dicing GTFS. Check out the OneBusAway GTFS Transformer tool, for example:
http://developer.onebusaway.org/modules/onebusaway-gtfs-modules/1.3.3/onebusaway-gtfs-transformer-cli.html
Another, more recent option for processing large GTFS files is transitland-lib. It's written in the Go programming language, which is quite efficient at parsing huge GTFS feeds.
See the transitland extract command, which can take a number of arguments to cut an existing GTFS feed down to smaller size:
% transitland extract --help
Usage: extract <input> <output>
  -allow-entity-errors
        Allow entities with errors to be copied
  -allow-reference-errors
        Allow entities with reference errors to be copied
  -create
        Create a basic database schema if none exists
  -create-missing-shapes
        Create missing Shapes from Trip stop-to-stop geometries
  -ext value
        Include GTFS Extension
  -extract-agency value
        Extract Agency
  -extract-calendar value
        Extract Calendar
  -extract-route value
        Extract Route
  -extract-route-type value
        Extract Routes matching route_type
  -extract-stop value
        Extract Stop
  -extract-trip value
        Extract Trip
  -fvid int
        Specify FeedVersionID when writing to a database
  -interpolate-stop-times
        Interpolate missing StopTime arrival/departure values
  -normalize-service-ids
        Create Calendar entities for CalendarDate service_id's
  -set value
        Set values on output; format is filename,id,key,value
  -use-basic-route-types
        Collapse extended route_type's into basic GTFS values