I have a large number of GIS (latitude, longitude) coordinates, and I'd like to get the distance between them. Is there a service that will calculate the shortest path for me? I know about Google Maps, but I'd like something I can use from Python and that can handle a large batch of requests at once.
I'm looking for the driving distance, so a straight-line distance won't do.
Thanks
So I take it, based on your question and the answers posted, that you are asking what program to use? If you can find a way to get a copy for free or cheap (like through work, school, etc.), I'd recommend ArcGIS 9.x. It has its quirks, but it's well supported by the user community, and there are a lot of forums and help/training books available for it. Also, they have adopted Python as their official scripting language for the program (Sweeeet!).
Another option that is less expensive is GRASS. It's a free, open-source, well-established, powerful, and multiplatform GIS program. It may have a somewhat steeper learning curve than ArcGIS, but I've heard very good things about it, especially considering it's free.
This website lists info on free and open-source (FOSS) GIS programs, http://opensourcegis.org/, and could give you some good info on your other choices.
I couldn't tell if you were asking how to measure the distance between two points and find shortest travel distances in a GIS program, or if you were just mentioning that that's the kind of thing you would need to do. Either way, ArcGIS is well suited for those tasks. Some of the tools in ArcGIS's ArcToolbox already have commands to help you find optimal transportation routes. This link lets you explore some of the tools available: ArcToolbox Help. Most of the tools in ArcToolbox have a GUI batch-processing option included as well. Measuring point-to-point distances on an individual basis is easy in ArcGIS, and if you needed to measure a bunch of point-to-point pairs, you could write a quick Python script to do it for you.
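As a rough illustration of what such a script might look like, here is a minimal haversine sketch in plain Python. Note this gives straight-line (great-circle) distances, not driving distances; for those you'd still need a road network or routing tool. The coordinate pairs are made up for the example:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical batch of coordinate pairs: ((lat1, lon1), (lat2, lon2))
pairs = [((52.5200, 13.4050), (48.8566, 2.3522)),
         ((40.7128, -74.0060), (34.0522, -118.2437))]
for a, b in pairs:
    print(haversine_km(a[0], a[1], b[0], b[1]))
```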
I think I've answered all of your questions. Feel free to let me know if there is something I missed or that doesn't make sense. Hope this helps, buddy.
Check out OpenStreetMap. You can download their map data and have it lying around on your local system. http://wiki.openstreetmap.org/wiki/Routing discusses the various routing systems for their data.
You are aware that the traveling salesman problem is NP-complete?
Using QGIS:
Use the Delimited Text plugin to import the data
Save the import as a shapefile
Open the shapefile
Using the fTools plugin, calculate the distance matrix
If you have interconnections between the points, you could use Dijkstra's algorithm for shortest paths from a single point, or Floyd's algorithm for an all-pairs shortest-path computation.
Neither is particularly complicated, but both assume you know the lengths of the roads between the points. You will need that data to compute a driving distance.
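For example, here is a minimal Dijkstra sketch in Python over a hypothetical road graph, where the edge weights stand in for road lengths:

```python
import heapq

def dijkstra(graph, start):
    """Shortest road distance from `start` to every reachable node.
    `graph` maps node -> list of (neighbour, road_length) pairs."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry; a shorter path was already found
        for nbr, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Hypothetical graph: road lengths in km between labelled junctions.
roads = {'A': [('B', 5.0), ('C', 9.0)],
         'B': [('A', 5.0), ('C', 2.0)],
         'C': []}
print(dijkstra(roads, 'A'))  # {'A': 0.0, 'B': 5.0, 'C': 7.0}
```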
I'm building a UGV (unmanned ground vehicle) prototype. The goal is to perform the desired actions on targets placed within a maze. From what I've found online, navigation in a labyrinth is usually done with a distance sensor. I'm posting this question because I want to gather more ideas.
I want to navigate the labyrinth by analyzing the image from a 3D stereo camera. Is there a resource or proven method you can suggest for this? As a secondary problem, the vehicle must start in front of the entrance of the labyrinth, see the entrance and go in, and then leave the labyrinth after it completes its operations inside.
I would be glad if you could suggest a source for this problem. :)
The problem description is a bit vague, but I'll try to highlight some general ideas.
A useful assumption is that the labyrinth is a 2D environment which you want to explore. You need to know, at every moment, which part of the map has been explored, which part still needs exploring, and which part is accessible at all (in other words, where the walls are).
An easy initial data structure to help with this is a simple matrix, where each cell represents a square in the real world. Each cell can then be labelled according to its state, starting in an unexplored state. Then you start moving and exploring. Based on the distances reported by the camera, you can estimate the state of each cell. The exploration can be guided by something such as A* or Q-learning.
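A minimal sketch of such a grid in Python, under the assumption that the camera reports a free-space distance along the direction you are facing (the cell size and all names here are illustrative):

```python
import numpy as np

UNKNOWN, FREE, WALL = 0, 1, 2

# 50x50 grid, one cell per (hypothetical) 10 cm square, all unexplored.
grid = np.full((50, 50), UNKNOWN, dtype=np.uint8)

def update_from_range(grid, row, col, drow, dcol, free_cells):
    """Mark `free_cells` cells from (row, col) along (drow, dcol) as FREE,
    and the cell just beyond the measured range as WALL."""
    r, c = row, col
    for _ in range(free_cells):
        r, c = r + drow, c + dcol
        if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]):
            return  # ran off the map edge
        grid[r, c] = FREE
    r, c = r + drow, c + dcol
    if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]:
        grid[r, c] = WALL

# Robot at (25, 25) facing "east"; camera reports about 0.4 m of free space.
update_from_range(grid, 25, 25, 0, 1, free_cells=4)
```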
Now, a rather subtle issue is that you will have to deal with uncertainty and noise. Sometimes you can ignore it, sometimes you can't. The finer the resolution you need, the bigger the issue becomes. A probabilistic framework is most likely the best solution.
There is an entire field of research around so-called SLAM algorithms. SLAM stands for simultaneous localization and mapping. These algorithms build a map from some sort of input from various types of cameras or sensors, and while building the map they also solve the localization problem within it. They are usually designed for 3D environments and are more demanding than the simpler solution indicated above, but you can find ready-to-use implementations. For exploration, something like Q-learning still has to be used.
I am trying to assess the potential of Python to calculate the service area of two points.
The idea is to create a map showing which terminal is more efficient in serving a given cell based on distance or cost or time (a different map for each).
The image shows point A and point B as terminals, I am trying to calculate the service area (or influence area) for each of the terminals.
In the example on the right the domain is homogeneous, and in the example on the left we have rail (green) and waterway (yellow). The different transportation modes will change the cost and time to market of any shipment to/from A and B. Intermodal operations are possible wherever the modes intersect, i.e. green to white, white to yellow, yellow to green, etc.
By service area I mean whether a given cell is closer/cheaper/faster to A or to B. Once I have this information, then I'd be able to create a service area map of A and B.
My question is whether Python is the right tool for this. As you might notice, I am not familiar with programming and would appreciate any tips (tutorials, etc.).
Please feel free to ask any questions back if the problem description is not clear.
Domain of the problem: [image showing the two example domains]
You can solve this problem in almost any programming language.
Python is a high-level programming language, meaning it takes care of things like memory management. This makes it somewhat slower but easier to learn as you have to write fewer lines of code to do what you want.
It is also versatile, well supported and established, making it a good candidate for a first language.
However, ultimately the question is what you are going to do with it. For example, if you want to develop something for the web, then going with JavaScript is probably better.
Here is a rough guide to where different programming languages are used.
Otherwise, google "which programming language should I learn" to find any of the millions of articles on this topic.
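To make your particular problem concrete, here is a hedged Python sketch of one way the service-area computation could work: run Dijkstra over a grid of per-cell travel costs from each terminal, then label every cell with whichever terminal reaches it more cheaply. The grid, costs, and terminal positions are purely illustrative:

```python
import heapq
import numpy as np

# Hypothetical per-cell traversal costs: 1.0 = plain terrain,
# 0.3 = rail or waterway (cheaper), np.inf = impassable.
cost = np.ones((20, 20))
cost[10, :] = 0.3  # an illustrative rail line across the middle

def dijkstra_grid(cost, start):
    """Cheapest accumulated cost from `start` to every cell (4-neighbour moves)."""
    dist = np.full(cost.shape, np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

dist_a = dijkstra_grid(cost, (2, 2))    # terminal A
dist_b = dijkstra_grid(cost, (17, 17))  # terminal B
service_map = np.where(dist_a <= dist_b, 'A', 'B')  # per-cell winner
```

Swapping the cost grid for time or money per cell would give the different maps you describe.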
I am creating an App Engine application in Python that will need to perform efficient geospatial queries on datastore data. An example use case would be: I need to find the first 20 posts within a 10-mile radius of the current user. Having done some research into my options, I have found that currently what seem like the 2 best approaches for achieving this type of functionality would be:
Indexing geoHashed geopoint data using Python's GeoModel library
Creating/deleting documents of structured data using Google's newer Search API
It seems from a high-level perspective that indexing geohashes and performing queries on them directly would be less costly and much faster than having to create and delete a document for every geospatial query. However, I've also read that geohashing can be very inaccurate along the equator or along 'faultlines' created by the hashing algorithm. I've seen very few posts contrasting the best methods in detail, and I think Stack is a good place to have this conversation, so my questions are as follows:
Has anyone implemented similar features and had positive experiences with either method?
Which method would be the cheaper alternative?
Which would be the faster alternative?
Is there another important method I'm leaving out?
Thanks in advance.
Geohashing does not have to be inaccurate at all; it's all in the implementation details. What I mean is that you can check the neighbouring geocells as well to handle border cases, and make sure that includes neighbours on the other side of the equator.
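A minimal sketch of that idea, assuming a geohash library that exposes encode() and neighbors() (the python-geohash package does, for instance), combined with an exact-distance post-filter; the datastore query itself is left as a placeholder:

```python
import math
import geohash  # e.g. the python-geohash package (assumed available)

def haversine_km(lat1, lon1, lat2, lon2):
    """Exact great-circle distance for the post-filter step."""
    r = 6371.0
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def candidate_cells(lat, lon, precision=5):
    """The user's cell plus its 8 neighbours, so border cases aren't missed."""
    centre = geohash.encode(lat, lon, precision)
    return [centre] + geohash.neighbors(centre)

# Query entities whose stored geohash starts with any candidate cell,
# then keep only those truly within the radius:
# nearby = [e for cell in candidate_cells(lat, lon)
#           for e in query_by_prefix(cell)   # hypothetical datastore query
#           if haversine_km(lat, lon, e.lat, e.lon) <= radius_km]
```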
If your use case is finding other entities within a radius as you suggest, I would definitely recommend using the Search API. They have a distance function tailored for that use.
Search API queries are more expensive than Datastore queries, yes, but if you weigh in the computation time to do these calculations in your instance, and probably iterating through all entities for each geohash to make sure the distance is actually less than the desired radius, then I would say the Search API is the winner. And don't forget about the implementation time.
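As a sketch of what that looks like (the index and field names are illustrative), the Search API's query language has a built-in distance() function over geopoint fields:

```python
from google.appengine.api import search

index = search.Index(name='posts')  # hypothetical index of post documents

# Find documents whose 'location' geopoint field lies within ~10 miles
# (about 16093 metres) of the user, returning at most 20 results.
query = search.Query(
    query_string='distance(location, geopoint(40.7128, -74.0060)) < 16093',
    options=search.QueryOptions(limit=20))
results = index.search(query)
for doc in results:
    print(doc.doc_id)
```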
You can have a look at this post; it can be another great alternative.
I have used this within my app and it works great for my requirement of finding my app's users within a provided radius.
I am searching for a lib that helps me work with many sound properties.
I mean, I need something to get each frequency in a sound, to get the length and width of the sound waves (wavelength and amplitude), and to measure the peaks and troughs of the sounds.
I need something that gets me as close as possible to manipulating and measuring sound waves; this is something I need more for scientific research than for an application.
It is hard to find something like that. If you could help me with some links or anything, it would be a great help for me.
If you have something even in other languages, it could help me.
I will keep this question updated as I find answers as well.
Thanks in advance.
The Python wiki page PythonInMusic has a lot of links, some of which will probably be useful to you. It includes a whole range of projects to input and output sound in different formats. A quick glance shows a couple of more specialised projects that might also be helpful:
audiolab - bridges the gap between numpy and sound formats
musickit - support for signal processing, and apparently used in 'scientific experiments'
These will probably give you the tools to read sounds in and convert them into a useful form for analysis.
After that, it seems to me that what you are describing is more about signal/waveform analysis than sound per se, so that may be a more helpful direction to search in. I'm not aware of any Python package that does exactly what you're looking for, but measurement of things like wavelength, peak and trough doesn't sound particularly difficult to me - you could look at coding your own routines for this using SciPy.
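For instance, a minimal NumPy/SciPy sketch of that kind of routine, assuming a mono WAV file (find_peaks needs a reasonably recent SciPy; older versions have similar helpers):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

rate, samples = wavfile.read('recording.wav')  # hypothetical mono file
samples = samples.astype(float)

# Dominant frequency via the magnitude spectrum (skipping the DC bin).
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
idx = np.argmax(spectrum[1:]) + 1
dominant_hz = freqs[idx]

# Peaks and troughs of the waveform itself.
peaks, _ = find_peaks(samples)
troughs, _ = find_peaks(-samples)

# Wavelength: one period in seconds, then metres at ~343 m/s in air.
period_s = 1.0 / dominant_hz
wavelength_m = 343.0 * period_s
print(dominant_hz, len(peaks), len(troughs), wavelength_m)
```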
I took a scientific programming course this semester that I really enjoyed and experimented with a lot. We used Python and all the related modules. I am taking a physics lab next semester and I just wanted to hear from some of you how Python can help me in ways that Excel can't, or in ways that are better than Excel's capabilities. I use Mathematica for symbolic stuff, so I would use Python for data purposes.
Off the top of my head, here are the related things I can do:
All of the things you would expect in an intro course (loops, arrays, slicing arrays, etc.).
Reading data from a text file.
Plotting scatter, line, and bar graphs.
Learning how to plot a linear regression, but I haven't totally figured it out.
I have done 7 of the problems on Project Euler (nothing to brag about, but it might give you a better idea of where I stand in skills).
Looking forward to hearing from some of you. You don't have to explain how to use the things you mention, I could look up the documentation.
The paper "Python - All a Scientist Needs" comes to mind. I hope you can make the needed transformations from biology to physics.
Scipy will also be useful to you, as it includes many more advanced analysis tools. For example, Scipy includes a linear regression, and gets more interesting from there. Along with the other tools you mentioned, you'll probably find most of your needs covered.
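Since you mentioned still figuring out linear regression, here is a minimal sketch of the SciPy route (the data values are made up):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Made-up measurements: voltage vs. current, say.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 2.1, 3.9, 6.2, 7.8])

slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
print(slope, intercept, r_value ** 2)  # fit parameters and R^2

plt.plot(x, y, 'o', label='data')
plt.plot(x, intercept + slope * x, '-', label='fit')
plt.legend()
plt.show()
```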
Other notes on tool selection:
Mathematica is a great tool, if you can afford it. I've played around with the other options, like Sympy, and sadly, they don't come close to being as useful as Mathematica.
I can't imagine using Excel for any serious scientific work. If you're planning to continue forward using the tools that you learn in class, you might as well start with tools that offer you that potential.
Don't reject Excel outright. It's still great for doing simple data analysis and plotting. Excel also has the considerable advantage of being installed on most engineers' and scientists' computers, making it a lot easier to share your work with colleagues.
That said, I do use Python when Excel just won't cut it; times when I've had to:
color the points in a scatter plot based on a third column (see the sketch after this list)
plot a field of vectors
extract a few values from each of several thousand data files to do statistical process control
generate dozens of scatter plots over different dimensions of a large data set to find which variables are important
solve a nonlinear equation at several intermediate points of a calculation, not just as the final result.
accept variable length input from a user to define a problem
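For the first item, a minimal matplotlib sketch (the data here is random, just to illustrate):

```python
import numpy as np
import matplotlib.pyplot as plt

# Three columns: x, y, and a value to colour each point by.
x, y, z = np.random.rand(3, 100)

sc = plt.scatter(x, y, c=z)  # colour points by the third column
plt.colorbar(sc, label='third column')
plt.show()
```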
VBA in Excel can do a lot of those things too, but it becomes painful fast in such a primitive language. I dream that Microsoft will make IronPython a first-class scripting language in the next version of Excel. Until then, you might want to try Resolver One.
I can recall 2 presentations by Jan Martinek at EuroScipy 2008; he's a PhD candidate and presented some fun experiments with physics in the background. Abstracts are here, and I'm sure he wouldn't mind sharing more if you contact him directly. Also, take a look at the other presentations from EuroScipy; there are some more physics-related ones.