Hi, I'm new to Python programming.
I'm currently working on a project that needs to find the distance between two points (lat/lon) offline.
I know Google Maps provides this service, but I can't use it since the free account has a usage limit.
So I googled around and found that pyroutelib2 can do this for me using OpenStreetMap map data.
Now I'm kind of stuck. I'm running Windows 8 x64 and my Python is 2.7.
I downloaded pyroutelib2 from this link:
http://svn.openstreetmap.org/applications/routing/pyroutelib2/
and have my country's map (an osm.bz2 file) ready. The problem is that when I type any of these commands
loadosm.py f:\asia.osm car
loadosm.py f:\asia.osm.bz2 car
loadosm.py f:\asia.osm.pbf car
(the OSM file is in a different directory)
in my console, the OSM file won't load and I get this message:
Loaded 0 nodes
Loaded 0 cycle routes
Searching for node: found None
Can anybody help me, please? Thanks.
I get the same output. Either pyroutelib2 or its documentation is broken.
I suggest just using another routing library/tool. See the OSM wiki about routing as well as the lists of online routers and offline routers. There are lots of interesting solutions available.
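For example, one option from that offline-router list is OSRM (Open Source Routing Machine), which you run locally against your own OSM extract and query over HTTP. Here is a minimal sketch assuming a local OSRM instance prepared from your .osm.pbf file and listening on its default port 5000; the coordinates are placeholders:

import requests  # queries a locally running OSRM server

def road_distance_km(lat1, lon1, lat2, lon2):
    # OSRM expects lon,lat pairs in the URL path
    url = ("http://localhost:5000/route/v1/driving/"
           "{},{};{},{}".format(lon1, lat1, lon2, lat2))
    response = requests.get(url, params={"overview": "false"})
    response.raise_for_status()
    route = response.json()["routes"][0]
    return route["distance"] / 1000.0  # metres -> kilometres

# placeholder coordinates
print(road_distance_km(-6.2088, 106.8456, -6.1751, 106.8650))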
Check out osmapi, it's what I've used to get OSM files and import them into pyroutelib2. I don't know if that will solve your problems, but I've had luck going that route.
I want to post a carousel of n images using python. I have found various sources on how to upload a single picture with a caption by using libraries like InstaBot, but I could not find any source on how to do this with a carousel, if it is at all possible.
I have all the files stored locally and know how to get the filenames and everything within my script. I want to be able to set the order of the images as well. There tend to be 5 images, so the maximum of 10 will never be surpassed.
Anybody know if this is possible and if yes, how do I achieve it?
As far as I know, no library has this option, but it is possible if you use the API on your own.
Update: check the instagrapi library.
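For example, instagrapi exposes an album upload call that takes a list of image paths. A rough sketch follows; the credentials and file names are placeholders, and you should double-check the exact signature in the instagrapi docs:

from pathlib import Path
from instagrapi import Client

USERNAME = "your_username"   # placeholder
PASSWORD = "your_password"   # placeholder
# the order of this list sets the order of the slides in the carousel
images = [Path("post/img1.jpg"), Path("post/img2.jpg"), Path("post/img3.jpg")]

cl = Client()
cl.login(USERNAME, PASSWORD)
cl.album_upload(images, caption="My carousel post")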
Come on, guys, why the -2 reputation? I'm just trying to help as far as I know; I need reputation to comment on answers.
I got a practice task that I cannot get any further with.
The task is the following:
Project Description
The goal of the project is to create a map view similar to Google Maps, where the user can see some imagery data captured by drones.
The user should be able to move around the map freely, as well as zoom in and zoom out to take a closer look at the captured imagery data.
It is strongly desired that the served imagery data support transparency while minimizing file size and bandwidth usage. This does not have to be implemented, but solution ideas are welcome.
The raw imagery data will be provided as GeoTIFF files. Imagery visible on the map can be added by placing a file inside a directory that is read by the server.
Project Delivery Method
The project should be delivered as a Git repository with the documentation required to set up and run the project.
Requirements
1. Server implementation in Python 3.5+
2. Project must be able to run on Ubuntu Server 16.04
3. Optimal disk space usage for imagery data displayed to the user (as the app may be processing terabytes of satellite imagery data)
4. Relatively conservative bandwidth usage
Notes:
1. The project will be deployed on a machine that is already running other Python software. Dependency conflicts must be avoided.
(virtualenv, Docker)
2. The UI can be a simple HTML page with embedded libraries and inline scripts.
In addition, it was specified in an e-mail:
"The test task is not code but just the approach and rough app
architecture
```I'm attaching a tank spec. Like I've mentioned. I'm more interested
in problem-solving and your ideas. I expect a working prototype tough.
Use any libraries you wish to use. Create an elegant, easy to
understand the solution. You can use as much time as you want. Would be
great if you could deliver the code by git.... ```"
So far I have done:
Ubuntu as VM
Venv
Postgres and PostGIS installed (Django writes to the database without errors)
Django project and app created
Documentation up to this point
I have now loaded the GeoTIFF via the console, and that seems to work too:
from django.contrib.gis.gdal import GDALRaster
raster = GDALRaster('base/static/base/geotiff/xto-site3-rgb.tif')
raster.name
Out[4]: 'base/static/base/geotiff/xto-site3-rgb.tif'
raster.width, raster.height
Out[5]: (23001, 9668)
models.py contains so far:
from django.contrib.gis.db import models

class RasterBase(models.Model):
    raster = models.RasterField()
    name = models.TextField()
How do I go about serving the raster so that I can display it in an HTML page similar to Google Maps? If I understand correctly, I now have to write the GeoTIFF into the database and read it from there, right?
Unfortunately, I mostly find outdated material online, or examples that assume shapefiles. Should I convert the raster to a shapefile and continue that way?
So far I have only built small things in Django, like my own blog and a few statistics pages, but this GeoDjango task is a bit fierce because I have to hand it in tomorrow, Tuesday morning at the latest.
I would be very grateful if someone could give me some tips. All in all, this is pretty important to me, and it would be a shame if I messed up the last third of the task.
Django is version 2.0.
The GeoTIFF is ~900 MB.
Thanks for everything. :-)
Late to the party, but maybe some people are searching for a solution here.
When you want to display geodata on a map, you can use a WebGIS framework like Openlayers or Leaflet. They provide all the functionality to move the map and zoom in / out.
I would not recommend storing large raster data in a database. You can serve it directly from a file server via a TileLayer, or use an XYZ tiling structure to minimize bandwidth usage.
OpenLayers has a lot of examples on how to serve GeoTIFF files.
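As a rough illustration of the XYZ tiling idea: GDAL ships with the gdal2tiles.py utility, which can cut a GeoTIFF into a web-mercator tile pyramid that a Leaflet or OpenLayers tile layer can consume directly. The output directory and zoom range below are placeholders, and the --xyz flag assumes GDAL 3.1 or newer:

import subprocess

src = "base/static/base/geotiff/xto-site3-rgb.tif"   # the GeoTIFF from the question
out_dir = "base/static/base/tiles/xto-site3"         # placeholder output directory

# Cut the raster into an XYZ tile pyramid for zoom levels 10-16.
subprocess.check_call([
    "gdal2tiles.py", "--xyz", "--zoom=10-16", "--webviewer=none",
    src, out_dir,
])

# Leaflet/OpenLayers can then point a tile layer at a URL template like:
#   /static/base/tiles/xto-site3/{z}/{x}/{y}.png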
EDIT: Let's try to clarify all this.
I'm writing a python script, and I want it to tell me the song that Spotify is currently playing.
I've tried looking for libraries that could help me but didn't find any that are still maintained and working.
I've also looked through Spotify's web API, but it does not provide any way to get that information.
The only potential solution I found would be to grab the title of my Spotify (desktop app) window. But I didn't manage to do that so far.
So basically, what I'm asking is whether anyone knows:
How to apply the method I'm already trying to use (get the window's title from a program), either in pure python or using an intermediary shell script.
OR
Any other way to extract that information from Spotify's desktop app or web client.
Original post:
I'm fiddling with the idea of a python status bar for a linux environment, nothing fancy, just a script tailored to my own usage. What I'm trying to do right now is to display the currently playing track from spotify (namely, the artist and title).
There does not seem to be anything like that in their official web API. I haven't found any third-party library that would do that either. Most libraries I found are either deprecated since Spotify released their current API, or they are based on said API, which does not do what I want.
I've also read a bunch of similar questions on here, most of which had no answers or a deprecated solution.
I thought about grabbing the window title, since it does display the information I need. But not only does that seem really convoluted, I am also having difficulties making it happen. I was trying to get it by running a combination of the Linux commands xdotool and xprop inside my script.
It's worth mentioning that since I'm already using the psutil lib for other information, I already have access to Spotify's PID.
Any idea how I could do that?
And in case my method is the only one you can think of, any idea how to actually make it work?
Your help will be appreciated.
The Spotify client on Linux implements a D-Bus interface called MPRIS - Media Player Remote Interfacing Specification.
http://specifications.freedesktop.org/mpris-spec/latest/index.html
You could access the title (and other metadata) from python like this:
import dbus

session_bus = dbus.SessionBus()
spotify_bus = session_bus.get_object("org.mpris.MediaPlayer2.spotify",
                                     "/org/mpris/MediaPlayer2")
spotify_properties = dbus.Interface(spotify_bus,
                                    "org.freedesktop.DBus.Properties")
metadata = spotify_properties.Get("org.mpris.MediaPlayer2.Player", "Metadata")

# The Metadata property behaves like a Python dict
for key, value in metadata.items():
    print(key, value)

# To just print the title
print(metadata['xesam:title'])
For Windows:
The library can be found on GitHub: https://github.com/XanderMJ/spotilib. Keep in mind that this is still a work in progress.
Just copy the file and place it in your Python/Lib directory.
import spotilib
spotilib.artist() #returns the artist of the current playing song
spotilib.song() #returns the song title of the current playing song
spotilib.artist() returns only the first artist. I started working on another library, spotimeta.py, to solve this issue. However, it is not working 100% yet.
import spotimeta
spotimeta.artists() #returns a list of all the collaborating artists of the track
If an error occurs, spotimeta.artists() will return only the first artist (found with spotilib.artist())
I'm building a web form to accommodate users uploading .obj and .fbx 3D models to a site. We need a server-side solution to convert these files to Collada (dae).
It would be massively helpful if someone could point me in the right direction as I have no solid ideas yet on a possible solution. I'd like to hear what others think before I go off down one path.
I can only think of something along the lines of a Python/Perl script triggered by the PHP during upload?
Many thanks in advance,
I would use a Python program on the server, triggered by PHP. I would look around for a Python library for working with Collada files (e.g. http://collada.in4lines.com/), then I would use the FBX Python SDK to convert FBX files to Collada. For OBJ, maybe something like http://pygame.org/wiki/OBJFileLoader would be helpful.
Update: I recently wrote a blog post about using FBX and Python as a web server.
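For the FBX-to-Collada step specifically, a rough sketch with the Autodesk FBX Python SDK could look like the code below. Treat it as an outline rather than a tested recipe: the writer-format lookup loop is how I would locate the Collada exporter, the paths are placeholders, and the details should be verified against the SDK documentation:

import fbx  # Autodesk FBX Python SDK, installed separately with the FBX SDK

def fbx_to_collada(src_path, dst_path):
    manager = fbx.FbxManager.Create()
    manager.SetIOSettings(fbx.FbxIOSettings.Create(manager, fbx.IOSROOT))

    # Import the uploaded FBX file into a scene
    importer = fbx.FbxImporter.Create(manager, "")
    if not importer.Initialize(src_path, -1, manager.GetIOSettings()):
        raise RuntimeError("Could not read %s" % src_path)
    scene = fbx.FbxScene.Create(manager, "scene")
    importer.Import(scene)
    importer.Destroy()

    # Find the Collada writer among the registered export formats
    registry = manager.GetIOPluginRegistry()
    collada_format = -1
    for i in range(registry.GetWriterFormatCount()):
        if "collada" in registry.GetWriterFormatDescription(i).lower():
            collada_format = i
            break

    exporter = fbx.FbxExporter.Create(manager, "")
    if not exporter.Initialize(dst_path, collada_format, manager.GetIOSettings()):
        raise RuntimeError("Could not write %s" % dst_path)
    exporter.Export(scene)
    exporter.Destroy()
    manager.Destroy()

fbx_to_collada("uploads/model.fbx", "converted/model.dae")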
I was using the xgoogle Python library for one of my projects. It was working fine until recently, but now I am not getting the result set that I used to get. If anyone who has used this library (written by Peteris Krumins) has faced a similar situation, can you please suggest a workaround?
The presence of BeautifulSoup.py hints that this library uses web scraping to get its result.
A common problem with this is that it will easily break when the design/layout of the page being scraped changes. And the problem you see seems to coincide with the new search results layout that Google introduced just recently.
Another problem is that it often is against the terms of service of the site being scraped. And according to point 5.3 of the Google Terms Of Service it actually is:
You specifically agree not to access (or attempt to access) any of the Services through any automated means (including use of scripts or web crawlers) [...]
A better idea would be to use the Custom Search API.
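For reference, here is a minimal sketch of calling the Custom Search JSON API with requests; the API key and search engine ID (cx) are placeholders that you obtain from the Google developer console:

import requests

API_KEY = "your-api-key"            # placeholder
SEARCH_ENGINE_ID = "your-cx-id"     # placeholder custom search engine ID

def google_search(query, num=10):
    response = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": SEARCH_ENGINE_ID, "q": query, "num": num},
    )
    response.raise_for_status()
    return [(item["title"], item["link"])
            for item in response.json().get("items", [])]

for title, link in google_search("xgoogle python"):
    print(title, link)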
Peteris Krumins's xgoogle looks to be extremely useful both to me and, I imagine, to many others.
https://github.com/pkrumins/xgoogle
For me, the current version, 1.3, is not working.
I tried a fresh install from GitHub, ran the examples, and nothing was returned.
Adding a debugger to the source code and tracing the data captured in a query to the point where it disappears, the problem occurs in search.py, in the subroutine "_extract_results", at this parser command:
results = soup.findAll('li', {'class': 'g'})
The soup object has material in it, but the findAll call fails to return anything.
It looks like it's searching for list items, and if there are none it returns nothing.
I am unsure what HTML you would try to match to get a result.
If anyone knows how to make this work, I am very interested.
A little more googling, and it appears xgoogle is no longer supported or working.
Part of the trouble is that Google changes the layout of its results pages every so often and so any scraping software that assumes some standard layout is in time doomed to failure.
There are, however, other search engines that are installed locally and thus provide a results layout that is less likely to change with upgrades, and will not change at all if you don't upgrade.
I am currently investigating Yacy. Easy to install and can be pointed at specific sites if you want.
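If you go the YaCy route, its results are also available over a local HTTP API. A quick sketch of how I would query it (the yacysearch.json endpoint and default port 8090 are from memory, and the response layout should be verified against your installation):

import requests

def yacy_search(query, count=10):
    # Query a locally running YaCy instance (default port 8090)
    response = requests.get(
        "http://localhost:8090/yacysearch.json",
        params={"query": query, "maximumRecords": count},
    )
    response.raise_for_status()
    channel = response.json()["channels"][0]
    return [(item["title"], item["link"]) for item in channel.get("items", [])]

for title, link in yacy_search("openstreetmap routing"):
    print(title, link)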