What is the Group.lostsprites attribute for in pygame? - python

In pygame, groups have a lostsprites attribute. What is this for?
Link to where it's first defined in the code: https://github.com/pygame/pygame/blob/main/src_py/sprite.py#L361
It seems to be some sort of internal thing, as I was unable to find any documentation on its purpose.
Searching on the pygame website yields one result (which doesn't explain its purpose):
https://www.pygame.org/docs/search.html?q=lostsprites
I also tried searching on Google, but I couldn't find anything.
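From reading the linked source, lostsprites appears to be internal bookkeeping for dirty-rect rendering: when a sprite is removed from a group, the rect where it was last drawn is appended to lostsprites, so that Group.clear() can erase that area and update-returning draw methods (like RenderUpdates.draw) can report it as a dirty rectangle. A paraphrased sketch of the relevant logic (not a verbatim copy of sprite.py):

class AbstractGroup:
    def __init__(self):
        self.spritedict = {}   # sprite -> rect where it was last drawn
        self.lostsprites = []  # rects of sprites removed since the last draw

    def remove_internal(self, sprite):
        # Remember where the sprite was drawn so its area can be erased later.
        lost_rect = self.spritedict[sprite]
        if lost_rect:
            self.lostsprites.append(lost_rect)
        del self.spritedict[sprite]

    def clear(self, surface, bgd):
        # Erase the areas of removed sprites as well as the tracked ones.
        for lost_rect in self.lostsprites:
            surface.blit(bgd, lost_rect, lost_rect)
        for rect in self.spritedict.values():
            if rect:
                surface.blit(bgd, rect, rect)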

Related

Libtorrent. Answer some questions

To begin with, English is not my native language, so it's hard for me to read the libtorrent documentation, and this whole question has been translated.
Please answer these questions; if you only know the answer to some of them, answer just those.
I am using libtorrent 2.0.7 and Python 3.8.
It is not necessary to answer in Python; I will try to figure it out even if you answer in C++.
While the torrent has not finished downloading yet: how do I get the list of all files that will be downloaded?
Once the torrent has finished downloading: how do I get the paths of the files that were downloaded?
(I found a similar question, but its answer stopped working because the API it used is deprecated.)
I'm trying to use
handle.get_torrent_info()
to answer point 1, but it returns
DeprecationWarning: get_torrent_info() is deprecated
I tried to look in the source file, but it doesn't say what to use instead of this function. Do you know?
I would like to set a download speed limit for the entire session. To do this, I found
session.download_rate_limit()
but when using it, it returns
DeprecationWarning: download_rate_limit() is deprecated.
I also tried to look in the documentation, but I didn't find it there. I also couldn't figure out what parameters it accepts; I tried an int, but it returned an error. As in point 2, it is not written what to use instead of the deprecated function. Do you know?
I would like the session to download only one torrent at a time, with the rest queued in the order they are resumed from the paused state. I have no idea how to do this. Please help.
I found the answer to the 1st and 2nd questions:
status = handle.status()
files = status.torrent_file.files()
for i in range(files.num_files()):
    print(files.file_path(i))
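For the remaining points, a hedged sketch based on the libtorrent 2.x settings_pack API (treat the values as examples; the keys are documented under settings_pack): handle.torrent_file() replaces the deprecated get_torrent_info(), session-wide limits go through apply_settings(), and active_downloads controls how many auto-managed torrents may download at once.

import libtorrent as lt

ses = lt.session()

# Session-wide settings via settings_pack keys:
# - download_rate_limit is in bytes per second (0 means unlimited)
# - active_downloads = 1 keeps a single auto-managed torrent downloading;
#   the rest wait in the queue
ses.apply_settings({
    "download_rate_limit": 100 * 1024,  # 100 KiB/s
    "active_downloads": 1,
})

# Replacement for the deprecated handle.get_torrent_info(); assumes
# 'handle' is a torrent_handle whose metadata is already available.
def list_files(handle):
    files = handle.torrent_file().files()
    for i in range(files.num_files()):
        print(files.file_path(i))

Note that the queue limit only applies to torrents added as auto-managed.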

Unable to get desired google search result using module "google"

I have been trying to scrape Google search data.
Let me explain what I have done so far.
I have used the google module to get the search results, along with BeautifulSoup. Below is a sample search I made:
>>> from google import search
>>>
>>> for i in search("tom and jerry", tld="co.in",num=10,stop=1): print i
https://www.youtube.com/watch?v=mugo5LoG8Ws
https://en.wikipedia.org/wiki/Tom_and_Jerry
http://www.dailymail.co.uk/debate/article-2390792/How-sense-humour-censor-Tom-Jerry-racist-By-Mail-TV-critic-CHRISTOPHER-STEVENS.html
http://edp.wikia.com/wiki/Tom_and_Jerry
https://www.youtube.com/watch?v=gSK5curwV_o
https://www.youtube.com/watch?v=xb8jTvSwJbw
https://www.youtube.com/watch?v=Kj8VuTr5q9g
https://www.youtube.com/watch?v=iIprJoPTJoI
https://www.youtube.com/watch?v=UaX3hvrZDJA
http://www.cartoonnetwork.com/games/tomjerry/
https://www.facebook.com/TomandJerry/
http://www.dailymotion.com/video/x2mn36a
http://www.dailymotion.com/video/x2p0k8j
>>>
But this result actually differs from the manual search result.
How exactly does it differ? And if we make changes to the __init__.py file of the google library, can we get better results?
Please point me toward a possible way.
Thanks in advance.
[Note]: I already searched previous discussions on Stack Overflow. If this is a dup, I apologize... :)
EDIT 1: I also get duplicate links sometimes. The first link is repeated a few times in the generator output I get from the google.search(*args) call. Please advise me on how to get rid of these.
I figured out where the dups come from: they are the sublinks shown for popular websites on the Google search results page.
Sorry the image was too small. :)
I am researching more on the API output and the way it is parsed. Thanks to all who thought of helping me :)
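If anyone hits the same duplicate links, a minimal order-preserving de-duplication sketch (wrapping the same Python 2-era google.search generator from the question):

def unique(urls):
    # Yield each URL once, keeping the order of first appearance.
    seen = set()
    for url in urls:
        if url not in seen:
            seen.add(url)
            yield url

# for url in unique(search("tom and jerry", tld="co.in", num=10, stop=1)):
#     print(url)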

Get spotify currently playing track

EDIT: Let's try to clarify all this.
I'm writing a python script, and I want it to tell me the song that Spotify is currently playing.
I've tried looking for libraries that could help me but didn't find any that are still maintained and working.
I've also looked through Spotify's web API, but it does not provide any way to get that information.
The only potential solution I found would be to grab the title of my Spotify (desktop app) window, but I haven't managed to do that so far.
So basically, what I'm asking is whether anyone knows:
How to apply the method I'm already trying to use (getting the window's title from a program), either in pure Python or using an intermediary shell script;
OR
Any other way to extract that information from Spotify's desktop app or web client.
Original post :
I'm fiddling with the idea of a Python status bar for a Linux environment, nothing fancy, just a script tailored to my own usage. What I'm trying to do right now is to display the currently playing track from Spotify (namely, the artist and title).
There does not seem to be anything like that in their official web API. I haven't found any third-party library that would do that either. Most libraries I found are either deprecated since Spotify released their current API, or they are based on said API, which does not do what I want.
I've also read a bunch of similar questions on here, most of which had either no answers or a deprecated solution.
I thought about grabbing the window title, since it does display the information I need. But not only does that seem really convoluted, I am also having difficulties making it happen. I was trying to get it by running a combination of the Linux commands xdotool and xprop inside my script.
It's worth mentioning that since I'm already using the psutil lib for other information, I already have access to Spotify's PID.
Any idea how I could do that ?
And in case my method was the only one you can think of, any idea how to actually make it work ?
Your help will be appreciated.
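One possible take on the window-title approach, sketched with subprocess and xdotool. This assumes xdotool is installed, that the Spotify window class matches "spotify", and that the desktop client titles its window "Artist - Title" while playing; all of these may vary by version, so treat it as a starting point rather than a known-good recipe.

import subprocess

def spotify_window_title():
    # Ask xdotool for the name of a visible window whose class matches.
    try:
        out = subprocess.check_output(
            ["xdotool", "search", "--onlyvisible", "--class", "spotify",
             "getwindowname"])
    except subprocess.CalledProcessError:
        return None  # no matching window found
    lines = out.decode("utf-8").splitlines()
    return lines[0].strip() if lines else None

title = spotify_window_title()
if title and " - " in title:
    artist, song = title.split(" - ", 1)
    print(artist, "/", song)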
The Spotify client on Linux implements a D-Bus interface called MPRIS - Media Player Remote Interfacing Specification.
http://specifications.freedesktop.org/mpris-spec/latest/index.html
You could access the title (and other metadata) from python like this:
import dbus

session_bus = dbus.SessionBus()
spotify_bus = session_bus.get_object(
    "org.mpris.MediaPlayer2.spotify",
    "/org/mpris/MediaPlayer2")
spotify_properties = dbus.Interface(
    spotify_bus,
    "org.freedesktop.DBus.Properties")
metadata = spotify_properties.Get("org.mpris.MediaPlayer2.Player", "Metadata")

# The Metadata property behaves like a Python dict
for key, value in metadata.items():
    print(key, value)

# To print just the title
print(metadata['xesam:title'])
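The artist lives in the same dict; under the MPRIS/xesam metadata conventions, 'xesam:artist' holds a list of artist names, so a combined line could look like this (a small follow-up to the snippet above):

artist = ", ".join(metadata.get("xesam:artist", []))
title = metadata.get("xesam:title", "")
print("%s - %s" % (artist, title))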
For Windows:
The library can be found on GitHub: https://github.com/XanderMJ/spotilib. Keep in mind that this is still a work in progress.
Just copy the file and place it in your Python/Lib directory.
import spotilib
spotilib.artist() #returns the artist of the current playing song
spotilib.song() #returns the song title of the current playing song
spotilib.artist() returns only the first artist. I started working on another library, spotimeta.py, to solve this issue. However, it is not working 100% yet.
import spotimeta
spotimeta.artists() #returns a list of all the collaborating artists of the track
If an error occurs, spotimeta.artists() will return only the first artist (found with spotilib.artist())

Scraping and parsing with Python - lxml with long Xpaths

I am loading and scrolling on dynamically loading pages. An example is the Facebook "wall", which only loads the next items once you have scrolled to somewhere near the bottom.
I scroll until the page is veeeery long, then I copy the source code, save it as a text file and go on to parsing it.
I would like to extract certain parts of the webpage. I have been using the lxml module in Python, but with limited success. On their website they only show examples with pretty short XPaths.
Below is an example of the function and a path that gets me the user names included on the page.
usersID = elTree.xpath('//a[@class="account-group js-account-group js-action-profile js-user-profile-link js-nav"]')
This works fairly well; however, I am getting some errors (see another post of mine), such as:
TypeError: 'NoneType' object has no attribute '__getitem__'
I have also been looking at the XPaths that Firebug provides. These are of course much longer and very specific. Here is an example for a recurring element on the page:
/html/body/div[2]/div[2]/div/div[2]/div[2]/div/div[2]/div/div/div/div/div[2]/ol[1]/li[26]/ol/li/div/div[2]/p
The part towards the end, li[26], shows it is the 26th item in a list of the same element, found at the same level of the HTML tree.
I would like to know how I might use such Firebug XPaths with the lxml library, or if anybody knows of a better way to use XPaths in general.
Using example HTML code and tools like this for test purposes, the XPaths from Firebug don't work at all. Is that kind of path just unreliable, in people's experience?
Is it very specific to the source code? Are there any other tools like Firebug that produce more reliable output for use with lxml?
Firebug actually generates really poor XPaths. They are long and fragile because they're incredibly non-specific beyond hierarchy.
Pages today are incredibly dynamic.
The best way to work with xpath on dynamic pages is to locate common elements as the hook and perform xpath ops from those as your path root.
What I mean here by common elements is stable structural elements that are highly likely or guaranteed to be present. Pick the one closest to your target in terms of containment hierarchy. Shorter paths are faster and clearer.
From there you need to create paths that locate some specific unique attribute or attribute value on the target element.
Sometimes that's not possible so another strategy is to target the closest uniquely identifiable container element then get all elements similar to yours under that and iterate them looking for your goal.
Highly dynamic pages require sophisticated and dynamic approaches.
Facebook changes a lot and will require script maintenance frequently.
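To make the hook idea concrete, here is a small hedged sketch with lxml; the id and class names are made up for illustration:

from lxml import html

tree = html.fromstring(page_source)  # page_source: the saved HTML text

# Hook: a stable, uniquely identifiable container (names are hypothetical).
containers = tree.xpath('//ol[@id="stream-items-id"]')
if containers:
    stream = containers[0]
    # Query relative to the hook, not with an absolute /html/body/... path.
    for para in stream.xpath('.//p[contains(@class, "js-tweet-text")]'):
        print(para.text_content())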
I found two things which, together, worked very well for me.
The first thing:
The lxml package allows the use of various XPath functions. I used the starts-with function, as follows:
tweetID = elTree.xpath("//div[starts-with(@class, 'the-non-varying-part-of-a-very-long-class-name')]")
When exploring the HTML tree using tools such as Firebug/Firepath, an element's class attribute is shown as one neat string, for example:
tweet original-tweet js-original-tweet js-stream-tweet js-actionable-tweet js-profile-popup-actionable has-cards has-native-media with-media-forward media-forward cards-forward
When I used that highlighted value to search my elTree in the code above, nothing was found.
Having a look at the actual HTML file I was trying to parse, I saw the attribute value was actually spread over many lines, which explains why the lxml package was not finding it with my search.
The second thing:
I know this is not generally recommended as a workaround, but the Python philosophy that it is "easier to ask for forgiveness than permission" applied in my case: the next thing I did was to wrap the offending lines in a try/except for the TypeError that I kept getting at seemingly arbitrary points in my code.
This may well be specific to my code, but after checking the output in many cases, it seems to have worked well for me.
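For illustration, a sketch of that EAFP pattern around the kind of lookup used above (the class names are assumptions):

for div in elTree.xpath('//div[starts-with(@class, "tweet")]'):
    try:
        link = div.xpath('.//a[contains(@class, "account-group")]')[0]
        print(link.get('href'))
    except (TypeError, IndexError):
        # Skip nodes that lack the expected structure instead of crashing.
        continue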

Search function with PyGTKsourceview

I'm writing a small HTML editor in Python, mostly for personal use, and have integrated a gtksourceview2 object into my Python code. All the major functions seem to work more or less, but I'm having trouble getting a search function to work. Obviously the GUI work is already done, but I can't figure out how to use the built-in methods of the gtksourceview2 Buffer object (http://www.gnome.org/~gianmt/pygtksourceview2/class-gtksourcebuffer2.html) to actually search through the text in it.
Does anybody have a suggestion? I find the documentation not very verbose and can't really find a working example on the web.
Thanks in advance.
The reference for the C API can probably be helpful, including a chapter I found, "Searching in a GtkSourceBuffer".
So is the reference for the superclass gtk.TextBuffer.
Here is the Python doc; I couldn't find any up-to-date documentation, so I stuffed it in my Dropbox. Here is the link. What you want to look at are the gtk.TextIter.forward_search and gtk.TextIter.backward_search functions.
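A minimal sketch of how that might look in PyGTK; gtksourceview2's Buffer inherits from gtk.TextBuffer, so the TextIter search methods apply (the search flag is an assumption):

import gtk

def find_text(buffer, needle):
    # Search forward from the start of the buffer.
    start = buffer.get_start_iter()
    match = start.forward_search(needle, gtk.TEXT_SEARCH_TEXT_ONLY)
    if match is None:
        return False
    match_start, match_end = match
    # Select the match so it is highlighted in the view.
    buffer.select_range(match_start, match_end)
    return True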
