Is it possible to remove videos from Plex's "Recently Added" section? I'd like to add some old videos to my server for posterity, but not have them appear as newly added.
When I initially sought an answer to this question, I looked on support forums, including Plex's own and Reddit. Every answer said it was either not possible or suggested editing the metadata_items table of the SQLite database directly (yuck). I have since found a better way, but those support threads are now locked, so I'm unable to share this solution there. Hopefully answering the question here helps others find it.
This actually is possible and quite easy via the REST API. First, pip install plexapi. Then use the following script to update the addedAt field of your video of choice.
import os

from plexapi.myplex import MyPlexAccount

USERNAME = os.environ.get("PLEX_USERNAME")
PASSWORD = os.environ.get("PLEX_PASSWORD")

# Sign in to your Plex account and connect to your server
account = MyPlexAccount(USERNAME, PASSWORD)
plex = account.resource("<YOUR_PLEX_HOSTNAME>").connect()

# Find the video whose "Recently Added" position you want to change
library = plex.library.section("<YOUR_PLEX_LIBRARY_NAME>")
video = library.get(title="<YOUR MOVIE TITLE>")

# Backdate the addedAt field
updates = {"addedAt.value": "2018-08-21 11:19:43"}
video.edit(**updates)
That's it! Since "Recently Added" is sorted by the addedAt field, changing that value to an older date moves the video to the back of the line.
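If you want to sanity-check the result, plexapi can also list a section's recently added items. A quick look might be something like this (a minimal sketch reusing the library object from above; assumes a reasonably recent plexapi version):

# Print the first few "Recently Added" entries with their addedAt dates;
# the backdated video should no longer appear near the front
for item in library.recentlyAdded()[:10]:
    print(item.title, item.addedAt)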
alexdlaird's suggestion worked very well for me! I had to make one change, which I believe is because I have 2FA enabled on my Plex account:
from plexapi.server import PlexServer

# Connect directly to the server with a token instead of
# username/password (works even with 2FA enabled)
baseurl = 'http://plexserver:32400'
token = 'YOUR PLEX TOKEN'
plex = PlexServer(baseurl, token)

library = plex.library.section("Movies")
video = library.get(title="MOVIE NAME")

# Backdate the addedAt field
updates = {"addedAt.value": "2018-08-21 11:19:43"}
video.edit(**updates)
I faced the same situation, and while gathering documentation on how to modify the SQLite database, I thought of another solution that did the trick for me. As previously said, all support threads on this issue are locked elsewhere, so I'm posting it here for others to find:
1. Stopped the Plex server
2. Disabled the server's (computer's) automatic clock synchronization
3. Set the internal clock manually to a year earlier
4. Started the Plex server
5. Added the movies
6. Scanned the library
7. The movies were added to the library without showing up in "Recently Added", BUT the metadata from moviedb could not be loaded (probably caused by the difference in time and date between my server and moviedb's server)
8. Stopped the Plex server
9. Re-enabled automatic clock synchronization
10. Started the Plex server
11. Updated all metadata for the library (I actually did it manually for each movie, as I didn't know whether I had custom metadata somewhere else in the library)
12. Enjoyed!
I want to get some data from the browser's cache. Chrome's cache file names look like f_00001, which is meaningless on its own. ChromeCacheView can obtain the request URL corresponding to each cache file name.
ChromeCacheView is a small utility that reads the cache folder of Google Chrome Web browser, and displays the list of all files currently stored in the cache. For each cache file, the following information is displayed: URL, Content type, File size, Last accessed time, Expiration time, Server name, Server response, and more.
You can easily select one or more items from the cache list, and then extract the files to another folder, or copy the URLs list to the clipboard.
But ChromeCacheView is a GUI program that only runs on Windows, and I want to know how it works.
In other words, how can I get more information about cached files, especially the request URLs, programmatically?
After a long search, I found the answer.
Instructions for Chrome disk cache format can be found on the following pages:
Disk cache
Chrome Disk Cache Format
By reading these documents, we can implement parsers in arbitrary programming languages.
Fortunately, I found two python libraries to do this.
dtformats
pyhindsight
The first one doesn't seem to work correctly under Python 3. The second one is fantastic and does the job perfectly. The pyhindsight home page has detailed usage instructions; here is how to integrate it into a project.
import logging
import os

import pyhindsight
from pyhindsight.analysis import AnalysisSession

analysis_session = AnalysisSession()

# Use raw strings for Windows paths (otherwise sequences like '\U' are
# treated as unicode escapes) and expand '~' to the user's home directory
cache_dir = os.path.expanduser(r'~\AppData\Local\Microsoft\Edge\User Data\Default')
analysis_session.input_path = cache_dir
analysis_session.cache_path = os.path.join(cache_dir, r'Cache\Cache_Data')
analysis_session.browser_type = 'Chrome'  # Edge is Chromium-based
analysis_session.no_copy = True
analysis_session.timezone = None

logging.basicConfig(filename=analysis_session.log_path, level=logging.FATAL,
                    format='%(asctime)s.%(msecs).03d | %(levelname).01s | %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S')

run_status = analysis_session.run()

# Print the URL and cache location for every cache entry found
for p in analysis_session.parsed_artifacts:
    if isinstance(p, pyhindsight.browsers.chrome.CacheEntry):
        print('Url: {}, Location: {}'.format(p.url, p.location))
That's all, enjoy! Thanks to the pyhindsight authors.
I'm trying to create a web scraper in Python for a website called Canvas LMS to find course links. A course link is formatted like this: [schoolname].instructure.com/courses/[id]. I need a bot to log in with my API key and check all IDs from 1 to 10,000 for courses whose pages do not contain the phrase "Unauthorized" or "Page Not Found" in the title, so I can check them manually. However, I cannot figure out how to do any of this, as there is no guide (to my knowledge) that covers it. Tips would be appreciated.
I was working on a similar problem a few weeks back and found a solution.
Firstly, there is extensive Canvas LMS documentation available here
Secondly, identify the relevant tags and copy them into a download set. I forked a repository here and ran it as a Docker container on my local machine.
I would also recommend viewing the data in a Jupyter Notebook. As a starter (and I do not advocate nested for loops, but you get the idea), try this:
from canvasapi import Canvas

# The first argument is your Canvas instance URL,
# e.g. "https://schoolname.instructure.com"
canvas = Canvas("your-canvas-url", "api-token")

# Get courses and print them
courses = canvas.get_courses()
for course in courses:
    print(f"{course}")

    # Get each course's modules and print them
    modules = course.get_modules()
    for module in modules:
        print(f"{module}")

        # Get each module's items and print them
        module_items = module.get_module_items()
        for item in module_items:
            print(f"{item}")
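And since the original goal was to scan course IDs directly, here is a minimal, hedged sketch of that idea (untested; it assumes canvasapi raises subclasses of its CanvasException for unauthorized or missing courses, which you may want to verify against your instance):

from canvasapi import Canvas
from canvasapi.exceptions import CanvasException

canvas = Canvas("https://schoolname.instructure.com", "api-token")

# Probe each ID; inaccessible or nonexistent courses raise an exception
for course_id in range(1, 10001):
    try:
        course = canvas.get_course(course_id)
        print(course_id, course.name)
    except CanvasException:
        pass  # Unauthorized / not found: skip it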
Good luck
I have a Node project that's bundled and added to GitHub as releases. At the moment, it checks my GitHub for a new release via the API and lets the user download it. The user must then stop the Node server, unzip the release.zip into the folder, and overwrite everything to update the project.
What I'm trying to do is write a Python script that I can execute from Node by spawning a new process. It will kill the Node server using PM2; the Python script will then check the GitHub API, grab the download URL, download the release, unzip the contents into the current folder, delete the zip, and start the Node server again.
What I'm struggling with, though, is checking the GitHub API and downloading the latest release file. Can anyone point me in the right direction? I've read that wget shouldn't be shelled out to from Python, and that urlopen should be used instead.
If you are asking for ways to get data from a web server, the two main libraries are:
Requests
Urllib
Personally, I prefer requests. They both have good documentation.
With requests, getting JSON data is as simple as:
import requests

r = requests.get("https://example.com")  # a full URL including the scheme is required
data = r.json()
You can add headers and other information easily, and it supports both HTTP and HTTPS.
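For the GitHub part specifically, here is a rough sketch using GitHub's public "latest release" endpoint (OWNER/REPO is a placeholder, and taking the first attached asset is an assumption you should adapt):

import requests

# Fetch metadata for the latest release of OWNER/REPO (placeholders)
resp = requests.get("https://api.github.com/repos/OWNER/REPO/releases/latest")
resp.raise_for_status()
release = resp.json()

# Assume the zip you want is the first attached asset
asset = release["assets"][0]
download_url = asset["browser_download_url"]

# Stream the asset to disk
with requests.get(download_url, stream=True) as r:
    r.raise_for_status()
    with open(asset["name"], "wb") as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)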
You need to map out your workflow and dataflow better, in words or pictures. If you can express your problem clearly and completely, step by step, in list form, you can then translate it to pseudocode. Python is great because you can go almost immediately from a good written description, to pseudocode, to a working implementation. Then at least you have something that works, and you can optimize performance or simplify functionality and usability from there. This is the process of translating a problem into a solution.
When asking questions on SO, you need to show your current thinking and what you've already tried, preferably with code that doesn't yet work the way you need it to. People can vote you down and cost you reputation points if you ask a question with just a vague description, an obvious cry for help with homework (yours is not that), or a musing with not even an attempt at a solution, because such questions do not contribute back to the community.
Do you have any code or detailed pseudocode steps for checking the GitHub API and checking for the "latest release" of file(s) you are trying to update?
EDIT: Let's try to clarify all this.
I'm writing a python script, and I want it to tell me the song that Spotify is currently playing.
I've tried looking for libraries that could help me but didn't find any that are still maintained and working.
I've also looked through Spotify's web API, but it does not provide any way to get that information.
The only potential solution I found was to grab the title of my Spotify (desktop app) window, but I haven't managed to do that so far.
So basically, what I'm asking is whether anyone knows:
How to apply the method I'm already trying to use (getting the window's title from a program), either in pure Python or using an intermediary shell script.
OR
Any other way to extract that information from Spotify's desktop app or web client.
Original post:
I'm fiddling with the idea of a Python status bar for a Linux environment; nothing fancy, just a script tailored to my own usage. What I'm trying to do right now is display the currently playing track from Spotify (namely, the artist and title).
There does not seem to be anything like that in their official web API. I haven't found any third-party library that does it either. Most libraries I found are either deprecated since Spotify released their current API, or based on said API, which does not do what I want.
I've also read a bunch of similar questions on here, most of which had no answers, or only a deprecated solution.
I thought about grabbing the window title, since it does display the information I need. But not only does that seem really convoluted, I'm also having difficulty making it happen. I was trying to do it by running a combination of the Linux commands xdotool and xprop inside my script.
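For what it's worth, the window-title idea can be sketched like this (an untested sketch; it assumes xdotool is installed and that Spotify titles its window "Artist - Title" while a track is playing):

import subprocess

# xdotool chains commands: the windows matched by 'search' become
# the targets of 'getwindowname'
output = subprocess.check_output(
    ["xdotool", "search", "--class", "spotify", "getwindowname"],
    text=True,
)
# Several windows may match; take the last non-empty name
title = [line for line in output.splitlines() if line.strip()][-1]
print(title)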
It's worth mentioning that since I'm already using the psutil library for other information, I already have access to Spotify's PID.
Any idea how I could do that?
And in case my method is the only one you can think of, any idea how to actually make it work?
Your help will be appreciated.
The Spotify client on Linux implements a D-Bus interface called MPRIS - Media Player Remote Interfacing Specification.
http://specifications.freedesktop.org/mpris-spec/latest/index.html
You could access the title (and other metadata) from python like this:
import dbus

session_bus = dbus.SessionBus()
spotify_bus = session_bus.get_object("org.mpris.MediaPlayer2.spotify",
                                     "/org/mpris/MediaPlayer2")
spotify_properties = dbus.Interface(spotify_bus,
                                    "org.freedesktop.DBus.Properties")
metadata = spotify_properties.Get("org.mpris.MediaPlayer2.Player", "Metadata")

# The property Metadata behaves like a python dict
for key, value in metadata.items():
    print(key, value)

# To just print the title
print(metadata['xesam:title'])
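To also print the artist, the snippet can be extended like this (in the MPRIS spec, 'xesam:artist' holds a list of artist names):

# 'xesam:artist' is a list; join the names for display
artist = ", ".join(metadata['xesam:artist'])
print("{} - {}".format(artist, metadata['xesam:title']))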
For Windows:
The library can be found on GitHub: https://github.com/XanderMJ/spotilib. Keep in mind that this is still a work in progress.
Just copy the file and place it in your Python/Lib directory.
import spotilib
spotilib.artist() #returns the artist of the current playing song
spotilib.song() #returns the song title of the current playing song
spotilib.artist() returns only the first artist. I started working on another library, spotimeta.py, to solve this issue. However, it is not working 100% yet.
import spotimeta
spotimeta.artists() #returns a list of all the collaborating artists of the track
If an error occurs, spotimeta.artists() will return only the first artist (found with spotilib.artist()).
I wonder if it's possible to create and delete events with the Google API in a secondary calendar. I know how to do it in the main calendar, so I'm only asking how to change calendar_service to read from and write to another calendar.
I've tried logging in with the secondary calendar's email address, but that's not possible; it fails with a BadAuthentication error. The URL was surely correct, because the API could read it.
Waiting for your help.
I'm answering my own question so I can finally accept an answer. The problem was solved some time ago.
The most important answer is in this documentation.
Each query can be run with a uri as an argument, for example InsertEvent(event, uri). The uri can be set manually (from the Google Calendar settings) or built automatically, as written in the post below. Note that CalendarEventQuery takes only the username, not the whole URL.
The construction of both goes like this:
user = "abcd1234@group.calendar.google.com"
uri = "http://www.google.com/calendar/feeds/%s/private/full-noattendees" % user
What's useful is that you can run queries with different uris and add or delete events in many different calendars from one script.
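As an illustration, here is a rough sketch of inserting one event into two calendars by varying the uri (based on the classic gdata client used at the time; the secondary calendar ID is a placeholder and the snippet is untested):

import atom
import gdata.calendar
import gdata.calendar.service

cal_client = gdata.calendar.service.CalendarService()
# ... authenticate cal_client here ...

event = gdata.calendar.CalendarEventEntry()
event.title = atom.Title(text='Example event')

# 'default' is the main calendar; the second entry is a secondary one
for user in ('default', 'abcd1234@group.calendar.google.com'):
    uri = 'http://www.google.com/calendar/feeds/%s/private/full' % user
    cal_client.InsertEvent(event, uri)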
Hope someone finds it helpful.
I had the same issue, but I found this solution (I don't remember where).
The solution is to extract the secondary calendar's user from the src URL provided by Google.
This is probably not the best way, but it's a working one.
Note: the code is extracted from a real project (some parts have been removed) and must be adapted to your particular case; it is provided as a sample just to support the explanation (it will not work as is).
import gdata.calendar.service

# 1 - Connect using the main user email address in the classical way
cal_client = gdata.calendar.service.CalendarService()
# insert your connection/authentication stuff here

# 2 - Go through the existing calendars
feed = cal_client.GetAllCalendarsFeed()

# A loop to find the calendar by its title (cal_title)
for a_calendar in feed.entry:
    if cal_title in a_calendar.title.text:
        cal_user = a_calendar.content.src.split('/')[5].replace('%40', '@')
        # If you print a_calendar.content.src.split('/') you will see that
        # the url contains an email-like definition. This is the one to use
        # to work with the calendar
Then you just have to replace the default user with cal_user in the API calls to work on the secondary calendar.
The replace call is required because the Google API functions do internal conversions on special characters like '%'.
I hope this will help you.