Please help me fix this. This is the code I've already tried; I really appreciate your help.
import urllib.request
import re
search_keyword="ill%20wiat"
html = urllib.request.urlopen("https://www.youtube.com/results?search_query=" + search_keyword)
video_ids = re.findall(r"watch?v=(\S{11})", html.read().decode())
print("https://www.youtube.com/watch?v=" + video_ids[0])
First of all, check the page you are trying to parse. You wrote:
r"watch?v=(\S{11})"
Just remember that the ? character here is parsed as a regex operator, not the literal character you want, so you need to write it like this:
r"/watch[?]v=(\S{11})"
so your regex is parsed properly.
Second: it is good practice to print your list to see what you actually got, and to iterate over the list with a for loop instead of accessing index [0] directly.
In your case you get this error precisely because your list of IDs is empty.
The following code works for me:
import urllib.request
import re

search_keyword = "ill%20wiat"
url = "https://www.youtube.com/results?search_query=" + search_keyword
with urllib.request.urlopen(url) as response:
    video_ids = re.findall(r"/watch[?]v=(\S{11})", response.read().decode())

for video in video_ids:
    print("https://www.youtube.com/watch?v=" + video)
P.S. Don't wrap your code in a try/except just to swallow errors like this one; let them surface so you can see what went wrong.
urllib won't give you the data here; use requests instead:
import requests

html = requests.get('https://www.youtube.com/results?search_query=' + search_keyword)
text = html.text
text holds all the HTML data, so search within text.
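To illustrate the search step offline, here is a small sketch; the HTML below is a made-up stand-in, since the real results page is largely rendered by JavaScript and its markup changes:

```python
import re

# Made-up sample standing in for html.text; the real page layout differs.
sample = '<a href="/watch?v=dQw4w9WgXcQ">x</a><a href="/watch?v=abcdefghijk">y</a>'

# The escaped pattern: [?] matches a literal question mark.
video_ids = re.findall(r"/watch[?]v=(\S{11})", sample)
for vid in video_ids:
    print("https://www.youtube.com/watch?v=" + vid)
```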
I want to print only the "words" that start with "/watch" from the string, and then add all the '/watch...' entries to a list. Thanks in advance!
# Take a random video from my youtube recommended and add it to watch2gether
import requests
from bs4 import BeautifulSoup as BS
import time
import random
# Importing libraries
num = random.randint(1, 20)
recommended = requests.get('https://www.youtube.com/results?search_query=svenska+youtube+klassiker&sp=EgIQAQ%253D%253D')
recommended_soup = BS(recommended.content, features='lxml')
recommended_vid = recommended_soup.find_all('a', href=True)
for links in recommended_vid:
    print(links['href'])
Output:
/
//www.youtube.com/upload
/
/feed/trending
/feed/history
/premium
/channel/UC-9-kyTW8ZkZNDHQJ6FgpwQ
/channel/UCEgdi0XIXXZ-qJOFPf4JSKw
/gaming
/feed/guide_builder
/watch?v=PbVt_O1kFpA
/watch?v=PbVt_O1kFpA
/user/thedjdoge
/watch?v=1lcksCjvuSs
/watch?v=1lcksCjvuSs
/channel/UCn-puiDqHNMhRvq6wsU3nsQ
/watch?v=AKj_pxp2l1c
/watch?v=AKj_pxp2l1c
/watch?v=QNnEqTQD6DM
/watch?v=QNnEqTQD6DM
/channel/UCDuOAYzgiZzqqlXd2G3GAwg
....
Maybe I can use something like .remove or .replace; I don't know what to do, so I appreciate all help.
Yeah, re is definitely overkill here; this is a perfect use case for filter:
a_list = ["/watch/blah", "not/watch"]
new_list = filter(lambda x: x.startswith("/watch"), a_list)
print(list(new_list))
['/watch/blah']
Just be aware that filter returns a lazy iterator (a filter object), so wrap it in list() if you want an actual list.
http://book.pythontips.com/en/latest/map_filter.html is good if you want more information on functions that do this kind of data cleaning. If you need to get really fancy with your data cleaning look into using pandas. It has a steep learning curve, but it's fantastic for complicated data cleaning.
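An equivalent list comprehension gives you the list directly, with no wrap-in-list step:

```python
a_list = ["/watch/blah", "not/watch"]

# Keep only the items that start with "/watch".
new_list = [x for x in a_list if x.startswith("/watch")]
print(new_list)  # ['/watch/blah']
```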
You can do the following:
for links in recommended_vid:
    if "/watch" in links['href']:
        print(links['href'])
This should help you find all the /watch links.
import re

pattern = re.compile(r"/watch")
# pattern = re.compile(r"/watch\?v=[a-zA-Z_0-9]{11}")  # this pattern finds the full video links as well
matches = pattern.finditer(<your_string>)
for m in matches:
    print(m)  # prints every position at which /watch occurs
You can collect all the URLs in a list and proceed. Good luck!
Looking at your code, a simple if statement with str.startswith() should suffice to get what you want.
Assuming the links['href'] contains a str, then:
for links in recommended_vid:
    href = links['href']  # 'href' should be of type 'str'
    if href.startswith('/watch'):
        print(href)
Note: .startswith() will only work if /watch is really at the start of the href; you could also try if '/watch' in href:, which will match if that string appears anywhere in href.
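A one-line illustration of that difference, with a made-up href:

```python
href = '/feed?next=/watch'  # hypothetical href where '/watch' is not at the start

print(href.startswith('/watch'))  # False: '/watch' is not at position 0
print('/watch' in href)           # True: the substring appears later
```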
So I'm new to Python and am working on a simple program that will read a text file of protein names (PDB IDs) and create a URL to search a database (the PDB) for that protein and some associated data.
Unfortunately, as a newbie, I forgot to save my script, so I can't recall what I did to make my code work!
Below is my code so far:
import urllib
import urllib.parse
import urllib.request
import os
os.chdir("C:\\PythonProjects\\Samudrala Lab Projects")
protein_file = open("protein_list.txt","r")
protein_list = protein_file.read()
for item in protein_list:
    item = item[0:4]
    query_string = urlencode('customReportColumns','averageBFactor','resolution','experimentalTechnique','service=wsfile','format=csv')
    **final_URL = url + '?pdbid={}{}'.format(url, item, query_string)**
    print(final_URL)
The line of code I'm stuck on is starred.
The object "final_url" within the loop is missing some modification to indicate that I'd like the URL to search for the item as a pdbid. Can anyone give me a hint as to how I can tell the URL to plug in each item on the list as a PDBID?
I'm getting a type error indicating that it's not a valid non-string sequence or mapping object. Original post was edited to add this info.
Please let me know if this is an unclear question, or if you need any additional info.
Thanks!
How about something like this?
final_URL = "{}?pdbids={}{}".format(url, item, query_string)
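A fuller sketch of the loop, under assumptions: the base URL below is illustrative (the real PDB service may expect different parameter names), and a small list stands in for the file's lines. urllib.parse.urlencode builds and escapes the query string for you:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; substitute the real PDB service URL.
url = "http://www.rcsb.org/pdb/rest/customReport"

# Stand-in for protein_file.readlines(); each line holds one PDB ID.
protein_list = ["1ABC\n", "2XYZ\n"]
for item in protein_list:
    pdb_id = item.strip()[0:4]  # first four characters are the PDB ID
    query_string = urlencode({
        'pdbids': pdb_id,
        'customReportColumns': 'averageBFactor,resolution,experimentalTechnique',
        'service': 'wsfile',
        'format': 'csv',
    })
    final_URL = "{}?{}".format(url, query_string)
    print(final_URL)
```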
I've been trying to make an application in Python, and I'm new to it.
What I actually want to do is this: I want feedparser to read the values from a website's RSS feed, say Reddit's, then turn that output into a string and pass the value on to the rest of my code. My code right now:
import feedparser
import webbrowser
feed = feedparser.parse('http://www.reddit.com/.rss')
print feed['entries'][1]['title']
print feed['entries'][1]['link']
It is working right now: it parses the feed and I get the output I want. Now I want to take the "link" from print feed['entries'][1]['link'] and use it further in the code.
How can I do so? To be more specific, I want to open that URL in my browser.
I came up with something like this:
import feedparser
import webbrowser
feed = feedparser.parse('http://www.reddit.com/.rss')
print feed['entries'][1]['title']
print feed['entries'][1]['link']
mystring = 'feed['entries'][1]['link']'
webbrowser.open('mystring')
It is of course not working. Please help! If you need to know anything else, please let me know.
This is Reddit specific so it won't work on other RSS feeds but I thought this might help you.
from __future__ import print_function
import praw
r = praw.Reddit("my_cool_user_agent")
submissions = r.get_front_page()
for x in submissions:
    print("Title: {0} URL: {1} Permalink: {2}".format(x, x.url, x.permalink))
    print("------------------------------------------------------------")
For Reddit there are 2 URLs that you might be interested in: the actual link that is submitted (the 'external' link... think imgur, etc) and the permalink to the Reddit post itself.
Instead of passing feed['entries'][1]['link'] as a quoted string, just pass the value itself to webbrowser.
Example -
webbrowser.open(feed['entries'][1]['link'])
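To make the variable-versus-string-literal point concrete, here is a minimal sketch with a plain dict standing in for feedparser's result (real entries carry many more keys):

```python
import webbrowser

# Stand-in for feedparser.parse(...): a dict shaped like the parsed feed.
feed = {'entries': [{'title': 'first', 'link': 'http://www.reddit.com/a'},
                    {'title': 'second', 'link': 'http://www.reddit.com/b'}]}

# The value itself, with no quotes around the expression.
link = feed['entries'][1]['link']
print(link)  # http://www.reddit.com/b
# webbrowser.open(link)  # would open that URL; commented out in this sketch
```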
I wrote a script that parses a webpage and counts the links ('a' tags) on it:
import urllib
import lxml.html

connection = urllib.urlopen('http://test.com')
dom = lxml.html.fromstring(connection.read())
for link in dom.xpath('//a/@href'):
    print link
The output of the script:
./01.html
./52.html
./801.html
http://www.blablabla.com/1.html
#top
How can I convert it to a list to count the number of links? I used link.split(), but it gave me:
['./01.html']
['./52.html']
['./801.html']
['http://www.blablabla.com/1.html']
['#top']
But I want to get:
[./01.html, ./52.html, ./801.html, http://www.blablabla.com/1.html, #top]
Thanks!
link.split() tries to split link itself, but you need to work with the entity that represents all of the links; in your case that is dom.xpath('//a/@href').
So this should help you:
links = list(dom.xpath('//a/@href'))
And get the length with the built-in len function:
print len(links)
list(dom.xpath('//a/@href'))
This takes the iterator that dom.xpath returns and puts every item into a list.
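If lxml is not available, the same collect-then-count idea can be sketched with the standard library's html.parser (Python 3 syntax; the sample HTML here is made up):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the value of every href attribute on an <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

html = '<a href="./01.html">1</a><a href="#top">top</a>'
parser = LinkCollector()
parser.feed(html)
print(parser.links)       # one list with every href
print(len(parser.links))  # the count of links
```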
I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:
http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
How could I slice out:
http://www.domainname.com/page?CONTENT_ITEM_ID=1234
Sometimes there are more than two parameters after CONTENT_ITEM_ID, and the ID is different each time. I'm thinking it can be done by finding the first & and keeping only the characters before it, but I'm not quite sure how to do this.
Cheers
Use the urlparse module. Check this function:
import urlparse

def process_url(url, keep_params=('CONTENT_ITEM_ID=',)):
    parsed = urlparse.urlsplit(url)
    filtered_query = '&'.join(
        qry_item
        for qry_item in parsed.query.split('&')
        if qry_item.startswith(keep_params))
    return urlparse.urlunsplit(parsed[:3] + (filtered_query,) + parsed[4:])
In your example:
>>> process_url(a)
'http://www.domainname.com/page?CONTENT_ITEM_ID=1234'
This function has the added bonus that it's easier to use if you decide that you also want some more query parameters, or if the order of the parameters is not fixed, as in:
>>> url='http://www.domainname.com/page?other_value=xx&param3&CONTENT_ITEM_ID=1234&param1'
>>> process_url(url, ('CONTENT_ITEM_ID', 'other_value'))
'http://www.domainname.com/page?other_value=xx&CONTENT_ITEM_ID=1234'
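For reference, a sketch of the same function for Python 3, where urlparse's helpers live in urllib.parse:

```python
from urllib.parse import urlsplit, urlunsplit

def process_url(url, keep_params=('CONTENT_ITEM_ID=',)):
    # Split the URL into (scheme, netloc, path, query, fragment).
    parsed = urlsplit(url)
    # Keep only the query items whose names we asked for.
    filtered_query = '&'.join(
        qry_item
        for qry_item in parsed.query.split('&')
        if qry_item.startswith(keep_params))
    # Reassemble the URL with the filtered query string.
    return urlunsplit(parsed[:3] + (filtered_query,) + parsed[4:])

url = 'http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3'
print(process_url(url))  # http://www.domainname.com/page?CONTENT_ITEM_ID=1234
```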
The quick and dirty solution is this:
>>> "http://something.com/page?CONTENT_ITEM_ID=1234&param3".split("&")[0]
'http://something.com/page?CONTENT_ITEM_ID=1234'
Another option is to use the split function with & as the separator. That way you extract both the base URL and the parameters:
url.split("&")
returns a list with
['http://www.domainname.com/page?CONTENT_ITEM_ID=1234', 'param2', 'param3']
I figured it out; below is what I needed to do:
url = "http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3"
url = url[: url.find("&")]
print url
'http://www.domainname.com/page?CONTENT_ITEM_ID=1234'
Parsing URLs is never as simple as it seems, which is why the urlparse and urllib modules exist.
E.g.:
import urllib
url = "http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3"
query = urllib.splitquery(url)
result = "?".join((query[0], query[1].split("&")[0]))
print result
'http://www.domainname.com/page?CONTENT_ITEM_ID=1234'
This is still not 100% reliable, but much more so than splitting it yourself, because there are plenty of valid URL formats that you and I don't know about and only discover one day in the error logs.
import re
url = 'http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3'
m = re.search('(.*?)&', url)
print m.group(1)
Look at the urllib2 file name question for some discussion of this topic.
Also see the "Python Find Question" question.
This method isn't dependent on the position of the parameter within the url string. This could be refined, I'm sure, but it gets the point across.
url = 'http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3'
parts = url.split('?')
# keep only name=value pairs; bare flags such as param2 have no '=' to split on
id = dict(i.split('=') for i in parts[1].split('&') if '=' in i)['CONTENT_ITEM_ID']
new_url = parts[0] + '?CONTENT_ITEM_ID=' + id
An ancient question, but still, I'd like to remark that query-string parameters can also be separated by ';', not only '&'.
Besides urlparse there is also furl, which IMHO has a better API.
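On the ';' separator point: urllib.parse.parse_qs splits on a single separator, and recent Python versions (3.9.2+/3.10+) let you pick it with the separator argument. A small sketch:

```python
from urllib.parse import parse_qs

# Default separator is '&'.
print(parse_qs('a=1&b=2'))                 # {'a': ['1'], 'b': ['2']}
# Semicolon-separated query strings need an explicit separator.
print(parse_qs('a=1;b=2', separator=';'))  # {'a': ['1'], 'b': ['2']}
```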