I have a script that checks the input link: if it matches one I specified in the code, it runs my own code; otherwise it opens the link in Chrome.
I want to register that script as a kind of default browser, to gain speed compared to opening the browser, grabbing the link with the help of an extension and then sending it to my script using POST.
I used Procmon to check where the process in question queries the registry, and it seems to check HKCU\Software\Classes\ChromeHTML\shell\open\command, so I added a key there and edited the command value to contain my script path and arguments (-- %1) (the -- is only here for testing purposes).
Unfortunately, once the program queries this to send a link, Windows prompts me to choose a browser instead of launching my script, which isn't what I want.
Any ideas?
In HKEY_CURRENT_USER\Software\Classes\ChromeHTML\Shell\open\command, replace the (Default) value with "C:\Users\samdra.r\AppData\Local\Programs\Python\Python39\pythonw.exe" "[Script_path_here]" %1
When launching a link, you'll be asked to set a default browser only once (it asks for a default browser after each change you make to the key):
I select Chrome in my case.
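The command value described above can be assembled like this. A minimal sketch: the paths below are examples only and will differ per machine; substitute your own pythonw.exe and script locations.

```python
# Example paths only; substitute your own pythonw.exe and script locations.
pythonw = r"C:\Users\you\AppData\Local\Programs\Python\Python39\pythonw.exe"
script = r"C:\scripts\NewAltDownload.pyw"

# %1 is replaced by Windows with the clicked link; quoting both paths
# keeps spaces in them from splitting the command line.
command = f'"{pythonw}" "{script}" %1'
print(command)
```

Writing that string into the (Default) value of the key is what makes Windows hand the clicked link to the script.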
As for the Python script, here it is:
import sys
import browser_cookie3
import requests
from bs4 import BeautifulSoup as BS
import re
import os
import asyncio
import shutil

def Prep_download(args):
    settings = os.path.abspath(__file__.split("NewAltDownload.pyw")[0] + '/settings.txt')
    if args[1] == "-d" or args[1] == "-disable":
        with open(settings, 'r+') as f:
            f.write(f.read() + "\n" + "False")
        print("Background program disabled, exiting...")
        exit()
    if args[1] == "-e" or args[1] == "-enable":
        with open(settings, 'r+') as f:
            f.write(f.read() + "\n" + "True")
    link = args[-1]
    with open(settings, 'r+') as f:
        try:
            data = f.read()
            osupath = data.split("\n")[0]
            state = data.split("\n")[1]
        except IndexError:
            f.write(f.read() + "\n" + "True")
            print("Possible first run, wrote True, exiting...")
            exit()
    if state == "True":
        asyncio.run(Download_map(osupath, link))

async def Download_map(osupath, link):
    # parentheses needed so the osu.ppy.sh check applies to both path forms
    if link.split("/")[2] == "osu.ppy.sh" and (link.split("/")[3] == "b" or link.split("/")[3] == "beatmapsets"):
        with requests.get(link) as r:
            link = r.url.split("#")[0]
        BMID = []
        id = re.sub("[^0-9]", "", link)
        for ids in os.listdir(os.path.abspath(osupath + "/Songs/")):
            if re.match(r"(^\d*)", ids).group(0).isdigit():
                BMID.append(re.match(r"(^\d*)", ids).group(0))
        if id in BMID:
            print(link + ": Map already exists")
            os.system('"' + os.path.abspath("C:/Program Files (x86)/Google/Chrome/Application/chrome.exe") + '" ' + link)
            return
        if not id.isdigit():
            print("Invalid id")
            return
        cj = browser_cookie3.load()
        print("Downloading", link, "in", os.path.abspath(osupath + "/Songs/"))
        headers = {"referer": link}
        with requests.get(link) as r:
            t = BS(r.text, 'html.parser').title.text.split("·")[0]
        with requests.get(link + "/download", stream=True, cookies=cj, headers=headers) as r:
            if r.status_code == 200:
                try:
                    id = re.sub("[^0-9]", "", link)
                    with open(os.path.abspath(__file__.split("NewAltDownload.pyw")[0] + id + " " + t + ".osz"), "wb") as otp:
                        otp.write(r.content)
                    shutil.copy(os.path.abspath(__file__.split("NewAltDownload.pyw")[0] + id + " " + t + ".osz"),
                                os.path.abspath(osupath + "/Songs/" + id + " " + t + ".osz"))
                except:
                    print("You either aren't connected on osu!'s website or you're limited by the API, in which case you now have to wait 1h and then try again.")
            else:
                os.system('"' + os.path.abspath("C:/Program Files (x86)/Google/Chrome/Application/chrome.exe") + '" ' + link)

args = sys.argv
if len(args) == 1:
    print("No arguments provided, exiting...")
    exit()
Prep_download(args)
You obtain the %1 argument (the link) with sys.argv[-1] (since sys.argv is a list), and from there you just check whether the link is similar to the one you're looking for (in my case it needs to look like https://osu.ppy.sh/b/ or https://osu.ppy.sh/beatmapsets/).
If that's the case, run your own code; otherwise just launch the Chrome executable with the link as argument. If the id of the beatmap is already found in the Songs folder, I also open the link in Chrome.
To make it work in the background I had to fight with subprocesses and even more tricks, and in the end it suddenly started working with pythonw and the .pyw extension.
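The dispatch idea described above can be sketched like this. A minimal sketch: the osu.ppy.sh prefixes come from the answer, while `dispatch()` returning a string is a stand-in for actually running the download code or launching the browser.

```python
def is_beatmap_link(link):
    # splitting on "/" puts the hostname at index 2 and the first
    # path segment at index 3, e.g. https://osu.ppy.sh/b/123
    parts = link.split("/")
    return (len(parts) > 3
            and parts[2] == "osu.ppy.sh"
            and parts[3] in ("b", "beatmapsets"))

def dispatch(link):
    if is_beatmap_link(link):
        return "download"   # stand-in for the custom download code
    return "browser"        # stand-in for opening the link in Chrome
```

Note the parenthesized `in ("b", "beatmapsets")` test: without grouping, a bare `a and b or c` chain would match non-osu links whose first path segment is "beatmapsets".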
I am trying to get the like count from someone's latest posts (and also get the Instagram link for that post) using Python, but I can't seem to figure it out. I have tried every method that is used online, but none of them seem to work anymore.
My idea was to let Python open a browser tab and go to the www.instagram.com/p/ link, but that doesn't seem to work anymore either.
I have no code to upload because it was all a big mess of different strategies, so I just deleted it and decided to start over.
Just grab an HTTPS proxy list and here you go; I wrote this a while back and it can easily be edited to your needs.
import requests, sys, time
from random import choice

if len(sys.argv) < 3:
    sys.exit(f"Usage: {sys.argv[0]} <Post Link> <Proxy List>")

Comments = 0
ProxList = []
with open(sys.argv[2], "r") as Prox:
    for line in Prox.readlines():
        ProxList.append(line.strip('\n'))

while True:
    try:
        g = requests.get(sys.argv[1], proxies={'https': 'https://' + choice(ProxList)})
        print(g.json())
        time.sleep(5)
        likes = g.json()['data']['shortcode_media']['edge_liked_by']['count']
        if likes > Comments:
            print(f"[+] Like | {g.json()['data']['shortcode_media']['edge_liked_by']['edges'][int(Comments)]['node']['username']}")
        if likes == Comments - 1:
            pass
        else:
            Comments += 1
        time.sleep(1.5)
    except KeyboardInterrupt:
        sys.exit("Done")
    except Exception as e:
        print(e)
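One detail worth calling out from the snippet above: requests expects the proxies dict to be keyed by scheme name ('http' / 'https'), not by 'https://'. A minimal sketch, with made-up proxy addresses:

```python
from random import choice

# Hypothetical proxy addresses; in the script above they come from a file.
prox_list = ["1.2.3.4:8080", "5.6.7.8:3128"]

def pick_proxies(prox_list):
    # requests keys this mapping by scheme name; the value is the proxy URL.
    return {"https": "https://" + choice(prox_list)}
```

With the wrong key ('https://'), requests silently ignores the entry and connects directly, which defeats the point of rotating proxies.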
Hello there, I want to code a Python program that opens a website. When you just type a shortcut, e.g. "google", it will open "https://www.google.de/". The problem is that it won't open the right URL.
import webbrowser

# URL list
google = "https://www.google.de"
ebay = "https://www.ebay.de/"

# shortcuts
Websites = ("google", "ebay")

def inputString():
    inputstr = input()
    if inputString(google) = ("https://www.google.de")
    else:
        print("Please look for the right shortcut.")
    return

url = inputString()
webbrowser.open(url)
Using your example, you can do:
import webbrowser

google = "https://www.google.de"
ebay = "https://www.ebay.de/"

def inputString():
    return input()

if inputString() == "google":
    url = google
    webbrowser.open(url)
Or you can do it the simple way, as @torxed said:
import webbrowser

inputstr = input()
sites = {'google': 'https://google.de', 'ebay': 'https://www.ebay.de/'}
if inputstr in sites:
    webbrowser.open(sites[inputstr])
How about:
import webbrowser
import sys

websites = {
    "google": "https://www.google.com",
    "ebay": "https://www.ebay.com"
}

if __name__ == "__main__":
    try:
        webbrowser.open(websites[sys.argv[1]])
    except (IndexError, KeyError):
        print("Please look for the right shortcut:")
        for website in websites:
            print(website)
Run it like so: python browse.py google
I've written this code to perform an iteration of downloads from a webpage which has multiple download links. Once a download link is clicked, the webpage produces a web form which has to be filled in and submitted for the download to start. When I run the code I face issues in the try/except block ("Error: Too broad exception clause") and, towards the end, an error associated with submit ("Error: method submit may be static"); both of these subsequently result in "SyntaxError: invalid syntax". Any suggestions/help will be much appreciated. Thank you.
import os
from selenium import webdriver

fp = webdriver.FirefoxProfile()
fp.set_preference("browser.download.folderList", 2)
fp.set_preference("browser.download.manager.showWhenStarting", False)
fp.set_preference("browser.download.dir", os.getcwd())
fp.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/x-msdos-program")
driver = webdriver.Firefox(firefox_profile=fp)
driver.get('http://def.com/catalog/attribute')

#This is to find the download links in the webpage one by one
i = 0
while i < 1:
    try:
        driver.find_element_by_xpath('//*[@title="xml (Open in a new window)"]').click()
    except:
        i = 1

#Once the download link is clicked this has to fill the form for submission which will download the file
class FormPage(object):
    def fill_form(self, data):
        driver.find_element_by_xpath('//input[@type = "radio" and @value = "Non-commercial"]').click()
        driver.find_element_by_xpath('//input[@type = "checkbox" and @value = "R&D"]').click()
        driver.find_element_by_xpath('//input[@name = "name_d"]').send_keys(data['name_d'])
        driver.find_element_by_xpath('//input[@name = "mail_d"]').send_keys(data['mail_d'])
        return self

    def submit(self):
        driver.find_element_by_xpath('//input[@value = "Submit"]').click()

data = {
    'name_d': 'abc',
    'mail_d': 'xyz@gmail.com',
}

FormPage().fill_form(data).submit()
driver.quit()
Actually you have two warnings and one error:
1 - "Too broad exception": this is a warning telling you that you should catch specific errors, not all of them. Your "except" line should be something like except [TheExceptionYouAreTreating]:, for example except ValueError:. However, this should not stop your code from running.
2 - "Error: method submit may be static": this is a warning telling you that the method submit is a static method (basically, a method that doesn't use the self attribute). To suppress this warning you can use the decorator @staticmethod, like this:
@staticmethod
def submit():
    ...
3 - "SyntaxError: invalid syntax": this is what is stopping your code from running. It is an error telling you that something is written wrong in your code. I think it may be the indentation of your class. Try this:
i = 0
while i < 1:
    try:
        driver.find_element_by_xpath('//*[@title="xml (Open in a new window)"]').click()
    except:
        i = 1

#Once the download link is clicked this has to fill the form for submission which will download the file
class FormPage(object):
    def fill_form(self, data):
        driver.find_element_by_xpath('//input[@type = "radio" and @value = "Non-commercial"]').click()
        driver.find_element_by_xpath('//input[@type = "checkbox" and @value = "R&D"]').click()
        driver.find_element_by_xpath('//input[@name = "name_d"]').send_keys(data['name_d'])
        driver.find_element_by_xpath('//input[@name = "mail_d"]').send_keys(data['mail_d'])
        return self

    def submit(self):
        driver.find_element_by_xpath('//input[@value = "Submit"]').click()

data = {
    'name_d': 'abc',
    'mail_d': 'xyz@gmail.com',
}

FormPage().fill_form(data).submit()
driver.quit()
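As an illustration of point 1 (catching only the exception you actually expect, instead of a bare except), here is a small self-contained sketch; parse_port is a hypothetical helper, not part of the code above:

```python
def parse_port(text):
    # Catch only the error we expect from int(); anything else propagates,
    # which is exactly what the "too broad exception" warning is about.
    try:
        return int(text)
    except ValueError:
        return None

print(parse_port("8080"))  # 8080
print(parse_port("oops"))  # None
```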
One more thing: those are really simple errors and warnings; you should be able to fix them yourself by carefully reading what the error has to say. I also recommend reading about exceptions.
Does anyone know how I would be able to take the URL as an argument in Python as page?
Just read a line of user input in the shell and pass it through as an argument, to make the script more portable?
import sys, re
import webpage_get

def print_links(page):
    ''' find all hyperlinks on a webpage passed in as input and print '''
    print '\n[*] print_links()'
    links = re.findall(r'(\http://\w+\.\w+[-_]*\.*\w+\.*?\w+\.*?\w+\.*[//]*\.*?\w+ [//]*?\w+[//]*?\w+)', page)
    # sort and print the links
    links.sort()
    print '[+]', str(len(links)), 'HyperLinks Found:'
    for link in links:
        print link

def main():
    # temp testing url argument
    sys.argv.append('http://www.4chan.org')
    # Check args
    if len(sys.argv) != 2:
        print '[-] Usage: webpage_getlinks URL'
        return
    # Get the web page
    page = webpage_get.wget(sys.argv[1])
    # Get the links
    print_links(page)

if __name__ == '__main__':
    main()
It looks like you've kind of got started with command line arguments, but just to give you an example for this specific situation you could do something like this:
def main(url):
    page = webpage_get.wget(url)
    print_links(page)

if __name__ == '__main__':
    url = ''
    if len(sys.argv) >= 2:
        url = sys.argv[1]
    main(url)
Then run it from the shell like this:
python test.py http://www.4chan.org
Here is a tutorial on command line arguments which may help your understanding more than this snippet: http://www.tutorialspoint.com/python/python_command_line_arguments.htm
Can you let me know if I misunderstood your question? I didn't feel too confident about its meaning after I read it.
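As an aside, the standard library's argparse module handles this kind of argument parsing (and generates the usage message for you). A minimal sketch; normally you would call parser.parse_args() with no arguments so it reads sys.argv, a list is passed here only to show the result:

```python
import argparse

parser = argparse.ArgumentParser(description="Print the hyperlinks found on a page")
parser.add_argument("url", help="URL of the page to fetch")

# Passing an explicit list instead of reading sys.argv, for demonstration.
args = parser.parse_args(["http://www.4chan.org"])
print(args.url)  # http://www.4chan.org
```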
My program takes a user input and searches it on a particular webpage. Further, I want it to go and click on a particular link and then download the file present there.
Example:
The webpage: http://www.rcsb.org/pdb/home/home.do
The search word: "1AW0"
After you search the word on the website it takes you to:
http://www.rcsb.org/pdb/explore/explore.do?structureId=1AW0
I want the program to go to the right-hand side of the webpage and download the pdb file from the DOWNLOAD FILES option.
I have managed to write a program using the mechanize module to automatically search the word, however I am unable to find a way to click on a link.
My code:
import urllib2
import re
import mechanize

br = mechanize.Browser()
br.open("http://www.rcsb.org/pdb/home/home.do")
## name of the form that holds the search text area
br.select_form("headerQueryForm")
## "q" is the name of the textarea in the html script
br["q"] = str("1AW0")
response = br.submit()
print response.read()
Any help or suggestions would be appreciated.
BTW, I am an intermediate programmer in Python, and I am trying to learn the Jython module to make this work.
Thanks in advance.
Here's how I would have done it:
'''
Created on Dec 9, 2012

@author: Daniel Ng
'''
import urllib

def fetch_structure(structureid, filetype='pdb'):
    download_url = 'http://www.rcsb.org/pdb/download/downloadFile.do?fileFormat=%s&compression=NO&structureId=%s'
    filetypes = ['pdb', 'cif', 'xml']
    if filetype not in filetypes:
        print "Invalid filetype...", filetype
    else:
        try:
            urllib.urlretrieve(download_url % (filetype, structureid), '%s.%s' % (structureid, filetype))
        except Exception, e:
            print "Download failed...", e
        else:
            print "Saved to", '%s.%s' % (structureid, filetype)

if __name__ == "__main__":
    fetch_structure('1AW0')
    fetch_structure('1AW0', filetype='xml')
    fetch_structure('1AW0', filetype='png')
Which provides this output:
Saved to 1AW0.pdb
Saved to 1AW0.xml
Invalid filetype... png
Along with the 2 files 1AW0.pdb and 1AW0.xml which are saved to the script directory (for this example).
http://docs.python.org/2/library/urllib.html#urllib.urlretrieve
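The answer above is Python 2; in Python 3, urlretrieve lives in urllib.request. A minimal sketch of the same URL construction (the endpoint is taken from the answer; only the formatting is shown here, no request is made):

```python
from urllib.request import urlretrieve  # Python 3 home of urlretrieve

# Endpoint taken from the answer above.
download_url = ('http://www.rcsb.org/pdb/download/downloadFile.do'
                '?fileFormat=%s&compression=NO&structureId=%s')

def build_url(structureid, filetype='pdb'):
    # The actual fetch would be: urlretrieve(build_url(...), local_filename)
    return download_url % (filetype, structureid)

print(build_url('1AW0'))
# http://www.rcsb.org/pdb/download/downloadFile.do?fileFormat=pdb&compression=NO&structureId=1AW0
```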