Connecting to a part of a website depending on input (Python)

I'm new to Python and wondering if there is a way for it to open a webpage depending on what's been input. E.g.
market = input("market")
ticker = input("ticker")
would take you to this part of the website:
https://www.tradingview.com/symbols/'market'-'ticker'/technicals
Thanks

It looks like you were pretty much there. In Python you can use the + sign to concatenate strings, and then open that link with the webbrowser library:
import webbrowser

market = input("market")
ticker = input("ticker")
webbrowser.open('https://www.tradingview.com/symbols/' + market + '-' + ticker + '/technicals')

It's cleaner to use an f-string, like this:
import webbrowser

market = input("market")
ticker = input("ticker")
webbrowser.open(f'https://www.tradingview.com/symbols/{market}-{ticker}/technicals')
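If the market or ticker input could contain spaces or other characters that aren't URL-safe, quoting it first keeps the URL valid. A minimal sketch using urllib.parse.quote from the standard library (the prompts are unchanged):
import webbrowser
from urllib.parse import quote

market = input("market")
ticker = input("ticker")
# quote() percent-encodes anything that isn't safe in a URL path segment
webbrowser.open(f'https://www.tradingview.com/symbols/{quote(market)}-{quote(ticker)}/technicals')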

Related

How can I convert docx to doc using Python?

How can I convert file.docx to file.doc using Python? I have code that outputs a file in docx format, but this program is for someone who can only use Word 2003, so I need to convert that file to .doc using Python. How can I do it? Thank you in advance.
It's a bit clunky, but you could use pywinauto to programmatically open your .docx documents in Word and then Save As .doc. It would be using Word to do the conversion, so it should be as clean as you can get.
This is a snippet of what I've used for converting to PDF within Word (it was just a test). You'd have to follow the keystrokes necessary to save as a .doc instead.
import pywinauto
from pywinauto.application import Application

app1 = Application(backend="uia").connect(path="C:\\Program Files (x86)\\Microsoft Office\\root\\Office16\\WINWORD.EXE")
wordhndl = app1.top_window()
wordhndl.type_keys('^o')
wordhndl.type_keys('%f')
wordhndl.type_keys('^o')
wordhndl.type_keys('^o')
# Now that we're in a sub-window, using the top_window() handle doesn't work...
# Instead refer to it absolutely (using friendly_class_name())
app1.Dialog.Open.type_keys("Y:\\996.Software\\04.Python\\Test\\SampleDoc1.docx")
app1.Dialog.Open.type_keys('~')
# Publish it to PDF
app1.SampleDoc1docx.type_keys('%f')
app1.SampleDoc1docx.type_keys('e')
app1.SampleDoc1docx.type_keys('p')
app1.SampleDoc1docx.type_keys('a')
app1.SampleDoc1docx.PublishasPDForXPS.Publish.type_keys('~')
# Deal with popups & prompts
if app1.Dialog.PublishasPDForXPS.ConfirmSaveAs.exists():
    app1.Dialog.PublishasPDForXPS.ConfirmSaveAs.Yes.click()  # This line can take some time...
I think the .doc keystrokes would be (not tested)
app1.SampleDoc1docx.type_keys('%f')
app1.SampleDoc1docx.type_keys('a')
app1.SampleDoc1docx.type_keys('y')
app1.SampleDoc1docx.type_keys('4')
app1.SampleDoc1docx.type_keys('{DOWN}')
app1.SampleDoc1docx.type_keys('{DOWN}')
app1.SampleDoc1docx.type_keys('~')
app1.SampleDoc1docx.type_keys('{RIGHT}')
app1.SampleDoc1docx.type_keys('~')
But... the better solution is to use Word itself. I've used VBA within Word to do this exact thing before. I don't have the code to hand, but a good pointer would be:
https://www.datanumen.com/blogs/3-quick-ways-to-batch-convert-word-doc-to-docx-files-and-vice-versa/
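If Word is installed, the same conversion can also be driven from Python through COM automation rather than keystrokes. A minimal sketch, assuming the pywin32 package is available and using placeholder file paths:
import win32com.client

word = win32com.client.Dispatch("Word.Application")
word.Visible = False
doc = word.Documents.Open(r"C:\path\to\file.docx")  # placeholder path
doc.SaveAs(r"C:\path\to\file.doc", 0)               # 0 == wdFormatDocument97 (legacy .doc)
doc.Close()
word.Quit()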
You can also do something along the lines of:
import docx
doc = docx.Document("myWordxFile.docx")
doc.save('myNewWordFile.doc')
Be aware, though, that python-docx always writes docx-format content, so saving with a .doc extension does not produce a true Word 97-2003 file. Check this too. Good luck!

Python urllib module TypeError

I'm trying to get into CTFs and I found a cool website meant for practicing some web-based CTF skills, called ctf.slothparadise.com. I've managed to get 4 of the flags, but two of them are giving me the finger, and sadly I've had to dust off the good ol' Python skills.
import urllib.error
import urllib.request
import urllib.parse
import urllib
import sys

while True:
    about_page = urllib.request.urlopen("http://ctf.slothparadise.com/about.php").read()
    if "KEY" in about_page:
        print(about_page)
        sys.exit(0)
ctf.slothparadise.com/about.php is the page I'm programming against, and it spits out the key in the source code for every 1000th visitor. Instead of being a moron and refreshing it until 1000, I wrote that code in the hope it would keep opening the page until the phrase "KEY" appeared in the page's source code.
I'm getting this: TypeError: 'str' does not support the buffer interface
From what I know about TypeErrors, I'm guessing that I may have "KEY" in the wrong format, perhaps? I'm not really sure. I also may not even be using the right modules, but the old urllib2 module I would typically use for this got split up into different modules, so I'm learning as I go with these new ones.
Any help is appreciated in fixing this issue; also, if my interpretation of TypeErrors is wrong, feel free to correct me.
The object returned by urlopen() can be used as a context manager, and you are not using it that way.
Try something like this:
import sys
import urllib.request

while True:
    with urllib.request.urlopen('http://ctf.slothparadise.com/about.php') as response:
        html = response.read()
        if b"KEY" in html:
            print(html)
            sys.exit(0)
urllib.request.urlopen returns an http.client.HTTPResponse object, and that object's read() returns a bytes object (encoded text). How to decode it may be given in the returned HTTP header or, in your case, embedded in an HTML meta tag. You likely don't want to parse the HTML for this particular test, so just look for the bytes object b'KEY'.
I don't know what you want to do with the data next, but if you want it to print nicely or you want to scan the HTML, then you will have to do some parsing.
import urllib.error
import urllib.request
import urllib.parse
import urllib
import sys

while True:
    about_page = urllib.request.urlopen("http://ctf.slothparadise.com/about.php").read()
    if b"KEY" in about_page:
        print(about_page)
        sys.exit(0)
Make about_page a string with
about_page=str(urllib.request.urlopen("http://ctf.slothparadise.com/about.php").read())
This should make your code work. Hope this helps!!
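Note that str() on a bytes object gives you its repr (a string beginning with b'...'), which happens to be enough for a substring check. Decoding is usually cleaner; a minimal sketch, assuming the page is UTF-8 encoded:
import urllib.request

raw = urllib.request.urlopen("http://ctf.slothparadise.com/about.php").read()
about_page = raw.decode("utf-8", errors="replace")  # bytes -> str; the charset is an assumption
if "KEY" in about_page:
    print(about_page)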

Import Skin Weight Maps (Python) - (Maya)

As mentioned in this post, I would like to import a skin weight map (a .weightMap file) into a scene without having to open a dialog box. Trying to reverse-engineer the script mentioned in the reply didn't get me anywhere.
When I do it manually through Maya's UI, the script history shows...
ImportSkinWeightMaps;
...as the command. But my searches on this keep leading me to the deformerWeights command.
The thing is, there is no example in the documentation of how to correctly write the syntax. Writing the flags and the path through trial and error didn't work out, and additional searches keep hinting that I need to use an .xml file for some reason, when all I want to do is import a .weightMap file.
I even ended up looking at weight-importer scripts on highend3d.com in hopes of seeing what a proper importing syntax should look like.
All I need is the correct syntax (or command) for something like:
mel.eval("ImportSkinWeightMaps;")
or
cmds.deformerWeights (p = "path to my .weightMap file", im=True, )
or
from pymel.core import *
pymel.core.runtime.ImportSkinWeightMaps ( 'targetOject', 'path to .weightMap file' )
Any help would be greatly appreciated.
Thanks!
Why not use cmds.skinPercent? It is more reliable.
http://tech-artists.org/forum/showthread.php?5490-Faster-way-to-find-number-of-influences-Maya&p=27598#post27598
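For reference, a minimal sketch of setting and querying weights with cmds.skinPercent (the skin cluster, mesh, joint names and weight values are placeholders):
import maya.cmds as cmds

# set explicit weights on a single vertex
cmds.skinPercent('skinCluster1', 'pSphere1.vtx[42]',
                 transformValue=[('joint1', 0.7), ('joint2', 0.3)])

# read the weights back for the same vertex
print(cmds.skinPercent('skinCluster1', 'pSphere1.vtx[42]', query=True, value=True))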

How to write a python script for downloading?

I want to download some files from this site: http://www.emuparadise.me/soundtracks/highquality/index.php
But I only want to get certain ones.
Is there a way to write a Python script to do this? I have intermediate knowledge of Python.
I'm just looking for a bit of guidance; please point me towards a wiki or library to accomplish this.
thanks,
Shrub
Here's a link to my code
I looked at the page. The links seem to redirect to another page where the file is hosted, and clicking that downloads the file.
I would use mechanize to follow the required links to the right page, and then use BeautifulSoup or lxml to parse the resulting page to get the filename.
Then it's a simple matter of opening the file using urlopen and writing its contents out into a local file, like so:
f = open(localFilePath, 'wb')   # binary mode, since the download may not be text
f.write(urlopen(remoteFilePath).read())
f.close()
Hope that helps
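A minimal sketch of that flow with BeautifulSoup (bs4); the URL, the '.zip' filter and the assumption that the download links are absolute are guesses about the target page:
from urllib.request import urlopen
from bs4 import BeautifulSoup

page = urlopen("http://www.emuparadise.me/soundtracks/highquality/index.php").read()
soup = BeautifulSoup(page, "html.parser")

# collect links that look like downloadable files (the extension is an assumption)
links = [a["href"] for a in soup.find_all("a", href=True) if a["href"].endswith(".zip")]

for url in links:
    filename = url.rsplit("/", 1)[-1]        # text after the last slash
    with open(filename, "wb") as f:
        f.write(urlopen(url).read())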
Make a URL request for the page. Once you have the source, filter it and get the URLs.
The files you want to download are URLs that contain a specific extension. With this you can do a regular-expression search for all URLs that match your criteria.
After filtering, make a URL request for each matched URL's data and write it out to a file.
Sample code:
#!/usr/bin/python
import re
import sys
import urllib

# Your sample url
sampleUrl = "http://stackoverflow.com"
urlAddInfo = urllib.urlopen(sampleUrl)
data = urlAddInfo.read()

# Sample extensions we'll be looking for: pngs and pdfs
TARGET_EXTENSIONS = "(png|pdf)"
targetCompile = re.compile(TARGET_EXTENSIONS, re.UNICODE | re.MULTILINE)

# Let's get all the urls: match criteria{no spaces or " in a url}
urls = re.findall('(https?://[^\s"]+)', data, re.UNICODE | re.MULTILINE)

# We want these folks
extensionMatches = filter(lambda url: url and targetCompile.search(url), urls)

# The rest of the unmatched urls, for which the scraping can also be repeated.
nonExtMatches = filter(lambda url: url and not targetCompile.search(url), urls)


def fileDl(targetUrl):
    # Function to handle downloading of files.
    # Arg: url => a String
    # Output: Boolean to signify if the file has been written out
    # Validation of the url assumed, for the sake of keeping the illustration short
    urlAddInfo = urllib.urlopen(targetUrl)
    data = urlAddInfo.read()
    fileNameSearch = re.search("([^\/\s]+)$", targetUrl)  # Text after the last slash '/'
    if not fileNameSearch:
        sys.stderr.write("Could not extract a filename from url '%s'\n" % (targetUrl))
        return False
    fileName = fileNameSearch.groups(1)[0]
    with open(fileName, "wb") as f:
        f.write(data)
        sys.stderr.write("Wrote %s to disk\n" % (fileName))
        return True


# Let's now download the matched files
dlResults = map(lambda fUrl: fileDl(fUrl), extensionMatches)
successfulDls = filter(lambda s: s, dlResults)
sys.stderr.write("Downloaded %d files from %s\n" % (len(successfulDls), sampleUrl))

# You can organize the above code into a function to repeat the process for each of the
# other urls and in that way you can make a crawler.
The above code is written mainly for Python 2.x. However, I wrote a crawler that works on any version starting from 2.x.
Why yes! 5 years later and not only is this possible, but you've now got a lot of ways to do it.
I'm going to keep code to short sketches here, because I mainly want to help break your problem into segments and give you some options for exploration:
Segment 1: GET!
If you must stick to the stdlib, for either Python 2 or Python 3, urllib[n]* is what you're going to want to use to pull something down from the internet.
So again, if you don't want dependencies on other packages:
urllib or urllib2 or maybe another urllib[n] I'm forgetting about.
If you don't have to restrict your imports to the Standard Library:
you're in luck!!!!! You've got:
requests, with docs here. requests is the gold standard for gettin' stuff off the web with Python. I suggest you use it (see the short sketch after this list).
uplink, with docs here. It's relatively new & for more programmatic client interfaces.
aiohttp via asyncio, with docs here. asyncio was only included in Python >= 3.5, and it's also extra confusing. That said, if you're willing to put in the time it can be ridiculously efficient for exactly this use-case.
...I'd also be remiss not to mention one of my favorite tools for crawling:
fake_useragent, repo here. Docs, like, seriously not necessary.
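A minimal sketch of the GET step with requests (the URL is the one from the question):
import requests

resp = requests.get("http://www.emuparadise.me/soundtracks/highquality/index.php")
resp.raise_for_status()   # fail loudly on HTTP errors
html = resp.text          # decoded page body, ready for parsing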
Segment 2: Parse!
So again, if you must stick to the stdlib and not install anything with pip, you get to use the extra-extra fun and secure (<== extreme sarcasm) xml builtin module. Specifically, you get to use:
xml.etree.ElementTree(), with docs here.
It's worth noting that the ElementTree API is what the pip-installable lxml package is built on, and lxml makes it easier to use. If you want to recreate the wheel and write a bunch of your own complicated logic, using the default xml module is your option.
If you don't have to restrict your imports to the Standard Library:
lxml, with docs here. As I said before, lxml builds on the ElementTree API, makes it human-usable & implements all those parsing tools you'd need to make yourself. However, as you can see by visiting the docs, it's not easy to use by itself. This brings us to...
BeautifulSoup, aka bs4, with docs here. BeautifulSoup makes everything easier. It's my recommendation for this.
Segment 3: GET GET GET!
This section is nearly exactly the same as "Segment 1", except you have a bunch of links, not one.
The only thing that changes between this section and "Segment 1" is my recommendation for what to use: aiohttp will download way faster when dealing with several URLs, because it allows you to download them in parallel** (a minimal sketch follows the footnotes below).
* - (where n was decided on from Python version to Python version in a somewhat frustratingly arbitrary manner. Look up which urllib[n] has .urlopen() as a top-level function. You can read more about this naming-convention clusterf**k here, here, and here.)
** - (This isn't totally true. It's more sort-of functionally true at human timescales.)
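A minimal sketch of fetching several URLs in parallel with aiohttp (Python >= 3.7; the URL list is a placeholder):
import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.read()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, u) for u in urls))

urls = ["http://example.com/a.zip", "http://example.com/b.zip"]  # placeholders
pages = asyncio.run(fetch_all(urls))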
I would use a combination of wget for downloading (http://www.thegeekstuff.com/2009/09/the-ultimate-wget-download-guide-with-15-awesome-examples/#more-1885) and BeautifulSoup (http://www.crummy.com/software/BeautifulSoup/bs4/doc/) for parsing the downloaded file.

How to download a file using Python

I tried to download something from the Internet using Python. I am using urllib.retriever from the urllib module, but I just can't get it to work. I would like to be able to save the downloaded file to a location of my choice.
If someone could explain to me how to do it with clear examples, that would be VERY appreciated.
I suggest using urllib2, like so:
import urllib2

source = urllib2.urlopen("http://someUrl.com/somePage.html").read()
open("/path/to/someFile", "wb").write(source)
You could even shorten it to the following (although you wouldn't want to shorten it if you plan to wrap each individual call in a try/except):
open("/path/to/someFile", "wb").write(urllib2.urlopen("http://someUrl.com/somePage.html").read())
You can also use urllib (Python 3):
import urllib.request

source = urllib.request.urlopen("full_url").read()
and then use what chown used above:
open("/path/to/someFile", "wb").write(source)
