Read pptx file content from a url - python

I found this solution to read word file content from a url
from urllib.request import urlopen
from bs4 import BeautifulSoup
from io import BytesIO
from zipfile import ZipFile
file = urlopen(url).read()
file = BytesIO(file)
document = ZipFile(file)
content = document.read('word/document.xml')
word_obj = BeautifulSoup(content.decode('utf-8'))
text_document = word_obj.findAll('w:t')
for t in text_document:
    print(t.text)
Does anyone know a similar way to process pptx files? I have seen several solutions, but they read the file directly from disk, not from a URL.

I don't know if this helps, but with urllib you already have the content of the pptx in memory (the file variable). You can wrap it in an in-memory buffer (cStringIO.StringIO in Python 2, io.BytesIO in Python 3) and pass that to any function that expects a pptx file, to simulate a file.
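For example, since a pptx is also just a zip archive, the same ZipFile approach from the question should carry over to slides; a minimal sketch, assuming the slide text lives in a:t elements of ppt/slides/slideN.xml and that url points at a reachable .pptx (untested):
from urllib.request import urlopen
from io import BytesIO
from zipfile import ZipFile
from bs4 import BeautifulSoup
file = BytesIO(urlopen(url).read())  # url is assumed to point at a .pptx
document = ZipFile(file)
# slides are stored as ppt/slides/slide1.xml, slide2.xml, ... inside the archive
for name in document.namelist():
    if name.startswith('ppt/slides/slide') and name.endswith('.xml'):
        slide_obj = BeautifulSoup(document.read(name).decode('utf-8'))
        for t in slide_obj.findAll('a:t'):  # text runs in PresentationML
            print(t.text)
Alternatively, python-pptx's Presentation() accepts a file-like object, so Presentation(BytesIO(urlopen(url).read())) should also work if you want slide objects instead of raw XML.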

Related

Getting code from a .txt on a website and pasting it in a tempfile PYTHON

I was trying to make a script that gets a .txt from a website and pastes the code into an executable temporary Python file, but it's not working. Here is the code:
from urllib.request import urlopen as urlopen
import os
import subprocess
import os
import tempfile
filename = urlopen("https://randomsiteeeee.000webhostapp.com/script.txt")
temp = open(filename)
temp.close()
# Clean up the temporary file yourself
os.remove(filename)
temp = tempfile.TemporaryFile()
temp.close()
If you know a fix for this, please let me know. The error is:
File "test.py", line 9, in <module>
temp = open(filename)
TypeError: expected str, bytes or os.PathLike object, not HTTPResponse
I tried everything, such as making a request to the URL and pasting the result, but that didn't work either. I tried the code I pasted here and it didn't work either.
As I said, I was expecting it to get the code from the .txt on the website and turn it into a temporary executable Python script.
You are missing a read():
from urllib.request import urlopen as urlopen
import os
import subprocess
import os
import tempfile
filename = urlopen("https://randomsiteeeee.000webhostapp.com/script.txt").read() # <-- here
temp = open(filename)
temp.close()
# Clean up the temporary file yourself
os.remove(filename)
temp = tempfile.TemporaryFile()
temp.close()
But if the script.txt contains the script and not the filename, you need to create a temporary file and write the content:
from urllib.request import urlopen as urlopen
import os
import subprocess
import os
import tempfile
content = urlopen("https://randomsiteeeee.000webhostapp.com/script.txt").read()
with tempfile.TemporaryFile() as fp:
    name = fp.name
    fp.write(content)
If you want to execute the code you fetch from the url, you may also use exec or eval instead of writing a new script file.
eval and exec are EVIL, they should only be used if you 100% trust the input and there is no other way!
EDIT: How do I use exec?
Using exec, you could do something like this (note that I use requests instead of urllib here; if you prefer urllib, that works too):
import requests
exec(requests.get("https://randomsiteeeee.000webhostapp.com/script.txt").text)
You're trying to open a file whose name is "the content of a website".
filename = "path/to/my/output/file.txt"
httpresponse = urlopen("https://randomsiteeeee.000webhostapp.com/script.txt").read()
temp = open(filename, 'wb')  # open for writing in binary mode, since read() returns bytes
temp.write(httpresponse)
temp.close()
is probably more like what you intended.
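Putting the pieces together, here is a minimal sketch of what the question seems to be after: download the script, write it to a named temporary file, and run it with the current interpreter. Only do this with a source you fully trust, for the same reason as the exec warning above; the URL is the one from the question and may not exist:
import subprocess
import sys
import tempfile
from urllib.request import urlopen
content = urlopen("https://randomsiteeeee.000webhostapp.com/script.txt").read()  # bytes of the remote script
# NamedTemporaryFile gives a real path on disk; delete=False keeps it around after the with-block
with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as fp:
    fp.write(content)
    script_path = fp.name
subprocess.run([sys.executable, script_path])  # run it with the same Python interpreter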

Combine multiple HTML files into one html file Using Python

I have a task: I'm using Jupyter and I have to combine or merge multiple HTML files into one HTML file.
Any ideas how?
I did something similar with Excel files, but the same approach didn't work for HTML files:
import os
import pandas as pd
data_folder = 'C:\\Users\\hhhh\\Desktop\\test'
df = []
for file in os.listdir(data_folder):
    if file.endswith('.xlsx'):
        print('Loading file {0}...'.format(file))
        df.append(pd.read_excel(os.path.join(data_folder, file), sheet_name='sheet1'))
Sounds like a task for Beautiful Soup.
You would get anything inside the <body> tag of each HTML document, I assume, and then combine them.
Maybe something like:
import os
from bs4 import BeautifulSoup
output_doc = BeautifulSoup()
output_doc.append(output_doc.new_tag("html"))
output_doc.html.append(output_doc.new_tag("body"))
for file in os.listdir(data_folder):
    if not file.lower().endswith('.html'):
        continue
    # join with data_folder so this also works when the script runs elsewhere
    with open(os.path.join(data_folder, file), 'r') as html_file:
        output_doc.body.extend(BeautifulSoup(html_file.read(), "html.parser").body)
print(output_doc.prettify())
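If you need the result as a file rather than printed output, the merged document can be written back out (combined.html is just a placeholder name):
with open('combined.html', 'w', encoding='utf-8') as out_file:
    out_file.write(str(output_doc))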

Python: Reading fortran file from url

I would like to do the following in Python 3: read in a FortranFile, but from a URL rather than a local file. The reason is that for my concrete example there are a lot of files and I want to avoid having to download them all first.
I have managed to
a) read in a simple .txt file from a URL
import urllib
from urllib.request import urlopen
url = 'http://www.deus-consortium.org/deus-library/filelist/deus_file_list_501.txt'
data = urllib.request.urlopen(url)
i = 0
for line in data:  # the response object is iterable line by line
    print(i, line)
    i += 1
# alternative: data.read()
b) read in a local FortranFile (a binary, little-endian, unformatted Fortran file) like this:
The file is from: http://www.deus-consortium.org/deus-library/efiler1/Babel_le/boxlen648_n2048_lcdmw7/post/fof/output_00090/fof_boxlen648_n2048_lcdmw7_masst_00000
import numpy as np
from scipy.io import FortranFile
filename = '../../Downloads/fof_boxlen648_n2048_rpcdmw7_masst_00000'
ff = FortranFile(filename, 'r')
nhalos = ff.read_ints(dtype=np.int32)[0]  # first record holds the halo count
print('number of halos in file', nhalos)
Is there any way to avoid downloading, and instead read FortranFiles directly from the URL? I tried
import urllib
from urllib.request import urlopen
url='http://www.deus-consortium.org/deus-library/efiler1/Babel_le/boxlen648_n2048_lcdmw7/cube_00090/fof_boxlen648_n2048_lcdmw7_cube_00000'
pathname = urllib.request.urlopen(url)
ff = FortranFile(pathname, 'r')
ff.read_ints()
gives "OSError: obtaining file position failed". pathname.read() doesn't work either because it's a fortran file.
Any ideas? Thanks in advance!
Maybe you can use the tempfile module to download and read the data?
For example:
import urllib
import tempfile
from scipy.io import FortranFile
from urllib.request import urlopen
url='http://www.deus-consortium.org/deus-library/efiler1/Babel_le/boxlen648_n2048_lcdmw7/cube_00090/fof_boxlen648_n2048_lcdmw7_cube_00000'
with tempfile.TemporaryFile() as fp:
    fp.write(urllib.request.urlopen(url).read())
    fp.seek(0)
    ff = FortranFile(fp, 'r')
    info = ff.read_ints()
    print(info)
Prints:
[12808737]
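Since the whole response is read into memory anyway, an in-memory buffer should also work: FortranFile only needs a seekable binary file-like object, and io.BytesIO provides that without touching the disk. A sketch under that assumption:
import urllib.request
from io import BytesIO
from scipy.io import FortranFile
url = 'http://www.deus-consortium.org/deus-library/efiler1/Babel_le/boxlen648_n2048_lcdmw7/cube_00090/fof_boxlen648_n2048_lcdmw7_cube_00000'
buf = BytesIO(urllib.request.urlopen(url).read())  # seekable, unlike the raw HTTP response
ff = FortranFile(buf, 'r')
print(ff.read_ints())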

unable to download pdf from this particular url using python

I have tried downloading a .pdf file using the following code, but I can't open the downloaded file; it shows a PDF error. I also tried doing the same with urllib2 and requests, and none of them helped. Please help in resolving this.
import urllib
import os
pdf_link = "https://www.indeed.com/resumes/account/login?dest=%2Fr%2F23c59475ad19d393/pdf"
pdf_file = "sample.pdf"
response = urllib.urlopen(pdf_link)
file = open(pdf_file, 'wb')
file.write(response.read())
file.close()
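One hedged observation: the link goes through /resumes/account/login?dest=..., so the server is probably returning an HTML login page rather than the PDF itself, which would explain why the saved file will not open as a PDF. Checking what actually came back is a quick way to confirm:
import urllib
pdf_link = "https://www.indeed.com/resumes/account/login?dest=%2Fr%2F23c59475ad19d393/pdf"
response = urllib.urlopen(pdf_link)
data = response.read()
print data[:8]  # a real PDF starts with "%PDF-"; an HTML login page starts with "<!DOCTYPE" or "<html"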

Working with a pdf from the web directly in Python?

I'm trying to use Python to read .pdf files from the web directly rather than save them all to my computer. All I need is the text from the .pdf and I'm going to be reading a lot (~60k) of them, so I'd prefer to not actually have to save them all.
I know how to save a .pdf from the internet using urllib and open it with PyPDF2. (example)
I want to skip the saving-to-file step.
import urllib, PyPDF2
urllib.urlopen('https://bitcoin.org/bitcoin.pdf')
wFile = urllib.urlopen('https://bitcoin.org/bitcoin.pdf')
lFile = PyPDF2.pdf.PdfFileReader(wFile.read())
I get an error that is fairly easy to understand:
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
fil = PyPDF2.pdf.PdfFileReader(wFile.read())
File "C:\Python27\lib\PyPDF2\pdf.py", line 797, in __init__
self.read(stream)
File "C:\Python27\lib\PyPDF2\pdf.py", line 1245, in read
stream.seek(-1, 2)
AttributeError: 'str' object has no attribute 'seek'
Obviously PyPDF2 doesn't like that I'm giving it the urllib.urlopen().read() (which appears to return a string). I know that this string is not the "text" of the .pdf but a string representation of the file. How can I resolve this?
EDIT: NorthCat's solution resolved my error, but when I try to actually extract the text, I get this:
>>> print lFile.getPage(0).extractText()
ˇˆ˘˘˙˘˘˝˘˛˘ˇ˘ˇ˚ˇˇˇ˘ˆ˘˘˘˚ˇˆ˘ˆ˘ˇ˜ˇ˝˚˘˛˘ˇ ˘˘˘ˇ˛˘˚˚ˆˇˇ!
˝˘˚ˇ˘˘˚"˘˘ˇ˘˚ˇ˘˘˚ˇ˘˘˘˙˘˘˘#˘˘˘ˆ˘˛˘˚˛˙ ˘˘˚˚˘˛˙#˘ˇ˘ˇˆ˘˘˛˛˘˘!˘˘˛˘˝˘˘˘˚ ˛˘˘ˇ˘ˇ˛$%&˘ˇ'ˆ˛
$%&˘ˇˇ˘˚ˆ˚˘˘˘˘ ˘ˆ(ˇˇ˘˘˘˘ˇ˘˚˘˘#˘˘˘ˇ˛!ˇ)˘˘˚˘˘˛ ˚˚˘ˇ˘˝˘˚'˘˘ˇˇ ˘˘ˇ˘˛˙˛˛˘˘˚ˇ˘˘ˆ˘˘ˆ˙
$˘˘˘*˘˘˘ˇˆ˘˘ˇˆ˛ˇ˘˝˚˚˘˘ˇ˘ˆ˘"˘ˆ˘ˇˇ˘˛ ˛˛˘˛˘˘˘˘˘˘˛˘˘˚˚˘$ˇ˘ˇˆ˙˘˝˘ˇ˘˘˘ˇˇˆˇ˘ ˘˛ˇ˝˘˚˚#˘˛˘˚˘˘
˘ˇ˘˚˛˛˘ˆ˛ˇˇˇ ˚˘˘˚˘˘ˇ˛˘˙˘˝˘ˇ˘ˆ˘˛˙˘˝˘ˇ˘˘˝˘"˘˛˘˝˘ˇ ˘˘˘˚˛˘˚)˘˘ˆ˛˘˘
˘˛˘˛˘ˆˇ˚˘˘˘˘˚˘˘˘˘˛˛˚˘˚˝˚ˇ˘#˘˘˚ˆ˘˘˘˝˘˚˘ˆˆˇ˘ˆ
˘˘˘ˆ˘˝˘˘˚"˘˘˚˘˚˘ˇ˘ˆ˘ˆ˘˚ˆ˛˚˛ˆ˚˘˘˘˘˘˘˚˛˚˚ˆ#˘ˇˇˆˇ˘˝˘˘ˇ˚˘ˇˇ˘˛˛˚ ˚˘˘˘ˇ˚˘˘ˇ˘˘˚ˆ˘*˘
˘˘ˇ˘˚ˇ˘˙˘˚ˇ˘˘˘˙˙˘˘˚˚˘˘˝˘˘˘˛˛˘ˇˇ˚˘˛#˘ˆ˘˘ˇ˘˚˘ˇˇ˘˘ˇˆˇ˘$%&˘ˆ˘˛˘˚˘,
Try this:
import urllib, PyPDF2
import cStringIO
wFile = urllib.urlopen('https://bitcoin.org/bitcoin.pdf')
lFile = PyPDF2.pdf.PdfFileReader( cStringIO.StringIO(wFile.read()) )
Because PyPDF2's text extraction does not work here, there are a couple of alternative solutions; however, they require saving the file to disk.
Solution 1
You can use ps2ascii (if you are using Linux or Mac) or xpdf (Windows). Example of using xpdf:
import os
os.system('C:\\xpdfbin-win-3.03\\bin32\\pdftotext.exe C:\\xpdfbin-win-3.03\\bin32\\bitcoin.pdf bitcoin1.txt')
or
import subprocess
subprocess.call(['C:\\xpdfbin-win-3.03\\bin32\\pdftotext.exe', 'C:\\xpdfbin-win-3.03\\bin32\\bitcoin.pdf', 'bitcoin2.txt'])
Solution 2
You can use one of the online PDF-to-text converters. Example using pdf.my-addr.com:
import MultipartPostHandler
import urllib2
def pdf2text(absolute_path):
    url = 'http://pdf.my-addr.com/pdf-to-text-converter-tool.php'
    params = {'file': open(absolute_path, 'rb'),
              'encoding': 'UTF-8'}
    opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler)
    return opener.open(url, params).read()
print pdf2text('bitcoin.pdf')
You can find the code of MultipartPostHandler here. I tried to use cStringIO instead of open(), but it did not work.
Maybe it will be helpful for you.
I know this question is old, but I had the same issue and here is how I solved it.
In the newer docs of PyPDF2 there is a section about streaming data.
The example there looks like this:
from io import BytesIO
# Prepare example
with open("example.pdf", "rb") as fh:
bytes_stream = BytesIO(fh.read())
# Read from bytes_stream
reader = PdfReader(bytes_stream)
Therefore, what I did instead was this:
import urllib.request
from io import BytesIO
from PyPDF2 import PdfReader
NEW_PATH = 'https://example.com/path/to/pdf/online?id=123456789&date=2022060'
wFile = urllib.request.urlopen(NEW_PATH)
bytes_stream = BytesIO(wFile.read())
reader = PdfReader(bytes_stream)
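From there, extracting the text works the same as with a local file (using the same recent PyPDF2 that provides PdfReader); for example:
for page in reader.pages:
    print(page.extract_text())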
