I have a plotly figure that I want to download the HTML for.
When I try:
fig.to_html()
it only outputs part of the HTML string and ends with this:
*** WARNING: skipped 3421789 bytes of output ***
Does anyone know how I can force Databricks to show all of the HTML or copy it to my clipboard?
Still a bit hacky, but the following should work to save an HTML file from Databricks and download it:
Assuming you have a plotly figure object and are working in a Databricks notebook:
Create an HTML string and assign it to a variable:
html_string = fig.to_html()
Save it to the Databricks filesystem using dbutils:
dbutils.fs.put("/FileStore/my_html_file.html", html_string)
Download the HTML by navigating in a browser to:
https://<databricks-instance>/files/my_html_file.html
Note:
The Databricks instance is usually something like <some_letters>-<some_numbers>.databricks.net.
If you're within an organisation (i.e. your Databricks URLs typically end in "?o=<organisation_id>"), you may need to add this to the end of the URL in step 3.
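Put together, the whole thing fits in one notebook cell (a minimal sketch; the figure and the file name are just examples):

import plotly.express as px

fig = px.line(x=[1, 2, 3], y=[4, 1, 7])  # stand-in for your actual figure
html_string = fig.to_html()

# dbutils is available inside Databricks notebooks; overwrite=True lets you re-run the cell.
dbutils.fs.put("/FileStore/my_html_file.html", html_string, overwrite=True)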
I'm trying to write a Python script based on the atlassian-python-api module which copies the pages from one space and creates them in another space hosted on a different server, using the following commands:
pages = sourceConfluence.get_all_pages_from_space(space=source_Space, start=0, limit=100, status=None, expand='body.storage', content_type='page')
for i in pages:
    status = destConfluence.create_page(space=dest_Space, title=i['title'], body=i['body']['storage']['value'], parent_id=None, type='page', representation='storage')
This works fine until pages with content like PDFs or images come in. In that case, it creates invalid links for the content in the newly generated pages.
How can I move the pages with the content intact using the wrapper or Confluence REST API directly?
As per my understanding, there is currently no proper way to do this programmatically from Python.
The only ways we can achieve this are as follows:
Copy and paste the pages manually, one by one, from edit mode.
Export the space as XML and import it by putting the XML file on the destination Confluence server.
Use the Bob Swift CLI tool, which supports this functionality.
I would like to generate a report with reference links to local files using relative paths. I want this to work on my *nix system and on the Windows machines used by others. I have tried the following:
from pandas import *
df = DataFrame(['<a href="http://example.com">example.com</a>'])
df.to_html('file.html')
For some reason this does not render the href for me. I have tried it in multiple browsers and none of them render the HTML.
How can I generate a report using Python which includes hyperlinks to open files via local paths? I would like this document to be usable by others without running Python.
You just need to tell pandas not to escape the tags:
df.to_html('file.html', escape=False)
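As a minimal sketch (the file names here are made up), the report could be generated like this, using forward-slash relative paths, which browsers accept on both Windows and *nix:

import pandas as pd

# Hypothetical file list; relative hrefs with forward slashes work in browsers on Windows too.
files = ["data/example.csv", "data/notes.txt"]
df = pd.DataFrame({"file": [f'<a href="{p}">{p}</a>' for p in files]})

# escape=False keeps the <a> tags intact instead of rendering them as literal text.
df.to_html("report.html", escape=False)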
I'm currently trying to retrieve data from the Google Trends site directly into Python. By inspecting the "download csv" button I was able to extract the underlying URL of the file, which looks like this:
https://trends.google.com/trends/api/widgetdata/multiline/csv?req=%7B%22time%22%3A%222018-08-30%202019-08-30%22%2C%22resolution%22%3A%22WEEK%22%2C%22locale%22%3A%22de%22%2C%22comparisonItem%22%3A%5B%7B%22geo%22%3A%7B%7D%2C%22complexKeywordsRestriction%22%3A%7B%22keyword%22%3A%5B%7B%22type%22%3A%22BROAD%22%2C%22value%22%3A%22trump%22%7D%5D%7D%7D%5D%2C%22requestOptions%22%3A%7B%22property%22%3A%22%22%2C%22backend%22%3A%22IZG%22%2C%22category%22%3A0%7D%7D&token=APP6_UEAAAAAXWrEbGVLW-ssfeQvOJgr9938DRgYO1sm&tz=-120
Unquoted:
https://trends.google.com/trends/api/widgetdata/multiline/csv?req={"time":"2018-08-30 2019-08-30","resolution":"WEEK","locale":"de","comparisonItem":[{"geo":{},"complexKeywordsRestriction":{"keyword":[{"type":"BROAD","value":"trump"}]}}],"requestOptions":{"property":"","backend":"IZG","category":0}}&token=APP6_UEAAAAAXWrEbGVLW-ssfeQvOJgr9938DRgYO1sm&tz=-120
I can now easily get this CSV into a pandas DataFrame. Ideally, I would now just manipulate the URL to make custom requests and load new data. The problem I have is that I cannot reuse the token parameter, since it is generated anew for each individual CSV request. I think the answer from shaochuancs in Origin of tokens in Google trends API call describes the problem as I am facing it. Can anyone explain how I can request this token, which I could then use for the second request (the actual CSV download)?
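For reference, the part that already works (loading such a CSV URL into pandas) looks roughly like this; since the token is single-use, the URL has to be freshly copied from the browser each time:

import pandas as pd

# A freshly copied widgetdata URL, including its one-off token (elided here).
url = "https://trends.google.com/trends/api/widgetdata/multiline/csv?req=...&token=...&tz=-120"

# The export typically starts with a couple of metadata lines before the
# actual header; adjust skiprows if your download differs.
df = pd.read_csv(url, skiprows=2)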
I'm plotting some data using shap, which returns (in the console) an
<IPython.core.display.HTML object>. I would like to convert the object to a PDF and save it.
I understand that there is some hassle involved, as I would probably need to simulate a browser to open and render the HTML.
What would be the most straightforward way of storing the HTML output as a PDF?
Please note that I am not inside a browser/IPython notebook, and I do not want to create and convert a whole IPython notebook.
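One way to sketch this, assuming the pdfkit wrapper (it drives the separately installed wkhtmltopdf binary, which is a headless WebKit, so no interactive browser is needed):

from IPython.core.display import HTML
import pdfkit  # pip install pdfkit; also requires the wkhtmltopdf binary on the system

html_obj = HTML("<h1>example</h1>")            # stand-in for the object shap returns
pdfkit.from_string(html_obj.data, "plot.pdf")  # .data holds the raw HTML string

Whether the result looks right depends on how much JavaScript the shap plot relies on; wkhtmltopdf executes most of it, but a quick check of the output is worthwhile.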
I am doing a digital signage project using a Raspberry Pi. The R-Pi will be connected to an HDMI display and to the internet. There will be one XML file and one self-designed HTML webpage on the R-Pi. The XML file will be updated frequently from a remote terminal.
My idea is to parse the XML file using Python (lxml) and pass the parsed data to my local HTML webpage so that it can be displayed in the R-Pi's web browser. The webpage will reload frequently to reflect the changed data.
I was able to parse the XML file using Python (lxml). But what tools should I use to display the parsed content (mostly strings) in a local HTML webpage?
This question might sound trivial, but I am very new to this field and could not find a clear answer anywhere. There are also methods that use PHP to parse the XML and pass it to an HTML page, but my other requirements bind me to Python.
I think there are three steps needed to make this work.
Extract only the data you want from the given XML file.
Use a simple template engine to insert the extracted data into an HTML file.
Serve the file created above with a web server.
Step 1) You are already using lxml, which is a good library for this, so I don't think you need help there.
Step 2) There are many Python templating engines out there, but for a simple purpose you just need an HTML file created in advance with some special markup such as {{0}}, {{1}}, or whatever works for you. This is your template. Take the data from step 1, do a find-and-replace in the template, and save the output to a new HTML file.
Step 3) To make that file accessible from a browser on a different device or PC, you need to serve it using a simple HTTP web server. Python provides the http.server module, or you can use a third-party web server; just make sure it can access the file created in step 2. A sketch of steps 2 and 3 follows.
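A minimal sketch of steps 2 and 3 (the file names and the {{0}} placeholder are just examples):

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Step 2: fill in a template prepared in advance with a {{0}} placeholder.
with open("template.html") as f:
    template = f.read()

parsed_value = "My first paragraph."  # stand-in for a value extracted with lxml
with open("index.html", "w") as f:
    f.write(template.replace("{{0}}", parsed_value))

# Step 3: serve the directory so another device can fetch http://<pi-address>:8000/index.html
HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()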
Instead of passing the parsed data (from the XML file) to specific components in the HTML page, I wrote Python code that rewrites the entire HTML page periodically.
Suppose we have an XML file, a Python script, and an HTML webpage.
XML file: contains values that are updated periodically and are to be parsed.
Python script: parses the XML file (whenever the XML file changes) and updates the HTML page with the newly parsed values.
HTML webpage: shown on the R-Pi screen and reloaded periodically (to reflect any changes in the browser).
The Python code declares a string (say, html_code; avoid naming it str, which shadows the built-in) that contains the code of the HTML page, such as the code below.
<!DOCTYPE html>
<html>
<body>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
</body>
</html>
Then, suppose we would like to update "My first paragraph." with a value parsed from the XML. We can use Python's string replacement; note that replace returns a new string rather than modifying it in place, so the result must be assigned back:
html_code = html_code.replace("My first paragraph.", root[0][1].text)
After the replacement is done, write the entire string (html_code) to the HTML file. The HTML file now contains the new code, and once it is reloaded, the updated webpage shows up in the browser (of the R-Pi).
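Putting it together, a minimal sketch (the file names, the XML indices, and the refresh interval are just examples):

import time
from lxml import etree

HTML_TEMPLATE = """<!DOCTYPE html>
<html>
<body>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
</body>
</html>"""

while True:
    root = etree.parse("data.xml").getroot()  # re-read the frequently updated XML
    html_code = HTML_TEMPLATE.replace("My first paragraph.", root[0][1].text)
    with open("page.html", "w") as f:         # overwrite the page the browser reloads
        f.write(html_code)
    time.sleep(10)                            # check for new values every 10 seconds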