My org had a dashboard a while back that was very complex and took a long time to load. Once it had been loaded once, however, performance was fine for the rest of the day (because the cache had been refreshed). So we had someone on our team open the dashboard early every morning to refresh the cache for the day. I am now learning about the Tableau Server Client (TSC) module in Python, which uses the Tableau API to manipulate Tableau Server, and I am wondering if I could accomplish the same thing using TSC. Is that possible? If so, what would be the syntax? I looked into the documentation and didn't see anything about it. I am currently able to trigger an extract refresh using TSC, but I would like to see if I can just refresh a dashboard's cache. Or perhaps there is another TSC function that does something similar. Any thoughts? Thanks
You can do what is called cache warming on Tableau Server 9.0+ using subscriptions.
Here are the steps found in the Tableau KB:
Starting with Tableau Server 9.0, the Cache Server can be "warmed" with data using the subscription function. Triggering a subscription email that includes a thumbnail after executing an extract refresh will cause the queries for the viz to run and the results to load into the external query cache.
If a user wants a fast-loading view for an 8 AM meeting:
Tableau Administrator schedules an extract refresh at 2 AM.
Tableau Administrator schedules a subscription email at 5 AM.
User loads workbook quickly from stored cache at 8 AM.
Note: if "Refresh More Often" is selected in the Data Connections tab of Configure Tableau Server, the cache will be cleared every time the view is loaded. Additionally, regardless of cache settings, if a user hits the "Refresh Data" button on the toolbar, the Tableau Server will fetch new data.
If you want to use Python, I would just use requests.get() to hit your view's URL on a schedule.
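For example, a minimal sketch you could run from cron (the URL is a placeholder, and depending on your setup an authenticated session, trusted ticket, or personal access token may be required rather than a bare GET):
    # A minimal sketch of warming the cache by requesting the view each
    # morning; the URL is a placeholder, and your server may require
    # authentication (trusted ticket, PAT, or session cookie).
    import requests

    VIEW_URL = "https://tableau.example.com/views/MyWorkbook/MyDashboard"

    def warm_cache():
        # Loading the view forces Tableau Server to run the viz queries
        # and populate the external query cache.
        resp = requests.get(VIEW_URL, timeout=300)
        resp.raise_for_status()

    if __name__ == "__main__":
        warm_cache()  # run from cron or any scheduler early each morning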
I'm working on a Django project. Right now (if you're the dev) you can make changes to the HTML pretty easily. I'm not having any issues with that; I just want to know how to get the page to reload when the HTML changes. Here's the GitHub repo:
https://github.com/jeffcrockett86/django-unchained
You could have the client poll the server (e.g. every 10 seconds). The server would respond with a timestamp (e.g. Unix millis). If that timestamp is later than the time the page was loaded (which the server could embed in an <input type="hidden">), the client runs window.location.reload();.
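A rough sketch of the server half in Django (the view name and the idea of using template-file mtimes as the change signal are my own assumptions, not something from the repo):
    # A rough sketch of the Django side of this polling approach; the view
    # name and using template mtimes as the change signal are assumptions.
    import os
    from django.conf import settings
    from django.http import JsonResponse

    def last_modified(request):
        # Report the newest template mtime in Unix millis; the client
        # compares it against the timestamp baked into the loaded page.
        newest = 0.0
        for tpl_dir in settings.TEMPLATES[0]["DIRS"]:
            for root, _dirs, files in os.walk(tpl_dir):
                for name in files:
                    path = os.path.join(root, name)
                    newest = max(newest, os.path.getmtime(path))
        return JsonResponse({"millis": int(newest * 1000)})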
I have an on-site SQL Server that runs and posts relevant records to a data warehouse accessible via an API endpoint. I need to create a webhook to detect changes whenever rows are added to or deleted from the warehouse table. Preferably, the webhook should trigger a message to Azure Queue Storage via an HTTP trigger.
How can I go about this in Azure? I can't get my hands on any straightforward documentation or tutorial.
If it can't be done in Azure, are there any other third-party platforms with which I can create a webhook to detect changes to the table, given the endpoint's URL?
I have been able to create a webhook in ArcGIS which is currently running successfully on the same logic. However, I am now required to change that and have it triggered by activity on the data warehouse API. Any help will be appreciated.
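For what it's worth, the queue-writing half maps naturally onto an Azure Functions HTTP trigger with a Queue Storage output binding. A minimal sketch in the Python v1 programming model (the binding name msg is declared in function.json, and the payload shape is an assumption):
    # __init__.py of an Azure Function (Python v1 model) with an HTTP trigger
    # and a Queue Storage output binding named "msg" declared in function.json.
    import azure.functions as func

    def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
        # Whatever detects the table change POSTs a JSON payload describing
        # the added/deleted rows; this simply forwards it onto the queue.
        body = req.get_body().decode("utf-8")
        msg.set(body)
        return func.HttpResponse("queued", status_code=202)
Detecting the change itself is the harder part: if the warehouse only exposes a REST API, you will likely need a scheduled job that diffs the table and calls this endpoint when rows appear or disappear.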
I have set up an automated Python process that periodically updates Google spreadsheets with sensitive data. It is extremely important that these spreadsheets remain restricted to the users/emails with whom I have shared them.
I have been following the documentation here:
https://gspread.readthedocs.io/en/latest/oauth2.html#enable-api-access
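For reference, the service-account flow those docs describe looks roughly like this (a minimal sketch; the credentials filename, sheet title, and email are placeholders):
    # A minimal sketch of the gspread service-account flow from the linked
    # docs; the credentials file, sheet title, and email are placeholders.
    import gspread

    gc = gspread.service_account(filename="service_account.json")
    sh = gc.open("Sensitive Report")
    sh.share("teammate@example.com", perm_type="user", role="writer")
    sh.sheet1.update_acell("A1", "refreshed")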
I am able to restrict access to the spreadsheet, share it with the client email from the project credentials I created (plus the desired users), and successfully update the spreadsheet via Python. However, not long after the spreadsheet has been updated, the client email is booted from the share, and the access restriction reverts to allowing everyone within my organization. Further confusing the matter, both of those changes are attributed to my email address, even though I never made them! I have also requested that my organization whitelist the client email's domain, but this has not changed anything.
This only seems to happen after the spreadsheet is updated via the Python process. The first few times, I had remained logged in to Google Drive with the spreadsheets open in my browser, and the reversion occurred ~20 minutes after the successful update. On the most recent occasion, I logged out of my Google account before kicking off the Python process, and this time the issue occurred just 3 minutes after the sheets were updated.
Has anyone run into this before or know how to solve this issue?
Thank you,
Sumanth
I have an application that uses Python App Engine. There is a service that updates the status of users; if an admin has the page open, I need it to update in real time. I know that App Engine has cron and task queues; what would be the correct way to handle this? Should I set an update flag in the models that triggers JavaScript?
The Channel API can be used to send real-time(ish) data to clients, without the need for clients to poll the server.
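A minimal sketch (the client IDs and message shape here are illustrative):
    # A minimal sketch using the (legacy, Python 2-era) App Engine Channel
    # API; the client IDs and message shape are illustrative.
    import json
    from google.appengine.api import channel

    def open_admin_channel(admin_id):
        # The admin page opens a socket with this token via the channel JS API.
        return channel.create_channel(admin_id)

    def on_user_status_change(admin_id, user_id, status):
        # Called by the status-update service; pushes straight to the open page.
        channel.send_message(admin_id,
                             json.dumps({"user": user_id, "status": status}))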
I want to make it so that one user on my Django site can send a chat request to another user. I want the requestee to get a real-time box that says: "Do you want to chat?"
How does the following client polling approach sound:
user1 clicks on user2's nickname, generating a POST request to some /message/requests endpoint, which creates a Message of type CHAT_REQUEST in the database. Meanwhile, a JavaScript snippet in user2's browser repeatedly queries the server for message updates. When it receives a Message of type CHAT_REQUEST, it opens a popup...
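A rough sketch of that polling endpoint (the Message model and its fields are hypothetical):
    # A rough sketch of the polling endpoint; the Message model and its
    # field names are hypothetical.
    from django.http import JsonResponse
    from myapp.models import Message  # hypothetical app/model

    def message_updates(request):
        pending = Message.objects.filter(
            recipient=request.user, kind="CHAT_REQUEST", handled=False
        )
        data = [{"id": m.id, "from": m.sender.username} for m in pending]
        pending.update(handled=True)
        return JsonResponse({"requests": data})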
The problem with this approach seems to be the database access.
If the client is polling every 10 seconds, and 100 users leave their browser windows open, that is 10 database requests per second.
Would it be better to store these messages not in the database, but in Django's memory or in session data? Or will this database table be cached in RAM by PostgreSQL, making retrieval fast?
A database table for this would put a load on your server, as you said, but might be useful if you want to keep a record of these requests for whatever reason.
Using something like memcached or an AMQP server might give you better performance. If you like, you could even use a higher-performance key-value store such as Tokyo Cabinet / Tokyo Tyrant.
I suggest you look at Comet-style communication instead of AJAX-style polling if you are worried about server performance and bandwidth usage.
By the way, Redis looks very well suited to handling that kind of in-memory data structure.
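A minimal sketch of that with Redis (the key scheme and the 30-second expiry are illustrative choices):
    # A minimal sketch of keeping chat requests out of the database with
    # Redis; the key scheme and 30-second expiry are illustrative choices.
    import json
    import redis

    r = redis.Redis()

    def send_chat_request(from_user_id, to_user_id):
        key = "chat_requests:%s" % to_user_id
        r.lpush(key, json.dumps({"type": "CHAT_REQUEST", "from": from_user_id}))
        r.expire(key, 30)  # stale, unanswered requests simply vanish

    def poll_chat_requests(user_id):
        # Drains pending requests; called by the client's periodic poll.
        key = "chat_requests:%s" % user_id
        items = []
        while True:
            raw = r.rpop(key)
            if raw is None:
                break
            items.append(json.loads(raw))
        return items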