I'm trying to create a listener in Python that automatically retrieves changes on a Cloudant database as they occur. When a change occurs I want to call a specific function.
I have read through the documentation and the API specifications but couldn't find anything.
Is there a way to do this?
Here's a basic streaming changes feed reader (disclaimer: I wrote it):
https://github.com/xpqz/pylon/blob/master/pylon.py#L165
The official Cloudant Python client library also contains a changes feed follower:
https://python-cloudant.readthedocs.io/en/latest/feed.html
It's pretty easy to get a basic changes feed reader going, as the _changes endpoint with a feed=continuous parameter does quite a lot for you off the bat, including passing the results back as one self-contained JSON object per line. The hard bit is dealing with a rather non-obvious set of failure conditions.
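For illustration, here's a minimal sketch of such a reader using the requests library; the account URL, database name, credentials and on_change callback are placeholders, and real code would need reconnect/error handling on top:

```python
import json
import requests

def follow_changes(base_url, db, auth, on_change):
    """Stream _changes continuously, calling on_change for each change."""
    params = {"feed": "continuous", "include_docs": "true", "heartbeat": 30000}
    resp = requests.get(f"{base_url}/{db}/_changes", params=params,
                        auth=auth, stream=True)
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:  # empty heartbeat lines just keep the connection alive
            continue
        on_change(json.loads(line))

# follow_changes("https://myaccount.cloudant.com", "mydb",
#                ("apikey", "password"), lambda c: print(c["id"]))
```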
I have a question and hope someone can point me in the right direction. Basically, every week I have to run a query (SSMS) to get a table containing some information (date, clientnumber, clientID, orderid etc.), and then I copy all the information from that table and paste it in a folder as a CSV file. It takes me about 15 minutes to do all this, but I am thinking: can I automate this, and if so, how? Can I also schedule it so it runs by itself every week? I believe we live in a technological era and this should be done without human input, so I hope I can find someone here willing to show me how to do it using Python.
Many thanks for considering my request.
This should be pretty simple to automate:
Use a database adapter that works with your database; for MSSQL, the one delivered by pyodbc will be fine.
Within the script, connect to the database, perform the query, and parse the output.
Save the parsed output to a .csv file (you can use the csv Python module), as sketched below.
Run the script as a periodic task using cron or schtasks if you work on Linux or Windows respectively.
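A rough sketch of those steps (the connection string, query and file names are placeholders for your own setup):

```python
import csv
import pyodbc

# Connect to the MSSQL server; adjust driver/server/database to your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT date, clientnumber, clientID, orderid FROM orders")

with open("weekly_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor.fetchall())                      # data rows

conn.close()
```

A weekly cron entry like 0 8 * * 1 python /path/to/report.py (or an equivalent schtasks task on Windows) would then run it unattended.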
Please note that your question is too broad, and shows no research effort.
You will find that Python can do the tasks you desire.
There are many different ways to interact with SQL servers, depending on your implementation. I suggest you learn Python+SQL using the built-in sqlite3 library. You will want to save your query as a string and pass it into an SQL connection manager of your choice; which one depends on your server setup, as there are many different SQL packages for Python.
You can use pandas for parsing the data and saving it to a .csv file (the method is literally called to_csv), as in the sketch below.
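For example (the sqlite3 connection here stands in for whatever SQL adapter matches your server):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("example.db")           # swap in your own connection
df = pd.read_sql("SELECT * FROM orders", conn) # query straight into a DataFrame
df.to_csv("weekly_report.csv", index=False)    # and out to CSV
```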
Python does have many libraries for scheduling tasks, but I suggest you hold off for a while. Develop your code so that it can be run manually, which will still be much faster and easier than doing the job without Python. Once you know your code works, you can easily add a scheduler. The downside is that your program will always need to be running, and you will need to keep checking that it is running. Personally, I would keep it restricted to manually running the script; you could compile it to an .exe and bind it to a hotkey if you need the accessibility.
I have a Python script that regularly checks an API for data updates. Since it runs without supervision, I would like to be able to monitor what the script does to make sure it works properly.
My initial thought is just to write every communication attempt with the API to a text file with date, time and whether data was pulled or not. A new line for every input. My question to you is whether you would recommend doing it in another way. Write to Excel, for example, to be able to sort the columns? Or are there any other options worth considering?
I would say it really depends on two factors:
How often you update
How much interaction you want with the monitoring data (e.g. notifications, reporting, etc.)
I have had projects where we've updated Google Sheets (using the API) to be able to collaboratively extract reports from update data.
However, note that this means a web call at every update, so if your updates are close together, this will affect performance. Also, if your app is interactive, there may be a delay while the data gets updated.
The upside is you can build things like graphs and timelines really easily (and collaboratively) where needed.
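A rough sketch of that approach using the third-party gspread library (the credentials file, sheet name and row contents are placeholders):

```python
import gspread

# Authenticate with a Google service account and open the target sheet.
gc = gspread.service_account(filename="service_account.json")
ws = gc.open("API monitor log").sheet1
ws.append_row(["2024-01-01 12:00", "pull", "ok"])  # one row per update
```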
Also - yes, definitely use the logging module, as answered below. I had sort of assumed you were already using the logging module for the local file!
Take a look at the logging documentation.
A new line for every input is a good start. You can configure the logging module to print date and time automatically.
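A minimal setup along those lines (the log file name is a placeholder):

```python
import logging

logging.basicConfig(
    filename="api_monitor.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",  # timestamp each line
)

logging.info("data pulled successfully")
logging.warning("API returned no new data")
```

Each call appends a timestamped line to the file, and you can later switch handlers (e.g. rotating files) without touching the call sites.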
My web app asks users 3 questions and simply writes them to a file as a1,a2,a3. I also have a real-time visualization of the average of the data (it reads in real time from the file).
Must I use a database to ensure that no (or minimal) information is lost? Is it possible to produce a queue of reads/writes? (Since the files are small, I am not too worried about the execution time of each call.) Does Python/Flask already take care of this?
I am quite experienced in Python itself, but not in this area (with Flask).
I see a few solutions (both sketched below):
read /dev/urandom a few times, calculate the SHA-256 of the bytes and use it as a file name; a collision is extremely improbable
use Redis and a command like LPUSH, which is very easy to use from Python; then RPOP from the right end of the list, and there's your queue
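Both ideas sketched, assuming a local Redis server and the redis-py package; the key and file names are placeholders:

```python
import hashlib
import os
import redis

# 1) Collision-resistant file name from urandom-backed random bytes.
name = hashlib.sha256(os.urandom(32)).hexdigest() + ".txt"

# 2) Redis as a simple queue: LPUSH on one end, RPOP on the other.
r = redis.Redis(host="localhost", port=6379)
r.lpush("answers", "a1,a2,a3")   # producer side (the Flask view)
item = r.rpop("answers")         # consumer side (the writer process)
```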
I've successfully written a PowerShell script which:
queries AD to obtain the list of computers
queries every computer through WMI to obtain software/hardware information
inserts the collected data into a MySQL database
The script works fine, but I don't like how it's implemented. It's procedural and there is a lot of code duplication, causing a mess every time I need to change something.
Now, what I would like to ask you is: what is the cleanest way to implement it in Python using OOP? I've thought of something similar to this (pseudocode):
class ADQuery:
    def get_computers(self, ou):
        # return a list of computers in the specified OU
        ...

class Computer:
    def __init__(self, computername):
        self.computername = computername

    def query(self):
        # connect to the computer and pull the info through WMI
        ...

    def print(self):
        # print the collected info to the console (debug)
        ...

    # properties: RAM, CPU, ...
Questions:
In order to save the collected data into a database, do I have to create another object (e.g. Database) and pass the Computer object to it, or should I add a member function to the Computer class (e.g. save_db())?
If I go for the second option, wouldn't that cause a massive number of MySQL connections when I'm dealing with multiple objects?
Thanks a lot and sorry for my bad English
That architecture looks reasonable to me.
You could do either; I'm not sure it really makes a huge difference with a small application like this.
Potentially. Depending on how it's implemented, you could end up with a lot of connections open. If you're doing a reasonable number of inserts, I'd stick them in a list and insert them all at once, if that's possible with your code; see the sketch below.
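A sketch of that idea, assuming a simple Database class with one shared connection; the table and column names are hypothetical:

```python
import mysql.connector

class Database:
    def __init__(self, **conn_args):
        # one shared connection instead of one per Computer object
        self.conn = mysql.connector.connect(**conn_args)

    def save_computers(self, computers):
        # collect rows first, then insert in a single batch
        rows = [(c.name, c.ram, c.cpu) for c in computers]
        cur = self.conn.cursor()
        cur.executemany(
            "INSERT INTO computers (name, ram, cpu) VALUES (%s, %s, %s)",
            rows,
        )
        self.conn.commit()
```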
You could also grab an object-oriented design book from the internet or your local bookstore, e.g. Rumbaugh et al. I would also recommend reading up on design patterns, e.g. the book by Gamma et al. I'm currently doing that, and it is really helpful to look at standard patterns for solving a particular problem to shape your thought process about object-oriented programming.
PS: Your English isn't bad at all (note that I'm also not a native speaker ;)).
Hello, I have developed a small game in Python using ZODB as the backend for DB processing. I have never done game programming before. I was hoping someone could tell me how I can save my current game and then reload it using Python. The database filename is data.fs, and there are three more ZODB files in my folder; one is for locks, the rest I'm not aware of.
I don't know your particular case, but maybe ZODB is overkill for this. First check pickle and also anydbm, both in the standard library. If they are not enough for your requirements, you can then go for ZODB.
In ZODB, once you are connected, you can put anything you want inside as you would with a plain Python dictionary.
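For example (a sketch; the dict standing in for your game state is a placeholder, and any picklable object works):

```python
from ZODB import FileStorage, DB
import transaction

storage = FileStorage.FileStorage("data.fs")
db = DB(storage)
conn = db.open()
root = conn.root()

root["saved_game"] = {"level": 3, "score": 1200}  # save the state
transaction.commit()

saved = root.get("saved_game")                    # reload it later
db.close()
```

The extra files next to data.fs (typically data.fs.index, data.fs.lock and data.fs.tmp) are FileStorage housekeeping files; the .lock one is the lock file mentioned in the question.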
You can also use repoze.zodbconn to facilitate the configuration, the connection at startup, and closing it properly when your program exits.