Export a MySQL database with contacts to a compatible CardDAV system - Python

I have a standard MySQL database with a table containing contacts (I add contacts to the table through a webapp built on Zend Framework), so the table uses my own fields.
Is it possible to create a server compatible with the OS X Address Book? I think it would have to speak CardDAV.
Has anyone already done this? If so, how did you handle it? Did you create your own server? Is there a CardDAV library for Python, for example? I just want to be able to read my contacts in the OS X Address Book.
Thanks a lot for your answers,
Best,
Jean

Is it possible to create a server compatible with the OS X Address Book? I think it would have to speak CardDAV.
Yes, you can create such a server, and there are plenty already. You can choose between CardDAV and LDAP, depending on your needs. If LDAP is good enough for your use case, you might even get away with just configuring OpenLDAP to use your database.
LDAP is usually read/query only (think of a big company address book or yellow pages); CardDAV is usually read/write with full sync.
Has anyone already done this?
Many people have; the CalConnect CardDAV Server Implementations page alone lists 16, most of them FOSS, and there are more.
If so, how did you handle it? Did you create your own server?
I think this is the most common approach.
Is there a CardDAV library for Python, for example?
Please do your research; this is trivial to figure out ...
Many PHP servers (you mentioned Zend) use SabreDAV as a basis.
I just want to be able to read my contacts in the OS X Address Book.
That makes it a lot easier. While you can use a library like SabreDAV, implementing read-only CardDAV is really not that hard: authentication, a few XML requests for locating an address book, and then some code to render your existing records as vCards.
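As a rough illustration of that last step, here is a minimal sketch of rendering MySQL contact rows as vCard 3.0 text. The table and column names (contacts, first_name, last_name, email) are hypothetical stand-ins for your own schema.

    # Hedged sketch: serialize MySQL contact rows as vCard 3.0 text.
    # Table and column names below are placeholders for your own schema.
    import mysql.connector

    def row_to_vcard(row):
        """Render one contact row as a vCard 3.0 string."""
        return "\r\n".join([
            "BEGIN:VCARD",
            "VERSION:3.0",
            f"FN:{row['first_name']} {row['last_name']}",
            f"N:{row['last_name']};{row['first_name']};;;",
            f"EMAIL;TYPE=INTERNET:{row['email']}",
            f"UID:{row['id']}",
            "END:VCARD",
        ]) + "\r\n"

    conn = mysql.connector.connect(user="app", password="secret", database="mydb")
    cur = conn.cursor(dictionary=True)
    cur.execute("SELECT id, first_name, last_name, email FROM contacts")
    for row in cur:
        print(row_to_vcard(row))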
If you want to add editing, things get more complicated.

Related

Sage 100 ERP setup on a Windows Server machine

We have purchased a Sage 100 partner account. I have also set up Sage 100 ERP on Windows Server 2016. But I am stuck on the following points:
Where to add a business
How to set up web services and access the REST APIs
How to configure the server
Any help with Sage 100 setup will be appreciated.
Typically you would work with a Sage partner or reseller to set up your Sage 100 environment. Depending on your location, there should be several available. You would typically check the Sage website to see the Sage partners in your area.
With that said, I used to do a lot of programming against Sage 100 and I can tell you that there is no REST or web services API. What you would typically do is deploy your own API that reads from Sage 100 as a database. There is an ODBC connection included by default with the product, called SOTAMAS90, that gives you read-only access to all the Sage 100 tables. The 32-bit connector is installed automatically when you install the program. There is a 64-bit version as well, but it takes more work to set up. The 32-bit version is easiest, but it does require that your API code run as a 32-bit service or program.
I would typically write C# programs that consume the SOTAMAS90 data and serve it via REST. ASP.NET Web API or ASP.NET Core are both good choices for this.
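The answer above uses C#, but just to sketch the read-only ODBC side in Python, here is a minimal, hedged example with pyodbc. The table and column names (AR_Customer, CustomerNo, CustomerName) and the UID convention are assumptions that vary by installation.

    # Hedged sketch: read Sage 100 data through the SOTAMAS90 ODBC DSN.
    # NOTE: the 32-bit driver requires a 32-bit Python interpreter.
    # Table/column names and the UID convention are installation-specific.
    import pyodbc

    conn = pyodbc.connect("DSN=SOTAMAS90;UID=user|COMPANY;PWD=password")
    cur = conn.cursor()
    cur.execute("SELECT CustomerNo, CustomerName FROM AR_Customer")
    for customer_no, name in cur.fetchall():
        print(customer_no, name)
    conn.close()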
Since the SOTAMAS90 ODBC client is read-only, you will have to do something else if you need to write data back to Sage 100. The two interfaces that I'm familiar with are VI and BOI.
VI, or Visual Integrator, is basically a utility for importing data from a source file (typically a CSV). It has some limitations, but it does work. You can launch it programmatically, which makes it usable on demand. It doesn't throw error messages, however: if a row can't be written, it just skips it. You can view a report after the fact to see what was written and what wasn't.
BOI, or the Business Object Interface, is a COM component that you can code against. It provides more robust data validation and throws errors on a per-record (and sometimes per-field) basis, so you can respond to them in your code accordingly. Unfortunately, while most of the modules are exposed via the BOI, not all of them are. Every year Sage ports more and more functionality to "the new framework", which also means it becomes available via the BOI.
Finally, you can also set up a linked server in SQL Server to serve the ODBC data that way. Any way you hit that SOTAMAS90 DSN, though, it's slow. Some developers like to copy all of the data to SQL Server and serve it from there. If you do that, be sure to add foreign keys and indexes, and run a nightly ETL to keep the data fresh. There are also solutions using User Defined Scripts that let you respond to individual row CRUD events.
Hope that helps.
Aaron

How can I create a new principal on a CalDAV/WebDAV server?

I want to programmatically create a principal with a calendar for every user of my web site. There is lots of documentation on how to create calendars, but I'm having a hard time finding anything on creating principals.
Any hint is appreciated, preferred language is python, but docs for other languages could help me as well.
Thank you for your help!
WebDAV ACL does not provide a way to manage principals, and I'm not aware of any draft/RFC adding that feature.
In short: you can't manage principals using WebDAV, and how principals are backed is highly server-specific.
Some servers may use the LDAP standard to manage their accounts; for example, the CalDAV server that is part of Mac OS X Server does.
If the LDAP server is configured to allow it (which is often not the case), you may be able to create accounts using that protocol. There are Python libraries providing access to LDAP.
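For instance, here is a minimal, hedged sketch using the ldap3 library; the host, bind DN, entry DN, object class, and attributes are all illustrative placeholders, and real servers enforce their own schemas and ACLs.

    # Hedged sketch: create one user entry over LDAP with the ldap3 library.
    # All names below (host, bind DN, entry DN, attributes) are placeholders.
    from ldap3 import Server, Connection, ALL

    server = Server("ldap.example.com", get_info=ALL)
    conn = Connection(server, "cn=admin,dc=example,dc=com", "secret", auto_bind=True)
    conn.add(
        "uid=jdoe,ou=people,dc=example,dc=com",   # DN of the new entry
        ["inetOrgPerson"],                        # object class(es)
        {"cn": "Jane Doe", "sn": "Doe", "mail": "jdoe@example.com"},
    )
    print(conn.result)  # inspect 'description' to confirm success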
Other servers often provide proprietary protocols or tools to create accounts.

Small "embeddable" database that can also be synced over the network?

I am looking for a small database that can be "embedded" into my Python application without running a separate server, as one can do with SQLite or Metakit. I don't need an SQL database, in fact storing free-form data like Python dictionaries or JSON is preferable.
The other requirement is to be able to run an instance of the database on a server and have instances of my application (clients) sync the database with the server (two-way), similar to what CouchDB replication can do.
Is there a database that will do this?
From what you describe, it sounds like you could get by using pickle and FTP.
If you don't need an SQL database, what's wrong with CouchDB? You can spawn a local process to serve the DB, and you could easily write a server wrapper to allow access only from your app. I'm not sure about the access story, but I believe the latest Ubuntu uses CouchDB for synchronizable user-level data.
Seems like the perfect job for CouchDB: two-way sync is incredibly easy, and schema-less JSON documents are the native format. If you're using Python, couchdb-python is a great way to work with CouchDB.
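To give a feel for how little the sync part involves, here is a hedged sketch using couchdb-python; the database names and URLs are placeholders.

    # Hedged sketch: two-way sync with couchdb-python. One replication
    # call per direction; names and URLs below are placeholders.
    import couchdb

    server = couchdb.Server("http://localhost:5984/")
    local = "myapp"
    remote = "http://server.example.com:5984/myapp"

    server.replicate(local, remote)  # push local changes to the server
    server.replicate(remote, local)  # pull server changes back down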
Do you need clients to work offline and then resync when they reconnect to the network? I don't know whether MongoDB can handle the offline-client scenario, but if the client is online all the time, MongoDB might be a good solution too. It has pretty good Python support. It's still a separate process, but perhaps easier to get running on Windows than CouchDB.
BerkeleyDB might be another option to check out, and it's lightweight enough. easy_install bsddb3 if you need a Python interface.
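For reference, a tiny sketch of the dict-like interface bsddb3 exposes (the file name is a placeholder):

    # Tiny sketch: bsddb3's legacy dict-like B-tree interface.
    import bsddb3

    db = bsddb3.btopen("data.db", "c")  # "c" = create the file if missing
    db[b"key"] = b"value"               # keys and values are bytes
    print(db[b"key"])
    db.close()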
HSQLDB does this, but unfortunately it's Java rather than Python.
Firebird SQL might be closer to what you want, since it does seem to have a Python interface.

Backend for Python

Which is the best back end for Python applications? What is the advantage of using SQLite, and how can it be connected to Python applications?
What do you mean by back end? Python apps connect to SQLite just like any other database; you just have to import the correct module and check how to use it.
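For example, with the standard-library sqlite3 module (the file and table names are arbitrary):

    # Minimal example: the standard-library sqlite3 module. No server,
    # no configuration -- the database is just a file.
    import sqlite3

    conn = sqlite3.connect("app.db")  # creates the file if it doesn't exist
    conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
    conn.commit()
    for row in conn.execute("SELECT id, body FROM notes"):
        print(row)
    conn.close()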
The advantages of using SQLite are:
You don't need to set up a database server; it's just a file
No configuration needed
Cross-platform
Desktop applications are mainly the ones that take real advantage of this. For web apps, SQLite is not recommended: the file containing the data is easily readable (it lacks any kind of encryption), and unless the web server is specially configured, the file is downloadable by anyone.
Django, Twisted, and CherryPy are popular Python "Back-Ends" as far as web applications go, with Twisted likely being the most flexible as far as networking is concerned.
SQLite can, as has been previously posted, be directly interfaced with using SQL commands, as it has native bindings for Python, or it can be accessed through an Object-Relational Mapper such as SQLObject (another Python library).
As far as performance is concerned, SQLite is fairly scalable and should be able to handle most use cases that don't require a separate database server (nothing enterprise-level). An additional benefit of SQLite is that the database is self-contained in a single file, allowing easy backup while remaining a common enough format that multiple applications can access the data. A word of advice on using SQLite with Python, however: you may run into issues with threading (in the past most of the bindings for SQLite were not thread-safe, although this may have changed over time).
The language you are using at the application layer has little to do with your database choice underneath. You need to examine the advantages of other DB packages to get an idea of what you want.
Here are some popular database packages that are cheap or free:
MS SQL Server Express, PostgreSQL, MySQL
If you mean "what is the best database?" then there's simply no way to answer this question. If you just want a small database that won't be used by more than a handful of people at a time, SQLite is what you're looking for. If you're running a database for a giant corporation serving thousands, you're probably looking for Oracle. In between those, you have MySQL, PostgreSQL, SQL Server, db2, and probably more.
If you're familiar with one of those, that may be the best to go with from a practical standpoint. If you're doing a typical webapp, my advice would be to go with MySQL or PostgreSQL, as they're free and well supported by just about any ORM you could think of (my personal preference is towards PostgreSQL, but I'm not experienced enough with either to make a good argument one way or another). If you do go with one of those two, my recommendation is to use Storm as the ORM.
(And yes, there are free versions of SQL Server and Oracle. You won't have as many choices as far as ORMs go though)

Which Python client library should I use for CouchDB? [closed]

I'm starting to experiment with CouchDB because it looks like the perfect solution for certain problems we have. Given that all work will be on a brand new project with no legacy dependencies, which client library would you suggest that I use, and why?
This would be easier if there were any overlap in the OSes we use. FreeBSD only has py-simplecouchdb available in its ports collection, but that library's project website says to use CouchDBKit instead. Neither of those comes with Ubuntu, which only ships with CouchDB itself. Since those two OSes don't have any libraries in common, I'll probably be installing something from source (and hopefully submitting packages to the Ubuntu and FreeBSD folks if I have time).
For those interested, I'd like to use CouchDB as a convenient intermediate storage place for data passed between various services - think of a message bus system but with less formality. For example, we have daemons that download and parse web pages, then send interesting bits to other daemons for further processing. A lot of those objects are ill-defined until runtime ("here's some HTML, plus a set of metadata, and some actions to run on it"). Rather than serialize it to an ad-hoc local network protocol or stick it in PostgreSQL, I'd much rather use something designed for the purpose. We're currently using NetWorkSpaces in this role, but it doesn't have nearly the breadth of support or the user community of CouchDB.
I have been using couchdb-python with quite a lot of success, and as far as I know the desktopcouch folks use it in Ubuntu. The prerequisites are very basic and you should have no problems:
httplib2
simplejson or cjson
Python
CouchDB 0.9.x (earlier or later versions are unlikely to work as the interface is still changing)
For me some of the advantages are:
Pythonic interface. You can work with the database as if it were a dict (see the sketch after the tool list below).
Interface for design documents.
A CouchDB view server that allows writing view functions in Python
It also provides a few command-line tools:
couchdb-dump: Writes a snapshot of a CouchDB database
couchdb-load: Reads a MIME multipart file as generated by couchdb-dump and loads all the documents, attachments, and design documents into a CouchDB database.
couchdb-replicate: Can be used as an update-notification script to trigger replication between databases when data is changed.
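As a hedged illustration of the dict-like interface mentioned above (the database name and document contents are placeholders):

    # Hedged sketch: couchdb-python's dict-like interface.
    import couchdb

    couch = couchdb.Server("http://localhost:5984/")
    db = couch.create("scratch")              # or couch["scratch"] if it exists
    doc_id, rev = db.save({"type": "contact", "name": "Jane"})
    doc = db[doc_id]                          # dict-style read by _id
    doc["name"] = "Jane Doe"
    db[doc_id] = doc                          # dict-style update
    del db[doc_id]                            # dict-style delete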
If you're still considering CouchDB, then I recommend Couchdbkit (http://www.couchdbkit.org). It's simple enough to quickly get the hang of, and it runs fine on my machine running Karmic Koala. Prior to that I tried couchdb-python, but some bugs with httplib (maybe ironed out by now) were giving me errors (duplicate documents, etc.), whereas Couchdbkit got me up and going so far without any problems.
spycouch
A simple Python library for easily managing CouchDB.
Unlike the libraries ordinarily available on the web, it works with the latest CouchDB version, 1.2.1.
Functionality
Create a new database on the server
Delete a database from the server
List the databases on the server
Get database information
Compact a database
Create a map view
Query a map view
List the documents in a DB
Get a document from the DB
Save a document to the DB
Delete a document from the DB
Edit a document
spycouch: https://github.com/cernyjan/repository
Given the task you are trying to solve (distributed task processing), you should consider one of the many tools designed for message passing rather than a database. See for instance this SO question on running multiple tasks over many machines.
If you really want a simple, casual message-passing system, I recommend you look at MorbidQ. As you get more serious, use RabbitMQ or ActiveMQ. This way you reduce the latency in your system and avoid having many clients polling a database (and thus hammering that machine).
I've found that avoiding databases is a good idea (that's my blog), and I have an end-to-end live data system running using MorbidQ here.
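To give a feel for the producer side, here is a minimal, hedged sketch of publishing a message to RabbitMQ with the pika client (the host and queue name are placeholders; MorbidQ or ActiveMQ would use a STOMP client instead):

    # Hedged sketch: publish one task message to RabbitMQ via pika.
    # Host and queue name are placeholders.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="tasks")
    channel.basic_publish(exchange="", routing_key="tasks",
                          body=b"parse:http://example.com/page")
    conn.close()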
I have written a CouchDB client library built on python-requests (which is in most distributions). We use this library in production.
https://github.com/adamlofts/couchdb-requests
Robust CouchDB Python interface using python-requests.
Goals:
Only one way to do something
Fast and stable (connection pooled)
Explicit is better than implicit: buffer sizes, connection pool size
Specify query parameters explicitly; no **params in query functions
After skimming through the docs of many CouchDB Python libraries, my choice went to pycouchdb.
Everything I needed to know was very quick to grasp from the docs (https://py-couchdb.readthedocs.org/en/latest/), and it works like a charm.
It also works well with Python 3.
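For a taste, a hedged sketch of basic pycouchdb usage (the database name and document are placeholders; check the docs above for the exact API):

    # Hedged sketch: basic pycouchdb usage. Names are placeholders.
    import pycouchdb

    server = pycouchdb.Server("http://localhost:5984/")
    db = server.create("scratch")
    doc = db.save({"type": "note", "body": "hello"})  # doc gains _id/_rev
    print(db.get(doc["_id"]))
    db.delete(doc)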
