Setting up the configuration of a sqlite3 database in Python with SQLITE_CONFIG_MULTITHREAD - python

As per the sqlite3 documentation http://www.sqlite.org/compile.html#threadsafe:
"When SQLite has been compiled with SQLITE_THREADSAFE=1 or
SQLITE_THREADSAFE=2 then the threading mode can be altered at run-time
using the sqlite3_config() interface together with one of these verbs:
SQLITE_CONFIG_SINGLETHREAD
SQLITE_CONFIG_MULTITHREAD
SQLITE_CONFIG_SERIALIZED "
Can you please help me with the proper Python syntax for configuring a database with SQLITE_THREADSAFE=1 and SQLITE_CONFIG_MULTITHREAD?
Thank you for reading, and apologies for filling up Stack Overflow with such a basic problem.
BTW, if it matters at all: I have multiple threads running, and in each I have several calls to different database connections. The Python script worked well when running on the Windows machine I originally wrote it on, but now that I have migrated it to an Ubuntu machine I get "ProgrammingError: SQLite objects created in a thread can only be used in that same thread..". I tried connecting with check_same_thread=False, but then I get an error that the database is locked. This is why I need to see if the configs above may help solve my problem; I just have trouble with their syntax.
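For context: Python's built-in sqlite3 module does not expose sqlite3_config(), so the threading mode generally cannot be switched from Python code. A common workaround for the symptoms described above is either one connection per thread, or a single shared connection guarded by a lock. A minimal sketch of the lock approach, where the database path and query handling are placeholders:

import sqlite3
import threading

# Hypothetical shared connection; 'app.db' is a placeholder path.
# check_same_thread=False only disables the ownership check;
# the lock is what actually serializes access between threads.
db_lock = threading.Lock()
conn = sqlite3.connect('app.db', check_same_thread=False)

def run_query(sql, params=()):
    with db_lock:
        cur = conn.execute(sql, params)
        rows = cur.fetchall()
        conn.commit()
        return rows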

Related

Automate a manual task using Python

I have a question and hope someone can direct me in the right direction. Basically, every week I have to run a query (SSMS) to get a table containing some information (date, clientnumber, clientID, orderid etc.), and then I copy all the information from that table and paste it in a folder as a CSV file. It takes me about 15 min to do all this, but I am just thinking: can I automate this? If yes, how can I do that, and can I also schedule it so it can run by itself every week? I believe we live in a technological era and this should be done without human input; so I hope I can find someone here willing to show me how to do it using Python.
Many thanks for considering my request.
This should be pretty simple to automate:
Use some database adapter which can work with your database; for MSSQL, the one delivered by pyodbc will be fine.
Within the script, connect to the database, perform the query, and parse the output.
Save the parsed output to a .csv file (you can use the csv Python module); a sketch follows below.
Run the script as a periodic task using cron/schtasks if you work on Linux/Windows respectively.
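Putting the first three steps together, a minimal sketch; the driver name, server, database and query are placeholders to adapt to your setup:

import csv
import pyodbc

# Hypothetical connection string; adjust driver, server, database and
# authentication to your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT date, clientnumber, clientID, orderid FROM orders")  # placeholder query

with open("weekly_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor.fetchall())

conn.close()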
Please note that your question is too broad, and shows no research effort.
You will find that Python can do the tasks you desire.
There are many different ways to interact with SQL servers, depending on your implementation. I suggest you learn Python+SQL using the built-in sqlite3 library. You will want to save your query as a string and pass it into an SQL connection manager of your choice; this depends on your server setup, and there are many different SQL packages for Python.
You can use pandas for parsing the data and saving it to a .csv file (the method is literally called to_csv); see the sketch below.
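A minimal sketch of that step, with a placeholder connection and query; any DB-API connection works with read_sql:

import pandas as pd
import pyodbc

conn = pyodbc.connect("DSN=mydsn")              # hypothetical DSN
df = pd.read_sql("SELECT * FROM orders", conn)  # placeholder query
df.to_csv("weekly_report.csv", index=False)     # the to_csv mentioned above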
Python does have many libraries for scheduling tasks, but I suggest you hold off for a while. Develop your code in a way that it can be run manually, which will still be much faster/easier than doing the task without Python. Once you know your code works, you can easily implement a scheduler, as sketched below. The downside is that your program will always need to be running, and you will need to keep checking to see if it is running. Personally, I would keep it restricted to manually running the script; you could compile it to an .exe and bind it to a hotkey if you need the accessibility.
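For when that time comes, a minimal sketch using the third-party schedule package; the package choice, the timing and the run_report function are assumptions, not part of the answer above:

import time
import schedule  # third-party: pip install schedule

def run_report():
    pass  # hypothetical: the query-and-export logic would go here

schedule.every().monday.at("08:00").do(run_report)

while True:  # this loop is the "always need to be running" downside noted above
    schedule.run_pending()
    time.sleep(60)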

Zope Legacy Code - Accessing DA Functions

We're working with an older Zope version (2.10.6-final, Python 2.4.5) and a database adapter called ZEIngresDA. We have an established connection for it, and the test function shows that it is fully functional and can connect and run queries.
My job is to change the way the queries are actually executed, so that they properly parameterize variables to protect against SQL injection. With that said, I'm running into a security issue that I'm hoping someone can help with.
connection = container.util.ZEIngresDAName()
#returning connection at this point reveals it to be of type ZEIngresDA.db.DA,
#which is the object we're looking for.
connection.query("SELECT * from data WHERE column='%s';", ('val1',))
#query is a function that is included in class DA, functions not in DA throw errors.
Here we run into the problem. Testing this script brings up a login prompt that, when logged into, immediately comes up again. I recognize that this is likely some type of security setting, but I've been unable to find anything online about this issue, and documentation for a Zope this old isn't spectacular online anyway. If this sounds familiar to you or you have any ideas, please let me know.
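(One aside on the parameterization itself: in standard Python DB-API style the placeholder is left unquoted, because the driver does the quoting and escaping; whether ZEIngresDA's query() accepts parameters the same way is an assumption here.)

# Unquoted placeholder; the adapter is expected to quote the value itself.
# Whether ZEIngresDA's query() supports this exact signature is an assumption.
connection.query("SELECT * FROM data WHERE column = %s", ('val1',))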
I have some experience using Zope2, but it's hard to give a good answer with the limited information you've posted. I'm assuming here that you're using a Python script within the ZMI.
Here's a list of things I would check:
Are you logged into the root folder rather than a sub-folder in the ZMI? This could cause a login prompt, as you're requesting a resource that you do not have access to use.
In the ZMI, double-check the "Security" tab of the script you're trying to run to ensure that your user role has permission to run the script.
Whilst you're there, check the "Proxy" tab to ensure that the script itself has permission to call the functions within it.
Also worth checking that the products you're trying to use were installed by a user which is still listed in the root acl_users folder - from memory this can cause issues with the login prompt.
Best of luck to you - happy (also sad) to hear that there's at least one other Zope user out there!

Windows Azure Web sites python

After a whole load of hard work I've eventually got a hello-world Flask app running on Windows Azure. The app is built locally and runs fine; deploying it to Azure is a nightmare though. So I've sort of got two questions here.
I can't seem to get a stack trace at all. I've tried setting things in web.config, but the documentation on how to use all this stuff is just appalling; all I can find is badly written blog posts dotted around one of Microsoft's millions of blogs, which doesn't even help me fix my problem.
The second question relates to the first one: due to some horrible debugging methods (taking my application apart and commenting things out) I feel like it could be pymongo causing this. I've built it without the C extensions, it's in my site-packages, and it works on my local machine. However, without a stack trace I've just no idea how to fix this without wanting to pull my hair out.
Can anyone shed some light on this? It's really disappointing, because the rest of Azure isn't too bad, and there are far better website hosting alternatives out there, like Heroku, which are literally 10-command setups. I've been working on this all day so far...
Solved
For those who are interested: I ended up solving this problem by manually adding error handling into my Flask application, completely bypassing the IIS settings and Windows Azure configs - far too complicated with no documentation at all.
import os
from werkzeug.debug import get_current_traceback

# 'app' is the existing Flask application object from the surrounding code.
@app.errorhandler(500)
def internal_server_error(e):
    base = os.path.dirname(os.path.abspath(__file__))
    with open('%s/logs/error.log' % base, 'a') as f:
        track = get_current_traceback(skip=1, show_hidden_frames=True,
                                      ignore_system_exceptions=False)
        track.log(f)
    return 'An error has occurred', 500

Using Django and PyEnchant: Getting MemoryError on shared hosting, but not locally

I'm a beginner level user of Django and Python right now, and so far anything I do locally has immediately worked on my hosting once uploaded. My hosting is provided by Hostmonster.
However, I've just installed PyEnchant. All I use it for is basic spell checking and suggesting new words. Also, 'string' is always a string of words separated by '+'.
from enchant import Dict

def spellcheck(string):
    spellcheck = Dict("en-GB")
    suggestedword = []
    for word in string.split('+'):
        if len(word) > 2 and not spellcheck.check(word):
            suggestedword.append(spellcheck.suggest(word)[0])
        else:
            suggestedword.append(word)
    return suggestedword
Locally, using the Django dev server, all works fine. On my host I get:
Django Version: 1.4
Exception Type: MemoryError
Exception Location: /home/user/python/lib/python2.7/ctypes/__init__.py in _reset_cache, line 279
It seems to be throwing the error a few steps after 'from enchant import Dict'.
I'm guessing the dictionary is too large to store in temporary memory?
Any idea how to get around this? Please go easy on me if I'm either asking something very stupid, or in a very stupid way :).
If I'm leaving out any vital data, it's because I don't know it's important, so feel free to tell me what other information would help solve this (if it can be solved on a shared host).
Thanks in advance for any help!
EDIT1:
Using SSH, I can import and use PyEnchant:
>>> import enchant
>>> spellcheck = enchant.Dict("en-GB")
>>> spellcheck.suggest('nmae')
['name', 'mane']
Which makes me even more confused, as I have had no luck avoiding 'MemoryError' when I use it as above in my question.
EDIT2:
Still not able to figure this out. If I do 'import enchant' in any module, it seems to cause the MemoryError, yet I am able to use 'import enchant' via a remote shell and the Python interpreter.
EDIT3:
Still, after a few days of googling and trying things out, I can't get this MemoryError to go away. Has anyone seen this before with PyEnchant? I'm thinking my host is perhaps not giving enough RAM to load the PyEnchant import? Is there any way to change how memory is used by a module?
I have just had the same problem after moving my Django installation. The problem was httpd (Apache) access to the database. In my case it was SELinux, but I assume that general UNIX-type file permissions would cause a similar problem. In this instance it worked fine under the Django development server but not under my local Apache when trying out a viable production setup.
Does your host use Linux?
Could you run it under Apache locally to help determine the problem?

py2neo - neo4j.GraphDatabaseService(db_string) crashes python; no error-log

Over the last couple of days I installed Python 2.7.3 and Neo4j community edition 1.8.M01. I managed to get the embedded Python bindings to work, but as I need the py2neo REST bindings I've installed them as described at http://py2neo.org/. Moreover, I can't download directly from git due to a "Permission denied (publickey)" error, so I took the available py2neo-1.2.6.tar.gz version from the download section.
While the installation itself was not the problem, I can't get the example to work: calling neo4j.GraphDatabaseService('http://localhost:7474/db/data') crashes Python without any error message - Win7 64-bit only pops up a message that the application does not respond. Java, Python and Neo4j are all running on a 64-bit basis, and the server is accessible at http://localhost:7474. I even tried to force an output as described here: Catching a python app before it exits - but still no stacktrace or error log.
I've installed everything from scratch or via the executables provided at www.lfd.uci.edu/~gohlke/pythonlibs/ several times now, but nothing managed to get this example to work.
I have installed both tornado 2.2.1 and pycurl 7.23.1. pycurl.version_info() reveals: (3, '7.23.1', 464641, 'Windows', 28, 'OpenSSL/0.9.8s', 0, '1.2.5', ('gopher', 'http', 'https', 'imap', 'imaps', 'pop3', 'pop3s', 'rtsp', 'smtp', 'smtps'), None, 0, None) - moreover
c = pycurl.Curl()
c.setopt(c.URL, 'http://stackoverflow.com')
c.perform()
returns the content of the start page.
I've followed the stack trace via print messages into tornado.IOLoop.start() and from there into _run_callback(), where it actually executes callback() and crashes. Not sure if the callback function defined inside tornado.HTTPClient.fetch() should be called here - printing the callback results in <tornado.stack_context._StackContextWrapper object at ...>
Any suggestion on how to fix this issue?
Thanks in advance,
Roman
edit: corrected the port; it was a typo
edit2: after a longer debug session, which narrowed down the point of failure a bit, Nigel provided me with a way to deal with my issues by exchanging
self._http = http or httpclient.HTTPClient(curl_httpclient.CurlAsyncHTTPClient)
with
self._http = http or httpclient.HTTPClient()
in line 55 of rest.py. This is a workaround and does not solve the underlying problem in tornado/pycurl. The Windows management console names pycurl.pyd as the reason for the crash, and since some of the nodes (after a rarely succeeding initialization of the GraphDatabaseService) do get stored in Neo4j and the debug output below isn't shown anymore, the crash must occur between sending the request and returning to the main application. I currently believe that either the select-based poll, which is the fallback on Windows, is the reason for the crash, or that the curl handle gets shared between different threads - which should not happen (http://curl.haxx.se/libcurl/c/libcurl-tutorial.html) - and which is somehow the most comprehensible reason imo.
Sorry to hear that you're having issues with py2neo. I haven't carried out any testing under Windows, since I only run Linux, so I'm unsure whether there are any general incompatibilities there. I am also aware that error reporting is less helpful than it should be, which comes down to the amount of time I've had to work on the project.
That said, I notice that you are running on port 4747 instead of the default 7474 - or is this a typo? Have you tried your short cURL test against the root database URI directly?
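(A minimal sketch of that test, reusing the pycurl snippet from the question and the database URI quoted there; if this also hangs or crashes, the fault lies below py2neo:)

import pycurl

c = pycurl.Curl()
c.setopt(c.URL, 'http://localhost:7474/db/data')
c.perform()
c.close()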
You seem to have covered all the bases looking at the layers involved, so I'm unsure what else to look at here. I have considered adding an option to switch between the curl_httpclient and the simple_httpclient - this may give you an alternative to try. I will try to get something put up over the next few days.
Nige
