I'm creating a gui in python to manipulate stored records and I have the mysql script to set up the database and enter all information. How do I get from the mysql script to the .db file so that python can access and manipulate it?
.db files are usually SQLite databases. What you are trying to do is convert a dumped MySQL database into an SQLite database. That is not trivial, because the two SQL dialects are not fully compatible. If the dump is simple enough, you can try running each statement against an SQLite connection from your Python script. If it uses more complex features, you may want to actually connect to a (populated) MySQL database and fetch the data from there, inserting it into a local SQLite file.
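If the dump really is just plain CREATE TABLE and INSERT statements, a minimal sketch along these lines may be all you need (schema.sql is a placeholder for your MySQL script; MySQL-specific bits such as ENGINE= clauses, backticks or AUTO_INCREMENT would have to be stripped or translated first):

import sqlite3

# Create (or open) the .db file your GUI will use.
conn = sqlite3.connect("records.db")

# Read the dumped SQL and run it; executescript() executes
# multiple statements separated by semicolons.
with open("schema.sql", "r") as f:
    conn.executescript(f.read())

conn.commit()
conn.close()

After that, your GUI can open records.db with the same sqlite3 module and query or update it normally.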
The approach I am trying is to write a dynamic script that generates mirror tables in SQL Server with data types similar to the Oracle ones, and then another dynamic script to insert the records into SQL Server. The challenge I see is incompatible data types. Has anyone come across a similar situation? I am a SQL developer, but I can learn Python if someone can share similar work.
Have you tried the "SQL Server Import and Export Wizard" in SSMS?
i.e. if you create an empty SQL Server database and right-click on it in SSMS, one of the "Tasks" menu options is "Import Data...", which starts the "SQL Server Import and Export Wizard". This builds a one-off SSIS package, which can be saved if you want to re-use it.
There is a data source option for "Microsoft OLE DB Provider for Oracle".
You might have a better Oracle OLE DB Provider available also to try.
This will require the Oracle client software to be available.
I haven't actually tried this (Oracle to SQL Server), so I am not sure whether it is reasonable or not.
How many tables, columns?
The Oracle DB may also have views, triggers, constraints, indexes, functions, packages, sequence generators, and synonyms.
I used a linked server and got all the metadata for the tables from dba_tab_columns in Oracle. I wrote a script to create the tables based on that metadata; I needed to use an SSIS script task to save the CREATE TABLE script for source control. Then I wrote a SQL script to insert the data from Oracle, handling the type differences in the script.
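For illustration, the generation step itself can stay fairly small in Python. A rough sketch, assuming the (table_name, column_name, data_type, data_length) rows have already been pulled from dba_tab_columns, with a deliberately incomplete Oracle-to-SQL-Server type map:

# Rows as they might come back from dba_tab_columns:
# (table_name, column_name, data_type, data_length)
metadata = [
    ("EMPLOYEES", "EMP_ID",   "NUMBER",   22),
    ("EMPLOYEES", "EMP_NAME", "VARCHAR2", 100),
    ("EMPLOYEES", "HIRED_ON", "DATE",     7),
]

# Illustrative Oracle -> SQL Server type map; extend/adjust as needed.
TYPE_MAP = {
    "NUMBER":   lambda length: "NUMERIC(38, 10)",
    "VARCHAR2": lambda length: "VARCHAR(%d)" % length,
    "DATE":     lambda length: "DATETIME2",
}

def create_table_scripts(rows):
    tables = {}
    for table, column, dtype, length in rows:
        sql_type = TYPE_MAP[dtype](length)
        tables.setdefault(table, []).append("%s %s" % (column, sql_type))
    for table, columns in tables.items():
        yield "CREATE TABLE %s (\n    %s\n);" % (table, ",\n    ".join(columns))

for script in create_table_scripts(metadata):
    print(script)

The printed scripts can then be saved for source control and run against SQL Server.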
How do I keep my connection to a SQL Server database alive when running scripts from a Python pyodbc program?
The reason I ask is that I want to automate a task at my job that uses a temp DB to store information, and then uses Excel to refresh its data from those temp databases. However, when I run the query through pyodbc in my Python script, the temp databases disappear, and I assume that is because the connection is closed once the script finishes running.
Is there a way to keep the connection open in Python so that I can still refresh my Excel spreadsheets?
Thanks
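I can't speak to your exact setup, but if those temp objects are SQL Server temp tables, they only live as long as the connection that created them, so one approach is to keep a single pyodbc connection open for the whole session instead of letting it close when the query finishes. A minimal sketch (the server, database, source table and temp-table names are all placeholders):

import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Populate a global temp table; it survives only while this connection is open,
# but other connections (e.g. Excel's refresh) can see it in the meantime.
cursor.execute("SELECT * INTO ##refresh_data FROM dbo.SourceTable")
conn.commit()

# ... trigger the Excel refresh here ...

# Keep the connection (and the temp table) alive until the refresh is done.
time.sleep(600)   # or poll/wait for the refresh to finish

conn.close()      # temp objects are dropped when the connection closes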
After scanning the very large daily event logs using regular expressions, I have to load them into a SQL Server database. I am not allowed to create a temporary CSV file and then use the command-line BCP to load it into the SQL Server database.
Using Python, is it possible to use BCP streaming to load the data into the SQL Server database? The reason I want to use BCP is to improve the speed of the inserts.
Thanks
The BCP API is only available via the ODBC call-level interface and, in managed code, via the .NET SqlClient SqlBulkCopy class. I'm not aware of a Python extension that provides BCP API access.
You can insert many rows in a single transaction to improve performance. This can be accomplished by batching individual insert statements or by passing multiple rows at once using an XML parameter (which also reduces round-trips).
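As a rough sketch (server, table and column names are placeholders, and fast_executemany needs a reasonably recent pyodbc), batching the scanned rows into a single executemany() call with one commit already avoids most of the per-row overhead:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=logs;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.fast_executemany = True   # available in recent pyodbc versions

# rows would come from your regex scan of the event logs
rows = [
    ("2015-01-01 00:00:01", "ERROR", "something broke"),
    ("2015-01-01 00:00:02", "INFO",  "recovered"),
]

# One round of parameterised inserts, committed as a single transaction.
cursor.executemany(
    "INSERT INTO dbo.EventLog (logged_at, level, message) VALUES (?, ?, ?)",
    rows,
)
conn.commit()
conn.close()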
In my python/django based web application I want to export some (not all!) data from the app's SQLite database to a new SQLite database file and, in a web request, return that second SQLite file as a downloadable file.
In other words: The user visits some view and, internally, a new SQLite DB file is created, populated with data and then returned.
Now, although I know about the :memory: magic for creating an SQLite DB in memory, I don't know how to return that in-memory database as a downloadable file in the web request. Could you give me some hints on how I could achieve that? I would like to avoid writing anything to disk during the request.
I'm not sure you can get at the contents of a :memory: database to treat it as a file; a quick look through the SQLite documentation suggests that its API doesn't expose the :memory: database to you as a binary string, or a memory-mapped file, or any other way you could access it as a series of bytes. The only way to access a :memory: database is through the SQLite API.
What I would do in your shoes is to set up your server to have a directory mounted with ramfs, then create an SQLite3 database as a "file" in that directory. When you're done populating the database, return that "file", then delete it. This will be the simplest solution by far: you'll avoid having to write anything to disk and you'll gain the same speed benefits as using a :memory: database, but your code will be much easier to write.
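A rough sketch of that approach, assuming a ramfs/tmpfs mount at /mnt/ramdisk and a made-up records table standing in for whatever subset of data you want to export:

import os
import sqlite3
import uuid
from django.http import HttpResponse

RAMDISK = "/mnt/ramdisk"   # directory mounted as ramfs/tmpfs

def export_db(request):
    path = os.path.join(RAMDISK, "export-%s.sqlite" % uuid.uuid4().hex)
    conn = sqlite3.connect(path)
    try:
        # Populate the export database with the data to ship to the user.
        conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO records (name) VALUES (?)", ("example",))
        conn.commit()
        conn.close()

        # Read the finished file into memory and return it as a download.
        with open(path, "rb") as f:
            response = HttpResponse(f.read(), content_type="application/x-sqlite3")
        response["Content-Disposition"] = 'attachment; filename="export.sqlite"'
        return response
    finally:
        # The ramfs "file" is deleted as soon as the response is built.
        if os.path.exists(path):
            os.remove(path)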
With web content you can easily serve files as raw binary with a content type specified in the response.
Django makes this fairly easy - here's a snippet I use on one of my sites for generating a barcode for a user.
from django.http import HttpResponse

def barcode(request):
    from core import ugbarcode
    # Render the barcode image in memory as raw GIF bytes.
    bar = ugbarcode.UGBar("0001")
    binStream = bar.asString('gif')
    # Return the raw bytes with the appropriate content type.
    return HttpResponse(binStream, 'image/gif')
See also this post for more details on specifying that it is an attachment to trigger a download: Generating file to download with Django
If I want to be able to test my application against an empty MySQL database each time my application's test suite is run, how can I start up a server as a non-root user that refers to an empty MySQL database (not saved anywhere permanent, or saved to /tmp)?
My application is in Python, and I'm using unittest on Ubuntu 9.10.
Use --datadir to point mysqld at a throwaway data directory, or --basedir if you need to relocate the whole installation.
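As a sketch of how that might look from a test suite (the paths, the port and the MySQL 5.1-era mysql_install_db tool are assumptions about your setup):

import shutil
import subprocess
import tempfile

datadir = tempfile.mkdtemp(prefix="mysql-test-")

# Initialise the system tables in the throwaway directory.
subprocess.check_call(["mysql_install_db", "--datadir=%s" % datadir])

# Start a private mysqld on its own socket/port as the current (non-root) user.
server = subprocess.Popen([
    "mysqld",
    "--no-defaults",
    "--datadir=%s" % datadir,
    "--socket=%s/mysql.sock" % datadir,
    "--port=33070",
])

# ... run the unittest suite against localhost:33070 or the socket ...

server.terminate()
server.wait()
shutil.rmtree(datadir)

Wrapping the start-up in setUp() and the teardown in tearDown() gives each run a fresh, empty database.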
You can also try the BLACKHOLE and MEMORY storage engines in MySQL.