How do I keep my connection to a SQL Server database alive when running scripts from a Python pyodbc program?
The reason I ask is that I want to automate a task at my job that uses tempdb to store information, and then uses Excel to refresh its data from those temp tables. However, when I run the query through pyodbc in my Python script, the temp tables disappear immediately, and I'm assuming that's because the connection is killed once the script is done running.
Is there a way to keep the connection open in Python so that I can still refresh my Excel spreadsheets?
Thanks
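One approach is to keep the Python process, and therefore the connection, open while Excel does its refresh. Below is a minimal sketch with a placeholder connection string; it relies on the fact that a global temp table (##name) stays visible to other sessions, such as Excel's, for as long as the creating connection remains open:

import pyodbc

# Placeholder connection string; adjust driver/server/database for your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=tempdb;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# A global temp table (##) can be read by other sessions, e.g. Excel's
# data connection, while this connection remains open.
cursor.execute("CREATE TABLE ##report_data (id INT, val NVARCHAR(50))")
cursor.execute("INSERT INTO ##report_data VALUES (1, 'example')")

input("Refresh Excel now, then press Enter to close the connection...")
conn.close()  # the global temp table is dropped once no session is using it

Pausing on input() is the simplest way to hold the session open; a scheduled script could instead sleep for a fixed window or wait on a signal from the refresh job.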
Related
I want a way to automate the transaction log shipping process for SQL Server through a Python script. The secondary DB should also be created in "stand-by" mode once the log shipping configuration is done.
I'm creating a GUI in Python to manipulate stored records, and I have the MySQL script to set up the database and enter all the information. How do I get from the MySQL script to the .db file so that Python can access and manipulate it?
.db files are SQLite databases most of the time. What you are trying to do is convert a dumped MySQL database into an SQLite database. That is not trivial, as the two SQL dialects are not fully compatible. If the input is simple enough, you can try running each part of it through an SQLite connection in your Python script. If it uses more complex features, you may want to actually connect to a (filled) MySQL database and fetch the data from there, inserting it back into a local SQLite file.
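If the data itself is what matters, the second route might look something like this; the driver choice and the table/column names are assumptions, so adapt them to your schema:

import sqlite3
import pymysql  # any MySQL driver works; MySQLdb / mysql.connector are alternatives

# Pull the rows out of the (already loaded) MySQL database...
mysql_conn = pymysql.connect(host="localhost", user="me",
                             password="secret", database="mydb")
with mysql_conn.cursor() as cur:
    cur.execute("SELECT id, name FROM records")  # hypothetical table
    rows = cur.fetchall()
mysql_conn.close()

# ...and write them into a local SQLite file that Python can open directly.
lite_conn = sqlite3.connect("mydb.db")
lite_conn.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER, name TEXT)")
lite_conn.executemany("INSERT INTO records VALUES (?, ?)", rows)
lite_conn.commit()
lite_conn.close()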
The approach I am trying is to write a dynamic script that would generate mirror tables in SQL Server with data types similar to those in Oracle, and then write another dynamic script to insert the records into SQL Server. The challenge I see is incompatible data types. Has anyone come across a similar situation? I am a SQL developer, but I can learn Python if someone can share similar work.
Have you tried the "SQL Server Import and Export Wizard" in SSMS?
i.e. if you create an empty SQL Server database and right-click on it in SSMS, one of the "Tasks" menu options is "Import Data...", which starts the "SQL Server Import and Export Wizard". This builds a one-off SSIS package, which can be saved if you want to re-use it.
There is a data source option for "Microsoft OLE DB Provider for Oracle".
You might also have a better Oracle OLE DB provider available to try.
This will require the Oracle client software to be installed.
I haven't actually tried this (Oracle to SQL Server), so I'm not sure how well it works.
How many tables, columns?
The Oracle DB may also have views, triggers, constraints, indexes, functions, packages, sequence generators, and synonyms.
I used a linked server and got all the table metadata from dba_tab_columns in Oracle. I wrote a script to create the tables based on that metadata; I needed an SSIS script task to save the CREATE TABLE script for source control. Then I wrote a SQL script to insert the data from Oracle, handling the type differences in the script.
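For anyone attempting the same thing, here is a rough sketch of that metadata-driven step in Python; the type map is deliberately small and the connection details are placeholders, so treat it as a starting point rather than a complete converter:

import oracledb  # cx_Oracle on older installations

# Deliberately small Oracle-to-SQL-Server type map; extend for your columns.
TYPE_MAP = {
    "VARCHAR2": "NVARCHAR",
    "CHAR": "NCHAR",
    "NUMBER": "NUMERIC",
    "DATE": "DATETIME2",
    "CLOB": "NVARCHAR(MAX)",
}

conn = oracledb.connect(user="me", password="secret", dsn="orcl")
cur = conn.cursor()
cur.execute("""
    SELECT table_name, column_name, data_type, data_length
      FROM dba_tab_columns
     WHERE owner = :owner
     ORDER BY table_name, column_id
""", owner="MYSCHEMA")

tables = {}
for table, column, dtype, length in cur:
    sql_type = TYPE_MAP.get(dtype, "NVARCHAR(255)")
    if sql_type in ("NVARCHAR", "NCHAR"):
        sql_type = f"{sql_type}({length})"
    tables.setdefault(table, []).append(f"[{column}] {sql_type}")

# Emit one CREATE TABLE statement per table; save these for source control.
for table, cols in tables.items():
    print(f"CREATE TABLE [{table}] (\n    " + ",\n    ".join(cols) + "\n);")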
Pyodbc is correctly connecting to the same db. When I run
SELECT name FROM sys.databases;
SELECT name FROM master.dbo.sysdatabases;
I get the list of all the DBs I can see in SSMS.
When I watch SQL Server Profiler from SSMS, I can see that Pyodbc is executing its statements against the same database on the same server that I'm looking at in SSMS. I can see the CREATE TABLE and SELECT statements that I'm running in Python with Pyodbc executing on my SQL Server.
So why can I not see the tables I've created in SSMS? Why, when I run the same queries in SSMS, do I not see the table I've created using Pyodbc?
I am extremely confused. Pyodbc appears to be connecting to my local SQL Server correctly and executing SQL on it, but I'm not able to view the results in SSMS. I can find the table with Pyodbc, and Pyodbc and SSMS both tell me they're looking at the same place, but SSMS can't see anything Pyodbc has done.
EDIT: Solved
conn.autocommit = True is required for Pyodbc to make permanent changes.
SQL Server allows some DDL statements (e.g., CREATE TABLE) to be executed inside a transaction. Therefore we also have to remember to commit() those changes if we haven't specified autocommit=True on the Connection.
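In code, either of these works (the connection string is a placeholder):

import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes;"
)

# Option 1: autocommit, so every statement takes effect immediately.
conn = pyodbc.connect(conn_str, autocommit=True)
conn.cursor().execute("CREATE TABLE dbo.Example1 (id INT PRIMARY KEY)")
conn.close()

# Option 2: explicit commit; without it, the CREATE TABLE is rolled
# back when the connection closes, which is why SSMS never saw it.
conn = pyodbc.connect(conn_str)
conn.cursor().execute("CREATE TABLE dbo.Example2 (id INT PRIMARY KEY)")
conn.commit()
conn.close()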
If I want to be able to test my application against an empty MySQL database each time my application's test suite is run, how can I start up a server as a non-root user that refers to an empty MySQL database (not saved anywhere, or saved to /tmp)?
My application is in Python, and I'm using unittest on Ubuntu 9.10.
Use --datadir to relocate just the data, or --basedir to relocate the whole installation.
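To tie that into a test suite, something like the following could run before the tests; it assumes mysqld and mysql_install_db are on PATH, and option names vary between MySQL versions, so verify against yours:

import atexit
import os
import subprocess
import tempfile
import time

# Fresh, throwaway data directory under /tmp.
datadir = tempfile.mkdtemp(prefix="mysql-test-")
socket = os.path.join(datadir, "mysql.sock")

# Create the system tables in the empty data directory.
subprocess.check_call(["mysql_install_db", "--datadir=" + datadir])

# Start the server as the current (non-root) user, on a private
# socket with TCP networking disabled.
server = subprocess.Popen([
    "mysqld", "--no-defaults",
    "--datadir=" + datadir,
    "--socket=" + socket,
    "--skip-networking",
])
atexit.register(server.terminate)
time.sleep(2)  # crude; polling for the socket file would be more robust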
You can also try the BLACKHOLE and MEMORY table types in MySQL.