How to import a CSV file without using a BULK INSERT query? - Python

I have tried importing a CSV file using BULK INSERT but it failed. Is there another way, in a query, to import a CSV file without using BULK INSERT?
So far this is my query, but it uses BULK INSERT:
BULK INSERT [dbo].[TEMP]
FROM 'C:\Inetpub\vhosts\topimerah.org\httpdocs\SNASPV0374280960.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '~', ROWTERMINATOR = ' ');

My answer is to keep working with BULK INSERT:
1. Make sure you have the bulkadmin permission on the server.
2. Use a SQL authentication login for the BULK INSERT operation (for me, Windows authentication logins mostly haven't worked). If you would rather avoid BULK INSERT entirely, a Python-only alternative is sketched below.
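If BULK INSERT keeps failing (for example because of missing bulkadmin rights), a minimal sketch of a Python-only alternative follows: read the '~'-delimited file with the csv module and load it through pyodbc's executemany. The driver name, credentials and the three-column VALUES list are assumptions to adapt to the real table.

# Minimal sketch, assuming a SQL Server ODBC driver, SQL authentication and a
# three-column [dbo].[TEMP] table -- adjust all of these to your environment.
import csv
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=mydb;UID=myuser;PWD=mypassword"  # assumed credentials
)
cursor = conn.cursor()
cursor.fast_executemany = True  # batches the inserts on recent pyodbc versions

with open(r"C:\Inetpub\vhosts\topimerah.org\httpdocs\SNASPV0374280960.txt",
          newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="~")
    next(reader)  # skip the header row (the equivalent of FIRSTROW = 2)
    rows = [tuple(r) for r in reader]

cursor.executemany("INSERT INTO [dbo].[TEMP] VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()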

Related

Pyodbc Connection to Access, creating table with Pandas to_sql(method='multi') throwing error

I've installed the sqlalchemy-access dialect so that I'm able to use pandas and pyodbc to query my Access DBs.
The issue is that it's incredibly slow because it does single-row inserts. Another post suggested I use method='multi', and while it seems to work for whoever asked that question, it throws a CompileError for me:
CompileError: The 'access' dialect with current database version settings does not support in-place multirow inserts.
AttributeError: 'CompileError' object has no attribute 'orig'
import pandas as pd
import pyodbc
import urllib
from sqlalchemy import create_engine
connection_string = (
r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
rf"DBQ={accessDB};"
r"ExtendedAnsiSQL=1;"
)
connection_uri = f"access+pyodbc:///?odbc_connect={urllib.parse.quote_plus(connection_string)}"
engine = create_engine(connection_uri)
conn = engine.connect()
# Read in tableau SuperStore data
dfSS = pd.read_excel(ssData)
dfSS.to_sql('SuperStore', conn, index=False, method='multi')
Access SQL doesn't support multi-row inserts, so to_sql will never be able to support them either. That other post is probably using SQLite.
Instead, you can write the data frame to CSV and insert the CSV with a query.
Or, of course, don't read the Excel file in Python at all, but insert the Excel file directly by query. This will always be much faster, because Access can read the data directly instead of Python reading it and then transmitting it.
E.g.
INSERT INTO SuperStore
SELECT * FROM [Sheet1$] IN "C:\Path\To\File.xlsx"'Excel 12.0 Macro;HDR=Yes'
You should be able to execute this using pyodbc without needing to involve sqlalchemy. Do note the double- and single-quote combination; it can be a bit painful when embedding the statement in other programming languages.
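As a rough sketch (the driver name and the database path are assumptions), running that statement through pyodbc could look like this:

import pyodbc

conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\Path\To\Database.accdb;"  # assumed path to the Access database
    r"ExtendedAnsiSQL=1;"
)
# The statement from above; a raw triple-quoted string keeps the
# double/single quote combination and the backslashes intact.
sql = r"""
INSERT INTO SuperStore
SELECT * FROM [Sheet1$] IN "C:\Path\To\File.xlsx"'Excel 12.0 Macro;HDR=Yes'
"""

conn = pyodbc.connect(conn_str)
conn.execute(sql)
conn.commit()
conn.close()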

Importing Data into SQL Server with Visual Studio Code

I am attempting to use Visual Studio Code (VS Code) to import a CSV file into SQL Server.
I can access SQL Server in VS Code using the MSSQL extension. I am able to select, add columns, create tables, etc. I can use Python to load and manipulate the CSV file.
However, I don't know how to connect the Python and the SQL scripts, or alternatively, how to use a SQL script to query a CSV file on my local computer.
One option is to just use Python, but I've had some trouble successfully setting up that connection.
I don't know about a VS Code-specific tool, but if you can run SQL scripts then you can look at OPENROWSET. Example F in the documentation is probably a close match for what you are looking for, but there are lots of options with this command to get exactly what you want.
INSERT INTO MyTable
SELECT a.* FROM
OPENROWSET(BULK N'D:\data.csv',
           FORMATFILE = 'D:\format_no_collation.txt',
           CODEPAGE = '65001') AS a;
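If you want to drive this from Python instead of a SQL script, a minimal pyodbc sketch (server name, database, table and file paths are all assumptions) would be:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes"  # assumed connection details
)
sql = r"""
INSERT INTO MyTable
SELECT a.* FROM
OPENROWSET(BULK N'D:\data.csv',
           FORMATFILE = 'D:\format_no_collation.txt',
           CODEPAGE = '65001') AS a;
"""
cursor = conn.cursor()
cursor.execute(sql)
conn.commit()
conn.close()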
First install the MySQL connector with pip install mysql-connector-python, then use import mysql.connector to import it.
Then connect to your database like this:
myconn = mysql.connector.connect(user='root', password="1234567890", database="world", auth_plugin='mysql_native_password')
where password is your MySQL password and database is the name of the database you want to connect to.
Suppose you want to fetch all the rows from a table:
myconn = mysql.connector.connect(user='root', password="1234567890", database="world")
cur = myconn.cursor()
cur.execute("SELECT * FROM emptab")
allrows = cur.fetchall()
print(allrows)
This produces output like:
(1001, 'RamKumar', 10000)
(1002, 'Ganesh Kumar', 1000)
(1003, 'Rohan', 3450)
(1004, 'Harish Kumar', 56000)
(1005, 'Mohit', 12000)
(1006, 'Harish Nagar', 56000)
To convert the data above to CSV, first turn it into a pandas DataFrame:
import pandas as pd

df = pd.DataFrame(allrows)
df.columns = ['empno', 'name', 'salary']
Then save the result to a CSV file:
df.to_csv(r'final.csv', index=False)

SQLite Database and Python

I have been given an SQLite file to examine using Python. I have imported the sqlite3 module and attempted to connect to the database, but I'm not having any luck. I am wondering if I also have to actually open the file with "r" as well as connecting to it? Please see below, i.e. f = open("History.sqlite", "r+")
import sqlite3
conn = sqlite3.connect("history.sqlite")
curs = conn.cursor()
results = curs.execute ("Select * From History.sqlite;")
I keep getting this message when I run the query:
sqlite3.OperationalError: no such table: History.sqlite
An SQLite file is a single data file that can contain one or more tables of data. You appear to be trying to SELECT from the filename instead of the name of one of the tables inside the file.
To learn what tables are in your database you can use any of these techniques:
Download and use the command line tool sqlite3.
Download any one of a number of GUI tools for looking at SQLite files.
Write a SELECT statement against the special table sqlite_master to list the tables (see the sketch below).
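For example, a small sketch of the third option (the file name matches the question; which tables it finds depends on your database):

import sqlite3

conn = sqlite3.connect("history.sqlite")
curs = conn.cursor()

# sqlite_master holds one row per table, index, view and trigger in the file
curs.execute("SELECT name FROM sqlite_master WHERE type = 'table';")
tables = [row[0] for row in curs.fetchall()]
print(tables)

# Once you know a real table name, select from that instead of the file name
if tables:
    curs.execute("SELECT * FROM {} LIMIT 5;".format(tables[0]))
    print(curs.fetchall())

conn.close()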

Periodically outputting SQL table to a file

I'm trying to write a python script that continuously updates a file with the contents of a table in my database.
The table in the database is changing continuously, so I need to periodically update this file as well. I could do a select * query and get all the entries, but what would be great is if I could get the output table when applying the formatting of .mode column and .headers on.
What I've tried to do is create a SQL cursor and execute ".output file.txt", but that gives me an SQLite syntax error. I tried calling os.sys("sqlite3 dbname.db 'select * from table;' > file.txt") from the script, but that doesn't seem to work either ("'module' object is not callable").
Is there a way for me to get the nicely formatted sqlite table?
import subprocess

# Let the sqlite3 command-line shell do the formatting and capture its output
with open('file.txt', 'w') as f:
    subprocess.call(['sqlite3', 'dbname.db', '-csv', 'select * from table'], stdout=f)
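To keep the file updated periodically, you can wrap that call in a loop. A minimal sketch, assuming a 60-second refresh and using the shell's -column and -header switches to mimic .mode column and .headers on:

import subprocess
import time

while True:
    with open('file.txt', 'w') as f:
        subprocess.call(
            ['sqlite3', 'dbname.db', '-column', '-header', 'select * from table'],
            stdout=f)
    time.sleep(60)  # assumed refresh interval; adjust as needed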

How to convert, sort and save to CSV MS Access database .mdb file in Python

I tried researching the answer but was not able to find a good solution. I have files with a strange extension, .res. I was told that they are MS Access files. I'm not sure if they are the same as .mdb, but I was able to open them in MS Access. How can I open those files, extract the necessary data, sort that data and produce a .csv file? I tried using this script: http://mazamascience.com/WorkingWithData/?p=168 and mdbtools on Linux. I got some output with errors in the terminal, but all the files produced were blank. It could be due to encoding; I am not sure. The file is in ASCII encoding, I think.
Error: Table fo_Table
Smart_Battery_Data_Table
MCell_Aci_Data_Table
Aux_Global_Data_Table
Smart_Battery_Clock_Stretch_Table
does not exist in this database.
On Windows I have no idea how to do it. My first step for now is just to dump the necessary table from that database file into a .csv. But ideally I need the script to take the file, sort it, extract the necessary data, do some calculations (like dividing data in one column by data in another column) and save all of that into a nice .csv.
Thanks a lot. I am not an experienced programmer, so please have mercy.
Using the generic pyodbc library together with an MS Access ODBC driver should do it. This question can probably help you out.
I don't have any MS Access database files with me (it has been ages since I last had to work with them), but following the examples, your code should be something like this:
import pyodbc

db_file = r'''/path/to/the/file.res'''
user = 'admin'
password = 'password'
odbc_conn_str = 'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=%s;UID=%s;PWD=%s' % (db_file, user, password)

conn = pyodbc.connect(odbc_conn_str)
cursor = conn.cursor()
# Replace the placeholder table and column names with the ones in your file
cursor.execute("select * from some_table order by some_column")
for row in cursor.fetchall():
    print(", ".join(str(value) for value in row))
