Inserting data frames into Teradata using the teradatasql package - Python

I am using the teradatasql package, Teradata's native connector between Python and Teradata, to load data from the DB. However, I want to insert data frames I created in Python back into the DB. Is it possible to write data frames to the database using the teradatasql package?
Thanks

SQLAlchemy provides the linkage between pandas dataframes and a SQL database.
Typically, you would use the pandas dataframe to_sql method to insert the contents of a dataframe into a table in the database:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html
We offer a SQLAlchemy dialect for the Teradata SQL Driver for Python:
https://pypi.org/project/teradatasqlalchemy/
You can install it with: pip install teradatasqlalchemy
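For example, a minimal sketch of the to_sql approach (the host, credentials, database name mydb, and table name mytable below are placeholders, not values from the question):

import pandas as pd
from sqlalchemy import create_engine

# teradatasqlalchemy registers the teradatasql:// dialect with SQLAlchemy
engine = create_engine("teradatasql://user:password@host")

df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# append the dataframe's rows to mydb.mytable, creating the table if needed
df.to_sql("mytable", con=engine, schema="mydb", if_exists="append", index=False)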

Related

SQL Query Performance in Jupyter Notebook (Python Teradatasql package) vs. TeraData SQL Assistant Performance

Why would a SQL query in Teradata SQL Assistant (or any SQL manager) run faster than in a Jupyter Notebook using Python (e.g. the SQLAlchemy / teradatasql packages)?
What is the technical reasoning?

Data Pipeline using SQl and Python

I need to create a data pipeline using Python. I want to connect to MySQL from Python, read the tables into dataframes, perform pre-processing, and then load the data back into the MySQL DB. I was able to connect to the MySQL DB using mysql-connector and then pre-process the dataframes. However, I'm not able to load these dataframes from Python back into MySQL. Error: ValueError: unknown type str96 python.
Please help me with methods to complete this task.
I'm new to programming. Any help will be greatly appreciated. Thanks!
This is a pandas bug that was fixed in version 1.1.3. Upgrade the pandas package:

pip3 install --upgrade pandas
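Once pandas is upgraded, the whole read → transform → write-back loop can go through SQLAlchemy. A minimal sketch (the connection string, database name shop, and table names are placeholders, not values from the question):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+mysqlconnector://user:password@localhost/shop")

# read a table into a dataframe
df = pd.read_sql("SELECT * FROM orders", con=engine)

# ... pre-processing steps go here ...
df = df.dropna()

# write the processed dataframe back to MySQL
df.to_sql("orders_clean", con=engine, if_exists="replace", index=False)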

Retrieving Data from Elasticsearch-SQL CLI - Insert into Dataframe

I am working with Elasticsearch 6.7, which includes an Elasticsearch SQL CLI. This allows me to run more standard SQL queries. It is preferred over the API method because the query capabilities are much more robust.
I am attempting to run a query through this CLI and insert the results into a pandas data frame. Is this something I can do via the subprocess module, or is there an easier/better way? This will go into production, so it needs to run on multiple environments.
This Python program will be running on a different host than the Elasticsearch machine.
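One way to avoid shelling out to the CLI entirely: Elasticsearch 6.x exposes the same SQL engine over REST at _xpack/sql, and it can return CSV that pandas parses directly. A minimal sketch, assuming the REST port is reachable from the other host (es-host, the field names, and my_index are placeholders):

import io
import pandas as pd
import requests

resp = requests.post(
    "http://es-host:9200/_xpack/sql?format=csv",
    json={"query": "SELECT field1, field2 FROM my_index"},
)
resp.raise_for_status()

# the CSV response body loads straight into a dataframe
df = pd.read_csv(io.StringIO(resp.text))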

Insert pd.dataframe to teradata via bteq

When I want to insert data from a pandas dataframe into a Teradata table, I take the following steps:
save the pd.DataFrame as a .csv
write a .sql file with the .logon, .import vartext from file, the SQL insert query and .logoff, and save it locally
execute the .sql file with bteq via the Python subprocess module
wait until the process is finished and then continue with the script.
The issue here is that this is not really safe (e.g. files containing the database password are left lying around locally) and it depends on the CSV being saved correctly.
So what I would like to do instead is the following:
create the BTEQ query internally in the original Python process without writing anything to a local file (no .csv and no .sql)
execute this query within the Python process
I know that there is already a teradata Python package; however, it uses ODBC and not BTEQ (there is a way to use BTEQ, but it comes down to the same thing: executing a .sql file). And ODBC is not installed on the server where the script is executed.
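One way to do exactly that: build the BTEQ script in memory and pipe it to bteq on stdin via subprocess, so neither the password nor the data ever touches disk. A minimal sketch; the logon values, database/table/column names are placeholders, and the naive string interpolation is for illustration only (it does no quoting or escaping):

import subprocess
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})

# generate plain INSERT statements instead of .IMPORT-ing from a .csv file
inserts = "\n".join(
    f"INSERT INTO mydb.mytable (id, name) VALUES ({row.id}, '{row.name}');"
    for row in df.itertuples(index=False)
)

script = f""".LOGON host/user,password
{inserts}
.LOGOFF
.EXIT
"""

# bteq reads its commands from stdin, so no .sql file is written to disk
subprocess.run(["bteq"], input=script, text=True, check=True)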

pinging mysql using mysql alchemy and python

How do I ping MySQL using SQLAlchemy and Python?
Use mysqlshow to see if MySQL is running as expected.
http://dev.mysql.com/doc/refman/5.1/en/mysqlshow.html
Ensure that SQLAlchemy has support for MySQL.
http://www.sqlalchemy.org/docs/05/dbengine.html#supported-dbapis
Use a simple query through SQLAlchemy.
http://www.sqlalchemy.org/docs/05/ormtutorial.html
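A "ping" through SQLAlchemy itself is just a trivial query. A minimal sketch (the connection string is a placeholder):

from sqlalchemy import create_engine, text

engine = create_engine("mysql+mysqlconnector://user:password@localhost/mydb")

# raises OperationalError if MySQL is not reachable
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))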
