AWS CQL editor showing incorrect data when queried - python

I am querying the AWS Keyspaces CQL editor with the statement below:
SELECT * FROM [keyspace].[table] WHERE date = '2013-01-01' ALLOW FILTERING;
Ideally it should fetch the records whose date column is 2013-01-01, but in the result the date column shows as null. When I fetch the same record through PuTTY instead, I can see the original value in the date field. What could be the reason the CQL editor shows incorrect data, and how do I overcome this?

Related

Get column descriptions from Oracle using SQLAlchemy

To connect to a database (Oracle) from Python, I use the SQLAlchemy library.
With the connection.execute(query) command I get the data without problems, but I am not able to get the description that each field has.
For example, the column ID_Customer = 1111 has the description "it is the customer identifier".
How can I get that description?
Thanks!
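Oracle keeps column descriptions (created with COMMENT ON COLUMN) in the ALL_COL_COMMENTS data-dictionary view rather than in ordinary query results, so they can be fetched with a separate query. A minimal sketch, assuming SQLAlchemy; the connection string and table name are placeholders for your environment:

```python
from sqlalchemy import create_engine, text

# Column descriptions live in the ALL_COL_COMMENTS data-dictionary view.
COMMENTS_SQL = text(
    "SELECT column_name, comments "
    "FROM all_col_comments "
    "WHERE table_name = :tbl"
)

def fetch_column_comments(dsn, table_name):
    """Return a {column_name: comment} dict for one table.

    `dsn` is a placeholder connection string, e.g.
    'oracle+oracledb://user:pw@host/?service_name=orcl'.
    """
    engine = create_engine(dsn)
    with engine.connect() as conn:
        rows = conn.execute(COMMENTS_SQL, {"tbl": table_name.upper()})
        return {name: comment for name, comment in rows}
```

Recent SQLAlchemy versions can also reflect these through the inspector (`sqlalchemy.inspect(engine).get_columns(name)` includes a `comment` key for dialects that support it), if you prefer not to query the dictionary view directly.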

Issue with BigQuery table created using a DataFrame in Python

I have created a temporary BigQuery table using Python and loaded data from a pandas DataFrame (code snippet given below).
client = bigquery.Client(project)
client.create_table(tmp_table)
client.load_table_from_dataframe(df, tmp_table)
The table is being created successfully and I can run select queries from web UI.
But when I run a select query using Python:
query =f"""select * from {tmp_table.project_id}.{tmp_table.dataset_id}.{tmp_table.table_id} """
it throws the error "select * would expand to zero columns".
This is because Python is not able to detect any schema. The statement below prints an empty schema:
print(tmp_table.schema)
If I hardcode the table name like below, it works fine:
query =f"""select * from project_id.dataset_id.table_id """
Can someone suggest how I can get data from the temporary table using a select query in Python? I can't hardcode the table name, as it's created at runtime.
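Two likely causes, sketched below: `load_table_from_dataframe` returns a LoadJob that must be waited on with `.result()` (and the local `Table` object must be refreshed with `client.get_table` before `.schema` is populated), and the project attribute on a `bigquery.Table` is `project`, not `project_id`. The helper builds the fully-qualified name; a `SimpleNamespace` stands in for a real `Table` so the sketch runs without BigQuery credentials, and all names are illustrative:

```python
from types import SimpleNamespace

def fully_qualified_name(table):
    """Build 'project.dataset.table' from a bigquery.Table-like object.

    Note: the attribute on bigquery.Table is `project`, not `project_id`.
    """
    return f"{table.project}.{table.dataset_id}.{table.table_id}"

# Stand-in for a real bigquery.Table (names are illustrative only).
tmp_table = SimpleNamespace(project="my-proj", dataset_id="tmp_ds", table_id="tmp_tbl")
query = f"select * from `{fully_qualified_name(tmp_table)}`"
print(query)  # select * from `my-proj.tmp_ds.tmp_tbl`

# Against real BigQuery, the load should also be waited on and the
# table metadata refreshed before .schema is populated:
#   job = client.load_table_from_dataframe(df, tmp_table)
#   job.result()                             # block until the load finishes
#   tmp_table = client.get_table(tmp_table)  # refresh; fills in .schema
```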

When I save a PySpark DataFrame with saveAsTable in AWS EMR Studio, where does it get saved?

I can save a DataFrame using df.write.saveAsTable('tableName') and read the resulting table with spark.table('tableName'), but I'm not sure where the table is actually getting saved.
It is stored under the default location of your database.
You can get the location by running the following Spark SQL query:
spark.sql("DESCRIBE TABLE EXTENDED tableName").show(truncate=False)
You can find the Location under the # Detailed Table Information section.
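The default location comes from the warehouse directory (`spark.sql.warehouse.dir`, e.g. a local `spark-warehouse/` folder, or an S3/HDFS path on EMR). A small sketch of picking the Location row out of the collected DESCRIBE output; the sample rows here are illustrative, not real EMR output:

```python
def table_location(rows):
    """Return the Location value from DESCRIBE TABLE EXTENDED output.

    `rows` is shaped like what
    spark.sql("DESCRIBE TABLE EXTENDED tableName").collect()
    yields: (col_name, data_type, comment) tuples.
    """
    for row in rows:
        if row[0].strip() == "Location":
            return row[1]
    return None

# Illustrative rows, shaped like DESCRIBE TABLE EXTENDED output.
sample = [
    ("id", "bigint", None),
    ("# Detailed Table Information", "", ""),
    ("Location", "s3://my-bucket/warehouse/tablename", ""),
]
print(table_location(sample))  # s3://my-bucket/warehouse/tablename
```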

Long integer values in pandas dataframe change when sent to SQLite database using to_sql

I am using pandas to organize and manipulate data I am getting from the Twitter API. The 'id' key returns a very long integer (int64) that pandas has no problem handling (e.g. 481496718320496643).
However, when I send to SQL:
df.to_sql('Tweets', conn, flavor='sqlite', if_exists='append', index=False)
I now have tweet id: 481496718320496640 or something close to that number.
I converted the tweet id to str, but pandas' SQLite driver / SQLite still messes with the number. The data type in the SQLite database is [tweet_id] INTEGER. What is going on, and how do I prevent this from happening?
I have found the issue: I was using SQLite Manager (a Firefox plugin) as my SQLite client. For whatever reason, SQLite Manager displays the tweet IDs incorrectly even though they are stored properly (i.e. when I query, I get the desired values). Very strange, I must say. I downloaded a different SQLite client to view the data, and it displays properly.
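That diagnosis is easy to check with a round trip through an in-memory SQLite database. SQLite integers are 64-bit, so the full value survives storage; the sketch below assumes a modern pandas (the `flavor` argument from the question was removed in later versions):

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"tweet_id": [481496718320496643]})  # int64 column

conn = sqlite3.connect(":memory:")
df.to_sql("Tweets", conn, if_exists="append", index=False)

# Reading the value back directly shows it was stored exactly.
stored = conn.execute("SELECT tweet_id FROM Tweets").fetchone()[0]
print(stored)  # 481496718320496643
conn.close()
```

If a client GUI shows a different value, the rounding is happening in the viewer (often a float64 conversion in JavaScript-based tools), not in the database.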

Google Cloud SQL rejecting my query

I've been running a Django application on my local machine and am trying to push it to App Engine. One of the queries I was making before, which never caused any trouble, was:
ALTER TABLE Records ADD COLUMN Id
but when I try to execute this query on Cloud SQL, I get this error:
Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1
What am I doing wrong?
The syntax for ALTER TABLE is:
ALTER TABLE table_name ADD column_name datatype
You forgot to specify the datatype for Id. Something like:
ALTER TABLE Records ADD COLUMN Id INT
