I have to manually push data from my Django app to be inserted into another DB that is used by another desktop application. Nearly all the data saved in my Postgres DB for that particular id is inserted into that other DB.
But it seems one piece of data is missing. Could it be that I have not set any connection timeout for my DB in my settings.py file?
I am attempting to retrieve all documents from a specified collection in MongoDB via Cosmos DB, but I am getting back an empty list instead of the documents I've requested.
from os import environ
from pymongo import MongoClient

def retrieve_transactions(collection):
    client = MongoClient(environ.get('DB_URI'))
    db = client[str(environ.get('DB'))]
    transaction_collection = db[collection].transactions
    transaction_list = list(transaction_collection.find({}))
    client.close()
    return transaction_list
The primary URI is being retrieved from the App Services application settings. The function successfully retrieves test data when run from my IDE, as expected. This leads me to believe the issue involves Cosmos DB itself. I'm also successfully inserting documents into this database from a separate App Services instance. The database's Insights tab shows the find requests and zero failed requests.
I'm stumped. Any thoughts?
I solved this by removing the dots (".") from my collection's name.
example.com.transactions -> examplecom
Cosmos DB (MongoDB API) must not support this structure.
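For context, a small sketch (with placeholder database and URI names, not from the original post) of how pymongo composes dotted collection names, which is why the rename changes which collection the query targets:

# Sketch only: placeholder names to illustrate how pymongo resolves dotted collection names.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["mydb"]

# Bracket and attribute access compose into a single dotted collection name:
print(db["example.com"].transactions.full_name)  # -> "mydb.example.com.transactions"
print(db["examplecom"].transactions.full_name)   # -> "mydb.examplecom.transactions"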
I am connecting to a database on Azure using ActiveDirectoryPassword authentication.
cnxn = pyodbc.connect('DRIVER='+driver+';SERVER='+host+';UID='+user+';PWD='+password+';Authentication=ActiveDirectoryPassword')
It is working. The issue is that this connection string does not specify the DB, so it just connects me to master. How can I switch to the DB I need? I have tried different connection strings (with the database specified), but only this one works with ActiveDirectoryPassword.
You could try the following:
pyodbc.connect('DRIVER='+driver+';SERVER='+host+';DATABASE='+database+';UID='+user+';PWD='+password+';Authentication=ActiveDirectoryPassword')
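As a rough sketch of the full flow, with placeholder values for the driver, server, and credentials (none of these are from the original post):

# Sketch only: adjust the driver, host, database, and credentials to your setup.
import pyodbc

driver = '{ODBC Driver 17 for SQL Server}'
host = 'myserver.database.windows.net'
database = 'mydatabase'
user = 'user@mytenant.onmicrosoft.com'
password = '<password>'

cnxn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + host + ';DATABASE=' + database +
                      ';UID=' + user + ';PWD=' + password + ';Authentication=ActiveDirectoryPassword')

cursor = cnxn.cursor()
cursor.execute("SELECT DB_NAME()")  # should report the database you specified rather than master
print(cursor.fetchone()[0])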
My database on Amazon currently has only a little data in it (I am making a web app, but it is still in development), and I am looking to delete it, make changes to the schema, and put it back up again. The past few times I have done this, I have completely recreated my Elastic Beanstalk app, but it seems like there should be a better way. On my local machine, I take the following steps:
"dropdb databasename" and then "createdb databasename"
python manage.py makemigrations
python manage.py migrate
Is there something like this that I can do on Amazon to delete my database and put it back online again without deleting the entire application? When I tried just deleting the RDS instance a while ago and making a new one, I had problems with Elastic Beanstalk.
The easiest way to accomplish this is to SSH into one of your EC2 instances that has access to the RDS DB, and then connect to the DB from there. Make sure that your Python scripts can read your app configuration to access the configured DB, or add arguments for the DB hostname. To drop and create your DB, you just need to add the necessary arguments to connect to it. For example:
$ createdb -h <RDS endpoint> -U <user> -W ebdb
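Roughly mirroring the local steps from the question (ebdb is the default Elastic Beanstalk database name; substitute your own endpoint, user, and database name):

$ dropdb -h <RDS endpoint> -U <user> -W ebdb
$ createdb -h <RDS endpoint> -U <user> -W ebdb
$ python manage.py migrate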
You can also create an RDS snapshot when the DB is empty, and use the RDS instance actions Restore to Point in Time or Migrate Latest Snapshot.
I had the same problem and came up with a workaround. In your Python code, just add and run the following method when deploying your app the next time:
FOR SQLALCHEMY AFTER VERSION 2.0
from sqlalchemy import create_engine, text

tables = ["table1_name", "table2_name"]  # the names of the tables you want to delete
engine = create_engine("sqlite:///example.db")  # here you create your engine

def delete_tables(tables):
    for table in tables:
        sql = text(f"DROP TABLE IF EXISTS {table} CASCADE;")  # CASCADE deletes the table even if it is referenced by other tables
        with engine.connect() as connection:
            with connection.begin():
                connection.execute(sql)

delete_tables(tables)  # Comment this line out after running it once.
FOR SQLALCHEMY BEFORE VERSION 2 (I guess)
def delete_tables(tables):
    for table in tables:
        engine.execute(f"DROP TABLE IF EXISTS {table} CASCADE;")

delete_tables(tables)  # Comment this line out after running it once.
After you deploy and run this code once, all your tables will be deleted.
IMPORTANT: Delete or comment out this code after that, otherwise you will delete all your tables every time you deploy your code to AWS.
After navigating to a directory and typing the following commands in the Python shell:
from sqlalchemy import *
db = create_engine('sqlite:///tutorial.db')
I do not see a database file called tutorial.db in the directory. Do I have to use a different command to create the actual database file and save it?
SQLAlchemy's Engine object lazily constructs the actual underlying database connection, waiting until the first database operation before trying to connect to the database.
Try running a query or creating a table and see if the database appears.
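A minimal sketch of forcing that first operation, using the SQLAlchemy 2.x style (the table name here is just a placeholder):

from sqlalchemy import create_engine, text

db = create_engine('sqlite:///tutorial.db')

# Any real operation opens a connection, which makes SQLite create the file:
with db.connect() as conn:
    conn.execute(text("CREATE TABLE IF NOT EXISTS example (id INTEGER PRIMARY KEY)"))
    conn.commit()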
I have a database backup script in Python which inserts some data into a MySQL database.
Now my Django app uses a different database.
How can I access that other database, given that I don't have any models for it in models.py?
I want to display some of that data in the Django interface.
Yes, you can set up multiple databases and access every one of them.
You can get a cursor for a specific database connection like this:
from django.db import connections
cursor = connections['my_db_alias'].cursor()
where my_db_alias is the alias of your other database.
Check the docs:
https://docs.djangoproject.com/en/1.3/topics/db/multi-db/
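A minimal sketch of what that could look like, assuming a hypothetical alias, connection details, and table name (define the alias in settings.py, then query it with a raw cursor):

# settings.py -- sketch only: 'my_db_alias' and all connection details are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'django_db',
        # ...
    },
    'my_db_alias': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'backup_db',
        'USER': 'backup_user',
        'PASSWORD': '<password>',
        'HOST': 'localhost',
        'PORT': '3306',
    },
}

# somewhere in a view -- 'backup_table' is a hypothetical table name
from django.db import connections

cursor = connections['my_db_alias'].cursor()
cursor.execute("SELECT * FROM backup_table")
rows = cursor.fetchall()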