How to delete a pymongo.database.Database object - Python

I am using pymongo to connect to MongoDB in my code. I am writing a Google Analytics kind of application. My db structure is such that for each new website I create a new db. So when someone registers a website I create a new db with that name; however, when the website is unregistered I want that database to be deleted. I remove all the collections, but the database itself is never removed.
As a result, the list of databases is growing very large. When I do
client = MongoClient(host=MONGO_HOST,port=27017,max_pool_size=200)
client.database_names()
I see a list of more than 1000 databases. Many of them are just empty. Is there a way to remove these Mongo databases?

Use the drop_database method:
client = MongoClient(host=MONGO_HOST,port=27017,max_pool_size=200)
client.drop_database("database_name")
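If you also want to clean up the empty databases that have already accumulated, a sketch along these lines should work on recent PyMongo versions (older versions use database_names()/collection_names() instead of the list_* variants; the set of system databases to skip is an assumption, adjust it for your deployment):

from pymongo import MongoClient

client = MongoClient(host=MONGO_HOST, port=27017)

# never drop MongoDB's internal databases
SYSTEM_DBS = {"admin", "local", "config"}

for name in client.list_database_names():
    if name in SYSTEM_DBS:
        continue
    # treat a database with no collections as empty and drop it
    if not client[name].list_collection_names():
        client.drop_database(name)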

Related

Flask + SQLAlchemy - Is there a way to dynamically change which databases I have a connection to?

I am on a new project, and we are in the process of turning an old desktop application into a web application. Mind you, I didn't design any part of the old application. One of the requirements is for a client to connect to 4 separate databases (each has a specific purpose and set of data). These databases store information unique to each user's session whenever they run the application. The end user can also create a new session and choose another set of these databases to run analysis on.
I've been trying to find a way to recreate the paradigm that the desktop application currently uses, except now using Flask to serve our GUIs instead. I already attempted to use Flask-Session to store SQLAlchemy engine objects so I could later update them on the fly, but engine objects are not picklable.
I know I can write a function that will run create_engine() and return a new Session using sessionmaker() but it seems very dirty to create an engine for every single request. I'm stuck on how to proceed with being able to do this using SQLAlchemy.
An example of what I am trying to accomplish would look something like this but these database connections need to be unique per client using the web application:
db1 = create_engine('sqlite:///database1.db')
db2 = create_engine('sqlite:///database2.db')
db3 = create_engine('sqlite:///database3.db')
db4 = create_engine('sqlite:///database4.db')
# Some logic and functions
def update_dbs():
    # rebind the module-level engines; without the global statement
    # these assignments would only create local variables
    global db1, db2, db3, db4
    db1 = create_engine('sqlite:///new_database1.db')
    db2 = create_engine('sqlite:///new_database2.db')
    db3 = create_engine('sqlite:///new_database3.db')
    db4 = create_engine('sqlite:///new_database4.db')
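No answer is quoted here, but one common pattern is to cache engines in a module-level registry keyed by URL, so that create_engine() runs once per database rather than once per request. A minimal sketch, assuming the four URLs chosen for the current client are kept under a hypothetical 'db_urls' key in the Flask session:

from flask import session
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# module-level cache: one engine per database URL, shared across requests
_engines = {}

def get_session(db_url):
    # create_engine() is only called the first time a URL is seen
    engine = _engines.get(db_url)
    if engine is None:
        engine = create_engine(db_url)
        _engines[db_url] = engine
    return sessionmaker(bind=engine)()

def get_client_sessions():
    # 'db_urls' is a hypothetical session key holding the four database URLs
    # selected for the current client's analysis session
    return [get_session(url) for url in session['db_urls']]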

Find documents with MongoDB via Cosmos DB

I am attempting to retrieve all documents from a specified collection from MongoDB via Cosmos DB. I am returning an empty list instead of the documents I've requested.
def retrieve_transactions(collection):
    client = MongoClient(environ.get('DB_URI'))  # MongoClient is imported from pymongo
    db = client[str(environ.get('DB'))]
    transaction_collection = db[collection].transactions
    transaction_list = list(transaction_collection.find({}))
    client.close()
    return transaction_list
The primary URI is being retrieved from the App Service's application settings. The function successfully retrieves test data when run from my IDE, as expected. This leads me to believe the issue involves Cosmos DB itself. I'm also successfully inserting documents into this database from a separate App Service instance. The database's Insights tab shows the find requests and zero failed requests.
I'm stumped. Any thoughts?
I solved this by removing the dots (".") from my collection's name.
example.com.transactions -> examplecom
Cosmos DB's MongoDB API apparently does not support dots in collection names.
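For reference, a minimal adjustment to the function above under that constraint, assuming the collections were renamed to their dot-free form (e.g. examplecom) as described:

from os import environ
from pymongo import MongoClient

def retrieve_transactions(collection_name):
    client = MongoClient(environ.get('DB_URI'))
    db = client[str(environ.get('DB'))]
    # collections are stored under dot-free names such as "examplecom",
    # because the dotted names were rejected by Cosmos DB's MongoDB API
    transaction_collection = db[collection_name.replace('.', '')]
    transaction_list = list(transaction_collection.find({}))
    client.close()
    return transaction_list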

Python: Create and return an SQLite DB as a web request result

In my Python/Django based web application I want to export some (not all!) data from the app's SQLite database to a new SQLite database file and, in a web request, return that second SQLite file as a downloadable file.
In other words: the user visits some view and, internally, a new SQLite DB file is created, populated with data and then returned.
Now, although I know about the :memory: magic for creating an SQLite DB in memory, I don't know how to return that in-memory database as a downloadable file in the web request. Could you give me some hints on how I could achieve that? I would like to avoid writing anything to disk during the request.
I'm not sure you can get at the contents of a :memory: database to treat it as a file; a quick look through the SQLite documentation suggests that its API doesn't expose the :memory: database to you as a binary string, or a memory-mapped file, or any other way you could access it as a series of bytes. The only way to access a :memory: database is through the SQLite API.
What I would do in your shoes is to set up your server to have a directory mounted with ramfs, then create an SQLite3 database as a "file" in that directory. When you're done populating the database, return that "file", then delete it. This will be the simplest solution by far: you'll avoid having to write anything to disk and you'll gain the same speed benefits as using a :memory: database, but your code will be much easier to write.
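A rough sketch of that approach in Django (the /mnt/ramdisk mount point and the populate_export() helper are assumptions, substitute your own):

import os
import sqlite3
import tempfile
from django.http import HttpResponse

def export_db(request):
    # create the export database as a "file" on the ramfs/tmpfs mount
    fd, path = tempfile.mkstemp(suffix='.sqlite', dir='/mnt/ramdisk')
    os.close(fd)
    try:
        conn = sqlite3.connect(path)
        populate_export(conn)  # hypothetical: copy the selected rows into the export DB
        conn.close()
        with open(path, 'rb') as f:
            response = HttpResponse(f.read(), content_type='application/x-sqlite3')
        response['Content-Disposition'] = 'attachment; filename="export.sqlite"'
        return response
    finally:
        os.remove(path)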
With web content you can easily serve files as raw binary with a content type specified in the response.
Django makes this fairly easy - here's a snippet I use on one of my sites for generating a barcode for a user.
def barcode(request):
    from core import ugbarcode
    bar = ugbarcode.UGBar("0001")
    binStream = bar.asString('gif')
    return HttpResponse(binStream, 'image/gif')
See also this post for more details in specifying it is an attachment to trigger download: Generating file to download with Django

Python database WITHOUT using Django (for Heroku)

To my surprise, I haven't found this question asked elsewhere. Short version: I'm writing an app that I plan to deploy to the cloud (probably using Heroku), which will do various web scraping and data collection. The reason it'll be in the cloud is so that it can be set to run on its own every day and pull the data into its database without my computer being on, and so the rest of the team can access the data.
I used to use AWS's SimpleDB and DynamoDB, but I found SDB's storage limits too small and DDB's poor querying ability to be a problem, so I'm looking for a database system (SQL or NoSQL) that can store arbitrary-length values (and ideally arbitrary data structures) and that can be queried on any field.
I've found many database solutions for Heroku, such as ClearDB, but all of the information I've seen has shown how to set up Django to access the database. Since this is intended to be script and not a site, I'd really prefer not to dive into Django if I don't have to.
Is there any kind of database that I can hook up to in Heroku with Python without using Django?
You can get a database from Heroku without requiring your app to use Django. To do so:
heroku addons:add heroku-postgresql:dev
If you need a larger, more dedicated database, you can examine the plans at Heroku Postgres.
Within your requirements.txt you'll want to add:
psycopg2
Then you can connect/interact with it similar to the following:
import psycopg2
import os
import urlparse  # Python 2 module; on Python 3 use urllib.parse instead

# Heroku exposes the connection details in the DATABASE_URL environment variable
urlparse.uses_netloc.append('postgres')
url = urlparse.urlparse(os.environ['DATABASE_URL'])
conn = psycopg2.connect("dbname=%s user=%s password=%s host=%s" % (url.path[1:], url.username, url.password, url.hostname))
cur = conn.cursor()
query = "SELECT ...."
cur.execute(query)
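Depending on your psycopg2/libpq version, connect() may also accept the Heroku URL directly, which avoids assembling the DSN by hand:

conn = psycopg2.connect(os.environ['DATABASE_URL'])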
I'd use MongoDB. Heroku has support for it, so I think it will be really easy to start and scale out: https://addons.heroku.com/mongohq
About Python: MongoDB is a really easy database. The schema is flexible and fits really well with Python dictionaries. That's something really good.
You can use PyMongo
from pymongo import MongoClient  # Connection was removed in PyMongo 3.x
connection = MongoClient()
# Get your DB
db = connection.my_database
# Get your collection
cars = db.cars
# Create some objects
import datetime
car = {"brand": "Ford",
       "model": "Mustang",
       "date": datetime.datetime.utcnow()}
# Insert it
cars.insert_one(car)  # insert() is deprecated in favor of insert_one()
Pretty simple, huh?
Hope it helps.
EDIT:
As Endophage mentioned, another good option for interfacing with Mongo is mongoengine. If you have lots of data to store, you should take a look at that.
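A tiny sketch of the same car example with mongoengine (the field names simply mirror the dict above, adjust them to your schema):

import datetime
from mongoengine import Document, StringField, DateTimeField, connect

connect('my_database')  # connects to localhost by default

class Car(Document):
    brand = StringField(required=True)
    model = StringField()
    date = DateTimeField(default=datetime.datetime.utcnow)

Car(brand="Ford", model="Mustang").save()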
I did this recently with Flask. (https://github.com/HexIce/flask-heroku-sqlalchemy).
There are a couple of gotchas:
1. If you don't use Django you may have to set up your database yourself by doing:
heroku addons:add shared-database
(Or whichever database you want to use, the others cost money.)
2. The database URL is stored in Heroku in the "DATABASE_URL" environment variable.
In Python you can get it like this:
import os
dburl = os.environ['DATABASE_URL']
What you do to connect to the database from there is up to you, one option is SQLAlchemy.
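A minimal sketch of the SQLAlchemy option (note that newer SQLAlchemy releases only accept the postgresql:// scheme, while Heroku's DATABASE_URL has historically used postgres://, hence the replace below):

import os
from sqlalchemy import create_engine, text

db_url = os.environ['DATABASE_URL'].replace('postgres://', 'postgresql://', 1)
engine = create_engine(db_url)

with engine.connect() as conn:
    result = conn.execute(text('SELECT 1'))
    print(result.scalar())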
Create a standalone Heroku Postgres database. http://postgres.heroku.com

Can I access a different database in Django than the default database?

I have a database backup script in Python which inserts some data into a MySQL database.
My Django project uses a different database.
How can I access this other database, given that I don't have any models for it in models.py?
I want to display some of this data in the Django interface.
Yes, you can set up multiple databases and access each of them.
You can get a cursor for a specific database connection like this:
from django.db import connections
cursor = connections['my_db_alias'].cursor()
where my_db_alias is the alias of your other database.
Check the docs:
https://docs.djangoproject.com/en/1.3/topics/db/multi-db/
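For that to work, the extra database has to be declared under its alias in settings.py. A minimal sketch, with placeholder names and credentials:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'django_db',
        'USER': 'django_user',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
    'my_db_alias': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'backup_db',  # the database your backup script writes to
        'USER': 'backup_user',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
}

With that in place, connections['my_db_alias'].cursor() lets you run raw SQL against the other database without defining any models for it.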
