users
  path.common.config
    exchange_rate
    prod_data
    delivery_fee
  site.shoppingmall.settings
    description
    highlight
    prohibit_words
I tried the following, but it failed:

db = MongoClient("localhost:99999").users
config_data = get_config()  # just returns config_data (JSON)
db.path.common.config.insert(config_data)

I would like to store it this way for each customer.
What should I do?
(I would appreciate examples, since I am a beginner... (T.T))
Thank you!!
I think it failed because your data is not valid JSON. If you want to insert data from a CSV file, you can try this:
import pandas as pd
from pymongo import MongoClient
import json

def mongoimport(csv_path, db_name, coll_name, db_url='localhost', db_port=27000):
    """Imports a csv file at path csv_path to a mongo collection.
    Returns: count of the documents in the new collection.
    """
    client = MongoClient(db_url, db_port)
    db = client[db_name]
    coll = db[coll_name]
    data = pd.read_csv(csv_path)
    # Convert the dataframe to a list of dicts, one per row.
    payload = json.loads(data.to_json(orient='records'))
    # Replace any existing documents with the new data.
    coll.delete_many({})
    coll.insert_many(payload)
    return coll.count_documents({})
This code is simple to understand, and it comes from https://gist.github.com/jxub/f722e0856ed461bf711684b0960c8458
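For the per-customer structure in the question, usage might look like this (a sketch; the CSV filename and the dotted collection name are assumptions, not from the original post):

# Hypothetical: import one customer's config rows into a dedicated collection.
count = mongoimport('customer1_config.csv', 'users', 'path.common.config',
                    db_port=27017)
print(count)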
I have the following Python code for a Flask server. I am trying to have this part of the code list all my vehicles that match the horsepower that I put in through my browser. I want it to return all the car names that match the horsepower, but what I have doesn't seem to be working: it returns nothing. I know the issue is somewhere in the "for" statement, but I don't know how to fix it.
This is my first time doing something like this and I've been trying multiple things for hours. I can't figure it out. Could you please help?
from flask import Flask
from flask import request
import os, json

app = Flask(__name__, static_folder='flask')

@app.route('/HORSEPOWER')
def horsepower():
    horsepower = request.args.get('horsepower')
    message = "<h3>HORSEPOWER " + str(horsepower) + "</h3>"
    path = os.getcwd() + "/data/vehicles.json"
    with open(path) as f:
        data = json.load(f)
    for record in data:
        horsepower = int(record["Horsepower"])
        if horsepower == record:
            car = record["Car"]
    return message
The following example should meet your expectations.
from flask import Flask
from flask import request
import os, json

app = Flask(__name__)

@app.route('/horsepower')
def horsepower():
    # The URL parameter is automatically converted to an integer.
    horsepower = request.args.get('horsepower', type=int)
    # Read the file, which is located in the data folder relative to the
    # application root directory.
    path = os.path.join(app.root_path, 'data', 'vehicles.json')
    with open(path) as f:
        data = json.load(f)
    # Build a list with the names of the cars whose horsepower
    # matches the passed parameter.
    cars = [record['Car'] for record in data if horsepower == int(record["Horsepower"])]
    # The result is then output separated by commas.
    return f'''
        <h3>HORSEPOWER {horsepower}</h3>
        <p>{','.join(cars)}</p>
    '''
There are many different ways of writing the loop; I used a short list comprehension in the example. Written out in full, it looks like this:
cars = []
for record in data:
    if horsepower == int(record['Horsepower']):
        cars.append(record['Car'])
As a tip: pay attention to when you overwrite the value of a variable by reusing its name, as happens with horsepower inside your loop.
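A minimal illustration of that pitfall, taken from the loop in your question:

horsepower = request.args.get('horsepower')   # the query parameter
for record in data:
    horsepower = int(record["Horsepower"])    # overwrites the parameter on every iteration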
So far, in my code below, I have managed to store my data into MongoDB.
Now I want to be able to retrieve the data I have stored.
As you can see, I have been trying but keep on getting an error.
With BSON, do I have to first decode the data to retrieve it from MongoDB?
Any help would be greatly appreciated!
(Apologies for the messy code, I am just practicing through trial and error.)
import json
from json import JSONEncoder
import pymongo
from pymongo import MongoClient
from bson.binary import Binary
import pickle

# Do this for each
client = MongoClient("localhost", 27017)
db = client['datacampdb']
coll = db.personpractice4_collection  # creating a collection in the database
# my collection on the database is called personpractice4_collection

class Person:
    def __init__(self, norwegian, dame, brit, german, sweed):
        self.__norwegian = norwegian
        self.__dame = dame
        self.__brit = brit
        self.__german = german  # private variable
        self.__sweed = sweed
    # create getters and setters later to make OOP

personone = Person("norwegian", "dame", "brit", "german", "sweed")

class PersonpracticeEncoder(JSONEncoder):
    def default(self, o):
        return o.__dict__

# Encode Person object into JSON
personpracticeJson = json.dumps(personone, indent=4, cls=PersonpracticeEncoder)
practicedata = pickle.dumps(personpracticeJson)
coll.insert_one({'bin-data': Binary(practicedata)})

# print(personpracticeJson)
# print(db.list_collection_names())  # get the names of my collections in DB

# retrieving data from mongodb
# Retrieving a Single Document with find_one()
print(({'bin-data': Binary(practicedata)}).find_one())  # not working
The find_one method should be called on a collection; {'bin-data': Binary(practicedata)} is a query to find a document.

coll.find_one({'bin-data': Binary(practicedata)})

Which means: find a document in the collection coll where bin-data is equal to Binary(practicedata).
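And to answer the BSON question: since the value was pickled before insertion, it has to be unpickled after retrieval. A minimal sketch, reusing coll, Binary, pickle, and practicedata from your code:

# Look the document up by its binary payload.
doc = coll.find_one({'bin-data': Binary(practicedata)})

# The stored bytes are a pickled JSON string, so reverse the steps:
# unpickle first, then (optionally) parse the JSON.
personpractice_json = pickle.loads(doc['bin-data'])
print(personpractice_json)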
I have a model that I would like to populate with CSV data. I have followed a few tutorials on this and am now trying to do it on my own. This is the code that I have so far:
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project_name.settings')
import django
django.setup()

# Import models
from app_name.models import Instance

# Third Party Imports
import pandas as pd

# Pull csv data into script
file = 'path/to/file/filename.csv'
collected_data = pd.read_csv(file, index_col='Timestamp')
# this is a dataframe with three columns and a datetime index

for timestamp, row in collected_data.iterrows():
    info1 = row[0]
    info2 = row[1]
    info3 = row[2]
    inst = Instance.objects.get_or_create(timestamp=timestamp,
                                          info1=info1,
                                          info2=info2,
                                          info3=info3)[0]
I am getting the following error, which I don't really understand, as I am quite new to Django:
SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
Let me know if there is any more information needed for an MCVE.
Try putting your script into a management command; then you will have no problem running the ORM standalone.
You can find an example here:
https://gist.github.com/kharandziuk/08de1d24845b05dfaa6acfbfda3cd28e
There is a long answer here: stackoverflow.com/questions/937742/use-django-orm-as-standalone
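A minimal sketch of such a management command, assuming the file is placed at app_name/management/commands/import_instances.py (the command name and the CSV argument are assumptions, not from your post):

import pandas as pd
from django.core.management.base import BaseCommand

from app_name.models import Instance

class Command(BaseCommand):
    help = 'Populate Instance objects from a CSV file'

    def add_arguments(self, parser):
        parser.add_argument('csv_path', help='Path to the CSV file')

    def handle(self, *args, **options):
        collected_data = pd.read_csv(options['csv_path'], index_col='Timestamp')
        for timestamp, row in collected_data.iterrows():
            # Same logic as your original script; Django configures the
            # settings and the ORM for us before handle() runs.
            Instance.objects.get_or_create(
                timestamp=timestamp,
                info1=row[0],
                info2=row[1],
                info3=row[2],
            )

You would then run it with: python manage.py import_instances path/to/file/filename.csv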
While building a Flask website, I'm using an external JSON feed to feed the local MongoDB with content. This feed is parsed, and keys from the JSON are repurposed as keys in Mongo.
One of the available keys from the feed is called "img_url" and contains, guess what, a URL to an image.
Is there a way, in Python, to mimic a PHP-style cURL? I'd like to grab that key, download the image, store it somewhere locally while keeping the other associated keys, and have that as an entry in my db.
Here is my script so far:
import json
import sys
import urllib2
from datetime import datetime

import pymongo
import pytz

from utils import slugify
# from utils import logger

client = pymongo.MongoClient()
db = client.artlogic

def fetch_artworks():
    # logger.debug("downloading artwork data from Artlogic")
    AL_artworks = []
    AL_artists = []
    url = "http://feeds.artlogic.net/artworks/artlogiconline/json/"
    while True:
        f = urllib2.urlopen(url)
        data = json.load(f)
        AL_artworks += data['rows']
        # logger.debug("retrieved page %s of %s of artwork data" % (data['feed_data']['page'], data['feed_data']['no_of_pages']))
        # Stop: we are at the last page
        if data['feed_data']['page'] == data['feed_data']['no_of_pages']:
            break
        url = data['feed_data']['next_page_link']
    # Now we have a list called 'AL_artworks' in which all the descriptions
    # are stored. We are going to put them into the mongoDB database,
    # making sure that if the artwork is already encoded (an object with the
    # same id already is in the database) we update the existing description
    # instead of inserting a new one ('upsert').
    # logger.debug("updating local mongodb database with %s entries" % len(AL_artworks))
    for artwork in AL_artworks:
        # Mongo does not like keys that have a dot in their name;
        # this property does not seem to be used anyway, so let us
        # delete it:
        if 'artworks.description2' in artwork:
            del artwork['artworks.description2']
        # upsert into the database:
        db.AL_artworks.update({"id": artwork['id']}, artwork, upsert=True)
        # artwork['artist_id'] is not functioning properly
        db.AL_artists.update({"artist": artwork['artist']},
                             {"artist_sort": artwork['artist_sort'],
                              "artist": artwork['artist'],
                              "slug": slugify(artwork['artist'])},
                             upsert=True)
    # db.meta.update({"subject": "artworks"}, {"updated": datetime.now(pytz.utc), "subject": "artworks"}, upsert=True)
    return AL_artworks

if __name__ == "__main__":
    fetch_artworks()
First, you might like the requests library.
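For instance, a short sketch with requests (the destination directory and the use of the feed's img_url key are assumptions based on your question):

import os
import uuid

import requests

def fetch_image(url, dst_dir='/var/www/static'):
    # Stream the response so large images are not held in memory at once.
    response = requests.get(url, stream=True)
    response.raise_for_status()
    dst = os.path.join(dst_dir, uuid.uuid1().hex)
    with open(dst, 'wb') as fo:
        for chunk in response.iter_content(chunk_size=4096):
            fo.write(chunk)
    return dst

# e.g. inside the artwork loop:
# artwork['local_image'] = fetch_image(artwork['img_url'])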
Otherwise, if you want to stick to the stdlib, it will be something along the lines of:
import os
import uuid

def fetchfile(url, dst):
    fi = urllib2.urlopen(url)
    # Copy the response to disk in chunks.
    with open(dst, 'wb') as fo:
        while True:
            chunk = fi.read(4096)
            if not chunk:
                break
            fo.write(chunk)

fetchfile(
    artwork['img_url'],
    os.path.join('/var/www/static', uuid.uuid1().hex)
)
With the correct exception catching (I can elaborate if you want, but I'm sure the documentation will be clear enough).
You could put fetchfile() into a pool of async jobs to fetch many files at once.
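A sketch of that idea with a thread pool (a hypothetical illustration; it reuses fetchfile() from above and assumes the AL_artworks list from your script):

from multiprocessing.dummy import Pool  # thread-based pool, same API as the process one

def download(url):
    fetchfile(url, os.path.join('/var/www/static', uuid.uuid1().hex))

urls = [artwork['img_url'] for artwork in AL_artworks if 'img_url' in artwork]
pool = Pool(8)            # download up to eight images concurrently
pool.map(download, urls)
pool.close()
pool.join()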
https://docs.python.org/2/library/json.html
https://docs.python.org/2/library/urllib2.html
https://docs.python.org/2/library/tempfile.html
https://docs.python.org/2/library/multiprocessing.html
Is there an equivalent function in PyMongo or mongoengine to MongoDB's mongodump? I can't seem to find anything in the docs.
Use case: I need to periodically backup a remote mongo database. The local machine is a production server that does not have mongo installed, and I do not have admin rights, so I can't use subprocess to call mongodump. I could install the mongo client locally on a virtualenv, but I'd prefer an API call.
Thanks a lot :-).
For my relatively small database, I eventually used the following solution. It's not really suitable for big or complex databases, but it suffices for my case. It dumps every collection's documents as JSON into the backup directory. It's clunky, but it does not rely on anything other than pymongo.
from os.path import join

import pymongo
from bson.json_util import dumps

def backup_db(backup_db_dir):
    client = pymongo.MongoClient(host=<host>, port=<port>)
    database = client[<db_name>]
    authenticated = database.authenticate(<uname>, <pwd>)
    assert authenticated, "Could not authenticate to database!"
    collections = database.collection_names()
    for i, collection_name in enumerate(collections):
        col = getattr(database, collections[i])
        collection = col.find()
        jsonpath = collection_name + ".json"
        jsonpath = join(backup_db_dir, jsonpath)
        with open(jsonpath, 'wb') as jsonfile:
            jsonfile.write(dumps(collection))
The accepted answer no longer works. Here is the revised code:
from os.path import join

import pymongo
from bson.json_util import dumps

def backup_db(backup_db_dir):
    client = pymongo.MongoClient(host=..., port=..., username=..., password=...)
    database = client[<db_name>]
    # collection_names() was removed in PyMongo 4; use list_collection_names().
    collections = database.list_collection_names()
    for collection_name in collections:
        col = database[collection_name]
        collection = col.find()
        jsonpath = join(backup_db_dir, collection_name + ".json")
        with open(jsonpath, 'wb') as jsonfile:
            jsonfile.write(dumps(collection).encode())

backup_db('.')
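A matching restore sketch under the same assumptions (one .json file per collection, as written by backup_db above; the connection placeholders follow the same style):

from os import listdir
from os.path import join

import pymongo
from bson.json_util import loads

def restore_db(backup_db_dir):
    client = pymongo.MongoClient(host=..., port=..., username=..., password=...)
    database = client[<db_name>]
    for filename in listdir(backup_db_dir):
        if not filename.endswith('.json'):
            continue
        collection_name = filename[:-len('.json')]
        with open(join(backup_db_dir, filename)) as jsonfile:
            # bson.json_util.loads parses the extended-JSON dump
            # back into a list of documents.
            documents = loads(jsonfile.read())
        if documents:
            database[collection_name].insert_many(documents)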