So, I wanted to move my local database to Firebase using the Realtime Database feature. However, I am struggling a bit, as I am completely new to Firebase, and I am using the library called 'pyrebase'.
What I am looking for:
database {
userid1 {
mail:"email1"
},
userid2 {
mail:"email2"
},
userid3 {
mail:"email3"}
...
}
My first question: how do I create such a structure using Firebase?
Once such a structure exists in the Realtime Database, how do I update the data of a specific userid?
If I want to delete a user from the system using just their userid, how is that done?
And lastly, which is very important: if I want to retrieve a user's email by looking it up through their userid, how is that retrieved?
What I have done so far:
I have created the Realtime Database
Downloaded and integrated the credentials
P.S. I am literally in need of sources related to Firebase.
So, I have finally figured out how to do all of these as shown below:
Inserting Data:
from firebase import firebase

firebase = firebase.FirebaseApplication('https://xxxxx.firebaseio.com/', None)
data = {
    'Name': 'Vivek',
    'RollNo': 1,
    'Percentage': 76.02
}
# post() works like a push: it stores the data under an auto-generated
# unique ID below /Students and returns that ID
result = firebase.post('/python-sample-ed7f7/Students/', data)
print(result)
Retrieving Data:
from firebase import firebase
firebase = firebase.FirebaseApplication('https://xxxx.firebaseio.com/', None)
# passing None as the child name returns the whole /Students node as a dict
result = firebase.get('/python-sample-ed7f7/Students/', None)
print(result)
Updating Data:
from firebase import firebase
firebase = firebase.FirebaseApplication('https://xxxx.firebaseio.com/', None)
# put() overwrites the named child: here it sets Percentage to 79
# under the record with the given push ID
firebase.put('/python-sample-ed7f7/Students/-LAgstkF0DT5l0IRucvm', 'Percentage', 79)
print('updated')
Delete Data:
from firebase import firebase
firebase = firebase.FirebaseApplication('https://xxxx.firebaseio.com/', None)
# delete() removes the child with the given push ID from /Students
firebase.delete('/python-sample-ed7f7/Students/', '-LAgt5rGRlPovwNhOsWK')
print('deleted')
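And since I originally mentioned Pyrebase, here is a rough sketch of the same four operations using Pyrebase instead, laid out in the userid/mail structure from the question. The config values are placeholders, so treat this as an untested sketch:
import pyrebase

config = {
    "apiKey": "your-api-key",                        # placeholder
    "authDomain": "xxxxx.firebaseapp.com",           # placeholder
    "databaseURL": "https://xxxxx.firebaseio.com/",  # placeholder
    "storageBucket": "xxxxx.appspot.com"             # placeholder
}
firebase = pyrebase.initialize_app(config)
db = firebase.database()

# create: write a mail under a specific userid
db.child("database").child("userid1").set({"mail": "email1"})
# update a specific userid's data
db.child("database").child("userid1").update({"mail": "new_email1"})
# delete a user by their userid
db.child("database").child("userid2").remove()
# retrieve a user's email via their userid
mail = db.child("database").child("userid3").child("mail").get().val()
print(mail)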
Campaign Monitor is a service where we can send emails to a set of subscribers. We can create multiple lists within Campaign Monitor and add the required users to these lists as subscribers (to whom we can send personalised emails). So, here I am trying to send a set of customers' details, like their name, email, total_bookings, and first_booking, to a Campaign Monitor list using the API in Python so that I can send emails to this set of users.
More details on campaign monitor: https://www.campaignmonitor.com/api/v3-3/subscribers/
I am new to using Campaign Monitor. I have searched the documentation and a lot of posts and blogs for examples of how to push data with multiple custom fields to Campaign Monitor using Python. By default, a list in Campaign Monitor has a name and an email that can be added, but I want to add other details for each subscriber (here, total_bookings and first_booking), and Campaign Monitor provides custom fields to achieve this.
For instance:
I have my data stored in a Redshift table named customer_details with the fields name, email, total_bookings, first_booking. I was able to retrieve this data from the Redshift table using Python with the following code.
# Get the data from the above table:
cursor = connection.cursor()
cursor.execute("select * from customer_details")
creator_details = cursor.fetchall()
# Now I have the data as a list of tuples in creator_details
Now I want to push this data to a list in Campaign Monitor using the API, e.g. requests.put('https://api.createsend.com/api/../.../..'). But I am not sure how to do this. Can someone please help me here?
A 400 status indicates invalid parameters.
First, we can see the request should be POST, not PUT, so change requests.put to requests.post.
Next, the variables all need to be sent either as www-form data or as a JSON body, not sure which.
And lastly, you almost certainly cannot authenticate with basic auth ... but maybe.
Something like the following:
some_variables = some_values
...
header = {"Authorization": f"Bearer {MY_API_KEY}"}
data = {
    "email": customer_email,
    "CustomFields": [{"key": "total_bookings", "value": customer_details2}]
}
url = f'https://api.createsend.com/api/v3.3/subscribers/{my_list_id}.json'
res = requests.post(url, json=data, headers=header)
print(res.status_code)
try:
    print(res.json())
except Exception:
    print(res.content)
After looking more into the API docs, it looks like this is the expected request:
{
"EmailAddress": "subscriber#example.com",
"Name": "New Subscriber",
"MobileNumber": "+5012398752",
"CustomFields": [
{
"Key": "website",
"Value": "http://example.com"
},
{
"Key": "interests",
"Value": "magic"
},
{
"Key": "interests",
"Value": "romantic walks"
}
],
"Resubscribe": true,
"RestartSubscriptionBasedAutoresponders": true,
"ConsentToTrack":"Yes"
}
which we can see has "EmailAddress", not "email", so you would need to do
data = {"EmailAddress": customer_email, "CustomFields": [{"Key": "total_bookings", "Value": customer_details2}]}
(note the capitalized "Key"/"Value", matching the schema above)
I'm not sure if all of the fields are required or not ... so you may also need to provide "Name", "MobileNumber", "Resubscribe", etc.
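Putting those corrections together, a minimal sketch of adding one subscriber per Redshift row might look like the following. Assumptions here: the API key and list ID placeholders, the row layout (name, email, total_bookings, first_booking), and that HTTP basic auth with the API key as the username works, which is how the Campaign Monitor docs describe authenticating:
import requests

API_KEY = "your-campaign-monitor-api-key"  # placeholder
LIST_ID = "your-list-id"                   # placeholder
url = f"https://api.createsend.com/api/v3.3/subscribers/{LIST_ID}.json"

for name, email, total_bookings, first_booking in creator_details:
    payload = {
        "EmailAddress": email,
        "Name": name,
        "CustomFields": [
            {"Key": "total_bookings", "Value": total_bookings},
            {"Key": "first_booking", "Value": first_booking},
        ],
        "Resubscribe": True,
        "ConsentToTrack": "Yes",
    }
    # basic auth: API key as the username, any string as the password
    res = requests.post(url, json=payload, auth=(API_KEY, "x"))
    print(email, res.status_code)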
And looking at "Getting Started", it looks like they publish a Python package to make interfacing simpler:
http://campaignmonitor.github.io/createsend-python/
which makes it as easy as
import createsend
cli = createsend.CreateSend({"api_key":MY_API_KEY})
cli.subscriber.add(list_id,"user@email.com","John Doe",custom_fields,True,"No")
(which I found here https://github.com/campaignmonitor/createsend-python/blob/master/test/test_subscriber.py#L70)
My task: copy the information in the 'ACCOUNT' field of one document (123) and paste it into another document (456). Both documents are in the same database and the same collection. This is triggered by a client (Windows) using pymongo:
# ... creating the target_collection object ...
info_id = 123 # the document with this id contains the info which shall be written into the new entry
write_data = target_collection.find_one({'_id': info_id})['ACCOUNT'] # extract the info
update_query = {'$addToSet': {'ACCOUNT': {'$each': [write_data]}}}
target_doc = {'_id': 456}
target_collection.update_one(target_doc, update_query, upsert=True)
This code snippet works totally fine, but it is doing two round trips (fetch the info from 123, write it to 456), which makes the transaction not atomic and slows it down significantly. With the Mongo Shell I am able to do this:
db.audit.update(
{_id: 456},
{$addToSet: {'ACCOUNT': {$each: db.getCollection('audit').findOne({_id: 123})["ACCOUNT"] } } }
);
How can I achieve the same in pymongo? Is it possible to send a query string to the server?
Versions I use:
MongoDB 4.4 (on RHEL 7)
pymongo 3.11.4 (on Windows 10)
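One way to keep the copy entirely server-side in a single command is an aggregation pipeline ending in $merge, available since MongoDB 4.2. This is a sketch that assumes ACCOUNT is an array (which the $addToSet usage implies) and that target_collection is the audit collection from the shell example:
# one aggregate command: the read of 123 and the write to 456
# both happen on the server
target_collection.aggregate([
    {'$match': {'_id': 123}},                # source document
    {'$project': {'_id': {'$literal': 456},  # target document id
                  'ACCOUNT': 1}},
    {'$merge': {
        'into': 'audit',
        'on': '_id',
        # union the incoming ACCOUNT array into the existing one,
        # mimicking $addToSet with $each
        'whenMatched': [
            {'$set': {'ACCOUNT': {'$setUnion': ['$ACCOUNT', '$$new.ACCOUNT']}}}
        ],
        'whenNotMatched': 'insert'
    }}
])
Note that the shell snippet above is not fully server-side either: the shell evaluates findOne() on the client before sending the update, so it makes the same two round trips.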
I'm using the firebase-admin Python SDK to store data in Firestore. However, when I call the add() method on my collection reference, the data gets stored in Firestore but its format is not right.
message = {
u'avatar_url': get_avatar_url(user),
u'timestamp': get_utc_timestamp(),
u'event': u'adding_avatar'
}
def add_data(message):
    coll_ref.add(message)
When I look into the Firestore collection, I find that the data stored is of the format:
{
avatar_url: aHR0cDovL2,
timestamp: 1569404588,
event: 'adding_avatar'
}
I expect the data to look like this (when viewed from the Firebase console):
{
avatar_url: 'https://localhost:8000/images/avatar.png',
timestamp: 1569404588,
event: 'adding_avatar'
}
The type of avatar_url is blob. I don't know why this is happening. What is the standard way to add a Python dictionary as a payload to Firestore with the appropriate data types?
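For what it's worth, aHR0cDovL2 is the Base64 encoding of a string starting with http://, which suggests the value arrives as raw bytes: the Python client maps bytes to Firestore's blob type (and the console displays blobs Base64-encoded), while str values are stored as strings. A sketch of the likely fix, assuming get_avatar_url(user) returns bytes:
avatar = get_avatar_url(user)
message = {
    # decode bytes to str so Firestore stores a string, not a blob
    u'avatar_url': avatar.decode('utf-8') if isinstance(avatar, bytes) else avatar,
    u'timestamp': get_utc_timestamp(),
    u'event': u'adding_avatar'
}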
I have a chat app using Firebase that keeps getting a
setValue at x failed: DatabaseError: permission denied
error every time I type a message.
I set my Database to be public already:
service cloud.firestore {
match /databases/{database}/documents {
match /{allPaths=**} {
allow read, write: if request.auth.uid != null;
}
}
}
Is it something from within my chat reference?
private void displayChat() {
ListView listOfMessage = findViewById(R.id.list_of_message);
Query query = FirebaseDatabase.getInstance().getReference();
FirebaseListOptions<Chat> options = new FirebaseListOptions.Builder<Chat>()
.setLayout(R.layout.list_item)
.setQuery(query, Chat.class)
.build();
adapter = new FirebaseListAdapter<Chat>(options) {
@Override
protected void populateView(View v, Chat model, int position) {
//Get reference to the views of list_item.xml
TextView messageText, messageUser, messageTime;
messageText = v.findViewById(R.id.message_text);
messageUser = v.findViewById(R.id.message_user);
messageTime = v.findViewById(R.id.message_time);
messageText.setText(model.getMessageText());
messageUser.setText(model.getMessageUser());
messageTime.setText(DateFormat.format("dd-MM-yyyy (HH:mm:ss)", model.getMessageTime()));
}
};
listOfMessage.setAdapter(adapter);
}
Your code is using the Firebase Realtime Database, but you're changing the security rules for Cloud Firestore. While both databases are part of Firebase, they are completely different, and the server-side security rules for one don't apply to the other.
When you go to the database panel in the Firebase console, you most likely end up in the Cloud Firestore rules.
If you are on the Cloud Firestore rules in the Firebase console, you can change to the Realtime Database rules by clicking Cloud Firestore BETA at the top, and then selecting Realtime Database from the list.
You can also directly go to the security rules for the Realtime Database, by clicking this link.
The security rules for the Realtime Database that match what you have are:
{
"rules": {
".read": "auth.uid !== null",
".write": "auth.uid !== null"
}
}
This will grant any authenticated user full read and write access to the entire database. Read my answer to this question for more on the security/risk trade-off of such rules: Firebase email saying my realtime database has insecure rules.
Change this:
request.auth.uid != null
to:
request.auth.uid == null
or define a proper auth mechanism before starting the conversation, where the user is identified by a userID.
Is it possible to extract data (to Google Cloud Storage) from a shared dataset (where I only have view permissions) using the client APIs (Python)?
I can do this manually using the web browser, but cannot get it to work using the APIs.
I have created a project (MyProject) and a service account for MyProject to use as credentials when creating the service using the API. This account has view permissions on a shared dataset (MySharedDataset) and write permissions on my Google Cloud Storage bucket. If I attempt to run a job in my own project to extract data from the shared project:
job_data = {
'jobReference': {
'projectId': myProjectId,
'jobId': str(uuid.uuid4())
},
'configuration': {
'extract': {
'sourceTable': {
'projectId': sharedProjectId,
'datasetId': sharedDatasetId,
'tableId': sharedTableId,
},
'destinationUris': [cloud_storage_path],
'destinationFormat': 'AVRO'
}
}
}
I get the error:
googleapiclient.errors.HttpError: https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json
returned "Value 'myProjectId' in content does not agree with value
'sharedProjectId'. This can happen when a value set through a parameter
is inconsistent with a value set in the request.">
Using the sharedProjectId in both the jobReference and sourceTable I get:
googleapiclient.errors.HttpError: https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json
returned "Access Denied: Job myJobId: The user myServiceAccountEmail
does not have permission to run a job in project sharedProjectId">
Using myProjectId for both, the job immediately comes back with a status of 'DONE' and with no errors, but nothing has been exported. My GCS bucket is empty.
If this is indeed not possible using the API, is there another method/tool that can be used to automate the extraction of data from a shared dataset?
* UPDATE *
This works fine using the API explorer running under my GA login. In my code I use the following method:
service.jobs().insert(projectId=myProjectId, body=job_data).execute()
and removed the jobReference object containing the projectId:
job_data = {
'configuration': {
'extract': {
'sourceTable': {
'projectId': sharedProjectId,
'datasetId': sharedDatasetId,
'tableId': sharedTableId,
},
'destinationUris': [cloud_storage_path],
'destinationFormat': 'AVRO'
}
}
}
but this returns the error
Access Denied: Table sharedProjectId:sharedDatasetId.sharedTableId: The user 'serviceAccountEmail' does not have permission to export a table in
dataset sharedProjectId:sharedDatasetId
My service account is now an owner on the shared dataset and has edit permissions on MyProject. Where else do permissions need to be set, or is it possible to use the Python API with my GA login credentials rather than the service account?
* UPDATE *
Finally got it to work. How? Make sure the service account has permissions to view the dataset (and if you don't have access to check this yourself and someone tells you that it does, ask them to double check/send you a screenshot!)
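On the earlier symptom of the job coming back 'DONE' with nothing exported: jobs.insert only returns the initial job resource, so it is worth polling jobs.get and checking status.errorResult before trusting the result. A sketch, assuming the same service object and job_data as above:
import time

job = service.jobs().insert(projectId=myProjectId, body=job_data).execute()
job_id = job['jobReference']['jobId']
while True:
    status = service.jobs().get(projectId=myProjectId, jobId=job_id).execute()['status']
    if status['state'] == 'DONE':
        if 'errorResult' in status:
            # e.g. the Access Denied error above surfaces here
            raise RuntimeError(status['errorResult'])
        break
    time.sleep(2)
print('export finished')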
After trying to reproduce the issue, I was running into the parse errors.
I did, however, play around with the API on the Developer Console [2] and it worked.
What I did notice is that the request body below had a different format than the documentation on the website, as it had single quotes instead of double quotes.
Here is the code that I ran to get it to work.
{
  "configuration": {
    "extract": {
      "sourceTable": {
        "projectId": "sharedProjectID",
        "datasetId": "sharedDataSetID",
        "tableId": "sharedTableID"
      },
      "destinationUri": "gs://myBucket/myFile.csv"
    }
  }
}
HTTP Request
POST https://www.googleapis.com/bigquery/v2/projects/myProjectId/jobs
If you are still running into problems, you can try the jobs.insert API on the website [2] or try the bq command-line tool [3].
The following command can do the same thing:
bq extract sharedProjectId:sharedDataSetId.sharedTableId gs://myBucket/myFile.csv
Hope this helps.
[2] https://cloud.google.com/bigquery/docs/reference/v2/jobs/insert
[3] https://cloud.google.com/bigquery/bq-command-line-tool