How to update and delete CSV data in a Flask website - Python

I'm a beginner in Flask and would like to know how to update and delete CSV data using a Flask website.
My CSV database is:
name
Mark
Tom
Matt
I would like to know how I could add, update, and delete data on a csv file using a flask website.

Try out pandas
# Load the Pandas libraries with alias 'pd'
import pandas as pd
# Read data from file 'filename.csv'
data = pd.read_csv("filename.csv")
# Preview the first 5 lines of the loaded data
data.head()
Check out more in the pandas documentation.

Why do you need to store or process the data in a CSV file? You will probably end up needing conditional CRUD, which is a very troublesome way to do this.
You could use SQLite or a similar database, which is far more efficient than a CSV file: SQLite
Even so, if you are determined to use a CSV file, maybe this helps: CRUD with CSV
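If you do switch to SQLite as suggested, the same CRUD operations need no file rewriting at all. A minimal stdlib sketch (the table and column names simply mirror the sample CSV and are assumptions):

```python
import sqlite3

def init_db(path="people.db"):
    """Open (or create) the database and make sure the table exists."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS people (name TEXT)")
    return conn

def add(conn, name):
    conn.execute("INSERT INTO people (name) VALUES (?)", (name,))
    conn.commit()

def update(conn, old, new):
    conn.execute("UPDATE people SET name = ? WHERE name = ?", (new, old))
    conn.commit()

def delete(conn, name):
    conn.execute("DELETE FROM people WHERE name = ?", (name,))
    conn.commit()
```

The parameterized `?` placeholders also protect you from SQL injection when the values come from a web form, which matters in a Flask app.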

Related

Add new data to existing SharePoint xlsx files

I would like to write into an existing xlsx file in SharePoint. Is that even possible? My data is in the form of a dataframe, and if possible I would like to just append the dataframe instead of overwriting the whole xlsx file. I tried the xlsxwriter library but did not get anywhere. Any help would be appreciated.
#Coder123,
As you're using SP Online, you could update the content of an xlsx file stored in SPO via the MS Graph API:
https://learn.microsoft.com/en-us/graph/api/table-update?view=graph-rest-1.0&tabs=http
Through this API you can update the table/worksheet of an xlsx file. Microsoft also offers a Python library:
https://github.com/microsoftgraph/msgraph-sdk-python-core
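As a rough sketch of what such a Graph call looks like, the helper below only builds the URL and JSON body for appending DataFrame-style rows to a workbook table; it does not send anything. The drive-item id, table name, and token handling are placeholders, and the exact endpoint shape should be checked against the Graph documentation linked above:

```python
import json

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def build_append_request(item_id, table_name, rows):
    """Return (url, body) for a POST that appends `rows` to an xlsx table.

    `rows` is a list of lists, one inner list per table row, e.g. the
    result of df.values.tolist() on a pandas DataFrame.
    """
    url = f"{GRAPH_ROOT}/me/drive/items/{item_id}/workbook/tables/{table_name}/rows"
    body = json.dumps({"values": rows})
    return url, body
```

You would then send it with an authenticated HTTP client, e.g. `requests.post(url, data=body, headers={"Authorization": "Bearer <token>", "Content-Type": "application/json"})`, with the token obtained via MSAL or the msgraph SDK.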

Orange3 and Python Data Export/Import

Orange 3 seems to be a great tool. However, I had trouble saving and reading my files in my Python code (JupyterLab with pandas). There is a way to save the file in an Orange pickle format, but I had no luck finding a way to properly open the file.
If there is a better way of exporting data tables as well, that would be much appreciated.
You can easily open a pickle file that holds the data table with:
from Orange.data import Table
table = Table("pickled_file.pkl")
You can save the Orange Table in various formats (.tab, .csv, .pickle, ...). Just use the `save` method on the table.
Here is an example with the Iris dataset.
from Orange.data import Table
table = Table("iris")
table.save("iris.csv")

How do I use Python pandas to read an already open Excel sheet

Assuming I have an Excel sheet already open and make some changes in the file, then use pd.read_excel to create a dataframe based on that sheet, I understand that the dataframe will only reflect the data in the last saved version of the Excel file. I would have to save the sheet first for the pandas dataframe to take the change into account.
Is there any way for pandas or another Python package to read an open Excel file and refresh its data in real time (without saving or closing the file)?
Have you tried the mitosheet package? It doesn't answer your question directly, but it lets you work on pandas dataframes as you would in Excel sheets. That way you can edit the data on the fly as in Excel and still get a pandas dataframe as a result (while generating the code to perform the same operations in Python). Does this help?
There is no way to do this. The table is not saved to disk, so pandas can not read it from disk.
Be careful not to over-engineer; that being said:
Depending on your use case, if this is really needed, I could theoretically imagine a Robotic Process Automation tool, e.g. BluePrism, UiPath, or Power Automate, continuously loading live data from Excel into a Python environment with a pandas DataFrame and then changing it.
This would have to be a really important process, though; otherwise licensing RPA is not worth it here.
df = pd.read_excel("path")
If you run the program in the Spyder IDE, you can inspect the data in the Variable Explorer.
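Since pandas can only see what is on disk, one pragmatic workaround is to poll the file's modification time and reload only after each save. A stdlib sketch of that pattern (CSV is used here so the example stays dependency-free, but the same idea applies to `pd.read_excel`):

```python
import csv
import os

def reload_if_changed(path, last_mtime):
    """Return (rows, mtime).

    Re-reads the file only when its modification time has moved past
    `last_mtime`; otherwise returns (None, last_mtime) unchanged.
    """
    mtime = os.path.getmtime(path)
    if mtime == last_mtime:
        return None, last_mtime
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    return rows, mtime
```

Calling this in a loop (or from a timer) gives you "refresh on save" behavior; nothing can surface edits that Excel has not yet written to disk.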

Lost data in BigQuery

I wrote a Python 3 script to process some CSV files, but I have a problem with the data.
I send them to the stream with the insert_rows function. If I only import one file, I get the same rows in the CSV and in BigQuery, but when I import more files, BigQuery ends up with fewer rows than the CSV files, yet insert_rows doesn't return any errors.
errors = connection.client.insert_rows(table_ref, info, selected_fields=schema) # API request
Thanks for the help
The issue was fixed by adding a new unique column to the CSV file, using the Python standard library to generate a unique id for every row.
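The fix described above can be sketched with the stdlib `csv` and `uuid` modules (the input/output file names and the `row_id` column name are assumptions):

```python
import csv
import uuid

def add_id_column(src, dst):
    """Copy a CSV, prepending a unique `row_id` column to every data row."""
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.reader(fin)
        writer = csv.writer(fout)
        header = next(reader)
        writer.writerow(["row_id"] + header)
        for row in reader:
            writer.writerow([str(uuid.uuid4())] + row)
```

A unique id per row also makes it possible to de-duplicate on the BigQuery side, since streaming inserts can otherwise silently drop rows it considers duplicates.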

What is the best way to get CSV file data into a database? Python (Falcon), Angular, MySQL

Currently I am working on a project where the user can upload a CSV file and the data in it is stored in a database. The back end is built on the Falcon framework, with API requests sent from an Angular 4 client.
On the Angular side, I can parse the CSV file data into JSON; packages such as ngx-papaparse are available for this. Is there another way around this, such as receiving the CSV file in Python and processing its data to be stored in the database? If so, what is the best way to do it?
Python is quite flexible for processing data; you can also convert your CSV data to JSON using pandas:
import pandas as pd
df = pd.read_csv('filename.csv')
df.to_json('jsonfilename.json')
You can also store CSV files in MySQL by importing them with Python; please read http://www.vallinme.com/v1/?p=95
Petl is another library for this purpose.
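As a minimal Python-side sketch of the CSV-to-database step, the helper below bulk-loads a CSV into a table. SQLite from the stdlib stands in for MySQL here so the example is self-contained; with MySQL you would swap in a driver such as `mysql-connector-python` but keep the same `executemany` pattern. The `uploads` table name is an assumption:

```python
import csv
import sqlite3

def load_csv(conn, csv_path):
    """Bulk-insert every row of the CSV into an `uploads` table.

    Creates the table from the CSV header if it does not exist yet;
    all columns are stored as TEXT for simplicity.
    """
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join(f'"{c}" TEXT' for c in header)
        conn.execute(f"CREATE TABLE IF NOT EXISTS uploads ({cols})")
        marks = ", ".join("?" for _ in header)
        conn.executemany(f"INSERT INTO uploads VALUES ({marks})", reader)
    conn.commit()
```

In a Falcon resource you would save the uploaded file from the request body to a temporary path and then call this helper, keeping all of the parsing on the Python side instead of in Angular.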
