How to reformat dataframe in Pandas using Python?

I'm pretty new to Pandas and Python, but have a solid coding background. I've decided to pick this up because it will help me automate certain financial reports at work.
To give you some background on my issue: I'm taking a PDF and using Tabula to convert it into a CSV file, which works fine but introduces certain formatting issues. The reports come in as roughly 60-page PDF files, which I export to CSV and then try to manipulate in Python using Pandas.
The issue: when I reformat the data, I get a CSV file that looks something like this -
The issue here is that certain tables are shifting, and I think it is due to the number of pages and the multiple headings within them.
Would it be possible for me to reformat this data using Pandas, and basically create a set of rules for how it gets reformatted?
Basically, I would like to shift the rows that are misplaced back into their respective places based on something like blank spaces.
Is it possible for me to delete rows containing certain strings, i.e. to remove extra/unnecessary headers?
Can I somehow save the 'Total' data at the bottom by searching for the row with 'Total' and placing it somewhere else?
In essence, is there a way to partition this data by a set of commands (without specifying row numbers - because this changes daily) and then reposition it accordingly so that I can manipulate the data however necessary?
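For illustration, this is roughly the kind of rule-based clean-up I have in mind (the file name, header strings, and column positions below are made up and would need to match the real export):

import pandas as pd

# Hypothetical file and header text - adjust to the real Tabula export.
df = pd.read_csv("report_export.csv", header=None)

# Delete rows that are just repeated page headers (any cell containing these strings).
header_strings = ["Account", "Page", "Report Date"]
is_header = df.apply(lambda row: row.astype(str).str.contains("|".join(header_strings)).any(), axis=1)
df = df[~is_header]

# Drop rows that are entirely blank.
df = df.dropna(how="all")

# Pull the 'Total' row(s) out by matching on the first column, then remove them
# from the main table so they can be placed wherever they are needed.
is_total = df[0].astype(str).str.strip().str.startswith("Total")
totals = df[is_total]
df = df[~is_total].reset_index(drop=True)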

Related

Fill out Excel Template with Python

I'm having trouble finding a solution for filling out an Excel template using Python. I currently have a pandas dataframe, and I use openpyxl to write the necessary data to specific rows and cells in a for loop. The issue is that in my next project several of the cells I have to write to are not contiguous, so for example instead of going A1, A2, A3 it can go A1, A5, A9. This time, listing the cells like I did in the past would be impractical.
So I was looking for something that would work similarly to a VLOOKUP in Excel: Python would match the necessary row and column in the template and drop the information there. I know I might need to use different commands.
I added a picture below as an example. I would need to drop values into the empty cells, and ideally Python would read "USA" and "Revenue" and know to drop that information into cell B2. I know I might need something to map it; I am just not sure how to start or whether it is even possible.
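To illustrate, this is the kind of label-based lookup I am imagining (the template name, label positions, and example value are only assumptions):

import openpyxl

# Assumed layout: row labels such as "USA" in column A, column headers such as
# "Revenue" in row 1, in a template called template.xlsx.
wb = openpyxl.load_workbook("template.xlsx")
ws = wb.active

def find_cell(ws, row_label, col_label):
    # Build lookups from the template's own labels, like a two-way VLOOKUP.
    rows = {ws.cell(row=r, column=1).value: r for r in range(2, ws.max_row + 1)}
    cols = {ws.cell(row=1, column=c).value: c for c in range(2, ws.max_column + 1)}
    return ws.cell(row=rows[row_label], column=cols[col_label])

# Example: write a revenue figure into the "USA"/"Revenue" cell (B2 in the picture).
find_cell(ws, "USA", "Revenue").value = 1234.5
wb.save("template_filled.xlsx")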

Pandas Pivot table and Excel's style cells

I get data measurements from instruments. These measurements depend on several parameters, and a pivot table is a good way to represent the data. Every measurement can be associated with a scope screenshot to make it more explicit. I get all the data in the following CSV format:
The number of measurements and parameters can change.
I am trying to write a Python script (for now with the Pandas lib) that lets me create a pivot table in Excel. With Pandas, I can color the data in and out of a defined range. However, I would also like to create a link on every cell that sends me to the corresponding screenshot. This is where I am stuck.
I would like a result like the following (but with the link to the corresponding screenshot):
I did find a way to add the link to all the cells, using the =HYPERLINK() Excel function together with Pandas' apply() function.
However, I then cannot apply conditional formatting with XlsxWriter anymore, because the cells no longer have numerical content.
I could apply the conditional formatting first and then iterate through the whole sheet to add the links, but it would be a total mess to retrieve the relation between the data and the different parameter measurements.
I would appreciate your help in finding ideas and efficient ways to do this.
XlsxWriter has a function called write_url, but you must apply write_url first, while creating the new worksheet, and then use openpyxl to insert your pandas dataframe:
1) create the worksheet and insert write_url
2) use openpyxl to write the data into the already formatted cells.
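A rough sketch of that direction, with made-up data and screenshot paths. One variation worth noting: instead of write_url, writing an =HYPERLINK() formula via write_formula and passing the numeric value as the cached result keeps the cell numeric, so a 'cell'-type conditional format from XlsxWriter can still be applied:

import pandas as pd
import xlsxwriter

# Hypothetical pivoted measurements and screenshot naming scheme - adjust to the real data.
pivot = pd.DataFrame({"param_A": [1.2, 3.4], "param_B": [2.1, 0.9]},
                     index=["meas_1", "meas_2"])

wb = xlsxwriter.Workbook("measurements.xlsx")
ws = wb.add_worksheet("pivot")

for c, col in enumerate(pivot.columns, start=1):
    ws.write(0, c, col)

for r, (name, row) in enumerate(pivot.iterrows(), start=1):
    ws.write(r, 0, name)
    for c, col in enumerate(pivot.columns, start=1):
        value = row[col]
        link = f"screenshots/{name}_{col}.png"
        # =HYPERLINK with a cached numeric value keeps the cell usable for
        # number-based conditional formatting.
        ws.write_formula(r, c, f'=HYPERLINK("{link}", {value})', None, value)

# Highlight values outside an example range of 1.0-3.0.
red = wb.add_format({"bg_color": "#FFC7CE"})
ws.conditional_format(1, 1, len(pivot), len(pivot.columns),
                      {"type": "cell", "criteria": "not between",
                       "minimum": 1.0, "maximum": 3.0, "format": red})
wb.close()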

Reading a table with blank cells with tabula-py

I am trying to load a large table (an example is attached) from a form 10-K into Python using tabula-py. The table does not have clear borders and has a lot of blank cells, which causes several issues.
My code is
import tabula
df = tabula.read_pdf("firm_xxx_10K.pdf", pages="100-101", guess=True, stream=True, columns=[144, 210, 300, 340, 380, 420, 450])
With stream=True, I get all the data, but the information in cells that span multiple rows is recognized as separate entries. With lattice=True, the cells spanning multiple rows are correctly recognized as one cell, but the results miss a lot of observations.
Is there a better way to set the options? I tried many options, but now I am stuck. Any help is much appreciated.
Best,
Example of the Table I am Trying to Read
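For what it's worth, the kind of workaround I could fall back on is reading with stream=True and then collapsing the wrapped rows in pandas. The sketch below assumes a continuation row is one whose numeric columns are all blank; that rule and the column positions would need adjusting to the actual table:

import tabula
import pandas as pd

dfs = tabula.read_pdf("firm_xxx_10K.pdf", pages="100-101", stream=True, guess=True,
                      columns=[144, 210, 300, 340, 380, 420, 450])
df = dfs[0] if isinstance(dfs, list) else dfs

# Assumption: a row whose numeric columns are all blank only carries wrapped text
# belonging to the previous row's first column.
numeric_cols = df.columns[1:]
is_continuation = df[numeric_cols].isna().all(axis=1)
group_id = (~is_continuation).cumsum()

merged = df.groupby(group_id).agg(
    {df.columns[0]: lambda s: " ".join(s.dropna().astype(str)),
     **{c: "first" for c in numeric_cols}}
).reset_index(drop=True)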

Selecting a specific row entry in excel and printing the entire row if condition is satisfied

I have a CSV file in Excel that looks like this (sorry, I can't place pictures in the post yet):
RAW DATA
Here is what I want to do:
1) I want Python to read through column B and find the phrase RCOM (highlighted).
2) Once it finds that phrase, I want it to show me the date entry and the corresponding amounts, which I have made bold and colored red.
3) Hopefully the output would read something like this:
30-08-2018 273585.8
27-09-2018 275701.4
25-10-2018 276780
*If possible, putting the entries on separate lines would be great, but if not that's fine too.
4) I will then store these in a variable of my choice and print them out as needed.
I know the column where the word RCOM is located and the column where the amounts I want are located (B and K respectively).
I am very new to coding; any help will be appreciated. I'm just trying to automate the boring stuff :)
Thanks
You can generate a dataframe using the read_csv function from the pandas library. Once you have the data in dataframe format, you can reach the data mentioned in your question by filtering it according to your requirements. I know this answer is very generic and does not provide a code suggestion, but I believe all the information you need can be found on the following page: https://pandas.pydata.org/pandas-docs/stable/10min.html
For importing data, the Getting Data In/Out section will be helpful, and for filtering (masking) the data, the Selection section will help.
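A minimal sketch of that filtering approach. The file name and column positions are assumptions (column B is index 1 and column K is index 10 when the CSV is read without headers; the date is assumed to sit in the first column) - adapt them to the real file:

import pandas as pd

df = pd.read_csv("raw_data.csv", header=None)

# Keep only the rows where column B (index 1) contains the phrase RCOM.
rcom_rows = df[df[1].astype(str).str.contains("RCOM", na=False)]

# Print the date and the amount from column K (index 10), one entry per line.
for _, row in rcom_rows.iterrows():
    print(row[0], row[10])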

Change columns when writing a fixed amount of data per column using openpyxl

I am writing a program that will process a bunch of data and fill a column in Excel. I am using openpyxl, strictly in write-only mode. Each column will have a fixed size of 75 cells, and each cell in the row will have the same formula applied to it. However, I can only process the data one column at a time; I cannot process an entire row and then iterate through all of the rows.
How can I write to a column, then move onto the next column once I have filled the previous one?
This is a rather open-ended question, but may I suggest using Pandas. Without some kind of example of what you are trying to achieve it's difficult to make a specific recommendation, but I have used pandas a lot in the past for automating the processing of Excel files. Basically, you would load your data into a Pandas DataFrame, do your transformations/calculations, and when you are done write it back to the same or a new Excel file (or a number of other formats).
Because the OOXML file format is row-oriented, you must write in rows in write-only mode; it is simply not possible otherwise.
What you might be able to do is create some kind of transitional object that you can fill column by column and then use to write to openpyxl. A Pandas DataFrame would probably be suitable for this, and openpyxl supports converting these into rows.
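A short sketch of that idea, with hypothetical column names and a placeholder formula: build the columns in a DataFrame, then let openpyxl's dataframe_to_rows feed the write-only worksheet row by row:

import pandas as pd
from openpyxl import Workbook
from openpyxl.utils.dataframe import dataframe_to_rows

# Build the data one column at a time (75 cells per column, as in the question).
data = pd.DataFrame()
for col_name in ["col_1", "col_2", "col_3"]:
    # Placeholder formula referring to an assumed 'Data' sheet; replace with the real one.
    data[col_name] = [f"='Data'!A{i}*2" for i in range(1, 76)]

# Write it out row by row, which is what write-only mode requires.
wb = Workbook(write_only=True)
ws = wb.create_sheet("output")
for row in dataframe_to_rows(data, index=False, header=True):
    ws.append(row)
wb.save("output.xlsx")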
