Sort and order output data [closed] - python

I have developed a small program that reads and outputs the live data of a machine. However, the data is output in a confusing, unordered way.
My question is: what exactly can I do to sort the output data, e.g. into a table?
Best

You wrote many (topic, payload) tuples to a file, test_ViperData.txt.
Good.
Now, to view them in an ordered manner, just call /usr/bin/sort:
$ sort test_ViperData.txt
If you wish to do this entirely within Python,
without e.g. creating a subprocess,
you might want to build up a long list of result tuples.
results = []
...
results.append((topic, payload))
...
print(sorted(results))
The blank-delimited file format you are using is OK, as far as it goes.
But you might prefer to use comma-delimited CSV format.
Then you could view the file within spreadsheet software,
or could manipulate it with the standard csv module
or various pandas tools.
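
For example, a minimal sketch that writes the records as comma-delimited CSV instead (the file name and column names here are my assumption):

import csv

# hypothetical results, as in the snippet above
results = [("motor/temp", "71.3"), ("motor/rpm", "1500")]

with open("test_ViperData.csv", "w", newline="") as fp:
    writer = csv.writer(fp)
    writer.writerow(["topic", "payload"])  # assumed column names
    writer.writerows(sorted(results))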
When you review the text file next week,
you might find it more useful if
each record includes a timestamp:
import datetime as dt
...
results.append((topic, payload, dt.datetime.now()))
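
If you do go the CSV route, pandas can then load and sort the saved records in two lines (the file and column names follow the sketch above, so they are assumptions):

import pandas as pd

df = pd.read_csv("test_ViperData.csv")
print(df.sort_values(["topic", "payload"]))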

Related

What are in-memory data structures in python? [closed]

I was given a coding challenge in which I have to parse a text file and build "a data structure in memory to work with". I then have to perform descriptive statistics on it. So far I've parsed the text and built a dictionary containing all the needed data.
I haven't used SQLite or anything similar because they specifically asked for data structures, not databases.
I am not sure if a dictionary is correct here. So my question is: what are in-memory data structures in Python? I've searched the web but couldn't find a definitive answer.
An in-memory data structure is one that is stored in RAM (as opposed to saved to disk or "pickled"). If you're not using external programs that store to disk for you (like databases) and not explicitly storing to disk, you've created an in-memory data structure. Dicts, lists, sets, etc. are all data structures, and if you don't save them to disk they're in-memory data structures.
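
As a minimal sketch, assuming a hypothetical data.txt with one "name value" pair per line, a plain dict works fine as the in-memory structure for descriptive statistics:

from statistics import mean

values_by_name = {}
with open("data.txt") as fp:
    for line in fp:
        name, value = line.split()
        values_by_name.setdefault(name, []).append(float(value))

# the dict lives entirely in RAM; nothing is written back to disk
for name, values in values_by_name.items():
    print(name, mean(values), min(values), max(values))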

What is the best way to write and store data in python 3 [closed]

I am trying to store data from stock market transactions I make. I want a file like an Excel workbook, with every buy or sell price neatly listed. I have looked at some options, including pickle, xlwt, and pandas. However, I cannot find any that can write onto a pre-existing file.
The information I will be storing will look like this:
DATE, TIME, STOCK_INDICATOR, BUY/SELL, PRICE
I will need the program to be able to write new rows every time a purchase is made.
Pandas has a method to write data to a CSV file, if that is what you mean (pandas.DataFrame.to_csv). You can specify write mode to rewrite all the data, or append mode if you want to update the file with new rows.
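A rough sketch of the append approach (the file name is a placeholder; the columns are taken from the question):

import os
import pandas as pd

row = pd.DataFrame([{
    "DATE": "2020-01-02", "TIME": "09:31",
    "STOCK_INDICATOR": "AAPL", "BUY/SELL": "BUY", "PRICE": 296.24,
}])

path = "transactions.csv"  # placeholder file name
# only write the header the first time; afterwards, append rows
row.to_csv(path, mode="a", index=False, header=not os.path.exists(path))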
I guess there is no single best way.
I would probably do it in a list/dictionary fashion, although I'm not sure this is the greatest way.
Something like:
my_purchases = []

# if a new purchase is made, a small function just appends
# a new entry (one dict per purchase) to the list:
def add_purchase(date, time, stock, side, price):
    my_purchases.append({"date": date, "time": time,
                         "stock": stock, "side": side, "price": price})

read an excel file in python by importing csv [closed]

How can I read an Excel file in Python by importing csv? How can I read the columns and the rows? For instance, I want to write code that classifies a row as accepted if the value in a certain column is greater than a given number, and as not accepted otherwise.
I do not want to read the file in reverse order.
Use the csv package. Check out the documentation here:
https://docs.python.org/3/library/csv.html.
Specifically for your issue, I'd do something like this:
import csv

with open(myfile) as fp:
    rows = list(csv.reader(fp))

for row in rows:
    # let's grab all columns beyond the fifth
    for column in row[5:]:
        # do stuff
        pass
Personally, I like to label my columns. I'd recommend exploring csv.DictReader so that you can access your data by label instead of relying on numeric column positions, which are easy to mix up.
Just make sure to export your Excel file as a csv.
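
For instance, a small DictReader sketch for the accept/reject check (the file name, the "score" column, and the threshold are all my assumptions):

import csv

with open("data.csv", newline="") as fp:
    for row in csv.DictReader(fp):
        # assumes the first row of data.csv is a header containing "score"
        verdict = "accepted" if float(row["score"]) > 50 else "not accepted"
        print(row["score"], verdict)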

How to write unit tests for text parser? [closed]

For background, I am a largely self-taught Python developer whose only formal training is a few CS courses in school.
In my job right now, I am working on a Python program that automatically parses information from a very large text file (thousands of lines) that is the output of a simulation package. I would like to practice test-driven development (TDD), but I am having a hard time understanding how to write proper unit tests.
My trouble is that the outputs of some of my functions (units) are massive data structures that are parsed versions of the text file. I could create those expected outputs by hand and test against them, but it would take a lot of time. The whole point of a parser is to save time and create structured outputs. The only testing I've done so far is manual trial and error, which is also cumbersome.
So my question is, are there more intuitive ways to create tests for parsers?
Thank you in advance for any help!
Usually parsers are tested using a regression testing system: you create sample input sets and verify that the output is correct, then store each input and its expected output in a library of test cases. Each time you modify the code, you run the regression system over the library to see if anything changed.
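A minimal pytest sketch of that idea, assuming a hypothetical parse_file() entry point and a tests/samples directory pairing each input with a known-good .json file:

import json
from pathlib import Path

import pytest

from myparser import parse_file  # hypothetical: your parser's entry point

SAMPLES = sorted(Path("tests/samples").glob("*.txt"))

@pytest.mark.parametrize("sample", SAMPLES, ids=lambda p: p.name)
def test_parser_matches_golden_output(sample):
    # each sample input sits next to a .json file holding the expected result
    expected = json.loads(sample.with_suffix(".json").read_text())
    assert parse_file(sample) == expected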

Web scraping a forum [closed]

My concern revolves around how to store the data. I'm trying to retrieve data from certain threads of a forum. I want to be able to plot as much information as I want, so I don't want to store everything in a rigid structure; I want to be able to use as much of the info as I can (which timezones are most active, which timezones are most active per user, keywords through the years, points across posters, etc.).
How should I store this? A tree with upper nodes being pages and lower nodes being posts? How do I store that tree in a way that is easy* to read?
* easy as in encapsulated in a format I could easily export to other tools.
I suggest scraping only the posts (why would you ever need the pages?) into JSON, which you can keep in PostgreSQL in a jsonb field; it allows querying your JSON flexibly.
Later you'd write one or more scripts that iterate over the posts and do useful things like cleaning up the data, normalizing values, aggregating stats, etc.
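A rough sketch with psycopg2, assuming a posts table with a single jsonb column named data (the DSN and the table are placeholders):

import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=forum")  # placeholder DSN
post = {"thread": "42", "author": "alice", "ts": "2019-01-01T12:00:00", "body": "hello"}

with conn, conn.cursor() as cur:
    # assumes: CREATE TABLE posts (id serial PRIMARY KEY, data jsonb);
    cur.execute("INSERT INTO posts (data) VALUES (%s)", [Json(post)])
    # jsonb operators let you filter on fields inside the document
    cur.execute("SELECT data->>'body' FROM posts WHERE data->>'author' = %s", ("alice",))
    print(cur.fetchall())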
See also
Someone wrote a post about PostgreSQL and querying JSON
