How to iterate through different filenames in Python

I have assigned different files to variables. Now I want to perform some operations by iterating over those variables. For example:
reduced_file1 = 'names.xlsx'
reduced_file2 = 'surnames.xlsx'
reduced_file3 = 'city.xlsx'
reduced_file4 = 'birth.xlsx'
The operations I want to iterate (with a for loop) are:
xls = pd.ExcelFile(reduced_file1)
xls = pd.ExcelFile(reduced_file2)
xls = pd.ExcelFile(reduced_file3)
xls = pd.ExcelFile(reduced_file4)
...and so on
Basically, each time only the name of the variable changes: reduced_file(i).
Thanks

files = ['names.xlsx', 'surnames.xlsx', 'city.xlsx', 'birth.xlsx']
for file in files:
    xls = pd.ExcelFile(file)
You can also build varying string names by using f-strings:
for i in range(4):
    print(f"this is number {i}")

Related

python: glob, loops and local variables

My Python script loops over many files in the directory and performs some operations on each file, storing the results for each file in specific variables defined per file, using the exec() function:
# consider all files within the current directory having the .pdb extension
pdb_list = glob.glob('*.pdb')
# make a list of the file names
list = []
# loop over the list and do some operation with each file
for pdb in pdb_list:
    # take the file name without its extension
    pdb_name = pdb.rsplit(".", 1)[0]
    # save the file name
    list.append(pdb_name)
    # set the variable u_{pdb_name}, which will be associated with some
    # function that does something on the corresponding file
    exec(f'u_{pdb_name} = Universe(pdb)')
    exec(f'print("This is %s computed from %s" % (u_{pdb_name}, pdb_name))')
    # plot a graph using matplotlib
    # exec(f'plt.savefig("rmsd_traj_{pdb_name}.png")')
Basically, in my file-looping scripts I tend to use exec(f'...') whenever I need to create a new variable whose name is built from part of some existing variable (like the name of the current file, u_{pdb_name}).
Is it possible to do a similar task with variable names while avoiding exec()?
You could try something like this:
lst = []
universes = {}
# loop over the list and do some operation with each file
for pdb in pdb_list:
    # take the file name without its extension
    pdb_name = pdb.rsplit(".", 1)[0]
    # save the file name
    lst.append(pdb_name)
    key = f'u_{pdb_name}'
    universes[key] = Universe(pdb)
    print(f"This is {key} computed from {pdb_name}")
To access some value, just do:
universes[key] # where key is the variable name
If you want to iterate over all keys and values, do:
for key, universe in universes.items():
    print(key)
    print(universe.some_function())

Need help adding a value to a variable

Here is the whole code section
for entry in auth_log:
    # timestamp is converted to milliseconds for CEF
    # repr is used to keep '\\' in the domain\username
    extension = {
        'rt=': str(time.ctime(int(entry['timestamp']))),
        'src=': entry['ip'],
        'dhost=': entry['host'],
        'duser=': repr(entry['username']).lstrip("u").strip("'"),
        'outcome=': entry['result'],
        'cs1Label=': 'new_enrollment',
        'cs1=': str(entry['new_enrollment']),
        'cs2Label=': 'factor',
        'cs2=': entry['factor'],
        'cs3Label=': 'integration',
        'cs3=': entry['integration'],
    }
    log_to_cef(entry['eventtype'], entry['eventtype'], **extension)
On line 5 (rt=), I would like to store the timestamp output in a variable so that I can use it later in the script.
You can access the value from the dictionary directly with extension["rt="].
If you are looking for a way to collect all of those values so they are available outside of your loop, you can use this method.
Before your loop, make an empty list like this:
extensionRt = []
Then, after extension is created in each loop iteration, use:
extensionRt.append(extension["rt="])
You can then access the values in this list by index:
extensionRt[YOUR_INDEX_HERE]
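Putting those pieces together, a minimal sketch of the whole pattern (auth_log, log_to_cef and the full extension dict are the asker's, assumed defined as above):
import time

extension_rt = []  # collects the rt= timestamp of every entry

for entry in auth_log:
    extension = {
        'rt=': str(time.ctime(int(entry['timestamp']))),
        # ... the remaining fields exactly as in the question ...
    }
    extension_rt.append(extension['rt='])
    log_to_cef(entry['eventtype'], entry['eventtype'], **extension)

# later in the script:
print(extension_rt[0])  # timestamp of the first entry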

Parsing and arranging text in python

I'm having some trouble figuring out the best implementation.
I have data in a file in this format:
|serial #|machine_name|machine_owner|
If a machine_owner has multiple machines, I'd like the machines displayed as a comma-separated list in that field, so that this:
|1234|Fred Flinstone|mach1|
|5678|Barney Rubble|mach2|
|1313|Barney Rubble|mach3|
|3838|Barney Rubble|mach4|
|1212|Betty Rubble|mach5|
Looks like this:
|Fred Flinstone|mach1|
|Barney Rubble|mach2,mach3,mach4|
|Betty Rubble|mach5|
Any hints on how to approach this would be appreciated.
You can use a dict as a temporary container to group by name and then print it in the desired format:
import re
s = """|1234|Fred Flinstone|mach1|
|5678|Barney Rubble|mach2|
|1313|Barney Rubble||mach3|
|3838|Barney Rubble||mach4|
|1212|Betty Rubble|mach5|"""
results = {}
for line in s.splitlines():
    _, name, mach = re.split(r"\|+", line.strip("|"))
    if name in results:
        results[name].append(mach)
    else:
        results[name] = [mach]
for name, mach in results.items():
    print(f"|{name}|{','.join(mach)}|")
You need to store all the owner names in a list. Every time you want to append a name, run a check to make sure it is not already in the list, so that it does not get added twice.
After storing them in an array called data, iterate over the names and use:
data[i].append([])
to attach an empty list after each owner name stored in the i-th place.
Once you're done, iterate over the names, find them in the file, and append each machine to its owner's list (see the sketch below).
All of this can be done in 2 passes.
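For what it's worth, a sketch of that two-pass idea, reusing the sample string s and the \|+ split from the answer above:
import re

# first pass: collect each owner once, paired with an empty machine list
rows = [re.split(r"\|+", line.strip("|")) for line in s.splitlines()]
data = []
for _, owner, _ in rows:
    if owner not in (d[0] for d in data):
        data.append([owner, []])

# second pass: append every machine to its owner's list
for _, owner, mach in rows:
    for d in data:
        if d[0] == owner:
            d[1].append(mach)

for owner, machines in data:
    print("|{}|{}|".format(owner, ",".join(machines)))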

Use files from different folders in a function in a loop?

I have a main folder like this:
mainf/01/streets/streets.shp
mainf/02/streets/streets.shp #normal files
mainf/03/streets/streets.shp
...
and another main folder like this:
mainfo/01/streets/streets.shp
mainfo/02/streets/streets.shp #empty files
mainfo/03/streets/streets.shp
...
I want to use a function that takes as its first parameter a normal file from the upper folder (normal files) and as its second parameter the corresponding empty file from the other folder, matched on the folder number at the [-3] path level (e.g. 01, 02, 03, etc.).
Example with a function:
appendfunc(first_file_from_normal_files,first_file_from_empty_files)
How to do this in a loop?
My code:
for i in mainf and j in mainfo:
    appendfunc(i, j)
Update
Correct version:
first = ["mainf/01/streets/streets.shp", "mainf/02/streets/streets.shp", "mainf/03/streets/streets.shp"]
second = ["mainfo/01/streets/streets.shp", "mainfo/02/streets/streets.shp", "mainfo/03/streets/streets.shp"]
final = [(f,s) for f,s in zip(first,second)]
for i , j in final:
appendfunc(i,j)
Is there an alternative that automatically puts all the files in a main folder into a list, with full paths?
first = []
for (dirpath, dirnames, filenames) in walk(mainf):
    for filename in filenames:
        first.append(os.path.join(dirpath, filename))
second = []
for (dirpath, dirnames, filenames) in walk(mainfo):
    for filename in filenames:
        second.append(os.path.join(dirpath, filename))
Use zip:
first = ["mainf/01/streets/streets.shp", "mainf/02/streets/streets.shp", "mainf/03/streets/streets.shp"]
second = ["mainfo/01/streets/streets.shp", "mainfo/02/streets/streets.shp", "mainfo/03/streets/streets.shp"]
final = [(f, s) for f, s in zip(first, second)]
print(final)
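If you would rather build those two lists automatically than type the paths out, a minimal sketch with glob (assuming the mainf/NN/streets/streets.shp layout from the question):
import glob

# sorted() keeps 01, 02, 03, ... aligned between the two trees
first = sorted(glob.glob("mainf/*/streets/streets.shp"))
second = sorted(glob.glob("mainfo/*/streets/streets.shp"))

for i, j in zip(first, second):
    appendfunc(i, j)  # appendfunc as defined by the asker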
You can't use a for ... and loop. You can loop over one iterable in one statement and over another iterable in a nested statement, but that pairs every i with every j, which still won't give you what you want:
for i in mainf:
    for j in mainfo:
        appendfunc(i, j)
What you probably want is something like this (I'm assuming mainf and mainfo are essentially the same, except one contains empty files):
for folder_num in range(len(mainf)):
    appendfunc(mainf[folder_num], mainfo[folder_num])
You haven't said what appendfunc is supposed to do, so I'll leave that to you. I'm also assuming that, depending on how you're accessing the files, you can figure out how you might need to modify the mainf[folder_num] and mainfo[folder_num] accesses (e.g. you may need to inject the number back into the directory structure somehow: "mainf/{}/streets/streets.shp".format(zero_padded(folder_num))).
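For instance, a small sketch of that path rebuilding (zero_padded above is a hypothetical helper; a {:02d} format spec gives the same two-digit padding):
for folder_num in range(1, 4):
    # {:02d} pads to two digits: 1 -> "01", 2 -> "02", ...
    normal = "mainf/{:02d}/streets/streets.shp".format(folder_num)
    empty = "mainfo/{:02d}/streets/streets.shp".format(folder_num)
    appendfunc(normal, empty)  # appendfunc as defined by the asker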

Count and flag duplicates in a column in a csv

This type of question has been asked many times, so apologies; I have searched hard for an answer but have not found anything close enough to my needs (and as a total newbie I am not sufficiently advanced to customize an existing answer). So thanks in advance for any help.
Here's my query:
I have 30 or so csv files and each contains between 500 and 15,000 rows.
Within each of them (in the 1st column) - are rows of alphabetical IDs (some contain underscores and some also have numbers).
I don't care about the unique IDs - but I would like to identify the duplicate IDs and the number of times they appear in all the different csv files.
Ideally I'd like the output for each duped ID to appear in a new csv file and be listed in 2 columns ("ID", "times_seen")
It may be that I need to compile just 1 csv with all the IDs for your code to run properly - so please let me know if I need to do that
I am using python 2.7 (a crawling script that I run needs this version, apparently).
Thanks again
It seems the easiest way to achieve what you want would be to make use of dictionaries.
import csv
import os
# Assuming all your csv files are in a single directory, we will iterate
# over the files in this directory, selecting only those ending with .csv.
# To list the files in the directory we will use the walk function in the
# os module. os.walk(path_to_dir) returns a generator (a lazy iterator);
# this generator yields tuples of the form (root_directory,
# list_of_directories, list_of_files).
# So: declare the generator
file_generator = os.walk("/path/to/csv/dir")
# get the first values; as we won't recurse into subdirectories, we
# only need this one
root_dir, list_of_dir, list_of_files = file_generator.next()
# Now, we only keep the files ending with .csv. Let me break that down
csv_list = []
for f in list_of_files:
    if f.endswith(".csv"):
        csv_list.append(f)
# That's what was contained in the line
# csv_list = [f for f in os.walk("/path/to/csv/dir").next()[2] if f.endswith(".csv")]
# The dictionary (key-value map) that will contain the id counts.
ref_count = {}
# We loop over all the csv filenames...
for csv_file in csv_list:
    # open the files in read mode (joining the directory back on, since
    # walk only gave us bare filenames)
    with open(os.path.join(root_dir, csv_file), "r") as _:
        # build a csv reader around the file
        csv_reader = csv.reader(_)
        # loop over all the lines of the file, transformed into lists by
        # the csv reader
        for row in csv_reader:
            # If we haven't encountered this id yet, create the
            # corresponding entry in the dictionary.
            if not row[0] in ref_count:
                ref_count[row[0]] = 0
            # increment the number of occurrences associated with this id
            ref_count[row[0]] += 1
# now write to the csv output
with open("youroutput.csv", "w") as _:
    writer = csv.writer(_)
    for k, v in ref_count.iteritems():
        # as requested, we only take duplicates
        if v > 1:
            # use the writer to write the list to the file;
            # the delimiters will be added by it.
            writer.writerow([k, v])
You may need to tweak the csv reader and writer options a little to fit your needs, but this should do the trick. You'll find the documentation here: https://docs.python.org/2/library/csv.html. I haven't tested it though; correcting the little mistakes that may have occurred is left as a practice exercise :).
That's rather easy to achieve. It would look something like this:
import os
# Set to whatever separator you have; '\t' for TAB
delimiter = ','
# Dictionary to keep count of ids
ids = {}
# Iterate over the files in a dir
for in_file in os.listdir(os.curdir):
    # Check whether it is a csv file (a dummy check, but it shall work for you)
    if in_file.endswith('.csv'):
        with open(in_file, 'r') as ifile:
            for line in ifile:
                my_id = line.strip().split(delimiter)[0]
                # If the id does not exist in the dict, set its count to 0
                if my_id not in ids:
                    ids[my_id] = 0
                # Increment the count
                ids[my_id] += 1
# save ids and counts to a file
with open('ids_counts.csv', 'w') as ofile:
    for key, val in ids.iteritems():
        # write counts to a file using the same column delimiter
        ofile.write('{}{}{}\n'.format(key, delimiter, val))
Check out the pandas package. You can read and write csv files quite easily with it.
http://pandas.pydata.org/pandas-docs/stable/10min.html#csv
Then, once you have the csv content as a DataFrame, you can convert it with the as_matrix function.
Use the answers to this question to get the duplicates as a list:
Find and list duplicates in a list?
I hope this helps.
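Alternatively, staying inside pandas and skipping the matrix conversion entirely, value_counts() does the counting directly. A minimal, untested sketch (the filename pattern and output name are assumptions):
import glob
import pandas as pd

# Read the first column of every csv in the current directory
# into one long Series of IDs.
ids = pd.concat(
    pd.read_csv(path, header=None, usecols=[0])[0]
    for path in glob.glob("*.csv")
)

# value_counts() maps each ID to its number of occurrences;
# keep only the IDs seen more than once.
counts = ids.value_counts()
dupes = counts[counts > 1].reset_index()
dupes.columns = ["ID", "times_seen"]

dupes.to_csv("duplicates.csv", index=False)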
As you are a newbie, I'll try to give some directions instead of posting an answer, mainly because this is not a "code this for me" platform.
Python has a library called csv that allows you to read data from CSV files (Boom! Surprised?). Start by reading the file (preferably an example file that you create with just 10 or so rows, and then increase the number of rows or use a for loop to iterate over different files). The examples at the bottom of the page that I linked will help you print this info.
As you will see, the output you get from this library is a list with all the elements of each row. Your next step should be extracting just the ID that you are interested in.
The next logical step is counting the number of appearances. The standard library also has a class called Counter. It has a method called update that you can use as follows:
from collections import Counter
c = Counter()
c.update(['safddsfasdf'])
c # Counter({'safddsfasdf': 1})
c['safddsfasdf'] # 1
c.update(['safddsfasdf'])
c # Counter({'safddsfasdf': 2})
c['safddsfasdf'] # 2
c.update(['fdf'])
c # Counter({'safddsfasdf': 2, 'fdf': 1})
c['fdf'] # 1
So basically you will have to pass it a list with the elements you want to count. (You could have more than one ID in the list, for example reading 10 IDs before inserting them, for improved efficiency; just remember not to construct a list of thousands of elements if you are after good memory behaviour.)
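For instance, a single update() call can count a whole batch of IDs at once (a tiny sketch with made-up IDs):
from collections import Counter

c = Counter()
# update() accepts any iterable, so a batch of IDs read from a file
# can be counted in one call:
c.update(['id_a', 'id_b', 'id_a'])
print(c)  # Counter({'id_a': 2, 'id_b': 1})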
If you try this and get into some trouble come back and we will help further.
Edit
Spoiler alert: I decided to give a full answer to the problem; please skip it if you want to find your own solution and learn Python in the process.
# The csv module will help us read and write to the files
from csv import reader, writer
# The collections module has a useful type called Counter that fulfills our needs
from collections import Counter
# Getting the names/paths of the files is not this question's goal,
# so I'll just have them in a list
files = [
    "file_1.csv",
    "file_2.csv",
]
# The output file name/path will also be stored in a variable
output = "output.csv"
# We create the object that is going to do the counting for us
appearances = Counter()
# Now we loop over each file
for file in files:
    # We open the file in read mode and get a handle
    with open(file, "r") as file_h:
        # We create a csv parser from the handle
        file_reader = reader(file_h)
        # Here you may need to do something if your first row is a header
        # We loop over all the rows
        for row in file_reader:
            # We insert the id into the counter;
            # row[:1] gets explained afterwards, it is the first column
            # of the row in list form
            appearances.update(row[:1])
# Now we open/create the output file and get a handle
with open(output, "w") as file_h:
    # We create a csv writer for the handle, this time to write
    file_writer = writer(file_h)
    # If you want to insert a header into the output file, this is the place
    # We loop through our Counter object to write the entries out;
    # here we have different options: if you want them sorted by number
    # of appearances, Counter.most_common() is your friend; if you don't
    # care about the order, you can use the Counter object as if it were
    # a normal dict
    # Option 1: ordered
    for id_and_times in appearances.most_common():
        # id_and_times is a tuple with the id and the times it appears,
        # so we check the second element (indices start at 0)
        if id_and_times[1] == 1:
            # As they are ordered, we can stop the loop when we reach
            # the first 1, finishing as early as possible.
            break
        # As we have exited the loop once an id appears only once,
        # only duplicate IDs reach this point
        file_writer.writerow(id_and_times)
    # Option 2: unordered
    for id_and_times in appearances.iteritems():
        # This time we cannot stop the loop early as they are unordered,
        # so we must check them all
        if id_and_times[1] > 1:
            file_writer.writerow(id_and_times)
I offered 2 options: printing them ordered (based on the Counter.most_common() documentation) and unordered (based on the normal dict method dict.iteritems()). Choose one. From a speed point of view I'm not sure which would be faster, as the first needs to sort the Counter but stops looping at the first non-duplicated element, while the second doesn't need to sort the elements but must loop over every ID. The speed will probably depend on your data.
About the row[:1] thingy:
row is a list.
You can get a subset of a list by giving the initial and final positions.
In this case the initial position is omitted, so it defaults to the start.
The final position is 1, so just the first element gets selected.
So the output is another list with just the first element.
row[:1] == [row[0]]: they have the same output, as getting a sublist of only the first element is the same as constructing a new list with only the first element.
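A quick illustration with a hypothetical row:
row = ["id_1", "col_2", "col_3"]
print(row[:1])  # ['id_1'] -- a one-element list, same as [row[0]]
print(row[0])   # 'id_1'  -- the bare element, not wrapped in a list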
