Ok, so I'm writing some Python code (I don't write Python much; I'm more used to Java and C).
Anyway, I have a collection of integer literals I need to store.
(Ideally >10,000 of them; currently I've only got about 1,000.)
I would have liked to access the literals via file IO, or via their source API, but that is disallowed.
And not on topic anyway.
So I have the literals put into a list:
src=list(0,1,2,2,2,0,1,2,... ,2,1,2,1,1,0,2,1)
#some code that uses the src
But when I try to run the file it comes up with an error because there are more than 255 arguments.
So the constructor is the problem.
How should I do this?
The data is initially available to me as a space-delimited text file.
I just did a search-and-replace and copied it in.
If you use [] instead of list(), you won't run into the limit, because [] is a literal rather than a function call, so the 255-argument limit doesn't apply.
src = [0,1,2,2,2,0,1,2,... ,2,1,2,1,1,0,2,1]
src = [int(value) for value in open('mycsv.csv').read().split(',') if value.strip()]
Or are you not able to save a text file on your system?
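Since you said the data is in a space-delimited text file, a minimal sketch along the same lines (the file name here is just a placeholder) would be:

# read a space-delimited text file of integers; 'data.txt' is a placeholder name
with open('data.txt') as f:
    src = [int(value) for value in f.read().split()]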
I am trying to read sav files using pyreadstat in Python, but for some rare scenarios I get a UnicodeDecodeError because a string variable contains special characters.
To handle this, I think that instead of loading the entire variable set I will load only the variables which do not produce this error.
Below is the pseudo-code that I have. It is not very efficient, since I check for the error on each item of the list using try/except.
import pyreadstat

# Read only the metadata to get information about the variables
df, meta = pyreadstat.read_sav('Test.sav', metadataonly=True)
columns = meta.column_names  # all variable names are stored in this list
result = []
for var in columns:
    print(var)
    try:
        df, meta = pyreadstat.read_sav('Test.sav', usecols=[str(var)])
        # If there is no error, we can keep this variable
        result.append(var)
    except Exception:
        pass
# Finally load the sav with only the non-error variables
df, meta = pyreadstat.read_sav('Test.sav', usecols=result)
For a sav file with 1000+ variables this takes a long time to process.
I was wondering if there is a way to use a divide-and-conquer approach to do it faster. Below is my suggested approach, but I am not very good at implementing recursive algorithms. If someone could help me with pseudo-code it would be very helpful.
1. Take the full list of variables and try to read the sav file.
2. If there is no error, the output can be stored in result and we then read the sav file.
3. If there is an error, split the list into 2 parts and run these again ...
4. Step 3 needs to run again until we have lists that do not give any error.
Using the second approach, 90% of my sav files will get loaded on the first pass itself, hence I think recursion is a good method; roughly, I imagine something like the sketch below, but I am not confident it is correct.
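A rough, untested attempt at that recursion (it reuses pyreadstat, meta and 'Test.sav' from the code above):

def load_readable(filename, columns):
    # return the subset of columns that can be read without an error
    if not columns:
        return []
    try:
        pyreadstat.read_sav(filename, usecols=list(columns))
        return list(columns)              # the whole chunk is readable
    except Exception:
        if len(columns) == 1:
            return []                     # a single bad variable, drop it
        mid = len(columns) // 2
        return (load_readable(filename, columns[:mid]) +
                load_readable(filename, columns[mid:]))

result = load_readable('Test.sav', meta.column_names)
df, meta = pyreadstat.read_sav('Test.sav', usecols=result)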
You can try to reproduce the issue with the sav file here
For this specific case I would suggest a different approach: you can pass an "encoding" argument to pyreadstat.read_sav to set the encoding manually. If you don't know which one it is, you can iterate over the list of encodings here: https://gist.github.com/hakre/4188459 to find out which one makes sense. For example:
# here codes is a list with all the encodings from the link mentioned before
for c in codes:
    try:
        df, meta = pyreadstat.read_sav("Test.sav", encoding=c)
        print(c)
        print(df.head())
    except Exception:
        pass
I did, and there were a few that may potentially make sense, assuming that the string is in a non-latin alphabet. However the most promising one is not in the list: encoding="UTF8" (the list contains UTF-8, with a dash, and that fails). Using UTF8 (no dash) I get this:
నేను గతంలో వాడిన బ
which according to Google Translate means "I used to come b" in Telugu. Not sure if that fully makes sense, but it's a way forward.
The advantage of this approach is that if you find the right encoding you will not be losing data, and reading the data will be fast. The disadvantage is that you may not find the right encoding.
Even if you do not find the right encoding, you would still be reading the problematic columns very fast, and you can discard them later in pandas by inspecting which character columns do not contain latin characters. This will be much faster than the algorithm you were suggesting.
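A rough, untested sketch of that last filtering step (what counts as "latin" here is just an assumption, using latin-1 encodability as the test):

def is_latin(text):
    # a value counts as latin if it can be encoded as latin-1
    try:
        str(text).encode('latin-1')
        return True
    except UnicodeEncodeError:
        return False

# drop the character columns that contain non-latin values
bad_cols = [col for col in df.select_dtypes(include='object').columns
            if not df[col].dropna().map(is_latin).all()]
df = df.drop(columns=bad_cols)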
Why am I seeing extra ] characters in the output of a list construction that should contain just a list of lists? Is this a terminal problem (I am using CoCalc's terminal)?
In particular, the output should have just two levels of lists: the global list and each of the sublists inside it.
But when I read through the output of the data in a Python interpreter in CoCalc's terminal, I see this kind of thing:
Notice the extra ] characters, as if there were inner lists that should not exist. Also notice the numbering, which seems to be out of order even though it is ordered in the data.
What's happening here?
To reconstruct the problem:
Download the dorothea_valid.data file from here:
https://archive.ics.uci.edu/ml/machine-learning-databases/dorothea/DOROTHEA/
Then create a project in CoCalc (https://cocalc.com/). Upload dorothea_valid.data to that project.
Start a Linux terminal in CoCalc, and make sure you know the path/working directory so that you can find dorothea_valid.data from Python. In the Linux terminal, start the Python interpreter by running python.
Paste the following function, meant for reading a file with sequences of integer values separated by "\n", into the interpreter:
def read_datafile(fname):
    data = list()
    with open(fname, 'r') as file:
        for line in file:
            data.append([int(i) for i in line.split()])
    return data
# and then call print(read_datafile(fname)) to get the output.
Then call read_datafile() on dorothea_valid.data, and print the resulting object as suggested in the comment above. The screen-captured lines appear when scrolling right to the bottom, but problems may be visible in other parts of the output as well.
EDIT:
It's now 10/08/2022 and I'm unable to see the problem. Maybe it has been fixed in CoCalc.
You are creating inner lists. You're using one list comprehension per line of the file, so it's making one list of integers per line. If you want it all as one list, use extend rather than append:
for line in file:
    data.extend(int(i) for i in line.split())
Notice I'm using a generator expression here rather than a list comprehension. Using a list comprehension would be a waste because it creates the whole list in memory only to be read through once and then discarded.
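A tiny illustration of the difference, on made-up data rather than the dorothea file:

rows = ["1 2 3", "4 5"]
nested = []
flat = []
for line in rows:
    nested.append([int(i) for i in line.split()])  # nested becomes [[1, 2, 3], [4, 5]]
    flat.extend(int(i) for i in line.split())      # flat becomes [1, 2, 3, 4, 5]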
I have the following parameters in a Python file that is used to send commands pertaining to boundary conditions to Abaqus:
u1=0.0,
u2=0.0,
u3=0.0,
ur1=UNSET,
ur2=0.0,
ur3=UNSET
I would like to place these values inside a list and print that list to a .txt file. I figured I should convert all contents to strings:
List = [str(u1), str(u2), str(u3), str(ur1), str(ur2), str(ur3)]
This works only as long as the list does not contain UNSET, which is a symbol used by Abaqus and is neither an int nor a str. Any ideas how to deal with that? Many thanks!
UNSET is an Abaqus/CAE-defined symbolic constant. It has a name member that returns its string representation, so you might do something like this:
def tostring(v):
    try:
        return v.name
    except AttributeError:
        return str(v)
then do for example
bc = [0., 1, UNSET]
print "u1=%s u2=%s u3=%s\n" % tuple([tostring(b) for b in bc])
u1=0. u2=1 u3=UNSET
EDIT: simpler than that. After doing things the hard way I realize the symbolic constant is handled properly by string conversion, so you can just do this:
print "u1=%s u2=%s u3=%s\n" % tuple(['%s' % b for b in bc])
I read and extracted information about atoms from a PDB file and used Superimposer() to align a mutant to the wild type. How can I write the aligned atom coordinates back to a PDB file? I tried to use PDBIO(), but it doesn't work, since it doesn't accept a list as input. Does anyone have an idea how to do it?
from Bio.PDB import PDBParser, Superimposer

mutantAtoms = []
mutantStructure = PDBParser().get_structure("name", pdbFile)
mutantChain = mutantStructure[0]["B"]
# Extract the atoms
for residue in mutantChain:
    for atom in residue:
        mutantAtoms.append(atom)
# Do the alignment
si = Superimposer()
si.set_atoms(wildtypeAtoms, mutantAtoms)
si.apply(mutantAtoms)
Now mutantAtoms contains the atoms aligned to the wild-type atoms. I need to write this information to a PDB file. My question is how to convert a list of aligned atoms into a structure and use PDBIO() or some other way to write it to a PDB file.
As I see in an example from the PDBIO documentation in the Biopython docs:
from Bio.PDB import PDBParser, PDBIO

p = PDBParser()
s = p.get_structure("1fat", "1fat.pdb")
io = PDBIO()
io.set_structure(s)
io.save("out.pdb")
Seems like the PDBIO module needs an object of class Structure to work, which is in principle what I understand Superimposer works with. When you say it does not accept a list, do you mean you have a list of structures? In that case you could simply do it by iterating through the structures, as in:
for i, s in enumerate(my_results_list):
    io.set_structure(s)
    io.save("out_%d.pdb" % i)  # one output file per structure
If what you have is a list of atoms, I guess you could create a Structure object with that and then pass it to PDBIO.
However, it is difficult to tell more without knowing more about your problem. You could put on your question the code lines where you get the problem.
Edit: Now I have better understood what you want to do. I have seen some information about the Structure class in the interesting Biopython Structural Bioinformatics FAQ; the class is apparently a little complex. At first sight I do not see a very easy way to create Structure objects from scratch, but what you could do is modify the structure you get from PDBParser, substituting its atoms with the result you get from Superimposer, and then write the .pdb file from that same modified structure. So you could try to put your mutantAtoms list back into the mutantStructure object you already have.
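In fact, since si.apply() moves the Atom objects in place, the aligned coordinates should already be inside mutantStructure, so a minimal sketch (untested, the output file name is just an example) would be:

from Bio.PDB import PDBIO

io = PDBIO()
io.set_structure(mutantStructure)  # the structure whose atoms were moved by si.apply()
io.save("aligned_mutant.pdb")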
So let's say I'm using Python's ftplib to retrieve a list of log files from an FTP server. How would I parse that list of files to get just the file names (the last column) into a list? See the link above for example output.
Using retrlines() probably isn't the best idea there, since it just prints to the console and so you'd have to do tricky things to even get at that output. A likely better bet would be to use the nlst() method, which returns exactly what you want: a list of the file names.
You may want to use ftp.nlst() instead of ftp.retrlines(). It will give you exactly what you want.
If you can't, read the following:
Generators for sysadmin processes
In his now-famous tutorial, Generator Tricks For Systems Programmers: An Introduction, David M. Beazley gives a lot of recipes to answer this kind of data problem with quick and reusable code.
E.g.:
# empty list that will receive all the log entries
log = []
# we pass a callback to bypass the print_line that retrlines would otherwise call;
# we do that only because we cannot use something better than retrlines
ftp.retrlines('LIST', callback=log.append)
# we use rsplit because it is more efficient in our case if we have a big file
files = (line.rsplit(None, 1)[1] for line in log)
# get your file list
files_list = list(files)
Why don't we generate the list immediately?
Well, it's because doing it this way offers you a lot of flexibility: you can apply any intermediate generator to filter files before turning them into files_list. It's just like a pipe: add a line and you add a processing step without overhead (since these are generators). And if you get rid of retrlines, it still works, and it's even better because you never store the whole list even once.
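For example, a quick sketch of such an intermediate filter (the .log extension is just for illustration):

# keep only the .log files before materializing the list
files = (line.rsplit(None, 1)[1] for line in log)
log_files = (name for name in files if name.endswith('.log'))
files_list = list(log_files)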
EDIT: well, I read the comment on the other answer and it says that this won't work if there is any space in the name.
Cool, this will illustrate why this method is handy. If you want to change something in the process, you just change a line. Swap:
files = (line.rsplit(None, 1)[1] for line in log)
and
# split the line, take every field from the 9th onward, then join them back together
files = (' '.join(line.split()[8:]) for line in log)
Ok, this may not be obvious here, but for huge batch-processing scripts, it's nice :-)
And a slightly less-optimal method, by the way, if you're stuck using retrlines() for some reason, is to pass a function as the second argument to retrlines(); it'll be called for each item in the list. So something like this (assuming you have an FTP object named 'ftp') would work as well:
filenames = []
ftp.retrlines('LIST', lambda line: filenames.append(line.split()[-1]))
The list 'filenames' will then be a list of the file names.
Is there any reason why ftplib.FTP.nlst() won't work for you? I just checked and it returns only names of the files in a given directory.
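For instance, a minimal sketch (the host name is just a placeholder):

from ftplib import FTP

ftp = FTP('ftp.example.com')  # placeholder host
ftp.login()                   # anonymous login
names = ftp.nlst()            # plain list of file names in the current directory
print(names)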
Since every filename in the output starts at the same column, all you have to do is get the position of the dot on the first line:
drwxrwsr-x 5 ftp-usr pdmaint 1536 Mar 20 09:48 .
Then slice the filename out of the other lines using the position of that dot as the starting index.
Since the dot is the last character on the line, you can use the length of the line minus 1 as the index. So the final code is something like this:
lines = []
ftp.retrlines('LIST', lines.append)  # collect each line of the listing
filename_index = len(lines[0]) - 1   # position of the trailing "." on the first line
files = []
for line in lines:
    files.append(line[filename_index:])
If the FTP server supports the MLSD command, then please see section “single directory case” from that answer.
Use an instance (say ftpd) of the FTPDirectory class, call its .getdata method with a connected ftplib.FTP instance in the correct folder, and then you can do:
directory_filenames= [ftpfile.name for ftpfile in ftpd.files]
I believe it should work for you.
file_name_list = [' '.join(each_file_detail.split()).split()[-1] for each_file_detail in file_list_from_log]
NOTES -
Here I am making an assumption that you want the data in the program (as a list), not on the console.
each_file_detail is each line that is being produced by the program.
' '.join(each_file_detail.split())
To replace multiple spaces with a single space.