I am using FloPy to load an existing MODFLOW-USG model.
load_model = flopy.modflow.Modflow.load('HTHModel', model_ws='model_ws', version='mfusg',
                                        exe_name='exe_name', verbose=True, check=False)
While loading the LPF package, Python reports that hk and hani have been loaded successfully, and then the following error appears:
loading bas6 package file...
adding Package: BAS6
BAS6 package load...success
loading lpf package file...
loading IBCFCB, HDRY, NPLPF...
loading LAYTYP...
loading LAYAVG...
loading CHANI...
loading LAYVKA...
loading LAYWET...
loading hk layer 1...
loading hani layer 1...
D:\Anaconda\program\lib\site-packages\flopy\utils\util_array.py in parse_control_record(line,
current_unit, dtype, ext_unit_dict, array_format)
3215 locat = int(line[0:10].strip())
ValueError: invalid literal for int() with base 10: '-877.0
How can I solve this kind of problem?
By the way, I created this model using the "save native text copy" function in GMS. FloPy can read the other contents of the LPF package normally; the error occurs in the part that reads the [ANGLEX(NJAG)] data.
I compared the LPF file against the MODFLOW-USG input and output description, and it meets the format requirements of the input file.
I am a newbie to Python and FloPy, and this question confuses me a lot. Thank you very much for providing me with any reference information, whether it is about Python, FloPy, MODFLOW-USG or GMS.
Can you upload your lpf file? Then I can check this out. But at first glance, that "'" before the -877.0 looks suspect - is that in the lpf file?
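In the meantime, here is a minimal sketch for checking that yourself: it prints every line of the LPF file containing the value from the traceback, so you can see whether a stray quote character really is in the file. The file name 'HTHModel.lpf' is an assumption; substitute your actual path.
# hypothetical file name; point this at your actual LPF file
with open('HTHModel.lpf') as f:
    for i, line in enumerate(f, start=1):
        if '-877.0' in line:
            print(i, repr(line))  # repr() makes stray quote characters visible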
Although there are already a lot of threads related to my question, the answers are usually not understandable for me, as I am just a beginner in the "writing scripts in Python" field.
Here is my situation :
There's a machine learning software that writes models to .pkl files at the end of its learning phase. I would like to make those model.pkl files openable by an operator, to check what is inside the model. So I began to write a script that would use the pickle.load method and write the data contained in my model.pkl into a .txt file. Here's what I wrote to begin with:
import pickle

model_path = input("Model Path = ")

with open(model_path, "rb") as model:
    load = pickle.load(model, encoding='utf-8')

new_model_path = model_path.split('.pkl')[0] + '.txt'
print("creating new file at : ", new_model_path)

# open the output in write mode ('w'), not read mode ('rt'), and write a text version
with open(new_model_path, 'w') as model_readable:
    model_readable.write(str(load))
print("writing model as readable : ", load)
If I try to run it here's the output :
python3.7 unpickler.py
Model Path = /home/ouriacc/Desktop/workspace/SESAM/Base_de_tests/Anomalie_1/Models/OCSVM/EyeSat/CI_HEATER_CAMERA_VOLTAGE.pkl
Traceback (most recent call last):
File "unpickler.py", line 7, in <module>
load = pickle.load(model, encoding='utf-8')
_pickle.UnpicklingError: invalid load key, '_'.
I couldn't find any explanation for this error that didn't imply an incomplete or corrupted download, which can't be the case here, as the model.pkl files are not modified once they've been created by the AI software.
Could someone help me solve the error, or point me to another method of achieving my goal? All I need is a script that gives a user access to what the .pkl file contains.
Thank you very much!
So I figured out why @wundermahn asked about scikit-learn. It seems my model.pkl files were generated by joblib, not exactly by the pickle library, which is apparently why it wouldn't work. I changed my code by replacing pickle.load() with joblib.load(), and it works better!
Thank you!
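For reference, a minimal sketch of the working approach, assuming the files really were written by joblib (as scikit-learn's recommended persistence utilities do):
import joblib

model_path = input("Model Path = ")
load = joblib.load(model_path)  # joblib.load replaces pickle.load here

new_model_path = model_path.split('.pkl')[0] + '.txt'
with open(new_model_path, 'w') as model_readable:
    model_readable.write(str(load))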
I'm loading this object detection model in python. I can load it with the following lines of code:
import tflite_runtime.interpreter as tflite
model_path = 'path_to_model_file.tf'
interpreter = tflite.Interpreter(model_path)
I'm able to perform inference on this without any problem. However, labels are supposed to be included in the metadata, according to the model's documentation, but I can't extract them.
The closest I got was when following this:
from tflite_support import metadata as _metadata

displayer = _metadata.MetadataDisplayer.with_model_file(model_path)
json_file = displayer.get_metadata_json()

# Optional: write out the metadata as a json file
export_json_file = "extracted_metadata.json"
with open(export_json_file, "w") as f:
    f.write(json_file)
but the very first line of code fails with this error: AttributeError: 'int' object has no attribute 'tobytes'.
How to extract it?
If you only care about the label file, you can simply run a command like unzip model_path on Linux or Mac. A TFLite model with metadata is essentially a zip file. See the public introduction for more details.
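The same idea as a cross-platform sketch, using only the standard library; the model file name here is an example, so check z.namelist() to see what is actually bundled:
import zipfile

# a TFLite model with metadata is a zip archive, so the standard library can open it
with zipfile.ZipFile("lite-model_ssd_mobilenet_v1_1_metadata_2.tflite") as z:
    print(z.namelist())         # lists the bundled files, e.g. a label file
    z.extract(z.namelist()[0])  # extracts the first bundled asset next to the script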
Your code snippet to extract metadata works on my end. Make sure to double-check model_path. It should be a string, such as "lite-model_ssd_mobilenet_v1_1_metadata_2.tflite".
If you'd like to read label files in an Android app, here is the sample code to do so.
I am new to python and am having trouble reading a *.npy file that somebody else saved. If I use the following commands:
import numpy as np
np.load('lat.npy')
I get the following error:
ValueError: Cannot load file containing pickled data when allow_pickle=False
So, I set allow_pickle=True:
np.load('lat.npy',allow_pickle=True)
Then, I get a different error:
OSError: Failed to interpret file 'lat.npy' as a pickle
Maybe it is relevant that I am on a PC, and the other file was written on a Mac.
Am I doing something wrong? (I am sorry if this question has been asked already.) Thank you!
I learned that my colleague's data file was written in Python 2, while I am using Python 3. Using the np.load command with the following options will work:
np.load('lat.npy',allow_pickle=True,fix_imports=True,encoding='latin1')
It seems I need to set all of those options, but the 'encoding' argument seems especially important. The doc for numpy.load says about the encoding argument, "Only useful when loading Python 2 generated pickled files in Python 3, which includes npy/npz files containing object arrays."
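For context, a small sketch of the fix in place, assuming 'lat.npy' holds a Python 2 object array:
import numpy as np

# latin1 can decode any byte value, so Python 2 string data survives the round trip
lat = np.load('lat.npy', allow_pickle=True, fix_imports=True, encoding='latin1')
print(type(lat), getattr(lat, 'shape', None))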
I am trying to load a TF1 model, using hub and following this guide.
This model has a sentencepiece model which comes with it:
spm_path = m.signatures['spm_path']
spm_path
<tensorflow.python.eager.wrap_function.WrappedFunction at 0x1341129e8>
If I execute this function:
spm_path()
{'default': <tf.Tensor: id=5905, shape=(), dtype=string, numpy=b'SAVEDMODEL-ASSET'>}
However, if I use the output b'SAVEDMODEL-ASSET' to load my sentencepiece model, I get the following error:
sp.Load(b'SAVEDMODEL-ASSET')
OSError: Not found: "SAVEDMODEL-ASSET": No such file or directory Error #2
The issue is that I am not sure where this asset is located - where does hub store downloaded modules?
I can find the following: os.environ['TFHUB_CACHE_DIR'] = '/tmp/tfhub', but this is not enough for me to locate the actual file on my machine and pass in the correct path.
It's a bit clunky but here's one way to do it:
import tensorflow_hub as hub
import sentencepiece

uselite = hub.load("https://tfhub.dev/google/universal-sentence-encoder-lite/2")
sp = sentencepiece.SentencePieceProcessor()
sp.Load(uselite.asset_paths[0].asset_path.numpy())
asset_paths contains only one item, and it's the path to the spm model.
I had the same problem while trying to load the assets needed for tokenization for the ALBERT model. I found that there is an asset list in m.asset_paths. It contains Asset objects, and you can access the path with the .asset_path property. The problem is that you need to check the paths of the assets to find the one that you need. Maybe there is a better way, but I don't know it.
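A hedged sketch of that search, assuming the sentencepiece model is the asset whose file name ends in ".model" (inspect the printed list if that guess misses):
import tensorflow_hub as hub

m = hub.load("https://tfhub.dev/google/universal-sentence-encoder-lite/2")
for asset in m.asset_paths:
    path = asset.asset_path.numpy().decode("utf-8")
    print(path)                  # show every bundled asset path
    if path.endswith(".model"):  # assumed naming for the sentencepiece model
        print("sentencepiece model found at:", path)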
I've been working on some dataframes with Python. I load them in using read_csv(filename, index_col=0) and it's all fine. The files also open fine in Excel. I also opened them in Notepad, and they seem all right; below is an example line:
851,1.218108787,0.636454978,0.269719611,-0.849476404,-0.143909689,0.050626813,-0.094248374,-0.3096134,-0.131347142,0.671271112,0.167593329,0.439417259,-0.198164647,-0.031552824,-0.215189948,-0.1791156,0.092648696,-0.107840318,-0.162596466,0.019324121,0.040572892,-0.008307331,-0.077819297,-0.023809355,-0.148229913,-0.041082835,0.138234498,-0.070986117,0.024788437,-0.050982962,0.24689969,0
The first column is as I understand it an index column. Then there's a bunch of Principal Components, and at the end is a 1/0.
When I try and load the file into WEKA, however, it gives me a nasty error and urges me to use the converter, saying:
Reason:
32 Problem encountered on line: 2
When I attempt to use the converter with the default settings, it states a new error:
Couldn't read object file_name.csv invalid stream header: 2C636F6D
Could anyone help with any of this? I can't provide the entire data file, but if requested I can try to cut out a few rows and paste only those, if the error still occurs. Are there any flags I need to specify when saving a file to CSV in Python? At the moment I just use .to_csv('x.csv').
I think the index column is what's preventing WEKA from reading the file. When you write it using pandas.to_csv(), set index=False:
df.to_csv('x.csv', index=False)
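A minimal sketch of the full round trip under that assumption; 'weka_ready.csv' is an illustrative output name:
import pandas as pd

df = pd.read_csv('x.csv', index_col=0)      # treat the first column as the index
df.to_csv('weka_ready.csv', index=False)    # omit the index so WEKA sees only data columns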