I have a large number of FITS files that I'm unable to open because of a missing SIMPLE keyword. When I try to open them using astropy's fits.open(), it gives the following error:
OSError: No SIMPLE card found, this file does not appear to be a valid FITS file
I tried adding the ignore_missing_simple=True option to fits.open(), but this still gives an error:
OSError: Empty or corrupt FITS file
I'm not sure what to do here. My first thought was to edit the FITS header to add a SIMPLE keyword, but if I can't open the file I don't know how to do that. Given the second error, I'm wondering if there's more going on than just the missing SIMPLE keyword.
If it matters, I've downloaded the fits files, I did not generate them myself.
We traced the problem to astropy.io.fits version 4.3.
Files generated with version 4.2 can be read with 4.0 or 4.2, but this error pops up with 4.3.1. You could try again after downgrading astropy.io.fits.
Which version of astropy are you using? The problem occurs on 4.3, but it works just fine in 4.2.
This GitHub issue reports the problem in the astropy repository.
I propose you downgrade until the issue is fixed.
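If downgrading is not an option, it can also be worth checking what is actually at the start of the file before trying to edit any headers. A minimal diagnostic sketch (the helper name is my own; a valid FITS file must begin with a "SIMPLE  =" card in its first 80-byte header record, and a gzip-compressed download instead starts with the magic bytes \x1f\x8b):

```python
def inspect_fits_header(path, nbytes=80):
    """Return the first 80-byte header card of a (claimed) FITS file as raw bytes.

    A valid FITS primary header starts with a "SIMPLE  =" card; if the bytes
    instead start with the gzip magic b"\\x1f\\x8b", the file is compressed,
    and if they look like HTML the download probably failed.
    """
    with open(path, "rb") as f:
        return f.read(nbytes)
```

If the first bytes look like HTML or gzip rather than header cards, the downloads themselves are broken and no header edit will fix them.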
I'm using anaconda3 and facing this problem:
FileNotFoundError: [Errno 2] No such file or directory 'C:\Users\pobox\.matplotlib\fontlist-v300.json'
And in fact, in the .matplotlib folder there is no such file fontlist-v300.json.
I know the problem may be connected with matplotlib, but it keeps occurring even after updating matplotlib.
Any ideas how to fix it?
Based on the updates to the issue https://github.com/matplotlib/matplotlib/issues/12173/, you could try updating matplotlib to the latest version. But people are still reporting that the problem persists, so the bug may still be being worked on.
You could try this option and check that it matches your current Python environment.
For the file itself, you may try to check this: https://gitlab.lagash.com/josemg/sofy-luis-quality/blob/b1ddf1ec3577bdec0d798c8586c30b6d6fc10f5f/.cache/matplotlib/fontlist-v300.json
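Rather than hunting down a copy of fontlist-v300.json, it is usually enough to delete the cached font lists so matplotlib regenerates them on the next import. A rough sketch, assuming the ~/.matplotlib cache location from the question (newer matplotlib versions report theirs via matplotlib.get_cachedir(); the function name here is my own):

```python
from pathlib import Path

def clear_font_cache(cache_dir=None):
    """Delete cached fontlist-v*.json files so matplotlib rebuilds them.

    cache_dir defaults to ~/.matplotlib, the location from the question;
    pass matplotlib.get_cachedir() instead on newer versions.
    """
    cache_dir = Path(cache_dir) if cache_dir is not None else Path.home() / ".matplotlib"
    removed = []
    for cached in cache_dir.glob("fontlist-v*.json"):
        cached.unlink()
        removed.append(cached.name)
    return removed
```

The next `import matplotlib` then rebuilds the font list from scratch, which also fixes a cache written by a different matplotlib version.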
I am working on windows 10. I installed spark, and the goal is to use pyspark. I have made the following steps:
I have installed Python 3.7 with anaconda -- Python was added to C:\Python37
I downloaded winutils from this link -- winutils was added to C:\winutils\bin
I downloaded spark -- spark was extracted to: C:\spark-3.0.0-preview2-bin-hadoop2.7
I downloaded Java 8 from AdoptOpenJDK
Under system variables, I set the following variables:
HADOOP_HOME : C:\winutils
SPARK_HOME: C:\spark-3.0.0-preview2-bin-hadoop2.7
JAVA_HOME: C:\PROGRA~1\AdoptOpenJDK\jdk-8.0.242.08-hotspot
And finally, under system path, I added:
%JAVA_HOME%\bin
%SPARK_HOME%\bin
%HADOOP_HOME%\bin
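To rule out typos in that setup, the variables can be sanity-checked from Python before launching pyspark. A small sketch (check_spark_env is my own name; it only verifies that each variable is set and points at an existing directory):

```python
import os

REQUIRED_VARS = ("HADOOP_HOME", "SPARK_HOME", "JAVA_HOME")

def check_spark_env(env=None):
    """Return a list of problems with the Spark-related environment variables."""
    env = os.environ if env is None else env
    problems = []
    for name in REQUIRED_VARS:
        value = env.get(name)
        if not value:
            problems.append(f"{name} is not set")
        elif not os.path.isdir(value):
            problems.append(f"{name} points to a missing directory: {value}")
    return problems
```

An empty list means the three variables at least resolve to real directories; it does not prove the Spark or Java installs inside them are healthy.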
In the terminal, pyspark starts, but with warnings.
So I would like to know why I am getting this warning:
unable to load native-hadoop library... and why it couldn't bind on port 4040...
Finally, inside Jupyter Notebook, I am getting the following error when trying to write a Parquet file. The first screenshot shows a working example, and the next one shows the code with errors:
And here is DataMaster__3.csv on my disk:
And the DaterMaster_par2222.parquet:
Any help is much appreciated!!
If you are writing the file in CSV format, I have found that the best way to do that is using the following approach:
LCL_POS.toPandas().to_csv(<path>)
There is another way to save it directly without converting to pandas, but the issue is that the output ends up split into multiple files (with unwieldy names, so I tend to avoid those); the repartition(1) below keeps it to a single file. If you are happy to split the file up, it's much better in my opinion to write a Parquet file.
LCL_POS.repartition(1).write.format("com.databricks.spark.csv").option("header", "true").save(<path>)
Hope that answers your question.
I am trying to use nltk in Python, but I'm receiving a pop-up error (Windows) saying that a disk is missing at the moment I call import nltk.
Does anyone know why or how to fix this?
The error is below:
"There is no disk in the drive. Please insert a disk into drive \Device\Harddisk4\DR4."
NLTK searches for nltk_data directory until it finds one.
On Windows, these locations are scanned: %userprofile%\nltk_data, C:\nltk_data, D:\nltk_data, and so on.
Installing NLTK data or creating an empty directory solves the error.
http://www.nltk.org/data.html
My installation is Win 10, Python 3.5.2 64-bit, nltk 3.2.1 (Christoph Gohlke's binary).
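The "create an empty directory" fix above can be scripted; a sketch (the function name is my own, and C:\nltk_data is one of the scanned locations listed above):

```python
from pathlib import Path

def ensure_nltk_data_dir(base=r"C:\nltk_data"):
    """Create the nltk_data directory so NLTK finds one instead of probing drives.

    base defaults to C:\\nltk_data, one of the locations NLTK scans on
    Windows; pass another path to place it elsewhere.
    """
    path = Path(base)
    path.mkdir(parents=True, exist_ok=True)
    return path
```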
While I am not sure exactly where the problem arises, I had this same error happen to me (it started 'overnight': the code had been working, I had not re-installed nltk, so I have no idea what caused it to start happening). I still had the problem after upgrading to the latest version of nltk (3.2.1) and re-downloading the nltk data.
shiratori's answer helped me solve my problem, although at least for me it was slightly more complicated. Specifically, my nltk data was stored in C:\Users\USERNAME\AppData\Roaming\nltk_data (I think this is a default location). This is where it had always been stored and always worked fine, but suddenly nltk did not seem to recognize this location and hence looked in the next drive. To solve it, I copied all the data in that folder to C:\nltk_data, and now it is running fine again.
Anyway, I'm not sure if this is a Windows-induced problem or what exactly changed to cause working code to stop working, but this solved it.
I had the same problem (Win 7, Python 3.6.5, nltk 3.3).
A simpler solution is to just define the Windows environment variable NLTK_DATA. In my case (like @kyrenia's), the data is in C:\Users\USERNAME\AppData\Roaming\nltk_data.
This solution is described in the nltk 3.3 documentation: http://www.nltk.org/data.html
If you did not install the data to one of the above central locations, you will need to set the NLTK_DATA environment variable to specify the location of the data. (On a Windows machine, right click on “My Computer” then select Properties > Advanced > Environment Variables > User Variables > New...)
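The same thing can be done from Python itself, as long as it happens before nltk is imported (a sketch; replace USERNAME with your own account name):

```python
import os

# NLTK reads NLTK_DATA when it is imported, so set the variable first.
# The path below is the Roaming location mentioned above -- adjust USERNAME.
os.environ["NLTK_DATA"] = r"C:\Users\USERNAME\AppData\Roaming\nltk_data"
```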
I'm using the reportlab framework for creating PDFs. I'm also using a custom font in my PDFs called '3of9'. Now, sometimes I'm getting the following error:
IOError: Cannot open resource "/usr/lib/python2.6/site-packages/reportlab/fonts/LeERC___.AFM", while looking for faceName='3of9'
This doesn't happen every time, but too often. In most cases everything works well, so I have no idea why the error comes up.
Has anyone an idea how to solve this?
Either make sure you have LeERC___.AFM at the given path, or try upgrading to a more recent reportlab version.
LeERC___.AFM is part of the reportlab distribution up to version 2.1 (which can be downloaded at
http://www.reportlab.com/ftp/ReportLab_2_1.zip)
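A quick way to tell which situation you are in is to check for the file from Python; a sketch (the helper name is my own; fonts_dir is the .../reportlab/fonts directory from the traceback, which you can also locate on a working install via os.path.join(os.path.dirname(reportlab.__file__), "fonts")):

```python
import os.path

def afm_present(fonts_dir, afm_name="LeERC___.AFM"):
    """Check whether the AFM file the traceback complains about exists."""
    return os.path.isfile(os.path.join(fonts_dir, afm_name))
```

If it returns False, the install is incomplete and reinstalling or upgrading reportlab is the fix; if True, the intermittent failures likely come from something else (e.g. font registration racing between processes).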
Looking at the tutorial here (link text): it references matplotlib.get_example_data('goog.npy'). I get an error when I run this, and furthermore I can't even find the mpl-data/ folder in my macOS installation of matplotlib when searching using Spotlight. Any idea what this function is now called?
It's defined in __init__.py (around line 490), if that's what you're asking.
-- MarkusQ
P.S. The mpl_data/ directory is there too (both of them are in the top level directory). Are you sure you've got a good / complete installation?
I just had a similar (the same?) problem for Ubuntu 10.04 (Python 2.6.5, MPL 0.99.1.1).
There was no get_sample_data, only a get_example_data in /usr/lib/pymodules/python2.6/matplotlib/__init__.py, which didn't really work. The directory /usr/local/lib/python2.6/site-packages was empty.
As a workaround, I downloaded the file goog.npy directly and replaced the loading of the data with (something similar to):

import numpy as np

with open("goog.npy", "rb") as f:
    r = np.load(f).view(np.recarray)