I know this particular question has been asked many times, but I have searched through the existing answers and tried multiple approaches, all with the same issue persisting, so unfortunately I have to ask for help because I cannot solve it myself.
I have the following package installed: https://github.com/skagr/footballdata
When calling read_games I get the dreaded error message. After reading through some of the package source it looks as though it should overwrite the data if it already exists, but I think this is more of a Windows error.
Any help will be appreciated.
I have tried creating the parent directory structure, but the same error occurs. I also tried using pathlib to allow the folder to already exist (exist_ok=True), but that still doesn't solve the error.
edit: Code added.
import numpy as np
import pandas as pd
import requests
import unidecode
import footballdata as foo
import pathlib

# Make sure the data directory exists before the package tries to write to it
pathlib.Path('C:\\Users\\username\\.spyder-py3\\data').mkdir(parents=True, exist_ok=True)

# Read the Premier League match history for season 22
EPL = foo.MatchHistory('ENG-Premier League', [22]).read_games()
print(EPL.head())
As you can see there isn't much to it, but the error stops anything getting past this point.
thanks
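If the underlying exception is a FileExistsError raised while the package creates its data directory, one workaround is to remove the stale directory before reading so the package can recreate it cleanly. A minimal sketch, assuming the cache location below (check the traceback for the actual path):

import pathlib
import shutil

import footballdata as foo

# Hypothetical cache location -- replace with the path named in the traceback
cache_dir = pathlib.Path('C:\\Users\\username\\.spyder-py3\\data')

# Remove the stale directory so the package can recreate it
if cache_dir.exists():
    shutil.rmtree(cache_dir)

EPL = foo.MatchHistory('ENG-Premier League', [22]).read_games()
print(EPL.head())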
I'm trying to use scipy.io to load the contents of a .mat file into a Python dictionary. This works except for the data in the table "data", and it happens regardless of the file I'm loading. Is there a way to fix this, or a workaround I can use? I've looked into mat4py, mat73, and mat2py and can't seem to get any of them working. The code definitely finds the correct file.
import os
import scipy.io  # 'import scipy as sp' alone does not expose scipy.io

Dir = r"Directory"
os.chdir(Dir)

mat_contents = scipy.io.loadmat("file")
mat_contents['data']
This returns the following error: KeyError: 'data'.
In addition,
mat_contents.keys()
gives the following output:
dict_keys(['header', 'version', 'globals', 'None', 'function_workspace'])
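One likely cause, assuming the file was saved from MATLAB with the variable stored as a table: scipy.io.loadmat cannot decode MATLAB table objects, and they end up hidden inside the opaque 'function_workspace' entry shown in the keys above. A workaround is to re-save the variable from MATLAB as a plain struct or array first, e.g. s = table2struct(data); save('file.mat', 's'), and then load that. A minimal sketch, assuming the re-saved file and the variable name 's':

import scipy.io

# After re-saving from MATLAB with: s = table2struct(data); save('file.mat', 's')
mat_contents = scipy.io.loadmat("file.mat", simplify_cells=True)

# 's' comes back as plain Python structures instead of an opaque table object
print(mat_contents['s'])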
I made a simple piece of code that loads data called 'original.dat' from a folder named 'data' on my computer. The code was working great and I was able to view my spectra graph. This morning I ran the code again, but it completely crashed with the error "OSError: data/original.dat not found.", even though nothing changed. The data is in fact still in the folder named 'data' and there aren't any spelling mistakes. Can anyone please help me understand why it's now giving me this error? The code was working perfectly the day before.
Here is the code I used:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
OPUSimaginary = np.loadtxt('data/original.dat', delimiter = ',')
A few things you can do to avoid file-not-found issues:
Add a check if the file exists before trying to load it.
Example:
import os
os.path.isfile(fname)  # Returns True if the file is found
Make sure the file permissions are correct and the file can be read.
sudo chmod -R a+rwx <file-name>
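Since a relative path like 'data/original.dat' is resolved against the current working directory, the most common cause of this error is the script or notebook being launched from a different directory than before. A minimal diagnostic sketch (the file path mirrors the question; everything else is illustrative):

import os
from pathlib import Path

# Show where relative paths are currently being resolved from
print(os.getcwd())

# Check the file relative to the working directory
data_file = Path('data') / 'original.dat'
print(data_file.resolve(), data_file.is_file())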
I'm new to Databricks and need help writing a pandas DataFrame to the Databricks local file system.
I searched Google but could not find any case similar to this, and I also tried the help guide provided by Databricks (attached), but that did not work either. I attempted the changes below to try my luck; the commands run just fine, but the file never gets written to the directory (the expected wrtdftodbfs.txt is not created).
df.to_csv("/dbfs/FileStore/NJ/wrtdftodbfs.txt")
Result: throws the below error
FileNotFoundError: [Errno 2] No such file or directory:
'/dbfs/FileStore/NJ/wrtdftodbfs.txt'
df.to_csv("\\dbfs\\FileStore\\NJ\\wrtdftodbfs.txt")
Result: No errors, but nothing written either
df.to_csv("dbfs\\FileStore\\NJ\\wrtdftodbfs.txt")
Result: No errors, but nothing written either
df.to_csv(path ="\\dbfs\\FileStore\\NJ\\",file="wrtdftodbfs.txt")
Result: TypeError: to_csv() got an unexpected keyword argument 'path'
df.to_csv("dbfs:\\FileStore\\NJ\\wrtdftodbfs.txt")
Result: No errors, but nothing written either
df.to_csv("dbfs:\\dbfs\\FileStore\\NJ\\wrtdftodbfs.txt")
Result: No errors, but nothing written either
The directory exists and files created manually show up in it, but pandas to_csv never writes anything, nor does it error out.
dbutils.fs.put("/dbfs/FileStore/NJ/tst.txt","Testing file creation and existence")
dbutils.fs.ls("dbfs/FileStore/NJ")
Out[186]: [FileInfo(path='dbfs:/dbfs/FileStore/NJ/tst.txt',
name='tst.txt', size=35)]
Appreciate your time and pardon me if the enclosed details are not clear enough.
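Worth noting: on the Linux driver, backslashes are ordinary filename characters, so a call like df.to_csv("\\dbfs\\FileStore\\NJ\\wrtdftodbfs.txt") succeeds but creates a local file literally named \dbfs\FileStore\NJ\wrtdftodbfs.txt in the working directory, which is why nothing shows up in DBFS. It is also worth checking whether the /dbfs FUSE mount is available at all (it is not on Databricks Community Edition). A minimal sketch:

import os

# If this prints False, pandas cannot reach DBFS through /dbfs,
# and to_csv('/dbfs/...') cannot work on this cluster
print(os.path.exists('/dbfs/FileStore'))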
Try this in your Databricks notebook:
import pandas as pd
from io import StringIO
data = """
CODE,L,PS
5d8A,N,P60490
5d8b,H,P80377
5d8C,O,P60491
"""
df = pd.read_csv(StringIO(data), sep=',')
#print(df)
df.to_csv('/dbfs/FileStore/NJ/file1.txt')
pandas_df = pd.read_csv("/dbfs/FileStore/NJ/file1.txt", header='infer')
print(pandas_df)
This worked out for me:
outname = 'pre-processed.csv'
outdir = '/dbfs/FileStore/'
dfPandas.to_csv(outdir+outname, index=False, encoding="utf-8")
To download the file, add files/filename to your notebook URL (before the question mark ?):
https://community.cloud.databricks.com/files/pre-processed.csv?o=189989883924552#
(you will need to adapt this to your own home URL; for me it is:
https://community.cloud.databricks.com/?o=189989883924552#)
(screenshot: the DBFS file explorer)
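Another common pattern, if writing through /dbfs directly keeps failing: let pandas write to the driver's local disk first, then copy the result into DBFS with dbutils. A minimal sketch (the target folder mirrors the question; the temporary path is an assumption):

# Write to the driver's local filesystem first
df.to_csv('/tmp/wrtdftodbfs.txt', index=False)

# Then copy from local disk into DBFS
dbutils.fs.cp('file:/tmp/wrtdftodbfs.txt', 'dbfs:/FileStore/NJ/wrtdftodbfs.txt')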
So I tried to run this code.
import pandas
i = input("hi input a csv file..")
df = pandas.read_csv(i)
and I got an error saying
FileNotFoundError: File b'"C:\\Users\\thomas.swenson\\Downloads\\hi.csv"' does not exist
but then if I hard-code that path that 'doesn't exist' into my program:
import pandas
df = pandas.read_csv("C:\\Users\\thomas.swenson\\Downloads\\hi.csv")
it works just fine.
Anyone know why this may be happening?
I'm running python 3.6 and using a virtualenv
It looks like the input string contained an extra set of quotes around the path (you can see them inside the string in the error message), so I'll just have to strip them and it works fine.
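A minimal sketch of that fix, stripping any surrounding quotes (e.g. from Windows' "Copy as path") before handing the path to pandas:

import pandas

i = input("hi input a csv file..")

# Strip surrounding double or single quotes that come along with pasted paths
df = pandas.read_csv(i.strip('"\''))
print(df.head())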