An example showing how to train an end-to-end time series model and reach above 97% accuracy using just the default settings.
# Run this cell to install the latest version of fastai2 shared on github
!pip install git+https://github.com/fastai/fastai2.git
# Run this cell to install the latest version of fastcore shared on github
!pip install git+https://github.com/fastai/fastcore.git
# Run this cell to install the latest version of timeseries shared on github
!pip install git+https://github.com/ai-fast-track/timeseries.git
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai2.basics import *
from timeseries.all import *
Training an End-to-End Model¶
This is a code-dense example. For a more detailed walkthrough of all the options offered by the `timeseries` package, please refer to this [notebook](https://github.com/ai-fast-track/timeseries/blob/master/nbs/index.ipynb).
Default settings:¶
| Param | Value |
|---|---|
| model | inception_time |
| opt_func | Ranger |
| loss_func | LabelSmoothingCrossEntropy() |
| Normalize | per_sample_per_channel |
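The `per_sample_per_channel` setting suggests that each channel of each individual sample is standardized with its own mean and standard deviation, rather than with dataset-wide statistics. A minimal pure-Python sketch of that idea (independent of the library's actual implementation, which operates on tensors):

```python
import statistics

def normalize_per_sample_per_channel(sample):
    """Standardize each channel of one multivariate sample independently.

    `sample` is a list of channels, each channel a list of floats.
    Every channel ends up with zero mean and unit standard deviation.
    """
    normalized = []
    for channel in sample:
        mean = statistics.fmean(channel)
        std = statistics.pstdev(channel) or 1.0  # guard against flat channels
        normalized.append([(x - mean) / std for x in channel])
    return normalized

# A toy 2-channel sample whose channels live on very different scales
sample = [[1.0, 2.0, 3.0], [100.0, 200.0, 300.0]]
out = normalize_per_sample_per_channel(sample)
```

Normalizing per sample (instead of per dataset) makes the model robust to per-recording offset and scale differences, which is common practice for multivariate sensor data such as NATOPS.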
Training a model¶
# You can choose any multivariate or univariate dataset listed in the `data.py` file
path = unzip_data(URLs_TS.NATOPS)
fnames = [path/'NATOPS_TRAIN.arff', path/'NATOPS_TEST.arff']
dls = TSDataLoaders.from_files(fnames=fnames, bs=32, batch_tfms=[Normalize()], num_workers=0)  # num_workers=0 is needed on Windows
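`TSDataLoaders.from_files` reads the two `.arff` files and builds train/validation `DataLoaders`, where each loader serves the samples in mini-batches of `bs` items. A minimal sketch of that batching step (the count of 180 is used here only as a toy dataset size, not a claim about the loader's internals):

```python
def batches(items, bs):
    """Yield successive mini-batches of size `bs`; the last batch may be smaller."""
    for i in range(0, len(items), bs):
        yield items[i:i + bs]

# 180 toy samples at bs=32 -> 5 full batches plus one partial batch of 20
dummy_items = list(range(180))
all_batches = list(batches(dummy_items, 32))
```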
learn = ts_learner(dls)
learn.fit_one_cycle(25, lr_max=1e-3)
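`fit_one_cycle(25, lr_max=1e-3)` trains for 25 epochs with the one-cycle policy: the learning rate warms up to `lr_max` and then anneals back down. A minimal sketch of such a cosine-annealed schedule; the `pct_start`, `div`, and `div_final` values mirror fastai's commonly cited defaults but should be treated as assumptions here:

```python
import math

def one_cycle_lr(pct, lr_max=1e-3, pct_start=0.25, div=25.0, div_final=1e5):
    """Learning rate at training progress `pct` in [0, 1] under a one-cycle schedule."""
    def cos_anneal(start, end, p):
        # Cosine interpolation from `start` (p=0) to `end` (p=1)
        return end + (start - end) * (1 + math.cos(math.pi * p)) / 2
    if pct < pct_start:  # warmup: lr_max/div -> lr_max
        return cos_anneal(lr_max / div, lr_max, pct / pct_start)
    # annealing: lr_max -> lr_max/div_final
    return cos_anneal(lr_max, lr_max / div_final, (pct - pct_start) / (1 - pct_start))
```

The warmup phase acts as a regularizer early in training, while the long annealing tail lets the optimizer settle into a minimum.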
Graphs¶
dls.show_batch(chs=range(0,12,3))
learn.recorder.plot_loss()
learn.show_results(chs=range(0,24,3))
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.most_confused()
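Conceptually, `plot_confusion_matrix` and `most_confused` are both built on a confusion matrix of validation predictions. A minimal pure-Python sketch of the two ideas (the function names echo the fastai API but this is not its implementation):

```python
def confusion_matrix(targets, preds, n_classes):
    """cm[i][j] counts samples whose true class is i and predicted class is j."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(targets, preds):
        cm[t][p] += 1
    return cm

def most_confused(cm, min_val=1):
    """Off-diagonal (actual, predicted, count) triples, most frequent first."""
    pairs = [(i, j, cm[i][j])
             for i in range(len(cm)) for j in range(len(cm))
             if i != j and cm[i][j] >= min_val]
    return sorted(pairs, key=lambda x: -x[2])

# Toy labels for a 3-class problem: two samples are misclassified
targets = [0, 0, 1, 1, 2, 2]
preds   = [0, 1, 1, 1, 2, 1]
cm = confusion_matrix(targets, preds, 3)
mc = most_confused(cm)
```

The diagonal of `cm` holds the correct predictions; `most_confused` surfaces the class pairs the model mixes up most often, which is usually the fastest way to spot systematic errors.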