An example showing how to train a model end-to-end on a time series dataset and reach above 97% accuracy using just the default settings.


```python
# Run this cell to install the latest versions of fastai2, fastcore, and timeseries from GitHub
!pip install git+https://github.com/fastai/fastai2.git
!pip install git+https://github.com/fastai/fastcore.git
!pip install git+https://github.com/ai-fast-track/timeseries.git
```

```python
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai2.basics import *
from timeseries.all import *
```

Training an End-to-End Model

This is a code-dense example. For a more detailed walkthrough of all the options in the timeseries package, please refer to this [notebook](https://github.com/ai-fast-track/timeseries/blob/master/nbs/index.ipynb).

Default settings:

| Param | Value |
|-----------|------------------------------|
| model | inception_time |
| opt_func | Ranger |
| loss_func | LabelSmoothingCrossEntropy() |
| Normalize | per_sample_per_channel |
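
The per_sample_per_channel setting standardizes each channel of each individual sample along the time axis. As a rough illustration, here is a minimal PyTorch sketch of that computation (the function name and eps value are made up here; this is not the package's actual implementation):

```python
import torch

def normalize_per_sample_per_channel(x, eps=1e-8):
    # x has shape (batch, channels, time); each (sample, channel) series
    # gets its own mean and std, computed along the time axis
    mean = x.mean(dim=-1, keepdim=True)
    std = x.std(dim=-1, keepdim=True)
    return (x - mean) / (std + eps)

x = torch.randn(32, 24, 51)  # NATOPS-like batch: 24 channels, 51 time steps
assert normalize_per_sample_per_channel(x).shape == x.shape
```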

Training a model

```python
# You can choose any multivariate or univariate dataset listed in the `data.py` file
path = unzip_data(URLs_TS.NATOPS)
fnames = [path/'NATOPS_TRAIN.arff', path/'NATOPS_TEST.arff']
dls = TSDataLoaders.from_files(fnames=fnames, bs=32, batch_tfms=[Normalize()], num_workers=0)  # num_workers=0 for the Windows platform
learn = ts_learner(dls)
learn.fit_one_cycle(25, lr_max=1e-3)
```
| epoch | train_loss | valid_loss | accuracy | time |
|-------|------------|------------|----------|-------|
| 0 | 2.803271 | 1.788028 | 0.138889 | 00:02 |
| 1 | 2.640071 | 1.800265 | 0.152778 | 00:02 |
| 2 | 2.337025 | 1.809550 | 0.152778 | 00:02 |
| 3 | 1.958659 | 1.631379 | 0.375000 | 00:02 |
| 4 | 1.638520 | 0.996783 | 0.763889 | 00:02 |
| 5 | 1.393388 | 0.791485 | 0.833333 | 00:02 |
| 6 | 1.217224 | 0.741780 | 0.861111 | 00:02 |
| 7 | 1.084642 | 0.672229 | 0.861111 | 00:02 |
| 8 | 0.981334 | 0.743353 | 0.833333 | 00:02 |
| 9 | 0.901527 | 0.734145 | 0.847222 | 00:02 |
| 10 | 0.835500 | 0.730749 | 0.819444 | 00:02 |
| 11 | 0.783602 | 0.624519 | 0.888889 | 00:02 |
| 12 | 0.731175 | 0.746540 | 0.847222 | 00:02 |
| 13 | 0.688649 | 0.614995 | 0.916667 | 00:02 |
| 14 | 0.649237 | 0.624472 | 0.930556 | 00:02 |
| 15 | 0.615866 | 0.593726 | 0.944444 | 00:02 |
| 16 | 0.586982 | 0.589635 | 0.958333 | 00:02 |
| 17 | 0.562923 | 0.549071 | 0.986111 | 00:02 |
| 18 | 0.541431 | 0.557641 | 0.958333 | 00:02 |
| 19 | 0.523363 | 0.556317 | 0.944444 | 00:02 |
| 20 | 0.509103 | 0.574100 | 0.944444 | 00:02 |
| 21 | 0.495959 | 0.559972 | 0.958333 | 00:02 |
| 22 | 0.486048 | 0.553628 | 0.958333 | 00:02 |
| 23 | 0.477561 | 0.550643 | 0.958333 | 00:02 |
| 24 | 0.470691 | 0.549953 | 0.958333 | 00:02 |
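
Once trained, the learner can be saved and reloaded with fastai2's standard export machinery. A minimal sketch, assuming the usual `learn.export` / `load_learner` / `get_preds` workflow applies to this learner (the file name is arbitrary, and re-scoring the validation set is just one way to sanity-check the reloaded model):

```python
learn.export('natops_inception_time.pkl')   # serialize the trained Learner

learn_inf = load_learner('natops_inception_time.pkl')
preds, targs = learn_inf.get_preds(dl=dls.valid)
print((preds.argmax(dim=-1) == targs).float().mean())  # validation accuracy
```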

Graphs

```python
dls.show_batch(chs=range(0,12,3))       # plot a sample batch, every third of the first 12 channels
learn.recorder.plot_loss()              # training and validation loss curves
learn.show_results(chs=range(0,24,3))   # predictions on a sample batch
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.most_confused()
```

```
[('2.0', '1.0', 1), ('2.0', '3.0', 1), ('3.0', '2.0', 1)]
```
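
Each tuple returned by `most_confused` is (actual class, predicted class, number of occurrences), so only three validation samples are misclassified here, consistent with the final accuracy above.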