Epileptic seizure detection with the InceptionTime SageMaker algorithm

Real-time monitoring of epileptic patients can prevent injuries and complications by alerting caregivers or medical personnel during a seizure, ensuring prompt assistance and reducing the risk of accidents or unexpected death. Continuous remote patient monitoring also allows healthcare providers to collect more accurate and detailed data on seizure frequency and duration, which enables them to tailor treatment plans more effectively.

Different wearable devices have been developed for long-term real-time monitoring of epileptic patients, offering a more practical and less invasive alternative to traditional electroencephalographic (EEG) systems. Deep learning models, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, have been found to be effective at detecting different types of epileptic seizures from wearable device data [1].

In this post, we will demonstrate how to use our Amazon SageMaker implementation of the InceptionTime model [2], the InceptionTime SageMaker algorithm, for detecting epileptic seizures from the readings of a wearable 3D accelerometer sensor. We will train and validate the InceptionTime model on a small dataset collected from healthy participants who simulated epileptic seizures following a specific medical protocol [3]. We will find that the InceptionTime model achieves a ROC-AUC score of 99.63% on this dataset.

Model

InceptionTime [2] is a state-of-the-art deep learning model for time series classification. It is an ensemble of several networks with identical architectures that differ only in the initial values of their weights, which are sampled from the Glorot uniform distribution.

Each model consists of a stack of Inception blocks. Each block includes three convolutional layers with kernel sizes of 10, 20 and 40 and a max pooling layer. The block input is processed by the four layers in parallel, and the four outputs are concatenated before being passed to a batch normalization layer followed by a ReLU activation.

Inception block.
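
To make this concrete, below is a minimal PyTorch sketch of the Inception block as described above. This is our own illustration, not the algorithm's actual implementation; following the original InceptionTime code, we add a convolutional layer with a kernel size of 1 after the max pooling layer so that all four parallel branches produce the same number of channels.

import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_channels, filters=32):
        super().__init__()
        # three parallel convolutions with kernel sizes 10, 20 and 40;
        # padding="same" keeps the time dimension unchanged
        self.convs = nn.ModuleList([
            nn.Conv1d(in_channels, filters, kernel_size=k, padding="same", bias=False)
            for k in (10, 20, 40)
        ])
        # parallel max pooling branch, followed by a kernel-size-1 convolution
        # (as in the original InceptionTime code) to align the channel count
        self.pool = nn.Sequential(
            nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
            nn.Conv1d(in_channels, filters, kernel_size=1, bias=False),
        )
        self.bn = nn.BatchNorm1d(4 * filters)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, time); concatenate the four branch outputs
        # along the channel dimension, then apply batch norm and ReLU
        out = torch.cat([conv(x) for conv in self.convs] + [self.pool(x)], dim=1)
        return self.relu(self.bn(out))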

A residual connection is applied between the input time series and the output of the second block, and thereafter every three blocks. The residual connection processes its input with an additional convolutional layer with a kernel size of 1, followed by a batch normalization layer. The processed input is then added to the block output, and the sum is passed through a ReLU activation. The output of the last block is passed to an average pooling layer, which removes the time dimension, and then to a final linear layer.
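
Continuing the sketch, a stack of such blocks with the residual connections and the classification head described above might look as follows. Again, this is an illustration of the architecture under our own naming, not the algorithm's actual code; the shortcut schedule (after every third block) follows the original implementation.

class InceptionNetwork(nn.Module):
    def __init__(self, in_channels, num_classes, filters=32, depth=6):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.shortcuts = nn.ModuleList()
        channels = in_channels
        for d in range(depth):
            self.blocks.append(InceptionBlock(channels, filters))
            channels = 4 * filters
            if d % 3 == 2:
                # residual connection: a kernel-size-1 convolution and batch
                # normalization applied to the shortcut input before the addition
                shortcut_in = in_channels if d == 2 else 4 * filters
                self.shortcuts.append(nn.Sequential(
                    nn.Conv1d(shortcut_in, channels, kernel_size=1, bias=False),
                    nn.BatchNorm1d(channels),
                ))
        # final linear layer applied after average pooling over time
        self.linear = nn.Linear(channels, num_classes)

    def forward(self, x):
        residual, s = x, 0
        for d, block in enumerate(self.blocks):
            x = block(x)
            if d % 3 == 2:
                # add the processed shortcut input and apply a ReLU
                x = torch.relu(x + self.shortcuts[s](residual))
                residual, s = x, s + 1
        # average pooling removes the time dimension before the linear layer
        return self.linear(x.mean(dim=-1))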

At inference time, the class probabilities predicted by the different models are averaged, yielding a single predicted probability for each class and, from these, a single predicted label for each input.
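
As a sketch of this ensembling step, using the hypothetical classes defined above and the dimensions of this post's dataset (3 channels, 4 classes):

import torch.nn.functional as F

# a hypothetical ensemble of 5 networks differing only in their random initial weights
models = [InceptionNetwork(in_channels=3, num_classes=4) for _ in range(5)]

x = torch.randn(8, 3, 206)  # a dummy batch of 8 three-channel time series
with torch.no_grad():
    # average the predicted class probabilities over the ensemble
    probabilities = torch.stack(
        [F.softmax(model.eval()(x), dim=-1) for model in models]
    ).mean(dim=0)
predicted_labels = probabilities.argmax(dim=-1)  # one label per time series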

Note

The InceptionTime SageMaker algorithm implements the model as described above, with one exception: the initial values of the weights are not sampled from the Glorot uniform distribution but are determined by PyTorch’s default initialization method.
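
For completeness, here is how the paper's Glorot uniform initialization could be applied to our sketch above; PyTorch's default for convolutional and linear layers is Kaiming uniform instead.

def glorot_init(module):
    # re-initialize convolutional and linear layers with Glorot uniform weights
    if isinstance(module, (nn.Conv1d, nn.Linear)):
        nn.init.xavier_uniform_(module.weight)

model = InceptionNetwork(in_channels=3, num_classes=4)
model.apply(glorot_init)  # omit this line to keep PyTorch's default initialization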

Data

We use the "Epilepsy" dataset introduced in [3] and available in the UCR Time Series Classification Archive [4]. The data was collected from 6 study participants who conducted 4 different activities while wearing a triaxial accelerometer sensor on their wrist: walking, running, sewing and simulating epileptic seizures. The epileptic seizures were simulated following a protocol defined by a medical expert.

The dataset contains 275 three-dimensional time series. Each time series includes 206 observations. The data was recorded at a sampling frequency of 16 Hz, and therefore the time series span approximately 13 seconds. 137 time series (corresponding to 3 participants) are included in the training set, while the remaining 138 time series (corresponding to the 3 remaining participants) are included in the test set.

Epilepsy dataset (combined training and test sets).

Code

Warning

To run the code below, you need an active subscription to the InceptionTime SageMaker algorithm. You can subscribe to a free trial from the AWS Marketplace in order to get your Amazon Resource Name (ARN). In this post we use version 1.8 of the InceptionTime SageMaker algorithm, which runs in the PyTorch 2.1.0 Python 3.10 deep learning container.

Environment Set-Up

We start by importing all the requirements and setting up the SageMaker environment.

import io
import sagemaker
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import arff
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score, roc_auc_score

# SageMaker algorithm ARN, replace the placeholder below with your AWS Marketplace ARN
algo_arn = "arn:aws:sagemaker:<...>"

# SageMaker session
sagemaker_session = sagemaker.Session()

# SageMaker role
role = sagemaker.get_execution_role()

# S3 bucket
bucket = sagemaker_session.default_bucket()

# EC2 instance
instance_type = "ml.m5.2xlarge"

Data Preparation

Next, we define a function for reading the data and preparing it in the format required by the algorithm. The algorithm expects the column names of the one-hot encoded class labels to start with "y" and the column names of the time series values to start with "x". It also requires a column named "sample" with unique sample identifiers and a column named "feature" with unique feature identifiers.
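
For illustration, a hypothetical dataset with two samples, two features (dimensions), two classes ("A" and "B") and two time steps would be laid out like this, with one row per sample-feature pair (all values below are made up):

pd.DataFrame({
    "sample":  [1, 1, 2, 2],          # unique sample identifiers
    "feature": [1, 2, 1, 2],          # unique feature identifiers
    "y_A":     [1, 1, 0, 0],          # one-hot encoded class labels
    "y_B":     [0, 0, 1, 1],
    "x_1":     [0.5, 0.1, 0.3, 0.2],  # time series values
    "x_2":     [0.4, 0.2, 0.6, 0.1],
})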

def read_data(dimension, split):

    # load the data
    df = pd.DataFrame(data=arff.loadarff(f"EpilepsyDimension{dimension}_{split}.arff")[0])

    # extract the features and labels
    features, labels = df.iloc[:, :-1], df.iloc[:, -1:]

    # rename the features
    features.columns = [f"x_{i}" for i in range(1, 1 + features.shape[1])]

    # one-hot encode the labels
    ohe = OneHotEncoder(sparse_output=False).fit(labels)
    labels = pd.DataFrame(data=ohe.transform(labels), columns=[f'y_{c.decode("utf-8")}' for c in ohe.categories_[0]])

    # merge the labels and features
    data = labels.join(features)

    # add the sample ids
    data.insert(0, "sample", range(1, 1 + len(df)))

    # add the feature ids
    data.insert(1, "feature", dimension)

    return data

Training Data

We now load the training data from the ARFF files.

# load the training data
training_dataset = pd.concat([read_data(d, "TRAIN") for d in range(1, 4)]).sort_values(by=["sample", "feature"], ignore_index=True)
training_dataset.shape
(411, 212)
training_dataset.head()
First rows of training dataset
training_dataset.tail()
Last rows of training dataset

We save the training dataset to a CSV file in S3, such that it can be used by the training algorithm.

# save the training data in S3
training_data = sagemaker_session.upload_string_as_file_body(
    body=training_dataset.to_csv(index=False),
    bucket=bucket,
    key="Epilepsy_train.csv"
)

Test Data

We then load the test data from the ARFF files.

# load the test data
test_dataset = pd.concat([read_data(d, "TEST") for d in range(1, 4)]).sort_values(by=["sample", "feature"], ignore_index=True)
test_dataset.shape
(414, 212)

We split the test data into two different data frames: a data frame containing the time series that we will use for inference, and a separate data frame containing the class labels that we will use for validation.

# extract the time series
test_inputs = test_dataset[["sample", "feature"] + [c for c in test_dataset.columns if c.startswith("x")]]
test_inputs.head()
First rows of test inputs
test_inputs.tail()
Last rows of test inputs
# extract the class labels
test_outputs = test_dataset[["sample"] + [c for c in test_dataset.columns if c.startswith("y")]].drop_duplicates(ignore_index=True)
test_outputs.head()
First rows of test outputs
test_outputs.tail()
Last rows of test outputs

We save the data frame with the time series to a CSV file in S3, such that it can be used by the inference algorithm.

# save the test data in S3
test_data = sagemaker_session.upload_string_as_file_body(
    body=test_inputs.to_csv(index=False),
    bucket=bucket,
    key="Epilepsy_test.csv"
)

Training

Now that the training dataset is available in an accessible S3 bucket, we can train the model. We train an ensemble of 5 models, where each model has 6 blocks. We set the number of filters of each convolutional layer in each block equal to 32. We run the training for 100 epochs with a batch size of 256 and a learning rate of 0.001.

# create the estimator
estimator = sagemaker.algorithm.AlgorithmEstimator(
    algorithm_arn=algo_arn,
    role=role,
    instance_count=1,
    instance_type=instance_type,
    input_mode="File",
    sagemaker_session=sagemaker_session,
    hyperparameters={
        "filters": 32,
        "depth": 6,
        "models": 5,
        "batch-size": 256,
        "lr": 0.001,
        "epochs": 100,
        "task": "multiclass"
    },
)

# run the training job
estimator.fit({"training": training_data})

Inference

Once the training job has completed, we can run a batch transform job on the test dataset.

# create the transformer
transformer = estimator.transformer(
    instance_count=1,
    instance_type=instance_type,
    max_payload=100,
)

# run the transform job
transformer.transform(
    data=test_data,
    content_type="text/csv",
)

The results are saved in an output file in S3 with the same name as the input file and the ".out" file extension. The results include the predicted class labels, whose column names start with "y", and the predicted class probabilities, whose column names start with "p".

# load the model outputs from S3
predictions = sagemaker_session.read_s3_file(
    bucket=bucket,
    key_prefix=f"{transformer.latest_transform_job.name}/Epilepsy_test.csv.out"
)

# convert the model outputs to data frame
predictions = pd.read_csv(io.StringIO(predictions))
predictions.shape
(138, 9)
predictions.head()
First rows of predictions
predictions.tail()
Last rows of predictions

Evaluation

Finally, we calculate the classification metrics on the test set.

# calculate the classification metrics
metrics = pd.DataFrame(columns=[c.replace("y_", "") for c in test_outputs.columns if c.startswith("y")])
for c in metrics.columns:
    metrics[c] = pd.Series({
        "Accuracy": accuracy_score(y_true=test_outputs[f"y_{c}"], y_pred=predictions[f"y_{c}"]),
        "ROC-AUC": roc_auc_score(y_true=test_outputs[f"y_{c}"], y_score=predictions[f"p_{c}"]),
        "Precision": precision_score(y_true=test_outputs[f"y_{c}"], y_pred=predictions[f"y_{c}"]),
        "Recall": recall_score(y_true=test_outputs[f"y_{c}"], y_pred=predictions[f"y_{c}"]),
        "F1": f1_score(y_true=test_outputs[f"y_{c}"], y_pred=predictions[f"y_{c}"]),
    })
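
The seizure-detection numbers quoted below correspond to the seizure class. Assuming the class labels in the ARFF files are EPILEPSY, WALKING, RUNNING and SAWING, that column can be inspected with:

# classification metrics for the seizure class
metrics["EPILEPSY"].round(4)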

We find that the model achieves a ROC-AUC score of 99.63% and an accuracy score of 97.1% in the detection of epileptic seizures.

Results on Epilepsy dataset (test set).

After the analysis has been completed, we can delete the model.

# delete the model
transformer.delete_model()

Tip

You can download the notebook with the full code from our GitHub repository.

References

[1] Yu, S., El Atrache, R., Tang, J., Jackson, M., Makarucha, A., Cantley, S., Sheehan, T., Vieluf, S., Zhang, B., Rogers, J.L., Mareels, I., Harrer, S. & Loddenkemper, T. (2023). Artificial intelligence‐enhanced epileptic seizure detection by wearables. Epilepsia, 64(12), 3213-3226. doi: 10.1111/epi.17774.

[2] Ismail Fawaz, H., Lucas, B., Forestier, G., Pelletier, C., Schmidt, D. F., Weber, J., Webb, G. I., Idoumghar, L., Muller, P. A., & Petitjean, F. (2020). InceptionTime: Finding AlexNet for time series classification. Data Mining and Knowledge Discovery, 34(6), 1936-1962. doi: 10.1007/s10618-020-00710-y.

[3] Villar, J. R., Vergara, P., Menéndez, M., de la Cal, E., González, V. M., & Sedano, J. (2016). Generalized models for the classification of abnormal movements in daily life and its applicability to epilepsy convulsion recognition. International Journal of Neural Systems, 26(6), 1650037. doi: 10.1142/S0129065716500374.

[4] Dau, H. A., Bagnall, A., Kamgar, K., Yeh, C. C. M., Zhu, Y., Gharghabi, S., Ratanamahatana, C. A., & Keogh, E. (2019). The UCR time series archive. IEEE/CAA Journal of Automatica Sinica, 6(6), 1293-1305. doi: 10.1109/JAS.2019.1911747.