{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Extract Features Using Dask for Large Time Series Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook demonstrates how to use the `interpreTS` library with the Dask framework to process and extract features efficiently from large time series datasets." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1: Import Libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "from interpreTS.core.feature_extractor import FeatureExtractor, Features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2: Generate Large Time Series Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we create a large dataset with 100 unique time series (`id`), each containing 1,000 data points, for a total of 100,000 rows. Each `id` represents a distinct time series." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "data = pd.DataFrame({\n", " 'id': np.repeat(range(100), 1000), # 100 time series\n", " 'time': np.tile(range(1000), 100), # 1,000 time steps per series\n", " 'value': np.random.randn(100000) # Random values\n", "})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3: Initialize the FeatureExtractor" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We specify the following parameters for feature extraction:\n", "\n", "- `features`: Extract only the mean (`Features.MEAN`).\n", "- `feature_column`: The column (`value`) from which the feature is calculated.\n", "- `id_column`: The column (`id`) that identifies each time series, used to group the data.\n", "- `window_size`: Each rolling window spans 3 samples.\n", "- `stride`: The window advances 5 samples per step."
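, "\n", "Because the stride (5) is larger than the window size (3), consecutive windows do not overlap and two samples are skipped between them. A minimal NumPy sketch of this windowing scheme (an illustration of the assumed behavior, not `interpreTS` internals):\n", "\n", "```python\n", "import numpy as np\n", "\n", "values = np.arange(12)\n", "window_size, stride = 3, 5\n", "\n", "# Window start positions: every `stride` samples, as long as a full window fits\n", "starts = range(0, len(values) - window_size + 1, stride)\n", "windows = [values[s:s + window_size] for s in starts]  # [0,1,2] and [5,6,7]\n", "means = [w.mean() for w in windows]  # one mean per window\n", "```"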
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "feature_extractor = FeatureExtractor(\n", " features=[Features.MEAN], # Extract mean feature\n", " feature_column=\"value\", # Target column\n", " id_column=\"id\", # Unique identifier for time series\n", " window_size=3, # Rolling window size\n", " stride=5 # Sliding step size\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 4: Extract Features Using Dask" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To handle the large dataset efficiently, we use the `mode='dask'` parameter in the `extract_features` method. This processes the data in parallel using Dask." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[########################################] | 100% Completed | 3.59 sms\n" ] } ], "source": [ "features_df = feature_extractor.extract_features(data, mode='dask')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 5: Display the Extracted Features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we print the first few rows of the extracted features." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
<div>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\">\n", "      <th></th>\n", "      <th>mean_value</th>\n", "    </tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr>\n", "      <th>0</th>\n", "      <td>0.147607</td>\n", "    </tr>\n", "    <tr>\n", "      <th>1</th>\n", "      <td>-1.034064</td>\n", "    </tr>\n", "    <tr>\n", "      <th>2</th>\n", "      <td>0.846525</td>\n", "    </tr>\n", "    <tr>\n", "      <th>3</th>\n", "      <td>-0.319443</td>\n", "    </tr>\n", "    <tr>\n", "      <th>4</th>\n", "      <td>-0.763688</td>\n", "    </tr>\n", "  </tbody>\n", "</table>\n", "</div>" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "features_df.head()" ] } ], "metadata": {}, "nbformat": 4, "nbformat_minor": 4 }