1 Introduction

In almost all areas of science and technology, sensors are becoming more prevalent. In recent years we have seen applications of sensor technology in fields as diverse as energy saving in smart home environments (Lima et al. 2015), performance assessment in archery (Eckelt et al. 2020), detection of mooring ships (Waterbolk et al. 2019), early detection of Alzheimer disease (Varatharajan et al. 2018) and recognition of emotional states (Kołakowska et al. 2020), to name just a few.

Our main interest lies in the detection of human activities using sensors attached to the body. Sensors generate unannotated raw data, which suggests the use of unsupervised learning methods. If an activity specified in advance is of interest, however, then supervised learning and labelled data are required. Since labelling activities manually from sensor data is labour-intensive and prone to errors, there is a need for fast and accurate automated methods.

Human activity recognition (HAR) has attracted much attention since its inception in the ’90s. A plethora of methods are currently used to detect human activities (Lara and Labrador 2013), with various deep learning techniques leading the charge (Minh Dang et al. 2020; Wang et al. 2019). In many studies (Ann Ronao and Cho 2017; Capela et al. 2015; Aviles-Cruz et al. 2019) only sensors embedded in a smartphone are used to classify user activities. Physical sensors, such as accelerometers or gyroscopes attached directly to the body, and video recordings from a camera are the most popular sources of data for activity recognition (Rednic et al. 2012; Zhu and Sheng 2011; Cornacchia et al. 2016). Cameras can either be placed on the subject (Li et al. 2011; Ryoo and Matthies 2013; Watanabe et al. 2011) or they can observe the subject (Song and Chen 2011; Laptev et al. 2008; Ke et al. 2005). Only rarely are both camera and inertial sensor data captured at the same time (Chen et al. 2015).

The temporal structure of the time series should be taken into account when choosing a method for activity recognition. Simple classification techniques (such as logistic regression or decision trees) ignore time dependencies, so their output typically needs to be corrected afterwards. Alternatively, methods which are more complicated and more difficult to train have to be deployed. Another challenge lies in the reliability of manual labelling (in the case of supervised learning). Quite often it is unreasonable to assume that the labels annotating the observed data are exact with regard to the timings of transitions from one activity to another (Ward et al. 2006). Timing uncertainty can be caused by deficiencies of the manual labelling or by the inability to objectively detect boundaries between different activities. This issue is well known in the literature; for instance, Yeh et al. (2017) introduced a scalable, parameter-free and domain-agnostic algorithm that deals with this problem in the case of one-dimensional time series.

The main contribution of this paper is the introduction of a post-processing procedure which improves the result of an activity classification by eliminating events that are too short. The method requires a single parameter, which can be interpreted as the minimum duration of the activities (hence the choice of this parameter is driven by domain knowledge). It allows us to mitigate the problem of activities being fragmented in cases where some domain-specific information about state durations is available. In the current literature, ad hoc techniques are employed for post-processing of human activities; they are particularly suitable when the initial classifier already performs satisfactorily. A method that exemplifies this approach, utilizing majority voting, can be found in the article by Shakerian et al. (2022). Some more advanced approaches have also been devised in special cases, e.g. the approach proposed by Gil-Martín et al. (2020), which is limited to neural network classifiers. In contrast to existing methods, our post-processing procedure ensures the removal of all too short events and allows the user to specify the minimum length of the activities accepted in the post-processed result. Our empirical results show that the method significantly improves the performance of classical machine learning classifiers. This enables simple and fast but less accurate classification methods to be upgraded to accurate and fast classifiers.

In order to compare the quality of competing activity recognition methods, an appropriate criterion for evaluating performance is needed (also in order to demonstrate fairly the benefit of the post-processing procedure we introduce). Below are some commonly used performance measures:

  • accuracy, precision, the F-measure (Lara and Labrador 2013; Lima et al. 2019),

  • similarity measures for time series classification (Serrà and Arcos 2014), such as Dynamic Time Warping or Minimum Jump Costs Dissimilarity,

  • custom vector-valued performance metric (Ward et al. 2011).

Our objective is to design a performance measure that satisfies problem-specific conditions, which will be specified later.

The outline of the paper is as follows. Section 2 provides a method for improving classification with a post-processing scheme that uses background knowledge about the specific context. In particular, it validates the state durations and provides an improved classification that satisfies the physical constraints on the state durations imposed by the context. Section 3 introduces specialized performance measures for assessing the quality of classification in general and in activity recognition in particular. The new performance measure also serves the purpose of showing the advantages of the post-processing fairly. Section 4 presents an application of the techniques in a simulated setting, where the post-processing method improves the estimates significantly; similar results are achieved in an application to football data.

2 Improving classification by imposing physical restrictions

2.1 Post-processing by projection

When recognizing human activities, it is often the case that the result of the classification contains events (time intervals in which the classification result is constant) that are too short.Footnote 1 Usually ad hoc methods are used to discard those events, e.g. removing any short event and replacing it with the next state in the classification whose duration exceeds a fixed threshold. There are also more advanced approaches, such as the one proposed by Gil-Martín et al. (2020). However, this particular method is suitable only when a neural network is the classifier of choice; it does not ensure that too short events will always be eliminated (no matter what is meant exactly by ‘too short’); and it does not provide an intuitive understanding of the choice of its tuning parameter. Hence our interest in a more formal method that can be used in combination with any activity classifier. The goal of this section is to introduce a formalized approach to correcting the classifier’s mistakes regarding the activity durations by means of a novel post-processing procedure.

Consider the set of states \(\mathcal {S}=\{1,\ldots ,M\}\) and a metric d on \(\mathcal {S}\). Let \(\rho\) denote the discrete metricFootnote 2 on \(\mathcal {S}\). Any state-valued function of time will be called a state sequence. In reality we are only able to obtain a discrete-time signal; however, the relevant information contained in such a signal is the list of all state transitions, which can more easily be encoded in a function with a continuous argument. Hence, we define \(\mathcal {T}\), the set of all càdlàgFootnote 3 functions \(f:\mathbb {R}\rightarrow \mathcal {S}\) with a finite number of discontinuities. We define the standard distance induced by a metric d between two state sequences as

$$\begin{aligned} \text {dist}:\mathcal {T}\times \mathcal {T}\ni (f,g)\rightarrow \text {dist}(f,g)=\int \limits _\mathbb {R}d(f(t),g(t))dt. \end{aligned}$$
(1)

If d is a metric on \(\mathcal {S}\), then \(\text {dist}\) is a metric on \(\mathcal {T}\). The standard distance induced by the discrete metric is the total time spent by f in a state different from that of g.
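As a concrete illustration, the standard distance induced by the discrete metric can be computed directly for piecewise-constant sequences. The sketch below is a minimal illustration; the representation of a state sequence as a sorted list of `(time, state)` pairs (the first state extending to \(-\infty\), both sequences agreeing outside their jumps) is a hypothetical implementation choice, not part of the formal development.

```python
import math

def value_at(steps, t):
    """Value of a step function, given as sorted (time, state) pairs, at time t."""
    current = steps[0][1]
    for time, state in steps:
        if time <= t:
            current = state
    return current

def dist(f, g):
    """Standard distance induced by the discrete metric: total time f and g
    spend in different states (assumes they agree near +/- infinity)."""
    cuts = sorted({t for t, _ in f + g if math.isfinite(t)})
    return sum(b - a
               for a, b in zip(cuts, cuts[1:])
               if value_at(f, a) != value_at(g, a))
```

For instance, for f with events on [0.35, 0.45) and \([0.55,\infty )\) and \(g=\mathbbm {1}_{[0.55,+\infty )}\), the two functions disagree exactly on [0.35, 0.45), so the distance is 0.1.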

Now we define a measure of closeness between functions in \(\mathcal {T}\): our goal is to find a function close enough to a given function in \(\mathcal {T}\) while reducing its number of jumps (which in turn eliminates short events in the state sequence). Let \(f,g\in \mathcal {T}\). We introduce the notation:

$$\begin{aligned} E_\gamma (f,g)=\text {dist}(f,g) + \gamma \cdot \vert J(g)\vert , \end{aligned}$$
(2)

where J(g) is the set of all discontinuities of g, \(\vert J(g)\vert\) is the number of all discontinuities of g and \(\gamma\) is a penalty for a single jump of g.

Given \(f\in \mathcal {T}\), our goal is to find any solution \(\hat{f}\in \mathcal {T}\) of the minimization problem

$$\begin{aligned} \hat{f}\in \mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{g\in \mathcal {T}}E_\gamma (f,g). \end{aligned}$$
(3)

As a default, we will use the standard distance induced by the discrete metric.

In order to characterize the solution \(\hat{f}\) of problem (3) we present the following lemma.

Lemma 2.1

Let \(\gamma >0\) and \(f\in \mathcal {T}\). Let J denote the set of all discontinuities of the function f. There exists a solution \(\hat{f}\) of the problem (3) that contains no jumps outside of J.

Lemma 2.1 leads to the conclusion that in the search for a solution of the minimization problem we can limit ourselves to a finite set of functions, namely the subset of \(\mathcal {T}\) with jumps allowed only at the jump locations of f. The proof of Lemma 2.1 can be found in the appendix.

In this minimization problem the choice of the parameter \(\gamma\) plays a crucial role. We will now show an interpretation of the penalty parameter that will ease the process of choosing it. It will also allow us to reformulate problem (3). First, we define a new set of functions.

Definition 2.1

(Function with bounded minimum duration of states) Given a parameter \(\gamma >0\) we define \(\mathcal {G}_\gamma \subset \mathcal {T}\), the set of functions with bounded minimum duration of states, such that for \(g\in \mathcal {G}_\gamma\) we have

  • \(g=\sum \limits _{i=1}^{n-1} s_i\mathbbm {1}_{[t_i,t_{i+1})}\) for some constant \(n\in \mathbb {N}\), a sequence of states \(\{s_1,\ldots ,s_{n-1}\}\), such that \(s_i\ne s_{i+1}\) for \(i=1,\ldots ,n-2\), and an increasing sequence \(t_1< t_2< \cdots < t_n\) (we allow \(t_1=-\infty\) and \(t_n=\infty\)),

  • if \(n\ge 2\), then \(\forall _{i\ge 2}\quad t_i-t_{i-1} \ge \gamma\).
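The two conditions of Definition 2.1 can be checked mechanically. A minimal sketch, assuming the hypothetical representation of g as a sorted list of `(time, state)` pairs, where the first time may be \(-\infty\):

```python
def in_G_gamma(g, gamma):
    """Check the two conditions of Definition 2.1: consecutive states must
    differ, and consecutive jump times must be at least gamma apart.
    g is a sorted list of (time, state) pairs; the first time may be -inf,
    in which case the first gap is infinite, so, as in the definition,
    only the interior state durations are constrained."""
    times = [t for t, _ in g]
    states = [s for _, s in g]
    no_repeats = all(a != b for a, b in zip(states, states[1:]))
    long_enough = all(b - a >= gamma for a, b in zip(times, times[1:]))
    return no_repeats and long_enough
```

For example, with \(\gamma =0.2\) a sequence jumping at 0.3 and 0.6 belongs to \(\mathcal {G}_{0.2}\), while one containing a state of duration 0.1 does not.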

Lemma 2.2 below yields a connection between the penalty \(\gamma\) and the minimum duration of states that we impose on the solution of our minimization problem.

Lemma 2.2

Let \(\gamma >0\) and \(f\in \mathcal {T}\). Any solution \(\hat{f}\) of problem (3) is an element of \(\mathcal {G}_\gamma\).

This lemma can be used in practice to select the size of the penalty. The Proof of Lemma 2.2 can be found in the appendix.

Given \(f\in \mathcal {T}\), by Lemma 2.2 the minimization problem (3) is equivalent to the minimization problem

$$\begin{aligned} \hat{f}\in \mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{g\in \mathcal {G}_\gamma }E_\gamma (f,g). \end{aligned}$$
(4)

\(\hat{f}\) will be called a projection of f onto \(\mathcal {G}_\gamma\).

As mentioned before, the regularization by penalizing high numbers of jumps narrows down the set of possible solutions to a finite nonempty subset of \(\mathcal {G}_\gamma\) (thanks to Lemma 2.1), which guarantees the existence of \(\hat{f}\). However, the solution might not be unique, as illustrated by the following example.

Consider \(\mathcal {S}=\{0,1\}\), \(f=\mathbbm {1}_{[0.35,0.45)}+\mathbbm {1}_{[0.55,+\infty )}\) and \(\gamma =0.2\). Both \(\hat{f}_1=\mathbbm {1}_{[0.35,+\infty )}\) and \(\hat{f}_2=\mathbbm {1}_{[0.55,+\infty )}\) are projections of f. This might seem problematic; however, it reflects our understanding of the original problem well. The premise is that f contains impossibly short events precisely because it is uncertain which activity is actually performed in the interval [0.35, 0.55). Looking only at f we are unable to decide which solution is more suitable, hence it is only natural that the method returns two possible options.
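The tie can be verified numerically. The sketch below evaluates \(E_\gamma\) for this example; the `(time, state)` list representation of a step function (first state extending to \(-\infty\)) is a hypothetical implementation choice.

```python
import math

def value_at(steps, t):
    """Value of a step function, given as sorted (time, state) pairs, at time t."""
    current = steps[0][1]
    for time, state in steps:
        if time <= t:
            current = state
    return current

def energy(f, g, gamma):
    """E_gamma(f, g) = dist(f, g) + gamma * |J(g)| with the discrete metric;
    step functions are sorted (time, state) lists agreeing near +/- infinity."""
    cuts = sorted({t for t, _ in f + g if math.isfinite(t)})
    d = sum(b - a for a, b in zip(cuts, cuts[1:])
            if value_at(f, a) != value_at(g, a))
    jumps = sum(1 for t, _ in g if math.isfinite(t))
    return d + gamma * jumps

inf = float('inf')
f      = [(-inf, 0), (0.35, 1), (0.45, 0), (0.55, 1)]
f_hat1 = [(-inf, 0), (0.35, 1)]
f_hat2 = [(-inf, 0), (0.55, 1)]
print(round(energy(f, f_hat1, 0.2), 2),
      round(energy(f, f_hat2, 0.2), 2),
      round(energy(f, f, 0.2), 2))
```

Both candidate projections attain energy \(0.1+\gamma =0.3\), while keeping f unchanged costs 0.6 (zero distance but three jump penalties), confirming that \(\hat{f}_1\) and \(\hat{f}_2\) tie and both beat f itself.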

We close with a remark regarding the influence of extreme values of \(\gamma\) on the projection \(\hat{f}\).

Remark 2.1

Let \(f\in \mathcal {T}\). If \(\gamma =0\), then \(\hat{f}=f\) is the only projection of f. If \(\gamma =\infty\) and \(E_\gamma (f,g)<\infty\) for some function \(g\in \mathcal {T}\),Footnote 4 then any such g is constant, and the projection \(\hat{f}\) is the constant function equal everywhere to the most common state of f.Footnote 5

2.2 Connection with the shortest path problem

In this section we devise a method for finding a projection in an efficient manner. It will be shown that the problem of finding the shortest path in a particular graph is equivalent to the minimization problem (4). This is possible thanks to Lemmas 2.1 and 2.2, which narrow down the set of possible solutions to a finite set.

First, we present a lemma which further characterizes a projection of f.

Lemma 2.3

Let \(f\in \mathcal {T}\). Suppose \(f\equiv c\) on an interval [a, b] for some constant \(c\in \mathcal {S}\). If \(b-a>2\gamma\), then \(\hat{f}\equiv c\) on [a, b]. If \(b-a=2\gamma\), then there exists a projection such that \(\hat{f}\equiv c\) on [a, b].

The Proof of Lemma 2.3 can be found in the appendix.

Remark 2.2

If the number n of jumps of the original function exceeds 2, then there exists a projection such that the second and the second-to-last jump locations of the original function are not the first and the last (respectively) jump locations of this projection.

Remark 2.2 will be used when defining a particular graph and the proof can be found in the appendix.

We will assume that f has \(n\ge 2\) jumpsFootnote 6 at time points \(t_i\) for \(i=1,\ldots ,n\):

$$\begin{aligned} f=\sum \limits _{i=0}^n s_i\mathbbm {1}_{[t_i,t_{i+1})}, \end{aligned}$$
(5)

where \(s_i\in \mathcal {S}\) for \(i=0,\ldots ,n\) and \(s_i\ne s_{i+1}\) for \(i=0,\ldots ,n-1\). We use the following notation: \(t_0=-\infty\), \(t_{n+1}=\infty\). In light of Lemma 2.3 we assume that

$$\begin{aligned} t_{i+1}-t_i<2\gamma \end{aligned}$$
(6)

for \(i=1,\ldots ,n-1\). If this is not the case, then consider the coarsest partition of the set J of jumps of f:

$$\begin{aligned} J=\bigcup \limits _{i=1}^r J_i \end{aligned}$$

such that for jumps in \(J_i\) for \(i=1,\ldots ,r\) Eq. (6) holds and \(\min J_i-\max J_{i-1}\ge 2\gamma\) for \(i=2,\ldots ,r\). For each \(J_i\) for \(i=1,\ldots ,r\) consider a function \(f_i:\mathbb {R}\rightarrow \mathcal {S}\), such that \(f_i\equiv f\) on \([\min J_i-2\gamma ,\max J_i + 2\gamma ]\) and the only jumps of \(f_i\) lie in \(J_i\). Once a projection \(\hat{f}_i\) is found for \(f_i\) for all \(i=1,\ldots ,r\), we can then consider a function \(\hat{f}\), defined as follows

$$\begin{aligned} \hat{f}(x)=\hat{f}_i(x) \end{aligned}$$
(7)

given \(x\in [\min J_i-2\gamma ,\max J_i + 2\gamma ]\) for some \(i=1,\ldots ,r\). By Lemma 2.3, there exists a projection which does not change the states longer than or equal to \(2\gamma\), hence \(\hat{f}\) defined as in (7) is a projection of f. Given this remark, we can from now on assume that f is of the form (5) and satisfies (6).
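The decomposition into independent subproblems can be sketched as follows; the helper below is a hypothetical implementation taking the sorted list of jump times of f.

```python
def split_jumps(jumps, gamma):
    """Partition sorted jump times into the coarsest groups J_1, ..., J_r in
    which consecutive jumps are closer than 2*gamma; distinct groups are then
    separated by at least 2*gamma, so by Lemma 2.3 each group can be
    post-processed independently."""
    groups = [[jumps[0]]]
    for prev, cur in zip(jumps, jumps[1:]):
        if cur - prev >= 2 * gamma:
            groups.append([cur])
        else:
            groups[-1].append(cur)
    return groups
```

For \(\gamma =0.2\), jumps at 0.2, 0.35, 0.4, 0.55, 0.75, 2.0 and 2.1 split into two groups, because only the gap between 0.75 and 2.0 reaches \(2\gamma\).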

We will now define a graph for the purpose of showing the connection between the problem of finding a projection \(\hat{f}\) and the problem of finding a shortest path in a directed graph. Let \(G=(V,A)\) be a directed graph such that the set of vertices V is given by

$$\begin{aligned} V=\{t_0,t_1,\ldots ,t_n,t_{n+1}\} \end{aligned}$$
(8)

and the set of directed arcs is given byFootnote 7

$$\begin{aligned} A=\{(t_k,t_l)\in V^2:t_l-t_k\ge \gamma \}\backslash \{(t_0,t_2),(t_{n-1},t_{n+1})\}. \end{aligned}$$
(9)

There is a correspondence between paths from \(t_0\) to \(t_{n+1}\) and sequences of jumps in the interval \((t_1-\gamma ,t_n+\gamma )\). A path \((t_0,t_{l_1},\ldots ,t_{l_m},t_{n+1})\) can be associated with a function g with jumps at \(t_{l_1},\ldots ,t_{l_m}\), such that \(g(t_{l_k})\) is the most common value of f in the interval \([t_{l_k},t_{l_{k+1}})\). The definition (9) of the set of directed arcs ensures that every path in the graph G corresponds to at least one function in \(\mathcal {G}_\gamma\).

We now introduce a weight function \(W:A\rightarrow \mathbb {R}_+\) ensuring that the cost of a path coincides with the error \(E_\gamma (f,\cdot )\) of the corresponding function on the interval \((t_1-\gamma ,t_n+\gamma )\). Let \(I_k=t_{k+1}-t_k\) for \(k=0,\ldots ,n\). Note that \(I_0=I_n=\infty\), while \(I_k<2\gamma\) for \(k=1,\ldots ,n-1\). We introduce the penalty for a jump, \(\phi _k=\gamma\) for \(k=1,\ldots ,n\) and \(\phi _{n+1}=0\). Now we define the weight function W:

$$\begin{aligned} W((t_k,t_l))=\sum \limits _{m=k}^{l-1}I_md(s_{kl},s_m)+ \phi _l, \end{aligned}$$
(10)

for \((t_k,t_l)\in A\), where \(s_{kl}\) denotes the most common state of the original function f in the interval \([t_k,t_l)\). The first term equals \(\textrm{dist}(f,g)\) on \([t_k,t_l)\). The second term adds the penalty for a jump at \(t_l\) if \(t_l\) is finite (the penalty for the jump at \(t_k\), if \(k>0\), was added on the previous arc of the path).

Theorem 2.1

(Problem equivalence) Let \(\gamma >0\) and \((t_1,\ldots ,t_n)\) be the only discontinuities of a function \(f\in \mathcal {T}\). Let \(G=(V,A,W)\) be a weighted, directed graph as defined in (8), (9), (10) above. The task of finding a projection of f onto \(\mathcal {G}_\gamma\), as defined in (4), is equivalent to finding the shortest path from \(t_0\) to \(t_{n+1}\) in the graph G.

The proof of the theorem can be found in the appendix. Now, we will illustrate the method by an example.

Given \(\gamma =0.2\) and \(\mathcal {S}=\{0,1,2,3\}\), consider the function \(f=\mathbbm {1}_{[0.2, 0.35)}+2\cdot \mathbbm {1}_{[0.4, 0.55)}+3\cdot \mathbbm {1}_{[0.55,0.75)}+2\cdot \mathbbm {1}_{[0.75,+\infty )}\). The graph G for f, as defined in (8), (9) and (10), is shown in Fig. 1. Note that the vertex corresponding to 0.35 is omitted from the graph, since there is no path to it from the vertex corresponding to \(-\infty\) (according to the definition (9), the arc (0.2, 0.35) is not included).

Fig. 1
figure 1

Graph G constructed for the function f

There are nine possible paths from \(-\infty\) to \(+\infty\). The path \(\hat{P}=(-\infty ,0.4,\infty )\) has cost 0.55 and is the shortest path from \(-\infty\) to \(+\infty\). Hence we conclude that \(\hat{f}=2\cdot \mathbbm {1}_{[0.4, \infty )}\) is the projection of f onto \(\mathcal {G}_{0.2}\) (in this case, it can be shown that \(\hat{f}\) is the only projection of f).
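The construction can be replicated in code. The sketch below builds the graph of (8), (9) and (10) for this example and finds the shortest path by a single forward sweep, which suffices because every arc points forward in time; the representation of f by its breakpoints and states is a hypothetical implementation choice.

```python
import math

GAMMA = 0.2
# f from the example: state S[m] holds on [T[m], T[m+1])
T = [-math.inf, 0.2, 0.35, 0.4, 0.55, 0.75, math.inf]  # t_0, ..., t_{n+1}
S = [0, 1, 0, 2, 3, 2]                                  # s_0, ..., s_n
n = len(T) - 2
I = [T[m + 1] - T[m] for m in range(n + 1)]             # interval lengths I_0..I_n

def most_common(k, l):
    """Most common state of f on [t_k, t_l); an infinite tail dominates."""
    dur = {}
    for m in range(k, l):
        dur[S[m]] = dur.get(S[m], 0) + I[m]
    return max(dur, key=dur.get)

def weight(k, l):
    """W((t_k, t_l)) as in (10): dist term plus the jump penalty phi_l."""
    s = most_common(k, l)
    penalty = GAMMA if l <= n else 0.0
    return sum(I[m] for m in range(k, l) if S[m] != s) + penalty

# arcs (9): span at least gamma, minus (t_0, t_2) and (t_{n-1}, t_{n+1})
arcs = [(k, l) for k in range(n + 2) for l in range(k + 1, n + 2)
        if T[l] - T[k] >= GAMMA and (k, l) not in {(0, 2), (n - 1, n + 1)}]

# shortest path t_0 -> t_{n+1}: arcs point forward in time, so one sweep suffices
best = {0: (0.0, [0])}
for k in range(n + 2):
    if k not in best:
        continue
    for tail, l in arcs:
        if tail != k:
            continue
        cand = best[k][0] + weight(k, l)
        if l not in best or cand < best[l][0]:
            best[l] = (cand, best[k][1] + [l])

cost, path = best[n + 1]
print([T[i] for i in path], round(cost, 2))
```

The sweep recovers the path \((-\infty ,0.4,\infty )\) with cost 0.55 and hence the projection \(\hat{f}=2\cdot \mathbbm {1}_{[0.4, \infty )}\), matching the computation above.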

2.3 Binary case

If the set of states \(\mathcal {S}\) consists of only two elements, a stronger result than Lemma 2.2 can be achieved. The main advantage of the binary case comes from the fact that we do not need to specify the sequence of states: given the starting state, each jump signifies a move to the only other available state. First, we present a supporting lemma which further strengthens the relation between the jumps of a function from \(\mathcal {T}\) and those of its projection.

For the remainder of the section, we will always assume that \(\mathcal {S}=\{0,1\}\).Footnote 8

Lemma 2.4

Let \(\gamma >0\) and \(f\in \mathcal {T}\). Let J denote the set of all discontinuities of the function f. If a function \(g\in \mathcal {G}_\gamma\) contains a jump at a point \(j\in J\), but in the opposite direction to the corresponding jump of f, then g cannot be a projection of f onto \(\mathcal {G}_\gamma\).

Lemma 2.5

Let \(\gamma >0\) and \(f\in \mathcal {T}\). Any solution \(\hat{f}\) of the problem (3) is an element of \(\mathcal {G}_{2\gamma }\).

The proofs of Lemmas 2.4 and 2.5 can be found in the appendix. Lemma 2.5 leads to the equivalence of the problem (4) with the minimization problem:

$$\begin{aligned} \hat{f}\in \mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{g\in \mathcal {G}_{2\gamma }}E_\gamma (f,g). \end{aligned}$$
(11)

The strengthening of Lemma 2.1 by restricting not only the locations of the jumps but also their directions is favorable, as it narrows the set of possible solutions.

Lemma 2.6

Let \(f\in \mathcal {T}\). Suppose \(f\equiv c\) on an interval [a, b] for some constant \(c\in \mathcal {S}\). If \(b-a>\gamma\), then \(\hat{f}\equiv c\) on [a, b]. If \(b-a=\gamma\), then there exists a projection such that \(\hat{f}\equiv c\) on [a, b].

The proof of Lemma 2.6 can be found in the appendix.

Lemma 2.6 potentially reduces the number of jumps that have to be considered in the post-processing. Moreover, Lemma 2.4 reduces the number of arcs when building the graph, making the search for the shortest path more efficient.

Additionally, Remark 2.2 can also be strengthened.

Remark 2.3

If \(n>2\) and all states are shorter than \(\gamma\) (except for the first and the last state), there exists a projection such that the second and the second-to-last jump of the original function are not present in it.

Remark 2.3 allows us to ignore the second and the penultimate jump of the original function when searching for the jump locations of the projection. The proof of this remark can be found in the appendix.

The directed graph G has a different set of vertices compared to (8):Footnote 9

$$\begin{aligned} V=\{t_0,t_1,\ldots ,t_n,t_{n+1}\}\backslash \{t_2,t_{n-1}\}, \end{aligned}$$
(12)

and of directed arcs compared to (9):

$$\begin{aligned} A=\{(t_k,t_l)\in V^2:t_l-t_k\ge 2\gamma \;\text {and}\; l-k\equiv 1 \pmod {2}\}. \end{aligned}$$
(13)
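For illustration, the modified vertex and arc sets can be generated mechanically. A minimal sketch, where the list-based representation of jump times is a hypothetical implementation choice:

```python
import math

def binary_graph(jumps, gamma):
    """Vertex indices and arcs per (12) and (13): drop t_2 and t_{n-1}, and
    keep only arcs spanning at least 2*gamma whose index difference is odd,
    so that the alternating jump directions remain consistent."""
    n = len(jumps)
    t = [-math.inf] + list(jumps) + [math.inf]      # t_0, ..., t_{n+1}
    keep = [i for i in range(n + 2) if i not in (2, n - 1)]
    arcs = [(k, l) for k in keep for l in keep
            if l > k and t[l] - t[k] >= 2 * gamma and (l - k) % 2 == 1]
    return keep, arcs
```

For jumps at 0.1, 0.3, 0.5 and 0.7 with \(\gamma =0.2\), the surviving vertex indices are 0, 1, 4 and 5, and the span and parity conditions leave four arcs.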

Theorem 2.2

(Problem equivalence-binary version) Let \(\gamma >0\) and \((t_1,\ldots ,t_n)\) be the only discontinuities of a function \(f\in \mathcal {T}\). Let \(G=(V,A,W)\) be a weighted, directed graph as defined in (10), (12), (13). The task of finding a projection of f onto \(\mathcal {G}_{2\gamma }\), as defined in (11), is equivalent to finding a shortest path from \(t_0\) to \(t_{n+1}\) in the graph G.

The proof of Theorem 2.2 can be found in the appendix.

3 Incorporating domain knowledge into the performance measure of classification

3.1 Problem-specific requirements on the performance measure

In order to choose an appropriate performance measure for a given classification task, it is important to understand the problem-specific demands on the result. The standard distance (1), which can be understood as a continuous analogue of the most common performance metric, namely the misclassification rate, is often inadequate for comparing classification results: it is a one-size-fits-all metric, and if more is known about the problem it might not represent the idea of accuracy that users have in mind. On the other hand, there have been other approaches to performance metrics, e.g. that of Ward et al. (2011). Their approach focuses on characterizing the error in terms of the numbers of inserted, deleted, merged and fragmented events. Event fragmentation occurs when an event in the true labelsFootnote 10 is represented by more than one event in the estimated labels,Footnote 11 whereas merging refers to several events in the true labels being represented by a single event in the estimated labels. Ward et al. (2011) provide an overview of different performance metrics used in activity recognition and propose a solution to the problem of timing uncertainty as well as event fragmentation and merging. Their solution is based on segments, which are intervals in which neither the true labels nor the estimated labels change state. If the state in the estimate and the state in the true labels agree in a given segment, it is denoted as correctly classified. Otherwise, the segment is classified accordingly as a fragmenting, inserted, deleted or merged segment. This provides a deeper level of error characterization, which is then used in different metrics of classifier performance. Their vector-valued performance metric is preferable when an in-depth overview of the types of mistakes made by the classifier is needed. We will introduce a novel scalar-valued performance metric which is easy to compare across classifiers and incorporates problem-specific information such as timing uncertainty in the labels.

In this section, we aim to highlight the main characteristics of the classification of movements based on wearable sensors and to translate them into specific requirements on the performance measure. Our first requirement comes from physical restrictions. The states considered in our application represent human activities, and in more general contexts as well they often cannot be arbitrarily short; there is a lower bound on the length of the events in a state sequence. Hence, estimated labels that violate this lower bound indicate bad performance. The lower bound condition requires two parameters: the lower bound itself and the penalty for each violation. The lower bound can either be estimated or determined from domain knowledge, while the penalty can be chosen more freely. These physical restrictions reveal a deeper connection with the method introduced in Sect. 2. Standard classification methods cannot ensure that the state sequence contains only events longer than a certain level. The post-processing method addresses this issue directly, and as a consequence we can expect classifiers to benefit from it in the context of the new performance measure.

The issue of timing uncertainty should also be addressed when designing the performance measure. To illustrate its importance more clearly, we present an example. Five people were asked to detect boundaries between activities in different time series using a visualization tool. The tool outputs an animated stick figure modelFootnote 12 given sensor data.

Three time series were selected, each with one of the following activities: running, jumping and ball kick. The start and the end of each activity were recorded by participants. Table 1 presents the results of the experiment.

Table 1 The results of the labelling experiment; all times are in seconds

The experiment indicates that there is indeed uncertainty regarding the state transitions. Although the sample size is very small, we notice more variation in the recorded end times of activities than in the start times. Additionally, there is more variation in the results for the kick than for the jump. The boundaries of some activities thus seem to be more difficult to identify than those of others.

3.2 Globally time-shifted distance

The standard distance (1) is an unsatisfactory measure for comparing two state sequences, since it does not incorporate the requirements posed in the previous section. In order to improve on it, we start by modelling the timing uncertainty. Let \(f\in \mathcal {T}\) be the true labels process and let f have n discontinuities \(t_1,\ldots ,t_n\). The locations of the discontinuities are corrupted by additive noise:

$$\begin{aligned} t_i=T_i+X_i, \end{aligned}$$

for all \(i=1,\ldots ,n\), where \(T_i\) is the true and unknown location of the i-th jump. In this section we will assume that \(X_1=X_2=\cdots =X_n\) (all jumps are moved by the same value; a global time shift), although in general it is more realistic to assume that \(X_1,\ldots ,X_n\) are independent random variables. We will relax this condition later.

We define a class of Globally Time-Shifted distances (GTS distances), loosely inspired by the Skorokhod distance on the space of càdlàg functions (Billingsley 1999, p. 121). A GTS distance depends on two parameters: w controls the weight of misclassification arising from the uncertainty of the true labels, while \(\sigma\) controls the maximal magnitude of the shift of activities.

Definition 3.1

(Globally Time-Shifted distance) Let \(f,g\in \mathcal {T}\). Given \(w\ge 0,\sigma >0\) and a metric d on \(\mathcal {S}\) we define a Globally Time-Shifted distance as:

$$\begin{aligned} GTS_{w,\sigma }(f,g)=\inf \limits _{\epsilon \in [-\sigma ,\sigma ]}\{\text {dist}(f\circ \tau _\epsilon ,g)+w\vert \epsilon \vert \}, \end{aligned}$$

where for \(\epsilon \in \mathbb {R}\), \(\tau _\epsilon :\mathbb {R}\rightarrow \mathbb {R}\) is the time shift defined as follows:

$$\begin{aligned} \tau _\epsilon (t)=t-\epsilon . \end{aligned}$$

The properties of the GTS distance depend on the choice of its parameters. For \(w>0\) and \(\sigma =\infty\), the GTS distance is an extended metricFootnote 13 and a proof of this fact is given in the appendix. If \(w>0\) and \(\sigma\) is finite, then it is a semimetric, meaning that it has all the properties required of a metric except for the triangle inequality.
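Since the infimum is taken over a single scalar \(\epsilon\), the GTS distance can be approximated by a simple grid search. A minimal sketch under the discrete metric, again using the hypothetical `(time, state)` list representation with the first state extending to \(-\infty\):

```python
import math

def value_at(steps, t):
    """Value of a step function, given as sorted (time, state) pairs, at time t."""
    current = steps[0][1]
    for time, state in steps:
        if time <= t:
            current = state
    return current

def dist(f, g):
    """Discrete-metric standard distance; f and g agree near +/- infinity."""
    cuts = sorted({t for t, _ in f + g if math.isfinite(t)})
    return sum(b - a for a, b in zip(cuts, cuts[1:])
               if value_at(f, a) != value_at(g, a))

def gts(f, g, w, sigma, grid=2001):
    """Approximate GTS_{w,sigma}(f, g) by scanning epsilon over [-sigma, sigma];
    f composed with tau_eps shifts every jump of f from t_i to t_i + eps."""
    best = math.inf
    for k in range(grid):
        eps = -sigma + 2 * sigma * k / (grid - 1)
        shifted = [(t + eps, s) for t, s in f]
        best = min(best, dist(shifted, g) + w * abs(eps))
    return best
```

For f with jumps at 0 and 1 and g equal to f delayed by 0.1, taking \(w=0.5\) and \(\sigma =0.5\), the grid search finds the optimum near \(\epsilon =0.1\) with value about 0.05, whereas the standard distance is 0.2.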

The main downside of the GTS distance is its unrealistic assumption on the timing uncertainty. However, if we know that the true labels preserve the true state durations, then it is a good choice. Consider a function \(f\in \mathcal {T}\) with two state transitions located at \(t_1\) and \(t_2\). Let \(g\in \mathcal {T}\) also have two state transitions, located at \(t_1 - \tau _1\) and \(t_2 - \tau _2\). If \(\tau _1\ne \tau _2\), then no global time shift can align the functions f and g. This shows that the true state durations need to be preserved in the estimate in order to align the functions using a global time shift.

3.3 Locally time-shifted distance and the duration penalty term

The global time shift stresses the state durations, which is not always desirable: for instance, when the true labels do not preserve the real state durations, or when the additive noise terms in the locations of the jumps are independent. Figure 2 shows f and its approximations \(g_i\) for \(i=1,2,3\). It is impossible to align f with any of the \(g_i\) using a single time shift; however, it would be possible if each state transition could be shifted ‘locally’.

Fig. 2
figure 2

The function f represents the true labels with an uncertainty around state boundaries, \(g_i\) are the approximations of f

To accommodate this issue, a suitable modification is to replace the single global time shift with multiple local time shifts. We now introduce a measure of closeness between state sequences which conceptually can be seen as derived from the GTS distance. More specifically, given the two sequences of state boundaries, we combine them and sort the resulting joint sequence in increasing order. Consecutive pairs of values in this sequence determine segments, understood as in Ward et al. (2011). We weigh different types of segments, and the result is a weighted average of segment lengths which is intended to reflect the magnitude of the classifier’s error.

We define segments formally and introduce a new distance on \(\mathcal {T}\).

Definition 3.2

(Segments) Let \(f, g\in \mathcal {T}\). The elements of the smallest partitionFootnote 14 of \(\mathbb {R}\) such that in each element of the partition neither f nor g changes state will be called segments.

Since functions from \(\mathcal {T}\) are piecewise constant and have a finite number of discontinuities, there is always a finite number of segments. The general form of the segments that we will use is as follows:

$$\begin{aligned} (-\infty , a_1)\cup \bigcup \limits _{i=1}^{l-1}[a_i, a_{i+1})\cup [a_l,\infty ), \end{aligned}$$
(14)

where \(a_1< a_2< \cdots < a_l\), provided f and g are not both constant on the real line. Otherwise there is only one segment, consisting of the whole real line. By convention, \(a_0=-\infty\) and \(a_{l+1}=\infty\), and

$$\begin{aligned} f(a_0)=f(a_1^{-})=\lim \limits _{x\rightarrow -\infty }f(x),\;f(a_{l+1})=f(a_l). \end{aligned}$$

Definition 3.3

(Locally Time-Shifted distance) Let \(w\ge 0\), \(\sigma >0\) and let d be a metric on \(\mathcal {S}\). Let \(f,g\in \mathcal {T}\) and let their set of segments be denoted as in (14). We define the Locally Time-Shifted distance (LTS distance) as

$$\begin{aligned} LTS_{w, \sigma }(f, g)=\sum \limits _{i=1}^{l-1}\delta _i(a_{i+1}-a_i)d(f(a_i),g(a_i)), \end{aligned}$$

where

$$\begin{aligned} \delta _i= {\left\{ \begin{array}{ll} w, &{} a_{i+1}-a_i\le \sigma ,\; f(a_{i-1})=g(a_{i-1}), f(a_{i+1})=g(a_{i+1})\\ 1, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Similarly to the GTS distance, the parameter w controls the weight of misclassifications arising from the uncertainty of the true labels. The case \(w < 1\) is the more interesting one for us, since it corresponds to timing uncertainty of the labels. If \(w\ge 1\), then we put more importance on the timings of the jumps (the opposite of timing uncertainty). The LTS distance is an extended semimetric for \(w>0\) (for a proof, see the appendix); the triangle inequality does not hold in general.
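To make the definition concrete, the following sketch computes the LTS distance for step functions on \(\mathbb {R}\), using the discrete metric for d. The encoding of a state sequence by its sorted jump points and states, as well as the function names, are our own illustrative choices, not part of the formal development.

```python
import bisect

def make_step(breaks, states):
    """Step function on R: value states[i] holds on [breaks[i-1], breaks[i])."""
    def f(x):
        return states[bisect.bisect_right(breaks, x)]
    return f

def lts_distance(f, g, breaks_f, breaks_g, w=0.6, sigma=0.35,
                 d=lambda a, b: float(a != b)):
    """LTS distance in the sense of Definition 3.3, for step functions f, g."""
    a = sorted(set(breaks_f) | set(breaks_g))   # joint jump points a_1 < ... < a_l
    total = 0.0
    for i in range(len(a) - 1):                 # finite segments [a_i, a_{i+1})
        length = a[i + 1] - a[i]
        # Neighbouring states; a_0 = -inf by convention, so evaluate before a_1.
        left = a[i - 1] if i >= 1 else a[0] - 1.0
        right = a[i + 1]
        timing = (length <= sigma
                  and f(left) == g(left) and f(right) == g(right))
        delta = w if timing else 1.0            # down-weight short timing slack
        total += delta * length * d(f(a[i]), g(a[i]))
    return total
```

For example, shifting a jump by 0.2 (below \(\sigma =0.35\)) is charged with weight w rather than 1, reflecting that such a small offset is attributed to label timing uncertainty.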

The LTS distance addresses the issue of timing uncertainty in the true labels. Let \(\zeta >0\) be the lower bound on the lengths of the events, as determined by domain knowledge (or through estimation if possible). Let \(\lambda >0\) be the penalty for each violation of the lower bound condition. For \(f\in \mathcal {T}\) with discontinuities \(t_1,\ldots ,t_n\), we introduce a duration penalty term:

$$\begin{aligned} DP_{\lambda ,\zeta }(f) = \lambda \sum \limits _{k=1}^{n-1}\mathbbm {1}_{[0,\zeta )}(t_{k+1}-t_k). \end{aligned}$$

This term lowers the score of classifications that contain unrealistically short events.
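As a brief sketch (the function name is ours), the duration penalty simply counts inter-jump gaps shorter than \(\zeta\):

```python
def duration_penalty(jumps, lam=0.0001, zeta=0.5):
    """DP_{lambda,zeta}(f): charge lambda for every pair of consecutive
    discontinuities of f that are less than zeta apart, i.e. for every
    unrealistically short event."""
    return lam * sum(1 for t0, t1 in zip(jumps, jumps[1:]) if t1 - t0 < zeta)
```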

In practice, we need to extend the functions to the real line in order to use the LTS distance, since it is defined for functions whose domain is the whole of \(\mathbb {R}\). One natural extension would be to prolong the first and the last state of each function indefinitely. However, this solution leads to a problem. Let \(M>0\) and consider two functions \(f:[0,M]\rightarrow \mathcal {S}\) and \(g:[0,M]\rightarrow \mathcal {S}\) such that for some \(0<a<M\), \(f(t)\ne g(t)\) on [0, a). No matter how small a is, the distance between the extended f and g will be infinite under this extension, since the extended functions are in different states on the whole half-line \((-\infty ,a)\). For the distance to be finite, both functions need to be extended by the same state. We therefore extend any function f defined on the interval [0, M] to the real line by setting its value to an arbitrary state outside of [0, M). The distance is independent of the chosen state, since f and g agree on the infinite segments that the extension introduces. Without loss of generality, we choose state 1.

$$\begin{aligned} f^*(t)= {\left\{ \begin{array}{ll} f(t), &{} t\in [0,M)\\ 1, &{} t\not \in [0,M). \end{array}\right. } \end{aligned}$$
(15)

Notice that this extension does not suffer from the problem stated above: \(f^*\) and \(g^*\) agree on the segments that the extension introduces, and the values on the original segments are unchanged regardless of the choice of state outside of [0, M).

We combine the LTS distance and the duration penalty term to define the LTS measure of closeness of two state sequences.

Definition 3.4

Let f be a function of true labels and g its estimate, both defined on [0, M]. The LTS measure is defined as:

$$\begin{aligned} LTS_{w,\sigma ,\lambda ,\zeta }(f,g)=\exp (-LTS_{w,\sigma }(f^*,g^*)/M-DP_{\lambda ,\zeta }(g)). \end{aligned}$$

The scaling through the division by M normalizes the LTS distance to the interval [0, 1]. The transformation \([0,+\infty )\ni x\rightarrow \exp (-x)\in (0,1]\) maps the sum of the LTS distance and the duration penalty term to the interval (0, 1], while reversing the order as well: g is closer to f if the LTS measure is closer to 1.
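Putting the pieces together, a minimal sketch of the LTS measure (with the duration penalty inlined; the function signature is our own illustrative choice):

```python
import math

def lts_measure(lts_dist, jumps_g, M, lam=0.0001, zeta=0.5):
    """exp(-LTS/M - DP(g)): values in (0, 1], where 1 means that the
    estimate g agrees perfectly with the true labels f and contains no
    unrealistically short events."""
    # Duration penalty: lambda per inter-jump gap of g shorter than zeta.
    dp = lam * sum(1 for a, b in zip(jumps_g, jumps_g[1:]) if b - a < zeta)
    return math.exp(-lts_dist / M - dp)
```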

4 Application to activity recognition

4.1 Simulation study

We consider a dataset created using a random procedure, which mimics the behavior of activity recognition classifiers with varying accuracy (depending on the parameters). Let \(\mathcal {S}=\{1,2,3\}\). Consider a function f representing a 60 second long state sequence:

$$\begin{aligned} f=\mathbbm {1}_{[0,5)}+2\cdot \mathbbm {1}_{[5,15)}+3\cdot \mathbbm {1}_{[15,30)}+2\cdot \mathbbm {1}_{[30,40)}+3\cdot \mathbbm {1}_{[40,55)}+\mathbbm {1}_{[55,60]}. \end{aligned}$$

f will be referred to as the correct labels. We introduce noise into f in the following manner:

  • we consider two sequences of i.i.d. random variables, \(\{Y_k\}\) and \(\{Z_k\}\), with \(Y_k\sim Exp(\mu _1)\) and \(Z_k\sim Exp(\mu _2)\) for some parameters \(\mu _1,\mu _2>0\),

  • \(\{Y_k\}\) represents the time spent in the correct state, while \(\{Z_k\}\) represents the time spent in the incorrect state,

  • we use the sequence \(Y_1, Z_1, Y_2, Z_2, \ldots\) to generate noisy labels; the sequence ends when the sum of all drawn numbers exceeds 60 seconds,

  • for each variable \(Z_i\) an incorrect state is chosen randomly out of the remaining two and f is changed to that state on interval \([\sum \limits _{k=1}^{i-1}(Y_k+Z_k) +Y_i,\sum \limits _{k=1}^{i}(Y_k+Z_k))\),

  • \(\mu _1\) and \(\mu _2\) control the duration of the states.
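The procedure above can be sketched as follows. This is a hypothetical implementation: we assume \(\mu _1,\mu _2\) denote the means of the exponential distributions, and the incorrect state is drawn relative to the true state at the start of each corrupted stretch.

```python
import random

def noisy_labels(true_state, T=60.0, mu1=0.1, mu2=0.08,
                 states=(1, 2, 3), seed=0):
    """Alternate Exp(mean=mu1)-long correct stretches with Exp(mean=mu2)-long
    corrupted stretches, as in the simulation procedure."""
    rng = random.Random(seed)
    corrupt = []                                 # intervals [a, b) -> wrong state
    t = 0.0
    while t < T:
        t += rng.expovariate(1.0 / mu1)          # Y_k: stay in the correct state
        if t >= T:
            break
        z = rng.expovariate(1.0 / mu2)           # Z_k: switch to a wrong state
        wrong = rng.choice([s for s in states if s != true_state(t)])
        corrupt.append((t, min(t + z, T), wrong))
        t += z

    def g(x):
        for a, b, s in corrupt:
            if a <= x < b:
                return s
        return true_state(x)
    return g
```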

As our performance measure we choose the LTS measure with parameters: \(w = 0.6\), \(\sigma = 0.35\), \(\lambda = 0.0001\), \(\zeta = 0.5\), \(d=\rho\). The post-processing is performed for the noisy labels with parameter \(\gamma =0.5s\). To demonstrate the utility of the post-processing procedure, we draw the noisy function 1000 times for a given set of parameters \((\mu _1,\mu _2)\) and compare the accuracy of the noisy labels, the accuracy of the post-processed labels, the LTS measure of the noisy labels and the LTS measure of the post-processed labels.

In the first setting, we fix \(\mu _1=0.1s\). The procedure is repeated for \(\mu _2\in [0.01, 0.1]\) (100 sample points from the interval are chosen). Figure 3 compares the mean accuracy of the noisy labels and the post-processed labels as well as the mean LTS measure of the noisy labels and the post-processed labels.

Fig. 3

The top left plot shows the mean accuracy of the noisy labels (red) and the mean accuracy of the post-processed labels (green) as calculated for different values of \(\mu _2\). The top right plot shows the LTS measure of the noisy labels (red) and the post-processed labels (green) as calculated for different values of \(\mu _2\). All lines drawn for 100 different values of \(\mu _2\). The bottom left boxplot shows the variability of the accuracy amongst the estimates (red) and the post-processed estimates (green). The bottom right boxplot shows the variability of the LTS measure amongst the estimates (red) and the post-processed estimates (green). Boxplots have been constructed for 5 different values of \(\mu _2\)

In the second setting, we fix \(\mu _1=0.5s\). The procedure is repeated for \(\mu _2\in [0.05, 0.5]\) (100 sample points from the interval are chosen). Figure 4 shows the mean accuracy of the noisy labels and the post-processed labels as well as the mean LTS measure of both the noisy labels and the post-processed labels.

Fig. 4

The top left plot shows the mean accuracy of the noisy labels (red) and the mean accuracy of the post-processed labels (green) as calculated for different values of \(\mu _2\). The top right plot shows the LTS measure of the noisy labels (red) and the post-processed labels (green) as calculated for different values of \(\mu _2\). All lines drawn for 100 different values of \(\mu _2\). The bottom left boxplot shows the variability of the accuracy amongst the estimates (red) and the post-processed estimates (green). The bottom right boxplot shows the variability of the LTS measure amongst the estimates (red) and the post-processed estimates (green). Boxplots have been constructed for 5 different values of \(\mu _2\)

In the third setting, we fix \(\mu _1=1s\). The procedure is repeated for \(\mu _2\in [0.1, 1]\) (100 sample points from the interval are chosen). Figure 5 shows the mean accuracy of the noisy labels and the post-processed labels as well as the mean LTS measure of both the noisy labels and the post-processed labels.

Fig. 5

The top left plot shows the mean accuracy of the noisy labels (red) and the mean accuracy of the post-processed labels (green) as calculated for different values of \(\mu _2\). The top right plot shows the LTS measure of the noisy labels (red) and the post-processed labels (green) as calculated for different values of \(\mu _2\). All lines drawn for 100 different values of \(\mu _2\). The bottom left boxplot shows the variability of the accuracy amongst the estimates (red) and the post-processed estimates (green). The bottom right boxplot shows the variability of the LTS measure amongst the estimates (red) and the post-processed estimates (green). Boxplots have been constructed for 5 different values of \(\mu _2\)

All three experiments show an improvement in both the accuracy and the LTS measure thanks to the use of post-processing. Additionally, we conclude that the post-processing method behaves better when dealing with multiple shorter intervals of misclassification rather than fewer longer ones. Moreover, the boxplots show more variability in the performance of the post-processed estimates when the initial estimates contain fewer but longer intervals of misclassification. This can be explained by the fact that at a level of around 0.5 in accuracy and in the LTS measure, the post-processing is no longer able to recover the original signal reliably, which shows the limits of the method.

We also investigate the influence of the parameter \(\gamma\) on the results. We fix \(\mu _1=0.1\), \(\mu _2=0.08\). The procedure is repeated for \(\gamma \in [0.01, 2.5]\) (100 sample points from the interval are chosen). Figure 6 shows the mean LTS measure of the post-processed labels.

Fig. 6

The line shows the LTS measure of the post-processed labels drawn for 100 different values of \(\gamma\). The mean accuracy of noisy labels was equal to 0.556 and the mean LTS measure of noisy labels was equal to 0.602

We conclude that the parameter \(\gamma\) can influence the LTS measure of the post-processed functions \(\hat{g}_i\). It needs to be chosen carefully, since values that are too low will lead to accepting unrealistically short events, while values that are too high will eliminate true events. In our case, values of \(\gamma\) between 0.5 and 1 are the most favourable. In practice, the minimal length of the events in the true labels can inform the choice of \(\gamma\).

We finish the simulation study with a look at the parameters of the LTS measure. We will investigate the weight w first. Let all the other parameters of the LTS measure be set to \(\sigma =0.35\), \(\lambda =0.0001\), \(\zeta =0.5\). We fix \(\mu _1=0.1, \mu _2=0.08, \gamma =0.5\). The procedure is repeated for 100 different values of w in the interval [0, 2]. Figure 7 shows the mean LTS measure of the post-processed labels.

Fig. 7

The line shows the LTS measure of the post-processed labels drawn for 100 different values of w. The mean accuracy of noisy labels was equal to 0.555

Figure 7 shows the effect of the parameter w on the LTS measure. As the y-axis reveals, the values of the LTS measure are quite close together, so we conclude that the choice of w is of minor importance. The main reason for this behaviour is that \(\sigma\) restricts many of the erroneous intervals, and the remaining ones for which w takes effect are quite small.

The parameter \(\sigma\) determines the length up to which misclassified events are attributed to timing uncertainty of the labels. It can be chosen from domain knowledge or from the experiment described in Sect. 3.1. The parameter \(\zeta\) is a lower bound on the lengths of the events and hence can also be determined from domain knowledge. Given their clear interpretation, the parameters \(\sigma\) and \(\zeta\) will not be subjected to the same procedure as the parameter w. The only parameter left to investigate is \(\lambda\). As before, we fix \(\mu _1=1, \mu _2=0.8, \gamma =0.5\), and we choose \(w=0.6\). The procedure is repeated for 100 values of \(\lambda\) between 0 and 0.5. Figure 8 shows the mean LTS measure of the post-processed labels. We can see that high values of \(\lambda\) influence the LTS measure significantly, hence choices below 0.01 are preferable: we want to avoid the penalty term overshadowing the LTS distance.

Fig. 8

The line shows the LTS measure of the post-processed labels drawn for different values of \(\lambda\). The mean accuracy of noisy labels was equal to 0.56

4.2 Application to a football dataset

We will now demonstrate the benefits of the post-processing by projection in a real-life setting, utilizing the LTS measure to compare different methods of classification. Wilmes et al. (2020) give an extensive description of the football dataset of which we give a short summary below.

Eleven amateur football players participated in a coordinated experiment at a training facility of the Royal Dutch Football Association of The Netherlands. Five Inertial Measurement Units (IMUs) were attached to 5 different body parts: left shank (LS), right shank (RS), left thigh (LT), right thigh (RT) and pelvis (P). Each IMU sensor contains a 3-axis accelerometer (Acc) and a 3-axis gyroscope (Gyro). Athletes were asked to perform exercises on command, e.g. ‘jog for 10 meters’ or ‘long pass’. For each athlete and exercise this resulted in a 30-dimensional time series (5 body parts times 6 features per IMU) of length varying from 4 to 14 seconds. Each athlete performed 70–100 exercises, which amounts to nearly 900 time series (each with a sampling frequency of 500 Hz). Time series are labelled with the command given to an athlete, but each time series still contains other activities, for example standing still. This causes a problem: ignoring standing periods and treating them as part of the main signal pollutes the data and lowers the quality of the classification. To show the advantages of post-processing by projection, we select only two states: ‘standing’ and ‘other activity’, encoded as 0 and 1, respectively. 15 time series (representative of all possible actions performed by athletes) were manually labelled time point by time point in order to be able to train classifiers; these form our sample. All 15 time series come from a single athlete.

In pre-processing we use the sliding window technique on the sensor signals (Dietterich 2002). This method transforms the original raw data using windows of fixed length d and a statistic of choice T: given a time point t, its neighbourhood of size d is fed to the statistic T for each variable separately. Performing this procedure for every time point results in a time series of the same dimension as the original one, but each observation is now equipped with some knowledge about the past and the future through the statistic T and the neighbourhood of size d. The choice of the statistic T requires care, since the sensors are highly correlated with each other. The information about standing contained in one variable is comparable to that in another, namely the variance of the signal is low when the person is standing (differences can occur between legs; a low variance on one leg might be misleading, since the other leg might already be transitioning into another position).
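A minimal sketch of this pre-processing step, with the variance as the statistic T. Here the window length d is in samples, and edge windows are truncated so that the output keeps the input length; both are our own conventions, not prescribed by the text.

```python
import numpy as np

def sliding_stat(x, d, stat=np.var):
    """Replace each sample of a 1-D signal by the statistic T of its
    centred neighbourhood of roughly d samples (truncated at the edges)."""
    half = d // 2
    return np.array([stat(x[max(0, i - half): i + half + 1])
                     for i in range(len(x))])
```

Applied to an accelerometer or gyroscope channel, a rolling variance stays near zero during standing and rises during movement, which is exactly the cue described above.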

Leave-5-out cross-validation will be performed in order to select the best performing classification method out of the 7 standard machine learning methods listed below. The typical approach of k-fold cross-validation, training on \(k-1\) folds, cannot be applied here, since a single time series is not a representative sample of the different types of events. 15 time series will be used: in each iteration, 10 time series are randomly chosen for training and 5 for testing. The results are shown for post-processed classifiers, unless specified otherwise. Before cross-validation can be performed, we need to fix the parameters of the performance measure we introduced in Sect. 2. The parameters of the LTS measure are chosen as follows:

  • We have limited information regarding how uncertain locations of state transitions are, but based on the small experiment described in Sect. 3.1 we select \(\sigma =0.35\) (the largest deviation between different true labels).

  • The parameter w is chosen as 0.6, but as shown in Sect. 4.1 its choice is not that important.

  • The lower bound \(\gamma\) on the duration of activities is selected as the length of the shortest activity in the learning dataset, which is equal to 0.8s in our case.

  • A penalty \(\lambda\) represents the cost of additional or missing jumps in a state sequence compared to the true labels. We decide for the penalty \(\lambda =0.01\) in order not to overshadow the LTS distance with too much importance placed on the penalty term (more details on that were given in Sect. 4.1, specifically in Fig. 8).
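The splitting scheme can be sketched as follows. This is a hypothetical helper: the number of cross-validation iterations is not fixed by the text, so it is left as a parameter.

```python
import random

def leave_5_out_splits(n_series=15, n_test=5, n_iter=50, seed=0):
    """Each iteration randomly holds out n_test series for testing and
    trains on the remaining ones (10 of the 15 labelled series here)."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_iter):
        idx = rng.sample(range(n_series), n_series)   # random permutation
        splits.append((sorted(idx[n_test:]), sorted(idx[:n_test])))
    return splits
```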

Before assessing classifiers on the training set, one needs to consider an appropriate feature set. Our variables are highly dependent on one another, so we start with feature selection. We perform feature ranking using the ReliefF algorithm and select the 6 most relevant features based on the ReliefF weights (for more details on the method, see Kononenko et al. 1997). We then test all possible combinations of these features, which is now computationally feasible, in order to find the best set for each of the classifiers. The features selected by the ReliefF algorithm are RTGyroX, RTGyroY, RTAccX, RTAccZ, LTAccY, PAccY, where the naming convention is as follows: RTGyroX refers to the x-axis of the gyroscope located on the right thigh, and so forth.
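The exhaustive search over the six selected features can be sketched as below; `score_fn` is a stand-in for the cross-validated performance of a classifier on a given feature subset, and the helper name is our own.

```python
from itertools import combinations

def best_subset(features, score_fn):
    """Exhaustively score every non-empty subset of a small feature set
    (2**6 - 1 = 63 candidates for six pre-ranked features)."""
    best, best_score = None, float("-inf")
    for r in range(1, len(features) + 1):
        for subset in combinations(features, r):
            s = score_fn(subset)
            if s > best_score:
                best, best_score = list(subset), s
    return best, best_score
```

With 30 raw features the search space would be about a billion subsets, which is why ranking down to 6 features first makes the exhaustive step feasible.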

Table 2 Average of the leave-5-out cross-validation scores for all classifiers using the best sensor set for each of them

Proceeding with the cross-validation, we select the following classifiers (with their abbreviations) to be assessed: DT—Decision Tree, kNN—k-Nearest Neighbors, LR—Logistic Regression, MLP—Multi-layer Perceptron, NB—Naive Bayes, RF—Random Forest, SVM—Support Vector Machine. The results of the leave-5-out cross-validation are shown in Table 2. It is striking that the test scores of the post-processed classifiers are at most 0.028 apart. This is due to post-processing by projection: the correction it provides brings all classifiers closer together. The result extends even further: before post-processing, the test score of a decision tree ranges from 59% to 86% over different sensor sets, while post-processing yields test scores in the range 93–96.5%, and this behaviour is not specific to decision trees.

The example shows that the post-processing is crucial. Firstly, it increases the accuracy of a given estimator on a given feature set by 35%. Secondly, it diminishes the impact of feature selection, as the difference in accuracy between different feature subsets decreases substantially. Feature selection is of course still important, as it decreases the computational complexity of the problem and removes redundancy from the feature set. However, with methods that only rank features, such as ReliefF, the threshold used to classify a feature as significant becomes less important. Finally, and most importantly, the post-processing by projection allows a method to be selected according to criteria other than performance, namely computational speed.

5 Conclusion

In this paper we have introduced a post-processing scheme that improves estimates: it finds estimated activities that are too short and eliminates them in an optimal way by finding the shortest path in a directed acyclic graph.

A simulation study is conducted to assess the benefits brought by the post-processing method. Generated noisy labels are improved with the use of the post-processing. The positive effects on the LTS measure are more significant when the noisy sequence contains more short intervals of misclassification.

Real-life football sensor data were used to assess the adequacy of the post-processing scheme in a more realistic setting. It significantly improved the performance of the classifiers. At the same time, the post-processed classifiers are closer to each other in performance than the original ones, which allows more importance to be placed on other criteria, such as the computational speed of the method. It should be noted that post-processing cannot correct for uncertainty in the classification result of the estimators: as Figs. 3, 4 and 5 show, the worse the original estimate, the worse the post-processed one (at least as a rule of thumb, as there can be cases when it is reversed). Most importantly, however, the results of the application to the football dataset are promising. The post-processing by projection improved estimators with accuracy ranging from 59% to 86% up to scores of 93% to 96.5%. We note that the lowest score of the post-processed estimates for any given classification method is still higher than the highest score of the original estimates. An alternative to our method would be to integrate the penalization of too-short windows into the classifier itself. This is not an easy idea to realize, since classifiers usually do not consider the duration of activities and classify in a time-linear manner. Nonetheless, if an appropriate scheme were defined, it would expand on the theory developed in this paper.

A further contribution is a set of novel measures of classifier performance for the task of activity recognition using wearable sensors. They address the issue of timing offsets as well as unrealistic classifications, while retaining the typical scalar output of a performance measure, allowing for easy comparisons between classifiers.