1 Introduction

As in many vision tasks, progress on the human pose estimation problem has been significantly advanced by deep learning. Since the pioneering work in [30, 31], performance on the MPII benchmark [3] has become saturated within three years, rising from about 80% PCKh@0.5 [30] to more than 90% [7, 8, 22, 33]. Progress on the more recent and challenging COCO human pose benchmark [20] has been even faster: the mAP metric increased from 60.5 (COCO 2016 Challenge winner [5, 9]) to 72.1 (COCO 2017 Challenge winner [6, 9]) within one year. With the quick maturation of pose estimation, the more challenging task of “simultaneous pose detection and tracking in the wild” has recently been introduced [2].

At the same time, network architectures and experimental practice have steadily become more complex. This makes algorithm analysis and comparison more difficult. For example, the leading methods [7, 8, 22, 33] on the MPII benchmark [3] differ considerably in many details yet only slightly in accuracy, and it is hard to tell which details are crucial. Similarly, the representative works [5, 6, 12, 21, 24] on the COCO benchmark are complex and differ significantly; comparisons between them are mostly at the system level and thus less informative. As for pose tracking, although there has not been much work yet [2], the system complexity can be expected to increase further due to the larger problem dimension and solution space.

This work aims to ease this problem by asking a question from the opposite direction: how good could a simple method be? To answer it, this work provides baseline methods for both pose estimation and tracking. They are quite simple but surprisingly effective, and we hope they will help inspire new ideas and simplify their evaluation. The code, as well as pre-trained models, will be released to facilitate research in the community.

Our pose estimation is based on a few deconvolutional layers added on a backbone network, ResNet [13] in this work. This is probably the simplest way to estimate heatmaps from deep, low-resolution feature maps. Our best single-model result achieves the state of the art, an mAP of 73.7 on the COCO test-dev split, an improvement of 1.6 and 0.7 points over the single model and the ensembled model of the COCO 2017 keypoint Challenge winner [6, 9], respectively.

Our pose tracking follows a pipeline similar to that of the winner [11] of the ICCV’17 PoseTrack Challenge [2]. Single-person pose estimation uses our method described above, and tracking uses the same greedy matching method as in [11]. Our only modification is to use optical flow based pose propagation and similarity measurement. Our best result achieves an mAP score of 74.6 and a MOTA score of 57.8, an absolute 15% and 6% improvement over the 59.6 and 51.8 of the ICCV’17 PoseTrack Challenge winner [11, 26]. It is the new state of the art.

This work is not grounded in theoretical claims. It is based on simple techniques and validated, to the best of our ability, by comprehensive ablation experiments. Note that we do not claim any algorithmic superiority over previous methods, despite better results. We do not perform complete and fair comparisons with previous methods, because doing so is difficult and is not our intent. As stated, the contribution of this work is a set of solid baselines for the field.

2 Pose Estimation Using a Deconvolution Head Network

ResNet [13] is the most common backbone network for image feature extraction. It is also used in [6, 24] for pose estimation. Our method simply adds a few deconvolutional layers over the last convolution stage in ResNet, called \(C_5\). The whole network structure is illustrated in Fig. 1(c). We adopt this structure because it is arguably the simplest way to generate heatmaps from deep, low-resolution features, and it is also adopted by the state-of-the-art Mask R-CNN [12].

Fig. 1. Illustration of two state-of-the-art network architectures for pose estimation: (a) one stage in Hourglass [22], (b) CPN [6], and (c) our simple baseline.

By default, three deconvolutional layers with batch normalization [15] and ReLU activation [19] are used. Each layer has 256 filters with a \(4\times 4\) kernel and a stride of 2. A \(1\times 1\) convolutional layer is added at the end to generate the predicted heatmaps \(\{H_{1}\dots H_{k}\}\) for all k keypoints.
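For concreteness, a minimal PyTorch sketch of this structure is given below. It follows the defaults stated above (three \(4\times 4\), stride-2 deconvolutional layers with 256 filters, batch normalization and ReLU, then a \(1\times 1\) convolution); the class names, the number of keypoints, and the use of torchvision's ResNet-50 are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torchvision

class DeconvHead(nn.Module):
    """Three 4x4 stride-2 deconv layers (256 filters each) + 1x1 conv,
    as described in Sect. 2. Names and defaults here are illustrative."""
    def __init__(self, in_channels=2048, num_joints=17,
                 num_layers=3, num_filters=256):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [
                nn.ConvTranspose2d(in_channels, num_filters,
                                   kernel_size=4, stride=2,
                                   padding=1, output_padding=0, bias=False),
                nn.BatchNorm2d(num_filters),
                nn.ReLU(inplace=True),
            ]
            in_channels = num_filters
        self.deconv = nn.Sequential(*layers)
        self.final = nn.Conv2d(num_filters, num_joints, kernel_size=1)

    def forward(self, x):
        return self.final(self.deconv(x))

class PoseResNet(nn.Module):
    """ResNet backbone up to the last convolution stage (C5),
    followed by the deconvolution head."""
    def __init__(self, num_joints=17):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        # keep everything up to C5, drop average pooling and the classifier
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.head = DeconvHead(in_channels=2048, num_joints=num_joints)

    def forward(self, x):                        # x: (B, 3, 256, 192)
        return self.head(self.backbone(x))       # heatmaps: (B, K, 64, 48)
```

With a 256×192 input, C5 is 8×6, and each deconvolutional layer doubles the resolution, giving 64×48 heatmaps.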

As in [22, 30], Mean Squared Error (MSE) is used as the loss between the predicted heatmaps and the target heatmaps. The target heatmap \(\hat{H_{k}}\) for joint k is generated by applying a 2D Gaussian centered on the \(k^{th}\) joint’s ground truth location.
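The target construction and loss can be sketched as follows; the Gaussian standard deviation, heatmap size, and visibility masking are illustrative choices, as the text above does not fix them.

```python
import numpy as np
import torch
import torch.nn.functional as F

def gaussian_target(joint_xy, heatmap_size=(64, 48), sigma=2.0):
    """2D Gaussian centered on one joint's ground-truth location,
    given in heatmap coordinates. sigma is an illustrative choice."""
    h, w = heatmap_size
    ys, xs = np.mgrid[0:h, 0:w]
    x0, y0 = joint_xy
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))

def mse_heatmap_loss(pred, target, visible):
    """pred, target: (B, K, H, W); visible: (B, K) mask so that
    unlabeled joints do not contribute to the loss."""
    per_joint = F.mse_loss(pred, target, reduction='none').mean(dim=(2, 3))
    return (per_joint * visible).mean()
```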

Discussions. To understand the simplicity and rationality of our baseline, we discuss two state-of-the-art network architectures as references, namely, Hourglass [22] and CPN [6]. They are illustrated in Fig. 1.

Hourglass [22] is the dominant approach on the MPII benchmark, as it is the basis for all leading methods [7, 8, 33]. It features a multi-stage architecture with repeated bottom-up/top-down processing and skip-layer feature concatenation.

Cascaded pyramid network (CPN) [6] is the leading method on COCO 2017 keypoint challenge [9]. It also involves skip layer feature concatenation and an online hard keypoint mining step.

Comparing the three architectures in Fig. 1, it is clear that our method differs from [6, 22] in how high resolution feature maps are generated. Both works [6, 22] use upsampling to increase the feature map resolution and put convolutional parameters in other blocks. In contrast, our method combines the upsampling and the convolutional parameters into deconvolutional layers in a much simpler way, without using skip-layer connections.

A commonality of the three methods is that three upsampling steps and three levels of non-linearity (from the deepest feature) are used to obtain high-resolution feature maps and heatmaps. Based on the above observations and the good performance of our baseline, it seems that obtaining high resolution feature maps is crucial, regardless of how it is done. Note that this discussion is only preliminary and heuristic. It is hard to conclude which architecture in Fig. 1 is better; that is not the intent of this work.

Fig. 2. The proposed flow-based pose tracking framework.

3 Pose Tracking Based on Optical Flow

Multi-person pose tracking in videos first estimates human poses in frames, and then tracks these poses by assigning a unique identification number (id) to each of them across frames. We denote a human instance as \(P=(J, id)\), where \(J=\{j_{i}\}_{1:N_{J}}\) is the coordinate set of the \(N_{J}\) body joints and id is the tracking id. When processing the \(k^{th}\) frame \(I^k\), we have the already processed set of human instances \(\mathcal {P}^{k-1} = \{P^{k-1}_{i}\}_{1:N_{k-1}}\) in frame \(I^{k-1}\) and the set \(\mathcal {P}^k = \{P^{k}_{i}\}_{1:N_{k}}\) in frame \(I^k\) whose ids are still to be assigned, where \(N_{k-1}\) and \(N_{k}\) are the numbers of instances in frames \(I^{k-1}\) and \(I^{k}\). If an instance \(P^{k}_{j}\) in the current frame \(I^{k}\) is linked to an instance \(P^{k-1}_{i}\) in frame \(I^{k-1}\), then \(id^{k-1}_{i}\) is propagated to \(id^{k}_{j}\); otherwise a new id is assigned to \(P^{k}_{j}\), indicating a new track.
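This notation maps naturally onto a small data structure. The sketch below introduces a hypothetical PoseInstance container that the later tracking snippets reuse; its fields are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class PoseInstance:
    """One human instance P = (J, id): joints J as an (N_J, 2) array of
    (x, y) coordinates, per-joint confidence scores, a bounding box, and
    a tracking id (None until the instance is linked to a track)."""
    joints: np.ndarray                 # shape (N_J, 2)
    scores: np.ndarray                 # shape (N_J,)
    box: np.ndarray                    # (x1, y1, x2, y2)
    track_id: Optional[int] = None
```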

The winner [11] of the ICCV’17 PoseTrack Challenge [2] solves this multi-person pose tracking problem by first estimating human poses in frames using Mask R-CNN [12], and then performing online tracking frame by frame using a greedy bipartite matching algorithm.

The greedy matching algorithm first assigns the id of \(P^{k-1}_{i}\) in frame \(I^{k-1}\) to \(P^{k}_{j}\) in frame \(I^k\) if the similarity between \(P^{k-1}_{i}\) and \(P^{k}_{j}\) is the highest, then removes these two instances from consideration and repeats the id assignment with the next highest similarity. When an instance \(P^{k}_{j}\) in frame \(I^k\) has no existing \(P^{k-1}_{i}\) left to link to, a new id number is assigned, indicating that a new instance has appeared.
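A minimal version of this greedy assignment could look as follows. It operates on a precomputed similarity matrix; the similarity threshold min_sim and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def greedy_match(sim, next_id, prev_ids):
    """Greedily link current instances (rows) to previous instances
    (columns) by descending similarity. Returns one id per row and the
    updated id counter. min_sim is an illustrative threshold."""
    min_sim = 0.3
    sim = sim.copy()
    ids = [None] * sim.shape[0]
    while sim.size and sim.max() > min_sim:
        r, c = np.unravel_index(np.argmax(sim), sim.shape)
        ids[r] = prev_ids[c]
        sim[r, :] = -1.0            # this current instance is now linked
        sim[:, c] = -1.0            # this previous instance is now used
    for r in range(len(ids)):
        if ids[r] is None:          # unmatched: start a new track
            ids[r], next_id = next_id, next_id + 1
    return ids, next_id
```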

We mainly follow the pipeline in [11], with two differences. One is that we use two different kinds of human boxes: boxes from a human detector and boxes generated from previous frames using optical flow. The other is the similarity metric used by the greedy matching algorithm: we propose a flow-based pose similarity metric. Combining these two modifications yields our enhanced flow-based pose tracking algorithm, illustrated in Fig. 2 and elaborated in the following.

3.1 Joint Propagation Using Optical Flow

Simply applying a detector designed for single images (e.g. Faster-RCNN [27], R-FCN [16]) to videos can lead to missed and false detections due to the motion blur and occlusion present in video frames. As shown in Fig. 2(c), the detector misses the left person in black due to fast motion. Temporal information is often leveraged to generate more robust detections [35, 36].

We propose to generate boxes for the frame being processed from nearby frames, using the temporal information expressed by optical flow.

Given one human instance with joint coordinates \(J^{k-1}_{i}\) in frame \(I^{k-1}\) and the optical flow field \(F_{k-1\rightarrow k}\) between frames \(I^{k-1}\) and \(I^k\), we can estimate the corresponding joint coordinates \(\hat{J^{k}_{i}}\) in frame \(I^k\) by propagating \(J^{k-1}_{i}\) according to \(F_{k-1\rightarrow k}\). More specifically, for each joint location (x, y) in \(J^{k-1}_{i}\), the propagated joint location is \((x+\delta {x}, y+\delta {y})\), where \(\delta {x},\delta {y}\) are the flow field values at joint location (x, y). We then compute a bounding box of the propagated joint coordinates \(\hat{J^{k}_{i}}\) and expand it by some extent (15% in experiments) to form the candidate box for pose estimation.
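A sketch of this propagation step is shown below, assuming the optical flow is available as a dense (H, W, 2) array of (dx, dy) values; the function names are illustrative.

```python
import numpy as np

def propagate_joints(joints, flow):
    """Shift each joint (x, y) by the flow value sampled at that pixel.
    joints: (N_J, 2) in image coordinates; flow: (H, W, 2) array giving
    (dx, dy) from the previous frame to the current frame."""
    h, w = flow.shape[:2]
    xs = np.clip(joints[:, 0].round().astype(int), 0, w - 1)
    ys = np.clip(joints[:, 1].round().astype(int), 0, h - 1)
    return joints + flow[ys, xs]                 # (x + dx, y + dy)

def box_from_joints(joints, expand=0.15):
    """Tight bounding box of the propagated joints, expanded by 15%
    (the value used in the experiments) as a candidate box."""
    x1, y1 = joints.min(axis=0)
    x2, y2 = joints.max(axis=0)
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    hw = (x2 - x1) / 2 * (1 + expand)
    hh = (y2 - y1) / 2 * (1 + expand)
    return np.array([cx - hw, cy - hh, cx + hw, cy + hh])
```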

When the frame being processed is difficult for human detectors, which can lead to missed detections due to motion blur or occlusion, we can fall back on boxes propagated from previous frames where people were detected correctly. As shown in Fig. 2(c), for the left person in black, since we have the tracked result in previous frames (Fig. 2(a)), the propagated boxes successfully cover this person.

3.2 Flow-Based Pose Similarity

Using bounding box IoU (Intersection-over-Union) as the similarity metric (\(S_{Bbox}\)) to link instances can be problematic when an instance moves fast so that the boxes do not overlap, and in crowded scenes where boxes may not correspond one-to-one with instances. A more fine-grained metric is a pose similarity (\(S_{Pose}\)) that measures the body joint distances between two instances using Object Keypoint Similarity (OKS). Pose similarity can also be problematic when the pose of the same person changes across frames. We therefore propose a flow-based pose similarity metric.

Given one instance \(J^{k}_{i}\) in frame \(I^k\) and one instance \(J^{l}_{j}\) in frame \(I^l\), the flow-based pose similarity metric is represented as

$$\begin{aligned} S_{Flow}(J^{k}_{i}, J^{l}_{j}) = OKS(\hat{J^{l}_{i}}, J^{l}_{j}), \end{aligned}$$
(1)

where OKS denotes the Object Keypoint Similarity between two human poses, and \(\hat{J^{l}_{i}}\) denotes the joints of \(J^{k}_{i}\) propagated from frame \(I^k\) to \(I^l\) using the optical flow field \(F_{k\rightarrow l}\).

Due to occlusions by other people or objects, people often disappear and reappear. Considering only two consecutive frames is not enough, so we extend the flow-based pose similarity to multiple frames, denoted \(S_{Multi-flow}\), meaning the propagated \(\hat{J_k}\) comes from multiple previous frames. In this way, we can re-link instances even if they disappear in intermediate frames.
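The similarity could be computed roughly as below, reusing propagate_joints from the Sect. 3.1 sketch. The OKS formula follows the COCO definition; the per-keypoint constants, the person area, and the multi-frame handling (composing flow fields frame by frame) are illustrative assumptions.

```python
import numpy as np

def oks(joints_a, joints_b, area, kappas, visible):
    """COCO-style Object Keypoint Similarity between two poses.
    area is the person scale (e.g. box area) and kappas the
    per-keypoint falloff constants."""
    d2 = np.sum((joints_a - joints_b) ** 2, axis=1)
    e = d2 / (2.0 * area * kappas ** 2 + 1e-9)
    return float(np.sum(np.exp(-e) * visible) / max(visible.sum(), 1))

def flow_pose_similarity(J_k, flow_chain, J_l, area, kappas, visible):
    """S_Flow: propagate J_k to frame l through a chain of flow fields,
    then score it against J_l with OKS. With a single flow field this is
    S_Flow; with a list of fields it plays the role of S_Multi-flow."""
    J_hat = J_k
    for flow in flow_chain:                   # compose flow frame by frame
        J_hat = propagate_joints(J_hat, flow) # from the Sect. 3.1 sketch
    return oks(J_hat, J_l, area, kappas, visible)
```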

3.3 Flow-Based Pose Tracking Algorithm

Combining joint propagation using optical flow with the flow-based pose similarity, we obtain our flow-based pose tracking algorithm, presented in Algorithm 1. Table 1 summarizes the notation used in Algorithm 1.

Table 1. Notations in Algorithm 1.

First, we solve the pose estimation problem. For each frame being processed, the boxes from a human detector and the boxes generated by propagating joints from previous frames using optical flow are unified by a bounding box Non-Maximum Suppression (NMS) operation. The boxes generated by joint propagation serve to complement missed detections of the detector (e.g. in Fig. 2(c)). Then we estimate human poses from the images cropped and resized according to these boxes, using the pose estimation network proposed in Sect. 2.

Second, we solve the tracking problem. We store the tracked instances in a double-ended queue (Deque) with fixed length \(L_{Q}\), denoted as

$$\begin{aligned} Q = [ \mathcal {P}_{k-1},\mathcal {P}_{k-2},\ldots ,\mathcal {P}_{k-L_{Q}} ] \end{aligned}$$
(2)

where \(\mathcal {P}_{k-i}\) denotes the set of tracked instances in the previous frame \(I^{k-i}\), and the queue length \(L_{Q}\) indicates how many previous frames are considered when performing matching.

The queue Q captures the linking relationships over multiple previous frames and is initialized with the first frame of a video. For the \(k^{th}\) frame \(I^k\), we calculate the flow-based pose similarity matrix \(M_\mathrm{sim}\) between the untracked set of body joint instances \(\mathcal {J}^{k}\) (id is none) and the previous instance sets in Q. Then we assign an id to each body joint instance J in \(\mathcal {J}^{k}\), obtaining the assigned instance set \(\mathcal {P}^{k}\), by greedy matching on \(M_\mathrm{sim}\). Finally, we update the tracked instances Q by adding the \(k^{th}\) frame’s instance set \(\mathcal {P}^{k}\).
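A compact sketch of this per-frame tracking step, reusing the PoseInstance and greedy_match sketches above, is given below; the interface (a deque created with maxlen=L_Q and a pluggable similarity function) is an illustrative assumption.

```python
from collections import deque
import numpy as np

def track_frame(instances_k, Q, similarity, next_id):
    """One tracking step for frame k (illustrative interface).
    instances_k: PoseInstance list for frame k, track_id still None.
    Q: deque(maxlen=L_Q) holding the previous frames' tracked instance
       lists, newest first, i.e. [P_{k-1}, P_{k-2}, ...].
    similarity(cur, old): e.g. the flow-based pose similarity S_Flow."""
    prev = [p for frame in Q for p in frame]      # flatten the queue
    prev_ids = [p.track_id for p in prev]
    # similarity matrix M_sim between untracked and previous instances
    sim = np.zeros((len(instances_k), len(prev)))
    for r, cur in enumerate(instances_k):
        for c, old in enumerate(prev):
            sim[r, c] = similarity(cur, old)
    ids, next_id = greedy_match(sim, next_id, prev_ids)   # Sect. 3 sketch
    for inst, tid in zip(instances_k, ids):
        inst.track_id = tid
    Q.appendleft(instances_k)      # maxlen=L_Q drops the oldest frame
    return Q, next_id

# Usage sketch: Q = deque(maxlen=L_Q); call track_frame once per frame.
```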

4 Experiments

4.1 Pose Estimation on COCO

The COCO Keypoint Challenge [20] requires localization of multi-person keypoints in challenging, uncontrolled conditions. The COCO train, validation, and test sets contain more than 200k images and 250k person instances labeled with keypoints, of which 150k instances are publicly available for training and validation. Our models are trained only on the COCO train2017 dataset (57K images and 150K person instances), with no extra data involved. Ablations are studied on the val2017 set, and final results are reported on the test-dev2017 set for a fair comparison with the published state-of-the-art results [5, 6, 12, 24].

The COCO evaluation defines the object keypoint similarity (OKS) and uses the mean average precision (AP) over 10 OKS thresholds as the main competition metric [9]. The OKS plays the same role as the IoU in object detection: it is calculated from the distance between predicted and ground truth keypoints, normalized by the scale of the person.

Training. The ground truth human box is extended in height or width to a fixed aspect ratio, e.g. \(height{:}width = 4{:}3\). It is then cropped from the image and resized to a fixed resolution. The default resolution is \(256\times 192\), the same as the state-of-the-art method [6], for a fair comparison. Data augmentation includes scale (\(\pm 30\%\)), rotation (\(\pm 40 ^\circ \)) and flipping.
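The box preprocessing could be implemented roughly as below; the (x, y, w, h) box convention and the plain crop (rather than a full affine warp with scale/rotation augmentation) are simplifying assumptions for illustration.

```python
import numpy as np
import cv2

def fix_aspect_ratio(box, aspect=3.0 / 4.0):
    """Extend an (x, y, w, h) box in height or width so that
    width:height matches the target ratio (3:4, i.e. height:width = 4:3)."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    if w > aspect * h:
        h = w / aspect
    else:
        w = h * aspect
    return np.array([cx - w / 2, cy - h / 2, w, h])

def crop_person(image, box, out_size=(192, 256)):
    """Crop the fixed-aspect-ratio box and resize to 192x256 (w, h).
    A plain crop is used here for clarity; training would additionally
    apply scale, rotation and flip augmentation via an affine warp."""
    x, y, w, h = fix_aspect_ratio(box).astype(int)
    patch = image[max(y, 0):y + h, max(x, 0):x + w]
    return cv2.resize(patch, out_size)
```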

Our ResNet [13] backbone network is initialized by pre-training on the ImageNet classification task [28]. For pose estimation training, the base learning rate is 1e−3. It drops to 1e−4 at 90 epochs and 1e−5 at 120 epochs; there are 140 epochs in total. The mini-batch size is 128 and the Adam [18] optimizer is used. Four GPUs on a single server are used.
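Expressed in PyTorch, this schedule might look as follows; the model, data loader, and mse_heatmap_loss are assumed to come from the earlier sketches, and the loop is a simplified illustration rather than the released training code.

```python
import torch

def train(model, train_loader):
    """Schedule from Sect. 4.1: Adam, base lr 1e-3, dropped 10x at
    epochs 90 and 120, 140 epochs in total; the mini-batch size of 128
    is assumed to be set in the DataLoader."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[90, 120], gamma=0.1)
    for epoch in range(140):
        for images, targets, visible in train_loader:
            optimizer.zero_grad()
            loss = mse_heatmap_loss(model(images), targets, visible)
            loss.backward()
            optimizer.step()
        scheduler.step()
```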

ResNets of depth 50, 101 and 152 layers are experimented with. ResNet-50 is used by default, unless otherwise noted.

Testing. A two-stage top-down paradigm is applied, similar to [6, 24]. For detection, by default we use a Faster-RCNN [27] detector with a person detection AP of 56.4 on COCO val2017. Following the common practice in [6, 22], the joint location is predicted on the averaged heatmaps of the original and flipped image. A quarter offset in the direction from the highest response to the second highest response is applied to obtain the final location.
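The two test-time steps, flip averaging and the quarter-pixel offset, could be sketched as below; the exact released implementation may differ, so treat this as an illustrative version.

```python
import numpy as np

def flip_average(heatmaps, heatmaps_flipped, flip_pairs):
    """Average the heatmaps of the original and horizontally flipped
    image. heatmaps: (K, H, W); flip_pairs lists (left, right) joint
    index pairs that must be swapped back after un-flipping."""
    flipped = heatmaps_flipped[:, :, ::-1].copy()    # un-flip spatially
    for a, b in flip_pairs:
        flipped[[a, b]] = flipped[[b, a]]            # swap left/right joints
    return 0.5 * (heatmaps + flipped)

def decode_heatmap(heatmap):
    """Take the argmax of one joint's heatmap and shift it a quarter
    pixel towards the higher neighbouring response, an illustrative
    version of the quarter-offset refinement described above."""
    h, w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    offset = np.zeros(2)
    if 0 < x < w - 1:
        offset[0] = 0.25 * np.sign(heatmap[y, x + 1] - heatmap[y, x - 1])
    if 0 < y < h - 1:
        offset[1] = 0.25 * np.sign(heatmap[y + 1, x] - heatmap[y - 1, x])
    return np.array([x, y]) + offset, heatmap[y, x]  # location, score
```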

Table 2. Ablation study of our method on the COCO val2017 dataset. The settings being compared are in bold; for example, (a, e, f) compares backbones.

Ablation Study. Table 2 investigates various options in our baseline in Sect. 2.

  1. Heatmap resolution. Method (a) uses three deconvolutional layers to generate \(64\times 48\) heatmaps. Method (b) generates \(32\times 24\) heatmaps with two deconvolutional layers. (a) outperforms (b) by 2.5 AP with only slightly increased model capacity. By default, three deconvolutional layers are used.

  2. Kernel size. Methods (a, c, d) show that a smaller kernel size gives a marginal decrease in AP: a 0.3 point drop from kernel size 4 to 2. By default, a deconvolution kernel size of 4 is used.

  3. Backbone. As in most vision tasks, a deeper backbone performs better. Methods (a, e, f) show steady improvement with deeper backbones: the AP increase is 1.0 from ResNet-50 to ResNet-101 and 1.6 from ResNet-50 to ResNet-152.

  4. Image size. Methods (a, g, h) show that image size is critical for performance. From method (a) to (g), the image size is reduced by half and AP drops accordingly, while about 75% of the computation is saved. Method (h) uses a larger image size and gains 1.8 AP over method (a), at the cost of higher computation.

Table 3. Comparison with Hourglass [22] and CPN [6] on COCO val2017 dataset. Their results are cited from [6]. OHKM means Online Hard Keypoints Mining.

Comparison with Other Methods on COCO val2017. Table 3 compares our results with an 8-stage Hourglass [22] and CPN [6]. All three methods use a similar top-down two-stage paradigm. For reference, the person detection AP of Hourglass [22] and CPN [6] is 55.3 [6], comparable to our 56.4.

Compared with Hourglass [6, 22], our baseline improves AP by 3.5. Both methods use an input size of \(256\times 192\) and no Online Hard Keypoints Mining (OHKM).

CPN [6] and our baseline use the same backbone of ResNet-50. When OHKM is not used, our baseline outperforms CPN [6] by 1.8 AP for input size \(256\times 192\), and 1.6 AP for input size \(384\times 288\). When OHKM is used in CPN [6], our baseline is better by 0.6 AP for both input sizes.

Note that the results of Hourglass [22] and CPN [6] are cited from [6] and were not implemented by us. Therefore, the performance difference could come from implementation differences. Nevertheless, we believe it is safe to conclude that our baseline has comparable results while being simpler.

Table 4. Comparisons on the COCO test-dev dataset. Top: methods in the literature, trained only on the COCO training dataset. Middle: results submitted to the COCO test-dev leaderboard [9], which use either extra training data (*) or ensembled models (\(^+\)). Bottom: our single-model results, trained only on the COCO training dataset.

Comparisons on COCO test-dev Dataset. Table 4 summarizes the results of other state-of-the-art methods in the literature on the COCO Keypoint Leaderboard [9] and the COCO test-dev dataset. For our baseline here, a human detector with a person detection AP of 60.9 on the COCO test-dev split is used. For reference, CPN [6] uses a human detector with a person detection AP of 62.9 on the COCO minival split.

Compared with CMU-Pose [5], a bottom-up approach for multi-person pose estimation, our method is significantly better. Both G-RMI [24] and CPN [6] use a top-down pipeline similar to ours. G-RMI also uses a ResNet backbone, as we do. Using the same ResNet-101 backbone, our method outperforms G-RMI for both the small (\(256\times 192\)) and large (\(384\times 288\)) input sizes. CPN uses a stronger ResNet-Inception [29] backbone; as evidence, the top-1 error rates on the ImageNet validation set of ResNet-Inception and ResNet-152 are 18.7% and 21.4% respectively [29]. Yet, for the same input size \(384\times 288\), our result of 73.7 outperforms both CPN’s single model and their ensembled model, which obtain 72.1 and 73.0 respectively.

4.2 Pose Estimation and Tracking on PoseTrack

The PoseTrack [2] dataset is a large-scale benchmark for multi-person pose estimation and tracking in videos. It requires not only pose estimation in single frames, but also temporal tracking across frames. It contains 514 videos with 66,374 frames in total, split into 300, 50 and 208 videos for the training, validation and test sets respectively. For training videos, 30 frames from the center are annotated. For validation and test videos, besides the 30 center frames, every fourth frame is also annotated to evaluate long-range articulated tracking. The annotations include 15 body keypoint locations, a unique person id, and a head bounding box for each person instance.

The dataset has three tasks. Task 1 evaluates single-frame pose estimation using the mean average precision (mAP) metric, as in [25]. Task 2 also evaluates pose estimation, but allows the use of temporal information across frames. Task 3 evaluates tracking using multi-object tracking metrics [4]. Since our tracking baseline uses temporal information, we report results on Tasks 2 and 3. Note that our pose estimation baseline also performs best on Task 1, but it is not reported here for simplicity.

Training. Our pose estimation model is fine-tuned from the model pre-trained on COCO in Sect. 4.1. As only keypoints are annotated, we obtain the ground truth box of a person instance by extending the bounding box of all its keypoints by 15% in length (7.5% on each side). The same data augmentation as in Sect. 4.1 is used. During training, the base learning rate is 1e−4. It drops to 1e−5 at 10 epochs and 1e−6 at 15 epochs; there are 20 epochs in total. Other hyper-parameters are the same as in Sect. 4.1.

Testing. Our flow-based tracking baseline is closely tied to the human detector’s performance, as the propagated boxes are combined with the boxes from a detector. To investigate this effect, we experiment with two off-the-shelf detectors: a faster but less accurate R-FCN [16] and a slower but more accurate FPN-DCN [10]. Both use a ResNet-101 backbone and are taken from a public implementation [1]. No additional fine-tuning of the detectors on the PoseTrack dataset is performed.

Fig. 3. Some sample results on the PoseTrack Challenge test set.

Similar to [11], we first drop low-confidence detections, which tends to decrease the mAP metric but increase the MOTA tracking metric. Also, since the MOTA tracking metric penalizes false positives equally regardless of their scores, we also drop low-confidence joints before generating the results, as in [11]. We choose the box and joint drop thresholds in a data-driven manner on the validation set: 0.5 and 0.4 respectively.
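A small illustrative filter for this step is sketched below; approximating the box confidence by the mean joint score is an assumption made for illustration.

```python
def filter_low_confidence(instances, box_thresh=0.5, joint_thresh=0.4):
    """Drop low-confidence detections, then suppress low-confidence
    joints, using the thresholds chosen on the validation set (0.5/0.4).
    The box confidence is approximated here by the mean joint score."""
    kept = [p for p in instances if p.scores.mean() > box_thresh]
    for p in kept:
        p.scores[p.scores < joint_thresh] = 0.0   # treated as missing
    return kept
```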

For optical flow estimation, the fastest model in the FlowNet family [14], FlowNet2S, is used, as provided in [23]. We use the PoseTrack evaluation toolkit for results on the validation set and report final results on the test set from the evaluation server. Figure 3 illustrates some results of our approach on the PoseTrack test set.

Our main ablation study is performed on ResNet-50 with input size \(256\times 192\), which is already strong compared with the state of the art. Our best result uses ResNet-152 with input size \(384\times 288\).

Table 5. Ablation study on the PoseTrack Challenge validation dataset. Top: results of the ResNet-50 backbone with the R-FCN detector. Middle: results of the ResNet-50 backbone with the FPN-DCN detector. Bottom: results of the ResNet-152 backbone with the FPN-DCN detector.
Table 6. Multi-person pose estimation performance on the PoseTrack Challenge dataset. “*” means models trained on the train + validation set. Top: results on the PoseTrack validation set. Bottom: results on the PoseTrack test set.

Effect of Joint Propagation. Table 5 shows that using boxes from joint propagation improves both the mAP and MOTA metrics across different backbones and detectors. With the R-FCN detector, using boxes from joint propagation (method \(a_3\) vs. \(a_1\)) improves mAP by 4.3% and MOTA by 3.8%. With the better FPN-DCN detector, joint propagation (method \(b_3\) vs. \(b_1\)) improves mAP by 3.1% and MOTA by 2.3%. With ResNet-152 as the backbone (method \(c_3\) vs. \(c_1\)), the improvement is 3.8% mAP and 2.8% MOTA. Note that this improvement does not come merely from having more boxes. As noted in [11], simply keeping more boxes from a detector, e.g. by using a smaller threshold, leads to an improvement in mAP but a drop in MOTA, since more false positives are introduced. Joint propagation improves both mAP and MOTA, indicating that it finds persons missed by the detector, possibly due to motion blur or occlusion in video frames.

Another interesting observation is that the less accurate R-FCN detector benefits more from joint propagation. For example, the gap between the FPN-DCN and R-FCN detectors with ResNet-50 decreases from 3.3% mAP and 2.2% MOTA (\(a_1\) vs. \(b_1\)) to 2.1% mAP and 0.4% MOTA (\(a_3\) vs. \(b_3\)). Also, method \(a_3\) outperforms method \(b_1\) by 1.0% mAP and 1.6% MOTA, indicating that the weak R-FCN detector combined with joint propagation can perform better than the strong FPN-DCN detector alone, while being more efficient, since joint propagation is fast.

Effect of Flow-Based Pose Similarity. Table 5 shows that flow-based pose similarity works better than bounding box similarity and pose similarity. For example, flow-based similarity using multiple frames (method \(b_6\)) and a single frame (method \(b_5\)) outperforms bounding box similarity (method \(b_3\)) by 0.8% MOTA and 0.3% MOTA, respectively.

Note that flow-based pose similarity is better than bounding box similarity when a person moves fast and the boxes do not overlap. Method \(b_6\), whose flow-based pose similarity considers multiple frames, has a 0.5% MOTA improvement over method \(b_5\), which considers only one previous frame. This improvement comes from cases where people are briefly lost due to occlusion and then reappear.

Comparison with State-of-the-Art. We report our results on both Task 2 and Task 3 of the PoseTrack dataset. As verified in Table 5, methods \(b_6\) and \(c_6\) are the best settings and are used here. The backbones are ResNet-50 and ResNet-152, respectively; the detector is FPN-DCN [10].

Table 6 reports the results on pose estimation (Task 2). Our smaller model (ResNet-50) already outperforms the other methods by a large margin. Our larger model (ResNet-152) further improves the state of the art. On the validation set, it has an absolute 16.1% improvement in mAP over [11], the winner of the ICCV’17 PoseTrack Challenge, and a 10.2% improvement over a recent work [32], the previous best.

Table 7 reports the results on pose tracking (Task 3). Compared with [11] on the validation and test sets, our larger model (ResNet-152) improves MOTA by 10.2 and 5.8 over its 55.2 and 51.8, respectively. Compared with the recent work [32], our best model (ResNet-152) has 7.1% and 6.6% improvements on the validation and test sets respectively. Note that our smaller model (ResNet-50) also outperforms the other methods [11, 32].

Table 8 summarizes the results on the PoseTrack leaderboard. Our baseline outperforms all public entries by a large margin. Note that all methods differ significantly, and this comparison is only at the system level.

Table 7. Multi-person pose tracking performance on the PoseTrack Challenge dataset. “*” means models trained on the train + validation set. Top: results on the PoseTrack validation set. Bottom: results on the PoseTrack test set.
Table 8. Results of multi-person pose tracking on the PoseTrack Challenge leaderboard. “*” means models trained on the train + validation set.

5 Conclusions

We present simple and strong baselines for pose estimation and tracking. They achieve state-of-the-art results on challenging benchmarks and are validated via comprehensive ablation studies. We hope such baselines will benefit the field by easing the development and evaluation of new ideas.