DOI: 10.1145/3613904.3642588

Characterizing and Quantifying Expert Input Behavior in League of Legends

Published: 11 May 2024

Abstract

To achieve high performance in esports, players must be able to effectively and efficiently control input devices such as a computer mouse and keyboard (i.e., input skills). Characterizing and quantifying a player’s input skills can provide useful insights, but collecting and analyzing sufficient amounts of data in ecologically valid settings remains a challenge. Targeting the popular esports game League of Legends, we go beyond the limitations of previous studies and demonstrate a holistic pipeline of input behavior analysis: from quantifying the quality of players’ input behavior (i.e., input skill) to training players based on the analysis. Based on interviews with five top-tier professionals and analysis of input behavior logs collected from 4,835 matches played freely at home by 193 players (including 18 professionals), we confirmed that players with higher in-game ranks implement eight different input skills with higher quality. In a three-week follow-up study using a training aid that visualizes a player’s input skill levels, we found that the analysis provided players with actionable lessons, potentially leading to meaningful changes in their input behavior.


1 INTRODUCTION

Esports refers to highly organized competitive video gaming and has grown to become an important part of today’s gaming culture [34, 54]. Even in the 2022 Asian Games, esports was selected as an official event, which symbolically shows that esports has a large fan base [17] and that social environments and systems for nurturing professional athletes [70] have matured substantially [89]. With such recent growth, esports has also begun to be recognized as an important research topic in the field of human-computer interaction (HCI) [54, 110].

Although the extent may vary from player to player [74], it is clear that most players pursue one primary goal in esports: winning matches [10, 115]. To develop the skills necessary to win, players actively discuss gameplay in online communities [77, 115], analyze in-game data collected during matches [30, 98], and use a variety of training tools [81, 88]. In esports especially, because matches are computer-mediated, large amounts of data can be collected [5, 6, 19, 23, 31, 32, 43, 75, 79, 102, 108, 115, 118], analyzed relatively quickly, and provided to players. Most popular esports games today provide in-game statistics on match results and player characteristics, and also provide separate Application Programming Interfaces (APIs) that allow players to access large amounts of data stored on their servers. This makes the esports learning and training culture more data-driven [50, 87, 108, 113], a characteristic that distinguishes it from traditional sports. Recent studies [10, 50, 87, 108, 109, 113] have consistently stated the importance of data analytics in esports and the potential value it can provide to players.

However, due to its short history, today’s esports data analysis technologies are not sufficiently mature and have room for significant improvement [53, 54, 71, 92, 94]. For example, problems such as low granularity of information [2, 3, 55, 109], low interpretability [50, 108, 113], lack of real-time analysis [10], lack of support at the team level [70], and lack of standard operationalization [92] have been pointed out, and meaningful studies have been conducted to address those problems. From a more ecological perspective, another challenge that remains is verifying whether esports data analysis and visualization technologies can actually bring meaningful behavioral and performance changes to players and teams [8, 49, 72, 89, 103].

Figure 1: The three steps of expert input behavior analysis in this study

Inspired by such previous studies, and to contribute to creating a more scientific and effective esports analysis and training system, we focus in this study on the analysis of a type of esports data that, despite its importance, has not been covered in depth [77, 81]: players’ low-level input behavior logs (e.g., key input sequences, pointer trajectories). With few exceptions (e.g., card games [104]), most popular esports games today belong to the real-time video game genre, where players are required to efficiently and effectively manipulate input devices such as a computer mouse under time pressure. Even if players accurately perceive the game situation and build an optimal plan, it is meaningless if the plan is not realized through appropriate manipulation of the input device [80, 81, 92]. In particular, the quality of a player’s input behavior (or input skill in this study) has a relatively consistent effect across all matches [15]. For example, if a player is slower than other players at pointer control with a computer mouse, that difference will lead to a penalty that occurs consistently in all future matches. Therefore, analysis of input logs has the potential to provide more insights about the capabilities of an individual player [5, 14, 15, 23, 53, 77, 102, 115], independent of external factors beyond the player’s control, such as matchmaking. At the same time, input skill analysis can ultimately inform players at a low level about how to perform input behavior expertly in a game, which is expected to be more actionable for players than lessons derived from more abstract and aggregated numbers, such as match summary statistics (e.g., average damage dealt or kill-death ratio).

Despite its potential value, and despite researchers [5, 12, 14, 15, 23, 27, 30, 33, 53, 58, 77, 80, 81, 94, 100, 102, 115] and players [77, 115] agreeing on its significance, how to analyze and utilize player input behavior data in esports has not been explored in depth. There are two main reasons for this. First, because input behavioral data has high spatio-temporal resolution and is relatively heavy, it is not easy to collect and analyze in large quantities, unlike lightweight match summary statistics. In fact, no significant public database of input behavior in esports currently exists, and previous studies on input behavior had to be conducted using custom input loggers with relatively small numbers of participants (N=3 to 55) in controlled lab environments [7, 11, 14, 15, 40, 41, 42, 52, 77, 81, 94]. Such low ecological and external validity has made it difficult to draw meaningful conclusions about what expert input behavior is in esports. Second, even if a dataset is available, raw input behavior data (e.g., pointer trajectories) does not speak for itself, so indices of expert input behavior that everyone can agree on must be separately operationalized, which is challenging in itself. For instance, in order to actually help players learn and train, analysis results (i.e., indices of expert input behavior) must be sufficiently explainable and interpretable. Most previous studies [5, 6, 7, 14, 19, 23, 41, 53, 75, 94, 95, 118] on input behavior analysis applied machine learning-based black-box modeling techniques, making it difficult for players to receive actionable lessons from the analysis results.

In this study, we overcome the aforementioned challenges and demonstrate a holistic pipeline of input behavior analysis in esports, consisting of three steps (see Figure 1): (1) operationalization of expert input behavior (Step 1), (2) data collection and analysis (Step 2), and (3) visualization-based player training (Step 3). The study is conducted on League of Legends (LoL), the most popular esports game in the MOBA (Multiplayer Online Battle Arena) genre today and, to our knowledge, the one for which expert input behavior is least understood1.

The goal of Step 1 was to present a standard operationalization of expert input behavior in LoL that most players could agree on and that was easily interpretable. We interviewed top-tier LoL professionals about how keyboard and mouse input should be performed to achieve high performance in LoL. Through a thematic analysis of the interview transcripts, we identified eight themes, each representing an expert input skill required in LoL (Table 4). The interviewees also provided descriptions of what patterns a player’s keyboard and mouse input behavior would exhibit when each skill reached an expert level. Based on the descriptions, we devised eight input skill indices that are calculated from a player’s mouse and keyboard logs during a match, quantifying how close the player’s input behavior patterns were to recommendations made by experts.

The goal of Step 2 was to verify whether the indices devised in Step 1 were significantly correlated with players’ actual in-game expertise level (i.e., rank). Based on input logs collected from 4,835 matches played by a total of 193 LoL players, including 18 professionals, we confirmed that the proposed indices showed a significant correlation to players’ rank. More specifically, we found that players with higher expertise displayed stronger patterns of expert input behavior. For ecological validity, all data were collected unobtrusively in the wild at the participants’ homes using custom software. This unique dataset is being released as open source to serve as a benchmark for future esports research2.

The goal of Step 3 was to show whether our input skill indices could actually provide actionable lessons to players. We developed a training aid that visualizes input skill indices and provided it to a local esports academy. In a three-week training program in which ten students used the training aid, we observed a trend (p = 0.056) of student behavior changing in the direction we expected, even though the students’ baseline ranks were already high on average.

We summarize the contributions of this study as follows:

A large-scale input behavior dataset collected in an ecologically valid setting from a large number of esports players with various ranks was analyzed.

Quantitative indices of eight input skills required in LoL, a MOBA genre game, were presented and their correlation with in-game rank was shown.

Through a three-week longitudinal study, we verified whether visualization of input skill indices could provide actionable lessons to esports players.


2 BACKGROUND AND RELATED WORK

2.1 League of Legends: An Overview

LoL is a game belonging to the MOBA genre in which two teams of five players control and grow their own champions to destroy the enemy’s building, the Nexus, to win. The battlefield, called Summoner’s Rift, has three lanes: top, middle, and bottom, and each lane has defensive buildings (Figure 2). The Nexus spawns bots called minions every 30 seconds, which move along each lane and attack enemies they encounter. Between the lanes, there is an area called the jungle where neutral monsters live. By killing minions or monsters, players can gain gold and experience points, which can be used to develop their champions. In general, players take on one of five roles, depending on the area of the map that they are primarily responsible for: Top, Jungle, Middle, Bottom, or Support.

Figure 2: The main game map used in LoL: Summoner’s Rift

Players control champions through keyboard and mouse input (Table 1). Although there are a few exceptions, players generally right-click to move the champion to a specific location and left-click to perform a skill or attack. Which skill to use is usually determined through key inputs such as Q, W, E, and R. Number keys 1 through 7 are usually pressed to use items or place wards3. Function keys F2 to F5 are pressed to switch to a screen centered on the champion controlled by a teammate. While the TAB key is pressed, a scoreboard is displayed where players can check the overall situation of the game, including the resources of enemies and allies. Players can also left-click on the minimap to switch the view to where they want. When the pointer touches the edge of the screen, the local camera view translates in that direction at a set speed, which is called edge-panning (see Figure 3).

Figure 3: When the mouse pointer is located on an edge of the screen, edge-panning occurs and the local camera view is translated in that direction.

Type | Description
Right-click | Moving champion, performing basic attack
Left-click | Performing actions such as using skills or smart pings; switching local camera view by clicking on the minimap
Edge-panning | Translating local camera view
Q, W, E, R | Activating champion unique abilities
A | Performing basic attack with left-click
D, F | Using summoner spells that can be used commonly by all champions
G, V | Activating smart ping*
Ctrl, Alt | Activating smart ping, performing other functions with other keys
1-7 | Activating item abilities, placing wards
F2-F5 / Spacebar | Switching local camera view to a teammate's champion / the player's champion
TAB | Displaying scoreboard with resources of players
* Used to convey more specific intentions to teammates (Figure 4).

Table 1: Types of input a player can use in LoL

Figure 4: (left) UI for activating a smart ping, (right) six representative types of smart ping and their meanings

2.2 Analyzing Player Input Behavior in Esports

Several previous studies have analyzed player input behavior and explored its applications in competitive real-time video games. Representative studies include player identification based on input behavior pattern recognition [5, 7, 23, 41, 94, 118] and predicting player skill level from input behavior through machine learning [14, 15, 19, 53, 94]. Among them, only six studies [7, 14, 15, 41, 53, 94] collected raw input behavior data directly from players; the rest used only aggregated statistics of input behavior (i.e., actions per minute) from existing datasets. The studies that directly collected raw input behavior were conducted in controlled labs with only a relatively small number of participants (N=3 to 55) within a narrow range of expertise. Overall, the results of machine learning studies are not interpretable enough to be useful for player training. Only two studies [7, 94] were conducted on LoL.

There have also been studies that analyzed player input behavior to test researchers’ own hypotheses about what expert input behavior is in esports. Leavitt et al. [58] found that the frequency with which players activate smart pings for communication in LoL is related to their in-game performance. Tan et al. [100] recorded voice communication between players in LoL and found that communication word frequency can be used as a proxy of team cohesion. Jeong et al. [40] found that expert players in StarCraft had more spread-out on-screen gaze and faster saccadic movements. Thompson et al. [102] observed a stronger tendency for motor chunking in local view movement sequences of expert players in StarCraft 2. Yan et al. [115] found that expert players showed a more sustained control group use pattern, also in StarCraft 2. Among these studies, only one [40] directly measured raw input behavior, in a controlled lab environment with a small number of participants (N=8). These studies are limited in that they cover only some of the input skills that may be required in a game (without in-depth discussions with expert players) and do not test whether their findings are significant for player learning or training.

Similar to our study, a few previous studies attempted to more broadly characterize the skills required in esports through expert interviews [12, 33, 92] and community surveys [77, 115]. Fanfarelli et al. [33] interviewed 11 professional Overwatch players and found that having high game sense and mechanical skills are essential in the game. Bonilla et al. [12] interviewed 11 esports players (including 5 LoL players) and found that both tactical and psychological skills are essential. Yan et al. [115] addressed the significance of unit group control skills in StarCraft 2 by analyzing posts in online forums. Sharpe et al. [92] formed a technical expert panel (TEP) of 28 esports players and researchers (including 13 professional players) and revealed that mouse control and keyboard proficiency can be indicators of a player’s performance in Counter-Strike: Global Offensive (CS:GO). Meanwhile, these studies did not reveal what specific patterns of input behavior are required at a lower level.

Park et al. [77] surveyed conjectures floating around in online forums about expert input behavior in FPS and then presented input skill indices that could verify each conjecture. Their study is most similar to ours, but it has limitations in ecological validity in that it recruited a small number of participants (N=16) and was conducted in a controlled lab environment with special equipment.

2.3 Computer-Supported Gameplay, Spectating, Learning, and Training in Esports

Several pioneering studies have sought to go beyond the default features of games and assist players in their gameplay, spectating, learning, and training processes through additional data extraction, analysis, and visualization. Studies have developed machine learning models to recommend items [95], predict win rates [6, 24, 25, 28, 71, 93], or assist in the ban-pick process4 [70]. Such techniques directly assist players in making more optimal decisions. On the other hand, computational tools for more indirect assistance have also been presented. By visualizing and providing more detailed in-game situation information that games do not provide by default, these studies sought to aid players’ spectating [17], post-game reviews [2, 3, 55, 66, 109], and tactic discovery [113]. In general, players want to obtain more interpretable information [49, 108] about a specific game context [50] beyond as-is visualization of the game situation. In that regard, methods have been proposed that apply explainable AI techniques [95] (e.g., LIME [85]) or allow users to ask what-if questions [66, 70]. Esports and games are also attracting attention as a medium for education [21, 46, 54], and such novel data analysis and visualization tools can also contribute to enhancing students’ data literacy and metacognition skills in the educational process. Analysis and visualization of low-level input behavior are also desired by players [48, 77, 81, 108, 115] and spectators [20], but previous studies have rarely addressed them.

Sabtan et al. [89] interviewed six professional LoL coaches and found that there is no standard way for today’s esports players to learn and train, and that more effective performance indices are needed for scouting and talent development. Kleinman et al. [49] identified challenges that exist in the esports learning process, especially the need for high explainability in player evaluation, and discussed implications for the design of computational support tools. In a similar vein, Wallner et al. [109] showed that players can infer meaningful information from visualizations of the spatio-temporal evolution of games, and Kleinman et al. [50] explored how players interpret post-play visualizations. Pluss et al. [81] tested how reliable and valid player evaluation metrics were in a commercial esports training software. However, to our knowledge, there is no research yet that has explored how esports training based on custom data analysis and visualization techniques can bring about significant changes in players’ behavior and performance.

2.4 HCI Studies on User Input Performance

We can refer to HCI studies that address users’ input performance in more general, everyday interactions rather than competitive video gaming. Studies analyzing the kinematics of mouse pointer trajectories have shown that fewer corrective movements or submovements are observed in higher-performance pointing behaviors [59, 60]. During pointing, users (or players) perform an initial fast ballistic movement toward the target, but if the amount of motor noise added during this process is too large [68, 90], more corrective movements must follow. In fact, previous studies have shown that professional esports players have significantly lower completion times [80] and fewer corrective movements [77] than amateurs.

Regarding button input behavior, there have been attempts to understand the user’s level of expertise from the sequence of button inputs or commands the user enters when using applications [36, 69, 86, 107]. By analyzing the connections between commands used, the expertise [96] or procedural knowledge [4] of a user can be characterized. More expert users may have more combinations of command links [96] or be able to use a wider variety of commands [37]. This also applies to mental games like chess: experts were able to more easily remember the positions of chess pieces through cognitive chunking [18]. In the competitive video game domain, a previous study [15] observed higher-complexity button press sequences in more skilled FPS players.


3 CHARACTERIZING EXPERT INPUT BEHAVIOR IN LOL: A QUALITATIVE STUDY

We conducted semi-structured interviews with top-notch professional players and coaches to gain a deeper understanding of what expert input behavior is in LoL. For the interview transcripts, we performed a thematic analysis [13], the results of which are later used to devise quantitative indices of player input skills.

3.1 Participants

We contacted a local esports agency and an academy to recruit four former professional athletes and a professional coach (Table 2). We tried to recruit top-tier experts with professional experience in the LoL Champions Korea (LCK), where the world’s best LoL athletes are active. In particular, we ensured that this study represented different groups of experts (players and coaches) and different in-game roles. Since the number of athletes active in the LCK is around 100 per year, it was realistically difficult to recruit more participants [49], and interviews with esports experts in previous studies were also conducted with a similar number of participants (N=1 to 6) [33, 70, 83, 89, 113]. Some studies targeted significantly more participants (N=13 to 22) but did not recruit top-tier experts [49], covered a variety of genres [8], or covered concepts broader than input skills (e.g., general performance) [92].

ID | Professional Career (country / period) | Main Role | Highest Rank | Activity Period
P1 | China / 15 months, Korea / 32 months, Mexico / 3 months | Jungle | Challenger | 7 years
P2 | Korea / 6 months, China / 8 months | Middle | Challenger | 3 years
P3 | Korea / 20 months, Turkey / 9 months, United States / 10 months | Top | Challenger | 6 years
P4 | Korea / 11 months, Taiwan / 11 months, Thailand / 4 months | Bottom | Challenger | 5 years
P5 | Korea / 6 months, Taiwan / 9 months, Mexico / 4 months | Coach | Master | 5 years

Table 2: Information about the five participants in the qualitative study

3.2 Surveying Core Tasks in Esports

Genre | Tasks | Task Group | References
MOBA, FPS | Character Movement & Positioning | Character Control & Combat | [14, 29, 33, 77, 89, 111, 116, 117]
MOBA, FPS | Attack Opponent & Use Ability | Character Control & Combat | [29, 31, 33, 57, 111, 116, 117]
FPS | Aim & Reload | Character Control & Combat | [14, 33, 77]
RTS | Group Multiple Units | Character Control & Combat | [115]
MOBA | Translate Camera View | Character Control & Combat | [30]
MOBA, FPS, RTS | Aware Game State | Situational Awareness | [27, 33, 40, 53, 56, 57]
MOBA | Switch Camera View | Situational Awareness | [27, 56]
MOBA, FPS | Communicate with Teammate | Team Communication | [12, 16, 21, 30, 33, 45, 49, 53, 58, 74, 108]

Table 3: Three core tasks obtained as a result of the survey, specific tasks belonging to each core task, and their references

If we asked about the expert input skills necessary in LoL without providing additional context, interviewees would likely struggle to provide specific answers. Therefore, we decided to first investigate the core tasks generally required in esports and then ask the interviewees what expert input behaviors were required in the context of each task. This allowed the interview to broadly cover a variety of input skills while at the same time allowing the interviewees to give more specific answers.

We surveyed twenty-four previous studies that mentioned certain tasks required in esports. Eight studies [14, 29, 33, 77, 89, 111, 116, 117] noted the significance of precise character movement and strategic positioning, taking into account the locations of both allies and enemies. Seven studies [29, 31, 33, 57, 111, 116, 117] also specified the importance of attacking opponents and using abilities. In FPS games, which are predominantly focused on firearms, three studies [14, 33, 77] pointed out the importance of quick and accurate aiming at enemies and timely reloading. One study [115] revealed differences between experts and novices in selecting and grouping units in RTS games, which require controlling a large number of units. In MOBA games, where a top-down perspective is used instead of a first-person view, the efficient management of the local camera view was mentioned as key to controlling the character effectively [30]. Across all surveyed genres, six studies [27, 33, 40, 53, 56, 57] mentioned that comprehending the overall game flow is significant for victory. It was also identified that in MOBA games, it is important for players to alternately switch the local camera view to understand the situation of their teammates while controlling their champion [27, 56]. For team-based game genres, 11 studies [12, 16, 21, 30, 33, 45, 49, 53, 58, 74, 108] indicated the necessity of team communication to employ team-level strategies.

Although the details may vary depending on the game genre, we confirmed from this survey that the core tasks required in esports can be broadly classified into three categories: (1) character control and combat, (2) situational awareness, and (3) team communication. The survey results are also summarized in Table 3. The following section describes in detail the interview protocol constructed based on the survey results.

3.3 Interview Protocol

A semi-structured interview was conducted, with the interview questions discussed and decided on by the authors. The interview was divided into four parts to help participants recall specific input skills. The first three parts focused on what expert input skills are required to successfully conduct each of the three core tasks required in esports: (Part 1) character control and combat, (Part 2) situational awareness, and (Part 3) team communication. In the fourth part, we asked whether the general characteristics of experts’ behavior in pointing and button input, which were identified in previous HCI studies (see Section 2.4), were thought to be important in LoL as well. The final main questions are listed below:

How should keyboard and mouse input be performed when a player is moving a champion to attack or evade an enemy attack? (Part 1-Q1)

How should keyboard and mouse input be performed when a player wants to fire normal or special attacks at neutral monsters or enemy champions? (Part 1-Q2)

When a player wants to move a champion or attack an enemy, the player’s local view window often needs to be moved together. How should the keyboard and mouse input for moving the view be performed? (Part 1-Q3)

How should keyboard and mouse input be performed when a player switches the view of the screen to check the position and status of teammates or enemies? (Part 2-Q4)

How should keyboard and mouse input be performed when players communicate with their teammates? (Part 3-Q5)

Generally, when we control a pointer, we can save time if we can move the pointer to its destination in a single motion without additional corrective movements [59, 60]. What does a motor execution skill like this mean for LoL gameplay? (Part 4-Q6)

In video games in general, memorizing button press sequences (i.e., combos) helps players react quickly to a given situation. What does this chained input skill mean for LoL gameplay? (Part 4-Q7)

To prevent any possible bias, the fourth part began after all three parts of the interviews had been completed. During the interview, we also improvised new questions when unexpected and interesting topics came up. The full interview protocol is included in the Supplementary Material.

Game Task | Theme (Index) | Description | Quotes
Champion Control and Combat | Positioning Skill | Frequent and large changes of directions of the champion movement | (P3) "I continuously click my mouse in various directions, ensuring my champion keeps moving without pause." (P4) "I constantly shift slightly to the left or right, making it unpredictable which way I will move. It’s like deceiving others constantly."
Champion Control and Combat | Risk Compensation Skill | Preparation of quick follow-up clicks for the risk of click failure | (P1) "When I try to move by right-clicking, I often click multiple times to ensure accuracy." (P2) "I keep right-clicking until I’m sure the movement is actually happening."
Champion Control and Combat | Edge-Panning Skill | Edge-panning to keep important scenes in the center of the screen | (P1) "It’s crucial to ensure that the entire battleground is visible on the screen." (P2) "Because there’s an unnecessary view. For example, in a 5-5 clash, the view behind me is a useless field to see."
Champion Control and Combat | Motor Execution Skill | Movement of the pointer without corrective movement to the desired point | (P1) "It’s very important to be able to move the mouse to where you want it to be with a single motion." (P2) "Regardless of the mouse’s speed, maintaining a consistent pace is certainly beneficial."
Champion Control and Combat | Chained Input Skill | Familiar with more diverse combos | (P4) "If you don’t know how to attack your opponent with a combo, you have no choice but to play defensively." (P5) "The more skilled you are, the more diverse the combos you may use."
Situational Awareness | Monitoring Skill | Switching view and using tab key frequently | (P1) "Since the situation keeps changing, we have to keep changing views. We have to do that work repeatedly." (P4) "It’s good to press the TAB and look at the scoreboard often because I can plan how I will play the game no matter what situation I face next."
Situational Awareness | Visual Processing Skill | Dwell short in switched view | (P2) "It’s best to take a quick look and get back to it immediately." (P4) "It’s better to check it out quickly and then dive straight into my play. (...) That’s something you work on with practice."
Team Communication | Active Communication Skill | High frequency of smart ping | (P4) "There is a good method called ping made by Riot, and I think the more you use it, the better." (P5) "It’s good to communicate with a clear intent through smart ping."

Table 4: Eight themes (indices) obtained through thematic analysis and key quotes for each theme

3.4 Interview Procedure and Analysis

The first author (i.e., interviewer) conducted interviews with all five participants (i.e., interviewees). The interviews were carried out in person at a location chosen by the participant. The interview was voice-recorded via the interviewer’s phone after obtaining consent from the interviewees. The interviewer informed the interviewees that they could quit the interview at any time if they felt uncomfortable. In order to make the participants feel comfortable, all interviews began with the interviewer asking them about their gaming experience. Afterward, the interviewer asked the interviewee questions about expert input skills in LoL according to the interview protocol. All these processes were approved by the Institutional Review Board (IRB) of the university.

The interview recording files were automatically converted to text using the transcription function of the voice-recording application. The first author then reviewed and corrected the transcripts for accuracy. We performed an inductive thematic analysis on the interview transcripts using the six phases proposed by Terry et al. [101]. To validate our analysis, we employed researcher triangulation. The first, second, and fifth authors were actively involved in multiple coding and review sessions. These three authors thoroughly reviewed the transcripts and independently coded the interview with Participant 1 (P1), then consolidated their codes through discussion. Based on these consolidated codes, the first author coded the remaining interviews. The second and fifth authors then reviewed every coded segment, marking agreement or disagreement and providing feedback. All authors then collaboratively refined the codes into themes. These themes were further discussed and refined in subsequent sessions. Finally, we organized these themes, resulting in eight themes under three main game tasks. Diagrams of the theme development process are available in the Supplementary Material.

3.5 Results

After the analysis of interview transcripts, eight distinct themes were identified about the input skills of players. The following six themes were identified from the interviews in the first three parts: (1) positioning skill, (2) risk compensation skill, (3) edge-panning skill, (4) visual processing skill, (5) monitoring skill, and (6) active communication skill. Two other themes were identified in the fourth part of the interview: (7) motor execution skill and (8) chained input skill. Table 4 summarizes the meaning of each theme and representative quotes for each input skill.

Positioning Skill. Good positioning skills are crucial when moving the champion (character) in the game. If the player moves the champion directly to the intended point, enemies can easily predict the path and attack the champion. In the fast-paced environment of LoL, where enemy attacks can happen at any moment, it is essential to stay alert and be ready to dodge. That is why all five experts recommend frequently modifying the champion’s movement path by right-clicking the mouse. By keeping the mouse pointer at a point far removed from the champion’s current direction of movement, the player can move the champion more strategically and dodge enemy attacks more effectively.

Risk Compensation Skill. During discussions regarding moving the champion, experts also brought up the concept of Risk Compensation Skill. Specifically, three experts have reported a habit of clicking the mouse more than twice when designating a destination point. They explained that this was due to the potential for clicking to fail when moving the mouse at high speeds.

Edge-Panning Skill. In most battlefield games, players are given a limited local view, so it is important to continually move the local view to locations where important events are occurring or are expected to occur in the future. In LoL, moving the mouse to the edge of the screen results in the local view translating in that direction. All five experts emphasized that important scenes should be consistently centered on the screen through edge panning.

Motor Execution Skill. In LoL, players must place the pointer on a target to perform basic tasks such as attacking enemies or moving the character to the desired location. Quick and precise mouse movement to the target is crucial in these situations. Four of the five experts agreed that such mouse movements require the mouse pointer to reach the desired point at a constant speed and without unnecessary stopping.

Chained Input Skill. In the majority of battle games, players can attack appropriately depending on their chosen character or item. Even in LoL, where every champion has unique abilities, players are required to use the most practical combinations. Players must memorize optimal combos and quickly adopt skill chains in any given situation. All five experts agreed that players who are skilled at combining skills have a significant advantage in the game.

Monitoring Skill. Players should constantly monitor the game’s overall progress, including keeping track of other players’ locations, duel locations, and match scores. In LoL, players can switch to the desired screen view through various control keys. All five experts said they frequently switch the view (e.g., pressing the TAB key to check match statistics) even when busy with champion control.

Visual Processing Skill. To achieve higher multitasking performance, players must process the visual information provided in a switched view as quickly as possible. If the number of view switches reflects a player’s Monitoring Skill, then the time spent in the switched view can be defined as their Visual Processing Skill. All five experts said they perceive a switched view quickly, within one second, and then return to their main screen. One expert recommended reducing viewing time through training.

Figure 5: The larger the angle between the two vectors created by three consecutive clicks, the greater the change in the champion’s movement direction, making it difficult for the opponent to predict.

Active Communication Skill. Effective communication in esports is crucial, and players communicate with teammates through voice chat, messaging, and the smart ping in LoL. The smart ping system allows players to quickly convey messages by marking specific locations on the game map. All five experts mentioned the importance of utilizing smart ping frequently, as it provides a means to share information with teammates instantly.


4 QUANTIFYING EXPERT INPUT BEHAVIOR IN LOL: EIGHT INPUT SKILL INDICES

Through interviews with experts, we have come to understand what expert input behavior is in LoL. Based on the qualitative study results, in this section, we propose input skill indices that can quantify the level of expertise shown in a player’s input behavior.

4.1 Indices of Expert Input Behavior

In Section 3, expert players pointed out the various input skills required in LoL and explained in detail how those input skills could be implemented at the mouse and keyboard input level. For example, experts recommend keeping a certain distance from enemies when players right-click to move their champion. Given a player’s mouse input log, we can verify whether the player controlled the champion in the way the experts recommended. Thus, a total of eight input behavior indices, at least one for each input skill, are proposed. Each index is calculated per match, and we can use the calculated value to evaluate how close the player’s input behavior during the match was to the pattern recommended by experts.

In what follows, we present detailed descriptions of each of the indices. The exact algorithm by which each index is calculated from input logs is presented in the Supplementary Material.

4.1.1 Positioning Skill Index.

In Section 3, experts recommended two input behaviors required for better champion positioning (see Table 4). Both recommendations shared the requirement of frequent and large changes to the champion’s direction of movement. The champion’s movement direction is determined by the angle of the line connecting the champion’s position to the right-click point (see Figure 5). We can therefore quantify a player’s positioning skill as the average angle between the vectors generated by the player’s consecutive right-clicks during a match. The Positioning Skill Index calculated this way ranges approximately from 59° to 121° according to our observations.
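As a concrete illustration, below is a minimal sketch of how such an index could be computed from a click log, assuming the angle is taken between the displacement vectors of consecutive right-clicks; the function name and array layout are ours, and the paper's exact algorithm is given in its Supplementary Material.

```python
import numpy as np

def positioning_skill_index(right_clicks) -> float:
    """Mean turning angle (degrees) between displacement vectors of
    consecutive right-clicks; larger values mean sharper, less
    predictable direction changes."""
    clicks = np.asarray(right_clicks, dtype=float)   # shape (N, 2), temporal order
    if len(clicks) < 3:
        return float("nan")
    vecs = np.diff(clicks, axis=0)                   # displacement between clicks
    norms = np.linalg.norm(vecs, axis=1)
    angles = []
    for i in range(len(vecs) - 1):
        if norms[i] == 0 or norms[i + 1] == 0:       # skip repeated clicks on one spot
            continue
        cos = np.dot(vecs[i], vecs[i + 1]) / (norms[i] * norms[i + 1])
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return float(np.mean(angles)) if angles else float("nan")
```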

Figure 6: As the pointer speed increases, the probability of failing to click increases.

4.1.2 Risk Compensation Skill Index.

Experts said in interviews that if there is a risk of click failure (see Figure 6), a quick follow-up click must be prepared to make up for it. According to a recent study [76], click actions are planned and executed while the pointer is moving toward the target, and the probability of failing to acquire the target (i.e., the risk) is proportional to the speed of the pointer. Accordingly, the Risk Compensation Skill Index is calculated as the average pointer speed (i.e., proportional to the amount of risk) at the moments double clicks occurred. A higher value of this index means that double clicks were performed more often in higher-risk situations. According to our observations in this study, this index ranges approximately from 222 to 2554 \(\frac{px}{s}\).
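A minimal sketch of this computation is shown below, assuming double clicks are detected with a fixed inter-click time window and pointer speed comes from finite differences of the 60 Hz pointer samples; both assumptions and all names are ours, not the paper's exact algorithm.

```python
import numpy as np

def risk_compensation_skill_index(click_times, pointer_times, pointer_xy,
                                  double_click_window=0.3):
    """Average pointer speed (px/s) at the first click of each double click.
    double_click_window (s) is an assumed threshold for what counts as a
    double click."""
    click_times = np.asarray(click_times, dtype=float)     # sorted click timestamps (s)
    pointer_times = np.asarray(pointer_times, dtype=float) # ~60 Hz sample timestamps (s)
    pointer_xy = np.asarray(pointer_xy, dtype=float)       # shape (N, 2), px

    # Instantaneous pointer speed from finite differences
    dt = np.maximum(np.diff(pointer_times), 1e-6)
    speed = np.linalg.norm(np.diff(pointer_xy, axis=0), axis=1) / dt
    if speed.size == 0:
        return float("nan")

    # First clicks of double clicks: the next click arrives within the window
    gaps = np.diff(click_times)
    first_of_double = click_times[:-1][gaps <= double_click_window]

    speeds = []
    for t in first_of_double:
        i = min(np.searchsorted(pointer_times, t), len(speed) - 1)
        speeds.append(speed[i])
    return float(np.mean(speeds)) if speeds else float("nan")
```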

Figure 7: Optimal edge panning is expected to cause the region of interest to be located close to the center of the screen.

4.1.3 Edge-Panning Skill Index.

In the previous interview, we learned that the ultimate goal of edge-panning for experts is to keep the most important scene (which requires immediate action) and its surroundings consistently visible in their main view. In other words, the goal of experts is to position the current scene of interest near the center of the screen through edge-panning. This inspired us to calculate the Edge-Panning Skill Index from how far, on average, the first mouse click location immediately after an edge-panning was from the center of the screen. A higher value of this index means that the location the player had to click after edge-panning was, on average, further away from the center, which indirectly means that important scenes requiring action were not kept near the center of the screen through edge-panning (see Figure 7). According to our observations in this study, the index ranges approximately from 299 to 1277 px.
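The sketch below illustrates one way to compute such a distance, assuming edge-panning episodes have already been segmented from the pointer log and the screen center is known from the player's resolution; these assumptions and the function signature are ours.

```python
import numpy as np

def edge_panning_skill_index(pan_end_times, click_times, click_xy,
                             screen_size=(1920, 1080)):
    """Average distance (px) from the screen center to the first click made
    after each edge-panning episode (lower is better)."""
    cx, cy = screen_size[0] / 2.0, screen_size[1] / 2.0
    click_times = np.asarray(click_times, dtype=float)  # sorted click timestamps (s)
    click_xy = np.asarray(click_xy, dtype=float)        # shape (N, 2), px

    dists = []
    for t_end in pan_end_times:                         # end of each pan episode (s)
        i = np.searchsorted(click_times, t_end)         # first click after the pan
        if i >= len(click_times):
            continue
        dists.append(np.hypot(click_xy[i, 0] - cx, click_xy[i, 1] - cy))
    return float(np.mean(dists)) if dists else float("nan")
```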

Figure 8: The LZ complexity of button input sequences allows us to quantify how diversified combat strategies players have implemented.

4.1.4 Motor Execution Skill Index.

In the previous interview, four out of five experts agreed that it is important to quickly move the pointer to the desired point without corrective movements. According to previous studies [59, 60, 68, 77, 90], if there are more corrective movements, the number of submovements parsed from the pointer trajectory increases. Motor Execution Skill Index is calculated by counting how many submovements were present in a player’s pointer trajectory between two consecutive right-clicks (see Figure 10), on average, during the match.

Submovement parsing followed the methods of previous studies [59, 60], and detailed algorithms are in the Supplementary Material. According to our observations in this study, the index ranges approximately from 0.9 to 2.5 counts.
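For illustration, a minimal sketch of a submovement counter is shown below; it smooths the speed profile, treats local minima as submovement boundaries, and counts only the submovements after the largest speed peak, in the spirit of Figure 10. The smoothing window and the exact boundary rule are assumptions; the paper follows the parsers of [59, 60] as detailed in its Supplementary Material.

```python
import numpy as np
from scipy.signal import argrelextrema, savgol_filter

def count_corrective_submovements(times, xy):
    """Count corrective submovements in one pointer trajectory segment
    (between two consecutive right-clicks): smooth the speed profile,
    treat local minima as submovement boundaries, and count only the
    boundaries after the largest speed peak (the primary ballistic
    movement is excluded, cf. Figure 10)."""
    times = np.asarray(times, dtype=float)
    xy = np.asarray(xy, dtype=float)
    if len(xy) < 7:                                  # too short to parse reliably
        return 0
    dt = np.maximum(np.diff(times), 1e-6)
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
    speed = savgol_filter(speed, window_length=5, polyorder=2)

    peak = int(np.argmax(speed))                     # main ballistic peak
    minima = argrelextrema(speed, np.less)[0]        # submovement boundaries
    return int(np.sum(minima > peak))                # corrective submovements only
```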

4.1.5 Chained Input Skill Index.

In Section 3, the interviewees all agreed that expert players would know and be familiar with more diverse combos (i.e., button input sequences) than regular players. If the combos implemented by a player in a match were more varied, it would be difficult to compress the button input sequence into a shorter sequence without loss of information. Algorithms [63] allow us to quantify how incompressible a sequence is, usually called the complexity of the sequence. Chained Input Skill Index is calculated as the complexity of a player’s button input sequence (including mouse clicks) in a match (see Figure 8). A more detailed algorithm for the index is in the Supplementary Material. The index ranges approximately from 9 to 21 (unit: number of sublists).
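The following is a minimal sketch of an LZ76-style phrase-counting complexity, assuming each keyboard or mouse event is encoded as one symbol; the exact algorithm [63] used in the paper is described in its Supplementary Material.

```python
def lz_complexity(sequence):
    """Number of phrases in a greedy left-to-right LZ76-style parse of a
    symbol sequence (e.g., ['Q', 'RCLICK', 'W', ...]); higher values mean
    the button input sequence is less compressible, i.e., more varied."""
    s = list(sequence)
    n = len(s)
    phrases, i = 0, 0
    while i < n:
        length = 1
        # Grow the current phrase while it still occurs somewhere in the
        # material seen before its last symbol.
        while i + length <= n and _occurs(s[i:i + length], s[:i + length - 1]):
            length += 1
        phrases += 1
        i += length
    return phrases

def _occurs(pattern, text):
    """True if `pattern` appears as a contiguous run inside `text`."""
    m = len(pattern)
    return any(text[j:j + m] == pattern for j in range(len(text) - m + 1))
```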

Figure 9: User interface of League of Legends Logger, a special data acquisition software developed for this study

Figure 10: Motor Execution Skill Index is calculated by counting the number of corrective submovements between clicks. Submovements preceding the highest maxima (here 1) are excluded following the recommendations in previous studies.

4.1.6 Monitoring Skill Index.

Experts recommended frequent view-switching for situational awareness of teammates and enemies in LoL (see Figure 12). In particular, they said that situation monitoring must be performed continuously even when busy with champion control. Correspondingly, the Monitoring Skill Index first counts the number of view-switches in a match. The value is then multiplied by the average mouse pointer speed (i.e., the amount of activity or busyness) at the times of view-switching to take into account the player’s multi-tasking capability (see Figure 11). Because matches can be of different lengths, the final index is normalized by the match length. According to our observations in this study, this index ranges approximately from 0.03 to 12.55 \(\frac{px\cdot counts}{sec^2}\).
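A minimal sketch is given below under the assumption that the index is (number of view-switches) x (mean pointer speed at the switch moments) / match duration; the exact normalization is specified in the paper's Supplementary Material, and the names here are ours.

```python
import numpy as np

def monitoring_skill_index(switch_times, pointer_times, pointer_xy,
                           match_duration_s):
    """(Number of view-switches) x (mean pointer speed at the switch
    moments, px/s) / match duration (s)."""
    switch_times = np.asarray(switch_times, dtype=float)
    pointer_times = np.asarray(pointer_times, dtype=float)
    pointer_xy = np.asarray(pointer_xy, dtype=float)

    dt = np.maximum(np.diff(pointer_times), 1e-6)
    speed = np.linalg.norm(np.diff(pointer_xy, axis=0), axis=1) / dt
    if speed.size == 0 or switch_times.size == 0 or match_duration_s <= 0:
        return float("nan")

    # Pointer speed sample closest to each view-switch moment
    idx = np.clip(np.searchsorted(pointer_times, switch_times), 0, len(speed) - 1)
    return switch_times.size * float(np.mean(speed[idx])) / match_duration_s
```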

4.1.7 Visual Processing Skill Index.

According to the experts in the interview, players should visually process the necessary information in the switched view as quickly as possible and then return to the main view to control their champion (see Figure 12). In line with this, the Visual Processing Skill Index is calculated as the average duration a player spends in a switched view in a match. This index ranges approximately from 91 to 6276 ms.

4.1.8 Active Communication Skill Index.

We devised the Active Communication Skill Index referring to experts’ emphasis on a sufficient amount of communication with team members. The index is calculated by dividing the number of smart pings by the length of the match. The index typically ranges from 0 to 7.2 \(\frac{counts}{min}\).
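The last two indices are simple per-match aggregates; the sketch below computes both, assuming view-switch and return timestamps are already paired and smart pings are available as a per-match count (names and pairing logic are ours).

```python
import numpy as np

def visual_processing_skill_index(switch_in_times, return_times):
    """Average dwell time (ms) in a switched view before returning to the
    main view (lower is better)."""
    dwell_ms = [(t_out - t_in) * 1000.0
                for t_in, t_out in zip(switch_in_times, return_times)
                if t_out > t_in]
    return float(np.mean(dwell_ms)) if dwell_ms else float("nan")

def active_communication_skill_index(n_smart_pings, match_duration_min):
    """Smart pings per minute of play (higher is better)."""
    return n_smart_pings / match_duration_min if match_duration_min > 0 else float("nan")
```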

4.2 Data Acquisition and Analysis

Previously, we presented eight input skill indices intended to reveal a player’s expertise in LoL. Another representative index of a player’s expertise is their rank in the Solo Ranked mode. In this section, we look at the effect of player rank on the input skill indices. Data were collected from a large number of players in the wild during their gameplay at home. In this process, not only the players’ keyboard and mouse input signals but also various match result statistics (e.g., win/loss, kill-death ratio, etc.) were collected through the data acquisition software we developed for this study.

4.2.1 Data Acquisition Software.

Figure 9 shows the graphical user interface (GUI) of the data-acquisition software. Using the interface, players can enter their age, gender, summoner name (i.e., unique ID), and sensitivity (i.e., dots per inch, DPI) of the mouse they are using. After entering all information, players can press the red button at the bottom to indicate that they are ready to participate in data collection. From then on, players are free to play LoL as they normally would, and all data is unobtrusively collected in the background. Data collection is automatically paused when the LoL client is not in the foremost window.

More specifically, the following information is collected for each match with timestamps: (1) keyboard input signal (60 Hz sampling rate), (2) mouse raw input signal dx and dy (60 Hz), (3) pointer position on the screen x and y (60 Hz), (4) mouse click events, (5) a screenshot of the game screen 3 minutes after the start of the game, (6) the player’s personal information (age, gender, and summoner name), (7) peripheral device information (mouse DPI, monitor resolution, etc.), and (8) match results. Match results were obtained through LoL’s official data acquisition API. Details on the implementation of the software and the data content and format are presented in the Supplementary Material.
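To make the per-match record concrete, the following is a hypothetical container mirroring the fields listed above; the field names and types are illustrative and are not the released dataset's actual schema, which is documented in the Supplementary Material.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class MatchLog:
    """Hypothetical per-match record mirroring the collected fields."""
    summoner_name: str
    age: int
    gender: str
    mouse_dpi: int
    monitor_resolution: Tuple[int, int]
    key_events: List[Tuple[float, str]] = field(default_factory=list)      # (t, key), ~60 Hz
    raw_mouse: List[Tuple[float, int, int]] = field(default_factory=list)  # (t, dx, dy), ~60 Hz
    pointer_pos: List[Tuple[float, float, float]] = field(default_factory=list)  # (t, x, y), ~60 Hz
    click_events: List[Tuple[float, str]] = field(default_factory=list)    # (t, 'left'/'right')
    screenshot_path: str = ""                         # capture taken 3 minutes into the match
    match_result: Dict = field(default_factory=dict)  # fetched via the official API
```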

Note: the official anti-cheat team of Riot Games, the developer of LoL, confirmed to us that our data acquisition process has no impact on players’ gaming experience and therefore does not penalize players for using our software.

Figure 11: Monitoring Skill Index is calculated as the number of view-switchings relative to the activity amount.

Figure 12: Players can switch views with various commands from the main screen. The Visual Processing Skill is calculated from the time difference between consecutive view-switchings.

4.2.2 Participants.

A total of 266 players were recruited from local universities, online communities, esports academies, and professional LoL teams (μ =23.1 years, σ =4.6). Of those, 18 were professional LoL players. As a reward for providing data for a total of 20 matches, participants received between 10 and 75 USD, depending on the burden of participation. Participants from local universities and online communities received 20 USD, those from local esports academies received 40 USD, and those from professional teams received 75 USD.

4.2.3 Procedure.

Participants first read the detailed description of the study and filled out the consent form. Next, the participants responded to a questionnaire about demographic information and game experience. When all responses were completed, a link was sent to the participants to download the data acquisition software. The link also provided a tutorial video for participants on how to use the software. Participants were asked to play at least 20 matches in Solo Ranked mode (i.e., most competitive and skill-critical mode) freely, whenever they wanted, with the software on. Participants could stop participating at any time and were still compensated even if 20 matches were not played. All of these processes were conducted online and were approved by the IRB of the university.

The rank of each participant in the Solo Ranked mode was also obtained separately by crawling a website (http://ifi.gg/summoner/). LoL uses a season system in which ranks are reset or demoted at the start of a new season, so we collected the highest rank each player had reached across all previous seasons.

4.2.4 Dataset.

Due to the instability of the LoL server, errors in the data transmission process, and player mistakes, not all matches were valid. First, 593 matches of too short a duration (less than 5 minutes) were excluded. Second, 143 matches with a large difference between the duration of the match data and the duration of the input signal data (a difference of more than 3 minutes) were also excluded. Third, 983 matches for which no match data or input behavior data were collected for unknown reasons were also excluded. Fourth, 54 matches played in modes other than Solo Ranked mode were excluded. Finally, 164 matches from participants who played too few matches (fewer than 10, compared to our target of 20) were dropped. As a result, data from 4,835 matches played by a total of 193 participants remained in the dataset (71.40%).
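For illustration, the exclusion steps could be expressed over a hypothetical per-match table as follows; the column names are ours, and the real pipeline operates on the raw logs described in Section 4.2.1.

```python
import pandas as pd

def filter_matches(matches: pd.DataFrame) -> pd.DataFrame:
    """Apply the exclusion steps of Section 4.2.4 to a hypothetical
    per-match table with columns: player_id, game_mode, duration_min,
    input_duration_min, has_match_data, has_input_data."""
    m = matches.copy()
    m = m[m["duration_min"] >= 5]                                    # too-short matches
    m = m[(m["duration_min"] - m["input_duration_min"]).abs() <= 3]  # desynced logs
    m = m[m["has_match_data"] & m["has_input_data"]]                 # missing data
    m = m[m["game_mode"] == "SOLO_RANKED"]                           # other game modes
    counts = m.groupby("player_id")["player_id"].transform("size")
    return m[counts >= 10]                          # players with too few valid matches
```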

The average match duration for all participants was 26.77 minutes (σ =6.62). The number of participants for each rank is as follows: Challenger (7), Grandmaster (7), Master (12), Diamond 1 (3), Diamond 2 (4), Diamond 3 (3), Diamond 4 (14), Platinum 1 (6), Platinum 2 (6), Platinum 3 (8), Platinum 4 (32), Gold 1 (5), Gold 2 (12), Gold 3 (20), Gold 4 (27), Silver 1 (6), Silver 2 (5), Silver 3 (2), Silver 4 (9), Bronze 2 (2), Bronze 3 (1), Bronze 4 (2). A total of 155 champions were played at least once, and there was no notable difference in champion selection distribution, role selection distribution, and DPI selection distribution between groups (Supplementary Material).

Prior to the analysis of input skill indices, the dataset was subjected to two additional post-processing steps. First, the game screen settings unique to each player (screen resolution, window aspect ratio, and minimap position and size) were extracted through simple computer vision techniques. This is essential to unify the mouse input coordinate system among players. Second, keyboard input data recorded during in-game chat was deleted for all players to prevent privacy issues.

4.2.5 Statistical Analysis.

With the final dataset, we calculated the input skill indices presented in Section 4.1 for each match and for each participant. Then, to conduct a statistical significance test, we allocated the participants into the following four groups: Level 1 (Silver 1 to Bronze 4, N=27), Level 2 (Gold 1 to Gold 4, N=64), Level 3 (Diamond 3 to Platinum 4, N=69), and Level 4 (Challenger to Diamond 2, N=33). The criteria for classification were determined to allocate a similar and sufficient number of participants to each group. For the significance testing, ANOVA was applied at an α level of 0.05. Bonferroni correction was applied to post-hoc analyses.
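A sketch of this analysis on per-participant index means is shown below; the paper does not state the specific post-hoc test, so Bonferroni-corrected pairwise Welch t-tests are assumed here purely for illustration.

```python
from itertools import combinations

import numpy as np
from scipy import stats

def anova_with_bonferroni(groups: dict):
    """One-way ANOVA over per-participant index means, followed by
    Bonferroni-corrected pairwise comparisons (Welch t-tests assumed).
    groups: e.g. {"Level 1": values, ..., "Level 4": values}, one value
    per participant."""
    names = list(groups)
    samples = [np.asarray(groups[n], dtype=float) for n in names]
    f_stat, p_value = stats.f_oneway(*samples)           # omnibus test

    pairs = list(combinations(range(len(names)), 2))
    posthoc = {}
    for i, j in pairs:
        _, p = stats.ttest_ind(samples[i], samples[j], equal_var=False)
        posthoc[(names[i], names[j])] = min(p * len(pairs), 1.0)  # Bonferroni correction
    return f_stat, p_value, posthoc
```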

Table 5 summarizes the mean and standard deviation of each index for each participant group, the statistical significance of the differences between groups, and the effect size. In the table, for all eight input skill indices, we confirmed that a significant difference existed between groups. In particular, from the post-hoc analysis, we could confirm that the highest-ranked players (Level 4) had behavioral characteristics that were distinctly different from those of other ranks. The trend can be seen better in Figure 13.

According to our classification criteria, the Level 4 group is a mixture of professional players and top-level amateur players. To further confirm the significance of the proposed input skill indices and provide deeper insights, we separately analyzed the indices for a group containing only professional players (see the second column of Table 5). Interestingly, going from Level 4 to the Professional group, almost all input skill indices seem to further improve.


5 TRAINING BASED ON INPUT SKILL ANALYSIS: A LONGITUDINAL STUDY

The ultimate goal of analyzing players’ input behavior in LoL is to provide players with actionable insights in training. Toward this goal, we finally conducted a three-week longitudinal study of how players’ behavior and performance changed when they were provided with training aids centered on input skill analysis.

Input Skill Index | Professional Players | Level 4 | Level 3 | Level 2 | Level 1 | ANOVA Results | Post-hoc Analysis
Positioning Skill (°) (Higher is better) | μ = 91.636, σ = 6.985 | μ = 92.374, σ = 6.885 | μ = 86.120, σ = 6.735 | μ = 85.279, σ = 7.239 | μ = 83.128, σ = 8.876 | F(3,189) = 9.820, p < 0.001, \(\eta_p^2\) = 0.135 | 4 & 3 (p < 0.001), 4 & 2 (p < 0.001), 4 & 1 (p < 0.001)
Risk Compensation Skill (\(\frac{px}{s}\)) (Higher is better) | μ = 1152.008, σ = 302.852 | μ = 1113.057, σ = 281.888 | μ = 1120.217, σ = 310.914 | μ = 1093.084, σ = 336.188 | μ = 917.694, σ = 305.785 | F(3,189) = 2.934, p = 0.035, \(\eta_p^2\) = 0.044 | 3 & 1 (p = 0.030)
Edge-panning Skill (px) (Lower is better) | μ = 468.878, σ = 54.314 | μ = 482.225, σ = 54.483 | μ = 505.134, σ = 65.603 | μ = 533.087, σ = 78.003 | μ = 527.591, σ = 75.420 | F(3,189) = 4.595, p = 0.004, \(\eta_p^2\) = 0.068 | 4 & 2 (p = 0.005)
Motor Execution Skill (counts) (Lower is better) | μ = 1.121, σ = 0.071 | μ = 1.156, σ = 0.087 | μ = 1.156, σ = 0.085 | μ = 1.208, σ = 0.101 | μ = 1.247, σ = 0.135 | F(3,189) = 7.599, p < 0.001, \(\eta_p^2\) = 0.108 | 4 & 1 (p = 0.003), 3 & 2 (p = 0.017), 3 & 1 (p < 0.001)
Chained Input Skill (counts) (Higher is better) | μ = 14.959, σ = 1.037 | μ = 14.576, σ = 1.299 | μ = 13.663, σ = 1.413 | μ = 13.443, σ = 1.257 | μ = 12.867, σ = 1.680 | F(3,189) = 8.259, p < 0.001, \(\eta_p^2\) = 0.116 | 4 & 3 (p = 0.013), 4 & 2 (p = 0.001), 4 & 1 (p < 0.001)
Monitoring Skill (\(\frac{px\cdot counts}{sec^2}\)) (Higher is better) | μ = 0.435, σ = 0.167 | μ = 0.359, σ = 0.174 | μ = 0.233, σ = 0.155 | μ = 0.176, σ = 0.113 | μ = 0.132, σ = 0.103 | F(3,189) = 16.877, p < 0.001, \(\eta_p^2\) = 0.211 | 4 & 3 (p < 0.001), 4 & 2 (p < 0.001), 4 & 1 (p < 0.001), 3 & 1 (p = 0.009)
Visual Processing Skill (ms) (Lower is better) | μ = 520.020, σ = 202.436 | μ = 620.852, σ = 276.833 | μ = 609.951, σ = 289.251 | μ = 730.392, σ = 427.985 | μ = 1041.140, σ = 819.598 | F(3,189) = 6.681, p < 0.001, \(\eta_p^2\) = 0.096 | 4 & 1 (p = 0.002), 3 & 1 (p < 0.001), 2 & 1 (p = 0.015)
Active Communication Skill (\(\frac{counts}{min}\)) (Higher is better) | μ = 3.165, σ = 1.130 | μ = 2.708, σ = 1.256 | μ = 1.797, σ = 0.831 | μ = 1.635, σ = 0.503 | μ = 1.456, σ = 0.841 | F(3,189) = 15.133, p < 0.001, \(\eta_p^2\) = 0.194 | 4 & 3 (p < 0.001), 4 & 2 (p < 0.001), 4 & 1 (p < 0.001)

Table 5: Mean and standard deviation of each input skill index for each participant group, the statistical significance of the differences between groups, and post-hoc analysis with Bonferroni correction applied

Figure 13: Mean of each input skill index for each participant group with standard error; indices where higher (or lower) is better are indicated by an upward (or a downward) arrow.

5.1 Method

Figure 14: Overview of the training schedule

5.1.1 Participants and Coaches.

Ten students from a private esports academy participated in this study (μ =16.1 years, σ =1.1). According to the preliminary survey, participants play LoL for an average of 11 hours per week (σ =4.2), and four of them are aiming for a professional debut. Participants started playing LoL on average 3.7 years ago (σ =1.3). The participants’ highest ranks, in descending order, are as follows: Grandmaster (1), Master (5), Diamond (3), Platinum (1). Prior to participating in this study, they had been enrolled in the academy for an average of 10.9 months (σ =3.5). Three coaches from the academy also participated in this study (μ =29.3 years, σ =1.8). They have been coaching for an average of 3.5 years (σ = 1.6). Two of the three had professional player experience, and the other coach had professional coaching experience.

5.1.2 Design and Procedure.

The study followed the format of a typical coaching-based training program. One coach was assigned to every three to four participants. The study lasted three weeks; during the weekdays, participants trained at home through free gameplay, taking into account the comments received from the coaches in the previous week. On weekends, participants visited the academy to review their progress with the coaches, assess the achievement of the previous week's goals, and set revised goals for the upcoming week. Additionally, participants responded to an online questionnaire about their training experiences over the past week. Prior to the first week of training, a preliminary survey was conducted to collect basic information from the participants. Figure 14 shows the detailed training schedule as a diagram. Note that the first week of coaching was based on participants' baseline behavior, and the last week of coaching was completed without additional intervention. One of the authors attended the weekend training sessions as an observer.

All coaching was centered on analyzing participants' input behavior. We provided two software applications for input behavior analysis and visualization. The first application collects participants' input logs and is the same as the one used in the quantitative study in Section 4.2. The second application, implemented specifically for this study, analyzes the collected input logs, calculates the input skill indices presented in Section 4, and visualizes them in a form that can be used for coaching (see Figure 15). We collected all the raw data generated during the training period.

Participants received a thorough explanation of the training content before the training began and filled out a consent form. For minor participants, parental consent was also obtained through an accredited system provided by the university's IRB, which approved this study protocol.

5.1.3 Training Aid Software.

The data acquisition software described in Section 4.2 was installed on participants’ home computers, and the training aid software was installed on computers at the academy where weekend coaching took place. We collaborated with the head coach a month before the start of classes to develop the training aid software. In particular, we designed the software UI by referring to guidelines from previous studies so that users can understand the context of the visualization [50, 113] (e.g., selected champion, in-game role), compare themselves to various player groups [50, 108], and check performance changes over time [108]. Figure 15 shows the final UI of the software.

The software is used as follows. First, users select the past matches they wish to analyze. Second, users select one of the four levels and one of the five in-game roles to set as a comparison group. The software then completes all analyses in the background and visualizes the results as graphs on the main panel of the UI. The graphs provide the following information: (1) change in average input skill indices across matches (line graph), (2) difference in average input skill indices between the comparison group and the user (radar chart), (3) changes in input skill indices over game time for each match (line graph), (4) change in average in-game performance statistics5 across matches (line graph), (5) difference in average in-game performance statistics between the comparison group and the user (radar chart), (6) changes in in-game performance statistics over game time for each match (line graph), and (7) correlation between a selected input skill index and a selected in-game performance statistic (scatter plot). The visualization of the comparison group is based on the data collected in the study in Section 4. For more details on the implementation, see the Supplementary Material.
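To make the comparison-group visualization concrete, the sketch below shows how the radar-chart data in item (2) could be assembled from per-match index tables. The column names and function are illustrative assumptions; they are not the API of the released training aid software.

```python
# Illustrative sketch of preparing the radar-chart data (item 2 above): the
# user's mean input skill indices for the selected matches versus the mean of
# the chosen comparison group. Column names are assumptions, not the tool's API.
import pandas as pd

SKILL_COLUMNS = [
    "positioning", "risk_compensation", "edge_panning", "motor_execution",
    "chained_input", "monitoring", "visual_processing", "active_communication",
]


def radar_chart_data(user_matches: pd.DataFrame,
                     group_matches: pd.DataFrame) -> pd.DataFrame:
    """One row per skill index: user mean, comparison-group mean, and the gap."""
    user_mean = user_matches[SKILL_COLUMNS].mean()
    group_mean = group_matches[SKILL_COLUMNS].mean()
    return pd.DataFrame({
        "user": user_mean,
        "comparison_group": group_mean,
        "difference": user_mean - group_mean,
    })
```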


Figure 15: Main UI of the training aid software

5.1.4 Training Experience Questionnaire.

The questionnaire to evaluate the participants' training experience was structured according to the guidelines in the book by Kirkpatrick and Kirkpatrick [47]. According to these guidelines, evaluating the effectiveness of a training program should focus on four levels: (1) the subjective satisfaction of training participants (Reaction), (2) the extent to which participants learned from training (Learning), (3) changes in participants' behavior (Behavior), and (4) the extent to which training goals were met, which in our case means improvements in participants' in-game performance (Results). The latter two can be checked from game and behavior logs, so the questionnaire focuses on evaluating the first two. The items of the questionnaire for each week are as follows:

Week 1– first intervention based on baseline analysis:

Q1. How easy was it to understand the meaning and analysis results of each index?

Q2. How much have you learned from the analysis of each index?

Week 2– reflection on weekly training effects:

Q1. How much effort have you put into improving each skill index over the past week?

Q2. Do the changes in each skill index match your expectations?

Q3. Has training based on the analysis of each skill index helped you improve?

Q4. How much effort will you put into improving each skill index over the next week?

Week 3– reflection on weekly training effects and summary of overall training results:

Q1. How easy do you think it is to improve each skill index relative to the effort put in?

Q2. Do the changes in each skill index match your expectations?

Q3. Has training based on the analysis of each skill index helped you improve?

Q4. Would you recommend this program to other players?

Q5. What is your overall satisfaction with the program?

Participants were asked to provide answers to each question on a 5-point scale for each input skill index. Furthermore, each week’s questionnaires concluded with an open-ended question, asking them to provide feedback on using the training program, including positive experiences, difficulties, and suggestions for improvements.

5.2 Results

5.2.1 Coaching Pattern.

Coaching during weekend training was aimed at identifying and reducing the differences between the participants and the comparison group on the input skill indices. The comparison group was set as the group to which the participating student currently belonged. When coaching took place, there were cases where a student's skill level was already higher than the group average for some input skill indices. In such cases, coaches either did not provide separate coaching for those indices or, if the value was excessively large, asked the student to reduce unnecessary input.

Note: Near the end of the three-week training, we discovered an error in the visualization of the Active Communication Skill Index in our training aid software. Therefore, this index was excluded from subsequent analyses.

5.2.2 Effect of Training on Input Skill Indices.

We conducted an evaluation to ascertain whether the training had a tangible impact on the trainees’ input skill indices. In particular, we examined the effect of the trainee-group difference on the index change, considering that the coaching was aimed at reducing the difference between the trainee and the comparison group.

First, the difference in index values between the trainee and the comparison group for each week was calculated (Δx). Second, for each week, the amount of change in the trainee's input skill indices compared to the previous week was calculated (Δy). Since there were seven input skill indices (excluding the Active Communication Skill Index) and two coaching interventions for each of the ten trainees (7 × 2 × 10), a total of 140 (Δx, Δy) pairs were obtained. Figure 17 shows the scatterplot of all (Δx, Δy) points, with the following linear regression line: Δy = 0.066Δx − 0.013 (R² = 0.026, p = 0.056). Note that in all of these calculations, each input skill index was normalized to a range of 0 to 1 based on the maximum and minimum values observed in the Section 4 data.
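For reference, the sketch below reproduces this analysis pipeline under the stated 0-to-1 normalization; the data layout and the sign convention of Δx are assumptions, and only the reported fit (Δy = 0.066Δx − 0.013) comes from the study.

```python
# Sketch of the Figure 17 analysis: regress the week-over-week change in a
# normalized index (dy) on the trainee-vs-group gap in the previous week (dx).
# The reported fit was dy = 0.066*dx - 0.013 (R^2 = 0.026, p = 0.056).
import numpy as np
from scipy import stats


def normalize(x, x_min, x_max):
    """Scale an index to [0, 1] using the extremes observed in the Section 4 data."""
    return (x - x_min) / (x_max - x_min)


def training_effect(pairs):
    """pairs: iterable of (dx, dy) tuples computed from normalized indices (140 in the study)."""
    dx, dy = np.asarray(list(pairs), dtype=float).T
    fit = stats.linregress(dx, dy)
    return fit.slope, fit.intercept, fit.rvalue ** 2, fit.pvalue
```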

5.2.3 Effect of Training on In-game Performance Statistics.

Figure 16 shows how the in-game performance statistics observed from the trainees changed as training progressed. When running a repeated-measures ANOVA with Training Week as the independent variable, the effect was not significant for any performance statistic (all p ≥ 0.377). The largest effect size was observed for the Gold Difference statistic: F = 1.011, p = 0.377, η_p² = 0.070.
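A sketch of this per-statistic test, assuming a long-format table with hypothetical columns `trainee_id` and `week`, is shown below; it implements a repeated-measures ANOVA as described, though the study's exact tooling is not specified here.

```python
# Sketch of the Section 5.2.3 test: repeated-measures ANOVA with Training Week
# as the within-subject factor, run separately for each in-game statistic.
# Column names are assumptions, not the study's actual data schema.
import pandas as pd
from statsmodels.stats.anova import AnovaRM


def week_effect(df: pd.DataFrame, statistic: str) -> pd.DataFrame:
    """df: one row per (trainee, week) holding the averaged statistic of interest."""
    result = AnovaRM(data=df, depvar=statistic,
                     subject="trainee_id", within=["week"]).fit()
    return result.anova_table  # F value, degrees of freedom, and p-value
```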

5.2.4 Training Experience.

Table 6 summarizes the median and interquartile range (IQR) of trainees' responses to each question for each week. Particularly low response values were observed for Q1 in Week 3 (i.e., how easy was it to improve each input skill?). The overall satisfaction with the training program was high (Q4 and Q5 in Week 3).
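The Table 6 summary itself is straightforward to reproduce; a minimal sketch, assuming the raw 5-point responses for one question are available as a simple list, is shown below.

```python
# Minimal sketch of the Table 6 summary: median and interquartile range (IQR)
# of the 5-point responses to one question. The input format is an assumption.
import numpy as np
from scipy import stats


def median_iqr(responses):
    """responses: 5-point Likert answers from all trainees for one question."""
    responses = np.asarray(responses, dtype=float)
    return float(np.median(responses)), float(stats.iqr(responses))
```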


Figure 16: Effect of Training Week on each in-game performance statistic; Statistics where higher (or lower) is better are indicated by an upward (or a downward) arrow.

In response to the open-ended question, seven students mentioned that it was good to see the input skill indices at a glance and check which skills they were lacking. One coach stated, "It was useful to be able to see specifically how much the things I asked students to improve actually improved." Another coach mentioned, "I could see students getting curious about the software and trying it out on their own to learn more." However, there were also suggestions for improvements, such as simplifying the visualizations and reducing the program's start-up time.


6 DISCUSSION AND IMPLICATIONS

In a series of three studies, we demonstrated a holistic pipeline for analyzing raw input behavioral data in esports. In this section, we critically discuss the results obtained from the three studies and present implications for esports players, coaches, and researchers.

6.1 Implications for Input Behavior Research

In previous studies, a player's input skill has been referred to by various terms, such as mechanical skill [30, 33], tactical skill [12], micro play skill [20, 35, 38, 89], and task or action performance [92]. Those studies tend to consider a player's input skills as a concept independent of the player's higher-level awareness of the game context or game system (i.e., game sense [33], metagame [30, 38, 51, 78], macro [19, 58, 112, 114]). For example, quick reaction and effective mouse operation (i.e., precise aiming), which are often mentioned as mechanical skills in previous studies [92], can be measured to some extent outside the game context in a controlled lab environment, and many previous studies have indeed attempted to operationalize the mechanical skills of esports players separately [9, 27, 56, 65, 67, 97, 103, 105]. Applications that evaluate a player's mechanical skills on a platform separate from the game, such as AimLab [88] for FPS games and Mobalytics Proving Ground for LoL [81], are also widely used among players.

This study goes beyond the binary view of expertise in esports taken by existing studies (i.e., mechanical vs. mental skill) and focuses on the fact that no task required in esports, at any level, can be successfully completed without appropriate input behavior; a player's input behavior is therefore treated in this study as a window into the player's expertise in the various tasks given within the game. From a similar perspective, Larsen's 2022 study [57] presented the concept of "ability to execute" as one of the core components of esports skill, going beyond simple mastery of motor skills. Meanwhile, such input skills are not a goal in themselves but are naturally acquired in the process of trying to complete a specific task well; as a result, an expert may never have reflected on them deeply enough to explain them to someone else. However, in our study, experts described the patterns of input behavior required for each task, and their rationale, quite consistently and without difficulty, indirectly suggesting that input skill is a robust and valid construct.

We believe that similar studies could be conducted on other genres of esports games. In particular, we expect that the Motor Execution, Risk Compensation, and Chained Input skill indices can be calculated using the same algorithms presented in this study in any esports game played with a mouse and keyboard. Algorithms that calculate the Monitoring or Visual Processing skill indices need to be modified to take into account which button inputs in the game cause rapid view switching. For example, FPS games such as Overwatch, similar to LoL, display a scoreboard where players can check the overall game situation while pressing the TAB key, so the algorithm proposed in this study only needs to be slightly modified. The Positioning and Edge-Panning skill indices are the most dependent on game genre and may have to be newly devised or discarded unless the game requires controlling a local view from a top-down perspective, as in MOBA or RTS games. Depending on the game genre, when calculating the Active Communication Skill Index, it may also be worth using AI to analyze how players communicate with team members through voice or chat [73, 82, 91, 100].
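As an illustration of the kind of adaptation this would require, the sketch below extracts view-switch dwell times from a key event log, under the assumption that a Visual Processing-style index summarizes how long a player stays in a rapidly switched view (e.g., the TAB scoreboard); the event format and function are hypothetical, not the paper's released algorithm.

```python
# Illustrative sketch (an assumption, not the released algorithm) of measuring
# how long a player dwells in a rapidly switched view, e.g., the TAB scoreboard
# in an FPS. Each event is (timestamp_ms, key, is_down); names are hypothetical.
def view_dwell_times_ms(events, view_key: str = "TAB"):
    """Return the duration of each view switch triggered by holding `view_key`."""
    dwell, opened_at = [], None
    for t_ms, key, is_down in events:
        if key != view_key:
            continue
        if is_down and opened_at is None:            # view opened
            opened_at = t_ms
        elif not is_down and opened_at is not None:  # view closed
            dwell.append(t_ms - opened_at)
            opened_at = None
    return dwell
```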

Beyond player skill evaluation, the indices presented in our study are expected to provide meaningful insights into more general topics in game research, such as dynamic difficulty adjustment [14, 15, 62], fraud user detection [7, 22, 41, 118], game balancing [39, 61, 99], and game-based education [21, 46].

6.2 Significance of Input Skill Analysis

Showing a player's input behavior can help spectators engage [20] and coaches evaluate [89]. However, the high spatio-temporal resolution of raw input behavior makes effective visualization and intuitive interpretation difficult. For that reason, it is essential to provide spectators and coaches with statistics that summarize and characterize raw input behavioral data, but the statistics available today are very limited, for example, actions-per-minute (APM)6. The indices derived from this study quantify more diverse aspects of input skills than existing statistics and can also be computed cheaply over a relatively short time window (every three minutes in our training study). Because all of our analysis code is released as open source, we expect our indices to be widely used by streamers, coaches, and championship organizers in the esports industry.
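To illustrate how lightweight such windowed statistics are, the sketch below computes plain APM (footnote 6) over consecutive three-minute windows from a list of button-press timestamps; the log format is an assumption, and the released analysis code should be consulted for the actual index computations.

```python
# Sketch of a windowed statistic: actions-per-minute (APM) over consecutive
# three-minute windows, given button-press timestamps in seconds from match
# start. The log format is an assumption; see the released code for the indices.
import numpy as np


def apm_per_window(press_times_s, window_s: float = 180.0):
    """APM within each consecutive window of `window_s` seconds (non-empty log)."""
    times = np.asarray(press_times_s, dtype=float)
    n_windows = int(np.ceil(times.max() / window_s))
    counts, _ = np.histogram(times, bins=n_windows,
                             range=(0.0, n_windows * window_s))
    return counts / (window_s / 60.0)
```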

Ultimately, in this study, we expected that input skill indices would be able to provide actionable lessons for players on how to improve their input behavior. However, in the three-week longitudinal training study, we were able to observe only a marginal effect (p = 0.056) of training on players' input skills. This result was unexpected because, in the surveys conducted during the training, participants said that the meaning of each index was easy to understand and that the analysis gave them new insights. However, we note that participants also reported that improving their input skill indices was generally difficult (Q1 in Week 3, Table 6). This does not appear to be a complaint about the accuracy of the indices, because participants also reported that the changes in the input skill values aligned well with their expectations.

One potential reason why the training effect was marginal may be that most of our indices are indirect proxies of the player's expertise in some high-level task (e.g., situational awareness). That is, our indices are not easily changed unless expertise in the high-level tasks they are associated with actually improves, which is more difficult for our participants, who are already highly ranked [103]. Of course, participants can ignore the deeper meaning of the indices and simply try to change their surface values. For example, players can easily improve their Visual Processing Skill Index by repeatedly pressing the TAB key without purpose. However, since this would inevitably result in a decrease in overall game performance, it would have been a meaningless option for the high-ranked participants in our study. Had our training study been conducted with lower-ranked players, different results might have been obtained, which is a topic for future research. Additionally, considering the importance of personalized adaptive training in esports [72], the fact that participants were only able to check their input skills on weekends, rather than during the weekdays when the actual training took place, may also have contributed to the marginal training effect.


Figure 17: Effect of training on input skill indices

In conclusion, even though high-ranked participants were targeted and feedback was not provided frequently, we believe that the observation of a marginal training effect is a meaningful result that can promote future training studies based on input skill analysis. In particular, as can be seen from the survey results and coaches’ comments, the participants were overall satisfied with the training and actively participated. Considering that the participants in our study were serious players who enrolled themselves at a local esports academy to improve their performance (rather than the typical randomly recruited participants), it is a notable result that our training program was able to meet their high standards.

6.3 Insights for LoL Players of Different Ranks

Although the effect of Player Group was statistically significant for all of the proposed input skill indices, post-hoc analysis revealed that only the following four indices showed significant differences between the top (Level 4) and near-top (Level 3) players: Positioning, Chained Input, Monitoring, and Active Communication Skill. What these indices have in common is that they are deeply related to how well players understand the complex rules and systems of LoL. To improve Positioning and Chained Input Skills, players need to be familiar with the characteristics of numerous champions. Monitoring and Active Communication Skills can only be improved if players truly understand that LoL is a team game with complex and unpredictable dynamics. On a similar note, there were no significant differences between the top (Level 4) and near-top (Level 3) players on Motor Execution and Visual Processing Skills, two indices expected to be relatively independent of whether a player has a deep understanding of the game's rules and systems. This can be a useful lesson for players training hard with the goal of moving beyond the near-top level and making their professional debut.

Table 6:
Index | Week 1 Q1 | Week 1 Q2 | Week 2 Q1 | Week 2 Q2 | Week 2 Q3 | Week 2 Q4 | Week 3 Q1 | Week 3 Q2 | Week 3 Q3 | Week 3 Q4 | Week 3 Q5
Overall | 4 (1.75) | 5 (0) | 3.5 (1) | 3 (0.75) | 4.5 (1) | 4.5 (1) | 2 (0.75)* | 3 (0.75) | 4 (1.75) | 4 (1.75) | 4 (1.5)
Positioning Skill | 4 (1) | 4.5 (1) | 4 (1.75) | 3 (1) | 4 (1.75) | 4 (0.75) | 2 (1)* | 3 (1) | 4 (1) | — | —
Risk Compensation Skill | 5 (1) | 4.5 (1) | 3 (0.75) | 3.5 (1) | 4 (0) | 4 (1.5) | 2.5 (1)* | 4 (0.75) | 4.5 (1) | — | —
Edge-Panning Skill | 4 (1.5) | 4 (1.75) | 3 (1) | 4 (0.75) | 4 (1) | 4 (1.75) | 3 (1.75) | 4.5 (1.75) | 4 (1) | — | —
Motor Execution Skill | 5 (0) | 4.5 (1) | 3 (1) | 3 (0.75) | 4 (1) | 4 (1.75) | 2 (1)* | 3.5 (1) | 4 (1.75) | — | —
Chained Input Skill | 5 (1) | 5 (1) | 4.5 (1.75) | 4 (0.75) | 5 (1) | 4.5 (1.75) | 4 (1.75) | 4 (0.75) | 5 (1) | — | —
Monitoring Skill | 5 (0.75) | 4 (1.75) | 3 (0.75) | 4 (0.75) | 4 (1.75) | 4 (1) | 2.5 (1)* | 3 (1.75) | 4 (1.5) | — | —
Visual Processing Skill | 4 (0) | 4 (1.75) | 4 (1) | 4 (1) | 4 (0.75) | 4 (1) | 3 (1.5) | 4 (0.75) | 4 (0.75) | — | —

Week 1) Q1. The ease of understanding the meaning and analysis results; Q2. The degree of learning from the analysis
Week 2) Q1. The effort put into skill index improvement; Q2. The alignment of changes with expectations; Q3. The improvement achieved through training; Q4. The degree of planned effort in improvement for next week
Week 3) Q1. The relative ease of improvement compared to the effort put in; Q2. The alignment of changes with expectations; Q3. The improvement achieved through training; Q4. The likelihood of recommending the training program; Q5. The satisfaction with the program

Table 6: The median (IQR) of responses to each question; medians lower than 3 are marked with an asterisk (*).

Meanwhile, looking at the differences in Motor Execution Skill and Visual Processing Skill between the Professional and Level 4 groups, it is difficult to completely rule out the possibility that those input skills contribute to high achievement in LoL. Although statistical significance was not tested, those indices for the Professional group were again noticeably lower than for the Level 4 group (lower is better). In fact, several previous studies [26, 56, 67, 80, 106] have reported significant performance differences between professional and amateur LoL players in controlled perceptual or motor experiments isolated from the game context. We see a need for future studies to obtain more solid conclusions about the importance of those two input skills.

Our study also provides insight into what improvements novice players most urgently need in order to advance to higher levels. According to the post-hoc analysis, significant differences between the Level 2 and Level 1 groups were found only in the Visual Processing Skill Index; the difference between the two groups amounted to 310 ms. However, compared to the other indices, the Visual Processing Skill Index is distinct in that players should not perceive reducing its value as a primary goal. It is a simple matter to reduce the time spent in a switched view, regardless of the amount of information actually obtained from the view; in fact, in the longitudinal study in Section 5, trainees responded that they could improve this index relatively easily (see Table 6). Therefore, if novice players are given training guidelines to reduce the Visual Processing Skill Index, a side effect may occur in which they simply try to reduce the time spent in the view rather than trying to quickly obtain more information from the switched view. Instead, novice players should be given guidelines to actively experience a wide range of game situations rather than taking a passive stance to avoid failure, and it is recommended to periodically check whether the Visual Processing Skill Index naturally decreases during the training process.


7 LIMITATIONS AND FUTURE WORK

This study has some obvious limitations in experimental design, data analysis, and practical applicability. First, since this study analyzes behavioral indices averaged over multiple matches per player, it only speaks to average trends, and the findings do not directly inform how players should behave in the individual matches they are currently playing. For example, while this research might tell us that actively monitoring the game situation is generally important (i.e., a higher Monitoring Skill Index), it does not tell us whether it is equally important for all champions, all roles, or all team compositions. Also, it should be noted that the variability of the input skill indices was not small, even among players within the same group (Table 5), which means that players who do not exhibit the behavioral virtues proposed in this study still have the potential to reach a sufficiently high rank.

Second, it is unclear whether the findings in this study generalize beyond the range of index values investigated here. For example, using Smart Ping more actively on average to communicate with team members is a key indicator of higher expertise, but an excessive frequency of Smart Pings can lead to bullying or taunting by team members [58]. In addition, due to human cognitive and physical limitations, if players try to improve only one index too aggressively, they may encounter a trade-off effect that leads to an unwanted decrease in other indices.

Third, it should be noted that the sample in this study underrepresents female LoL players. According to a 2021 survey [1], 9.0% of LoL players are female, whereas only 4.9% of the participants in this study were female. The reason for this bias is that many of the participants (N=51) were recruited from a local esports academy and a professional team, where the proportion of female players is much lower than in the general player population. On the other hand, we do not think that the generalizability of our findings is significantly harmed by this sample bias, as a previous study [84] reported that female and male players show little or no statistical difference in the rate of gaining expertise or the highest skill level attainable in LoL. Rather, we are concerned that this study may reinforce the existing prejudice that esports is a male-dominated culture [84].

Fourth, when collecting raw input behavior data in this study, there was no control over the participants’ input devices or computer specifications, which may have had an unintended effect on the analysis results. For example, higher-ranked players may have found the optimal input device settings for themselves through years of experience [44, 60], which may have made their input behavior more efficient than that of lower-ranked players.

Lastly, we have built a large dataset in this study, but it has been only partially utilized in order to make a more focused contribution. For example, the timeline data that record the type and timing of important events in each match (e.g., champion deaths) could be useful in future work to study input behavior in different game contexts. Other limitations of our dataset, such as not collecting the input behavior of all players participating in a match [77, 94], also need to be addressed in future research.


8 CONCLUSION

To achieve high performance in real-time esports, players must be able to implement high-quality input behavior. Characterizing and quantifying the quality of a player's input behavior can provide meaningful insights into esports training, coaching, and spectating, but previous related research has been limited. In this study, we demonstrated a holistic pipeline of player input behavior analysis targeting League of Legends (LoL), the most popular esports game today. Through expert interviews, we characterized the eight input skills required to successfully accomplish the three core tasks given in LoL. We then devised eight input skill indices, calculated from a player's raw input log, that quantify how well the player implements the eight input skills mentioned by the experts. By analyzing input behavior logs from 4,835 matches collected from 193 players, we confirmed that the proposed input skill indices are significantly correlated with players' in-game rank. Finally, through a three-week longitudinal training study, we observed what changes could occur in players' behavior and performance when they were given training aid software for analyzing and visualizing their input skills. We hope this research contributes to a more robust, scientific foundation for esports, and to that end, we are releasing our code and dataset as open source.


ACKNOWLEDGMENTS

This research was funded by National Research Foundation of Korea (RS-2023-00223062), Institute of Information and Communications Technology Planning and Evaluation (2020-0-01361), and Korea Creative Content Agency (R2021040105). We thank Game Coach Academy (GCA) and SandBox Gaming (SBXG) for their help in recruiting participants for the user study in this research and for providing useful advice on software design. Lastly, we thank the anonymous reviewers for their constructive feedback.

Footnotes

  1. Most previous studies have been conducted on input behavior in first-person shooter [14, 15, 52, 53, 77] or real-time strategy [5, 19, 23, 40, 41, 102, 115] games.

  2. https://github.com/narey54541/LLL

  3. Clears away fog from the installed area to ensure visibility.

  4. Before picking characters, players have the option to ban a character, making it unavailable for both their team and the opposing team in that match. Teams alternate in picking characters, with each team being aware of the other's picks.

  5. Out of the 250 statistics obtained per match from the official API, the eight most important were selected through discussion with the head coach. More details on the selected statistics are provided in the Supplementary Material.

  6. Calculated as the total number of actions (i.e., button presses) per minute [64].

Supplemental Material

Video Preview (mp4, 7.3 MB)

Video Presentation (mp4, 137 MB)

References

  1. [n.d.]. League of Legends community survey. https://www.reddit.com/r/leagueoflegends/comments/mjfpbl/rleague_2021_demographic_survey_results/. Accessed: 2022-12-02.Google ScholarGoogle Scholar
  2. Ana Paula Afonso, Maria Beatriz Carmo, and Rafael Afonso. 2021. VisuaLeague: Visual Analytics of Multiple Games. In 2021 25th International Conference Information Visualisation (IV). IEEE, 54–62. https://doi.org/10.1109/IV53921.2021.00019Google ScholarGoogle ScholarCross RefCross Ref
  3. Ana Paula Afonso, Maria Beatriz Carmo, Tiago Gonçalves, and Pedro Vieira. 2019. VisuaLeague: Player performance analysis using spatial-temporal data. Multimedia Tools and Applications 78 (2019), 33069–33090. https://doi.org/10.1007/s11042-019-07952-zGoogle ScholarGoogle ScholarCross RefCross Ref
  4. Matthew P Anderson, James E McDonald, and Roger W Schvaneveldt. 1987. Empirical user modeling: Command usage analyses for deriving models of users. In Proceedings of the Human Factors Society Annual Meeting, Vol. 31. SAGE Publications Sage CA: Los Angeles, CA, 41–45. https://doi.org/10.1177/154193128703100109Google ScholarGoogle ScholarCross RefCross Ref
  5. Tetske Avontuur, Pieter Spronck, and Menno Van Zaanen. 2013. Player skill modeling in Starcraft II. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Vol. 9. 2–8. https://doi.org/10.1609/aiide.v9i1.12682Google ScholarGoogle ScholarCross RefCross Ref
  6. Farnod Bahrololloomi, Sebastian Sauer, Fabio Klonowski, Robin Horst, and Ralf Dörner. 2022. A Machine Learning based Analysis of e-Sports Player Performances in League of Legends for Winning Prediction based on Player Roles and Performances.. In VISIGRAPP (2: HUCAPP). 68–76. https://doi.org/10.5220/0010895900003124Google ScholarGoogle ScholarCross RefCross Ref
  7. Isaac Da Silva Beserra, Lucas Camara, and Márjory Da Costa-Abreu. 2016. Using keystroke and mouse dynamics for user identification in the online collaborative game league of legends. In 7th International Conference on Imaging for Crime Detection and Prevention (ICDP 2016). IET, 1–6. https://doi.org/10.1049/ic.2016.0076Google ScholarGoogle ScholarCross RefCross Ref
  8. Andrzej Białecki, Peter Xenopoulos, Paweł Dobrowolski, Robert Białecki, and Jan Gajewski. 2023. ESPORT: Electronic Sports Professionals Observations and Reflections on Training. arXiv preprint arXiv:2311.05424 (2023). https://doi.org/10.48550/arXiv.2311.05424Google ScholarGoogle ScholarCross RefCross Ref
  9. Peter Bickmann, Konstantin Wechsler, Kevin Rudolf, Chuck Tholl, Ingo Froböse, and Christopher Grieben. 2021. Comparison of reaction time between esports players of different genres and sportsmen. International Journal of eSports Research (IJER) 1, 1 (2021), 1–16. https://doi.org/10.4018/IJER.20210101.oa1Google ScholarGoogle ScholarCross RefCross Ref
  10. Paul Binder. 2023. Real-time Training Visualization for League of Legends/Author Paul Binder, BSc. (2023). https://resolver.obvsg.at/urn:nbn:at:at-ubl:1-62099Google ScholarGoogle Scholar
  11. Paris Mavromoustakos Blom, Sander Bakkes, and Pieter Spronck. 2019. Towards multi-modal stress response modelling in competitive league of legends. In 2019 IEEE Conference on Games (CoG). IEEE, 1–4. https://doi.org/10.1109/CIG.2019.8848004Google ScholarGoogle ScholarDigital LibraryDigital Library
  12. Ivan Bonilla, Andrés Chamarro, and Carles Ventura. 2022. Psychological skills in esports: Qualitative study of individual and team players. Aloma 40, 1 (2022), 35–41. https://doi.org/10.51698/aloma.2022.40.1Google ScholarGoogle ScholarCross RefCross Ref
  13. Virginia Braun and Victoria Clarke. 2012. Thematic analysis.American Psychological Association. https://doi.org/10.1037/13620-004Google ScholarGoogle ScholarCross RefCross Ref
  14. David Buckley, Ke Chen, and Joshua Knowles. 2013. Predicting skill from gameplay input to a first-person shooter. In 2013 IEEE Conference on Computational Inteligence in Games (CIG). IEEE, 1–8. https://doi.org/10.1109/CIG.2013.6633655Google ScholarGoogle ScholarCross RefCross Ref
  15. David Buckley, Ke Chen, and Joshua Knowles. 2015. Rapid skill capture in a first-person shooter. IEEE Transactions on Computational Intelligence and AI in Games 9, 1 (2015), 63–75. https://doi.org/10.1109/TCIAIG.2015.2494849Google ScholarGoogle ScholarCross RefCross Ref
  16. MIKKEL CHAMBERS. 2019. LEAGUE OF LEGENDS: HIGH-FREQUENCY COMMUNICATION FRAMEWORK AND HOW IT AFFECTS TEAM PERFORMANCE. Ph.D. Dissertation. BETHEL UNIVERSITY.Google ScholarGoogle Scholar
  17. Sven Charleer, Kathrin Gerling, Francisco Gutiérrez, Hans Cauwenbergh, Bram Luycx, and Katrien Verbert. 2018. Real-time dashboards to support esports spectating. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. 59–71. https://doi.org/10.1145/3242671.3242680Google ScholarGoogle ScholarDigital LibraryDigital Library
  18. William G Chase and Herbert A Simon. 1973. Perception in chess. Cognitive psychology 4, 1 (1973), 55–81. https://doi.org/10.1016/0010-0285(73)90004-2Google ScholarGoogle ScholarCross RefCross Ref
  19. Yinheng Chen, Matthew Aitchison, and Penny Sweetser. 2020. Improving StarCraft II Player League Prediction with Macro-Level Features. In Australasian Joint Conference on Artificial Intelligence. Springer, 256–268. https://doi.org/10.1007/978-3-030-64984-5_20Google ScholarGoogle ScholarDigital LibraryDigital Library
  20. Gifford Cheung and Jeff Huang. 2011. Starcraft from the stands: understanding the game spectator. In Proceedings of the SIGCHI conference on human factors in computing systems. 763–772. https://doi.org/10.1145/1978942.1979053Google ScholarGoogle ScholarDigital LibraryDigital Library
  21. Alexander Cho, AM Tsaasan, and Constance Steinkuehler. 2019. The building blocks of an educational esports league: lessons from year one in orange county high schools. In Proceedings of the 14th international conference on the foundations of digital games. 1–11. https://doi.org/10.1145/3337722.3337738Google ScholarGoogle ScholarDigital LibraryDigital Library
  22. Minyeop Choi, Gihyuk Ko, and Sang Kil Cha. 2023. { BotScreen} : Trust Everybody, but Cut the Aimbots Yourself. In 32nd USENIX Security Symposium (USENIX Security 23). 481–498.Google ScholarGoogle Scholar
  23. Zoran Ćirović and Nataša Ćirović. 2019. A StarCraft 2 Player Skill Modeling. In y-BIS 2019 Conference Book: Recent Advances n Data Sc ence and Bus ness Analyt cs. 121.Google ScholarGoogle Scholar
  24. Lincoln Magalhaes Costa, Rafael Gomes Mantovani, Francisco Carlos Monteiro Souza, and Geraldo Xexeo. 2021. Feature analysis to league of legends victory prediction on the picks and bans phase. In 2021 IEEE Conference on Games (CoG). IEEE, 01–05. https://doi.org/10.1109/CoG52621.2021.9619019Google ScholarGoogle ScholarDigital LibraryDigital Library
  25. Lincoln Magalhaes Costa, Alinne C Correa Souza, and Francisco Carlos M Souza. 2019. An approach for team composition in league of legends using genetic algorithm. In 2019 18th Brazilian symposium on computer games and digital entertainment (SBGames). IEEE, 52–61. https://doi.org/10.1109/SBGames.2019.00018Google ScholarGoogle ScholarCross RefCross Ref
  26. Maxime Delmas, Loïc Caroux, and Céline Lemercier. 2022. Searching in clutter: Visual behavior and performance of expert action video game players. Applied Ergonomics 99 (2022), 103628. https://doi.org/10.1016/j.apergo.2021.103628Google ScholarGoogle ScholarCross RefCross Ref
  27. Yue Ding, Xin Hu, Jiawei Li, Jingbo Ye, Fei Wang, and Dan Zhang. 2018. What makes a champion: the behavioral and neural correlates of expertise in multiplayer online battle arena games. International Journal of Human–Computer Interaction 34, 8 (2018), 682–694. https://doi.org/10.1080/10447318.2018.1461761Google ScholarGoogle ScholarCross RefCross Ref
  28. Tiffany D Do, Seong Ioi Wang, Dylan S Yu, Matthew G McMillian, and Ryan P McMahan. 2021. Using machine learning to predict game outcomes based on player-champion experience in League of Legends. In The 16th International Conference on the Foundations of Digital Games (FDG) 2021. 1–5. https://doi.org/10.1145/3472538.3472579Google ScholarGoogle ScholarDigital LibraryDigital Library
  29. Victor do Nascimento Silva and Luiz Chaimowicz. 2017. Moba: a new arena for game ai. arXiv e-prints (2017), arXiv–1705. https://doi.org/10.48550/arXiv.1705.10443Google ScholarGoogle ScholarCross RefCross Ref
  30. Scott Donaldson. 2017. Mechanics and metagame: Exploring binary expertise in League of Legends. Games and Culture 12, 5 (2017), 426–444. https://doi.org/10.1177/1555412015590063Google ScholarGoogle ScholarCross RefCross Ref
  31. Joshua A Eaton, David J Mendonça, and Matthew-Donald D Sangster. 2018. Attack, Damage and Carry: Role Familiarity and Team Performance in League of Legends. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 62. SAGE Publications Sage CA: Los Angeles, CA, 130–134. https://doi.org/10.1177/1541931218621030Google ScholarGoogle ScholarCross RefCross Ref
  32. Joshua A Eaton, Matthew-Donald D Sangster, Molly Renaud, David J Mendonca, and Wayne D Gray. 2017. Carrying the team: the importance of one player’s survival for team success in league of legends. In Proceedings of the human factors and ergonomics society annual meeting, Vol. 61. SAGE Publications Sage CA: Los Angeles, CA, 272–276. https://doi.org/10.1177/1541931213601550Google ScholarGoogle ScholarCross RefCross Ref
  33. Joey R Fanfarelli. 2018. Expertise in professional overwatch play. International Journal of Gaming and Computer-Mediated Simulations (IJGCMS) 10, 1 (2018), 1–22. https://doi.org/10.4018/IJGCMS.2018010101Google ScholarGoogle ScholarDigital LibraryDigital Library
  34. Jessica Formosa, Nicholas O’donnell, Ella M Horton, Selen Türkay, Regan L Mandryk, Michael Hawks, and Daniel Johnson. 2022. Definitions of Esports: A Systematic Review and Thematic Analysis. Proceedings of the ACM on Human-Computer Interaction 6, CHI PLAY (2022), 1–45. https://doi.org/10.1145/3549490Google ScholarGoogle ScholarDigital LibraryDigital Library
  35. Ryousuke Furukado and Goichi Hagiwara. [n.d.]. Exploring Micromanagement and Multiple Target Tracking Skills in Esports Players: Insights from Perceptual and Cognitive Tasks. ([n. d.]).Google ScholarGoogle Scholar
  36. Mohammad Gharehyazie, Bo Zhou, and Iulian Neamtiu. 2016. Expertise and behavior of Unix command line users: an exploratory study. In Proceedings of the Ninth International Conference on Advances in Computer-Human Interactions (ACHI). 1–5.Google ScholarGoogle Scholar
  37. Saul Greenberg and Ian H Witten. 1988. Directing the user interface: how people use command-based computer systems. IFAC Proceedings Volumes 21, 5 (1988), 349–355. https://doi.org/10.1016/S1474-6670(17)53932-4Google ScholarGoogle ScholarCross RefCross Ref
  38. Neal C Hinnant. 2013. Practicing work, perfecting play: League of Legends and the sentimental education of e-sports. (2013). https://doi.org/10.57709/4864722Google ScholarGoogle ScholarCross RefCross Ref
  39. Robin Hunicke, Marc LeBlanc, Robert Zubek, 2004. MDA: A formal approach to game design and game research. In Proceedings of the AAAI Workshop on Challenges in Game AI, Vol. 4. San Jose, CA, 1722.Google ScholarGoogle Scholar
  40. Inhyeok Jeong, Kento Nakagawa, Rieko Osu, and Kazuyuki Kanosue. 2022. Difference in gaze control ability between low and high skill players of a real-time strategy game in esports. PloS one 17, 3 (2022), e0265526. https://doi.org/10.1371/journal.pone.0265526Google ScholarGoogle ScholarCross RefCross Ref
  41. Ryan Kaminsky, Miro Enev, and Erik Andersen. 2008. Identifying game players with mouse biometrics. University of Washington. Technical Report (2008).Google ScholarGoogle Scholar
  42. Haneol Kim, Seonjin Kim, and Jianhua Wu. 2022. Perceptual-motor abilities of professional esports gamers and amateurs. Journal of Electronic Gaming and Esports 1, 1 (2022). https://doi.org/10.1123/jege.2022-0001Google ScholarGoogle ScholarCross RefCross Ref
  43. Jooyeon Kim, Brian C Keegan, Sungjoon Park, and Alice Oh. 2016. The proficiency-congruency dilemma: Virtual team design and performance in multiplayer online games. In Proceedings of the 2016 CHI conference on human factors in computing systems. 4351–4365. https://doi.org/10.1145/2858036.2858464Google ScholarGoogle ScholarDigital LibraryDigital Library
  44. Sunjun Kim, Byungjoo Lee, Thomas Van Gemert, and Antti Oulasvirta. 2020. Optimal sensor position for a computer mouse. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13. https://doi.org/10.1145/3313831.3376735Google ScholarGoogle ScholarDigital LibraryDigital Library
  45. Young Ji Kim, David Engel, Anita Williams Woolley, Jeffrey Yu-Ting Lin, Naomi McArthur, and Thomas W Malone. 2017. What makes a strong team? Using collective intelligence to predict team performance in League of Legends. In Proceedings of the 2017 ACM conference on computer supported cooperative work and social computing. 2316–2329. https://doi.org/10.1145/2998181.2998185Google ScholarGoogle ScholarDigital LibraryDigital Library
  46. Yoon Jeon Kim and Jose A Ruipérez-Valiente. 2020. Data-driven game design: The case of difficulty in educational games. In Addressing Global Challenges and Quality Education: 15th European Conference on Technology Enhanced Learning, EC-TEL 2020, Heidelberg, Germany, September 14–18, 2020, Proceedings 15. Springer, 449–454. https://doi.org/10.1007/978-3-030-57717-9_43Google ScholarGoogle ScholarDigital LibraryDigital Library
  47. Donald Kirkpatrick and James Kirkpatrick. 2006. Evaluating training programs: The four levels. Berrett-Koehler Publishers. https://doi.org/10.1016/S1098-2140(99)80206-9Google ScholarGoogle ScholarCross RefCross Ref
  48. Erica Kleinman, Christian Gayle, and Magy Seif El-Nasr. 2021. “Because I’m Bad at the Game!” A Microanalytic Study of Self Regulated Learning in League of Legends. Frontiers in Psychology 12 (2021), 5570. https://doi.org/10.3389/fpsyg.2021.780234Google ScholarGoogle ScholarCross RefCross Ref
  49. Erica Kleinman, Murtuza N Shergadwala, and Magy Seif El-Nasr. 2022. Kills, deaths, and (computational) assists: Identifying opportunities for computational support in esport learning. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–13. https://doi.org/10.1145/3491102.3517654Google ScholarGoogle ScholarDigital LibraryDigital Library
  50. Erica Kleinman, Jennifer Villareale, Murtuza Shergadwala, Zhaoqing Teng, Andy Bryant, Jichen Zhu, and Magy Seif El-Nasr. 2022. Towards an Understanding of How Players Make Meaning from Post-Play Process Visualizations. In International Conference on Entertainment Computing. Springer, 47–58. https://doi.org/10.1007/978-3-031-20212-4_4Google ScholarGoogle ScholarDigital LibraryDigital Library
  51. Athanasios Kokkinakis, Peter York, Moni Patra, Justus Robertson, Ben Kirman, Alistair Coates, Alan Pedrassoli Chitayat, Simon Peter Demediuk, Anders Drachen, Jonathan David Hook, 2021. Metagaming and metagames in Esports. International Journal of Esports (2021).Google ScholarGoogle Scholar
  52. Denis Koposov, Maria Semenova, Andrey Somov, Andrey Lange, Anton Stepanov, and Evgeny Burnaev. 2020. Analysis of the reaction time of esports players through the gaze tracking and personality trait. In 2020 IEEE 29th International Symposium on Industrial Electronics (ISIE). IEEE, 1560–1565. https://doi.org/10.1109/ISIE45063.2020.9152422Google ScholarGoogle ScholarCross RefCross Ref
  53. Alexander Korotin, Nikita Khromov, Anton Stepanov, Andrey Lange, Evgeny Burnaev, and Andrey Somov. 2019. Towards understanding of esports athletes’ potentialities: The sensing system for data collection and analysis. In 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). IEEE, 1804–1810. https://doi.org/10.1109/SmartWorld-UIC-ATC-SCALCOM-IOP-SCI.2019.00319Google ScholarGoogle ScholarCross RefCross Ref
  54. Simone Kriglstein, Anna Lisa Martin-Niedecken, Laia Turmo Vidal, Madison Klarkowski, Katja Rogers, Selen Turkay, Magy Seif El-Nasr, Elena Márquez Segura, Anders Drachen, and Perttu Hämäläinen. 2021. Special Interest Group: The present and future of esports in HCI. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–4. https://doi.org/10.1145/3411763.3450402Google ScholarGoogle ScholarDigital LibraryDigital Library
  55. Yen-Ting Kuan, Yu-Shuen Wang, and Jung-Hong Chuang. 2017. Visualizing real-time strategy games: The example of starcraft ii. In 2017 IEEE Conference on Visual Analytics Science and Technology (VAST). IEEE, 71–80. https://doi.org/10.1109/VAST.2017.8585594Google ScholarGoogle ScholarCross RefCross Ref
  56. Adam M Large, Benoit Bediou, Sezen Cekic, Yuval Hart, Daphne Bavelier, and C Shawn Green. 2019. Cognitive and behavioral correlates of achievement in a complex multi-player video game. Media and Communication 7, 4 (2019), 198–212. https://doi.org/10.17645/mac.v7i4.2314Google ScholarGoogle ScholarCross RefCross Ref
  57. Lasse Juel Larsen. 2022. The play of champions: Toward a theory of skill in eSport. Sport, ethics and philosophy 16, 1 (2022), 130–152. https://doi.org/10.1080/17511321.2020.1827453Google ScholarGoogle ScholarCross RefCross Ref
  58. Alex Leavitt, Brian C Keegan, and Joshua Clark. 2016. Ping to win? Non-verbal communication and team performance in competitive online multiplayer games. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 4337–4350. https://doi.org/10.1145/2858036.2858132Google ScholarGoogle ScholarDigital LibraryDigital Library
  59. Byungjoo Lee and Hyunwoo Bang. 2013. A kinematic analysis of directional effects on mouse control. Ergonomics 56, 11 (2013), 1754–1765. https://doi.org/10.1080/00140139.2013.835074Google ScholarGoogle ScholarCross RefCross Ref
  60. Byungjoo Lee, Mathieu Nancel, Sunjun Kim, and Antti Oulasvirta. 2020. AutoGain: Gain Function Adaptation with Submovement Efficiency Optimization. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–12. https://doi.org/10.1145/3313831.3376244Google ScholarGoogle ScholarDigital LibraryDigital Library
  61. Injung Lee, Hyunchul Kim, and Byungjoo Lee. 2021. Automated playtesting with a cognitive model of sensorimotor coordination. In Proceedings of the 29th ACM International Conference on Multimedia. 4920–4929. https://doi.org/10.1145/3474085.3475429Google ScholarGoogle ScholarDigital LibraryDigital Library
  62. Injung Lee, Sunjun Kim, and Byungjoo Lee. 2019. Geometrically compensating effect of end-to-end latency in moving-target selection games. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12. https://doi.org/10.1145/3290605.3300790Google ScholarGoogle ScholarDigital LibraryDigital Library
  63. Abraham Lempel and Jacob Ziv. 1976. On the complexity of finite sequences. IEEE Transactions on information theory 22, 1 (1976), 75–81. https://doi.org/10.1109/TIT.1976.1055501Google ScholarGoogle ScholarDigital LibraryDigital Library
  64. Joshua Lewis, Patrick Trinh, and David Kirsh. 2011. A corpus analysis of strategy video game play in starcraft: Brood war. In Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 33.Google ScholarGoogle Scholar
  65. Bingxin Li, Xiangqian Li, Gijsbert Stoet, and Martin Lages. 2023. Processing Speed Predicts Mean Performance in Task-Switching but Not Task-Switching Cost. Psychological Reports 126, 4 (2023), 1822–1846. https://doi.org/10.1177/00332941211072228Google ScholarGoogle ScholarCross RefCross Ref
  66. Quan Li, Peng Xu, Yeuk Yin Chan, Yun Wang, Zhipeng Wang, Huamin Qu, and Xiaojuan Ma. 2016. A visual analytics approach for understanding reasons behind snowballing and comeback in moba games. IEEE transactions on visualization and computer graphics 23, 1 (2016), 211–220. https://doi.org/10.1109/TVCG.2016.2598415Google ScholarGoogle ScholarDigital LibraryDigital Library
  67. Xiangqian Li, Liang Huang, Bingxin Li, Haoran Wang, and Chengyang Han. 2020. Time for a true display of skill: Top players in league of legends have better executive control. Acta Psychologica 204 (2020), 103007. https://doi.org/10.1016/j.actpsy.2020.103007Google ScholarGoogle ScholarCross RefCross Ref
  68. Ray F Lin and Yi-Chien Tsai. 2015. The use of ballistic movement as an additional method to assess performance of computer mice. International Journal of Industrial Ergonomics 45 (2015), 71–81. https://doi.org/10.1016/j.ergon.2014.12.003Google ScholarGoogle ScholarCross RefCross Ref
  69. Frank Linton and Hans-Peter Schaefer. 2000. Recommender systems for learning: Building user and expert models through long-term observation of application use. User Modeling and User-Adapted Interaction 10 (2000), 181–208. https://doi.org/10.1023/A:1026521931194Google ScholarGoogle ScholarDigital LibraryDigital Library
  70. Shiyi Liu, Ruofei Ma, Chuyi Zhao, Zhenbang Li, Jianpeng Xiao, and Quan Li. 2023. BPCoach: Exploring Hero Drafting in Professional MOBA Tournaments via Visual Analytics. arXiv preprint arXiv:2311.05912 (2023). https://doi.org/10.48550/arXiv.2311.05912Google ScholarGoogle ScholarCross RefCross Ref
  71. Philip Z Maymin. 2021. Smart kills and worthless deaths: eSports analytics for League of Legends. Journal of Quantitative Analysis in Sports 17, 1 (2021), 11–27. https://doi.org/10.1515/jqas-2019-0096Google ScholarGoogle ScholarCross RefCross Ref
  72. Francesco Neri, Carmelo Luca Smeralda, Davide Momi, Giulia Sprugnoli, Arianna Menardi, Salvatore Ferrone, Simone Rossi, Alessandro Rossi, Giorgio Di Lorenzo, and Emiliano Santarnecchi. 2021. Personalized adaptive training improves performance at a professional first-person shooter action videogame. Frontiers in Psychology 12 (2021), 598410. https://doi.org/10.3389/fpsyg.2021.598410Google ScholarGoogle ScholarCross RefCross Ref
  73. Joaquim AM Neto, Kazuki M Yokoyama, and Karin Becker. 2017. Studying toxic behavior influence and player chat in an online video game. In Proceedings of the international conference on web intelligence. 26–33. https://doi.org/10.1145/3106426.3106452Google ScholarGoogle ScholarDigital LibraryDigital Library
  74. Rune Kristian Lundedal Nielsen and Thorkild Hanghøj. 2019. Esports skills are people skills. In 13th European Conference on Game-based Learning. Academic Conferences and Publishing International, 535–542.Google ScholarGoogle Scholar
  75. Hao Yi Ong, Sunil Deolalikar, and Mark Peng. 2015. Player behavior and optimal team composition for online multiplayer games. arXiv preprint arXiv:1503.02230 (2015). https://doi.org/10.48550/arXiv.1503.02230Google ScholarGoogle ScholarCross RefCross Ref
  76. Eunji Park and Byungjoo Lee. 2020. An intermittent click planning model. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13. https://doi.org/10.1145/3313831.3376725Google ScholarGoogle ScholarDigital LibraryDigital Library
  77. Eunji Park, Sangyoon Lee, Auejin Ham, Minyeop Choi, Sunjun Kim, and Byungjoo Lee. 2021. Secrets of Gosu: Understanding Physical Combat Skills of Professional Players in First-Person Shooters. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14. https://doi.org/10.1145/3411764.3445217Google ScholarGoogle ScholarDigital LibraryDigital Library
  78. Dustin P Peabody. 2018. Detecting metagame shifts in league of legends using unsupervised learning. (2018).Google ScholarGoogle Scholar
  79. Johannes Pfau and Magy Seif El-Nasr. 2023. Player-Driven Game Analytics: The Case of Guild Wars 2. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–14. https://doi.org/10.1145/3544548.3581404Google ScholarGoogle ScholarDigital LibraryDigital Library
  80. Matthew A Pluss, Andrew R Novak, KJ Bennett, Derek Panchuk, Aaron J Coutts, and Job Fransen. 2020. Perceptual-motor abilities underlying expertise in esports. Journal of Expertise 3, 2 (2020), 133–143.Google ScholarGoogle Scholar
  81. Matthew A Pluss, Andrew R Novak, Kyle JM Bennett, Derek Panchuk, Aaron J Coutts, and Job Fransen. 2023. The reliability and validity of mobalytics proving ground as a perceptual-motor skill assessment for esports. International Journal of Sports Science & Coaching 18, 2 (2023), 470–479. https://doi.org/10.1177/17479541221086793Google ScholarGoogle ScholarCross RefCross Ref
  82. Susanne Poeller, Martin Johannes Dechant, Madison Klarkowski, and Regan L Mandryk. 2023. Suspecting Sarcasm: How League of Legends Players Dismiss Positive Communication in Toxic Environments. Proceedings of the ACM on Human-Computer Interaction 7, CHI PLAY (2023), 1–26. https://doi.org/10.1145/3611020Google ScholarGoogle ScholarDigital LibraryDigital Library
  83. Daniel Railsback and Nicholas Caporusso. 2019. Investigating the human factors in eSports performance. In Advances in Human Factors in Wearable Technologies and Game Design: Proceedings of the AHFE 2018 International Conferences on Human Factors and Wearable Technologies, and Human Factors in Game Design and Virtual Environments, Held on July 21–25, 2018, in Loews Sapphire Falls Resort at Universal Studios, Orlando, Florida, USA 9. Springer, 325–334. https://doi.org/10.1007/978-3-319-94619-1_32Google ScholarGoogle ScholarCross RefCross Ref
  84. Rabindra A Ratan, Nicholas Taylor, Jameson Hogan, Tracy Kennedy, and Dmitri Williams. 2015. Stand by your man: An examination of gender disparity in League of Legends. Games and culture 10, 5 (2015), 438–462. https://doi.org/10.1177/1555412014567228Google ScholarGoogle ScholarCross RefCross Ref
  85. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. " Why should i trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 1135–1144. https://doi.org/10.1145/2939672.2939778Google ScholarGoogle ScholarDigital LibraryDigital Library
  86. Elaine Rich. 1983. Users are individuals: individualizing user models. International Journal of Man-Machine Studies 18, 3 (1983), 199–214. https://doi.org/10.1016/S0020-7373(83)80007-8
  87. Frans Rijnders, Günter Wallner, and Regina Bernhaupt. 2022. Live Feedback for Training Through Real-Time Data Visualizations: A Study with League of Legends. Proceedings of the ACM on Human-Computer Interaction 6, CHI PLAY (2022), 1–23. https://doi.org/10.1145/3549506
  88. Cheselle Jan Roldan and Yogi Tri Prasetyo. 2021. Evaluating the Effects of Aim Lab Training on Filipino Valorant Players' Shooting Accuracy. In 2021 IEEE 8th International Conference on Industrial Engineering and Applications (ICIEA). IEEE, 465–470. https://doi.org/10.1109/ICIEA52957.2021.9436822
  89. Bader Sabtan, Shi Cao, and Naomi Paul. 2022. Current practice and challenges in coaching esports players: An interview study with League of Legends professional team coaches. Entertainment Computing 42 (2022), 100481. https://doi.org/10.1016/j.entcom.2022.100481
  90. Richard A Schmidt, Howard Zelaznik, Brian Hawkins, James S Frank, and John T Quinn Jr. 1979. Motor-output variability: a theory for the accuracy of rapid motor acts. Psychological Review 86, 5 (1979), 415. https://doi.org/10.1037/0033-295X.86.5.415
  91. Sercan Sengün, Joni Salminen, Soon-gyo Jung, Peter Mawhorter, and Bernard J Jansen. 2019. Analyzing hate speech toward players from the MENA in League of Legends. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. 1–6. https://doi.org/10.1145/3290607.3312924
  92. Benjamin T Sharpe, Nicolas Besombes, Matthew R Welsh, and Phil DJ Birch. 2023. Indexing esport performance. Journal of Electronic Gaming and Esports 1, 1 (2023).
  93. Antonio Luis Cardoso Silva, Gisele Lobo Pappa, and Luiz Chaimowicz. 2018. Continuous outcome prediction of League of Legends competitive matches using recurrent neural networks. SBC-Proceedings of SBCGames (2018), 2179–2259.
  94. Anton Smerdov, Bo Zhou, Paul Lukowicz, and Andrey Somov. 2020. Collection and Validation of Psychophysiological Data from Professional and Amateur Players: a Multimodal eSports Dataset. arXiv preprint arXiv:2011.00958 (2020). https://doi.org/10.48550/arXiv.2011.00958
  95. Robin Smit. 2019. A machine learning approach for recommending items in League of Legends. Bachelor's thesis.
  96. KM Snyder and JR Lewis. 1989. Cognitive representations of DOS commands as a function of expertise. In Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, Volume II: Software Track, Vol. 2. IEEE, 447–456. https://doi.org/10.1109/HICSS.1989.48026
  97. Daniel Eriksson Sörman, Karl Eriksson Dahl, Daniel Lindmark, Patrik Hansson, Mariana Vega-Mendoza, and Jessica Körning-Ljungberg. 2022. Relationships between Dota 2 expertise and decision-making ability. PLoS ONE 17, 3 (2022), e0264350. https://doi.org/10.1371/journal.pone.0264350
  98. Tom Stafford and Michael Dewar. 2014. Tracing the trajectory of skill learning with a very large sample of online game players. Psychological Science 25, 2 (2014), 511–518. https://doi.org/10.1177/0956797613511466
  99. Ayoung Suh, Christian Wagner, and Lili Liu. 2015. The effects of game dynamics on user engagement in gamified systems. In 2015 48th Hawaii International Conference on System Sciences. IEEE, 672–681. https://doi.org/10.1109/HICSS.2015.87
  100. Evelyn TS Tan, Katja Rogers, Lennart E Nacke, Anders Drachen, and Alex Wade. 2022. Communication sequences indicate team cohesion: A mixed-methods study of Ad Hoc League of Legends teams. Proceedings of the ACM on Human-Computer Interaction 6, CHI PLAY (2022), 1–27. https://doi.org/10.1145/3549488
  101. Gareth Terry, Nikki Hayfield, Victoria Clarke, and Virginia Braun. 2017. Thematic analysis. The SAGE Handbook of Qualitative Research in Psychology 2 (2017), 17–37. https://doi.org/10.4135/9781526405555
  102. Joseph J Thompson, CM McColeman, Ekaterina R Stepanova, and Mark R Blair. 2017. Using video game telemetry data to research motor chunking, action latencies, and complex cognitive-motor skill learning. Topics in Cognitive Science 9, 2 (2017), 467–484. https://doi.org/10.1111/tops.12254
  103. Adam J Toth, Niall Ramsbottom, Christophe Constantin, Alain Milliet, and Mark J Campbell. 2021. The effect of expertise, training and neurostimulation on sensory-motor skill in esports. Computers in Human Behavior 121 (2021), 106782. https://doi.org/10.1016/j.chb.2021.106782
  104. Selen Türkay and Sonam Adinolf. 2017. Appeal of online collectible card games: social features of Hearthstone. In Proceedings of the 29th Australian Conference on Computer-Human Interaction. 622–627. https://doi.org/10.1145/3152771.3156185
  105. Carlos Valls-Serrano, Cristina de Francisco, Eduardo Caballero-López, and Alfonso Caracuel. 2022. Cognitive Flexibility and Decision Making Predicts Expertise in the MOBA Esport, League of Legends. SAGE Open 12, 4 (2022), 21582440221142728. https://doi.org/10.1177/21582440221142728
  106. Carlos Valls-Serrano, Cristina De Francisco, María Vélez-Coto, and Alfonso Caracuel. 2022. Visuospatial working memory and attention control make the difference between experts, regulars and non-players of the videogame League of Legends. Frontiers in Human Neuroscience 16 (2022). https://doi.org/10.3389/fnhum.2022.933331
  107. Kent P Vaubel and Charles F Gettys. 1990. Inferring user expertise for adaptive interfaces. Human-Computer Interaction 5, 1 (1990), 95–117. https://doi.org/10.1207/s15327051hci0501_3
  108. Günter Wallner, Marnix Van Wijland, Regina Bernhaupt, and Simone Kriglstein. 2021. What players want: information needs of players on post-game visualizations. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–13. https://doi.org/10.1145/3411764.3445174
  109. Günter Wallner, Letian Wang, and Claire Dormann. 2023. Visualizing the Spatio-Temporal Evolution of Gameplay using Storyline Visualization: A Study with League of Legends. Proceedings of the ACM on Human-Computer Interaction 7, CHI PLAY (2023), 1002–1024. https://doi.org/10.1145/3611058
  110. Benjamin Watson, Josef Spjut, Joohwan Kim, Jennifer Listman, Sunjun Kim, Raphael Wimmer, David Putrino, and Byungjoo Lee. 2021. Esports and high performance HCI. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–5. https://doi.org/10.1145/3411763.3441313
  111. Hua Wei, Jingxiao Chen, Xiyang Ji, Hongyang Qin, Minwen Deng, Siqin Li, Liang Wang, Weinan Zhang, Yong Yu, Liu Lin, et al. 2022. Honor of Kings Arena: an environment for generalization in competitive reinforcement learning. Advances in Neural Information Processing Systems 35 (2022), 11881–11892.
  112. Bin Wu. 2019. Hierarchical macro strategy model for MOBA game AI. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 1206–1213. https://doi.org/10.1609/aaai.v33i01.33011206
  113. Peter Xenopoulos, João Rulff, and Claudio Silva. 2022. GgViz: Accelerating large-scale esports game analysis. Proceedings of the ACM on Human-Computer Interaction 6, CHI PLAY (2022), 1–22. https://doi.org/10.1145/3549501
  114. Sijia Xu, Hongyu Kuang, Zhuang Zhi, Renjie Hu, Yang Liu, and Huyang Sun. 2019. Macro action selection with deep reinforcement learning in StarCraft. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Vol. 15. 94–99. https://doi.org/10.1609/aiide.v15i1.5230
  115. Eddie Q Yan, Jeff Huang, and Gifford K Cheung. 2015. Masters of control: Behavioral patterns of simultaneous unit group manipulation in StarCraft 2. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 3711–3720. https://doi.org/10.1145/2702123.2702429
  116. Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, et al. 2020. Towards playing full MOBA games with deep reinforcement learning. Advances in Neural Information Processing Systems 33 (2020), 621–632.
  117. Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, et al. 2020. Mastering complex control in MOBA games with deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 6672–6679. https://doi.org/10.1609/aaai.v34i04.6144
  118. Sizhe Yuen, John D Thomson, and Oliver Don. 2020. Automatic Player Identification in Dota 2. arXiv preprint arXiv:2008.12401 (2020). https://doi.org/10.48550/arXiv.2008.12401
