Exploring Help Facilities in Game-Making Software
Dominic Kao
Purdue University
West Lafayette, Indiana
kaod@purdue.edu
ABSTRACT
Help facilities have been crucial in helping users learn about software for decades. But despite the widespread prevalence of game engines and game editors that ship with many of today's most popular games, there is a lack of empirical evidence on how help facilities impact game-making. For instance, certain types of help facilities may help users more than others. To better understand help facilities, we created game-making software that allowed us to systematically vary the type of help available. We then ran a study of 1646 participants that compared six help facility conditions: 1) Text Help, 2) Interactive Help, 3) Intelligent Agent Help, 4) Video Help, 5) All Help, and 6) No Help. Each participant created their own first-person shooter game level using our game-making software with a randomly assigned help facility condition. Results indicate that Interactive Help has a greater positive impact on time spent, controls learnability, learning motivation, total editor activity, and game level quality. Video Help is a close second across these same measures.

CCS CONCEPTS
• Applied computing → Computer games; • Human-centered computing → Empirical studies in HCI.

KEYWORDS
game making, tutorials, help facilities, text documentation, interactive tutorial, intelligent agent, video

ACM Reference Format:
Dominic Kao. 2020. Exploring Help Facilities in Game-Making Software. In International Conference on the Foundations of Digital Games (FDG '20), September 15–18, 2020, Bugibba, Malta. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3402942.3403014

This work is licensed under a Creative Commons Attribution International 4.0 License.
FDG '20, September 15–18, 2020, Bugibba, Malta
© 2020 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8807-8/20/09.
https://doi.org/10.1145/3402942.3403014

1 INTRODUCTION
Many successful video games, such as Dota 2 and League of Legends (from WarCraft 3), Counter-Strike (from Half-Life), and the recent Dota Auto Chess (from Dota 2), are modifications of popular games using game-making or level-editing software. The popular game engine Unity powers 50% of mobile games, and 60% of all virtual reality and augmented reality content [116]. Despite the reach and impact of game-making, very few empirical studies have been done on help facilities in game-making software. For example, in our systematic review of 85 game-making software, we find that the majority of game-making software incorporates text help, while about half contain video help, and only a small number contain interactive help. Given the large discrepancies in help facility implementation across different game-making software, it becomes important to question whether different help facilities make a difference in user experience, behavior, and the game produced.

Help facilities can teach users how to use game-making software, leading to increased quality in created games. Through fostering knowledge about game-making, help facilities can better help novice game-makers transition to becoming professionals. While studies on game-making and help facilities do not currently exist, there is good motivation for this topic from gaming. A key study by Andersen et al. [3] suggests that help facilities can be beneficial in complex games (increasing play time by as much as 29%), but their effects were non-significant in simpler games where mechanics can be discovered through experimentation. Because game-making software often presents users with a larger number and higher complexity of choices compared to games [43], game-making is likely a domain in which help facilities play an important role.

In this paper, we start by reviewing the help facilities in popular game-making software, including game engines and game editors. This allowed us to understand which types of help facilities are present in game-making software, as well as how those help facilities are implemented. This review directly influenced the design of our help facility conditions in our main study. We then describe our game-making software, GameWorld, which allows users to create their own first-person shooter (FPS) games. Lastly, we describe a between-subjects experiment conducted on Amazon Mechanical Turk that varied the help facility available to the user. This allowed us to isolate the impact of help facility type while keeping all other aspects of the game-making software identical. In this experiment, we had six research questions:

RQ1: Do help facilities lead to higher motivated behavior (time spent, etc.)?
RQ2: Do help facilities improve learnability of controls?
RQ3: Do help facilities improve learning motivation?
RQ4: Do help facilities improve cognitive load?
RQ5: Do help facilities improve created game levels?
RQ6: Does time spent on help facilities vary?

Results show that the Interactive Help has a substantial positive impact on time spent, controls learnability, learning motivation, cognitive load, game-making actions, and final game level quality. The Video Help has a similarly positive impact on time spent, learning motivation, cognitive load, and final game level quality. On the other hand, results show that having no help facility results in the least amount of time spent, lowest controls learnability, lowest learning motivation, highest cognitive load, fewest game-making actions, and lowest final game level quality. We found that the other help facility conditions (text, intelligent agent, all) generally did not significantly differ from no help, except in cognitive load (text is better than no help, but worse than all other conditions). Finally, we conclude with a discussion of design implications based on the results of the study.
2 RELATED WORK
HCI and games researchers have long been interested in games and learning [40–42, 48, 50, 56, 60, 62, 103, 118, 120]. This includes studies on tutorials [3], differences in frequent versus infrequent gamers' reactions to tutorials [82], leveraging reward systems [32], encouraging a growth mindset, or the idea that intelligence is malleable [55, 86, 87], avatars [57, 59, 63, 66], embellishment [58, 65], and many more. AI and games researchers have also begun to take interest, such as the automatic generation of video game tutorials [36, 37], and the adaptation of tutorials to individual user skill levels [8].

In this section, we begin with an overview of software learnability and multimedia learning. We then review the types of help facilities that are found most often in game-making software. These are text documentation, video tutorials, and interactive tutorials. We also investigate intelligent agents. Despite not being present in most game-making software, intelligent agents have been widely explored as an effective means of teaching in the academic literature. Although there are many other types of help (e.g., showing a random tip on startup) and variations thereof [3], our focus is on: 1) core help facilities commonly available in game-making software, and 2) common implementations of those facilities. Both this literature and the review of game-making software provide a baseline for developing our own help facility conditions.

2.1 Software Learnability
Software learnability is a general term that refers to learning how to use a piece of software. Software learnability can be measured along different dimensions, including task metrics (i.e., task performance), command metrics (i.e., based on commands issued by the user), mental metrics (i.e., related to cognitive processes), and subjective metrics (i.e., learnability questionnaires). In this paper, we triangulate across these multiple categories by leveraging expert game level ratings (task), total game-making actions (command), cognitive load measures (mental), and a questionnaire assessing learnability of controls (subjective), to gauge the effects of help facilities on game-making software learnability. One important aspect of our study is cognitive load—this refers to human working memory usage [112]. Here, we are interested in studying the amount of cognitive load experienced by users in each of the help facility conditions. Although help facilities may help users moderate cognitive load through scaffolding the game-making activity, they may also lead to negative impacts, e.g., overwhelming the user with information [122].

2.2 Multimedia Learning Theory
Multimedia learning theory illustrates the principles which lead to the most effective multimedia (i.e., visual and auditory) teaching materials [77]. These principles include the multimedia principle (people learn better from words and pictures than from words alone), the spatial contiguity principle (people learn better when corresponding words and pictures are presented near rather than far from each other), and the temporal contiguity principle (people learn better when corresponding words and pictures are presented simultaneously rather than successively) [78]. We utilize this framework as one internal guide in developing our help facilities. In the remaining sections, we survey different modalities of help facilities.

2.3 Text-Based Help Facilities
Early forms of computing documentation originated from the advent of commercial mainframe computing [125]. Much of the research on text help is dedicated to improving the user experience of computing documentation. Converging evidence suggests that user frustration with computers is a persistent issue that has not been satisfactorily ameliorated by accompanying documentation [71, 80, 109]. Over the past few decades, researchers have proposed several methods for improving the user experience of computing documentation, including standardizing key software terminology and modes of expression [6, 119]; automatically generating documentation material [93]; using semantic wiki systems to improve and accelerate the process of knowledge retrieval [24]; and drastically shortening text manuals by eliminating large sections of explanation and elaboration [18]. A significant issue in this research, however, is the dearth of systematic reviews and comprehensive models for evaluating the efficacy of text-based help facilities [126]. As a result, it remains difficult to determine both the utility of computing documentation for users and developers and whether the benefits of production outweigh the costs [25].

In one study of tutorials and games, text-based tutorials were associated with a 29% increase in length of play in the most complex game; there was no significant increase with the tutorials for simpler games, which suggests that investing in the creation of tutorials for simpler games may not be worth it [3]. Researchers have stated that official gaming documentation faces a gradual but substantial decline [38]. This can be attributed to several factors. Scholars and consumers of computer games typically agree that the best gaming experience is immersive and, therefore, that needing documentation to understand gameplay is a hindrance; at the same time, complex games that lack text-based help facilities are frequently criticized for having steep learning curves that make immersion difficult [81]. Moreover, researchers have argued that there is a lack of standardization across different games, and that help documentation is often written by game developers themselves (often through simply augmenting internal development texts), which has decreased the efficacy of text-based documentation [2, 38, 81, 121].

2.4 Interactive Tutorial Help Facilities
Since the mid-1980s, researchers have sought to understand the effectiveness of interactive tutorials on learning [20, 73, 74]. Interactive tutorials have been found to be especially effective in subjects that benefit from visualizing concepts in detail; engineering students, for example, can interact with graphical representations of objects that are difficult or impossible to manipulate with still images
[74]. Interactive tutorials have been found to be highly effective in learning problem-solving [29, 114]. Additionally, interactive tutorials have been found to be superior to non-interactive methods in learning factual knowledge [73], database programming [90], medical library instruction [5], and basic research skills [111].

Game designers often emphasize the importance of user experimentation while learning new concepts [97]. This experimentation, James Gee argues, should take place in a safe and simplified environment where mistakes are not punished [34]. Kelleher and Pausch have shown that restricting user freedom improves tutorial performance [68]. Using the Alice programming environment, they find that with an interactive tutorial called Stencils, users are able to complete the tutorial faster and with fewer errors than with a paper-based version of the same tutorial. Molnar and Kostkova found that children 10-13 years of age reacted positively to the incorporation of an interactive tutorial that guides the player explicitly through game mechanics [83]. On the other hand, participants that did not play the interactive tutorial found the game more awkward [84]. Frommel et al. found that in a VR game, players taught more interactively had higher positive emotions and higher motivation [30].

2.5 Intelligent Agent Help Facilities
The persona effect was one of the earliest studies revealing that the mere presence of a life-like character in a learning environment increased positive attitudes [61, 72]. Intelligent agents are on-screen characters that respond to feedback from users in order to enhance their experience [94, 123]. These are often used to effectively tailor learning environments for individual students [94, 123]. Intelligent agents can help to personalize learning more effectively than standard teaching tools [94], and allow for human-like interaction between the software and the user that would not otherwise be possible [10, 108]. Intelligent agents have been integrated into several games whose purpose is to teach the player. The TARDIS framework uses intelligent agents in a serious game for social coaching for job interviews [4]. Other educational games have utilized intelligent agents to teach the player number factorization [22, 23], the Java compilation process [35], and computational algorithms [31].

2.6 Video Tutorial Help Facilities
Video-based tutorials utilize the modalities of audio, animation, and alphabetic text. Research has shown that user performance is increased when animation is combined with an additional semiotic mode, such as sound or words [79]. Video animation is effective for recall when illustrating highly visual facts, concepts, or principles [99] (p. 116). For instance, video tutorials can display a task sequence in the same way a user would see it on their own computer screen, leading to congruence between the video and the real-life task execution [115]. Many studies have shown that video tutorials can be highly effective [14, 117, 124]. For example, one study found that 24.2% of students without videos failed a course on introductory financial accounting, whereas the failure rate was only 6.8% among students that had the videos available [14]. Another study that compared text tutorials to video tutorials for learning software tools found that both types of tutorials had their advantages. Namely, video tutorials were preferred for learning new content; however, text tutorials were useful for looking up specific information [54]. Video walkthroughs are common instructional tutorials used in games to help players overcome a game's challenges through the imitation of actions [12, 17, 85]. For example, a classroom study supplemented the use of video games with walkthroughs, and found that students found the video-based walkthroughs more helpful than the text-based ones [12].

2.7 Game-Making
Academic interest in game-making has its roots in constructionism: the theory of learning in which learners construct mental models for understanding the world [92]. Early manifestations included "Turtle Geometry," an environment for programming an icon of a turtle trailing lines across a computer display. Research at the intersection of HCI, game-making, and education has shown that game-making has promise for increasing engagement, knowledge, and skills in a variety of domains [1, 26, 33, 44, 45, 52, 53, 64, 100–102]. However, despite an extensive literature on game-making and education, game-making software is seldom studied. [75] is one rare example in which 8 game-making tools were contrasted on their immersive features. Given the scarcity of work on game-making software, it is difficult to predict which types of help facilities will be most effective. Even in games, despite employing a wide variety of tutorial styles, the relative effectiveness of these styles is not well understood [3]. The main goal of the current study is to explore the effects of help facilities within game-making software.

3 INTERSECTIONALITY BETWEEN PLAY AND MAKING
Before proceeding, it is crucial to discuss our approach in studying game-making software. Frequently, in developing this work and discussing it with others, we broached the topic of what making is, and what play is. Yet in trying to define these terms, even in a specific context such as games, we reach a deep philosophical impasse. Huizinga is well-known to be the progenitor of one of the most widely used (but widely contested) definitions of play [46, 107]. Piaget made the following observation: "the many theories of play expounded in the past are clear proof that the phenomenon is difficult to understand" [95]. Instead of attempting to delineate the two terms, we argue that it is precisely their intersectionality that needs further theoretical and empirical grounding. We argue that strictly categorizing an activity as play or making threatens to constrain researchers to drawing on traditional epistemologies inherent to how the terms have been defined previously, rather than building new interdisciplinary bridges which shed light on both parallels and divergences.

For example, we find in the next section that game-making software often appears to fall on a continuum that is neither fully software nor fully game. We could argue that LittleBigPlanet should be categorized as a game, and that Unity should be categorized as software. Yet elements of play and making are present even in these more extreme examples—in Unity, users engage in frequent play-testing, in part to see if their created game is "fun". Therefore, there appear to be a number of both parallels and divergences between play and making, and their degree of overlap in any given context
Figure 1: Doom’s SnapMap interactive tutorial. Figure 2: LittleBigPlanet 3 interactive tutorial. Figure 3: Valve’s Hammer editor text doc.
will inevitably depend on the definitions that one has decided to apply. In this paper, we avoid strict categorization of game-making software as being a pure "game" or pure "software"—this allows our survey to more flexibly encompass a wide range of game-making systems, regardless of whether they exist as independent environments or embedded inside the ecology of a game.

4 REVIEW OF GAME-MAKING SOFTWARE
Before designing our experiment, we reviewed 85 different game-making software. This includes both game engines and official level editors. We retrieved the list of software based on commercial success and popularity (Wikipedia/Google), critical reception (Metacritic), and user reception (Slant.co). For example, Slant.co shows user-driven rankings for "What are the best 2D game engines?" and "What are the best 3D game engines?".

Each piece of software was first installed on an appropriate device, then explored by 2 experienced (8+ years of professional game development experience) game developers independently for 1 hour. Each individual then provided their own summary of the software and the help facilities available. At this stage, all possible help facilities were included, such as interactive tutorials, startup tips, community forums, and so on. In some rare instances, we excluded software prior to review that did not come from an official source. (One notable example is Grand Theft Auto V, which does not have official modding tools.) For examples, see Figures 1, 2, and 3.

Next, we condensed our review into a table summarizing the different types of help available for each software. The table was coded independently by the 2 developers, then discussed and re-coded repeatedly until consensus was reached. At this stage, we made the distinction between help facilities contained directly in the software versus external help facilities. External help facilities included online courses, community forums, and e-mailing support. These types of help fall outside the main intent of our current research, which is to study help facilities contained in the software itself, and were therefore excluded. An exception was made for common internal help facilities that were external, so long as they came from an official source and so long as a link was included to those help facilities directly from within the software (e.g., online text documentation, videos, etc.). Unofficial sources of help were not included. Finally, types of help that were contained within the software but were not substantive enough to warrant their inclusion as a core help facility (such as a random tip appearing each time the software boots) were excluded.

Figure 4: Game-making software and their official help facilities (text documentation, interactive tutorial, intelligent agent, video tutorial). Green means "Yes" (✓), orange means "Somewhat" (∼), and red means "No" (×).
Our final table contained the following categories of help facilities: text documentation, interactive tutorial, and video tutorial. In addition, intelligent agent was included as a result of our earlier rationale. See Figure 4. Overall, the review shows that text documentation is prevalent in the majority of game-making software (89.4%). Official video tutorials are present in approximately half of game-making software (52.9%). A lesser number of game-making software contain interactive tutorials (20.0%). Finally, none of the reviewed software contained an intelligent agent that responded to user choices (0.0%). A few game-making software contained a character that would lead the player through a tutorial, but these were purely aesthetic and were not full-fledged intelligent agents.

5 THE GAME-MAKING SOFTWARE
We developed a game-making software called GameWorld.¹ GameWorld was developed using a spiral HCI approach by repeatedly designing, implementing, and evaluating prototypes in increasingly complex iterations. Evaluation of prototypes was performed with experienced game developers known to the author. GameWorld was developed specifically for novice game-makers, and allows users to create a first-person shooter game without any coding.

¹ Demo: https://youtu.be/O7_VH0IyWdo

Figure 5 shows the main interface elements. The top of the interface is primarily dedicated to object-related actions. The left side allows additional object manipulations. For example, objects in GameWorld are typically aligned to an underlying grid. However, the user can hold down Control while modifying position, rotation, or scale, which ignores the grid alignment and gives the user more flexibility. Therefore, the "Align" buttons allow for objects to be snapped back into grid alignment. Objects can also be grouped (for easier management), and be made dynamic (which means they are moveable during play, for instance from collisions with bullets or player models). Dynamic is an important modifier for certain objects, such as a door, which consists of a door frame, a door joint, and a door which has the dynamic modifier enabled.
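To make the grid behavior concrete, the following is a minimal sketch of the kind of snapping logic described above. It is a hypothetical Python illustration, not GameWorld's actual code (GameWorld is a Unity WebGL application); the 1.0 grid size and the function names are assumptions.

```python
# Hypothetical illustration of grid snapping with a modifier-key override,
# mirroring the editor behavior described above (not GameWorld's actual code).
GRID_SIZE = 1.0  # assumed: one grid cell per world unit

def snap_to_grid(position):
    """What an "Align" button would do: snap each coordinate to the nearest cell."""
    return tuple(round(c / GRID_SIZE) * GRID_SIZE for c in position)

def apply_move(position, delta, control_held):
    """Move an object; snap to the grid unless the Control key is held."""
    moved = tuple(p + d for p, d in zip(position, delta))
    if control_held:
        return moved                 # free placement, ignores the grid
    return snap_to_grid(moved)       # default: aligned to the underlying grid

print(apply_move((0.0, 0.0, 0.0), (1.3, 0.0, 2.7), control_held=False))  # (1.0, 0.0, 3.0)
print(apply_move((0.0, 0.0, 0.0), (1.3, 0.0, 2.7), control_held=True))   # (1.3, 0.0, 2.7)
```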
Objects. There are 36 pre-made objects that users can place. These range from simple objects (e.g., a sphere) to more complex objects (e.g., a guard room). Players can also create their own objects, for example by grouping objects together and saving them as a "pre-fab". Objects can be textured and colored. There are 76 pre-made textures that users can choose from, and 146 color choices.

Special Objects. Special objects are non-standard objects like door joints, invisible walls, lights, player and enemy spawn points, and trees. These are manipulated in the same way as normal objects.

Level Properties. Within the level properties menu, players can modify the starting health of the player, the number of enemies, whether enemies should respawn after death (and how often), certain modes useful for testing (e.g., player invincibility), etc. Some of these settings can also be changed during play testing in the pause menu.

Builder Tool. The builder tool allows users to create arbitrary objects using cubes, each cube corresponding to one grid volume.

6 DEVELOPING HELP FACILITIES
In developing the help facility conditions, our primary objectives were: 1) consistent quality across the different help facilities, and 2) realistic implementations similar to current game-making software. To this end, we sought freelance game developers to help with "Providing Feedback on Game-Making Software". We told game developers that we wanted critical feedback on game-making software being developed. We hired a total of 15 professional game developers, each with an average of 4 years (SD=2.0) of game development experience. Each game developer had worked with at least 3 different game engines, with more than half of the developers having experience with 5+. Developers all had work experience and portfolios which reflected recent game development experience (all within one year). These game developers provided input throughout the help facility development process. Game developers provided feedback at three different times during the creation of our help facility conditions: during the initial design, after initial prototypes, and after completing the polished version. Game developer feedback was utilized to create help facilities that developers thought would be helpful, as well as similar to existing implementations in game-making software. Additionally, the authors of this paper incorporated multimedia learning principles wherever possible in developing the help facilities. Finally, a questionnaire was administered to the developers to verify that our objectives of consistent quality and realistic implementations were satisfied.

6.1 Conditions
We created 6 help facility conditions:
• No Help
• Text Help
• Interactive Help
• Intelligent Agent Help
• Video Help
• All Help

See Figures 6, 7, 8, and 9. The No Help condition was a baseline control condition. The All Help condition contained all of the help facilities. Every help facility contained the same identical quick tutorial, which consisted of learning navigation (moving around the editor), creating the world (creating and rotating objects), adding enemies (creating enemy respawn points), level configuration (changing additional level parameters), and play (play testing). Upon completing the quick tutorial, users will have learned all of the concepts necessary to create their own level. Help facilities are integrated directly into the application to facilitate data tracking.

When the editor first loads, the user is presented with the dialog "Go to X now?" (X is replaced by Text Help, Interactive Help, Intelligent Agent Help, or Video Help). If the user clicks "Yes", the help facility is opened. If the user presses "No", then the user is notified that they can access the help facility at any time by clicking on the help icon. This happens only once for each of the help facility conditions. In the All Help condition, every help facility is presented in the opening dialog in a randomized order (randomized per-user), with each help facility presented as a button and "No Thanks" at the bottom of the list. In the No Help condition, no opening dialog is presented, and pressing the help icon brings up the dialog "Currently Unavailable".
Figure 5: Interface overview. Each interface element has a corresponding tooltip.
6.1.1 Text Help. The Text Help window is a document that contains a menu bar and links to quickly navigate to different sections of the text help. The Text Help window can be closed or minimized. In either case, the Text Help window will re-open at the same location the user was at previously. When the Text Help is minimized, it appears as an icon near the bottom of the editor screen with the caption "Text Help".

The Text Help contains the quick tutorial. However, after the quick tutorial, the Text Help contains additional reference material. This includes in-depth (advanced) documentation on navigation, level management, objects management, special objects, and play testing. Additional information is provided that is not covered in the quick tutorial (e.g., hold down Shift to amplify navigation, how to peek from behind corners during play, how to add lighting, etc.). Screenshots are provided throughout to add clarity.

6.1.2 Interactive Help. The Interactive Help provides the quick tutorial interactively. Players are limited to performing a specific action at each step (a darkened overlay only registers clicks within a cut-out area). When users are presented with information that does not require a specific action, users can immediately click "Next". Users can close the interactive tutorial at any time. If a user re-opens a previously closed interactive tutorial, the tutorial starts at the beginning—this behavior is consistent with existing interactive tutorials in game-making software.

6.1.3 Intelligent Agent Help. The Intelligent Agent Help is an intelligent agent that speaks to the user through dialog lines. A female voice actor provided the dialog lines of the intelligent agent. The intelligent agent has gestures, facial expressions, and lip movements that are synchronized to the audio. This was facilitated using the SALSA With RandomEyes and Amplitude for WebGL Unity packages. For gestures, we created custom talk and idle animations.

When the Intelligent Agent Help is activated, it provides several options: 1) Quick Tutorial (identical to the interactive tutorial, and everything is spoken by the intelligent agent), 2) Interface Questions (clicking anywhere on the screen provides an explanation—this also works with dialogs that are open, such as level properties), 3) Other Questions (a pre-populated list of questions weighted by the user's least taken actions, e.g., if the user has already accessed level properties, this particular question will appear on a later page; there are three pages of common questions, e.g., "How do I add enemies?"). The agent can be closed and re-activated at any time.

6.1.4 Video Help. The Video Help provides the quick tutorial in a video format (4 minutes, 27 seconds). Audio is voiced by the same female actor as for the Intelligent Agent Help. Captions are provided at the beginning of each section (e.g., "Navigating"). A scrubber at the bottom allows the user to navigate the video freely. The Video Help can be closed or minimized. In either case, the Video Help window will re-open at the same location the user was at previously. When the Video Help is minimized, it appears as an icon near the bottom of the editor screen with the caption "Video Help".

6.1.5 All Help. In the All Help condition, users have access to all help facilities. Help facilities work the same as described, except only one help facility can be active at a time.
Figure 6: Text Help condition. Figure 7: Video Help condition.
Figure 8: Interactive Help condition. Figure 9: Intelligent Agent Help condition.
6.2 Validating Help Facilities
The feedback of the professional game developers played an important role in the creation of our help facilities. For example, the initial prototype of the Text Help was a simple text document with images. However, developers commented that most game-making software would contain easy-to-navigate documentation. Therefore, we enhanced the Text Help with a menu bar that contained section links.

After completing the final versions of the help facilities, we asked the game developers to answer a short survey. Each game developer, on their own, explored each help facility in a randomized order for at least 30 minutes. After each help facility, game developers anonymously answered two questions: "Overall, I felt that the quality of the X was excellent," and "Overall, I felt that the X was similar to how I would expect it to be implemented in other game-making software," on a scale of 1: Strongly Disagree to 7: Strongly Agree.

A one-way ANOVA found no significant effect of help facility condition on game developer quality ratings at the p<.05 level [F(3,56) = 0.24, p = 0.87]. The average quality scores for each help facility were M=6.0, SD=1.3 (Interactive Tutorial), M=5.9, SD=1.1 (Video Tutorial), M=6.1, SD=0.8 (Text Tutorial), and M=5.9, SD=0.8 (Intelligent Agent). A one-way ANOVA found no significant effect of help facility condition on game developer similar-implementation ratings at the p<.05 level [F(3,56) = 0.35, p = 0.79]. The average similarity scores for each help facility were M=6.3, SD=0.7 (Interactive Tutorial), M=6.1, SD=0.7 (Video Tutorial), M=6.3, SD=0.8 (Text Tutorial), and M=6.2, SD=0.7 (Intelligent Agent).

6.3 Validating Frame Rate
To ensure the validity of the experiment, one of our initial goals was to normalize frames per second across the help facilities. A lower frames-per-second count while one of the help facilities was active would present a possible experimental confound. We wanted to ensure that, in particular, the Intelligent Agent, which is a 3D model that moves with gestures and facial expressions in a WebGL application, did not create performance issues and a possible degradation of the experience.

For testing, we used a 2018 PC (Windows 10) and a 2012 MacBook Pro (macOS High Sierra). The PC had an Intel Core i7-7700k CPU (4.20 GHz), an NVIDIA GeForce GTX 1070, and 16 GB of RAM. The Mac had an Intel Core i5 (2.5 GHz), an Intel HD Graphics 4000 GPU, and 6 GB of RAM. Both systems used Firefox Quantum 63.0 to run the Unity WebGL game and for performance profiling.

We produced a 1-minute performance profile for each machine and for each help facility. In the case of Text Help, Interactive Help, and Intelligent Agent Help, interactions occurred at a reading speed of 200 words per minute [127].
All help facilities were within ~1 fps of each other: intelligent agent (PC: 59.14 fps, Mac: 59.04), interactive tutorial (PC: 60.00, Mac: 59.11), text tutorial (PC: 60.00, Mac: 59.43), and video tutorial (PC: 60.00, Mac: 58.74).

7 METHODS
7.1 Quantitative Measures
7.1.1 Learnability of Controls. The "Controls" subscale from the Player Experience of Need Satisfaction (PENS) scale [106] was adapted for use in this study. This consisted of 3 questionnaire items as follows: "Learning GameWorld's controls was easy", "GameWorld's controls are intuitive", and "When I wanted to do something in GameWorld, it was easy to remember the corresponding control". Cronbach's alpha was 0.86.

7.1.2 Learning Motivation Scale. Learning motivation was captured using a scale adapted from [47] which consisted of 7 items on a 6-point Likert scale (1: Strongly Disagree to 6: Strongly Agree), e.g., "I would like to learn more about GameWorld". Cronbach's alpha was 0.93.

7.1.3 Cognitive Load. Cognitive load used measures adapted from [88] and [113]. It consists of 8 items on a 6-point Likert scale (1: Strongly Disagree to 6: Strongly Agree). There are two sub-scales: mental load (e.g., "GameWorld was difficult to learn for me") and mental effort (e.g., "Learning how to use GameWorld took a lot of mental effort"). Cronbach's alpha was 0.90 and 0.85, respectively.
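As a reminder of how the internal-consistency figures above are typically computed, the following is a minimal illustration of Cronbach's alpha for a multi-item scale such as the three PENS Controls items. The toy response matrix and function name are hypothetical, not data from the study.

```python
# Minimal illustration of Cronbach's alpha (not the authors' analysis code).
# `items` is an (n_participants x n_items) matrix of Likert responses.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                # number of items in the scale
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of participants' sum scores
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Toy data: four participants answering the three 7-point "Controls" items.
controls_items = [[6, 7, 6], [5, 5, 4], [7, 7, 7], [3, 4, 3]]
print(round(cronbach_alpha(controls_items), 2))
```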
7.1.4 Game Quality Ratings. Users were asked to rate their final game level on the dimensions of: "Aesthetic" (Is it visually appealing?), "Originality" (Is it creative?), "Fun" (Is it fun to play?), "Difficulty" (Is it difficult to play?), and "Overall" (Is it excellent overall?) on a scale of 1: Strongly Disagree to 7: Strongly Agree.

Expert ratings were given by 3 QA testers we hired. All QA testers had extensive games QA experience. The 3 QA testers first underwent one-on-one training with a GameWorld expert for one hour. QA testers then reviewed 250 game levels on their own without scoring them. QA testers were then given 50 game levels at random to rate. The GameWorld expert provided feedback on the ratings, and the game levels were rescored as necessary. Afterwards, QA testers worked entirely independently.

All 3 QA testers were blind to the experiment—the only information they received was a spreadsheet containing links to each participant's game level. Each game level was played by the QA tester before being rated. They were debriefed on the purpose of their work after they completed all 1646 ratings. The 3 QA testers each spent an average of 64 hours (SD=9.3) over 3 weeks, at $10 USD/hr.

7.1.5 Total Time. We measure both total time and time spent in each help facility. For all types of help, this is the amount of time that the help is on-screen (and maximized, if it is Text Help or Video Help).

7.1.6 Other Measures. We were additionally interested in whether the player activated the help immediately on startup, and how many total game-making actions were performed (this was an aggregate measure that combined object creations, object manipulations, etc.).

7.2 Participants
After a screening process that disqualified participants with multiple surveys with zero variance, multiple surveys with ±3SD, or a failed attention check, 1646 Amazon Mechanical Turk participants were retained. The data set consisted of 976 male and 670 female participants. Participants were between the ages of 18 and 73 (M = 32.3, SD = 9.6), were all from the United States, and could all read/write English.

7.3 Design
A between-subjects design was used: help facility condition was the between-subjects factor. Participants were randomly assigned to a condition.

7.4 Protocol
Participants filled out a pre-survey assessing previous experience playing games, programming, and creating games (conditions did not differ significantly across any of these measures, p=0.320, p=0.676, p=0.532). Then, for a minimum of 10 minutes, each participant interacted with GameWorld. After the 10 minutes had passed, the quit button became active and participants could exit at any time. After quitting, participants completed the PENS, the learning motivation scale, and the cognitive load scale. Participants then provided ratings on their game levels before filling out demographics.

7.5 Analysis
Separate MANOVAs were run for each set of items—PENS, User Level Ratings, and Expert Level Ratings—with help facility condition as the independent variable. To detect significant differences between help facility conditions, we utilized one-way MANOVA. These results are reported as significant when p<0.05 (two-tailed). Prior to running our MANOVAs, we checked the assumptions of homogeneity of variance and homogeneity of covariance using Levene's Test of Equality of Error Variances and Box's Test of Equality of Covariance Matrices; both assumptions were met by the data. For individual measures, we use one-way ANOVA.
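The sketch below illustrates the kind of analysis pipeline Section 7.5 describes (an assumption check, a one-way MANOVA, and follow-up one-way ANOVA with Tukey HSD post-hoc comparisons). It is not the authors' code; the file name and column names (condition, time_spent, and the five level-rating dimensions) are hypothetical, and Box's M test is omitted here.

```python
# Hypothetical sketch of the Section 7.5 analysis using SciPy and statsmodels.
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("gameworld_study.csv")  # assumed long format: one row per participant

# Levene's test for homogeneity of variance on a single outcome.
groups = [g["time_spent"].to_numpy() for _, g in df.groupby("condition")]
print(stats.levene(*groups))

# One-way MANOVA over a related set of outcomes (e.g., self-rated level dimensions).
manova = MANOVA.from_formula(
    "aesthetic + originality + fun + difficulty + overall ~ condition", data=df)
print(manova.mv_test())  # includes Wilks' lambda for the condition effect

# Follow-up one-way ANOVA and Tukey HSD post-hoc comparisons for one measure.
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(endog=df["time_spent"], groups=df["condition"], alpha=0.05))
```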
8 RESULTS
RQ1: Do help facilities lead to higher motivated behavior? The Interactive Help and Video Help promoted greater time spent. The Interactive Help promoted a higher number of actions. The No Help condition results in the least time spent and the lowest number of actions.

A one-way ANOVA found a significant effect of help facility condition on time spent at the p<.05 level [F(5,1640) = 10.23, p < 0.001, ηp² = 0.03]. Post-hoc testing using Tukey HSD found that participants in both the Interactive Help and Video Help conditions spent a longer total time than participants in any of the four other conditions, p<.05, d in the range of 0.27–0.46. See Figure 10.

A one-way ANOVA found a significant effect of help facility condition on total game-making actions at the p<.05 level [F(5,1640) = 4.22, p < 0.001, ηp² = 0.01]. Post-hoc testing using Tukey HSD found that participants in the Interactive Help condition performed a higher number of actions than Text Help (d=0.23), Intelligent Agent Help (d=0.21), All Help (d=0.22), and No Help (d=0.33), p<.05. See Figure 11.
Figure 10: Time spent (+/- SEM). Figure 11: Editor actions (+/- SEM). Figure 12: Controls score (+/- SEM). Figure 13: Learning motivation (+/- SEM).
RQ2: Do help facilities improve learnability of controls? The Interactive Help promoted controls learnability.

A one-way ANOVA found a significant effect of help facility condition on the PENS controls score at the p<.05 level [F(5,1640) = 3.96, p < 0.005, ηp² = 0.01]. Post-hoc testing using Tukey HSD found that participants in the Interactive Help condition had a higher PENS controls score than participants in any of the other conditions except All Help and Video Help, p<.05, d in the range of 0.27–0.34. See Figure 12.

RQ3: Do help facilities improve learning motivation? The Interactive Help and Video Help promoted learning motivation. No Help results in the lowest learning motivation.

A one-way ANOVA found a significant effect of help facility condition on learning motivation at the p<.05 level [F(5,1640) = 6.42, p < 0.001, ηp² = 0.02]. Post-hoc testing using Tukey HSD found that participants in both the Interactive Help and Video Help conditions had higher learning motivation than participants in any of the other conditions except Intelligent Agent Help, p<.05, d in the range of 0.27–0.37. See Figure 13.

RQ4: Do help facilities improve cognitive load? All conditions had lower cognitive load relative to No Help, except Text Help. Interactive Help, Intelligent Agent Help, and All Help have lower cognitive load than Text Help.

A one-way ANOVA found a significant effect of help facility condition on mental load at the p<.05 level [F(5,1640) = 8.14, p < 0.001, ηp² = 0.02]. Post-hoc testing using Tukey HSD found that participants in the No Help condition had a higher mental load than participants in any other condition except Text Help, p<.005, d in the range of 0.32–0.39. Participants in the Text Help condition had a higher mental load than Interactive Help (d=0.31), Intelligent Agent Help (d=0.33), and All Help (d=0.28), p<.05.

A one-way ANOVA found a significant effect of help facility condition on mental effort at the p<.05 level [F(5,1640) = 8.29, p < 0.001, ηp² = 0.03]. Post-hoc testing using Tukey HSD found that participants in the No Help condition exerted higher mental effort than participants in any other condition, p<.005, d in the range of 0.15–0.42. Participants in the Text Help condition exerted higher mental effort than Interactive Help (d=0.26) and Intelligent Agent Help (d=0.28), p<.05. See Figure 14.

RQ5: Do help facilities improve created game levels? The Interactive Help and Video Help led to the highest quality game levels, both from the user's perspective and expert ratings. No Help leads to the lowest quality.

The MANOVA was statistically significant across help facility conditions for the self-rated game level quality dimensions, F(25, 6079) = 1.53, p<.05; Wilks' λ = 0.977, ηp² = 0.01. ANOVAs found that the effect was significant across all dimensions except difficulty, p<.05, ηp² in the range of 0.01–0.02. Post-hoc testing using Tukey HSD found that for aesthetic, originality, fun, and overall, both Interactive Help and Video Help were significantly higher than No Help, p<.05, d in the range of 0.25–0.39.

For expert ratings, intraclass correlation across the three raters was ICC=0.83 (two-way random, average measures), indicating high agreement. The MANOVA was statistically significant across help facility conditions for the expert-rated game level quality dimensions, F(25, 6079) = 5.97, p<.001; Wilks' λ = 0.914, ηp² = 0.02. ANOVAs found that the effect was significant across all dimensions, p<.005, ηp² in the range of 0.02–0.06. Post-hoc testing using Tukey HSD found that Interactive Help and Video Help were highest across all dimensions (significant values: p<.05, d in the range of 0.22–0.75). On the other hand, No Help was lowest across all dimensions (significant values: p<.05, d in the range of 0.32–0.75). For the overall dimension, both Interactive Help and Video Help were significantly higher than all other conditions except Intelligent Agent Help and Text Help, p<.05, d in the range of 0.26–0.58. No Help was lower than all other conditions, p<.05, d in the range of 0.38–0.58. See Figure 15.
had a higher mental load than Interactive Help (d=0.31), Intelligent participants in the Interactive Help (M=241, SD=313), Intelligent
Agent Help (d=0.33), and All Help (d=0.28), p <.05. Agent Help (M=246, SD=382), and Video Help (M=238, SD=331),
A one-way ANOVA found a significant effect of help facility conditions spend more time on help than participants in Text Help
condition on mental effort at the p<.05 level [F(5,1640) = 8.29, p (M=117, SD=198), and All Help (M=158, SD=202), p<.05, d in the
< 0.001, ηp2 = 0.03]. Post-hoc testing using Tukey HSD found that range of 0.29–0.42.
participants in the No Help condition exerted higher mental effort A one-way within subjects ANOVA was conducted to compare
than participants in any other condition, p<.005, d in the range of time spent across different help facilities in the All Help condition.
0.15–0.42. Participants in the Text Help condition exerted higher There was a significant difference in time spent, Wilk’s λ = 0.929, F
mental effort than Interactive Help (d=0.26) and Intelligent Agent (3,275) = 7.04, p < .001, ηp2 = 0.07. Post-hoc testing using a Bonferroni
Help (d=0.28), p <.05. See Figure 14. correction found that participants in the All Help condition spent
RQ5: Do help facilities improve created game levels? significantly longer in the Interactive Help (M=63, SD=135) than
The Interactive Help and Video Help led to the highest quality game in the Text Help (M=24, SD=81, d=0.35) and the Intelligent Agent
levels, both from the user’s perspective and expert ratings. No Help Help (M=24, SD=86, d=0.34), p<.001. Time spent in Video Help was
leads to the lowest quality. M=47, SD=144.
The MANOVA was statistically significant across help facility A one-way ANOVA found a significant effect of help facility
conditions across the self-rated game level quality dimensions, F(25, condition on likelihood of startup help activation at the p<.05
6079) = 1.53, p <.05; Wilk’s λ = 0.977, ηp2 = 0.01. ANOVAs found that level [F(5,1640) = 282.64, p < 0.001, ηp2 = 0.46]. Post-hoc testing
the effect was significant across all dimensions except difficulty, using Tukey HSD found that participants in the All Help condition
Figure 14: Means of 6-point Likert ratings for cognitive load (+/- SEM). Figure 15: Mean 7-point Likert overall ratings for game levels (+/- SEM).
(M=66%) are significantly less likely to activate a help facility on startup than any other help facility condition, p<.05, d in the range of 0.20–0.58.

9 DISCUSSION
9.1 Game Making Help Facilities Are Crucial
The results show that Interactive Help promoted time spent, total editor activity, controls learnability, and learning motivation. Video Help promoted time spent and learning motivation. These results highlight the important role that help facilities had in promoting motivated behavior, controls learnability, and learning motivation in GameWorld.

For cognitive load, No Help had the highest load, with Text Help the second highest. All other conditions had lower cognitive load. These results show that help facilities can reduce the cognitive load for users, and that help facilities (e.g., Text Help) can have differential impacts.

Finally, results show that help facilities improve the quality of produced game levels. Both Interactive Help and Video Help led to the highest quality game levels, both self and expert rated. On the other hand, No Help led to the lowest quality. This demonstrates that help facilities improve the objective quality of produced artifacts in GameWorld.

9.2 Not All Help Facilities Are Made Equal
Results show that No Help is detrimental to most outcomes. Having some help facility was better than having no help facility. However, there was significant variance between help facilities. Text Help only marginally improved outcomes compared to No Help. Similarly, All Help and Intelligent Agent Help saw only marginal improvements compared to No Help, with the exception of cognitive load (on which both All Help and Intelligent Agent Help scored low). On the other hand, the results show that Interactive Help and Video Help led to significant improvements over No Help with small-to-medium effect sizes [21].

The results show that participants in the All Help condition are less likely to activate a help facility on startup. This potentially indicates too much choice, or a choice that was simply not meaningful [27, 28, 67, 104]. On the other hand, the two effective help facilities, Interactive Help and Video Help, are linear. These two help facilities are also the ones that participants in the All Help condition spent the most time with, suggesting that users preferred spending time in these help facilities over Text Help and Intelligent Agent Help.

9.3 Why These Findings Occurred
Both Interactive Help and Video Help outperformed other conditions. Both Interactive Help and Video Help are able to moderate cognitive load in comparison to No Help and Text Help, through allowing users to follow guided step-by-step instructions. A reduction in cognitive load often results in better performance [89], which in this context translated to time spent, editor actions, controls learnability, learning motivation, and game level quality. The additional benefits of Interactive Help above and beyond other conditions in this study could be a result of performing actions immediately as they are being described during instruction. For example, decades of studies have shown the effectiveness of learning techniques that involve the act of doing while learning, including hands-on learning [98], active learning [110], and situated learning [70]. In a meta-analysis of 255 studies, active learning—which promotes directly interacting with learning material [13]—was shown to reduce failure rates in courses by 11% and increase student performance on course assessments by 0.47 standard deviations [110]. Therefore, interactive help facilities have a strong theoretical and empirical basis for their effectiveness. More work, however, is needed to understand why Intelligent Agent Help was less helpful than Interactive Help. It is possible that the number of dialog choices contained in the Intelligent Agent Help was overwhelming for users [27, 28, 67, 104]. More research is needed to understand how to best optimize different help facilities.

9.4 Recommendations for Game Making Software
Interactive Help and Video Help Improved Outcomes. Our results show that Interactive Help and Video Help lead to improved outcomes. However, in our review of game-making software, we found that while 89.4% had text documentation, only 52.9% had videos and 20.0% had interactive tutorials. This indicates a potential missed opportunity for game-making software to better introduce systems to users.²

² One aspect not analyzed in this study is cost/ease of development, which may be a reason for Text Help's ubiquity.

Any Help Is Better Than No Help. Participants in No Help performed badly on all outcomes (time spent, controls learnability, learning motivation, cognitive load, total editor activity, and game
level quality). Having some help facility was always better than having no help facility at all. This indicates that game-making software should always incorporate some form of help, even if only basic documentation.

Be Wary of Giving Users Choice of Help. Results show that the All Help condition, in which participants were able to choose which help facility to use, led to worse outcomes than Interactive Help or Video Help alone. Participants in All Help were less likely to activate help on startup than in any other condition, and spent less time on help compared to Interactive Help, Video Help, and Intelligent Agent Help. This indicates that initially prompting the user with one good help facility will be more effective.

10 LIMITATIONS

Amazon Mechanical Turk (AMT) has been shown to be a reliable platform for experiments (e.g., [15, 76]). AMT workers also tend to represent a more diverse sample than the U.S. population [15, 19, 91]. However, future experiments restricted to experienced game developers could give more insight into help facilities' effects on experts. Although our AMT sample consists mainly of novices, these are likely the users who need the most scaffolding, and hence are an appropriate population to study in the context of help.

Longitudinal studies are needed. Although our results show that Interactive Help and Video Help are highly beneficial in the short term, the other help facilities could become more effective over time. For example, long-time users may find Text Help useful for looking up reference information. Longitudinal studies may determine, for example, that certain types of help are more appropriate for different levels of experience.

We took care to design help facilities in consultation with highly experienced game developers. Moreover, we ensured that game developers perceived the help facilities to be of high and similar quality, and that the help facilities were implemented similarly to those in other game-making software. Nevertheless, these help facilities could be constructed differently. For example, the intelligent agent could have been constructed to be similar to the user. A significant body of literature has shown that intelligent agents that are more similar to their users along dimensions such as age, gender, race, and clothing (termed similarity-attraction [16, 49]) promote learning [7, 9, 11, 39, 51, 69, 96, 105]. Indeed, any number of changes to these help facilities can be imagined. Nonetheless, there is value in contrasting the baseline developer-driven and developer-validated implementations studied here.

Finally, there are other forms of help that we are interested in. For example, providing template projects can be useful both as a starting point and as a way of dissecting and understanding pre-built games. Additionally, we are interested in augmenting GameWorld with additional game genres (e.g., platformer, action RPG) and capabilities.

11 CONCLUSION

Game-making is increasingly pervasive, with an ever-larger number of game engines and game-making tools. With today's game-making software, it is increasingly possible for novices and experts alike to create games. Nonetheless, game-making software is often complex. Help facilities can therefore play an important role in scaffolding knowledge in game-making. Results show that Interactive Help was the most promising help facility, leading to a greater positive impact on time spent, controls learnability, learning motivation, total editor activity, and game level quality. Video Help was a close second across these same measures. These results are directly relevant to designers, researchers, and developers, as they reveal how to best support novice game-making through help facilities. Future research in this domain can help cultivate the next generation of game-makers in an age where play and making are, more than ever, both ubiquitous and intertwined.

REFERENCES
[1] Yasemin Allsop. 2016. A reflective study into children's cognition when making computer games. British Journal of Educational Technology (2016).
[2] Scott Ambler. 2012. Best Practices for Agile/Lean Documentation. Agile Modeling (2012).
[3] Erik Andersen, Eleanor O'Rourke, Yun-En Liu, Rich Snider, Jeff Lowdermilk, David Truong, Seth Cooper, and Zoran Popovic. 2012. The impact of tutorials on games of varying complexity. In CHI. https://doi.org/10.1145/2207676.2207687
[4] Keith Anderson, Elisabeth André, T. Baur, Sara Bernardini, M. Chollet, E. Chrysafidou, I. Damian, C. Ennis, A. Egges, P. Gebhard, H. Jones, M. Ochs, C. Pelachaud, Kaśka Porayska-Pomsta, P. Rizzo, and Nicolas Sabouret. 2013. The TARDIS framework: Intelligent virtual agents for social coaching in job interviews. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8253 LNCS (2013), 476–491. https://doi.org/10.1007/978-3-319-03161-3_35
[5] Rozalynd P. Anderson and Steven P. Wilson. 2009. Quantifying the effectiveness of interactive tutorials in medical library instruction. Medical Reference Services Quarterly 28, 1 (2009), 10–21. https://doi.org/10.1080/02763860802615815
[6] G. Antoniol, G. Canfora, A. De Lucia, and E. Merlo. 2003. Recovering code to documentation links in OO systems. (2003), 136–144. https://doi.org/10.1109/wcre.1999.806954
[7] Ivon Arroyo, Beverly Park Woolf, James M. Royer, and Minghui Tai. 2009. Affective gendered learning companions. In Frontiers in Artificial Intelligence and Applications, Vol. 200. 41–48. https://doi.org/10.3233/978-1-60750-028-5-41
[8] Batu Aytemiz, Isaac Karth, Jesse Harder, Adam M Smith, and Jim Whitehead. 2018. Talin: A Framework for Dynamic Tutorials Based on the Skill Atoms Theory. AIIDE (2018), 138–144.
[9] Jeremy N. Bailenson, Jim Blascovich, and Rosanna E. Guadagno. 2008. Self-representations in immersive virtual environments. Journal of Applied Social Psychology 38, 11 (2008), 2673–2690.
[10] Amy Baylor. 1999. Intelligent agents as cognitive tools for education. Educational Technology 39, 2 (1999), 36–40.
[11] Al Baylor and Yanghee Kim. 2004. Pedagogical agent design: The impact of agent realism, gender, ethnicity, and instructional role. Intelligent Tutoring Systems 1997 (2004), 592–603. https://doi.org/10.1007/978-3-540-30139-4_56
[12] Kelly Bergstrom, Jennifer Jenson, Emily Flynn-Jones, and Cristyne Hebert. 2018. Videogame Walkthroughs in Educational Settings: Challenges, Successes, and Suggestions for Future Use. Proceedings of the 51st Hawaii International Conference on System Sciences 9 (2018), 1875–1884. https://doi.org/10.24251/hicss.2018.237
[13] Charles C Bonwell and James A Eison. 1991. Creating Excitement in the Classroom: What Is Active Learning and Why Is It Important? Technical Report.
[14] H David Brecht and Suzanne M Ogilby. 2008. Enabling a Comprehensive Teaching Strategy: Video Lectures. Journal of Information Technology Education 7 (2008), 71–86. http://go.galegroup.com/ps/i.do?action=interpret&id=GALE%7CA199685531&v=2.1&u=ggcl&it=r&p=AONE&sw=w&authCount=1
[15] Michael Buhrmester, Tracy Kwang, and Samuel D Gosling. 2011. Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science 6, 1 (2011), 3–5.
[16] Donn Byrne and Don Nelson. 1965. Attraction as a linear function of proportion of positive reinforcements. Journal of Personality and Social Psychology 1, 6 (1965), 659–663. https://doi.org/10.1037/h0022073
[17] Diane Carr. 2005. Contexts, gaming pleasures, and gendered preferences. Simulation and Gaming 36, 4 (2005), 464–482. https://doi.org/10.1177/1046878105282160
[18] John M Carroll, Penny L Smith-Kerker, James R Ford, and Sandra A Mazur-Rimetz. 1987. The Minimal Manual. Human-Computer Interaction 3, 2 (1987), 123–153. https://doi.org/10.1207/s15327051hci0302_2
[19] Jesse Chandler and Danielle Shapiro. 2016. Conducting clinical research using crowdsourced convenience samples. Annual Review of Clinical Psychology 12 (2016).
[20] Davida H. Charney and Lynne M. Reder. 1986. Designing Interactive Tutorials for Computer Users. Human-Computer Interaction 2, 4 (1986), 297–317. https://doi.org/10.1207/s15327051hci0204_2
[21] Jacob Cohen. 1992. A power primer. Psychological Bulletin 112, 1 (1992), 155.
[22] Cristina Conati. 2002. Probabilistic assessment of user's emotions in educational games. Applied Artificial Intelligence 16, 7-8 (2002), 555–575. https://doi.org/10.1080/08839510290030390
[23] Cristina Conati and Xiaohong Zhao. 2004. Building and evaluating an intelligent pedagogical agent to improve the effectiveness of an educational game. (2004), 6. https://doi.org/10.1145/964442.964446
[24] Klaas Andries de Graaf. 2011. Annotating software documentation in semantic wikis. (2011), 5. https://doi.org/10.1145/2064713.2064718
[25] Sergio Cozzetti B. de Souza, Nicolas Anquetil, and Káthia M. de Oliveira. 2005. A study of the documentation essential to software maintenance. (2005), 68. https://doi.org/10.1145/1085313.1085331
[26] Jill Denner, Linda Werner, and Eloy Ortiz. 2012. Computer games created by middle school girls: Can they be used to measure understanding of computer science concepts? Computers and Education 58, 1 (2012), 240–249. https://doi.org/10.1016/j.compedu.2011.08.006
[27] Miriam Evans and Alyssa R. Boucher. 2015. Optimizing the power of choice: Supporting student autonomy to foster motivation and engagement in learning. Mind, Brain, and Education 9, 2 (2015), 87–91. https://doi.org/10.1111/mbe.12073
[28] Terri Flowerday and Gregory Schraw. 2000. Teacher beliefs about instructional choice: A phenomenological study. Journal of Educational Psychology 92, 4 (2000), 634–645. https://doi.org/10.1037/0022-0663.92.4.634
[29] Vera Frith, Jacob Jaftha, and Robert Prince. 2004. Evaluating the effectiveness of interactive computer tutorials for an undergraduate mathematical literacy course. British Journal of Educational Technology 35, 2 (2004), 159–171.
[30] Julian Frommel, Kim Fahlbusch, Julia Brich, and Michael Weber. 2017. The Effects of Context-Sensitive Tutorials in Virtual Reality Games. (2017), 367–375. https://doi.org/10.1145/3116595.3116610
[31] Vihanga Gamage and Cathy Ennis. 2018. Examining the effects of a virtual character on learning and engagement in serious games. (2018), 1–9. https://doi.org/10.1145/3274247.3274499
[32] Jacqueline Gaston and Seth Cooper. 2017. To Three or not to Three: Improving Human Computation Game Onboarding with a Three-Star System. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI '17 (2017), 5034–5039. https://doi.org/10.1145/3025453.3025997
[33] Elisabeth R. Gee and Kelly M. Tran. [n.d.]. Video Game Making and Modding. 238–267 pages. https://doi.org/10.4018/978-1-4666-8310-5.ch010
[34] James Paul Gee. 2005. Learning by Design: Good Video Games as Learning Machines. E-Learning and Digital Media 2, 1 (2005), 5–16. https://doi.org/10.2304/elea.2005.2.1.5
[35] Marco A. Gómez-Martín, Pedro P. Gómez-Martín, and Pedro A. González-Calero. 2004. Game-driven intelligent tutoring systems. In International Conference on Entertainment Computing. Springer, 108–113.
[36] Michael Cerny Green, Ahmed Khalifa, Gabriella A. B. Barros, Tiago Machado, Andy Nealen, and Julian Togelius. 2018. AtDelfi: Automatically Designing Legible, Full Instructions For Games. In FDG. arXiv:1807.04375 http://arxiv.org/abs/1807.04375
[37] Michael Cerny Green, Ahmed Khalifa, Gabriella A. B. Barros, and Julian Togelius. 2018. "Press Space to Fire": Automatic Video Game Tutorial Generation. (2018), 75–80. arXiv:1805.11768 http://arxiv.org/abs/1805.11768
[38] Jeffrey Greene and Laura Palmer. 2011. It's all in the game: Technical communication's role in game documentation. Intercom 3, 63 (2011), 6–9.
[39] Rosanna E Guadagno, Jim Blascovich, Jeremy N Bailenson, and Cade McCall. 2007. Virtual humans and persuasion: The effects of agency and behavioral realism. Media Psychology 10, 1 (2007), 1–22. https://doi.org/10.1080/15213260701300865
[40] Carl Gutwin, Rodrigo Vicencio-Moreira, and Regan L Mandryk. 2016. Does Helping Hurt?: Aiming Assistance and Skill Development in a First-Person Shooter Game. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play (2016), 338–349. https://doi.org/10.1145/2967934.2968101
[41] Erik Harpstead, Brad A. Myers, and Vincent Aleven. 2013. In search of learning: Facilitating data analysis in educational games. In Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/2470654.2470667
[42] Casper Harteveld and Steven Sutherland. 2015. The Goal of Scoring: Exploring the Role of Game Performance in Educational Games. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI 2015) (2015).
[43] Elisabeth R. Hayes and Ivan Alex Games. 2008. Making Computer Games and Design Thinking. Games and Culture 3, 3-4 (2008), 309–332. https://doi.org/10.1177/1555412008317312
[44] Elisabeth R. Hayes and Ivan Alex Games. 2008. Making computer games and design thinking: A review of current software and strategies. Games and Culture 3, 3-4 (2008), 309–332. https://doi.org/10.1177/1555412008317312
[45] Kate Howland and Judith Good. 2015. Learning to communicate computationally with Flip: A bi-modal programming language for game creation. Computers and Education 80 (2015), 224–240. https://doi.org/10.1016/j.compedu.2014.08.014
[46] Johan Huizinga. 2014. Homo Ludens Ils 86. Routledge.
[47] Gwo-Jen Hwang and Hsun-Fang Chang. 2011. A formative assessment-based mobile learning approach to improving the learning attitudes and achievements of students. Computers & Education 56, 4 (2011), 1023–1031.
[48] Ioanna Iacovides, Anna L. Cox, and Thomas Knoll. 2014. Learning the Game: Breakdowns, Breakthroughs and Player Strategies. Proceedings of the Extended Abstracts of the 32nd Annual ACM Conference on Human Factors in Computing Systems - CHI EA '14 (2014), 2215–2220. https://doi.org/10.1145/2559206.2581304
[49] Katherine Isbister and Clifford Nass. 2000. Consistency of personality in interactive characters: verbal cues, non-verbal cues, and user characteristics. International Journal of Human-Computer Studies 53 (2000), 251–267. https://doi.org/10.1006/ijhc.2000.0368
[50] Colby Johanson and Regan L. Mandryk. 2016. Scaffolding Player Location Awareness through Audio Cues in First-Person Shooters. (2016), 3450–3461. https://doi.org/10.1145/2858036.2858172
[51] Amy M. Johnson, Matthew D. Didonato, and Martin Reisslein. 2013. Animated agents in K-12 engineering outreach: Preferred agent characteristics across age levels. Computers in Human Behavior 29, 4 (2013), 1807–1815.
[52] Yasmin B Kafai. 2006. Playing and making games for learning: Instructionist and constructionist perspectives for game studies. Games and Culture 1, 1 (2006), 36–40. https://doi.org/10.1177/1555412005281767
[53] Yasmin B. Kafai and Quinn Burke. 2015. Constructionist Gaming: Understanding the Benefits of Making Games for Learning. Educational Psychologist 50, 4 (2015), 313–334. https://doi.org/10.1080/00461520.2015.1124022
[54] Verena Käfer, Daniel Kulesz, and Stefan Wagner. 2017. What Is the Best Way For Developers to Learn New Software Tools? The Art, Science, and Engineering of Programming 1, 2 (2017). https://doi.org/10.22152/programming-journal.org/2017/1/17
[55] Dominic Kao. 2019. Exploring the Effects of Growth Mindset Usernames in STEM Games. American Education Research Association (2019).
[56] Dominic Kao. 2019. JavaStrike: A Java Programming Engine Embedded in Virtual Worlds. In Proceedings of The Fourteenth International Conference on the Foundations of Digital Games.
[57] Dominic Kao. 2019. The Effects of Anthropomorphic Avatars vs. Non-Anthropomorphic Avatars in a Jumping Game. In The Fourteenth International Conference on the Foundations of Digital Games.
[58] Dominic Kao. 2020. The effects of juiciness in an action RPG. Entertainment Computing 34 (2020), 100359. https://doi.org/10.1016/j.entcom.2020.100359
[59] Dominic Kao and D. Fox Harrell. 2015. Exploring the Impact of Role Model Avatars on Game Experience in Educational Games. The ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play (CHI PLAY) (2015).
[60] Dominic Kao and D. Fox Harrell. 2015. Mazzy: A STEM Learning Game. Foundations of Digital Games (2015).
[61] Dominic Kao and D. Fox Harrell. 2016. Exploring the Effects of Dynamic Avatars on Performance and Engagement in Educational Games. In Games+Learning+Society (GLS 2016).
[62] Dominic Kao and D. Fox Harrell. 2016. Exploring the Effects of Encouragement in Educational Games. Proceedings of the 34th Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI 2016) (2016).
[63] Dominic Kao and D. Fox Harrell. 2016. Exploring the Impact of Avatar Color on Game Experience in Educational Games. Proceedings of the 34th Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI 2016) (2016).
[64] Dominic Kao and D. Fox Harrell. 2017. MazeStar: A Platform for Studying Virtual Identity and Computer Science Education. In Foundations of Digital Games.
[65] Dominic Kao and D. Fox Harrell. 2017. Toward Understanding the Impact of Visual Themes and Embellishment on Performance, Engagement, and Self-Efficacy in Educational Games. The Annual Meeting of the American Educational Research Association (AERA) (2017).
[66] Dominic Kao and D. Fox Harrell. 2018. The Effects of Badges and Avatar Identification on Play and Making in Educational Games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '18.
[67] Idit Katz and Avi Assor. 2007. When choice motivates and when it does not. Educational Psychology Review 19, 4 (2007), 429–442. https://doi.org/10.1007/s10648-006-9027-y
[68] Caitlin Kelleher and Randy Pausch. 2005. Stencils-Based Tutorials: Design and Evaluation. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '05 (2005), 541. https://doi.org/10.1145/1054972.1055047
[69] Yanghee Kim and Amy L. Baylor. 2006. Pedagogical agents as learning companions: The role of agent competency and type of interaction. Educational Technology Research and Development 54, 3 (2006), 223–243.
[70] Jean Lave and Etienne Wenger. 1991. Situated learning: Legitimate peripheral participation. Learning in Doing 95 (1991), 138. https://doi.org/10.2307/2804509
[71] Jonathan Lazar, Adam Jones, and Ben Shneiderman. 2006. Workplace user frustration with computers: An exploratory investigation of the causes and severity. Behaviour & Information Technology 25, 03 (2006), 239–251.
[72] James C. Lester, Sharolyn A. Converse, Susan E. Kahler, S. Todd Barlow, Brian A. Stone, and Ravinder S. Bhogal. 1997. The Persona Effect: Affective Impact of Animated Pedagogical Agents. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '97 (1997), 359–366. https://doi.org/10.1145/258549.258797
[73] Gillian Lieberman, Richard Abramson, Kevin Volkan, and Patricia J. McArdle. 2002. Tutor versus computer: A prospective comparison of interactive tutorial and computer-assisted instruction in radiology education. Academic Radiology 9, 1 (2002), 40–49. https://doi.org/10.1016/S1076-6332(03)80295-7
[74] Dennis K. Lieu. 1999. Using interactive multimedia computer tutorials for engineering graphics education. Journal for Geometry and Graphics 3, 1 (1999), 85–91.
[75] Eric Malbos, Ronald M. Rapee, and Manolya Kavakli. 2013. Creation of interactive virtual environments for exposure therapy through game-level editors: Comparison and tests on presence and anxiety. International Journal of Human-Computer Interaction (2013). https://doi.org/10.1080/10447318.2013.796438
[76] Winter Mason and Siddharth Suri. 2012. Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods 44, 1 (2012), 1–23. https://doi.org/10.3758/s13428-011-0124-6
[77] Richard E Mayer. 2002. Multimedia learning. In Psychology of Learning and Motivation. Vol. 41. Elsevier, 85–139.
[78] Richard E Mayer. 2006. Ten research-based principles of multimedia learning. Web-based Learning: Theory, Research, and Practice (2006), 371–390.
[79] Richard E. Mayer and Richard B. Anderson. 1991. Animations Need Narrations: An Experimental Test of a Dual-Coding Hypothesis. Journal of Educational Psychology 83, 4 (1991), 484–490. https://doi.org/10.1037/0022-0663.83.4.484
[80] Barbara Mirel. 1998. "Applied Constructivism" for User Documentation: Alternatives to Conventional Task Orientation. Journal of Business and Technical Communication 12, 1 (1998), 7–49. https://doi.org/10.1177/1050651998012001002
[81] Ryan M Moeller and Others. 2016. Computer games and technical communication: Critical methods and applications at the intersection. Routledge.
[82] Raphaël Moirn, P.-M. Léger, Sylvain Senecal, M.-C.B. Roberge, Mario Lefebvre, and Marc Fredette. 2016. The effect of game tutorial: A comparison between casual and hardcore gamers. CHI PLAY 2016 - Proceedings of the Annual Symposium on Computer-Human Interaction in Play Companion (2016), 229–237. https://doi.org/10.1145/2968120.2987730
[83] Andreea Molnar and Patty Kostkova. 2013. If you build it would they play? Challenges and Solutions in Adopting Health Games for Children. CHI 2003 Workshop: Let's talk about Failures: Why was the Game for Children not a Success? (2013), 9–12.
[84] Andreea Molnar and Patty Kostkova. 2014. Gaming to master the game: Game usability and game mechanics. SeGAH 2014 - IEEE 3rd International Conference on Serious Games and Applications for Health, Books of Proceedings (2014). https://doi.org/10.1109/SeGAH.2014.7067091
[85] Niklas Nylund. 2015. Walkthrough and let's play: evaluating preservation methods for digital games. In Proceedings of the 19th International Academic Mindtrek Conference. ACM, 55–62.
[86] E O'Rourke, Kyla Haimovitz, Christy Ballweber, Carol S. Dweck, and Zoran Popović. 2014. Brain points: a growth mindset incentive structure boosts persistence in an educational game. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems - CHI '14 (2014), 3339–3348. http://dl.acm.org/citation.cfm?id=2557157
[87] Eleanor O'Rourke, Erin Peach, Carol S Dweck, and Zoran Popović. 2016. Brain Points: A Deeper Look at a Growth Mindset Incentive Structure for an Educational Game. Proceedings of the Third ACM Conference on Learning@Scale (2016), 41–50. https://doi.org/10.1145/2876034.2876040
[88] Fred G Paas. 1992. Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology 84, 4 (1992), 429.
[89] Fred G.W.C. Paas. 1992. Training Strategies for Attaining Transfer of Problem-Solving Skill in Statistics: A Cognitive-Load Approach. Journal of Educational Psychology (1992). https://doi.org/10.1037/0022-0663.84.4.429
[90] Claus Pahl. 2002. An Evaluation of Scaffolding for Virtual Interactive Tutorials. Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (2002), 740–746. http://www.editlib.org/p/15295
[91] Gabriele Paolacci, Jesse Chandler, and Panagiotis G Ipeirotis. 2010. Running experiments on Amazon Mechanical Turk. (2010).
[92] S Papert and Idit Harel. 1991. Situating Constructionism. Constructionism (1991).
[93] Cécile Paris and Keith Vander Linden. 2007. Building knowledge bases for the generation of software documentation. (2007), 734. https://doi.org/10.3115/993268.993296 arXiv:9607026 [cmp-lg]
[94] Clara-Inés Pena, J L Marzo, and Josep-Lluis De Rosa. 2002. Intelligent Agents in a Teaching and Learning Environment on the Web. Proceedings of the International Conference on Advanced Learning Technologies (2002), 21–27.
[95] Jean Piaget. 2013. Play, dreams and imitation in childhood. https://doi.org/10.4324/9781315009698
[96] Jean A. Pratt, Karina Hauser, Zsolt Ugray, and Olga Patterson. 2007. Looking at human-computer interface design: Effects of ethnicity in computer agents. Interacting with Computers 19, 4 (2007), 512–523.
[97] Sheri Graner Ray. 2010. Tutorials: Learning to play. Gamasutra. http://www.gamasutra.com/view/feature/134531/tutorials_learning_to_play.php (2010).
[98] Melissa Regan and Sheri Sheppard. 1996. Interactive multimedia courseware and the hands-on learning experience: an assessment study. Journal of Engineering Education 85, 2 (1996), 123–132.
[99] Lloyd P Rieber. 1994. Computers graphics and learning. Brown & Benchmark Pub.
[100] Judy Robertson. 2012. Making games in the classroom: Benefits and gender concerns. Computers and Education 59, 2 (2012), 385–398. https://doi.org/10.1016/j.compedu.2011.12.020
[101] Judy Robertson. 2013. The influence of a game-making project on male and female learners' attitudes to computing. Computer Science Education 23, 1 (2013), 58–83. https://doi.org/10.1080/08993408.2013.774155
[102] Judy Robertson and Cathrin Howells. 2008. Computer game design: Opportunities for successful learning. Computers and Education 50, 2 (2008), 559–578. https://doi.org/10.1016/j.compedu.2007.09.020
[103] David Rojas, Rina R. Wehbe, Lennart E. Nacke, Matthias Klauser, Bill Kapralos, and Dennis L. Kappen. 2013. EEG-based assessment of video and in-game learning. (2013), 667. https://doi.org/10.1145/2468356.2468474
[104] David H Rose and Anne Meyer. 2002. Teaching Every Student in the Digital Age: Universal Design for Learning. (2002), 216. https://doi.org/10.1007/s11423-007-9056-3
[105] Rinat B. Rosenberg-Kima, E. Ashby Plant, Celestee E. Doerr, and Amy Baylor. 2010. The influence of computer-based model's race and gender on female students' attitudes and beliefs towards engineering. Journal of Engineering Education (2010), 35–44. https://doi.org/10.1002/j.2168-9830.2010.tb01040.x
[106] Richard M. Ryan, C. Scott Rigby, and Andrew Przybylski. 2006. The Motivational Pull of Video Games: A Self-Determination Theory Approach. Motivation and Emotion 30, 4 (2006), 344–360. https://doi.org/10.1007/s11031-006-9051-8
[107] Katie Salen and Eric Zimmerman. 2004. Rules of Play: Game Design Fundamentals. 670 pages. https://doi.org/10.1093/intimm/dxs150
[108] Ted Selker. 2002. COACH: a teaching agent that learns. Commun. ACM (2002). https://doi.org/10.1145/176789.176799
[109] Aviv Shachak, Rustam Dow, Jan Barnsley, Karen Tu, Sharon Domb, Alejandro R Jadad, and Louise Lemieux-Charles. 2013. User Manuals for a Primary Care Electronic Medical Record System: A Mixed-Methods Study of User- and Vendor-Generated Documents. IEEE Transactions on Professional Communication 56, 3 (2013), 194–209. https://doi.org/10.1109/tpc.2013.2263649
[110] Mel Silberman. 1996. Active Learning: 101 Strategies To Teach Any Subject. ERIC.
[111] Katherine Stiwinter. 2013. Using an interactive online tutorial to expand library instruction. Internet Reference Services Quarterly 18, 1 (2013), 15–41.
[112] John Sweller. 1988. Cognitive load during problem solving: Effects on learning. Cognitive Science (1988). https://doi.org/10.1016/0364-0213(88)90023-7
[113] John Sweller, Jeroen J G Van Merrienboer, and Fred G W C Paas. 1998. Cognitive architecture and instructional design. Educational Psychology Review 10, 3 (1998), 251–296.
[114] Philip W Tiemann and Susan M Markle. 1990. Effects of varying interactive strategies provided by computer-based tutorials for a software application program. Performance Improvement Quarterly 3, 2 (1990), 48–64.
[115] Barbara Tversky, Julie Bauer Morrison, and Mireille Betrancourt. 2002. Animation: can it facilitate? International Journal of Human-Computer Studies 57 (2002), 247–262. https://doi.org/10.1006/ijhc.1017
[116] Unity. 2019. Unity Public Relations. https://unity3d.com/public-relations
[117] Hans van der Meij and Jan Van Der Meij. 2014. A comparison of paper-based and video tutorials for software learning. Computers & Education 78 (2014), 150–159.
[118] Monica Visani Scozzi, Ioanna Iacovides, and Conor Linehan. 2017. A Mixed Method Approach for Evaluating and Improving the Design of Learning in Puzzle Games. (2017), 217–228. https://doi.org/10.1145/3116595.3116628
[119] Xiaobo Wang, Guanhui Lai, and Chao Liu. 2009. Recovering Relationships between Documentation and Source Code based on the Characteristics of Software Engineering. Electronic Notes in Theoretical Computer Science 243 (2009), 121–137. https://doi.org/10.1016/j.entcs.2009.07.009
[120] Helen Wauck and Wai-Tat Fu. 2017. A Data-Driven, Multidimensional Approach to Hint Design in Video Games. (2017), 137–147. https://doi.org/10.1145/3025171.3025224
[121] Megan A. Winget and William Walker Sampson. 2011. Game development documentation and institutional collection development policy. (2011), 29. https://doi.org/10.1145/1998076.1998083
[122] Alyssa Friend Wise and Kevin O'Neill. 2009. Beyond more versus less: A reframing of the debate on instructional guidance. In Constructivist Instruction: Success or Failure? https://doi.org/10.4324/9780203878842
[123] Michael Wooldridge and Nicholas R Jennings. 1995. Intelligent agents: Theory and practice. The Knowledge Engineering Review (1995). https://doi.org/10.1017/S0269888900008122
[124] George J Xeroulis, Jason Park, Carol-Anne Moulton, Richard K Reznick, Vicki LeBlanc, and Adam Dubrowski. 2007. Teaching suturing and knot-tying skills to medical students: a randomized controlled study comparing computer-based video instruction and (concurrent and summary) expert feedback. Surgery 141, 4 (2007), 442–449.
[125] Mark Zachry. 2001. Constructing usable documentation: A study of communicative practices and the early uses of mainframe computing in industry. Journal of Technical Writing and Communication 31, 1 (2001), 61–76.
[126] Junji Zhi, Vahid Garousi-Yusifoğlu, Bo Sun, Golara Garousi, Shawn Shahnewaz, and Guenther Ruhe. 2015. Cost, benefits and quality of software development documentation: A systematic mapping. Journal of Systems and Software 99 (2015), 175–198. https://doi.org/10.1016/j.jss.2014.09.042
[127] Martina Ziefle. 1998. Effects of Display Resolution on Visual Performance. Human Factors: The Journal of the Human Factors and Ergonomics Society 40, 4 (1998), 554–568. https://doi.org/10.1518/001872098779649355