Monday, November 20, 2017

God of War - Attack Cancellation


I recently replayed God of War 3 and found myself thinking that the attack cancellation system is the heartbeat of combat. For clarity, cancellation is the ability to interrupt one animation before it completes by beginning another animation. For example, by pressing square, square, triangle the player can execute a combo attack called Plume of Prometheus (PoP). However, mid-combo the player can choose to roll away or guard, cancelling the combo and the associated animation.

At a high level, this works so well for God of War because it adds responsiveness to a generally heavy combat system. I use the word heavy to indicate that Kratos is not a nimble character:

a) A basic melee attack (ex: pressing square) can take about 0.5 – 1.5 seconds to play out after player input is registered.

b) Certain basic melee attacks have a “readying” animation before the attack is executed. For example, holding down triangle launches an NPC into the air. This delay is induced because the attack leaves the enemy vulnerable for a large amount of time.

c) There is a supported time gap between the individual actions of a combo. For example, while executing the PoP the player can press the first square, watch the animation play out for half a second, and then press the second square to continue the combo.

Given these factors, combat would slow down dramatically without the cancellation system. With it, combat can support enemy characters that are much faster than Kratos. It adds a real-time element with a much stronger sense of timing. It requires the player to watch for incoming enemy attacks while in mid-attack and cancel accordingly. It also allows emergent combos that can take place despite enemy attacks: for example, square, square, enemy attacks, roll to evade, continue attacking. This is made possible by cancelling the second square and then cancelling the completion of the roll.



However, not all attacks can be cancelled at any given instant; that would be an overpowered mechanic. Where it gets interesting is that every attack or combo has a cancellation window, and this is where the system begins to support interesting player decision making, because the cancellation window determines the duration of player vulnerability (a minimal sketch of how such a window might be represented follows the list below):

a) Partially Cancellable – Most combos cannot be cancelled in their final stage. For example, the PoP can be cancelled only before the final triangle attack. The final attack in the combo delivers significantly more damage than the rest of the combo combined and leaves the enemy in a stagger state for a certain duration. However, because it is non-cancellable it also leaves the player vulnerable during the animation and for a certain duration of time at the end of the attack. This creates a high risk – high reward scenario. Additionally, the assessment of risk is not arbitrary. It is informed by a few factors:

i. Damage Radius – This informs how many enemies will be left staggered at the end of the attack. Therefore, the player must assess whether the finale of the combo will leave enough enemies staggered for her to recover during her vulnerability window.

ii. Redirectability – The direction of certain finale attacks can be changed mid attack. The player can consider the optimal direction of the attack to maximize the number of staggered enemies.

b) Non-Cancellable – Certain combos that play out for longer durations are completely non-cancellable. For example, Cyclone of Chaos (L1 + square) does radial damage for as long as 3-4 seconds. Once again, we have a high risk-high reward situation. This decision is typically based on enemy defense ability. If the player is surrounded by enemies that can block the Cyclone, then it probably doesn’t make sense to execute the attack. There are certain attacks that are non-cancellable and invulnerable, meaning that the player can’t be damaged in the duration of the combo. These are typically attacks budgeted based on metrics like mana. The player can use these attacks in times of intense strife.
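
To make the idea concrete, here is a minimal, hypothetical sketch in Unity-style C# of how an attack could expose a cancellation window that a combat controller checks before allowing a roll or guard to interrupt it. The names (AttackDefinition, cancelWindowEnd) and numbers are invented for illustration and are not how the actual game implements this.

    using UnityEngine;

    // Hypothetical data describing one attack in a combo.
    [System.Serializable]
    public class AttackDefinition
    {
        public string animation;              // animation state to play
        public float duration = 1.0f;         // total animation length in seconds
        public float cancelWindowEnd = 0.7f;  // normalized time after which the attack can no longer be cancelled
    }

    public class MeleeAttackState : MonoBehaviour
    {
        public AttackDefinition current;
        private float attackStartTime;

        public void BeginAttack(AttackDefinition attack)
        {
            current = attack;
            attackStartTime = Time.time;
            GetComponent<Animator>().Play(attack.animation);
        }

        // A roll or guard input calls this before interrupting the attack.
        public bool CanCancel()
        {
            if (current == null) return true; // not attacking at all
            float normalizedTime = (Time.time - attackStartTime) / current.duration;
            return normalizedTime < current.cancelWindowEnd;
        }
    }

In this framing, a fully non-cancellable attack is simply one whose window ends at zero, and a partially cancellable combo is a sequence of attacks whose final entry has a very small window.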

The cancellation system also makes other systems feasible. For example, consider the attack redirection system. When the player blocks an enemy attack at the exact moment it is about to land, the attack is fired back at the NPC. With cancellation, the player can be mid-attack and cancel her initial attack to redirect an incoming enemy attack. This allows an unbroken combat flow.

Friday, October 20, 2017

Isles of War : A Retrospective Analysis


‘Isles of War’ was an MMO RTS game that was developed and published by Disney Interactive in October 2013. The overarching business goal was to expand the Disney portfolio into a “mid-core” audience that was growing in the social gaming space. Kixeye had recently launched their very successful game titled ‘Battle Pirates’. Therefore, the decision was made to create a similar naval combat strategy game. The key differentiating factors were to be the Pirates of the Caribbean IP and a unique systemically deep combat system that focused on the real-time component over the strategy component. The latter came from the thinking that boys aged between 15 and 25 would have a high tolerance threshold towards complex action systems.

We started by thinking about how we could do something new with ship based combat, a genre that’s been done many times before. The idea of a broadside based firing system was proposed wherein ships fired projectiles on their side as shown below:



This automatically gave us a basis to create lots of different types of ships that varied in combat ability based on range, radius, damage, health, velocity etc. Players would enjoy funneling their resources into creating fleets that would serve different types of situations. This also gave us a nice basis to monetize: players would pay for premium ship upgrades etc.

In the prototyping phase we found that the core engagement in this mechanic was navigating your ships to a point where you could fire at your opponent but your opponent couldn't fire back at you:


In our exuberance to create a system where the player was in charge, we let the player control every ship in their fleet by clicking on it. The player then selected the opponent ship to fire at, also by clicking, and fired using the spacebar. When a player ship was not selected it would be AI controlled. The result, we thought, would be chaotic multitasking fun:


As they say, a plan never survives first contact with the player. This battle system didn’t work for several reasons:
  1. Poor Understanding of Target Audience – "Chaotic fun" to us was just confusing for our target audience. It turns out that "mid-core" audiences vary greatly across platforms in their tolerance for complex battle systems. F2P gamers aren't the same as console gamers.
  2. Misalignment with Core Fantasy and Genre – Ultimately, the core fantasy of any MMO based pirate game is going to be world exploration, forming and battling against alliances etc. Our battle system demanded too much attention and didn’t let the player enjoy the other facets of the game.
  3. Inability to Scale – The battle system would need to have AI that was tuned to different types of ships. With the new emphasis on multiple ship categories unlocked by exploring the world we didn’t have the bandwidth to support satisfying AI for all these categories.
  4. Additional Battle Mode – The scaling problem was made even worse by the need to support a second battle mode: ship versus city battles.
So, we went back to the drawing board. We needed to deliver a satisfying battle system that had a lower barrier to entry, focused on the strategy component instead of the real-time component, and scaled systemically, i.e. without the need to look at each ship class individually. All of this while preserving the original core source of engagement and the differentiating factor of broadside firing. In this vein, new goals were established for the system:

  1. The AI would need to scale systemically without the need for scripting on individual ships.
  2. Players should be competitive even if their ships didn’t have the same attack capabilities.
  3. Movement would have to be deterministic to support other features such as "watch mode".
Ultimately what we shipped was a flocking, follow-the-leader style mechanic (a minimal sketch of the idea appears below). You can read more on this mechanic by clicking on the portfolio link below if you are interested:



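As a rough illustration of the shipped idea, here is a minimal, hypothetical Unity-style C# sketch (names like FollowTheLeaderShip and turnRateDegrees are invented, not the production code) of ships steering toward the ship ahead of them with a capped turn rate, which is what makes turning radius matter so much:

    using UnityEngine;

    public class FollowTheLeaderShip : MonoBehaviour
    {
        public Transform leader;              // the ship directly ahead in the formation
        public float followDistance = 4f;     // preferred gap to the leader
        public float speed = 3f;              // forward speed in units/second
        public float turnRateDegrees = 45f;   // max turn rate; smaller values mean a wider turning radius

        void Update()
        {
            if (leader == null) return;

            // Aim at a point just behind the leader rather than at the leader itself.
            Vector3 target = leader.position - leader.forward * followDistance;
            Vector3 toTarget = target - transform.position;
            if (toTarget.sqrMagnitude < 0.01f) return;

            // Turn toward the target at a capped rate, then move forward.
            Quaternion desired = Quaternion.LookRotation(toTarget.normalized, Vector3.up);
            transform.rotation = Quaternion.RotateTowards(transform.rotation, desired, turnRateDegrees * Time.deltaTime);
            transform.position += transform.forward * speed * Time.deltaTime;
        }
    }

Because movement depends only on the leader's pose and fixed tuning values, the same component works for every ship class and stays deterministic for a given set of inputs.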
With this movement system, all the mentioned goals were hit effectively. However, there were several drawbacks:

  1. The system was overly predictable.
  2. The system favored ships with a tighter turning radius. Example: a Level 57 fleet obliterating a Level 70 fleet.
  3. Combat didn’t change as players leveled up.
While this system aligned with the higher-level goals for the game, simple additions could have solved some of these problems. One example was a ship boost system, in which the enemy flagship would burst forward for a short duration while in player range. This could be implemented in a systemic manner that didn't require scripting on an individual ship level, and in a manner that didn't break the requirement for the overall battle system to be deterministic. In doing so it would at least alleviate some of the above-mentioned issues (a minimal sketch follows the list below):
  1. This would introduce an element of surprise that would invalidate player movement strategy and force her to reassess the board when a boost occurred.
  2. The number of ship boosts available to the NPC would be dependent on the turning radius and the overall level difference between the two fleets.
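
Here is a minimal, hypothetical sketch of how such a boost budget could be computed systemically from tuning values rather than per-ship scripts. FlagshipBoost, the specific formula and the numbers are assumptions for illustration, not the shipped design:

    using UnityEngine;

    public class FlagshipBoost : MonoBehaviour
    {
        public float baseSpeed = 3f;
        public float boostMultiplier = 2.5f;
        public float boostDuration = 1.5f;
        public float triggerRange = 10f;       // boost fires when a player ship gets this close
        public Transform nearestPlayerShip;    // supplied by the battle manager

        private int boostsRemaining;
        private float boostEndTime = -1f;

        // Called once at battle start; more boosts for agile ships and larger level gaps.
        public void Initialize(float turnRateDegrees, int playerFleetLevel, int npcFleetLevel)
        {
            int levelGap = Mathf.Max(0, playerFleetLevel - npcFleetLevel);
            boostsRemaining = Mathf.RoundToInt(turnRateDegrees / 30f) + levelGap / 5;
        }

        void Update()
        {
            bool boosting = Time.time < boostEndTime;
            if (!boosting && boostsRemaining > 0 && nearestPlayerShip != null &&
                Vector3.Distance(transform.position, nearestPlayerShip.position) < triggerRange)
            {
                boostsRemaining--;
                boostEndTime = Time.time + boostDuration;
                boosting = true;
            }

            float speed = boosting ? baseSpeed * boostMultiplier : baseSpeed;
            transform.position += transform.forward * speed * Time.deltaTime;
        }
    }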


Wednesday, October 18, 2017

VR Titan Combat Part 3 - NPC AI Design


God of War and Final Fantasy are huge inspirations for me when it comes to epic set piece design. Boss encounters have multiple stages with changing boss abilities, each requiring the player to develop a new mental model of strategy and execution. These battles also have interest curves that usually look something like this :



Our goal then is to give ourselves enough levers to effectively create this type of curve. That said, let us try and discover what strengths and weaknesses a boss would need given the above player abilities (a minimal sketch of the boss's mode logic appears after the list below).
  1. Mobility – Consider an aerial mech boss that has two modes that it alternates between – aerial and grounded. This gives us an axis to manipulate difficulty as the player’s abilities are grounded and less effective against aerial enemies.

  2. Aerial Mode – The boss has two modes of movement.
    • Darting - Rapid darts from one spot to another. There is a cooldown between any two consecutive darts, where the player can effectively launch a missile.
    • Flying – Slower continuous aerial movement. The laser stream is effective in this mode as the re-positioning of the cursor can be done in a predictable manner.

      Core blast is highly ineffective in this mode. However, the player may still choose to employ it in a dire situation, to force the transition from aerial mode to the easier grounded mode.
  3. Aerial Mode Attacks
    • Rapid Shot – Lower damage, high speed projectile. This requires the player to anticipate and boost to dodge, or orient shield at a split second’s notice.
    • Spiral Missile – Lower speed, high damage projectile that loops around and trails the player. Boosting too early will simply cause the missile to follow the player. Bringing up shield will cause it significant damage and will need the player to watch the spiraling path closely.
    • Charged Blast – Similar to the player’s blast. The only way to effectively guard against this attack is to bring up special defence.
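
To make the alternation above concrete, here is a minimal, hypothetical Unity-style C# sketch of the boss switching between darting and flying movement, with a cooldown between darts that gives the player a missile window. All names and numbers are invented for illustration:

    using UnityEngine;

    public class AerialBossMovement : MonoBehaviour
    {
        public enum Mode { Darting, Flying }
        public Mode mode = Mode.Flying;

        public float dartCooldown = 2f;       // window in which the player can land a missile
        public float dartDistance = 8f;
        public float flySpeed = 3f;
        public float modeSwitchInterval = 10f;

        private float nextDartTime;
        private float nextModeSwitchTime;

        void Update()
        {
            if (Time.time >= nextModeSwitchTime)
            {
                mode = mode == Mode.Flying ? Mode.Darting : Mode.Flying;
                nextModeSwitchTime = Time.time + modeSwitchInterval;
            }

            if (mode == Mode.Flying)
            {
                // Slow, continuous movement: easy for the laser stream to track.
                transform.position += transform.right * flySpeed * Time.deltaTime;
            }
            else if (Time.time >= nextDartTime)
            {
                // Simplified instant reposition; a real dart would move rapidly rather than teleport.
                Vector3 offset = Random.insideUnitSphere * dartDistance;
                offset.y = Mathf.Abs(offset.y); // stay airborne
                transform.position += offset;
                nextDartTime = Time.time + dartCooldown;
            }
        }
    }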

VR Titan Combat Part 2 - Combat


Basic Attacks and UX – In line with the same goal of maintaining symmetry, both robotic hands contain the same offence. Below is the proposed list of attacks and control scheme :
Trigger : Fire projectiles at enemies. All attacks are projectile based. This decision is also in line with maintaining a slower combat pace. Hitscan weapons are typically used only in low time-to-kill, high-mobility shooters.

A/X button : Change projectile type. The player has two projectile types. Each is intended for a different purpose and exercises a different skill. New weapons in each category can be discovered and equipped through the course of the game. (A minimal data sketch follows the list below.)


  1. Laser Bolts
    • Pulling the trigger emits a continuous rapid stream of laser bolt like attacks.
    • High Frequency, Low Damage attack.
    • Intended to attack fast moving, high HP targets or distributed low HP targets.
    • Skill involved is continually reorienting cursor to ensure optimal number of bolts are landing on targets.
    • Easier to use against “bullet sponge” slow moving, high HP targets or clustered low HP targets.
    • These types of targets can be used to give players a break from high intensity challenges, i.e., a dip in the interest curve.
  2. Charged Missile
    • Pulling the trigger shoots a missile. Holding the trigger and releasing launches a more powerful missile.
    • Low Frequency, High Damage attack.
    • Intended to attack medium paced, high HP targets.
    • Skill involved is watching the enemy and identifying right time to release.
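
As a minimal, hypothetical sketch of how these two projectile types could be expressed as data in Unity-style C# (names such as ProjectileWeapon and fireInterval are invented; the numbers are illustrative only):

    using UnityEngine;

    // Hypothetical data block describing one equippable projectile type.
    public class ProjectileWeapon : ScriptableObject
    {
        public string displayName;
        public float fireInterval;             // seconds between shots while the trigger is held
        public float damage;
        public float projectileSpeed;
        public bool chargeable;                // true for the charged missile
        public float maxChargeMultiplier = 1f; // damage multiplier at full charge

        public static ProjectileWeapon LaserBolts()
        {
            var w = CreateInstance<ProjectileWeapon>();
            w.displayName = "Laser Bolts";
            w.fireInterval = 0.08f;    // high frequency
            w.damage = 2f;             // low damage per bolt
            w.projectileSpeed = 60f;
            w.chargeable = false;
            return w;
        }

        public static ProjectileWeapon ChargedMissile()
        {
            var w = CreateInstance<ProjectileWeapon>();
            w.displayName = "Charged Missile";
            w.fireInterval = 1.5f;     // low frequency
            w.damage = 40f;            // high damage per shot
            w.projectileSpeed = 20f;
            w.chargeable = true;
            w.maxChargeMultiplier = 3f;
            return w;
        }
    }

Expressing the types as data like this is also what would let new weapons in each category be discovered and equipped over the course of the game without new code per weapon.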
B/Y Button – With the slightly slower paced combat and emphasis on higher time-to-kill opponents, it is likely that players will spend several minutes in a battle arena. Therefore, an attack type that alters the space might be useful. For example, consider an explosive mine detonator. The player uses the B or Y button to launch a mine. She then needs to shoot the mine to detonate it. Detonation causes damage within a certain radius and leaves the area in a permanent state of fire. This fire can damage enemies and the player alike.


Encounter Design

Optimal Weapon Configuration - Both weapons can potentially be equipped simultaneously, i.e., one on each hand, or the same weapon can be equipped on both. Encounters can be designed with the intention of having players decipher the optimal sequence of configurations.



This concept when combined with 1v2 or 1v3 battles where the ‘n’ NPCs on the board demand a different optimal configuration could result in some interesting player mental models. Players must now map each NPC’s state to an optimal weapon configuration and change their loadouts when they encounter each NPC.


Setting up Flank – Players can use a combination of attacks to flank opponents.


Special Attack – The player also has one high damage, high cool down time attack. This attack is a hitscan core blast.


The player executes this attack by pulling both triggers and bringing her controllers together. Once the core is triggered, the player must keep both triggers held to keep the core blast active. She may move her hands to reorient the core blast. However, the core will move slower than her hands, i.e., there is not a 1:1 mapping between player hand movement and blast trajectory. This is to prevent the attack from becoming overpowered. A minimal sketch of this lag follows.
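
Here is a minimal, hypothetical sketch of that deliberately lagged aiming, where the blast direction rotates toward the hands' aim at a capped rate. CoreBlastAim and the tuning value are invented for illustration:

    using UnityEngine;

    public class CoreBlastAim : MonoBehaviour
    {
        public Transform leftHand;
        public Transform rightHand;
        public float maxTurnDegreesPerSecond = 40f; // the blast trails behind fast hand movement

        void Update()
        {
            // Aim along the average forward direction of both hands.
            Vector3 sum = leftHand.forward + rightHand.forward;
            if (sum.sqrMagnitude < 0.001f) return;
            Quaternion desired = Quaternion.LookRotation(sum.normalized, Vector3.up);

            // Rotate toward the desired aim at a capped rate, breaking the 1:1 hand mapping.
            transform.rotation = Quaternion.RotateTowards(transform.rotation, desired, maxTurnDegreesPerSecond * Time.deltaTime);
        }
    }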


Defense – Each hand has a shield that covers a portion of the player. The shield is sufficiently small that the player will need to reorient the shield to block attacks from different directions.


The player accesses the shield using the grip button. The player may not attack and defend with a single hand simultaneously. The shield also degenerates as it blocks incoming attacks and regenerates automatically when it is inactive. Once a shield "breaks" it may not be used for a brief amount of time. This potentially creates some interesting physical posturing, blocking with one hand while attacking with the other. A minimal sketch of the shield state follows.
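
A minimal, hypothetical sketch of that per-hand shield state follows. HandShield and the tuning numbers are invented:

    using UnityEngine;

    public class HandShield : MonoBehaviour
    {
        public float maxStrength = 100f;
        public float regenPerSecond = 15f;   // only while the shield is lowered
        public float breakCooldown = 3f;     // unusable for this long after breaking

        public bool raised;                  // set from the grip button
        private float strength;
        private float brokenUntil = -1f;

        void Awake() { strength = maxStrength; }

        public bool CanRaise() { return Time.time >= brokenUntil; }

        // Called by incoming enemy projectiles that hit the shield.
        public void Absorb(float damage)
        {
            if (!raised) return;
            strength -= damage;
            if (strength <= 0f)
            {
                strength = 0f;
                raised = false;
                brokenUntil = Time.time + breakCooldown; // the shield "breaks"
            }
        }

        void Update()
        {
            if (!raised && strength < maxStrength)
                strength = Mathf.Min(maxStrength, strength + regenPerSecond * Time.deltaTime);
        }
    }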


Special Defence – Mirroring the special attack is special defence, a mode where the player is enveloped in an invincible shield for a brief amount of time. The player may not, however, trigger core blast while in special defence mode. Special defence is triggered by holding down both grips simultaneously.

VR Titan Combat Part 1 - Locomotion


Player Character - Consider a slow moving robotic character, heavy but combat responsive. The player views the world through the cockpit control center. Think Titanfall or Archangel. This type of a character and combat system suits VR in a number of ways as will be demonstrated.



Health System – Consider a health system where the cracked state of the cockpit glass is a representation of health. The more damage the player takes, the more cracked the glass becomes.


This gives us the opportunity to create diegetic UI, i.e. UI that is part of the game world. It may seem trivial, however with VR I believe it is critical to remove all elements of game-like, camera-locked UI, which break immersion.

Additionally, there is an interesting negative feedback loop that could be exploited regarding world visibility. The cracked glass could be used to obfuscate portions of the game world, making combat more difficult the more damage the player takes. Essentially the opposite of an equalizer. It may seem counterintuitive, but there is potential for interesting player choice when combined with the right health regeneration system.

Consider a health regeneration system that is powered by health collectibles. The player and her AI enemy both have very little health left. The player now has to choose between trying to finish the enemy off with her limited world visibility or trying to find a health collectible. Both choices pose a different kind of risk. Finding a health collectible means greater survival chances once it is found; however, the player may wind up dead before it is found. On the other hand, continuing the battle with impaired vision means lower odds of winning.

This health system could be even more interesting when layered on top of a second health layer such as a regenerative shield. The player's health does not drop until the shield goes down. The shield regenerates over time automatically, and separate power-ups are available to restore the shield as well. Think Mass Effect. This adds another layer of decision making when the player is deciding whether to engage with enemies or back away to regenerate shield or recover health. A minimal sketch of the layered model follows.
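
Here is a minimal, hypothetical sketch of that layered model, including a hook for driving the cracked-glass visual from the health fraction. CockpitHealth and UpdateGlassCracks are invented names:

    using UnityEngine;

    public class CockpitHealth : MonoBehaviour
    {
        public float maxHealth = 100f;
        public float maxShield = 50f;
        public float shieldRegenPerSecond = 5f;

        private float health;
        private float shield;

        void Awake() { health = maxHealth; shield = maxShield; }

        public void TakeDamage(float amount)
        {
            // The shield absorbs damage first; health only drops once the shield is down.
            float absorbed = Mathf.Min(shield, amount);
            shield -= absorbed;
            health = Mathf.Max(0f, health - (amount - absorbed));
            UpdateGlassCracks();
        }

        public void CollectHealth(float amount)
        {
            health = Mathf.Min(maxHealth, health + amount);
            UpdateGlassCracks();
        }

        void Update()
        {
            shield = Mathf.Min(maxShield, shield + shieldRegenPerSecond * Time.deltaTime);
        }

        void UpdateGlassCracks()
        {
            // 0 = pristine glass, 1 = fully cracked; a glass shader or overlay would read this value.
            float crackAmount = 1f - health / maxHealth;
            Debug.Log("Crack amount: " + crackAmount);
        }
    }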

Mobility – As mentioned before, the player should be using her tools at a subconscious level. Something that aids this greatly is a symmetrical control scheme. Consider a system where the player uses either joystick on the Oculus Touch Controller to move steadily. In line with the character's large robotic structure, the player lumbers slowly. The forward direction is based on the orientation of the HMD. This essentially provides a navigation system that is coupled with the player's frustum of view.


When the player pushes both joysticks simultaneously in the same direction, the character boosts for a short period of time. There is also a cooldown period between consecutive boosts. This ability is primarily intended to evade enemy attacks. The intended user experience is for the player to move around with one joystick and, when an enemy attack is spotted, momentarily engage the other joystick to boost. A minimal sketch of this scheme follows.
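
As a minimal, hypothetical Unity-style C# sketch of this scheme (TitanLocomotion is an invented name; the joystick values are assumed to be read elsewhere from whichever VR input API is in use):

    using UnityEngine;

    public class TitanLocomotion : MonoBehaviour
    {
        public Transform hmd;                 // head transform; defines the forward direction
        public float walkSpeed = 1.5f;        // slow, lumbering base movement
        public float boostMultiplier = 4f;
        public float boostDuration = 0.6f;
        public float boostCooldown = 3f;

        private float boostEndTime = -1f;
        private float nextBoostAllowed;

        // leftStick/rightStick come from the VR controllers each frame.
        public void Move(Vector2 leftStick, Vector2 rightStick)
        {
            Vector2 input = leftStick + rightStick;
            if (input.sqrMagnitude < 0.01f) return;

            // Both sticks pushed the same way triggers a boost, subject to a cooldown.
            bool bothPushed = leftStick.magnitude > 0.5f && rightStick.magnitude > 0.5f &&
                              Vector2.Dot(leftStick.normalized, rightStick.normalized) > 0.8f;
            if (bothPushed && Time.time >= nextBoostAllowed)
            {
                boostEndTime = Time.time + boostDuration;
                nextBoostAllowed = Time.time + boostCooldown;
            }

            // Forward is wherever the HMD is facing, flattened onto the ground plane.
            Vector3 forward = Vector3.ProjectOnPlane(hmd.forward, Vector3.up);
            if (forward.sqrMagnitude < 0.001f) return;
            forward.Normalize();
            Vector3 right = Vector3.Cross(Vector3.up, forward);

            float speed = Time.time < boostEndTime ? walkSpeed * boostMultiplier : walkSpeed;
            Vector3 direction = (forward * input.y + right * input.x).normalized;
            transform.position += direction * speed * Time.deltaTime;
        }
    }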


This type of a navigation scheme is beneficial to VR in a few ways :
  1. Slow Movement - By keeping movement slow, we allow the player to engage with enemies all around them. Farpoint is a good example of a game that employs fast movement and is forced to spawn enemies directly in front of the player to avoid nausea due to rapid head movement.
  2. Continuous Movement – Movement is non-teleport based, which is beneficial for a VR action game for a number of reasons.

  • Teleporting introduces additional combat design issues. With any teleport system, there is a lag between player action and character movement. What happens if an enemy projectile hits during this interval? In addition, there is a tradeoff between providing a comfortable teleport experience and the desire to minimize this lag for combat efficiency.
  • Teleporting breaks the flow of combat and forces the player to reorient once the teleport is complete.
  • Avoiding teleportation saves the development effort of implementing nausea-free teleport effects.
  • Teleporting tends to be an overpowered evasion mechanism:
    https://www.youtube.com/watch?v=2YrXn1O6ScY

A VR Multitasking Mechanic Proposal


Two individual game mechanics are often more engaging when combined in a parallel, time-bound manner. Action adventure games do this all the time: fire at an enemy while paying attention to your health to retreat to cover; avoid obstacles while being dragged along the street, while also pointing a gun to fire at an NPC, etc.

What if we combined a flying obstacle avoidance mechanic with an enemy shooting mechanic? The player flies by orienting her left controller in the direction she wants to fly. The player is continually drifting forwards towards obstacles and may not point her left controller backwards. She shoots using her right controller (for right handed players) at enemies that spawn and float towards her.
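
A minimal, hypothetical sketch of the flight half of this mechanic (ControllerFlight is an invented name; world forward is assumed to be +Z, and the clamp is what keeps the player from steering backwards):

    using UnityEngine;

    public class ControllerFlight : MonoBehaviour
    {
        public Transform leftController;   // its forward vector steers the flight
        public float flySpeed = 6f;

        void Update()
        {
            Vector3 steer = leftController.forward;

            // The player is always drifting forwards; backward-pointing input is clamped out.
            steer.z = Mathf.Max(0.1f, steer.z);
            transform.position += steer.normalized * flySpeed * Time.deltaTime;
        }
    }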



This mechanic leverages VR in many ways:
  1. Leverages large frustum of view – The player’s moment to moment experience and decision making must be made based on visual input from the world all around her.
  2. Leverages hands in physical space – The player’s input to the world is based on physical orientation of her hands. Both are made possible only through VR.
  3. Hand Synchronization – This benefit is not necessarily unique to VR, but made more visceral by the medium. Human beings are mentally wired to move their limbs with a degree of association between them. This is why playing an instrument is difficult. The studio I'm working at within PlayStation, Pixelopus, launched Entwined based on breaking this association. In the game, the two PlayStation joysticks each mapped to control of a separate character. The game was about navigating these characters independently but in parallel. The proposed mechanic could potentially leverage this, in the context of breaking the association between hand movements.

                       
Before and during prototyping I would also be engaging in a risk analysis phase:
  1. Alignment with Higher Level Goals – Does this mechanic align with the type of games the studio is interested in? Can it support a narrative? Is it too "arcadey"? Should this be a primary or secondary game mechanic?
  2. Player Comfort – Is it fun to be looking around the world rapidly? What is the optimal play session duration before a player’s hands begin to ache? Does flying induce nausea? Is it even possible to be moving one’s hands independent of each other?
  3. Visual Input – A large frustum of view is great. But does the mechanic cause sensory overload? Can the player process everything in the world intelligently? Are her actions random or driven by visual input?
  4. Production Impact – Do we have the resources to create content that provides a meaningful arc to the experience based on this mechanic?
Parallel to the risk analysis phase, I would also engage in identifying the core engagement of the mechanic. The findings here feed directly into the risk analysis phase. Basically, this phase is to identify the different ways this mechanic can be engaging. A large part of this phase is identifying the basis for player decision making and the time scale at which these decisions are being made. Stated otherwise, we're trying to identify the time between visual input and player reaction. In first person shooters, this is a quantifiable metric called Time to Kill (TTK). In our case, this boils down to level design semantics:
  1. Reactionary – Here obstacles are placed close to each other and the player weaves in and out, making on the fly decisions. Think about driving in GTA.
    Enemy Spawn - Close to the player, Intended as elements of surprise
    Obstacle Sizes – Small
    Enemy Health – Low
    Player Speed – High


     
  2. Strategy Based – Here obstacles are clumped together with an escape route. The player must identify and navigate to the escape route in time before the collision. Think about the flying obstacle sequences in God of War.
    Obstacle Sizes – Large
    Player Speed – Low

     

    Enemies and obstacles appear alternatingly, so the player isn't forced to exercise evasion and firing skills simultaneously. Enemies appear in waves and the player must prioritize enemies based on distance to player, speed, motion type, firing ability etc. Think Doom.
    Enemy Spawn – Far away from player (intended for players to prioritize)
    Enemy Health – Varied (Medium to High)





Teaching Input Paradigms in VR

Everyone expects cool things from VR. Give us experiences where we are spell casting wizards, astronauts in space stations and cowboys in badlands, our players tell us. When I was at Oculus, Brendan Iribe used to say our mission was to let users be anyone, anywhere, at any time.

Well guess what, wizards, astronauts and cowboys interact with the world in very different ways! They have very different verbs. In order to truly fulfill the fantasy of being these characters, we need to tutorialize players into different input paradigms. This is tricky for several reasons:
  1. Novelty – VR input controllers are still in their first generation. This means even basic tactile elements, like where the buttons are, take some getting used to.
  2. Lack of Input Standardization – The Oculus Touch Controller, HTC Vive Controllers and PS Move are all very different. Additionally, somehow we need to make you feel like a wizard, a cowboy and an astronaut all with the same input device!
  3. The Uncanny Valley – No one complains that pressing the X button doesn’t “really feel” like jumping. That’s because the experience is 2 degrees removed from the player. The player presses keys on a controller that affects pixels on a screen. In VR there is a 1:1 real time mapping between your virtual and real world hands. This level of immersion greatly enhances player expectation. With great power comes great responsibility!
However, for a player to enter a flow state, a critical requirement is that the tools at her disposal simply seep into her subconscious. When you are playing Destiny, you aren't thinking about where your finger needs to move to fire or fly. You just do it.

So clearly, there is a tension between creating engaging “game like” experiences that make the player feel competent while also fulfilling the fantasy of “being” that character. The sooner this tension is realized, the better your experience will be. There are many ways one might go about resolving this tension:
  1. Bottom Up Approach – We need to start conceiving VR experiences from the ground up, i.e., thinking about interactions first and experience next.
  2. Subvert Player Expectation using Affordances - The experience I'm currently working on at PlayStation is about painting. However, in our case the player's hands are a brush and a palette. She isn't holding those objects in her hands; her hands are the objects themselves. As simple as it sounds, this has powerful effects:
    • By making her hands disappear the experience tells the player what she can and cannot do in the world.
    • It avoids simulation fatigue. Things that may seem engaging to do once or twice quickly lose their novelty. Explicitly turning doorknobs to open a door is a good example of an activity that is subject to simulation fatigue.
  3. Extrapolate Player Input - Players care more about having a great experience and feeling powerful than about having an experience where they are fully in control. For this reason, in our painting project we give the player template brushes instead of an actual palette. For example, the player gets a tree brush, a sun brush etc. Additionally, the tree brush doesn't draw a tree in the exact shape the player traces with her hand. It instead collects points across her hand movement and constructs an elegant spline that looks beautiful. This makes the player feel powerful with minimal skill or effort. Instead of testing the player's painting skills, the core engagement derives from expressivity. All these decisions were made after bottom up testing of the central interaction. (A minimal sketch of this point-collection-and-spline idea appears after this list.)
  4. Recontextualize Player Verbs – It is important to recognize the shelf life of an interaction. How long can it stay juicy? This becomes particularly important when an experience is not necessarily about a progression in skill. This tends to be true of most VR experiences for several reasons:
    • They tend to be short due to production budgets. Furthermore, greater iteration time is needed to “find the fun” because the medium is so new.
    • They tend to be narrative focused because the medium is so experiential.
    • Most interaction systems are still hard to master in VR.
For these reasons, you may feel a need to add different types of interactions to your world. This, however, comes with the added burden of tutorializing another input paradigm. Instead you can recontextualize the existing verb. This simply means using the same input to produce different effects. For example, in our experience we were faced with the desire to progress from 2D painting to 3D painting. Instead of implementing a new input mechanism for 3D painting, we have the player paint on a canvas which in turn produces 3D objects.
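
Here is a minimal, hypothetical sketch of the point-collection-and-spline idea mentioned in point 3 above: sparsely sample the hand's path, then evaluate a Catmull-Rom curve through the samples to get a smooth stroke. BrushStroke and the parameters are invented and are not the project's actual code:

    using System.Collections.Generic;
    using UnityEngine;

    public class BrushStroke : MonoBehaviour
    {
        public Transform hand;
        public float minSampleDistance = 0.05f;   // only record points this far apart
        private readonly List<Vector3> points = new List<Vector3>();

        void Update()
        {
            if (points.Count == 0 || Vector3.Distance(points[points.Count - 1], hand.position) > minSampleDistance)
                points.Add(hand.position);
        }

        // Catmull-Rom interpolation between p1 and p2; t in [0, 1].
        static Vector3 CatmullRom(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
        {
            float t2 = t * t, t3 = t2 * t;
            return 0.5f * ((2f * p1) + (-p0 + p2) * t +
                           (2f * p0 - 5f * p1 + 4f * p2 - p3) * t2 +
                           (-p0 + 3f * p1 - 3f * p2 + p3) * t3);
        }

        // Returns a smooth curve through the sampled points, regardless of how shaky the hand was.
        public List<Vector3> Smooth(int subdivisions = 8)
        {
            var curve = new List<Vector3>();
            for (int i = 0; i < points.Count - 1; i++)
            {
                Vector3 p0 = points[Mathf.Max(i - 1, 0)];
                Vector3 p1 = points[i];
                Vector3 p2 = points[i + 1];
                Vector3 p3 = points[Mathf.Min(i + 2, points.Count - 1)];
                for (int s = 0; s < subdivisions; s++)
                    curve.Add(CatmullRom(p0, p1, p2, p3, s / (float)subdivisions));
            }
            return curve;
        }
    }

The smoothing is what lets a rough, shaky trace still produce an elegant stroke, which is where the feeling of effortless power comes from.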

Sunday, October 8, 2017

The Last of Us : Technical Design for Stealth Mechanics


Consider a stealth based game where guards patrol along set paths. If an enemy detects the player, the enemy goes into a pursue state and pursues the player for some fixed period of time; afterwards, it resumes patrolling. If an enemy detects an incapacitated body (incapacitated by the player), it enters a search state in which it searches nearby for a specific time. Enemies aware of the player deal damage at a fixed rate when in proximity to the player.

Consider an architecture where a list of high level prioritized skills invoke lower level behaviors that interface with the locomotion controller:


Each skill decides internally whether it is on or off. Each skill also has an internal state machine. When a skill makes a decision it pushes a low level behavior onto a stack. All behaviors inherit from a BaseBehavior which the stack operates on. Similarly, there exists a BaseSkill class with functionality to push onto the stack etc. A minimal sketch of this layering follows.
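
Here is a minimal, hypothetical C# sketch of that layering, with invented names like BehaviorStack and NpcBrain; the real implementation would obviously carry far more state:

    using System.Collections.Generic;

    // A low level action (move, attack, play animation...) that runs until it reports completion.
    public abstract class BaseBehavior
    {
        public abstract void Tick(float deltaTime);
        public abstract bool IsComplete { get; }
    }

    // A high level, prioritized decision maker; pushes behaviors when it decides to act.
    public abstract class BaseSkill
    {
        protected BehaviorStack stack;
        protected BaseSkill(BehaviorStack stack) { this.stack = stack; }
        public abstract bool WantsControl();    // the skill decides internally whether it is on or off
        public abstract void Decide();          // push one or more behaviors onto the stack
    }

    public class BehaviorStack
    {
        private readonly Stack<BaseBehavior> behaviors = new Stack<BaseBehavior>();
        public bool IsEmpty { get { return behaviors.Count == 0; } }
        public void Push(BaseBehavior b) { behaviors.Push(b); }

        public void Tick(float deltaTime)
        {
            if (IsEmpty) return;
            BaseBehavior current = behaviors.Peek();
            current.Tick(deltaTime);
            if (current.IsComplete) behaviors.Pop();
        }
    }

    // Each frame, the NPC asks its skills for a decision in priority order, then ticks the stack.
    public class NpcBrain
    {
        public List<BaseSkill> prioritizedSkills = new List<BaseSkill>();
        public BehaviorStack stack = new BehaviorStack();

        public void Tick(float deltaTime)
        {
            // Only ask for a new decision when nothing is currently running.
            if (stack.IsEmpty)
            {
                foreach (BaseSkill skill in prioritizedSkills)
                {
                    if (skill.WantsControl()) { skill.Decide(); break; }
                }
            }
            stack.Tick(deltaTime);
        }
    }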

Examples of behavior pseudo code are as follows :

1) Move To Destination - Receives parameters speed, destination and animation from the invoking skill (a minimal sketch appears after these bullets).

  • Invoke path finding algorithm. May contain a reference to a navmesh in order to compute path.
  • Read next node in path.
  • If node is reached, rotate to orient towards next point in path. 
  • Translate in forward vector direction at set speed.
  • Play Animation. May have a blend space between walk and run animations based on speed.
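
As a minimal, hypothetical realization of the Move To Destination bullets above, building on the BaseBehavior sketch earlier (the navmesh query and the blend parameter name are assumptions):

    using UnityEngine;
    using UnityEngine.AI;

    public class MoveToDestinationBehavior : BaseBehavior
    {
        private readonly Transform npc;
        private readonly float speed;
        private readonly NavMeshPath path = new NavMeshPath();
        private int nextCorner = 1;

        public MoveToDestinationBehavior(Transform npc, Animator animator, Vector3 destination, float speed, string animBlendParam)
        {
            this.npc = npc;
            this.speed = speed;
            // Invoke path finding; references the navmesh to compute a path.
            NavMesh.CalculatePath(npc.position, destination, NavMesh.AllAreas, path);
            // Blend between walk and run animations based on speed.
            animator.SetFloat(animBlendParam, speed);
        }

        public override bool IsComplete { get { return nextCorner >= path.corners.Length; } }

        public override void Tick(float deltaTime)
        {
            if (IsComplete) return;
            // Orient towards the next point in the path, then translate forward at the set speed.
            Vector3 toTarget = path.corners[nextCorner] - npc.position;
            if (toTarget.magnitude < 0.2f) { nextCorner++; return; }
            npc.rotation = Quaternion.LookRotation(toTarget.normalized, Vector3.up);
            npc.position += npc.forward * speed * deltaTime;
        }
    }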

2) Attack - Receives attack type, attack direction, damage on success

  • Register for collision events
  • Play Animation
  • If collision registered, send damage message to PlayerController. PlayerController might manipulate the damage received based on state variables such as armor, defense state etc.
For skills to remain autonomous (i.e., not reference other skills) they will probably reference similar data. In order to ensure operations are not being duplicated across skills, and to keep skills modular and decoupled, we could use a "shared blackboard" (a minimal sketch follows).
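
A minimal, hypothetical sketch of such a blackboard, holding perception data that multiple skills read without referencing each other (the field names are invented):

    using UnityEngine;

    // Shared, centrally updated perception data that any skill may read.
    public class Blackboard
    {
        public float playerLastDetectedTimestamp = -1f;
        public float playerSeenDuration;          // cumulative time the player has been in the vision cone
        public Vector3 playerLastKnownPosition;
        public Vector3 lastIncapacitatedBodyPosition;
        public bool bodyDetected;

        // Called by the perception system when the vision cone hits the player.
        public void ReportPlayerSeen(Vector3 position, float deltaTime)
        {
            playerLastDetectedTimestamp = Time.time;
            playerSeenDuration += deltaTime;
            playerLastKnownPosition = position;
        }
    }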


The following skills may be required.

1) Patrol
  • To use this skill, the designer defines a set of patrol nodes and a patrol type (Looping, Ping-Pong etc)
  • To determine its status, the skill continuously reads the timestamp at which the player was last detected and the duration for which the player has been seen.
  • The blackboard itself updates these metrics based on the vision cone hitting the player.
2) Search
  • When a player is spotted for a threshold amount of time, search is enabled (a minimal sketch of the search planning appears after this list).
  • When enabled, raycasts are fired in a cone centered around the location the player was seen.
  • This returns a list of environment objects.
  • Based on the orientation of the NPC entity, a search point is generated for each object.
  • These points are represented in a graph.
  • A breadth first search is performed on the graph, with the point closest to last known player location being treated as origin.
  • Each node is visited one by one.
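
Here is a minimal, hypothetical sketch of the search point generation and breadth-first visit described above. The graph construction is simplified to a proximity check purely for illustration:

    using System.Collections.Generic;
    using UnityEngine;

    public static class SearchPointPlanner
    {
        // Raycast a cone around the last known player position and derive one search point per hit object.
        public static List<Vector3> GenerateSearchPoints(Vector3 npcPosition, Vector3 lastKnownPlayerPos,
                                                         float coneHalfAngle, int rayCount, float range)
        {
            var points = new List<Vector3>();
            Vector3 centerDir = (lastKnownPlayerPos - npcPosition).normalized;
            for (int i = 0; i < rayCount; i++)
            {
                float t = rayCount > 1 ? i / (float)(rayCount - 1) : 0.5f;
                Vector3 dir = Quaternion.AngleAxis(Mathf.Lerp(-coneHalfAngle, coneHalfAngle, t), Vector3.up) * centerDir;
                RaycastHit hit;
                if (Physics.Raycast(npcPosition, dir, out hit, range))
                    points.Add(hit.point + hit.normal * 0.5f); // stand slightly off the object
            }
            return points;
        }

        // Breadth first visit order, starting from the point closest to the last known player position.
        public static List<Vector3> VisitOrder(List<Vector3> points, Vector3 lastKnownPlayerPos, float neighbourRadius)
        {
            var order = new List<Vector3>();
            if (points.Count == 0) return order;

            int origin = 0;
            for (int i = 1; i < points.Count; i++)
                if ((points[i] - lastKnownPlayerPos).sqrMagnitude < (points[origin] - lastKnownPlayerPos).sqrMagnitude)
                    origin = i;

            var visited = new HashSet<int> { origin };
            var queue = new Queue<int>();
            queue.Enqueue(origin);
            while (queue.Count > 0)
            {
                int current = queue.Dequeue();
                order.Add(points[current]);
                for (int i = 0; i < points.Count; i++)
                {
                    // Points within the radius are treated as connected in the graph.
                    if (!visited.Contains(i) && (points[i] - points[current]).magnitude < neighbourRadius)
                    {
                        visited.Add(i);
                        queue.Enqueue(i);
                    }
                }
            }
            return order;
        }
    }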

Thursday, October 5, 2017

The Last of Us : Modularizing a Perception System


It is generally good AI practice to architect a codebase in a manner that reflects the rules of the world. The infected enemies in The Last of Us have tremendously blurred vision and instead perceive the world through an auditory system. Therefore, the AI code for infected perception should contain a system where events in the world generate "logical sounds" that raise the entity's awareness of the player.

I thought I would spend a fun couple of hours trying to see how this kind of system could be modularized for scalability and reuse. Here are the bare bones of the technical architecture I came up with. This was made with Unity and C#.

1) SoundEvent - When an event in the world produces sound, it creates a SoundEvent object.


2) For example, when the player is walking, the PlayerController creates SoundEvent objects based on movement.
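
The original post embedded code screenshots here; as a stand-in, here is a minimal, hypothetical reconstruction of the two pieces above. SoundPropogationManager is the manager described in point 3 below, and Instance, the field names and the footstep logic are my assumptions rather than the actual codebase:

    using UnityEngine;

    // 1) A logical sound produced by an event in the world.
    public class SoundEvent
    {
        public Vector3 position;
        public float intensity;     // how loud the sound is at its source
        public GameObject source;

        public SoundEvent(Vector3 position, float intensity, GameObject source)
        {
            this.position = position;
            this.intensity = intensity;
            this.source = source;
        }
    }

    // 2) Example producer: the player's movement generates sound events.
    public class PlayerController : MonoBehaviour
    {
        public float footstepIntensity = 1f;
        public float stepInterval = 0.5f;
        private float nextStepTime;

        void Update()
        {
            bool moving = Input.GetAxis("Vertical") != 0f || Input.GetAxis("Horizontal") != 0f;
            if (moving && Time.time >= nextStepTime)
            {
                nextStepTime = Time.time + stepInterval;
                SoundPropogationManager.Instance.soundEventTriggered(
                    new SoundEvent(transform.position, footstepIntensity, gameObject));
            }
        }
    }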


3) SoundPropogationManager - All sound events flow through this singleton manager. This is intended for a few reasons:

  • To ensure decoupling between sound generator and receiver.
  • To contain logic for things like dampening over distance and environment geometry that may occlude the sound etc.
  • It is singleton because it is almost like a static helper class. However, it may need to maintain state information.
a) soundEventTriggered - Fetches all components that are registered to listen for sound events, computes the dampened sound for each object and propagates them to the listening components.


b) IListenable - The interface through which SoundPropogationManager identifies components that are listening for sound events. This ensures decoupling between the manager and the component, essentially allowing multiple types of AI behaviors to listen for sound events.


c) getListenableObjects - The listening entities should be decoupled from the manager. Therefore, the manager should pull this data from the scene instead of individual entities registering with the manager (even if it comes with a performance overhead).

The manager may also register and unregister certain entities based on the position of the player and the entity. For example, if a player is in a room only other entities in the room need to be listening for sound events.


d) dampIntensity - The intensity is dampened exponentially over distance. This means that movement closer to an NPC has a much greater impact on the NPC than the same movement at a greater distance.
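
Again standing in for the original screenshots, here is a minimal, hypothetical reconstruction of the manager and interface described in 3a–3d. The singleton plumbing, the scene scan and the exact falloff constant are assumptions:

    using System.Collections.Generic;
    using UnityEngine;

    // b) Implemented by any component that wants to receive sound events.
    public interface IListenable
    {
        void OnSoundHeard(SoundEvent sound, float dampenedIntensity);
    }

    // 3) All sound events flow through this singleton manager.
    public class SoundPropogationManager : MonoBehaviour
    {
        public static SoundPropogationManager Instance { get; private set; }
        public float falloffRate = 0.5f;   // larger values make sound die off faster with distance

        void Awake() { Instance = this; }

        // a) Fetch listeners, compute the dampened intensity for each, and propagate.
        public void soundEventTriggered(SoundEvent sound)
        {
            foreach (MonoBehaviour listener in getListenableObjects())
            {
                float dampened = dampIntensity(sound, listener.transform.position);
                ((IListenable)listener).OnSoundHeard(sound, dampened);
            }
        }

        // c) Pull listeners from the scene instead of having them register themselves.
        private List<MonoBehaviour> getListenableObjects()
        {
            var listeners = new List<MonoBehaviour>();
            foreach (MonoBehaviour behaviour in FindObjectsOfType<MonoBehaviour>())
                if (behaviour is IListenable) listeners.Add(behaviour);
            return listeners;
        }

        // d) Exponential falloff over distance.
        private float dampIntensity(SoundEvent sound, Vector3 listenerPosition)
        {
            float distance = Vector3.Distance(sound.position, listenerPosition);
            return sound.intensity * Mathf.Exp(-falloffRate * distance);
        }
    }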


4) ListenBehavior - A low level NPC behavior that listens for sound events. This behavior may be employed by multiple high level "skills". For example, a "wander" skill as well as a "sleep" skill may both call ListenBehavior. The behavior implements the IListenable interface. 
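
Finally, a minimal, hypothetical sketch of the low level listening behavior. The awareness bookkeeping is invented; the real behavior presumably feeds the skill layer described in the previous post:

    using UnityEngine;

    // 4) A low level behavior that multiple high level skills (wander, sleep...) can employ.
    public class ListenBehavior : MonoBehaviour, IListenable
    {
        public float awarenessThreshold = 1f;     // accumulated intensity at which the NPC reacts
        public float awarenessDecayPerSecond = 0.2f;
        private float awareness;

        public void OnSoundHeard(SoundEvent sound, float dampenedIntensity)
        {
            awareness += dampenedIntensity;
            if (awareness >= awarenessThreshold)
            {
                // A higher level skill would read this and push, say, a search behavior.
                Debug.Log(name + " heard something near " + sound.position);
            }
        }

        void Update()
        {
            awareness = Mathf.Max(0f, awareness - awarenessDecayPerSecond * Time.deltaTime);
        }
    }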


The result is captured in a video below. You can also find a link to the codebase here :



 





Friday, July 28, 2017

Uncharted 4 : Systemic Combat


In his GDC talk, Matthew Gallant (combat designer at Naughty Dog) discusses how, for Uncharted 4, the combat team reevaluated how they approached combat design. Instead of tightly scripted encounters, the game moved towards a more systemic approach to NPC AI behavior. Today I'd like to examine the player experience that flows from this approach and how it relates to the underlying system.

COMBAT ROLES
  • Inspired by Pacman's Inky, Blinky and Pinky, Uncharted 4 has 3 combat roles in the game :

    1. Engagers - Close in on player position.
    2. Ambushers - Pick an ambush point and hold their ground. As the player gets closer, they pop out and surprise players adding tension and making the place feel fuller.
    3. Defenders - Assigned to hard points where they hold their ground and fire. This gives the player focus points to navigate towards.

  • The assignment of these roles is systemic and based on a set of global parameters (a minimal sketch of such parameters appears after this list).



  • But of course the player is unaware of this internal system. So how does this system translate into a player experience?
  • The player's mental model of a combat arena (when in an active state of combat, i.e., not sneaking) is quite simply an internal priority list for who to fire at. Below are some of the factors based on which this prioritization occurs :
  1. Damage dealing ability
  2. Distance from player
  3. Jeopardy to current position
  4. Maneuverability to next cover position or vantage point
  • The key to the nature of these global params is that they accommodate the ability to break the player's mental model in incremental steps.
  • For example, the engager count and time delay to replace an engager result in an NPC dynamically becoming an engager when an NPC is killed. This essentially invalidates the player's established next steps but in a manner that she can clearly react to.
  • The params also need to be constructed in a manner that the player is incapable of predicting the reaction of the system to her actions.
  • Therefore, the resulting experience from this system is one of constant re-prioritization where the player is constantly reacting to systemic changes.
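
Based on the talk's description, here is a minimal, hypothetical sketch of what such global parameters and role reassignment might look like. The parameter names, values and selection rule are my guesses, not Naughty Dog's implementation:

    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;

    public enum CombatRole { Engager, Ambusher, Defender }

    public class CombatRoleDirector : MonoBehaviour
    {
        // Global parameters that drive assignment for the whole encounter.
        public int maxEngagers = 2;
        public float engagerReplaceDelay = 3f;   // seconds before a dead engager is replaced

        public List<NpcCombatant> combatants = new List<NpcCombatant>();
        private float nextEngagerAssignTime;

        void Update()
        {
            var alive = combatants.Where(c => c.IsAlive).ToList();
            int engagerCount = alive.Count(c => c.role == CombatRole.Engager);

            // When an engager dies, promote another NPC after a delay,
            // invalidating the player's plan in a way she can clearly react to.
            if (engagerCount < maxEngagers && Time.time >= nextEngagerAssignTime)
            {
                NpcCombatant candidate = alive
                    .Where(c => c.role != CombatRole.Engager)
                    .OrderBy(c => c.DistanceToPlayer)
                    .FirstOrDefault();
                if (candidate != null)
                {
                    candidate.role = CombatRole.Engager;
                    nextEngagerAssignTime = Time.time + engagerReplaceDelay;
                }
            }
        }
    }

    public class NpcCombatant : MonoBehaviour
    {
        public CombatRole role = CombatRole.Defender;
        public bool IsAlive = true;
        public float DistanceToPlayer;   // updated elsewhere by the NPC's own logic
    }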

DIFFERENTIATION FROM THE LAST OF US
  • In TLOU, due to the instant death nature of contact with clickers, the player is expected to formulate a strategy for as large a window of time as possible.
  • From that point onward, gameplay revolves around deft execution of that strategy through timing.
  • The player is expected to react at points where she fails to execute on a given strategy or when the strategy has failed to accommodate certain aspects of the layout.
  • However, the result is to scramble to safety and reevaluate. In this manner players are much more constrained than in Uncharted, in two ways :
  1. In UC planning is only incentivized not enforced
  2. In TLOU, reactions are high pressure break points from game flow. In UC, the player is in a constant reactionary state.
  • However, the tight constraints in TLOU imply that situations where the player is pressured into breaking free from these constraints are meaningfully differentiated and create a tension that Uncharted's systemic combat can never achieve.
SYSTEMIC AUTHORSHIP - ILLUSION OF AGENCY
  • In the talk, Gallant also makes the assertion that it was important to the design team to be able to continue to exert a controllable degree of authorship over each encounter. I'd like to examine how systems can accommodate such tight authorship when required.
  • The solution is heavily dependent on level design based indirect control. I think it is safe to say the more vast the encounter space, the harder it is to author.
  • For example, consider the following encounter (Reunion) :

  • From the player's perspective this may seem like a pretty open space at first glance. However, there are really only two arteries that run through this space. It roughly corresponds to the following layout :
  • The furthest defender is an RPG-wielding badass, so he's pretty much your directional indicator. The natural behavior from this layout is to use one of the first two cover points to get a better lay of the land.
 
  • At this stage, the player is forced to choose one of the two arteries that cut through the level. However, the game employs a few tricks to force the player into the decision it wants her to make, i.e., select the elevated path :
  1. Each of the cover points is destructible, forcing the player to shuffle right, i.e. towards the "right" path.
  2. Incoming engagers and defenders reinforce the decision to go right to gain additional cover.
  3. The nature of the layout is sufficiently clear for the player to decipher vantage ability.
  • Once the player has climbed onto the ledges, the game is now poised to counter the player with tailored scenarios made possible systemically. For example :
    • The player hanging on the ledge triggers an ambusher setting the player up for an over the ledge gun shot.
   
    • Similarly, the player crossing jump thresholds triggers ambushers on a different vertical axis as shown below. Additionally, killing the ambusher allows defenders to be reassigned as engagers, further pressing the player to utilize her vertical vantage.