Oberak - Wildseed Games
Associate Technical Designer

Platform
PC

Engine
Unreal Engine 5

Duration
November 2022 - July 2023

Team Size

~15
 

Overview

A third person martial-arts action RPG and social sim hybrid that fuses anime and wuxia influences with western hip-hop.

Accomplishments

  • Primary owner of combat design and implementation from early preproduction.

  • Rapidly iterated on core melee combat functionality to achieve solid fundamentals: intuitive enemy targeting, attack motion warping, impact effects, combos, enemy hit reactions, etc.

  • Tweaked existing attack animations by slicing, retiming, and applying curves.

  • Designed and implemented a “Qi Meter” resource and a variety of abilities for greater depth: utility-focused attacks with resource cost, perfect dodges, contextual counter attacks, etc.

  • Designed and implemented several camera systems including a combat camera that effectively frames nearby enemies. Utilized Unreal’s Camera Modifier system for scalability.

  • Worked closely with animation, VFX, engineering, and other designers to achieve quality results.

  • Implemented a robust interaction system that I expanded to meet the needs of other designers.

  • Implemented combat testing level; used data tables to define enemy types that the user can spawn.

  • Iterated on movement abilities: dashing, wall-running, jump feel & contextual recovery animations.

  • Quickly familiarized myself with the complex Lyra framework that the project was built atop.

Combat Breakdown

Context

I took ownership of combat a few months into my time on the project. What I started with was an engineering foundation created by external contractors that provided the structure for attack abilities, hitbox spawning, and combos.

OberakPortfolioImage1.PNG

The current version of the core combat Blueprint.

One of my first tasks was to familiarize myself with that functionality and make some necessary modifications:

  • I added a simple input buffering system so that the player can chain combos by pressing attack at any point during the current attack animation, rather than having to press attack during the precise anim notify window in which the current attack can be interrupted (a minimal sketch follows this list).

  • Instead of simply playing hit reaction montages on characters when they take damage, I created hit reaction Gameplay Abilities so that we could use GAS to prevent characters from performing other actions while hit reacting.

  • And many more minor changes.
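To illustrate the idea, here is a minimal sketch of the buffering logic in C++. The names and structure are illustrative rather than the shipped implementation, which is driven by anim notifies on the attack montages.

```cpp
// Minimal sketch of the attack input buffer (illustrative names).
struct FAttackInputBuffer
{
    bool bAttackBuffered  = false;  // set when attack is pressed mid-attack
    bool bComboWindowOpen = false;  // set by the "can be interrupted" anim notify

    // Called whenever the attack input fires.
    // Returns true if a new attack should start immediately.
    bool OnAttackPressed(bool bCurrentlyAttacking)
    {
        if (!bCurrentlyAttacking)
        {
            return true;            // nothing to buffer, attack right away
        }
        bAttackBuffered = true;     // remember the press for later
        return bComboWindowOpen;    // only chain now if the window is already open
    }

    // Called by the anim notify that opens the combo/interrupt window.
    // Returns true if a buffered press should trigger the next attack in the combo.
    bool OnComboWindowOpened()
    {
        bComboWindowOpen = true;
        const bool bConsume = bAttackBuffered;
        bAttackBuffered = false;    // consume the buffered input
        return bConsume;
    }

    // Called when the current attack finishes or is cancelled.
    void OnAttackEnded()
    {
        bComboWindowOpen = false;
        bAttackBuffered = false;    // don't let stale presses leak into idle
    }
};
```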

Goals

I put some effort into formalizing our combat design goals. For instance, I created this Pillar diagram:

OberakPortfolioImage2.PNG

This diagram illustrates the 3 main areas of Oberak (Combat, Locomotion, and Cultivation), and details the Pillars within and between each area.

Ultimately though, our plans shifted enough that much of the above diagram lost relevance, and we probably should've revisited this formalization process more routinely. However, I should note that we were in preproduction building a demo to show investors and secure more funding, so our primary goal was to achieve simple but fun and competent combat, knowing that it would likely go through large redesigns once we hit production.


To achieve that goal, I took a problem-oriented approach:

  • Determine the high-level problems with our current combat, informed by teammates and playtesters.

    • Break down those problems into more specific sub-problems that might be causing the larger problem.

      • Rank how likely each sub-problem is to actually be causing the larger problem.

  • Implement solutions to the granular problems that are the most likely culprits.

    • Test the results and determine whether the problems are fixed and whether new problems have been introduced.

Then we repeat the cycle indefinitely. Note that not all solutions are created equal; we want solutions that bring us closer to our high-level experience goals, even if those goals are a bit loosely defined.


In the next several sections I'll outline some of the problems we ran into and how I solved them.

Enemy Targeting

One of the most immediately clear problems was that it was very difficult to hit enemies, and the player would often translate past them when attacking. The first step in fixing this was identifying which enemy the player is trying to attack.

OberakPortfolioTargeting.gif

A red debug sphere is drawn on the target enemy, and blue debug spheres are drawn on other enemies in-range. The priority score of each enemy is also drawn.

The system checks for enemies in a sphere around the player, and assigns each of them a priority score based on distance to the player and the similarity of the player's left-stick input angle to the angle between the player and enemy. The system computes a weighted average of those factors and the enemy with the highest score becomes the target enemy.


This system went through some iteration based on player feedback. It now heavily weights input direction so that the left stick is the most important determinant of the target enemy, while the distance check ensures that the closer enemy is picked when enemies are roughly collinear relative to the player.
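As a rough sketch of that scoring, assuming illustrative weights (the shipped values were tuned from the feedback described above):

```cpp
#include <algorithm>

// Illustrative sketch of the target-priority score.
// InputDir is the normalized left-stick direction in world space;
// ToEnemy is the normalized direction from the player to the enemy.
struct FVector2Lite { float X = 0.f; float Y = 0.f; };

static float Dot(const FVector2Lite& A, const FVector2Lite& B)
{
    return A.X * B.X + A.Y * B.Y;
}

float ComputeTargetScore(const FVector2Lite& InputDir,
                         const FVector2Lite& ToEnemy,
                         float DistanceToEnemy,
                         float MaxTargetingRange)
{
    // 1 when the stick points straight at the enemy, 0 when perpendicular or behind.
    const float DirectionScore = std::max(0.f, Dot(InputDir, ToEnemy));

    // 1 when the enemy is right next to the player, 0 at the edge of the search sphere.
    const float DistanceScore =
        1.f - std::clamp(DistanceToEnemy / MaxTargetingRange, 0.f, 1.f);

    // Direction dominates; distance mainly breaks ties between roughly
    // collinear enemies. These weights are hypothetical.
    const float DirectionWeight = 0.8f;
    const float DistanceWeight  = 0.2f;
    return DirectionWeight * DirectionScore + DistanceWeight * DistanceScore;
}
```

The enemy with the highest score becomes the target, as described above.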

Motion Warping

We know the target enemy, but now we need the player's attack to actually translate toward that target enemy. For this I leveraged Unreal's Motion Warping functionality with some C++ modifications.

OberakPortfolioImage3.PNG

The attack montages contain Animation Notify windows during which the player will warp toward a target location defined in Blueprints.

The Animation Notify utilizes my C++ Root Motion Modifier "CustomWarp", which is a modified version of the built-in "SkewWarp". See the commented variables for descriptions of added behavior.

OberakPortfolioMotionWarping.gif

The end result.

The resulting system has the following benefits:

  • We're not adding any translation to the attack, only scaling the baked root motion, which looks and feels more natural and true to the animator's intent.

  • We can easily tweak:

    • When in the animation the scaling occurs.

    • The minimum and maximum extents to which the root motion translation can be scaled per-attack.

      • This effectively determines the range of the attack (baked root motion translation distance multiplied by MaxScalar); a rough sketch of this scaling math follows the list.
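Conceptually, the scaling boils down to something like the sketch below; the parameter names are illustrative, and the real logic lives inside the CustomWarp root motion modifier shown above.

```cpp
#include <algorithm>

// Conceptual sketch of scaling baked root motion toward the warp target.
// MinScalar/MaxScalar are the per-attack limits mentioned above.
float ComputeRootMotionScale(float DistanceToWarpTarget,
                             float BakedRootMotionDistance,
                             float MinScalar,
                             float MaxScalar)
{
    if (BakedRootMotionDistance <= 0.f)
    {
        return 1.f; // nothing to scale if the animation has no root translation
    }

    // Scale the animation's translation so it would land on the target,
    // then clamp so the attack never shrinks or stretches beyond its allowed range.
    const float DesiredScale = DistanceToWarpTarget / BakedRootMotionDistance;
    return std::clamp(DesiredScale, MinScalar, MaxScalar);
}
```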

Note that rotation is handled in much the same way as translation. Attack Montages also contain an Animation Notify window defining when the character rotates toward the target enemy:

OberakPortfolioImage6.PNG

In this enemy attack montage, the enemy will rotate toward the player during the selected animation notify window.
Note that this again uses my CustomWarp modifier, but now WarpTranslation is set to false and WarpRotation is set to true.

The rotation window always ends before the translation window starts. Defining these windows separately is vital to ensuring that characters do not rotate while translating forward, as this leads to very bizarre and disorienting movement as well as enemies that feel like they track the player unfairly.

Impact Effects

I implemented several effects to improve the feel of attack impacts:

  • Hitstop: Both the attacking and the attacked characters' animations freeze for a predefined duration (sketched in code below).

  • Mesh shake: The attacked character's mesh shakes while hitstop is active.

  • Camera shake: The camera shakes on impact, using Unreal's built-in camera shake system.
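Here is a minimal hitstop sketch in Unreal C++, assuming both actors are ACharacters and a hypothetical HitstopDuration parameter; the mesh shake runs during this same window in the shipped version, and production code would guard the captured pointers more carefully.

```cpp
#include "GameFramework/Character.h"
#include "Components/SkeletalMeshComponent.h"
#include "TimerManager.h"

// Minimal hitstop sketch: pause both skeletal meshes, then resume them
// after HitstopDuration seconds.
void ApplyHitstop(ACharacter* Attacker, ACharacter* Victim, float HitstopDuration)
{
    if (!Attacker || !Victim)
    {
        return;
    }

    // Freeze both animations in place.
    Attacker->GetMesh()->bPauseAnims = true;
    Victim->GetMesh()->bPauseAnims = true;

    // Resume once the hitstop window ends.
    FTimerHandle TimerHandle;
    Attacker->GetWorldTimerManager().SetTimer(
        TimerHandle,
        FTimerDelegate::CreateLambda([Attacker, Victim]()
        {
            Attacker->GetMesh()->bPauseAnims = false;
            Victim->GetMesh()->bPauseAnims = false;
        }),
        HitstopDuration,
        /*bLoop=*/false);
}
```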

OberakPortfolioImpacts.gif

The end result in slow motion (10% of normal speed).

Animation and Combo Tweaking

Our animator had made a lot of animations during his time at the company; I selected which animations to use for our attacks and hit reactions, and retimed them by slicing them into several sequences in each animation montage like so:

OberakPortfolioImage7.PNG

This kick animation is split into 4 sequences. Each sequence uses the same underlying animation, but a different portion of it. This allows me to retime the sequences independently (note that the selected sequence's Play Rate is set to 2.5).

With this technique I was able to tweak the timing of the windup, impact, and recovery portions of the animations, which was critical to achieving good-feeling combat. I made the same sorts of tweaks to the enemy hit reaction animations, and I also created combos and tuned their rhythm:

OberakPortfolioCombos.gif

Punch and kick combos that I created.

Qi Meter and Abilities

With the above work finished we had the fundamentals of combat in a good place, but of course our combat lacked depth. We decided to address this by introducing a Meter system and associated utility-focused abilities with a resource cost, keeping in mind that we wanted to facilitate a fun "one vs. many" dynamic. I implemented this system as follows:

OberakPortfolioQiMeter.gif

The Qi Meter in the top-left is composed of 3 segments. The player fills them by performing positive actions in combat.

The Qi Meter fills when:

  • The player defeats an enemy.

  • The player lands a combo finisher attack.

  • The player performs a perfect dodge (more on this in the next section).

This system rewards the player for actively engaging with enemies and performing skillful actions. We discussed a system where Qi would fill automatically over time, as well as a cooldown-based system, but our chosen solution avoids potentially encouraging player passivity.
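As a stripped-down sketch of the meter's bookkeeping (the segment capacity and reward amounts here are illustrative, not the shipped values):

```cpp
#include <algorithm>

// Stripped-down Qi Meter bookkeeping with illustrative values.
class FQiMeter
{
public:
    static constexpr int   NumSegments     = 3;
    static constexpr float SegmentCapacity = 100.f;

    void AddQi(float Amount)
    {
        Qi = std::clamp(Qi + Amount, 0.f, NumSegments * SegmentCapacity);
    }

    // Rewards for active, skillful play: defeats, combo finishers, perfect dodges.
    void OnEnemyDefeated()       { AddQi(50.f); }
    void OnComboFinisherLanded() { AddQi(25.f); }
    void OnPerfectDodge()        { AddQi(25.f); }

    int FullSegments() const { return static_cast<int>(Qi / SegmentCapacity); }

    // Qi abilities spend whole segments; Super Mode spends everything.
    bool TrySpendSegments(int SegmentsToSpend)
    {
        if (FullSegments() < SegmentsToSpend)
        {
            return false;
        }
        Qi -= SegmentsToSpend * SegmentCapacity;
        return true;
    }

private:
    float Qi = 0.f;
};
```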

The player can use segments of the meter to perform powerful, utility-focused abilities by holding one of the triggers and pressing one of the attack buttons, like so:

OberakPortfolioQiAbilities.gif

The player performs a tornado kick that stuns enemies and a powerful punch that sends an enemy flying; the launched enemy damages any other enemies it hits on the way and takes extra damage when it hits a wall.

The player can also choose to spend the entire meter to enter Super Mode, which for a limited duration grants increased speed and damage while also preventing enemy attacks from interrupting the player.

OberakPortfolioSuperMode.gif

Super mode. Note the Qi Meter draining once active.

Perfect Dodge and Counter Attacks

The perfect dodge and counter attack were recommended by our Senior Designer as a way to incorporate more risk/reward into the combat, and I implemented both abilities.

If the player dodges just before an enemy attack lands, a perfect dodge is triggered:

OberakPortfolioPerfect Dodge.gif

A perfect dodge.

I implemented this by spawning a stationary capsule at the player's location at the start of the dodge and checking for a short duration whether that capsule gets hit by an attack, after which the capsule is destroyed.
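In simplified form, the timing side of that check looks roughly like this; the window duration is illustrative, and the real version relies on the spawned capsule's collision rather than this explicit bookkeeping.

```cpp
// Simplified sketch of the perfect dodge window. In the actual implementation a
// stationary capsule is spawned at the dodge start location and destroyed when
// the window closes; here the same idea is expressed as a timed window.
struct FPerfectDodgeWindow
{
    float WindowDuration = 0.2f;   // illustrative; tuned in-editor
    float TimeRemaining  = 0.f;
    bool  bActive        = false;

    void OnDodgeStarted()
    {
        // The capsule is spawned at the player's current location here.
        TimeRemaining = WindowDuration;
        bActive = true;
    }

    void Tick(float DeltaSeconds)
    {
        if (bActive && (TimeRemaining -= DeltaSeconds) <= 0.f)
        {
            bActive = false;       // the capsule is destroyed at this point
        }
    }

    // Called when an enemy attack hits the capsule left at the dodge location.
    bool IsPerfectDodge() const
    {
        return bActive;            // true -> award Qi and enable the counter attack
    }
};
```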

As previously mentioned, perfect dodges increase the player's Qi Meter, but they also allow the player to interrupt the recovery portion of the dodge animation and perform a counter attack:

OberakPortfolioCounterAttack.gif

The player can punch during a perfect dodge for a single-target counter attack, or kick for a 360 degree counter attack.

Counter attacks always target the attacking enemy rather than letting the player pick any target as usual. This presented an issue: some enemies cast projectiles that the player can perfect dodge, so how should a counter attack work in that context?

My solution was to treat the projectile itself as the target of the counter-attack, and I implemented two unique abilities that the player can perform in this context.

OberakPortfolioCounterAttackProjectile.gif

The player can press punch after perfect dodging a projectile to send it back at the caster with increased speed and damage, or press kick to split the projectile into 3 friendly projectiles that each home in on a random enemy.

Traversal Abilities

Dash

I also implemented most of the traversal abilities in the game. The dash, which appears in the previous section, is one such example.

I should note that I made extensive use of Unreal's Gameplay Ability System (GAS) to implement all of these abilities so that we could easily specify which abilities should cancel or block each other.

OberakPortfolioImage8.PNG

This is GA_Dash, the gameplay ability blueprint that defines the dash behavior. Green sections execute on ability activation, red sections execute on ability end, and blue sections execute repeatedly on a short timer.

GASTags.PNG

These tags define which abilities are blocked or cancelled by the dash. All abilities in the game are implemented this way.

OberakPortfolioDash.gif

The player can dash on the ground and in the air.

The dash has both an invulnerable phase at the start and a vulnerable recovery phase at the end. The durations of these phases are easily tweakable.

I use a curve to define the speed of the player while dashing, so we can easily tweak that as well.

DashCurve.PNG

This curve defines the player's speed over the 0.4 seconds that the dash is active.
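In rough form, the dash samples that curve and drives the character's velocity with it each update; the function and parameter names below are illustrative simplifications of what GA_Dash does on its repeating timer.

```cpp
#include "Curves/CurveFloat.h"
#include "GameFramework/Character.h"
#include "GameFramework/CharacterMovementComponent.h"

// Rough sketch of driving dash movement from the designer-authored speed curve.
void TickDash(ACharacter* Character,
              const UCurveFloat* DashSpeedCurve,
              const FVector& DashDirection,   // normalized, captured at activation
              float ElapsedDashTime)          // 0.0 .. 0.4 seconds
{
    if (!Character || !DashSpeedCurve)
    {
        return;
    }

    // Sample the curve to get the current dash speed.
    const float Speed = DashSpeedCurve->GetFloatValue(ElapsedDashTime);

    // Drive the movement component directly so the dash ignores normal acceleration.
    Character->GetCharacterMovement()->Velocity = DashDirection * Speed;
}
```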

Jump

I use the built-in jump functionality from Unreal's character class, but with a good deal of extra behavior for improved game feel. For instance, I increase the player character's gravity scale at the moment they reach the apex of their jump to address the problem of the jump feeling floaty.


The jump is again implemented as a gameplay ability for easy control of how it interacts with other abilities. To allow players to hold the jump button to jump higher (which players desired), I send gameplay events to notify the ability when the player releases the jump button, at which point I call ACharacter's StopJumping function.
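In rough form, those two tweaks (extra gravity after the apex, and cutting the jump short on release) look something like this; the gravity value is illustrative.

```cpp
#include "GameFramework/Character.h"
#include "GameFramework/CharacterMovementComponent.h"

// Called every frame while the jump ability is active. The caller tracks
// whether the character was still rising on the previous frame.
void TickJumpFeel(ACharacter* Character, bool bWasRisingLastFrame)
{
    UCharacterMovementComponent* Movement = Character->GetCharacterMovement();

    // Once vertical velocity stops being positive we've passed the apex:
    // increase gravity so the fall feels snappier and less floaty.
    // (Gravity scale is reset to its default when the character lands.)
    const bool bRisingNow = Movement->Velocity.Z > 0.f;
    if (bWasRisingLastFrame && !bRisingNow)
    {
        Movement->GravityScale = 2.0f;   // illustrative apex gravity
    }
}

// Called when the ability receives the "jump released" gameplay event.
void OnJumpReleased(ACharacter* Character)
{
    // Cut the jump short so that holding the button longer jumps higher.
    Character->StopJumping();
}
```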

Among the more complex additions are the jump recovery/landing animations:

OberakPortfolioJumpRecovery.gif

There are 3 possible recovery animations: stationary, moving, and rolling.

These recovery animations are also implemented as gameplay abilities that simply play their respective animations and prevent the player from performing other actions. Which recovery is chosen depends on how fast the player is falling just before they hit the ground as well as their speed in the XY plane.


Aside from the roll animation, the other recovery animations are blended with the base locomotion pose so that they look believable regardless of how the player is moving. In fact, both the stationary and moving recoveries use this animation:

OberakPortfolioJumpRecoveryBaseAnimation.gif

This simple animation is used for both the stationary and moving jump recoveries, with different blending. Also note the notify in the middle of the animation, which I'll explain in a second.

Applying a layered blend per bone with varying weights can give surprisingly different results even with the same animation. Here is how the moving jump recovery is blended:

OberakPortfolioJumpRecoveryNormalBlend.PNG

The moving jump recovery animation is applied with a layered blend per bone. The legs have a fixed blend weight of 0.25, and the torso has a variable blend weight that depends on the player's vertical velocity at the moment they hit the ground.

The player cannot jump during the recovery animations, since being able to jump immediately upon landing was making the character feel weightless. However, having to wait until the entire recovery animation finished playing out felt horribly unresponsive. To reach a middle ground, I added a notify into the recovery animations indicating when the player can jump again.

Wall Run

I also implemented wall-running:

OberakPortfolioWallRun.gif

Wall running and jumping.

I created a custom trace channel that the wall run functionality uses when performing a capsule trace to check for a wall-runnable surface. Objects ignore this custom channel by default, and level designers can set particular objects to block this channel to make them wall-runnable.

WallRunCollision.PNG

Both of these wall run objects are set to block the Wall Run channel. Note that for added control, the visible meshes are not the ones set to wall-runnable; instead, we use invisible collision objects (with green wireframe visible in the editor).
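The wall check itself is a capsule sweep against that custom channel. In simplified form it looks something like the following; ECC_GameTraceChannel1, the sweep distance, and the capsule size are placeholders for the project's actual settings, and the real check also handles walls on either side of the character.

```cpp
#include "Engine/World.h"
#include "GameFramework/Character.h"
#include "CollisionQueryParams.h"
#include "CollisionShape.h"

// Simplified wall-run surface check. ECC_GameTraceChannel1 stands in for
// whichever slot the project's custom "Wall Run" trace channel occupies.
bool FindWallRunSurface(const ACharacter* Character, FHitResult& OutHit)
{
    const FVector Start = Character->GetActorLocation();
    // Sweep a short distance to the character's side, looking for a wall.
    const FVector End = Start + Character->GetActorRightVector() * 75.f;

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(Character);

    // Objects ignore this channel by default; level designers opt surfaces in
    // by setting them to Block on the Wall Run channel.
    return Character->GetWorld()->SweepSingleByChannel(
        OutHit,
        Start,
        End,
        FQuat::Identity,
        ECC_GameTraceChannel1,
        FCollisionShape::MakeCapsule(/*Radius=*/30.f, /*HalfHeight=*/80.f),
        Params);
}
```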

Fall Out of Bounds Triggers and Respawning

The wall run areas did not always have that dangerous rushing water under them; initially, if the player failed the wall run they would land on normal ground and have to jump back up to the start of the wall run to try again. We found in playtesting that this was quite tedious, so I recommended streamlining it by simply respawning the player just before the wall run if they fall.


I implemented this respawn feature:

OberakPortfolioFallReset1.gif

The player respawns if they fail the wall run.

Once the player hits an invisible volume, I spawn a new camera that remains stationary and tracks the player's position. I immediately switch to this camera and disable player input. I fade the screen out, teleport the player to a variable target point, fade the screen back in, switch back to the normal camera and re-enable the player's input.
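Expressed as simplified Unreal C++, the flow is roughly the following. The fade time, stationary camera actor, and respawn target point are passed in here for illustration; the real version handles edge cases this sketch omits.

```cpp
#include "GameFramework/Character.h"
#include "GameFramework/PlayerController.h"
#include "Camera/PlayerCameraManager.h"
#include "TimerManager.h"

// Simplified sketch of the out-of-bounds respawn flow.
// RespawnPoint is the variable target point placed just before the wall run.
void StartFallRespawn(ACharacter* Player, AActor* StationaryCamera,
                      AActor* RespawnPoint, float FadeTime)
{
    APlayerController* PC = Cast<APlayerController>(Player->GetController());
    if (!PC)
    {
        return;
    }

    // Switch to the stationary camera and lock the player out of input.
    PC->SetViewTargetWithBlend(StationaryCamera, 0.f);
    Player->DisableInput(PC);

    // Fade to black, then finish the respawn once the fade completes.
    PC->PlayerCameraManager->StartCameraFade(0.f, 1.f, FadeTime, FLinearColor::Black,
                                             /*bShouldFadeAudio=*/false,
                                             /*bHoldWhenFinished=*/true);
    FTimerHandle Handle;
    Player->GetWorldTimerManager().SetTimer(
        Handle,
        FTimerDelegate::CreateLambda([Player, PC, RespawnPoint, FadeTime]()
        {
            // Teleport back to just before the wall run, fade in, restore control.
            Player->SetActorLocation(RespawnPoint->GetActorLocation());
            PC->PlayerCameraManager->StartCameraFade(1.f, 0.f, FadeTime,
                                                     FLinearColor::Black);
            PC->SetViewTargetWithBlend(Player, 0.f);
            Player->EnableInput(PC);
        }),
        FadeTime,
        /*bLoop=*/false);
}
```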

OberakPortfolioFallReset2.gif

The same thing happens if the player falls into the void here. Note that the player disappears in a puff of smoke.

A second invisible trigger is placed beneath the first, and when the player hits it I spawn a puff of smoke VFX, hide the player's mesh, and freeze all movement. Without this the player would just keep falling and clipping through whatever geometry might be present, which looks pretty odd.

Other Features

Combat Camera

We received feedback that the camera was not framing combat very effectively, as it was simply anchored to the player. I implemented this combat camera functionality to frame nearby enemies more effectively:

OberakPortfolioCombatCamera.gif

The green debug sphere is the camera focal point. The red debug sphere is a weighted average of the enemy positions.

The weights of each enemy are also displayed.

I calculate a weighted average of the enemy positions, with several factors influencing the weights:

  • Distance to the player.

  • Whether the enemy is the player's current target.

  • Whether the enemy is attacking.

  • Whether the enemy is currently knocked down or KO'd.


The precise influence of these factors on the enemy weight calculation is easily tweakable, but the general idea is that enemy weights are proportional to the threat they currently pose to the player. Closer enemies are prioritized with bonuses if they're attacking or targeted and penalties if they're knocked down.


The camera focal point is a weighted average of the player's position and the enemy position weighted average. These weights are dynamic; by default the focal point is weighted all the way to the player, and as the sum of all of the enemy weights increases, the enemy position weighted average is given a higher weight until the camera focal point is halfway between the player and the enemy position weighted average.


The end result is that the camera smoothly moves toward clumps of enemies. I also set the camera's distance from the focal point to be proportional to the greater of the focal point's distance to the player and its distance to the enemy position weighted average. That way the player stays on screen, and even when the player is surrounded by enemies and the camera is roughly centered on the player, the camera still pulls out to show the surrounding enemies.
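In simplified form, the focal point blend described above looks like this; the blend threshold and names are illustrative.

```cpp
#include <algorithm>

// Simplified focal point math for the combat camera. EnemyWeightedAverage and
// TotalEnemyWeight come from the per-enemy weighting described above
// (distance, targeted, attacking, knocked down).
struct FVec3Lite { float X = 0.f; float Y = 0.f; float Z = 0.f; };

static FVec3Lite Lerp(const FVec3Lite& A, const FVec3Lite& B, float Alpha)
{
    return { A.X + (B.X - A.X) * Alpha,
             A.Y + (B.Y - A.Y) * Alpha,
             A.Z + (B.Z - A.Z) * Alpha };
}

FVec3Lite ComputeCameraFocalPoint(const FVec3Lite& PlayerPos,
                                  const FVec3Lite& EnemyWeightedAverage,
                                  float TotalEnemyWeight)
{
    // With no threatening enemies the camera stays anchored to the player.
    // As the total enemy weight grows, blend up to (and never past) the halfway
    // point between the player and the enemy position weighted average.
    const float WeightForFullBlend = 4.f;   // illustrative threshold
    const float Alpha =
        std::clamp(TotalEnemyWeight / WeightForFullBlend, 0.f, 1.f) * 0.5f;

    return Lerp(PlayerPos, EnemyWeightedAverage, Alpha);
}
```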

Interaction System

I implemented a robust interaction system to support interaction with objects and characters in the world.

OberakPortfolioInteractable.gif

My interaction system displays a button-prompt and handles interaction with this health pickup. It also shows a white dot indicator instead of the button prompt when the player is nearby but not close enough to interact.

Here are some of the features of the system:

OberakPortfolioInteractableCustomWidgetLocation.gif

Users can freely specify the widget's position relative to the object.

OberakPortfolioInteractableCustomizableIndicatorRange.gif

Users can freely specify how close the player must be to be able to see the white dot indicator.

OberakPortfolioInteractableCustomizableInteractionVolume.gif

Users can specify where exactly the player must stand to be able to interact with the object.

OberakPortfolioInteractableCustomizableInteractionRange.gif

Users can freely specify how close the player must be to be able to interact with the object.

OberakPortfolioInteractableDistancePrioritization.gif

The system prioritizes the closest interactable when multiple interactables are in range.

OberakPortfolioInteractableCustomizableIndicationVolume.gif

Users can specify where exactly the player must stand to be able to see the white dot indicator.

Day/Night Cycle

The game also has a social sim aspect; while at your dojo you can train your students, give them gifts, etc. Time passes while you're in your dojo, and I implemented this time-passing and day/night cycle feature:

OberakPortfolioDayNightCycle.gif

The time and day phase (morning, afternoon, evening, night) are displayed at the top of the screen. As morning turns to afternoon, the sun moves in an arc from its morning position to its afternoon position.

Time passes continuously, but the clock is only updated every ten in-game minutes. One in-game day, spanning from 7 AM to 11 PM, is currently set to last 25 real-world minutes. These variables are easily tweakable. I implemented the core time-passing functionality as a C++ GameInstance Subsystem. This made sense for a couple of reasons: subsystems are singletons, and GameInstance subsystems persist when loading new levels; both characteristics were desirable for this system.

GameInstance.PNG

Since the time-passing functionality is implemented as a GameInstance Subsystem, users can set exposed parameters here in the class defaults of the GameInstance class.
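The core conversion from real time to in-game time is straightforward. The sketch below uses the tuning described above (a 7 AM to 11 PM day lasting 25 real-world minutes, with the HUD clock stepping every 10 in-game minutes); the actual implementation lives in the GameInstance Subsystem.

```cpp
// Sketch of the time-of-day conversion; the tuning values match the description above.
struct FGameClock
{
    // One in-game day runs 7 AM to 11 PM = 16 hours = 960 in-game minutes,
    // and is currently tuned to last 25 real-world minutes (1500 real seconds).
    static constexpr float DayStartMinutes   = 7.f * 60.f;
    static constexpr float DayLengthMinutes  = 16.f * 60.f;
    static constexpr float RealSecondsPerDay = 25.f * 60.f;
    static constexpr float ClockStepMinutes  = 10.f;  // HUD updates every 10 in-game minutes

    float ElapsedRealSeconds = 0.f;

    void Tick(float DeltaRealSeconds) { ElapsedRealSeconds += DeltaRealSeconds; }

    // Continuous in-game time, used to drive the sun and moon rotation.
    float CurrentMinuteOfDay() const
    {
        const float GameMinutesPerRealSecond = DayLengthMinutes / RealSecondsPerDay;
        return DayStartMinutes + ElapsedRealSeconds * GameMinutesPerRealSecond;
    }

    // Quantized time shown on the HUD clock (only moves every 10 in-game minutes).
    int DisplayedMinuteOfDay() const
    {
        return static_cast<int>(CurrentMinuteOfDay() / ClockStepMinutes)
               * static_cast<int>(ClockStepMinutes);
    }
};
```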

I also implemented the movement of the sun and moon as the day phase changes. This was a process of playing around with trigonometry to adjust the yaw and pitch of the sun and moon directional lights over time until I achieved fairly natural-looking motion.

OberakPortfolioDayNightCycle2.gif

As evening turns to night, the movement of both the sun and moon is visible. The sky material also changes, and the stars in the night sky become visible.

Ending Cutscene

I collaborated with our animator and tech art director to implement this ending cutscene:

OberakPortfolioEndCutscene1.gif

The beginning of the ending cutscene.

This involved some research into Unreal sequences. For example, the sequence initially wasn't playing at the correct location, but at the world origin instead. I eventually found that if you enable "Override Instance Data" on a LevelSequenceActor, you can specify an actor to act as the origin of the sequence.


I also implemented the dialogue in this sequence, and went back and forth with our animator to retime certain shots that were too short to contain their associated voice lines.
