How I Would Fix Cheating in Destiny 2

Tim Keating
6 min read · Apr 24, 2021


When I saw recently that Bungie is planning on growing their security team, I thought for a moment about reaching out. Like many Destiny players, I’m tired of my KDA getting destroyed by cheaters¹. However, two things dissuaded me: I love my current company, and also my wife informed me that if I were going to move to Seattle, I would be doing it by myself, so…

However, I couldn’t resist thinking about how I would tackle this problem. And since I’m not planning to get divorced, I figured “why let those thoughts go to waste? Perhaps someone on the Destiny team might read this and it could spark some ideas.” Seems like the sort of thing medium.com was invented for!

¹ In the spirit of full disclosure, my mediocre KDA is not entirely attributable to cheaters.

My Bona Fides

Your first thought on reading that was probably “why should I give a flying… fig… what this guy thinks about how to fix cheaters in Destiny?” Well, I’ve worked in the game industry since 1999, on both game teams and support systems, though most of my work has been on back-end systems. I used to work on an MMO (Ultima Online) and dealt with a lot of cheating there (although it wasn’t an action game). My one presentation at GDC, a paper called “Dupes, Speedhacks and Black Holes,” was specifically about the types of cheats players engaged in, and how we dealt with them.

The Nature of the Problem

Destiny 2 (rightfully) moved its physics simulation into Bungie’s own data centers. P2P was the only way to go back in the day, but the internet is getting better, thank glob, and while grouping randos by geographic location worked fine for P2P, that doesn’t help when I (in Austin) am playing with clanmates from Seattle and Orlando.

There are two places you can detect cheating in real time: on the server, or on the client. Both are impractical, each for its own reasons. On the server side, detecting some kinds of cheats is incredibly expensive. To authoritatively detect wallhacks, for example, you have to raycast every player’s sightline and weapon aim every single frame. That is cost-prohibitive in terms of the amount of hardware you have to throw at it, and the more types of cheats you have to scan for, the more expensive it gets.
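
To put rough numbers on that claim (every figure below is a made-up assumption for illustration, nothing here comes from Bungie’s actual stack), here’s the back-of-envelope math that scares you off server-side raycasting:

```python
# Back-of-envelope cost of authoritative server-side wallhack detection.
# Every number here is an illustrative assumption, not Destiny 2 internals.
players_per_match = 12        # assume a 6v6 Crucible match
opponents_per_player = 6      # rays only need to go from each player to each enemy
simulation_rate_hz = 30       # assumed server tick rate
concurrent_matches = 50_000   # assumed peak concurrency

raycasts_per_second = (players_per_match * opponents_per_player
                       * simulation_rate_hz * concurrent_matches)
print(f"{raycasts_per_second:,} line-of-sight raycasts per second")
# => 108,000,000 raycasts per second, each one tested against full level
# geometry, before you add a single other cheat heuristic.
```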

On the client side, you have a cost of a different sort: system resources. Every bit of computing power you throw at detecting whether a player is legit is power you take away from drawing frames faster. However, this is largely moot, because that problem is eclipsed by another one: namely, as Gordon Walton famously puts it, “the client is in the hands of the enemy.” The degree of control you have over the client, short of installing some kind of anti-cheat rootkit, is limited by players’ ability to modify the client’s own memory.

So What’s the Answer, O Wise One?

What I would do, in their shoes, is not try to catch cheaters in real-time. I’m making two assumptions here that I think are true, though I certainly could be wrong:

  1. It’s not necessary to catch an instance of cheating as it’s happening, as long as players know cheaters are being caught and removed.
  2. It’s not necessary to catch every single cheater, so long as enough are getting caught that it deters casual cheating.

With those assumptions in hand, I’m going to suggest an offline solution that is scalable, economical, and suitable for a fast rollout: mirror all your traffic into a server architecture that can filter and process that traffic in slow time, doing all the things that are super-hard to do in real time, whether on the (untrusted) client or the (super-expensive) server.

[Architecture diagram: game servers mirror traffic into a collection queue, through a filter server, into persistent storage, and on to the heuristics servers. AWS diagram theme from draw.io, which is awesome.]

1. The collection queue. Queues are designed to be fast and to accommodate a lot of throughput. They add slack to a pipeline: if inputs heat up, the queue can buffer that flow without interruption, which should make the whole pipeline fairly load-tolerant.

The only change to game servers with this solution is that they also forward all game traffic to the queue. This seems like a lot, and it is — you’re doubling the entire network usage of the game! — but assuming your server hosts live in the same AZ (to use AWS parlance) as your cheat analysis engine, that will be the cheapest network capacity you will ever buy.
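
To make that concrete, here’s a minimal sketch of what the mirroring hook on a game server might look like, assuming an SQS-style queue. The queue URL, the packet shape, and the mirror_packet helper are all hypothetical, not anything from Bungie’s real server code; in practice you’d batch sends rather than push one message per packet.

```python
import json
import boto3

# Hypothetical mirroring hook: every gameplay packet the server relays also
# gets serialized and pushed onto the collection queue. Queue URL and packet
# shape are made up for illustration.
sqs = boto3.client("sqs")
COLLECTION_QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/cheat-collection"

def mirror_packet(session_id: str, player_id: str, packet: dict) -> None:
    """Fire-and-forget copy of one gameplay packet into the collection queue."""
    sqs.send_message(
        QueueUrl=COLLECTION_QUEUE_URL,
        MessageBody=json.dumps({
            "session_id": session_id,
            "player_id": player_id,
            "packet": packet,
        }),
    )
```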

2. The filter server. This service’s role is simply to pop data off the collection queue and evaluate whether those packets need to be saved or not.

First, there will be certain packets that can simply be discarded. We don’t need to know that Player A opened their inventory, or what anyone said in chat. Stuff like that has no bearing on whether or not a player is cheating.

Then we look for red flags. These are the same criteria players use now to decide whether someone in-game is a cheater: consistently excessive weapon accuracy; a private Steam profile, a very low level, or very few friends; a VAC ban; low lifetime hours in-game paired with a high KDA; or a report from another player for cheating. If there’s a red-flagged player in the game, all the pertinent traffic (positioning and aiming data, attack actions, etc.) gets filtered in. We can also configure which types of traffic to keep based on what heuristics are currently active (see below): for example, if you’re not hunting for wallhackers, maybe you don’t need aiming data.

Note that this means that probably 90% of the mirrored network traffic will simply be thrown away — that’s what this queue-and-filter architecture is for.
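
A rough sketch of that filter loop, again assuming SQS-style queues: the discardable packet types, the red-flag lookup, and the load_flagged_players helper are all placeholders I invented for illustration.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Packet types with no bearing on cheating get dropped immediately.
# The type names below are illustrative, not real Destiny telemetry.
DISCARD_TYPES = {"open_inventory", "chat_message", "emote"}

def load_flagged_players() -> set:
    """Hypothetical red-flag lookup: accuracy outliers, private or low-level
    Steam profiles, VAC bans, low hours with a high KDA, player reports."""
    return {"example-player-guid"}  # placeholder data

def run_filter(collection_queue_url: str, keep_queue_url: str) -> None:
    flagged = load_flagged_players()
    while True:
        resp = sqs.receive_message(QueueUrl=collection_queue_url,
                                   MaxNumberOfMessages=10, WaitTimeSeconds=5)
        for msg in resp.get("Messages", []):
            event = json.loads(msg["Body"])
            packet_type = event["packet"].get("type")
            # Keep only pertinent traffic for red-flagged players; drop the rest.
            if packet_type not in DISCARD_TYPES and event["player_id"] in flagged:
                sqs.send_message(QueueUrl=keep_queue_url, MessageBody=msg["Body"])
            sqs.delete_message(QueueUrl=collection_queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
```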

3. Persistent storage. Data that passes through the filter is aggregated (so you can easily pull all the saved traffic for a single session) and stored, maybe in another queue, possibly in a more economical form of persistent storage like a NoSQL database if game data is going to sit around for a while before being analyzed.
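
A sketch of what that storage layer could look like, assuming DynamoDB with a table name and key layout I made up for illustration. Partitioning by session ID is what makes “pull all the saved traffic for a single session” a one-query operation.

```python
import json
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical NoSQL layout: one item per saved packet, partitioned by session
# so an entire session's traffic can be pulled back with a single query.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("filtered_game_traffic")  # assumed table name

def store_packet(session_id: str, timestamp_ms: int, player_id: str, packet: dict) -> None:
    table.put_item(Item={
        "session_id": session_id,      # partition key
        "timestamp_ms": timestamp_ms,  # sort key keeps packets in order
        "player_id": player_id,
        "packet": json.dumps(packet),  # stored as a blob; heuristics deserialize it
    })

def load_session(session_id: str) -> list:
    """Pull all saved traffic for one session, ready for a heuristics pass."""
    resp = table.query(KeyConditionExpression=Key("session_id").eq(session_id))
    return resp["Items"]
```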

4. The heuristics servers. Here it is: the magic hour.

A single heuristic is designed to detect a single type of cheat. This is the way the solution can roll out quickly and gradually: start by developing scanners for the cheats that are easiest to detect and are the most pervasive. For example, for an infinite ammo cheat detector, the heuristic server tracks the state of every player’s ammo, which is computationally inexpensive and straightforward to implement. As new detectors come off the assembly line, you can just pop them into the infrastructure.
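
Here’s roughly what such a heuristic could look like, run over one session’s worth of saved traffic. The packet types, the starting reserve size, and the detect_infinite_ammo function itself are assumptions for illustration, not real Destiny telemetry.

```python
from collections import defaultdict

def detect_infinite_ammo(session_packets: list, starting_ammo: int = 150) -> set:
    """Sketch of a single heuristic: flag players who fired more rounds than
    they could possibly have held. Packet shapes are illustrative assumptions."""
    ammo = defaultdict(lambda: starting_ammo)
    suspects = set()
    for event in sorted(session_packets, key=lambda e: e["timestamp_ms"]):
        player, packet = event["player_id"], event["packet"]
        if packet["type"] == "ammo_pickup":
            ammo[player] += packet["amount"]
        elif packet["type"] == "shot_fired":
            ammo[player] -= 1
            if ammo[player] < 0:
                suspects.add(player)
    return suspects
```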

This is also the key to making the solution both effective and cost-scalable. New wallhack gets published? Dial up the wallhack scanners and dial down the others. Doing a big drive that will get a lot of new players into the game (e.g., an expansion release, or a big feature like crossplay)? Spend a bunch of money to crank up all your scanners for a couple of weeks to process the queues quickly and expunge a lot of cheaters before they drive new players out of the game. Under normal use, just keep a random assortment of heuristics instances running to stay within your anti-cheat budget. Ultimately, this could even be more effective, since we know from stuff like loot boxes that randomness can have a dramatic impact on the human psyche (thank you, B.F. Skinner).

Because this is an offline, deferred-processing approach, you can take advantage of AWS spot instances (or whatever the equivalent feature is for your cloud provider of choice) to reduce processing costs by up to 80%.

Each heuristics server can be its own piece of software (built on a common platform, presumably) if the complexity warrants it, or there could be one codebase with all the heuristics and you could turn specific scanners on and off via runtime configuration. I personally would only ever have a heuristics service instance handle one cheat at a time, and then control the balance of detection in play by scaling the number of instances, but YMMV.
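
For the single-codebase flavor, the runtime toggle could be as simple as a registry plus a config file. Everything here, the heuristic names, the registry, and the config layout, is illustrative:

```python
import json

def noop_detector(session_packets: list) -> set:
    return set()  # placeholder; real detectors like the ammo heuristic plug in here

# Illustrative registry: heuristic name -> detector function.
HEURISTICS = {
    "infinite_ammo": noop_detector,
    "wallhack": noop_detector,
    "speedhack": noop_detector,
}

def load_active_heuristics(config_path: str) -> dict:
    """The config file maps heuristic name -> weight; a weight of 0 turns a scanner off."""
    with open(config_path) as f:
        weights = json.load(f)
    return {name: fn for name, fn in HEURISTICS.items() if weights.get(name, 0) > 0}

def scan_session(session_packets: list, config_path: str = "heuristics.json") -> dict:
    """Run every active scanner over one session and collect suspects per heuristic."""
    active = load_active_heuristics(config_path)
    return {name: detector(session_packets) for name, detector in active.items()}
```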

Oh, and One More Thing

This system could also be leveraged for playbacks, which would be nice for generating footage of great plays from multiple angles. Handy for, say, a game intended to be a competitive e-sport meant for streaming. Hint hint.

Written by Tim Keating

Tim Keating is a Senior Software Engineer at Indeed.com. He has been developing software, and talking about developing software, for a long, long time.
