- Log entry "Project showcase: Defender"

> Author: Twentysix
> Inserted on: 2020-09-06 07:00:22 +0000
> Total words: 1260
> Estimated reading time: 7 minutes
> Estimated reader's enjoyment: ERROR: Division by zero.
> Tags: red discord
==========================================

After a long, long hiatus from hobby development I’ve been getting back on track during my vacation. Because what’s a better time to code than summer? The sun sucks. I’ve made a few cool projects, and one of them is a new cog for Red: Defender.

```python
# My secret recipe for original project names
from random import choice

# Stand-in word list; any common dictionary will do
common_dictionary_words = ["defender", "warden", "sentinel", "guardian"]

class Project:
    def __init__(self, name):
        self.name = name

def get_new_project_name():
    return choice(common_dictionary_words)

project = Project(get_new_project_name())
```

> Defending your turf

They say that necessity is the mother of invention, and well, it is true. At Red’s we have a pretty sizable userbase and daily traffic, and despite not being even close to absolute behemoths like the Fortnite server (God bless the poor souls moderating the place) we do get our fair share of trolls, advertisers and bad actors in general. Our staff also has the bad habit of sleeping or, worse, being away from their IM device of choice from time to time. We also believed we were at risk of raids recently, so I was extra motivated not to let us get caught unprepared.

“So what did you do 26, did you hire new staff?”

Hell no, I took my Red instance and turned it into 10… slightly dumb, tireless, staff members.

> Ranks of trust

My main goal was to develop some automod features, that is, actions Red can take with no manual input from us, plus some manual tools for both our staff and trusted regulars. I had quite a few automod ideas in mind: raid detection, anti-advertisement, reporting of suspicious joins. A few cogs already exist for this, but one thing bothered me: I didn’t want everyone to be subjected to these new measures; they had to be tailored to counter common patterns. For example, I wouldn’t want a regular user to get auto-banned for posting an invite, or a contributor to get banned for message spam while testing their bot. Whitelists of channels, users and roles are a thing, but I wasn’t looking for something that required a lot of upkeep.

So I decided that if Defender were to work how I wanted, it had to split our userbase into 4 different ranks. For each new action a user takes, Defender first identifies which rank the user belongs to by running a series of checks; that rank is then run against the individual auto features (auto modules from now on) to decide whether the user will be subjected to them.

[Image: triangle of trust]

This marvellous Picasso right here, other than showing exactly why I didn’t pick art school, illustrates how the various ranks are assigned: based on high roles (trusted users), join date and activity. Which roles count as trusted is of course configurable, and so are the day and message thresholds of the lower ranks. Defender, at the time of writing, stores pretty much no user data other than a message counter: everything is computed at runtime.

Each auto module has a settable target rank: for example, we have our invite filter set at Rank 4, targeting only users who recently joined and have very low recorded activity. This protects trusted users, regulars and people who are getting to know the server from getting caught in the crossfire.

Each auto module is configurable, allowing you to decide target rank, action to take, etc.
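In code, the rank check above can be sketched roughly like this. This is a minimal sketch under my own assumptions: `Member`, `TRUSTED_ROLES`, the specific cutoffs, and the "a module targets its rank and below in trust" rule are placeholders of mine, not Defender’s actual code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Member:
    roles: set[str]
    joined_at: datetime
    message_count: int

# Placeholder thresholds; in Defender these are configurable
TRUSTED_ROLES = {"Staff", "Contributor"}
RANK3_DAYS = 1        # joined more than this many days ago -> Rank 2
RANK4_MESSAGES = 50   # recent join with fewer messages than this -> Rank 4

def assign_rank(member: Member, now: datetime) -> int:
    if member.roles & TRUSTED_ROLES:
        return 1  # trusted: exempt from the auto modules
    if now - member.joined_at > timedelta(days=RANK3_DAYS):
        return 2  # regular member
    if member.message_count >= RANK4_MESSAGES:
        return 3  # recent join, but already active
    return 4      # recent join with very low recorded activity

def module_applies(module_target_rank: int, member_rank: int) -> bool:
    # Assumption: a module set at Rank N targets Rank N and anyone
    # less trusted (numerically higher), so a Rank 4 invite filter
    # only ever hits brand-new, low-activity users.
    return member_rank >= module_target_rank
```

Everything here is computed on the fly from the member’s roles, join date and message counter, which matches the "no stored user data beyond a counter" design described above.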

> Weaponized all seeing eye

Defender offers a variety of manual modules meant to help staff members in case of large scale attacks and also tools to monitor the server with ease. Some notable mentions:


Some cool toys, uh? But of course, even with all the modules I listed, there is still the need for human intervention sometimes. It’s just not possible to auto-detect everything: some situations can only be judged by a human. One of my main worries was moderation holes, the situation where half of the team is asleep and the other half is busy ~~watching Netflix~~ doing stuff.

“So you DID hire new staff!”

No! I’m a developer! I have added another feature to counter this situation.

> In case of emergency break glass

I don’t know about your community, but at Red’s we have quite a lot of users who, if given a little bit of power, most likely won’t set the whole place on fire. So, why not let them help when the staff accidentally decides to take a break all at the same time? We’ll call these users helpers.

In the first versions of Defender I included the alert command. This command allowed said helpers to shoot a @Staff ping in our staff channel, which also gave us precise context about who issued the alert and where. It was already pretty useful as is, but I have since expanded it: there is now the option to make it trigger an emergency mode after X minutes of staff inactivity. During emergency mode, modules that have been set to be enabled during said mode become available to helpers. The already mentioned vaporize and silence are two modules that can be set as such, but those are pretty OP weapons to give out. The one module that I designed explicitly for this is voteout. Voteout allows helper roles to start a voting session against a user; if the session reaches the threshold, the user is expelled. Of course target rank, vote threshold and action are all configurable, like most parts of this cog.

When the staff finally comes back, Defender will detect their activity and emergency mode will automatically be lifted, revoking access to the emergency modules. Emergency mode can also be manually triggered by staff in case, for some reason, you want to ring the alarm bells and turn all your helpers into temporary mods :-)
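The emergency flow above can be sketched as follows. Again, this is a rough illustration under my own assumptions; the class, names and defaults are placeholders, not Defender’s internals.

```python
from datetime import datetime, timedelta

# Placeholder settings; in Defender these are configurable
STAFF_INACTIVITY = timedelta(minutes=30)  # the "X minutes" mentioned above
VOTES_THRESHOLD = 3

class EmergencyState:
    def __init__(self, now: datetime):
        self.last_staff_activity = now
        self.active = False

    def on_staff_activity(self, when: datetime):
        # Any staff activity automatically lifts emergency mode,
        # revoking helper access to the emergency modules
        self.last_staff_activity = when
        self.active = False

    def check(self, now: datetime) -> bool:
        # Called periodically (e.g. after a helper alert): arm emergency
        # mode once the staff has been quiet for long enough
        if now - self.last_staff_activity >= STAFF_INACTIVITY:
            self.active = True
        return self.active

def voteout_passes(votes: set[str]) -> bool:
    # The target is expelled once the configured threshold is reached
    return len(votes) >= VOTES_THRESHOLD
```

The key design point is that helpers never hold standing power: the extra modules only exist inside the window between detected staff inactivity and the next staff action.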


Hope you have enjoyed the write-up. It’s a lot to take in, as the cog is quite feature-packed, but the [p]defender status command does its best to explain how everything works and how everything is currently set.


See ya!

> A small addendum (09/18)

Defender now includes Warden, a new and versatile auto module that lets admins define custom rules by combining a rich set of events, conditions and actions. The applications are many: enhanced monitoring, message filtering, staff notifications, automod actions… Take a look at the guide here!
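To give an idea of the event/condition/action model, here is a toy rule engine in the same spirit. This is purely my own illustrative sketch, not Warden’s actual rule syntax; the rule, its condition and its action are made up.

```python
# Toy Warden-like dispatcher: an event fires, each matching rule checks
# its conditions, and if they all pass its actions run.

RULES = [
    {
        "name": "filter-telegram-links",
        "event": "on_message",
        "conditions": [lambda msg: "t.me/" in msg["content"]],
        "actions": [lambda msg: print(f"deleting message from {msg['author']}")],
    },
]

def dispatch(event: str, payload: dict) -> list:
    fired = []
    for rule in RULES:
        if rule["event"] != event:
            continue
        if all(cond(payload) for cond in rule["conditions"]):
            for action in rule["actions"]:
                action(payload)
            fired.append(rule["name"])
    return fired
```

Separating the three stages is what makes this kind of system so flexible: the same building blocks cover monitoring, filtering and notifications just by mixing different conditions with different actions.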