
rangifer’s diary: pt. xci

What’s our level range, again?

If you’ve ever done “points farming” or BRPQ (“Boss Rush Party Quest”; “BPQ”) during a MapleLegends event, then you already know how party recruitment goes. For farming event points, you’re looking for fellow HHGII-goers (or PP1-goers, or what have you) that are within 20 levels of you. In this context, “𝑐1 is within 𝑘 levels of 𝑐2” means that the absolute difference between the levels of PCs 𝑐1 and 𝑐2 is less than or equal to 𝑘. Symbolically, we’d write this as |ℓ1 − ℓ2| ≤ 𝑘, where ℓ𝑖 is 𝑐𝑖’s level. For event point farming, 𝑘 = 20; for BRPQ, 𝑘 = 30.
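As a quick illustration, the “within 𝑘 levels” check is a one-liner (this little helper and its name are mine, not anything official):

```python
# A minimal sketch of the "within k levels" check described above.
def within_k(l1: int, l2: int, k: int) -> bool:
    """True iff |l1 - l2| <= k, i.e. iff c1 R_k c2 holds."""
    return abs(l1 - l2) <= k

# Event point farming uses k = 20; BRPQ uses k = 30.
print(within_k(100, 119, 20))  # True
print(within_k(100, 121, 20))  # False
print(within_k(100, 121, 30))  # True
```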


You’ll notice that “𝑐1 is within 𝑘 levels of 𝑐2” sounds exactly like some kind of binary relation. More specifically, for a given value of 𝑘, this is a homogeneous relation over 𝐶, where 𝐶 is the set of all PCs (𝑐𝑖 ∈ 𝐶). This homogeneous relation — which we’ll just call 𝑅𝑘 — has some nice properties:

  - It’s reflexive: |ℓ − ℓ| = 0 ≤ 𝑘, so every PC is within 𝑘 levels of itself.
  - It’s symmetric: |ℓ1 − ℓ2| = |ℓ2 − ℓ1|, so 𝑐1⁡𝑅𝑘⁡𝑐2 implies 𝑐2⁡𝑅𝑘⁡𝑐1.

However, it also lacks some properties that we’re familiar with having in many binary relations:

  - It’s not transitive: with 𝑘 = 20, if ℓ1 = 100, ℓ2 = 120, & ℓ3 = 140, then 𝑐1⁡𝑅20⁡𝑐2 and 𝑐2⁡𝑅20⁡𝑐3, and yet not 𝑐1⁡𝑅20⁡𝑐3.
  - As a result, it’s not an equivalence relation, despite being reflexive & symmetric.

Like I said, 𝑅𝑘 can’t be an equivalence relation because it lacks transitivity. However, I found out that there’s actually a special name for this kind of relation: a tolerance relation. The name makes sense; we essentially have some level of “tolerance” (which in our particular case is called 𝑘) for how “distant” objects may be from one another before we consider them to be “discernible” from one another. Equivalence relations (or rather, congruence relations) are the special case of tolerance relations where the “tolerance” is “zero”, or is otherwise nonexistent in some sense; then, it’s not possible for “tolerances” to “accumulate” (as they do in the “𝑐1⁡𝑅20⁡𝑐3” example above) and cause intransitivity. In the language of tolerance relations, when 𝑐1⁡𝑅𝑘⁡𝑐2 holds, we say that “𝑐1 is indiscernible from 𝑐2”. This makes (𝐶, 𝑅𝑘) a tolerance space.
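These properties of 𝑅𝑘 are easy to check mechanically; here’s a quick sketch with 𝑘 = 20:

```python
# R_k as a predicate, here with k = 20 (the event point farming tolerance).
k = 20

def R(l1, l2):
    return abs(l1 - l2) <= k

# reflexive: every PC is indiscernible from itself
assert all(R(l, l) for l in range(1, 201))
# symmetric: the absolute difference doesn't care about order
assert R(100, 115) and R(115, 100)
# but NOT transitive: tolerances accumulate
assert R(100, 120) and R(120, 140) and not R(100, 140)
print("reflexive & symmetric, but not transitive")
```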

Furthermore, a tolerance relation over a finite set is called a dependency relation. Dependency relations are important in concurrent computation, where they are so called because they relate two objects iff those objects depend on one another in some way, thus precluding them from being computed concurrently. For example, if you want to cook some food and then eat it, then there is a dependency between the “cooking” part and the “eating” part; you cannot do them concurrently — unless you actually wanted to eat the uncooked food, in which case, why even bother cooking in the first place?

Aggregate tolerance

In our particular case of recruiting for a party (whether it be for farming event points, or for BRPQing, or for something similar), we ideally want — and, in the case of BRPQ, absolutely need — this tolerance relation 𝑅𝑘 to hold pairwise for all members of the party. Symbolically, if our party is 𝑆 (s.t. 𝑆 ⊆ 𝐶), then we want:

∀(𝑐1, 𝑐2) ∈ 𝑆2 : 𝑐1⁡𝑅𝑘⁡𝑐2.

With this in mind, there’s a particular phenomenon that we quickly notice: as the party’s size (i.e. its number of members) grows, we see a shrinking of the range of levels that a new member could possibly be. For example, if we represent our party as a multiset of levels, and we let 𝑘 ≔ 20, then we might start with e.g. {100}, and the range of levels that a new member could possibly be is [80, 120]. If we select some arbitrary level from this interval, let’s say e.g. 110, then our party is {100, 110}, and the valid level interval for a new member is now [90, 120], which is a strict subset of the original interval. In general, adding a new party member may shrink the valid level interval or leave it unchanged, but can never widen it; the new interval is always a subinterval (not necessarily strict) of the old one. This means that the length of the interval decreases weakly monotonically as the party size grows.

In particular, the valid level interval for a new party member, as a function of the multiset of levels of the current party members (call this multiset 𝐿), is:

ℐ𝑘⁡(𝐿) = [𝗆𝖺𝗑⁡(𝐿) − 𝑘, 𝗆𝗂𝗇⁡(𝐿) + 𝑘].

Note that an interval like [1, 0] — where the lower bound is strictly greater than the upper bound — is the empty set.

As a result, the interval only changes if the new member (we’re not going to consider the possibility of members leaving the party or being kicked out) changes the maximum level or minimum level present in the party. In other words, the interval only changes if the new member’s level is either strictly greater than, or strictly less than, all old members’ levels.
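A sketch of ℐ𝑘 as code (the `valid_interval` name is mine), reproducing the {100} → {100, 110} example from above:

```python
# I_k(L) = [max(L) - k, min(L) + k]; None stands for the empty interval.
def valid_interval(levels, k):
    lo, hi = max(levels) - k, min(levels) + k
    return None if lo > hi else (lo, hi)

party = [100]
print(valid_interval(party, 20))  # (80, 120)
party.append(110)
print(valid_interval(party, 20))  # (90, 120)
party.append(105)  # neither a new minimum nor a new maximum...
print(valid_interval(party, 20))  # ...so the interval is unchanged: (90, 120)
```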

Interval/bound convergence

I noted that the length of the interval decreases weakly monotonically as the party size grows, and in fact, it’s clear that it can “converge” to an absolutely minimal length of 𝑘. This occurs when 𝗆𝖺𝗑⁡(𝐿) − 𝗆𝗂𝗇⁡(𝐿) = 𝑘, and thus ℐ𝑘⁡(𝐿) = [𝗆𝗂𝗇⁡(𝐿), 𝗆𝖺𝗑⁡(𝐿)]. At exactly this point, it no longer matters how many new members join, nor what levels those new members are — the interval will never change.

We don’t typically have any way to accurately predict the level of the next PC who will ask to join the party — we’re usually just recruiting whichever random people happen to show up at that time. Thus, we have no good choice but to model the level of a new party member as effectively a random variable. In doing so, assuming that we assign a nonzero probability to each level[1], we can intuitively[2] see that |ℐ𝑘| almost surely converges to 𝑘, as the party size tends to +∞.

Given that we expect to converge in this way after a sufficient number of new members join our party, the question that I want to ask seems simple: how long does it take to converge?

In order to answer this question, we need to define things in a way that is a bit more clear-cut. When we say “how long”, we really mean “how many new party members”; each time that a new party member is added, I call this a single “step” of the process. Furthermore, as it is, there’s actually no guarantee that we converge at all, because we’re still talking about MapleStory parties, which are limited to a maximum cardinality of 6. So we want to lift that restriction, allowing arbitrarily large (but still always finite) parties. Similarly, we want to lift the level restrictions that MapleStory inherently possesses, because it makes things more asymmetrical and more difficult to reason about; if 𝑘 = 20 and 𝐿 = {190}, then the formula above suggests that ℐ𝑘⁡(𝐿) = [170, 210], but in practice it’s actually [170, 200], because the maximum level in MapleStory is 200. So instead, we will assume that any element of ℤ is a valid level for a PC to have, and anything else (e.g. 13.5, π, ♣, etc.…) is invalid. Also similarly, we want to constrain 𝑘 so that 𝑘 ∈ ℕ.[3]

Another part that we have to specify is how PC levels are distributed. We’ve already accepted that the level of a new party member is random, but we’ve yet to specify in what way. By invoking the principle of indifference, we can just assume that PC levels are distributed uniformly. Obviously, in practice, the “actual” (empirical) distribution is not strictly uniform; for example, level 120 characters are probably more common than level 119 characters in MapleLegends. But trying to get a better approximation is almost certainly more trouble than it’s worth, and much more importantly, we don’t actually care; although this problem is modelled on empirical player-driven phenomena that occur in (a particular implementation of) MapleStory, it is really just a mathematical problem that we have in mind.

WLOG (we’ll get a better idea of why this doesn’t lose generality in just a bit), we can assume for simplicity that our initial value of 𝐿 is {0}. Then, given some value of 𝑘, we want to ask: what is the probability that we converge after 𝑛 steps (𝑛 ∈ ℕ)? What we’re asking for here is a probability mass function (PMF) that is parameterised by 𝑘. Because this PMF more or less defines[4] the probability distribution that we’re after, I’m going to refer to the PMF as simply 𝖱𝖡𝖢⁡(𝑘), where “𝖱𝖡𝖢” stands for “recruitment bound convergence”. Then, 𝖱𝖡𝖢⁡(𝑘)⁡(𝑛) is the probability that, given some value for 𝑘, we converge in exactly 𝑛 steps.
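Before doing anything exact, we can at least simulate the process; here’s a rough Monte Carlo sketch (all names mine) that estimates values of 𝖱𝖡𝖢⁡(𝑘) by just running the recruitment process a bunch of times:

```python
import random

# Simulate one run: start from L = {0} (tracking only min & max), draw new
# levels uniformly from the valid interval, and count steps until
# max(L) - min(L) == k.
def steps_to_converge(k, rng):
    lo = hi = 0
    steps = 0
    while hi - lo < k:
        level = rng.randint(hi - k, lo + k)  # uniform over I_k(L) ∩ Z
        lo, hi = min(lo, level), max(hi, level)
        steps += 1
    return steps

rng = random.Random(2023)
trials = 100_000
counts = {}
for _ in range(trials):
    n = steps_to_converge(20, rng)
    counts[n] = counts.get(n, 0) + 1

# RBC(20)(1) should be 2/41 ≈ 0.0488: exactly 2 of the 41 levels in
# [-20, 20] cause convergence on the very first step.
print(counts.get(1, 0) / trials)
```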

Why do we care about 𝖱𝖡𝖢⁡(𝑘)? Iunno. I just thought of it one time when I was farming event points or something. Sue me. The point is that I spent absolutely way too long obsessing over it, and it is now that we will see what I have uncovered about this mysterious probability distribution. I’m going to go over what I learned in roughly the order that I learned it, and with roughly the methods by which I learned it.

Footnotes for “Interval/bound convergence”

  1. [↑] At least, to every level that is valid for a PC to be, which in MapleStory is usually [1, 200] ∩ ℤ. Of course, merely assigning a nonzero probability to each level in ℐ𝑘 does the trick, as well.
  2. [↑] By waving our hooves and invoking the infinite monkey theorem, or something…
  3. [↑] In general, we could loosen the constraints on 𝑘 so that it merely has to be a non-negative real number. However, if we do this, then ⌊𝑘⌋ and 𝑘 are effectively the same, so it will make things a lot easier to just assume by convention that 𝑘 is a natural number.
  4. [↑] See also: “Probability space”.

Phase I: Worked examples

Because this whole thing was a little too abstract, I decided to first make it concrete by working out, by hoof, a particular example.

Consider 𝑘 ≔ 4. What’s the probability that we converge in exactly 1 step? In other words, what is the value of 𝖱𝖡𝖢⁡(4)⁡(1)? Well, we converge in exactly 1 step iff the first PC to join the party is level 4 or is level −4. We can see that ℐ4⁡{0} = [−4, 4], so the set of possible levels that this first step could add to the party is the intersection of this interval with ℤ, viz. {−4, −3, −2, −1, 0, 1, 2, 3, 4}. The cardinality of this set is 9, and there are 2 possible values in the set that cause us to converge (viz. {−4, 4}), so 𝖱𝖡𝖢⁡(4)⁡(1) = 2⁄9 = 0.222…. Easy, right?

Okay, but what about 𝖱𝖡𝖢⁡(4)⁡(2)? Well, we can take advantage of the symmetry of 𝑅𝑘 to observe that, in the first step, adding a level ℓ PC to the party is the same as adding a level −ℓ PC to the party. More generally, instead of laboriously tracking the exact state of 𝐿 at each step, we can note that we really only care about |ℐ4⁡(𝐿)|, which itself only depends on 𝗆𝗂𝗇⁡(𝐿) and 𝗆𝖺𝗑⁡(𝐿). So, because levels are distributed uniformly (meaning that 𝐿 is effectively invariant under constant shifts[1], e.g. adding +17 to all levels), we can simply pretend that 𝐿 = {0, 𝗆𝖺𝗑⁡(𝐿) − 𝗆𝗂𝗇⁡(𝐿)}. For example, we’d simplify {0, 1, 3} to {0, 3}, and we’d simplify {−2, −2, −1, 1} to {0, 3} as well. These observations & simplifications will make it significantly easier to compute values of 𝖱𝖡𝖢 by hoof, but will also contribute to a more systematic (read: not by hoof) treatment of 𝖱𝖡𝖢. Working through the calculation of 𝖱𝖡𝖢⁡(4)⁡(2), we end up with something like:

| we add | 𝐿 | valid new levels | we add | probability |
|---|---|---|---|---|
| −3, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 2⁄9 ⋅ 2⁄6 = 2⁄27 |
| −2, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −2, 4 | 2⁄9 ⋅ 2⁄7 = 4⁄63 |
| −1, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | −3, 4 | 2⁄9 ⋅ 2⁄8 = 1⁄18 |
| 0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | −4, 4 | 1⁄9 ⋅ 2⁄9 = 2⁄81 |

These 4 cases make up all possible cases where we converge in exactly 2 steps: with our simplifications, there are only 4 possible levels that we can effectively add in the first step without converging (we can’t converge at that point because then it would be converging in exactly 1 step instead of 2), and then we always have to choose a level that causes us to converge in the second step, of course. Notice that no matter how many steps we converge in, there are always exactly 2 possible levels that can be added to 𝐿 in the final step to make it converge: either the level that’s exactly high enough to converge, or that’s exactly low enough. Because these 4 cases are exhaustive, we can simply add their probabilities to get the value of 𝖱𝖡𝖢⁡(4)⁡(2) = 2⁄27 + 4⁄63 + 1⁄18 + 2⁄81 = 247⁄1134 ≈ 0.217 813.

That was a slight bit of a pain in the ass, but don’t worry — it gets way more painful from here. What about 𝖱𝖡𝖢⁡(4)⁡(3)? Prepare to be sorry that you asked:

| we add | 𝐿 | valid new levels | we add | 𝐿 | valid new levels | we add | probability |
|---|---|---|---|---|---|---|---|
| 0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | 0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | −4, 4 | 1⁄9 ⋅ 1⁄9 ⋅ 2⁄9 = 2⁄729 |
| | | | −1, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | −3, 4 | 1⁄9 ⋅ 2⁄9 ⋅ 2⁄8 = 1⁄162 |
| | | | −2, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −2, 4 | 1⁄9 ⋅ 2⁄9 ⋅ 2⁄7 = 4⁄567 |
| | | | −3, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 1⁄9 ⋅ 2⁄9 ⋅ 2⁄6 = 2⁄243 |
| −1, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | 0, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | −3, 4 | 2⁄9 ⋅ 2⁄8 ⋅ 2⁄8 = 1⁄72 |
| | | | −1, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −2, 4 | 2⁄9 ⋅ 2⁄8 ⋅ 2⁄7 = 1⁄63 |
| | | | −2, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 2⁄9 ⋅ 2⁄8 ⋅ 2⁄6 = 1⁄54 |
| −2, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | 0, 1, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −2, 4 | 2⁄9 ⋅ 3⁄7 ⋅ 2⁄7 = 4⁄147 |
| | | | −1, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 2⁄9 ⋅ 2⁄7 ⋅ 2⁄6 = 4⁄189 |
| −3, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | 0, 1, 2, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 2⁄9 ⋅ 4⁄6 ⋅ 2⁄6 = 4⁄81 |

Fun!!! Adding up the probabilities of these (exhaustive) cases gets us 𝖱𝖡𝖢⁡(4)⁡(3) = 2⁄729 + 1⁄162 + 4⁄567 + 2⁄243 + 1⁄72 + 1⁄63 + 1⁄54 + 4⁄147 + 4⁄189 + 4⁄81 = 48649⁄285768 ≈ 0.170 239 5.
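If you don’t trust my by-hoof arithmetic (I wouldn’t blame you), it can be double-checked by brute force; this throwaway sketch (names mine) enumerates every possible join sequence, tracking only the min & max of 𝐿, with exact rational arithmetic:

```python
from fractions import Fraction as Q

# Sum, over all join sequences of length n starting from L = {0}, of the
# probability of each sequence that *first* converges at step n.
def rbc_enum(k, n):
    def go(lo, hi, steps_left):
        levels = range(hi - k, lo + k + 1)  # I_k(L) ∩ Z
        p_each = Q(1, len(levels))
        total = Q(0)
        for level in levels:
            nlo, nhi = min(lo, level), max(hi, level)
            converged = nhi - nlo == k
            if steps_left == 1:
                if converged:
                    total += p_each
            elif not converged:
                total += p_each * go(nlo, nhi, steps_left - 1)
        return total

    return go(0, 0, n)

print(rbc_enum(4, 1))  # 2/9
print(rbc_enum(4, 2))  # 247/1134
print(rbc_enum(4, 3))  # 48649/285768
```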

I think that’s about enough of that…

Footnotes for “Phase I: Worked examples”

  1. [↑] More formally, the elements of 𝐿 are points in an affine space. This affinity is naturally induced by the fact that 𝑅𝑘 is defined in terms of subtraction between these “points”.

Phase II: What’s the probability that we converge in exactly 𝑛 steps?

Now that we’ve worked a good way through an example, we’re going to try to make the leap to a generalisation. I did a lot of scribbling nonsense, and a lot of thinking really hard about it, and I’m going to try to give a brief explanation of my reasoning.

First of all, we’re going to define a new variable called 𝑑, which represents the difference (you can also think of it as standing for “distance” or “diameter”) between 𝗆𝖺𝗑⁡(𝐿) and 𝗆𝗂𝗇⁡(𝐿). In other words, the simplification of 𝐿 that I talked about above can be defined as simply {0, 𝑑}.

As I was trying to come up with a general solution, I started scribbling out some pseudocode, and I ended up with something like this to capture the general structure of the example used in Phase I — in particular, 𝖱𝖡𝖢⁡(4)⁡(3):

sum a ← [-3..3]:
    d_a := |a|
    sum b ← [(d_a - 3)..3]:
        d_b := max{d_a - b, b, d_a}
        sum _ ← [0, 1]:
            1/9 * 1/(9 - d_a) * 1/(9 - d_b)

I’m using the sum a ← [-3..3]: notation to mean that we’re summing over a, with the values of a being taken from the list [-3..3] (which, in Haskell notation, is the same as the list [-3, -2, -1, 0, 1, 2, 3]), and the summands simply being the value(s) that the block takes on (which may themselves be summations…). I’m also using := for assignment.

As you can see, I’m essentially just summing over all possible cases, i.e. every possible level that the first added party member could have, combined with every possible level that the second added party member could have, and so on. In the first two sum loops, we are avoiding convergence and considering every remaining possibility, and in the third and final sum loop, we just consider the 2 cases that cause convergence — which is why there is no summation variable, and instead just a blank _ — I’ve used the list [0, 1], but any list of length 2 would work equally well.

The pattern here is hopefully clear: if we wanted to consider larger values of 𝑛 like 𝖱𝖡𝖢⁡(4)⁡(4), 𝖱𝖡𝖢⁡(4)⁡(5), ⋯, we would just add another sum layer (sum c ← [(d_b - 3)..3]:, ⋯) that calculates yet another value of 𝑑 (d_c := max{d_b - c, c, d_b}, ⋯), and the arithmetic expression in the innermost sum loop would look like 1/9 * 1/(9 - d_a) * 1/(9 - d_b) * 1/(9 - d_c) * ⋯.

Of course, there’s also the question of how I came up with “max{d_a - b, b, d_a}” in the first place. I basically came up with this expression by considering two cases, which are together exhaustive: that the summation variable 𝑏 is negative (i.e. 𝑏 < 0), and that 𝑏 is non-negative (i.e. 𝑏 ≥ 0):

In other words:

  - If 𝑏 < 0, then 𝑏 becomes the new minimum, so the new value of 𝑑 is 𝑑𝑎 − 𝑏 (equivalently, 𝑑𝑎 + |𝑏|).
  - If 𝑏 ≥ 0, then the new value of 𝑑 is 𝑑𝑎 + 𝗆𝖺𝗑{𝑏 − 𝑑𝑎, 0}: if 𝑏 ≤ 𝑑𝑎, then nothing changes, and if 𝑏 > 𝑑𝑎, then 𝑏 becomes the new maximum.

𝗆𝖺𝗑’ing the “𝑏 − 𝑑𝑎” with 0 covers the “𝑏 is somewhere within the interval [0, 𝑑𝑎]” case above, causing 𝑑 to be increased by 0 (i.e. to not change) in that case. Thus, we can refine how we split up the computation of “𝑑𝑎 + 𝗆𝖺𝗑{𝑏 − 𝑑𝑎, 0}” above.

So, we now have these three exhaustive cases:

  - 𝑏 < 0, in which case the new value of 𝑑 is 𝑑𝑎 − 𝑏;
  - 0 ≤ 𝑏 ≤ 𝑑𝑎, in which case 𝑑 is unchanged, i.e. the new value is 𝑑𝑎;
  - 𝑏 > 𝑑𝑎, in which case the new value of 𝑑 is 𝑏.

…Which is where “𝗆𝖺𝗑{𝑑𝑎 − 𝑏, 𝑏, 𝑑𝑎}” (max{d_a - b, b, d_a}) comes from.

This brings us to the first implementation of 𝖱𝖡𝖢 that I came up with that, you know, at the very least didn’t require doing anything “by hoof”. It’s an implementation in Python, with a function called pmf0:

from fractions import Fraction as Q

def pmf0(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return sum(pmf_inner0(k, n, 0))

def pmf_inner0(k, n, d):
    denom = 2 * k + 1 - d

    if n == 1:
        yield Q(2, denom)
    else:
        for elem in range(d - (k - 1), k):
            d_prime = max(d - elem, elem, d)
            for f in pmf_inner0(k, n - 1, d_prime):
                yield Q(1, denom) * f

pmf0 is entirely correct, and it calculates its result with unlimited precision (using Python’s built-in rational number arithmetic). It is, however, basically the worst implementation — I continued refining it gradually to make something better.

As you’d expect, pmf0 (or rather, pmf_inner0) is recursive, in order to implement that variably-sized (viz. the size of 𝑛) nested stack of sum loops that we saw in the pseudocode above. The strategy here is to implement the inner recursive bit (pmf_inner0) as a generator that yields the summands one by one, and then pmf0 does the job of summing them all up with the built-in sum function.

We also have special-casing for when 𝑘 = 0, in which case the starting state (𝐿 ≔ {0}) is already converged, so the probability of converging in exactly 0 steps is 1, and the probability of converging in exactly 𝑛 steps for any other value of 𝑛 is 0. On the other hand, if 𝑘 ≠ 0 and yet 𝑛 < 1, then we cannot possibly converge (as we need at least 1 step to get from 𝐿 ≔ {0} to convergence), so we always return 0 in such cases.

Phase III: Can we go faster?

So, we accomplished something nice: we now have a computer program that will calculate 𝖱𝖡𝖢⁡(𝑘)⁡(𝑛) accurately (indeed, with unlimited precision) for any values of 𝑘 and 𝑛. But I still have three serious issues with what we’ve got so far:

  1. This implementation is ludicrously slow. Using pmf0 is not recommended unless you just want to laugh at how slow it is.
  2. What I was dreaming of, in my mind, when starting on this problem, was coming up with a nice-looking formula — the kind that you might see in a textbook on, say, probability theory or something. We’re definitely not there yet.
  3. I also care about some other properties of the distribution. Knowing the PMF is certainly extremely important and effectively defines the distribution, but I also want to know things like the expectation, the variance, etc.

So, let’s tackle these three issues, roughly in the order stated above.

My next iteration on the Python implementation is called pmf1, and it looks something like this:

def pmf1(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return pmf_inner1(k, n, 0)

def pmf_inner1(k, n, d):
    denom = 2 * k + 1 - d

    if n == 1:
        return Q(2, denom)
    else:
        return sum(
            Q(1, denom) * pmf_inner1(k, n - 1, max(d - i, i, d))
            for i in range(d - (k - 1), k)
        )
As you can see, pmf1 basically looks like pmf0 except that it calls pmf_inner1 instead of pmf_inner0, and it doesn’t sum the results. The main thing that distinguishes pmf_inner1 from pmf_inner0 is that it does its own summation, thus being an ordinary function (which just returns a single rational number) instead of a generator. This makes things more explicit. It also possibly removes some overhead that the generator approach might have, but really I wouldn’t expect it to be much faster — the added explicitness is what’s important.

That brings us to the next iteration, pmf2, which is where we see the real performance gains:

def pmf2(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return pmf_inner2(k, n, 0)

def pmf_inner2(k, n, d):
    cache = {}

    def f(n, d):
        if (n, d) in cache:
            return cache[(n, d)]

        denom = 2 * k + 1 - d

        if n == 1:
            return Q(2, denom)
        else:
            r = sum(
                Q(1, denom) * f(n - 1, max(d - i, i, d))
                for i in range(d - (k - 1), k)
            )
            cache[(n, d)] = r

            return r

    return f(n, d)

That’s right, this is a job for dynamic programming! By memoising already-computed values of f(n, d) (k is a constant for a given invocation of pmf_inner2, so we don’t need to store its values) in a dictionary called cache, we avoid most of the recursion that we would otherwise perform, thus dramatically cutting down the computational complexity of pmf2.
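As an aside, Python’s functools.lru_cache can do the same memoisation for us, without the hand-rolled dictionary; here’s a sketch (the name pmf2_lru is mine) that should be equivalent to pmf2:

```python
from fractions import Fraction as Q
from functools import lru_cache

# The same dynamic programming as pmf2, but with the memoisation handled by
# functools.lru_cache instead of a manual cache dict.
def pmf2_lru(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    @lru_cache(maxsize=None)
    def f(n, d):
        denom = 2 * k + 1 - d
        if n == 1:
            return Q(2, denom)
        return sum(
            Q(1, denom) * f(n - 1, max(d - i, i, d))
            for i in range(d - (k - 1), k)
        )

    return f(n, 0)

print(pmf2_lru(4, 1))  # 2/9
print(pmf2_lru(4, 2))  # 247/1134
```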

Oh, but we can do even better. As it turns out, we’re actually unnecessarily duplicating quite a bit of work. We did all of that reasoning above to arrive at “𝑑𝑏 ≔ 𝗆𝖺𝗑{𝑑𝑎 − 𝑏, 𝑏, 𝑑𝑎}”. However, there’s actually a similar, but easier, way to look at it. If 𝐿 = {0, 𝑑} and we haven’t yet converged, then the next level added to 𝐿 falls into exactly one of three cases: strictly less than 0, somewhere within [0, 𝑑], or strictly greater than 𝑑. Crucially, the number of possible levels in the first case is exactly the same as the number of possible levels in the third and final case — viz. 𝑘 − 𝑑 — and these levels are symmetrical; we get the same result if we consider adding −1 to 𝐿 as if we consider adding 𝑑 + 1, the same result if we consider adding −2 as if we consider adding 𝑑 + 2, and so on. And adding any level to 𝐿 that is within [0, 𝑑] yields the same result as adding any other level that’s also within that interval, so we can just do the computation for one particular level, and then multiply the result by the number of possible levels within this interval, viz. 𝑑 + 1.

Just to make things more concrete with an example, consider 𝑘 = 8 and 𝐿 ≔ {0, 3}. The set of possible levels that could be added to 𝐿 in the next step are as follows (this is computed as ℐ8⁡{0, 3} ∩ ℤ):

{−5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, 6, 7, 8}.

We can partition this set into the three cases that we have in mind:

{−5, −4, −3, −2, −1} ∪ {0, 1, 2, 3} ∪ {4, 5, 6, 7, 8}.

As expected, the first and last sets in this partition both have cardinality 𝑘 − 𝑑, and the middle set has cardinality 𝑑 + 1. In this example, the symmetry is that −1 & 4 are paired, −2 & 5 are paired, −3 & 6 are paired, −4 & 7 are paired, and −5 & 8 are paired. By “paired”, I of course mean that the result (i.e. the new value of 𝑑) is effectively the same, no matter which element of the pair is added to 𝐿 in the next step. So, for example, instead of computing a probability under the assumption that −1 is added to 𝐿 in the next step, and then doing the same for 4, and then adding those two results; we can simply do it for just 4, and then double the result. Furthermore, instead of doing separate computations for every element of {0, 1, 2, 3} and then summing the results, we can just do it for 0, and then multiply that result by 𝑑 + 1.
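The partition & pairing can be verified mechanically; a throwaway sketch of this exact 𝑘 = 8, 𝐿 = {0, 3} example:

```python
# The k = 8, d = 3 example: partition the valid levels into the three cases.
k, d = 8, 3
valid = list(range(d - k, k + 1))  # I_8({0, 3}) ∩ Z = [-5, 8]
below = [l for l in valid if l < 0]
inside = [l for l in valid if 0 <= l <= d]
above = [l for l in valid if l > d]

assert len(below) == len(above) == k - d  # 5 each
assert len(inside) == d + 1               # 4

# adding -j and adding d + j yield the same new value of d:
def new_d(l):
    return max(d - l, l, d)

assert all(new_d(-j) == new_d(d + j) for j in range(1, k - d + 1))
# and every level inside [0, d] leaves d unchanged:
assert all(new_d(l) == d for l in inside)
print(below, inside, above)
```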

This gets us the next iteration of our little Python program, which I simply call pmf:

def pmf(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return pmf_inner(k, n, 0)

def pmf_inner(k, n, d):
    cache = {}

    def f(n, d):
        if (n, d) in cache:
            return cache[(n, d)]

        denom = 2 * k + 1 - d

        if n == 1:
            return Q(2, denom)
        else:
            r = sum(Q(2, denom) * f(n - 1, i) for i in range(d + 1, k)) + Q(
                d + 1, denom
            ) * f(n - 1, d)
            cache[(n, d)] = r

            return r

    return f(n, d)

This was about as far as I could manage to refine this function, at least whilst still getting results with unlimited precision. We can go quite a bit faster by switching from Python’s fractions.Fraction to its built-in float datatype, which simultaneously limits our precision to a fixed number of bits (viz. 64 bits), and takes advantage of the fact that modern CPUs contain specialised hardware for working with IEEE 754 double-precision (a.k.a. binary64) floating point numbers. The result is pmf_float:

def pmf_float(k, n):
    if k == 0:
        return 1.0 if n == 0 else 0.0
    elif n < 1:
        return 0.0

    return pmf_inner_float(k, n, 0)

def pmf_inner_float(k, n, d):
    cache = {}

    def f(n, d):
        if (n, d) in cache:
            return cache[(n, d)]

        denom = float(2 * k + 1 - d)

        if n == 1:
            return 2.0 / denom
        else:
            r = (
                2.0 * sum(f(n - 1, i) for i in range(d + 1, k))
                + float(d + 1) * f(n - 1, d)
            ) / denom
            cache[(n, d)] = r

            return r

    return f(n, d)

Note that this implementation sticks to integer arithmetic wherever possible, only resorting to floating-point arithmetic when absolutely necessary. pmf_float is very usable and gives results of quite good accuracy, as there’s nothing being done here that is particularly numerically unstable. So, it should be preferred over pmf unless you actually need to know the exact rational number for some reason.

Okay, but, how fast is it, really? Oh yes, it’s time for some benchmarks! But only really crappy ad hoc ones because I’m lazy and there’s no reason to be obsessive about accuracy here…

But first, we need just one more implementation to benchmark: a Rust implementation of pmf_float! I didn’t bother making a Rust implementation of pmf because, again, I was too lazy, and I really just wanted to see how quickly I could compute 𝖱𝖡𝖢 to an accuracy that’s sufficient for any realistic application. The Rust implementation looks something like this:

use rustc_hash::FxHashMap as Map;

pub fn pmf(k: u32, n: u32) -> f64 {
    if k == 0 {
        if n == 0 {
            1.0
        } else {
            0.0
        }
    } else if n < 1 {
        0.0
    } else {
        pmf_inner(k, n, 0)
    }
}

fn pmf_inner(k: u32, n: u32, d: u32) -> f64 {
    fn f(cache: &mut Map<(u32, u32), f64>, k: u32, n: u32, d: u32) -> f64 {
        if let Some(x) = cache.get(&(n, d)) {
            return *x;
        }

        let denom = f64::from(2 * k - d + 1);

        if n == 1 {
            2.0 / denom
        } else {
            let sum: f64 = (d + 1..k).map(|i| f(cache, k, n - 1, i)).sum();
            let r =
                (2.0 * sum + f64::from(d + 1) * f(cache, k, n - 1, d)) / denom;
            cache.insert((n, d), r);

            r
        }
    }

    let mut cache = Map::default();

    f(&mut cache, k, n, d)
}

As you can see, this is pretty much exactly equivalent to how the Python pmf_float function is written, with the slight exception that I don’t use the hash function from Rust’s standard library (std), and instead depend on the rustc-hash crate.[1]

So, uhm, let’s see the (crappy) benchmarks?:

Raw outputs

pmf2: 19.257901867997134
pmf: 7.60964249499375
pmf_float: 0.23313972300093155


pmf(20, 24)             time:   [50.477 µs 50.482 µs 50.487 µs]
Found 5 outliers among 100 measurements (5.00%)
  3 (3.00%) low mild
  2 (2.00%) high mild
| language | function | time (µs⧸iter) | time (relative) |
|---|---|---|---|
| Python | pmf2 | 192 579.02 | 3 814.96× |
| Python | pmf | 76 096.42 | 1 507.46× |
| Python | pmf_float | 2 331.40 | 46.18× |
| Rust | pmf | 50.48 | 1.00× |

The implementations were benchmarked solely on their ability to compute 𝖱𝖡𝖢⁡(20)⁡(24). You’ll notice that I did not benchmark pmf0 nor pmf1, because those implementations lack the memoisation required to run in a reasonable amount of time.

For Python, I used timeit to do the benchmarking (see, and for Rust, I used (see benches/

You can see the full listings for the Python implementations at, and for the Rust implementation at src/ & src/

Footnotes for “Phase III: Can we go faster?”

  1. [↑] As of this writing, Rust’s standard library uses SipHash (specifically SipHash-1-3) by default. This is because SipHash makes use of an ARX structure that makes it highly resistant to hash flooding attacks, thus making it a good default choice if you’re not sure whether or not the user will need that security. However, we don’t care about that kind of security in this case, so we can just use rustc⁠-⁠hash to hash things stupidly quickly. If you’re wondering what rustc⁠-⁠hash actually looks like, it hashes a usize (which is a pointer-sized unsigned integer) at a time, like so (simplified under the assumption that you’re compiling for a 64-bit architecture):

    const K: usize = 0x517cc1b727220a95;

    fn add_to_hash(&mut self, i: usize) {
        self.hash = self.hash.rotate_left(5).bitxor(i).wrapping_mul(K);
    }

    Note that 0x517cc1b727220a95 = 5 871 781 006 564 002 453 = 3² ⋅ 7 ⋅ 11² ⋅ 47 ⋅ 173 ⋅ 94 732 711 681.

Phase IV: Where da formula @, doe?

I regret to inform you that I could not figure out a closed-form expression for 𝖱𝖡𝖢 — at least, not for a restrictive definition of “closed-form”. Booooo… 😤👎 Embarrassing……

However, it is certainly possible to come up with a recursive “formula” modelled on the programmatic implementations that you see above. This is not a single expression, so it’s not a “closed-form expression”, but it is a fixed number of expressions (really, just two expressions) that are, individually, technically closed-form — at least, by Wikipedia’s definition of “closed-form”, which allows summation, so long as it’s not infinite. Basically, this is the best I got:

Formula for 𝖱𝖡𝖢 ◐

Raw LaTeX
\mathrm{RBC}(k, n) &\equiv f(k, n, 0)                           \\[1ex]
        f(k, 1, d) &=      \frac{2}{2k + 1 - d}                 \\[1ex]
        f(k, n, d) &=      \frac{1}{2k + 1 - d}\sum_{i = d - (k - 1)}^{k - 1}
                             f(k,\,n - 1,\,\max\{d - i, d, i\}) \\[1ex]
                   &=      \frac{1}{2k + 1 - d}\left((d + 1)\,f(k,\,n - 1,\,d)
                             + 2\sum_{i = d + 1}^{k - 1}
                             f(k,\,n - 1,\,i)\right)

As you can see, there are two forms that you can choose from for the recursive bit of 𝑓: the one on top is modelled on pmf2, and the one on bottom is modelled on pmf. The pmf2-based one actually looks a bit cleaner and less verbose, but of course, we know that it’s computationally just not as good. Plus, we avoid using 𝗆𝖺𝗑 at all in the pmf-based one, which is nice.

So, that is (or rather, “those are”, as there are two versions) the closest thing that I have to a “nice formula” for 𝖱𝖡𝖢. If you are a better mathematician than I am (frankly, not a very high bar to exceed…) and can figure out an even nicer formula, please please do let me know about it!

To get an idea of what 𝖱𝖡𝖢 looks like, we can plot it! Check out this plot that focuses in on just the first 15 values of 𝖱𝖡𝖢⁡(20):

A plot of 𝖱𝖡𝖢⁡(20)⁡(𝑛), for 𝑛 from 1 to 15

(This, and other plots shown here, were created using matplotlib; see

Cool!! I’ve used cubic Hermite spline interpolation to show a smooth curve that fits all 15 points, but do remember that this is a PMF, not a PDF! There are really only 15 points (probability masses) here, and the smoothed curve is just there to guide your eyeballs.

To get a bigger picture than this, we can consider the first 100 values of 𝑛 instead of the first 15, and we can consider multiple values of 𝑘, as well:

A plot of 𝖱𝖡𝖢⁡(𝑘)⁡(𝑛), for 𝑘 ∈ {10, 20, 40, 80}, and 𝑛 from 1 to 100

🤩 Wowww… Pretty~

You can see that this distribution gets real flat-like for larger values of 𝑘. For 𝑘 = 80, it nearly looks like a damn linear function with a mildly negative slope!

Phase V: How do I know what to expect?

So, we have a PMF. But we also want other nice things, like the expectation of that PMF. How in the damn hell am I supposed to figure that out?

Well, I decided that estimating the expectation was a good start. So I wrote a little Python function to do just that:

def estimate_expectation(k, epsilon):
    e = 0.0
    cumu_prob = 0.0
    n = 1
    while cumu_prob < 1.0 - epsilon:
        prob = pmf_float(k, n)
        old_cumu_prob = cumu_prob
        cumu_prob += prob
        e += prob * float(n)
        n += 1

        # Bail out if prob is too small to change the accumulator at all;
        # otherwise, we could loop forever.
        if old_cumu_prob == cumu_prob:
            break

    return e

The idea here is very simple: for a given value of 𝑘, we calculate 𝖱𝖡𝖢⁡(𝑘)⁡(1), 𝖱𝖡𝖢⁡(𝑘)⁡(2), 𝖱𝖡𝖢⁡(𝑘)⁡(3), and so on, keeping track of the sum of the probability masses that we’ve calculated so far. For each such 𝖱𝖡𝖢 calculation, we add 𝑛 ⋅ 𝖱𝖡𝖢⁡(𝑘)⁡(𝑛) to our running total, and then return that running total when the sum of the probability masses that we’ve calculated so far is sufficiently close to 1.

What does “sufficiently close” mean? Well, that is determined by the value of epsilon (ε); we just need to get to at least 1 − ε. I used sys.float_info.epsilon for the value of ε, but the basic idea is that larger values of ε will give less accurate results more quickly, and vice versa.

By running estimate_expectation for the first few values of 𝑘, I got something like this:


Naturally, these are just estimations, so we expect them to be a little bit off the mark — specifically, a little bit lower than the true value. Nevertheless, this level of accuracy was enough to see some clear patterns. I decided to take the difference sequence of these estimations, essentially calculating estimations of 𝖤⁡[𝖱𝖡𝖢⁡(2)] − 𝖤⁡[𝖱𝖡𝖢⁡(1)], 𝖤⁡[𝖱𝖡𝖢⁡(3)] − 𝖤⁡[𝖱𝖡𝖢⁡(2)], 𝖤⁡[𝖱𝖡𝖢⁡(4)] − 𝖤⁡[𝖱𝖡𝖢⁡(3)], and so on. I then went through this sequence, by hoof, and for each difference in the sequence, I wrote down which rational number it looked like it was estimating. For example, 1.4999999999999991 looks like it’s estimating 3⁄2. The result was this:

3⁄2, 3⁄4, 2⁄3, 5⁄8, 3⁄5, 7⁄12, 4⁄7, 9⁄16, 5⁄9, 11⁄20, 6⁄11, 13⁄24, 7⁄13, 15⁄28, 8⁄15, ⋯

As it turns out, the first element in this sequence (3⁄2) is a red herring — it results from the fact that we’ve defined 𝖱𝖡𝖢⁡(0)⁡(𝑛) to be 0 for any value of 𝑛. By ignoring this first term, I was able to see that the terms in this sequence are just (𝑘 + 1)∕(2⁢𝑘), with the exception of the value for 𝑘 = 1 being 3⁄2 by fiat. It was with this knowledge that I was able to come up with the expected value for 𝖱𝖡𝖢:
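If you want to double-check that by-hoof reading, exact rational arithmetic makes it easy. Here’s a throwaway sketch (the `observed` list is just the sequence above, minus the 3⁄2 red herring):

```python
from fractions import Fraction

# The differences read off above, starting from k = 2 (i.e. skipping the
# red-herring 3/2 term for k = 1):
observed = [
    Fraction(3, 4), Fraction(2, 3), Fraction(5, 8), Fraction(3, 5),
    Fraction(7, 12), Fraction(4, 7), Fraction(9, 16), Fraction(5, 9),
    Fraction(11, 20), Fraction(6, 11), Fraction(13, 24), Fraction(7, 13),
    Fraction(15, 28), Fraction(8, 15),
]

# Conjecture: each difference is (k + 1)/(2k).
conjectured = [Fraction(k + 1, 2 * k) for k in range(2, 2 + len(observed))]

assert observed == conjectured
```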

Expected value for 𝖱𝖡𝖢 ◐

Raw LaTeX
            N(k) &\sim \mathrm{RBC}(k) \\[1ex]
\mathrm{E}[N(1)] &=    \frac{3}{2}     \\[1ex]
\mathrm{E}[N(k)] &=    \mathrm{E}[N(k - 1)] + \frac{k + 1}{2k}



\mathrm{E}[N(k)]       ={}& \frac{1}{2}\left(k + H_k + 1\right) \\[1ex]
                 \approx{}& \frac{1}{2}
                              \left(k + \ln(k) + \gamma + 1 +
                              \frac{1}{2k}\right)               \\[1ex]
                  \simeq{}& \Theta(k)

In this context, “𝑁⁡(𝑘) ∼ 𝖱𝖡𝖢⁡(𝑘)” means that 𝑁⁡(𝑘) is a random variable that has 𝖱𝖡𝖢⁡(𝑘) as its distribution. I’ve called it “𝑁” because the concrete values that it takes on are values of 𝑛, i.e. exactly how many steps it took us to converge.

Below the horizontal line, I show an equivalent formula that apparently isn’t recursive, and thus looks like a closed-form expression for 𝖤⁡[𝑁⁡(𝑘)]. However, 𝐻𝑘 denotes the 𝑘th harmonic number, which itself doesn’t have a closed-form definition in the strictest sense of “closed-form” — there’s just the usual “partial sum of the first 𝑘 terms of the harmonic series”, which may or may not be a closed form, depending on whom you ask. 😉

In any case, there is a really really (like, really) good approximation for 𝐻𝑘 that is most definitely closed-form, which I used to come up with the approximation that you see after the “≈”. The “𝗅𝗇” stands for “natural logarithm”, and γ is the Euler–Mascheroni constant (γ ≈ 0.577 215 664 901 532 9). I’ve also used big Θ notation to express that 𝖤⁡[𝑁⁡(𝑘)] asymptotically grows as Θ⁡(𝑘) — that is, linearly.
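Because both the recurrence and the closed form are exactly computable with rational arithmetic, we can also sanity-check that they agree. Here’s a quick sketch (a check, not a proof; `harmonic` and `expectation_recurrence` are just throwaway names):

```python
from fractions import Fraction

def harmonic(k):
    # H_k, the k-th harmonic number, as an exact rational.
    return sum(Fraction(1, i) for i in range(1, k + 1))

def expectation_recurrence(k):
    # E[N(k)] via the recurrence: E[N(1)] = 3/2, then add (i + 1)/(2i).
    e = Fraction(3, 2)
    for i in range(2, k + 1):
        e += Fraction(i + 1, 2 * i)
    return e

# The closed form (k + H_k + 1)/2 agrees with the recurrence:
for k in range(1, 60):
    assert expectation_recurrence(k) == (k + harmonic(k) + 1) / 2
```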

If you want to know just how good this approximation is, feast your eyes:

A plot comparing the approximation of 𝖤⁡[𝑁⁡(𝑘)] to its true values

Soooo… yeah. It’s basically indistinguishable from the true value at 𝑘 ≥ 10 or so… If you’re wondering what’s up with the random singularity that appears in the plot of the approximation, that’s due to the 1∕(2⁢𝑘) term, which comes from a truncation (yet another partial sum!) of the Taylor series representation.

Also included are implementations for calculating the exact expectation…:

# Assuming that Q is an alias for exact rational arithmetic, as the
# Q(0) & Q(3, 2) calls suggest:
from fractions import Fraction as Q

def expectation(k):
    if k == 0:
        return Q(0)

    e = Q(3, 2)
    for i in range(2, k + 1):
        e += Q(i + 1, 2 * i)

    return e

…And the same thing, but using floats…:

def expectation_float(k):
    if k == 0:
        return 0.0

    e = 1.5
    for i in range(2, k + 1):
        e += (i + 1.0) / (2.0 * i)

    return e

…As well as for the approximation:

from math import log

GAMMA_PLUS_1 = 1.577_215_664_901_532_8

def expectation_approx(k):
    if k == 0:
        return 0.0

    k = float(k)

    return 0.5 * (k + log(k) + GAMMA_PLUS_1 + 1.0 / (2.0 * k))

Phase VI: How do I know what not to expect?

Okay, so we know the expected value of 𝑁⁡(𝑘), which is great. But what about other nice statistical properties? Like…


Variance

The variance? Once again, I’ve no clue how I would go about finding this analytically, so we’re going to resort to estimations.

The variance can be expressed as simply the expectation of the square of the value, minus the square of the expectation of the value. Symbolically:

𝖵𝖺𝗋⁡(𝑁⁡(𝑘)) = 𝖤⁡[𝑁⁡(𝑘)2] − 𝖤⁡[𝑁⁡(𝑘)]2.

Using this equation directly in computation — when using floating-point arithmetic — can be quite numerically unstable, but we’re going to do it anyways, because I don’t think that we’re going to have that issue (particularly, because it seems like 𝖵𝖺𝗋⁡(𝑁⁡(𝑘)) is quite large).
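As a tiny illustration of that identity — using a made-up three-point PMF, not 𝖱𝖡𝖢 itself:

```python
# Hypothetical PMF over n ∈ {1, 2, 3}, just to illustrate the identity
# Var(N) = E[N^2] - E[N]^2:
pmf = {1: 0.5, 2: 0.3, 3: 0.2}

e = sum(p * n for n, p in pmf.items())         # E[N]
e_sq = sum(p * n * n for n, p in pmf.items())  # E[N^2]
var = e_sq - e * e                             # Var(N) ≈ 0.61
```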

Thus, we can use the same strategy that we used for estimating the expectation to approximate 𝖤⁡[𝑁⁡(𝑘)2]:

def estimate_expectation_sq(k, epsilon):
    e = 0.0
    cumu_prob = 0.0
    n = 1
    while cumu_prob < 1.0 - epsilon:
        prob = pmf_float(k, n)
        old_cumu_prob = cumu_prob
        cumu_prob += prob
        e += prob * float(n * n)
        n += 1

        # Bail out if prob is too small to change the accumulator at all;
        # otherwise, we could loop forever.
        if old_cumu_prob == cumu_prob:
            break

    return e

Then, it’s easy enough to compute an approximation of 𝖤⁡[𝑁⁡(𝑘)2] − 𝖤⁡[𝑁⁡(𝑘)]2. I did this, and I unfortunately could not find an exact formula for the results. Instead, I fiddled around with some approximations — using quadratic regression provided by NumPy as a starting point — and ended up with this:

Approximation of 𝖵𝖺𝗋⁡(𝑁⁡(𝑘)) ◐

Raw LaTeX
\mathrm{Var}(N(k)) &\approx \frac{3k^2 + 10k - 4}{12} \\[1ex]
                   &\simeq  \Theta{\left(k^2\right)}

🤷🏽‍♀️ Sure, why not. It looks nice. And, it seems to be a pretty solid approximation! The major takeaway here is that 𝖵𝖺𝗋⁡(𝑁⁡(𝑘)) is very much quadratic in 𝑘, which makes sense, as we expected the variance to be quite large on the basis of those plots of 𝖱𝖡𝖢 that we saw above. Note that this is very comparable to the variance of the uniform distribution, which is just (𝑏 − 𝑎)2∕12, where [𝑎, 𝑏] is the support of the distribution. Whew… that’s really saying something!
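For the curious, that uniform-distribution comparison is easy to check with the same 𝖤⁡[𝑋2] − 𝖤⁡[𝑋]2 identity. Here’s a throwaway sketch for the continuous uniform case (not part of the 𝖱𝖡𝖢 analysis itself):

```python
def uniform_variance(a, b):
    # For X ~ U[a, b]: E[X] = (a + b)/2, and E[X^2] = (a^2 + ab + b^2)/3.
    ex = (a + b) / 2.0
    ex2 = (a * a + a * b + b * b) / 3.0
    return ex2 - ex * ex

# Matches the textbook (b - a)^2 / 12; e.g. for [0, 6], that's 36/12 = 3.
```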

Median, mode

What about the median? The mode? Both of these can be calculated. For the median, you can do something like the estimate_expectation function above, except you don’t have to keep a running “expectation” (e) total, and you simply stop as soon as the cumulative probability is 0.5 or greater. For the mode (which is just the peak of the distribution — for a given value of 𝑘, this is the value of 𝑛 that maximises 𝖱𝖡𝖢⁡(𝑘)⁡(𝑛)), you can do something like:

def mode(k):
    p = 0.0
    n = 1
    while True:
        prob = pmf_float(k, n)
        if prob > p:
            p = prob
        else:
            # The PMF has started decreasing, so the previous n was
            # the peak.
            break

        n += 1

    return n - 1

We know the shape of the distribution, so we just wait for it to start decreasing; at that point, we know that we’ve already seen the mode.
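And for completeness, the median procedure described above might look something like this. Note that the PMF is passed in as a function here, just so that the sketch is self-contained; in the real code, it would presumably call pmf_float directly:

```python
def median_of(pmf, k):
    # Walk the distribution, accumulating probability mass until the
    # CDF first reaches 0.5; that value of n is the median.
    cumu_prob = 0.0
    n = 1
    while True:
        cumu_prob += pmf(k, n)
        if cumu_prob >= 0.5:
            return n
        n += 1
```

For example, with a uniform PMF of 0.25 over {1, 2, 3, 4}, this returns 2.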

I didn’t go any further with these two measures of central tendency, as I was very much not confident in my ability to find some pattern… In any case, here’s a table of some modes, if that makes you feel better:


The 𝑘 values here are the absolute lowest 𝑘 values for which the mode is as listed.


Entropy

What about the information entropy[1] of this distribution? If you’re not already familiar with the idea of entropy, the basic idea is that less likely outcomes carry more “surprise” or “information” with them. A simple example might be if you got a single bit sent to you (say, via telegraphy) every 24 hours that indicated whether or not it was raining in a particular locale. If that locale is considered to be arid, then most of the time, you’re going to get a “not raining” message. Thus, getting a “not raining” message gives you very little information; you are not very surprised. On the other hand, on the occasions where it does rain, and you get an “it’s raining” message, that’s a lot of information & surprisal — you wouldn’t have guessed that it was raining, had you not gotten that message. If we have an idea of the probability for each outcome (for “not raining” and for “it’s raining”), then we can calculate the amount of information/surprisal that we get from a single telegraph message in general, without knowing the actual contents of the message — it’s just the expectation of the surprisal (the surprisal of each outcome, weighted by the probability of that outcome, all summed together).

What’s the “information content”[1] of a particular outcome? Well, as it turns out, it’s basically just −𝗅𝗈𝗀⁡(𝑝), where 𝑝 is the probability of that outcome. If you’re wondering what in the gosh darned heck logarithms have to do with this, one thing that can help it make intuitive sense (if you’re like me and have computer brain) is thinking of bit strings: a bit string of length 𝑛 has 2𝑛 possible values, so to go the other way (from 2𝑛 to 𝑛), you’re going to have to undo that exponentiation… with a logarithm. In particular, if we have — for example — a uniform distribution over all possible values of the bit string, then the probability of any one outcome is 𝑝 = 2−𝑛. Then, for any 𝑝, −𝗅𝗈𝗀⁡(𝑝) = −𝗅𝗈𝗀⁡(2−𝑛) = 𝑛.
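To make the rain example concrete, here’s a quick sketch — the 5% rain probability is made up, of course:

```python
from math import log2

p_rain = 0.05  # hypothetical arid locale

surprisal_rain = -log2(p_rain)       # ≈ 4.32 Sh: quite surprising!
surprisal_dry = -log2(1.0 - p_rain)  # ≈ 0.074 Sh: barely any information

# The entropy is the expected surprisal of a single message:
entropy = p_rain * surprisal_rain + (1.0 - p_rain) * surprisal_dry
# ≈ 0.286 Sh per daily telegraph message
```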

In any case, we can again use the same technique that we used for estimating 𝖤⁡[𝑁⁡(𝑘)] and for estimating 𝖵𝖺𝗋⁡(𝑁⁡(𝑘)) to estimate Η⁡(𝑁⁡(𝑘)). Note that the “Η” in “Η⁡(𝑁⁡(𝑘))” is an uppercase Greek letter “eta” (U+0397), not an uppercase Latin letter “(h)aitch” (U+0048); the “Η” stands for the initial ⟨e⟩ in the word entropy.

Here’s a Python function that does just that:

import math

def estimate_entropy(k, epsilon):
    eta = 0.0
    cumu_prob = 0.0
    n = 1
    while cumu_prob < 1.0 - epsilon:
        prob = pmf_float(k, n)
        old_cumu_prob = cumu_prob
        cumu_prob += prob
        eta += prob * math.log(prob)
        n += 1

        # Bail out if prob is too small to change the accumulator at all;
        # otherwise, we could loop forever.
        if old_cumu_prob == cumu_prob:
            break

    return -eta

I did something similar to what I did with 𝖵𝖺𝗋⁡(𝑁⁡(𝑘)), except this time, I had no automated regression to help me out. I probably could have written something by hand to assist in this procedure, but I honestly wasn’t sure quite what I was looking for. So instead, I literally just tried some approximations and visually looked (on a plot) at how they compared to the estimated values of Η⁡(𝑁⁡(𝑘)), and then did adjustments by hoof as I saw fit. The result is this halfway-decent approximation:

Approximate formula for Η⁡(𝑁⁡(𝑘)) ◐

Raw LaTeX
\mathrm{H}(N(k)) &\approx \frac{9(\ln(k + 1) + \ln\ln(k + 2)) + 4}{12} \\[1ex]
                 &\simeq  \Theta(\log k)

Sooo… yep. I was debating over the use of the “+ 𝗅𝗇⁡ 𝗅𝗇⁡(𝑘 + 2)” term, and ended up deciding to use it. Regardless of this term, the result is asymptotically logarithmic in 𝑘.

Let’s have a look at how good this approximation is (the blue line is the estimated values, and the chunky dashed reddish line is the approximation):

Plot of estimated values of Η⁡(𝑁⁡(𝑘)) vs. approximation of Η⁡(𝑁⁡(𝑘)). The blue line is the estimated values, and the chunky dashed reddish line is the approximation.

Eh. It’s alright, I guess. 🤷🏽‍♀️ I’ve got two Y-axes for ease of reading; the one on the left is in nat (nats), and the one on the right is in Sh (shannons; the information-theoretic analogue of the bit).
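If you ever want to convert between the two units yourself, it’s just a division by 𝗅𝗇⁡(2) — the number of nats per shannon:

```python
from math import log

NATS_PER_SHANNON = log(2.0)  # ln(2) ≈ 0.6931

def nats_to_shannons(eta_nats):
    # 1 Sh = ln(2) nat, so divide to convert.
    return eta_nats / NATS_PER_SHANNON
```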

Footnotes for “Entropy”
  1. [↑] See also: “Information content”.

I don’t think that any of that is going to help us clear Pianus…

So, there you have it. No one asked for any of it, but there it is. You’re very welcome. 😌

If, for some godforsaken reason, this actually interested you (seems highly unlikely to me, but then again, what do I know about probability?), I’ll just note that there’s more that can be done. 𝖱𝖡𝖢 could use a better formula, maybe. An exact formula for 𝖵𝖺𝗋⁡(𝑁⁡(𝑘)) and/or Η⁡(𝑁⁡(𝑘)) — or better approximations — would be nice. Maybe a formula for the median and/or mode… Plus, you can consider other related distributions. For example, how is the value of 𝑑 distributed, given a value of both 𝑘 and 𝑛?

Have fun! :3

Habyte maketh no monke, ne vvearynge of gylte ſpurres maketh no knyght

With this 2022 Anniversary–Summer event, I — as usual — took a look at the unique hairs, faces, & NX equipment offered by the new event. The way that it usually goes is:

Hmm, none of these faces look like MapleStory faces. I guess these are all from, like, MapleStory version three thousand or something…? That’s too bad… To be honest, I don’t really like any of them.

None of these hairs really look good to me either. Well, I admit that that one seems like it could be pretty cute, but it’s really not my style. Plus, what’s with all of the hairs looking super washed out? They all look white no matter what colour they are…

At least some of these NX equips look neato. Maybe I’ll get a few… maybe do a few wardrobe changes on a character here or there…

Well, this event actually came with two faces that I did kinda like: the Charming Face and the Calm Lucid Face (specifically, I had in mind the female versions, although the male versions are only subtly different). And furthermore, this event came with a hair that I really like: the Balayage Hair (again, specifically the female version; in this case, the male version is completely different)! So, I thought hard about it, and I decided that I wanted to give my woodsmaster capreolina a makeover!! After a lot of fiddling around, and getting some useful opinions from Lv1Crook (Level1Crook, Lvl1Crook, xXCrookXx) and Harlez (VigiI), I came up with a look, and grinded out Summer Points to get it…:


capreolina’s new look

Cute, right??? For this image, I’ve specifically chosen the Kandiva to match the red eyes, shirt, facepaint, and the kinda reddish highlights.

I had a kinda “left over” look from my experimentations that led to capreolina’s new look, and I was really satisfied with how I felt about giving capreolina a new look. So, I was emboldened to do the same for my pure STR bishop cervid…!:

cervid’s new look


I originally was going to use a different cape, and then realised that the one that I had used (which I thought that I remembered checking for in-game) was only purchasable with donor points… and I didn’t have any. Oh well. This cape is cool too.

I think that both of these looks do a pretty good job of preserving the original “vibes” that I was going for with these two characters’ outfits: cervid still has the dark “goth” vibe, and capreolina still looks like she either lives in a tree, or could chop a tree down with her bare hands. However, in another sense, the tables have turned: capre now looks gayer than cervid, instead of the other way around!! In any case, these are my original two characters, so I think that it makes sense for them to both have the same hair style :P

Some other people have agreed that the new looks are cute:

i forgot how cute capre was

However, the reception has not been entirely positive…:

bring back REAL DEER


Rest assured that I am the real deer, and I may dress myself however I please!! So there!!!

What hath Dunas wrought!

Over in Neo Tokyo, on my darksterity knight rusa, I did some Bergamoting with marksman xBowtjuhNL, paladin aWildLynux, and bishop Pebbytan:

O Dunas…

aWildLynux, xBowtjuhNL, Pebbytan, & rusa vs. Bergamot

Afterwards, I trioed D’una with them (minus aWildLynux):

Pebby, rusa, & Ramon trioing D’una

This was rough. D’una is already a massive pain in the ass for me, as a melee attacker, thanks to his insistence on constantly dispelling (DPing) anyone within melee range, and frequently stunning, super-knockbacking, Darkness’ing, etc.… And zerking doesn’t exactly make it easier to survive DR consistently…! But it’s also not so easy for bishops either, despite the fact that they are mages and are thus unaffected (barring STR mages & the like) by physical damage reflection (physical DR). Bishops have to get close enough to Dispel D’una’s pesky buffs (particularly the WDEF ones…), and yet this puts them within D’una’s DP range, thus endangering them by possibly DPing their Magic Guard. As a result of all of this, our first run was a failure: both Pebby & I died. 😭 We did better on the second time around, though :]

Did you know that it’s possible to get up into the branches of the tree at the Forest of Oblivion (the map that connects the Mushroom Shrine to Kamuna)?:

toot toot

Yep! And it requires ≥123% JUMP — and thus exactly 123% JUMP, as that is the maximum possible JUMP.

I also did Bergamot with bowmaster xRook (misandrist), who had never done Berga before! We were accompanied by fellow bowmaster OliverQueen and shadower Harlez (VigiI). During the beginning of the first body of the first run, we had xRook crouching and watching so that they would see what Bergamot’s DP animation actually looks like:

OliverQueen, Harlez, & rusa helping xRook with their first-ever Bergamot run

Getting DP’d is potentially a big hazard for xRook, as they rely on my HB to survive. Of course, any time that Bergamot does the laser DP animation, I’m always watching very intently to see if anyone does get hit by the smol laser (the one that actually DPs), so that I can quickly rebuff them — which often involves quickly mounting and running to the left, so that I’m within HB distance of them after they get knocked back by the laser.

With that, xRook got the hang of it, and was able to survive the whole run, without the assistance of any bishops!:

xRook, rusa, Harlez, & OliverQueen vs. Bergamot

I also duoed Berga, with xBowtjuhNL…:

xBowtjuhNL & rusa duoing Bergamot

It actually wasn’t so bad; we didn’t have any serious hitches, and the run took ≈55 minutes in total. The drops didn’t quite make it worth it, but I guess they were okay — Shield for WATK 70% ain’t half bad (we were both 5⧸5 on cards already):

xBowtjuhNL & rusa duoed Bergamot

I also did a trio Nibelung run with xBowtjuhNL and Harlez. xBowtjuhNL said that this was going to be his last run before going to bed, and indeed, he fell asleep during the run…:

Sleepty Ramon…

Eventually, as you can see, I was able to wake him up by calling him on Discord™! You can also see that I was having some pretty nasty networking issues at the time…

In the end, though, it was a great run! I levelled up to 173~!:

rusa hits level 173~!

And Nib was kind enough to drop an EP for us!!:


Plus, I got my first Nib card :]

Nibelung card get~!

With the splits from selling the EP — which I got from Harlez in pCoins, rather than in mesos — I was able to make another Chaos Scroll, so that I could give another shot to that 4 WATK, 4 slot PGC that I made in the previous entry…:


Oh. Well, that’s a 0 WATK PGC. Maybe next time…… 😢

Swiftness is the essence of warfare

I have procrastinated long enough on getting a Silver Mane for my woodsmaster capreolina. The Silver Mane quest ain’t easy — and it ain’t cheap, either — but the only pre-requisites are being level ≥120[1], and already having the Hog, both of which capreolina has satisfied for… well, like 22 levels now. So, I headed out into the wilderness of the Minar Forest with fellow fourth-job archer Level1Crook (Lvl1Crook, xXCrookXx) to collect the 600 Rexton Leathers, 400 Wooden Shoulder Pads, and 400 Skull Shoulder Pads that we each (yes, that’s 2 800 big fat Leafre ETC items in total!) needed for our respective Silver Mane quests:


Oh, I died. How embarrassing…

Well, I was already pretty close to levelling before we started, so it was no surprise when I levelled mid-grind!!:

capreolina hits level 143~!

After getting our 1.2k Rexton Leathers, we headed to The Hidden Dragon Tomb II for the rest of the ETCs:

capreolina vs. Cornians

i dyed

I dyed… again… how embarrassing… Now, I’m not going to say exactly how many times that I died, but I will say that whoever runs the “Cash Shop” was very happy about their Safety Charm sales on that day…

Not gonna lie, 2.8k big Leafre ETCs is way too fucking many. We almost called it a day after we had just a few hundred ETCs left to go, but we agreed to just trudge on and get the entire damn thing over with. The result was that we both suffered mild brain rot that we may, or may not, ever recover from. But we did it. We got the things.

Of course, there was also the issue of getting 50M(!) mesos. For mine, I decided to just steal transfer 50M mesos off of my vicloc dagger spearwoman d34r. And with that, we were now officially Fast As Fuck™:



Footnotes for “Swiftness is the essence of warfare”

  1. [↑] As of this writing, the Silver Mane quest in MapleLegends is bugged, so this is not strictly true for the MapleLegends implementation. As far as I know, the MapleLegends implementation (again, as of this writing) has no level requirement at all, and is instead gated on the PC’s class: namely, they must be a fourth job class, or a beginner.

A clock; a cursèd tree; a cryptid; a character of evil; a captain.

I did a few fun all-odd Capt. Lat runs on my darksterity knight rusa with STRginner extraordinaire Taima (Girlmoder, Boymoder, Nyanners, Tacgnol, Hanyou), helping to HB permabeginners Cortical (GishGallop, Medulla, SussyBaka, RedNurJazlan, CokeZeroPill) and uayua (2sus4u, hueso, shadowban, tb303):

A grandfather clock; a Guatemalan statue; a great foot; a god; a ghost pilot.

Taima, rusa, & Cortical vs. Capt. Latanica

Taima, rusa, & uayua vs. Capt. Lat

Welp, we had some deaths, but that’s to be expected when you’re constantly on the brink of being two-shot by a stray ranged attack from the ol’ Captain.

I also did a single duo Peppered Lattice run on my I/L archmagelet cervine, with marksman Level1Crook (Lanius, Lvl1Crook, xXCrookXx), in which I did some… I don’t know, pretty okay CL damage? To Papal Lettuce? Next time, I’ll have to use some actual MATK potions 😅

cervine & Level1Crook vs. Papulatus

That being said, the amount of AVOID that I get from being a magelet really helps when fighting bosses like this; I pretty much stayed on the top platforms for the entire run, so I never got dispelled (DP’d) even once!

We also teamed up with PoultryWoman (Yoshis, Furbs, SwordFurb) and SaviorSword to do some Ravs (with me as my pure STR bishop cervid)…:

cervid, Taima, Level1Crook, PoultryWoman, & SaviorSword vs. Ravana

…And some Peppy Laters (with me as my woodsmaster capreolina):

capreolina, Taima, Level1Crook, PoultryWoman, & SaviorSword vs. Papulatus

And, with marksman xBowtjuhNL and shadower Harlez (VigiI), we did some fun Poppy Lanterns…:

Level1Crook, cervine, xBowtjuhNL, Harlez, & Taima vs. Papu

…During which, Taima took advantage of her @roll 100 for loot order being the highest by… looting the one “““dilk””” that dropped:

roll for loot order


And, finally, when swashbuckler Yoshis hit the big level 115, it was time for her to try Ravving, too (featuring F/P archmage 2sus4u)!:

2sus4u, Level1Crook, Taima, rusa, & Yoshis vs. Rav

Level1Crook has LITERALLY ONE (1) JOB

SMH my head… I cannot believe this… How could Level1Crook do this to us?? He was our only MCP-opener……

On cervid, I also helped out xBowtjuhNL, OliverQueen, BeMyRavebae, BimboBucc, and atsumoo sell some AFK zhelms!:

cervid helping OliverQueen, BeMyRavebae, BimboBucc, atsumoo, & xBowtjuhNL sell AFK zhelms

It was a little weird not being the only active bishop in the party, but it worked out well enough:

you upheld your vows

I also did some Zakking on capreolina, where — as usual — I got some weird looks when I bust out the ol’ PSB for the arms stage:

si plz

I also did an… interesting Zak run on rusa, wherein we lost our bishop and bowmaster quite early on during the arms stage, leaving just me, Harlez, nightlord Peeko, and paladin SirAmik… And then SirAmik died during the first body. So, yeah, it was kinda a trio Zak without even so much as SE:

rusa gigazerking @ trio Zak

So, yeah, that was pretty brutal. And by the end of it, only one third (⅓) of our original party remained 😆:

SirAmik & Harlez ded

And still, no pouch… 🙄


Over on Maple Island, I was killing some Zombie Lupins on my islander ozotoceros:

ozotoceros vs. Zombie Lupins

…Wait, what? Oh, right, that’s BRPQ (Boss Rush Party Quest; BPQ), not Maple Island. Naturally, the Faust inside of BRPQ spawns zlupins just like the ordinary Faust does, so it’s a favourite pastime of islanders during every summer event to try to squeeze a Red Whip out of one of ’em…

Unfortunately, the Red Whip drop rate is not great, so I didn’t get any. :P But I did get a fancy Dark Pilfer! And some Cursed Dolls…

In any case, I did amass enough anniversary points to purchase myself a fancy-dancy claw: the Sweet Fork Cake!!:

ozotoceros gets a Sweet Fork Cake!

And, after a sufficient amount of prayer directed at the Maple gods, I was blessed with a set of Maple Throwing-Stars that I got from a Maple Equipment Box that I bought with more anniversary points!!!:

Maple Stars get!!!

ozo tests out her new Maple Stars

For those not aware, Maple Throwing-Stars are extremely valuable on Maple Island, which may come as a surprise given that they are virtually worthless elsewhere. The extra +6 WATK over what you’d get from Subis makes a huge difference (as your total WATK is naturally low anyways), and it’s not like there’s any way to get Ilbis on Maple Island…

Oh, and I hit level 46~!!:

ozotoceros hits level 46~!

The raffle loves giving me useless equipment like Maple Guns, Maple Crows, and most of all: Red Bandanas.

ozo does not need Red Bandanas…


And, of course, you know I finished all five weeks of DJ JM’s quests! So I walked away with my chair of choice…

ozotoceros in the Pia chair

I’ve seen some people in alliance chat hating on my homegirl Pia, and all I have to say is that Pia is the O.G. Pia has been in Henesys Park since before y’all were even old enough to operate a keyboard, so put some damn respect on her name!!

A squamate appears

As teased in the previous diary entry (which I started writing before I actually turned in the corresponding quest), I got the next Monster Book Ring tier for my darksterity knight rusa: T4!

rusa gets the T4 ring!

And just in time for another Horntail run!:

HT with runYoShit and Brokeen

Our nightlord Brokeen accidentally entered the main body map just a fraction of a second earlier than shadower Harlez (VigiI) — our intended seduce target — which is why you see him dead in the above image.

Unfortunately for me, I got what I asked for by screenshotting someone else’s death, and a later screenshot caused my client to crash when all three heads were still alive. ☹️

You can see corsair runYoShit in the screenshot above, which is notable because runYoShit was relying on my HB to survive, so my crashing meant that they didn’t get to continue the run either!! 😭

In spite of this, I nevertheless signed up for the next run that xBowtjuhNL hosted, mistakenly believing (not without reason) that the screenshotting was what caused the crash — after all, the crash happened instantaneously after hitting the Prt Sc key, which activates the client’s built-in screenshotting ability (which regrettably cannot be unbound from said key):

dont dc rusa dont screenshot

So, true to my word, I did not screenshot. And, in spite of this, I crashed at the beginning of the main body fight anyways…:

Sitting with Pebbytan after crashing

This really took the wind out of my sails, as I was not only saddened that I couldn’t HT anymore (as I quite like HTing, and it’s good EXP to boot!), but was also ashamed that I had let down runYoShit on more than one occasion as a result of my own technical issues effectively killing them. I posted about these technical issues, and some related ones, that are associated with the 2022 Anni–Summer event patch — and that I could confirm were not unique to me, i.e. I was getting multiple identical experiential reports from other players — in a MapleLegends forum post entitled “Client crashing/stability/UB issues since Anni–Summer event patch”. Unfortunately, w.r.t. the specifically crashing issues, the client appears to be unwilling to emit crash logs, so the reports that I give are purely phenomenological.

I’ve even desperately attempted to run the game client natively on Windows™ via multi-booting, just for the purpose of not crashing in crash-y situations (read: Horntail). I know, I know, Bill G8s owns my computer now and probably knows all of my deepest darkest secrets, BUT I REALLY WANT TO HT, OKAY‽‽‽ Plus, I always make sure to wash my hooves after using Windows™. :] Unfortunately, it seems that running the MapleLegends client on Windows™ is even crashier than running it in Wine on Linux like I normally do — I don’t even have to go into HT (nor do anything special, really…) to crash! But I’m not terribly surprised, considering that running ML in Wine has always had a much better track record — for me, at least — than it’s had for those running the game natively.

Dark doubts between the promise & event

As the 2022 Anniversary–Summer Event has continued, I have continued BRPQing — including a pair of BRPQs that I did on my vicloc dagger spearwoman d34r, alongside STRginner Cortical (GishGallop, Medulla, SussyBaka, RedNurJazlan, CokeZeroPill). We fought Franky, everyone’s favourite PQ boss:

d34r & Cortical vs. Franky

And I then realised my limitations when our party arrived at Manon, and I could scarcely hit the damn thing — it’s 19 levels higher than me, and has 39 AVOID!:

d34r & Cortical vs. Manon

Good thing Manon isn’t real and Leafre is a hoax perpetuated by so-called “fourth-jobbers”, so I don’t have to worry about it outside of BRPQ.

On my darksterity knight rusa, I did enough BRPQs and Maple Syrup’d enough Maple Leaves to make a pretty awesome one!:

rusa’s Maple Leaf

8 STR and 3 DEX on a face accessory? Yell heah!!

I also ended up doing a number of solo BRPQs, as a result of underwhelming traffic at BRPQ around the times that I was logging on. Like on my I/L archmagelet cervine:

cervine soloing BRPQ

And on my daggermit alces:

alces soloing BRPQ

Unfortunately, both of them ended up wasting most of their points on failed Maple Leaves… But cervine still has a usable clean one, so that’s good, at least.

Oh, and I did manage to get just one (1) good raffle between my two viclockers: a Rabbit Lamp Chair!!:

d34r gets a Rabbit Lamp Chair!!

Normally I really couldn’t care less about chairs (other than their value on the market when I rid myself of them), but I do enjoy getting chairs on my area-restricted characters, i.e. my islander and my viclockers. :]



An ongoing investigation into ██████ █████████ by the name of ███████ found ██████ █████ ████ ██████████████ in addition to █████████ activity near the 101st floor of the Eos Tower, where the subject was reported █████████ ██████████████ ████, although █████████ ████████:

Subject spotted at Eos Tower 101st floor

█████████ ██████████ ███ █████████ ██ and — it is suspected — █████████ ██████. ██████ agents █████████ and ███████ were reported dead after ████ █████████ █████████████ ███ and attempting reconnaissance on the subject:

Savage Blows in the dark

It should be reiterated that the subject is armed & dangerous, and only ████████ agents with high levels of clearance, training, & expertise are allowed on this case. It should also be noted that █████ ███ █████████████ ████ ████████, with ████████ ██████ and access to highly advanced technology, including ████████ █████ and ████ ████████ ██ appears to be some kind of temporary invincibility, pending further investigation:

Temporary invincibility(?)

Concerningly, the subject █████ ████████ ██████ ███, gaining access to the highly restricted █████████ ██████ ██████████ Eos Tower spire, where ██████ ██████ killing ████████ and ███████ (nippled clockwhale specimen):

Subject whiting Alishar


R. t. tarandus

As usual, it’s time for a quick episode of everyone’s favourite series involving a pugilist whose name starts with a ⟨t⟩ and ends with an ⟨s⟩: Questing With tara™!

This time around, I headed to Singapore to do some more Capt. Latanica prequests, now that I was high enough level to continue them. I got the necessary Selkie Jr. and Slimy kills at MP3…:

Questing With tara

tara @ MP3

…And the necessary Mr. Anchor kills at GS6:

tara @ GS6

With that, I was now at the final part of the questline: to slay Capt. Latanica and bring back his Black Essence. Knowing that I wouldn’t have accuracy issues due to Capt. Latanica’s abysmally low AVOID, I decided to just give it a shot and try soloing the Captain:

tara vs. Capt. Lat

By the end of it, I had used three Unripe Onyx Apples and one Maple Music Fever, and… well, I decided against wasting more resources & time getting irritated at how little damage I was doing to Lat. Could I have finished? Sure. There’s no timer, so I could just eat all of my resources and chip away endlessly at Lat until he finally submitted to my unending patience. But I wasn’t really willing to do this unless I brought some full-blown Onyx Apples with me, so I bailed. I’d figured that it would be easier, given that Lat has “only” 700 WDEF, but I’d neglected to take into account Capt. Lat’s 37-level advantage. 🤦🏽‍♀️
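To get a feel for why that 37-level gap matters so much, here’s a back-of-the-envelope sketch. Big caveat: the numbers below are *assumptions*, not MapleLegends’s actual formulas — I’m using the commonly-cited pre-Big-Bang approximations that the target’s WDEF shaves off roughly 0.5×WDEF (at max) to 0.6×WDEF (at min) of flat damage per line, and that each level the monster has over you cuts the remainder by about 1%:

```python
def approx_damage(raw, wdef, level_gap, defense_factor=0.5):
    """Approximate one damage line after WDEF & level penalties.

    ASSUMPTIONS (not exact MapleLegends formulas): WDEF subtracts a flat
    defense_factor * wdef from the raw line, and every level that the
    monster has over the attacker cuts the result by ~1%.
    """
    after_def = max(raw - defense_factor * wdef, 1)
    level_penalty = min(max(level_gap, 0) * 0.01, 1.0)  # ~1%/level under
    return after_def * (1 - level_penalty)

raw_line = 2_000  # hypothetical raw damage line, for illustration only
no_penalty = approx_damage(raw_line, wdef=0, level_gap=0)
vs_lat = approx_damage(raw_line, wdef=700, level_gap=37)
print(no_penalty)  # 2000.0
print(vs_lat)      # 1039.5 — barely over half, before anything else
```

Under these (again, assumed) numbers, a hypothetical 2 000-damage line gets knocked down to about 1 040 against a 700-WDEF monster sitting 37 levels above you — which lines up with the “why am I doing so little damage” experience.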

Of course, not only was this a waste of the WATK potions that I used, but it was also a waste of the White Essence that I got from “The great secret reveals”. But that was okay, because I needed to do The Lost White Essence anyways — for the quest completion, if nothing else. So I grinded some more GS6 to finish that quest, and got my White Essence back.

The problem was that, now that I’d done The Lost White Essence, actually successfully soloing Capt. Lat would give me enough EXP to level up! And how tf am I supposed to wear my INTelligent pyjamas if I’m soloing the Captain?? 🙄 So I decided on the “level up, and then come back” route, even though it would mean soloing Capt. Lat at level 64 at the lowest, rather than 63 as I had originally planned (as 63 is the absolute minimum level to have the Capt. Lat kill quest active). So, it’s not as cool, but whatever. I’m still slightly torn on whether or not I should come back with some Onyx Apples at level 64 to solo this big guy. It’s kind of stupid & a waste, but “doing conventionally inadvisable things because I can” is kind of a running theme of my diary, so, you know… Let me know if you have an opinion on the matter…

In any case, for the whole levelling-up thing, I decided to start on the MLG[1] quests. So I headed to Mu Lung to punch some STDs (not to be confused with STDs) for No Gong’s Teaching:

tara vs. STDs

And, welp, that was enough for level 64~!:

tara hits level 64~!

I also took a detour to Victoria Island to do some Evil Eye slaying for the latter part (the 999-kill quest) of POLLUTED! ⟨1—Evil Eye⟩, alongside Vicloc I/L gish VigiI (Harlez):

VigiI & tarandus @ CoEEII

Tune in next time, to Questing With tara™, to learn the fate of the captain known as “Latanica”……

Footnotes for “R. t. tarandus”

  1. [↑] “Mu Lung Garden”, the region that encompasses all of the Mu Lung and Herb Town maps.

The Room of the Guilty

And, finally, I did one (1) OPQ. On my OPQ mule, sets. With DEXginner inhale (DexBlade, vork) and disgraced[1] former Viclocker Lanius (Level1Crook, Lvl1Crook, xXCrookXx). After a great deal of struggle within the Tower of the Goddess, we killed Father Pixel, with the assistance of an aubergine dressed as a human by the name of AgentPewPew:

inhale, Lanius, & sets vs. Papa Pixie

The random folx (including AgentPewPew) that we’d partied with were obsessing over my damage and equipment (which is almost entirely borrowed from my other characters…), but I was more interested in inhale’s damage (see “Comparing DEXginners” in the previous diary entry).

In any case, it was in this PQ that inhale hit the big level 70~!! Oh — and I also levelled up, to 67 :]

inhale hits level 70~!! (And sets 67)

…And then I passed tf out. 🛏️😴💤

Footnotes for “The Room of the Guilty”

  1. [↑] (Not actually disgraced; the character was always planned to be a HS mule.)



i can’t wait to boom my feet

sorts’s jorts

This diary entry is dedicated to Pebbytan, who was permanently banned from MapleLegends without possibility of appeal, without having broken any of the rules of the server, merely through guilt by association. ❤️