First published on .

rangifer’s diary: pt. xci

What’s our level range, again?

If you’ve ever done “points farming” or BRPQ (“Boss Rush Party Quest”; “BPQ”) during a MapleLegends event, then you already know how party recruitment goes. For farming event points, you’re looking for fellow HHGII-goers (or PP1-goers, or what have you) that are within 20 levels of you. In this context, “𝑝1 is within 𝑘 levels of 𝑝2” means that the absolute difference between the levels of PCs 𝑝1 and 𝑝2 is less than or equal to 𝑘. Symbolically, we’d write this as |ℓ1 − ℓ2| ≤ 𝑘, where ℓ𝑖 is 𝑝𝑖’s level. For event point farming, 𝑘 = 20; for BRPQ, 𝑘 = 30.
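
This “within 𝑘 levels” condition is a one-liner in Python (the function name within is just something I made up for this sketch):

```python
def within(l1, l2, k):
    """True iff the absolute difference between levels l1 & l2 is at most k."""
    return abs(l1 - l2) <= k

# Event point farming uses k = 20:
assert within(100, 120, 20)      # exactly 20 levels apart: still fine
assert not within(100, 121, 20)  # 21 levels apart: out of range
```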

Tolerance

You’ll notice that “𝑝1 is within 𝑘 levels of 𝑝2” sounds exactly like some kind of binary relation. More specifically, for a given value of 𝑘, this is a homogeneous relation over 𝐶, where 𝐶 is the set of all PCs (𝑝𝑖 ∈ 𝐶). This homogeneous relation — which we’ll just call 𝑅𝑘 — has some nice properties:

  1. Reflexivity: |ℓ1 − ℓ1| = 0 ≤ 𝑘, so every PC is always within 𝑘 levels of itself.
  2. Symmetry: |ℓ1 − ℓ2| = |ℓ2 − ℓ1|, so 𝑝1 𝑅𝑘 𝑝2 implies 𝑝2 𝑅𝑘 𝑝1.

However, it also lacks some properties that we’re familiar with having in many binary relations:

  1. Transitivity: if e.g. ℓ1 = 100, ℓ2 = 115, & ℓ3 = 130, then 𝑝1 𝑅20 𝑝2 and 𝑝2 𝑅20 𝑝3, but not 𝑝1 𝑅20 𝑝3.
  2. Antisymmetry: 𝑝1 𝑅𝑘 𝑝2 and 𝑝2 𝑅𝑘 𝑝1 can both hold even when 𝑝1 ≠ 𝑝2.

Like I said, 𝑅𝑘 can’t be an equivalence relation because it lacks transitivity. However, I found out that there’s actually a special name for this kind of relation: a tolerance relation. The name makes sense; we essentially have some level of “tolerance” (which in our particular case is called 𝑘) for how “distant” objects may be from one another before we consider them to be “discernible” from one another. Equivalence relations (or rather, congruence relations) are the special case of tolerance relations where the “tolerance” is “zero”, or is otherwise nonexistent in some sense; then, it’s not possible for “tolerances” to “accumulate” (as they do in the “𝑝1 𝑅20 𝑝3” example above) and cause intransitivity. In the language of tolerance relations, when 𝑝1 𝑅𝑘 𝑝2 holds, we say that “𝑝1 is indiscernible from 𝑝2”. This makes (𝐶, 𝑅𝑘) a tolerance space.
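
To make this concrete, here’s a tiny sketch (with 𝑘 = 20, and raw levels standing in for PCs) showing reflexivity and symmetry holding while transitivity fails:

```python
K = 20

def r(l1, l2):
    """The tolerance relation R_k on levels, for k = 20."""
    return abs(l1 - l2) <= K

assert r(100, 100)                  # reflexive
assert r(100, 115) and r(115, 100)  # symmetric
# "Tolerances" accumulate: 100 R 115 and 115 R 130, yet not 100 R 130.
assert r(100, 115) and r(115, 130) and not r(100, 130)
```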

Furthermore, a tolerance relation over a finite set is called a dependency relation. Dependency relations are important in concurrent computation, where they are so called because they relate two objects iff those objects depend on one another in some way, thus precluding them from being computed concurrently. For example, if you want to cook some food and then eat it, then there is a dependency between the “cooking” part and the “eating” part; you cannot do them concurrently — unless you actually wanted to eat the uncooked food, in which case, why even bother cooking in the first place?

Aggregate tolerance

In our particular case of recruiting for a party (whether it be for farming event points, or for BRPQing, or for something similar), we ideally want — and, in the case of BRPQ, absolutely need — this tolerance relation 𝑅𝑘 to hold pairwise for all members of the party. Symbolically, if our party is 𝑆 (s.t. 𝑆 ⊆ 𝐶), then we want:

∀(𝑝1, 𝑝2) ∈ 𝑆² : 𝑝1 𝑅𝑘 𝑝2.

With this in mind, there’s a particular phenomenon that we quickly notice: as the party’s size (i.e. its number of members) grows, we see a shrinking of the range of levels that a new member could possibly be. For example, if we represent our party as a multiset of levels, and we let 𝑘 ≔ 20, then we might start with e.g. {100}, and the range of levels that a new member could possibly be is [80, 120]. If we select some arbitrary level from this interval, let’s say e.g. 110, then our party is {100, 110}, and the valid level interval for a new member is now [90, 120], which is a strict subset of the original interval. In general, adding a new party member does not necessarily change the valid level interval, but it can never widen it, so the new interval is always a subinterval (not necessarily strict) of the old one. This means that the length of the interval decreases weakly monotonically as the party size grows.

In particular, the valid level interval for a new party member, as a function of the multiset of levels of the current party members (call this multiset 𝐿), is:

ℐ𝑘(𝐿) = [𝗆𝖺𝗑(𝐿) − 𝑘, 𝗆𝗂𝗇(𝐿) + 𝑘].

Note that an interval like [1, 0] — where the lower bound is strictly greater than the upper bound — is the empty set.

As a result, the interval only changes if the new member (we’re not going to consider the possibility of members leaving the party or being kicked out) changes the maximum level or minimum level present in the party. In other words, the interval only changes if the new member’s level is either strictly greater than, or strictly less than, all old members’ levels.
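
Translating ℐ𝑘 directly into Python (valid_interval is just my own name for it), the shrinking-subinterval behaviour from the {100} example above is easy to reproduce:

```python
def valid_interval(levels, k):
    """The closed interval of levels that a new party member may have.

    Empty whenever the returned lower bound exceeds the upper bound.
    """
    return (max(levels) - k, min(levels) + k)

assert valid_interval([100], 20) == (80, 120)
assert valid_interval([100, 110], 20) == (90, 120)  # a strict subinterval
```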

Interval/bound convergence

I noted that the length of the interval decreases weakly monotonically as the party size grows, and in fact, it’s clear that it can “converge” to an absolutely minimal length of 𝑘. This occurs when 𝗆𝖺𝗑(𝐿) − 𝗆𝗂𝗇(𝐿) = 𝑘, and thus ℐ𝑘(𝐿) = [𝗆𝗂𝗇(𝐿), 𝗆𝖺𝗑(𝐿)]. At exactly this point, it no longer matters how many new members join, nor what levels those new members are — the interval will never change.

We don’t typically have any way to accurately predict the level of the next PC who will ask to join the party — we’re usually just recruiting whichever random people happen to show up at that time. Thus, we have no good choice but to model the level of a new party member as effectively a random variable. In doing so, assuming that we assign a nonzero probability to each level[1], we can intuitively[2] see that |ℐ𝑘| almost surely converges to 𝑘, as the party size tends to +∞.

Given that we expect to converge in this way after a sufficient number of new members join our party, the question that I want to ask seems simple: how long does it take to converge?

In order to answer this question, we need to define things in a way that is a bit more clear-cut. When we say “how long”, we really mean “how many new party members”; each time that a new party member is added, I call this a single “step” of the process. Furthermore, as it is, there’s actually no guarantee that we converge at all, because we’re still talking about MapleStory parties, which are limited to a maximum cardinality of 6. So we want to lift that restriction, allowing arbitrarily large (but still always finite) parties. Similarly, we want to lift the level restrictions that MapleStory inherently possesses, because it makes things more asymmetrical and more difficult to reason about; if 𝑘 = 20 and 𝐿 = {190}, then the formula above suggests that ℐ𝑘(𝐿) = [170, 210], but in practice it’s actually [170, 200], because the maximum level in MapleStory is 200. So instead, we will assume that any element of ℤ is a valid level for a PC to have, and anything else (e.g. 13.5, π, ♣, etc.…) is invalid. Also similarly, we want to constrain 𝑘 so that 𝑘 ∈ ℕ.[3]

Another part that we have to specify is how PC levels are distributed. We’ve already accepted that the level of a new party member is random, but we’ve yet to specify in what way. By invoking the principle of indifference, we can just assume that PC levels are distributed uniformly. Obviously, in practice, the “actual” (empirical) distribution is not strictly uniform; for example, level 120 characters are probably more common than level 119 characters in MapleLegends. But trying to get a better approximation is almost certainly more trouble than it’s worth, and much more importantly, we don’t actually care; although this problem is modelled on empirical player-driven phenomena that occur in (a particular implementation of) MapleStory, it is really just a mathematical problem that we have in mind.

WLOG (we’ll get a better idea of why this doesn’t lose generality in just a bit), we can assume for simplicity that our initial value of 𝐿 is {0}. Then, given some value of 𝑘, we want to ask: what is the probability that we converge after 𝑛 steps (𝑛 ∈ ℕ)? What we’re asking for here is a probability mass function (PMF) that is parameterised by 𝑘. Because this PMF more or less defines[4] the probability distribution that we’re after, I’m going to refer to the PMF as simply 𝖱𝖡𝖢(𝑘), where “𝖱𝖡𝖢” stands for “recruitment bound convergence”. Then, 𝖱𝖡𝖢(𝑘)(𝑛) is the probability that, given some value for 𝑘, we converge in exactly 𝑛 steps.

Why do we care about 𝖱𝖡𝖢(𝑘)? Iunno. I just thought of it one time when I was farming event points or something. Sue me. The point is that I spent absolutely way too long obsessing over it, and it is now that we will see what I have uncovered about this mysterious probability distribution. I’m going to go over what I learned in roughly the order that I learned it, and with roughly the methods by which I learned it.
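
Before any exact analysis, we can also estimate 𝖱𝖡𝖢(𝑘)(𝑛) empirically by just simulating the recruitment process. This Monte Carlo sketch is my own aside (nothing later depends on it); it tracks only 𝗆𝗂𝗇(𝐿) & 𝗆𝖺𝗑(𝐿), because nothing else matters:

```python
import random

def steps_to_converge(k, rng):
    """Recruit uniformly random levels from the current valid interval,
    starting from L = {0}, until max(L) - min(L) == k; return the step count."""
    lo = hi = 0
    steps = 0
    while hi - lo < k:
        level = rng.randint(hi - k, lo + k)  # uniform over I_k(L), inclusive
        lo, hi = min(lo, level), max(hi, level)
        steps += 1
    return steps

rng = random.Random(0xC0FFEE)
samples = [steps_to_converge(4, rng) for _ in range(100_000)]
# The fraction of runs converging in exactly 1 step should be near 2/9.
print(sum(1 for s in samples if s == 1) / len(samples))
```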

Footnotes for “Interval/bound convergence”

  1. [↑] At least, to every level that is valid for a PC to be, which in MapleStory is usually [1, 200] ∩ ℤ. Of course, merely assigning a nonzero probability to each level in ℐ𝑘 does the trick, as well.
  2. [↑] By waving our hooves and invoking the infinite monkey theorem, or something…
  3. [↑] In general, we could loosen the constraints on 𝑘 so that it merely has to be a non-negative real number. However, if we do this, then ⌊𝑘⌋ and 𝑘 are effectively the same, so it will make things a lot easier to just assume by convention that 𝑘 is a natural number.
  4. [↑] See also: “Probability space”.

Phase I: Worked examples

Because this whole thing was a little too abstract, I decided to first make it concrete by working out, by hoof, a particular example.

Consider 𝑘 ≔ 4. What’s the probability that we converge in exactly 1 step? In other words, what is the value of 𝖱𝖡𝖢(4)(1)? Well, we converge in exactly 1 step iff the first PC to join the party is level 4 or is level −4. We can see that ℐ4{0} = [−4, 4], so the set of possible levels that this first step could add to the party is the intersection of this interval with ℤ, viz. {−4, −3, −2, −1, 0, 1, 2, 3, 4}. The cardinality of this set is 9, and there are 2 possible values in the set that cause us to converge (viz. {−4, 4}), so 𝖱𝖡𝖢(4)(1) = 2⁄9 = 0.222…. Easy, right?

Okay, but what about 𝖱𝖡𝖢(4)(2)? Well, we can take advantage of the symmetry of 𝑅𝑘 to observe that, in the first step, adding a level ℓ PC to the party is the same as adding a level −ℓ PC to the party. More generally, instead of laboriously tracking the exact state of 𝐿 at each step, we can note that we really only care about |ℐ4(𝐿)|, which itself only depends on 𝗆𝗂𝗇(𝐿) and 𝗆𝖺𝗑(𝐿). So, because levels are distributed uniformly (meaning that 𝐿 is effectively invariant under constant shifts[1], e.g. adding +17 to all levels), we can simply pretend that 𝐿 = {0, 𝗆𝖺𝗑(𝐿) − 𝗆𝗂𝗇(𝐿)}. For example, we’d simplify {0, 1, 3} to {0, 3}, and we’d simplify {−2, −2, −1, 1} to {0, 3} as well. These observations & simplifications will make it significantly easier to compute values of 𝖱𝖡𝖢 by hoof, but will also contribute to a more systematic (read: not by hoof) treatment of 𝖱𝖡𝖢. Working through the calculation of 𝖱𝖡𝖢(4)(2), we end up with something like:

we add | 𝐿 | valid new levels | we add | probability
−3, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 2⁄9 ⋅ 2⁄6 = 2⁄27
−2, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −2, 4 | 2⁄9 ⋅ 2⁄7 = 4⁄63
−1, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | −3, 4 | 2⁄9 ⋅ 2⁄8 = 1⁄18
0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | −4, 4 | 1⁄9 ⋅ 2⁄9 = 2⁄81

These 4 cases make up all possible cases where we converge in exactly 2 steps: with our simplifications, there are only 4 possible levels that we can effectively add in the first step without converging (we can’t converge at that point because then it would be converging in exactly 1 step instead of 2), and then we always have to choose a level that causes us to converge in the second step, of course. Notice that no matter how many steps we converge in, there are always exactly 2 possible levels that can be added to 𝐿 in the final step to make it converge: either the level that’s exactly high enough to converge, or that’s exactly low enough. Because these 4 cases are exhaustive, we can simply add their probabilities to get the value of 𝖱𝖡𝖢(4)(2) = 2⁄27 + 4⁄63 + 1⁄18 + 2⁄81 = 247⁄1134 ≈ 0.217 813.
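
We can double-check this sum with Python’s exact rational arithmetic:

```python
from fractions import Fraction as Q

# The four case probabilities from the table above:
total = Q(2, 27) + Q(4, 63) + Q(1, 18) + Q(2, 81)
assert total == Q(247, 1134)
print(float(total))  # ≈ 0.217813
```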

That was a slight bit of a pain in the ass, but don’t worry — it gets way more painful from here. What about 𝖱𝖡𝖢(4)(3)? Prepare to be sorry that you asked:

we add | 𝐿 | valid new levels | we add | 𝐿 | valid new levels | we add | probability
0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | 0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | −4, 4 | 1⁄9 ⋅ 1⁄9 ⋅ 2⁄9 = 2⁄729
0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | −1, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | −3, 4 | 1⁄9 ⋅ 2⁄9 ⋅ 2⁄8 = 1⁄162
0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | −2, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −2, 4 | 1⁄9 ⋅ 2⁄9 ⋅ 2⁄7 = 4⁄567
0 | {0, 0} | −4, −3, −2, −1, 0, 1, 2, 3, 4 | −3, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 1⁄9 ⋅ 2⁄9 ⋅ 2⁄6 = 2⁄243
−1, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | 0, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | −3, 4 | 2⁄9 ⋅ 2⁄8 ⋅ 2⁄8 = 1⁄72
−1, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | −1, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −2, 4 | 2⁄9 ⋅ 2⁄8 ⋅ 2⁄7 = 1⁄63
−1, 1 | {0, 1} | −3, −2, −1, 0, 1, 2, 3, 4 | −2, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 2⁄9 ⋅ 2⁄8 ⋅ 2⁄6 = 1⁄54
−2, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | 0, 1, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −2, 4 | 2⁄9 ⋅ 3⁄7 ⋅ 2⁄7 = 4⁄147
−2, 2 | {0, 2} | −2, −1, 0, 1, 2, 3, 4 | −1, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 2⁄9 ⋅ 2⁄7 ⋅ 2⁄6 = 4⁄189
−3, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | 0, 1, 2, 3 | {0, 3} | −1, 0, 1, 2, 3, 4 | −1, 4 | 2⁄9 ⋅ 4⁄6 ⋅ 2⁄6 = 4⁄81

Fun!!! Adding up the probabilities of these (exhaustive) cases gets us 𝖱𝖡𝖢(4)(3) = 2⁄729 + 1⁄162 + 4⁄567 + 2⁄243 + 1⁄72 + 1⁄63 + 1⁄54 + 4⁄147 + 4⁄189 + 4⁄81 = 48649⁄285768 ≈ 0.170 239 5.
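
Once again, exact rational arithmetic confirms the hoof-work:

```python
from fractions import Fraction as Q

# The ten case probabilities from the table above:
terms = [Q(2, 729), Q(1, 162), Q(4, 567), Q(2, 243), Q(1, 72),
         Q(1, 63), Q(1, 54), Q(4, 147), Q(4, 189), Q(4, 81)]
assert sum(terms) == Q(48649, 285768)
```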

I think that’s about enough of that…

Footnotes for “Phase I: Worked examples”

  1. [↑] More formally, the elements of 𝐿 are points in an affine space. This affinity is naturally induced by the fact that 𝑅𝑘 is defined in terms of subtraction between these “points”.

Phase II: What’s the probability that we converge in exactly 𝑛 steps?

Now that we’ve worked a good way through an example, we’re going to try to make the leap to a generalisation. I did a lot of scribbling nonsense, and a lot of thinking really hard about it, and I’m going to try to give a brief explanation of my reasoning.

First of all, we’re going to define a new variable called 𝑑, which represents the difference (you can also think of it as standing for “distance” or “diameter”) between 𝗆𝖺𝗑(𝐿) and 𝗆𝗂𝗇(𝐿). In other words, the simplification of 𝐿 that I talked about above can be defined as simply {0, 𝑑}.

As I was trying to come up with a general solution, I started scribbling out some pseudocode, and I ended up with something like this to capture the general structure of the example used in Phase I — in particular, 𝖱𝖡𝖢(4)(3):

sum a ← [-3..3]:
    d_a := |a|
    sum b ← [(d_a - 3)..3]:
        d_b := max{d_a - b, b, d_a}
        sum _ ← [0, 1]:
            1/9 * 1/(9 - d_a) * 1/(9 - d_b)

I’m using the sum a ← [-3..3]: notation to mean that we’re summing over a, with the values of a being taken from the list [-3..3] (which, in Haskell notation, is the same as the list [-3, -2, -1, 0, 1, 2, 3]), and the summands simply being the value(s) that the block takes on (which may themselves be summations…). I’m also using := for assignment.

As you can see, I’m essentially just summing over all possible cases, i.e. every possible level that the first added party member could have, combined with every possible level that the second added party member could have, and so on. In the first two sum loops, we are avoiding convergence and considering every remaining possibility, and in the third and final sum loop, we just consider the 2 cases that cause convergence — which is why there is no summation variable, and instead just a blank _ — I’ve used the list [0, 1], but any list of length 2 would work equally well.

The pattern here is hopefully clear: if we wanted to consider larger values of 𝑛 like 𝖱𝖡𝖢(4)(4), 𝖱𝖡𝖢(4)(5), ⋯, we would just add another sum layer (sum c ← [(d_b - 3)..3]:, ⋯) that calculates yet another value of 𝑑 (d_c := max{d_b - c, c, d_b}, ⋯), and the arithmetic expression in the innermost sum loop would look like 1/9 * 1/(9 - d_a) * 1/(9 - d_b) * 1/(9 - d_c) * ⋯.
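
Transcribing the pseudocode for 𝖱𝖡𝖢(4)(3) directly into Python (with exact rationals) reproduces the value computed by hoof in Phase I:

```python
from fractions import Fraction as Q

total = Q(0)
for a in range(-3, 4):            # sum a <- [-3..3]
    d_a = abs(a)
    for b in range(d_a - 3, 4):   # sum b <- [(d_a - 3)..3]
        d_b = max(d_a - b, b, d_a)
        for _ in range(2):        # the 2 levels that cause convergence
            total += Q(1, 9) * Q(1, 9 - d_a) * Q(1, 9 - d_b)

assert total == Q(48649, 285768)  # matches RBC(4)(3) from Phase I
```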

Of course, there’s also the question of how I came up with “max{d_a - b, b, d_a}” in the first place. I basically came up with this expression by considering two cases, which are together exhaustive: that the summation variable 𝑏 is negative (i.e. 𝑏 < 0), and that 𝑏 is non-negative (i.e. 𝑏 ≥ 0):

  1. If 𝑏 < 0, then the new level sits strictly below 𝗆𝗂𝗇(𝐿) = 0, so the new value of 𝑑 is 𝑑𝑎 − 𝑏.
  2. If 𝑏 ≥ 0, then the new value of 𝑑 is 𝗆𝖺𝗑{𝑏, 𝑑𝑎}.

In other words:

  1. If 𝑏 < 0, then 𝑑 increases by −𝑏 (i.e. by |𝑏|).
  2. If 𝑏 ≥ 0, then 𝑑 increases by 𝗆𝖺𝗑{𝑏 − 𝑑𝑎, 0}; that is, the new value of 𝑑 is 𝑑𝑎 + 𝗆𝖺𝗑{𝑏 − 𝑑𝑎, 0}.

𝗆𝖺𝗑’ing the “𝑏 − 𝑑𝑎” with 0 covers the “𝑏 is somewhere within the interval [0, 𝑑𝑎]” case above, causing 𝑑 to be increased by 0 (i.e. to not change) in that case. Thus, we can refine how we split up the computation of “𝑑𝑎 + 𝗆𝖺𝗑{𝑏 − 𝑑𝑎, 0}” above:

  1. If 𝑏 − 𝑑𝑎 ≤ 0 (i.e. 𝑏 ≤ 𝑑𝑎), then the new value of 𝑑 is just 𝑑𝑎.
  2. If 𝑏 − 𝑑𝑎 > 0 (i.e. 𝑏 > 𝑑𝑎), then the new value of 𝑑 is 𝑑𝑎 + (𝑏 − 𝑑𝑎) = 𝑏.

So, we now have these three exhaustive cases:

  1. If 𝑏 < 0, then the new value of 𝑑 is 𝑑𝑎 − 𝑏.
  2. If 0 ≤ 𝑏 ≤ 𝑑𝑎, then the new value of 𝑑 is 𝑑𝑎.
  3. If 𝑏 > 𝑑𝑎, then the new value of 𝑑 is 𝑏.

…Which is where “𝗆𝖺𝗑{𝑑𝑎 − 𝑏, 𝑏, 𝑑𝑎}” (max{d_a - b, b, d_a}) comes from.
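
For what it’s worth, a brute-force Python check over all of the (𝑑𝑎, 𝑏) states that occur for 𝑘 = 4 confirms that the 𝗆𝖺𝗑 expression agrees with the three-case analysis:

```python
checked = 0
for d_a in range(0, 4):
    for b in range(d_a - 3, 4):
        if b < 0:
            expected = d_a - b  # new level strictly below min(L)
        elif b <= d_a:
            expected = d_a      # new level within [0, d_a]
        else:
            expected = b        # new level strictly above max(L)
        assert max(d_a - b, b, d_a) == expected
        checked += 1
print(checked, "cases checked")
```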

This brings us to the first implementation of 𝖱𝖡𝖢 that I came up with that, you know, at the very least didn’t require doing anything “by hoof”. It’s an implementation in Python, with a function called pmf0:

from fractions import Fraction as Q


def pmf0(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return sum(pmf_inner0(k, n, 0))


def pmf_inner0(k, n, d):
    denom = 2 * k + 1 - d

    if n == 1:
        yield Q(2, denom)
    else:
        for elem in range(d - (k - 1), k):
            d_prime = max(d - elem, elem, d)
            for f in pmf_inner0(k, n - 1, d_prime):
                yield Q(1, denom) * f

pmf0 is entirely correct, and it calculates its result with unlimited precision (using Python’s built-in rational number arithmetic). It is, however, basically the worst implementation — I continued refining it gradually to make something better.

As you’d expect, pmf0 (or rather, pmf_inner0) is recursive, in order to implement that variably-sized (viz. the size of 𝑛) nested stack of sum loops that we saw in the pseudocode above. The strategy here is to implement the inner recursive bit (pmf_inner0) as a generator that yields the summands one by one, and then pmf0 does the job of summing them all up with the built-in sum function.

We also have special-casing for when 𝑘 = 0, in which case the starting state (𝐿 ≔ {0}) is already converged, so the probability of converging in exactly 0 steps is 1, and the probability of converging in exactly 𝑛 steps for any other value of 𝑛 is 0. On the other hand, if 𝑘 ≠ 0 and yet 𝑛 < 1, then we cannot possibly converge (as we need at least 1 step to get from 𝐿 ≔ {0} to convergence), so we always return 0 in such cases.
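
Repeating pmf0’s definition verbatim so that the snippet is self-contained, we can confirm that it reproduces the values worked out by hoof in Phase I:

```python
from fractions import Fraction as Q

def pmf0(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return sum(pmf_inner0(k, n, 0))


def pmf_inner0(k, n, d):
    denom = 2 * k + 1 - d

    if n == 1:
        yield Q(2, denom)
    else:
        for elem in range(d - (k - 1), k):
            d_prime = max(d - elem, elem, d)
            for f in pmf_inner0(k, n - 1, d_prime):
                yield Q(1, denom) * f

# Cross-check against the Phase I values:
assert pmf0(4, 1) == Q(2, 9)
assert pmf0(4, 2) == Q(247, 1134)
assert pmf0(4, 3) == Q(48649, 285768)
```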

Phase III: Can we go faster?

So, we accomplished something nice: we now have a computer program that will calculate 𝖱𝖡𝖢(𝑘)(𝑛) accurately (indeed, with unlimited precision) for any values of 𝑘 and 𝑛. But I still have three serious issues with what we’ve got so far:

  1. This implementation is ludicrously slow. Using pmf0 is not recommended unless you just want to laugh at how slow it is.
  2. What I was dreaming of, in my mind, when starting on this problem, was coming up with a nice-looking formula — the kind that you might see in a textbook on, say, probability theory or something. We’re definitely not there yet.
  3. I also care about some other properties of the distribution. Knowing the PMF is certainly extremely important and effectively defines the distribution, but I also want to know things like the expectation, the variance, etc.

So, let’s tackle these three issues, roughly in the order stated above.

My next iteration on the Python implementation is called pmf1, and it looks something like this:

def pmf1(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return pmf_inner1(k, n, 0)


def pmf_inner1(k, n, d):
    denom = 2 * k + 1 - d

    if n == 1:
        return Q(2, denom)
    else:
        return sum(
            Q(1, denom) * pmf_inner1(k, n - 1, max(d - i, i, d))
            for i in range(d - (k - 1), k)
        )

As you can see, pmf1 basically looks like pmf0 except that it calls pmf_inner1 instead of pmf_inner0, and it doesn’t sum the results. The main thing that distinguishes pmf_inner1 from pmf_inner0 is that it does its own summation, thus being an ordinary function (which just returns a single rational number) instead of a generator. This makes things more explicit. It also possibly removes some overhead that the generator approach might have, but really I wouldn’t expect it to be much faster — the added explicitness is what’s important.

That brings us to the next iteration, pmf2, which is where we see the real performance gains:

def pmf2(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return pmf_inner2(k, n, 0)


def pmf_inner2(k, n, d):
    cache = {}

    def f(n, d):
        if (n, d) in cache:
            return cache[(n, d)]

        denom = 2 * k + 1 - d

        if n == 1:
            return Q(2, denom)
        else:
            r = sum(
                Q(1, denom) * f(n - 1, max(d - i, i, d))
                for i in range(d - (k - 1), k)
            )
            cache[(n, d)] = r

            return r

    return f(n, d)

That’s right, this is a job for dynamic programming! By memoising already-computed values of f(n, d) (k is a constant for a given invocation of pmf_inner2, so we don’t need to store its values) in a dictionary called cache, we avoid most of the recursion that we would otherwise perform, thus dramatically cutting down the computational complexity of pmf2.

Oh, but we can do even better. As it turns out, we’re actually unnecessarily duplicating quite a bit of work. We did all of that reasoning above to arrive at “𝑑𝑏 ≔ 𝗆𝖺𝗑{𝑑𝑎 − 𝑏, 𝑏, 𝑑𝑎}”. However, there’s actually a similar, but easier, way to look at it. If 𝐿 = {0, 𝑑} and we haven’t yet converged, then the next level added to 𝐿 falls into exactly one of three cases: strictly less than 0, somewhere within [0, 𝑑], or strictly greater than 𝑑. Crucially, the number of possible levels in the first case is exactly the same as the number of possible levels in the third and final case — viz. 𝑘 − 𝑑 — and these levels are symmetrical; we get the same result if we consider adding −1 to 𝐿 as if we consider adding 𝑑 + 1, the same result if we consider adding −2 as if we consider adding 𝑑 + 2, and so on. And adding any level to 𝐿 that is within [0, 𝑑] yields the same result as adding any other level that’s also within that interval, so we can just do the computation for one particular level, and then multiply the result by the number of possible levels within this interval, viz. 𝑑 + 1.

Just to make things more concrete with an example, consider 𝑘 = 8 and 𝐿 ≔ {0, 3}. The set of possible levels that could be added to 𝐿 in the next step are as follows (this is computed as ℐ8{0, 3} ∩ ℤ):

{−5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, 6, 7, 8}.

We can partition this set into the three cases that we have in mind:

{−5, −4, −3, −2, −1} ∪ {0, 1, 2, 3} ∪ {4, 5, 6, 7, 8}.

As expected, the first and last sets in this partition both have cardinality 𝑘 − 𝑑, and the middle set has cardinality 𝑑 + 1. In this example, the symmetry is that −1 & 4 are paired, −2 & 5 are paired, −3 & 6 are paired, −4 & 7 are paired, and −5 & 8 are paired. By “paired”, I of course mean that the result (i.e. the new value of 𝑑) is effectively the same, no matter which element of the pair is added to 𝐿 in the next step. So, for example, instead of computing a probability under the assumption that −1 is added to 𝐿 in the next step, and then doing the same for 4, and then adding those two results; we can simply do it for just 4, and then double the result. Furthermore, instead of doing separate computations for every element of {0, 1, 2, 3} and then summing the results, we can just do it for 0, and then multiply that result by 𝑑 + 1.
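
Both facts are easy to verify mechanically for this 𝑘 = 8, 𝐿 = {0, 3} example (new_d is just my own name for the updated diameter after one more level is added):

```python
k, d = 8, 3

def new_d(level):
    """Updated diameter of L = {0, d} after adding `level`."""
    return max(d - level, level, d)

# Each level -m (below 0) pairs with d + m (above d): same new diameter.
for m in range(1, k - d + 1):
    assert new_d(-m) == new_d(d + m) == d + m
# Any of the d + 1 levels inside [0, d] leaves the diameter unchanged.
assert all(new_d(level) == d for level in range(0, d + 1))
```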

This gets us the next iteration of our little Python program, which I simply call pmf:

def pmf(k, n):
    if k == 0:
        return Q(1) if n == 0 else Q(0)
    elif n < 1:
        return Q(0)

    return pmf_inner(k, n, 0)


def pmf_inner(k, n, d):
    cache = {}

    def f(n, d):
        if (n, d) in cache:
            return cache[(n, d)]

        denom = 2 * k + 1 - d

        if n == 1:
            return Q(2, denom)
        else:
            r = sum(
                Q(2, denom) * f(n - 1, i) for i in range(d + 1, k)
            ) + Q(d + 1, denom) * f(n - 1, d)
            cache[(n, d)] = r

            return r

    return f(n, d)

This was about as far as I could manage to refine this function, at least whilst still getting results with unlimited precision. We can go quite a bit faster by switching from Python’s fractions.Fraction to its built-in float datatype, which simultaneously limits our precision to a fixed number of bits (viz. 64 bits), and takes advantage of the fact that modern CPUs contain specialised hardware for working with IEEE 754 double-precision (a.k.a. binary64) floating-point numbers. The result is pmf_float:

def pmf_float(k, n):
    if k == 0:
        return 1.0 if n == 0 else 0.0
    elif n < 1:
        return 0.0

    return pmf_inner_float(k, n, 0)


def pmf_inner_float(k, n, d):
    cache = {}

    def f(n, d):
        if (n, d) in cache:
            return cache[(n, d)]

        denom = float(2 * k + 1 - d)

        if n == 1:
            return 2.0 / denom
        else:
            r = (
                2.0 * sum(f(n - 1, i) for i in range(d + 1, k))
                + float(d + 1) * f(n - 1, d)
            ) / denom
            cache[(n, d)] = r

            return r

    return f(n, d)

Note that this implementation sticks to integer arithmetic wherever possible, only resorting to floating-point arithmetic when absolutely necessary. pmf_float is very usable and gives results of quite good accuracy, as there’s nothing being done here that is particularly numerically unstable. So, it should be preferred over pmf unless you actually need to know the exact rational number for some reason.
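
To put a rough number on “quite good accuracy”: repeating pmf_float’s definition verbatim so that the snippet is self-contained, its result for 𝖱𝖡𝖢(4)(3) agrees with the exact rational value from Phase I to well within 1e−12:

```python
from fractions import Fraction as Q

def pmf_float(k, n):
    if k == 0:
        return 1.0 if n == 0 else 0.0
    elif n < 1:
        return 0.0

    return pmf_inner_float(k, n, 0)


def pmf_inner_float(k, n, d):
    cache = {}

    def f(n, d):
        if (n, d) in cache:
            return cache[(n, d)]

        denom = float(2 * k + 1 - d)

        if n == 1:
            return 2.0 / denom
        else:
            r = (
                2.0 * sum(f(n - 1, i) for i in range(d + 1, k))
                + float(d + 1) * f(n - 1, d)
            ) / denom
            cache[(n, d)] = r

            return r

    return f(n, d)

# Compare against the exact value of RBC(4)(3) from Phase I:
exact = float(Q(48649, 285768))
assert abs(pmf_float(4, 3) - exact) < 1e-12
```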

Okay, but, how fast is it, really? Oh yes, it’s time for some benchmarks! But only really crappy ad hoc ones because I’m lazy and there’s no reason to be obsessive about accuracy here…

But first, we need just one more implementation to benchmark: a Rust implementation of pmf_float! I didn’t bother making a Rust implementation of pmf because, again, I was too lazy, and I really just wanted to see how quickly I could compute 𝖱𝖡𝖢 to an accuracy that’s sufficient for any realistic application. The Rust implementation looks something like this:

use rustc_hash::FxHashMap as Map;

pub fn pmf(k: u32, n: u32) -> f64 {
    if k == 0 {
        if n == 0 {
            1.0
        } else {
            0.0
        }
    } else if n < 1 {
        0.0
    } else {
        pmf_inner(k, n, 0)
    }
}

fn pmf_inner(k: u32, n: u32, d: u32) -> f64 {
    fn f(cache: &mut Map<(u32, u32), f64>, k: u32, n: u32, d: u32) -> f64 {
        if let Some(x) = cache.get(&(n, d)) {
            return *x;
        }

        let denom = f64::from(2 * k - d + 1);

        if n == 1 {
            2.0 / denom
        } else {
            let sum: f64 = (d + 1..k).map(|i| f(cache, k, n - 1, i)).sum();
            let r =
                (2.0 * sum + f64::from(d + 1) * f(cache, k, n - 1, d)) / denom;
            cache.insert((n, d), r);

            r
        }
    }

    let mut cache = Map::default();

    f(&mut cache, k, n, d)
}

As you can see, this is pretty much exactly equivalent to how the Python pmf_float function is written, with the slight exception that I don’t use the hash function from Rust’s standard library (std), and instead depend on the rustc-hash crate.[1]

So, uhm, let’s see the (crappy) benchmarks?:

Raw outputs

bench.py:

pmf2: 19.257901867997134
pmf: 7.60964249499375
pmf_float: 0.23313972300093155

benches/pmf_benchmark.rs:

pmf(20, 24)             time:   [50.477 µs 50.482 µs 50.487 µs]
Found 5 outliers among 100 measurements (5.00%)
  3 (3.00%) low mild
  2 (2.00%) high mild
language | function | time (µs⧸iter) | time (relative)
Python | pmf2 | 192 579.02 | 3 814.96×
Python | pmf | 76 096.42 | 1 507.46×
Python | pmf_float | 2 331.40 | 46.18×
Rust | pmf | 50.48 | 1.00×

The implementations were benchmarked solely on their ability to compute 𝖱𝖡𝖢(20)(24). You’ll notice that I did not benchmark pmf0 nor pmf1, because those implementations lack the memoisation required to run in a reasonable amount of time.

For Python, I used timeit to do the benchmarking (see bench.py), and for Rust, I used Criterion.rs (see benches/pmf_benchmark.rs).

You can see the full listings for the Python implementations at rbc.py, and for the Rust implementation at src/lib.rs & src/tests.rs.

Footnotes for “Phase III: Can we go faster?”

  1. [↑] As of this writing, Rust’s standard library uses SipHash (specifically SipHash-1-3) by default. This is because SipHash makes use of an ARX structure that makes it highly resistant to hash flooding attacks, thus making it a good default choice if you’re not sure whether or not the user will need that security. However, we don’t care about that kind of security in this case, so we can just use rustc-hash to hash things stupidly quickly. If you’re wondering what rustc-hash actually looks like, it hashes a usize (which is a pointer-sized unsigned integer) at a time, like so (simplified under the assumption that you’re compiling for a 64-bit architecture):

    const K: usize = 0x517cc1b727220a95;
    
    fn add_to_hash(&mut self, i: usize) {
        self.hash = self.hash.rotate_left(5).bitxor(i).wrapping_mul(K);
    }
    

    Note that 0x517cc1b727220a95 = 5 871 781 006 564 002 453 = 3² ⋅ 7 ⋅ 11² ⋅ 47 ⋅ 173 ⋅ 94 732 711 681.

Phase IV: Where da formula @, doe?

I regret to inform you that I could not figure out a closed-form expression for ๐–ฑ๐–ก๐–ข โ€” at least, not for a restrictive definition of โ€œclosed-formโ€. Boooooโ€ฆ ๐Ÿ˜ค๐Ÿ‘Ž Embarrassingโ€ฆโ€ฆ

However, it is certainly possible to come up with a recursive โ€œformulaโ€ modelled on the programmatic implementations that you see above. This is not a single expression, so itโ€™s not a โ€œclosed-form expressionโ€, but it is a fixed number of expressions (really, just two expressions) that are, individually, technically closed-form โ€” at least, by Wikipediaโ€™s definition of โ€œclosed-formโ€, which allows summation, so long as itโ€™s not infinite. Basically, this is the best I got:

Formula for ๐–ฑ๐–ก๐–ข โ—

Raw LaTeX
\begin{align*}
\mathrm{RBC}(k, n) &\equiv f(k, n, 0)                           \\[1ex]
        f(k, 1, d) &=      \frac{2}{2k + 1 - d}                 \\[1ex]
        f(k, n, d) &=      \frac{1}{2k + 1 - d}\sum_{i = d - (k - 1)}^{k - 1}
                             f(k,\,n - 1,\,\max\{d - i, d, i\}) \\[1ex]
                   &=      \frac{1}{2k + 1 - d}\left((d + 1)\,f(k,\,n - 1,\,d)
                             + 2\sum_{i = d + 1}^{k - 1}
                             f(k,\,n - 1,\,i)\right)
\end{align*}

As you can see, there are two forms that you can choose from for the recursive bit of 𝑓: the one on top is modelled on pmf2, and the one on bottom is modelled on pmf. The pmf2-based one actually looks a bit cleaner and less verbose, but of course, we know that it’s computationally just not as good. Plus, we avoid using 𝗆𝖺𝗑 at all in the pmf-based one, which is nice.
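If you want to play with the recursion directly, here is a sketch of the pmf-based form in Python, using Fraction for exact arithmetic (my own transcription of the formula above, not the pmf listing from rbc.py):

```python
from fractions import Fraction
from functools import lru_cache


def rbc(k, n):
    """Exact probability mass RBC(k)(n), transcribed from the recursion above."""

    @lru_cache(maxsize=None)
    def f(n, d):
        if n == 1:
            return Fraction(2, 2 * k + 1 - d)
        # (d + 1)·f(n − 1, d) + 2·Σ_{i=d+1}^{k−1} f(n − 1, i), all over (2k + 1 − d)
        acc = (d + 1) * f(n - 1, d) + 2 * sum(f(n - 1, i) for i in range(d + 1, k))
        return acc / (2 * k + 1 - d)

    return f(n, 0)
```

For 𝑘 = 1, this collapses to rbc(1, n) = (2⁄3)⋅(1⁄3)ⁿ⁻¹, which is handy for sanity-checking.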

So, that is (or rather, “those are”, as there are two versions) the closest thing that I have to a “nice formula” for 𝖱𝖡𝖢. If you are a better mathematician than I am (frankly, not a very high bar to exceed…) and can figure out an even nicer formula, please please do let me know about it!

To get an idea of what 𝖱𝖡𝖢 looks like, we can plot it! Check out this plot that focuses in on just the first 15 values of 𝖱𝖡𝖢(20):

A plot of 𝖱𝖡𝖢(20)(𝑛), for 𝑛 from 1 to 15

(This, and other plots shown here, were created using matplotlib; see plot.py.)

Cool!! I’ve used cubic Hermite spline interpolation to show a smooth curve that fits all 15 points, but do remember that this is a PMF, not a PDF! There are really only 15 points (probability masses) here, and the smoothed curve is just there to guide your eyeballs.

To get a bigger picture than this, we can consider the first 100 values of 𝑛 instead of the first 15, and we can consider multiple values of 𝑘, as well:

A plot of 𝖱𝖡𝖢(𝑘)(𝑛), for 𝑘 ∈ {10, 20, 40, 80}, and 𝑛 from 1 to 100

🤩 Wowww… Pretty~

You can see that this distribution gets real flat-like for larger values of 𝑘. For 𝑘 = 80, it nearly looks like a damn linear function with a mildly negative slope!

Phase V: How do I know what to expect?

So, we have a PMF. But we also want other nice things, like the expectation of that PMF. How in the damn hell am I supposed to figure that out?

Well, I decided that estimating the expectation was a good start. So I wrote a little Python function to do just that:

def estimate_expectation(k, epsilon):
    e = 0.0
    cumu_prob = 0.0
    n = 1
    while cumu_prob < 1.0 - epsilon:
        prob = pmf_float(k, n)
        old_cumu_prob = cumu_prob
        cumu_prob += prob
        e += prob * float(n)
        n += 1

        if old_cumu_prob == cumu_prob:
            break

    return e

The idea here is very simple: for a given value of 𝑘, we calculate 𝖱𝖡𝖢(𝑘)(1), 𝖱𝖡𝖢(𝑘)(2), 𝖱𝖡𝖢(𝑘)(3), and so on, keeping track of the sum of the probability masses that we’ve calculated so far. For each such 𝖱𝖡𝖢 calculation, we add 𝑛 ⋅ 𝖱𝖡𝖢(𝑘)(𝑛) to our running total, and then return that running total when the sum of the probability masses that we’ve calculated so far is sufficiently close to 1.

What does “sufficiently close” mean? Well, that is determined by the value of epsilon (ε); we just need to get to at least 1 − ε. I used sys.float_info.epsilon for the value of ε, but the basic idea is that larger values of ε will give less accurate results more quickly, and vice versa.

By running estimate_expectation for the first few values of 𝑘, I got something like this:

[
    0.0,
    1.4999999999999991,
    2.2499999999999845,
    2.9166666666666576,
    3.541666666666626,
    4.141666666666597,
    4.724999999999947,
    5.296428571428546,
    5.858928571428415,
    6.414484126984054,
    6.964484126984016,
    7.509938672438514,
    8.051605339105192,
    8.590066877566773,
    9.125781163280985,
    9.65911449661434,
    10.190364496614361,
    10.719776261320147,
    11.247554039097999,
    11.77386982857149,
    12.298869828571545,
    12.822679352381105,
    13.345406625108346,
    13.867145755543156,
    14.387979088876463,
    14.907979088876447,
    15.427209858107194,
    15.945728376625619,
    16.46358551948275,
    16.98082689879308,
    17.497493565459727,
    18.013622597717774,
    18.52924759771753,
]

Naturally, these are just estimations, so we expect them to be a little bit off the mark — specifically, a little bit lower than the true value. Nevertheless, this level of accuracy was enough to see some clear patterns. I decided to take the difference sequence of these estimations, essentially calculating estimations of 𝖤[𝖱𝖡𝖢(2)] − 𝖤[𝖱𝖡𝖢(1)], 𝖤[𝖱𝖡𝖢(3)] − 𝖤[𝖱𝖡𝖢(2)], 𝖤[𝖱𝖡𝖢(4)] − 𝖤[𝖱𝖡𝖢(3)], and so on. I then went through this sequence, by hoof, and for each difference in the sequence, I wrote down which rational number it looked like it was estimating. For example, 1.4999999999999991 looks like it’s estimating 3⁄2. The result was this:

3⁄2, 3⁄4, 2⁄3, 5⁄8, 3⁄5, 7⁄12, 4⁄7, 9⁄16, 5⁄9, 11⁄20, 6⁄11, 13⁄24, 7⁄13, 15⁄28, 8⁄15, ⋯

As it turns out, the first element in this sequence (3⁄2) is a red herring — it results from the fact that we’ve defined 𝖱𝖡𝖢(0)(𝑛) to be 0 for any value of 𝑛. By ignoring this first term, I was able to see that the terms in this sequence are just (𝑘 + 1)∕(2𝑘), with the exception of the value for 𝑘 = 1 being 3⁄2 by fiat. It was with this knowledge that I was able to come up with the expected value for 𝖱𝖡𝖢:

Expected value for 𝖱𝖡𝖢

Raw LaTeX
\begin{align*}
            N(k) &\sim \mathrm{RBC}(k) \\[1ex]
\mathrm{E}[N(1)] &=    \frac{3}{2}     \\[1ex]
\mathrm{E}[N(k)] &=    \mathrm{E}[N(k - 1)] + \frac{k + 1}{2k}
\end{align*}

\begin{center}
\noindent\rule{14.2cm}{1pt}
\end{center}

\vspace*{-\baselineskip}

\begin{align*}
\mathrm{E}[N(k)]       ={}& \frac{1}{2}\left(k + H_k + 1\right) \\[1ex]
                 \approx{}& \frac{1}{2}
                              \left(k + \ln(k) + \gamma + 1 +
                              \frac{1}{2k}\right)               \\[1ex]
                  \simeq{}& \Theta(k)
\end{align*}

In this context, “𝑁(𝑘) ∼ 𝖱𝖡𝖢(𝑘)” means that 𝑁(𝑘) is a random variable that has 𝖱𝖡𝖢(𝑘) as its distribution. I’ve called it “𝑁” because the concrete values that it takes on are values of 𝑛, i.e. exactly how many steps it took us to converge.

Below the horizontal line, I show an equivalent formula that apparently isn’t recursive, and thus looks like a closed-form expression for 𝖤[𝑁(𝑘)]. However, 𝐻𝑘 denotes the 𝑘th harmonic number, which itself doesn’t have a closed-form definition in the strictest sense of “closed-form” — there’s just the usual “partial sum of the first 𝑘 terms of the harmonic series”, which may or may not be a closed form, depending on whom you ask. 😉

In any case, there is a really really (like, really) good approximation for 𝐻𝑘 that is most definitely closed-form, which I used to come up with the approximation that you see after the “≈”. The “𝗅𝗇” stands for “natural logarithm”, and γ is the Euler–Mascheroni constant (γ ≈ 0.577 215 664 901 532 9). I’ve also used big Θ notation to express that 𝖤[𝑁(𝑘)] asymptotically grows as Θ(𝑘) — that is, linearly.
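If you’d like to convince yourself of just how good that approximation to 𝐻𝑘 is, here’s a quick sketch (my own check, not from rbc.py):

```python
import math

GAMMA = 0.577_215_664_901_532_9  # the Euler–Mascheroni constant


def harmonic(k):
    # H_k as a literal partial sum of the harmonic series
    return sum(1.0 / i for i in range(1, k + 1))


def harmonic_approx(k):
    # H_k ≈ ln(k) + γ + 1/(2k); the leftover error is on the order of 1/(12k²)
    return math.log(k) + GAMMA + 1.0 / (2.0 * k)
```

Already at 𝑘 = 100, the two agree to within about 10⁻⁵.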

If you want to know just how good this approximation is, feast your eyes:

A plot comparing the approximation of 𝖤[𝑁(𝑘)] to its true values

Soooo… yeah. It’s basically indistinguishable from the true value at 𝑘 ≥ 10 or so… If you’re wondering what’s up with the random singularity that appears in the plot of the approximation, that’s due to the 1∕(2𝑘) term, which comes from a truncation (yet another partial sum!) of the Taylor series representation.

Also in rbc.py, there are implementations for calculating the exact expectation…:

def expectation(k):
    # Q is rbc.py's rational type (presumably an alias for fractions.Fraction)
    if k == 0:
        return Q(0)

    e = Q(3, 2)
    for i in range(2, k + 1):
        e += Q(i + 1, 2 * i)

    return e

…And the same thing, but using floats…:

def expectation_float(k):
    if k == 0:
        return 0.0

    e = 1.5
    for i in range(2, k + 1):
        e += (i + 1.0) / (2.0 * i)

    return e

…As well as for the approximation:

from math import log

GAMMA_PLUS_1 = 1.577_215_664_901_532_8


def expectation_approx(k):
    if k == 0:
        return 0.0

    k = float(k)

    return 0.5 * (k + log(k) + GAMMA_PLUS_1 + 1.0 / (2.0 * k))

Phase VI: How do I know what not to expect?

Okay, so we know the expected value of 𝑁(𝑘), which is great. But what about other nice statistical properties? Like…

Variance

The variance? Once again I’ve no clue how I would go about finding this analytically, so we’re going to once again resort to estimations.

The variance can be expressed as simply the expectation of the square of the value, minus the square of the expectation of the value. Symbolically:

𝖵𝖺𝗋(𝑁(𝑘)) = 𝖤[𝑁(𝑘)²] − 𝖤[𝑁(𝑘)]².

Using this equation directly in computation — when using floating-point arithmetic — can be quite numerically unstable, but we’re going to do it anyways, because I don’t think that we’re going to have that issue (particularly, because it seems like 𝖵𝖺𝗋(𝑁(𝑘)) is quite large).

Thus, we can use the same strategy that we used for estimating the expectation to approximate 𝖤[𝑁(𝑘)²]:

def estimate_expectation_sq(k, epsilon):
    e = 0.0
    cumu_prob = 0.0
    n = 1
    while cumu_prob < 1.0 - epsilon:
        prob = pmf_float(k, n)
        old_cumu_prob = cumu_prob
        cumu_prob += prob
        e += prob * float(n * n)
        n += 1

        if old_cumu_prob == cumu_prob:
            break

    return e

Then, it’s easy enough to compute an approximation of 𝖤[𝑁(𝑘)²] − 𝖤[𝑁(𝑘)]². I did this, and I unfortunately could not find an exact formula for the results. Instead, I fiddled around with some approximations — using quadratic regression provided by NumPy as a starting point — and ended up with this:

Approximation of 𝖵𝖺𝗋(𝑁(𝑘))

Raw LaTeX
\begin{align*}
\mathrm{Var}(N(k)) &\approx \frac{3k^2 + 10k - 4}{12} \\[1ex]
                   &\simeq  \Theta{\left(k^2\right)}
\end{align*}

๐Ÿคท๐Ÿฝโ€โ™€๏ธ Sure, why not. It looks nice. And, it seems to be a pretty solid approximation! The major takeaway here is that ๐–ต๐–บ๐—‹โก(๐‘โก(๐‘˜)) is very much quadratic in ๐‘˜, which makes sense, as we expected the variance to be quite large on the basis of those plots of ๐–ฑ๐–ก๐–ข that we saw above. Note that this is very comparable to the variance of the uniform distribution, which is just (๐‘ย โˆ’ย ๐‘Ž)2โˆ•12, where [๐‘Ž,ย ๐‘] is the support of the distribution. Whewโ€ฆ thatโ€™s really saying something!

Median, mode

What about the median? The mode? Both of these can be calculated. For the median, you can do something like the estimate_expectation function above, except you don’t have to keep a running “expectation” (e) total, and you simply stop as soon as the cumulative probability is 0.5 or greater. For the mode (which is just the peak of the distribution — for a given value of 𝑘, this is the value of 𝑛 that maximises 𝖱𝖡𝖢(𝑘)(𝑛)), you can do something like:

def mode(k):
    p = 0.0
    n = 1
    while True:
        prob = pmf_float(k, n)
        if prob > p:
            p = prob
        else:
            break

        n += 1

    return n - 1

We know the shape of the distribution, so we just wait for it to start decreasing, at which point we know that we’ve already seen the mode.
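The median procedure described above (stop as soon as the cumulative probability reaches 0.5) can be sketched like so; this is my own sketch, where pmf is any PMF over 𝑛 = 1, 2, 3, …, e.g. a partial application of the pmf_float mentioned earlier:

```python
def median(pmf):
    # smallest n at which the cumulative probability is 0.5 or greater
    cumu = 0.0
    n = 1
    while cumu < 0.5:
        cumu += pmf(n)
        n += 1
    return n - 1
```

For example, median(lambda n: pmf_float(k, n)) would give the median for a particular 𝑘.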

I didn’t go any further with these two measures of central tendency, as I was very much not confident in my ability to find some pattern… In any case, here’s a table of some modes, if that makes you feel better:

mode๐‘˜
00
11
25
313
426
545
670
7103
8142
9188
10242

The ๐‘˜ values here are the absolute lowest ๐‘˜ values for which the mode is as listed.

Entropy

What about the information entropy[1] of this distribution? If you’re not already familiar with the idea of entropy, the basic idea is that less likely outcomes carry more “surprise” or “information” with them. A simple example might be if you got a single bit sent to you (say, via telegraphy) every 24 hours that indicated whether or not it was raining in a particular locale. If that locale is considered to be arid, then most of the time, you’re going to get a “not raining” message. Thus, getting a “not raining” message gives you very little information; you are not very surprised. On the other hand, on the occasions where it does rain, and you get an “it’s raining” message, that’s a lot of information & surprisal — you wouldn’t have guessed that it was raining, had you not gotten that message. If we have an idea of the probability for each outcome (for “not raining” and for “it’s raining”), then we can calculate the amount of information/surprisal that we get from a single telegraph message in general, without knowing the actual contents of the message — it’s just the expectation of the surprisal (the surprisal of each outcome, weighted by the probability of that outcome, all summed together).

What’s the “information content”[1] of a particular outcome? Well, as it turns out, it’s basically just −𝗅𝗈𝗀(𝑝), where 𝑝 is the probability of that outcome. If you’re wondering what in the gosh darned heck logarithms have to do with this, one thing that can help it make intuitive sense (if you’re like me and have computer brain) is thinking of bit strings: a bit string of length 𝑛 has 2ⁿ possible values, so to go the other way (from 2ⁿ to 𝑛), you’re going to have to undo that exponentiation… with a logarithm. In particular, if we have — for example — a uniform distribution over all possible values of the bit string, then the probability of any one outcome is 𝑝 = 2⁻ⁿ. Then, for any 𝑝, −𝗅𝗈𝗀(𝑝) = −𝗅𝗈𝗀(2⁻ⁿ) = 𝑛.
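In code, the surprisal of a single outcome is a one-liner; here it is in bits, i.e. using base-2 logarithms (my own illustration, not from rbc.py):

```python
import math


def surprisal_bits(p):
    # information content, in bits, of an outcome that has probability p
    return -math.log2(p)


# a uniformly random 8-bit string: each outcome has p = 2⁻⁸, i.e. 8 bits of surprisal
print(surprisal_bits(2 ** -8))  # 8.0
```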

In any case, we can again use the same technique that we used for estimating 𝖤[𝑁(𝑘)] and for estimating 𝖵𝖺𝗋(𝑁(𝑘)) to estimate Η(𝑁(𝑘)). Note that the “Η” in “Η(𝑁(𝑘))” is an uppercase Greek letter “eta” (U+0397), not an uppercase Latin letter “(h)aitch” (U+0048); the “Η” stands for the initial ⟨e⟩ in the word entropy.

Hereโ€™s a Python function that does just that:

import math


def estimate_entropy(k, epsilon):
    eta = 0.0
    cumu_prob = 0.0
    n = 1
    while cumu_prob < 1.0 - epsilon:
        prob = pmf_float(k, n)
        old_cumu_prob = cumu_prob
        cumu_prob += prob
        try:
            eta += prob * math.log(prob)
        except ValueError:
            # math.log(0.0) raises when prob underflows to zero;
            # treat the 0·log(0) term as 0
            pass
        n += 1

        if old_cumu_prob == cumu_prob:
            break

    return -eta

I did something similar to what I did with 𝖵𝖺𝗋(𝑁(𝑘)), except this time, I had no automated regression to help me out. I probably could have written something by hand to assist in this procedure, but I honestly wasn’t sure quite what I was looking for. So instead, I literally just tried some approximations and visually looked (on a plot) at how they compared to the estimated values of Η(𝑁(𝑘)), and then did adjustments by hoof as I saw fit. The result is this halfway-decent approximation:

Approximate formula for Η(𝑁(𝑘))

Raw LaTeX
\begin{align*}
\mathrm{H}(N(k)) &\approx \frac{9(\ln(k + 1) + \ln\ln(k + 2)) + 4}{12} \\[1ex]
                 &\simeq  \Theta(\log k)
\end{align*}

Sooo… yep. I was debating over the use of the “+ 𝗅𝗇 𝗅𝗇(𝑘 + 2)” term, and ended up deciding to use it. Regardless of this term, the result is asymptotically logarithmic in 𝑘.

Let’s have a look at how good this approximation is (the blue line is the estimated values, and the chunky dashed reddish line is the approximation):

Plot of estimated values of Η(𝑁(𝑘)) vs. approximation of Η(𝑁(𝑘)). The blue line is the estimated values, and the chunky dashed reddish line is the approximation.

Eh. It’s alright, I guess. 🤷🏽‍♀️ I’ve got two Y-axes for ease of reading; the one on the left is in nat (nats), and the one on the right is in Sh (shannons; the information-theoretic analogue of the bit).

Footnotes for “Entropy”
  1. [↑] See also: “Information content”.

I don’t think that any of that is going to help us clear Pianus…

So, there you have it. No one asked for any of it, but there it is. You’re very welcome. 😌

If, for some godforsaken reason, this actually interested you (seems highly unlikely to me, but then again, what do I know about probability?), I’ll just note that there’s more that can be done. 𝖱𝖡𝖢 could use a better formula, maybe. An exact formula for 𝖵𝖺𝗋(𝑁(𝑘)) and/or Η(𝑁(𝑘)) — or better approximations — would be nice. Maybe a formula for the median and/or mode… Plus, you can consider other related distributions. For example, how is the value of 𝑑 distributed, given a value of both 𝑘 and 𝑛?

Have fun! :3

Habyte maketh no monke, ne vvearynge of gylte ſpurres maketh no knyght

With this 2022 Anniversary–Summer event, I — as usual — took a look at the unique hairs, faces, & NX equipment offered by the new event. The way that it usually goes is:

Hmm, none of these faces look like MapleStory faces. I guess these are all from, like, MapleStory version three thousand or something…? That’s too bad… To be honest, I don’t really like any of them.

None of these hairs really look good to me either. Well, I admit that that one seems like it could be pretty cute, but it’s really not my style. Plus, what’s with all of the hairs looking super washed out? They all look white no matter what colour they are…

At least some of these NX equips look neato. Maybe I’ll get a few… maybe do a few wardrobe changes on a character here or there…

Well, this event actually came with two faces that I did kinda like: the Charming Face and the Calm Lucid Face (specifically, I had in mind the female versions, although the male versions are only subtly different). And furthermore, this event came with a hair that I really like: the Balayage Hair (again, specifically the female version; in this case, the male version is completely different)! So, I thought hard about it, and I decided that I wanted to give my woodsmaster capreolina a makeover!! After a lot of fiddling around in maples.im and getting some useful opinions from Lv1Crook (Level1Crook, Lvl1Crook, xXCrookXx) and Harlez (VigiI), I came up with a look, and grinded out Summer Points to get it…:

👀

capreolina’s new look

Cute, right??? For this image, I’ve specifically chosen the Kandiva to match the red eyes, shirt, facepaint, and the kinda reddish highlights.

I had a kinda “left over” look from my experimentations that led to capreolina’s new look, and I was really satisfied with how capreolina’s makeover turned out. So, I was emboldened to do the same for my pure STR bishop cervid…!:

cervid’s new look

YAY I LOOK ADORABLE LOL

I originally was going to use a different cape, and then realised that the one that I had used in maples.im (which I thought that I remembered checking for in-game) was only purchasable with donor points… and I didn’t have any. Oh well. This cape is cool too.

I think that both of these looks do a pretty good job of preserving the original “vibes” that I was going for with these two characters’ outfits: cervid still has the dark “goth” vibe, and capreolina still looks like she either lives in a tree, or could chop a tree down with her bare hands. However, in another sense, the tables have turned: capre now looks gayer than cervid, instead of the other way around!! In any case, these are my original two characters, so I think that it makes sense for them to both have the same hair style :P

Some other people have agreed that the new looks are cute:

i forgot how cute capre was

However, the reception has not been entirely positive…:

bring back REAL DEER

THAT WAS FOR DEER

Rest assured that I am the real deer, and I may dress myself however I please!! So there!!!

What hath Dunas wrought!

Over in Neo Tokyo, on my darksterity knight rusa, I did some Bergamoting with marksman xBowtjuhNL, paladin aWildLynux, and bishop Pebbytan:

O Dunas…

aWildLynux, xBowtjuhNL, Pebbytan, & rusa vs. Bergamot

Afterwards, I trioed D’una with them (minus aWildLynux):

Pebby, rusa, & Ramon trioing D’una

This was rough. D’una is already a massive pain in the ass for me, as a melee attacker, thanks to his insistence on constantly dispelling (DPing) anyone within melee range, and frequently stunning, super-knockbacking, Darkness’ing, etc.… And zerking doesn’t exactly make it easier to survive DR consistently…! But it’s also not so easy for bishops either, despite the fact that they are mages and are thus unaffected (barring STR mages & the like) by physical damage reflection (physical DR). Bishops have to get close enough to Dispel D’una’s pesky buffs (particularly the WDEF ones…), and yet this puts them within D’una’s DP range, thus endangering them by possibly DPing their Magic Guard. As a result of all of this, our first run was a failure: both Pebby & I died. 😭 We did better on the second time around, though :]

Did you know that it’s possible to get up into the branches of the tree at the Forest of Oblivion (the map that connects the Mushroom Shrine to Kamuna)?:

toot toot

Yep! And it requires ≥123% JUMP — and thus exactly 123% JUMP, as that is the maximum possible JUMP.

I also did Bergamot with bowmaster xRook (misandrist), who had never done Berga before! We were accompanied by fellow bowmaster OliverQueen and shadower Harlez (VigiI). During the beginning of the first body of the first run, we had xRook crouching and watching so that they would see what Bergamot’s DP animation actually looks like:

OliverQueen, Harlez, & rusa helping xRook with their first-ever Bergamot run

Getting DP’d is potentially a big hazard for xRook, as they rely on my HB to survive. Of course, any time that Bergamot does the laser DP animation, I’m always watching very intently to see if anyone does get hit by the smol laser (the one that actually DPs), so that I can quickly rebuff them — which often involves quickly mounting and running to the left, so that I’m within HB distance of them after they get knocked back by the laser.

With that, xRook got the hang of it, and was able to survive the whole run, without the assistance of any bishops!:

xRook, rusa, Harlez, & OliverQueen vs. Bergamot

I also duoed Berga, with xBowtjuhNL…:

xBowtjuhNL & rusa duoing Bergamot

It actually wasn’t so bad; we didn’t have any serious hitches, and the run took ≈55 minutes in total. The drops didn’t quite make it worth it, but I guess they were okay — Shield for WATK 70% ain’t half bad (we were both 5⧸5 on cards already):

xBowtjuhNL & rusa duoed Bergamot

I also did a trio Nibelung run with xBowtjuhNL and Harlez. xBowtjuhNL said that this was going to be his last run before going to bed, and indeed, he fell asleep during the run…:

Sleepty Ramon…

Eventually, as you can see, I was able to wake him up by calling him on Discord™! You can also see that I was having some pretty nasty networking issues at the time…

In the end, though, it was a great run! I levelled up to 173~!:

rusa hits level 173~!

And Nib was kind enough to drop an EP for us!!:

EP

Plus, I got my first Nib card :]

Nibelung card get~!

With the splits from selling the EP — which I got from Harlez in pCoins, rather than in mesos — I was able to make another Chaos Scroll, so that I could give another shot to that 4 WATK, 4 slot PGC that I made in the previous entry…:

0 WATK PGC……

Oh. Well, that’s a 0 WATK PGC. Maybe next time…… 😢

Swiftness is the essence of warfare

I have procrastinated long enough on getting a Silver Mane for my woodsmaster capreolina. The Silver Mane quest ain’t easy — and it ain’t cheap, either — but the only pre-requisites are being level ≥120[1], and already having the Hog, both of which capreolina has satisfied for… well, like 22 levels now. So, I headed out into the wilderness of the Minar Forest with fellow fourth-job archer Level1Crook (Lvl1Crook, xXCrookXx) to collect the 600 Rexton Leathers, 400 Wooden Shoulder Pads, and 400 Skull Shoulder Pads that we each (yes, that’s 2 800 big fat Leafre ETC items in total!) needed for our respective Silver Mane quests:

wah

Oh, I died. How embarrassing…

Well, I was already pretty close to levelling before we started, so it was no surprise when I levelled mid-grind!!:

capreolina hits level 143~!

After getting our 1.2k Rexton Leathers, we headed to The Hidden Dragon Tomb II for the rest of the ETCs:

capreolina vs. Cornians

i dyed

I dyed… again… how embarrassing… Now, I’m not going to say exactly how many times that I died, but I will say that whoever runs the “Cash Shop” was very happy about their Safety Charm sales on that day…

Not gonna lie, 2.8k big Leafre ETCs is way too fucking many. We almost called it a day after we had just a few hundred ETCs left to go, but we agreed to just trudge on and get the entire damn thing over with. The result was that we both suffered mild brain rot that we may, or may not, ever recover from. But we did it. We got the things.

Of course, there was also the issue of getting 50M(!) mesos. For mine, I decided to just steal transfer 50M mesos off of my vicloc dagger spearwoman d34r. And with that, we were now officially Fast As Fuck™:

FAST AS FUCK

NYOOOOOOOOOOOOM…

Footnotes for “Swiftness is the essence of warfare”

  1. [↑] As of this writing, the Silver Mane quest in MapleLegends is bugged, so this is not strictly true for the MapleLegends implementation. As far as I know, the MapleLegends implementation (again, as of this writing) has no level requirement at all, and is instead gated on the PC’s class: namely, they must be a fourth job class, or a beginner.

A clock; a cursèd tree; a cryptid; a character of evil; a captain.

I did a few fun all-odd Capt. Lat runs on my darksterity knight rusa with STRginner extraordinaire Taima (Girlmoder, Boymoder, Nyanners, Tacgnol, Hanyou), helping to HB permabeginners Cortical (GishGallop, Medulla, SussyBaka, RedNurJazlan, CokeZeroPill) and uayua (2sus4u, hueso, shadowban, tb303):

A grandfather clock; a Guatemalan statue; a great foot; a god; a ghost pilot.

Taima, rusa, & Cortical vs. Capt. Latanica

Taima, rusa, & uayua vs. Capt. Lat

Welp, we had some deaths, but that’s to be expected when you’re constantly on the brink of being two-shot by a stray ranged attack from the ol’ Captain.

I also did a single duo Peppered Lattice run on my I/L archmagelet cervine, with marksman Level1Crook (Lanius, Lvl1Crook, xXCrookXx), in which I did some… I don’t know, pretty okay CL damage? To Papal Lettuce? Next time, I’ll have to use some actual MATK potions 😅

cervine & Level1Crook vs. Papulatus

That being said, the amount of AVOID that I get from being a magelet really helps when fighting bosses like this; I pretty much stayed on the top platforms for the entire run, so I never got dispelled (DP’d) even once!

We also teamed up with PoultryWoman (Yoshis, Furbs, SwordFurb) and SaviorSword to do some Ravs (with me as my pure STR bishop cervid)…:

cervid, Taima, Level1Crook, PoultryWoman, & SaviorSword vs. Ravana

…And some Peppy Laters (with me as my woodsmaster capreolina):

capreolina, Taima, Level1Crook, PoultryWoman, & SaviorSword vs. Papulatus

And, with marksman xBowtjuhNL and shadower Harlez (VigiI), we did some fun Poppy Lanterns…:

Level1Crook, cervine, xBowtjuhNL, Harlez, & Taima vs. Papu

…During which, Taima took advantage of her @roll 100 for loot order being the highest by… looting the one “““dilk””” that dropped:

roll for loot order

🤷🏽‍♀️

And, finally, when swashbuckler Yoshis hit the big level 115, it was time for her to try Ravving, too (featuring F/P archmage 2sus4u)!:

2sus4u, Level1Crook, Taima, rusa, & Yoshis vs. Rav

Level1Crook has LITERALLY ONE (1) JOB

SMH my head… I cannot believe this… How could Level1Crook do this to us?? He was our only MCP-opener……

On cervid, I also helped out xBowtjuhNL, OliverQueen, BeMyRavebae, BimboBucc, and atsumoo sell some AFK zhelms!:

cervid helping OliverQueen, BeMyRavebae, BimboBucc, atsumoo, & xBowtjuhNL sell AFK zhelms

It was a little weird not being the only active bishop in the party, but it worked out well enough:

you upheld your vows

I also did some Zakking on capreolina, where — as usual — I got some weird looks when I bust out the ol’ PSB for the arms stage:

si plz

I also did an… interesting Zak run on rusa, wherein we lost our bishop and bowmaster quite early on during the arms stage, leaving just me, Harlez, nightlord Peeko, and paladin SirAmik… And then SirAmik died during the first body. So, yeah, it was kinda a trio Zak without even so much as SE:

rusa gigazerking @ trio Zak

So, yeah, that was pretty brutal. And by the end of it, only one third (⅓) of our original party remained 😆:

SirAmik & Harlez ded

And still, no pouch… 🙄

Acer

Over on Maple Island, I was killing some Zombie Lupins on my islander ozotoceros:

ozotoceros vs. Zombie Lupins

…Wait, what? Oh, right, that’s BRPQ (Boss Rush Party Quest; BPQ), not Maple Island. Naturally, the Faust inside of BRPQ spawns zlupins just like the ordinary Faust does, so it’s a favourite pastime of islanders during every summer event to try to squeeze a Red Whip out of one of ’em…

Unfortunately, the Red Whip drop rate is not great, so I didn’t get any. :P But I did get a fancy Dark Pilfer! And some Cursed Dolls…

In any case, I did amass enough anniversary points to purchase myself a fancy-dancy claw: the Sweet Fork Cake!!:

ozotoceros gets a Sweet Fork Cake!

And, after a sufficient amount of prayer directed at the Maple gods, I was blessed with a set of Maple Throwing-Stars that I got from a Maple Equipment Box that I bought with more anniversary points!!!:

Maple Stars get!!!

ozo tests out her new Maple Stars

For those not aware, Maple Throwing-Stars are extremely valuable on Maple Island, which may come as a surprise given that they are virtually worthless elsewhere. The extra +6 WATK over what you’d get from Subis makes a huge difference (as your total WATK is naturally low anyways), and it’s not like there’s any way to get Ilbis on Maple Island…

Oh, and I hit level 46~!!:

ozotoceros hits level 46~!

The raffle loves giving me useless equipment like Maple Guns, Maple Crows, and most of all: Red Bandanas.

ozo does not need Red Bandanas…

🤦🏽‍♀️

And, of course, you know I finished all five weeks of DJ JM’s quests! So I walked away with my chair of choice…

ozotoceros in the Pia chair

I’ve seen some people in alliance chat hating on my homegirl Pia, and all I have to say is that Pia is the O.G. Pia has been in Henesys Park since before y’all were even old enough to operate a keyboard, so put some damn respect on her name!!

A squamate appears

As teased in the previous diary entry (which I started writing before I actually turned in the corresponding quest), I got the next Monster Book Ring tier for my darksterity knight rusa: T4!

rusa gets the T4 ring!

And just in time for another Horntail run!:

HT with runYoShit and Brokeen

Our nightlord Brokeen accidentally entered the main body map just a fraction of a second earlier than shadower Harlez (VigiI) — our intended seduce target — which is why you see him dead in the above image.

Unfortunately for me, I got what was coming to me for screenshotting someone else’s death: a later screenshot of my own crashed my client while all three heads were still alive. ☹️

You can see corsair runYoShit in the screenshot above, which is notable because runYoShit was relying on my HB to survive, so my crashing meant that they didn’t get to continue the run either!! 😭

In spite of this, I nevertheless signed up for the next run that xBowtjuhNL hosted, mistakenly believing (not without reason) that the screenshotting was what had caused the crash — after all, the crash happened the instant that I hit the Prt Sc key, which activates the client’s built-in screenshotting ability (and which regrettably cannot be unbound from said key):

dont dc rusa dont screenshot

So, true to my word, I did not screenshot. And, in spite of this, I crashed at the beginning of the main body fight anyways…:

Sitting with Pebbytan after crashing

This really took the wind out of my sails, as I was not only saddened that I couldn’t HT anymore (I quite like HTing, and it’s good EXP to boot!), but also ashamed that I had let down runYoShit on more than one occasion as a result of my own technical issues effectively killing them. I posted about these technical issues (and some related ones) associated with the 2022 Anni–Summer event patch — issues that I could confirm were not unique to me, i.e. I was getting multiple identical experiential reports from other players — in a MapleLegends forum post entitled “Client crashing/stability/UB issues since Anni–Summer event patch”. Unfortunately, w.r.t. the crashing issues specifically, the client appears to be unwilling to emit crash logs, so the reports that I give are purely phenomenological.

I’ve even desperately attempted to run the game client natively on Windows™ via multi-booting, just for the purpose of not crashing in crash-y situations (read: Horntail). I know, I know, Bill G8s owns my computer now and probably knows all of my deepest darkest secrets, BUT I REALLY WANT TO HT, OKAY‽‽‽ Plus, I always make sure to wash my hooves after using Windows™. :] Unfortunately, it seems that running the MapleLegends client on Windows™ is even crashier than running it in Wine on Linux like I normally do — I don’t even have to go into HT (nor do anything special, really…) to crash! But I’m not terribly surprised, considering that running ML in Wine has always had a much better track record — for me, at least — than it’s had for those running the game natively.

Dark doubts between the promise & event

As the 2022 Anniversary–Summer Event has continued, I have continued BRPQing — including a pair of BRPQs that I did on my vicloc dagger spearwoman d34r, alongside STRginner Cortical (GishGallop, Medulla, SussyBaka, RedNurJazlan, CokeZeroPill). We fought Franky, everyone’s favourite PQ boss:

d34r & Cortical vs. Franky
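Incidentally, BRPQ party recruitment is just the tolerance relation ℛ_k from the top of this entry, with k = 30. A minimal sketch in Python (the levels used below are made-up examples, not anyone’s actual characters):

```python
def indiscernible(l1: int, l2: int, k: int = 30) -> bool:
    """The tolerance relation R_k: |l1 - l2| <= k, i.e. the two PCs are
    "within k levels" of one another (k = 30 for BRPQ, 20 for point farming)."""
    return abs(l1 - l2) <= k

# Reflexive and symmetric, but *not* transitive -- tolerances accumulate:
assert indiscernible(30, 55) and indiscernible(55, 80)  # both pairs are fine
assert not indiscernible(30, 80)                        # |30 - 80| = 50 > 30
```

This is exactly the intransitivity from the intro: a level 30 PC and a level 80 PC are each indiscernible from the same level 55 PC, yet discernible from one another.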

And I then realised my limitations when our party arrived at Manon, and I could scarcely hit the damn thing — it’s 19 levels higher than me, and has 39 AVOID!:

d34r & Cortical vs. Manon

Good thing Manon isn’t real and Leafre is a hoax perpetuated by so-called “fourth-jobbers”, so I don’t have to worry about it outside of BRPQ.

On my darksterity knight rusa, I did enough BRPQs and Maple Syrup’d enough Maple Leaves to make a pretty awesome one!:

rusa’s Maple Leaf

8 STR and 3 DEX on a face accessory? Yell heah!!

I also ended up doing a number of solo BRPQs, as a result of underwhelming traffic at BRPQ around the times that I was logging on. Like on my I/L archmagelet cervine:

cervine soloing BRPQ

And on my daggermit alces:

alces soloing BRPQ

Unfortunately, both of them ended up wasting most of their points on failed Maple Leaves… But cervine still has a usable clean one, so that’s good, at least.

Oh, and I did manage to get just one (1) good raffle between my two viclockers: a Rabbit Lamp Chair!!:

d34r gets a Rabbit Lamp Chair!!

Normally I really couldn’t care less about chairs (other than their value on the market when I rid myself of them), but I do enjoy getting chairs on my area-restricted characters, i.e. my islander and my viclockers. :]

็ปๅฏ†

็ปๅฏ†

An ongoing investigation into ██████ █████████ by the name of ███████ found ██████ █████ ████ ██████████████ in addition to █████████ activity near the 101st floor of the Eos Tower, where the subject was reported █████████ ██████████████ ████, although █████████ ████████:

Subject spotted at Eos Tower 101st floor

█████████ ██████████ ███ █████████ ██ and — it is suspected — █████████ ██████. ██████ agents █████████ and ███████ were reported dead after ████ █████████ █████████████ ███ and attempting reconnaissance on the subject:

Savage Blows in the dark

It should be reiterated that the subject is armed & dangerous, and only ████████ agents with high levels of clearance, training, & expertise are allowed on this case. It should also be noted that █████ ███ █████████████ ████ ████████, with ████████ ██████ and access to highly advanced technology, including ████████ █████ and ████ ████████ ██ appears to be some kind of temporary invincibility, pending further investigation:

Temporary invincibility(?)

Concerningly, the subject █████ ████████ ██████ ███, gaining access to the highly restricted █████████ ██████ ██████████ Eos Tower spire, where ██████ ██████ killing ████████ and ███████ (nippled clockwhale specimen):

Subject whiting Alishar

CURRENT STATUS: TOP PRIORITY. ALL DISCRETIONARY RESOURCES ARE TO BE DEDICATED TO THIS CASE.

R. t. tarandus

As usual, it’s time for a quick episode of everyone’s favourite series involving a pugilist whose name starts with a ⟨t⟩ and ends with an ⟨s⟩: Questing With tara™!

This time around, I headed to Singapore to do some more Capt. Latanica prequests, now that I was high enough level to continue them. I got the necessary Selkie Jr. and Slimy kills at MP3…:

Questing With tara

tara @ MP3

…And the necessary Mr. Anchor kills at GS6:

tara @ GS6

With that, I was now at the final part of the questline: to slay Capt. Latanica and bring back his Black Essence. Knowing that I wouldn’t have accuracy issues due to Capt. Latanica’s abysmally low AVOID, I decided to just give it a shot and try soloing the Captain:

tara vs. Capt. Lat

By the end of it, I had used three Unripe Onyx Apples and one Maple Music Fever, and… well, I decided against wasting more resources & time getting irritated at how little damage I was doing to Lat. Could I have finished? Sure. There’s no timer, so I could just eat all of my resources and chip away endlessly at Lat until he finally submitted to my unending patience. But I wasn’t really willing to do this unless I brought some full-blown Onyx Apples with me, so I bailed. I had figured that it would be easier, given that Lat has “only” 700 WDEF, but I neglected to take into account Capt. Lat’s 37-level advantage. 🤦🏽‍♀️

Of course, not only was this a waste of the WATK potions that I used, but it was also a waste of the White Essence that I got from “The great secret reveals”. But that was okay, because I needed to do The Lost White Essence anyways — for the quest completion, if nothing else. So I grinded some more GS6 to finish that quest, and got my White Essence back.

The problem was that, now that I’d done The Lost White Essence, actually successfully soloing Capt. Lat would give me enough EXP to level up! And how tf am I supposed to wear my INTelligent pyjamas if I’m soloing the Captain?? 🙄 So I decided on the “level up, and then come back” route, even though it would mean soloing Capt. Lat at level 64 at the lowest, rather than 63 as I had originally planned (63 being the absolute minimum level to have the Capt. Lat kill quest active). So, it’s not as cool, but whatever. I’m still slightly torn on whether or not I should come back with some Onyx Apples at level 64 to solo this big guy. It’s kind of stupid & a waste, but “doing conventionally inadvisable things because I can” is kind of a running theme of my diary, so, you know… Let me know if you have an opinion on the matter…

In any case, for the whole levelling-up thing, I decided to start on the MLG[1] quests. So I headed to Mu Lung to punch some STDs (not to be confused with STDs) for No Gong’s Teaching:

tara vs. STDs

And, welp, that was enough for level 64~!:

tara hits level 64~!

I also took a detour to Victoria Island to do some Evil Eye slaying for the latter part (the 999-kill quest) of POLLUTED! ⟨1—Evil Eye⟩, alongside Vicloc I/L gish VigiI (Harlez):

VigiI & tarandus @ CoEEII

Tune in next time, to Questing With tara™, to learn the fate of the captain known as “Latanica”……

Footnotes for “R. t. tarandus”

  1. [↑] “Mu Lung Garden”, the region that encompasses all of the Mu Lung and Herb Town maps.

The Room of the Guilty

And, finally, I did one (1) OPQ. On my OPQ mule, sets. With DEXginner inhale (DexBlade, vork) and disgraced[1] former Viclocker Lanius (Level1Crook, Lvl1Crook, xXCrookXx). After a great deal of struggle within the Tower of the Goddess, we killed Father Pixel, with the assistance of an aubergine dressed as a human by the name of AgentPewPew:

inhale, Lanius, & sets vs. Papa Pixie

The random folx (including AgentPewPew) that we’d partied with were obsessing over my damage and equipment (which is almost entirely borrowed from my other characters…), but I was more interested in inhale’s damage (see “Comparing DEXginners” in the previous diary entry).

In any case, it was in this PQ that inhale hit the big level 70~!! Oh — and I also levelled up, to 67 :]

inhale hits level 70~!! (And sets 67)

…And then I passed tf out. 🛏️😴💤

Footnotes for “The Room of the Guilty”

  1. [↑] (Not actually disgraced; the character was always planned to be a HS mule.)

(…cnvpstdf…)

cnvpstdf

i can’t wait to boom my feet

sortsโ€™s jorts

This diary entry is dedicated to Pebbytan, who was permanently banned from MapleLegends without possibility of appeal, without having broken any of the rules of the server, merely through guilt by association. ❤️