Archive for the ‘Math’ Category

Wikipedia is a madhouse. I get lost in articles every time I go there, and today was no exception. While doing a bit of brushing up for writing this, I got sidetracked into reading up on Non-standard Analysis and Hyperreal Numbers. I'd like to note that I think there's a set-theoretical definition of an infinite number.

Assume some definition of sets. I'm not going to get into which particular system for defining 'set' is best; I think this applies to all of them. The first thing I need is a concept of the Natural Numbers, N. To provide this, I don't think I'll use the standard approach of defining each natural number as a particular set. That seems to me to be wholly circular. A natural number seems, intuitively, to describe a relation between sets, and therefore would best be described as a function. The set of natural numbers would be the set of values taken by a size function whose domain is all sets. It seems apparent that a value is not a set, and therefore the set of natural numbers cannot be a set of sets. That, for me, calls into question the conventional definition.

How, then, can we define a natural number in order to describe the set of all natural numbers? I would suggest we do so by counting, which is the main purpose of natural numbers anyway. One simple way to proceed is this: we construct a new set, N, whose members denote different sizes of sets. To do so, we start with the member 0, which denotes the size of the empty set.

We know that for any non-empty set, s, there exists at least one subset whose only subsets are the empty set and itself. From the set we are examining, remove one such subset and add to our description of N a new member: 1. If the set left after the removal is empty, the function Size(s) maps to 1. Otherwise, remove another such subset and add a new member to N: 11, so that Size(s) maps to 11. Repeat this procedure, each time removing such a subset and adding a new member to N with one more 1 attached, until all that remains is the empty set. The final addition to N is the size of the set s.
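Here's a minimal sketch of that procedure in Python, assuming frozensets stand in for the sets being measured and strings of 1s stand in for the generated members; the function name size() and its internals are my own illustration, not part of the argument.

    def size(s):
        """Return the symbol ('0', '1', '11', ...) denoting the size of set s."""
        if not s:
            return "0"            # the member 0 denotes the size of the empty set
        remaining = set(s)
        members = {"0"}           # the members of N generated so far
        symbol = ""
        while remaining:
            remaining.pop()       # remove a subset whose only subsets are {} and itself
            symbol += "1"         # add a new member with one more 1 attached
            members.add(symbol)
        return symbol             # the final addition is the size of s

    print(size(frozenset({"a", "b", "c"})))   # prints 111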

This amounts to creating a set N whose members are symbolic constructions assembled according to a simple rule. The generated set is not, I'd argue, the set of natural numbers. It is merely a set whose construction is readily understandable to us and about whose members we can easily deduce a great deal. Having generated this set of symbols, we may then use the above algorithm to define a function, Size(), which maps from a subset of {sets} to N. I'd say that the set of Natural Numbers is the set of all such mappings.

I say Size() maps from a subset of {sets} to N in order to avoid including infinite sets. Specifically, Size(N) is not an element of N. This is demonstrable by noting that we generate a new element of N for every element we remove. Also of note: if n is the subset of N generated while computing Size(s), then Size(n) ≠ Size(s). That's because of the inclusion of the 0 symbol, which provides a size for the empty set, so n has one more member than s has elements.

To return to the problem of Size(N), which is really the question of the size of the set of natural numbers itself, we run into cardinality issues. What's interesting is that there seems to be no way to generate a new set, say R, such that a size function mapping from a subset of {sets} to R assigns a member of R to N and also assigns a member of R to R itself. That's pretty much the essence of Cantor's Theorem.
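For reference, Cantor's Theorem states:

    \text{For every set } S:\quad |S| < |\mathcal{P}(S)| \quad\text{(there is no surjection } f : S \to \mathcal{P}(S)\text{).}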

And finally, I wanted to touch on the Second Derivative, which has come up a lot in discussions about the current economic downturn. The equity markets are a pretty common proxy for the health of the economy, so let's examine the second derivative in terms of equities. First, what's being discussed is really standard calculus/physics. The quantity being analyzed is position as a function of time: p = f(t), where p is position and t is time. From there, the first derivative is velocity, the change in position per change in time: v = f'(t). The second derivative is acceleration, the change in velocity per change in time: a = f''(t).

A common recession is often described as "U" shaped, which is ideally described by a parabola: p = f(t) = mt^2 + nt + c, where m, n, and c are constants that morph the shape of the U. This is known as a quadratic equation, and if you took Algebra 2, you probably spent a lot of time working with it. If you took physics, you likely worked on things like constant acceleration from gravity and may recall that this produces a quadratic equation.
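A quick symbolic check, purely as illustration (using sympy), shows what differentiating that quadratic gives:

    # Illustrative check: differentiate p = m*t^2 + n*t + c twice.
    from sympy import symbols, diff

    t, m, n, c = symbols("t m n c")
    p = m * t**2 + n * t + c
    v = diff(p, t)      # velocity: 2*m*t + n
    a = diff(v, t)      # acceleration: 2*m, a constant
    print(v, a)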

Well, the second derivative of a quadratic is a constant.  Since the economists are hellbent on finding a change in the second derivative, from positive to negative, I have to assume they are not, in fact, talking about a quadratic function describing change in value of equity indices.

By discussing the second derivative, I get the nasty feeling that they're bandying about math terms that don't make as much sense for their data as we might wish. Perhaps they are discussing our having had an L-shaped recovery, in which case acceleration is something like 1/t^3. Think of that like a skateboarder on a half-pipe: the component of their motion that is affected by acceleration declines as time passes, which can be seen in the tendency of p towards 0 as t -> infinity.

I feel like this kind of technical examination fails to properly capture the manner of change in an economy. Specifically, what if the value of the equity market is driven by multiple variables? In that case, examining the value of the second derivative, while it will provide insight into changes in velocity, may not give us the full picture.

In addition, velocity may be changed by a variety of factors. An externality that imparts an initial velocity against a constant opposing acceleration, say throwing a ball up into the air, will show an initial reduction in velocity to 0 followed by an increase in the absolute value of velocity, while acceleration remains constant throughout. In the L-shaped case, acceleration declines towards 0, but remains positive. Velocity is also positive, but tends towards 0.

So I'm left to wonder about the relevance of this "second derivative" beyond a fancy way of pointing at the much more mainstream, and therefore vague, acceleration.

Read Full Post »

Discontinuous Demand

Whenever I encounter economics, I am presented with the standard supply-demand curve picture. These relatively abstract pictures always depict a downward-sloping demand curve that is continuous. Now, from a naive perspective I can roll with that: it's certainly got intuitive appeal that, as prices change, the amount of that good a buyer is willing to purchase also changes. We're generally content, in addition, to say that as price declines, more of the good will be purchased, and vice versa.

I admit, I've never been totally comfortable with either curve. I've always been a bit wary of simplicity, particularly with regard to something that seems to have so many input variables. While the aggregate demand curve may very well be of this kind, since aggregation across disparate buyers and sheer scale tend to smooth out kinks, is it reasonable to make this assumption for the total demand for a single good or group of goods?

More particularly, the demand curve seems to me to be really problematic from a microeconomics perspective. To explain that, let me talk about prices. Prices are usually said to be set on the basis of supply vs. demand, with markets tending towards the equilibrium market-clearing price. Since the market will always move towards the equilibrium price, we can just discuss the price of a good at that point, or discuss how long it will take to exit a disequilibrium state. I know there are other models and more sophisticated analyses, but I think this sums up the basic neoclassical micro system.

Well, I'd like to talk about that a bit. To understand microeconomics, we want to look at the decision-making process of purchasers when presented with a good or set of goods. Let's start with a simple case: an economy with one single good and many buyers. How do we determine the price this single good will fetch? To find that, we need to look at the number of currency units (whatever unit price is denominated in) each potential purchaser has.

Let’s assume some set of purchasers, where each purchaser has a finite, arbitrary amount of currency units (CU). Only one purchaser may purchase the good, all purchasers desire the good, and all purchasers wish to pay the lowest possible cost. We’ll assume the seller is apathetic towards which purchaser obtains the good, desires to sell the good, and wishes to take the highest possible price for it. We’re basically talking about rational, profit-maximizing agents here. Under these conditions, how do we discover the price that will be paid?

To simplify discussion, here’s a new term: a wallet is the set of currency units a given purchaser has. So, for our discussion, there exists a set of wallets which has a one-to-one mapping onto the set of purchasers. The highest possible price is the same as the value of the largest wallet. Is that the price paid? No. The holder of the largest wallet need not pay out the entirety of their wallet; they need only pay out 1 CU more than the second largest wallet.
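As a toy sketch of that rule (the wallet sizes and the function name below are made up for illustration):

    def clearing_price(wallets):
        """Price paid for a single good: 1 CU more than the second largest wallet."""
        return sorted(wallets, reverse=True)[1] + 1

    print(clearing_price([40, 75, 120, 60]))   # 76: the 120 CU holder need only beat 75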

A couple of other assumptions come up here. This result is what an auction eventually produces, and there are easily imaginable scenarios where the price paid is larger than this one. What happens if things happen at separate times, under circumstances of limited information? What if no one knows the size of anyone else's wallet? What if, additionally, people come to the seller at different times to attempt to make the purchase? Then another purchaser might approach the seller and make some offer for the good. The highest offer they could make is the size of their wallet. If the seller purchased the good at some point in the past, then, being a profit maximizer, they'll only accept a price higher than what they paid. They are equally content with either the good or equivalent cash, so they can't be convinced to give it up for less than or equal to what they paid.

It is at this point that a certain amount of uncertainty enters the price picture. Should another purchaser approach the seller with a wallet larger than what the seller paid, we know the highest possible price and the lowest possible price, but we cannot be entirely sure what price will be set between the two. We know that if the seller imagines they might sell it elsewhere for more, they will hold out to maximize profit. As information trickles in to the seller, in the form of bids placed, they will begin to form an idea of the best possible price they can receive. If they were not the largest wallet holder, they will eventually be offered a bid larger than what they paid, and will sell. Eventually, though, the largest wallet holder, assuming they can find the current seller, will buy the good, and that will set the price.

Something of note: each time a purchase is made, the composition of at least two wallets changes. Let's say the largest wallet holder buys from the next largest. Well, according to the discussion above, it's likely the price was 1 CU above the second largest total asset value (good price + wallet). The second largest wallet holder now has (good price + wallet + 1 CU). This should be kept in mind.

Well, what if we have two goods of the same kind? Let's go with the auction-based price: that's the easiest to determine and helps establish the scope. The highest price paid will be 1 CU above the size of the third largest wallet, by the same logic. Note that this could be a large jump, depending on the distribution of wallet sizes. This is also a discrete jump in price, rather than a continuous curve. Adding in a third good would produce another jump, and so on. The change in price between different numbers of goods would not necessarily describe a "curve". In fact, if we could sell partial units, I am not certain that prices would "bend" smoothly around whole units; I suspect they would change sharply instead.
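Extending the same toy rule to k identical goods, again with made-up wallet sizes, makes those jumps visible:

    def price_with_k_goods(wallets, k):
        """With k identical goods, winners need only beat the (k+1)-th largest wallet."""
        return sorted(wallets, reverse=True)[k] + 1

    wallets = [120, 75, 60, 40, 5]
    for k in range(1, 5):
        print(k, price_with_k_goods(wallets, k))   # prices: 76, 61, 41, 6 -- discrete jumps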

The above holds particularly true for an auction of a consumed good. If the purchaser consumes the good rather than reselling it, we can simply close the market for that good upon the initial purchase. That allows us to introduce the vague concept of consumption utility, or desirability: purchasers can desire a good in varying amounts. In our little economy of a single good, varying desires didn't compete; it really didn't matter how much people wanted the good, since it was the only thing to spend money on. But if there were different goods on the market, different purchasers would value them differently.

Let's consider the case of two different goods in the same auction market. The desirability of a given good would help set its price. We can say that every wallet holder is willing to apportion their wallet according to how much they desire each good. That sets the largest amount they're willing to pay for each good while both are available. Once that is done, the price of the first good auctioned will depend on the second largest amount set aside for that good (+1 CU). Before any purchase is made, we can establish the price that will be paid if either is put up first. However, as soon as one is sold, the price of the other may change. That is because, once the purchase of one is made, we revert back to using the total size of the remaining wallets to determine the price of the other. Since the amounts set aside based on desirability may differ proportionately from the total sizes of wallets, we now get a different relative wallet distribution.

Look at it this way. Bob, Susan, and Emily are having just such an auction for a peach and a pear. Bob has $100 and really likes peaches but hates pears: he's willing to put down $90 for the peach and only $10 for the pear. Susan has $50 and loves both equally, so she'll spend up to $25 on each. Emily has $30 and only wants the pear. If we auction the peach first, Bob buys it for $26 ($1 more than Susan's limit, since Emily doesn't want it). When the pear comes up next, Bob buys that too: he has $74 left, more than either of the other two. Susan can only bid $50 for it, beating Emily's $30 bid, and Bob beats Susan with a $51 bid, taking both the peach and the pear.

But what if the pear were put up for auction first? Emily would buy it with a $26 bid ($1 more than Susan's $25 limit), since Bob will only pay $10 for it. Bob would then buy the peach for $51, needing only to beat Susan's $50 wallet. We've completely reversed the prices, and all because of the purchase order. More particularly, we can see that the price set on one good impacts the price of another, depending on the amounts purchasers are willing to pay. We can also see that even a small change in the price of one good can have a disproportionately large effect on the prices of other goods.
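Here's a rough simulation of those two orderings under the rules above; the data structures and function names are my own, and the apportioned limits are the ones from the example:

    # Toy simulation of the peach/pear example. Apportioned limits govern the
    # first sale; the remaining good is then priced off full remaining wallets.
    wallets = {"Bob": 100, "Susan": 50, "Emily": 30}
    limits = {"peach": {"Bob": 90, "Susan": 25, "Emily": 0},
              "pear":  {"Bob": 10, "Susan": 25, "Emily": 30}}

    def auction(bids):
        """Highest bidder wins and pays $1 more than the runner-up's limit."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[0][0], ranked[1][1] + 1

    def run(first, second):
        cash = dict(wallets)
        winner, price = auction(limits[first])   # apportioned limits apply
        cash[winner] -= price
        print(f"{winner} buys the {first} for ${price}")
        winner, price = auction(cash)            # full remaining wallets apply
        print(f"{winner} buys the {second} for ${price}")

    run("peach", "pear")   # Bob pays $26 for the peach, then $51 for the pear
    run("pear", "peach")   # Emily pays $26 for the pear; Bob pays $51 for the peach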

Alright, that’s enough for a meager little look at microeconomics. I know I should be looking at more non-auction settings and dealing with purchasers as price takers rather than price setters, but this is enough for now.

Read Full Post »

I hopped on the PTR to test out lifebloom and see how the numbers matched up with my speculation. First, it is indeed the case that it refunds half the base cost. Therefore, any cost reductions you have are essentially doubled if you allow lifebloom to expire.
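To make the "essentially doubled" point concrete, here's the arithmetic with placeholder numbers; the base cost and reduction below are made up, not the actual spell values:

    # Placeholder numbers only -- not the real Lifebloom values.
    base_cost = 1000        # hypothetical base mana cost of one application
    reduction = 100         # hypothetical flat cost reduction (10% of base)

    paid = base_cost - reduction     # mana actually spent on the cast
    refund = base_cost / 2           # refunded if the bloom is allowed to go off
    net = paid - refund              # 400, versus 500 with no reduction

    # The 10% reduction trims 100 mana off a 500-mana net cost, i.e. 20%:
    # letting lifebloom expire roughly doubles the impact of cost reductions.
    print(net, reduction / (base_cost / 2))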

Further, the bloom portion is indeed increased per application. 9k non-crit blooms are awfully pretty.

Finally, my testing revealed that the coefficient numbers from wowwiki, at least for the HoT, are somewhat incorrect. The error is on the order of a tenth of a percent, so it's not a hefty change. Essentially, a full stack of lifeblooms was healing for about 3% more than predicted. If I find time, I'll go through and update my numbers with the appropriate values. I've already done some work on it, as the edit notes below reveal.

Regardless, the conclusions are unchanged. Lifebloom has gained a significant buff if you are willing to change your usage patterns.

Obviously, lifebloom now relies very heavily on the bloom portion to maintain its throughput and efficiency. This means that a great deal of our healing is now somewhat out of our hands: we will become implicitly less efficient, since it is likely a substantial chunk of the bloom will result in overheal. I don't particularly mind. While maintaining a rolling lifebloom stack on multiple targets was par for the course in BC, it has become substantially simpler with the increased duration of the HoT and the reduced GCD. Also, we received substantial buffs to other spells, such that I'm not certain we need to rely on rolling lifebloom any more. For my part, I will move towards maintaining rejuv+regrowth on the main tank. On top of that, I'll keep some version of lifebloom running; likely I'll move between a slow and a fast stack depending on my mana and how much damage the tank is receiving. For spot heals: swiftmend, NT+HT, and either regrowth or nourish. I'm still planning some number crunching on those last two.

Read Full Post »

The version of Patch 3.1 currently on the PTR has introduced a few changes that have a decided impact on Resto druid healing in PvE. I wanted to do a quick run-through of some numbers and see how the changes might affect healing when the patch drops.

So first, let’s list the changes that I want to discuss:

  • Lifebloom: Mana cost of all ranks doubled. When Lifebloom blooms or is dispelled, it now refunds half the base mana cost of the spell per application of Lifebloom, and the heal effect is multiplied by the number of applications
  • Living Seed: This talent now accounts for total healing including overhealing.
  • Improved Regrowth (Tier 6) renamed Nature’s Bounty. Increases the critical effect chance of your Regrowth and Nourish spells by 5/10/15/20/25%. (Previously increased just regrowth crit by 10/20/30/40/50%)
  • Glyph of Nourish: Your Nourish heals an additional 6% for each of your heal over time effects present on the target.

Lifebloom has been the core component of druid PvE healing since Burning Crusade introduced the spell. From my own experience, druid healing always starts with a full stack of lifeblooms on a target, which, at very high spell power levels, produces phenomenal throughput for minimal expenditure. Once that stack has been established, the healer's next goal is to analyze how much additional healing is needed and layer additional HoTs to compensate. For myself, the priority has been rejuv if they just need a bit of additional healing, then regrowth if they've taken some damage. If more healing is needed on top of that, swiftmend, followed by either replacing the consumed HoT or letting lifebloom finish and then restacking.

Well, obviously the change to lifebloom has the potential to alter that paradigm. Additionally, Regrowth was nerfed and Nourish buffed, I believe with the hope of making Nourish a viable competitor to Regrowth. Unfortunately, I think Blizzard has positioned the two too closely. Ultimately, it looks like either one will be clearly superior when glyphed, or they'll be equivalent, in which case it'll be up to the player which they prefer casting. In neither case is it a strategic decision whether you use Nourish or Regrowth: they both fill the same role of a rapid-cast top-up to supplement HoTs. I believe the same holds for Healing Touch when glyphed, though I'm not entirely sure about that one.

I have a whole bunch of number crunching on lifebloom below the fold, but here’s the summary for people who just want to skip to the bottom:

If the intent of the lifebloom change was to nerf druid efficiency, it failed. As the numbers show, throughput with a slightly different casting style has actually increased, without requiring any more effort on the druid's part (as measured in, say, GCD usage). Additionally, the increased bloom has improved the efficacy of lifebloom enormously, such that different rotations (fast stack vs. slow stack, for instance) can show marked differences in output and efficiency. While I like the change, it doesn't appear to have addressed the design issue it was intended to.

Edit: Wowwiki's listed coefficients are slightly off, and I had to apply a minor fix to the spell power coefficient applied to the HoT and the direct heal. This does not alter the final conclusion, but it does alter some numbers in the analysis.

Edit 2: This is an enormous pain in the ass, but I think I finally figured out the HoT coefficient: 66.9%.  Will adjust later.

Lifebloom number crunching after the jump

Read Full Post »