Ghostcrawler posted today to talk about their intent for talent trees in Cataclysm. It's a rundown of stuff we already know, but gathered together; I thought it would be interesting to discuss it and wave my hands a bit about what it suggests we might expect from talent trees (and the new passive bonuses) in Cataclysm.
I’ll just quote the relevant part in full:
1) We are changing talent trees, in some cases substantially. The major focus is pruning out boring but valuable talents that passively increase say damage or healing.
2) A secondary focus is to fix the clunky areas (e.g. warlocks having two conflicting range increasing talents). We’re not going to remove old favorites or radically change the focus of the trees. You will definitely have to rethink your builds however.
3) Because you earn passive bonuses just for spending points in the tree, those fun, niche or utility talents won’t seem as expensive as they do today. We want to create a lot more choices where you are choosing utility vs. utility. We want to see far more “cookie cutter” build guides that say “Spend the last 5-10 points where you want.”
4) In some cases, trees will flat out get new abilities. These are in addition to the new level 81+ abilities.
5) You will get 5 additional talent points for the new levels.
6) We are not deepening the trees. This actually unlocks some interesting opportunities. For the first time, you can reach a 51 point talent and a 21 point talent in another tree.
I recalled that I'd already done a speculative examination of the Druid Balance tree in light of the plans announced at Blizzcon. I went back to have a look with Ghostcrawler's summary in mind, and I stand by my basic analysis. Reading GC, though, I suspect I was a bit over-generous with my shears compared to Blizzard. For instance, I said Improved Moonfire might go away. Given GC's comment that "we're not going to remove old favorites", I now think Improved Moonfire will stay very close to its present form. It may even receive a buff.
In general, I think Blizzard will be getting rid of blanket passive damage increases in talents (like the Earth and Moon spell damage increase) and folding them into the new tree-wide passive bonuses. However, I think Blizzard will nerf those bonuses along the way, and make up for it by moving the difference into talents that affect one or two abilities.
I'm wholly expecting a certain amount of experimentation here. Blizzard saw a certain amount of success in moving damage dealers to priority-based and proc-based rotations. However, I don't think they're done (at least, I hope not). The move to add more utility to talents provides the chance to make strings of abilities (ability chains) viable.
Stepping back from WoW specifically, to make gameplay interesting, you want to present players with interesting choices. Interesting choices involve a certain amount of unpredictability: they require the player to assess their options with respect to the current situation. Let me illustrate by pointing at a common situation in Halo: you’ve got yourself an assault rifle, a rocket launcher with a single rocket, and a few grenades. A group of enemies is ahead of you, with more potentially around the corner, though you’re not entirely sure how many or what weapons caches might be along the way. You can use your rocket to take out the group, use a grenade and potentially mop up with your rifle (or rocket), or just plow in with the assault rifle. Already we’re at three basic options, none of which seem, a priori, worse. The biggest question here is the time that it takes you to kill them, as that impacts the time they have to kill you. The longer they have, the more chance you die. Considering nothing else, it’s wisest to hit them with the rocket/grenade, and maybe mop up remainders with the rifle.
That's generally the WoW method, particularly in pve: always hit with your strongest ability, because there's no penalty for doing so; there's no tradeoff. In the Halo situation above, however, we're forced to consider the possibilities presented by the encounters beyond the current one. If I spend my rocket now, I won't have it later. Depending on just how big a group I'm facing now, versus what I think I may see later, I may be better off holding the rocket till later. Heck, even if I have absolutely no idea what's up ahead, if I think I can safely take out my current opponents with just a rifle, that is probably the most robust solution, since rifle ammo is generally easier to come across than rockets or grenades. And since rockets are especially effective against vehicles (and easier to aim than grenades), if I'm facing a pack of infantry I'd rather use a grenade if I can; grenades are easier to find than rockets.
Halo 3 presents us with meaningful decisions: we have a small set of options at any given time, each with its own tradeoffs and benefits. Each option works best for particular situations, and choosing among them is not necessarily easy. We have to assess the current situation for the optimal course, then try to make that choice robust against future uncertainty.
World of Warcraft pve tends not to present meaningful decisions of this sort. The most complicated damage rotations in WoW tend to be proc-based, and are purely a function of rapid reaction time and knowing the optimal action. For instance, because damage rotations are resource-neutral, there's no tradeoff between burst damage and sustainability. There are no coordination puzzles, where a series of actions sets up a new situation that alters the optimal action choices.
My hope for Cataclysm is to see meaningful decisions and situational analysis brought into rotations. More buttons alone is not the solution. The feral cat rotation, complex as it is, is not the solution either, because Ovale can solve it. Interesting choices are precisely those choices which cannot be easily scripted and require a certain amount of creativity to arrive at a spontaneous solution.
(Sirlin does a pretty solid breakdown of this in his series on game design.)
In pve WoW we’re presented with a single encounter, of a reasonably known duration, after which all available resources are effectively reset. Currently, ability options are pretty well theory-crafted: we know what is best, in what circumstances. Those circumstances don’t really vary in a way that’s unpredictable. For instance, we know whether our target will have the 5% crit debuff on them and how that will affect our damage. While a proc does add a bit of variety, it’s not a choice to make: there exists a strictly optimal action to take after a proc; the only thing WoW tests is our knowledge of that and our ability to perceive the proc. Therefore, there exists no uncertainty in optimal rotations – no risk/reward trade-offs.
Let's assume a purely single-player situation, where only your actions and your enemy's actions impact your situation (i.e. removing the complicating factor of other players' actions). How can we make your damage rotation interesting? Let's assume, for the moment, that we're worried strictly about damage: any additional utility an action carries is irrelevant beyond the damage increase it can provide.
I think the best way to examine this is to posit a sort of baseline damage level, which is resource-neutral and consistently maintainable – the rough equivalent of the assault rifle in the Halo 3 situation above. To make things interesting, we want to give players the opportunity to push their damage above that baseline, but at the risk of dipping below it. The risk derives from an improper analysis of the situation; pressing the buttons properly is assumed to hit the baseline, so to move above it the player must deviate from it, and that deviation creates the chance of falling below. The design risk is letting the deviation simply become a new baseline; that is the situation WoW finds itself in now.
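Just to put a number on that tradeoff, here's a toy expected-value sketch of my own (none of these figures come from WoW): deviating from the baseline only pays off when the chance of reading the situation correctly, times the payoff, exceeds the expected loss from a misread.

```python
# Toy expected-value model: is deviating from the baseline worth it?
# All numbers are invented for illustration.

def expected_dps(baseline, gain, loss, p_correct):
    """Expected DPS when deviating: with probability p_correct you read the
    situation right and land above baseline; otherwise you dip below it."""
    return p_correct * (baseline + gain) + (1 - p_correct) * (baseline - loss)

baseline = 1000.0   # sustainable, resource-neutral DPS
gain = 200.0        # payoff for a correct read of the situation
loss = 300.0        # penalty for a wrong read

# Break-even: deviate only when p_correct * gain > (1 - p_correct) * loss
break_even = loss / (gain + loss)
print(f"deviating pays off once you read the fight right {break_even:.0%} of the time")
print(expected_dps(baseline, gain, loss, 0.75))  # 1075 > 1000, so worth the risk here
```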
Bear in mind that I'm ignoring what encounter design and the like add in. For instance, if a lot of the move towards variable damage comes from combos of abilities that rely on you remaining near the target, and encounters involve lots of movement, we've got room for interesting decisions. That makes combo-string duration the resource being managed: very short combos form the baseline, while longer strings that increase output if you finish them, but risk dropping you below the baseline if they don't complete, offer up interesting choices.
But can we make a Patchwerk-style fight interesting without adding any core mechanics to WoW? I would suggest yes, to an extent. Frankly, I think we would gain more from Blizzard making utility have more of an impact…especially because that would support cooperation. For instance, the ability to proc short-duration buffs/debuffs on command could be coordinated amongst the party. A 10-second damage reduction on the boss that's on a 30-second cooldown…but must be triggered by a DPS class? Or perhaps a mini-heroism for healers. I'm talking short buffs, debuffs that can be fed off of, etc. Things that, when coordinated around, produce multiplicative gains over simply pressing them on cooldown. This in itself is a tradeoff, because you're likely giving up a straight damage ability in order to, hopefully, magnify the damage of your group. The key would be building these such that there's a significant difference between using them on CD and coordinating them with the group for maximum effect.
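As a rough, made-up illustration of what I mean by multiplicative gains (the 20% buffs, the 10-second duration, and the 60-second window are all invented for the arithmetic): two short damage buffs used whenever they come off cooldown each add their bonus to a different window, but overlapped in the same window they multiply.

```python
# Illustrative only: why coordinating short buffs beats pressing them on cooldown.
# Assume a 60-second window, a raid baseline of 100k DPS, and two 10-second +20% buffs.

window = 60.0
base_dps = 100_000.0
buff_len = 10.0
buff_mult = 1.20

# Used on cooldown at different times: each buff boosts its own 10 seconds additively.
staggered = base_dps * (window - 2 * buff_len) + 2 * (base_dps * buff_mult * buff_len)

# Coordinated: both buffs overlap the same 10 seconds and multiply (1.2 * 1.2 = 1.44).
stacked = base_dps * (window - buff_len) + base_dps * buff_mult * buff_mult * buff_len

print(staggered, stacked)  # stacked wins whenever the buffs multiply rather than add
```

The gap widens further once the raid also times its own cooldowns into the shared window, which is exactly the coordination payoff I'm describing.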
The other way I'd approach this would be decision-making built around trading off sustained damage for a chance at a greater payoff. As a very sloppy example, imagine that by deviating from the baseline you can build up an effect…say charges are placed on you over time, but while doing so you dip below baseline damage and cease to be resource-neutral. You may expend those charges with another ability, taking you back up to baseline, and you also have a chance to proc an effect that magnifies your final ability. However, you're burning through resources the whole time. Wait too long and you miss the window for the extra damage, dipping below baseline until you can work your way back up to it; pop it too early and you won't have spent enough charges to see much of a damage increase.
This is akin to Arcane mages at the end of BC, switching between their high-dps and high-efficiency rotations to manage mana between evocation CDs, but it involves a more complex tradeoff. Imagine a fire mage: their baseline is fireball spam. They may switch to another spell, say inflame, which costs more mana than fireball (taking you away from mana-neutral), deals less damage, and has a chance to put a fiery veins proc on you, which increases the damage of fireblast by 10%. Just blowing charges on fireblast is not enough to bring inflame back in line with fireball damage, though. Instead, you can cast exploding heart, which has a chance to double your current charges and deals damage equivalent to inflame, but has a long cooldown. Now you have a choice: exploding heart will take this whole combo chain above fireball spam…if it procs. Further, if you can chain exploding heart procs, you can get a massive boost. However, your mana is being annihilated the whole time. The longer you hold out, the more you're losing relative to fireball spam, because it'll take some time to get your mana ramped back up.
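To make the shape of that gamble concrete, here's a crude Monte Carlo sketch. Every number and proc rate is my own invention layered on the sloppy example above (inflame, fiery veins, and exploding heart are hypothetical spells), so read it as an illustration of the decision structure, not a tuning proposal.

```python
import random

# Crude Monte Carlo sketch of the hypothetical fire mage kit described above.
# Every damage value, mana cost, and proc rate here is invented for illustration.

FIREBALL_DMG, FIREBALL_MANA = 1000, 100   # the resource-neutral baseline, per cast
INFLAME_DMG, INFLAME_MANA = 700, 180      # weaker and costlier, but builds charges
VEINS_CHANCE = 0.5                        # inflame's chance to grant a fiery veins charge
HEART_DOUBLE_CHANCE = 0.3                 # exploding heart's chance to double your charges
FIREBLAST_BASE = 2000                     # the finisher, +10% damage per charge

def combo_run(hold_casts):
    """One run: cast inflame `hold_casts` times, cast exploding heart,
    then dump all charges into fireblast. Returns (damage, mana)."""
    charges, dmg, mana = 0, 0.0, 0.0
    for _ in range(hold_casts):
        dmg, mana = dmg + INFLAME_DMG, mana + INFLAME_MANA
        if random.random() < VEINS_CHANCE:
            charges += 1
    dmg, mana = dmg + INFLAME_DMG, mana + INFLAME_MANA  # heart hits like inflame
    if random.random() < HEART_DOUBLE_CHANCE:
        charges *= 2
    dmg += FIREBLAST_BASE * (1 + 0.10 * charges)
    mana += FIREBALL_MANA                               # assume fireblast costs like fireball
    return dmg, mana

def average(hold_casts, trials=50_000):
    runs = [combo_run(hold_casts) for _ in range(trials)]
    casts = hold_casts + 2
    dmg = sum(r[0] for r in runs) / trials
    mana = sum(r[1] for r in runs) / trials
    beats_spam = sum(r[0] > FIREBALL_DMG * casts for r in runs) / trials
    return dmg / casts, mana / casts, beats_spam

for hold in (2, 4, 8):
    d, m, p = average(hold)
    print(f"hold {hold}: {d:.0f} dmg/cast vs {FIREBALL_DMG} baseline, "
          f"{m:.0f} mana/cast vs {FIREBALL_MANA}, beats spam in {p:.0%} of runs")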
That turns mana into a medium-term resource to be watched. We want it to refill quickly, so you can get back into the action, but slowly enough that burning through all of it hurts. We also want the pool to be sizable enough to give people using this method some room to play with. Then we turn buffs/debuffs into short-term resources. The above example added two abilities and two procs to manage; just a couple more and you can produce interesting synergies between buffs/procs, such that a series of ability chains becomes viable, based on 1) where your mana is, 2) what triggers you have access to, and 3) what you've procced leading up to now.
I'd actually like some feedback here. This is an interesting design question, and I'd like to hear people's thoughts.
Terminal Computing
Posted in Commentary, Computing on January 27, 2010
Apple has revealed their iPad, which has launched various discussions of the utility of tablets. One of those conversations – highly tangential to the tablet discussion – was regarding computers as terminals rather than the current model of (mostly) locally hosted applications. Basically, you simply have a monitor at home and enough computing horsepower to send input information over the wire and display the visual information sent back. A server, or “the cloud” or whatever, receives your input data, processes it, generates visual data, and ships it back to you for your terminal to display.
The concept of terminal computing is nothing new. Telnet has been around for ages, and many software people use VPN for telecommuting. Windows has had Remote Desktop since XP. For a while, Microsoft was betting heavily on terminal computing taking off, but it hasn’t yet. Why?
On the face of it, it seems like a great idea and the natural evolution of computing. It’s specialization, right? Datacenters and server farms can focus purely on providing horsepower and network connections (as they already do), while consumers can spend less for something very much like what they already have: a monitor, mouse, and keyboard connected to a box that connects to the internet. Presumably savings would be realized in moving all the circuitry out of the box and into the server farms.
OnLive is predicated on precisely this concept.
Network applications are inherently distributed. Common software engineering design patterns (MVC, for instance) emphasize this, reducing the need for all software components to be co-located. If we think of the act of using a modern computer as akin to interacting with a single application, this is even more the case. What that means is that, from a software development perspective, you want to get things running in the places where they run most effectively. For now, that usually means rendering is offloaded to the client. Small applications can also run on the client, where resources are generally fairly abundant, while back-end logic and data access run on servers, where data locality is important and access is non-trivial.
Effectively, the entirety of computing is very much like running a complex network application.
Now, computer hardware has a very interesting curve of returns per manufacturing cost. For low transistor-count parts, the cost of adding additional transistors shrinks the more you add…until you hit a certain ceiling, where that trend abruptly reverses. That implies there exists a "sweet spot": a processing unit with the most circuits at the lowest cost per circuit. To maximize the computing-power-per-cost ratio, you want to use as many chips as possible that sit in that sweet spot (obviously the precise optimal transistors-per-dollar ratio changes as technology changes). Broad economics will emphasize this, as both suppliers and consumers tend to prefer the sweet spot when possible.
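As a purely illustrative sketch (the curve and its constants are stand-ins I've invented, not real fabrication data), the sweet spot is just the minimum of a U-shaped cost-per-transistor curve:

```python
# Illustrative only: a made-up U-shaped cost-per-transistor curve and its sweet spot.
# Real yield and cost curves are more complicated; this just shows the idea.

def cost_per_transistor(transistors_millions: float) -> float:
    fixed_overhead = 50.0 / transistors_millions   # packaging/testing amortizes away
    yield_penalty = 0.002 * transistors_millions   # bigger dies yield worse, cost climbs
    return fixed_overhead + yield_penalty + 1.0    # plus a constant per-transistor cost

sizes = range(50, 1000, 50)  # candidate part sizes, in millions of transistors
sweet_spot = min(sizes, key=cost_per_transistor)
print(f"sweet spot around {sweet_spot}M transistors at "
      f"{cost_per_transistor(sweet_spot):.3f} cost units per transistor")
```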
That optimal chip size also implies there's a space optimum…that is, you need a place to put the chip, along with supporting infrastructure. Because of this, co-location hits diminishing marginal returns once a certain ceiling is reached…eventually, your building just runs out of room, so there's a very strong drive to ensure that only the things that most require a giant processing center actually run there.
By the same token, that manufacturing sweet spot means that, if it doesn't cost much more to have more computing power, why not buy it (up to that optimum)? Which is generally precisely what is done. For any given space constraint, we tend to fill it with the computing hardware that sits at that optimal price/performance spot. That means the boxes people buy to sit next to their terminals will probably be more powerful than they need to be just to send inputs over the network and render the display info they receive. And that will become more and more the case as time passes, as transistor packages get smaller and smaller and cheaper and cheaper. Look at the size of the iPad. Or the iMac. They're basically terminals, as far as space requirements go…and yet they come stuffed with more computing power than a terminal needs. If you're effectively, unavoidably sitting on that much computing power, and an application can make use of it, then why not use it, rather than unnecessarily running the work in a server farm?
Effectively, the most extreme form of terminal computing will never really be realized, because it's just not necessary or terribly helpful on cost. Since it's reasonably safe to assume the client machine has some processing horsepower, asking it to run client applications is a savings for server farms and imposes very little cost on the consumer.
Basically, people who talk about computing in “the cloud” neglect to mention that, by dint of connecting to it, a client machine becomes, to a large extent, a part of “the cloud”.