Archive for the ‘Computing’ Category

I was reading this article on the history of Myspace, and I came across this quote:

“Using .NET is like Fred Flintstone building a database,” says David Siminoff, whose company owns the dating website JDate, which struggled with a similar platform issue. “The flexibility is minimal. It is hated by the developer community.”

My first thought was “What?” Even in 2005, .NET had a robust toolset and was built on an established and well-supported web server architecture: ASP. Granted, it was easier to deploy on a LAMP server, which has a really robust community behind it, and JSP I imagine had a much larger population of knowledgeable devs at the time, but still.

And this move to .NET was brought about because:

At that point it was too late to switch over to the open-source-code software favored by developers; changing would have delayed the site for a year or two just as it was exploding in popularity. The easiest move, says DeWolfe, was to switch to .NET, a software framework created by Microsoft.

What? Why would that be at all easier than building a LAMP setup or a Java-driven setup? In any case you have to completely rewrite the server…

And then I remembered I was completely nitpicking details that were mostly irrelevant.

Read Full Post »

Just read this (H/T Naked Capitalism).

It’s really nothing terribly new for anyone who has worked in web development or studied the internet. In general, your IP address can be narrowed down to a specific geographical region. Additionally, tracking cookies can know an awful lot about you. In this case, [x+1] either uses its own tracking cookie technology or accesses other advertising databases’ tracking cookies to get site-visit lists, then makes an educated guess about you based on heuristics drawn from other people with similar surfing patterns. This is verifiable due to the situation with the woman who felt they had mixed her up with her husband: since they likely share a computer, they also share tracking cookies. I suspect the husband surfs the net more frequently in this case, or has a more consistent pattern to his surfing.

Both of these can actually be pretty readily addressed. First, using a proxy server to surf the internet effectively hides your IP address, and without your IP address, they have a much harder time pinpointing your zip code. System-specific information (such as your MAC address) is not sent via HTTP, so once you’ve obfuscated your IP address, you’re pretty safe.
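
To make the proxy idea concrete, here’s a minimal sketch in Python using the requests library. The proxy address is hypothetical, and httpbin.org/ip is just a convenient echo service that reports which IP address your request appears to come from.

```python
# Minimal sketch: route HTTP traffic through a proxy so the destination site
# sees the proxy's IP address rather than yours.
# The proxy address below is hypothetical -- substitute one you actually trust.
import requests

proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# httpbin.org/ip echoes back the apparent origin IP of the request; with the
# proxy configured, it should report the proxy's address, not yours.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json())
```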

Second, you want to deal with tracking cookies. Cookies are the other major method [x+1] would have of attempting to uniquely identify you, but they are a bit tougher to deal with. Basic advertising cookies are actually fairly simple: you can delete them on exit or refuse to accept cookies, period. The first option is friendlier to the internet: almost every site in existence uses cookies to store session information, and if you delete them on exit, you can’t be identified via cookies the next time you run your browser. Refusing to accept cookies will do a lot of damage to your surfing experience, but it will completely halt any attempts to use cookies to identify you.
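
As a rough illustration of why deleting cookies breaks the trail, here’s a sketch using Python’s requests.Session as a stand-in for a browser. httpbin.org’s cookie endpoints play the role of the tracking server, and the cookie name and value are made up.

```python
# A requests.Session behaves like a browser's cookie jar: cookies set by a
# server are sent back automatically on every later request.
import requests

browser = requests.Session()

# "First visit": the server sets a tracking cookie (name/value are invented).
browser.get("https://httpbin.org/cookies/set/tracker_id/abc123")
print(browser.cookies.get_dict())  # {'tracker_id': 'abc123'} -- now identifiable

# Every later request in the same session carries the cookie back, which is
# exactly how an ad network ties separate page views to the same person.
print(browser.get("https://httpbin.org/cookies").json())

# "Delete cookies on exit" amounts to clearing the jar before the next session:
browser.cookies.clear()
print(browser.get("https://httpbin.org/cookies").json())  # tracker_id is gone
```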

I suspect [x+1] uses tracking cookies to track habits and ties them to IPs (as advertisers do) in order to narrow in on specific details. Doing this has limits: most ISPs use DHCP to hand out their limited pool of IP addresses, meaning users’ IPs change slightly from day to day. Tracking cookies can be left around forever, but users can delete them at will. I suspect they’re using a combination of IP and the current tracking cookie set to narrow in on a user…or they just figure the tracking cookies have been around long enough to be useful.

In either case, if you want to avoid this, browsing in privacy mode (supported by Chrome, Firefox, and IE) plus using a proxy server should keep you safe from this sort of identification.

Read Full Post »

Terminal Computing

Apple has revealed their iPad, which has launched various discussions of the utility of tablets.  One of those conversations – highly tangential to the tablet discussion – was regarding computers as terminals rather than the current model of (mostly) locally hosted applications.  Basically, you simply have a monitor at home and enough computing horsepower to send input information over the wire and display the visual information sent back.  A server, or “the cloud” or whatever, receives your input data, processes it, generates visual data, and ships it back to you for your terminal to display.
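
A toy sketch of that round trip in Python might look like the following. The host, port, and “rendering” step are all invented for illustration; a real remote-display protocol (RDP, VNC, or OnLive’s video stream) ships pixels or drawing commands rather than text.

```python
# Toy version of the terminal model: the client only gathers input and displays
# what comes back; the "work" happens on the server. Host/port are arbitrary.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024).decode()                    # receive the user's input
            rendered = f"[rendered on server] {data.upper()}"  # stand-in for real computation
            conn.sendall(rendered.encode())                    # ship display data back

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server thread a moment to start listening

# The "terminal": send input over the wire, display whatever comes back.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello from the thin client")
    print(client.recv(1024).decode())
```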

The concept of terminal computing is nothing new.  Telnet has been around for ages, and many software people use VPN for telecommuting.  Windows has had Remote Desktop since XP.  For a while, Microsoft was betting heavily on terminal computing taking off, but it hasn’t yet.  Why?

On the face of it, it seems like a great idea and the natural evolution of computing.  It’s specialization, right?  Datacenters and server farms can focus purely on providing horsepower and network connections (as they already do), while consumers can spend less for something very much like what they already have: a monitor, mouse, and keyboard connected to a box that connects to the internet.  Presumably savings would be realized in moving all the circuitry out of the box and into the server farms.

OnLive is predicated on precisely this concept.

Network applications are inherently distributed. Common software engineering design patterns (MVC, for instance) tend to emphasize this, reducing the need for all software components to be co-located. If we think of the act of using a modern computer as akin to interacting with a single application, we see this is even more the case. What that means is that, from a software development perspective, you want things running where they’re most effectively run. For now, that usually means rendering is offloaded to the client. Small applications can also be offloaded to the client, where resources are generally somewhat abundant, while back-end logic and data access run on servers, where data locality matters and access is non-trivial.
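
As a small illustration of that split (not any particular framework’s API; the names and figures are invented), the server side owns data access and business logic while the client only turns the returned data into something displayable:

```python
# Sketch of the client/server split described above: data access and back-end
# logic live "on the server", while rendering is offloaded to "the client".
# All names and figures here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    quantity: int
    unit_price: float

# --- server side: owns the data store and the business logic ----------------
def fetch_orders() -> list[Order]:
    # stand-in for a real database query
    return [Order("widget", 3, 2.50), Order("gizmo", 1, 9.99)]

def order_total(orders: list[Order]) -> float:
    return sum(o.quantity * o.unit_price for o in orders)

# --- client side: only formats the returned data for display ----------------
def render(orders: list[Order], total: float) -> str:
    lines = [f"{o.item:<10} x{o.quantity}  ${o.quantity * o.unit_price:6.2f}" for o in orders]
    lines.append(f"{'TOTAL':<10}     ${total:6.2f}")
    return "\n".join(lines)

orders = fetch_orders()  # in practice, an API call over the network
print(render(orders, order_total(orders)))
```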

Effectively, the entirety of computing is very much like running a complex network application.

Now, computer hardware has a very interesting curve of returns per manufacturing cost. For low transistor-count parts, the cost of adding additional transistors shrinks the more you add…until you hit a certain ceiling, where that trend abruptly reverses. That implies there exists a “sweet spot”: a processing unit with the most circuits at the lowest cost per circuit. To maximize the computing power/cost ratio, you want to use as many chips as possible that sit in that sweet spot (obviously the precise optimal transistor/money ratio changes as technology changes). The broad economics reinforce this, as both suppliers and consumers tend to prefer the sweet spot where possible.
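
A toy calculation makes the shape of that argument clearer. The cost curve below is entirely invented; it only mirrors the “falls, then reverses” behavior described above.

```python
# Invented cost curve: per-transistor cost falls as chips get bigger (fixed
# overhead is amortized), then rises once yield/defect penalties dominate.
def cost_per_transistor(millions: float) -> float:
    fixed_overhead = 50.0                   # packaging, testing, masks, ...
    linear_cost = 0.02 * millions           # cheap to add transistors at first
    yield_penalty = 0.0005 * millions ** 2  # ...until defects start to bite
    return (fixed_overhead + linear_cost + yield_penalty) / millions

candidate_sizes = range(50, 2001, 50)       # chip sizes, in millions of transistors
sweet_spot = min(candidate_sizes, key=cost_per_transistor)
print(f"sweet spot: ~{sweet_spot}M transistors, "
      f"relative cost per transistor {cost_per_transistor(sweet_spot):.3f}")
```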

That optimal chip size also implies there’s a space optimum…that is, you need a place to put the chip, along with supporting infrastructure. Because of this, co-location runs into diminishing marginal returns once a certain ceiling is reached…eventually, your building just runs out of room, so there’s a very strong drive to ensure that only things that most require a giant processing center are actually run there.

By the same token, that manufacturing sweet spot means that, if it doesn’t cost much more to have more computing power, why not buy it (up to that optimum point)? Which is generally precisely what is done. For any given space constraint, we tend to fill it with the computing hardware that sits at that optimal price/performance spot. Which means that, when people are buying boxes to sit next to their terminal, those boxes will probably be more powerful than they need to be just to send inputs over the network and render received display info. That will only become more the case as time passes and transistor packages get smaller and cheaper. Look at the size of the iPad. Or the iMac. They’re basically terminals, as far as space requirements go…and yet they come stuffed with more computing power than a terminal needs. If you’re effectively, unavoidably sitting on that much computing power, and an application can make use of it…then why not use it, rather than unnecessarily run it in a server farm?

Effectively, the most extreme form of terminal computing will never really be realized because it’s just not necessary or terribly helpful on costs. Because it’s reasonably easy to assume that the client machine will have some processing horsepower, asking it to run client applications is a savings for server farms and imposes very little cost on the consumer.

Basically, people who talk about computing in “the cloud” neglect to mention that, by dint of connecting to it, a client machine becomes, to a large extent, a part of “the cloud”.

Read Full Post »