Mac + ARM: one more thing

11 Aug

I’ve written appreciatively of Apple’s vertical integration and also about their Ax architecture, noting that the Imagination graphics processor could be readily boosted within a plugged-in device like the Apple TV to deliver console-quality games. M. Gassée’s MacIntel: The End Is Nigh and Mr. Richman’s Apple and ARM, Sitting In A Tree suggest many potential benefits in supply-chain control, overall cost, and battery life to an even deeper “Mac+ARM” vertical integration strategy which would shift Macs to use ARM processors instead of Intel’s more expensive, power-hungry x86 architecture. There are plenty of arguments to be made against such a shift to ARM, but if you are a programmer there is a subtle trend at play that it seems to me could make Mac+ARM a compelling and strongly differentiated position. It’s an argument I haven’t heard from anybody else. It’s about how we improve the performance of this software stuff that’s eating the world.

Game performance as an example

To understand the situation it’s worth taking a quick look at game software as an example. Today a great deal of game software is built against engines like Source, Unity and Unreal that can detect different graphics hardware and adjust how a game runs, and that also ease cross-targeting different operating systems (Windows, Mac), mobile devices, and consoles from the same source code and game artwork. The way game software runs atop game engines often allows existing games to “look better for free,” or with minimal added effort, on newer devices with better graphics processors (GPUs), and to degrade gracefully on older devices. The same software can run at faster frame-rates, so animations are more pleasing and buttery. The same software renders more detailed, higher-resolution content with finer textures and richer special effects in lighting or smoke or fire or fog. The same software may support more game-controlled opponents behaving more realistically, adding complexity and realism to the game. All of this just by updating the graphics processor, the GPU.
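
To make this concrete, here is a minimal sketch (in Swift, with entirely hypothetical names; real engines key off specific GPU families and feature sets) of the kind of capability probe an engine runs at startup so the same game content scales up on new hardware and degrades gracefully on old hardware:

```swift
// Hypothetical quality tiers an engine might expose.
enum Quality { case low, medium, high }

// Pick a rendering tier from detected hardware. The names are invented
// for illustration, but the pattern -- probe the hardware once, then let
// all content scale -- is how engines deliver "looks better for free."
func pickQuality(gpuCores: Int, memoryGB: Int) -> Quality {
    switch (gpuCores, memoryGB) {
    case (16..., 2...): return .high    // newer GPU: finer textures, richer effects
    case (8...,  1...): return .medium
    default:            return .low     // degrade gracefully on older devices
    }
}

print(pickQuality(gpuCores: 16, memoryGB: 2))   // high
```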

Two reasons this happens so strikingly with games and GPU hardware are that (1) there is a natural parallelism to graphics rendering itself, but also, importantly, (2) the natural way to organize game software and engines around that rendering parallelism is a very approachable concurrent programming model for programmers. Upgrade a GPU with more parallel “core” elements that can render more triangles, more complex textures, and more lighting effects and perform more physical simulation every 30th or 60th of a second, and the software “logic chunks” of the game have already been split by programmers into small pieces that these GPU cores and the game engine can parallelize to do more each frame. Newer GPUs can often take the same source artwork and render more detailed characters, draw texturing images with more fidelity on more of the screen or to a bigger screen, or realize a more realistic lighting or fog effect which was already part of the software’s definition. (A similar type of free speedup seems afoot with iOS 8’s Metal.)

“Free speedup” isn’t a new thing in software; it’s a natural consequence of Moore’s Law, after all. Free speedup happened for gaming and non-gaming software on the PC/Wintel platform through about 2002, almost entirely due to increases in clock speeds (what you see as the gigahertz, or GHz, of your computer – a measure of how many billions of individual operations, like adding, subtracting, or moving data around, your processor can perform every second). From the birth of PCs until the early 2000’s newer machines arrived with speedier processors as well as more and faster memory and other faster system components. New PCs automatically improved the performance of existing software like your operating system and the few apps you were using: a browser, Word, Excel, Adobe Photoshop, etc. Each year a new PC felt like a dramatic performance improvement for you as a user, and the easily perceived productivity improvements helped drive rapid PC replacement cycles.

In the early 2000’s we began reaching the “thermal limit” of the processor speed race – we couldn’t make a single processor run at faster speeds without literally melting your laptop. Intel began adding additional processors to a single chip and using techniques such as “hyperthreading” to add even more “virtual” processors. Instead of having a single processor chip running at 4GHz we have a single chip which has 2, 4 or 8 2GHz processors. Having more real and virtual processors made the operating system more responsive and also gave “free speedup” benefits to PCs being used in server environments. Database servers and web servers often run identical software fragments for every user connected to them, every time a request for a web page or a piece of data comes in; this is the same kind of naturally parallelizable software that gets a free ride from having more processors even when the speed of any single processor is not much faster. Unchanged, plenty of server software can handle more simultaneous users or web-connections or database queries on hardware with more processors, without most high-level programmers having to do a complex rewrite to accommodate the change. Their software was already written for concurrency.

Alas, most desktop software hasn’t gotten as much of a free ride in the last decade from multiple processors or multiple cores. You experience some speed improvements when you buy a new machine due to faster graphics, more memory, faster solid-state disks, or a faster network, but not like you used to in the 90’s. It’s telling that people are more excited these days by a faster internet connection than by a brand new laptop! It turns out that under the covers our apps and their user-interfaces are built using not-very-concurrent software programming techniques, so those extra processors alone don’t make your favorite apps feel much faster. Unless you are a programmer running many tools at once (like me!) or you work with specific high-end media software which has been painstakingly re-written to take advantage of all those processors, not much free speedup for you. Personally I think this lack of perceived speedup may account for some part of the decline in the PC industry – slower replacement cycles and less consumer desire to upgrade because there is no obvious benefit to a new machine.

Here’s a big part of why this happened:

The traditional software programming technique for concurrency, for taking advantage of multiple processors, is called multithreading, where the work of your software is manually broken into smaller pieces which are given to different processors to work on. In school every programmer is taught about threading and suffers through logic tests about semaphores, mutexes and other mind-numbing locking and synchronization techniques. It turns out that although low-level developers of operating system kernels, database engines, web-servers and some games and game-engines can pull off this form of concurrent programming to get the most out of multiple processors, most programmers (the ones building all your apps) are easily confused by threading and either can’t get it to work properly or can’t get it to work well when there are many actual threads running on many actual processors. Programmers don’t do the multithreading work or don’t do it well; most apps don’t feel much faster. Edward A. Lee’s famous 2006 paper “The Problem with Threads” pointed out that basic threads “discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism,” and he suggested new programming language techniques to facilitate concurrent programming, to make it easier for programmers to do well.
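
A toy Swift sketch of the hazard (my example, not from Lee’s paper): four threads bumping a shared counter, correct only because every single access is wrapped in a lock. Forget one lock/unlock pair anywhere and increments are silently lost, and nothing in the language warns you:

```swift
import Foundation

var counter = 0
let lock = NSLock()
let finished = DispatchSemaphore(value: 0)

for _ in 0..<4 {
    Thread.detachNewThread {
        for _ in 0..<100_000 {
            lock.lock()      // omit this pair and updates silently disappear
            counter += 1
            lock.unlock()
        }
        finished.signal()
    }
}
for _ in 0..<4 { finished.wait() }   // crude join: wait for all four threads
print(counter)                       // 400000, but only because of the locking
```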

As a response to these trends, and then in response to the very pronounced performance impact of long-running and slow-network-constrained apps on mobile devices – things like laggy touchscreens and unresponsive buttons and lists when software is not concurrent enough – various platforms introduced new concurrent programming features around this time in an attempt to push new software into taking advantage of a future with many processors. Some of them seem to have taken Mr. Lee’s insights about simplifying concurrency for programmers to heart. Others did not.

Java introduced java.util.concurrent in 2004 with some useful queuing and “futures” features, but also with many simplistic and mostly not useful wrappers around the traditional complex threading models. As part of its response to sluggish UI compared to iOS, Android followed up in 2009 with the addition of AsyncTask as well as guidelines for programmers to “do more work” in separate threads. In my opinion, Java and Android have not taken Mr. Lee’s insight very deeply to heart. Programmers can use some concurrent programming techniques, but concurrent programming is not the norm.

In mid-2010, Microsoft introduced Parallel Extensions to its .NET platform and runtime, then more “completion-based” and quasi-asynchronous APIs for Windows Phone through 2011 to prevent long-running operations from causing UI stutter and hangs, and finally introduced the new await/async keywords in the mid-2012 C# 5.0 update. I think Microsoft folks definitely took the global trends and Mr. Lee’s insights to heart when building PLINQ and TPL, but their lack of platform focus and consistent messaging in the past few years and their troubles with Windows Phone market share have meant that their concurrent programming model has not caught on deeply with developers. Also, although many developers love and use C#, the concurrent programming model does not permeate all of the many disjointed Microsoft APIs, and so software is not yet being broken up to take strong advantage of the future with many processors.

Apple’s iOS launched with the iPhone in 2007, then opened to developers as an SDK and platform in early 2008. It arrived with a great deal of natural concurrency over its entire API surface: not just guidelines for which APIs to use when, or an admonishment to add threads for long-running tasks (though it had these aplenty), but also some fundamental structure (delegates, delayed message sending, and asynchronous APIs) which prioritized UI responsiveness and assumed slow network and input/output operations of all kinds. Soon after, in 2009, Grand Central Dispatch (GCD) was introduced: a technique for creating and scheduling multiple queues of work-chunks independent of the number of processors or threads (effectively hiding thread management from programmers). GCD and Blocks, a technique for writing the work-chunks to put into those GCD queues and for creating reusable work-chunks more succinctly than the delegate and callback mechanism, made their way to both iOS and Mac OS X by early 2011. GCD and blocks have meant that Apple’s own software like iMovie/Final Cut Pro and iWork can actually use all available processors “for free” without overthinking threading and concurrency. Over the past few years blocks and queues have come to permeate Apple’s APIs – we create graphical animations with blocks, we handle data loading and saving with blocks, we handle synchronizing input and UI with blocks; they are everywhere. And they feel pretty natural to the developers I’ve talked with, and way less error-prone than traditional multithreading. On the Apple platform developers have, for several years now, been actively breaking up their applications into smaller work-chunks, encouraged by example code and the APIs to re-organize around a concurrent programming model which is simpler, less error-prone, and more scalable to a future with many, many processors.
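
Here is a minimal sketch of that queue-and-block model, written in today’s Swift spelling of GCD (the 2011-era calls were C/Objective-C’s dispatch_async and friends, and the thumbnail work here is a stand-in):

```swift
import Dispatch

let images = ["a.png", "b.png", "c.png"]   // hypothetical work items
let group = DispatchGroup()

for name in images {
    // Each work-chunk is just a block handed to a concurrent queue; GCD
    // decides how many threads and processors to use behind the scenes.
    DispatchQueue.global(qos: .userInitiated).async(group: group) {
        let thumbnail = "thumbnail-of-\(name)"   // stand-in for real decode/resize work
        // In an app you would hop back to the main queue with
        // DispatchQueue.main.async before touching the UI.
        print("ready: \(thumbnail)")
    }
}
group.wait()   // block until every chunk has finished
```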

That’s the lead-up. Here’s the point.

The subtle but major benefit to a Mac+ARM strategy might be the ability to add many, many more processors to Macs and sell them as the fastest computers that consume the least power – not just matching Intel’s GHz performance or number of processors but radically leapfrogging both performance and power, because only Apple’s software and its app developers are positioned to take advantage of so many processors, thanks to how this long game of shifting to a simpler concurrent programming model has been playing out. And only the Ax/ARM architecture can fit 16 or 32 cores into a smaller power profile than the ~8-core top-of-the-line mobile Intel processors, or 48 or 64 cores into the power profile of the top-of-the-line desktop and server Intel processors. Mac laptops could be lighter, run cooler, last longer on the same battery, and feel dramatically faster running concurrency-aware apps than any Intel-based laptop. Mac desktop systems – already targeting high-end developers and media professionals who use concurrency-capable software – could be smaller and use much less energy, and would also feel dramatically faster.
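
A sketch of why that positioning matters: software already organized as work-chunks spreads itself across however many processors exist, with no rewrite. In GCD terms, DispatchQueue.concurrentPerform fans iterations out across every available core, whether that is 4 on today’s laptop or 32 on a hypothetical ARM Mac:

```swift
import Dispatch

let samples = [Double](repeating: 0.5, count: 1_000_000)
var results = [Double](repeating: 0.0, count: samples.count)

// Fan the iterations out across all available cores. Each index is
// touched by exactly one iteration, so no locking is needed, and the
// same code simply runs faster on a machine with more processors.
results.withUnsafeMutableBufferPointer { out in
    DispatchQueue.concurrentPerform(iterations: samples.count) { i in
        out[i] = samples[i] * samples[i]   // stand-in for real per-item work
    }
}
print(results[0])   // 0.25
```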

Fighting this performance battle would be very difficult for Intel and PC OEMs in the laptop and tablet space given their continuing struggles around price and power consumption – it’s unlikely they can match that combination of processor count and power consumption for 3-5 years, and only then if they are under pressure. It would also be an uphill battle for Microsoft and PC OEMs without competitive Intel parts. Although they might try to shift to ARM, and a provider like Qualcomm might create a 64-bit, highly multi-processor ARM part, they simply lack the software. Microsoft’s operating system, web-server, and database server are extremely multi-processor capable, but as yet not fully ported to ARM. In addition their APIs are not only in a disjointed state but also not solidly founded in concurrency – legacy apps, originally their strong advantage, become a disadvantage, feeling old and sluggish and consuming more power. Nor does Microsoft have the strong developer following and loyalty they once had, due to their ongoing product, platform and API disarray and consumer market share woes. Microsoft’s response to ARM in cloud and backend enterprise apps is pretty straightforward; it’s hard to picture how they could react, or how quickly, to ARM in the consumer space.

Will Mac+ARM happen? I really don’t know; these are just my thoughts about advantages to Mac+ARM that I haven’t seen anybody else notice. It’s worth thinking and talking about.

quick thoughts on iOS Metal

3 Jun

One of many surprises to me out of Apple’s WWDC 2014 keynote yesterday was the Metal API announcement – a very low-level API for performing complex graphics and computation on Apple devices. Basically Metal strips out a layer of overhead which exists to simplify graphics programming for most programmers but which gets in the way of the most advanced ones. As always, AnandTech has a terrific deep dive and a take on the overall market and ecosystem impact, and Alex St. John’s perspective is scathing about OpenGL and low-level APIs while being utterly insightful at the same time.
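
For a flavor of how thin the layer is, here is the skeleton of Metal command submission in today’s Swift spelling; this is a minimal sketch with the actual encoded work omitted, not a working renderer:

```swift
import Metal

// Metal makes command batching and submission the app's explicit job --
// exactly the driver overhead and guesswork that gets stripped away.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let commands = queue.makeCommandBuffer() else {
    fatalError("Metal is not available on this device")
}
// A real renderer would create render or compute encoders here and
// record draw calls into the command buffer before submitting it.
commands.commit()
commands.waitUntilCompleted()
```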

Since I’ve got only a few minutes before I have to check out of this SoCal hotel and be somewhere else, I’ll just add a few quick thoughts.

First, it’s worth noting that even some very advanced graphics programmers may not see huge performance wins from Metal:

[embedded tweet from Brad Larson, who maintains an excellent graphics library for image manipulation called GPUImage]

That said, among the class of very advanced programmers who will jump on Metal are… the teams that maintain the game engines, frameworks, and toolchains used by 95% (perhaps 99%) of mobile games. Unity3D, Unreal Engine, and a few others simply dominate mobile gaming on both iOS and Android and have traditionally targeted a relatively common core of OpenGL ES for both platforms.

Because of this I find it unlikely that the API itself will lock anybody into iOS from a classic API perspective – everybody is using an engine or framework, and indeed tools much higher up the value chain. But… Metal could very well offer an iOS performance lock-in on mobile.

The most realistically rendered games will look great on iOS until Google does deeper/better driver work on Android. As it turns out, that work is crazy hard due to the diversity and fragmentation of Android hardware. In this respect, if Metal is indeed a 10x speed improvement or a 10x detail improvement, it may very well be a masterful move – non-iOS games from the same engines will just look lousy on Android. Wow.

vertical integration of design and post-pcs

9 Apr

The news that Apple has been building an RF/baseband team is a great reminder about how cool vertical integration of intellectual property design can be as design and final manufacturing continue to fracture.

I wasn’t a business strategy wonk growing up, I was too busy writing software, so my first view of vertical integration in manufacturing, contract manufacturing and white-label manufacturing came during the mid- and late-90’s at Microsoft while working with PC OEMs on the troubling issue of “low-cost consumer PCs.” OEMs were in a price war that was driving their margins into the dirt and were giving Microsoft (the $70 Windows software license) and Intel/AMD (the $50 CPU price) grief over those parts of their cost as well as trying to figure out how to differentiate their products. We were helping key OEMs prototype different special-purpose uses for the Windows operating system which could be sold with new high-volume consumer products under a lower licensing cost to hit the <$300 retail price point. (This effort and some of our prototyping was one contributor to the initial XBox.) I was fascinated to learn details about how much PC OEMs had outsourced manufacturing (and some forms of the hard intellectual property design) to foreign white-label manufacturers. Some small players had literally outsourced everything but their logo, their sales staff, and their direct-mailing lists. It was clear even then that they were not differentiable and fully doomed. Others, like Dell, were still doing final customer-specific options assembly and industrial/mechanical (particularly pluggable component) design but were no longer designing much of their printed circuit boards (PCBs). The more I learned the more this seemed like a difficult-to-defend position without unique software capabilities to differentiate the clearly commodity hardware. PC OEMs had no brand-exclusive content.

One PC OEM that stood out and then led me down the rabbit hole of game consoles was Sony, which I learned was an extremely vertically oriented company – at one point it probably built the trucks that dug the sand and copper to be carried by its ships to its factories to be turned into glass and magnets for TV tubes to be carried again by its ships to markets around the world. Sony’s vertical integration experience in many different CE devices, from Walkman to CD players to stereos to TVs, taught it how to manufacture Playstation One consoles cheaply and then to radically reduce their build costs each year over the life of the console. It was using this technique in PCs and notebooks as well, delivering the most appealing and smallest PCs and commanding the highest margins for a time (though the PC OEM war of specs and hundreds of configurations diluted and defeated many of these advantages). Studying Sony’s Playstation and PC/notebook businesses as well as their various content businesses illuminated two important things for me which may seem at odds, but are not: (1) vertical integration of hardware intellectual property is critical for differentiation, though the actual manufacturing can be carefully out-sourced if possible, and (2) software differentiation (content) is the even more important differentiator. Ironically for Sony, even the strongest advantages of their vertical integration and their deep investment in hardware intellectual property for consoles weren’t enough to keep ahead of the price-performance trajectory of commodity PC CPUs and GPUs. (It’s good to see them embracing the PC ecosystem and focusing on exclusive content now.)

Which brings me back to Apple, who clearly learned more lessons than everybody else combined from the PC OEM wars. Lessons about how differentiation matters, how intellectual property design must keep its distance as far as possible from manufacturing, and most importantly how to prevent a cross-over threat from another ecosystem.

In the classic PC/notebook space, Macs continue to use many off-the-shelf PC parts (ethernet chips, CPUs from Intel, memory), but their deep investment in industrial design and consumer-important features like thinness, lightness, screens and longevity requires expertise and investment in the intellectual property of PCBs, glass, mechanics, aluminum, and manufacturing, just to name a few. They use the intellectual property of hardware design to make their products unique, and their exclusive software clinches the deal, allowing them to keep their margins high.

More interesting still, though, is the mobile, Post-PC or “Internet Of Things” space. Here, with iOS and ARM-based in-house-designed CPUs, Apple’s overall vertical integration strategy is just shockingly impenetrable for the foreseeable future. Post-PCs will be small, highly-capable, full of sensors, network-connected, power-sipping, and accessible to developers. Apple’s environment is all this, and is particularly strong in low power. At this point Apple just lacks dedicated in-house designers of displays, touch-screens and batteries, though they appear to have long-term investments and future capacity contracts with their key suppliers and manufacturers. They don’t actually own the team which designs the graphics processor (Imagination Technologies, creators of the PowerVR GPU), though there is evidence of a deep investment and long-term contract. I suspect there must be a right-of-first-refusal or right-of-first-purchase in place. (I still don’t understand why Imagination hasn’t been bought by somebody; they are an amazing company who understand low power better than just about anybody.)

Android plus off-the-shelf hardware from the non-Apple ecosystem of ARM CPUs, GPUs and baseband controllers is nearly price-competitive, but already at the cost of very slim margins for all the intermediaries (increasingly, for the medium- and high-end, this is just Qualcomm). Apple building custom baseband chips will mean Apple has fewer intermediaries and so pays less (it would likely pay $20-30 less per device using in-house baseband, roughly 10% of its fully-loaded bill of materials), and I’m guessing they will continue to outperform on power consumption. Qualcomm will feel pressure from OEMs to further reduce prices and power consumption, leading to lower margins and less ability to invest long-term. This is the aspect of the strategy which prevents an ecosystem cross-over – living in the same ecosystem as your competitors and retaining exclusive content, as in PCs and notebooks, but being able to do cutting-edge intellectual property investment in literally every component with no exceptions. By bringing the hard intellectual property design of the very same ecosystem in-house and securing inexpensive manufacturing, there simply is no competitive price-performance curve for a competitor to cross over.

I shake my head at the genius of not just managing your supply chain but literally eating every bit of intellectual property designed within it except the lowest margin manufacturing. I see no offensive strategy capable of cracking Apple’s Post-PC lead at this time. Perhaps (I hope not) anti-trust will eventually be used, but it’s really more a waiting game for Apple to stumble and slow their pace of innovation.

Game Console Ecosystems – Part 2, Strategies (Now What?)

2 Apr

In Part 1 I wrote about content, price, and lifecycle patterns of game consoles and described ways they are blocking their own adoption. This second part describes now what? strategies for consoles, micro-consoles and others in the TV, CE and video space, with the exception of Valve/Steam, which is complex enough that it deserves its own post. In another future post I’ll talk about something even more important to me: how to place your bet as a developer, and how VR (and AR) will radically impact developers and the CE space.

Many super smart people wonder What’s going to happen in TV? After Amazon’s FireTV announcement and the Android TV leak, even the most dull-witted among us now realize that small, inexpensive, network-connected, cloud-backed, UI-excellent, rapidly improving devices easily replaced every 12-18 months are the most natural product to deliver content to less-frequently purchased and expensive “big pieces of dumb glass” (televisions). The United States has an estimated installed base of 270M TVs (2.24 per household); 240M are sold worldwide annually, and the worldwide installed base is estimated north of 2B. Americans spend 5+ hours each day around their televisions. Selling internet-connected devices, services and content to that big an audience is not a hobby.

I’m a big console, now what?

The big-three consoles are in for a world of hurt as fast-improving, cheap-to-update, lower-cost mobile hardware and an enormous ecosystem of mobile developers transition into this market, whether through a variety of Android-based micro-consoles (including Amazon’s FireTV) or an iOS-based Apple TV, or simply if TVs begin to lose user attention to video streaming and games on tablets and phones. At the same time the most lucrative and loyal hard-core gamers and developers are drawn to high-end PCs which out-perform consoles by increasing margins and are easier and cheaper to work with. High-end PCs are likely to arrive in TV-friendly form factors via Valve’s Steam box initiative by this holiday season. The question is how quickly and how hard, not if, this world of hurt descends on existing consoles.

Putting content and usability mis-steps aside for a moment, the past two generations of consoles have tried to ride a particular spot on the price-function curve in their graphics hardware and content arms-race, a spot that has pushed their price up and made them and their developers dangerously dependent on blockbuster hits. Chasing hits that fully exploit expensive custom hardware yields hardware and software that are fundamentally over-priced and increasingly over-squeezed.

To defend themselves the big consoles’ best chance in direct consumer sales is to reduce their competitors’ advantages and increase their own. On the hardware side:

  • Accelerate the current generation subsidy. Since micro-consoles, Amazon FireTV and Apple TV competitors will be priced at $99-149 in the 2014 holiday season, subsidize to $169-199 to make consoles an easier choice. It may be possible to strip some storage space or software services out for the upcoming season, but be prepared to spend on this defense even if nothing can quickly change. Creating better bundles with a contracted service subscription, unbundling Kinect and the controller, and adding support for older XBox, Wii and DualShock-3 controllers which consumers already have are some ideas; there are many others to be found. Crazy to subsidize so heavily, you say? How could you possibly subsidize 10-20M units per year? To that I say – hey, if you are lazy, do no work to reduce manufacturing and sales costs or to understand your supply-chain, and have to subsidize at $300 apiece, it might cost you $15B over three years. Are you saying that a beachhead entertainment device is worth less than what Facebook is paying for WhatsApp?
  • As part of subsidizing, use 2-year service subscription contracts. You can only do this if subscriptions are more like Playstation Network and Steam membership (full of value, free games, sales) and less like today’s XBox Live (subscribe and pay so that you can even use Netflix and other apps – stupidity), as most consumers won’t see any value. See also Spotify-style subscriptions described below under micro-consoles.
  • Commit to yearly hardware updates and forward and backward game compatibility over three to four years to defend against yearly micro-console updates, which will follow the app compatibility model from mobile devices. This makes sure new hardware has an existing catalog of titles instead of resetting every generation, which is attractive to consumers and developers while still giving developers and users access to the cutting edge. Another side-effect: this model appears to match Valve’s public strategy and so may counter a Valve and Steam machine OEM advantage.
  • Introduce yearly staircase pricing: each year a model phases out, last year’s model steps down the price ladder, and a new, faster model takes its place at the top. In 2015 drop the current hardware to a steeper $149-169 subsidy and introduce more interesting hardware, hopefully less subsidized if you’re doing your supply-chain work, at $199-249. Rinse and repeat in 2016. Throughout this time period the goal is to create a many-generations road-map with a razor-focus on reducing hardware costs so that the subsidy costs come down while function improves. The larger goal is to get your console to a more price-defensible position on the price-function curve, keeping it enough ahead of peak function coming from mobile CPU+GPU hardware in micro-consoles that better games are possible, but not so far ahead that you are chasing the arms-race of cutting-edge PC graphics, which is too expensive. All consoles lose some high-end PC and Steam Box gamers, but this should help block lower-end Steam Boxes from capturing a large chunk of the market.

On the software side:

  • Make your platform wide-open for independent developers. Kids, students, anybody with skill and spare time who owns one of your devices should be able to download free tools (for PC and Mac) and write games that they can give away for free, sell in your app store, or just show their friends. Review apps to prevent junk, spam, and copy-cats, curate lists of great content to put it front-and-center in your store, but most importantly just let any developer write software for their own console using free or very inexpensive tools. Don’t just speak about it and slowly roll it out over a year or two, do it immediately; remove the strange sign-ups, verification, approval, wait-list nonsense. Remove the strange pricing rules, size limits, trial periods and overall regulations found in Microsoft’s XBLIG/XBLCG and SCEA’s Indie Outreach. Let the community of developers share code and support one another without hindrance.
  • Offer a better “App Store”: let prices float, but don’t drive to zero. Draw the best of simple finding, paying, and in-app-payment as well as curation from the iOS App Store and the best of sales, membership and deals from XBox Live, Playstation Network and Steam. Remove the constraints and restrictions that held prices extremely high, but don’t let prices race to zero.
  • Perform an immediate radical dissection of your user experience, particularly around how you navigate and watch or play content and around how you find and purchase additional content to watch or play. Use voice through phones, tablets, and remotes, not by yelling at your TV. Simplify first-run and every launch to speed access to content. Simplify billing, account setup, account recovery, subscriptions. Speed up launch and software updates. Eviscerate all error messages. Make backup/recovery, roaming your gamer profile, and restoring to a newer console work seamlessly.
  • Do even more to allow control of the device, its services, and the TV through mobile and tablet “remote” apps and bluetooth hardware, and open up the control mechanism to third-party mobile app developers through free SDKs, hardware development kits, open protocols, and tools.
  • Invest more deeply in exclusive game content, at a minimum for time-windowed exclusivity.

To summarize, the goal is to truly level the playing field in simplicity, usability, and price as a defense against lower-cost devices that can’t yet deliver high-quality game content, while also creating a broader defense against a multi-tiered, multiple-OEM PC market through more frequent updates and console-exclusive content (more on that play, which is Valve’s Steam, in a future post). This should be a sustainable gaming position for several years. Microsoft is at a slight disadvantage to Sony in adopting this PC-isolating approach, as it is more difficult for them to choose exclusivity of content between PCs and consoles; Microsoft is not sure whether PCs or consoles are going to be the larger-volume, dominant devices long-term.

I’m a micro-console, now what?

Apple TV, micro-consoles like Ouya and now Amazon FireTV (and potential Android TV devices) have an advantage over current consoles in being a profitable piece of hardware at a reasonable and interesting consumer price-point. Ouya and Amazon FireTV don’t have deep supply-chain control and have to pay more middlemen for parts, so Apple’s 35-40% hardware margin at $99 retail will be out of their reach initially. Over a few years, if they achieve high volumes, they can either find higher margins or drop their prices below Apple TV (their retail strategy will likely be the latter, which suits Apple just fine); future cable operator subsidies and Apple’s brand strength may obviate any retail price advantage vs Apple for most consumers. In any case micro-consoles could be self-profitable on hardware alone, and they will definitely provide a profitable distribution path for existing subscription streaming partners like Netflix and Hulu. To have gaming content help drive their growth and to grow gaming and an app ecosystem, though, they must create:

  1. Must-have, exclusive, break-out content that helps move 5-10M units of the hardware. Initial hits are needed to seed the market, which in turn causes more content developers to focus on exclusives for the device, to see its market potential, and to see a viable customer base for doing business long-term. For games this is particularly important since “console” games tend to be longer and require longer development cycles. Amazon has primed its pump for FireTV with original streaming content (which has been somewhat well received but not yet as critically acclaimed as Netflix’s) and is at least trying on the games front with the acquisition of some strong game teams, a commitment to first-party titles, and a big outreach to many game studios. There are no two ways around the fact that Ouya really must scout out and invest money (if they have it) in an exclusive must-have game title which showcases its excellent little product.
  2. A virtuous-cycle ecosystem where money can be made by content developers. This is more than an app store with a 70/30 revenue split; it’s more than supporting payments seamlessly or supporting in-app-purchase. It’s about an overall business model and community culture that ensures long-term profitable businesses can be built in the environment, not just get-rich-quick schemes or games that prey on addictive behavior or psychological chicanery. Directly copying the current iOS App Store is not without risk – many aspects of the pricing and free-to-play/in-app-purchase model have soured game development on mobile, leading to an exodus of great game developers back to PCs and Steam. In the last year Ouya’s everything-has-free-trials policy (now rescinded) was quite a bad mis-step in my mind because it set customer expectations on “free” and it also put developers immediately into the free-to-play/in-app-purchase mindset of get-rich-quick schemes dominating in mobile. The Amazon FireTV slide describing the average selling price of paid games as $1.85 sets up a similarly low and cheap expectation which may prevent the creation of break-out game content. My spidey-sense is that Apple has already spotted the negative customer satisfaction impact of free-to-play and unlimited in-app-purchase, as well as highly creative developers shifting their attention away from iOS, and is poised to make App Store rule adjustments. I won’t try to read Apple tea leaves, but some suggestions for other micro-console ecosystems to avoid scaring away developers are:
    • proactively block clones and knock-offs under guidelines such as Apple’s 2.12
    • adopt time-window constraints on the frequency and amount of in-app-purchases (perhaps introducing several different categories that apps can choose from to best fit their game mechanics), with the underlying goal of disrupting app dependencies on “whales”
    • introduce Spotify-style subscriptions of all-you-can-eat daily, weekly, or monthly access to groups of games and pay developers in proportion to the amount of time consumers spend in their game during the period, with the underlying goal of encouraging the creation of content users like to spend time with (time spent being an imperfect proxy for their enjoyment)
    • If I personally ruled the world I would also set a minimum base price of $0.99 or $1.29 for apps, just to keep consumers aware that content has value.

The complaints I have previously leveled against consoles and which I suggest they fix – better UI, easier setup and account management, faster game loading, etc – are a baseline for micro-consoles as well. Though they start at a simpler place than consoles and bring less baggage, they still have room to tighten up, and getting ease-of-use just right in the $99 space is going to be how they differentiate and sell. Apple TV is in solid shape, though voice search in FireTV ups the ante quite a bit. The FireTV UI looks good, but until I set it up later this week, play with it for a while, and get a software update I won’t truly know. Ouya is pretty rough around the edges, but they have been making updates and have a good software team; I think they know these issues are important for their future, and I look forward to seeing what they do.

The final point for micro-consoles is having an excellent bundled content remote and an excellent bundled or separate game controller. From what I’ve seen, FireTV has nailed the remote, especially with voice integration – I’m looking forward to trying it. Neither Ouya nor FireTV is off to a strong start with its gamepad, though Ouya supports Playstation DualShock3 controllers and wired XBox 360 controllers, which is smart. Iterating aggressively on their own game controllers or drafting off the excellent open-protocol Bluetooth DualShock3/4 controllers is a great idea (I recommend the DualShock4 – the speaker is a surprisingly great addition to the controller). Apple’s TV remote is excellent – it only remains to be seen if they integrate voice in the next version. I expect Apple to design a really great gamepad as well as supporting existing customers with the DualShock 3 & 4. It would cost a licensing fee to integrate XBox 360/One controller support since Microsoft uses some (stupid) proprietary technology – I doubt Apple or any others will choose to support it.

I’m a streaming stick/dongle or mini-set-top-box, now what?

Right now these are exciting little products for consumers. The sticks and dongles remind people of convenient thumb drives. They are incredibly inexpensive and can be justified as an impulse purchase just to get Netflix or Hulu – most consumers have a spare HDMI port on their new television and what the heck, it’s only $35! The mini-set-top-boxes are small and don’t take up much space near the TV or cause much additional house clutter – your partner won’t complain. Existing Smart TV software is so bad and changes so slowly that when someone sees a better looking UI demo reel they want to give it a try.

In this category the Roku products are excellent. The Chromecast is fairly underpowered but decent; my best guess is Chromecast sells well to Androidees who want to project their pictures, videos, and YouTube to a television, which I love doing with my iOS devices and Apple TV, but I have only anecdotal evidence that this is how Chromecast is being used. The myriad other teeny dongles out there which offer photo or video streaming or Netflix/Hulu are mostly meh in quality.

But even as the hardware improves and prices come further down, there is a fundamentally narrow range within which general-purpose sticks and dongles can operate given their size. You can’t dissipate much heat from such a small device volume, and so you can’t draw much power or carry much storage or content. Pure video streaming isn’t a problem, but buffering multiple streams quickly is, and you are barely going to get smooth UI transitions and compelling graphics, or even carry a lot of software or content, especially as screens grow in density from HD to 4K. No matter how Moore’s Law progresses, the stick/dongle form-factor will be too far down the price-function curve to be super appealing. Technically sticks and dongles can be carried easily to a friend’s house or on a vacation, and while this niche use has utility today, I suspect it eventually dies in a cloud world. The $99 mini-set-top-box, which has dedicated power and a larger volume to dissipate heat, is the most interesting form-factor for the foreseeable future.

So what to do?

  • Focus on software. Make your software exceptionally easy to use, modular, and easily licensed and rebranded. Rethink and innovate on the tough issues on TV like discovery, search and parental controls. Unless you’re Apple, pick Android as your base so you can appeal to developers and improve your own application development. (Note: this is where I think Roku, otherwise executing with excellence, will be in trouble with its Linux+Brightscript SDK.)
  • Make your devices controllable via smartphone “remote” apps and bluetooth. Create a free SDK for mobile and hardware developers to use to write custom controllers – don’t think that you can do the best job. Rapidly and generously buy up the best solutions from your software and hardware developer community rather than trying to copy them in-house; don’t alienate developers.
  • License your software and hardware solutions directly to “Smart TV” manufacturers who need to get out of the software business. Promise them better software, more frequent updates, and better customer support.
  • Use the stick/dongle and Smart TV integration as the free/cheap entry to your broader software platform. Assuming you have long-term ambitions to be part of a TV ecosystem, take a look at how Roku has created a set of devices that span the portable/cheap stick to a plugged-in form factor with more hardware horsepower potential.

Is there room for single-purpose free or cheap HDMI sticks and dongles to do things like just video conferencing or just displaying photo albums or just letting you do presentations from your phone or tablet or streaming games from a PC? Absolutely there is room for these niche players doing this for several years until apps and high-powered $99 mini-devices take over completely, just don’t expect to build a huge business; use sticks and dongles during the transition.

I’m a “Smart TV,” now what?

Because of the slow replacement cycle of TVs and the accelerating pace of computer and graphics hardware improvements I’m pretty skeptical that it is useful for the “smarts” of a TV to live inside large & expensive TVs. Evidence suggests that even inexpensive tablets have long replacement cycles (perhaps they are used primarily as portable TVs?). In the short-term you can solidify your position as the best piece of dumb-glass moving forward as follows:

  • Don’t ship another single unit carrying your worthless, unusable, frustrating custom software.
  • Pull every stop to partner and integrate 3rd-party software with great UI by the 2014 holiday. At the moment I’d recommend Roku, though Ouya is a smarter choice due to the Android base (their UI is not quite as refined as Roku’s, though), but soon we should hear what Android TV’s licensing terms are. Please don’t roll your own Android version, custom store, and UI – you are not a software company.
  • Be sure to integrate AirPlay, iView and the protocols underlying Chromecast so that your TV is accessible by the majority of mobile devices without users having to think about or buy an additional device. There may be other protocols specific to your geographic or cultural market – the key is to choose a partner which has some form of application SDK so you can add features and target specific models of your TV quickly and easily. (This is Roku’s one big challenge at the moment, and why Ouya or another Android-based system stands a chance.)
  • Focus on usability and customer service. Hey, what do I know, but here are some suggestions: Streamline setup. Be faster to turn on (with less of your logo). Ideally detect my inputs, but it’s also OK to let me name them – like “XBox One” and “Cable” and “DirecTV” instead of “HDMI-1” and “HDMI-2”. Don’t make changing inputs vs. changing channels modal – go watch families struggle with TVs, it’s not rocket science. Each time the TV turns on, show me thumbnails of all my named inputs – what could be more frustrating than a blank screen showing “HDMI-2” when the last person left a different (now turned off) source selected? Revamp your manuals.
  • If you’ve got a speaker, support audio playback even when the screen isn’t on. Integrate Spotify through Roku and let people have ambient music, controlled by their phone or tablet.
  • Bonus points for TV/AV folks: Buy Sonos or partner deeply with them instead of trying to copy their features poorly in your line of sound-bars, TVs, A/V receivers and 5.1 speaker sets using barely functional Spotify, Pandora, TuneIn, and Rdio integration. You do not have the software chops to build your way to a solution; you should just buy. They are doing a much better job than you possibly can because they focus on software and hardware working harmoniously.

Fundamentally, you are in a really, really tough spot long-term as a purveyor of dumb glass – but these are my suggestions for remaining differentiated while you figure out your next step.

I’m a cable- or satellite-operator with my own set-top-box, now what?

All your set-top-box hardware, remote controls, and software have been universally condemned and unconditionally criticized as slow, difficult to use, lacking in cutting-edge features, and slow to update – even back when you blatantly copied TiVo or built in their technologies. Because you aren’t a software company, because you have a huge installed base of odd TV and stereo configurations and customers you fear weaning off traditional remotes, and because you completely subsidize the cost of the device or charge a small monthly fee, your goal is to simply minimize the cost of the hardware, software, and support associated with it. You have been either actively creating barriers to prevent your set-top-box from being a gateway/hub for other web-based or local-to-the-home photo and video content, or have been integrating it poorly with custom-built apps. Your own set-top boxes are not a competitive edge, and continuing to invest in them will never lead to you growing your market share or improving customer satisfaction. There are two things you have traditionally done that you should keep doing:

  1. Secure content exclusively to your own networks, especially video content, especially sports. Most video content is becoming commoditized, so you need desirable short-shelf-life exclusive content like sports as well as a broad tail of ideally exclusive niche content. Be willing to spend big to secure exclusive content.
  2. Improve your service. Faster internet speeds, better reliability, better customer service. Lower prices than your competitors is great, but dramatically better service is always the strongest long-term differentiator.

What should you do differently? You should either ease your way out of the hardware and deep software business by using off-the-shelf packages like Android TV and white-labeled hardware, or you should partner with a company already selling game console or set-top-box hardware directly to the public and draft on their business model. In either case you should sell, rent, or help subsidize new devices to all your customers as quickly as possible, taking yourself out of the hardware and deep software business and into the app business. Integrate your tuner hardware and DRM technology if that is technically necessary.

If you are not going to use Android TV and white-labeled hardware, you have three serious choices for third-party complete ecosystems at this time: Microsoft XBox One, Apple TV, and now Amazon FireTV.

My guess is Microsoft will make XBox One available to many different cable operators as one choice for consumers among the set-top-box options available from the operator, and they are looking for a small subsidy assist or subscription percentage. It is typical for Microsoft to pursue consumer choice and perceived quantity over deep quality. (This isn’t meant as a dig – it is just their traditional method for hedging bets and keeping multiple OEM or other partners happy and more future moves available.) The fact that it has on-board storage, a cloud infrastructure, and reasonably good programming guide integration makes it an attractive-looking option. Unless it’s subsidized deeply, to the free-to-$100 range, though, I’m skeptical that consumers will readily choose it over basic set-top-box options, but there may be some attractive ways to bundle it with new services which surprise me. It is also physically a little bit big and requires more complex physical integration and setup.

In contrast, I think Apple would initially partner with a single operator: the operator increases the subsidy, pulling the consumer price-point down further, and in exchange gets the cachet of exclusivity, drawing new subscribers. Recall AT&T and the iPhone – which cable operator wouldn’t want to be AT&T in 2007? The leaks around content deals with a combined Comcast/TimeWarner are mostly random noise without much depth, but in between the lines I see indications that an Apple TV distribution partnership with new content is actually the deal that’s pending. Whether or not XBox One is also an option for Comcast/TimeWarner customers, I suspect the next-generation Apple TV will be available as a free or <$50 option to Comcast customers and will include deep direct program guide and custom Comcast app integration. Apple TV will also be available at retail, like an unlocked iPhone, for the higher $99 (or future $149?) price-point. I’ve written that the next Apple TV will support gaming, and even that I thought games would be its launch focus. But reading about content negotiations, thinking about Eddy Cue and how Apple typically chooses a single product facet for launches in order to have crisp messaging, I now think the pending Apple TV update will focus entirely on user-interface issues like search and parental controls, new streaming content partners, and a Comcast/TimeWarner distribution partnership with live- and time-shifted-TV programming guide integration. Though it will have the hardware specs and iOS capabilities to support apps and specifically games later, that will likely be a separate fall/holiday 2014 announcement.

As a cable operator I would certainly be reaching out to Amazon if they hadn’t reached out to me already, because the voice-search, parental controls, Android ecosystem, UI/support and overall sizzle are what I need, but I suspect that in 2014 Amazon FireTV is a pure consumer play and they haven’t had time to pursue deep cable operator integration and getting into this form of subsidy.

If I were a cable operator or ISP of any kind, I would be reasonably worried about partnering with any of these companies and would be drawn to retaining control over my own destiny by sticking with custom STB hardware or choosing the path with the most opportunity for customization (perhaps Android TV). This would be a poor choice, though. Cable operators should look to the history of the past 7 years in wireless operators and smartphones: long-term you do want to support many different devices, but short term you want the most compelling product deeply integrated so that you can acquire more contracted customers. You want to be the AT&T of this round, you want the exclusive Apple TV.


Thus end my current thoughts on navigating the connected TV, set-top-box, console, and micro-console landscape – a bit adjusted at the last minute to account for Amazon’s FireTV announcement and stripped of references to Oculus VR and Facebook while I try to get my head around what that will mean. Feel free to tell me I’m wrong in the comments or harass me on Twitter, I’m @natbro. You perhaps won’t be surprised to hear I sometimes consult and brainstorm with companies in the CE industry about these issues in more detail; if that interests you, I can be found through natbro@gmail.com.

Game Console Ecosystems – Part 1, Strategeries

15 Mar
[image: a mixed-up Rubik’s cube]

Looks complex, actually pretty formulaic.

Next-generation consoles from Microsoft and Sony launched a few months ago, and initial sales figures are starting to roll in: about 6M Playstation 4’s and 4M XBox Ones sold worldwide. TechCrunch dug through monthly sales, compared them with older consoles, and said hyperbolically that The Console Market is in Crisis. Re/code more correctly interprets the raw data showing Microsoft and Sony growing a bit while Nintendo shrinks, and other reports show game revenue growing slowly. To me these and other signals unequivocally indicate a contraction is underway in TV-based gaming. Consumers are showing less interest in big-ticket devices and there are no must-have console-exclusive games. Game studios have trouble justifying the very high costs of console game development, and even successful console games aren’t succeeding financially. Most independent developers avoid console development.

Ouya and other low-priced micro-consoles rumored from Google and Amazon should be more appealing to developers and consumers, but either haven’t shipped yet or aren’t yet hits. Polygon criticizes Ouya’s plan to embed their platform in other hardware and says Ouya may not be dead, but its long history of stumbles makes success unlikely, taking a particularly hard jab at their controller. I also find the controller poor, but they are doing solid developer relations, and an embedded platform + store service with common content which consolidates and grows the Android-based micro-console market is the only proper start of a strategy for Ouya (the other part is a business model where developers make enough money – more on that in Part 2). The elephant in the room for micro-consoles is killer games. It’s not quantity or even quality; Ouya crossed 700 total games recently, and there are many gorgeous, fun and diverse titles available. Rather, it’s the elusive killer-app: an exclusive, unique, must-have hit game that will make new users want and buy an inexpensive micro-console just for that game. A hit game is needed to start micro-console demand snowballing with consumers, and I’m not yet convinced they have all the elements needed to give rise to a hit, in particular a proper revenue model.

Even if console sales are growing relative to where they were 10 years ago, their boats are not lifting with the overall rising tide of gaming, and they are under market pressure from several directions at once. From above: a resurgence of high-end PC/Valve gaming using cutting-edge GPUs with dramatically better performance and graphics than the newest “next-generation” consoles. These draw away hard-core gamers, the biggest spenders and influencers, and the small- and mid-sized game studios which target them. From below: the exponential growth of mobile and casual gaming, which delivers simpler, more immediate gratification for play outside the time spent near your TV and is becoming the main introduction to gaming for new players and developers, instead of PCs or consoles. From within: developers are flocking to the high-end and mobile segments of the market where they see more growth and opportunity, lower barriers to entry, lower development and distribution costs, and faster product cycles. From outside: the time and money consumers have available for gaming on consoles is being undercut by video media streaming to tablets and phones through Netflix, HBO Go, and others, and by the many TV streaming dongles and devices like Chromecast, Roku, and of course my favorite, Apple TV. The draw here is also a new wave of highly-compelling and socially spread video content, and it is eating away at people’s limited time and attention (Ben Thompson’s The Jobs TV Does is a great overview of limited attention for escapism).

The original console business model has become strategery. Infrequently updated, big-ticket subsidized hardware; high-priced games and high-priced services; poor bundling of commodity video services as poorly integrated “apps”; limited exclusivity and tightly controlled game publishing. It hasn’t worked very well in the past 10 years to differentiate consoles or to expand the market, and it will work even less well moving forward. Creating a real console/set-top-box strategy that grows the market and profits may seem as impossible as solving a jumbled Rubik’s cube, but there are just three degrees of freedom: content, price, and lifecycle. In Part 1 I’ll describe what each of these elements is and how they have been, are being, and likely will be used. In Part 2 I’ll describe some different ways of combining them into a coherent strategy that could work for consoles and micro-consoles in coming years.

Content

These days any TV-based device needs commodity content (TV, Movies, Netflix, Hulu, some number of games) just to enter the market, but to grow it must find compelling, unique and exclusive content or offer a better user experience around content (think TiVo and Netflix) or both in order to create new demand and to snowball device sales. Game content specifically requires compelling hardware and software and an ecosystem of skilled developers and companies betting on and building viable businesses around the console’s success (just like video content requires studios producing content). Making all forms of content easy to buy and consistent to access through software and within the overall business model has also become a critical user experience differentiator. When you are missing or undermining any of these traits, you are blocking the creation and sale of unique and exclusive content. You won’t grow the market or your share. You won’t attract new users. You will be vulnerable to competitors.

Through a combination of studio consolidation among game publishers and a smaller appetite for spending to secure exclusives, XBox and Playstation have evolved from their earlier generations to have weak content differentiation. This generation, most of the exact same blockbuster games — Call of Duty, Assassin’s Creed, Need For Speed, Battlefield, etc — were available on both XBox One and Playstation 4 at or near launch. For years many “exclusive” titles have simply been first-party, non-unique variants of a genre like “first person shooter involving war/zombies/science-fiction” which sell well enough but are not killer hits broadening the customer base and moving more consoles. Other platform exclusives are killer in terms of sales and attach rate, but are indistinguishable to non-experts (e.g. Forza on XBox vs Gran Turismo on Playstation). These grow the market, but equally, and so don’t differentiate consoles from one another.

Blocking, back-stabbing, or limiting novice, first-time, or independent development through onerous distribution contracts, high-cost development systems or difficult toolchains frustrates developers and limits the overall amount of innovation in an ecosystem. Small, independent developers and young people who want to learn to program and build their own games are a tremendous source of innovation and energy – reducing all barriers to their participation in an ecosystem is critically important. Microsoft and Sony have flip-flopped madly on development systems, development tools, and supporting independent developers and independent game distribution over the last 10 years. At this point it’s still not clear whether they will deliver on all the changes to support frictionless independent development that they’ve promised, but it is clear that Sony has promised and delivered more for Playstation 4 recently. Microsoft is behind this curve.

The launch this month of Titanfall exclusively for XBox One and inFAMOUS Second Son exclusively for Playstation 4, though, is a solid test of how quality exclusive content drives sales. Microsoft is inexplicably weakening its wager by allowing Titanfall on the older XBox 360 and PC in coming months and has few other exclusives up its sleeve – it seems to have much of its marketing budget and all its PR eggs in this basket, betting it will recreate the HALO phenomenon from the original XBox (which was a recreation of the Zelda+GoldenEye phenomenon from the Nintendo 64). Sony, on the other hand, is spending much less on marketing and has several very interesting exclusives pending. They had already been investing more heavily in exclusives like The Last Of Us for the Playstation 3 – again, I’d posit that Microsoft is behind this curve.

It’s difficult for me to write even a brief summary of how flawed user experiences are on XBox and Playstation around game and video content, subscriptions/login, or navigating to and within streaming apps like Netflix, Hulu, or HBO Go within the XBox dashboard and XBox Live or within the Sony dashboard and Playstation network. Suffice to say that neither console has taken better and consistent user experience around content to heart, though at least Playstation doesn’t double-charge for access to streaming services (you must be an XBox Live subscriber for $5-10/mo before you can use your $7.99/mo Netflix account – pure insanity). At every level of account setup, login, password recovery, network configuration, troubleshooting, game-saving, subscription- and single-purchase management, billing, channel and content navigation, launch and navigation delays, update management — you name it — Sony and Microsoft user experiences are complex and difficult for novice users to access. Both are very vulnerable to products with better or even limited and simplified user experience, and both face tremendous technical challenges in simplifying their designs due to their underlying architectures, teams, and processes.

These are all grave content execution mistakes: Too much effort on the quantity and comparability of titles (they’ve got a racing game, I’ve got a racing game). The short-term tactic of matching your competitor (they’ve got 15 launch titles, I need 15 launch titles). Not enough effort betting on and paying big for exclusive content which grows your installed base. Not enough effort to create a “minor league system” of strong and resilient independent developers and new/young developers. Making digital and video content difficult to find and purchase, or making setup and content navigation inconsistent and frustrating.

Price + Lifecycle

While many other prices for entertainment have gone down – prices for large flat-screen TV’s, prices for movie entertainment including DVD/Blu-Ray players and discs, prices for casual web- and mobile-apps and their in-app purchases, prices for streaming service subscriptions for Netflix, Spotify, and others — the prices for consoles, console games, game subscriptions and downloadable game content (DLC) have stayed mostly steady over the last 10 years and have now risen for the latest console generation.

High software and DLC prices, high hardware prices and the multi-year lifecycle of console hardware were originally tied to the only business model that could deliver graphics for compelling games: high-priced custom hardware and a controlled publishing model moving high-priced software through a price-regulated distribution channel. Only this combination of price, channel, business model, and lifespan could return a large enough software attach rate and the corresponding lifetime average revenue per user (ARPU) to make consoles + first-party game publishing an overall high-margin business.

In the period around 1999-2000, the confluence of commodity GPU advances (which, due to Moore’s Law and exponential advancement, very suddenly matched or beat custom-designed hardware from Sony and Nintendo) and the robust ecosystem of DOS and Windows game developers, tools, and APIs was the unique crossover point we used at Microsoft to enter the console business with XBox. At that point, the commodity PC CPU and GPU were less expensive in terms of up-front design & manufacturing time & expense (Intel/AMD and NVidia were already making those investments and paying them down across millions of units in the desktop and laptop categories), but were still fairly expensive on a per-unit basis. We could use the lower up-front investment to differentiate the XBox with faster time-to-market, local storage, high-speed networking and on-line services, but we still needed the longer-lifecycle business model to recapture the overall investment and the high cost of the hardware itself.

The original XBox vision (as I pitched it) was to reduce the lifecycle length while maintaining forward game compatibility and to ride commodity PC component prices down on volume, a strategy which would greatly disadvantage other console players dependent on custom hardware development. It would also potentially advantage Microsoft by influencing operating system, tool, and API priorities internally with the concrete pressures from devices and games needing fast-boot, stability, simplified install/uninstall and overall simpler user interaction for consumers.

For reasons that still don’t make sense to me, the subsequent XBox 360 generation diverged from the Intel/PC architecture, making a deep investment in custom PowerPC hardware which bifurcated toolchains and, along with nonexistent (at worst) and dysfunctional (at best) small/independent developer support, alienated developers, sending many back to the PC and eventually towards the Valve/Steam ecosystem. Several terrific exclusive XBox 360 games, a thriving on-line service, a surge caused by Kinect, and the fact that the Playstation 3 made an even deeper and more tragic hardware investment in the impossible-to-program Cell processor allowed the 7th console generation to stumble along from 2005 through 2013 with XBox slightly in the lead. It is telling that the 7th generation was exceptionally long and that both 8th-generation devices have now returned to a commodity PC architecture. XBox One and Playstation 4 are using virtually identical 3-year-old Intel-compatible CPU+GPU SoC components, much to the relief of (and probably due in great part to lobbying by) the largest game studios.

But both new-generation consoles are expensive items – their base prices will likely land at $300-$400 for the 2014 holiday season, with game, storage, extra-controller and service bundles creating average retail revenue of $450-$600 per unit. At this price they will not out-perform similarly priced PC rigs or Steam machines, which will get almost all the same content since Microsoft and Sony are tepid about paying for exclusives and studios are reluctant to bet heavily on either leader. The 8th-generation consoles have once again been designed for a multi-year lifecycle, yet these are not easy-to-justify prices for consumers; these are not purchases that can be made yearly or even biennially.

Microconsoles like the Ouya, the Apple TV with a game controller which I expect shortly, and the rumored Google and Amazon set-top-box/consoles based on commodity phone/tablet mobile SoC’s running Android or iOS are at another unique crossover point for competitors to enter and disrupt the console and set-top-box market. Just as Microsoft’s entry at a crossover point in hardware costs with the original XBox disrupted the custom hardware and software development phase of existing consoles, these new, cheaper devices will disrupt both the long-lifecycle and the subsidized-hardware characteristics of the traditional console business model, and they will enter with an even larger and stronger developer ecosystem than the big consoles as they draw on experienced mobile developers.

Some, like Apple TV and Ouya, will be able to sell hardware which improves graphics performance radically every year at a low (in the case of Ouya and its licensees) to high (in the case of Apple) profit margin, thanks to their much higher volumes in phones and tablets and basic economies of scale. The prices of their core ARM-based CPU and GPU alone will be 1/4 to 1/5 the price of the PC-architecture-based chips, while gaining exponentially on their PC counterparts in performance, matching them within 4-5 years. In particular, Apple holds a strong advantage having disintermediated chip suppliers – they can fine-tune custom chips for gaming at the lowest possible price. Others, like Google and Amazon, may sell similarly fast-improving hardware yearly at cost or at a small loss, subsidizing it with their overall software and service ecosystem. Apple and Amazon are, I think, also likely to offer a dramatically better user interface for purchasing content and watching video from multiple sources, further disrupting the inconsistent ways that cable/television, Netflix, and other streaming providers are paid for and integrated on the latest consoles, to say nothing of inconsistencies in setup, login, saving, and other common user experiences. In all cases, once the new entrants use this crossover point, it will not be possible for long-lifecycle products to survive without making the same transition to more open development and a shorter, 1-2 year, backward-compatible lifecycle.
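
As a back-of-the-envelope check on that “matching them within 4-5 years” claim, here is a tiny sketch of the compounding math. The starting gap and both growth rates are illustrative assumptions of mine, not sourced figures:

```swift
// Illustrative compounding sketch: how fast a mobile SoC closes a performance
// gap on PC-class chips. The 8x starting gap and growth rates are assumptions.
var mobilePerf = 1.0        // normalized mobile GPU performance today
var pcPerf = 8.0            // assume PC parts start ~8x faster
let mobileGrowth = 2.0      // assume mobile performance ~doubles each year
let pcGrowth = 1.2          // assume PC performance improves ~20% each year

var years = 0
while mobilePerf < pcPerf {
    mobilePerf *= mobileGrowth
    pcPerf *= pcGrowth
    years += 1
}
print("parity in ~\(years) years")  // prints 5 under these assumptions
```

Change the assumed rates and the answer moves, but any scenario where mobile compounds meaningfully faster than PC closes even a large gap within a handful of years.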

Interestingly, if either Microsoft or Sony does adopt a shorter hardware cycle and more open development, notice how closely they start to mirror Valve’s simple Steam/Steam-Machine strategy: higher-priced hardware updating on a yearly basis, surrounded by a strong, well-established developer community which already understands forwards- and backwards-compatibility; simple, open and totally free toolchains; digital distribution; a consistent architecture with enough variation to allow hardware competition and multiple price-points, but enough similarity to keep developers from having to test too many diverse configurations. My only question, honestly, is which of these two gets the courage to partner with or buy out Valve first.

Tune in next week for Part 2 where I’ll suggest some non-strategery strategies for micro-consoles to compete and for traditional consoles to shift gears to save themselves from extinction.

Apple TV + games

24 Jan

Did you know 240M televisions were sold worldwide in 2012? Almost 40M in the US alone. I’ve written before about what I’d want in a set-top-box and how xBox and Playstation could be disrupted by an Apple TV supporting apps & games. Now that the new iPhones and iPads are out and show the hardware roadmap, rumors about an updated Apple TV in 2014 are swirling and I’ve spent more time with the XBox One and Playstation 4 checking out their gaming, set-top-box & media integration. I think the time is finally ripe for apps and games on Apple TV.

What it seems likely Apple will do:

  • Introduce a new model Apple TV with better graphics, more memory, and local storage for apps, priced at $149 (16GB) – $249 (64GB), retaining a 40%+ profit margin. Use a slightly beefed-up 64-bit A7x chip like the one found in the iPad Air & mini, but with even more GPU horsepower and running at a higher clock speed, since it’s a plugged-in device and can both use and dissipate more power. An “A7x+” – 2x to 4x the GPU cores/power and a somewhat faster CPU. Updating the CPU/SoC has negligible manufacturing cost impact, but boosting to 4GB DRAM (+$25) and local storage/flash drives up the price slightly.
  • Introduce its own Bluetooth gamepad controller which works with older and newer Apple TVs for $79-$99. It would be brilliant to enable support for users with existing Sony DualShock 3/4 and XBox controllers – there is evidence of DualShock 3 support in iOS, but this may be a red herring. These are both great controllers (DualShock is simply Bluetooth), and they mostly fit the MFi specs for iOS controllers.
  • Keep the existing $99 Apple TV price point, updated to the 64-bit A7x with 1GB DRAM and 4-8GB flash, perhaps enabling some non-graphically-intense apps and games on just the most recent 3rd-generation models for existing Apple TV users, though the lack of RAM and storage limits this possibility. The newest $99 model in any case won’t have the more powerful GPU or storage capacity for more intense games – it’s the SKU for basic streaming and basic apps, but it will be easy for most consumers to prefer the $149+ versions.
  • Update the on-screen UI to support using the Bluetooth game controller for navigation. A dramatically different UI than the past isn’t needed for this product, but they might roll out a more iOS7-like UI as long as they’re updating.
  • Introduce an App Store for buying games and other TV app content, with some restrictions on what can run on older/normal/$99 vs. newer/$149+ Apple TVs – e.g. photo/screensaver apps can run on either, racing and first-person-shooter games only on newer models, as happened with games on iPhone 3G vs 4 vs 5 depending on their use of OpenGL ES (a hypothetical capability-gating sketch follows this list). The UI target resolution will be 1920×1080 (1080p) and this will become another “universal app” target for developers.
  • Likely some minor announcements around new or improved movie/tv/streaming/content partners, but this update will be more focused on games.
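
To make that App Store gating bullet concrete, here’s a hypothetical sketch of how a developer might gate heavier content by GPU capability, in the spirit of the iPhone-era OpenGL ES checks. The tier names and gating policy are invented; probing EAGLContext is a real iOS-era API:

```swift
// Hypothetical capability gating for older ($99) vs. newer ($149+) Apple TVs.
// Tier names are invented; the EAGLContext probe is an era-appropriate check.
import OpenGLES

enum DeviceTier {
    case basic   // streaming + light apps (screensavers, photos)
    case gaming  // racing / first-person-shooter class titles
}

func detectTier() -> DeviceTier {
    // A GPU that can create an OpenGL ES 3.0 context is a newer, faster part;
    // older GPUs fall back to ES 2.0 and get the lighter content tier.
    return EAGLContext(api: .openGLES3) != nil ? .gaming : .basic
}

func canRun(appNeedsIntenseGraphics: Bool) -> Bool {
    // A photo/screensaver app passes false; a racing game passes true.
    return !appNeedsIntenseGraphics || detectTier() == .gaming
}
```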

What doesn’t seem likely:

Some of my previous thoughts about ideal set-top-boxes include better integration with my cable box via HDMI pass-through, the ability to control other peripherals, a universal guide overlay, and unified search. I still dream of this idyllic future, but having used the disappointing XBox One, TiVo, Playstation 4, and other devices which try and fail to integrate other devices well, fail on voice, and don’t do a great job integrating other sensors, I think many of these features cannot yet be delivered, technically or business-wise, to the level of Apple’s user-satisfaction bar in 2014. Kinect-like interaction via the PrimeSense acquisition isn’t happening in 2014 for Apple. Ultra-HD/4K is not a 2014 target, either. Games and utility applications (weather, screen-saver, home-calendar) accessed with the standard remote and a quality Bluetooth gamepad are the simple, no-brainer way to add new content – developers are, in fact, champing at the bit to put games and other types of apps on Apple TV with a quality, responsive controller. I have heard some hints from some game developers that they are doing work “sort of like this.”

Why not just improve AirPlay from existing devices?

AirPlay can be used to project audio, photos, and video, or to project the screen contents from an iOS device to a TV through Apple TV. For showing your friends a few photos or videos off your phone or watching a slideshow from iPhoto this works pretty well, and it can also work for some simple types of games and apps. But using an iPhone or iPad as the main CPU, GPU, and input controller to run a sophisticated game (or any application with touch or accelerometer interaction) and then projecting it through an Apple TV to your TV simply has too much input lag: the device must process your input and generate graphics, then the frame-buffer must be encoded, transmitted over WiFi, decoded, and sent to the TV — about 0.5-1.0s of lag in total. 4K media would make this even worse. The CPU+GPU and storage will have to be directly wired to the screen for the foreseeable future.
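
To see why the total lands in that range, here’s an illustrative latency budget; every per-stage number below is an assumption of mine chosen for illustration, not a measurement:

```swift
// Illustrative AirPlay-mirroring latency budget. Per-stage numbers are
// assumptions chosen to show why the total lands around half a second.
let stagesMs: [(stage: String, ms: Double)] = [
    ("input processing (touch/accelerometer)", 30),
    ("game simulation + GPU render",           33),  // ~2 frames at 60fps
    ("frame-buffer H.264 encode",             150),
    ("WiFi transmit + buffering",             200),
    ("Apple TV decode",                       100),
    ("HDMI out + TV display latency",          80),
]
let totalMs = stagesMs.reduce(0) { $0 + $1.ms }
print("≈\(Int(totalMs))ms input-to-glass")  // ≈593ms under these assumptions
```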

What about games that have some UI on the TV and some on your iPhone/iPad?

Nothing will prevent developers from doing dual-UI with their games, and I’m sure some will (it’s pretty fun to do on the Wii U; if you haven’t tried Super Mario 3D World with a friend, you should), but developers will do it with applications triggering one another’s launch via Bluetooth and communicating small amounts of data peer-to-peer over Bluetooth and WiFi, with code running on both devices, not by having the iPhone/iPad project video to the Apple TV or vice-versa. There is simply too much input lag, and Apple cares about smooth and responsive.
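
A minimal sketch of that small-data, peer-to-peer approach, using Apple’s real MultipeerConnectivity framework (available on iOS today; whether a future Apple TV SDK would expose the same thing is my speculation). The service name and payload are invented:

```swift
// Sketch of companion-app messaging: tiny game events over a peer-to-peer
// session, not projected video. Service type and payload are hypothetical.
import MultipeerConnectivity

final class CompanionLink: NSObject, MCSessionDelegate {
    private let peerID = MCPeerID(displayName: "player-iphone")
    private lazy var session = MCSession(peer: peerID)
    private lazy var advertiser = MCNearbyServiceAdvertiser(
        peer: peerID, discoveryInfo: nil, serviceType: "tile-game") // invented

    func start() {
        session.delegate = self
        advertiser.startAdvertisingPeer()
    }

    // Send a tiny event ("card played", "score 120") – bytes, not video frames.
    func send(event: String) throws {
        guard !session.connectedPeers.isEmpty else { return }
        try session.send(Data(event.utf8),
                         toPeers: session.connectedPeers, with: .reliable)
    }

    // MARK: MCSessionDelegate – only the data callback matters for this sketch.
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        print("\(peerID.displayName): \(String(decoding: data, as: UTF8.self))")
    }
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}
```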

What about an actual television?

I personally think there is a great opportunity for somebody to disrupt the TV space with the smarts found in Apple TV. TV manufacturers struggle with software and UI – the smartest “smart TVs” out there offer horrendous software and services from every angle compared to using an XBox, Playstation, Roku or Apple TV as the main device. There are lots of opportunities to beat existing TV folks, especially for the likes of Roku and Apple, who have clean UIs. Apple is also in a unique position to sell high-margin flat-screen TVs from their retail locations – many people underestimate the value of those retail locations being so near to consumers. That said, I don’t think it makes sense for Apple to sell an all-in-one Apple TV + screen in 2014, or possibly ever, for two reasons:

  1. it’s not a great idea to couple the long-term purchase of an expensive screen (the average replacement cycle for TV screens is 3-5 years these days) to the goal of an every-year-improving Apple TV set-top-box. Consumers will spend $99-$149 for easily-updated devices that get better and better along some axis, and there is tremendous room for hardware improvement in the CPU and GPU of this device while retaining the $99-$149 price-point.
  2. the big transition coming in screens is UltraHD/4K – if Apple wants to start selling TVs, I would expect it to do so by selling a 4K TV + Apple TV bundle and encouraging you to replace the docked Apple TV portion yearly for $99-$149 rather than having you replace the whole screen. My other guess is they would do this kind of work only after securing enough capacity for retina-quality displays for all MacBook Airs and iMacs, so 2015 at the earliest.

Will it compete with XBox and Playstation?

In the short-term, not exactly – the types of games that can be written for a device with even these greatly improved specifications cannot, I think, be as immersive and intense as the sports, racing, and combat titles which dominate sales on traditional consoles. You will probably hear it dismissed by gamers and gaming-industry executives at launch because it won’t have the power to run these types of games. However, longer term it will have tremendously disruptive effects on consoles. In fact, life-threatening effects, such as:

  • Raising the user-interface and user-experience bar dramatically. Many of the UI atrocities I documented, and hundreds if not thousands more (like how long games take to load, how you interact with streaming services, etc.), are taken for granted on traditional consoles. A simplified, more iOS-like approach to how applications are installed, save data, launch, and switch will make consumers far less tolerant of existing consoles. Neither Sony nor Microsoft has shown great ability to simplify their own UI or influence the UI of their games.
  • Driving down game prices. A more open distribution channel like the App Store, plus an inexpensive but not-subsidized initial console, creates an ecosystem where app and game prices will compete and get driven down. Sony and Microsoft need to recover money lost on the console itself from game sales, so they act to curate and control titles and keep prices high. A maker whose console already earns a 40% margin has literally no incentive to keep content prices high – in fact it will actively work to get content as cheap as possible, as free as possible, to create customer demand.
  • Shortening the console lifecycle. Apple TV hardware updates yearly, like iPhone and iPad, and it will continue with better graphics, more memory and storage and things like support for 4K resolution output. Shorter cycles do not fit the current console business model where a 5-7 year cycle makes it possible to improve manufacturing yield, decrease production costs, and recoup initial R&D.

How is it different from Ouya or other “micro-consoles”?

An Apple TV running apps and games is actually a validation of many of the concepts of “micro-consoles” like the Ouya, except that it will likely not be as open a platform as most micro-console proponents desire. It will offer developers a much simpler and cheaper path for development and distribution than existing consoles. What truly makes it different is that it would be a unified offering from Apple – Ouya is an Android-based micro-console, so it can bring plenty of Android developers to bear, but it is a custom App Store and a custom product, struggling to gain momentum and sales. Apple will have a much easier time selling more Apple TVs – adding apps and games will increase the value proposition of the current device.

That’s my $0.02. I’m looking forward to developing for an updated Apple TV.

TILE: find words fast

30 Dec

I’m excited (and not just a bit self-conscious) to be releasing TILE for iPhone and iPad. TILE is the first app from the AppsGuild studio I’ve spun up here in Seattle. I’ll write more about AppsGuild (and what makes it a new, interesting, and very different kind of organization) soon, but for now I thought I’d tell you about TILE.

Oh no! Another Word Game?

Even if you’ve played Boggle®, Scrabble®, LetterPress, Words With Friends, SpellTower or any of the other 10,000+ word- and letter-based games on your devices (like I have), I think you’ll find there are several things quite different about TILE, things that I like, things that I think the word-game-hungry might also like. I think this because I really still enjoy playing it, even after playing many hundreds (thousands?) of times while building and testing it, and because my 20 beta testers played an astonishing 6,000 one-minute rounds in the first 30 days of playing with it. Zoinks!

As you might expect from an xBox founder, I like games. I’ve always been a big fan of word- and letter-based games, from hidden-words to crosswords, anything. To me they tap into my reading and thinking brain and don’t feel like a waste of time. I don’t mind my kids playing these kinds of games. I have a lot of great memories of playing Scrabble and Boggle board games growing up, and of playing GrabScrab at iLike (though I honestly wasn’t very good at that one). I really enjoy LetterPress and Words With Friends on my phones and tablets. I even like Hangman and still play it with my kids! But my very very favorite games on mobile devices are quick-to-play, like Dots, or combine quick play with head-to-head strategy and real-time competition (Galcon on iPhone is my all-time favorite). As much as I like LetterPress and Words With Friends and Scrabble (and other turn-based games like Hero Academy), they are just not my favorite pacing for mobile gameplay. I like to pick up my phone and whittle away a few moments if I have some to spare, or to take a break from something and clear my head with something fun. Cut The Rope and Angry Birds are fun and fast to pick up and drop, but they are really about hand-eye coordination – though I thoroughly enjoy them, I honestly feel like I’m wasting a lot of time playing them. In the industry these are the “snacking” games of the “casual games” category: a quick bite that doesn’t take a lot of time or effort. TILE is perhaps a more brain-healthy “snack” — certainly I am pulling words out of my long-term SAT-taking memory to improve my high-scores!

Another impetus is that I find turn-based games just too slow-paced, or I have to start many concurrent games if I want to spend more than 30 seconds with the game. I get irritated with LetterPress opponents who take a long time to take their turn, and the notifications from the game interrupt me over the next several hours – ugh. I have also found most word and letter turn-based games to be full of people who cheat to win – given no time-limit on your turn and an internet full of Scrabble- and LetterPress-cheat sites, I just don’t enjoy playing against people who brilliantly come up with words like “quezal” regularly.

I tried off-and-on for many, many years to build a good version of multi-player GrabScrab for mobile, and I think the solo play version captures the adrenaline while the coming-soon head-to-head Versus version of TILE will capture the true anxiety of that fun game in a shorter time-format.

Building and building and building and building…

Late in May of 2013 I woke up at about 2am with the fully-formed idea for TILE, finally realizing how to marry ideas from LetterPress and Dots and Galcon to yield a GrabScrab-like, fun, and easy-to-snack game. I cracked open Xcode and wrote a quick solo-mode proof-of-concept in about 2 hours and found it fun to play. Shockingly, my wife also found it fun to play! I then spent about 10 days putting together a smoother UI, high-score infrastructure, and a more robust dictionary behind the application before putting the solo-play mode into beta with about 20 friends. When they played it over 6,000 times in the next 30 days and had friends borrowing their phones to play and sending me beseeching email to be added to the device-limited beta, I was pretty sure it was actually fun and perhaps a bit addictive. In hindsight, I could have (and should have) shipped TILE right then – it was almost in its currently shipping form, minus some scaling and UI tweaks. I was delayed a bit by summer (my wife and I dedicate summer to spending time with our kids and family and traveling with friends), but through the summer and fall I was also delayed by an almost Sisyphean obsession with the head-to-head Versus mode of TILE. After an inordinate amount of coding, test-harnessing, testing, debugging and feedback on the multiplayer match-making and head-to-head mode, I decided I really needed to rework its UI further before releasing it, so it will come in a future update.

Pricing: It’s not free-to-play

For now TILE unabashedly costs $0.99 – you get exactly what you pay for, and it costs about a third of your morning coffee while delivering more entertainment. If you like word games at all, I promise you’ll get $0.99 worth of real enjoyment from TILE in the first 10 minutes of playing. I don’t run ads between every few games like most word games do, and I don’t try to squeeze money out of people for different colors, timers, or cheats. I do hope to eventually come up with an unobtrusive way to charge for useful or fun things within TILE so that you won’t need to pay for it up-front, but this simple low price is where I’m starting, because I myself prefer paying for things simply and just once.

Thank You!

To all my beta testers, especially but not limited to my wife, Adam Doppelt (and his wife Shannon!), Hadi Partovi and Phil Kast who gave me such great feedback — thanks so very much for your advice and support, I promise I’ll take more of it to heart more quickly in the future, especially the “ship sooner” advice.

I very much hope you enjoy TILE and look forward to your feedback about it.

follow TILE on twitter | share TILE on facebook

x’ing my fingers about xBox

21 May

No surprise I wasn’t invited to the big tent. Despite my recent venting of frustration on the state of xBox, I’m honestly crossing my fingers that the xBox-720/xBox-8/xBox-∞/neXtBox XBox One announcement goes well today and they have a successful launch this fall.

It will go well if they demo a game that blows our mind and that we want to play. It will go poorly if there is no unique game demonstration and we are instead shown a lengthy set-top-box-tv-app-platform-bullshit-bullshit demonstration.

Great, must-have games sell consoles. PacMan, Pitfall! and Asteroids moved the Atari 2600. Mario sold the NES and Super NES. Zelda, 007-GoldenEye and MarioKart sold the Nintendo 64. Tony Hawk, Metal Gear Solid, and Gran Turismo sold the Playstation 1. There was that Halo xBox thing that seemed to have worked pretty well, also. This isn’t a truism just about consoles – it’s a truism for all computers and operating systems and phones and tablets and e-readers and devices as well. PCs were originally mostly purchased for Multiplan – there’s always a “killer app” or set of apps or content which kickstarts the market. Since the new xBox and Playstation 4 aren’t going to be backward-compatible with their old titles… there had better be some very unique and must-have content ready for launch.

Sony’s Playstation 4 announcement event and PR did a good job focusing on games and game-tech, and everything about the event, the graphics, the PR was fine-tuned for gamers. This was solid and refreshing versus the guide-tv-movies-netflix focus that Microsoft has been spewing of late. I remain impressed that Sony is holding back on showing the physical console – this is terrific restraint and gives them strong opportunities to take back press coverage from Microsoft after today’s xBox announce. But the games – inFamous Second Son, Killzone: Shadow Fall, Watchdogs, some others – while truly fantastic-looking, did not look particularly approachable or compelling to me. I think Sony had better have some very compelling games up its sleeve for launch. These are good, but not buy-a-new-console good, in my opinion. And I don’t seem to be alone in this opinion.

Alex St. John (another former Microsoftie and, oddly, also an Alaskan) wrote an insightful post giving his perspective on the new xBox — it’s worth a read; I agree with quite a bit of it. His point that xBox is being run and guided by non-hardcore-gamers and old dudes who think more about places to watch Sponge Bob and listen to music than they do about games is particularly spot-on. I also agree that technologically it’s good that Microsoft is returning to more of a PC architecture and hopefully more of a PC operating system kernel (likely, since Dave Cutler is working on it these days). I was the original proponent of xBox using the Windows kernel so that we could share technology and improve the PC+Windows experience with everything we learned about stability, fast-boot and UI/UX from consoles. This goal took a bit of a back-seat in the original xBox and was completely tossed out in xBox 360, and as Alex notes, it caused damage to the PC game ISV community, which became fractured — DirectX was consistent between PCs and xBox, but everything else about programming them was different.

Like Alex, I’m not overly excited about the expected hardware. I do think it’s interesting that it will have HDMI-in as well as HDMI-out, there are cool things to do there — if done right (a big “if” given who we’re talking about) you could make the console the primary input and control point for the TV, as I pointed out in my post about what I’d want in an Apple TV device. This is a strategic point to own.

In any case, I’m looking forward to watching the live-blogs of the launch. If the device is quiet, small, and really fast and has at least one cool game, I will of course get one.

Actually, I’m lying. Even if it’s big, loud and slow, if it has a kick-ass unique game that looks playable and approachable and fun I will buy one and many others will as well, despite whatever horrific TV-guide and Blu-Ray and Netflix and lame third-party apps are announced and demonstrated in the big tent today.

(update: the announcement was almost the very worst of every horrible possibility I could have imagined. TV-focused. No live-game demos. Bad jokes. Horrible presentation. No game footage at all until 35m in – overall we saw more Price is Right footage than game footage. It’s not clear yet if the launch titles will be super-compelling; we’ll have to wait until E3.)

Morin Tastes Own Medicine

7 May

Last Friday Facebook blocked Path’s “Find Friends” feature over brewing spam complaints. Shortly thereafter Dave Morin, Path’s CEO, proudly stated that “Path does not spam users,” but the tactics he is defending today are the very same practices that he himself cracked down on as “spam” when he was running the Facebook Platform. I watched the crack-down from the front-row at iLike, an early Facebook platform partner.

Compare Morin’s description of Path’s “feature” to the Facebook policy that was put in place while Morin was the head of developer relations for Facebook Platform. The official Facebook policy was that apps were forbidden from doing exactly what Morin now calls “not spam”:

Misleading Notifications To Users Will Be Blocked – by Dave Morin, Thursday, August 16, 2007 at 2:31pm

Over the last few weeks we have noticed several developers misleading our users into clicking on links, adding applications and taking actions. While the majority of developers are doing the right thing and playing by the rules, a few aren’t – and are creating spam as a result. Going forward, if you are deceptively notifying users or tricking them into taking actions that they wouldn’t have otherwise taken, we will start blocking these notifications. The bottom line is that if the notifications you send are the result of a genuine action by a Facebook user and that action is truthfully reported to the recipient so they can make an informed decision, you should have no problems. If you do find some notifications blocked, it was probably because this wasn’t the case and we will be happy to inform you of some best practices by other developers that have prevented this issue. If you've been blocked by us for deceptive notifications, the error message you will see is - 200 Permissions Error.

That quote is from Facebook’s 2007 anti-spam policy – pretty much the same policy that’s still in effect and that caused Path to lose access – and it is the exact opposite of Morin’s current opinion of what constitutes spammy app behavior: https://developers.facebook.com/blog/post/26/

Specifically, an app with a pre-selected (opt-out) checkbox sending messages to your entire address book if you simply pressed “OK” was what Facebook, under Morin’s oversight, considered to be a punishable violation of the terms of use. Let me say that again: the exact form of invitation interface Path uses was specifically called out and forbidden by Facebook under Morin’s oversight, in order to preserve the sanctity and user-experience of the Facebook platform ecosystem.

Obviously the tables are turned now that Morin is trying to build his business rather than regulating an ecosystem. The early Facebook Platform was so overrun with user-acquisition spam that it’s easy to understand why Facebook took the measures it did to crack down on the incredibly aggressive techniques used by ethically challenged companies abusing the system to grow their user-base.

So it really doesn’t surprise me that Path’s access to Facebook friends was blocked, and in fact I’m glad as a user that Facebook is enforcing the rules. It does seem disingenuous at best and genuinely ethically questionable to me to spend your last job regulating such spammy activities and enforcing policy that forbade them, only to turn around and build a company based on just those activities. To then publicly defend your actions as “not spam” is just sarcasm.

Why do I care? I was CTO at iLike, and we were a launch partner on Facebook platform in 2007, at one point acquiring over 10M users in just 2 weeks. It was an amazing roller-coaster ride in engineering, operations, and business development. Although we used Facebook sharing and invites and we A/B-tested our notifications like crazy, we shied away from the extremely spammy tactics of the Slides and RockYous and others of that era, yet Dave Morin’s platform team punished good and bad apps alike. We watched the Facebook platform devolve into a sheep-throwing race to the bottom for users, and a cat-and-mouse game between aggressive apps skirting rules and the inconsistent Facebook enforcement of that time. We always aimed to keep iLike users’ best interest first and so focused our efforts on creating user value around music, concerts, and artists. Our viral user-acquisition growth stuttered and suffered, but our artist and user happiness kept growing, just more slowly. Because I think we had a useful app with useful notifications and a company culture of respect for users and their privacy, I personally wish Facebook’s platform team had acted to block out and shut down the aggressive apps and companies doing bad things, rather than creating a treadmill of technical restrictions for all apps which hurt good apps while also punishing the bad. I wish they had implemented, early on, a simpler set of broad rules with an active review and harsher enforcement policy, more like Apple’s App Store. They’re doing better at this now by shutting out Path over this kind of violation.

As for Path, I was skeptical but willing to give them a second chance after the egregious Apple address book issue last year. Now I think their M.O. is clear. It’s hard to imagine ever trusting Path to put users first when the CEO can so completely change the definition of “not spam” from one job to the other, depending on which side of the table he’s sitting on. What exactly do Dave Morin and Path believe is right for users and their product? Whatever works right now? Whatever they can get away with for growth?

Dealing With xBox Always On

8 Apr


xBox’s @adam_orth “deal with it” mashable.com/2013/04/05/xbo…. what a prick. in other news, new xBox controller relabels ABXY buttons FCKU.

— Nat Brown (@natbro) April 5, 2013


After the Adam Orth PR disaster and subsequent apology, several people have asked me what I think about the rumors of always-on digital-rights-management (DRM) in the next-generation xBox and the potential for it not to support used games.

I don’t have definitive knowledge here. I can say that I have been hearing conflicting stories from insiders about what “always on-line” means, and it sounds as if there is confusion internally and externally about how users and games will be authenticated to xBox Live (XBL) accounts and to the console, and it’s all about the used-games market.

Purely from an end-user simplicity and usability perspective, I personally think it would be incredibly stupid to require on-line access all the time. Always-on-line authentication for instant piracy prevention is something that overly-anti-piracy numbskulls at Microsoft have been suggesting for 15+ years for Windows and Office as a way to combat piracy on PCs. It has never been reasonable to do this given the spotty connectivity of the world’s computers, although some of their other dumb-crazy/-irritating ideas — DRM companion chips, Intel “secure-boot”, etc — have made it into the PC ecosystem, just as they stuck these overly-protective, mostly ineffective, and expensive things into the original and subsequent xBox. For the most part, all of these mechanisms do little to protect from hard-core pirates and simply cause problems for average users and hobbyists who aren’t trying to pirate but are just exploring their paid-for hardware devices. And they stifle independent game development quite a bit. There are so many edge cases around missing credentials and delays propagating authentication and revocations that I think it’s simply a very bad idea to try to build always-online, instant authentication into consoles.

So maybe, maybe xBox will require always-online access and try to perform real-time piracy prevention. If they do, I think it will become another Stupid, Stupid xBox! moment for them, because users will hit the many horrible edge cases and hate it.

What I think is vastly more likely, and what has been misunderstood in these always-on leaks and speculation, is a requirement that online checks happen eventually but not instantly. Specifically, on-line checks:

  1. initially or within N hours/days of a new or used game first being inserted or launched, so that the physical disc can be paired/bonded to your XBL account and to some degree to your console, and sometimes (re)paid for, and
  2. occasional on-line checks to de-authorize discs/content that has been paired to another account or console.

The specific purpose of the on-line authentication checks and pairing of content to the XBL-account/console is to make sure the game studios can take a cut of used-games downstream. Today I can buy a brand new (disc-based) copy of a game, play it out for 72hrs, then resell it for almost full price. Game studios aren’t too keen on this. What they would prefer: I can buy EA’s HotNewGame for $70, play it out, then sell it to my friend, Abe, as a used title or to GameStop for some money, but when Abe or some other user inserts the disc that was paired to my XBL-account, eventually (within some hours or days) he will need to pay up to EA to enable the used copy to continue working. There’s not much difference between a time-limited free-trial and a used game at this point.
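
If that’s the model, the logic is simple enough to sketch. Everything here (the names, the 72-hour grace window, the flow itself) is my speculation about the rumored scheme, not anything confirmed:

```swift
// Speculative sketch of deferred (not always-on) disc authentication.
// Names, the grace window, and the flow are guesses at the rumored scheme.
import Foundation

struct DiscLicense {
    let discID: String
    let pairedAccount: String   // XBL account the disc was first paired to
    var lastOnlineCheck: Date   // last successful phone-home
}

enum PlayDecision { case allowed, needsOnlineCheck, needsRepurchase }

func canPlay(_ license: DiscLicense, account: String,
             now: Date = Date(), graceHours: Double = 72) -> PlayDecision {
    // A different account inserting the disc must (re)pay before continuing.
    guard license.pairedAccount == account else { return .needsRepurchase }
    // The paired owner can play offline within the grace window, but must
    // eventually check in on-line to re-validate and pick up de-authorizations.
    let hoursSinceCheck = now.timeIntervalSince(license.lastOnlineCheck) / 3600
    return hoursSinceCheck < graceHours ? .allowed : .needsOnlineCheck
}
```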

Making the used-game market less profitable for consumers and more profitable for the game studios has always been the intent of controls and limits on used games, just as a small closed market with tight DRM and limited indie-developer access is intended to prop up game-title prices for studios (and for the console maker, who, let’s remember, needs to see a lot of licensing revenue come back to pay for the hardware losses). On-line checks and title pairing to XBL or device would help make the knee-jerk “no used games” decision less a binary and PR-unfriendly on/off for the platform, and let the market find reasonable prices for used games. If you as a buyer know that a used copy of EA’s HotNewGame bought from another player for $20 then costs another $50 to activate, you’ll just buy a new copy for $70. Or you will negotiate down from $20 for the used copy to a reasonable rate.

If this pairing and non-instantaneous, occasional on-line authorization and de-authorization of content is indeed what Microsoft and the game studios are dreaming up for the next-generation xBox, I actually don’t think it’s the worst thing in the world. I do think there are a lot of obscure errors around network connectivity, key-server outages, and revocation lag that can crop up even when you go with deferred authentication, which I would hope they simplify and eliminate, erring toward making gameplay work for the majority of users rather than ensuring tight control 100% of the time.

I also hope that game studios charge a reduced rate based on how long a title has been out and leave a little oxygen and profit in the used-game ecosystem for users – if they don’t, that will cause further PR backlash.
