The Fungibility Myth

3 Dec

Ernest Hemingway wrote extremely popular books that made lots of money. We’ll hire a writer to write more books similar to Hemingway’s. They will be popular. Then we’ll hire 10 more writers and sell more popular books, faster.

At every level this is clearly a stupid and worthless business plan, but substitute engineer or product manager for “writer” and change the product from a book to a consumer device, mobile app, or enterprise service, and suddenly you’re looking at many tech startup and bigco project plans.

When you do those word substitutions it’s cheap and easy to criticize and correct peripheral stupidity. That category is hit-based and customer acquisition is expensive, you’ll say. There are already lots of similar products with established branding, how will they differentiate and become popular, you’ll say. That service needs an expensive hands-on sales team. Hm, is the addressable market really as big as that? Let’s research. And refine. And adjust.

Fine, play some MBA games on it. But what makes the Hemingway business plan stupid at its core is that arbitrary writers are not fungible into Hemingway or JRR Tolkien or JK Rowling. Neither are arbitrary engineers, product managers, or partners necessarily fungible into the resources a specific product needs.

This is why great companies, smart investors, and venture capitalists focus so intently on the individual founding team members, the team leads, or on a team’s ability to hire and create and maintain the culture conducive to its products.

I’m not saying resources aren’t ever fungible. Some products and efforts are routine or well understood, and that’s OK.

But certain products, certain interesting new products, certain great products are great entirely because of the one or two (sometimes hundreds of) Hemingways who are uniquely capable of bringing them to life. No amount of wishing, willing, cajoling, or micro-managing can squeeze water from the rock. If you don’t believe individuals matter, even to huge projects, you’re falling for the fungibility myth.

Apple TV Pre-Repercussions

8 Sep

If you’ve followed my prior posts about what’s wrong with XBox, how game console ecosystems aren’t working and how to fix them, what the hardware in a 2014/2015 Apple TV will be capable of, and how strong Apple’s advantages in its silicon and developer ecosystem are, you know I believe there’s more than games at stake with Apple TV. Not just console makers, but also cable companies, ISPs, and television manufacturers are in for a world of hurt as Apple App-ifies television channels, cable bundles, games, and other services with a device that defies all existing television business models.

Apple TV will be the first television-attached general purpose networked computer with a permissionless content ecosystem built specifically around how people interact with television screens. This is a very big deal, because software eats the world of television only when there are enough people who can afford easy-to-use hardware and enough creators capable of keeping up with a growing demand for content.

This is the first of a few posts about Apple TV and its repercussions. This one covers the expected Apple TV product, explains why an App Store changes everything, not just games, and concludes with my take on whether Apple TV kills consoles.

The Apple TV Product

If the rumored hardware specs and pricing are correct, they don’t differ too much from my own guesses except for:

  • A8 System-on-Chip (SoC) processor. Using a clock-, RAM- and GPU-boosted A8 instead of the A8x or newer A9, and thereby having one fewer CPU core, is slightly surprising. This may just confirm prior rumors that they were holding the already-designed product in the wings for a year while waiting for other business and content deals to close. As much as I love huge texture fill rates and mass quantities of GPU cores, Apple does not fight on specs – if the chip is on paper less powerful than an A9, it will not detract from Apple TV gaming – if anything, it tees up next year’s hardware update easily.
  • No 4K support. This was always a bit of a stretch goal technically, and I only suggested it because supporting high-resolution screens and having the best-resolution trailers is very Apple. The current HDMI version 1.4 hardware only supports 4K video at 24Hz refresh rates, which is horrible beyond words on televisions and monitors (though it works fine in digital movie theaters). The HDMI 2.0 spec, which supports 4K@60Hz refresh, was finalized just two years ago, so that ink is barely dry by hardware standards. I think they could have done HDMI 2.0 this year, but it’s not a huge loss, and again it tees up next year’s upgrade.
  • Siri and a new remote supporting universal voice-based search – what I had hoped for but wasn’t expecting. The Apple TV remote and 10-foot UI were already best of breed, but as I said last year, a microphone and Siri button on the remote is killer. Having tried perhaps every form of television input device ever shipped (and many that never did), I can honestly say remote-based voice search simplifies the 10-foot UI dramatically and is magical. Voice search was the single stand-out delightful feature of Amazon’s FireTV, frustrating there only because it lacked universal/cross-provider search. Voice search on XBox One has never worked with my television or movie content, and initiating interactions with “trigger words” yelled across the room, followed by additional yelling towards the television, simply doesn’t feel natural.
  • Motion-sensing and a touch-pad on the new remote I did not expect. Traditional 10-foot UIs, even with added voice recognition, remain inconsistent for entering voice-difficult text – things like network passwords, email addresses, or the names of people and places are still hard and awkward. I’m tentatively excited by additional inputs and sensors on the remote because I hope Apple has at least one cool text-entry trick up its sleeve that it will make available to all apps (text entry in console games is utterly inconsistent; it never seems to be provided by the operating systems). If the remote manages to enable some existing games with touch-based input, or new types of games centered around gestures or motion, that will obviously be an interesting opening for existing and new games tuned to the remote. In general I have not been impressed with adaptations of touch-based games to 10-foot UIs and one-button remotes (or touch adaptations to gamepads). Games and their preferred input mechanisms tend to be tightly coupled.
  • Infrared (IR) transmitter on the remote. Presumably this is to take over power and volume controls so you can use the single Apple TV remote and discard the horrible one that came with your Samsung TV.
  • Storage capacity & pricing. I predicted two models, 16GB/32GB at $149/$249, versus the rumors of 8GB/16GB at $149/$199 or a single 16GB model at $149. I was more concerned about storage space for large games, but iOS 9 App Thinning and On-Demand Resource caching alleviate my concerns, though they do require developer effort. On-demand partitioned resource delivery from a CDN is a great example of a transparent system service that stagnant operating systems and consoles have failed to give developers and customers – have fun instead on XBox One and Playstation 4 manually removing and moving your games and saves around using a difficult 10-foot UI. Similar fun managing your apps and settings on Windows (or Mac).
  • No word on an Apple-designed bluetooth gamepad. The rumors suggest gamepads are left to third parties and the MFi program, which would make them uncommon among consumers and more difficult for developers to depend on when targeting Apple TV. If so, this suggests Apple may be continuing to skate around a direct confrontation with existing consoles. This could be because they want to avoid direct comparisons, or because they believe gamepad-based games on Apple TV would compare unfavorably to similar console titles on the current Apple TV hardware. Or could it be because console publishers are giving them grief about console revenue cannibalization? I’ll be contrarian and say I still expect to see an Apple gamepad on September 9 – a beautifully designed gamepad that costs $15 to produce and sells for $79 seems too Apple-like to pass up, and gamepads are not just a well-established input mechanism for games on television, both for users and developers, but also very good input devices.

An App Store Changes Everything

The world has had 8 years to internalize how valuable an open, (virtually) permissionless App Store is for Apple’s mobile product ecosystem, and how it has created a virtuous cycle (like the Windows ecosystem long ago), helping grow and being grown by its vast developer community. And yet no television device has copied even a fraction of its blueprint, nor seems to understand the interrelated features of its success. There are set-top-boxes and consoles where applications have to pay the owner/operator for placement, reminiscent of carriers controlling phones in the pre-smartphone era. There are open platforms with expensive paid tools and poor software development kits (SDKs) that can’t attract developers even when they are paid to port their apps. There are open-platform micro-consoles and HDMI dongles with great SDKs but inadequate processing power for interesting applications, no payment infrastructure for content developers, and no distribution plan or marketing budget. Consoles have come close, but their poor UIs and a high-priced, subsidized hardware business model, which requires controlled distribution to prop up pricing, are boat anchors. Consoles barely foster proper independent game development, afraid of the implications for big publishers. They offer janky tools and SDKs to controlled groups of elite developers, stalling innovation and preventing open support communities from forming. Console setup doesn’t demand a payment instrument, so developers can’t always depend on the digital marketplace or in-app purchases or subscriptions. Consoles make setting up accounts and payments very hard, have high-friction stores with inadequate payment types, and make business as well as UI distinctions between games and things like video streaming services or applications. None of this happens in the Apple App Store. (It’s worth noting that almost none of this happens in Valve’s Steam, either.)

It’s not just games and the new kinds of TV apps developers will dream up, but all the content we have traditionally watched on channels or bought in cable bundles. It will all become App-ified. Try using your Comcast or DirecTV set-top-box to add HBO service to your account – one of mine asks me to call customer service to confirm, and the other has no option at all, though I can do it via the website, and it takes 12 hours to take effect. This is for services these operators have already approved – if an independent movie or television producer creates a small-budget show, what are the chances it could appear on some unused channel slot in your Comcast or DirecTV program guide? Zero. Not so with Apple TV – if you’ve got the rights, build an app and charge what you want for it. Permissionless innovation allowed and encouraged here.

But Wait, Aren’t TVs Being Killed By Mobile?

App-ifying television sure sounds great, but isn’t television a zombie? Many tech folks see the millions of televisions around us as dumb glass which mobile devices in our pockets will eliminate or just drive blindly. Lots of trends support this theory – increasing time spent on mobile devices, less television watching and video game playing, declining television advertising, and slow replacement cycles for televisions, to name a few. But I say look more closely. Mobile is killing television in a different way than you think. Technology from the mobile supply chain is taking over televisions and their peripherals, making them cheaper, faster to iterate, and giving them better, simpler, more stable software. Mobile is killing television by unleashing hordes of mobile developers and powerful SDKs onto televisions where previously no software could run. There are 4,000,000,000 square feet of LCD glass shipping worldwide each year, about two-thirds of it in televisions. It’s worth thinking about how we interact with television screens and considering what set of jobs televisions are hired to do. What might TVs continue to do, or even do more of, as they lose some of their traditional jobs and attention to mobile devices? What additional features could help them do more jobs and be better at some traditional jobs, even jobs done in concert with mobile devices?

Three Jobs Television Screens Are Hired For

The primal job consumers hire televisions for is watching television, movies, and sports. Although time spent in front of televisions watching “passive” entertainment is declining, especially among younger Americans who have shifted to different entertainment or to other devices for video, there is still a lot of reality, sitcom, news, live events like sports, DVR, and DVD/Blu-ray watching happening in front of televisions, and it is controlled with remotes. One remote controlling power, volume, and inputs. Perhaps the same remote or another (or several!) for a cable set-top-box, for DVD or Blu-ray, for TiVO, for DirecTV. Lots of complex remotes with too many inscrutable buttons. Many different, complex, and inconsistent 10-foot user interfaces. Apple TV’s simple UI and new bluetooth remote with voice search (and a bluetooth beacon which can trigger apps on your phone as you come near) will radically improve consumers’ experience with what has traditionally been a very frustrating task: changing channels and finding content. Apple’s base subscription service will apparently include local channels, and the metaphor shifts from channel numbers, and having to map shows and brands to those numbers, to simply named brand containers: ABC, CBS, etc. nationally; KIRO, KING, etc. locally; HBO, FOOD, etc. from premium services. De-coupling channel numbers from brands to simplify discovery has been happening in various set-top-boxes from Roku, DirecTV, TiVO, XFinity, and even already in Apple TV, but adding local channels will make a tremendous difference for typical consumers. After initial setup, I suspect most Apple TV users will never use their old TV remote to change inputs again. Ever. Apple is entering the television market through its most popular activity – watching – using the most magical version of a boring and until-now hated television remote control that consumers will ever have experienced.

Another important job consumers hire televisions to do is gaming, to the tune of $50bn per year worldwide in console hardware, software, and subscriptions. We buy a video game console and attach it to one of our television’s inputs, use the TV remote to turn on power, choose the game input, and control sound. We then shift to a gamepad to navigate the 10-foot user interface “dashboard” and to play games. By including an App Store on Apple TV and making games a category of navigable content just like movies and television shows, a lot of the small frustrations that parents, spouses, and roommates have with gaming on a shared television screen – how do I change inputs? why is the screen blank? – are eliminated. Because Apple TV games are purchased, installed, and launched just like on smartphones, without inserting disks or navigating an unfamiliar store, games and other apps will be more approachable for the many users who have experienced mobile app stores. If some games can be controlled with the new remote, or if there is an accessory “standard” format gamepad that is commonly (10-15%) purchased for gaming, this will bring more users to gaming on television in the near term. Microsoft attempted to enter and shape the television market through this other popular activity – playing video games – and also invested in a terrific input device, the XBox controller. But it never shipped a simple standard remote to ease the watching/volume/playing discord, nor did it simplify its UI for watchers or casual gamers. In 2014 it tried to bypass the television remote with Kinect gestures and voice, and it tried to become the root UI for the job of watching television, without negotiating content licenses, by including an HDMI pass-through; but these features do not work magically, and the device’s price and branding have limited its mass adoption.

The last major job we hire televisions for is, broadly speaking, signage. Restaurants, bars, grocery stores, building lobbies, elevators, bus-stops, schools: in the lounges and reception areas of every public and private place where people congregate or wait, from airports to subways to the line to get your driver’s license, there are screens. Typically viewers don’t interact directly with these kinds of screens; there is no remote or input or even UI for us as users – we just watch. But there is a vast industry of vendor-specific custom hardware, custom software, advertising, control, and maintenance which uses computers – mostly small form-factor PCs running Windows or Linux, and embedded systems controlled by PCs over the IP network – to drive digital signage with static, video, and dynamic “custom application” content. Many of these signage screens plan to incorporate NFC, beacons, cameras, and other sensors in future versions. Apple TV is a low-power $149 device with solid security; add kiosk-mode UI, enterprise and “fleet management” features, a gigantic developer ecosystem for creating custom applications, and best-of-breed built-in video and audio capabilities, and it completely changes the economics of signage. Even if Apple pays zero attention to features for this market, it will become the dominant device in use, and a great opportunity for iOS developers.

So, Does Apple TV Kill Consoles?

In a word, no, but it’s complicated. The Apple TV is another “peace dividend of the smartphone wars”, and not just its hardware. Sure, it is inexpensive to build and draws little power because of the investments in silicon that Apple could make thanks to the iPhone and the global mobile supply chain. But iOS’s simplicity and ease-of-use, its audio and video capabilities, its digital store, identity management, payment infrastructure, and security all come from investments in both desktop and iPhone software driven by the smartphone wars. The tools, apps, and gigantic developer community are the result of the war, too – developers have become an extension of Apple’s mobile supply chain, ready to apply themselves to new Apple product categories like Watch and Apple TV.

Apple TV will draw more people to gaming on televisions, so “television-based gaming” as a category will grow due to Apple TV in the near term. Consoles will continue to meet a very clear consumer demand for higher-end gaming using gamepads, though many independent game developers and even high-end publishers will abandon lagging consoles over time for the less restrictive Apple TV market as the Apple TV silicon matures, and for PC gaming on Steam (and, with luck, on Windows). If Apple introduces a gamepad or has a strong attach rate for third-party MFi gamepads, I expect that within a few years a lot of current console buyers will find Apple TV’s performance, price-point, and more appealing catalog of gamepad content their choice over the next Microsoft or Sony consoles, if those even get built.

In the next 3-5 years existing consoles will either die off or hold a smaller niche, leaving the majority segment of “television-based gaming” to Apple TV and other smartphone-hardware-based devices. Why? Price elasticity – the lower unit volumes they are capable of at their $300-$500 price point (a 12-36 month lag of the high-end PC gaming hardware supply-chain) – and the fact that they are anchored to the console business model, which tries to control developers, content, and prices. Content developers shift to permissionless markets with higher volumes every time.

No, consoles are killing themselves by not taking advantage of the mobile supply chain – hardware, software, and developers.

Can I Have A Show of Hands?

21 Apr

Advances in sensors and computer vision algorithms have introduced interesting new ways to interact with technology: body- and gesture-tracking for game consoles like Kinect, FOVE’s eye tracking, and detailed hand-tracking like Leap Motion and Nimble VR. Most recently I was pretty blown away by Microsoft Research’s HandPose, which demonstrates low-latency, highly accurate, and robust full hand and finger sensing from a single depth camera.

I think this is cool tech and hope folks keep researching it super hard. But I’ll tell you what: I don’t think hand and finger tracking is the main way we’ll be interacting with computers, especially in Virtual or Augmented Reality.

On the one hand, much of our interaction with the world involves hands and fingers. Humans have highly evolved hands and fingers, wired with enormously complicated, nerve-dense, small-muscle-dense, fine motor control for manipulating tools. We also have a highly evolved sense of proprioception – the knowledge of the positions of our body, and especially of our hands and fingers within arm’s reach, even in the dark or with our eyes closed, even when we can’t see our body or hands. Our strongest sense of 3D and depth perception, given our binocular vision, is within reach of our arms. It’s absolutely the case that our hands will be involved in input in VR and AR. But the reality is we manipulate our tools very, very subtly, with learned, tool-specific haptic feedback, very often out of our own sight using our proprioception. Our hands are rarely the tools themselves.

Forks, knives, spoons, bowls, dishes, stoves, faucets, and sponges. Our hands manipulate these tools to prepare, eat, and clean up food. Learning to use most of these tools is pretty easy, and there are both fine and gross motor skills involved (professional chefs can accurately chop hundreds of 0.5mm slices of vegetables).

Pianos, cellos, flutes. Our hands (and sometimes our mouths) manipulate these tools to create music, and there is an exceptionally fine level of arm, hand, and finger control and lengthy training involved in becoming skilled at making music – there may be a difference of 0.5mm or less in finger position and a few hundredths of a Newton of force between the correct and incorrect note played on a flute.

Keyboards, mice, touchpads, game controllers. These tools capture a tremendous amount of information from very small motions of our fingers, hands, wrists, and forearms. A keyboard keypress may involve your fingertip traveling 1-2mm, and typing whole sentences rarely involves fingers moving more than a few cm in any direction. Mice and touchpads allow very small pressures and sub-millimeter motions to accurately translate to complicated 2D interfaces. Gamepads are designed to accurately register joystick and trigger motions of less than 0.1mm radially or linearly at 60Hz or higher and skilled gamers can trigger multiple 1-2mm actuated buttons 30 times per second in brief bursts.

Mobile phone touchscreens and user interactions based on touch are fascinating tools combining the eye-hand coordination of mice with very fine spatial horizontal and vertical finger motor control and direct manipulation visual feedback. As a direct manipulation tool, touchscreens rarely take advantage of proprioception.

I think there will be some uses in technology and even in VR/AR for gesture-based, non-haptic interaction of the type we saw with Kinect and as we’re seeing with more accuracy and better hand models with LeapMotion and HandPose. But we are tool users. We hold our tools in our hands or we rest our hands on them or we lay our hands across our tools. And we move our tools around, sometimes out of sight behind us. We do this so that our tools amplify the millions of years of evolution that went into our brains, eyes, hands, fingers, and our sense of proprioception.

So far I’ve run across two tools that make sense in VR (and I’ve tried… all of them). The first is the game controller – the XBox controller and the Playstation DualShock4 both work well to give you a well-understood, input-dense tool which can be mapped to in-VR manipulations easily. I slightly prefer the DualShock4 since it also provides accelerometer and gyroscope sensor data, giving even more input. For gamers, it’s a positive that game controllers have an established haptic and proprioceptive model (gamers know the controller’s feel and layout without seeing it), but there are several negatives in my mind: primarily that gamers expect their left thumb to control motion, and also that your hands feel stuck together by this tool.

The second tool is the Valve/HTC Vive lighthouse wands (prototypes shown to the left, the actual devices are somewhat different). These controllers are, in a nutshell, unbelievably great. A highly accurate cross between a joystick and a touchpad beneath each thumb, a trigger beneath each pointer finger, and a gentle squeeze-actuated button on the barrel. Very clear haptic sense just in its shape. I have only used them a few times so I don’t yet have a sense of how dense the input can be, but it definitely feels right in VR to have both hands free and in a neutral position, to be able to use the fine motor control of your wrists, thumbs, and fingers to manipulate the environment, and to do so from any position you choose to put them (above your head, behind your back, etc). When you get a chance to try these, I think you’ll agree that it takes VR to a different level.

Apple TV + games: redux

20 Mar
Hey Tim, watch me pull a rabbit out of my hat!
Eddy, that trick never works!

Exciting rumors about an update to Apple TV with Apps coming this summer/fall!

In January of 2014 I wrote at length about updating Apple TV with stronger graphics, more RAM and storage, and App Store support, and about adding a bluetooth controller to create a super-competitive micro-console. In retrospect, given the barrage of new and updated products coming out of Apple this last year and troubles negotiating content deals, I’m not surprised they passed on updating Apple TV last year. What excites me is what another year of Moore’s Law and Apple’s insane miniaturization and supply-chain management could bring to Apple TV at $99 and higher price-points.

I suggested last year that they’d keep the older model Apple TVs at $99 for people who only want streaming, and introduce higher price points ($149, $249) based on storage. They’ve already dropped the current A5-based model to $69, so I suspect they’ll keep this price-point for a low-end model and probably update its hardware. I think there will be two higher price points based on storage, probably $149 for 32GB and $249 for 64GB rather than the 16GB/32GB I predicted which was cost-effective last year.

Updating the processor to something similar to the iPad Air 2’s A8x (2GB RAM, a 1.5GHz clock, a 1080p/HD-capable GPU) is a huge step up for games and apps – if you haven’t seen the CPU and GPU specs, it’s worth looking at AnandTech’s awesome comparison chart here.

The 2GB of RAM that was a stretch last year (but needed for gaming) is now baseline in the A8x. Given the state of HD and 4K televisions and 4K content, and the fact that HDMI 2.0 support is now more stable and broadly available in hardware, I suspect they will introduce some kind of A8x+ or maybe even an A9x with 4-6GB of RAM so that it can drive 4K resolutions. Higher resolutions, and deals with content partners to deliver 4K shows through their iTunes CDN, would be a very Apple approach to TV. My suspicion is that they will also speed up this device quite a bit – plugged into a wall and with a much larger body for heat dissipation, it could clock at 1.6-1.8GHz, which would be a big benefit for games.

If the rumors are true, it will have an App Store, and so we know there will be games. Will they build a great bluetooth controller to sell alongside Apple TV for $79, like I predicted last year? Yes, I’m sticking with that prediction. And I bet you’ll be able to bring your own DualShock 3 or 4 controller from your Playstation as well.

I’m looking forward to WWDC this summer – really hopeful that Eddy Cue will pull the rabbit out of the hat this time.

UPDATE #1: 3/23/2015 – a few people have asked me about pricing and margins. For a rough guess at the build cost of an updated Apple TV, consider the estimated fully-loaded BOM for an iPad Air 2 via IHS. Now strip out the screen, touch-screen, battery, and many complex sensors, reduce the body build costs, and pad a bit. You’ll probably find yourself at about $90-100 for a 32GB version and $110-120 for a 64GB version – healthy margins in the 33-56% range at $149-$249.
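That margin arithmetic can be sketched in a few lines. The build-cost figures below are the rough estimates from this update, not Apple’s actual numbers:

```python
# Back-of-the-envelope gross margin check using the rough BOM estimates above.
def gross_margin(retail_price, build_cost):
    """Gross margin as a fraction of the retail price."""
    return (retail_price - build_cost) / retail_price

# (retail price, low build-cost estimate, high build-cost estimate)
estimates = {
    "32GB @ $149": (149, 90, 100),
    "64GB @ $249": (249, 110, 120),
}

for model, (price, cost_low, cost_high) in estimates.items():
    worst = gross_margin(price, cost_high)  # highest cost -> lowest margin
    best = gross_margin(price, cost_low)    # lowest cost -> highest margin
    print(f"{model}: {worst:.0%}-{best:.0%} gross margin")
```

Running this prints roughly 33-40% for the 32GB model and 52-56% for the 64GB model, which is where the range above comes from.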

UPDATE #2: 3/23/2015 – if you want to compare Apple products and consoles based on GPU performance (great article), it’s also worth thinking about price. The CPU ($100) and RAM ($85-110) in Sony’s Playstation 4 and Microsoft’s XBox One are a $185-$210 contributor to the cost of building those consoles, with an additional $20 for a high-wattage power-supply to drive them and a supporting $30-40 HDD for storage. Contrast this with Apple’s $22 CPU+GPU, $8-16 for 2-4GB of RAM, and $20-$40 for SSD storage (albeit much less storage). It’s pretty great to utterly own your IP and supply-chain.

Mac + ARM: one more thing

11 Aug

I’ve written appreciatively of Apple’s vertical integration and also about their Ax architecture, noting that the Imagination graphics processor could be readily boosted within a plugged-in device like the Apple TV to deliver console-quality games. M. Gassée’s MacIntel: The End Is Nigh and Mr. Richman’s Apple and ARM, Sitting In A Tree suggest many potential benefits in supply-chain control, overall cost, and battery life from an even deeper “Mac+ARM” vertical integration strategy, which would shift Macs to use ARM processors instead of Intel’s more expensive, power-hungry x86 architecture. There are plenty of arguments to be made against such a shift to ARM, but there is a subtle trend at play that, if you are a programmer, could make Mac+ARM a compelling and strongly differentiated position. It’s an argument I haven’t heard from anybody else. It’s about how we improve the performance of this software stuff that’s eating the world.

Game performance as an example

To understand the situation it’s worth taking a quick look at game software as an example. Today a great deal of game software is built against engines like Source, Unity, and Unreal that can detect different graphics hardware and adjust how a game runs, and that also ease cross-targeting different operating systems (Windows, Mac, Linux), mobile devices, and consoles from the same source code and game artwork. The way game software runs atop game engines often allows existing games to “look better for free”, or with minimal added effort, on newer devices with better graphics processors (GPUs), and to degrade gracefully on older devices. The same software can run at faster frame-rates, so animations are more pleasing and buttery. The same software renders more detailed, higher-resolution content with finer textures and richer special effects in lighting or smoke or fire or fog. The same software may support more game-controlled opponents behaving more realistically, adding complexity and realism to the game. Just by updating the graphics processor, the GPU.

Two reasons this happens so strikingly with games and GPU hardware: (1) there is a natural parallelism to graphics rendering itself, and, importantly, (2) the natural way to organize game software and engines around that rendering parallelism is a very approachable concurrent programming model for programmers. Upgrade a GPU with more parallel “core” elements that can render more triangles, more complex textures, and more lighting effects, and perform more physical simulation every 30th or 60th of a second, and the software “logic chunks” of the game have already been split by programmers into small pieces that these GPU cores and the game engine can parallelize to do more each frame. Newer GPUs can often take the same source artwork and render more detailed characters, draw texture images with more fidelity on more of the screen or to a bigger screen, or realize a more realistic lighting or fog effect which was already part of the software’s definition. (A similar type of free-speedup effect seems afoot with iOS 8’s Metal.)
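A toy model makes the point concrete. This is not how a real GPU schedules work; it just shows that once a frame is pre-chunked into independent tiles, adding cores speeds up the same unchanged code:

```python
# Toy model of the "free speedup" games get from wider GPUs: rendering work
# is already split into independent chunks (screen tiles here), so a GPU
# with more parallel cores finishes more chunks per frame, code unchanged.

def tiles(width, height, tile_size):
    """Split a frame into independent tiles - the natural parallelism."""
    return [(x, y) for y in range(0, height, tile_size)
                   for x in range(0, width, tile_size)]

def frames_per_second(num_tiles, tile_cost_ms, num_cores):
    """Tiles are processed num_cores at a time; more cores, fewer batches."""
    batches = -(-num_tiles // num_cores)  # ceiling division
    return 1000.0 / (batches * tile_cost_ms)

frame_tiles = tiles(1920, 1080, 120)  # a 16 x 9 grid = 144 tiles
for cores in (4, 8, 16):
    fps = frames_per_second(len(frame_tiles), 2.0, cores)
    print(f"{cores} cores: {fps:.0f} fps")
```

With these made-up numbers, going from 4 to 16 cores takes the frame rate from about 14 fps to about 56 fps, without touching the per-tile logic.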

“Free speedup” isn’t a new thing in software; it’s a natural consequence of Moore’s Law, after all. Free speedup happened for gaming and non-gaming software on the PC/Wintel platform through about 2002, almost entirely due to increases in clock speeds (what you see as the Gigahertz, or GHz, of your computer – a measure of how many billions of individual operations, like adding, subtracting, or moving data around, your processor can do every second). From the birth of PCs until the early 2000’s, newer machines arrived with speedier processors as well as more and faster memory and other faster system components. New PCs automatically improved the performance of existing software like your operating system and the few apps you were using: a browser, Word, Excel, Adobe Photoshop, etc. Each year a new PC felt like a dramatic performance improvement for you as a user, and the easily perceived productivity improvements helped drive rapid PC replacement cycles.

In the early 2000’s we began reaching the “thermal limit” of the processor speed race – we couldn’t make a single processor run at faster speeds without literally melting your laptop. Intel began adding additional processors to a single chip and using techniques such as “hyperthreading” to add even more “virtual” processors. Instead of a single processor chip running at 4GHz, we have a single chip with 2, 4, or 8 processors running at 2GHz. Having more real and virtual processors made the operating system more responsive and also gave “free speedup” benefits to PCs being used in server environments. Database servers and web servers often run identical software fragments for every user connected to them, every time a request for a web page or a piece of data comes in; this is the same kind of naturally parallelizable software that gets a free ride from having more processors even when the speed of any single processor is not much faster. Unchanged, plenty of server software can handle more simultaneous users or web-connections or database queries on hardware with more processors, without most high-level programmers having to do a complex rewrite to accommodate the change. Their software was already written for concurrency.
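The “already written for concurrency” pattern can be sketched in a few lines of Java. This is a hypothetical stand-in – `handleRequest` is a placeholder for a page render or database query – but it shows the shape: every request runs the same independent code, so a bigger pool on a bigger chip serves more users with no change to the request-handling code itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RequestPool {
    // Simulate handling one independent "request" -- the same code runs
    // for every connected user, so requests parallelize naturally.
    static int handleRequest(int id) {
        return id * id; // stand-in for a query or a page render
    }

    public static void main(String[] args) throws Exception {
        // More cores -> a bigger pool -> more simultaneous requests,
        // with no change to handleRequest itself.
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            final int id = i;
            results.add(pool.submit(() -> handleRequest(id)));
        }
        int total = 0;
        for (Future<Integer> f : results) total += f.get();
        pool.shutdown();
        System.out.println(total); // sum of squares 0..7 = 140
    }
}
```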

Alas, most desktop software hasn’t gotten as much of a free ride in the last decade from multiple processors or multiple cores. You experience some speed improvements when you buy a new machine due to faster graphics, more memory, faster solid-state disks, or a faster network, but not like you used to in the 90’s. It’s telling that people are more excited these days by a faster internet connection than by a brand new laptop! It turns out that under the covers our apps and their user-interfaces are built using not-very-concurrent software programming techniques, so those extra processors alone don’t make your favorite apps feel much faster. Unless you are a programmer running many tools at once (like me!) or you work with specific high-end media software which has been painstakingly re-written to take advantage of all those processors, not much free speedup for you. Personally I think this lack of perceived speedup may account for some part of the decline in the PC industry – slower replacement cycles and less consumer desire to upgrade because there is no obvious benefit to a new machine.

Here’s a big part of why this happened:

The traditional software programming technique for concurrency, for taking advantage of multiple processors, is called multithreading, where the work of your software is manually broken into smaller pieces which are given to different processors to work on. In school every programmer is taught about threading and suffers through logic tests about semaphores, mutexes and other mind-numbing locking and synchronization techniques. It turns out that although low-level developers of operating system kernels, database engines, web-servers and some games and game-engines can pull off this form of concurrent programming to get the most out of multiple processors, most programmers (the ones building all your apps) are easily confused by threading and either can’t get it to work properly or can’t get it to work well when there are many actual threads running on many actual processors. Programmers don’t do the multithreading work or don’t do it well; most apps don’t feel much faster. Edward A. Lee’s famous 2006 paper “The Problem with Threads” pointed out that basic threads “discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism” and he suggested new programming language techniques to facilitate concurrent programming, to make it easier for programmers to do well.
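Here’s a minimal sketch of that traditional style in Java, with a lock protecting a shared counter touched by two threads. Even in this tiny example the fragility is visible: remove the lock and the threads silently race and corrupt the count, and in real apps the locking is scattered across far more code than this:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ManualThreads {
    static final ReentrantLock lock = new ReentrantLock();
    static long counter = 0;

    // Without the lock the two threads race and corrupt `counter`;
    // getting this kind of locking right everywhere is exactly what
    // trips up most application programmers.
    static void bump(int times) {
        for (int i = 0; i < times; i++) {
            lock.lock();
            try {
                counter++;
            } finally {
                lock.unlock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> bump(100_000));
        Thread b = new Thread(() -> bump(100_000));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // 200000 -- only because of the lock
    }
}
```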

As a response to these trends, and then in response to the very pronounced performance impact of long-running and slow-network-constrained apps on mobile devices – things like the lag of touchscreens and unresponsiveness of buttons and lists if software is not concurrent enough – various platforms introduced new concurrent programming features around this time in an attempt to push new software into taking advantage of a future with many processors. Some of them seem to have taken Mr. Lee’s insights to heart about simplifying concurrency for programmers. Others did not.

Java introduced java.util.concurrent in 2004 with Java 5, with some useful queuing and “futures” features, but also with many simplistic and mostly not useful wrappers around the traditional complex threading models. As part of its response to sluggish UI compared to iOS, Android followed up with the addition of AsyncTask as well as guidelines for programmers to “do more work” in separate threads. In my opinion, Java and Android have not taken Mr. Lee’s insight very deeply to heart. Programmers can use some concurrent programming techniques, but concurrent programming is not the norm.
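The genuinely useful pieces of java.util.concurrent are the ones that hide the locking, like BlockingQueue. A minimal producer/consumer sketch (the values and sentinel convention here are illustrative): neither thread ever touches a mutex, because the queue does its own synchronization internally:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueDemo {
    // One of the useful java.util.concurrent pieces: a thread-safe queue
    // that handles its own locking, so the producer and consumer threads
    // below never touch a mutex directly.
    static int run() throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        AtomicInteger sum = new AtomicInteger();

        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 5; i++) queue.add(i);
            queue.add(-1); // sentinel: no more work
        });
        Thread consumer = new Thread(() -> {
            try {
                int v;
                while ((v = queue.take()) != -1) sum.addAndGet(v);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
        return sum.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 1+2+3+4+5 = 15
    }
}
```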

In mid-2010, Microsoft introduced Parallel Extensions to its .NET platform and runtime, then more “completion-based” and quasi-asynchronous APIs for Windows Phone through 2011 to prevent long-running operations from causing UI stutter and hangs, and finally introduced new await/async keywords in the mid-2012 C# 5.0 update. I think Microsoft folks definitely took the global trends and Mr. Lee’s insights to heart when building PLINQ and TPL, but their lack of platform focus and consistent messaging in the past few years, along with their troubles with Windows Phone market share, has meant that their concurrent programming model has not caught on deeply with developers. Also, although many developers love and use C#, the concurrent programming model does not permeate all of the many disjointed Microsoft APIs, and so software is not yet being broken up to take strong advantage of the future with many processors.
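Since C# code won’t run in this context, here’s a rough Java analog of that “completion-based” style using CompletableFuture (the method name and strings are hypothetical). The idea await/async encourages is the same: describe what happens next as a continuation, instead of blocking the UI thread while slow work runs:

```java
import java.util.concurrent.CompletableFuture;

public class CompletionStyle {
    // Rough analog of the completion-based style C#'s await/async
    // encourages: the slow work runs off-thread, and the ".thenApply"
    // continuation says what to do when it finishes -- the caller's
    // thread is never blocked waiting.
    static CompletableFuture<String> fetchGreeting() {
        return CompletableFuture
                .supplyAsync(() -> "hello")      // stand-in for slow I/O
                .thenApply(s -> s + ", world");  // continuation
    }

    public static void main(String[] args) {
        // join() blocks here only for demonstration; UI code would
        // instead attach another continuation.
        System.out.println(fetchGreeting().join()); // hello, world
    }
}
```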

Apple’s iOS launched with the iPhone in 2007, then to developers as an SDK and platform in early 2008. It arrived with a great deal of natural concurrency over its entire API surface. Not just guidelines for which APIs to use when, or an admonishment to add threads for long-running tasks (though it had these aplenty), but also some fundamental structure (delegates and delayed message sending and asynchronous APIs) which prioritized UI responsiveness and assumed slow network and input/output operations of all kinds. Soon after, in 2009, Grand Central Dispatch (GCD) was introduced: a technique for creating and scheduling multiple queues of work-chunks independent of the number of processors or threads (effectively hiding thread management from programmers). GCD and Blocks, a technique for writing the work-chunks to put into those GCD queues and for creating reusable work-chunks more succinctly than the delegate and callback mechanism, made their way to iOS and Mac OS X by early 2011. GCD and blocks have meant that Apple’s own software like iMovie/Final Cut Pro and iWork can actually use all available processors “for free” without overthinking threading and concurrency. Over the past few years blocks and queues have come to permeate Apple’s APIs – we create graphical animations with blocks, we handle data loading and saving with blocks, we handle synchronizing input and UI with blocks, they are everywhere. And they feel pretty natural to developers I’ve talked with. And way less error prone than traditional multithreading. On the Apple platform developers have, for several years now, been actively breaking up their applications into smaller work-chunks, encouraged by example code and the APIs to re-organize around a concurrent programming model which is simpler, less error prone, and more scalable to a future with many, many processors.
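GCD itself is Apple-only, but the queues-of-work-chunks idea translates: a serial dispatch queue behaves roughly like a single-thread executor that work-chunks are submitted to in order. A minimal Java analog (the task names “load”/“parse”/“render” are hypothetical): the programmer enqueues small closures and never creates or manages a thread:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerialQueue {
    // Rough analog of a GCD serial queue: work-chunks run one at a time,
    // in submission order, on a background thread the programmer never
    // creates, names, or locks around.
    static List<String> run() throws InterruptedException {
        ExecutorService serial = Executors.newSingleThreadExecutor();
        List<String> log = Collections.synchronizedList(new ArrayList<>());

        serial.submit(() -> log.add("load"));   // e.g. read a document
        serial.submit(() -> log.add("parse"));  // e.g. build a model
        serial.submit(() -> log.add("render")); // e.g. draw the result

        serial.shutdown();
        serial.awaitTermination(5, TimeUnit.SECONDS);
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [load, parse, render]
    }
}
```

The design point, as with GCD, is that the unit the programmer thinks in is the work-chunk, not the thread, so the runtime is free to scale across however many processors exist.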

That’s the lead-up. Here’s the point.

The subtle but major benefit to a Mac+ARM strategy might be the ability to add many, many more processors to Macs and sell them as the fastest computers that consume the least power – not just matching Intel’s GHz performance or number of processors but radically leapfrogging performance and power, because only Apple software and its app developers are positioned to take advantage of so many processors, thanks to how this long game of shifting to a simpler concurrent programming model has been playing out. And only the Ax / ARM architecture can fit 16 or 32 cores into a smaller power profile than the ~8-core top-of-the-line mobile Intel processors, or 48 or 64 cores into the power profile of the top-of-the-line desktop and server Intel processors. Mac laptops could be lighter, run cooler, last longer on the same battery, and feel dramatically faster running concurrency-aware apps than any Intel-based laptop. Mac desktop systems – already targeting high-end developers and media professionals who use concurrency-capable software – could be smaller and use much less energy, and would also feel dramatically faster.

Fighting this performance battle would be very difficult for Intel and PC OEMs in the laptop and tablet space given their continuing struggles around price and power consumption – it’s unlikely they can match the combination of processor count and power consumption for 3-5 years, and only then if they come under pressure. It would also be an uphill battle for Microsoft and PC OEMs without competitive Intel parts. Although they might try to shift to ARM, and a provider like Qualcomm might create a 64-bit, highly multi-processor ARM part, they simply lack the software. Microsoft’s operating system, web-server, and database server are extremely multi-processor capable, but as yet not fully ported to ARM. In addition their APIs are not only in a disjointed state but also not solidly founded in concurrency – legacy apps, originally their strong advantage, become a disadvantage, feeling old and sluggish and consuming more power. Nor does Microsoft have the strong developer following and loyalty they once had, due to their ongoing product, platform and API disarray and consumer market share woes. Microsoft’s response to ARM in cloud and backend enterprise apps is pretty straightforward; it’s hard to picture how they could react, or how quickly, to ARM in the consumer space.

Will Mac+ARM happen? I really don’t know – these are just my thoughts about advantages to Mac+ARM that I hadn’t seen anybody else notice. It’s worth thinking and talking about.

quick thoughts on iOS Metal

3 Jun

One of many surprises to me out of Apple’s WWDC 2014 keynote yesterday was the Metal API announcement – a very low-level API for performing complex graphics and computation on Apple devices. Basically Metal strips out a layer of overhead which exists to simplify graphics programming for most but which gets in the way of the very advanced programmers. As always, AnandTech has a terrific deep dive and a take on the overall market and ecosystem impact, and I also think Alex St. John’s perspective is scathing about OpenGL and low-level APIs while utterly insightful at the same time.

Since I’ve got only a few minutes before I have to check out of this SoCal hotel and be somewhere else, I’ll just add a few quick thoughts.

First, it’s worth noting that even some very advanced graphics programmers may not see huge performance wins from Metal:

(Brad Larson maintains an excellent graphics library for image manipulation called GPUImage)

That said, among the class of very advanced programmers who will jump on Metal are… the teams that maintain the game engines, frameworks, and toolchains used by 95% (perhaps 99%) of the games for mobile. Unity3D, Unreal Engine, and a few others simply dominate mobile gaming on both iOS and Android and have traditionally targeted a relatively common core of OpenGL ES for both platforms.

Due to this I find it unlikely that the API itself will act to lock anybody into iOS from a classic API perspective – everybody is using an engine or framework, and indeed tools much higher up the value chain. But… Metal could very well offer an iOS performance lock-in on mobile.

Games with the most realistic rendering will look great on iOS until Google does deeper/better driver work on Android. As it turns out, that is crazy hard due to the diversity and fragmentation of Android hardware. In this respect, if Metal is indeed a 10x speed improvement or a 10x detail improvement, it may very well be a masterful move – non-iOS games from the same engines will just look lousy on Android. Wow.

vertical integration of design and post-pcs

9 Apr

The news that Apple has been building an RF/baseband team is a great reminder about how cool vertical integration of intellectual property design can be as design and final manufacturing continue to fracture.

I wasn’t a business strategy wonk growing up, I was too busy writing software, so my first view of vertical integration in manufacturing, contract manufacturing and white-label manufacturing came during the mid- and late-90’s at Microsoft while working with PC OEMs on the troubling issue of “low-cost consumer PCs.” OEMs were in a price war that was driving their margins into the dirt and were giving Microsoft (the $70 Windows software license) and Intel/AMD (the $50 CPU price) grief over those parts of their cost, as well as trying to figure out how to differentiate their products. We were helping key OEMs prototype different special-purpose uses for the Windows operating system which could be sold with new high-volume consumer products under a lower licensing cost to hit the <$300 retail price point. (This effort and some of our prototyping were one contributor to the initial XBox.) I was fascinated to learn details about how much PC OEMs had outsourced manufacturing (and some forms of the hard intellectual property design) to foreign white-label manufacturers. Some small players had literally outsourced everything but their logo, their sales staff, and their direct-mailing lists. It was clear even then that they were not differentiable and fully doomed. Others, like Dell, were still doing final customer-specific options assembly and industrial/mechanical (particularly pluggable component) design but were no longer designing much of their printed circuit boards (PCBs). The more I learned, the more this seemed like a difficult-to-defend position without unique software capabilities to differentiate the clearly commodity hardware. PC OEMs had no brand-exclusive content.

One PC OEM that stood out and then led me down the rabbit hole of game consoles was Sony, which I learned was an extremely vertically oriented company – at one point it probably built the trucks that dug the sand and copper to be carried by its ships to its factories to be turned into glass and magnets for TV tubes to be carried again by its ships to markets around the world. Sony’s vertical integration experience in many different CE devices, from Walkman to CD players to stereos to TVs, taught it how to manufacture Playstation One consoles cheaply and then to radically reduce their build costs each year over the life of the console. It was using this technique in PCs and notebooks as well, delivering the most appealing and smallest PCs and commanding the highest margins for a time (though the PC OEM war of specs and hundreds of configurations diluted and defeated many of these advantages). Studying Sony’s Playstation and PC/notebook businesses as well as their various content businesses illuminated two important things for me which may seem at odds, but are not: (1) vertical integration of hardware intellectual property is critical for differentiation, though the actual manufacturing can be carefully outsourced if possible, and (2) software differentiation (content) is the even more important differentiator. Ironically for Sony, even the strongest advantages of their vertical integration and their deep investment in hardware intellectual property for consoles weren’t enough to keep ahead of the price-performance trajectory of commodity PC CPUs and GPUs. (It’s good to see them embracing the PC ecosystem and focusing on exclusive content now.)

Which brings me back to Apple, who clearly learned more lessons than everybody else combined from the PC OEM wars. Lessons about how differentiation matters, how intellectual property design must keep its distance as far as possible from manufacturing, and most importantly how to prevent a cross-over threat from another ecosystem.

In the classic PC/notebook space, Macs continue to use many off-the-shelf PC parts (ethernet chips, CPUs from Intel, memory), but their deep investment in industrial design and consumer-important features like thinness, lightness, screens and longevity require expertise and investment in the intellectual property of PCBs, glass, mechanics, aluminum, manufacturing, just to name a few. They use the intellectual property of hardware design to make their products unique and their exclusive software clinches the deal, allowing them to keep their margins high.

More interesting still, though, is the mobile, Post-PC or “Internet Of Things” space. Here, with iOS and ARM-based in-house-designed CPUs, Apple’s overall vertical integration strategy is just shockingly impenetrable for the foreseeable future. Post-PCs will be small, highly-capable, full of sensors, network-connected, power-sipping, and accessible to developers. Apple’s environment is all this, and is particularly strong in low-power. At this point Apple just lacks dedicated in-house designers of displays, touch-screens and batteries, though they appear to have long-term investments and future capacity contracts with their key suppliers and manufacturers. They don’t actually own the team which designs the graphics processor (Imagination Technologies, creators of the PowerVR GPU), though there is evidence of a deep investment and long-term contract. I suspect there must be a right-of-first-refusal or right-of-first-purchase in place. (I still don’t understand why Imagination hasn’t been bought by somebody; they are an amazing company that understands low-power better than just about anybody.)

Android plus off-the-shelf hardware from the non-Apple ecosystem of ARM CPUs, GPUs and baseband controllers is nearly price-competitive, but already at the cost of very slim margins for all the intermediaries (increasingly, for the medium- and high-end, this is just Qualcomm). Apple building custom baseband chips will mean Apple has fewer intermediaries and so pays less (likely $20-30 less per device with an in-house baseband, or roughly 10% of its fully-loaded bill of materials), and I’m guessing they will continue to outperform on power-consumption. Qualcomm will feel pressure from OEMs to further reduce prices and power-consumption, leading to lower margins and less ability to invest long-term. This is the aspect of the strategy which prevents an ecosystem cross-over – living in the same ecosystem as your competitors, retaining exclusive content, as in PCs and notebooks, but being able to do cutting-edge intellectual property investment in literally every component with no exceptions. By bringing the hard intellectual property design of the very same ecosystem in-house and securing inexpensive manufacturing, there simply is no competitive price-performance curve for a competitor to cross over.

I shake my head at the genius of not just managing your supply chain but literally eating every bit of intellectual property designed within it except the lowest margin manufacturing. I see no offensive strategy capable of cracking Apple’s Post-PC lead at this time. Perhaps (I hope not) anti-trust will eventually be used, but it’s really more a waiting game for Apple to stumble and slow their pace of innovation.

Game Console Ecosystems – Part 2, Strategies (Now What?)

2 Apr

In Part 1 I wrote about content, price, and lifecycle patterns of game consoles and described ways they are blocking their own adoption. This second part describes now what? strategies for consoles, micro-consoles and others in the TV, CE, and video space, with the exception of Valve/Steam, which is complex enough it deserves its own post. In another future post I’ll talk about something even more important to me: how to place your bet as a developer, and how VR (and AR) will radically impact developers and the CE space.

Many super smart people wonder What’s going to happen in TV? After Amazon’s FireTV announcement and the Android TV leak, even the most dull-witted among us now realize that small, inexpensive, network-connected, cloud-backed, UI-excellent, rapidly improving devices easily replaced every 12-18mo are the most natural product to deliver content to less-frequently purchased & expensive “big pieces of dumb glass” (televisions). The United States has an estimated installed base of 270M TVs (2.24 per household), 240M are sold worldwide annually, and the worldwide installed base is estimated north of 2B. Americans spend 5+ hours each day around their televisions. Selling internet-connected devices, services and content to that big an audience is not a hobby.

I’m a big console, now what?

The big-three consoles are in for a world of hurt as fast-improving, cheap-to-update, lower-cost mobile hardware and an enormous ecosystem of mobile developers transition into this market, either through a variety of Android-based micro-consoles (including Amazon’s FireTV) or an iOS-based Apple TV, or simply if TVs begin to lose user attention to video streaming and games on tablets and phones. At the same time the most lucrative and loyal hard-core gamers and developers are drawn to high-end PCs which out-perform consoles by increasing margins and are easier and cheaper to work with. High-end PCs are likely to arrive in TV-friendly form factors via Valve’s Steam box initiative by this holiday season. The question is how quickly and how hard, not if, this world of hurt descends on existing consoles.

Putting content and usability mis-steps aside for a moment, the past two generations of consoles have tried to ride a particular spot on the price-function curve in their graphics hardware and content arms-race, a spot that has pushed their price up and made them and their developers dangerously dependent on blockbuster hits. Chasing hits that fully exploit expensive custom hardware yields hardware and software that are fundamentally over-priced and increasingly over-squeezed.

To defend themselves the big consoles’ best chance in direct consumer sales is to reduce their competitors’ advantages and increase their own. On the hardware side:

  • Accelerate the current generation subsidy. Since micro-consoles, Amazon FireTV and Apple TV competitors will be priced at $99-149 in the 2014 holiday season, subsidize to $169-199 to make consoles an easier choice. It may be possible to strip some storage space or software services out for the upcoming season, but be prepared to spend on this defense even if nothing can quickly change. Creating better bundles with a contracted service subscription, unbundling Kinect and the controller, and adding support for older XBox, Wii and DualShock-3 controllers which consumers already have are some ideas; there are many others to be found. Crazy to subsidize so heavily, you say? How could you possibly subsidize 10-20M units per year? To that I say: even if you are lazy, do no work to reduce manufacturing and sales costs or to understand your supply-chain, and have to subsidize at $300 apiece, it might cost you $15B over three years – are you saying that a beachhead entertainment device is worth less than what Facebook is paying for WhatsApp?
  • As part of subsidizing, use 2-year service subscription contracts. You can only do this if subscriptions are more like Playstation Network and Steam membership (full of value, free games, sales) and less like today’s XBox Live (subscribe and pay so that you can even use Netflix and other apps – stupidity), as most consumers won’t see any value. See also Spotify-style subscriptions described below under micro-consoles.
  • Commit to yearly hardware updates and forward and backward game compatibility over three to four years to defend against yearly micro-console updates which will follow the app compatibility model from mobile devices. This makes sure new hardware has an existing catalog of titles instead of resetting every generation, which is attractive to consumers and developers while still giving developers and users access to the cutting edge. Another side-effect: this model appears to match Valve’s public strategy and so may counter a Valve and Steam machine OEM advantage.
  • Introduce yearly staircase pricing: each year a model phases out, last year’s model steps down the price ladder and a new, faster model takes its place at the top. In 2015 drop the current hardware to a steeper $149-169 subsidy and introduce more interesting hardware, hopefully less subsidized if you’re doing your supply-chain work, at $199-249. Rinse and repeat in 2016. Throughout this time period the goal is to create a many-generations road-map with a razor-focus on reducing hardware costs so that the subsidy costs come down while function improves. The larger goal is to get your console to a more price-defensible position on the price-function curve, keeping it enough ahead of peak function coming from mobile CPU+GPU hardware in micro-consoles that better games are possible, but not so far ahead that you’re chasing the arms-race of cutting-edge PC graphics, which is too expensive. All consoles lose some high-end PC and Steam Box gamers, but this should help block lower-end Steam Boxes from capturing a large chunk of market.

On the software side:

  • Make your platform wide-open for independent developers. Kids, students, anybody with skill and spare time who owns one of your devices should be able to download free tools (for PC and Mac) and write games that they can give away for free, sell in your app store, or just show their friends. Review apps to prevent junk, spam, and copy-cats, curate lists of great content to put it front-and-center in your store, but most importantly just let any developer write software for their own console using free or very inexpensive tools. Don’t just speak about it and slowly roll it out over a year or two, do it immediately; remove the strange sign-ups, verification, approval, wait-list nonsense. Remove the strange pricing rules, size limits, trial periods and overall regulations found in Microsoft’s XBLIG/XBLCG and SCEA’s Indie Outreach. Let the community of developers share code and support one another without hindrance.
  • Offer a better “App Store,” let prices float, but don’t drive to zero. Draw the best of simple finding, paying, and in-app-payment as well as curation from the iOS App Store, and the best of sales, membership and deals from XBox Live, Playstation Network and Steam. Remove the constraints and restrictions that held prices extremely high, but don’t drive them to zero.
  • Perform an immediate radical dissection of your user experience, particularly around how you navigate and watch or play content and around how you find and purchase additional content to watch or play. Use voice through phones, tablets, and remotes, not by yelling at your TV. Simplify first-run and every launch to speed access to content. Simplify billing, account setup, account recovery, subscriptions. Speed up launch and software updates. Eviscerate all error messages. Make backup/recovery, roaming your gamer profile, and restoring to a newer console work seamlessly.
  • Do even more to allow control of the device, its services, and the TV through mobile and tablet “remote” apps and bluetooth hardware, and open up the control mechanism to third-party mobile app developers through free SDKs, hardware development kits, open protocols, and tools.
  • Invest more deeply in exclusive game content, minimally for time-windowed exclusivity.

To summarize, the goal is to truly level the playing field in simplicity, usability, and price as a defense against lower-cost devices that can’t yet deliver high-quality game content while also creating a broader defense to a multi-tiered, multiple-OEM PC market through more frequent updates and console-exclusive content (more on that play, which is Valve’s Steam, in a future post). This should be a sustainable gaming location for several years. Microsoft is at a slight disadvantage to Sony in adopting this PC-isolating approach as it is more difficult for them to choose exclusivity of content between PCs and consoles; Microsoft is not sure whether PCs or consoles are going to be the larger volume, dominant devices long-term.

I’m a micro-console, now what?

Apple TV, micro-consoles like Ouya and now Amazon FireTV (and potential Android TV devices) have an advantage over current consoles in being a profitable piece of hardware at a reasonable and interesting consumer price-point. Ouya and Amazon FireTV don’t have a depth of supply-chain control and they have to pay more middlemen for parts, so Apple’s 35-40% hardware margin at $99 retail will be out of their reach initially. Over a few years, if they achieve high volumes, they can either find higher margins or drop their prices below Apple TV (their retail strategy will likely be the latter, which suits Apple just fine); future cable operator subsidies and Apple’s brand strength may obviate any retail price advantage vs Apple for most consumers. In any case micro-consoles could be self-profitable on hardware alone, and they will definitely provide a profitable distribution path for existing subscription streaming partners like Netflix and Hulu. To have gaming content help drive their growth and to grow gaming and an app ecosystem, though, they must create:

  1. Must-have, exclusive, break-out content that helps move 5-10M units of the hardware. Initial hits are needed to seed the market, which in turn causes more content developers to focus on exclusives for the device, to see its market potential, and to see a viable customer base for doing business long-term. For games this is particularly important since “console” games tend to be longer and require longer development cycles. Amazon has primed its pump for FireTV with original streaming content (which has been somewhat well received but not yet as critically acclaimed as Netflix’s) and is at least trying on the games front with the acquisition of some strong game teams, a commitment to first-party titles, and a big outreach to many game studios. There are no two ways around the fact that Ouya really must scout out and invest money (if they have it) in an exclusive must-have game title which showcases its excellent little product.
  2. A virtuous-cycle ecosystem where money can be made by content developers. This is more than an app store with a 70/30 revenue split, more than supporting payments seamlessly or supporting in-app-purchase. It’s about an overall business model and community culture that ensures long-term profitable businesses can be built in the environment, not just get-rich-quick schemes or games that prey on addictive behavior or psychological chicanery. Directly copying the current iOS App Store is not without risk – many aspects of the pricing and free-to-play/in-app-purchase model have soured game development on mobile, leading to an exodus of great game developers back to PCs and Steam. In the last year Ouya’s everything-has-free-trials policy (now rescinded) was quite a bad mis-step in my mind because it set customer expectations on “free” and it also put developers immediately into the free-to-play/in-app-purchase mindset of get-rich-quick schemes dominating in mobile. The Amazon FireTV slide describing the average selling price of paid games as $1.85 sets up a similarly low and cheap expectation which may prevent the creation of break-out game content. My spidey-sense is that Apple has already spotted the negative customer satisfaction impact from free-to-play and unlimited in-app-purchase, as well as highly creative developers shifting their attention away from iOS, and is poised to make App Store rule adjustments. I won’t try to read Apple tea leaves, but some suggestions for other micro-console ecosystems to avoid scaring away developers are:
    • proactively block clones and knock-offs under guidelines such as Apple’s 2.12
    • adopt time-window constraints on the frequency and amount of in-app purchases (perhaps introducing several different categories from which apps can choose whichever best fits their game mechanics), with the underlying goal of disrupting app dependencies on “whales”
    • introduce Spotify-style subscriptions of all-you-can-eat daily, weekly, or monthly access to groups of games and pay developers in proportion to the amount of time consumers spend in their games during the period, with the underlying goal of encouraging the creation of content users like to spend time with (time spent being an imperfect proxy for enjoyment)
    • If I personally ruled the world I would also set a minimum base price of $0.99 or $1.29 for apps, just to keep consumers aware that content has value.
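The time-proportional payout suggestion above can be sketched concretely. This is a hypothetical model, not any store’s actual mechanics; the pool size, game names, and minute counts are all invented for illustration.

```python
# Hypothetical sketch of a Spotify-style subscription payout: a period's
# subscription pool is split among developers in proportion to the time
# players spent in each game. All names and numbers are illustrative.

def payouts(subscription_pool, minutes_by_game):
    """Return each game's share of the pool, proportional to minutes played."""
    total_minutes = sum(minutes_by_game.values())
    if total_minutes == 0:
        return {game: 0.0 for game in minutes_by_game}
    return {game: subscription_pool * minutes / total_minutes
            for game, minutes in minutes_by_game.items()}

# One month's $100,000 pool across three hypothetical games:
month = payouts(100_000, {"platformer": 600_000,
                          "puzzler": 300_000,
                          "niche_rpg": 100_000})
# platformer gets 60% of the pool, puzzler 30%, niche_rpg 10%
```

The design point is the incentive: paying on time spent rewards content people actually enjoy returning to, rather than purchase-funnel chicanery.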

The complaints I have previously leveled against consoles and which I suggest they fix – better UI, easier setup & account management, faster game loading, etc – are a baseline for micro-consoles as well. Though they start at a simpler place than consoles and bring less baggage, they still have room to tighten up, and getting ease-of-use just right in the $99 space is going to be how they differentiate and sell. Apple TV is in solid shape, though voice search in FireTV ups the ante quite a bit. FireTV UI looks good, but until I set it up later this week, play with it for a while, and get a software update I won’t truly know. Ouya is pretty rough around the edges, but they have been making updates and have a good software team; I think they know these issues are important for their future, and I look forward to seeing what they do.

The final point for micro-consoles is having an excellent bundled content remote and an excellent bundled or separate game controller. From what I’ve seen, FireTV has nailed the remote, especially with voice integration – I’m looking forward to trying it. Neither Ouya nor FireTV is off to a strong start with its gamepad, though Ouya supports Playstation DualShock3 controllers and wired XBox 360 controllers, which is smart. Iterating aggressively on their own game controllers or drafting off the excellent open-protocol Bluetooth DualShock3/4 controllers is a great idea (I recommend the DualShock4 – the speaker is a surprisingly great addition to the controller). Apple’s TV remote is excellent – it only remains to be seen whether they integrate voice in the next version. I expect Apple to design a really great gamepad as well as supporting existing customers with DualShock 3 & 4. It would cost a licensing fee to integrate XBox 360/One controller support since Microsoft uses some (stupid) proprietary technology – I doubt Apple or any others will choose to support it.

I’m a streaming stick/dongle or mini-set-top-box, now what?

Right now these are exciting little products for consumers. The sticks and dongles remind people of convenient thumb drives. They are incredibly inexpensive and can be justified as an impulse purchase just to get Netflix or Hulu – most consumers have a spare HDMI port on their new television and what the heck, it’s only $35! The mini-set-top-boxes are small and don’t take up much space near the TV or cause much additional house clutter – your partner won’t complain. Existing Smart TV software is so bad and changes so slowly that when someone sees a better looking UI demo reel they want to give it a try.

In this category the Roku products are excellent. The Chromecast is fairly underpowered but decent; my best guess is Chromecast sells well to Androidees who want to project their pictures, videos, and YouTube to a television, which I love doing with my iOS devices and Apple TV, but I have only anecdotal evidence that this is how Chromecast is being used. The myriad other teeny dongles out there offering photo or video streaming or Netflix/Hulu are mostly meh in quality.

But even as the hardware improves and prices come further down, there is a fundamentally narrow range within which general-purpose sticks and dongles can operate given their size. You can’t dissipate much heat from such a small device volume, and so you can’t draw much power or carry much storage or content. Pure video streaming isn’t a problem, but buffering multiple streams quickly is, and you are barely going to get smooth UI transitions and compelling graphics, or carry a lot of software or content, especially as screens grow in density from HD to 4K. No matter how Moore’s Law progresses, the stick/dongle form-factor will be too far down the price-function curve to be super appealing. Technically sticks and dongles can be carried easily to a friend’s house or on a vacation, and while this niche use has utility today, I suspect it eventually dies in a cloud world. The $99 mini-set-top-box, which has dedicated power and a larger volume to dissipate heat, is the most interesting form-factor for the foreseeable future.

So what to do?

  • Focus on software. Make your software exceptionally easy to use, modular, and easily licensed and rebranded. Rethink and innovate on the tough TV issues like discovery, search, and parental controls. Unless you’re Apple, pick Android as your base so you can appeal to developers and improve your own application development. (Note: this is where I think Roku, otherwise executing with excellence, will be in trouble with its Linux+Brightscript SDK)
  • Make your devices controllable via smartphone “remote” apps and Bluetooth. Create a free SDK for mobile and hardware developers to use to write custom controllers – don’t assume you can do the best job yourself. Rapidly and generously buy up the best solutions from your software and hardware developer community rather than trying to copy them in-house; don’t alienate developers.
  • License your software and hardware solutions directly to “Smart TV” manufacturers who need to get out of the software business. Promise them better software, more frequent updates, and better customer support.
  • Use the stick/dongle and Smart TV integration as the free/cheap entry to your broader software platform. Assuming you have long-term ambitions to be part of a TV ecosystem, take a look at how Roku has created a set of devices that span the portable/cheap stick to a plugged-in form factor with more hardware horsepower potential.

Is there room for single-purpose free or cheap HDMI sticks and dongles that do just one thing – video conferencing, displaying photo albums, presentations from your phone or tablet, streaming games from a PC? Absolutely, there is room for these niche players for several years, until apps and high-powered $99 mini-devices take over completely. Just don’t expect to build a huge business; use sticks and dongles during the transition.

I’m a “Smart TV,” now what?

Because of the slow replacement cycle of TVs and the accelerating pace of computer and graphics hardware improvements, I’m pretty skeptical that it is useful for the “smarts” of a TV to live inside a large & expensive TV. Evidence suggests that even inexpensive tablets have long replacement cycles (perhaps they are used primarily as portable TVs?). In the short term you can solidify your position as the best piece of dumb glass moving forward as follows:

  • Don’t ship another single unit carrying your worthless, unusable, frustrating custom software.
  • Pull every stop to partner and integrate 3rd-party software with great UI by the 2014 holiday. At the moment I’d recommend Roku, though Ouya may be a smarter choice due to its Android base (their UI is not quite as refined as Roku’s), and soon we should hear what Android TV’s licensing terms are. Please don’t roll your own Android version, custom store, and UI – you are not a software company.
  • Be sure to integrate AirPlay, iView and the protocols underlying Chromecast so that your TV is accessible by the majority of mobile devices without users having to think about or buy an additional device. There may be other protocols specific to your geographic or cultural market – the key is to choose a partner which has some form of application SDK so you can add features and target specific models of your TV quickly and easily (this is Roku’s one big challenge at the moment, and why Ouya or another Android-based system stands a chance)
  • Focus on usability and customer service. Hey, what do I know, but here are some suggestions: Streamline setup. Be faster to turn on (with less of your logo). Ideally detect, but also OK to let me name, my inputs – like “XBox One” and “Cable” and “DirecTV” instead of “HDMI-1” and “HDMI-2”. Don’t make changing inputs vs. changing channels modal – go watch families struggle with TVs, it’s not rocket science. Each time the TV turns on, show me thumbnails of all my named inputs – what could be more frustrating than a blank screen showing “HDMI-2” when the last person left a different (now turned off) source selected? Revamp your manuals.
  • If you’ve got a speaker, support audio playback even when the screen isn’t on. Integrate Spotify through Roku and let people have ambient music, controlled by their phone or tablet.
  • Bonus points for TV/AV folks: Buy Sonos or partner deeply with them instead of trying to copy their features poorly in your line of sound-bars, TVs, A/V receivers, and 5.1 speaker sets using barely functional Spotify, Pandora, TuneIn, and Rdio integration. You do not have the software chops to build your way to a solution, so you should just buy: they are doing a much better job than you possibly can because they focus on software and hardware working harmoniously.

Fundamentally, you are in a really, really tough spot long-term as a purveyor of dumb glass – but these are my suggestions for remaining differentiated while you figure out your next step.

I’m a cable- or satellite-operator with my own set-top-box, now what?

All your set-top-box hardware, remote controls, and software have been universally condemned as slow, difficult to use, lacking in cutting-edge features, and slow to update – even back when you blatantly copied TiVo or built in their technologies. Because you aren’t a software company, because you have a huge installed base of odd TV and stereo configurations you fear weaning from traditional remotes, and because you completely subsidize the cost of the device or charge a small monthly fee, your goal is simply to minimize the cost of the hardware, software, and support associated with it. You have been either actively creating barriers to prevent your set-top-box from being a gateway/hub for other web-based or local-to-the-home photo & video content, or integrating it poorly with custom-built apps. Your own set-top boxes are not a competitive edge, and continuing to invest in them will never grow your market share or improve customer satisfaction. There are two things you have traditionally done that you should keep doing:

  1. Secure content exclusively to your own networks, especially video content, especially sports. Most video content is becoming commoditized, so you need desirable short-shelf-life exclusive content like sports as well as a broad tail of ideally exclusive niche content. Be willing to spend big to secure exclusive content.
  2. Improve your service. Faster internet speeds, better reliability, better customer service. Pricing lower than your competitors is great, but dramatically better service is always the strongest long-term differentiator.

What should you do differently? You should either ease your way out of the hardware and deep software business by using off-the-shelf packages like Android TV and white-labeled hardware, or you should partner with a company already selling game console or set-top-box hardware directly to the public and draft on their business model. In either case you should sell, rent, or help subsidize new devices to all your customers as quickly as possible, taking yourself out of the hardware and deep software business and into the app business. Integrate your tuner hardware and DRM technology if that is technically necessary.

If you are not going to use Android TV and white-labeled hardware, you have three serious choices for third-party complete ecosystems at this time: Microsoft XBox One, Apple TV, and now Amazon FireTV.

My guess is Microsoft will make XBox One available to many different cable operators as one choice among the set-top-box options available from the operator, and that they are looking for a small subsidy assist or subscription percentage. It is typical for Microsoft to pursue consumer choice & perceived quantity over deep quality. (This isn’t meant as a dig – it is just their traditional method for hedging bets, keeping multiple OEM or other partners happy, and keeping more future moves available.) The fact that it has on-board storage, a cloud infrastructure, and reasonably good programming-guide integration makes it an attractive-looking option. Unless it’s subsidized deeply into the free-to-$100 range, though, I’m skeptical that consumers will readily choose it over basic set-top-box options, but there may be some attractive ways to bundle it with new services which would surprise me. It is also physically a little bit big and requires more complex physical integration and setup.

In contrast, I think Apple would initially look to partner with a single operator: a time-window exclusive helps pull down the consumer price-point by increasing the subsidy, and the operator gets the cachet of exclusivity, drawing new subscribers. Recall AT&T and the iPhone – which cable operator wouldn’t want to be the AT&T of 2007? The leaks around content deals with a combined Comcast/TimeWarner are mostly random noise without much depth, but between the lines I see indications that an Apple TV distribution partnership with new content is actually the deal that’s pending. Whether or not XBox One is also an option for Comcast/TimeWarner customers, I suspect the next-generation Apple TV will be available as a free or <$50 option to Comcast customers and will include deep direct program-guide and custom Comcast app integration. Apple TV will also be available at retail, like an unlocked iPhone, at the higher $99 (or future $149?) price-point. I’ve written that the next Apple TV will support gaming, and I even thought games would be its launch focus. But reading about content negotiations, and thinking about Eddy Cue and how Apple typically chooses a single product facet for launches in order to have crisp messaging, I now think the pending Apple TV update will focus entirely on user-interface issues like search and parental controls, new streaming content partners, and a Comcast/TimeWarner distribution partnership with live- and time-shifted-TV programming-guide integration. Though it will have the hardware specs and iOS capabilities to support apps and specifically games later, that will likely be a separate fall/holiday 2014 announcement.

As a cable operator I would certainly be reaching out to Amazon if they hadn’t reached out to me already, because the voice search, parental controls, Android ecosystem, UI/support, and overall sizzle are what I need. But I suspect that in 2014 Amazon FireTV is a pure consumer play and they haven’t had time to pursue deep cable-operator integration or this form of subsidy.

If I were a cable operator or ISP of any kind, I would be reasonably worried about partnering with any of these companies and would be drawn to retaining control over my own destiny by sticking with custom STB hardware or choosing the path with the most opportunity for customization (perhaps Android TV). This would be a poor choice, though. Cable operators should look to the history of the past 7 years in wireless operators and smartphones: long-term you do want to support many different devices, but short term you want the most compelling product deeply integrated so that you can acquire more contracted customers. You want to be the AT&T of this round, you want the exclusive Apple TV.

Thus ends my current thoughts on navigating the connected TV, set-top-box, console, and micro-console landscape – a bit adjusted at the last minute to account for Amazon’s FireTV announcement and stripped of references to Oculus VR and Facebook while I try to get my head around what that will mean. Feel free to tell me I’m wrong in the comments or harass me on Twitter, I’m @natbro. You perhaps won’t be surprised to hear I sometimes consult and brainstorm with companies in the CE industry about these issues in more detail; if that interests you, I can be found through

Game Console Ecosystems – Part 1, Strategeries

15 Mar
[Image: a mixed-up Rubik’s cube]

Looks complex, actually pretty formulaic.

Next-generation consoles from Microsoft and Sony launched a few months ago, and initial sales figures are starting to roll in: about 6M Playstation 4’s and 4M XBox Ones sold worldwide. TechCrunch dug through monthly sales, compared them with older consoles, and said hyperbolically that The Console Market is in Crisis. Re/code more correctly interprets the raw data showing Microsoft and Sony growing a bit while Nintendo shrinks, and other reports show game revenue growing slowly. To me these and other signals unequivocally indicate a contraction is underway in TV-based gaming. Consumers are showing less interest in big-ticket devices and there are no must-have console-exclusive games. Game studios have trouble justifying the very high costs of console game development, and even successful console games aren’t succeeding financially. Most independent developers avoid console development.

Ouya and other low-priced micro-consoles rumored from Google and Amazon should be more appealing to developers and consumers, but either haven’t shipped yet or aren’t yet hits. Polygon criticizes Ouya’s plan to embed their platform in other hardware and says Ouya may not be dead, but its long history of stumbles makes success unlikely, taking a particularly hard jab at their controller. I also find the controller poor, but they are doing solid developer relations, and an embedded platform + store service with common content which consolidates and grows the Android-based micro-console market is the only proper start of a strategy for Ouya (the other part is a business model where developers make enough money – more on that in Part 2). The elephant in the room for micro-consoles is killer games. It’s not quantity or even quality; Ouya crossed 700 total games recently, and there are many gorgeous, fun & diverse titles available. Rather, it’s the elusive killer app: an exclusive, unique, must-have hit game that will make new users want and buy an inexpensive micro-console just for that game. A hit game is needed to start micro-console demand snowballing with consumers, and I’m not yet convinced they have all the elements they need to give rise to a hit, in particular a proper revenue model.

Even if console sales are growing relative to themselves 10 years ago, their boats are not lifting with the overall rising tide of gaming, and they are under market pressure from several directions at once. From above: a resurgence of high-end PC/Valve gaming using cutting-edge GPUs with dramatically better performance and graphics than the newest “next-generation” consoles, which draws away hard-core gamers – the biggest spenders and influencers – and the small- and mid-sized game studios which target them. From below: the exponential growth of mobile and casual gaming, which delivers simpler, more immediate gratification for play outside the time spent near your TV and is becoming the main introduction to gaming for new players and developers, instead of PCs or consoles. From within: developers are flocking to the high-end and mobile segments of the market where they see more growth and opportunity, lower barriers to entry, lower development and distribution costs, and faster product cycles. From outside: the time and money consumers have available for gaming on consoles is being undercut by video media streaming to tablets and phones through Netflix, HBO Go, and others, and by the many TV streaming dongles and devices like Chromecast, Roku, and of course my favorite, Apple TV. The draw here is also a new wave of highly compelling and socially spread video content, eating away at peoples’ limited time and attention (Ben Thompson’s The Jobs TV Does is a great overview of limited attention for escapism).

The original console business model has become strategery. Infrequently updated, big-ticket subsidized hardware; high-priced games and high-priced services; poor bundling of commodity video services as poorly integrated “apps”; limited exclusivity and tightly controlled game publishing. It hasn’t worked very well in the past 10 years to differentiate consoles or to expand the market, and it will work even less well moving forward. Creating a real console/set-top-box strategy that grows the market and profits may seem as impossible as solving a jumbled Rubik’s cube, but there are just three degrees of freedom: content, price, and lifecycle. In Part 1 I’ll describe what each of these elements is and how they have been, are being, and likely will be used. In Part 2 I’ll describe some different ways of combining them into a coherent strategy that could work for consoles and micro-consoles in coming years.

Content
These days any TV-based device needs commodity content (TV, movies, Netflix, Hulu, some number of games) just to enter the market, but to grow it must find compelling, unique, and exclusive content, or offer a better user experience around content (think TiVo and Netflix), or both, in order to create new demand and to snowball device sales. Game content specifically requires compelling hardware and software and an ecosystem of skilled developers and companies betting on and building viable businesses around the console’s success (just like video content requires studios producing content). Making all forms of content easy to buy and consistent to access through software and within the overall business model has also become a critical user-experience differentiator. When you are missing or undermining any of these traits, you are blocking the creation and sale of unique and exclusive content. You won’t grow the market or your share. You won’t attract new users. You will be vulnerable to competitors.

Through a combination of studio consolidation among game publishers and a smaller appetite for spending to secure exclusives, XBox and Playstation evolved from their earlier generation to have weak content differentiation. Today most of the exact same blockbuster games — Call of Duty, Assassin’s Creed, Need For Speed, Battlefield, etc — were available on both XBox One and Playstation 4 at or near launch. For years many “exclusive” titles have simply become first-party non-unique variants of a genre like “first person shooter involving war/zombies/science-fiction” which sell well enough but are not killer hits broadening the customer base and moving more consoles. Other platform exclusives are killer in terms of sales and their attach rate, but are indistinguishable to non-experts (e.g. Forza on XBox vs Gran Turismo on Playstation). These grow the market, but equally, and so don’t differentiate consoles from one another.

Blocking, back-stabbing, or limiting novice, first-time, or independent development through onerous distribution contracts, high-cost development systems or difficult toolchains frustrates developers and limits the overall amount of innovation in an ecosystem. Small, independent developers and young people who want to learn to program and build their own games are a tremendous source of innovation and energy – reducing all barriers to their participation in an ecosystem is critically important. Microsoft and Sony have flip-flopped madly on development systems, development tools, and supporting independent developers and independent game distribution over the last 10 years. At this point it’s still not clear whether they will deliver on all the changes to support frictionless independent development that they’ve promised, but it is clear that Sony has promised and delivered more for Playstation 4 recently. Microsoft is behind this curve.

The launches this month of Titanfall, exclusively for XBox One, and inFAMOUS Second Son, exclusively for Playstation 4, are a solid test of how quality exclusive content drives sales. Microsoft is inexplicably weakening its wager by allowing Titanfall on the older XBox 360 and PC in coming months and has few other exclusives up its sleeve – it seems to have much of its marketing budget and all its PR eggs in this basket, betting it will recreate the HALO phenomenon from the original XBox (which was itself a recreation of the Zelda+GoldenEye phenomenon from the Nintendo 64). Sony, on the other hand, is spending much less on marketing and has several very interesting exclusives pending. They had already been investing more heavily in exclusives like The Last Of Us for the Playstation 3 – again, I’d posit that Microsoft is behind this curve.

It’s difficult for me to write even a brief summary of how flawed user experiences are on XBox and Playstation around game and video content, subscriptions/login, or navigating to and within streaming apps like Netflix, Hulu, or HBO Go within the XBox dashboard and XBox Live or within the Sony dashboard and Playstation network. Suffice to say that neither console has taken better and consistent user experience around content to heart, though at least Playstation doesn’t double-charge for access to streaming services (you must be an XBox Live subscriber for $5-10/mo before you can use your $7.99/mo Netflix account – pure insanity). At every level of account setup, login, password recovery, network configuration, troubleshooting, game-saving, subscription- and single-purchase management, billing, channel and content navigation, launch and navigation delays, update management — you name it — Sony and Microsoft user experiences are complex and difficult for novice users to access. Both are very vulnerable to products with better or even limited and simplified user experience, and both face tremendous technical challenges in simplifying their designs due to their underlying architectures, teams, and processes.

These are all grave content execution mistakes: Too much effort on the quantity and comparability of titles (they’ve got a racing game, I’ve got a racing game). The short-term tactic of matching your competitor (they’ve got 15 launch titles, I need 15 launch titles). Not enough effort betting on and paying big for exclusive content which grows your installed base. Not enough effort to create a “minor league system” of strong and resilient independent developers and new/young developers. Making digital and video content difficult to find and purchase, or making setup and content navigation inconsistent and frustrating.

Price + Lifecycle

While many other prices for entertainment have gone down – prices for large flat-screen TVs, prices for movie entertainment including DVD/Blu-Ray players and discs, prices for casual web and mobile apps and their in-app purchases, prices for streaming-service subscriptions to Netflix, Spotify, and others – the prices for consoles, console games, game subscriptions, and downloadable game content (DLC) have stayed mostly steady over the last 10 years and have now risen for the latest console generation.

High software and DLC prices, high hardware prices and the multi-year lifecycle of console hardware were originally tied to the only business model that could deliver graphics for compelling games: high-priced custom hardware and a controlled publishing model moving high-priced software through a price-regulated distribution channel. Only this combination of price, channel, business model, and lifespan could return a large enough software attach rate and the corresponding lifetime average revenue per user (ARPU) to make consoles + first-party game publishing an overall high-margin business.
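To see why the long lifecycle was load-bearing in that business model, here is some back-of-the-envelope lifetime-ARPU arithmetic. Every figure below is an assumption I have invented for illustration, not a real Microsoft or Sony number.

```python
# Back-of-the-envelope economics of the subsidized-console model.
# Every number below is an invented assumption for illustration only.

hardware_margin_per_unit = -100.0  # assume the console sells ~$100 below cost
royalty_per_game = 7.0             # assumed platform royalty on a $60 game
attach_rate_per_year = 2.5         # assumed games bought per console per year
lifecycle_years = 7                # the long lifecycle the model relies on

software_revenue = royalty_per_game * attach_rate_per_year * lifecycle_years
lifetime_arpu = hardware_margin_per_unit + software_revenue  # 122.5 - 100 = 22.5

# Halve the lifecycle (or the attach rate) and software revenue drops to
# 61.25, leaving lifetime ARPU at -38.75: the subsidy is never recouped.
short_cycle_arpu = hardware_margin_per_unit + software_revenue / 2
```

The point of the sketch is that the subsidy only pays back if the attach rate compounds over many years, which is exactly why the lifecycle could not be shortened without changing the rest of the model.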

In the period around 1999-2000, the confluence of commodity GPU advances (which, due to Moore’s Law and exponential advancement, very suddenly matched or beat custom-designed hardware from Sony and Nintendo) and the robust ecosystem of DOS and Windows game developers, tools, and APIs was the unique crossover point we used at Microsoft to enter the console business with XBox. At that point the commodity PC CPU and GPU were less expensive in terms of up-front design & manufacturing time & expense (Intel/AMD and NVidia were already making those investments and paying them down across millions of units in the desktop and laptop categories), but were still fairly expensive on a per-unit basis. We could use the lower up-front investment to differentiate the XBox with faster time-to-market, local storage, high-speed networking, and on-line services, but we still needed the longer-lifecycle business model to recapture the overall investment and high cost of the hardware itself. The original XBox vision (as I pitched it) was to reduce the lifecycle length while maintaining forward game compatibility and ride commodity PC component prices down on volume, a strategy which would greatly disadvantage other console players dependent on custom hardware development. It would also potentially advantage Microsoft by influencing operating system, tool, and API priorities internally with the concrete pressures from devices and games needing fast boot, stability, simplified install/uninstall, and overall simpler user interaction for consumers. For reasons that still don’t make sense to me, the subsequent XBox 360 generation diverged from the Intel/PC architecture, making a deep investment in custom PowerPC hardware which bifurcated toolchains and – along with nonexistent (at worst) and dysfunctional (at best) small/independent developer support – alienated developers, sending many back to the PC and eventually towards the Valve/Steam ecosystem.

Several terrific exclusive XBox 360 games, a thriving on-line service, a surge caused by Kinect, and the fact that the Playstation 3 made an even deeper and more tragic hardware investment in the impossible-to-program Cell processor allowed the 7th console generation to stumble along from 2005 through 2013 with XBox slightly in the lead. It is telling that the 7th generation was exceptionally long and that both 8th-generation devices have now returned to a commodity PC architecture. XBox One and Playstation 4 are using virtually identical 3-year-old Intel-compatible CPU+GPU SoC components, much to the relief of (and probably due in great part to lobbying by) the largest game studios.

But both new-generation consoles are expensive items – their base prices will likely land at $300-$400 for the 2014 holiday season, with game, storage, extra-controller, and service bundles creating average retail revenue of $450-$600 per unit. At this price they will not out-perform similarly priced PC rigs or Steam machines, which will get almost all the same content since the console makers are tepid on paying for exclusives and studios are reluctant to bet heavily on either leader. The 8th-generation consoles have once again been designed for a multi-year lifecycle – these are not easy-to-justify prices for consumers, and not purchases that can be made yearly or even biennially.

Microconsoles like Ouya’s, the Apple TV with a game controller which I expect shortly, and the rumored Google and Amazon set-top-box/consoles based on commodity phone/tablet mobile SoC’s running Android or iOS are at another unique crossover point for competitors to enter and disrupt the console and set-top-box market. Just like Microsoft’s entry at a crossover point in hardware costs with the original XBox which disrupted the custom hardware and software development phase of existing consoles, new, cheaper devices will disrupt both the long-lifecycle and the subsidized hardware characteristics of the traditional console business model, and they will enter with an even larger and stronger developer ecosystem than the big consoles as they draw on experienced mobile developers. Some, like Apple TV and Ouya, will be able to sell hardware which improves graphics performance radically every year at a low (in the case of Ouya and its licensees) to high (in the case of Apple) profit margin due to their much higher volumes in phones and tablets and basic economies of scale. The prices of their core ARM-based CPU and GPU alone will be 1/4 to 1/5 the price of the PC-architecture-based chips, while exponentially gaining on their PC counterparts in performance, matching them within 4-5 years. In particular, Apple holds a strong advantage having disintermediated chip suppliers – they can fine-tune custom chips for gaming at the lowest possible price.  Others, like Google and Amazon, may sell similarly fast-improving hardware yearly at cost or at a small loss, subsidizing it with their overall software and service ecosystem. 
Apple and Amazon are, I think, also likely to offer a dramatically better user interface for purchasing content and watching video from multiple sources, which will further disrupt the inconsistent ways that cable/television, Netflix, and other streaming providers are paid for and integrated on the latest consoles, to say nothing of inconsistencies in setup, login, saving, and other common user experiences. In all cases, once the new entrants exploit this crossover point it will not be possible for long-lifecycle products to survive without making the same transition to more open development and a shorter, 1-2 year, backward-compatible lifecycle.
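As a rough sanity check on that 4-5 year convergence claim, here's a back-of-the-envelope sketch in Python. The growth rates and the starting performance gap are my own illustrative assumptions, not figures from this post:

```python
import math

# Hypothetical assumptions (not sourced): mobile SoC GPUs have been roughly
# doubling in performance yearly, while PC-class GPUs improve more slowly.
mobile_growth = 2.0      # assumed yearly performance multiplier, mobile GPU
pc_growth = 1.25         # assumed yearly performance multiplier, PC-class GPU
initial_gap = 10.0       # assumed starting gap (PC-class = 10x mobile)

# The gap closes when initial_gap / (mobile_growth/pc_growth)^years == 1
years_to_parity = math.log(initial_gap) / math.log(mobile_growth / pc_growth)
print(f"Years to parity: {years_to_parity:.1f}")  # roughly 5 under these assumptions
```

Under these assumed rates the gap closes in about five years, which is at least consistent with the 4-5 year window above; pick different multipliers and the answer shifts accordingly.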

Interestingly, if either Microsoft or Sony does adopt a shorter hardware cycle and more open development, notice how closely they start to mirror Valve’s simple Steam/Steam-Machine strategy: higher-priced hardware updating on a yearly basis surrounded by a strong, well-established developer community that already understands forwards- and backwards-compatibility; simple, open and totally free toolchains; digital distribution; a consistent architecture with enough variation to allow hardware competition and multiple price-points, but enough similarity to keep developers from having to test too many diverse configurations. My only question, honestly, is which of these two gets the courage to partner with or buy out Valve first.

Tune in next week for Part 2 where I’ll suggest some non-strategery strategies for micro-consoles to compete and for traditional consoles to shift gears to save themselves from extinction.

Apple TV + games

24 Jan

Did you know 240M televisions were sold worldwide in 2012? Almost 40M in the US alone. I’ve written before about what I’d want in a set-top-box and how XBox and Playstation could be disrupted by an Apple TV supporting apps & games. Now that the new iPhones and iPads are out and show the hardware roadmap, rumors about an updated Apple TV in 2014 are swirling, and I’ve spent more time with the XBox One and Playstation 4 checking out their gaming, set-top-box & media integration. I think the time is finally ripe for apps and games on Apple TV.

What it seems likely Apple will do:

  • Introduce a new model Apple TV with better graphics, more memory, and local storage for apps, priced at $149 (16GB) – $249 (64GB), retaining a 40%+ profit margin. Use a slightly beefed up 64-bit A7x chip like the one found in the iPad Air & mini but with even more GPU horsepower and running at a higher clock speed since it’s a plugged-in device and can both use and dissipate more power. An “A7x+” – 2x to 4x the GPU cores/power and a somewhat faster CPU. Updating the CPU/SoC has negligible manufacturing cost impact, but boosting to 4GB DRAM (+$25) and local storage/flash drives up the price slightly.
  • Introduce its own Bluetooth gamepad controller which works with older and newer Apple TVs for $79-$99. It would be brilliant to enable support for users’ existing Sony DualShock 3/4 and XBox controllers – there is evidence of DualShock 3 support in iOS, but this may be a red herring. These are both great controllers (the DualShock is simply Bluetooth), and they mostly fit the MFi specs for iOS controllers.
  • Keep the existing $99 Apple TV price point, updated to the 64-bit A7x with 1GB DRAM and 4-8GB of flash, perhaps enabling some non-graphically-intense apps and games for existing Apple TV users (on just the most recent, 3rd-generation models), though the lack of RAM and storage limits this possibility. The newest $99 model in any case won’t have the more powerful GPU or the storage capacity for more intense games – it’s the SKU for basic streaming and basic apps, and it will be easy for most consumers to prefer the $149+ versions.
  • Update the on-screen UI to support navigation with the Bluetooth game controller. A dramatically different UI isn’t necessary for this product, but Apple might roll out a more iOS 7-like look as long as it’s updating.
  • Introduce an App Store for buying games and other TV app content, with some restrictions on what can run on older/normal/$99 vs. newer/$149+ Apple TV’s – e.g. photo/screensaver apps can run on either, while racing and first-person-shooter games run only on newer models, just as games were gated on iPhone 3G vs. 4 vs. 5 depending on their use of OpenGL ES. The UI target resolution will be 1920×1080 (1080p), and this will become another “universal app” target for developers.
  • Likely some minor announcements around new or improved movie/TV/streaming/content partners, but this update will be more focused on games.
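As a sanity check on the first bullet's pricing (a $149 box retaining a 40%+ margin), here's a hypothetical bill-of-materials sketch. Every component cost below is an illustrative assumption, not a sourced figure:

```python
# Hypothetical BOM for the speculated $149/16GB model. All component costs
# are assumptions for illustration, not figures from Apple or this post.
bom = {
    "A7x-class SoC":                25,
    "4GB DRAM (the +$25 bump)":     25,
    "16GB flash":                   10,
    "enclosure/PSU/radios/misc":    25,
}
total_bom = sum(bom.values())          # $85 under these assumptions
price = 149
margin = (price - total_bom) / price   # gross margin fraction
print(f"BOM ${total_bom}, margin {margin:.0%}")
```

Under these assumed costs the margin lands just above 40%, so the claimed price points are at least plausible; the real costs would obviously shift the result.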

What doesn’t seem likely:

Some of my previous thoughts about ideal set-top-boxes included better integration with my cable box via HDMI pass-through, the ability to control other peripherals, a universal guide overlay, and unified search. I still dream of this idyllic future, but having used the disappointing XBox One, TiVo, Playstation 4, and other devices that try and fail to integrate other devices well, fail on voice, and don’t do a great job with other sensors, I think many of these features can’t be delivered – technically or business-wise – to the level of Apple’s user-satisfaction bar in 2014. Kinect-like interaction via the PrimeSense acquisition isn’t coming in 2014 for Apple. Ultra-HD/4K is not a 2014 target, either. Games and utility applications (weather, screen-saver, home-calendar) accessed with the standard remote and a quality Bluetooth gamepad are the simple no-brainer way to add new content – developers are, in fact, champing at the bit to put games and other types of apps on Apple TV with a quality, responsive controller. I have heard hints from some game developers that they are doing work “sort of like this.”

Why not just improve AirPlay from existing devices?

AirPlay can be used to project audio, photos, and video, or to project the screen contents from an iOS device to a TV through Apple TV. For showing your friends a few photos or videos off your phone or watching a slideshow from iPhoto this works pretty well, and it can also work for some simple types of games and apps. But using an iPhone or iPad as the main CPU, GPU, and input controller to run a sophisticated game (or any application with touch or accelerometer interaction) and projecting it through an Apple TV to your TV simply has too much input lag: the device must process your input and render the graphics, then the frame-buffer must be encoded, transmitted over WiFi, decoded, and sent to the TV – about 0.5-1.0s of lag in total. 4K media would make this even worse. The CPU+GPU and storage will have to be directly wired to the screen for the foreseeable future.
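To see where the 0.5-1.0s figure could come from, here's a rough latency budget that just sums the pipeline stages described above. The per-stage numbers are illustrative assumptions, not measurements:

```python
# Rough AirPlay-mirroring latency budget. Each per-stage figure is an
# illustrative assumption, not a measured value.
stages_ms = {
    "touch/accelerometer input processing": 50,
    "game logic + GPU frame render":        33,   # ~2 frames at 60fps
    "frame-buffer H.264 encode":           150,
    "WiFi transmit (incl. buffering)":     200,
    "Apple TV decode + HDMI out":          150,
    "TV display processing":                50,
}
total_s = sum(stages_ms.values()) / 1000.0
print(f"End-to-end input lag: ~{total_s:.2f}s")  # lands inside the 0.5-1.0s range
```

Even with generous per-stage estimates, the encode/transmit/decode steps dominate, which is why remoting the frame buffer can't hit console-grade responsiveness.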

What about games that have some UI on the TV and some on your iPhone/iPad?

Nothing will prevent developers from doing dual-UI with their games, and I’m sure some will (it’s pretty fun on the Wii U – if you haven’t tried Super Mario 3D World with a friend, you should), but developers will do it with applications triggering one another’s launch via Bluetooth and communicating small amounts of data peer-to-peer over Bluetooth and WiFi, with code running on both devices – not by having the iPhone/iPad project video to the Apple TV or vice-versa. There is simply too much input lag, and Apple cares about smooth and responsive.
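The dual-UI pattern boils down to two processes each running their own code and exchanging tiny state messages instead of video. In this Python sketch a loopback TCP socket stands in for the Bluetooth/WiFi peer link, and the message format is invented for illustration:

```python
import json
import socket
import threading

def tv_side(server_sock):
    """The 'Apple TV' end: receives tiny controller deltas, keeps the game state."""
    conn, _ = server_sock.accept()
    with conn:
        state = {"x": 0}
        for line in conn.makefile():     # one small JSON message per input event
            delta = json.loads(line)
            state["x"] += delta["dx"]    # apply the update locally; no video crosses the link
        return state

server = socket.socket()
server.bind(("127.0.0.1", 0))            # loopback stand-in for the peer link
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=lambda: result.update(tv_side(server)))
t.start()

# The 'iPhone' end sends a few bytes per input event -- no frame buffers.
phone = socket.create_connection(("127.0.0.1", port))
for dx in (1, 2, 3):
    phone.sendall((json.dumps({"dx": dx}) + "\n").encode())
phone.close()
t.join()
print(result)  # {'x': 6}
```

The point of the sketch: the payload per event is a few dozen bytes, so the link latency budget is trivial compared to streaming an encoded frame buffer in either direction.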

What about an actual television?

I personally think there is a great opportunity for somebody to disrupt the TV space with the smarts found in Apple TV. TV manufacturers struggle with software and UI – the smartest “smart TV’s” out there offer horrendous software and services from every angle compared to using an XBox, Playstation, Roku or Apple TV as the main device. There are lots of opportunities to beat the existing TV players, especially for the likes of Roku and Apple, who have clean UI. Apple is also in a unique position to sell high-margin flat-screen TV’s from its retail locations – many people underestimate the value of having those stores so near to consumers. That said, I don’t think it makes sense for Apple to sell an all-in-one Apple TV + screen in 2014, or possibly ever, for two reasons:

  1. it’s not a great idea to couple the long-term purchase of an expensive screen (the average replacement cycle for TV screens is 3-5 years these days) to an every-year-improving Apple TV set-top-box. Consumers will spend $99-$149 on easily-updated devices that get better and better along some axis, and there is tremendous room for hardware improvement in the CPU and GPU of this device while retaining the $99-$149 price-point
  2. the big transition coming in screens is UltraHD/4K – if Apple wants to start selling TV’s, I would expect it to do so by selling a 4K TV + Apple TV bundle and encouraging you to replace the docked Apple TV portion yearly for $99-$149 rather than having you replace the whole screen. My other guess is that they would do this kind of work only after securing enough capacity for retina-quality displays for all Macbook Air’s and iMacs, so 2015 at the earliest.

Will it compete with XBox and Playstation?

In the short-term, not exactly – the types of games that can be written for a device with even these greatly improved specifications cannot, I think, be as immersive and intense as the sports, racing, and combat titles which dominate sales on traditional consoles. You will probably hear it dismissed by gamers and gaming industry executives at launch because it won’t have the power to run those types of games. However, longer term it will have tremendously disruptive effects on consoles. In fact, life-threatening effects, such as:

  • Raising the user-interface and user-experience bar dramatically. Many of the UI atrocities I documented, and hundreds if not thousands more (like how long games take to load, how you interact with streaming services, etc), are taken for granted on traditional consoles. A simplified, more iOS-like approach to how applications are installed, save data, launch, and switch between one another will make consumers far less tolerant of existing consoles. Neither Sony nor Microsoft has shown great ability to simplify their own UI or influence the UI of their games.
  • Driving down game prices. A more open distribution channel like the App Store as well as an inexpensive but not-subsidized initial console creates an ecosystem where app and game prices will compete and get driven down. Sony and Microsoft need to recover money lost on the console itself from game sales, and they act to curate and control titles and keep prices high. A console that already makes its manufacturer 40% margin has literally no incentive for prices on content to be high – in fact they actively work to get content as cheap as possible, as free as possible, to create customer demand.
  • Shortening the console lifecycle. Apple TV hardware updates yearly, like iPhone and iPad, and it will continue with better graphics, more memory and storage and things like support for 4K resolution output. Shorter cycles do not fit the current console business model where a 5-7 year cycle makes it possible to improve manufacturing yield, decrease production costs, and recoup initial R&D.
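The pricing dynamic in the second bullet can be made concrete with some illustrative arithmetic; every dollar figure and attach rate below is an assumption, not industry data:

```python
# Subsidized console model: hardware sold at a loss, recouped via per-game
# royalties. Both numbers are illustrative assumptions.
console_loss = 100        # assumed per-unit hardware loss at launch
royalty = 10              # assumed platform cut per full-priced game
games_to_break_even = console_loss / royalty
print(f"Games per console just to recover the subsidy: {games_to_break_even:.0f}")

# 40%-margin micro-console model: hardware is profitable on day one, so every
# cheap or free app simply adds value to the box. Again, assumed numbers.
price, bom = 149, 89
hardware_profit = price - bom
print(f"Micro-console profit per unit before any content: ${hardware_profit}")
```

Under these assumptions the subsidized platform needs roughly ten full-priced game sales per console before it earns anything, which is exactly why it must curate titles and defend high prices, while the profitable-hardware platform has no such pressure.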

How is it different from Ouya or other “micro-consoles”?

An Apple TV running apps and games is actually a validation of many of the concepts behind “micro-consoles” like the Ouya, except that it will likely not be as open a platform as most micro-console proponents desire. It will offer developers a much simpler and cheaper path for development and distribution than existing consoles. What truly makes it different is that it would be a unified offering from Apple – Ouya is an Android-based micro-console, so it can bring plenty of Android developers to bear, but it has a custom app store and is a custom product, struggling to gain momentum and sales. Apple will have a much easier time selling more Apple TV’s – adding apps and games will increase the value proposition of the current device.

That’s my $0.02. I’m looking forward to developing for an updated Apple TV.