Monday, August 9, 2010

Unified Excellence: Rodman's view of the Polycom-Microsoft alliance

Microsoft has been an interesting duck for a long time.  They’re popular by many measures:  they’re big, they’re found on the majority of corporate computers, their stuff is fully featured and functional, and they make a wide variety of software tools that function with, and on, devices and platforms from just about any vendor out there.  
That last point evolves into an interesting challenge with Microsoft’s continuing move into Unified Communications, initially via OCS and now with the next step of Communications Server Wave 14.  By moving deeper into UC and the mainstream of live human communications, Microsoft has positioned themselves in a place where they're now measured by a whole new set of metrics.  Real-time human communications are different from traditional text-based channels because there is an inherent philosophical shift:  ideas are no longer funneled through the time-insensitive construct of "text", but are swapped back and forth in real time, through our voices and our vision; we're chatterboxes, our expectations become the same as we have when we're together in person, and those expectations are high.  Further, real-time communications require different techniques, different measurements, and a lot of very specific expertise to do the job well.  
If you're a company that begins raising your presence in a new part of the market as Microsoft is doing, it's not a bad idea to identify and partner with the strongest expertise and track record in that market.
In announcing a new strategic alliance with Polycom (see Information Week), Microsoft revealed exactly that strategy today.  Polycom and Microsoft have extended their strategic relationship, and have committed to a substantial dedication of resources and investments to make Polycom’s highly regarded video and voice solutions an integral part of the comprehensive Microsoft end-to-end UC solution set.  That’s a big deal; both Polycom and Microsoft are known for being selective about the partners they choose, and this collaboration holds high promise for transforming how people communicate.  I’ll be writing more in the next few days to describe a fuller picture of how and why this partnership is so compelling, but it's a classic synergy play: two market leaders, each in their own field; each is positioned to maximize the strengths of the other, and everybody benefits. 

Saturday, August 7, 2010

Freeing UC to grow

Jessica Scarpati's coverage of the tradeoffs between single-vendor and multiple-vendor UC strategies got me thinking about why "single-vendor UC" is an oxymoron.  
Something as big as UC cannot all come from one vendor.  How do I know this?   
Because nobody knows what "UC" is!  Not really, not completely.  Sure, it's "Unified Communications," but that's just words and it doesn't mean as much as we wish it did.   And yes, it's videoconferencing and video communications, integrated with calendaring and texting and voice and presence, but isn't that just more words?  
And IM, of course.  Is that it now?  No, we forgot multiscreen telepresence and real-time translation services and media archiving.  And iPhone apps.  Android apps, SIP/H.323.  And speech to text, I almost forgot that, and how it should link to calendaring and security...  
You see the point?  UC is a constantly evolving story.  As Ralph Waldo Emerson said about life:  UC is a journey, not a destination.  
Every user has a different story and takes a different journey that presents a different application, different needs, different priorities.  Every user is different; that's why users will always need vendors who can put real focus on particular applications.  In Scarpati's article, Gartner's Elliot confirms this - "you can't actually get a [full UC] solution from a single vendor, despite what they're saying."
This is why open standards are so important and why so many industry leaders are joining UCIF to ensure that UC stays open, effective and compatible.  UC is open-ended by definition, and that end needs to be fully and openly defined.  
UC's potential and its excitement are emerging because it's being created by a community, not a hermit.  Let's steer clear of hermits and their closed platforms.  

Monday, August 2, 2010

Everybody's Rich!

Recently I've heard some people say "oh, foo, that telepresence, it's just for the rich."  But really, what does "telepresence" mean these days?  
When the word was first invented thirty years ago, it was adapted from "teleoperation," used to describe a rather abstract, all-senses extension in which you not only saw and heard something at another place, but could do stuff and sense stuff from far away: you could feel its temperature, kick its tires, smell its roses.  It wasn't just video, not even just immersive video, but it carried all the elements of "being there."  But it was a rationed commodity - it was one-of-a-kind lab cookups with miles of cable in universities, it was driving your robot on Mars in science fiction, it was doctoral dissertations backed up with fragments of machinery and sparks.  
More recently, "telepresence" became applied to a small corner of teleconferencing, what's now known as "immersive telepresence."  This usage started gaining traction when Polycom's Destiny division (TeleSuite at the time) began shipping it with PictureTel components in 1993; that's what my company, Polycom, still calls it, and it's a pretty good description.  It's the whole room:  controlled lighting, spectacular audio, flawless transport and one-button operation if you want to push a button (you can have a conference without pushing that button, too).  It's a wonderful experience.  And yes, it's expensive because it carries multiple channels of HD video, spatial HD audio, and includes everything you need, right down to ceiling panels and chairs.  If you want a no-excuses, best-in-the-world experience for a high-end application like a board of directors, it's a good way to go.  But it's not the only "telepresence" out there.  
Telepresence has acquired many shades of gray since its early days, and there are some distinctions that some people are only now catching on to.  Not all telepresence is big, brilliant, and quiveringly expensive anymore; some is big, brilliant, and priced to make a CFO smile.  In the same way that cars come in styles and prices from Maserati to Kia, "telepresence" is now applied to anything from those higher-end immersive systems to a simple HD desktop system. If it's well done, it can even be used to describe a handheld experience!  
As quality and understanding continue to increase, more systems are becoming increasingly "telepresent," and as UC continues to mature, these different kinds of systems can talk easily to one another.  That's one of the great things about open standards: they allow handheld video to connect to a half-million-dollar immersive system, which radically boosts the value proposition, the usefulness, and the quality of experience for everyone who participates.
If you find yourself pondering telepresence, give serious thought to how you plan to use it, and what your specific needs are.  If you're thinking "video," think "telepresence."  It's almost guaranteed that there's a telepresence solution out there that fits your needs, and your budget.   
Don't listen to anyone who tells you that telepresence is just for the rich.  When it comes to "telepresence," you're rich now too!

Wednesday, July 28, 2010

One Word: Writing!

This has been the summer of the most lavishly successful internship program that Polycom has ever conducted.  Some very talented students have shared their skills and ideas with our people, and have learned from them, in what has turned out to be a really invigorating few months.  
Before the interns started to splinter off and return to school, I agreed to host a lunch with them.  I came in expecting six or seven ragged summer hires, seven or eight over-mayonnaised sandwiches, and some lukewarm coffee.  The food was better than that, but the interns were better still: interesting, engaged, sparking the discussion.  They totaled about 35 in five sites, brought together by Polycom telepresence.  
The lunch went pretty well, if you don't judge its success by the small amount of food I was allowed to eat.  Finally, amid the crumpling of sandwich wrappers and zipping of laptop bags, someone asked if I had any last words.  I don’t recall exactly what I said, but it was something lame, right up there with "be good to your mother and don’t drive on the sidewalk." 
You know those times when you think of the perfect riposte only after the other guy has left?  You’re left with a Homer Simpson “D’oh!” moment, all by yourself?   Well, it was only after the thing was over that I had one of those.  Too late, I remembered what I had wanted to say.  
So for those of you who were at that lunch, please edit the media stream in your mind to append this next bit.  Touch the date code, improvise inflections.  
I've got three last words: work on writing.  
The reason is simple.  Whatever you do, people can’t see it until it has passed through the filter of writing: an introduction, a note, a whole letter, an article.  There's always writing that precedes and surrounds it.  This is true whether you’re an engineer, a financier, a marketeer, even an artist.  And because your words frame your work, those words add a lot of leverage, for better or worse.   Simple errors like misspellings, faulty word choices, or flawed grammar affect the reader’s perception of the work itself.  
Simple error’s like mispelings, messed up words and/or whether you used the right tense etc make u look like I better take another look at you're work.  Even if its genius its now pushing against a head-wind.  Get it?
Writing will be an important part of your work, whatever field you go into.  You can give yourself a real boost by learning to write well, and then to write better.  Here are some of the most important suggestions I can make.
1.  The best single guide to writing I have found is a classic, “The Elements of Style” by Strunk and White. It’s short, readable, interesting, and stuffed with useful tips.  Get on top of that, and you're already leading the pack.
2.  Homophones are words that sound the same but are spelled differently and mean different things (homonyms are a related idea, but are spelled the same as well - and still mean different things).  Picking the wrong one is a quick way to crack your veneer of competence.  When I see someone who's confused "your" and "you're," or "discrete" and "discreet," I sigh.  You see this a lot in blogs, so keep your sights set high.
3.  Is there a better word?  I always keep a thesaurus nearby.  
4.  Are you unsure of the spelling?  Check it.  Sending a note with a wrong spelling is like speaking with spinach in your teeth.
5.  Finally:  re-read, review, revise.  The last couple of minutes can make or break the whole message.
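Point 2, at least, is mechanical enough that software can lend a hand.  Here's a toy Python sketch of a homophone flagger; the word list is a tiny illustrative sample I've made up for the example, not a complete reference:

```python
# Toy homophone flagger: point out words whose sound-alikes are
# commonly confused, so a human can double-check each use.
# The word list is a small illustrative sample, not a full reference.

CONFUSABLES = {
    "your": "you're",
    "you're": "your",
    "discrete": "discreet",
    "discreet": "discrete",
    "its": "it's",
    "it's": "its",
}

def flag_homophones(text):
    """Return the words in `text` that have a commonly confused
    sound-alike, in the order they appear."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return [w for w in words if w in CONFUSABLES]

print(flag_homophones("Check your work before you're done."))
# -> ['your', "you're"]
```

A real proofreading tool would need grammatical context to judge which spelling is actually right; this one just marks the words worth a second look, which is most of the battle.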
Since I'm away from the web often, I've made friends with some apps on the iPhone:  OAWT (Oxford American Writer's Thesaurus), Advanced English Dictionary, and (if you dabble in songwriting like me) "Rhymer."  You'll find other tools, too.  
To all the interns, thanks for sharing your summer with Polycom.  I once had this experience as a summer hire with some of the most gifted engineers in the world at Hughes Aircraft Company, and the experience still shines in my memory.  Be well, go forth and succeed!

Monday, May 31, 2010

Primum Succurrere: Telemedicine and fast access

Should telemedicine be prohibited because the doctor doesn't physically touch the patient?  That's the argument being made by the Texas Board of Health, according to a recent article in the New York Times.  The issue raised is that there might be subtle cues that are more likely to be noticed if the patient is there in person than over a video connection.

Like everything, this position has to be taken in moderation.  Good decisions aren't made by looking only at the possible risks, but at the possible benefits as well.  There will always be more information that could be gathered, but the real point is whether sufficient information is available to justify an action, and whether inaction would be worse.

We all make medical decisions, and we make them frequently.  My shoulder hurts: do I have bone cancer?  Should I drop everything and race to the emergency room?   I evaluate my options using the information available to me, including the fact that I spent much of yesterday digging in the back yard, and conclude that it's more likely a soreness from fixing the plumbing under my wife's lawn fountain than something more ominous down in the marrow.  I feel reasonably confident that my arm will not snap off if I give it a couple of days and see what happens.  I'm not a medical doctor and I have no scans or X-rays to support my decision, but I'm making a decision of proportional scope and, in all likelihood, consequence.

Doctors do the same.  The real questions are how good the information is and what actions are appropriate based on that information.  And - at least as important - what level of inaction is acceptable?  Is it better to take some action, even if some things remain uncertain, or to wait until fuller information is available?  It’s the kind of question that faces doctors and paramedics every day.

Along with the familiar medical dictum, primum non nocere or "First, do no harm," there’s also a second one, equally important: primum succurrere,  "First, hasten to help."  Both philosophies, whether the considered evaluation prior to treatment, or the more urgent application of palliation or assistance, need fast, accurate information, and that’s home turf for telemedicine and medical telepresence. 

One morning as a young engineer, I had a realization so vital that I made it into a sign and hung it in my office:  "Every Decision Must Be Made on the Basis of Insufficient Information" (this was before computers, fonts and laser printers, so I actually had to exercise my drafting skills making that sign).  I realized that the search for full and complete information was inherently impossible because there was always another piece of data, somewhere, that might - just might - be relevant to a decision.  I realized that part of the value I brought to the job was my ability to evaluate the information available, to decide when I had enough to take action on.  

The same is true in medicine.  There’s always a process of deciding what information is necessary, and when enough has been accumulated to support a decision for a course of action.  This is where telemedicine has become such a valuable addition to the medical arsenal:  telemedicine can cut the time for information delivery to a doctor by an order of magnitude or more.  This facilitates the processes of triage, diagnosis and treatment.  Since time is often critical in medicine, this also means that telemedicine can save lives.  It's the doctor's responsibility to determine whether the available information is sufficient and reliable, and to decide what actions to take on the basis of that information.  This decision is never the same, however; it’s different for every circumstance, and the doctor is in the best position to make it.  

Many of the major metrics of modern medicine are already available remotely, such as blood pressure, blood sugar, and pulse rate; even ultrasound scans are now available, delivered via iPhone.  Yes, there's always a chance that a physical visit might add a piece of information, but the incremental advantage of the physical visit, relative to the high cost of delay in many cases, has decreased in modern medicine; as Dr. Boultinghouse says in the NYT article, "in today’s world, the physical exam plays less and less of a role. We live in the age of imaging.”  Add in the growing availability of remote imaging devices such as microscopes and ear-nose-throat cameras, and the remote imaging arsenal is becoming extraordinarily powerful.

This is where modern telemedicine has become such a game-changer: by bringing secure,  live high-definition video, both one-way and two-way, between doctor and patient, it enables not only the directed examination necessary to understand a problem, but also a significant degree of the relationship-building and random observation that can play an important role in medicine as well.  In effect, HD telemedicine has brought much of the physical exam back into the game, even when doctor and patient are thousands of miles apart. 

Saturday, May 15, 2010

Redundant, Robust, and UC

Through the events of the past couple of years, we’re again seeing that the two essential elements of a communication strategy are redundancy and robustness.

The conventional meaning of redundancy is having a second phone as well as the deskphone, or a battery backup in case the AC fails.  But what I’m talking about goes beyond that: it’s not just separate duplicate abilities, it’s having communication paths that use different media entirely, maybe following different physical tracks or even different laws of physics. And similarly, while “robust” may mean a phone that you can drop, it doesn’t help much if the phone wire itself has been torn loose in a hurricane; the strategy, not just the device, needs to be robust.

This kind of redundant backup is something that we have in the wild, but often lose when we’re connected by technology. If we’re standing together and I talk to you, we’ve got some options when a thunderstorm strikes.  If you can’t hear me, you can see me and I can signal to you.  And if it’s dark, and you can’t see me either, I can tap you on the shoulder. So what has happened?  The audio failed, so I resorted to video; that didn’t work, so I went to touch.  Three entirely different media, and I was able to connect. Redundancy.
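That fall-through pattern - try one medium, and when it fails, drop to the next - is exactly what a UC client could do in software.  Here's a toy Python sketch of the idea; the channel names and the simulated failures are invented for illustration:

```python
# Toy illustration of media redundancy: try each channel in order
# until one succeeds.  Channels and their failures are simulated.

def send_with_fallback(message, channels):
    """Try each (name, send_fn) pair in order; return the name of the
    first channel that delivers the message, or None if all fail."""
    for name, send_fn in channels:
        try:
            if send_fn(message):
                return name
        except ConnectionError:
            pass  # this medium is down; fall through to the next
    return None

# Thunderstorm scenario: audio is drowned out, it's too dark for
# video, but a tap on the shoulder still gets through.
channels = [
    ("audio", lambda m: False),   # can't hear over the thunder
    ("video", lambda m: False),   # too dark to see the signal
    ("touch", lambda m: True),    # the shoulder tap works
]

print(send_with_fallback("hello", channels))  # -> touch
```

The point isn't the ten lines of code; it's that the fallback only works when every channel in the list can actually carry the message - which is the open-standards argument in miniature.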

Closed standards destroy "robust" because they also close off options.  Texting, e-mail, videoconferencing, presence, shared workspaces, multiple unsynchronized clients, cloud and local implementations, there’s a mess of them and they keep coming, yet they often don't work together.  And that’s before we add Yammer and Twitter and Tumblr and Flickr and Facebook and LinkedIn and Myspace, Posterous, Qaiku, Ning, Digg, Mixx, Reddit…you see the problem? The proliferation of tools and media that’s supposed to be empowering us?  It’s disabling us.  What should strengthen us instead makes us more frail.

This is why this shaking-out in human communications, called Unified Communications or UC, is so essentially linked to open standards.  It’s often presented as the next, even cooler, even higher technology, but I see it as a naturalizing, a humanization, of this flock of new and augmented communication tools.  “Unified” is the important part here.  In the same way that my arm-waving is a natural extension of my shout, UC is all about making this rag-tag zoo tie together so one way of connecting is an effortless, obvious extension of another: I don’t need to look up another phone number, URL, or Skype name.  If one tool or one vendor chooses to use its own proprietary standard and can't talk to others, it's not really Unified at all.  Perhaps we should call those implementations "Fragmented Communications."  FC?

Finding ways of ensuring confident cross-platform connection via open-standards based UC will be one of the big enablers of human communication in our future.

Wednesday, February 3, 2010

Video Silicon in the Laptop?

As video chat and videoconferencing become democratized, the question comes up: will computers now sprout dedicated silicon to perform these video-specific tasks?  As in most things, the answer is yes and no; in large part, it depends on what you call a computer.  Here's what we're likely to see in the traditional laptop.

Computers and laptops muddy the question because they can already do pretty good video processing with what they have.  The mainstream has brought prodigious speed advances, multi-core architectures, partitionable 64-bit data processing, and integral pixel processing functions that really accelerate the kind of number crunching required for VC.  So one answer lies here: yes we'll see specialized silicon to do video processing in laptops because we added it years ago, we just didn't do it as an encapsulated hardware codec.  
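To put a rough number on that crunching, here's a back-of-envelope Python calculation of raw pixel throughput for a single 720p30 stream; the ops-per-pixel figure is an illustrative assumption I've chosen for the sketch, not a codec specification:

```python
# Back-of-envelope pixel throughput for one 720p, 30 fps video stream.
# The ops-per-pixel figure is an assumed round number for illustration,
# not a measured codec cost.

width, height = 1280, 720   # 720p frame dimensions
fps = 30                    # frames per second
ops_per_pixel = 100         # assumed rough processing cost per pixel

pixels_per_second = width * height * fps
ops_per_second = pixels_per_second * ops_per_pixel

print(f"{pixels_per_second:,} pixels/s")  # 27,648,000 pixels/s
print(f"{ops_per_second:,} ops/s")        # 2,764,800,000 ops/s
```

Even with a modest assumed per-pixel cost, that's billions of operations per second - exactly the kind of load those multi-core architectures and pixel-processing instructions were built to absorb.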

The laptop exists in a continual tension between cost and performance.  A computer maker is always looking at the tradeoffs between adding abilities and improving existing ones, versus the cost of making those additions.  This is why we don't see embedded cameras on every laptop, and why the cameras we do see don't have forty megapixels and a 15:1 zoom lens.  What happens is that when they're added, these cameras get the minimum performance that will do a general-purpose job.  It's the same reason that computers make lousy speakerphones - they have mics and speakers, but they're compromised to fit the price available.  And the space, of course; nobody's advertising a laptop computer that's "New!  Improved!  A Half-Inch Thicker for Better Sound!"

Now, sure - not every user is average.  Those who want to stretch the envelope with very high-power requirements, like high pixel counts and frame rates, may need special silicon.  But if they're doing this, they'll need more than just the processing - they need a better camera to feed it, better sound, and a better lens.  And the baseline performance level is continuing to improve as well, so that off-the-shelf computers converge with an ever-increasing subset of user requirements (does anyone besides me find that their cellphone camera takes care of a surprising fraction of their snapshot needs?).  

So this all boils down to one conclusion: computer users are either satisfied with the increasingly good video they can get with mainstream laptops, or they'll invest in outboard stuff to enhance it and that's where high-end specialized processing will wind up.  We won't be seeing dedicated H.264 processors in laptop computers anytime soon, at least as we currently think of the laptop. 

A tip of the hat to Michael Graves, for asking the question that sparked this train of thought.