Wednesday, January 04, 2006

Gartner Has Named "IT"

It's official. "Retail Drives Enterprise" has an official name. In what InfoWorld called an "unusually brief and cryptic" research note from Gartner analyst David Mitchell Smith, the topic was dubbed "Globalize IT Through Consumerization".

Here is the InfoWorld blurb.
http://www.infoworld.com/article/05/10/28/44OPanalysts_1.html?STORAGE%2520HARDWARE

Wednesday, October 05, 2005

Retail Drives Enterprise

Welcome to my first post after leaving the position of Borland’s Chief Technology Officer. Assuming I haven’t been excised from the Borland site you can get a flavor of my thoughts about the tech industry and its impact on Borland’s business at http://blogs.borland.com/pjkerpan.

Why is it that retail technology from Best Buy or online sites is better than what many people have at work?

This is an effect that appears to have begun in the early '90s. Sometime in the preceding years retail technology began to drive business technology, not the other way around. If there is a canonical example of the old model, it is the facsimile machine. It wasn't until enough businesses bought fax machines that competition drove the price down into the range of the home-office user.

Personal computers followed this arc until the early '90s. Around 1991 I met with one of the heads of Hewlett-Packard's PC group, bemoaning the poor design style of their product, which we referred to as having a "babyshit brown design center". We told them we would buy 4000+ Intel boxes if they could come up with some other color – to which the HP manager responded, "But how would the users recognize they were computers?"

Since then the major PC vendors have continued the practice of offering a "business computer", characterized by lower performance and older components. In exchange, a business is supposedly buying "stability". Simultaneously, of course, the vendor is selling ten times as many of their retail/home/gaming computers. Buying the business computer is kind of like ordering the special stew at a restaurant – which is clearly made up of this week's leftover entrée ingredients.

So if there is a lesson to be drawn from today's world, is it "Businesses are stupid and consumers are smart"?

On the one hand, consumers are ruthless adopters of technology to enable their personal lives: fast PCs, iPods, nearly free inkjet scanners/printers, photo printers, TiVos, Xboxes, Ultima Online, eBay, Amazon, Gmail, etc. They either have the wherewithal to go get Trillian or they just run 3 or 4 different IM clients – doesn't matter. Alternatively, "business people" burn things off the asset register and use crummy computers handed down from R&D and Sales, to Marketing, to G&A. Rolling out a new software package takes anywhere from 1 to 3 years to install, migrate and train the users.

Meanwhile those same people go home and on the weekend switch photo sharing services, sell their "Hammer of Thor" in one online gaming environment for enough hard currency to buy the "Cerulean Plasma Rifle" in another, sell some things on eBay, buy some things in zShops, transfer their address book from Cingular to T-Mobile, and compare Orbitz, Priceline and Hotwire for vacation deals – personally besting most supercomputers at the traveling salesman problem.

What’s up with that?

The ease of access to information, the accessibility of new computing, networking and storage capabilities in the consumer market, the remote control and intelligent agency of Tivo, Orb Networks, Sling etc. will drive the business systems of tomorrow. Retail technology will continue to be the pace car in setting user expectations for their business capabilities.

“Should you encounter God on your journey” (Borland blog)

As some of you may know this is my last BDN post as a Borland employee. I have completed an 8-year journey that began in May of 1997, when I left my position as Managing Director at Canadian Imperial Bank of Commerce and started a small software company, Bedouin. That ended 16 years on the "buy side" of information technology and began an 8-year trek focusing predominantly on software development productivity. It led to the Starbase acquisition and my running of the StarTeam and CaliberRM business unit to the point that they clearly became integral to Borland's future. After that came my tenure as CTO, working to evolve both Borland's integrated "ALM" offering and the evolution toward Software Delivery Optimization.

Enough about me, let's talk about me. It became clear to me early this year that two things had happened. One, the back of my mind was filling up with "mad science experiments" that were clearly outside the bounds of any foreseeable Borland business model. Two, I was no longer in the minority of people, in the company at large or in R&D, who understood the needs of the enterprise customer and could align those needs with our product agenda. I was an important part of the team, but perhaps somewhat less critical than in prior years.

I also enjoyed the entrepreneurial experience of Bedouin, and it is something I am interested in doing again. So I am off to tease the mad science experiments out of the back of my mind into perhaps a few front-of-mind commercial possibilities. To keep track of these I invite you to occasionally check in at pjktech.blogspot.com.

Now let’s talk about Borland. Borland is chock full of engineering talent. We have established the Office of Chief Scientists as a voice for the engineering team that regularly connects with executive management. Here is the one thing you need to know about Borland’s Chief Scientists: they are so smart that their brains have brains. We have a head of engineering for a good part of the ALM products who is A) The best engineering manager I have ever had work for me. B) The best engineering manager I have ever heard of. C) The best engineering manager I could dream up. D) All of the above. (The answer is “D”.)

For Delphi we have Danny, Alan, Eli and others. We have a walking encyclopedia (database? Diamond Age ractive? one-man hive-mind?) of software – David I. He has voluminous knowledge of the past, a canny sense of the present, and deep insight into the future. We have systems engineers who make a career of walking around the planet and kicking the heck out of IBM Rational. (SIDENOTE: It is amazing – after the acquisitions that Atria – Pure – Rational did, and multiple years into the IBM stewardship, customers are confronted with essentially LAN-based products in an exploding world of distributed development.) Our St. Petersburg team is not “offshore, low cost programmers” – they are a legitimate R&D team contributing mightily to the state of the art in tooling for Model Driven Development and Model Driven Architecture. Which reminds me – some of the TogetherSoft guys in the United States are treasures: R.S, R.G, D.M, C.K. Who’s going to beat them – Telelogic? Serena VDM? No way.

I could write initials and cute hints for pages about R&D and still not have gotten to the relatively new and improved marketing team now in place, which is arming a better-educated direct-sales force than ever before.

My management peers? You won’t find better people. They are honest, driven people committed to the success of the company. Does the management team I have been a part of make mistakes? Of course it does – we are unfortunately all too human. Do they castigate themselves, self-critique to a depth greater than any of the outside critics? Certainly. What could they do better? Be more ruthless. The products are good – the team is good – the market is huge – go for it guys. Stomp the also-rans like Telelogic and Serena.

My message for the company and friends I leave behind, for the people who I hope will live on in my IM list forever …

“Should you encounter God on your journey, God will be cut."

-Hattori Hanzo, Kill Bill Volume 1

Tuesday, September 06, 2005

All Hail the Red Queen... (from my Borland blog)

“Now here you see it takes all the running you can do, to keep in the same place. If you want to get somewhere, you have to run at least twice as fast as that”

- Red Queen – “Alice Through the Looking Glass”

I spoke at BZ Media’s EclipseWorld last week about the impact of “the Eclipse Effect” and the challenges and opportunities it presents to both Enterprise IT and ISV’s as they build out the next releases of their products. After the keynote, a theme that came up in multiple discussions was the acceleration of how often companies need to reconsider their competitive positioning and their differentiable value. This was once a generational question for software companies and IT departments, arising every 8-10 years. Now one needs to manage “differentiable value” more like a series of cash flows used to value a financial instrument. Your product portfolio’s feature sets are a bucket of these value flows that need to be assessed independently and on an ongoing basis. This creates a “net differentiable value” which can be used to determine product creation priorities and go-to-market strategy.
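The cash-flow analogy can be sketched in a few lines of code. This is purely a hypothetical illustration of the idea, not anything a vendor actually runs: the feature names, value flows, and discount rate below are all made up.

```python
# Hypothetical sketch: treat each feature's "differentiable value" as a
# stream of future value flows and discount them to the present, the way
# one would value a financial instrument. All figures are illustrative.

def net_differentiable_value(value_flows, discount_rate):
    """Discount a feature's projected per-period value flows to the present."""
    return sum(v / (1 + discount_rate) ** t
               for t, v in enumerate(value_flows, start=1))

# A product portfolio as a bucket of per-feature value flows (made-up numbers)
portfolio = {
    "feature_a": [100, 80, 40, 10],   # value expected to erode quickly
    "feature_b": [20, 60, 90, 120],   # value expected to grow
}

ndv = {name: net_differentiable_value(flows, 0.10)
       for name, flows in portfolio.items()}

# Rank features for product-creation priority by net differentiable value
priorities = sorted(ndv, key=ndv.get, reverse=True)
```

With a 10% discount rate, the growing flow of "feature_b" outranks the eroding flow of "feature_a" even though its early-period value is lower – which is the point of assessing each flow independently and on an ongoing basis.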

Every few years it seems like someone comes up with another thing to apply financial derivatives mathematics to. My bet is determining optimal commercial differentiable value and the use of open source will be two more of them. Hey, any of my old friends from O’Connor, Swiss Bank or CIBC, want to take a crack at it? Give me a call.

So where has this acceleration come from? What were the pre-conditions? I have two themes I think are part of it that I will write about in subsequent posts: “Cyberphysics” and “The Kids are Alright”. Stay tuned.


UPDATE ON THIS POST: After my previous Borland posts and then this one, a core of people whose rabid concern was the future of Delphi and Borland C++ began to emerge. The main complaint was that my Borland blog should be used pretty much to talk about release schedules, features and futures. Unfortunately for those folks I didn't see that as the role of CTO, and now, in retrospect, I still don't agree with them. Borland had more than 100 people involved in the development, marketing, support and management of these products - their words and messages should be sufficient. Another thing I noticed in the comments was a refusal to accept (not understand, but accept) that officers of public companies cannot reveal specifics about product plans without doing so in a careful, planned, and legal way. Regardless, below this update I include a response to one of the comments, longer than the original post. And in a single comment by "Anonymous" here I have put all of the critiques.

My response to one of the critics:

A couple comments in response.

Yes, this is a “BDN” blog sponsored by the Borland Developer Network. That said, I really can't provide any specifics on product direction for any Borland product that have not already been publicly stated. In my handful of blog postings, as you can see, I focus on the trends or themes I see in our enterprise customer base – trends and themes which I believe will ultimately have an impact on enterprise developers, if only as a result of organizational behavior, as well as through their technological effect.

In this post I was ruminating, without conclusion, on one of the things I see our customers – as well as our ISV partners – struggling with: what is a company’s differentiable value? I still don’t have conclusions, but here is some of the underlying thought in process.

In prior generations before open source, you looked at competitors and customer demand. If a competitor had a feature you were for the most part expected to counter it with your own variant. This is “me too”. In order to have differentiable value you had to think up and implement customer-desired features before your competition, “me first”. So determining product features was a “me too” vs. “me first” balancing act. Hard to be good at, but the dimensionality of the problem didn’t make your head explode.

Given the advent of (to name a few):
- Internet connectedness as a force to:
  - focus the accumulation of intellectual capital
  - provide planetary-scale development on relatively small problems
- Growing enterprise acceptance of open source
- The “API-ification” of almost everything

How does this change “me first” vs. “me too”? It introduces, to oversimplify, “should I ever”, “who else”, and “along with”.

“Should I ever” – means “is this a capability that I believe will have a valid commercial life before commonly understood and implemented in open source?”

“Who else” asks what the other sources of components for my integrated product are – probably an order of magnitude more complicated as a result of independent open source contributions, foundation open source contributions, and corporate open source contributions, as well as partner and community contributions via the “API-ification” effect.

“Along with” is perhaps a combination of your contributions to community and open source initiatives, as well as your investments in the “API-ification” of your product lines.

All of these are new dimensions which potentially require one to BELIEVE something about the future, which in my book means you need to start reviewing all of this probabilistically to get some sense of what your expected outcomes are.

Once you are into “probabilistic, expected outcome” you ought to start thinking about the standard techniques used every day in other domains to help you. Look at Borland’s CaliberRM: one of its most compelling features is the use of Monte Carlo simulation on industry-standard historical data and company-specific historical project data to provide a probabilistic estimate of project success given the time, budget, staff, etc. Does this probabilistic approach guarantee anything? If all the inputs are completely wrong, then it does nothing. If the estimates are close, it gives you some sense of the possible outcomes.
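As a rough sketch of the Monte Carlo idea (not CaliberRM’s actual implementation, which I obviously can’t reproduce here), one can simulate a project many times with per-task overruns drawn from historical variance and count how often the project lands within its budgeted time. The task estimates and overrun statistics below are invented for illustration:

```python
# Hypothetical Monte Carlo sketch: repeatedly simulate a project by drawing
# effort-overrun factors from assumed historical statistics, then report
# the fraction of simulated projects finishing within a budgeted schedule.
import random

def simulate_project(task_estimates_days, overrun_mean, overrun_sd, trials=10_000):
    """Return the fraction of trials finishing within raw estimates + 20%."""
    budget = sum(task_estimates_days) * 1.2
    random.seed(42)  # fixed seed so the illustration is reproducible
    successes = 0
    for _ in range(trials):
        # Each task's actual duration = estimate * random overrun factor
        total = sum(est * max(0.1, random.gauss(overrun_mean, overrun_sd))
                    for est in task_estimates_days)
        if total <= budget:
            successes += 1
    return successes / trials

# Made-up historical calibration: tasks run 10% over on average, sd 25%
p_success = simulate_project([10, 15, 30, 20], overrun_mean=1.1, overrun_sd=0.25)
```

Exactly as the post says: garbage inputs give you nothing, but reasonable calibration turns a single-point schedule into a probability of success you can actually reason about.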

So as I think about the granularity of the major feature areas of all of a vendor’s products with respect to “me first”, “me too”, “who else”, “should I ever”, and “along with” – combined with the fact that all of these judgments require some belief about an inherently unknowable, albeit estimable, future – my thoughts turn to the techniques used in derivatives analysis.

As an over-simplified primer, here are two pointers. Modern financial mathematics got its jump start when Fischer Black (U of C) and economist Myron Scholes (MIT) collaborated on the insight that option pricing was essentially the same problem as the thermal conduction law of classical thermodynamics. Also (thanks to Peter Hoadley), play around with option pricing graphs at http://www.hoadley.net/options/optiongraphs.aspx .
Notice the impact of “volatility” which is your probability estimate. Notice sensitivity to “time to expiration”.
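For the curious, the Black–Scholes call formula itself is short enough to sketch. The spot, strike, rate, and volatility numbers below are illustrative only; the point is that the computed price rises with both volatility and time to expiration, which is exactly the sensitivity the graphs show:

```python
# Illustrative Black-Scholes European call pricing, showing sensitivity to
# volatility (the probability estimate) and to time to expiration.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(spot, strike, rate, vol, t_years):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + vol ** 2 / 2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)

base = bs_call(100, 100, 0.05, 0.20, 1.0)      # at-the-money, 1 year out
high_vol = bs_call(100, 100, 0.05, 0.40, 1.0)  # doubling volatility
longer = bs_call(100, 100, 0.05, 0.20, 2.0)    # doubling time to expiration
# Both a more uncertain future and a longer horizon raise the option's value
```

That asymmetry – more uncertainty makes the option worth more, not less – is what makes option thinking interesting for valuing uncertain product bets.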

This calculator is for a relatively simple set of cash flows, which in some ways can be thought of as probability flows. Complex financial instruments require you to analyze thousands of future cash flows, with future beliefs about potentially numerous cross rates, interest rates, inflation rates, credit ratings, etc. At that level of complexity, with that many guesses about the future, it is necessary to know your maximal risk exposure across numerous probability paths.

So… since option theory is increasingly used in other domains to understand types of probabilistic risk exposure, I was wondering if it is about time to have a more formal, statistical approach to product management, given what I believe is the increasing dimensionality of that problem. Does this affect developers directly? If I am wrong and we live in the simple world of “me too” vs. “me first”, then certainly not. On the other hand, maybe it is worth a thought with respect to the products one is making a human and financial commitment to – whether products built or products bought.

And, because of my “give a moose a muffin” type thought processes, I also wonder: how did we get here? How did this acceleration start? Why are there so many people willing to work in a collaborative way to solve problems like an open source OS, an open source appserver, etc.? Where did they come from? What are the generational or cultural shifts that have caused the acceleration of openness, API-ification, and collaborative connectedness? Which, as I said, I will post some gobbledygook about in the future.

Thanks for the comments.

pk

Thursday, May 05, 2005

Billions in Bits (from my Borland blog)

As Borland’s SDO vision evolves, one of our stated areas of interest and investment is how to unlock the value of the business information embedded in our integrated ALM solution. Clearly this will involve features and functions of, and integration to, the emerging world of enterprise governance solutions such as Mercury ITG, Niku and others. Seeing this trend as the “next big thing” is perhaps bittersweet for thinkers like Paul Strassmann, who has been highlighting the need for such approaches since the late '80s. I have to admit I am a big fan of Strassmann, and in this posting I highlight some pointers to his work on this topic from the late '90s. His multi-decade quest has been how to value the entrusted information capital that resides in our organizations.

To illustrate the massive impact of information capital on an organization’s value, look at Google. With just shy of $3 billion in assets, it has a market capitalization of over $60 billion. In financial reporting, the gap between these two numbers is never formally recognized unless the assets are sold. Upon sale, the gap between asset value and sale price would be stuck into the accounting catch-all of “goodwill”. Though intangible, these assets hold real – and obviously significant – value to the marketplace. What are these hidden assets? Why does the stock market recognize $57 billion in intangible assets? Part of this number is the market’s present value for future earnings on existing and new revenue streams. However, that is still not all. Many economists are quite comfortable ascribing the remainder of the hidden assets to the value of Google’s intellectual capital.

Accepting that a significant component of a company’s economic value is the intangible asset of its staff’s current and future capabilities raises some significant questions about how companies and the people they employ are managed. I would like to highlight these questions and illustrate some key thoughts regarding the role of intellectual capital in companies; especially in the ever-burgeoning multi-billion dollar operating budget environment known as the “back-office”. First, I’ll say a little about what I call your corporate economy.


What is a Corporate Economy?
One of the traditional organizational breakdowns companies use is “front-office” and “back-office”. Another popular concept is “profit center” versus “cost center.” Unfortunately, these segmentations suffer from a number of issues. Within many organizations they have become politicized or become pejoratives. (How terrible to be one of the “costs” of the organization, but not part of the value!) Even worse, in light of emerging thoughts on the value of intellectual capital, these traditional distinctions may not be optimally useful in determining corporate strategy.

I think a useful dividing line is between staff who are intimately involved in customer transactions (transactional staff) and the staff who provide the services that enable those transactions (service teams). This categorization seems especially pertinent in companies whose profitability stems from transactions in intangible products (interest rate swaps or foreign currency options, insurance policies, DNA sequences, information models of distribution networks, etc.). With this as a definition of your corporate economy, there are three questions that – when asked, and well answered – give structure to your corporate economy.

What is the nature of my relationship with my clients? This is answered in part by your business transaction systems which tell you what you sold, to whom, for how much. The emergence over the last decade of customer management systems indicates that the transaction systems are only part of the answer, and companies are searching for the other half of the answer.

What is the nature of my relationship with my competitors? To the extent I can legally know: what are they selling, to whom, for how much? But also, what are their product plans, business emphases, and keys to continued success? Software to help companies answer these questions is just beginning to emerge under the loose category of “competitive intelligence”.

And finally, what is the nature of my relationship to my internal service sector? What are the key activities of my “service teams” that support my “transactional staff”? Which services are most needed in order to produce more, higher quality, customer transactions? Today’s crop of Project Portfolio Management systems and Enterprise Corporate Governance systems are attempting to provide software systems to help answer the third of these questions. These systems are evolving to help business management know and evaluate its service sector, and equally as important, allow business management to evaluate its effectiveness in using the corporate service sector.


How is the value of intellectual capital defined?

Barron’s defines intellectual property as “a special type of intangible property, arising from the creative endeavors of the human mind.” If intellectual property is the end product, then it can be inferred that the “equipment” asset used to create the product is the human mind.

Paul A. Strassmann, a 40-year veteran of the information technology business in both the government and commercial sectors, defines information capital, or to use his trademarked term Knowledge Capital®, as forming when “employees think or talk about how they are delivering goods and services. This usually occurs when workers are engaged in overhead tasks, not when they’re actually delivering goods or services”. The delivery of goods and services is usually considered a “front office” (transactional staff) function; overhead tasks are usually considered the realm of the “back office” (service teams).

Looking at the Google example suggests that, in a world divided into front and back offices, the excess market capitalization widely recognized as the intellectual capital value of the company is wholly attributable to the “front office”. But could this be true? It is unlikely. Some value must accrue in the “back office” to enable the transactional staff to be productive in generating revenue. Once the concession is made that a company’s service sector staff have an impact of millions to billions on corporate stock market capitalization, then it is time for a change.

The ability to leverage service sector intellectual capital takes the stage, displacing the overly simplistic cost management approach commonly and broadly applied to large segments of the company. Organizations that treat intellectual capital as “out of sight, out of mind” are left wondering how the competition seems to make great gains “out of nowhere” or “from nothing.”


How is intellectual capital measured?
It is widely understood that the cost of acquiring knowledge and the profit-generation potentials of such knowledge are unrelated. The value of intellectual property is in its use, not in its costs.
— Paul A. Strassmann

Intangibility is often cited as the reason intellectual capital is difficult to measure. Intellectual capital deviates from traditionally noted sources of capital because, instead of being implemented to accomplish a defined task and then depreciated over time, it can expand beyond its original role or become a source of innovation in new areas (for example, NASA innovations which work their way into consumer products). Knowledge assets, unlike what we traditionally consider capital, have infinite potential. Yet, as seen with many skill sets, particularly technological ones, intellectual capital can also depreciate – making it all the more important to understand, control, and capitalize on how your information resources are used. How and whether the potential is reached will increasingly be the advantage or downfall of any knowledge-driven company.

The Google example illustrates how traditional accounting lumps intellectual capital assets in with an organization’s other, more tangible assets. The consensus as to whether or not knowledge capital should be accounted for, and with what measurement methods, is about as tangible as the asset in question. What follows are three different approaches used for valuing intellectual capital. They are good examples of the approaches that exist; it is hard to say which is the most appropriate for an organization to use.

Approach #1:
Customer relationships & growth-in-value

Skandia, a Swedish insurance company, is the only company in the world I know of which has reported intellectual capital value in tandem with its financial reports. The inconsistency that occurs when valuing intellectual capital was in the past acknowledged on Skandia’s Internet site: “Many Swedish companies on the Stockholm Stock Exchange are valued at 3 - 8 times their book value, i.e., the financial capital. This implies that there will be huge hidden values in companies, which are not visible in traditional accounting, yet increasingly larger investments are made precisely in these hidden assets. Such investments concern, [for example] customer relations, information technology, networks, and competence.” Skandia has “long maintained that our truly sustainable earnings are derived from the interaction between our intellectual capital and financial capital.” Their synthesis of this relationship was first published as a supplement to their 1994 annual report.

Approach #2:
Intellectual capital is calculable

Strassmann has conducted extensive research toward defining intellectual capital as a calculable number. His study of 359 U.S. industrial companies revealed that the collective intellectual capital of these organizations was valued at $1.7 trillion, or 217% of their net financial assets. Strassmann reports that at the end of 1995, Standard & Poor's 500 companies (about 70% of all publicly traded companies) had a combined $1.12 trillion in fixed assets and a market value of $4.5 trillion. When categorized as overhead, intellectual capital is typically charged against profits. Some argue that this is inaccurate because, while there is a cost to acquire intellectual capital, writing it off as an expense ignores the ongoing value the intellectual capital has. Strassmann’s calculations are a means of quantifying the value of intellectual capital, which, as mentioned earlier, is only captured in the sale of a corporate entity, and then only within traditional accounting as “goodwill”. Redefining intellectual capital investments with calculable benefits can better reflect their proportion of market value. For those interested in his actual calculations, see http://www.strassmann.com.


Approach #3:
“Stick to the basics”

Strassmann and other information experts have created intellectual capital valuation formulas (primarily based on traditional accounting and economic equations) and organizational theories which illustrate their perspectives on information capital. This terminology helps separate and frame the uniqueness of this intangible resource. However, the art and science of accounting has made it 500 years without these innovations and there are those who question its formal introduction onto the corporate balance sheet. It is interesting to note that Microsoft, perhaps one of the most iconic examples of a balance sheet rich with intellectual capital, does not believe such measures should be formally introduced.

Microsoft’s previous chief financial officer Mike Brown said during his tenure that sticking with the basics is the best way to thrive in a constantly changing and complex business environment. By focusing on the extremes – at the individual level and the big picture – Microsoft vigilantly manages resources at the immediate level while “keeping internal financial statements simple and meaningful.”

Regardless of the approach, perhaps the takeaway point is that having an inventory of your knowledge assets and analyzing how those assets are invested toward your organization’s success is critical. Whether done with Strassmann’s relatively simple balance-sheet ratios, Skandia’s detailed profiles, or just good people management, the exercise requires a shift in perspective from how organizational assets are typically viewed.


What is the source of my information capital?

Take my assets, but leave me my organization and in five years I’ll have it all back.
— Alfred P. Sloan

As a manager you are already responsible for optimal utilization of your intellectual capital, but the tools or approach may not effectively account for information capital performance. The right tools and approach, incorporated into your organization’s daily heartbeat, can help you act on knowledge which enables your intellectual assets to perform at organizational expectations and beyond. Studying changing valuation approaches and accounting methods helps us gain an understanding of how all organizations are part of the knowledge economy, not just those whose end products are obviously knowledge products, such as Google. The companies that not only lead, but define their industry or product, have as a core competency the ability to effectively and immediately evaluate the effective use of their human capital.

The approaches, methodologies, and even service providers to assist you with these evaluations are diverse, and the list is exploding. Every organization’s objectives and environment are different; you should select the thought process that is most comfortable for you and best accommodates your needs.

Recognizing knowledge as a strategic resource is the first step, but to manage knowledge as a valuable asset requires evaluating where (and in whom) the knowledge value lies. PPM/Governance tools are specifically designed to collect information about people, their time and their actions for strategic analysis, making them the ideal springboard for whatever overall solution you implement.


Focus on intellectual capital and intellectual continuity may pay off!

The knowledge society will inevitably become far more competitive than any society we have yet known — for the simple reason that with knowledge being universally accessible, there will be no excuses for non-performance.
— Peter F. Drucker

This Drucker quote clearly was a harbinger of the recent “world is flat” discussion. Perhaps nurturing the human spirit is not untrue to the bottom line. Thoreau’s comment on the majority of people leading “lives of quiet desperation” may not be specific to vocation or level of visibility, but perhaps indicates the universal need for self-value. If so, giving employees an understanding of their role in value creation is a truly empowering move. And in the emerging era of competition, maybe a rational, justifiable, capitalist strategy.

After years of inconsistent motivation and recognition programs, employees risk becoming jaded and leery of anything claiming that an organization “cares” about them as human beings. Recognizing humans as value contributors, versus wage-earners-detracting-from-profits, emphasizes what is uniquely human: the ability to reason beyond the boundaries of pre-defined formulas.

This rebellion against pre-fabbed idea boundaries and support of self-motivation is often evident in “Generation X” or “D” co-workers. We seem to have a mixture of awe and disdain for these employees; a twenty-four-year-old who was considered an arrogant maverick in one environment becomes a young billionaire in another, bringing to the picture nothing other than himself. That “nothing” sometimes represents over two-thirds of an organization’s assets. That nothing can magically accelerate a sleepy organization into a market leader in a few short quarters. Invoking this financial magic from nothingness has created business practices for consulting firms everywhere. PPM and Governance aren’t magic. They require involvement and commitment to achieve real results. But, as a tool and as part of an evolutionary process, they can work with the magic that is already within your organization: your environment, your processes, your culture ... and your people.

Thursday, March 24, 2005

“Where does the newborn go from here? The net is vast and infinite.” (from my Borland blog)

These are the final lines of the popular Japanese anime “Ghost in the Shell”. This seems appropriate to describe the momentum, interest, and uncertainty around the Eclipse project. David Intersimone and I spoke recently at EclipseCon 2005 on this topic.

Our presentation “The IDE is Dead, Long Live the IDE” was well received with people sitting in the aisles and standing in the back and the doorway. (Memo to self: Always get your session put in a small room so it looks exceedingly popular).

David gave a smart overview of where we have come from with IDEs over the past 20+ years and made at least one super-duper smart point. When you take a step back and look, we have retooled the underlying tool framework of the IDE almost every time we have changed from one software engineering “epoch” to another. A good example is the OO environments we had on the LISP machines, or for Prolog and Smalltalk. They were aptly built and customized for the emergence of the object-oriented age. As “OO” ceased to be the defining invention, and as we moved into perhaps the “distributed epoch”, the OO vendors did not necessarily have the economic oomph, nor the point of view, necessary to incorporate these dimensions into their products. A new generation of IDEs emerged that were more optimized for database or distributed development, and which unfortunately ignored a number of the useful features of the previous epoch.

David made the optimistic point (with which I concur) that the Eclipse project represents a significant chance that we now have a tool framework which will outlive our current engineering epoch and allow us to tool into the next epoch without rebuilding the entire foundation. Don’t misunderstand us – PRODUCTS have survived epochs before, for example Delphi, JBuilder, and Visual Studio. However, whilst the products moved forward across epochs, their underlying foundations were rebuilt multiple times.

Next item to address is the fear that “Oh my god the sky is falling – Eclipse is commoditizing the core IDE functions and now Borland will have nothing left to sell”. Not true at all. In case you missed it, we have grown up the stack since the days of Turbo Pascal. Download it and take a look – how could there have been anything left to build after that? Same thing with Eclipse – there is a lot left to do to make commercial products that support the individual’s productivity, team productivity, and align the projects for business success.

Another thing about commoditization is that it has several possible outcomes: it either drives capital investment to the sidelines, which is bad, or it causes capital investment to chase higher-value solutions. I think if you look at the activities of thousands of entrepreneurs, privately held companies, and publicly held companies innovating on top of Eclipse, you don’t see capital in slothful repose.

As we see the emergence of tooling foundations like Eclipse and Visual Studio as platforms, the challenge falls to Borland to help customers bridge the gap between disparate platforms and find ever higher levels of value.

Domo arigato. Oyasuminasai.

Saturday, January 22, 2005

If You Give a Moose a Muffin (or If You Give a CTO a Weblog) (from my Borland blog)

For those of you who have had small kids, you probably know the books that tell the tales of distraction that occur if you give a mouse a cookie or take it to a movie, or alternatively give a moose a muffin or a pig a pancake.

That is what happened to me recently. (As well as closing Q4 '04, visiting numerous customers and partners, and surviving sales kickoff '05.)

Days after my last post I was all set to write about some of the new capabilities in J2SE 5.0. One of my regular curmudgeonly complaints of “these darn kid programmers and their fancy college comp sci degrees” is that they don’t know how to write multi-threaded systems – neither clients nor servers. One of my proof points has been the lack of a built-in queue data structure in the language class libraries. So I was excited to see these new features in the J2SE 5.0 release.

With keyboard in hand I was all set to write an entry on “My Favorite Data Queue”.

So the first thing I do is look at the new queue structures, only to find the J2SE queue designers have gone about this very differently than I would have (proving that one man’s abstract class is another man’s instantiable). They are nice queues, but my favorite one is missing: the “Associative Answer Queue”.
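For anyone who hasn’t poked at the release yet, the queue types that did ship in J2SE 5.0 live in java.util and java.util.concurrent. Here is a minimal tour – the class names are the real J2SE 5.0 API, while the queue contents are just made-up sample data:

```java
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueTour {
    public static void main(String[] args) throws InterruptedException {
        // ConcurrentLinkedQueue: unbounded, lock-free, non-blocking FIFO.
        // offer() enqueues; poll() dequeues or returns null when empty.
        Queue<String> inbox = new ConcurrentLinkedQueue<String>();
        inbox.offer("first");
        inbox.offer("second");
        System.out.println(inbox.poll()); // prints "first"

        // ArrayBlockingQueue: bounded; put() blocks when the queue is full,
        // take() blocks when it is empty - the classic producer/consumer pipe.
        BlockingQueue<Integer> pipe = new ArrayBlockingQueue<Integer>(2);
        pipe.put(1);
        pipe.put(2);
        System.out.println(pipe.take()); // prints "1"
    }
}
```

ConcurrentLinkedQueue gives you the lock-free, non-blocking flavor; the BlockingQueue implementations (ArrayBlockingQueue, LinkedBlockingQueue, and friends) give you bounded hand-off between threads. All of them dispense elements in FIFO (or priority) order.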

The Associative Answer Queue was one of the first pieces of code I wrote on my trusty Symbolics LISP machine back in the mid 80’s, so maybe it’s time to do a Java version. Since I have been using JBuilder for years, I thought I would take the opportunity to download a copy of Together Architect to see what its Java modeling experience is like. I then went through the joys of downloading it (maybe the subject of a future post). While I was waiting, I poked around the downloads and found myself wondering about the newest Together products, such as the upcoming Together Designer.

I sent off an email to the team to get a look at the field trial version of Together Designer 2005 which I downloaded right after Together Architect. Once I started up Together Designer 2005 field test of course I had to go through all of the latest samples and examples, especially some of the new UML 2.0 capabilities. As I used it – I thought about its use of the underlying Eclipse platform which made me think about the Eclipse 3.0 Core platform.

Spending time on the net reviewing the latest bits in the Eclipse 3.0 core platform got me thinking about other Borland products like StarTeam, which has an exceptional embedded user interface for Eclipse – especially the “synchronize” view, a really cool piece of version control UI. Which made me realize that I needed some of the marketing screenshots to include in my latest PowerPoint decks, because I love to highlight our combined support for the 3 leading IDE platforms: BORL, Visual Studio, and Eclipse.

One StarTeam image in MS PowerPoint led to another because I realized I only had early alpha shots of the capabilities which are now shipping with StarTeam 2005 as “Borland Search Server 2005”. In case you hadn’t heard of it, Search Server gives you search access to all of the meta-data in CaliberRM and StarTeam repositories as well as the file vault and attachment contents. With a “consumer” and “producer” SDK there is also a way to create “harvesters” for other enterprise systems such as help desk systems like Clarify. Truly neat to have sub-two-second response time on a version search without having to resort to the brutal hammer of “get lots of revisions – and then find in files”.

Which of course got me thinking about the next version of Search Server (no announced date or feature set for sure). But if the next version of search server made use of its indices and its scan phases over the content to do some interesting relationship discovery, I would be thrilled. That said, I don’t think it would be giving anything away to say that one of its prototype UIs uses scalable vector graphics (SVG). Which of course, I needed to download and install on my machine to review the latest Search Server prototype.

The SVG download of course made me think of the upcoming Avalon UI from Microsoft, which in turn had me think of checking in with www.xamlon.com to see what their latest product version is doing. Looking at the declarative user interfaces in XAML of course made me grumble and say to myself, “oh yeah, all of those angle brackets and special characters are a lot easier to understand than parentheses and the ‘#’ sign from LISP.”

This turns my thoughts back to the LISP code I had written which is lying around the house ensconced in DC300XL/P cartridge tapes from the 1980’s. Actually, rather than re-implement my favorite queue from memory, I should finally get the files off of tape. Time to hit the Symbolics Online Museum and see what type of tape drive those tapes used.

While finding the tape drive at the online museum I see a reference to Symbolics’ competitor Texas Instruments and its TI Explorers. Which of course reminds me of the “E3/Explorer 3” project, which was building a software emulator for the old TI hardware. Unfortunately the site www.unlambda.com/lispm seems to be gone. Another thought pops into my head about TI – that’s where my current CEO Dale Fuller spent time in the wayback, before we knew each other. While I was a tech support guy for Symbolics, Dale was busily running some part of the TI Explorer group. Maybe that’s why he and I can usually understand each other – we have similar LISPs.

Which reminds me that Dale is paying me for more than weblogging – and that I had better get back to work. Of course, not before deciding whether to buy a shiny, re-furbished LISP machine from www.symbolics.com.

I guess I’ll have to write about my favorite queue some other time.
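In the meantime, here is one purely speculative guess at what a Java “Associative Answer Queue” might look like – every name and design decision below is my own assumption, not the original LISP design. The idea sketched: consumers block until the answer associated with a particular key shows up, rather than taking whatever element happens to be at the head.

```java
import java.util.HashMap;
import java.util.Map;

// Speculative sketch of an "associative answer queue": consumers wait on a
// key rather than on arrival order. All names here are hypothetical.
public class AssociativeAnswerQueue<K, V> {
    private final Map<K, V> answers = new HashMap<K, V>();

    // Producer side: deposit the answer for a key and wake any waiters.
    public synchronized void put(K key, V answer) {
        answers.put(key, answer);
        notifyAll();
    }

    // Consumer side: block until an answer for this key arrives,
    // then remove and return it.
    public synchronized V take(K key) throws InterruptedException {
        while (!answers.containsKey(key)) {
            wait();
        }
        return answers.remove(key);
    }
}
```

A thread calling take("question-1") simply parks until some other thread calls put("question-1", answer). J2SE 5.0’s stock queues hand elements back in FIFO or priority order, not by association – which may be roughly the gap being lamented above.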

Friday, October 22, 2004

Si fallor, sum. (from my Borland blog)

I have doubts about how all the things I want to include in this post really connect. But they fall in the realm of thoughts that line up near each other, within a conceptual arm’s length.

One of the connecting points for me is a technological/sociological contrast of today: “Did offshoring create collaboration, or did collaboration create offshoring?”

Numerous articles have been written about how the plummeting cost of telecommunication has been a key enabler of staff offshoring. But cheap phone calls are only part of the trend.

Simultaneously, transistors are getting both bigger and more powerful, and smaller and faster. And these shifts in each dimension have sweeping implications for how our world will continue to change. Big transistors are being used to manage what amounts to digital power (insulated gate bipolar transistors) – the underlying innovation that is making hybrid automobiles possible.

Small transistors, which can use light, are coming our way courtesy of the inventor of the LED. As described in a recent MIT Technology Review article:

“The light-emitting transistor will be the focus of a new lab at the University of Illinois, where the versatile device was recently invented. Researchers at the Hyper-Uniform Nanophotonic Technology Center will work on improving and exploiting the transistor, which can pump out both electrical and optical signals in response to an input and could make it possible to replace the electrical wires in circuit boards with much faster optical pathways.”

Wow. Maybe not in my next Dell laptop – but I am betting that we see impact on the collaborative capabilities of computing from these inventions in the future. Better use of power, and faster computing pathways, both potentially end up in everything from laptops to factories. When is enough, enough? How will we manage all the computation and collaboration thrown off by these advances, almost like a cultural form of heat? What will we do with it?

Let’s look at some innovation in the “how we manage”. Telecom players, and providers to telecom players, are simultaneously scrambling to keep TCP/IP-based communication from eroding their revenue streams, while trying to create ways to charge for the manner in which TCP/IP-based communications is eroding their revenue streams. Two players have introduced capabilities that combine traditional telephony, IP-based telephony, point-of-presence information and multi-device management software. They are Nortel with its virtual enterprise efforts (think routers and switches for people), and Siemens with its HiPath Openscape which similarly attempts to unite presence with communication routes. Of course, all compliments of the IT Industry’s best and brightest who have clawed their way out of the primordial communications swamp, through the ooze of H.323 to emerge onto somewhat solid ground of SIP. (Hey Borland engineers, how about a little SIP with my ALM?)

What about “what do we do with it”? Here are a couple initiatives which are making use of the collaborative world we are creating.

If the dark side of our advances is found in some forms of outsourcing, then certainly the bright side is found in programs like the Digital Vision Program at Stanford (in conjunction with the Reuters Foundation) led by Stuart Gannes. It is a nine-month fellowship course that brings together 15 experienced technologists and social entrepreneurs from around the globe for technologically interesting and socially valuable projects. Stuart is a guy with plenty of talents – that he chooses to apply them in this endeavor is a tribute both to his qualities as a person intent on making a better world, and to the capabilities we are imbuing in the world with collaborative technologies.

Not so far reaching, but an interesting thread of the tale, comes in the formation of Avalanche Technology Corp., a for-profit Minnesota-based cooperative. Created by a group of Midwestern CIOs (folks are so darn friendly and practical in the Midwest), its purpose is to share pools of non-competing software capabilities in order to manage in-house development costs and risks.

And also from the plains states comes another attempt to keep IT jobs in America – albeit in new locations – through “farm-shoring”. It can be difficult for businesses in the large metropolitan areas, where many IT organizations are, to keep staff costs competitive. High housing costs and long commutes are at least partial culprits. Fact of the matter is, life isn’t nearly so expensive in Bagley, WI or Jonesboro, KS – unfortunately there are few relatively high-paying jobs there. Kathy Brittain White, former CIO of Cardinal Health, is out to change that with her new company Rural Sourcing Inc., which has set up two development centers in rural Arkansas. Maybe this is a way to keep the kids close to the farm? If this catches on, it sure beats the heck out of fighting for the remaining jobs at the leather tannery in Sheboygan. It could also keep intact families and communities that have been under assault from our shift away from rural manufacturing centers over the last decades.

(Mom, clean my room, I’m coming back to Oklahoma!)

In any event, these ideas live for me in close conceptual proximity, and thanks to RSS, the blogosphere, and search engines they will at least briefly live on in text proximity search. No doubt.

Thursday, October 14, 2004

I'm "all in". (from my Borland blog)

There is a creaking sound around me these days. (Real creaking? Well no, that is just a metaphor.) The source of this is the discontent in many quarters with the metaphor of software factories, ERP for software, or industrialization of software.

Back to software factories in a second. It is somewhat surprising that people who live in the magic world of abstractions, as software developers, programmers, coders, and hackers do, would object so extraordinarily to an abstraction – like a metaphor. (I just realized I can barely write a paragraph without utilizing simile through comparisons using “such as” or “like”, but I digress.)

Anyway, the reasons are irrelevant to the following point, but suffice it to say that for a sophomoric prank not too long ago I found myself the proud owner of a computerized voice changer for my Verizon Kyocera phone. I put in the batteries, turned it on, and proudly called my wife upstairs. Chuckling, I spoke to her only to have her interrupt and say, “It sounds like you.” Darn. I flipped it on and off a couple times, back to on, and spoke some more – but it still sounded like me. I took the batteries out, rubbed them, rubbed the contact points, put the batteries back in, turned it on, spoke – still me. Because my wife is a good person, many years used to me, she sat there on her phone, waiting for me to get over the disappointment. Wondering why it wasn’t working, I stood there flicking it on and off. All of a sudden my wife yelled, “There – you sounded like a robot!” I looked at the device – and it was off. “Ahhh,” I thought, “cheap, foreign, imported electronic device.” They put the contacts in upside down, they mislabeled the battery compartment – and “off” really means “on.”

After this success – for some reason – I decided to read the instructions. And at this moment I realized how hard metaphor is to deal with. Because staring me in the face was the fact that even concrete, binary concepts like “ON” and “OFF” are culturally and experientially based. The instructions told me, “in order to operate, turn your voice ‘off’”. So it was not mixed-up terminals, it was not bad labeling; it was the fact that, from the point of view of the device maker, the product concept was about turning my voice on and off – not the device.

So, if we can’t define “off” and “on” clearly across different experiences, how do we share metaphor? The answer is: in dribs and drabs, gradients, and shades of grey. We may sometimes misunderstand metaphors, we may sometimes find them stretched, and yes, they can definitely be pushed (metaphorically speaking) too hard.

Back to software factories. Why are so many perturbed by this? Do you really think the objective is to turn all of us into something halfway between the drones of Lang’s Metropolis and Marshall’s Laverne and Shirley watching bottles sail by at the Schotz brewery?

Here just so we understand – let me turn my voice “on”.

Calm down. We are talking about the entire delivery process for software – not the writing of the actual lines of code. So – here is what I believe is a strong approximation of what Borland thinks about this.

Software developers, coders, hackers, programmers do something extraordinarily magical. They take abstractions and concepts and worry them, work them, and massage them until they turn into something almost real. Something so close to real that clicking a mouse button can cause thousands of real world actions to occur. Fundamentally, your job as developers is to push metaphors until they are willed into existence. So forgive those of us who once wrote code, who no longer write code, and maybe never did, if we occasionally push a little too far.

Believe me. We know what you do is magical and cannot be replaced by a factory filled with drones. But think about it. Are there really no non-creative, redundant, rote, boring activities left in coding? Haven’t we used advances in development technology to eliminate those things in the past? Aren’t there some left?

And then when we look at all of the software delivery process – requirements, documentation creation, builds, code reviews, deployment, collaboration, audits, metrics, and so on, aren’t there maybe a handful of things that can be more rigorously automated – without creating a soulless environment?

And what about the world we live in? Let’s say I get a job on a huge, super distributed transaction processing system – a really cool system that will be linked to dozens of financial exchanges around the world and requires local regulatory customization – so the team has many super duper brilliant people working on it around the world – each capable of extraordinary invention. Aren’t there “factory-ish” things we can do to help these often anonymous participants in the projects actually get a sense of each other? Can’t we bring some physicality to the virtuality – can’t we improve the collaborative infrastructure? Aren’t there “industrializing” things we can do to make this process go more smoothly project after project? When one of the team members leaves – is it a necessity that all the remaining participants are harmed by the amount of knowledge lost? Can’t a little bit be left behind – all without loss of soul or dignity?

I think it can happen. And I can’t think of a better group of people to try than Borland’s engineers in Santa Ana, Cupertino, Scotts Valley, Raleigh, St. Pete’s (Russia), Stockholm, Singapore, Brazil, Chicago, Atlanta, and places I am sure I am forgetting.

I think we can help individuals be productive, teams be productive and businesses be productive. But – hey, that’s just me. From my point of view it is all a game of Texas Hold-em and I, for one, am all in with my Doyle Brunson.

Monday, October 04, 2004

Welcome to the Blur (from my Borland blog)

I wanted my first Borland blog entry to provide some insight into the kinds of issues we are now seeing in our global customer base.

I have been fortunate enough since Q1 of this year to participate in many IT executive forums, CIO councils, and executive briefings for customers. After speaking with the first 50 or so CxOs, VPs of Application Development, and Development Directors, the theme of their business reality became clear – they called it “The Blur”.

The “blur” designation came from a C-level executive attending a Dallas event. He went on to explain that it arose from a confluence of two effects driving his team: the increasing speed of his company’s business cycle (not the IT delivery cycle, but the actual product change cycle for the business they were in), and the increasing rate of change of technologies – in fact, even standards (or near-standards) coming too quickly to track.

As this year has advanced the metaphor has been echoed in meeting after meeting. The conversation goes something like this:

- My business delivery cycles are getting ever faster.
- The economy seems to be heating up – but where are my increased resources? (Hint: There may not be any.)
- Do I need to be thinking about CMMI certification?
- Do I need to go offshore? How do I stay onshore?
- Ouch! Sarbanes-Oxley regulation is starting to cost real money.
- What about Java vs. .NET?
- What about Weblogic vs. Websphere vs. JBoss vs. JOnAS?
- Is SOA real? When is it real? What do I do with it?
- And so on…

At the same time this ad-hoc community of IT executives seems to be homing in on practical strategies for confronting these challenges. Common themes are apparent in how these managers are coping. Interesting to note is how their strategies are often “refinement” strategies – not major shifts, just subtle changes in point of view. (But remember the saying of the great technologist Alan Kay: “Point of view is worth 80 IQ points.”) Here are some examples:

- They are employing a “product” model for development, not just a “project” model. (Probably a whole posting on this difference in the future.)

- They are also differentiating process work from project work – with an extraordinary desire to codify and communicate the iterative and perpetual work items that make up their processes.

- They are developing approaches for managing requirements as opposed to merely capturing them.

- Along those lines, they are shifting to testing requirements as well as testing code.

- They understand that bug tracking is only one of many workflows in the broader domain of managing change requests.

What do these practical solutions have in common? They all reinforce why customers have been so positive in their response to our last 2+ years of providing ever better, highly iterative, highly integrated application lifecycle management. And they reinforce and imply the need for Borland to do more. They make it clear that despite the progress we have made in helping individuals and teams be more productive in writing software, there are tangible, actionable capabilities a mere arm’s length away from today’s capabilities that will help businesses be productive in delivering their software.

At Borcon 2004 we announced the Borland vision for Software Delivery Optimization along with indications of the types of features, partners and integrations involved in its realization. Only time will tell, but it seems to me that its ambitions will make real progress in bringing The Blur back into focus.