2010/10/13

Why new Secure Internet solutions are technically Hard

Information Security is both very hard and very easy at the same time.

Not only are Internet Nasties a nuisance, or worse, they hold back the new, useful Applications and Networks like e-Commerce, i-EDI, e-Health, e-Banking, e-Government and other business/commercial transaction systems.

Perfect Security isn't possible: ask any bank.

Defenders need to be 100.00% correct, every minute of every day.
Attackers need just one weakness for a moment to get in.

Not all compromises/breaches are equal: they range from nothing of consequence up to an attacker in full control, with the system owners unaware of it.

All 'Security Systems' can only be "good enough" for their role, which depends on many factors.
How long do you need to keep your secrets? Minutes or Decades?


2010/09/20

Quality and Excellence: Two sides of the same coin

Quality is predicated on Caring.
High Performance, also called "Excellence",  first requires people to Care about their results.

They are related through the Feedback Loop of Continuous Improvement, also known as O-O-D-A (Observe, Orient, Decide, Act) and Plan-Do-Check-Act (from W. Edwards Deming).

The Military take OODA to another level with After-Action-Reviews or After-Action-Reports (AAR's), a structured approach to acquiring "Lessons Learned".

High Performance has two aspects: work-rate and consistency.
It's not enough to produce identical/consistent goods or results every time; you also have to do it with speed.

There's an inviolable Quality Dictum:
You can't Check your own work.

For Organisations, this Dictum becomes:
 Objective assessment requires an Independent Expert Body.

From which follows the necessity for an External Auditor:
  Only Independent persons/bodies can check an Organisation and its people/processes for compliance and performance.

For around 80 years, Aviation has separated the roles of Investigation, or Root Cause Analysis, from Regulation, Compliance and Consequences. In the USA the NTSB Investigates and the FAA Regulates. This has led to consistent, demonstrable improvement in both Safety and Performance. Profitability is linked to Marketing, Financial Management and Administration, not just Performance.

All of which leads to the basic Professional Test for individuals:
 "Never Repeat, or allow to be repeated, Known Errors, Faults and Failures".

And the Raison d'ĂȘtre of Professional Associations or Bodies:
 To collect, preserve and disseminate Professional Learnings of Successes, Failures, Discovery and Invention.

Barry Boehm neatly summarises the importance of the Historical Perspective as:
Santayana half-truth: “Those who cannot remember the past are condemned to repeat it”

Don’t remember failures?
  • Likely to repeat them
Don’t remember successes?
  • Not likely to repeat them

All these statements are about Organisations as Adaptive Control Systems.

To effect change/improvement, there have to be reliable, objective measures of outputs and the means to effect change: Authority, the Right to Direct and Control, the ability to adjust Inputs or direct work.

Which points the way as to why Outsourcing is often problematic:
  The Feedback Loop is broken because the hirer gives up Control of the Process.

Most Organisations that Outsource critical functions, like I.T., completely divest themselves of all technical capability and, from a multitude of stories, don't contract for effective Quality, Performance or Improvement processes.

They give up both the capability to properly assess Outputs and Processes and Control mechanisms to effect change. Monthly "management reports" aren't quite enough...

2010/09/12

Business Metrics and "I.T. Event Horizons"

Is there any reason the "Public Service", as we call paid Government Administration in Australia, isn't the benchmark for good Management and Governance??

Summary: This piece proposes 5 simple metrics that reflect, but are not in themselves pay or performance measures for, management effectiveness and competence:
  • Meeting efficiency and effectiveness,
  • Time Planning/Use and Task Prioritisation,
  • Typing Speed,
  • Tool/I.T. Competence: speed and skill in basic PC, Office Tools and Internet tools and tasks, and
  • E-mail use (sent, read, completed, in-progress, pending, never resolved, personal, social, other).


2010/08/29

Top Computing Problems

The 7 Millennium Prize Problems don't resonate for me...

These are the areas that do engage me:
The piece for the second item, "Multi-level memory" is old and not specifically written for this set of questions. Expect it to be updated at some time.

    Internetworking protocols

    Placemarker for a piece on Internetworking protocols and problems with IPv4 (security and facilities) and IPv6 (overheads, availability).

    "The Internet changes everything" - the Web 2.0 world we have is very different to where we started in 1996, the break-through year of 'The Internet' with IPv4.

    But it is creaking and groaning.
    Around 90% of all email sent is SPAM (Symantec quarterly intelligence report).

    And since 2004 when the "Hackers Turned Pro", Organised Crime makes the Internet a very dangerous place for most people.

    IPv6 protocols have been around for some time but, like Group 4 Fax before them, they are a Great Idea that nobody is interested in...

    What are the problems?
    What shape could solutions have?
    Are there (general) solutions to all problems?

    Systems Design

    Are these new sorts of systems possible with current commercial or FOSS systems?
    What Design and Implementation changes might be needed?

    How do they interact with the other 'Computing Challenges' in this series?

    Flexible, Adaptable Hardware Organisations

    Placemarker for a piece on flexible hardware designs.

    I'd like to be able to buy a CPU 'brick' at home for on-demand compute-intensive work, like Spreadsheets.
    I'd like to be able to easily transfer an application, then bring it back again.

    Secondly, if my laptop has enough CPU grunt, it won't have the Graphics processing or Displays (type, size, number) needed for some work... I'd like to be able to 'dock' my laptop and happily get on with it.
    The current regime is to transfer files and have separate environments that operate independently, and I have to go through that long login-start-apps-setup-environment cycle each time.

    I prefer KDE (and other X-11 Window Managers) to Aqua on Snow Leopard (OS/X 10.6) because they remember what was running in a login 'session', and recreate it when I log in again.

    In 1995, I first used HP's CDE (IIRC) on X-11, which provided multiple work-spaces. This was mature technology then.

    It was only this year, 15 years on, that Apple provided "Spaces" for their users.
    Huh??

    We already have good flexible storage options for most types of sites.
    Cheap NAS appliances are available for home use, up to high-end SAN solutions for large Enterprises.

    For micro- and portable-devices, the main uses are "transactional" web-based.
    These scale well already, and little, if anything, can be done to improve this.

    Systems Design

    What flows from this 'wish list' is that no current Operating System design will support it well.
    The closest, "Plan 9", developed around 1990, allows for users to connect different elements to a common network and Authentication Domain:
    • (graphic) Terminals
    • Storage
    • CPU
    The design doesn't support the live migration of applications.

    Neither do the current designs of Virtual Machines (migrate the whole machine) or 'threads' and multi-processors.

    Datacentre Hardware organisation

    Related posts:
    Senior Google staffers wrote The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, which I thought showed break-through thinking.

    The importance of this piece is that it wasn't theoretical, but a report of What Works in practice, particularly at 'scale'.

    Anything that Google, one of the star performers of the Internet Revolution, does differently is worthy of close examination.  What do they know that the rest of us don't get?

    While the book is an extra-ordinary blueprint, I couldn't help but ask a few questions:
    • Why do they stick with generic-design 1RU servers when they buy enough for custom designs?
    • How could 19-inch racks, designed for mechanical telephone exchanges a century ago, still be a good, let alone best, packaging choice when you build Warehouse-sized datacentres?
    • Telecommunications sites use DC power and batteries. Why take AC, convert to DC, back to AC, distribute AC to every server with inefficient, over-dimensioned power-supplies?
    Part of the management problem with datacentres is minimising input costs whilst maximising 'performance' (throughput and latency).

    2010/07/30

    What can you learn from a self-proclaimed "World's Greatest"?

    Note: This document is copyright© Steve Jenkin 1998-2010. It may not be
    reproduced, modified or distributed in any way without the explicit
    permission of the author. [Which you can expect to be given.]

    Lessons from the World's Greatest Sys Admin - July 1998
    Presented at SAGE-AU Conference, July 1998
    Contents
    Introduction
    Background
    Principles of System Admin
    Some WGSA Attributes
    About The WGSA
    Sayings of the WGSA.
    Some Sound Management Laws
    So What?
    How do you work with a "World's Greatest ..."
    Some "Good Stuff" I learnt from friends.
    Some of the WGSA's work
    Summary

    2010/05/09

    Microsoft Troubles - IX, the story unfolds with Apple closing in on Microsoft's size.

    Three pieces in the trade press showing how things are unfolding.

    Om Malik points out that Intel and Microsoft fortunes are closely intertwined.
    Jean-Louis Gassée suggests that "Personal Computing" (on those pesky Personal Computers) is downsizing and changing.
    Joe Wilcox analyses Microsoft's latest results and contrasts them a little with Apple's.

    2010/05/03

    Everything Old is New Again: Cray's CPU design

    I found myself writing, during a commentary on the evolution of SSD's in servers, that large-slow-memory of the kind Seymour Cray used (not cache) would affect the design of Operating Systems. The new scheduling paradigm:
    Allocate a thread to a core, let it run until it finishes and waits for (network) input, or it needs to read/write to the network.
    This leads into how Seymour Cray dealt with Multi-Processing: he used multi-level CPU's:
    • Application Processors (AP's): many bits and many complex features like Floating Point and other fancy stuff, but no kernel-mode features or access to protected regions of hardware or memory, and
    • Peripheral Processors (PP's): really a single very simple, very high-speed processor, multiplexed to look like 10 small, slower processors, that performed all kernel functions and controlled the operation of the Application Processors.
    Not only did this organisation result in very fast systems (Cray's designs were the fastest in the world for around 2 decades), but very robust and secure ones as well: the NSA and other TLA's used them extensively.

    The common received wisdom is that interrupt-handling is the definitive way to interface unpredictable hardware events with the O/S and the rest of the system, and that polling devices, the old way, is inefficient and expensive.

    Creating a fixed overhead scheme is more expensive in compute cycles than an on-demand, or queuing, system, until the utilisation rate is very high. Then the cost of all the flexibility (or Variety in W. Ross Ashby's Cybernetics term) comes home to roost.

    Piers Lauder of Sydney University and Bell Labs improved total system throughput of a VAX-11/780 running Unix V8 under continuous full (student/teaching) load by 30% by changing the serial-line device driver from 'interrupt handling' to polling.

    All those expensive context-switches went away, to be replaced by a predictable, fixed overhead.
    Yes, when the system was idle or under low load, it spent a little more time polling, but only marginally.
    And if the system isn't flat-out, what's the meaning of an efficiency metric?
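
    A minimal sketch of that trade-off (Python; the per-event costs and polling rate are invented for illustration, not measurements from the VAX story above):

        # Hypothetical costs, purely illustrative -- not measured values.
        CONTEXT_SWITCH_US = 50.0   # cost of taking an interrupt and switching context
        POLL_US           = 5.0    # cost of one poll of the device
        POLLS_PER_SEC     = 1000   # fixed polling rate

        def interrupt_cost_us(events_per_sec):
            # Every event pays the full interrupt/context-switch price.
            return events_per_sec * CONTEXT_SWITCH_US

        def polling_cost_us(events_per_sec):
            # Fixed overhead, independent of load; events are picked up in batches.
            return POLLS_PER_SEC * POLL_US

        for rate in (10, 100, 1000, 10000):
            print(rate, interrupt_cost_us(rate), polling_cost_us(rate))

    With these made-up numbers the cross-over is around 100 events/second: below it the interrupt scheme is cheaper, above it the fixed polling overhead wins, which matches the "idle systems poll a little more, loaded systems save a lot" observation above.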

    Dr Neil J Gunther has written about this effect extensively with his Universal Scalability Law and other articles showing the equivalence of the seemingly disparate approaches of Vector Processing and SMP systems in the limit of their performance.

    My comment about big, slow memory changing Operating System scheduling can be combined with the Cray PP/AP organisation.

    In the modern world of CMOS, micro-electronics and multi-core chips, we are still facing the same Engineering problem Seymour Cray was attempting to address/find an optimal solution to:
    For a given technology, how do you balance maximum performance with the Power/Heat Wall?
    More power gives you more speed, this creates more Heat, which results in self-destruction, the "Halt and Catch Fire" problem. Silicon junctions/transistors are subject to thermal run-away: as they get hotter, they consume more power and get hotter still. At some point that becomes a vicious cycle (positive feedback loop) and it's game over. Good chip/system designs balance on just the right side of this knife edge.

    How could the Cray PP/AP organisation be applied to current multi-core chip designs?
    1. Separate the CPU designs for kernel-mode and Application Processors.
      A single chip need only have a single kernel-mode CPU controlling a number of Application CPU's. With its constant overhead cost already "paid for", scaling of Application performance is going to be very close to linear right up until the limit.
    2. Application CPU's don't have forced context switches. They roar along as fast as they can for as long as they can, or until the kernel scheduler decides they've had their fair share.
    3. System Performance and Security both improve by using different instruction sets and processor architectures for the kernel and for Applications. While a virus/malware might be able to compromise an Application, it can't migrate into the kernel unless the kernel itself is buggy. The Security Boundary and Partitioning Model is very strong.
    4. There doesn't have to be competition between the kernel-mode CPU and the AP's for cache memory 'lines'. In fact, the same memory cell designs/organisations used for L1/L2 cache can be provided as small (1-2MB) amounts of very fast direct access memory. The modern equivalent of "all register" memory.
    5. Because the kernel-mode CPU and AP's don't contend for cache lines, each will benefit hugely in raw performance.
      Another, more subtle, benefit is the kernel can avoid both the 'snoopy cache' (shared between all CPU's) and VM systems. It means a much simpler, much faster and smaller (= cooler) design.
    6. The instruction set for the kernel-mode CPU will be optimised for speed, simplicity and minimal transistor count. You can forget about speculative execution and other really heavy-weight solutions necessary in the AP world.
    7. The AP instruction set must be fixed and well-known, while the kernel-mode CPU instruction set can be tweaked or entirely changed for each hardware/fabrication iteration. The kernel-mode CPU runs what we'd now call either a hypervisor or a micro-kernel. Very small, very fast and with just enough capability. A side effect is that the chip manufacturers can do what they do best - fiddle with the internals - and provide a standard hypervisor for other O/S vendors to build upon.
    Cheaper, Faster, Cooler, more robust and Secure and able to scale better.
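
    A toy sketch (Python, with invented names) of the PP/AP split proposed above: one kernel-mode processor owns the run queue and all privileged state, and hands run-to-completion jobs to a pool of Application Processors. It models the control flow only - it is not real silicon or a real scheduler.

        from collections import deque

        class ApplicationProcessor:
            """Runs one job to completion; has no access to kernel state."""
            def __init__(self, ap_id):
                self.ap_id = ap_id
                self.busy = False

            def run(self, job):
                self.busy = True
                result = job()      # roars along; no forced context switch
                self.busy = False
                return result

        class KernelProcessor:
            """The single privileged CPU: owns the run queue and all scheduling."""
            def __init__(self, num_aps):
                self.aps = [ApplicationProcessor(i) for i in range(num_aps)]
                self.run_queue = deque()

            def submit(self, job):
                self.run_queue.append(job)

            def schedule(self):
                # Dispatch queued jobs to idle APs; APs are never pre-empted.
                results = []
                while self.run_queue:
                    ap = next(a for a in self.aps if not a.busy)
                    results.append(ap.run(self.run_queue.popleft()))
                return results

        kp = KernelProcessor(num_aps=4)
        for n in range(8):
            kp.submit(lambda n=n: n * n)
        print(kp.schedule())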

    What's not to like in this organisation?

    A Good Question: When will Computer Design 'stabilise'?

    The other night I was talking to my non-Geek friend about computers and he formulated what I thought was A Good Question:
    When will they stop changing??
    This was in reaction to me talking about my experience in suggesting a Network Appliance, a high-end Enterprise Storage device, as shared storage for a website used by a small research group.
    It comes with a 5 year warranty, which leads to the obvious question:
    will it be useful, relevant or 'what we usually do' in 5 years?
    I think most of the elements in current systems are here to stay, at least for the evolution of Silicon/Magnetic recording. We are staring at 'the final countdown', i.e. hitting physical limits of these technologies, not necessarily their design limits. Engineers can be very clever.

    The server market has already fragmented into "budget", "value" and "premium" species.
    The desktop/laptop market continues to redefine itself - and more 'other' devices arise. The 100M+ iPhones already out there, in particular, demonstrate this.

    There's a new major step in server evolution just breaking:
    Flash memory for large-volume working and/or persistent storage.
    What now may be called internal or local disk.
    This implies a major re-organisation of even low-end server installations:
    Fast local storage and large slow network storage - shared and reliable.
    When the working set of Application data in databases and/or files fits on (affordable) local flash memory, response times improve dramatically because all that latency is removed. By definition, data outside the working set isn't a rate-limiting step, so its latency only slightly affects system response time. However, throughput, the other side of the Performance Coin, has to match or beat that of the local storage, or it will become the system bottleneck.
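
    A back-of-the-envelope sketch of that working-set argument (Python; the latencies are assumed round numbers, not vendor figures): once the hot data sits on local flash, the slow network tier barely affects average response time.

        # Assumed, round-number access times -- illustrative only.
        FLASH_LATENCY_MS   = 0.1    # local flash
        NETWORK_LATENCY_MS = 10.0   # shared network storage, including disk seek

        def avg_access_ms(working_set_hit_ratio):
            """Average access time when that fraction of accesses hit local flash."""
            miss = 1.0 - working_set_hit_ratio
            return working_set_hit_ratio * FLASH_LATENCY_MS + miss * NETWORK_LATENCY_MS

        for hit in (0.50, 0.90, 0.99):
            print(f"hit ratio {hit:.0%}: {avg_access_ms(hit):.2f} ms average")
        # 50% -> 5.05 ms, 90% -> 1.09 ms, 99% -> 0.20 ms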

    An interesting side question:
     How will Near-Zero-Latency local storage impact system 'performance', both response times (a.k.a. latency) and throughput?

    I conjecture that both system latency and throughput will improve markedly, possibly super-linearly, because one of the bug-bears of Operating Systems, the context switch, will be removed. Systems have to expend significant effort/overhead in 'saving their place', deciding what to do next, then when the data is finally ready/available, to stop what they were doing and start again where they left off.

    The new processing model, especially for multi-core CPU's, will be:
    Allocate a thread to a core, let it run until it finishes and waits for (network) input, or it needs to read/write to the network.
    Near zero-latency storage removes the need for complex scheduling algorithms and associated queuing. It improves both latency and throughput by removing a bottleneck.
    It would seem that Operating Systems might benefit from significant redesign to exploit this effect, in much the same way that RAM is now large and cheap enough that system 'swap space' is now either an anachronism or unused.
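
    A rough sketch of what the context switch costs (Python; all the numbers are assumptions chosen to show the shape of the argument, not measurements):

        # Assumed, illustrative costs in microseconds -- not measurements.
        CONTEXT_SWITCH_US = 10.0
        SCHEDULER_WORK_US = 5.0
        IOS_PER_REQUEST   = 20      # blocking I/Os issued while serving one request
        WORK_US           = 500.0   # useful CPU work per request

        # Old model: every blocking I/O suspends the thread and later resumes it.
        old_overhead = IOS_PER_REQUEST * (2 * CONTEXT_SWITCH_US + SCHEDULER_WORK_US)

        # Run-to-completion on near-zero-latency storage: the thread keeps its
        # core until it finishes or waits on the network, so that cost vanishes.
        new_overhead = 0.0

        for name, overhead in (("blocking I/O", old_overhead),
                               ("run-to-completion", new_overhead)):
            total = WORK_US + overhead
            print(f"{name}: {total:.0f} us/request ({overhead / total:.0%} overhead)")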

    The evolution of USB flash drives saw prices/Gb halving every year. I've recently seen 4Gb SDHC cards at the supermarket for ~$15, whereas in 2008, I paid ~$60 for USB 4Gb.

    Rough server pricing for RAM in 2010 is A$65/Gb ±$15.
    List prices by Tier 1/2 vendors for 64Gb SSD are $750-$1000 (around 2-4 times cheaper from 'white box' suppliers).
    I've seen these firmware-limited to 50Gb to bring performance and reliability up to current production HDD specs.
    This is $12-$20/Gb, depending on which base size and price are used.

    Disk drives are ~A$125 for 7200rpm SATA and $275-$450 for 15K SAS drives, with 2.5" drives priced in between.
    I.e. $0.125/Gb for 'big slow' disks and ~$1/Gb for fast SAS disks.

    Roll forward 5 years to 2015 and 'SSD' might've doubled in size three times, plus seen the unit price drop. Hard disks will likely follow the same trend of 2-3 doublings.
    Say SSD 400Gb for $300: $0.75/Gb
    2.5" drives might be up to 2-4Tb in 2015 (from 500Gb in 2010) and cost $200: $0.05-0.10/Gb
    RAM might be down to $15-$30/Gb.
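
    The $/Gb figures above, worked through (Python). Where the original quotes only a price and a $/Gb figure, the drive size is my inference and is marked 'assumed':

        def per_gb(size_gb, price):
            return price / size_gb

        # 2010 list prices quoted above.
        print("SSD 64Gb @ $750:            $%.0f/Gb" % per_gb(64, 750))
        print("SSD 50Gb (limited) @ $1000: $%.0f/Gb" % per_gb(50, 1000))
        print("SATA 1Tb (assumed) @ $125:  $%.3f/Gb" % per_gb(1000, 125))
        print("SAS 300Gb (assumed) @ $275: $%.2f/Gb" % per_gb(300, 275))
        print("SAS 450Gb (assumed) @ $450: $%.2f/Gb" % per_gb(450, 450))

        # 2015 projections used above: roughly three size doublings, lower unit prices.
        print("2015 SSD 400Gb @ $300:  $%.2f/Gb" % per_gb(400, 300))
        print("2015 disk 2-4Tb @ $200: $%.2f-$%.2f/Gb" % (per_gb(4000, 200), per_gb(2000, 200)))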

    A caveat with disk storage pricing: 10 years ago RAID 5 became necessary for production servers to avoid permanent data loss.
    We've now passed another event horizon: Dual-parity, as a minimum, is required on production RAID sets.

    On production servers, price of storage has to factor in the multiple overheads of building high-reliability storage (redundant {disks, controllers, connections}, parity and hot-swap disks and even fully mirrored RAID volumes plus software, licenses and their Operations, Admin and Maintenance) from unreliable parts. A problem solved by electronics engineers 50+ years ago with N+1 redundancy.

    Multiple Parity is now needed because in the time taken to recreate a failed drive, there's a significant chance of a second drive failure and total data loss. [Something NetApp has been pointing out and addressing for some years.] The reason for this is simple: the time to read/write a whole drive has steadily increased since ~1980. Capacity grows with linear recording density (bits per inch) times track density (tracks per inch), while read/write speed grows only roughly with linear recording density times rotational speed, and rotational speeds have barely changed.

    Which makes running triple-mirrors a much easier entry point, or some bright spark has to invent a cheap-and-cheerful N-way data replication system. Like a general use Google File System.
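
    A rough illustration of why that rebuild window keeps growing (Python; the drive specs are assumed round numbers, not quotes from any vendor): capacity has grown much faster than sustained transfer rate, so the hours of exposure during a rebuild keep stretching.

        # Assumed, round-number drive specs -- illustrative only.
        drives = [
            # (year, capacity in Gb, sustained MB/s)
            (1995,     4,   5),
            (2005,   300,  60),
            (2010,  2000, 120),
        ]

        for year, gb, mb_s in drives:
            hours = (gb * 1000) / mb_s / 3600   # time to read or write the whole drive
            print(f"{year}: {gb} Gb at {mb_s} MB/s -> ~{hours:.1f} h to rebuild")

    The longer that window, the greater the chance a second drive in the set fails before the rebuild completes - hence dual parity or mirroring.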

    Another issue is that current SSD offerings don't impress me.

    They make great local disks or non-volatile buffers in storage arrays, but are not yet, in my opinion, quite ready for 'prime time'.

    I'd like to see 2 things changed:
    • RAID-3 organisation with field-replaceable mini-drives, hot-swap preferred.
    • PCI, not SAS or SATA connection. I.e. they appear as directly addressable memory.

    This way the hardware can access flash as large, slow memory and the Operating System can fabricate that into a filesystem if it chooses - plus if it has some knowledge of the on-chip flash memory controller, it can work much better with it. It saves multiple sets of interfaces and protocol conversions.

    Direct access flash memory will always be cheaper and faster than SATA or SAS pseudo-drives.
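
    A small sketch of what "directly addressable memory" could look like from user space (Python's mmap over an ordinary file standing in for a PCI-attached flash region; the path and size are hypothetical):

        import mmap

        # Stand-in for a PCI-attached flash region; a real device would be exposed
        # by the kernel. This only demonstrates the byte-addressable access model.
        SIZE = 16 * 1024 * 1024
        with open("/tmp/fake_flash.img", "w+b") as f:
            f.truncate(SIZE)
            flash = mmap.mmap(f.fileno(), SIZE)

            flash[0:5] = b"hello"      # write: plain byte-range assignment
            print(flash[0:5])          # read back: no block-device protocol in the way
            flash.flush()              # make it persistent
            flash.close()

    The Operating System (or a filesystem built on top) would then manage wear and failures, as suggested above.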

    We would then see the following hierarchy of memory in servers:

    • Internal to server
      • L1/2/3 cache on-chip
      • RAM
      • Flash persistent storage
      • optional local disk (RAID-dual parity or triple mirrored)
    • External and site-local
      • network connected storage array, optimised for size, reliability, streaming IO rate and price not IO/sec. Hot swap disks and in-place/live expansion with extra controllers or shelves are taken as a given.
      • network connected near-line archival storage (MAID - Massive Array of Idle Disks)
    • External and off-site
      • off-site snapshots, backups and archives.
        Which implies a new type of business similar to Amazon's Storage Cloud.
    The local network/LAN is going to be Ethernet (1Gbps or 10Gbps Ethernet, a.k.a. 10GE), or Infiniband if 10GE remains very expensive. Infiniband delivers 3-6Gbps over short distances on copper; external SAS currently uses the "multi-lane" connector to deliver four channels per cable. This is exactly right for use in a single rack.
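
    The rough bandwidth arithmetic behind that choice (Python; simple unit conversion that ignores protocol overheads):

        # Link rates in Gbit/s, from the ranges mentioned above.
        links = {
            "1Gbps Ethernet":        1,
            "Infiniband (low end)":  3,
            "Infiniband (high end)": 6,
            "10GE":                  10,
        }

        for name, gbit in links.items():
            mbyte_s = gbit * 1000 / 8
            print(f"{name}: ~{mbyte_s:.0f} MB/s")
        # 1Gbps Ethernet's ~125 MB/s is barely faster than a single streaming disk,
        # which is why shared storage wants 10GE or multi-lane Infiniband.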

    I can't see a role for Fibre Channel outside storage arrays, and these will go if Infiniband speed and pricing continues to drop. Storage Arrays have used SCSI/SAS drives with internal copper wiring and external Fibre interfaces for a decade or more.
    Already the premium network vendors, like Cisco, are selling "Fibre Channel over Ethernet" switches (FCoE using 10GE).

    Nary a tape to be seen. (Hooray!)

    Servers should tend to be 1RU, either full-width or half-width, though there will still be 3-4 styles of servers:
    • budget: mostly 1-chip
    • value: 1 and 2-chip systems
    • lower power value systems: 65W/CPU-chip, not 80-90W.
    • premium SMP: fast CPU's, large RAM and many CPU's (90-130W ea)
    If you want removable backups, stick 3+ drives in a RAID enclosure and choose between USB, FireWire/IEEE 1394, eSATA or SAS.

    Being normally powered down, you'd expect extended lifetimes for disks and electronics.
    But they'll need regular (3-6-12 months) read/check/rewrite cycling or the data will degrade and be permanently lost. Random 'bit-flipping' due to thermal activity, cosmic rays/particles and stray magnetic fields is the price we pay for very high density on magnetic media.
    Which is easy to do if they are kept in a remote access device, not unlike "tape robots" of old.
    Keeping archival storage "on a shelf" implies manual processes for data checking/refresh, and that is problematic to say the least.
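
    A minimal sketch of that read/check cycle (Python; the mount point and the checksum manifest format are my invention): read every file in the pack and compare it against a stored checksum, flagging anything that no longer verifies.

        import hashlib, json, pathlib

        ARCHIVE  = pathlib.Path("/archive")         # hypothetical backup-pack mount point
        MANIFEST = ARCHIVE / "checksums.json"       # hypothetical manifest of known-good hashes

        def sha256(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def scrub():
            known = json.loads(MANIFEST.read_text())
            for name, expected in known.items():
                actual = sha256(ARCHIVE / name)
                status = "OK" if actual == expected else "CORRUPT - restore from another copy"
                print(f"{name}: {status}")

        # Run every 3-12 months, per the cycle suggested above.
        if __name__ == "__main__":
            scrub()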

    3-5 2.5" drives will make a nice 'brick' for these removable backup packs.
    Hopefully commodity vendors like Vantec will start selling multiple-interface RAID devices in the near future. Using current commodity interfaces should ensure they are readable at least a decade into the future. I'm not a fan of hardware RAID controllers in this application because if one breaks, you need to find a replacement - which may be impossible at a future date (it fails the 'single point of failure' test).

    Which presents another question when using a software RAID and filesystem layout: will it still be available in your O/S of the future?
    You're keeping copies of your applications, O/S, licences and hardware to recover/access archived data, aren't you? So this won't be a question... If you don't intend to keep the environment and infrastructure necessary to access archived data, you need to rethink what you're doing.

    These enclosures won't be expensive, but shan't be cheap and cheerful:
    Just what is your data worth to you?
    If it has little value, then why are you spending money on keeping it?
    If it is a valuable asset, potentially irreplaceable, then you must be prepared to pay for its upkeep in time, space and dollars. Just as packing old files into archive boxes and shipping them to a safe off-site facility costs money, it isn't over once they are out of your sight.

    Electronic storage is mostly cheaper than paper, but it isn't free and comes with its own limits and problems.

    Summary:
    • SSD's are best suited and positioned as local or internal 'disks', not in storage arrays.
    • Flash memory is better presented to an Operating System as directly accessible memory.
    • Like disk arrays and RAM, flash memory needs to seamlessly cater for failure of bits and whole devices.
    • Hard disks have evolved to need multiple parity drives to keep the risk of total data loss acceptably low in production environments.
    • Throughput of storage arrays, not latency, will become their defining performance metric.
      New 'figures of merit' will be (a worked example follows this list):
      • Volumetric: Gb per cubic-inch
      • Power: Watts per Gb
      • Throughput: Gb per second per read/write-stream
      • Bandwidth: Total Gb per second
      • Connections: number of simultaneous connections.
      • Price: $ per Gb available and $ per Gb/sec per server and total
      • Reliability: probability of 1 byte lost per year per Gb
      • Archive and Recovery features: snapshots, backups, archives and Mean-Time-to-Restore
      • Expansion and Scalability: maximum size (Gb, controllers, units, I/O rate) and incremental pricing
      • Off-site and removable storage: RAID-5 disk-packs with multiple interfaces are needed.
    • Near Zero-latency storage implies reorganising and simplifying Operating Systems and their scheduling/multi-processing algorithms. Special CPU support may be needed, like for Virtualisation.
    • Separating networks {external access, storage/database, admin, backups} becomes mandatory for performance, reliability, scaling and security.
    • Pushing large-scale persistent storage onto the network requires a commodity network faster than 1Gbps ethernet. This will either be 10Gbps ethernet or multi-lane 3-6Gbps Infiniband.
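
    To make those figures of merit concrete, here is a small worked example (Python). Every number in the spec is invented purely for illustration - it is not a real product:

        # Hypothetical storage-array spec -- invented numbers only.
        array = {
            "capacity_gb":   48000,
            "volume_cuin":   3.5 * 19 * 30,    # a 2RU-ish box: H x W x D in inches
            "power_w":       450,
            "stream_mb_s":   400,              # per read/write stream
            "total_mb_s":    2000,
            "price_dollars": 30000,
        }

        print("Volumetric:", round(array["capacity_gb"] / array["volume_cuin"], 1), "Gb/cubic-inch")
        print("Power:     ", round(array["power_w"] / array["capacity_gb"], 4), "W/Gb")
        print("Throughput:", array["stream_mb_s"] / 1000, "Gb/sec per stream")
        print("Bandwidth: ", array["total_mb_s"] / 1000, "Gb/sec total")
        print("Price:     ", round(array["price_dollars"] / array["capacity_gb"], 2), "$/Gb")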
    All of which leads to another question:
    What might Desktops look like in 5 years?

    Other Reading:
    For a definitive theoretical treatment of aspects of storage hierarchies, Dr. Neil J Gunther, ex-Xerox PARC, now Performance Dynamics, has been writing about "The Virtualization Spectrum" for some time.

    Footnote 1:
    Is this idea of multi-speed memory (small/fast and big/slow) new or original?
    No: Seymour Cray, the designer of the world's fastest computers for ~2 decades, based his designs on it. It appears to me to be an old idea whose time has come again.

    From a 1995 interview with the Smithsonian:
    SC: Memory was the dominant consideration. How to use new memory parts as they appeared at that point in time. There were, as there are today large dynamic memory parts and relatively slow and much faster smaller static parts. The compromise between using those types of memory remains the challenge today to equipment designers. There's a factor of four in terms of memory size between the slower part and the faster part. Its not at all obvious which is the better choice until one talks about specific applications. As you design a machine you're generally not able to talk about specific applications because you don't know enough about how the machine will be used to do that.
    There is also a great PPT presentation on Seymour Cray by Gordon Bell entitled "A Seymour Cray Perspective", probably written as a tribute after Cray's untimely death in an auto accident.

    Footnote 2:
    The notion of "all files on the network" and invisible multi-level caches was built in 1990 at Bell Labs in their Unix successor, "Plan 9" (named for one of the worst movies of all time).
    Wikipedia has a useful intro/commentary, though the original on-line docs are pretty accessible.

    Ken Thompson and co built Plan 9 around 3 elements:
    • A single protocol (9P) of around 14 elements (read, write, seek, close, clone, cd, ...)
    • The Network connects everything.
    • Four types of device: terminals, CPU servers, Storage servers and the Authentication server.
    Ken's original storage server had 3 levels of transparent storage (in sizes unheard of at the time):
    • 1Gb of RAM (more?)
    • 100Gb of disk (in an age where 1Gb drives were very large and exotic)
    • 1Tb of WORM storage (write-once optical disk. Unheard of in a single device)
    The usual comment was, "you can go away for the weekend and all your files are still in either memory or disk cache".

    They also pioneered permanent point-in-time archives on disk in something appearing to the user as similar to NetApp's 'snapshots' (though they didn't replicate inode tables and super-blocks).
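
    To make the "one small protocol" idea concrete, here is a toy interface sketch (Python), built from roughly the operations named above; the exact operation set and signatures are illustrative, not the real 9P message set:

        # Toy sketch: every resource is a file served over one small protocol.
        # Operation names follow the list above (read, write, seek, close, clone, cd, ...).
        class FileServer:
            def cd(self, path): ...          # move within the served name space
            def clone(self, fid): ...        # duplicate a handle to a file
            def open(self, fid, mode): ...
            def read(self, fid, count): ...
            def write(self, fid, data): ...
            def seek(self, fid, offset): ...
            def close(self, fid): ...
            def stat(self, fid): ...

        # Terminals, CPU servers and Storage servers all speak this one protocol,
        # with the Authentication server vouching for who is who.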

     My observations in this piece can be paraphrased as:
    • re-embrace Cray's multiple-memory model, and
    • embrace commercially the Plan 9 "network storage" model.

    Promises and Appraising Work Capability and Proficiency

    Max Wideman, PMI Distinguished Contributor and Person of the Year and Canadian author of several Project Management books plus a slew of published papers, not only responded to, and published, some comments and conversations between us; he then edited some more emails into a Guest Article on his site.

    Many thanks to you Max for all your fine work and for seeing something useful in what I penned.

    2010/04/18

    Australia and the Researchers' Workbench

    This is a pitch for something new: the "Researchers' Workbench".
    Australia has the wealth and inventiveness to do it, but most probably, not the political will.
    Chalk that up to "the Cultural Cringe".

    2010/04/04

    Death by Success II

    There is another, much more frequent "Death by Success" cause, first introduced to me by Jerry Weinberg and Wayne Strider and Elaine Cline (Strider and Cline).

    It's the same process that some herbicides use: unconstrained growth.
    Some hormone-mimicking herbicides work in exactly this way: they force unconstrained growth.

    If you are very good at what you do and much sought after, this can lead directly to massive Failure - personally and in business.

    Growth is Good, but too much, too fast is a Killer.

    The only protection is awareness.
    As  Virginia Satir pointed out, "We can't see inside other people's heads, nor can we see ourselves as others see us" (courtesy again of Jerry and "Strider and Cline".)

    Typically you need objective, external help in recognising this condition.
    Once you have restored Situational Awareness, you can choose your response. Which may be "I'm outa here", Denial or something in between.

    There is an alternative form of "Death by Success", which again we see in the Plant Kingdom.

    Your initial approach, solution or technique may not Scale-Up or have a fixed Upper-Bound.
    E.g. if you sell "factory seconds", there is a limited supply that sets your maximum turnover.
    Or selling fragments of the Berlin Wall - at some point the Genuine Article is all gone...

    The example in the Plant Kingdom is when tree seedlings 'set' in unsuitable places, like a small pot or within a bottle. Down the road, they will become "root bound", which slows growth; then they'll consume all the nutrients and, having converted 'everything' into plant material, die.

    That's it for that plant - all of one resource has been exhausted and it's Game Over.

    Death by Success

    The things you do in the beginning, when you're the minnow-against-the-giants, to start and build a business may not work well when you're successful, when you've become The Giant.

    Exactly what leads to Success can eventually lead to your downfall.

    You become very good at the things that have gained and seemingly maintained Success.  Every problem and challenge you've met has been solved with your brilliance and individual style.

    Why would you ever want or need to vary that approach?

    Until something new comes along and it all goes wrong:
      Inevitably in Business and Life, things change (perturbations arise in Control Systems terms).
      Responding with "More of the Same", as in the past, will, at some point, not work.
      If you've grown large, it will take time to fail; you'll have noticed that "things aren't great".
      Many companies only ever do "More of the Same", often amping it up as results don't appear.
      The results are as predictable as throwing oil on a fire.

    Often I mention Sydney Finkelstein's book, "Why Smart Executives Fail" in which Finkelstein describes the results of 6 years of research.  He self-describes as "Steven Roth Professor of Management at the Tuck School of Business at Dartmouth College, where I teach courses on Leadership and Strategy".

    In Smart Executives, Finkelstein and his team document a whole slew of companies (50) that burned bright and collapsed. This book was published in 2003, covering a turbulent period of US and global business, as well as some famous cases going back decades.

    The subjects of the research were chosen precisely because they were wildly successful and suffered a notable collapse. Enron and Worldcom are on the list, plus many I.T. companies such as Wang Computers.  The common thread is the collapse was avoidable and predictable.

    Would the conclusions, Lessons Learned and "Early Warning Signs" be different post the 2008 GFC (Global Financial Crisis)?  I think not...

    Finkelstein lists 7 naive causes of failure:
    1. The Executives were Stupid.
    2. The Executives couldn't have known What was Coming.
    3. It was a Failure to Execute.
    4. The Executives weren't trying Hard Enough.
    5. The Executives lacked Leadership Ability.
    6. The Company lacked the Necessary Resources.
    7. The Executives were simply a Bunch of Crooks.
    and comments in a para entitled "Failure to understand Failure":
    All seven of these standard explanations for why executives fail are clearly insufficient. (Because the companies had demonstrated excellence in becoming highly successful.)
    The next 300 pages are his answer. Part I describes "Great Corporate Failures" and Part II their Causes.
    This research ends with a positive message, Part III is "Learning from Mistakes":
    • Predicting the Future, Early Warning Signs.
    • How Smart Executives Learn, Living and Surviving in a World of Mistakes.
    His "Seven Habits of Spectacularly Unsuccessful People"  are worth reiterating:
    1. They see themselves and their companies as dominating their environments.
    2. They identify so completely with the company that there is no clear boundary between their personal interests and their corporation's interests.
    3. They think they have All the Answers.
    4. They ruthlessly eliminate anyone who isn't 100% behind them.
    5. They are consummate company spokespersons, obsessed with the company image.
    6. They underestimate major obstacles.
    7. They stubbornly rely on what worked for them in the past.
    Each of the 11 chapters has 30-50 references.  Although written and published for the general market, this isn't any "Puff piece".

    2010/03/07

    MMC - the Microsoft death blow for non-Enterprise markets

    MMC, "Mostly Macintosh Compatible", the equivalent for OS/X of WINE for Windows, doesn't yet exist, that I'm aware of.

    2010/02/28

    Why Microsoft is being left behind

    Paul Budde recently questioned, "Will Microsoft be able to make the jump?"
    [04-Apr-2010] For other comments see my pieces "Death by Success" and "Death by Success II".

    He quotes the marketing "S-curve" and "Summer Players" by Carol Velthuis, which describes company performance and market maturity in terms of the seasons of the year.

    2010/02/27

    ICT Productivity and the Failure of Australian Management

    Prior Related Posts:
    Quantifying the Business Benefits of I.T. Operations
    The Triple Whammy - the true cost of I.T. Waste
    Force Multipliers - Tools as Physical and Cognitive Amplifiers
    I.T. in context

    Alan Kohler and Robert Gottliebsen have been writing in "Business Spectator" about the relationship between jobs and Economic Productivity.

    They note that the USA has improved productivity in the last year while in Australia it has declined (+4% and -3% respectively).  My take on this is: a gross Failure of Australian Management.

    There is solid research/evidence that "ICT" is the single largest contributor to both partial and multi-factor Productivity, and is expected to be so for the next 20 years.  This is a big issue.

    2010/02/11

    Microsoft Troubles - VIII, MS-Office challenged

    "Microsoft Office is obsolete, or soon will be" By Joe Wilcox.

    I hadn't picked this trend; it's quite important.
    It squeezes their 2nd "birthright" (the other being the PC Operating System, which I'd focussed on).

    2010/02/06

    Microsoft Troubles - VII, An Insiders View

    A friend sent me this link to a New York Times Op-Ed 'contribution'.
    Huge news...
    February 4, 2010
    Op-Ed Contributor
    Microsoft’s Creative Destruction
    By DICK BRASS
    Dick Brass was a vice president at Microsoft from 1997 to 2004.
    This guy was a VP in the glory years - either side of Y2K, and before the 2004/5 Longhorn 'reset'.
    The failure to build the successor to XP was a breaking-point: the forced upgrade cycle was gone.

    He's likely to have a bunch of stock, or options, and a vested interest in the company's success/survival. His comments are likely to be both informed and as positive as they can be...