Why I Hate Moore's Law
by Gregory Levine
The number of transistors on integrated circuits keeps doubling approximately every 18 months. I remember my
first manager telling me that banks would never be able to provide common real-time transactions
(balance inquiry, withdrawals, deposits) to customers around the world because there would never be enough CPU,
storage, and bandwidth to support it. Thirty years later,
I can conduct all my e-commerce transactions on a mobile device from a Starbucks in Mongolia.
Storage density has been increasing at an exponential pace too. In the 1980s, a string of 3380 DASD held 20 GB,
cost tens of thousands of dollars, and took up more space than 10 refrigerators. For less than $500, my new MacBook
has a 1 TB solid-state drive the size of a postcard (with power and cooling requirements orders of magnitude lower
than the string of 3380s).
This is all great. Right? Unfortunately, we technologists are like drug addicts when it comes to infrastructure
resources (compute, storage, bandwidth, memory, channels, etc.). We can't get enough of it. And since technology
keeps getting faster, denser, and cheaper, we keep buying more under the false pretense that it is inexpensive.
I mean, what's another server or ten, or another 10–50 TB of storage? My app is chatty? No big deal;
the network supports 1 Gb client NICs and 10 Gb host-side NICs.
So if technology keeps getting more powerful at lower and lower unit costs, why does the overall technology hardware
expense keep increasing year after year at our firms? I mean, based on Moore's Law, processing power and density
are doubling every 18 months. Surely our companies are not growing at the same rate.
Perhaps our applications require more than double the amount of resources every two years. Really?
Yeah, I know there are initiatives like adding high-availability or implementing disaster recovery for critical
applications, which require additional kit. But these one-time
events do not explain why our annual hardware expenses keep increasing.
Hold on. Who cares? The absolute hardware budget is just one of many data points for measuring IT, and I would argue
not a very important one. A far better IT metric is the unit-cost trend for transaction processing. If the unit cost
for processing a stock trade, streaming a movie, or purchasing a pair of shoes is decreasing, your CFO and CEO should be
very happy even if your hardware budget is increasing. You are helping to improve bottom-line growth
(i.e., margin growth) for the firm.
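As a toy illustration of this metric (all figures below are hypothetical, not from the article): compute the unit cost as total infrastructure spend divided by transaction volume, year over year.

```python
# Toy illustration of the unit-cost metric (all figures are hypothetical).
# Total hardware spend rises year over year, but transaction volume rises
# faster, so the cost per transaction -- the metric that matters -- falls.
years = [2013, 2014, 2015]
hardware_spend = [1_000_000, 1_300_000, 1_700_000]    # annual spend in $ (hypothetical)
transactions = [10_000_000, 20_000_000, 40_000_000]   # annual volume (hypothetical)

unit_costs = [spend / volume for spend, volume in zip(hardware_spend, transactions)]
for year, cost in zip(years, unit_costs):
    print(f"{year}: ${cost:.3f} per transaction")
```

With numbers like these, the hardware budget grows 70% over two years while the cost per transaction falls by more than half, which is the trend a CFO should actually care about.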
I remember trying to explain this to one of my CEOs. He was convinced that my hardware costs should be going down
(due to Moore's Law). I explained that the application had grown significantly (from ~5,000 clients to over 60,000) and
required more servers and storage to support this business growth. So even though the cost per server and per GB were
lower than in the past, the total cost of the hardware now required to support the business growth offset any hardware
unit-cost savings. However, the cost per business transaction decreased by an order of magnitude, so the margin (profit)
we were making per client increased significantly. In fact, I went on to explain that most of the margin growth for
the firm was due to IT driving down business transaction unit costs.
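A back-of-the-envelope sketch of that conversation. The client counts come from the anecdote above; the dollar figures and per-client transaction volume are hypothetical assumptions for illustration only.

```python
# Client counts come from the anecdote above; all dollar figures and the
# per-client transaction volume are hypothetical assumptions.
clients_before, clients_after = 5_000, 60_000
cost_per_txn_before = 1.00    # hypothetical $ per business transaction
cost_per_txn_after = 0.10     # an order of magnitude lower, as in the anecdote
txns_per_client = 1_000       # hypothetical annual transactions per client

hw_before = clients_before * txns_per_client * cost_per_txn_before
hw_after = clients_after * txns_per_client * cost_per_txn_after

# Total spend went UP even though the unit cost fell 10x -- which is
# exactly why the absolute hardware budget is the wrong metric.
print(f"total before: ${hw_before:,.0f}; total after: ${hw_after:,.0f}")
```

Under these assumptions, a 12x growth in clients more than offsets a 10x drop in unit cost, so the total hardware bill still rises even as each transaction gets dramatically cheaper.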
Now, many of you might not have this type of business growth, where the amount of hardware needed to support it offsets
any cost benefit from Moore's Law. Even if you do, can you honestly say that your engineering prowess is the reason
for the lower transaction cost? Or did you just benefit from Moore's Law? In reality, if more time were spent on
re-engineering, optimization, and re-architecting, the transaction cost would be even lower (and your overall hardware
costs would be lower too).
The fact of the matter is we all got lazy. Again, what's another server or TB of storage? Decades ago, IT spent a
lot of time on application, platform, and infrastructure optimization. The savings associated with delaying
hardware capacity purchases easily covered the cost of those optimization efforts. But the exponential effect of
Moore's Law and the increasing commoditization of hardware have led to a decreased focus on driving data-processing
efficiencies.
This is why I hate Moore's Law. We just keep bringing in more kit ("it's cheap"), and then
wonder why we need more
people and software tools to support it all. Oh, and when was the last time we sunset anything? The end result:
IT CapEx and OpEx budgets keep increasing, and IT is becoming less agile and unable to provide decent service levels
to the business due to the growing amount of technical debt.
Now, I know, we are all under tremendous pressure to deliver. Our focus is on project delivery.
Time to market for a new business application, new sales tool, new call center feature, etc. trumps any time spent
on optimization or re-engineering. What about devoting some manpower to sunset a legacy system? No time, have to
start work on the next critical business initiative.
By the way, as IT starts to leverage the public Cloud for IaaS, the problem is only getting worse. Have you checked
your AWS bill lately? Can you explain it? Do you know who, what, and why is consuming AWS services? Some words of
wisdom: do not repeat the mistakes of the last 20 years when you go into the public Cloud. You'll die!
If you are all expecting a nice gift-wrapped silver bullet solution from me that will solve this mess, then it's time
for a career change (and can I have whatever you're smoking?). Otherwise, roll up your sleeves and let's get to work.
It's time to raise the priority of (and incentives for) technical debt reduction initiatives, IT optimization,
elastic infrastructure, IT Service Management, and building a DevOps culture to address the dangers of
Moore's Law.
Hararei can help companies plan and execute strategies to address their spiraling IT costs and
technical debt, and assist with their journey to the Cloud. Contact us for a no-obligation consultation.