Posts Tagged ‘risk’
I have a confession: I am a former software vendor, I’ve worked for software vendors as a consultant, and I’ve been guilty of delivering a solution which didn’t work as advertised. This is sadly an all too common occurrence, which can be due to many things that I will not be discussing here. Instead I will identify some of the common pitfalls that all vendors face, and what can be done to reduce and mitigate the risks these pose.
Just reading the Telegraph article Groupon demand almost finishes cupcake-maker:
A businesswoman has accused Groupon of almost ruining her bakery company after she was forced to make 102,000 cupcakes at a loss when too many people took up her cut-price offer.
I was thinking about what went wrong. She underestimated the popularity of the offer, underpriced an already cheap product, and rather than break a bad contract she chose to fulfil it with what she herself called a second-rate product – damaging her brand.
What she did right was contacting the newspapers and getting herself on Slashdot.
The age-old lie told by ISP support desks, “The Internet is down,” was briefly reality again yesterday.
For the past couple of days I’d been seeing and hearing comments that there was a disturbance in the force of the Internet. Initially a message was posted to NANOG about a general malaise or instability in the Internet; some humorous quips were posted in response and the matter was soon forgotten.
With hindsight, a network operator said they had been seeing a higher than normal number of BGP updates, which is usually an indicator of network instability being resolved by rerouting around the problem. That is all part of the normal operation of the Internet. Then sometime yesterday morning, as the east coast of the US was getting to work, the looming disaster struck.
Juniper network devices started core dumping and restarting due to a bug in the code which handled BGP UPDATE messages, triggered as another large update arrived. The self-healing properties of the Internet broke, and the Internet went with them. The Great Juniper Outage of 2011 was born.
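The details of the Juniper bug itself weren’t public at the time of writing, but the general failure mode – a router crashing on an unexpected BGP UPDATE instead of rejecting it – comes down to missing bounds checks when parsing attacker- or peer-controlled wire data. As an illustration only (not Juniper’s actual code), here is a minimal sketch of parsing BGP path attributes in the RFC 4271 wire format defensively, raising an error on malformed input rather than reading past the buffer:

```python
import struct

def parse_path_attributes(data: bytes):
    """Parse the path attributes of a BGP UPDATE (RFC 4271 wire format):
    each attribute is flags (1 byte), type (1 byte), length (1 or 2 bytes
    depending on the Extended Length flag), then the value.
    Malformed input raises ValueError instead of crashing the process."""
    attrs = []
    i = 0
    while i < len(data):
        if len(data) - i < 3:
            raise ValueError("truncated attribute header")
        flags, type_code = data[i], data[i + 1]
        if flags & 0x10:  # Extended Length bit: 2-byte length field
            if len(data) - i < 4:
                raise ValueError("truncated extended length field")
            (length,) = struct.unpack_from(">H", data, i + 2)
            i += 4
        else:
            length = data[i + 2]
            i += 3
        if len(data) - i < length:  # the check a crashing parser lacks
            raise ValueError("attribute length exceeds buffer")
        attrs.append((type_code, data[i:i + length]))
        i += length
    return attrs
```

The point is the explicit length check before every read: a parser that trusts the advertised length of an incoming update is one oversized message away from a core dump.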
Almost certainly. The reliance of large ISPs – backbone carriers – on the hardware of one specific vendor creates a single point of failure, which is bad – mkay. A failover plan should always be in place, and not just at the ISPs: companies who rely on the Internet for business should take this into account too. Recent outages at some of the companies I have consulted for showed that by placing their faith in one specific vendor they had created a single point of failure, with some high-profile repercussions.
Do you have a single point of failure?
I had heard of The Quants and wanted to buy it. After my father and I discussed how it was that all this money disappeared during the credit crisis, I thought it might be wise to get an in-depth view of the “China syndrome hedge fund catastrophe.” This is more than just a review of the book.
The first thing that I noticed were the multiple references to Ed Thorp’s “Beat the Dealer”, a book on counting cards in blackjack using the Hi-Lo method, and “Liar’s Poker“. Both books are on my bookshelf. Liar’s Poker covers Michael Lewis’s years, 1985–1987, as a trader at Salomon Brothers. There is some overlap between the characters of the two books, such as John Meriwether, who famously was challenged to a game of liar’s poker for 1 million dollars and replied: “If we’re going to play for those kind of numbers, I’d rather play for real money. Ten million dollars. No tears.”
The book reminded me of playing the computer game “Capitalism” when I was 16, in which I would game the system. I would create a company which produced a little profit, initially plowing that profit into buying companies through hostile takeovers on the mini stock market. The game had a fixed number of AI companies, and mergers would cause new AI companies to be created, so I avoided that by buying a controlling interest in the AI companies and forcing them to turn out high dividends, until all the AI companies in the stock market were under my control. Then I left the computer AIs to tend to the companies and all their business while the dividends pushed my company’s profit into 12 digits.
The Quants is less of a narrative than Liar’s Poker; much of it is carefully crafted from multiple interviews with most of the players, plus books, magazines and newspaper articles. The tale of hedge fund managers and traders taking ever-increasing risk just to earn the same amount that they did the previous year is sobering. As the book notes: “Hedge fund managers who’ve seen big losses can be especially dangerous. Investors [...] may become demanding and impatient. … [T]here can be a significant incentive to push the limits of the fund’s capacity to generate large gains [...] If a big loss is no worse than a small loss or meager gains [...] the temptation to jack up the leverage and roll the dice can be powerful.”
Even the glaring warning of Meriwether’s LTCM failure in 1998 – like Daedalus’ warning to Icarus – was ignored by most of the hedge funds. “By 1998, nearly every bond arbitrage desk and fixed-income hedge fund on Wall Street had copied LTCM’s trades.” They were leveraged up to their eyeballs, and while racking up huge debts of their own they traded in the debts of others: bonds, collateralized debt obligations and credit default swaps. Some hedge funds had leverage of 30 to 1, which means they borrowed $30 for each dollar they had as an asset. “Coming into 2008, hedge funds were in control of $2 trillion.” And the banks they were borrowing from had leverage of at least 9 to 1 because of fractional-reserve banking; these same banks “… Morgan Stanley, Goldman Sachs, Citigroup, Lehman Brothers, Bear Stearns, and Deutsche Bank, [...] were rapidly transforming from staid white-shoe bank companies into hot-rod hedge fund vehicles fixated on the fast buck…” These banks had “… trillions more in leverage that juiced their returns like anabolic steroids.”
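It’s worth making the 30-to-1 arithmetic concrete, because it explains both the steroid-like returns and the sudden deaths. A toy calculation (my own illustration, not from the book) of return on equity for a fund that borrows 30 dollars per dollar of its own capital:

```python
def return_on_equity(leverage: float, asset_return: float) -> float:
    """Return on the fund's own capital, given `leverage` dollars
    borrowed per dollar of equity (so assets = 1 + leverage) and a
    fractional return on the whole asset base. Gains and losses on
    the borrowed money accrue entirely to the fund's equity."""
    assets = 1.0 + leverage
    return assets * asset_return

# At 30:1, a modest 3% gain on assets becomes a 93% gain on equity,
# while roughly a 3.2% loss on assets wipes the equity out completely.
```

The asymmetry is the trap: the same multiplier that turns meager gains into headline returns turns a small market move against you into total ruin.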
And it wasn’t just the banks; insurance companies got in on the action too. These insurance companies insured the credit default swaps: “[i]f the value of the underlying asset insured by the swaps declined for whatever reason, the protection provider [...] would have to put up more collateral, since the risk of default was higher.”
The light at the end of the tunnel is an oncoming train.
–Wall Street proverb
“… [T]here were legitimate concerns that as computer-driven trading reached unfathomable speeds, danger lurked. Many of these computer-driven funds were gravitating to a new breed of stock exchange called ‘dark pools’—secretive, computerized trading networks that match buy and sell orders for blocks of stocks in the frictionless ether of cyberspace. … In these invisible electronic pools, vast sums change hands beyond the eyes of regulators. While efforts were afoot to push the murky world of derivatives trading into the light of day, stock trading was sliding rapidly into the shadows.”
“The findings of behavioral finance .. had shown time and again that people don’t always make optimal choices when it comes to money [...] [N]euroeconomics, was delving into the hardwiring of the brain to investigate why people often make decisions that aren’t rational [...] Evidence was emerging that certain parts of the brain are subject to a ‘money illusion’ that blinds people to the impact of future events, such as the effect of inflation on the present value of cash—or the possibility of a speculative bubble bursting.”
To me it also looks like they were, and still are, blinded by money. Two great reads for the weekend.
Image source: Amazon
After reading the Newsweek article “I Can’t Think!” I started wondering what the consequences of information overload are for SCRUM and XP teams. In my experience of SCRUM and XP there is a lot of information exchange, and with practices like Pair Programming and Test-Driven Development even more potential information sources form a threat to developers.
One of the best examples from my own experience is when I was working with an ESB platform and the Java Message Service (JMS). I had no experience with the former and a reasonable amount of experience with the latter, yet I was asked to explain the inner workings of the software package I had barely used. The answer “It does some magic, with a secret sauce!” was not enough to counter the fears of the PO who had made the choice to use the software. This meant I needed to gather far more information about the subject to be able to explain in layman’s terms what was occurring.
The additional information I needed to collect was a millstone around my neck during my initial introduction to the platform. With the benefit of hindsight I should have said that I needed to get a basic understanding of the platform before committing to the demands of the PO. The downside was that we needed to be able to estimate the User Stories, and an estimate of 100 was a little excessive.
This is an example of information overload caused by not being able to put the learned information into context. There are a number of cases of information overload that can occur in Agile methodologies which may cause teams to temporarily derail. As the article notes:
The brain is wired to notice change over stasis. An arriving email that pops to the top of your BlackBerry qualifies as a change; so does a new Facebook post. We are conditioned to give greater weight in our decision-making machinery to what is latest, not what is more important or more interesting.
Which in practice means:
- incoming mail, text or instant messages
- too many search results
- other interruptions
As the article further notes:
Experts advise dealing with emails and texts in batches, rather than in real time; that should let your unconscious decision-making system kick in. Avoid the trap of thinking that a decision requiring you to assess a lot of complex information is best made methodically and consciously; you will do better, and regret less, if you let your unconscious turn it over by removing yourself from the info influx.
These are issues where the SCRUM Master is the gatekeeper to the team, and should set aside time in the day when these interruptions can take place.
Image source: Jerry Wong
Failing gracefully is one of the most important things: whether it is your responsibility or not, ultimately customers believe it is your responsibility to perform in extraordinarily difficult situations. Some companies forget this and force their view and ideas of the world on their customers; that’s one of the quickest ways to turn customers into ex-customers.
The inspiration came when I was at a customer site checking my Google Reader and selected Little Gamers, which the customer’s content filter classifies as profanity, and received the message below. I could still see the item in Google Reader when I used https rather than http to access it, although the cartoon itself was obviously blocked by the content filter.
This is a fine example of failing gracefully.
In continuation of my article Data Erasing for your own Protection, I got into a discussion about other ways to protect your data from law enforcement.
I was told by a former law enforcement member that after the crime scene has been secured, the computer tech checks that the computer is functional and then has the equivalent of a mover ship the computer, like any other box, to the computer lab. The issue with this is that a mercury switch and a power source could be used to zap the computer with the gauss needed to erase or destroy the hard disk.
Another method would be to use a RAM disk, whether physical or virtual. The physical version has the advantage that in the case of a power outage the data is preserved for a number of hours by a backup battery, although this could be a disadvantage too; another disadvantage is the memory limit imposed by the hardware. The virtual RAM disk’s advantage over the physical one is that a larger amount of memory can be allocated, although you don’t get the battery protection against power loss.
It is also important to remember that data in RAM exhibits remanence, which should also be mitigated. This may be possible by passing an electric charge over the memory to erase it, although I have yet to find relevant references.
A third method may be raising the temperature of the hard disk above the Curie point, which will destroy the magnetic properties of the disk. I will need to investigate this more too.
Embedding part of the computer in epoxy still applies to all the above.
- Gigabyte I-RAM DDR PCI Virtual RAM Disk Drive SATA w/ Backup Battery – backup power lasts ~16 hours and it supports up to 4 GB of RAM.
- Data remanence: Data in RAM
- Curie Point
Image source: Michiel2005
I’ll describe the problem I think you have: you have data stored on computers which you don’t want the police or governments to have, something that cryptography cannot protect, as XKCD so eloquently puts it in the cartoon Security below. You are not the only one: internet companies, financial institutions, churches, organizations working for freedom, lawyers, criminals and innocent individuals all need to protect themselves.
It’s possible to use something like Darik’s Boot and Nuke (DBAN), a self-contained boot disk that securely wipes the hard disks of most computers; however this takes time, sometimes a number of hours, and requires human interaction. Time that may not be available if the long arm of the law comes down on you like a ton of bricks. It can even be the case that the power is shut off before the computer is secured – the police do this to preserve the data on the computer for the investigation. So I thought about what would be needed to magnetically erase the hard disk.
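To see why wiping takes hours, consider what a tool like DBAN is essentially doing: overwriting every block of the device with (pseudo)random data, possibly several times. A simplified sketch of that idea in Python, applied to a single file rather than a raw disk (and note: on journaling filesystems and SSDs an in-place overwrite like this does not guarantee the old data is gone):

```python
import os

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents in place with random bytes, a
    simplified sketch of the per-block overwrite a disk wiper performs.
    Each pass rewrites the full length and forces the data to storage."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1 << 16)  # write in 64 KiB chunks
                f.write(os.urandom(chunk))
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # don't let the pass sit in page cache
```

Multiply those passes across a terabyte of spinning disk and the hours add up – which is exactly why an instant physical method like degaussing is attractive when time is the constraint.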
Secondly, I read that degaussing can cause permanent, irreversible damage to hard drives, which means they are not reusable. Unlike tapes, the mechanism that reads the magnetic track is part of the device and is itself magnetic. So don’t expect to be able to use the disk after you have tested the electromagnet.
Thirdly, the magnetic induction (also referred to as magnetic flux density or saturation flux density) needed to correctly erase some hard disks can be 6000–7000 gauss (0.6–0.7 tesla); an NSA-approved degausser puts out 22000 gauss (2.2 tesla). From some sources I learned that the core of electromagnets is usually made from a magnetic material – power ferrite – which has a saturation flux density of under 4000 gauss; this wouldn’t be enough, so a different material would be needed for the core. I discovered that MPP (molypermalloy powder) material has a saturation flux density of 7000 gauss, which is what is needed for this PoC. Iron powder and High Flux cores can reach 10000 and 15000 gauss respectively.
Fourthly, you need thick copper wire wound round the core; this is called a solenoid. It creates the B-field, the magnetic field which will erase the hard disk. Using a gaussmeter or EMF meter it is possible to measure the magnetic flux density (in gauss or tesla) produced by your electromagnet and experiment with getting the level to 6000–7000 gauss.
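For a first estimate before reaching for the gaussmeter, the textbook ideal-solenoid formula B = μ₀ · μᵣ · (N/L) · I gives a feel for how turns, length and current trade off. A back-of-the-envelope calculator (my own sketch; the numbers are illustrative, and in a real build B is capped by the core’s saturation flux density, so the formula only holds below that limit):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, in tesla-metres per ampere

def solenoid_flux_density(turns: int, length_m: float, current_a: float,
                          relative_permeability: float = 1.0) -> float:
    """Ideal-solenoid flux density B = mu0 * mu_r * (N/L) * I, in tesla.
    With a ferromagnetic core, B cannot actually exceed the core's
    saturation flux density, whatever this formula predicts."""
    return MU0 * relative_permeability * (turns / length_m) * current_a

def tesla_to_gauss(b_tesla: float) -> float:
    """Convert tesla to gauss (1 T = 10000 G)."""
    return b_tesla * 1e4
```

For example, an air-core solenoid of 1000 turns over 10 cm carrying 10 A only manages about 0.13 T (1300 gauss) – which shows why a high-permeability core with a saturation point around 7000 gauss is doing the real work here.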
Fifthly, you need an Uninterruptible Power Supply (UPS); this will ensure that when the power is switched off, the electromagnet is powered up to erase the hard disk.
Lastly you need to install your electromagnet round your hard disk, hook up the UPS and fill the computer with epoxy so it cannot be taken apart by the police. Let’s just hope you don’t have a brownout.
Sadly this method will not work for solid state disks, although you can possibly attach squibs using a similar setup. That may be something for a future article.
- Degaussing : Irreversible damage to some media types
- I am creating an electromagnet for my school’s science fair project. Does the shape of the iron core make a difference? [...]
Image source: Michiel2005