Archive for the ‘security’ Category
Sometimes I need to use my computer just to read. As somebody who considers himself security conscious, that means keeping a finger at the ready, because my computer turns on the screensaver after 3 minutes (180 seconds) and password-locks it. Yet I do need to be able to read something without constantly interacting with my HIDs. My old solution was to set my screensaver to lock after 15 minutes (900 seconds), but this poses a security risk: I could forget to lock my screen using the hot corner (see image) if I were distracted by something.
So I needed a way to automatically reset the screensaver back to 3 minutes after a grace period. OS X has a command-line tool called defaults which allows changes to be made to system settings, and it allows you to change the screensaver timeout like this:
defaults -currentHost write com.apple.screensaver idleTime -int 180
Now the only time I really do this is at home, so my computer needs to be secure again before I leave for work in the morning. I decided that at 8:30 I would certainly still be in the house, and my MacBook would be open should I have forgotten to reset the timeout. So I first added the following line using crontab:
30 8 * * * defaults -currentHost write com.apple.screensaver idleTime -int 180
Naturally this is still not very secure; the effort is only halfway there, and it should really reset more often. I could set it every 15 minutes, which gives a maximum grace period of 15 minutes, though that would be inconvenient in the evening when at home.
0,15,30,45 * * * * defaults -currentHost write com.apple.screensaver idleTime -int 180
Or reset every 15 minutes only during the hours you are in an environment where you may not entirely control who has physical access to your machine. This could be on a customer site, at a conference or in a shared office space. Or at home while the kids are still awake.
0,15,30,45 8-21 * * * defaults -currentHost write com.apple.screensaver idleTime -int 180
This says: every 15 minutes between 8:00 and 21:45, set my screensaver timeout back to 3 minutes.
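The manual step of relaxing the timeout can itself be scripted. A minimal sketch, assuming macOS’s defaults tool and the com.apple.screensaver domain used above; the DEFAULTS override exists only so the sketch can be dry-run, it is not part of macOS:

```shell
# Grace-period helper: relax the screensaver timeout now, and schedule
# the strict value to come back automatically.
# DEFAULTS is overridable purely for dry runs; normally it is the real
# macOS `defaults -currentHost` invocation.
DEFAULTS=${DEFAULTS:-"defaults -currentHost"}

set_idle_time() {
  # Write the screensaver idle timeout (in seconds) for the current host.
  $DEFAULTS write com.apple.screensaver idleTime -int "$1"
}

grace_period() {
  # Relax to 15 minutes, then restore 3 minutes after $1 seconds.
  set_idle_time 900
  ( sleep "$1" && set_idle_time 180 ) &
}
```

Calling grace_period 900 would give a single 15-minute reading window without leaving the relaxed timeout in place indefinitely.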
Image source: me, Brian Solis
Continuous Integration and Continuous Delivery or Deployment are terms which are often heard, and the former is sometimes confused with the latter. So what are they and why should you care?
Continuous Integration (CI) is the process of linking the code being developed in the company into a quality control system. This works for homogeneous or heterogeneous code environments. A simple example is a tool being developed in the company by multiple developers. To ensure that the new code complies with all the new requirements and still works with the existing code and data model, a set of automated unit, functional, integration and verification/validation tests is created to verify that this is true. The test sets can cover different parts of the functionality, from low-level code tests to the highest-level user experience tests. The final acceptance is done by a human, and deployment is explicit.
With Continuous Deployment (CD) the philosophy is taken one step further. Rather than having a final acceptance by a human who would deploy the application explicitly, the assumption is made that the level of testing is sufficient that the final manual stage can be automated. This means that the live/production software is continuously replaced with the newest version that passed all the tests.
When I first explain the process of CI, most developers are quite enthusiastic. Once they realize that it means they need to write unit tests, they suddenly start thinking it sounds quite difficult: they assume a well-developed test suite is required before the advantages of automated testing can be achieved. This is naturally not true. Just as with writing new code, unit tests can be created incrementally for existing code. Before changing any existing code, first write a suitable test which ensures that the changed code will perform in the same way as the existing code, so as not to break the existing code or any of its dependencies.
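A characterization test of this kind can be tiny. A sketch in shell, where slugify stands in for some hypothetical existing function whose behaviour must not change during a refactoring:

```shell
# Hypothetical existing function we are about to refactor.
slugify() {
  printf '%s' "$1" | tr 'A-Z ' 'a-z-'
}

# Characterization test: pin down the current behaviour before touching
# the implementation, so any refactoring that changes it fails the build.
test_slugify_keeps_current_behaviour() {
  [ "$(slugify 'Hello World')" = "hello-world" ] || return 1
}
```

The test is written first, against the unchanged code, and only then is the implementation altered.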
When first explaining the process of CD, most development teams think it is impossible to continuously deliver software that could be deployed directly to the live environment or to the customer. Yet they forget that in many cases they themselves are guilty of deploying less rigorously tested software into a production environment. Many managers simply state that it cannot be done. And it usually cannot be done before CI has been in place for a while; unlike CI, it needs more than just a large set of unit tests. It also needs a test suite of functional, integration and validation tests.
What I consider most important for CD is changing the way bugs, or suspected bugs, are handled. Currently many bugs are closed with the message “could not reproduce”. In a CD environment there is no reason not to write validation tests for all bug reports. Consider all bugs, including irreproducible ones, as suggestions for validation tests: if you believe that a bug should not happen, it should be tested for.
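In practice that means the bug report itself becomes executable. A sketch, with a hypothetical parse_port function and an invented ticket number:

```shell
# Hypothetical function from a bug report: "fails on empty input".
parse_port() {
  # After the fix: fall back to port 80 when no value is given.
  if [ -n "$1" ]; then printf '%s' "$1"; else printf '80'; fi
}

# Validation test named after the (invented) ticket; even a
# "could not reproduce" report earns one of these.
test_bug_1234_empty_input_falls_back_to_default() {
  [ "$(parse_port '')" = "80" ] || return 1
  [ "$(parse_port 8080)" = "8080" ] || return 1
}
```

If the bug ever reappears, the test names the ticket that first described it.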
Image source: xt0ph3r
Today I spent my day at the Red Hat Open Cloud Tour; this is what happened:
Just heard the opening by Rajiv Sodhi, who is here despite having a baby due any moment.
Margaret J. Rimmler’s keynote was interesting. One of the key takeaways was openness: Red Hat customers should have the choice to remain portable and replace Red Hat, if that is what they want.
Last Saturday I was invited to go to a Physical Security Workshop organized by Independent Films to promote the movie Flypaper. The workshop was given by Thomas Hackner of Hackner Security Intelligence, an independent security auditing company.
The workshop started with a large amount of statistics on the current rates of crime in Austria, and a discussion of the methods by which property crimes are committed. Next there was some practical analysis of the security measures currently implemented in securing different classes of objects – houses, offices, secure facilities, etc. – and the level of security they actually provide.
And as with any good workshop there was a destructive and non-destructive practicum for most of the items discussed: windows, doors, locks, chains and social engineering. Besides lockpicking and designing tools to circumvent security non-destructively, we also got to break into a door by destroying the lock and manually manipulating the locking mechanism.
It was great fun!
Image source: Daniël W. Crompton
Having worked without an RDBMS for much of the beginning of my career, I have always been confused by people’s love of relational databases; in my mind they are merely a collection of CSV files with relationships, plus some extended capabilities that all other databases have, such as indices and caching. I love that the concept of something that is not a relational database, nor a complicated key-value store, has found its place in the world, and it’s called NoSQL.
And didn’t we already have a solution which matched the requirements: scalable, ordered, hierarchical, sharded, consistent, atomic, distributed and object? And wasn’t it a key-value and document database with graph capabilities? An engineer wisely said: “Relational databases give you too much. They force you to twist your object data to fit a RDBMS.” What system doesn’t force you to twist your object data and still allows you to maintain the objects in the way you desire?
When we faced this issue we were having much trouble with a traditional database vendor and the mail software they were producing; we wanted to extend the capabilities of this software and not be reliant on the on-disk mailstores they provided. Mail should be stored distributed and be approachable from different angles, whether with a traditional POP3 client – the norm; an HTTP browser – emerging; or an IMAP4 client – which in those days was hideously complicated, as the RFC had some features which were almost impossible to implement easily. We also wanted to be able to store USENET – which used the same message format – and even chat, be it IRC or private messaging. And while we were at it we might as well add FTP into the mix.
The external connections would be implemented in an Enterprise Service Bus design pattern; the storage part was what posed the real problem. All of the data would need to be secure, distributed and/or sharded over multiple locations for efficiency and security. And with security as our first demand we thought of an open standard which we – and almost the whole planet – used, and still use, for internal authentication. A database with a key-value store at its core, based on a protocol extension written around 1993 and optimized in 1996. A database idea so SMART that every large software company in the world sells it: LDAP.
“LDAP?” I hear you cry, “That’s No NoSQL!” And you’d be right!
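To make the hierarchical key-value idea concrete, here is what storing a mail message in LDAP might look like in LDIF. The attribute names and tree layout are invented for illustration, not a published schema:

```ldif
# One mail message as an entry under a user's inbox (hypothetical schema).
dn: msgId=42,ou=inbox,uid=alice,ou=mail,dc=example,dc=com
objectClass: mailMessage
msgId: 42
mailFrom: bob@example.com
mailSubject: Lunch?
mailBody: Are you free at noon?
```

The tree itself carries the hierarchy and sharding: everything under uid=alice can live on one replica, and the same entry is reachable by key (its DN) or by search.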
Image source: LinkedIn NoSQL Group
A company was having intermittent trouble with their new authenticated SSL. It wasn’t that they experienced trouble with the certificates, which came from a large international CA, or with the authentication; there was a bug which caused the OCSP check on some certificates to fail. And after it had failed the first time for a certificate, it would continue to fail for that certificate until the application server had been restarted. As this was a mission-critical application for their customers between 8am and 6pm, they had taken to restarting the servers at 7am to ensure there would be fewer issues during the day. This was obviously not a permanent solution, so the vendor was called to fix the issue.
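While waiting for the vendor, the responder itself can be checked outside the application server. A sketch using OpenSSL’s ocsp subcommand; the file names and URL are placeholders, and the OPENSSL override exists only so the sketch can be dry-run:

```shell
# Query a certificate's OCSP responder directly, bypassing any state
# cached inside the application server.
check_ocsp() {
  issuer=$1 cert=$2 responder=$3
  ${OPENSSL:-openssl} ocsp -issuer "$issuer" -cert "$cert" \
    -url "$responder" -noout
}
```

If the responder answers correctly here but the application still fails, the stale state is inside the server, which matches the restart-at-7am symptom.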
Recently on NANOG I saw the item below, and I was thinking about what this actually means. A computer would – similar to DynDNS – register itself and its hostname with a DNS server using some kind of authentication. Naturally I immediately thought this was a brilliant plan, and didn’t understand why nobody, with the exception of DynDNS, had thought of it before. The immediate afterthought was that this would be easy to implement with a soft-token, the software equivalent of a physical token like RSA’s SecurID, or complicated to implement with a PKI infrastructure.
From: Mark Andrews <[email protected]>
Re: mailing list bounces
It will be much better when the OS’s just register themselves in
the DNS. Humans shouldn’t have to do this when a machine renumbers.
Named can already authenticate PTR updates based on using TCP and
the source address of the update. For A/AAAA records you setup a
cryptographically strong authentication first.
DynDNS uses a username and password, which is less secure than the cryptographically strong solution that Mark Andrews mentions above.
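The cryptographically strong variant is roughly what BIND’s nsupdate already offers with TSIG keys. A sketch of a machine registering its own A record; the hostname, address and key file are placeholders, and the NSUPDATE override exists only so the sketch can be dry-run:

```shell
# Register this machine's A record via a TSIG-signed dynamic update.
register_hostname() {
  host=$1 addr=$2 keyfile=$3
  ${NSUPDATE:-nsupdate} -k "$keyfile" <<EOF
update delete $host A
update add $host 300 A $addr
send
EOF
}
```

The shared key in the key file plays the role of the soft-token: the server only accepts updates signed with it.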
Image source: Bill McCurdy
Failing gracefully is one of the most important things; whether it is your responsibility or not, customers ultimately believe it is your responsibility to perform in extraordinarily difficult situations. Some companies forget this and force their view and ideas of the world on their customers – that’s one of the quickest ways to turn customers into ex-customers.
I was inspired while at a customer site: checking my Google Reader, I selected Little Gamers, which is classified as profanity by the content filter, and received the message below. I could still see the item in Google Reader when I used https rather than http to access it, although the cartoon itself was blocked by the content filter.
This is a fine example of failing gracefully.