Technological Musings

Musings, ramblings, rants ...


Yes, your iToaster needs security
Let's talk about your house for a moment. For the sake of argument, we'll assume that you live in a nice house with doors, windows, the works. All of the various entries have the requisite locking devices. As with most homes, these help prevent unwanted entry, though a determined attacker can surely bypass them. For the moment, let's ignore the determined attacker and just talk about casual attempts. Throughout your time living in your home, casual attempts at illegal entry have been rebuffed. You may or may not even know about these attempts. They happen pretty randomly, but there's typically not much in the way of evidence after the attacker gives up and leaves. So you're pretty happy with how secure things are.

Recently, you've heard about this great new garage from a friend who has one. It's really nice, low cost, and you have room for it on your property, so you decide to purchase one. You place the order and, after a few days, your new garage arrives. It's everything you could have imagined. Plenty of room to store all the junk you have in the house, plus you can fit the car in there too!

You use the garage every day, moving boxes in and out as needed, until one day you return home and, for some inexplicable reason, your car won't fit all the way in. Well, that's pretty weird, you think. You decide that maybe you stored too much in the garage, so you spend the rest of the day cleaning it out. You make some tough decisions and eventually you make enough room to put the car back in the garage.

Time passes and this happens a few more times. After a while you start to get a bit frustrated and decide that maybe you need to buy a bigger garage. You pull out your trusty measuring tape to verify the dimensions of the garage and, to your amazement, it's smaller than what you remember. You do some more checking and discover that the garage is bigger on the outside than it is on the inside. So you call an expert to figure out what's going on.

When the expert arrives, she takes one look at the situation and tells you she knows exactly what has happened. You watch with awe as she walks up to the closed garage, places her hand on the door, and the door opens by itself! Curious, you ask how she performed that little magic trick. She explains that this particular model of garage has a little-known problem that allows the door to be opened by putting pressure on just the right place. Next, she heads into the garage and starts poking around at the walls. After a few moments, one of the walls slides open, revealing another room full of stuff you don't recognize.

Your expert explains that obviously someone else knows about this weakness and has set up a false wall in your garage to hide their own stuff in. This is the source of the shrinking space and your frustration. She helps you clean up the mess and tear down the false wall. After everything is back to normal, she recommends you contact the manufacturer and see if they have a fix for the faulty door.

While this story may sound pretty far-fetched when we're talking about houses and garages, it's an all too common story for consumer-grade appliances. And as we move further into this new age of connected devices, commonly called the Internet of Things (IoT), it's going to become an even bigger issue.

Network access itself is the first challenge. Many of the major home router vendors have already experienced problems with security. So right out of the gate, home networks are potentially vulnerable. This is a major problem, especially given the potentially sensitive nature of data being transmitted by a variety of new IoT devices.

Today's devices are incredibly data-centric. From fitness trackers to environmental sensors, our devices are tracking everything. This data is collected and then transmitted to an internet-connected service where it is made available to the user in a variety of ways. Some users may find this data to be sensitive, hoping to keep it relatively p[...]

Hacker is not a dirty word
Have you ever had to fix a broken item when you didn't have the right parts? Instead of just giving up, you looked around and found something that would work for the time being. Occasionally, you come back later and fix it "the right way," but more often than not, that fix stays in place indefinitely. Or perhaps you've found a novel new use for a device. It wasn't built for that purpose, but you figured out that it fit the exact use you had in mind.

Those are the actions of a hacker. No, really. If you look up the definition of a hacker, you get all sorts of responses. Wikipedia has three separate entries for the word hacker in relation to technology:

Hacker - someone who seeks and exploits weaknesses in a computer system or computer network
Hacker - (someone) who makes innovative customizations or combinations of retail electronic and computer equipment
Hacker - (someone) who combines excellence, playfulness, cleverness and exploration in performed activities

Google defines it as follows:

1. a person who uses computers to gain unauthorized access to data. (informal) an enthusiastic and skillful computer programmer or user.
2. a person or thing that hacks or cuts roughly.

And there are more. What's interesting here is that depending on where you look, the word hacker means different things. It has become a pretty contentious word, mostly because the media has, over time, used it to describe the actions of a particular type of person. Specifically, hacker is often used to describe the criminal actions of a person who gains unauthorized access to computer systems. But make no mistake, the media is completely wrong on this and they're using the word improperly.

Sure, the person who broke into that computer system and stole all of that data is most likely a hacker. But, first and foremost, that person is a criminal. Being a hacker is a lifestyle and, in many cases, a career choice, much like being a lawyer or a doctor. Why then is hacker used as a negative term to identify criminal activity and not doctor or lawyer? There are plenty of instances where doctors, lawyers, and people from a wide variety of professions have indulged in criminal activity.

Keren Elazari spoke in 2014 at TED about hackers and their importance in our society. During her talk she notes that there are hackers who use their skills for criminal activity, but many more who use their skills to better the world. From hacktivist groups like Anonymous to hackers like Barnaby Jack, these people have changed the world in positive ways, helping to identify weaknesses in everything from computer systems to governments and laws. In her own words:

My years in the hacker world have made me realize both the problem and the beauty about hackers: They just can't see something broken in the world and leave it be. They are compelled to either exploit it or try and change it, and so they find the vulnerable aspects in our rapidly changing world. They make us, they force us to fix things or demand something better, and I think we need them to do just that, because after all, it is not information that wants to be free, it's us.

It's time to stop letting the media use this word improperly. It's time to take back what is ours. Hacker has long been a term used to describe those we look up to, those we seek to emulate. It is a term we hold dear, a term we seek to defend.
When Loyd Blankenship was arrested in 1986, he wrote what has become known as the Hacker’s Manifesto.  This document, often misunderstood, describes the struggle many of us went through, and the joy of discovering something we could call our own.  Yes, we’re often misunderstood.  Yes, we’ve been marginalized for a long time.  But times have changed since then and our culture is strong and growing. [...]

Network Enhanced Telepathy
I've recently been reading Wired for War by P.W. Singer and one of the concepts he mentions in the book is Network Enhanced Telepathy. This struck me as not only something that sounds incredibly interesting, but something that we'll probably see hit mainstream in the next 5-10 years.

According to Wikipedia, telepathy is "the purported transmission of information from one person to another without using any of our known sensory channels or physical interaction." In other words, you can think *at* someone and communicate. The concept that Singer talks about in the book isn't quite as "mystical" since it uses technology to perform the heavy lifting. In this case, technology brings fantasy into reality.

Scientists have already developed methods to "read" thoughts from the human mind. These methods are by no means perfect, but they are a start. As we've seen with technology across the board, from computers to robotics, electric cars to rockets, technological jumps may ramp up slowly, but then they rocket forward at a deafening pace. What seems like a trivial breakthrough at the moment may well lead to the next step in human evolution.

What Singer describes in the book is one step further. If we can read the human mind, and presumably write back to it, then adding a network in between, allowing communication between minds, is obvious. Thus we have Network Enhanced Telepathy. And, of course, with that comes all of the baggage we associate with networks today, everything from connectivity issues and lag to security problems.

The security issues associated with something like this range from inconvenient to downright horrifying. If you thought social engineering was bad, wait until we have a direct line straight into someone's brain. Today, security issues can result in stolen data, denial of service issues, and, in some rare instances, destruction of property. These same issues may exist with this new technology as well.

Stolen data is pretty straightforward. Could an exploit allow an attacker to arbitrarily read data from someone's mind? How would this work? Could they pinpoint the exact data they want, or would they only have access to the current "thoughts" being transmitted? While access to current thoughts might not be as bad as exact data, it's still possible this could be used to steal important data such as passwords, secret information, etc. Pinpointing exact data could be absolutely devastating. Imagine, for a moment, what would happen if an attacker was able to pluck your innermost secrets straight out of your mind. Everyone has something to hide, whether that's a deep dark secret, or maybe just the image of themselves in the bathroom mirror.

I've seen social engineering talks wherein the presenter describes a technique to interrupt a person mid-thought and effectively create a buffer overflow of sorts, allowing the social engineer to insert their own directions. Taken to the next level, could an attacker perform a similar attack via a direct link to a person's mind? If so, what access would the attacker then attain? Could we be looking at the next big thing in brainwashing? Merely insert the new programming, directly into the user.

How about denial of service attacks or physical destruction? Could an attacker cause physical damage to their target? Is a connection to the mind enough access to directly modify the cognitive functions of the target? Could an attacker induce something like locked-in syndrome in a user? What about blocking specific functions, preventing the user from being able to move limbs, or speak? Since the brain performs regulatory control over the body, could an attacker modify the temperature, heart rate, or even induce sensations in their target? These are truly scary scenarios and warrant serious thought and discussion.

Technology is racing ahead at breakneck speeds and the future is an exciting one. These technologies could allow h[...]

Suspended Visible Masses of Small Frozen Water Crystals
The Cloud, hailed as a panacea for all your IT-related problems. Need storage? Put it in the Cloud. Email? Cloud. Voice? Wireless? Logging? Security? The Cloud is your answer. The Cloud can do it all. But what does that mean? How is it that all of these problems can be solved by merely signing up for various cloud services? What is the cloud, anyway?

Unfortunately, defining what the cloud actually is remains problematic. It means many things to many people. The cloud can be something "simple" like extra storage space or email. Google, Dropbox, and others offer a service that allows you to store files on their servers, making them available to you from "anywhere" in the world. Anywhere, of course, if the local government and laws allow you to access the services there. These services are often free for a small amount of space. Google, Microsoft, Yahoo, and many, many others offer email services, many of them "free" for personal use. In this instance, though, free can be tricky. Google, for instance, has algorithms that "read" your email and display advertisements based on the results. So while you may not exchange money for this service, you do exchange a level of privacy.

Cloud can also be pure computing power. Virtual machines running a variety of operating systems, available for the end-user to access and run whatever software they need. Companies like Amazon have turned this into big business, offering a full range of back-end services for cloud-based servers. Databases, storage, raw computing power, it's all there. In fact, they have developed APIs allowing additional services to be spun up on demand, augmenting existing services.

As time goes on, more and more services are being added to the cloud model. The temptation to drop self-hosted services and move to the cloud is constantly increasing. The incentives are definitely there. Cloud services are affordable, and there's no need for additional staff for support. All the benefits with very little of the expense. End-users have access to services they may not have had access to previously, and companies can save money and time by moving services they use to the cloud.

But as with any service, self-hosted or not, there are questions you should be asking. The answers, however, are sometimes a bit hard to get. But even without direct answers, there are some inferences you can make based on what the service is and what data is being transferred.

Data being accessible virtually anywhere, at any time, is one of the major draws of cloud services. But there are downsides. What happens when the service is inaccessible? For a self-hosted service, you have control and can spend the necessary time to bring the service back up. In some cases, you may have the ability to access some or all of the data, even without the service being fully restored. When you surrender your data to the cloud, you are at the mercy of the service provider. Not all providers are created equal and you cannot expect uniform performance and availability across all providers. This means that in the event of an outage, you are essentially helpless. Keeping local backups is definitely an option, but oftentimes you're using the cloud so that you don't need those local backups.

Speaking of backups, is the cloud service you're using responsible for backups? Will they guarantee that your data will remain safe? What happens if you accidentally delete a needed file or email? These are important issues that come up quite often for a typical office.

What about the other side of the question? If the service is keeping backups, are those backups secure? Is there a way to delete data, permanently, from the service? Accidents happen, so if you've uploaded a file containing sensitive information, or sent/received an email with sensitive information, what recourse do you have? Dropbox keeps snapshots of all uploaded data for 30 days, but there doesn't seem to be an official way to permanently delete a file. [...]

Boldly Gone
I have been and always shall be your friend.


It's a sad day. We've lost a dear friend today, someone we grew up with, someone so iconic that he inspired generations. At the age of 83, Leonard Nimoy passed away. He will be missed.

It's amazing to realize how much someone you've never met can mean to you. People larger than life, people who will live on in memory forever. For hours now, I've been continually moved by the outpouring of grief and love online for Leonard. He meant so much to so many, and his memory will live on forever.

Of all the souls I have encountered in my travels, his was the most... human.

Will online retailers be the next major breach target?
In the past year we have seen several high-profile breaches of brick and mortar retailers. Estimates range in the tens of millions of credit cards stolen in each case. For the most part, these retailers have weathered the storm with virtually no ill effects. In fact, it seems the same increase in stock price that TJ Maxx saw after their breach still rings true today. A sad fact indeed.

Regardless, the recent slew of breaches has finally prompted the credit card industry to act. They have declared that 2015 will be the year that chip and pin becomes the standard for all card-present transactions. And while chip and pin isn't a silver bullet, and attackers will eventually find new and innovative ways to circumvent it, it has proven to be quite effective in Europe, where it has been the standard for years.

Chip and pin changes how the credit card information is transmitted to the processor. Instead of the credit card number being read, in plain text, off of the magnetic strip, the card reader initiates an encrypted communication between the chip on the card and the card reader. The card details are encrypted and sent, along with the user's PIN, to the card processor for verification. It is this encrypted communication between the card and, ultimately, the card processor that results in increased security. In short, the attack vectors used in recent breaches are difficult, if not impossible, to pull off with these new readers. Since the information is not decrypted until it hits the card processor, attackers can't simply skim the information at the card reader. There are, of course, other attacks, though these have not yet proven widespread.

At its heart, though, chip and pin only "fixes" one type of credit card transaction: card-present transactions. That is, transactions in which the card holder physically scans their card via a card reader. The other type of transaction, card-not-present transactions, is unaffected by chip and pin. In fact, the move to chip and pin may put online transactions at greater risk. With brick and mortar attacks gone, attackers will move to online retailers. Despite the standard SSL encryption used between shoppers and online retailers, there are plenty of ways to steal credit card data. In fact, one might argue that a single attack could net more card numbers in a shorter time since online retailers often store credit card data as a convenience for the user.

It seems that online fraud, though expected, is being largely ignored for the moment. After all, how are we going to protect that data without supplying card readers to every online shopper? Online solutions such as PayPal, Amazon Payments, and others mitigate this problem slightly, but we still have to rely on the security they've put in place to protect cardholder data. Other solutions such as Apple Pay and Google Wallet seemingly combine online and offline protections, but the central data warehouse remains. The problem seems to be the security of the card number itself. And losing this data can be a huge burden for many users as they have to systematically update payment information as the result of a possible breach. This can often lead to late payments, penalties, and more.

One possible alternative is to reduce the impact a single breach can cause. What if the data that retailers stored was of little or no value to an attacker while still allowing the retailer a way to simplify payments for the shopper? What if a breach at a retailer only affected that retailer and resulted in virtually no impact on the user? A solution like this may be just what we need.

Instead of providing a retailer your credit card number and CVV, the retailer is provided a simple token. That token, coupled with a private retailer-specific token, should be all that is needed to verify a transaction. Tokens can and should be different for each retailer. If a retailer is compromised, new to[...]
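To make the idea concrete, here is a minimal sketch of how such per-retailer tokens might work. It is purely illustrative: the class, the HMAC-based verification, and the retailer ID are my assumptions, not a description of any real payment network's API.

```python
# Minimal, illustrative sketch of the per-retailer token idea described above.
# The names, the HMAC-based verification, and the flow are assumptions for
# demonstration only, not a real payment network API.
import hashlib
import hmac
import secrets

class PaymentNetwork:
    """Stands in for the card processor, the only party holding real card numbers."""

    def __init__(self):
        self._cards = {}           # token -> real card number (processor side only)
        self._retailer_keys = {}   # retailer id -> retailer-specific secret

    def register_retailer(self, retailer_id: str) -> bytes:
        key = secrets.token_bytes(32)
        self._retailer_keys[retailer_id] = key
        return key

    def issue_token(self, card_number: str) -> str:
        token = secrets.token_urlsafe(16)   # opaque; derives nothing from the card number
        self._cards[token] = card_number
        return token

    def verify(self, retailer_id: str, token: str, signature: bytes) -> bool:
        key = self._retailer_keys.get(retailer_id)
        if key is None or token not in self._cards:
            return False
        expected = hmac.new(key, token.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

# The retailer's database holds only the token; stealing it is useless at any
# other retailer and reveals nothing about the underlying card number.
network = PaymentNetwork()
retailer_key = network.register_retailer("shop-123")        # hypothetical retailer id
stored_token = network.issue_token("4111111111111111")      # standard test card number

# At checkout, the retailer signs its stored token with its own secret.
signature = hmac.new(retailer_key, stored_token.encode(), hashlib.sha256).digest()
print(network.verify("shop-123", stored_token, signature))  # True
```

In a scheme like this, a breach at one retailer would be contained: the processor could simply issue that retailer a new key and fresh tokens, which is the containment property the post is arguing for.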

Bleeding Heart Security
Unless you've been living under a rock the past few days, you've probably heard about the Heartbleed vulnerability in OpenSSL that was disclosed on Monday, April 7th. Systems and network administrators across the globe have spent the last few days testing for this vulnerability, patching systems, and probably rocking in the corner while crying. Yes, it's that bad. What's more, there are a number of reports that intelligence agencies may have known about this vulnerability for some time now.

The quick and dirty is that a missing bounds check in the code allows an attacker to remotely read memory of an affected system in 64k chunks. The only memory accessible to an attacker would be memory used by the process being connected to, but, depending on the process, there may be a LOT of useful data in there. For instance, Yahoo was leaking usernames and passwords until late Tuesday evening. The fabulous web comic, xkcd, explains how the attack works in layman's terms. If you're interested in the real nitty gritty of this vulnerability, though, there's an excellent write-up on the IOActive Labs blog. If you're the type that likes to play, you can find proof-of-concept code here. And let's not forget about the client side, there's PoC code for that as well.

OpenSSL versions 1.0.1 through 1.0.1f as well as the 1.0.2 beta code are affected. The folks at OpenSSL released version 1.0.1g on Monday which fixed the problem. Or, at least, the current problem. There's a bit of chatter about other issues that may be lurking in the OpenSSL codebase.

Now that a few days have passed, however, what remains to be done? After all, everyone has patched their servers, right? Merely patching doesn't make the problem disappear, though. Vulnerable code is out there and mistakes can be made. For the foreseeable future, you should be regularly scanning your network for vulnerable systems with something like Nmap. The Nmap NSE for Heartbleed scanning is already available. Alternatively, you can use something like Nagios to regularly check your existing servers.

Patching immediately may not have prevented a breach, either. Since Heartbleed doesn't leave much of a trace beyond some oddities that your IDS may have seen, there's virtually no way to know if anything has been taken. The best way to deal with this is to just go ahead and assume that your private keys are compromised and start replacing them. New keys, new certs. It's painful, it's slow, but it's necessary.

For end users, the best thing you can do is change your passwords. I'm not aware of any "big" websites that have not patched by now, so changing passwords should be relatively safe. However, that said, Wired and Engadget have some of the best advice I've seen about this. In short, change your passwords today, then change them again in a few weeks. If you're really paranoid, change them a third time in about a month. By that time, any site that is going to patch will have already patched.

Unfortunately, I think the fun is just beginning. I expect we'll start seeing a number of related attacks. Phishing attacks are the most likely in the beginning. If private keys were compromised, then attackers can potentially impersonate websites, including their SSL certificates. This would likely involve a DNS poisoning attack, but could also be accomplished by compromising a user's local system and setting a hosts file entry. Certificate revocation is a potential defense against this, but since many browsers have CRL checks disabled by default, it probably won't help.

Users will have to watch what they click, where they go, and what software they run. Not much different from the advice given already. Another possible source of threats is consumer devices. As Bruce Schneier put it, "An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn't going to be f[...]
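Since the post recommends regularly scanning your network with Nmap's ssl-heartbleed NSE script, here is a rough sketch of what that could look like with a little automation wrapped around it. The host list is hypothetical, and it assumes the nmap binary and its script library are installed and on the PATH.

```python
# Rough sketch: drive Nmap's ssl-heartbleed NSE script across a handful of hosts
# and flag anything reported as vulnerable. Host names are hypothetical; assumes
# nmap and its scripts are installed locally.
import subprocess

HOSTS = ["www.example.com", "mail.example.com", "vpn.example.com"]  # hypothetical inventory

def heartbleed_vulnerable(host: str, port: int = 443) -> bool:
    result = subprocess.run(
        ["nmap", "-p", str(port), "--script", "ssl-heartbleed", host],
        capture_output=True,
        text=True,
        check=False,
    )
    # The NSE script prints a "VULNERABLE" state line for affected services.
    return "VULNERABLE" in result.stdout

if __name__ == "__main__":
    for host in HOSTS:
        flag = "VULNERABLE" if heartbleed_vulnerable(host) else "ok (patched, or port closed)"
        print(f"{host}: {flag}")
```

Dropping something like this into a cron job, or feeding the same check into Nagios as the post suggests, keeps newly deployed or forgotten systems from quietly reintroducing the bug.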

Looking into the SociaVirtualistic Future
Let's get this out of the way. One of the primary reasons I'm writing this is in response to a request by John Carmack for coherent commentary about the recent acquisition of Oculus VR by Facebook. My hope is that he does, in fact, read this and maybe drop a comment in response. Hi John! I've been a huge Carmack fan since the early id Software days, so please excuse the fanboyism. And I *just* saw the news that Michael Abrash has joined Oculus as well, which is also incredibly exciting. Abrash is an Assembly GOD.

OK, on to the topic at hand. The Oculus Rift is a VR headset that got its public start with a Kickstarter campaign in September of 2012. It blew away its meager goal of $250,000 and raked in almost $2.5 million. For a mere $275 and some patience, contributors would receive an unassembled prototype of the Oculus Rift. Toss in another $25 and you received an assembled version.

But what is the Oculus Rift? According to the Kickstarter campaign:

Oculus Rift is a new virtual reality (VR) headset designed specifically for video games that will change the way you think about gaming forever. With an incredibly wide field of view, high resolution display, and ultra-low latency head tracking, the Rift provides a truly immersive experience that allows you to step inside your favorite game and explore new worlds like never before.

In short, the Rift is the culmination of every VR lover's dreams. Put a pair of these puppies on and magic appears before your eyes.

For myself, the Rift was interesting, but probably not something I could ever use. Unfortunately, I suffer from Amblyopia, or Lazy Eye as it's commonly called. I'm told I don't see 3D. Going to 3D movies pretty much confirms this for me since nothing ever jumps out of the screen. So as cool as VR sounds to me, I would miss out on the 3D aspect. Though it might be possible to "tweak" the headset and adjust the angles a bit to force my eyes to see 3D. I'm not sure if that's good for my eyes, though.

At any rate, the Rift sounds like an amazing piece of technology. In the past year I've watched a number of videos demonstrating the capabilities of the Rift. From the Hak5 crew to Ben Heck, the reviews have all been positive.

And then I learned that John Carmack joined Oculus. I think that was about the time I realized that Oculus was the real deal. John is a visionary in so many different ways. One can argue that modern 3D gaming is due in large part to the work he did in the field. In more recent years, his visions have aimed a bit higher with his rocket company, Armadillo Aerospace. Armadillo started winding down last year, right about the time that John joined Oculus, leaving him plenty of time to deep dive into a new venture.

For anyone paying attention, Oculus was recently acquired by Facebook for a mere $2 billion. Since the announcement, I've seen a lot of hatred being tossed around on Twitter. Some of this hatred seems to come from Kickstarter backers who are under some sort of delusion that makes them believe they have a say in anything they back. I see this a lot, especially when a project is taking longer than they believe it should.

I can easily write several blog posts on my personal views about this, but to sum it up quickly: if you back a project, you're contributing to make something a reality. Sometimes that works, sometimes it doesn't. But Kickstarter clearly states that you're merely contributing financial backing, not gaining a stake in a potential product and/or company. Nor are you guaranteed to receive the perks you've contributed towards. So suck it up and get over it. You never had control to begin with.

I think Notch, of Minecraft fame, wrote a really good post about his feelings on the subject. I think he has his head right. He contributed, did his part, and though it's not working[...]

Keepin' TCP Alive
I was debugging an odd network issue lately that turned out to have a pretty simple explanation. A client on the network was intermittently experiencing significant delays in accessing the network. Upon closer inspection, it turned out that prior to the delay, the client was being left idle for long periods of time. With this additional information it was pretty easy to identify that there was likely a connection between the client and server that was being torn down for being idle.

So in the end, the cause of the problem itself was pretty simple to identify. The fix, however, is more of a conundrum. The obvious answer is to adjust the timers and prevent the connection from being torn down. But what timers should be adjusted? There are the keepalive timers on the client, the keepalive timers on the server, and the idle teardown timers on the firewall in the middle.

TCP keepalive handling varies between operating systems. If we look at the three major operating systems, Linux, Windows, and OS X, then we can make the blanket statement that, by default, keepalives are sent after two hours of idle time. But most firewalls seem to have a default TCP teardown timer of one hour. These defaults are not conducive to keeping idle connections alive. The optimal scenario for timeouts is for the clients to have a keepalive timer that fires at an interval lower than that of the idle TCP timeout on the firewall.

The actual values to use, as well as which devices should be changed, is up for debate. The firewall is clearly the easier point at which to make such a change. Typically there are very few firewall devices that would need to be updated as compared to the larger number of client devices. Additionally, there will likely be fewer firewalls added to the network over time, so ensuring that timers are properly set is much easier. On the other hand, the defaults that firewalls are generally configured with have been chosen specifically by the vendor for legitimate reasons. So perhaps the clients should conform to the setting on the firewall? What is the optimal solution?

And why would we want to allow idle connections anyway? After all, if a connection is idle, it's not being used. Clearly, any application that needed a connection to remain open would send some sort of keepalive, right? Is there a valid reason to allow these sorts of connections for an extended period of time?

As it turns out, there are valid reasons for connections to remain active, but idle. For instance, database connections are often kept open for longer periods of time for performance purposes. The TCP handshake can take a considerable amount of time to perform as opposed to the simple matter of retrieving data from a database. So if the database connection remains established, additional data can be retrieved without the overhead of TCP setup. But in these instances, shouldn't the application ensure that keepalives are sent so that the connection is not prematurely terminated by an idle timer somewhere along the data path? Well, yes. Sort of. Allow me to explain.

When I first discovered the source of the network problem we were seeing, I chalked it up to lazy programming. While it shouldn't take much to add a simple keepalive system to a networked application, it is extra work. As it turns out, however, the answer isn't quite that simple. All three major operating systems, Windows, Linux, and OS X, have kernel-level mechanisms for TCP keepalives. Each OS has a slightly different take on how keepalive timers should work.
Linux has three parameters related to TCP keepalives:

tcp_keepalive_time - The interval between the last data packet sent (simple ACKs are not considered data) and the first keepalive probe; after the connection is marked to need keepalive, this counter is not used any further.

tcp_k[...]
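As a concrete illustration of an application opting in to the kernel's keepalive machinery, here is a small sketch for a client socket. The interval values and the internal host name are assumptions, chosen only so that probes fire well inside a one-hour firewall teardown window; TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific socket options.

```python
# Minimal sketch: enable TCP keepalives on a client socket and tune them so probes
# start well before a typical one-hour firewall idle-teardown timer. Values and the
# destination host are illustrative assumptions.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)   # turn keepalives on

# Linux-specific tuning (guarded so the sketch still runs on other platforms).
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)   # idle seconds before first probe
if hasattr(socket, "TCP_KEEPINTVL"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)   # seconds between probes
if hasattr(socket, "TCP_KEEPCNT"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # failed probes before giving up

# Hypothetical long-lived database connection that would otherwise sit idle.
sock.connect(("db.example.internal", 5432))
```

With settings like these, the client sends its first probe after ten minutes of idle time, comfortably inside the firewall's timer, so the firewall sees periodic traffic and keeps the state entry alive.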

Becoming your own CA
SSL, as I mentioned in a previous blog entry, has some issues when it comes to trust. But regardless of the problems with SSL, it is a necessary part of the security toolchain. In certain situations, however, it is possible to overcome these trust issues.

Commercial providers are not the only entities that are capable of being a Certificate Authority. In fact, anyone can become a CA and the tools to do so are available for free. Becoming your own CA is a fairly painless process, though you might want to brush up on your OpenSSL skills. And lest you think you can just start signing certificates and selling them to third parties, it's not quite that simple. The well-known certificate authorities have worked with browser vendors to have their root certificates added as part of the browser installation process. You'll have to convince the browser vendors that they need to add your root certificate as well. Good luck.

Having your own CA provides you the means to import your own root certificate into your browser and use it to validate certificates you use within your network. You can use these SSL certificates for more than just websites as well. LDAP, RADIUS, SMTP, and other common applications use standard SSL certificates for encrypting traffic and validating remote connections. But as mentioned above, be aware that unless a remote user has a copy of your root certificate, they will be unable to validate the authenticity of your signed certificates.

Using certificates signed by your own CA can provide you that extra trust level you may be seeking. Perhaps you configured your mail server to use your certificate for the POP and IMAP protocols. This makes it more difficult for an attacker to masquerade as either of those services without obtaining your signing certificate so they can create their own. This is especially true if you configure your mail client such that your root certificate is the only certificate that can be used for validation.

Using your own signed certificates for internal, non-public facing services provides an even better use-case. Attacks such as DNS cache poisoning make it possible for attackers to trick devices into using the wrong address for an intended destination. If these services are configured to only use your certificates and reject connection attempts from peers with invalid certificates, then attackers will only be able to impersonate the destination if they can somehow obtain a valid certificate signed by your signing certificate.

Sound good? Well, how do we go about creating our own root certificate and all the various machinery necessary to make this work? Fortunately, all of the necessary tools are open-source and part of most Linux distributions. For the purposes of this blog post, I will be explaining how this is accomplished using the CentOS 6.x Linux distribution. I will also endeavor to break down each command and explain what each parameter does. Much of this information can be found in the man pages for the various commands.

OpenSSL is installed as part of a base CentOS install. Included in the install is a directory structure in /etc/pki. All of the necessary tools and configuration files are located in this directory structure, so instead of reinventing the wheel, we'll use the existing setup. To get started, edit the default openssl.cnf configuration file. You can find this file in /etc/pki/tls. There are a few options you want to change from their defaults. Search for the following headers and change the options listed within.
[CA_default]
default_md = sha256

[req]
default_bits = 4096
default_md = sha256

default_md: This option defines the default message digest to use. Switching this to sha256 results in a stronger message digest being used.

default_bits: This option defines the defaul[...]
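The rest of the post (truncated here) walks through the openssl commands themselves. Purely as an illustration of what the root-CA creation step produces, here is a sketch of the equivalent using Python's cryptography package rather than the openssl CLI the post uses; the CA name, key size, and ten-year validity are assumptions that simply mirror the config values above.

```python
# Illustrative sketch only: build a self-signed root CA key and certificate with
# the Python "cryptography" package. The post itself uses the openssl CLI; the
# name, key size, and validity period here are assumptions.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Home Lab Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                      # a root CA is self-issued,
    .issuer_name(name)                       # so subject and issuer match
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())              # sha256, matching the openssl.cnf change above
)

with open("rootCA.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("rootCA.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```

The rootCA.crt produced this way is the file you would import into browsers and clients, while rootCA.key is the signing key that must be guarded carefully; everything that follows in the post hangs off those two artifacts.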

SSL "Security"
SSL, a cryptographically secure protocol, was created by Netscape in the mid-1990s. Today, SSL, and its replacement, TLS, are used by web browsers and other programs to create secure connections between devices across the Internet. SSL provides the means to cryptographically secure a tunnel between endpoints, but there is another aspect of security that is missing: trust. While a user may be confident that the data received from the other end of the SSL tunnel was sent by the remote system, the user cannot be confident that the remote system is the system it claims to be.

This problem was partially solved through the use of a Public Key Infrastructure, or PKI. PKI, in a nutshell, provides the trust structure needed to make SSL secure. Certificates are issued by a certificate authority, or CA. The CA cryptographically signs the certificate, enabling anyone to verify that the certificate was issued by the CA. Other PKI constructs offer validation of the registrant, indexing of the public keys, and a key revocation system. It is within these other constructs that the problems begin.

When SSL certificates were first offered for sale, the CAs spent a great deal of time and energy verifying the identity of the registrant. Often, paper copies of the proof had to be sent to the CA before a certificate would be issued. The process could take several days. More recently, the bar for entry has been lowered significantly. Certificates are now issued through an automated process requiring only that the registrant click on a link sent to one of the email addresses listed in the Whois information. This lack of thorough verification has significantly eroded the trust a user can place in the authenticity of a certificate.

CAs have responded to this problem by offering different levels of SSL certificates. Entry level certificates are verified automatically via the click of a link. Higher level SSL certificates have additional identity verification steps. And at the highest level, the Extended Validation, or EV, certificate requires a thorough verification of the registrant's identity. Often, these different levels of SSL certificates are marketed as stronger levels of encryption. The reality, however, is that the level of encryption for each of these certificates is exactly the same. The only difference is the amount of verification performed by the CA.

Despite the extra level of verification, these certificates are almost indistinguishable from one another. With the exception of EV certificates, the only noticeable difference between differing levels of SSL certificates are the identity details obtained before the certificate is issued. An EV certificate, on the other hand, can only be obtained from certain vendors, and shows up in a web browser with a special green overlay. The intent here seems to be that websites with EV certificates can be trusted more because the identity of the organization running the website was more thoroughly validated.

In the end, though, trust is the ultimate issue. Users have been trained to just trust a website with an SSL certificate, and to trust sites with EV certificates even more. In fact, there have been a number of marketing campaigns targeted at convincing users that the "Green Address Bar" means that the website is completely trustworthy. And they've been pretty effective. But, as with most marketing, they didn't quite tell the truth. Sure, the EV certificate may mean that the site is more trustworthy, but it's still possible that the certificate is fake.

There have been a number of well-known CAs that have been compromised in recent years, DigiNotar and Comodo being two of the more high-profile ones. In both cases, it became possible for rogue certificates to be created for any we[...]
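To see where that trust actually lives in practice, here is a small sketch: a client accepts a server's certificate only because some root CA already bundled in its local trust store signed the chain, which is exactly why a compromised CA is so damaging. The host name is just an example.

```python
# Small illustration of where the trust actually lives: the client accepts the
# server's certificate only because a CA already present in the local trust
# store vouched for it. The host name is just an example.
import socket
import ssl

context = ssl.create_default_context()   # loads the system's bundled root CAs

with socket.create_connection(("www.example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
        cert = tls.getpeercert()
        print("issuer: ", cert["issuer"])   # the CA vouching for the site
        print("subject:", cert["subject"])  # the identity that CA validated
```

Nothing in that exchange tells the user how carefully the issuer checked the subject's identity, which is the gap the different validation levels, and their marketing, try to paper over.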

BSides Delaware 2013
The annual BSides Delaware conference took place this past weekend, November 8th and 9th. BSides Delaware is a free, community-driven security event that takes place at the Wilmington University New Castle campus. The community is quite open, welcoming seasoned professionals, newcomers, curious individuals, and even children. There were a number of families who attended, bringing their children with them to learn and have fun.

I was fortunate enough to be able to speak at last year's BSides and was part of the staff for this year's event. There were two tracks for talks, many of which were recorded and are already online thanks to Adrian Crenshaw, the IronGeek. Adrian has honed his video skills and was able to have every recording online by the closing ceremonies on Saturday evening.

In all there were more than 25 talks over the course of two days covering a wide variety of topics: logging, Bitcoins, forensics, and more. While most speakers were established security professionals, there were a few new speakers striving to make a name for themselves.

This year also included a FREE wireless essentials training class. The class was taught by a team of world-class instructors including Mike Kershaw (drag0rn), author of the immensely popular Kismet wireless tool, Russell Handorf from the FBI Cyber Squad, and Rick Farina, lead developer for Pentoo. The class covered everything from wireless basics to software-defined radio hacking. An absolutely amazing class.

In addition to the talks, BSides also featured not one, but two lockpick villages. Both Digital Trust and Toool were present. The lockpick villages were a big hit with seasoned professionals as well as the very young. It's amazing to see how adept a young child can be with a lockpick.

Hackers for Charity was present as well with a table of goodies for sale. They also held a silent (and not so silent) auction where all proceeds went to the charity. Hackers for Charity raises money to help with a variety of projects they engage in across the world. From their website:

We employ volunteer hackers and technologists through our Volunteer Network and engage their skills in short projects designed to help charities that cannot afford traditional technical resources. ... We've personally witnessed how one person can have a profound impact on the world. By giving of their skills, time and talent our volunteers are profoundly impacting the world, one "hacker" at a time.

BSides 2013 was an amazing experience. This was my second year at the conference and it's amazing how it has grown. The dates for BSidesDE 2014 have already been announced, November 14th and 15th. Mark your calendars and make an effort to come join in the fun. It's worth it. [...]

Pebble Review
In April of 2012, a Kickstarter project was launched by a company aiming to create an electronic watch that served as a companion to your smartphone. A month later, the project exceeded its funding goal by over 100%, closing at over $10 million in pledges. Happily, I was one of the over 68,000 people who pledged. I received my Pebble about a month ago or so and I've been wearing it ever since.

The watch itself is fairly simple: a rectangular unit with an e-ink display, four buttons, and a rubberized plastic strap. The screen resolution is 144x168, plenty of pixels for some fairly impressive detail. The watch communicates with your mobile phone (Android or iPhone only) via a bluetooth connection. All software updates and app installation occur over the bluetooth connection. There is a 3-axis accelerometer as well as a pretty standard vibrating motor for silent alerts.

According to the official Pebble FAQ, battery life is 7+ days on a single charge, but this depends on your overall use of the device. The more alerts you receive, the more the backlight comes on, and the more apps you use on the device, the shorter your battery life.

Pebble is still in the process of building the initial run of watches for backers. Black watches, being the majority of the orders, were built first. Other colors are coming online in more recent weeks. Pebble has a website where interested parties can track how many Pebbles have been built and shipped.

I've been pretty impressed with the watch thus far. Pebble has been fairly responsive to inquiries I've made, and they seem dedicated to making sure they have a top quality product. Of course, as is typical on the Internet, not everyone is happy. There seem to be a lot of complaints about communication, how long it's taking to get watches, and about the features themselves.

It's hard to say whether these complaints have any merit, though. For starters, I can't imagine it's a simple task to design and build 68,000 watches in a short period of time. And to complicate matters further, it seems that many backers of Kickstarter projects don't understand the difference between being a backer and being a customer.

When you back a Kickstarter project, you're pledging money to help start the project. As a "reward" for contributing, if the project is successful, you are entitled to whatever the project owners have designated for your level of contribution. The key part of this being, if the project is successful. Some projects take longer than others, and timelines often slip. That said, I've only been part of one Kickstarter that has failed, and even that one is being resurrected by other interested parties.

But there are some legitimate complaints, some that can be addressed, and others that likely won't be. For instance, I've noticed that with recent firmware releases, the battery life on my watch has dropped considerably. Based on communication with the developers, they are aware of this and are actively working to resolve it. I'm not sure what the problem is, exactly, but I'm confident they'll have it fixed in the next firmware update.

The battery indicator is a source of frequent discussion. Right now, there's no indicator of battery life until the battery is running low. And that indicator doesn't show on the watchface, it only shows when you are in other menus. This, in my opinion, is a poor UI choice. I'd much rather see a battery indicator option available for the watchface itself.

Menu layout was also a frequent source of frustration for users. In previous firmware releases, you had to actively go to the watchface you wanted. Recent releases changed this so that the watchface was the default view and other screens were chosen as needed. The[...]

Customer Dis-Service
In general, I'm a pretty loyal person, especially when it comes to material things. I typically find a vendor I like and stick with them. Sure, if something new and flashy comes along, I'll take a look, but unless there's a compelling reason to change, I'll stick with what I have.

But sometimes a change is forced upon me. Take, for instance, this last week. I've been a loyal Verizon customer for … wow, about 15 years or so. Not sure I realized it had been that long. Regardless, I've been using Verizon's services for a long time. I've been relatively happy with them, no major complaints about services being down or getting the runaround on the phone. In fact, my major gripe with them had always been their online presence, which seemed to change from month to month. I've had repeated problems with trying to pay bills, see my services, etc. But at the end of the day, I've always been able to pay the bill and move on. Since that's really the only thing I used their online service for, I was content to leave well enough alone.

In more recent months, we've been noticing that the 3M DSL service we had is starting to fall short. Not Verizon's fault at all, but the fault of an increased strain on the system at our house. Apparently 3M isn't nearly enough bandwidth to satisfy our online hunger. That, coupled with the price we were paying, had me looking around for other services. Verizon still doesn't offer anything faster than 3M in the area and, unfortunately, the only other service in the area is from a company that I'd rather not do business with if I could avoid it.

In the end, I thought perhaps I could make some slight changes and at least reduce the monthly bill by a little until we determined a viable solution. I was considering adding a second DSL line, connected to a second wireless router, to relieve the tension a bit. This would allow me to avoid that other company and provide the bandwidth we needed. My wife and I could enjoy our own private upstream and place the rest of the house on the other line.

Ok, I thought, let's dig into this a bit. First things first, I decided to get rid of the home phone, or at least transfer it to a cheaper solution. My cell provider offered a $10/month plan for home phones. Simple process: port the number over, install this little box in the house, and poof, instant savings. Best part, that savings would be just about enough to get that second DSL line.

Being cautious, and not wanting to end up without a DSL connection, I contacted Verizon. Having worked for a telco in the past, I knew that some telcos required that you have a home phone line in order to have DSL service. This wasn't a universal truth, however, and it was easy enough to verify. The first call to Verizon went a little sideways, though. I ended up in an automated system. Sure, everyone uses these automated systems nowadays, but I thought this one was particularly condescending. They added additional sound effects to the prompts so that when you answered a question, the automated voice would acknowledge your request and then type it in. TYPE IT IN. I don't know why, but this drove me absolutely crazy. Knowing that I was talking to a recorded voice and then having that recorded voice playing sounds like they were typing on a keyboard? Infuriating. And, on top of it, I ended up in some ridiculous loop where I couldn't get an operator unless I explicitly stated why I wanted an operator, but the automated system apparently couldn't understand my request.

Ok, time out, walk away, try again later.
The second time around, I lied. I ended up in sales, so it seems to have worked. I explained to the lady on the phone what I was looking for. I w[...]

Programming Note

In 2012 I posted a little over a dozen entries to this blog. I like to think that each entry was well thought out and time well spent. But only a dozen? That's about one entry a month... I'd really like to do more.

So, new year, time to make some changes. I spent a lot of time judging whether each post was "worth the effort" and "long enough to matter." I need to get past that. My goal is to start posting a number of smaller entries. I definitely want the quality to be there, but I want to avoid agonizing over each and every entry.

So here's to a new year and more content!