Subscribe: The Falcon's View
http://www.secureconsulting.net/atom.xml



The Falcon's View



Mental meanderings of an infosec obsessive...



Updated: 2018-03-12T13:15:15Z

 



Measure Security Performance, Not Policy Compliance

2018-03-12T13:15:15Z

I started my security (post-sysadmin) career heavily focused on security policy frameworks. That focus took me down many roads, but everything always came back to a few simple notions: policies were a means of articulating security direction, you had to prescriptively articulate desired behaviors, and the more detail you could put into the guidance (such as in standards, baselines, and guidelines), the better off the organization would be. Except, of course, that in the real world nobody ever took the time to read the more detailed documents, Ops and Dev teams really didn't like being told how to do their jobs, and, at the end of the day, I was frequently reminded that publishing a policy document didn't translate to implementation.

Subsequently, I've spent the past 10+ years thinking about better ways to tackle policies, eventually reaching the point where I believe "less is more" and that anything written and published in a place and format that isn't "work as usual" will rarely, if ever, get implemented without a lot of downward force applied.

I've seen both good and bad policy frameworks within organizations. Often they cycle between good and bad. Someone will build a nice policy framework, it'll get implemented in a number of key places, and then it will languish from neglect and inadequate upkeep until it's irrelevant and ignored. This is not a recipe for lasting success. Thinking about it further this week, it occurred to me that part of the problem is thinking in the old "compliance" mindset. Policies are really to blame for driving us down the checkbox-compliance path.
Sure, we can easily stand back and try to dictate rules, but without adequate authority to enforce them, and without the resources needed to continually update them, they're doomed to obsolescence. Instead, we need to move to that "security as code" mentality and find ways to directly codify requirements in ways that are naturally adapted and maintained.

End Dusty Tomes and (most) Out-of-Band Guidance

The first daunting challenge of security policy framework reform is to throw away the old, broken approach with as much gusto and finality as possible. Yes, there will always be a need for certain formally documented policies, but overall an organization Does. Not. Need. large amounts of dusty tomes providing out-of-band guidance to a non-existent audience. Now, note a couple things here. First, there is a time and a place for providing out-of-band guidance, such as via direct training programs. However, it should be the minority of guidance, and wherever possible you should seek to codify security requirements directly into systems, applications, and environments. For a significant subset of security practices, it turns out we do not need to repeatedly consider whether or not something should be done; we can instead make the decision once and then roll it out everywhere as necessary and appropriate.

Second, we have to realize and accept that traditional policy (and related) documents serve only a formal purpose, not a practical or pragmatic one. Essentially, the reason you put something into writing is because a) you're required to do so (such as by regulations), or b) you're driven to do so due to ongoing infractions or the inability to directly codify requirements (for example, requirements on human behavior). What this leaves you with are requirements that can be directly implemented and that are thus easily measurable.

KPIs as Policies (et al.)
If the old ways aren't working, then it's time to take a step back and think about why that might be and what might be better going forward. I'm convinced the answer to this query lies in stretching the "security as code" notion a step further by focusing on security performance metrics for everything and[...]
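The "codify requirements directly" idea above can be sketched in a few lines. This is a hypothetical illustration, not the author's implementation: a policy rule expressed as an executable check over system configs, with a fleet-wide compliance percentage as the resulting performance metric. All field names and thresholds here are invented.

```python
# Hypothetical "policy as code" sketch: a security requirement expressed as
# an executable check over system configs, plus a compliance KPI derived
# from it. Field names and thresholds are invented for illustration.

def check_password_policy(config):
    """Return a list of violations for one system's config dict."""
    violations = []
    if config.get("min_password_length", 0) < 12:
        violations.append("min_password_length below 12")
    if not config.get("mfa_required", False):
        violations.append("mfa_required is not enabled")
    return violations

def compliance_kpi(configs):
    """Percentage of systems with zero violations: a measurable KPI."""
    passing = sum(1 for c in configs if not check_password_policy(c))
    return 100.0 * passing / len(configs)

fleet = [
    {"min_password_length": 14, "mfa_required": True},
    {"min_password_length": 8, "mfa_required": True},
]
print(compliance_kpi(fleet))  # 50.0
```

The point of the sketch is the shape, not the rules: once a requirement lives in code, it is enforced and measured the same way everywhere, with no dusty tome required.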



The Thankless Life of Analysts

2018-01-25T21:09:36Z

There are shenanigans afoot, I tell ya; shenanigans! I was recently contacted by an intermediary asking if I'd be interested in writing a paid blog post slamming analysts, to be published on my own blog site and then promoted by the vendor. No real details were given other than the expectation to slam analyst firms, but once I learned who was funding the initiative, it became pretty clear what was going on. Basically, this vendor has received, or is about to receive, a less-than-stellar review and rating from one of the analyst firms, and they're trying to get out in front of the news by proactively discrediting analyst reports. My response to the offer was to decline, and now, as I hear that some may take up the opportunity, I've decided it's time to get out ahead of this potential onslaught of misleading propaganda.

Mind you, I'm not a huge fan of the analyst firms, and I found myself incredibly frustrated and disappointed during my time at Gartner when I was constantly told to write about really old and boring topics rather than being allowed to write more progressive reports that would actually help move the industry forward. But I'll get to that in a moment...

How Research Happens

First and foremost, let's talk about how research happens. I can only speak from my direct experience at Gartner, but my understanding is that it's not too dissimilar at other organizations. Also, let me provide a disclaimer or three: I don't work for Gartner any more, I never wrote Magic Quadrant reports, and I really have nothing to personally gain from defending or attacking them. As always, my goal here is to be fair and balanced (for real, not in a Fox News or CNN kinda way). So, research...
There are a lot of different kinds of reports, but let's talk a little bit about the biggies that everyone sees (e.g., MQs, Waves, etc.). These reports typically combine primary research (customer surveys and interviews) with analysis of open source information and interviews with the vendors themselves. One of the first challenges is defining a market niche, followed by identifying vendors in the niche, and then conducting the research. For vendors who whine about being excluded from the research process: part of this is your fault for not making yourself known to analysts, and part of it is falling victim to arbitrary market segment rules created to keep the pool of rated vendors to a reasonable size (think, for example, of the old, retired Gartner MQ for EGRC, which easily could have included hundreds of vendors and so was arbitrarily pared down to 20 or so that no one in their right mind would ever put on a comparative chart).

Customer surveys (references) generally play a huge role in information collection, and the pros and cons highlighted in those surveys and interviews will often get incorporated directly into reports. Which is to say, keep your customers happy and be sure any customer references you provide aren't going to trash you! Beyond that, it then comes down to an analyst's judgment (and biases) in terms of how they score you. The scoring is consistent across all vendors included in a report insomuch as the same criteria are used for everyone. Contrary to popular mythos, analysts are not compensated for sales, renewals, etc., and that is precisely to keep the analysts as neutral and objective as possible. That said, I've noted that it's very common for larger vendors to buy lots of analyst time in order to increase their exposure to the analyst(s) covering them. This has included off-site analyst information seminars in sunny locations, which I've always found a bit suspect.
The bottom line here, though, is that if there's an image of bias and favoritism toward larger, richer vendors, it's at least partly correct insomuch as th[...]



Design For Behavior, Not Awareness

2017-10-31T15:32:15Z

October was National Cybersecurity Awareness Month. Since today is the last day, I figured now is as good a time as any to take a contrarian perspective on what undoubtedly many organizations just did over the past few weeks; namely, wasted a lot of time, money, and good will. Most security awareness programs and practices are horrible BS. This extends to many practices heavily promoted by the likes of SANS, as well as the current state of "best" (aka, failing miserably) practices. We shouldn't, however, be surprised that it's all a bunch of nonsense. After all, awareness budgets are tiny, the people running these programs tend to be poorly trained and uneducated, and in general there's a ton of misunderstanding about the point of these programs (besides checking boxes).

To me, there are three kinds of security awareness and education objectives:
1) Communicating new practices
2) Addressing bad practices
3) Modifying behavior

The first two areas really have little to do with behavior change so much as they're about communication. The only place where behavior design comes into play is when the secure choice isn't the easy choice, and thus you have to build a different engagement model. Only the third objective is primarily focused on true behavior change.

Awareness as Communication

The vast majority of so-called "security awareness" practices are merely focused on communication. They tell people "do this" or "do that" or, when done particularly poorly, "you're doing X wrong, idiots!" The problem is that, while communication is important and necessary, rarely are these projects approached from a behavior design perspective, which means nobody is thinking about effectiveness, let alone how to measure for it.
Take, for example, communicating updated policies. Maybe your organization has decided to revise its password policy yet again (woe be to you!). You can undertake a communication campaign to let people know that this new policy is going into effect on a given date, and maybe even explain why the policy is changing. But that's about it. You're telling people something theoretically relevant to their jobs, but not much more. This task could be done just as easily by your HR or internal communication team as anyone else. What value is being added? Moreover, you're not trying to change a behavior here, because your "awareness" practice doesn't have any bearing on it; technical controls do! The password policy is implemented in IAM configurations and enforced through technical controls. There's no need for cognition by personnel beyond "oh, yeah, I now have to construct my password according to new rules." It's not like you're generally giving people the chance to opt out of the new policy, and there's no real decision for them to make. As such, the entire point of your "awareness" is communicating information, but without any requirement for people to make better choices.

Awareness as Behavior Design

The real role of a security awareness and education program should be designing for behavior change, then measuring the effectiveness of those behavior change initiatives. The most rudimentary example of this is the anti-phishing program. Unfortunately, anti-phishing programs also tend to be horrible examples because they're implemented completely wrong (e.g., failure to benchmark, failure to actually design for behavior change, failure to get desired positive results). Yes, behavior change is what we want, but we need to be judicious about which behaviors we're targeting and how we're going to get there.
I've had a strong interest in security awareness throughout my career, including having built and delivered awareness training and education programs in n[...]
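The "failure to benchmark" criticism above can be made concrete with a toy calculation. The numbers and function names below are invented for illustration; the technique is simply establishing a baseline phishing click rate before training and comparing a later campaign against it, so the program's effect is measurable rather than asserted.

```python
# Hedged sketch of "baseline, then measure" for an anti-phishing program.
# All campaign numbers here are invented for illustration.

def click_rate(clicked, delivered):
    """Fraction of delivered simulated phish that were clicked."""
    return clicked / delivered

def relative_improvement(baseline, current):
    """Positive value means the click rate dropped relative to baseline."""
    return (baseline - current) / baseline

baseline = click_rate(120, 1000)  # pre-training campaign: 12% clicked
current = click_rate(66, 1100)    # post-training campaign: 6% clicked
print(round(relative_improvement(baseline, current), 2))  # 0.5
```

Without the baseline measurement, the second campaign's 6% is just a number; with it, you can claim (and defend) a 50% reduction in click-through behavior.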



Incremental "Gains" Are Just Slower Losses

2017-10-29T16:20:17Z

Anton Chuvakin and I were having a fun debate a couple weeks ago about whether incremental improvements are worthwhile in infosec, or if it's really necessary to "jump to the next curve" (phrase origin: Guy Kawasaki's "Art of Innovation"; watch his TEDx talk) in order to make meaningful gains in security practices. Anton even went so far as to write about it a little over a week ago (sorry for the delayed response; work travel). As promised, I feel it's important to counter his arguments a bit.

Anton started by asking, "[Can] you really jump to the 'next curve' in security, or do you have to travel the entire journey from the old ways to the cutting edge?" Of course, this is a sucker's question, and it belies a misunderstanding of the whole "jump to the next curve" argument, which was conceived by Kawasaki in relation to innovation but can be applied to strategy in general. In speaking of the notion, Kawasaki says, "True innovation happens when a company jumps to the next curve, or better still, invents the next curve, so set your goals high." And this, here, is the point of arguing for organizations not to settle for incremental improvements, but instead to aim higher.

To truly understand this notion in context, let's first think about what the separate curves in a security practice vertical would be. Let's take Anton's example of SOCs, SIEM, log management, and threat hunting.
To me, the curves might look like this:
- You have no SOC, SIEM, or log management
- You start doing some logging, mostly locally
- You start logging to a central location and have a team monitor and manage it
- You build or hire a SOC to more efficiently monitor and respond to alerts
- You add in stronger analytics, automation, and threat hunting capabilities

Now, from a security perspective, if you're in one of the first couple stages today (and a lot of companies are!), then a small incremental improvement like moving to central logs might seem like a huge advance, but you'd be completely wrong. Logically, you're not getting much actual risk reduction by simply dumping all your logs into a central place unless you're also adding monitoring, analytics, and response+hunting capabilities at the same time! In this regard, "jump to the next curve" would likely mean hiring an MSSP to whom you can send all your log data in order to do analytics and proactive threat hunting. Doing so would provide a meaningful leap in security capabilities and would help an organization catch up. Moreover, even if you spent a year making this a reality, it's a year well spent, whereas a year spent simply enabling logs without sending them to a central repository for meaningful action doesn't really improve your standing at all.

In Closing

In the interest of keeping this shorter than usual, let's just jump to the key takeaways.

1) The point of "jump to the next curve" is to stop trying to "win" through incremental improvements of the old and broken, instead leveraging innovation to make up lost ground by skipping over short-term "gains" that cost you time without actually gaining anything.

2) The farther behind you are, the more important it is to look for curve-jumping opportunities to dig out of technical debt. Go read the DevOps literature on how to address technical debt, and realize that with incremental gains you're at best talking about maintaining your position, not actually catching up.
Many organizations are far behind today and cannot afford such an approach.

3) Attacks are continuing to rapidly evolve, which means your resilience relies directly on your agility and ability to make sizable gains in a short period of time. Again, borrowing from DevOps, it's past time to start leveraging automation, cloud services, and agile technique[...]
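The gap between merely centralizing logs and actually monitoring them can be illustrated with a trivial sketch. The event format and threshold below are assumptions, not any particular SIEM's schema; the point is that the analytics step, not the collection step, is what turns centralized data into a detection.

```python
# Sketch of why central logs alone add little risk reduction: nothing is
# detected until something analyzes the aggregated events. The event dict
# shape and the threshold of 3 are illustrative assumptions.

from collections import Counter

def failed_login_alerts(events, threshold=3):
    """Count failed logins per (host, user) pair and flag bursts."""
    fails = Counter(
        (e["host"], e["user"]) for e in events if e["action"] == "login_fail"
    )
    return [key for key, n in fails.items() if n >= threshold]

# Events aggregated from multiple hosts into one central store.
events = [
    {"host": "web1", "user": "root", "action": "login_fail"},
    {"host": "web1", "user": "root", "action": "login_fail"},
    {"host": "web1", "user": "root", "action": "login_fail"},
    {"host": "db1", "user": "alice", "action": "login_ok"},
]
print(failed_login_alerts(events))  # [('web1', 'root')]
```

A pile of events in one place is inert; the cross-host correlation above is the capability that "jumping the curve" to a monitored, analyzed pipeline actually buys you.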



A Change In Context

2017-09-22T14:41:09Z


Today marks the end of my first week in a new job. As of this past Monday, I am now a Manager, Security Engineering, with Pearson. I'll be handling a variety of responsibilities, initially mixed between security architecture and team management. I view this opportunity as a chance to reset my career after the myriad challenges of the past decade. In particular, I will now finally be able to say I've had administrative responsibility for personnel, the lack of which has held me back from career progression these past few years.

This change is a welcome one, and it will also be momentous in that it will see us leaving the NoVA/DC area next summer. The destination is not finalized, but it seems likely to be Denver. While it's not the same as being in Montana, it's the Rockies and at elevation, which sounds good to me. Not to mention I know several people in the area and, in general, like it. Which is not to say that we dislike where we live today (despite the high price tag). It's just time for a change of scenery.

I plan to continue writing on the side here (and on LinkedIn), but the pace of writing may slow again in the short-term while I dedicate most of my energy to ramping up the day job. The good news, however, is this will afford me the opportunity to continue getting "real world" experience that can be translated and related in a hopefully meaningful manner.

Until next time, thanks and good luck!




Quit Talking About "Security Culture" - Fix Org Culture!

2017-09-06T13:57:04Z

I have a pet peeve. Ok, I have several, but nonetheless, we're going to talk about one of them today. That pet peeve is security professionals wasting time and energy pushing a "security culture" agenda. This practice of talking about "security culture" has arisen over the past few years. It's largely coming from security awareness circles, though not always (looking at you, anti-phishing vendors intent on selling products without the means and methodology to make them truly useful!). I see three main problems with references to "security culture," not the least of which being that it continues the bad old practices of days gone by.

1) It's Not Analogous to Safety Culture

First and foremost, you're probably sitting there grinding your teeth, saying, "But safety culture initiatives work really well!" Yes, they do, but here's why: safety culture can, and often does, achieve a zero-incident outcome. That is to say, you can reduce safety incidents to ZERO. This is excellent for when you're around construction sites or going to the hospital. However, I have very bad news for you: information (or cyber, or computer) security will never get to zero. Until the entirety of computing is revolutionized, removing humans from the equation, you will never prevent all incidents. Just imagine your "security culture" sign by the entrance to your local office environment, forever emblazoned with "It Has Been 0 Days Since Our Last Incident." That's not healthy or encouraging. That sort of thing would be outright demoralizing! Since you can't be 100% successful through preventative security practices, you must shift your mindset to a couple of things: better decisions and resilience.
Your focus, which most of your "security culture" programs are trying to address (or should be), is helping people make better decisions. Well, I should say, some of you - the few, the proud, the quietly isolated - have this focus. But at the end of the day/week/month/year, you'll find that people - including well-trained and highly technical people - will still make mistakes or bad decisions, which means you can't bank on "solving" infosec through better decisions. As a result, we must still architect for resiliency. We must assume something will break down at some point, resulting in an incident. When that incident occurs, we must be able to absorb the fault and continue to operate despite degraded conditions, while recovering to "normal" as quickly, efficiently, and effectively as possible. Note, however, that this focus on resiliency doesn't really align well with the "security culture" message. It's akin to telling people, "Safety is really important, but since we have no faith in your ability to be safe, here's a first aid kit." (Yes, that's a bit harsh, to prove a point, which hopefully you're getting.)

2) Once Again, It Creates an "Other"

One of the biggest problems with a typical "security culture" focus is that it once again creates the wrong kind of enablement culture. It says, "We're from infosec and we know best - certainly better than you." Why should people work to make better decisions when they can just abdicate that responsibility to infosec? Moreover, since we're trying to optimize resiliency, people can go ahead and make mistakes, no big deal, right? Part of this is ok, part of it is not. On the one hand, from a DevOps perspective, we want people to experiment, be creative, be innovative. In this sense, resilience and failure are a good thing. However, note that in DevOps, the responsibility for "fail fast, recover fast, learn fast" is on the person doing the experimenting! The DevOps movement is diametrically opposed to fostering enableme[...]



Introducing Behavioral Information Security

2017-08-29T14:49:24Z

I recently had the privilege of attending BJ Fogg's Behavior Design Boot Camp. For those unfamiliar with Fogg's work, he started out doing research on Persuasive Technology back in the 90s, which has become the basis for most modern uses of technology to influence people (for example, the use of Facebook user data to influence the 2016 US Presidential Election). The focus of the boot camp was "behavior design," which was suggested to me by a friend who's a leading expert in modern, progressive security awareness program management.

Thinking about how best to apply this new-found knowledge, I've been mulling opportunities for applying Fogg's models and methods. Suddenly, it occurred to me: "Hey, you know what we really need is a new sub-field that combines all aspects of security behavior design, such as security awareness, anti-phishing, social engineering, and even UEBA." I concluded that maybe this sub-field would be called something like "behavioral security" and started doing searches on the topic. Well, lo and behold, it already exists! There is already a well-established sub-field within information security (infosec) known as "Behavioral Information Security." Most of the literature I've found (and there's a lot in academia) has popped up over the past 5 years or so. However, I did find a reference to "behavioral security" dating back to May 2004 (see "Behavioral network security: Is it right for your company?").

Going forward, I believe that organizations and standards should stop listing "security awareness" as a single line-item requirement, and instead pivot to the expanding domain of "behavioral infosec."
NIST CSF would be a great place to start (though I'm assuming it's too late for the v1.1 release, expected sometime soon). Nonetheless, I will be using this phrasing and description going forward. The inevitable question you might have is, "How do you define the domain/sub-field of Behavioral Information Security?" To me, the answer is quite simple: any practice or capability that monitors or seeks to modify human behavior to reduce risk or improve security falls under behavioral infosec. These practice areas include everything from modern, progressive security education, training, and awareness programs (programs well beyond posters and blind anti-phishing, including developer education tied to appsec testing data), to progressive anti-phishing programs (that is, those that baseline and then measure impact), to all forms of social engineering (including red team testing, blue team testing, etc.), to user behavior monitoring through tools like UEBA (User and Entity Behavior Analytics).

Organizations should instantiate Behavioral InfoSec Engineering programs and teams charged with these practice areas (definitely security awareness and the various testing, measuring, and reporting practices). Personnel should be suitably trained, not just in analytical areas, but also in technical areas, in order to best develop technical content and practices designed to impact human behavior.

Lastly, why human behavior as a focus? Because reports (like the Verizon DBIR) consistently show, year after year, that one wrong click by a human can break an entire security chain. Thus, we need to help people make better decisions. This notion is also very DevOps-friendly thinking. We should not want to see large security programs built and maintained within organizations, but rather must work to thoroughly embed as many security practices and decisions as possible within non-security teams in order to improve security overall (something emphasized in DevSecOps programs).
Security resources wil[...]
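One hypothetical illustration of the UEBA idea mentioned above (a toy sketch, not any vendor's algorithm): flag activity that deviates sharply from a user's own historical baseline, here via a simple z-score over daily counts. The data and threshold are invented.

```python
# Toy sketch of the core UEBA notion: compare today's behavior against the
# same user's historical baseline and flag large deviations. The activity
# counts and z-score threshold are illustrative assumptions.

from statistics import mean, stdev

def anomalous(history, today, z_threshold=3.0):
    """Return True if today's count is a z-score outlier vs history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

downloads = [4, 5, 6, 5, 4, 6, 5]  # files accessed per day, one user
print(anomalous(downloads, 60))    # True: sudden burst of activity
print(anomalous(downloads, 5))     # False: within normal range
```

Real UEBA products use far richer features and models, but the behavioral framing is the same: the baseline belongs to the entity, not to a global rule.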



Confessions of an InfoSec Burnout

2017-08-29T14:36:39Z

Soul-crushing failure. If asked, that is how I would describe the last 10 years of my career, since leaving AOL. I made one mistake, one bad decision, and it's completely and thoroughly derailed my entire career. Worse, it's unclear if there's any path to recovery as failure piles on failure piles on failure.

The Ground I've Trod

To understand my current state of career decrepitude, as well as how I've seemingly become an industry pariah: I have worked for 11 different organizations over the past 10 years. I left AOL in September 2007, right before a layoff (I should have waited for the layoff and gotten a package!). I had been there for more than 3.5 years, and I was miserable. It was a misery of my own making in many ways. My team manager had moved up the ranks, leaving an opening. All my teammates encouraged me to throw my hat in the ring, but I demurred, telling myself I simply wasn't ready to manage. Oops. Instead, our new manager came through an internal process and immediately made life un-fun. I left a couple months later.

When I left AOL, it was to take a regional leadership role in BT-INS (BT Global Services - they bought International Network Services to build out their US tech consulting). A month into the role as security lead for the Mid-Atlantic, where I was billable on day 1, the managing director left and a re-org merged us with a different region where there was already a security lead. Two of three sales reps left, and the remaining person was unable and unwilling to sell security. I sat on the bench for a long time, traveling as needed. An idle, bored Ben is a bad thing.

From BT I took a leadership role with a weird tech company in Phoenix. There was no budget and no staff, but I was promised great things.
They let me start remote for a couple months before relocating. I knew it was a bad fit and not a good company before we made the move. I could feel it in my gut. But I uprooted the family in the middle of the school year (my wife is an elementary teacher) and went to Phoenix, ignoring my gut. Six months later they eliminated the position. The fact is that they'd hired a new General Counsel who also claimed a security background (he had a CISSP), and thus they made him the CISO. The year was 2009, and the economy was in tatters after the real estate bubble had burst. We were stranded in a dead economy and had no place to go.

Thankfully, after a month of searching, someone threw me a life-line and I promptly started a consulting gig with Foreground Security. Well, that was a complete disaster and debacle. We moved back to Northern Virginia, and my daughter immediately got sick and ended up in the hospital (she'd hardly had a sniffle before!). By the time she got out of the hospital, I was sicker than I'd ever been before. The doctors had me on a couple different antibiotics, and I could hardly get out of bed. This entire time, the president of the company would call and scream at me every day. Literally yelling at the top of his lungs over the phone. Hands-down the most unprofessional experience I'd had. The company partnership subsequently fell apart, and I was sacked in the process. I remember it clearly to this day: I'm at my parents' house in NW MN over the winter holidays and the phone rings. It's the company president, who starts out by telling me they'd finally had the kid they were expecting. And they're letting me go. Yup, that's how the conversation went ("We had a baby. You're termed."). Really, being out of Foreground was a relief given how awful it had been. Luckily they relocated us no strings attached, so I didn't owe anything. But I once again w[...]



On Titles, Jobs, and Job Descriptions (Not All Roles Are Architects)

2017-08-02T13:21:00Z

Folks: Please stop calling every soup-to-nuts, everything-but-the-kitchen-sink security job a "security architect" role. It's harmful to the industry, and it's doing you no favors in trying to find the right resources. In fact, please stop posting these "one role does everything security under the sun" positions altogether. It's hurting your recruitment efforts, and it makes it incredibly difficult to find positions that are a good fit. Let me explain...

For starters, there are generally three classes of security people, management and pentesters aside:
- Analysts
- Engineers
- Architects

(Note that these terms tend to be loaded due to their use in other industries. In fact, in some states you might even have to come up with a different equivalent term for positions due to legal definitions (or licensing) of roles. Try to bear with me and just go with the flow, eh?)

Analysts are people who think about stuff and write about stuff and sometimes help initiate actions, but they are not the implementers of security tools or practices. An analyst may or may not be particularly technical, depending on the nature of the role. For example, there are tons of entry-level SOC analyst positions today that can provide a first taste of infosec work life. You rarely need to have a lot of technical skills, at least initially, to land one of these gigs (this varies by org). Similarly, there are GRC analyst roles that tend not to be technical at all (despite often including "technical writing," such as for policies, in the workload).
On the far end of the spectrum, you may have incident response (IR) analysts who are very technical, but again note the nature of their duties: thinking about stuff, writing about stuff, and maybe initiating actions (such as the IR process or escalations therein).

Engineers are people who do most of the hands-on work. If you're looking for someone to do a bunch of implementation work, particularly around security tools and tech, then you want a security engineer, and that should be clearly stated in your job description. Engineers tend to be people who really enjoy implementation and maintenance work. They like rolling up their sleeves and getting their hands dirty. You might also see "administrator" used in this same category (though that's muddy water, as sometimes a "security administrator" might be more like an analyst: less technical, skilled in one kind of tool, like adding and removing users in Active Directory or your IAM of choice). In general, if you're listing a position that has implementation responsibilities, then you need to be calling it an engineer role (or equivalent), not an analyst and certainly not an architect.

Architects are not your implementers. And, while they are thinkers who may do a fair amount of technical writing, the key differentiators here are that 1) they tend to be way more technical than the average analyst, 2) they see a much bigger picture than the average analyst or engineer, and 3) they've often risen to this position through one or both of the other roles, but almost certainly with considerable previous hands-on implementation experience as an engineer. It's very important to understand that your architects, while likely having a background in engineering, are unlikely to want to do much hands-on implementation work. What hands-on work they are willing/interested to do is likely focused heavily on proofs of concept (POCs) and testing new ideas and technologies.
Given their technical backgrounds, they'll be able to go toe-to-toe on technical topics with jus[...]



Reflection on Working From Home - The Falcon's View

2017-03-30T16:13:17Z

In a moment of introspection last night, it occurred to me that working from home tends to amplify any perceived slight or sources of negativity. Most of my "human" interactions are online only, which - for this extrovert - means my energy is derived from whatever "interaction" I have online in Twitter, Facebook, email, Slack, etc.

It turns out that this can be highly problematic. Last year I turned off Facebook for months at a time because of all the negativity. I constantly felt myself slipping into depression because everything weighed me down. And don't get me wrong, I'm as much a source of negative posts as anyone else. I don't think we can help it in this political environment.

However, where this gets particularly challenging is around non-internet interactions. Whether it be having tea with a friend or just chatting with them on the phone... I've come to realize that a lot of my happiness ends up hanging on these very rare interactions, which becomes a real problem when folks are busy or when unexpected events conspire to prevent such meetings. The negative side of my brain then latches onto these as "proof" that I'm unworthy of friends or friendship and commences the dark downward spiral.

To that end, now that I'm aware of these feelings, I can start developing mechanisms to cope with them. I think one of the big challenges for someone my age, with a family and working from home, is trying to find new opportunities for interaction. Real interactions - not phony interactions via "networking" events and BS like that. We're kind of at that point in the parenthood cycle where the kids' schedules tend to dominate our lives.

Anyway... this is my observation for the morning. I need to find new forms of positive human interactions. Preferably real human interactions. And, in the meantime, I need to stop letting negative interactions and disappointments amplify disproportionately to the degree that it triggers a major downward swing. This is not an easy thing to do, but in seeing the pattern, at least now I can tackle it.




Introspection on a Recent Downward Spiral - The Falcon's View

2017-02-28T00:18:19Z

Alrighty... now that my RSA summary post is out of the way, let's get into a deeply personal post about how absolutely horrible of a week I had at RSA. Actually, that's not fair. The first half of the week was ok, but some truly horrible human beings targeted me (on social media) on Wednesday of that week, and it drove me straight down into a major depressive crash that left me reeling for days (well, frankly, through to today still). I've written about my struggles with depression in the past, and so in the name of continued transparency and hope that my stories can help others, I wanted to relate this fairly recent tale. If you can't understand or identify with this story, I'm sorry, but that's on you.

The Holy Trinity: Health, Career, and Relationships

This story really starts before the RSA conference. 2016 was an up-and-down year for a variety of reasons, but overall my health had been ok as I was able to re-establish a regular exercise routine. My weight was higher throughout the year (a negative), in large part due to ending 2015 with a major flu bug and sinusitis that lingered for several months. Frankly, even today, I'm worn out and not as resilient as I think I should be. At any rate, I was doing ok in the health department until November, when I traveled to Austin, TX, to speak at a small event. The night before I was supposed to speak I ended up eating something bad (I suspect a pickled jalapeño plucked from a jar on the table of a BBQ place) and contracting food poisoning. I got no sleep and was unable to eat all day, so speaking at 4pm after all of that to an audience of 5 (or fewer) was... not good. This led into more travel in early December, such that by the middle of the month, I was sick.
Two weeks of vacation on the road (sick), and suffice to say, by the time 2017 rolled around, I was completely worn out. Once health falls, poor diet routines tend to fall into place as I caffeinate to be functional during the day, which negatively impacts sleep, which negatively impacts weight, which creates the negative, reinforcing cycle around which everything else starts to circle and devolve. Suffice to say, one of the three pillars had fallen, and as is common for me the past few years (ever since getting pneumonia in June 2014), the road back is slow and requires a lot of willpower. From a mental health perspective, once health falls, the danger is real that a depressive episode may approach if anything else takes a hit.

Enter the career/work angle... I'm not going to say a lot about this, but suffice to say, there's been a lot of personal job stress. Such an occurrence has been a trigger for me in the past, because - like so many people - a lot of my personal identity is wrapped up in the work that I do. For the rare person reading this post who doesn't know, I work in the cybersecurity space, which is already beset with far-above-average burnout rates, which means the conditions are already tilted against success, happiness, and mental well-being. Add in my career history that's been so incredibly adverse and challenging, and the picture quickly shapes up that I can very quickly start feeling like I'm nothing more than a waste of space. After all, if work isn't fulfilling, and if I don't feel like I'm doing anything meaningful with my life, then it translates into feeling like I am meaningless. Don't argue, don't comment, don't provide some response about "no, man, you matter." It's not about rationality in this context, it's about how I feel at my core, which tends to be incredibly dark when the wheels fall off and th[...]



RSA USA 2017 In Review - The Falcon's View

2017-02-27T17:46:38Z

Now that I've had a week to recover from the annual infosec circus event to end all circus events, I figured it's a good time to attempt being reflective and proffer my thoughts on the event, themes, what I saw, etc, etc, etc. For starters, holy moly, 43,000+ people?!?!?!?!?! I mean... good grief... the event was about a quarter of that size a decade ago. If you've never been to RSA, or if you only started attending in the last couple years, then it's really hard to describe to you how dramatic the change has been since ~2010 when the numbers started growing like this (to be fair, year-over-year growth from 2016 to 2017 wasn't all that huge). With that... let's drill into my key highlights...

Size Matters

Why do people like me go to RSA? Because it's the one week in the year where I can see almost every single vendor in the industry, as well as see people I know and like whom I otherwise would never get to see in person (aka "networking"). It truly is an enormous event, and it has definitely passed the threshold of being overwhelming. Several people I've known for years did not make the trip this year, and I suspect this will become a trend, but in the meantime, in many ways it's a "must attend" event. The down-side to an event this large, and something I learned back in my Gartner days, is that - as someone with nearly 2 decades of industry experience - this is not an event where you're going to find much great content. Talks must, out of necessity, be tuned to the median audience, which means looking backward at what was cutting-edge 5-10 years ago. Sad, but true. There's simply not much room for cutting-edge thinking or discussion at the event anymore. Soooo... why go back?
Again, so long as there's business development and networking benefit, it is an essential event, but it's also very costly. Hotel pricing alone makes this an increasingly difficult prospect. For as much as we're spending on hotels each year, I could very likely visit friends in 3-4 different parts of the country and break even on travel costs. It's also increasingly a lot of noise, and much harder to sift value from that noise. I truly believe RSA is nearing the point where they'll have to either break the event into multiple events (kind of like the 3 weeks of SxSW), or they'll at least need to move to a different model where you're attending a conference within a conference (similar to "schools" within a large university). As it stands today, it's simply too easy to get lost in the shuffle and derive diminishing value.

Automation Nearing the Mainstream

We've been talking about security automation and orchestration for several years now, but it's often been with only a handful of examples, and generally quite forward-looking. We're just now finally reaching the point where the automation message is being picked up in the mainstream and more expansive examples are emerging. One thing I noticed this year is that "automation" was prevalent in many booths. There are now at least a dozen vendors purportedly in the space (up from the days of it being Invotas (FireEye) and Phantom). No, I can't remember any names, but suffice to say, it's a growing list. Also, separately, I've noticed that orgs like Chef and Puppet have made an attempt to expand their automation appeal to security (not to mention ServiceNow doing the same). The point here is this: The mainstream consensus is finally starting to catch up with the reality that we will never be able to scale human resources fast enough to successfully address the rapidly changing threat landsc[...]



NCS Blog: DevOps and Separation of Duties - The Falcon's View

2017-01-26T23:54:10Z

From my NCS blog post:

Despite the rapid growth of DevOps practices throughout various industries, there still seems to be a fair amount of trepidation, particularly among security practitioners and auditors. One of the first concerns that pops up is a blurted out "You can't do DevOps here! It violates separation of duties!" Interestingly, this assertion is generally incorrect and derives from a general misunderstanding about DevOps, automation, and the continuous integration/deployment (CI/CD) pipeline.

Continue reading here...
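The separation-of-duties point above can be made concrete: in a CI/CD pipeline, the control shifts from "different people touch production" to "the pipeline performs the deployment, and no change ships without independent approval." Here is a minimal, hypothetical sketch of such a gate; the names (`Deployment`, `approve`, `can_deploy`) are illustrative and not taken from any real CI tool:

```python
# Illustrative sketch of a pipeline approval gate enforcing
# separation of duties: an author may never approve their own change,
# and deployment is performed by the pipeline, not the developer.

from dataclasses import dataclass, field


@dataclass
class Deployment:
    change_author: str
    approvals: set = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        # Reject self-approval: this is the separation-of-duties check.
        if reviewer == self.change_author:
            raise ValueError("authors may not approve their own changes")
        self.approvals.add(reviewer)

    def can_deploy(self) -> bool:
        # The pipeline releases only after at least one independent approval.
        return len(self.approvals) > 0


d = Deployment(change_author="alice")
try:
    d.approve("alice")  # self-approval is blocked
except ValueError:
    pass
d.approve("bob")        # independent reviewer signs off
print(d.can_deploy())   # → True
```

The point of the sketch is that the auditable control lives in the automation itself: the same check runs on every change, which is typically a stronger separation-of-duties story than ad-hoc manual handoffs.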




NCS Blog: "Minimum Viable" MUST Include Security - The Falcon's View

2017-01-18T21:58:17Z

"If you're a startup trying to get a product off the ground, you've probably been told to build an "MVP" - a minimum viable product - as promoted by the Lean Startup methodology. This translates into products being rapidly developed with the least number of features necessary to make an initial sale or two. Oftentimes, security is not one of the features that makes it into the product, and then it gets quickly forgotten about down the road."
Continue reading here...



Rapid Iteration Doesn't Mean "Stop Thinking" - The Falcon's View

2016-12-01T19:34:07Z

In the world of DevOps we often like to talk about rapid iteration in relation to shortened feedback cycles, and yet oftentimes something gets lost in translation. Specifically, just because failure is ok - because failure leads to learning - it does not mean that we should stop thinking altogether. And, yet... it's all too common!

A perfect example... firing off a response to an email before fully reading and comprehending the email. Or, responding before reading all the way through a thread. In either case, we make snap decisions to respond to some point, and yet we lose the overall context, even when that context explicitly answers questions or concerns being raised in our response.

Another example is iterating too rapidly through updates. Maybe it's coding to meet an ever-changing design. Maybe it's making design changes to a site or PowerPoint presentation. Maybe it's simply providing feedback on something. Stopping, taking a moment, actually engaging the brain, being mindful and present... these are all essential to being successful, as well as key components of DevOps and Lean (and, by extension, Lean Security). In fact, being mindful, respectful, and taking the time to think is an essential part of a generative culture.

There is more to be written (soon!) on generative culture, Lean Security, and other related topics, but for now, I leave you with this: rapid failures as part of rapid iteration due to lack of brain engagement are dumb. Don't fall into that trap. Shortened feedback loops have to do with providing feedback as quickly as possible, but it must be meaningful, mindful, and respectful feedback. If you're not engaging the brain and just responding, then your response exists at a primitive level and - even if it turns out to be correct/accurate - overall we really do not want to be living primitively. Rather, we must engage higher thinking and extend ourselves outward in order to start influencing org culture in a way that will benefit ourselves, our teammates, and our employers/clients.

(originally posted to New Context blog)