2016-10-19T15:42:11+00:00
Mondays. There are many reasons Mondays have a bad reputation. Few of us would claim to like Mondays. My Monday earlier this week got off to a poor start. I was traveling to attend a workshop (a good one, on ethics in cyber) and staying, yet again, at a hotel. As sometimes happens when I travel, I wasn’t sleeping well. I awoke shortly after 3am and couldn’t get back to sleep. Being the compulsive gadget user I am, I checked my email on my cellphone. There, I saw a new message posted from Europe that made my Monday quite a bit better. (Unfortunately, it didn’t help me get back to sleep.) Actually, as I write this on Wednesday, I’m still pretty happy, as well as better rested.

The email informed me that I am the 2017 IFIP TC-11 recipient of the Kristian Beckman Award. IFIP is the International Federation for Information Processing, and the Beckman Award is one of the top recognitions in the field. Many of the previous recipients of this honor have been mentors and heroes of mine. As noted on their WWW site, IFIP is recognized by the UN, and it represents IT societies from 56 countries/regions, covering five continents with a total membership of over half a million. TC-11 is the subgroup (technical committee) devoted to security and privacy protection in information processing systems. The Kristian Beckman Award has been presented annually since 1993. According to the web site, "The objective of the award is to publicly recognize an individual, not a group or organisation, who has significantly contributed to the development of information security, especially achievements with an international perspective." The letter noted my achievements in research, education, and service; my creation and leadership of CERIAS; my guidance and mentorship of students developing security tools in widespread use; and my work as Academic Editor and then Editor-in-Chief of Computers & Security, the oldest journal in the field of information security.
The award will be formally presented at the 32nd International Conference on ICT Systems Security and Privacy Protection (IFIP SEC 2017) in Rome, in May 2017. I will be presenting an invited plenary address as part of the award. I am honored to be named as a recipient of this award. I have worked with IFIP TC-11 on various things over the last 25 years, including as a subcommittee chair (TC 11.4), as a member of several other groups, and as editor of Computers & Security, which is recognized as the official journal of TC-11. Along with ACM, ISSA, and (ISC)2, IFIP is a significant force in research and education in cyber security.

I have been quite fortunate in my career. With the Beckman Award, I believe I have now been recognized with every major cyber security award, including the National Computer Systems Security Award; the ISSA Hall of Fame; the Harold F. Tipton Award; the Cyber Security Hall of Fame; the SANS Lifetime Achievement Award; the Outstanding Contribution Award from ACM SIGSAC; and the Joseph Wasserman Award from ISACA. I haven’t done all this on my own — I have been fortunate enough to work with some outstanding students, colleagues, and staff. I will always be grateful for their collegial support.

I would also like to note that many of these awards can be seen as "lifetime" awards. Although the administrators and some of my colleagues at Purdue think I’m no longer functional, I want to assure everyone else that I’m not done yet — I still have some ideas to pursue, possibly another book or two to write, and more students to teach and advise! Now, if only I could get enough sleep on a regular basis…but I’m willing to wake up for news like this! And no, I still don’t particularly like Mondays. [...]
Stephen T. Walker recently died. He was the founder of the pioneering Trusted Information Systems, a prime force behind the establishment of the NCSC (now the Commercial Solutions Center, but also the producer of the Rainbow Series), and the recipient of the first National Computer Systems Security Award. His obituary lists his many notable accomplishments and awards. Steve was a major influence (and mentor) in the field of cyber security for decades.
I only recall meeting Steve once, and I am poorer for not having had more contact with him.
If you work in cyber security, you should read his obituary and ponder the contributions that have led to the current state of the field, and how little we have credited people like Steve with having had a lasting influence.
2016-06-30T23:20:53+00:00
Today (June 30) is my last day as CERIAS Executive Director. This marks the end of a process that began about 15 months ago, when it was unexpectedly announced that my appointment was not being renewed. Last week, the dean responsible announced the appointment of Professor Dongyan Xu as interim executive director as of July 1. He also announced, to our surprise, that Professor Elisa Bertino would not be reappointed as CERIAS Director of Research. I wish to express my deep gratitude to Elisa for her support and her participation in the growth of CERIAS; I very much value having Elisa as a colleague.

I will not make any other public comments at this time about this transition, other than to voice my unequivocal support of Dongyan and of the wonderful CERIAS staff. Dongyan is an outstanding scholar and colleague, and he has a long history of active involvement with CERIAS. I helped recruit him to Purdue in 2001 as a new assistant professor working in security, so I am very familiar with his background. He has worked with CERIAS as he has advanced through the academic ranks, so he has the experience — both professional and personal — to handle the job in this time of transition.

Looking back, I have had the honor of working with some incredible people over the last 25 years, first as leader of the COAST Laboratory, and then as the founder and (executive) director of CERIAS. CERIAS participants have set an example of “thinking differently” to effect a profound and lasting set of changes — many of which are neither recognized nor appreciated locally. As with most things in academia, the further away one gets from one’s home institution in space and time, the more the value of contributions is understood! It is widely acknowledged outside that our faculty, staff, and students have made a huge contribution to establishing cyber security as an academic discipline.
When CERIAS was founded in 1998, there were only four academic groups in the world devoted to cyber security, and they were all quite small. CERIAS was established to help build the field, establish leadership, and investigate new ideas, all while embracing the spirit of the land-grant university to perform research in the public good. In the years since then, our local community has:

- grown our participating faculty to over 100, with visitors and senior grads of at least as many again
- assisted over a dozen other universities, and dozens more smaller institutions, develop curricula and degrees in the area
- initiated research into hundreds of new topic areas, bringing in over $100 million in externally funded research
- supported several dozen companies and government agencies in our partner program, with research, policy, and hiring

What is more, we helped show that the whole field of cyber protection is really multidisciplinary — it is more than computer science or engineering, but a rich area of study that includes a range of disciplines. Over the last 18 years, we have had faculty from 20 different academic departments participate in CERIAS activities…and still do.

Also back in 1998, there were few programs producing graduates with concentrations in cyber security. I did a survey for some Congressional testimony at the time, and found that only about 3 PhDs a year were being produced in all of the US (and almost none elsewhere) in the field (excluding cryptography).
Although not explicitly part of CERIAS, which is a research-only entity, CERIAS participants also:

- helped produce 250 new PhDs in cyber security, cyber forensics, and privacy, and many hundreds more with MS degrees
- established the first graduate program with an explicit information security degree
- established a graduate certificate in public policy and cyber security
- established an academic program in cyber forensics

As the (in parallel) head of the Interdisciplinary Information Security (INSC) graduate program, I have seen the synergy between CERIAS and INSC, and[...]
The nomination cycle for the 2016 induction into the Cyber Security Hall of Fame is now open.
2016-03-07T04:21:05+00:00
I have attended 10 of the last 15 RSA conferences. I do this to see what’s new in the market, meet up with friends and colleagues I don’t get to see too often, listen to some technical talks, and enjoy a few interesting restaurants and taverns in SF. Thereafter, I usually blog about my impressions (see 2015 and 2014, for example). I think I could reuse my 2015 comments almost unchanged… There have been some clear trends over the years:

- The technical talks each year seem more focused on superficial approaches and issues: there seemed to be less technical content, at least in the few I observed. This goes with the rather bizarre featured talks by cast members of CSI: Cyber and Sean Penn — well-known experts on cyber. Not. (Several others told me they thought the same about the sessions.) Talks a decade ago seemed to me to be deeper.
- This matches some of what I observed at booths. The engineers and sales reps at the booths have little deep knowledge about the field. They know the latest buzzwords and market-speak, but can’t answer some simple questions about security technologies. They don’t know people, terms, or history. More on this later.
- There is still an evident level of cynicism among booth personnel that surprised me, but less than last year.
- There seemed to be more companies exhibiting (both sides of Moscone were full). There also seemed to be more that weren’t there last year and are unlikely to be around next year; I estimate that as many as 20% may be one-time wonders.
- This year showed some evidence of the effectiveness of new policies against “booth babes.” I talked to a number of women engineers who were more comfortable this year working at the booths. A couple indicated they could dress up a little without being mistaken for “the help.” That is a great step forward, but it needs reinforcement and consistency. At least one company tried to come close to the edge and sparked some backlash.
As I noted above, the majority of people I talked to at vendor booths didn’t seem to have any real background in security beyond a few years of experience with the current market. This is a longer-term trend. The market has been tending more towards patching and remediation of bad software rather than strong design and a really secure posture. It is almost as if they have given up trying to fix root causes because few end-users are willing to make the tough (and more expensive) choices. Thus, the solutions are after-the-fact, or intended to wrap broken software rather than fix it. Employees don’t need to actually study the theory and history of security if they’re not going to use it!

Of course, not everyone is in that category. There are a number of really strong experts who have extensive background in the field, but it seems to me (subjectively) that the number attending decreases every year. Related to that, a number of senior people in the field that I normally try to meet with skipped the conference this year. Many of them told me that the conference (and lodging and…) is not worth what they get from attending. (As a data point, the Turing Award was announced during the first day of the conference. I asked several young people, and they had no idea who Diffie and Hellman were or what they had done. They also didn’t know what the Turing Award was. Needless to say, they also had no idea who I was, which is more or less what I expect, but a change from a decade ago.)

As far as buzzwords, this year didn’t really have one. Prior years have highlighted “the cloud,” “big data,” and “threat intelligence” (to recap a few). This year I thought there would be more focus on the Internet of Things (IoT), but there wasn’t. If anything, there seemed to be more with “endpoint protection” as the theme. Anti-virus, IDS, and firewalls were not emphasized much on the exhibit floor.[...]
2015-12-06T14:31:10+00:00
It may seem odd to consider June 2016 as January approaches, but I try to think ahead. And June 2016 is a milestone anniversary of sorts. So, I will start with some history, and then an offer to get something special and make a charitable donation at the same time. In June of 1991, the first edition of Practical Unix Security was published by O’Reilly. That means that June 2016 is the 25th anniversary of the publication of the book. How time flies! Read the history and think of participating in the special offer to help us celebrate the 25th anniversary of something significant!

History

In the summer of 1990, Dan Farmer wrote the COPS scanner under my supervision. That toolset embodied a fair amount of domain expertise in Unix that I had accumulated in prior years, augmented with items that Dan found in his research. It generated a fair amount of “buzz” because it exposed issues that many people didn’t know and/or understand about Unix security. With the growth of Unix deployment (BSD, AT&T, Sun Microsystems, Sequent, Pyramid, HP, DEC, et al.) there were many sites adopting Unix for the first time, and therefore many people without the requisite sysadmin and security skills. I thus started getting a great deal of encouragement to write a book on the topic. I consulted with some peers and investigated the deals offered by various publishers, and settled on O’Reilly Books as my first contact. I was using their Nutshell handbooks and liked those books a great deal: I appreciated their approach to getting good information into the hands of readers at a reasonable price. Tim O’Reilly is now known for his progressive views on publishing and pricing, but his company was still a niche publisher back then. I contacted Tim, and he directed me to Debby Russell, one of their editors. Debby was in the midst of writing her own book, Computer Security Basics.
I told her what I had in mind, and she indicated that only a few days prior she had received a proposal from a well-known tech author, Simson Garfinkel, on the same topic. After a little back-and-forth, Debby introduced us by phone, and we decided we would join forces to write the book. It was a happy coincidence, because we each brought something to the effort that made the whole more than the sum of its parts.

That first book was a little painful for me because it was written in FrameMaker to be more easily typeset by the publisher, and I had never used FrameMaker before. Additionally, Simson didn’t have the overhead of preparing and teaching classes, so he really pushed the schedule! I also had my first onset of repetitive stress injury to my hands — something that bothers me occasionally to this day, and has limited me over the years from writing as much as I’d like. I won’t blame the book as the cause, but it didn’t help!

The book was completed in early 1991 and included some of my early work with COPS and Tripwire, plus a section on some experiments with technology for screening networks. I needed a name for what I was doing, and taking a hint from construction work I had done when I was younger, I called it a “firewall.” To the best of our recollection, I was the one who coined that term; I had started speaking about firewalls in tutorials and conferences in at least late 1990, and the term soon became commonplace. (I also described the DMZ structure for using firewalls, although my term for that didn’t catch on.)

Anyhow…the book appeared in the summer of 1991 and became a best seller (for its kind; last I heard, over 100,000 copies have been sold in 11 languages, and at least twice that many copies pirated). Thereafter, Simson and I also worked on a book on WWW security (editions in 1997 and 2002), along with our various other projects.
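For readers who never saw COPS, the flavor of the checks it automated can be suggested with a minimal sketch. This is my own illustration in modern Python, not COPS code (COPS was shell and C, and checked far more): it flags world-writable files, one classic Unix misconfiguration of the sort the toolset reported.

```python
#!/usr/bin/env python3
# Illustrative sketch of one COPS-style check: report world-writable
# files in a directory tree. Not actual COPS code.
import os
import stat
import sys

def world_writable(path):
    """Return True if the file at path is writable by 'other'."""
    try:
        mode = os.lstat(path).st_mode
    except OSError:
        return False  # vanished or unreadable; skip it
    if stat.S_ISLNK(mode):
        return False  # symlink modes don't reflect the target
    return bool(mode & stat.S_IWOTH)

def scan(root):
    """Walk a directory tree, yielding paths of world-writable files."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            p = os.path.join(dirpath, name)
            if world_writable(p):
                yield p

if __name__ == "__main__":
    for p in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        print("world-writable:", p)
```

The real COPS bundled dozens of such checks (setuid files, bad passwd entries, risky cron jobs, and so on) into one report, which is what made it so eye-opening at the time.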
After several years, we produced a major rewrite and update of the Unix security book to include material on internetworkin[...]
This evening, someone pointed out Congressional testimony I gave over 6 years ago. This referenced similar testimony I gave in 2001, and I prepared it using notes from lectures I gave in the early-to-mid 1990s.
What is discouraging is that if I were asked to provide testimony next week, I would only need to change a few numbers in this document and it could be used exactly as is. The problems have not changed, the solutions have not been attempted, and if anything, the lack of leadership in government is worse.
Some of us have been saying the same things for decades. I’m approaching my 3rd decade of this, and I’m a young’un in this space.
If you are interested, read the testimony from 2009 and see what you think.
2015-10-13T20:54:23+00:00
On September 24 and 25 of this year, Purdue University hosted the second Dawn or Doom symposium. The event — a follow-up to the similarly-named event held last year — focused on talks, movies, presentations, and more related to advanced technology. In particular, the focus has been on technology that poses great potential to advance society, but also potential for misuse or accident that could cause great devastation.

I was asked to speak this year on the implications of surveillance capabilities. These have the promise of improved use of resources, better marketing, improved health care, and reduced crime. However, those same capabilities also threaten our privacy, decrease some potential for freedom of political action, and create an enduring record of our activities that may be misused. My talk was videotaped and is now available for viewing. The videographers did not capture my introduction and the first few seconds of my remarks. The remaining 40 or so minutes of me talking about surveillance, privacy, and tradeoffs are there, along with a few audience questions and my answers. If you are interested, feel free to check it out. Comments are welcome, especially if I got something incorrect — I was doing this from memory, and as I get older I find my memory may not be quite as trustworthy as it used to be.

You can find video of most of the other Dawn or Doom 2 events online here. The videos of last year's Dawn or Doom event are also online. I spoke last year about some of the risks of embedding computers everywhere, and giving those systems control over safety-critical decisions without adequate safeguards. That talk, Faster Than Our Understanding, includes some of the same privacy themes as the most recent talk, along with discussion of security and safety issues.

Yes, if you saw the news reports, the Dawn or Doom 2 event is also where this incident involving Barton Gellman occurred.
Please note that other than some communication with Mr. Gellman, I played absolutely no role in the taping or erasure of his talk. Those issues are outside my scope of authority and responsibility at the university, and based on past experience, almost no one here listens to my advice even if they solicit it. I had no involvement in any of this, other than as a bystander. Purdue University issued a formal statement on this incident. Related to that statement, for the record, I don’t view Mr. Gellman’s reporting as “an act of civil disobedience.” I do not believe that activities of the media, as protected by the First Amendment of the US Constitution and by legal precedent, can be viewed as “civil disobedience” any more than can be voting, invoking the right to a jury trial, or treating people equally under the law no matter their genders or skin colors. I also share some of Mr. Gellman’s concerns about the introduction of national security restrictions into the entire academic environment, although I also support the need to keep some sensitive government information out of the public view. That may provide the topic for my talk next year, if I am invited to speak again. [...]
2015-08-03T19:39:39+00:00
I recently had a couple of students (and former students, and colleagues) ask me if I was attending any of a set of upcoming cons (non-academic/organizational conferences) in the general area of cyber security. That includes everything from the more highly polished Black Hat and DefCon events, to BSides events, DerbyCon, Circle City Con, et al. (I don’t include the annual RSA Conference in that list, however.) Twenty-five years ago, as the field was starting up, there were some that I attended. One could argue that some of the early RAID and SANS conferences fit this category, as did some of the National Computer Security Conferences. I even helped organize some of those events, including the 2nd RAID workshop! But that was a long time ago. I don’t attend cons now, and haven’t for decades. There are two main reasons for that.

The first is finances. Some of the events are quite expensive to attend — travel, housing, and registration all cost money. As an academic faculty member, and especially as one at a state university, I don’t have a business account covering things like these as an expense item. Basically, I would have to pay everything out of pocket, and that isn’t something I can afford to do on a regular (or even sporadic) basis. I manage to scrape up enough to attend the main RSA conference each year, but that is it.

Yes, faculty do sometimes have some funds for conferences. When we have grants from agencies such as NSF or DARPA, they often include travel funds, but usually we target those for places where the publication of our research (and that of our students) gives the most academic credit — IEEE & ACM events, for instance. Sometimes donors will provide gifts to the university for us to use on things not covered by grants, including travel. And some faculty have made money by spinning off companies and patenting their inventions, so they can use that. None of that describes my situation.
Over the last 20 years I have devoted most of my efforts at raising (and spending) funds towards the COAST lab and then CERIAS. When I have had funding for conferences, I have usually spent it on my students first, to allow them to get the professional exposure. There is seldom money left over for me to attend anything. I show up at a few events because I’m invited to speak and the hosts cover the expenses. The few things I’ve invented I’ve mostly put out in the public domain. I suppose it would be great if some donor provided a pot of money to the university for me to use, but I’ve gotten in the habit of spending what I have on junior colleagues and students, so I’m not sure what I’d do with it!

There is also the issue of time. I have finite time (and it seems more compressed as I get older), and there are only so many trips I have time (and energy) to make, even if I could afford more. Several times over the last few years I’ve hit that limit, as I’ve traveled for CERIAS, for ACM, and for some form of advising, back to back to back.

The second reason is that I’m not sure I’d learn much useful at most cons. I’ve been working (research, teaching, advising) in security and privacy for 30 years. I think I have a pretty good handle on the fundamentals, and many of the nuances. Most cons present either introductions for newbies, or demonstrations of hacks into existing systems. I don’t need the intros, and the hacks are not at all surprising. There is some great applications engineering work being done by the people involved, but unlike some people, I don’t need to see an explicit demonstration to understand the weaknesses in supply chains, poor authentication, lack of separation, no root of trust, and all the other problems that underlie those hacks. I eventually hear about the presentations after the fact when the[...]
2015-06-12T06:15:58+00:00
The U.S. limits the export of certain high-tech items that might be used inappropriately (from the government’s point of view). This is intended to prevent (or slow) the spread of technologies that could be used in weapons, used in hostile intelligence operations, or used against a population in violation of their rights. Some are obvious, such as nuclear weapons technology and armor-piercing shells. Others are clear after some thought, such as missile guidance software and hardware, and stealth coatings. Some are not immediately clear at all, and may have some benign civilian uses too, such as supercomputers, some lasers, and certain kinds of metal alloys.

Recently, there have been some proposed changes to the export regulations for some computing-related items. In what follows, I will provide my best understanding of both the regulations and the proposed changes. This was produced with the help of one of the professional staff at Purdue who works in this area, and also a few people in USACM who provided comments (I haven’t gotten permission to use their names, so they’re anonymous for now). I am not an expert in this area, so please do not use this to make important decisions about what is covered or what you can send outside the country! If you see something in what follows that is in error, please let me know so I can correct it. If you think you might have an export issue under this, consult with an appropriate subject matter expert.

Export Control

Some export restrictions are determined, in a general way, as part of treaties (e.g., nuclear non-proliferation). A large number are part of the Wassenaar Arrangement — a multinational effort by 41 countries generally considered to be allies of the US, including most of NATO. A few major countries, such as China, are not members, nor are nominal allies such as Pakistan and Saudi Arabia (to name a few).
The Wassenaar group meets regularly to review technology and determine restrictions, and it is up to the member states to pass rules or legislation for themselves. The intent is to help promote international stability and safety, although countries not within Wassenaar might not view it that way.

In the U.S., there are two primary regulatory regimes for exports: ITAR and EAR. ITAR is the International Traffic in Arms Regulations, administered by the Directorate of Defense Trade Controls at the Department of State. ITAR restricts the sale and export of items of primary (or sole) use in military and intelligence operations. The EAR is the Export Administration Regulations, administered by the Bureau of Industry and Security at the Department of Commerce. EAR rules generally cover items that have “dual use” — both military and civilian uses. These are extremely large, dense, and difficult-to-understand sets of rules. I had one friend label them as “clear as mud.” After going through them for many hours, I am convinced that mud is clearer!

Items designed explicitly for civilian applications without consideration of military use, or with known dual-use characteristics, are not subject to the ITAR, because dual-use and commodity items are explicitly exempted from ITAR rules (see sections 121.1(d) and 120.41(b) of the ITAR). However, being exempt from ITAR does not make an item exempt from the EAR! If any entity in the US — company, university, or individual — wishes to export an item that is covered under one of these two regimes, that entity must obtain an export license from the appropriate office. The license will specify what can be exported, to what countries, and when. Any export of a controlled item without a license is a violation of Federal law, with potentially severe consequences.

What constitutes an export is broader than some people may realize, including: Shipping something out[...]
2015-06-06T16:32:56+00:00
Preface by Spaf
Chair, ACM US Public Policy Council (USACM)†

About 20 years ago, there was a heated debate in the US about giving the government access to encrypted content via mandatory key escrow. The FBI and other government officials predicted all sorts of gloom and doom if it didn’t happen, including that it would prevent them from fighting crime, especially terrorists, child pornographers, and drug dealers. Various attempts were made to legislate access, including forced key escrow encryption (the “Clipper Chip”). Those efforts didn’t come to pass because eventually enough sensible — and technically literate — people spoke up. Additionally, the economic realities made it clear that people weren’t knowingly going to buy equipment with government backdoors built in.

Fast forward to today. In the intervening two decades, the forces of darkness did not overtake us as a result of no restrictions on encryption. Yes, there were some terrorist incidents, but either there was no encryption involved that made any difference (e.g., the Boston Marathon bombing), or there was plenty of other evidence but it was never used to prevent anything (e.g., the 9/11 tragedy). Drug dealers have not taken over the country (unless you consider Starbucks coffee a narcotic). Authorities are still catching and prosecuting criminals, including pedophiles and spies. Notably, even people who are using encryption in furtherance of criminal enterprises, such as Ross “Dread Pirate Roberts” Ulbricht, are being arrested and convicted. In all these years, the FBI has yet to point to anything significant where the use of encryption frustrated their investigations. The doomsayers of the mid-1990s were quite clearly wrong.
However, now in 2015 we again have government officials raising a hue and cry that civilization will be overrun, and law enforcement will be rendered powerless, unless we pass laws mandating that back doors and known weaknesses be put into encryption on everything from cell phones to email. These arguments have a strong flavor of déjà vu for those of us who were part of the discussion in the 90s. They are even more troubling now, given the scope of government eavesdropping, espionage, and massive data thefts: arguably, encryption is more needed now than it was 20 years ago.

USACM, the Public Policy Council of the ACM, is currently discussing this issue — again. As a group, we made statements against the proposals 20 years ago. (See, for instance, the USACM and IEEE joint letter to Senator McCain in 1997.) The arguments in favor of weakening encryption are as specious now as they were 20 years ago; here are a few reasons why:

- Weakening encryption to catch a small number of “bad guys” puts a much larger community of law-abiding citizens and companies at risk. Strong encryption is needed to help protect data at rest and in transit against criminal interception.
- A “golden key” or weakened cryptography is likely to be discovered by others. There is a strong community of people working in security — both legitimately and for criminal enterprises — and access to the “key” or methods to exploit the weaknesses will be actively sought. Once found, untold millions of systems will be defenseless — some, permanently.
- There is no guarantee that the access methods won’t be leaked, even if they are closely held. There are numerous cases of blackmail and bribery of officials leading to leaked information. Those aren’t the only motives, either. Consider Robert Hanssen, Edward Snowden, and Chelsea (Bradley) Manning: three individuals with top security clearances who stole/leaked extremely sensitive and classified infor[...]
Let me recommend an article in Communications of the ACM, June 2015, vol. 58(6), pp. 64-69. The piece is entitled "Plus Ça Change, Plus C’est la Même Chose," and the author is the redoubtable Corey Schou.
Corey has been working in information security education as long as (and maybe longer than) anyone else in the field. What’s more, he has been involved in numerous efforts to help define the field and make it more professional.
His essay distills a lot of his thinking about information security (and its name), its content, certification, alternatives, and the history of educational efforts in the area.
If you work in the field in any way (as a teacher, practitioner, policy-maker, or simply a hobbyist), there is probably something in the piece for you.
(And yes, there are several indirect references to me in the piece. Two are clearer than others — can you spot them? I definitely endorse Corey's conclusions, so perhaps that is why I'm there.)
In the late 1980s and early 1990s, around the time the Airbus A340 was introduced, those of us working in software engineering and safety used to exchange a (probably apocryphal) story. The story was about how the fly-by-wire avionics software on major commercial airliners was tested.
According to the story, Airbus engineers employed the latest and greatest formal methods, and provided model checking and formal proofs of all of their avionics code. Meanwhile, according to the story, Boeing performed extensive design review and testing, and made all their software engineers fly on the first test flights. The general upshot of the story was that most of us (it seemed) felt more comfortable flying on Boeing aircraft. (It would be interesting to see if that would still be the majority opinion in the software engineering community.)
Today, in a workshop, I was reminded of this story. I realized how poor a security choice that second approach would be, even if it might be a reasonable software test. All it would take is one engineer (or test pilot) willing to sacrifice himself or herself, or a well-concealed attack, or someone near the test field with a surface-to-air missile, and it would be possible to destroy the entire pool of engineers in one fell swoop…as well as the prototype, and possibly (eventually) the company.
Related to recent events, I would also suggest that pen-testing at the wrong time, with insufficient overall knowledge (or with evil intent), could lead to consequences with some similar characteristics. Testing on live systems needs to be carefully considered whenever catastrophic failures are possible.
No grand conclusions here, other than to think about how testing interacts with security. The threat to the design organization needs to be part of the landscape — not simply testing the deployed product to protect the end-users.
Here are a couple of items of possible interest to some of you.
First, a group of companies, organizations, and notable individuals signed on to a letter to President Obama urging that the government not mandate “back doors” in computing products. I was one of the signatories. You can find a news account about the letter here and you can read the letter itself here. I suggest you read the letter to see the list of signers and the position we are taking.
Second, I've blogged before about the new book by Carey Nachenberg (a senior malware expert who is one of the co-authors of Norton Security): The Florentine Deception. This is an entertaining mystery with some interesting characters and an intricate plot that ultimately involves a very real cyber security threat. It isn't quite in the realm of an Agatha Christie or a Charles Stross, but everyone I know who has read it (me included!) has found it an engrossing read.
So, why am I mentioning Carey's book again? Primarily because Carey is donating all proceeds from sales of the book to a set of worthy charities. Also, it presents a really interesting cyber security issue in an entertaining manner. Plus, I wrote the introduction to the book, explaining a curious "premonition" of the plot device in the book. What device? What premonition? You'll need to buy the book (and thus help contribute to the charities), read it (and be entertained), and then get the answer!
You can see more about the book and order a copy at the website for The Florentine Deception.
Dear Friends of CERIAS,
This Wednesday, April 29, will be the second annual Purdue Day of Giving. During this 24-hour online event, CERIAS will be raising awareness and funds for infosec research, security education, and student initiatives.
Plus, through a generous pledge from Sypris Electronics, every donation received this Wednesday will be matched, dollar for dollar! So, whether it's $10 or $10,000, your donation will be doubled and will have twice the impact in supporting CERIAS research, education, and programs (e.g., Women in Infosec, student travel grants, student conference scholarships, the CERIAS Symposium, …).
Make your donation online here (CERIAS is listed in the left column, about 1/3 down).
Now through Wednesday, help us spread the word by tagging your Twitter and Instagram posts with BOTH #PurdueDayofGiving and #CERIAS, and by sharing our message on Facebook and LinkedIn. You can post your thoughts, share the Day of Giving video, or encourage others to donate.
Thank you for your continued support of CERIAS and for considering a Purdue Day of Giving donation this Wednesday (April 29).