A few weeks ago you had a chance to ask Brian Krebs about security, cybercrime and what it's like to be the victim of swatting. Below you will find his answers to your questions.

Cowards as affiliates
You appear dedicated to continuing your reporting on cybercrime, even though it may result in harm to you (swatting, etc.). How often have you come into situations where someone you work with says they don't want to work with you any longer because association with you may make them a target for criminals or some such?
Krebs: I don't think I've had anyone unfriend me or stop talking to me because of what you describe, but it happens fairly often that I hear from strangers who have some information to impart but who are nervous about anyone finding out it was them who shared it.
Mostly, this comes from researchers who say they want to share some findings about something -- a specific cybercrime actor, site or service -- but in no way do they wish to be named, cited, credited or in any way referenced. It's impossible to know how many people decide it's not worth reaching out because of such concerns, but I hope it's not many.
Long term solutions?
Right now, security is a purely defensive battle. At best we have the enemy at a stalemate, where their attacks are foiled. There is no way to "win," since the attacker is usually located in a country with little to no cybercrime law, or even in a hostile country that rewards the activity. At best, we tread water.
Would a long-term solution be creating private networks like SIPRNet or NIPRNet, raising the barrier to entry so that an attacker has to get onto that private network first, something that might require physical access? Not 100% secure, but it raises the bar so that attackers have to have "boots on the ground."
If not, what would be workable, other than just air-gapping as much as possible? Would it be wise for each nation to mimic China and have its own Great Firewall, so attacks can be stopped well away from their intended targets?
Krebs: I think I understand the premise of your question, and the desire to wall everything off and/or start over. And do I detect what may be a passing reference to the money quote from Joshua in the excellent 1983 film WarGames: "Strange game. The only winning move is not to play"?
But I'd have to respectfully agree with several of the commenters here in saying that creating a whole bunch more secret or separate networks is very much not the answer. As someone already stated, this is actually the reality we have today with corporate intranets, which everyone seems to have, and these don't seem to do much to stop the data (s)pillage or to keep malicious hackers from getting in and having their way with the target and all of its information.
What would be wise is if the United States made it a national goal to become the world leader in developing software that is far more secure and robust than anywhere else. Unfortunately, this will probably never happen unless the market demands it, and the market generally responds to what consumers want, which is usually convenience (ease-of-use) over security.
Anyways...how about a nice game of chess?
Responsible Disclosure vs. Full Disclosure
by Anonymous Coward
Brian, Are you generally in the Responsible Disclosure camp or the Full Disclosure camp? And why? (I recognize that you may handle this on a case by case basis. In that event, what determines your approach?)
Krebs: Yeah, this definitely depends. I find it endlessly fascinating and frustrating at the same time to watch how differently organizations respond to reports about security vulnerabilities in their products, services and their own infrastructure. How they respond speaks volumes about their security maturity. Companies and organizations that lack a mature process for handling and responding to threats and vulnerabilities tend to react negatively -- lashing out at the individual reporting the weakness, ignoring the reporter, or even taking legal steps against the researcher.
Companies that have a mature process for handling this kind of thing can comparatively be a joy to work with, and are quite often grateful for anyone who privately reports their findings. The best manifestation of this is the bug bounty program, versions of which many companies are now beginning to embrace to varying degrees.
It seems like the phrases "responsible disclosure" and "full disclosure" are sort of loaded terms at this point in the debate. It's the journalistic equivalent of framing the abortion debate in camps of "anti-abortion" and "pro-rights". Disclosure is a two-way street, and it starts with organizations taking responsibility for security holes in the software and hardware that they create, sell and/or give away. When companies fail to do this in a timely manner, I think it's perfectly reasonable for researchers to disclose what they've found -- hopefully exercising a modicum of restraint in the process. The disclosure debate usually kicks into high gear when a company responsible for a serious bug in widely used software behaves like a child when presented with research into a vulnerability in its products.
I've been fortunate enough to be a fly on the wall, if you will, in several of these vulnerability reports, watching in disbelief as the vendor hems and haws and generally stalls for time, protesting that the bug is not remotely exploitable or isn't that big of a deal for such-and-such reasons, etc. That's frustrating and again speaks to the maturity level of the organization. In my experience, most security researchers are quite content to be agreeable on disclosure timelines if they feel like the vendor is taking seriously the time and effort the researcher has spent on his findings.
Granted, there's a great deal of room for debate over what constitutes a "reasonable" amount of time to wait for the vendor to respond before going public, but I do think it's important to give the vendor at least a few weeks to respond. However, in cases where the vulnerability is actively being exploited, disclosing immediately, publicly and completely is always in the public interest.
Should We Trust Kaspersky?
As we seem to be heading back down into the familiar territory of the Cold War, I often wonder if nationalism is something we should consider when thinking about security. For instance, I believe that Kaspersky is a very talented company, but I can't help but feel that they would be quite willing to turn a blind eye to malware from their own government. I hear commercials for Kaspersky threat-detection software all the time, but I would be hard pressed to actually use any of it. It certainly seems China, Russia and parts of Europe are taking country of origin into account when evaluating American security products. Am I wearing a tin-foil hat in feeling we should think twice about trusting Kaspersky?
Krebs: I don't think you necessarily have a tin-foil hat on. I should preface my remarks by saying that I'm sure every security firm has all kinds of dirty laundry it would prefer never see the light of day. And I personally know many of the security researchers at Kaspersky and find them to be some of the best at what they do, and very good people as well. If it means anything, I have, for many years, used Kaspersky's software to protect my own networks. It's about the best at what it does.
That said, allow me to share an observation that really struck me on my visit to Moscow in 2011. I was a guest of Kaspersky Lab, and they were very gracious and hospitable. However, I went there in large part in the hopes of rounding out some information I'd compiled about several big-time cybercriminals I was tracking at the time -- probably a dozen or so guys who I knew were definitely in Moscow and would almost certainly be known to anyone even moderately interested in cybercrime (on either side). I sat down with probably 8 or 9 different researchers at Kaspersky, and in my interviews asked each about various individuals who were quite well known in the hacker scene in Russia and abroad. To my surprise, nobody there would talk to me about these individuals. I have no idea if this was because of a corporate policy or what, but I found it singularly amazing that these experts would show so little interest in actors who were so clearly operating under their noses.
Internet of Things
by Dr J. keeps the nerd
Hi Brian, Thanks for joining us. What are the worst mistakes we are already making on connected devices, and what should we be doing to make them less desirable as targets?
Krebs: You mean, besides connecting them in the first place? Seriously, the main reason I keep a software firewall installed on one of my machines is to learn which programs or gadgets on my home network are phoning home or who-knows-where. For the most part, we've shown ourselves to be incapable of designing, or at least releasing, software for mass commercial use that is not Swiss cheese from a security perspective. So why should we expect things to be any different when we talk about network-aware devices and embedded appliances? All we've done in that case is take the buggy software and stuff it into something that is even more difficult (if not impossible) to update.
What should we be doing to make all these devices less desirable as targets? Quit connecting them to the internet! Seriously. It would be nice if more companies that shipped devices made them disconnected from the Internet by default, or at least minimally so. But in most cases the opposite is true; the thing tries to get an IP address and you have to remember to disable a raft of features in said thing.
A lot of security is determined by the default settings, because the vast majority of users/customers never alter the defaults. With stuff that falls under the "internet of things" category, we'd all be much better off if they were more like "things with internet optional."
White vs Grey Hat
Hey Brian, I'm wondering what side of the fence you think you are on. Your readership and affiliations seem to be the mainstream "white-hat" security community; but many of your tactics can be described as grey-hat at best -- e.g. doxxing hackers/malware authors/spammers, using social engineering to obtain information, etc. It seems as though this is justified because it is used against targets you perceive as being immoral, unethical, and/or worthy of such intrusion. My question is: do you feel you are a white-hat hacker, or do you think your use of black-hat tactics against black hats makes you something different?
Krebs: I'm not sure specifically what "grey hat" and "black hat" techniques you're referring to. Also, I take issue with your assertion that I somehow practice social engineering to gain information. I'll admit to once or twice using Spooftel to get someone who was dodging my calls to answer the phone, but I've never misrepresented myself or what I'm doing. In all of my reporting and investigation -- even with black hats -- I am up front about who I am and what I'm after.
Now, it is true that some of my reporting has been based on hacked cybercrime forums and hacked cybercriminals, but I can't recall an instance wherein I was the one responsible for the hacking. My first book, "Spam Nation," would not have been possible if two of the biggest cybercrime kingpins had not employed their top spammers and cybercrooks to break into each other’s networks and steal several years’ worth of banking and customer data, and then leak that data to Yours Truly and to the authorities. In my experience, the only thing cybercrooks like better than breaking into databases and stealing/selling data for financial gain is hacking each other for profit/amusement/insert reason here.
If I approach people on cybercrime forums, it is always just to learn more about the services and products they have to offer and are quite willing to talk about. Will I register on cybercrime forums under my own name? Of course not! Then again, nobody on those forums does that!
Actually, I *did* try to do that several years back, in two different cases. In one instance, when I told the admin in charge that I wanted the nickname "briankrebs," he laughed and said basically, "good one!" The other time I tried to claim that nickname, it was already taken.
I'll confess, though, that I've been guilty of a certain schadenfreude when it comes to writing about the arrest, conviction and/or other demise of people who have -- apparently apropos of nothing -- targeted me and/or my family publicly while hiding behind an assumed veil of anonymity. These kinds of cowards consistently ruin the Internet for everyone, and I won't apologize for calling them out.
On a more philosophical note, I find it fascinating that so many involved in black hat activities online are so horrible at operational security. That probably has more to do with the general lack of consequences for most actors involved in this type of activity -- particularly those in certain Eastern European countries.
defining "computer security" for your clients
Mr. Krebs, thank you for your time. My question is about defining "computer security" in relation to public perceptions vs. technical facts. It was reported in 2006 that the NSA was keeping massive databases of Americans' phone calls and metadata. Obviously, Snowden's revelations were much more heavily reported and contained more information, but the public was shocked by information that was already public. When it comes to cybersecurity customers, how do you explain and contextualize what service you are providing, given the vast differences in perception of "security"?
Krebs: I try, as much as I am able, to focus on reporting stories that you won't find anywhere else. As an independent reporter, I have the luxury of not spending a great deal of time chasing other reporters' stories. Also, I try not to practice "churnalism," which is just regurgitating stories that other reporters have written. As for a "service" I might be offering, all I can say is that my goal is to communicate, in as simple and straightforward a way as I can, news that is not getting enough attention or is not being well served by other outlets.
To your question about the differences in perception about security, I couldn't agree more. But to paraphrase Tip O'Neill, all security is local: security as a news subject means little unless you can communicate the complex stuff in a way that mere mortals can comprehend, appreciate and do something about. If I am able to do that well and consistently, I hope that's a service of a kind.