1) Serious Threats?
While studying cryptanalysis, I've been learning about a number of interesting attacks such as timing attacks and differential power attacks (your specialty, if I recall). While these attacks certainly seem to help cryptanalysis of various ciphers, how practical are they in terms of real security? That is to say, what are the chances that these methods are actively being used by attackers?
It depends on the target. If the system you are trying to protect isn't worth an attacker's effort, or if there are easier ways to break in, the chances are small. On the other hand, if you are protecting extremely desirable data (money, data that will affect stock prices, Star Trek episodes, government secrets, etc.) you have to assume that smart people are going to attack your security. We spend a lot of time helping credit card companies and other smart card users build testing programs -- their products need to operate in high-risk environments where DPA, timing analysis, and other sophisticated attacks are a real problem.
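Timing analysis often exploits data-dependent early exits in comparison loops; as a minimal sketch of the standard software countermeasure, here is a constant-time comparison (the function name `ct_equal` is mine, not from any particular library):

```c
#include <stddef.h>
#include <stdint.h>

/* Compare two equal-length buffers in time that does not depend on
   where they first differ: instead of returning at the first mismatch,
   accumulate the XOR of every byte pair, so an attacker can't use
   response timing to learn how many leading bytes matched. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;   /* 1 if the buffers are identical, 0 otherwise */
}
```

Note this only removes the timing channel of the comparison itself; countermeasures against power analysis are a separate (and much harder) problem.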
2) Worst implementation?
In your consulting capacity (and without naming names), have you ever run across a company's security implementation that was so bad, so insecure, so open to exploitation that you felt an overwhelming compulsion to shut down the servers, lock the doors and call in a security SWAT team? That you actually felt like going out and shorting the company's stock? That you had to hold back from whomping someone upside the head? That you inquired about having the head of security investigated to make sure he wasn't a black hat hacker/competitor's security spy/foreign agent? How bad was the worst implementation you've ever seen?
To save typing, can I make a list of the systems that don't make me uncomfortable?
A smart, creative, experienced, determined attacker can find flaws in just about any standard commercial product. Our security evaluations find catastrophic problems more than half the time, even though evaluation projects generally have very limited budgets.
The most common situation is where the system's security objectives could theoretically be met if the designers, implementers, and testers never made any errors. For example, in a quest for slightly better performance, operating systems put lots of complexity into the kernel and give device drivers free rein over the system. This approach would be great if engineers were infallible, but it's a recipe for trouble if all you have are human beings.
What I find most frustrating isn't bad software -- it's situations where we tell a company about a serious problem, but they decide to ignore it because we're under an NDA and therefore the problem won't hurt sales. If your company is knowingly advertising an insecure or untrustworthy product as secure, try to do something about it. Intentionally misleading customers is illegal, immoral, and a gigantic liability risk. (Keywords: Enron, asbestos, cigarettes.)
It's also frustrating that users keep buying products from companies that make misleading or unsupported claims about their security. If users won't pay extra for security, companies are going to keep selling insecure products (and our market will remain relatively small :-).
As for the worst security, I nominate the following password checking code:
    gets(userEntry);
    if (memcmp(userEntry, correctPassword, strlen(userEntry)) != 0)
        return (BAD_PASSWORD);
ROT13 SPOILER: Na rzcgl cnffjbeq jvyy cnff guvf purpx orpnhfr gur pbqr hfrf gur yratgu bs gur hfre ragel, abg gur yratgu bs gur pbeerpg cnffjbeq. Bgure cbgragvny ceboyrzf (ohssre biresybjf, rgp.) ner yrsg nf na rkrepvfr sbe gur ernqre. [Funzryrff cyht: Vs lbh rawbl ceboyrzf yvxr guvf, unir fgebat frphevgl rkcrevrapr, pbzzhavpngr jryy, naq jnag n wbo ng n sha (naq cebsvgnoyr) pbzcnal, ivfvg uggc://jjj.pelcgbtencul.pbz/pbzcnal/pnerref.ugzy.]
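For readers who would rather decode the spoiler mechanically than by hand, a minimal ROT13 sketch in C:

```c
/* ROT13: rotate each ASCII letter 13 places, leaving everything else
   untouched. Because 13 + 13 = 26, applying it twice restores the
   original text -- encoding and decoding are the same operation. */
char rot13_char(char c) {
    if (c >= 'a' && c <= 'z') return (char)('a' + (c - 'a' + 13) % 26);
    if (c >= 'A' && c <= 'Z') return (char)('A' + (c - 'A' + 13) % 26);
    return c;
}

void rot13(char *s) {
    for (; *s; s++)
        *s = rot13_char(*s);
}
```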
3) Internet broken?
The Internet was primarily designed for use by researchers who were collaborating on similar projects, and so security was not part of the design. Would you advocate designing and building another Internet where security was a major design goal? Or can we tweak the current Internet to reduce the amount of maliciousness that goes on now?
I don't think the core Internet is the problem. While some protocols need upgrading, the Internet does a great job of providing untrusted, unreliable communications. Trying to impose security policies in the network layer would destroy the spontaneity and openness that make the Internet great. In other words, we need to find ways to cope with the fact that the Internet is always going to be dangerous.
The place where I see the real need for improved security is in the protocols, applications, and devices that use the Internet. For example, Moore's Law has made processing power so cheap that there is no reason why web pages aren't all encrypted. Similarly, IPSEC, VPN tunnels, and e-mail encryption should be used much more widely.
Of course, large networks are always going to have unpredictable complex security risks. As a result, if you are dealing with critical systems, they should be as disconnected as possible.
4) Dive Right In
by Accidental Hack
What does a newbie do? Having been put in a position where I'm partly responsible for server security, and having been put in that position without the proper background (and the responsibility is here to stay), how do I get my head straight on the core issues and make sure I'm not leaving the doors open for anyone to do whatever they want? Reading books/articles doesn't seem to be enough, but if that's the best place to begin, any recommendations?
You are really asking two questions: how to learn about security, and what to do if you are put in situations where you don't know what to do.
For people wanting to learn about security or cryptography, I'm a big supporter of hands-on experience. When you hear about a security bug, go see what actually went wrong. Implement DES, AES, RSA, and your own big-number library. Set up a couple of poorly-configured Linux boxes and break into them. Install a sniffer and sniff your own network traffic. Observe and modify software programs. Learn C/C++. Study known bugs in open-source crypto code and hunt for new ones. If you have the budget at work, hire a security expert and ask lots of questions. Whatever you do, be careful to follow the laws (even if you disagree with them) and act ethically.
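As a concrete starting point for the "implement RSA" suggestion above: the core operation is modular exponentiation. A toy square-and-multiply sketch follows (real RSA needs a big-number library and side-channel-resistant arithmetic, which is exactly the hands-on work being suggested):

```c
#include <stdint.h>

/* Square-and-multiply modular exponentiation: computes base^exp mod m.
   This is the heart of textbook RSA. Toy version only: it overflows for
   moduli above 32 bits, runs in key-dependent time, and real RSA needs
   multi-precision integers. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)                      /* this exponent bit is set */
            result = (result * base) % m;
        base = (base * base) % m;         /* square for the next bit */
        exp >>= 1;
    }
    return result;
}
```

With the classic textbook parameters p=61, q=53 (n=3233, e=17, d=2753), `modexp(65, 17, 3233)` encrypts 65 to 2790, and `modexp(2790, 2753, 3233)` recovers 65.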
The question of what to do if you are put in a situation beyond your skill level ultimately depends on the risks involved. With ordinary servers (corporate e-mail and the like), occasional problems may not be that catastrophic if you have good backups.
On the other hand, if the chances or consequences of failure are severe, you can't just "give it a try" any more than I should try open heart surgery or piloting a 747. For example, if you are dealing with critical infrastructure, likely fraud targets, pay TV networks (or anything involving piracy), or large customer databases, get help. Even if you are experienced, you need to have someone check your work. When you do hire someone, make sure they will answer questions, educate you, and provide good documentation. Avoid mad scientists, people who have never done serious engineering, and anyone who views security audits as threatening or insulting.
5) Quantum Computing and Cryptography
by Nova Express
Will the advent of quantum computing render even current, state-of-the-art cryptography obsolete? Is there any way that cryptography can overcome the challenge presented by quantum computing? And how long will it be, if ever, until quantum computers can break current, state-of-the-art cryptography?
Quantum computing is possibly the coolest discovery in theoretical computer science in the last few decades because it completely changes the rules of computation.
As a practical matter, however, it's not a significant security risk compared to the other things we have to worry about. I think it's highly unlikely that quantum computers will overtake regular computers in the next 50 years at (for example) breaking RSA. The reason for my skepticism is that the challenges involved in building a useful quantum computer are staggering. For example, decoherence becomes a much greater problem as the computer gets larger, yet quantum computers have to be huge because they don't operate sequentially. (Imagine hardware design with no flip flops -- just combinatorial logic.) While error-correction techniques have been proposed, these further increase the complexity of the circuit.
If someone did find a way to build arbitrarily large quantum computers, it would be the end of most existing public key cryptographic schemes. Symmetric cryptography, however, would still work, though key lengths would need to be doubled to get the same level of security.
Note: Quantum computing is different from quantum cryptography. The latter is a method for preventing eavesdropping, typically using polarized photons and entanglement. While quantum cryptography is feasible to implement and is also neat research, I don't see any practical use for it because it requires that parties exchange photons directly. As a result, it won't work over packet switched networks. Furthermore, existing algorithms like AES can do all the same things, and much more. As a result, the only scenario I can see where quantum cryptography would be relevant would be an unbelievably weird discovery that completely demolished cryptography, such as someone showing that P=NP.
6) SSL and Forward Security
First of all, thank you for agreeing to be interviewed here. It's greatly appreciated.
I'm curious if you wouldn't mind elaborating a bit on the catastrophic failure of the SSL security architecture given the compromise of an RSA private key. An attacker can literally sniff all traffic for a year, break in once to steal the key, then continue to passively decrypt not only all of last year's traffic but all of next year's too. And if he'd like to partake in more active attacks -- session hijacking, malicious data insertion, etc. -- that's fine too.
In short, why? After so much work was done to come up with a secure per-session master secret, what caused the asymmetric component to be left so vulnerable? Yes, PGP's just as vulnerable to this failure mode, but PGP doesn't have the advantage of a live socket to the other host.
More importantly, what can be done for those nervous about this shortcoming in an otherwise laudable architecture? I looked at the DSA modes, but nothing seems to accelerate them (which kills their viability for the sites that would need them most). Ephemeral RSA seemed interesting, but according to Rescorla's documentation it only supports a maximum of 512 bits for the per-session asymmetric key -- insufficient. If Verisign would sign a newly generated key each day, that'd work -- but then, you'd probably need to sign over part of your company to afford the service. Would it even be possible for them to sign one long term key, tied to a single fully qualified domain name, that could then sign any number of ephemeral or near-ephemeral short term keys within the timeframe allotted in the long term cert?
Thanks again for any insight on the matter you may be able to provide!
I specifically designed the ephemeral Diffie-Hellman with DSA option in SSL 3.0 to provide perfect forward secrecy (PFS). While it used to be true that DSA's performance was a concern, it shouldn't be a problem anymore.
[*] If you want to avoid DSA, you can also do a normal RSA handshake then immediately renegotiate with an uncertified ephemeral Diffie-Hellman handshake. (SSL 3.0 and TLS 1.0 allow either party to request a renegotiation at any time, with the renegotiation process protected underneath the first handshake.) As your question mentions, short-lived certificates would work if a suitable CA provided them.
Making PFS mandatory wasn't feasible in SSL 3.0 because of performance requirements, the need to maintain compatibility with legacy RSA certificates, and licensing issues. (Back in 1996, RSA was patented and most companies only had limited RSA toolkit licenses, not patent licenses.)
Overall, I'm delighted to see how many ways SSL 3.0 is being used and that it's become the most widely deployed cryptographic protocol in the world. While there are reasons to debate design choices I made, I don't know that the protocol's handling of PFS is one of them. Although some implementations have had bugs and guidelines had to be added to address error-analysis attacks, the overall protocol has held up well.
[*] In 1996 (when the SSL 3.0 spec came out), computers were only 4% of their current speed. (Moore's Law predicts 4.67 speed doublings in 7 years.) Today, any modern CPU should give well beyond 200 2048-bit DSA verifies/second. Averaging 10 handshakes/second (5% load) = 864K connections daily per CPU. Unless you are running one of the largest web sites (or have your server misconfigured to disable session resumption), this isn't likely to be a problem. For really high-volume servers, SSL accelerators are affordable and very fast. In general, it's rare these days to find a situation where the speed of standard cryptographic operations is actually a problem.
7) Trust in Open P2P Communities
as software engineers building open source p2p applications (gnutella), we are faced with a huge problem: how do we establish trust in an open environment where any application that speaks the protocol can participate? we've thought of various cryptographic systems to establish trust, but they have several fatal flaws - they require some sort of centralization (a no-no in a p2p environment), they lock out 'untrusted' vendors, etc.
what can we do to maintain an open environment and establish trust between peers?
There certainly are decentralized ways to establish trust (PGP's web of trust comes to mind), but you can't have trust and complete anonymity. The best you'll be able to do is evaluate participants based on their past actions and assertions. Before you can begin a design, you'll need to clearly define what you are trying to enable, what you are trying to prevent, and what automated rules can distinguish between legitimate and illegitimate actions.
(Note: While I presume that the question relates to legitimate P2P applications, piracy over P2P systems is driving copyright owners to seek legislative and legal relief. The fact that the Internet can be used to massively violate intellectual property rights doesn't make it moral to do so.)
8) How do you think?
by Charles Dodgeson
When I first read about some discovery of a weakness (for example, I know your name from your work on MD5), I am always struck by the thinking beyond the framework of the designer of the system and of the community to date. The same thing strikes me about timing attacks and similar sorts of things. These are things that I wouldn't have thought of in a million years. Can you give any insight into how minds like yours work? And to what extent do you think this might be a trainable skill?
I normally hate the cliche of "thinking outside of the box", but here it is fully appropriate.
Security work requires understanding systems at multiple levels. For example, differential power analysis involves transistor-level properties affecting logic units affecting microcode affecting CPUs affecting crypto algorithms affecting protocols affecting business requirements. For engineers who are used to working at only a single layer, security results often seem surprising. Broad experience is also important because the vast majority of security problems involve unintended interactions between areas designed by different people.
Two specific subjects that I think are often neglected are low-level programming and statistics. These are essential to understand how things actually work and to assess the likelihood that systems will fail. A skeptical mindset is also important. Try to assume things are bad until you are convinced otherwise.
Some specific questions that are helpful to ask include:
- What information and capabilities are available to attackers?
- What esoteric corner cases has nobody studied carefully?
- How would a lazy or inexperienced designer have designed the system?
- What states can each participant be in?
- Where is the most complexity in the security perimeter? (Complex parts are the most likely to fail.)
- What unwritten assumptions are being made, and are they correct?
9) Is the Technology ahead of us?
Thanks for letting us ask you these questions.
Over the last couple of decades, cryptography has gone from being the domain of major governments, big business, and the odd hobbyist and researcher to being a massive public industry that anyone can (and does) participate in, with new algorithms published and new applications announced almost every week. Meanwhile, we learn of vulnerabilities in various implementations of cryptosystems much more frequently than we hear of people discovering fundamental flaws in the cryptosystems themselves.
Given these facts, do you think we need to change focus, turning to validating and "approving" implementations of cryptosystems (such as your own SSL 3.0) or should the emphasis of the "crypto community" continue to be innovation in fundamentals of cryptographic systems and new applications for them? How important is it to have someone verify that a cryptosystem is implemented well?
Validation is by far the most critical unsolved problem in security.
I view security as probabilistic: there is always some chance of failure, and validation is the only way to reduce the odds of failure. For example, a well-tested piece of code is more secure than an identical piece of code that hasn't been tested.
Although innovation is great on the research side, real-world systems should use well-tested techniques wherever possible. For example, on the algorithm side, we use RSA, triple DES, AES, and SHA-1 at Cryptography Research unless we have to use something else. (This is rare.) We use these algorithms because they are well reviewed, making the risk of an unexpected cryptanalytic attack low. In contrast, catastrophic flaws in new schemes are very common.
When you move beyond the basic algorithms, validation unfortunately becomes extremely difficult for many reasons:
- The complexity of software is increasing exponentially, but the number of skilled security experts (and their intelligence and productivity) is staying roughly constant.
- Many designs are so poorly architected or implemented that they are infeasible to validate.
- Validation is much more difficult than writing new code (and it's less fun), so many people avoid it.
- Engineers are cranking out such vast quantities of code that testing can't possibly keep up.
- Existing validation tools are really quite poor.
- The cost of security testing can be hard to justify because most users won't pay extra for better security.
- There is no easy way for users to distinguish between well-tested products and those that aren't.
- Testing takes a long time, slowing down product launches.
- There is no easy way to standardize security evaluations because attackers don't limit themselves to standard attacks.
- Catching 90% of the flaws doesn't help if attackers are willing to look 10 times harder to find flaws.
- Developers don't have much incentive to make painful sacrifices for security because they aren't the ones who incur the risk.
10) ROT13
by Anonymous Coward
Uv, V'z jbaqrevat vs lbh guvax gurer'f n shgher sbe EBG13. V'ir urneq vg'f cerggl frpher...
Lbh pna ernq guvf? Qnza!
Holy cow! Juvyr lbh znl unir svtherq bhg zl fhcre-frperg EBG13 pvcure, abobql jvyy rire penpx *guvf* zrffntr orpnhfr V fjvgpurq gb bhe hygen-frperg cyna O: nccylvat n Pnrfre pvcure 13 gvzrf :-).