Security

Security Expert Paul Kocher Answers, In Detail

Posted by Roblimo
from the your-lock-your-key dept.
Paul Kocher, president of Cryptography Research, Inc. and one of the architects of SSL 3.0, said, "The questions were great -- definitely one of the most fun interviews I've ever done." His answers score high on the 'informative' scale, too. Thanks to everyone who submitted such fine questions, and thanks to Paul for putting some real time and effort into his answers.

1) Serious Threats?
by Prizm

While studying cryptanalysis, I've been learning about a number of interesting attacks such as timing attacks and differential power attacks (your specialty, if I recall). While these attacks certainly seem to help cryptanalysis of various ciphers, how practical are they in terms of real security? That is to say, what are the chances that these methods are actively being used by attackers?

Paul:

It depends on the target. If the system you are trying to protect isn't worth an attacker's effort, or if there are easier ways to break in, the chances are small. On the other hand, if you are protecting extremely desirable data (money, data that will affect stock prices, Star Trek episodes, government secrets, etc.) you have to assume that smart people are going to attack your security. We spend a lot of time helping credit card companies and other smart card users build testing programs -- their products need to operate in high-risk environments where DPA, timing analysis, and other sophisticated attacks are a real problem.
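To make the timing-attack risk concrete: a comparison loop that exits at the first mismatching byte runs measurably longer the more leading bytes a guess gets right, letting an attacker recover a secret byte by byte. A common countermeasure is a constant-time compare; here is a minimal sketch (not code from the interview):

```c
#include <stddef.h>

/* Compare two equal-length buffers without an early exit, so the
 * running time is independent of how many leading bytes match.
 * Returns 1 if equal, 0 otherwise. */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences; never break */
    return diff == 0;
}
```

Both inputs must already be the same fixed length (e.g. hashes of the secrets) so that the length itself leaks nothing.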

2) Worst implementation?
by burgburgburg

In your consulting capacity (and without naming names), have you ever run across a company's security implementation that was so bad, so insecure, so open to exploitation that you felt an overwhelming compulsion to shut down the servers, lock the doors and call in a security SWAT team? That you actually felt like going out and shorting the company's stock? That you had to hold back from whomping someone upside the head? That you inquired about having the head of security investigated to make sure he wasn't a black hat hacker/competitor's security spy/foreign agent? How bad was the worst implementation you've ever seen?

Paul:

To save typing, can I make a list of the systems that don't make me uncomfortable?

A smart, creative, experienced, determined attacker can find flaws in just about any standard commercial product. Our security evaluations find catastrophic problems more than half the time, even though evaluation projects generally have very limited budgets.

The most common situation is where the system's security objectives could theoretically be met if the designers, implementers, and testers never made any errors. For example, in a quest for slightly better performance, operating systems put lots of complexity into the kernel and give device drivers free rein over the system. This approach would be great if engineers were infallible, but it's a recipe for trouble if all you have are human beings.

What I find most frustrating isn't bad software -- it's situations where we tell a company about a serious problem, but they decide to ignore it because we're under an NDA and therefore the problem won't hurt sales. If your company is knowingly advertising an insecure or untrustworthy product as secure, try to do something about it. Intentionally misleading customers is illegal, immoral, and a gigantic liability risk. (Keywords: Enron, asbestos, cigarettes.)

It's also frustrating that users keep buying products from companies that make misleading or unsupported claims about their security. If users won't pay extra for security, companies are going to keep selling insecure products (and our market will remain relatively small :-).

As for the worst security, I nominate the following password checking code:

  gets(userEntry);
     if (memcmp(userEntry, correctPassword, strlen(userEntry)) != 0)
         return (BAD_PASSWORD);

ROT13 SPOILER: Na rzcgl cnffjbeq jvyy cnff guvf purpx orpnhfr gur pbqr hfrf gur yratgu bs gur hfre ragel, abg gur yratgu bs gur pbeerpg cnffjbeq. Bgure cbgragvny ceboyrzf (ohssre biresybjf, rgp.) ner yrsg nf na rkrepvfr sbe gur ernqre. [Funzryrff cyht: Vs lbh rawbl ceboyrzf yvxr guvf, unir fgebat frphevgl rkcrevrapr, pbzzhavpngr jryy, naq jnag n wbo ng n sha (naq cebsvgnoyr) pbzcnal, ivfvg uggc://jjj.pelcgbtencul.pbz/pbzcnal/pnerref.ugzy.]
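For readers who have already decoded the spoiler, here is one way the snippet could be repaired (a sketch only: the OK/BAD_PASSWORD codes and buffer size are assumptions, and a real system would compare password hashes rather than plaintext):

```c
#include <stdio.h>
#include <string.h>

#define OK           0
#define BAD_PASSWORD 1

/* Compare the full strings, so the comparison length no longer
 * depends on what the user typed -- an empty entry can't match. */
int check_password(const char *userEntry, const char *correctPassword)
{
    return strcmp(userEntry, correctPassword) == 0 ? OK : BAD_PASSWORD;
}

/* Read a line with a bounded fgets() instead of gets(), which can
 * overflow any buffer, then strip the trailing newline. */
int read_and_check(const char *correctPassword)
{
    char userEntry[256];
    if (fgets(userEntry, sizeof userEntry, stdin) == NULL)
        return BAD_PASSWORD;
    userEntry[strcspn(userEntry, "\n")] = '\0';
    return check_password(userEntry, correctPassword);
}
```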

3) Internet broken?
by bpfinn

The Internet was primarily designed for use by researchers who were collaborating on similar projects, and so security was not part of the design. Would you advocate designing and building another Internet where security was a major design goal? Or can we tweak the current Internet to reduce the amount of maliciousness that goes on now?

Paul:

I don't think the core Internet is the problem. While some protocols need upgrading, the Internet does a great job of providing untrusted, unreliable communications. Trying to impose security policies in the network layer would destroy the spontaneity and openness that make the Internet great. In other words, we need to find ways to cope with the fact that the Internet is always going to be dangerous.

The place where I see the real need for improved security is in the protocols, applications, and devices that use the Internet. For example, Moore's Law has made processing power so cheap that there is no reason why web pages aren't all encrypted. Similarly, IPSEC, VPN tunnels, and e-mail encryption should be used much more widely.

Of course, large networks are always going to have unpredictable complex security risks. As a result, if you are dealing with critical systems, they should be as disconnected as possible.

4) Dive Right In
by Accidental Hack

What does a newbie do? Having been put in a position where I'm partly responsible for server security, and having been put in that position without the proper background (and the responsibility is here to stay), how do I get my head straight on the core issues and make sure I'm not leaving the doors open for anyone to do whatever they want? Reading books/articles doesn't seem to be enough, but if that's the best place to begin, any recommendations?

Paul:

You are really asking two questions: how to learn about security, and what to do if you are put in situations where you don't know what to do.

For people wanting to learn about security or cryptography, I'm a big supporter of hands-on experience. When you hear about a security bug, go see what actually went wrong. Implement DES, AES, RSA, and your own big-number library. Set up a couple of poorly-configured Linux boxes and break into them. Install a sniffer and sniff your own network traffic. Observe and modify software programs. Learn C/C++. Study known bugs in open-source crypto code and hunt for new ones. If you have the budget at work, hire a security expert and ask lots of questions. Whatever you do, be careful to follow the laws (even if you disagree with them) and act ethically.

The question of what to do if you are put in a situation beyond your skill level ultimately depends on the risks involved. With ordinary servers (corporate e-mail and the like), occasional problems may not be that catastrophic if you have good backups.

On the other hand, if the chances or consequences of failure are severe, you can't just "give it a try" any more than I should try open heart surgery or piloting a 747. For example, if you are dealing with critical infrastructure, likely fraud targets, pay TV networks (or anything involving piracy), or large customer databases, get help. Even if you are experienced, you need to have someone check your work. When you do hire someone, make sure they will answer questions, educate you, and provide good documentation. Avoid mad scientists, people who have never done serious engineering, and anyone who views security audits as threatening or insulting.

5) Quantum Computing and Cryptography
by Nova Express

Will the advent of quantum computing render even current, state-of-the-art cryptography obsolete? Is there any way that cryptography can overcome the challenge presented by quantum computing? And how long will it be, if ever, until quantum computers can break current, state-of-the-art cryptography?

Paul:

Quantum computing is possibly the coolest discovery in theoretical computer science in the last few decades because it completely changes the rules of computation.

As a practical matter, however, it's not a significant security risk compared to the other things we have to worry about. I think it's highly unlikely that quantum computers will overtake regular computers in the next 50 years at (for example) breaking RSA. The reason for my skepticism is that the challenges involved in building a useful quantum computer are staggering. For example, decoherence becomes a much greater problem as the computer gets larger, yet quantum computers have to be huge because they don't operate sequentially. (Imagine hardware design with no flip flops -- just combinatorial logic.) While error-correction techniques have been proposed, these further increase the complexity of the circuit.

If someone did find a way to build arbitrarily large quantum computers, it would be the end of most existing public key cryptographic schemes. Symmetric cryptography, however, would still work, though key lengths would need to be doubled to get the same level of security.
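The doubling rule comes from Grover's algorithm: a quantum computer can search an unstructured space of 2^n keys in roughly 2^(n/2) steps, so an n-bit key retains only about n/2 bits of effective security. In symbols:

```latex
\underbrace{O(2^{n})}_{\text{classical brute force}}
\quad\longrightarrow\quad
\underbrace{O\!\left(\sqrt{2^{n}}\right) = O\!\left(2^{n/2}\right)}_{\text{Grover's algorithm}}
```

Doubling the key length (e.g. 128 to 256 bits) restores the original work factor.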

Note: Quantum computing is different from quantum cryptography. The latter is a method for preventing eavesdropping, typically using polarized photons and entanglement. While quantum cryptography is feasible to implement and is also neat research, I don't see any practical use for it because it requires that parties exchange photons directly. As a result, it won't work over packet-switched networks. Furthermore, existing algorithms like AES can do all the same things, and much more. So the only scenario I can see where quantum cryptography would be relevant would be an unbelievably weird discovery that completely demolished cryptography, such as someone showing that P=NP.

6) SSL and Forward Security
by Effugas

Paul,

First of all, thank you for agreeing to be interviewed here. It's greatly appreciated.

I'm curious if you wouldn't mind elaborating a bit on the catastrophic failure of the SSL security architecture given the compromise of an RSA private key. An attacker can literally sniff all traffic for a year, break in once to steal the key, then continue to passively decrypt not only all of last year's traffic but all of next year's too. And if he'd like to partake in more active attacks -- session hijacking, malicious data insertion, etc. -- that's fine too.

In short, why? After so much work was done to come up with a secure per-session master secret, what caused the asymmetric component to be left so vulnerable? Yes, PGP's just as vulnerable to this failure mode, but PGP doesn't have the advantage of a live socket to the other host.

More importantly, what can be done for those nervous about this shortcoming in an otherwise laudable architecture? I looked at the DSA modes, but nothing seems to accelerate them (which kills its viability for the sites who would need it most). Ephemeral RSA seemed interesting, but according to Rescorla's documentation it only supports a maximum of 512 bits for the per-session asymmetric key -- insufficient. If Verisign would sign a newly generated key each day, that'd work -- but then, you'd probably need to sign over part of your company to afford the service. Would it even be possible for them to sign one long term key, tied to a single fully qualified domain name, that could then sign any number of ephemeral or near-ephemeral short term keys within the timeframe allotted in the long term cert?

Thanks again for any insight on the matter you may be able to provide!

Yours Truly,

Dan Kaminsky
DoxPara Research
Paul:

I specifically designed the ephemeral Diffie-Hellman with DSA option in SSL 3.0 to provide perfect forward secrecy (PFS). While it used to be true that DSA's performance was a concern, it shouldn't be a problem anymore. [*]

If you want to avoid DSA, you can also do a normal RSA handshake then immediately renegotiate with an uncertified ephemeral Diffie-Hellman handshake. (SSL 3.0 and TLS 1.0 allow either party to request a renegotiation at any time, with the renegotiation process protected underneath the first handshake.) As your question mentions, short-lived certificates would work if a suitable CA provided them.

Making PFS mandatory wasn't feasible in SSL 3.0 because of performance requirements, the need to maintain compatibility with legacy RSA certificates, and licensing issues. (Back in 1996, RSA was patented and most companies only had limited RSA toolkit licenses, not patent licenses.)
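The forward-secrecy property of an ephemeral handshake can be illustrated with a toy Diffie-Hellman exchange (tiny illustrative parameters, nothing like the sizes SSL actually uses):

```c
#include <stdint.h>

/* (base^exp) mod m by square-and-multiply */
static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

/* Toy public parameters: prime p = 23, generator g = 5. */
enum { P = 23, G = 5 };

/* Each side picks a fresh secret per session, exchanges only
 * g^secret mod p, and derives the same shared key. Because the
 * ephemeral secrets are discarded after the session, later theft
 * of a long-term (signing) key reveals nothing about recorded
 * traffic -- that is the forward secrecy of the ephemeral suites. */
uint64_t dh_public(uint64_t secret)                { return modexp(G, secret, P); }
uint64_t dh_shared(uint64_t secret, uint64_t peer) { return modexp(peer, secret, P); }
```

With toy secrets a = 6 and b = 15, both sides derive the shared value 2, while an eavesdropper sees only the public values 8 and 19.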

Overall, I'm delighted to see how many ways SSL 3.0 is being used and that it's become the most widely deployed cryptographic protocol in the world. While there are reasons to debate design choices I made, I don't know that the protocol's handling of PFS is one of them. Although some implementations have had bugs and guidelines had to be added to address error-analysis attacks, the overall protocol has held up well.

[*] In 1996 (when the SSL 3.0 spec came out), computers were only 4% of their current speed. (Moore's Law predicts 4.67 speed doublings in 7 years.) Today, any modern CPU should give well beyond 200 2048-bit DSA verifies/second. Averaging 10 handshakes/second (5% load) = 864K connections daily per CPU. Unless you are running one of the largest web sites (or have your server misconfigured to disable session resumption), this isn't likely to be a problem. For really high-volume servers, SSL accelerators are affordable and very fast. In general, it's rare these days to find a situation where the speed of standard cryptographic operations is actually a problem.

7) trust in open p2p communities
by smd4985

as a software engineer building open source p2p applications (gnutella), we are faced with a huge problem: how do we establish trust in an open environment where any application that speaks the protocol can participate? we've thought of various cryptographic systems to establish trust, but they have several fatal flaws - they require some sort of centralization (a no-no in a p2p environment), they lock out 'untrusted' vendors, etc.

what can we do to maintain an open environment and establish trust between peers?

Paul:

There certainly are decentralized ways to establish trust (PGP's web of trust comes to mind), but you can't have trust and complete anonymity. The best you'll be able to do is evaluate participants based on their past actions and assertions. Before you can begin a design, you'll need to clearly define what you are trying to enable, what you are trying to prevent, and what automated rules can distinguish between legitimate and illegitimate actions.

(Note: While I presume that the question relates to legitimate P2P applications, piracy over P2P systems is driving copyright owners to seek legislative and legal relief. The fact that the Internet can be used to massively violate intellectual property rights doesn't make it moral to do so.)

8) How do you think?
by Charles Dodgeson

When I first read about some discovery of a weakness (for example, I know your name from your work on MD5), I am always struck by the thinking beyond the framework of the designer of the system and of the community to date. The same thing strikes me about timing attacks and similar sorts of things. These are things that I wouldn't have thought of in a million years. Can you give any insight into how minds like yours work? And to what extent do you think this might be a trainable skill?

I normally hate the cliche of "thinking outside of the box", but here it is fully appropriate.

Paul:

Security work requires understanding systems at multiple levels. For example, differential power analysis involves transistor-level properties affecting logic units affecting microcode affecting CPUs affecting crypto algorithms affecting protocols affecting business requirements. For engineers who are used to working at only a single layer, security results often seem surprising. Broad experience is also important because the vast majority of security problems involve unintended interactions between areas designed by different people.

Two specific subjects that I think are often neglected are low-level programming and statistics. These are essential to understand how things actually work and to assess the likelihood that systems will fail. A skeptical mindset is also important. Try to assume things are bad until you are convinced otherwise.

Some specific questions that are helpful to ask include:

  • What information and capabilities are available to attackers?
  • What esoteric corner cases has nobody studied carefully?
  • How would a lazy or inexperienced designer have designed the system?
  • What states can each participant be in?
  • Where is the most complexity in the security perimeter? (Complex parts are the most likely to fail.)
  • What unwritten assumptions are being made, and are they correct?
If you aren't sure how to begin an evaluation, consider sketching out how you would have done the design. You can then compare your design against the target. The differences often reveal mistakes you made (a great way to learn) or identify problems with the target system.

9) Is the Technology ahead of us?
by Coz

Thanks for letting us ask you these questions.

Over the last couple of decades, cryptography has gone from being the domain of major governments, big business, and the odd hobbyist and researcher to being a massive public industry that anyone can (and does) participate in, with new algorithms published and new applications announced almost every week. Meanwhile, we learn of vulnerabilities in various implementations of cryptosystems much more frequently than we hear of people discovering fundamental flaws in the cryptosystems themselves.

Given these facts, do you think we need to change focus, turning to validating and "approving" implementations of cryptosystems (such as your own SSL 3.0) or should the emphasis of the "crypto community" continue to be innovation in fundamentals of cryptographic systems and new applications for them? How important is it to have someone verify that a cryptosystem is implemented well?

Paul:

Validation is by far the most critical unsolved problem in security.

I view security as probabilistic: there is always some chance of failure, and validation is the only way to reduce the odds of failure. For example, a well-tested piece of code is more secure than an identical piece of code that hasn't been tested.

Although innovation is great on the research side, real-world systems should use well-tested techniques wherever possible. For example, on the algorithm side, we use RSA, triple DES, AES, and SHA-1 at Cryptography Research unless we have to use something else. (This is rare.) We use these algorithms because they are well reviewed, making the risk of an unexpected cryptanalytic attack low. In contrast, catastrophic flaws in new schemes are very common.

When you move beyond the basic algorithms, validation unfortunately becomes extremely difficult for many reasons:

  • The complexity of software is increasing exponentially, but the number of skilled security experts (and their intelligence and productivity) is staying roughly constant.
  • Many designs are so poorly architected or implemented that they are infeasible to validate.
  • Validation is much more difficult than writing new code (and it's less fun), so many people avoid it.
  • Engineers are cranking out such vast quantities of code that testing can't possibly keep up.
  • Existing validation tools are really quite poor.
  • The cost of security testing can be hard to justify because most users won't pay extra for better security.
  • There is no easy way for users to distinguish between well-tested products and those that aren't.
  • Testing takes a long time, slowing down product launches.
  • There is no easy way to standardize security evaluations because attackers don't limit themselves to standard attacks.
  • Catching 90% of the flaws doesn't help if attackers are willing to look 10 times harder to find flaws.
  • Developers don't have much incentive to make painful sacrifices for security because they aren't the ones who incur the risk.
Long-term, I expect security will become like the pharmaceutical and aviation industries. Regulations and liability would improve safety, but would also make product development hugely expensive. Regardless of whether this would be better or worse than the current state of affairs, it looks inevitable.

10) Re:fhnlsfdlkm&5nlkd%Bvbcvbc
by Anonymous Coward

0eefa Uv, V'z jbaqrevat vs lbh guvax gurer'f n shgher sbe EBG13. V'ir urneq vg'f cerggl frpher...

Lbh pna ernq guvf? Qnza!

Paul:

Holy cow! Juvyr lbh znl unir svtherq bhg zl fhcre-frperg EBG13 pvcure, abobql jvyy rire penpx *guvf* zrffntr orpnhfr V fjvgpurq gb bhe hygen-frperg cyna O: nccylvat n Pnrfre pvcure 13 gvzrf :-).

Comments:
  • What struck me reading this was he mentioned that he has worked under the terms of an NDA and the company decided not to fix their software. How can this be discovered? If he goes to the police surly this goes beyond the NDA. If anyone could clarify this I would appreciate it.
    • precludes him from going to the police or further identifying the offending company.

      And don't call me Surly.

      • No contract, not even an NDA can prevent you from reporting a crime to the police.
        • I guess priests would make good security-auditors as they are not obliged to disclose anything brought to them in confidence...
          • I guess priests would make good security-auditors as they are not obliged to disclose anything brought to them in confidence...

            Granted, my knowledge comes to me from episodes of "Law & Order", but my understanding is that these types of confidentiality agreements are only applicable when they involve information that comes up while carrying out priestly (or doctor or lawyer) duties. Hiring a priest to perform an audit is not going to get you the same level of privilege as if you went to one for absoluti

    • Don't assume that he should go directly to the police. After all, in most cases it's not a crime to be insecure. True, you might be opening yourself up for a civil suit later, probably for failing to exercise due diligence, but that doesn't imply an obligation for a consultant to report anything to anyone.
    • What struck me reading this was he mentioned that he has worked under the terms of an NDA and the company decided not to fix their software. How can this be discovered? If he goes to the police surly this goes beyond the NDA. If anyone could clarify this I would appreciate it.

      The police only handle criminal matters. You don't 'go to the police' (or go to jail, for that matter) for civil violations. There is a difference between being "not legal" and "illegal". Illegal means you are prohibited from do

  • Here's the ROT13 message decoded:

    An empty password will pass this check because the code uses the length of the user entry, not the length of the correct password. Other potential problems (buffer overflows, etc.) are left as an exercise for the reader. [Shameless plug: If you enjoy problems like this, have strong security experience, communicate well, and want a job at a fun (and profitable) company, visit http://www.cryptography.com/company/careers.html.]

    This was courtesy of ROT13 Java [geht.net]

    • This was courtesy of ROT13 JavaScript coder/decoder [geht.net]

      Pfft! Real USENET old timers can read ROT13 without a fancy, shmancy Javascript applet. Or if we need to turn it into normal, we know the vi/ex command sequence that will do it.

      For the lazy, we cut and paste it into Vim and type g?G and use Vim's ROT13 function.

    • BUT NO! (Score:4, Funny)

      by Proaxiom (544639) on Thursday March 27, 2003 @04:52PM (#5610114)
      That's exactly what he wants you to think! You see, if you realize that this really isn't a ROT13 encoded message, that was just to throw off the amateur cryptanalysts. The truly insightful, such as myself, would have thought to treat this as a one-time pad, encrypted with the following key (in hex):

      070F00335427494C12525E57463D031751570B05591B450407 080849091258466331252631316E5552270609125248060502 150C4E526D2D253A276E2E3648002A1D0C11051A441B535302 0152551846071600060902450E434D03422870222733743620 5B5430100005165447061B471E030945313137742227285C4A 350D105A01622825373761372D3548412A170717001006100A 1C1C5A12555A01481C1D522661323D332C3D434223120B5352 3E1F5B08014E2633742A48532F4F1946411C04000B04074517 1308064C160F004E47545300241E02510B0A5B527B24141D1F 5218000346420615071A5D00221C00191146002B0E02420D11 00420017520E0E455901190B00131A56024953011C1C11540F 080B42223570323339265F56300441064B1A1D105601000952 0D0150213B2E3B5B5629011252480B041C59424F4E02194104 0D044754010004171D560711454E4D12030400510114034311 0B1E13472D2E2D21130038235A2B2F41246D0021374858490E 5B4949633A2F2F6A6A6A2E70656C63676274656E63756C2E70 627A2F70627A636E616C2F706E65727265662E75677A792E5D

      Which, of course, decodes the message to:

      In A.D. 2101 War was beginning. CAPTAIN: What happen? MECHANIC: Somebody set up us the bomb OPERATOR: We get signal CAPTAIN: What! OPERATOR: Main screen turn on CAPTAIN: It's You!! CATS: How are you gentlemen !! All your base are belong to us. You are on the way to destruction CAPTAIN: What you say !! CATS: You have no chance to survive make your time CATS: HA HA HA HA.......

      A devious one indeed, that Paul Kocher!

    • I am so reporting you under the DMCA.
    • Man, mod this one up. I know that Froze (and followers) think it's awesome fun to waste time hunting down a ROT-13 translator, pasting something in, and getting the results. Maybe they're too dumb to cut and paste the results, which would be _actually useful_.

      Thanks, gnuadam.

  • ROT13? (Score:3, Funny)

    by phraktyl (92649) <<moc.ooggard> <ta> <ttayw>> on Thursday March 27, 2003 @01:17PM (#5608188) Homepage Journal
    I'm never going to figure this out---damn those encryption experts!
  • Reading books/articles doesn't seem to be enough, but if that's the best place to begin, any recommendations?

    It may not be enough, but I prefer to believe that cryptography study should begin with books.

    Here are 81 cryptography books I've reviewed [youdzone.com].

    With most I've included an associated set of prerequisite book reading, math, and computer language skills necessary to understand the book. Hopefully this will help the beginner hit the ground running, and the more experienced should discover a few hard-to-

  • As for the worst security, I nominate the following password checking code [snipped]

    I really hate it when security types with their heads stuck so far up their own arses that their heads stick out of their heads assume most programmers are stupid.

    Most programmers AREN'T that stupid, and you will never come across this code in the wild.

    Just like the SQL injection attacks that security types get off on. Doesn't happen.
    • Re:Aggghhhhh! (Score:4, Informative)

      by evilpenguin (18720) on Thursday March 27, 2003 @01:49PM (#5608450)
      I assume you must be trolling, but I'll feed ya. I've been programming in C and C++ for 16 years and I have seen code this bad in production systems every single year of my career.

      There are many more bad programmers than good programmers, and even good programmers occasionally make stupid mistakes. One of the biggest problems is the large software consulting businesses. They staff up large development projects at large companies by bringing in a handful of well-seasoned architects and lead programmers and then a legion of fresh, inexperienced, and relatively cheap novice programmers. They spend 6-12 months spewing out massive amounts of code of highly variable quality and then leave, allowing staff programmers and consultants from smaller firms to clean up the mess.

      Memory leaks, unbounded stack accesses, and outright logic flaws abound in code you are using today. I guarantee it.

      You *will* come across that code in the wild. The only way you won't is if you don't look.
    • What's that about monkeys, typewriters and Shakespeare?

      Btw, I have encountered such stuff before. So apparently the non-most-programmers are that stupid.

    • Re:Aggghhhhh! (Score:3, Informative)

      Heh, by your Nick I'll assume this a troll, but programmers are lazy above all things. They tend to consider a problem "solved" once it minimally works, and do not like to polish it off with things like error handling, documentation, security hardening, etc.

      There's plenty of very talented programmers here who I constantly butt heads with because they do not want to update their apps which use rsh, rlogin, rwho, .rhosts files, and unauthenticated X sessions across the network, despite the fact that the risks
    • From Webster's Revised Unabridged Dictionary (1913) :

      Example \Ex*am"ple\, n. [A later form for ensample, fr. L. exemplum, orig., what is taken out of a larger quantity, as a sample, from eximere to take out. See Exempt, and cf. Ensample, Sample.]

      1. One or a portion taken to show the character or quality of the whole; a sample; a specimen.
    • Re:Aggghhhhh! (Score:2, Insightful)

      by mOdQuArK! (87332)
      Sorry, I've seen code like this on a pretty regular basis (not necessarily password checking, but this kind of defective logic).

      Even a decent programmer can flake out occasionally (thinking of one variable name while typing another, for instance), and the dangerous thing about this kind of code is that the compiler won't catch it, and unless code reviewers are specifically keeping an eye out for this kind of thing, they'll probably overlook it as well (since it looks kind of right).

      _You_ might be perf
      • This particular not-so-typical programmer (I'm no god, but I'm fairly decent (generally better at debugging)) saw this code, freaked out, and went to check some password checking code I've worked on. It is nearly as abysmally simple (passwords in the clear, non-trivial to fix that), but it doesn't have the gaping holes the sample code has. Why did I check? Because I have made mistakes like that in the past. I likely will in the future.

        Everybody has brainfarts, no matter how good they are.

    • SQL injection *does* happen. I've seen it and plenty of web developers are not very SQL-savvy.

      Try these two phpnuke sql injection vulnerabilities ( 1 [securityfocus.com],2 [securityfocus.com]) for example from this week's securityfocus.com vulnerability list. Those are just a couple from the open source world.

      In early 2000, my dotcom would allow points to be redeemed for Flooz (remember them?) which could then be used at among other place, Tower Records. Throw a single quote in the search page, it dumped SQL statements including tables, columns,
    • I think he couched this well. Programmers just don't think in terms of security, and tend to make the same mistakes over and over.

      I remember a story of some programmer who was taking a crypto class. He thought of a cool encryption algorithm, essentially:

      char c, encrypted_c;
      encrypted_c = (c + rand() % SOMETHING) % SOMETHING; /* (rand() takes no argument in C) */

      His first homework assignment? Was to break that encryption. Programmers just don't think that way sometimes.
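The fatal flaw in that scheme (a sketch, assuming the standard C srand()/rand()): the whole "keystream" is determined by the seed, so anyone who learns or brute-forces the seed regenerates it exactly:

```c
#include <stdlib.h>

enum { MOD = 256 };   /* stand-in for the story's SOMETHING */

/* The anecdote's cipher: add successive rand() values mod 256.
 * Decryption with the same seed simply subtracts them again. */
void toy_crypt(const unsigned char *in, unsigned char *out,
               size_t len, unsigned seed, int decrypt)
{
    srand(seed);   /* same seed => identical keystream every time */
    for (size_t i = 0; i < len; i++) {
        int k = rand() % MOD;
        out[i] = (unsigned char)(decrypt ? (in[i] - k + MOD) % MOD
                                         : (in[i] + k) % MOD);
    }
}
```

If the seed comes from something like time(NULL), an attacker with a few bytes of known plaintext can try every plausible seed in seconds and recover the rest of the message, which is presumably how the homework was solved.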
    • "Most programmers AREN'T that stupid, and you will never come across this code in the wild."

      Boy, are you wrong. Here's a small improvement on the aforementioned code that I found in the wild (comments are mine).

      if (password == NULL)   /* strcmp cores if you feed it a null arg */
          return (-1);

      if (!strcmp(password, user->password))
          return (-1);

  • by Froze (398171) on Thursday March 27, 2003 @01:30PM (#5608295) Homepage
    http://www.rot13.com/index.php
  • I always ROT13 my secret messages twice.
  • Can the fellow who asked it please clarify what is meant by 'trust' in a Gnutella environment -- what features are being thought about?
    • I imagine it's "is the file person X is sending me the one they claim it is?" style trust.
    • First off, I'm very happy my question was sent to Mr. Kocher. I was hoping for a little more from his answer, but it is a hard problem that lacks easy answers.

      Secondly, I want to make it clear that we are not trying to validate content. Gnutella implementations are by nature content-agnostic - we have no prior knowledge of what a node may share or download and we have no way to control these things. Gnutella simply sets up a communications medium - what is said is up to the individual user.

      The features
      • I want to make it clear that we are not trying to validate content. Gnutella implementations are by nature content-agnostic - we have no prior knowledge of what a node may share or download and we have no way to control these things. Gnutella simply sets up a communications medium - what is said is up to the individual user.

        I'm not a Gnutella user, but I did fire up gtk-gnutella today to check things out and I do have a pretty good idea of what you're talking about. One of the things I noted was a number

    • you can't have trust and complete anonymity.

      I thought that this problem was resolved by the "who paid for dinner" scenario.

      • Perhaps, although I can't help but wonder how many unbiased coin flips it would take to retrieve a DVD rip of the widescreen version of Clue.

        Kind of weird that I never noticed the connection between the author of the Dining Cryptographers Problem and the creator of the (appropriately named) Chaum mix. I always thought both were neat ideas.

  • Can we get some names of "companies that make misleading or unsupported claims about their security" that people keep buying (other than Microsoft, which is too obvious to list)?
  • by realdpk (116490) on Thursday March 27, 2003 @02:03PM (#5608602) Homepage Journal
    The Microsoft/Netscape/Mozilla/VeriSign "conspiracy" (for lack of a better term) made the cost barrier far, far too high by requiring that certificates for encrypted web pages be issued only by "trusted" authorities. (And requiring that if the website owner doesn't fork out the cash, the user gets prompted with an ugly/annoying dialog suggesting that something may be wrong, causing confusion.)

    It's unfortunate that MS/NS (and now Mozilla) went along with this. A better system would allow for unauthenticated SSL (with no CA warning) for sites you just don't care so much about, like /. :-), and then authenticated SSL for banks, porn, and other important things.
    • What good would it do anyway?

      How many people, besides security consultants and compulsive look-under-the-hood types, ever look at the certificate validation chain? When's the last time you checked your browser settings to be sure that malware hadn't added a new trusted root cert? Have you ever read a certification practice statement to be sure it provides the level of verification you think is appropriate?

      For that matter, how many people check certificate revocation lists? That check is turned *off* by default in one widely used web browser.
      It really has to give some sort of notification, since it won't know which sites you consider authentication important for.

      However, that notification (in the early days) really need not have required clicking through 3 dialogs that seemed to be worded in the most alarming way possible (and conveniently didn't mention that only the authentication was in question, not the encryption).

      Then, there's the issue of chain of trust. Am I really all that secure because one company I've never heard of (at that time


  • I very much liked reading the interview.

    I noticed that something is going unsaid, though. Breaking a cipher through cryptographic analysis only works if the attacker knows or can guess the algorithm. If data is encrypted and then encrypted again with another algorithm, and in between the bytes are scrambled, no mathematical attack can ever be successful.

    This method of encryption does not allow public-key encryption, of course, but it is 100% secure if only the sender and receiver know the encryptio
    • The reason the conventional wisdom tells you not to rely on secret algorithms is that the algorithm is a widely distributed piece of information (every sender and receiver uses the same one) and it's permanent.

      For example, the Enigma machines were supposed to be secret, but over the course of an entire war it was probably inevitable that one got captured. Then it was a key recovery problem.

      BTW, a typical modern block cipher works a lot like what you suggested, often with 16 or 32 rounds of scrambling, eac
      • The one-time pad is the only fully secure communications encryption method, and even it is not necessarily physically secure. And when it is used in any non-one-time-use protocol, e.g. VENONA, it is no longer perfectly secret, either.

        All other forms of encryption are less secure, such that the economics of the subjective or objective value of decrypting the message and the cost of doing so dictate whether it remains secure.
    • by rjh (40933)
      We're fortunate that cryptography is a mathematical discipline. That way, whenever anyone makes claims about "no mathematical attack can ever be successful", we can say "great--prove it."

      There is only one cipher out there nowadays which has been formally proven to be totally immune to mathematical attack: the Vernam Cipher, which is conceptually brilliant but too impractical to use.

      Everything else (so far) has been proven susceptible.

      I would suggest reading Knuth's The Art of Computer Programming, where
    • Breaking a cipher through cryptographic analysis only works if the attacker knows or can guess the algorithm. If data is encrypted and then encrypted again with another algorithm, and in between the bytes are scrambled, no mathematical attack can ever be successful.

      Wrong. Allied cryptanalysts were able to successfully attack cryptosystems without knowing what it was they were attacking. Originally they did not have a device, an Enigma or Purple cipher machine, and were able to attack them based on ciphe
    • Actually, that turns out not to be the case. What you've described is "security through obscurity", which is vulnerable to lots and lots of kinds of attack.

      Many encryption attacks are against the encrypted text alone, and if your encryption approach sucks, most tools will go right through it.

      Some of the better decryption systems will then proceed to describe which long-ago-debunked algorithms you used, or which algorithms you tried to implement but messed up in some fairly pedestrian way.

      Security through
    • Gotta be careful with that! What you get from all of that is a complex composite of the two functions and keys, but it might still be broken.

      It is even possible in bad cases that you will manage to combine the weaknesses of both algorithms and end up with something really weak.

  • by Tomster (5075) on Thursday March 27, 2003 @02:39PM (#5608927) Homepage Journal
    "As for the worst security, I nominate the following password checking code:

    gets(userEntry);
    if (memcmp(userEntry, correctPassword, strlen(userEntry)) != 0)
        return (BAD_PASSWORD);
    "

    I just want everyone to know I wrote that code years ago and would never do something like that again. Really!!

    -Thomas
      I was doing a bit of security consulting at a hospital once, a long time ago, and they had a system in place to prevent unauthorized people from logging in. Unfortunately, I didn't notice it.

      It turns out the security system used /etc/profile to figure out whether you were authorized, based on an entry in a text file somewhere, all written in Bourne shell. It had the appropriate traps in place to prevent ^C and the like from working.

      The reason I didn't notice it, however, was that I had requested my account be created using tcsh.
  • He said: "The fact that the Internet can be used to massively violate intellectual property rights doesn't make it moral to do so."

    I would assert that Intellectual Property is immoral, and that people trade freely on p2p networks and the internet, in part, to undo the damage caused by immoral copyright monopolies.

    The moral and historical foundation of property derives from the fact that property has tangible limits - not from a King who granted publishers monopolies in return for not publishing bad thi

    • If you don't like the law, change the law.

      • If you don't like the law, change the law.

        The best way to change this law is by open disrespect for copyrights and civil disobedience.

        • The best way to change this law is by open disrespect for copyrights and civil disobedience.


          I'm not sure what country you are from, but I know here in America, we have better ways. Civil Disobedience is the step that is usually taken prior to revolt. I just wish that people understood that Civil Disobedience is a limited form of civil warfare. It was called "civilized civil war" by someone after the Marcos regime was toppled by the people of the Philippines.

          Back to IP Law. First, I'm not sure tha
          • I'm not sure what country you are from, but I know here in America, we have better ways. Civil Disobedience is the step that is usually taken prior to revolt. I just wish that people understood that Civil Disobedience is a limited form of civil warfare. It was called "civilized civil war" by someone after the Marcos regime was toppled by the people of the Philippines.

            I am from America too, but in America there are large multi billion dollar media companies that will eternally kick your ass unless you

  • by debrain (29228) on Thursday March 27, 2003 @03:07PM (#5609156) Journal
    Something like this:

    /me thinks ROT13? WTF is that.
    /me googles ROT13.
    /me finds http://www.alliancestudio.com/cgi-bin/rot13.cgi
    /me sends:ROT13 SPOILER: Na rzcgl cnffjbeq jvyy cnff guvf purpx orpnhfr gur pbqr hfrf gur yratgu bs gur hfre ragel, abg gur yratgu bs gur pbeerpg cnffjbeq. Bgure cbgragvny ceboyrzf (ohssre biresybjf, rgp.) ner yrsg nf na rkrepvfr sbe gur ernqre. [Funzryrff cyht: Vs lbh rawbl ceboyrzf yvxr guvf, unir fgebat frphevgl rkcrevrapr, pbzzhavpngr jryy, naq jnag n wbo ng n sha (naq cebsvgnoyr) pbzcnal, ivfvg uggc://jjj.pelcgbtencul.pbz/pbzcnal/pnerref.ugzy.]
    /me receives english translation.
    /me acquires 31337ness.
    /me goes to shameless plug for job, only to find it slashdotted.
    DAMNIT
    /me feels 31337 status drain away.

  • The fact that the Internet can be used to massively violate intellectual property rights doesn't make it moral to do so.

    I'm sure Paul doesn't remember me, but I remember Paul. When I was all of 13 (this was around '90), Paul Kocher and I both lived in Corvallis, OR. I didn't know Paul, met him only once (knew his brother Scott a bit better, though), but he was the geek star of Corvallis, the only teenager we even knew about who could program assembler and crack copy protection. I remember playing a 'warez
    • I remember playing a 'warezed' (didn't have that word back then) version of Test Drive cracked by Paul.

      This is interesting, considering Paul's assertion:

      The fact that the Internet can be used to massively violate intellectual property rights doesn't make it moral to do so.

      Paul, you got some 'splainin' to do!!
  • by Michael Woodhams (112247) on Thursday March 27, 2003 @06:04PM (#5610610) Journal
    For those who enjoy solving simple substitution cyphers, the following command will encypher a file for you:

    perl -0777pe'$a="a";s/[a-z]/$b{lc$&}||=$a++/gei' filename

    I also have a program to help solve these cyphers, but it is too long to fit into the margin of this post.

    (And if you don't like solving alphametics problems (e.g. SEND+MORE=MONEY), I have a program that will do it for you in 135 bytes.)

  • Came across this problem whilst brainstorming for The Circle [thecircle.org.au].

    There is no way to verify that a peer is running some genuine/particular client or other (at least, not without DRM hardware).

    The only way to make sure that you're not uploading files to RIAA dupes is to have a real-life web-of-trust amongst your users. Unless your web of trust is a serious conspiracy, it's unlikely to be effective.

    It *is* (theoretically) possible to detect misbehaving clients, though. Imagine you require all participants t
