
Interviews: Eugene Kaspersky Answers Your Questions

Posted by timothy
from the that's-just-what-they-want-you-to-think dept.
Last week, you asked questions of Eugene Kaspersky; below, find his answers on a range of topics, from the relationship of malware makers to malware hunters, to Kaspersky Lab's relationship to the Putin government, as well as whitelisting vs. signature-based detection, Internet ID schemes, and the SCADA-specific operating system Kaspersky is working on. Spoiler: There are a lot of interesting facts here, as well as some teases.
Which OS/OSs do you run?
by magic maverick

While MS Windows is the most common computer OS around, there are obviously many others. For your personal use, what is your main OS, and how do you keep it secure (do you, e.g., run MS Windows with anti-malware software, or do you run Ubuntu Linux with the defaults)? Is this a setup that you would suggest for others, or is it too esoteric?

Eugene Kaspersky: I'm afraid my answer's nothing special — I've got Windows 7 on my laptop + Kaspersky Internet Security 2013. To put it short, I've no need for any other operating systems like Ubuntu or Mac OS, and some software I need is available only under Windows.

One special thing about my devices is that I don't have a smartphone. I use a good old Sony Ericsson, whose most advanced feature is its (handy) flashlight. A simple phone like this is the safest mobile you could ever choose!

On this topic I also have a few tips I can share with you:
  • Outside the KL corporate network I always use a VPN connection. If you have the possibility to use VPN — do so. It's a very useful way to minimize risks.
  • Always use quality security software and keep it updated (automatically). That is an absolute must.
  • I prefer using browsers with a relatively high security level (e.g., Chrome) and I disable scripts in it.
  • And finally, the most important rule — also the simplest: always — always — use your head. I'm certain that the above + common sense is perfectly sufficient for secure personal use.

What color is your hat?
by eldavojohn

I feel like when someone is as deep in malware protection as you are, you're basically running malware and, I assume, developing malware or finding exploitable aspects of software. I notice you "discover" a lot of malware but I don't recall seeing you publish any exploits. How much malware development do you do? Any at all? Is there anyone in your company that attempts to mimic what other malware does so you can better understand it? Do you feel like that is a necessity in the field of malware protection?

EK: No, no and no. We don't develop malware and we don't publish exploits. Both happen to be illegal — and amoral. I don't recommend you do either.

Firemen don't start fires, doctors don't infect people, and antivirus companies don't create viruses. Any at all.

We detect 200,000 new threats every day as it is. Keeping on top of them all is quite a task. And another thing — we don't hire ex-hackers. Our business is built on trust, and we apply the highest standards in sensitive areas of our work: in malware analysis, product development, etc. Just as a homicide detective doesn't need to kill in order to investigate a murder more effectively, a good expert doesn't need to be on the dark side to analyze viruses and predict what may come next.

Why do we still use the black list security model?
by Zaphod-AVA

Malware continues to be successful despite our current efforts. Why do we continue to use the same failed security model? Automated white listing seems like a better answer to modern security problems.

Imagine a whitelist that checks with a central repository that reputable software manufacturers send their updates to. Even with updates, checking the software you regularly run is now a simpler problem than comparing everything you run to a list of all the malware in existence.


EK: Actually we do use a whitelist security approach. Modern antiviruses are not simply based on signature analysis; they are sophisticated pieces of software containing whitelisting as well. Faced with constantly increasing malicious activity, the AV industry needs to seriously toughen up and come up with new approaches. One such new approach is the application of whitelisting technology.

Whitelisting takes a different view of computer files. Instead of looking for the bad things on your PC, as the traditional pattern-based approach does, it simply checks whether files are safe: whether they're already in the whitelist database of known-to-be-ok software. Any files that aren't already whitelisted are marked as potentially bogus. Our whitelist of ok'ed files is now populated by more than 530 million green-lighted files.

Now, depending on the settings you make in the antivirus program, files not included in the whitelist directory can be either automatically blocked (particularly useful in a corporate environment), or flagged as suspicious and sent for additional checks by anti-virus components. For the suspicious ones, a further stage of analysis can be performed by running them in Safe Run — an isolated sandbox environment from which maliciousness can't contaminate the computer's environment proper. Alternatively, right-clicking a file gives you its reputation info from our cloud-based KSN (video, details), which incidentally gets 400,000 file-checking requests per second!

The traditional pattern-based approach by its nature needs to catch 100% of all the maliciousness on a computer to be effective. Besides, every instance of malware needs to be analyzed and entered into a database, which takes time, and time is critical when we're talking about epidemics. Whitelisting, on the other hand, isn't bothered about bogusness directly — that's not its concern. It concentrates instead on simply detecting possibly bogus files — files not included in the whitelist, just in case, as it were. And this task is completed in seconds — much quicker than the traditional approach's task. Since today we detect around 200,000 malware samples every day, and this figure is only going to keep on increasing, "just in case" becomes crucially important, and isn't just some new bell/whistle addition to traditional antivirus.

Of course, let the pattern approach keep at it with the baddies, which it is doing, valiantly. But also let whitelisting do its thing with goodies. The result? Superior overall protection — a lot quicker. Kind of what we're all after, after all.
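The whitelisting flow described above (hash lookup first, then block or sandbox anything unknown) can be sketched in a few lines. This is a hypothetical illustration, not Kaspersky Lab's actual implementation; a real product would query a cloud reputation database such as KSN rather than a local set of hashes.

```python
import hashlib

# Hypothetical local whitelist of SHA-256 hashes of known-good files.
# (The hash below is that of an empty file, for illustration only.)
KNOWN_GOOD = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def classify(path: str) -> str:
    """Whitelist logic: a known-good file passes; anything else is
    flagged 'suspicious' to be blocked or run in a sandbox."""
    return "trusted" if sha256_of(path) in KNOWN_GOOD else "suspicious"
```

Note the inversion relative to signature scanning: an unknown file is treated as suspect by default, so no per-sample malware analysis has to happen before protection kicks in.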

Re: Assembly code and vulnerability of Apple
by dave562

We see Apple growing in market share and one of the memes that has been accepted by a large part of the community is that Apple is not targeted by malware authors in part because the return on investment is not as high as it is for Windows machines. To put it another way, if a malware author targets Windows they get millions of home users, but more importantly, they also have the potential to infect corporate systems, server farms, etc. If they go after OS X, they get a bunch of home computers and some audiovisual professionals.

Apple's market share is growing, and they also have converted their OS over to run on Intel chips. It now shares the same hardware base as PCs that run Windows. Given that all of the really advanced malware code (rootkits, polymorphism, etc.) is written in Assembly, do you foresee any tipping point coming where OS X will be targeted on a large scale like Windows has been? Or is there simply not enough of a payoff there for the malware creators, given the ease of exploitation and widespread deployment of Windows?


EK: Cybercrime today is no game; it's a very successful business. Its underlying principle is simple: risks are taken and attacks are invested in only if lots of money can be earned. The more users you can reach — the more money you may get. Simple. These days Mac OS market share is high enough to be attractive to the bad guys. In 2011 it was estimated that Apple had over 5% of worldwide desktop/laptop market share. And figures by web-tracking company Net Applications for the month of August 2012 show that Apple's combined share of the desktop market — counting versions 10.4 and after of OS X — is 7.11%, while Windows Vista for example takes 6.1%! This is a significant figure already, and that's why cyber criminals are turning their heads towards Apple.

The Flashfake epidemic, the first global Trojan for Mac OS, highlighted two things:

First, it showed that the most popular Windows attack scenario can be easily copied for Mac: a Trojan spreads via drive-by downloads — no user interaction needed, no clicks, no admin password. Just surf to a hacked website and the malware gets installed onto your computer automatically.

Second, epidemics are indeed now possible for Mac: if you compare the number of computers infected by Flashfake with the overall number of Macs, you'll find out that the "iBotnet" can be compared to Conficker — the biggest PC-botnet in history!

In sum this all means that we've reached the stage where attacks on Mac OS have become a usual phenomenon — not unusual as claimed in the past. And the scale will only increase. The Apple marketing people may not like it, but it's time to admit it — yes guys, your system is as vulnerable as Windows. Don't ignore the lesson of Flashfake. Think serious about security, not just different [sic].

Re: Healthcare/industry-specific software?
by HideyoshiJ

Many pieces of software and hardware used in healthcare are required to pass FDA certification, especially in areas like radiology. Oftentimes, these vendors report that because they are certified at a certain patch level, these systems cannot be patched without losing that certification. Do you see any solutions to the current state of industry-specific software's seeming lack of quality, updates and security?

EK: What works best in these circumstances is whitelisting. We realized the importance of whitelisting a long time ago when we started our whitelisting program. Like many technologies, whitelisting is not a solution by itself, but in terms of more completely protected machines in healthcare it really does help. What's more, because such machines generally go unchanged the whitelisting rules can be extra strict. In our experience this works very well, especially in combination with technologies such as exploit prevention.

Anonymous Internet IDs
by AaronLS

Do you believe everyone could be issued an ID, and still remain anonymous? What I mean is, I believe that you could ensure each of your users is unique, but not necessarily know who they are. If everyone is issued a certificate signed by some trusted authority, one could verify that the certificate is valid, without the certificate exposing the information about who you are. You could even have a scheme that lets the authority issue you multiple IDs, but only one for each unique ForUseWithDomain attribute, such that if you wanted to keep your identity from being correlated across different sites, you could do so. This could probably even be automated.

This would ensure that if you banned a malicious user from your site, they wouldn't be able to come back without compromising someone else's certificate. Yet, you still get a high level of anonymity.

Sites that require non-anonymous access could deny anonymous certificates, and require that you authorize access to your full name, perhaps. This would be like OpenID in the way it prompts you when a site requests additional information, like your email.
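The per-site scheme in the question can be sketched in miniature. This is a hypothetical illustration of only the derivation property (one stable pseudonym per domain, unlinkable across domains); the `ForUseWithDomain` name comes from the question, and a real deployment would additionally need the issuing authority's signature, e.g. via blind signatures, to stop a banned user from simply minting fresh IDs.

```python
import hashlib
import hmac

def domain_pseudonym(master_secret: bytes, domain: str) -> str:
    """Derive a stable, domain-scoped pseudonym from a holder's master
    secret (a sketch of the hypothetical ForUseWithDomain attribute).
    Same secret + same domain -> same ID, so a site can ban repeat
    offenders; different domains -> IDs that can't be correlated."""
    return hmac.new(master_secret, domain.encode(), hashlib.sha256).hexdigest()

secret = b"holder-master-secret"
id_shop = domain_pseudonym(secret, "example-shop.com")
id_forum = domain_pseudonym(secret, "example-forum.org")

assert id_shop == domain_pseudonym(secret, "example-shop.com")  # stable per site
assert id_shop != id_forum  # unlinkable across sites
```

Because HMAC is a keyed one-way function, a site that sees `id_shop` learns nothing that lets it recognize the same holder at another domain, which is exactly the anti-correlation property the question asks for.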


EK: Firstly, in my opinion, Internet IDs aren't necessary for every type of Internet activity. Let me clarify the cases in which I think an Internet ID is needed. I believe the World Wide Web should be divided into three zones. The red zone is for critical processes: voting in elections, online banking, interactions with official bodies, and other critical transactions. For operations in this zone an Internet ID should be necessary. This is in everyone's interest — no one wants to lose private data, which in some cases may lead to losing money, for example. Then comes the grey zone, where minimal authorization is needed — for example, age verification for online shops selling alcohol or adult stores. I don't think an Internet ID is necessary for this zone. You're right — OpenID is enough. And finally, the green zone: blogs, social networks, news sites, chats ... — everything related to your freedom of speech. No authorization required.

I suggest using special proxies for surfing in the red zone. You register using your Internet ID and then you use a nickname. Nobody can see your real name. If you break the law, your identity is subject to disclosure after legal procedures and a court decision. I want to stress that nobody can discover your real identity if you observe the law.

Re: Online anonymity
by gallondr00nk

Recent protest movements and the Arab Spring have shown that the ability to use the Internet anonymously is crucial to organizing resistance and circumventing censorship or oppression. In light of that have you modified your views on the "Internet ID"?

EK: My position on Internet ID is developing. The more governments speak about regulation of the Internet, the more liberal I become. I'm really worried that one day governments will go too far in their attempts to control the WWW and its users.

After the Arab Spring I've slightly changed my views on the subject. I still think that Internet IDs are required for certain operations, but as I've explained above, you don't need them when, say, surfing social networks. And as far as I know it was specifically Twitter and Facebook that were used as communication tools by protesters during the Arab Spring.

Re: "Approved" Spyware
by Fnord666

I assume that various state sponsored agencies provide you with their "research" tools and ask that you not detect them with your products nor should you interfere with their operation. To what extent does this happen, to what degree are you "asked" to comply, and to what degree are you forbidden to discuss this topic? Do you, or if you had the opportunity to do so without repercussions would you, offer a version of your products that identified and disabled this spyware?

EK: There is nobody who can forbid me from discussing this topic, so here you go. The short answer is no — we don't have relations with state sponsored agencies in the way you describe. Nor ever will.

Reputation is an extremely important asset in our business. If you let somebody be your bodyguard you need to be 100% sure that you can rely on this guy. And it's the same for users and companies when choosing security software. Trust is everything for us. If we had such a skeleton in the closet, our rep would go into a nosedive. And believe me, such a skeleton would be found if it ever existed: I'm pretty sure that our products are analyzed scrupulously by competitors, cyber criminals and governments. No, secret agreements with state agencies like the one you imagine — there's never been such a thing, nor will there ever be.

Kaspersky's relationship with the government
by swb

Does Kaspersky have a relationship with the Putin administration or the FSB? Do either of these organizations have any influence on the business practices or technology of Kaspersky antivirus? Should a security minded person be concerned with the geographic origin of security software?

EK: Firstly, we have relations with law enforcement agencies in many countries, not only in Russia, to which we provide expertise. Moreover, all the world's leading security companies — Symantec, McAfee/Intel, and Kaspersky Lab — we all collaborate with law enforcement bodies in our own countries and worldwide — to help fight cybercrime. CERTs, the FBI, FSB, Interpol, etc. — our duty is to help them investigate criminal cases.

Without the expertise of security professionals, successful law enforcement operations would be an unattainable dream. When cybercrime cases are domestic, IT Security companies work with their law enforcement agencies to assist in investigations. When they're international, they work with the appropriate law enforcement authorities of the affected countries to abide by legal policies and federal jurisdictions. This cooperation is crucial in fighting cybercrime worldwide, and we are proud to be a part of the process.

Secondly, Kaspersky Lab is a private international company which registered its holding in Great Britain in 2006. This means that our financial reporting is completely transparent and freely available to anyone. As a private company we act independently. There's no organization that could influence our business or product development.

And finally, regarding origin: Paranoia can be useful sometimes, but you should have good reasons for it. Should the security minded person be concerned that his/her laptop is assembled in China? Or that Intel, which produces most processors, has plants not only in the US, but also in Israel, Ireland and China too? Many other chip companies of course design their chips but have them produced by third parties — mostly in Taiwan and China. Should one be worried that one of the leading Microsoft R&D centers is situated in Israel? Or that the SAP headquarters is in Germany, Sony's in Japan, and Acer's in Taiwan?

We live in the age of globalization. Kaspersky Lab has R&D centers and virus experts around the world, including Russia, Europe, Japan, China, the United States and Latin America. It's simply not a question of origin any more.

In the early 2000s, when we first entered the UK and US markets, we were perceived with a somewhat prejudiced attitude. Nobody paid much notice to our product quality, only to its origin. However, I think that was because of a lack of information about our company and the products we supplied. Over the years the situation has changed: it's impossible for a superior-quality product to remain ignored.

Are you safe, Mr. Kaspersky?

by Lieutenant_Dan

You're operating out of the same country as a ton of botnet operators raking in some decent dough from cheap pharmaceutical sales, thanks to people desperate or naive enough to buy.

There have been some interesting stories hailing from your corner of the world. How do you feel about your ability to run your company the way you want, without any threats to you or your staff?


EK: Botnet operators? Cyber criminals? I'd say they're the most tamed animals in our zoo! In recent years we've been discovering much wilder, more dangerous stuff — more and more viruses that can be classified as cyber weapons, created by nation states or by private companies sponsored by them.

Though you can never be absolutely safe, our staff hasn't been threatened, and I hope it never will be. This may be because we fight malware, we don't conduct criminal investigations. That is what the police should do.

Re: Your secure OS
by lister king of smeg

You plan on making a secure OS for industrial/infrastructure systems; do you plan on basing it on preexisting open kernels, such as BSD, Linux, Haiku, or Mach? Will it be Unix/Posix like? Will it be a monolithic or micro kernel? Or are you thinking more of a hypervisor that hosts and monitors the guest OS for SCADA systems?

EK: It will not be based on Linux or any other existing OS. Existing operating systems weren't created with security in mind. Security is an extra option for many of them, and vulnerabilities are inevitable. Of course existing systems have a lot going for them — and we recognize that. But I think their level of security isn't high enough to cope with today's threats.

We're developing our OS at the microkernel level.

We support the POSIX standard to the extent it does not contradict our security principles. Our main target is to create a development platform for those interested in producing software or hardware with very high levels of security. As for a hypervisor, its creation is not our original intent, although we're not completely disregarding such a development path.

Re: Your exploit-free OS
by eldavojohn

Recently you confirmed you're working on an exploit-free OS following all the SCADA attacks. Among other things, you're claiming it is to be written from scratch but I can't find many details on what it's going to look like architecturally. You say: "Architecturally, the operating system is constructed in such a way that even a break-in into any of the components or applications loaded onto it won't allow an intruder to gain control over it or to run malicious code."

Could you expound on this? Are you writing this code or still in the design phase? Or better yet, could you compare it to something like, say, CentOS or Debian, and tell us how your architecture is going to be more secure? I understand you're scoping down the requirements of your OS to be more easily manageable, but the skeptic in me feels like it just can't be done. The cat and mouse game must be played in some form or fashion.


EK: This highly complex project is extremely time-consuming. We are still writing the code, but we already have several working prototypes.

Don't believe the skeptic inside you. It is possible. Our OS will guarantee that only functionality which has been declared explicitly, in advance, can run. I'm afraid I'm not ready to disclose much information at this stage — our rivals are watching. We are also currently collaborating with hardware manufacturers. Where there is a need for a superior level of security, we plan to provide an integral, reliable computer appliance developed by our own team of specialists. Regarding architecture, we're not restricting ourselves to anything specific such as x86 or ARM. The hardware will definitely have to meet some specific requirements, because it will have a direct bearing on our ability to ensure the required security guarantees. Follow our news — it's going to be interesting.

Re: The importance of programming language to SCADA security?
by Anonymous Coward

How important will the process of choosing a "language-based system" be to ensure the security of the operating system you envision? Choosing a type-safe language to create a memory-safe OS can help with the threats posed by the Internet or malware while also reducing some complex code used to get around a lack of type-safety in an OS. Will you be creating your own system or general purpose programming language to ensure this security in this way? If not, there are a few languages already available, or partially available, to choose from: Cyclone (an extension of the last version of C), Red/System (still under development), Euphoria (a system language with type-checking, and it uses simple words instead of punctuation to improve readability) and the combination of a type-safe Assembly that handles hardware and memory with managed C# that handles the rest of the kernel and the applications (like Microsoft implements in the Verve OS and might implement in a future Windows; that is, code-name Midori).

EK: Using a type-safe language is an interesting and promising approach, although we're not using one in our micro-kernel. We give higher priority to tailoring the OS architecture to our security principles, which do not depend on the implementation language. We'll share more details on the approaches we use later.

Re: Malware's history and future?
by Anonymous Coward

You've been in computer security a long time, and have seen many things come and go. DOS/bootsector viruses, Windows viruses, macro viruses, rise of worms to replace them, and now the commercialization of malware with botnets, extortion-ware and the targeted weaponised malware like the one that hit Iran (and who knows what else). What's changed? What's remained the same? What about the malware creators — has their motivation changed? Where do you believe things are headed?

EK: Twenty years ago malware was a curious toy for programmers. Ten years ago it was a criminal instrument for bad guys who wanted to earn some money. Today it's a cyber weapon for governments. And that is the main and the most dangerous tendency of recent years.

Recent malware — Stuxnet, Duqu, Flame, Gauss — proved that cyber weapons (i) are relatively cheap to produce, (ii) are effective, (iii) mostly go undetected, (iv) leave their authors anonymous, and (v) can be easily replicated. And they're hard to protect against. They look like perfect weapons to some governments. In the meantime, Pandora's box is now wide open.

The most dangerous aspect of cyber weapons is their unpredictable side effects. A worst case scenario is when a cyber weapon aimed at a specific industrial object — like, say, Stuxnet — isn't actually able to accurately pick out its victim, whether due to a mistake in the algorithm or a banal error in the code. As a result of such an attack, the targeted victim — let's say a nuclear power station — would not be the only one affected: all the other nuclear stations in the world built to the same design would be too. Sounds scary, doesn't it? And without control from an international body, it could become more than scary: catastrophic.

As concerns home/consumer users, the defining feature of the next decade will be an enormous shift to mobile OSes — and all the cyber criminals will be there already to greet them. The more financial transactions we conduct using smartphones, the more cyber criminals will target them. Future developments are likely to see more mobile drive-by downloads. There is also a high probability that the first mass worm for Android will appear, capable of spreading itself via text messages and sending out links to itself at some online app store. And we're likely to see more mobile botnets, of the sort created using the RootSmart backdoor.

Digital concepts young people should learn?
by davecrusoe

There's much talk about combating malware through technical solutions (e.g., adding transparency to communication, building increasingly sophisticated scanning systems, etc.). But what interests me is what we should be teaching our young people (children in primary and secondary school) with respect to the expertise we wished all adults possessed. In your estimation, what are 2-3 things that, if young people understood well, would help them excel in the face of cyber adversity (e.g., malware, privacy theft, etc.)?

EK: The most important advice I can give to young people is to always use your head. It might sound too simplistic, but if everyone who surfs online followed this rule the risks would be minimized. Don't download suspicious applications, and use social networks with caution. Most viruses are spread via social engineering, so never open links or files from unknown persons. Never ever! And even if you know the person, double-check before doing so. Another option is to open suspicious files or links in a sandbox.

Also, always use up-to-date quality security software. Free AV products are not a solution. Don't forget to update your system regularly. Install all the patches from the software developer and don't ignore update notifications.

By following these few simple rules you can minimize the risks online. As I mentioned, I've got standard Windows running with Internet Security, and I don't experience any problems with online surfing.



Comments Filter:
  • by eldavojohn (898314) * <eldavojohn&gmail,com> on Thursday December 13, 2012 @02:41PM (#42277049) Journal

    EK: No, no and no. We don't develop malware and we don't publish exploits. Both happen to be illegal — and amoral. I don't recommend you do either.

    Firemen don't start fires ...

    Actually, yeah they do [howstuffworks.com], it's called "live fire training." And since they do it in a controlled area, such as a shipping container or an abandoned house marked for demolition, with nothing of value and nobody at risk, I would think that you too would do that sort of work, considering you can set up a VM, have no risk, and try to get ahead of the virus authors. That's exactly what the author had me do when I read and reviewed the Metasploit guide [slashdot.org].

    Do you have a link to the law that says writing viruses is illegal? You're saying that if I set up a network of computers in my house disconnected from the internet and infect them to study how a botnet mutates, that would be illegal? How do you actively combat mutating malware without studying it and growing it internally?

    Doctors might not infect people but they certainly grow cultures of bad bacteria and study viruses that they keep in a lab. Honestly I was quite shocked by this knee jerk response.

    • by Golddess (1361003)

      Do you have a link to the law that says writing viruses is illegal?

      I don't know if any have passed, but I believe /. has covered a few attempts to do so in various countries. Or I could be misremembering those articles.

    • And forensic investigators will shoot or burn pigs to simulate human flesh, although they mostly use already slaughtered bodies.

      Personally, I think he protests too much.
    • by phantomfive (622387) on Thursday December 13, 2012 @03:09PM (#42277499) Journal
      Throughout the interview you can see he comes down on the side of control, authority, and oversight. He wants to have internet IDs required before you can do banking. He happily works with law enforcement to hunt down criminals. He locks down computers to have a white list of only approved files. He doesn't think you can be safe on a computer without malware protection.

      Given all that, it's not surprising he doesn't like grey-hat activities at all.
    • by Dan East (318230) on Thursday December 13, 2012 @04:15PM (#42278711) Homepage Journal

      Perhaps that is a bad analogy on both parts. Firemen practice putting out controlled fires for three reasons:
      1) It is an extremely dangerous occupation, so training in a controlled environment is necessary. Obviously there is no personal risk to those fighting malware, thus there is no need to create fake "mild" malware to cut their teeth with. They have all the isolated VMs, etc, they need to experiment risk-free with no negative ramifications from the malware.
      2) Actual fires are a pretty rare thing, thus the only way for a lot of firefighters to get experience is to create a controlled fire. As Kaspersky said, new malware is being developed all the time, thus there is always a new "fire to fight" in the world of malware.
      3) Firefighting is a real-time occupation. You can't rewind a previous fire and fight it again and again to let others gain experience from it. Kaspersky would have an absolutely immense database of malware that could be loaded into a VM and experimented with, or used to teach various techniques to new programmers, etc. They can "re-live" any malware they want as needed.

    • by loufoque (1400831)

      Do you have a link to the law that says writing viruses is illegal?

      It's definitely not illegal.
      No law can ever make it illegal to write any code you want on your own machine.

      Publishing malware, however, might be illegal in certain cases, as it might be considered a weapon in some jurisdictions.

  • by Anonymous Coward on Thursday December 13, 2012 @02:55PM (#42277259)

    Eugene Kaspersky: I'm afraid my answer's nothing special — I've got Windows 7 on my laptop ...

    He runs Windows 7 for God's sake! His opinions and analysis mean NOTHING!

    ...

    ...Clunky? I'm working on my Ad Hominem.

    It's good enough for Talk Radio and Fox News, but I'm looking for pointers on using it against ... let's say ... more sophisticated folks.

    • Re: (Score:2, Insightful)

      by kc67 (2789711)
      "To put it short, I've no need for any other operating systems like Ubuntu or Mac OS, and some software I need is available only under Windows." Just because he runs Win7 does not mean his opinions are meaningless. He runs one of the largest computer security companies in the world... I think his opinion outweighs yours.
  • by sneakyimp (1161443) on Thursday December 13, 2012 @02:58PM (#42277311)
    I have no doubt of Mr. Kaspersky's chops (Dr. Kaspersky?), but I think he may not be entirely honest regarding his company's association with governments. In fact he says this:

    Moreover, all the world's leading security companies — Symantec, McAfee/Intel, and Kaspersky Lab — we all collaborate with law enforcement bodies in our own countries and worldwide — to help fight cybercrime.

    And then later basically contradicts himself:

    This may be because we fight malware, we don't conduct criminal investigations.

    The definition of "cybercrime" is where things get a bit tricky, no?

    • by Cinder6 (894572)

      "Helping" fight cybercrime probably just means that they act as a consultant for law enforcement agencies--the guys who actually conduct the criminal investigations.

      • A juror also doesn't conduct a criminal investigation, but is arguably more pivotal to the case and certainly much easier to intimidate than a law enforcement official. I would imagine that intimidating a bookish security consultant might effectively curtail a criminal investigation, and furthermore these consultants are less able to defend themselves against intimidation than a law enforcement official. I would argue that the circumstances of a security expert beg for collusion between said expert and t
  • by Maximum Prophet (716608) on Thursday December 13, 2012 @02:59PM (#42277325)
    Multics was the grandfather of Unix, and had many security features built in that were left out of Unix. It did require hardware support to keep the rings separate, but that's a good thing.
    • by VortexCortex (1117377) <VortexCortex@nOs ... t-retrograde.com> on Thursday December 13, 2012 @03:38PM (#42278015)

      Nope. And I'll tell you why: Stack Smashing. Anything that uses the same stack for function parameters, local variables AND code pointers is fucked. Unfortunately, separating the call stack from the data stack has performance penalties, considering that x86 evolved to include instructions explicitly to speed up the bad practice in the first place: ENTER / LEAVE. I'm developing a secure OS too, as a hobby project. Even in user mode I can secure the call stack; a called function cannot affect the return address. This also leads to dead-simple co-routine implementations, which C is sadly lacking, so I've abandoned it for now (since I'd have to re-implement much of the base of the compiler anyway). Eventually I'll implement C using the foundations of my security-driven low-level programming language.

      The point is that we continually sacrifice security for speed in modern OSs. Hell, even the browser that EK loves (Chrome) downloads arbitrary code as data, compiles said data to machine instructions, marks the data as code and EXECUTES ARBITRARY REMOTE CODE. Instead of abandoning the horribly designed JavaScript (ECMAScript) for a language designed with efficiency in mind, we're sacrificing our security. In my OS, the OS compiles software from cross-platform bytecode into machine code at install time, and cryptographically signs it. This lets you distribute applications in a cross-platform manner, and allows the OS to inspect code for used features before running any of it, without the performance drawback of a VM. An embedded VM allows the OS to run less-trusted code as bytecode and provides sandboxing facilities at some performance cost. This "evaluation mode" solves the plugin problem whereby a .DLL / .SO based malicious plugin can take full control of the host program, without resorting to separate processes and IPC. IMO, every OS needs to be integrated with its compiler, rather than relying solely on hardware for security.
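      The install-time compile-and-sign idea can be sketched roughly like this (everything here is illustrative: the function names are invented, the "compiler" is a placeholder, and a real OS would use asymmetric signatures with a key in protected storage rather than an in-process HMAC key):

```python
import hashlib
import hmac
import os

# Hypothetical install-time step: the OS "compiles" bytecode to native code,
# then signs the result so later tampering is detectable at load time.
MACHINE_KEY = os.urandom(32)  # a real OS would keep this in protected storage


def compile_to_native(bytecode: bytes) -> bytes:
    # Placeholder for real code generation; here we just tag the bytes.
    return b"NATIVE:" + bytecode


def install(bytecode: bytes) -> tuple[bytes, bytes]:
    native = compile_to_native(bytecode)
    tag = hmac.new(MACHINE_KEY, native, hashlib.sha256).digest()
    return native, tag


def load(native: bytes, tag: bytes) -> bytes:
    # Refuse to run anything whose signature no longer matches.
    expected = hmac.new(MACHINE_KEY, native, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise RuntimeError("binary modified since install")
    return native


native, tag = install(b"\x01\x02\x03")
assert load(native, tag) == native
try:
    load(native + b"\x00", tag)  # simulated post-install tampering
except RuntimeError:
    print("tampered binary rejected")
```

      The point of signing at install time rather than at download time is that the machine itself vouches for the exact native code it generated, so any later modification on disk is caught before execution.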

      In short: There is much innovation to be had, but we must ignore parts of POSIX to do so, and no, looking to the past isn't going to help much. We need a clean slate to solve the issues that those older OSs never considered. Past languages might be of interest, though; e.g., FORTH uses separate call and data stacks.
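      The single-stack vs. split-stack point can be illustrated with a toy model (all names are invented, and real stack smashing involves native frames and addresses, not Python lists):

```python
# Toy model of why a separate return stack defeats classic stack smashing.
# "Stacks" are lists; an "overflow" writes one slot past a local buffer.

def run(split_stacks: bool) -> str:
    data_stack = []
    return_stack = []  # only used when split_stacks is True

    # Simulated call: push the return address, then the callee's local buffer.
    if split_stacks:
        return_stack.append("ret:main")
        data_stack.extend(["buf0", "buf1"])
    else:
        data_stack.extend(["ret:main", "buf0", "buf1"])  # return addr shares the stack

    # Simulated overflow: write one element too many, walking back past buf0
    # (mimicking a stack that grows downward toward the saved return address).
    buf_start = len(data_stack) - 2
    for i in range(3):
        idx = buf_start + 1 - i
        if 0 <= idx < len(data_stack):
            data_stack[idx] = "ATTACKER"

    # Report what the "return address" now holds.
    return return_stack[-1] if split_stacks else data_stack[0]


print(run(split_stacks=False))  # single stack: return address clobbered
print(run(split_stacks=True))   # split stacks: return address untouched
```

      With one shared stack, the overflow reaches the saved return address; with the return addresses on their own stack, the same out-of-bounds write can only corrupt data, which is exactly the property FORTH-style separate stacks (and shadow-stack hardware features) aim for.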

      • by Anonymous Coward

        You might be interested in Secure Virtual Architecture: http://sva.cs.illinois.edu

        I also recommend that you check out the Memory Safety Menagerie (http://sva.cs.illinois.edu/menagerie). It contains papers on a number of techniques for mitigating memory safety attacks on commodity software running on commodity hardware.

        Disclaimer: I'm the lead SVA developer.

      • by loufoque (1400831)

        EXECUTES ARBITRARY REMOTE CODE

        It's not arbitrary, it's signed.

        Past languages might be of interest though, eg: FORTH uses separate call and data stacks.

        No programming language can solve the problem of people being unable to write code without exploits.
        Stack smashing isn't the only problem.

        • by drinkypoo (153816)

          EXECUTES ARBITRARY REMOTE CODE

          It's not arbitrary, it's signed.

          Oh, good. Signing keys are never compromised or abused.

  • by swb (14022)

    ...but I find his answer less than compelling, frankly.

    I find it hard to believe that he can operate in Russia totally freely without being leaned on by the FSB or Putin's administration.

    They threw Khodorkovsky in the gulag over money and have killed investigative journalists, you mean to tell me that a top computer security guy can just do as he pleases?

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      I find it hard to believe that he can operate in Russia totally freely without being leaned on by the FSB or Putin's administration.

      They threw Khodorkovsky in the gulag over money and have killed investigative journalists, you mean to tell me that a top computer security guy can just do as he pleases?

      I wonder why he might not want to acknowledge it then.

    • Re: (Score:2, Interesting)

      I hear you. He answered my question as well in a similar fashion, and perhaps I'm too cynical but "fighting malware" = "disrupting botnets" = "ticking off organized crime". One cannot operate in Moscow and dodge that crowd.

      Nonetheless I appreciate the guy taking time to answer the questions and provide his views. Can't say I was expecting anything controversial although I was hoping for a surprise or two.

      Hey Editors, how about getting Mikko Hyppönen to answer some questions next time?

    • by _Shad0w_ (127912)

      I'm surprised you're not pointing to the fact that the university he went to in the USSR - The Institute of Cryptography, Telecommunications, and Computer Science - was sponsored by the KGB and Russian Ministry of Defence.

    • "have killed investigative journalists" - do you have any proof? There are lists of "killed" journalists, but looking at those lists, many of the "journalists" had worked as journalists only in the past, years before being killed. Others did not touch politics at all and were killed because they found criminal activity in some firm.
  • Sounds very much like he is describing the Genode operating system framework.
  • Firemen don't start fires, doctors don't infect people, and antivirus companies don't create viruses. Any at all.

    We detect 200,000 new threats every day as it is. Keeping on top of them all is quite a task. And another thing — we don't hire ex-hackers. Our business is built on trust, and we apply the highest standards in sensitive areas of our work: in malware analysis, product development, etc. Like a homicide detective doesn't need to kill to investigate a murder more effectively, a good expert doesn't need to be on the dark side to analyze viruses and predict what may come next.

    Wow, he lays it on pretty heavy here. Hollywood definition of "hacker" and the "abstinence-only sex ed. taught by a virgin" approach to computer security.

    BTW, firemen start fires in training, research doctors infect people in testing, and any security company that wants to find and patch exploits before the bad guys do should be creating viruses (for in-house use). If that's not something in the scope of his business, that's one thing, and it would be fair for him to say it, but I'm really surprised he went ou

  • "Do you have a vacation home in Belize"?
  • The Apple marketing people may not like it, but it's time to admit it — yes guys, your system is as vulnerable as Windows. Don't ignore the lesson of Flashfake. Think serious about security, not just different [sic].

    Because whoever posted this doesn't seem to get it: Apple's marketing slogan at around the turn of the millennium was "Think Different". It's a joke, not an unintentional grammatical error.

    • by LodCrappo (705968)

      Actually, it is very much an appropriate use of "sic". The only problem here is that sic doesn't mean quite what you think it does.

      From
      http://en.wikipedia.org/wiki/Sic [wikipedia.org]

      The Latin adverb sic ("thus"; in full: sic erat scriptum, "thus it had been written") added immediately after a quoted word or phrase (or a longer piece of text), indicates that the quotation has been transcribed exactly as found in the original source, complete with any erroneous spelling or other nonstandard presentation.

  • Safe Run missing. (Score:4, Interesting)

    by godel_56 (1287256) on Thursday December 13, 2012 @06:57PM (#42281391)
    From TFA:

    For the suspicious ones, a further stage of analysis can be performed by running them in Safe Run — an isolated sandbox environment from which maliciousness can't contaminate the computer's environment proper.

    He doesn't know his own product. Safe Run was dumped in the latest version of KIS and changed to Safe Money, which sandboxes your browser only, for connection to specific white-listed web sites such as banks.

    Looks like it's back to Sandboxie:

    http://www.sandboxie.com/index.php?DownloadSandboxie

  • Too many false positives. I don't have anti-virus software installed; HIPS software is the best, like OP said.
