
Security in IT systems: Machine learning for security

Type: Master Seminar
Semester: Wintersemester 2013/2014
Location: MI - 00.09.038 (5609.EG.038), Alan Turing
Time: 16:00 - 18:00
Start: 16.10.2013
Language: English
Preliminary Discussion: 04.07.2013, 16:00 - 17:00, Location: 00.09.038
Lecturer: Prof. Dr. Alexander Pretschner, Prachi Kumari
SWS: 2
ECTS: 4
LvNr: 0000000570
Links: Modul IN2107 Masterseminar
Contact: Prachi Kumari

Note:

Seminar presentations start on 8th January 2014. Until then, students participate in the seminar through meetings with their supervisors and regular submissions.

 

Rules for participation and registration

  1. Plagiarism of any form (blatant copy-paste, summarizing someone else's ideas or results without reference, etc.) will result in immediate expulsion from the course!
  2. All submissions are mandatory.
  3. Late submissions will incur penalties.
  4. Non-adherence to the submission guidelines will incur penalties.
  5. Slides must be discussed with the supervisor at least two weeks before the presentation.
  6. Participation in all seminar presentations is mandatory.
  7. Registration for the seminar started in the preliminary meeting, where students selected topics of their choice. Once allotted a topic,
    • students must confirm their acceptance of the topic and their participation in the seminar by 15th July 2013 at the latest,
    • after their confirmation, they receive the assigned papers in a separate e-mail, and
    • we register them in TUMonline.

Important: Students must not register themselves for the seminar in TUMonline. If they do so and fail to get a topic, the supervisors and administrators of the course will not be held responsible in any way.

If you are interested in any of the seminar topics, please write to Prachi Kumari.



Seminar Instructions

The goal of a seminar is to introduce students to the major constituent of the scientific method that is concerned with critically reading, understanding, summarizing, explaining and presenting existing scientific papers. Students read one or more papers that are assigned to them by their supervisors; they may find and read further relevant articles in the domain. Understanding the central statements of a paper includes highlighting, complementing and explaining possibly implicit assumptions as well as deliberately or accidentally incomplete chains of argumentation; it typically also includes coming up with one's own examples.

This understanding is reflected in the written exposé. This exposé shall include the problem that is tackled in the original papers as well as the central assumptions, results, statements and arguments that were identified while reading the original paper. Merely paraphrasing the original paper is not sufficient. Ideally, the exposé contains a section with the student's own assessment in terms of whether the paper is relevant, reasonable, aesthetic, complete, sound, etc.

Students need not defend the ideas presented in the paper at all costs, neither in the exposé nor in the presentation. They may be critical! However, do not forget that the original paper was usually written by senior scientists who know their domain, so politeness, caution, respect and very clean arguments are indispensable when criticizing.

The subject of the presentation is the problem, the underlying assumptions, and the solution. Three slides ("The Good, the Bad, the Ugly") demonstrate the student's critical reflection on the subject.

Structure

    The presentation must be clearly structured. A useful template is the following:

    1. agenda
    2. introduction to the field
    3. precise problem statement
    4. development of the solution to this problem
    5. summary and assessment ("the good, the bad, the ugly," see above)
    6. references.

    It is of course possible to deviate from this structure; the six mentioned topics should, however, have their place in every presentation.

From papers to presentations

    The following steps help with coming up with a good presentation:

    1. structure the content in small portions that can be presented on one slide
    2. present content intuitively, e.g., by images, diagrams, etc.
    3. find adequate examples that are short, comprehensible and relevant
    4. choose a level of presentation that is in line with the audience (MS or BS students)

    Experience shows that longer program listings are hard to present - avoid them or, if necessary, concentrate on the essential parts and use pseudocode.

Slides

    Slides are a useful means to complement the audio channel (what you say) with a visual channel. A few heuristics have proved useful:

    1. every slide should be self-contained with a title that summarizes the point of the slide
    2. slides should be well-structured; too much material on one slide will overwhelm the audience
    3. avoid entire sentences, use action words or graphics instead - the audience usually cannot listen to your explanations and read entire sentences at the same time. Moreover, avoid simply reading your slides to the audience.
    4. use large fonts so that people in the back rows can read the slides
    5. in case you use slides with an overhead projector: the landscape format is usually preferable to portrait, because the view of those in the back rows is less likely to be blocked by the heads of the people sitting in front of them
    6. don't use too many slides - as a rule of thumb, presenting one slide takes between two and five minutes, so a 30-minute talk typically needs roughly six to fifteen slides.

Presentation

    The presentation itself should take 30 minutes. A few pieces of advice:

    1. speak slowly, loudly, and clearly
    2. try to connect to your audience - do not look at the wall. This helps with getting an idea of whether or not the audience can follow your reasoning.
    3. if possible, do answer questions. Questions are usually not meant to harass you; instead, they show that you may not have been entirely clear. In case a question disrupts your presentation because it is meant as a remark or an aside, you may very well ask to postpone the discussion to the end of your talk.

Submissions

We expect your paper to be at most 15 pages in Springer LNCS style; we will not accept any other format. All submissions must be PDF files; no other file formats are acceptable. The presentation will be 30 minutes plus 15 minutes of discussion.

 



Schedule 

Date & time / Deadline | Annotation
Nov 14th, 23:59 | Send your paper to Prachi Kumari.
Nov 15th, 23:59 | You will get another student's paper. It is your task to review this paper.
Nov 22nd, 23:59 | Send your review of the other student's paper to Prachi Kumari. In turn, you will get a review of your paper on the next working day.
Nov 29th, 23:59 | Improve your paper using the review you received. Send the improved version of your paper to Prachi Kumari.
Dec 10th, 23:59 | You will receive a review of your paper from your supervisor.
Dec 20th, 23:59 | Send the final version of your paper to Prachi Kumari.
From Jan 8th, 16:00 | Presentation sessions take place every week in Room 00.09.038 (the exact schedule depends on the number of students participating in the course).


Individual presentations will take place according to the following schedule in room Alan Turing (MI 00.09.038).

 Attending each presentation is mandatory for the seminar participants. Students must read the final submissions of their colleagues and participate in the discussions. 

Date & time | Speaker | Final submission
Jan 13th, 14:00 | Adria Puigdomenech Badia | Download
Jan 13th, 14:45 | André S. Enger | Download
Jan 14th, 09:00 | Sepideh Mesbah | Download
Jan 14th, 09:45 | Omar Tarabai | Download
Jan 20th, 14:00 | Dmytro Lapchuk | Download
Jan 20th, 14:45 | Matthias Fischer | Download
Jan 21st, 09:00 | Ankitaa Bhowmick | Download
Jan 21st, 09:45 | Saahil Ognawala | Download
Jan 21st, 10:30 | Urmimala Chakraborty | Download

 



Topics

Machine learning is the study of algorithms that learn patterns and characteristic structures from example data. In this seminar, students gain insight into how machine learning techniques are applied in a security context in order to detect attacks, anomalies, intrusions, or non-specified behaviour in software systems. This semester, the following topics are on offer:
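
To make the supervised setting concrete before diving into the individual topics, the sketch below trains a toy classifier on labelled example data and uses it to flag suspicious connections. It is a minimal illustration only: the feature names, the synthetic data and the choice of scikit-learn are assumptions made for this sketch and are not taken from any of the assigned papers.

    # Purely illustrative sketch (not from the seminar papers): train a small
    # classifier on synthetic, made-up connection features and use it to flag
    # "malicious" sessions. Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)

    # Hypothetical per-connection features: [duration_s, bytes_sent, failed_logins]
    benign = np.column_stack([
        rng.normal(30, 10, 500),         # short sessions
        rng.normal(5_000, 2_000, 500),   # moderate traffic
        rng.poisson(0.1, 500),           # almost no failed logins
    ])
    malicious = np.column_stack([
        rng.normal(300, 100, 50),        # long-lived sessions
        rng.normal(50_000, 20_000, 50),  # heavy traffic
        rng.poisson(5, 50),              # many failed logins
    ])

    X = np.vstack([benign, malicious])
    y = np.array([0] * len(benign) + [1] * len(malicious))  # 1 = malicious

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # Learn a decision boundary from the labelled examples and evaluate it on
    # held-out data, the way detectors in the listed papers are typically evaluated.
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test),
                                target_names=["benign", "malicious"]))

The papers in the topic lists below replace these toy features with, for example, traffic statistics, opcode patterns or API call signatures, and the toy decision tree with the learning techniques studied in the respective topic.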

Machine learning for policies

Machine learning for Access Control. (supervisor: Enrico Lovat)

References:
[1] E. Martin and Tao Xie. Inferring access-control policy properties via machine learning. In: Policies for Distributed Systems and Networks (POLICY 2006), Seventh IEEE International Workshop on. IEEE, 2006, 4 pp.

[2] Lujo Bauer, Yuan Liang, Michael K. Reiter, and Chad Spensky. Discovering access-control misconfigurations: New approaches and evaluation methodologies. In: Proceedings of the second ACM conference on Data and Application Security and Privacy. ACM, 2012, pp. 95-104.

Machine learning for security policies. (supervisor: Prachi Kumari)

References:
[1] Patrick Gage Kelley, Paul Hankes Drielsma, Norman Sadeh, and Lorrie Faith Cranor. User-controllable learning of security and privacy policies. In: Proceedings of the 1st ACM workshop on Workshop on AISec. ACM, 2008, pp. 11-18.

[2] Jason Cornwell, Ian Fette, Gary Hsieh, Madhu Prabaker, Jinghai Rao, Karen Tang, Kami Vaniea, Lujo Bauer, Lorrie Cranor, Jason Hong, Bruce McLaren, Mike Reiter, and Norman Sadeh. User-Controllable Security and Privacy for Pervasive Computing. In: 8th IEEE Workshop on Mobile Computing Systems and Applications, 2007.

Security Policy Mining (supervisor: Tobias Wüchner)

References:
[1] JeeHyun Hwang, Tao Xie, Vincent Hu, and Mine Altunay. Mining likely properties of access control policies via association rule mining. In: Data and Applications Security and Privacy XXIV. Springer Berlin Heidelberg, 2010, pp. 193-208.

[2] Ian Molloy, Youngja Park, and Suresh Chari. Generative models for access control policies: applications to role mining over logs with attribution. In: Proceedings of the 17th ACM symposium on Access Control Models and Technologies. ACM, 2012, pp. 45-56.

[3] Zhongyuan Xu and Scott D. Stoller. Mining parameterized role-based policies. In: Proceedings of the third ACM conference on Data and application security and privacy. ACM, 2013, pp. 255-266.

 

Machine learning for (distributed) systems

Intrusion detection meets machine learning. (supervisor: Alexander Fromm)

References:
[1] Chih-Fong Tsai, Yu-Feng Hsu, Chia-Ying Lin, and Wei-Yang Lin. Intrusion detection by machine learning: A review. Expert Systems with Applications, 2009, vol. 36, no. 10.

[2] Thuy T. T. Nguyen and Grenville Armitage. A survey of techniques for internet traffic classification using machine learning. IEEE Communications Surveys & Tutorials, 2008, vol. 10, no. 4, pp. 56-76.

[3] Robin Sommer and Vern Paxson. Outside the closed world: On using machine learning for network intrusion detection. In: Security and Privacy (SP), 2010 IEEE Symposium on. IEEE, 2010, pp. 305-316.

[4] Hai Thanh Nguyen and Katrin Franke. Adaptive Intrusion Detection System via online machine learning. In: Hybrid Intelligent Systems (HIS), 2012 12th International Conference on. IEEE, 2012, pp. 271-277.

[5] T. P. Tran, P. Tsai, T. Jan, and X. Kong. Network Intrusion Detection using Machine Learning and Voting techniques. Machine Learning, 2010, pp. 267-290.

Malware detection by machine learning. (supervisor: Alexander Fromm)

References:
[1] Asaf Shabtai, Uri Kanonov, Yuval Elovici, Chanan Glezer, and Yael Weiss. "Andromaly": a behavioral malware detection framework for android devices. Journal of Intelligent Information Systems, 2012, vol. 38, no. 1, pp. 161-190.

[2] John Demme, Matthew Maycock, Jared Schmitz, Adrian Tang, Adam Waksman, Simha Sethumadhavan, and Salvatore Stolfo. On the feasibility of online malware detection with performance counters. In: Proceedings of the 40th Annual International Symposium on Computer Architecture. ACM, 2013, pp. 559-570.

[3] Igor Santos, Jaime Devesa, Felix Brezo, Javier Nieves, and Pablo G. Bringas. OPEM: A static-dynamic approach for machine-learning-based malware detection. In: International Joint Conference CISIS'12-ICEUTE'12-SOCO'12 Special Sessions. Springer Berlin Heidelberg, 2013, pp. 271-280.

[4] Yan Zhou and Meador Inge. Malware detection using adaptive data compression. In: Proceedings of the 1st ACM workshop on Workshop on AISec. ACM, 2008, pp. 53-60.

[5] Mamoun Alazab, Sitalakshmi Venkatraman, Paul Watters, and Moutaz Alazab. Zero-day malware detection based on supervised learning algorithms of API call signatures. In: Proceedings of the Ninth Australasian Data Mining Conference, Volume 121. Australian Computer Society, Inc., 2011, pp. 171-182.

Machine learning for botnet traffic detection. (supervisor: Florian Kelbert)

References:
[1] C. Livadas, R. Walsh, D. Lapsley, and W. T. Strayer. Using machine learning techniques to identify botnet traffic. In: Local Computer Networks, Proceedings 2006 31st IEEE Conference on. IEEE, 2006, pp. 967-974.

[2] W. T. Strayer, R. Walsh, C. Livadas, and D. Lapsley. Detecting botnets with tight command and control. In: Local Computer Networks, Proceedings 2006 31st IEEE Conference on. IEEE, 2006, pp. 195-202.

Machine learning for detecting malicious web content. (supervisor: Florian Kelbert)

References:
[1] Yung-Tsung Hou, Yimeng Chang, Tsuhan Chen, Chi-Sung Laih, and Chia-Mei Chen. Malicious web content detection by machine learning. Expert Systems with Applications, 2010, vol. 37, no. 1, pp. 55-60.

[2] Justin Ma, Lawrence K. Saul, Stefan Savage, and Geoffrey M. Voelker. Beyond blacklists: learning to detect malicious web sites from suspicious URLs. In: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2009, pp. 1245-1254.

Machine learning for data leakage prevention. (supervisor: Tobias Wüchner)

References:
[1] Yan Luo and Jeffrey J. P. Tsai. A framework for extrusion detection using machine learning. In: Object Oriented Real-Time Distributed Computing (ISORC), 2008 11th IEEE International Symposium on. IEEE, 2008, pp. 83-88.

[2] Polina Zilberman, Shlomi Dolev, Gilad Katz, Yuval Elovici, and Asaf Shabtai. Analyzing group communication for preventing accidental data leakage via email. In: Proceedings of the 2010 Workshop on Collaborative Methods for Security and Privacy (CollSec'10), Washington, DC, 2010.

[3] Michael Hart, Pratyusa Manadhata, and Rob Johnson. Text classification for data loss prevention. In: Privacy Enhancing Technologies. Springer Berlin Heidelberg, 2011, pp. 18-37.

[4] Landon P. Cox and Peter Gilbert. Redflag: Reducing inadvertent leaks by personal machines. Tech. Rep. TR-2009-02, Duke University, 2009.

 

Machine learning for testing

Machine learning for Testing Protocols and Web Applications. (supervisor: Matthias Büchler)

References:
[1] Guoqiang Shu and David Lee. Testing security properties of protocol implementations - a machine learning based approach. In: Distributed Computing Systems, 2007. ICDCS'07. 27th International Conference on. IEEE, 2007, pp. 25-25.

[2] S. R. Choudhary, M. R. Prasad, and A. Orso. Crosscheck: Combining crawling and differencing to better detect cross-browser incompatibilities in web applications. In: Software Testing, Verification and Validation (ICST), 2012 IEEE Fifth International Conference on. IEEE, 2012, pp. 171-180.

[3] Hyunsang Choi, Bin B. Zhu, and Heejo Lee. Detecting malicious web links and identifying their attack types. In: Proceedings of the 2nd USENIX conference on Web application development. USENIX Association, 2011, pp. 11-11.

Finding security vulnerabilities in software source code using machine learning. (supervisor: Sebastian Banescu)

References:
[1] Aram Hovsepyan, Riccardo Scandariato, Wouter Joosen, and James Walden. Software vulnerability prediction using text analysis techniques. In: Proceedings of the 4th international workshop on Security measurements and metrics. ACM, 2012, pp. 7-10.

[2] A. Shabtai, R. Moskovitch, C. Feher, S. Dolev, and Y. Elovici. Detecting unknown malicious code by applying classification techniques on OpCode patterns. Security Informatics, 2012, vol. 1, no. 1, pp. 1-22.

[3] Stephan Neuhaus, Thomas Zimmermann, Christian Holler, and Andreas Zeller. Predicting vulnerable software components. In: Proceedings of the 14th ACM conference on Computer and communications security. ACM, 2007, pp. 529-540.

[4] Fabian Yamaguchi, Felix Lindner, and Konrad Rieck. Vulnerability extrapolation: assisted discovery of vulnerabilities using machine learning. In: Proceedings of the 5th USENIX conference on Offensive technologies. USENIX Association, 2011, pp. 13-13.

 

Secure machine learning

Defenses for adversarial machine-learning. (supervisor: Sebastian Banescu)

References:
[1] Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, and J. D. Tygar. Can machine learning be secure? In: Proceedings of the 2006 ACM Symposium on Information, computer and communications security. ACM, 2006, pp. 16-25.

[2] Ling Huang, Anthony D. Joseph, Blaine Nelson, Benjamin I. P. Rubinstein, and J. D. Tygar. Adversarial machine learning. IEEE Internet Computing, 2011, vol. 15, no. 5, pp. 4-6.

[3] Pavel Laskov and Richard Lippmann. Machine learning in adversarial environments. Machine Learning, 2010, vol. 81, no. 2, pp. 115-119.

[4] Benjamin I. P. Rubinstein, Blaine Nelson, Ling Huang, Anthony D. Joseph, Shing-hon Lau, Satish Rao, Nina Taft, and J. D. Tygar. ANTIDOTE: understanding and defending against poisoning of anomaly detectors. In: Proceedings of the 9th ACM SIGCOMM conference on Internet measurement conference. ACM, 2009, pp. 1-14.

 
