Lectures and bios

Foundations of AI

Lecturer: Danilo Giordano is an Assistant Professor in the Department of Control and Computer Engineering at the Politecnico di Torino. His research interests focus on data analytics in Small Data and Big Data environments using statistical and Machine Learning (ML) techniques. In particular, he is interested in the development and application of ML in the context of network measurements and predictive maintenance. His interests in data analytics also include the study of current and future developments in shared mobility in smart cities. He has co-authored more than 40 conference and journal papers and is a member of the editorial board of the Computer Networks journal. He was awarded the Best Student Paper Award at the ITC conference and the IETF Applied Networking Research Prize in 2016. He also spent periods abroad at the CAIDA research center at UC San Diego, developing Big Data solutions for monitoring network traffic, and at Narus Inc., a San Francisco Bay Area company, investigating ML anomaly detection methods.

Abstract: Machine Learning (ML) and Artificial Intelligence (AI) have become ubiquitous technologies in almost every important aspect of our lives. The development of ML/AI applications starts from raw data, from which meaningful and previously unknown knowledge must be extracted automatically. However, such raw data is usually fraught with problems: poor quality, high dimensionality, privacy constraints, class imbalance, and so on. For this reason, a large part of the effort in developing an effective ML/AI application lies in properly pre-processing the raw data before applying any ML/AI algorithm. In this lecture, we will give an overview of current trends in ML/AI and present the main problems and solutions related to data pre-processing and candidate ML algorithms, followed by a short hands-on practice session.
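
Below is a minimal sketch of the kind of pre-processing pipeline the lecture covers, assuming scikit-learn and pandas; the feature names and the synthetic data are illustrative assumptions, not material from the course.

```python
# Hedged sketch: impute missing values, scale numeric features, one-hot
# encode categoricals, then fit a classifier. Columns are hypothetical.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "bytes": rng.exponential(1000, 500),       # numeric, heavily skewed
    "proto": rng.choice(["tcp", "udp"], 500),  # categorical
    "label": rng.integers(0, 2, 500),          # synthetic target
})
df.loc[df.sample(frac=0.1, random_state=0).index, "bytes"] = np.nan  # data-quality issue

numeric = Pipeline([("impute", SimpleImputer(strategy="median")),
                    ("scale", StandardScaler())])
prep = ColumnTransformer([("num", numeric, ["bytes"]),
                          ("cat", OneHotEncoder(), ["proto"])])
model = Pipeline([("prep", prep), ("clf", RandomForestClassifier(random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(
    df[["bytes", "proto"]], df["label"], test_size=0.3, stratify=df["label"])
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```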


Domain generation algorithms (DGA) detection using ML

Lecturer: Francesca Soro is a Scientist at the Center for Digital Safety and Security at the AIT Austrian Institute of Technology. She received her PhD in Electrical, Electronic and Telecommunication Engineering from Politecnico di Torino, Italy, in January 2022. Her research focuses mainly on applications of Big Data and machine learning in network traffic monitoring and cybersecurity, with the aim of detecting anomalies and rare events. She has participated in several practical cybersecurity exercises for organisations such as the International Atomic Energy Agency (IAEA) and the World Institute for Nuclear Security (WINS).

Abstract: Domain Generation Algorithms (DGAs) are widely used by botnets to create new and more resilient command and control infrastructures. The detection of DGA-generated domains in DNS traffic has proved to be of fundamental importance for recognizing such botnets and putting the necessary countermeasures in place. Given the significant volume of DNS traffic crossing network infrastructures, the manual detection of DGA domains is often practically impossible. For this reason, a large number of Machine Learning techniques relying on different approaches have been developed. In this talk, we will present an overview of the ML techniques used for DGA detection and botnet classification in the SOCCRATES project (https://www.soccrates.eu/), followed by a short practical tutorial.
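
As a taste of the practical part, here is a hedged sketch of lexical-feature DGA detection; the feature set, the tiny hand-made domain lists, and the classifier are illustrative assumptions, not the SOCCRATES pipeline.

```python
# Hedged sketch: classify domain names by simple lexical features
# (name length, character entropy, digit ratio). Real systems train on
# large labelled feeds of benign and DGA-generated domains.
import math
from collections import Counter
from sklearn.linear_model import LogisticRegression

def features(domain):
    name = domain.split(".")[0]
    counts = Counter(name)
    entropy = -sum((c / len(name)) * math.log2(c / len(name))
                   for c in counts.values())
    digit_ratio = sum(ch.isdigit() for ch in name) / len(name)
    return [len(name), entropy, digit_ratio]

benign = ["google.com", "wikipedia.org", "politico.eu", "github.com"]
dga = ["xjw9kq2rpl.net", "q8zt4nvb1s.com", "mf0y7hwkd3.org", "zr5pj2xqv8.info"]

X = [features(d) for d in benign + dga]
y = [0] * len(benign) + [1] * len(dga)
clf = LogisticRegression().fit(X, y)
print(clf.predict([features("kd83jq1zpw.com")]))  # likely [1], i.e. DGA-like
```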


Towards Machine Learning Models that We Can Trust: Hacking and (properly) Testing AI

Lecturer: Maura Pintor is an Assistant Professor at the PRALab, in the Department of Electrical and Electronic Engineering of the University of Cagliari, Italy.  She received her Ph.D. in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations," provides a framework for optimizing and debugging adversarial attacks. She was a visiting student at Eberhard Karls Universitaet Tuebingen, Germany, from March to June 2020 and at the Software Competence Center Hagenberg (SCCH), Austria, from May to August 2021.

Abstract: As current data-driven AI and machine-learning methods have not been designed with attackers and security in mind, it is important to evaluate these technologies properly before deploying them in the wild. To understand AI's sensitivity to such attacks and counter their effects, machine-learning model designers craft worst-case adversarial perturbations and test them against the model under evaluation. However, many of the proposed defenses have been shown to provide a false sense of security due to failures of the attacks rather than actual robustness. To this end, we will dive into the literature on machine-learning evaluation in the context of evasion attacks and analyze (and reproduce) failures of the past to prevent these mistakes from happening again.
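
To make the notion of a worst-case perturbation concrete, below is a minimal FGSM-style evasion sketch against a linear classifier in plain NumPy; the synthetic data, the model, and the step size eps are assumptions for illustration, not the evaluation framework discussed in the lecture.

```python
# Hedged sketch: one-step evasion attack (FGSM-style) on logistic
# regression. For the log-loss, the input gradient is (p - y) * w.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.array([2.0, 2.0]) * rng.integers(0, 2, (200, 1))
y = (X.sum(axis=1) > 2).astype(int)
clf = LogisticRegression().fit(X, y)

idx = int(np.where(clf.predict(X) == y)[0][0])  # a correctly classified sample
x, w = X[idx], clf.coef_[0]
p = clf.predict_proba(x.reshape(1, -1))[0, 1]
grad = (p - y[idx]) * w          # gradient of the log-loss w.r.t. the input
eps = 1.0                        # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad)  # step that maximally increases the loss

print("clean prediction:      ", clf.predict([x])[0], "(true:", y[idx], ")")
print("adversarial prediction:", clf.predict([x_adv])[0])  # may now be flipped
```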

Autonomous applications leveraging Humanistic INtelliGence

Lecturer: Georg Macher worked as a Project Manager R&D focusing on autonomous vehicle projects and safety & cyber-security in AVL's powertrain engineering R&D department. In September 2018, he joined the Institute of Technical Informatics as Senior Scientist, and he leads the Industrial Informatics research group. His research activities include systems and software engineering, software technology, process improvement, functional safety, and cyber-security engineering. He is the author or co-author of over 145 publications and a permanent member of the EuroSPI program committee, SafeComp workshop boards, and the SoQrates industrial working group (automotive OEM & Tier WG focusing on safety, cyber-security, and engineering processes). He is also an industry consultant, coach, and trainer, focusing on dependability engineering and the automotive domain.

Abstract: The integration of AI technologies into embedded automotive systems brings both challenges and opportunities. By selecting suitable design patterns, the dependability of AI-based systems and their contribution to overall system dependability can be significantly enhanced. The lecture will address the unique characteristics of embedded automotive systems, such as limited resources and safety-critical requirements. It will address three issues: (i) improving the dependability of AI-based systems themselves through fault tolerance and robustness, (ii) leveraging AI to enhance dependability in other automotive subsystems through anomaly detection and predictive maintenance, and (iii) integrating cloud-based AI services while considering latency, connectivity, and data privacy.
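
As one concrete instance of such a design pattern, the sketch below wraps a learned controller in a deterministic, verifiable safety monitor that clamps out-of-envelope commands; all class names and bounds are hypothetical.

```python
# Hedged sketch of the "safety monitor" dependability pattern: the ML
# output is never trusted directly, but filtered by a simple checker.
class MLController:
    def steer(self, sensor_input: float) -> float:
        # Stand-in for a learned policy; in principle it could return anything.
        return 3.0 * sensor_input

class SafetyMonitor:
    MAX_STEER = 1.0  # deterministic bound that can be verified offline

    def check(self, command: float) -> float:
        # Clamp the learned command into the verified safe envelope.
        return max(-self.MAX_STEER, min(self.MAX_STEER, command))

controller, monitor = MLController(), SafetyMonitor()
raw = controller.steer(0.8)  # 2.4 -- outside the safe envelope
safe = monitor.check(raw)    # clamped to 1.0
print(f"raw command: {raw}, monitored command: {safe}")
```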

Meta-Learning for Intrusion Detection: Teamwork or Fight against each other?

Lecturer: Tommaso Zoppi is a Research Associate at the University of Florence, Italy. He is involved in several European, national, and industrial projects, and he currently serves as a Program Committee member at several international conferences. His research focuses on anomaly detection, security, and safety; he often applies standards to plan, design, develop, and implement appropriate architectures and software in the domain of critical systems.

Abstract: Intrusion detection systems rely on machine learning to detect anomalies, i.e., deviations from the expected behaviour. However, different algorithms have their pros and cons, which could ideally be combined by using ensembles of different algorithms orchestrated according to a specific meta-learning strategy. Unfortunately, ensemble learning is more often than not used to create overcomplicated machine learners that barely outperform individual ones. The lecture will i) review the basics of anomaly detection and intrusion detection systems, ii) present the state of the art on anomaly detection algorithms for cyber-security and on attack datasets, iii) present benchmarks using individual algorithms, iv) present ways to combine ensembles of algorithms, and v) allow students to put all the presented notions into practice in a final hands-on session using attack datasets.
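
A minimal sketch of point iv), combining an ensemble of anomaly detectors by majority vote, is shown below; the detector choices, the synthetic data, and the voting rule are illustrative assumptions, not the lecture's benchmark setup.

```python
# Hedged sketch: three unsupervised detectors trained on normal traffic
# vote on each sample; the majority decision is the ensemble output.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (300, 2))   # expected behaviour
attacks = rng.normal(5, 1, (15, 2))   # anomalous points, far from normal
X = np.vstack([normal, attacks])

detectors = [
    IsolationForest(random_state=0).fit(normal),
    LocalOutlierFactor(novelty=True).fit(normal),
    OneClassSVM(nu=0.05).fit(normal),
]
# Each detector outputs +1 (normal) or -1 (anomaly); sum and take the sign.
votes = np.array([d.predict(X) for d in detectors])
ensemble = np.sign(votes.sum(axis=0))
print("flagged as anomalies:", int((ensemble == -1).sum()), "of", len(X))
```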


Dependability Challenges in Safety-Critical Systems: the adoption of Machine learning

Lecturer: Andrea Bondavalli is currently a Full Professor of Computer Science at the University of Florence. His research interests include the design and evaluation of resilient and secure systems and infrastructures. His scientific activity has produced more than 220 papers in international journals and conferences. He has led various national and European projects and has chaired the program committees of several international conferences. He is a member of the IFIP Working Group 10.4 on “Dependable Computing and Fault-Tolerance.”

Abstract: Machine Learning components in safety-critical systems (SCS) can perform complex tasks that would otherwise be unfeasible. However, they are also a weak point with respect to safety assurance. We will illustrate two specific cases where ML must be incorporated into an SCS with great care. The first concerns the interactions between machine-learning components and other, non-ML components, and how these interactions evolve as the former are trained. We argue that it is theoretically possible for learning by the neural network to reduce the effectiveness of error checkers or safety monitors, creating a major complication for safety assurance. An example from automated driving is shown: among the results, we observed that improving the Controller can indeed make the Safety Monitor less effective, to the point where a training increment makes the Controller's own behavior safer but results in a less safe vehicle. The second case concerns ML algorithms that perform binary classification as error, intrusion, or failure detectors. They can be used in an SCS provided that their performance complies with the SCS safety requirements. However, the performance analysis of ML components relies on metrics that were not developed with safety in mind and consequently may not provide meaningful evidence for deciding whether to incorporate an ML component into an SCS. We analyze the distribution of misclassifications and thereby show how to better assess the adequacy of a given ML component.
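
The second case can be made concrete with a small worked example: a detector whose accuracy looks excellent while its misclassifications fall almost entirely on the safety-relevant class. The numbers below are invented for illustration.

```python
# Hedged sketch: generic metrics vs. safety-relevant ones. The detector
# misses 8 of 10 dangerous events, yet accuracy still looks excellent.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [0] * 990 + [1] * 10          # 1 = dangerous event, 0 = nominal
y_pred = [0] * 990 + [1, 1] + [0] * 8  # only 2 of 10 events are caught

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:           ", accuracy_score(y_true, y_pred))  # 0.992
print("false-negative rate:", fn / (fn + tp))  # 0.8, unacceptable in an SCS
```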


Fake News Detection

Lecturer: Dr. Alexander Schindler is a Senior Scientist for multimodal artificial intelligence (AI) approaches at the Center for Digital Safety and Security at the Austrian Institute of Technology.

Abstract: In his talk, Dr. Schindler will provide a brief introduction to AI methods relevant to security-related tasks and then discuss their application in various use cases and scenarios. These include law enforcement support for criminal and post-terrorist investigations, open source intelligence, resilience to hybrid threats, and combating disinformation, hate crime, and extremism on the Internet.


AI recursiveness and virtue

Lecturer: Wessel Reijers is a Postdoctoral Researcher at the Department of Philosophy, University of Vienna. Additionally, he holds visiting scholarships at the Technion and at the Robert Schuman Centre, European University Institute. He received a PhD in technology ethics from Dublin City University. Previously, he was a Max Weber Fellow at the European University Institute and a Research Associate in the ERC project “BlockchainGov”, led by Dr. Primavera de Filippi. Wessel’s current research explores the impacts of emerging technologies on citizenship, most notably those coming from social credit systems. Additionally, he explores the nature of distributed governance, investigating its potential as well as its pitfalls. Wessel is the author of Narrative and Technology Ethics and co-editor of the volume Interpreting Technology.

Abstract: AI systems are recursive systems. When applied to practical contexts, they instantiate feedback loops between the environment and their internal states, constituting what has been called a learning process. This recursiveness also affects humans when they are part of the environment that calls for adaptation: for instance, in the case of online recommendation systems, predictive policing algorithms, and credit scores. More specifically, it affects human dispositions, and hence our ability to make the right choices at the right moment; in short, the virtues. In this talk, I discuss how AI systems impact the human ability to exercise virtue, what types of virtue they mediate, and what the promises and pitfalls of this new development are. The argument will draw from a case study of the Chinese Social Credit System, which has been explicitly designed as a recursive socio-technical system to shape virtue.


Applications: Safety architectures for self-driving cars

Lecturer: Wilfried Steiner received the degree of Doctor of Technical Sciences and the Venia Docendi in Computer Science, both from the Vienna University of Technology, Austria (in 2005 and 2018, respectively). From 2009 to 2012, he was awarded a Marie Curie International Outgoing Fellowship hosted by SRI International in Menlo Park, CA. His research is focused on dependable cyber-physical systems, for which he designs algorithms and network protocols with real-time, dependability, and security requirements. Wilfried Steiner has been the SAE AS6802 (Time-Triggered Ethernet) editor and served for multiple years as a voting member in IEEE 802.1, standardizing time-sensitive networking (TSN). He is the Director of TTTech Labs, which acts as the center for strategic research within the TTTech Group.

Abstract: The automotive industry is working at full speed on self-driving cars. These cars must be reliable, indeed ultra-reliable, to ensure the safety of their passengers, and must not pose a threat to others around them. They are also distributed systems, because they must be fail-operational. Unfortunately, despite vast economic investments, we have not yet reached a sufficient level of safety. In this talk, I will review the major challenges in designing self-driving cars and discuss various ongoing activities that may lead to a solution.


ML for Cybersecurity in converged energy systems: a saviour or a villain?

Lecturer: Angelos Marnerides is a Senior Lecturer (Associate Professor) in the School of Computing Science (SoCS) at the University of Glasgow (UofG), UK. Dr. Marnerides leads the Glasgow Cyber Defence Group (GCDG) and is a member of UofG's Glasgow Systems Section (GLASS), dealing with applied security and resilience research for Internet-enabled cyber-physical systems. Prior to that, he was an Assistant Professor at the School of Computing and Communications at Lancaster University, where he founded the innovative Digital Infrastructure Defense (i-DID) group, and before that he spent two years as an Assistant Professor in the Department of Computer Science at Liverpool John Moores University. His research has received significant funding from industry (e.g., Fujitsu, BAE, Raytheon) and governmental bodies (e.g., EU, IUK, EPSRC), and he has been invited to serve as an expert reviewer on grant proposal panels and research assessment frameworks (e.g., Hong Kong RAE, Chilean research commission, Israel Innovation Authority). He has been a member of the IEEE and the ACM since 2007 and has served as a Technical Program Committee (TPC) member, TPC track and workshop co-chair, and organiser for several top IEEE and IFIP conferences, including IEEE ICC, IEEE GLOBECOM, IEEE INFOCOM, IEEE CCNC, IFIP Networking, IEEE WoWMoM, and IEEE GLOBALSIP, leading him to receive IEEE ComSoc contribution awards in 2016 and 2018. He holds an MSc and a PhD (Lancaster University, '07 & '11), both in Computer Science, and has held postdoctoral and visiting researcher positions at Carnegie Mellon University (USA), the University of Porto (Portugal), Lancaster University (UK), and University College London (UK).

Abstract: In today’s networked systems, ML-based approaches are regarded as core functional blocks for a plethora of applications, ranging from network intrusion detection and unmanned aerial vehicles to medical applications and smart energy systems. Nonetheless, regardless of the capabilities demonstrated by such schemes, it has recently been shown that they are also prone to attacks targeting their intrinsic algorithmic properties. Attackers are nowadays capable of instrumenting adversarial ML processes, mainly by injecting noisy or malicious training data samples in order to undermine the training process of a given ML algorithm. This talk aims to discuss and describe this relatively new problem and to demonstrate examples targeting Virtual Power Plant (VPP) applications.
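
A minimal sketch of the poisoning idea follows, assuming a targeted label-flipping attack against a linear classifier; the synthetic data, the flip rate, and the attack strategy are illustrative assumptions, not the VPP examples from the talk.

```python
# Hedged sketch: an attacker relabels part of the attack class as benign
# in the training set, which typically degrades the learned model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (300, 2)), rng.normal(1, 1, (300, 2))])
y = np.array([0] * 300 + [1] * 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)

y_poisoned = y_tr.copy()
ones = np.where(y_tr == 1)[0]
flip = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
y_poisoned[flip] = 0  # 40% of class-1 training samples relabelled as benign
poisoned = LogisticRegression().fit(X_tr, y_poisoned)

print("clean model accuracy:   ", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```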


Applications: AI in Automotive systems

Lecturer: Javier Ibanez-Guzman (Member, IEEE) received the M.S.E.E. degree from the University of Pennsylvania, USA, as a Fulbright Scholar, and the Ph.D. degree from the University of Reading on a UK SERC fellowship. In 2011, he was a Visiting Scholar at the University of California, Berkeley. He is currently a Corporate Expert on Autonomous Systems at Renault S.A. and co-director of the SIVALab common laboratory between CNRS, UTC Compiegne, and Renault, working on intelligent vehicle technologies. Formerly, he was a Senior Scientist with SimTech, an A-Star research institute in Singapore, where he spearheaded work on autonomous ground vehicles. He is a C.Eng. and a Fellow of the Institution of Engineering and Technology, U.K.

Abstract: The security and safety of autonomous vehicles operating on public roads is a major concern. They can be subject to internal faults and external disturbances resulting in dangerous situations. Machine learning methods are today an integral part of these systems. Despite notable progress, systems can be spoofed, vehicles can be hijacked, they can be subject to the malicious behavior of other road users, and so on. The presentation addresses these issues from a functional perspective of an autonomous vehicle, identifying which components are the most vulnerable and what the implications are. It includes an outline of the potential security policy that needs to be elaborated when such vehicles are deployed.


