Nov 28 – Dec 2, 2022, University of Udine, Italy
Organized by the AIxIA Working Group on Artificial Intelligence and Cybersecurity.
The workshop aims to collect contributions on applications of Artificial Intelligence to Cybersecurity (such as intrusion detection, behavioral analysis, etc.) and on Cybersecurity as applied to Artificial Intelligence (privacy preservation, ethical and trustworthy data analysis, and so on).
The workshop aims to attract contributions from both inside and outside the working group, reporting novel research results and industrial applications of these technologies. Welcome topics relate to, but are not limited to, the macro-topics described below.
As AI becomes more and more pervasive with the introduction of new AI-based applications and services into daily life, a number of security, privacy, and ethical concerns have arisen. As an increasing number of stakeholders look to AI as an essential tool to process the huge amounts of data produced in every environment for commercial use, governments and legal authorities are starting to examine the implications for privacy, physical and software security, and human ethics brought about by the use of AI-based services and applications.
Artificial Intelligence is a powerful tool for cybersecurity practitioners, effectively used to identify, tackle, and prevent cyberattacks, detect malware ahead of time, and spot anomalous behaviors. Yet in recent years, cyberattackers have exploited AI-powered mechanisms to carry out more efficient and effective targeted attacks. AI can also learn the behavioral features of software, systems, or human users, making it a key enabler of core security processes such as identification, authentication, and authorization. At the same time, AI-based technologies have been extensively used by intelligence agencies and repressive governments to systematically violate citizens' privacy, tracking the movements and private lives of criminals and dissidents alike.

Thus, AI is a powerful and flexible technology that has become an essential tool in fighting cybercrime, yet it can also be used for malicious or exploitative purposes targeting system integrity, availability, and people's privacy. This has raised several concerns about the possible misuse of AI-based technologies, recently prompting efforts to establish a regulatory framework, such as the one proposed by the European Commission, which classifies AI applications by risk level and regulates their permissible use at each level.
This workshop aims to collect contributions related to the two following macro-topics of research and practice on AI and Cybersecurity:
i) AI-enabled/empowered cybersecurity applications and services (Cybersec through AI);
ii) Security- and privacy-aware AI, following the concepts of trustworthy and human-centered artificial intelligence (Cybersec for AI).
Writing and Submission
Submitted papers should be written in English and formatted according to the Springer LNCS style.
– Regular papers must be original and must not be simultaneously submitted for publication elsewhere. They should not exceed 12 pages, with any number of additional pages containing bibliographic references only;
– Discussion papers report results already published or accepted for publication at international conferences, and should not exceed 8 pages, with any number of additional pages containing bibliographic references only;
– Short papers should not exceed 5 pages and are particularly suitable for presenting work in progress, software prototypes, extended abstracts of doctoral theses, or general overviews of research projects.
All papers will be peer-reviewed, and the final versions of accepted papers will be published in the conference proceedings on CEUR in the AI*IA Series (Scopus indexed).
Technical Program Committee
A joint academia-industry panel including representatives from the working group and CLUSIT, the Italian Association for Computer Security.