Evidential Understanding of AI-Powered Software Security: Findings on LLM-Assisted Development and Data Quality
Like any scientific field, software security requires empirical evidence to understand the capabilities and limitations of using AI tools. Over the past few years, we have conducted extensive research exploring different aspects of software security by gathering, using, and disseminating empirical evidence. Our efforts aim to systematically understand and address the challenges that may negatively impact the trustworthiness and scalability of AI-powered solutions for engineering secure digital systems. This talk presents findings from our empirical examination of AI-generated code and software vulnerability prediction models. I will present empirical evidence showing where AI claims diverge from reality. My talk will focus on helping the audience better understand how LLM-generated code can introduce security weaknesses into software systems and what types of data quality issues lead to less reliable vulnerability prediction models. The talk concludes with evidence-based guidance about what current AI tools for code generation can and cannot do in enterprise security workflows, to support informed decisions about when and how to trust AI assistance.
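As a hypothetical illustration (not an example from the talk), one classic weakness class that code assistants have been observed to reproduce is SQL injection (CWE-89): building a query by string interpolation rather than with parameterized placeholders. The sketch below, using Python's standard sqlite3 module and an in-memory database, contrasts the two patterns; all table and function names here are invented for the example.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # CWE-89: string interpolation makes user input part of the SQL text,
    # so an input like "' OR '1'='1" rewrites the query's logic.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
leaked = find_user_unsafe(conn, payload)  # matches every row in the table
safe = find_user_safe(conn, payload)      # matches no row
```

With the crafted payload, the unsafe variant leaks both rows while the parameterized variant returns none, which is why empirical audits of generated code often flag interpolated queries first.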
Location
- Nişantepe Mah., Orman sokak, Özyeğin Üniversitesi
- Istanbul, Istanbul
- Türkiye 34794
Speakers
Ali Babar
Biography:
Ali Babar is a Professor in the School of Computer Science at the University of Adelaide, Australia, and a visiting professor at Özyeğin University, Turkey. He is a co-founder and Chief Inspiration Officer (CIO) of Elevexai Systems, a startup focused on consulting on and engineering AI-native, secure, and scalable software. Most recently, he was a theme leader on architecture and platforms for security as a service in the Cyber Security Cooperative Research Centre (CSCRC), a large initiative funded by the Australian government, industry, and research institutes. Professor Babar was the technical lead of one of the largest projects on software security in the ANZEC region, funded by the CSCRC: SOCRATES (Software Security with a Focus on Critical Infrastructure), which brings together more than 75 researchers and practitioners from 10 organizations to develop and evaluate novel knowledge- and AI-based platforms, methods, and tools for software security. After joining the University of Adelaide in 2013, Prof Babar established an interdisciplinary research group, CREST (Centre for Research on Engineering Software Technologies), where he leads the research, development, and education activities of more than 20 researchers and engineers in the areas of Engineering of AI-Native Software Systems, AI-Ready Data, Software Security and Privacy, and Human-AI Collaboration. Professor Babar has authored or co-authored more than 340 peer-reviewed research papers in premier software journals and conferences. He obtained a Ph.D. in Computer Science and Engineering from the School of Computer Science and Engineering of the University of New South Wales, Australia, and also holds an M.Sc. in Computing Sciences from the University of Technology, Sydney, Australia. More information on Professor Babar can be found at https://alibabar.net.