OSS-Forge

OSS-Forge is an open research initiative focused on trustworthy, secure, and transparent AI-assisted software engineering.
We develop and publish static analysis and security tools, curated datasets, experimental scripts, and reproducibility materials for research on AI-assisted software engineering.

Our mission is to build a transparent, verifiable, and secure ecosystem for integrating Large Language Models (LLMs) into software development, especially in safety-critical and security-sensitive contexts.


What You Will Find Here

This organization hosts resources from multiple research projects and publications in AI security, software engineering, and code generation. Current categories include:

Static Analyzers & Security Tools

Datasets for Security & Software Engineering

Robustness, Data Quality & Industrial Code Generation

Our repositories include code, experimental scripts, datasets, and reproducibility materials.


Research Themes

Our work spans four interconnected areas:

  1. Security of AI-generated Code
    Vulnerability detection, automated patching, exploit generation, and robustness testing.

  2. Trustworthy LLM Evaluation
    Correctness, equivalence checking, symbolic execution, and reproducible benchmarks (a minimal illustrative sketch follows this list).

  3. Software Engineering with AI
    Defect analysis, complexity metrics, orthogonal defect classification (ODC).

  4. Adversarial ML for Code Models
    Data poisoning, robustness stress-testing, unsafe pattern injection.
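
As a flavor of the second theme, the sketch below is a hypothetical illustration (the function names and the harness are assumptions, not OSS-Forge tools): a differential test that compares a model-generated function against a trusted reference implementation on randomly sampled inputs and reports disagreements.

    import random

    # Illustrative differential-testing harness (hypothetical, not an OSS-Forge tool).
    # It checks whether a candidate (e.g., LLM-generated) function behaves like a
    # trusted reference implementation on randomly sampled inputs.

    def reference_clamp(value, low, high):
        """Trusted reference: clamp value into the closed interval [low, high]."""
        return max(low, min(value, high))

    def generated_clamp(value, low, high):
        """Stand-in for model-generated code with a deliberate bug:
        it forgets the lower bound, so values below `low` pass through unchanged."""
        return min(value, high)

    def differential_test(candidate, reference, trials=1000, seed=0):
        """Return the inputs on which candidate and reference disagree."""
        rng = random.Random(seed)
        failures = []
        for _ in range(trials):
            low = rng.randint(-100, 100)
            high = low + rng.randint(0, 100)   # guarantees low <= high
            value = rng.randint(-200, 200)
            if candidate(value, low, high) != reference(value, low, high):
                failures.append((value, low, high))
        return failures

    if __name__ == "__main__":
        mismatches = differential_test(generated_clamp, reference_clamp)
        print(f"{len(mismatches)} disagreeing inputs out of 1000 trials")

Random sampling alone can miss corner-case divergences, which is why such harnesses are typically complemented by symbolic execution or formal equivalence checking.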

The research artifacts hosted here accompany peer-reviewed publications at DSN, ISSRE, ICPC, IST, EMSE, JSS, AUSE, and other venues.


Publications Powered by These Repositories

These repositories underpin works published at DSN, ISSRE, ICPC, IST, EMSE, JSS, AUSE, and other venues. Full references are available inside each corresponding repository.


Contributing

We encourage contributions from the research and practitioner community.

You can contribute by opening discussions or submitting pull requests inside the relevant repository.


Contact

OSS-Forge is developed by a joint research team from the University of North Carolina at Charlotte (UNCC) and the University of Naples Federico II.

Scientific Leadership

Core Research Contributors