GSoC 2024 Project Ideas
See our page on applying for GSoC 2024: https://github.com/nexB/aboutcode/wiki/GSOC-2024
Here is a list of candidate project ideas for your consideration. Your own ideas are welcome too! Please chat with us about them to get early feedback!
Other Project Ideas: https://github.com/nexB/aboutcode/wiki/Archived-GSoC-Project-Ideas
PurlDB: Improve the scan queue and support multiple SCIO instances
Repository: https://github.com/nexB/purldb
Project code: https://github.com/nexB/purldb/tree/main/purldb
Size: Large
Difficulty Level: Advanced
Tags: [Django], [PostgreSQL], [Web], [Redis/RQ]
Mentors:
- @jyang
- @tdruez
- @pombredanne
- @keshav-space
Related Issues:
Description:
This project consists mainly of the following:
- Improved and scalable SCIO instances:
PurlDB uses a queue for scanning: packages/archives are submitted and then scanned through a SCIO instance. We should add an improved queue that can dispatch scans to multiple SCIO instances for scale (see the sketch after this list).
- Improved scan queue handling:
This consists of multiple quality-of-life improvements in the PurlDB scan queue, such as handling exceptions correctly on a SCIO crash/disconnect, actively polling for finished scans to index, and properly updating the status of submitted scans.
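As a rough illustration of the dispatching part, here is a minimal sketch that round-robins scan submissions across several SCIO instances through the ScanCode.io REST API. The instance list, API key handling and pipeline name are assumptions for this sketch, not existing PurlDB configuration.

```python
import itertools

import requests

# Hypothetical list of SCIO instances; in PurlDB this would come from settings.
SCIO_INSTANCES = [
    {"url": "https://scio-1.example.com", "api_key": "changeme-1"},
    {"url": "https://scio-2.example.com", "api_key": "changeme-2"},
]

_round_robin = itertools.cycle(SCIO_INSTANCES)


def submit_scan(download_url, name):
    """Submit a package archive to the next SCIO instance in the rotation."""
    instance = next(_round_robin)
    response = requests.post(
        f"{instance['url']}/api/projects/",
        headers={"Authorization": f"Token {instance['api_key']}"},
        data={
            "name": name,
            "input_urls": download_url,
            "pipeline": "scan_single_package",  # pipeline name may differ by SCIO version
            "execute_now": True,
        },
        timeout=30,
    )
    response.raise_for_status()
    # The returned project detail URL is what the queue would poll later to
    # detect a finished scan and fetch results for indexing.
    return response.json()["url"]
```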
PurlDB/ScanCode.io: Integrate OpenSSF Scorecard data for packages and SBOMs
Repository: https://github.com/nexB/purldb and https://github.com/nexB/scancode.io
Reference: https://github.com/ossf/scorecard
Size: Large
Difficulty Level: Intermediate
Tags: [Django], [PostgreSQL], [SBOM], [Metadata], [Security]
Mentors:
- @jyang
- @tdruez
- @pombredanne
- @AyanSinhaMahapatra
Related Issues:
Description:
We already have SBOM export (and import) options in scancode.io supporting SPDX and CycloneDX SBOMs, and we can enrich this data using the public OpenSSF Scorecard data at https://github.com/ossf/scorecard#public-data or the REST API at https://api.securityscorecards.dev/.
The specific tasks for this project are:
- Research and figure out how best to consume this data (see the sketch after this list)
- Add models to support external data sources/scores on packages
- Store these as package data in purldb (or fetch this by package in SCIO?)
- Add a pipeline in scancode.io to fetch and show this data in the UI
- Map this data to SPDX/CycloneDX SBOM elements, i.e. how it can be exported in an SBOM
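As one possible starting point, a minimal sketch of fetching Scorecard data for a GitHub repository from the public REST API is shown below; which response fields to keep, and where they are stored as package data, are open design questions for this project.

```python
import requests

SCORECARD_API = "https://api.securityscorecards.dev/projects"


def fetch_scorecard(repo_url):
    """Fetch OpenSSF Scorecard data for a GitHub repository URL, if available."""
    # "https://github.com/nexB/scancode-toolkit" -> "github.com/nexB/scancode-toolkit"
    platform_path = repo_url.replace("https://", "").rstrip("/")
    response = requests.get(f"{SCORECARD_API}/{platform_path}", timeout=10)
    if response.status_code != 200:
        return None  # not every repository has a Scorecard entry
    data = response.json()
    return {
        "score": data.get("score"),
        "scorecard_version": data.get("scorecard", {}).get("version"),
        "checks": {check["name"]: check["score"] for check in data.get("checks", [])},
    }
```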
ScanCode.io: Create a deployment analysis pipeline for Android apps
Repository: https://github.com/nexB/scancode.io
Project code: https://github.com/nexB/scancode.io/blob/main/scanpipe/pipelines/deploy_to_develop.py
Size: Medium
Difficulty Level: Intermediate
Tags: [Django], [PostgreSQL], [BinaryAnalysis], [Metadata], [Packages]
Mentors:
- @AyanSinhaMahapatra
- @keshav-space
- @jyang
- @pombredanne
Description:
Create a pipeline for deployment analysis of Android apps, where the app source and the .apk binary are provided as inputs, and we:
- Scan the source/binary for packages and send these to be scanned in purldb
- Make sure package assembly in SCTK and package indexing and scanning in purldb work for Android packages
- Map respective source files to their binary files
- Match deployed files to packages indexed in the purldb
- Handle Android-specific cases and test with some examples of FOSS Android apps
We already have deployment analysis pipelines for the Java/JavaScript ecosystems, and these are a good reference point to start this project.
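A rough skeleton of what such a pipeline class could look like is shown below; it follows the scanpipe Pipeline steps() convention, but every step name here is a placeholder for this sketch rather than an existing scancode.io step.

```python
from scanpipe.pipelines import Pipeline


class AnalyzeAndroidApp(Pipeline):
    """
    Hypothetical pipeline sketch: compare an Android app's source tree
    (the "from/" input) against its built .apk (the "to/" input).
    """

    @classmethod
    def steps(cls):
        return (
            cls.get_inputs,
            cls.extract_apk,
            cls.collect_packages_from_source,
            cls.map_sources_to_dex_classes,
            cls.match_remaining_files_to_purldb,
        )

    def get_inputs(self):
        """Locate the "from/" source archive and the "to/" .apk input."""

    def extract_apk(self):
        """An .apk is a zip archive; extract it into the codebase/ directory."""

    def collect_packages_from_source(self):
        """Detect packages declared in Gradle/Maven manifests in the source tree."""

    def map_sources_to_dex_classes(self):
        """Map .java/.kt sources to compiled classes in classes.dex."""

    def match_remaining_files_to_purldb(self):
        """Match still-unmapped deployed files against packages indexed in purldb."""
```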
ScanCode.io: Create a deployment analysis pipeline for Python projects and wheels
Repository: https://github.com/nexB/scancode.io
Project code: https://github.com/nexB/scancode.io/blob/main/scanpipe/pipelines/deploy_to_develop.py
Size: Medium
Difficulty Level: Intermediate
Tags: [Django], [PostgreSQL], [BinaryAnalysis], [Metadata], [Packages]
Mentors:
- @AyanSinhaMahapatra
- @keshav-space
- @jyang
- @pombredanne
Description:
Create a pipeline for deployment analysis of Python projects, where the source code and built Python wheels are provided as inputs, and we:
- Scan the source/binary for packages and send these to be scanned in purldb
- Verify and fix package assembly issues in SCTK
- Map respective source files to their binary files
- Match deployed files to packages indexed in the purldb
- Handle Python-specific cases and test with some examples of FOSS Python projects
We already have deployment analysis pipelines for the Java/JavaScript ecosystems, and these are a good reference point to start this project.
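As a small illustration of the source-to-binary mapping step, here is a naive sketch that maps files inside a built wheel back to source files by file name only; a real pipeline would also rely on checksums, path-suffix mapping and purldb matching, especially for vendored or compiled files.

```python
import zipfile
from pathlib import PurePosixPath


def map_wheel_to_source(wheel_path, source_paths):
    """
    Map files inside a built wheel back to candidate files in the source
    tree by matching file names (a wheel is a plain zip archive).
    """
    mapping = {}
    with zipfile.ZipFile(wheel_path) as wheel:
        for deployed_path in wheel.namelist():
            if ".dist-info/" in deployed_path:
                continue  # wheel metadata has no direct source counterpart
            name = PurePosixPath(deployed_path).name
            mapping[deployed_path] = [
                source for source in source_paths
                if PurePosixPath(source).name == name
            ]
    return mapping
```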
There are two main categories of projects for VulnerableCode:
- A. COLLECTION: this category is to mine and collect or infer new and improved data. This includes adding new data sources, inferring and improving existing data, or collecting new primary data (such as finding the fix commit of a vulnerability).
- B. USAGE: this category is about using and consuming the vulnerability database and includes the API proper, the GUI, the integrations, and data sharing, feedback and curation.
VulnerableCode: Mine unstructured data sources for unreported vulnerabilities and fix commits (Category A)
Repository: https://github.com/nexB/vulnerablecode
Reference: https://github.com/nexB/vulnerablecode/issues/251
Size: Large
Difficulty Level: Advanced
Tags: [Python], [Django], [PostgreSQL], [Security], [Vulnerability], [NLP], [AI/ML]
Mentors:
- @pombredanne
- @tg1999
- @keshav-space
- @Hritik14
- @AyanSinhaMahapatra
Related Issues:
Description:
The project would be to provide a way to effectively mine unstructured data sources for possible unreported vulnerabilities.
To start, this should focus on a few prominent repositories. This project could also find fix commits.
Some sources are:
- mailing lists
- changelogs
- commit logs/reflogs
- bug and issue trackers
This requires systems that can "understand" vulnerability descriptions, since security advisories often do not provide structured information about which packages and package versions are vulnerable. The end goal is a system that infers the vulnerable package name and version(s) by parsing the vulnerability description using specialized techniques and heuristics.
There is no need to train a model from scratch; we can use AI models pre-trained on code repositories (maybe https://github.com/bigcode-project/starcoder?) and then fine-tune them on prepared datasets of CVEs in code.
We could use NLP/machine learning and automate it all, potentially training data-masking algorithms to find this specific data (this also involves creating a dataset), but that would be very difficult.
We could also start by crafting a curation queue: parse as much as we can to make it easy for humans to curate, and progressively improve small NLP models and classifiers to further automate the work (a rough heuristic pre-filter that could feed such a queue is sketched below).
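As a concrete starting point for the curation-queue approach, here is a minimal heuristic sketch that pre-filters lines of unstructured text (changelogs, commit logs, mailing-list posts) mentioning a CVE id or security keywords; the keyword list is illustrative only.

```python
import re

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}", re.IGNORECASE)
FIX_KEYWORDS = ("security", "vulnerability", "overflow", "injection", "fixes cve")


def find_vulnerability_candidates(text, source="changelog"):
    """
    Collect lines that mention a CVE id or a security-related keyword, as
    candidates for a human curation queue or a later NLP/LLM extraction step.
    """
    candidates = []
    for line_number, line in enumerate(text.splitlines(), start=1):
        cves = CVE_RE.findall(line)
        has_keyword = any(keyword in line.lower() for keyword in FIX_KEYWORDS)
        if cves or has_keyword:
            candidates.append({
                "source": source,
                "line": line_number,
                "text": line.strip(),
                "cves": [cve.upper() for cve in cves],
            })
    return candidates
```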
VulnerableCode: Add more data sources and mine the graph to find correlations between vulnerabilities (Category A)
Repository: https://github.com/nexB/vulnerablecode
Reference: https://github.com/nexB/vulnerablecode/issues?q=is%3Aissue+is%3Aopen+label%3A"Data+collection"
Size: Medium/Large
Difficulty Level: Intermediate
Tags: [Django], [PostgreSQL], [Security], [Vulnerability], [API], [Scraping]
Mentors:
- @pombredanne
- @tg1999
- @keshav-space
- @Hritik14
- @jmhoran
Related Issues:
Description:
See https://github.com/nexB/vulnerablecode#how for background info. We want to search for more vulnerability data sources and consume them.
There is a large number of pending tickets for data sources. See https://github.com/nexB/vulnerablecode/issues?q=is%3Aissue+is%3Aopen+label%3A"Data+collection"
Also see the tutorials for adding new importers and improvers (a rough importer skeleton, for illustration, follows at the end of this description):
- https://vulnerablecode.readthedocs.io/en/latest/tutorial_add_new_importer.html
- https://vulnerablecode.readthedocs.io/en/latest/tutorial_add_new_improver.html
More reference documentation on improvers and importers:
- https://vulnerablecode.readthedocs.io/en/latest/reference_importer_overview.html
- https://vulnerablecode.readthedocs.io/en/latest/reference_improver_overview.html
Note that this is similar to (and a continuation of) a GSoC 2022 project.
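For orientation, a new importer roughly follows the shape sketched below, modeled on the tutorials above and using a hypothetical JSON advisory feed; check the current tutorial and code for the exact base class and field names.

```python
import requests
from vulnerabilities.importer import AdvisoryData, Importer, Reference


class ExampleImporter(Importer):
    """Skeleton importer for a hypothetical advisory feed at example.org."""

    spdx_license_expression = "CC-BY-4.0"
    license_url = "https://example.org/advisories/license"
    advisories_url = "https://example.org/advisories.json"

    def advisory_data(self):
        for record in requests.get(self.advisories_url, timeout=30).json():
            # Affected package version ranges would be built with
            # AffectedPackage and univers version-range objects; they are
            # omitted from this sketch.
            yield AdvisoryData(
                aliases=record.get("cve_ids", []),
                summary=record.get("description", ""),
                affected_packages=[],
                references=[Reference(url=record["advisory_url"])],
            )
```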
VulnerableCode: Support on-demand import of advisories queried by PURL
Repository: https://github.com/nexB/vulnerablecode
Size: Large
Difficulty Level: Intermediate
Tags: [Python], [Django], [PostgreSQL], [Security], [Web], [Vulnerability], [API]
Mentors:
- @pombredanne
- @tg1999
- @keshav-space
Related Issues:
- https://github.com/nexB/vulnerablecode/issues/1046
- https://github.com/nexB/vulnerablecode/issues/1008
Description:
Currently, VulnerableCode runs importers in bulk: all the data from the advisories is imported and stored to be displayed.
The objective of this project is to have another endpoint and API where we can:
- Support querying a specific package by PURL
- Visit the advisory/package-ecosystem-specific vulnerability data sources and query them for this specific package
- Do this irrespective of whether data related to this package is already present in the database (i.e. both for new packages and for refreshing old packages)
- Modify importers to support querying by PURL to get advisory data for a specific package
This is not straightforward, as many advisory sources do not organize their data by package (they are not package-first). See the specific issues/discussions on these importers for more info. A minimal sketch of the per-PURL query flow follows.
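A minimal sketch of the per-PURL flow, assuming a hypothetical registry of live-queryable sources and using OSV's query endpoint (which accepts a PURL) as one example of a source that can be queried per package:

```python
import requests
from packageurl import PackageURL

# Hypothetical registry mapping a PURL type to live-queryable data sources.
LIVE_SOURCES_BY_PURL_TYPE = {
    "pypi": ["https://api.osv.dev/v1/query"],
    "npm": ["https://api.osv.dev/v1/query"],
}


def fetch_advisories_for_purl(purl_string):
    """
    Parse the PURL, pick the data sources that can be queried per package
    for that ecosystem, and collect their raw advisories for the regular
    import/improve steps to process.
    """
    purl = PackageURL.from_string(purl_string)
    advisories = []
    for source_url in LIVE_SOURCES_BY_PURL_TYPE.get(purl.type, []):
        advisories.extend(query_source(source_url, purl))
    return advisories


def query_source(source_url, purl):
    """Per-source, per-package query (here using OSV's query API shape)."""
    response = requests.post(
        source_url, json={"package": {"purl": str(purl)}}, timeout=10
    )
    response.raise_for_status()
    return response.json().get("vulns", [])
```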
VulnTotal: Implement a browser extension to run VulnTotal on the client side
Repository: https://github.com/nexB/vulnerablecode
Reference: https://github.com/nexB/vulnerablecode/tree/main/vulntotal
Size: Medium
Difficulty Level: Intermediate
Tags: [Python], [Security], [Web], [Vulnerability], [BrowserExtension], [UI]
Mentors:
- @keshav-space
- @pombredanne
- @tg1999
Related Issues:
Description:
Implement a Firefox/Chrome browser extension that runs VulnTotal on the client side and queries the vulnerability data sources to compare their results. The input will be a PURL, as in VulnTotal.
- Research tools to run Python code in a browser (Brython/PyScript)
- Implement the browser extension to run VulnTotal
ScanCode.io: Store and manage downloaded package archives
Repository: https://github.com/nexB/scancode.io
Size: Medium
Difficulty Level: Intermediate
Tags: [Python], [Django], [CI], [Security], [Vulnerability], [SBOM]
Mentors:
- @tdruez
- @keshav-space
- @jyang
- @pombredanne
Related Issue:
Description:
Packages that are downloaded and scanned in SCIO could optionally be stored and made accessible, so that we keep a copy of the packages used for a specific product for reference and future use; this could also help meet source redistribution obligations.
The specific tasks would be:
- Store all packages/archives which are downloaded and scanned in SCIO
- Create an API and index by URL/checksum to get these packages on-demand
- Create models to store metadata/history and logs for these downloaded/stored packages
- Additionally support and design external storage/fetch options
There should be a configuration variable to turn these features on and to connect external databases/storage. A rough sketch of a storage model follows.
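A rough sketch of what the storage model could look like, assuming a simple Django model keyed by checksum; all field names here are assumptions for this sketch, and the external storage/fetch design is an open question for the project.

```python
import hashlib
from pathlib import Path

from django.core.files import File
from django.db import models


class StoredPackageArchive(models.Model):
    """Hypothetical model keeping a copy of every archive downloaded for a scan."""

    download_url = models.URLField(max_length=2048, db_index=True)
    sha256 = models.CharField(max_length=64, unique=True)
    size = models.BigIntegerField()
    archive = models.FileField(upload_to="stored_packages/")
    stored_date = models.DateTimeField(auto_now_add=True)
    history = models.JSONField(default=list)  # free-form fetch/scan event log

    @classmethod
    def store(cls, download_url, file_path):
        """Compute the checksum and store the archive, skipping duplicates."""
        path = Path(file_path)
        sha256 = hashlib.sha256(path.read_bytes()).hexdigest()
        existing = cls.objects.filter(sha256=sha256).first()
        if existing:
            return existing
        archive = cls(download_url=download_url, sha256=sha256, size=path.stat().st_size)
        with path.open("rb") as f:
            archive.archive.save(path.name, File(f), save=True)
        return archive
```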
ScanCode.io: Integrate ScanCode into CI/CD pipelines
Repository: https://github.com/nexB/scancode.io
Reference: https://github.com/nexB/scancode.io/issues/599
Size: Large
Difficulty Level: Intermediate
Tags: [Python], [Django], [CI], [Security], [License], [SBOM], [Compliance]
Mentors:
- @pombredanne
- @tdruez
- @keshav-space
- @tg1999
- @AyanSinhaMahapatra
Related Issue:
Description:
Enhance SCIO/SCTK so they can be integrated into CI/CD pipelines such as GitHub Actions, Azure Pipelines, GitLab CI and Jenkins. We can start with one CI/CD provider, like GitHub Actions, and later support others.
These should be enabled and configured as needed through ScanCode configuration files, so that specific functions are carried out in the pipeline.
There are several types of CI/CD pipelines to choose from potentially:
- Generate SBOMs/VDRs with scan results:
  - Scan the repo to get all PURLs: packages, dependencies/requirements
  - Scan the repository for package, license and copyright data
  - Query public.vulnerablecode.io for vulnerabilities by PackageURL
  - Generate SPDX/CycloneDX SBOMs from these, with scan and vulnerability data
- License/other compliance CI/CD pipelines:
  - Scan the repo for licenses and check for detection accuracy
  - Scan the repo for licenses and check the license clarity score
  - Scan the repo for licenses and check compliance with a specified license policy
  - The jobs should pass/fail based on the scan results for these specific cases, so we need:
    - a special mode to fail with error codes (see the sketch after this list)
    - descriptions of issues and failure reasons, and docs on how to fix them
    - ways to configure and set up these cases with configuration files
- Dependency checkers/linters:
  - Download and scan all package dependencies, and get scan results/SBOMs
  - Check for vulnerable packages and do non-vulnerable dependency resolution
  - Check for test failures after dependency upgrades and add a PR only if tests pass
- Jobs which check for and fix miscellaneous other errors:
  - Replace standard license notices with SPDX license declarations
  - Check for and add ABOUT files for vendored code
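As an illustration of the fail-with-error-codes mode, here is a minimal sketch of a CI step that scans the checked-out repository with scancode and fails the build when a detected license expression is not in an allowed list; the allowed list and the simplistic expression check are assumptions for this sketch, not an existing scancode option.

```python
#!/usr/bin/env python3
"""Minimal CI license-policy gate sketch built on the scancode CLI."""
import json
import subprocess
import sys

ALLOWED_LICENSES = {"mit", "apache-2.0", "bsd-new"}  # illustrative policy


def main():
    # Run a license scan over the current checkout and write JSON results.
    subprocess.run(
        ["scancode", "--license", "--json-pp", "scan.json", "."],
        check=True,
    )
    with open("scan.json") as f:
        scan = json.load(f)

    violations = []
    for resource in scan.get("files", []):
        for detection in resource.get("license_detections", []):
            expression = detection.get("license_expression") or ""
            # Naive check: a real policy gate must handle compound
            # expressions (AND/OR/WITH), not just single license keys.
            if expression and expression not in ALLOWED_LICENSES:
                violations.append((resource["path"], expression))

    if violations:
        for path, expression in violations:
            print(f"DISALLOWED LICENSE: {expression} in {path}")
        sys.exit(1)  # non-zero exit code fails the CI job
    print("License policy check passed.")


if __name__ == "__main__":
    main()
```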
ScanCode Toolkit: Compute summaries and license clarity scores per package
Repository: https://github.com/nexB/scancode-toolkit
Reference:
Size: Large/Medium
Difficulty Level: Advanced
Tags: [Python], [Summary], [Packages]
Mentors:
- @AyanSinhaMahapatra
- @jyang
- @pombredanne
Related Issue:
Description:
Today, the summary and license clarity scores are computed for the whole scan. Instead, we should compute them for EACH package (and its files). This is possible now that we return which files belong to a package.
- Add license clarity scores to package models, so every package can have these
- Store references to license detection objects for clarity score
- Compute summary and package attributes from their key files or other files:
- primary and other licenses
- copyrights and notices
- license clarity score (extra field in package model)
- authors and other misc info
- Make sure the attributes are collected properly for all package ecosystems (such as copyrights)
This would ensure all package attributes are properly computed and populated from their respective package files, instead of only having a codebase-level summary. A small post-processing sketch of the required per-package grouping follows.
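A small sketch of the grouping this requires, using the for_packages file attribute already present in scan results to collect detected license expressions per package; the per-package clarity score itself would be computed from a similar grouping of each package's key files.

```python
import json
from collections import defaultdict


def licenses_by_package(scan_json_path):
    """
    Post-process a scancode scan: collect detected license expressions per
    package, using the `for_packages` attribute that maps a file to the
    package(s) it belongs to.
    """
    with open(scan_json_path) as f:
        scan = json.load(f)

    grouped = defaultdict(set)
    for resource in scan.get("files", []):
        expression = resource.get("detected_license_expression")
        if not expression:
            continue
        for package_uid in resource.get("for_packages", []):
            grouped[package_uid].add(expression)

    return {uid: sorted(expressions) for uid, expressions in grouped.items()}
```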
ScanCode Toolkit: Support variability in license detection rules
Repository: https://github.com/nexB/scancode-toolkit
Size: Medium
Difficulty Level: Intermediate
Tags: [Python], [Licenses], [LicenseDetection], [SPDX], [Matching]
Mentors:
- @AyanSinhaMahapatra
- @pombredanne
- @jyang
Related Issue:
Description:
There is a lot of variability in license notices and declarations in practice, and one example of modeling this is the SPDX matching guidelines. Note that this was also one of the major ways scancode used to detect licenses earlier.
- Support a grammar for variability in license rules (brackets, number of words); see the toy sketch after this list
- Do a large-scale analysis of license rules and check for similarity and variable sections. This can be used to add variable sections (for copyrights/names/companies) and reduce the number of rules.
- Support variability in license detection post-processing for the extra-words case
- Add scripts to add variable sections to rules from detection issues (like BSD detections)
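A toy sketch of both ideas follows: text normalization in the spirit of the SPDX matching guidelines, and a rule grammar that tolerates a bounded number of extra words in marked variable sections. The [[...]] marker and the word limit are hypothetical choices for this sketch, not scancode's rule syntax.

```python
import re


def normalize_license_text(text):
    """
    Toy normalization in the spirit of the SPDX matching guidelines:
    lowercase, collapse whitespace and map a few equivalent spellings,
    so small wording variations still match the same rule.
    """
    equivalent_words = {"licence": "license", "per cent": "percent"}
    text = re.sub(r"\s+", " ", text.lower()).strip()
    for variant, canonical in equivalent_words.items():
        text = text.replace(variant, canonical)
    return text


def rule_pattern_with_variables(rule_text):
    """
    Turn a rule whose variable sections are marked with a hypothetical
    [[...]] marker into a regex tolerating up to five arbitrary words in
    those positions, e.g. "Copyright [[holder]] All rights reserved"
    matches "Copyright Acme Corp All rights reserved".
    """
    parts = [re.escape(part.strip()) for part in re.split(r"\[\[.*?\]\]", rule_text)]
    return re.compile(r"\s+(?:\S+\s+){0,5}".join(parts), re.IGNORECASE)
```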
ScanCode Toolkit: Mark required phrases in license rules using machine learning
Repository: https://github.com/nexB/scancode-toolkit
Size: Large/Medium
Difficulty Level: Advanced
Tags: [Python], [ML/AI], [Licenses]
Mentors:
- @pombredanne
- @jyang
- @AyanSinhaMahapatra
Related Issue:
Description:
Required phrases are present in rules to make sure a rule is not matched to a text that does not contain the required phrase, which would be a false-positive detection.
We already mark required phrases automatically based on what is present in other rules and license attributes, but this still leaves a lot of rules without them.
- Research and choose a model pre-trained on code (StarCoder?)
- Use the dataset of current SCTK rules to train a model (see the sketch after this list for turning marked rules into training examples)
- Mark required phrases in licenses automatically with the model
- Test the required-phrase additions, improve and iterate
- Bonus: Create a minimal UI to review rule updates in bulk
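A small sketch of deriving training data from existing rules, assuming required phrases are marked in the rule text with double curly braces (scancode's key-phrase markup) and producing token/label pairs for a token-classification model:

```python
import re

KEY_PHRASE_RE = re.compile(r"\{\{(.*?)\}\}", re.DOTALL)


def rule_to_training_example(rule_text):
    """
    Turn a rule text with marked required phrases into (tokens, labels):
    label 1 for tokens inside a required phrase, 0 for all other tokens.
    """
    tokens, labels = [], []
    position = 0
    for match in KEY_PHRASE_RE.finditer(rule_text):
        for token in rule_text[position:match.start()].split():
            tokens.append(token)
            labels.append(0)
        for token in match.group(1).split():
            tokens.append(token)
            labels.append(1)
        position = match.end()
    for token in rule_text[position:].split():
        tokens.append(token)
        labels.append(0)
    return tokens, labels
```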
Here are some project-related attributes you need to keep in mind while looking into prospective project ideas; see also the guide on finding the right project:
- The repositories/projects are sorted in order of importance (i.e. PurlDB, VulnerableCode and ScanCode.io are the most important ones, followed by all the other projects).
- The project ideas within a project are not sorted by priority.
- This doesn't mean we will always pick a proposal for a higher-priority idea over one for a relatively lower-priority idea, regardless of the merit of the proposals. This is only one selection metric, mostly used to prioritize important projects.
- You can also suggest your own project ideas, or discuss changes/updates/enhancements to the provided ideas, but you need to really know what you are doing here and have plenty of discussions with the maintainers.
There are three project lengths:
- Small (~90 hours)
- Medium (~175 hours)
- Large (~350 hours)
If you are proposing an idea from this ideas list, it should match what is listed here; additionally, please discuss your proposed length and timeline with the mentors.
We have marked our ideas with medium/large, but this is tentative and a best guess only. In a few cases both are used because a project can go either way. Still, most of these are on the larger side: they are large, complex projects, and you are likely underestimating the complexity (and how much we will push to make sure everything is up to our standards) if you are proposing a medium-length project. You must discuss your proposal and the project size with a mentor, as otherwise we cannot consider your proposal fairly.
We will likely only select medium/large project ideas, as small projects are too small to get familiar with and contribute meaningfully to any of our projects.
Please also note that the stipend differs based on the project size you select.
Here are all the tags we use for specific projects; feel free to search this page using these tags if you only want to look at projects matching a specific technical background.
[Django], [PostgreSQL], [Web], [DataStructures], [Scanning], [Javascript], [UI], [LiveServer], [API], [Metadata], [PackageManagers], [SBOM], [Security], [BinaryAnalysis], [Scraping], [NLP], [Social], [Communication], [Review], [Decentralized/Distributed], [Curation]
We generally use two levels of difficulty to characterize the projects:
- Intermediate
- Advanced
If it is a difficult project, significant domain knowledge is required to tackle it successfully. While this domain knowledge is not a hard prerequisite before you start, you must consult with the mentors/maintainers early, ask a lot of domain-specific questions, and be ready to research and tackle greenfield work if you choose a project in this difficulty category.
Most other intermediate projects do not require this much domain knowledge, and what is needed can easily be acquired while writing your proposal or contributing, if you are familiar with the tech stack used in the project.